CubeSQL locking database

Also, since we’re on the subject.

Does anyone know the difference between a “DB locked” error and SQLite error 5 (database busy)?

Thanks
R

Despite cubeSQL being a multi-user database, it still has the same constraints as SQLite. If you open a transaction on one table for writing, then the entire table is locked and cannot be written to by another connection.
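For anyone curious, this is easy to reproduce with plain SQLite, and it also answers the error-code question above: result code 5 is SQLITE_BUSY (a conflict with *another* connection, reported by most wrappers as “database is locked”), while code 6 is SQLITE_LOCKED (a conflict inside the *same* connection). A minimal sketch in Python using the standard sqlite3 module against a throwaway database file (not cubeSQL):

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None means autocommit; we issue BEGIN/COMMIT ourselves.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t (x INTEGER)")

# Open a write transaction and leave it uncommitted: SQLite now holds
# a write lock on the whole file, not on a single table.
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO t VALUES (1)")

# A second connection with no busy timeout fails immediately with
# SQLITE_BUSY (result code 5), surfaced by Python as
# sqlite3.OperationalError: database is locked.
other = sqlite3.connect(path, isolation_level=None, timeout=0)
try:
    other.execute("INSERT INTO t VALUES (2)")
except sqlite3.OperationalError as e:
    print("second writer got:", e)

writer.execute("COMMIT")                    # releasing the lock...
other.execute("INSERT INTO t VALUES (2)")   # ...lets the second writer in
```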

Actually, now that I think of it, my recent experience of a locked DB was not with cubeSQL. It was with SQLite itself, and it was because of a transaction I failed to commit in a method. I still have not had a locked cubeSQL DB file since instituting AutoCommit=True.
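For what it’s worth, one pattern that makes the forgotten-commit mistake hard to repeat is letting a context manager own the transaction. A sketch with plain sqlite3 in Python (the accounts table is just an illustration, not from this thread):

```python
import sqlite3

con = sqlite3.connect("app.db")

# Used as a context manager, the connection commits on success and
# rolls back on an exception, so a transaction can't be left dangling
# the way a forgotten .commit() in one code path leaves the file locked.
with con:
    con.execute("UPDATE accounts SET balance = balance - 100 WHERE id = ?", (1,))
    con.execute("UPDATE accounts SET balance = balance + 100 WHERE id = ?", (2,))

# Note: "with con" manages only the transaction; it does not close the
# connection itself.
```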

Not sure if that “an error occurred while executing sock_read” error occurred with cubeSQL either. It might also have been with SQLite during some testing of new code.

Isn’t it the entire file that’s locked, not just the table?

Yeah I dunno why I said table. I’m up late :slight_smile:

Suffice it to say, cubeSQL is great for many use cases, but if multiple simultaneous and rapid writes are necessary it is not ideal. Other database servers can lock single rows, which can be advantageous.
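On the SQLite side there are at least two knobs that soften the single-writer bottleneck, though neither gives you row-level locks the way a server like PostgreSQL does with SELECT ... FOR UPDATE. A hedged sketch, again with plain sqlite3:

```python
import sqlite3

# A busy timeout makes SQLite retry internally for up to 5 seconds
# instead of failing with SQLITE_BUSY the instant another writer
# holds the lock.
con = sqlite3.connect("app.db", timeout=5.0)

# WAL journal mode lets readers keep reading while one writer writes;
# it still allows only one writer at a time, so rapid concurrent
# writes continue to serialize.
con.execute("PRAGMA journal_mode=WAL")
```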

I added some code to my desktop program just the other day and indeed I also forgot to close a transaction… it drove me crazy for a little bit until I remembered this thread :slight_smile:

[quote=371780:@Roman Varas]
Lastly, maybe you guys have an idea… I have seldom seen the DB report error 830 (an error occurred while executing sock_read). What could be causing this? Ideas? R.[/quote]

Marco did reply back to me promptly and had this to say about the sock_read error:

[quote]error 830 is a client-side related error (not generated by the server, and the server is not affected by this error).
It usually means that the client was disconnected, probably due to a timeout or a network loss (or your computer went to sleep mode and interrupted all network operations).[/quote]
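If you want the client to recover on its own, a generic reconnect-and-retry wrapper is one option. This is only a sketch: connect_to_server, run_query, and ConnectionLost below are placeholders for whatever your cubeSQL client layer actually provides, not real cubeSQL API names:

```python
import time

class ConnectionLost(Exception):
    """Placeholder for the client-side error your driver raises (e.g. 830)."""

def with_reconnect(connect_to_server, run_query, retries=3, delay=2.0):
    """Run a query, reconnecting once per attempt if the socket drops."""
    con = connect_to_server()
    for attempt in range(retries):
        try:
            return run_query(con)
        except ConnectionLost:
            # Per Marco's note, error 830 means the socket died on the
            # client side (timeout, network loss, machine sleep); a
            # fresh connection usually recovers.
            time.sleep(delay)
            con = connect_to_server()
    raise ConnectionLost(f"gave up after {retries} attempts")
```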

It happens so rarely, both on my work intranet and at home, that I have no idea where to start looking for the problem :confused:

Thanks for sharing the info, Brian.