Why is SQLiteDatabase 14 times slower than REALSQLDatabase?

Today based on help I got for this crash report, I switched out a deprecated REALSQLDatabase object for the “new” SQLiteDatabase object. Here’s a paraphrase of the client’s response to using the new version:

[quote]1. the journal file is constantly being deleted and recreated. I don’t remember seeing that ever happen in the old version.
2. in the old version creating the database took 1 second, and now it takes 14 seconds.[/quote]

The app does massive calculations and creates very large databases requiring days to process. The example 1 second DB creation cited above is a benchmark test we made for the app. The old benchmark was 1 second, and with the new DB object that is now 14 seconds. Obviously, this is unacceptable.

I understand that when an object gets deprecated and replaced by something new, the code has to follow suit. Even if all the functionality of the old object is retained and the old code still runs, that doesn’t mean it shouldn’t be rewritten. So the question here is: how does the old code need to be rewritten for the new object? Are the journal file observation and the slow performance enough clues to go on?

Can you post an example project so we can see the code and test?

Not easily. What I can do is describe the way I was working before (which was a while ago)…

It looks like I was doing most DB transactions using Recordset objects, which worked fine back then. I’m guessing this is now sub-optimal?

P.S. It is past midnight here and I have to sleep now; I will return tomorrow morning. Thanks for any help in the meantime …

You can, you just need to explicitly begin a transaction (BEGIN TRANSACTION) using the SQLExecute command and then do the subsequent Commit/Rollback. Without starting the transaction explicitly, it will write the changes to disk after every update.

Check how you are handling transactions.
The Autocommit property does not exist in SQLiteDatabase so you need to execute a BEGIN TRANSACTION at the start of each transaction.
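In code that looks something like this (an untested sketch, assuming a connected SQLiteDatabase named db and a made-up table called Results):

[code]
' Group many writes into one transaction instead of one implicit transaction per statement.
db.SQLExecute("BEGIN TRANSACTION")

For i As Integer = 1 To 10000
  db.SQLExecute("INSERT INTO Results (RunID, Value) VALUES (" + Str(i) + ", " + Str(i * 2) + ")")
  If db.Error Then
    db.Rollback   ' undo everything since BEGIN TRANSACTION
    Exit For
  End If
Next

If Not db.Error Then
  db.Commit       ' one disk sync for the whole batch
End If
[/code]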

Did you switch to SQLiteDatabase yet?
That’s a much newer code base.

And have you also set the cache size, so that big parts of the database are kept in the memory cache?

I made a little blog article: Move from REALSQLdatabase to SQLiteDatabase

[quote=424602:@Kevin Gale]Check how you are handling transactions.
The Autocommit property does not exist in SQLiteDatabase so you need to execute a BEGIN TRANSACTION at the start of each transaction.[/quote]
The point here is that every write to the database (such as an UPDATE) will, unless you group writes together using BEGIN TRANSACTION and COMMIT, be its own transaction, which could slow things down.

Personally I never modify recordset objects, I just read from them; to update I always use UPDATE.

OP: do you have appropriate indexes on your database?
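For example (untested; the table, column and index names are just made up for illustration):

[code]
' Update rows with a SQL statement instead of editing a RecordSet.
db.SQLExecute("BEGIN TRANSACTION")
db.SQLExecute("UPDATE Results SET Value = Value * 2 WHERE RunID = 42")
db.Commit

' An index on the column used in the WHERE clause can make a big difference.
db.SQLExecute("CREATE INDEX IF NOT EXISTS idx_results_runid ON Results (RunID)")
[/code]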

[quote=424655:@Tim Streater]Personally I never modify recordset objects, I just read from them; to update I always use UPDATE.
[/quote]
+1
but with heavy complex calculations that’s not always possible.

Open your SQLite database in RAM rather than on disk for extra speed.
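If I remember right, you get an in-memory SQLiteDatabase in Xojo by simply not setting DatabaseFile before connecting. Roughly (untested):

[code]
' Not setting DatabaseFile should give an in-memory database on Connect.
Dim db As New SQLiteDatabase
If db.Connect Then
  db.SQLExecute("CREATE TABLE Results (RunID INTEGER, Value DOUBLE)")
Else
  MsgBox("Connect failed: " + db.ErrorMessage)
End If
[/code]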

The size of the database exceeds available RAM, so I don’t think we can do that, but thanks for the suggestion.

[quote=424655:@Tim Streater]The point here is that every write to the database (such as an UPDATE) will, unless you group writes together using BEGIN TRANSACTION and COMMIT, be its own transaction, which could slow things down.

Personally I never modify recordset objects, I just read from them; to update I always use UPDATE.

OP: do you have appropriate indexes on your database?[/quote]

Thank you all for pointing this out. A quick search tells me there are no “BEGIN TRANSACTION” commands in the old code, so I see this is the first thing I need to do. I can see also it won’t be easy because I will have to trace my steps backward through a maze of method calls from every commit. I was using many-to-one logic from begin-to-commit.

Great, that’s also helpful. The larger cache size should certainly be better. I can’t imagine I’m the only person to find myself in this situation, replacing the old object with the new one, so it’s good for everyone to have this information.

see also
https://www.mbsplugins.de/archive/2017-10-16/Big_SQLite_Cache_for_64-bit_in/monkeybreadsoftware_blog_newest

We have an app where we open several big SQLite databases and we give each a 2 GB cache.
It really makes a huge performance improvement.

[quote=424676:@Christian Schmitz]see also
https://www.mbsplugins.de/archive/2017-10-16/Big_SQLite_Cache_for_64-bit_in/monkeybreadsoftware_blog_newest

We have an app where we open several big SQLite databases and we give each a 2 GB cache.
It really makes a huge performance improvement.[/quote]

Good to know. I think the bigger cache will be even better for this project because the DB becomes enormous.

A 2 GB cache is only practical for 64-bit apps.
If you have enough memory, the cache could even be set higher than the database size.
Only the parts actually used are allocated, so the cache doesn’t grow beyond the database size.
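Setting it is just a pragma, e.g. (untested; a negative cache_size value is interpreted by SQLite as kibibytes rather than pages):

[code]
' Roughly a 2 GB page cache. Negative values are kibibytes; positive values are page counts.
db.SQLExecute("PRAGMA cache_size = -2000000")
[/code]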

Maybe I should ask this in a new thread, but as it is related to the above: it is not clear to me from the docs what the situation is with DB.SQLExecute(“BEGIN TRANSACTION”) and DB.Commit when

DB.InsertRecord

is used. Do I need to use BEGIN TRANSACTION or not? The code on the doc page for InsertRecord does not use it, which seems inconsistent with the main text about the object.

Yes, it’s BEGIN TRANSACTION. If you don’t do DB.SQLExecute(“BEGIN TRANSACTION”), SQLite wraps each statement in its own transaction automatically. If you start one explicitly, you can do the InsertRecord calls and finish with a COMMIT.

I’ve seen it said that after you insert a few hundred records, do a db.commit so the journal file doesn’t get too big.
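Something along these lines (an untested sketch; the table and column names are made up):

[code]
' Batch InsertRecord calls and commit every few hundred rows so the journal stays small.
Dim rec As DatabaseRecord
db.SQLExecute("BEGIN TRANSACTION")

For i As Integer = 1 To 100000
  rec = New DatabaseRecord
  rec.IntegerColumn("RunID") = i
  rec.DoubleColumn("Value") = i * 0.5
  db.InsertRecord("Results", rec)

  If i Mod 500 = 0 Then
    db.Commit                           ' flush this batch to disk
    db.SQLExecute("BEGIN TRANSACTION")  ' start the next batch
  End If
Next

db.Commit
[/code]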

There’s good documentation available at http://www.sqlite.org. There are helpful people, too, on the SQLite Users’ mailing list - you can join here http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users (always assuming you’re not already on it). :)

You may wish to see whether using WAL mode for the database helps. But for these detailed SQLite questions I’d recommend the mailing list. Several of the developers hang out there and are quick to respond.
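Turning on WAL is also just a pragma (untested):

[code]
' Write-ahead logging often helps when there are many small writes,
' and readers no longer block writers.
db.SQLExecute("PRAGMA journal_mode = WAL")
[/code]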

[quote=424690:@Tim Streater]Yes, it’s BEGIN TRANSACTION. If you don’t do DB.SQLExecute(“BEGIN TRANSACTION”), SQLite wraps each statement in its own transaction automatically. If you start one explicitly, you can do the InsertRecord calls and finish with a COMMIT.

I’ve seen it said that after you insert a few hundred records, do a db.commit so the journal file doesn’t get too big.

There’s good documentation available at http://www.sqlite.org. There are helpful people, too, on the SQLite Users’ mailing list - you can join here http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users (always assuming you’re not already on it). :)

You may wish to see whether using WAL mode for the database helps. But for these detailed SQLite questions I’d recommend the mailing list. Several of the developers hang out there and are quick to respond.[/quote]

Okay, thank you for the help, Tim.

@Aaron Hunt — Also noteworthy is that you are limited in the number of transactions per second (a few dozen per second on a 7200 rpm hard drive). Here is an excerpt from the SQLite FAQ:

[quote]INSERT is really slow - I can only do few dozen INSERTs per second

Actually, SQLite will easily do 50,000 or more INSERT statements per second on an average desktop computer. But it will only do a few dozen transactions per second. Transaction speed is limited by the rotational speed of your disk drive. A transaction normally requires two complete rotations of the disk platter, which on a 7200RPM disk drive limits you to about 60 transactions per second.
Transaction speed is limited by disk drive speed because (by default) SQLite actually waits until the data really is safely stored on the disk surface before the transaction is complete. That way, if you suddenly lose power or if your OS crashes, your data is still safe. For details, read about atomic commit in SQLite…

By default, each INSERT statement is its own transaction. But if you surround multiple INSERT statements with BEGIN…COMMIT then all the inserts are grouped into a single transaction. The time needed to commit the transaction is amortized over all the enclosed insert statements and so the time per insert statement is greatly reduced.

Another option is to run PRAGMA synchronous=OFF. This command will cause SQLite to not wait on data to reach the disk surface, which will make write operations appear to be much faster. But if you lose power in the middle of a transaction, your database file might go corrupt.[/quote]
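In Xojo terms that last option would be something like this (untested, and risky, as the FAQ warns):

[code]
' Don't wait for data to reach the disk surface. Writes get much faster, but the
' database can be corrupted by a power failure mid-transaction, so only use this
' for data you can regenerate.
db.SQLExecute("PRAGMA synchronous = OFF")
[/code]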

Thank you, that is very helpful information.

@Aaron Hunt , I’m curious to know what difference these suggestions make for you. I hope you’ll post your results.