Why is SQLiteDatabase 14 times slower than REALSQLDatabase?


    Kem T

    Feb 11 Pre-Release Testers, Xojo Pro, XDC Speakers New York

    Can you post an example project so we can see the code and test?

  3. Aaron H

    Feb 11 Pre-Release Testers Europe (Germany)
    Edited 5 weeks ago

    @Kem T Can you post an example project so we can see the code and test?

    Not easily. What I can do is describe the way I was working before (which was a while ago)...

    It looks like I was doing most DB transactions using Recordset objects, which worked fine back then. I'm guessing this is now sub-optimal?

    P.S. It is past midnight here and I have to sleep now, will return tomorrow morning. Thanks for any help in the mean time ...

  4. Bob K

    Feb 11 Pre-Release Testers, Xojo Pro Kansas City

    You can, you just need to explicitly START TRANSACTION using the SQLExecute command and then do subsequent Commit/Rollback statements. Without starting the transaction it will write the changes to disk after every update.
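What Bob describes can be sketched at the SQLite level with Python's built-in sqlite3 module (the same engine Xojo's SQLiteDatabase wraps); the table name and row count here are invented for illustration:

```python
import os
import sqlite3
import tempfile

# Hypothetical demo database; "items" is an invented table name.
path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")
db = sqlite3.connect(path, isolation_level=None)  # autocommit mode: no implicit transactions
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT)")

# Without an explicit transaction, each INSERT is committed (synced to disk) on its own.
# Wrapping the loop in BEGIN ... COMMIT pays the disk-sync cost only once.
db.execute("BEGIN TRANSACTION")
for i in range(1000):
    db.execute("INSERT INTO items (value) VALUES (?)", ("row %d" % i,))
db.execute("COMMIT")

count = db.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 1000
```

On failure you would issue ROLLBACK instead of COMMIT, matching the Commit/Rollback pairing Bob mentions.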

  5. Kevin G

    Feb 11 Pre-Release Testers, Xojo Pro Gateshead, England

    Check how you are handling transactions.
    The Autocommit property does not exist in SQLiteDatabase so you need to execute a BEGIN TRANSACTION at the start of each transaction.

  6. Christian S

    Feb 11 Pre-Release Testers, Xojo Pro, XDC Speakers Germany
    Edited 5 weeks ago

    Did you switch to SQLiteDatabase yet?
    That's a much newer code base.

    Have you also set the cache size, to move big parts of the database into the memory cache?
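The cache setting Christian refers to is a SQLite PRAGMA; a minimal sketch with Python's sqlite3 (the value shown is the roughly-2-GB example discussed later in this thread):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# cache_size: a positive value is a page count; a negative value is a size in KiB.
# -2000000 asks SQLite for roughly a 2 GB page cache (sensible only on 64-bit builds).
db.execute("PRAGMA cache_size = -2000000")
size = db.execute("PRAGMA cache_size").fetchone()[0]
print(size)  # -2000000
```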

  7. Christian S

    Feb 11 Pre-Release Testers, Xojo Pro, XDC Speakers Germany

    I made a little blog article: Move from REALSQLdatabase to SQLiteDatabase

  8. Tim S

    Feb 12 Canterbury, UK
    Edited 5 weeks ago

    @Kevin G Check how you are handling transactions.
    The Autocommit property does not exist in SQLiteDatabase so you need to execute a BEGIN TRANSACTION at the start of each transaction.

    The point here is that every write to the database (such as an UPDATE) will, unless you group them together using BEGIN TRANSACTION and COMMIT, be its own transaction, which could slow things down.

    Personally I never modify recordset objects, I just read from them; to update I always use UPDATE.

    OP: do you have appropriate indexes on your database?
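Whether a query actually uses an index can be checked with EXPLAIN QUERY PLAN; a small sketch in Python's sqlite3 (table and index names are invented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

# Without an index on customer, a WHERE on that column forces a full-table scan.
db.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# The plan's detail column now reports a SEARCH using the index instead of a SCAN.
detail = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'Smith'"
).fetchone()[-1]
print(detail)
```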

  9. Jean-Yves P

    Feb 12 Pre-Release Testers, Xojo Pro Europe (France, Besançon)

    @Tim S Personally I never modify recordset objects, I just read from them; to update I always use UPDATE.

    +1
    but with heavy complex calculations that's not always possible.

  10. David C

    Feb 12 Pre-Release Testers, Xojo Pro Derby, ITM

    Open your SQLite database in RAM rather than on disk for extra speed.
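A sketch of the in-RAM approach with Python's sqlite3, using SQLite's online backup API to persist the result to disk afterwards (the table and file path are made up):

```python
import os
import sqlite3
import tempfile

# Do all the heavy writing in an in-memory database ...
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t (x INTEGER)")
mem.execute("INSERT INTO t VALUES (42)")
mem.commit()

# ... then copy it to a disk file in one pass with the online backup API.
path = os.path.join(tempfile.mkdtemp(), "snapshot.sqlite")
disk = sqlite3.connect(path)
mem.backup(disk)

val = disk.execute("SELECT x FROM t").fetchone()[0]
print(val)  # 42
```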

  11. Aaron H

    Feb 12 Pre-Release Testers Europe (Germany)

    @David C Open your SQLite database in RAM rather than on disk for extra speed.

    The size of the database exceeds available RAM, so I don't think we can do that, but thanks for the suggestion.

    @Tim S The point here is that every write to the database (such as an UPDATE) will, unless you group them together using BEGIN TRANSACTION and COMMIT, be its own transaction, which could slow things down.

    Personally I never modify recordset objects, I just read from them; to update I always use UPDATE.

    OP: do you have appropriate indexes on your database?

    Thank you all for pointing this out. A quick search tells me there are no "BEGIN TRANSACTION" commands in the old code, so I can see this is the first thing I need to do. I can also see it won't be easy, because I will have to trace my steps backward through a maze of method calls from every commit. I was using many-to-one logic from begin to commit.

    @ChristianSchmitz I made a little blog article: Move from REALSQLdatabase to SQLiteDatabase

    Great, that's also helpful. The larger cache size should certainly be better. I can't imagine I'm the only person to find myself in this situation, replacing the old object with the new one, so it's good for everyone to have this information.

  12. Christian S

    Feb 12 Pre-Release Testers, Xojo Pro, XDC Speakers Germany

    see also
    https://www.mbsplugins.de/archive/2017-10-16/Big_SQLite_Cache_for_64-bit_in/monkeybreadsoftware_blog_newest

    We have an app where we open several big SQLite databases, and we give each a 2 GB cache.
    It really makes a huge performance improvement.

  13. Aaron H

    Feb 12 Pre-Release Testers Europe (Germany)

    @ChristianSchmitz see also
    https://www.mbsplugins.de/archive/2017-10-16/Big_SQLite_Cache_for_64-bit_in/monkeybreadsoftware_blog_newest

    We have an app where we open several big SQLite databases, and we give each a 2 GB cache.
    It really makes a huge performance improvement.

    Good to know. I think the bigger cache will be even better for this project because the DB becomes enormous.

  14. Christian S

    Feb 12 Pre-Release Testers, Xojo Pro, XDC Speakers Germany

    A 2 GB cache is only practical for 64-bit apps.
    If you have enough memory, the cache could be set higher than the database size.
    Only the parts actually used will be allocated, so the cache doesn't grow beyond the database size.

  15. Aaron H

    Feb 12 Pre-Release Testers Europe (Germany)
    Edited 5 weeks ago

    Maybe I should ask this in a new thread, but as it is related to the above -- it's not clear to me from the docs how DB.SQLExecute("BEGIN TRANSACTION") and DB.Commit behave when

    DB.InsertRecord

    is used. Do I need to use BEGIN TRANSACTION or not? The code on the doc page for InsertRecord does not use it, which is inconsistent with the main text about the object.

  16. Tim S

    Feb 12 Canterbury, UK

    It's BEGIN TRANSACTION. And, if you don't do DB.SQLExecute("BEGIN TRANSACTION") then one is done automatically for you. Then you can do the insert-records and finish with a COMMIT.

    I've seen it said that after you insert a few hundred records, do a db.commit so the journal file doesn't get too big.

    There's good documentation available at http://www.sqlite.org. There are helpful people, too, on the SQLite users' mailing list - you can join here http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users (always assuming you're not already on it). :)

    You may wish to see whether using WAL mode for the database helps. But for these detailed SQLite questions I'd recommend the mailing list. Several of the developers hang out there and are quick to respond.
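WAL mode is switched on with a PRAGMA; a minimal sketch in Python's sqlite3 (WAL requires a file-backed database, so a temporary file is used here):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal_demo.sqlite")
db = sqlite3.connect(path)

# WAL appends changes to a side log instead of rewriting pages in place,
# which usually speeds up writes and lets readers run alongside one writer.
mode = db.execute("PRAGMA journal_mode = WAL").fetchone()[0]
print(mode)  # wal
```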

  17. Aaron H

    Feb 12 Pre-Release Testers Europe (Germany)

    @Tim S It's BEGIN TRANSACTION. And, if you don't do DB.SQLExecute("BEGIN TRANSACTION") then one is done automatically for you. Then you can do the insert-records and finish with a COMMIT.

    I've seen it said that after you insert a few hundred records, do a db.commit so the journal file doesn't get too big.

    There's good documentation available at http://www.sqlite.org. There are helpful people, too, on the SQLite users' mailing list - you can join here http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users (always assuming you're not already on it). :)

    You may wish to see whether using WAL mode for the database helps. But for these detailed SQLite questions I'd recommend the mailing list. Several of the developers hang out there and are quick to respond.

    Okay, thank you for the help, Tim.

  18. Stéphane M

    @Aaron H — Also noteworthy is that you are limited in the number of transactions per second (a few dozen per second on a 7200 rpm hard drive). Here is an excerpt from the SQLite FAQ:

    (19) INSERT is really slow - I can only do a few dozen INSERTs per second

    Actually, SQLite will easily do 50,000 or more INSERT statements per second on an average desktop computer. But it will only do a few dozen transactions per second. Transaction speed is limited by the rotational speed of your disk drive. A transaction normally requires two complete rotations of the disk platter, which on a 7200RPM disk drive limits you to about 60 transactions per second.
    Transaction speed is limited by disk drive speed because (by default) SQLite actually waits until the data really is safely stored on the disk surface before the transaction is complete. That way, if you suddenly lose power or if your OS crashes, your data is still safe. For details, read about atomic commit in SQLite.

    By default, each INSERT statement is its own transaction. But if you surround multiple INSERT statements with BEGIN...COMMIT then all the inserts are grouped into a single transaction. The time needed to commit the transaction is amortized over all the enclosed insert statements and so the time per insert statement is greatly reduced.

    Another option is to run PRAGMA synchronous=OFF. This command will cause SQLite to not wait on data to reach the disk surface, which will make write operations appear to be much faster. But if you lose power in the middle of a transaction, your database file might go corrupt.
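The synchronous trade-off from the FAQ excerpt above is a per-connection PRAGMA; a sketch in Python's sqlite3:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# The default is FULL (2): wait for data to reach the disk surface on every commit.
# OFF (0) skips that wait: much faster writes, but a power loss mid-transaction
# can corrupt the database file, exactly as the FAQ warns.
db.execute("PRAGMA synchronous = OFF")
level = db.execute("PRAGMA synchronous").fetchone()[0]
print(level)  # 0
```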

  19. Aaron H

    Feb 13 Pre-Release Testers Europe (Germany)

    @Stéphane M @Aaron H — Also noteworthy is that you are limited in the number of transactions per second (a few dozen per second on a 7200 rpm hard drive). Here is an excerpt from the SQLite FAQ:

    Thank you, that is very helpful information.

  20. Kem T

    Feb 13 Pre-Release Testers, Xojo Pro, XDC Speakers New York

    @Aaron H , I'm curious to know what difference these suggestions make for you. I hope you'll post your results.

  21. Aaron H

    Feb 13 Pre-Release Testers Europe (Germany)

    @Kem T @Aaron H , I'm curious to know what difference these suggestions make for you. I hope you'll post your results.

    Thanks, I'm as curious as you, and I'll report back if I ever get a chance to do the work. The client seemed all ready to go but is balking now, so the project looks dead again, at the moment. But I expect when properly implemented the new DB object will be much faster than the old one, and will finally allow this app to do what it was designed for. The biggest advantage will be the (virtually) unlimited 64-bit cache size.
