SQLite optimization question

Any SQLite experts out there?

I have a process that opens an SQLite database in single-user mode and does massive updating depending on the data content. This process takes between 8 and 10 hours to complete. Bear in mind that the process involves UPDATE, INSERT and DELETE operations on multiple tables.

I then simulated the same process with the SQLite database entirely in memory (not connected to an actual database file). Now the process takes 10-15 minutes. Obviously the difference is down to disk access.

So, does anyone know a way to: a) read an SQLite database file into memory, b) do my calculations there, and c) write the database back to the same file? (Remember, this is single-user mode.)

Any ideas will be greatly appreciated.

Are you doing it inside a transaction? If not, do so, and it should be pretty close to the in-memory performance, i.e.

db.SQLExecute "BEGIN"
// ... run the massive UPDATE/INSERT/DELETE work here ...
db.SQLExecute "COMMIT"

This causes SQLite to do much less disk activity, at least until the COMMIT. Without an explicit transaction, SQLite wraps each statement in its own implicit transaction and syncs it to disk, which is what makes the per-statement disk access so expensive.
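
A slightly fuller sketch of that pattern with a rollback on failure, assuming the classic Xojo API; DoMassiveWork is a placeholder for your actual processing:

db.SQLExecute "BEGIN"
DoMassiveWork(db)    // your UPDATE/INSERT/DELETE statements; check db.Error after each one in real code
If db.Error Then
  db.Rollback        // undo the whole batch on failure
Else
  db.Commit          // one sync to disk for everything
End If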

create the in-memory db
attach the one on disk
select the data from the on-disk db into the in-memory one
do all the manipulations in memory
clean out the on-disk db tables
select all the data from the in-memory db back into the on-disk ones
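
A minimal sketch of those steps, assuming the classic Xojo API; the file name mydata.sqlite and the table Accounts are placeholders, and note that CREATE TABLE ... AS SELECT copies the data but not indexes or constraints:

Dim memDB As New SQLiteDatabase
If Not memDB.Connect Then      // with no DatabaseFile set, Connect creates an in-memory db
  // handle the error and bail out
End If

Dim diskFile As FolderItem = GetFolderItem("mydata.sqlite")
If Not memDB.AttachDatabase(diskFile, "disk") Then
  // handle the error and bail out
End If

// copy the data into memory (repeat per table)
memDB.SQLExecute("CREATE TABLE Accounts AS SELECT * FROM disk.Accounts")

// ... do all the heavy UPDATE/INSERT/DELETE work against memDB here ...

// push the results back to disk in one transaction
memDB.SQLExecute("BEGIN TRANSACTION")
memDB.SQLExecute("DELETE FROM disk.Accounts")
memDB.SQLExecute("INSERT INTO disk.Accounts SELECT * FROM Accounts")
memDB.SQLExecute("COMMIT")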

Also, this may be a case where a spinning hard drive is 100x slower than an SSD; SSDs are excellent at lots of tiny reads and writes compared to hard drives.

Wow, these seem great suggestions. However, some questions:

Jeremy: I forgot to mention that within my calculations I also perform transactions that, in some exceptional cases, can be rolled back. Can I do those within the big transaction you suggest (a transaction within a transaction)?
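
(For what it's worth: SQLite itself rejects a nested BEGIN, but savepoints give the same effect inside an open transaction. A minimal sketch, where somethingWentWrong stands in for your own error check:)

db.SQLExecute "BEGIN"                  // the big outer transaction
db.SQLExecute "SAVEPOINT step1"        // a nestable inner "transaction"
// ... work that may need to be undone ...
If somethingWentWrong Then
  db.SQLExecute "ROLLBACK TO step1"    // undo only the inner work
Else
  db.SQLExecute "RELEASE step1"        // keep the inner work in the outer transaction
End If
db.SQLExecute "COMMIT"                 // commit whatever survived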

Norman: I had thought about something like that, but I am worried that the total time would end up similar. The cleaning out and the writing back could also add up to a lot of time, right?

Michael: I cannot control the kind of disk. In fact, in some cases there will be hard drives, and in others SSDs.

I think under these circumstances, Norman’s suggestion is best. Run the final copy back to disk in a single transaction.

I’d doubt it, unless we’re talking enormous amounts of data.
That said, I would be curious what you’re doing that takes 8-10 hours.
I’d bet we could speed that up right off the bat if we could review how you do what you do, but I’d understand if you don’t want to share.
Even talking in general terms about it could help find some speed-ups.

Instead of DELETE FROM table followed by VACUUM, you could probably delete the whole db file and recreate the structure on disk if that turns out to be an issue. Heck, just dropping the tables might work.

Is there a way in Xojo to find out the SQL statement for CREATE TABLE, with the structure, before doing a DROP TABLE as you mention above?

If you are using SQLite, you can retrieve the CREATE TABLE statement like this:

Select sql From sqlite_master Where tbl_name = 'Accounts'
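
A minimal sketch of using that to drop and rebuild every user table, assuming the classic Xojo API (indexes and triggers also live in sqlite_master, under other type values, and would need the same treatment):

Dim names() As String
Dim creates() As String
Dim rs As RecordSet = db.SQLSelect("SELECT name, sql FROM sqlite_master WHERE type = 'table' AND name NOT LIKE 'sqlite_%'")
While Not rs.EOF
  names.Append(rs.Field("name").StringValue)
  creates.Append(rs.Field("sql").StringValue)
  rs.MoveNext
Wend

Dim i As Integer
For i = 0 To names.Ubound
  db.SQLExecute("DROP TABLE " + names(i))   // drop each user table
Next
For i = 0 To creates.Ubound
  db.SQLExecute(creates(i))                 // re-run the saved CREATE TABLE statement
Next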

Exactly what I want… so cool.

What about memory? How big can the database get before you run out of memory?

Per-process memory limits apply: the in-memory database can only grow as large as your process is allowed to allocate.