SQLite Speed Question

Hi everyone,

I need to add a lot of records to a SQLite database. Can you tell me what's faster: building one long transaction string with multiple INSERT statements, or the more SQL-injection-safe way via a SQLitePreparedStatement for each record? Any pros and cons?


The first (one long transaction string) would be faster.

The second (prepared statements) would be safer, especially if you don't know/trust the source of the data.

If you have control of the data and know it's "clean/safe", the first method will be much faster.

and you can do it all with a single insert statement

INSERT INTO mytable (f1,f2,f3) VALUES(1,2,3),(4,5,6),(7,8,9)
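For illustration, here's a minimal sketch of that multi-row form using Python's built-in sqlite3 module (the thread's code is Xojo; the table and values just mirror the statement above). Note that the multi-row VALUES syntax requires SQLite 3.7.11 or later.

```python
import sqlite3

# In-memory database for illustration; table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (f1 INTEGER, f2 INTEGER, f3 INTEGER)")

# One statement inserting three rows at once (SQLite >= 3.7.11).
conn.execute("INSERT INTO mytable (f1, f2, f3) VALUES (1,2,3), (4,5,6), (7,8,9)")
conn.commit()

row_count = conn.execute("SELECT COUNT(*) FROM mytable").fetchone()[0]
```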

I believe prepared statements are faster, but it will sort of depend on what you mean by “a lot of records”, and how many different tables you are inserting into.

If you are inserting a bunch of records into a single table, then prepared statements will be faster than running a bunch of separate insert statements.
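As a sketch of what "prepared once, executed many times" looks like, Python's sqlite3 exposes it as executemany (the statement is compiled once and then run for each parameter tuple); the equivalent in Xojo would be reusing one prepared statement with fresh bindings.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (f1 INTEGER, f2 INTEGER)")

rows = [(i, i * 2) for i in range(1000)]

# The INSERT is prepared once and executed for every tuple in `rows`.
conn.executemany("INSERT INTO mytable (f1, f2) VALUES (?, ?)", rows)
conn.commit()

inserted = conn.execute("SELECT COUNT(*) FROM mytable").fetchone()[0]
```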

I may be confusing the syntax in my head with MySQL, but can't you also do a single insert statement that inserts a whole bunch of rows at a time? Oh yeah, found it here: http://www.sqlitetutorial.net/sqlite-insert/ Scroll down to "Inserting multiple rows into a table". I'm under the impression that doing mass inserts this way is even faster than prepared statements.

As usual, Dave beat me to it, and in a more concise and precise way. :slight_smile:

To clarify: the database has around 20 tables. Each object I would like to transfer to the database has multiple properties of various data types (e.g. Text, Integer). Until now I created one long SQL execute string made up of multiple INSERT statements.

The problem is, I think this slows down the database transaction. I use a small method to escape apostrophes inside text properties, like this:

Return ReplaceAll(TempString, "'", "''")
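(For comparison, with bound parameters that escaping step disappears entirely, because the value is never spliced into the SQL text. A sketch in Python's sqlite3, with a hypothetical table:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")

# Text containing an apostrophe; with a bound parameter no manual
# doubling of quotes (the ReplaceAll trick above) is needed.
text = "it's a test"
conn.execute("INSERT INTO notes (body) VALUES (?)", (text,))
conn.commit()

stored = conn.execute("SELECT body FROM notes").fetchone()[0]
```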

I imagine this will slow down my program, because I add millions of records to the database. Somebody told me to do it this way, inside a transaction, to make it fast. But with millions of INSERT statements the execute string gets bigger and bigger, and my memory doesn't like it :slight_smile: That's why I'm asking on the forum.

As I wrote above: millions of records and around 20 different tables.

@Eugene Dakin wrote in his SQLite Book:

[quote]Prepared Statements are a different way to write the SQL query to the SQLite database. Some specialists claim that these statements are faster and some code execution is slower. After finding no real concrete answers, my guess is that the speed is very similar to a raw SQL statement. The main reason for using a Prepared Statement is security. The type of security exposure is called an SQL injection attack.
The Prepared Statement is also called a Compiled SQL Statement.[/quote]
Looks like he doesn't know the right way either.

There is no “right” way… just “wrong” ways.

First off, you don't need ONE massive insert statement… break it into manageable blocks.
The problem I see with "prepared statements" in this regard is that you have the overhead of preparing the statement, binding the values, etc.
The other option is to use a 3rd-party tool if your data is in a CSV or similar file.
My Tadpole app can do it… it's quick, but not super fast, especially when millions of records are involved; any other tool might suffer from that same issue.

And Eugene's statements are correct: once the SQL is "prepared" and "sanitized", it then executes just like any other SQL statement. At some level the code has to interact with the physical structure of the database engine.

In short: rather than 1 giant string that contains millions of discrete SQL statements, try lots of shorter strings that each contain 1 insert statement inserting something like 10,000 rows at a time.

If your table is called Foo, and has 2 columns: Bar, and Baz, you could have your statements set up like this:

“insert into Foo(Bar, Baz) values (‘asdf’, ‘ghjk’), (‘qwer’, ‘yuio’), (‘zxcv’,‘vbnm’)… (roughly 10,000 more pairs of values later)… (‘yhgt’, ‘skjf’);”

Doing your inserts in batches like this will be FAR faster than either prepared statements or single insert statements run one at a time.
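A rough sketch of that chunking idea in Python's sqlite3 (table name Foo as in the example above). One caveat: this sketch builds the batch with bound parameters rather than literal SQL, so the batch size is kept small enough to stay under SQLite's bound-parameter limit (999 in older builds); a literal string like the one above has no such limit.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Foo (Bar TEXT, Baz TEXT)")

rows = [("r%d" % i, "s%d" % i) for i in range(1000)]
BATCH = 400  # 400 rows x 2 columns = 800 bound parameters, under the 999 limit

# Build one multi-row INSERT per batch instead of one giant string.
for start in range(0, len(rows), BATCH):
    batch = rows[start:start + BATCH]
    placeholders = ",".join(["(?,?)"] * len(batch))
    flat = [value for row in batch for value in row]
    conn.execute("INSERT INTO Foo (Bar, Baz) VALUES " + placeholders, flat)
conn.commit()

total = conn.execute("SELECT COUNT(*) FROM Foo").fetchone()[0]
```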

My best speeds were with around 1000 insert statements in one query; more seemed to slow down the process.

Thanks for all the help.
Feedback on the topic: it looks like I'll have to work my way up to the right solution for me. I will take up the proposal to execute after every 1000 statements and empty the SQLExecute variable at the same time. That should also help with memory. Then my own replace method for escaping text properties (apostrophes) should no longer slow down the application.
If anyone has any further suggestions and tips on this topic, please feel free to post them in this thread.

To speed things up even more, make sure you do it within a transaction. I did some benchmarks a while back, and just doing that one thing makes a night-and-day difference when inserting data. Instead of writing to the db on each insert, it won't write until you commit. Only if time is still a concern after that would I look at which insert method is faster.
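A minimal sketch of the explicit-transaction pattern in Python's sqlite3 (in-memory here just so it runs standalone; with a file-backed database, autocommit mode pays a journal sync per INSERT, which is where the night-and-day difference comes from):

```python
import sqlite3

# isolation_level=None puts the driver in autocommit mode, so the
# explicit BEGIN/COMMIT below are the only transaction boundaries.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN")
for i in range(10000):
    conn.execute("INSERT INTO t (x) VALUES (?)", (i,))
conn.execute("COMMIT")  # all 10,000 rows are written to disk at once here

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```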

Premature optimization is a rabbit hole that we go down too quickly sometimes - especially when there are very simple ways to speed things up.