Valentina SQLite Server bench tests

Really? In my experience, updating a progress bar will dramatically slow your code.

Maybe I should send you my test project so you can adapt it and we can compare apples to apples.

How difficult is it to convert from SQLite to PostgreSQL?

I don't use any foreign keys, joins or unions. I do use complex select statements and a lot of prepared statements. The latter would need changing, I assume.

I would be interested in seeing your test. I only added the progress bar after the app seemed to freeze for an extended period. In other tests the progress bar added almost exactly one second for 2095 records.

I just tried the PostgreSQL test with the progress bar: 0.42 seconds instead of 0.33 seconds. The one-second figure was from refreshing a label at every insert.

[quote=305443:@Neil Burkholder]How difficult is it to convert from SQLite to PostgreSQL?

I don’t use any foreign keys, joins or unions. I do use complex select statements and a lot of prepared statements. The latter would need changing I assume.[/quote]

No, in that case it should be just a case of choosing the correct data types to match the ones you are using. If you are on a Mac you can even use https://postgresapp.com rather than installing it.
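To give a feel for the type mapping, here is a rough before/after sketch (the invoice table and its columns are made-up examples, not from the actual project):

[code]
-- SQLite original (hypothetical schema):
--   CREATE TABLE invoice (id INTEGER PRIMARY KEY, total REAL, memo TEXT, created TEXT);

-- Rough PostgreSQL equivalent; adjust to your real data:
CREATE TABLE invoice (
    id      bigserial PRIMARY KEY,    -- SQLite INTEGER PRIMARY KEY (rowid alias)
    total   double precision,         -- SQLite REAL
    memo    text,                     -- SQLite TEXT
    created timestamp DEFAULT now()   -- SQLite often stores dates as TEXT
);
[/code]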

Thanks. I have only Windows clients for this app.

Well, from what I remember the Windows installer covers all the bits and bobs you need. But you will need to download the GUI admin tool separately.

I’ve posted my project here:

https://dl.dropboxusercontent.com/u/26920684/PSQL%20Test.xojo_binary_project

You just have to fill in the appropriate credentials for your DB servers in the appropriate places.

It would be interesting to see everyone's results.

I was able to test 2 of the options here (I currently only have pgsql set up): pgsql over the local network and flat-file sqlite:

pgsql result: 896068

sqlite result: 24672

Both were an average of 6 runs.

Running the pgsql option on the db server itself instead of going over the network (using 'localhost'), the average changed to 301506.

Hi Kem,

So PostgreSQL is better by 100 ms?

I see that nobody has tried VDB yet.

I see that so far only inserts are tested. Why?
Inserts should be similar for most DBs, because the HDD plays the main role in saving N MB.

On my laptop postgres has shown the worst performance, but I also suspect there is something about my installation that’s making it slower generally.

I tried Valentina yesterday but couldn’t get it going, then ran out of time.

PostgreSQL doesn’t use AUTOINCREMENT?

Use bigserial as the primary key; then you get autoincrement.
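A minimal sketch of what that looks like (hypothetical table name):

[code]
CREATE TABLE item (
    id   bigserial PRIMARY KEY,  -- values come from the implicit sequence item_id_seq
    name text
);

-- Leave id out of the INSERT and PostgreSQL assigns it:
INSERT INTO item (name) VALUES ('first');   -- id = 1
INSERT INTO item (name) VALUES ('second');  -- id = 2
[/code]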

Here is a summary of the conversion steps required. Not simple for a project my size: 45 tables, about 8 hours start to finish.

  1. Convert the database - 3 hours to write a conversion utility.
  2. Change all prepared statements to use $1, $2 instead of ?, ? (see the sketch after this list).
  3. Change CURRENT_TIMESTAMP to now().
  4. Fix some data type problems, e.g. round() doesn't work with type real.
  5. Multi-table queries need more of the fields added to the ORDER BY clause.
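To make steps 2-4 concrete, here are before/after sketches (the orders query and its columns are hypothetical):

[code]
-- Step 2: placeholder syntax.
-- SQLite:      SELECT * FROM orders WHERE customer = ? AND total > ?
-- PostgreSQL:  SELECT * FROM orders WHERE customer = $1 AND total > $2

-- Step 3: default timestamps.
-- SQLite:      created TEXT DEFAULT CURRENT_TIMESTAMP
-- PostgreSQL:  created timestamp DEFAULT now()

-- Step 4: round(v, n) is only defined for numeric in PostgreSQL,
-- so real/double precision columns need a cast:
SELECT round(total::numeric, 2) FROM orders;
[/code]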

Be very careful with the data types: SQLite is very lax about them, and Postgres is very strict.
A query can run (and give good results) in SQLite and completely fail in Postgres.
Also, write a method to convert one db to the other; don't do it by hand.
You will likely do it more than once (and for more than one database).

I did create a conversion program first thing. As you said, data types were the biggest hurdle. SQLite would even let you enter 1.5 into an integer field.
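A quick illustration of that lax-vs-strict difference (hypothetical table t):

[code]
CREATE TABLE t (qty INTEGER);

-- SQLite (column affinity only, nothing is enforced):
INSERT INTO t VALUES (1.5);    -- accepted; stored as the REAL value 1.5
INSERT INTO t VALUES ('abc');  -- accepted; stored as TEXT

-- PostgreSQL (strict typing) with the same table:
INSERT INTO t VALUES (1.5);    -- rounds to 2 via the numeric -> integer cast
INSERT INTO t VALUES ('abc');  -- ERROR: invalid input syntax for integer
[/code]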

The obstacle I'm hung up on at the moment is AUTOINCREMENT. I converted using serial as a replacement. The problem is that if an ID is specified when inserting, as in "INSERT INTO table(ID, Test) VALUES((SELECT MAX(ID)+1 FROM table), 'test')", the serial's sequence isn't advanced. So the next time an insert relies on the default, the generated ID is less than the max ID already in the table. I tried "ID INTEGER DEFAULT MAX(ID + 1)" but get an error saying an aggregate can't be used for a default value.

I thought about adding a trigger to advance the serial's next value whenever a record is inserted, but I'm not sure how.
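For what it's worth, such a trigger could look roughly like this in plpgsql (mytable and mytable_id_seq are placeholder names; a serial column id on mytable gets the sequence mytable_id_seq by default):

[code]
CREATE OR REPLACE FUNCTION sync_id_seq() RETURNS trigger AS $$
BEGIN
  -- only move the sequence forward, never back
  IF NEW.id >= (SELECT last_value FROM mytable_id_seq) THEN
    PERFORM setval('mytable_id_seq', NEW.id);  -- next nextval() = NEW.id + 1
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_sync_id
AFTER INSERT ON mytable
FOR EACH ROW EXECUTE PROCEDURE sync_id_seq();
[/code]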

Why do you want to write a value into 'ID' (probably the primary key)? Don't write anything into it and everything will be OK. (That is if your datatype is bigserial; serial should also work, but I have never used it for that purpose.)

Or do I misunderstand the intention?

You must import your tables with the values of your IDs.
Then, after the import process, set each sequence so its next value is max(id) + 1, and the autoincrement process can start on its own (see the sketch below).
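Something like this once per table should do it, assuming the default serial sequence names (mytable/mytable_id_seq are placeholders):

[code]
-- After the bulk import, bump the sequence so nextval() returns max(id) + 1:
SELECT setval('mytable_id_seq', (SELECT COALESCE(MAX(id), 1) FROM mytable));
[/code]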

serial transforms to an int4 with an autoincrement sequence
bigserial gives an int8 with autoincrement
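In other words, declaring a column as serial is roughly shorthand for this (hypothetical table name):

[code]
CREATE SEQUENCE mytable_id_seq;
CREATE TABLE mytable (
    id int4 NOT NULL DEFAULT nextval('mytable_id_seq')
);
ALTER SEQUENCE mytable_id_seq OWNED BY mytable.id;
-- bigserial is the same pattern with int8
[/code]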