For sharing a SQLite database in a client/server setup it’s possible to use NodeJS + SQLite3. I’ve been experimenting with this because I’ve become a bit of an enthusiast for RPC servers.
Instead of sending each individual SQL statement to the server and waiting for and checking the response before sending the next, you can send a JSON object that represents the data you want inserted or updated and handle all the database work server side. For example, if you were writing an accounting package and had a routine to create an invoice, your client program could create a JSON object containing all the data you’d expect to find on an invoice: the date, customer id, delivery address, an array of items and their unit prices, and so on, and then send that to the server. The server validates the JSON object, extracts the data from it, runs all the necessary SQL commands to write the data to the database and then returns a JSON object to the client confirming the routine’s success or failure.
Out of curiosity I tried creating an equivalent of these benchmarks using NodeJS + SQLite3 and the methodology I’ve described above:
[quote=201396:@Ruslan Zasukhin]
We have spend some time making simple Xojo project which do N inserts into:
a) Valentina SQLite LOCAL db using our copy in V4RB;
b) Valentina Sqlite SERVER;
c) cubeSQL Sqlite SERVER;
It has simplest Table T1 with 2 integer fields f1 and f2. No indexes during inserts.
Results for 100K records INSERTS, for LOCALHOST servers (Macbook, i7, 8GB):
LOCAL SQLite – about 0.5 - 1 second on different runs.
Valentina Sqlite Server – 6 seconds.
cubeSQL Server – about 65-70 seconds.
Results for 100K SELECTs, each SELECT finds 1 record.
LOCAL SQLite – about 3-4 seconds
Valentina Sqlite Server – about 23 seconds.
cubeSQL Server – about 60 seconds.[/quote]
For both of these benchmarks NodeJS+SQLite3 delivered in < 4 seconds (Win 7 64bit, AMD 3.4GHz, 12GB).
Now, I confess that in this scenario most of the saving is likely due to the massive reduction in network round trips, but I think it’s interesting nonetheless.
For clarity, my implementation of the insert benchmark creates a JSON array of arrays of the 100,000 items to be inserted ([ [1,0],[2,1],[3,2],… ]) on the client and sends that to the ‘insertBench’ method of the server; the server performs the 100,000 inserts and sends an acknowledgement to the client that the job is done. The clock starts before the array is created and stops when the response is received.
Likewise with the select benchmark: the clock starts, the client creates an array containing 100,000 record ids and sends those to the ‘selectBench’ method of the server; the server performs the 100,000 selects by looping over the array, finding one row for each element, then sends the whole lot back to the client in one shot, and the clock is stopped on receipt.
Of course you could argue that it’s an apples-and-oranges comparison (and there’s some truth in that), but I tried to remain as faithful as possible to the benchmark without sacrificing one of the big advantages of the methodology I was testing.
Working this way does require properly learning NodeJS (at the moment I’m still consulting the API reference at every step) and doing the server-side programming in JavaScript, but beyond the impressive performance described above a couple of other advantages spring to mind:
Your front end can be anything that can send JSON over HTTP, be it a Xojo desktop app, an iOS app or even a standard web page.
NodeJS + SQLite3 are free and open source.