Use prepared statement in timer

To ensure my writes to a database are not blocked by another user of the app, I have tried putting the write code in a timer that repeats the operation until it gets a good write or times out. It works fine when I pass the timer a string with an SQL statement, but I have been unable to pass a prepared statement to the timer. I am hoping someone can tell me how to approach this. I am looking for a generic timer object, not a separate one written for each place where I write to the db.

This points to a fundamental flaw in the design. Each session should have its own database connection, and you should let the database manage concurrency. You shouldn’t have timing or write issues this way. Sending transactions to a database and leaving them hanging based on your own timeout is only going to aggravate any database I/O issues.

If you’re doing this because you’re using SQLite, I don’t recommend it for anything but WE apps with small user loads and low write activity. Go with a true multi-user database like Postgres or MySQL and make use of the MultiThreaded property to prevent SQL functions from blocking.

Using a true multi-user database is not an option. This app will be a stand-alone WE app running on an Atom-based Linux box. The program will be reading electric meters on a set schedule. There will be no user interface to the operating system unless the user knows Linux well enough to RDP into it from another computer. I anticipate few users, but there will be heavy write activity whenever the system reads the meters and logs the data. I am concerned that a user may attempt to update the configuration while the meter readings are being logged.

I have read suggestions in the forums that a timer or thread be used to ensure the data is properly written. I have seen other posts that suggest setting SQLite.MultiUser (SQLite WAL) to True will do the same thing. One of Geoff’s early blog posts said he was unable to create a conflict by having multiple users try to save data at the same time.

One worry about using a timer is that if a user interrupts the writing of meter readings and the data logger has to wait in the timer, another meter reading may come in first and things could get really messed up. If a system only has a few meters to read, it is not likely to be a problem, but if 400 or more meters are being read at 4 per second, it could take a couple minutes to write everything.

I would like to hear from someone with experience with the SQLite Write Ahead Logging (WAL) function with an opinion on whether or not WAL would be reliable in this case.

Hi Dean

First let me note that I am not an SQLite expert and that I only ever use it in single-user applications. So I’m going to suggest a change in the structure of your program.

  1. Create an array to hold your writes, something like Dim ps() As SQLitePreparedStatement.
  2. Every time you want to write data to the database you create a prepared statement and append it to that array.
  3. You have a single timer that pulls a single ps from the beginning of the array, processes it to the database, and removes it from the array with ps.Remove 0. The timer should have a loop that processes the array while its Ubound > -1.

This way your writes are queued and there is no conflict when updating the database.
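To make the pattern concrete, here is the queue-and-flush idea sketched in Python’s sqlite3 module. This thread is about Xojo, so treat the names and the deque as stand-ins for the prepared-statement array and the timer’s Action event, not a drop-in:

```python
import sqlite3
from collections import deque

# Queue of pending writes: each entry is (sql, params), the moral
# equivalent of the array of prepared statements described above.
pending = deque()

def queue_write(sql, params):
    pending.append((sql, params))

def flush(db):
    # The "timer" body: pull entries off the front of the queue and
    # execute them one at a time until the queue is empty.
    while pending:
        sql, params = pending.popleft()
        db.execute(sql, params)
    db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (meter TEXT, kwh REAL)")
queue_write("INSERT INTO readings VALUES (?, ?)", ("M1", 12.5))
queue_write("INSERT INTO readings VALUES (?, ?)", ("M2", 7.25))
flush(db)
print(db.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # 2
```

Because only `flush` ever touches the database, all writes are serialized through one code path, which is the whole point of the design.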

Using SQLiteDatabase.MultiUser = True with a unique SQLiteDatabase per Session would be a good start. If any database work is being done at the App level then it too should have a unique SQLiteDatabase and use SQLiteDatabase.MultiUser = True to enable Write-Ahead Logging. Daniel’s suggestion to let the database engine manage concurrency is a good one. It’s likely to do it faster and cleaner than you can in Xojo.
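For what it’s worth, MultiUser = True corresponds to SQLite’s WAL journal mode. A minimal illustration in Python’s sqlite3 module, which makes the underlying PRAGMA visible; the connection-per-session idea translates directly to Xojo:

```python
import sqlite3, tempfile, os

# WAL is a property of the database file, but every connection (one per
# session) should still be opened the same way.
path = os.path.join(tempfile.mkdtemp(), "meters.db")

def open_session_db(path):
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")  # what MultiUser=True enables
    return conn

a = open_session_db(path)  # e.g. the App-level connection
b = open_session_db(path)  # e.g. one Session's connection
a.execute("CREATE TABLE cfg (k TEXT, v TEXT)")
a.commit()
# With WAL, connection b can keep reading while a is writing.
print(a.execute("PRAGMA journal_mode").fetchone()[0])  # wal
```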

To maximize efficiency, use transactions for blocks of writes. SQLiteDatabase.SQLExecute "Begin" before the block of writes and SQLiteDatabase.Commit at the end will do it. Wayne’s suggestion can be used as a means of letting one thread do all the writing. Just be sure to wrap that loop inside of a single transaction to make the writes as efficient as possible.
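The effect of wrapping a block of writes in one transaction is easy to demonstrate. Here is a sketch in Python’s sqlite3 module, illustrative only; the BEGIN/COMMIT pair maps directly onto the SQLExecute and Commit calls above:

```python
import sqlite3, time

# isolation_level=None puts sqlite3 in autocommit mode so we can manage
# the transaction boundaries ourselves, as the post describes.
db = sqlite3.connect(":memory:", isolation_level=None)
db.execute("CREATE TABLE log (meter TEXT, ts REAL, kwh REAL)")

readings = [("M%03d" % i, time.time(), float(i)) for i in range(400)]

# One transaction around the whole block: one journal sync for 400 rows
# instead of one per row.
db.execute("BEGIN")
db.executemany("INSERT INTO log VALUES (?, ?, ?)", readings)
db.execute("COMMIT")
print(db.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 400
```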

So where do those 400 meters get read? Does each read have its own thread? Are they read sequentially in one thread? A high-level overview of the design would help us to better answer your questions. Or you can just run with the feedback you’ve already received.

All that said, SQLite is fast and efficient. If it’s not fast enough then perhaps a Solid State Drive is in order.

Whatever the design and whatever the hardware, it’s a very good idea to thoroughly test it with loads beyond what you expect it to encounter. Hopefully that will expose any weaknesses. It’s better that you discover and address them before deployment. Otherwise you’ll not only have to fix them but also deal with some potentially unhappy users and colleagues.

The meters are all connected to an RS485 network, which can only read one meter at a time. While multiple RS485 networks would likely be used in very large systems, there is no advantage to trying to read the different networks in separate threads. We already have a console app that we use with a desktop version of the program; it reads the schedule from the database, then reads and logs all the meters at the appropriate times. I had been planning to continue using that program, but that would mean a second user writing to the SQLite database.

That leaves two viable possibilities: one program writing to one database, or two programs, each writing to a different database - one for configuration data and one for a log of meter readings. The downside to the second choice is that two queries will be needed to get all the info for a particular meter location, but I believe that is outweighed by the benefit: each of the two programs would be writing to a separate database, meaning no possible problem trying to write to a locked database. In addition, it may make removing old records and compacting the database more efficient.
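One note on the “two queries” downside: SQLite’s ATTACH DATABASE lets a single connection join across both files in one statement, so the reading side need not pay that cost. A sketch in Python’s sqlite3 module, with file and column names made up for the example:

```python
import sqlite3, tempfile, os

d = tempfile.mkdtemp()
cfg_path = os.path.join(d, "config.db")   # written by the web app
log_path = os.path.join(d, "log.db")      # written by the console app

cfg = sqlite3.connect(cfg_path)
cfg.execute("CREATE TABLE meters (id TEXT PRIMARY KEY, location TEXT)")
cfg.execute("INSERT INTO meters VALUES ('M1', 'Building A')")
cfg.commit(); cfg.close()

log = sqlite3.connect(log_path)
log.execute("CREATE TABLE readings (meter TEXT, kwh REAL)")
log.execute("INSERT INTO readings VALUES ('M1', 42.0)")
log.commit(); log.close()

# One connection, both files: ATTACH turns the "two queries" into one join.
db = sqlite3.connect(cfg_path)
db.execute("ATTACH DATABASE ? AS log", (log_path,))
row = db.execute(
    "SELECT m.location, r.kwh FROM meters m "
    "JOIN log.readings r ON r.meter = m.id"
).fetchone()
print(row)  # ('Building A', 42.0)
```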

Most meter reading systems will utilize a 4GB compact flash card as the only drive. Speed should not be a problem, and space won’t be a problem for landlords who just want to read the kilowatt-hours once a month and invoice their tenants for the power used. But industrial users who want kWh, volts, watts, amps, and power factor for all three phases logged every minute or two will quickly fill up the drive. It will probably be necessary to purge some of the old data and save a summary. At some point the purged database may need to be VACUUMed to reclaim the space. I don’t have enough experience with long-term usage of SQLite to properly evaluate VACUUM vs AUTO-VACUUM. Disk fragmentation is not a problem with SSD drives, but I don’t know if that rule applies to the data inside a database. If it looks like a VACUUM command should be issued, having a separate log database would be beneficial, because VACUUM needs almost as much free space as the database itself occupies.
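For reference, auto_vacuum is a PRAGMA that must be set before the first table is created (otherwise it only takes effect after a full VACUUM), while a manual VACUUM rebuilds the file in a temporary copy, which is why it needs roughly the database’s size in free space. A small sketch in Python’s sqlite3 module showing both:

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "log.db")
# autocommit mode: VACUUM cannot run inside an open transaction.
db = sqlite3.connect(path, isolation_level=None)

# auto_vacuum only takes effect if set before the first table exists.
db.execute("PRAGMA auto_vacuum=FULL")
db.execute("CREATE TABLE readings (meter TEXT, kwh REAL)")
db.executemany("INSERT INTO readings VALUES (?, ?)",
               [("M1", float(i)) for i in range(1000)])

db.execute("DELETE FROM readings")  # purge old data
db.execute("VACUUM")                # manual rebuild; needs ~DB-size free space
print(db.execute("PRAGMA auto_vacuum").fetchone()[0])  # 1 = FULL
```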

Another possible benefit to the two program/two database scenario is that if a user were to somehow cause a crash of the web app, the console app should continue reading the meters on schedule (as long as the whole computer didn’t crash).

SQLite should be able to handle that with no problems. Both programs writing to the same database isn’t an issue. You don’t need a timer for this.

Agreed. Let SQLite manage concurrency. SQLite can block and cause delays with a higher number of concurrent writes. But this should not be a problem. The Timer approach, as originally described, is complicated, error prone, and at best will only create delays where there should be none.

Serializing writes as Wayne suggested is an option in some situations, but probably not necessary here.

Dean, I concur with Tim. SQLite can handle way more concurrency than you might imagine IF you design your tables and transactions with care. Additionally, it will prove a whole lot more debuggable for you in a Web Edition application context than using a database plugin for an external server. Perhaps the biggest problem with Xojo and Real Studio with external databases is client-side lockups caused by dropped connections and server blocking. It’s difficult to handle these gracefully without impacting the rest of your app. That’s especially true when your app needs to stay responsive to multiple requests, as Web Edition apps must.


Multiple threads all sharing one db connection is a recipe for problems.
Don’t do it.

Thank you everyone for the input. With a better understanding of how it works, I will have a better product.