SQLite error "database disk image is malformed"

Hi, I just installed a newer version of my program, and now one of my users is getting an occasional error: “database disk image is malformed”. I’ve googled this error and found useful information about how to restore corrupted databases.

What makes my problem unique is that an integrity check returns ok. Also, everything continues to work properly. The errors are unnerving, though. I’ve noticed that half the time the error logging mechanism (see error logging details below) doesn’t write the error back to the db.

Another thing that is unsettling is that it only happens on one computer, and at only one place in the code. Here is the (seemingly normal) statement where the problem occurs. I might also mention that this method is being called from a timer once a second at most (when certain flags are true). It is possible that another user is writing to the db at the time it is being read. Also, the db is at a networked location on another computer. Windows 7, if that makes a difference.

dim rs as RecordSet = MainDb.SQLSelect("SELECT * FROM ProductionProgress WHERE ProductionID = '" + PID + "' ORDER BY Idx")
MainDb.ErrCheck "PListProduction 1"

I check for errors after every db read or write. Here is the error logging code:

Sub ErrCheck(Extends DB as SQLiteDatabase, msg as String = "", Log as Boolean = True)
  if db.Error then
    dim em As String = "Error: " + db.ErrorMessage + EndOfLine + msg
    if Log Then
      SQLPS = Main.MainDb.Prepare("INSERT INTO ErrorLog(Value,User,Time) VALUES(?,?,DATETIME(CURRENT_TIMESTAMP,'LOCALTIME'))")
      SQLPS.BindAll 2
      SQLPS.SQLExecute(em, System.EnvironmentVariable("COMPUTERNAME"))
    end if
    msgbox em
  end if
End Sub

I see a similar question was asked on Stack Exchange over a year ago with no answer. I hope I can find a solution:
sqlite-database-corrupted-but-integrity-check-passes

As a storage professional, I would put money (my personal money) on the table that the network share is what is causing the corruption. SQLite can run on a network drive/share/network location, but it shouldn’t, as it can cause random corruption.

I would check out How To Corrupt An SQLite Database File, although it doesn’t mention this particular case. I have seen data files corrupted this way many, many times, all due to multiple hosts writing to the same data file (an SQLite file in this case) at the same time over a network share.

Yikes. I have a large program that relies on multiple users accessing a database at a networked location. What other solution can I use? I have a lot of code and use many in-memory databases, so I was hoping not to move away from SQLite. Would Valentina SQLite Server be a good option? I’ve been doing this for over a year with a smaller program and no problems.

Something is “not good”… I would try to dump the contents of the “possibly somehow corrupted” SQLite db to a new/clean one.
If you don’t want to do it your own way, then a tool like SQLiteDoctor or SQLiteManager might do it for you.
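
If you wanted to do it yourself from Xojo, a rough sketch (untested; "CleanDb.sqlite" and the variable names are just placeholders, and it assumes MainDb is your existing SQLiteDatabase connection) could look like this:

dim newDb as new SQLiteDatabase
newDb.DatabaseFile = GetFolderItem("CleanDb.sqlite")
if newDb.CreateDatabaseFile then
  // recreate the table definitions from the (possibly corrupted) db
  dim rs as RecordSet = MainDb.SQLSelect("SELECT name, sql FROM sqlite_master WHERE type = 'table' AND name NOT LIKE 'sqlite_%'")
  while not rs.EOF
    newDb.SQLExecute(rs.Field("sql").StringValue)
    rs.MoveNext
  wend
  // copy the data table by table inside one transaction
  if newDb.AttachDatabase(MainDb.DatabaseFile, "old") then
    rs = MainDb.SQLSelect("SELECT name FROM sqlite_master WHERE type = 'table' AND name NOT LIKE 'sqlite_%'")
    newDb.SQLExecute("BEGIN TRANSACTION")
    while not rs.EOF
      newDb.SQLExecute("INSERT INTO " + rs.Field("name").StringValue + " SELECT * FROM old." + rs.Field("name").StringValue)
      rs.MoveNext
    wend
    newDb.SQLExecute("COMMIT")
    newDb.DetachDatabase("old")
  end if
end if

Indexes, triggers and views would need the same treatment; the sqlite3 command-line tool’s .dump does all of that for you, which is one reason the dedicated tools are usually easier.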

That is lucky.

You can use something like cubeSQL or Valentina SQLite Server, or make your own. The “make your own” option is to write a single app that reads/writes to the SQLite database, then have the other apps talk to that one app for all their database needs.

good luck!

This works great at fixing most issues with corrupt SQLite databases. How bad the corruption is will determine whether there is data loss and how much.

We, and SQLite.org, have been telling people not to do this for many years.
It appears to work until it doesn’t, and then your database is corrupted to the point it can’t be used.

So far the database ‘appears’ to not be corrupt. Everything works fine except for the occasional errors. Integrity_check is ok as stated above.

The reason I’m concerned is that I do not wish for a catastrophe to happen. I need a solution that is easy to implement because I have almost 600 lines of code that access the maindb file. I’m doing an installation in 3 weeks that is 11 users: $10k setup and $500 per month support. I have major features to add in this time and can’t afford to spend it writing a server and changing all my database access code. Also, I attach my maindb to an in-memory db quite frequently because I can make complex changes to an in-memory db very rapidly.

My current largest install is five users, so I see almost certain pending doom with my current code going into an 11 user setup.

It seems that Valentina SQLite Server would be the easiest to implement, but even that will take some time to set up.

Attaching to the DB as a server probably won’t work.

Going from 1 user (standalone db) to 2 (with a DB server and a client/server configuration) is sometimes harder than going from 2 to 200, as that’s more just a matter of scaling the server hardware.

My recommendation would be the cubeSQL server. Easy to install and support, and the switch from a simple SQLite db is not difficult.

So let’s say I use Valentina SQLite Server. All my existing SQL statements should work with minor changes. I see that all my attaches simply insert records from my in-memory db into the main db or vice versa.

Here is a simple one that fetches items into memory for fast searching and filtering:

if db.AttachDatabase(main.MainDb.DatabaseFile, "m") Then
  db.SQLExecute("DELETE FROM Items")
  db.SQLExecute("INSERT INTO Items SELECT * FROM m.QBItems")
  db.DetachDatabase("m")
  db.ErrCheck "GetItems"
end if

How would I do this without attaching? Would I need to read into a recordset and then insert each row using a loop / rs.MoveNext? This would seem to be much slower, especially when writing back to the maindb on disk.
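
Something along these lines might be the shape of it (just a sketch, untested; ItemID / Name / Price are made-up stand-ins for whatever QBItems actually holds, "db" is the in-memory SQLiteDatabase from the snippet above, and it assumes SQLSelect on the server connection still hands back a RecordSet):

dim rs as RecordSet = main.MainDb.SQLSelect("SELECT ItemID, Name, Price FROM QBItems")
if rs <> Nil then
  dim ps as PreparedSQLStatement = db.Prepare("INSERT INTO Items (ItemID, Name, Price) VALUES (?, ?, ?)")
  ps.BindType(0, SQLitePreparedStatement.SQLITE_INTEGER)
  ps.BindType(1, SQLitePreparedStatement.SQLITE_TEXT)
  ps.BindType(2, SQLitePreparedStatement.SQLITE_DOUBLE)
  db.SQLExecute("BEGIN TRANSACTION") // one transaction around the whole loop is what keeps row-by-row inserts fast
  db.SQLExecute("DELETE FROM Items")
  while not rs.EOF
    ps.SQLExecute(rs.Field("ItemID").IntegerValue, rs.Field("Name").StringValue, rs.Field("Price").DoubleValue)
    rs.MoveNext
  wend
  db.SQLExecute("COMMIT")
  db.ErrCheck "GetItems"
end if

Whether that beats Attach for your table sizes you would have to time, but without the surrounding transaction the per-row inserts will crawl.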

I’ve found it’s much quicker to load an entire table into memory, edit the records, and then delete all records from the on-disk db and insert the entire table at once, than it is to update the desired records directly in the on-disk db. I’m talking 20k records with 1k records being modified, for example.

I would say that right there is a recipe for disaster… not a big deal in a single-user app with a local database, but a terrible idea for a multi-user app with a remote database.

You have a period of time (regardless of how short) where the entire data is under the control of a single client, and another period of time where the table is in “flux”… who knows what might happen if two clients try to replace an entire table at the same time.

This makes no sense in a multiuser system! If two users load the table into memory, make changes, and save, the last user will win; in other words, you will lose some of the changes.

You need to take account of the fact that a record can be changed by another user while you are editing.

If you need to do mass updates, then you need to design for mass updates (see the sketch after this list):

  • import the data into a staging area
  • massage the data as required
  • update the data based on the contents of the staging area
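
In SQLite terms that could look something like this (a rough sketch only; the Items_Staging table and the columns are made up for illustration):

MainDb.SQLExecute("BEGIN TRANSACTION")
// 1. import the data into a staging area (a temp table lives only on this connection)
MainDb.SQLExecute("CREATE TEMP TABLE Items_Staging AS SELECT * FROM Items WHERE 0")
// ... bulk-insert the incoming rows into Items_Staging here ...
// 2. massage the data as required
MainDb.SQLExecute("UPDATE Items_Staging SET Name = TRIM(Name)")
// 3. update the live table from the staging area, touching only the rows that are actually staged
MainDb.SQLExecute("UPDATE Items SET Price = (SELECT s.Price FROM Items_Staging s WHERE s.ItemID = Items.ItemID) WHERE ItemID IN (SELECT ItemID FROM Items_Staging)")
MainDb.SQLExecute("COMMIT")
MainDb.ErrCheck "staging update"

That way no client ever replaces the whole table; only the changed rows are touched, and it all happens inside one transaction.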

Another idea is to use semaphores.

10 is the semaphore flag for tablexyz set?
20 if yes then wait a few seconds and goto 10
30 set the semaphore to block other processes
40 do what ever you have to do to the table
50 release the semaphore so other processes can play

obviously the above is NOT real code, but an illustration of an idea

Most true multiuser database engines do this (or something similar) automatically…
If you implement this yourself, be sure to have an out if there is an issue between 30 and 50, otherwise your table is locked forever.
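
If you did roll it yourself, the flag could simply live in a small table in the db. A sketch only (the Locks table, the method names, and the 60-second stale-lock timeout are all made up):

// CREATE TABLE Locks(TableName TEXT PRIMARY KEY, LockedBy TEXT, LockedAt TEXT)

Function TryLockTable(tableName as String) as Boolean
  // the "out": throw away any lock older than 60 seconds so a crashed client can't block everyone forever
  MainDb.SQLExecute("DELETE FROM Locks WHERE TableName = '" + tableName + "' AND LockedAt < DATETIME('now','-60 seconds')")
  // steps 10 and 30 collapse into one atomic INSERT: it only succeeds if nobody else holds the lock (primary key violation otherwise)
  MainDb.SQLExecute("INSERT INTO Locks(TableName, LockedBy, LockedAt) VALUES('" + tableName + "','" + System.EnvironmentVariable("COMPUTERNAME") + "',DATETIME('now'))")
  return not MainDb.Error
End Function

Sub UnlockTable(tableName as String)
  // step 50: release the semaphore so other processes can play
  MainDb.SQLExecute("DELETE FROM Locks WHERE TableName = '" + tableName + "'")
End Sub

The caller then does the 10/20 loop: keep calling TryLockTable with a short pause until it returns True, do the work, and always call UnlockTable afterwards (a Finally block is a good place for it).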

Hmmm… I was mostly importing the mass data into memory for faster access and then receiving a notification when the data changed and needed to be reimported. For the actual mass changes to the db, an example would be an order with 100 lines. I would delete all 100 lines and then add the 100 lines, modified or not. I set a flag on the order record to indicate it is being saved, so any read would discover that. I’ve found this changed the save time from 15 seconds to 0.5 seconds (a necessary change).

I just checked my code. The only other time (other than the order example above) I’m doing mass changes to the on-disk db is when I read a list of items from QuickBooks and insert them into the main db (this is done using a ‘server’ program). After inserting the records, I notify the users that the list has changed. The users then import the table into memory.

No, not in my book! Simply add a version control field to the record, either a time stamp or (my preferred way) an int field. Then the update statement is:

UPDATE … version = version + 1 WHERE … (version = ?)

If rows affected is zero, it means someone else updated the record while you were editing.
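
In Xojo terms it could look roughly like this (a sketch, assuming a hypothetical Orders table with an integer Version column, and that orderID and loadedVersion were captured when the record was originally read):

MainDb.SQLExecute("UPDATE Orders SET Status = 'Saved', Version = Version + 1 WHERE OrderID = " + Str(orderID) + " AND Version = " + Str(loadedVersion))
dim rs as RecordSet = MainDb.SQLSelect("SELECT changes()") // rows affected by the UPDATE on this connection
if rs <> Nil then
  if rs.IdxField(1).IntegerValue = 0 then
    msgbox "Someone else changed this order while you were editing - reload it and try again."
  end if
end if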

Did some tests with Valentina SQLite Server.

  1. Reading to memory is acceptable: 0.03 seconds for 2095 records read.
  2. 1.95 - 2 seconds to write using a transaction with separate inserts.
    Writing to the SQLite db at a networked location using Attach (my current method) takes 0.28 - 0.37 seconds.

As you can see, I need to find a faster way to insert multiple records at once. I can’t create an in-memory VSQLiteDatabase like I can a regular database.

Edit: OK, I had code in my test to update a progress bar (not wrapping the inserts in a transaction took 60 seconds); with that code commented out, it took 0.93 - 1.00 seconds. Still 3x as long, but probably acceptable.