Multi-User Access to SQLite on a Network Drive


I have written a program that accesses SQLite database files on a network drive under Windows. The program can be started by multiple users on multiple computers.

At the moment I do the following:
I write the user's name and the date and time into the database when the user starts editing (they need to press a button). When another user opens the database, I read out those fields and say something like 'the database is being edited by Mister X; you can only read.'
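For what it's worth, that flag scheme could look roughly like this in Python (a sketch only; the table and column names are made up for illustration, and note that acquiring the flag is itself a write to the shared file, which is part of the problem this thread discusses):

```python
import datetime
import sqlite3

def try_acquire_edit_flag(db_path, user):
    """Record who is editing in a one-row table.
    Returns the current editor's name, or None if we got the flag.
    (Table/column names are illustrative, not from the original post.)"""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS edit_flag
                    (id INTEGER PRIMARY KEY CHECK (id = 1),
                     user TEXT, started_at TEXT)""")
    row = conn.execute("SELECT user FROM edit_flag WHERE id = 1").fetchone()
    if row is not None:
        conn.close()
        return row[0]          # somebody else is editing; we are read-only
    conn.execute("INSERT INTO edit_flag VALUES (1, ?, ?)",
                 (user, datetime.datetime.now().isoformat()))
    conn.commit()
    conn.close()
    return None                # we hold the edit flag

def release_edit_flag(db_path):
    """Clear the flag when the editor is done."""
    conn = sqlite3.connect(db_path)
    conn.execute("DELETE FROM edit_flag WHERE id = 1")
    conn.commit()
    conn.close()
```

The catch, as the rest of the thread explains, is that the check-then-insert is not atomic across machines on a network share, and a crashed client leaves the flag set forever.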

However, in tests the modifying user has problems closing the database while another user has it open (read operations only, no write operations).

What's the best solution to grant read-write access to only one person and read-only access to everyone else?

According to the SQLite documentation, I shouldn’t enable multi-user access: “WAL does not work over a network filesystem”

Have a look at Brad Hutchings’ StudioStable Database

[quote=116274:@Urs Jäggi]
According to the SQLite documentation, I shouldn’t enable multi-user access: “WAL does not work over a network filesystem”[/quote]

CubeSQLServer is also a good alternative…

have you checked that file caching is turned off on the server?

look at properties of the share, advanced settings -> caching.
make it so no files are allowed to be cached.

Might make it work better. I’ve used it with FoxPro tables in the past.

Also, MS SQL Server has a free version that works well.

Network shares use caching (most of them at multiple layers), which is not good when multiple people edit, or could potentially edit, the same file. As a storage admin/engineer, that is the biggest pain in the backside we have when it comes to network file shares.

If you need multiple people editing/viewing/adding to a common database, I would either use an RDBMS (like PostgreSQL, MySQL, MS SQL, etc.) or something like cubeSQL. I use cubeSQL a lot and it works very well.

good luck!!

[quote=116328:@Markus Winter]Have a look at Brad Hutchings’ [sic] StudioStable Database[/quote]

Please don’t do this, Markus. The recommendation requires explanation and motivation. In the past, empty recommendations like that have gotten me angry users who don’t understand why shared database file access on a network is a disaster.

Well, what if the case requires it? For example, we have 4 teams with very restricted access, and they can reach the shared drive only via VPN, so no servers, no databases, no MySQL, no PostgreSQL. What can we do in this case?

I was thinking of using a method similar to a file-lock system: if one of the teams needs to access the db, they open it, hold it in memory, and create a file on the drive stating who is using it. Once the db is updated and closed, the whole in-memory db is dumped back out, overwriting the shared db file, and the lock file is updated to say they have finished, so that the next one can use it. I know it's overkill, but this way they can still do something.

With this method they could even open the db in memory as read-only if anybody else is accessing the db at that time.
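A rough sketch of that idea in Python, using the sqlite3 backup API to copy the file into and out of memory (names are invented; note that exclusive lock-file creation is not guaranteed to be reliable on every network filesystem, which is exactly the risk raised elsewhere in this thread):

```python
import os
import sqlite3

def open_shared_db(shared_db, lock_path, user):
    """Load the shared db file into an in-memory copy, taking a lock
    file first. Returns (connection, writable)."""
    mem = sqlite3.connect(":memory:")
    disk = sqlite3.connect(shared_db)
    disk.backup(mem)              # copy the whole file into memory
    disk.close()
    try:
        # O_EXCL makes creation fail if the lock file already exists
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, user.encode())
        os.close(fd)
        return mem, True          # we hold the lock: read-write
    except FileExistsError:
        return mem, False         # someone else holds it: read-only copy

def close_shared_db(mem, shared_db, lock_path, writable):
    """Dump the in-memory db back over the shared file, release the lock."""
    if writable:
        disk = sqlite3.connect(shared_db)
        mem.backup(disk)          # overwrite the shared file
        disk.close()
        os.remove(lock_path)
    mem.close()
```

Everything between open and close happens against a private in-memory copy, so readers never touch the shared file while it is being rewritten; the weak point remains the lock file itself.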

Any other ideas on this ?

Unfortunately the shared drive is the only option for me, and I have to make the most of it.

Using a network-shared SQLite file over VPN would be even WORSE than over a local network. You absolutely do not want to do this. Use cubeSQL (or now Valentina Server) to provide a true client/server capability for remote clients.

Often the shared drive is on a NAS, and most of the time you can install PostgreSQL or MySQL on it.
If that’s not the case, change the shared drive ASAP.

If you don’t have any option to install a service next to your shared folder, then I would pick a cloud-based DB over a shared-drive option in a heartbeat. It will not be a case of IF my database corrupts, but WHEN. Corruption will happen, especially over a transient (external) connection where you can’t control the connection.

All that being said, a way to do it would be:

File locking, the first person to open the database drops a file telling all other clients they can only read from it.

If someone performs an update/insert, they drop a file with the current date/time containing the SQL query they would have performed.
The person who holds the lock on the database runs the SQL from that file and deletes the file when it’s complete, then moves on to the next file if one exists.

When the person is finished with the db, they remove the lock.

The only issue you have to make a contingency for is the lock file being left behind when/if the person who owns it loses their internet connection. Put a timeout into the lock file that tells other clients when they can remove the file and take the lock themselves, because you haven’t seen that user for X hours so they must be having a problem.

This will all be pretty clunky but pretty robust. You could still corrupt the db if you lost your internet connection in the middle of doing something important, though, so keep a rolling backup using your preferred method (GFS etc.)
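The queue-and-apply part of the steps above could be sketched like this in Python (the directory layout, file naming, and timeout value are all assumptions, and each queue file is assumed to hold a single SQL statement):

```python
import glob
import os
import sqlite3
import time

QUEUE_DIR = "queue"        # illustrative layout on the shared drive
LOCK_FILE = "db.lock"
STALE_AFTER = 4 * 3600     # seconds before a lock counts as abandoned

def queue_write(share, sql):
    """A read-only client drops its statement as a timestamped file."""
    name = os.path.join(share, QUEUE_DIR, f"{time.time_ns()}.sql")
    with open(name, "w") as f:
        f.write(sql)

def apply_queue(share, db_path):
    """The lock holder runs queued statements oldest-first,
    deleting each file once it has been applied."""
    conn = sqlite3.connect(db_path)
    for path in sorted(glob.glob(os.path.join(share, QUEUE_DIR, "*.sql"))):
        with open(path) as f:
            conn.execute(f.read())
        conn.commit()
        os.remove(path)
    conn.close()

def lock_is_stale(share):
    """Timeout contingency: others may break a lock whose file
    has not been touched for STALE_AFTER seconds."""
    path = os.path.join(share, LOCK_FILE)
    return os.path.exists(path) and \
        time.time() - os.path.getmtime(path) > STALE_AFTER
```

Sorting the nanosecond-timestamped filenames keeps the replay order deterministic, but as the post says, a dropped connection mid-apply can still leave the db in a bad state, hence the rolling backups.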

See for more information on locking.

Well, I guess I'll use it just for storage. The data isn't updated all the time, so I guess I'll work like this: one day one team updates the db, the next day another team, and so on, and the drive will be for storage only in the meantime.

Thanks again for all the advice.

People considering having many users access an SQLite database over a network should read:

In particular, the section on when NOT to use SQLite. See under “Client/Server Applications” where the advice is:

IOW, you cannot fix it in your application, whatever you do.

Pretty sure it’s also buggy under macOS!

With PostgreSQL and free versions of every major RDBMS (MS SQL, Oracle, DB2…), plus other choices such as Valentina, Firebird, etc., one really does not have much justification to use SQLite in networked, multi-user scenarios.

Every warning was made. It is just a plain bad idea to use SQLite in this context. Even more so when (more) capable and free or low cost alternatives abound. The learning curve is there, but major RDBMS are well documented. Pick the one that suits your situation best and run with it. Forget SQLite in this context.

macOS falls under UNIX in this quote.

Sure. And like consumer disk drives that lie about whether they’ve written the data to disk or not, I expect that consumer level file systems lie too, in order to get better performance.

yes yes they do.

Hi all, sorry for reviving this old thread, but I find myself in a similar situation to the original poster: trying to implement a multi-user solution in a corporate environment that absolutely does not allow any server installation or access, where the only option is a solution on a shared drive.

However, as stated, I can’t just have an SQLite file sitting on a shared drive: while multiple users can read, any multi-user write will eventually lead to database corruption.

Can the database write functions be delegated to a helper/console app on the shared drive?