Multi User Access to SQLite on Network Drive

[quote=483455:@Stam Kapetanakis]Hi all, sorry for reviving this old thread, but I find myself in a similar situation to the original poster - trying to implement a multi-user solution in a corporate environment that absolutely does not allow any server installation or access, where the only option is to have a solution on a shared drive.

However, as stated, I can’t just have an SQLite file sitting on a shared drive: while multiple users can read it, any multi-user writing will eventually lead to database corruption.

Can the database write functions be delegated to a helper/console app on the shared drive?[/quote]

Find Aloe Express…
https://aloe.zone/archives.php

Use that in front of your SQLiteDatabase for desktop client-server apps on your local network, if you can, using some PC or Mac on the network as a server… That was my plan for my workplace before some things changed.

-Karen

[quote=483455:@Stam Kapetanakis]
Can the database write functions be delegated to a helper/console app on the shared drive?[/quote]
If you’re going to run software on that machine, it is a “server” for all intents and purposes.
Use one designed for multiple users, like PostgreSQL, etc.

[quote=483459:@Norman Palardy]If you’re going to run software on that machine, it is a “server” for all intents and purposes.
Use one designed for multiple users, like PostgreSQL, etc.[/quote]

Probably the best bit of advice you are going to get… but in my experience also the most likely to be ignored.

I don’t know why, but picking the appropriate tools for the job is not a skill rated by the industry. None of us would hire a mechanic to work on our car if they only owned a hammer, yet we regularly fail to challenge similar choices in software engineering…

The first and only two relevant questions for whatever solution you ultimately choose are: can you open custom ports on the workstations, and can you send/receive traffic between them? If the site security policy/infrastructure doesn’t allow this, then you won’t be able to use anything other than what the admins give you access to, i.e. the share, so advising Aloe or Postgres would be a moot point. If you are truly only allowed access to the share, this will limit your design and speed, but it is doable with the right design.

[quote=483455:@Stam Kapetanakis]Hi all, …

Can the database write functions be delegated to a helper/console app on the shared drive?[/quote]

A console app copied to the shared drive will still be executed on the PC from which it was invoked, not on the file server. This does not solve the problem.

If you want to have a multi-user accessible solution for your database, without data corruption, then you need a DATABASE SERVER

The right design calls for a multi-user database, which it appears the OP does not have access to; that is the reality. There are alternative designs to get around the issue, but they will all result in a less stable outcome, the only question being the degree of that instability; that is also the reality.

Given the OP’s circumstances I’d probably opt to build the app in MS Access rather than Xojo, because it at least has some built-in capability to work off a shared database file, unlike Xojo/SQLite. It would not completely eliminate the issue, but it would reduce it to the minimum possible.

MS Access is EXTREMELY fragile when shared on a network!

There’s no good answer to Stam’s situation but I’d much sooner wing it with SQLite than MS Access.

I suppose it all depends on how many/frequent the writes are.
In the olden days, we used to have each client write out a text file and then run a single collector process. It first tried to rename each file by changing the extension; if that worked, it read the file in, and all the clients could then read the master database. It worked very well for years, to be honest, but you needed enough time between writes for the file to be renamed.
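
For illustration, a minimal Xojo sketch of that rename-as-lock pattern might look like the following. The inbox path, the `.pending`/`.work` extensions and the IngestFile routine are all hypothetical stand-ins; the key point is that renaming an open file fails, so a successful rename claims the file:

```
' Hypothetical collector pass: inbox path, extensions and IngestFile
' are illustrative, not from the original post.
Var inbox As New FolderItem("\\server\share\inbox", FolderItem.PathModes.Native)

For Each f As FolderItem In inbox.Children
  If f.Name.Right(8) = ".pending" Then
    Try
      ' On Windows, renaming a file that a client still has open fails,
      ' so a successful rename means the client has finished writing.
      f.Name = f.Name.Replace(".pending", ".work")
    Catch e As IOException
      Continue ' still being written; pick it up on the next pass
    End Try
    IngestFile(f) ' hypothetical: read the renamed file into the master DB
  End If
Next
```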

[quote=483521:@Steve Wilson]MS Access is EXTREMELY fragile when shared on a network!

There’s no good answer to Stam’s situation but I’d much sooner wing it with SQLite than MS Access.[/quote]

I have experience of one application with about 50 users on average and a DB file of about 800 MB, and another with around 20 users and a DB file of around 400 MB. I did not build either, but supported both for a year or so. I had just one corruption in that period.

I’m glad you got away with it; you were lucky.

Two of my most significant projects, including the one I’ve just finished, have been replacing MS Access applications that were just endlessly falling over. The corruptions were happening multiple times per week. Two different clients, two different environments, 3 different MS Access developers, exactly the same outcome.

[quote=483459:@Norman Palardy]If you’re going to run software on that machine, it is a “server” for all intents and purposes.
Use one designed for multiple users, like PostgreSQL, etc.[/quote]
Hi @Norman Palardy - thanks, I appreciate the generic concept of a server, but it’s not quite the same.

Sadly I have zero control over any installation anywhere, other than what can be copied manually. Certainly installing a DBMS, as much as that would be preferable, is completely out of the question (this is for work in a tightly regulated healthcare organisation).

In my mind I was hoping to be able to manually put a console app on the shared drive, point to it with some data in a shell from the main user GUI app and delegate the writing to SQLite there.

However I don’t know if this would risk concurrent writes: if multiple users do this, will it launch multiple instances of the console app and risk concurrent writes? Is there a way to block this and just return an error to the GUI app if the console app is already running? (The writes take less than a second, so the GUI app could just try again in a second if needed.)

[quote=483533:@Stam Kapetanakis]In my mind I was hoping to be able to manually put a console app on the shared drive, point to it with some data in a shell from the main user GUI app and delegate the writing to SQLite there.[/quote]

Apps don’t run on a drive, they run on a computer… and if the computer running the console app accesses the drive through a file server, it is still the same as any other client… But then all the clients would need to access that computer, so it is essentially a server, and if your IT department does not allow that, you are out of luck.

If the app is mostly read-only, a tactic I have used in the past was having a way to check on startup whether the DB file had changed and, if so, copy the file down…
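
A minimal sketch of that check-and-copy tactic, assuming the share is mounted at the same path for every client; the file names here are hypothetical:

```
' Hypothetical paths: refresh the local copy if the shared file is newer.
Var remote As New FolderItem("\\server\share\data.sqlite", FolderItem.PathModes.Native)
Var local As FolderItem = SpecialFolder.ApplicationData.Child("data.sqlite")

Var needsCopy As Boolean
If Not local.Exists Then
  needsCopy = True
ElseIf remote.ModificationDateTime > local.ModificationDateTime Then
  needsCopy = True
End If

If needsCopy Then
  If local.Exists Then local.Remove
  remote.CopyTo(local)
End If

' All reads then hit the local copy, never the shared file.
Var db As New SQLiteDatabase
db.DatabaseFile = local
db.Connect
```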

Another approach to dealing with SQLite’s lack of concurrency is to have each app create a “lock file” on the file server when it needs to do DB work and delete it when done… If a lock file exists, the other clients would tell the user the DB is busy and keep checking until it is free… And of course keep the DB open in the clients for as short a time as possible.
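
As a rough illustration of the lock-file idea, here is a hedged Xojo sketch (the lock path is a hypothetical example). Note the caveat in the comments: a check-then-create over SMB is not atomic, so this narrows the collision window rather than closing it:

```
' Hypothetical lock path; this whole pattern is best-effort, not a
' true mutex, because Exists-then-Create is not atomic over SMB.
Const kLockPath = "\\server\share\data.lock"

Function TryAcquireLock() As Boolean
  Var lock As New FolderItem(kLockPath, FolderItem.PathModes.Native)
  If lock.Exists Then Return False ' another client is writing
  Try
    Var out As TextOutputStream = TextOutputStream.Create(lock)
    ' Record who holds the lock, to help clean up stale locks by hand.
    out.WriteLine(System.EnvironmentVariable("USERNAME") + " " + DateTime.Now.SQLDateTime)
    out.Close
    Return True
  Catch e As IOException
    Return False ' could not create the file; treat the DB as busy
  End Try
End Function

Sub ReleaseLock()
  Var lock As New FolderItem(kLockPath, FolderItem.PathModes.Native)
  If lock.Exists Then lock.Remove
End Sub
```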

Alternatively you could take your chances and have your clients keep the DB open for as short a time as possible and hope for the best… Some people have been OK with that if there are not too many clients… Do you feel lucky? :wink:

-Karen

Like everyone else says, SQLite is NOT a good solution if multiple users can potentially update it; it is designed for single-instance updates. Also, as everyone says, just placing a console app on a shared drive and then launching it does not solve the problem, as all you did was move the app to the shared drive: each user is still launching it as a local process.

And I think that is where the suggestion of Aloe Express comes in. If the console app uses Aloe and runs on some machine on the LAN, then it can do ALL the database access as a single process and expose the data via a REST interface. If that machine does not also run a web server, then you can configure Aloe to use port 80.

However, you would not only need to be able to copy the Aloe Express console app to that machine, you would need that machine to be configured to autostart the app and keep it running. This brings us full circle to it essentially acting like a “server”, albeit one that only has to run a Xojo console app providing both the REST interface and SQLite access.
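
For what it’s worth, the client side of such a setup is simple; this sketch assumes a hypothetical host name and route exposed by the Aloe-based console app:

```
' Hypothetical endpoint: the console app would map this route to a
' SQLite query and return the rows as JSON.
Var conn As New URLConnection
Var json As String = conn.SendSync("GET", "http://dbhost.local/api/items?id=42", 30)
Var result As Variant = ParseJSON(json) ' raises an exception on malformed JSON
```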

It seems to me we really don’t have enough information on what your IT department will allow or what it prohibits. And I think you need to have that nailed down and an approach “blessed” by them before starting to develop any solution.

Barring that, if each user does have read/write access to the same shared drive, then I think @Karen’s suggestion of placing the SQLite file there plus ALSO putting a “lock file” there to act like a mutex seems like a reasonable workaround.

Also, I think I heard once that the SQLite corruption problems arise when multiple people can WRITE to the database. I think you can have multiple machines open the SQLite database and perform only read operations without concern. So if the data is used most often for queries and less often for updates, then you may be able to minimize waits for the “lock file” by only “locking” while performing updates.
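
Building on the lock-file sketch above, that would look roughly like this: reads go straight to the shared file, and only writes take the lock (the table name and retry handling are hypothetical):

```
' db is an already-connected SQLiteDatabase pointed at the shared file.
' Reads take no lock; the corruption risk is on concurrent writes.
Var rs As RowSet = db.SelectSQL("SELECT * FROM items")

' Writes grab the lock file first and release it as soon as possible.
If TryAcquireLock Then
  Try
    db.ExecuteSQL("UPDATE items SET qty = ? WHERE id = ?", 5, 42)
  Finally
    ReleaseLock
  End Try
Else
  ' Lock held by another client: tell the user the DB is busy and retry.
End If
```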

What I do is make one of the machines that will be using the application the server and have it hold the data. I run the cubeSQL database server and just make sure that machine is booted up before other people connect to it. Everyone has the program on their own machine and just connects to the data file.

If policy allows, could you use a server outside the network? There would be cost involved, but an option like Heroku Postgres could get you a multi-user database with little effort.

Not my normal recommendation, but your hands are pretty tied.

[quote=483534:@Karen Atkocius]Alternatively you could take your chances and have your clients keep the DB open for as short a time as possible and hope for the best… Some people have been OK with that if there are not too many clients… Do you feel lucky? :wink:

-Karen[/quote]
The situation might be different with newer versions of SQLite but I remember a few years ago we found that we couldn’t even connect to a SQLite database stored on a network drive.

Thanks all, a lot of this discussion is quite useful.

Just to give an idea about the setup issues:
This is in a major hospital where IT/information governance is extremely prohibitive. It’s currently not possible to run a server within the intranet or to allow access to a server externally (otherwise I’d be using my own Valentina or FileMaker Pro server, although the latter would be a costly solution). Neither is it possible to install any software that requires ‘admin’ access for installation. And perhaps most importantly, any machine logged in to the network (Windows only) is set to hibernate/sleep at short intervals for security reasons, so using one of these to provide a REST interface isn’t going to work.

The only DBMS option would be to get our IT dept to provide a virtual MSSQL server. However, this is a painful and drawn-out process which involves being prioritised by the COO’s office, having a systems architect produce a plan, going through an approval process and, if successful, implementation by a different systems architect, etc. I’ve been through these processes before and can’t begin to describe the delay, cost and pain involved. Sadly our IT is monolithic in every sense of the word.

The current solution I have in place is a FileMaker Pro stand-alone runtime app residing on the shared drive; any user can launch it and it’s a workable solution. But the runtime enforces strict file locking, which means that while one user has it open no one else can read or write (basically they can’t run the app at all).

I’ve tested a Xojo SQLite solution and it works fine on the shared drive with concurrent usage in preliminary testing, but this clearly runs the risk of database corruption.

I was hoping that a console app could simulate the file lock for database writes, but it sounds like that can’t happen; actually it may be simpler and easier to just use a lock file as suggested by @Karen Atkocius.

Thanks once again all, and grateful for any further suggestions…

That is one corruption too many. I have been administering MS SQL Server databases for more than 20 years and RB / CubeSQLServer for about 10 years - not one single corruption!

I agree, but as I said, I did not write either of them…

I ended up having to toss CubeSQL and go back to PostgreSQL on a recent project, though… I needed to import a very large data set from the SEC (800+ MB) and after the fourth attempt I gave up on it. It just generated random memory leaks and then disappeared without a trace. Sometimes that would happen on the first import, and sometimes it would run for almost a month with a nightly import before it would die.

I have CubeSQL databases larger than 1 GB, including BLOBs, and they run well and fast. Any idea why it failed?