Ah - perhaps you meant to post a link. It came through as plain text.
Here's the link.
Scroll until item #7.
Being that section 7 says nothing about databases shared on a network…
I think the main problem is that on a network share, you can't always guarantee that the data arrives in the correct order, because of network latencies, however small.
Protocols like file copies use software on both ends to make sure that the packets are sent in order and then received and written in the same order. In the case of SQLite, the "engine" only exists on the "send" side, so any network glitch could cause things to not be written in the correct order. Since database transactions are highly order-sensitive, you have the potential for problems.
This is why software like CubeSQL and Valentina are so much more robust. They add an engine on the server side to receive and record the data as it was sent.
So if your local network is in perfect working order and you can guarantee that it will be for all eternity, then by all means, roll the dice and connect directly to a SQLite database on a network share. But let's be clear: when the makers of SQLite tell you that this is a bad plan, a sample size of one is hardly a good reason to ignore their advice.
This isn't an issue for file sharing or any other service based on TCP, or no data transmitted over a network would be safe at all. The TCP stack guarantees that packets are received by the application, service, driver, etc. in the order they are sent; you may be thinking of UDP, where this is not the case.
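To make that concrete, here's a minimal sketch of my own (purely illustrative, Python on localhost) of TCP's in-order delivery: a sender fires off numbered messages and the receiver always sees them in exactly the order they were sent.

```python
# Illustration of TCP's in-order delivery guarantee on localhost:
# numbered messages always arrive in the order they were sent.
import socket
import threading

def sender(port: int) -> None:
    with socket.create_connection(("127.0.0.1", port)) as s:
        for i in range(1000):
            s.sendall(f"{i}\n".encode())

srv = socket.create_server(("127.0.0.1", 0))    # OS picks a free port
port = srv.getsockname()[1]
threading.Thread(target=sender, args=(port,), daemon=True).start()

conn, _ = srv.accept()
buf = b""
while True:
    chunk = conn.recv(4096)
    if not chunk:                               # sender closed: all data is in
        break
    buf += chunk

received = [int(line) for line in buf.splitlines()]
assert received == list(range(1000))            # TCP preserves send order
print("all 1000 messages arrived in send order")
```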
They are more robust, but it has nothing to do with network latency. Database servers take a more global approach to managing databases, allowing multiple clients to read and write simultaneously by implementing row-level locks; consolidating data from multiple tables into a single result to reduce network load; performing intensive calculations on the data before returning it; providing transactional rollback capabilities; the list goes on.
While you COULD in theory do these things with a file shared amongst app instances, the big problem is contention. For safety and integrity reasons, only one app can write to the file at a time, so in any non-trivial scenario you'll quickly run into contention issues; one app instance will have the file locked, preventing the other instances from getting access to it, and the problems start to pile up like traffic behind a car crash. Sure, this will work in very simple cases - a low number of instances, no complex queries - but it doesn't take much for the system to tip over into delays, frustration, and potentially lost data.
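If you want to see the contention problem in miniature, here's a sketch using Python's built-in sqlite3 module (same-machine case; the file path and table name are placeholders of mine): instance A holds the write lock while instance B times out.

```python
# One app instance holding the write lock while a second times out.
import sqlite3
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.db")  # placeholder path

# isolation_level=None -> autocommit; we issue BEGIN/COMMIT ourselves
a = sqlite3.connect(path, isolation_level=None)
a.execute("CREATE TABLE t (x INTEGER)")
a.execute("BEGIN IMMEDIATE")                 # instance A takes the write lock
a.execute("INSERT INTO t VALUES (1)")

b = sqlite3.connect(path, isolation_level=None, timeout=0.5)
try:
    b.execute("BEGIN IMMEDIATE")             # instance B waits 0.5 s, then gives up
except sqlite3.OperationalError as e:
    print("instance B:", e)                  # -> database is locked

a.execute("COMMIT")                          # A releases the lock; B could retry now
```

Bear in mind this is the well-behaved, same-host case: the clean "database is locked" error exists because the OS file locks work. On a network share, those locks may themselves be unreliable, which is where silent corruption comes in.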
I still have nightmares about corrupted Access databases on network shares in multi-user VB6 (and VFP) applications, so I always recommend a database server of some sort over database files on a network share. Better safe than sorry. Spending time reentering all of the data, even if you're doing daily backups, isn't fun.
Yes, the nightmares.
Had an Access DB on a NAS, warned management, but they knew better.
Good thing I wrote a script to back up the DB every night, because one day corruption happened.
I was able to recover the previous day, but that was not good enough, since a day's worth of data was lost.
LOL, in IT land you're never the hero.
Here are my 2 cents. @Bill_Snaith asked a very simple question, a question that has been asked many times. Many answered, for example @kevin_g, @PaulS, @Greg_O and others: don't do that!
I experienced the same. With a shared SQLite database, the question is not if corruption will happen, it's when it will happen.
@Bill_Snaith, you should not take any chances; you have two good suggestions here:
- Use CubeSQL
- Use Valentina SQLite Server
@Joseph_E, you seem to be quite lucky; buy a lotto ticket.
Well, I stand by my opinion. Putting a SQLite database on a network share and accessing it from multiple locations is going to work until it doesn't. At which point the database may not be recoverable. You'd be smart to have lots of backups.
Staged backups, by the way - keep daily images for two weeks, weekly for two months, monthly for a year. Database corruption can take a while to discover, and if all you have is a nightly dump, you're out of luck if the corruption happened silently last week.
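For what it's worth, here's a rough sketch of that rotation using Python's built-in sqlite3 backup API - the paths, file naming, and schedule checks are my own assumptions, not a prescription:

```python
# Staged backup rotation for a SQLite file: daily for two weeks,
# weekly for two months, monthly for a year. Paths and naming are
# illustrative assumptions.
import sqlite3
import datetime
import pathlib

DB = "app.db"                          # hypothetical database path
BACKUP_DIR = pathlib.Path("backups")

def backup(tier: str, keep: int) -> None:
    """Take a consistent online backup, then prune old ones in this tier."""
    BACKUP_DIR.mkdir(exist_ok=True)
    stamp = datetime.date.today().isoformat()
    dest = BACKUP_DIR / f"{tier}-{stamp}.db"
    with sqlite3.connect(DB) as src, sqlite3.connect(dest) as dst:
        src.backup(dst)                # consistent even while the DB is in use
    # keep only the newest `keep` files in this tier (ISO dates sort correctly)
    for old in sorted(BACKUP_DIR.glob(f"{tier}-*.db"))[:-keep]:
        old.unlink()

today = datetime.date.today()
backup("daily", keep=14)               # two weeks of dailies
if today.weekday() == 6:               # Sundays
    backup("weekly", keep=9)           # roughly two months of weeklies
if today.day == 1:                     # first of the month
    backup("monthly", keep=12)         # a year of monthlies
```

Connection.backup() takes a consistent snapshot even while the database is in use, which is safer than copying the raw file.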
Absolutely. SQLite became wildly popular because of the generosity of the license, but outside the use cases of the time it was a real problem if you wanted to scale up even to a relatively small workgroup.
Back when VB6 / VBA was such a big deal, I know it was used extensively by the federal government for custom, in-office apps, often on pretty critical data. Scary thought.
I think it's worth prioritizing, given that while our tool sets evolve / change / get replaced over time, what ultimately matters to end users is their data. If you want to keep selling to them over the years, you have to put end-user data first.
The only way I use a SQLite file on a network share is if the app is on, and run from, the same network share.
+1000
I came from Oracle, Informix, etc., but wrote quite a few shrink-wrap applications in MS Access, and only a couple with VB - mainly because it was cheaper for our customer base to buy a bit more memory for their PCs and use our included Access run-times, and because the time-to-production was less than a third, and Access had a better report writer.

The secret is to separate the data into a shared back-end on the server, and put the forms, reports, code, and macros into a separate front-end installed on each client (I learnt that from R:Base). Generally we thought that these were OK for <10 concurrent users with maybe 30,000 rows of data, without too many complex multi-table queries. Corruption was rare, but (almost always) recoverable.

For anything bigger (or for WiFi LANs) we used a MS SQL Server back-end with server views, triggers, stored procedures, etc. We never hit a performance limit, but I estimate that it would have been good for ~100 concurrent users and >1 million or so rows of data.

I sold the company and retired about 15 years ago, but the company still sells Access-based descendants of a system I first wrote 30 years ago - I believe they have about 800 organizations using it, most of them networked.
I have a db background going back to the early days of Oracle, Informix, etc. I'm retired from paid work, but still write pro-bono stuff. I now only use SQLite, and have found it better than many suggest. Based on lived experience: SQLite will work as a "server" on a reliable multi-user LAN for a fairly large number of reads, but fewer writes - WiFi, not so much…
What helps/works:
- Turn on Write-Ahead Logging (WAL).
- Use transactions for anything complex.
- Triggers to update multiple tables will save a lot of traffic - they fire irrespective of how you make changes to the data.
- Control concurrency with, say, an update trigger like UPDATE Tasks SET taskLog = OLD.taskLog + 1 WHERE ID = OLD.ID. Use this by reading the taskLog integer when opening the record, then checking that it is still the same before writing your changes; if it differs, display the changed record and ask what the user wants to do (see the sketch after this list).
- Use a Web application - I have found that this works well with ~20 users on a LAN with a lot of reads and writes. The web application is a "single user" - all users share one connection.
- If you have licensing problems, and want to look at Linux/Raspberry Pi use only, a console REST server might be the way to go - I'm still looking at this, as there is not much modern Xojo documentation on setting up the server and connecting it to SQLite (lots of stuff on writing clients, but not so much on the server side?). Again, the console app is a "single user".
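To illustrate the taskLog pattern from the list above, here's a sketch in Python's built-in sqlite3 module - the Tasks table and taskLog column follow the post, while the trigger name, the title column, and the rest are my assumptions:

```python
# Optimistic concurrency via a version counter (taskLog) bumped by a trigger.
import sqlite3

db = sqlite3.connect("tasks.db", isolation_level=None)  # autocommit mode
db.executescript("""
    PRAGMA journal_mode = WAL;                 -- as recommended above
    CREATE TABLE IF NOT EXISTS Tasks (
        ID      INTEGER PRIMARY KEY,
        title   TEXT,
        taskLog INTEGER NOT NULL DEFAULT 0
    );
    CREATE TRIGGER IF NOT EXISTS bump_taskLog
    AFTER UPDATE OF title ON Tasks
    BEGIN
        UPDATE Tasks SET taskLog = OLD.taskLog + 1 WHERE ID = OLD.ID;
    END;
""")
db.execute("INSERT OR IGNORE INTO Tasks (ID, title) VALUES (1, 'draft')")

# 1. Read the record and remember the version you saw.
title, seen_version = db.execute(
    "SELECT title, taskLog FROM Tasks WHERE ID = 1").fetchone()

# ... user edits the record in the UI ...

# 2. Write back only if nobody else changed it in the meantime.
cur = db.execute(
    "UPDATE Tasks SET title = ? WHERE ID = ? AND taskLog = ?",
    ("edited", 1, seen_version))
if cur.rowcount == 0:
    print("record was changed by someone else - reload and ask the user")
else:
    print("update applied")
```

The trick is in the guarded UPDATE: if another user saved first, taskLog no longer matches, rowcount comes back 0, and you know to re-read instead of silently overwriting their changes.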
From the SQLite website:
• Section on how to corrupt your database:
• This states that WAL mode won't work on a network drive:
[sqlite.org](https://www.sqlite.org/wal.html)
You could also read up on "opportunistic locking" via a Google search.
True, but it does work (well) with a Xojo Web App (and, for me, possibly with a REST-JSON console app?).
Just tried running one of my test databases on an iMac in a local terminal with WAL, and with 3 different WiFi ssh clients running:

```
sqlite3 /Users/tim/WebTestSQLite/webtest.sqlite
```
using multiple scripted concurrent reads and updates from each instance. All seems good. I'm guessing that it should work for most systems if the actual app (sqlite3 in this case) is run on the server by each client. Perhaps a single Xojo desktop app run in the same way would work? Maybe everything in a transaction, with any intermediate SQL writes/selects done in a separate temporary in-memory database before the final write?
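For comparison, here's a rough Python equivalent of that test - several worker processes on the same host doing mixed reads and writes against one WAL-mode file. The path is the one from my command above; the table, row counts, and worker count are arbitrary choices of mine:

```python
# Several same-host processes doing mixed reads/writes on one WAL database.
import sqlite3
import multiprocessing

DB = "/Users/tim/WebTestSQLite/webtest.sqlite"   # path from the command above

def worker(n: int) -> None:
    con = sqlite3.connect(DB, timeout=10)        # wait out brief write locks
    for i in range(200):
        con.execute("INSERT INTO log (who, seq) VALUES (?, ?)", (n, i))
        con.commit()
        con.execute("SELECT COUNT(*) FROM log").fetchone()
    con.close()

if __name__ == "__main__":
    con = sqlite3.connect(DB)
    con.execute("PRAGMA journal_mode = WAL")     # persists in the file
    con.execute("CREATE TABLE IF NOT EXISTS log (who INTEGER, seq INTEGER)")
    con.commit()
    con.close()

    procs = [multiprocessing.Process(target=worker, args=(n,)) for n in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("finished without locking errors")
```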
The problem with WAL will be when you have more than one computer trying to access the data.
All processes using a database must be on the same host computer; WAL does not work over a network filesystem. This is because WAL requires all processes to share a small amount of memory and processes on separate host machines obviously cannot share memory with each other.
Yes, I saw that (and knew it). The processes that I just tried were run on the same computer as separate SSH terminal instances - i.e. run by the server computer but as separate processes (PIDs). I don't use Windows anymore, but was thinking that something like VNC to log in (separate accounts?) would get you a GUI.
Just tried it (seems to work) with "DB Browser for SQLite" on a local Raspberry Pi with its native GUI, along with a concurrent single WiFi VNC connection to a separate Pi account from an iPad. It's probably more trouble than it's worth though, hence my use of Web Apps (and looking at REST-JSON for a console server?).