I know this topic has been discussed a ton.
I’m using SQLite for a client application and they want to have the database shared.
I tried CubeSQL and it was too slow.
I tried migrating it all to MS SQL Server and it was too slow.
When I say too slow, it’s because of how the database schema was originally designed (before my time): EVERYTHING was put in the database, including things that had no business being in a database, and it takes a ton of SELECTs just to load enough data for the app to start. It works well enough on local SQLite, and I know how to refactor it to make it faster, but it’s a huge plumbing problem.
I’ve seen phenomenal performance in other apps using Postgres, but porting will take some time, so I only want to do it if there’s a good chance of success. Does anyone know how CubeSQL and Postgres compare on performance?
One: if I share the db file on a network, but ONLY ONE user ever writes to it, is there still a risk of corruption, or is that only a risk when more than one user is writing?
Two: if I can find a way to optimize the queries, perhaps I can speed things up enough that CubeSQL will do the trick. I have zero problem promoting that solution if it works well enough.
The primary issue is one SELECT to get a listing of all the “items” that will live in memory, after which I have to repeat a number of other queries per item to serialize all of its components into memory.
Does that make sense? Any suggestions, or is this too vague?
The original design of the database is a mess, not my choosing, and not changeable without huge challenges.
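For what it’s worth, the access pattern described above (one listing query, then one more query per item) is the classic “N+1 query” shape. A minimal sketch of it in Python’s built-in sqlite3 module, using hypothetical `items` and `components` tables standing in for the real schema:

```python
import sqlite3

# Hypothetical schema standing in for the real one: items plus per-item components.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE components (item_id INTEGER, part TEXT);
    INSERT INTO items VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO components VALUES (1, 'bolt'), (1, 'nut'), (2, 'spring');
""")

# The N+1 pattern: one query for the listing, then one more query per item.
# This is cheap against a local SQLite file, but each extra query becomes a
# full network round trip once the database lives on a server.
items = conn.execute("SELECT id, name FROM items").fetchall()
loaded = {}
for item_id, name in items:
    parts = [row[0] for row in conn.execute(
        "SELECT part FROM components WHERE item_id = ? ORDER BY part",
        (item_id,))]
    loaded[item_id] = {"name": name, "parts": parts}

print(loaded)
```

With 1,000 items this issues 1,001 queries, which is exactly the kind of chattiness that hurts over a network.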
I use MySQL Community Server running on a Windows Server 2019 VM. It is very fast and I have dozens of users connecting to multiple databases using Xojo Desktop and Web apps. It will also run on Linux and MacOS. I’ve been using MySQL for close to 20 years with Xojo. https://dev.mysql.com/downloads/mysql/
I am afraid that whatever database system you pick, using the original bad schema will always be a pain: every time you have to modify existing functionality or create new features, things will be complicated. I think you’d be better off recreating the schema (using either SQLite or a new database system), transferring the data to the new database, and forgetting the old schema for good.
In my experience, a network database will always be slower than a local database, by a factor of 2-3x.
The main reason is the round trip from the client to the server and back for each query.
So the solution is to optimize the SQL so that the client sends as few queries as possible to the server.
Try to combine all your queries into a single one and send that to the server.
This is also a good exercise for a local SQLite database, because it often speeds things up there too.
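As a sketch of that idea, again in Python’s sqlite3 with a hypothetical items/components schema: the per-item queries collapse into a single JOIN, so the client makes one round trip instead of N+1, and regroups the flat rows in memory.

```python
import sqlite3

# Hypothetical stand-in schema for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE components (item_id INTEGER, part TEXT);
    INSERT INTO items VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO components VALUES (1, 'bolt'), (1, 'nut'), (2, 'spring');
""")

# One query replaces the listing query plus one query per item.
# LEFT JOIN so items with no components still appear.
rows = conn.execute("""
    SELECT i.id, i.name, c.part
    FROM items i
    LEFT JOIN components c ON c.item_id = i.id
    ORDER BY i.id, c.part
""").fetchall()

# Regroup the flat result set into per-item objects on the client side.
loaded = {}
for item_id, name, part in rows:
    entry = loaded.setdefault(item_id, {"name": name, "parts": []})
    if part is not None:
        entry["parts"].append(part)

print(loaded)
```

The trade-off is a little client-side regrouping code in exchange for a single round trip, which is usually a big win once the database is on the other side of a network.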
I’m afraid you’d be better off rebuilding all the queries from scratch, using Postgres.
Your SQLite queries are badly made: too many successive queries that could be combined into one,
and that is what causes the slowdown when you try to share them over a network.
Rethink the schema, and rethink the queries.
I’d suggest the only way to share SQLite data over a network is to create an application server that exposes an API (e.g. JSON-RPC) to the clients. The application server and the SQLite database should be on the same computer and no process other than the application server should touch the database. This also significantly reduces the network traversal slowdown Jean-Yves refers to.
Just as an example of the concept, if your app was for raising invoices the client app could create a JSON object that contained the invoice details - the customer, products, quantities, etc… and send that off to a function on the application server that extracted that data and made the necessary updates to the database.
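A minimal sketch of that concept, using only Python’s standard library (http.server plus sqlite3) and a hypothetical `invoices` table; the real API shape, field names, and error handling would of course differ:

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_PATH = "app.db"  # the application server is the ONLY process that opens this file


def save_invoice(conn, payload):
    """Apply one invoice payload to the database; returns the new invoice id."""
    cur = conn.execute(
        "INSERT INTO invoices (customer, total) VALUES (?, ?)",
        (payload["customer"], payload["total"]))
    conn.commit()
    return cur.lastrowid


class ApiHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The client sends one JSON body, e.g. {"customer": "ACME", "total": 99.5},
        # instead of issuing its own SQL over the network.
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        invoice_id = save_invoice(self.server.conn, payload)
        body = json.dumps({"invoice_id": invoice_id}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


def run():
    conn = sqlite3.connect(DB_PATH)
    conn.execute("CREATE TABLE IF NOT EXISTS invoices "
                 "(id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
    server = HTTPServer(("0.0.0.0", 8080), ApiHandler)
    server.conn = conn  # one connection, one process: no file sharing at all
    server.serve_forever()

# To start the server, call run() — it blocks, serving requests.
```

Because the SQLite file is only ever touched by this one process on its own machine, the usual network-filesystem locking and corruption concerns disappear, and each client interaction is a single request/response instead of many SQL round trips.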