I normally connect to the PostgreSQL database when my app opens and disconnect when it closes. This works fine.
I’m now developing an app where the connection will often be interrupted, so I’m wondering what the best (read: fastest) way is to check whether the server is available.
If the db is connected and the network connection is lost, there are no warnings or errors until the db is accessed again. Then the app freezes for about 10 seconds before ending in the expected db error. Subsequent db access immediately results in a db error, even if the server becomes available again.
It seems like calling db.connect returns an immediate response. Perhaps connecting before each db access is the correct way?
My ultimate goal is to create an app with multiple users. When the app disconnects from the server it continues to function, writing all changes to a local SQLite file. When a connection is detected again it will automatically reconnect, sync the changes (with warnings if necessary), and once again write to the main PostgreSQL db. The idea is that while on the road the app would still be usable in areas with no service.
Don’t hold the db connection open. When you need it, call a function that opens a new connection and returns the instance. Especially with multiple users, this is more efficient anyway.
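In rough Xojo terms, something like this (an untested sketch using the classic PostgreSQLDatabase API; the host, credentials, and the OpenDB name are just placeholders):

' Open a fresh connection for each unit of work and hand it back to the caller.
Function OpenDB() As PostgreSQLDatabase
  Dim db As New PostgreSQLDatabase
  db.Host = "192.168.1.10" ' placeholder server address
  db.Port = 5432
  db.DatabaseName = "maindb" ' placeholder database name
  db.UserName = "appuser" ' placeholder credentials
  db.Password = "secret"
  If db.Connect Then
    Return db
  End If
  ' Caller treats a Nil return as "server not reachable right now"
  Return Nil
End Function

Each piece of work then calls OpenDB, checks for Nil, uses the connection, and lets it go out of scope when done.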
Ideally though, you would go through a middleware app.
Sure. When you distribute an app that needs to connect to a remote database, you either have to set up each user individually with their own username and password, which they’ll enter to connect, or you have to distribute the username/password in each copy of the app. Either way, you also have to leave your database exposed to the world.
(If this is for private use, these risks are mitigated somewhat especially if you are behind a VPN.)
A middleware app sits between your GUI and that database. The GUI sends “messages” to the middleware app which then connects to the database, does its work, then sends data back to the GUI. You can control access to the middleware app with some scheme that makes sense, and you do not have to expose the database to the world.
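As a very rough illustration of the GUI side of that idea (a hypothetical TCPSocket subclass; the “GetCustomer:42” message format is made up here, and the middleware itself would do the real database work):

' Hypothetical TCPSocket subclass on the GUI side of a middleware setup.
Sub Connected()
  ' Once the socket is up, send a request message instead of raw SQL
  Me.Write("GetCustomer:42" + EndOfLine)
End Sub

Sub DataAvailable()
  ' The middleware queried PostgreSQL and sent the result back as text
  Dim reply As String = Me.ReadAll
  ' Parse the reply and update the UI here
End Sub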
I’d refrain from recommending a middleware or closing connections if all we know is that Neil will have multiple users.
Connection pooling is available for Postgres (e.g. PgBouncer) to deal with lots of connections, and even that is only needed once you have more than a couple of hundred concurrent users. Also, having persistent connections is not such a terrible thing at all: it speeds things up and allows your client apps to receive notifications to keep the UI up to date. In Xojo, just issue your commands, and if the error message says “No connection”, then reconnect and issue the command again. I built a system like this for 400 concurrent users and never ran into any trouble at all, fwiw. You will have to dig into the various settings in your postgresql.conf file (especially the timeout settings) though.
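Roughly like this (untested; the table and the exact wording of the error message are placeholders you’d adjust):

' Issue the command; if the connection silently died, reconnect once and retry.
db.SQLExecute("UPDATE customers SET phone = '555-0100' WHERE id = 42")
If db.Error Then
  If InStr(db.ErrorMessage, "connection") > 0 Then ' match against the actual message text
    If db.Connect Then
      db.SQLExecute("UPDATE customers SET phone = '555-0100' WHERE id = 42")
    End If
  End If
End If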
Just asked because the question was about detecting the connection and the fact that it can take a while, about 10 seconds, until you are sure no connection is possible. So if your middleware drops the connection and has to establish a new one on the first request, that can take just as long. I have made a PG connection object with a timer keeping the connection alive with a simple “SELECT 1” query. If it accidentally loses its connection it will try to reconnect automatically, in order to be prepared for serving its clients.
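The keep-alive part looks roughly like this (a sketch of a Timer’s Action event pinging the shared connection; db is assumed to be a property holding the PostgreSQLDatabase):

' Action event of a keep-alive Timer, firing e.g. every 30 seconds.
Sub Action()
  Dim rs As RecordSet = db.SQLSelect("SELECT 1")
  If db.Error Or rs = Nil Then
    ' The connection is gone; try to bring it back so it is ready for the next request
    Call db.Connect
  End If
End Sub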
[quote]So, in the end, if we have about 100 concurrent users, what is better to do?
Connect and disconnect the database, or keep it connected?[/quote]
Well, I’d try what appears simplest to me first, which admittedly is opening and closing a connection on demand. If you don’t mind a little challenge and the speed gains of persistent connections are tangible, keep it open. Be ready to put on your database administrator hat and dig a little deeper into postgresql.conf and the various timeout and keep-alive settings. As I said, it was especially the possibility of receiving notifications from the server that made that approach worthwhile for me (Xojo’s pg plugin deals well with them, and I think Christian’s plugin is even better in this respect, as you don’t need to constantly poll the connection object). I use the pg_notify command in triggers and plpgsql functions a lot to make that happen.
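The setup for that is mostly on the Postgres side; something along these lines, run once from Xojo or psql (illustrative only; the channel, table, and function names are placeholders):

' Notify listeners on the 'customer_changes' channel whenever a customer row changes.
db.SQLExecute("CREATE OR REPLACE FUNCTION notify_customer_change() RETURNS trigger AS $$ " + _
  "BEGIN PERFORM pg_notify('customer_changes', NEW.id::text); RETURN NEW; END; $$ LANGUAGE plpgsql")
db.SQLExecute("CREATE TRIGGER customer_change AFTER INSERT OR UPDATE ON customers " + _
  "FOR EACH ROW EXECUTE PROCEDURE notify_customer_change()")

The Xojo side then listens on that channel and reacts to the notifications, which is where the plugin support mentioned above comes in.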
This topic has been discussed a number of times already on this forum and on one occasion eventually turned into a flame war. People get very passionate about this for some reason…
I had in mind only serving PostgreSQL on the local network. Half the computers would be connected all the time; these are office desktop computers. For the salesmen’s laptops the problem is when they are traveling on the road.
I’m considering doing this:
Creating a server program that connects to all the apps with a TCP socket and runs on the same IP as the database.
With this I can tell easily and quickly whether I’m connected or not.
For the mobile apps I would do something like this:
'write changes to local SQLite db...
If connected Then
  'write changes to PostgreSQL
Else
  'write a log in the db with the details of the modification as well as the time of the change
End If
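Fleshed out a bit (untested; localDB/pgDB, the change_log table, and the column names are just placeholders for illustration):

' Always write the change to the local SQLite db first
localDB.SQLExecute("UPDATE customers SET phone = '555-0100' WHERE id = 42")

If connected Then
  ' Mirror the change to the main PostgreSQL db
  pgDB.SQLExecute("UPDATE customers SET phone = '555-0100' WHERE id = 42")
Else
  ' Log the modification locally with a timestamp so it can be pushed later
  Dim d As New Date
  localDB.SQLExecute("INSERT INTO change_log (table_name, record_id, change_time) " + _
    "VALUES ('customers', 42, '" + d.SQLDateTime + "')")
End If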
If connected to the server app, then the server would receive notifications of any changes in the db. I would add a timestamp and username field to all tables. The notification would contain the record ID as well as the user and time. This info would be passed to all connected apps flagged as mobile, so they could update their local db with the changes. This way, when disconnected, all local data is up to date.
While disconnected my app would try to reconnect to the server TCP socket every 5 or 10 seconds.
When reconnected, all modifications made while offline would be pushed to the main database. Then the local database would be updated.
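The reconnect attempt itself could be as simple as a Timer firing every few seconds (sketch; serverSocket is assumed to be a TCPSocket property, and the address/port are placeholders):

' Action event of a reconnect Timer running while offline.
Sub Action()
  If Not serverSocket.IsConnected Then
    serverSocket.Address = "192.168.1.10" ' placeholder server IP
    serverSocket.Port = 9000 ' placeholder port
    serverSocket.Connect
  End If
End Sub

' In the socket's Connected event, start pushing the logged offline changes.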
That might just work. You just need to be ready for those ugly corner cases where, e.g., user A deletes a record on the server which user B in turn updates locally, and so on. You could easily end up in a world of pain. This is why syncing databases is not trivial.
I’m thinking every time a record is updated, its datemodified is set to now. Then, when syncing offline changes, it would compare the change time with the ‘last_connected’ time. If the record in the main db was updated since the ‘last_connected’ time, it would need interaction from the user to decide which modification to honor. I would add an option to always use the most recent modification (in other words, no warnings for sync issues).
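A rough sketch of that check during sync (untested; assumes the change_log idea above, a datemodified column, and a stored lastConnected Date; every name here is a placeholder):

' For each queued offline change, see if the server copy changed while we were offline.
Dim rs As RecordSet = pgDB.SQLSelect("SELECT datemodified FROM customers WHERE id = 42")
If rs <> Nil Then
  If Not rs.EOF Then
    Dim serverModified As String = rs.Field("datemodified").StringValue
    If serverModified > lastConnected.SQLDateTime Then
      ' Both sides changed the record: ask the user, or just take the most recent change
    Else
      ' Only the offline copy changed, so it is safe to push the local change to the server
    End If
  End If
End If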
If you’re writing new quotes, then there will never be a collision, so a local database will work. I do the same with my clients. They sync enough info to be able to work before they leave the office. When they get back, they upload their orders. Since you’re always adding new records, it’s less of a hassle to sync up.
Occasionally I would need to update customer info, but it’s not like I’m syncing documents that will have changes from multiple users on a regular basis.
[tongue in cheek] Back when I was in sales and dinosaurs roamed the earth, we carried these neat little gizmos called pens. We could drag them across flat sheets of dried wood mash and make marks. At the end of the day we would look at these marks and update the data in our computers. [/remove tongue]
I’ve done what you are doing, Neil, and I strongly agree with Tim. New information is easy. Syncing brings on a level of potential hazard that will eventually get you.
That’s where I’m at now. For the past 5 years I’ve been using an Excel-based program and syncing when I got back to the office. Now, with additional salesmen, I’m running into unanticipated issues.
Since it’s hard to go back to an abacus I was trying to ponder the options. I should tell Verizon to put up more towers… But then that might be like waiting for Xojo to fix a bug instead of finding a workaround.
I’ll probably end up limiting functionality while offline. “Warning! You can’t edit customer info while offline.”