was a record added? who added it? Master or Transaction?
was a record deleted?
was a record changed? if both Master and Transaction changed it… which changes to keep?
I add a UUID to each record, which helps with bullet #1
never delete a record until after sync (just mark that it should be deleted) — this helps with #2
Timestamps might help with #3, assuming all the computers use a common timezone such as GMT
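The three helpers above can be sketched in a few lines. This is a minimal, hypothetical example (the `items` table and its columns are made up for illustration): a client-generated UUID, a soft-delete flag, and a timestamp column.

```python
import sqlite3
import uuid

# Hypothetical table carrying the three sync helpers described above:
# a client-generated UUID, a soft-delete flag, and a modified timestamp.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (
    uuid     TEXT PRIMARY KEY,                        -- helps with #1
    name     TEXT,
    deleted  INTEGER NOT NULL DEFAULT 0,              -- soft delete, #2
    modified TEXT NOT NULL DEFAULT (datetime('now'))  -- UTC stamp, #3
);
""")

# Insert a record with a UUID generated on the client.
rid = str(uuid.uuid4())
conn.execute("INSERT INTO items (uuid, name) VALUES (?, ?)", (rid, "widget"))

# "Delete" by flagging only; the row survives until after the sync.
conn.execute(
    "UPDATE items SET deleted = 1, modified = datetime('now') WHERE uuid = ?",
    (rid,))

row = conn.execute("SELECT deleted FROM items WHERE uuid = ?", (rid,)).fetchone()
print(row[0])  # 1 -- record still present, just flagged for deletion
```

The actual purge of flagged rows would then happen only after both sides agree the sync succeeded.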
The tricky part is figuring out ALL the rules. If two clients change a record and then sync, who wins? What happens to the record that ‘loses’? What happens if a record is marked as deleted, but a client doesn’t know that and edits it? Database synchronization is not easy because you’ll have to be really rigid on the rules and then figure out how to reconcile the sync when a rule is broken. Is the user notified? Does it get logged? Do you care? Etc, Etc.
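One common (but by no means only) answer to "who wins?" is last-write-wins on the UTC timestamp, with the losing version logged so you can still answer "what happened to the record that lost?". A minimal sketch of that rule, with made-up record dicts:

```python
# Hypothetical last-write-wins rule: when two clients changed the same
# record, keep the version with the newer UTC timestamp and log the loser.
# ISO-8601 UTC strings compare correctly as plain strings.
def resolve(local, remote, conflict_log):
    """local/remote: dicts with 'modified' (ISO-8601 UTC) and 'value'."""
    if local["modified"] >= remote["modified"]:
        winner, loser = local, remote
    else:
        winner, loser = remote, local
    conflict_log.append(loser)  # your rules decide: notify the user? just log?
    return winner

conflict_log = []
local  = {"modified": "2020-01-02T10:00:00Z", "value": "edited on laptop"}
remote = {"modified": "2020-01-02T09:00:00Z", "value": "edited on phone"}
winner = resolve(local, remote, conflict_log)
print(winner["value"])  # edited on laptop
```

The point is less the code than the questions it forces: the losing edit is kept in `conflict_log`, but someone still has to decide whether that log is ever shown to a user.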
Another question to ask yourself is do you want the sync to happen ‘over the air’ or do you send the local SQLite database to the server and then get the copy back with everything sync’d? The first option might be very time consuming since you’re doing queries. The second option might take a while with large databases. Both options have drawbacks.
In my experience with scenarios like the one described (mobile, desktops, servers), it might work with 1 to 5 people, but not with many people or if you start to scale. Sooner or later you will run into a Byzantine fault.
What about… ehm… Blockchain? (Please don’t beat me for this)
As a recommendation: serialize your data in such a way that important values (like prices or stock numbers) are not stored as static values in DB fields but are calculated each time from a journal or transaction log.
Is it essential that customers/devices can operate offline?
Wouldn’t it be easier if customers could perform operations only while online, and not at all while offline?
remember… with more context information we can help you better
My suggestion… don’t do it that way. If you synchronize local and server databases, you have A LOT of work to do, and no one guarantees that there will NEVER be errors. Maybe your effort would be better spent figuring out how to connect all devices to JUST ONE database (Xojo Cloud?).
I have a local SQLite database that has basically the same structure as a remote database, a MySQL database. But the remote data source can also be a web app that manages a SQLite database on that server.
Every table in the database has a set of 3 columns: UUID, CREATED, MODIFIED. The CREATED and MODIFIED fields are set automatically via triggers, so I don’t have to deal with those. The UUID is generated when a record is created.
I added a table to store the changes, called the “ChangeLog”. Every table has triggers so that whenever a record is inserted, updated, or deleted, a row is added to this ChangeLog with a UTC timestamp, the record’s UUID, and the action (insert/update/delete).
To sync I just check the last record in the ChangeLog on one database and use it to get all the changes in the other. And copy / delete / etc according to the records in the ChangeLog.
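The ChangeLog-with-triggers idea can be sketched directly in SQLite. This is a hypothetical reconstruction (the `customers` table and trigger names are made up; the original poster's framework generates these behind the scenes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (
    UUID     TEXT PRIMARY KEY,
    Name     TEXT,
    CREATED  TEXT NOT NULL DEFAULT (datetime('now')),
    MODIFIED TEXT NOT NULL DEFAULT (datetime('now'))
);

-- One ChangeLog for the whole database: UTC timestamp, record UUID, action.
CREATE TABLE ChangeLog (
    id     INTEGER PRIMARY KEY AUTOINCREMENT,
    stamp  TEXT NOT NULL DEFAULT (datetime('now')),
    record TEXT NOT NULL,
    action TEXT NOT NULL
);

-- Triggers append a ChangeLog row for every insert/update/delete.
CREATE TRIGGER customers_ins AFTER INSERT ON customers BEGIN
    INSERT INTO ChangeLog (record, action) VALUES (NEW.UUID, 'insert');
END;
CREATE TRIGGER customers_upd AFTER UPDATE ON customers BEGIN
    INSERT INTO ChangeLog (record, action) VALUES (NEW.UUID, 'update');
END;
CREATE TRIGGER customers_del AFTER DELETE ON customers BEGIN
    INSERT INTO ChangeLog (record, action) VALUES (OLD.UUID, 'delete');
END;
""")

conn.execute("INSERT INTO customers (UUID, Name) VALUES ('u-1', 'Alice')")
conn.execute("UPDATE customers SET Name = 'Alicia' WHERE UUID = 'u-1'")
conn.execute("DELETE FROM customers WHERE UUID = 'u-1'")

# Sync: fetch everything after the other side's last-seen ChangeLog id.
last_seen = 0
changes = conn.execute(
    "SELECT record, action FROM ChangeLog WHERE id > ? ORDER BY id",
    (last_seen,)).fetchall()
print(changes)  # [('u-1', 'insert'), ('u-1', 'update'), ('u-1', 'delete')]
```

Replaying those rows against the other database (and remembering the highest `id` seen) is then the whole sync step.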
This approach worked very well for me. Most of the trigger creation and sync operations happen behind the scenes, since I made my own little database framework (an SQLite wrapper). So, actually working with it is a piece of cake.
I work as a freelance cameraman. I receive something called a callsheet, that tells me where I have to work, the equipment needed, the people I work with, contact info for location, etc. I add this info to an online database. I add weather and traffic info.
An iOS app helps me to see the right info when I’m on the road. But sometimes reception is terrible. That’s why I have something like a local cache of the data I view.
When I add or change data while working somewhere, the changes are made in the local cache and synced with the server when connected to the internet.
I have a future project in mind that will need this same functionality. There is one problem I haven’t worked out in my mind yet.
When I print out proposals there is an incrementing proposal number that is printed. This makes it possible to look up the proposal when the customer calls back. The way I handled that in the past was to prefix a machine ID, so the proposal number would look like E116. I was trying to come up with a better system. One thought I had was for each machine to reserve a ‘block’ of ID numbers while online. Those IDs could then be used up offline, knowing they are already ‘reserved’ and won’t be used by anyone else.
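The block-reservation idea can be sketched quite simply. This is a hypothetical illustration (the `IdServer` class and block size are made up): the server hands out non-overlapping ranges, and each machine consumes its own range offline with no risk of collision.

```python
# Hypothetical sketch of "reserve a block of IDs while online":
# the server hands out non-overlapping ranges of proposal numbers.
class IdServer:
    def __init__(self, block_size=100):
        self.next_id = 1
        self.block_size = block_size

    def reserve_block(self):
        """Hand out the next unused range; called while the client is online."""
        start = self.next_id
        self.next_id += self.block_size
        return range(start, start + self.block_size)

server = IdServer()
machine_a = iter(server.reserve_block())  # reserves IDs 1..100
machine_b = iter(server.reserve_block())  # reserves IDs 101..200

# Each machine now numbers proposals offline from its own block.
print(next(machine_a), next(machine_a), next(machine_b))  # 1 2 101
```

The remaining design question is what a machine does when its block runs out while still offline; a common answer is to reserve a fresh block well before the current one is exhausted.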
That is exactly what I did. I generated a UUID and added the machine’s CPU serial number.
Another way I thought would work is maintaining a separate table in the master database that holds all the UUIDs. When generating a UUID, it would check whether that UUID already exists; if not, the new UUID can be assigned to the new record.
The problem remained that when a record was created while the device was offline, this table in the master database couldn’t be accessed.
Therefore, a prefix is a good solution to avoid UUID collisions.
A prefix can also be a shorter series of characters, based on the machine’s name, that is registered and thus unique within the system.
[quote=463053:@Brian O’Brien]copy the SQLite file over to the server and process it there…
Listen to dave![/quote]
Like Jeremie said, sending the entire file can consume a lot of data.
It should be doable to make a subset of only the changed records in a separate SQLite file and send that one over.
The server can attach the new file to the master file and update the master records from there. And detach the updated db-file when done.
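The attach-and-update step might look something like this. It's a sketch with made-up table contents; the upsert uses `INSERT OR REPLACE`, which works in any SQLite version (newer versions also offer `ON CONFLICT ... DO UPDATE` for finer control):

```python
import os
import sqlite3
import tempfile

# Two throwaway SQLite files: the server's master and a client's subset
# containing only the changed records. Table contents are made up.
tmp = tempfile.mkdtemp()
master_path = os.path.join(tmp, "master.sqlite")
subset_path = os.path.join(tmp, "subset.sqlite")

master = sqlite3.connect(master_path)
master.execute("CREATE TABLE items (uuid TEXT PRIMARY KEY, name TEXT)")
master.execute("INSERT INTO items VALUES ('u-1', 'old name')")
master.commit()

# Client side: a subset file with one changed and one new record.
subset = sqlite3.connect(subset_path)
subset.execute("CREATE TABLE items (uuid TEXT PRIMARY KEY, name TEXT)")
subset.execute("INSERT INTO items VALUES ('u-1', 'new name')")
subset.execute("INSERT INTO items VALUES ('u-2', 'added offline')")
subset.commit()
subset.close()

# Server side: attach the uploaded subset, upsert into the master, detach.
master.execute("ATTACH DATABASE ? AS changes", (subset_path,))
master.execute("INSERT OR REPLACE INTO items SELECT uuid, name FROM changes.items")
master.commit()
master.execute("DETACH DATABASE changes")

rows = master.execute("SELECT name FROM items ORDER BY uuid").fetchall()
print(rows)  # [('new name',), ('added offline',)]
```

Note that `INSERT OR REPLACE` deletes and re-inserts the conflicting row, which matters if other tables reference it; with foreign keys in play, an explicit `ON CONFLICT` upsert is the safer choice.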
That actually sounds like it might work, and it might even be a bit faster to just run some SQL to copy the changed data within one SQLite db object.
The other benefit of sending an SQLite file is that the subset file can be encrypted. Especially when the file is transferred over a non-secured network, this approach would be a nice one.