I have moved my cubeSQL host from London to Dallas as I am nearing the release of a Xojo application that is usable from anywhere in the world. Most of the user’s data is local to their site, but some data is central.
When I test the Desktop and Web versions, they respond much faster from the USA than from Europe. I am thinking of moving the database location again, but only if I can guarantee better worldwide speed.
I would go a different way.
First, you probably know where your clients live, but you will likely end up with something like a server in the USA, one in Europe and one in East Asia.
So for your tables and records, please use UUIDs as keys. Then make an implementation where you have three or more servers: whenever you create or change a record on one, also apply those changes to the other databases. So eventually, within a few minutes, all of them should hold the same records.
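The idea above can be sketched as follows. This is a minimal illustration, not Xojo code: Python with in-memory SQLite stands in for the regional servers, and the schema (`customer` table) is a made-up example. The point is that a UUID primary key never collides across servers, so the same write can be replayed against every replica.

```python
import sqlite3
import uuid

# Three in-memory SQLite databases stand in for the three regional servers.
replicas = [sqlite3.connect(":memory:") for _ in range(3)]

for db in replicas:
    db.execute("CREATE TABLE customer (id TEXT PRIMARY KEY, name TEXT)")

def insert_customer(name: str) -> str:
    """Create the record once, then apply the same change to every replica."""
    record_id = str(uuid.uuid4())  # globally unique, no sequence clashes
    for db in replicas:
        db.execute("INSERT INTO customer (id, name) VALUES (?, ?)",
                   (record_id, name))
        db.commit()
    return record_id

rid = insert_customer("Ada")
# Every replica now holds the same row under the same UUID key.
```

In a real deployment the replay step would go over the network and be queued/retried rather than done inline, which is why the poster says the servers converge "within a few minutes" rather than instantly.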
If you do it right, you should have no problem duplicating servers later and scaling up to six.
MySQL and PostgreSQL have replication (“clone”) features, so anything that happens in one database automatically happens in the other “connected” databases. You should not have to write to multiple databases from your software directly, or try to determine which database to connect to… That’s bad coding. I use this method for websites with lots of users…
1 database for reads, 1 database for writes… 1 database for backup (unreachable to anyone but me)… 1 or more databases for spill-over load-balancing… All perfect clones of one another.
It’s honestly bad practice to use one database for both reading and writing (doesn’t stop anyone, though)… Facebook, Myspace… large networks all use this method of self-cloning databases for scalability… Some even employ “graph databases” to speed up data retrieval further…
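The read/write split described above can be sketched at the application layer like this. It is only an illustration (Python/SQLite, invented `SplitRouter` name): in a real setup the two connections would point at a primary and a replica that MySQL or PostgreSQL replication keeps in sync, so the application itself never writes to both.

```python
import sqlite3

class SplitRouter:
    """Route modifying statements to the writer, everything else to the reader."""
    def __init__(self, write_conn, read_conn):
        self.write_conn = write_conn
        self.read_conn = read_conn

    def execute(self, sql, params=()):
        # Crude classification by the leading SQL keyword.
        verb = sql.lstrip().split(None, 1)[0].upper()
        is_write = verb in ("INSERT", "UPDATE", "DELETE", "CREATE", "DROP", "REPLACE")
        conn = self.write_conn if is_write else self.read_conn
        cur = conn.execute(sql, params)
        if is_write:
            conn.commit()
        return cur

# Demo only: both roles share one connection so reads can see the writes.
# In production, replication copies the data from writer to reader.
conn = sqlite3.connect(":memory:")
router = SplitRouter(write_conn=conn, read_conn=conn)
router.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
router.execute("INSERT INTO t (v) VALUES (?)", ("hello",))
row = router.execute("SELECT v FROM t").fetchone()
```

One caveat worth knowing: with asynchronous replication, a read issued immediately after a write may not see it yet (replication lag), which is why sessions that need read-your-own-writes are sometimes pinned to the writer.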
I’m also using multiple servers spread across the globe and using DNS to point to the closest server in the same way as Christian described.
The databases are SQLite, as I had a lot of issues with MySQL back when I used OS X Server, so I have a replication service built in Xojo to keep each database in sync. The primary keys are UUIDs.
EDIT: I forgot to mention the other main reason I haven’t used auto sync - some countries have specific laws about where the data can and can’t be replicated to, so our sync engine and app ensure that these rules are followed and data is fetched from the ‘local’ server when needed.
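A residency-aware sync pass like the one described might look something like this sketch. Everything here is illustrative, not the poster's actual engine: Python/SQLite instead of Xojo, and the table names (`record`), region names, and `residency_rules` mapping are invented to show the idea of only replicating a row to regions its country's rules allow.

```python
import sqlite3
import uuid

# Which regions may hold a copy of each country's data (example rules only).
residency_rules = {
    "DE": {"eu-west"},               # e.g. German data must stay in the EU
    "US": {"us-east", "eu-west"},    # e.g. US data may replicate to both
}

# In-memory SQLite databases stand in for the regional servers.
servers = {region: sqlite3.connect(":memory:")
           for region in ("us-east", "eu-west")}
for db in servers.values():
    db.execute("CREATE TABLE record (id TEXT PRIMARY KEY, country TEXT, payload TEXT)")

def replicate(country: str, payload: str) -> str:
    """Write the record only to the regions this country's rules allow."""
    rid = str(uuid.uuid4())
    for region in residency_rules[country]:
        db = servers[region]
        db.execute("INSERT INTO record VALUES (?, ?, ?)", (rid, country, payload))
        db.commit()
    return rid

de_id = replicate("DE", "invoice-1")   # lands on eu-west only
us_id = replicate("US", "invoice-2")   # lands on both regions
```

The matching fetch path would then query the ‘local’ server first and fall back to the region that is allowed to hold the row, as the poster describes.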
Google, AWS and Azure have explicit database services for that.
It’s called a read/write replica, and you can select the regions where the replicas are located. A load balancer in front of the database service then decides which data centre is used.
We use this within Europe with four different locations to keep the latency small, and it works like a charm.
While this is all good advice, if you were to pick a single location to host from, I think New York would be your best choice. Using a tool such as Ping Test - Simultaneously Ping From 10 Global Locations | KeyCDN Tools to get ping times from around the world gives nearly identical times from both San Francisco and London to a server in New York. Of course, the Eastern US fares much better, but that’s not surprising. Unfortunately Sydney’s ping times are very long, which is why spreading the work over servers in multiple locations is the ideal solution. You can’t pick just one location to serve the whole planet.
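Besides web ping tools, you can get a rough feel for latency from inside the application itself by timing a TCP connect to the database host. A minimal sketch (hostnames are placeholders; 4430 is assumed here as the cubeSQL default port):

```python
import socket
import time

def connect_latency_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Time a single TCP connect to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000.0

# Example: compare candidate hosting locations from each client region.
# for host in ("ny.example.com", "ams.example.com"):
#     print(host, connect_latency_ms(host, 4430))
```

A single connect is a noisy measurement (it includes the TCP handshake and any DNS caching effects), so in practice you would take the median of several samples per host.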
Because “Most of the user’s data is local to their site”, I think the use of read-only clones is overkill. All that is central is the User, Help and Licence Key tables.
As these tables are used mainly at login, I wanted to make the login process as fast as possible. Once logged in, the other 99% of the data is either local, or I can make it as local to their country as they require.
For my needs, I think that New York or the Netherlands would be best until it gets too popular!
Quite possible, but that’s a big task and probably beyond my capabilities.
This would need to be raised as an internal project with the Snowflake engineering team; it’s not something I am authorised to do.
Hi Beatrix - Interesting comment.
The ODBC driver supplied by Snowflake is pretty comprehensive.
From what I understand, the only way to use the driver in Xojo is via the ODBCDatabase class, which stops you from using many features of the Snowflake ODBC driver. I’m no expert on ODBC drivers, but it feels like there needs to be some sort of bridge / interface between the Snowflake ODBC driver and Xojo’s ODBCDatabase class.
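For comparison, here is a hedged sketch of how a generic ODBC bridge outside Xojo handles this: driver-specific Snowflake options such as `Warehouse` and `Role` travel inside the connection string rather than through dedicated API calls, which is roughly the channel a plain ODBCDatabase-style wrapper would have available. The account, user, and warehouse values below are placeholders, and the driver name and keywords are assumptions based on the Snowflake ODBC driver's documented connection parameters.

```python
# Builds a Snowflake-style ODBC connection string; no driver is needed to run this.
def snowflake_odbc_conn_str(account: str, user: str, pwd: str,
                            warehouse: str, role: str) -> str:
    parts = {
        "Driver": "{SnowflakeDSIIDriver}",  # name the Snowflake ODBC driver registers
        "Server": f"{account}.snowflakecomputing.com",
        "uid": user,
        "pwd": pwd,
        "Warehouse": warehouse,   # driver-specific keyword
        "Role": role,             # driver-specific keyword
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

conn_str = snowflake_odbc_conn_str("myacct", "me", "secret", "COMPUTE_WH", "ANALYST")
# With a bridge such as pyodbc (and the driver installed) this would connect:
# import pyodbc
# conn = pyodbc.connect(conn_str)
```

Features that need calls beyond a connection string and plain SQL (e.g. driver-specific attributes set through the ODBC C API) are exactly the ones a thin wrapper class can’t reach, which matches the limitation described above.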