Best way to implement PostgreSQL

Hi all,

I am making an app, a customer database (it's more than that, but the customer database is where I am at the moment).
I am using a PostgreSQL DB to store customer info in one single table. This DB will be accessed over the internet by multiple users.
I have an app with a search box and a list box.
At start it loads the entire table into a record set and displays it in the listbox.
Then you can search by typing in the search box.
I have a trigger in the text changed event of the search box that puts the cursor on the nearest match in the listbox. Works great, but if new records are added on another machine they do not display, as the record set has not been updated (and reloading the entire customer table over the internet would take considerable time with 2,000-3,000+ customer entries).

  1. Is there a way to update the record set with any changes to the table without having to reload the entire table?
    Then I could call an update at the beginning of the text changed event of the search box.

  2. My list box will always be 10 rows in height. So what if I loaded the first 20 items at startup, then in the text changed event queried the database for the search and loaded the next 20 rows and the 10 previous rows (giving the 10 to display, plus 10 before and 10 after as a buffer)? The reason for this is the user can give the listbox focus and use the up or down arrow to scroll through it. Each time the user hits the up or down arrow I could query the database for the current record -11 or +21 to keep the buffer topped up, so scrolling would seem natural without huge load times. Would this work, and has anyone done this or have example code?

  3. Any other thoughts?
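The buffered window in point 2 could be sketched roughly like this. This is Python with an in-memory SQLite table standing in for the real PostgreSQL one, purely so the example runs anywhere; the LIMIT/OFFSET SQL is the same in PostgreSQL, and all table/column names here are made up for illustration:

```python
import sqlite3

# Demo table standing in for the real customer table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, surname TEXT)")
conn.executemany("INSERT INTO customers (surname) VALUES (?)",
                 [("surname%03d" % n,) for n in range(200)])

def load_window(prefix, before=10, after=20):
    # Position (in sorted order) of the first row at or after the typed text.
    (pos,) = conn.execute(
        "SELECT COUNT(*) FROM customers WHERE surname < ?", (prefix,)
    ).fetchone()
    start = max(pos - before, 0)
    # Fetch only the window: 10 rows before the match plus 20 from it onward,
    # instead of the whole table.
    rows = conn.execute(
        "SELECT surname FROM customers ORDER BY surname LIMIT ? OFFSET ?",
        (before + after, start),
    ).fetchall()
    return [r[0] for r in rows]

window = load_window("surname100")
```

On an arrow key press you would call `load_window` again with the new current record, so the user only ever pays for ~30 rows per round trip.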

Thanks heaps

Populate the listbox in the search field changed event. Select your customer items with a SELECT statement using the LIKE keyword in SQL.

Not sure that would be very quick, however. For example, say they enter a surname in the search box (smith, for example): they start by typing 's', and the DB would be queried for all customers containing an 's' in the surname, or, if I match only the start of the surname, it would still query for all customers whose surname starts with 's'. That could be hundreds of rows, each with 40+ fields. Loading 4,000+ records would take considerable time, I would imagine.

Something I probably should have mentioned is that I am in Australia; our internet speeds are not impressive by international standards. For example, I have a fast connection (for Oz): my ping is 33 ms and my download speed tops out at 1.94 Mbps, but for the first 4 seconds of the test I was only achieving 0.56 Mbps, with an average of 1.77 Mbps.

Am I overestimating the time it would take to load these records? It needs to be as seamless as possible. I don't think I would be happy with a wait time of over 1 second for the listbox to repopulate after the query.

Also, my upload was a laughable 0.14 Mbps.

Whilst I am not an internet specialist, I have written a lot of network database apps, and I can confirm that the speed is linked to the SQL being executed. Normally I don't let the search SQL start until the user has typed at least three letters, which returns far fewer records. Each subsequent letter narrows the results further.

Provided you have optimised your database and have an index set up on the search field (surname in this case), the results will be very fast.
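For what it's worth, the index only helps a LIKE search if the pattern is anchored at the start (no leading %). In PostgreSQL that might look something like this (table and column names assumed from the discussion):

```sql
-- text_pattern_ops lets a b-tree index serve LIKE 'prefix%' queries
-- even when the database locale is not C.
CREATE INDEX idx_customers_surname ON customers (surname text_pattern_ops);

-- Anchored prefix search: can use the index. A pattern like '%smi%' cannot.
SELECT *
FROM customers
WHERE surname LIKE 'smi%'
ORDER BY surname
LIMIT 30;
```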

The only thing I can say is try it!

Thanks for your help. I have tried something similar, but it was slow.
Not sure if it is my slow speed (probably not, as it was only around 100 records of 3 fields each) or if the free Heroku PostgreSQL DB I was using was the issue. Heroku gives free DBs to developers with limited size and uptime restrictions. They don't mention slow speeds, but maybe that is a limitation on it.
Will try with another PostgreSQL DB.
Does anyone know where I might find a free hosted PostgreSQL DB for testing?
It works fine running on localhost, but that's not much help.


I’d suggest adding a “lastUpdated” timestamp field to your table. Every time you query the database, cache the time at which you did. Every time after you first fill the list, you can then include a “WHERE lastUpdated > xxx” in your query, hopefully reducing the number of records in the recordset considerably.

That should work, and I gave it some thought, but I came across one snag that stopped me: 3 users on 3 different machines, maybe even in 3 different locations (time zones aside, as that could be worked out by making all users use a specific time zone for this task).
What if machine 1's clock is 5 seconds different from machine 2's? Can we be certain all machines have synchronised times?
The only thing I could think of is if there were some SQL statement that got the time from the DB server; then all times could be compared against that.
Does anyone know if there is such an SQL statement to get the date/time from the server?

Yes; I was thinking the same thing. This is from a page on the site:

SELECT CURRENT_TIMESTAMP;
Result: 2001-12-23 14:39:53.662522-05

Great, will try it out.

[quote=61434:@Andrew Willyan]I am making an app, customer database (more than that but the customer database is where i am at at the moment).
I am using a postgre DB to store customer info in one single table. This DB will be accessed over the internet by multiple users.[/quote]

Can you go into a little more detail as to how the connection to the DB will work? Are you planning on using a VPN of some sort, or on poking a hole in the firewall so the DB is exposed to the world?

The DB will be exposed. Connection via SSL.

Apparently there is a command to include in an SQL statement that can return the transaction end time. This would be perfect and not require a separate statement.
Will let you know how it goes.

I don’t think you would need a separate SELECT statement.

Given a persistent property in the application/module/object/window called mLastUpdateTS, and a column named lastUpdated in the database table which gets set to the current timestamp at the time of an INSERT or UPDATE, I think you could do something like:

myRS = db.SQLSelect("SELECT foo, bar, CURRENT_TIMESTAMP FROM tblMyTable WHERE lastUpdated > '" + mLastUpdateTS + "'")
if myRS <> Nil then
  // do whatever you need to with the recordset here,
  // then cache the value of CURRENT_TIMESTAMP for use the next time you check the database
  mLastUpdateTS = myRS.Field("CURRENT_TIMESTAMP").StringValue
end if

thus keeping it in a single SELECT.

I’d actually look in to using PostgreSQL notifications
Then whenever a row is inserted / updated /deleted etc you could send a notification that other clients will just get & you don’t have to poll for new records etc

Looked up the notifications and it is exactly what I was looking for; I just wasn't sure what to call it.

To my understanding, I create a channel (I assume I can do this from Xojo code) and then, at the listbox's creation, I issue a LISTEN command specifying the channel.
Then I will get a notification when a record is added/changed/deleted, and in the notification I can set a payload string, which could be the primary key of the table, so I would know exactly which record to verify/update.
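On the database side, that is usually wired up with a trigger function that calls pg_notify, passing the row's key as the payload. A sketch, with assumed table and channel names:

```sql
-- Notify listeners on channel 'mytablechanges' with the changed row's id.
CREATE OR REPLACE FUNCTION notify_customer_change() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('mytablechanges', NEW.id::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Fire after inserts and updates. A DELETE trigger would need to send
-- OLD.id instead, since NEW is not available for deletes.
CREATE TRIGGER customer_change
AFTER INSERT OR UPDATE ON customers
FOR EACH ROW EXECUTE PROCEDURE notify_customer_change();
```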

Not sure how I would handle receiving the notification, though. Would I need to create an event? Is there an event handler built into Xojo for this?


I am using this notification approach heavily in my app and it works just great. Upon connect, in a class that encapsulates a PostgreSQL connection subclass, I do this:

me.mConnection.Listen(Lowercase(listenhandle)) // listenhandle might be "myTableChanges"
me.mNotificationChecker = new DelegatingTimer
me.mNotificationChecker.Period = 1000 // every second - it only takes about 17 ms
me.mNotificationChecker.Action = AddressOf me.CheckForNotifications
me.mNotificationChecker.Mode = Timer.ModeMultiple

where me.CheckForNotifications is

#pragma DisableAutoWaitCursor // to suppress flickering
#pragma DisableBackgroundTasks
#pragma DisableBoundsChecking

me.mConnection.CheckForNotifications // that's fast - takes 17 microseconds on average

In your PostgreSQL connection subclass you can then handle the ReceivedNotification event to your heart's content.

Thanks everyone so far who has had input to this discussion,

My personal preference so far is to implement notifications, but I do have one concern.

My databases in total can be up to 200+ MB; keeping all this data in RAM would make my app a resource hog, would it not?

It would make my app very fast and great to use, but storing all the data in a recordset seems almost irresponsible. Not to mention the initial load time is huge (for Australia at least; it would take at least 10 minutes to download 200 MB worth of data).

I am looking to implement my 2nd thought: load an initial 20 rows at startup, then at every text changed event load a total of 30 rows (10 to display, 10 previous, 10 post). I think this would lessen the download burden to an acceptable level.

The notification method would work well for localhost, and maybe overseas where you guys are getting insane download/upload speeds (my download speed is an average of 1.77 Mbps and my upload is a crazy 0.14 Mbps).

Please correct me if I am wrong; I have done no real-world tests, just basic calculations.
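For what it's worth, the back-of-envelope numbers support this: at the average speed quoted above, the full table is minutes of transfer, while a 30-row window (assuming, as a rough guess, ~40 fields of ~50 bytes each per row) is well under a second:

```python
# Rough download-time estimates at the link speed quoted above.
# 1 megabyte = 8 megabits; link speeds are in megabits per second.
link_mbps = 1.77                      # average download speed quoted above

full_table_mb = 200                   # whole database, worst case
full_table_secs = full_table_mb * 8 / link_mbps

# A 30-row window: ~40 fields of ~50 bytes each per row (an assumption).
window_bytes = 30 * 40 * 50
window_secs = window_bytes * 8 / (link_mbps * 1_000_000)

print(round(full_table_secs / 60, 1))  # minutes for the full table
print(round(window_secs, 3))           # seconds for one 30-row window
```

These numbers ignore query latency and protocol overhead, so the window fetch will be a little slower in practice, but the two cases are orders of magnitude apart either way.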