I am building a web application in Xojo and have run into some database query performance problems. The application connects to a PostgreSQL database and retrieves a substantial amount of data. Lately the queries have been taking longer to complete, which is affecting the application's overall performance.
Here are a few details about my setup:
Xojo version: 2023r1.1
Database: PostgreSQL 13
Database connection: Using Xojo's built-in PostgreSQLDatabase class
Server specs: 4 CPUs, 16GB RAM
I have experimented with adding indexes and examining the database tables to try to speed up the queries, but the results have only improved slightly. Any advice or best practices specific to Xojo would be very much appreciated.
I also checked this thread: https://forum.xojo.com/t/xojo-web-app-postgres-database-connection-question but did not find a solution there. Could anyone point me toward the best approach? Has anyone else experienced something similar? Which techniques or resources have you found useful for optimizing database interactions in Xojo web applications? I would be grateful for any guidance on query optimization, connection handling, or overall performance improvements.
Partitioning your database may improve its performance. I ran into the same problem before and, with some professional help, managed to improve my PostgreSQL database's performance via partitioning.
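A minimal sketch of declarative range partitioning (available from PostgreSQL 10 onward); the table and column names are invented for illustration:

```sql
-- Hypothetical example: split a large "orders" table by year.
CREATE TABLE orders (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    created_at timestamptz NOT NULL,
    total      numeric
) PARTITION BY RANGE (created_at);

CREATE TABLE orders_2023 PARTITION OF orders
    FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');

CREATE TABLE orders_2024 PARTITION OF orders
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

-- Queries that filter on the partition key can then skip
-- irrelevant partitions ("partition pruning"):
SELECT count(*) FROM orders
WHERE created_at >= '2024-01-01' AND created_at < '2024-02-01';
```

Whether pruning actually helps depends on your queries filtering on the partition key, so verify with EXPLAIN first.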
Find out which SQL calls are the slowest. Do an analysis of these commands in pgAdmin.
Test the speed of these SQL calls through pgAdmin versus from Xojo and see whether they're the same. If Xojo is slower by a lot, find out why (e.g. is your manipulation of the retrieved data slow, or are you trying to display too many records?)
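One way to find the slowest statements server-side, assuming the pg_stat_statements extension is enabled in postgresql.conf (a sketch, not the only approach):

```sql
-- Requires shared_preload_libraries = 'pg_stat_statements', then:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 statements by average execution time
-- (column names as of PostgreSQL 13):
SELECT query,
       calls,
       mean_exec_time,
       total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```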
Are you using a LIKE '%…%' pattern or converting to lowercase for comparisons? Both will slow things down greatly as the database gets large.
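If those patterns can't be avoided, indexes can still help: an expression index for the lowercase case, and a trigram index for contains-style searches (table and column names here are illustrative):

```sql
-- Speeds up: WHERE lower(name) = lower('Smith')
CREATE INDEX idx_customers_name_lower ON customers (lower(name));

-- A plain b-tree index cannot help LIKE '%smith%'.
-- The pg_trgm extension provides trigram indexes that can:
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_customers_name_trgm
    ON customers USING gin (name gin_trgm_ops);

-- This query can now use the trigram index:
SELECT id, name FROM customers WHERE name ILIKE '%smith%';
```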
Are you storing data as a String when it could be an Integer, e.g. isPensioner = 'yes' versus isPensioner = 1? Don't index a Boolean field; only index fields with a large number of distinct values.
David's advice is spot on. There is usually not much going on that is Xojo-specific; slow queries are just slow, and Xojo can't do anything about that. And before doing any partitioning, verify that it would help; usually proper indexing is all that's needed. Learn how to read the output of EXPLAIN ANALYZE MyQuery to understand how the query planner in Postgres will gather your data (PostgreSQL: Documentation: 16: EXPLAIN). I often head over to https://explain.depesz.com/ for help understanding its output. Good luck!
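Reading the planner output might look something like this (the query is illustrative; note that EXPLAIN ANALYZE actually executes the statement, so be careful with writes):

```sql
-- ANALYZE runs the query and reports real timings;
-- BUFFERS adds I/O statistics.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, name
FROM customers
WHERE created_at >= '2024-01-01';
-- In the output, watch for "Seq Scan" on large tables where you
-- expected an "Index Scan", and for big mismatches between the
-- planner's estimated row counts and the actual ones.
```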
Set indexes for the fields you filter on. Filter data by date range or status value, too.
Select only the columns you need, not just *
Use minimal joins to other tables
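Putting those three tips together, a sketch (table, column, and index names are invented for illustration):

```sql
-- Index matching the filter columns:
CREATE INDEX idx_orders_status_created ON orders (status, created_at);

-- Select only the needed columns, filter by status and date range,
-- and avoid joins that the result doesn't actually need:
SELECT id, customer_id, total
FROM orders
WHERE status = 'open'
  AND created_at >= '2024-01-01'
  AND created_at <  '2024-02-01';
```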
In what way should setting an index on a boolean field slow down the database?
Apart from that, I know of some situations that gain a huge speed advantage from an index on a boolean field. In those cases, setting such an index makes perfect sense.
You give little to no information that is relevant to narrowing down the problem.
Basically, my advice would relate to a high-performance system (millions of records, > 100,000 queries/second).
However, I don't suspect a typical Xojo web application sees that kind of load on a daily basis.
With small applications, the performance problem usually lies in the server configuration, the missing indexes, the database layout and, above all, poor query design. Without any information about what you are doing there and what your database looks like, I can therefore give you practically no concrete advice.
My source might be out of date, but I remember from a CS course at uni that adding too many indexes, or indexing fields with few distinct values, can slow the database down because it has to spend too much time searching or updating the indexes. As the database becomes large, scanning an index over just the values 0 and 1 can become much slower than letting other, more selective indexes do the grunt work and then comparing the remaining Boolean values.
I think if you have 1 million records split 50/50 between 0 and 1, the Boolean index still has to return the 500,000 records matching the 0 or 1 internally for further comparison, slowing down the search.
From practical experience I can tell you that with large tables (> 1 million rows) an index on a boolean field can bring an extreme speed boost.
Basically, it is always a question of design. With databases, I usually assume that there are significantly more read actions than write actions.
An additional index can of course have an effect when writing, but this is much less relevant today.
Basically, it is also relevant whether I have a 50:50 distribution or 1:99 of true/false.
But as always (especially with web apps): reading must be as fast as possible (even 5 ms less per query is very important here). If an index brings me a benefit, I use it and pay little attention to disadvantages when writing. If it brings me no advantage when reading (e.g. because I never select by this field anyway), then I don't set the index.
In the case of a boolean field, you can test this very easily in each individual situation. However, I would definitely not advise against it in general.
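For the skewed (e.g. 1:99) case, a partial index is often the cheapest thing to try: it indexes only the rows with the rare value, so it stays small and barely affects writes to the common case (names are illustrative):

```sql
-- Only rows with is_active = false are stored in the index:
CREATE INDEX idx_users_inactive ON users (id) WHERE is_active = false;

-- This query can use the small partial index instead of
-- scanning the whole table:
SELECT id FROM users WHERE is_active = false;
```

Compare EXPLAIN ANALYZE output with and without the index to see whether it pays off for your actual distribution.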