With just a simple primary key lookup, the time spent on reparsing and replanning on the server side for a non-cached prepared statement should not be too significant. So the difference here is mainly the communication overhead - this explains the vast difference between local and remote. With complex statements, however, reusing the same prepared statement can also significantly reduce the server's processing time. (Though this may lead to an anomaly after 5 runs, when the server may switch to a generic plan, see PostgreSQL: Documentation: 14: PREPARE )
For topics around PostgreSQLPreparedStatement please see also <https://xojo.com/issue/37805>. Especially important to know: the server-side statements do not get deallocated once the Xojo PreparedStatement goes out of scope.
So, yes - generally you should build a system to maintain a pool of already prepared statements and reuse them if you can.
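Such a pool can be sketched in a few lines. This is Python, not Xojo, and the connection object with its `prepare` method is a stand-in I made up for illustration - the real database API will differ. The `close` method also deallocates explicitly, since (as noted above) the server won't do that for you when the client-side wrapper goes out of scope:

```python
class PreparedStatementPool:
    """Cache prepared statements by their SQL text and reuse them.

    `conn` is any object with a prepare(sql) method returning a
    statement handle with a deallocate() method - a hypothetical
    stand-in for the real database API.
    """

    def __init__(self, conn):
        self.conn = conn
        self._cache = {}

    def get(self, sql):
        # Prepare each distinct SQL text only once, then reuse it.
        stmt = self._cache.get(sql)
        if stmt is None:
            stmt = self.conn.prepare(sql)
            self._cache[sql] = stmt
        return stmt

    def close(self):
        # Explicitly deallocate: the server will NOT free the
        # statements just because the client wrapper went away.
        for stmt in self._cache.values():
            stmt.deallocate()
        self._cache.clear()
```

Keying the cache on the SQL text means the second request for the same query pays neither the network round trip for PREPARE nor the server-side parse/plan cost.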
The example Joost chose ("SELECT * FROM tbl_rel WHERE id = $1") is a bit unfortunate, as the planner (which is usually insanely fast already) has practically nothing to do. Unless you have really complex statements, caching the statements is hardly worth the trouble (with simple statements the planner usually takes less than 1% of the entire processing time). I wouldn't use a prepared statement at all for a query like "SELECT * FROM tbl_rel WHERE id = $1" (unless $1 is somehow a user-entered string). Remember, the main problems prepared statements solve are protection against SQL injection and the quoting of strings.
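To illustrate the injection point, here is a small sketch in Python using the built-in sqlite3 module, so it runs without a server; with PostgreSQL the placeholder would be $1 instead of ?, but the principle is identical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Dangerous: string formatting lets a crafted input break out of
# the intended string literal.
evil = "x' OR 'a'='a"
rows = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % evil).fetchall()
# The injected OR clause matches every row, so 'alice' leaks out.

# Safe: the driver passes the value as a parameter; the input stays
# a plain string and no manual quoting is needed.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (evil,)).fetchall()
# No row is named "x' OR 'a'='a", so the result is empty.
```

The parameterized version never interprets the input as SQL, which is exactly the protection a prepared statement buys you regardless of any caching benefit.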
I took a very simple statement for testing since I know the benefit of reusing prepared statements with remote databases is mostly the communication. As you can see in the graph, it saves me 30% of the time, and if you have lots of connections it might also save you some server processing power.
With a local database it also saves you 30% of the time, although there it hardly makes a difference since everything is already ultra fast.