PostgreSQL slows system down to a standstill - help please

Hello all,

Not sure if this is the right place to post this…

I am using PostgreSQL 9.4 on a Raspberry Pi 3B+ and a Pi 4B (4GB). Both have a 128GB microSD card. I have also tried a 128GB USB flash drive, and both Raspbian Jessie and the latest release.

It appears that as the database grows from saving transactions, performance drops dramatically, to the point where the computer slows to a crawl and can crash - as if it is out of memory or something.

There are 3 tables that save transaction data:
Transactions 12776 rows
PMS transations 4215 rows
Access 4158 rows

I found that if I delete these, everything starts working properly and fast again.

I tried to access the database location, but even as root it would not let me in, citing permissions - that location appears to be owned by bluetooth! How that happened I have no idea…

Can anyone suggest what to do to fix this?

Would it be advisable to move the database to another location? If so how to do that? Even create a new database, of the same name, in a different location?

Thank you,
Tim

performance drops dramatically, to the point where the computer slows to a crawl and can crash

Log into your Pi, open the task manager, and check whether CPU and RAM usage are at their limit.
PostgreSQL 12.x comes with the pgAdmin tool for database management (at least in the setup on a PC).
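From an SSH session, a quick command-line check works too. A sketch, assuming a standard Raspbian shell:

```shell
# Snapshot of memory, disk, and load - standard Linux tools on Raspbian.
free -h     # RAM and swap usage
df -h       # disk usage per filesystem
uptime      # load averages as a rough CPU-pressure indicator
```

Watching `free -h` while the app runs would show whether the Pi is actually exhausting RAM and dipping into swap, which on an SD card is extremely slow.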

I would never run a database on a USB stick… even the fastest one.
There are tricks now to boot a Pi from an SSD; you should try that.


Before deleting the records, the CPU is at about 50% and memory usage approaches the limit. After deleting the records, it is back to normal. Usage seems to climb as the number of records climbs.

I do use pgAdmin II on the PC and the Pi. That’s how I was able to find the number of records.

It does not make sense, to me at least, why RAM and CPU use would climb as the number of records climbs. I’m not reading these records at all, only adding to them as usage continues.
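For what it’s worth, row counts alone don’t show the on-disk footprint. A query along these lines (a sketch, runnable in psql or pgAdmin’s query tool) lists the largest tables including their indexes, which may reveal whether those three tables are really the space hogs:

```sql
-- Largest tables by total on-disk size (table + indexes + TOAST).
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```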

Tim

Might you be starting a transaction but not doing a commit? Try an explicit BEGIN TRANSACTION ahead of your writes, then an explicit COMMIT after each set of related writes.
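As a concrete illustration of that pattern - this sketch uses SQLite so it runs anywhere self-contained, but the explicit BEGIN/COMMIT batching is the same idea against Postgres:

```python
# Minimal sketch of explicit transaction batching.
# SQLite stands in for Postgres here so the example is self-contained;
# the BEGIN/COMMIT pattern is what matters.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # turn off implicit transaction handling
cur = conn.cursor()
cur.execute("CREATE TABLE transactions (id INTEGER, payload TEXT)")

cur.execute("BEGIN")                 # one transaction for the whole batch
for i in range(100):
    cur.execute("INSERT INTO transactions VALUES (?, ?)", (i, "data"))
cur.execute("COMMIT")                # a single commit instead of 100

print(cur.execute("SELECT COUNT(*) FROM transactions").fetchone()[0])  # 100
```

Committing once per batch instead of once per row avoids a disk flush on every insert, which matters a lot on SD-card storage.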

I’m not a fan of using a USB stick for a database either. And unlike SQLite, Postgres has a more complicated collection of files. You don’t just rename or move a single file or check permissions on a single file.
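If the database does need to move (e.g. onto an SSD), the usual Debian/Raspbian route is to relocate the whole data directory and repoint the server at it, rather than moving individual files. A rough sketch, assuming the stock postgresql-9.4 package paths - verify yours first with `SHOW data_directory;` in psql:

```shell
# Stop the server before touching the data directory.
sudo systemctl stop postgresql

# Copy the entire cluster directory, preserving ownership and permissions.
sudo rsync -a /var/lib/postgresql/9.4/main /mnt/ssd/postgresql/9.4/

# Point data_directory in /etc/postgresql/9.4/main/postgresql.conf at the
# new location, e.g.:
#   data_directory = '/mnt/ssd/postgresql/9.4/main'

sudo systemctl start postgresql
```

Keep the old directory in place until the server starts cleanly from the new location.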

Yes, each time I am doing a begin transaction and a commit. The data is there for sure, so that is not the problem.

Any other ideas?

Have a look for the latest PostgreSQL 9.x release, or updating to 12.x may solve your problem.
Or make a test Xojo app that does database-related things quickly and see
what happens on the Pi 3.

I would never run a database on a USB stick… even the fastest one.
There are tricks now to boot a Pi from an SSD; you should try that.

I would definitely try this first.