Hmmm, uncompress. That is surely tied to some mix of the contents, the local cache, and the transfer and compression algorithms. If it isn't down to local temporary cache space (including memory) or permissions on the user's side, then there's probably a bug somewhere in the algorithms handling the I/O of large (and compressed) contents. Sometimes they need a specific "stream" to trigger them.
Could you clone that DB, write some garbage over the sensitive data, and try again? Two things could happen: 1. Sadly, the failure vanishes because the data changed. Or 2. The problem still shows, but now you can send a sample for replication.
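Something along these lines, for example, assuming an SQLite-style engine; the table and column names are only placeholders for wherever the sensitive data actually lives:

```sql
-- Run this on a COPY of the database, never the original.
-- Overwrite sensitive columns with garbage of roughly the same size,
-- so row counts and content sizes stay close to the failing case.
UPDATE customers
SET    name  = 'user_' || id,
       email = 'user_' || id || '@example.invalid',
       notes = substr(hex(randomblob(length(notes) / 2 + 1)), 1, length(notes));
```

If the failure survives the scrub, the copy should be safe to hand over for replication.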
It would be difficult to test as it might be load related: if I start the server early in the morning before people start connecting to it, it can run fine until mid-to-late afternoon, then completely break and require a restart, so six hours of testing might be needed. It's a very busy time of year for this particular client; I've noticed, on occasion, 150+ users connected at any one time.
Good idea. Or, extract only the rows that would match the failing SELECT (but without the LIMIT or OFFSET), write them to another DB, and see if the problem persists. If so, you could then divide the smaller dataset into halves and repeat until - who knows - maybe you get to the one row that causes the problem. Or you'd have a smaller dataset to try @Rick_Araujo's additional experiment on.
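A rough sketch of that extraction, again assuming SQLite-style SQL; the table name and WHERE clause are made up and stand in for the real failing query:

```sql
-- Attach a fresh database file and copy only the rows the failing
-- SELECT would match, leaving off the LIMIT/OFFSET part.
ATTACH DATABASE 'subset.db' AS sub;

CREATE TABLE sub.orders AS
SELECT *
FROM   orders
WHERE  status = 'open';        -- same WHERE clause as the failing query

-- Then bisect the subset: test one half, then the other, and keep
-- halving whichever range still reproduces the problem.
SELECT *
FROM   sub.orders
WHERE  rowid <= (SELECT max(rowid) / 2 FROM sub.orders);
```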
I’ll also point out that Valentina Studio PRO ($199) includes database diagramming features (we also just updated it, so if you have a mouse-wheel / third-button mouse, you get more happiness while diagramming).