Nil RecordSet returned but no DB.Error

I recently saw a SQL Server query that a user had created that ran fine from SSMS but failed when executed within my Xojo-written query library. It turned out she had (accidentally) joined to another table, and the SELECT * failed because there were many (possibly hundreds) of columns in that table (a commercial ERP program). I never looked too deeply into the problem; I just specified the exact columns she required and all was well. I had never seen that before. Weird.

We could know why (or not) if you followed my advice to check for a possible exception message by making an API2 sample simulating your condition.

The why is: “Because it is possible, and sometimes obvious”.

That said, for those unable to understand why it may be obvious: when an exception happens, it means that something unexpected happened, which in the current context means an ERROR. And in such a context, 0 is just one code, sometimes ignorable and not needing a meaning, because the meaning is carried by the exception itself; or sometimes it may have a meaning, like CatastrophicException 1 = black hole and 0 = atoms collided, with messages “black hole will engulf the system” and “atoms collided, atomic explosion imminent”.

That said again: as API1 is deprecated and API2 is advancing, old functions will start to use new API2 code behind the scenes before being removed outright, and with that they may inherit side effects, such as receiving a 0 error code (the default exception error code), causing a “regression” in API1 code.

In the meantime I’ve been able to create a comparable test table — not exactly the same data, but close enough: 14 million records. It fails to load. If I LIMIT the results to 12 million, it loads.
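That is, something like (the table name is just a placeholder):

SELECT * FROM test LIMIT 12000000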

I did this, and it appears to load just fine (and also much faster), but I can’t display the results, because the project is API1. So you’ll tell me I should just switch to API2.0, but that returns a RowSet object instead of a RecordSet object. The project uses RecordSets with Kem Tekinay’s DOD listbox which requires the RecordSet object and does not work with RowSets.

This is a bit frustrating.

Perhaps chat up Kem?

For the record, I believe Aaron’s referring to @Kem_Tekinay’s data-on-demand listbox (which is over 12 years old!?!) from the “listbox with 30000 records” thread.

I’d think this could be adjusted to use RowSet (I haven’t looked at it yet). You can use RowSet with older desktop controls (and it appears you have access to a relatively recent version of Xojo?)

That’s correct. This is a long-term project which began in 2012 and we’ve gone through a lot of changes since the beginning (remember 32-bit memory limits?) but the DOD Listbox has been working fine throughout.

Yes, IME the differences are small.

a) It’s called a RowSet rather than a RecordSet

b) You use ExecuteSQL rather than SQLExecute, and similarly SelectSQL rather than SQLSelect

c) and:

Var myInt As Integer = rs.Field("mycol").IntegerValue

becomes

Var myInt As Integer = rs.Column("mycol").IntegerValue

The major advantage of API2 database methods IMO is that you get protection against SQL injection attacks FOR FREE.
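If I understand it correctly, that protection comes from parameter binding in the new methods; a minimal sketch, with made-up table and values:

Var db As New SQLiteDatabase
db.Connect ' no DatabaseFile set, so this is an in-memory DB

db.ExecuteSQL("CREATE TABLE people (surname TEXT, city TEXT)")

' Values passed as parameters are bound to the ? placeholders as data,
' never spliced into the SQL string, so input like "O'Brien" (or worse)
' can't break out of the statement.
db.ExecuteSQL("INSERT INTO people (surname, city) VALUES (?, ?)", "O'Brien", "Boston")

Var rs As RowSet = db.SelectSQL("SELECT * FROM people WHERE city = ?", "Boston")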

I looked into the differences a while ago and there are some other gotchas, such as IdxField disappearing. The project is gigantic and I’ll do everything possible to avoid changing this much code, but perhaps it will be necessary.

@Aaron_Hunt - I didn’t realize this wasn’t an open source component.

Sounds like you might need to roll your own or see if @Kem_Tekinay could update it to use RowSet? (The data-on-demand listbox is old enough and complex enough that it wasn’t easy for me to just download it and see how it worked)

And 100% understood there may be other complexities vis-à-vis your project and how it uses it that may pose additional challenges which aren’t obvious to someone who hasn’t worked with the project.

But if the table has 14 million rows, why SELECT *?
You surely aren’t going to try to display 14 million rows in a table for the user to scroll through.

You need to be using some criteria, at least.
People who live in Boston, surnames starting with F… something to limit the result set?

And you probably don’t need all the fields. If your table has 40 fields and you only want surname and forename, why waste memory on shoe size and address, which you aren’t going to display?
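e.g. something like (the table name is made up):

SELECT surname, forename FROM people WHERE city = 'Boston' AND surname LIKE 'F%'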

Yes, we are. That’s what the DOD Listbox is for.

You’re making (normal) assumptions that simply don’t apply to this project (which isn’t normal).

In terms of the way they are used, there is not a lot of difference, but migrating from API1 to API2 is nowhere near as simple as you describe. It took me 5 hours to migrate only this aspect of the project to API2.

RecordSet → RowSet

  • Making this change resulted in almost 700 errors within the project. With each updated method and property, that number varied unpredictably.

Here are the other changes that had to be made. (Keep in mind that these are almost never simple search-and-replace operations, and in many cases custom objects with methods based on API1 have to be rewritten to accommodate the changes.)

SQLSelect → SelectSQL
.MoveFirst → .MoveToFirstRow
.EOF → .AfterLastRow
.BOF → .BeforeFirstRow
.Field → .Column
.MoveNext → .MoveToNextRow
.Edit → .EditRow
.Update → .SaveRow
.RecordCount → .RowCount
IdxField(1) → ColumnAt(0) // Careful! This is a one-based to zero-based conversion!
.DeleteRecord → .RemoveRow
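
To make those concrete, here is roughly how a typical read loop changes (a minimal sketch, not code from the actual project):

' API1
Dim rs As RecordSet = db.SQLSelect("SELECT * FROM test")
While Not rs.EOF
  Dim n As Integer = rs.Field("data").IntegerValue
  rs.MoveNext
Wend

' API2
Var rows As RowSet = db.SelectSQL("SELECT * FROM test")
While Not rows.AfterLastRow
  Var n As Integer = rows.Column("data").IntegerValue
  rows.MoveToNextRow
Wend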

SQLiteDatabase:
.TableSchema → .Tables
.InsertRecord → .AddRow

DatabaseRecord → DatabaseRow

  • This was a huge mess, because I had many methods and properties using the word “Record” which all now should use “Row”. To my mind this is a poor design choice because “Row” is used for controls whereas “Record” is obviously a database term, so all these methods lost their uniqueness and the potential for error by confusion/collision is far greater than before. That said, after everything has been changed over and starts working again, it doesn’t seem quite so terrible.

.BooleanColumn("name") → Column("name").BooleanValue
.IntegerColumn("name") → Column("name").IntegerValue
This change seems an obvious improvement.

DatabaseField → DatabaseColumn

  • At this point there were no errors and the project built, but it broke immediately with an unhandled DatabaseException, as expected.

Exception handling:

  • The new paradigm requires rewriting all the error handling. Exceptions must be handled whereas before they could simply be ignored. In cases where the error doesn’t matter, this is slightly annoying. All operations are now in try … catch blocks and for me that kind of code is harder to organise.
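
Every operation now ends up shaped roughly like this (a minimal sketch; the query is just a placeholder):

Try
  Var rows As RowSet = db.SelectSQL("SELECT * FROM test")
  ' ... use rows ...
Catch err As DatabaseException
  ' Under API1 you could just check db.Error (or ignore it);
  ' under API2 the exception must be caught or the app breaks.
  System.DebugLog("DB error " + err.ErrorNumber.ToString + ": " + err.Message)
End Try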

After all the error handling was updated, an hour was spent figuring out why, after making the change over to API2, it still did not work.

The most noticeable benefit we get from moving to API2 here is that exceptions are more accurate or at least more descriptive. Whereas API1 gave us Error 0: “not an error”, API2 gave us Error 0: “Unable to allocate memory requested for the operation”, which makes much more sense.

I had already looked for any PRAGMA settings in SQL which might cause this, and I asked ChatGPT for help. Nothing worked. On the forum, this same error in a similar context was already reported in 2020 here. And if you check the error tracker, I had also filed a bug report related to this same problem in 2019. The problem is this:

If the result of your query is 1GB or more in size, the Xojo SQLite plugin cannot return those results.

I don’t think Xojo is going to do anything about this, because for most users there are practical ways around the problem, like loading in chunks. By design, we can’t do that. So I tried the MBS SQL plugin, adapting it slightly to make it a “drop in” replacement for the Xojo object, and it works just fine. (So many thanks to Christian Schmitz for his work.)
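
(For reference, by “loading in chunks” I mean something like the following, for projects that can do it; the names are placeholders.)

Const kChunkSize = 100000
Var offset As Integer = 0
Do
  Var rows As RowSet = db.SelectSQL( _
    "SELECT * FROM test LIMIT ? OFFSET ?", kChunkSize, offset)
  If rows.AfterLastRow Then Exit Do ' past the last chunk
  ' ... hand this chunk to the UI, then let rows go out of scope ...
  offset = offset + kChunkSize
Loop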

That is an important limitation that should be in the docs. I wasn’t aware of it.

Would you like to make a small example that demonstrates this, and submit it with a new Issue? I would view it as a bug, if this is the case.

Hmmm. I’m not convinced.

I just took a mailbox from my app and exported the contents to a text file. I then re-imported that a number of times, until the size of the original mailbox (now with many dupes of each mail) was more than 1 GB. I then dropped the indexes and ran VACUUM on the mailbox from the sqlite3 CLI. This gave a database with a size of 1,191,010,304 bytes.

Since my app doesn’t do SELECT * from the database, I modified a test app to do that, and it reports the number of rows in the main table correctly as 26106. The other table has 1 row. The test app hits a break after querying the db but before losing the RowSet at the end of the event handler, so it goes to the debugger. At that point the test app is reported by iStat Menus to be using 1.14 GB of memory (Activity Monitor reports 1.09 GB), dropping to 45–50 MB after I resume from the break.

So a result set size of >1GB does not appear to be a problem.

It may be that 1GB is the wrong number. It’s somewhere around there. Here’s the bug report. The code is there, you can run it too and see that it throws ‘Unable to allocate memory requested for the operation.’

Issue 77828

Note: I edited the issue title to put a question mark next to 1GB(?) since it looks like that number might be wrong.

Ah… an in-memory DB!
I guess it is treated differently from an external one, where operations can be manipulated using chunks of memory and releasing them ASAP. An in-memory DB needs EVERYTHING in memory, and some limits must exist. Move the operation to disk and see if this limit still affects you.
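
That is, something like this (the path is just an example):

Var db As New SQLiteDatabase
' Leaving DatabaseFile as Nil gives the in-memory DB from the bug report;
' pointing it at a file makes the DB disk-based instead.
db.DatabaseFile = SpecialFolder.Desktop.Child("test.sqlite")
db.Connect ' assumes the file exists; create it first otherwise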

No, as written in the first comment in the report, a DB on disk behaves the same way. Our project uses disk-based DB. The in-memory DB is just easier for a bug report.

' From the bug report: db is a connected SQLiteDatabase, r is a DatabaseRow
' with a "data" column, and s is the sample string written to every row.
db.BeginTransaction
For j As Integer = 1 To 14000000
  r.Column("data") = s
  db.AddRow("test", r)
Next
db.CommitTransaction

You may have exceeded some limit for just one transaction.

No. No error is thrown there, and that isn’t the problem anyway. Maybe run the code before replying? It doesn’t matter how the table is created. The problem is loading the table. From disk or from RAM, it doesn’t matter; the error is the same. Create your large table some other way. Put it on a disk or make it in RAM – who cares, the result is the same: it doesn’t work.