I am experimenting with accelerating a listbox in an inherited project that currently uses ActiveRecord to handle the Postgres data. With a classical setup, it takes about 8.5 seconds on my Intel iMac to fill the listbox with about 10,000 rows.
When the usual approach of filling the listbox with empty rows and creating the text in CellTextPaint did not help much, I first suspected ActiveRecord's numerous Introspection calls to be the bottleneck. I tried my own caching system with JSONItem and JSONMBS, but that actually doubled the caching time. A Dictionary works much faster.
The fastest approach I found is to fill the dictionary with DatabaseColumn.NativeValues: the caching time went down from the 8.5 seconds mentioned above to 0.5 seconds. That works nicely for everything String-based, but not for numeric types. So I added a Select Case to the caching routine that writes Value instead of NativeValue into the dictionary where necessary, e.g. for Integers and DateTimes. That pushes the caching time back up to about 5.5 seconds, even though only 4 of the more than 50 columns need that special handling.
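The caching split described above can be sketched like this. This is a minimal Python illustration of the idea, not the project's actual Xojo/ActiveRecord code; the column types and the `decode_int8` helper are made up for the example:

```python
import struct

# Hypothetical set of column types that need eager conversion;
# everything else stays as raw NativeValue bytes in the cache.
SPECIAL = {"int8"}

def decode_int8(raw: bytes) -> int:
    # 8-byte big-endian signed integer (the slow path done up front)
    return struct.unpack(">q", raw)[0]

def cache_row(col_types, raw_values):
    """Keep raw bytes for most columns; convert only the special ones."""
    cached = {}
    for i, (ctype, raw) in enumerate(zip(col_types, raw_values)):
        cached[i] = decode_int8(raw) if ctype in SPECIAL else raw
    return cached
```

The point of the sketch is the asymmetry: the fast path is a plain assignment, and every eager conversion you can move out of this loop and into the cell event shrinks the caching time.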
So the fastest caching, IMHO, would be to use NativeValues for every column and then retrieve the real value in CellTextPaint (or the cell DataSource method of an NSTableView, as this is a Mac-only project).
But how? Is this even possible?
Thanks, Beatrix. Introspection is the way ActiveRecord handles its column types. And the conversion to its own classes takes quite some time, although that's not due to Introspection itself but due to the value conversions.
I need some caching for the NSTableViewMBS anyway to make the DataSource work, and a Dictionary of NativeValues turned out to be the fastest solution.
For Int64, I could figure out the conversion from a NativeValue String:

Public Function NativeValueToMB(extends s as String) As MemoryBlock
  var mb As MemoryBlock = s
  mb.Size = 8
  mb.LittleEndian = False ' the value arrives in big-endian (network) byte order
  Return mb
End Function

(and then use mb.Int64Value(0))
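For reference, the same decode expressed in Python, assuming (as the working Xojo code above suggests) that the NativeValue carries 8 bytes in big-endian byte order, which matches PostgreSQL's binary format for `int8`:

```python
import struct

def native_to_int64(raw: bytes) -> int:
    # ">q" = big-endian signed 64-bit integer.
    # Note: an int4 column would deliver only 4 bytes (">i") instead.
    return struct.unpack(">q", raw)[0]
```

A quick round trip against a known encoding confirms the byte order:

```python
raw = struct.pack(">q", -12345)
native_to_int64(raw)  # -12345
```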
But the way timestamps are encoded is still unclear to me. My attempt:

Public Function NativeValueToMDate(extends s as String) As DateTime
  var mb As MemoryBlock = s
  mb.LittleEndian = False
  var year As Integer = mb.Int16Value(0)
  var month As Integer = mb.Int16Value(2)
  var day As Integer = mb.Int16Value(4)
  Return New DateTime(year, month, day)
End Function
This is because of the different binary encoding, which is obviously done by the framework when you use a different value type, and which, when done during RowSet conversion, extends the caching time to about 1,100 %. So I would rather do the conversion in the cell datasource event. Or do you see a faster way that I am missing?
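If the NativeValue really is PostgreSQL's binary wire format (which the working big-endian Int64 decode suggests, though I can't confirm it for this plugin), then a binary `timestamp` is not year/month/day fields but a single signed Int64: microseconds since 2000-01-01 00:00:00, PostgreSQL's epoch, assuming the server's default `integer_datetimes = on`. A hedged Python sketch of that decoding:

```python
import struct
from datetime import datetime, timedelta

# PostgreSQL's timestamp epoch (not the Unix epoch).
PG_EPOCH = datetime(2000, 1, 1)

def native_to_datetime(raw: bytes) -> datetime:
    # Binary timestamp: 8-byte big-endian signed microsecond count
    # relative to 2000-01-01 00:00:00 (integer_datetimes = on).
    micros = struct.unpack(">q", raw)[0]
    return PG_EPOCH + timedelta(microseconds=micros)
```

A `timestamptz` uses the same layout with the value expressed in UTC, and a plain `date` is a 4-byte big-endian day count from the same epoch.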
That extends the caching time by about 9×. Obviously, reading the Variant value includes the conversion from NativeValue to the respective Variant type. That's why I want to do the native decoding in the datasource cell event.
Thank you, but this being a pure Mac project makes it possible to add some more Mac feeling, so I'd rather use ListboxTV, although scrolling is a bit odd: it pauses regularly for a short time. With conversion of the native Integer and Date values in the cell event and some optimisations on the Select, I can now load, buffer and display 10,000 records in about 0.2 seconds on an Intel iMac from 2014, in a debug build. I guess that will be even better on newer machines, and it's good enough for me to declare this question answered for now.