WebListbox Only Stores 64 Rows

Please run this project, which simply loads 95 rows into a listbox.

Using a scroll mouse, scroll the listbox slowly downwards, then back up.

What I see is that around row 64, you get zebra stripes as the last 30 items load. This isn’t ideal, but I can live with it.

However, when you scroll back up, around item 32 you get zebra stripes again, as if it’s re-loading the first 32 items.

The behavior is as if the listbox can only store 64 rows at a time?

In this day and age of multi-gigabyte computers and fast network connections, this seems rather limited.

I argue:

  • a listbox should load more than 64 rows at once
  • once loaded, these rows should not be flushed

What do you all think?

Sample project:

listbox95.xojo_binary_project.zip (8.2 KB)

2 Likes

Additional problem: when these 32 rows load (or re-load), the listbox scroll position is lost, and it jumps around to a different scroll position.

The behavior depends on how fast you scroll.

I think what’s happening is that the listbox doesn’t refresh the missing rows until you stop scrolling, which on macOS can take 0.5 seconds (or even longer) given the scroll acceleration/deceleration behavior.

It’s just very clunky.

Here’s a video showing a variety of scroll speeds:

1 Like

It varies depending on the visual size of the WebListbox.
I guess it’s part of the lazy loading.

Maybe @Ricardo_Cruz can make some adjustments to this.

This is how the underlying library works. It can definitely be tweaked a bit, but it doesn’t support that kind of caching.

  1. It waits ~200ms before sending the request to the server, to avoid sending too many requests at once. We can tweak this behavior.
  2. The library doesn’t do any caching at the moment, but even if they add support for it, it won’t be able to cache every row. The listbox could contain millions of rows.

Something I think we could do is disable lazy loading automatically if the row count isn’t too big (we’ll have to find a sweet spot for what “big” means in this case), or maybe add an option to disable it.
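For context, here is a minimal sketch of what those knobs look like at the DataTables/Scroller level, assuming direct access to the table initialisation (which the Xojo framework normally handles for you, so treat this as illustrative rather than a drop-in fix). The selector and endpoint are made up; the option names come from the Scroller defaults quoted later in this thread.

// Plain DataTables + Scroller initialisation, for illustration only.
// Assumes jQuery plus the DataTables and Scroller scripts are loaded on the page.
// serverWait is the ~200 ms debounce mentioned in point 1; leaving the
// scroller block out entirely (and using normal paging) is effectively
// what "disabling lazy loading" would mean for small tables.
$('#listbox').DataTable({
    serverSide: true,
    ajax: '/rows',          // hypothetical endpoint that returns row data
    scrollY: '400px',
    deferRender: true,
    scroller: {
        serverWait: 100     // shorten the debounce before requesting more rows
    }
});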

3 Likes

+1 to adding a property to disable it. Never try to guess every possible usage of a listbox!

2 Likes

Fun to read this conversation. If this had been web 1, the discussion would have been “I’m trying to load 10,000 rows and the browser just locks up. Give us lazy loading!!!”

11 Likes

What about 2 settings?
preload rows limit = 100
lazy load block = 80

The preload rows will be loaded at init and kept there. Above that, new blocks are loaded as needed.

A preload of 0 will just use lazy load.
A preload of 100000 will need some patience. :grinning:

Logically, if the number of rows the listbox can show on the screen (nrs) is bigger than the preload, the system will load nrs rows instead of the preload. The same goes for the lazy block.
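A small sketch of that rule, with illustrative names only (nrs = number of rows visible on screen; nothing here is an existing API):

// Proposed two-setting scheme, illustrative only.
function effectiveSettings(preloadRows, lazyBlock, nrs) {
    return {
        preload: Math.max(preloadRows, nrs), // never preload fewer rows than fit on screen
        block:   Math.max(lazyBlock, nrs)    // each lazily loaded block is at least one screenful
    };
}

// Example: preload = 100, lazy block = 80, but 120 rows fit on a tall monitor
// => the listbox preloads 120 rows and then fetches blocks of 120.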

1 Like

This is what I was thinking as well.

I kind of disagree with the idea of a switch to turn it off.

2 Likes

The problem is that the developer can’t know what the optimal settings will be at any given time for any given server. What’s “correct” will likely be determined by network conditions.

The issue Xojo has is that you can’t easily emulate what an internet connection will be like, nor is a dev likely to want to run in that mode all the time anyway. A lazy-load listbox solves that.

Not to mention… what’s the correct behavior if data on the back end is updated? If the list had 1000 rows added, does it clear the list and reload everything, one row at a time? Does it send just the rows that changed? How does it know what’s considered “changed”?

The thing that the Xojo framework can’t do is meet everyone’s expectations. It can do some but inevitably someone’s not going to be happy.

My position is that if you want a simple list that just displays a table of data, make a feature request. Bootstrap makes it very easy to build such a thing and I bet it would be trivial to add to the framework if it fits into Xojo’s roadmap.

6 Likes

Configs. Start with defaults that are good for most cases. Allow changes in a config panel afterwards.

Load the most recent data.

Nope, the entire new block.

It does not care.

What’s important is not reading, but writing. Whatever write system you design, it should abort on an inconsistent transaction, reload fresh data from the server, and start over. You should lock the update and have a fingerprint field that must match: if it matches, update the row, create a new unique fingerprint value, save it for that row, and unlock the row. If the fingerprints don’t match, raise an inconsistent-record exception and work on it.
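To make that concrete, here is a rough sketch of the fingerprint check described above, assuming a generic SQL backend and a hypothetical db.execute helper (the table, column and helper names are made up, not Xojo or DataTables API; the explicit row locking is omitted because the conditional UPDATE alone catches the mismatch):

// Optimistic update: only write if the row's fingerprint still matches the one
// we read earlier; otherwise someone else changed the row in the meantime.
const { randomUUID } = require('node:crypto');   // Node's built-in UUID generator

async function updateRow(db, id, newName, expectedFingerprint) {
    const newFingerprint = randomUUID();          // fresh fingerprint for the new version
    const result = await db.execute(
        'UPDATE items SET name = ?, fingerprint = ? WHERE id = ? AND fingerprint = ?',
        [newName, newFingerprint, id, expectedFingerprint]
    );
    if (result.rowsAffected === 0) {
        // Fingerprints did not match: reload the row from the server and start over.
        throw new Error('InconsistentRecordException');
    }
    return newFingerprint;                        // save this for the next update
}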

1 Like

But that’s my point. Everything you wrote there should be prefaced with “in my opinion”. All of the questions I wrote were questions that someone could answer differently depending on the circumstances. For instance…

In my opinion…

Not if the user is looking at row 10 and the data that changed was on a row several pages away. No refresh should happen in this case, and when the user scrolls to that point, they just get the correct row data. Now, if the data is on screen, I see two possibilities… if the user must have the latest-greatest data at all times, the rows that changed should update, but if the data is not time-sensitive, it could wait until it’s next looked at.
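For illustration, that policy could be sketched roughly like this (rowIsVisible, refreshRow and markStale are hypothetical helpers, not part of any real framework):

// Only push an update to the browser when the changed row is actually on screen.
function onRowChangedOnServer(rowIndex, timeSensitive) {
    if (!rowIsVisible(rowIndex)) {
        return;                  // off-screen: the user gets fresh data when they scroll there
    }
    if (timeSensitive) {
        refreshRow(rowIndex);    // on screen and time-sensitive: update immediately
    } else {
        markStale(rowIndex);     // on screen but not urgent: refresh the next time it's looked at
    }
}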

Again, my point is that this depends on the use-case and what the individual developer (or more likely their client) wants to happen.

Imagine if you had 200 users connected, all looking at the same table and making changes to the backend data, which could then cause refreshes on everyone else’s screens. Besides a huge CPU drain on both the server and all of the clients, I bet mayhem as well as general confusion and frustration would occur. It’s one thing to do this on a local network with 10-100Gbps bandwidth between the machines. It’s entirely different to do it at 10 to 1000 Mbps over the global internet.

3 Likes

Folks, the issue of “data changing” is interesting, but also somewhat off-topic from this.

I would like to draw discussion back to the main issue.

The only way the current 64-row limitation makes sense is:

  1. the internet is very, very slow (so the connection to the server is limited)

AND

  2. the browser has CPU or RAM limitations, and can’t handle storing more than 64 rows (perhaps 1 MB of data) in RAM

In 2025,
#1 is almost always false
#2 is always false

Average internet speeds in the USA are > 200 Mbps, and the average browser has several GB of RAM to use.

Edit to add:

  • It’s not that rows are loaded on demand in small batches that is the problem. That’s probably fine (though I would argue that “small batches” should be a bit bigger).

  • The problem is that no more than 64 rows are ever stored. Once row 65 is loaded, row 1 is deleted. That’s just… bad.

3 Likes

I try never to complain too much about Web-based listbox loading. Most of the stuff I did in Web 1.0 had a self-imposed limit of between 100 and 500 rows, and then I just added pagination. I could load about 500 rows with a respectable number of columns in just a second. Not enough time for the end user to get bored waiting.

Now for Desktop apps… I have to agree that with 64 billion bytes of fast RAM, you would think loading a 50 MB comma-delimited file would be a breeze. How many rows? Columns? Who cares… it’s like 1/10th of 1% of my RAM… is that too much to ask? Apparently so.

Mine is around 17 Mbps.
Using the average speed of a given location as a reference to change the whole behaviour wouldn’t make much sense.

2 Likes

That’s your opinion, and you know I’m OK with opinions. But I offered a solution and you are offering a “can’t do it.” Well, I can do it.

I agree: give the guy the tools, and the power to implement his own engines.

200 users watching a screen: nothing happens. 200 users browsing data would be an unfortunate event, but it works with the proper backend; a bit slow, but it works. You would probably change the preload to 0 and the lazy block to 40. And to have 200 users browsing that screen at the same time, you probably have more than 2000 users who coincidentally are in this part of the system, because most of the time they should be watching and managing some kind of form / panel.
But as you said, the designer of such a solution has a custom problem demanding a custom solution, and must have the tools to design one, including distributed readers and a single writer at the backend.

Give people the tools to handle their needs, and everyone will design their own solution by changing settings, implementing events and methods, or changing hardware…

In many cases his browsing content is limited, say 300 rows, and he may opt for a preload of 400 and a lazy block of 40.

He designs what he wants.

1 Like

Digging into the framework code, it looks like the algorithm is this:

// datatables.min.js

l.defaults = {
            boundaryScale: .5,
            displayBuffer: 9,
            loadingIndicator: !1,
            rowHeight: "auto",
            serverWait: 200

[...]


this.s.dt._iDisplayLength = this.s.viewportRows * this.s.displayBuffer

So the buffer size is not a fixed 64 rows, but rather 9 * the number of visible rows. With roughly seven rows visible, 9 * 7 ≈ 63, which would explain the ~64-row behavior observed earlier.

This should be fairly easy to hack by changing that value of “9” to something else.
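If you can reach the DataTables initialisation yourself (the Xojo framework normally does this for you, so treat this as illustrative rather than a drop-in fix), Scroller accepts the same option per table, which would avoid editing the minified file. The selector is made up; displayBuffer is the documented Scroller option shown in the defaults above.

// Raise the row buffer from the default 9 screenfuls of rows to 50.
// Assumes jQuery plus the DataTables and Scroller scripts are loaded on the page.
$('#listbox').DataTable({
    scrollY: '400px',
    scroller: {
        displayBuffer: 50
    }
});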

I’ve submitted a feature request if anyone wants to sign on:

https://tracker.xojo.com/xojoinc/xojo/-/issues/78771