Performance of Web 2.0 (data transfer rate)

Wow! Doing the exact same thing in Web 1.0 (2019 R3.2) gives me 4.4 MB/s with up to 4 simultaneous downloads. Almost 6 times faster.
Just wow, I have no words :confused:

Xojo team, if this is universal and not something specific to my setup, you really need to make it a top priority. This is serious.
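
In case anyone wants to reproduce the measurement, something along these lines is what I'm talking about (a rough sketch; the URL is a placeholder, not my actual setup):

    import time

    import requests  # pip install requests

    URL = "http://localhost:8080/api/download/test"  # placeholder endpoint

    start = time.monotonic()
    total = 0
    with requests.get(URL, stream=True) as resp:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=64 * 1024):
            total += len(chunk)
    elapsed = time.monotonic() - start
    print(f"{total / elapsed / 1_000_000:.2f} MB/s over {elapsed:.1f} s")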

1 Like

The main limiting factor is that Xojo lacks multiple threads, unlike
a full-fledged web server. As Xojo has said many times: don’t serve files using Xojo (beyond the normal resources of the web page), it is simply not made for that.

Web 2 really looks slower in many circumstances. Let’s hope they fix it instead of just denying it.

1 Like

Hi Ivan, thanks for your answer.

Unfortunately, it’s not that simple for me. I’m building something that serves binary content out of a database, not from the filesystem, and is accessible only via a REST API. In my case, I don’t care how fast webpages full of controls load, it’s not a web application.

This is why my test was focused on data throughput and not responsiveness of a web GUI.
Having said that, I would be happy with the 4 MB/s of Web 1.0, unless having 20 download sessions running not only slows it down but crashes the server entirely. Now, that would be a definite dealbreaker.

Since I’m not serving static files, but binary content that comes out of a data source, I can’t just use a web server that’s “made for that”. There is also a significant amount of logic between the data source and the web server, and writing the content to the filesystem so that something else can serve it is not really an option.
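
Just to illustrate the shape of what I’m describing (sketched in Python/Flask purely for illustration; the database, table and column names are made up):

    import sqlite3
    from flask import Flask, Response, abort  # pip install flask

    app = Flask(__name__)

    @app.route("/objects/<int:object_id>")
    def get_object(object_id):
        # Binary content comes straight out of the data source, never the filesystem.
        db = sqlite3.connect("content.db")
        row = db.execute(
            "SELECT payload FROM objects WHERE id = ?", (object_id,)
        ).fetchone()
        db.close()
        if row is None:
            abort(404)
        # The logic between the data source and the response would live here.
        return Response(row[0], mimetype="application/octet-stream")

    if __name__ == "__main__":
        app.run(port=8080)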

I wasn’t aware of this particular Xojo design guideline. I’ve never seen it anywhere in the documentation, and it’s not exactly a minor hint or remark.

I’d be willing to accept blame for not doing my homework right, but even at this time, I would really, really appreciate a Xojo employee confirming that Xojo is not what I ought to use for such an application, so I don’t waste my time building something that’s not going to work properly in practice.

1 Like

You can also create a Feedback case with sample code and post the link here.
You can contact Xojo directly: https://www.xojo.com/company/contact.php

“To be honest, transmitting any large files through a web app will be an issue though.”

Ivan, thank you for doing the research I should have done, I appreciate it! :slight_smile:

This is not necessarily bad news for me, though.
Greg is saying: be careful with your memory overhead, because so and so…
I’m perfectly fine with that. I expect my typical data object to be 10-20MB on average, I can afford to have them take up twice the memory space momentarily; the server isn’t going to be running on a Raspberry Pi :slightly_smiling_face:

I have 2 absolute dealbreakers:

  1. The server is ridiculously slow, right from the first and only download/upload. The 750 KB/s of Web 2.0 falls into the category of “ridiculously slow”; the 4.4 MB/s of Web 1.0 does not, at least by my standards (quick arithmetic below).

  2. The server becomes unstable as concurrent downloads/uploads increase, and then chokes to death.
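
To put those rates in perspective (taking a 15 MB object as a hypothetical example, in the middle of my 10-20 MB range): at 750 KB/s that is roughly 15 / 0.75 ≈ 20 seconds for a single download, while at 4.4 MB/s it is about 15 / 4.4 ≈ 3.4 seconds.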

I wouldn’t sweat it too much if the 4 MB/s became 2 MB/s when serving 20 concurrent requests: that’s what load balancing is for.

1 Like

I don’t know Xojo Web, but could you not write the data to disk in a location that is accessible by a web server and then return an HTTP 302 redirect to the URL?
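
Something like this, as a rough sketch (Python/Flask purely to illustrate the idea; the staging path and the fetch_from_database helper are made up):

    import os
    import uuid
    from flask import Flask, redirect  # pip install flask

    app = Flask(__name__)
    STAGING_DIR = "/var/www/downloads"  # location the web server serves directly

    @app.route("/objects/<int:object_id>")
    def get_object(object_id):
        data = fetch_from_database(object_id)  # hypothetical helper
        name = f"{uuid.uuid4()}.bin"
        with open(os.path.join(STAGING_DIR, name), "wb") as f:
            f.write(data)
        # Hand the actual transfer off to the web server.
        return redirect(f"/downloads/{name}", code=302)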

Thanks for your suggestion :slight_smile:

I could, but it is something I’d like to avoid, for three reasons:

  1. It’s more complicated. Simplicity is something I really value, and I don’t want my solution to depend on another web server that might or might not be listening at the time of the request.

  2. I don’t want to involve the filesystem in this. Consider a read in such a design: data is retrieved from the data source, written to disk, sent to the client, then deleted. I’d be wearing out a non-volatile storage medium every time someone wants to read something, and that’s not acceptable to me. Of course, I could set up a RAM drive for that, but I consider that too intrusive towards my customer’s system: telling them, “hey, I’m going to steal some of your RAM and create one more filesystem on your server, just to do something that shouldn’t be done this way in the first place; otherwise, I’m going to make your hard drive die sooner”. As an (aspiring) commercial product designer (and not a hacker), finding such workarounds isn’t a measure of how smart I am; it’s a measure of how inadequate I am at doing a simple job in a simple, straightforward way :slight_smile:

  3. It’s also a matter of security: one of the selling points of my system is storage of content that even your system administrator cannot casually access. That claim becomes a bit harder to defend if that content appears unencrypted on the filesystem, even momentarily.

Personally, I would use a web server to serve the files, as they have been designed specifically for this purpose and will nearly always be more efficient than a web app (I do agree that Xojo Web appears to have an abysmal transfer rate, though). This will also free up the web app session sooner, rather than it being held open until the file transfer has completed.

Is having the data on disk for a short period of time that much of a risk? A cron job could delete files older than 1 minute which should give the client time to start the download. On Linux this should delete files even while they are being downloaded - not sure about MS-Windows.
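
As a rough sketch of that cleanup (the staging directory is made up):

    import os
    import time

    STAGING_DIR = "/var/www/downloads"  # made-up staging directory
    MAX_AGE_SECONDS = 60                # "older than 1 minute"

    now = time.time()
    for entry in os.scandir(STAGING_DIR):
        if entry.is_file() and now - entry.stat().st_mtime > MAX_AGE_SECONDS:
            os.unlink(entry.path)  # on Linux this should work even mid-download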

Kevin, what you’re suggesting is 100% valid within what I’d call a “hacker mentality”. There’s nothing wrong with that, there are contexts in which it is highly desirable, even lifesaving :slight_smile:

But in my case, what I’d like to go out and say to a potential customer is:
“Take this One application, configure the absolute minimum (like the port it’s going to be listening to), fire it up and it’s going to be doing everything that needs to be done for the service it promises to provide: You don’t need to worry about anything else.”
So, that means: no third-party web servers, no guessing of download completions, no cron jobs to remove files that an auditor could call a liability being there.

The key point I’m trying to make is that, I’m not aiming for the technically optimal. I know that I’m not going to get that with Xojo. I know that Xojo is not going to give me stellar performance in anything.
What I’m trying to figure out here, is whether Xojo will be good enough, without being forced to violate the design principles I’ve laid out for this product.
The political takes precedence over the technical in this discussion :slight_smile:

Don’t take it as criticism to what you suggested. I appreciate the time you put to solve my issue :slight_smile:

If you plan on posting file data to your API, I would recommend that you test it. Recently, we created an API for internal use with Web 2 using HandleURL. Everything was working great until we started posting data to the API in the 2 MB to 10 MB range. We found that as the size increases, the posts begin to fail. It started around 5 MB, I believe. I can’t remember exactly, but I think the issue was that we were just not getting a response back… our posts were timing out. Our posts to the API were made using a desktop app, and we tried URLConnection as well as CurlMBS. Our Xojo Web 2 API app is running on a separate PC on our local network.
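
Something along these lines is enough to see the behavior (a rough sketch; the URL is a placeholder, not our actual endpoint):

    import requests  # pip install requests

    URL = "http://192.168.1.50:8080/api/upload"  # placeholder endpoint

    for size_mb in (1, 2, 5, 8, 10):
        payload = b"x" * (size_mb * 1024 * 1024)
        try:
            resp = requests.post(URL, data=payload, timeout=60)
            print(f"{size_mb} MB -> HTTP {resp.status_code}")
        except requests.exceptions.RequestException as exc:
            print(f"{size_mb} MB -> failed: {exc}")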

Since this is an internal app, we had the option to change the functionality so that files are uploaded via SFTP. Then we post to the API and reference the file that was uploaded.

We reported this issue before and it was fixed. This time we did not take the time to figure out what was happening and report it again. We just decided it was better to upload files separately.

1 Like

Brandon, many thanks for warning me of this kind of behavior!
If this is true, then Web 2.0 hits both of my dealbreakers: unacceptably slow and unreliable.

I will do some more automated stress testing on both Web 2.0 and 1.0.
But even if Web 1.0 proves fast enough AND relatively reliable, why would I base a new project on an obsolete, unsupported framework, without any guarantees that its successor will improve in the foreseeable future?
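
For that stress testing, I’m thinking of something along these lines (a rough sketch; the URL and session count are placeholders):

    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests  # pip install requests

    URL = "http://localhost:8080/api/download/test"  # placeholder endpoint
    CONCURRENT = 20

    def download(_):
        total = 0
        with requests.get(URL, stream=True) as resp:
            resp.raise_for_status()
            for chunk in resp.iter_content(chunk_size=64 * 1024):
                total += len(chunk)
        return total

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=CONCURRENT) as pool:
        sizes = list(pool.map(download, range(CONCURRENT)))
    elapsed = time.monotonic() - start
    print(f"aggregate: {sum(sizes) / elapsed / 1_000_000:.2f} MB/s over {CONCURRENT} downloads")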

Xojo is definitely not the perfect technology for creating APIs.

Reasons:

  • Security is not so easy to achieve (e.g. JWT)
  • Load balancing is hard
  • Performance can be, or can become, poor
  • Testing and API documentation are a headache

While Xojo does a solid job for creating regular applications, developing a web service or a web API can be more convenient in other languages.

See node.js with Express or PHP with pRESTige. Both are simple to learn and to use, for sure more stable, and the performance is much better.

There was a Xojo project called „Aloe“, whose purpose was to provide an out-of-the-box web service. This was one of the most mature attempts to build a stable, secure and easy-to-use web interface using Xojo.

However, the developer eventually decided to rewrite it completely in PHP due to Xojo bugs, performance issues and inconsistencies.

1 Like

Lars, thanks for taking the time to point these out.
Yes, you’re absolutely right: Xojo is not the perfect technology for building web APIs.
Given that doing it with node.js or Golang (I don’t think I’d go into PHP) is a significant investment in time and effort in my situation, I’m interested in establishing whether Xojo would be good enough.

As far as I’m concerned, the main issue here is performance, which seems to have taken a nose-dive moving from Web 1.0 to 2.0. Reliability is another issue that I -mistakenly- took for granted. I’ll have to look into that too.
I’m not legally or morally bound to use Xojo, but I’d strongly prefer to :slight_smile:

I’m really intrigued by your comment on load balancing, because that’s really important too.
Why is a Xojo server harder to load-balance (using nginx or haproxy) than node.js in this sort of application?

And yes, I know Aloe, as well as AloeXWS. I’m considering using the latter for the project. It was initially made for Web 1.0 but porting it to Web 2.0 seemed really trivial to me.

If you already know Nginx and HAProxy, then it’ll be easier for you to load-balance your application. But because you have to execute a Xojo program, you’ll always have to keep at least one instance running. If your app crashes, you’ll have to handle that, also after server restarts and such. And then there is the current memory leak in Web 2, which blows up your application’s memory usage, even without users.
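
To be clear, the nginx side itself is the easy part. A minimal upstream sketch (ports are made up) looks the same whether the backends are Xojo instances or Node processes; the extra work is in keeping those Xojo processes alive:

    upstream xojo_app {
        server 127.0.0.1:9001;
        server 127.0.0.1:9002;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://xojo_app;
        }
    }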

If you have an API with, for example, Node or another scripting language that is interpreted, then this is definitely easier. The whole update process also becomes easier.

Also, documentation, for example with Swagger, is much easier in Node or Golang, since you can annotate your code right away and the documents are generated from that. This is a huge time saver, which you should keep in mind.

There are many other advantages. I have been developing REST APIs for 10 years now, and I am experienced in Xojo. So I am pretty sure you can develop a stable and fast API in Xojo, but it is neither faster nor easier than using another tech stack.

But other languages are built exactly for this use case, unlike Xojo, which is not built solely for this purpose.

My 2ct

2 Likes

I would not be so generous with the word “stable” as discussed in this thread.

4 Likes

PHP is fast and stable.

2 Likes

I think it’s missing a lot of “crucial” parts, especially in terms of stability and capability (upload, download, REST API handling, etc.). It’s even leaking memory (2020r2.1): just check Runtime.MemoryUsage and you’ll see it grow while nothing is being done (no sessions, no webpages) other than App.HandleURL being called…

3 Likes

It’s also horrendous to program.

2 Likes