Add the WebSocket protocol to WE apps

[quote=169331:@olivier vidal]Currently, the browser regularly sends a request to the server to see if there is anything to retrieve. It’s expensive. “502 Bad Gateway” errors also regularly appear in the browser console. (Even if they have no impact, they worry customers.)[/quote]


The “502 Bad Gateway” error is not a Xojo/browser error - it’s returned by a proxy or load balancer. It means that, for whatever reason, the underlying Xojo app did not respond within the proxy’s timeout period. Depending on what you are using as a proxy, you may need to adjust that timeout. This can happen in CGI mode too, since Apache has a CGI timeout of its own.

Also keep in mind that, regardless of whether you use traditional HTTP or WS, if the Xojo app is busy doing something then it’s not responding to your request. The key is to run more apps behind a balancer, not to tweak the specifics of the protocol. The other key is to keep long-running tasks out of your web app: delegate to an accessory console app, run a background queue, etc.

In high usage environments you want your web app only responding to requests, not performing any long running tasks.

WS will help with the chatter, though, and make the app generally more responsive - especially on mobile - assuming the app CAN respond because it’s not busy with other users/tasks.

Thank you Phillip!

Yes, I know. But usually people run the CGI app or sit behind a proxy, so it happens often. The customer sees it, and he sees that it does not happen with other solutions (WebSockets). I do not know whether we can set a huge timeout without consequences.

It would still be good to optimize the framework as much as possible before asking users to build an over-complicated contraption. After that, indeed, we can use external tools.

Notwithstanding, many people will be surprised to realize that a Xojo web app can only manage a few users (in a fairly intensive application), and that if they want to manage more, they need to set up a system (load balancing, helper apps…) which will ultimately be more complex than a Node.js setup.
Yet people usually come here for simplicity without high cost.

Node.js has similar issues - if you make your Node app as chatty as a Xojo app and do long running tasks you will run into similar limitations.

I know that you are right and that these tools (load balancing, helper apps …) help enormously. But I think Xojo can already improve the availability of its product.

Imagine a restaurant with 10 bartenders. Without WebSockets, there will be many requests to the server, even if only one or two bartenders are recording an invoice at any given moment. There will also probably be many threads on the server working to receive those requests.

Yes I understand your point - and you are right - Xojo could do better here.

However, they are currently limited because a Xojo app is only single-core aware. That means it can literally only process one request at a time. It does not matter whether a request comes over HTTP or WS: if it takes 5 seconds to process, that’s 5 seconds during which you aren’t responding to someone else. Hence the gateway timeout.
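The cost of serialized, one-at-a-time processing is easy to see with a quick sketch (the request durations below are hypothetical, and Python is used here only as a stand-in for illustration):

```python
# Illustration of the point above: when requests are processed one at a
# time, a single slow request delays every request queued behind it.
# All numbers are made up for the example.

def completion_times(durations):
    """Return the time at which each queued request finishes,
    assuming strictly serial (single-core) processing."""
    finished, now = [], 0.0
    for d in durations:
        now += d
        finished.append(round(now, 2))
    return finished

# One 5-second request followed by nine 50 ms requests:
print(completion_times([5.0] + [0.05] * 9))
# the last fast request completes after ~5.45 s instead of ~0.45 s
```

If that 5.45 s exceeds the proxy’s timeout, the client behind the slow request sees a 502 even though its own request was cheap.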

There are performance implications to turning up the timeout. However, if your responses are typically fast, you will never hit the timeout. The timeout is being hit because either you have too many users on that process or you have long-running tasks clogging the system.

Node is also only single core aware but they have a technology that allows it to spin up sub-processes and communicate/balance between them. It would be nice if Xojo could load balance itself. A built in controller that merely passes requests back/forth to sub-processes that do all the work.

Are you sure about that? From what I could see, where Xojo can handle 10 users, Node.js can handle a hundred, and easily. It’s very impressive. Friends use it, and they increase the number of users without needing load balancing, helper apps, etc.

(Excuse the delay in replying; writing in English is not easy for me.)

You have to compare apples to apples. Not every Node app has a giant JavaScript framework communicating a ton of events like “MouseMove”. If you capture events like that, you increase the chatter by a ridiculous amount. Also, the V8 engine is very optimized, so the same code in JavaScript/Node is likely going to run faster than in Xojo, thereby cutting down response time. The issue is still present, though - it’s just that very few Node apps have the same interactivity as a Xojo app.

That being said, if your server/setup can’t handle the Xojo interactivity and performance degrades, then it’s a moot point and ultimately bad.

BTW your English is very readable and I never considered a language barrier of any kind.

Back to my restaurant example, with a quick calculation:

  • 10 bartenders
  • Peak hours: 1 action every 6 seconds
  • Off-peak hours: 1 action every minute
  • On average, the current framework (without WebSockets) sends one request every 3 seconds, even if the user does nothing

Without WebSockets:

  • Peak hours: 18,000 requests per hour
  • Off-peak hours: 6,000 requests per hour

With WebSockets:

  • Peak hours: 6,000 requests per hour
  • Off-peak hours: 600 requests per hour
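As a sanity check, the arithmetic behind these figures can be reproduced (assuming each poll and each user action is exactly one request). Note that if the 3-second poll really runs regardless of activity, the off-peak total without WebSockets works out to 12,600 rather than 6,000 - when users are idle, the polling dominates:

```python
# Reproducing the restaurant estimate above. Assumptions taken from the
# example: 10 users; without WebSockets the framework polls every
# 3 seconds regardless of activity; each action or poll is one request.
USERS = 10
HOUR = 3600  # seconds

def requests_per_hour(action_every_s, poll_every_s=None):
    actions = USERS * HOUR // action_every_s
    polls = USERS * HOUR // poll_every_s if poll_every_s else 0
    return actions + polls

print(requests_per_hour(6, 3))   # without WS, peak:     18000
print(requests_per_hour(60, 3))  # without WS, off-peak: 12600 (polling dominates)
print(requests_per_hour(6))      # with WS, peak:         6000
print(requests_per_hour(60))     # with WS, off-peak:      600
```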

This difference is important because, server side, I guess many threads are maintained to process those requests. The CPU is busier, it has less time to respond to other requests, and that causes slowdowns and fragility. I think. In fact, I hope that is the case - it would mean we can expect a good optimization once Xojo supports WebSockets :slight_smile:

Let’s talk a little bit about what WebSockets actually get you. When we make regular HTTP requests to a server, each request has a ~272-byte header. The allure of WebSockets is that you have a single HTTP request and then every message after that doesn’t need to renegotiate the connection, whereas the header on a WebSocket message is somewhere between 6 and 14 bytes.
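Using those figures, the header overhead saved is easy to estimate (the message count below is a made-up example, not a measurement):

```python
# Header overhead for N messages, using the figures above:
# ~272 bytes of HTTP headers per request vs at most 14 bytes per
# WebSocket frame (it can be as low as 6). Message count is hypothetical.
HTTP_HEADER_BYTES = 272
WS_FRAME_HEADER_BYTES = 14  # worst case

messages = 6_000  # e.g. one busy hour
http_overhead = messages * HTTP_HEADER_BYTES    # 1,632,000 bytes (~1.6 MB)
ws_overhead = messages * WS_FRAME_HEADER_BYTES  #    84,000 bytes (~84 KB)
print(http_overhead, ws_overhead)
```

So the saving on headers is real but modest - which is the point made next: WebSockets don’t change the fact that the payload still has to cross the internet.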

But it doesn’t really make push more efficient. When we had the early draft version of WebSockets working, people were trying to do animations with WebCanvas. Just like everything else, the packets still have to traverse the internet.

If the WebSocket tunnel goes down for some reason, we would immediately try to re-establish the connection. If that fails, we would try to connect via Ajax. If that still doesn’t work, it’s a reasonable assumption that the server is gone and that, at the very least, the session would no longer be synced up.

Don’t worry, if you’ve got a buggy proxy out there, it’ll cause problems with your websockets too. :slight_smile:

We handled ~100 simultaneous users at XDC last year with a web app, without load balancing.

Greg thank you for all that information.

Yes, a small advantage of WebSockets is saving headers. But the main advantage is that the server can send data to the browser without a prior request from the browser - unless something escapes me. If the browser needs to send data to the server, it does. If the server needs to send data to the browser, it does. Otherwise, no request is made. There is no need to poll. Anyway, this is how frameworks like meteor.js seem to work.

For example, imagine an application that displays the results of a football match in real time, with 500 people connected. With Ajax, the 500 browsers need to ask the server every 10 seconds (for example) whether there has been another goal. With WebSockets, whenever there is a goal, the server sends the information to the 500 browsers; the rest of the time, no request is made. So if the match lasts 90 minutes and there are 3 goals:

  • Ajax: 500 * 6 * 90 = 270,000 requests
  • WebSockets: 500 * 3 = 1,500 requests

Is that not how it works?

When you have a push socket in place, you don’t reconnect every 10 seconds. You hold that connection open as long as the browser will allow (typically about 3 minutes). When the socket timeout occurs or data is received, you make a new connection, but remember even with WebSockets 500 simultaneous users still means 500 simultaneous sockets, each with their own memory and CPU overhead. I seriously doubt that you could get a standalone app to host that many users without load balancing anyway.

Anyway, Ajax with a held push socket would be more like this: 500 * (90 / 3) = 15,000 reconnects, plus 1,500 (3 * 500) pushed responses = 16,500 requests.
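The three estimates from the match example can be put side by side (the 3-minute hold time is the browser socket timeout mentioned above):

```python
# Request counts for a 90-minute match, 500 browsers, 3 goals,
# under the three strategies discussed in this thread.
USERS, MINUTES, GOALS = 500, 90, 3

# Short polling: one request per browser every 10 seconds.
short_polling = USERS * (MINUTES * 60 // 10)           # 270,000

# Long polling / push socket: the connection is held ~3 minutes and
# then reopened, plus one response per goal per browser.
long_polling = USERS * (MINUTES // 3) + USERS * GOALS  # 16,500

# WebSockets: one pushed frame per goal per browser
# (ignoring the single initial handshake per user).
websockets = USERS * GOALS                             # 1,500

print(short_polling, long_polling, websockets)
```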

WebSockets also do not reduce the number of threads. Currently, a request comes in, the request gets processed in a thread, a response is created, the response is sent to the browser. If we were using WebSockets, that portion of the framework would behave just the same.

This conversation made me realize I had a problem: the push socket of my app was reconnecting every 3 seconds! I recompiled the app and now it looks good. I feel reassured; the application will be significantly less greedy :-)

It had been a while since I last compiled the application on the server; maybe it was a beta problem or something else. Now it reconnects every 30 seconds, but that is my proxy timeout. I’ll increase it to 3 minutes.

Thank you Greg and Phillip!

Here, for example: . But yes, Node.js works completely differently. It would be interesting to run a similar test with Xojo, for example with the WebSDK, or even just with HandleSpecialURL (so that Xojo is at its most optimal). The page provides the JavaScript source code of the test.

Wow, I hope he notified Rackspace before doing that. Stress testing like that is a violation of their EULA. :stuck_out_tongue:

Also, keep in mind that this experiment of his would not have been cheap.

Rackspace: $405/mo, Bandwidth $0.12/GB
Amazon: Free if you use micro instances

I’d bet the test ran him $500.

Also worth noting: our ServerSocket would not handle that many connections without load balancing, and even so, without multicore processing, I doubt you’d even get close.


Do you have estimates of ServerSocket’s maximum connections? I know it depends on each connection’s workload, but rough estimates would help. I am developing a REST app with ServerSocket and will need about 50-100 concurrent connections with a very low workload on each. I am not using the Web edition for this - just a console app, ServerSocket, and a REST architecture.

One of my priorities is to use native code and not PHP or scripting languages other than XojoScript. I’m getting close to my first milestone and, with only one dev working on this, I would like to avoid having to write a stress-test application. I have a nice base of code and will appreciate any info you can share.


There are a few factors to consider:

  1. CPU speed. The faster it is, the more you’ll be able to handle simultaneously
  2. Memory. Each socket requires a certain amount of RAM, and as you probably know, ServerSocket keeps a bunch of them in reserve all the time to make sure a client can connect quickly.
  3. Your code. I suggest spinning up a thread each time a request comes in so that one client can’t block another. Make sure you are calling Self.Sleep from within those threads (especially in tight loops) so they can operate cooperatively.
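As a sketch of that thread-per-request pattern, here it is in Python rather than Xojo, purely to show the shape (the port and canned response are hypothetical; in Xojo you would spin up a Thread subclass per request instead):

```python
# Thread-per-connection sketch (a Python stand-in for the advice above:
# handle each incoming connection on its own thread so one slow client
# cannot block the others). Port and response body are made up.
import socket
import threading

def handle(conn):
    with conn:
        conn.recv(1024)  # read (and here, ignore) the request
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")

def serve(port=8765):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(128)  # pending-connection backlog
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

One difference to keep in mind: Xojo threads are cooperative, which is exactly why the advice above says to call Self.Sleep in tight loops; Python threads are preempted by the interpreter, so this sketch needs no explicit yield.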

You’ll also want to tune the ServerSocket itself. MaximumSocketsConnected will need to be a tad higher than your expected simultaneous connections. MinimumSocketsAvailable helps determine how many sockets are waiting and ready to handle an incoming connection. If you expect to have a lot of clients connecting at the same time, you may want to increase this value so they don’t have to wait while a new socket is allocated. It’s a bit of a balancing act.

To give you an example, with the web framework, the out-of-the-box configuration has the minimum set to 20 and the maximum set to 200. Our tests showed that you could have 50-70 simultaneous users like that. That’s because browsers like to open more than one connection to a server at a time, usually somewhere between 2 and 4.

I’d like to suggest that you consider using a web app, though. Now that you can use just about any URL with the new HandleURL event, you may find that we’ve done all of the heavy lifting for you… And without any pages to serve, it’s pretty much just a glorified console app. The best part is that if you need to add an admin interface sometime in the future, you could potentially put it in the same app.

FWIW, Travis will be talking about how to build a web service using the web framework at XDC this year.