Load balancer / Spillover

I was very interested in the conversation sparked in a previous thread, have-people-tested-the-xojo-web-s-performance; however, I figured it would be best to start a new thread as the discussion has moved away from the OP’s question.

Specifically, in Damon’s comments below:

[quote=91666:@damon pillinger]Spillover sounds like a good idea, I just gave it a try.

You basically add 1 to the app.Port
If the current app is “full”, you then forward the user to the next port, and repeat until you find a server with space.
Works nicely: when a user finishes on the first server, the next user will be put onto the first server. This would allow you to use your best computer as the first server and then lesser machines as you go down the path.

The only problem I found was that if a user is being bounced 3 or 4 times, it is quite a wait before they are shown a login screen or web page, but there is no need for a separate program; you build it all into the Session.Open event.

Thanks
Damon[/quote]

In a spillover-type configuration, I have questions about running multiple web apps on the same server, or on different servers, using different ports for the various nodes that users could be redirected to.

My question is: how are you redirecting the user to the various ports? Is this a simple page redirect, or some other method? Also, if other ports are to be used, does that mean they have to be open on the firewall too, or does the first app act as a pass-through?

Thank you very much for your input!

-rich

You could start with a very simple script that checks a database for the current number of concurrent connections and, if its maximum has been reached, redirects the user to the same web app on another port. There are various methods, including Apache setups, that will do this for you automatically.

Say, for instance, I visit site.com and the max has been reached: a Perl/PHP/Ruby/(Xojo) script can generate an HTML redirect, and the browser will automatically be redirected.

Html redirect

So basically a two-layered application exists: one layer is visited to accept and monitor traffic, then forwards it correctly to the appropriate application port.
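
For illustration, here is a minimal sketch of that front layer written as a Xojo web app instead of a Perl/PHP script, using the Web 1.x App.HandleSpecialURL event. PickNodeURL is a hypothetical helper that consults whatever counters you keep, so treat this as a sketch of the HTML-redirect idea rather than a finished manager:

[code]
// App.HandleSpecialURL on the front "traffic" app (a sketch).
// A request to http://site.com/special/go gets back a tiny HTML page
// whose meta refresh sends the browser on to a node with room.
// PickNodeURL() is a hypothetical helper that consults your counters.

Function HandleSpecialURL(Request As WebRequest) As Boolean
  If Request.Path = "go" Then
    Dim target As String = PickNodeURL() // e.g. "http://site.com:8081"
    Request.Print("<html><head>" _
      + "<meta http-equiv=""refresh"" content=""0; url=" + target + """>" _
      + "</head><body>Redirecting...</body></html>")
    Return True // request handled here, no Xojo session needed
  End If
End Function
[/code]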

How your firewall is set up on the server will determine whether or not the port needs to be open. Some Apache setups will automatically allow a connection to a port if the referrer is the server itself and deny “direct access” connection attempts. If you are unsure how your server operates, contact the administrator or hire someone to make sure your server is properly set up and configured for security. You don’t want someone tearing down all your hard work, or compromising private/proprietary/confidential information in a potentially damaging way.

I also read the post with interest and actually thought that it would not be difficult to write a Xojo web app that is, in effect, a redirect manager: it decides which port to redirect the user to based on counters in a database, as Matthew has suggested. This way the servers could exist on different machines as well as on the same machine on different ports. The manager app could even be tweaked to allow you to set an instance as unavailable, so you can stop it, etc.
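
For what it’s worth, a sketch of what that manager’s Session.Open could look like with the classic (pre-API 2.0) database classes. The nodes table, its columns, and the SQLite file name are all made up for illustration; the available flag is what would let you drain an instance before stopping it:

[code]
// Session.Open of the manager app (a sketch, classic framework).
// Hypothetical table: nodes(url TEXT, priority INTEGER, max_sessions INTEGER,
//   current_sessions INTEGER, available INTEGER)

Dim db As New SQLiteDatabase
db.DatabaseFile = GetFolderItem("loadbalance.sqlite")

If db.Connect Then
  // Best machine first (priority), then the least-loaded node, and only
  // nodes that are marked available and still have room.
  Dim sql As String = "SELECT url FROM nodes " _
    + "WHERE available = 1 AND current_sessions < max_sessions " _
    + "ORDER BY priority ASC, current_sessions ASC LIMIT 1"
  Dim rs As RecordSet = db.SQLSelect(sql)

  If rs <> Nil Then
    If Not rs.EOF Then
      ShowURL(rs.Field("url").StringValue) // bounce the browser to the chosen node
      Return
    End If
  End If
End If

// No database, or no node with room: let this instance serve the session itself.
[/code]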

Matt & Nathan, thank you for your insight.

Though I do not have a web app that requires load balancing at the moment, I am interested in mocking one up with standalone Xojo web apps that I could redirect to.

Having a database to keep track of the various forwarding events, along with a managing application, is a good idea. In addition to redirecting, the managing app would need to be able to poll the various web apps to determine their session counts, which it could report back to the DB to assist in node assignment.
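
If the manager did poll the nodes, one simple hook (assuming the Web 1.x App.HandleSpecialURL event and App.SessionCount property) would be a special URL on each node that just prints its session count, roughly like this:

[code]
// App.HandleSpecialURL in each node app (a sketch).
// The manager can GET http://site.com:8081/special/sessioncount
// and read the node's current session count back as plain text.

Function HandleSpecialURL(Request As WebRequest) As Boolean
  If Request.Path = "sessioncount" Then
    Request.Print(Str(App.SessionCount))
    Return True
  End If
End Function
[/code]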

A couple of thoughts:

  1. If site.com is the domain and I wanted to keep it in the browser’s address bar regardless of which node I am redirected to, would this be achievable in the managing app, or would it be done on the firewall?

  2. If the node application’s port were shown in the address bar, such as http://site.com:1234, what would prevent people from bookmarking that node?

  • My thought on this would be for the managing app and node web app to have a secret handshake before the connection goes through.

There are very good, free and open-source solutions for load balancing which do not rely on a simple HTML redirect.

Check out HAProxy and nginx. There’s also an open source project called Zen Load Balancer, though I’m not sure what it’s like.

I agree, there’s no point in reinventing the wheel, but if Rich wants to, here are a few things to consider.

No, the management app would not need to talk to the node servers, as these would update the database directly with session info etc., and the management app would simply poll the database whenever a new request came in.
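
A rough sketch of that self-reporting: each node updates its own row whenever a session opens or closes. UpdateNodeCount is a hypothetical method added to App, SQLite is used only for brevity (nodes on separate machines would need a server database), and the table is the same made-up nodes table as before:

[code]
// A hypothetical App.UpdateNodeCount method on each node app.
// Call it from Session.Open and Session.Close so the shared table
// always reflects this instance's load; the manager only ever reads it.

Sub UpdateNodeCount()
  Dim db As New SQLiteDatabase
  db.DatabaseFile = GetFolderItem("loadbalance.sqlite")
  If db.Connect Then
    db.SQLExecute("UPDATE nodes SET current_sessions = " + Str(App.SessionCount) _
      + " WHERE url = 'http://site.com:" + Str(App.Port) + "'")
  End If
End Sub
[/code]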

Nothing, really. If you wanted to be clever, then when the management server redirects to the node servers it could put something in the HTTP header that the nodes check; if a request comes in without that entry in the header, the node would redirect back to the management server, which would then redirect it to the selected node with the HTTP header in place.
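
One wrinkle with the header idea: a plain browser redirect can’t make the browser add a custom request header to its next request, so in practice the handshake tends to be a referrer check (as Matthew mentioned for Apache) or a token appended to the URL. A rough referrer-style check in each node’s Session.Open, purely illustrative since the Referer header is easily spoofed or suppressed:

[code]
// Session.Open in each node app (a sketch, not real security).
// If the visitor did not arrive via the manager at site.com
// (e.g. they bookmarked http://site.com:1234), bounce them back.

Const kManagerURL = "http://site.com/" // illustrative address of the manager app

Dim ref As String = Self.Header("Referer")
If InStr(ref, "site.com") = 0 Then
  ShowURL(kManagerURL)
  Return
End If

// Otherwise carry on and serve the session normally.
[/code]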

If they are running on different boxes then they could all run on the same port, but you would have to use a different URL for each; this could simply be www1.site.com, www2.site.com, www3.site.com. If you ran them all on the same box, you would have www.site.com:8080, www.site.com:8081, www.site.com:8082, etc.

There used to be a really cool load balancer and cluster system by a company called Mutual Ends, but it appears they no longer exist. They had a really simple UI, and you could set rules for how to route traffic by request type, so you could say that HTML goes to one bunch of servers while JS, images, etc. come from other servers.

[quote=91901:@Nathan Wright]I agree, no point in reinventing the wheel but if Rich wants to then a few things to consider.
[/quote]

The suggestion was for a simpler-to-deploy option, as it’s contained in the WE app binary and could be configured with a small setup file and a script that starts up a few instances. There are many choices for full-function load balancers; they might be overkill or overcomplicated depending on your deployment situation.

OK, you have beaten me on this one, as I can’t see how you would achieve this in a simple way.

  1. Startup/restart script to launch e.g. 4 instances of the app on 4 different ports.
  2. Listening ports specified in a config file.
  3. In Session.Open, check the current app instance’s load and forward to a different instance if the load is too high, i.e. it needs to spill over (see the sketch after this list).
  4. Keep track of consecutive spillovers with URL parameters.
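
A sketch of steps 3 and 4 combined in each instance’s Session.Open. It assumes the Web 1.x App.SessionCount and App.Port properties and the session’s URL-parameter accessors (URLParameterCount / URLParameterName / URLParameter); the exact names and indexing may differ in your framework version, and the limits are made up:

[code]
// Session.Open in every instance (a sketch of steps 3 and 4).
// Each instance spills over to the next port when it is "full",
// carrying a hop counter so a request cannot bounce forever.

Const kMaxSessions = 50 // illustrative per-instance limit before spilling over
Const kMaxHops = 4      // illustrative cap on consecutive spillovers

// Read the hop counter passed along by the previous instance, if any.
Dim hops As Integer
For i As Integer = 0 To Self.URLParameterCount - 1
  If Self.URLParameterName(i) = "hops" Then
    hops = Val(Self.URLParameter(i))
  End If
Next

If App.SessionCount > kMaxSessions And hops < kMaxHops Then
  // Full: forward to the same app listening on the next port up,
  // incrementing the hop counter.
  ShowURL("http://site.com:" + Str(App.Port + 1) + "/?hops=" + Str(hops + 1))
  Return
End If

// Otherwise this instance serves the session normally.
[/code]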

So let’s assume this works fine for the typical load of the deployment, and you are doing many deployments. Maybe you ship an appliance or have many customers running your app on their own servers. You need only open the firewall and run your startup script to get going. System updates are a little easier too: just your app and a script.

** Not ideal if you’re deploying a large centralized stand-alone web service. **

[quote=91853:@Rich Hatfield]…
I had questions concerning having multiple web apps running on the same server or different server using different ports for the various nodes that could be used for re-directing.

My question is… How are you redirecting the user to the various ports? …[/quote]
Check this thread:
https://forum.xojo.com/4149-load-balancing-with-nginx/p1#p28982

Thank you all for your input.

There are a lot of great ideas here that I am going to dig into further.

Thank you again.