I have been doing a lot of testing recently with standalone deployment: multiple instances of apps running behind load balancers, on multi-core servers, etc… It seems like a great solution to a common problem for apps that are critical or see heavy use, but I haven’t seen much good documentation. I was thinking about putting together a tutorial on this deployment style, from the ground up. But before I invest the time, I want your input on 2 things please:
Would you be interested in a tutorial like this or do you think it would be beneficial for the Xojo community? (or no it’s not really necessary)
If you have experience in these types of configs, do you want to help? (I would love to share experiences)
Please let me know your thoughts.
John, there have been a few posts on the topic already, but I think a step-by-step, detailed, and well-structured tutorial would be to everyone’s benefit. I would suggest making it as wide-ranging as possible: using Mac, Linux, or Windows servers behind the load balancer, with networking specifics for each, etc.
I do not have the experience to help you with the task, however.
Thanks in advance!
I have a PDF I made for setting up standalone deployments on Windows Server 2012 with IIS 8 (it possibly works on IIS 7 too) using a reverse proxy.
Perhaps it will be useful to you.
Let me know (PM) your email and I will share it.
Haven’t done any of this but would be very interested in your techniques and the resulting performance.
2 - Sorry, I cannot help.
John, It doesn’t need to be the definitive guide. More of a “here’s what worked for me”. We all run into challenges specific to our hosting environment, application, user base, etc.
It would almost be worth setting up a separate channel for web deployments. I’m sure there are many stories to be told.
I am currently doing the same work as you (with Nginx + CentOS).
Also, I am developing a class to automate the NGINX configuration.
I am available …
I am interested.
I have already run standalone behind nginx, and plan to start testing load balancing, both on the same instance and across multiple instances. A guide would be very useful. Will you cover using Amazon’s load balancing vs. nginx?
I can offer my services in testing the guide.
Thanks for the interest everyone. I think I am just going to take the advice of @Brad Hutchings and cover my implementation from the ground up myself.
I AM going to include any information I can get from anyone else who wants to contribute their own particular configs, such as @Derk Jochems’ Windows server info. And @Jay Madren, if you or anyone else has info on Amazon, I will put that in there as well.
Another question for you all - do any of you have any interest in Vagrant & Puppet type setups?
I am thinking that a Vagrant-to-Amazon-AWS type setup may be beneficial as well, but that might come as a later add-on.
[quote=55495:@Antonis Vakondios]Also, I am developing a Class, to automate the NGINX configuration.
I am available …[/quote]
@Antonis Vakondios - that sounds great. I would love to get your input. Do you have any interest in sharing your class? Also - have you experimented with HAproxy as well? I personally find the sticky sessions in HAproxy to work better than IP-hashing in nginx. I will probably include both options but I have been leaning toward HAproxy lately. I need to run some more performance tests though.
1+ for interest.
I can install MAMP, I know what a port is and that’s it.
The server/web stuff seems to be distributed knowledge to be gained by osmosis. When do I need load balancing? How do I do SSL? How would I use Amazon AWS?
Yes, I will share the class when it is ready for all the needed configuration.
There is a big issue with nginx (ip_hash) when you are behind a proxy server: load balancing will not work because the remote address is the same for all clients, so all traffic gets redirected to the same resource. Right now I am looking at HAproxy, but I am not sure that it works well. Which value did you configure for “balance”?
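The ip_hash problem described above can be seen in a minimal nginx sketch (the upstream name and ports here are illustrative assumptions, not from this thread):

```nginx
# When nginx itself sits behind another proxy, $remote_addr is the
# front proxy's address for EVERY request, so ip_hash maps all
# clients to the same upstream server and balancing effectively stops.
upstream xojo_nodes {
    ip_hash;                  # hashes on $remote_addr (here: always the proxy's IP)
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

server {
    listen 80;
    location / {
        proxy_pass http://xojo_nodes;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

One possible workaround, if the front proxy sends X-Forwarded-For, is nginx’s realip module (`set_real_ip_from` plus `real_ip_header X-Forwarded-For`) so that $remote_addr reflects the real client address; cookie-based persistence sidesteps the issue entirely.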
I have successfully shared my Amazon EC2 instance with another developer. I was thinking of publicly sharing it so any Xojo developer could use it. We could build up our own library of instance configurations geared toward different purposes, like single instance vs. load balancing.
I am interested in learning about HAproxy.
Below is a working haproxy configuration that sits behind nginx. Nginx handles the HTTP and HTTPS protocols and routes to two servers, each running haproxy; each haproxy forwards to three Xojo standalone applications.
haproxy configuration:

global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 debug

defaults
    # "mode http" is required for the cookie directives below
    mode http
    timeout connect 5000ms
    timeout client 300000ms
    timeout server 300000ms

listen webfarm 0.0.0.0:7000
    stats uri /haproxy?stat
    stats auth admin:admin
    cookie SERVERID insert indirect nocache
    server web01 127.0.0.1:7001 check cookie 001
    server web02 127.0.0.1:7002 check cookie 002
    server web03 127.0.0.1:7003 check cookie 003
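For context, a minimal sketch of the nginx tier sitting in front of the two haproxy servers described above might look like this (the IPs and upstream name are placeholders, not from the original post):

```nginx
# nginx terminates HTTP/HTTPS and distributes requests across the two
# HAProxy hosts; each HAProxy then fans out to three Xojo instances.
upstream haproxy_tier {
    server 192.0.2.10:7000;   # HAProxy server 1 (placeholder IP)
    server 192.0.2.11:7000;   # HAProxy server 2 (placeholder IP)
}

server {
    listen 80;
    location / {
        proxy_pass http://haproxy_tier;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```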
This is the basic HAproxy configuration I am using for the app I am testing it with. I have three nodes running on the same server. I am using the default balance, Round Robin.
I totally agree with you on the sticky sessions with nginx. I ran into the same issue regarding ip based sessions. It is possible to compile nginx with a module that addresses this, but I switched to HAproxy instead because I had never used it and I wanted to test it - plus it was simple.
I find that it does a much better job at load balancing with sticky sessions when you use cookie injection. As you can see from my .cfg, I am using the same thing with the ‘serverid’ cookie. It works fantastically! If you have HAproxy v1.5 or higher, you can do the SSL termination there too instead of in nginx. I am not using SSL, so I haven’t messed with it, but it seems straightforward enough.
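A minimal sketch of that SSL termination in HAProxy 1.5+, assuming a combined certificate+key PEM file (the file path, frontend/backend names, and ports here are examples, not a tested config):

```
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/example.pem
    # Let the app know the original request arrived over HTTPS
    http-request set-header X-Forwarded-Proto https
    default_backend xojo_nodes

backend xojo_nodes
    cookie serverid insert indirect nocache
    server node1 127.0.0.1:8080 check cookie node1
    server node2 127.0.0.1:8081 check cookie node2
    server node3 127.0.0.1:8082 check cookie node3
```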
global
    log 127.0.0.1 local0

defaults
    mode http
    option httpchk OPTIONS /

listen webfarm   # bind address/port was not shown in the original post
    #Here is where the 'serverid' cookie is injected
    cookie serverid insert indirect nocache
    #These are the different nodes with their cookie names - just instances on the same box.
    server node1 127.0.0.1:8080 check cookie node1
    server node2 127.0.0.1:8081 check cookie node2
    server node3 127.0.0.1:8082 check cookie node3
[quote=55588:@Jay Madren]I have successfully shared my Amazon EC2 instance with another developer. I was thinking of publicly sharing it so any Xojo developer could use it. We could build up our own library of instance configurations geared toward different purposes, like single instance vs. load balancing.
I am interested in learning about HAproxy.[/quote]
Jay, this sounds great! I don’t use Amazon much but I would like to hear more about it. I am mostly developing on client’s existing VPSs. The library idea sounds excellent!
I do HAProxy SSL termination rather than SSL pass-through to the back-end servers.
Alan, if you have any tips/tricks feel free to post them here - thanks!