start-stop-daemon for a standalone web app

Hello all,

I've got start-stop-daemon working well with console apps, but how do I do the same with a standalone web app? Basically, I want the system to supervise the web app and restart it if it crashes. The only mechanism I know of to do this in a Linux environment is the init.d daemon controls.

Please advise any code adjustments that should be made - is it as simple as adding the daemon check code?

// ************************************************************************ //
#If TargetLinux Or TargetARM Then // Daemonize 
  #If Not DebugBuild Then // Do not try to daemonize a debug build
    System.Log( System.LogLevelCritical, "Trying to daemonize the Axcys application.")
    // System.Log( System.LogLevelCritical, "Trying to daemonize the Axcys application. args = " + CStr(args.Ubound)  + "  arges= " + args(0) )
    
    // If (args(1) = "start" Or args(1) = "-d" ) Then // Check for command-line parameter to daemonize
    // If (args(1) = "start" Or args(1) = "-d" or args(1) = " &" ) Then // Check for command-line parameter to daemonize
    If Not App.Daemonize Then
      System.Log( System.LogLevelCritical, "Could not daemonize the Axcys application.")
      Return -1
    End If
    // Else
    // System.Log( System.LogLevelCritical, "Incorrect init.d/axcys command.  Could not daemonize the Axcys application.")
    // End If
    
    System.Log( System.LogLevelCritical, "Daemonized the Axcys application.")
  #EndIf
#EndIf

Thanks for the feedback all!
Tim

Heh. I usually use a cron job for that.

You can create a text file somewhere on your server, for instance /opt/myhandler.sh, and make it executable (octal 755).

You then add lines like these (supposing you have a standalone web app named 'myapp'):

[code]#!/bin/bash

# Check if myapp is running
if ! pidof -s myapp > /dev/null; then
  sudo /opt/myapp --secureport=9999 --maxsecuresockets=400 &
fi

# Check if the DNS service is running
if ! pidof -s named > /dev/null; then
  sudo /etc/init.d/bind9 start
fi

# Check if the HAProxy service is running
if ! pidof -s haproxy > /dev/null; then
  sudo service haproxy start
fi

# Check if CubeSQL is running
if ! pidof -s cubesql > /dev/null; then
  sudo cubesqlctl start
fi[/code]

Then you add a cron job which runs this script every minute or two.

So whenever one of these services goes down or has not been restarted after a reboot, the script will discover it and try to start the app.

One easy way to add cronjobs is to install and use webmin on your Linux server.

Otherwise check out crontab:
http://www.adminschoice.com/crontab-quick-reference
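As a sketch, the cron entry (assuming the script above was saved as /opt/myhandler.sh) could look like this, added via crontab -e as root:

[code]# m h dom mon dow  command - run the watchdog every 2 minutes
*/2 * * * * /opt/myhandler.sh >> /var/log/myhandler.log 2>&1[/code]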

I don’t daemonize the web app. When I was reading up on how to set up the init script I read somewhere that you should let the system handle it. So I just modified a template script and came up with the file below. Seems to work fine so far.

I’ve included it in case you wanted to see it.

[code]#!/bin/sh
### BEGIN INIT INFO
# Provides:          SmokeConfig
# Required-Start:    $local_fs $network
# Required-Stop:     $local_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: SmokeConfig
# Description:       SmokeConfig web based GUI for configuring SmokePing
### END INIT INFO

NAME="smokeconfig"
DESC="SmokeConfig web GUI for SmokePing"
PIDFILE="/var/run/${NAME}.pid"
LOGFILE="/var/log/${NAME}.log"

DAEMON="/usr/share/smokeping/SmokeConfig/SmokeConfig"
DAEMON_OPTS=""   # extra arguments for the daemon itself (a shell redirection would not work here)

START_OPTS="--start --background --make-pidfile --pidfile ${PIDFILE} --exec ${DAEMON} ${DAEMON_OPTS}"
STOP_OPTS="--stop --pidfile ${PIDFILE}"

test -x $DAEMON || exit 0

set -e

case "$1" in
  start)
    echo -n "Starting ${DESC}: "
    start-stop-daemon $START_OPTS >> $LOGFILE
    echo "$NAME."
    ;;
  stop)
    echo -n "Stopping $DESC: "
    start-stop-daemon $STOP_OPTS
    echo "$NAME."
    rm -f $PIDFILE
    ;;
  restart|force-reload)
    echo -n "Restarting $DESC: "
    start-stop-daemon $STOP_OPTS
    sleep 1
    start-stop-daemon $START_OPTS >> $LOGFILE
    echo "$NAME."
    ;;
  status)
    echo -n "Sorry, this isn't implemented yet"
    ;;
  *)
    N=/etc/init.d/$NAME
    echo "Usage: $N {start|stop|restart|force-reload}" >&2
    exit 1
    ;;
esac

exit 0
[/code]

Most Unix systems these days have Upstart or systemd to manage daemons. They make it easy to start your app, restart it on failure or whenever you want, run multiple instances, control the user and group, control the environment, execute pre-start or post-start scripts, and so on.

Do you know if your OS has either of these? What OS is it?

I have a brief write-up on Upstart here:

http://john-joyce.com/xojo-and-load-balancing-with-nginx/

If your system is running systemd, the process is similar. You create a unit file in the /etc/systemd/system/ directory. This also makes it very easy to spin up multiple instances on different ports, FYI. I covered this in my XDC session last year. Here is a sample unit file template:

[code][Unit]
Description=My Xojo App
After=network.target

[Service]
ExecStart=/path/to/my/app --port=%I
EnvironmentFile=/etc/sysconfig/myEnvironmentFile
Type=simple
Restart=always
User=myUser
Group=myGroup

[Install]
WantedBy=default.target[/code]

Here’s some more information to build on John Joyce’s guidance.

Although init is very popular and works on both current and legacy systems, systemd is the better tool for modern systems. We prefer systemd but use init when apps may be deployed to legacy systems.

Here is a simple example for a systemd unit file.

[code][Unit]
Description=Axcys Facility Security Manager

[Service]
User=axcysfacilitysecuritymanager
ExecStart=/home/pi/Public/axcys/AxcysFacilitySecurityManager/AxcysFacilitySecurityManager
Restart=always

[Install]
WantedBy=multi-user.target[/code]

Assuming that file is named AxcysFacilitySecurityManager.service and is in /etc/systemd/system, use sudo systemctl enable AxcysFacilitySecurityManager and sudo systemctl start AxcysFacilitySecurityManager to enable and start your app.
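As a sketch, the full command sequence looks like this (these need a live systemd, so treat them as illustrative):

[code]sudo systemctl daemon-reload                        # pick up the new unit file
sudo systemctl enable AxcysFacilitySecurityManager  # start at boot
sudo systemctl start AxcysFacilitySecurityManager   # start now
systemctl status AxcysFacilitySecurityManager       # verify it is running
journalctl -u AxcysFacilitySecurityManager -f       # follow its log output[/code]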

It’s important to note that when using systemd, the app should not be daemonized within Xojo.

Hope that helps.

I am new to WE apps, so bear with me. Why not? I thought daemonizing the app was the right thing to do.

With a CGI application, the docs say it is bad to daemonize them. I am not sure how this affects a standalone app, but it is unnecessary if you are launching the app from the init system, because it will be launched as a daemon from there anyway (no terminal attached).

Thanks. I wasn’t clear but you answered it. I don’t work on CGI apps. Just StandAlone.

Wow! I did not expect so many responses to my question - THANK YOU ALL!

To answer John, the OS is Linux, running on a Raspberry Pi and other boards - but the Pi for now.

Going to try some of the things suggested, will get back to the post following.

Tim

I am pretty sure that Raspbian uses systemd for the init system.
If you copy and paste this into a shell and hit Enter, it should say yes or no. Yes = systemd:

[[ `systemctl` =~ -\.mount ]] && echo yes || echo no

Then you can create a unit file as @Frederick Roller and I have discussed.
I would be glad to help you through it if you have questions but it is pretty straightforward.

Let us know how it goes.
J

Hi John,

Yes, it does use the init system.
What is the difference between what you and Frederick suggested and what Kevin suggested? Or even from what is normally used to control and daemonize an app with init.d?

Because it was simpler at the time, I tried the same code used for the (daemonized) console apps with the standalone web app. I did not make any changes to the Xojo code. This is what I did:

#!/bin/sh

### BEGIN INIT INFO
# Provides:          Axcys Embedded Viewer
# Required-Start:    $local_fs $syslog $time $network
# Required-Stop:     $local_fs $syslog $time $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Axcys Embedded Viewer
# Description:       Axcys Embedded Viewer
### END INIT INFO

case $1 in
	start)
		start-stop-daemon --quiet --chuid pi --start --exec /home/pi/Public/axcys/AxcysEmbeddedViewer/AxcysEmbeddedViewer
		;;
	stop)
		start-stop-daemon --quiet --chuid pi --stop --exec /home/pi/Public/axcys/AxcysEmbeddedViewer/AxcysEmbeddedViewer
		;;
	restart|force-reload)
		$0 stop && $0 start
		;;
	status)
		ps u -U pi
		;;
	*)
		echo "Usage: sudo service axcys-embedded-viewer {start|stop|restart|force-reload|status}"
		;;
esac

Thanks for the input!
Tim

Hello guys, is this still valid ?

And how do you set up all the shutdown steps for a web app, in my case?

For the moment to start the app I use this command ./syapi -d --port 80 --SecurePort=443 --certificate=/opt/test/apps/syapi/Data/Certificates/syapi.crt

but to stop it I just kill the process, which in a way is not the right thing to do, as I would like to close the database (if open), do some cleanup, and then shut down the app.

For the starting part, is there a way to set the ports and certificates in the app itself, in the Open event of the web app?

@John Joyce Apparently your URL for the HAProxy article is not working anymore - any luck on making those available again? Thanks.

Ah, and the OS is Debian 9.

Hi there @Aurelian Negrea. Yes, this is still valid. The best way to start and stop your standalone web app is with the OS init system, which for Debian is systemd - the one I referenced earlier in this post.

It is not complicated to learn, and it gives you a lot of power over starting and stopping, automatic restarting, executing scripts when starting or terminating, the application environment, which user the app executes as, and so on. If you back up this thread just a few posts, there is a five-second overview of creating a unit file and where to save it to make your application launch.

Here is a pretty in-depth tutorial:
https://www.digitalocean.com/community/tutorials/systemd-essentials-working-with-services-units-and-the-journal

I let my site go because I just wasn’t using it. The info is all more or less on this forum in various places though. If you search load balancing or haproxy you might find more. You can also check the www archive here:
https://web.archive.org/web/20161012051757/http://john-joyce.com/xojo-and-load-balancing-with-haproxy/

As far as your application shutting down: you might be able to put some cleanup code into webapplication.close, but honestly you should write your app so that it is not generally in an "unclean" state, because if it crashes, that will create problems. You can also create a script to do shutdown activities and have systemd call it on termination, as part of the shutdown process, using the ExecStop or ExecStopPost directives.

https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files
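As a sketch, the ExecStopPost idea looks like this in a unit file, using the syapi paths from earlier in the thread (the cleanup.sh script is hypothetical). Note there is no -d flag: under systemd the app should not daemonize itself.

[code][Service]
ExecStart=/opt/test/apps/syapi/syapi --port 80 --SecurePort=443 --certificate=/opt/test/apps/syapi/Data/Certificates/syapi.crt
# systemd sends SIGTERM on "systemctl stop"; ExecStopPost then runs after the
# main process has exited, e.g. a hypothetical cleanup script:
ExecStopPost=/opt/test/apps/syapi/cleanup.sh
Restart=always[/code]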

Hope this helps!

Hello John,

Thanks a lot for taking your time to reply to this.

I did start a unit with systemctl, and in my case it works perfectly following your unit from above, but I have some issues.

Based on your code :

[code][Unit]
Description=My Xojo App
After=network.target

[Service]
ExecStart=/path/to/my/app --port=%I
EnvironmentFile=/etc/sysconfig/myEnvironmentFile
Type=simple
Restart=always
User=myUser
Group=myGroup

[Install]
WantedBy=default.target[/code]

I see that the app starts with [quote]ExecStart=/path/to/my/app --port=%I[/quote]

Now, it works when I start the service, but I don't get any port. So when doing systemctl start myapp.service, where do I specify the port in this case?

And the last part: how could I use xojo-and-load-balancing-with-nginx on Debian? There are a few changes on my side: Debian does not use initctl anymore, so I need to use systemctl, but at the same time, to adapt your code, I need to be able to run the app as multiple instances on different ports, if I understood correctly.

So in my case, should I create a service for each port, or could I fire up a different instance on another port with the same service? And how?

And lastly, what do you have in [quote]EnvironmentFile=/etc/sysconfig/myEnvironmentFile[/quote] in your case?

Thanks again.

Found a way to make it work, thanks.

Sorry I didn’t get back to you sooner - but glad you got it figured out.

So, I didn't realize this, but my code is for a template service, which is what you use to run multiple instances. You can set up one template, and then on startup have the system call it with different port numbers - that is why the ExecStart line has %I for the port. You can find out more about service templates here:
https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files#creating-instance-units-from-template-unit-files
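For instance, assuming a hypothetical template unit named myapp@.service, instances could be started like this (these need a live systemd, so treat them as a sketch):

[code]# The unit file must be named myapp@.service; the text after "@" becomes %i/%I.
sudo systemctl start myapp@8081.service
sudo systemctl start myapp@8082.service
sudo systemctl start myapp@8083.service

# Enable all three so they come back after a reboot.
sudo systemctl enable myapp@8081.service myapp@8082.service myapp@8083.service[/code]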

So yeah, for load balancing behind a proxy, you would fire up different instances listening on different ports, then have the proxy listen on 443/80 and send the traffic to the different Xojo apps. You can also offload the SSL to the proxy (faster, easier, and more control), but you will need a mechanism of some sort for "sticky sessions" - so that the load balancer sends requests from the same session to the same instance, otherwise sessions will break.

I use the environment file to set some variables that are needed for my ODBC driver, etc., like this:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/home/qw/lib/ilinux:/home/qw/bin/ilinux
OPENRDA_INI=/home/qw/config/ilinux/openrda.ini
ODBCINI=/home/qw/config/ilinux/odbc.ini
LD_LIBRARY_PATH=/home/qw/lib/ilinux

[quote=464603:@John Joyce]So I didn't realize this but my code is for a template service which is what you use to run multiple instances. [...][/quote]
Indeed, apparently this is what I ended up doing. I guess the SSL part and sticky sessions are still pending. The project is kind of an internal API server, so I'll have to dig more into this and see if it works and does the job properly.

I did look into the options for load balancing, but I still have to run tests to make sure all is OK.

Any specific parts I should look at ?

thanks again.

In the end I switched to HAProxy instead of nginx, so that is where my experience with nginx ends. I have found HAProxy to be more powerful, more flexible, and easier for what I was trying to do. I have a system like this that has been live in production since June 2015 with very few issues. Here is my article on HAProxy in case you are interested:
https://web.archive.org/web/20161012051757/http://john-joyce.com/xojo-and-load-balancing-with-haproxy/

Regarding load balancing, I found that the 'least connections' method works great, but you also have to be able to make sessions stick to specific instances, as I said earlier. The easiest way I have found to accomplish that is with cookie injection and routing (the load balancer adds a cookie identifying which backend instance served the request; the browser passes it back with each request, and you route accordingly).

Also, terminating SSL on the load balancer means you get unencrypted data at the load balancer, so you can inspect cookies and other request data and use them during routing. If you do SSL at the instance, then all the data stays encrypted through the load balancer, and you lose much of its usefulness, because it can't look into (or modify) any of the request or response data at all.
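A minimal HAProxy backend sketch of that cookie-injection idea (the backend name, server names, and ports are hypothetical):

[code]backend xojo_apps
    balance leastconn
    # HAProxy injects a SERVERID cookie so each browser sticks to one backend.
    cookie SERVERID insert indirect nocache
    server app1 127.0.0.1:8081 check cookie app1
    server app2 127.0.0.1:8082 check cookie app2[/code]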

Good luck and let me know how it goes!
J

Well, due to the limitation of Xojo not being able to use more than one CPU core, the idea of load balancing came up, so that you can use all the resources of a server efficiently. For the API side I was thinking of using Tim's AloeXWS, as I would have at most 1000 customers for this, so I guess it could handle it. But seeing the issues you most probably also had with nginx, I will have to look at HAProxy as well, and at the same time be able to run multiple instances for this.

I guess I'll run some tests on that side as well and see how it goes.

Thanks again .