Trouble after running the Standalone Xojo Web Install Script


My web app ran fine, so I decided to use the Standalone Xojo Web Install Script (which I had bought several weeks ago) so my app would survive server restarts.

Initially, I encountered two problems:
One was about /bin/bash^M not being recognised. For that, I replaced the Windows line endings with Unix line endings, and the script ran further.
Then I had a problem with Yum not being installed. I figured out how to install it and could finally run the script.
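For reference, the line-ending fix can be done like this (a sketch; “install.sh” is a placeholder for the actual script name — dos2unix is the usual tool, and plain sed works where it isn’t installed):

```shell
# Strip the carriage returns that make the shebang read as "/bin/bash^M"
sed -i 's/\r$//' install.sh
```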

Since running the script, I can no longer get responses from my web site. The web application launches, but it relies on an HTTP request (to the same server), which fails.
If I manually enter a URL of my server (html, php) in Safari (or Firefox), the page stays blank (it should normally show the page’s content). Other apps I have that make HTTP requests also fail.

When I ran the script (and also now, if I run it again), I’m getting this summary:

 ## <name> Installation Complete
 ## executable: pass
 ## autorenew-ssl: scheduled
 ## certbot: installed
 ## fail2ban: offline
 ## firewalld: offline
 ## libsoup: missing
 ## libunwind: missing
 ## SSL certificate: installed
 ## <name>.service: installed
 ## <name>.service: running
 ## Success! <name> is now running:
 ## https://<mywebsite>

Either I must undo what the script did or, preferably, understand what it did wrong and fix that (soon enough).

Help please.

libunwind is not installed and is definitely needed: System requirements for current version — Xojo documentation

libsoup is needed in case you still use HTTPSocket.

For a deeper dive, we would need a bit more information. Which Linux flavor are you using? Which version, etc.? I suspect you are using @Tim_Parnell’s script? It is actually very good. Did you contact him already? He usually responds quite fast.

Thank you for your answer.

The web app is actually launching and shows my login window; the problem comes later. Wouldn’t this missing library make the app fail to launch?

The problem isn’t at the Xojo level, it seems.

For example, this fails:
On my server, I create a file “test” (no extension). Then, in Safari, I go to https:///test; the page loads quickly and I get nothing on it (even the source code, viewed in Firefox, is absolutely blank).

Ubuntu 16.04.7

Exactly. Sorry for not mentioning the script by name; I used the description found in the Read Me file.

Well, I hoped he would answer here. Being a bit stressed by this issue, I preferred to post in the forum, because I didn’t have to search for his address.

I’ve read all the log files I could dream of on my server, by the way. Found nothing…


Ok, I’ve found the problem (it wasn’t easy…).
My web app was using port 443, which happens to be the port Apache wanted to use too. So, with the script, my web app was launched before Apache; Apache then complained that the port was already in use and wouldn’t start. As a result, all my pages were blank (and HTTP requests failed too), even though my web app was launching.
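A quick way to confirm which process holds a port (assuming the `ss` tool from iproute2 is available, which it is on most modern Linux systems):

```shell
# Show which process is currently listening on TCP port 443
# (root is needed to see the owning process name)
sudo ss -tlnp 'sport = :443'
```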

I’ve now tried to rebuild the web app using another port (9000), but, for some reason, port 443 is still being used (despite having correctly re-uploaded the files and restarted the server).
Is the port set in the IDE the only thing to change?

Not for libsoup, for instance. Which web server are you using? It sounds to me (with no guarantee) as though the config files of your “other” pages somehow became buggy once the Xojo app config worked. That’s not unusual, as it is a complex topic. This script was the “beginning”, the foundation so to speak, of Lifeboat, which is far more sophisticated (I’m not using it, but I’ve read about it).

Your firewall is offline, and so is fail2ban, so the only indicators I have are that Xojo is running and certbot was successfully installed. Certbot installs the certs and(!) makes some changes so that your insecure HTTP pages now link to HTTPS. A few things can break here. Certbot is “talking” to the end user, but usually no one reads this stuff :wink: .

If you already had buggy and/or complex config files, a script unfortunately can’t work miracles. Most likely it is related to some new redirects. And don’t forget that web2 apps come with their own integrated server. So it can very well be that Xojo is working and serving its pages while your main web server (nginx, Apache2, or whatever) is now misconfigured for the other pages.

Patience is a virtue when facing this kind of issue. If your web server is nginx, please run:

sudo /etc/init.d/nginx configtest

This should give you more information.

If you are running Apache2, please try:

sudo apachectl configtest

And then we can go from there. The good news: it will be fixable, but we first need to know what the issue is.


Ok, my last post was overlapping with your answer. Good that you found the root issue.

That’s the right thing to do. Although Xojo builds its own server into each app, it is good practice to use your Apache2 as a reverse proxy. This means you should still point your domain to an Apache2 virtualhost, but then, in that virtualhost config, Apache2 should forward the 443 requests for your Xojo app to port 9000 of the Xojo app’s built-in server.

I’m showing that approach in the Nginx chapter in … It should not be too hard (I hope), with some googling, to translate that approach from Nginx to Apache2.
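A minimal sketch of what such an Apache2 virtualhost could look like (all names and paths below are placeholders, not values from this thread; it assumes mod_proxy, mod_proxy_http and mod_ssl are enabled):

```apache
# Hypothetical /etc/apache2/sites-available/myapp.conf
<VirtualHost *:443>
    ServerName example.com

    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem

    # Hand every request over to the Xojo app listening on port 9000
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:9000/
    ProxyPassReverse / http://127.0.0.1:9000/
</VirtualHost>
```

The needed modules can be enabled with `sudo a2enmod proxy proxy_http ssl`, followed by a configtest and a restart.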


And a rule for the future (even experts often don’t follow it :frowning: ): never restart a running web server without testing the configuration first. That’s one good thing about Linux: with the right parameters, it will refuse to restart services if you make the restart process test your configuration first. This doesn’t help completely (at the latest, the next reboot will bring up the broken configuration), but you buy yourself time to solve issues while the server continues to serve your systems with the healthy configuration it has in memory.


You’re completely right.

And here too :wink:

I thank you for it, as you wrote a “complex” reply. It contains useful information, like commands for troubleshooting. I’ll keep them somewhere for future use.

So it’s Apache that handles the connection to my web app (from the browser’s request)? I never truly wondered about it, but, yes, it can’t just be my application listening on port 9000 with the request going straight there. Makes sense.

I’ll check your link, thanks.

I recognise this is smart advice.
But I don’t really understand how one can test a new configuration without restarting the service that uses it.
I’m somehow not understanding this:

it will never restart services if you force the restart process to first test your configurations

What is the restart process in this case?

What I do understand is that I messed things up when I restarted the whole server, as the configuration then took effect completely. But I fail to see how I could have detected this earlier.

Edit: I also see the server is still running Ubuntu 16.04, despite the fact that I regularly update it (both the kernel and the packages). Do you happen to know whether that’s a limitation of some sort, or whether I’m just using the wrong commands?
I’m using these:
apt-get update
apt-get upgrade
sudo apt-get update
sudo apt-get dist-upgrade
(though I think the 3rd is not necessary, as I already run as root (over SSH), so it would just duplicate the 1st command (but I’m not expert enough to be sure)).

Thank you a lot.

That’s the way Linux works (and one of its fantastic traits). Let’s assume, for instance, that Apache2 is currently working fine. Now you change the configuration, which of course might break things. If you now stop your Apache2 and try to restart it, it will of course load (or try to load) your new configuration at startup (which, again, might work, break things, or not load at all).

But(!) even while Apache2 or Nginx etc. is running, you can test the current changes. That means: Apache2 is running (with the configuration from its last start “in memory”), but Apache2/Nginx can test whether the configuration “on disk” would actually load or not. Of course, this has its limits: these tests will only detect bigger issues in your configuration files, like syntax typos and wrong paths, not logical issues. Back to your original question. Let’s again assume that your Apache2 is running fine (in the very near future) and that you then make new changes. Doing something like this is not ideal:

sudo /etc/init.d/apache2 restart

Because “restart” means that you first shut down Apache2 and then start it again; but if your config is broken… well, Apache2 won’t start. So better first(!) do something like this:

sudo apachectl configtest

If everything is fine, you can restart your server. But if you get something like this, please do not :wink:

$ sudo apachectl configtest
AH00543: apache2: bad user name username
Action 'configtest' failed.
The Apache error log may have more information.
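The two steps can be combined into one safe-restart pattern (a sketch, assuming a systemd-based system with apachectl available):

```shell
# Only restart if the config test passes; otherwise Apache keeps
# serving with its last good in-memory configuration.
if sudo apachectl configtest; then
    sudo systemctl restart apache2
else
    echo "configtest failed -- NOT restarting; fix the config first."
fi
```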

You are the limitation, and Linux protects you :wink: . In Linux you can configure everything, but out of the box, most distros these days are configured in such a way that you have to explicitly tell the system you want to move from “Windows 95” to “Millennium”, “Vista”, etc.; otherwise it will only upgrade to the highest available version of the major release you are using. You first(!) need to manage the following file:

sudo vi /etc/apt/sources.list

and then do an apt-get update and an apt-get upgrade / dist-upgrade. But I suggest you first google and read a bit about it; you can easily break many things. Databases change, servers and services get upgraded, paths might change.
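For Ubuntu specifically, the usual tool is do-release-upgrade, which manages sources.list for you; a rough sketch (back up everything first!):

```shell
# Bring the current release fully up to date first
sudo apt-get update && sudo apt-get dist-upgrade
# update-manager-core provides the do-release-upgrade tool
sudo apt-get install update-manager-core
# Steps one LTS at a time (16.04 -> 18.04); run it again for the next LTS
sudo do-release-upgrade
```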

From what you say, you already had Ubuntu 16 configured with Apache and then ran the script, right?

From what I understand, the script installs several things, like nginx, Let’s Encrypt, a firewall that only allows some ports (80 and 443 for web), fail2ban and, if you have the Pro version, a load balancer.

I think the script is designed to be used in a new droplet or Lightsail instance. I haven’t tested it with a web server already running.

You may want to update your Ubuntu server to a later version; from what I read, Ubuntu 16.04’s end of life was set to April 2021 (no more maintenance updates, just security updates). Looking at this page, Ubuntu release cycle | Ubuntu, I would not try to install 21.04 and would just install 20.04.


Correct, but that’s the app. The original script from Tim did pretty much the same as the app, but with less “intelligence”. From @Arnaud_N’s original “log”, it seems that fail2ban and the firewalls are not running. As his Xojo app is working, I assume that Nginx is working, and the log shows that certbot is up and running too. So he should either make Nginx the only running web server, taking care of everything currently running on Apache, or vice versa. It isn’t too difficult, but I don’t know how well he knows Linux. As Tim’s app Lifeboat (not the script) meanwhile lets a user upload and serve “normal” pages as well, I think this approach would probably be the fastest:

  1. Backing up everything (for instance downloading the current non Xojo pages)
  2. Installing a brand new Ubuntu
  3. Use Lifeboat (the App) to install the Xojo app and everything else
  4. Happy Arnaud
  5. Happy Tim
  6. Perfect world (almost) :wink:

Alternative: server backup, trial and error, googling, trial and error, restoring… trial and error… The learning curve will be greater with the latter approach, but if things are time- and/or mission-critical, it is likely not the best path to follow.



I hope it’s at least an intranet server, or that the log posted above is wrong ;-).

That’s good to know, thanks.

I’ll wait a bit for that; I’ll certainly do it next week.
For now, I have to make the whole current setup work.


Ok, good to know. I’ll try that next week.
Thank you.

Correct. Neither can start because of some sort of “misconfiguration”.
However, that’s not my main problem yet, and I assume they’ll be fixable in a non-tricky way.

I’m not using Nginx, nor did I install it (myself); I use Apache instead.

This is my current issue:
If I launch the Xojo web app on the server, HTTP/HTTPS requests fail and produce a blank page. That’s because Apache fails to start, since it uses the same port as the web app.
If I instead change the port of the web app (to, say, 9000), Apache runs fine, but the web app cannot be reached (the browser says it can’t connect to my website, even with https and the port (:9000) appended).
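One way to narrow this down (a sketch; 127.0.0.1 and 9000 assume the app listens locally on the port mentioned above): check from a shell on the server itself whether the app answers. If it does, but the browser can’t connect from outside, something in between (firewall or missing proxy rule) is blocking the port.

```shell
# Ask the app directly, bypassing Apache and any firewall rules
curl -v http://127.0.0.1:9000/
# If the app was built with SSL, try HTTPS too, ignoring cert errors
curl -vk https://127.0.0.1:9000/
```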

As you wrote earlier:

I followed your tutorial. The problem is that my web server was already configured by other means, and I have yet to find out how to tell Apache2 to forward the 443 requests to port 9000 (this wasn’t obvious to me in your blog, but perhaps it’s something I didn’t understand :thinking:).

I only have Apache, I think. I never configured Nginx. Did the script do it for me?

I wish I knew that myself :wink:
I’m self-educated here, learning bits in various places. It’s often more theory than practice (for example, when using sudo, I understand 75% of what it does, but not “how”, or “where in the system the command comes from”). I can’t just understand/assume something without knowing it in depth.

I’m currently waiting for someone to try an app, so I’ll wait for that before reinstalling my server (in case something goes wrong and it takes me a week to rebuild the server :grin:).

Correct. Sadly, I started down this path, and it’s often not easy to jump from it to standard paths later without more issues.

Yes, I know… :sweat_smile:
I had never heard of fail2ban before. As for the firewall, I can’t believe there is none installed.

I must have another firewall, for sure. My current assumption is that Tim’s script attempted to install firewalld (which failed) and that the former firewall is still working.
But, well, it’ll be my next immediate check.



My install script was designed to run on CentOS 7 only.

There are compatibility differences between the flavors.

If you need Ubuntu support please use Lifeboat. I would be happy to discount Lifeboat by the cost of the script.


IMHO, If your firewall isn’t already running at this point, you might as well assume that your server is already compromised and start over from scratch.



@Arnaud_N, it’s not about blaming you (just to get that right :wink: ); it’s simply a reality that with every server exposed to the internet, the danger of being compromised is unfortunately very high, though it takes some expertise to realize/monitor this. That’s why it is not enough to “hope” that some firewall might be running; it must(!) be active, always.

This is where Xojo Cloud and Lifeboat are invaluable. They ensure that your server is safe, whether you are aware of it or not. My recommendation would always be to use one of them.

Now, with Xojo Cloud you can’t really look into all their security measures (which is a good thing from a security perspective), but with Lifeboat, for instance, you can inspect your system and learn how the security measures have been implemented. Either way, you know that your server is safe before you start looking into all the details.

In my apps, for instance, I log all login attempts into a web app. Ok, a few are users who had a typo in their credentials, but it is interesting to see how many try to connect with made-up names etc. :slight_smile:

Last but not least: no tutorial (including my own) shows all the possible measures to secure your servers in the best possible way, for the simple reason that showing how you do it is, of course, a risk of its own.

It seems I overlooked that before use.

Thank you. I’ll take a look soon.
Do you happen to have a way to automatically uninstall what the script I already ran installed? Or wouldn’t they interfere with each other?

I’m sure there had to be a running firewall out of the box. Currently, ufw is the one installed (I installed it last week), but it doesn’t automatically launch when the server restarts; although, when I invoke it, it claims it’s “now enabled and at system boot”, that’s wrong.
I’ll fix that when I have time; for now, I mostly know when I restart my server, so I start ufw manually after each restart (sounds silly, I know, but I’d have to do some research to understand how to launch a script at startup, and I already have too much to do in various places).
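For reference, a possible fix (an assumption on my side: the symptom suggests ufw’s systemd unit is not enabled, even though ufw itself reports being active):

```shell
sudo systemctl enable ufw   # register the unit so it starts at boot
sudo systemctl start ufw    # start it now
systemctl is-enabled ufw    # should print "enabled"
```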

Yes, you’re right to point out such important things, of course. While I’m always afraid something will get compromised (as it can always happen), I can’t think of anything more I should do at the moment.
Fail2Ban works (I have to create a folder after each restart, because one folder Fail2Ban requires is deleted at each boot; I do it manually for now) and the firewall too. Root login is deactivated and my user’s password is quite strong.
I tried to change the SSH port, but that led to even myself being rejected, so I undid that.
When I looked at /var/log/auth.log over the past 5 days, I could see a lot of attempts from foreign IP addresses trying various users and passwords, several times per minute; that’s impressive, and part of me was concerned by it.
With Fail2Ban, I set the ban to 1 day (and am planning to make it one week, soon); that’s going better. I really wish some kind of police could watch these attempts over the network, track them down, and arrest these guys…
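If the folder that disappears lives under /run or /var/run (which are cleared at every boot), systemd’s tmpfiles mechanism can recreate it automatically instead of doing it by hand; a sketch, with a hypothetical path:

```
# /etc/tmpfiles.d/fail2ban.conf
# Recreate fail2ban's runtime directory at every boot.
# (/var/run/fail2ban is an assumption -- use whichever directory
# fail2ban actually complains about.)
d /var/run/fail2ban 0755 root root -
```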

I do my best for that. I’ve learnt the hard way, though, that Linux can easily get messed up by uncommon configurations or nonstandard attempts (see my two problems of Fail2Ban and ufw not starting automatically after a server restart).

Only one of them?
(for me, Xojo Cloud is too expensive, so my choice is done)

I’ll definitely take a look at it.

Hackers also try to break into web apps?
What do they expect to do once inside?

Yes, that’s why one part of me is concerned. I’ve followed advice found on the Internet and (sometimes) monitored my server’s logs, but I’m not a hacker myself: I don’t know everything they could attempt, so I can’t prevent all their tricks.

Thank you, guys.

Whatever has a login screen, you will find an idiot/bot on this planet trying to get in, be it for the sole pleasure of causing harm. That’s also why it is so important to work with prepared statements, to avoid SQL injection in web development.

Luckily, I don’t use databases, so this concern isn’t mine :grin:
But those idiots who just want to harm others have a severe problem in their brains… :roll_eyes: