Have people tested Xojo Web's performance?

Example: Mac mini server, i5 2.4 GHz CPU, 4 GB DDR3 memory, 500 GB HDD.

I built a common library query system plus online chat (using the method from Xojo's example chat application). The database is MySQL. When a user connects, three WebThreads are opened in one Session. Could a single server support 500 users online (if all are chatting with HTML chat content at the same time)?

Xojo does not have real threading; it uses fibers, so it does not use the full power of multi-core processors and can't perform well enough for highly demanding tasks. http://en.wikipedia.org/wiki/Fiber_(computer_science)

Ugh. Ugh. Ugh. The OP’s question is a tough one for which there is no general answer. When developing a system involving Web Edition, you simply have to benchmark and test often, and leave yourself available to as many ways to scale as possible. That might involve optimizing queries in the database server. It might involve replication of the database server. It might involve running multiple instances of the WE app. It might involve splitting some app tasks off into their own console apps, letting the OS schedule on all the processor’s cores.

Another problem that comes up in 99% of web app designs is drastically overestimating the capacity needed. Unless you really, really know, just get the thing working first and keep your scaling options open. You can probably scale later once you have a sense of how much work your app actually has to do.

Based on Brad's lines, let me rephrase: it can't perform well enough for highly demanding tasks on multi-core processors without an intense redesign to get the most from your host processor, splitting tasks into different processes (on different cores) and creating interprocess communication (which delays those "threads" compared with real threading).
Your app hiccups when held up by DB calls, socket calls, and your processing running "in parallel" in fiber mode.

This is true for any app. But for web apps, things can be a little better, because you can split tasks across different hosts using a reverse proxy on the gateway to distribute the incoming requests, at the price of higher CPU costs due to the more demanding infrastructure.

Rick - my reaction is the same as Brad's to both of your statements. It is simply inaccurate to say that Xojo is not good enough for highly demanding tasks. It's also ridiculous to say that Xojo requires "…intense redesign to get the most from your host processor…" A typical web app design where sessions do not talk to one another and are backed by a SQL database requires literally zero redesign for load balancing across multiple instances.

As for helper apps using IPC vs. threads in a single app: managing shared memory among preemptive threads generally leads to threads waiting idle, or to outright locking and crashing. Unless your problem is embarrassingly parallel, preemptive threads are difficult to use efficiently. This is coming from someone who has used them a lot in past C++/Objective-C projects.* I wish Xojo would add support for preemptive threads, but I hold no delusions about them being a magic bullet. In fact, I tend to agree with the sentiment that threads are evil and that we need other language mechanisms to manage parallel processing.

As for hiccups…the database plugins yield time to Xojo for cooperative threading…fibers…and the socket classes make very efficient use of events. I’ve never had a ‘hiccup’ due to socket traffic. That includes some custom data server apps for one client that field hundreds of connections at a time on a typical day.

The #1 thing that will cause a Xojo app to 'hiccup' is a call into a library or API, since those don't yield time back to Xojo while working.

  • I was once handed a threaded C++ app that took a week to perform a data analysis task. It was a nightmare design, with threads waiting, racing, locking, crashing. I rewrote it in Xojo (Real Studio at the time) using IPC + helper apps and got the processing time under 24 hours. If I had taken that design and implemented it back in C++ I could have shaved off a little more time, but nobody cared at that point.

Software architecture typically trumps language and compiler. And preemptive threads are not a magic bullet.

Ju Yang - there is no simple answer to that question in any web language. Brad is correct whether we’re talking about Xojo web, PHP, ASP.NET, Node.js…anything.

Test early, test often, and design your app so that it can be scaled when needed.

If you need performance, you will have to run several copies of the Xojo Web app on the machine and do load balancing.

[quote=91205:@Rick Araujo]Based on Brad's lines, let me rephrase: it can't perform well enough for highly demanding tasks on multi-core processors without an intense redesign to get the most from your host processor, splitting tasks into different processes (on different cores) and creating interprocess communication (which delays those "threads" compared with real threading).
Your app hiccups when held up by DB calls, socket calls, and your processing running "in parallel" in fiber mode.

This is true for any app. But for web apps, things can be a little better, because you can split tasks across different hosts using a reverse proxy on the gateway to distribute the incoming requests, at the price of higher CPU costs due to the more demanding infrastructure.[/quote]
For what it’s worth, even if we supported preemptive threads, you’d still need to do extensive refactoring very similar to spinning off console helper apps.

Your argument started wrong, unless we're talking inside the box with Schrödinger's cat. :slight_smile:

Wrong argument. The first part of my argument was just about maximizing processing power, not specifically about web development. That was addressed in the second part of my argument.

Only in the case of bad code, or a bad thread library. If you do it right, that doesn't happen. The threading model exists to be used (and it is), not to be blamed for unsuccessful use. That comes from someone who wrote lots of fully functional multi-threaded code in Object Pascal.

As for the rest. Benchmarks can answer better. http://www.scielo.br/img/revistas/jbchs/v21n9/a05fig01.gif :wink:

[quote=91265:@Rick Araujo]As for the rest. Benchmarks can answer better. http://www.scielo.br/img/revistas/jbchs/v21n9/a05fig01.gif :wink:
[/quote]

That is classic. A benchmark of a calculation with no hint of what the calculation is. Good grief.

OP. The answer to your question is: nobody knows, because nobody except you knows what your application is, where the chokepoints will be, or where it lends itself to various scaling options. People who tell you they know are either lying or ignorant :-).

Ok, I can agree.

Just because you asked: if you really care about what kept the cores busy, here it is. http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-50532010000900005

You claimed that Xojo "…does not use the full power of multi core processors…" and as a result "…can't perform well enough for highly demanding tasks…" My example of a load-balanced web app shows that Xojo can take advantage of multiple cores; I only have to provide one example to disprove your claim. I chose that particular example because we are discussing web apps, and because it is also sufficient to disprove your later claim that Xojo apps need "…intense redesign…" to take advantage of multiple cores. The level of redesign necessary depends on the problem being solved, and may be zero.

This is an incredibly naive statement. For problems which are not embarrassingly parallel, a valid thread design can be difficult…some would argue impossible…to achieve. The question of how best to enable programmers to take full advantage of multiple cores is still being debated in academic circles precisely because threads are so difficult to debug and manage.

Your example fits the definition of embarrassingly parallel and could be split among helper apps in Xojo as easily as among threads in another language. From the paper: "The two tasks (TASK(1) and TASK(2)) created in script 3 split the calculations into two threads. The first thread carries out the pseudoinverse calculations for the first m/2 matrices (i ranging from 1 to m/2 in script 2) and the second thread carries out the pseudoinverse calculations for the last m/2 matrices (i ranging from m/2 + 1 to m in script 2)." Parallel processing doesn't get any easier than this.

Threads vs. processes is not as simple as "threads good; processes bad." For some problems, multiple processes are faster and more efficient than threads. If you have a problem for which the obvious solution is threads, then Xojo is not your language, unless you want to implement the threaded portion as a plugin or external library written in another language. But that does not lead to the conclusion that Xojo "…can't perform well enough for highly demanding tasks…"

I don't mean to hijack this post, but does anyone know of any info on how to set up this type of load balancing on Windows? Do you need to run the separate copies of the Xojo web app on different ports, and then in effect create a load balancer that listens on one port and talks to the different Xojo web app ports?

[quote=91318:@Nathan Wright]I don't mean to hijack this post, but does anyone know of any info on how to set up this type of load balancing on Windows? Do you need to run the separate copies of the Xojo web app on different ports, and then in effect create a load balancer that listens on one port and talks to the different Xojo web app ports?
[/quote]

That’s one approach. You could also do spillover. Have a primary app that clients connect to. If a Session is started that makes it too busy, it redirects the client to an available spillover app.
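
The spillover idea can be sketched in a few lines. This is an illustration only (Python rather than Xojo, and the capacity threshold and instance counts are made-up values), not anyone's actual implementation:

```python
# Spillover routing sketch: the primary instance keeps accepting sessions
# until it reaches a capacity threshold, then new clients get redirected
# to the first spillover instance that still has room.

def pick_instance(session_counts, capacity=50):
    """session_counts: current session counts, primary instance first.
    Returns the index of the instance a new client should be sent to,
    or None if everything is full."""
    for i, count in enumerate(session_counts):
        if count < capacity:
            return i
    return None

# Primary (index 0) still has room, so new clients stay on it:
print(pick_instance([30, 0, 0]))    # 0
# Primary and first spillover full -> send to the second spillover:
print(pick_instance([50, 50, 12]))  # 2
```

In a real WE app, the redirect itself would be a ShowURL to the chosen instance's address rather than a return value.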

Hi Nathan,

exactly.

I have a system which runs at 100-500 users at any one time. Incoming port 80 goes only to a load-balancing program (also written in WE). It keeps a count of the number of connections currently on the main apps, which it requests from each app via an EasyTCP connection every minute.

Whichever has the fewest gets the user redirected to it, via a script sent over port 80 that sends them to port 9000, 9001, 9002, 9003, etc.
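
The least-connections choice boils down to something like this (a Python sketch for illustration; in the real setup the counts come back over EasyTCP, which is not modeled here, and the ports are the ones mentioned above):

```python
# Least-connections dispatch sketch: the balancer polls each app
# instance for its connection count and redirects the next user to
# the instance with the fewest.

def least_loaded(counts_by_port):
    """counts_by_port: dict of {port: current connection count}.
    Returns the port of the least-loaded instance."""
    return min(counts_by_port, key=counts_by_port.get)

counts = {9000: 42, 9001: 17, 9002: 33, 9003: 25}
print(least_loaded(counts))  # 9001
```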

I am at home now but if you need help I can post some code on Monday.

Downside :
You can configure a browser to NOT do forwarding (quite a few large organizations do this by default), so their browser just sits there. Has anyone figured out an easy way around this?

Upside :
If the EasyTCP connection drops, you assume the app has crashed and can restart it.
You can run many apps; we run 4 as a base and have another 4 for when the load is heavy.

I have found WE can really only handle fewer than 100 connections, and will happily handle 50 or so. I try to balance so that a new server comes in at 200, 250, 300, 350 users, and after that it is "take what you can get".
Keep in mind I use console apps to handle most of the real load. For example, if a report needs to be generated, it is sent to a console app, and a timer looks for a "finished.txt" file to be created; the app then sends the user the "result.pdf". This way the WE app never really does much work except to show the UI.

TIPS:
Use lots of console apps, with timers to see if they have finished. You want the main app's CPU usage to stay as low as possible to maintain speed for all users; this is where you will spend most of your time. If one user requests a report which uses 100% CPU, even for a few seconds, it kills the UI for all the other users, which leads to lots of hate mail. Console apps are great, and once you start with them you will find it a simple process to change existing code to this "multi-thread" approach. When you are really into it, you can parallelize the console apps themselves: for example, our site generates labels, and it is common for us to run 4 console apps to generate one set of labels. That takes roughly 1/4 of the time, and since Windows works out which core of the CPU has the least load, Windows does the load balancing for you.
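
The "console app plus timer" pattern amounts to: hand the heavy work to a separate process, then poll for a sentinel file instead of blocking the UI. A minimal sketch (Python for illustration; the file names are the ones from the tip above, and the worker is simulated rather than being a real report generator):

```python
import os
import time

def wait_for_sentinel(path, timeout=10.0, interval=0.1):
    """Poll for a sentinel file, the way a Xojo Timer would check for
    'finished.txt' on each tick. Returns True once the file appears,
    False if the timeout expires first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

# In the real setup a console helper writes finished.txt when the
# report is done; here we simulate that by creating the file ourselves.
open("finished.txt", "w").close()
print(wait_for_sentinel("finished.txt"))  # True
os.remove("finished.txt")
```

In the WE app the equivalent polling happens inside a Timer's Action event, so the session stays responsive while the helper grinds away on another core.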

Try to use SQL commands which end in "LIMIT 0,50". If you don't need to fill a listbox with hundreds of rows of data, then don't; most people only look at the top 20 rows anyway. Give them an option to "Show All" and pop up a warning that "this may take a while". It also slows the whole thing down for all the other users on that app (port 9000), so use it sparingly, as it hogs the CPU.
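
The LIMIT advice in query-building form (a sketch; the table and column names here are invented placeholders, and MySQL's `LIMIT offset,count` syntax is assumed):

```python
# Build a paginated SELECT so the listbox only ever pulls one page of
# rows instead of hundreds. "books" and "title" are placeholder names.

def paged_query(table, page, page_size=50):
    """Return a SELECT for page N (0-based) of page_size rows."""
    offset = page * page_size
    return f"SELECT * FROM {table} ORDER BY title LIMIT {offset},{page_size}"

print(paged_query("books", 0))  # SELECT * FROM books ORDER BY title LIMIT 0,50
print(paged_query("books", 2))  # SELECT * FROM books ORDER BY title LIMIT 100,50
```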

Bring apps in as you need them to speed up the whole process; running 8 apps at 25 users each runs slower than 4 apps at 50. It just does.

When you compile the app, use a build script that changes the app name and port number so you can compile it 4 times.
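
The name/port bookkeeping that a build script like that needs can be sketched as follows (Python for illustration; the app name and base port are invented, and in practice an IDE build script would feed these values into four successive compiles):

```python
# Generate (app name, port) pairs for N load-balanced builds.

def instance_plan(base_name, base_port, count):
    """Return [(name, port), ...] for each instance to compile."""
    return [(f"{base_name}{i + 1}", base_port + i) for i in range(count)]

for name, port in instance_plan("MyWebApp", 9000, 4):
    print(name, port)
# MyWebApp1 9000, MyWebApp2 9001, MyWebApp3 9002, MyWebApp4 9003
```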

You don't have to run all the WE apps on one machine; you can load balance over multiple machines. I use 2 apps on each of 2 machines, and then add another 2 to each machine when required. There is nothing to stop me adding machines when required (probably later this year) simply by cloning an existing machine and replacing the WE apps. You just set your router to forward a group of ports to the other machine: 9000-9003 to machine A, 9004-9007 to machine B, and presto, scalable load balancing.

Have your SQL server on a dedicated machine. Make sure you don’t use the standard port and don’t have it open to the world.

Remember that in Windows each instance of the WE app will be running on a separate core of the CPU, so 8 cores means you can happily run 2 or 4 WE apps without them affecting each other. Stay away from WE on single-core processors; you really need at least 4 cores to have any chance with the above.

Hope this helps - there is lots more but you will work it out.

Damon

I wouldn’t create a custom load balancer without a specific need in mind. There are existing solutions out there, some of them free and open source.

Use a load balancer that does not do simple forwarding.

Yes, thank you!

Don't forget to code for session stickiness. An example where you might need sticky sessions is a shopping cart: the session needs to persist to the same load-balanced server, otherwise the shopping cart might come up empty.
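
One common way to get stickiness without shared state is to hash a stable client identifier to a backend, so the same session always lands on the same instance. A sketch (Python for illustration; real balancers typically hash the client IP or a cookie, and the backend addresses here are invented):

```python
import hashlib

def sticky_backend(session_id, backends):
    """Map a session identifier to the same backend every time by
    hashing it. md5 is used only as a stable, non-cryptographic hash;
    note that changing the backend list reshuffles the mapping."""
    digest = hashlib.md5(session_id.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

backends = ["app:9000", "app:9001", "app:9002", "app:9003"]
# The same session id always maps to the same instance:
print(sticky_backend("session-abc123", backends))
print(sticky_backend("session-abc123", backends))
```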

[quote=91333:@Brad Hutchings]You could also do spillover. Have a primary app that clients connect to. If a Session is started that makes it too busy, it redirects the client to an available spillover app.
[/quote]

The spillover is nifty.

You could have a master app entry point that sends users to apps that can even be on different servers. This would require it to know how many users were assigned to each copy, to avoid overloading the spillover sites. Interprocess communication could be used to monitor active sessions.

Such an architecture would be highly scalable :slight_smile: