Request for true multi-core threads

In short: “Stop dreaming.” :innocent:


This. This right here. Björn hit the nail on the head. Software development has changed and will keep changing, and multi-core is where we are right now and will be for the future. If running a business well means “skating to where the puck will be”, then Xojo is betting we’ll see a resurgence of high-frequency, low-core-count processors. That’s simply not a bet I’d take.

Regarding the pro vs. hobbyist thing, it doesn’t make sense to me to essentially have planned obsolescence for your users. It’s basically saying “welcome, here’s our tool, but if you get really good with it, you’ll need to go elsewhere.” The truth of the matter is I’ve outgrown Xojo myself, but it’s still my tool of choice because it’s still the best cross-platform tool I’ve found. The market has a lot of competition, but none of them really tick the boxes like Xojo does. But Xojo isn’t cross-idiom, and that really hurts: developing for Mac, Windows, iOS, and Android requires three separate projects, and those projects don’t share code well. I am not a proponent of using the exact same code for all targets; that won’t end well. But they need to be able to exist in the same project.

My post has become off-topic, but getting back to my original point: Xojo does some things really well. I’d say better than anyone else. So good that I fight with its missing capabilities. So good that I tell my users the same ■■■■■■ answer that I have since I launched in 2016: “my dev tool doesn’t do mobile.”

And yet, I’ve outgrown it.

(Edit: that’s funny, I didn’t swear there, but whatever.)


@Thom_McGrath Somehow that ended up as a watched word, but I have removed it from the list and a few others that seemed innocuous.


Now you have to share the list of innocuous words. :grin:


I feel your pain. Somehow the ease of use that makes Xojo so attractive turns into the opposite once you reach development levels beyond the core technologies. The developer needs to become an OS expert to access contemporary computing features, and often has to go a long way to achieve simple tasks.

Experimenting with NSBlockOperation, I was shocked to see how easy it is to set up and run a concurrent operation on macOS: basically two API calls. But of course you run into Xojo’s framework locking limitations, so that’s not really a Xojo solution.
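The macOS calls themselves are outside Python, of course, but for comparison, Python’s `concurrent.futures` has a similarly terse two-call feel. This is my own sketch, not from the thread, and note that Python threads are GIL-bound, so this gives concurrency rather than true multi-core parallelism (which rather mirrors the complaint in this topic):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # Stand-in for the real operation
    return x * x

# Roughly the "two API calls" feel: create the pool, submit the operation.
with ThreadPoolExecutor(max_workers=4) as pool:
    future = pool.submit(work, 12)   # queued and run on a background thread
    result = future.result()         # wait for completion

print(result)   # 144
```

Swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` is the one-word change that makes it use separate cores, at the cost of process startup and pickling overhead.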

Thomas Tempelmann gave a talk about background tasks a few years ago, and I wish I had understood his approach and sophisticated code, not to mention the other pieces and possibilities addressed in this thread. How much I wish Xojo would tap that potential. But I am afraid the feeling of having outgrown the tool will persist.


So I feel called out now :thinking:

I wrote an app and had exactly this problem. I have a Mac with 24 cores, and wrote a few console apps plus a scheduler which controls everything. The app’s GUI does not have to run; everything runs with semaphores and mutexes. This was a bit tricky, but it works. The common data is stored in a database.

In the end the Mac can be loaded with more than 50 console processes, and it starts to think. And this runs in any environment, which is very nice. I just have not tried more yet. Maybe the GUI is overrated, maybe not. At least there IS a way to do multi-core.


Before helpers existed, we created a Xojo app that used console applications to saturate up to 124 cores; once again we used a set of SQLite databases to store the work lists for each core. The controller application splits up the source data and creates the jobs for each core. When complete, it packages the results and saves them before moving on to the next data file to be processed. Available on Mac, Windows, and Linux.


Just imagine what would have happened, back when Intel launched their first multi-core processor, if AMD had followed your logic: “We are fine with single-core processors. At least there IS a way to do multi-core with them: just use 2 CPUs…”

That is NOT a multi-core app; it is just a single-threaded app running in multiple instances.

If only Xojo had multi-core threads… their web apps could be on another level.


Seems to me you’re picking nits here. If the end result is the same - the work is completed faster by using multiple cores - I don’t think it matters exactly how the implementation is structured.


It is more expensive to launch a separate app (which is what a worker is). Secondly, inter-app communication is relatively slow with the APIs Xojo provides for it.

These things limit the real-world use cases for workers.



For basic apps maybe it doesn’t matter, but it is NOT the same.

Karen has a good point: each instance needs to load the whole app into memory. Again, for home use it doesn’t matter, but for real-world apps, where each MB of hosting has a cost, it is less efficient and more expensive to have a “but it works” tool.


My largest Xojo project also had a headless helper app (that was created with PureBasic) that did much of the heavy lifting in the background.

Python also has its shortcomings when it comes to threads, but it does have an elegant (imo) module for handling multiprocessing; perhaps Xojo could take inspiration from it:

from multiprocessing import Pool

def foo(name):
    return 'Hello, ' + name

if __name__ == '__main__':
    # Four worker processes, each recycled after a single task
    pool = Pool(processes=4, maxtasksperchild=1)
    # Run foo('Guido') in a worker process and wait for the result
    print(pool.apply_async(foo, ('Guido',)).get())

And this is why I use Xojo… :wink:


In just a few lines, real worker processes are created, a method runs with its arguments on a separate core, and the result is awaited asynchronously…

Compare that to spaghetti code with lots of events, flag variables and helper apps to do “the same”…


I meant that that neat 4 line code sample means absolutely nothing to my uneducated eye. It may do the job, but I have no idea what job it would be doing… :wink:


Not to mention that Xojo’s Shell class which is often used to control helpers (and potentially workers) uses 5% of the CPU itself. Multiply that by how many helpers are running concurrently and…



I work on apps that run on their own machines, not as a web service. And this kind of thing works like a charm (stolen from Stack Overflow):

AFFINITY works with a hexadecimal mask that should allow granular control of all of your processors. Note that the rightmost bit specifies the lowest-order CPU (0).

For the case in question, 0xAA (10101010) requests that your process run using processors 1, 3, 5 and 7, but not 0, 2, 4 or 6. Be sure to leave out the ‘0x’ on the command line.

 start /affinity AA app.exe

Other examples:

 start /affinity 1 app.exe     (only use CPU 0)
 start /affinity 2 app.exe     (only use CPU 1)
 start /affinity 1F app.exe    (only use CPUs 0, 1, 2, 3, and 4)
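A tiny helper of my own (hypothetical, not from the post) that builds such a mask from a list of CPU numbers, matching the 0xAA example above:

```python
def affinity_mask(cpus):
    """Build the hex affinity mask `start /affinity` expects
    from a list of CPU numbers (rightmost bit = CPU 0)."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "X")   # hex, without the "0x" prefix

print(affinity_mask([1, 3, 5, 7]))     # AA
print(affinity_mask([0, 1, 2, 3, 4]))  # 1F
```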

That does not seem to be the case here. I sometimes launch up to 10 shells (mostly running unzip) and they do not use any CPU cycles.

Marc, in xDev 19.2, covered his agonizing introduction to Workers. It seems code that works in development might not work in the compiled app because of scope restrictions, and the only debugging help you get is the message “Error”.

Around the time I was looking at that, I read an article about Core and Pool in Python. It seemed Python was much further along in giving the programmer easy access to multi-core performance when the problem fit that solution.

I know a large part of the world is into “media” - video and such. But some of us are still interested in raw “crank” power: maximizing calculations. Yes, it’s all “calculations”, but I want to give at least half of those cores their own task and have them report back when each has completed it. And if the tool “has a problem with that”, something more than “Error” would be appreciated, especially if the same code ran fine in the development environment.


Error handling for workers is a joke. I made a feature request to get more information: