Considering that creating workers is expensive (especially on Windows - just ask Aaron), would it not make sense to have the option to keep them alive permanently until you send them a quit command? For example, I have a text file of over 1 GB (a complete proteome) where I would send a “paragraph” (a single protein sequence) to be processed (digested), and an array of words (peptide fragments) would be returned.
The text file can easily contain a hundred thousand paragraphs, so any speed-up from using multiple cores would be wiped out by the cost of creating and destroying workers.
(And just for the record, I know that I can pre-process the large text file and give each worker a larger portion, say 20,000 paragraphs, to process, with each creating an output file for the main app to combine the results … but that is then disk-based transfer, which again is slow … so zipping the data back and forth in memory might be much quicker.)
I have not looked at Workers yet, but could the solution for this situation not be to have the worker idle/sleep for x msec (using App.DoEvents(mSec) in the worker … assuming that is possible?)? When the DoEvents loop runs, it can get/send messages. Every so often the controlling app either sends the data for a job to the worker or sends a stay-alive message.
If the worker does not get a stay-alive message or a job within a certain amount of time, it quits, assuming the main app has terminated.
If the App’s stay-alive message to the worker is not answered, it assumes the worker has died and spawns another worker to take its place.
That way the worker can stay alive but not use much CPU when not running a job.
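Something like this rough sketch is what I have in mind; it assumes a console helper where App.DoEvents is available, and GetPendingMessage/RunJob/SendResult are placeholders for whatever transport you end up using, not existing Xojo calls:

' Hypothetical worker-side keep-alive loop (a sketch, not the shipping Worker API).
' GetPendingMessage, RunJob and SendResult are placeholders for your own transport.
Const kTimeoutSecs = 30
Dim lastContact As Double = System.Microseconds

Do
  App.DoEvents(50) ' idle cheaply between checks

  Dim msg As String = GetPendingMessage() ' "" if nothing arrived
  Select Case msg
  Case ""
    ' nothing to do this pass
  Case "stayalive"
    lastContact = System.Microseconds
  Case "quit"
    Exit Do
  Case Else
    lastContact = System.Microseconds
    SendResult(RunJob(msg)) ' treat anything else as job data
  End Select

  ' No job and no stay-alive for a while: assume the main app is gone.
  If (System.Microseconds - lastContact) / 1000000 > kTimeoutSecs Then
    Exit Do
  End If
Loop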
As long as you feed the worker, it stays alive, and you can sleep the worker as you would a Thread, so the simple solution might be to develop a communication system where you can tell the worker to stand by.
During the initial tests I explained exactly what Markus and most people will need. A Worker, as it is, is half-baked. It needs IPC: inter-process communication, messaging, and events.
My only concern with this on the Mac is at some point the OS may “nap” the worker, and at that point it may take longer to wake up (especially with IPC) than simply launching a new instance.
The ListenForMessages() waits forever while the master is active and fires worker events when receiving msgs+data from the master; optionally you can end it using some kind of EndListeningMessages() or even Quit() (Xojo must intercept it and communicate a “normal ending” to the master before the real quit). Then you process those msgs+data and optionally send msgs+data back, and the master will do what you designed. The IPC implementation can be done using whatever the OS provides, like shared memory (locally), pipes, or even TCP/IP (slow, but great for remote processes on other machines).
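Until something like that exists, a rough approximation of the worker side can already be sketched with the existing IPCSocket class in a console helper. This is only a sketch under my own assumptions (one message per line, a ProcessJob routine you write yourself, and the main app listening on the same Path); it is not an established API:

' Worker-side sketch using IPCSocket; assumes the main app Listens on the same Path.
Dim sock As New IPCSocket
sock.Path = SpecialFolder.Temporary.Child("myapp_worker.ipc").NativePath
sock.Connect

Dim delim As String = EndOfLine.UNIX ' assumed protocol: one message per line
Dim buffer As String
Dim done As Boolean

Do
  App.DoEvents(20) ' idle cheaply while the socket is serviced
  sock.Poll
  buffer = buffer + sock.ReadAll

  Dim nl As Integer = buffer.IndexOf(delim)
  While nl >= 0 And Not done
    Dim msg As String = buffer.Left(nl)
    buffer = buffer.Middle(nl + 1)
    If msg = "quit" Then
      done = True
    Else
      sock.Write(ProcessJob(msg) + delim) ' ProcessJob is your own job code
    End If
    nl = buffer.IndexOf(delim)
  Wend
Loop Until done

sock.Close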
They don’t nap while they’re busy; the OS naps them after they’ve been idle for a while.
When they’re “napping”, everything about the application runs really slowly, which means it’s really slow to receive a message, process that message, and tell the OS that it needs to be fully awake to do its job.
It is a multi-second delay, sometimes 5 or more, before the app is awake and operating properly.
I have several that I use internally, and they’re not for mission-critical, instantaneous tasks. So when I ask one to do something and there’s a long delay before it starts happening, I don’t care.
Anyway, just my 2¢ on the idea of having workers running permanently, awaiting instructions.
Maybe a “ping the master to check if it’s alive” every 5 seconds of idle would keep them out of a nap, and if a “Master not found” condition occurs twice in sequence, just quit.
Function JobRun(jobData As String) As String Handles JobRun
  System.DebugLog("Received: " + jobData)
  ' Simulate a three-second job before reporting back.
  Thread.SleepCurrent(3000)
  System.DebugLog("Nap: " + jobData)
  Return jobData
End Function
The main app sends them a number in sequence endlessly.
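For reference, the main-app side of the test is roughly this (a sketch assuming the standard JobRequested/JobCompleted event handlers; mNextNumber is an Integer property I added for the counter):

Function JobRequested() As String Handles JobRequested
  ' Hand the Worker the next number in the endless sequence.
  mNextNumber = mNextNumber + 1
  Return Str(mNextNumber)
End Function

Sub JobCompleted(result As String) Handles JobCompleted
  ' Log each result so any gaps caused by App Nap show up in the timestamps.
  System.DebugLog("Completed: " + result)
End Sub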
Here’s what I found: the Workers do not nap, so they (mostly) log their results every three seconds, then instantly log the receipt of the next number.
What does nap is the main app, so it eventually does not respond to the Worker in a timely fashion and you see a delay between the result being sent back and the next jobData’s arrival. Wake the app up and that gap vanishes.
So to maximize performance, disable app nap in the main app while Workers are running.
Other than that, this is a proof of concept for keeping the Workers running in a “standby” mode while waiting for new jobData to arrive.
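For anyone who wants to do that now: App Nap can be suppressed from the main app with a declare into NSProcessInfo. The following is only a sketch of how that could look (DisableAppNap and the mActivityToken Ptr property are my own names, not a Xojo API):

' Hypothetical helper to opt the main app out of App Nap on macOS.
' mActivityToken should be a Ptr property so the returned activity stays referenced.
Public Sub DisableAppNap(reason As String)
  #If TargetMacOS Then
    Declare Function NSClassFromString Lib "Foundation" (name As CFStringRef) As Ptr
    Declare Function processInfo Lib "Foundation" Selector "processInfo" (cls As Ptr) As Ptr
    Declare Function beginActivity Lib "Foundation" Selector "beginActivityWithOptions:reason:" (obj As Ptr, options As UInt64, reason As CFStringRef) As Ptr
    Declare Function retainObj Lib "Foundation" Selector "retain" (obj As Ptr) As Ptr

    Const kUserInitiatedAllowingIdleSystemSleep = &hEFFFFF ' NSActivityUserInitiatedAllowingIdleSystemSleep

    Dim info As Ptr = processInfo(NSClassFromString("NSProcessInfo"))
    mActivityToken = retainObj(beginActivity(info, kUserInitiatedAllowingIdleSystemSleep, reason))
  #EndIf
End Sub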
This is what we were hoping for with the Worker class. As it is designed now, it’s not of much use to us. In our case, we have thousands of small decryption routines to run. They take milliseconds at a time, but the decryption blocks all threads, and even those small amounts of time make the app feel awful while this is happening. We’ve been forced to simply slow the decryption down to a crawl, as our own attempts with a helper process have proven unreliable. But starting and stopping the worker for each job would be counterproductive. The best we can come up with is batching, but we’d sacrifice progress accuracy to do so.
A long running worker that I could send jobs to as needed would be ideal.
I thought App Nap was just about UI apps so I would not have expected the helpers to be affected.
In any case this shows why the framework should have a method (with a note on the worker class docs) to enable and disable App Nap on the Mac.
Workers are supposed to make using multiple cores simple for the cases when you need multiple cores for good performance.
But without this method or OS-specific knowledge, many would get frustrated with the performance they see on the Mac for non-trivial usage, not realize why, and blame the Worker.
Speaking of worker performance, shared memory support would be a HUGE boost as well when there is a lot of data to be passed back and forth!!!