Considering that creating workers is expensive (especially on Windows - just ask Aaron), would it not make sense to have the option to make them permanent until you send them the quit command? For example, I have a text file of over 1 GB (a complete proteome) from which I would send a “paragraph” (a single protein sequence) to be processed (digested), and an array of words (peptide fragments) would be returned.
The text file can easily contain a hundred thousand paragraphs, so any speed up due to using multiple cores would be wiped out by the cost of creating and destroying workers.
(And just for the record, I know that I can pre-process the large text file and give each worker a larger portion, say 20,000 paragraphs, to process, with each creating an output file for the main app to combine the results … but that then is disk-based transfer, which again is slow … so memory-based data zipping back and forth might be much quicker.)
I have not looked at Workers yet, but couldn’t the solution for this situation be to have the worker idle-sleep for x msec (using App.DoEvents(mSec) in the worker … assuming that is possible?). When the DoEvents loop runs, it can get/send messages. The controlling app occasionally sends the worker either the data for a job or a stay-alive message.
If the worker does not get a stay-alive message or a job within a certain amount of time, it quits, assuming the main app has terminated.
If the App’s stay-alive message to the worker is not answered, it assumes the worker has died and spawns another worker to take its place.
That way the worker can stay alive but not use much CPU when not running a job.
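The keep-alive scheme above can be sketched generically. This is illustrative Python rather than Xojo, and the names (`worker_loop`, `inbox`, `outbox`) and the 5-second timeout are my own: the worker blocks waiting for a message, any message (a job or a bare ping) resets its clock, and silence past the deadline means the main app is presumed dead.

```python
import queue
import threading

# Hypothetical sketch of the stay-alive idea (not a Xojo API): the worker
# blocks on its inbox with a timeout; if nothing arrives in time, it quits
# on the assumption that the controlling app has terminated.
KEEP_ALIVE_TIMEOUT = 5.0  # seconds of silence before the worker gives up

def worker_loop(inbox: queue.Queue, outbox: queue.Queue) -> None:
    while True:
        try:
            msg = inbox.get(timeout=KEEP_ALIVE_TIMEOUT)
        except queue.Empty:
            outbox.put("quitting: no keep-alive")  # main app presumed dead
            return
        if msg == "ping":
            outbox.put("pong")          # answer so the app knows we're alive
        elif msg == "quit":
            return
        else:
            outbox.put("done: " + msg)  # stand-in for processing the job

# Usage: a thread stands in for the separate worker process here.
inbox, outbox = queue.Ueue() if False else queue.Queue(), queue.Queue()
t = threading.Thread(target=worker_loop, args=(inbox, outbox))
t.start()
inbox.put("ping")
print(outbox.get())   # pong
inbox.put("job 1")
print(outbox.get())   # done: job 1
inbox.put("quit")
t.join()
```

In a real version the thread and queues would be replaced by a separate process plus whatever IPC channel the framework offers; the timeout-and-quit logic stays the same.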
As long as you feed the worker, it stays alive, and you can sleep the worker as you would a Thread, so the simple solution might be to develop a communication system where you can tell the worker to stand by.
The ListenForMessages() waits forever while the master is active and fires worker events when receiving msgs+data from the master; optionally you can end it using some kind of EndListeningMessages() or even Quit() (Xojo must intercept it and communicate a “normal ending” to the master before the real quit). Then you process those msgs+data, optionally send msgs+data back, and the master will do what you designed. The IPC implementation can be done using whatever the OS provides, like shared memory (locally), pipes, or even TCP/IP (slow, but great for remote processes on other machines).
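For the pipe flavor of that IPC, here is a minimal stand-in (Python for illustration, not a Xojo API; `WORKER_SRC` and `send_job` are made-up names): the child process plays the worker, looping on its stdin until it receives a quit message, roughly what a ListenForMessages()/Quit() pair would do.

```python
import subprocess
import sys

# The "worker" is a child Python process that loops on stdin, uppercases
# each line it receives (standing in for real job processing), and exits
# cleanly when told to quit.
WORKER_SRC = r"""
import sys
for line in sys.stdin:
    msg = line.strip()
    if msg == "quit":
        break
    print(msg.upper(), flush=True)
"""

proc = subprocess.Popen(
    [sys.executable, "-c", WORKER_SRC],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def send_job(job: str) -> str:
    # One round trip over the pipes: write a job line, read a result line.
    proc.stdin.write(job + "\n")
    proc.stdin.flush()
    return proc.stdout.readline().strip()

print(send_job("peptide"))  # PEPTIDE
proc.stdin.write("quit\n")  # the "normal ending" message
proc.stdin.flush()
proc.wait()
```

The worker process stays alive across any number of `send_job` calls, so the spawn cost is paid exactly once.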
They don’t nap while they’re busy; they’re napped by the OS after a while of idle.
When they’re “napping”, everything about the application runs really slowly. Which means it’s really slow to receive a message, process that message, and tell the OS that it needs to be fully awake to do its job.
It is a multi-second delay, sometimes 5 or more, before the app is awake and operating properly.
I have several that I use internally, and they’re not for mission-critical instantaneous tasks. So when I ask one to do something and there’s a long delay before it starts happening, I don’t care.
Anyway, just my 2¢ on the idea of having workers running permanently awaiting instructions.
Function JobRun(jobData As String) As String Handles JobRun
  System.DebugLog "Received: " + jobData
  ' (the test job itself ran here, taking roughly three seconds)
  System.DebugLog "Nap: " + jobData
  Return jobData
End Function
The main app sends them a number in sequence endlessly.
Here’s what I found: the Workers do not nap, so they (mostly) log their results every three seconds, then instantly log the receipt of the next number.
What does nap is the main app, so it eventually does not respond to the Worker in a timely fashion and you see a delay between the result being sent back and the next jobData’s arrival. Wake the app up and that gap vanishes.
So to maximize performance, disable app nap in the main app while Workers are running.
Other than that, this is a proof of concept for keeping the Workers running in a “standby” mode while waiting for the next jobData to arrive.
This is what we were hoping for with the Worker class. As it is designed now, it’s not of much use to us. In our case, we have thousands of small decryption routines to run. They take milliseconds at a time, but the decryption blocks all threads, and even those small amounts of time make the app feel awful while this is happening. We’ve been forced to simply slow the decryption down to a crawl, as our own attempts with a helper process have proven unreliable. But starting and stopping the worker for each job would be counterproductive. The best we can come up with is batching, but we’d sacrifice progress accuracy to do so.
A long running worker that I could send jobs to as needed would be ideal.
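For what it’s worth, the “long-running worker you send jobs to” pattern exists off the shelf in other environments; a Python process pool is a compact illustration (`decrypt` here is a stand-in XOR, not real crypto). The pool processes are spawned once and reused for every submitted job, so the per-job startup cost disappears, and each completed job still reports individually, which keeps progress accurate without batching.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def decrypt(block: bytes) -> bytes:
    # Stand-in for the real millisecond-scale decryption routine:
    # XOR every byte with a fixed key so the work is trivially checkable.
    return bytes(b ^ 0x5A for b in block)

if __name__ == "__main__":
    jobs = [bytes([i]) * 4 for i in range(100)]
    done = 0
    # Four long-lived worker processes handle all 100 jobs between them.
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(decrypt, j) for j in jobs]
        for f in as_completed(futures):
            done += 1   # per-job progress, no batching required
    print(done)  # 100
```

Because the heavy work runs in separate processes, it never blocks the main app’s threads, which is exactly the property missing from the blocking in-process decryption described above.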