AFAIK you can have as many shared memory objects as you want.
How I had designed my solution was to put all the data into the shared block, launch the helper, and pass it the id of the shared block as a launch argument. The helper launches, knows where to look for the data, does its job, puts the values back where it got them, and exits. The controller gets a signal that the helper has exited, then does what it needs to.
If you’re going to have multiple helpers, you can give them the same shared memory, but perhaps include offsets & lengths of data to process.
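The thread is about Xojo on macOS, so this is only a rough analog, but the same controller/helper protocol can be sketched in Python with `multiprocessing.shared_memory`: the controller creates a named block, passes its id to the helper as a "launch argument" (here, a process argument), and the helper writes its results back in place before exiting. The offset/length parameters mirror the multiple-helpers suggestion above; the doubling "work" is a toy stand-in.

```python
# Hedged Python analog of the controller/helper shared-memory design.
# The block name plays the role of the shared block id passed at launch.
from multiprocessing import Process, shared_memory

def helper(shm_name: str, offset: int, length: int) -> None:
    # Helper attaches to the block by id, processes its slice in place,
    # puts the values back where it got them, and exits.
    shm = shared_memory.SharedMemory(name=shm_name)
    try:
        for i in range(offset, offset + length):
            shm.buf[i] = shm.buf[i] * 2 % 256  # toy "work": double each byte
    finally:
        shm.close()

if __name__ == "__main__":
    # Controller: create the shared block and fill it with the input data.
    shm = shared_memory.SharedMemory(create=True, size=8)
    shm.buf[:8] = bytes([1, 2, 3, 4, 5, 6, 7, 8])

    # Launch the helper, handing it the block id plus an offset & length.
    p = Process(target=helper, args=(shm.name, 0, 8))
    p.start()
    p.join()  # controller learns the helper has exited

    print(list(bytes(shm.buf[:8])))  # -> [2, 4, 6, 8, 10, 12, 14, 16]
    shm.close()
    shm.unlink()
```

With several helpers you would launch multiple `Process` instances against the same block, each with a non-overlapping offset/length pair, exactly as suggested above.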
You must use a controller object: NSTask if your application is sandboxed, or a Xojo Shell instance if not. When set to asynchronous, the controller object lets you send and receive messages, and also tells you when the helper has terminated and whether it succeeded or ran into errors.
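The controller-object role (NSTask / asynchronous Xojo Shell) can be sketched in Python with `subprocess.Popen`: launch the helper, exchange messages over its pipes, and inspect the exit code to learn whether it terminated cleanly. The one-line helper script here is hypothetical, purely for illustration.

```python
# Hedged analog of an asynchronous controller object (NSTask / Xojo Shell).
import subprocess
import sys

# Hypothetical helper for illustration: reads one message, echoes a reply.
HELPER_CODE = "import sys; line = sys.stdin.readline(); print('got:' + line.strip())"

proc = subprocess.Popen(
    [sys.executable, "-c", HELPER_CODE],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
# Send a message to the helper and collect its reply.
out, _ = proc.communicate("hello\n")
print(out.strip())                      # -> got:hello
print("exit status:", proc.returncode)  # 0 means the helper succeeded
```

In a real controller you would poll `proc.poll()` (or watch for termination) rather than block in `communicate`, which is the asynchronous behavior described above.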
All of the above vanishes when you use GCD. This is Apple's example code; in basic summary it takes a long loop, chops it up into segments, and gives each core a segment until it's all done. I used a memory block to share data between the segments and expected to run into some kind of locking issue, but I didn't. It just worked and completed the loop in 1/7th of the time. It blew my mind how simple it was, compared to the solution I'd spent a long time building in Xojo while fighting with various components.
I am not sharing this to poo-poo Xojo; it's more an illustration of why I believe Xojo needs to take multi-core processing very seriously. I would love to post an example like below in native Xojo code that would come close to it in terms of performance.
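The GCD pattern described (`dispatch_apply` / `concurrentPerform`) chops one long loop into per-core segments. Apple's actual example is Swift/C and isn't reproduced here; the following is only a hedged Python analog of the same divide-the-range idea, using a process pool in place of GCD's queue.

```python
# Hedged Python analog of GCD's dispatch_apply: split a long loop into one
# segment per core and run the segments in parallel.
from concurrent.futures import ProcessPoolExecutor
import os

def segment_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))  # toy per-segment work

if __name__ == "__main__":
    n = 1_000_000
    cores = os.cpu_count() or 4
    step = n // cores
    # Chop [0, n) into contiguous segments, the last one absorbing the remainder.
    segments = [(i * step, n if i == cores - 1 else (i + 1) * step)
                for i in range(cores)]
    with ProcessPoolExecutor(max_workers=cores) as pool:
        total = sum(pool.map(segment_sum, segments))
    # The segments cover the whole range exactly once.
    print(total == sum(i * i for i in range(n)))  # -> True
```

GCD schedules blocks on threads within one process (so the loop can touch one shared memory block directly), whereas this sketch uses processes; the segmentation logic is the part being illustrated.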
[quote=459775:@Sam Rowlands]How I had designed my solution was to put all the data into the shared block, launch the helper and pass it the id of the shared block as a launch argument. Helper launches, knows where to look for the data, does its job and puts the values back to where it got them and exits. Controller gets a signal that the helper has exited, then does what it needs to.
This is enormously helpful, thanks for taking the time to make it clear, now I just need to figure out which data will benefit from this the most!
Just to provide more food for thought, while Sam appears to be using it above to pass data to a helper and pass results back when done, in some of my cases the helpers are long running processes (hours or even days). The helpers (console apps) use a timer and periodically put status information in the shared memory, then the GUI uses another timer to periodically read the shared memory and update status information in the GUI app.
I had been getting the status updates via Aloe Express calls to the helpers. And the programs still support those for ad hoc updates on demand, and to send new instructions to the helpers. But the shared memory approach eliminated all the network calls for periodic status information.
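The periodic-status idea above can also be sketched in Python (the actual programs are Xojo console helpers and a GUI; all names here are illustrative): a long-running helper writes a progress counter into shared memory on its own schedule, and the front end polls the same block on a timer instead of making network calls.

```python
# Hedged sketch of periodic status reporting via shared memory.
import struct
import time
from multiprocessing import Process, shared_memory

def long_running_helper(shm_name: str, steps: int) -> None:
    shm = shared_memory.SharedMemory(name=shm_name)
    try:
        for done in range(1, steps + 1):
            time.sleep(0.01)                       # stand-in for real work
            # Publish current progress. A real app would guard this with a
            # lock or an atomic layout to avoid torn reads.
            shm.buf[:4] = struct.pack("<I", done)
    finally:
        shm.close()

if __name__ == "__main__":
    status = shared_memory.SharedMemory(create=True, size=4)
    worker = Process(target=long_running_helper, args=(status.name, 50))
    worker.start()
    while worker.is_alive():  # the "GUI timer": poll the shared block
        done, = struct.unpack("<I", bytes(status.buf[:4]))
        print(f"progress: {done}/50")
        time.sleep(0.05)
    worker.join()
    done, = struct.unpack("<I", bytes(status.buf[:4]))
    print("final:", done)  # helper wrote 50 before exiting
    status.close()
    status.unlink()
```

As in the posts above, the shared block handles only the routine periodic updates; an on-demand channel (like the Aloe Express calls mentioned) would still be useful for ad hoc queries and sending new instructions.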