Worker.Stop doesn't stop?

I’m probably doing something wrong.
When I call Stop, the Worker class keeps running and firing JobRequested.
Is there more that I need to do?

Do you have an infinite loop in the worker?

Nope, but I figured it out. It was all about what the worker had access to.
I guess this takes a little getting used to.

Now that I have it working, I’m trying to figure out how to monitor how many cores it actually uses.
I’m running a long task that involves loading text data from a database (a blob), converting it to JSON, and then into an internal object for optimization. The optimization is slow, and I had hoped the Worker would do a bunch of these in parallel and speed it up. It’s not quite as fast as I wanted.

I think it’s important to understand that Workers won’t automatically make a task support parallel processing; that’s still up to the developer. However…

If you can break a long task into multiple tasks, then workers can really speed it up.

For example, if you’re just processing a single record’s data and it takes a long time, you’ll get some improvement (the main thread isn’t having to give up time slices to your heavy task), but…

If you’re processing 1000 records, you could have five workers process 200 records each (and use five “cores”, assuming the hardware has that many).
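
As a rough illustration (all the names here are made up), splitting the record IDs into per-worker payload strings might look something like this:

```xojo
' Hypothetical sketch: split 1000 record IDs into 5 groups of 200,
' encoding each group as a comma-separated string a Worker job can receive.
Var recordIDs() As Integer
For id As Integer = 1 To 1000
  recordIDs.Add(id)
Next

Const kWorkerJobs = 5
Var jobPayloads() As String ' one payload string per job

Var groupSize As Integer = recordIDs.Count \ kWorkerJobs
For group As Integer = 0 To kWorkerJobs - 1
  Var firstIndex As Integer = group * groupSize
  Var lastIndex As Integer = firstIndex + groupSize - 1
  If group = kWorkerJobs - 1 Then lastIndex = recordIDs.LastIndex ' remainder goes to the last group
  Var ids() As String
  For i As Integer = firstIndex To lastIndex
    ids.Add(recordIDs(i).ToString)
  Next
  jobPayloads.Add(String.FromArray(ids, ","))
Next
```

The payloads could just as easily be JSON (GenerateJSON/ParseJSON) if each group needs to carry more than a list of IDs.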

There are engineers in the forum more capable in this area than I am, who can talk in detail about how best to design parallel processes and what types of tasks are most easily parallelized, but at a high level it’s still up to the Xojo developer to parallelize the task.

Xojo’s just offering a mechanism (via Workers) to make initiating/managing/interacting with the code running in parallel much easier to do.
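
For example, here’s a minimal sketch of the main-app side, assuming a Worker named Worker1 added to the project and a jobPayloads() As String property holding one payload per job (as in the snippet above). If I remember right, returning an empty string from JobRequested is how you tell the worker there are no more jobs:

```xojo
' Worker1.JobRequested event handler — runs in the main app.
' Return the next payload, or "" when there is nothing left to hand out.
Function JobRequested() As String
  If jobPayloads.Count = 0 Then Return ""
  Var payload As String = jobPayloads(0)
  jobPayloads.RemoveAt(0)
  Return payload
End Function

' Worker1.JobRun event handler — runs in the separate worker process,
' so it only sees code and data that the worker has access to.
Function JobRun(data As String) As String
  ' ...do the heavy per-group processing on data here...
  Return data ' the result string is sent back to the main app
End Function
```

You’d kick this off with Worker1.Start.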



I’m in the second situation. Thousands of individual tasks that each take a moderate time, but together take a long time. I was hoping they’d be processed by multiple cores. Do I just start one Worker and it spawns what it needs?

No. You’ll need to come up with logic to divvy them up into X groups, and then run each group on a separate Worker.

For example, you might create a “dispatcher” class which has the logic to:

  1. divide the work into groups (probably some sort of container object like an array or a dictionary)

  2. initialize the workers (passing each one the “container” holding the group it should process)

  3. receive the data back from the Workers

  4. recombine the data and raise an event (which is probably desirable here, so the processing can stay asynchronous)

So if you have 1000 records to process and you know their IDs, you’d call some method in the “dispatcher” class, passing the method the IDs.

The dispatcher class then encapsulates the logic to parallelize the work, and is more likely to be reusable in other projects.
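
As a rough, untested outline (all of these names are placeholders, and the Worker wiring is simplified), such a dispatcher class might look like:

```xojo
' Hypothetical Dispatcher class.
' Properties: PendingJobs() As String, Results() As String, TotalJobCount As Integer
' Event definition: AllJobsDone(results() As String)
' It assumes a Worker whose JobRequested event pulls payloads from PendingJobs
' and whose JobCompleted event forwards each result to JobFinished.

Sub Process(recordIDs() As Integer, groupCount As Integer)
  ' 1. Divide the work into groups, one payload string per group.
  PendingJobs.RemoveAll
  Results.RemoveAll
  Var groupSize As Integer = recordIDs.Count \ groupCount
  If groupSize < 1 Then groupSize = 1
  Var ids() As String
  For i As Integer = 0 To recordIDs.LastIndex
    ids.Add(recordIDs(i).ToString)
    If ids.Count = groupSize Or i = recordIDs.LastIndex Then
      PendingJobs.Add(String.FromArray(ids, ","))
      ids.ResizeTo(-1) ' start the next group
    End If
  Next
  TotalJobCount = PendingJobs.Count
  ' 2. Start the worker; it will ask for one pending payload per job.
  TheWorker.Start
End Sub

Sub JobFinished(result As String)
  ' 3. Receive the data back from the Worker (called from its JobCompleted event).
  Results.Add(result)
  ' 4. When every group has reported back, recombine and raise an event.
  If Results.Count = TotalJobCount Then
    RaiseEvent AllJobsDone(Results)
  End If
End Sub
```

The Worker’s own event handlers would stay thin: JobRequested hands out PendingJobs entries, JobRun does the heavy lifting in the helper process, and the completion event passes each result back to the dispatcher.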
