In a previously written Windows cooperative-thread app, I added the line
Thread1.Type = Thread.Types.Preemptive
before
Thread1.Start
with no other changes.
In my test situation, the app runs without user intervention on a dedicated computer, but does report progress to the screen.
Execution time on a standard dataset increased from 50 seconds for cooperative to 3 minutes 9 seconds for preemptive. Ouch!
The problem is that Thread1.AddUserInterfaceUpdate has become much more time-consuming. In this app it is used to report progress to the screen. Commenting out these calls reduced the time to a mere 5 seconds for cooperative and an even smaller 4 seconds for preemptive.
I can quite confidently say that I have found preemptive threads and AddUserInterfaceUpdate to be amazingly fast. Previously, with cooperative threads, the operation I am doing showed little difference between using threads and using timers. With preemptive threads and AddUserInterfaceUpdate, it flies. I'm doing quite a few graphics updates on canvases in the UserInterfaceUpdate handler.
Are you passing a lot of data in the AddUserInterfaceUpdate event? There may be some overhead from synchronisation between the preemptive threads and the main thread.
Because the thread works a lot faster, is it possible that you are adding duplicate entries to the screen update that weren't happening before? For example, if it is a percentage, are you adding 1% over and over, then 2%, and so on?
It’s worth checking if the value has changed before adding it to the window update event. Just a thought.
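Something like this is what I mean (just a sketch; mLastReportedPct, i, totalRows and ProgressLabel are placeholder names, and it assumes the Pair form of AddUserInterfaceUpdate):

' In the thread's Run event; mLastReportedPct As Integer is a property of the thread, initialised to -1
Var pct As Integer = (i * 100) \ totalRows
If pct <> mLastReportedPct Then
  mLastReportedPct = pct
  Me.AddUserInterfaceUpdate("progress" : pct) ' only fires when the value actually changes
End If

' In the UserInterfaceUpdate event handler, which runs on the main thread
For Each update As Dictionary In data
  ProgressLabel.Text = update.Value("progress").StringValue + "%"
Next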
Good news! Thanks, folks. With an almost trivial rewrite of the AddUserInterfaceUpdate-related code, my test app with full functionality now runs at Cooperative = 6 seconds, Preemptive = 14 seconds.
The problem was threads waiting for each other to complete tasks. Now to study comments in other topics for further improvements.
Agreed, preemptive should be faster. At present only one thread is active, as the original purpose was to keep the user interface alive. My plan is to have System.CoreCount threads running; then preemptive should really shine!
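Roughly what I have in mind (just a sketch; WorkThread is my thread subclass and mThreads() is a property that keeps references to the running threads):

' Launch one preemptive thread per core
For i As Integer = 1 To System.CoreCount
  Var t As New WorkThread
  t.Type = Thread.Types.Preemptive
  t.Start
  mThreads.Add(t)
Next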
As I said in another topic, each core has at least 100% CPU to go on, and almost certainly 200% with hyperthreading, so cores * 200% is the total CPU activity. That is certainly how macOS reports it; I can't remember how it reports in Windows.
Further improvement: Cooperative 8 seconds, Preemptive 9 seconds. I am really liking Xojo.
Will now try this on much bigger datasets.
And, Kem, thanks for the ThreadPool example.
Hi Kem, in the first phase, 100 million datapoints take about 2 hours:
2 * 60 * 60 * 1000 ms / 100,000,000 = 0.072 ms for each datapoint.
So no joy, it seems …
Start:
Loop starts:
  Read in a data line
  Parse the data line according to a set of rules
  Store the resulting datapoint in a database
Repeat loop 100 million times
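In Xojo terms, phase 1 is roughly this (only a sketch; ParseDataLine, dataFile, db and the table name are placeholders, not my actual code):

Var input As TextInputStream = TextInputStream.Open(dataFile)
While Not input.EndOfFile
  Var line As String = input.ReadLine
  Var datapoint As Double = ParseDataLine(line) ' apply the parsing rules
  db.ExecuteSQL("INSERT INTO datapoints (value) VALUES (?)", datapoint)
Wend
input.Close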
Phase 2 is a more likely candidate …
Set initial values for the estimates
Outer loop starts:
  Inner loop starts:
    Read a datapoint from the database
    Perform mathematical operations using the estimates
    Accumulate the results
  Repeat inner loop 100 million times
  Improve the estimates using the accumulated results
Repeat outer loop 15 times
Report the (final) estimates
End:
Each outer loop takes about 2 hours / 15 = 8 minutes, so this could work, except that each outer loop uses the estimates produced by the previous outer loop.
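For what it's worth, the shape of phase 2 in Xojo is roughly this (every name here is a placeholder for illustration, not my actual code); the ImproveEstimates step at the end of each pass is the dependency I mean:

Var estimates() As Double
estimates = InitialEstimates ' starting values
For outer As Integer = 1 To 15
  Var accumulated As Double = 0
  Var rows As RowSet = db.SelectSQL("SELECT value FROM datapoints")
  For Each row As DatabaseRow In rows
    ' the maths on each datapoint uses the current estimates
    accumulated = accumulated + Contribution(row.Column("value").DoubleValue, estimates)
  Next
  estimates = ImproveEstimates(estimates, accumulated) ' the next outer pass depends on this result
Next
ReportEstimates(estimates)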