An update…
I converted my code to multi-core and was, unfortunately, very disappointed. No matter what I tried, it was slow. And I don't mean slower than expected, I mean absurdly slow, like slower than the Baseline.
So I reached out to @William_Yu and filed this Issue. As it was explained to me, the current versions of Xojo use "synchronous locks" that are cross-platform but (very) expensive, so the results I saw are what you'd see in 2024r3.1.
But the good news is, in practically no time at all, William (and Travis) explored, identified, and fixed the issue by switching to "platform-specific locks". (I don't actually know the difference.) And that change, provided no problems come up in testing, will appear in a future release.
The result? My previous code was processing at about 14M rows per second when compiled through 2024r3. I'll let you be the judge of the new numbers.
Baseline.Process
Parsing file 'input.txt'
Reading row 10000000
Parsed 10,000,000 rows in 6.3 seconds, rate = 1,578,969 rows/second
Calculating 413 stations
Output file = 'output.txt'
Calculated 10,000,000 rows from 413 stations in 0.1 seconds, rate = 83,420,693 rows/second
Processed 10,000,000 rows in 6.5 seconds, rate = 1,549,494 rows/second
----------------------------------------
Optimization_KemTekinay_1.Process
Optimization_KemTekinay_1.ProcessIntoDictionary
Parsing file 'input.txt'
Reading row 10000000
Parsed 10,000,000 rows in 0.6 seconds, rate = 16,423,869 rows/second
Avg per chunk: 42.5 ms
Calculating 413 stations
Output file = 'output.txt'
Calculated 10,000,000 rows from 413 stations
Processed 10,000,000 rows in 0.6 seconds, rate = 16,360,311 rows/second
Speedup : 10.6x faster than Baseline
----------------------------------------
Optimization_KemTekinay_MT.Process
Optimization_KemTekinay_MT.ProcessIntoDictionary
Parsing file 'input.txt'
Oct 24 12:05:40 Xojo1BRC[63660] <Warning>: Process chunk: 43.0 ms
Reading row 729638
Oct 24 12:05:40 Xojo1BRC[63660] <Warning>: Process chunk: 44.4 ms
Reading row 1459081
Oct 24 12:05:40 Xojo1BRC[63660] <Warning>: Process chunk: 42.4 ms
Reading row 2188758
Oct 24 12:05:40 Xojo1BRC[63660] <Warning>: Process chunk: 42.6 ms
Reading row 2918201
Oct 24 12:05:40 Xojo1BRC[63660] <Warning>: Process chunk: 45.8 ms
Reading row 3647526
Oct 24 12:05:40 Xojo1BRC[63660] <Warning>: Process chunk: 44.0 ms
Reading row 4377034
Oct 24 12:05:40 Xojo1BRC[63660] <Warning>: Process chunk: 41.3 ms
Reading row 5106755
Oct 24 12:05:40 Xojo1BRC[63660] <Warning>: Process chunk: 49.2 ms
Reading row 5836465
Oct 24 12:05:40 Xojo1BRC[63660] <Warning>: Process chunk: 30.0 ms
Reading row 6352286
Oct 24 12:05:40 Xojo1BRC[63660] <Warning>: Process chunk: 42.0 ms
Reading row 7081819
Oct 24 12:05:40 Xojo1BRC[63660] <Warning>: Process chunk: 42.0 ms
Reading row 7811335
Oct 24 12:05:40 Xojo1BRC[63660] <Warning>: Process chunk: 42.9 ms
Reading row 8541130
Oct 24 12:05:40 Xojo1BRC[63660] <Warning>: Process chunk: 44.4 ms
Reading row 9270562
Oct 24 12:05:40 Xojo1BRC[63660] <Warning>: Process chunk: 44.3 ms
Reading row 10000000
Parsed 10,000,000 rows in 0.1 seconds, rate = 94,795,752 rows/second
Calculating 413 stations
Output file = 'output.txt'
Calculated 10,000,000 rows from 413 stations
Processed 10,000,000 rows in 0.1 seconds, rate = 93,441,352 rows/second
Speedup : 60.3x faster than Baseline
----------------------------------------
And before anyone asks, no, I don't have access to a Xojo version used to compile this project so I cannot do any further testing at this time.
But boy am I excited about it.