So far, the examples I’ve seen of embracing multi-core computers - like the Mac M4 mini - involve graphics manipulation. I’m waiting for techniques to appear in other computing areas. Maybe, someday, the OS will assess a task and employ more than one core to do the job.
For now, it seems that “Sorting” would be a candidate. Because the “best” technique depends on two factors - the number of items to be sorted and their initial order - I’m guessing an assessment of that starting state would be the first order of business. So there is that overhead.
A Google search, or even a ChatGPT inquiry, will give recommendations for sorting techniques given the initial starting conditions. But those recommendations are based on single-core computation. I wonder whether they would change if multiple cores were involved?
For example, with three cores, I could imagine a scheme that divides the records into three groups, assesses the best technique for each group, uses each core to sort its group simultaneously, and then weaves the groups back together.
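Here is a minimal sketch of that idea in Python, assuming three worker processes, made-up random integer records, and the built-in sort standing in for whatever technique each group would actually get:

```python
# Sketch: split records into three positional chunks, sort each chunk on
# its own core, then "weave" the sorted chunks back together with a merge.
import random
from heapq import merge
from multiprocessing import Pool

def sort_chunk(chunk):
    return sorted(chunk)

if __name__ == "__main__":
    records = [random.randint(0, 1_000_000) for _ in range(30_000)]

    # Break the records into three roughly equal groups by position.
    third = len(records) // 3
    chunks = [records[:third], records[third:2 * third], records[2 * third:]]

    # Sort each group on its own core.
    with Pool(processes=3) as pool:
        sorted_chunks = pool.map(sort_chunk, chunks)

    # The "weaving" step: a 3-way merge of the sorted groups.
    result = list(merge(*sorted_chunks))
    assert result == sorted(records)
```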
Now, that “weaving” could be complicated. What if, instead of just breaking the records into three groups by position - with 300 records (or 30,000), the first 100, the second 100, and the third 100 - a first pass created three groups where everything in the first group was smaller (on the sort index) than everything in the middle group, which in turn was smaller than everything in the third group?
That way, once the first, second, and third groups were sorted, you just put them back together in that order - first, second, third.
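This partition-first variant can be sketched the same way. The two cut points below are estimated from a small random sample - an assumption for illustration, not a tuned choice - and once the value-ranged groups are sorted in parallel, concatenation replaces the merge:

```python
# Sketch: partition records by value so group 1 <= group 2 <= group 3,
# sort each group on its own core, then simply concatenate in order.
import random
from multiprocessing import Pool

def sort_group(group):
    return sorted(group)

def partition_by_value(records, workers=3, sample_size=99):
    sample = sorted(random.sample(records, min(sample_size, len(records))))
    # Pick cut points at the 1/3 and 2/3 positions of the sample.
    cuts = [sample[len(sample) * i // workers] for i in range(1, workers)]
    groups = [[] for _ in range(workers)]
    for r in records:
        i = 0
        while i < len(cuts) and r > cuts[i]:
            i += 1
        groups[i].append(r)
    return groups

if __name__ == "__main__":
    records = [random.randint(0, 1_000_000) for _ in range(30_000)]
    groups = partition_by_value(records)

    with Pool(processes=3) as pool:
        sorted_groups = pool.map(sort_group, groups)

    # No merge needed: the groups are already in value order.
    result = [r for g in sorted_groups for r in g]
    assert result == sorted(records)
```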
I’m guessing there is some threshold of record count and ordering - the parallel speedup would need to be greater than the computing overhead of setting it all up. For example, with fewer than 10,000 records, it might be difficult to beat a standard Shell Sort (one that employs the best gap sequence for the particular record count and ordering), because of the overhead of anything more complex.
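For reference, this is the kind of single-core baseline the parallel scheme would have to beat once its setup overhead is counted. The Ciura gap sequence used here is one commonly cited choice, not necessarily the “best” for any particular record count or ordering:

```python
# A plain single-core Shell Sort using the Ciura gap sequence
# (1, 4, 10, 23, 57, 132, 301, 701), extended by roughly 2.25x
# for larger inputs.
def shell_sort(a):
    gaps = [1, 4, 10, 23, 57, 132, 301, 701]
    while gaps[-1] < len(a) // 2:
        gaps.append(int(gaps[-1] * 2.25))
    for gap in reversed(gaps):
        for i in range(gap, len(a)):
            temp = a[i]
            j = i
            # Gapped insertion sort: shift larger elements right.
            while j >= gap and a[j - gap] > temp:
                a[j] = a[j - gap]
                j -= gap
            a[j] = temp
    return a

data = [9, 2, 7, 1, 8, 3, 6, 5, 4, 0]
print(shell_sort(data))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```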