Hmpf – now I have my matching working, i.e. I can identify the position and orientation of all scanners, but I only find 30 beacons when I should find 79.
I had something like that when I was trying to be clever and not change the coordinates of all the beacons.
Yeah, I tried to be clever, too. At least the 30 beacons are found within seconds, not minutes.
Hmm, that gives me an idea what might be wrong. Kinda relates to what you wrote earlier.
The test data takes seconds; it’s the actual data with its 36 scanners that takes a while, at least with my code.
Thinking back, I also had to fix bugs in my orientation code along the way.
So I managed to have the test sample find the correct number of 79 in about one second.
Alas, the same code finds 846 beacons with the target sample in 95s, but it’s the wrong number.
Huh: I just compared my results against yours, @Kem_Tekinay.
Surprise! Your scanner data is different from mine. You have 36, I have 33 scanners.
And if I put my data into your code, then we find the same scanners (with mine still 8 times faster, hehe).
Were you aware that we get different data for the challenges?
Yes, I know we get different data. On one of the challenges (finding the best path), my data presented challenges that others did not, for example, so my code worked more slowly but got the right answer where someone else’s code worked much faster but could only get the right answer for their data.
I’m also not surprised that my code is slow. I wasn’t looking to optimize it, just solve the puzzle, although I am surprised that yours is that much faster. I’d like to see how you did it.
About the performance: I used fewer combinations. You try every one of the 24 directional combinations, whereas I only swap out the xyz order (6 cases) and then map the distances into a dictionary for each of the 3 dimensions, with the key being the distance and the value being a counter. If that counter reaches 12, I set a flag for that dimension. Once all 3 dimensions have their flag set, I suspect a match.
I then double check by finding 12 matching views based on the assumed relative position between the two scanners.
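For anyone following along, here’s roughly what that per-dimension distance counting could look like as a Python sketch. This is my reconstruction of the idea as described above, not the actual Xojo code, and it leaves out the sign flips for brevity:

```python
from itertools import permutations

def match_dimension(coords_a, coords_b):
    # For one axis: count each offset (a - b) over all beacon pairs.
    # If some offset occurs at least 12 times, the scanners overlap
    # along this axis and that offset is the relative scanner position.
    counts = {}
    for a in coords_a:
        for b in coords_b:
            d = a - b
            counts[d] = counts.get(d, 0) + 1
            if counts[d] >= 12:
                return d
    return None

def try_match(beacons_a, beacons_b):
    # Try the 6 xyz orderings of scanner B's axes; return the axis
    # permutation and relative offset on the first full 3-axis match.
    for perm in permutations(range(3)):
        offset = []
        for dim in range(3):
            d = match_dimension([p[dim] for p in beacons_a],
                                [q[perm[dim]] for q in beacons_b])
            if d is None:
                break  # early exit: this axis ordering can't work
            offset.append(d)
        else:
            return perm, tuple(offset)
    return None
```

The early `break` is what avoids checking the remaining dimensions once one dimension can’t reach 12 hits.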
But I definitely do something wrong in my matching, still, because only about half of my found scanner positions are the same that your code found. So I’ll keep looking for the bug in my code.
Huh, I got my description above wrong. I do not do just 6 combinations but 48! I.e., 6 for each xyz order, multiplied by the 8 positive/negative sign combinations across the three dimensions. So I do twice as many iterations as your 24 but am still much faster. But I’m also getting wrong results – because I still calculate the coords wrong when a scanner is chained through another scanner (turns out I only got those correct that related directly to scanner 0). What irritates me most is that I got the correct coords for the 4 scanners and the correct number of 79 beacons with the test data, despite doing something wrong in the chaining, when I compare the non-test results with yours.
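To illustrate the 48 combinations (6 axis orders × 8 sign patterns) and the chaining step that tripped me up, a rough Python sketch of the idea (again, not the actual Xojo code):

```python
from itertools import permutations, product

def orientations_48():
    # 6 axis orders x 8 sign patterns = 48 combinations
    # (the 24 proper rotations plus their 24 mirror images).
    for perm in permutations(range(3)):
        for signs in product((1, -1), repeat=3):
            yield perm, signs

def apply_orientation(orientation, point):
    perm, signs = orientation
    return tuple(signs[i] * point[perm[i]] for i in range(3))

def to_parent(orientation, offset, point):
    # Chaining step: map a beacon from a child scanner's frame into
    # the frame of the scanner it was matched against. To reach
    # scanner 0's frame, this has to be applied once per link in the
    # chain -- skipping a link is exactly the kind of bug described.
    p = apply_orientation(orientation, point)
    return tuple(p[i] + offset[i] for i in range(3))
```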
Well, I surely made a mess of my code. No wonder, when I don’t comprehend the algorithm but only try to get the correct results by trial and error.
That’s why I never did games. 3D is way too abstract for me. (Actually, I wrote 2D games in the early 80s!)
Also, I only realize now that I can skip a day and just continue with the next. I thought I had to finish one to get ahead. I figured that wasn’t the case when looking at the stats and noticing that there were far fewer correct solutions for Day 19 than for Day 20. Which is also a relief in the sense that this was apparently a difficult one for many others, too.
Whew, finally got it all right for day 19.
Eventually I realized that I had made it more complicated than it needed to be (I tried to track back the chain of scanners), and at the cost of losing a few seconds (now taking roughly 100s), I got it working.
Dumb, dumb, dumb…
I know how expensive string concatenation is and yet I do it anyway. I changed my code to track the beacons using a computed hash and now my code runs in a little over 2 minutes.
Oddly, I only gained a few seconds when I changed the key from strings to an int64 (with each dimension occupying 16 bits in it).
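In Python, such a key (with the layout assumed from the description: 16 bits per dimension, plus a bias so negative coordinates pack cleanly, assuming everything fits into signed 16 bits) could look like this:

```python
MASK = (1 << 16) - 1

def beacon_key(x, y, z, bias=1 << 15):
    # Offset each coordinate so it is non-negative, then pack
    # 16 bits per dimension into a single integer key.
    return ((x + bias) << 32) | ((y + bias) << 16) | (z + bias)

def unpack_key(k, bias=1 << 15):
    # Reverse the packing: extract each 16-bit field and remove the bias.
    return ((k >> 32) - bias, ((k >> 16) & MASK) - bias, (k & MASK) - bias)
```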
I have uploaded my latest version to my server (same URL). I also adopted your method of listing all and then double clicking to run them
You need to try my data, and I yours.
Yup, with the same data yours is now 1.5 times faster than mine.
From their About page:
every problem has a solution that completes in at most 15 seconds on ten-year-old hardware.
I suspect that needs to be updated.
I’ve refactored my code, moving some invariant calculations out of loops and adding early loop exits as soon as I know that further checks won’t be needed, and got from 97s down to 23s with my data, and 39s with your data. (Also, after the refactoring, where I swapped the loops so that the outermost one now finds the 12 matching distances for one dimension at a time, my algo is easier to understand, IMO. And that structure lets me exit early as soon as one dimension can’t reach 12 hits, which is how the algo got 3x faster.)
That’s still more than the 15s it should take (and I’m on a fairly new and fast CPU, too).
Before day 19, the longest times were
- 18b: 15s
- 12b: 5.5s
Though, your times were significantly better in both cases (12b: 2.6s, 18b: 5.2s, with my data sets), which is well within the 15s claim.
So maybe we still get #19 wrong, algorithmically. Have you looked at other people’s runtimes for this?
(BTW: Enabling Aggressive code optimization has hardly any impact on these times. In fact, day 19 runs ~10% SLOWER in Xojo 2021r3, with code opt turned on or off, than in old RS 2021r2 – what’s up with that??)
You haven’t pushed your latest optimization to 19-12-2021 yet, though, have you? It still takes 6.5 minutes here with the code I just pulled.
Pretty sure I pushed everything.
And it takes about two minutes here, fwiw.
I made another improvement to the algo of 19a, whereby I sort out scanners that can’t be in the necessary vicinity earlier.
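One plausible reading of that vicinity filter, as a hedged Python sketch (an assumption on my part, based on the puzzle’s 1000-unit detection range): since each scanner only detects beacons within 1000 units per axis, two scanners can share beacons only if their known positions differ by at most 2000 on every axis, which makes a cheap pre-filter before the expensive matching.

```python
def may_overlap(pos_a, pos_b, reach=1000):
    # Two scanners can only share beacons if their detection cubes
    # intersect, i.e. they are within 2 * reach on every axis.
    return all(abs(a - b) <= 2 * reach for a, b in zip(pos_a, pos_b))
```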
The following tests were done with running built apps (with Xojo 2021r3.1), not running in the debugger, to get the cleanest results:
Hackintosh 2017 Z370, 5 GHz i7, Geekbench 5 single-CPU rating 1190, macOS 10.13.6, built with Xojo 2021r3.1; I get the following times now:
- my code, my data: 3.6s
- my code, Ken’s data: 5.6s
- Ken’s code, my data: 1.43 min
- Ken’s code, Ken’s data: 1.7 min
Mac Mini 2018, 3.2 GHz i7, Geekbench rating 1150, macOS 12.1:
- my code, my data: 9.1s
- my code, Ken’s data: 14.2s
- Ken’s code, my data: 5.4 min
- Ken’s code, Ken’s data: 6.5 min
M1 Air 2020, 8 cores, Geekbench rating 1700, macOS 12.1, built for ARM:
- my code, my data: 6.1s
- my code, Ken’s data: 9.6s
- Ken’s code, my data: 4.0 min
- Ken’s code, Ken’s data: 4.8 min
- For the tests, especially on Monterey, it’s important to keep the app in the foreground the whole time, or it may take 2-3 times longer. That’s probably what caused me to measure over 6 minutes earlier when Ken measured below 2 minutes.
- The Desktop Intel CPU (on High Sierra) still outperforms the M1 and Mini’s i7 CPUs significantly. Or perhaps it’s Monterey that is at fault here?
- My own code is 2x slower on the M1 and 3x slower on the Intel Mini, whereas Ken’s, while also slower on the M1 and the Mini, does better on the Mini than on the M1. This makes no sense to me.
Ken, what hardware do you run your tests on?
Also, there appears to be a bug in Xojo when it comes to making an initially hidden window visible later by setting its Visible property to true. That works on High Sierra but not on Monterey (I had to call Show instead to make it work on Monterey).
My code is, as always, at http://files.tempel.org/RB/AdventOfCode.rbp - and should open and run without problems in Xojo as well. When running on an M1 Mac, just remember to enable ARM or Universal builds, because it’ll default to Intel.