Now that #PEAC is on the schedule, I realise that no matter how I turn things around, it always ends up with some sort of "classic" sorting, so why even fight?
Should I hash the hell out of the data set, I would get tons of buckets, sure, but each bucket must still be sorted anyway, because that's how they are expected to be in the end, right... So what's holding me back? I guess it's the large block moves at the end of the N×log(N) spectrum. Small blocks are fine, but as they grow, the high-level, end-of-sort merges become... heavy. Hashing the data can help with this issue, so we can work with "small" blocks of around 100 to 1000 items each. Maybe 256 is the sweet spot. This way, there is no fear of spending a lot of time managing big blocks at the end of the sort. And it's possible to merge several blocks at a time, in both directions, right? So let's consider a 2×4 merge operation for later. But we have to start somewhere.
For now I'm back to the "simpler" algo which uses a dual-purpose stack at the top of the main area.