• lukaslalinsky 18 hours ago

    There is one sentence I really took away from my years at university; it was from a database implementation course:

    > If you have trouble solving some problem, see if sorting the data first helps.

    I feel that sorting data is the ultimate computer science hack. Many, many classes of problems turn into O(log n) problems once you sort your input in some way. It might not be the most effective way of solving the problem, but it's often a fairly good one.

    So I'm really enjoying how good sorting algorithms are getting, and how, even though the big-O complexity stays mostly the same, the real computing efficiency keeps improving significantly.
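
    A toy sketch of what I mean, in Rust (made-up data, purely illustrative): pay for one sort up front and every membership query afterwards becomes a binary search.

        // Sort once (O(n log n)), then answer each query in O(log n).
        fn main() {
            let mut data = vec![42, 7, 19, 3, 88, 7, 56];
            data.sort_unstable();
            for query in [19, 20] {
                match data.binary_search(&query) {
                    Ok(_) => println!("{query} found"),
                    Err(i) => println!("{query} missing, would insert at {i}"),
                }
            }
        }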

    • boothby 14 hours ago

      It's funny because the opposite is often true as well: if you're having trouble solving a problem quickly, randomize the data and try again.

    • Imnimo 15 hours ago

      Just be careful you aren't doing the classic, "my linear regression works way better when I independently sort the inputs and targets"!

      • danielmarkbruce 15 hours ago

        Also, "shove it in a dictionary" works frequently too.... basically "organize the data in some way" is often the answer.

        • RyanHamilton 13 hours ago

          Similar to you, based on years with databases I saw sorting as a huge advantage and often performed this step as part of optimizing any data access. I've tended to see the same pattern of problems over the last 15 years. Imagine my surprise when I read a blog post showing that not perfectly sorting your data could often result in faster overall query times across a wider range of queries. DuckDB: https://duckdb.org/2025/06/06/advanced-sorting-for-fast-sele... continues to surprise me with novel, improved approaches to problems I've worked on for years.

          • TuxSH 13 hours ago

            > Many, many classes of problems turn into O(log n) problems once you sort your input in some way. It might not be the most effective way of solving the problem, but it's often a fairly good one.

            As a corollary, you can use binary search + linear interpolation as a fast way to approximate a strictly monotonic function, knowing its inverse, over a limited set of outputs (e.g. n -> floor(255 * n^(1/2.2) + 0.5) for n in 0..255). You can also work on bit_cast<s32>(...) of the values as an optimization when the target FPU doesn't have conditional move instructions.
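
            A rough sketch of the binary-search half of that idea in Rust (the interpolation refinement is left out, and gamma 2.0 is assumed here purely so the inverse stays integer-only):

                // Find the largest output y whose cheap inverse still lies at or
                // below the input n; no pow/sqrt is evaluated inside the loop.
                fn gamma_encode(n: u32) -> u32 {
                    // y is reachable iff 255 * sqrt(n / 255) >= y - 0.5,
                    // i.e. (2y - 1)^2 <= 4 * 255 * n, in pure integer arithmetic
                    let reachable = |y: u32| (2 * y - 1) * (2 * y - 1) <= 1020 * n;
                    let (mut lo, mut hi) = (0u32, 255u32);
                    while lo < hi {
                        let mid = (lo + hi + 1) / 2; // mid >= 1 here
                        if reachable(mid) { lo = mid } else { hi = mid - 1 }
                    }
                    lo
                }

                fn main() {
                    assert_eq!(gamma_encode(0), 0);
                    assert_eq!(gamma_encode(64), 128); // 255 * sqrt(64/255) ≈ 127.75
                    assert_eq!(gamma_encode(255), 255);
                }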

            • throwaway81523 14 hours ago

              I spent many years as a programmer somehow avoiding ever doing much with databases, since most problems that seemed to want databases could instead be solved by sorting batch-collected data.

              • mbroncano 16 hours ago

                On a side note, some languages still refer to computers as ‘sorting machines’ or just ‘sorters’

                • natmaka 17 hours ago

                  ... and in many cases it comes at a very low cost, since quite often an index already exists to satisfy the usual "quickly find something" need, and most indexes let us immediately obtain a sorted list.

                  • wizardforhire 15 hours ago

                    Side note: this works great for fermions as well…

                    To save the densest among you the oh so tedious task of extrapolation, here are some anecdotal examples.

                    Dishwasher: Sort your dishes when you pull them out. Cut your walking time to the minimal possible steps; think of this like accessing cached data. Enjoy the now painless benefit of constantly clean dishes and kitchen.

                    Short story: Some friends had a geodesic dome company. I’d get brought out to do setup when they were really busy. These domes had a lot of heavy metal pipes of different and specific lengths. It’s hot, it’s outside, it’s heavy and tedious… pipes would invariably be in a giant pile on a pallet… caught another friend doing bubble sort… the 56’ dome went up in record time with the two of us.

                    More meta-hierarchical: End of life, parents on the cusp of hoarding. Trauma combined with sentimentality manifests as projecting value onto items. Items pile up, life becomes unmanageable; think of this like running out of RAM. Solution: be present, calm, and patient. Help sort the memories and slowly help sort the items. Word of warning: this is NP-hard af… eventually they will get out of it, and life for both them and you will dramatically improve.

                    Further reading: https://knolling.org/what-is-knolling

                    • bluGill 13 hours ago

                      Just be careful. Often there are goals other than efficiency. If you get the dishwasher unloaded while cooking but burn the meal, that was a loss. You might be able to do something else while unloading the dishwasher if your sort order is less efficient. Or sometimes sorting is less efficient because there isn't enough to do to make up for the overhead of sorting.

                      • wizardforhire an hour ago

                        Good point, that's how I feel about to-do lists most of the time.

                        I think the key insight you make is recognizing that attempting to do two things at the same time is asking for disaster. Concurrency in my experience is all about rhythm and timing… something humans in their default state are notoriously bad at, i.e. without training and practice. The best approach in my experience is doing one thing at a time with focus and conviction until completion, then moving on to the next thing. If concurrency is desired, then understanding durations and prioritizing items via dependencies is of utmost prudence. But paraphrasing Michel de Montaigne… something something about something

                  • bob1029 a day ago

                    I find in practice that if the sorting process is too slow, you should begin thinking about different ways to attack the problem. Maintaining a total global order of things tends to only get more expensive over time as the scope of your idea/product/business expands. The computational complexity of the sort algorithm is irrelevant once we get into memory utilization.

                    This is why we have things like tournament selection. Randomly sampling from the population and running tournaments is way more scalable than scanning and ordering a global list each iteration. You can maintain things like an Elo score with very narrow views into memory. Nothing needs a global view, yet you get global effects.

                    • spiffytech 21 hours ago

                      Could you give an example of reframing a problem from totally ordered complete data to a sampled tournament? I can imagine cases amenable to sampling, but since sampled data is smaller I'm not sure why I wouldn't just sort it.

                      • loa_in_ an hour ago

                        Sorting doesn't yield an Elo score for each item. Though a tournament is a form of sorting; it's like instrumented sorting.

                      • akoboldfrying 18 hours ago

                        I don't yet see how tournament selection could work here, could you explain?

                        Sometimes when you think you need to maintain a sorted array under item insertion, it turns out that you only ever need to continually read the next-smallest (or next-largest) item -- and in that case, it suffices to maintain a heap, which is much cheaper.
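
                        A minimal Rust sketch of that pattern (illustrative only): wrap items in Reverse to turn the standard max-heap into a min-heap, and both inserting and reading the next-smallest item stay O(log n), with no fully sorted array maintained anywhere.

                            use std::cmp::Reverse;
                            use std::collections::BinaryHeap;

                            fn main() {
                                // BinaryHeap is a max-heap; Reverse flips it into a min-heap.
                                let mut heap = BinaryHeap::new();
                                for x in [42, 7, 19, 3] {
                                    heap.push(Reverse(x)); // O(log n) insert
                                }
                                // pop always yields the current smallest: 3, 7, 19, 42
                                while let Some(Reverse(x)) = heap.pop() {
                                    println!("{x}");
                                }
                            }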

                        • bob1029 18 hours ago

                          An example would be an evolutionary algorithm that relies on a large population to maintain diversity. As you get into 6-7 figure population size, ranking the whole set starts to take a really long time (relatively speaking) each iteration. This also requires a serialized phase of processing that halts all workers for the duration.

                          With tournament selection, you can randomly pick indexes from the population to build tournaments, which is effectively instant. There is no more requirement for a serialized processing phase. All processors can build their own random tournaments and perform updates of scores. There will be occasional conflict on score updates but the idea is that with enough samples/iterations it becomes very accurate.

                          Another example: https://danluu.com/2choices-eviction/
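
                          A bare-bones sketch of tournament selection along those lines (made-up fitness values, and a tiny xorshift PRNG just to keep the example dependency-free):

                              // Pick k random indexes and keep the fittest; no ranking pass
                              // over the whole population is ever needed.
                              fn xorshift(state: &mut u64) -> u64 {
                                  *state ^= *state << 13;
                                  *state ^= *state >> 7;
                                  *state ^= *state << 17;
                                  *state
                              }

                              fn tournament(fitness: &[f64], k: usize, rng: &mut u64) -> usize {
                                  let mut best = (xorshift(rng) as usize) % fitness.len();
                                  for _ in 1..k {
                                      let candidate = (xorshift(rng) as usize) % fitness.len();
                                      if fitness[candidate] > fitness[best] {
                                          best = candidate;
                                      }
                                  }
                                  best
                              }

                              fn main() {
                                  let fitness = vec![0.3, 0.9, 0.1, 0.7, 0.5]; // made-up scores
                                  let mut rng = 0x9E3779B97F4A7C15u64;
                                  let winner = tournament(&fitness, 3, &mut rng);
                                  println!("selected {winner} (fitness {})", fitness[winner]);
                              }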

                      • Epa095 a day ago

                        Double jaw-drop. First when the (dynamic) match was slower than the hash map, second when sort_unstable was faster than the hash map!

                        Cool article. It's clear that all my theoretical algorithm knowledge falls short when faced with real CPUs.

                        • fpoling 17 hours ago

                          Chromium recommends using flat_map, a map-like interface backed by a sorted array, for data structures facing the GUI or similar, when the number of items in the map is naturally bounded. It is faster and more compact than hash maps.
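
                          The core idea, sketched in Rust rather than Chromium's C++ (illustrative, not their implementation): keep the entries in one sorted Vec and binary-search it, so lookups are O(log n) over a single contiguous, cache-friendly allocation.

                              // A toy flat map: a sorted Vec of pairs plus binary search.
                              struct FlatMap<K: Ord, V> {
                                  entries: Vec<(K, V)>,
                              }

                              impl<K: Ord, V> FlatMap<K, V> {
                                  fn new() -> Self {
                                      Self { entries: Vec::new() }
                                  }
                                  fn insert(&mut self, key: K, value: V) {
                                      match self.entries.binary_search_by(|(k, _)| k.cmp(&key)) {
                                          Ok(i) => self.entries[i].1 = value,
                                          Err(i) => self.entries.insert(i, (key, value)),
                                      }
                                  }
                                  fn get(&self, key: &K) -> Option<&V> {
                                      let i = self.entries.binary_search_by(|(k, _)| k.cmp(key)).ok()?;
                                      Some(&self.entries[i].1)
                                  }
                              }

                              fn main() {
                                  let mut m = FlatMap::new();
                                  m.insert("menu.open", 2);
                                  m.insert("menu.save", 1);
                                  assert_eq!(m.get(&"menu.open"), Some(&2));
                              }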

                        • conradludgate a day ago

                          The efforts put into developing better sorting algorithms like driftsort/ipnsort and better hash functions like foldhash make my life as a developer so much simpler. No matter how clever I try to be, most often just using a foldhash hashmap or a sort_unstable is the fastest option.

                          • codegladiator 21 hours ago

                            Good read. Reminds me of the 1 billion row aggregation challenge, especially the perfect hashing part; all the top entries used it.

                            https://github.com/gunnarmorling/1brc

                            • Animats 13 hours ago

                              Neat.

                              Adaptive radix sorts exist, where the keyspace is divided into roughly equal sized buckets based on the distribution of the data. The setup is slow enough that this is usually used only for very large sorts that have to go out to disk, or, originally, tape.

                              It's the first patented algorithm, SyncSort.

                              • ekelsen 15 hours ago

                                Wouldn't it make sense to test radix sort? You could do it in one pass with 2 bits, and it would degrade gracefully as the number of bits increased. An MSB bucketing pass followed by LSB passes would take care of the 5% random data case with good efficiency.
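
                                A rough Rust sketch of the LSD variant of that idea (2-bit digits, no MSB bucketing step), just to show the shape; the article's actual data and candidates may look nothing like this:

                                    // Least-significant-digit radix sort of u32 keys,
                                    // two bits (4 buckets) per pass, 16 passes in total.
                                    fn radix_sort_2bit(data: &mut Vec<u32>) {
                                        let mut buf = vec![0u32; data.len()];
                                        for pass in 0..16 {
                                            let shift = pass * 2;
                                            // histogram the 4 possible digit values
                                            let mut counts = [0usize; 4];
                                            for &x in data.iter() {
                                                counts[((x >> shift) & 3) as usize] += 1;
                                            }
                                            // exclusive prefix sums give bucket start offsets
                                            let mut offsets = [0usize; 4];
                                            for d in 1..4 {
                                                offsets[d] = offsets[d - 1] + counts[d - 1];
                                            }
                                            // stable scatter into the buffer, then swap
                                            for &x in data.iter() {
                                                let d = ((x >> shift) & 3) as usize;
                                                buf[offsets[d]] = x;
                                                offsets[d] += 1;
                                            }
                                            std::mem::swap(data, &mut buf);
                                        }
                                    }

                                    fn main() {
                                        let mut v = vec![7u32, 1, 3, 7, 1, 3, 3, 1];
                                        radix_sort_2bit(&mut v);
                                        assert!(v.windows(2).all(|w| w[0] <= w[1]));
                                        println!("{v:?}");
                                    }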

                                • allturtles 15 hours ago

                                  The scenario presented seems very odd. Why would you want to sort 10^7 items that are known to contain only four distinct values? It seems much more likely you would be counting the number of times each value appears, or selecting all of the elements of value X.

                                  • forsalebypwner 13 hours ago

                                    I believe the purpose of choosing such an odd scenario is to show that, while you might think you can beat the generic sort algos with a more domain-specific implementation, you might be wrong, or you might not gain enough performance to make it worth the other pitfalls of using such algos.

                                  • TinkersW 10 hours ago

                                    Is there a C++ port of ipnsort?

                                    • akoboldfrying 18 hours ago

                                      Your "Branchless" approach could indeed be implemented very efficiently on a CPU with AVX2 (256-bit-wide vectors). With the current element in rax, the 4 valid values in ymm2, and the 4 running totals in ymm3 (initially zero), the inner loop would be just:

                                          VPBROADCASTQ rax,ymm1
                                          VPCMPEQQ ymm1,ymm2,ymm1
                                          VPADDQ ymm1,ymm3,ymm3
                                      
                                      VPBROADCASTQ copies rax into each of the 4 lanes in ymm1. The VPCMPEQQ sets each qword there to all-0 ( = 0) or all-1 ( = -1) depending on the comparison result, so the VPADDQ will accumulate 4 running negative totals into ymm3, which can be negated afterwards.

                                      I would still expect the perfect hash function approach to be faster, though -- a similar number of operations, but 25% of the memory movement.

                                      • Sesse__ 17 hours ago

                                        “Memory movement”? None of the instructions you list involve memory.

                                        I find the perfect hash implementation a bit weird; it seems to obfuscate that you simply look at the lowest two bits (since they differ between the four values). You can do the x + 3 and 3 - expr at the very end, once, instead of for every element.
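
                                        A small sketch of that reading (the four values below are made up; all that matters is that their low two bits are distinct):

                                            // x & 3 is the whole "perfect hash"; any
                                            // +3 / 3-x style remapping can happen once
                                            // per bucket at the end, not per element.
                                            fn main() {
                                                let data = [10u64, 5, 1000, 7, 5, 10];
                                                let mut counts = [0usize; 4];
                                                for &x in &data {
                                                    counts[(x & 3) as usize] += 1;
                                                }
                                                println!("{counts:?}");
                                            }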

                                      • dvh a day ago

                                        Isn't this just another case of premature optimization? Shouldn't you be adjusting sorting algorithms only when a customer complains?

                                        • grues-dinner 20 hours ago

                                          I think the article basically had this conclusion. Think twice before optimising here, because you may be able to squeeze something out for a very limited scenario, but it can have ugly failure modes and end up being slower in some cases. Plus it takes time and effort. And modern standard sorts are "unreasonably" fast anyway for many practical purposes.

                                          Then again, only thinking of fixing things when a customer complains is a way to end up with a leaning tower of hacks which eventually ossify. Also, the customer (or rather the users, who may not be the customer, especially in business software) may be putting up with dozens of niggles and annoyances before they bother to actually report the one bug they can't work around.

                                          • hyperpape 19 hours ago

                                            It only makes sense to talk about premature optimization in the context of building a production system (or a standard library).

                                            This is research or experimentation, designed to improve our understanding of the behavior of algorithms. Calling it premature optimization makes no sense.

                                            • codegladiator 21 hours ago

                                              This is pushing the limits to identify the boundaries

                                              • dvh 21 hours ago

                                                Also known as premature optimization. You had to literally invent a new dataset just to show there is a difference. You are inventing problems, stop doing that!

                                                • dspillett 20 hours ago

                                                  > You are inventing problems

                                                  Sometimes that is how useful jumps are made. Maybe someone will come along with a problem and the data they have just happens to have similar properties.

                                                  Rather than premature optimisation this sort of thing is pre-emptive research - better to do it now than when you hit a performance problem and need the solution PDQ. Many useful things have come out of what started as “I wonder what if …?” playing.

                                            • DennisL123 a day ago

                                              Efficiency, not effectiveness. They are all effective in the sense that they produce sorted results. Even the non-modern sort algorithms are effective in the sense that the results are correct. This should be about the efficiency with which they do it, right?

                                              • JSR_FDED 20 hours ago

                                                From TFA:

                                                The title is an homage to Eugene Wigner's 1960 paper "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".

                                                • aabhay a day ago

                                                  Agreed. Effectiveness would imply that some algorithms are more likely to sort the list correctly than others, or they sort a higher percentage of elements. Efficiency is about factors external to the correctness

                                                  • creata 21 hours ago

                                                    "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" is one of those titles that gets imitated a lot for some reason. Maybe even more than "Goto Considered Harmful".

                                                    • kwertyoowiyop 20 hours ago

                                                      Coming next: “What we talk about when we talk about modern sort algorithms”

                                                      • chuckadams 14 hours ago

                                                        "Optimize your sorts with this one weird trick."

                                                        "What they don't want you to know about sorting."

                                                        • rcxdude 18 hours ago

                                                          or "we need to talk about what we talk about when we talk about the unreasonable effectiveness of title memes are all you need considered harmful"

                                                          • j_not_j 6 hours ago

                                                            Ascending is all you need.