The only two examples of real "magic" I've encountered (would be interested in more):
1) You cannot do preemptive multitasking except by having a clock interrupt (okay, maybe one can also allow emulation).
2) Quantum key distribution (starting with BB84) depends crucially on the fact that the world is not classical.
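Point (1) can be sketched as a toy: the "clock interrupt" below is just a simulated tick that forces a task switch every few instructions. Task names and the quantum are made up for illustration; the real mechanism is a hardware timer raising an interrupt that hands control back to the scheduler.

```python
# Toy preemptive multitasking: tasks are generators that never yield
# control voluntarily between "instructions"; a simulated clock interrupt
# fires every QUANTUM instructions and forces a task switch.

QUANTUM = 3  # simulated timer period, in instructions

def task(name, n):
    for i in range(n):
        yield f"{name}:{i}"   # one "instruction" of work

def run(tasks):
    """Round-robin scheduler: the 'clock interrupt' is the loop counter."""
    trace, queue = [], list(tasks)
    while queue:
        current = queue.pop(0)
        for _ in range(QUANTUM):          # run until the timer fires
            try:
                trace.append(next(current))
            except StopIteration:
                break                     # task finished early
        else:
            queue.append(current)         # preempted: back of the queue
    return trace

trace = run([task("A", 5), task("B", 5)])
```

Without the simulated interrupt (the `QUANTUM` cap), task A would run to completion before B ever started; with it, the trace interleaves them.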
But in general I agree with the article, it's more or less why I did not become a programmer.
When understanding a new "magic", there's this beautiful moment when you grok it, and the abstraction poofs away.
It's when you take apart a mechanical clock and keep looking for the time-keeping part, until you figure out that there isn't a time-keeping part in there, it's just gears and a spring.
It's when you learn about integrated circuits and full-adders, and keep trying to understand how a bunch of transistors can do Mathematics, until you figure out that there isn't a mathematics-doing part in there, it's just circuits and wires, arranged in a way that makes the voltages come out right.
It's when your understanding of the top-down structure snaps together with the bottom-up mechanics of the building blocks. There's no space left for the ghost in the machine to haunt, and you go "Oh. huh". I live for that moment.
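The full-adder point can be made literal. Below is a sketch of a 1-bit full adder built only from NAND gates (the standard nine-gate construction): nothing in it "does mathematics"; the bits just come out right.

```python
# A full adder really is "just wires": every output bit is a fixed
# Boolean function of the inputs, here built from NAND alone, the way
# gates themselves reduce to transistors.

def nand(a, b):
    return 1 - (a & b)

def full_adder(a, b, cin):
    """Sum and carry-out from nine NAND gates."""
    # a XOR b out of four NANDs
    t = nand(a, b)
    axb = nand(nand(a, t), nand(b, t))
    # (a XOR b) XOR cin, reusing the same four-NAND trick
    u = nand(axb, cin)
    s = nand(nand(axb, u), nand(cin, u))
    # carry-out = (a AND b) OR (cin AND (a XOR b)), folded into one NAND
    cout = nand(t, u)
    return s, cout
```

Checking all eight input combinations against ordinary arithmetic shows the "mathematics-doing part" is nothing but the arrangement.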
I went through an EE degree instead of a CS degree in undergrad specifically so I could peel back this magic and really understand what's going on, to some detail, down to the electromagnetism level. It is indeed a very freeing feeling to be resting atop so many layers of abstractions, with the understanding that if you ever had to, you could go down to any one of them and kind of feel your way around.
I think for me the biggest magic-killing moment was when I realized that CPU clock cycles, timing trees, etc. were all just ways for us to shoehorn a fundamentally analog thing (how much voltage is in area X?) into a digital thing (does area X have enough voltage? yes? OK, call it a 1 and let's move on already!). Somehow to me that feels like the ultimate "leaky" abstraction, although of course decades and trillions of dollars have gone into making it ever more watertight for 99.99-several-more-9s% of the time. At least to the end user. Your mileage may vary if you're a TSMC researcher or something, of course.
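That shoehorning can be written down in one function. The threshold values below are illustrative (roughly TTL-like); real parts specify their V_IL and V_IH in the datasheet.

```python
# The "leaky abstraction" in one function: a digital input is just an
# analog voltage compared against thresholds.

V_IL, V_IH = 0.8, 2.0   # at or below V_IL: read as 0; at or above V_IH: read as 1

def read_bit(volts):
    if volts <= V_IL:
        return 0
    if volts >= V_IH:
        return 1
    return None  # forbidden zone: the abstraction leaks, behavior undefined
```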
I recently finished reading "Digital Design and Computer Architecture" by Harris and Harris, and the part about microarchitecture left exactly this impression on me: "oh, so we just demux the opcode, enable/disable the necessary signals/paths and it all... just works out in the end, if the clock is fine. Huh. Well, in retrospect it's kinda obvious."
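The "demux the opcode" observation, as a toy: control is just a lookup from opcode to a bundle of enable signals that steer the datapath. The opcodes and signal names here are invented for illustration, loosely in the spirit of a single-cycle machine.

```python
# Toy control decoder: opcode in, bundle of control signals out.
# Everything downstream (register file, ALU, memory) just obeys the enables.

CONTROL = {
    # opcode: (reg_write, alu_op, mem_read, mem_write)
    "add":   (1, "ADD", 0, 0),
    "sub":   (1, "SUB", 0, 0),
    "load":  (1, "ADD", 1, 0),   # address = base + offset, result from memory
    "store": (0, "ADD", 0, 1),   # address = base + offset, writes memory
}

def decode(opcode):
    reg_write, alu_op, mem_read, mem_write = CONTROL[opcode]
    return {"reg_write": reg_write, "alu_op": alu_op,
            "mem_read": mem_read, "mem_write": mem_write}
```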
That's the difference between technology magic and illusionist magic. When you see how an illusion's trick is done, it's always a bit of a letdown, because the answer is usually prosaic: the 'magic' vanishes and it becomes a trick.
When you understand how a piece of technology works you get that beautiful moment.
I've never gotten that. I feel the same way in either case: if your trick were easy, everybody would do it. Take sleight-of-hand tricks: if you're good, they're completely seamless. I could never hope to reproduce them, and yet I know exactly how they're done.
Take that Penn & Teller trick where the live audience is just lying - that's a bit lazy. We're not supposed to have some great admiration for it; they're just showing you that it's an option to have the live audience lie to the recorded audience, and that "works". Whereas their transparent version of cups and balls is the opposite: you must be a sleight-of-hand master or it won't look like anything.
> It's when you take apart a mechanical clock and keep looking for the time-keeping part, until you figure out that there isn't a time-keeping part in there, it's just gears and a spring.
The time-keeping part is arranging gears and a spring in a way that will, in fact, keep time.
> It's when you learn about integrated circuits and full-adders, and keep trying to understand how a bunch of transistors can do Mathematics, until you figure out that there isn't a mathematics-doing part in there, it's just circuits and wires, arranged in a way that makes the voltages come out right.
The mathematics-doing part in there is the arrangement of circuits and wires in ways that can actually do arithmetical operations on voltages.
It's not magic. But an adder, while never more than a bunch of circuits and wires, is still a mathematics-doing part.
“The true delight is in the finding out rather than in the knowing.” ― Isaac Asimov
I was certain that you were going to conclude with a paragraph about LLMs.
One interesting thing about (accidentally) starting with assembly is that you mostly can't see magic at all, but instead you see learning magicians explaining computers in all sorts of funny ways.
My first “pc” was a ZX Spectrum clone, and all I had was the built-in BASIC and then some assembler on a cassette. Both came with “books” on how to use them, together with all of the unlimited time you have when you’re a kid.
This transferred to my first PC, and eventually I learned how FAT, DOS, and the BIOS work, how to make a TSR and fool around with B8000/A0000, first steps with the 386+. It also helped that my granddad was a pulse-electronics engineer and taught me how actual gates work and how computers count, sum, and select numbers. He also had access to multiple books on hardware. I knew it all down to the silicon.
Other people had all sorts of magical ideas on how computers work. Special “hidden system areas”, “graphics card does X”, “computers multiply by addition”, etc etc. It’s a human thing that if we don’t understand something, our mind tries to yadda yadda it.
And the more you yadda yadda, the less chance there is that you’ll actually learn it. I tend to fight these half-baked autogenerated explanations and try to dig down to how it really works. For no particular reason; that’s just what I like to do. It leaves a mark on how you work, though.
> yadda yadda
Like LLMs, we confabulate. We fill in knowledge holes with imagined knowledge we don't really have.
> Like LLMs, we confabulate.
Thank you for describing LLM behaviour that way. It's a much better description than the more popular but less apt "hallucination".
I agree, hallucination is a completely different phenomenon from inadvertently filling in knowledge gaps.
Hallucinating, to me, is not a one-off effect but a dynamic phenomenon: our sensory processing/interpreting continues iterating but wanders away from, or separates from, actual sensory input.
Dreams are one example. Drugs that cause our sensory systems to be disrupted or overwhelmed by unusual internal signals are another.
LLMs operate in discrete steps, so that “interpretation continues iterating” is a very good description of what’s actually happening.
There’s a little uncertainty in the process, so sometimes it will pick the wrong symbol at the wrong time and the entire process spirals into nonsense. But it’s semi-lucid nonsense, like a dream or hallucination, not line noise. Confidently stating the wrong thing is arguably a different, though related, problem.
> Special “hidden system areas”
I guess "hidden" can be a matter of perspective, from filenames beginning with a period, right through to things like the Intel Management Engine.
So many frameworks and dev-oriented features are sold with the claim that domain X is super complicated and full of esoteric useless trivia, so instead someone has put a clean facade on that which simplifies everything and protects the hapless dev from having to learn. Which is nice, it’s a tidy abstraction, we all have other things to do, now we go fast.
Except…the dev has to learn the new API, which we can call domain Y, and which can become quite complicated and hard to map conceptually to domain X (e.g. React events vs browser rendering, or Java GC vs real memory).
And when the cool facade doesn’t quite work for some exotic use case, now what? The dev gets to learn the original domain X anyway, plus how X and Y interact. In the worst case they have to rewrite all their Y-using code.
Great abstractions are good magic; bad abstractions are evil magic. And yet good vs evil is often hard to tell apart when you pick up the problem.
The only areas I’ve ever seen that were truly super complicated were the ones where synchronization is involved.
It’s so hard to wrap my head around how a CPU reorders instructions and I need to think very slowly and very carefully whenever I work with synchronization.
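A concrete example of why that's so hard to think about: the classic store-buffering litmus test. The sketch below enumerates the sequentially consistent interleavings to show that the outcome r1 == r2 == 0 is impossible in that model, even though real CPUs (with store buffers) can and do produce it; the reordering lives below the abstraction you reason in.

```python
# Litmus test: thread 1 runs  x = 1; r1 = y   thread 2 runs  y = 1; r2 = x
# Under sequential consistency, enumerate every interleaving that
# respects each thread's program order and collect the (r1, r2) outcomes.

from itertools import permutations

def run(order):
    mem = {"x": 0, "y": 0}
    regs = {}
    ops = {
        ("t1", 0): lambda: mem.__setitem__("x", 1),
        ("t1", 1): lambda: regs.__setitem__("r1", mem["y"]),
        ("t2", 0): lambda: mem.__setitem__("y", 1),
        ("t2", 1): lambda: regs.__setitem__("r2", mem["x"]),
    }
    for step in order:
        ops[step]()
    return regs["r1"], regs["r2"]

def sc_outcomes():
    steps = [("t1", 0), ("t1", 1), ("t2", 0), ("t2", 1)]
    results = set()
    for order in permutations(steps):
        # keep only orders that preserve each thread's program order
        if order.index(("t1", 0)) < order.index(("t1", 1)) and \
           order.index(("t2", 0)) < order.index(("t2", 1)):
            results.add(run(order))
    return results
```

The model yields only (0, 1), (1, 0), and (1, 1); seeing (0, 0) in practice is the hardware telling you your mental model of "one instruction after another" is too strong.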
Same with distributed systems. I implemented a toy version of raft forever ago, but just wrapping my head around it took a month, and I still think I don’t really grasp it.
Haven’t even looked into paxos yet.
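For what it's worth, the heart of Raft's leader election fits in one function: a follower grants its vote only if the candidate's term is current and the candidate's log is at least as up to date. This is a simplified sketch of that rule, not a working replica; the state-dict layout and argument names are invented for illustration.

```python
# Toy Raft vote handler. `state` holds a follower's persistent state:
# current_term, voted_for, and log as a list of (term, entry) pairs.

def request_vote(state, cand_term, cand_id, cand_log_len, cand_last_term):
    """Return (granted, current_term), mutating `state` like a follower would."""
    if cand_term < state["current_term"]:
        return False, state["current_term"]          # stale candidate
    if cand_term > state["current_term"]:
        state["current_term"] = cand_term            # step down to the new term
        state["voted_for"] = None
    log = state["log"]
    last_term = log[-1][0] if log else 0
    # candidate's log is up to date if its last term is higher, or
    # terms are equal and its log is at least as long
    up_to_date = (cand_last_term, cand_log_len) >= (last_term, len(log))
    if state["voted_for"] in (None, cand_id) and up_to_date:
        state["voted_for"] = cand_id                 # at most one vote per term
        return True, state["current_term"]
    return False, state["current_term"]
```

The "at most one vote per term" line is what makes split elections resolve: two candidates in the same term can't both win this follower.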
On the topic of how that comptime implementation in Go works: I've toyed with a similar idea for implementing constant folding/beta reduction — generate a temporary file with the relevant subset of definitions, insert the constant expression/function call you'd like to evaluate, and... compile and run that file. It may not be the most performant thing to do, but at least you get the correct semantics without writing a separate constexpr interpreter.
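The same trick sketched in Python (the function and names are mine, not from any library): emit a throwaway program containing the relevant definitions plus a print of the expression, run it with the real interpreter, and read back the result.

```python
# "Constant folding" by generate-and-run: slow, but the semantics are
# the host language's own, for free.

import subprocess, sys, tempfile, textwrap

def fold(definitions, expr):
    """Evaluate `expr` in the context of `definitions` via a child interpreter."""
    src = textwrap.dedent(definitions) + f"\nprint(repr({expr}))\n"
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(src)
        path = f.name
    out = subprocess.run([sys.executable, path],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

defs = """
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
"""
```

E.g. `fold(defs, "fib(10)")` hands back the folded constant as a string, ready to splice into generated code. (The sketch leaks the temp file; a real tool would clean up.)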
TFW the real wizards are the ones working in the strata just below the one I understand...
After nearly 30 years of persistent excavation, I'm finally at bedrock with registers, some assembly and embedded C.
Lifting my head up to find the mess we've created for graphical application development is admittedly disheartening, but also thrilling as there are ways to make it better.
There is at least one more layer, which lies behind the abstraction boundary the ISA presents. E.g. x86-64 specifies only 16 general-purpose registers, but the Zen 4 has 224. Even as far back as the Pentium II, the CPU had more registers than the ISA exposed.
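Those extra physical registers are what register renaming spends: each write to an architectural register gets a fresh physical one, so false write-after-write and write-after-read dependencies disappear and independent instructions can execute out of order. A toy rename stage (the class and its API are invented for illustration; the register counts echo the x86-64 / Zen 4 numbers above, and freeing registers at retirement is omitted):

```python
# Toy register renaming: architectural names are just keys into a map
# from names to physical registers, updated on every write.

ARCH_REGS = 16        # what the ISA promises
PHYS_REGS = 224       # what the core actually has

class Renamer:
    def __init__(self):
        self.free = list(range(PHYS_REGS))
        # initial mapping: r0..r15 -> physical 0..15
        self.map = {f"r{i}": self.free.pop(0) for i in range(ARCH_REGS)}

    def rename(self, dest, srcs):
        """Map sources through the current table, give dest a fresh register."""
        phys_srcs = [self.map[s] for s in srcs]
        self.map[dest] = self.free.pop(0)   # fresh physical register
        return self.map[dest], phys_srcs

r = Renamer()
# r1 = r2 + r3; then r1 = r4 + r5 -- same architectural destination,
# two different physical destinations, so the adds are independent.
d1, _ = r.rename("r1", ["r2", "r3"])
d2, _ = r.rename("r1", ["r4", "r5"])
```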
You will indeed find the _magic_ to be no such thing when you dig deep down into the abstractions. Each part makes sense and is comprehensible.
However, the magic is what emerges when the parts come together: the system becomes more than the sum of its parts.
Think of a car, a cellphone, or an LLM: what it does feels magical, while the elementary parts contain no magic.
Corollary: Sufficiently well encapsulated magic is indistinguishable from technology.
Now I kind of want to read a story where it turns out the technology we use in our modern lives is actually very carefully hidden magic. Sort of like the opposite of (rot13 for book spoilers) gur fgrrefjbzna frevrf.
Magic isn't real. But unnecessary magic can be annoying and sometimes doesn't pull its weight.
Sufficiently understood magic is by definition technology.
Going the other direction, towards higher levels of abstraction tends to strip away the magic too!
I recently wrote some robotics code using ROS. Taking a step back, I looked at the result and thought: actually, that's not much different conceptually from running K8s deployments coupled by Kafka.