All programmers need to try assembly programming just once.
Just a program counter, a stack, a flat addressable block of memory, and some registers.
You’ll very quickly learn how hard it is to program when you have complete responsibility for making sure that the state of the system is exactly as expected before a routine is called, and that it’s restored on return.
Every language tool we’ve built on top of that is to help programmers keep track of what data they stored where, what state things should be in before particular bits of code are run, and making it harder to make dumb mistakes like running some code that will only work on data of one sort on data of a totally different structure.
Of course types are a solution. You just have no idea what the problem was.
And I'll recommend the game Turing Complete on Steam if someone wants the full journey.
It's a puzzle game focused on building electronic components (with the goal of ending up with a running computer). Since it's more educational than competitive, it's a rather smooth journey.
But in the end there's not only assembler; there are actual electrons in the system that take time to reach the CPU. It took me 2-3 days to finish over the holidays, and it made me much more aware of what is happening under the hood.
When I was taking my compilers class at uni, I realised very quickly that the most irritating part was the sheer volume of book-keeping.
Things like the ABI itself, which covers register saving/restoring and function call stack setup; the linker and loader, where you ensure the right sections go to the right places and are correctly sized; and then at load time ensuring relative addresses are correctly relocated and all branch target labels are resolved to those relocated addresses. It was such a pain.
All of this book-keeping was genuinely the most time-consuming part of the assignments; the actual compiler ideas and implementation were relatively manageable.
Funnily enough the compiler implementation was done in OCaml.
Note that this is not incompatible with the author's view. The function abstraction does solve something: a problem we faced in the 20th century.
While I don't know whether I agree with their view, I do see that, once we've used the function abstraction to build a C compiler, and used this C compiler to build a proper OS, there is the possibility for such an OS to provide entirely new abstractions, and to forgo functions (almost) completely.
A very weird counterpoint was when I learned how to do COM / OLE calls in assembler instead of C++.
It was easier to get right, there was no wizard behind the typed curtain, just a bunch of arguments.
https://computersystemsbook.com/
If anyone is interested in learning assembly, this was the book and simulator I used as an undergrad. It's not exactly bytecode that will run on an actual processor, but it will help you understand the bare-metal things a CPU has to do to actually operate on data, and it's kind of eye-opening compared to what modern coding looks like nowadays.
I’ll always recommend Ben Eater’s YouTube series - one where he constructs a CPU and then one where he builds a working system around a 6502. Full of amazing insights into the things that a computer has to do just to accomplish the simplest tasks. And if you work your way up from the very bottom you have such an appreciation for why 6502 assembly language is the way it is when it’s introduced.
> make complexity manageable: strict isolation between components, explicit timing constraints, and simple, well-defined interfaces.
Maybe I'm missing something, but those simple well defined interfaces are the types?
I've worked with large JavaScript codebases before TypeScript came on the scene, and functions taking a single params object were rife. That, combined with casual mutation and nested async callbacks, made for a mess in which it was very difficult to gain confidence about what was actually being passed to a given function (presumably easier if you'd written it yourself instead of joining later).
A type system so that you can define those interfaces (and check they're adhered to) feels pretty essential to me.
I don't really get what the author is trying to say here; it sounds like they don't really know what they're talking about.
They talk about how electronics engineers use "isolation, explicit interfaces, and time-aware design" to solve these problems as if that's somehow different than current software development practices. Isolation and explicit interfaces are types. There's no strict analogue to "time-aware design" because software depends on timing in very different ways than hardware does.
Electronics engineers use a raft of verification tools that rely on what in software we'd call "types". EEs would actually love it if the various HDLs they use had stronger typing.
Where they really lost me is in the idea that UNIX pipelines and Docker are examples of their concept done right. UNIX pipelines are brittle! One small change in the text output of a program will completely break any pipelines using it. And for the person developing the tool, testing that the output format hasn't changed is a tedious job that requires writing lots of string-parsing code. Typed program output would make this a lot easier.
And Docker... ugh, Docker. Docker has increasingly been used for isolation and "security", but the original purpose of Docker was reproducible environments, because developers were having trouble getting code to run the same way between their development machines and production (and between variations of environments in production). The isolation/security properties of Docker are bolted on, and it shows.
"UNIX pipelines": Yes, this is where I stopped reading. The author claims that
> UNIX pipelines routinely compose dozens of programs into complex
> workflows, yet they require no type checking at the transport
> layer. The individual programs trust that data flowing between
> them consists of simple, agreed-upon formats—usually lines of
> text separated by newlines.
The transport layer is not the relevant layer; it's the programs in between that matter. If you're just looking for words or characters or something, that's straightforward enough--but the equivalent in programming is passing strings from one function to another. You run into problems when program 1 in the pipeline uses tab delimiters, whereas program 2 was expecting space characters or commas or something, and program 3 is expecting tokens in the pipe to be in a different order, or program 4 doesn't know what to do when some records have three tokens and others have four or two or something.
A few points:
- the explicit interfaces for modules used in hardware design _are_ types.
- hardware design requires extensive testing and verification, and formal verification (using types) is a valuable step in hardware design.
I think the author conflates the use of types with structural complexity - at least in the introduction of the article, hence the term "symptom" used to qualify type checking in the title (what's actually implied is _lengthy_ type checking, but without that context the title becomes clickbait-y).
Indeed, by the end, the author admits that types are useful, without conceding that the initial idea was an error of judgment.
> you can reason about each component in isolation
> you can focus on the explicit contracts between components
> you can design systems where problematic interactions are structurally impossible
In other words, exactly what a strong type system gives you?
My impression from doing my own amateurish hardware projects and knowing some real hardware engineers is that they would KILL for something like the type systems we have in software. In fact there are plenty of hardware description language projects that try to strengthen the type system of the HDL to catch errors early.
If you think a Rails test suite is bad, look at a Verilog test suite sometime.
What I find most amusing in this article is that its author states with great confidence that electronic systems designers don't use anything like rule checkers or typing, when the EDA industry is well-known for its extensive use of validation tools, and the design of large-scale complex modern electronic systems is almost impossible without it.
For the uninitiated, one popular example is SPICE: https://en.wikipedia.org/wiki/SPICE
>functional programming languages ... break down when you try to coordinate between components, especially when those components are distributed ...
I think the exact opposite. Having spent more than a decade in distributed systems, I became convinced functional programming (and pure functions) are essential for survival in distributed systems.
Back then I wrote down my insights here (amazed that the site still exists)
https://incubaid.wordpress.com/2012/03/28/the-game-of-distri...
And part of why functional programming works so well is exactly that you don't need to care about things like control flow and interactions. You're just expanding definitions. Those definitions are local/isolated, and compose in regular ways.
Even with my limited knowledge about FP I am pretty sure I would only grow more in agreement as I learn more. My only exposure to functional is via Nix and Rust (I promise I'll learn OCaml soon). One thing that I've really come to appreciate is the concept of "making invalid states unrepresentable," a trick that is harder than it should be (though not impossible) in "less functional" languages. Coming back to distributed systems, I have wondered what a functional database would look like. Mutations as pure functions, DUs in tuples, could we store a function, etc.
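To make the "invalid states unrepresentable" idea concrete, here's a minimal TypeScript sketch (RequestState and its fields are made up for illustration): a request can be loading, successful, or failed, and the nonsense combination "has data and has an error" simply cannot be constructed.

    // A request that is either loading, succeeded, or failed.
    // The invalid combination "has data AND has an error" cannot be written down.
    type RequestState<T> =
      | { kind: "loading" }
      | { kind: "success"; data: T }
      | { kind: "failure"; error: string };

    function describe<T>(state: RequestState<T>): string {
      switch (state.kind) {
        case "loading":
          return "still waiting...";
        case "success":
          return `got ${JSON.stringify(state.data)}`;
        case "failure":
          return `failed: ${state.error}`;
      }
    }

    // Compare with a loose version where nothing stops you from constructing
    // { data: ..., error: ... } and every caller must re-check it.
    console.log(describe({ kind: "success", data: [1, 2, 3] }));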
> "making invalid states unrepresentable," a trick that is harder than it should be (though not impossible) in "less functional" languages
The flip side of this is to "make representable states valid." If you have an enum that doesn't fill a bitfield, values of the bitfield outside the enum are representable -- and the behavior of the system must be defined in that case. (Most often, this is done by mapping the behavior of undefined states to a chosen defined state, or using it to trigger an abort -- the key is that it must be an explicit choice.)
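A minimal sketch of that flip side, in TypeScript rather than a real bitfield language (the Command enum and the 2-bit field are hypothetical): the decoder makes an explicit choice for the representable-but-unassigned value.

    // Three commands fit in a 2-bit field, so the raw value 3 is representable
    // but has no assigned meaning. Decoding must make an explicit choice for it.
    enum Command {
      Stop = 0,
      Start = 1,
      Reset = 2,
    }

    function decodeCommand(raw: number): Command {
      const field = raw & 0b11;   // only the 2-bit field is meaningful
      switch (field) {
        case Command.Stop:
          return Command.Stop;
        case Command.Start:
          return Command.Start;
        case Command.Reset:
          return Command.Reset;
        default:
          // Explicit choice: map unknown encodings to a safe default.
          // Aborting here would be an equally valid, but different, choice.
          return Command.Stop;
      }
    }

    console.log(decodeCommand(0b111));  // -> Command.Stop (raw 3 is out of range)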
regarding databases, you get quite far using purely functional data structures and zippers.
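For the curious, here's a rough idea of what a purely functional zipper looks like, sketched in TypeScript (the Zipper shape and helpers are illustrative, not from any particular library): every edit returns a new value, so old versions remain intact, which is part of why these structures map well onto database-like use cases.

    // A list zipper: a focused element plus everything to its left (reversed)
    // and right. Moving the focus or editing is purely functional; each
    // operation returns a new zipper and never mutates the old one.
    interface Zipper<T> {
      left: T[];    // elements before the focus, nearest first
      focus: T;
      right: T[];   // elements after the focus, nearest first
    }

    function fromArray<T>(xs: T[]): Zipper<T> | undefined {
      if (xs.length === 0) return undefined;
      return { left: [], focus: xs[0], right: xs.slice(1) };
    }

    function moveRight<T>(z: Zipper<T>): Zipper<T> | undefined {
      if (z.right.length === 0) return undefined;
      return { left: [z.focus, ...z.left], focus: z.right[0], right: z.right.slice(1) };
    }

    function set<T>(z: Zipper<T>, value: T): Zipper<T> {
      return { ...z, focus: value };
    }

    // Both versions of the history remain available, like snapshots in a database.
    const v1 = fromArray([1, 2, 3])!;
    const v2 = set(moveRight(v1)!, 42);
    console.log(v1.focus, v2.focus);  // 1 42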
>The standard answer is scale. “Small programs don’t need types,” the reasoning goes, “but large programs become unmaintainable without them.” This sounds reasonable until you realize what we’re actually admitting: that we’ve designed our systems to be inherently incomprehensible to human reasoning. We’ve created architectures so tangled, so dependent on invisible connections and implicit behaviors, that we need automated tools just to verify that our programs won’t crash in obvious ways.
A few thoughts...
1. Some things are complicated. Of course we should eliminate needless complexity - but not all complexity is needless.
2. If your system has huge thing 1 connected to huge thing 2, then you have to understand 2 huge things and a little glue. If you instead have a system composed of small thing 1 connected to small thing 2, ... connected to small thing 100, you now have to understand 100 small things and 99 connections. The nature of these things and connections matters, but you can't just keep dividing things to simplify them. John Carmack's email about inlining code springs to mind: http://number-none.com/blow/john_carmack_on_inlined_code.htm...
3. "A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system." - https://en.wikipedia.org/wiki/John_Gall_(author)
I would love to see the author take a complex system where Typescript (for example) is "necessary" and refactor it to JavaScript.
> that data flowing between them consists of simple, agreed-upon formats
> HTTP servers and clients, email systems, DNS resolvers—they all interoperate based on simple protocols
Yeah, massive disagree with this. There is absolutely nothing "simple" about these protocols. Try to compose 3 or 4 UNIX programs and you necessarily need to run the pipeline to see if it even makes sense. Don't get me started on HTTP.
It boils my blood to read "simple" in the context of software engineering
>> Instead of worrying about how changes in one module might ripple through dozens of dependencies, you can focus on the explicit contracts between components.
So, typing.
> When components are designed as genuine black boxes—where what happens inside stays inside, and communication occurs only through explicit input and output ports—systems naturally become easier to reason about.
"How do you specify the shape of the input and output ports?" shouts the goose as it chases me away.
Text goes in and text comes out - simple! Works perfectly for shell scripts. /s
This might be the worst argument against type checking I've ever heard, in a strong field!
This author would apparently also argue that, because I rely on handwritten labels for the dozen or so storage bins in my garage, I must have created a system so intricate and interconnected that human reasoning fails, then declared the tools that help me navigate this complexity to be essential.
The author would surely make similar arguments against clocks, calendars, and calculators.
Of course, what's actually happening in all of these situations is that humans are relying on tools to handle tasks which the tools are designed to handle, so that the humans can focus on other things.
> The standard answer is scale. “Small programs don’t need types,” the reasoning goes, “but large programs become unmaintainable without them.”
There's not really a JavaScript program so small that I wouldn't use TypeScript. Okay, typing a few lines into the Chrome DevTools console is an exception, if you count that.
Ahahahahaha.
Remember when the world went insane and wanted JS (not TS) to run literally everywhere?
Now we live in a world where TS, Python, and even PHP have admitted that unrestrained dynamic typing was a mistake. People who didn't get the memo are highly questionable, probably suffering from solo-dev hacker-mindset syndrome.
I am not sorry for having strong opinions.
As other people here have said, type systems are merely an automated check that interfaces are used correctly. But to read this more generously, the author is saying that with sufficiently simple interfaces such checking isn't necessary. That's arguably true, though rather limiting (since a direct composition of simple interfaces is not necessarily a simple interface): a system that presents a single call that reads one byte and adds one to it, with wrapping, doesn't need a type because all inputs are valid.
Unfortunately, the examples given are not particularly representative, because they all imply complex interfaces (reading? writing? buffering?) and complex input/output formats: the shell that is doing the piping only cares about ‘bytes’, but the programs themselves have much more complex requirements on the data, which tends to be what happens once you start assigning semantics to your interfaces.
Even with spatial locality limiting the number of inputs and outputs, the ‘interfaces’ that hardware uses are usually specified in multi-hundred-page manuals, and that's mostly due to having to encode the semantics of what it means to put volts on those wires.
Another thing about bytestream-oriented interfaces like pipes is that they tend to turn every program into a parser just to reject invalid input. Eventually someone is going to put a type system on top of that parser so it can reject not only syntactically invalid input but semantically invalid input as well.
As such, the "we don't need types, we can have parsers" line of argument doesn't feel very inspiring.
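To make the "every program becomes a parser" point concrete, here's a hedged TypeScript sketch (PasswdEntry and the field choices are arbitrary): once the parse produces a typed value, the rest of the program no longer has to re-check delimiters or field counts.

    // Downstream of the parse, the rest of the program works with PasswdEntry,
    // not raw text, so it never re-validates delimiters or field counts.
    interface PasswdEntry {
      user: string;
      uid: number;
      home: string;
    }

    function parsePasswdLine(line: string): PasswdEntry | Error {
      const fields = line.split(":");
      if (fields.length !== 7) return new Error(`expected 7 fields, got ${fields.length}`);
      const uid = Number(fields[2]);
      if (!Number.isInteger(uid)) return new Error(`bad uid: ${fields[2]}`);
      return { user: fields[0], uid, home: fields[5] };
    }

    const parsed = parsePasswdLine("root:x:0:0:root:/root:/bin/bash");
    if (parsed instanceof Error) {
      console.error(parsed.message);
    } else {
      console.log(parsed.user, parsed.uid);  // the type guarantees these exist
    }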
I can get by with a weakly typed language for a small program I maintain myself, but if I am making something like a library, lack of type checking can be a huge problem.
In something like JavaScript, I might write a function or class or whatever with the full expectation that some parameter is a string. However, if I don't check the runtime type and throw when it's unexpected, then it's very easy for me to write this function in a way that currently, technically, happens to work with some other datatype. Publish this, and some random user will likely notice and start using that function with an unintended datatype.
Later on I make some change that relies on the parameter being a string (which is how I always imagined it), and publish, and boom, I broke a user of the software, and my intended bugfix or minor point release was really a semver breaking change, and I should have incremented the major version.
I'd bet big money that many JavaScript libraries that are not fanatical about runtime checking all parameters end up making accidental breaking changes like that, but with something like typescript, this simply won't happen, as passing parameters incompatible with my declared types, although technically possible, is obviously unsupported, and may break at any time.
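A hypothetical illustration of that failure mode (greetAll is invented): a looser earlier implementation happened to accept arrays, users started relying on it, and a harmless-looking refactor becomes a breaking change. With the parameter declared as a string, the unintended usage never compiles in the first place.

    // The published function: the author always imagined `names` as a
    // comma-separated string. Suppose an earlier, looser implementation also
    // happened to accept arrays; that is exactly the accidental contract
    // described above, and this refactor silently breaks those callers.
    function greetAll(names: string): string[] {
      return names.split(",").map((n) => `Hello, ${n.trim()}!`);
    }

    // With the parameter declared as `string`, the unintended usage is
    // rejected at compile time instead of breaking only after a refactor:
    // greetAll(["Ada", "Grace"]);   // error: string[] is not assignable to string

    console.log(greetAll("Ada, Grace"));  // [ 'Hello, Ada!', 'Hello, Grace!' ]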
Maybe I’m in the minority but I’ve never associated a type system with scale/maintainability directly.
I treat them as I would unit testing: a useful tool to catch bugs early and if you’re skipping it, it should be an intentional decision where you’ve accepted the trade offs.
I find the author's description of hardware design, of which I know very little, to match how static typing work. You are explicit about the interface in order to compose while treating individual parts like black boxes.
On static type checking in general my view is: Programming is principally about dealing with invariants. The author is correct to point out that humans are bad at reasoning about more than 7 things at once. Those things we often reason about are invariants and type systems helps us ensure we never violate them. I think even 7 invariants is too many, in C it's common enough to mess up a single invariant (only access this memory while it's alive) that it has caused thousands of CVEs.
Ultimately, why would I not use a tool that helps me get the invariants right more of the time, thus reducing cognitive load?
This article is so consistently wrong that I can only assume that it's some sort of deliberate ragebait. It's not worth the time to read, let alone refute.
I don’t think it is deliberate rage bait.
It is looking more like the author genuinely thinks he understands something on a higher level and is a bit high on his own farts.
The excessive amount of em-dashes does seem to imply it's LLM-slop ragebait. The prompt probably included "make intentional mistakes in your arguments to drive engagement" or something like that.
All of the examples in the article of “simplicity” involve the orchestration of complex platforms that are, for the most part, written using types. Try writing a browser that can pass Acid2 in Ruby and then try to convince anyone that what you’ve done is in any sense better than the Chromium codebase in all its complex glory.
Maybe the truth is that our job as developers is to manage the inherent complexity of the world we live in.
It's worse than that. Even ruby has types. They're just implicit and only checked at runtime.
> The standard answer is scale. “Small programs don’t need types,” the reasoning goes, “but large programs become unmaintainable without them.”
This is not accepted wisdom at all.
If the argument is that even small programs benefit from types, then I'd agree. In the context of TypeScript, for example, leaving types out hides vital information needed for reading the code. Without them, you must infer that information as you read, and you may well infer wrong, for instance incorrectly spotting a place where a function that appears to take a number is passed a string. To know whether that's a bug, you must read the called function to determine whether it explicitly handles that case with a conversion. And the inverse is even worse, because you may come to assume that all the functions taking numbers also take care to convert strings should they appear, when in fact no such assumption can be made.
I'm of the mind that the modern resurgence of typed programming languages is mainly the revelation that you can build your contracts directly into the structure of the code without necessarily needing additional (imperative, runtime!) assertions or physically separated unit tests to validate those assumptions as the software evolves. It's just inlining a certain kind of assertion or test directly into the declarations of the code.
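A small TypeScript sketch of that "inlined assertion" view (area is a toy example): the untyped version carries its contract as an imperative runtime check, while the typed version states the same contract in the declaration and has it verified at every call site.

    // Untyped style: the contract lives in a runtime assertion
    // (or in a unit test far away from the declaration).
    function areaLoose(width: unknown, height: unknown): number {
      if (typeof width !== "number" || typeof height !== "number") {
        throw new TypeError("area expects two numbers");
      }
      return width * height;
    }

    // Typed style: the same contract is part of the declaration itself and is
    // checked at every call site as the code evolves, with no runtime cost.
    function area(width: number, height: number): number {
      return width * height;
    }

    // area("3", 4);   // rejected at compile time
    console.log(area(3, 4));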
He's even more wrong when the Python typing system allows us to catch bugs in small scripts.
Extremely poor takes.
I distinctly remember, back in 2017 or so, using TensorFlow to train a CNN.
At the time Python really hadn't gotten its shit together with regards to typing.
Google's TensorFlow documentation was heavily filled with completely useless high-level nonsense like this:
from tensorflow.datasets import mnist
from tensorflow.models import alexnet
alexnet.train(mnist)
Or something like this. Useful maybe if you want to put "wrote a machine learning classifier" on your resume, but not useful otherwise. The biggest pain of all was that I had no clue what I was supposed to put into any of these functions. The documentation was just the function names and argument names. I ended up opening Python in interactive mode, calling type() on various intermediates while running through examples, and writing it all down.
When you want others to actually be able to use your code, you NEED to specify what your inputs actually are in a clear and unambiguous way. This is fundamentally why typing is vital and useful even in small codebases. Argument names alone (or, even worse, just *args, **kwargs) simply don't work.
This is why, if you open up any half-decent Python library now, they actually tell you what you need to provide and what to expect out. Without that you cannot reason about what to do.
> The transport layer [of UNIX pipelines] uses extremely simple data—bytes or characters—which allows universal pluggability. Individual components can layer more sophisticated agreements about data meaning on top of this simple substrate.
No, they can't - they can make assumptions and assertions, but not agreements, because they have no way to communicate such agreements between each other.
Oh, man... If the author puts the work into studying and making his idea work, he may even create some weird variant of Smalltalk...
This is a bit like saying, "We shouldn't need math to reason with problems, we should just go with simpler explanations that don't need math."
Testing is science, type-checking is math https://adueck.github.io/blog/testing-is-science-type-checki...
Why have stair railing when the real problem are buildings with multiple floors?
I think every comment on this article so far is misunderstanding the argument, which seems optimistic to me—he may be rightly pointing at a dogma.
I learned electronics and dataflow languages before I learned traditional languages, and learning how modern systems were architected was horrifying to me. The level of separation of components is nowhere close to what's possible, because—as the author points out—the default level of abstraction in modern programming languages is the function.
Don't read this article as a critique of types (as in: typed vs untyped), but as a finger pointing at a higher level of abstraction that isn't a first-class concept in the tooling used to architect large systems today.
All that said, I have come across many people beating this drum over the years—usually Alan Kay or Ted Nelson types—and I suspect there's something they're missing as well.
Meh, having been forced to use LabVIEW for an engineering class a long time ago, dabbled some in Verilog, and worked on some production actor-based code, it seems clear to me that the reason we don't see those kinds of approaches often is that they very quickly become impossible to understand when you have a lot of interactions.
LabVIEW was a particular nightmare even for very basic tasks.
Ultimately I think this is cope, but I see this refrain over and over and I think it's what holds the current dogma in-place.
I suspect the issue is that traditional boxes and wires type dataflow IDEs put the burden of abstraction on the programmer, and since most people encounter these tools as students, they understandably fail to set up the abstractions correctly and conclude that the entire concept of high level abstraction is pointless.
I think there's a much better way, but nobody's really built it in a way that cuts through the prejudice.
Coincidentally, I learned how Palantir Foundry works today and it's an interesting case-study.
Or it's just easier to create abstractions with functions as your basic tool. I find that fields like differential geometry or representation theory have far more powerful abstractions, which you can use to quickly reason about very complicated ideas, than anything you'll ever see in business programming, and those abstractions are all based on functions (and you'll find that any math book is full of "type annotations").
> Type checking, in other words, is not a solution to complexity—it’s a confession that we’ve created unnecessary complexity in the first place.
I'm sorry, but this is ridiculous. Yes, type checking is a tool for managing complexity (among other things). But to make the leap from there to "this complexity is unnecessary" has absolutely no basis. This is like saying EDA tools shouldn't have design rule checks because it means we're making our circuit boards too complex. These tools are there to enable you to be more efficient: to write code or lay down traces without having to manage all the constraints yourself.
And speaking of design rule checks:
> Consider this: electronics engineers routinely design systems with millions of components, intricate timing relationships, and complex interactions between subsystems. Yet they don't rely on anything analogous to our type checkers.
The author has clearly never done any professional electrical engineering because this could not be further from the truth. Every EDA package has a design rules system that includes a design rule checker. You cannot spin a board without passing these checks, and if you try to work without them, not only is the board you get from the assembly house not going to work, you're also going to get fired.
The suggestion in the concluding section that we should merely design every software component as a black box with explicit input and output ports is already how we build software. It's how we've been building software for decades. What do you think function arguments and return values are? What do you think APIs are? And how do engineers whose components interface with yours know how to interface with your components, if they don't have at least some type information? Even in a JavaScript application you need to know something about the layout of the objects that are expected and returned. That information is a type!
Quit trying to outsmart decades of real-world experience and accept help from the tools that make this job easier.
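To put the "that information is a type" point in code, here is a hedged TypeScript sketch (JobConfig and loadConfig are invented for illustration): the knowledge every JavaScript caller already needs about the object's layout is simply written down where the compiler can see it.

    // Even in "untyped" JavaScript, callers of this API must know that the
    // return value has .id and .retries; writing that knowledge down as an
    // interface just makes the existing, implicit type explicit and checkable.
    interface JobConfig {
      id: string;
      retries: number;
      timeoutMs?: number;   // optional parts of the contract are spelled out too
    }

    function loadConfig(raw: string): JobConfig {
      const parsed = JSON.parse(raw) as Partial<JobConfig>;
      return {
        id: parsed.id ?? "default",
        retries: parsed.retries ?? 3,
        timeoutMs: parsed.timeoutMs,
      };
    }

    console.log(loadConfig('{"id":"nightly","retries":5}'));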
Horrendous take. Humans can reason about (some) hardware chips because they are space-constrained. The analogous solution in software would be to limit the size of your programs to a human-readable size—18k loc or so.
The author seems to like black boxes. Unix corelib is a collection of black boxes[1]. So are docker containers[2]. Do you know what else are black boxes? TYPES. Specifically interfaces. Types let you compartmentalize your software into as many black boxes as you want, then compose them.
———
[1] Try building any non trivial service by composing Unix corelib tools, docker containers, or REST APIs. You’re in for a world of hurt.
[2] Docker containers aren’t even really isolated. The JupyterHub DockerSpawner image spins up and tears down other docker containers at will. The whole thing is a mess of state. There’s no composition—just the container equivalent of `goto`
> a human-readable size—18k loc or so.
I agree in general with the principle that there is a 'human-readable' size, and that it's around some 10k-30k LOC, but I'm curious how you came up with that specific number. Has it been researched and measured? Does it vary significantly from person to person? I tend to think of 80kLOC as the upper limit, because that's the size of the DOOM engine, written by a single highly-capable programmer, who presumably had it all in his head at one point. But that is empirically way beyond what I'm capable of.
> Do you know what else are black boxes? TYPES.
Indeed. This person doesn't even know what types or functions are.
I have noticed that static type checking often enables people to build systems more over-engineered than they could without it. It's not a coincidence that factory-factory-impl happened in Java, not Ruby.
The more pedestrian reason there's demand for static typing is that when people press '.' they generally want a menu to appear, and they want it to be complete and accurate.
I feel like the opposite is true - we don't need type systems because our programs are complex, our programs are complex because we use powerful type systems. The code I write in Typescript is usually more complex than what I would have written in Go or JS.
That being said, clearly there is a ton of value in automatically verifying interfaces are implemented correctly. Even dynamic languages like Clojure have Spec. I think that's a good balance.
> we need to recognize that the felt necessity of sophisticated type systems is a signal that we’re using the wrong architectural foundations
I'd be forever ashamed if I wrote that.
I agree with the author. I've written my fair share of typed code, but it is way more valuable to probe your program while running. I feel Jack Rusher puts it quite nicely in this talk: https://youtu.be/8Ab3ArE8W3s?t=1477
I think this article highlights a misconception that type skeptics often have, which is that type systems are some sort of enterprise solution that cargo-cultists have come to embrace as "best practices". Either that, or that type enthusiasts are just people that love wearing hairshirts.
Maybe there are some people like that, but for many, types simply are the best way to think about and structure code. They provide a sort of interactive feedback loop that actually makes the development process more pleasant. And these skeptics seem entirely incapable of conceiving that people pick types not because they were looking for some sort of organizational "solution" but because they genuinely prefer the mode of code-writing that type systems enable.
> We’ve created systems so intricate and interconnected that human reasoning fails, then declared the tools that help us navigate this complexity to be essential.
Yes??? I personally cannot understand the complexity of a single compiler, let alone all the libraries and systems that might contribute to a program, but maybe I'm just not trying hard enough.
I'd love for type checking in ML code. For example, here's an einsum from my code: "bhsc,cst->bhst". I'd love for a static analyzer to be able to tell if I accidentally swapped two of my dimensions and the einsum won't run, or what the output dimensions of an operation are.
Lots of negativity here. I found that POV very interesting. I'll rephrase it like this: programmers tend to focus on micro-issues, but with a large codebase the main problems are finding the correct abstractions and designs to manage the complexity, which themselves aren't described at all by types (a perfectly type-safe program can be an absolutely buggy, over-engineered mess).
Aren't those abstractions described by types though? E.g. take a "paradigm" like Map-Reduce. Very buzzwordy for distributed processing 15 years ago. The basic gist is "notice your program is a monoid homomorphism and you see it is trivially parallelizable". Or, as has been observed before, half of the Design Patterns in Java are just trivial uses of function types.
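A rough TypeScript sketch of that Map-Reduce observation (Monoid, Sum, and mapReduce are illustrative names, not a library API): once the combine step is declared as a monoid, the chunked reduction is parallelizable by construction.

    // The "map-reduce" observation in type form: if the combine step is a monoid
    // (associative, with an identity element), the reduce can be split across
    // workers and merged in any grouping without changing the result.
    interface Monoid<T> {
      empty: T;
      concat: (a: T, b: T) => T;
    }

    const Sum: Monoid<number> = { empty: 0, concat: (a, b) => a + b };

    function mapReduce<A, B>(m: Monoid<B>, f: (a: A) => B, chunks: A[][]): B {
      // Each chunk could be reduced on a different machine; associativity is
      // what makes the final merge order irrelevant.
      const partials = chunks.map((chunk) => chunk.map(f).reduce(m.concat, m.empty));
      return partials.reduce(m.concat, m.empty);
    }

    console.log(mapReduce(Sum, (s: string) => s.length, [["foo", "ba"], ["quux"]])); // 9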
A type system lets different parts of a program agree on how to interpret a pattern of bits in memory and then enforce that interpretation. I don't think electronic circuits built from discrete components that have immutable physical properties are analogous in the way that the author apparently thinks they are.
Wow, so much wrong here. I dunno whether it's worth picking apart because it's so obviously wrong but... I'll start:
> The standard answer is scale. “Small programs don’t need types,” the reasoning goes, “but large programs become unmaintainable without them.” This sounds reasonable until you realize what we’re actually admitting: that we’ve designed our systems to be inherently incomprehensible to human reasoning. We’ve created architectures so tangled, so dependent on invisible connections and implicit behaviors, that we need automated tools just to verify that our programs won’t crash in obvious ways.
>
> Type checking, in other words, is not a solution to complexity—it’s a confession that we’ve created unnecessary complexity in the first place.
Woah woah, where did "unnecessary" come from? Yes, we have made super complex systems that need automated checking. But they are (mostly) necessarily complex. Are you going to write a 10k-line OS kernel? A 10k-line web browser? A 10k-line word processor? Yeah, good luck.
> electronics engineers routinely design systems with millions of components, intricate timing relationships, and complex interactions between subsystems. Yet they don’t rely on anything analogous to our type checkers.
I dunno if he's talking about PCBs... but actually having types for wires is a pretty great idea. Seems like Altium kind of has support for this but I don't know how much checking it does: https://www.altium.com/documentation/altium-designer/sch-pcb...
Anyway, aside from the fact that PCB design is a completely different thing, PCB designers accidentally make wrong connections all the time.
> The problem isn’t that software is inherently more complex than hardware.
It is though. Way more complex. Find me a PCB as complex as Chrome or LLVM. Hell, find me a CPU as complex. Even the latest OoO designs don't come close.
> UNIX pipelines
Ha I knew he was going to suggest this.
> yet they require no type checking at the transport layer
"require" sure, I guess. Now go and look up how many quoting options there are in `ls`. And nobody has ever realized that structured piping is better and have written two popular shells (at least) that support it.
Ugh so many misguided words remain, I'll stop there.
> electronics engineers routinely design systems with millions of components, intricate timing relationships, and complex interactions between subsystems. Yet they don’t rely on anything analogous to our type checkers.
Disclaimer: not hardware engineer.
However as a junior in college we had several classes using VHDL and later Verilog. They have types. They are used to simulate design before committing it to board.
I would be surprised if people brute-force it often.
Right. Try building thousands of transistors without rigorous EDA and basic checking in VHDL. This article is such a bad take.
Physics provides a cruel form of type checking that electronics engineers have to deal with, in a way that software doesn’t. Put the wrong voltage on a trace and the type checker fails in a puff of smoke.
Since software won’t be subjected to physics, we have to introduce our own laws to constrain reality. Type checkers are exactly that.
> Why do we need type checking at all? The standard answer is scale. “Small programs don’t need types,” the reasoning goes, “but large programs become unmaintainable without them.”
No it's not.
Type checking is a first line of defense against bugs. That's all.
It's just making sure you're not accidentally passing invalid data or performing an invalid operation.
And since the article gets the premise wrong, its conclusion that reducing complexity will make type checking "largely irrelevant" is therefore also wrong.
> Type checking is a first line of defense against bugs. That's all.
Exactly, and it goes way beyond that point. Static type checking turns expensive, hard-to-catch bugs that manifest only at runtime when exercising specific code paths into cheap, trivial to spot bugs that you spot at compile time.
On top of that, nowadays it greatly improved developer experience as IDEs use type hints to detect and avoid bugs even before you compile the code, and even helps autocomplete code you are typing.
We've been through this before. Haven't we learned this yet?
One of the things that drives me crazy is when people say with such brazen confidence, "we don't need type checking because our tests catch type errors." But even if your tests do catch a large amount of the errors (depending on how thorough your tests are that you've written), YOU STILL HAVE TO GO HUNT DOWN WHERE THAT ERROR CAME FROM when a test fails. Isn't it so much nicer to just have the type checker automatically show you exactly where the type error came from?
Right. The same rationale also applies to compile-time vs runtime checks: catching type bugs before the program is even executed.
I am all for static type checking and it is a hill I will die on so here goes...
The obvious flaw in the article: we can have both static type checking AND better architecture. In fact I also agree a lot of software architecture is unnecessarily complex and we should do better. Simultaneously I want a rock solid type system to back up that good architecture at a lower level.
Next: programming a computer is inherently complex, we make a lot of code, and humans make mistakes. In my experience we make quite a lot of mistakes as a result. Did you remember to initialise that variable? Assign the right data type? Free that memory allocation? Pick the right CPU register? One day you'll forget, so why not have a machine check it for us? Why handicap yourself with manual checks for something the machine can solve near instantly? Some people absolutely INSIST on dodging static checks and then are surprised when their software ends up a buggy mess and on the CVE list.
Next: types are abstractions for things we shouldn't need to worry about every time. Like another commenter implied, if you hate types then feel free to code in assembly, where you'll spend most of your day choosing registers and which MOV instruction to use. In most use cases this is a waste of time because the compiler will do a great job in a millisecond. And some things are just complicated and beyond comfortable human mental capacity - who wants to model a hash table every time they need a hash table? Lucky we can put it in a type and stop thinking about it. Computers are intricate and types help us manage the difficulty at a low-medium level of abstraction.
Next: loose typing will almost always reduce readability. For any sort of structured data it quickly becomes a headache to figure out what's happening without types. Types are precise, immutable, and enforced. "Stringly-typed" or "dict-typed" data sucks. Even if you architect your design well it's still clearer to say in code what you mean for your code. Maybe you can put a detailed code comment but at that point just use types. Interesting that the article mentioned UNIX shell coding because past the given example of sorting lines - which is about as trivial and uninteresting of an example as you can possibly get - shell coding is a pain exactly because everything is a string. What is this variable? What does this function/module return? Who knows! Shall we spend an hour researching and probing until we figure it out?
Lastly: I'm not an electronic engineer but I'm pretty sure they DO use tools to statically check their designs. Because signal timing is so important and it's hard to check manually. I doubt Intel's CPU has architectural simplicity and that's the sole reason it works correctly.
In summary, types make so much sense. Please use them to make your software more efficient, more reliable, and more maintainable.
> electronics engineers routinely design systems with millions of components, intricate timing relationships, and complex interactions between subsystems. Yet they don’t rely on anything analogous to our type checkers.
Bullshit, they absolutely do. They have circuit-design software with all sorts of verification tools. Go search "circuit verification". And EEs would jump at any additional opportunity to check their circuits for common errors before having to run them, just as every other engineering discipline would jump on the opportunity for better checking of common issues.
Sure, we should be writing more "obviously correct" systems. Type-checking helps you do that. What is the problem?
"Actually, proving things is bad. We should arrange the world so that everything is self evident."
I've always viewed type systems as adding constraints on and descriptions to the data and logic in the system.
Which is exactly what you find a ton of in electrical engineering (e.g. IEEE C37.2 and gazillion more).
> The standard answer is scale. “Small programs don’t need types,” the reasoning goes, “but large programs become unmaintainable without them.” This sounds reasonable until you realize what we’re actually admitting: that we’ve designed our systems to be inherently incomprehensible to human reasoning.
Reasoning has nothing to do with it. If I have a `scale(x, y, z)` function in my code I can easily remember how to call it.
But with a large codebase, if my fuzzy memory tells me I should call it as `scale([x, y, z])` I don't want to find out I'm wrong at runtime.
Unit tests are good, but there's a tendency for dynamic languages to overuse them exactly because they don't have basic type safety.
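A toy TypeScript version of that scale example (the signature is invented): the declared parameter list settles which call shape is right before anything runs.

    // The fuzzy-memory problem from above: both call shapes look plausible,
    // and only one of them is right. A declared signature settles it statically.
    function scale(x: number, y: number, z: number): [number, number, number] {
      return [x * 2, y * 2, z * 2];
    }

    console.log(scale(1, 2, 3));   // fine
    // scale([1, 2, 3]);           // compile-time error: expected 3 numbers, got 1 array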
Insert that one xkcd meme:
"You don't need x, real programmers use y!"
What a load of rubbish. Is this generated by AI? Sure reads like slop...
> Instead of worrying about how changes in one module might ripple through dozens of dependencies, you can focus on the explicit contracts between components
That's a great idea! I'd like to suggest that we make programming languages have this as a first class concept: as part of writing a module, you write down a static description of the explicit contract. You might even encode effects and error states into that description!
Then, you know that if you stick to that contract, nothing ripples through the dependencies, but if you change it, there's things you need to go update elsewhere.
You can even readily swap in different components that have that same external contract! And since it's static, we can check it before the program even runs.
"external contract" is kind of clunky. What if we called it a "type"?
This is silly. Types are specifications of programs and data. The only reason that isn't obvious is that most of our type systems are rudimentary. It becomes obvious once you pick up anything with a bit more power (Haskell with extensions, or any dependently typed programming language).
A type system is not just another ad-hoc solution; it's fundamental and essential to what we're doing. This is obvious if you know the (pre-computer) history of type theory: types specify sets of objects intensionally. Types describe what our program should do and are also a very clear form of documentation.
On top of that, you can't get rid of types even if you wanted to. The best you can do is make them implicit. You have to make a decision about how to interpret data in the end, and that's typing. Even assembly language and machine code have types. Even electronics have types; we just don't think of it that way (but look at the 7400-series ICs if you want explicit examples).
This post is written by someone rather naive of computer science. Their description of a "function" betrays that. The failure to recognize UNIX pipelines as function composition also betrays that and so does the whole "black boxes" paragraph.
> Even assembly language and machine code has types.
I agree with the overall thrust of your argument, but on this point specifically, what's noticeable in an assembly language is that the data doesn't have types; the operations are typed. That's usually not really a thing in programming languages. It does happen, but it's not common.
For example Rust's Wrapping<i8> is a distinct type from Saturating<i8>, and if we've got variables named foo and bar with the value 100 in them, if foo and bar have the type Wrapping<i8> the addition operation means the answer is -56 but for Saturating<i8> the exact same addition operation gets you 127
In assembly or machine code there'd be a wrapping addition and a saturating addition, and they'd both work on plain integers, which don't themselves inherently care.
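For anyone who wants the two operations side by side, here's a rough sketch in TypeScript rather than Rust or assembly (the function names are made up): the same untyped numbers go in, and it's the operation that decides whether the result wraps or saturates.

    // Same raw values, different operations: as in the assembly case described
    // above, the numbers themselves don't care; the add operation decides.
    function wrappingAddI8(a: number, b: number): number {
      const sum = (a + b) & 0xff;              // keep the low 8 bits
      return sum >= 128 ? sum - 256 : sum;     // reinterpret as signed i8
    }

    function saturatingAddI8(a: number, b: number): number {
      return Math.max(-128, Math.min(127, a + b));  // clamp to the i8 range
    }

    console.log(wrappingAddI8(100, 100));    // -56
    console.log(saturatingAddI8(100, 100));  // 127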
No, the data is typed; it's just not type checked.
> Types are specifications of programs and data. The only reason that isn't obvious is because most of our type systems are rudimentary.
One major limitation of almost all type systems: they fail to distinguish logical types from representation.
> This is silly.
Yes, the article is deeply unserious, and perhaps even in "not even wrong" territory. The poor blogger tried to put up a strawman about type checking being a cognitive-load issue, completely ignoring that interfaces matter and that complying with them is a necessary condition for robust, bug-free code.
Agreed. The article makes many references to how life would be better with explicit well-defined interfaces… but types are how we explicitly define interfaces.
For example, Rust borrow checking isn’t to add complexity, it is to make it possible to explicitly define lifetime bounds in interfaces.
Sorry if this is off topic, I have a language-related question:
Over the last year I’ve seen people on the internet using “unserious” as an adjective to criticize people. Is it just an expression that became common, a frequent literal translation from another native language…?
Not at all judging, I am not a native speaker and I never saw the word used that way before, but this is the kind of question that’s pretty much impossible to get an answer for without a direct question.
It's an old word from the 1650s, just becoming more popular now for zeitgeist reasons. https://www.etymonline.com/search?q=unserious
It's not a literal translation from another language. It's just a word that's taken off in popularity, presumably in response to the rising trend of anti-intellectualism.
The word ‘serious’ has several meanings, depending on what it’s used to describe. As a result it has a bunch of opposites: frivolous, funny, non-life-threatening…
When we are grading a person’s argument or their approach to a topic for seriousness, none of those seem to capture the sort of opposite we need. And that’s where ‘unserious’ comes up. How seriously should we take this? Is it serious or unserious?
I’m not sure why there would be a relatively recent surge in its use. Etymonline cites ‘unserious’ as having 17th century origins. Ngram shows increasing usage in the modern era but starting from the 1950s onwards.
If anything it probably reflects an increased use of ‘serious’ in the context of discussing intellectual opinions, as opposed to ‘consequences’ or ‘demeanor’ or ‘injury’ – which if anything is a reflection of increased public engagement with intellectual pursuits.
To clarify, I mostly see it in political comments, as in “trump is deeply unserious as a president” (substitute trump for whoever the poster dislikes).
That’s what weirded me, without context it sounds too literary for random internet comments, but my gut feeling in a second language isn’t really worth much.
Exactly. Types are the theorems; programs are the proofs. So types are necessarily linked to programs. I think the author of the article has a misunderstanding of what types are.
> Consider this: electronics engineers routinely design systems with millions of components, intricate timing relationships, and complex interactions between subsystems.
As an EE: no, just no. This is like asserting that the average programmer routinely writes thousands of lines of C in a highly distributed environment. And like programmers, once you're at this scale of complexity, it's far too big to fit into a single human skull, and you absolutely must have an entire team of people.
> Yet they don’t rely on anything analogous to our type checkers.
We absolutely do. Every day. When we lay out a PCB, there are dozens and dozens of safe/sane checks happening in the background at all times.
Counterpoint: we have all kinds of situations where type checking would save us. Ask anyone who has implemented an I2C driver. We're twiddling bits and we have a huge pile of magic numbers from the vendor. There is absolutely no way to verify that your driver isn't going to set the chip on fire other than to try it. It takes hours and days to validate.
When we write to CPU registers, there is absolutely no way to check in advance if you've set the bit you want, or if you've set a bit that permanently burns some efuses and bricks the chip.
> They use different architectural principles—isolation, explicit interfaces, and time-aware design—that make their systems naturally more robust and understandable.
I'm gonna stop reading right here. Hardware systems are exactly as robust and reliable as software. Because hardware is software. There is almost nothing in the modern day that operates on straight digital/analog logic without any code. Everything is running software and the vast majority of it is written in C with Classes.
I surmise that the author has never met an EE or even seen a circuit board. They also have never written or seen a single line of C.
But really, C is the only argument you need. We invented typed languages because C is so godawful.
I think this is the same thing as vaccine denial. We've lived in a world without these problems for so long that certain people without critical thinking skills just don't believe the problem ever existed in the first place, so the remedies we've spent generations building are an undue and dangerous burden. This is a remarkably unintelligent and uninformed line of thinking that is sadly far, far too common.
That's the first time I've seen the Borrow Checker referred to as a "type system"
is typing not interface definition? It's just defining the shape of data that goes in and the shape of data that comes out. Black box/contract orientation has been a principle of software design for as long as I've been doing it, for all of the reasons this article cites when claiming that the linux CLI is superior: one component doesn't care at all what another subcomponent it needs does, just what that subcomponent requires in order to do it and what the calling component can expect back from the subcomponent. Types are just explicitly defining the inputs and outputs and ensuring there's an automated way to catch it if they change.
I do agree with how the article talks about time, though. It seems as though in software there has been an implicit assumption that timing isn't a consideration, because the early principles of the craft were developed on single-threaded systems and we were all taught, as late as the 90s, that coding is just writing a list of instructions to be followed in order.
Why can't we downvote links? This is an absolute waste of time.
I got a few good laughs out of this post, but I still hope it's just elaborate trolling.
I do not find this argumentation compelling:
> We’ve created architectures so tangled, so dependent on invisible connections and implicit behaviors, that we need automated tools just to verify that our programs won’t crash in obvious ways.
> Type checking, in other words, is not a solution to complexity—it’s a confession that we’ve created unnecessary complexity in the first place.
I don't buy the idea that (1) we can always easily identify unnecessary complexity, separating it from necessary complexity; (2) reduce the system to just the necessary complexity, such that: (3) the remaining necessary complexity is always low, and easy to understand (i.e. not "incomprehensible to human reasoning", having few or no "implicit behaviors", so that we don't "need automated tools just to verify that" it "won't crash in obvious ways").
The idea that we can implement arbitrary systems to handle our complex requirements, while keeping the implementation complexity capped below a constant simply by not admitting any "unnecessary complexity" is not believable.
This is simply not a mature argument about software engineering and therefore not an effective argument against type checking.
> The problem isn’t that software is inherently more complex than hardware.
It positively is. The built electronic circuit closely mimics the schematic. The behavior of the electronic circuit consists of local states that are confined to the circuit components. For instance, in the schematic you have a capacitor and a neighboring resistor, as symbols? Then in the manufactured circuit, you have the same: a capacitor component connected to a neighboring resistor component. Now those things have a run-time behavior in response to the changing voltage applied. But that run-time behavior is locally contained in those components. All we need to imagine the behavior is to take the changing quantities like the capacitor's charge, current through the capacitor, voltage drop across the resistor and superimpose these on the components.
Software source code is a schematic not for compiled code, but for run-time execution. The run-time execution has an explosive state space. What is one function in the source code can be invoked recursively and concurrently, so that there are thousands of simultaneous activations of it. That's like a schematic in which one resistor symbol stands for a circuit with 50,000 resistors, and that circuit is constantly changing: first we have to work out what the circuit looks like at any time t, and only then think about the values of the components and their currents and voltages. You might understand next to nothing about what is going on at run-time from the code.
Circuits have no loops, no recursion, no dynamic set data structures with changing contents, no multiple instantiation of anything. The closest thing you see to a loop in a schematic is something like dozens of connections annotated as a 'bus' that looks like one wire. Some hardware is designed with programming-like languages such as VHDL that allow circuits with many repeating parts to be condensed. It's still nothing like software.
I mean, show me a compiler, web browser, or air traffic control system, implemented entirely in hardware.
> The evidence for this alternative view is hiding in plain sight. UNIX pipelines routinely compose dozens of programs into complex workflows, yet they require no type checking at the transport layer.
Obligatory: where is the web browser, mobile OS, video game, heart-lung machine, or CNC router written with shell scripts and Unix pipes?
Even within the domain where they are applicable, Unix pipes do not have a great record of scalability, performance and reliability.
In some narrowly defined, self-contained data processing application, they can shine, especially against older languages. E.g. Doug McIlroy writing a tiny shell script to solve the same thing as Knuth's verbose Pascal solution.
"Type checking is an antipattern" because it admits that our software has more than seven dependencies, which is cognitively unacceptable
"Type checking is an antipattern" because electrical engineers designing CPUs have nothing like type checking to rely upon
These two statements alone seem, to be charitable, wrong. But together, I have a sneaky feeling that a running CPU is at least as cognitively overwhelming as a modern software application. Which would mean cognitively overwhelming systems are not an antipattern. Or that CPU designers are also engaging in an active antipattern. So these arguments would appear to contradict each other in any case.
Overall a post like this that makes huge grandiose claims and ends with basically nothing to show at the end rings pretty hollow. Yes complexity is inconvenient but handwaving "there should be systems that are simple ! easy to understand!" yeah thanks! show us one? Oh, the mainboard of an iPhone, that's easy to understand? It literally requires software to figure out how to lay out all the tiny components on both sides like that. It is...not simple at all.
Prove it. Write a very large program in an experimental state chart-based language and prove that doing the same thing in any conventional language would have been impractical or impossible.
> We debate whether Rust’s borrow checker is better than Haskell’s type classes. We argue about whether TypeScript’s structural typing is superior to Java’s nominal typing.
Life is too short to argue about stupid programming language bullshit. If you want to use Rust, do it. If not, don't. The actual code that makes up the computer system as a whole is still 99.99% the same because you are running on an operating system written in C calling a database or other systems also written in C.