I'm fascinated by this idea of not reviewing AI generated code. On the surface it sounds absurd - we know these machines make mistakes all the time, so how could we ever responsibly move ahead with code they have written without closely reviewing every detail?
Then I remembered the times I've worked at large companies and depended on code written by other teams. I didn't review every line of code they had written - I'd trust that they had done a competent job, integrate with that code myself, and only dig into the details if I ran into bugs, performance issues, or other smells that something was wrong.
Trusting humans is obviously different from trusting AI - humans have reputations, and social contracts, and actual intelligence as opposed to multiplying matrices and rolling dice. But... I do think an AI model can still earn trust over time. I've spent enough time with Opus 4.5 and 4.6 that I trust them not to make dumb mistakes in the common categories of code I use them for. Of course now I need to rebuild that trust with 4.7!
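For the curious: the "matrices and dice" bit is pretty literal at the sampling step. A toy numpy sketch - every name here is made up, and this is nothing like a production model, just the shape of the idea:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "next token" step: multiply a stand-in hidden state by a
    # stand-in weight matrix to get logits over 5 fake tokens...
    hidden = rng.standard_normal(8)
    W = rng.standard_normal((8, 5))
    logits = hidden @ W

    # ...turn the logits into probabilities (softmax)...
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # ...then roll the dice over that distribution.
    token = rng.choice(len(probs), p=probs)
    print(token, probs.round(3))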
I think the most interesting challenge here is to figure out how to have coding agents demonstrate that the code works without actually reading every line of it yourself - in the same way that I might ask an engineering team I haven't worked with before for a demo and then interrogate them about their testing strategy before relying on their work.
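One concrete version of that "demo plus testing interrogation" move is to write the acceptance tests yourself and treat the agent's code as a black box. A rough sketch - agent_code.py and slugify() are hypothetical stand-ins for whatever the agent was asked to build:

    # I never read the agent's diff; I only wrote these black-box
    # acceptance tests myself and ran them against its module.
    from agent_code import slugify  # hypothetical agent-written module

    def test_basic():
        assert slugify("Hello, World!") == "hello-world"

    def test_idempotent():
        assert slugify(slugify("Already-Slugged")) == "already-slugged"

    def test_strips_accents():
        assert slugify("Café au lait") == "cafe-au-lait"

If the tests pass and the agent can also walk me through its own testing strategy, that's roughly the level of trust I'd extend to an unfamiliar team's demo.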
The distinction that keeps getting glossed over in these comparisons is accountability.
If the engineering team fucks up somehow, they can be held accountable. An AI cannot.
100% agree. A human has to be accountable for the work.
As an engineering manager I can take accountability for the output of my team even if I don't review every line. Using coding agents feels similar.
People who use AI are responsible for what it does. IBM had it right in 1979:
A computer can never be held accountable
Therefore a computer must never make a management decision
https://simonwillison.net/2025/Feb/3/a-computer-can-never-be...
Do you review all machine code your compiler produces?
...how exactly do you think that's even remotely the same thing?
Compiler output is deterministic given the input code - which is typically reviewed before compiling by someone who will be held accountable for it.
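That determinism is even checkable mechanically. A quick sketch, assuming gcc is on your PATH and a main.c exists (both hypothetical here); note that some toolchains embed timestamps or build paths, so bit-identical output isn't guaranteed everywhere:

    import hashlib
    import subprocess

    def build(out):
        # Compile the same source with the same flags, then hash the result.
        subprocess.run(["gcc", "-O2", "-c", "main.c", "-o", out], check=True)
        with open(out, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # True on a deterministic toolchain: same input, same machine code.
    print(build("a.o") == build("b.o"))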
Hardware - physical hardware, not servers - could become a commodity like cloud compute: rent bipedal robots by the hour, lidar-equipped vans, and managed drone fleets.
The SaaS companies disrupting today could become utilities offering mechanized leases tomorrow.
Agents as a singular "swarm brain" (per machine, not a global hivemind) seem like a natural next step in that abstraction.
I'd file this under "creative marketing" and move on. Even if it were real, I don't need to know why a religious fanatic decided to turn off their brain.
It's not even all that creative. Anyone who's given more than two brain cells' worth of thought to the discourse around AI, and coding especially, recognizes this as more of the same "hype" intended to make this guy's AI company sound magical and powerful.