Just to be clear, the article is NOT criticizing this. On the contrary, it's presenting it as expected, thanks to Solow's productivity paradox [1].
The paradox is that information technology similarly (and seemingly shockingly) didn't produce any net economic gains in the 1970s or 1980s despite all the computerization. It wasn't until the mid-to-late 1990s that information technology finally started to show clear benefit to the economy overall.
The reason is that investing in IT was very expensive, there was a lot of wasted effort, and it took a long time for the benefits to outweigh the costs across the entire economy.
And so we should expect AI to look the same -- it's helping lots of people, but it's also costing an extraordinary amount of money, and the gains for the people it's helping are currently at least outweighed by the people wasting time with it and by its expense. But we should recognize that it's very early days: productivity will rise and costs will come down over time, as we learn best practices for integrating it.
For more on this exact topic and an answer to Solow's Paradox, see the excellent "The Dynamo and the Computer" by Paul David [0].
[0]: https://www.almendron.com/tribuna/wp-content/uploads/2018/03...
FWIW, Fortune had another article this week saying this J-curve of "general-purpose technology" is showing up in the latest BLS data.
https://fortune.com/2026/02/15/ai-productivity-liftoff-doubl...
Source of the Stanford-approved opinion: https://www.ft.com/content/4b51d0b4-bbfe-4f05-b50a-1d485d419...
The comparison seems flawed in terms of cost.
A Claude subscription is 20 bucks per worker if using personal accounts billed to the company, which is not far from common office tools like Slack. Onboarding a worker to Claude or ChatGPT is ridiculously easy compared to teaching a 1970s manual office worker to use an early computer.
Larger implementations like automating customer service might be more costly, but I think there are enough supposed short-term benefits that something should be showing up there.
What if LLMs are optimizing the average office worker's productivity but the work itself simply has no discernible economic value? This is argued at length in Graeber's Bullshit Jobs essay and book.
I find that highly unlikely; coding is AI's best-value use case by far. Right now office workers see marginal benefits, but it's not like it's an order-of-magnitude difference. AI drafts an email; you have to check and edit it, then send it. In many cases it's a toss-up whether that actually saved time, and even when it did, it's not like the pace of work is breakneck anyway, so the benefit is some office workers have a bit more idle time at the desk, because you always hit some wall that's out of your control. Maybe AI saves you a Google search or a doc lookup here and there. You still need to check everything, and it can cause mistakes that take longer too. Here's an example from today.
An assistant is dispatching a courier to get medical records. AI autocompletes the dispatch to include the address. Normally they wouldn't put the address (the courier knows who we work with), but AI added it, so why not. Except it's the wrong address, because it's for a different doctor with the same name. At least they knew to verify it, but still, mistakes like this happening at scale make the other time savings pretty close to a wash.
I think it's more likely that the same amount of work is getting done, just far less taxingly. And averages are funny things: for developers it's undeniably a huge boost, but for others it's creating friction.
I would hardly drag Graeber into this; there's a laundry list of issues with his research.
Most "bullshit jobs" can already be automated, but "can" isn't always "should" or "will". Graeber is a capex thinker in an opex world.
And that book sort of vaguely hints around at all these jobs that are surely bullshit but won’t identify them concretely.
Not recognizing the essential role of sales seemed to be a common mistake.
What counts as “concretely”? And I don’t recall it calling sales bullshit.
It identified advertising as part of the category it classed as heavily-bullshit-jobs, by reason of being zero-sum: your competitor spends more, so you spend more to avoid falling behind; a standard Red Queen's race. (Another in this category was the military, which is kinda the classic case of this; see also the Missile Gap, the dreadnought arms race, etc.) But not sales, IIRC.
The thesis of Bullshit Jobs is almost universally rejected by economists, FYI. There’s not much of value to obtain from the book.
How viable are the $20/month subscriptions for actual work and are they loss making for Anthropic? I've heard both of people needing to get higher tiers to get anything done in Claude Code and also that the subscriptions are (heavily?) subsidized by Anthropic, so the "just another $20 SaaS" argument doesn't sound too good.
Merely for the viability part: I use the $20/mo plan now, but only as a part-time independent dev. I will hit rate-limits with Opus on any moderately complex app.
If I am on a roll, I will flip on Extra Usage. I prototyped a fully functional and useful niche app in ~6 total hours and $20 of extra usage, and it's solid enough and proved enough value to continue investing in and eventually ship to the App store.
Without Claude I likely wouldn't have gotten to the finished prototype version to use in the real world.
For indie dev, I think LLMs are a new source of solutions. This app is too niche to justify building and marketing without LLM assistance. It likely won't earn more than $25k/year, but good enough!
I am confident that Anthropic makes more revenue from that $20 than the electricity and server costs needed to serve that customer.
Claude Code has rate limits for a reason: I expect they are carefully designed to ensure that the average user doesn't end up losing Anthropic money, and that even extreme heavy users don't cause big enough losses for it to be a problem.
Everything I've heard makes me believe the margins on inference are quite high. The AI labs lose money because of the R&D and training costs, not because they're giving electricity and server operational costs away for free.
Nobody questions that Anthropic makes revenue from a $20 subscription. The opposite would be very strange.
I always assumed that with inference being so cheap, my subscription fees were paying for training costs, not inference.
I'd guess the $200 subscription is sufficient per person.
But at that point you could go for a bigger one and split it amongst headcount.
$20 is not usable; you need the $100 plan at least for development purposes. That is a lot of money in some countries. In mine, that can be a tenth of a monthly salary. Hard to get approval for it. It is still too expensive right now.
If anything, the 'scariness' of an old computer probably protected the company in many ways. AI's approachability to the average office worker, specifically how it makes it seem like it's easy to deploy/run/triage enterprise software, will continue to pwn.
>I think there are enough supposed short-term benefits that something should be showing up there.
As measured by whom? The same managers who demanded we all return to the office 5 days a week because the only way they can measure productivity is butts in seats?
Productivity is the ratio of outputs to inputs, both measured in dollars.
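A toy illustration (all numbers invented) of how that ratio can dip even while AI adds output, which is the J-curve point upthread:

    # Toy numbers (invented) showing how a modest AI gain can read as
    # a measured productivity *decline* while the spend is still ramping.
    output_before, inputs_before = 1_200_000, 1_000_000   # dollars/year
    output_with_ai = 1_210_000                            # +$10k of output
    ai_spend = 24_000                                     # ~$20/seat/month x 100 seats x 12

    print(output_before / inputs_before)                  # 1.20
    print(output_with_ai / (inputs_before + ai_spend))    # ~1.18

Until the output gain outgrows the new spend, measured productivity goes down, not up.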
Like Uber/Airbnb in early days, this is heavily subsidized.
I don't think LLMs are similar to computers in terms of productivity boost.
One part of the system moving fast doesn't change the speed of the system all that much.
The thing to note is that verifying something got done is hard and takes time in the same ballpark as doing the work itself.
If people are serious about AI productivity, let's start by addressing how we can verify program correctness quickly. Everything else is just a Ferrari between two red lights.
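One concrete starting point that already exists is property-based testing. A minimal sketch with the hypothesis library, where the function under test is invented for illustration:

    # Property-based testing sketch (hypothesis library): declare
    # invariants and let the tool hunt for counterexamples, which is
    # cheaper than re-reading generated code line by line.
    from hypothesis import given, strategies as st

    def dedupe_keep_order(xs):
        # stand-in for a function an agent just wrote
        return list(dict.fromkeys(xs))

    @given(st.lists(st.integers()))
    def test_dedupe(xs):
        out = dedupe_keep_order(xs)
        assert len(out) == len(set(xs))       # no duplicates survive
        assert all(x in xs for x in out)      # nothing invented
        assert out == [x for i, x in enumerate(xs) if x not in xs[:i]]  # order kept

Run it with pytest and hypothesis throws hundreds of generated inputs at it. It doesn't prove correctness, but it shrinks the verification gap.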
> it's helping lots of people, but it's also costing an extraordinary amount of money
Is it fair to say that Wall Street is betting America's collective pensions on AI...
Very few people have pensions anymore. People now direct their own retirement funds.
My compsci brain suggests large orgs are a distributed system running on faulty hardware (humans) with high network latency (communication). The individual people (CPUs) are plenty fast; we just waste time in meetings, waiting for approval, or on tasks that can't be parallelized, etc. Before upgrading, you need to know if you're I/O bound or CPU bound.
Maybe experienced people are the L2 cache? And the challenge is to keep the cache fresh and not too deep. You want institutional memory available quickly (cache hit) to help with whatever your CPU people need at that instant. If you don't have a cache, you can still solve the problem, but oof, is it gonna take you a long time. OTOH, if you get bad data in the cache, that is not good, as everyone is going to be picking that out of the cache instead of really figuring out what to do.
L2? I'm hot L1 material, dude.
But I like your and OP's analogy. Also, the productivity claims are coming from the guys in main memory or even disk, far removed from where the crunching is taking place. At those latency magnitudes, even riding a turtle would appear like a huge productivity gain.
Interesting analogy, exploring organizational dynamics as a distributed system.
Operationally, I think new startups have a big advantage in setting up to be agent-first. They might not be as good as the old human-first shops, but they'll be much cheaper and nimbler as models improve.
... or perhaps the exact opposite. I'm in the early phase of a startup and I have a strict no-LLM policy. No employees yet, but I will have this in the contracts.
Startups mostly move fast by skipping the ceremony that large corps have to perform to keep a billion-dollar product from melting down. Startups can do that because they don't have a billion dollars to start with.
Once you do have a billion-dollar product, protecting it requires spending time, money, and people to keep it running, because building a new one is a lot more effort than keeping the existing one from melting.
Then where are all the amazing open source programs written by individuals by themselves? Where are all the small businesses supposedly assisted by AI?
> 4% of GitHub public commits are being authored by Claude Code right now. At the current trajectory, we believe that Claude Code will be 20%+ of all daily commits by the end of 2026.
https://newsletter.semianalysis.com/p/claude-code-is-the-inf...
There’s lots of slop out there, that doesn’t mean it’s actually good or useful code.
Keep moving those goal posts.
Doesn’t look like goal-post moving to me. GP argued that AI isn’t making a difference, because if it was, we’d see amazing AI-generated open source projects. (Edit: taking a second look, that’s not exactly what GP said, but that’s what I took away from it. Obviously individuals create open source projects all the time.)
You rebutted by claiming 4% of open source contributions are AI generated.
GP countered (somewhat indirectly) by arguing that contributions don’t indicate quality, and thus wasn’t sufficient to qualify as “amazing AI-generated open source projects.”
Personally, I agree. The presence of AI contributions is not sufficient to demonstrate “amazing AI-generated open-source projects.” To demonstrate that, you’d need to point to specific projects that were largely generated by AI.
The only big AI-generated projects I’ve heard of are Steve Yegge’s GasTown and Beads, and by all accounts those are complete slop, to the point that Beads has a community dedicated to teaching people how to uninstall it. (Just hearsay. I haven’t looked into them myself.)
So at this point, I’d say the burden of proof is on you, as the original goalposts have not been met.
Edit: Or, at least, I don’t think 4% is enough to demonstrate the level of productivity GP was asking for.
They didn't; amazing open source was asked for, and meaningless stats were given. Not that GitHub public repositories were amazing before AI, but nothing has changed since, except AI slop being a new category.
I deliberately asked for amazing open source projects. I've yet to see a single AI-coded project I would use.
Keep licking those boots.
Seemingly every day on Show HN?
Also small businesses aren't going to publish blog posts saying "we saved $500 on graphic design this week!"
Is saving $500 by generating some shitty AI art the bar? I thought this was supposed to replace entire departments.
Someone asked “where are all the small businesses”, this was a reply to that. Small businesses don’t have entire art departments.
Gotcha, so the impact of AI is small businesses get to save a couple hundred dollars, and the cost is only 2% of your country's GDP. That's good.
Prior to industrialization if you wanted to paint something you had to know how to mix your own paints.
And make your own brushes.
Before the printing presses came along, putting up flyers was not even imaginable.
Signs for businesses used to be hand carved.
Then printed. A store sign was still produced by a team of professionals, but small businesses could reasonably afford to print a sign. Not often updated, but it existed.
Then desktop publishing took off. Now lone graphic designers could design and send work off to a print shop. Small businesses could now afford regularly updated menus, signage, and even adverts and flyers.
Now small businesses can make their own creatives. AI can change stylesheets, write ad copy, and generate promotional photos.
Does any of this have the artistry of hand carved signs from 600 years ago? Of course not.
But the point is technology gives individuals control.
Couple hundred dollars
..a month
..multiplied by how many small businesses globally?
I think both. Most organizations lack someone like Steve Jobs to prime their product lines; Microsoft is a good example whose products over the years are mostly meh. Then meetings are pervasive, even more so in most companies due to the convenience of MS Teams. But currently they face reduced demand due to a softer market compared to 2-3 years ago. If you observe no effect while they lay off many and revenue still holds, or at least shows no negative growth, I would surmise that AI is helping. But in corporate, it only counts if it directly contributes to sales numbers.
The thing with a lot of white collar work is that the thinking/talking is often the majority of the work… unlike coding, where thinking is (or, used to be, pre-agent) a smaller percentage of the time consumed. Writing the software, which is essentially working through how to implement the thought, used to take a much larger percentage of the overall time consumed from thought to completion.
Other white collar business/bullshit-job (à la Graeber) work is meeting with people, "aligning expectations", getting consensus, making slides/decks to communicate those thoughts, thinking about market positioning, etc.
Maybe tools like Cowork can help to find files, identify tickets, pull in information, write Excel formulas, etc.
What’s different about coding is no one actually cares about code as output from a business standpoint. The code is the end destination for decided business processes. I think, for that reason, that code is uniquely well adapted to LLM takeover.
But I’m not so sure about other white-collar jobs. If anything, AI tooling just makes everyone move faster. But an LLM automating a new feature release and drafting a press release and hopping on a sales call to sell the product is (IMO) further off than turning a detailed prompt into a fully functional codebase autonomously.
I’m confused what kind of software engineer jobs there are that don’t involve meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate that, thinking about market positioning, etc?
If you weren't doing much of that before, I struggle to think of how you were doing much engineering at all, save some niche, extremely technical roles where many of those questions were already answered; but even then, I'd expect you're having those kinds of discussions, just more efficiently and with other engineers.
It seems that to some number of folks, "engineering" means "writing code."
> I’m confused what kind of software engineer jobs there are that don’t involve meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate that, thinking about market positioning, etc?
I'd suspect the kind that's going away.
That kind was already reserved for junior roles, contractors, and off shoring.
Well that’s why AI will not replace the software engineer!
IME a team or project lead does that, and the rest of the engineers maybe do it on a smaller scale but mostly implement.
Thinking is always the hardest part and the bottleneck for me.
It doesn’t capture everyone’s experience when you say thinking is the smaller part of programming.
I don’t even believe a regular person is capable of producing good quality code without thinking 2x the amount they are coding
Agree. I remember in school in the 1980s reading that a good programmer can write about 10 lines of code a day (citing The Mythical Man-Month) and I thought "that's ridiculous, I can write hundreds of lines a day" but didn't understand that's including all the time understanding requirements, thinking about design, testing, debugging, etc. Writing the code is a small portion of what a software engineer does.
Most people (and most businesses) aren’t making good quality code though. Most tools we use have horrible codebases. Therefore now the code can often be a similar quality to before, just done far faster.
> making slides/decks to communicate those thoughts,
That use case is definitely delegated to LLMs by many people. That said, I don't think it translates into linear productivity gains. Most white collar work isn't so fast-paced that if you save an hour making slides, you're going to reap some big productivity benefit. What are you going to do, make five more decks about the same thing? Respond to every email twice? Or just pat yourself on the back and browse Reddit for a while?
It doesn't help that these LLM-generated slides probably contain inaccuracies or other weirdness that someone else will need to fix down the line, so your gains are another person's loss.
Yeah, but this is self-correcting. Eventually it will get to a point where the data that you use to prompt the LLM will have more signal than the LLM output.
But if you get deep into an enterprise, you'll find there are so many irreducible complexities (as Stephen Wolfram might coin them), that you really need a fully agentically empowered worker — meaning a human — to make progress. AI is not there yet.
> unlike coding, where thinking is (or, used to be, pre-agent) a smaller percentage of the time consumed. Writing the software, which is essentially working through how to implement the thought, used to take a much larger percentage of the overall time consumed from thought to completion.
Huh? Maybe I'm in the minority, but the thinking:coding ratio has always been 80:20 for me: spend a ton of time thinking and drawing, then write once, debug a bit, and it works.
This hasn't really changed with LLM coding either, except that for the same amount of thinking, you get more code output.
Yeah, ratios vary depending on how productive you are with code. For me it was 50:50 and is now 80:20, but only because I was a relatively unproductive coder (struggled with language feature memorization, etc.) and a much more productive thinker/architect.
When the work involves navigating a bunch of rules with very ambiguous syntax, AI will automate it to the degree that computers automated rules-based systems with very precise syntax in the 1990s.
This software (which I am not related to or promoting) is better at investment planning and tax planning than over 90% of RIAs in the US. It will automate RIAs to the degree that trading software automated stockbroking. This will reduce the average RIA fee from 1% per year to 0.20% or even 0.10% per year, just like mutual fund fees dropped in the early '00s.
You could have beaten the returns of most financial professionals over the last several years by just parking your money in the S&P 500, and yet plenty of people are still making a lucrative career out of underperforming it. In some fields, “being better and cheaper” does not always spell victory.
You are right on beating money managers. When I said investment planning, I meant planning the size and tax structures of investments. This software automates all of the technical work that goes on inside financial planning firms, which is done by tens of thousands of white collar professionals in the US/UK/EU, etc. It will then lead to price competitiveness.
More expensive silly companies will exist, but the cheap ones get the scale. S&P 500 index funds have over $1 trillion across the top 3 providers; Cathie Wood has like $6-7 billion.
BNY Mellon is the custodian of $50 trillion of investment assets; Robinhood has $324bn.
Silly companies get the headlines, though.
The slow part as a senior engineer has never been actually writing the code. It has been:
- reviews for code
- asking stakeholders opinions
- SDLC latency (things taking forever to test)
- tickets
- documentations/diagrams
- presentations
Many of these require review, and review hell doesn't magically stop at open source projects. These things happen internally too.
Original paper https://www.nber.org/system/files/working_papers/w34836/w348...
Figure A6 on page 45: Current and expected AI adoption by industry
Figure A11 on page 51: Realised and expected impacts of AI on employment by industry
Figure A12 on page 52: Realised and expected impacts of AI on productivity by industry
These seem to roughly line up with my expectations: the more customer-facing or physical your industry's product is, the lower the usage and impact of AI (construction, retail).
A little bit surprising is "Accom & Food" being 4th highest for productivity impact in A12. I wonder how they are using it.
As we approach the singularity, things will get noisier and make less and less sense, since rapid change can look like chaos from inside the system. I recommend folks just take a deep breath and look around. Regardless of your stance on whether the singularity is real, or whether AI will revolutionize everything, forget all that noise. Just look around you and ask yourself: do things seem more or less chaotic? Are you able to predict better or worse what is going to happen? How far out can your predictions land now versus, say, 10 or 20 years ago? Conflicting signals are exactly how all of this looks: one account says it's the end of the world, another says nothing ever changes and everything is the same as it always was.
If you include Microsoft Copilot trials in Fortune 500s, absolutely. A lot of major listed companies are still oblivious to the functionality of AI; their senior management doesn't even use it, out of laziness.
It turns out it's really hard to get a man to fish with a pole when you don't teach him how to use the reel.
In regard to Copilot, they've also been led on a fishing expedition to the middle of a desert.
If AGI is coming, won't there just be autofishers and no one will ever have to fish again, completely devaluing one's fishing knowledge and the effort put in to learn it?
It’s not a great analogy but...
“Autofishers” are large boats with nets that bring in fish in vast quantities that you then buy at a wholesale market, a supermarket a bit later, or they flash freeze and sell it to you over the next 6-9 months.
Yet there’s still a thriving industry selling fishing gear. Because people like to fish. And because you can rarely buy fish as fresh as what you catch yourself.
Again, it’s not a great analogy, but I dunno. I doubt AGI, if it does come, will end up working the way people think it will.
Or give them a stick with twine and a plastic fork as a hook, as is the case with Copilot.
100%. All of the people who are floored by AI capabilities right now are software engineers, and everyone who's extremely skeptical basically has any other office job. On investigating, their primary AI interaction surface is Microsoft Copilot, which has to be the absolute shittiest implementation of any AI system so far. As a progress-driven person, it's just super disappointing to see how few people are benefiting from the productivity gains of these systems.
I'm a SWE who's been using coding agents daily for the last 6 months and I'm still skeptical.
For my team at least, the productivity boost is difficult to quantify objectively. Our products and services have still tons of issues that AI isn't going to solve magically.
It's pretty clear that AI allows us to move faster on some tasks, but it's also detrimental for other things. We're going to learn how to use these tools more efficiently, but right now, I'm not convinced about the productivity gain.
Is your backlog and/or your velocity increasing, decreasing, or the same? That's really the ultimate question.
> I'm a SWE who's been using coding agents daily for the last 6 months and I'm still skeptical.
What improvements have you noticed over that time?
It seems like the models coming out in the last several weeks are dramatically superior to those mid-last year. Does that match your experience?
Not the grandparent, but I've used most of the OpenAI models that have been released in the last year. Out of all of them, o3 was the best at the programming tasks I do. I liked it a lot more than I like GPT 5.2 Thinking/Pro. Overall, I'm not at all convinced that models are making forward progress in general.
In a team of one at work I see clear benefits, but having worked in many different team sizes for most of my career, I can see how that would quickly go down, especially if you care about quality. And even with the latest models it's a constant battle against legacy training data, which has gotten worse over time. "I have to spend 45 minutes explaining why a one-minute AI-generated PR is bad code" was how an old colleague summarized it.
I think Anthropic will succeed immensely here, because when integrated with Microsoft 365, and especially Excel, it basically does what Copilot said it would do.
The moment of realisation happens for a lot of normoid business people when they see Claude make a DCF spreadsheet or search emails.
Claude is also smart because it visually shows the user as it resizes the columns, changes colours, etc. Seeing the computer do things makes the normoid SEE the AI, despite it being much slower.
No one wants a chatbot "integrated" with Excel and Office 365 crap; it's Clippy 2.0 bullshit.
Replace Excel and Office stuff with an AI model entirely, then people will pay attention.
That only works if you can one-shot, but nobody can one-shot.
Iterating over work in Excel and seeing it update correctly is exactly what people want. If they get it working in MS Word it will pick up even faster.
If the average office worker can get the benefit of AI by installing an add-on into the same office software they have been using since 2000 (the entire professional career of anyone under the age of 45), then they will do so. It's also really easy to sell to companies because they don't have to redesign their teams or software stack, or even train people that much; the board can easily agree to budget $20 a head for Claude Pro.
The other thing normies like is that they can put in huge legacy spreadsheets and find all the errors.
Microsoft 365 has 400 million paid seats.
IMO Copilot was "we need to give these people rope, but not enough for them to hang themselves". A non technical person with no patience and access to a real AI agent inside a business is a bull in a china shop. Copilot Cowork is the closest thing we have to what Copilot should have been and is only possible now because models finally got good enough to be less supervised.
FWIW Gemini inside Google apps is just as bad.
This isn't my experience. I see many non-software people using AI regularly. What you may be seeing is more: organizations with no incentive to do things better never did anything to do things better. AI is no different. They were never doing things better with pencil and paper.
My company’s behind the curve, just got nudged today that I should make sure my AI use numbers aren’t low enough to stand out or I may have a bad time. Reckon we’re minimum six months from “oh whoops that was a waste of money”, maybe even a year. (Unless the AI market very publicly crashes first)
So management basically have no clue and want you to figure out how to use AI?
Do they also make you write your own performance review and set your own objectives?
> So management basically have no clue and want you to figure out how to use AI?
This is basically the same story I have heard both my own place of employment and also from a number of friends. There is a "need" for AI usage, even if the value proposition is undefined (or, as I would expect, non-existent) for most businesses.
Look, to make something productive out of it: a job seeker with high-level skills using LLM assistance will be much more valuable than one without the experience. Never mind your current company management's policies.
I read an article in the FT just a couple days ago claiming that increased productivity is becoming visible in economic data.
> My own updated analysis suggests a US productivity increase of roughly 2.7 per cent for 2025. This is a near doubling from the sluggish 1.4 per cent annual average that characterised the past decade.
good for 3 clicks: https://giftarticle.ft.com/giftarticle/actions/redeem/97861f...
It’s simple calculus for business leaders: admit they’re laying off workers because the fundamentals are bad and spook investors, admit they’re laying off workers because the economy is bad and anger the administration, or just say it’s AI making roles unnecessary and hope for the best.
I think the biggest problem is calling it AI to start with. It gives people a huge misrepresentation of what it is actually capable of. It is an impressive tool with many uses, but it is not AGI.
I think the 'AI productivity gap' is mostly a state management problem. Even with great models, you burn so much time just manually syncing context between different agents or chat sessions.
Until the handoff tax is lower than the cost of just doing it yourself, the ROI isn't going to be there for most engineering workflows.
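The workaround I keep seeing is to make the handoff a file instead of a re-explanation. A hypothetical sketch (the file name and helpers are invented, not any product's API):

    # Hypothetical sketch: a shared handoff file that each session loads
    # at start and appends a summary to at the end, so context survives
    # across agents/sessions without manual re-pasting.
    from datetime import datetime, timezone
    from pathlib import Path

    HANDOFF = Path("HANDOFF.md")  # illustrative name

    def load_context() -> str:
        # Prepend this to the next session's first prompt.
        return HANDOFF.read_text() if HANDOFF.exists() else ""

    def record_handoff(summary: str) -> None:
        # Append a session summary before closing the session.
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        with HANDOFF.open("a") as f:
            f.write(f"\n## Session {stamp}\n{summary}\n")

It doesn't eliminate the tax, but it turns it into a fixed cost per session instead of a per-question one.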
The article suggests that AI-related productivity gains could follow a J-curve: an initial decline, as happened with IT, followed by an exponential surge. They admit this is heavily dependent on the real value AI provides.
However, there's another factor. The J-curve for IT happened in a different era. No matter when you jumped on the bandwagon, things just kept getting faster, easier, and cheaper. Moore's law was relentless. The exponential growth phase of the J-curve for AI, if there is one, is going to be heavily damped by the enshittification phase of the winning AI companies. They are currently incurring massive debt in order to gain an edge on their competition. Whatever companies are left standing in a couple of years are going to have to raise the funds to service and pay back that debt. The investment required to compete in AI is so massive that cheaper competition may not arise, and a small number of winners (or a single one) could put anyone dependent on AI into a financial bind. Will growth really be exponential if this happens and the benefits aren't clearly worth it?
The best possible outcome may be for the bubble to pop, the current batch of AI companies to go bankrupt, and for AI capability to be built back better and cheaper as computation becomes cheaper.
It's not just technology; it's very hard to detect the effect of inventions in general on productivity. There was a paper pointing out that the invention of the steam engine was basically invisible in the productivity statistics.
BTW the study was from September 2024 to 2025, so it's the very earliest of adopters.
This article is mostly based on NBER working paper 34836, which was published this month, and the data was collected from September 2025 to January 2026 [0].
[0]: See page 2: https://www.nber.org/system/files/working_papers/w34836/w348...
Thousands of companies are to be replaced by leaner counterparts that learned to use AI towards greater employment and productivity.
It's weird being on here and seeing so much naysaying, because I see a radical change already happening in software development. The future is here, it's just not equally distributed.
In the past 6 months, I've gone from Copilot to Cursor to Conductor. It's really the shift to Conductor that convinced me that I crossed into a new reality of software work. It is now possible to code at a scale dramatically higher than before.
This has not yet translated into shipping at far higher magnitude. There are still big friction points and bottlenecks. Some will need to be resolved with technology, others will need organizational solutions.
But this is crystal clear to me: there is a clear path to companies getting software value to the end customer much more rapidly.
I would compare the ongoing revolution to the advent of the Web for software delivery. When features didn't have to be scheduled for release in physical shipments, it unlocked radically different approaches to product development, most clearly illustrated in The Agile Manifesto. You could also do real-time experiments to optimize product outcomes.
I'm not here to say that this is all going to be OK. It won't be for a lot of people. Some companies are going to make tremendous mistakes and generate tremendous waste. Many of the concerns around GenAI are deadly serious.
But I also have zero doubt that the companies that most effectively embrace the new possibilities are going to run circles around their competition.
It's a weird feeling when people argue against me in this, because I've seen too much. It's like arguing with flat-earthers. I've never personally circumnavigated Antarctica, but me being wrong would invalidate so many facts my frame of reality depends on.
To me, the question isn't about the capabilities of the technology. It's whether we actually want the future it unlocks. That's the discussion I wish we were having. Even if it's hard for me to see what choice there is. Capitalism and geopolitical competition are incredible forces to reckon with, and AI is being driven hard by both.
Curious why you like Conductor. I’m trying it out, but since I primarily live in the CLI, I might not see much value in it.
I like AI and use it daily, but this bubble can't pop soon enough so we can all return to regularly scheduled programming.
CEOs are now on the downside of the hype curve.
They went from “Get me some of that AI!” after first hearing about it, to “Why are we not seeing any savings? Shut this boondoggle down!” now that we’re a few years into bubble, the business math isn’t working, and they only see burning piles of cash.
I consume a lot of different content on a lot of different places. Every site or app has its vibe and communal beliefs. They rarely if ever agree on anything, but they all agree we're in a massive bubble.
I don't have a point, just that it's an unlikely unity.
The people who will be most productive with AI will be the entreprompteurs who whip up entire products and go to market faster than ever before, iterating at dangerous speeds. Lean Startup methodology on pure steroids basically.
Unfortunately I think most of the stuff they make will be shit, but they will build it very productively.
Software doesn't need to be good to be successful; it only needs to solve a problem and be better than the competition.
I predict a golden age for experienced developers! There will be an uncountable number of poorly designed apps with scaling issues. And many of them will be funded.
Meh, no. In a future where any app could be prompted, the only thing you’d get funding for is if you had managed to go viral and secure some large audience.
This is not good. When all that matters is how viral your app is, people no longer compete on features and quality of life.
It’s funny because at work we have paid Codex and Claude but I rarely find a use for it, yet I pay for the $200 Max plan for personal stuff and will use it for hours!
So I’m not even in the “it’s useless” camp, but it’s frankly only situationally useful outside of new greenfield stuff. Maybe that is the problem?
Why do you find it useless for legacy code? I find I have to give it plenty of context but it does pretty well on legacy code.
And Ask DeepWiki is a great shortcut for finding the right context… Granted this is open source and DW is free.
Is it the specific nature of your work?
Anyone read The Goal lately?
These surveys don’t make sense. Ask the forward thinking companies and they’ll say the opposite. The flood of anti AI productivity articles almost feel like they’re meant to lull the population into not seeing what’s about to happen to employment.
Eh, try using Microsoft Copilot in Word or PowerPoint. It is worthless. If your experience with AI was a Microsoft product, you would think it was a scam too.
It's not just that, though. When going through AI projects in an organization, you find that many times the process is manual for a reason. This isn't the first wave of "automation" that's come through. Most things that can be fully automated already have been, long ago, and the manual parts get sold as "we can make AI do it", until you see the specs, noodle around on the problem some, and realize it's probably just going to remain manual, because the model training requires as much time and effort as just doing it by hand.
Yeah Microsoft has consistently been bragging about how so much code is written by AI, yet their products are worse than ever. Seems to indicate “using AI” is not enough. You have to be smart about when and where.
There is probably a threshold effect above which the technology begins to be very useful for production (beyond faking school assignments, one-off scripts, spam, language translation, and political propaganda), but I guess we're not there yet. I'm not counting out the possibility of researchers finding a way to add long-term memory or stronger reasoning abilities, which would change the game in a very disorienting way, but that would likely require a change of architecture or a very capable hybrid tool.
The greatest step change will be when mainstream businesses realise they can use AI to accurately fill in PDF documents with information in any format.
Filling in PDF documents is effectively the job of millions of people around the world.
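The plumbing for this is already mundane. A minimal sketch, assuming pypdf for the form fields and a stubbed-out extract_fields() standing in for whatever LLM call does the extraction (field names and filenames are illustrative):

    # Sketch: fill a PDF form from free-form text. pypdf is real;
    # extract_fields() is a stand-in for an LLM call that maps
    # unstructured text to the form's field names.
    from pypdf import PdfReader, PdfWriter

    def extract_fields(source_text: str) -> dict[str, str]:
        # In practice: prompt an LLM to return JSON keyed by the form's
        # field names, then validate before trusting it.
        return {"name": "Jane Doe", "dob": "1980-01-01"}

    reader = PdfReader("intake_form.pdf")
    writer = PdfWriter()
    writer.append(reader)
    fields = extract_fields(open("referral_letter.txt").read())
    writer.update_page_form_field_values(writer.pages[0], fields)
    with open("intake_form_filled.pdf", "wb") as f:
        writer.write(f)

The hard part isn't the writing, it's trusting the extraction, which is why validation belongs between the model and the form.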
At $dayjob GenAI has been shoved into every workflow and it's a constant source of noise and irritation; slop galore. I'm so close to walking away from the industry to resume being a mechanic. What a complete shit show.