• mark_l_watson 20 hours ago

    I’ll do the Minority Report here: I loved the article. Its point is that rich people hyping AI for their own enrichment have somewhat shut down rational benefit-vs-cost arguments, the costs being: energy use; the environmental impact of using environmentally unfriendly energy sources out of desperation; water pollution from byproducts of electronics production and recycling, and from water use in data centers; diverting money from infrastructure and social programs; putting more debt stress on society; etc.

    I have been a paid AI practitioner since 1982, so I appreciate the benefits of AI. It is just that I hate the almost religious tech belief that real AI will emerge from exponential cost increases for LLM training and inference in exchange for essentially linear gains.

    I get that some lazy-ass people have turned vibe coding and development into what I consider an activity sort of like mindlessly scrolling social media.

    • ughitsaaron 19 hours ago

      I just want to call out how much I appreciate the comparison of “vibe coding” to the endless scroll.

      • andai 17 hours ago

        They're both slot machines, in terms of the effect on the reward system.

        • bicepjai 16 hours ago

          Totally true. It’s hard for me to stop a project; I keep piling on feature after feature for no reason. I literally stop only when Claude Max Pro hits the hourly limit.

          • benterix 13 hours ago

            This is the experience of many of us. But just like with social media, it doesn't give deep satisfaction and always leaves me a bit frustrated.

        • addled 18 hours ago

          Agreed. I noticed myself having a harder time stopping at the end of the day since I started using AI tools in earnest.

          I naturally have a hard time stopping when almost done with something, but with AI everything feels "close" to a big breakthrough.

          Just one more turn... until suddenly it's way later than I thought and I hardly have time to interact with my family.

          • OneMorePerson 13 hours ago

            For me it was similar, but I think it was more about a lack of natural friction. Normally when coding there was the "hit" of seeing something work, but the actual planning/coding/debugging would eventually wear me out, so I'd stop for the day. Now it can all just be an endless "hit" of success, with nothing that makes me feel tired or annoyed.

            The reason I believe this is that I recently went through a really annoying battle with Claude trying to get it to stop being so strict with its sandbox. I wanted it to simply load some sanitized text from a source online, and it just would not do it. The sessions where I was sorting that out were much easier to stop and moderate than the ones where everything just kept flowing effortlessly.

            • dataviz1000 17 hours ago

              It's a slot machine.

          • boxedemp 19 hours ago

            I've literally not met one person in tech who thinks LLMs will become sentient or conscious. But I always see people online claiming that there are lots of people who believe that.

            Where are they?

            Are we sure that's not a misunderstanding of the terminology? Artificial diamonds, such as cubic zirconia, are not diamonds, and nobody thinks they are. 'Artificial' means it's not the real thing. When will conscious, actual intelligence be called 'synthetic intelligence' instead of 'artificial'?

            Incidentally, this comment was written by AI.

            • grogers 19 hours ago

              It's not your main point, but I can't help but point out that artificial diamonds ARE diamonds. Cubic zirconia is a different mineral. Usually the distinction is "natural" vs "lab grown" diamonds.

              When computers have super-human intelligence, we might be making similar distinctions. Intelligence IS intelligence, whether it's from a machine or an organism. LLMs might not get us there, but some machine eventually will.

              • maest 18 hours ago

                I agree, but as a nit, the industry uses "earth mined" instead of "natural", presumably because it's more precise (and maybe less normative?)

                • Eddy_Viscosity2 8 hours ago

                  mined should be 'hand-picked' and lab made could be 'hand-crafted'.

                • windows2020 16 hours ago

                  Well, unless intellect is immaterial.

                • tim333 5 hours ago

                  AI becoming conscious is different to LLMs doing so. Maybe more people are claiming that? I think AI will but LLMs won't.

                  It depends a bit on what you mean by conscious, but assuming it's human-like, it incorporates a lot of feelings, vision, sound, thoughts and the like, things that are not really language. But we do it with neurons and some chemicals, and I imagine you could do something like that with artificial neural networks and some computer version of the chemistry, but not with language alone.

                  • andai 17 hours ago

                    Interesting. Artificial does have a negative connotation to it, I never considered that.

                    Synthetic sounds more neutral, aside from bringing microplastics to my mind.

                    I guess the field of artificial life has the same issue.

                    As another comment pointed out, you don't necessarily need consciousness for intelligence. And you don't need either of those for goal oriented behavior.

                    My favorite example is the humble refrigerator. (The old one, without the microchips!) It has a goal (target temperature), it senses its environment (current temperature), and takes action based on that (turn cooling on or off).

                    A cuter example is the dandelion seed. It "wants" to fly. Obviously! So you can display goal-directed behavior as the result of natural forces moving through you. (Arguably electricity and glucose also fall in that category, but... Yeah...)
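                    The refrigerator's sense-and-act loop can be sketched in a few lines of Python. (A minimal sketch; the target temperature and dead band are made-up illustrative values, not from any real appliance.)

```python
# Thermostat control loop: goal-directed behavior with no intelligence anywhere.
# The numbers (target 4.0 C, 0.5 C dead band) are illustrative assumptions.
def thermostat_step(current_temp, cooling_on, target=4.0, hysteresis=0.5):
    """Sense the environment (current_temp) and act toward the goal (target)."""
    if current_temp > target + hysteresis:
        return True    # too warm: switch the compressor on
    if current_temp < target - hysteresis:
        return False   # cold enough: switch the compressor off
    return cooling_on  # inside the dead band: keep the current state
```

                    The hysteresis band is what keeps the "goal-seeking" from oscillating, and nobody would call it conscious.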

                    LLMs, conscious or not, moved into that category this year, in a big way. (e.g. Opus and Codex routinely bypassing security restrictions in the pursuit of the goal.)

                    Does it really have goals, or does it merely appear to act as though it has them? Does it appear to act as though it has consciousness?

                    (I forget who said it: it won't really disrupt the global economic system, it will merely appear to do so ;)

                    Also, here I am! :)

                    • sshine 10 hours ago

                      > When will conscious, actual intelligence be called 'synthetic intelligence' instead of 'artificial'?

                      One can bypass the whole sentience discussion and say that AI stands for Automated Inference.

                      If actual, conscious intelligence were to manifest synthetically, as in silicon-based rather than carbon-based, it is a losing battle to convince people because of the philosophical “problem of other minds.”

                      If there is a functional equivalence between meatspace intelligence and synthetic, it will surely have enough value to reinforce itself, philosophical debates aside.

                      • palmotea 16 hours ago

                        > I've literally not met one person in tech who thinks LLMs will become sentient or conscious. But I always see people online claiming that there are lots of people who believe that.

                        I haven't met him, but a famous (pre-ChatGPT) counterexample is Blake Lemoine:

                        > In June 2022, LaMDA gained widespread attention when Google engineer Blake Lemoine made claims that the chatbot had become sentient. (https://en.wikipedia.org/wiki/LaMDA).

                        It's also not uncommon here to see someone respond to a comment questioning the consciousness or sentience of LLMs with a question along the lines of "how do you know anyone is conscious/sentient?" They're not being direct about their beliefs (I believe as a kind of motte-and-bailey tactic), but the implication is that they think LLMs are sentient and bristle when someone suggests otherwise.

                        • rickydroll 7 hours ago

                          An interesting parallel would be to look at what it took for humans to accept that sapience existed in non-humans, especially non-human primates.

                          On terminology, I would argue for non-biological intelligence. People can be awfully bioist (biological racist).

                          • mullingitover 17 hours ago

                            > LLMs will become sentient or conscious

                            I've always doubted it, but then again I've also been skeptical about claims that humans have these capabilities.

                            • jamesfinlayson 19 hours ago

                              > But I always see people online claiming that there are lots of people who believe that.

                              I saw someone on the news claiming this recently, but he ran an AI consultancy firm so I suspect he was trying to drum up business.

                              • melagonster 19 hours ago

                                >LLMs will become sentient or conscious.

                                People who declare that AGI is coming.

                                • mattclarkdotnet 19 hours ago

                                  AGI is completely orthogonal to consciousness. Crows seem pretty conscious to me, as does my cat, but I have no way to test or prove it. They are intelligent though.

                                • mattclarkdotnet 19 hours ago

                                  What? Nobody says cubic zirconia is an artificial diamond, it’s just a different shiny crystal. We have loads of actual artificial diamonds, so cheap you can get a cutting disc made from them for $10 at Home Depot.

                                  And nobody working in the space either as ML/AI practitioners, or as philosophers, or as cognitive scientists, even thinks we know what consciousness is, or what is required to create it. So there would be no way to tell if an AI is conscious because we haven’t yet managed to reliably tell if humans, or dogs, or chimpanzees or whales are conscious.

                                  The claim that is often made is that more work on the current generation of AI tech will lead to AGI at a human or better level. I agree with Yann LeCun that this is unlikely.

                                  • WalterBright 18 hours ago

                                    I'm pretty sure mammals and birds are conscious. Insects, probably not.

                                    • falcor84 18 hours ago

                                      Why? Are you arguing that insects are purely automatons? I personally don't have a strong view on insects, but my intuition is that there are different degrees of consciousness, and it feels natural to attribute some consciousness to insects, and even individual amoebas, and maybe even (as in Chalmers's famous example) to thermostats.

                                      I would draw a separate line around sapience, and particularly the capacity for suffering: those I might indeed attribute to mammals and birds but not insects. Consciousness, though, seems more widespread to me.

                                      • windows2020 16 hours ago

                                        Bees seem like they know what's going on. What about a cell, though? A virus?

                                        • falcor84 15 hours ago

                                          It depends on the cell; an amoeba, for example, clearly seems to know what's going on around it. A virus, on the other hand, having no metabolism of its own, clearly doesn't.

                                      • mattclarkdotnet 18 hours ago

                                        If you were to force the choice I might agree. But I’d prefer to think there’s likely a sliding scale in operation here. Even humans aren’t conscious all the time, or equally conscious at all times. It will be an amazing day when we figure this out.

                                    • pllbnk 11 hours ago

                                      Lucky you. I have personally faced some cargo cult-like behavior.

                                    • georgeecollins 17 hours ago

                                      "You can see the computer age everywhere but in the productivity statistics."

                                      Robert Solow, Nobel Prize-winning economist, 1987.

                                      • rubslopes 9 hours ago

                                        I do believe I'm more productive, but my company is not charging much more for it, and I'm working the same hours. Maybe that's the reason.

                                        I just had a meeting yesterday where someone from the customer support team vibe-coded a solution in a few hours. The boss said, "Let's just give this away as a gift; this product is not our focus, and I want to show them how AI makes us work fast."

                                        • palmotea 16 hours ago

                                          > "You can see the computer age everywhere but in the productivity statistics."

                                          > Robert Solow, Nobel Prize-winning economist, 1987.

                                          Some skeptic was wrong in the past, therefore we should disbelieve every skeptic, forever.

                                          That's the argument, right?

                                          • georgeecollins 2 hours ago

                                            No, sorry, I should have elaborated, because while this is a really familiar case to people who study economics, it may not be familiar to everyone. People spent a fortune on computers, in ever increasing amounts. To the point of the quote, it wasn't clear that this was improving productivity. It took time and a lot of investment for the transformation of work to happen.

                                            A similar historical case is when factories went from steam engines to electricity. Steam factories had one big engine connected mechanically to many tools and conveniences in the factory. So they replaced the one big steam engine with one big electric motor. Really not much better. It took time for them to realize they could wire the factory and give each device its own electric motor. That was more efficient and more flexible. Technology that changes how you work takes a long time to adopt.

                                            • sph 4 hours ago

                                              “The dot-com boom left all this fibre that powered the next 20 years of Internet growth” is the common example put forward, and I always wonder what amazing societal advancement we got with all those leftover tulip bulbs in the 1600s.

                                            • kamaal 17 hours ago

                                              Oh well, I had a talk with a director at the office. He says that instead of using AI to get more productive, people are using AI to get lazier.

                                              1)

                                              What he means is this: say you needed to get something done. You could ask AI to write you a Python script which does the job. Next time around you could reuse the same Python script. But that's not how people are using AI: they basically treat the prompt as the only source of input, and the output of the prompt as the job they want to get done.

                                              So instead of reusing the Python script, they basically re-prompt the same problem again and again.

                                              While this gives an initial productivity boost, you now arrive at a new plateau.

                                              2)

                                              The second problem: ideally you would write the Python script once and improve it over time. An ever-improving Python script should come to do most of your day job.

                                              That's not happening. Instead, since re-prompting is common, people are now executing a list of prompts to get complex work done, and then making that a workflow.

                                              So ideally there should be a never-ending productivity increase, but when you sell a prompt as a product, people use it as a black box to get things done.

                                              A lot of this has to do with lack of automation/programming mindset to begin with.

                                              • bgitarts 5 hours ago

                                                The way I'm using it: I have the AI generate the tool (a Python script), and then it uses that tool for the current task and for future tasks. As time goes on, the AI has more tools to call on, which makes it (and me) more productive (higher-quality work in less time).

                                            • agnishom 16 hours ago

                                              The most important cost that you didn't mention is the loss of social trust and the harm that will do to social infrastructure.

                                              Junior developers will find it harder to be hired and trained. The case for lesser-known artists and musicians is much worse. The scientific literature will be flooded with low-quality AI slop of questionable veracity. Drafts of good debut novels will be harder to find. When someone writes a love song, their romantic partner(s) will have to question whether it was LLM-generated. Nobody will be able to trust video footage of any kind, and everyone will have a much harder time telling what is the truth.

                                              I don't think standard economic indicators are tuned to detect these externalities in the short to medium term.

                                              • palmotea 16 hours ago

                                                > The most important cost that you didn't mention is the loss of social trust and the harm that will do to social infrastructure.

                                                This. I think generative AI will mostly generate destruction. Not in the nuking cities sense, but in hollowing out institutions and social bonds, especially the complicated and large-scale kind that have enabled advanced civilization. In many ways, things will revert to a more primitive state: only really knowing people in your local vicinity (no making friends online, because it'll be mostly dead-internet bots out there), only really knowing the news you see yourself, more reliance on rumor and hearsay, removal of the ability for the little guy to challenge and disprove institutional propaganda (e.g. can't start a blog and put up some photos and have people believe your story about what happened), etc.

                                                • yunwal 16 hours ago

                                                  > Junior developers will find it harder to be hired and trained. The case for lesser known artists and musicians is much worse. The scientific literature will be flooded by low quality AI slop with questionable veracity. Drafts of Good debut novels will be harder to find. When someone writes a love song, their romantic partner(s) will have to question if it was LLM generated. Nobody will be able to trust video footage of any kind and will have a much harder time telling what is the truth.

                                                  I think most people will retreat into smaller spaces where they can rely on people to not deceive them. Everyone is moving to discord/group chats now for any sort of trustworthy information. This might be a good thing honestly. It was probably never good that we all got our information from the same place.

                                              • rising-sky 20 hours ago

                                                I guess this is the trend now because it's a contrarian / attention-grabbing headline. See:

                                                - "Thousands of CEOs just admitted AI had no impact on employment or productivity..." https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-s...

                                                - “Over 80% of companies report no productivity gains from AI…” https://www.tomshardware.com/tech-industry/artificial-intell...

                                                But fundamentally, large shifts like this are like steering a supertanker: the effects take time to percolate through economies as large and diversified as the US. This is the Solow paradox / productivity paradox: https://en.wikipedia.org/wiki/Productivity_paradox

                                                  > The term can refer to the more general disconnect between powerful computer technologies and weak productivity growth

                                                • XenophileJKO 19 hours ago

                                                  I keep seeing the "Productivity Paradox" highlighted over and over again. I think one thing people are missing with this specific technology is that, unlike many of the comparisons (computers, internet, broadband, etc.), AI doesn't have a high requirement on the consumer side. Everyone already has everything they need to use it.

                                                  There will be a period, like the one we are in now, where dramatic capability gains (like the recent coding gains) take a while for people to adapt to; however, I think the change will be much faster. Even the speed of uptake in coding tools over the last 3 months has been faster than I predicted. I think we'll see shifts like this in other sectors, where things change over the course of a few months.

                                                  • afavour 19 hours ago

                                                    > AI in particular doesn't have a high requirement at the consumer side. Everyone already has everything they need to use it.

                                                    That isn’t actually true though: right now everyone has a hard dependency on a cloud service, one that is currently sold to them at a deep discount by companies that are losing billions.

                                                    When the market eventually corrects it’ll be interesting to see how much AI ends up costing. At the very least it will be comparable to the broadband internet connection you mentioned. Possibly a whole lot more.

                                                    • slg 19 hours ago

                                                      >That isn’t actually true though, right now everyone has a hard dependency on a cloud service. That is currently sold to them at deep discount by companies that are losing billions.

                                                      Isn't that a huge red flag? If customers are being given this product at a discount and it still isn't showing a positive ROI for them, what makes people think it will improve once we're charged full price?

                                                      • weevil 18 hours ago

                                                        I think most people just assume it's magic, and are too awestruck by the hype to think critically.

                                                        Financially this feels similar to Uber's business plan in the 2010s: undercut the market with unsound pricing propped up by venture capital (VC was literally subsidising taxi fares; they admitted this and their intention to readjust, but no one seemed to care), then stop manipulating the market and allow fares to even out at (gasp) what it cost to get a cab before Uber.

                                                        The difference here is that the LLM market is human productivity: enormous subsidies are afforded to Anthropic, OpenAI, etc. in the form of VC money or compute credits, but eventually those debts will be called in, the free-to-use tiers will vanish because they're simply not profitable, and we'll be left with several premium products that only a few people will actually pay for. Even then, that may not be enough to cover their costs. That's when the bubble will burst.

                                                        • great_psy 16 hours ago

                                                          Actually, I think there’s another option.

                                                          There’s the scenario where LLMs get more efficient for their size, and you’ll be able to get 2026 SOTA performance from a consumer-grade laptop.

                                                          Sure, with a 1000B-parameter model you will get better performance, but the average person will have it write some Python script, not derive new physics equations.

                                                          So in a sense the demand for LLM intelligence will reach a plateau (arguably we are there today for the avg person), so there will not be any subsidy required, because the avg person will not need the latest and greatest.

                                                          There’s not the same demand pattern as for something like Uber.

                                                          • palmotea 16 hours ago

                                                            > There’s the scenario where LLMs get more efficient in size, and to get 2026 SOTA performance you will be able to get it from consumer grade laptop.

                                                            But isn't that bad for the AI companies too? Because then people would just run a ~2026-SOTA open-source model on their laptop for free and not pay any subscription.

                                                            • great_psy 15 hours ago

                                                              Yes and no.

                                                              Regular folks will not pay Anthropic, but the NSA, NASA, or research labs might.

                                                              I’m not implying this will be a good time for AI companies. I am saying AI as a technology can provide value without it being controlled by only 3 companies.

                                                              • jononor 13 hours ago

                                                                In a hypothetical future with 2026-level LLMs on a (high-end) consumer laptop, I still think that the majority of buyers would prefer to pay 20 USD/month for a service, just for the convenience and flexibility.

                                                                • palmotea 4 hours ago

                                                                  > In a hypothetical future with 2026 level LLMs on a (high end) consumer laptop, I still think that majority of buyers would prefer to pay 20 USD/month for a service. Just for the convenience and flexibility.

                                                                  $20 a month is a lot of money; I don't think the "convenience and flexibility" you get would actually be worth it, unless 1) you've got money to burn, 2) you lack the skills to install software, or 3) the open source community totally fails to develop a reasonable installer. The LLM service would probably be akin to a scam preying on ignorance, like those companies that will rent you a water softener for like $100/month.

                                                                  • jononor 3 hours ago

                                                                    It is a lot compared to what? I believe that an LLM-capable laptop will cost considerably more than something that is good enough for non-LLM productivity tasks, at least within the next 5 years. Say it costs 600 USD more; that would buy 30 months of subscription. In that kind of scenario, I think many people will favor the subscription.

                                                      • BobbyJo 19 hours ago

                                                        Is it actually being sold at a steep discount? Anthropic CEO has stated they have high margins on inference, so training is the big cost center.

                                                        • bigbadfeline 18 hours ago

                                                          > Anthropic CEO has stated they have high margins on inference, so training is the big cost center.

                                                          I'm pretty sure that in corpo-speak "inference" excludes the cost of datacenter construction, GPUs and other hardware, manual data cleaning, R&D, administration, etc - basically everything except the power bill for inference.

                                                          I have absolutely no problem with companies that run inference only - plenty of them offer open models as a service - they're useful and their accounting can be believed... but they don't have near-trillion-dollar valuations and they don't misallocate capital on the vast scale that the frontier-model companies do.

                                                          The point of the OP is that closed models don't pay for themselves and, on the scale of the US economy, they provide minuscule economic advantages compared to the enormous investments they consume.

                                                          • BobbyJo 17 hours ago

                                                            They've raised 70-ish billion (which they have not spent all of) and have a run rate of 14 billion/y as of now. All said and done, those are great economics so far, even accounting for those extra expenses.

                                                            • judahmeek 2 hours ago

                                                              Your argument requires the run rate to come down over time until OpenAI reaches profitability. However, even OpenAI has publicized that it expects its expenses to increase exponentially for its models to remain competitive.

                                                              So they are not profitable now, and they have no idea when they ever will be.

                                                              Worse, Gemini has guaranteed funding for continued training whenever the AI hype bubble pops.

                                                              Anthropic & OpenAI's only saving grace is that Google is generally terrible at product.

                                                          • lich_king 19 hours ago

                                                            > Is it actually being sold at a steep discount? Anthropic CEO has stated they have high margins on inference, so training is the big cost center.

                                                              They're spending more than they're making. For the foreseeable future, saying "we could be profitable if we stopped training" is goofy, because they can't stop. If they do, no one will want to use their product, because it will be overtaken by competitors within three months.

                                                              I get it that in 10 years all of this might peak and we're gonna be content using old models, but that'll be a very different landscape, and Anthropic might not be a part of it anymore if they don't start making money before then.

                                                            • frde_me 18 hours ago

                                                              > I get it that in 10 years all of this might peak and we're gonna be content using old models

                                                              I would personally be happy using gpt 5.3 codex for the foreseeable future, with just improvements in harnesses

                                                              IMO we're already at the point where even if these companies collapse and the models end up being sold at the cost of inference (no new training), we would be massively ahead

                                                              • BobbyJo 17 hours ago

                                                                That's a perfectly valid approach if you can balance capex and revenue. Why stop and try to be profitable when the economy is giving you the liquidity to push that down the road?

                                                                Models are already super useful, but if you can make them more useful by burning cash people are willing to hand you, why not?

                                                              • ambicapter 19 hours ago

                                                                Well, training isn't going to end soon if these companies keep on competing with one another whilst being neck-and-neck, so I'm not sure why you would ignore the cost of training in the ROI calculation.

                                                                • numbsafari 19 hours ago

                                                                  Do the cumulative earnings from inference on a single model exceed its training costs?

                                                                  That’s... kinda the question.

                                                                  • ainch 19 hours ago

                                                                    Amodei says yes - each model pays for its training. But they're scaling up investment for each new run, so they're still happily in the red.

                                                                    That may also be specific to Anthropic, who have fewer free users, a large enterprise business, and less generous rate limits on their subscriptions. I don't know if OpenAI or Google have commented. I suspect OpenAI is in a worse position given their massive non-paying consumer base.

                                                                  • itsmenick 19 hours ago

                                                                    Then why are they stopping people from having multiple Max plans, if they are making such good margins on inference?

                                                                    • lehmacdj 18 hours ago

                                                                      They have good margins on inference at API costs, i.e. $5/$25 per mtok input/output. They are almost certainly making losses on subscriptions, at least if people max out rate limits.

                                                                      In the past 30 days I have burned $78.19 in API token costs with my $20/month Claude Pro subscription. In January I burnt over $300 in API token costs.
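
To make the comparison concrete, here is a quick sketch using the $5/$25 per million-token prices cited above; the token counts below are hypothetical illustrations, not this commenter's actual usage.

```python
# API-equivalent cost of a month's usage vs. a flat subscription,
# using the $5/$25 per million-token prices cited in the comment above.

INPUT_PRICE_PER_MTOK = 5.00    # USD per million input tokens
OUTPUT_PRICE_PER_MTOK = 25.00  # USD per million output tokens

def api_equivalent_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of the same usage billed at per-token API rates."""
    return (input_tokens / 1_000_000 * INPUT_PRICE_PER_MTOK
            + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_MTOK)

# Hypothetical heavy month: 10M input tokens, 1.1M output tokens.
cost = api_equivalent_cost(10_000_000, 1_100_000)
subscription = 20.00  # flat monthly subscription price
print(f"API-equivalent: ${cost:.2f} vs subscription: ${subscription:.2f}")
```

At those rates, even modest daily agentic use blows well past a $20/month flat price, which is consistent with the burn figures reported above.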

                                                                      • noah_buddy 4 hours ago

                                                                        Because the power users of the max plan are subsidized at the upper end of usage by people who don’t approach the per-account limit. In other words, the power users are getting more than they pay for, because most people don’t reach that threshold. If you let the power users have dozens of accounts, it has a multiplier effect on the proportion of accounts breaching the profitability line.
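
The cross-subsidy argument can be sketched as a toy model. All the numbers here are hypothetical, chosen only to show how the flat-rate margin flips sign as power users become a larger share of subscribers.

```python
# Toy cross-subsidy model for a flat-rate plan (all numbers hypothetical).
# Light users pay more than they cost to serve; power users cost more than
# they pay. The plan is profitable only while power users stay rare.

PRICE = 200.0  # hypothetical flat monthly plan price, USD

def plan_margin(light_cost: float, heavy_cost: float, heavy_share: float) -> float:
    """Average margin per subscriber given the share of power users."""
    light_share = 1.0 - heavy_share
    avg_cost = light_share * light_cost + heavy_share * heavy_cost
    return PRICE - avg_cost

# Suppose light users cost $50/mo to serve and power users $800/mo.
print(plan_margin(50, 800, 0.10))  # 10% power users: positive margin
print(plan_margin(50, 800, 0.25))  # 25% power users: negative margin
```

This is why letting one power user hold several accounts hurts: each extra account shifts `heavy_share` upward without adding any light users to subsidize it.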

                                                                        • jononor 13 hours ago

                                                                          They are likely aiming to maximize reach/mindshare. Get as many people hooked as possible. More important than minor upside from a few multi-Max users.

                                                                          EDIT: also, the casual or gym-style members that pay every month but barely use the service are of course very valuable wrt margins

                                                                      • fooster 16 hours ago

                                                                        >That is currently sold to them at deep discount by companies that are losing billions.

                                                                        They're not losing billions on inference, they're losing billions in the arms race of training.

                                                                      • fallinditch 17 hours ago

                                                                        At the large insurance company I'm doing some work for, the big capability gains have yet to materialize. There are some pockets of workflow innovation, but big institutions can carry a kind of inertia and are slow to adapt.

                                                                        But as the organization slowly learns and adapts I'm sure the capability gains will materialize.

                                                                        • sifar 18 hours ago

                                                                          > AI in particular doesn't have a high requirement at the consumer side

                                                                          Effective use of these AI tools needs high critical-thinking skills, which are in short supply.

                                                                        • Lalabadie 19 hours ago

                                                                          I would argue that the leadership and financial support behind AI (in its current form) does not have the patience or level-headedness to treat it as a long-term change, and is very much taking an all-or-nothing approach: make a long shift happen in a few years instead, or burn through nation-level budgets trying.

                                                                          To my eyes, the problem is not the productivity gain arriving slowly, but the immediate draining of funding from virtually all other areas of innovation.

                                                                          • camillomiller 19 hours ago

                                                                            This. They created an innovation black hole and we will all pay the long-term consequences of it

                                                                            • palmotea 16 hours ago

                                                                              > This. They created an innovation black hole and we will all pay the long-term consequences of it

                                                                              But they may get rich soon, which is all that really matters to them.

                                                                          • slongfield 19 hours ago

                                                                            This isn't new.

                                                                            "The Productivity Paradox" is what they called it when people were skeptical that computers would end up finding a place in the office. There are articles from the 90s complaining about how much people were spending on computers for no real impact on productivity https://dl.acm.org/doi/10.1145/163298.163309

                                                                            • ej88 20 hours ago

                                                                              Even in the source article in the first link, https://www.nber.org/papers/w34836, the same firms "predict sizable impacts" over the next three years.

                                                                              Late 2025 was an inflection point for a lot of companies.

                                                                              • edgyquant 18 hours ago

                                                                                Seems like it’s an ever-shifting goalpost: we are told that tons of layoffs etc. are already happening due to the tech, and yet when quantified it’s debatable whether there have been any gains at all.

                                                                                • dodu_ 16 hours ago

                                                                                  I'll take it over the seemingly endless deluge of FUD-slop from the past 4 years that claims you'd better get ready for the AI takeover coming for all the jobs, on just-long-enough of a timeline that nobody will remember to hold the author accountable when their prediction is woefully incorrect, and where the "advice" in the article is conveniently to pay for more AI tools.

                                                                                  • camillomiller 19 hours ago

                                                                                    All of the technologies mentioned eventually made things better. In order to work, gen AI requires a general acceptance of widespread, mostly mediocre outcomes. I don’t see how the comparison stands.

                                                                                    • surgical_fire 19 hours ago

                                                                                      How do we reconcile this with all the narratives of how powerful AI is, how it can perform right now at the same level as engineers, and so on?

                                                                                      Once confronted with reality we have a "productivity paradox"?

                                                                                    • kermatt 32 minutes ago

                                                                                      When talking about the economy, how much do these numbers consider the secondary effects of AI?

                                                                                      For example when there are large scale layoffs attributed to AI (which in some cases may be a smokescreen to reduce headcount), the people that are affected reduce their spending to compensate.

                                                                                      Less spending means less revenue for other companies, resulting in more layoffs, something of a feedback loop? Therefore perceived productivity gains may be a net negative in the macroeconomic sense.

                                                                                      I am no economist so I don't know if any of that makes sense, but it's something I regularly wonder about.

                                                                                      • chris_money202 18 hours ago

                                                                                        I think a pretty good example I had at work: we had the option to buy a software package from a 3rd-party company. After reviewing the specs we needed, I told my manager to give me a few hours to see if I could produce what we needed with AI instead. Lo and behold, I was able to do it in just a few hours; the AI-built package was tested and integrated, and we moved on. Nowhere was it recorded that I had just saved the company lots of money using AI. I bet there are lots of examples like this that just aren't adequately tracked at both micro and macro levels. For some reason we expected to be able to see these huge gains from AI, but we never bothered putting systems in place to observe them.

                                                                                        • gpm 18 hours ago

                                                                                          I suspect we are still at the stage where for every story like this there's an offsetting story in the other direction of "I (more commonly reported as my coworker) tried to implement something with AI, messed it up, and ended up wasting a ton of time and resources on that mistake".

                                                                                          It's not that AI can't be useful, but that there's a learning curve, and early in the learning curve we should expect as many resources to be spent learning as resources are saved by using the thing. A macro level view of the economy as a whole sees this as "zero economic growth".

                                                                                          • mrtksn 18 hours ago

                                                                                            I think this is probably going to be the mainstream. Once you are able to define what you need, LLMs are able to produce it. If you are able to understand what is delivered, it ends up working as expected.

                                                                                            I needed an embedded document-based database. A friend of mine with 30 years of experience was vibe coding a database in Rust, and I asked him if he could make it support Swift and be embeddable in iOS; in a few minutes he delivered that using Claude. Then I started vibe coding on it with Codex, adding features I wanted and integrating it into my project. It worked as expected. I think it is close to reaching parity with MongoDB: years of work vibe coded in a weekend.

                                                                                            There are going to be fundamental changes in how we program computers and, consequently, in the IT industry.

                                                                                            • dw_arthur 15 hours ago

                                                                                              It should show up as decreased revenue for the company you didn't buy the product from. It should also show up at your company as increased profit margin, increased investment, increased total employee wages, or increased dividend payout.

                                                                                              If this is happening on a widespread basis in the economy we should see evidence of it sometime this year and that's what investors are anticipating with SaaS stocks.

                                                                                              • gls2ro 15 hours ago

                                                                                                Personally this excites me: the idea that I can build custom software that fits a specific problem is quite amazing.

                                                                                                But at the company level I see it as a risk: suddenly you might have 50 new small apps, created by people who might not even work at the company, that are not regularly tested for security/privacy ... but more importantly, once done, they are not pushing the frontier of what a much better solution in that area might be, because nobody is putting time into them. So as time passes, this risks becoming legacy software used to run your business. Yes, of course you can point an AI at all of them and prompt it to make them better, but that means focusing on that instead of your core business.

                                                                                                Maybe we will see solutions appearing to manage this kind of tech debt.

                                                                                                • samrus 10 hours ago

                                                                                                  Your point on the visibility of the value of avoiding the initial purchase makes sense, but there's something you're missing: there's also a cost to maintaining and supporting the software, and that won't be factored in either. It might still end up being a positive value proposition, but that remains to be seen.

                                                                                                  • consumer451 12 hours ago

                                                                                                    > For some reason we expected to be able to see these huge gains from AI but we never bothered putting systems in place to observe them.

                                                                                                    I am an economic dummy, but wouldn't the metric be revenue per employee?

                                                                                                    • enraged_camel 18 hours ago

                                                                                                      Yes. We needed to do a huge migration project that would otherwise have taken us six months and/or cost more than $100k. With the help of Opus 4.5 we finished it in three weeks for a total token cost of $1200. I posted about it last month.

                                                                                                      So if you want to think of it in economic terms, some software consulting firm that would otherwise have made six figures instead did not. The vast majority of the money we would have spent stayed in our pocket. Slight decreases like this in “velocity of money” no doubt add up to significant sums.
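
The figures in this comment can be laid out side by side. The $100k quote, the $1,200 token spend, and the timelines come from the comment itself; internal engineer time is deliberately left unpriced here, which overstates the savings somewhat.

```python
# Rough comparison of the two paths described in the comment above.
consultant_cost_usd = 100_000   # external firm's quoted price
consultant_weeks = 26           # "six months", approximated in weeks

token_cost_usd = 1_200          # total token spend reported
internal_weeks = 3              # time taken in-house with the model

savings_usd = consultant_cost_usd - token_cost_usd
speedup = consultant_weeks / internal_weeks
print(f"saved ~${savings_usd:,}, delivered ~{speedup:.1f}x sooner")
```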

                                                                                                      • bgitarts 5 hours ago

                                                                                                        This should show up in higher margins for the company.

                                                                                                        Is your company a software firm or considered something outside of pure software?

                                                                                                        • buu700 18 hours ago

                                                                                                          GDP is a classic example of Goodhart's law.

                                                                                                        • slopinthebag 18 hours ago

                                                                                                          What was the software package?

                                                                                                          • chris_money202 18 hours ago

                                                                                                            simulation model of a hardware component

                                                                                                            • linkjuice4all 18 hours ago

                                                                                                              Typically I wouldn't press on the details - but do you have any reason not to name the specific package/vendor and/or the use case? It's nice to draw from the actual experiences of HN commenters and I'd be interested to hear how you used the technology in question in actual practice.

                                                                                                              • slopinthebag 18 hours ago

                                                                                                                I didn't push because I imagine they want to stay anonymous, but I am also curious.

                                                                                                          • pylua 18 hours ago

                                                                                                            Doesn’t this hurt the economy ?

                                                                                                            • chris_money202 18 hours ago

                                                                                                              I think it would depend on which company is the more innovative. Is the 3rd party going to use the money we give them to drive further economic growth and innovation? Or is the money saved going to do that? It's a tough call and could go both ways. We need to somehow measure how the pendulum swings, with more accuracy and clearer signals.

                                                                                                          • d_watt 20 hours ago

                                                                                                            It took 20 years for computers to "add" to the economy.

                                                                                                            https://en.wikipedia.org/wiki/Productivity_paradox

                                                                                                            • preommr 20 hours ago

                                                                                                              I am not saying this to be sarcastic - the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years, or Boris saying coding is solved and that 100% of his code is written by AI.

                                                                                                              It's not good enough to just say that Oreo CEOs say we need more Oreos.

                                                                                                              There's a real grey area where these tools are useful in some capacity, and in that confusion we're spending billions. Too many people are saying too many conflicting things, and chaos is never good for clear long-term growth.

                                                                                                              Either that 20 years is completely inapplicable to AI, or we're in for a world of hurt. There's no in-between, given the kinds of bets that have been made.

                                                                                                              • ozim 20 hours ago

                                                                                                                AI companies don’t have 20 years; they have at most 5 years in which they have to turn a profit.

                                                                                                                They don’t have time to wait for all the companies to pick up AI tooling at their own pace.

                                                                                                                So they lie and try to manufacture demand. Well, demand is there, but they have to manufacture FOMO so that the demand materializes now and not in 10 or 20 years.

                                                                                                                • rfv6723 19 hours ago

                                                                                                                  This outlook is as short-sighted as the 2000 fiber optic bust. Critics then thought overcapacity meant the end, yet that infrastructure eventually created the modern internet. Capital does not walk away from a fundamental shift just because of one market correction. While specific companies may fail, the long-term value of the technology ensures that investment will continue far beyond a five-year window.

                                                                                                                  • afavour 19 hours ago

                                                                                                                    But fiber optic laid in 2000 is still very usable in 2026. AI hardware purchased in 2026 is going to be out of date very quickly by comparison.

                                                                                                                    • rfv6723 19 hours ago

                                                                                                                      The massive investment in power grids and data centers provides a permanent physical backbone that outlives any specific silicon generation. This infrastructure serves as a durable shell for the model design knowledge and chip architectural IP gained through each iteration. Capital is effectively funding a structural moat built on energy access and engineering mastery.

                                                                                                                      • DrewADesign 18 hours ago

                                                                                                                        Seems like there are a lot of resources being dumped into those data centers that will not be very useful. Saying it will all be worthwhile because we’ll have the buildings and the modest power grid updates (which are largely paid for by taxpayers anyway) feels like saying a PS5 is a good long-term investment because the cords and box will still be good long after the PS5 has outlived its usefulness.

                                                                                                                        • rfv6723 17 hours ago

                                                                                                                          The "PS5" analogy fails to account for how "useless" hardware often triggers the next paradigm shift. For decades, traditionalists dismissed high-end GPUs as expensive toys for gamers, yet that specific architecture became the accidental engine of the AI revolution.

                                                                                                                    • toomuchtodo 19 hours ago

                                                                                                                      How much capital was wiped out for it to be cheap after the bust? Someone is going to eat the exuberance loss in the near term, even if there is long term value.

                                                                                                                    • whattheheckheck 19 hours ago

                                                                                                                      Try 3

                                                                                                                    • Terr_ 19 hours ago

                                                                                                                      It's a "Motte and Bailey" system [0], where the extreme "AI will do everything for you" claim keeps getting thrown around to try to get investors to throw in cash, but then somehow it transmutes into "all technologies took time to mature stop being mean to me."

                                                                                                                      To be fair, it isn't necessarily the same people doing both at once. Sometimes there are two groups under the same general banner, where one makes the big claims, and another responds to perceived criticism of their lesser claim.

                                                                                                                      [0] https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy

                                                                                                                      • flowerthoughts 14 hours ago

                                                                                                                        > the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years

                                                                                                                        An even bigger problem is that people listen to them even after they say rationally implausible things. When even Yann LeCun is throwing his hands up and saying "this approach won't work," it's pretty bad.

                                                                                                                        • co_king_5 20 hours ago

                                                                                                                          > the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years, or boris saying coding is solved and that 100% of his code is written by AI.

                                                                                                                          I'm going to be honest, you can feel the AGI when you use newer agentic tools like OpenClaw or Claude. It's an entirely different world from GPT-4.0. This is serious intelligence.

                                                                                                                          Superintelligence in 3 years doesn't really sound that crazy given how quickly I can write code with Claude. I mean we're 90%-95% of the way there already.

                                                                                                                          • bigstrat2003 20 hours ago

                                                                                                                            > I'm going to be honest, you can feel the AGI when you use newer agentic tools like OpenClaw or Claude.

                                                                                                                            You're right. I can feel how far away it is and how these tools will in no way be capable of getting us there.

                                                                                                                            • maplethorpe 9 hours ago

                                                                                                                              Are you using Claude Opus 4.6?

                                                                                                                            • arctic-true 20 hours ago

                                                                                                                              Researchers looked at GPT-3 in 2023 and saw “sparks of AGI”. The saying “feel the AGI” became widespread not long after, if I’m remembering right. We’ve been saying AGI is right around the corner for a while now. And of course, if you predict the end of the world every day, you’ll eventually be right. But for the moment, what we have is an exceptionally powerful coding assistant that can also speed up entry-level work in various other white collar industries. That is earth-shattering, paradigm-shifting. But given how competitive and expensive the AI game has become, that is not enough, so it needs to be “superintelligence” - and it’s just not.

                                                                                                                              • dougb5 19 hours ago

                                                                                                                                Minor correction: "sparks of AGI" was in reference to GPT-4, which came out in March 2023. (https://arxiv.org/abs/2303.12712)

                                                                                                                                • arctic-true 18 hours ago

                                                                                                                                  Ah, that’s my mistake. Thank you. I saw 2023, I thought GPT-3. Even still, people talk about GPT-4 today like it was a quaint little demo. It was a magnificent achievement, it scared the pants off of a lot of people, and sparked a new round of “is AI conscious?” discourse.

                                                                                                                                • yowayb 20 hours ago

                                                                                                                                  iirc, when Eliza came out, many people briefly believed it was sentient

                                                                                                                              • EA-3167 20 hours ago

                                                                                                                                It’s amazing that economic analysis can be dismissed by “feeling the AGI”.

                                                                                                                                You might as well be telling people to “HODL”

                                                                                                                                • lanstin 20 hours ago

                                                                                                                                  Have you ever tried to trick an LLM? Did you have trouble?

                                                                                                                                  • chrysoprace 20 hours ago

                                                                                                                                    What does that mean? By what metric do you measure "AGI", whatever that means? Industry definitions are incredibly vague, perhaps intentionally so, with no benchmarks to define how a model, harness, or other technology might achieve "AGI". They have no intelligence, and can't even reason that you need to take your car to the car wash to have it washed[0].

                                                                                                                                    [0] https://news.ycombinator.com/item?id=47031580

                                                                                                                                    • conception 20 hours ago

                                                                                                                                      A link to a page where the top comment talks about how a major model doesn’t get stuck on the question doesn’t seem like much of a flex.

                                                                                                                                      • co_king_5 20 hours ago

                                                                                                                                        Have you even used Claude?

                                                                                                                                        You can feel it coming.

                                                                                                                                        • albatross79 19 hours ago

                                                                                                                                          You seem to be doing a lot of feeling, have you tried thinking? It's pretty cool when you need a break from feeling.

                                                                                                                                          • testbjjl 19 hours ago

                                                                                                                                            If somehow Claude became sentient that would be sci-fi. One day it’s wrangling CSS and Spring Boot Controllers and the next it’s telling you opinions it developed through its own experiences on programming languages. Not sure that’s on the near horizon, but it’s definitely impressive technology.

                                                                                                                                        • AnimalMuppet 19 hours ago

                                                                                                                                          > Superintelligence in 3 years doesn't really sound that crazy given how quickly I can write code with Claude. I mean we're 90%-95% of the way there already.

                                                                                                                                          Yeah? So you must have a clear idea of where "there" is, and of the route from here to there?

                                                                                                                                          Forgive me my skepticism, but I don't believe you. I don't believe that you actually know.

                                                                                                                                          • testbjjl 19 hours ago

                                                                                                                                            The OP may be talking about WordPress, or something well short of embedded code running on the Large Hadron Collider.

                                                                                                                                      • yifanl 20 hours ago

                                                                                                                                        The difference being that AI's marketing has been significantly more prevalent than any early computing efforts.

                                                                                                                                        • jsheard 20 hours ago

                                                                                                                                          Not to mention the investment is on another level. We've got companies with valuations in the hundred-billions talking about raising trillions to buy all of the computers in the world, before establishing whether they can even turn a profit, nevermind upend the economy.

                                                                                                                                          • m4rtink 19 hours ago

                                                                                                                            I wonder how many genuinely beneficial projects will go unfinanced by investors too scared to try anything risky after the AI bubble crashes and burns to the ground. :P

                                                                                                                                            • testbjjl 19 hours ago

                                                                                                                                              More than the last few crashes. Same players on different teams.

                                                                                                                                            • bdangubic 20 hours ago

                                                                                                                                              The investments are being made by massively profitable companies (our biggest and brightest, the ones that have been carrying the economy for quite some time now, even before "AI"). Even in recent history we have seen companies make large investments and stay very unprofitable until they weren't anymore (e.g. Uber). And it is always the same story: everyone is up in arms that "this is not sustainable", etc.

                                                                                                                                              Whether or not these companies can turn a profit, time will tell. But I am betting that our massively profitable companies (which are the biggest spenders, of course) perhaps know what they are doing, and just maybe they should get the benefit of the doubt until they are proven wrong. If I had to make a wager, with Google, Microsoft, Amazon, and Meta on one side and a bunch of AI-bubble people with plenty of time to predict a "crash" on the other, I'd put my money on the former.

                                                                                                                                              • arctic-true 19 hours ago

                                                                                                                                                The fact that the companies that have already shoveled billions of dollars at this are continuing to do so is equally consistent with AI improvement and adoption stalling as it is with infinite improvement and widespread adoption. Yes, it’s irrational to chase sunk costs - but unlike the VC funds that backed Uber and its competition, many of the players in this game are exposed to public markets, which are not known for being rigorously logical. If you pull back on your AI investments, the markets will punish you - probably vigorously - and if your only concern is the value of your stock options, it is entirely rational for you to act in a way that keeps the market from punishing their value. We’re 3 years in without showing any ROI, and who’s to say we can’t get 3 or 5 or 10 more? Plenty of time to cash out before the eventual reckoning.

                                                                                                                                                • bdangubic 19 hours ago

                                                                                                                                                  > If you pull back on your AI investments, the markets will punish you - probably vigorously

                                                                                                                                                  Quite the opposite is happening, as evidenced by the latest earnings reports…

                                                                                                                                                  • arctic-true 19 hours ago

                                                                                                                                                    There is definitely growing hesitancy in the market, but pulling back at this juncture could set off a full-on race to the bottom, because it would disprove the original point (“all the smart tech companies are all-in, so there must be profit at the end of the tunnel”). Right now, they can point to the skeptics as bears or doomers or whatever. The first big tech company to drop its capex will pierce the aura of invincibility and make the moderate retreat from the exuberant highs of late 2025 look like a blip on the radar.

                                                                                                                                                • jsheard 20 hours ago

                                                                                                                                                  I'd maybe think twice about assuming Meta knows what they're doing after they just pissed $75 billion up the wall on a Metaverse dream that went nowhere.

                                                                                                                                                  • testbjjl 19 hours ago

                                                                                                                                                    Pissed it away, but Zuckerberg is richer than ever and so are his stockholders it seems. I can’t imagine doing it, but also can’t imagine running Meta.

                                                                                                                                                    • bdangubic 20 hours ago

                                                                                                                                                      if it was just Meta perhaps I’d think twice but it is not just Meta, it is all of them

                                                                                                                                                      • albatross79 19 hours ago

                                                                                                                                                        "The lemmings can't be wrong, they're all doing it". I think you're overlooking the incentive structures here.

                                                                                                                                                        • bdangubic 19 hours ago

                                                                                                                                                          I am certainly not saying that this can’t all come crashing down for the big boys; surely it can. I’m just putting a little more weight on them than on internet commenters and doomsayers hunting for clicks, is all.

                                                                                                                                                          • leetrout 19 hours ago

                                                                                                                                                            I just keep thinking about SGI and, to an extent, Sun. Couple missteps and a couple innovations in the commodity direction and it will start having a negative effect.

                                                                                                                                                • petcat 20 hours ago

                                                                                                                                                  This seems false to me. Commodore and Apple were blitzing every advertising medium and especially TV ads in the early 1980s.

                                                                                                                                                  • ohrus 19 hours ago

                                                                                                                                                    Worryingly revisionist to compare 1980s media advertising budgets to what's going on now (even if they were 'high' for the time).

                                                                                                                                                    • galleywest200 20 hours ago

                                                                                                                                                      Atari too

                                                                                                                                                    • paradox460 17 hours ago

                                                                                                                                      And early (electronic) computing paid immediate dividends, with the Bletchley Park code breakers.

                                                                                                                                                      • testbjjl 19 hours ago

                                                                                                                                                        More than Apple, on relative scale. Personally I don’t think that.

                                                                                                                                                      • RigelKentaurus 19 hours ago

                                                                                                                                                        For the U.S. economy, productivity is defined as (output measured in $)/(input measured in $). Typically, new technologies (computers, internet, AI) reduce input costs, and due to competition in the market, companies are required to reduce their prices, thereby having an overall deflationary effect on the economy. It's entirely possible that AI will have a small or no effect on productivity as measured above, but society will benefit by getting access to inexpensive products and services powered by inexpensive AI. Individual companies won't use AI to improve their productivity but will need to use AI just to stay competitive.
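
                                                                                                                                                        The deflation argument above can be made concrete with a toy calculation. The figures below are purely illustrative (not from any real data): if competition forces companies to pass AI-driven cost savings straight into lower prices, dollar-measured productivity can stay flat even while consumers benefit.

```python
# Illustrative sketch with made-up numbers: productivity measured in dollars
# can be unchanged by AI if cost savings are fully passed through to prices.

def productivity(output_dollars: float, input_dollars: float) -> float:
    """Productivity as defined above: output in $ divided by input in $."""
    return output_dollars / input_dollars

# Before AI: $120 of output produced from $100 of input.
before = productivity(120, 100)

# AI cuts input costs by 20%, but competitors cut prices by the same
# proportion, so revenue (output in $) falls in step: $96 from $80.
after = productivity(96, 80)

# Measured productivity is identical, yet prices to consumers fell 20%.
assert before == after == 1.2
```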

                                                                                                                                                        • yowayb 20 hours ago

                                                                                                                                                          I think this paragraph from the wikipedia article captures it nicely:

                                                                                                                                                          >Many observers disagree that any meaningful "productivity paradox" exists and others, while acknowledging the disconnect between IT capacity and spending, view it less as a paradox than a series of unwarranted assumptions about the impact of technology on productivity. In the latter view, this disconnect is emblematic of our need to understand and do a better job of deploying the technology that becomes available to us rather than an arcane paradox that by its nature is difficult to unravel.

                                                                                                                                                          • pier25 19 hours ago

                                                                                                                                                            Sure but the issue with AI is results vs money burned.

                                                                                                                                                            • kakapo5672 20 hours ago

                                                                                                                                                              Yep, and the same with the internet. During the 1990s and 2000s, people kept wondering why the internet wasn't showing up in productivity numbers. Many asked if the internet was therefore just a fad or bubble. Same as some now do with AI.

                                                                                                                                                              It takes time for technology to show measurable impact in enormous economies. No reason why AI will be any different.

                                                                                                                                                              • rainsford 20 hours ago

                                                                                                                                                                Sure, but you have to consider Carl Sagan's point, "The fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown." Some truly useful technologies start out slow and the question is asked if they are fads or bubbles even though they end up having huge impact. But plenty of things that at first appeared to be fads or bubbles truly were fads or bubbles.

                                                                                                                                                                Personally I think AI is unlikely to go the way of NFTs and it shows actual promise. What I'm much less convinced of is that it will prove valuable in a way that's even remotely within the same order of magnitude as the investments being pumped into it. The Internet didn't begin as a massive black hole sucking all the light out of the room for anything else before it really started showing commensurate ROI.

                                                                                                                                                                • ericd 19 hours ago

                                                                                                                                                                  All those racks of Nvidia machines might not pay off for the companies buying them, but I have a hard time believing that people are still questioning the utility of this stuff. In the last hour, Opus downloaded data for and implemented a couple of APIs that I would’ve otherwise paid hundreds a month for, end to end, from research all the way to testing its implementation. It’s so, incredibly, obviously useful.

                                                                                                                                                                  • jononor 13 hours ago

                                                                                                                                                    That something is useful does not necessarily mean that companies will be able to capture enough of the value to make up for the billions in investments they have made, and will make, in the coming years.

                                                                                                                                                    Right now the frontier AI companies are explicitly running a kind of chicken race, increasing their burn rates so much that it gets harder and harder to keep up, in the hope that they (and not their competitors) will be the one left standing. Especially OpenAI and Anthropic, but non-AI companies like Oracle have also joined. If they keep it going, the likely outcome is that one of them folds and the other(s) reap the rewards.

                                                                                                                                                    Utility per cost will go up the tougher the competition gets; the money captured by any single entity will likely go down.

                                                                                                                                                                    • slopinthebag 18 hours ago

                                                                                                                                                      It's only really useful if what you produce with those APIs is useful. It's easy to feel productive with AI, though, in a way that doesn't show up in economic statistics; hence the disconnect.

                                                                                                                                                                      • ericd 15 hours ago

                                                                                                                                                                        Well, it might actually decrease GDP in this case, because it's making it so I can just quickly make products that I would've otherwise purchased. But it's also made me more productive, and purchasing things isn't good for its own sake. So maybe measuring progress via GDP isn't ideal?

                                                                                                                                                                        The thing I'm making with the APIs is very helpful to me, maybe it'll be helpful to others, who knows.

                                                                                                                                                                    • jeltz 20 hours ago

                                                                                                                                                      Columbus was not a genius. He was an idiot who believed the earth was smaller than the scientists of his day calculated, and the scientists were right. Columbus became successful through pure luck, genocide, and cruelty.

                                                                                                                                                                      Most idiots like Columbus died in obscurity.

                                                                                                                                                                      • rainsford 19 hours ago

                                                                                                                                                                        Yeah the inclusion of Columbus is admittedly not great, but it's part of the original quote and the overall point is still a good one.

                                                                                                                                                                        • surgical_fire 19 hours ago

                                                                                                                                                                          Columbus, the man that didn't know where he was going to, and when he came back he couldn't tell where he was.

                                                                                                                                                                        • georgemcbay 19 hours ago

                                                                                                                                                                          > What I'm much less convinced of is that it will prove valuable in a way that's even remotely within the same order of magnitude as the investments being pumped into it.

                                                                                                                                                                          I think there are two layers of uncertainty here. One is, as you say, if the value is worth the investment. The other and possibly bigger issue is who is going to capture the value and how.

                                                                                                                                                                          Assuming AI turns out to be wildly valuable, I'm not at all convinced that at the end of this money spending race that the companies pouring many billions of dollars into commercial LLMs are going to end up notably ahead of open models that are running the race on the cheap by drafting behind the "frontier" models.

                                                                                                                                                                          For now the frontier models can stay ahead by burning heaps of money but if/when progress slows toward a limit whatever lead they have is going to quickly evaporate.

                                                                                                                                                                          At some point I suspect some ugly legal battles as some attempt to construct some sort of moat that doesn't automatically drain after a few months of slowed progress. Google's recent complaining about people distilling gemini could be an early signal of this.

                                                                                                                                                                          I have no idea how any of that would shake out legally, but I have a hard time sympathizing with commercial LLM providers (who slurped up most existing human knowledge without permission) if/when they start to get upset about people ripping them off.

                                                                                                                                                                          • arisAlexis 20 hours ago

                                                                                                                                                                            Even that you mentioned NFTs in comparison hurts my mind

                                                                                                                                                                            • kibwen 20 hours ago

                                                                                                                                                                              I mean, it's an apt comparison, given that the Venn diagram between the pro-NFT hucksters and the pro-AI crowd is a circle. When you listen to people who were so publicly and embarrassingly wrong about the future try to sell you on their next hustle, skepticism is the correct posture.

                                                                                                                                                                          • recursive 20 hours ago

                                                                                                                                            Also, there's no particular reason to group it in with those two. There are plenty of things that never showed up at all. It's just not a signal. It's kind of like saying "My kid is failing math, but he's just bored. Einstein failed a lot too, you know." Regardless of whether Einstein actually failed anything, a lot more non-Einsteins have failed.

                                                                                                                                                                            • sillyfluke 20 hours ago

                                                                                                                                                                              It didn't take mobile apps with the launch of the iPhone 20 years to add to the economy though, did it?

                                                                                                                                                                              • m4rtink 20 hours ago

                                                                                                                                                                                The iPhone was not the first mobile device or even the first smartphone. Not to mention it did not support mobile applications as we know them today.

                                                                                                                                                                                • sillyfluke 20 hours ago

                                                                                                                                                                                  That seems a tad reductionist. Why not just say the iPhone was completely inconsequential because after all it's simply another "computer"? Why not go back even further and start the timer at the first physical implementation of a Turing machine?

                                                                                                                                                                                  The growth in tech in the subsequent years can be directly traced to the iPhone's killer UX + App Store release.

                                                                                                                                                                                  • m4rtink 18 hours ago

                                                                                                                                                                                    I think it would have happened regardless - late Symbian from Nokia was pretty close and Maemo was already a thing with N900 not that far off in the future, not to mention Android.

                                                                                                                                                                                    We might possibly have been better off, actually, without Apple's walled-garden abominations and user device lockdowns being dragged into the mainstream.

                                                                                                                                                                                    • eichin 16 hours ago

                                                                                                                                                                                      As someone who worked for Nokia around the iPhone launch (on map search, not phones directly) - I also wanted to believe this at the time. But in retrospect, it feels like what actually mattered was that capacitive multi-touch screens were the only non-garbage interface, and only Apple bought FingerWorks...

                                                                                                                                                                                      Not clear that this is a helpful interpretation, other than "we're in the primordial ooze stage and the thing that matters will be something none of the current players have", but that's hard to take to the bank :-)

                                                                                                                                                                          • mirekrusin 20 hours ago

                                                                                                                                                                            This article seems to have "basically zero" content.

                                                                                                                                                                            Today you have to be blind to not see the change that is coming.

                                                                                                                                                                            World has its own (massive) inertia, burocracy present in businesses accounting for a big part in it.

                                                                                                                                                                            AI itself is moving fast, but not at infinite speed. We're starting to have good-enough tooling, but it's not yet available to everyone, and it still hangs on too many hacks that will need to crystallize. People have a lot of mess to sort out in their projects before they can take full advantage of AI tooling - in general everybody has to do bottom-up cleanup and documentation of all their projects, set up skills and whatnot, and that's assuming their corp is ok with it and not blocking it, and "using ai" doesn't just mean "you can copy paste code to/from copilot 365".

                                                                                                                                                                            As people say - something changed around Dec/Jan. We're only now going to start seeing noticeable changes, and the changes themselves will start speeding up as well. But it all takes time.

                                                                                                                                                                            • sjaiisba 19 hours ago

                                                                                                                                                                              > As people say - something changed around Dec/Jan

                                                                                                                                                                              Yes, Anthropic decided they wanted to IPO and got the hype machine in full swing.

                                                                                                                                                                              Don’t get me wrong LLMs are here to stay but how we’re currently using them is likely going to change a lot. Stuff like this:

                                                                                                                                                                              > in general everybody has to do bottom up cleanup and documentation of all their projects, setup skills and whatnot and that's assuming their corp is ok with it, not blocking it

                                                                                                                                                                              Is not needed to get a lot out of AI, and is mostly snake oil. Integrating them with actionable feedback is, but that takes a lot of time and rethinking of some existing systems.

                                                                                                                                                                              I don’t like the Internet analogy cause that’s like producing a new raw material, but AI is gonna be like Excel eventually (one of the most important pieces of software in the world).

                                                                                                                                                                              • dvt 20 hours ago

                                                                                                                                                                                We're still 6-12+ months away from a "killer" AI product. OpenClaw showed what's possible-ish, but it breaks half the time, eats tokens like crazy, and can leak all kinds of secrets. Clearly there's potential there, and a lot of people are working on products in the AI space (myself included), but anyone that's seriously tried to wrangle these models will agree with the reality that it's very hard to reliably get them to do what you want them to do.

                                                                                                                                                                                • DrewADesign 18 hours ago

                                                                                                                                                                                  > We're still 6-12+ months away from a "killer" AI product. OpenClaw showed what's possible-ish, but it breaks half the time, eats tokens like crazy, and can leak all kinds of secrets.

                                                                                                                                                                                  If you replace OpenClaw with any number of other hot LLM products/projects, I’ve been hearing that same exact sentiment for numerous 6-to-12-month periods. I’d argue we have no idea how long it’s going to be, but it’s probably not very soon.

                                                                                                                                                                                  • slopinthebag 18 hours ago

                                                                                                                                                                                    We're only 5 years away from fusion energy!

                                                                                                                                                                                    • defrost 18 hours ago

                                                                                                                                                                                      On a yearly average we've always been 8.317 light-minutes away from fusion energy.

                                                                                                                                                                                      • slopinthebag 18 hours ago

                                                                                                                                                                                        I actually wonder what will come first, fusion or AGI.

                                                                                                                                                                                        Or will AGI build the fusion energy?

                                                                                                                                                                                        • FrancisMoodie 7 hours ago

                                                                                                                                                                                          Yes, and AGI will cure cancer, Alzheimer's, Huntington's and all other diseases known to man... by killing us all. /scifi

                                                                                                                                                                                  • burgerone 20 hours ago

                                                                                                                                                                                    It's not that the technology is not there yet, it's all the ethical concerns and the mental barrier that nobody wants to spend their day begging AI for solutions.

                                                                                                                                                                                    • ipaddr 20 hours ago

                                                                                                                                                                                      Nothing changed in Dec/Jan. Everything changed in 2023 with someone's first OpenAI chat, and things are slowly getting adopted into everything, with high, marginal and negative benefits.

                                                                                                                                                                                      Things are actually slowing down. And society will still see AI adding little to next year's report. The costs still outweigh the benefits.

                                                                                                                                                                                      • geraneum 20 hours ago

                                                                                                                                                                                        > This article seems to have "basically zero" content.

                                                                                                                                                                                        Why? It’s descriptive of the “past”, while you’re trying to predict the near/far “future” and project your assumptions. Two different things.

                                                                                                                                                                                        • gaigalas 20 hours ago

                                                                                                                                                                                          Change is always coming. It's cute when someone thinks this time it's going to be special.

                                                                                                                                                                                          • __loam 20 hours ago

                                                                                                                                                                                            Can't even spell bureaucracy while you're making big predictions like this.

                                                                                                                                                                                            • mirekrusin 13 hours ago

                                                                                                                                                                                              Now you know it wasn't written by a bot.

                                                                                                                                                                                            • staplers 20 hours ago

                                                                                                                                                                                              > the change that is coming.

                                                                                                                                                                                              Everything you argue reinforces that net output was still basically zero last year. I don't see them talking about 2026 data.
                                                                                                                                                                                            • pluto_modadic 21 hours ago

                                                                                                                                                                                              Why do I have a feeling that this will be ignored as biased by the people who need to read it the most?

                                                                                                                                                                                              • brokencode 20 hours ago

                                                                                                                                                                                                Why do I get the feeling that AI skeptics will treat it as definitive and irrefutable proof that they were right all along, even though it’s one data point in an industry that hasn’t even been around for 5 years?

                                                                                                                                                                                                • albatross79 19 hours ago

                                                                                                                                                                                                  You're right, it is tempting to dunk on AI boosters every time an article like this comes out and puts a damper on their sci fi fan fiction fantasies. There's just something about a grown person getting all excited like a child that makes it really satisfying.

                                                                                                                                                                                                  • brokencode 19 hours ago

                                                                                                                                                                                                    You must have a really bleak view on life to think an adult should never get excited like a child.

                                                                                                                                                                                                    Adult life doesn’t have to be boring drudgery, you know. I mean, it mostly is, but the rare moments of childlike joy and excitement are some of the best parts.

                                                                                                                                                                                                    As far as putting a damper on anything, nope, it doesn’t. And it never will.

                                                                                                                                                                                                    The people excited about AI are excited because of the impacts they see on their own jobs and daily lives. We don’t care what Goldman Sachs has to say about productivity.

                                                                                                                                                                                                    • albatross79 19 hours ago

                                                                                                                                                                                                      It's not bleak, just more mature.

                                                                                                                                                                                                      • brokencode 18 hours ago

                                                                                                                                                                                                        Nope it’s bleak. Try to enjoy life.

                                                                                                                                                                                                • ohyoutravel 20 hours ago

                                                                                                                                                                                                  It’s a grift being perpetuated by the folks at the top, who then sweep along the folks under them in their slipstream, and so on. The folks who “need to hear this” are helpless to go against it and so can’t back down, and the folks who don’t need to hear this because they’re driving it have their paychecks aligned to it, so they’re not backing down either.

                                                                                                                                                                                                • areoform 17 hours ago

                                                                                                                                                                                                  I think analyses like these are motivated reasoning. In 2000, I'm sure you could have said that, after infrastructure costs, the internet and the web added "basically zero" to US economic growth. And there were people saying that!

                                                                                                                                                                                                  Someone I deeply respect, Clifford Stoll, wrote a book called “Silicon Snake Oil: Second Thoughts on the Information Highway” in 1995. And while he was and is a brilliant person, Stoll was wrong.

                                                                                                                                                                                                  Smart people are terrible at predicting the most consequential changes in our future – even when they're familiar with the technology. I wrote a bit about my thesis why here, https://1517.substack.com/p/inside-v-outside-context-problem...

                                                                                                                                                                                                  Don't make his mistake. Don't look away from the change being wrought. The world has changed, and our history now has a new, sharp dividing chapter: "Before ChatGPT | After ChatGPT"

                                                                                                                                                                                                  and that chapter will go down right next to "Before Trinity | After Trinity"; "Before PC | After PC"; "Before 'Internet' | After 'Internet'"†

                                                                                                                                                                                                  † Yes, I know I'm referring to the Web. But we're still using the dark fiber from the .com boom.

                                                                                                                                                                                                  • wefzyn 19 hours ago

                                                                                                                                                                                                    Andrej Karpathy said that major revolutions like the Internet, smartphones, and AI often don’t show up clearly in GDP statistics, even when they radically change how people work. GDP measures total spending, not productivity or usefulness. These revolutions improved efficiency and quality of life, but GDP mostly continued along its long-term trend.

                                                                                                                                                                                                    See his interview in Dwarkesh's podcast: https://www.youtube.com/watch?v=c0-0gGdDJyE&t=4983s

                                                                                                                                                                                                    • bwestergard 19 hours ago

                                                                                                                                                                                                      "These revolutions improved efficiency and quality of life"

                                                                                                                                                                                                      I'm genuinely not sure. We are all computer people in this forum, so it may have improved our lives. But for many people, information technology has lessened the time spent in a given week or year on activities they find meaningful.

                                                                                                                                                                                                      • wefzyn 19 hours ago

                                                                                                                                                                                                        True. The point remains the same. GDP isn’t measuring “meaningfulness,” and it also doesn’t measure “stress” very well either. Tech can change daily life massively in either direction without moving GDP much.

                                                                                                                                                                                                      • mlnj 19 hours ago

                                                                                                                                                                                                        Besides, AI has barely started to be productive for developers. The rest of the workforce is still untouched for the most part. The tools that assist the bulk of knowledge work out there are still in the works.

                                                                                                                                                                                                      • tabs_or_spaces 16 hours ago

                                                                                                                                                                                                        Feels like this is very hard to measure, isn't it?

                                                                                                                                                                                                        Are we saying that LLMs have added zero economic growth, or are we saying that the sum of winners and losers in LLM usage is zero or less than zero?

                                                                                                                                                                                                        I think there are many examples of LLMs producing winners, or maybe this signal is just very strong in the tech space.

                                                                                                                                                                                                        But maybe there's not enough reporting on the losers in LLMs at the moment? (E.g. did LLMs displace their jobs, did they have LLM use cases that failed, etc.)

                                                                                                                                                                                                        • cadamsdotcom 13 hours ago

                                                                                                                                                                                                          A motivated entity publishing a bombastic opinion. Hard to know what to make of this. If you believe claims about an economy they have a vested interest in affecting, better be sure they’re trustworthy.

                                                                                                                                                                                                          Opinions masquerading as statistics are why governments create departments to publish trustworthy statistics.

                                                                                                                                                                                                          • bjacobel 5 hours ago

                                                                                                                                                                                                            What makes the statistics published by the government inherently more trustworthy?

                                                                                                                                                                                                          • aaronbrethorst 19 hours ago

                                                                                                                                                                                                            I have an alternative explanation: for the areas where AI is giving employees serious productivity gains, they're working for 20 minutes, playing wordle/resting/relaxing for 7 hours, 40 minutes, and delivering exactly as much as they were before.

                                                                                                                                                                                                            • mym1990 19 hours ago

                                                                                                                                                                                                              It's partly this, but hardly any developers I know were writing code for 8 hours a day. 4 on a good day, and the rest was meetings or other auxiliary activities. From what I have gathered, companies have no idea how to measure the productivity gains yet.

                                                                                                                                                                                                              • OneMorePerson 13 hours ago

                                                                                                                                                                                                                When you look into the edge cases, developer productivity is really tough to understand. It's easy, as the engineer yourself, to see your own productivity as easy to understand, but if you are ever in the position of trying to assess the productivity of someone you don't work with day to day, it's really difficult. There are people who are able to achieve millions in yearly savings with like 10 lines of code updated per year, perf-debugging types. I'd never have believed that up front if I hadn't seen it after the fact.

                                                                                                                                                                                                              • OneMorePerson 13 hours ago

                                                                                                                                                                                                                This is something people always claim, but this explanation assumes that everyone is "colluding". The guy who wants to get a promotion and get paid 50% more cause he has 3 kids isn't going to play along, he is going to at least put in 4 hours a day of real work, and then everyone else is going to look 20x less productive. There are teams where people are mailing it in so hard that they will find ways to kick out the hard workers, but this doesn't exist across entire orgs/companies/industries.

                                                                                                                                                                                                                • lattalayta 19 hours ago

                                                                                                                                                                                                                  I've heard another version of this: AI can speed up some tasks, but employees still need to wait for meetings, approvals, other users to chime in, etc., so the net effect isn't as pronounced.

                                                                                                                                                                                                                  • bilsbie 19 hours ago

                                                                                                                                                                                                                    Incentives matter

                                                                                                                                                                                                                    • bdangubic 16 hours ago

                                                                                                                                                                                                                      This would check out at companies where you could “coast” without any oversight and were just randomly estimating some “points” on your “stories” or whatever BS process is in place.

                                                                                                                                                                                                                      On real projects and real teams you can't do this. If you did what you did in your example, you'd log, say, 8 hrs of work, right? Your team lead will ask a simple question: "did you write this code or was it AI-assisted? and what exactly here took 8 hrs?" So you could do this once, and the 2nd time you'd be changing your status on LinkedIn to looking for work.

                                                                                                                                                                                                                    • maxdo 18 hours ago

                                                                                                                                                                                                                      There is literally a slew of companies that went in 1 year from mid-size business to multi-billion ARR.

                                                                                                                                                                                                                      And yeah, blah blah they burn money blah blah. Check the Anthropic CEO's interviews. He openly describes the balancing problem: the cost of training a new model, the ratio of newly built infra for training vs. inference, and market adoption, which despite being extremely quick is not unlimited, since even the market is not unlimited.

                                                                                                                                                                                                                      Essentially it's a tricky balance: if you don't invest today you will lose tomorrow, versus if you invest too much you go bankrupt next year.

                                                                                                                                                                                                                      • interestpiqued 18 hours ago

                                                                                                                                                                                                                        The problem is, part of their whole appeal is being on the "frontier" of model development. So what happens when scaling gets too expensive or we reach some sort of end state for LLMs? They will lose their differentiation and it seems like a non negligible chance their pricing power collapses. The entire reason they make so much money is because they spend so much to be on the frontier.

                                                                                                                                                                                                                        • viking123 16 hours ago

                                                                                                                                                                                                                          This is the same CEO who is selling a room full of old boomers on the idea that AI will double their lifespan? Oh yeah, it's that guy.

                                                                                                                                                                                                                        • roncesvalles 16 hours ago

                                                                                                                                                                                                                          AI has tremendously improved quality of life (both personal and professional) but I just don't see it translating to increased economic productivity.

                                                                                                                                                                                                                          • sillyfluke 21 hours ago

                                                                                                                                                                                                                            Bottom line, no one's buying your vibeslop when they can create and maintain their own for their custom needs. And if we're not buying each other's vibeslop, there's no productivity to be measured in the economy.

                                                                                                                                                                                                                            With all this recent Claw stuff, it's weird that, as people who due to our field of study or industry should be championing the opposite, some of us are now pushing a method of automation that is akin to robo vacuums randomly tracking dogshit across the carpet.

                                                                                                                                                                                                                            In my working environment, people get dressed down for repeatedly communicating incorrect information. If they do it repeatedly in an automated fashion, they will be publicly shamed if they are senior enough.

                                                                                                                                                                                                                            I have no idea what benefit a human-in-the-loop has for sending automatically generated emails, or agent-generated SDKs or building blocks, when there is no guarantee or even a probability of correctness attached to the result. The effort of validating and editing a generated email can be equal to or greater than that of manually writing a regular email, let alone one of certain complexity or significance.

                                                                                                                                                                                                                            And what do we do to try to guarantee a semblance of correctness? We add another layer of automated validation performed by, you guessed it, the same crew of wacky fuzzy operators that can inject correct-sounding gibberish into business workflows at any moment.

                                                                                                                                                                                                                            It's almost like trying to build a house of cards faster than the speed with which it is collapsing. There seems to be a morbid fascination among even the best of us with how far things can be taken until this way forward leads to some indisputable catastrophe.

                                                                                                                                                                                                                            • ekjhgkejhgk 20 hours ago

                                                                                                                                                                                                                              > a method of automation that is akin to robo vacuums randomly tracking dogshit across the carpet.

                                                                                                                                                                                                                              Is it possible that this sort of problem will be fixed? Hypothetically, what would happen in a scenario where one of these apps can do in 1 hr the work that would take a developer a month, reliably? Or is your premise that will NEVER happen?

                                                                                                                                                                                                                              • sillyfluke 20 hours ago

                                                                                                                                                                                                                                The same underlying magic that enables LLMs to be faster than a brute-force SQL query on all the world's data while producing "good enough" results appears to be the very thing that is creating hallucinations and finite context windows, i.e. there is no free lunch. It seems to be the theory of many in the field (Ilya included?) that the obstacle might not be overcome without an LLM-level breakthrough in AI research, or, maybe more likely, a breakthrough in hardware. Big tech until at least recently seems to have thought they could brute-force it with energy (nuclear). But who's paying?

                                                                                                                                                                                                                                • keybored 20 hours ago

                                                                                                                                                                                                                                  No need to stress out over us rank and file answering that question. An entire economy is boiling based on it.

                                                                                                                                                                                                                              • mtct88 20 hours ago

                                                                                                                                                                                                                                I think it’s still a bit too early to draw the conclusion.

                                                                                                                                                                                                                                We need to get past the hype first and let the cash grabbers crash.

                                                                                                                                                                                                                                After that, with a clear mind we can finally think about engineering this technology in a sane and useful way.

                                                                                                                                                                                                                                • gdulli 20 hours ago

                                                                                                                                                                                                                                  What about social media, did that evolve into something sane and useful or has it remained owned by the cash grabbers? Have we not yet internalized that they've permanently captured control of technological advances?

                                                                                                                                                                                                                                • mgh2 20 hours ago

                                                                                                                                                                                                                                  Trickle-down effect reversal: > "A lot of the AI investment that we're seeing in the U.S. adds to Taiwanese GDP, and it adds to Korean GDP, but not really that much to U.S. GDP"

                                                                                                                                                                                                                                  • user____name 19 hours ago

                                                                                                                                                                                                                                    There really need to be better metrics about the state of an economy than GDP.

                                                                                                                                                                                                                                    • HardCodedBias 20 hours ago

                                                                                                                                                                                                                                      I think this is key:

                                                                                                                                                                                                                                      "On top of that, there is currently no reliable way to accurately measure how AI use among businesses and consumers contributes to economic growth."

                                                                                                                                                                                                                                      No doubt people are using it at work ( https://www.gallup.com/workplace/701195/frequent-workplace-c... ); the question is how much productivity results and to whom it accrues.

                                                                                                                                                                                                                                      Partially this is AI capability (both today and in the past), partially this is people taking time to change their tools.

                                                                                                                                                                                                                                      • yunnpp 19 hours ago

                                                                                                                                                                                                                                        The cover image is just too good. It's just way too good.

                                                                                                                                                                                                                                        • erelong 19 hours ago

                                                                                                                                                                                                                                          The article refutes itself by saying it's difficult to measure the impact on GDP (and thus, by this logic, it would have to take a neutral stance on the impact of AI).

                                                                                                                                                                                                                                          • pier25 19 hours ago

                                                                                                                                                                                                                                            The article is reporting what someone from Goldman Sachs said in an interview.

                                                                                                                                                                                                                                          • qgin 20 hours ago

                                                                                                                                                                                                                                            The most interesting thing about this is that the underlying economy is actually stronger than people realize. The narrative has been that AI data center construction was propping up an otherwise weak economy. If this analysis is true, then the economy wasn't being propped up by data center construction. The strength was just ordinary, normal strength.

                                                                                                                                                                                                                                            I have no doubt that people will use this to grind an axe about how they think AI is dumb in general, but I feel like that misses the point that this is mostly about data center construction contributing to GDP.

                                                                                                                                                                                                                                            • Gigachad 20 hours ago

                                                                                                                                                                                                                                              The US economy is remarkably resilient considering it's withstood a year of sabotage from the top down.

                                                                                                                                                                                                                                              • vachina 20 hours ago

                                                                                                                                                                                                                                                The top don’t run the show. Tells you how much value they provide.

                                                                                                                                                                                                                                                • tempodox 4 hours ago

                                                                                                                                                                                                                                                  The amount of damage that was done in just a year says otherwise.

                                                                                                                                                                                                                                                  • kmeisthax 17 hours ago

                                                                                                                                                                                                                                                    Alternatively: the companies at the top paid the necessary bribes (e.g. $100k H-1B sponsorships) and got to continue on with business as usual. The people at the bottom are the ones who can't pay the bribe and are thus hurting.

                                                                                                                                                                                                                                              • cmiles8 19 hours ago

                                                                                                                                                                                                                                                The AI bros are saying everyone will be out of work in 5 years.

                                                                                                                                                                                                                                                Economists and businesses are calling BS and saying AI is cool, but basically adding zero measurable value with 95% of AI projects failing.

                                                                                                                                                                                                                                                The truth is likely somewhere in the middle, but it seems unlikely this bubble can continue much longer.

                                                                                                                                                                                                                                                • tempodox 4 hours ago

                                                                                                                                                                                                                                                  The duration of this bubble so far goes to show how incredibly rich the investors are: burning trillions of dollars over the years in a wild speculation that they will be able to put everyone else out of work and then have the power to decide who will be allowed to live and who will have to die in the fight over the last breadcrumbs. In the end they will be the ones who can afford to buy private armies to protect themselves from the hungry masses.

                                                                                                                                                                                                                                                • PlatoIsADisease 17 hours ago

                                                                                                                                                                                                                                                  After using OpenClaw for 1 week, I'm so extremely bullish.

                                                                                                                                                                                                                                                  Buy buy buy buy.

                                                                                                                                                                                                                                                  We don't even have enough data centers.

                                                                                                                                                                                                                                                  • jibal 19 hours ago

                                                                                                                                                                                                                                                    This is an abbreviated version of a far more nuanced WaPo article:

                                                                                                                                                                                                                                                    https://www.washingtonpost.com/technology/2026/02/23/ai-econ...

                                                                                                                                                                                                                                                    • Madmallard 20 hours ago

                                                                                                                                                                                                                                                      Yet the job situation for software developers in the United States is borderline terminal. Interesting.

                                                                                                                                                                                                                                                      • co_king_5 20 hours ago

                                                                                                                                                                                                                                                        COVID and "AI" lowered the threshold of acceptable service to the extent that software vendors are making offshoring attempts again.

                                                                                                                                                                                                                                                      • phendrenad2 21 hours ago

                                                                                                                                                                                                                                                        I'm sure we can find stories from the 1980s and 1990s about how the "world wide web" hasn't increased the GDP at all.

                                                                                                                                                                                                                                                        • sib 20 hours ago

                                                                                                                                                                                                                                                          Given that the first communication between a web server and client was in December 1990 (and that was private to Tim B-L's environment), and it was released to the public in 1991, I bet we actually couldn't find such stories in the 1980s :)

                                                                                                                                                                                                                                                          • trimethylpurine 20 hours ago

                                                                                                                                                                                                                                                            I assume you mean the technology, not the www (which didn't exist). And until around the second half of the 90s those papers were right. Most papers you'll find arguing that it wasn't contributing much to productivity were saying just that, that it wasn't, not that it never would. At the time, they were right. Productivity had stagnated despite heavy spending on technology.

                                                                                                                                                                                                                                                            But now we have something else happening. It's hard to find an application for something that makes a lot of mistakes. That's not the same issue. The issue then was that no one had written the software yet. Everyone knew what software needed writing. The future was obvious. Here, not so much. We can't see how to make it not make mistakes.

                                                                                                                                                                                                                                                            We have to hope someone will come up with a solution to that. Otherwise their big bets on something non-productive won't pan out the same way that the computer did, and we're all going to suffer for it.

                                                                                                                                                                                                                                                          • zombot 12 hours ago

                                                                                                                                                                                                                                                            When the high priests of capitalism say so themselves, I tend to believe them. Nice to see that not everybody has completely lost their mind under the endless barrage of hype, shilling and astroturfing.

                                                                                                                                                                                                                                                            • deadbabe 18 hours ago

                                                                                                                                                                                                                                                              Anyone want to speculate on the Post-AI Bubble world?

                                                                                                                                                                                                                                                              When companies can no longer afford to just keep running AI data centers at a loss, we will suddenly have a lot more data centers than we need, who will benefit from these? Who could have use for the hardware for other purposes?

                                                                                                                                                                                                                                                              • diedyesterday 14 hours ago

                                                                                                                                                                                                                                                                1. The initial stage of AI adoption will be more of a shift (a plus here, a minus there), a reorientation and restructuring, than net exponential growth.

                                                                                                                                                                                                                                                                2. Also, a good portion of AI's contribution isn't captured by metrics like GDP. I have seen an explosion of FOSS projects, especially in my own area, and I'm sure the same is true for many other areas.

                                                                                                                                                                                                                                                                3. Also, there will be a sink-like effect in AI's impact on net wealth production. Like Earth's own oxygenation event, where it took billions of years for the produced oxygen to ultimately find its way into the atmosphere, after saturating all the sinks (like Earth's iron reserves, turning them into iron ore).

                                                                                                                                                                                                                                                                4. Also, I'm not sure what exactly is being counted as AI's contribution to the economy. Are the data-center build-outs, the growth of chip companies, etc. included in this metric in AI's favor? ...

                                                                                                                                                                                                                                                                • christkv 18 hours ago

                                                                                                                                                                                                                                                                  I mean, for me I compare AI today with the introduction of the Apple II. It promises a lot and can do some awesome things, but we are still at the beginning. Also, I am amazed how quickly people just got used to AI. It's still magical, and 5 years ago this was science fiction that people did not think was possible.

                                                                                                                                                                                                                                                                  • deterministic 20 hours ago

                                                                                                                                                                                                                                                                    I completely agree. If AI can't do 100% of a job then you can't remove the job.

                                                                                                                                                                                                                                                                    And most jobs that can be automated have already been automated using traditional software.

                                                                                                                                                                                                                                                                    • _aavaa_ 19 hours ago

                                                                                                                                                                                                                                                                      If AI does 90% of the work, you can either do more work with your current staff, or fire a portion and have them do the same amount of work.

                                                                                                                                                                                                                                                                      • nanobuilds 19 hours ago

                                                                                                                                                                                                                                                                        Remove the job... but have one super-skilled coordinator managing and teaching agents the last 10% (or doing that 10% of the job).

                                                                                                                                                                                                                                                                        • singpolyma3 19 hours ago

                                                                                                                                                                                                                                                                          A lot of jobs that could be automated haven't been, either because it's not worth it, because the people with domain knowledge can't imagine automating them, or because of other related problems.

                                                                                                                                                                                                                                                                          I'm not sure if LLMs will change that or not

                                                                                                                                                                                                                                                                          • codexon 20 hours ago

                                                                                                                                                                                                                                                                            You can replace it with a much lower paid employee though.

                                                                                                                                                                                                                                                                            • loloquwowndueo 20 hours ago

                                                                                                                                                                                                                                                                              A lower paid and less qualified employee won’t be able to spot when the AI screws up.

                                                                                                                                                                                                                                                                              Having a higher-paid, qualified employee supervise multiple AIs, since the human only needs to watch for mistakes - maybe.

                                                                                                                                                                                                                                                                              • codexon 20 hours ago

                                                                                                                                                                                                                                                                                I'm not sure that's entirely true. For most things, checking whether a solution is correct is much easier than implementing it (the page looks wrong, you can't log in, etc...).

                                                                                                                                                                                                                                                                                • loloquwowndueo 18 hours ago

                                                                                                                                                                                                                                                                                  You’re looking at the end result, I’m looking at implementation. Engineering management, not QA.

                                                                                                                                                                                                                                                                              • qudat 20 hours ago

                                                                                                                                                                                                                                                                                You definitely cannot. Code org, architecture, and system design are senior level roles and responsibilities.

                                                                                                                                                                                                                                                                                • codexon 20 hours ago

                                                                                                                                                                                                                                                                                  AI is already aware of the best practices. It does not just blindly do what you ask of it in the simplest way.

                                                                                                                                                                                                                                                                                  • saulpw 20 hours ago

                                                                                                                                                                                                                                                                                    Best practices are always situation dependent.

                                                                                                                                                                                                                                                                                    • codexon 20 hours ago

                                                                                                                                                                                                                                                                                      Claude code will prompt you and explain to you what practice fits a situation. It might not do it perfectly, but the foundations are there.

                                                                                                                                                                                                                                                                                • singpolyma3 19 hours ago

                                                                                                                                                                                                                                                                                  That's not growth. Growth is having the existing employee do more.

                                                                                                                                                                                                                                                                                  • codexon 19 hours ago

                                                                                                                                                                                                                                                                                    I'm not arguing about growth. I was addressing this statement which seems to presume that AI has no effect if the job can't be removed.

                                                                                                                                                                                                                                                                                    > If AI can't do 100% of a job then you can't remove the job.

                                                                                                                                                                                                                                                                              • keybored 20 hours ago

                                                                                                                                                                                                                                                                                Note last year. The vibes coming from the Claude dungeons tell a different story. Just in the last six weeks. We are on the precipice.

                                                                                                                                                                                                                                                                                • thomasfromcdnjs 20 hours ago

                                                                                                                                                                                                                                                                                  I've been using Claude Code to write my own GPT model from absolute scratch in TypeScript, with C code it generates for the GPU. Anytime it wants to use CUDA or some lib to do things faster, I can keep telling it to write it in TypeScript or C etc. Lots of fun, and it actually works lol

                                                                                                                                                                                                                                                                                  • co_king_5 20 hours ago

                                                                                                                                                                                                                                                                                    ^This. Claude is very rapidly approaching AGI.

                                                                                                                                                                                                                                                                                    Opus 4.6 is SPECIAL. Nothing like other models. This is a new breed of intelligence.

                                                                                                                                                                                                                                                                                    I give it 18-24 months until we see a full-scale societal transformation.

                                                                                                                                                                                                                                                                                    • boxedemp 19 hours ago

                                                                                                                                                                                                                                                                                      Just so we're on the same page, you don't think that an LLM is going to achieve AGI, right? Like, you're thinking some sort of combination of world models, LLMs, visual models, and others. Right?

                                                                                                                                                                                                                                                                                      • albatross79 19 hours ago

                                                                                                                                                                                                                                                                                        I give it until the next model that you'll be proclaiming how NOW IT'S THE REAL THING FOR SURE THIS TIME GUYS.