• mjr00 an hour ago

    > Microsoft’s AI CEO is saying AI is going to take everybody’s job. And Sam Altman is saying that AI will wipe out entire categories of jobs. And Matt Shumer is saying that AI is currently like Covid in January 2020—as in, “kind of under the radar, but about to kill millions of people”.

    > I legitimately feel like I am going insane when I hear AI technologists talk about the technology. They’re supposed to market it. But they’re instead saying that it is going to leave me a poor, jobless wretch, a member of the “permanent underclass,” as the meme on Twitter goes.

    They are marketing it. The target customer isn't the user paying $20 for ChatGPT Pro, though; the customers are investors and CEOs, and their marketing is "AI is so powerful and destructive that if you don't invest in AI, you will be left behind." FOMO at its finest.

    • Frost1x an hour ago

      Tech has slowly been moving that way anyways. In terms of ROI, you’re often much better off targeting whales and large clients than trying to become the ubiquitous market service for consumers. Competition is fierce and people are poor comparatively, so you need the volume for success.

      Meanwhile if you go fishing for niche whales, there’s less competition and much higher ROI when they buy. That’s why a lot of tech isn’t really consumer friendly: it’s not really targeting consumers, it’s targeting other groups that extract wealth from consumers in other ways. You’re selling it to grocery stores because people need to eat, the stores have the revenue to pay you, and they see the value proposition of dynamic pricing on consumers and all sorts of other things. You’re marketing it for analyzing civilians’ communications to prying governments that want more control. You’re selling it to employers who want to minimize labor costs and maximize revenue, because they often have millions or billions, and small industry monopolies exist all around. Just find your niche whales to go hunting for.

      And right now I’d say a lot of people in tech are happy to implement these things, but at some point it’s going to bite you too. You may be helping build dynamic pricing for Kroger because you shop at Aldi, but at some point all of this will affect you as well, because you’re also a laboring consumer.

      • braebo 20 minutes ago

        It’s capitalism moving everything that way. Always has been and will continue to until we’re all hooked up to tubes paying taxes with ectoplasm.

      • AstroBen an hour ago

        The marketing is clearly affecting individual developers, too. There's a mass psychosis happening

        • mjr00 an hour ago

          Maybe. I'm actually a big fan of Claude/Codex and use them extensively. The author of the article says the same.

          > To be clear: I like and use AI when it comes to coding, and even for other tasks. I think it’s been very effective at increasing my productivity—not as effective as the influencers claim it should be, but effective nonetheless.

          It's hard to get measured opinions. The most vocal opinions online are either "I used 15 AI agents to vibe code my startup, developers are obsolete" or "AI is completely useless."

          My guess is that most developers (who have tried AI) have an opinion somewhere between these two extremes; you just don't hear from them, because that's not how the social media world works.

          • dgxyz an hour ago

            Well, I've just watched two major projects fail that were running mostly on faith, because someone read too many "I used 15 AI agents to vibe code..." blog posts and sold it to management. The promoters have a deep technical understanding of our problem domain, but little understanding of what an LLM can achieve or understand about the problem at hand.

            Yes you can indeed vibe code a startup. But try building on that or doing anything relatively complicated and you're up shit creek. There's literally no one out there doing that in the influencer-sphere. It's all about the initial cut and MVP of a project, not the ongoing story.

            The next failure is replacing a 20-year-old legacy subsystem of 3 MLOC with a new React / microservices thing. This has been sold to the directors as something we can do in 3 months with Claude. Project failure number three.

            The only reality is no one learns or is accountable for their mistakes.

            • DrewADesign 41 minutes ago

              Rather than making a good product that’s useful to the world, the goal of current startups seems to be milking VCs who are desperately searching for the new version of the mobile phone revolution that will make this all ok… so it seems like they’re accomplishing their goal?

              I reckon the reason the VC rhetoric has reached running-hair-dye-Giuliani-speech level absurdity isn’t because they’re trying to convince other people— it’s because they’re trying to convince themselves. I’d think it was funny as hell if my IRA wasn’t on the line.

              • dgxyz 37 minutes ago

                I think no one cares about the truth or building something good any more. It's a meme economy. Tell a story and the numbers go up. Until they don't.

                Yes my pension is probably going down the same sinkhole with your IRA. Good luck. We need it.

                • ass22 27 minutes ago

                  It's always been like this, but for a while you had people like Steve Jobs to hold people like Bill Gates accountable. He long referred to MSFT as the McDonald's of tech in relation to the stuff they produced: very pedestrian.

              • eckesicle 42 minutes ago

                My experience has been a mixed bag.

                AI has led us into a deep spaghetti hole in one product where it was allowed free rein. But when applied to localised contexts, a class at a time or so, it's really excellent and productivity explodes.

                I mostly use it to type out implementations of individual methods after it has suggested interfaces that I modify by hand. Then it writes the tests for me too very quickly.

                As soon as you let it do more though, it will invariably tie itself into a knot - all the while confidently asserting that it knows what it's doing.

                • dgxyz 40 minutes ago

                  On localised context stuff, yeah no. I spent a couple of hours rewriting something Claude did terribly a couple of weeks back. Sure it solved the problem, a relatively simple regression analysis, but it was so slow that it crapped out under load. Cue emergency rewrite by hand. 20s latency down to 18ms. Yeah it was that bad.

                • LouisSayers an hour ago

                  > I've just watched two major projects fail

                  This is an opportunity. You can have a good long career consulting/contracting for these types of companies.

                  • dgxyz an hour ago

                    Why do you think I work there!

                    Emergency clean up work is ridiculous money!

                • crystal_revenge 44 minutes ago

                  > "AI is completely useless."

                  This is a straw man. I don't know anybody who sincerely claims this, even online. However if you dare question people claiming to be solving impossible problems with 15 AI agents (they just can't show you what they're building quite yet, but soon, soon you'll see!), then you will be treated as if you said this.

                  AI is a superior solution to the problem Stack Overflow attempted to solve, and really great at quickly building bespoke, but fragile, tools for some niche problem you solve. However, I have yet to see a single instance of it being used to sustainably maintain a production code base in any truly automated fashion. I have, however, personally seen my team slowed down because code review is clogged with terribly long, often incorrect, PRs that are largely AI generated.

                  • surgical_fire an hour ago

                    I use both Claude and Codex (Claude at work, Codex at home).

                    They are fine, moderately useful here and there in terms of speeding up some of my tasks.

                    I wouldn't pay much more than 20 bucks for it though.

                  • dgxyz an hour ago

                    I think this is reality.

                    None of our much-promoted AI initiatives have resulted in any ROI. In fact they have cost a pile of cash so far and delivered nothing.

                    • molsongolden an hour ago

                      Many AI initiatives have had massive ROI though. The implementation problems are similar to any pre-AI tech rollout and hugely expensive non-AI tech implementations fail all the time.

                      • dgxyz an hour ago

                        Name one that has at least $200mn ROI over capital investment. Show me the balance sheet for it as well. And make sure that ROI isn't from suddenly not paying salaries.

                      • noosphr an hour ago

                        After spending nearly 5 years building software which uses AI agents on the back end I've come to the conclusion it's the PC revolution part 2.

                        Productivity gains won't show up on economic data and companies trying to automate everything will fail.

                        But the average office worker will end up with a much more pleasant job and will need to know how to use the models, just like they once needed to learn to use a PC.

                        • Ancalagon an hour ago

                          Are these botted comments or just sarcasm?

                      • crystal_revenge an hour ago

                        > There's a mass psychosis happening

                        There absolutely is but I'm increasingly realizing that it's futile to fight it.

                        The thing that surprises me is that people are simultaneously losing their minds over AI agents while almost no one is exploring playing around with what these models can really do.

                        Even if you restrict yourself to small, open models, there is so much unexplored around messing with the internals of these. The entire world of open image/video generation is pretty much ignored by all but a very narrow niche of people, but has so much potential for creating interesting stuff. Even restricting yourself only to an API endpoint, isn't there something more clever we can be doing than re-implementing code that already exists on github badly through vibe coding?

                        But nobody in the hype-fueled mind rot part of this space remotely cares about anything real being done with gen AI. Vague posting about your billion agent setup and how you've almost entered a new reality is all that matters.

                        • nosianu 7 minutes ago

                          > The thing that surprises me is that people are simultaneously losing their minds over AI agents while almost no one is exploring playing around with what these models can really do.

                          I think we all do???

                          Even if I'm not coding a lot, I use it every day for small tasks. There is not much to code in my job (IT in a small traditional-goods export business). The tasks range from deciphering coded EDI messages (D.96A as text or XML, for example) and summarizing a bunch of said messages (DESADV, ORDERSP, INVOIC), to finding missing items, Excel formula creation for non-trivial questions, and the occasional Python script, e.g. to concatenate data some supplier sent in a certain way.

                          AI is so strange because it is BOTH incredibly useful and incredibly random and stupid. As an example of the latter, see a comment I made earlier today in my history: the AI does not tell me when it uses a heuristic and does not produce an accurate result. EVERY result it shows me is presented as final and authoritative and perfect, even when, upon questioning, it suddenly "admits" that it actually skipped a few steps and that's not the correct final result.

                          Once AI gets some actual "I" I'm sure the revolution some people are commenting about will actually happen, but I fear that's still some way off. Until then, lots of sudden hallucinations and unexpected wrong results - unexpected because normal people believe the computer when it claims it successfully finished the task and presents a result as correct.

                          Until then it's daily highs and lows with little in between: either it brilliantly solves some task, or it fails, and the failure includes not telling you about it.

                          A junior engineer will at least learn, but the AI stays pretty constant in how it fails and does not actually learn anything. The maker providing a new model version is not the AI learning.

                          • svara 35 minutes ago

                            Yes, it's been odd to observe the parallels with the web3 craze.

                            You asked people what their project was for and you'd get a response that made sense to no one outside of that bubble, and if you pressed, people would get mad.

                            The bizarre thing is that this time around, these tools do have a bunch of real utility, but it's become almost impossible online to discuss how to use the tech properly, because that would require acknowledging some limitations.

                            • AstroBen an hour ago

                              There's a good reason for that. The end result of exploring what they can actually do isn't very exciting or marketable

                              "I shipped code 15% faster with AI this month" doesn't have the pull of a 47 agent setup on a mac mini

                            • co_king_5 an hour ago

                              What is the difference between mass psychosis and a very effective marketing scheme?

                              • thefilmore 40 minutes ago

                                > There's a mass psychosis happening

                                Any guesses on how long this lasts?

                                • AstroBen 36 minutes ago

                                  Best guess: a few months and it'll spread through dev communities that the effect is a lot more modest than the extreme claims are making it out to be

                                  6-12 months before non-technical leaders take notice and realize they can't actually fire half their team

                                  • layer8 36 minutes ago

                                    Until VC money runs out, probably.

                                  • verdverm an hour ago

                                    Ai psychosis or ai++ psychosis?

                                  • bubblewand an hour ago

                                    This is also what OpenAI’s “safety” angle was all about.

                                    “Ohhhh this is so scary! It’s so powerful we have to be very careful with it!” (Buy our stuff or be left behind, Mr. CEO, and invest in us now or lose out)

                                    • viccis an hour ago

                                      Anthropic has been the most histrionic about this, with their big blog post about how they need to make sure their models don't feel like they are being emotionally abused by the users being the most fatuous example.

                                      • biophysboy 13 minutes ago

                                        I missed this - which blog post?

                                        • SoftTalker 42 minutes ago

                                          I was taken aback when I recently noticed a co-worker thanking ChatGPT for its answer.

                                          • beowulfey 30 minutes ago

                                            LLMs talk like people; there is nothing wrong with this. It's perfectly fine to be nice to something even if it isn't human. It's why we don't go around kicking dogs for fun.

                                            I understand why people don't act polite to LLMs, but honestly I think not thanking them will make people act more dickish to other humans.

                                            • steveklabnik 32 minutes ago

                                              • layer8 33 minutes ago

                                                People were thanking Siri back 15 years ago; it’s just a reflex.

                                              • slowmovintarget an hour ago

                                                "This is obviously why only we can be trusted with operating these models, and require government legislation saying so."

                                                They're trying to get government to hand them a moat. Spoilers... There's no moat.

                                                • verdverm an hour ago

                                                  To me, Anthropic has done enough sketchy things to be on par with the players from Big Tech. They are not some new benevolent corporation backed by SV

                                                  Many users don't want to acknowledge this about the company making their fav ai

                                                • scrollop an hour ago

                                                  GPT-2: "too dangerous to release"

                                                  • qnleigh an hour ago

                                                    Oh funny, I forgot about that. But at the time it didn't seem unreasonable to withhold a model that could so easily write fake news articles. I'm not so sure it wasn't...

                                                • brabel an hour ago

                                                  Can confirm. We don’t know if AI really is about to make programmers who write code by hand obsolete, but we sure as hell fear our competitors will ship features 10x faster than us. What is the logical next step?? Invest lots of money in AI, or keep hoping it’s a fad and risk being left in the dust, even if you think that risk is fairly small?

                                                  • dgxyz an hour ago

                                                    Perhaps stop entering into saturated markets and using AI to try and shortcut your way to the moon?

                                                    There's no way any LLM code generator can replace a moderately complex system at this point and looking at the rate of progress this hasn't improved recently at all. Getting one to reason about a simple part of a business domain is still quite difficult.

                                                    • NitpickLawyer an hour ago

                                                      > and looking at the rate of progress this hasn't improved recently at all.

                                                      The rate of progress in the last 3 years has been beyond my expectations. The pace over the past year has picked up a lot. The last 2 months have been insane. No idea how people can say "no improvement".

                                                      • qnleigh an hour ago

                                                        Yeah not that long ago, there was concern that we had run out of training data and progress would stall. That did not happen at all.

                                                        • zozbot234 an hour ago

                                                          "My car is in the driveway, but it's dirty and I need to get it washed. The car wash is 50 meters away, should I drive there or walk?"

                                                          • votepaunchy an hour ago

                                                            Gemini flash tells me to drive: “Unless you have a very long hose or you've invented a way to teleport the dirt off the chassis, you should probably drive. Taking the car ensures it actually gets cleaned, and you won't have to carry heavy buckets of soapy water back and forth across the street.”

                                                            • dgxyz 43 minutes ago

                                                              Beep boop human thinking ... actually I never wash my car. They do it when they service it once every year!

                                                            • surgical_fire an hour ago

                                                              If your expectations were low, anything would have been over your expectations.

                                                              There was some improvement in terms of the ability of some models to understand and generate code. It's a bit more useful than it was 3 years ago.

                                                              I still think that any claims that it can operate at a human level are complete bullshit.

                                                              It can speed things up well in some contexts though.

                                                              • NitpickLawyer 19 minutes ago

                                                                > It's a bit more useful than it was 3 years ago.

                                                                It's comments like these that make me not really want to interact with this topic anymore. There's no way that your comment can be taken seriously. It's 99.9% a troll comment, or simply delusional. 3 years ago the model (gpt3.5, the only one out there basically) was not able to output correct code at all. It looked like python if you squinted, but it made no sense. To compare that to what we have today and say "a bit more useful" is not a serious comment. Cannot be a serious comment.

                                                            • sweetheart an hour ago

                                                              The recent developments of only the last 3 months have been staggering. I think you should challenge your beliefs on this a little bit. I don't say that as an AI fanboy (if those exist), it's just really, really noticeable how much progress has been made in doing more complex SWE work, especially if you just ask the LLM to implement some basic custom harness engineering.

                                                              • dgxyz 42 minutes ago

                                                                I'll let you know in 12 months when we have been using it for long enough to have another abortion for me to clean up.

                                                            • AstroBen 42 minutes ago

                                                              Why is it an all or nothing decision?

                                                              Do a small test: if you're 10x faster then keep going. If not, shelve it for a while and maybe try again later

                                                              • zibzob 12 minutes ago

                                                                It's not possible to tell if you're 10x faster, or even faster at all, over any non-trivial amount of time. When not using a coding agent, you make different decisions and get the task done differently, at a different level of architecture, with a different understanding of the code.

                                                            • parpfish an hour ago

                                                              something i wonder about with AI taking jobs --

                                                              similar to the ATM example in the article (and my experience with ai coding tools), the automation will start out by handling the easiest parts of our jobs.

                                                              eventually, all the easy parts will be automated and the overall headcount will be reduced, but the actual content of the remaining job will be a super-distilled version of 'all the hard parts'.

                                                              the jobs that remain will be harder to do and it will be harder to find people capable or willing to do them. it may turn out that if you tell somebody "solve hard problems 40hrs a week"... they can't do it. we NEED the easy parts of the job to slow down and let the mind wander.

                                                              • zozbot234 an hour ago

                                                                There's plenty of jobs like this already. They'll want to keep you around even if you're not doing much most of the time, because you can still solve the hard problems as they arise and grow organizational capital in other ways.

                                                              • SoftTalker an hour ago

                                                                So, what I don't get is, taking it to its logical conclusion, if AI takes all the jobs then who are your customers? Who will buy your stock? Who will buy the software that all the developers you used to employ used to write? How do these CEOs and investors see this playing out?

                                                                • cmiles8 an hour ago

                                                                  You’re not supposed to ask such logical questions. It kills the AI vibe.

                                                                • spamizbad 29 minutes ago

                                                                  Also: They've figured out they can "force" AI adoption top-down at many workplaces. They don't need to convince you or even your boss - they just need the C-suite to mandate it.

                                                                  • dv_dt an hour ago

                                                                    Saying it will take jobs is the marketing line to CEOs, even more than "you will be left behind."

                                                                    • hmmmmmmmmmmmmmm an hour ago

                                                                      Except entry level jobs are already getting wiped out.

                                                                      • zozbot234 an hour ago

                                                                        The one entry level job that's been wiped out for good by LLMs is human marketing copywriters, i.e. the people whose job was to come up with the kind of slop LLMs learned from. They're just rebranding as copyeditors now because AI can write the slop itself, or at least its first draft.

                                                                    • linguae an hour ago

                                                                      I’m also concerned about the continuing enshittification of software. Even without LLMs, we’ve had to endure slapdash software. Even Apple, which used to be perfectionistic, has slipped. I feel enshittification is a result of a lack of meaningful competition for many software products due to moats such as proprietary file formats and protocols, plus network effects. “Move fast and break things” software development methodologies don’t help.

                                                                      LLMs will help such teams move and break things even faster than before. I’m not against the use of LLMs in software development, but I’m against their blind use. However, when there is pressure to ship as fast as possible, many will be tempted to take shortcuts and not thoroughly analyze the output of their LLMs.

                                                                      • OptionOfT 9 minutes ago

                                                                        FOMO was literally built into Bitcoin. In the beginning it was a lot easier, and then it slowly got harder.

                                                                        But what I really hate about AI and how most people talk about it is that if one day it does what the advertisements say, all white collar jobs collapse.

                                                                        • qnleigh an hour ago

                                                                          Yeah I guess the subtext is 'AI is going to take over so much of the market that it's risky to hold anything else.'

                                                                          • empressplay an hour ago

                                                                            It's worse than that. It's ultimately a military technology. The end-game here is to use it offensively and/or defensively against other countries. Whoever establishes dominance first wins. And so you have to push adoption, so that it gets tested and can be iterated. But this isn't about making money (they are losing it like crazy!). This is end-of-the-world shit, about whoever will be left standing once all the dominoes fall -- if they ever fall (let's hope they don't!)

                                                                            But it's tacitly understood we need to develop this as soon as we can, as fast as we can, before those other guys do. It's a literal arms race.

                                                                            • monkpit an hour ago

                                                                              Yeah, if you consider a military-grade AI/LLM with access to all military info sources, able to analyze them all much quicker than a human… there’s no way this isn’t already either in progress or in use today.

                                                                              Probably only a matter of time until there’s a Snowden-esque leak saying AI is responsible for drone assassinations against targets selected by AI itself.

                                                                              • daze42 an hour ago

                                                                                This 100%. We're in the middle of an AI Manhattan Project and if "we" give up or slow down, another company or country will get AGI before "us" and there's no coming back after that. If there's a chance AGI is possible, it doesn't make sense to let someone else take the lead no matter how dangerous it could be.

                                                                              • big_paps an hour ago

                                                                                One often forgets this.

                                                                                • saltcured an hour ago

                                                                                  With all the wackiness around AI, is this some Mutually Assured Delusion doctrine?

                                                                                • im3w1l an hour ago

                                                                                  When trying to infer people's motives, don't just look at what they are doing. Look also at what they aren't doing: alternatives they had and rejected.

                                                                                  If marketing it was the sole objective there are many other stories they could have told, but didn't.

                                                                                  • vpribish 24 minutes ago

                                                                                    what are a couple of those alternatives?

                                                                                  • bpodgursky an hour ago

                                                                                    You guys can hate him, but Alex Karp of Palantir had the most honest take on this recently which was basically:

                                                                                    "Yes, I would love to pause AI development, but unless we get China to do the same, we're f***, and there's no advantage unilaterally disarming" (not exact, but basically this)

                                                                                    You can assume bad faith on the parts of all actors, but a lot of people in AI feel similarly.

                                                                                    • testbjjl an hour ago

                                                                                      In China, I wonder if the same narrative is happening: no new junior devs, threats of obsolescence, etc. Or do they collectively see the future differently?

                                                                                      • steveklabnik an hour ago

                                                                                        Most reporting I've seen rhymes with this, from last year https://www.theguardian.com/technology/2025/jun/05/english-s...

                                                                                        • SlightlyLeftPad an hour ago

                                                                                          They absolutely see the future differently because their society is already set up for success in an AI world. If what these predictions say become true, free market capitalism will collapse. What would be left?

                                                                                        • biophysboy 21 minutes ago

                                                                                          The reason you think it's honest is because you already believed it.

                                                                                          • tonyedgecombe an hour ago

                                                                                            Yeah but it’s in his interest to encourage an arms race with China.

                                                                                            • bpodgursky 23 minutes ago

                                                                                              OK, but the other view equally compatible with the evidence is that he is scared of getting rolled by an AI-dominant China and that's why he's building tools for the dept of defense.

                                                                                              Like I said you can believe whatever you want about good-faith motives, but he didn't have to say he wanted to pause AI, he could have been bright-and-cheery bullish, there was no real advantage to laying his cards out on his qualms.

                                                                                              • heraldgeezer 25 minutes ago

                                                                                                Answer the question.

                                                                                                If the USA pauses AI development, do you think China will?

                                                                                              • heraldgeezer 25 minutes ago

                                                                                                HN has become so marxist they hate the country they live in

                                                                                              • apaosjns an hour ago

                                                                                                Sam Altman is a known sociopath who has no problem achieving his goals by any means necessary. His prior business dealings (and repeated patterns with OpenAI) are evidence of this.

                                                                                                Shumer is of a similar stock but less capable, so he gets caught in his lies.

                                                                                                I’m still shocked people work with Altman knowing his history, but given the Epstein files etc. it’s no surprise. Our elite class is entirely rotten.

                                                                                                Best advice is trust what you see in front of your face (as much as you can) and be very skeptical of anything else. Everyone involved has agendas and no morals.

                                                                                                • verdverm an hour ago

                                                                                                  I'm shocked how congratulatory things were for OpenClaw joining Altman Inc

                                                                                                  • gmerc 39 minutes ago

                                                                                                    If you know the author you know it's a match made in heaven

                                                                                              • nativeit an hour ago

                                                                                                The AI executives are marketing it—it’s just none of us are the target demographic. They are marketing it to executives and financiers, the people who construct the machinations to keep their industry churning, and those who begrudge the necessity of labor in all its forms.

                                                                                                • lambdasquirrel an hour ago

                                                                                                  Yup, if you haven’t heard first-hand (i.e. from the source) at least one story where some exec was using AI to intimidate his employees, or outright terminating them in some triumphant way (whether or not this was a sound business decision), then you’ve gotta be living in a bubble. AI might not be the problem, but the way it’s being used is.

                                                                                                  • SL61 38 minutes ago

                                                                                                    This has been the message at the F100 that one of my relatives works at. The CEO's increasingly aggressive message to their hundreds of thousands of employees is that they should figure out how to get 10x faster with AI or their job is on the line. The average non-technical white collar employee doesn't know the details of how LLMs work or any of the day-to-day changes in tooling that we see in the tech industry. All they see is elites pouring all their resources into a machine that will result in Great Depression 2 if it succeeds. Millions of people whose lives depend on their $50k office job in Middle America are hoping and praying that it fails.

                                                                                                    I live in an area that's not a tech hub and lots of people get confrontational when they find out I work in tech. First they want to know if I'm working on AI, and once they're satisfied that the answer is no, they start interrogating me about it. Which companies are behind it, who their CEOs are, who's funding them, etc. All easily Googleable, but I'm seen as the AI expert because I work in tech.

                                                                                                    • heraldgeezer 21 minutes ago

                                                                                                      I do love that.

                                                                                                      My career is built on people not knowing how to Google lmao (IT)

                                                                                                      To most people, AI is ChatGPT. Maybe Gemini.

                                                                                                      Claude? No idea.

                                                                                                      VS Code, Cursor, Antigravity, Claude Code? Blank stares.

                                                                                                      Same as when the computer came, some will fall behind. Excel monkeys copy-pasting numbers will go; copywriters and written-word jobs are already gone. Simple image art is now all done by one person with AI.

                                                                                                      Unless you want a Soviet system where jobs are kept to keep people busy.

                                                                                                • mullingitover an hour ago

                                                                                                  AI is scary, but look on the bright side:

                                                                                                  Whenever there is a massive paradigm shift in technology like we have with AI today, there are absolutely massive, devastating wars because the existing strategic stalemates are broken. Industrialized precision manufacturing? Now we have to figure out who can make the most rifles and machine guns. Industrialized manufacturing of high explosives? Time to have a whole world war about it. Industrialized manufacturing of electronics? Time for another world war.

                                                                                                  Industrialized manufacturing of intelligence will certainly lead to a global scale conflict to see if anyone can win formerly unwinnable fights.

                                                                                                  Thus the concerns about whether you have a job or not will, in hindsight, seem trivial as we transition to fighting for our very survival.

                                                                                                  • Havoc an hour ago

                                                                                                    To me, a global rise of full-blown authoritarianism in every corner seems more plausible than a shooting war. The tech is very well suited for controlling people, both in the monitoring sense and in destroying their ability to tell what’s real.

                                                                                                    ie new stalemate in the form of multiple inward focused countries/blocs

                                                                                                    • recursivecaveat 28 minutes ago

                                                                                                      Yeah LLMs complete the surveillance state. It adds the patience to monitor, analyze, de-anonymize all the data. The industrial revolution and its wealth temporarily disrupted civilization, but we're regressing to the normal state of global authoritarianism again.

                                                                                                      • BlackjackCF an hour ago

                                                                                                        That was already happening without LLMs. LLMs will just make it worse.

                                                                                                        • goda90 an hour ago

                                                                                                          "We've always been at war with Eurasia"

                                                                                                        • RHSeeger 27 minutes ago

                                                                                                          > the existing strategic stalemates are broken

                                                                                                          Claude, go hack <my enemy nation-state> and find me ways to cause them harm that are unlikely to be noticed until it is too late for them to act on it.

                                                                                                          • verdverm an hour ago

                                                                                                            Where were the massive devastating wars last time this happened with the internet and mobile phone?

                                                                                                            • tgv an hour ago

                                                                                                              You could say that it waged a silent war, and our kids' attention spans lost.

                                                                                                              • cvwright an hour ago

                                                                                                                Very likely they got the causality backwards. Every time there’s a big war, technology advances because governments pour resources into it.

                                                                                                                • mullingitover 39 minutes ago

                                                                                                                  The internet and mobile phones weren't paradigm shifts for warfare. There were already mobile radios in WWII, so they fall under the 'industrialized manufacturing of electronics' bucket.

                                                                                                                  • prewett 42 minutes ago

                                                                                                                    Just for the sake of argument, I don't think the internet and mobile phones are military technologies, nor did GP use those examples.

                                                                                                                    > Industrialized manufacturing of electronics?

                                                                                                                    Ukraine seems to be exploring this and rewriting military doctrine. The Iranian drones the Russians are using seem to be effective, too. The US has drones as well, and we've discovered that drone bombing is not helpful against insurgencies; we haven't been in any actual wars for a while, though.

                                                                                                                    > Industrialized manufacturing of intelligence

                                                                                                                    I don't think we've gotten far enough to discover how/if this is effective. If GP means AI, then we have no idea. If GP means fake news via social media, then we may already be seeing the beginning effects. Both Obama and Trump drew a lot of their support from social media.

                                                                                                                    Having written this, I think I flatly disagree with GP that technology causes wars because of its power. I think it may enable some wars because of its power differential, but I think a lot is discovered through war. WWI discovered the limitations of industrial warfare, also of chemical weapons. Ukraine is showing what constellations of mini drones (as opposed to the US' solitary maxi-drones) can do, simply because they are outnumbered and forced to get creative.

                                                                                                                    • vpribish 10 minutes ago

                                                                                                                      how do you not think the internet is a military technology? i mean (waves hands) like it's from ARPA, the military paid for it, it integrated cold war air defence, it made global comms resilient to attack, and made information non-local on a massive scale

                                                                                                                      GP's assertion about tech revolutions making wars doesn't make any sense to me on any level, but it's not just because the latest revolutions were 'not military tech'

                                                                                                                      i'm liking william spaniel's model : wars happen when 1 - there is a substantial disagreement between parties and 2 - there is a bargaining friction that prevents reaching a less-costly negotiated resolution.

                                                                                                                      I don't see how a technical revolution necessarily causes either, much less both, of those conditions. there sure is a lot of fear and hype going around - and that causes confusion and maybe poor decisions - but we should chill on the apocalyptics

                                                                                                                    • squibonpig an hour ago

                                                                                                                      Yeah this was my thought as well

                                                                                                                    • atemerev an hour ago

                                                                                                                      I am absolutely sure that WW3 is inevitable, for these exact reasons. Later, the survivors will be free to reorganize the society.

                                                                                                                      • SoftTalker 38 minutes ago

                                                                                                                        Nature likes to do occasional resets. Probably explains the Fermi paradox as well.

                                                                                                                    • zug_zug an hour ago

                                                                                                                      Part of what's going on here -- why we have this gap between what we say we fear and how we act -- is just a human deficiency.

                                                                                                                      I remember when Covid got out of control in China, a lot of people around me [in NY] had this energy of "so what, it'll never come to us." I'm not saying that they believed that, or had some rational opinion, but they had an emotional energy of "it's no big deal." The emotional response can be much slower than the intellectual response, even if that fuse is already lit and the eventuality is indisputable.

                                                                                                                      Some people are good at not having that disconnect. They see the internet in 1980 and they know that someday 60 years from now it'll be the majority of shopping, even though 95% of people they talk to don't know what it is and laugh about it.

                                                                                                                      AI is a little bit in that stage... It's true that most people know what it is, but our emotional response has not caught up to the reality of all of the implications of thinking machines that are gaining 5+ IQ points per year.

                                                                                                                      We should be starting to write the laws now.

                                                                                                                      • californical an hour ago

                                                                                                                        But it’s worth being careful - you could’ve said the same thing 3 years ago about NFTs. They were taking off and people made very convincing arguments about how it was the future of concert tickets, and eventually commerce in general.

                                                                                                                        If we started writing lots of laws around NFTs, it would just be a bunch of pointless (at best), or actively harmful laws.

                                                                                                                        Nobody cares about NFTs today, but there were genuinely good ideas about how they’d change commerce being spouted by a small group of people.

                                                                                                                        People can say “this is the future” while most people dismiss them, and honestly the people predicting tectonic shifts are usually wrong.

                                                                                                                        I don’t think that the current LLM craze is headed for the same destiny as NFTs, but I don’t think that the “LLM is the new world order” crowd is necessarily more likely to be correct just because they’re visionaries.

                                                                                                                        • AstroBen 19 minutes ago

                                                                                                                          Some of my friends bought a tonne of bitcoin when it was around ~$100 because it was clearly the future. I'm still not sure if I was an idiot or smart to reject that.

                                                                                                                      • ctoth an hour ago

                                                                                                                        > And Matt Shumer is saying that AI is currently like Covid in January 2020—as in, "kind of under the radar, but about to kill millions of people"

                                                                                                                        This is where the misrepresentation... no, the lie comes in. It always does in these "sensible middle" posts! The genre requires flattening both sides into dumber versions of themselves to keep the author positioned between two caricatures. Supremely done, OP.

                                                                                                                        If you read Matt's original article[0] you see he was saying something very different. Not "AI is going to kill lots of people" but that we're at the point on an exponential curve where correct modeling looks indistinguishable from paranoia to anyone reasoning from base rates of normal experience. The analogy is about the epistemic position of observers, not about body counts.

                                                                                                                        [0]: https://shumer.dev/something-big-is-happening

                                                                                                                        • twodave an hour ago

                                                                                                                          The reason I dislike AI use in certain modes is because the end result looks like a Happy Meal toy from McDonalds. It looks roughly like the thing you wanted or expected, but on even a casual examination it falls far short. I don’t believe this is something we can overcome with better models. Or, if we can then what we will end up writing as prompts will begin to resemble a programming language. At which point it just isn’t worth what it costs.

                                                                                                                          This tech is a breakthrough for so many reasons. I’m just not worried about it replacing my job. Like, ever.

                                                                                                                          • fer an hour ago

                                                                                                                            Disclaimer: self plug[0]

                                                                                                                            I honestly believe everything will be normalized. A genius with the same model as me will be more productive than I am, and I will be more productive than some other people, exactly the same as without AI.

                                                                                                                            If AI starts doing things beyond what you can understand, control and own, it stops being useful, the extra capacity is wasted capacity, and there are diminishing returns for ever growing investment needs. The margins fall off a cliff (and they're already negative), and the only economic improvement will come from Moore's Law in terms of power needed to generate stuff.

                                                                                                                            The nature of the work will change, you'll manage agents and what not, I'm not a crystal ball, but you'll still have to dive into the details to fix what AI can't, and if you can't, you're stuck.

                                                                                                                            [0]https://www.fer.xyz/2026/02/llm-equilibrium

                                                                                                                            • SteveMqz 2 minutes ago

                                                                                                                              The margins on inference definitely aren’t negative. An easy way to check this is by looking at the costs of using cloud-hosted open-source models, which necessarily are served at a positive margin, and are much lower $/token than what you get from the labs.

                                                                                                                            • bananaflag an hour ago

                                                                                                                              > Being able to easily interact with banks, without waiting in a line that’s too long for the dum-dum you get at the end to be a real consolation, made people use banks more.

                                                                                                                              Actually, in my city, it wasn’t the ATMs but the apps, which made it possible to do almost everything on the phone, that significantly reduced the number of bank branches in the last few years. I rarely have to go to the bank, but when I do, I see that another nearby branch has closed and I have to go somewhere even farther.

                                                                                                                              • bovermyer an hour ago

                                                                                                                                My feelings on AI are complicated.

                                                                                                                                It's very useful as a coding autocomplete. It provides a fast way to connect multiple disparate search criteria in one query.

                                                                                                                                It also has caused massive price hikes for computer components, negatively impacted the environment, and most importantly, subtly destroys people's ability to understand.

                                                                                                                                • avazhi an hour ago

                                                                                                                                  People don’t hate AI because they’re scared of it taking their jobs. They hate it because it’s massively overhyped while simultaneously being shoved down their throats like it’s a panacea. If and when AI, whether in LLM form or something else, actually demonstrates genuine intelligence as opposed to clearly probabilistic babble and cliche nonsense, people will be a lot more excited and open to it. But what we have currently is just dogshit with a few neat tricks to cover up the smell.

                                                                                                                                  • hmmmmmmmmmmmmmm 44 minutes ago

                                                                                                                                    this feels like a comment out of 2023. Ever since reasoning models they have become much more than "probabilistic babble".

                                                                                                                                    • avazhi 22 minutes ago

                                                                                                                                      Hence why I mentioned a few tricks to cover the shit smell.

                                                                                                                                  • ludwigvan an hour ago

                                                                                                                                    > The people in charge of AI keep telling me to hate it

                                                                                                                                    Anthropic’s Dario Amodei deserves a special mention here. Paints the grimmest possible future, so that when/if things go sideways, he can point back and say, "Hey, I warned you. I did my part."

                                                                                                                                    Probably there is a psychological term that explains this phenomenon, I asked ChatGPT and it said it could be considered "anticipatory blame-shifting" or "moral licensing".

                                                                                                                                    • ttuominen an hour ago

                                                                                                                                      Luddites weren't against technology. “They just wanted machines that made high-quality goods and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.” https://www.smithsonianmag.com/history/what-the-luddites-rea...

                                                                                                                                      • abhaynayar an hour ago

                                                                                                                                        > students rob themselves of the opportunity to learn, so they can… I dunno, hit the vape and watch Clavicular get framemogged

                                                                                                                                        Hahah, this guy Gen-Zs.

                                                                                                                                        • chung8123 an hour ago

                                                                                                                                          Depending on how you use AI, you can learn things a lot quicker than before. You can ask it questions, ask it to explain things, etc. Even if the AI is not ready for prime time yet, the vision of being able to change how we learn is there.

                                                                                                                                        • lukev 33 minutes ago

                                                                                                                                          I think the author is far too generous in thinking through the possible motives of tech leaders in the "what if they believe it?" branch.

                                                                                                                                          Far from embracing UBI (or any other legal/social strategy to mitigate mass unemployment), tech leaders have signaled very strongly that they'd actually prefer the exact opposite. They have nearly universally aligned themselves with the party that's explicitly in favor of extreme wealth inequality and aversion to even the mildest social welfare program.

                                                                                                                                          • mfrankel 6 minutes ago

                                                                                                                                            The most insightful part of the essay is the focus on everyday experience. People are not reacting to AI because of labor statistics but because of cheating, fake videos, spam, and the loss of visible effort. When effort becomes indistinguishable from automation, trust erodes. That explains the backlash better than unemployment forecasts.

                                                                                                                                            Where the piece misses the point is scale. It treats AI mainly as a labor market shock. Historically, technologies rarely eliminate work outright. They change what humans are valued for. The deeper danger is not mass joblessness. It is weakened thinking, shallow learning, and a breakdown in shared reality. The economic fear is overstated. The cultural damage is understated.

                                                                                                                                            • everdrive an hour ago

                                                                                                                                              "Change is scary,"

                                                                                                                                              This is not the point the author was making, but I think this phrase implies that it's merely fear of change which is the problem. Change can bring about real problems and real consequences whether or not we welcome it with open arms.

                                                                                                                                              • cdempsey44 an hour ago

                                                                                                                                                I have some friends who are embracing it and using it to transform their businesses (eg insurance sales), and others who hate it and think it should be banned (lawyers, white collar).

                                                                                                                                                I think for a lot of people it feels like an inconvenient thing they have to contend with, and many are uncomfortable with rapid change.

                                                                                                                                                • crassus_ed 40 minutes ago

                                                                                                                                                  Nice read! The main benefit for me is the reduced search time for anything I need to look up online. Especially for code, you can find relevant information way more quickly.

                                                                                                                                                  One suggestion for your writing style: it was clear to me that you don’t hate AI; you didn’t have to mention it so many times in your story.

                                                                                                                                                  • ergocoder an hour ago

                                                                                                                                                    > I have a friend who is a new TA at a university in California. They’ve had to report several students, every semester, for basically pasting their assignments into ChatGPT.

                                                                                                                                                    We've solved this problem before.

                                                                                                                                                    You have 2 separate segments:

                                                                                                                                                    1. Lessons that forbid AI
                                                                                                                                                    2. Lessons that embrace AI

                                                                                                                                                    This doesn't seem that difficult to solve. You handle it like how you handle calculators and digital dictionaries in universities.

                                                                                                                                                    Moving forward, people who know fundamentals and AI will be more productive. The universities should just teach both.

                                                                                                                                                    • parpfish an hour ago

                                                                                                                                                      this is tough because we've spent years building everything in education to be mediated by computers and technology, and now we're realizing that maybe we went a little overboard and over-fit to "lets do everything on computers".

                                                                                                                                                      it was easy to force kids to learn multiplication tables in their head when there were in-person tests and pencil-and-paper worksheets. if everything happens through a computer interface... the calculator is right there. how do you convince them that it's important to learn to not use it?

                                                                                                                                                      if we want to enforce non-ai lessons, i think we need to make sure we embrace more old-school methods like oral exams and essays being written in blue books.

                                                                                                                                                    • writeslowly an hour ago

                                                                                                                                                      The vibes around the self-driving car hype (maybe 10 years ago?) felt very similar to me, but on a smaller scale. There was a lot of "You might like driving your car and having a steering wheel, but if you do, you're a luddite who will soon be forced to ride about in our featureless rented robot pods" type of statements, or that one AI scientist who was quoted saying we should just change laws around how humans are allowed to interact with streets to protect the self-driving cars.

                                                                                                                                                      Not all of it was like that. Oddly enough, I think it was Tesla, or just Elon Musk, claiming you'd soon be able to take a nap in your car on your morning commute through some sort of Jetsons tube, or that you could let your car earn money on the side while you weren't using it, which might actually be appealing to the average person. But a lot of it felt like self-driving car companies wanted you to feel like they just wanted to disrupt your life and take your things away.

                                                                                                                                                      • thothless 33 minutes ago

                                                                                                                                                        I'm still confused on how so many people are happy paying so much money, just to BE the product.

                                                                                                                                                        Widespread FOMO and the irrationality that comes with it might be at an all time high.

                                                                                                                                                        • testbjjl an hour ago

                                                                                                                                                          Is this job-obsolescence narrative top of mind in China? I wonder if they are seeing these developments differently.

                                                                                                                                                          • amelius an hour ago

                                                                                                                                                            People hate AI because it does all the fun jobs.

                                                                                                                                                            • big_paps an hour ago

                                                                                                                                                              Haha, funny but not true!

                                                                                                                                                            • skeledrew 27 minutes ago

                                                                                                                                                              > The idea of AI as an exterminator of human problems is much more appealing than AI as the exterminator of, you know, the career of me and everybody else on Earth.

                                                                                                                                                              Here's the rub though: needing a "career" to survive and have a decent life is a human problem. It's an extreme case of mass Stockholm Syndrome that's made the majority accept that working in order to make money in order to have access to life-preserving/enhancing resources is a necessary part of the human condition. Really, that flow is only relevant when it requires human effort to create, maintain and distribute those resources in the first place.

                                                                                                                                                              AI is increasingly taking over the effort, and so is threatening that socio-economic order. The real problem is that the gains are still being locked away to maintain that scarcity which those efforts address, so over time there's an increasing crisis of access, since there's nothing really in place to continue providing the controlled access everyone in the system has had to resources for... centuries.

                                                                                                                                                              • v3xro an hour ago

                                                                                                                                                                > If I can somehow hate a machine that has basically stopped me from having to write boring boilerplate code, of course others are going to hate it!

                                                                                                                                                                Poor author, never tried expressive high-level languages with metaprogramming facilities that do not result in boring and repetitive boilerplate.

                                                                                                                                                                • WolfeReader an hour ago

                                                                                                                                                                  Honestly, this. Mainstream coding culture has spent decades shoehorning stateful OOP into distributed and multithreaded contexts. And now we have huge piles of code: getters and setters and callbacks and hooks and annotation processors and factories and dependency injection, all pasted on top of the hottest coding paradigm of the '90s. It's too much to manage, and now we feel like we need AI to understand it all for us.

                                                                                                                                                                  Meanwhile, nobody is claiming vast productivity gains using AI for Haskell or Lisp or Elixir.

                                                                                                                                                                  • lukev 39 minutes ago

                                                                                                                                                                    I mean, I find that LLMs are quite good with Lisp (Clojure) and I really like the abstraction levels that it provides. Pure functions and immutable data mean great boundary points and strong guarantees to reason about my programs, even if a large chunk of the boring parts are auto-coded.

                                                                                                                                                                    I think there's lots of people like me, it's just that doing real dev work is orthogonal (possibly even opposed) to participating in the AI hype cycle.

                                                                                                                                                                • phoebusaicartel 37 minutes ago

                                                                                                                                                                  The Phoebus.AI cartel was an international cartel that controlled the manufacture and sale of computer components in much of Europe and North America between 2025 and 2039. The cartel took over market territories and lowered the useful supply and life of such computer components, which is commonly cited as an example of planned obsolescence of general computing technology in favor of 6G ubiquitous computing. The Phoebus.AI cartel's compact was intended to expire in 2055, but it was instead nullified in 2040 after World War III made coordination among the members impossible.

                                                                                                                                                                  • glimshe an hour ago

                                                                                                                                                                    I suppose they mean "Why people who hate AI hate AI"... I don't hate AI and know many people who don't either. I find it quite useful but that's it.

                                                                                                                                                                    • SilentM68 35 minutes ago

                                                                                                                                                                      That's an excellent article. I also don't believe AI is the issue, but rather those at the helm of most of these companies. In my view, AI companies, like other tech companies of the past, have no interest in serving society. So you have a point when you said, "They don’t actually care about what their products may do to society—they just want to be sure they win the AI race, damn the consequences." It's all about money, and those at the top who have the money have nothing to lose. I'd rather see AI being put to better use: curing cancer and other diseases. I think your scenario where "Their Super-AGI will write the UBI law, and get it passed, when it has a few minutes between curing cancer and building a warp drive" is very likely now.

                                                                                                                                                                      • renewiltord 29 minutes ago

                                                                                                                                                                        Highly entertaining to me that people will form these Woe Is Me rings and just hype themselves into sadness. Then you give them a few minutes and they’ll start exclaiming about how society is all about loneliness these days and how they’ve been going to therapy for the last five years.

                                                                                                                                                                        The miserable have always been miserable. And no matter how much the world improves, they will find paths to misery. Perhaps the great lesson of this age is that some revel in sadness.

                                                                                                                                                                        Perhaps what we desire as humans is intensity of emotion more than valence.

                                                                                                                                                                        • dgxyz an hour ago

                                                                                                                                                                          I hate LLMs because you can solve any problem that LLMs can solve in a much better way but people are too stupid, cheap or lazy to put in the effort to do so and share it with everyone.

                                                                                                                                                                          That and the whitewashing it allows on layoffs from failing or poorly planned businesses.

                                                                                                                                                                          Human issues as always.

                                                                                                                                                                          • doctorpangloss an hour ago

                                                                                                                                                                            Once I read a reference to Clavicular, I realized that the very first thing this author should do is stop reading the NYTimes. If the goal is to experience things closer to reality haha.

                                                                                                                                                                            • FeteCommuniste 36 minutes ago

                                                                                                                                                                              He's kind of hard to avoid at this point if you're a regular on any big social platform (except maybe Instagram?).

                                                                                                                                                                              • layer8 11 minutes ago

                                                                                                                                                                                You can’t even avoid it on HN now. ;)

                                                                                                                                                                            • cmiles8 an hour ago

                                                                                                                                                                              The cracks are showing, and all the “AI is going to eliminate 50% of white collar jobs” fear mongering is simply signaling we’re in the final stages before the bubble implosion.

                                                                                                                                                                              The AI bros desperately need everyone to believe this is the future. But the data just isn’t there to support it. More and more companies are coming out saying AI was good to have, but the mass productivity gains just aren’t there.

                                                                                                                                                                              A bunch of companies used AI as an excuse to do mass layoffs only to then have to admit this was basically just standard restructuring and house cleaning (eg Amazon).

                                                                                                                                                                              There's so much focus on white collar jobs in the US, but these have already been automated and offshored to death. What's there now is truly survival of the fittest. Anything that's highly predictable, routine, and fits recurring patterns (i.e. what AI is actually good at) was long since offshored to places like India. To the extent that AI does cause mass disruption to jobs, the India tech and BPO sectors would be ground zero… not white collar jobs in the US.

                                                                                                                                                                              The AI bros are in a fight for their careers and the signal is increasingly pointing to the most vulnerable roles out there at the moment being all those tangentially tacked onto the AI hype cycle. If real measurable value doesn’t show up very soon (likely before year end) the whole party will come crashing down hard.

                                                                                                                                                                              • ass22 an hour ago

                                                                                                                                                                                The openclaw stuff for me is a prime signal we are now reaching the maximal size of the bubble before it pops. The leaders of the firms at the frontier are lost and have no vision, and this is a huge warning signal. E.g., Steve Jobs was always ahead of the curve in the context of the personal computer revolution; there was no outside individual who had a better view of where things were heading.

                                                                                                                                                                                There isn't gonna be a huge event in the public markets, except for Nvidia, Oracle and maybe MSFT. Firms that are private will suffer enormously though.

                                                                                                                                                                                • verdverm an hour ago

                                                                                                                                                                                  My hunch is the year of the AI Bubble is the same one as the Linux Desktop

                                                                                                                                                                                • b8 an hour ago

                                                                                                                                                                                  Yeah, it's FUD. AI can't even do customer service jobs well, and CEOs make hyperbolic statements that it will replace 30% of jobs.

                                                                                                                                                                                  • FartyMcFarter an hour ago

                                                                                                                                                                                    In my experience dealing with e.g. Amazon Prime Video customer service, the actual people working on customer service can't do those jobs well either. As an example, I've complained multiple times to them about live sports streams getting interrupted and showing an "event over" notice while the event is still happening. It's a struggle to get them to understand the issue let alone to get it fixed. They haven't been helpful a single time.

                                                                                                                                                                                    So if AI improves a bit, it might be better than the current customer service workers in some ways...

                                                                                                                                                                                    • co_king_5 an hour ago

                                                                                                                                                                                      Amazon isn't interested in giving you a quality experience.

                                                                                                                                                                                      The customer service reps are warm bodies for sensitive customers to yell at until they tire themselves out.

                                                                                                                                                                                      Tolerating your verbal abuse is the job.

                                                                                                                                                                                      Amazon never intended to improve the quality of the service being offered.

                                                                                                                                                                                      You're not going to unsubscribe, and if you did they wouldn't miss you.

                                                                                                                                                                                    • Macha 16 minutes ago

                                                                                                                                                                                      To be fair, when has being useless stopped adoption of something in the customer service space?

                                                                                                                                                                                    • dmm an hour ago

                                                                                                                                                                                      > The classical cultural example is the Luddites, a social movement that failed so utterly

                                                                                                                                                                                      Maybe not the best example? The Luddites were skilled weavers who had their livelihoods destroyed by automation. The government deployed 12,000 troops against them, executed dozens after show trials, and made machine-breaking a capital offense.

                                                                                                                                                                                      Is that what you have planned for me?

                                                                                                                                                                                      • drewbeck 40 minutes ago

                                                                                                                                                                                        I caught that too. The piece is otherwise good imo, but "the luddites were wrong" is wrong. In fact, later in the piece the author essentially agrees – the proposals for UBI and other policies that would support workers (or ex-workers) through any AI-driven transition are an acknowledgement that yes, the new machines will destroy people's livelihoods and that, yes, this is bad, and that yes, the industrialists, the government and the people should care. The luddites were making exactly that case.

                                                                                                                                                                                        > while it’s true that textile experts did suffer from the advent of mechanical weaving, their loss was far outweighed by the gains the rest of the human race received from being able to afford more than two shirts over the average lifespan

                                                                                                                                                                                        I hope the author has enough self awareness to recognize that "this is good for the long term of humanity" is cold comfort when you're begging on the street or the government has murdered you, and that he's closer to being part of the begging class than the "long term of humanity" class (by temporal logistics if not also by economic reality).

                                                                                                                                                                                        • RHSeeger 20 minutes ago

                                                                                                                                                                                          My take was that it's not

                                                                                                                                                                                          > We should hate/destroy this technology because it will cause significant short term harm, in exchange for great long term gains.

                                                                                                                                                                                          Rather

                                                                                                                                                                                          > We should acknowledge that this technology will cause significant short-term harm if we don't act to mitigate it. How can we act to do that, while still obtaining the great long-term gains from it?

                                                                                                                                                                                      • KaoruAoiShiho an hour ago

                                                                                                                                                                                        Sam Altman gave millions to Andrew Yang for pushing UBI, so they are trying to forewarn and experiment with finding the right solution. Most of the world prefers to bury their heads in the sand, though, and call them grifters, so of course we'll do nothing until it's catastrophic.

                                                                                                                                                                                        • SmirkingRevenge 43 minutes ago

                                                                                                                                                                                          Almost hard to remember now, but many tech companies used to be well liked, even Facebook at one time. The negative externalities of social media and smartphones were not apparent right away, but now people live with them daily.

                                                                                                                                                                                          So feelings have soured and tech seems more dystopian. Any new disruptive technology is bound to be looked upon with greater baseline cynicism, no matter how magical. That's just baked in now, I think.

                                                                                                                                                                                          When it comes to AI, many people are experiencing all the negative externalities first, in the form of scams, slop, plagiarism, fake content - before they experience it as a useful tool.

                                                                                                                                                                                          So it's just making many people's lives slightly worse from the outset, at least for now.

                                                                                                                                                                                          Add all that on top of the issues the OP raises and you can see why so many have bad feelings about it.

                                                                                                                                                                                          • Der_Einzige an hour ago

                                                                                                                                                                                            The number of em dashes and the use of negation does make me think AI wrote part of this. I'll give credit for the lack of semicolons, but people are starting to get a bit better at "humanizing" their outputs.

                                                                                                                                                                                            • abhaynayar an hour ago

                                                                                                                                                                                              There are em-dashes, but the writing feels nice and unlike the default ChatGPT style, so even if it is AI (which it might not be, since people do use em-dashes), I don't mind.

                                                                                                                                                                                              • Nition an hour ago

                                                                                                                                                                                                I'm certainly seeing a huge amount of AI-assisted writing recently (that "I Love Board Games" article posted here yesterday was a good example), but I think this one is human-written. Pangram shows it as human written also.

                                                                                                                                                                                                • dasil003 30 minutes ago

                                                                                                                                                                                                  I wouldn't be surprised if someone had AI write a bot to post a complaint on every HN thread about how the article smells like AI slop. It's so tiresome. Either the article is interesting and useful or it's not; I don't really care if someone used AI to write it.

                                                                                                                                                                                                • atemerev an hour ago

                                                                                                                                                                                                  Well, the process cannot be stopped or paused, whether we like it or not, for a few relatively obvious reasons.

                                                                                                                                                                                                  And relying on your government to do the right thing as of 2026 is, frankly, not a great idea.

                                                                                                                                                                                                  We need to think hard ourselves about how to adapt. Perhaps "jobs" will be a thing of the past, and governments will probably not manage to rule over it. What will be the new power structures? How do we gain a place there? What will replace governments as the organizing force?

                                                                                                                                                                                                  I am thinking about this every day.