• nomdep a day ago

    (I feel that I write a comment like this every few years)

    The author's catalog of harms is real. But it's worth noting that nearly identical catalogs were compiled for every major technological shift in modern history. The Internet destroyed print journalism and local retail, and enabled cyberbullying and mass surveillance. If we applied the same framework used here, Internet optimism in 2005 was also a form of "class privilege" (his term, I personally hate it).

    And the pattern extends well beyond the Internet. For example, mechanized looms devastated weavers, the automobile wiped out entire trades while introducing pollution and traffic deaths, and recorded music was supposed to kill live performances.

    In each case, the harms were genuine, the displacement was painful and unevenly distributed, and the people raising alarms were not irrational. They were often right about the costs. What they tended to miss was the longer trajectory: the way access to books, transportation, music, and information gradually broadened rather than narrowed, even if the transition was brutal for those caught in it.

    History doesn't guarantee a good outcome for AI, but the author does advocate from a position of "class privilege": of having access to good lawyers, good doctors, and good schools already, and not feeling the urgency of tools that might extend those things to people who don't.

    • abeppu a day ago

      > but the author does advocate from a position of "class privilege": of having access to good lawyers, good doctors, and good schools already, and not feeling the urgency of tools that might extend those things to people who don't

      I dunno I think you can also take a really dim view of whether society as currently structured is set up to use AI to make any of those things more accessible, or better.

      In education, certainly we've seen large tech companies give away AI to students who then use it to do their work. Simultaneously, teachers are sold AI-detection products that are unreliable at best. Students learn less by e.g. not actually doing the reading or writing, and teachers spend more of their time pointlessly trying to catch what has become a very common practice.

      In medicine, in my most recent job search I talked to companies selling AI solutions both to insurers and to healthcare providers, to more quickly prepare filings to send to the other. I think the amount of paperwork per patient is just going to go up, with bots doing most of the actual form-filling, but the proportion of medical procedures that gets denied will be mostly unchanged.

      I am not especially familiar with the legal space, but given the adversarial structure of many situations, I'm inclined to expect that AI will allow firms to shower each other in paperwork, most of which will not be read by a human on either side. Clients may pay for a similar or higher number of billable hours.

      Even if the technology _works_ in the sense of understanding the context and completing tasks autonomously, it may not work for _society_.

      • raincole a day ago

        > But it's worth noting that nearly identical catalogs were compiled for every major technological shift in modern history.

        And it has been... quite a correct view? In the past few decades the US cranked up its Gini index from 0.35 to ~0.5, successfully eliminated single-earner housebuyers[0]. It's natural to assume the current technology shift will eliminate double-earner housebuyers too. The next one would probably eliminate PC-buyers if we're lucky!

        [0]: https://www.economist.com/united-states/2026/02/12/the-decli...

        • _DeadFred_ 5 hours ago

          The sci-fi books were right in predicting future relationships would be poly; they just didn't explain that it would be because it was the only way people would be able to afford to live.

        • random3 a day ago

          Arguably, the history of humanity was about automating humanity:

          - teeth and nails with knives (in various shapes from bones to steel)

          - feet with carriages and bicycles and cars

          - hands with mills and steam-powered factories, up to industrial robots

          Literally every automation was meant to help humans somehow, so this naturally entailed automating some human function.

          This automation is an automation of the human brain.

          While the "definition" of what's human doesn't end here (feelings, etc.), the utility does.

          With loss of utility comes loss of benefits.

          Mainly, your ability to differentiate yourself as a function of effort (physical or intellectual) gets diminished to zero. This raises concerns about the ability to achieve goals and aspirations, like buying that house at some point or ensuring your children's future, which potentially vanish for large swaths of the population, the "unfortunates". Which these are is hard to tell, but arguably the current level of resources (assets) becomes a better indicator of the future for generations to come, with work mattering less or not at all.

          By freezing the utility of one's own effort, you arguably freeze the structure of society in time. So yes, every instance sucked for the displaced party, but this one seems particularly broad (i.e. wider splash damage).

          • NonHyloMorph a day ago

            The term you're looking for is externalisation, not automation. Check out "The Fault of Epimetheus"; and, on the alienation of the machine by automation (ca. the late 1970s), one of its intellectual predecessors: Gilbert Simondon.

            • random3 20 hours ago

              Thanks, both! Glad to get the explicit names for the things I'm "gesticulating" at. I haven't done any explicit reading on the topic, except for adjacent stuff like Analogia (Dyson) and The Coming Wave (Suleyman), and a talk by Terry Winograd that I thought was on point: https://www.youtube.com/live/LcvYYXdXF8E. I have and do want to read Superintelligence and will check out both Stiegler and Simondon.

              • plastic-enjoyer a day ago

                I mean, if he is a reader of Nick Land, automation may be right.

            • aprilthird2021 a day ago

              Everyone says this as if the previous cycles of labor displacement could not compound, with this one being the last straw. Same with how phones cause shorter attention spans, less thought, and more social isolation. People will say "oh, they said the same thing about books and TV and video games"

              We could be at the end of the rope with how much we can displace unevenly and how much people will put up with another cycle of wealth concentration. Just like we might be at the end of the rope with how much our minds can be stunted and distracted before serious negative consequences occur

              • robflynn a day ago

                I am reminded of this; I feel like it's kind of a similar phenomenon: https://www.reddit.com/r/dataisbeautiful/comments/1m803ba/th...

                • plagiarist a day ago

                  I think they are compounding. Prior to the internet we had more third spaces, less attention economy, fewer self-esteem problems from comparing our lives against influencers', warehouse and delivery jobs that didn't require pissing in a bottle to stay employed, and people were employed instead of doing gigs. We used to have some privacy; that's gone.

                  It's been this overpowered tool for the wealthy to gather more wealth by erasing jobs, and for data brokers to perform intense surveillance.

                • zozbot234 a day ago

                  Local retail and specialty print media are alive and well. Mass-market newspapers may be in trouble, but that's because it turns out most people were buying those for the classifieds, not really for the news. Even cyberbullying is mostly a matter of salience: it takes something that has always existed in the physical realm (bullying behavior) and moves it to the cyber environment where the mass public becomes aware of it.

                  • mft_ a day ago

                    > Mass-market newspapers may be in trouble, but that's because it turns out most people were buying those for the classifieds, not really for the news.

                    Genuinely interested in some sort of data on this.

                    My working assumption was that print news media was dying through a combination of free news availability on the internet, shifting advertising spending as a result, shifting ‘channels’ to social media, and shifting attention spans between generations.

                  • bsder a day ago

                    > enabled cyberbullying

                    The problem here is that adults do not take bullying seriously and they take cyberbullying even less seriously.

                    This is the fairly standard problem that we do not apply the existing rules and laws online with the same vigor as offline.

                    • notanastronaut 10 hours ago

                      Don't forget, the camera also destroyed art and put artists out of a job. Then digital cameras ruined it for the film industry. And on, and on.

                      • antonvs a day ago

                        The assumption in your comment is that those changes were all net good. In hindsight though, the automobile has had possibly existential costs for humanity, the internet has provided most benefit to those who most abuse its power, and so on. In the end, it doesn’t seem as though you’ve actually made any sort of case.

                        • jdross a day ago

                          The set of people who believe the automobile (or the Internet), taken as a whole, is a net negative for society is extremely small, for good reason

                          • Retric a day ago

                            Is it? Do you include everyone that’s died or lost a loved one due to personal automobiles in that assessment?

                            We are so far post-automobile that it's hard to compare, but many of the benefits are illusory when you consider how society has evolved with them; commutes, for example, used to be shorter. Similarly, the air used to be far cleaner, and that's even after we got rid of leaded gas and required catalytic converters decades ago.

                            • notanastronaut 10 hours ago

                              How many people have lived or had a loved one saved due to automobiles?

                              We have the benefit of hindsight but we're also making judgment calls looking back on fuzzy recollections, forgetting just how the past used to be before an innovation came along.

                              • Retric 8 hours ago

                                I agree it’s difficult to do these calculations as society evolves with technology. Trains enable long distance evacuation from hurricanes. Street cars and subways allow for medical transportation but it looks very different than an ambulance. Similarly do we exclude helicopters assuming cars were simply banned rather than our failing to design IC engines or whatever.

                                That said, there are modern enclaves without cars, mostly on islands or in very remote locations. They make do just fine without cars; it's the low population density that's at issue for medical care.

                            • EdwardDiego a day ago

                              Let's refine terms: internal-combustion-engine-driven automobiles have led to lead poisoning, air pollution, and CO2 emissions.

                              • zozbot234 a day ago

                                The automobile on its own was actually far less polluting than the horse wrt. air quality. It's just that there's a whole lot more of the former than there ever was of the latter. Even wrt. climate change, it turns out that horses produce methane emissions which are far worse for the climate than carbon dioxide.

                                • mikelitoris a day ago

                                  You are immensely discounting induced demand though.

                                  • simianwords a day ago

                                    induced demand is a good thing - it means there is more utility going around.

                                • simianwords a day ago

                                  I would like to get actual numbers.

                                  1. how many people died because of lead poisoning, air pollution?

                                  2. how many people were saved and had qualitatively better lives because of automobiles?

                                • undefined a day ago
                                  [deleted]
                                  • antonvs 19 hours ago

                                    That reason is along the lines of, “It is difficult to get a man to understand something when his salary depends on his not understanding it.”

                                    Coal miners will fight for coal mines, the oil industry will fight for dependence on oil, and so on. Sometimes they’re aware of what they’re doing, but in the case of a comment like the above, apparently not so much.

                                  • shimman a day ago

                                    I often wonder that if cable news was around say, during the American Civil War, how likely would the 13th, 14th, and 15th amendments have passed? I'd say extremely unlikely.

                                    Throughout our entire history as a species, abusers have always fucked the commons to the extreme using whatever tools they have available.

                                    I mean, take something as "innocuous" as the cotton gin: prior to the cotton gin there was a real decline in slavery, but once it became dramatically easier to process cotton, slavery skyrocketed. Some of the worst laws the US has ever passed, like the Fugitive Slave Act, came during this period.

                                    To think that technological progress means prosperity is extremely delusional.

                                    We're still dealing with the ramifications of nuclear weapons, and a committed nuclear attack will assuredly happen again at some point in our species' history; we can only hope it doesn't take out all life on Earth when it happens.

                                    • bad_haircut72 a day ago

                                      The Industrial Revolution and its consequences have been a disaster for the human race.

                                      • MSFT_Edging a day ago

                                        Seriously, these types of comments are always really narrow in their view.

                                        Industrialization has rapidly accelerated planet wide climate change that will have disastrous effects in many of our lifetimes. A true runaway condition will really test the merit of those billionaire bunkers.

                                        All for what? A couple hundred years of "advancement"? A blink in the lifespan of humanity, but one that dooms everyone to a hyper-competitive death drive towards an unlivable world.

                                        As a society, our understanding of "normal" has narrowed down to the last 80 years of civilization. A normal focused around consumption, which stands to take it all away just as fast.

                                        The techno-optimists never seriously propose any meaningful solution to millions losing their livelihoods and dignity so Sam Altman can add an extension to his doomsday bunker. They just go along with it as if they'll be invited down to weather the wet-bulb temperature.

                                      • georgemcbay a day ago

                                        > The author's catalog of harms is real. But it's worth noting that nearly identical catalogs were compiled for every major technological shift in modern history.

                                        I think both the scale (how many industries will be impacted effectively simultaneously) and speed of disruption that could be caused by AI makes it very different from anything we have seen before.

                                        • akoboldfrying a day ago

                                          I think it will be big, but I don't think it's bigger than the automation of manufacturing that began during the Industrial Revolution.

                                          Think about the physical objects in the room you're in right now. How many of them were made from start to finish by human hands? Maybe your grandmother knitted the woollen jersey you're wearing -- made from wool shorn using electric shears. Maybe a clay bowl your kid made in a pottery class on the mantelpiece. Anything else?

                                        • techpression a day ago

                                          I don’t think we can haphazardly apply history like this, it’s never the same, we just like to find patterns where there are none.

                                          The biggest harm that would come from AI is "everything at once": we're not talking about a single craft, we're talking about the majority of them. All while moving the control of said technology to even fewer privatized companies. The printing press didn't centralize all knowledge and utility to a few entities, it spread it. AI is knowledge and history centralized, behind paywalls and company policies. Imagine picking up a book about the history of music and on every second page there's an ad for McDonald's; this is how the internet ended up and it's surely how LLM providers will end up.

                                          And sure, some will run some local model here and there, but it will be irrelevant in a global context.

                                        • d_burfoot a day ago

                                          > they mimic and amplify the inherent racism present in their own training data

                                          LLMs turn out to be biased against white men:

                                          https://www.lesswrong.com/posts/me7wFrkEtMbkzXGJt/race-and-g...

                                          > When present, the bias is always against white and male candidates across all tested models and scenarios. This happens even if we remove all text related to diversity.

                                          • dogmayor a day ago

                                            Important sentences immediately before the ones you quote.

                                            > For our evaluation, we inserted names to signal race / gender while keeping the resume unchanged. Interestingly, the LLMs were not biased in the original evaluation setting, but became biased (up to 12% differences in interview rates) when we added realistic details like company names (Meta, Palantir, General Motors), locations, or culture descriptions from public careers pages.

                                            • daveguy a day ago

                                              Hah. Even LLMs know Meta and Palantir are evil af.

                                            • biophysboy a day ago

                                              Looking at the paper, the effect is significant but weak (5-7%), even with the conditionals that magnify the effect. I would be curious to see the effect if this experiment were performed on a slightly different categorical variable (e.g. how two white ethnicities are treated). I do think it's bad if preferences are "baked in" to the default though - prompting them away seems like a bad solution.

                                              • aprilthird2021 a day ago

                                                These are because of post-training. You have to give models such directives in post-training to correct the biases they bring in from scraping the whole internet (and other datasets like books, etc.) for data.

                                                • 113 a day ago

                                                  That's not a reliable source.

                                                • abeppu a day ago

                                                  > You’d need to be high enough in the org chart; far enough up the pyramid; advanced enough along the career ladder.

                                                  > To be an AI optimist, I’m guessing you must not be worried about where your next job might come from, or whether you can even find one. The current dire state of the job market, I have to assume, doesn’t scare you. You must feel secure.

                                                  So I think even these people should not feel secure. The perceived value of expertise is decreased by AI, which routinely claims to have PhD-level mastery of a lot of material. I think even for people with deep experience, in the current job market, many firms are reluctant to hire or pay in a way that's commensurate with that expertise. If you're a leader whose clout in an organization is partly tied to how many people are under you in an org-chart (it's dumb but we have all seen it), maybe that will begin to shrink quarter after quarter. Unless you can make it genuinely obvious that a junior or mid-tier person could not write a prompt that causes a model to spew the knowledge or insight that you have won through years or decades of work, your job may become vulnerable.

                                                  I think the class divide that is most relevant is more literal and old-school:

                                                  - Do you _own_ enough of a business (or several) that that's how you get most of your income? If so, maybe there's a way that AI will either cause your labor costs to decrease or your productivity per worker to increase, and either way you're probably happy.

                                                  - Can you invest specifically in the firms that are actively building AI, or applications thereof?

                                                  We're back to owners vs workers, with the added dynamic that if AI lets you partially replace labor with capital, then owners of course take a bigger share of value created going forward.

                                                  • zombot 13 hours ago

                                                    > then owners of course take a bigger share of value created going forward.

                                                    As has been the case with every technological change since forever.

                                                  • pixl97 a day ago

                                                    So the problem here is this isn't an article about AI.

                                                    When the Luddites broke machines and burned the buildings that held them, it wasn't because they hated machines (well, at least initially). It was because they hated starving in the streets.

                                                    This is just a continuing part of the class war that has been going on since humanity started writing. Now, the only thing that might make this different is class/capital may have finally gotten the power to win it.

                                                    Every time you vote against a social safety net, you are ensuring that our AI future is a dark one. History has repeated this over and over.

                                                    • jostylr a day ago

                                                      We are at a fork in the road with lots of potential darkness, but simply thinking any old social safety net is going to work is not going to cut it. Nets can be, and generally are, used to capture.

                                                      An interesting multi-pronged approach is post labor economics which is being promoted by David Shapiro: https://www.youtube.com/@DaveShap

                                                      The basic premise is that currently we have households being supported by labor, capital, and transfers. With labor largely going away, that leaves capital and transfers. Relying on transfers alone will lead to ownership of the people by government. So we have to find ways to generate way more distributed capital ownership by the masses. This is what he plans, discusses, and promotes.

                                                      • pseudalopex a day ago

                                                        A generality about 1 word of a metaphor is not a legitimate argument.

                                                        I read what David Shapiro called concrete interventions.[1] They were public ownership, redistribution, public ownership or redistribution systems he called predistribution, shorter work weeks, and underdeveloped blockchain ideas.

                                                        [1] https://daveshap.substack.com/p/understanding-post-labor-eco...

                                                      • _DeadFred_ 2 hours ago

                                                        The very reason why we object to state ownership, that it puts a stop to individual initiative and to the healthy development of personal responsibility, is the reason why we object to an unsupervised, unchecked monopolistic control in private hands. We urge control and supervision by the nation as an antidote to the movement for state socialism. Those who advocate total lack of regulation, those who advocate lawlessness in the business world, themselves give the strongest impulse to what I believe would be the deadening movement toward unadulterated state socialism.

                                                        --Theodore Roosevelt

                                                      • small_model a day ago

                                                        It seems like the author hasn't really used the latest models. I wrote my last line of code about a month ago, after 20+ years of coding. Claude Code can do it for me: better, faster, and it never gets tired, etc. Yes, I have to keep it on a leash, but humans coding is over, unless it's to learn or for fun.

                                                        Actually, it's the lower classes that will escape AI replacing their jobs the longest; unskilled physical work will remain human for a while yet, whereas any job that can be done remotely is likely to be replaced by one or more agents.

                                                        • bobro 9 hours ago

                                                          AI is good at coding because it is heavily text focused with excellent documentation and a relatively clear binary on works/doesn’t work. I wouldn’t so quickly lump all other kinds of remote work into the same bucket based on your experience coding.

                                                          • jdross a day ago

                                                            The new robot demos from Unitree make me wonder how many classes of unskilled labor are about to be automated (garbage collection, laundry & dishes, pothole repairs, last mile delivery, simple food preparation…)

                                                            Skilled labor still has some legs.

                                                            • small_model a day ago

                                                              I don't see any humanoid robots around at the moment, whereas a huge number of knowledge based workplaces use non-embodied AI now every day.

                                                              • small_model a day ago

                                                                Can't wait for silent robots to collect the garbage, human ones seem to enjoy making as much racket as they can.

                                                                • nicoburns a day ago

                                                                  There are already human-operated robots that collect garbage. Things like https://www.youtube.com/watch?v=9pl9vRCC6V0. If the automated robots end up being anything like that, I wouldn't expect them to be silent.

                                                              • biophysboy a day ago

                                                                A lot of low wage work isn't physical

                                                                • lowsong a day ago

                                                                  > It seems like the author hasn't really used the latest models

                                                                  The author addresses this point.

                                                                  > While I’m sure the technology and its costs will continue to improve, it’s hard to see how that would mitigate most of these harms. Many would just as likely be intensified by greater speed, efficiency, and affordability.

                                                                  > This sort of technology distributes instability to the many at the bottom, while consolidating benefit at the top—and there has arguably never been a more efficient mechanism for this than AI.

                                                                  Personally, I'm really tired of every criticism of AI being met with "you haven't tried the latest models". The model isn't the point. It doesn't matter how good it is, it cannot possibly outweigh the harms.

                                                                  > I wrote my last line of code about a month ago after 20+ years coding

                                                                  You are exactly the kind of person the author talks about

                                                                  > To be an AI optimist, I’m guessing you must not be worried about where your next job might come from, or whether you can even find one. The current dire state of the job market, I have to assume, doesn’t scare you. You must feel secure. Maybe it’s because you’ve already made a name for yourself. Maybe you’re known at conferences, or on podcasts. Maybe you’re just senior enough that your résumé opens doors for you.

                                                                  I fear you've entirely missed the point of the article. Just because you believe you can get value from it, does not make up for the downsides to everyone else, and it's quite literally privilege to ignore that.

                                                                  • small_model a day ago

                                                                    The world changes; you either adapt or you don't. People who saw this coming could have positioned themselves with plans A, B, and C. No different from when other society-changing technologies arrived in the past. What does crying about it in blog posts achieve?

                                                                    • dodu_ a day ago

                                                                      What "repositioning" did you have to do?

                                                                      • lowsong a day ago

                                                                        The adoption of AI in society at large is not a foregone conclusion. Acting as if it's unstoppable and washing your hands of the consequences is wilful ignorance. But it doesn't have to be this way. You do have a choice to not use or encourage this technology.

                                                                        • small_model a day ago

                                                                          Way too late now, it's out of the bottle. If you don't use it others will. Best we can do is encourage institutions to safeguard its development.

                                                                          • lowsong a day ago

                                                                            Again, you're assuming it's inevitable.

                                                                            Think of all of the other public health changes over the years: CFCs, leaded gasoline, asbestos, etc. Apparent miracles of technology that looked unassailable in their ubiquity, and through the blood and tears of many they were all but eliminated.

                                                                            That's the crux of the article. There are harms, and if you ignore them it's because you don't think it'll affect you.

                                                                            I'm not trying to be rude, we all make our choices and nobody is a saint. I still eat meat even knowing the damage of the meat industry. But don't pretend you don't have a choice here, or wash your hands of the harms because you feel you won't make a difference.

                                                                            • small_model a day ago

                                                                              It's more comparable to the internet, mobile phones and social media. It is likely to cause some harms as well as provide great utility. I disagree it's not inevitable though and I do think the harms can be mitigated, that where the effort should be focused.

                                                                  • petercooper a day ago

                                                                    > AI optimism requires believing that you … are not among those who will be driven to psychosis, to violence, or even to suicide by LLM usage. At the very least, this means you feel secure in your own mental health

                                                                    This has echoes of moral panic to me. We hear about mental health crises triggered by LLMs in the media because they're novel and uncommon, and the stories grab attention. The modern equivalent of "video games cause violence", or "jazz is corrupting the youth"?

                                                                    I’ll concede AI has many perils, and I doubt we’ve even broken the surface of it yet, but I don’t think user psychosis is either now, or going to be, a common one.

                                                                    • elzbardico a day ago

                                                                      Well, maybe one day, there’ll be a situation justifying a moral panic.

                                                                      We can’t simply dispose of an argument just because it smells in a particular way to us.

                                                                      • antonvs 18 hours ago

                                                                        “What can be asserted without evidence can also be dismissed without evidence.” — Christopher Hitchens

                                                                    • tgv a day ago

                                                                      Author certainly has a point. The central idea is (IMO) best expressed in this quote:

                                                                      > to focus on its [i.e., AI's] benefits to you, you’re forced to ignore its costs to others.

                                                                      • xnx a day ago

                                                                        Also works if you substitute "technology" for "AI".

                                                                        • tgv 13 hours ago

                                                                          True, and other changes in society as well. But in contrast to e.g. the introduction of Phillips screws, AI (or rather: LLMs) is a biggie, and (IMO) one where the negatives clearly outweigh the positives.

                                                                          • xnx 9 hours ago

                                                                            Definitely agree that LLMs are a big deal, but I'm holding out hope that they are a net positive. Even the current wave of false/fake news content could have benefits if it results in emphasizing chains of trust to reliable information.

                                                                            Are there other technologies you think the negatives outweigh the positives? Leaded gasoline and paint come to mind for me.

                                                                        • tim333 a day ago

                                                                          I'm not sure that's true. Like I find the Google AI handy but that doesn't mean I have to ignore that others may be annoyed by it.

                                                                          • tgv 13 hours ago

                                                                            It's of course a quote of a handful of words from a 5000 word article. Not to be snarky, but did you read it?

                                                                            • tim333 5 hours ago

                                                                              I did read it, and I'll give you that my comment wasn't that in line with the article. I'm not that convinced by the optimism being a class privilege thing though.

                                                                        • rossant 21 hours ago

                                                                          There’s an argument I don’t hear very often. Society is becoming increasingly dependent on data/compute centers running AI models. It would take only a few bombs from a hostile country to disrupt the entire economy, army and so on, especially if workers can no longer do anything useful without AI.

                                                                          • shmobot 15 hours ago

                                                                            Exactly, this type of economy feels like it's built for the peace and globalization times. Sadly, lately it seems like those times are over.

                                                                          • DesaiAshu a day ago

                                                                            India and Africa are significantly more optimistic about AI than the US and EU

                                                                            There exists great promise in AI to be an equalizing force, if implemented well

                                                                            The future is yet to be written

                                                                            • wolrah 11 hours ago

                                                                              > There exists great promise in AI to be an equalizing force, if implemented well

                                                                              *looks at who's controlling the implementation of LLM-based "AI"*

                                                                              *looks at who the largest beneficiaries of societal inequalities are*

                                                                              theyre_the_same_picture.jpg

                                                                              • hyperadvanced a day ago

                                                                                Being optimistic is a bad way to get good outcomes

                                                                                • antonly a day ago

                                                                                  How is it an equalising force if the commodity is sold at "market value"? This will just lead to more wealth concentration, no?

                                                                                  • simianwords a day ago

                                                                                    You are conflating equity and equality. It's equalising in the sense that it democratises access to data and knowledge, but that does not mean it will end up with everyone being equal in terms of wealth.

                                                                                    • antonly 17 hours ago

                                                                                      But it won't even do that. In the pursuit of extracting maximal capital, better models with "better" knowledge will be gated behind higher prices. If you pay more, you get more.

                                                                                  • wasmainiac a day ago

                                                                                    > There exists great promise in AI to be an equalizing force, if implemented well

                                                                                    That doesn’t sound like a promise then no?

                                                                                  • tim333 a day ago

                                                                                    The author perhaps pessimistically seems to think AI will benefit some privileged groups and hurt others but it could turn out to be something like clean water and sanitation that benefits almost everyone. I don't think it has to be bad.

                                                                                    • abcde666777 a day ago

                                                                                      Many of the potential technologies we might unlock in the future come with great danger. We saw this clearly perhaps for the first time with atomic weapons - the kind of technology with which we could truly destroy ourselves.

                                                                                      Many other advancements might also carry that kind of existential danger. Genetic engineering, human machine interfacing, actual AGI.

                                                                                      I see the technological climb as a bit like climbing Mt Everest - it's possible that we might reach the peak and one day live on some kind of Star Trekian society, but the climb becomes increasingly treacherous along with the risk that we perish.

                                                                                      The trouble of course is that there's nothing else for us to do: it's in our nature to explore new frontiers. It's just not clear whether we'll be able to handle the responsibility that comes with the power.

                                                                                      • suzzer99 a day ago

                                                                                        I started playing Civ (any version) for the first time a few months ago. I'm somewhat addicted, but at the same time I might find a new game soon because I get upset when the AIs attack me. Sometimes I take it personally and decide to just destroy that AI to the detriment of winning. I also am leery of an AI if they attacked me in a previous game. I also don't like to attack a previously friendly AI out of the blue, even if I'm supposed to to win the game.

                                                                                        I realize these are all silly and "me" problems. But like the author, it's interesting that I actually have to work to emotionally divest myself from being upset at a bunch of 1s and 0s.

                                                                                        • isahers a day ago

                                                                                          I disagree with most of this article. Disclaimer: I'm a junior engineer and I believe both that: 1. AI is going to take my job, 2. AI is going to do incredible good for the world.

                                                                                          I don't see how these are distinct. It's a technology shift, of course it's going to make certain jobs obsolete - that's how technology shifts work.

                                                                                          I'm not going to go through every quote I disagree with, but unlike some AI negativity discourse (some of which I agree with btw, being an optimist doesn't mean being irrational) this just reads as old man yells at cloud. Mainly because the author doesn't understand the technology, and doesn't understand the impact.

                                                                                          The author clearly does not understand model capabilities (seems to be in the camp that these are just "prediction machines") as they claim it's unreasonable to expect models to "develop presently impossible capabilities". This is not at all supported by prior model releases. Most, if not all, major releases have displayed new capabilities. There are a lot more misconceptions on ability, but again not going to go through all of them.

                                                                                          The author also doesn't understand the impact, saying stuff like "Tech doesn’t free workers; it forces them to do more in the same amount of time, for the same rate of pay or less". What? Is the author unaware of what average labor hours were like before the industrial revolution? AI is clearly going to be hugely net positive for white-collar (and with robots eventually blue-collar) workers in the near future (it already is for many).

                                                                                          • elzbardico a day ago

                                                                                            Average labour hours in a year dramatically increased with the Industrial Revolution.

                                                                                            They would only decrease much later, after a long period of social conflict, economic growth, and technological progress.

                                                                                            During the early phase of the Industrial Revolution (roughly 1760–1850):

                                                                                            Agricultural workers who once labored seasonally were pushed into factory schedules of 12–16 hours per day, 6 days per week.

                                                                                              Annual labor hours often exceeded 3,000 per worker.

                                                                                            This was not because work became harder physically, but because capital-intensive machinery became expensive and had to run continuously to be profitable.

                                                                                            Time discipline replaced task-based work. Before industrialization, a farmer might stop when tasks were done; factory workers had fixed shifts.

                                                                                            This trend persisted into the late 19th century.

                                                                                            • ungovernableCat a day ago

                                                                                              The labour struggle for rights we see as basic today (40h work weeks, free weekends) was bloody and deadly.

                                                                                              And this was without surveillance tech and automated police drones or w/e else Palantir is working on right now. If we're going by historical precedent this transition won't be pretty, even if you're hoping for a nice optimistic end result.

                                                                                              I'm not so sure having a beefy 401k and maybe a couple of rental properties will be enough to insulate some of the more comfortable HN posters from all the potential chaos.

                                                                                              • elzbardico a day ago

                                                                                                  I am pretty sure we are running towards a big 1929-style system correction. I may be wrong, but that people don't even try to contemplate this possibility seems bonkers to me. And in that case, those 401ks are not going to be worth much vis-à-vis the price of butter, and neither can you count on rental properties as an income source when everywhere becomes Flint, MI.

                                                                                              • simianwords a day ago

                                                                                                I'm not sure what your comment is trying to say.

                                                                                                >They would only decrease much later, after a long period of social conflict, economic growth, and technological progress.

                                                                                                  The technological progress mentioned here, the Industrial Revolution, was the necessary step. It was not primarily social conflicts.

                                                                                                The above poster is making exactly the same point.

                                                                                                You are right that there was an aberration in the middle where people worked way more hours but this was when things were consolidating.

                                                                                                  But your post implies that people's preference was to work fewer hours as opposed to working more hours and earning more money, which is not the case.

                                                                                            • OneMorePerson a day ago

                                                                                                Maybe in our system this is true. If you believe the latest surveys (I didn't see them publish their methodology), countries like China have way higher optimism about AI than the West, so maybe it's a class privilege because we expect only a small amount of the gains will actually go to the wider society.

                                                                                              • r00tanon a day ago

                                                                                                Josh Collinsworth paints an accurate picture here, and he’s not wrong. People will be hurt by the displacement and disruption that new technology brings, not just AI. It has been happening since humans first picked up stones and sharpened sticks. No one says you have to be happy about it, or even optimistic, even if you happen to be part of a privileged class.

                                                                                                • peteforde a day ago

                                                                                                  I hope someone close to the author can break it to him that actually, we're not all seeing endless videos of women being strangled...

                                                                                                  • nunez 19 hours ago

                                                                                                    Couldn't have said this better myself. Thank you for writing this.

                                                                                                    • yesimahuman a day ago

                                                                                                      This sums up a lot of what I've been feeling lately. I've had a hard time getting excited about the AI long game, unlike a lot of my peers. I think being out of the industry for a few years (sold my startup a while back), and spending less time in that privileged tech bubble has made me far more aware of and concerned about average people and the future of non-tech hubs around the country/world. While I get value out of AI tools today, I see a lot more downsides in the immediate future for everyone that isn't directly working on this technology with skin in the game. Especially when our society doesn't seem even remotely ready for the big changes coming.

                                                                                                      • akoboldfrying a day ago

                                                                                                        > Imagine the damage one bad kid could cause using deepfakes

                                                                                                        Deepfakes are highly damaging right now because much of the world still doesn't realise that people can make deepfakes.

                                                                                                        When everyone knows that a photo is no longer reliable evidence by itself, the harm that can be done with a deepfake will drop to a similar level as that of other unreliable forms of evidence, like spoken or written claims. (Which is not to say that they won't be harmful at all -- you can still damage someone's reputation by circulating completely fabricated rumours about them -- but people will no longer treat photorealistic images as gospel.)

                                                                                                        • georgemcbay a day ago

                                                                                                          I would say it is privilege (now) combined with denialism (for the future).

                                                                                                          Not only do you have to believe that you're in the group that benefits, but you also have to believe that "AI" improvement from here forward will stall out prior to the point where it goes from assisting your job to replacing it wholesale. I suspect there are far fewer people to whom that actually applies than there are people who believe it applies to them.

                                                                                                          It is very easy for us to exist in that denialism bubble until we see the machine nipping at our heels.

                                                                                                          And that is not even getting into second order effects, like even if you do provide AI-proof value, what happens when some significant percentage of everyone else (your potential customers) loses their income and society starts to crumble?

                                                                                                          • tptacek a day ago

                                                                                                            Most of the logic of this post will be incoherent in a world where AI has replaced software jobs wholesale. You have to pick a lane. Is it so effective that it (and the labor market more broadly) needs to be aggressively regulated, or is it not very useful for anything but trolling? It can't be both.

                                                                                                            • Wilder7977 a day ago

                                                                                                              This assumes every decision-maker is a rational actor. Just today an executive was rambling about "quantum-empowered AI". These are the people who take decisions about firing workers. It is entirely possible that AI will replace many jobs while being useless (at achieving what those workers do). At least in the short-medium period.

                                                                                                              We would live in a post-scarcity utopia if big economic decisions were taken based on long-term optimal effects.

                                                                                                              • tptacek a day ago

                                                                                                                I'm interested in how you can tell an industry-wide job displacement story about AI, where AI isn't actually doing the job, that isn't a just-so story.

                                                                                                                • hyperadvanced a day ago

                                                                                                                  If you wanted to tell such a story, you’d have to find examples of companies spending bazillions on new AI tooling, but failing to hit their top level OKRs. I suspect there will be at least a few of these by the end of 2026 - even a great technology can seem like an abacus in the hands of a disorganized and slow moving org.

                                                                                                                  • tptacek a day ago

                                                                                                                    The story only matters if it produces an industry-wide displacement in jobs. Failed billion-dollar IT projects are not a new thing, and don't disrupt the entire labor market.

                                                                                                                    To be clear: I'm not claiming that AI rollouts won't be billion-dollar failed IT projects! They very well could be. But if that's the case, they aren't going to disrupt the labor market.

                                                                                                                    Again: you have to pick a lane with the pessimism. Both lanes are valid. I buy neither of them. But I recognize a coherent argument when I see one. This, however, isn't one.

                                                                                                                    • scarmig a day ago

                                                                                                                      There's a coherent story that straddles both lanes, by assuming that the human economy is in some weird place where the vast majority of humans don't create real economic value and mostly get employment through inertia and custom, and that AI, despite being worthless, provides an excuse for employers to break through taboos and traditions and eliminate all those jobs. Quite a stretch, but it's coherent at least.

                                                                                                                • marginalia_nu a day ago

                                                                                                                  Seems to be what is happening in a lot of the places it's encroaching.

                                                                                                                  AI journalism is strictly worse than having a human research and write the text, but it's also orders of magnitude cheaper. You see prompt fragments and other blatant AI artifacts in news articles almost every day. So we get newspapers that have the same shape they used to, but that don't fulfill their purpose. That's a development that was already underway before AI, but now it's even worse.

                                                                                                                  Walked past a billboard the other day with an advertisement that was blatantly AI-generated. It had a logo with visible JPEG artifacts plastered on top of it. Real amateur-hour stuff. It probably was as cheap as it looked.

                                                                                                                  You see the trend in software too. Microsoft's recent track record is a good example of this. They can barely ship a working notepad.exe anymore.

                                                                                                                  Supposedly some birds will eat cigarette butts thinking they're bugs, and then starve to death with a belly full of indigestible cigarette filters. Feels a lot like what is happening to a lot of industries lately.

                                                                                                                  • hyperadvanced a day ago

                                                                                                                    In retrospect, it was crazy hearing stories about how SF UX designers would be paid $250 to essentially do what Figma does now.

                                                                                                                • NoOn3 a day ago

                                                                                                                  Sometimes it is effective, but very unreliable.

                                                                                                                • josemanuel a day ago

                                                                                                                  In the end there will be the owners of the farmland, and whoever/whatever they employ.

                                                                                                                  • leptons a day ago

                                                                                                                    >what happens when some significant percentage of everyone else (your potential customers) loses their income and society starts to crumble?

                                                                                                                    They will start to burn down data centers.

                                                                                                                    • elzbardico a day ago

                                                                                                                      If you believe ICE's purpose is to enforce immigration law… yeah. Quite possible.

                                                                                                                      • EA-3167 a day ago

                                                                                                                        There's a reason the Musks and Thiels of the world invested in luxury doomsday bunkers: it won't just be property that people want to burn.

                                                                                                                        • WalterBright a day ago

                                                                                                                          The Soviets used the "iron broom" (i.e. murder) on the wealthy people.

                                                                                                                          It didn't make anyone better off.

                                                                                                                          • NoOn3 a day ago

                                                                                                                            The Soviets' history is not so simple. ;)

                                                                                                                            • WalterBright a day ago

                                                                                                                              The Soviets aren't the only ones who tried that. It's never worked out anywhere it was tried.

                                                                                                                              • leptons a day ago

                                                                                                                                It seems to have worked out alright for the French.

                                                                                                                                • WalterBright a day ago

                                                                                                                                  It worked out very badly for them. See "Reign of Terror". The Revolution ended when Napoleon declared himself a hereditary monarch. Things went full circle.

                                                                                                                                  • WalterBright a day ago

                                                                                                                                    Consider also the American Revolution. Nobody went on a rampage to kill the wealthy. Things went very well for America.

                                                                                                                                • EA-3167 a day ago

                                                                                                                                  I’m not arguing in favor of it, I’m just aware of how the world works.

                                                                                                                                  You also seem to have forgotten France, among other places where the history wasn't as grim as Russia's. Frankly, nothing Russia has ever done seems to have made their lot better.

                                                                                                                                  • WalterBright a day ago

                                                                                                                                    You're overlooking the Reign of Terror.

                                                                                                                                    Consider Pol Pot (Cambodia). How'd that work out? Cuba? How'd that go? Venezuela, anyone?

                                                                                                                                    For a counter-example, the US. The greatest rise in the standard of living in history, with free markets where paupers could get rich.

                                                                                                                                    • EA-3167 2 hours ago

                                                                                                                                      I'm not overlooking anything, including the desperate circumstances that led things like the French revolution and ensuing violence. The US went through a war to be free, plenty of people died and it could have easily gone another way. If everyone followed your advice India would still belong to the UK.

                                                                                                                                      I also think it's unhelpful to compare utter madness like the killing fields of Cambodia with a typical (even failed) revolution. More typical outcomes can be seen in the wake of the "Arab Spring", which arguably achieved nothing at great cost. That doesn't mean people won't try again, though; drowning people are dangerous and irrational, and I'd argue that's why leaders and people with power should work very hard to prevent those circumstances from emerging.

                                                                                                                                      In a country like the US it isn't that hard.

                                                                                                                      • singularity2001 a day ago

                                                                                                                        I'm optimistic that the whole population will profit. Even if in some form of mild communism.

                                                                                                                        • delichon a day ago

                                                                                                                          Unpopular opinion: the reverse. God created men, but Sam {Colt, Altman} made them equal. Commoditizing physical or intellectual power makes it more accessible to the lower classes, while keeping it expensive protects class privilege.

                                                                                                                          • antonly a day ago

                                                                                                                            How does it help equalise if the commodity is sold "at market value" by the richest, and all intellectual input is immediately fed back into the tool? At best it looks like a weapon to suppress.

                                                                                                                            • scarmig a day ago

                                                                                                                              Imagine two bright high school students. One goes to a tony private school taught by people with PhDs; the other is in a developing country. They both want to understand, say, Morse theory. In a world without AI, only the former has access to knowledge about it. In a world with AI, the kid in the developing country can access that knowledge roughly as easily, even paying for tokens; and free-tier models provide essentially absolute equality.

                                                                                                                              • antonly 17 hours ago

                                                                                                                                How do we know free tier models will continue to provide adequate services? They are pretty good right now, but only because all big players are happy to burn enormous amounts of money.

                                                                                                                            • notanastronaut 9 hours ago

                                                                                                                              You're not alone in this line of thinking. While "AI" has its perils, it strikes a nerve with people who previously felt comfortable in their jobs. Worrying about AI is a privileged position; poorer people cannot afford that luxury.

                                                                                                                              • elzbardico a day ago

                                                                                                                                This is an incredibly naive take, and it definitely has a whiff of anti-intellectualism to it.

                                                                                                                              • metalman a day ago

                                                                                                                                If the ultimate form of goodness is a miracle and the ultimate form of badness is evil, then in reality evil is infinitely easier to realise, perform, refine, and produce at scale, while miracles remain hard and rare, yet are somehow spoken of as still to come and assuredly triumphant, however eventually. Now with real-time personalised hallucinations!

                                                                                                                                • caconym_ a day ago

                                                                                                                                  "Class privilege" doesn't go far enough. If the class war is a war then AI may be its Manhattan project: a weapon that destroys the value of, and the ruling elite's need for, labor.

                                                                                                                                  I continue to be shocked by the way people (with platforms) talk up to the (speculative) line where AI replaces most or all jobs, and then lamely suggest that this will be bad for the people who've lost their jobs because they will be poor, or something. No. What actually happens in that scenario is that money ceases to have value, at least in the way we currently understand it to. That scenario will produce a handful of monsters—sociopathic trillionaire brains encysted in layers of automation and automated production—that will crave more resources, more land, more power, and they will fight each other by various means for those things, and the rest of us will be at best in the way.

                                                                                                                                  This scenario is not a given, because it's not obvious that AI can become this capable in the near term where the stage is set for such a profoundly lopsided outcome, but you can bet these people are thinking about it now, if not talking about it, if not materially preparing for it. And they are, indeed, the only people with reason to feel optimistic about it.

                                                                                                                                  • elzbardico 21 hours ago

                                                                                                                                    And if money loses value why would the guys with guns and the guys who run the AI still take orders from the oligarchs?

                                                                                                                                    • caconym_ 21 hours ago

                                                                                                                                      I'm going to assume this is meant as a reply to https://news.ycombinator.com/item?id=47043601 but my answer is the same: why (in this hypothetical scenario) are you assuming the AI owners aren't members of all three of these classes?

                                                                                                                                    • elzbardico 21 hours ago

                                                                                                                                      Once money loses value, sociopathic trillionaire brains become immediately obsolete and power migrates to the men with guns and a very thin technocratic class.

                                                                                                                                      • caconym_ 21 hours ago

                                                                                                                                        > sociopathic trillionaire brains

                                                                                                                                        > men with guns and a very thin technocratic class

                                                                                                                                        Bold of you to assume these are disjoint. In the US we're spiraling toward oligarchy with a billionaire president whose primary goal in office is evidently to enrich himself and his family, and his regime is (with the $encouragement of AI titans) aggressively pushing integration of AI in various government and military applications.

                                                                                                                                        If I was somebody like Sam Altman and I had indeed been thinking about this, I'm not sure things could be going much better for me wrt. that scenario.

                                                                                                                                    • bitwize a day ago

                                                                                                                                      The best time to have a communist revolution in the USA was 100 years ago. The second best time is now.

                                                                                                                                      • RickJWagner a day ago

                                                                                                                                        This is why social media, including HN, can be damaging.

                                                                                                                                        The author is a grown man, describing how he felt after being insulted by a machine.

                                                                                                                                        Imagine how high school kids feel after being mocked and humiliated by movie stars. That’s exactly what happened to a group of school kids in 2019. Chris Evans, Alyssa Milano, John Cusack, Debra Messing and others mocked the kids, made fun of their looks, made unflattering comparisons about them, etc.

                                                                                                                                        What kind of damage could that do at that point in someone’s life? It’s horrendous.

                                                                                                                                        • semiinfinitely a day ago

                                                                                                                                          finally a based take on ai

                                                                                                                                          • aogaili a day ago

                                                                                                                                            His characterization is the privilege.

                                                                                                                                            Most people are left with no choice but to adapt or perish. The fact that he is contemplating optionality in the most profound automation in the industry is itself a form of... privilege.

                                                                                                                                            • senko a day ago

                                                                                                                                              Always fun to read white male tech employees living in the US talking about privilege, so high up Maslow's pyramid they don't even see the ground.