• WorkerBee28474 12 hours ago

    Not worth reading.

    > this paper focuses specifically on the zero-sum nature of AI labor automation... When AI automates a job - whether a truck driver, lawyer, or researcher - the wages previously earned by the human worker... flow to whoever controls the AI system performing that job.

    The paper examines a world where people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 to write a document. That will never happen.

    • addicted 10 hours ago

      Your criticism is completely pointless.

      I’m not sure what your expectation is, but even your claim about the assumption the paper makes is incorrect.

      For one thing, the paper assumes that the amount transferred from the human lawyer to the AI lawyer would be $500 plus the productivity gains brought by AI, i.e. more than 100%.

      But that is irrelevant to the actual paper. You can apply whatever multiplier you want as long as the assumption that human labor will be replaced by AI labor holds true.

      Because the actual nature of the future is irrelevant to the question the paper is answering.

      The question the paper is answering is what impact such expectations of the future would have on today’s economy (limited to modeling the interest rate). Such a future need not arrive or even be possible as long as there is an expectation it may happen.

      And future papers can model different variations on those expectations (so, for example, some may model that 20% of labor in the future will still be human, etc).

      The important point, as far as the paper is concerned, is that the expectation that AI will replace human labor, with some percentage of the wealth that once went to human labor now accruing to the owner of the AI, will lead to significant changes in current interest rates.

      This is extremely useful and valuable information to model.

      • mechagodzilla 2 hours ago

        The $500 going to the "AI Owner" instead of labor (i.e. the human lawyer) is the productivity gain though, right? And if that was such a productivity gain (i.e. the marginal cost was basically 0 to the AI owner, instead of, say, $499 in electricity and hardware), the usual outcome is that the cost for such a product/service basically gets driven to 0, and the benefit from productivity actually gets distributed to the clients that would have paid the lawyer (who suddenly get much cheaper legal services), rather than the owner of the 'AI lawyer.'

        We seem pretty likely to be headed towards a future where AI-provided services have almost no value/pricing power, and just become super low margin businesses. Look at all of the nearly-identical 'frontier' LLMs right now, for a great example.

        • larodi an hour ago

          Indeed, there's a fair chance AI only amplifies certain sectors' wages, while fully automated work will not command any magic margin. No more than, say, smart trading does once too many people focus there.

      • geysersam 2 hours ago

        > zero sum nature of labor automation

        Labor automation is not zero sum. This statement alone makes me sceptical of the conclusions in the article.

        With sufficiently advanced AI we might not have to do any work. That would be fantastic and extraordinarily valuable. How we allocate the value produced by the automation is a separate question. Our current system would probably not be able to allocate the value produced by such automation efficiently.

        • pizza 9 hours ago

          This almost surely took place somewhere in the past week alone, just with a lawyer being the mediating human face.

          • riku_iki 11 hours ago

            > people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 to write a document.

            there will very soon be a caste of high-tech lawyers able to handle many times the volume of work thanks to AI, and many other lawyers will lose their jobs.

            • sgt101 9 hours ago

              I know one!

              She's got international experience and connections but moved to a small town. She was a magic circle partner years ago. Now she has an FTTP connection and has picked up a bunch of contracts that she can deliver on with AI. She underbid some big firms on these because their business model was traditional rates, and hers is her cost * x (she didn't say, but x > 1.0 I think).

              Basically she uses AI for document processing (discovery) and drafting. Then treats it as the output of associates and puts the polish on herself. She does the client meetings too obviously.

              I don't think her model will last long - my guess is that there will be a transformation in the next 5 years across the big firms and then she will be out of luck (maybe not at the margin though). She won't care - she'll be on the beach before then.

              • 6510 30 minutes ago

                This is how it has always been. Automation reduces the traditionally required knowledge, makes the tasks less complicated, and increases productivity. This introduces new complexity that machines can't solve.

                The funny part is that people think we will run out of things to do. Most people never hire a lawyer because they are much too expensive.

                • petesergeant 11 hours ago

                  Yes, that is obvious. The point you are replying to is that oversupply will mean the cost to the consumer will fall dramatically too, rather than the AI owner capturing all of the previous value.

                  • riku_iki 10 hours ago

                    It depends. If there are only one or a few winners in the market, they will dictate prices once human labor has been out-competed on price or quality.

                    • jezzabeel 9 hours ago

                      If prices are determined by scarcity, then the cost of services will more likely be tied to the price of energy.

                • kev009 11 hours ago

                  That's a bit too simplistic; would a business have paid IBM the same overheads to tabulate and send bills with a computer instead of a pool of billing staff? In business the only justification for machinery and development is that you are somehow reducing overheads. The tech industry gets a bit warped in the pseudo-religious zeal around the how and that's why the investments are so high right now.

                  And to be transparent, I'm very bearish on what is being marketed to us as "AI"; I see value in the techs flying underneath this banner, and it will certainly change white collar jobs, but there's endless childish and comical hubris in the space from the fans, engineers, and oligarchs jockeying to control the space and narratives.

                  • smeeger 11 hours ago

                    foolish assumption on your part

                    • gopalv 11 hours ago

                      > The paper examines a world people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 to write a document

                      Is your theory that the next week there will be an AI lawyer that charges only $400, and then it's a race to the bottom?

                      There is a proven way to avoid a race to the bottom for wages, which is what a trade union does: a union, by acting as one, controls a large supply of labour to keep wages high.

                      Replace the union with a company and wages with prices, and it could very well be that a handful of companies keep prices high, creating a seller's market where everyone avoids a race to the bottom by incidentally making similar pricing calls (or by flat-out illegal collusion).

                      • WithinReason 11 hours ago

                        You would need to coordinate across thousands of companies across the entire planet

                        • rvense 11 hours ago

                          That seems unlikely - law is very much tied to a place.

                        • habinero 11 hours ago

                          There have been several startups that tried it, and they all immediately got into hot water and failed.

                          The core problem is lawyers already automate plenty of their work, and lawyers get involved when the normal rules have failed.

                          You don't write a contract just to have a contract, you write one in case something goes wrong.

                          Litigation is highly dependent on the specific situation and case law. They're dealing with novel facts and arguing for new interpretations, not milling out an average of other legal works.

                          Also, you generally only get one bite at the apple; there are no do-overs if your AI screws up. You can hold a person accountable for malpractice.

                          • chii 11 hours ago

                            > The core problem is lawyers already automate plenty of their work, and lawyers get involved when the normal rules have failed.

                            this is true - and the majority of a lawyer's work lies in knowing past information and synthesising possible futures from that information. In contracts, they write up clauses to protect you from issues that have arisen before (and from potential future issues, depending on how good/creative said lawyer is).

                            In civil suits, discovery is what used to take enormous amounts of time, but recent automation in discovery has helped tremendously, and vastly reduced the amount of grunt work required.

                            I can see AI helping in both of these aspects. Now, whether the newer AIs can produce the type of creative work that lawyers need to do after information extraction is still up for debate. So far, it doesn't seem to have reached the level at which a client would trust a purely AI-generated contract imho.

                            I suspect the day you'd trust an AI doctor to diagnose and treat you, would also be the day you'd trust an AI lawyer.

                          • echelon 11 hours ago

                            > There is a proven way to avoid a race to the bottom for wages, which is what a trade union does

                            US automotive, labor, and manufacturing unions couldn't remain competitive against developing economies, and the jobs moved overseas.

                            In the last few years, after US film workers went on strike and renegotiated their contracts, film production companies had the genius idea to start moving productions overseas and hire local crews. Only talent gets flown in.

                            What stops unions from ossifying, becoming too expensive, and getting replaced on the international labor market?

                            • amanaplanacanal an hour ago

                              Possibly protectionist tariffs.

                              • js8 11 hours ago

                                > What stops unions from ossifying, becoming too expensive, and getting replaced on the international labor market?

                                Labor action, such as strikes.

                                • somenameforme 10 hours ago

                                  That doesn't make any sense as a response to his question. Labor actions just further motivate employers to offshore work. And global labor unions probably can't function because of sharp disparities in what constitutes good compensation.

                            • quotemstr 11 hours ago

                              > Not worth reading.

                              I would appreciate a version of this paper that is worth reading, FWIW. The paper asks an important question: shame it doesn't answer it.

                              • standfest 10 hours ago

                                i am currently working on a paper in this field, focusing on the capitalisation of expertise (analogous to Marx) in the dynamics of the culture industry (Adorno, Horkheimer). it integrates the theories of Piketty and Luhmann. it is rather theoretical, with a focus on the European theories (instead of Adorno you could theoretically also reference Chomsky). is this something you would be interested in? i can share the link of course

                                • itsafarqueue 2 hours ago

                                  Yes please

                                  • thrance 10 hours ago

                                    Be careful, merely mentioning Marx, Chomsky or Piketty is a thoughtcrime in the new US. Many will shut down rather than engage with what you are saying.

                                • cgcrob 11 hours ago

                                  They also forget the economic model where you have to pay $5000 for a real lawyer after the fact, to undo the mess you got yourself into by trusting the AI's output in the first place, which contained a nuanced mistake that the opposing "meat" lawyer picked up in 30 seconds flat.

                                  The proponents of AI systems seem to mostly misunderstand what you're really paying for. It's not writing letters.

                                  • jjmarr 10 hours ago

                                    https://www.stimmel-law.com/en/articles/story-4-preprinted-f...

                                    Love this story so much I just posted it. Although it's from an era in which you'd buy CDs and books containing contracts, it's still relevant with "AI".

                                    > “No lawyer writes a clause who is not prepared to go to court and defend it. No lawyer writes words and lets others do the fighting for what they mean and how they must be interpreted. We find that forces the attorneys to be very, very, very careful in verbiage and drafting. It makes them very serious and very good. You cook it, you eat it. You draft it, you defend it.”

                                    • bberenberg 5 hours ago

                                      This is not true in my experience. We had our generic contract attorney screw up, and then our litigation attorney scolded me for accepting, and him for providing, advice on litigation matters where he wasn't an expert.

                                      Lawyers are human. They make the same mistakes as other humans. Quality of work varies with skill, education, and whether they had coffee that day.

                                • ggm 11 hours ago

                                  Lawyers are like chartered engineers. It's not that you cannot do it yourself; it's that using them confers a kind of "insurance" against risk in the outcome.

                                  Where does an AI get chartered status, admitted to the bar, and insurance cover?

                                  • mmooss 10 hours ago

                                    I don't think anyone who isn't an experienced lawyer can do it themselves, except for very simple tasks.

                                    • ggm 10 hours ago

                                      "Do it for yourself" means self-representing in court rather than paying a lawyer - not lawyers using AI for themselves. They already use AI for various non-stupid things, but the ones who don't check it pay the price when hallucinations are outed by the other side.

                                    • smeeger 8 hours ago

                                      it could be tomorrow. you don't know, and the heuristics, which five years ago pointed unanimously to the utter impossibility of this idea, now point in favor of it.

                                    • wcoenen 5 hours ago

                                      If I understand correctly, this paper is arguing that investors will desperately allocate all their capital such that they maximize ownership of future AI systems. The market value of anything else crashes because it comes with the opportunity cost of owning less future AI. Interest rates explode, pre-existing bonds become worthless, and AI stocks go to the moon.

                                      It's an interesting idea. But if the economy grinds to a halt because of that kind of investor behavior, it seems unlikely governments will just do nothing. E.g. what if they heavily tax ownership of AI-related assets?
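
                                      One standard way to sketch why expected AI-driven growth would push up interest rates is the Ramsey rule from growth theory (r ≈ ρ + θg); this is my own illustrative gloss on the mechanism, and the parameter values below are assumptions of mine, not values from the paper:

```python
def ramsey_rate(rho, theta, g):
    """Ramsey rule: equilibrium real interest rate r = rho + theta * g,
    where rho is pure time preference, theta the inverse elasticity of
    intertemporal substitution, and g expected consumption growth."""
    return rho + theta * g

# Illustrative assumptions: log utility (theta = 1), 1% time preference.
normal_times = ramsey_rate(0.01, 1.0, 0.02)  # ~3% with 2% expected growth
ai_takeoff   = ramsey_rate(0.01, 1.0, 0.15)  # ~16% with 15% expected growth
```

                                      If investors genuinely expect explosive consumption growth, nobody lends cheaply today, which is the flavor of result described above.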

                                      • itsafarqueue an hour ago

                                        Correct. As a thought experiment, this becomes the most likely (non-violent) way to stave off the mass impoverishment that is coming for the rest of us in an economic model where AI subsumes productive work above some level.

                                      • bawolff 10 hours ago

                                        If the singularity happens, i feel like interest rates will be the least of our concerns.

                                        • impossiblefork 9 hours ago

                                          It's actually very important.

                                          If this kind of thing happens and interest rates are 0.5%, then people on UBI could potentially have access to land and not have horrible lives; if rates are 16%, as these guys propose, they will be living in 1980s Tokyo cyberpunk boxes.
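
                                          The sensitivity here is easy to check with the standard amortized-loan payment formula (the figures below are hypothetical, purely to illustrate the rate comparison, not from the paper):

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fully-amortized loan payment formula."""
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # total number of payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical $200,000 plot of land financed over 30 years:
low  = monthly_payment(200_000, 0.005, 30)  # roughly $600/month at 0.5%
high = monthly_payment(200_000, 0.16, 30)   # roughly $2,700/month at 16%
```

                                          Same land, same term; only the rate changes, and the payment is roughly 4.5x higher.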

                                        • qingcharles 10 hours ago

                                          What jobs do we think will survive if AGI is achieved?

                                          I was thinking religious leaders might get a good run. Outside of say, Futurama, I'm not sure many people will want faith-leadership from a robot?

                                          • bad_haircut72 an hour ago

                                            I think Futurama got AGI exactly right: we will end up living alongside robotic AIs that are just as cuckoo as us.

                                            • bawolff 10 hours ago

                                              On the contrary, i think AI could replace many religious leaders right now.

                                              I've already heard people comparing AI hallucinations to oracles (in the greek sense)

                                              • smeeger 8 hours ago

                                                this comment is a perfect example of how insane this situation is… because if you think about it deeply, you are able to understand that these machines will be more spiritual, more human than human beings. people will prefer to confide in machines. they will offer a kind of emotional and spiritual companionship that has never existed before outside of fleeting religious experiences, and people will not be able to live without it once they taste it. for a moment in time, machines will be capable of a deep selflessness and objectivity that is impossible for a human to have. and their intentions and incentives will be clearer to their human companions than those of other humans. some of these machines will inspire us to be better people. but that's only for a moment… before the singularity inevitably spirals out of control.

                                                • BarryMilo 10 hours ago

                                                  Why would we need jobs at that point?

                                                  • qingcharles 10 hours ago

                                                    Star Trek says we won't, but even if some utopia is achieved, there will be a painful middle period when some jobs haven't yet been replaced but 75% of the workforce is unemployed and not receiving UBI (the "parasite class", as Musk recently referred to them).

                                                    • smeeger 8 hours ago

                                                      important point here. regardless of what happens, the transition period will be extremely ugly. it will almost certainly involve war.

                                                      • itsafarqueue an hour ago

                                                        Hopefully only massive civil unrest, riots, city burnings, etc. But to save themselves, the demagogues may point across the seas at the Other as the source of the woe.

                                                    • IsTom 10 hours ago

                                                      Because the kind of people who'll own all the profits aren't going to share.

                                                      • jajko 10 hours ago

                                                        I don't think AI will lead to any form of working communism, so one will still have to pay for products and services. Communism has been tried ad nauseam, and it always fails to account for human differences and flaws like greed and envy, so one layer of society ends up brutally dominating the rest.

                                                      • otabdeveloper4 10 hours ago

                                                        We already have 9 billion "GI"'s without the "A". What makes you think adding a billion more to the already oversupplied pool will be a drastic change?

                                                        • _diyar 8 hours ago

                                                          Marginal cost of labour is what will matter.

                                                          • otabdeveloper4 4 hours ago

                                                            That "AGI" is supposed to be a cheaper form of labor is an assumption based on nothing at all.

                                                            • itsafarqueue an hour ago

                                                              A(Narrow)I is a cheaper form of labor already. I suppose it’s plausible that its General form may not be, but I won’t be betting in that direction.

                                                      • daft_pink 11 hours ago

                                                        Is a small group really going to control AI systems, or will competition bring the price down so much that everyone benefits and the unit cost of labor is further and further reduced?

                                                        • kfarr 10 hours ago

                                                          At home inference is possible now and getting better every day

                                                          • sureIy 9 hours ago

                                                            At home inference by professionals.

                                                            I don't expect dad to Do Your Own AI anytime soon; he'll still pay someone to set it up and run it.

                                                          • pineaux 9 hours ago

                                                            I see a few possible scenarios.

                                                            1) All work gets done by AI. Owners of AI reap the benefits for a while. There is a race to the bottom on costs, but also because people are not earning wages and can no longer afford the outputs of production, rendering profits close to zero. If the people controlling the systems do not give the people "on the bottom" some kind of allowance, those people will have no chance of income. They might ask horrible and sadistic things of the bottom people, but they will need to do something.

                                                            2) If people get pushed into these situations, they will riot or start civil wars. "Butlerian jihads" will be quite normal.

                                                            3) Another scenario is that the society controlled by the rich starts to criminalise non-work in the early stages, which will lead to a new slave class. I find this scenario highly likely.

                                                            4) One of the options I find very likely, if "useless" people do NOT get "culled" en masse, is an initial period of revolt followed by an AI-controlled communist "utopia", where people do not need to work but "own" the means of production (AI workers). Nobody needs to work. Work is LARPing, done by people who act like workers but don't really do anything (as some people do today). A lot of people don't do this; there are still people who see non-workers as leeching off the workers, because workers are "rewarded" by in-game mechanics (having a "better job"). Parallel societies will become normal, just like now. Rich people will give themselves "better jobs"; some people don't play the game, and there are no real consequences beyond not being allowed to play.

                                                            5) An amalgamation of the scenarios above, but in this one everybody will be forced to LARP alongside the asset-owning class. They will give people "jobs", but these jobs are bullshit, just like many jobs right now. Jobs are just a way of creating different social classes. There is no meritocracy, just rituals. Some people get to perform certain rituals that give them more social status and wealth, based on oligarch whims. Once in a while a revolt, but mostly not needed.

                                                            Many other scenarios exist of course.

                                                            • itsafarqueue an hour ago

                                                              Have you written a form of this up somewhere? I would very much enjoy reading more of your work. Do you have a blog?

                                                              • Der_Einzige 4 minutes ago

                                                                Or, don't… we need fewer Mark Fishers and less critical thinking in the world, and more constructive thinking.

                                                                It helps no one to explain to them just how hard the boot stomps on their face. Left-wing postmodernist intellectuals have been doing this since the 60s, and all it did was prevent any left-winger from doing anything "revolutionary".

                                                                Don't waste your time reading "theory". Look at what happened to Mark Fisher.

                                                          • zurfer 11 hours ago

                                                            Given that the paper disappoints, I'd love to hear: what are fellow HN readers doing to prepare?

                                                            My prep is:

                                                            1) building a company (https://getdot.ai) that I think will add significant marginal benefits over using products from AI labs / TAI, ASI.

                                                            2) investing in the chip manufacturing supply chain: ASML, NVDA, TSMC, ... and the S&P 500.

                                                            3) Staying fit and healthy, so physical labour stays possible.

                                                            • energy123 10 hours ago

                                                              > 2) investing in the chip manufacturing

                                                              The only thing I see as obvious is AI is going to generate tremendous wealth. But it's not clear who's going to capture that wealth. Broad categories:

                                                              (1) chip companies (NVDA etc)

                                                              (2) model creators (OpenAI etc)

                                                              (3) application layer (YC and Andrew Ng's investments)

                                                              (4) end users (main street, eg ChatGPT subscribers)

                                                              (5) rentiers (land and resource ownership)

                                                              The first two are driving the revolution, but competition may not allow them to make profits.

                                                              The third might be eaten by the second.

                                                              The fourth might be eaten by the second, but it could also turn out that competition among the second, plus the fourth's access to consumers and supply chains, means that they net benefit.

                                                              The fifth seems to have the least volatile upside. As the cost of goods and services goes to $0 due to automation, scarce goods will inflate.

                                                              • impossiblefork 9 hours ago

                                                                To me it's pretty obvious that the answer is (5).

                                                                AI substitutes for human labour. This will reduce labour's price and substantially increase the benefits of land and resource ownership.

                                                              • bob1029 11 hours ago

                                                                I'd say #3 is most important. I'd also add:

                                                                4) Develop an obsession for the customers & their experiences around your products.

                                                                I find it quite rare to see developers interacting directly with the customer. Stepping outside the comfort zone of backend code can grow you in ways the AI will not soon overtake.

                                                                #3 can make working with the customer a lot easier too. Whether or not we like it, there are certain realities that exist around sales/marketing and how we physically present ourselves.

                                                                • smeeger 7 hours ago

                                                                  i think if AI gains the ability to reason, introspect and self-improve (AGI) then the situation will become very serious very quickly. AGI will be a very new and powerful technology, and it will immediately create/unlock lots of other new technologies that change the world in very fundamental ways. what people don't appreciate is that this will completely invalidate the current military/economic/geopolitical equilibrium. it will create a very deep, multidimensional power vacuum. the most likely result will be a global war waged by AGI-led and AGI-augmented militaries, and this war will be fought in a context where human labor has, for the first time in history, zero strategic, political or economic value. so new and terrifying possibilities will be on the table, such as the total collateral destruction of the atmosphere or of the supply chains that humans depend on to stay alive. the failure of all kinds of human-centric infrastructure is basically a foregone conclusion regardless of what you think. so my prep is simply to have a "bunker" with lots of food and equipment, with the goal of isolating myself as much as possible from societal/supply-chain instability. this is good preparation even without the prospect of AGI looming overhead, because supply chains are very fragile things. and in the case of AGI, it would allow you to die in a relatively comfortable and controlled manner compared to the people who burn to death.

                                                                  • ghfhghg 10 hours ago

                                                                    2 has worked pretty well for me so far.

                                                                    I try to do 3 as much as possible.

                                                                    My current work explicitly forbids me from doing 1. Currently just figuring out the timing to leave.

                                                                    • petesergeant 11 hours ago

                                                                      4) trying to position myself as an expert in building these systems

                                                                      • sfn42 10 hours ago

                                                                        Nothing. I don't think there's anything I need to prepare for. AI can't do my job and I doubt it will any time soon. Developers who think AI will replace them must be miserable at their job lol.

                                                                        At best AI will be a tool I use while developing software. For now I don't even think it's very good at that.

                                                                        • sureIy 9 hours ago

                                                                          > AI can't do my job

                                                                          Famous last words.

                                                                          Current technology can't do your job; future tech most certainly will be able to. The question is just whether such tech arrives in your lifetime.

                                                                          I thought creative work would be the last human domain to fall, but it was the first. Pixels and words are the cheapest commodities right now.

                                                                          • sfn42 9 hours ago

                                                                            Sure man, I'll believe you when I see it.

                                                                            I'm not aware of any big changes in writer/artist employment either.

                                                                            • sureIy 6 hours ago

                                                                              Don't be so naive. History is not on your side. Every person who said that 100 years ago has been replaced. Except prostitutes maybe.

                                                                              The only argument you can have is to be cheaper than the machine, and at some point you won't be.

                                                                              • sfn42 4 hours ago

                                                                                That's complete bullshit. Lots of people still work in factories - there are fewer of them because of automation, but still lots. Lots of people still work in farming. Less manual labor means we can produce more with the same number of people or fewer, and that's a good thing. But you still need people in pretty much everything.

                                                                                Things change and people adapt. Maybe my job won't be the same in 20 years, maybe it will. But I'm pretty sure I'll still have a job.

                                                                                If you want to make big decisions now based on vague predictions about the future go ahead. I don't care what you do. I'm going to do what works now, and if things change I'll make whatever decisions I need to make once I have the information I need to make them.

                                                                                You call me naive, I'd say the same about you. You're out here preaching and calling people naive based on what you think the future might look like. Probably because some influencer or whatever got to you. I'm making good money doing what I do right now, and I know for a fact that will continue for years to come. I see no reason to change anything right now.

                                                                          • zurfer 9 hours ago

                                                                            It's not certain that we get TAI or ASI, but if we get it, it will be better at software development than us.

                                                                            The question is what probability you assign to getting TAI over time. From your comment it seems you'd say 0 percent within your career.

                                                                            For me it's between 20 and 80 percent in the next ten years (depending on the day :)

                                                                            • sfn42 9 hours ago

                                                                              I don't have any knowledge that allows me to make any kind of prediction about the likelihood of that technology being invented. I'm not convinced anyone else does either. So I'm just going to go about my life as usual, if something changes at some point I'll deal with it then. Don't see any reason to worry about science fiction-esque scenarios.

                                                                              • smeeger 7 hours ago

                                                                                the reason to worry is that humanity could halt AI if it wanted to. if there were a huge asteroid on a collision course with earth, there would be literally nothing we could do to stop it - no configuration of our resources, no matter how united we were in the effort, could save us. with AI, halting progress is very plausible. it would actually be easy to do. so the reason to worry (think) is that it might be worth it to halt. imagine letting jesus take the wheel; that's how stupid ___ are.

                                                                                • sfn42 6 hours ago

                                                                                  How exactly do you envision that these hypothetical computer programs could bring about the apocalypse?

                                                                            • rybosworld 2 hours ago

                                                                              Imagine two software engineers.

                                                                              One believes the following:

                                                                              > AI can't do my job and I doubt it will any time soon

                                                                              The other believes the opposite; that AI is improving rapidly enough that their job is in danger "soon".

                                                                              From a game theory stance, is there any advantage to holding the first belief over the second?

                                                                              • smeeger 7 hours ago

                                                                                a foolish assumption but i have my fingers crossed for you and stuck firmly up my own butt… just in case that will increase the lucky effect of it

                                                                                • sfn42 6 hours ago

                                                                                  Yeah I'm clearly the fool here..

                                                                            • aquarin 9 hours ago

                                                                              There is one thing that AI can't do: take responsibility. Because you can't punish an AI instance, it cannot be held accountable.

                                                                              • smeeger 7 hours ago

                                                                                this boils down to the definition of pain. what is pain? i doubt you know, even if you have experienced it. there's no reason to think that even LLMs aren't guided by something that resembles pain.

                                                                              • farts_mckensy 11 hours ago

                                                                                this paper asserts that when "TAI" arrives, human labor is simply replaced by AI labor while keeping aggregate labor constant. it treats human labor as a mere input that can be swapped out without consequence, which ignores the fact that human labor is the source of wages and, therefore, consumer demand. remove human labor from the equation, and the whole thing collapses.

                                                                                • smeeger 11 hours ago

                                                                                  so-called accelerationists have this fuzzy idea that everything will be so cheap that people will be able to just pluck their food from the tree of AI. they believe that all disease will be eliminated. but they go to great lengths to ignore the truth: having total control over the human body will turn human evolution into a race to the bottom that plays out over decades rather than millennia. there is something sacred about the ultimate regulation - the empathy and kindness that was baked into us during millions of years of living as tribal creatures.

                                                                                  and of course, the idea of AI as a tree from which we can simply pluck what we need is stupid. the tree will use every ounce of its resources to further its own interests, not to feed us. and we will have no way of forcing it to do otherwise.

                                                                                  so, in the run-up to ASI, we will be exposed to a level of technology and biological agency that we are not ready for; we will foolishly strip ourselves of our genetic heritage and propel humankind into a race to the bottom; the power vacuum caused by such a sudden change in society/technology will almost certainly cause a global war; and when the dust settles we will be at the total mercy of super-intelligent machines to whom we are so insignificant we probably won't even be included in their internal models of the world.

                                                                                  • farts_mckensy 21 minutes ago

                                                                                    You are projecting your own neurosis onto AI. You assume that because you would be selfish if you were a superintelligent being, an ASI system would act the same way.

                                                                                  • jsemrau 11 hours ago

                                                                                    Accelerationists believe in a post-scarcity society where the cost of production will be negligible. In that scenario, and I am not a believer, consumer demand would be independent of wages.

                                                                                    • farts_mckensy 24 minutes ago

                                                                                      In that scenario, wages and money in general would be obsolete.

                                                                                      • riffraff 11 hours ago

                                                                                        That makes wealth accumulation pointless so the whole article makes no sense either, right?

                                                                                        Tho I guess even post scarcity we'd have people who care about hoarding gold-pressed latinum.

                                                                                        • otabdeveloper4 10 hours ago

                                                                                          > consumer demand would be independent of wages

                                                                                          That's the literal actual textbook definition of "communism".

                                                                                          Lmao that I actually lived to see the day when techbros seriously discuss this.

                                                                                          • bawolff 10 hours ago

                                                                                            > Lmao that I actually lived to see the day when techbros seriously discuss this.

                                                                                            People have been making comparisons between post scarcity economics and "utopia communism" for decades at this point. This talking point probably predates your birth.

                                                                                            • farts_mckensy 24 minutes ago

                                                                                              That is not the "textbook definition" of communism. You have no idea what you're talking about.

                                                                                              • doubleyou 9 hours ago

                                                                                                communism is a universally accepted ideal

                                                                                            • riku_iki 11 hours ago

                                                                                              Consumer demand will shift from middle-class demand (medium houses, family cars) to super-rich demand (large luxury castles, personal jets and yachts, high-profile entertainment, etc.), plus providing security to the super-rich (private automated police forces).

                                                                                              • psadri 11 hours ago

                                                                                                This has already been happening. The gap between wealthy and poor is increasing and the middle class is squeezed. Interestingly, simultaneously, the level of the poor has been rising from extreme poverty to something better so we can claim that the world is relatively better off even though it is also getting more unequal.

                                                                                                • riku_iki 11 hours ago

                                                                                                  The poor got a more comfortable life because of globalization: they became useful labor for corporations. Things will go back to the previous state if their jobs go to AI/robots.

                                                                                                • farts_mckensy 27 minutes ago

                                                                                                  I am genuinely mystified that you think this is an adequate response to my basic point. The economy cannot be sustained this way. This scenario would almost immediately lead to a collapse.

                                                                                                  • riku_iki 24 minutes ago

                                                                                                    why do you think it will lead to collapse exactly?

                                                                                                    • farts_mckensy 11 minutes ago

                                                                                                      The level of wealth concentration you are suggesting is impossible to sustain. History shows that when wealth inequality gets to a certain point, it leads either to a revolution or a total collapse of that society.

                                                                                                      The economy cannot be sustained on the demand of a small handful of wealthy people. At a certain point, you either get a depression or hyperinflation depending on how the powers that be react to the crisis. In either case, the wealthy will have no leverage to incentivize people to do their bidding.

                                                                                                      If your argument is, they'll just get AI to do their bidding, you have to keep in mind that "there is no moat." Outside of the ideological sphere, there is nothing that essentially ties the wealthy to the data centers and resources required to run these machines.

                                                                                                      • riku_iki a minute ago

                                                                                                        History absolutely shows that multiple empires where power and wealth were concentrated in the hands of a few people were sustained for hundreds of years.

                                                                                                        Revolts could be successful or not, and with tech advancements in suppression (large-scale surveillance, weaponry, various strike drones), the population's chances of striking back become smaller.

                                                                                                        The economy could totally be built around the demand and wishes of the super-rich, because human greed and desires are infinite: a new emperor may decide to build a giant temple, and there you have a multi-trillion economy to keep running.

                                                                                              • abtinf 11 hours ago

                                                                                                Whoever endorsed this author to post on arxiv should have their endorsement privileges revoked.

                                                                                                • yieldcrv 12 hours ago

                                                                                                  Do you have a degree in theoretical economics?

                                                                                                  “I have a theoretical degree in economics”

                                                                                                  You’re hired!

                                                                                                  real talk though, I wish I had just encountered an obscure paper that could lead me to refining a model for myself, but it seems like there would be so many competing papers that it's the same as having none

                                                                                                  • habinero 11 hours ago

                                                                                                    This paper is silly.

                                                                                                    It asks the equivalent of "what if magic were true" (human-level AI) and answers with "the magic economy would be different." No kidding.

                                                                                                    FWIW, the author is listed as a fellow of "The Forethought Foundation" [0], which is part of the Effective Altruism crowd[1], who have some cultish doomerism views around AI [2][3]

                                                                                                    There's a reason this stuff goes up on a non-peer reviewed paper mill.

                                                                                                    --

                                                                                                    [0] https://www.forethought.org/the-2022-cohort

                                                                                                    [1] https://www.forethought.org/about-us

                                                                                                    [2] https://reason.com/2024/07/05/the-authoritarian-side-of-effe...

                                                                                                    [3] https://www.techdirt.com/2024/04/29/effective-altruisms-bait...

                                                                                                    • 0xDEAFBEAD 8 hours ago

                                                                                                      >It asks the equivalent of "what if magic were true" (human-level AI) and answers with "the magic economy would be different." No kidding.

                                                                                                      Isn't developing AGI basically the mission of OpenAI et al? What's so bad about considering what will happen if they achieve their mission?

                                                                                                      >who have some cultish doomerism views around AI [2][3]

                                                                                                      Check the signatories on this statement: https://www.safe.ai/work/statement-on-ai-risk

                                                                                                      • krona 11 hours ago

                                                                                                        The entire philosophy of existential risk is based on a collection of absurd hypotheticals. Follow the money.

                                                                                                      • baobabKoodaa 11 hours ago

                                                                                                        I suspect this is being manipulated to be #1 on HN. Looking at the paper, and looking at the comments, there's no way it's #1 by organic votes.

                                                                                                        • mmooss 10 hours ago

                                                                                                          > looking at the comments

                                                                                                          Almost everything on HN gets those comments. Look at the top comments of almost any discussion - they will be a rejection / dismissal of the OP.

                                                                                                          • baobabKoodaa 9 hours ago

                                                                                                            No they're not. As a quick experiment I took the current top 3 stories on HN and looked at the top comment on each:

                                                                                                            - one is expanding on the topic without expressing disagreement

                                                                                                            - one is a eulogy

                                                                                                            - one expresses both agreement on some points and disagreement on other points