Either I am mistaken that Walmart was offering store associate type jobs or the author doesn't realize Walmart has software engineering teams. Quite a few non-tech companies have very hip, modern, and high-power tech teams behind them.
https://github.com/walmartlabs
Walmart has been doing this level of work since at least 2013/2014 to my knowledge so it's not something new.
Getting a job on the Walmart engineering team doesn't seem like such an awful thing. Other notable call-out: Domino's. A few years back (... like a decade ago ._.) I had done a re-write of a node.js wrapper for the Domino's API and the team seemed very supportive and proud of what people did with it. The REST API was actually quite good and didn't try to throw up roadblocks.
Walmart Global Tech (the renamed Walmart Labs) got gutted in the layoff-palooza last year. https://www.cbsnews.com/news/walmart-tech-layoff-corporate. Heavy outsourcing, too, from what I've heard. I also believe Walmart is trying to consolidate everyone into their Bentonville HQ; good luck trying to get engineers in the SFBA to move out to Arkansas!
They don’t want SFBA engineers for the most part now; they want so-called “dark matter” developers, meaning they’re primarily recruiting from the Midwest, upper South, and Rust Belt states where opportunities are scarcer.
Bentonville, Arkansas is basically a cheaper Denver, Colorado.
They have one of the best coffee roasters/shops out there, tons of bike and hiking trails, you're in the Ozarks, and a major university is next door in Fayetteville. You're not that far from a float trip on the Buffalo River either (1.5 hours). There are tons of concerts in Walmart's amphitheater too. Also, there is some industry for sure and decent schools (for the South, anyway).
It is very trendy. Northwest Arkansas is nothing like south Arkansas. Not geographically or politically. I wouldn't expect it to be THAT difficult to convince folks to relocate to somewhere like that. Walmart does pump a lot of money into the area.
For people that care, the state is politically a no go.
With how hostile state politics have become though, there's no amount of pay that could get me to move out to Arkansas. Learned my lesson after living in Austin for a few years.
Good point. The state's government is in Little Rock in the middle of the state and pretty conservative.
Bentonville is quite nice all things said
Bentonville is quite nice, and the opportunities on that team are incredible. Tech problems you’d never think Walmart would be solving. Great facilities. And not a small number of veterans that used to do exquisite things for their country.
Having your job abruptly changed to force you to move far away from family and friends is not nice at all, though.
If you read the OpenAI blog post the article links to (https://openai.com/index/expanding-economic-opportunity-with...), their quote from Walmart is talking about retail associate jobs, not cushy corporate-level engineering etc jobs.
> OpenAI is committing to certifying 10 million Americans by 2030. And we’ll be doing it with our launch partners, including the biggest private employer in the world: Walmart. Here’s what Walmart had to say in their own words:
> “At Walmart, we know the future of retail won’t be defined by technology alone—it will be defined by people who know how to use it. By bringing AI training directly to our associates, we’re putting the most powerful technology of our time in their hands—giving them the skills to rewrite the playbook and shape the future of retail.” —John Furner, CEO, Walmart U.S.
Yup, software was one of the keys to Walmart being able to expand as much as it did in the 1980s-90s. Their advanced (for the time) inventory & sales systems could tell when some products were selling well in one region and not in another (due to stylistic preferences, prevailing weather, etc.), and they could arbitrage that and move inventory to optimize sales for each location.
It was a major advantage over any other store that didn't have such data and would be out of inventory in the hot region while the same items were on markdown sale a few states over. Walmart maximized profits while the blind competitors bumped along with losses.
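The kind of rebalancing described above is simple to sketch. Here's a toy illustration (all SKUs, regions, numbers, and thresholds are invented for the example) of flagging stock that should move from a cold region to a hot one based on sell-through rates:

```python
# Toy sketch of cross-region inventory rebalancing (invented data).
# For each SKU, compare sell-through rates across regions and suggest
# moving surplus stock from slow regions to fast ones.

inventory = {
    # sku: {region: (units_on_hand, units_sold_last_week)}
    "parka": {"northeast": (40, 35), "southwest": (120, 4)},
    "sandals": {"northeast": (90, 6), "southwest": (30, 28)},
}

def transfer_suggestions(inv, hot_rate=0.5, cold_rate=0.1):
    """Yield (sku, from_region, to_region, units) suggestions."""
    for sku, regions in inv.items():
        # weekly sell-through rate per region
        rates = {r: sold / max(on_hand, 1)
                 for r, (on_hand, sold) in regions.items()}
        hot = [r for r, rate in rates.items() if rate >= hot_rate]
        cold = [r for r, rate in rates.items() if rate <= cold_rate]
        for src in cold:
            for dst in hot:
                surplus = regions[src][0] // 2  # move half the idle stock
                if surplus:
                    yield (sku, src, dst, surplus)

for sku, src, dst, units in transfer_suggestions(inventory):
    print(f"move {units} x {sku}: {src} -> {dst}")
```

Walmart's real systems were of course far more sophisticated, but the core signal (the same SKU moving fast in one region and sitting idle in another) is this simple.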
Their internal software was so good that their website and online store was an absolute embarrassment for so many years.
They finally seem to have turned that around, though I heard they did it by just buying someone.
Walmart even used Clojure at one stage, possibly still does.
The title is metaphorical: bait for your clicks.
IIRC this is a background premise in William Gibson’s The Peripheral. Most jobs are either for a tech company, or at a Walmart-esque store that has eaten all other retail.
That book was weirdly prescient. IIRC Gibson himself noted that he was too on-the-nose with The Jackpot, which has made it very difficult to write the sequels.
Hyperstition is a real thing... if, that is, you're William Gibson.
Or you end up as one of the hired goons of Corbell Pickett, one of the greatest bad guys of all time (only slightly below Clarence Boddicker).
I haven’t read the book, and I should.
The series was really good. Too bad they cancelled it for being too expensive.
People don’t cancel things because they’re expensive, costs are a known and predictable quantity. They cancel things because they don’t make enough to justify the costs.
In this case, it actually was due to unknown and unpredictable costs. They were about to start production on S2 when the strikes hit. The delays spiked the costs of an already expensive show (S1 cost an estimated $175M), which made it less tenable for Amazon. This happened to quite a few shows during the pandemic and during the strikes: renewed because it made financial sense, then the pandemic/strike spiked the cost, and then the renewal was canceled.
I liked the first half of the series, but it gradually became a generic superhero show.
The book, however, is excellent; definitely recommend.
How many jobs have they actually 'eaten'? For now, we mostly have AI labs and those in proximity claiming some number of jobs will be obsolete, but the actual job loss has yet to manifest.
It's rarely (maybe never) a direct one-to-one elimination of jobs. Most attempts to replace a single complete human job with an AI agent are not successful (and large companies, generally, aren't even attempting that). Rather, the phenomenon is more like a diffuse productivity gain across a large team or organization whose net effect is a smaller headcount need over time. In practice, this materializes as backfills not approved as natural attrition occurs, hiring pipelines thinned out while existing team members' workloads increase, management layers pruned, teams merged (then streamlined), etc.
When my father joined an attorney's office as recently as the 80s, there was a whole team of people he worked with: Of course, there were the attorneys who actually argued the cases, but also legal assistants who helped create various briefs, secretaries who took dictation (with a typewriter, of course) to write those documents, receptionists who managed the attorneys' schedules and handled memos and mail, filing clerks who helped keep and retrieve the many documents the office generated and demanded in a giant filing room (my favorite part of visiting as a kid: row after row of rolling shelves, with crank handles to put the walkways where they were needed to access a particular file), librarians who managed a huge card catalog and collection of legal books and acquired those that were not in the collection as needed... it was not a small team.
When he retired a few years ago, most of that was gone. The attorneys and paralegals were still required, there was a single receptionist for the whole office (who also did accounting) instead of about one for each attorney, and they'd added an IT person... but between Outlook and Microsoft Word and LexisNexis and the fileserver, all of those jobs working with paper were basically gone. They managed their own schedules (in digital Outlook calendars, of course), answered their own (cellular) phones, searched for documents with the computers, digitally typeset their own documents, and so on.
I'm an engineer working in industrial automation, and see the same thing: the expensive part of the cell isn't the $250k CNC or the $50k 6-axis robots or the $1M custom integration, those can be amortized and depreciated over a couple years, it's the ongoing costs of salaries and benefits for the dozen humans who are working in that zone. If you can build a bowl screw feeder and torque driver so that instead of operating an impact driver to put each individual screw in each individual part, you simply dump a box of screws in the hopper once an hour, and do that for most of the tasks... you can turn a 12-person work area into a machine that a single person can start, tune, load, unload, and clean.
The same sort of thing is going to happen - in our lifetimes - to all kinds of jobs.
It already happened to all kinds of jobs.
I recall the mine water pump had a boy run up and down a ladder opening and closing steam valves to make the piston go up and down. The boy eventually rigged up a stick to use the motion of the piston to automatically open and close the valves. Then he went to sleep while the stick did his job.
Hence the invention of the steam engine.
Hit the nail on the head. I try to explain this to everyone who thinks we are heading toward global collapse. AI isn't good enough to replace a person; it enables a person to replace a team. That will take a while since, as cool and great as AI is now, it is not yet powerful and integrated enough for teams to be totally replaced. It happens only as fast as people naturally leave: instead of hiring someone to replace that job, one person inherits their teammate's job plus their own, but has more tools to do both. It sounds like a person is being replaced, but I've never worked anywhere where people weren't complaining about being understaffed. The budget likely wasn't cut, so now they can hire someone to do a different job. A job an idea-fairy wanted someone to do but they lacked the bandwidth for. The old position is gone, but new ones have opened. It is the natural way of the world. We innovate, parts of our lives get easier, our responsibility scope increases, our life is full again. For a person, that translates to never feeling rich if they allow their standard of living to match their income; for a company, that translates to scope increase as well if the company is growing, shown as either more job openings or more resources for the employees (obviously the opposite in both cases if a person/company is "shrinking").
Many jobs, even most jobs, don't work you at or near the short-term maximum capacity you can achieve, because that isn't sustainable, lacks redundancy, or because the nature of workflow and peer expectation creates a degree of slack.
Condensing the workforce as you describe risks destroying that redundancy and sustainability.
It may work in tests with high performers over short durations, but may fall apart over longer terms, with average performers, or with even a small amount of attrition.
Having cog number 37 pick up the slack for 39 doesn't work when there's no excess capacity.
The low-hanging jobs historically created by progress are gone. You are talking nonsense: the equivalent of 'and then, magically, new jobs appear, because jobs have appeared in the past'. And while you wave away job fears as a nothing burger, you randomly add in blaming people for not feeling rich because... they think progress should include their lives progressing for the better?
> secretaries who took dictation (with a typewriter, of course) to write those documents
Complete aside, just because you brought up this thought and I like the concept of it:
My mom trained professionally as a secretary in the 1970s and worked in a law office in the 1980s; at that point, if you were taking dictation, you were generally doing longhand stenography to capture dictation, and then you'd type it up later. A stenotype would've been a rarity in a pre-computer office because of the cost of the machine; after all, if you need a secretary for all these other tasks, it's cheaper to give them a $2 notebook than it is a $1,500+ machine.
> the phenomenon is more like a diffuse productivity gain across a large team or organization
AI alone can't do that: even if you make the weakest link in the chain stronger, there are probably more weak links. In a complex system, the speed is controlled by the weakest, most inefficient link in the chain. To make an organization more efficient, they need to do much more than use AI.
Maybe AI exposes other inefficiencies.
I mean one of the services at work had a custom HTML dashboard. We eliminated it and replaced it with Grafana.
I worked on both - my skillset went from coding pretty bar charts in SVG + Javascript to configuring Grafana, Dockerfiles and Terraform templates.
There's very little overlap between the two, other than general geekiness, but thanks I'm still doing OK.
Seems like a bad decision. Grafana is awful compared to a bespoke solution.
I don't get the hate - Imo it's one of the better working tech products I've had the chance of using.
> Rather, the phenomenon is more like a diffuse productivity gain across a large team or organization that results in a smaller headcount need over time as its final net effect. In practice, this materializes as backfills not approved as natural attrition occurs, hiring pipelines thinned out with existing team member workloads increased, management layers pruned, teams merged (then streamlined), etc.
My observation so far has been that executive leadership believes things that are not true about AI and starts doing the cost-cutting measures now, without any of the productivity gains expected/promised, which is actually leading to a net productivity loss from AI expectations based on hype rather than AI realities. When you lose out on team size, can't hire people for necessary roles (some exec teams now won't hire unless the role is AI related), and don't backfill attrition, you end up with an organization that can't get things done as quickly, and productivity suffers, because the miracle of AI has yet to manifest meaningfully anywhere.
Salesforce CEO confirms 4,000 layoffs ‘because I need less heads’ with AI - https://www.cnbc.com/2025/09/02/salesforce-ceo-confirms-4000...
Does anybody really believe that SalesForce of all companies has successfully replaced 4000 real and necessary jobs with AI? Or is that more likely just an excuse to justify more layoffs in the tech industry for the usual reasons.
Tbh, for companies like Salesforce I always assume there are a lot of bloated, unnecessary jobs, done just well enough by the people holding them that management needs an external reason to fire them.
In addition, Salesforce grew in headcount in 2025 AFAIK, and 4,000 jobs is only around ~5% for them, which means it's too small to be a meaningful metric if you don't fully trust what their press department says (and you shouldn't).
Still, I see people using modern AI for small productivity boosts all over the place, including in private life (and often with a vastly underestimated risk assessment). So in the best case, it's only good enough to let people process more of the backlog (which would otherwise be discarded due to time pressure but isn't worthless), and in the worst case, it will lead to, idk, 1/3 of people in many areas losing their jobs. But that is _without_ a major breakthrough in AI, just from better applying what AI can already do now :/ (and this excludes mostly-physical jobs, though it's worse for some other jobs, like low-skill graphic design positions).
During the just-post-pandemic hiring spree I remember talking to some software developers who were doing very light coding in what I would usually think was a business analyst role. Those roles were both bloat that was lost once free money stopped flowing, and easily replaced (or reduced) with AI.
And as software developers, it would be silly if we didn't think that businesses would love to find a way to replace us, as the software we have created did for other roles for the past 60 years.
It would be some kind of securities fraud for their CEO to say it to the media if it weren't at least partially true.
Doesn't have to be true, just needs to be unfalsifiable.
Lots of companies using AI or RTO as excuses to just downsize since layoffs for normal reasons don’t look as good.
I really wish journalists & public speaking investors would call this out more.
Though like non-GAAP earnings & adjusted EBITDA, very few care. Those that do are often old, technical, conservative & silent type of investors instead of podcasters or CNBC guests. RIP Charlie M.
There's no doubt it can function as a convenient cover, but that doesn't mean it's having no effect at all. It would be naive to assume that the introduction of a fundamentally new general-purpose work tool across millions of workers in nearly every industry and company within the span of a couple years has not played any role whatsoever in making teams and organizations more efficient in terms of headcount.
Very probable those CEOs use AI to write the speech, asking “What’s the least antagonizing way of explaining the layoffs?”.
They did say this was specifically in customer service, which, if there were one department where I'd believe you might be replaced by AI, would be it.
Alternatively, though, if the market is bad and they're not launching as many new products or appealing to as many new customers, customer support may be a cost center you'd force to show "AI efficiencies".
It's less about justifying layoffs and more about PR: Salesforce is trying to sell AI products, and they're marketing how good they are at AI by saying they removed 4k jobs with it.
Those 4,000 were "customer support" positions & Salesforce just happens to also sell an AI product for customer support. They also underperformed expectations in their last earnings.
Companies like IBM & Klarna have made news for reducing positions like these & then re-hiring them.
AI, like most tech, will increase productivity & reduce headcount but it's not there yet. Remember, the days are long & the years are short.
CRM needs to convince the markets that it isn't an AI loser. By saying it has been able to use AI to automate internally CRM is hoping the market will believe that its AI is good enough to also sell to customers. Unfortunately for CRM, after the earnings print the market still thought it was an AI loser.
You have to trust they are actually telling the truth and not using it as a convenient scapegoat. What sounds better to shareholders: "we're replacing jobs with AI" or "we hired too many during the COVID hiring glut and need to lay more people off"?
But what if I want the option that validates my untreated anxiety?
Salesforces CEO needs to consider replacing them with security product architects so they can figure out a way to send me logs that aren’t crap
this stuff is all a cover from CEOs. We're in the middle of a downturn and they're pretending layoffs like this are due to AI so as not to spook investors.
I am sure he has had plenty of "heads" as a bigwig CEO. Also, it's "fewer" heads not "less".
Wait until his customers figure out that they don't need Salesforce anymore.
In my personal experience I've seen:
- OCR eat a good chunk of data entry jobs,
- Automated translation eat a number of translation jobs,
- LLM have eaten quite a few tier I support roles.
I don't have numbers tho, maybe people are still doing data entry or hiring translators on mechanical turk.
I have a friend who used to do book translations. Due to some craftsman union rules, the minimum rate for translations has been set (and of course that's what everybody pays). Machine translation didn't decrease these rates, but they haven't been increased in 15 years, which made inflation completely eat it up.
Initially machine translation was way worse (by professional standards) than people assumed, essentially useless, you had to rewrite everything.
As time went on, and translation got better, the workflow shifted from doing it yourself to doing a machine pass, and rewriting it to be good enough. (Machine translation today is still just 'okay', not professional quality)
On the initially set rates 15 years ago you could eke out a decent-ish salary (good even if you worked lots of hours and were fast). Today if you tried to do the work by hand, you'd starve to death.
Sorry for replying to my own comment but I had a separate thought - this is how I feel about LLMs today.
While they help with programming, I feel like the scope of my tasks has increased over time as well. This is happening to me: my tech stack has grown hugely over the past two years, as has my productivity.
But I don't make significantly more money, or get a ton more recognition, it's just accepted.
Also illustrators, voiceover artists and customer service agents. Commercial photographers have seen their income from stock image services collapse and they are now seriously worried about the impact Nano Banana will have on work like product and fashion photography.
The question is no longer whether AI will put people out of work, but how many and how quickly.
Entry-level "throwaway, no skill" programming positions are also kinda going away, or at least getting collapsed into a small fraction of positions.
To be fair, those positions never made that much sense, as they tend to cause more trouble than they help in the long run, but they exist anyway.
And companies should know better than to throw away "junior, not yet skilled, but learning" positions (but then many small startups aren't used to teaching juniors, or lack the resources to, which is a huge issue in the industry IMHO).
But I imagine for many of the huge "we mainly hire from universities"/FAANG companies it will turn into "we need only senior engineers and hire juniors only to grow our own senior engineers". This means the moment your growth takes too long or stagnates by whatever _arbitrary_ metric, you get kicked out fast. And as with their hiring process, they have the resources, scale, and number of people who want to work for them to be able to really use some arbitrary, imperfect, effectively discriminatory metrics.
Another aspect is that a lot of the day-to-day work of software engineering is really dumb, simple churn, and AI has the potential to massively cut down the time a developer needs for it, so fewer developers are needed, especially in mid- to low-skill positions.
Now the only luck devs have is that there is basically always more work that was cut due to time pressure but often isn't even super low priority, so getting things done faster might luckily not map one-to-one to fewer jobs being available.
>> ..this means the moment your growth takes too long/stagnates by whatever _arbitrary_ metric:
you get HR's glossy `Exit Packet' with a cover of a pristine chartered white catamaran on the translucent aquamarine Caribbean in a palm-treed cove, afloat with bikini babes lounging on deck, five minutes to fill your cardboard box, and manhandled by two security wide-shouldered bulls squeezing your arms to your sides gripped under your forearms, marching you down the aisle with rubbernecking wide-eyed heads watching you repeatedly slip, fall forward, the experienced bulls lowering your arms to regain your shoes' purchase with the carpet, hurriedly packing you into the elevator with silent ignominious stares and hushed whispers of those packed in around you, then past hot Tanya at Reception in front of every Tom, Dick, Irene, and Harry, out the main revolving glass door entrance, into the overcast dishwater grey and miserable wind-blown wet in truth is Seattle, to the sidewalk, left at the curb -furthest from proper and successful glitterati as possible.
All double-time haste, signalling to everyone get this despicable loser/criminal POS off the property, fast.
It's over, you're done. Sooner than you thought possible, you're freezing in a tent wasting days into months: tailing-downward foodbank boxes, the TIDE(tm) pee-bottle, those el-cheapo dirty gloves with the tips cut off, dirty layered thriftstore fleeces, it's under filthy roaring I-5 for you pal, and your ilk of dangerous insane addict-crazed zombies screaming outside your hideous blue-tarped tent, and where's the knife.
The answer to the fundamental question of your entire existence and net worth has reduced to simply ask: "How do I best empty my bowels"?
ICEstapo hunter/killers gunning for you, to flush you offshore unseen and forgotten forever. You better run, you better run your ass off. Why didn't you study harder all those wasted years?!
Welcome to Sam Altman's Club.
Amazon won't hire junior engineers because of AI unless it's direct from a college pipeline, and even those jobs have shrunk significantly in number. Even opening an L4 (junior) role requires VP approval these days.
I think a lot of clerical tasks where people are crossing defined things with other defined things that could be somewhat automated expensively before can be somewhat automated less expensively enough to be worth doing now.
So jobs being killed by AI are basically being killed same way that office number crunching technology killed administrative assistant positions and put those tasks onto other people.
Take, for example, a purchasing department for a big company. Some project needs widgets. Someone crosses the specs against what their suppliers make, takes the result of that, and makes a judgment call. AI replaces that initial step, so a team of N can now do the work that formerly took a team of N + Y. Bespoke software could have replaced that step too, but it would have been more expensive, less flexible, etc., since there's all this work required to turn human-facing content into machine-parsable content, including the user's input, and the juice simply wasn't worth the squeeze. With AI doing all that drudgery on an as-needed basis, the juice now is worth the squeeze in some applications.
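The "crossing specs against suppliers" step is exactly the kind of matching that was expensive to hard-code. A crude rigid version (all part numbers and attributes here are hypothetical) looks like this; the point is that an LLM can do the same matching over messy PDFs and emails without anyone first forcing the data into this structure:

```python
# Crude spec-to-catalog matching over structured data (hypothetical
# parts). This is the rigid, bespoke-software version of the step an
# LLM can now do over unstructured supplier documents.

required = {"thread": "M6", "length_mm": 40, "material": "stainless"}

catalog = [
    {"part": "SCR-1001", "thread": "M6", "length_mm": 40, "material": "stainless"},
    {"part": "SCR-1002", "thread": "M6", "length_mm": 40, "material": "zinc"},
    {"part": "SCR-2040", "thread": "M8", "length_mm": 40, "material": "stainless"},
]

def match_score(spec, item):
    """Fraction of required attributes the catalog item satisfies."""
    hits = sum(1 for k, v in spec.items() if item.get(k) == v)
    return hits / len(spec)

# Rank candidates; the human still makes the final judgment call.
ranked = sorted(catalog, key=lambda item: match_score(required, item),
                reverse=True)
print(ranked[0]["part"])
```

All the real cost was never in this twenty-line matcher; it was in getting every supplier's catalog into those neat dicts in the first place, which is the drudgery being automated now.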
Can the AI find alternative suppliers when your current ones go out of business or closes down production lines (even just for short term re-tooling)? Can AI negotiate when you get a big sale and need a huge production run? There are so many moving parts that a human juggles, more than what you list. All AI solutions seem like this. Automate the day to day duties of a job, when the real value of the person doing the jobs are much more than that. Maybe you can automate away 2 purchasing jobs down to 1, but what happens when that 1 person gets sick, when they go on vacation, when they retire/move to a better job?
And the sick thing is that the company that tries to be smart longer term won't be able to compete with the short term companies that cut as much as possible using AI to maximize short term benefits. Long term these 'cut everything' AI leaning zombie companies won't last, but they will last long enough to undercut and take out the longer term thinking companies with them.
AI is an entropy machine. It sucks all momentum from everything it touches.
That's like saying machines are bad because people still hand excavate ditches in the last couple feet near utilities. You are missing the point. Anyone who can do the full job isn't missing anything by skipping the drudgery.
Every task the AI can juggle for you is one you don't have to. If your department goes from 4 to 3 great, if it goes from 2-1 or 1-0 that's fine too. Companies already exist at those numbers. You can think about it in terms of "a company can now be bigger before they need a dedicated person/team for job X" if that helps take the emotion out.
[dead]
It’s going to ramp. I have a team where 60% of the staff (6/10) is retiring in a year.
Their function is around reconciling utilization and bills from multiple related suppliers with different internal stakeholders. They do a bunch of analysis and work with the internal stakeholders to optimize or migrate spend. It is high ROI for us, and the issue is both finding people with the right analytical and presentation skills and managing the toil of the “heavy” work.
Basically, we’re able to train interns to do 80% of the processing work with LLM tooling. So we’re going to promote two of the existing staff, fill two of the six vacancies with entry-level new grads, and use the unit to recruit talent and cycle them through.
In terms of order of magnitude, we’ll save about $500k in payroll, spend $50k in services, and get same or better outcomes.
Another example is we gave an L1 service desk manager Gemini and made him watch a YouTube video about statistics. He’s using it to analyze call statistics and understand how his business works without a lot of math knowledge. For example, he looked at the times where the desk was at 95th percentile call volume and identified a few ways to time-shift certain things to avoid the need for more agents or reduce overall wait times. All stuff that would require expensive talent and some sort of data analysis software… which frankly probably wouldn’t have been purchased.
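For the curious, the percentile analysis described here needs nothing beyond the stdlib. A minimal sketch (the call counts are invented) of finding the peak hours worth time-shifting work away from:

```python
# Sketch of a 95th-percentile call-volume analysis (invented numbers):
# find the hours whose call counts sit at or above the 95th percentile.
from statistics import quantiles

# calls per hour over a sample day, index = hour of day (0-23)
calls_by_hour = [3, 2, 2, 1, 1, 4, 9, 20, 35, 48, 52, 44,
                 38, 41, 55, 47, 33, 21, 12, 8, 6, 5, 4, 3]

# n=20 gives cut points in 5% steps; the 19th is the 95th percentile
p95 = quantiles(calls_by_hour, n=20)[18]
peak_hours = [h for h, c in enumerate(calls_by_hour) if c >= p95]
print(p95, peak_hours)
```

The real version would run over weeks of ticket data, but this is the whole trick: once you know which hours cross the threshold, you can move scheduled work out of them.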
That's the real AI story. Stupid business people are just firing people. The real magic is using the tools to make smart people smarter. If you work for a big company, giving Gemini or ChatGPT to motivated contracts and procurement teams would literally print money for you, given the stuff your folks are missing.
Except that we know that even if you can't put your finger on what it is right now, there's something in the way of "ChatGPT just literally prints money" being at all realistic. It's pretty obvious in the markets if anyone has figured out a repeatable strategy for printing money.
This is to say that we know from looking at outcomes over the long term that the kinds of concrete gains you're describing are offset by subtler kinds of losses which most likely you would struggle to describe as decimal numbers but which are equally real in their impact on your business.
>This is to say that we know from looking at outcomes over the long term
Public LLMs have been around for 3 years, and adoption is still nascent. We don't have any long term data, and the longest term data we have involves a bunch of outdated models. Most people are still awful at using LLMs, and probably the most skilled users are the college kids who are graduating right now (the youth is always god-tier with new tech).
I cannot think of anything more foolish right now than not trying to leverage SOTA models to save you money, especially because you heard rumors of shadow losses that can only be found in the bottom line.
If LLM-based tools were such a money printing machine with a big ROI, why would you ever want to downsize? Keep redirecting that excess human capacity into building bigger and better capabilities to generate more revenue. Hire cheap grads to do the work.
Except it seems like the opposite is happening. CS grads have high unemployment. Companies laying off staff.
The rhetoric doesn't seem to add up to the reality.
This is a great way of looking at it.
Do you use tech to grow your business or increase dividends?
Also reducing staff via attrition shows far better management skills than layoffs which imo says more about the CEO & upper management.
And your company now won't progress. Won't have insight. Sure, you can keep on keeping on, but that is all. You also farm out expertise instead of building it.
AI is an entropy machine for everything it touches. A company that runs off of AI is a zombie company. Without people who understand an industry, what does your company add? Without people to see new revenue streams, new directions, and most importantly NEW RISKS to your business model, you are a dead zombie company living off what the previous living, growing company built. But does that matter to owners when they get $500,000 more a year in their pockets?
We've definitely seen impact in areas like copywriting, customer service, and basic coding tasks, but it's more nibbling at the edges than wholesale devouring.
I mean, I keep web pages about cases where work that previously used a human has now been replaced with neural network models. Some of them:
- Translation. See: Gizmodo firing its Spanish translation team and switching exclusively to LLMs.
- Editors. See Microsoft replacing their news editors at MSN with LLMs.
- Customer service. Various examples around the world.
- Article graphics for publications. See: The San Francisco Standard (which used it for various articles for a period), Bleepingcomputer.com, Hackaday (selectively, before some pushback).
- Voice acting. The Finals game used synthetic voices for the announcer voices.
Well, I'm part of a small few-person team. I had a junior developer helping me who left, and I literally don't need someone like him anymore, because I can do anything he was doing via AI, faster and better, and it's under my control. I can have a 20-minute conversation and code revision that would have taken someone junior days to code, and it probably wouldn't have been done right (by 'right' I mean how I want it to be done, not necessarily that his way was wrong).
To think the same isn't happening all over the place and will only continue is ignoring just how powerful this tech is.
And when you are out sick, can AI do his role? When you are on vacation, can AI do his role? When you leave, can AI at least help define/interview your replacement, like he could have, even if he's too junior to step into the role?
There is so much else that people do. So many details that are just being ignored because of short term 'gains' that justify ignoring so many details.
The doomer stuff about AI is another kind of hype. We are at the top of the bubble hype cycle.
"At OpenAI, we can't eliminate that disruption. But what we can do is help more people become fluent in AI and connect them with companies that need their skills, to give people more economic opportunities."
I love it when companies do something and at the same time say, that they cannot do anything about it. Like Microsoft recently firing tons of people while posting record profits and making up some lame excuse about things being the way they are.
For full irony they take your data, then use that data to replace your job.
Full irony mode unlocked: they mine your data for free, feed it into a black box, then pop out a model that can do your job
It's not irony so much as it is brazenly criminal. No one should be in a position of training someone else's AI to replace their job function without consenting to it and without being compensated.
The most valuable thing AI can do right now is write code, and it couldn't do that without thousands of StackOverflow volunteers.
Wouldn't the language / frameworks documentation be sufficient?
At what point in the system?
If you have an LLM that was trained on (say) everything on the internet except for programming, and then trained it on the Python.org online documentation, would that be enough for it to start programming Python? I've not tried, but I get the impression that it needs to see lots and lots of examples first.
No, absolutely not. A lot of docs are old, missing code examples, very narrow, etc.
No more laws to be enacted on AI in the USA for 10 years, thanks to the billionaires. Pouring money into the elections has been a great ROI for the ultra wealthy.
that was removed from the final legislation.
they don't even replace any jobs; they suck all the money out that was previously going to employ people doing useful things.
I feel like we need a new word for money going to datacenters instead of paychecks. 'AI taking jobs' implies AI is doing the work, which is not the case.
It also seems like a lot of these layoffs due to AI are just regular layoffs.
Let's reduce headcount and spin it as AI disruption! That way we don't have to acknowledge we overhired during covid AND our stock price will go to the moon, as they say.
bingo!
Yes. I see this with my own employer.
Crazy how these CEOs are so brazenly and openly committing fraud. The market and investors are playing along because the stock price is going up. The board doesn't give a fuck.
USA is one giant casino right now.
what does this even mean?
Essentially displacing other jobs into power generation / energy extraction (something that, generally, we don’t want more of, since very little data centre energy is green) and huge investments in mass production AI capable servers which become obsolete rapidly.
what part is not clear to you?
well, for one, "datacenters instead of paychecks" is nonsensical. openai doesn't operate their own datacenters, they in fact write checks to people who do, who in turn write their own checks to their employees.
it sounds like you don't believe datacenters are "useful", either. that's a pretty hot take IMO.
I never said ppl are directly writing checks to datacenters. I said "money is going to". You can't read?
> it sounds like you don't believe datacenters are "useful", either. that's a pretty hot take IMO.
It "sounds like" you are here to pick idiotic fights.
I don't know why ppl don't seek mental health help instead of being a pathetic troll online.
ah, the ad hominem attack, the gold standard of discourse.
[dead]
The endgame for all AI companies is to replace all of labor. Everyone's use of AI through their chat and api endpoints is training those replacements right now.
And when the dust settles, the same companies will turn around and sell us the cure for the problem they engineered
Why are people saying its AI and not the current administration running the economy into the ground?
Because the 'trend' of the economy going into the ground has been consistent for decades.
A few years ago the economy was adding a lot of jobs, and the wage gains for the poor were rapidly outpacing inflation.
The amount of arrogance in technology is staggering.
Instacart IPO'd. OpenAI hit $300B valuation. Different companies, different industries—yet look closer and you'll find the same names signing both checks.
Which is it, AI projects have delivered no value or AI is eating all jobs??
We're replacing managers with AI at our startup.
Really? Can you tell us more about that?
Yeah we spawned an agent, his name is Jeremy, that runs 24/7 and coordinates work within the team. He reaches out to individuals on the team (via email) and we interact with him, things like:
- "Hey Jeremy, I completed <X>, what should I look at next?"
- "Jeremy, who's currently working on <Y>"
He's backed by a db, so it ends up being way more powerful than something like Trello or Jira. This is one of the agents we use internally to dogfood our product (gobii dot ai). We spun up the agent, told it that its job was helping coordinate and prioritize work within the team, and authorized it to contact the relevant team members. Now we all just email to/from the agent (we'd love Discord or Slack, but we don't have that quite yet).
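For anyone curious what "backed by a db" buys you over Trello/Jira, here's a minimal sketch of the coordination layer. Everything here is hypothetical (the table schema, function names, and priority scheme are mine, not gobii's) and the LLM/email plumbing is left out entirely; this just shows how the two example questions above become trivial queries:

```python
import sqlite3

# Hypothetical sketch of a DB-backed work coordinator. A real agent would sit
# an LLM between the team's emails and these queries; this is only the state.

def init_db(conn):
    conn.execute("""CREATE TABLE IF NOT EXISTS tasks (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        priority INTEGER NOT NULL,            -- lower number = more urgent
        assignee TEXT,                        -- NULL until handed out
        status TEXT NOT NULL DEFAULT 'open'   -- 'open' | 'done'
    )""")

def add_task(conn, title, priority):
    conn.execute("INSERT INTO tasks (title, priority) VALUES (?, ?)",
                 (title, priority))

def next_task(conn, person):
    # "Hey Jeremy, I completed <X>, what should I look at next?"
    row = conn.execute(
        "SELECT id, title FROM tasks "
        "WHERE status = 'open' AND assignee IS NULL "
        "ORDER BY priority LIMIT 1").fetchone()
    if row is None:
        return None
    conn.execute("UPDATE tasks SET assignee = ? WHERE id = ?", (person, row[0]))
    return row[1]

def who_is_on(conn, title):
    # "Jeremy, who's currently working on <Y>?"
    row = conn.execute(
        "SELECT assignee FROM tasks WHERE title = ? AND status = 'open'",
        (title,)).fetchone()
    return row[0] if row else None
```

The point is that assignments and priorities survive restarts and are queryable, instead of living in the agent's context window.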
> Linear mcp
And captcha forces users to train neural networks for free, planning to then replace the users with those neural networks :)
Moreover, website owners even pay for captcha. It should be the other way around: people who participated in training the neural nets should share in the profit and ownership of the networks, at the very least.
Why does a company like OpenAI, which is so big and has a clear, challenging goal, care about online academies, etc.? What % can they get from it? Is it even worth it? They should focus all their minds on their (demanding) main goal, but it seems they are distracted with these stupid things.
This sounds so weird to me, and I feel I am missing something.
It feels like a proactive PR move to me. When the shit hits the fan in a few years and jobs are vanishing, they can point to this as one of the many examples of how they're fighting the good fight for humanity
Probably because they have thrown as much money and manpower as they possibly can at the core problem and investing more into it is diminishing returns. Building out a platform is the logical next step and there are probably 100s of revenue positive businesses they can build on top of it by just throwing some software developers at the problem. They are going to be releasing a _lot_ of products over the next few years that aren't just "GPT 6" or whatever.
You might as well ask why google built an ad company or email or video, or a browser or a phone OS etc, when they should have spent more money on their core search engine.
It turns out you can't have a thriving economy if everyone is just working service jobs as floor stockers and baristas.
If we are lucky, AI provides a huge accelerant to insourcing manufacturing with advanced automation. This is an energy-saving play: it reduces the total distance that objects are shipped, which wastes a lot of energy, especially when the objects are of lower value.
There is something kafkaesque about these giant tech companies restricting what you can talk to the AI about under the name of ethics while at the same time openly planning to replace you in the workforce.
Is there? I don't see any contradiction there.
For me it's funny that the first time most programmers ever think about the ethics of automating away jobs is when they themselves become automated.
He didn't say "contradiction," he said "kafkaesque," meaning "characteristic or reminiscent of the oppressive or nightmarish qualities of Franz Kafka's fictional world" (according to Google).
I don't see why it would be "kafkaesque" either.
In fact I fail to see any connection between those two facts other than that both are decisions to allow or not allow something to happen by OpenAI.
It's oppressive and nightmarish because we are at the mercy of large conglomerates tracking every move we make and kicking our livelihoods out from under us, while also censoring AI to make it more amenable to pro-corporate speech.
Imagine if ChatGPT gave "do a luigi" as a solution to walmart tracking your face, gait, device fingerprints, location, and payment details, then offering that data to local police forces for the grand panopticon to use for parallel reconstruction.
It would be unimaginable. That's because the only way for someone to be in the position to determine what is censored in the chat window, would be for them to be completely on the side of the data panopticon.
There is no world where technology can empower the average user more than those who came in with means.
> Imagine if ChatGPT gave "do a luigi" as a solution to walmart tracking your face, gait, device fingerprints, location, and payment details, then offering that data to local police forces for the grand panopticon to use for parallel reconstruction.
> It would be unimaginable.
By "do a luigi" you're referring to the person who executed a health insurance CEO in cold blood on the street?
Are you really suggesting that training LLMs to not suggest committing murder is evil censorship? If LLMs started suggesting literal murder as a solution to problems that people typed in, do you really think that would be a good idea?
Didn't OpenAI already suggest to a kid to kill himself and avoid asking for help from the outside some weeks ago?
Yeah, but what we are all whining about here (apart from folks working on LLMs and/or holding bigger stakes in them, a non-trivial and vocal group here) has already hit many other jobs in the past. Very often thanks to our own work.
It is funny, in the worst way possible of course, that even our chairs are not as stable as we thought they were. Even automation can somehow be automated away.
Remember all those posts stating how software engineering is harder, more unique, somehow more special than other engineering or other types of jobs? Seems like it's time for some re-evaluation of those big-ego statements... but maybe it's just me.
> Yeah but what we are all whining here about has hit many other jobs already in the past.
I'm less talking about automation and more about the underpinnings of the automation and the consequences in greater society. Not just the effects it has on poor ole software engineers.
It is quite ironic to see the automation hit engineers, who in the past generally did not care about the consequences of their work, particularly in data spaces. We have all collectively found ourselves in a local minima of optimization, where the most profitable thing we can do is collect as much data on people as possible and continually trade it back and forth between parties who have proven they have no business holding said data.
Having used agentic ai (Claude Code, Gemini CLI) and other LLM based tools quite a bit for development work I just don't see it replacing developers anytime soon. Sure a lot of my job now is cleaning up code created by these tools but they are not building usable systems without a lot of developer oversight. I think they'll create more software developer roles and specialties.
What you are saying does not contradict the point from your parent. Automation can create "more roles and specialties" while reducing the total number of people in aggregate for greater economic output and further concentration of capital.
There’s two kinds of programmers:
0. The people who got into it just as a job
1. The people who thought they could do it as art
And #1 is getting thrashed and thrown out the window by the advent of AI coding tools, and by the revelation that companies didn't give a darn about their art. Same with AI art tools and real artists. It even raises the question of whether programming should ever have been viewed as an art form.
On that note, programmers collectively have never minded writing code that oppresses other people. Whether with constant distractions in Windows 11, building unnecessarily deadly weapons at Northrop Grumman, or automating the livelihoods of millions of “inferior” jobs. That was even a trend, “disrupting” traditional industries (with no regard to what happens to those employed in said traditional industry). Nice to see the shoe just a little on the other foot.
For many of you here, keep in mind your big salary, came from disrupting and destroying other people’s salaries. Sleep well tonight and don’t complain when it’s your turn.
gjsman-1000 says "Whether with constant distractions in Windows 11, building unnecessarily deadly weapons at Northrop Grumman, or automating the livelihoods of millions of “inferior” jobs."
"unnecessarily deadly"?
I had no idea that it was possible to measure degrees of dead: she's dead, they're dead, we're all dead, etc. - I thought it was the same "dead" for everyone.
Also, interesting but ambiguous sentence structure.
Is this an offshoot of LLMs that I've overlooked?
What's sad is engineering is very much an art. Great innovation comes from the artistic view of engineering and creation.
The thing is, there's no innovation in the "track everything that breathes and sell the data to advertisers and cops" market.
They might get better at the data collection and introspection, but we as a society have gotten nothing but streamlined spyware and mental illness from these markets.
> building unnecessarily deadly weapons at Northrop Grumman
Northrop Grumman only builds what Congress asks of them, which is usually boring shit like toilet seats and SLEPs. You can argue that they design unnecessarily deadly weapons, but if they've built it then it is precisely as deadly as required by law. Every time Northrop grows a conscience, BAE wins a contract.
> Every time Northrop grows a conscience, BAE wins a contract.
That's a lame "I was just following orders" excuse. Doesn't matter who gets the contract, if you work for a weapons manufacturer or a large corporation that exploits user data you have no moral high ground. Simple as that.
If you don't see why this is oppressive, that's really a _you_ problem.
I'm being facetious, but life in the rust belt post industrial automation is kinda close. Google Maps a random Detroit east side neighborhood to see what I mean.
But it wasn’t industrial automation that ruined Detroit. It was the automakers’ failure to compete with highly capable foreign competition.
> It was the automakers’ failure to compete with highly capable foreign competition.
A lot of their capability was due to them being better at automation. See: NUMMI
> It was the automakers’ failure to compete with highly capable foreign competition.
I contend it was when Dodge won the court case deciding that shareholders were more important than employees. It’s been a slow burn ever since.
It's not a comment on the ethics of replacing jobs but the hypocrisy of companies using "ethics" as reasoning for restricting content.
They are pursuing profits. Their ethical focus is essentially a form of theater.
Replacing jobs is not an ethical issue.
Automation and technology have been replacing jobs for well over a century, almost always with better outcomes for society. If it were an ethical issue, then it would be unethical not to do it.
In any case, which jobs have been replaced by LLMs? Most of the actual ones I know were BS jobs to begin with - jobs I wish had not existed to begin with. The rest of the ones are where CEOs are simply using AI as an excuse to execute layoffs (i.e. the work isn't actually being done by an LLM).
Your definition of what counts as an ethical issue is reductive. An ethical issue is one that involves ethics, and ethics are obviously involved here. Even if ultimately society at large would benefit from the disappearance of certain jobs, that can still create suffering for hundreds of thousands of people.
BeetleB says "The rest of the ones are where CEOs are simply using AI as an excuse to execute layoffs (i.e. the work isn't actually being done by an LLM)."
So lay people off to reduce costs, say that they have been replaced by AI now, and the stockholders love you even more!
Indeed, a model that should cascade through American businesses quickly.
> Most of the actual ones I know were BS jobs to begin with
I cannot edit my original comment, so I'll address this here:
Yes, I admit some legitimate jobs may have been lost (and if not yet, likely will be). When I spoke of BS jobs, I was referring to things like people being paid to ghostwrite rich college students' essays. That's really the only significant market I know to have been impacted. And good riddance.
You're so wrapped up in defending the job replacement aspect that you miss the point on hypocrisy.
I would like to make one small point about job replacement, the better outcomes for society are arguably inconclusive at this point. You've been indoctrinated to think that all progress and disruption is good because of capitalism.
We're still in the post-industrialization arc of history and we're on a course of overconsumption and ecological destruction.
Yes, we've seen QoL improvements over the course of recent generations. Do you really think it's sustainable?
How is it hypocrisy when OpenAI is clearly acknowledging in their blog post that AI is going to disrupt jobs?
When a factory decides to shut down, and the company offers to pay for 2 years of vocational training for any employee that wants it, is it hypocrisy? One of my physical therapists, who took such an offer, definitely doesn't see it that way. The entity responsible for her losing her job actually ended up setting up a whole new career for her.
> I would like to make one small point about job replacement, the better outcomes for society are arguably inconclusive at this point. You've been indoctrinated to think that all progress and disruption is good because of capitalism.
That's overstating my stance. I can accept that it's too early to say whether LLMs have been a net positive (or will be a net positive), but my inclination is strongly in that direction. For me, it definitely has been a net positive. Because of health issues, LLMs allow me to do things I simply couldn't do before.
> Yes, we've seen QoL improvements over the course of recent generations. Do you really think it's sustainable?
This is an age old question and nothing new with LLMs. We've been arguing it since the dawn of the Industrial Revolution (and for some, since the dawn of farming). What I do know is that it resulted in a lot of great things for society (e.g. medicine), and I don't have much faith that we would have achieved them otherwise.
Ask ChatGPT to explain consequentialism to you.
Yeah, the issue is that there is no common benefit if the private company is the only one doing the replacement. Are we ready for AGI before we solve issues of capitalism? Otherwise, the society may get a harsh reset.
There's actually a lot of common benefit. That company can now supply their goods and services in greater quantity and at lower cost, which raises consumers' standard of living. Meanwhile the workers who were previously employed in menial clerical tasks will simply switch to supervising the AI's that perform those same tasks for them.
> Meanwhile the workers who were previously employed in menial clerical tasks will simply switch to supervising the AI's that perform those same tasks for them.
Why would LLMs be incapable of these new jobs?
> That company can now supply their goods and services in greater quantity and at lower cost, which raises consumers' standard of living
It turns out that standard of living requires more than just access to cheap goods and services
Which is why despite everything getting cheaper, standard of living is not getting better in equivalent measure
I don't think this will happen. Capitalism does not work if only a few companies have all this power. And with many consumers having no jobs left, the value of money increases faster than the "cost" decreases.
> Meanwhile the workers who were previously employed in menial clerical tasks will simply switch to supervising the AI's that perform those same tasks for them.
Put this to numbers, right now: if we remove all workers and leave managers in those fields, how many people are still employed?
Artists generally? Translators? People at various bureaucratic positions doing more menial white collar work? And tons more.
That you specifically wish they didn't even exist is your own internal problem, and actually a pretty horrible thing to say, all things considered.
People had/have decent livelihoods from those; I know a few. If they could easily have gotten better jobs, they would have gone for them.
Egos here are sometimes quite a thing to see. Maybe it's good that the chops are coming for these privileged groups too; a bit of humility never hurts.
So suppose someone wants to, say, provide localized versions of their software and avails themselves of translation software. Are we supposing that such software ought not exist, so as to provide for the livelihood of the translator who would otherwise have been paid?
If so, where do we stop? Do we stop at knowledge work, or do we go back to shovels and ban heavy equipment, or shall we go all the way back to labor-intensive farming methods?
>Egos here are sometimes quite a thing to see. Maybe it's good that the chops are coming for these privileged groups too; a bit of humility never hurts.
This doesn't appear to be so. AI is invoked as a pretext for layoffs more for fashion than function.
> Artists generally?
Which artists have lost their jobs?
But I am willing to grant you that. From a big picture society perspective, if it means that ordinary people like me who cannot afford to pay an artist can now create art sufficiently good for my needs, then this is a net win. I just made an AI song a week ago that got mildly popular, and just got a request to use it at a conference. No one is losing their job because of me. I wouldn't have had the money to pay an artist to create it, and nor would the conference organizers. Yet, society is clearly benefiting.
The same goes for translators (I'm not actually aware that they're losing jobs in a significant way, but I'll accept the premise). Even before LLMs, the fact that I could use Babelfish to translate was fantastic - LLMs are merely an incremental improvement over it.
To me, arguing we shouldn't have AI translators is not really different from arguing we shouldn't have Babelfish/Google Translate. Likely 99% of the people who will benefit from it couldn't afford a professional translator.
(I have, BTW, used a professional translator to get some document translated - his work isn't going away, because organizations need a certified translator).
> People at various bureaucratic positions doing more menial white collar work?
"Menial white collar work" sounds like a good thing to eliminate. Do you want to go back to the days where word processors were not a thing and you had to pay someone to type things up?
> People had/have decent livehoods from those, I know a few. If they could easily got better jobs they would go for them.
I'll admit I spoke somewhat insensitively - yes, even I know people who had good careers with some of them, but again: Look to the past and think of how many technologies have replaced people, and do you wish those technologies did not replace people?
Do you want to deal with switchboard operators every time you make a call?
Do you want to have to deal with a stock broker every time you want to buy/sell?
Do you want to pay a professional every time you want to print a simple thing?
Do you want to go back to snail mail?
Do you want to do all your shopping in person or via a physical catalog?
The list goes on. All of these involved replacing jobs where people earned honest money.
Everything I've listed above has been a bigger disruption than LLMs (so far - things may change in a few years).
> Egos here are sometimes quite a thing to see. Maybe it's good that the chops are coming for these privileged groups too; a bit of humility never hurts.
Actually, I would expect the SW industry to be among the most impacted, given a recent report showing which industries actually use LLMs the most (I think usage in SW was greater than in all other industries combined).
As both an engineer and a programmer, who makes a living via programming, I am not opposed to LLMs, even if my job is at risk. And no, I'm not sitting on a pile of $$$ that I can retire on any time soon.
Then why are you even talking about the replacement of jobs?
I've already explained it. I don't know how to break it down any further without coming off as patronizing. You seem dead-set on defending OpenAI and not getting the point.
I think many of us question the ethics of lying to sell a product that cannot deliver what you are promising.
All the good devs that I know aren't worried about losing their jobs, they are happy there is a shortcut through boilerplate and documentation. They are also equally unhappy about having to talk management, who know very little about the world of dev, off the ledge as they are getting ready to jump off with their AI wings that will fail.
Finally, the original point was about censorship and controlling of information, not automating jobs.
> All the good devs that I know aren't worried about losing their jobs
While many of them are mistaken, the much bigger problem is for all the early-career developers, many of whom will never work in the field. These people were assured by everyone from professors to industry leaders to tech writers that the bounty of problems available for humanity to solve would outpace the rate at which automation would reduce the demand for developers. I thought it was pretty obviously a fairy tale that people who believed in infinite growth created to soothe themselves and other industry denizens who suspected the tech industry hadn't unlocked the secret to an infinite free lunch, and who are in reality closer to the business end of an ouroboros than they realize.
Just as the manufacturing sector let its Tool and Die knowledge atrophy, perhaps irreversibly, the software business will do the same with development. Off-shoring meant the sector had a glut of tool and die knowledge so there was no immediate financial incentive to hire apprentices. There’s a bunch of near-retirees with all of that knowledge and nobody to take over for them, and now that advanced manufacturing is picking up steam in the US again, many have no choice but to outsource that to China, too.
Dispensing with the pretenses of being computer scientists or engineers, software development is a trade, not an academic discipline, and education can’t instill professional competence. After a decade or two of never having to hire a junior because the existing pool of developers can serve all of the industry’s needs, suddenly we’ll have run out of people to replace the retirees with and that’s that for the incredible US software industry.
For another thing the owners of the data centers may not do so well if their wildest dreams fail to come true, and if they don't happen to make enough money to replace the hardware before it wears out.
> All the good devs that I know aren't worried about losing their jobs...
'Good' is doing heavy lifting here. E.g., AI/automation could possibly eliminate 90% of IT jobs and cause all kinds of socio-economic issues in society, all the while good developers remain in great demand.
The book "Why We Fear AI" by Hagen Blix and Ingeborg Glimmer talks about this dynamic, and whether it will lead to a class awakening: previously, if you aligned with the company you were rewarded for it, but now, if you align with the company, you're advocating for the destruction of your own livelihood.
What rational worker would want to take part in this?
Software developers. Many of them are still championing LLMs. Also anybody who still contributes to open source software.
In contributing to open source software at scale I'm teaching apprentices. I expect them to adapt what I've done to their own purposes, and have seen a good amount of that out in the wild, often people who ended up doing something entirely different like building hardware that also contains software.
I don't think LLMs will be able to pick up on what's done by an evolving and growing codebase with previous projects also included. More likely it will draw from older stuff and combine it with other people's more normal stuff and end up with an incoherent mess that won't compile. Not all work is 'come up with the correct answer to the problem, and then everybody uses it forever'.
It can lead to a class awakening, but I think AI alone is not sufficient. It would take very large-scale climate/ecological disasters where suddenly a lot of current middle-class conveniences become available only to the top classes.
This is happening in parts of the world where hyperscale data centers are being built: rolling brownouts and potable water diverted from towns. You find these stories both in Ireland and across South America.
We already see it happening in the US too, with the Nashville data centers causing immense medical issues.
Lots? I mean, most people I know aren't even willing to entertain the notion that it's gonna happen within our lifetime.
The argument usually centers around the fact that LLMs aren't AGI, which is obviously true but also kinda missing the point
We don't need AGI to cause a massive amount of disruption. If the leadership of companies wants to force the use of these LLMs, which is what we've been experiencing for the last two years, workers will be forced to use them.
It's not like there is an organic bottom up movement on driving this usage. It's always top down mandated by executives with little regard on how it impacts worker's lives.
We've also seen how these tools have made certain jobs worse, not better, like translation:
https://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-i...
The number of bullshit jobs has been growing since the Internet, and programmers have facilitated them by adding unnecessary complexity.
This time the purported capabilities of "AI" are a direct attack on thinking. Outsourcing thinking is creepy and turns humans into biorobots. It is different from robotic welding in an assembly line.
Even if new bullshit jobs are created, the work will just be that of a human photocopier.
[All this is written under the assumption that "AI" works, which it does not but which is the premise assumed by the persons quoted in the Register article.]
I don't see how thinking about some source code is an innately more human activity than welding. Both can be done by humans, both couldn't be done by anything but humans until automation came along and now both can be done by humans and automated systems.
I also fail to see how LLMs can turn humans into "biorobots". You can still do all the things you could do before LLMs came along. The economic value of those things just decreased enormously.
Then go weld. There are still some positions for humans.
Tons of welding and other manufacturing jobs in the northeast— they’ll even apprentice you into positions with no existing knowledge and larger companies (like General Dynamics) will even pay for your job-related degrees, sometimes being able to take the classes on the clock or get a stipend.
They have to do this because the industry has basically been kicking the aging-workforce can down the road for a few decades since off-shoring and automation outpaced increasing demand, and now they don’t have nearly enough people that even know how to program CNC machines when CAM software falls short.
I have a feeling a lot of displaced software people will go that route, and have a big change in compensation and working conditions in the process.
> I have a feeling a lot of displaced software people will go that route, and have a big change in compensation and working conditions in the process.
I've watched my cousin weld on a horse trailer overhead in 105F Texas heat, would be interesting to see the typical SWE step away from an Xbox and do that.
Yeah I don’t think they’re going to have much of a choice unless they plan on doing gig jobs indefinitely. The software business has given a lot of people the impression that they’re far more special than they actually are.
I’ve seen devs say they’d pick up a trade like being a plumber or electrician because their master electrician cousin gets paid a ton of money, probably, they imagine, for wiring up new residential buildings and changing out light sockets… how long did it take that cousin to get there? In any trade, there’s quite a number of years of low pay and manual labor, cramming into tight spaces in hot attics or through bug-infested crawl spaces, factory basements, etc., that most apprentices complete in their early twenties. Nobody gives a shit what you did as a developer and nobody gives a shit how good you are at googling things in most blue collar work environments. Getting experienced enough to have your own business making good money in some job where you need many thousands of work hours to even take a test to get licensed isn’t a lateral move from being a JS toolchain whiz. Even in less structured jobs like working as a bartender — it takes years of barbacking, serving, or bartending in the least desirable jobs (events, corporate spaces) before you get something you can pay rent with.
My argument isn't that I like welding more. I'm asking you what the _ethical_ difference is between automating welding and automating programming.
The fact that you like programming more than welding is nice to know, but there are probably also a lot of people who like welding more than programming.
It's like being lectured on ethical behavior by the thing that's actively eating your lunch
Why do you think automating software development is any less ethical than automating other jobs (which many software developers actively engaged in)?
Automating software development is not unethical. The unethical bit is letting the resources freed up by automation flow, by default, to those who own the means of production.
As a software developer, when I automate someone's job, say of a cashier, I do not start to get paid their salary - my salary stays the same.
This is different for capital investors and shareholders. They keep the cashier's salary (not directly but ultimately). This results in an increasing concentration of wealth, creates lots of suffering and destabilises the world. That is where it is unethical.
In that case pretty much any automation is unethical, isn't it?
Yes, if it's resulting in a redistribution of wealth from the workers getting laid off (with no great prospects) to those providing the automation. OpenAI isn't providing any new job opportunities, it's just destroying existing ones.
The solution is for our government to onboard us onto the internet economy like China is doing. Rather than slow down tech advancement.
While training their models on pirated and scraped content
i've given this reply many times before but it's still worth repeating that "AI Safety/Ethics" really just means brand safety for the model provider.
"Rules for thee, but not for me"
Why doesn't openAI launch a github competitor where you describe a project you'd like to exist and they serve it up?
It's like setting your house on fire and handing you a fire extinguisher... for a fee
I’m not convinced matching users to jobs is the next vertical for OpenAI. To me it seems like they’re randomly grasping at straws.
I don't understand how people can sit in these comment sections and pretend AI isn't affecting productivity and the job market. Do we all just imagine the insane level of AI usage only affects high school students, and somehow doesn't affect employees?
It's clearly having a huge impact on large portions of the job market.
I think current implementations of AI are vastly overhyped in the market. There's a huge gap between what people think or expect from AI and what has been delivered, and the deliveries are not likely to catch up in the short or medium term.
There are remarkably few success stories of AI actually replacing employees directly. The best examples we've got are marginal (and conflicting) improvements in software developer productivity.
I'm 100% willing to believe some fewer people are getting hired because many employers are actually drinking the kool-aid: they think they won't need them in 3-6 months. I also think fewer people are getting hired due to other economic conditions (e.g. tariffs). Still others see the writing on the wall (AI bubble pop is imminent, recession likely incoming) and don't want to hire for that reason.
In any case, I don't think it's because of AI itself.
> There are remarkably few success stories of AI actually replacing employees directly.
If I employ 100 people, and AI makes them 10% more productive, then I only need 90 people now to do the same work. AI doesn't have to replace people 1:1 for it to result in job losses. It just needs to increase productivity (or be seen to even).
And you might think: 100 people who are 10% more productive means I now have the output of 110! But that's not how corporations work. When they have extra money, they don't invest it in R&D. They do stock buybacks. And when they have extra productivity, they do layoffs and stock buybacks.
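A quick sanity check on that arithmetic (a sketch using the hypothetical 100-worker, 10%-gain numbers from above; in practice the exact headcount rounds to 91, since each remaining worker produces 1.1× as much):

```python
import math

def headcount_needed(current_staff: int, productivity_gain: float) -> int:
    """Workers needed to keep total output constant after a
    uniform per-worker productivity gain."""
    # Each worker now produces (1 + gain) units of output, so the
    # same total output requires current_staff / (1 + gain) workers.
    return math.ceil(current_staff / (1 + productivity_gain))

# 100 workers, each 10% more productive: ~91 still needed,
# i.e. roughly 9 positions become redundant.
print(headcount_needed(100, 0.10))  # → 91
```

The point survives the rounding either way: the gain doesn't have to replace anyone 1:1 to shrink headcount.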
> If I employ 100 people, and AI makes them 10% more productive, then I only need 90 people now to do the same work.
I think it's yet to be seen that AI actually makes people 10% more productive, and further, that getting a task done 10% faster actually means they have 10% more time that is actually used for additional tasks.
And more bluntly: I think if it actually were happening, it'd be a lot more plainly obvious to all of us. The pressure from the C-suite to integrate AI into our workflows has been widely noted.
It would be insanely obvious if this was actually happening but instead we are seeing a massive economic downturn
So it's all fairy tales and unicorn farts as far as I'm concerned
The OpenAI blog post seems equal parts desperate and gross to me. A jobs platform and certifications? They want to act as the certifying body attesting that 10 million Americans have achieved 'AI fluency' by 2030, as well as act as the intermediary matching these individuals up with corporations apparently desperate for their fluency?
"Jobs will look different, companies will have to adapt, and all of us—from shift workers to CEOs—will have to learn how to work in new ways." - Thank god for OpenAI.
This whole thing looks like someone wrote it up, pasted it in an AI, asked "How's this look?" and got back something like "Looks absolutely perfect!" and hit publish.
Isn't this just a subset of software eating the world?
Fun thought of the day: Do a basic income and pay salaries in stocks.
> At OpenAI, we can't eliminate that disruption.
The irony. More accurately it should have said, "At OpenAI, we're at the forefront of that disruption."
[dupe] Some more discussion: https://news.ycombinator.com/item?id=45131262
Nice tongue-in-cheek from the Register and OpenAI to tickle the 'doomer' narrative.
But the reality is that there will be new high-skilled jobs from AI thanks to Jevons' paradox: more companies using AI will lead to a huge demand for highly skilled people who can use AI in more ways than we are today.
Not so much about being replaced, but there will be new jobs for people to do.
I guess for those people being 'replaced' it is a 'skill issue'
What kind of new jobs?
"AI will be disruptive. Jobs will look different, companies will have to adapt, and all of us – from shift workers to CEOs – will have to learn how to work in new ways," she said in a blog post.
We don't have to do anything. People always listen to this propaganda from the wealthy and think the latest gadgets are inevitable.
We can go on a general strike until copyright is restored and the "AI" companies go bankrupt. Journalists can write sharper and sharper articles. You can refuse to use "AI". If mandated in a job, demonstrate that it slows you down (probably true anyway).
You can demand an investigation into Suchir Balaji's demise (Elon Musk actually recently endorsed a tweet demanding just that; you can, too).
You can boycott robotaxis. You can stop watching YouTube channels that do product placement of humanoid robots. You can do a lot of things.
This won't happen. Even if Western countries outlaw LLMs (they won't; people care about unemployed software developers just as much as most software developers cared about unemployed translators), China won't. And why would we even do that? Because software developers like their cushy jobs and don't want to do blue-collar work?
>Because software developers like their cushy job and don't want to do blue-collar jobs?
In the US of A, most blue-collar jobs:
1. Pay less
2. Offer poor or zero benefits
3. Are hard on your body
Not to mention that if suddenly millions of people flood the markets for blue-collar work, then wages will drop further in those fields.
Why would you expect someone to be comfortable with that future?
I wouldn't. But most software developers didn't care whether translators liked the new jobs they had to find. So I think most non-programmers won't care whether you'll like your new job.
China FOMO? Xi Jin Ping has warned twice about an "AI" bubble:
https://www.ft.com/content/9c19d26f-57b3-4754-ac20-eeb627e87...
https://www.wsj.com/tech/ai/china-has-a-different-vision-for...
China restricts "AI" across the country to prevent cheating:
https://nypost.com/2025/08/19/world-news/china-restricts-ai-...
Why would we do that? Because we can! Why would we let some rich brats who have never created something and just steal human knowledge get even richer and control and surveil us all?
Finally some common sense here. This is exactly my thought: how come, in the age of instant information flow, people cannot gather together under one idea? How could the French Revolution happen without Messenger and WhatsApp? People had to walk and talk about ideas.
AI CEOs are talking like they are oracles, but they just need to please stakeholders.
> how come in age of instant information flow people cannot gather together under one idea
Because instant misinformation is even faster and satisfies their biases.
> We can go on a general strike until copyright is restored and the "AI" companies go bankrupt. Journalists can write sharper and sharper articles. You can refuse to use "AI".
This sounds like naive wishful thinking. Add “theoretically” after every “can” for real life.
It has worked against nuclear energy in Germany. The major protests were in the 1980s, against an all-powerful government and an all-powerful industry lobby.
Personally, I'm for nuclear energy, but this is one of many examples that a population can get its way.