I'm excited for the AI wildfire to come and engulf these AI-written thinkpieces. At this point I'd prefer a set of bullet points over having to sift through more "it's not X (emdash) it's Y" pestilence.
You’re absolutely right! It’s not just pestilence—It’s the death of the internet as we know it. …I’m sorry, I couldn’t help myself.
Edit: I forgot HN strips emojis.
> "it's not X (emdash) it's Y" pestilence.
I wonder for how long this will keep working. Can't be too hard to prompt an AI to avoid "tells" like this one...
Anyone lazy enough to not check the output is also going to be lazy enough to be easy to spot.
People who put the effort into checking the output aren't necessarily checking more than style, but some of them will, so it will still help.
The trouble is "AI" is waaaaay less of a boost to productivity if you have to actually check the output closely. My wife does a lot with AI-assisted writing and keeps running into companies that think it's going to let them fire a shitload of writers and have the editors do everything... but editing AI slop is way more work than editing the output of a half-decent human writer, let alone a good one.
If you're getting a lot of value out of LLM writing right now, your quality was already garbage and you're just using it to increase volume, or you have let your quality crater.
Luckily there are plenty of other obvious tells!
Biggest one in this case, in my opinion: it's an extremely long article with awkward section headers every few paragraphs. I find that any use of "The ___ Problem" or "The ___ Lesson" for a section header is especially glaring. Or more generally, many superfluous section headers of the form "The [oddly-constructed noun phrase]". I mean, googling "The Fire-Retardant Giants" literally only returns this specific article.
Or another one here: the historic stock price data is slightly wrong. For whatever reason, LLMs seem to make mistakes with that often, perhaps due to operating on downsampled data. The initial red-flag here is the first table claims Apple's split-adjusted peak close in 2000 was exactly $1.00.
There are plenty of issues with the accuracy of the written content as well, but it's not worth getting into.
People are already prompting with "yeah, don't do these things":
"That's such a great observation that highlights an important social issue — let's delve into it!"
I've been prompting the bot to avoid its tics for as long as I've been using it for anything; 3 years or so, now, I'd guess.
It's just a matter of reading and understanding the output, noticing patterns that are repetitious or annoying, and instructing the bot as such: "No. Fucking stop that."
> it's not X (emdash) it's Y
No, no, no! Stop that! The em dash is a wonderful little punctuation mark that's damned useful when used with purpose. You can't turn it into some scarlet glyph just because normal people finally noticed they exist. LLMs use them because we used them, damn it.
For god's sake, are we supposed to go back to the dark ages of the double hyphen like typographic barbarians in the hopes that a future update won't ruin that, too? After all the work to get text editors to automatically substitute them in the first place?
What's funny is that, when people first started noticing that LLMs tended to like the em dash, I'd mentioned to a friend that I hoped—rather naively—it might lead to a resurgence and people would think to themselves "huh, that looks pretty useful." Needless to say, I got that one wrong. Are we really going to sacrifice the poor em dash just because people can't come up with a better signifier for LLM text?
Oh, no thanks. The emdash is lazy writing, through and through, for the same reason a parenthetical expressed any other way might be. LLMs overuse them the same way humans do: to pack in context where it doesn't belong. I'd happily lay the emdash and all its terrible cousins upon the sacrificial altar to see a renaissance in editing and proper sentence construction.
I’ve never seen an LLM use an em-dash the way a thoughtful human is most likely to use them, in a parentheses-like pair. It’s just too bad there’s way too many idiots who cannot notice such subtleties.
I first learned about em dash reading the GNU Texinfo manual in the 1990s. Now I have to wear a red, slightly long horizontal line on my shirt, and passersby shun me.
You forgot the italics on Y
I’m just glad everyone stopped posting their perfect prompting strategies.
Even if the bubble burst is massive, the slop factories will not stop using it, because it's one of the use cases LLMs are genuinely good at.
It's not like many of those places weren't producing slop beforehand, either.
It makes me wonder what verbal tics / tells it has in other languages.
In Spanish: "En resumen..." ("in summary...")
Of course, that's exactly what won't happen. AI as "better spam" is not going away, it's going to wriggle in everywhere.
It's more things like AI delivering pizza that's under threat. You know, the actual value.
I think there is a high risk that people will begin writing like AI, or they will stop using effective/engaging styles because AI happens to use those. I don't want to deal with people writing in contrived/awkward/imperfect ways just to appear more human. That is a losing game anyway, because AI will learn to copy everyone.
I just copy-paste an article into chatgpt and tell it to give me 3 bullet points for the article. We should have had this forever.
With little growth and hiring happening outside of firms betting the farm on AI—and getting the funding to stay alive and play the lottery—what is a random tech employee supposed to do here?
It seems like right now the most rational move to stay in the industry is to milk the AI wave as much as possible, learn all of the tools, get a big brand name on one's resume, and then land somewhere still-alive once the AI music stops? But ultimately if nothing outside of AI is growing, it's one big game of musical chairs and even that might not save you?
That “rational move” has always been a good move, regardless of AI. This is a boom/bust industry, and the next boom will come in a few years. While we’re at it, if you’re making engineer money, you should be targeting retirement at 50. I’m not saying you have to do that, but it sure helps to have that option.
> if you’re making engineer money,
SV & big tech engineer money.
Most engineering fields do not pay the kind of money that lets you retire at 50. Comfortable compared to the rest of the country, sure.
I think maybe that was implied, considering the topic of conversation and website we’re on.
That said if you’re making $250k+ a year and not on track to retire by 50, seriously please open a retirement calculator and figure out what you need to do to get there.
I wouldn't assume. The readership of HN is quite vast. I've never worked at big tech and don't plan to.
Which is a pretty high salary in the US in tech generally.
That said, a lot of people in US tech can probably retire relatively early if they run the numbers and don't have a lot of external expenses.
There are a whole lot of people here who work in the tech industry but aren't working in SV. There are even a decent number who aren't in the tech industry at all.
I can back you up on that.
I regularly frequent HN, and even comment from time to time, but I don't work in tech nor do I make bank. I'm a cashier at a gas station. Lol. I'm lucky if I make $16000 a year after taxes.
Tax rates, cost of living differences, etc., depending on where you are in the world, don't always make this a good salary.
Generally outside SV:
- If you are making $250k+, it is at least middle management (not tech work), AND
- Only in zones where cost of living eats it up (e.g. the UK/Europe/Australia/etc. can reach an equivalent salary, but costs for rent, food, tax, etc. are much higher).
In most countries SWE is above average pay, but it isn't life changing and it still unfortunately has the boom/bust cycles.
I've met some very good engineers who have built great large-scale solutions and who are on less than this salary, often at non-tech firms outside the SV area for personal reasons (e.g. can't move due to family, too old to do the interview dance SWE has become these days, etc.).
... if you're making $250+k/yr as an individual in your 20s, yes. If you've just hit that at age 40+, maybe you're just looking at a comfortable 60-67ish retirement. The US medical system gives you exposure to well into the five figures of risk per year on top of at least high-four-figures in premiums per year (at age 50). Each extra year puts an early retirement without crazy money behind it at serious risk, because your expenses could suddenly and unavoidably shoot up by tens of thousands per year for several years on end.
$250k+ a year works out to roughly $12k a month after taxes.
A semi-decent apartment in SV will cost you ~$3k.
Bills(phone, internet, electricity, etc) another $1k.
If you are married, groceries at least $1k.
Even if we assume you don't do anything else in life and you are in perfect health, the best-case scenario would be $6k in savings a month, or $72k a year.
It would take you 10 years to save $720k plus whatever you make from investments.
That’s not enough to even buy you a house in SV. How are you going to retire?
Unless you assume you will get $250k straight out of college and keep up salary raises for 25 years.
Sure, if you don’t have kids, age with no health problems, never enjoy anything in life, you may be able to retire at 50 in Thailand or Philippines.
$2k per month for groceries and utilities for a married couple is insanely high, in any part of the country.
We're probably around $1,200 for groceries and related (cleaning stuff, mostly) in our house, but we're a family of five. Yeah I'd say $2k is nuts for just two people, even today.
For a long while we managed to stay around $500-600 but that was before COVID inflation. I dunno how the official inflation rate's as low as it is. We don't buy much that'd be considered "luxury" level (we're not buying caviar, say, and rarely even get stuff like the grass-fed "fancy" butter [actually yellow instead of white, tastes like something rather than just having texture but no flavor] instead of the cheapest available), and I'm pretty sure we buy a lot less meat per person than the US average, but if we fill up a cart now it's like $250-$300. I've hit $150 on small shopping trips where I didn't even fully fill one of the smaller, short carts.
For groceries I budgeted $1k and $1k for phones, internet, water, electricity, gas, garbage, etc.
$2.3k, northern VA area, family of 3, not fancy anything. data centers have spiked electricity bills, food is insane of course. this does not include once-a-week dinner out or take out
California actual amounts for 2 people:
$150 phone, $200 electricity, $200 gas, $200 water, $80 internet, $80 trash
Car insurance? Gas? I’m ridiculously generous when saying you can save $6k per month.
$6k a month over the past decade is circa $1.7m today, depending on which index fund you chose.
Assuming a 4% drawdown (conventionally agreed to be safe), that's over $5.5k a month.
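Rough sketch of that math (the 10% and 16% annual returns below are my assumptions standing in for "a broad index fund" vs. "a tech-heavy fund" over the past decade; real returns were lumpy, not flat):

    # Grow $6k/month at an assumed flat annual return, then apply the 4% rule.
    def future_value(monthly=6_000, years=10, annual_return=0.10):
        balance = 0.0
        r = (1 + annual_return) ** (1 / 12) - 1  # equivalent monthly return
        for _ in range(years * 12):
            balance = balance * (1 + r) + monthly
        return balance

    for rate in (0.10, 0.16):
        pot = future_value(annual_return=rate)
        print(f"{rate:.0%}/yr -> pot ${pot:,.0f}, 4% rule ${pot * 0.04 / 12:,.0f}/month")

At ~10%/yr the pot is roughly $1.2m; hitting ~$1.7m needs something like 16-17%/yr, which only the more tech-heavy funds delivered.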
The 4% rule is considered safe for a 30 year retirement period. So at 50 you might want to withdraw a little less.
Would you be making $250k ten years ago? Probably not unless you were super high in the corporate ladder.
Money has lost about 10% per year in value for the past 5 years. It used to take like a million dollars to retire, but now it's like double that. In addition, nobody really knows how long they might live or how bad inflation could get. Imagine retiring at 50 only to be wiped out, and maybe still on the hook to pay for your own expenses for another 50 years, plus whoever you have in your life who counts on you.
$2 million to retire? Without owning a house? You must be kidding, unless you plan to live only until 60 or move to the cheapest place in the country. Also keep in mind that most health problems start after 50.
To be fair, if you believe all the usual assumptions, then you can expect to earn 5% on that money. That would turn into $100k annually, which is enough to live just about anywhere. Now, if you retire on time, I think this may also be tax free. So it's not that crazy, except for the unknowable inflation part of the puzzle. If inflation is also 5%, then your effective loss is 5% per year, so you'd be down roughly two-thirds in real terms after 20 years. Housing costs are crazy, but if you don't need to work then you can easily move to a cheaper place to save money.
Rent? Ever heard of equity? If you make 250k you can afford a nice condo. Right away that blows a huge hole in your math.
Also $1k month on bills? Groceries too?
Judging by your inflated costs for everything, and the idea that a house (versus more modest accommodations) is the goal, you've got Lifestyle creep. And things certainly get a lot easier when your spouse also works.
Renting can be much better financially than buying.
Edit: all % numbers are per year
Consider the case of condos in cities. If you were to buy outright, you effectively get a return by not paying rent (i.e. paying yourself rent). Rent is usually ~5% of the condo cost. HOA + property taxes are 2-3%, so subtract that from the rent return, i.e. a net return of 2-3% (5% minus 2-3%). The rest of the return is appreciation of the underlying real estate. I am excluding maintenance costs because they are negligible in condos.
On the other hand, if you rent and invest the entire amount (that you would have paid to buy the condo), you get ~10% per year. To break even between the two scenarios, you would need real estate prices to grow 7-8% (2-3% net rent return + 7-8% appreciation = 10%).
Beyond this, there are psychological reasons to buy vs rent. Buying - ability to customize the space, peace of mind because of perceived stability etc. Renting - flexibility, peace of mind because of no long-term obligations etc.
A mortgage is an interpolation of the two cases at the cost of the interest one pays. It is noteworthy, at least in the US, that for most people, this is the only time they can borrow several hundreds of thousands at relatively low costs.
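To make that break-even explicit, a minimal sketch using the rates above (all assumptions, expressed as yearly fractions of the condo price):

    rent_yield = 0.05        # rent you avoid paying by owning outright
    carrying_cost = 0.025    # HOA + property taxes
    market_return = 0.10     # assumed return on investing the money instead

    net_ownership_yield = rent_yield - carrying_cost            # ~2.5%
    breakeven_appreciation = market_return - net_ownership_yield
    print(f"break-even appreciation: {breakeven_appreciation:.1%}/yr")  # ~7.5%/yr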
Do you live in the Bay Area?
Bro, not everyone has daddy to give them the down payment to buy anything remotely affordable in SV.
Bro, I can tell you haven’t even tried. Talk to a mortgage advisor.
I’ve been trying for the last decade, boomer. Housing keeps going up and my salary keeps staying the same. It’s to the point where a 30-year mortgage will take me to 80 years old, and a down payment would cost me a decade of saving and nothing but saving. No life, no food, no other bills.
why would you be saving for a house and renting at the same time?
Because you need to have enough saved for a downpayment?
Well, interest rates are high right now, but you’d be surprised at how little down payment you need for purchasing a house or a condo. If you’re a tech worker with a stable career making that kind of money, most underwriters will just give you the loan.
I think people commonly underestimate how accessible this stuff is
It’s easy to make a 40-year forecast spreadsheet for retirement, including housing costs, property taxes, and maintenance. Include vacation budget, food, and general cost of living.
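A minimal sketch of that kind of forecast; every number here is a placeholder assumption, not advice:

    def project(balance=100_000, years=40, real_return=0.05,
                annual_savings=70_000, retire_after=20, retired_spend=80_000):
        # Save until retirement, then draw down; all amounts in today's dollars.
        rows = []
        for year in range(1, years + 1):
            flow = annual_savings if year <= retire_after else -retired_spend
            balance = balance * (1 + real_return) + flow
            rows.append((year, balance))
        return rows

    for year, bal in project():
        if year % 10 == 0:
            print(f"year {year}: ${bal:,.0f}")

Swap in your own housing costs, property taxes, and maintenance as extra outflows and you have the spreadsheet in a dozen lines.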
So you oblige yourself to an enormous long-term loan at high interest, burn PMI on it because you have too little equity, secured against an overpriced-for-quality home whose value may already be at a peak or plateau, fixing yourself to one location, while all signs warn that you may be laid off at any time and face a long period of unemployment.
I knew a lot of people who did almost exactly that ~18 years ago. It didn't go well for them.
And then it turned out that staying flexible as a renter and setting aside cash set me up to buy after a correction instead of before. That part went very well for me.
Be careful with the assumptions coded into your "forecast spreadsheet"
Well yes, there are tradeoffs. On the other hand, go ahead and burn 3k a month on rent.
There is no one size fits all solution but i’m surprised at how many people here are inadvertently revealing to me that they haven’t even tried evaluating.
For example, you saying there’s nothing “affordable” when the baseline assumption is an income of $250k? Can tell you haven’t looked at what’s in your price range. Alright, good luck I guess!
I left Silicon Valley 5 years ago. Are people getting $1M+ loans with zero down these days?
No? What?
I mentioned Lifestyle creep before but what is with everyone’s fried brains?
A small condo in a nice neighborhood in Santa Clara is below $500k. Yes, that’s a lot, and you certainly can get more bang for your buck if you’re willing to do a little commuting.
Btw a $1m house is accessible if you make $250k yr, although to be honest, I would highly recommend against it
The question I originally responded to was "why would you rent and save for a house at the same time?"
I said "because you need to have a downpayment".
You reply "downpayments aren't that high".
Unless you're getting loans zero down, you literally still need to save to have your downpayment. While you're renting.
So where is my brain fried?
Even on a $500k condo, you're putting 10% down, you still need to have that saved up. Noticeably more, in fact, because I'm sure you'd agree "lemme sink every saved cent I have into my house downpayment" wouldn't be wise.
not for 10 years you don't
You don't need to save for a downpayment for 10 years? Or are you saying it won't take 10 years to save up for a downpayment?
now if you only read the comment I answered to you might've figured the answer to that on your own!
Touché.
I don't see how we get to "why would you rent and save for a house at the same time?" from "it takes 10 years to save $720K" but whatever.
2k a month for daycare or nursery school lol
When you can retire depends on how little you need.
Though, of course, if you're living from investment income you should be aware you're living off the work of other people.
Unless you're churning your own butter and manufacturing your own solar panels, isn't retirement inherently living off other people regardless of income?
Isn’t social security living off the backs of other people too?
Yes, they both are, it's just less obvious for investment income.
Childhood too, no? Maybe it turns out life was the original "pyramid scheme"
Childhood was that new car smell and your parents dreaming of the kind of equity they’ll get only to get frustrated with all the maintenance.
Isn't getting wages in a wealthy country so that you can afford a multiple of work hours from poorly-paid people elsewhere inherently living off other people?
If one makes more as a software developer than a bus driver, it doesn't seem like location was the factor
Isn't the logical extension that everyone lives off other people?
This was basically the point of "you didn't build that" (https://en.wikipedia.org/wiki/You_didn%27t_build_that)
you want to live off other people to some degree. a single farmer can feed hundreds - there is no need for everyone to do everything. which of course raises societal fairness and trust issues
It's just slavery with extra steps
This. People act like we’ve been making $200k+ for more than a decade. Most of us haven’t. It wasn’t until 10 years into my career that I hit $100k, so this is boomer math that doesn’t account for inflation of everything.
A majority of software engineers don't make enough money to retire at 50. People who have retired so young tend to be very lucky in both employment and their investments. Most probably stayed unmarried, inherited significant amounts of money, and/or married into even more money. It also helps to be lucky enough to start with a $100k+ job at age 23 and never have any bad luck to set you back. I've met people who check some/all of these boxes, and even they seem to not be retiring at 50.
> what is a random tech employee supposed to do here?
My plan, as someone who was thinking of leaving tech anyway (remote work is not for me, practically any new tech job I get will be at least as remote as this one has become if not more so, and I want to program, not manage programmers, artificial or otherwise), is to stay where I am and push through to the other side if possible; if not, I'll find myself redundant. At that point I'll end up on a lower wage doing something else from the ground up, but if LLMs are going to be what we are told they are, programming will become a minimum-wage job for most anyway. Either way, sticking where I am for now, tightening the purse strings a bit, and saving as much as I can is the best course of action.
If you're a tech employee in a large company with lucrative compensation, you should be aggressively reducing your expenses and banking your excess so you can weather what might be a long period of unemployment and can adapt more smoothly to employment at more modest compensation when you manage to get back in.
Unless you're working very obviously outside the blast radius of an AI-bubble correction (you'd know if you were) or are a very high-value VIP (again, you'd know), you should assume you'll be spending some time without a job within the next few years. Possibly a long time.
You might get lucky, but it's not really going to be in your control and "milking the AI wave, learning all the tools" isn't going to change your odds much. It really is musical chairs. Whether you lose your job will depend on where you happen to be standing when the music stops. And there are going to be so many other people looking for the same new chair as you, with resumes that look almost exactly like yours, that getting a new job will basically come down to a lottery draw.
If you think the AI stuff is cool, study it and play with it. Otherwise, just save money and start working on the outline for that novel you've been thinking about writing.
Do you think this has something to do with the current US policy of antagonizing most of the Western world?
Tech and software's investment balance sheet comes down to a largely fixed cost of development vs. a large customer base where every customer has little to no additional cost.
If you manage to burn the bridges or at least scare hundreds of millions of those people into exploring alternatives, that really eats into your total target market in the long run.
That is a —good— point.
> AI inference demand is directed at improving actual earnings. Companies are deploying intelligence to reduce customer acquisition costs, lower operational expenses, and increase worker productivity. The return is measurable and often immediate, not hypothetical.
Is the return measurable and immediate?
Is it really?
It's AI writing. Big words and rule of 3.
I forgot about the rule of 3 but that's obviously AI writing
Yeah maybe the AI thought leaders will be replaced by AI
But not in the sense of singularity and explosive intelligence, but in the sense of a flaming explosive bubble of slop
Yes.
Dentists offices that only need 1 receptionist instead of 2.
A dramatic reduction in front line tier 1 customer support reps.
Translation teams laid off.
Documentation teams dramatically reduced.
Data entry teams replaced by vision models.
That's a cool dream, but my question is: is it happening?
Out of the things you listed the only ones that seem plausible are translation team and data entry team, though even there, I'd want humans to deslop the output.
I'm telling you what I've either worked on or seen myself.
Just a couple days ago I scheduled a furnace repair through an AI receptionist on the phone.
Layoffs in tech support and customer service already happened last year.
Entry level sales jobs doing cold calling have been replaced all over the place.
AI didn't replace those jobs, it was just the excuse to stop offering the service.
Here's the thing I'm pushing back on:
> The return is measurable and often immediate, not hypothetical.
It's one thing to let go some people and replace them with AI.
It's quite another to have a measurable and often immediate, not hypothetical return on that decision.
You are not capable of telling the difference between human translated and AI translated communication.
Source?
Source: Reality. You are probably already communicating with people who you have no idea are using AI to translate their messages.
I have used AI translation professionally for a few years, and between hundreds of people in long conversations, nobody has ever asked if the text has been translated. Before AI translators, you could write at most one message and people would notice.
I think whether it is happening is an important question, but “does the consumer actually want it to happen” should be equally important. It won’t be, because the C-suite will just make the decision for us all, but it ought to be.
If done properly you shouldn't be able to tell. A really good voice AI assistant is indistinguishable from front line support reading through a script, and potentially a few steps better.
Meanwhile I can't get a hold of my landlord because they removed both their support email and online form in favor of an AI chatbot, which means I can't get them to repair my heaters and I have been without heating since Thursday
They're saving pennies but at what cost?
Historically, this is not how technology that improves productivity has affected the economy. I’d encourage you to learn more about economics and the history of automation.
Doing this stuff is literally my job.
Large banks have tens of thousands of call center employees and a large % of calls they handle are perfectly solvable with a good AI bot. They are working very hard to cut call center staff as quickly as possible.
People don't realize how much a call to customer service costs. Back when I was at MSFT, a call to tech support for our product cost $20 just to have someone pick up the phone. Since we were selling low-margin HW, a single call to tech support completely erased the profit from that product's sale.
Layoffs have already happened and they will continue to happen.
One can argue this is a positive, as a customer if I can push a few buttons and issue a voice command to an AI to fix my problem instead of waiting on hold, that is a net positive. Also the price of goods will drop since the expected cost of customer service factored into the product price will drop.
E.g. $30 / support call, 1 in 10 customers call support during the lifetime of a product, $3 saved, but the way costs are structured, $3 saved in manufacturing can end up as nearly $10 off the final retail price of a product.
(And in competitive markets prices do drop when cost savings are found!)
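Putting numbers on that example (the 3x cost-to-retail multiplier is my assumption for how margin stacks through the channel, not a figure from above):

    cost_per_call = 30.0
    call_rate = 1 / 10                 # 1 in 10 customers ever calls support
    cost_to_retail_multiplier = 3.0    # assumed markup from unit cost to shelf price

    expected_cost_per_unit = cost_per_call * call_rate                   # $3.00
    retail_impact = expected_cost_per_unit * cost_to_retail_multiplier   # ~$9
    print(f"support cost baked into each unit: ${expected_cost_per_unit:.2f}")
    print(f"possible retail price impact:      ${retail_impact:.2f}")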
This replacement has already happened. Everyone who can has long since replaced their phone support with a set of menus that end in "use the website". When you need to talk to the human you still need to talk to the human.
>One can argue this is a positive, as a customer if I can push a few buttons and issue a voice command to an AI to fix my problem instead of waiting on hold, that is a net positive.
If you could do it through the website then you would be much happier than having to argue with a chatbot. And if you can't do it through the website, they aren't going to let a robot do it on your behalf.
"Costs $20" really means "one of those poor call center reps got paid $20, barely enough to pay rent." Once you solve the supposed problem, all those people will be on the streets.
That's not how it's mostly gone historically. People tend to find different jobs.
Those who work at call centers are already desperate for any job and have zero savings. I'm not sure how they can go down even further. I guess the governments will have to pick them up at the end: give them some fictitious jobs and pay the minimum out of taxes from the remaining populace who still have jobs.
And yet, whenever I pick up the phone I do so because I need to do something I cannot do on the website.
The chatbot, acting as my agent, whether on the website or on a call, doesn't have more permissions than I have.
The text reminded me of one of Veritasium's latest videos [1] about power law, self-organized criticality, percolation, etc... and it also has a wildfire simulation
> training compute looks more like an operating expense with a short payback window than a durable capital asset
Today they remain functionally useful assets for longer than they remain economically competitive. So there is no reason, in a market with less demand, that their economic payback windows cannot be extended further into their functional lifetimes.
There will be energy cost incentives to replace GPUs. But turnover can respond sensibly to demand as it revives, while older GPUs continue working.
Also, the data centers themselves, and especially any associated increase in power generation, will carry forward as long term functional value.
I doubt any downturn in compute demand lasts long. The underlying trend, aside from AI, was for steady increases in demand. Regardless of bad AI business models, or investment overhangs, a greater focus by more entities on AI product-market fits, along with cheaper compute, will quickly soak up cycles in new and better ways.
The wildflowers will grow fast.
I don't see how nvidia comes out of this stronger
their huge customers will be able to produce ASICs that will be faster and cheaper to operate than their GPUs
jensen has to be the luckiest man in the world, first crypto, now "AI"
> their huge customers will be able to produce ASICs that will be faster and cheaper to operate than their GPUs
Why? NVIDIA is better positioned to produce faster and more efficient ML ASICs than any of their huge customers (except possibly Google). And on top of that, the fact that there is a huge library of CUDA code that will run out of the box on NVIDIA hardware is a big advantage.
Arguably, this shift has already happened. Modern NVIDIA datacenter GPUs, like the H100, only bear a passing resemblance to a GPU -- most of the silicon is dedicated to accelerating ML workloads.
I think this is what the "circular financing" is all about actually. While you are in the 'picks and shovels' phase you want to use your high margins to buy up the value chain and become more vertically integrated. Effectively investing when the sun shines to diversify the company.
As a possibility for example I can see them transforming from a GPU based corp into a parent company for many full or partially owned "subsidiaries". They still manufacture chips to be "vertically integrated" but that becomes bread and butter as an enablement rather than the main story (e.g. Google TPU's). As their margins go down the value accrues to what they are owning (the business units/product areas).
Nvidia is the Cisco of .com ... cisco still exists, and it's doing pretty well.
Just took them 26 years to touch their peak dot-com-bubble stock price again.
And it will probably be the same for nvidia, unless they find another business stream apart from selling "shovels".
BTW, stock price is not everything: Cisco survived, grew, and is the backbone of the internet today.
I wonder what’s going to be the “I can cloud-configure all my Cisco routers over the internet” for Nvidia.
I forgot they bought Splunk. Enterprises love shoveling money into that fire pit
> their huge customers will be able to produce ASICs that will be faster and cheaper to operate than their GPUs
Are we sure this will be the case? Perhaps the sweet spot for hardware that can train/run language models is the GPU already, especially with the years of head start Nvidia has?
They were working on adapting GPUs for machine learning back in 2005. Getting lucky with AI was preceded by a lot of preparation.
Gaming, then crypto, then AI - all GPU hungry!
And each one requiring an order of magnitude more GPUs than the last!
this trend will always continue with next big thing
AI is the only 'technology' where nobody knows what it solves. If it is a fridge, people buy it. If it's a dishwasher, people buy it. The use cases of these technologies are immediately understood. AI is pushed down hard by the 'leaders'; the C-suite is pushing everyone to use AI at most companies. Nobody knows what it's supposed to help with, but a great many people claim 'success' with AI. Every full-text search that was perfectly working before got converted to AI search and is instantly 100x worse. Same with lots of customer-facing FAQs, customer support, etc.
Meanwhile, 67% of my time is gone fixing autocorrect on apple devices.
A million different people: I've used AI in X way and it helped me.
You: No one knows anything AI helps with.
Yeah, okay, if you ignore everything every user says then it is indeed a mystery.
How much astroturfing is happening online? These companies certainly have the funds to do so on a wide scale
The only thing I trust about these right now is my own experience
I’m sure it is. Though I can never tell if it is astroturfing or extremely weird AI maximalists just reminding us that they’re in a cult.
This is how it is done openly with clearly Grok-edited or written comments:
https://xcancel.com/elonmusk/status/1997307084853870793#m
One can only imagine the amount of covert promotion.
From another industry: https://www.reddit.com/r/AMA/comments/1p7kmbn/i_was_paid_to_...
I don't have much proof, but given the incentives and the possibility of doing it I'd be surprised if it wasn't happening everywhere. How much would be enough to pay the top 50 influencers in a market to push something? To hire 100 people to be active full time on all social media sites? To a company with billions they wouldn't even notice the expense
Default to skepticism and double down on your critical thinking skills. More important than ever today
Are these "million people" in the room with you now? Or are they just the bot and shill accounts you're reading on "X"?
They're me, my coworkers, my friends. Talk to people. ChatGPT and the other big LLMs have hundreds of millions of users.
You might not like using LLMs. You might not find them useful. You might think they're bad and harmful (I do). But to claim that no one finds them useful is a completely different position, and one that's about as disconnected as it's possible to be.
> They're me, my coworkers, my friends. Talk to people.
I have all of those. Most don't use AI at all. Some use it on a limited basis but it is unclear if there is any worthwhile gain in productivity. Remaining are two who use it with regularity, including one who's all in. I personally use it for 2 limited use cases. Sometimes it helps. Sometimes I'd be done sooner without it.
Conversely, I need to mitigate an epidemic of AI foistware and AI UX pollution. 100% of my userbase is subject to overpushy AI offerings and an endless minefield of shifty, unwanted AI elements. These users are clearly more productive when I keep AI out of their way.
On balance, AI is presently a net negative for my clients.
Is there an elegant term to describe a severely overburdened metaphor? I'm getting lost in the thick bark of an intertwined canopy root system here..
Glad I was the only one getting lost in that forest. I’ve never seen a metaphor get hammered so much in one piece (AI or otherwise).
The metaphor sure seems plausible, but why does the whole thing read like a LinkedIn post that was fed to an LLM to farm attention? :(
Because it most certainly is.
VC is inherently high-risk capital. It's by design that most companies will fail or at best break even via acquisitions/acquihires, while a small few make investors massive amounts of money.
The only real difference this time around is all of the datacenters being built. There's real hard asset costs making it much riskier and capital intensive.
The big difference this time around is that this 'high risk capital' isn't a small amount, it's 1-10% of the entire economy.
Could the author please post the prompt this article was generated with?
A bad thing with some positive side effects is not a good thing. Wildfire is bad, too frequent wildfires will turn forest into savannah. I didn't see where this part of the analogy was discussed.
It is not, because SV people use shallow metaphors from areas they don’t understand, to gain publicity.
I had the same thought. The other totally missed aspect: a fire kills all life in the area.
In other words, which companies are default alive or dead? [0]
The companies that are sustainable on their own revenue, covering their runway or nearly there, are likely to stay alive even if there are no investors to keep them afloat. Those with ridiculous commitments, expecting a hail mary until their business model materializes, are living on borrowed time.
There is no wildfire coming. The model providers have narrowed down to 4: OpenAI, Anthropic, Google and xAI. The chip manufacturers are Nvidia, Google (TPUs), and AMD is trying to break in as well. Microsoft is positioned very well as a middleman. All these guys are giants already. Just Nvidia, Microsoft and Google together have a market cap above ten trillion. OpenAI, Anthropic, AMD and xAI probably add one trillion more.
Sure, there might be hundreds or thousands of small startups in the AI game, and some are probably as viable as the fabled Pets.com. But even if they all crash and burn, it's going to be a rounding error compared to the 7 companies I mentioned above. The AI will be alive and kicking, and nobody will even notice.
> Every promising engineer, designer, or operator is being courted by three, five, ten different AI startups, often chasing the same vertical, whether it’s coding copilots, novel datasets, customer service, legal tech, or marketing automation.
This is flat out wrong.
I still don't understand what this "wildfire" is supposed to burn. My perspective is very limited, but where are the pets.com of AI today? Where are all the small companies with improbable business cases that are getting absurd valuations/investments because they're in AI? The space seems mostly dominated by huge players that, while burning tons of cash, are still making real progress on something that will have more economic impact than society can actually bear. Who should be wiped out by the wildfire? Anthropic?
Judging by stock valuations, there was an insane rally from the release of ChatGPT to around mid-2024, from which point they stayed mostly on a trajectory consistent with the rest of the economy.
I think a huge breakthrough for AI was priced in, and we are still waiting to find out if it will come and what it'll be.
Personally, as this article seems investment focused, I see no downside to diversifying into more varied kinds of investments, but then again, I'm not a pro, so take it with a grain of salt.
OpenAI and Anthropic.
They have no business case - they are 'burn money and hope AI allows us to build something we can monetise'. That's not a business model.
Don't know about OpenAI, but Claude wrote almost all of my code in the past few days, multiplying my productivity by a factor of at least two. My feeling is that for some use cases Anthropic could already charge enterprises a significant fraction of each developer's salary and it would still be a net gain for customers.
Grammarly certainly comes to mind, for being essentially a free feature of most chat AIs now.
Interestingly this time around I could see the 'fire' affecting mid-large corporations (or at least some divisions of them) if they don't adapt. Adobe, being heavily focused on graphic design seems like it could be under pressure. Low-end consulting / outsourcing is largely doing the same work AI is good at. Similarly with technical gig-work (like Upwork).
To be fair, if you look at the language-learning subreddits, there are about 10 ads a day for shovelware AI-powered apps that no one ever needed. They would be those pets.com.
Maybe I just didn't notice. Fair. But are these ads from companies that are raising large amounts of capital, or from small shops that just use the APIs provided by the few big players?
Not parent - but I've noticed those same 'start ups' and they just seem to be today's hustle-bro crypto/drop-ship/mobile-app/ceo-with-no-employees/self-help-book/low-effort grift (bullshit).
I'm sure some of them have managed to shake some change out of the VCs but these wanna-be shovel sellers are just gonna let their domains expire and move on to the next scheme with little overall damage to the economy.
I am pretty sure they use APIs and don't spend millions on training.
I have no idea about their financials. They just annoy me, because they mask their ads as posts/comments and use ChatGPT to generate them; they're like two pages of drivel.
I've been approached by a few companies to train their models to basically replace me. Why would I want to do this?
Framing this as something cyclic, as if the rest of the world were static, may be a mistake if there are deep changes outside (related or not to what is being done here), or if the nature of the field radically changes.
Then the cycle is broken, and there might be no survivors, or the regrowth may be so far into the future that it will make no difference for most of the survivors.
If energy is indeed the limiting factor here, then maybe the companies building space-based compute (in which energy scales linearly) will remain after the wildfire.
The key is for them to build before the money runs out--I'm not sure they will have enough time.
That is a very optimistic scaling assumption - and it would almost certainly require substantial in-space infrastructure (large-scale lunar and asteroid mining and refining, lunar mass drivers, at least solid-core nuclear drives) before you can even think about building all the necessary radiators and structural mass.
Mid-century at the earliest, optimistically. All of this will play out before then, but broadly, those who can code the machines will survive and thrive as they always do. Except for the older ones: it will be yet another excuse to jettison all that experience and talent because of the gray hair that makes them creatively dead from the neck up, according to youthful, disruptive, beach-loving Vinod Khosla.
Starlink (collectively) already has 10 times the solar power of ISS, with present-day launch capabilities. If Starship works out (not guaranteed) launch cost should drop to ~$100/kg, which would enable very large constellations.
Musk is planning for 1 megaton/year of satellites, each with 100kW, yielding about 100GW per year.[1]
He thinks they can do that in 4 years, but adjusting for Elon-time, it's probably no less than 8 years, if ever.
But will the AI money last that long? Maybe not.
-------------
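Back-of-the-envelope check on those numbers (the ~1 tonne average satellite mass is my assumption; it is what makes the megaton and the 100GW figures line up):

    launched_mass_kg = 1e9        # 1 megaton per year
    sat_mass_kg = 1_000           # assumed average satellite mass
    power_per_sat_kw = 100

    sats_per_year = launched_mass_kg / sat_mass_kg           # ~1,000,000
    added_gw_per_year = sats_per_year * power_per_sat_kw / 1e6
    print(f"{sats_per_year:,.0f} satellites/year -> {added_gw_per_year:,.0f} GW/year")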
Sure, it is impressive how much solar power Starlink currently captures, but the ISS actually does not get that much - as far as I can tell it's about 250 kW maximum, even with the new roll-out solar arrays that were installed quite recently.
So about 2.5 MW of solar potential? That's indeed quite impressive, but for serious compute, cooling needs will eat into that.
Cooling won’t use much energy because it’s mostly radiators and probably some recirculating pumps.
I think there could be a nontrivial amount of pumping.
Also, building all the radiators and structure from in-space resources could be quite a substantial energy investment, as could the energy to put it all into its final orbit.
Does OpenAI and/or Anthropic survive the wildfire? Do one or both of them become the next Google? Or do they become Netscape and Google, Microsoft, et al win in the end?
If they are profitable on inference currently, they will be able to recover. People in SWE will continue paying for current SOTA models even if better models are not achieved.
Alternatively, latecomers will make use of newly cheaply available compute (from the firesales of failing companies) to produce models that match their quality, while having to invest only a fraction of what the first wave had to, allowing them to push the price below the cost floor of the first wave and making them go under.
Yup. The big AI companies are scared to death of LLMs being seen as commodities. But in the long term, they are.
See also: the big deepseek smear campaign.
> If they are profitable on inference currently
Man that is the question, isn't it
Exclusive CEO dinners so that one can publish the insights? A bit odd.
> The next cycle, driven by social and mobile, burned again in 2008–2009, clearing the underbrush for Facebook, Airbnb, Uber, and the offspring of Y Combinator. Both fires followed the same pattern: excessive growth, sudden correction, then renaissance.
The GFC doesn't have anything in common with the .com bubble; maybe we'd have seen another tech bubble in the early 2010s if there hadn't been a GFC, but it's fundamentally wrong to place those two things side by side.
These analogies are like fitting a curve to align with the data. Let the new data come in next year, the curve changes to fit that data as well. Any data can be curve-fitted and be seen as following some pattern.
It is not that simple. You need to consider factors outside of Silicon Valley, outside of the USA, outside of the technology business. There is a lot of world out there. These analogies and predictions don't step out onto the global scene to have a look at what's going on across the globe.
The bubbles which happened earlier were not insulated phenomena that happened in Silicon Valley labs. Each was a complex interaction between various forces.
For example, social and political norms may turn against all that is AI. Any AI-enabled service or product might be seen as serving plastic food.
> She argued for thinking of this moment as a wildfire rather than a bubble. The metaphor landed immediately. Wildfires don’t just destroy; they’re essential to ecosystem health.
This is some of the most nihilistic thinking I've read linked from this site. It also lacks the nuance that when brush is left to accumulate through years of forest mismanagement, the resulting wildfires are much worse and unnecessarily destructive, in ways that are hardly describable as "ecosystem health."
> The first web cycle burned through dot-com exuberance and left behind Google, Amazon, eBay, and PayPal: the hardy survivors of Web 1.0. The next cycle, driven by social and mobile, burned again in 2008–2009, clearing the underbrush for Facebook, Airbnb, Uber, and the offspring of Y Combinator. Both fires followed the same pattern: excessive growth, sudden correction, then renaissance.
I note your AI missed the crypto hypecycle. Maybe because it really was a bubble.
Crypto is more of a casino thing on the side that doesn't affect the real economy much.
"The Bubble is Good Actually" cope cope cope.
Hopefully the wildfire will also get rid of AI-generated think pieces like this one.
I think, after reading this article twice, that if it is indeed presented accurately, and if the table participants are not just trying to drive a specific agenda, it is applying the wrong metaphor.
It is not a bubble. It is not a fire (cleansing or otherwise). It is, however, a piece of technology that is, misguidedly, plopped hard into everything without regard for what it is actually good at. This is why I despair when I see AI in Notepad or "AI protects Okta".
I am concerned, because I do see a big change on the horizon, but it is not the change that is being presented. It may not be the feared ai/agi/asi (depending on one's particular bent), but rather a deep re-entrenchment of existing ecosystems in ways that will make things a lot more difficult overall.
Here is what I mean by this:
- the internet as we once knew it is effectively dead
- the ones who can (money-wise and knowledge-wise) and see the need to, move behind local networks
- those that can't (money-wise, knowledge-wise, or circle-wise) are forced into locked systems that effectively become AML for... anything (and if you have not experienced it yet, I assume you have not yet tried to buy anything that has -- let's call it -- dual use)
It is bifurcation ( or what some media call k-shaped these days ), but it is not a fire at all. If anything, these are very, very aggressive vines.
> the internet as we once knew it, is effectively dead
Maybe it's simply less visible?
I have no account at any of the social media giants (except HN but I think that does not count). I mostly use the Fediverse and specialized forums. I would argue that it feels similar to the "old" internet.
> The next cycle, driven by social and mobile, burned again in 2008–2009, clearing the underbrush for Facebook, Airbnb, Uber, and the offspring of Y Combinator.
This list of companies made me wonder a bit. Technical progress has been huge, no question about that. But as for the actual quality of the experience for the user/customer, I have the impression everything got worse, starting with Google from the first wave.
Ummm... Google, Amazon, eBay, and PayPal... Facebook, Airbnb, Uber, and the offspring of Y Combinator... doesn't look like a particularly virtuous trajectory to me.
I think this article is nearing the truth in the future of AI. I think the avoidance of claiming it is a bubble is a good sign, but saying it's an AI wildfire is still hyperbole. The idea that inference will drive compute demand is not what I experience because inference is a much easier problem than training. The training of an AI (LLM) is especially demanding and if and when we complete that, inference will be a piece of cake.
I think the best metaphor will be the California gold rush. There is definitely gold there but most of it has already been mined. The people who are entering at this point are woefully unprepared, assuming that they can vibe their way into a fortune, when the rest of the gold requires hard earned labor.
Gold rush comparison is an interesting one. Much of Seattle's early economy was based on "mining the miners", especially gold rushes in the Yukon. In addition to profits to local merchants, it would distort the labor pool as people abandoned logging and left the woods to work mining claims instead.
California gold rush is practically the textbook example of a mania.
Gold, tulips, real estate, rail, witch hunts, satanic panic..
We shouldn't be worried because "this time it is different".
"AI" is none of those things. It is "totally different this time". "AI" is going to do all the work for us, we are all going to get UBI then at some point as it grows the "ASI" is going to either figure out how to grant us immortality or cause mass human genocide.
It is all completely rational this time.
Not to mention the California gold rush basically killed the existing functional local community (often literally), with very few people actually getting rich and even fewer getting rich from the gold itself.
Hard not to see parallels with the current AI bubble.
> "AI" is going to do all the work for us, we are all going to get UBI then at some point as it grows the "ASI"
I don't see those companies promising that. They are bragging about being able to replace the jobs of millions of people, and their CEOs are simultaneously taking a pretty dismissive attitude toward lesser people. It is more of "we will replace millions of people and eff you; the billionaire class will become more powerful."
I am not saying it will realistically happen, I am saying that is what current messaging is.
Altman for one has mentioned that kind of thing.
And everyone but tech company is leveraged to the moon on selling shovels
The easy gold ran out. AI will keep developing.
> Businesses aren’t asking “do we want AI capabilities?” They’re asking “how much can we get, and how soon?”
This is only because businesses are full of folks with short-sighted FOMO desperately trying to cram AI features into any product they can. AI is the new digital clock.
The problem with current AI is that it's super easy to get half-decent results by hooking up a simple agent to a lot of office software - and when it works it looks like pure magic; but getting reliably good results is way harder. So half-assed agents abound (I know, I've added three or four to our apps in the last few months), but they can get frustrating for the users really quickly.
I really don't know what this means about the state of the corporate world but companies just don't care if it's bad. Higher ups demand the feature be added but then don't care at all if it's good or even if people actually use it. This isn't that uncommon but "integrate AI somewhere I don't care where" is such an obvious manifestation of this pattern.
We've put so many layers between the engineers and customers and diluted any accountability to demonstrate positive ROI—even if it's theoretical—that we do pointless work for nobody. I'm not going to complain too much personally because all those layers make it possible for me to just pull cards and collect a paycheck but I'm surprised nobody on the business side even somewhat cares if the work they're paying for is worthwhile.
> Higher ups demand the feature be added but then don't care at all if it's good or even if people actually use it
Frankly, I've added some of the features on my own initiative. They were low-hanging fruit and really helpful in some cases, and in others they are placeholders waiting to be better integrated or expanded depending on the users' requests. Nobody forces anyone to use them or even notice them, so why not?
As I said: these features look like magic in demos, it's not because of the hype that managers want them integrated but because of genuine enthusiasm. But they require more development and maintenance effort than was apparent from the demo. Also, there's a clear discoverability problem due to the fact that an agent has basically no UI.
Worker efficiency is an order of magnitude greater than it was 50 years ago. An office worker with Excel and the internet can accomplish in an hour what would have taken days or weeks for their counterpart to do in 1975 with a calculator and a telephone.
Who has gained from the efficiency? We haven't gotten more vacation days and we haven't gotten more share of the money.
I think it should be natural that jobs end up being mostly pointless. Why should we produce exponentially more value without getting a share of that value?
> we haven't gotten more share of the money.
But your money buys stuff that 50 years ago would have been too expensive for the richest men in the world. A pocket supercomputer, advanced diagnostics and medicine, instant access to information anywhere in the world.
Material gains (produced by more productive workers) don't offset the increase in the number of expenses required to minimally live (e.g. utilities, transportation, insurance, comms), nor the ever-escalating costs of those added requirements. They also don't offset the accelerating increases in the complexity of basic living - complexity that consumes internal resources and time. More to the point, a pocket supercomputer is an irrelevancy for a typical wage worker, whose earnings are far insufficient for even the barest self-sufficiency.
Candidly, the accusation of short-sightedness doesn't really make sense when it comes to enthusiasm for a technology that often falls short in practice today, but which in certain cases, and in more cases tomorrow than today, is worth tremendous business value.
If anything, you should accuse them of foolhardy recklessness. They are not the sticks in the mud.
Can a company like OpenAI be worth an estimated 1/5th of Alphabet, which offers a similar product but also has an operating system, a browser, the biggest video platform, the most used mail client, its own silicon for running that product, the 3rd most popular cloud platform, ...?
I think that is the recklessness in question. Throw in that there is no profit for OpenAI & co and that everything is fueled by debt and the picture is grim (IMHO)
> and in more cases tomorrow than today is worth tremendous business value
That's a nice crystal ball you have there. From where I'm standing, model performance improvements have been slowing down for a while now, and without some sort of fundamental breakthrough, I don't see where the business value is going to come from
The prerequisite for me to be wrong is that the technology needs to stop getting better entirely *right now* AND we need to discover ZERO new uses for what exists today.
That's a fairly tall order.
We don't even have good uses today. That doesn't mean there won't be good uses tomorrow, but neither does it inspire confidence.
So if the plateau is unanimously declared to have been reached tomorrow OR just one more tiny use case exists tomorrow and all others dwindle away to nothing, then you consider yourself to be correct? What a wild assertion!
If the plateau is reached at some higher level of capability, I will remain correct, yes. If use cases are discovered that do not exist today, I will also be correct. You said it in a silly way but you're directionally correct.
No. You stated that this is all it would take to be considered tremendous business value. You are moving the goalposts on your own point. My point is that you are taking an absolute position that there is tremendous business value in its current form (a minuscule improvement and one insignificant new use case do not equate to tremendous business value in themselves), and so that remains to be seen.
You either misread or are misrepresenting my statement and either way I am not interested in continuing this.
Rushing to get on board something that looks like it might be the next big thing is often short-sighted. Some recent examples include Windows XP: Tablet Edition and Google Glass.
That's like saying that gambling is shortsighted. It depends entirely on the odds as to whether or not it's wise, but "shortsighted" implies that making the bet precludes some future course of action.
Maybe if you have near-infinite wealth like Google or Microsoft you aren't precluding future choices. For most economic actors, making some bets means not making others.
Companies that are hastily shoehorning AI into their customer support systems could instead devote resources to improving the core product to reduce the need for support.
I love how for Westerners, it's totally normal that the economy shits itself every 10 years and that people should welcome this.
It is worse than in 2000 now. Amazon, Microsoft all had good products back then. Amazon was in fact better than it is now.
"AI" hardly has any working products. Vibe coding is foisted upon companies by CEOs who want to promote their friends' products or who want to use it as an excuse for firing people or who have circular revenue agreements with other companies.
This is like the housing bubble of 2008 which was based on hot air and incorrect algorithms.
Amazon, Microsoft, Google all are profitable even despite their capex. Worst case scenario they stop spending on AI capex and go back to being ridiculously profitable instead of just comfortably profitable. There's no actual implosion for the big names.
I'm not saying they will cease to exist. I'm saying that the Internet bubble of 2000 had valid tech whose growth was (deliberately) overestimated.
They can write it off and move on to other things. But that is not what the new wildfire talking point says. The wildfire framing says that the underlying tech is as valuable as the tech of 2000 was.
I think Amazon search was likely better, and there weren't fake products, but there were also far fewer products, and shipping was maybe 30% as fast.
I wasn't a fan of any Microsoft products at the time, though Excel was pretty good when I started using it heavily a few years later.