> Blockchain... NFTs

> The problem is, the same dudes who were pumped for all of that bollocks now won't stop wanging on about Artificial Intelligence.
I was firmly in the camp that blockchain was not a viable solution to any problem, and that NFTs sound stupid. I think AI is much different from that list. So, there goes your argument?
The hype around AI is admittedly annoying - especially from the Wall St crowd who don't know how to pronounce 'Nvidia' correctly, and who haven't managed to internalize the fact that the chatbots they use hallucinate.
It really is 'different', though, in the same way the Internet was.
It took about 20 years for the Internet to work its way into every facet of life. And the dot-com bubble popped halfway through that period.
AI might 'underwhelm' for another five or ten years. And then it won't. Whether that's good or bad, I don't know.
I’m doing enterprise coding tasks that used to take a month of whole-team coordination, from mockups through development and testing, in 3 days now. It’s all test-driven development, Codex 5.3, and a small team of two people who know how to hold it right orchestrating the agents. There’s no reason not to work this way. The sociotechnical engineering aspects of this change are fascinating and rewarding to solve.
I work for an old enterprise, so far rather conservative with LLM/AI usage. However the Copilot CLI adoption in the last 2 weeks is spreading like wildfire. Codex 5.3, a good instructions file, and it works. Features are getting done and delivered in days, proper test coverage is done, proper documentation is in place. Onboarding to it is also very fast.
Many of my industry friends and I were skeptics about all the things the OP mentions, and I still am. And yet, I am able to push 30-40K lines of nearly perfect code a day now.
It's different just like the steam engine was different, except technology moves 100x faster now than it did then. It's different and the same.
do you understand every line of code you churn out?
Can you give an example of such features?
This lazy kind of post annoys me because it sort of groups any of us saying that this technology is profoundly different in with all the town criers who have said this kind of thing before — even if we have never said it before and were even skeptical of past declarations.
Effectively, it’s a statement saying nothing can ever be profoundly different, because people have said it before and been wrong.
Lazy.
By the looks of it, 2026 might be the year when reality and fiction will finally collide with AI and we'll be able to see if all the hype was warranted.
But like all the previous hype, most of the people that were the loudest won't say they were wrong, and they'll move on to the next thing, pretending they were never the ones who portrayed AI as the holy Grail.
There are all sorts of algorithms in use that were once thought of as AI, but transitioned to being mere algorithms well before they entered public awareness, if they did that at all. Some are still useful and used everywhere, but they have never been thought of as AI by the public. For them, AI is a term that has long been reserved for some far-off, sci-fi future.
LLMs are not artificial general intelligence (i.e. not sci-fi AI). Why haven't they transitioned to being mere algorithms by now? Why is the public being told AI is finally arriving when it's really just another algorithm?
We have some truly slick and shady corporations involved in the bubble right now and they're marketing LLMs like tobacco. LLMs have been pushed out, at immense cost, to the public in a way that makes them more directly accessible to average people than any past algorithm. Young children can ask an LLM to do their homework for them. Middle managers can ask an LLM to create a (shitty) ad campaign for them. Corporations have gone to tremendous expense to make that widely available and, for the moment, mostly free. They seem to be following the Joe Camel school of marketing. Get them hooked while they're young so they come to you first when they're older! The only difference is that nobody is stepping in to stop the new Joe Camel from handing out free samples to kids.
Then there's the "go big" aspects of the bubble. The major competitors are trying to out-spend each other to dominance, but the sums are so colossally big that their bubble is affecting global GPU, memory, and storage prices. This bubble is going to stress power grids wherever it operates and do considerable environmental harm. The financial games being played behind the bubble are absolutely stupid. The results, so far, are tantalizing for billionaires. LLMs offer the promise of being able to fire all their pesky and annoying human workers. It won't deliver on that, and none of these companies is ever going to make enough to pay their debts. There might be "too big to fail" government bailouts, but there are going to be some big bankruptcies too.
Useful algorithms will come out of all this, a lot of tears too, but not "AI".
AI is real, but the socio-political environment is far from conducive to some form of productive use of it - as opposed to using it as a war machine. AI isn't going to fail in that role, but very few will be happy about it.
I mean, disillusionment is the least of my worries.
> most of the people that were the loudest won't say they were wrong
I was so expecting to find this wind-up aimed at those peddling the "AI is hype" laziness.
It's laziness because they have little CS fundamentals to base such claims on, and the deductions can be made, just not clearly to people who need to study a lot more.
It's like watching an invisible train (visible to those with strong CS) rolling down the tracks at a leisurely pace. Those sitting in their stalled car on the tracks are busy tweeting about "AI HPY PE TRAIN." Until it wrecks their car, the gimmick is free oxygen. It's a lot easier to write articles than it is to build GPUs and write programs.
> It's laziness because they have little CS fundamentals to base such claims on
So, what CS fundamentals do you need to evaluate if AI is the real thing, or will disappoint in the future? Until a few months ago, coding agents were met with skepticism, until Anthropic introduced their new model and, with it, a hype train that cannot be rationally justified.

Look, SOTA LLMs, and coding agents in particular, are impressive. However, current predictions about the future of software development (and the world in general) are speculative. There is little to no data showing whether AI can deliver on its promises. How could there be in this short time frame?

No one knows what the future will hold, no one knows how coding agents will be integrated into our work life and everyday life in the long run, or what hard limitations they will reveal. No one can tell you how professions will change in the coming years; every prediction is purely speculative, and anyone making prophecies is either trying to cope with the uncertainty themselves or has some stakes in the AI bet. It would be nice if people were actually humble enough to admit that they have no idea what will happen in the future, instead of writing the hundredth doom and gloom post.
> However, current predictions about the future of software development (and the world in general) are speculative.
It's amazing to me how those willing to seize on the speculative nature of ANY uncertainty cannot recognize the inherent uncertainty of the inverse.
> what CS fundamentals do you need
1. Tarski's undefinability theorem

2. Gödel's incompleteness theorems

3. Curry–Howard correspondence
And a lot of exposure to deductive reasoning, vague ideas of automated theorem proving and formalization.
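For what it's worth, the Curry–Howard item on that list is the one that fits in a few lines: under propositions-as-types, a proof term is literally a program. A minimal sketch in Lean 4 (illustrative only — the theorem names here are made up for the example):

```lean
-- Modus ponens as a function: applying the hypothesis f to hp
-- is both the program and the proof.
theorem modus_ponens (p q : Prop) : (p → q) → p → q :=
  fun f hp => f hp

-- Conjunction commutes: destructure the pair, swap the components.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```

The point being gestured at: type checking these terms is proof checking, which is the formal backdrop for claims about what symbol-manipulating systems can and cannot deduce.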
I won't pretend it's easy, but let's be clear: a small fraction of people who know things are being forced to entertain the hysteria of a vast majority who are unwilling to know things, who just go around beating their chests, and who will continue doing so until the train hits them.
There are 2-3 minor architectural changes in between now and what I would identify as a completely unbounded AGI with clearly discernible dynamic, self-defined objective functions and self-defined procedures for training and inference. It can be done in megabytes. Oh god. Get me out of this forum. I wish to return to my code editor.
> and we'll be able to see if all the hype was warranted.
Umm, what? For the past 3 years, every year I've said something along the lines of "even if models stop improving now, we'll be working on this for years, finding new ways to use it and make cool stuff happen". The hype is already warranted. To have used these tools and not be hyped is simply denial at this point.
Maybe AI is useful to you, but the US economy is currently buoyed by promises of AI replacing the workforce across the board.
Most of the Mag-7 are planning to spend over $500B on capex this year alone building out datacenters for AI pipelines that have yet to prove they can generate a sustainable profit. Yes, AI is useful in some environments, but the current pricing is heavily subsidized. So my point stands: the hype is not warranted.
> but the US economy is currently buoyed by promises of AI replacing the workforce across the board.
Still don't understand what the end goal is here. Assuming they don't deliver, there are billions of investments that will go bust. Assuming they deliver, millions lose their jobs and there's going to be a bloodbath on the streets.
I think our little corner of the world has a distorted view of AI in that it is actually proving useful for us. Once they passed a certain level of usefulness... I remember when they were still struggling just to output syntactically correct code, you know, like, 18 months ago or so... they became a useful tool that we can incorporate.
But there's a lot of things playing out to our advantage. Vast swathes of useful and publicly available training data. The rigorous precision of said data. Vast swathes of data we can feed it as input to our queries from our own codebases. While we never attained the perfect ideal we dreamed of, we have vast quantities of documentation at differing levels of abstraction that the training can compare to the code bases. We've already been arguing in our community about how design patterns were just a level of abstraction our coding couldn't capture, and AI now has access to all sorts of design patterns we wouldn't have even called design patterns because they still take lots of code to produce. Now, for example, if I have a process that I need to parallelize, it can pretty much just do it in any of several ways depending on what I need at that point.
It is easy to get too overexcited about what it can do and I suspect we're going to see an absolute flood of "We let AI into our code base and it has absolutely shredded it and now even the most expensive AI can't do anything with it anymore" in, oh, 3 to 6 months. Not that everyone is going to have that experience, but I think we're going to see it. Right now we're still at the phase where people call you crazy for that and insist it must have been you using the tool wrong. But it is clearly an amazing tool for all sorts of uses.
Nevertheless, despite my own experiences, I persist in believing there is an AI bubble, because while AI may replace vast swathes of the work force in 5-20 years, for quite a lot of the workforce, it is not ready to do it right this very instant like the pricing on Wall Street is assuming. They don't have gigabytes of high-quality training data to pour in to their system. They don't have rigorous syntax rules to incorporate into the training data. They don't have any equivalent of being guided by tests to keep things on the rails. They don't have large piles of professionally developed documentation that can be cross-checked directly against the implementation. It's going to be a slower, longer process. As with the dot-com bubble, it isn't that it isn't going to change the world, it is simply that it isn't going to change the world quite that fast.
Leaving aside the economic shitshow and other things.
I think you're right but for the wrong reasons wrt sustainable profit.
Specifically, overcounting how much it will cost in 5 years to run AI because you're extrapolating current high prices, and at the same time undercounting how the demand will drive efficiency gains.
I think the point is AI has to go much further and faster than it has in the past 3 years to justify the investments being made from the hype. The hype did its job; now the AI industry has to execute and create the returns they promised. That is still very much up in the air, and if they can't, then the tech was overhyped.
This.
It's high time to stop accumulating debt while providing free pictures of pelicycles, and just charge the full cost for them - enough to generate profits and pay back debt.
What we see now is literally burning money and energy to generate hype. The only true measures of success are financial and macroeconomic. If the hype is real, there should be no problem for the mighty AI to generate debt-free profits for its providers while the overall price level in the US goes down.
What we see is the exact opposite which makes the AI hype act only as market manipulation for capital misallocation.
Unlike the old HPC, where we only burned hundreds of millions on machines that were 80% efficient to get a 5-year lead, we are burning hundreds of billions on machines that are 30% efficient to get a 1-year lead.
LLMs have not radically transformed the world yet because the number of people capable of solving problems by typing into a blinking cursor on a blank screen is actually quite small. Take that subset of the population and reduce it to those that can effectively write communicative prose, and it's even smaller still.
It's just an interface problem. The VT100 didn't change the world overnight either.
There's another point, too. Detractors say LLMs will never advance to whatever threshold they consider meaningful. Fine. We're working on other paradigms, too, though. Just because a lot of people are productizing LLMs doesn't mean the state of the art isn't advancing in parallel and AGI isn't in the cards.
This just sounds like the "nothing ever happens" theorem slightly rephrased, of which Scott Alexander did a great refutation here: https://www.astralcodexten.com/p/heuristics-that-almost-alwa...
Author forgot Segway. Remember when it was going to fundamentally change humanity?
Their Ninebot escooters are pretty damn good, far better than most random brands.
I spent most of Covid in VRChat and met my current live-in gf, so the metaverse was real for me too.
I also made decent money selling crypto, so that part was real for me too.
And AI coding, for as dumb as even the best models are, still enabled me to create things that I wanted to, but wouldn't have had time or gotten nearly as far without.
I dunno if the author realizes, but all the things they mentioned did materialize in one way or another, just not exactly how the hype described it.
Maybe if they could let go of some of the cynicism, they could find something to be optimistic about. Nothing ever goes exactly as planned, but that doesn't mean nothing is good.
> I dunno if the author realizes, but all the things they mentioned did materialize in one way or another, just not exactly how the hype described it.
From the post, which is not a very long one: "All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists"
Fair, I read the whole post but I guess that part didn't register, maybe because I never wholeheartedly believed marketing fluff to begin with. Maybe this person has too much contact with "AI will fix everything" types, and not enough with actual scientists who are really developing novel methods better than anything before, piece by piece.
I also found the "it's almost always dudes" line a bit strange, because I've seen plenty of women doing marketing for startups running on hype.
Heh - that went right off the cliff, when... well, I will let the reader research that themselves...
The guy who died on one was Jimi Heselden, a British entrepreneur who bought the company from the American inventor, Dean Kamen. Dean is alive; however, he was recently found to have hung out with the "disgraced financier".
That’s dark. But.. accurate.
I see hoverboards everywhere, which are the self balancing scooter tech from the Segway. Many little ebikes as well making deliveries.
75% of restaurant orders are delivery now due to widespread personal electric transportation. It already has fundamentally changed humanity.
When non-programmers make sweeping statements about LLMs.
Deep disconnect from reality.
Honestly, the remixes this generation suck compared to priors.
"This time will be different," they said about the Metaverse, ignoring the vast tranches of MUCKs, MUDs, MMOs, LSGs, and repeated digital real estate gold rushes of the past half-century. Billions burned on something anyone who played Second Life, Entropia, FFXIV, EQ2, VRChat, or fucking Furcadia could've told you wasn't going to succeed, because it wasn't different, it just had more money behind it this time.
"NFTs are different", as collectors of trading cards, art prints, coins, postage stamps, and an infinite glut of collectibles looked at each other with that knowing, "oh lord, here we go" glance.
"Crypto is different", as those who paid attention to history remembered corporate scrip, gift cards, hedge funds, the S&L crisis, Enron, the MBS crisis, and the multitude of prior currency-related crises and grifts bristled at the impending glut of fraud and abuse by those too risky to engage in traditional commerce.
And thus, here we are again. "This time is different", as those of us who remember the code generators of yore polluting our floppy drives, and the sales grifters convincing our bosses that their program could replace those expensive programmers, roll our eyes at the obvious bullshit on naked display, then vomit from stress as over a trillion dollars is diverted from anything of value into their modern equivalent - with all the same problems as before.
I truly hate how stupidly people with money actually behave.
Is this “nothing ever happens”?
Everything is the same until it's not; good luck predicting when "until it's not" is on the horizon, though. Isn't technology innovation a power-law thing? Everything hums along fairly regularly and then, out of the blue, there's a massive impact. Personally, I think AI has made a pretty large impact in software dev and the overall tech industry, but I don't see AGI any time soon (and that hype has died down) and therefore I don't see the economics working out. The coding tools, API integrations, chatbots, those are great, but I don't see them producing the returns required to keep companies like OpenAI running unless OpenAI takes all the customers and all the ad clicks from everyone else (Anthropic, Alphabet, X, Amazon, Meta, even Microsoft). I just don't see that happening.
For me, this captures it:
"All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists. I don't doubt that AI will be a part of the future - but it is obviously just going to be one of many technologies which are in use.
> No enemies had ever taken Ankh-Morpork. Well technically they had, quite often; the city welcomed free-spending barbarian invaders, but somehow the puzzled raiders found, after a few days, that they didn't own their horses any more, and within a couple of months they were just another minority group with its own graffiti and food shops.
- Terry Pratchett's Faust Eric"
Use the reader view button.
I get that everyone has a strong opinion on whats-going-to-happen-with-AI, but I really think nobody knows.
We're in that part of turbulence where we don't know if the floating leaf is going to go left or right.
The people who will have the hardest time with this transition are those who go all in on a specific prediction and then discover they were wrong.
If you want to avoid that, you can try very very hard to just not be wrong, but as I said, I don't think that's possible.
Instead, we need to be flexible and surf the wave as it comes. Maybe AI fades away like VR. Or maybe it reshapes the world like the internet/smartphones. The hardest thing to do right now, when everyone is yelling, is to just wait and see what happens. But maybe that's the right thing to do.
[p.s.: None of this means don't try to influence events. If you've got a frontier model you've been working on, please try to steer us safely.]
Perhaps this is the failure to understand the distinction between a technology and a meta-technology. Upgrading the factory that builds the robots is much different from upgrading the robots.
A technology is a set of methods and tools for achieving the desired results (generally in a reliable and reproducible way). Or, in a broader sense of the word, it's the idea of applying scientific knowledge to solving practical problems, and the process of such application.
What is meta-technology?
Or (taking the other side) failure to notice the distinction between a technology and a pump-and-dump. The technology (attention/diffusion) is awesome. The hype is unbelievable. Literally.
What is the point being made here? Some past technologies were overhyped, therefore AI is overhyped? Well, some past consumer technologies did change the world (smartphones, texting, video streaming, dating apps, online shopping, etc), so where's the argument that AI doesn't belong to this second group?
Also, every single close friend of mine makes some use of LLMs, while none of them used any of the overhyped technologies listed. So you need an especially strong argument to group them together.
This just looks like someone hearing about tons of hyped things from people across the internet (which, almost by definition, is full of false signals and grifters), imagining they are all coming from the same person, then arguing with how wrong that person always is. How is that interesting?
Nuclear weapons - this time is different
Internet - this time is different
iPhone - this time is different
I enjoyed Dave Cridland's comment more than the article. The article is dismissive of AI and other technologies in an unsubstantiated way.
New things are happening and it's exciting. "AI bad" statements without examples feel very head-in-sand.
OP here. Unless you're still watching Quibi on your curved TV, delivered via WiMax, then, yeah, I'd say it was pretty bloody substantiated.
I like technology. I made a decent living from it. But if I had chased every hyped fad that was promised as the next big thing, I doubt I'd be as happy as I am now.
You're not really saying anything, though. For every tech hype that has failed, there is another that's changed the world. This IS changing the world and our industry, regardless of whether it reaches the heights of the hypers.
I mean you're just stating that sometimes tech doesn't meet its hype. What's insightful about that? It's a given; cherry-picking examples doesn't prove your case.
> For every tech hype that has failed, there is another that's changed the world.
Well, no, the ratio is most definitely not 1-to-1.
The thing is, the successful tech rarely get the excessive hype.
mRNA vaccines. Where are the countless breathless articles about this literal life-saving tech? A few, maybe, but very few dudes pumping out asinine "white papers" and trying to ride the hype train.
Solar and batteries. Again, lots of real-world impact but remarkably few unhinged blowhards writing endless newsletters about how this changes everything.
I'm struggling to think of a tech from the last 20 years which has lived up to its hype.
Not everything is written to be insightful. Some things are just written to get them out of my head.
The web? GLP-1s? 5G? The Newton was mega-hyped and failed, but Apple came back with the iPhone. All the dot-com failures that eventually became viable businesses (so viable, in fact, that SFGate has to reach back 26 years to write articles on it [1]).
Hype is often early, in 10-20 years we'll start seeing the value as the rest of the world catches up
https://www.sfgate.com/food/article/rise-fall-bay-area-start...
I personally see plenty of hype but I've also been following the trends and using the tools "on the ground". At least in terms of software these tools are a substantial shift. Will they replace developers? No idea, but their impacts are likely to be felt for a very long time. Their rate of improvement in programming is growing rapidly.
Do feel AI is overall just hype? When did you last try AI tools and what about their use made you conclude they will likely be forgotten or ignored by the mainstream?
I spent an hour with Gemini this morning trying to get instructions to compile a common open source tool for an uncommon platform.
It was an hour of pasting in error messages and getting back "Aha! Here's the final change you need to make!"
Underwhelming doesn't even begin to describe it.
But, even if I'm wrong, we were told that COBOL would make programming redundant. Then UML was going to accelerate development. Visual programming would mean no more mistakes.
All of them are in the coding mix somewhere, and I suspect LLMs will be.
> write an article dismissing ai
> usage is copy pasting code back and forth with gemini
the jokes write themselves
That's the most recent time. But I've bounced around all the LLMs - they're all superficially amazing. But if you understand their output, they are often wrong in both subtle and catastrophic ways.
As I said, maybe I'm wrong. I hope you have fun using them.
Unrelated to the conversation but:
> Not everything is written to be insightful. Some things are just written to get them out of my head.
I like that, going to use it as the motivation to get some things out of my own head.
Yes! More blogging :-)
It's not unsubstantiated, though. The claim is "People frequently assert that 'this time is different' and they are almost always wrong", and the post proceeds to provide a reasonable list of analogous manias.
This only doesn't feel like substantiation if you reject the notion that these cases are analogous.
"You shouldn't eat that."
"Why not?"
"Everyone else who's eaten it has either died or gotten really sick."
"But I'm different! Why should I listen to your unsubstantiated claims?"
"(lists names of prior victims)"
"That doesn't mean anything. I'm different. You're just making vague and dismissive unsubstantiated claims."
The claim isn't "AI bad" the claim is more along the lines of "there's a lot of money changing hands and this has all the earmarks of a classic hype cycle; while attention/diffusion models may amount to something the claims of their societal impacts are almost certainly being exaggerated by people with a financial stake in keeping the bubble inflated as long as possible, to pull in as many suckers as possible."
If you want another example (which you won't find analogous if you've already drunk the koolaid):
https://theblundervault.substack.com/p/the-segway-delusion-w...
I hoped the article would be a meta-discussion of "time" and perhaps relativity or some other phenomenon. Sigh, it's an investment thesis saying "This Time is Different" is a risky bet.
That sounds like an interesting article. You should write it.
I would suggest editing the title to "This Time is Different". I think that captures the essence much better.
Love the Sir Terry reference.
I wonder if that was an automated HN edit?
Similarly to how titles that start with "how" usually have that word automatically removed.
Usually HN only auto-edits on first submission. If you go in and undo it manually as the submitter, you can force it to read how you intend.
Maybe I'm only noticing the times when it messes things up, but it kinda seems like these auto-edits cause a lot of confusion that could be avoided if they were shown up-front to submitters, who would then have the option to undo them.
Or maybe judicious use of an LLM here could be helpful. Replace the auto-edits with a prompt? Ask an LLM to judge whether the auto-edited title still retains its original meaning? Run the old and new titles through an embedding model and make sure they still point in roughly the same direction?
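The embedding idea in that last sentence can be sketched without an LLM at all. Below is a minimal, self-contained illustration: the `embed` function is a toy bag-of-words stand-in for a real embedding model (a real deployment would swap in something like a sentence-transformer), and the threshold value is an arbitrary assumption for the example.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a sparse word-count vector.
    Stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def titles_agree(original, edited, threshold=0.5):
    """Flag an auto-edited title for review if it has
    drifted too far from the submitter's original."""
    return cosine(embed(original), embed(edited)) >= threshold

# Stripping a leading "How" barely moves the vector:
print(titles_agree("How This Time Is Different", "This Time Is Different"))  # True
# A rewrite sharing almost no words would be flagged:
print(titles_agree("This Time Is Different", "The Physics of Time"))  # False
```

With a real embedding model the same `titles_agree` shape would apply; the bag-of-words version just makes the "point in roughly the same direction" idea concrete.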
oh interesting, TIL I can go edit my submission titles! That's useful, I've definitely submitted stuff and gotten a less-good title due to the automated fixes, so I'll have to pay attention to this next time
And the HTTP headers
x-clacks-overhead: GNU Terry Pratchett

Agreed--I clicked to read an article about the physics of time or something. Was sorely disappointed.
Title got mangled somehow, the original title is "This time is different".