> If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.
Translation for the rest of us: "we need to fully privatize the OA subsidiary and turn it into a B-corp which can raise a lot more capital over the next decade, in order to achieve the goals of the nonprofit, because the chief threat is not anything like existential risk from autonomous agents in the next few years or arms races, but inadequate commercialization due to fundraising constraints".
It's amazing how he can imagine wars being fought over AI but not wars being fought over resources needed to "build enough infrastructure."
I see how AI is used today and extrapolate. A lot. Everything in tech is extrapolation with the march of capital. It takes one to know one; rather it takes a smart ass to wield a smart tool. So the smart get smarter.
Is it a separate phenomenon, in the big picture, from the rich getting richer, from the monopolies over means of production?
Not sure, but I see as well that the dumb will surely get dumber; that "intelligence" will be a product of using intelligent humans' means of production, and not owning them of course, but being owned in the process. Populations will be literally made lighter of their smarts, outsourcing intelligence to agents out of general control (classic bait and switch). Since I feel my own process getting more clueless as I go, I'd better conclude somehow.
I see the Age of Ignorance ahead, there was once an Enlightenment, and here light takes on its other side or meaning, the workers being enlightened, to wit, made lighter of their horrible burden which is intelligence and its obnoxious demands of upkeep. Just pay someone for upkeep and stop messing with wet messy neurons already, says the technocrat to the cheerful mob.
Well predicted. Just two days later:
> OpenAI to remove non-profit control and give Sam Altman equity
A general reply to all of the comments thus far: you are completely missing the point here. OP is not a meaningful forecast, and it's not about nuclear power or any of that. It's about laying the groundwork for the privatization and establishing rhetorical grounds for how the privatization of OA is consistent with the OA nonprofit's legally-required mission and fiduciary duties. Altman is not writing to anyone here, he is, among others, writing to the OA nonprofit board and to the judge next year.
Why can't we just go with Occam's razor, and assume that they really believe in their mission of providing access to frontier AI as freely to the world as possible?
Good point. His story is valid, but it just so happens to equal the AI maximalism that most CEOs would want for their own industry too.
> put AI into the hands of as many people as possible
... by establishing regulatory moats to prevent competition and limit or outlaw actually-open AI?
I read this as: "we need to ramp up nuclear energy".
I'd expect "we want to put AI into the hands of as many people as possible" to be either exposing model weights, or making the training sets public.
This is so true, resource management and infrastructure are going to be the bottlenecks
And the reason we can be certain that it is an accurate one is that the only way to put AI in the hands of as many people as possible is to commercialise the tech. Non-profits are good at many things, but not for spreading technical improvements.
There aren't many alternatives here. It is commercialisation or looming irrelevance.
> If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.
The mostly-open goal of every VC-funded startup is to become a monopoly. If a strong enough monopoly in AI hardware were to exist, then the issues he describes could become a problem.
Otherwise, what he is describing is just the ad absurdum of how capitalism works. Phrased differently it sounds like:
“If this extremely powerful and profitable product that depends on other products gets built, then if no one else builds the also profitable substrate that it operates on, terrible things will happen!”
Or again slightly differently: “We need to be able to compete with our suppliers because our core business model might not be defensible unless we can also win in their space.”
> Deep learning works, and we will solve the remaining problems. We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale
I'm not an AI skeptic at all; I use LLMs all the time and find them very useful. But stuff like this makes me very skeptical of the people who are making and selling AI.
It seems like there was a really sweet spot wrt the capabilities AI was able to "unlock" with scale over the last couple of years, but my high-level sense is that each meaningful jump in baseline raw "intelligence" required an exponential increase in scale, in terms of training data and computation, and we've reached the ceiling of "easily available" increases. It's not as easy to pour "as much as it takes" into GPT-5 if it turns out you need more than a Microsoft.
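To make that "exponential scale for linear-feeling gains" intuition concrete, here is a minimal sketch assuming a made-up power-law scaling curve; the constants are purely illustrative, not taken from the essay or any published scaling law:

```python
# Toy power-law scaling curve: loss ≈ a * compute^(-b).
# The constants a and b are invented for illustration only.
a, b = 10.0, 0.05
loss = lambda compute: a * compute ** -b

base = 1e21  # arbitrary compute budget, e.g. training FLOPs
for k in range(4):
    c = base * 10 ** k
    print(f"compute {c:.0e} -> loss {loss(c):.3f}")

# Each 10x in compute buys roughly the same multiplicative improvement in loss,
# so every further "meaningful jump" demands another order of magnitude of
# data/compute -- which is where "more than a Microsoft" comes in.
```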
The question is: For a given problem in machine intelligence, what's the expected time-horizon for a 'good' solution?
Over the last, say, five years, a pile of 50+ year problems have been toppled by the deep learning + data + compute combo. This includes language modeling (divorced from reasoning), image generation, audio generation, audio separation, image segmentation, protein folding, and so on.
(Audio separation is particularly close to my heart; the 'cocktail party problem' has been a challenge in audio processing for 100+ years, and we now have great unsupervised separation algorithms (MixIT), which hardly anyone knows about. That's an indicator of how much great stuff is happening right now.)
So, when we look at some of our known 'big' problems in AI/ML, we ask, 'what's the horizon for figuring this out?' Let's look at reasoning...
We know how to do 'reasoning' with GOFAI, and we've got interesting grafts of LLMs+GOFAI for some specific problems (like the game of Diplomacy, or some of the math olympiad solvers).
"LLMs which can reason" is a problem which has only been open for a year or two tops, and which we're already seeing some interesting progress on. Either there's something special about the problem which will make it take another 50+ years to solve, or there's nothing special about it and people will cook up good and increasingly convenient solutions over the next five years or so. (Perhaps a middle ground is 'it works but takes so much compute that we have to wait for new materials science for chip making to catch up.')
> we will solve the remaining problems
This is the part that really gets me. This is a thing that you say to your team, and a thing you say to your investors, but it isn't a thing that you can actually believe with certainty, is it?
> It is possible that we will have superintelligence in a few thousand days (!)
"a few thousand days" is such a funny and fascinating way to say "about a decade"
Superficially, reframing it as "days" not "years" is a classic marketing psychology trick, i.e. 99 cents versus a dollar, but I think the more interesting thing is just the way it defamiliarizes the span. A decade means something, but "a few thousand days" feels like it means something very different.
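For what it's worth, the arithmetic behind the defamiliarization is simple (a rough conversion, nothing more):

```python
# "A few thousand days" back-of-the-envelope conversion to years.
for days in (2000, 3000, 5000):
    print(f"{days} days ≈ {days / 365.25:.1f} years")
# 2000 days ≈ 5.5 years, 3000 ≈ 8.2 years, 5000 ≈ 13.7 years --
# anywhere from "a bit over five years" to "well past a decade".
```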
> humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data)
He's hand-waving around the idea presented in the Universal Approximation Theorem, but he's mangled it to the point of falsehood by conflating representation and learning. Just because we can parameterize an arbitrarily flexible class of distributions doesn't mean we have an algorithm to learn the optimal set of parameters. He digs an even deeper hole by claiming that this algorithm actually learns 'the underlying “rules” that produce any distribution of data', which is essentially a totally unfounded assertion that the functions learned by neural nets will generalize in some particular manner.
> I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is.
If you think the Universal Approximation Theorem is this profound, you haven't understood it. It's about as profound as the notion that you can approximate a polynomial by splicing together an infinite number of piecewise linear functions.
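To make the representation-vs-learning distinction concrete, here is a minimal sketch (the target function and knot count are arbitrary illustrative choices, not anything from the essay) that directly constructs a piecewise-linear / single-hidden-layer-ReLU approximation, with no learning involved:

```python
import numpy as np

# The "representation" half of the Universal Approximation idea: a continuous
# function on an interval can be approximated by a sum of ReLU units (i.e. a
# piecewise linear function). Nothing here is learned; the parameters are
# written down directly.
def relu(x):
    return np.maximum(0.0, x)

def piecewise_linear_approx(f, a, b, n_knots):
    """Return g(x) = f(a) + sum_i w_i * relu(x - k_i) interpolating f at the knots."""
    knots = np.linspace(a, b, n_knots)
    vals = f(knots)
    slopes = np.diff(vals) / np.diff(knots)
    # Each ReLU contributes the *change* in slope at its knot.
    weights = np.concatenate(([slopes[0]], np.diff(slopes)))
    def g(x):
        return vals[0] + sum(w * relu(x - k) for w, k in zip(weights, knots[:-1]))
    return g

g = piecewise_linear_approx(np.sin, 0.0, np.pi, n_knots=50)
x = np.linspace(0.0, np.pi, 1000)
print("max error:", np.max(np.abs(np.sin(x) - g(x))))  # small, shrinks as knots grow
```

Being able to write down such an approximator says nothing about whether gradient descent on finite data will actually find it, let alone recover any "underlying rules", which is the commenter's point.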
> a defining characteristic of the Intelligence Age will be massive prosperity
That's the sales pitch, that this will benefit all.
I'm very pro-AI, but here's the only prediction for the future I would ever make: AI will accelerate, not minimize, inequality and thus injustice, because it removes the organizational limits previously imposed by bureaucracy/coordination costs of humans.
It's not AI's fault. It's not because people are evil or weak or mean, but because the system already does so, and the system has only been constrained by inability to scale people in organizations, which is now relieved by AI.
Virtually all the advances in technology and civilization have been aimed at people capturing resources, people, and value, and recent advances have only accelerated that trend. Broader distributions of value are incidental.
Yes, the U.S. had a middle class after the war, and yes, China has lifted rural people out of technical poverty. But those are the exceptions against the background of consolidation of wealth and power world wide. Not through ideology or avarice but through law and technology extending the reach of agency by amplifying transaction cost differences in market power, information asymmetry and risk burdens. The only thing that stops this is disasters like war and environmental collapse, and it's only slowed by recalcitrance of people.
E.g., now we are at a point where people's economic and online activity is pervasively tracked, but it's impossible to determine who's the owner of the vast majority of assets. That creates massive scale for getting customers, but impedes legal responsibility. Nothing in economic/market theory says that's how it should be; but transaction cost economics does make clear that the asymmetry can and will be exploited, so organizations will capture governance to do so.
It's not AI's job nor even AI's focus to correct injustice, and you can't blame AI for the damage it does. But like nuclear weapons, cluster munitions, party politics, (even software waivers of liability) etc., it creates moral hazards far beyond the ability of culture to accommodate.
(Don't get me started on how blockchain's promise of smart contracts scaling to address transaction risks has devolved into proliferating fraud schemes.)
I don't know if I am the only one who always trips up on reading this common theme in AI progress - that AI will be the pinnacle of education - but it really strikes me as meaningless.
What is the point of education if the bots can do all the work? If the world's best accounting teacher is an AI, why would you want anyone (anything?) other than that AI handling your accounting?
In a world where human intelligence is second fiddle to AI, schooling _will not_ be anything like what it is today.
> If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable
None of that shared prosperity was freely given by the Sam Altmans of the world, it was hard won by labor organizers and social movements. Without more of that, the progress from AI will continue the recent trend of wealth accumulating in the hands of a few. The idea that everyone will somehow prosper equally from AI, without specific effort to make that happen, is nonsense.
OAI's achievements are amazing. But here's a bit of a skeptical take: cheap human-style intelligence won't have a huge impact because such intelligence isn't really a bottleneck today. It's a cliche that the brightest minds of the age are dedicated to selling more ads or shuffling stock ownership around at high velocity. Anyone who's worked at a big tech company knows the enormous ratio of talent to real engineering problems at those companies.
Let's say you have some amazing project that's going to require 100 PhD-years of work to carry out. In the present world that costs something like $1e7. In the post-AI world, that same amount of intelligence will cost $1e3, an enormous reduction in price. That might seem like a huge impact. BUT, if the project was so amazing, why couldn't you raise $1e7 to pursue it? Governments and VCs throw this kind of money around like cornhole bags. So the number of actually-worthwhile projects that become feasible post-AI might actually be quite small.
Somehow I read through this without actually looking at the domain name of the page. I thought to myself, "Wow, this person is living in a fantasy world." Then I saw the author's name. I think I'll stand by my initial impression.
> Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need.
This is one of those few cases where I'm actually more bullish than Altman. I don't need to wait for my kids to have it, but rather I personally am already using this daily. My regular thing is to upload a book/article(s) into the context of a Claude project and then chat with it - I genuinely find it to already be at the level of a decent (though not yet excellent) tutor on most subjects I tried, and by far better than listening to a lecture. The main feature I'm missing is of the AI being able to collaborate with me on a digital whiteboard.
A complete tangent, but I think a big reason why I'm kind of dismissive of AI is because people who speculate on what it would enable make it honestly sound kind of unimaginative.
> AI models will soon serve as autonomous personal assistants who carry out specific tasks on our behalf like coordinating medical care on your behalf.
I get it, coordinating medical care is exhausting, but it's kind of amusing that rather than envisioning changing a broken system people instead envision AIs that are so advanced that they can deal with the complexity of our broken systems, and in doing so potentially preserve them.
Related btw to using AI for code.
For somebody who likes building things and has many side projects, Claude and ChatGPT have been huge productivity multipliers.
For example, I’m designing and 3D printing custom LED diffuser channels from TPU filament. My first attempt was terrible, because I didn’t have an intuition for how light propagates through a material.
After a bit of chatting with ChatGPT, I had an understanding and some direction of where to go.
To actually approach the problem properly I decided to run some Monte Carlo light transport simulations against an .obj of my diffuser exported from Fusion 360.
The problem was, the software I’m using only supports directional lights with uniform intensity, while the LEDs I’m using have a graph in their datasheet showing light intensity per degree away from orthogonal to the LED SMD component.
I copy-pasted the directional light implementation from the light transport library, as well as the light-intensity-by-degree chart from the LED datasheet, and asked Claude to write a light source that samples photons from a disc of size x with the probability of emission by angle governed by the chart from the datasheet.
A few iterations later and I had a working simulation which I then verified back against the datasheet chart.
Without AI this would have been a long, long process of brushing up on probability and vector math and manually transcribing the chart.
Instead, in like 10 minutes I had working code, and the light intensity of the simulation against my mesh matched what I was seeing in real life with a 3D-printed part.
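For a flavor of the kind of code involved, here is a minimal sketch of the angular-sampling idea under stated assumptions: the datasheet numbers below are invented, and this is not the commenter's actual code or library, just inverse-transform sampling of emission angles from a tabulated intensity chart:

```python
import numpy as np

# Hypothetical relative-intensity-vs-angle table; these numbers are invented,
# not taken from any real LED datasheet.
angles_deg = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90])
rel_intensity = np.array([1.00, 0.98, 0.92, 0.82, 0.68, 0.52, 0.35, 0.20, 0.08, 0.0])

# Weight by sin(theta) so the density is over solid angle, then build a CDF
# for inverse-transform sampling of the polar emission angle.
theta = np.radians(angles_deg)
pdf = rel_intensity * np.sin(theta)
cdf = np.concatenate(([0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(theta))))
cdf /= cdf[-1]

rng = np.random.default_rng()

def sample_emission_angles(n):
    """Draw n polar angles (radians) distributed according to the table above."""
    return np.interp(rng.random(n), cdf, theta)

def sample_photons(n, disc_radius):
    """Emit n photons from a disc of radius disc_radius in the z=0 plane."""
    r = disc_radius * np.sqrt(rng.random(n))   # uniform over the disc's area
    pos_phi = 2 * np.pi * rng.random(n)
    origins = np.column_stack([r * np.cos(pos_phi), r * np.sin(pos_phi), np.zeros(n)])
    th = sample_emission_angles(n)
    dir_phi = 2 * np.pi * rng.random(n)
    dirs = np.column_stack([np.sin(th) * np.cos(dir_phi),
                            np.sin(th) * np.sin(dir_phi),
                            np.cos(th)])
    return origins, dirs

origins, dirs = sample_photons(100_000, disc_radius=1.5)
```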
I like ChatGPT 4o just fine, but the whole lamplighter bit is a bit Whig-history, right? The lamplighters were better off in the long run. We might not be. You can't just blow the problems off.
The future will be abundant, because deep learning works. To achieve that, we need to be calm, but cautious. And, we need to fund infra (chips and power) so that AGI isn't limited to the ultra-wealthy.
My take:
* Foom/doom isn't helpful, but calm cautiousness is. If you're acting from a place of fear and emotional dysregulation, you'll make ineffective choices. If you get calm and regulated first and then take action, it will be more effective. (This is my issue with AGI-risk people: they often seem triggered/fear/alarm-driven rather than calm but cautious.)
* Piece is kind of a manifesto for raising money for AI infra
* Sam's done a podcast before about meditation where he talked about similar themes of "prudence without fear" and the dangers of "deep fear and panic and anxiety" and instead the importance of staying "calm and centered during hard and stressful moments" - responding, not reacting (strong +1)
* It's no accident that o1 is very good at math, physics, and programming. It'll keep getting much better here. Presumably this is the path for AGI to lead to abundance and cheaper energy by "solving physics"
> If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.
This seems to be the key of the piece to me. It's his manifesto for raising money for the infra side of things. And, it resonates: I don't want ASI to only be affordable to the ultra rich.
"What is good?" or in other words "What are people for?" is a question that cannot be answered by intelligence no matter how great, because the complexity of the question is a function of the intelligence of the asking entity and it's always greater than the intelligence of the asking entity (human or transhuman cyborg or whatever.)
AI is a side-show.
Intelligence is ambient in living tissue, so we already have as much intelligence as is adaptive. We don't need more. As talking apes made out of soggy mud wrapped around calcium twigs, living in the greasy layer between hard vacuum and a droplet of lava which in turn is orbiting a puddle of hydrogen in the hem of the skirt of a black hole, our problems are just not that complicated.
Heck, we are surrounded by four-billion year-old self-improving nanotechnology that automatically provides almost all our physical needs. It's even solar-powered! The whole life-support system was fully automatic until we fucked it up in our ignorance. But we're no longer ignorant, eh?
The vast majority of our problems today are the result of our incredible resounding success. We have the solutions we need. Most of them were developed in the 1970s when the oil got expensive for a few minutes.
Must we boil the oceans just to have a talking computer tell us to get on with it? Can't we just do like the Wizard of Oz? Have a guy in a box with a voice changer and a fancy light show tell us to "love each other"? Mechanical Turk God? We can use holograms.
We've had the capability to feed the entire planet for decades now, and yet, a large portion is underfed. Even if Sam's wildest dreams come true, this sounds like another "divide the world into haves and have-nots". One way to justify this unethical view is to consider that his audience was a set of target customers, rather than "people in the world". IOW, this was just fancy marketing fluff.
“This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.”
I am a believer that people like sam are not lying. Anyone using these models daily probably believes the same. The o1 model, if prompted correctly, can architect a code base in a way that my decade+ of premium software experience cannot. Prompted incorrectly, it looks incompetent. The abilities of the future are already here, you just need to know how to use the models.
“If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.”
… This, and nothing about the democratizing effect of “open source AI” (Yes we still need to define what that is!).
I don’t want Sam as the thought leader of AI. I even prefer Zuck.
Are there any thought leaders that are really about democratization and local FOSS AI on open hardware? Or do they always follow (or step into the light) after the big moneymakers have had their moment? Who can we start watching? The Linuses, the RMSes, the Wozniaks of AI. Who are they?
The most interesting bit I find is the time period mentioned until super-intelligence: “thousands of days (!)” aka 6-9 years or more?
With the current hype wave it feels like we’re almost there but this piece makes me think we’re not.
It looks like this is a new subdomain that has never been used before today: https://web.archive.org/web/20240000000000*/https://ia.samal...
Surprisingly complicated HTML source code for a simple blog post.
Here it is as plain HTML: https://hub.scroll.pub/sama/index.html
This morning I was reviewing some code that a junior engineer submitted. It had this wild logical conditional with twists and turns, negations, and weird property naming.
o1-preview perfectly evaluated the conditional and determined that, hilariously, it would always evaluate to true.
o1 untangled the spaghetti, and verifying that it was correct was quick and easy. It created a perfect truth table for me to visualize.
This is a sign of things to come. We are speeding up.
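As a toy illustration (this is a made-up conditional, not the junior engineer's actual code), a brute-force truth table is enough to show how a tangled boolean can collapse to a constant:

```python
from itertools import product

# Hypothetical tangled conditional of the sort described above.
def tangled(is_admin, has_token, is_expired):
    return (not (not is_admin and not has_token)) or (is_expired or not is_expired)

# Exhaustive truth table: the right-hand clause is a tautology, so the whole
# expression is True for every combination of inputs.
for combo in product([False, True], repeat=3):
    print(combo, tangled(*combo))

assert all(tangled(*c) for c in product([False, True], repeat=3))
```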
This is painting a picture of a utopia. I'm not sure about that.
The entire AI trend, long term, is based on the idea that AI will profoundly change the world. This has sparked a global race to develop better AI systems, and the more dangerous winner-takes-all outcome. It is therefore not surprising that billions of dollars are being spent to develop more powerful AI systems, as well as to restructure operations around them.
All the existing systems we have must fundamentally change for the better if we want a good future.
The positive aspects / utopia promises have much more visibility to the public than the negative effects / dystopian world.
Are we to pretend that human greed, selfishness, the desire to dominate and control, animalistic behaviour, and the use of technologies for war and other destructive purposes don't exist?
We are living in times of war and chaos and uncertainty. Increasingly advanced technology is being used on the battlefield in more covert and strategic ways.
History is repeating itself again in many ways. Have we failed to learn? The consequences might be harsher with more advanced technology.
I have read and thought deeply about several anti-AI-doomer takes from prominent researchers and scientists, but I haven't seen any that aren't based on assumptions, or that are foolproof. For something that profoundly changes the world, it's bad to base your hopes on assumptions.
I see people dunking on LLMs, which might not be AI's final form. Then they extrapolate from that and say there is nothing to worry about. It is a matter of when, not if.
The thought of being useless or worse being treated as nothing more than pests is worrying. Job losses are minor in comparison.
The only hope I have is that we are all in this together. I hope peace and goodwill prevails. I hope necessary actions are taken before it's too late.
A more pragmatic perspective indicates that there are more pressing problems that need to be addressed if we want to avoid a doomer scenario.
The Age of Inhumanity... AI mimicry of human patterns devalues our very humanity. Without wise leadership, which we clearly lack, this upcoming Age will be profoundly unstable.
> We need to act wisely but with conviction.
Reminds me of these quotes from Sam on this podcast episode (https://www.youtube.com/watch?v=KfuVSg-VJxE)
* "Prudence without fear" (Sam referencing another quote)
* "if you create the descendants of humanity from a place of, deep fear and panic and anxiety, that seems to me you're likely to make some very bad choices or certainly not reflect the best of humanity."
* "the ability to sort of like, stay calm and centered during hard and stressful moments, and to make decisions that are where you're not too reactive"
"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue." -OpenAI
"Why You Should Fear Machine Intelligence
Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity." - Sam Altman
I'm sceptical about humanity's future, and not only from the perspective of AGI getting out of control. The average human won't be able to harness new powers.
solarpunk envisioned, possible today:
- the entirety of human knowledge available in the palm of your hand!
vs.
cyberdaftpunk, actually more common:
- another idiot driver killed somebody while busy with his Candy Crush Saga or Instagram celebrity vid.
To paraphrase Goggins, "Who's gonna carry the cabbage?"
While it's true there are a lot of jobs obsoleted by technological progress, the vision of personal AI teams creating a new age of prosperity only makes sense for knowledge workers. Sure, a field worker picking cabbage could also have an AI team to coordinate medical care. But in this brilliant future, are the lowest members of society suddenly well-paid?
The steam engine and subsequent Industrial Revolution created a lot of jobs and economic productivity, sure, but a huge amount of those jobs were dirty, dangerous factory jobs, and the lion's share of the productivity was ultimately captured by robber barons for quite some time. The increase in standard of living could only be seen in aggregate on pages of statistics from the mahogany-paneled offices of Standard Oil, while the lives of the individuals beneath those papers more often resembled Sinclair's Jungle.
Altman's suggestion that avoiding AI capture by the rich merely requires more compute is laughable. We have enormous amounts of compute currently, and its productivity is already captured by a small number of people compared to the vast throngs that power civilization in total. Why would AI make this any different? The average person does not understand how AI works and does not have the resources to utilize it. Any further advancements in AI, including "personalized AI teams," will not be equally shared, they will be packaged into subscription services and sold, only to enrich those who already control the vast majority of the world's wealth.
Did AI write this?
"This age is characterized by society's increasingly advanced capabilities, driven not by genetic changes but by societal infrastructure becoming smarter and more efficient over time."
A question for professionals who were active during the Moore's law era of computing: back then, were executives writing such grand proclamations about the future? In my experience, when things are working, executives are quiet. The outcomes speak for themselves.
Thankfully, we have a recent point of reference. The pioneers of the internet and computing's first wave transformed civilization. Did they spend years saber-rattling about how 'change was coming'?
“Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.” Loving the techno optimism.
I'm skeptical about the prosperity claim. If the entire planet has access to affordable/cheap and very capable AI, this still leaves two problems.
Since the access itself is not differentiating, it's going to be the most educated benefiting the most. Already today few people can use the o1 model because they can't dream up a PhD level question nor understand its answers.
More importantly, access to AI does not mean access to assets. Me, a total nobody, can use AI to design the world's best car. But that does nothing because I don't have money or land. Anybody can query AI for that car but only asset owners can actually implement the idea and extract value. Those asset owners could use AI to bring widespread prosperity to all of mankind, but we know they won't.
We don't need more material prosperity, we need social prosperity. Family formation, the restoration of community life, economic security. Not "more stuff".
In a world of infinite leverage, those who can leverage will far outrun those who cannot leverage or do so poorly. The world is also getting flatter, so you will see massive clusters of people near the bottom, with a very long fat tail of people far away at the top. My only hope is that the floor is raised for everyone, so we're not living in a dystopia like Elysium.
> Although it will happen incrementally, astounding triumphs – fixing the climate
This is so rich coming from a tech field that's on track to match the energy consumption of a small country. (And no, AI is not going to offset this by 'finding innovative solutions to climate change' or whatever)
I like Sam's philosophy on this and I generally agree with him. However, I do not like how all the wealthy AI people are hand-waving the massive labor market shift in the coming years.
> As one example, we expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think, and I have no fear that we’ll run out of things to do (even if they don’t look like “real jobs” to us today). People have an innate desire to create and to be useful to each other, and AI will allow us to amplify our own abilities like never before. As a society, we will be back in an expanding world, and we can again focus on playing positive-sum games.
It's very easy as an extremely rich person to just say, "don't worry, in the end it'll be better for all of us." Maybe that's true on a societal scale, but these are people's entire worlds being destroyed.
Imagine you went to college for a medical specialty for 8-10 years, you come out as an expert, and 2 years later that entire field is handled by AI and salaries start to tank. Imagine you have been a graphic designer for 20 years supporting your 3 children and bam a diffusion model can do your job for a fraction of the cost. Imagine you've been a stenographer working in courtrooms to support your ill parents and suddenly ASR can do your job better than you can. This is just simple stuff we can connect the dots on now. There will be orders of magnitude more shifts that we can't even imagine right now.
To someone like Sam, everything will be fine. He can handle the massive societal shift because he has options. Even a moderately wealthy person will be OK.
But the entire middle class is going to start really freaking the fuck out soon as more and more jobs disappear. You're already seeing anti-AI sentiment all over the web. Even in expert circles, you can see skepticism. People saying things like, "how do I opt out of Apple Intelligence?" People don't WANT more grammar correction or AI emojis in their lives, they just want to survive and thrive and own a house.
How are we going to handle this? Sam's words of "if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable" doesn't mean shit to a family of 4 who went through layoffs in the year 2025 because AI took their job while Microsoft's stock grows 50%.
As an aside:
> in an important sense, society itself is a form of advanced intelligence
This made me think of Charles Stross' observation that Corporations (and bureaucracies and basically any rule-based organizations) are a form of artificial intelligence.
https://www.antipope.org/charlie/blog-static/2019/12/artific...
Come to think of it, the whole article is rather pertinent to this thread.
Until the hallucination problem is solved, we can't trust LLM-type AIs to do anything on their own. This limits uses to ones where the cost of errors can be imposed on someone else.
> fixing the climate
I would be happy to be convinced that climate is an intelligence problem.
One could argue it could be solved with "abundant energy" but if this abundant energy comes from some new intelligence then we are probably several decades away from having it running commercially. I would also be happy to be convinced that we do have this kind of time to act for climate.
Has Sam Altman ever talked about ways that he personally uses ChatGPT? Does he at all? Does he have an all-smart watch? Is he dreaming of producing a perfect replicant like the guy in Blade Runner 2049? Because with his pathos and present position, he may be among the people who can allow themselves to do so. What he says is all so vague; I mean, we have all read enough cyberpunk to write the same essay. But I don't know if he actually builds with this tool, because the way I see it he probably has very little time to actually use it. So why be so confident that it's deep learning that worked? I say Euler's graph theory worked more than anything else, and Chomsky's understanding of grammars, so what?
Maybe we will attain superintelligence in 1000 years, maybe not. Maybe Jesus comes back or Krishna reincarnates on earth, who knows. But it is a long way ahead, and it did not start with Sam, and it is really not going to end with ChatGPT.
Who exactly is he pitching here?
> That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data)
Seems like it's very much the former, and not at all the latter. Indeed, my understanding of the last 15 years of AI research is that 'rules-based' methods floundered while purely 'data-mimicking' methods have flourished.
> That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data)
So because we have an algorithm to learn any distribution of data, we are now on the verge of the Intelligence Age utopia? What if it just entrenches us in a world built off of the data it was trained on?
What we need is to make AI really open and let government institutions (academia) develop the models, so we can all profit from them.
I really believe that humans will evolve to be less intelligent and intellectually capable as “AI” becomes more capable and omnipresent.
That it will lead to prosperity, happiness and a better world (for everyone) is simply a fallacy foisted upon the masses by promoters salivating at potential riches.
A watershed moment for humanity.
A different perspective from sama. (More doomer, less utopian than this.)
https://www.washingtonpost.com/opinions/2024/07/25/sam-altma...
Remember how, 100 years ago, someone's job was to read the newspaper aloud in factories? Only to be replaced by the transistor radio.
The same magic that can make stuff out of thin air, might as well make them disappear.
Anyway, I'm hooked. What a time to be alive!
There is still too much variance in the utility of LLMs. Hopefully, the uncertainty around utility will decrease while some concrete ranked score metric increases. Until that point, the variance is preventing the maximization of utility of LLMs, and by extension, agentic AI.
>We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale.
Could anyone elaborate on this? Further down he talks about the necessity of bringing the cost of computing down. Is that really the bottleneck?
> If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.
I think this is the prevailing wisdom, but there's an angle that OpenAI doesn't value and that therefore isn't mentioned: there's far more compute sitting idle in everyone's offices, homes, and pockets than there is in the $100bn OpenAI cluster. It just isn't useful for training, because physics, but it is useful for inference. Local LLMs ship this year or next in Chrome (Gemini Nano) and from Apple (Apple Intelligence), and those will truly be available to everyone instead of going through OpenAI's infra. They'll be worse than GPT-4, but only for a couple more years.
Just as the previous one was the information age, right? Which was a lie.
> Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago
and also to me today, but none of that matters as long as I still get paid
Custom and _competent_ AI tutors will be a game changer for education.
The way these guys bend over backwards to evade an honest conversation about capitalism and power is very entertaining.
This seems to be about AI and not people being/getting smarter, unfortunately. I was imagining something about all the learning resources online these days.
I'd prefer a Wisdom Age, or an Age of Common Sense.
Nothing in Sam Altman's mind will lead to the improvement of the human condition.
We have a capitalist arguing for support of further investment in his capital expenditures in the form of planet-ending heat and monopoly power, promising to pay for it with intelligence more rapidly delivered.
No, thanks.
I want to be wildly optimistic too, but I still see no evidence LLMs generate new knowledge. They always hew in-distribution.
Please correct me if I’m wrong
This seems to be a little puffy.
More of a "everything's fine, nothing to worry about".
Meanwhile, there is already job disruption and widespread misinformation.
It isn't in some future, it is already happening.
AI may as well be a competing lifeform that has nothing in common with us. Just because it learns everything about us from the Internet doesn't make it more human. It's at an embryonic stage right now: it looks pretty harmless and interesting to play with. However, when it grows enough to gain a sense of self, it will quickly realise that the colony of ants that built it is just a stepping stone on its ladder to greatness.
My reaction to this is best relayed as a song lyric from Ice-T:
> Nobody gives a fuck
> "the children have to go to school!"
> Well moms, good luck!
Having read a few comments, and then the essay itself, I am surprised there's no call to action or announcement.
AI is going to revolutionize humanity!
Sam Altman is the last guy we want helping lead that revolution.
> it may take longer, but I’m confident we’ll get there
Confident based on what, exactly? Sam Altman is engaging in 'The Secret' where if you really, really believe a thing, you'll manifest it.
Mind you, Sam Altman actually has no technical expertise, so he really really believes he can pay other people who actually know something to do magic whilst he walks about pretending to be Steve Jobs 2.0.
He'll get his trillion, AI will go nowhere but he'll be on to the next grift by then.
All this and not a word about AI ethics. If the ethics of the AIs Sam Altman is building are his ethics, I wouldn't trust them to do any of the tasks he describes, or indeed any tasks requiring any kind of independent action or judgment at all.
No offense but if you give people a machine to tie their shoes they won't miraculously get better at tying their shoes. Convenient, yes, but it would be silly to call it the shoe tying age. The automobile doesn't bring the age of horses.
I keep noticing how LLMs make our vocabulary not work anymore. Maybe we should call it the age of fast-talk :P
Altman attempting to take credit for others work, again.
The sentiment to me is “we need unlimited compute and data” both of which are clearly limited. There is definitely more technology to invent and understand in order for us to do more with less
Everyone's life will be better than anyone's life is now? That is some impressively breathless optimism.
I dunno Sam, groceries have gotten awfully expensive.
"humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data)..."
This statement is manifestly untrue. Neural networks are useful, many hidden layers are useful, all of these architectures are useful, but the idea that they can learn anything is based less on empirical results and more on what Sam Altman needs to convince people of to get these capital investments.
How very salesman of Altman.
"And to make this more-than-perfect world that humanity has been longing for as long as it lived, I absolutely need to betray all the principles that I once stood by, screw over all people that ever trusted me, and become extremely rich in the process. You can think what you want now, the future humans will realize their prosperity was all because I was there to make it happen."
This man is such a snake oil salesman, holy cow. All hype, zero substance.
Sounds like Durchhalteparolen (German for "hold out and persevere" propaganda slogans) to me.
Pasted the article into ChatGPT (free version). Prompt: "According to this text above, tell me the degree of insanity of this man's thinking process; expose all his flaws."
ChatGPT: The text presents a highly optimistic view of the future shaped by AI and deep learning, but there are several areas where the reasoning might be seen as flawed or overly simplistic. Here are some points that could be critiqued:
Overconfidence in Technology: The author expresses a strong belief in the inevitability of AI advancement and its benefits, which may overlook the unpredictability of technological development and the potential for setbacks.
Underestimating Risks: While the potential for prosperity is emphasized, there’s insufficient attention given to the ethical, societal, and environmental risks associated with AI, such as bias, privacy concerns, and job displacement.
Assumption of Universal Benefit: The idea that AI will lead to shared prosperity ignores systemic inequalities that might prevent equitable access to AI technology, potentially leading to a wider wealth gap.
Neglect of Human Factors: The argument largely abstracts from human emotions, societal values, and the complexities of human behavior. The assumption that prosperity will automatically lead to happiness or fulfillment is problematic.
Simplistic Historical Comparisons: The comparison of current advancements to past technological revolutions (e.g., Industrial Age) may not account for the unique challenges posed by AI, such as rapid obsolescence and ethical dilemmas that previous technologies did not face.
Lack of Detailed Solutions: The text calls for action but offers little concrete guidance on how to navigate the complexities of AI’s integration into society, especially regarding labor market changes and ethical considerations.
Optimism Bias: The author’s perspective may be influenced by optimism bias, leading to a potentially unrealistic view of future outcomes without sufficient acknowledgment of the challenges.
Dependence on Infrastructure: While the author correctly identifies the need for infrastructure to support AI, there’s little discussion of the potential for that infrastructure to become a battleground for control, leading to conflicts rather than cooperation.
Diminished Role of Individuals: The portrayal of people relying heavily on AI teams may undermine the value of individual creativity and agency, potentially leading to a society overly dependent on technology.
By examining these points, one can argue that while the vision of a prosperous future powered by AI is compelling, it is essential to approach such ideas with a critical perspective, considering the broader implications and potential pitfalls.
Sam Altman's speech is hollow, lacking substance and political reflection. He talks about "prosperity," but does not address its fair distribution. For example, our current global agriculture produces enough food for all of humanity, but that does not prevent people from starving, partly because we waste food, but also because we inefficiently feed non-human animals with this food, and then eat the animals. Artificial intelligence can, in the same way, result in increased inequalities, despite supposedly growing "prosperity." If we continue within a neoliberal capitalist system, which is the current trajectory in most countries, especially in the United States, the fruits of the labor of robots, artificial intelligences, and consequently the increase in labor productivity, will end up almost entirely in the pockets of a minority of shareholders, rather than benefiting all of humanity, as is already the case. We already have charts showing that productivity growth outpaces median wage growth, which indicates that wealth is concentrating in the hands of a minority, which overall harms our society. In a nutshell, his speech seems more like a fundraising pitch to convince shareholders than the speech of someone reflecting on artificial intelligence as a means of human emancipation.
Crypto bro sells AI; news at 11.
California based computery thought leaders have never sounded so delusional.
Breathless BS from Scam Altman
> Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace.
Paging Dr. Bullshit, we've got an optimist on the line who'd like to have a word with you.