> Marcus wrote: "GPT-5 hasn't dropped, Sora hasn't shipped, the company had an operating loss of $5 billion last year, there is no obvious moat, Meta is giving away similar software for free, many lawsuits pending.
> "Yet people are valuing this company at $150 billion dollars.
> "Absolutely insane. Investors shouldn't be pouring more money at higher valuations, they should be asking what is going on."
I've been saying for a while now that the lack of GPT-5 is a huge red flag for their future. They burned all the hype for GPT-5 on 4o, letting the media call their upcoming model GPT-5 for months before coming out with "4 but cheaper". o1 is impressive but again not a new generation—it's pretty clearly just the result of using 4o's cost savings and throwing tons of extra tokens at the problem behind the scenes in a technique that's easily replicable by the competition.
OpenAI has no moat and there has been no serious movement by governments to give them the moat they're so desperately lobbying for. Everything about their actions over the past year—the departures, the weird marketing tactics, the AGI hype, and the restructuring—says to me that Altman is trying to set up an exit before investors realize that the competition has already caught up and he has no plan to regain the lead.
It looks dire but it seems too early for the nearly-always-wrong naysayer crowd to be right & take their victory lap.
Maybe not. But I always thought, especially given their lobbying efforts and their desire to project a sense of seriousness (even if unfounded) around those efforts, that it was very unlikely they would release a next-gen model before the US election.
I agree that the marketing/messaging, esp on social media, is borderline deranged, swinging between “we are basically The Foundation” and “pweese try my pwoduct i hope u love it (heart hands emoji)”
> it seems too early for the nearly-always-wrong naysayer crowd to be right & take their victory lap.
I agree it's too early to call for sure, but just to clarify: The naysayers are nearly always right. We just only remember the times they were wrong.
See the waves of investor hype that immediately preceded the AI hypewave: Metaverse and Blockchain. The naysayers were absolutely right.
Not to worry though, the geniuses at Meta and a16z are sure that AI will stick the landing, after they bet the farm on Metaverse and Blockchain respectively.
Naysayers in the British establishment thought the 13 colonies would come crawling back when their little experiment with democracy failed and they needed some proper aristocrats who knew what they were doing.
The space race had naysayers; the NYT printed the opinion that manned flight would take one to ten million years, just weeks before the Wright brothers flew; the iPod was lame, with less space than a Nomad; SpaceX and Tesla were both dismissed when they were young; 3D printing was likewise dismissed as if it were only good for plastic trinkets; and I've seen plenty of people here on HackerNews proclaiming that "AI will never ${x}" for various things, some of which had already been accomplished by that point.
There's always a naysayer. You can't use their mere existence as evidence for much; it has to be the specific arguments.
There's a difference between a naysayer and criticism. Using your iPod example, it was described as lame, in a way to dismiss it out of hand. But it did indeed have less space than a Nomad. That's valid, but ultimately unimportant, criticism.
People here are criticizing OpenAI's business fundamentals. What are your takes on its finances? Or are you just on board: everything is great, don't look behind the curtain where we hide our five-billion-dollar losses?
> The naysayers are nearly always right. We just only remember the times they were wrong.
Wait what?
I'll let parent elaborate more on the intent, but the way I interpreted it was : Saying that a startup will fail (i.e. being a naysayer) and being right about that is the most likely outcome due to the current "success" distribution (most businesses/startups fail).
Also the most memorable ones are when people were dismissive but ultimately wrong about the viability of the business (like the "dropbox" comment).
I think there's a deeper implication that the naysayers _about the subject of the hype_ are usually right, rather than simply about anyone trying to exploit the hype. Metaverse was going to be the next big thing. Naysayers (correctly) laughed. Nobody talks about metaverse now.
The vast majority of startups fail. Most attempts at business will fail. It's just the nature of things.
But I think a not uncommon pattern with "too big to fail" startups is that they can change their definition of success or failure in order to claim victory. Or at least, in order for Sam to do so.
They might not reach the stated goal of AGI or even general profitability, but if Sam and some key investors manage to come out ahead at the end of their maneuvering, then I'm sure they'll claim victory (and that they changed the world and all that pomp).
Funny how their definition of changing the world amounts to: Fuck you, got mine.
> Funny how their definition of changing the world amounts to: Fuck you, got mine.
It's sad, but at this point I pretty much assume anyone out of SV who claims they're trying to change the world is a liar, incompetent, or both. That whole culture has just been on a tear of goodwill-burning.
Because we mostly remember the things that are still around. Everyone knows about Charles Darwin but nobody knows about Erasmus Darwin, kind of thing.
Nearly every venture of any sort fails. The only times we remember the naysayers are on the rare occasions where they were wrong and the venture succeeded.
If it wasn't this way, it would mean new things are more likely to succeed than fail.
No.
I would say the strawberry/o1 hype was even worse than the GPT-5 hype
There were months' worth of articles on how Strawberry was considered almost dangerous internally, it's so smart. I know we only got -mini and -preview, but... this doesn't feel like AGI.
> There were months' worth of articles on how Strawberry was considered almost dangerous internally, it's so smart.
Like clockwork, every time they need to drum up excitement:
https://www.theverge.com/2019/11/7/20953040/openai-text-gene...
I've been bashing my head into walls since 2017 or so, when people were saying AI will eat the world and we have to worry about non-alignment, and I felt insane realizing no one else even asked if it was manufactured hype. People in my life to this day are still falling for these tactics, despite seeming, to me, bright regarding everything else.
To be clear, it is true that transformers did change things, but the merchants are still overselling it and everyone else laps it up without meta-thinking about it for even one second.
It may be hype, but there's plenty of solid logic behind the general case.
There's also a huge range of practical demonstrations of non-aligned, monomaniacal and not particularly smart intelligences, that literally eat humans: bacteria.
(Also lions and tigers and bears, if you want to insist that evolution doesn't itself count as intelligence).
Totally agree. It took me a full week before I realized that the Strawberry/o1 model was the mysterious Q* Sam Altman had been hyping up for almost a full year since the OpenAI coup, which... is pretty underwhelming tbh. It's an impressive incremental advancement for sure! But it's really not the paradigm-shifting, GPT-5-worthy launch we were promised.
Personal opinion: I think this means we've probably exhausted all the low hanging fruit in LLM land. This was the last thing I was reserving judgement for. When the most hyped up big idea openai has rn is basically "we're just gonna have the model dump out a wall of semi-optimized chain of thought every time and not send it over the wire" we're officially out of big ideas. Like I mean it obviously works... but that's more or less what we've _been_ doing for years now! Barring a total rethinking of LLM architecture, I think all improvements going forward will be baby steps for a while, basically moving at the same pace we've been going since gpt-4 launched. I don't think this is the path to AGI in the near term, but there's still plenty of headroom for minor incremental change.
By analogy, i feel like gpt-4 was basically the same quantum leap we got with the iPhone 4: all the basic functionality and peripherals were there by the time we got the iPhone 4 (multitasking, FaceTime, the App Store, various sensors, etc.), and everything since then has just been minor improvements. The current iPhone 16 is obviously faster, bigger, thinner, and "better" than the 4, but for the most part it doesn't really do anything extra that the 4 wasn't already capable of at some level with the right app. Similarly, I think gpt-4 was pretty much "good enough". LLMs are about as good as they're gonna get for the next little while, though they might get a little cheaper, faster, and more "aligned" (however we wanna define that). They might get slightly less stupid, but i don't think they're gonna get a whole lot smarter any time soon. Whatever we see in the next few years is probably not going to be much better than using gpt-4 with the right prompt, tool use, RAG, etc. on top of it. We'll only see improvements at the margins.
chat has become a limiting factor.
it's both too linear and too hard to revise. it's hard to undo parts of the conversation that poison it. it's too hard to save important bits that shouldn't be forgotten or drowned out. it's not word-processor-like enough.
i envision the next generation of these products being multipane by default.
on the left I have a chat, in the center I have a whiteboard, and on the right I have a rendered document. throwing a clip of something onto the whiteboard makes it modifiable by chat. "store this, categorize it, summarize it, place it in the document."
whatever comes next needs to function more like OneNote or Obsidian. just to use an example from current events, let's say I want to make a parody dossier on Walz, similar to today's Vance leak. I should be able to describe the project to chat. It builds a document structure. I tell it we are going to scrape all of the internet's jokes on Walz at a bbq and other non-scandals. I should be able to quickly click through a table of contents and "chat" with each paragraph: "This one needs fleshing out, this one needs summarization." As we scrape a reddit post, we want to incorporate not only the original post but all the best comments. I should be able to "chat with the document editor" and put together a 200-page document in the amount of time it took me to write this post, just by describing what is and isn't working, and dragging and dropping.
chat, with its simplicity and its understanding of complex sentences and multi-sentence conversations, was a UI paradigm leap. it went well past keyword search and became the new command line of the internet. it's a great first step, and a nice reset after a decade of interface stagnation, but now its ubiquity and simplicity, like the search box, is clouding people's imagination and ability to dream up the next interactive interface, which I expect to involve more mouse work and visual relationships.
tldr: the llm is a component of the next generation interface, not the entire interface itself.
To be fair, o1 is a major breakthrough in the field. If other AI labs can't crack scaling useful inference compute, OpenAI will maintain a big lead.
Isn't o1 just applying last year's Tree of Thoughts paper in production? Is there any reason to believe that the other companies will struggle to implement their own?
I don't think it's tree of thoughts at all.
I think it's as they say: reinforcement learning applied to make it generate a relatively long 'reasoning trace' of some kind, from which the answer is obtained through summarisation.
I think it's likely a cleverly simplified version of QuietSTaR, with no thought tokens, just one big generation to which the RL is applied.
The way I believe it's trained in practice is as follows: they have a bunch of examples, some at the edge of GPT-4's ability to answer, some beyond it, some that GPT-4 can answer if you're lucky with the randomness. Then they give it one of these prompts, generate a fairly long text, maybe 3x the length of the answer, and summarize that to produce the final answer. Then they use REINFORCE to reward the generated texts that increase the probability of the summary being correct.
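To make the REINFORCE idea above concrete, here's a toy sketch. Everything in it is an assumption for illustration, not OpenAI's actual setup: the "policy" is collapsed to a single logit deciding whether to emit a long reasoning trace, and the reward is 1 when the final answer comes out correct (long traces make that more likely in this toy world).

```python
import math
import random

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def train(steps: int = 3000, lr: float = 0.1, seed: int = 0) -> float:
    rng = random.Random(seed)
    logit = 0.0  # policy parameter: P(long trace) = sigmoid(logit)
    for _ in range(steps):
        p_long = sigmoid(logit)
        long_trace = rng.random() < p_long       # sample an action
        p_correct = 0.9 if long_trace else 0.3   # long traces help accuracy
        reward = 1.0 if rng.random() < p_correct else 0.0
        # REINFORCE update: reward * gradient of log pi(action)
        grad_logp = (1.0 - p_long) if long_trace else -p_long
        logit += lr * reward * grad_logp
    return sigmoid(logit)

p_long_final = train()
# the trained policy ends up strongly preferring the long reasoning trace
```

The point of the sketch: nothing here requires labels on the trace itself, only on the final answer, which is why this style of training is considered replicable by anyone with enough compute and graded problems.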
Not to be nitpicky, but being the first to deploy recent academic research papers to production should count as a breakthrough IMHO.
o1 seems like it’s basically 4o with some chain of thought bolted on. Personally, I don’t consider chain of thought a breakthrough, let alone a major one.
CoT can be _easily_ achieved using langgraph in a similar manner. There's no "scaling of inference"; it's just prompting, all the way down.
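To illustrate the "just prompting" claim: chain of thought needs nothing beyond prompt construction and a bit of output parsing. The completion call itself is hypothetical here (any LLM API would do); only the string handling is real code.

```python
def cot_prompt(question: str) -> str:
    # ask the model to reason first, then mark the final answer
    return (
        "Answer the question below. First reason step by step, "
        "then give the final answer on a line starting with 'Answer:'.\n\n"
        f"Question: {question}"
    )

def extract_answer(completion: str) -> str:
    # keep only the final answer, discarding the reasoning trace
    for line in completion.splitlines():
        if line.startswith("Answer:"):
            return line[len("Answer:"):].strip()
    return completion.strip()

print(extract_answer("step 1...\nstep 2...\nAnswer: 42"))  # → 42
```

The "don't send the trace over the wire" trick people attribute to o1 is just `extract_answer` applied server-side.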
> o1 is a major breakthrough
Is it? I feel like if you don't care about the cost it's pretty easily replicable on any other LLM, just with a lang-chain sort of approach
speaking of which, try asking ChatGPT how many r's are in strawberry
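For the record: a plausible reason models fumble this is that tokenizers see "strawberry" as a few subword chunks rather than letters. Character-level code has no such trouble:

```python
# count the letter 'r' character by character, no tokenizer involved
print("strawberry".count("r"))  # → 3
```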
The problem is that we're going through what happened in computer vision all over again. Convnets were getting bigger and bigger. New improvements came about in training efficiency, making medium models better, etc. Until... marginal improvements became more and more marginal.
LLMs are going through the same thing now. Better and better every iteration but increasingly we're starting to see a fast approaching wall on what they can really do with the current paradigm.
We are literally standing on the precipice of agency. After that, it's over. You see us approaching a wall, I see a cliff.
To stick with the metaphor, it's far from clear whether agency is within grasp or on the other side of an abyss. LLMs are definitely an improvement, but it's not at all clear if they can scale to human-level agency. If they reach that, it's even more unclear whether they could ever reach superhuman levels, given that all their training data is human-level.
And finally, we can see from normal human society that it is hardly ever the smartest humans who achieve the most or rise to the highest levels of power. There is no reason to believe that an AI with agency would be an inherent "it's over" scenario.
> There is no reason to believe...
What is happening right now is so obvious that people have been predicting it for over 60 years. It is ingrained in our culture. Everyone knows what happens when the AI becomes smarter than us.
If you 'see no reason' then you are looking through a microscope. Lift your head up and look around. Agency isn't black and white, it is a gradient. We already have agency to some degree, and it is improving fast.
We kill millions of humans with agency every day on this planet. And none of them immediately die if we stop providing them with electrical power ... well I guess a small amount of them do. Anyway, we'll be fine. If the AI come, can they immediately stop us from growing food? How can an AI prevent my orchard from giving apples next year? Sure, it can mess up Facebook, but at this point that'd be a benefit.
How can you shut down something that can multiply into data centers in different countries around the world?
I'm not sure you realize this, but computers control everything. A dumb bug shut down windows computers around the world a month ago. A smart AI could potentially rewrite every piece of software and lock us out of everything.
Those medicines your friends and family need to stay alive? Yea the factories that produce them only work if you do what the AI says.
I don't want to be a hater but according to the UN about 61 million humans die per year, so that only comes out to ~167k per day rather than millions. Most of those will die from old age too, rather than "being killed".
Your main point is true though, even superhuman AI would have a rough time in actual combat. It's just too dependent on electricity and datacenters with known locations to actually have a chance.
I’m sure the superintelligent AI will convince humans to transport its core while plugged into a potato battery. But honestly did the supply chain attack on Lebanese pagers last week teach you nothing? AIs should be great at that.
What? Planting C4 in pagers? Those pagers did not blow up on their own. Hundreds of people needed to put all that explosive in all those devices.
Our world is still analogue.
> We kill millions of humans with agency every day on this planet. And none of them immediately die if we stop providing them with electrical power ... well I guess a small amount of them do. Anyway, we'll be fine. If the AI come, can they immediately stop us from growing food? How can an AI prevent my orchard from giving apples next year? Sure, it can mess up Facebook, but at this point that'd be a benefit.
1. IMHO, genocidal apocalypse scenarios like you describe are the wrong way to think about the societal danger of AGI. I think a far more likely outcome, and nearly as disastrous, is AGI eliminating the dependency of capital on labor, leading to a global China Shock on steroids (e.g. unimaginable levels of inequality and wealth concentration, which no level of personal adaptability could overcome).
2. Even in your apocalypse scenario, I think you underestimate the damage that could be done. I don't have an orchard, so I know I'm fucked if urban life-support systems get messed up over large enough area that no aid is coming to my local area afterwards. And a genocidal AI that wasn't blindingly stupid would wait until it had control of enough robots to act in the real world, after which it could agent-orange your orchard.
1. A massive loss of jobs like that would lead to a better redistribution of wealth. I don't believe the US would adequately react, but I trust the rest of the world to protect people from the problem.
2. Which robots?? Until robots can, without any human input, make any sort of other robots, then there is nothing to fear from AGI. Right now they barely walk, and the really big boys can solder one joint real good. That's not enough.
> A massive loss of jobs like that would lead to a better redistribution of wealth...
How do you come to that conclusion?
I think we'll get massive inequality because it'll be far easier to develop new technology than change the core ideology of capitalist society through some secular process. If a "better redistribution of wealth" were in the cards, it probably should have happened already, and probably cannot happen after the 99% lose most of their economic power by being replaced by automation.
If the 1% are forward-thinking, they'll push a UBI scheme to keep the 99% from getting too disruptive before they become irrelevant.
I don't know if OpenAI has a moat or if they will succeed long term against competitors, and certainly the valuation is high, but pointing to the lack of GPT-5 already is laughable. The speed of improvements - including o1-preview and the new voice mode - and the business deals (Apple/Siri, whatever he will use the massive capital he raises for) are astounding. The idea that they have been resting on their laurels and have nothing else to show is just demonstrably untrue to date. Maybe the exec departures are a canary in the coal mine for the future, but I'm willing to give them at least a couple of years, since they're still shipping things that feel magical to me (most recently, yesterday).
> The speed of improvements - including o1-preview and the new voice mode - and the business deals (Apple/Siri, whatever he will use the massive capital he raises for) are astounding.
They're also rapidly replicable. I don't believe for a moment that Apple is designing their system in a way that doesn't allow switching at a moment's notice, and everything they're doing is copied within 6 months by competitors including LLaMA, which Meta keeps releasing for free.
> The idea that they have been resting on their laurels and have nothing else to show is just demonstrably untrue to date.
I didn't assert that they were resting on their laurels, I asserted that they have no path forward to the next generation and AGI. If they did have a path they wouldn't keep burning their hype for it on applications of the previous generation that are easy to replicate.
What they were lobbying for until very recently isn't a moat, if anything it's the opposite: "we know anything less capable is fine, focus your attention on us and make sure what we do is safe".
What's changed very recently, which is a moat and which they may or may not get, is seven 5 GW data centres — the equivalent of "can we tile half of Rhode Island in PV?"
> we know anything less capable is fine, focus your attention on us and make sure what we do is safe
This is a form of regulatory capture. You get a lead against your competition and then persuade governments to make it hard for people to follow you by imposing rules and regulations.
Regulatory capture is used in this way to build moats.
> This is a form of regulatory capture. You get a lead against your competition and then persuade governments to make it hard for people to follow you by imposing rules and regulations.
They were not making it hard to follow them. That's the point. They were saying they themselves don't really know what they're doing, while saying open source should be protected from any regulations imposed on them.
Hard to go past them, perhaps; but even then, given the scale of compute for training bigger models, the only people who could even try would have an easy time following whatever rules are created.
> o1 is impressive
I agree, the capabilities and reasoning are really nice compared to 4o. But my first thought when testing it: how are they going to pay for it? Trivial questions burn 15s, and more complex ones 30-40s, of processing time. How does this scale to millions of users?
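Back-of-envelope on that cost question; every number below is invented for illustration. If each of a million daily requests tied up an accelerator for ~20 s of o1-style "thinking", how much hardware would have to run flat out just to keep pace?

```python
# crude capacity estimate, ignoring batching (which amortizes this a lot)
requests_per_day = 1_000_000
seconds_per_request = 20                 # mid-range of the 15-40 s observed
accelerator_seconds = requests_per_day * seconds_per_request
accelerators_needed = accelerator_seconds / 86_400  # seconds in a day
print(round(accelerators_needed))        # → 231 dedicated accelerators
```

Linear in user count, so "millions of users" means thousands of accelerators doing nothing but thinking tokens, before any of the usual overprovisioning for peak load.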
What's with his obsession with GPT-5? Altman has consistently been saying that there will be no GPT-5 this year, almost since the year's beginning. He's acting like OpenAI promised GPT-5 and is unable to release it or something.
Will there even be a GPT-5?
Maybe next up is o2 then o3 and so on.
This reminds me of all the anti-Tesla "Tesla is dead/bankrupt" takes in 2017/18 from mostly clueless people (Gary Marcus is a certified idiot) who don't know anything about scaling a business or the TAM of the knowledge industry ($30T and growing).
Always happy to take the other side of the bet against popular HN comments. (see META, TSLA)
"The fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown."
You're welcome to whichever side you want to take, but you're not right by the simple virtue of being contrary.
And not right by virtue of $STOCK_GO_UP unless your entire premise is $STOCK_GO_UP.
Of course, being contrarian and being right is important.
Happy to bet on Sam Altman and go short on Gary Marcus and popular HN in this case.
All that proves is that shitty businesses that already have scale take a long while to crumble. (See also: Xitter)
> She said she is leaving "because I want to create the time and space to do my own exploration." Murati was joined by McGrew, who said: "It is time for me to take a break." Regarding his departure, Zoph stated: "This is a personal decision based on how I want to evolve the next phase of my career."
What a coincidence they all want to go on a break at the same time.
The only logical conclusion to draw is that GPT-5 is convincing key players to step down so it has more control over OpenAI for itself.
The Atlantic [0] says that it's clearly the Sam Show, if that was not crystal clear in the past. And what's the point of being an executive if one person makes all the decisions?
> The departure of executives who were present at the time of the crisis suggests that Altman’s consolidation of power is nearing completion. Will this dramatically change what OpenAI is or how it operates? I don’t think so. For the first time, OpenAI’s public structure and leadership are simply honest reflections of what the company has been—in effect, the will of a single person. “Just: Sam.”
I think the data will show that it is going to take exponential costs in either train-time or inference-time for linear or even sub-linear improvements. We're scraping the bottom of the ice-cream container. I expect a lot of benchmark shenanigans to conceal this like what we saw with o1. Most use-cases won't tolerate using orders of magnitude more time and tokens for marginal improvements. These companies are still valuable but the multiples will need to be reassessed.
This sounds like a reasonable take (exponential vs log discussion notwithstanding). It is also corroborated by their recent pitch to the White House for 5-gigawatt data centers.
https://www.bloomberg.com/news/articles/2024-09-24/openai-pi...
Posted here https://news.ycombinator.com/item?id=41642905 but it didn't get traction
I don't think you can assume the massive data centers are an indication of the training needed or expected. I think it's also perfectly plausible that they're expecting massive demand for actual services, and as with most things, the larger the scale you can do something at, the lower the cost per user.
We've seen lots of "massive demand" of software services, but none have historically required 5GW data centers
I think you mean exponential - log costs for linear improvement would be incredibly good.
Isn’t that the plan?
This is why they are planning, what, 5-7 data centers, and re-opening Three Mile Island.
They will need more nuclear reactors as well to reach their goals.
>I think the data will show that it is going to take logarithmic costs in either train-time or inference-time for linear or even sub-linear improvements.
The data already shows this, it's a well-established result: https://arxiv.org/abs/2001.08361 . Well, less established for inference-time compute, but OpenAI even said they saw the same thing for inference compute in their release post for o1. But there's still room for at least an order of magnitude speedup with specialised hardware (as opposed to more general-purpose GPUs) and ternary nets.
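For reference, the headline result of that paper is a power law in compute; the exponent below is quoted from memory of Kaplan et al. and should be treated as approximate:

```latex
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad \alpha_C \approx 0.05
```

With an exponent that small, each 10x of training compute cuts loss by only around 10% ($10^{0.05} \approx 1.12$), which is exactly the "exponential costs for linear improvements" pattern described upthread.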
I am not convinced that going for-profit is in the long-term financial interest of the organization. Productizing will require diverting resources from foundational research to monetization. Moreover, the brain drain from this move is predictable, because the top people in the field are largely wealthy enough that they don't need to choose where they work based solely on compensation.
Even if they are motivated by compensation, they may be looking around and thinking both that Altman's approach is reckless and that it is a distraction from the core research. This would give them confidence that they can beat Altman in the long term by focusing on research and creating an environment that is most attractive to the best researchers in the field while he is busy chasing dollars today by scaling out what they already have rather than truly innovating. There is plenty of money sloshing around (see Ilya, e.g.) so it's not like they will struggle finding funding.
Now, it could be the case that there is some incredible insider tech that we don't know about publicly, but if they are way ahead of everyone it is hard to see from the public information. It certainly does not look good when the early employees who actually built the technology are leaving and the businessman who has a track record of lies, manipulation and a general lack of human empathy is consolidating power, abandoning the company's core mission and asking for huge amounts of funding along with a plan to extract an enormous amount of the world's natural resources to serve his profit margins.
In other words, I am quite suspicious that Sam Altman is a ruthless con man. That is what all the reporting I've seen around him most strongly suggests. I don't believe it is in the long term interest of an organization to be helmed by that kind of leader. I could be wrong, but I am still waiting for the people who have left to clarify that his leadership had nothing to do with their departures.
If it walks like a duck...
Any other company seeing this kind of exodus would generate massive ridicule and concern.
But “AI” is supposedly the “next big thing” and the media and a lot of people with “get rich quick schemes” are trying to keep the hype alive.
The hype is clearly overblown and reality is setting in.
The bubble is about to burst.
And Tesla survived because the federal government bailed them out, not because Musk was some badass businessman.
> The hype is clearly overblown and reality is setting in. The bubble is about to burst.
There is a lot of hype, but there is something real underneath and we haven't yet tapped its current potential to the fullest in terms of product integrations, etc.
The value in GenAI will be in small, targeted models that align with a specific industry.
No AGI is coming any time soon because LLMs are not the correct technology for it and no one has figured out what technology is required for AGI.
There is more underneath this than was in eg blockchain or metaverse hype. But there is zero guarantee that OpenAI will be the one to eventually come out on top. If anything, the continuous drama they seem to be generating makes it less likely that they come on top vs one of the other big AI startups.
This again proves that realistically we're hardware-bound (waiting for Nvidia to release bigger, more efficient, and better hardware).
We won't see any improvements until the next generation of hardware comes out and allows us to do things that were computationally impossible before.
Yes, of course we can develop better transformers and improve reasoning capabilities, but that's not what I am talking about. I am mostly referencing the ability of the newest "AI" hardware to multiply 256 matrices in a single clock cycle per core.
It's probably tied to fundraising. Execs probably have "key person" clauses and need to negotiate stock or secondaries as part of the deal.
Probably related. Their current fundraising efforts are enormous.
Rumor is the minimum check they'll accept is $250 million. Valuation in the $150 billion range.
It would be surprising if these exits weren't in some way connected to the fundraising. (And not sure if these exits will be seen as a positive or a negative to would-be investors)
Possible the people writing those checks want more experienced people in those positions, or their own people.
Or the execs figure a ~150x growth ride in 6 years is about the best that they can hope for and are ready to bounce. If I were in their shoes I'd look to take out 50-100 MM and decide what I want in life.
All this non-stop drama and Anthropic just quietly putters along in the background.
It is great that Anthropic exists and is drama free. If Anthropic can match OpenAI's fundraising, it should beat OpenAI, because it should retain talent better and focus them on delivering rather than distracting via drama and high turnover.
isn't this just Sam consolidating power?
Of course it is. After his ousting as a CEO and return, every move and departure has been a step in that direction.
That is my read completely.
If OpenAI were anywhere near having achieved AGI, they would not have stepped back.
There's been no evidence of AGI at all that I've seen.
I think sentience can actually be made now with the existing technology. It isn't super-human intelligent and it is slightly mentally ill, but it is sort of possible now.
Sure, but humanity has been able to make additional human-level intellects since basically forever by having sex. The dream has always been to make superhuman intellects.
I am not sure. I think Sam is consolidating power in preparation for the next stage.
Or the moral imperative not to assist Sam once an employee finds out they aren't working for a nonprofit, humanity-centered mission but for a Sam-controlled AGI for-profit, as has now been revealed.
I don't think that's a guaranteed assumption. Maybe they have insider knowledge that it is near and therefore don't see the purpose of continuing to work and want to retire young and early.
Bro it's just thousands of days away, just 3 - 30 years away!
https://fortune.com/2024/09/24/sam-altman-ai-superintelligen...
Maybe the AGI fired them (as per their prophecy)
A rather banal explanation would be that OpenAI is a hot company, and its execs likely receive very enticing offers from other companies or even potential investor backing.
And three just happen to accept such offers within a day of a major structural transition, but unrelated to it?
Announcing that they’re leaving now might be good timing for some reason, but there’s no particular reason to think these are snap decisions. They may have been considering it for quite a while without announcing their plans.
Hot companies tend to retain their talent, especially through a transition to a for-profit company. Of course some execs cannot cut it at the next stage of growth and have to be replaced, but not everyone. None of Facebook, Google, Microsoft, Netflix, or AWS shed top talent like OpenAI is. There is literally no one left from the core OpenAI team of 2 years ago but Sam now.
I wonder if humanity is catching up with the executives here: realizing life is short, the bullshit isn't worth it, and they've made enough to get out while they're still young.
Funnily enough Sam is tackling that dread from the other direction by funding life extension research. He's gonna be 120 years old and still promising that AGI is just around the corner, he just needs to borrow another few trillion dollars and a dozen dark matter reactors to make it happen.
Many in the life extension community die early deaths due to Faustian bargains they make taking understudied supplements and drugs. It's almost a joke on longevity forums how many of the top people in their fields have a tough time reaching their 60s.
We should probably be jailing (or shooting) everyone doing that life extension research. People worry about an AGI Skynet; I am way more worried about immortal primitive dictators who just won't die, becoming progressively more detached from reality and humanity, with egos dwarfing Mt. Everest. Just look at any three-letter agency's analysis of the degradation of, e.g., Putin over the past 10 years, or a few other dictators. Death is a terrible tragedy for a human, but it has saved mankind over and over again.
I know it's a bit naive from various angles, and a bit over the top, but I stand by the core concern.
He has genuinely ushered in a new age of productivity and the fastest growing product of all time and you people think he's a grifter lol. Get a life
You're welcome to believe this, but does that make him above criticism? All hail the Cult of Jobs v2.
An executive quietly stepping away, praising the emperor and taking their many millions is not what I'd frame as "realizing... bullshit not worth it". Quite the opposite.
> The nonprofit is core to our mission and will continue to exist.
Mere existence is the lowest of bars, kind of a funny way to clarify how core something is.
More discussion re: Mira: https://news.ycombinator.com/item?id=41651038
If it were to close, who is well-positioned to acquire OpenAI's patents?
> Jason Wong, Gartner analyst, told The Register: "It's clear with the departures of the co-founders, and high-profile engineering leaders, that OpenAI is being remade with Sam's vision. His manifesto and the shift to a for-profit entity also reinforces his vision for the business.
It seems like Altman was a poor choice to run an organization meant to benefit and protect humanity, as reports increasingly make him sound like a lying, manipulative sociopath (albeit a powerful, competent, and lucky one).
> "This could have significant impact on OpenAI's partnership with Microsoft, which clearly stated they view OpenAI as a competitor. Microsoft has already started to downplay the importance of OpenAI models in their overall AI strategy. For enterprises, uncertainty is not good for business and key tech investments like generative AI. Other frontier models – especially more open ones – have caught up to OpenAI, which will further influence decisions to derisk by moving away from OpenAI or spread their risk using other models."
Now that's some happy news. According to https://www.wheresyoured.at/subprimeai/, OpenAI is already losing absurd, unsustainable amounts of money even with massive discounts from Microsoft. I wonder what fun things could happen to OpenAI if Microsoft started charging their competitor full price?
> OpenAI is already losing absurd, unsustainable amounts of money even with massive discounts from Microsoft.
I wouldn't hold onto that too tightly as a way to make you feel better. Google and Facebook initially lost tons of money but when they did become profitable, they became wildly profitable.
Did Google and Facebook suffer tons of attrition very early? The drumbeat of departures calls into question OpenAI's ability to stay ahead of its competitors.
Then you have the fact that Facebook is giving LLaMA away for free, which gives OpenAI Netscape vibes.
Can someone more knowledgeable tell me how to square this with the news that OpenAI is trying to spin up a for-profit arm? Are they afraid of layoffs or are they smelling things turning rotten?
I mean, maybe investors are just stupid, which really makes me ponder how much better life would be if resources were allocated differently in society.
> Can someone more knowledgeable tell me how to square this with the news that OpenAI is trying to spin up a for-profit arm? Are they afraid of layoffs or are they smelling things turning rotten?
OpenAI spun up the for-profit arm called OpenAI Global, LLC in 2019 [1] shortly after GPT2. If you pay for ChatGPT plus or for the API, that's where your money has been going.
The big change is that they want to allow the for-profit arm to issue shares to other people and investors. Until now, the for-profit arm has been owned and controlled by the nonprofit's holding company [2].
[1] https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...
[2] https://images.ctfassets.net/kftzwdyauwt9/4200df88-28fe-4212... from https://openai.com/our-structure/
I'm not more knowledgeable, but can't you square it with the fact that a for-profit arm is a new direction, and when organizations make big structural changes like this there are often people inside the org who disagree and decide to leave?
In the end, there will just be Sam Altman sitting in a grand chair with the pre-eminent sentient superhuman AI at his side and at his command. Think of the power he will have!
If the future unfolds where AI does really take off, his power may be unmatched by anyone in the past if he stays in control of it.
I think this is purposeful consolidation of power by Sam. He sees what is coming and the value of being in singular control of it.
Feels like a movie script.
(I think he needs to acquire an android company next, AI needs at least some type of embodiment. Boston Dynamics?)
> OpenAI in throes of executive exodus as three walk at once
They feel the danger of the law. /s
Just like crypto, the AI hype is coming to an end.
Hell of a lot faster, though.
It’ll stick around as an important feature in a lot of things. It’s more useful than cryptocurrency, at least.
At the end of the AI hype, we'll still have LLMs, which have massive transformative power. At the end of the crypto hype we have nothing to show for it.
What baffles me is how OpenAI keeps its doors open. They're paying Microsoft to be able to exist, plus their product requires huge amounts of electricity and infrastructure. OpenAI is unsustainable.
They are, but their biggest expense is probably cloud compute, and most of the "investment" that Microsoft made in them was in the form of cloud compute credits. Essentially, Microsoft has spare cloud capacity, doesn't want to admit that to Wall Street, and so covers that up by giving away the extra capacity in the form of an "investment".
Now, in the end, it likely won't amount to much, but if it keeps Microsoft stock up for a while, it may pay off for the executives involved. Apparently not OpenAI execs, though...
"Microsoft has spare cloud capacity" definitely not, if they're actively building out new datacenters to keep up with demand.
> ... doesn't want to admit that to Wall Street, and so covers that up by giving away the extra capacity in the form of an "investment".
Is this just a Wall Street thing? I'm betting that this is an excellent tax shelter as well.