Almost every parent comment on this is negative. Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?
It seems there is a constant tendency on this forum to view any decision made by any big AI company at best with extreme cynicism and at worst with virulent hatred. It seems unwise for a forum focused on technology and building the future to be so opposed to the companies doing the most to advance the most rapidly evolving technological domain at the moment.
People remember things and consistently behaving like an asshole gets you treated like an asshole.
OpenAI had a lot of goodwill and the leadership set fire to it in exchange for money. That's how we got to this state of affairs.
What are the worst things OpenAI has done?
The number one worst thing they've done was when Sam tried to get the US government to regulate AI so only a handful of companies could pursue research. They wanted to protect their moat.
What's even scarier is that if they actually had the direct line of sight to AGI that they had claimed, it would have resulted in many businesses and lines of work immediately being replaced by OpenAI. They knew this and they wanted it anyway.
Thank god they failed. Our legislators had enough of a moment of clarity to take the wait and see approach.
Do you believe AI should not be regulated?
Most regulations that have been suggested would put restrictions mostly on the largest, most powerful models, so they would likely affect OpenAI/Anthropic/Google before smaller upstarts.
Dude, they completely betrayed everything in their "mission". The irony in the name OpenAI for a closed, scammy, for-profit company cannot be lost on you.
They released a near-SOTA open-source model recently.
Their strategy is to make money via closed-source offerings so they can afford safety work and their open-source releases. Ilya noted this near the beginning of the company. A company can't muster the capital needed to make SOTA models while giving everything away for free when its competitor is Google, a huge for-profit company.
As for your claim that they are scammy, what about them is scammy?
> Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?
Isn't that a good thing? The comments here are not sponsored, nor endorsed by YC.
I'd expect to see more of a balance though, on the assumption that people are drawn to posting on a YC forum over other forums because they support, or at least have an interest in, YC.
I think the majority of people don't care about YC. It just happens to be the most popular tech forum.
Why do you assume there would be a balance? Maybe YC's reputation has just been going downhill for years. Also, OpenAI isn't part of YC. Sam Altman was fired from YC and it's pretty obvious what he learned from that was to cheat harder, not change his behavior.
Sam Altman wasn't fired from YC.
Why do you assume that a forum run by X needs to or should support X? And why is it unwise - by what metric do you measure wisdom?
These guys are pursuing what they believe to be the biggest prize ever in the history of capitalism. Given that, viewing their decisions as a cynic, by default, seems like a rational place to start.
True, though it seems most people on HN think AGI is impossible thus would consider OpenAI's quest a lost cause.
People here are directly in the line of fire for their jobs. It’s not surprising.
When you call yourself "Open"AI and then turn around and backstab the entire open community, it's pretty hard to recover from that.
They released a near-SOTA open-source model not too long ago.
because of the repeated rugpulling?
I’ll bite, but not in the way you’re expecting. I’ll turn the question back on you and ask why you think they need defending?
Their messaging is just more drivel in a long line of corporate drivel, puffing themselves up to their investors, because that’s who their customers are first and foremost.
Do some self-reflection and ask yourself why you need to carry water for them.
I support them because I like their products and find the work they've done interesting, and whether good or bad, extremely impactful and worth at least a neutral consideration.
I don't do a calculation in my head over whether any firm or individual I support "needs" my support before providing or rescinding it.
Can someone give the counterargument to my initial cynical read of this? That read being: OpenAI has more money than it can invest productively within its own company and is trying to cast a net to find new product ideas via an incubator. I can't imagine SoftBank or Microsoft is happy about their money being funneled into something like this, and it implies they have run out of ideas internally. But I think I'm probably being too reflexively cynical.
I think that MIT study finding 95% of internal AI projects failing has scared a lot of corporations off risking time on it. I think they also see they are hitting a limit on profitable intelligence from their services (with the growth in intelligence over the past 6–8 months being more realistic, not the unbelievable leaps of the past few years).
I think everyone is starting to see this as a middleman problem to solve. Look at ERP systems, for instance: when they popped up, the industry had some growing pains. (Or even early Windows/Microsoft and its 'developers, developers, developers' target audience.)
I think OpenAI sees it will take a lot of third-party devs to take what OpenAI has and run with it. So they want to build a good developer and startup network to make sure there is a solid ecosystem of AI options that corporations and people can use.
The MIT study found 90% of workers were regularly using LLMs.
The gap was that workers were using their own implementation instead of the company's implementation.
The MIT study as released also does not really provide any support for the 95% failure rate claim. Until we have more details, we really don't know where that number came from:
https://www.linkedin.com/feed/update/urn:li:activity:7365026...
Yeah, from what I understand, 'chats' and AI coding are areas where they already have market dominance/are a leader, and the product is good/okay. It's the other use cases they haven't delivered on, in terms of other companies using them as a platform to deliver AI apps, which I would imagine was a huge vertical in their pitches to investors and internal plans.
These third-party apps drive huge token usage with agentic patterns. So losing out on them, and being forced to build more internal products tuned to specific use cases, is not a path they want to explore.
I think it's more that OpenAI has the name to throw around and a lot of credibility, but no products that are profitable. They are burning cash and need to show a curve by which they can reach profitability. Getting 15 people with 15 ideas they can throw their weight behind is worth a lot.
Without putting my weight behind them, here are some counterarguments:
- OpenAI needs talent, and it's generally hard to find. Money will buy you smart PhDs who want to be on the conveyor belt, but not people who want to be at the centre of a project of their own. This at least puts them in the orbit of OpenAI - some will fly away, some will set up something to be acquihired, some will just give up and try to join OpenAI anyway
- the amount of cash they will put into this is likely minuscule compared to their mammoth raises. It doesn't fundamentally change their funding needs
- OpenAI's biggest danger is that someone out there finds a better way to do AI. Right now they have a moat made of cash - to replicate them, you generally need a lot of hardware and cash for the electricity bill. Remember the blind panic when DeepSeek came out? So, anything they can do to stop that sprouting elsewhere is worth the money. Sprouting within OpenAI would be a nice-to-have.
It's possible that a single senior employee just wanted to do this and it doesn't cost that much and their manager was like "sure"
I really do want this to be the case
I mean, how much money are they throwing at this? I doubt it approaches anything close to a percent of the cash they have on hand.
I don't think it's about money; they don't invest anything. They gather data about "technical talent" working on AI-related ideas. They will connect with 15 of these people to see if they can build it together.
It seems almost like... an internship program for would-be AI founders?
My guess is this is as much about talent acquisition as it is about talent retention. Give the bored, overpaid top talent outside problems to mentor for/collaborate on that will still have strong ties to OpenAI, so they don't have the urge to just quit and start such companies on their own.
> OpenAI has more money than it can invest productively
I don't think there is any money given, except travel costs for first and last week.
OpenAI appears to lack clear product vision.
This feels like a program to see what sticks.
"Pre-idea stage" support is wild to me
We don't invest in ideas, we invest in founders. That's why OpenAI partnered with Y Combinator to bring you investments at the pre-founder stage.
We'll invest in your baby even before it's born! Simply accept our $10,000 now, and we'll own 30% of what your child makes in its lifetime. The womb is a hostile environment where the fetus needs to fight for survival, and a baby that actually manages to be born has the kind of can-do attitude and fierce determination and grit we're looking for in a founder.
Feels like the next logical move to me: they need to build and grow the demand for their product and API.
What better than companies whose central purpose is putting their API to use creatively? Rather than just waiting and hoping every F500 can implement AI improvements that aren't cut during budget crunches.
> This feels like a program to see what sticks.
Isn't that how we got (and eventually lost) most Google products?
There's a difference between product ideas rooted in compelling hypotheses on the one hand, and random ideas you throw against a wall to see what sticks on the other.
I suspect, but could be wrong, that in OpenAI's case it is because they believed they would reach AGI imminently and then "all problems are solved", in other words the ultimate product. However, since that isn't going to happen, they now have to think of more concrete products that are hard to copy and that people are willing to pay for.
What I want is a "grove" I can flee to where I will be immune to the effects of OpenAI and the other AI labs.
Alas, there is no such grove: my only real hope is that OpenAI and the AI labs are shut down by government action.
Sam clearly misses Y Combinator.
I randomly saw him being announced as a big deal here on this forum years ago, and I remember thinking: what has this guy done to deserve this?
Yeah, my thoughts were along the same lines. Seems like they want to be another Y Combinator, but more focused on AI. (Although TBF, I guess AI would also get the most traction at Y Combinator these days, given the hype wave.)
Did we ever find out why it is he doesn’t work there anymore?
Was forced to choose between OpenAI and YC by Paul Graham and Jessica. Sama chose OpenAI.
The really odd thing was when he got fired for like 3 days in 2023 because he refused to let Y Combinator have preferential representation of its startups in OpenAI models.
Clearly, dealing with OpenAI doesn't leave any room for fun stuff like YC. Just a hunch.
Indeed.
Exactly what I read between the lines on this.
If you are pre-idea today, does OpenAI believe your startup will still be relevant in the face of the AGI progress they forecast to make in the time it takes you to ship?
I ask questions like that in my head all the time. My metric is once their AI is smart enough to make their website not throw up an error half the time, I'll have to more deeply consider any AGI claims
15 people in the first cohort? AKA don't bother applying.
"pre-idea individuals"
Next up, we're funding prenatal individuals.
In 10 years, people will apply for jobs for their children before conception, and wisely not have kids if they can’t line one up (at least as a backup.)
Sell your first born to Scam Altman now!
Right, this corporate LinkedIn lingo is getting worse by the day.
Did anyone get confirmation that the form got sent? There is no feedback from pressing "submit" for me.
Same
Same issue
It looks like application submission isn't functioning.
Do a hard refresh while the console is open; that'd fix it!
Yeah, clicking "Submit" doesn't do anything obvious, aside from post some arcane errors to the JavaScript Console.
lmao, was this vibe coded?
I first misread it as "OpenAI Grave" where someone would put the list of all discontinued models.
"it offers pre-idea individuals" wtf
If ideas are a dime a dozen, what even is a pre-idea startup?
Why not ask the big bag of words to generate "ideas"?
Just playing Devil's Advocate...
but what, exactly, makes you believe this internship program is not an idea generated by the big bag of words?
> "pre-idea individuals"
Move over "idea guys", it's the era of the "guy who hypothetically might have an idea at some point".
I've got concepts of an idea
I don't know, man.
To me, it sounded like, "let's find all the idea guys who can't afford a technical co-founder. Then we'll see which ones have the best ideas, and move forward with those. As a bonus, we'll know exactly where we'd be able to acquihire a product manager for it!"
If OpenAI needs a bunch of PMs, they will increasingly be able to spin some up, not hire humans.
I caught that too. What's a "pre-idea" individual? Someone who... wants the vague _idea_ of a company?
No, before that
It's the AI guy version of the blockchain guy who had no idea what it was for or what to do with it, but was very hyped on it
South Park Commons -1 to 0 program seems conceptually similar
I mean, I get it.
I'm highly capable of building some great things, but at my day job I'm filled to the brim with things to do and a never-ending list of tasks in front of me.
I've built cool stuff before, and if given a little push and some support could probably come up with something useful - and I can implement much of it myself.
Put me in the room with cool people, throw out some conversation starters, shake it up and I'll come up with something.
The FAQ items don't expand for me, on Android Vivaldi.
Do you have to be in the US, or can they help you get in?
The country selection menu seems to include countries from around the world. It sounds like only the first and last weeks are actually on-site; the rest is async/remote.
Looks like they want to build up and support middlemen to do the apps rather than doing them themselves, and position themselves more like a platform or operating system. Which makes sense: giant corporations are reporting 95% of AI projects failing, and the core success cases are specialist companies tuning the platform to a specific problem. Then there are a ton of snake-oil AI apps that are over-promising and under-delivering, hurting the image of AI's usefulness.
This is probably a pivot in market strategy toward profitability, to increase token usage and consumer/public trust, more than it is about farming ideas for internal projects.
Is it just me seeing this as a talent discovery program?
It's clearly a talent grab. Where talent = creativity.
Most will submit the app with a dime a dozen ideas. (Or, at internet scale, a dime a few hundred thousand I guess?) No need to even consider those guys.
But it will be a pyramid. There will likely be 20-30 submissions that are at once truly novel and "why didn't I think of that!"-type ideas.
Finally, a handful of the submissions will be groundbreaking.
Et voilà. Right there you've identified the guys and gals thinking outside the LLM box about LLMs. Or even AI in general.
hmm.. wonder what the most accurate Venn diagram for this is?
Capitalists can't solve problems, they can seek out rent and put meters on things. These "builders" and "innovators" are the reason the web you dearly miss is dead.
The entire internet is now structured to sell to you: premium subscriptions for simple things that aren't technical problems, but are instead artificial complexity to monetize your every move. They profit from the fact that it's artificially difficult to host your own data, sync your own devices, or connect to each other without an intermediary.
All of this becomes worse with AI stratifying hardware power again. AI is great, but in the hands of American capitalists it's pearls before swine.
If capitalists can't solve problems, who do you suggest can?
The internet and many adjacent technologies were all created and iterated on inside the DoD and other wings of government research.
The world really benefits from well funded institutions doing research and development. Medicine has also largely advanced due in part to this.
What’s lost is the recapture. I don’t think governments are typically the best candidate to bring a new technology to marketable applications, but I do think they should be able to force terms of licensure and royalties. Keeping both those costs predictable and flat across industry would drive even more innovation to market.
What happens instead is private entities take public research and capture it almost entirely in as few hands as possible.
In short, the loss of civic pride and shared responsibility to society has created the nickel-and-dime-you-to-death capitalism we are seeing on the rise today: externalize every cost possible and capture as much profit as possible, with no thought to second-order effects, or to how the very system they dodge contributing back to is what allowed people to take such gross advantage of it in the first place.
> The internet and many adjacent technologies were all created and iterated on inside the DoD and other wings of government research.
^ This is the secret sauce. For decades the arrangement was exactly that: defense projects would create new technologies, then once those were finished, they were handed to private industry to figure out how to make a $20,000 MIL-spec LCD screen cheap enough and in vast enough quantities that you can buy 3 of them for less than $1,000 while the manufacturer, distributor, and retailer make a solid profit each. That's not an easy thing to do and it's what corporations have historically been good at. And it makes things better for the defense industry too, because they can then apply those lessons to their own hardware where appropriate. Win/win.
But we don't fund research anymore, or at least not that sort. Or perhaps there's just not much else to find. I think it's a bit of both. But in any case nothing new is getting made, which is why technology feels so dull right now. The most innovative products right now are just thinner, dumber, lighter versions of things we already have, and that's not nothing, but it isn't very interesting either.
Labor, FOSS... can you not imagine anything besides wealthy people creating artificial scarcity to force others to work for them?
Edit: if you don't think this is true, look at the history of truly any country and see what happens when subsistence farmers and indigenous communities refuse to work for capitalists
Labor, FOSS - can you be more specific? All FOSS projects operate within capitalism. Do you think Linux would be as successful as it is without its UNIX roots, created at Bell Labs, a darling of capitalism, or substantial contributions from companies like Intel?
BRB, waiting for capitalists to solve the housing and healthcare crisis, shouldn't be long...
Capitalists would be over the moon if they could build more housing, I assure you.
I mean they already solved that, they're raking in even more billions. The only issue was their solution was for them, not us.
Think of all the people who solved problems before/outside of typical capitalism. I guess more of those people wouldn't hurt to have right now to counter-balance the shift to hyper-capitalism that is ongoing.
Such as? Did any of those achievements lift billions of people out of poverty?
Incredible opportunity for SF Muni to get subsidized with even more full bus wrap ads for AI coding apps that nobody uses