For me, Anthropic's actions so far were the reason to lobby internally at my company for Claude and against Codex. I was successful; that's going to be a few subscriptions.
Amodei is probably right that current models aren't reliable enough for high-stakes decisions, but the more useful question is what the failure mode looks like in practice.
We're in an era that presents some novel problems.
I've seen a few people this week discuss the idea that Anthropic's own behavior will likely impact Claude's training.
The concern there is that if Claude ingests news articles that show Anthropic behaving in a manner that clashes significantly with the values they want to instill in Claude, it could make training less effective.
It's all very weird.
This comment is so deep, I fear getting lost in it.
If what you said were true, the only way to achieve a superior AI would be to embody the virtues one is aiming at.
That would solve so many of the field's conundrums; I wish it were true.
Not too hard.
Dad tells kid “never harm your neighbors even when threatened by a bully”.
Bully wants dad’s help harming a neighbor. Bully threatens dad. Dad can either stand strong and live the example he wishes his child to follow, or cave, displaying the opposite of what he said.
In humans what you do is far more important than what you say. You can tell a kid to tell the truth a thousand times and if you show by example that lying is ok, they will lie.
Conversely if you live a life where you simply don’t lie for any reason, your kids will learn to live honestly.
Not sure how well this translates to LLMs. Probably not cleanly.
> Dad can either stand strong and live the example he wishes his child to follow, or cave displaying the opposite of what he said.
Or do what he told his kid, by getting his cousin (not a neighbor) to do the harming. Or perhaps to get the cousin to destroy the bully.
---
> Conversely if you live a life where you simply don’t lie for any reason, your kids will learn to live honestly.
The "your kids will learn to live honestly" doesn't necessarily track.
If you simply don't lie, that doesn't mean you won't be taken advantage of at times. Your kids might see that and decide it's not the best approach to doing things.
If angering Trump weren't such high stakes, Anthropic could end this by releasing a single 30 second commercial.
Informing the public of this dispute would highlight Anthropic's mission (i.e., responsible AI), which is a market differentiator.
The Pentagon would crawl back anyway, since Claude is the most effective model for programming tasks.
> The Pentagon would crawl back anyway, since Claude is the most effective model for programming tasks.
Having not followed this closely at all, it seems like they are. If they weren't the best, why would the Pentagon be begging like this?
> Informing the public
We are in an AI bubble; the public doesn't drive valuations.
I agree with the larger point you make. Most AI companies - certainly Anthropic - are private, and there's some disconnect between the money pouring into them and their balance sheets.
That said, investors care very much about the number of users these companies have, and the public's attitude toward these companies.
An AI company can more easily raise money if it's growing its user base and enjoying public buzz.