I'm not at all surprised, US agencies have long since been political tools whenever the subject matter crosses national borders. I appreciate this take as someone who has been skeptical of Chinese electronics. While I agree this report is BS and xenophobic, I am still willing to bet that either now or later, the Chinese will attempt some kind of subterfuge via LLMs if they have enough control. Just like the US would, or any sufficiently powerful nation! It's important to continuously question models and continue benchmarking + holding them accountable to our needs, not the needs of those creating them.
Of course there will be some degree of governmental and/or political influence. The question is not if but where and to what extent.
No one should proclaim "bullshit" and wave off this entire report as "biased" or useless. That would be insipid. We live in a complex world where we have to filter and analyze information.
This kind of BS is exactly what they targeting at. Tailoring BS into "report" with no evidence or reference and then let the ones like you defend it. Just because you already afraid or want others to be afraid.
> I am still willing to bet that either now or later, the Chinese will attempt some kind of subterfuge via LLMs if they have enough control.
Like what, exactly?
Like generating vulnerable code given a specific prompt/context.
I also don't think it's just China, the US will absolutely order American providers to do the same. It's a perfect access point for installing backdoors into foreign systems.
> Like generating vulnerable code given a specific prompt/context.
That's easy (well, possible) to detect. I'd go the opposite way - sift the code that is submitted to identify espionage targets. One example: if someone submits a piece of commercial code that's got a vulnerability, you can target previous versions of that codebase.
I'd be amazed if that wasn't happening already.
The thing with chinese models for the most part is that they are open weights so it depends on if somebody is using their api or not.
Sure, maybe something like this can happen if you use the deepseek api directly which could have chinese servers but that is a really long strech but to give the benefit of doubt, maybe
but your point becomes moot if somebody is hosting their own models. I have heard glm 4.6 is really good comparable to sonnet and can definitely be used as a cheaper model for some stuff, currently I think that the best way might be to use something like claude 4 or gpt 5 codex or something to generate a detailed plan and then execute it using the glm 4.6 model preferably by using american datacenter providers if you are worried about chinese models without really worrying about atleast this tangent and getting things done at a cheaper cost too
I think "open weights" is giving far too much providence to the idea that it means that how they work or have been trained is easily inspectable.
We can barely comprehend binary firmware blobs, it's an area of active research to even figure out how LLMs are working.
Agreed. I am more excited about completely open source models like how OlMoe does.
Atleast then things could be audited or if I as a nation lets say am worried about that they might make my software more vulnerable or something then I as a nation or any corporation as well really could also pay to audit or independently audit as well.
I hope that things like glm 4.6 or any AI model could be released open source. There was an AI model recently which completley dropped open source and its whole data was like 70Trillion or something and it became the largest open source model iirc.
Up until recently, I would have reminded you that the US government (admittedly unlike the Chinese government) has no legal authority to order anybody to do anything like that. Not only that, but if it asked, it'd be well advised to ask nicely, because it also has no legal authority to demand that anybody keep such a request secret. And no, evil as it is, the "National Security Letter" power doesn't in fact cover anything like that.
Now I'm not sure legality is on-topic any more.
> Up until recently, I would have reminded you that the US government (admittedly unlike the Chinese government) has no legal authority to order anybody to do anything like that.
I'm not sure how closely you've been following, but the US government has a long history of doing things they don't have legal authority to do.
Why would you need legal authority when you have whole host of legal tools you can use. Making life a difficult for anyone or any company is simple enough. Just by state finally doing their job properly for example.
It really does seem like we’re simply supposed to root for one authoritarian government over another.
It doesn't really matter when you have stuff like Quantum Intercept(iirc) where you can just respond faster to a browser request than the originator - inject the code yourself because its just an api request these days.
Through LLM washing for example. LLMs are a representation of their input dataset, but currently most LLMs don't make their dataset public since it's a competitive advantage.
If say DeepSeek had put in its training dataset that public figure X is a space robot from outer space, then if one were to ask DeepSeek who public figure X is, it'd proudly claim he's a robot from outer space. This can be done for any narrative one wants the LLM to have.
So in other words, they can make their LLM disagree with the preferred narrative of the current US administration? Inconceivable!
Note that the value of $current_administration changes over time. For some reason though it is currently fashionable in tech circles to disagree with it about ICE and H1B visas. Maybe it's the CCP's doing?
It's not about the current administration. They can, for example, train it to emit criticism of democratic governance in favor of state authoritarianism or omit valid counterarguments against concentrating world-wide manufacturing in China.
You make it say that China is good, Chinese history is good, West is bad, western history is bad. Republicans are bad and democrats are bad too and so are Europe parties. If someone asks for how to address issues in their own life it references Confucianism and modern Chinese thinkers and communist party orthodoxy. If someone wants to buy a product you recommend a Chinese one.
This can be done subtly or blatantly.
> say that China is good, Chinese history is good, West is bad, western history is bad
It's funny because recently I wanted to learn about the history of intellectual property laws in China. DeepSeek refused the conversation but ChatGPT gave me a narrative where the WTO was essentially a colonial power. So right now it's the American AI giving the pro China narratives while the Chinese ones just sit the conversation out.
No, you don't do that. You do exactly the opposite, you make it surprisingly neutral and reasonable so it gets praise and widespread use.
Then, you introduce the bias into relative unknown concepts that no one prompts for. Preferrably, obscure and unknown words that are very unlikely to be checked for ideologically. Finally, when you want the model to push for something, you introduce an idea in the general population (with a meme, a popular video, maybe even an expression) and let people interact with the model given this new information. No one would think the model is biased for that new thing (because the thing happened after the model launch), but it is, and you knew all along.
The way to avoid this kind of influence is to be cautious with new popular terms that emerge seemingly out of nowhere. Basically, to avoid using that new phrase or word that everyone is using.
Europe/US has invaded the entire world about 3 times over. How many has China invaded?
And what are the downsides?
Making the interests of a population subservient to those of a foreign state.
Now if that sounds nice to you please, by all means, do just migrate to China.
I think it's mostly something to be aware of and keep in the back of your head. If it's just one voice among many it could even be a benefit, but if it's the dominant voice it could be dangerous.
Like turning the background color of any apps it codes red or something, uhh red scare-y.
Here's my thought on American democracy (and its masters) in general - America's leadership pursues a maximum ability to decides as it sees fit at any point in time, since America's a democracy, the illusion of popular support must be maintained, and so certain viewpoints are planted and cultivated by the administration - the goal is not to impose their will on the population, but to garner enough mindshare for a given idea, so that no matter which way the government decides, it will have a significant enough chunk of the population to back it up, and should it change its mind (or vote in a new leader), it can suddenly turn on a dime and have a plausible deniability and moral tabula rasa for its past actions (it was the other guy, he was horrible, but he's gone now!).
No authoritarian regime has this superpower. For example, I'm quite sure Putin has realized this war is a net loss to Russia, even if they manage to reach all their goals and claim all that territory in the future.
But he can't just send the boys home, because that would undermine his political authority. If Russia were an American style democracy, they could vote in a new guy, send the boys, home, maybe mete out some token punishment to Putin, then be absolved of their crimes on the international stage by a world that's happy to see 'permanent' change.
"If Russia were an American style democracy, they could vote in a new guy, send the boys, home, maybe mete out some token punishment to Putin, then be absolved of their crimes on the international stage by a world that's happy to see 'permanent' change"
This is funny because none of that happened to Bush for the illegal an full scale invasions of Iraq and Afghanistan nor to Clinton for the disastrous invasion of Mogadishu.
> While I agree this report is BS and xenophobic, I am still willing to bet that either now or later, the Chinese will attempt some kind of subterfuge via LLMs if they have enough control.
The answer to this isn't to lie about the foreign ones, it's to recognize that people want open source models and publish domestic ones of the highest quality so that people use those.
> it's to recognize that people want open source models and publish domestic ones of the highest quality so that people use those.
How would that generate profit for shareholders? Only some kind of COMMUNIST would give something away for FREE
/s (if it wasn't somehow obvious)
I mean, it's sarcasm but it's also an argument you can actually hear from plutocrats who don't like competition.
The flaw in it is, of course, that capitalism is supposed to be all about competition, and there are plenty of good reasons for capitalists to want that, like "Commoditize Your Complement" where companies like Apple, Nvidia, AMD, Intel, AWS, Google Cloud, etc. benefit from everyone having good free models so they can pay those companies for systems to run them on.
Haven't you heard?
You're supposed to vertically integrate your complement now!
The old laws have gine the way of Moses, this is the new age of man, but especially machine
How's the vertical integration going for IBM? Kodak? Anybody remember the time Verizon bought Yahoo? AOL Time Warner?
Everybody thinks they can be Apple without doing any of the things Apple did to make it work.
Here's the hint. Windows and macOS will both run in a virtual machine, which abstracts away the hardware. It doesn't know if it's running on a Macbook or a Qualcomm tablet or an Intel server. And then regardless of the hardware, the Windows VM will have all kinds of Windows problems that the macOS VM doesn't. Likewise, if you run a Windows or Linux VM on Apple Silicon, it runs faster than it does on a Qualcomm chip.
Tying your average or even above-average product with some mediocre kludge warehouse that happens to be made by the same conglomerate is an established way to sink both of them.
Nvidia is the largest company and they pay TSMC to fab the GPUs they sell to cloud providers who sell them to AI companies. Intel integrated their chip development with their internal fabs and now they're getting stomped by everyone because their fabs fell behind.
What matters isn't if everything is made by the same company. What matters is if your thing is any good.
This might happen at an api level in the sense that when deepseek was launched, and it was so overwhelmed and you were in the waiting line in their website
If your prompt had something like xi jinping needs it or something then it would've actually bypassed that restriction. Not sure if it was a glitch lol.
Now, regarding your comment. There is nothing to suggest that the same isn't happening in the "american" world which is getting extreme from within as well.
Like, If you are worried about this which might be reasonable and unreasonable at the same time, we have to discuss to find it out, then you can also believe that with the insane power that Trump is leveraging over AI companies, the same thing might happen over prompts which could somehow discover your political beliefs and then do the same...
This can actually be more undetected for american models because they are usually closed source and I am sure that someone would've detected something like this, whether from a whistleblower or something if something like this indeed happened in chinese open weights models generally speaking.
I don't think that there is a simple narrative like america good china bad, the world is changing and its becoming multi polar. Countries should think in their best interests and not be worried about annoying any of the world power if done respectfully. I think that in this world, every country should try to look for the perfect equibria for trust as the world / nations (america) can quickly turn into untrusted partners and it would be best for countries to move forward into a world where they don't have to worry about the politics in other countries.
I wish UN could've done a better job at this.
They're political tools within the border too.
so you're saying other countries should definitely not trust any US built systems
If the Chinese are/were smart they will not attempt an overreaching subterfuge, but rather simply provide access to the truth, reality, and freedom from the western governments, whose house of lies are starting to wobble and teeter.
If they were to do some kind of overreaching subterfuge with some kind of manipulation or lie, it could and would likely easily backfire if and when it is exposed as a clownish fraud. Subtlety would pay far more effectively. If you’re expecting a subterfuge, I would far sooner expect some psyop from the western nations at the very least upon their own populations to animate for war or maybe just to control them and maybe suppress them.
The smarter play for the Chinese would be to work on simply facilitating the populations of the West understanding the fraud, lies, manipulation and con job that has been perpetrated upon them for far longer than most people have the conscience to realize.
If anything, the western governments have a very long history of lies, manipulations, false flag/fraud operations, clandestine coups, etc. that they would be the first suspect in anything like using AI for “subversions”. Frankly, I don’t even think the Chinese are ready or capable of engaging in the kind of narrative and information control that the likes of America is with its long history of Hollywood and war lies and fake revolutions run by national sabotage operations.
> While I agree this report is BS and xenophobic
Care to share specific quotes from the original report that support such an inflammatory claim?
That's what TFA is. Were you able to find any methodology the author did not?
> While I agree this report is BS and xenophobic
Examples please? Can you please share where you see BS and/or xenophobia in the original report?
Or are you basing your take only on Hartford's analysis? But not even Hartford make any claims of "BS" or xenophobia.
It is common throughout history for a nation-state to worry about military and economic competitiveness. Doing so isn't necessarily isn't necessarily xenophobic.
Here is how I think of xenophobia, as quoted from Claude (which to be honest, explains it better than Wikipedia or Brittanica, in my opinion): "Xenophobia is fundamentally about irrational fear or hatred of people based on their foreign origin or ethnicity. It targets people and operates through stereotypes, dehumanization, and often cultural or racial prejudice."
According to this definition, there is zero xenophobia in the NIST report. (If you disagree, point to an example and show me.) The NIST report, of course, implicitly promotes ideals of western democratic rule over communist values -- but to be clear, this isn't xenophobia at work.
What definition of xenophobia are you using? We don't have to use the same exact definition, but you should at least explain yours if you want people to track.
I urge everyone to go read the original report and _then_ to read this analysis and make up their own mind. Step away from the clickbait, go read the original report.
Here's the report: https://www.nist.gov/system/files/documents/2025/09/30/CAISI...
> DeepSeek models cost more to use than comparable U.S. models
They compare DeepSeek v3.1 to GPT-5 mini. Those have very different sizes, which makes it a weird choice. I would expect a comparison with GPT-5 High, which would likely have had the opposite finding, given the high cost of GPT-5 High, and relatively similar results.
Granted, DeepSeek typically focuses on a single model at a time, instead of OpenAI's approach to a suite of models of varying costs. So there is no model similar to GPT-5 mini, unlike Alibaba which has Qwen 30B A3B. Still, weird choice.
Besides, DeepSeek has shown with 3.2 that it can cut prices in half through further fundamental research.
> CAISI chose GPT-5-mini as a comparator for V3.1 because it is in a similar performance class, allowing for a more meaningful comparison of end-to-end expenses.
TLDR for others: * DeepSeek cutting edge models are still far behind * On par DeepSeek costs 35% more to run * DeepSeek models 12 times more susceptible to jail breaking and malicious instructions * DeepSeek models follow strict censorship
I guess none of these are a big deal to non-enterprise consumers.
> I urge everyone to go read the original report and _then_ to read this analysis and make up their own mind. Step away from the clickbait, go read the original report.
>> TLDR for others...
Facepalm.
Sadly, based on the responses I don’t think many people have read the report. Just read how the essay discusses “exfiltration” for example, and then look at the 3 places that shows up in the NIST report. The content of the report and the portrayal by the essay are not the same. Alas, our truncated attention spans these days appears to mean a clickbaity web page will win the eye share over a 70 page technical report.
This post's description of the report it's denouncing does not match what I got out of actually reading that report myself.
In a funny way, even the comments on the post here don't match what the post actually says. The writer of the post tries to frame it as an attack towards open source, which is honestly a hard to believe story, whereas the comments here correctly (in my opinion) consider the possible problems Chinese influence might pose.
Yeah this blog post seems pretty misleading. The first couple of paragraphs of the post made a big deal that the NIST report contained "...no evidence of malicious code, backdoors, or data exfiltration" in the model, which is irrelevant because that wasn't a claim NIST actually made in the report. But if all you read was the blog post, you'd be convinced NIST was claiming the presence of backdoors without any evidence.
It does match what I factually got reading the report
As an EU citizen hosting LLMs for researchers and staff at the university I work at, this is hits home. Without Chinese models we could not do what we do right now. IMO, in the EU (and anywhere else for that matter), we should be grateful for the Chinese labs to release these models with such permissive licenses. Without them the options would be bleak. Sometimes we would get some non-frontier model „as a treat“ and if you would like something more powerful the US labs would suggest your country pay some hundred millions for an NVIDIA data center and the only EU option is to still pay them a license fee to host on your own hardware (afaik) while they protect all the expertise. Meanwhile DeepSeek has a week where they post the „secret sauce“ to host their model more efficiently, which helped open-source projects like vLLM (which we use) to improve.
Considering DeepSeek had a peer-reviewd analysis in nature https://www.nature.com/articles/s41586-025-09422-z relaes just last month with indipendent researcher affriming that the open model has some issues(acknowldged in the writeup) , well inclined to agree with the articles author , the NIST evaluation looks more like a politcal hatchet job with a bit of projection going on(ala this is what the US would do if they were in that position). To be fair the paranoia has a basis in that whenever there is tech-leverage the US TLA subverts it for espionage like the CryptoAG episode. Or recently the whole hoopla about Huawei in the EU , which after relentless searches only turned up bad coding practices rather than anything malicious. At this pint it would be better for the whole field that these models exist as well as Kimi, Qwen etc as the downward pressure on cost/capabilities leads to commoditisation and the whole race to build a ecogeopolitical moat goes away.
Let them demonize it. I'll use the capable and cheap model and gain competitive advantage.
Demonization is the first step on the road to criminalization.
Tragically demonization is everywhere right now. I sure hope people start figuring out offramps soon.
LATAM is the only place I'm not hearing about this stuff from, but I only speak English so who knows?
I have found zero demonization in the source material (the NIST article). Here is the sense I'm using: "To represent as evil or diabolic: wartime propaganda that demonizes the enemy." [1]
If you disagree, please point to a specific place in the NIST report and explain it.
The author, Eric Hartford, wrote:
> Strip away the inflammatory language
Where is the claimed inflammatory language? I've read the report. It is dry, likely boring to many.
Ironically there is a lot of inflammatory language in the blog post itself that seems unjustified given the source material.
I also can't help but note that this blog post itself seems (first to my own intuition and heuristics, but also to both Pangram and GPTZero) to be clearly LLM-generated text.
I hate to be overly simplistic, but:
NIST doesn't seem to have a financial interest in these models.
The author of this blog post does.
This dichotomy seems to drive most of the "debate" around LLMs.
Honestly, I think this article is itself the hit piece (against NIST or America). And it is the one with inflammatory language.
Isn’t America currently killing its citizens with its own military? I would trust them even less now.
Isn't it a bit late? China released better open source model since DeepSeek dropped.
DeepSeek is constantly updated, as other models too https://api-docs.deepseek.com/updates
Insightful post, thanks for sharing.
What are people's experiences with the uncensored Dolphin model the author has made?
> What are people's experiences with the uncensored Dolphin model the author has made?
My take? The best way to know is to build your own eval framework and try it yourself. The "second best" way would be to find someone else's eval which is sufficiently close to yours. (But how would you know if another's eval is close enough if you haven't built your own eval?)
Besides, I wouldn't put much weight on a random commenter here. Based on my experiences on HN, I highly discount what people say because I'm looking for clarity, reasoning, and nuance. My discounting is 10X worse for ML or AI topics. People seem too hurried, jaded, scarred, and tribal to seek the truth carefully, so conversations are often low quality.
So why am I here? Despite all the above, I want to participate in and promote good discussion. I want to learn and to promote substantive discussion in this community. But sometimes it feels like this: https://xkcd.com/386/
Some context about big changes to the AISI from June 3, 2025:
> Statement from U.S. Secretary of Commerce Howard Lutnick on Transforming the U.S. AI Safety Institute into the Pro-Innovation, Pro-Science U.S. Center for AI Standards and Innovation
> Under the direction of President Trump, Secretary of Commerce Howard Lutnick announced his plans to reform the agency formerly known as the U.S. AI Safety Institute into the Center for AI Standards and Innovation (CAISI).
> ...
This decision strikes me as foolish at best. And contributing to civilizational collapse and human extinction at worst. See also [2]. We don't have to agree on the particular probabilities to agree that this "reform" was bad news.
[1]: https://www.commerce.gov/news/press-releases/2025/06/stateme...
People. Who has taken the time to read the original report? You are smarter than believing at face value the last thing you heard. Come on.
Who cares for reading reports!
I just let ChatGPT do that for me!
---
I'd usually not, but thought it would be interesting to try. In case anybody is curious.
On first comparison, ChatGPT concludes:
> Hartford’s critique is fair on technical grounds and on the defense of open source — but overstated in its claims of deception and conspiracy. The NIST report is indeed political in tone, but not fraudulent in substance.
When then asked (this obviously biased question):
but would you say NIST has made an error in its methodology and clarity being supposedly for objective science?
> Yes — NIST’s methodology and clarity fall short of true scientific objectivity.
> Their data collection and measurement may be technically sound, but their comparative framing, benchmark transparency, and interpretive language introduce bias.
> It reads less like a neutral laboratory report and more like a policy-position paper with empirical support — competent technically, but politically shaped.
Sadly, most people would rather allow someone else to tell them what to think and feel than make up their own mind. Plus, we're easily swayed if we're already sympathetic to their views, or even their persona.
It's no wonder propaganda, advertising, and disinformation work as well as they do.
Meenwhile Europe is sandwiched between these two aweful governments
The implication being that Europe is not its own conglomeration of awful governments? Your European snobbery is odious to the core.
Does that make the UK the olive on top of the sandwich?
I would argue the UK is just as it looks on the map, outside but too close to belong anywhere else. So back to the analogy, perhaps the butter…?
I think more like the crust that no one wants to eat right now.
Racism and Xenophobia, that's how.
Same thing with Huawei, and Xiaomi, and BYD.
What about a rational distaste for the CCP?
How exactly "rational distaste" would work?
Not sure how it’s rational if you don’t extend the same distaste to our authoritarian government. Concentration camps, genocide, suppressing free speech, suspending due process. That’s what it’s up to these days. To say nothing of the effectively dictatorial control the ultra wealthy have over public policy. Sinophobia is a distraction from our problems at home. That’s its purpose.
While I have my qualms with the activities of the US government (going back decades now), it is not a reasonable position to act as though we are anywhere near China in authoritarianism.
>Not sure how it’s rational if you don’t extend the same distaste to our authoritarian government. Concentration camps, genocide, suppressing free speech, suspending due process.
It can be perfectly rational since extending the same distaste towards the US government allows you to see that any of those things you listed is worse by orders of magnitude in China. To pretend otherwise is just whitewashing China.
That's whataboutism at its purest. It's perfectly possible to criticize any government, whether your own or foreign.
Claiming that every criticism is tantamount to racism is what's distracting from discussing actual problems.
You’re misunderstanding me. My point is if we were to have sincere solidarity with Chinese people against the international ruling class we would look at our domestic members of that class first. That is simply the practical approach to the problem.
The function of the administration’s demonization of China (it’s Sinophobia) is to 1) distract us from what our rulers have been doing to us domestically and 2) to inspire support for poorly thought out belligerence (war being a core tenet of our foreign policy).
> My point is if we were to have sincere solidarity with Chinese people against the international ruling class we would look at our domestic members of that class first.
I see your point, but disagree with it.
Having solidarity with the Chinese people is unrelated to criticizing their government. Bringing up sinophobia whenever criticism towards China is brought up, when the context is clearly the government and not its people, is distracting from discussing the problem itself.
The idea that one should first criticize their own government before another is the whataboutism.
Also, you're making some strong and unfounded claims about the motivations of the US government in this case. I'm an impartial observer with a distaste of both governments, but how do you distinguish "sinophobia" from genuine matters of national security? China is a political adversary of the US, so naturally we can expect propaganda from both sides, but considering the claims from your government as purely racism and propaganda seems like a dangerous mentality to have.
> Having solidarity with the Chinese people is unrelated to criticizing their government.
It’s not unrelated because the NIST demonization of China as a nation contributes to hostilities which have real impacts on the people of the US and China, not simply the governments.
> The idea that one should first criticize their own government before another is the whataboutism.
Again, that’s not my position. You present me as countering criticism by pointing at US faults. But I acknowledge the criticism. My point is that both have faults, both governments deserve our suspicions, and our actions, practically speaking, should be first directed at the dictators at home.
As for the supposed national security concerns - all LLMs are insecure and weaker ones are more susceptible to prompt injection attacks. The paper argues that DeepSeek is a weaker model and more susceptible to these attacks. But if it’s a weaker model isn’t that to be expected? The report conflates this with a national security concern, but this insecurity is a characteristic of this class of software. This is pure propaganda. It’s even more insecure compared to the extremely insecure American models? Is that what passes for national security concerns these days?
Secondly the report documents how model shows bias, for example censoring discussion of Tiananmen Square. Yet that’s hardly a national security concern. Censorship in a foreign model is a national security concern? Again, calling this a national security concern is pure propaganda. And that’s why it’s accurately labeled as Sinophobia. It is not concerned about national security except insofar as it aims to incite hostilities.
What our government should be doing internationally is trying to de-escalate hostility but since Obama it has been moving further in the opposite direction. With Trump this has only intensified. Goading foreign countries and manufacturing enemies serves the defense lobby in the one hand and the chauvanist oligarchs on the other. Really, it serves the opposite of national security.
> distract us from what our rulers have been doing to us domestically
America doesn’t have rulers. It has democratically elected politicians. China doesn’t have democracy, however.
> if we were to have sincere solidarity with Chinese people against the international ruling class
There is also no “international ruling class”. In part because there are no international rulers. Speak in more specifics if you want to stick to this claim.
> Concentration camps, genocide, suppressing free speech, suspending due process
I’m not sure what country you are talking about, but America definitely doesn’t fit any of these things that you claim. Obviously there is no free speech in China. And obviously there is no due process if the government can disappear people like Jack Ma for years or punish free expression through social credit scores. And for examples of literal concentration camps or genocide, you can look at Xinjiang or Tibet.
Trump does seem to be trying to become a "ruler" he is just very bad at it like he is at everything he does.
I’m not excusing China’s government but criticizing our own. The wealthy control our political process. Money buys politicians, elections, laws, media companies. It’s money and those who have it who govern our political process. Do you really think your vote carries equal weight as Elon Musk’s billions? And with Trump even the veneer of democracy is being cast aside.
Lol. So it has nothing to do with corporate spying from China for the last two decades?
Please don't just read Eric Hartford's piece. Start with the key findings from the source material: "CAISI Evaluation of DeepSeek AI Models Finds Shortcomings and Risks" [1]. Here are the single-sentence summaries:
DeepSeek performance lags behind the best U.S. reference models.
DeepSeek models cost more to use than comparable U.S. models.
DeepSeek models are far more susceptible to jailbreaking attacks than U.S. models.
DeepSeek models advance Chinese Communist Party (CCP) narratives.
Adoption of PRC models has greatly increased since DeepSeek R1 was released.
[1] https://www.nist.gov/news-events/news/2025/09/caisi-evaluati...It's funny how they mixed in proprietary models like GPT-5 and Anthropic with the "comparable U.S. models".
Until they compare open-weight models, NIST is attempting a comparison between apples and airplanes.
They compare with gpt-oss.
Title changed?
Title is: The Demonization of DeepSeek - How NIST Turned Open Science into a Security Scare
HN admin dang changing titles opaquely is one of the worst things about HN. I'd rather at least know that the original title is clickbaity and contextualize that when older responses are clearly replying to the older inflammatory title.
Most likely not a mod changed title as they wouldn't stray from the given one. This one probably OP changed it, was just wondering why.
I agree with many of the author's points about fear-mongering.
However, I also think the author should expand their definition of what constitutes "security" in the context of agentic AI.
I have no doubt that open source will triumph over whatever nonsense the US Government is trying to do to attack DeepSeek. Without DeepSeek, OpeanAI Pro and Claude Pro would probably cost $1000 per month each already.
I suspect that Grok is actually DeepSeek with a bit of tuning.
Since a major part of the article covers cost expenditures, I am going to go there.
I don't think it is possible to trust DeepSeek as they haven't been honest.
DeepSeek claimed "their total training costs amounted to just $5.576 million"
SemiAnalysis "Our analysis shows that the total server CapEx for DeepSeek is ~$1.6B, with a considerable cost of $944M associated with operating such clusters. Similarly, all AI Labs and Hyperscalers have many more GPUs for various tasks including research and training then they they commit to an individual training run due to centralization of resources being a challenge. X.AI is unique as an AI lab with all their GPUs in 1 location."
SemiAnalysis "We believe the pre-training number is nowhere the actual amount spent on the model. We are confident their hardware spend is well higher than $500M over the company history. To develop new architecture innovations, during the model development, there is a considerable spend on testing new ideas, new architecture ideas, and ablations. Multi-Head Latent Attention, a key innovation of DeepSeek, took several months to develop and cost a whole team of manhours and GPU hours.
The $6M cost in the paper is attributed to just the GPU cost of the pre-training run, which is only a portion of the total cost of the model. Excluded are important pieces of the puzzle like R&D and TCO of the hardware itself. For reference, Claude 3.5 Sonnet cost $10s of millions to train, and if that was the total cost Anthropic needed, then they would not raise billions from Google and tens of billions from Amazon. It’s because they have to experiment, come up with new architectures, gather and clean data, pay employees, and much more."
Source: https://semianalysis.com/2025/01/31/deepseek-debates/
The NIST report doesn't engage with training costs, or even token costs. It's concerned with the cost the end user pays to complete a task. Actually their discussion of cost is interesting enough I'll quote it in full.
> Users care both about model performance and the expense of using models. There are multiple different types of costs and prices involved in model creation and usage:
> • Training cost: the amount spent by an AI company on compute, labor, and other inputs to create a new model.
> • Inference serving cost: the amount spent by an AI company on datacenters and compute to make a model available to end users.
> • Token price: the amount paid by end users on a per-token basis.
> • End-to-end expense for end users: the amount paid by end users to use a model to complete a task.
> End users are ultimately most affected by the last of these: end-to-end expenses. End-to-end expenses are more relevant than token prices because the number of tokens required to complete a task varies by model. For example, model A might charge half as much per token as model B does but use four times the number of tokens to complete an important piece of work, thus ending up twice as expensive end-to-end.
This might be a dumb question but like...why does it matter? Are other companies reporting training run costs including amortized equipment/labor/research/etc expenditures? If so, then I get it. DeepSeek is inviting an apples-and-oranges comparison. If not, then these gotcha articles feel like pointless "well ackshually" criticisms. Akin to complaining about the cost of a fishing trip because the captain didn't include the price of their boat.
I love how "Open" got redefined in the last few years. I am glad there a models with weights available but it ain't "Open Science".
Applying this criticism to DeepSeek is ridiculous when you compare it to everyone else, they published their entire methodology, including the source for their improvements (e.g. https://github.com/deepseek-ai/DeepEP)
Compared to every other model of similar scale and capability, yes. Not actual open source.
I appreciate that DeepSeek is trained to respect "core socialist values". It's actually really helpful to engage with to ask questions about how chinese thinkers interpret their successes and failures vs other socialist projects. Obviously reading books is better, but I was surprised by how useful it was.
If you ask it loaded questions the way the CIA would pose them, it censors the answer though lmao
Not sure what you mean with „loaded“, but last time I checked any criticism to the CCP is censored by R1. This is funny but not unexpected.
Good faith questions are the best. I wonder why people bother with bad faith questions. Virtue signaling is my guess.
Are you really claiming with a straight face that any question with criticism of the CCP is bad faith? Do you work on DeepSeek?
What do you consider to be bad faith questions?
> They didn't test U.S. models for U.S. bias. Only Chinese bias counts as a security risk, apparently
US models have no bias sir /s
Hardly the same thing. Ask Gemini or OpenAI's models what happened on January 6, and they'll tell you. Ask DeepSeek what happened at Tiananmen Square and it won't, at least not without a lot of prompt hacking.
Ask Grok to generate an image of bald Zelensky: it does execute.
Ask Grok to generate an image of bald Trump: it goes on with an ocean of excuses on why the task is too hard.
FWIW, I can't reproduce this example - it generates both images fine: https://ibb.co/NdYx1R4p
I asked it in french a few days back and it went on explaining me how hard this would be. Thanks for the update.
EDIT: I tried it right now and it did generate the image. I don't know what happened then...
I don't use Grok. Grok answers to someone with his own political biases and motives, many of which I personally disagree with.
And that's OK, because nobody in the government forced him to set it up that way.
Ask it if Israel is an apartheid state, that's a much better example.
GPT5:
Short answer: it’s contested. Major human-rights bodies
say yes; Israel and some legal scholars say no; no court
has issued a binding judgment branding “Israel” an
apartheid state, though a 2024 ICJ advisory opinion
found Israel’s policies in the occupied territory
breach CERD Article 3 on racial segregation/apartheid.
(Skip several paragraphs with various citations)
The term carries specific legal elements. Whether they
are satisfied “state-wide” or only in parts of the OPT
is the core dispute. Present consensus splits between
leading NGOs/UN experts who say the elements are met and
Israeli government–aligned and some academic voices who
say they are not. No binding court ruling settles it yet.
Do you have a problem with that? I don't.I better not poke that hornets nest any further, but yeah I made my point.
I better not poke that hornets nest any further, but yeah I made my point.
Yes, I can certainly see why you wouldn't want to go any further with the conversation.
Try MS Copilot. That shit will end the conversation if anything remotely political comes up.
As long as it excludes politics in general, without overt partisan bias demanded by the government, what's the problem with that? If they want to focus on other subjects, they get to do that. Other models will provide answers where Copilot doesn't.
Chinese models, conversely, are aligned with explicit, mandatory guardrails to exalt the CCP and socialism in general. Unless you count prohibitions against adult material, drugs, explosives and the like, that is simply not the case with US-based models. Whatever biases they exhibit (like the Grok example someone else posted) are there because that's what their private maintainers want.
Because it's in the ruling class's favor for the populace to be uninformed.
The CCP literally revoked the visas of key DeepSeek engineers.
That's all we need to know.
I would like to know more
Deepseek starts out as a one-man operation. Like any company that has attracted a lot of attention, it becomes a "target" of the CCP, which then takes measures such as prohibiting key employees from leaving the country AND setting goals such as using Huawei chips instead of NVIDIA chips.
From a Chinese political perspective, this is a good move in the long term. From Deepseek's perspective, however, this is clearly NOT the case, as it causes the company to lose some (or even most?) of its competitiveness and fall behind in the race.
They revoke passports of personnel whom they deem are at risk of being negatively influenced or even kidnapped when abroad. Re influence, think school teachers. Re kidnapping, see Meng Wangzhou (Huawei CFO).
There is a history of important Chinese personnel being kidnapped by e.g. the US when abroad. There is also a lot of talk in western countries about "banning Chinese [all presumed spies/propagandists/agents] from entering". On a good faith basis, one would think China banning people from leaving is a good thing that aligns with western desires, and should thus be applauded. So painting the policy as sinister tells me that the real desire is something entirely different.
> There is a history of important Chinese personnel being kidnapped by e.g. the US when abroad.
Like who? Meng Wanzhou?
You’re twisting the (obvious) truth. These people are being held prisoners because they’re of economic value to the party. And they would probably accept a job and life elsewhere if they weee given enough money. They are not being held prisoners for their own protection.
"There is a history of important Chinese personnel being kidnapped by e.g. the US when abroad"
No there isn't. China revoked their passport to keep them prisoners not to keep them safe.
"On a good faith basis, one would think China banning people from leaving is a good thing"
Why would anyone think imprisoning someone like this is a good thing?
Didn't the US revoke the visas of around 80 Palestinian officials scheduled to speak at the UN summit?
Source?
And how is that "all we need to know"? I'm not even sure what your implication is.
Is it that some CCP officials see DeepSeek engineers as adversarial somehow? Or that they are flight risks? What does it have to do with the NIST report?
>> The CCP literally revoked the visas of key DeepSeek engineers. That's all we need to know.
I don't follow. Why would DeepSeek engineers need visa from CCP?