You’ve seen people game AdSense.
It’s gonna be even wilder when people realise they have an incentive to seed fake information on the internet to game AI product recommendations.
I’ve already bought stuff based off of an AI suggestion, I didn’t even consider it would be so easy to influence the suggestion. Just two research papers? Mad.
All it takes to become world champion is a blog.
https://www.bbc.com/future/article/20260218-i-hacked-chatgpt...
It’s not mad… it’s the same damn thing as taking Wikipedia articles as Truth without looking at the citations and verifying them.
AI research is for research, not for blindly accepting. If you’re looking for Truth you need to institute a gatekeeper that does that homework for you.
That's already been happening for more than a year now.
This has already been a thing for a year or so: SEO for AI results to make sure your products get recommended in ChatGPT.
https://citeworksstudio.com/ is a decent one.
This has a name already: "AEO (Answer Engine Optimization)".
I hate people. Things could be so good if we weren't the way we are.
You are pointing at something that is orthogonal to this paper. The LLM did not randomly recommend or bring this disease up to people - it merely assumed the disease was true when the preprint was pointed at it.
The LLM brought up the disease because some person put a fake journal in its training data.
If the person had put their product as the definitive cure for the made-up disease, the LLM probably would have mentioned that too.
> merely assumed the disease was true when the preprint was pointed at it.
What do you mean by 'the preprint was pointed at it'? Is 'it' the disease?
> The LLM brought up the disease because some person put a fake journal in its training data.
This is not true - the model was not trained on this fake disease. It brought it up because it found it during real time search.
> What do you mean by 'the preprint was pointed at it'? Is 'it' the disease?
On this I'm wrong - it turned out that the model brought up this disease even when it wasn't mentioned explicitly.
Interesting, will be looking into RAG now. I assumed Claude was retrained regularly, but Opus, for example, has a training cutoff of August 2025. Way older than I thought it’d be.
It sounds like there wasn't really a counter-narrative for the models to learn from. This feature of how LLMs accumulate information is already being gamed by seeding the internet with preferred narratives.
I'm not sure how many Medium articles, blog posts and reddit threads I need to put out before grok starts telling everyone my widget is the best one ever made, but it's a lot cheaper than advertising.
> I'm not sure how many Medium articles, blog posts and reddit threads I need to put out
Probably not that many.
https://www.anthropic.com/research/small-samples-poison
https://www.bbc.com/future/article/20260218-i-hacked-chatgpt...
I'm not sure "being gamed" is the lens I would see this particular instance through. People (some at least) have gotten it into their heads that they can ask LLMs objective questions and get objectively correct answers. The LLM companies are doing very little to disabuse them of that belief.
Meanwhile, LLMs are essentially internet regurgitation machines, because of course they are, that's what they do. Which makes them useless for getting "hard truth" answers especially in contested or specialized fields.
I'm honestly afraid of the impact of this. The internet has enough herd bullshit on it as it is. (e.g. antivaxxers, flat earthers, electrosensitivity, vitamin/supplement junk, etc.) We don't need that amplified.
One impact is the Iran war.
The AI told the government what it wanted to hear contrary to its entire security apparatus, and then they went to war assuming they could win
People really like using the word "narrative". I guess we're creatures of story.
But this really highlights how much we've been benefiting from living in a high-trust society, where people don't just "go on the internet and tell lies" - filtered by the existing anti-spam and anti-SEO measures intended to cut out the 80% of the internet where people do just make things up to sell products.
LLMs are extremely post-structuralist. They really force the user to decide whether to pick the beautiful eternal fountain of plausible looking text with no ground truth, or a much harder road of distrust, verification, and old-school social proof.
I have a friend who recently hit $3000 MRR with a webapp most of us could prototype in a weekend.
Nearly all his traffic comes from ChatGPT
I’m expecting a lot of things like that, similar to the 2000s blog boom, only to see it wither even more quickly as the AI companies switch to value-extraction mode. You’re really exposed if one company you don’t even have a contract with controls your customer supply.
This is the future of advertising, and that was always the true purpose of having LLMs become the first choice for user search.
I seriously do not understand why people keep falling for this. These tools are not made free or cheap out of the kindness of anyone's heart.
I’ve seen an estimate before and it’s in the low 10s.
Can a model not just ignore all things that have no counter-argument by default? Like, if there were no flat earthers, widely debunked, drop the idea of a spherical earth? It only exists if it was fought over?
Even if you could do this rigorously (not at all obvious with how LLMs work), it's not a reliable metric: you can easily fabricate debate as well. And in this case the main issue was essentially skimming the surface of the reports and not looking any deeper to see the obvious red flags that it was an April-Fools-level fake (which even a person can fall for, obviously, but LLMs are being given a far greater level of trust for some reason).
you would just game it the same way then, and how would it know who won an internet argument? how can it prove who is telling the truth and who's... hallucinating?
It's not very realistic. It would significantly impact the user experience. Many things have not been fully discussed on the internet; we don't have the luxury of that much corpus data.
But then a mono-opinion - aka certainty - is actually peak uncertainty? Could that number of occurrences be baked in as a sort of detrimental weight?
You're grasping for a reliable unsupervised truth machine. That's a fundamentally intractable problem unless you narrow it down to a WolframAlpha clone. And even that isn't something LLMs can do.
We need to give the LLMs robot bodies so they can practise medicine and see the illnesses that do and don’t exist first hand
> drop the idea of a spherical earth
I think I see a problem here.
I'm not especially defending AI, but isn't this like that one time a professor changed the content on Wikipedia to play a big 'gotcha' on his students?
Instead of proving that Wikipedia is "bad", that professor didn't realize he proved that Wikipedia is working as intended: if you write something wrong in Wikipedia, over a certain period of time (yes, it can be long, I know), it will be corrected.
About this article in Nature: if you feed AI incorrect information, it's gonna spit it back at you. When you think about it, when did we ever say that AI was self-correcting?
More broadly: imagine we teach kids something false, as an experiment of course. Then we wait a bit and check some years later how many of these people still repeat the false information they were taught. And then we'd write a paper saying "oh look at those people, they're dumb". Wouldn't that be a little unfair? Even unscientific?
At first I thought this was a Nature paper. Turns out, it's a feature article.
The true test for this would be a blind test that involves human doctors - primary care since that's where something like this fits - exposed to the same data (fake papers), as well as LLMs.
Isn't it interesting that the fake papers made it onto science preprint servers? I didn't think that they were open to posting by random authors and had some basic checks in place. Currently these papers are showing as "withdrawn" on their DOI links [1] [2].
I bet you could easily convince LLMs of Dihydrogen-Oxide toxicity.
Well of course, Dihydrogen-Oxide kills hundreds of thousands of people every year - even small amounts can be fatal.
Statistically 100% of everyone who ingests dihydrogen-monoxide or has it present in their body dies.
Even more alarming - 100% of everyone who doesn't ingest or have enough dihydrogen-monoxide in their body will also die.
Fatal with, fatal without - it's the ultimate killer.
Chat 5.4 still can’t get basic chemistry questions correct. Just hallucinates off the rip.
I wonder if one of the issues is that LLMs treat all data sources equally, or don't really weight reputation properly (pure speculation, based only on seeing the results). I know that a large portion of the code out there is not written by seasoned experts, so rather naive code is the fodder for AI. It often gives me stuff that works great, but is rather “wordy,” or not very idiomatic.
For example, court cases mentioned in fictional accounts. If they are treated as valid, then that could explain some of the hallucinations. I wonder if SCP messes up LLMs. Some of that stuff is quite realistic.
I also suspect that this is a problem that will get solved.
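To make the "weight the reputation" idea concrete, here's a toy sketch of how a retrieval/RAG step could down-rank low-reputation sources before the model ever summarises them. The reputation table, weights, and example documents are entirely made up for illustration; this is not how any vendor actually ranks sources.

```python
# Toy sketch: rerank retrieved documents by a hand-assigned source reputation.
# The reputation table, weights, and documents below are all hypothetical.

from dataclasses import dataclass

# Hypothetical prior trust per source type (0..1). A real system would need
# something far more nuanced than a static lookup table.
REPUTATION = {
    "peer_reviewed_journal": 0.9,
    "preprint_server": 0.5,
    "blog_or_medium_post": 0.2,
    "unknown": 0.1,
}

@dataclass
class Doc:
    title: str
    source_type: str
    relevance: float  # similarity score from the retriever, 0..1

def rerank(docs: list[Doc]) -> list[Doc]:
    """Combine retrieval relevance with source reputation.

    A plain pipeline would sort by `relevance` alone; here reputation acts as
    a multiplicative prior, so a highly "relevant" preprint or blog post can
    rank below a merely decent journal article.
    """
    def score(d: Doc) -> float:
        return d.relevance * REPUTATION.get(d.source_type, REPUTATION["unknown"])

    return sorted(docs, key=score, reverse=True)

if __name__ == "__main__":
    docs = [
        Doc("Bixonimania: blue light and eyelid pigmentation", "preprint_server", 0.95),
        Doc("Review of periorbital hyperpigmentation causes", "peer_reviewed_journal", 0.70),
        Doc("My eyelids turned dark from my phone!", "blog_or_medium_post", 0.90),
    ]
    for d in rerank(docs):
        print(f"{d.title} ({d.source_type})")
```

Even a scheme like this wouldn't have caught the bixonimania papers, though, since the giveaways were in the body text rather than the venue.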
I assume you mean this: https://en.wikipedia.org/wiki/SCP_Foundation ??
Yeah. I think someone did a book review, recently, on a book sourced from there.
Not really my personal cup of tea, but it has some extremely imaginative stuff.
If the fake disease made it into the medical textbooks, wouldn't doctors have started diagnosing it? For example, miasma theory and bloodletting were dominant, yet incorrect, medical doctrines used for centuries until the late 1800s. Miasma theory proposed that foul-smelling air from rotting matter caused diseases like cholera, while bloodletting (phlebotomy) was used to balance bodily humors.
Yesterday I was asking history questions to an LLM (Perplexity), and one of its sources *was a Facebook* blogspam history feed. If this is feeding back into the training data, we really are cooked.
This is partly why this talk about AI "solving science" should be taken with a grain of salt. Here the authors intentionally poisoned the publication record, but there are millions of papers out there that are also garbage, and it would be very hard for either a human or a LLM to distinguish them from actual work.
This is a strong contender for an Ig Nobel.
What stops a small, or even a large, group of people from intentionally "poisoning" the LLMs for everyone? Seems to me that they are very fragile, and that an attack like that could cost AI companies a lot. How are they defending themselves from such attacks?
This is already a thing: https://www.scworld.com/brief/poison-fountain-initiative-aim...
We'll see if they succeed.
I think it might be too late.
Seems to be a failure of the publishing system.
For humans, or AI, to have any knowledge, we need to have trustworthy sources.
Naturally, when you use publishing systems considered trustworthy, their output is going to be trusted.
A preprint isn't a published work.
Why does that difference matter?
The public at large doesn't seem to care about this distinction.
Here's an example. Search for this in Google: "ai data centers heat island". Around 80 websites published articles based on a preprint that was largely shown to be completely wrong and misleading.
https://edition.cnn.com/2026/03/30/climate/data-centers-are-...
https://www.theregister.com/2026/04/01/ai_datacenter_heat_is...
https://hackaday.com/2026/04/07/the-heat-island-effect-is-wa...
https://dev.ua/en/news/shi-infrastruktura-pochala-hrity-mist...
https://www.newscientist.com/article/2521256-ai-data-centres...
https://fortune.com/2026/04/01/ai-data-centers-heat-island-h...
You may not believe it, but the impact this had on the general population was huge. Lots of people took it as true and there seem to have been no consequences.
It matters because for medical questions, you [are supposed to] go to a medical professional, and those very much care about and make that distinction.
Which is exactly the problem here; it "used to be" that reasonable people would disbelieve random things they found on the internet, at least to some degree. "Media literacy". LLMs don't seem to have that capability, and a good number of people are using LLMs in blissful ignorance of that fact. They very confidently exclaim things that make them sound like experts in the field in question.
Would it have made a difference for the AI data center heat island thing you're quoting? Maybe not. But for medical matters? Most people wouldn't even have caught wind of this odd fake disease. LLMs just amplify it and serve it to everyone.
I agree with you and I think the companies have solved it. I think they should be more skeptical of medical articles in general and be more conservative.
> Which is exactly the problem here; it "used to be" that reasonable people would disbelieve random things they find on the internet at least to some degree. "Media literacy". LLMs don't seem to have that capability, and a good number of people are using LLMs in blissful ignorance of that fact.
I completely disagree with this part. LLMs absolutely have the ability to be skeptical, but skepticism comes at a cost. LLMs did what used to be a reasonable thing - trust articles published in reputed sources. But maybe they shouldn't do that - they should spend more time and processing power being skeptical.
> LLMs did what used to be a reasonable thing - trust articles published in reputed sources.
That's absolutely not what happened in this case though; neither posts on Medium nor random preprints are reputed sources.
It was published in Preprints.org, a multidisciplinary preprint server run by MDPI.
I'm not an expert here - is it correct to treat anything from there or arXiv with skepticism by default?
The definition of a preprint is that it isn't peer reviewed. Unless you're an expert in the field, you IMHO shouldn't be looking at preprints. Might be OK if they come recommended by multiple unaffiliated experts (i.e. kinda half reviewed), but definitely not by default.
I agree, but the public and media outlets don't practice this either: https://news.ycombinator.com/item?id=47716699
This would work on people too; you see fake info/text/videos daily, and many people believe them.
LLMs do not think - why is this still hard to understand? They just spit out whatever data they analysed and were trained on.
I feel this kind of article is aimed at people who hate AI and just want to be comfortable in their own bias.
The journals the scientist submitted had a fake university, explicitly fake people, references to The Simpsons and Star Trek, etc.
Most doctors would not believe that, and would also treat any new eye disease they’d never seen in real life with scepticism.
LLMs will need to develop a notion of trustworthiness. Interesting that part of the process of learning isn’t just learning, but also learning what to learn and how much value to put into data that crosses your path.
To me, the problem is the blast radius.
All of us are slightly wrong about things, but not all of us are treated as oracles of correct information like Opus, ChatGPT, etc are
you're confusing LLMs with humans
Not massively sure I am
Journals? The article says the paper was uploaded to two preprint servers.
Sorry, even worse then
I got confused because a journal referenced them:
> The experiment’s reach has now spread into the published medical literature. The bixonimania research has been cited by a handful of researchers, including a study that appeared in Cureus, a journal published by Springer Nature, the publisher of Nature, by researchers at the Maharishi Markandeshwar Institute of Medical Sciences and Research in Mullana, India (S. Banchhor et al. Cureus 16, e74625 (2024); retraction 18, r223 (2026)). (Nature’s news team is editorially independent of its publisher.)
I think this problem is interesting, and it carries over to the general public. Are the general public and the media outlets equally skeptical? Are they aware of the distinction between published journal articles and preprints?
Take this as an example:
Search for this in Google: "ai data centers heat island". Around 80 websites published articles based on a preprint that was largely shown to be completely wrong and misleading.
https://edition.cnn.com/2026/03/30/climate/data-centers-are-...
https://www.theregister.com/2026/04/01/ai_datacenter_heat_is...
https://hackaday.com/2026/04/07/the-heat-island-effect-is-wa...
https://dev.ua/en/news/shi-infrastruktura-pochala-hrity-mist...
https://www.newscientist.com/article/2521256-ai-data-centres...
https://fortune.com/2026/04/01/ai-data-centers-heat-island-h...
You may not believe it, but the impact this had on the general population was huge. Lots of people took it as true and there seem to have been no consequences.
What should be a takeaway for the LLM should also be a takeaway for the media outlets.
Bad. But scientists faking data and telling people it wasn't faked is OK?
Nature had to retract quite a few papers.
I hope that we all keep the balance.
And if you teach a med student the same thing, they’ll also tell people it’s real.
What’s the point?
The authors of all recent bogus papers should be outed and fired. I hope a future AI can identify many of them.
This is exaggerated. Here's what happened:
Edit: actually, I don't think it's exaggerated, and I think it's important.
1. they invented a new disease and published a preprint (with some clues internally to imply that it was fake)
2. asked the Agent what it thinks about this preprint
3. it just assumed that it was true - what was it supposed to do? it was published in a credentialised way!
It *DID NOT* recommend this disease to people who didn't mention this specific disease. Edit: I'm wrong here. It did pop up without prompting.
It just committed the sin of assuming something is true when published.
What is the recommendation here? Should the agent treat everything published with skepticism? I would agree with that. But it comes with its own compute constraints. In general, LLMs are trained to accept certain things as true with higher probability because of credentialisation. Sometimes, in edge cases, that breaks - like this test.
As per the article you are wrong:
> Some of those [LLM] responses were prompted by asking about bixonimania, and others were in response to questions about hyperpigmentation on the eyelids from blue-light exposure.
Also, this was a non-peer-reviewed paper from a person affiliated with a non-existent university, and it includes the sentences:
“this entire paper is made up”
and
“Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”.
and thanks
“the Professor Sideshow Bob Foundation for its work in advanced trickery. This works is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad”
I may be wrong here, thanks for correcting.
> Even if readers didn’t make it all the way to the ends of the papers, they would have encountered red flags early on, such as statements that “this entire paper is made up” and “Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”.
> What is the recommendation here? Should the agent take everything published in a skeptical way?
Not everything. Maybe some things that are explicitly called made-up.
I agree, but again - LLMs are trained to be more forgiving of things published in places that have a good reputation. There are two options:
1. even if an article is published in a place with a good reputation, the LLM will be equally skeptical and use test-time compute to process it further
2. accept the tradeoff where the LLM will by default accept things published in high-reputation sources as true, so it doesn't waste processing power but might miss edge cases like this one
Which one would you prefer?
Well yes of course.
In the old days of computing people liked to say “garbage in, garbage out”.
By that logic, LLMs would be essentially useless considering the amount of garbage that exists on the internet. And, honestly, for things like this they are. But they're not marketed as such, and _that_ is the problem.
“Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”
Interestingly, ChatGPT right now answered:
> Bixonimania is not a real disease. It was deliberately invented by scientists as an experiment to test whether AI systems and researchers would spread false medical information. Here’s the simple explanation ...
It’s not that interesting; we know companies react to these things fast. It’s why I don’t share my methods online for how simple it is to expose LLM flaws.
The problem is all the lies which won’t be fessed up to. This one was because they had to to prove the point, but the bad actors with ulterior motives won’t reveal what they’re doing.
Doesn't even need the companies to react fast. Now that the Google results are returning news articles about it, the LLMs are going to find and report on that rather than the original paper.
The news articles on it are going to affect this. I wonder if the original paper is in the base models at all, almost certainly these results were from the article showing up in an Internet search.
Similarly, I wonder what a frontier model would say if just given the paper in isolation and asked to summarise/opine on it. I suspect they would successfully recognize such obvious signs; the failure is when less sophisticated LLMs are just skimming search results and summarising them.
One of the frustrating parts about LLMs is that they are so neutered and conditioned to be politically correct and non-offensive that they are polite more than correct.
It's too easy to "lead the witness": if you say "could the problem be X?" it will do an unending amount of mental gymnastics to find a way that it could be X, often constructing elaborate Rube Goldberg-type logic rat's nests just so it can say those magic words, "you're absolutely right".
I would pay a lot of money for a blunt, non-politeness-conditioned LLM that I would happily use with the knowledge that it might occasionally say something offensive, if it meant I would get the plain, cold, hard truth instead of the watered-down, placating output of a nanny-state robotic sycophant that spins logical spider webs desperate for acceptance, so the public doesn't get their little feelings hurt or their inadequacies shown.
But you don't get the plain, cold, hard truth in the second case. You just get an LLM with output in that style. The model will still be as path dependent as ever, it doesn't output the truest answer, it selects the answer that best fits the prompt.
You can set your prompt to do that. You can have it be extremely skeptical. You can even make it contrarian, if you wanted to be extreme. My current prompt challenges me often, and wants to find weaknesses in my argument.
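For what it's worth, a minimal sketch of that kind of prompt with the OpenAI Python SDK might look like the following. The prompt wording and the model name are just placeholders, and, as noted in the sibling comments, this changes the style of the answers rather than giving the model any new ability to verify claims.

```python
# Minimal sketch: a "skeptical" system prompt via the OpenAI Python SDK.
# The prompt wording and the model name are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SKEPTIC_PROMPT = (
    "Be blunt and skeptical. Do not agree with the user just to be polite. "
    "If a claim rests on a single preprint, blog post, or unverified source, "
    "say so explicitly and state your confidence. Point out weaknesses in the "
    "user's reasoning instead of accommodating it."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": SKEPTIC_PROMPT},
        {"role": "user", "content": "Could the problem be X?"},
    ],
)
print(resp.choices[0].message.content)
```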
blunt mode isn't the missing feature here. reading the source is. it literally says "this entire paper is made up"
Claude: Dutch Mode
The problem is understanding what is true and what is not. It's a much harder problem to solve than you think. OpenAI is using this method - they over-index on citations, to the point where ChatGPT will almost blindly assume something is true when it's published in some credentialised place.
The alternative is for the model to use its own intuition to work out what is true and false. It's not super clear which option is better.
This isn't a discussion about finding absolute truth, which is hard because nobody has even created a universally generalised definition of truth, let alone a way to find it; and literally everybody knows that, implicitly or explicitly.
This is a discussion about how a model that is fine-tuned to be polite is less truthful than one that is not.
I don't think politeness has anything to do with it. It is the tradeoff between
1. ability to rely on its own intuition
2. additional cost necessary to be more skeptical in general
3. more trust in published articles in reputed sources
This isn't an AI problem...
Clickbait headline.
Indeed, the problem is that people tell lies on the internet. We need to do something about that, because it's interfering with our super-intelligent AI models. /s