The actual government statement (rather than The Register's coverage): https://www.gov.uk/government/news/tech-firms-will-have-to-t...
The key phrase is "non-consensual intimate image" commonly known as "revenge porn". It seems this includes fakes as well.
Edit: full text of draft legislation https://www.gov.uk/government/collections/crime-and-policing... ; still very much in the process of being amended.
I note that publishing NCII is already an offence in Scotland, although it doesn't have this kind of liability for platforms. Primarily used against ex-partners publishing real or fake revenge images.
Regardless of how you feel about content moderation, 48 hours is a ridiculously long time given what AI can do today. That “bad” image could have been propagated around the world to millions of people in that time. It can and should be removed in minutes because AI can evaluate the “bad” image quickly and a human moderator isn’t required anymore. However, the compute costs would eat into profits…
Again, I’m not judging about content moderation, but this is an extremely weak initiative.
> It can and should be removed in minutes because AI can evaluate the “bad” image quickly and a human moderator isn’t required anymore.
CSAM can be detected through hashes or a machine-learning image classifier (with some false positives), whereas whether an image was shared nonconsensually seems like it'd often require context that is not in the image itself, possibly contacting the parties involved.
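For the hash half, the usual approach is perceptual hashing against a curated database (PhotoDNA, PDQ, and the like). A minimal sketch of the idea using the open-source imagehash library; the hash value, threshold, and file name below are made up for illustration, and note that nothing in this kind of check can tell you whether the person depicted consented:

    # Sketch of matching an upload against a list of known image hashes.
    # Real systems use curated databases and tuned thresholds; everything
    # here (hash, threshold, path) is a placeholder.
    import imagehash
    from PIL import Image

    # Hypothetical database of perceptual hashes of known prohibited images.
    KNOWN_HASHES = [imagehash.hex_to_hash("d1d1d1d1d1d1d1d1")]

    MAX_HAMMING_DISTANCE = 8  # arbitrary tolerance for re-encodes/resizes

    def matches_known_image(path: str) -> bool:
        """True if the uploaded image is perceptually close to a known hash."""
        upload_hash = imagehash.phash(Image.open(path))
        return any(upload_hash - known <= MAX_HAMMING_DISTANCE
                   for known in KNOWN_HASHES)

    if __name__ == "__main__":
        print(matches_known_image("upload.jpg"))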
I would not want to be the supervisor who has to review the CSAM positives to check for false ones.
Indeed. It seems that the process being described is some kind of one-stop portal, operated by or for OFCOM or the police, where someone can attest "this is a nonconsensual intimate image of me" (hopefully in some legally binding way!), triggering a cross-system takedown. Not all that dissimilar to DMCA.
> CSAM can be detected through hashes or a machine-learning image classifier (with some false positives), whereas
Everything can be detected "with some false positives". If you're happy with "with some false positives", why would you need any context?
The issue is that if you need to achieve 0% false negatives, you're going to get a lot of false positives.
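A quick back-of-the-envelope with invented numbers shows why: even a classifier with a 1% false positive rate floods reviewers when the base rate of genuinely bad content is tiny.

    # Toy base-rate calculation; all numbers are invented for illustration.
    uploads = 10_000_000      # images scanned per day
    base_rate = 1 / 100_000   # fraction that is actually prohibited
    tpr = 0.99                # true positive rate (chasing ~0% false negatives)
    fpr = 0.01                # false positive rate

    bad = uploads * base_rate             # 100 genuinely bad images
    good = uploads - bad
    true_positives = bad * tpr            # ~99
    false_positives = good * fpr          # ~100,000
    precision = true_positives / (true_positives + false_positives)
    print(f"{true_positives + false_positives:,.0f} flags, {precision:.2%} of them real")
    # => roughly 100,000 flags a day, of which about 0.1% are actual hits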
Another, related issue is that the takedown mechanism becomes a de facto censorship mechanism, as anyone who has dealt with DMCA takedowns and automated detectors can tell you.
Someone reports something for Special Pleading X, and you (the operator) have to ~instantly take down the thing, by law. There is never an equally efficient mechanism to push back against abuses -- there can't be, because it exposes the operator to legal risk in doing so. So you effectively have a one-sided mechanism for removal of unwanted content.
Maybe this is fine for "revenge porn", but even ignoring the slippery slope argument (which is real -- we already have these kinds of rules for copyrighted content!) it's not so easy to cleanly define "revenge porn".
The DMCA itself isn't that bad. DMCA is under penalty of perjury, so false takedowns are rare.
The problem is most takedowns are not actually DMCA; they are some other non-legal process that isn't under any legal penalty. Though if it ever happens to you, I suspect you have a good case against whoever did it, but the lawyer costs will far exceed your total gain (as in, spend $30 million or more to collect $100). Either we need enough people affected by a false non-DMCA takedown that a class action can work (you get $0.50, but at least they pay something), or we need legal reform so that all takedowns against a third party are ???
> DMCA is under penalty of perjury, so false takedowns are rare.
Maybe true with the platonic ideal "DMCA takedown letter" (though these are rarely litigated, so who really knows), but as you note, they're incredibly common with things like the automated systems that scan for music in videos (and which actually are related to DMCA takedowns), "bad words" and the like.
> The problem is most takedowns are not actually DMCA; they are some other non-legal process that isn't under any legal penalty.
It's true that most takedowns in the US aren't under DMCA, but even that once-limited process has metastasized into large, fully automated content scanning systems that proactively take down huge amounts of content without much recourse. Companies do this to avoid liability as part of safe harbor laws, or just to curry favor with powerful interests.
We're talking about US laws here, but in general, these kinds of instant-takedown laws become huge loopholes in whatever free speech provisions a country might have. The asymmetric exercise of rights essentially guarantees abuse.
I believe Google issues formal DMCA takedowns for copyright strikes, even when there is no infringement. They put the work of defending against the strike on the alleged infringer, often with little to no detail.
While outright false takedowns may be rare, using the DMCA as a mechanism to inflict pain where no copyright infringement has taken place is common enough that it happens to small-time YouTubers like myself and others I have talked to.
Regardless of how you feel about content moderation, we are talking about a situation where the government is DEMANDING that corporations implement automated, totalitarian surveillance tools. This is the key factor here.
The next step would be for the government to demand direct access to these tools. Then the government would be able to carry out holocausts against any ethnic group, only 10 times more effectively and inevitably than Hitler did.
> the government is DEMANDING that corporations implement automated, totalitarian surveillance tools
You've got this the wrong way around. These are social media sites.
People are publicly publishing revenge porn, and the government has told sites that if they are requested to take down revenge porn then they have to.
They don't have to monitor, because they are being told of its existence.
Are you conflating this specific principle with the much wider Online Safety Act? Because, while the latter has certain privacy-undermining elements to it, I'm not sure how asking social media companies to take down content has anything to do with 'surveillance'.
No, I'm not conflating them. The point is that taking content down in the way they want requires implementing totalitarian surveillance tools.
As bad as I think this law is, it isn't demanding any degree of surveillance in the sense that real human beings have their information or activity tracked. This is mandating taking down content, not surveilling anyone.
> This is mandating taking down content, not surveilling anyone.
As far as I understand, it precisely mandates monitoring EVERYONE.
They are not talking about removing a specific image from the platform based on its hash or something. They are talking about actions that involve automated analysis of all content on the platform for patterns arbitrarily specified by the government.
The technologies being discussed differ from totalitarian surveillance only by the toggling of a single flag on the platform, and from the user's perspective they are indistinguishable from such surveillance.
Every social media site is already reading, analysing, and filtering the content posted on their platforms.
Imagine dang wrote a script to delete every HN comment that contains the string "velociraptor". Under your logic, this involves surveilling every HN commenter. This is true in the pedantic sense that every comment posted to the site would be checked for "velociraptor".
But most people understand the word "surveillance" to mean more involved information collection than just deleting content that matches certain criteria.
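(For what it's worth, that hypothetical filter is a handful of lines that look only at the comment text. The moderation hooks below are made-up stand-ins, since HN obviously exposes no such API, but the point stands: nothing about the commenter gets recorded.)

    # Sketch of the hypothetical "velociraptor" filter; delete_comment is a
    # stand-in for whatever moderation hook a site actually has.
    def contains_banned_string(text: str) -> bool:
        return "velociraptor" in text.lower()

    def moderate(comments, delete_comment):
        """comments: iterable of (comment_id, text) pairs."""
        for comment_id, text in comments:
            if contains_banned_string(text):
                delete_comment(comment_id)  # removes content; stores nothing about the author

    # Example run against an in-memory stand-in:
    moderate(
        [(1, "I love velociraptors"), (2, "Great article")],
        lambda cid: print(f"deleted comment {cid}"),
    )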
Every social media site already has a system for removing porn.
These systems do not imply the tools of totalitarian surveillance.
In contrast, the proposed one would have to be able to classify all content on the platform at any arbitrary moment in time against an arbitrary filter specified after the fact. That is literally a mechanism of totalitarian surveillance.
Ofcom's threats to demand the handing over of revenue mean nothing. Offenders outside the UK just laugh at and ridicule the demands, or block UK traffic, which UK users can bypass with a VPN. Ofcom has no power to enforce the demands.
You'd have to essentially police all VPN use beyond China levels to catch the worst offenders here.
I think the UK is capable of suing a US-based publicly traded corporation. It's not like they're hiding.
(keyword appears to be Recognition Acts: https://www.uniformlaws.org/committees/community-home?Commun... )
They can sue a US company in a US court under US law, but won't get far in a US court trying to enforce UK law.
Preston Byrne is a good follow on this: https://x.com/prestonjbyrne
Hence their threat to block the platform in the UK
Yes, and that is all of their leverage on US companies. They don't need US courts or any US cooperation for that. 4chan doesn't care, so they don't care about the £20,000 fine that Ofcom imposed on them. We will likely find out how much X cares soon.
In the meantime, Zoom is refusing to help police track down paedophiles who use Zoom meetings to broadcast live sexual abuse of children.
Sorry, building surveillance and control infrastructure isn't intended to "help the children," that's just a confusing part of the name. Our mistake whoops.
I'm not a fan of forcing sites to police themselves. As far as I'm concerned, that's what the police are for. Simply removing such an image does not necessarily mean that the uploader is being held accountable. This also only affects the lowest-IQ predators while creating an additional barrier to entry (regulatory capture) for anyone who wants to run their own "social media site".
Can we also get a legal definition of "social media"? Is that really just as simple as "services which allow multi-directional communication"? Hate to break it to them, but the internet proper is, itself, a service which allows multi-directional communication. No matter how many walled gardens are created, the 1s and 0s will continue to flow unimpeded.
> The amendment follows outrage over the Elon Musk-owned chatbot Grok's willingness to generate nude or sexualized images of people, mainly women and girls, which forced a climbdown earlier this year.
Laws are reactive. When abuses of the system happen, lawmakers need to find ways to minimize the damage. This is one of the reasons Google used to follow the "don't be evil" doctrine; it was a smart way to minimize regulation. The new big tech has thrown any appearance of morality out the window, and that creates a strong need to regulate their actions.
Reactivity is not a virtue. If your existing laws are already not being followed, then adding more is a folly that merely possesses the illusion of effectiveness.
They're both underdefining what "intimate images" means and using the term "images" instead of "photos". So they want this to apply to everything that can be represented visually, even if it has nothing to do with anything that happened in reality, which means they don't care about actual harms. The way they're using the word "harm" seems to be more in line with the word "offend". So now in the UK, if there is an offensive image (like a painting) posted on a website (or other internet protocol), it is going to be "treated with the same severity as child sexual abuse and terrorism content". That's wild. And dangerous. This policy will do far more damage than any painting or other non-photo image would.
I had to google a bit, but this Guardian article [1] goes into a lot more detail than the Register piece here. When I first read the Register piece I thought this sounded too onerous and ill-defined, especially with censorship on the rise in Europe recently, but the Guardian piece made me side more with this particular policy. It doesn't sound as broad as the Register piece puts it: it sounds like it's specifically about revenge porn and non-consensually generated deepfake porn, not any "intimate image", which I agree would be far too broad. Of all governments, I'd suspect the current UK government is among the most likely to one day expand these powers to cover speech they don't like, or general pornography, etc., but according to the Guardian article this specific policy isn't that broad yet. The Register piece is using "intimate image" as loose shorthand, I think, whereas the intent of the policy is a bit more defined and specific.
[1] https://www.theguardian.com/society/2026/feb/18/tech-firms-m...
I agree that laws such as this need to be defined very carefully, but I think "images" is the appropriate term to use, rather than "photos". LLMs make it near trivially easy to render a photo in countless styles, after all, such as paintings or sketches.
If I produced inappropriate images that are identifiable as a specific child victim, who obviously cannot consent to having inappropriate images generated of their likeness, then I believe "images" and "photos" are a distinction without a difference.
Blame xAI. It has to be worded in this way to capture the behavior they allowed to persist.
[flagged]
How is blaming X's AI product victim blaming? It was allowing its users to generate CSA images, and X's first response(s) to the problem didn't appear to take the issue seriously.
lol, in fact, lmao
What bit of "intimate images shared without a victim's consent" is lacking context in the article?
What qualifies as an "intimate image"? A photo of someone in a swimsuit at the beach?
Fictional content is also covered by this law. How do we determine what fictional content counts as an intimate image of a real person? What if the creator of an AI image adds a birthmark that the real life subject doesn't have, is that sufficient differentiation to no longer count as an intimate image of a real person? What if they change the subject's eye color, too?
If you envision yourself as a potential victim of such content, I think the answers to these questions all become pretty obvious. A swimsuit photo might or might not be intimate, depending on what kind of swimsuit it is and the context in which the person posting the photo is presenting it. A birthmark you don't have or a different eye color obviously do not make a fictionalized image become "not you" because they would not reduce the violation you'd feel.
> A birthmark you don't have or a different eye color obviously do not make a fictionalized image become "not you" because they would not reduce the violation you'd feel.
What if someone claims to feel violated by an image of a person that looks totally different: different skin color, different build, different facial structure, etc?
I hate to appear to defend this, but generative AI has sort of collapsed the distinction between a photo and an image. I could generate an image from a photo which told the same story, then delete the photo, and now everything is peachy fine? So that could have been a motivation for "images".
Though I wonder whether existing frameworks around slander and libel could be made to address the brave new world of AI-augmented abuse.
Libel is both incredibly expensive (£100k average per case, not eligible for legal aid) and unreliable (can be used to silence true information).
> The UK is bracketing "intimate images shared without a victim's consent" along with terror and child sexual abuse material, and demanding that online platforms remove them within two days.
Bare in mind, this would have been used to stop the Epstein images of the former Prince Andrew from being viewed [1].
> Platforms that do not do so would potentially face fines of 10 percent of "qualifying worldwide income" or have their services blocked in the UK.
Why on earth would it be 10% of their world wide income and not their UK-based income? These politicians really think they have more power than they really do.
> The amendment follows outrage over the Elon Musk-owned chatbot Grok's willingness to generate nude or sexualized images of people, mainly women and girls, which forced a climbdown earlier this year.
The AI didn't just randomly generate NSFW content, it did it at the request of the user. Remember, there was no interest in removing the CP content from Twitter prior to Musk buying it, and then they all moved to Mastodon / BlueSky where they now share that content.
> The government said: "Plans are currently being considered by Ofcom for these kinds of images to be treated with the same severity as child sexual abuse and terrorism content, digitally marking them so that any time someone tries to repost them, they will be automatically taken down."
Ofcom simply doesn't have this kind of power. 4chan are showing as much [2]. This is simply massive overreach by the UK government, and I would advise tech giants to stop complying.
> Why on earth would it be 10% of their world wide income and not their UK-based income? These politicians really think they have more power than they really do.
Because fines are not a tax; they exist as a punishment for bad behaviour, so the question should really be "why not 100% of worldwide income?" (Or 200%, because a year of income is arbitrary; or "why not enough to bankrupt the company?", because profit, income, and bank balance are all different things, etc.)
The reason for only 10% is to signal "we're still playing cooperate", in game-theory terms, with a baked-in rough (and rounded) guess of what big tech's typical income distribution is. Using worldwide rather than UK income avoids shenanigans during court cases where, e.g., someone tries to argue with creative accounting that their UK revenue is less than it really is.
> Why on earth would it be 10% of their world wide income and not their UK-based income? These politicians really think they have more power than they really do.
Regardless of whether the actions of the British state are correct, I do not think it is a good position that a foreign tech company is more powerful than the British state.
This is basically the same argument for the US forced divestiture of TikTok.
The British press has strict libel laws, and stories that the American press publishes are absolutely not legal in Britain, yet the British state doesn't threaten the New York Times or Washington Post with reparations (unless I'm missing something.)
If this is the end goal, then they should do the same thing China does. Make back doors mandatory on all devices and ban any sensitive foreign platforms at the network level. If anyone is using VPNs, Tor, or whatever, the UK police can flag those individuals and investigate what they are doing. At minimum, they can push the UK ad revenue of Google, X, Meta, etc. close to $0, which will disincentivize those platforms from having users there.
There is also a future here where the UK will not be able to monitor or see what its users are doing. SpaceX is already breaking foreign sovereignty with Starlink usage in Iran. If the UK or the rest of Europe fails to really crack down at the scale China did, they may completely lose control of what is distributed within their borders. A combination of satellites and mesh networks could be much harder to monitor than the current telecom infrastructure.
The current approach is going to get the UK pressured at the nation state level by the US. In that case the UK isn't answering to some foreign tech company but whatever party is in power in the US at the time.
> If this is the end goal, then they should do the same thing China does.
This seems like an extreme overreaction; why can't the US platforms just stop profiting from revenge porn?
> I do not think it is a good position that a foreign tech company is more powerful than the British state.
Britain is a weak state. There will always be foreign companies more powerful than it is. The only way to change that would be for the British state to become extremely powerful.
Does it bother you if a British company is more powerful than the state of Turkmenistan?
> Bare in mind, this would have been used to stop the Epstein images of the former Prince Andrew from being viewed
(Bear, although your typo is awkwardly relevant...)
Would redacted images, and those that do not identify the victim, actually count?
> Why on earth would it be 10% of their world wide income and not their UK-based income? These politicians really think they have more power than they really do.
I mean, when it comes down to a fine or blocking access altogether, surely they can ask for whatever they want? They could've made it "one bajillion dollars" if they wanted. Actually collecting the fine is a whole other matter.
> Mastodon / BlueSky where they now share that content.
I regularly check Bluesky and occasionally check Mastodon, and I've never seen even 'tame' porn on either. I have absolutely seen porn on X, though.
This is just another tactic to bring censorship and intimidation against free speech.
I wonder how 'abusive' is categorized. By some committee maybe?
See my comment below. The Guardian piece states the images are flagged to the government by the victim, hopefully only once, to reduce how onerous this is.
So any flagged item must be removed by law?