Charlie Stross's blog is next.
Liability is unlimited and there's no provision in law for being a single person or small group of volunteers. You'll be held to the same standards as a behemoth with full-time lawyers (the stated target of the law, but the least likely to be affected by it).
http://www.antipope.org/charlie/blog-static/2024/12/storm-cl...
The entire law is weaponised unintended consequences.
> the stated target of the law but the least likely to be affected by it
The least likely to be negatively affected. This will absolutely be good for them in that it just adds another item to the list of things that prevents new entrants from competing with them.
There has been new information since that blog post which has reaffirmed the "this is much ado about nothing" takes because Ofcom have said that they do not want to be a burden on smaller sites.
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
"We’ve heard concerns from some smaller services that the new rules will be too burdensome for them. Some of them believe they don’t have the resources to dedicate to assessing risk on their platforms, and to making sure they have measures in place to help them comply with the rules. As a result, some smaller services feel they might need to shut down completely.
So, we wanted to reassure those smaller services that this is unlikely to be the case."
Nothing more reassuring than a vague “we’re unlikely to go after you [if you stay on our good side.]”
It’s clear the UK wants big monopolistic tech platforms to fully dominate their local market so they only have a few throats to choke when trying to control the narrative…just like “the good old days” of centralized media.
I wouldn’t stand in the way of authoritarians if I valued my freedom (or the ability to keep a bank account).
The risk just isn't worth it. You write a blog post that rubs someone power-adjacent the wrong way and suddenly you're getting the classic "...nice little blog you have there...would be a shame to find something that could be interpreted as violating 1 of our 17 problem areas..."
For my friends, everything; for my enemies, the law.
Uneven enforcement is the goal.
Changing the code of practice is a years-long statutory consultation process; they're not going to be able to change the rules to go after you on a whim.
Sovereign is he who makes the exception.
> So, we wanted to reassure those smaller services that this is unlikely to be the case
This is the flimsiest paper thin reassurance. They've built a gun with which they can destroy the lives of individuals hosting user generated content, but they've said they're unlikely to use it.
"... unlikely ..."
Political winds shift, and if someone is saying something the new government doesn't like, the legislation is there to utterly ruin someone's life.
You can try the digital toolkit and see for yourself if this is a realistic pathway for a small site (such as a blog with a comment function). Personally, I find it puzzling that Ofcom thinks what they provide is helpful to small sites. Furthermore, they make it pretty clear that they see no reason for a purely size-based exemption (“we also know that harm can exist on the smallest as well as the largest services”). They do not explore ways to reach their goals without ongoing collaboration from small site owners, either.
Ofcom need to change the law then.
Unless Ofcom actively say "we will NOT enforce the Online Safety Act against small blogs", the chilling effect is still there. Ofcom need to own this. Either they enforce the bad law, or loudly reject their masters' bidding. None of this "oh, I don't want to, but I've had to prosecute this crippled blind orphan support forum because one of them insulted Islam, but my hands are tied..."
The Canadian government did the same thing when they accidentally outlawed certain shotguns by restricting bore diameter without specifying it was for rifles.
A minister tweeted that it didn’t apply to shotguns, as if that’s legally binding as opposed to, you know, the law as written.
The Democrats wrote a bill to hire 60k new armed IRS agents and promised they wouldn't be used to go after anyone with an income less than $250k. Senator Mike Crapo tried to add an amendment to put that in the bill, but they blocked it. We have a serious problem with politicians lying about the text of bills.
While I certainly would prefer that the IRS first and foremost go after tax evasion perpetrated by the wealthy (if for no other reason than there's likely more bang for the buck there), tax law is tax law. If someone making less than $250k/yr is evading paying taxes, the IRS should go after them just the same as if it was someone making $5M/yr.
Usually people complain that the IRS doesn't go after >$250k. I've never heard anyone argue that they don't go after <$250k enough. This is why the Democrats promised it would only be used to go after >$250k.
The problem is the dishonesty, saying the intent is one thing but being unwilling to codify the stated intent.
In order to go after anyone (at whatever arbitrary income level we choose), it needs to be economically feasible. It is simple math and should be explained in simple math terms: it costs on average X to "go after someone"; if the potential benefit, which of course depends on earnings, exceeds that amount, then we do it. Otherwise it makes no sense. Except we make this a political issue (as with everything else). Any sane person running the IRS would do the math and figure out the threshold above which it makes sense to go after someone.
The use of "unlikely" just screams that Ofcom will eventually pull a Vader..."We are altering the deal, pray we don't alter it any further".
"Unlikely," I suppose if you don't have any significant assets to be seized and don't care about ending up in prison, you may be willing to take the chance.
> unintended consequences
Intended consequences no doubt.
This is an honest question. Why does a blog need to shut down? If they moderate every comment before it is published on the website, what's the problem? I ask because I've got a UK-based blog too. It has a comments feature. Wouldn't enabling moderation for all comments be enough?
No, you still need to do things like write an impact assessment etc., and you're still on the hook for "illegal" comments, where you aren't a judge but have to arbitrarily decide what might be illegal despite having no legal expertise whatsoever.
If I'm moderating all comments before they're published on the website, what's the problem? I mean, I've got a simple tech blog. I'm not going to publish random drive-by comments. Only comments that relate to my blog are ever going to be published. Am I making sense?
Does anyone in your blog comments ever discuss circumvention of DRM?
That's a criminal offence in the UK (two year prison sentence in some circumstances). Do you have a good feeling for what might count as incitement in those circumstances?
> Does anyone in your blog comments ever discuss circumvention of DRM?
No, they don't. My blog is not all that popular. It has got some programming puzzles, Linux HOW-TOs and stuff. Most of my audience is just my friends.
The point is, whether you think the comments are safe is irrelevant because you are not the judge.
Of course that is the cynical version of it. But as others have pointed out, some people don't like this sort of risk.
It's weird that you completely ignored his answer and then asked the same question again. Refer to the post you responded to for an answer.
Ah! Sorry. I did miss that they said "you still need to do things like write an impact assessment".
So what's the best course of action? Remove comments feature entirely? Maybe that's what I should do. I wonder what everyone else's doing.
What standards would you want individuals or small groups to be held to? In a context where it is illegal for a company to allow hate speech or CSAM on their website, should individuals be allowed to? Or do you just mean the punishment should be less?
The obvious solution is to have law enforcement enforce the law rather than private parties. If someone posts something bad to your site, the police try to find who posted it and arrest them, and the only obligation on the website is to remove the content in response to a valid court order.
I don't have a strong view on this law – I haven't read enough into it. So I'm interested to know why you believe what you've just written. If a country is trying to, for example, make it harder for CSAM to be distributed, why shouldn't the person operating the site where it's being hosted have some responsibility to make sure it can't be hosted there?
For one thing, because that person is not obliged to follow due process and will likely ban everything that might even vaguely require them to involve a lawyer. See for example YouTube’s copyright strikes, which are much harsher on the uploader than any existing copyright law.
Your argument is that it's better to have the illegal stuff (say, CSAM) online than for a site owner to, for practical reasons, ban a lot of legal stuff too? Why?
Some sorts of goods should be prioritized over some sorts of bads. There would be no terrorism if we locked every human in a box and kept them there, yet you do not support this position, why? I jest, but I think public discourse is an unalloyed good and I would rather we not compromise informal small discourse for the sake of anti-terrorism, anti-CSAM, etc. These things won’t be fully rooted out, they’ll just go to ground. Discourse will be harmed though.
Let's consider two ways of dealing with this problem:
1) Law enforcement enforces the law. People posting CSAM are investigated by the police, who have warrants and resources and so on, so each time they post something is another chance to get caught. When they get caught they go to jail and can't harm any more children.
2) Private parties try to enforce the law. The people posting CSAM get banned, but the site has no ability to incarcerate them, so they just make a new account and do it again. Since they can keep trying and the penalty is only having to create a new account, which they don't really care about, it becomes a cat and mouse game, except that even if the cat catches the mouse, the mouse just reappears under a different name with new knowledge of how to avoid getting caught next time. Since being detected carries minimal risk, they get to try lots of strategies until they learn how to evade the cat, instead of getting eaten (i.e. going to prison) the first time they get caught. So they get better at evading detection, which makes it harder for law enforcement to catch them as well.

Meanwhile the site is under increasing pressure to "do something" because the problem has been made worse rather than better, so they turn up the false positives and cause more collateral damage to innocent people. But that doesn't change the dynamic; it only causes the criminals to evolve their tactics, which they can try an unlimited number of times until they learn how to evade detection again. And as soon as they do, the site, despite its best efforts, is hosting the material again.

The combined cost of the heroic efforts to try and the liability from inevitably failing destroys smaller sites and causes market consolidation. The megacorps then become a choke point for other censorship, some by various governments, some by the corporations themselves. That is an evil in itself, but if you like to take it from the other side, that evil causes ordinary people to chafe. So they start to develop and use anti-censorship technology. As that technology becomes more widespread with greater public support, the perpetrators of the crimes you're trying to prevent find it easier to avoid detection.
You want the police to arrest the pedos. You don't want a dystopian megacorp police state.
That is not the argument. The argument is that, with an appropriate court order, a site operator must take down the illegal material (if it hasn’t already been moderated out). However, the site owner should not be liable for that content appearing on their site, since it was not put there by them and since there is value in uncensored/unmoderated online communities. The person who posted the content should be liable, not the site owner. In neither case is the content just freely sitting there harming the public and unable to be removed because nobody is liable for punishment.
I think an interesting alternate angle here would be to require unmoderated community admins to keep record of real identity info for participants, so if something bad shows up the person who posted it is trivially identifiable and can easily be reprimanded. This has other problems, of course, but is interesting to consider.
How about:
Individuals and small groups not held directly liable for comments on their blog unless it's proven they're responsible for inculcating that environment.
"Safe harbour" - if someone threatens legal action, the host can pass on liability to the poster of the comment. They can (temporarily) hide/remove the comment until a court decides on its legality.
How about have separate laws for CSAM and "hate speech". Because CSAM is most likely just a fig-leaf for the primary motivation of these laws.
big DMCA energy
It's very much intended. It's easier for the powers that be to deal with a few favored oligarchs. They're building a Great British Firewall, like China's.
Hexus is a big one; being UK-based and UK-centric, they are just deleting 24 years of history rather than trying to geoblock around it.
Hexus shut down years ago did it not?
The reviews/news side did, but the forums kept going until this.
I know the law is BS and this just hit me differently. I am really really pissed.
This list should be ordered by number of users affected rather than alphabetically, IMO. The 275K-monthly-user platform is almost hidden among the 49- and 300-user examples.
I will host a public proxied version of these websites, open to UK people, just to troll them :D
Whilst I don't condone being unlawful (are you sure you want to run that risk?), that's the hacker spirit one needs these days.
Being silly to ridicule overreaching laws is top-trolling! Love it.
The trouble here is that the law is so crazy that third parties allowing users in the relevant jurisdiction to access the site could result in the site still being liable, so then they would have the same reason to block your proxy service if a non-trivial number of people were using it.
To do any good you don't want to cause grief for the victims of the crazy law, you want to cause grief to its perpetrators.
Then I guess that'd be a use case for technologies like Tor or I2P, used properly and securely.
Worth mentioning that the lawyer who runs onlinesafetyact.co.uk, Neil Brown, has an onion address in his profile.
https://mastodon.neilzone.co.uk/@neil
http://3kj5hg5j2qxm7hgwrymerh7xerzn3bowmfflfjovm6hycbyfuhe6l...
Ofcom have said in writing that they consider geoblocking to be sufficient, so at the very least they would probably lose any legal case they brought against a site that geoblocks.
Which would in turn cause the whole thing to be a farce, because then the solution would be for every site to geoblock the UK and then every person in the UK to use a proxy.
It is.
Being unlawful is a vital tool for people to keep tyranny in check. I would hope that most people are incredibly strong supporters of lawlessness when the laws are wrong. To give an extreme example, I imagine you supported the hiding of Jewish people during Nazi Germany's reign, which means you support unlawful activity as long as it's against laws that are against the people.
If GP is not a UK citizen and does not live in the UK, how would that be unlawful? They're not beholden to or subject to UK law. The UK's belief that they can enforce this law on non-UK entities is ridiculous.
Doesn't this act effectively create a new form of DDoS? A bad actor can flood a platform with enough hate content that the moderation team simply cannot keep up. Even if posts default to hidden, the backlog could be enough to harm a service.
And of course, it will turn into yet another game of cat and mouse, as bad actors find new creative ways to bypass automatic censors.
Aka a “heckler’s veto”: https://en.m.wikipedia.org/wiki/Heckler's_veto
This already happens but attackers go after hosts and registrars.
Is Hacker News also affected by this act?
International law limits state jurisdiction to territorial boundaries (Art. 2(1) UN Charter). Hacker News is a US web site and Y Combinator LLC is a US company. The OSA, which is a UK law, cannot mandate physical enforcement (e.g., server seizures) on foreign soil. If they really didn't like HN, the UK government could try to suppress HN access for their citizens by local means. If HN had a branch in the UK, the UK government could take action against that branch. As far as I know that's not the case.
Yes, but I don't really understand how the UK can expect to enforce this law against non-UK entities that don't have any employees or physical presence in the UK.
HN/YC could just tell them to go pound sand, no? (Assuming YC doesn't have any operations in the UK; I have no idea.)
they could detain you while on a layover in the UK (not that you'd ever want to do that)
pg lives in Britain if I'm not mistaken.
Yes.
"Furry.energy"? With a total of 49 members? My World of Warcraft guild has more active players...
This is exactly the point, isn't it? The smallest websites are destroyed, leaving only the megacorps.
I'm sure they can find a community elsewhere. Discord comes to mind... "Oh but it's illegal", trust me on this: Discord only cares if somebody actually reports the server and the violations are severe enough.
But why should they _have_ to find a community elsewhere?
Is it right that a country should snuff out all communities, large and small, and drive them to hosting in another country, or "under the wing" of a behemoth with a fully-funded legal department?
It's a blatantly destructive law.
That is not the stated purpose of the law, and there is recourse built into it. Too often folks view these laws as binary when they are not.
It's never the stated purpose of the law, but we might do well to be concerned with what it actually does rather than what the proponents claim it would do.
Recourse doesn't matter for a sole proprietorship. If they have to engage with a lawyer whatsoever, the site is dead or blocked because they don't have the resources for that.
What recourse? A small, 50-member community doesn't have the resources to ensure they're in compliance, and Ofcom's statement about how smaller players are "unlikely" to be affected is not particularly reassuring.
The "stated purpose" is irrelevant. Even if they are being honest about their stated purpose (questionable), the only thing that matters is how it ends up playing out in reality.
Right or wrong, judging by people's reasoning, I think many have misread the legislation or read poor coverage of it.
Much of it boils down to doing a risk assessment and deciding on mitigations.
Unfortunately we live in a world where if you allow users to upload and share images, with zero checks, you are disturbingly likely to end up hosting CSAM.
Ofcom have guides, risk assessment tools and more; if you think any of this is relevant to you, that's a good place to start.
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
It's not that simple - illegal and harmful content can include things like hate speech - worth a longer read... https://www.theregister.com/2025/01/14/online_safety_act/
If I ran a small forum in the UK I would shut it down - not worth risk of jail time for getting it wrong.
The new rules cover any kind of illegal content that can appear online, but the Act includes a list of specific offences that you should consider. These are:
terrorism
child sexual exploitation and abuse (CSEA) offences, including
grooming
image-based child sexual abuse material (CSAM)
CSAM URLs
hate
harassment, stalking, threats and abuse
controlling or coercive behaviour
intimate image abuse
extreme pornography
sexual exploitation of adults
human trafficking
unlawful immigration
fraud and financial offences
proceeds of crime
drugs and psychoactive substances
firearms, knives and other weapons
encouraging or assisting suicide
foreign interference
animal cruelty
> hate
Is it really just listed as one word? What's the legal definition of hate?
Thanks.
> Something is a hate incident if the victim or anyone else think it was motivated by hostility or prejudice based on: disability, race, religion, gender identity or sexual orientation.
This probably worries platforms that need to moderate content. Sure, perhaps 80% of the cases are clear cut, but it’s the 20% that get missed and turn into criminal liability that would be the most concerning. Not to mention a post from one year ago can become criminal if someone suddenly decides it was motivated by one of these factors.
Further, prejudices in terms of language do change often. As bad actors get censored based on certain language, they will evolve to use other words/phrases to mean the same thing. The government is far more likely to be aware of these (and be able to prosecute them) than some random forum owner.
It's important to understand that the act we're talking about does not make owners simply liable for stuff that happens on their sites, nor does it require them to stop everything. It's about what the risks are of these things happening, and what you do about that.
In fact, if you have a place where people can report abuse and it's just not really happening much, then you can say you're low risk for that. That's in some of the examples.
> Not to mention a post from one year ago can become criminal if someone suddenly decides it was motivated by one of these factors.
That would impact the poster, not the site.
Just want to add that I couldn't find any references to gender identity in the linked Wikipedia article, nor in the article on hate incidents in the UK.
Hate is whatever I don't like.
Whatever the current government says it means. What did you think it meant?
I don't see what the big deal is - Governments don't change hands or selectively prosecute.
From that list I don't see HN being affected, although I read somewhere that a report button on user-generated content was required for smaller sites to comply.
I think it's hard to make the case, for anything other than a pretty tiny group or organisation, that you can get away without having some reporting and moderation process.
I don't think you need a report button, but a known way for your users to report things is likely going to be required if you have a load of user-generated stuff that's not moderated by default.
I might be falling for what I've read second-hand, but isn't one of the issues that it doesn't matter where the forum is based? If you've got significant UK users it can apply to your forum wherever it's hosted. You've got to block UK users.
The good thing about forums is their moderation. It seems like most of what the law covers is already enforced by most forums anyway.
A forum that merely has good moderation is not automatically compliant with the act. It requires not just doing things, but paperwork that shows that you are doing things. The effort to do this well enough to be sure you will be in compliance is far beyond what is reasonable to ask of hobbyists.
> Much of things boils down to doing a risk assessment and deciding on mitigations.
So... paperwork, with no real effect, use, or results. And you're trying to defend it?
I do agree we need something, but this is most definitely not the solution.
Putting in mitigations relevant to your size, audience and risk factors is not "no real effect".
If you've never considered what the risks are to your users, you're doing them a disservice.
I've also not defended it, I've tried to correct misunderstandings about what it is and point to a reliable primary source with helpful information.
> if you allow users to upload and share images
On my single-user Fedi server, the only person who can directly upload and share images is me. But because my profile is public, it's entirely possible that someone I'm following posts something objectionable (either intentionally or via exploitation) and it would be visible via my server (albeit fetched from the remote site.) Does that come under "moderation"? Ofcom haven't been clear. And if someone can post pornography, your site needs age verification. Does my single-user Fedi instance now need age verification because a random child might look at my profile and see a remotely-hosted pornographic image that someone (not on my instance) has posted? Ofcom, again, have not been clear.
It's a crapshoot with high stakes and only one side knows the rules.
> On my single-user Fedi server,
Then you don't have a user to user service you're running, right?
> And if someone can post pornography, your site needs age verification.
That's an entirely separate law, isn't it?
You're right. Plus, the overreactions have been walked back or solved in some cases, e.g.: LFGSS is going to continue on as a community-run effort which will comply with the risk assessment requirements. Most of the shutdowns are of long-dead forums that have been in need of an excuse to shutter. The number of active users impacted by these shutdowns probably doesn't break 100.
Related ongoing thread: Lobsters blocking UK users because of the Online Safety Act - https://news.ycombinator.com/item?id=43152178
Safety from dissent, for an authoritarian government. This is just weaponized "empathy".
It seems like governments around the world are shifting their priorities away from their domestic economies.
Shifting their priorities towards stifling the speech of anyone who tries to complain about the domestic conditions.
The Chaos Engine forums - a site for game developers to discuss, moan, and celebrate fellow and former colleagues - have now moved to Discord due to this act. It really is a strange time we are living through.
Non-UK sites should IP-block UK addresses and serve a block page that advertises VPNs.
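For anyone wondering what that could look like in practice, here's a minimal sketch as a Python WSGI middleware. It assumes the geoip2 package and a MaxMind GeoLite2 country database; the database path, class name and block-page text are purely illustrative, not anything the Act or Ofcom prescribe.

    # Sketch only: return a 451 block page to UK visitors, pass everyone else through.
    # Assumes `pip install geoip2` and a GeoLite2-Country.mmdb on disk (illustrative path).
    import geoip2.database
    import geoip2.errors

    BLOCK_BODY = (b"Not available to UK visitors because of the Online Safety Act. "
                  b"A VPN may restore access.\n")

    class UKGeoBlock:
        def __init__(self, app, db_path="/var/lib/GeoIP/GeoLite2-Country.mmdb"):
            self.app = app
            self.reader = geoip2.database.Reader(db_path)

        def __call__(self, environ, start_response):
            ip = environ.get("REMOTE_ADDR", "")
            try:
                country = self.reader.country(ip).country.iso_code
            except (geoip2.errors.AddressNotFoundError, ValueError):
                country = None  # unknown or malformed addresses are let through
            if country == "GB":
                start_response("451 Unavailable For Legal Reasons",
                               [("Content-Type", "text/plain; charset=utf-8"),
                                ("Content-Length", str(len(BLOCK_BODY)))])
                return [BLOCK_BODY]
            return self.app(environ, start_response)

In practice you'd more likely do this at the CDN or web-server layer, but the logic is the same: geolocate the client, serve the notice to UK visitors, serve the site to everyone else.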
You know what's really rich about the OSA?
One of the exemptions is for "Services provided by persons providing education or childcare."
Do you have an explicit reference for that?
Not doubting it, but if you have a reference to hand it will save me having to search.
If it's just something you remember but don't have a reference then that's OK, I'll go hunting based on your clue.
In the text of the act, schedule 1 part 1 paragraph 10 https://www.legislation.gov.uk/ukpga/2023/50/schedule/1/para...
... unlike the issue of what size of service is covered, this isn't a pinky swear by Ofcom.
Super ... many thanks.
So, how does all this apply to community discords, slacks, Matrix rooms, IRC chats, etc?
Is it Discord's responsibility to comply, the admins/moderators', or all of the above?
The hosting platform is responsible for compliance. For Discord or Slack it's easy, but for Matrix, it might be more fuzzy. Certainly the homeserver that is hosting a room would be responsible, but would other homeservers that have users who are members of the room also be responsible?
Yes, at least for platforms like Discord, they bear the responsibility based on my non-lawyer reading of the plain English. YMMV, IANAL.
What concept allows the UK to (attempt to) enforce this against non-citizens whose business or life has no ties to their country? Plenty of small countries have odd censorship laws but have escaped similar legal hand-wringing.
I am part of a small specialist online technical community. We just moved it over to a Hetzner box in Germany and someone there is paying for it instead of it being hosted in the UK.
What are you going to do Ofcom?
If you live in the UK and can still be linked as an operator/organizer of the site (or if it's not you, other UK residents), can't they still come after you directly? I don't know about you, but I don't think running an online community would be worth huge fines to me.
There are no UK residents involved in the organisation or operation of it now even though we came up with it.
Apparently the law is dreadfully written. I was reading the lobste.rs thread and wow, it’s like they took a programming course in goto and if statements and applied it to the law…
I had the complete opposite impression from that thread. It seemed like people were politically motivated to interpret the law in a certain way, so they could act like they were being coerced.
These closures are acts of protest, essentially.
I agree with @teymour's description of the law. It is totally normal legislation.
Not only is this law terrible, there are several other laws like this that have existed for years.
People saying criticism is politically motivated (ignoring the fact that this law was drafted by the Tories and passed by Labour...so I am not exactly clear what the imagined motivation might be) ignore the fact that the UK has had this trend in law for a long time and the outcome has generally been negative (or, at best, a massive waste of resources).
Legislation has a context: if we lived in a country where police behaved sensibly, I could reasonably see how someone could believe this was sensible...that isn't reality though. Police have a maximalist interpretation of their powers (for example, non-crime hate incidents...there is no legislation governing their use, they are used regularly to "question the thinking" of people who write critical things about politicians, usually local, or the police...no appointed authority gave them this power, their usage has been questioned by ministers...they still register hundreds of thousands of them a year).
Btw, if you want to know how the sausage is made: security services/police want these laws, some event happens, and then there is a coordinated campaign with the media (the favour is usually swapped for leaks later) to build up "public support" (not actual support, just the appearance of support), meetings with ministers are arranged "look at the headlines"...this Act wasn't some organic act of legislative genius, it was the outcome of a targeted media campaign from an incident that, in factual terms, is unrelated to what the Act eventually became (if this sounds implausible, remember that May gave Nissan £30m on the back of the SMMT organising about a week's worth of negative headlines, and remember that Johnson brought in about 4m migrants off the back of about two days of briefing against him by a six-month-old lobbying group from hotels and poultry slaughterhouses...this is actually how the govt works...no-one reads papers apart from politicians).
Giving Ofcom this power, if you are familiar with their operations, is an act of literal insanity. Their budget has exploded (I believe to near a quarter of a billion now). If you think tech companies are actually going to enforce our laws for us, you are wrong. But suggesting that Ofcom, with their new legions of civil servants, is supposed to be the watchdog of online content...it makes no sense; it cannot be described as "totally normal" in a country other than China.
The State of Utopia has published this report on the source of funding of Ofcom, the U.K. statutory regulator responsible for enforcing the Online Safety Act:
https://medium.com/@rviragh/ofcom-and-the-online-safety-act-...
Seems like an overreaction in some of these cases. Perhaps the people running them were close to the edge and the extra mental burden just pushed them over it.
It's like local US news websites blocking European users over GDPR concerns.
Feel free to put a stop to it by buying liability insurance for all of these service providers, which you may have to persuade the underwriter should be free. ;-)
> It's like local US news websites blocking European users over GDPR concerns.
I don't know if you said this sarcastically, but I have a friend in Switzerland who reads U.S. news websites via Web Archive or Archive IS exactly because of that.
Accessing some of these news sites returns Cloudflare's "not available in your region" message or similar.
It's not just the EU; I'm in a poorer region outside the EU and seeing "not available in your region" is quickly becoming the norm. Site administrators try to cut down on bot traffic (scraping, vulnerability scanners, denial of service, etc) and block whole regions they're not interested in.
Hell, we do that ourselves, but only for our own infrastructure that isn't expected to be used outside the country. Whitelisting your own country and blocking everything else cuts out >99% of scrapers and script kiddies.
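As a rough sketch of that allowlist idea in Python (the CIDR file name and example ranges below are placeholders; real lists are usually built from RIR delegation data and enforced in the firewall rather than in application code):

    # Sketch only: allow traffic from your own country's address ranges, reject the rest.
    import ipaddress

    def load_allowlist(path="country-cidrs.txt"):
        """Read one CIDR per line, ignoring blank lines and comments."""
        nets = []
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line and not line.startswith("#"):
                    nets.append(ipaddress.ip_network(line, strict=False))
        return nets

    def is_allowed(ip, allowlist):
        """True if the address falls inside any allowlisted range."""
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in allowlist)

    # Usage: allow = load_allowlist(); is_allowed("203.0.113.7", allow)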
No sarcasm. I totally understand why a local news website in the US would just block, since traffic from outside the country is irrelevant for them and they have limited resources. I don't judge them for blocking.
Fact is, it's very unlikely they would ever face any issues for leaving it unblocked.
So much for the "world wide" web.
Just another bad UK law not worth knowing about ;)