I definitely get the proactive response here, as I’ve considered the same for my small platform. The biggest issue is that the definition of who the majority of the requirements apply to is quite hard to find. It’s buried on page 65 of this pdf:
https://www.ofcom.org.uk/siteassets/resources/documents/onli...
It defines “large service” as “a service that has more than 7 million monthly active United Kingdom users”. This is 25% of the UK population. If your service isn’t a household name, it mostly doesn’t apply to you, but the language they use makes it seem like it applies far more broadly.
7 million is about 10% of the UK population, not 25% (the UK population is roughly 68 million, so 7 million works out to just over 10%).
They also block the Brave browser entirely. I tried reading into it, but it appears to be one of those "programmer personality quirks". The fellow who runs the site appears to think Brave is a scam of some sort and just decided to block the entire browser.
Oh well, I'll stick to HN.
You can scroll through the lobste.rs moderation log to get an impression of how moderation on lobste.rs works:
Also incidents like this:
https://lobste.rs/s/zp4ofg/lobster_burntsushi_has_left_site
This kind of stuff gives me hugbox vibes; I would not feel safe there. I'm somewhat sure some of the moderators use the website as personal political leverage.
Is it me, or is brave more of a cryptocurrency platform that pretends to be a browser?
A lot of people don't really like the toxic discussions that crypto usually tends to devolve into. So it makes sense to block the browser if you don't want those people on your server.
Related ongoing thread: In memoriam - https://news.ycombinator.com/item?id=43152154
Related, a list of other sites which are blocking the UK or shutting down altogether rather than deal with the OSA:
As someone who self-hosts Mastodon, should I be geoblocking the UK as well?
For the record, I only host it for myself, so I'm pretty sure I wouldn't have received any of the legal protections that the OSA is now stripping away, and thus geoblocking the UK wouldn't matter. But if there's something else I'm missing, please let me know.
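If I do end up blocking, the mechanics themselves look simple enough. Here's a minimal sketch, assuming Python with MaxMind's geoip2 package and a GeoLite2 country database; the database path, the WSGI wrapping, and the response text are placeholders, and a real Mastodon deployment would more likely do this at the nginx or CDN layer:

    import geoip2.database
    from geoip2.errors import AddressNotFoundError

    # Placeholder path; in practice this comes from wherever geoipupdate puts it.
    reader = geoip2.database.Reader("/var/lib/GeoIP/GeoLite2-Country.mmdb")

    def geoblock_uk(app):
        """Wrap a WSGI app and answer 451 for requests that geolocate to GB."""
        def wrapper(environ, start_response):
            ip = environ.get("HTTP_X_FORWARDED_FOR", environ.get("REMOTE_ADDR", ""))
            ip = ip.split(",")[0].strip()
            try:
                country = reader.country(ip).country.iso_code
            except (AddressNotFoundError, ValueError):
                country = None
            if country == "GB":
                start_response("451 Unavailable For Legal Reasons",
                               [("Content-Type", "text/plain")])
                return [b"Not available in the UK.\n"]
            return app(environ, start_response)
        return wrapper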
Block and route around failures.
Is a decentralized approach to communication an effective bulwark against this type of legislation? As is mentioned in the linked thread, the act itself is extremely hard to parse, so maybe it's not even possible to know the answer. Just curious if anyone has done research from that angle.
Neil Brown[0] has been attending the Ofcom online sessions and asking them about Fediverse servers but they've been unhelpfully vague as to how/if/why/when they fall under OSA.
I'm guessing they linked the archive so that when Lobsters is geoblocked, UK users will still be able to read it.
ELI5: Why should a US-based site, hosted on US soil and run by a US citizen, care about laws in all the hundreds of other random countries located thousands of miles away?
Because he can face legal consequences for breaking a law that is local somewhere else. The US does the exact same thing; look up FATCA.
Because they can be arrested and extradited to the UK. Not a high chance, but not zero. A more realistic scenario is being arrested while traveling in Europe and extradited to the UK.
The simple answer is to not recognize the authority of the UK government when you don't have a physical presence there, as long as they are failing to protect their citizens with sane, ethical digital policy.
Tough pill to swallow for some, but there is no difference between irrational demands made by the government of the UK and, say, North Korea. It's everyone's choice which side of history they'd like to be on.
Just to play devil's advocate, the US's relationship with the UK is not at all like its relationship with North Korea. I think that's why this is listed as an option:
> A statement from the US Department of State that it does not believe the law applies to American entities and a commitment to defend them against it.
Haven't you heard of FATCA and other similar local US laws which impact institutions everywhere in the world?
As a person living in the UK, I really hope the rest of the world gives the middle finger to this pathetic extraterritorial law by totally ignoring it.
They can ask ISPs to do the censorship if they really want to keep us “safe”.
Question: why proactively block some authoritarian countries like the UK, but not others like China? Is it because only the UK passes legislation that threatens people outside of its borders? Or does China do it too and we ignore it?
China probably can’t get the USA to extradite an American citizen for breaking a Chinese law. The UK can.
As much as I hate this legislation, this is really just a small forum deciding they don't have the time to understand the legislation, so it's easier to block IPs from the UK (while it's not even clear if that will exempt them from liability). Fair enough, but hardly earth shattering. I remember several US-based websites geo-blocking all of Europe after GDPR came in (I think the LA Times was one of the biggest), and that went on for years.
We need legislation that tackles the various issues this legislation aims to tackle but in much more targeted ways. It needs to specifically target the biggest social media companies (Meta, X, Reddit, etc). Smaller forums are irrelevant. If your algorithm recommends self-harm content to kids who then go on to kill themselves it is right that you should be held responsible. "We're so big we can't be expected to police the content we host" should not be an acceptable argument.
I think the legislation does not name specific companies (that would be quite shortsighted, since new apps appear all the time and they don't want to be updating legislation every time something gets popular), but simply says that if you have more than 7 million 30-day active UK users of the user-to-user part of your site, then you're in scope. That's quite a big audience.
Small forums run as hobby projects need to require little-to-no investment of time and money, or the ROI quickly goes negative and they just shut down instead.
A little while back there was the story [0] of a Mastodon admin who hated CloudFlare and its centralized protection, but found that he had no choice but to sign up for it anyway because a disgruntled user kept launching DDoS attacks and he had no other way to keep his instance online. A bunch of people here and elsewhere kept unhelpfully replying that, “you don’t need CloudFlare, you could just do [incredibly convoluted and time-consuming solution] instead”, and all of those people were missing the point: CloudFlare is “set it and forget it”, which is a non-negotiable requirement for anything which is run as a hobby instead of a full-time job.
It’s the same with this UK law: yes, you could spend weeks of your life learning the intricacies of laws in some other country, or you could just block them and be done with it. Businesses which might need revenue from UK users will do the former, but if I’m running a site out of my own time and money, I’ll do the latter. And I don’t want hobby sites to have to disappear: the Internet is commercialized enough as it is, and regulating passion projects out of existence would kill the last remaining independent scraps.
I don't know anything about lobste.rs, but they mention lfgss, and when that was discussed on HN a couple of months ago, the person who runs lfgss mentioned these as things they would have to do to comply:
> 1. Individual accountable for illegal content safety duties and reporting and complaints duties
> 2. Written statements of responsibilities
> 3. Internal monitoring and assurance
> 4. Tracking evidence of new and increasing illegal harm
> 5. Code of conduct regarding protection of users from illegal harm
> 6. Compliance training
> 7. Having a content moderation function to review and assess suspected illegal content
> 8. Having a content moderation function that allows for the swift take down of illegal content
> 9. Setting internal content policies
> 10. Provision of materials to volunteers
> 11. (Probably this because of file attachments) Using hash matching to detect and remove CSAM
> 12. (Probably this, but could implement Google Safe Browser) Detecting and removing content matching listed CSAM URLs
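For those last two items, the underlying mechanism is just matching uploads and outbound links against supplied lists. A very rough sketch of the shape of it in Python follows; in practice the hash and URL lists come from bodies like the IWF (usually via a third-party service) and use perceptual rather than cryptographic hashes, so everything named here is a stand-in for illustration:

    import hashlib

    KNOWN_BAD_HASHES: set[str] = set()  # would be loaded from a vetted hash list
    KNOWN_BAD_URLS: set[str] = set()    # would be loaded from a vetted URL list

    def upload_is_flagged(file_bytes: bytes) -> bool:
        # Real systems use perceptual hashes (e.g. PhotoDNA/PDQ) so re-encoded
        # copies still match; plain SHA-256 here is only to show the shape.
        return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

    def url_is_flagged(url: str) -> bool:
        return url.strip().rstrip("/").lower() in KNOWN_BAD_URLS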
A lot of those sound scary to deal with but upon closer look don't actually seem like much of a burden. Here's what I concluded when I looked into this back then.
First, #2, #4, #5, #6, #9, and #10 only apply to sites that have more than 7 000 000 monthly active UK users or are "multi-risk". Multi-risk means being at medium to high risk in at least two different categories of illegal/harmful content. The categories of illegal/harmful content are terrorism, child sexual exploitation or abuse, child sex abuse images, child sex abuse URLs, grooming, encouraging or assisting suicide, and hate.
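As a rough illustration of that "multi-risk" test (my own shorthand, not terms from the guidance):

    LOW, MEDIUM, HIGH = 0, 1, 2

    def is_multi_risk(risk_by_category: dict[str, int]) -> bool:
        # e.g. {"terrorism": LOW, "grooming": MEDIUM, "hate": HIGH, ...}
        return sum(level >= MEDIUM for level in risk_by_category.values()) >= 2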
Most smaller forums that are targeting particular subjects or interests probably won't be multi-risk. But for the sake of argument let's assume a smaller forum that is multi-risk and consider what is required of them.
#1 means having someone who has to explain and justify to top management what the site is doing to comply.
#2 means written statements saying which senior managers are responsible for the various things needed for compliance.
#3 is not applicable. It only applies to services that are large (more than 7 000 000 active monthly UK users) and are multi-risk.
#4 means keeping track of evidence of new or increasing illegal content and informing top management. Evidence can come from your normal processing, like dealing with complaints, moderation, and referrals from law enforcement.
Basically, keep some logs and stats and look for trends, and if any are spotted bring it up with top management. This doesn't sound hard.
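Something as simple as the following would probably cover it; the category names and the "rising" rule are made up for the example:

    from collections import Counter, defaultdict

    monthly_actions = defaultdict(Counter)  # month -> Counter of moderation actions

    def record_action(month: str, category: str) -> None:
        monthly_actions[month][category] += 1

    def rising_categories(prev_month: str, this_month: str) -> list[str]:
        prev, cur = monthly_actions[prev_month], monthly_actions[this_month]
        return [cat for cat in cur if cur[cat] > prev[cat]]

    # Anything returned by rising_categories("2025-02", "2025-03") is what
    # you'd flag to whoever holds the accountable role from #1.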
#5 You have to have something that sets the standards and expectations for the people who will be dealing with all this. This shouldn't be difficult to produce.
#6 When you hire people to work on or run your service you need to train them to do it in accord with your approach to complying with the law. This does not apply to people who are volunteers.
#7 and #8 These cover what you should do when you become aware of suspected illegal content. For the most part I'd expect sites could handle it like they handle legal content that violates the site's rules (e.g., spam or off-topic posts).
#9 You need a policy that states what is allowed on the service and what is not. This does not seem to be a difficult requirement.
#10 You have to give volunteer moderators access to materials that let them actually do the job.
#11 This only applies to (1) services with more than 7 000 000 monthly active UK users that have at least a medium risk of image-based CSAM, or (2) services with a high risk of image-based CSAM that either have at least 700 000 monthly active UK users or are a "file-storage and file-sharing service".
A "file-storage and file-sharing service" is:
> A service whose primary functionalities involve enabling users to:
> a) store digital content, including images and videos, on the cloud or dedicated server(s); and
> b) share access to that content through the provision of links (such as unique URLs or hyperlinks) that lead directly to the content for the purpose of enabling other users to encounter or interact with the content.
#12 Similar to #11, but without the "file-storage and file-sharing service" part, so only applicable if you have at least 700 000 monthly active UK users and are at a high risk of CSAM URLs or have at least 7 000 000 monthly active UK users and at least a medium risk of CSAM URLs.
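To make the thresholds a bit more concrete, here's how I'd encode the applicability of #11 and #12; the risk levels and function names are my shorthand, not terms from the guidance:

    LOW, MEDIUM, HIGH = 0, 1, 2

    def hash_matching_required(uk_mau: int, image_csam_risk: int,
                               is_file_sharing_service: bool) -> bool:
        # #11: large services at medium+ risk, or high-risk services that are
        # either fairly large or are "file-storage and file-sharing" services.
        if uk_mau > 7_000_000 and image_csam_risk >= MEDIUM:
            return True
        return image_csam_risk == HIGH and (uk_mau >= 700_000 or is_file_sharing_service)

    def csam_url_detection_required(uk_mau: int, csam_url_risk: int) -> bool:
        # #12: same shape, minus the file-sharing carve-out.
        if uk_mau > 7_000_000 and csam_url_risk >= MEDIUM:
            return True
        return csam_url_risk == HIGH and uk_mau >= 700_000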
[stub for offtopicness]