No, The UK’s Online Safety Act Doesn’t Make Children Safer Online
from the this-does-the-opposite dept
Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But in one of the latest misguided attempts to protect children online, internet users of all ages in the UK are being forced to prove their age before they can access millions of websites under the country’s Online Safety Act (OSA).
The legislation attempts to make the UK “the safest place” in the world to be online by placing a duty of care on online platforms to protect their users from harmful content. It mandates that any site accessible in the UK—including social media, search engines, music sites, and adult content providers—enforce age checks to prevent children from seeing harmful content. Such content is defined in three categories, and failure to comply could result in fines of up to 10% of global revenue or courts blocking services:
- Primary priority content that is harmful to children:
- Pornographic content.
- Content which encourages, promotes or provides instructions for:
- suicide;
- self-harm; or
- an eating disorder or behaviours associated with an eating disorder.
- Priority content that is harmful to children:
- Content that is abusive on the basis of race, religion, sex, sexual orientation, disability or gender reassignment;
- Content that incites hatred against people on the basis of race, religion, sex, sexual orientation, disability or gender reassignment;
- Content that encourages, promotes or provides instructions for serious violence against a person;
- Bullying content;
- Content which depicts serious violence against or graphically depicts serious injury to a person or animal (whether real or fictional);
- Content that encourages, promotes or provides instructions for stunts and challenges that are highly likely to result in serious injury; and
- Content that encourages the self-administration of harmful substances.
- Non-designated content that is harmful to children (NDC):
- Content is NDC if it presents a material risk of significant harm to an appreciable number of children in the UK, provided that the risk of harm does not flow from any of the following:
- the content’s potential financial impact;
- the safety or quality of goods featured in the content; or
- the way in which a service featured in the content may be performed.
Online service providers must make a judgement about whether the content they host is harmful to children, and if so, address the risk by implementing a number of measures, which includes, but is not limited to:
- Robust age checks: Services must use “highly effective age assurance to protect children from this content. If services have minimum age requirements and are not using highly effective age assurance to prevent children under that age using the service, they should assume that younger children are on their service and take appropriate steps to protect them from harm.”
To do this, all users on sites that host this content must verify their age, for example by uploading a form of ID like a passport, taking a face selfie or video to facilitate age assurance through third-party services, or giving permission for the age-check service to access information from their bank about whether they are over 18.
- Safer algorithms: Services “will be expected to configure their algorithms to ensure children are not presented with the most harmful content and take appropriate action to protect them from other harmful content.”
- Effective moderation: All services “must have content moderation systems in place to take swift action against content harmful to children when they become aware of it.”
Since these measures took effect in late July, social media platforms Reddit, Bluesky, Discord, and X all introduced age checks to block children from seeing harmful content on their sites. Porn websites like Pornhub and YouPorn implemented age assurance checks on their sites, now asking users to either upload government-issued ID, provide an email address so that technology can analyze other online services where it has been used, or submit their information to a third-party vendor for age verification. Sites like Spotify are also requiring users to submit face scans to third-party digital identity company Yoti to access content labelled 18+. Ofcom, which oversees implementation of the OSA, went further by sending letters to try to enforce the UK legislation on U.S.-based companies such as the right-wing platform Gab.
The UK Must Do Better
The UK is not alone in pursuing such a misguided approach to protect children online: the U.S. Supreme Court recently paved the way for states to require websites to check the ages of users before allowing them access to graphic sexual materials; courts in France last week ruled that porn websites can check users’ ages; the European Commission is pushing forward with plans to test its age-verification app; and Australia’s ban on youth under the age of 16 accessing social media is likely to be implemented in December.
But the UK’s scramble to find an effective age verification method shows us that there isn’t one, and it’s high time for politicians to take that seriously. The Online Safety Act is a threat to the privacy of users, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and leaves millions of people without a personal device or form of ID excluded from accessing the internet.
And, to top it all off, UK internet users are sending a very clear message that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the most downloaded on Apple’s App Store in the UK, and a petition calling for the repeal of the Online Safety Act recently hit more than 400,000 signatures.
The internet must remain a place where all voices can be heard, free from discrimination or censorship by government agencies. If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users—including children—rather than pushing the enforcement of legislation that harms the very people it was meant to protect.
Originally posted to the EFF’s Deeplinks blog.
Filed Under: age verification, children's rights, online safety act, uk


Comments on “No, The UK’s Online Safety Act Doesn’t Make Children Safer Online”
In the UK right now every citizen is treated as a child until they prove they’re an adult by passing a government ID over to a third party USA data farming operation
Some people would argue that passing their government ID over to a third party USA data farming operation, with no visible bona fides or any reputation for honesty, would mean you have the mind of an extremely young and gullible child and shouldn’t be allowed to access adult parts of the web anyway, but not I!
No I just think it means you’re a moron who probably needs power of attorney placed over you to protect you from fraud
Much like the politicians who decided it was a good idea to call the people who oppose the OSA “predators”
It gets crazier
The official UK government line is that everyone who is against the OSA is a predator
Re:
Which is the same as everyone who decries the violence committed by the Israeli government hates Jews.
No, it doesn’t make kids safer, but that wasn’t the actual goal of the act (and an “act” it is) anyway.
Can’t wait to see teen mortality rates here in the UK soar an extra 150% or so, back to where they were before the internet suddenly made loads of us aware that we were not alone in our experiences, that loads of the insurmountable-seeming issues facing us actually had help and support available, and that we could learn about stupid shit we probably shouldn’t do, and how to mitigate the harms of the stuff we did decide to do…
And we can tie this into the Collective Shout/Steam and Itch censorship issue: Ofcom’s own guidance on how to enforce the OSA says that if a business like Steam refuses to censor itself, Ofcom will stop payment processors from working with that business. Welcome to the Age of Censorship, folks. We’re gonna be in this shit for a long while.
Want to know what I think?
Honestly, I think we do need to protect kids; the internet is being flooded with garbage lately, from AI-generated sites that put advertising revenue over accuracy, to social media algorithms that push fear and hate and extremism, to ads, ads, and more ads, and more stuff I don’t want to think too hard about right now.
But, this ID-collecting craze isn’t the way to do it. It adds an obstacle to casual browsing, it puts your personal information and metadata in the hands of complete strangers, it’s training people to share said personal information on demand instead of being careful with what they post online, and it marginalizes and isolates people who don’t want to go along with privacy violations. And the moderation requirements are a crushing burden on smaller forums.
Personally? Something I’ve been thinking about is whether we should organize some kind of online library. Encyclopedia articles that have been fact-checked for accuracy (including handling adult content in a professional manner), historically significant literature and film, high quality animations and fanfiction and games, useful how-to guides, and so forth.
The idea being, like, a curated repository you and your kids can browse, safely away from misinformation, propaganda, AI-generated hallucinations, or predatory advertising. Something kids can grow up with and learn how the world works, and how to tell fact from fiction, before setting foot into the wider internet.
Possibly some dedicated forums for all-ages content as well. As opposed to, you know, trying to censor the entire internet.
Any thoughts? Would it be doable or useful?
Re:
What is useful? What is safe?
It just so happens that learning about how your government is evil and is working against you is on that “not for minors” list. I’m giving an example on more of the extreme end, but just remember that by declaring a government entity as one in charge of information access also puts them in charge of deciding how data is categorized.
The other part of that is knock on effects of information being censored out of fear of retaliation.
Finally: should a 14 year old have access to books on sex, rape, and so on? Many would say no; that, they say, would be the parents’ job. Not all kids have parents, or adults in their lives. Not all kids have parents that are supportive, or willing to help them.
This all holds for a curated list of info like you suggest.
And as always. If you are that worried about your kid, do you lock them in a basement and never let them out in public?
Re: Re:
Indisputably, yes. They should have free access to resources on anything and everything they are likely to encounter in their lives, so as to mitigate any possible harms of not being prepared for them when they inevitably come up in their lives in one way or another.
Re: Re: Re:
And if they have questions about the content in those books, that’s what parents are for. A parent who can’t talk about that content with their children is kind of a shitty parent.
Re: Re: Re:2
And, to be clear, some of those shitty parents are like 12 years old, and became parents mostly because they didn’t have access to those books.
The average age of female puberty, by the way, is 10.5 years after birth. Which means that many could be much younger—it’s only considered “precocious” before 8 years—and need to know this stuff before then. One person is well documented to have given birth at 5.6 years old. There’s really no usefully safe age at which to withhold information on this stuff. (And, as another reminder, no age at which it’d be harmful to learn it.)
Re: Re: Re:3
That’s the problem a lot of stick-up-their-ass “moral guardians” have with age-appropriate sex education, though: They don’t want children to learn about sex, even if all they learn are the proper names for body parts and how to tell if someone is touching those parts inappropriately. What’s equal parts hilarious and sad about that mindset is that comprehensive sex education helps reduce the numbers for both teen pregnancies and abortions, so if those assholes really wanted to stop those things from happening, they should support sex ed in schools.
But every time one of them comes out against sex ed, alls I can think is “pedocon theory is looking less like a theory and more like a fact these days”.
Re: Re: Re:4
As a father, I’m all for age-appropriate sex education with the emphasis on “sex education”. To that end, I taught my four-year-old daughter how to label all of her body parts accurately, including the ones “down there”, and also told her that no one should touch her vulva or her fanny without her permission, including me or her mom, unless absolutely essential.
Re: Wikipedia
We do have Wikipedia, which has decently curated content (and it is applying for a “suitable for all ages” status, because of information on reproduction and such.)
The thing is that children and adolescents can really use realistic information about sex, but if that’s put behind the same age lock as porn we can not expect youngsters to get proper information.
Re: Re:
The cultural appetite for realistic information about sex varies widely across the world too. Go to a natural history museum in any continental European country and you might see a hands-on kids’ exhibit about how pregnancy works, with detailed cutaway diagrams of all the relevant body parts.
It was never about keeping kids safe online. There are two goals of this law.
The first is controlling what we say and have access to online though the complete removal of all anonymity. It is a censorship charter, plain and simple.
The other is to pave the way for Tony Blair’s mandatory government ID nonsense which he failed to implement while lying to everyone as PM.
The Tech Secretary deliberately shutting down criticism by saying in plain terms that all opposition to the law is support for paedophilia and child abuse is proof enough that this is about censorship and the removal of anonymity online.
Politicians just like: “Epstein had all that blackmail info on all our sexual interests – how about if we did that, but for every voter who might be a threat to us?”
Re:
…you realize that leftists wrote this law, right?
Re: Re:
Lettuce see what Copilot says:
Thanks for the clarification! You’re referring to the Online Safety Bill, not the Official Secrets Act. Here’s the scoop:
📜 Was the Online Safety Bill introduced by the Tories under Sunak?
Not exactly. The Online Safety Bill was not originally introduced during Rishi Sunak’s premiership. It was first drafted and developed under previous Conservative leadership—particularly under Boris Johnson and Theresa May. However, Sunak did oversee its final stages and significant amendments during his time as Prime Minister.
🔧 What happened under Sunak’s leadership?
– In January 2023, Sunak faced a major backbench rebellion from around 50 Conservative MPs who wanted tougher penalties for tech executives failing to protect children online.
– To avoid defeat in Parliament, Sunak and Culture Secretary Michelle Donelan agreed to amend the bill so that tech bosses could face jail if they “consented or connived” in ignoring child safety requirements.
– This marked the third time Sunak had backed down to internal party pressure since becoming PM.
🧠 Key takeaway:
The Online Safety Bill was a Conservative initiative, but Sunak inherited it and played a crucial role in shaping its final form—especially by responding to internal party demands and public pressure around child safety online.
Want a breakdown of what the bill actually covers or how it compares to similar laws in other countries?
Re: Re:
Did they? From what I can find, two Conservative MPs introduced the bill.