The Online Safety Act: saving kids or dooming them?
- Shannon Walker
- Sep 5
- 2 min read
On 25 July 2025, the UK government's newest online safety rules came into force, requiring online platforms to protect children and adults from "harmful" content.
Platforms such as PornHub are now required to use what the government calls "highly effective" age verification, which can mean the platform scanning a user's face and estimating their age from the scan.
Ofcom, the regulator responsible for online safety, is independent; the government itself does not oversee it.
So why should the public trust these platforms with bank card details or government IDs, information that, if leaked, could cause havoc?
Any app or platform that allows users to contact each other has to comply with the new act.
Take I Am Sober, for example: an app that tracks how long users have been sober and offers helpful tips for staying that way. It also featured a community page where users could support others in similar situations; that page has now been removed to comply with the Online Safety Act.

Removing pages like this could cause more harm than good: cutting people off from support and information in a crisis could have serious consequences.
I Am Sober is not the only platform to suffer under the act. Reddit and X (formerly Twitter) have also fallen into the firing line, with communities for discussing topics like being trans or medical issues now blocked behind age verification.
This doesn't only affect children: adults without any form of identification, or with a baby face, cannot access pages that could provide vital information.
Contrary to the government's claims, the age verification is not "highly effective," as people online have demonstrated by fooling the facial scans with Norman Reedus's character from Death Stranding.
Many people feel that the act does not protect children so much as endanger adults, and that if it is not eased, people will get hurt.