- New Online Safety Rules Begin 25 July 2025: UK tech firms must implement over 40 safety measures to shield children from harmful content such as material promoting self-harm, pornography, and online abuse.
- Heavy Penalties for Non-Compliance: Companies that fail to comply could face fines up to £18 million or 10% of global revenue, and even criminal charges in severe cases.
- Criticism from Both Sides: Child protection advocates say the Act doesn’t go far enough, while privacy groups argue the age checks are intrusive and pose security risks.
In a landmark step towards regulating online content, the UK government has finalized new child safety regulations under the Online Safety Act. Coming into effect on 25 July 2025, the rules will require tech companies operating in the UK—including social media, search engines, and gaming platforms—to adopt more than 40 mandatory safety measures. These aim to protect children from exposure to content involving self-harm, suicide, pornography, eating disorders, bullying, and other forms of online abuse.
Under the new rules, platforms must redesign algorithms to limit harmful content in children's feeds, introduce stricter age verification systems, and remove dangerous content more quickly. Companies are also expected to provide support to children exposed to harmful material and designate a specific executive responsible for child safety. Failure to meet these standards could result in fines of up to £18 million or 10% of global revenue, whichever is greater, and, in severe cases, criminal prosecution or court orders blocking access to their platforms in the UK.
Despite its strong stance, the Act has attracted criticism from campaigners and child safety advocates who argue that the rules do not go far enough. Calls have been made for an outright ban on social media for under-16s, and concerns have been raised about the law’s lack of safeguards around private messaging apps. Meanwhile, privacy groups warn that age verification methods may pose risks related to surveillance, data misuse, and exclusion, sparking a fresh debate over how to balance child safety with digital rights.
The Act also tackles other illegal content, requiring platforms to proactively remove material related to child sexual abuse, terrorism, coercive behaviour, and the sale of illegal drugs or weapons. New offences have been introduced, such as cyber-flashing and the distribution of AI-generated deepfake pornography. These additions are part of a broader initiative to modernize legal protections in the digital age.
Most children in the UK are highly active online, spending between two and five hours a day, and the data paints a troubling picture: nearly 60% of teens report seeing harmful content, and many have encountered misogynistic and violent media. In response, regulators and safety organizations are encouraging parents to use available tools like parental controls, teen account settings, and app-specific restrictions. However, with one in five children reportedly disabling such protections, the success of the Online Safety Act will ultimately depend on both industry enforcement and sustained parental involvement.