UK Regulator Ofcom Proposes AI-Based Safety Measures for Social Platforms

The UK’s communications regulator, Ofcom, is consulting on new digital safety measures aimed at curbing the spread of illegal content across social media platforms.

The initiative includes advanced AI-based monitoring and restrictions on screen-recording of children’s livestreams. It responds to growing concerns about AI-powered manipulation and the viral spread of harmful material, while drawing on recent improvements in content moderation technology.

These proposed measures could have a significant impact on tech companies that utilise advanced AI or content moderation models.

Many of these firms already face intense scrutiny over their handling of illegal and harmful content online. By setting stricter regulations, Ofcom aims to protect vulnerable users, particularly children, and uphold digital safety standards.

The rapid growth of social media has brought about a myriad of challenges, particularly in content moderation. With millions of posts uploaded daily, it becomes increasingly difficult to monitor and filter harmful or illegal content effectively.

Advancing AI

AI advancements offer a promising solution, but they bring risks of their own, such as bias in moderation algorithms or the misuse of AI for harmful purposes. Ofcom’s decision to consult on these measures reflects the need to balance technological innovation with robust regulatory frameworks that ensure a safer online environment.
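
To make the bias concern concrete, a platform might audit its classifier by comparing how often benign posts are wrongly flagged across different user groups. The following Python sketch is purely illustrative; the grouping, sample data, and audit procedure are assumptions, not anything Ofcom has specified.

```python
from collections import defaultdict

def false_positive_rates(decisions):
    """Compare how often benign posts are wrongly flagged, per user group.

    `decisions` is a list of (group, was_flagged, is_actually_harmful)
    tuples, e.g. from a human-reviewed audit sample.
    """
    flagged_benign = defaultdict(int)  # benign posts the model flagged anyway
    total_benign = defaultdict(int)    # all benign posts seen per group

    for group, was_flagged, is_harmful in decisions:
        if not is_harmful:
            total_benign[group] += 1
            if was_flagged:
                flagged_benign[group] += 1

    return {g: flagged_benign[g] / total_benign[g] for g in total_benign}

# Hypothetical audit sample: a large gap between groups would signal
# the kind of moderation bias regulators are concerned about.
sample = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
]
print(false_positive_rates(sample))  # {'group_a': 0.5, 'group_b': 1.0}
```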

One emerging consideration in Ofcom’s proposal is the requirement for transparency around algorithmic decision-making.

Social media platforms may soon be compelled to disclose how their AI systems prioritise, flag, or remove content.

This push for algorithmic accountability marks a turning point in digital governance, highlighting the growing demand for explainability in AI operations. Such measures would not only bolster public trust but also encourage platforms to refine their systems for fairness and accuracy, reducing unintended censorship or bias.
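
What disclosure of this kind could look like in practice remains open. One plausible building block is an auditable record attached to each automated decision, capturing the model version, score, threshold, and resulting action. The sketch below is a minimal illustration; the field names and the `toxicity-clf-2025.06` model identifier are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModerationDecision:
    """Auditable record of one automated moderation action.

    Fields are illustrative: a real schema would be set by the platform
    (and possibly shaped by regulatory guidance).
    """
    content_id: str
    model_version: str   # which classifier produced the score
    score: float         # model confidence that the content is illegal/harmful
    threshold: float     # policy threshold in force at decision time
    action: str          # e.g. "removed", "escalated_to_human"
    rationale: str       # human-readable reason that could be shown to the user

def decide(content_id: str, score: float, threshold: float = 0.9) -> ModerationDecision:
    action = "removed" if score >= threshold else "escalated_to_human"
    return ModerationDecision(
        content_id=content_id,
        model_version="toxicity-clf-2025.06",  # hypothetical identifier
        score=score,
        threshold=threshold,
        action=action,
        rationale=f"Automated score {score:.2f} vs threshold {threshold:.2f}",
    )

record = decide("post-123", score=0.94)
print(json.dumps(asdict(record), indent=2))  # logged for audit or disclosure
```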

Another facet gaining attention is the psychological and developmental impact of social media on younger users, particularly in the context of livestreaming. Restrictions on screen recording are just one aspect; broader controls may include AI-driven tools to detect grooming behaviour in real time or to prevent exposure to age-inappropriate content.
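
How such controls might be wired into a platform is implementation-specific. As a rough illustration only, a server could derive livestream session flags from a broadcaster’s age-assurance status; every name in the sketch below (`StreamPolicy`, `policy_for`, the individual flags) is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class StreamPolicy:
    allow_screen_recording: bool
    allow_unknown_viewers: bool   # contact-from-strangers control
    run_realtime_risk_scan: bool  # e.g. AI scan of chat for predatory contact

def policy_for(broadcaster_is_minor: bool) -> StreamPolicy:
    """Derive livestream session flags from age-assurance status.

    Purely illustrative: real platforms would also enforce recording
    restrictions client-side, and risk scanning needs dedicated models.
    """
    if broadcaster_is_minor:
        return StreamPolicy(
            allow_screen_recording=False,  # the restriction Ofcom is consulting on
            allow_unknown_viewers=False,
            run_realtime_risk_scan=True,
        )
    return StreamPolicy(True, True, False)

print(policy_for(broadcaster_is_minor=True))
```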

These developments are part of a wider European trend, where regulators are increasingly taking cues from the EU’s Digital Services Act.

The UK’s alignment with such frameworks underscores a shift towards more harmonised digital oversight, potentially setting precedents for other jurisdictions.

Key Data and Developments

  • Online Safety Act Implementation:

    • The Online Safety Act became law in October 2023, introducing the UK’s first comprehensive online safety regulations (Ofcom bulletin, May 2025).

    • From March 17, 2025, user-to-user and search service providers must implement safety measures tailored to risk assessments and the specifics of their platforms (RPC Legal, May 2025).

    • All providers must name an individual accountable for compliance and conduct annual risk assessments, with a focus on child protection (Ofcom statement, April 2025).

  • AI-Powered Content Moderation:

    • Ofcom is consulting on expanding the use of AI and automated tools to detect and block illegal content, including deepfakes, hate speech, and child sexual abuse material (UK Tech News, April 2024).

    • The regulator is considering measures to prevent screen-recording of children’s livestreams and to improve recommender systems, aiming to stop illegal content from going viral (BBC News, June 2025).

    • Companies such as Unitary AI and Arwen AI are already analysing millions of videos daily to detect harmful content using machine learning.

  • Equal Treatment for AI-Generated Content:

    • Ofcom has confirmed that content created by AI tools or chatbots will be treated the same as human-generated content under the Act (Pinsent Masons, Nov 2024).

    • Platforms must ensure that AI-generated content does not violate safety standards or legal requirements.

  • Child Protection Focus:

    • More than 40 new measures will be enforced from July 2025 to protect children online, including safer social feeds, stronger age checks, and controls to prevent contact from strangers (Ofcom, April 2025).

    • Providers must complete children’s risk assessments by July 24, 2025, and implement the required safety measures by July 25, 2025 (Ofcom statement, April 2025).

  • Enforcement and Penalties:

    • Platforms failing to comply face fines of up to 10% of global revenue or £18 million, whichever is higher, plus possible site-blocking orders in the UK (Online Safety Alliance, March 2025). The short sketch below illustrates how that cap works.
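
The “whichever is higher” rule means the £18 million floor matters mainly for smaller firms: for any company with global revenue above £180 million, the 10% figure dominates. A minimal sketch of the cap, using only the figures cited above:

```python
def max_fine_gbp(global_revenue_gbp: float) -> float:
    """Maximum Online Safety Act fine: the higher of 10% of global
    revenue or a flat £18 million (per the figures cited above)."""
    return max(0.10 * global_revenue_gbp, 18_000_000)

print(max_fine_gbp(50_000_000))     # 18000000.0 -> the £18m floor applies
print(max_fine_gbp(2_000_000_000))  # 200000000.0 -> 10% of revenue applies
```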

 

