The UK government will work with Microsoft, academics and other experts to develop technical standards and an evaluation framework for tools that detect AI-generated deepfakes. Officials framed the effort as part of a broader push to set consistent technical and policy standards to counter criminals who weaponise manipulated media.

The UK has announced a public-private initiative to create a standardised evaluation framework and technical standards for deepfake detection tools, in partnership with Microsoft, academic researchers and subject-matter experts. Government officials cited growing concern that manipulated audio and video are being weaponised by criminals to commit fraud, spread disinformation and undermine public trust.

The framework aims to provide agreed metrics, test datasets and assessment methods so that developers, procurement teams and regulators can compare detectors on consistent grounds. Organisers said the project will also address operational constraints such as false positive rates, adversarial robustness and real-world deployment challenges across platforms and devices.

The initiative is presented as complementary to policy work on harmful AI content, evidence sharing with law enforcement and international collaboration, while avoiding reliance on proprietary, closed evaluation regimes. By aligning technical evaluation with regulatory expectations, the UK hopes to accelerate the development of reliable detection tools that industry, media outlets and public agencies can adopt to curb synthetic-media fraud and protect consumers and institutions.
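To make "comparing detectors on consistent grounds" concrete, here is a minimal sketch of one metric such a framework could standardise: the false positive rate each detector incurs when its threshold is tuned to catch 95% of known fakes on a shared labelled test set. This is illustrative only and not drawn from the initiative itself; the detector names, synthetic scores and the 95% operating point are assumptions for the example.

```python
import numpy as np

def fpr_at_tpr(labels, scores, target_tpr=0.95):
    """False positive rate at the threshold where the detector flags
    `target_tpr` of the fakes (labels: 1 = fake, 0 = real)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    fake_scores = np.sort(scores[labels == 1])
    # Pick the threshold that the top `target_tpr` fraction of fake scores meets.
    k = int(np.floor((1.0 - target_tpr) * len(fake_scores)))
    threshold = fake_scores[k]
    real_scores = scores[labels == 0]
    return float(np.mean(real_scores >= threshold))

# Two hypothetical detectors scored on the same shared test set
# (scores are simulated here purely to demonstrate the comparison).
rng = np.random.default_rng(0)
labels = np.array([1] * 500 + [0] * 500)
detector_a = np.concatenate([rng.normal(0.8, 0.10, 500), rng.normal(0.3, 0.15, 500)])
detector_b = np.concatenate([rng.normal(0.7, 0.20, 500), rng.normal(0.4, 0.20, 500)])

for name, scores in [("A", detector_a), ("B", detector_b)]:
    print(f"Detector {name}: FPR @ 95% TPR = {fpr_at_tpr(labels, scores):.3f}")
```

Fixing both the test set and the operating point in this way is what allows a procurement team or regulator to rank detectors on a single comparable number rather than on vendors' self-reported accuracy figures.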