February AI policy update: India summit shifts focus as UK targets chatbots
The global AI summit broadens its ambition, the Sovereign AI Unit bets on startup speed, and the PM cracks down on AI chatbots in February’s policy roundup.
UK updates
UK AI minister puts opportunity at the heart of UK strategy
In a wide-ranging interview with Jimmy McLoughlin for Jimmy’s Jobs of the Future, the AI minister, Kanishka Narayan, set out his vision for capturing the opportunities of AI in the UK and ensuring those opportunities are spread across the country.
The minister explained why he thinks building cognitive vigilance is vital when using AI, ensuring people can distinguish what is accurate from what is not. He also detailed his journey to becoming an MP and why he feels public policy can make a positive difference.
Sovereign AI Unit gambles on matching private VC
In a revealing interview for Sifted, Maya Dharampal-Hornby spoke to Josephine Kant, head of ventures for the government’s newly established Sovereign AI Unit, the £500m initiative to support early-stage startups.
Josephine said the unit’s ambition is to operate like a private VC, offering similar terms at a similar speed while providing startups with greater access to government. Given that government is not usually known for speed, whether the unit can deliver on that pace will be critical to whether it succeeds in its mission.
Sandboxes face scrutiny
The Department for Science, Innovation and Technology recently closed a consultation on the development of the AI Growth Lab, a proposed regulatory sandbox to help AI companies navigate regulations and bring their products to market more quickly.
While regulatory sandboxes can help bring safe AI products to market, the Ada Lovelace Institute’s Elsa Donnat argues in a recent piece that sandboxes must meet four key governance steps to ensure they work for people and society. These include ensuring innovation serves the public interest, protecting fundamental rights and safety during testing, ensuring companies remain viable after exiting the sandbox, and using sandbox evidence to drive democratic reform.
FCA launches AI review for retail markets
The Financial Conduct Authority has launched a review into the impact of AI on retail financial markets and consumers, with a special focus on agentic AI.
Meanwhile, amid the hype surrounding the AI agent social network Moltbook, where many supposedly autonomous agents turned out to be controlled by humans, Formation Advisory’s Dani Dhiman argues that such distractions obscure real commercial and regulatory developments that require attention, including AI firms moving increasingly into the retail space.
PM announces crackdown on AI chatbots to protect children
The prime minister announced new powers to crack down on illegal content created by AI and keep children safe amid fast-moving technologies.
The government has promised to move quickly to close a legal loophole and require all AI chatbot providers to comply with the Online Safety Act’s obligations on illegal content, or face the consequences of breaking the law.
This follows earlier government action calling out non-consensual intimate images shared on Grok, which subsequently led to the feature being removed from the social media site.
Global updates
India AI summit: from safety to action to impact
Taking place in New Delhi, the annual global AI summit has evolved since the first event in Bletchley in 2023. While the first summit had a narrow focus on AI safety and last year’s Paris event focused on action, this year’s summit had a broader focus on impact and understanding who can benefit from the technology.
But, as Jakob Mökander, director of tech policy at the Tony Blair Institute, said in a Politico article on the summit, while greater inclusion is a strength, it inevitably makes focus harder.
For daily snapshots from the summit, see Theresa Yurkewich Hoffmann’s posts and Rachel Adams’ posts.
International safety report flags evidence gaps
The International AI Safety Report 2026 was published this month. Now in its second year, the report assesses what general-purpose AI systems can do, what risks they pose and how those risks can be managed.
Led by Turing Award winner Yoshua Bengio and authored by more than a hundred international experts, the report aims to help policymakers navigate the ‘evidence dilemma’ posed by general-purpose AI. Models are becoming more capable, but evidence on their risks is slow to emerge. The report synthesises what is known about risks as concretely as possible while highlighting remaining gaps.
For more, see Tess Buckley’s summary. For an analysis of the societal resilience needed to resist, absorb, recover from and adapt to AI-related disruptions, see Patricia Paskov’s post.
Open source as the path to AI sovereignty
The Tony Blair Institute has published a new report on how middle powers can build influence in the age of AI.
Rather than creating domestic frontier models from scratch, the report recommends building capability across the open source AI ecosystem, enabling governments to focus on driving adoption, innovation and economic growth. To strengthen a country’s competitiveness, it also recommends establishing a flagship open source programme and shaping the market through government procurement.
For more on the report and why it matters, see Keegan McBride’s post.
New views
Translating AI principles into practice
Continuing the Appraise Network’s series of conversations with those putting AI policy into practice, we spoke to Isabela Parisio, a postdoctoral research associate at King’s College London and Responsible AI UK.
Isabela discussed her work researching regulatory sandboxes to translate AI principles into practice. While many policy guidelines set out high-level principles such as accountability or transparency, they often provide limited guidance on how developers and deployers should apply them. Organisations can interpret the same principles differently, and regulators face challenges assessing compliance. Isabela’s work helps close this gap and generate more effective regulation.
Why AI hyperbole obscures real risks
The hype surrounding the AI agent social network Moltbook, where claims of AGI and superintelligence turned out to be overblown, highlights a recurring problem in AI discourse.
For a deep dive into why AI hyperbole obscures genuine concerns and risks, see Verity Harding’s Substack.
New book: The AI Paradox
Virginia Dignum, professor of responsible AI, has published a new book, The AI Paradox, a guide to navigating the often contradictory relationship between artificial intelligence and human intelligence.
As Virginia says, the core message is that the more powerful AI becomes, the clearer it is that the real stakes are human choices, and the more urgent it is to take responsibility for how we design, deploy, govern and use it.
New opportunities
UK policy advisor, ControlAI
ControlAI, which campaigns to keep humanity in control of artificial intelligence, is hiring a UK policy advisor.
The new policy advisor will help build political momentum in Westminster to address the extinction risk posed by superintelligence. The core of the job is briefing parliamentarians in both houses and across parties, helping them understand why superintelligence is a first-order policy issue.
To get an idea of what the organisation has learnt from briefing more than 140 parliamentarians, read Leticia García Martínez’s summary.
Chartered PR practitioner James Boyd-Wallis is MD of the tech- and AI-focused corporate and public affairs agency Highbury Communications and co-founder of the AI policy network Appraise.
Further reading
UK marks AI action plan milestone as deepfake crackdown hits X
