November AI policy update: £20bn boost, child safety laws and science strategy
The UK announces major AI investments and child protection laws, while experts examine AI agents and their local impacts.
UK AI funding package
The UK government announced significant AI funding and investments to drive forward the AI Opportunities Action Plan.
These included some £20bn in private funding across two AI Growth Zones in north and south Wales. In addition, the Sovereign AI Unit is backed by a new £500m fund to support British AI firms.
The Sovereign AI Unit also announced an Advanced Market Commitment to procure £100m of inference systems for the AI Research Resource, the UK's public compute infrastructure.
New compute paradigms can deliver orders-of-magnitude performance gains over incumbent platforms, but the journey from design to production is challenging. The commitment therefore guarantees demand, de-risking private investment, and provides commercial validation that can serve as a springboard to wider opportunities.
James Chalmers, a senior policy advisor in the Sovereign AI Unit, has more on the announcement.
And see Dom Hallas’ LinkedIn post for a detailed rundown of all the commitments.
New initiative: AI for Science
The government has also launched its AI for Science strategy, which aims to accelerate AI-driven scientific discovery and to position the UK as a global leader in research and innovation.
As techUK's Usman Ikhlaq explains, this provides a £137m investment over the next five years to support AI-enabled breakthroughs with a focus on advanced materials, fusion energy, medical research, engineering biology, and quantum technologies.
Usman shares more of techUK's insights on the strategy.
New report: the dilemmas of delegation
The Ada Lovelace Institute has published a new report that analyses the policy challenges posed by advanced AI assistants and natural-language AI agents.
The report's author, Harry Farmer, notes that the research examines the problems that arise when these systems do not work as intended, as well as those that may emerge even when they work exactly as designed.
The report also examines tech industry leaders' aspirations for advanced AI assistants to become the primary means through which we interact with the internet and digital information, and the implications this has for power dynamics between firms and the public.
New report: AI in the street
A new report from Rachel Coldicutt from Careful Industries, Dr Maya Indira Ganesh from the Leverhulme Centre for the Future of Intelligence, and Noortje Marres from the University of Warwick, among others, explores the perception that government AI policy does not serve the needs of local communities in the urban environments where AI innovation takes place.
As Rachel explains, the report finds there is a disconnect between tech and innovation policy and social policies focused on flourishing communities. Recommended responses include a cross-departmental social and wellbeing AI strategy to empower local governments.
It's a fascinating paper and is required reading for anyone interested in the local and regional impact of AI and how policymakers should respond.
New UK legislation: AI and online safety
The UK government announced new laws to prevent AI from being exploited to create child sexual abuse material.
The new legislation aims to empower AI developers and child protection organisations to test models safely and help build safeguards into AI from the start.
As the Internet Watch Foundation's chief executive, Kerry Smith, said: "Safety needs to be baked into new technology by design. Today's announcement could be a vital step to make sure AI products are safe before they are released."
The Internet Watch Foundation has more on the announcement.
Policy brief: frontier safety standards
Sophie Williams, with co-authors from the Centre for the Governance of AI, has published a new policy brief that investigates the merits of frontier AI companies assessing risk relative to their competitors' models.
Sophie notes that, under this approach, a company might decide to lower its own mitigations if a competitor has a similar model with weaker ones, on the basis that doing so wouldn't increase the ecosystem's total risk.
While this may seem reasonable in principle, the reality is far more complicated. For instance, it is hard to know how capable or well-mitigated competitors' models actually are, and the approach allows the least responsible actor to set the bar for safety standards.
New interview: international AI safety
New on the Appraise Substack, Audrey H. has interviewed Shalaleh Rismani of Mila - Quebec AI Institute on the International AI Safety Report.
The report brings together research from experts worldwide to provide a shared evidence base on the capabilities and risks of advanced AI systems. Its recent publication focuses on rapid advances in AI reasoning capabilities and examines how these developments intersect with emerging risks.
In the interview, originally published in the Internet Exchange newsletter, Audrey spoke to Shalaleh to better understand the thinking behind the report, its aims and key highlights, and how to ensure policymakers take notice.
New book: Prophecy by Carissa Véliz
Carissa Véliz, a professor at the Institute for Ethics in AI at the University of Oxford, is launching a new book focused on prophecies: the predictions that now determine our lives, from personal finance to the news we consume.
The author of Privacy is Power, the Economist's best book of the year in 2020, Carissa has turned her attention to how predictions about humans are often self-fulfilling, and how a robust society requires not more prediction, but better preparation.
It will be essential reading for anyone interested in how technology impacts society and the challenges policymakers face in navigating such prophecies.
In conversation with Alisar Mustafa
Continuing the Appraise Network's series of articles interviewing AI policy leaders, I spoke to Alisar Mustafa, head of AI policy and safety at Duco.
In the interview, Alisar, also the author of the AI Policy Newsletter, discusses the evolving, high-stakes environment of AI policy. She also details how to translate principles into practice and how governments can help build safety from the start.
New newsletter: AI Beyond Borders
Minh H Chau has launched a new Substack: AI Beyond Borders.
In his new fortnightly newsletter, Minh will broaden the AI conversation beyond the usual US-China spotlight and explore how under-the-radar countries are approaching AI governance and adoption in ways that fit their local context.
He's hoping to feature a diversity of views, so get in touch with him if you want to share how AI looks from your perspective.
New event: scaling state AI capacity
Tom Westgarth and txp, in collaboration with AI publication Transformer, are hosting an event about what it will take to turn the UK into the best place in the world to work on AI in government.
Featuring Kanishka Narayan, the minister for AI and online safety, and Henry de Zoete, former No 10 AI advisor, it promises to be a great event.
Tom Westgarth has more details on how to sign up.
Chartered PR practitioner James Boyd-Wallis is vice-chair of the CIPR Public Affairs group and co-founder of The Appraise Network, for AI policy and advocacy professionals in the UK.