October AI policy update: UK bets on beating US as EU gambles €1bn
Britain’s AI minister says the UK can beat the US, critics question €1bn EU adoption plans, regulators launch new sandboxes. This month’s policy roundup covers the battles shaping AI.
California AI law
California governor Gavin Newsom signed the Transparency in Frontier AI Act into law.
The law is the first of its kind in the US and regulates frontier AI firms, requiring them to fulfil transparency commitments and report AI-related safety incidents. NBC AI reporter Jared Perlo has more on this unprecedented legislation.
Meanwhile, Jonas Freund asks how the new law attempts to address key risks and how whistleblower protections change accountability.
UK national security and the Frontier AI Act
This week, the Centre for Emerging Technology and Security and the Alan Turing Institute published a new report on the implications of California's Frontier AI Act for British national security.
The report's author, Connor Attridge, suggests the act could create second-order national security benefits for the UK, advance AI regulation, and enhance accountability more broadly.
Is the British government prepared for a major AI incident?
The Centre for Long-Term Resilience has published a new report this week, assessing whether the UK government is prepared for major AI incidents.
The short answer is no, according to the report.
However, as the report’s author Tommy Shaffer Shane says, there are steps the government can take to correct this failing.
Last, new thinking: why AI needs social science
In a fascinating and persuasive piece, Cosmina Dorobantu, professor of practice at LSE, and Helen Margetts, professor of society and the internet at the Oxford Internet Institute, University of Oxford, argue why AI needs social science.
The most sophisticated AI models are, at their core, about people. And if people are central to AI, they argue, the social sciences must play a crucial role in its development.
The social sciences offer a lens through which we can study and understand people. They shed light on how economies function, how politics, governance, and law shape our environment, and what drives societies and human behaviour.
Therefore, the only way to shape an AI future that serves humanity is by building bridges between AI and the social sciences, they conclude.
New EU initiative: the Apply AI strategy
The EU launched the Apply AI strategy, which, according to the European Commission, is designed to enhance the competitiveness of strategic sectors and strengthen the EU's technological sovereignty.
It aims to boost AI adoption and innovation across Europe, particularly among small and medium-sized enterprises, and to make the EU an "AI continent".
Philipp Hacker, professor of law and technology at ENS, says the frontier AI initiative is crucial for developing models that rely on a European tech stack, particularly in fraught geopolitical times.
However, Frederike Kaltheuner questions whether it's worth directing €1bn (£877m) towards AI. Given that the money comes from the existing budget, it will involve trade-offs that the commission doesn't explore.
AI safety needs sceptics
In a fascinating piece published on the Tech Policy Press website, Eryk Salvaggio argues that the AI safety and risk community has an unshakable belief in imagined threats.
While he acknowledges the community is well-meaning, he asks whether the language its members use to describe the technology contributes to the very problem it aims to curb.
Instead, he argues, we should build our policy and social infrastructure around realistic assessments of the tools we engage with.
New thinking: the political economy of AI
Adam Thierer, an innovation and technology policy analyst at the think tank R Street Institute, has shared a short slide deck on why AI policy is so challenging.
In it, he identifies four key issues, including AI's definitional challenges, its general-purpose nature, dual-use applicability, and a rapidly changing business landscape.
Despite this, interest in AI policy is growing, with actors bringing differing agendas, interests and rationales for regulation.
New views: in conversation with Christabel Randolph
Continuing The Appraise Network's series of articles interviewing those making an impact in AI policy, we spoke to the excellent Christabel Randolph, associate director at the Center for AI and Digital Policy (CAIDP).
In the piece, Christabel discusses how governments are developing AI policy and the principles that inform these policies. She also discusses the new risks and fresh questions that generative AI has led to, and the role that the CAIDP plays in promoting the development of AI policy that benefits people.
It's a rich and insightful interview from someone at the forefront of AI policy.
Policy responses to AI's economic impact
The frontier model developer Anthropic has published a new blog asking how the arrival of powerful AI systems will change the structure of the economy.
Policy responses the firm puts forward include those relevant for all likely scenarios, such as worker reskilling. The blog also details policies for scenarios of moderate acceleration, such as taxes on compute, and responses for fast-paced scenarios, like sovereign wealth funds with stakes in AI.
For a thoughtful and considered take on some of these responses, see Julian Jacobs' post, where he details those ideas he likes, those he's less fond of and the ideas that are missing.
New interview: the UK's AI minister
In a cracking piece published late last month, political reporter Zoë Crowther at PoliticsHome and the House Magazine interviews the government’s AI minister, Kanishka Narayan.
In the piece, Kanishka says that Britain can outpace the US in terms of AI adoption. However, he suggests this relies on building trust and giving people agency in the technology's adoption.
"If we can convince people, through proof as well as narrative, that this is something that can be meaningful for their personal lives, for their communities … that is the fundamental thing that we're focused on," says the minister.
However, Imogen Parker, associate director at the Ada Lovelace Institute, adds caution in the piece: "When you've got political pressure flowing down, there's a real risk that what you end up with is techno-solutionism."
It's a great piece and is well worth your time.
AI Growth Labs
The Department for Science, Innovation and Technology (DSIT) announced its intention to create AI Growth Labs - a regulatory sandbox to support growth and responsible AI innovation.
DSIT wants the UK to be the best place to test, scale and grow AI. So, the sandbox - a controlled testing environment - will explore where the UK government can lift or amend outdated rules to help accelerate AI adoption.
Vinous Ali, deputy executive director of the Startup Coalition, which called for the policy, celebrates the win and is seeking opinions from AI start-ups.
New paper: Centering the margins
This week, Garfield Benjamin, assistant professor in AI ethics and society at the University of Cambridge, has a new paper out on mapping AI systems as assemblages of social relations to highlight power structures.
Part of the purpose of the research is to drive a critical and structural analysis of AI systems and help provide insights for relevant groups, such as policymakers, on who is most affected.
With three UK public sector examples, the paper is an essential read for anyone focused on UK AI policy and adoption.
Global AI governance: the Brussels effect
The “Brussels effect”, the idea that companies outside Europe adopt EU rules and standards, is not the only model of tech governance with cross-border influence.
As Nanjira Sambuli notes, there is growing talk of a “Beijing effect” and a “Delhi effect” as rivals to Europe’s vision for tech governance and regulation.
In a new article, Nanjira examines how these effects shape the African continent and the pros and cons of each.
It’s an insightful piece worth reading to understand how global tech governance trends are influencing Africa.
How to drive AI adoption
How should we drive AI adoption? Lengthy strategies and new pilots and sandboxes?
No, argues Alexandru Voica, head of corporate affairs and policy at AI video platform Synthesia, in an article in Transformer.
Instead, he says we should focus on what businesses care about: building trustworthy workflows that deliver real-world value and are based on international standards such as ISO 42001, 27001, and 27701.
It’s a well-argued piece that offers a pragmatic approach to building trustworthy AI and fostering safe deployment.
New opportunities: UK public policy manager, Cohere
Enterprise AI firm Cohere is hiring a government affairs and public policy manager in the UK.
The firm, aiming to scale intelligence for humanity, is building a UK team to engage with the emerging landscape of enterprise AI policy.
The new UK manager will represent Cohere in London, across the UK, and beyond as the AI policy, procurement, and innovation agendas evolve.
Cohere’s head of EMEA government affairs, Stéphanie Finck, has more on the role.
Chartered PR practitioner James Boyd-Wallis is vice-chair of the CIPR Public Affairs group and co-founder of The Appraise Network, for AI policy and advocacy professionals in the UK.