The UK is laser-focused on the frontier
The UK’s AI Safety Summit is fast approaching; industry and governments will gather to discuss the risks of advanced AI at Bletchley Park this week. Guests include US Vice President Kamala Harris, European Commission President Ursula von der Leyen, UN Secretary-General António Guterres, and a range of leading industry figures and scientific experts.
The UK has come a long way since the publication of its AI governance white paper. Back in March, it proposed a light-touch regulatory approach, empowering existing regulators to apply existing laws; there was no mention of regulating models at “the frontier”, and the Government had only hastily assembled the Foundation Model Taskforce.
Fast-forward to October and frontier models are at the epicentre of the Government’s concerns. The Taskforce is now focused on the frontier and extreme risk, and the UK has published a detailed paper on the future risks of frontier AI, to be discussed at the Summit. In his pre-Summit speech on Thursday, the Prime Minister emphasised that he is “unashamedly optimistic about the power of technology to make life better for everyone”. Despite the positive rhetoric, the accompanying publications focus heavily on risk, and the relaxed approach does not extend to the most advanced models at the frontier. He will have a further chance to set out his views in conversation with Elon Musk on Thursday evening.
However, the UK faces competition. The US is clearly leading the way on frontier AI, having secured voluntary commitments from the world’s top AI companies to manage risks, now augmented by an Executive Order. The Vice President is also poised to deliver a speech on the Biden-Harris Administration’s vision for the future of AI ahead of the Summit, where she is set to reveal a $200 million investment in AI from philanthropic foundations to protect democracy, support workers and enhance transparency. The G7 has also published an AI code of conduct for organisations developing advanced AI systems, part of its Hiroshima Process examining the opportunities and challenges of generative AI.
Could the Summit be a success?
Despite developments in other fora, the Summit is still shaping the agenda and will centre on establishing:
(1) A global advisory group on AI risk. Foreshadowed by tech leaders such as Eric Schmidt, and by Ursula von der Leyen in her State of the Union address, the Prime Minister has confirmed his support for an advisory group loosely modelled on the UN’s Intergovernmental Panel on Climate Change. A leaked communiqué (sent to EU Member States) indicates the Summit’s intention to drive this through existing fora such as the UN or the Global Partnership on AI. To grip this issue, the UK has published a comprehensive discussion paper on the capabilities and risks of frontier AI.
However, despite the communiqué, it is unclear which Summit attendees will support the advisory group. Delivering it will be a major challenge, particularly given that China has recently questioned the West’s ideas on AI regulation by establishing its own Global AI Governance Initiative.
(2) An AI Safety Institute. The Prime Minister insists the UK’s answer to frontier risks is “not to rush to regulate”. Instead, his Thursday announcement included a commitment to establish an AI Safety Institute to allow national security experts to evaluate AI labs’ most advanced models. The Institute will emerge from the Frontier AI Taskforce and is loosely modelled as a “CERN” for AI safety. Labs have already provided the Government with access to their systems, putting the UK in a position shared only with the US.
Even with this solid groundwork, international buy-in is highly desirable: coherent agreement on future regulation determining how and when models should be accessed and deployed is vital for instilling trust in the technology and its future applications. The UK has voiced its intention to deepen partnerships at the Summit to drive this forward. Given the flurry of announcements coming from the US, this is likely to take the form of at least a UK-US partnership; it will be interesting to see whether the initiatives enjoy wider support.
What does this mean for AI regulation?
Given the political focus on AI safety, businesses should prepare for how this will affect their own AI use, downstream of the developers. DSIT Secretary Michelle Donelan has defended the UK’s principles-based strategy for AI regulation, reiterating her rejection of a “one-size-fits-all” approach.
However, this does not preclude future regulation. Increasing discussion and understanding of extreme future risks and their mitigations among governments could prompt downstream effects, such as new regulatory and technical tools applied to “current” risks or less advanced models. The UK’s discussion paper assesses a wide range of risks presented by frontier AI, including how advanced models contribute to bias, unfairness and representational harms. Biden’s Executive Order extends federal oversight of AI beyond advanced systems and includes a call on Congress to pass bipartisan privacy legislation.
Away from advanced models, neglecting the debate on general AI regulation risks regulatory fragmentation. The EU is set to introduce its AI Act soon; that approach contrasts starkly with the US focus on voluntary risk management, and even more so with India’s stated intention to introduce no legislation. Yet all of these strategies are relaxed compared with China’s strict regime.
While it is notable that the US Executive Order and the $200 million investment include support for workers’ rights, privacy and equity, it will be important that these strategies are not implemented in silos. Businesses are still feeling the effects of a lack of global cohesion on data regulation, even among relatively aligned Western allies, which continues to stymie trade, investment and innovation across borders. Governments should learn from this when approaching AI to avoid regulatory splintering. However, as regulation is often value-driven, consensus will be tough and will require proactive dialogue.
What might Labour do differently?
The current Government’s ambitions might not come to fruition given the looming General Election. If it wins, Labour will inherit the same challenges and trade-offs. Reacting to the Prime Minister’s speech, Shadow DSIT Secretary Peter Kyle called on the Government to publish next steps for protecting the public through regulation of advanced AI, suggesting that Labour may want to bring in rules and frameworks quickly. A Labour government would “set clear standards for AI safety” to restore public services and boost growth.
It is possible a Labour government would shift the focus from frontier AI safety to realising opportunities across society, including improving public services and the machinery of government. Labour might also give greater weight to current or near-term issues, such as changes in workplace dynamics. The Shadow DSIT Secretary’s conference speech indicated that Labour will not duck the geopolitical tensions around the technology, emphasising that the UK cannot leave AI governance to the US and China. Labour recognises the strength of the UK’s AI ecosystem and will want to support it at home and on the world stage.
What this means for business
The rise in policymakers’ focus on risk and safety, combined with the increasing use of AI across businesses (most organisations use third-party AI tools to enhance operations), means all organisations should develop an AI governance strategy. This should assess where AI is deployed and the organisation’s exposure to regulation, and establish internal capability to keep the use of AI tools under review.
Ultimately, all businesses also need to prepare for an increasingly fragmented environment. It is vital to continue to emphasise to policymakers the importance of global regulatory alignment.
With the upcoming election, there is an opportunity for businesses to establish and maintain close relationships with Labour around AI use and solutions. This will be important to sustain the party’s recognition of, and focus on, investing in the societal benefits of AI and enhancing public services.
This blog was written by Charlie Jackson, manager; David Skelton, partner; and Josh Simons, specialist partner. Charlie specialises in advising clients on complex AI and technology issues. David leads Flint’s work on tech policy in the Asia Pacific, based in Singapore. Josh was a Research Fellow at Harvard University and is now Director of Labour Together, a think-tank specialising in centre-left political strategy and policy.
To find out more about how Flint can help you navigate the risks and opportunities of these developments, get in touch.