Huw Roberts suggests that while the AI summit was a success, the UK has focused on leading the international conversation at the expense of domestic AI regulation.
As of February 2023, a white paper outlining the UK’s approach to regulating AI had still not been published, following a year of delays. The country had no coherent plan for the role it wanted to play internationally in governing AI. And requests to join EU and US AI governance dialogues were being snubbed.
Fast-forward to November and the UK has just hosted an AI summit that brought together 29 governments, key international institutions, major AI companies, and a host of civil society and academic stakeholders to discuss risks to safety from ‘frontier AI’ systems that possess dangerous capabilities. In the communiqué from the summit – the Bletchley Declaration – participating governments agreed on a set of common risks from frontier AI systems and committed to collaborating in addressing these through international processes, including future AI safety summits.
International governance is hard, particularly for a highly politicised technology like AI. So reaching agreement among a diverse range of stakeholders should be considered a remarkable success. Of particular note is the fact that both China and the US signed the Bletchley Declaration at a time of heightened political tension between the countries over AI. The summit also saw companies agree to ‘like-minded countries’ testing their AI models before release.
These efforts are praiseworthy, but it is important to be clear-eyed about the costs associated with the UK’s decision to host an AI summit and the significant uncertainties surrounding what comes next.
Domestic regulatory lag
While the UK prioritised its resources towards making the AI summit a success, other governments have been progressing domestic AI regulation. Just before the AI summit kicked off, the US published a sweeping executive order that regulates how the federal government uses AI and requires companies to disclose results from safety tests of advanced AI systems.
In August, China introduced a new law focused on ‘generative AI’: systems like ChatGPT and DALL-E 3 that can generate text, images, and other content. The law includes wide-ranging restrictions related to the data used to train these systems and liability clauses for the content generated.
The EU is concluding trilogue negotiations between the European Commission, European Council, and European Parliament on its comprehensive AI Act. This law aims to regulate different types of AI based on the level of risk they are deemed to pose. This includes provisions related to the most advanced types of AI, which were excluded from early drafts of the AI Act.
In the UK, a strong emphasis on leading an international response to ‘frontier AI’ has left the foundations of good domestic governance neglected. There are currently no concrete plans to introduce a cross-cutting AI law, with the country instead relying on existing regulators to address risks related to their remits. Promising work is being undertaken by individual regulators, but overarching questions about coordination and regulatory capacity remain.
A Central Risk Function designed to alleviate these concerns by supporting coordination and monitoring risks was announced in September. However, information about the resources, specific functions, and capacities of the body has been limited. There is little to indicate that the UK is taking seriously warnings about regulatory coordination and the capacity needed to address immediate-term risks from AI. This means that while the UK attempts to lead the conversation on AI internationally, it may fail to govern AI effectively domestically.
International regulatory competition
Despite the UK’s best efforts, there is also significant uncertainty surrounding the longer-term impact that the AI summit will have.
Future summits have been scheduled in South Korea and France, in six months and a year respectively. However, the international AI governance landscape is already extremely crowded. In October alone, the UN announced its High-Level AI Body tasked with advancing recommendations on international AI governance, and the G7 agreed on generative AI principles and a voluntary code of conduct. It will be challenging to maintain momentum into future summits in light of competing initiatives.
Arguably more promising is the announcement of an AI Safety Institute (AISI): an expert body designed to examine, evaluate, and test advanced AI systems. Modelled on the Intergovernmental Panel on Climate Change (IPCC) which provides expert knowledge on climate issues, the AISI aims to inform a global response to the opportunities and risks posed by frontier AI.
Unlike the multilateral Bletchley Declaration, the AISI was established unilaterally by the UK. This point is crucial. The IPCC is far more than an independent expert body. It is a recognised intergovernmental body of the UN and a political process whereby expert reports are negotiated and approved by 195 member countries. Government agreement, as much as scientific consensus, provides the IPCC with its authoritativeness.
The peculiarity of the UK unilaterally establishing an expert body to guide technologies predominantly developed by American and Chinese companies has not been lost on other governments. US Vice President Kamala Harris, for instance, stated that it is America that will lead global action on AI before announcing the US’ own AI Safety Institute. This raises questions over the authoritativeness of the UK’s AISI.
With the AI summit now concluded, the challenge of making it a long-term success has begun. More diplomatic work will be needed to convince other states of the central role the new AISI should play. Collaborating with partners and continuing to bridge dialogue with China will be central to this ambition. But it will be a largely futile endeavour if inadequacies in domestic AI regulation allow harms to materialise that undermine the UK’s reputation internationally.
By Huw Roberts, DPhil student, Oxford Internet Institute.