
26 Jun 2023


Huw Roberts examines the differing approaches the UK and EU are taking to regulating AI in response to a rapidly evolving technological landscape.

Artificial intelligence (AI) is increasingly impacting all aspects of our lives. OpenAI’s ChatGPT has become the fastest-growing internet application of all time, with companies ranging from Microsoft to Duolingo integrating it into their products.

While AI technologies bring about benefits, they also pose significant risks, which is why the EU and UK have both shown a keen interest in regulating these technologies. Efforts to regulate AI are ongoing, yet the approaches being taken by the UK and EU differ significantly.

European Union

In April 2021, the European Commission proposed the AI Act, draft legislation setting out rules for governing AI within the EU. This draft has since been amended by the EU Council and Parliament, with the final text set to be agreed by late 2023 or early 2024.

The draft AI Act is a “horizontal” regulation, meaning it lays out rules for AI across all sectors and applications. It establishes four levels of risk for AI: unacceptable risk, high risk, limited risk, and minimal risk. Different rules apply depending on the level of risk a system poses to fundamental rights.

AI systems deemed to pose an unacceptable risk, including real-time remote facial recognition systems used in public spaces, are set to be prohibited. High-risk systems, like those used in critical infrastructure, will be subject to several requirements, including conformity assessments. Limited and minimal risk systems will follow transparency requirements and voluntary guidance respectively.

The EU’s horizontal approach provides overarching rules for AI, but its rigidity has drawbacks for a fast-moving field like AI. Notably, the risk framework proposed may struggle to adapt to new developments.

This issue is already materialising. Since the first draft of the AI Act was published, there have been significant developments in “foundation models”, like OpenAI’s ChatGPT, which are trained on broad data and designed to be easily adapted for multiple tasks. For instance, ChatGPT can be used for generating benign text, like a football chant for a new signing, or for malicious purposes, like generating text for sophisticated phishing attacks.

Foundation models complicate the initial risk-based framework proposed by the EU, as the framework was designed to regulate AI trained to complete a specific task, like a system used to sift CVs or a facial recognition camera. As a result, the initial draft of the AI Act places relatively few restrictions on foundation models, despite the significant risks they pose.

Efforts have been made by the EU Council and Parliament to update the task-specific risk framework to account for foundation models, yet it is doubtful that these after-the-fact revisions will adequately address the full spectrum of harms.

Despite these challenges, the EU’s AI Act looks set to have an international influence. Because of the EU’s market size and regulatory capacity, companies are incentivised to develop and offer EU-compliant products. For many types of AI systems, particularly those that are difficult to alter based on the region of deployment, companies are likely simply to adopt the EU’s rules internationally.

United Kingdom

The UK has taken a different approach to AI regulation, favouring a “vertical” strategy that relies on existing regulators considering the impacts of AI on their jurisdictions. This position was first laid out in 2018 and subsequently affirmed in the recently published AI Regulation White Paper. However, after receiving industry feedback emphasising that this approach risked inconsistency, overlap, and gaps, the Government proposed a set of central functions to support regulatory coordination and to monitor cross-cutting risks.

The rationale behind the UK’s vertical approach is that it limits new regulatory burdens which may hinder innovation, while also providing sufficient flexibility to deal with new technological advances. Given the difficulty the EU has faced in updating its regulatory framework, there is some merit to this position.

The key drawback of the UK approach is the continued ambiguity over how it will be enacted in practice. Regulators are being given no new powers or funding to support them in addressing AI harms, and little detail has been provided about what the central Government support functions might look like. Because of this, it is unclear whether regulators will have the resources to address emerging risks, particularly from foundation models that have a cross-sectoral impact.

This domestic ambiguity has not prevented the UK from touting its credentials as an international leader in AI regulation. Over the past few months, Rishi Sunak has adopted AI as a key international policy priority. He announced that the UK will host the first Global Summit on AI Safety, while also promoting London as the home of a global AI regulatory body.

The types of harms Sunak focuses on are those that received less attention in the EU AI Act. The summit centres on longer-term AI safety issues, with explicit mention of threats that could “endanger humanity”. This is a major U-turn in UK policy: less than a year ago, the Government was asking regulators not to focus on this type of “hypothetical risk”.

Prospects for UK leadership

Going forward, there is scope for synergistic leadership roles from the EU and UK. EU regulation provides a strong foundation for addressing harms that are already materialising, like AI bias. Meanwhile, the UK’s narrative appears to be leaning towards agile leadership, particularly for addressing the longer-term risks of cutting-edge systems that are not adequately addressed in the draft AI Act.

This type of complementary governance is desirable, but it will not be easy to achieve. There are two key risks for UK AI leadership going forward. First, an excessive focus on longer-term risks, without supporting regulators in tackling present harms, could undermine domestic governance efforts. This would threaten the UK’s international credibility as an AI leader.

Second, the international community may fail to get behind UK AI leadership. The EU and US have already been aligning many aspects of AI policy through the Trade and Technology Council, from which the UK has been excluded. International bodies, like the OECD, UNESCO, and the Global Partnership on AI, have also been working towards international agreements on AI.

To overcome these risks, the UK needs to translate its rhetoric on regulation into a reality. Only then can it become a credible international leader in AI governance.

By Huw Roberts, Research Fellow in AI & Sustainable Development, University of Oxford
