Publication of UK Government’s Response to its 2023 White Paper on AI Regulation

February 26, 2024

The UK Government has this month, February 2024, via the Department for Science, Innovation and Technology (“DSIT”), published its response paper, “A pro-innovation approach to AI regulation: government response”. This follows its consideration of responses to its White Paper on AI Regulation, published last year. The White Paper had stated an intention to report back in the autumn of last year, so the publication of the response is a little later than expected.

This is a short briefing setting out the key points detailed in the response paper.

  • Maintenance of a regulatory rather than legislative approach.

The Government has maintained the approach detailed in its White Paper, whereby AI will initially be regulated not pursuant to new legislation but rather by means of supervision by existing industry regulators. These will be tasked with seeking to ensure that AI is used in a safe and responsible way, having regard to the relevant context and with the objective of ensuring adherence to the five cross-sectoral principles previously identified by the Government in the White Paper:

1) safety, security and robustness;
2) appropriate transparency and explainability;
3) fairness;
4) accountability and governance; and
5) contestability and redress.

As such, the Government intends to maintain its proposal of a flexible approach to the regulation of AI that “avoids unnecessary blanket rules that apply to all AI technologies, regardless of how they are used”.

Not legislating for the control of AI is, however, in contrast to the approach the EU will be taking with its soon-to-be-implemented legislation, the AI Act.1

  • Request for regulators to publish updates on the approach they are taking.

Whilst noting that certain regulators are already taking active steps to regulate AI (for example, the Competition and Markets Authority (“CMA”), which has published a review of foundation models), the Government has written to a number of regulators in spheres impacted by AI to ask them to publish an update outlining their strategic approach to the regulation of AI by 30 April this year.

The paper indicates that, for now, no legislation will be introduced to require regulators to have regard to the cross-sectoral principles in discharging their duties. This approach will, though, be kept under review.

Investment is, however, being made in regulators to ensure that they are able to develop the right capabilities and tools to respond to AI. £10 million is being made available to fund this.

This addresses a concern noted in our prior paper,2 in which we expressed the view that we were not persuaded that regulators currently had the ability and/or the right knowledge to take on the task of regulating AI use in their respective industries.

  • Establishment of a central function to oversee the regulation of AI.

The Government’s White Paper proposed the establishment of a “central function” within Government to monitor and assess AI risks across the whole economy and to support regulator coordination and clarity. This proposal was apparently widely welcomed by stakeholders in their responses, with many citing the risk that, without such a central function, regulatory overlaps, gaps and poor coordination would arise as multiple regulators consider and seek to regulate the impact of AI in their domains.

The paper states that the Government has already begun to establish the central function in a range of ways.

For example, it states that a new multi-disciplinary team has already been established to undertake cross-sectoral risk monitoring in respect of AI. The team’s function is continuously to examine cross-cutting AI risks, including evaluating the extent to which government and regulators are intervening effectively to guard against them. In 2024 the Government will also run a targeted consultation on a cross-economy AI risk register to ensure that it comprehensively captures the range of risks.

  • Responsibilities for developers of highly capable general-purpose AI systems.

A section of the response considers general-purpose AI systems, being systems that can be adapted to a wide variety of purposes. It notes that some companies have publicly stated that their goal is to build AI systems that are more capable than humans at a range of tasks.

The paper anticipates that voluntary measures are not likely to be sufficient to guard against the risks that these types of systems give rise to. It notes that some countries, such as the US, are beginning to explore binding measures applying to such systems, including mandatory reporting requirements for the most powerful systems.

The paper identifies that one of the problems with such general-purpose AI systems is that they can be applied across a wide variety of sectors and applications, such that a single flaw in one model could cause multiple harms across the whole economy.

The paper also notes that existing regulation may not be effective in mitigating these risks because of their cross-sectoral nature: the impact of general-purpose systems may be felt well beyond the remit of any single regulator, potentially leaving risks without effective mitigations. Further, the paper says it is not always clear how existing rules can be applied effectively to address the risks that highly capable general-purpose models can present.

Having concluded that the context-based regulatory approach the Government is taking may miss significant risks posed by highly capable general-purpose systems, the paper states that all jurisdictions are likely, in time, to want to place targeted mandatory interventions on the design, development and deployment of such systems. As such, it does appear to anticipate a likely need for legislation in respect of such systems in due course.

  • Roadmap for next steps.

The paper concludes with a roadmap setting out next steps for 2024, including:

I. Continuing to develop a domestic policy position on AI regulation (including engaging with experts this summer on interventions for highly capable AI systems; publishing an update on the Government’s work on new responsibilities for developers of highly capable general-purpose AI systems by the end of the year; and collaborating across government and regulators to analyse and review potential gaps in existing regulatory powers and remits on an ongoing basis);

II. Progressing action to promote AI opportunities and tackle AI risks;

III. Building out the central function and supporting regulators (including by launching a new £10 million programme to support regulators to identify and understand risks in their domains and to develop their skills and approaches to AI; establishing, in the spring, a steering committee to support and guide the activities of a formal regulator coordination structure within government; and asking key regulators to publish updates on their strategic approach to AI by 30 April);

IV. Encouraging effective AI adoption and providing support for industry, innovators and employees; and

V. Supporting international collaboration on AI governance.

We will publish further updates as the UK Government develops its approach to the regulation of AI.

This article reflects only the present personal considerations, opinions, and/or views of the authors, which should not be attributed to any of the authors’ current or prior law firm(s) or former or present clients.


1 EU Agreement on the Text of a New AI Act, James Brown, IPWatchdog, January 4, 2024.
2 What direction for the UK regulation of Artificial Intelligence?, James Brown, June 15, 2023.
