
What direction for the UK regulation of Artificial Intelligence?

June 15, 2023

Like electricity or the internet, artificial intelligence (“AI”) has the potential to change our world.

Whilst it may bring huge benefits, there is also a considerable risk of harm.

Though there is some existing legislation and regulation in the UK that is potentially applicable to AI, the UK government is now consulting on imminent cross-sectoral regulation.

The UK government’s March 2023 white paper

The UK government set out its proposals for the regulation of AI in a March 2023 white paper titled “A pro-innovation approach to AI regulation”.

The initial proposed approach is a somewhat “light touch” one. It remains to be seen whether the consultation process, developments in the technology, or progress towards regulation in other jurisdictions (particularly the EU) will alter the regime before it is implemented.

A regulatory framework to ensure adherence to five key principles

The proposed regulatory framework is to be “context-specific” – focusing on the potential outcomes AI is likely to generate in particular contexts so as to determine appropriate regulation.

The framework will be underpinned by five principles as follows:

  • safety, security and robustness – AI systems should function in a robust, secure and safe way with risks being continually identified, assessed and managed;
  • appropriate transparency and “explainability” – it must be possible to understand how decisions are made by AI;
  • fairness – AI should be fair in its outcomes and use, and should comply with relevant law;
  • accountability and governance – to ensure effective oversight of the supply and use of AI systems;
  • contestability and redress – ensuring that AI outcomes can be challenged and redress obtained.

The intention is that the principles will be applied by the various regulators already in place within the UK, each charged with regulating particular industries or activities. The notion is that these regulators are best placed to identify the issues and risks in their existing areas of regulation, and to act accordingly to encourage adherence to the principles.

So there is no proposal for a single new “AI regulator” in the UK.

Nor is there a proposal for the implementation, at least initially, of new legislation to provide the principles with a statutory footing. The paper reasons that:

“New rigid and onerous legislative requirements on business could hold back AI innovation and reduce our ability to respond quickly and in a proportionate way to future technological advances. Instead, the principles will be issued on a non-statutory basis and implemented by existing regulators”.

However, the possibility of legislation is reserved for the future, when Parliament has more time to consider it, and a statutory duty requiring regulators to have due regard to the principles is anticipated.

The paper does, however, recognise the need for certain “support functions” to be undertaken by government to ensure that the regulatory framework is sufficient and proportionate in promoting innovation whilst protecting against risk. These include:

  • monitoring and evaluating the effectiveness of the regulatory framework and the implementation of the principles;
  • “conducting horizon scanning”, so that emerging new technologies are noted early and appropriate responses implemented;
  • providing education and awareness to businesses and individuals, so that their voices are heard as the regulatory framework is developed.

In addition, the paper recognises a need to promote “interoperability” with international regulatory frameworks.

Next steps

The paper raises a number of questions that are open for consultation until 21 June.

Once the initial consultation process closes later this month, the government will publish its response and then issue its cross-sectoral principles to regulators, together with initial guidance on their implementation.

It will also design and publish an “AI Regulation Roadmap” with plans for establishing the central support functions referenced above. These functions will be provided in conjunction with certain key partner organisations outside government.

Research will be commissioned to monitor the extent to which businesses face barriers to compliance and how best to overcome them.

The government anticipates that all of this will occur by around September of this year.

Thereafter, in the period through to March of next year, the government anticipates that it will begin to deliver the key central support functions (entering partnership agreements to do so). It will encourage the various regulators to publish guidance on how the cross-sectoral principles will apply within their remits. It will also publish proposals for a central “monitoring and evaluation framework”, which will identify metrics, data sources, and thresholds or triggers for further intervention or iteration of the framework.

Commentary on the proposals

There are some obvious issues with the proposals:

  • whilst there is sense in the notion - detailed in the paper - that existing regulators are best placed to understand the particular industry or sector issues arising within their own spheres of regulatory responsibility, query whether they possess the necessary technical knowledge of AI (and the capacity to keep on top of developments in the technology) to apply the principles effectively;

  • the risk of certain activities or practices falling into the gaps between regulators, given that the approach relies on existing regulators cooperating to ensure effective and proportionate regulation;

  • the potential for existing regulators to be “overwhelmed” by the additional burden that will arise from the need to regulate now also in respect of AI.

Being “light touch”, the proposals are also at odds with the approach that the European Union is taking. The EU is moving to implement legislation that will govern the development and use of AI, including prohibiting certain “high risk” applications of it; that legislation took a significant step closer to adoption yesterday - 14 June 2023 - when the EU AI Act was approved by the European Parliament.

A light touch approach to regulation is also at odds with recent suggestions by the UK’s Prime Minister, Rishi Sunak, that the UK should serve as a possible hub for a future global regulator of AI technologies, modelled on the nuclear body the International Atomic Energy Agency (IAEA), and with recent warnings from the Prime Minister’s own special advisor on AI, Matt Clifford (see e.g. The Times, 5 June, “AI systems ‘could kill many humans’ within two years”).

It will therefore be interesting to see whether the consultation process and the recently apparent views of the Prime Minister lead to changes from the white paper’s proposals as the regulation is implemented.

Current issues for consideration

For now, commercial parties with UK operations that are involved in the development, supply or use of AI systems should:

  • be aware of the above, and that regulation, though still under consultation, is coming.

  • keep abreast of regulatory and possible legislative developments.

  • have regard to the five principles set out in the white paper and plan their development and/or use of AI technologies to ensure adherence to these as far as possible.

  • keep an eye on legislative and regulatory developments outside of the UK, in particular the European approach. In the view of the author of this article, it seems very likely that the approach taken in other jurisdictions will influence the approach taken in the UK, and quite possible that in time a more unified global approach will need to be implemented to regulate and control the new technologies.

Further briefings will be provided concerning regulatory developments in this area.
