
The UK Courts and Tribunals Judiciary Issues Guidance to Judicial Office Holders on Use of Artificial Intelligence

January 03, 2024

On 12 December 2023, the UK Courts and Tribunals Judiciary issued non-binding guidance to judicial office holders regarding the use of artificial intelligence technologies (“AI”) in the courts (the “AI Guidance”). While the guidance is non-prescriptive and pitched at a relatively high level, it recognises that there are valid and sensible uses of AI in the judicial context (just as there are in the practice of law), while also offering all lawyers a useful reminder of the considerations to bear in mind when using AI in a legal setting.

AI is already embedded in legal practice

AI is already used by many law firms and in-house lawyers in the context of disclosure review. Technology-assisted review can make document review more efficient and accurate by detecting concepts and trends across documents, enabling lawyers to get eyes on relevant documents more quickly and reducing the number of junk documents (such as spam) that need to be reviewed. Similarly, legal research tools use AI to find relevant case law and treatises.

This is not only permitted by the courts, but actively encouraged by many judges, with Practice Direction 57AD of the Civil Procedure Rules (that is to say, the procedural rules of the English courts) envisaging judges making orders regarding the use of “software or analytical tools, including technology-assisted review software and techniques” in disclosure review.

AI is not a new phenomenon in the legal arena; we are just in the process of adjusting to it occupying more areas.

The need to uphold confidentiality and privacy

Perhaps one of the most significant considerations in respect of modern AI systems in the law is the need to uphold confidentiality and privacy. Just as lawyers are bound by the Solicitors Regulation Authority’s code of conduct to keep their clients’ information confidential, judges are expected to keep confidential information secure and to ensure that data subjects’ privacy rights are respected.

To this end, the AI Guidance reminds judges of the importance of ensuring that confidential information is not entered into AI systems. This matters because many AI systems use prompts from users to further train and refine their underlying models; in doing so, they may not treat the information provided as sensitive and may later draw on it when responding to other users. Were a lawyer to input confidential information about a client’s affairs or, say, a case strategy into such a system, there is therefore a real prospect of that information being absorbed into the system and later surfacing in responses to other users.

The challenge of quality

While acknowledging that AI tools may be a useful way of finding potentially relevant information, with many platforms, such as vLex’s “Vincent AI”, claiming to help find relevant cases quickly, the guidance offers a reminder of the danger of AI systems “hallucinating.”

Essentially, AI models have an unhappy habit of frequently, confidently and persuasively presenting inaccurate information as correct. A user must fact-check to ensure that the sources returned by an AI system actually say what the system claims they do (or, indeed, exist at all).

A salutary warning can be found in the mortifying story of Steven Schwartz, an American lawyer who submitted to the US District Court for the Southern District of New York non-existent cases with fake citations, all of which ChatGPT had provided to him. He was subsequently required to file an apologetic affidavit and faced disciplinary proceedings.

The potential warning signs provided by the AI Guidance offer useful reminders for all lawyers using AI tools:

i) Know that generative AI tools hallucinate; Mr. Schwartz, for example, could not fathom that an AI tool could fabricate cases;
ii) Be on the lookout for cases which do not sound familiar or which have unfamiliar citations;
iii) Look out for citations that mix different bodies of case law;
iv) Always check that submissions accord with your understanding of the law;
v) Look out for American spellings or citations to overseas cases, which may be more likely to be inaccurate; and
vi) Always ensure that content makes sense on closer inspection, however persuasive it may appear at first glance.

The need to take responsibility for the output of AI

Linked to the previous consideration is a reminder that, ultimately, it is humans who must draft legal submissions and render judgments. Accordingly, the AI Guidance reminds all court participants that they must satisfy themselves as to the quality of their submissions and judgments; they cannot offload this responsibility to machines. Technology, such as a spell checker, may make our lives easier, but we still need to perform a rigorous final check ourselves.

That is not to say that the output of AI systems cannot be useful – just that one must accept ultimate responsibility for it.

Take, for example, Lord Justice Birss, who recently told a Law Society conference that he had asked ChatGPT to give him a summary of an area of law, which he subsequently adopted in his judgment – describing the summary as “jolly useful.” Birss LJ was at pains to emphasise that, “I’m taking full personal responsibility for what I put in my judgment, I am not trying to give the responsibility to somebody else. All it did was a task which I was about to do and which I knew the answer and could recognise an answer as being acceptable.”

Being on the lookout for AI-produced materials

As well as addressing the use of AI for more traditional “lawyer” activities, the AI Guidance cautions judges about the use of AI by unrepresented litigants (who may be at particular risk of believing “hallucinated” cases) and about the potential for AI-forged evidence (so-called “deepfakes”).

However, as the AI Guidance acknowledges, courts have always had to deal with forgeries and incorrect references; they will simply need to remain on the lookout for potential improper behaviour.

The dangers of bias

Many AI tools are trained on large corpora of data scraped from the internet and may therefore reproduce, in the output they generate, the racist, misogynistic and other discriminatory attitudes found online. Judges in particular should be wary of this and ensure that their final output (in orders and judgments) is respectful, unbiased and in accordance with the Equal Treatment Bench Book.

A useful tool, but one that has limitations

Ultimately, the AI Guidance recognises that there are perfectly valid uses for AI in fulfilling the functions of a judicial office holder. However, the limits of AI must be recognised, and care must be taken at all times to ensure that responsibility is not abdicated to such systems. As the introduction to the AI Guidance makes clear: “Any use of AI by or on behalf of the judiciary must be consistent with the judiciary’s overarching obligation to protect the integrity of the administration of justice.” That is a useful maxim for all practising lawyers weighing the opportunities and risks presented as AI becomes more deeply embedded in the legal profession.
