Jun 12, 2025

Legal Commentary: Court’s Admonition on the Unethical Use of Generative AI in Ayinde v Haringey [2025] EWHC 1383 (Admin)

Introduction

In the recent and profoundly significant case of Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin), the Divisional Court delivered a trenchant critique of the misuse of generative artificial intelligence (GenAI) by legal professionals. The case was heard under the Hamid jurisdiction of the Court, which was developed to prevent abuse of the Court’s processes. That jurisdiction derives from the immigration case of R (Hamid) v Secretary of State for the Home Department [2012] EWHC 3070 (Admin), but the Court has confirmed it is not confined to immigration or even public law cases. The case exposed not merely individual lapses but also systemic vulnerabilities in legal practice concerning training, supervision, and adherence to ethical norms in the age of AI. The judgment, delivered by Dame Victoria Sharp P and Mr Justice Johnson, is a clarion call to the legal profession on the dangers of uncritical reliance on AI in legal submissions and on the ethical imperative to maintain the integrity of court proceedings.

The Court’s Core Admonitions

The Court began by acknowledging the potential utility of AI in legal work, including in areas such as disclosure and document management. However, it issued a pointed warning: generative AI systems, particularly those based on large language models, are capable of producing convincingly plausible outputs that are in fact wholly inaccurate. The judges observed that such systems can fabricate entire cases, quote passages that do not exist, and generate false citation numbers. As the Court put it, “Those coherent and plausible responses may turn out to be entirely incorrect. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source” [§6].

Against this backdrop, the Court underscored the non-negotiable professional obligation on lawyers to verify the accuracy of all legal materials before presenting them in court. This duty, the Court said, is not diminished merely because the content was produced by an AI tool, a junior colleague, or even the client. Rather, lawyers must always cross-check such material against authoritative sources, including official government legislation databases, the National Archives’ database of court judgments, and the publications of trusted legal publishers [§7]. The obligation is rooted not just in competence, but in the fundamental duty to the court, which includes honesty, integrity, and a commitment to the proper administration of justice.

Of particular gravity was the Court’s observation that even where there is no deliberate intent to deceive, the submission of fabricated citations constitutes a serious breach of professional standards. As it made clear, “It matters not that the misleading of the court may have been inadvertent… Such conduct brings the profession into disrepute… which may well lead to disciplinary proceedings” [§12]. This reaffirms that negligence or recklessness in the use of AI is no shield from liability; professional misconduct can arise even absent malicious intent.

The Court then catalogued the range of sanctions available in cases of unethical AI usage. These include wasted costs orders, public reprimands, the striking out of pleadings, and formal referrals to regulators such as the Bar Standards Board and the Solicitors Regulation Authority (SRA). In egregious cases, particularly where there is an intent to mislead, the Court warned that it may initiate contempt of court proceedings or refer the matter to the police for investigation of the common law offence of perverting the course of justice, which carries a maximum penalty of life imprisonment [§23–25].

While the Court refrained from initiating contempt proceedings in the Ayinde case due to the junior status and complex personal circumstances of the barrister involved, it issued a stern warning to the profession at large: “Lawyers who do not comply with their professional obligations… risk severe sanction” [§69].

Recommendations for the Profession

In light of these admonitions, it is imperative that the legal profession take proactive and structural steps to integrate AI responsibly into legal practice. First and foremost, lawyers must undergo mandatory training focused not only on the technical workings of AI tools but, crucially, on their ethical and professional implications. Such training should instill a firm understanding of the risks of AI hallucinations, the unreliability of AI-generated legal analysis, and the necessity of cross-verification. Lawyers must learn to distinguish between genuine legal authority and AI-generated fabrications, and to treat any AI output as tentative until it has been validated through independent, authoritative research. This training should be a required part of continuing professional development (CPD), pupillage programmes, and law firm onboarding processes, thereby embedding ethical AI practices at every stage of professional development.

Equally important is the need for institutional oversight. As the Court emphasized, individual ethical failings often reflect broader failures in leadership and supervision. Heads of chambers, managing partners, and supervising solicitors bear a collective responsibility to ensure that internal protocols are in place for the ethical use of AI. The Court noted that in future Hamid hearings, it will no longer suffice to point to individual error; the judiciary will also inquire into whether institutional leadership fulfilled its obligations to supervise, train, and ensure competence [§9]. Firms and chambers must implement review procedures, auditing mechanisms, and reporting obligations to enforce these expectations.

Moreover, the Court’s findings reveal the inadequacy of relying solely on voluntary guidance. While the Bar Council, the SRA, and the judiciary have issued helpful documents warning of the dangers of GenAI, the judgment rightly observes that “promulgating such guidance on its own is insufficient… More needs to be done to ensure that the guidance is followed” [§82]. This means regulatory bodies must move from soft norms to enforceable standards. Codes of conduct should be updated to require disclosure of AI-generated content, mandate verification procedures, and impose disciplinary consequences for failures to comply.

Conclusion

This case stands as a formidable precedent in the regulation of AI use in legal practice. It represents the judiciary’s unwavering insistence on truthfulness, diligence, and professional accountability. The message is unmistakable: generative AI may be a powerful tool, but it is not a substitute for legal judgment, ethical discernment, or human responsibility. As the Court concluded, “The administration of justice depends upon the court being able to rely without question on the integrity of those who appear before it” [§5].

Lawyers must therefore approach GenAI not as a convenience but as a regulated tool—one whose outputs demand scrutiny and whose misuse can attract severe consequences. The future of AI in law hinges not on its capabilities but on the ethical fidelity of those who wield it.
