AI in Disclosure: Balancing Innovation with Legal Obligations

Artificial Intelligence (AI) is playing an increasingly significant role in UK case preparation and evidence disclosure, in both criminal and civil proceedings. The Law Society and the Courts and Tribunals Judiciary have both published guidance on the use of AI in the legal sector, and in March 2024 the UK Government hosted a Justice technology and AI roundtable.

As AI-driven tools become more sophisticated, a question remains: how do we harness the power of AI while ensuring compliance with established legal frameworks and principles of justice?

AI in Criminal Disclosure: Opportunities and Challenges

For criminal cases heard in England & Wales, the prosecution’s disclosure obligations are governed by the Criminal Procedure and Investigations Act 1996 (CPIA), the Criminal Procedure Rules (CrimPR) and further guided by the Attorney General’s Guidelines on Disclosure (2022).

In response to a Parliamentary Question on the use of AI by the Serious Fraud Office (SFO), the Solicitor General, Lucy Rigby (Attorney General’s Office), confirmed on 4 February 2025 that:

“During the past 12 months, the SFO has been trialling the use of Technology Assisted Review (TAR), utilising AI, on a live criminal case. The trial demonstrated that TAR could help meet legal disclosure obligations more efficiently. The trial adhered to relevant disclosure guidelines and officials are still making the decisions on what is in fact relevant and what is disclosed. Following the success of the trial, the SFO is planning to use TAR in more SFO cases in the future.”

While TAR is a powerful tool for automating the document review process, AI can offer broader capabilities, including pattern recognition, predictive analysis, multimedia analysis, and real-time data processing.

In criminal investigations, combining AI with TAR could provide a more holistic and efficient approach, leveraging the strengths of both technologies to uncover insights faster and with greater accuracy, which could be of significant benefit to both the prosecution and the defence. The advent of AI in this regard presents both opportunities and risks.

Practitioner Insight:

“While I recognise that the increased use of technology, especially TAR enhanced by AI tools, is both inevitable and a welcome advance in the management of disclosure in serious fraud trials, it is critical that these advances are managed carefully through both the introduction of necessary regulatory safeguards and not at the expense of human oversight.”

David Stern – Joint Head of Business Crime & Financial Regulation at 5 St Andrew’s Hill

Opportunities

With appropriate training and user knowledge around AI strengths and limitations, the potential benefits of deploying AI solutions include:

  • Increased efficiency in sifting through complex digital evidence, resulting in considerable time and cost savings.
  • Reduced human error in identifying disclosable material.
  • Improved resource allocation, thereby allowing legal professionals to focus on strategic case analysis.

Risks

Use of AI for disclosure purposes also presents risks and challenges. Some of the most significant issues include:

  • AI can assist but cannot replace legal judgment. Prosecutors maintain personal accountability for disclosure decisions.
  • AI may struggle with contextual nuances, leading to unfair or incomplete disclosures and increasing the risk of bias and error.
  • Transparency and accountability must underpin AI-driven decision-making. Processes must be explainable and open to scrutiny, particularly in respect of Article 6, ECHR, which guarantees the right to a fair trial.
  • Using AI may increase cyber-security risks (e.g., hacking, corruption of data and other malicious cyber activity). Use of AI in handling sensitive material must align with data protection and compliance requirements (e.g., the UK GDPR and the Data Protection Act 2018).
  • AI use may lead to intellectual property infringement, including breach of copyright.

Practitioner Insight:

“AI has great potential so long as prosecutors are faithful to their individual and collective continuing duty to review and disclose material in accordance with the law… [u]sed wisely, it could have huge benefits for all parties; used poorly or irresponsibly it could lead to terrible miscarriages of justice”.

Natasha Wong KC – 5KBW

AI in Civil Proceedings: A Proportional Approach to Disclosure

The disclosure landscape in civil litigation is shaped by the Civil Procedure Rules (CPR), particularly Practice Direction 31B, which deals with electronic documents. The use of TAR, including predictive coding, has been recognised as an efficient way to manage large-scale document disclosure.

Best Practices for Legal Professionals

  • AI should be viewed as a complement to, rather than a replacement for, human judgment in document review and disclosure.
  • Legal practitioners must stay informed on evolving case law and judicial guidance concerning AI-driven disclosure.
  • AI can improve cost-effectiveness for clients, but transparency in how AI is used in case management remains paramount.

Scope for Procedural and Legal Challenges Due to AI Disclosure Failures

Using AI in disclosure raises significant questions about procedural fairness and legal accountability when relevant obligations are not properly met. The following issues may arise:

Grounds for Procedural Challenges

  • If AI-assisted disclosure results in material that could assist the defence being overlooked, there could be grounds for appeal or case dismissal.
  • AI-driven errors that compromise disclosure could undermine the defendant’s rights under Article 6, ECHR.
  • Defence teams may challenge AI-driven disclosure failures through judicial review, particularly where transparency in AI decision-making is lacking.
  • Under civil procedure, parties could argue that the use of AI was not a reasonable and/or proportionate method of disclosure if it led to key documents being overlooked, contrary to CPR Rule 31.5.

Mitigating Risk and Ensuring Compliance

  • AI-assisted processes should always be subject to human verification to minimise errors and bias.
  • Agreement between the parties regarding the use of AI may help overcome unclear regulations, prevent procedural challenges and protect the integrity of legal proceedings.
  • AI tools used for disclosure should be capable of generating detailed audit trails to facilitate review and challenge if necessary.
  • Those using AI should have sufficient training to understand and mitigate potential risks.

While AI’s utility in areas of legal work is recognised (as noted by Sir Geoffrey Vos MR), the judiciary has reinforced that lawyers remain responsible for all work produced under their name. This means AI-generated outputs must be manually verified before submission.

Expert Insight:

“… in the early phases of adoption in which we currently find ourselves in, it should be expected that human input, both in the generation and evaluation of AI outputs will be crucial to successful and defensible case outcomes… as was the case with keyword searching and TAR before it, acceptable standards will become agreed for Generative AI that will become reference cases for future use…”

Badar Nadeem – FTI Consulting

AI Use in UK Disclosure

AI’s growing role in legal practice remains largely untested by the courts and without formal endorsement. Unlike TAR, AI faces challenges around transparency, validation, and legal accountability, including, for example:

  • Vendors’ proprietary models often lack auditability, making it difficult to fully examine and test AI-generated results in court.
  • There is as yet no structured, auditable framework to prove the defensibility of outputs. Unlike TAR, AI may yield different outputs from the same prompts, raising ‘reproducibility’ concerns.

The Way Forward: Practical Implementation and Industry Collaboration

At Spencer West, we are advising clients on the implications of AI across the legal and compliance landscape. If your organisation is navigating the intersection of responsible AI and disclosure obligations, or you are interested in discussing this topic, we invite you to connect with Nabeel Osman, Lisa McKinnon-Lower, and Gerallt Owen.

Generative AI was responsibly used to help develop this piece of Thought Leadership.

Nabeel Osman
Partner - Dispute Resolution, Fraud & Financial Crime, Tax Disputes & Investigations
Lisa McKinnon-Lower
Partner - Criminal Defence Litigation & Human Rights