Security & Confidentiality

Your clients' confidentiality
is non-negotiable.

AI is powerful — but in the wrong hands, it's a data breach waiting to happen. Here's how to get it right.

It's already happening

In late 2025, the Upper Tribunal's Immigration and Asylum Chamber published a landmark ruling that every solicitor in the UK should read.

A practising solicitor admitted to uploading client emails and Home Office decision letters into ChatGPT, both to improve their drafting and to summarise documents for clients.

"To [put client documents into ChatGPT] is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege."
— Judge Fiona Lindsley, Upper Tribunal

The solicitor was referred to both the SRA and the Immigration Advice Authority. The tribunal was clear: using open AI tools for client work is a data breach, full stop.

In the same hearing, another firm was found to have submitted AI-generated case citations that didn't exist — "hallucinated" references that sent judges on a "fool's errand." The tribunal reported a "considerable increase" in fictitious authorities being cited in proceedings.

The key takeaway

The judge was blunt: "Whether [citation errors] are inserted by a hapless trainee or by ChatGPT is really neither here nor there; the point is that the qualified legal professional with conduct of the matter is expected to ensure that such documents are checked."

The responsibility stays with the supervising solicitor. Always.

Open vs closed AI tools

Not all AI tools are created equal. The critical difference for law firms is where your data goes:

🔓 Open AI (HIGH RISK for client work)

  • ChatGPT (free tier), Google Gemini (formerly Bard), Perplexity (free tier)
  • Your input is sent to external servers
  • Data may be used to train future models
  • No guarantee of deletion or confidentiality
  • Putting client documents into these tools is a data breach

🔒 Closed / Enterprise AI (LOWER RISK)

  • Microsoft Copilot for Business, Azure OpenAI, private deployments
  • Data stays within your organisation's tenant
  • Not used to train external models
  • Contractual data protection agreements available
  • Still requires policies, training, and oversight

The judge in the Upper Tribunal case specifically noted that "closed source AI tools which do not place information in the public domain, such as Microsoft Copilot, are available for tasks such as summarising without these risks."

We help firms evaluate exactly which tools sit in which category — and build policies accordingly.

GDPR and data protection

When you send client data to an AI tool, you need to ask:

  • Where is the data processed? (UK, EU, US, or unknown?)
  • Is it encrypted in transit and at rest?
  • Does the provider retain your input data?
  • Is there a Data Processing Agreement (DPA) in place?
  • Can you fulfil Subject Access Requests if data is held by the AI provider?
  • Have you updated your privacy notice to cover AI processing?

If you can't answer these questions confidently for every AI tool your firm uses, you have a gap. We help you close it.

How we help

  • AI Tool Assessment — We evaluate every tool your firm uses or is considering, and categorise each by risk level
  • Policy Development — We help you create clear, practical AI usage policies that your team will actually follow
  • Staff Training — We train your solicitors and support staff on what's safe, what's not, and what to do when they're unsure
  • Incident Preparedness — If a breach occurs, we help you respond correctly, including SRA/CLC notification requirements
  • Ongoing Review — AI tools change constantly. We help you stay current without the anxiety

Don't wait for an incident.

Book your 15-Minute Diagnostic and find out exactly where your firm stands on AI security.

Stop the Profit Leak →