The regulatory stance
In 2026, the question is no longer whether legal practices will use AI, but how securely they will use it.
Regulators have accepted this shift, but their acceptance is conditional. The SRA and CLC expect firms adopting AI tools to apply the same standards of professional conduct, data security, and client care as they do for any other technology.
Key regulatory expectations
- Accountability: You cannot blame the AI. The SRA and CLC position is that a firm is
responsible for the outputs of the technology it selects.
- Confidentiality: Entering client or matter information into consumer-grade AI tools (such as the free version of
ChatGPT) that use your data for training breaches your confidentiality obligations.
- Competence: Legal practitioners must understand the technology they use. This means understanding the risks and limitations of the tools you deploy.
If an AI tool hallucinates a fake precedent or clause, the regulator will hold the
practitioner responsible, not the technology provider.
Four steps to compliance
- Internal Audit: Map out where AI is already being used and replace unsafe tools with enterprise-grade, encrypted versions.
- Staff Policy: Implement a clear, written AI usage policy that explicitly forbids
consumer-grade LLMs for matter-related work.
- Staff Training: A policy document in a drawer won't prevent a breach. Your
staff need practical training on the difference between safe and unsafe AI usage.
- PI Insurance: Call your broker. Ensure that your current
Professional Indemnity policy covers AI-assisted work, and clarify what safeguards the insurer
expects you to have in place.
Need help building an SRA- or CLC-compliant AI strategy?
At UtterConnection, we specialise in bridging the gap between innovative AI capabilities and strict
regulatory compliance for conveyancing firms.
Book a free, 15-minute diagnostic
call today.