
AI and Technology Principles

To help foster innovation and support greater use of technology, the CLC’s Technology and Innovation Working Group has developed eleven AI and Technology Principles.¹

Introduction

  1. Technology (including AI) can help to make legal services more efficient and accurate, improve the client experience and outcomes for clients, and support access to justice. The CLC therefore supports the safe and responsible use of technology by practices. With input from stakeholders and industry experts, the CLC has developed these AI and Technology Principles (the Principles).
  2. Following these Principles will help practices ensure that technology is used responsibly, safely and ethically, protecting and benefitting clients and practices.
  3. Compliance with the Principles is not mandatory and they are non-prescriptive; however, the CLC strongly encourages practices to consider them when adopting or updating technology. Complying with the Principles can help practices manage the risks associated with using technology and may be regarded as a mitigating factor if a practice is found not to have met other, relevant mandatory requirements.
  4. These Principles should be read alongside the AI & Technology Guidance, and the CLC Code of Conduct and topic-specific Codes, compliance with which is mandatory.
  5. It is important to bear in mind that practices and lawyers are responsible and accountable for the decisions they take and the legal advice they provide, whether assisted by technology or not.

The AI and Technology Principles

  1. Risk of harm: practices should ensure that technology will not cause harm to clients or the practice, or risk causing harm more broadly, e.g. to public trust in the profession, and should keep the risk of harm under review.
  2. Security: practices should satisfy themselves that technology incorporates adequate safeguards against malicious attack, misuse and unauthorised access.
  3. Robustness: practices should obtain assurance that technology performs as intended, i.e. as it was designed to, and that it provides reliable and consistent outputs.
  4. Transparency:² practices should be able to clearly communicate appropriate information about when and for what purposes technology is used. Transparency helps support explainability (see below).
  5. Explainability:³ practices should be able to interrogate how technology produces outputs or makes decisions, particularly those impacting clients, and provide appropriate and understandable explanations when asked. This will help to demonstrate that the technology is safe, fair, and free of bias.
  6. Data use, privacy and security: practices should ensure that any personal data is processed in line with UK GDPR, the Data Protection Act 2018 and any other applicable data protection laws or regulatory requirements.
  7. Interoperability: practices should ensure that technology facilitates the secure exchange of data between different systems that are in use in the practice, and systems commonly in use across the wider conveyancing and probate sectors.
  8. Risk and impact assessment: practices should conduct proportionate risk and impact assessments before integrating technology and in response to evolving risk and take proportionate steps to eliminate or mitigate risk or any adverse impact identified.
  9. Accountability: practices should take steps to ensure that technology is used for its intended purposes only. As CLC lawyers are responsible and accountable for the advice they provide, it is important that they maintain effective oversight and meaningful control of technology enabled or supported decisions, advice or other outputs, including through human review where appropriate.
  10. Fairness and bias: practices should assure themselves that technology is not inherently biased, routinely monitor for bias, and take appropriate measures to mitigate the risk of technology-enabled decisions, advice and outputs being biased, unfair or discriminatory.
  11. Capability: practices should ensure they have the necessary capability to integrate technology safely and effectively, provide the necessary training and implement appropriate procedures to support its safe and effective use.

Note: Please refer to the AI and Technology Guidance, which provides context, practical advice, and prompts to help practices apply the AI and Technology Principles in practice. The guidance outlines clear steps that practices can take to ensure technology is used responsibly, safely, and ethically, protecting the interests of clients and practices alike.


1. These principles have been informed by the Legal Services Board’s Guidance on promoting technology and innovation to improve access to legal services, and the UK’s AI Regulatory Principles.

2. The Government’s Artificial Intelligence Playbook provides useful explanations of transparency in the context of AI systems and what evidence technology providers should be able to provide to demonstrate the transparency of their systems.

3. For further information on explainability, see The Government’s Artificial Intelligence Playbook and guidance published by the Turing Institute.