AI and Technology Guidance

This guidance provides context, practical advice and prompts to explain the AI and Technology Principles and help practices identify steps that they can take to ensure technology is used responsibly, safely, and ethically, protecting the interests of clients and practices alike.

Introduction

  1. This guidance includes examples of the practical measures practices can take to implement the AI and Technology Principles (the Principles).
  2. Compliance with this guidance is not mandatory, and it is not intended to be prescriptive. However, following it can help practices manage the risks associated with using technology. Compliance with the guidance may be regarded as a mitigating factor if a practice is found not to have met other, relevant mandatory requirements.
  3. This guidance should be read with the AI and Technology Principles, and the CLC Code of Conduct and topic-specific Codes, compliance with which is mandatory.
  4. It is important to bear in mind that practices and lawyers are responsible and accountable for the decisions they take and the legal advice they provide, whether assisted by technology or not.

How to use this guidance

  1. The guidance is structured around the eleven AI and Technology Principles. Each Principle is accompanied by further ‘context’ to help explain it and, under ‘good practice’, suggested questions and prompts to help practices make the right enquiries and translate the Principle into practical actions. Over time, case studies will be added to promote good practice and share learning from across the sector.

Principle 1: Risk of harm

Practices should ensure that technology will not cause harm to clients or the practice, or risk causing harm more broadly, e.g. to public trust in the profession, and should keep the risk of harm under review.

Context: Whilst harm is usually not intended or deliberate, technology can cause it.
The nature of any harm, how harm manifests, and its impact can vary widely depending on the technology in question and the circumstances in which it is used. This guidance therefore includes a broad outline of potential harms but is not definitive.

As a rule of thumb, harm refers to an adverse, negative or detrimental impact, outcome or consequence caused by the technology itself (e.g. how it is programmed, the data it relies on or is trained on), the way it is implemented, used or misused (e.g. if it is used for purposes it was not designed for), the outputs it generates or the decisions it supports.

Harm could include for example:

  • Breach of privacy: a system security failure allowing client data to enter the public domain or use of client data to train an AI system without consent of the client.
  • Damage to a practice’s reputation and loss of public trust: a practice failing to spot errors in contract terms auto-generated by software, damaging the reputation of the practice and causing clients to lose trust in the practice and, by extension, the wider profession.
  • Bias or differential treatment: inherent bias in ID verification systems resulting in clients from minority ethnic backgrounds being subject to further, more rigorous, or intrusive checks in the absence of any other evidence supporting the need for further checks.
  • Financial loss: failure to ensure human review of AI generated reports on title resulting in clients being inadequately advised about title problems or associated risks, resulting in loss for the client and damage to the reputation of the practice.

Good practice:

  • Look for technology that was designed in line with the safety and privacy-by-default principle; ask technology providers what they do to prevent and detect risks, and their processes for responding and recovering from risk incidents.
  • Understand whether there is any risk of harm from the technology itself or how it is used (or misused) in practice; understand the range of known harms, the likelihood of those manifesting, how and in what circumstances they manifest, and consider the likely impact on the practice and clients.
  • Ensure that appropriate mitigations or safeguards are put in place where necessary, and seek assurance that the provider has taken any necessary mitigating actions.
  • Consider the practice’s risk tolerance, i.e. the balance between potential risk and benefit arising from the use of technology, and whether, with appropriate mitigations and safeguards in place, the practice can accept that level of risk.
  • Keep records of risk assessments that the practice undertakes.
  • Consider whether it is necessary to inform PII and/or cyber insurers before deploying any technology (responses to the 2024 Annual Regulatory Return show either no impact or a beneficial impact on the cost of premiums).
  • Mitigations might include adaptations to the technology itself, regular monitoring for anomalies, tiered access (i.e. limiting the use of certain functionality to senior staff or CLC lawyers), providing staff awareness training, making clear user-guides available to staff and clients, limiting data inputs or training data used by AI, ensuring human oversight of automated outputs.
  • Consider testing or piloting technology before deploying it, and ensure staff are given appropriate training and information covering effective use, appropriate use (including the need for human oversight where relevant), limitations on its use, safety, and how to identify anomalies in outputs.

Principle 2: Security

Practices should satisfy themselves that technology incorporates adequate safeguards against malicious attack, misuse, and unauthorised access.

Context: Malicious attack on systems, misuse and unauthorised access can cause significant disruption to business. They can also result in financial loss and damage to the reputation of practices and, in extreme cases, may result in complaints, claims being brought against the practice, or the practice having to claim against its Professional Indemnity or other insurance.

Clients can also potentially be adversely impacted if system security is compromised. Client data could be compromised, their transactions delayed, which in turn may result in financial loss, or they could suffer emotional distress.

It is therefore crucial that practices seek assurance from technology providers that there are adequate safeguards against malicious attack or misuse of systems, and that providers have appropriate protocols in place to ensure that only those who are authorised to access their systems can.

Practices should have appropriate local procedures or protocols in place to provide a further line of defence against malicious attack, misuse, and unauthorised access to systems.

Good practice:

  • Consider requiring periodic changes to passwords, the use of dual factor authentication to access systems, and limiting system access to only those staff members who require it (a simple sketch of access limits and a second authentication factor appears after this list).
  • Training should be provided to staff prior to deployment of any technology, and as necessary following system upgrades or significant updates.
  • Practices should provide regular cyber training to staff, including senior staff and should provide staff with protocols and user guides on the safe and appropriate use of technology.
  • Seek information from technology providers about the types of threats their technology may be susceptible to, how often testing is conducted, and what controls are in place to mitigate any threats.
  • Put in place appropriate protocols and contingency arrangements if systems are compromised and ensure that staff are aware of protocols and how to report or escalate concerns about potential security threats.
  • Establish whether there are inbuilt controls or protections, i.e. security by design, or whether certain security features are opt-in.
  • Consider the need for ongoing risk assessment and system updates to mitigate the risk of new threats, the frequency of risk assessments and updates, and the associated costs.
  • Establish what support is provided in the event of system failure and what contingency planning is in place for outages.
  • Consider approvals and access controls for staff and support staff working for the technology provider, and whether access to and use of systems can be tracked or audited.
  • Provide role-based training for staff i.e. provide training on how someone in that role or with that job title should use the technology.
  • Implement appropriate use policies and ensure that staff are aware of the policies and comply with them.
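
To make the first of these prompts concrete, the Python sketch below shows one way tiered, role-based access and a time-based second authentication factor can look in software. The roles, permissions and the pyotp library are assumptions for illustration only; practices will normally rely on the access controls built into their systems rather than writing their own.

```python
import pyotp  # third-party TOTP library (pip install pyotp)

# Hypothetical role table for illustration only; a real practice would
# manage this in its identity and access management system.
ROLE_PERMISSIONS = {
    "clc_lawyer": {"view_matter", "approve_report", "use_ai_drafting"},
    "support_staff": {"view_matter"},
}

def can_use(role: str, permission: str) -> bool:
    """Limit functionality to roles explicitly granted the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

def second_factor_ok(shared_secret: str, submitted_code: str) -> bool:
    """Verify a time-based one-time password as the second factor."""
    return pyotp.TOTP(shared_secret).verify(submitted_code)

# Example: support staff can view matters but cannot approve reports.
assert can_use("support_staff", "view_matter")
assert not can_use("support_staff", "approve_report")
```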

Principle 3: Robustness

Practices should obtain assurance that technology performs as intended, i.e. as it was designed to, and that it provides reliable and consistent outputs.

Context: Implementing new technology or upgrading existing systems is likely to be costly, resource intensive, and potentially disruptive. Most technology is also likely to be business-critical, so practices making this investment will want to have confidence in its robustness.

Practices should assure themselves that technology works reliably and consistently in normal conditions and that, within reasonable parameters, it continues to function reliably and consistently in unexpected conditions.

Before investing in technology practices should understand the expected range of, or boundaries of any variance in performance or outputs, consider whether this gives rise to any risks for the practice, whether those can be mitigated and whether, with safeguards and mitigations in place, the risks fall within the practice’s risk tolerance.

What is considered robust and how this is assessed will vary depending on the technology and a range of other factors, such as what the technology is used for.

Good practice:

  • It is important to assess whether the system performs reliably to specification i.e. does it do the right thing or what it was designed to do?
  • Does it reliably perform to specification under normal conditions i.e. when it is used typically, or in the way it is intended or expected to be used?
  • Does it perform to specification in ‘edge’ conditions i.e. when it is used in ways that are not typical or ways that are unexpected but possible?
  • Understand what could go wrong with the technology, i.e. when or how it might fail and its margins of error, and the severity or impact if it does.
  • Take steps, with providers where necessary, to put appropriate contingencies or mitigations in place.
  • Ask about the provider’s incident management protocols if the technology fails, delivers erroneous outputs or outputs outside of specification.
  • Ensure that the practice has its own contingency plans and escalation protocols should there be unplanned downtime or should technology start delivering unreliable or inconsistent outputs, and ensure staff are aware of these and when they should be engaged.
  • Subscribe to the provider’s ‘status updates’ mailing list where relevant and ensure that any system updates are promptly installed.
  • Ensure staff know when and how to report system failures or outputs that go beyond tolerance; it is good practice to have documented escalation protocols (a simple out-of-tolerance check is sketched after this list).
  • Where relevant, practices should have documented protocols to manage the impact of any system failures, including appropriate reporting or escalation protocols, and in circumstances where failures might impact clients, ensure that the practice has procedures to communicate with clients and manage the impact.
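
As an illustration of what monitoring for ‘outputs beyond tolerance’ can look like, the Python sketch below flags values that fall outside an expected range and logs them for escalation. The tolerance band and matter reference are assumptions for illustration; in practice the thresholds would come from the provider’s specification.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("output-monitor")

# Illustrative tolerance band; in practice this would come from the
# provider's specification for the system in question.
EXPECTED_RANGE = (0.0, 1.0)  # e.g. a confidence score the system should emit

def check_output(value: float, matter_ref: str) -> bool:
    """Return True if within tolerance; otherwise log for escalation."""
    low, high = EXPECTED_RANGE
    if low <= value <= high:
        return True
    log.warning("Out-of-tolerance output %.3f on matter %s: escalate per protocol",
                value, matter_ref)
    return False

check_output(1.7, "M-12345")  # logs a warning and returns False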

Principle 4: Transparency[1]

Practices should be able to communicate appropriate information about when and for what purposes technology is used. Transparency helps support explainability (see Explainability below).

Context: There is growing use of technology in all areas of life including legal services, particularly in the way legal services are delivered and how clients engage with practices.

Whilst research[2] shows that consumers are increasingly using technology in the context of public services and welcome its use where it improves outcomes and convenience, we also know that consumers value transparency, particularly in relation to when and how their personal data is collected and used.

It is therefore important that practices know, and can explain if asked, what technology they use, what it is used for and when they use it, particularly if its use entails collecting, processing[3] and/or storing client data[4].

Being open and transparent with clients and communicating in clear, easily understood language will build trust and help ensure that clients feel confident in consenting to the use of their personal data.

Good practice:

  • Use plain English to explain, in simple terms, what technology is in use in the practice, particularly if the technology involves automated decisions that impact clients, such as profiling or risk-scoring.
  • Explain clearly why the practice uses the technology and what its benefits are for clients (speed, convenience, ease of access to information, accuracy, consistency).
  • If personal data is collected and processed, explain what it is used for, whether it is stored and for how long, and whether it is shared, with whom and for what purposes.
  • Ensure that appropriate Data Protection Impact Assessments are done and that the practice is compliant with the UK GDPR principles and any other relevant data protection legislation, retaining evidence of compliance as necessary.
  • Technology providers may be able to support practices with lay-friendly information that could be incorporated in a leaflet or explanatory text on client portals.

Principle 5: Explainability[5]

Practices should be able to interrogate how technology produces outputs or makes decisions, particularly those impacting clients, and provide appropriate and understandable explanations when asked. This will help to demonstrate that the technology is safe, fair, and free of bias.

Context: In practice, explainability[6] means being able to provide an appropriate level of information or explanation about the processes, services or decisions that are enabled or generated by technology, using everyday language aimed at non-specialists.

Practices are accountable for the advice they provide including advice that is in some way supported, enabled by or generated using technology. Being able to provide appropriate explanations is a way of demonstrating transparency and accountability, and doing so will provide assurance to clients that technology in use in the practice is safe, fair and free of bias.

Transparency and explainability work in tandem: together they will help clients understand the benefits of the technology for them, and enable clients to have confidence and trust in the technology and in any advice that is supported, enabled or generated using it.

Good practice:

  • Practices should be able to interrogate systems, test or challenge their logic and, where necessary, take steps to improve their performance or how they function.
  • Practices should understand and be able to explain the rationale for outputs or decisions, including key factors, data and sources of information that systems take account of.
  • Practices should be able to offer assurances about the level of human oversight given to outputs or decisions.
  • In instances where bias or unfairness is a risk, practices should be able to explain how they monitor this and what safeguards are in place to mitigate any risks.
  • Practices (or their technology providers) should routinely test for fairness and potential bias in addition to the reliability and accuracy of systems and be able to explain this to clients who ask in an easily understood way.
  • Practices should make it clear to clients that they can ask questions and have clear and easily accessible routes for clients to raise concerns or make a complaint about the practice’s use of technology.

Principle 6: Data use, privacy and security

Practices should ensure that any personal data is processed in line with UK GDPR, the Data Protection Act 2018 and any other applicable data protection laws or regulatory requirements.

Context: Use of technology, particularly AI, may present a risk to client data, therefore practices should ensure that it incorporates robust data protection, access control and cyber security features.

We know from research carried out by the ICO[7] that 52% of people surveyed feel cautious about confidentiality. 55% of adults have experienced a data breach in the past and 34% of people who have experienced a data breach reported losing trust as a result.

In other words, practices have a legal obligation under data protection legislation to protect client data against unauthorised or unlawful processing, accidental loss, destruction or damage; beyond that, research suggests that good data governance is fundamental to maintaining the trust of clients.

The measures necessary to comply with data protection requirements will depend on various factors, including the specific technology, what it is used for and what personal data it collects, processes and stores.

Good practice:

  • Consider the data protection implications of any technology that is adopted and ensure that appropriate measures are taken to comply with UK GDPR and any other applicable data protection legislation.
  • Understand what personal data is collected, how it is processed, where and for how long it is stored, whether it is shared with or disclosed to third parties, and if so, for what purposes (for help with this consider using the ICO’s online fair processing notice generator for SMEs[8]).
  • Undertake risk and data impact assessments as necessary and in particular, if technology processes personal data.
  • Be able to explain to clients what personal data systems collect, how that data is used, and where it is stored; explanations should be concise, transparent, easily accessible and written in plain language.
  • Before integrating technology and on an ongoing basis, data and system security should be risk assessed, and appropriate steps taken to eliminate risks where possible, or mitigate them.
  • Ensure that staff receive data protection training and keep training needs under regular review. It is particularly important to ensure that staff receive training when new systems are implemented or there are significant updates to existing systems.
  • Ensure that staff are aware of practice protocols and procedures relating to data protection, privacy and security and that they follow them.

Principle 7: Interoperability

Practices should ensure that technology facilitates the secure exchange of data between different systems that are in use in the practice, and systems commonly in use across the wider conveyancing and probate sectors.

Context: Interoperability refers to how a new system works with existing systems i.e. how one system exchanges data with or ‘talks’ to existing systems in a practice and other external systems that the practice routinely exchanges data with.

Interoperability is enabled by an ‘Application Programming Interface’ (API): a set of rules that allows one piece of software to exchange data with another.
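
As a minimal illustration of the concept, the Python sketch below shows one system requesting a matter record from another over a hypothetical REST API. The URL, endpoint, field names and authentication scheme are assumptions for illustration only; real systems will document their own APIs.

```python
import requests  # widely used HTTP client (pip install requests)

# Hypothetical endpoint and credentials for illustration only; real case
# management systems document their own APIs and authentication schemes.
BASE_URL = "https://api.example-cms.test/v1"
API_KEY = "REPLACE_WITH_REAL_KEY"

def fetch_matter(matter_id: str) -> dict:
    """Request a single matter record from the (hypothetical) other system."""
    response = requests.get(
        f"{BASE_URL}/matters/{matter_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()  # surface failures rather than continuing silently
    return response.json()
```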

Interoperability is an important consideration because custom integration, i.e. a bespoke API to connect or bridge two systems and enable data-sharing and seamless functionality, will add to the cost and time taken to safely and effectively implement a new system.

In addition, the effectiveness of wider conveyancing systems, and to a lesser extent probate systems, in the UK relies heavily on the ability to safely and reliably share information between practices and other institutions like banks, insurers, HMLR/HM Probate Office etc.

Difficulty or delay in sharing data can add to the time transactions take and needlessly add to the workload of practices. Interoperability is therefore one of the primary considerations for practices implementing technology.

Good practice: The question of interoperability will largely depend on the existing systems the practice has in place and the new technology it is looking to implement.

Practices may need the input of experts to manage the technical aspects of ensuring interoperability, but the questions and prompts below provide a starting point.

  • Ask whether the system supports open standards, what they are and whether any proprietary formats are required.
  • Consider data portability, i.e. how data, metadata and audit logs can be exported from existing systems to the new system, who is responsible for data migration and the associated cost (a simple export sketch follows this list).
  • Does the provider publish its API? What support is provided during integration, at what cost, and are the APIs stable?
  • What is the integration and implementation process?
  • Who is responsible for integration, what support is available, and is there a dedicated integration project manager?
  • What set-up, onboarding and training is offered at implementation?
  • What are the most likely integration challenges, and who is responsible for addressing them?
  • Does the provider recommend a sandbox, i.e. piloting the system with a small, live data set to validate integration?
  • Who is responsible for maintenance of APIs, and is there routine monitoring and updating?
  • Consider existing systems, including their APIs, how much data is held and in what format, system access constraints, network or hosting provisions and limitations, compliance with data protection requirements, and security considerations during data migration.
  • What ongoing maintenance, software updates and support are provided and is this done in line with a service level agreement?
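
As a minimal illustration of the data portability prompt above, the Python sketch below converts a legacy CSV export into JSON that a new system could ingest. The file names and flat format are assumptions for illustration; real migrations will follow the outgoing provider’s documented export format and should be risk assessed for data protection.

```python
import csv
import json
from pathlib import Path

def export_matters_to_json(csv_path: str, out_path: str) -> int:
    """Convert a legacy CSV export into JSON a new system could ingest."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))  # one dict per matter record
    Path(out_path).write_text(json.dumps(rows, indent=2), encoding="utf-8")
    return len(rows)  # record count, useful for reconciliation checks

# count = export_matters_to_json("legacy_matters.csv", "migrated_matters.json")
```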

Principle 8: Risk and impact assessment

Practices should conduct proportionate risk and impact assessments before integrating technology and in response to evolving risk and take proportionate steps to eliminate or mitigate risk and any adverse impact identified.

Context: Among many other benefits, technology has the potential to significantly improve efficiency and streamline processes, reduce cost, save time and make engaging with practices easier and more convenient. However, it can also introduce a range of different risks, so it is important that practices identify all potential risks and undertake proportionate risk and impact assessments depending on the type of technology being implemented.

Risk can be effectively managed, but to do so practices need to identify and understand the different risks that might arise with the introduction of technology and take the necessary steps to put appropriate safeguards and mitigations in place to either eliminate the risk entirely, lower the likelihood of it manifesting or reduce its impact.

Good practice: Practices should ensure that all necessary risk assessments are done in addition to those already mentioned above.

  • Risk assessments should be proportionate and tailored to the circumstances, including the type of technology and potential risks involved, what data it processes, who is affected, how critical it is to the business, how novel or complex it is.
  • Factors to consider in deciding the scale of any risk assessment include the severity and likelihood of harm, who is likely to be impacted (clients, vulnerable clients, staff, lenders, insurers etc.), the practice’s legal and regulatory obligations, and the impact on the business (a simple scoring sketch follows this list).
  • There is a greater imperative to undertake risk assessment where technology impacts clients and involves the use of client data.
  • Practices must ensure they are compliant with their obligations under any applicable data protection legislation; this includes the need to do privacy or data protection impact assessments[9] where technology will involve collecting, processing, storing and sharing personal data.
  • Many factors have a bearing on risk, and risk can evolve over time; it is therefore good practice to routinely monitor risks and, where necessary, conduct a new risk assessment using up-to-date information.
  • Risk assessments should be documented and the rationale for or reasons behind mitigating actions and safeguards that are put in place should be recorded.
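
One common way to make ‘severity and likelihood’ concrete is a simple likelihood-by-impact scoring matrix. The Python sketch below illustrates the idea; the five-point scales and thresholds are assumptions for illustration, not a CLC-prescribed methodology.

```python
# Illustrative five-point scales and thresholds; these are assumptions,
# not a CLC-prescribed methodology.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_score(likelihood: str, impact: str) -> int:
    """Score risk as likelihood multiplied by impact (1 to 25)."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def risk_rating(score: int) -> str:
    """Translate a score into an action band."""
    if score >= 15:
        return "high: mitigate before deployment"
    if score >= 8:
        return "medium: mitigate and monitor"
    return "low: record and keep under review"

score = risk_score("possible", "major")  # 3 * 4 = 12
print(score, risk_rating(score))  # 12 medium: mitigate and monitor
```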

Principle 9: Accountability

Practices should take steps to ensure that technology is used for its intended purposes only. As CLC lawyers are responsible and accountable for the advice they provide, it is important that they maintain effective oversight and meaningful control of technology enabled or supported decisions, advice or other outputs, including through human review where appropriate.

Context: CLC practices and lawyers are responsible and accountable for the decisions they take and the legal advice they provide, whether assisted by technology or not. It is therefore necessary to have an appropriate level of understanding of how technology works and ensure that it is used within its design-parameters and for the defined purpose for which the practice has implemented it.

Accountability also entails being honest and transparent about the role that any technology has in supporting lawyers to deliver legal services. In other words, lawyers need to be honest and transparent about their use of technology, particularly where they have relied on technology to support decision making or provide advice to clients.

Good practice:

  • Maintain effective oversight and control for example by ensuring there is human review of any technology enabled or supported decisions or outputs.
  • Check any automated decisions or outputs, i.e. is the output correct, are sources verifiable[10], does the output accurately reflect the facts or circumstances.
  • Understand how systems generate outputs and use personal data, particularly client data, and ensure that client data is used only within the parameters of systems approved for use in the practice.
  • Ensure that staff are competent to use technology that is relevant to their role or area of practice; to maintain competence, individuals and practices should regularly review training needs and ensure they are met.
  • Adopt policies and procedures which support appropriate use of technology, ensure staff are aware of and comply with such policies.
  • Ensure transparency about the use of technology including when billing/invoicing clients i.e. time saved using technology should be reflected in billable hours.
  • Take steps to keep up with evolving guidance on the use of technology in legal practice, share good practice and update practice policies and procedures accordingly.

Principle 10: Fairness and bias

Practices should assure themselves that technology is not inherently biased and routinely monitor and take appropriate measures to mitigate the risk of technology-enabled decisions, advice and outputs being biased, unfair or discriminatory.

Context: Practices have obligations and duties in relation to ensuring fairness and avoiding bias and discrimination under the Equality Act 2010[11] and UK GDPR[12], and it is a regulatory requirement that practices promote and support equality, diversity and inclusion in practice, service delivery and dealings with clients[13].

This document does not aim to provide comprehensive guidance on compliance with legal requirements, but it does explain what fairness and bias are in the context of technology and includes prompts to help practices mitigate the risk of bias or discrimination.

Fairness refers to fair treatment and non-discrimination[14]; this includes the use of personal data by systems in ways that people would reasonably expect and ensuring that the use of data does not result in unfair or discriminatory outcomes.

Bias is an aspect of decision-making including automated decisions (decisions made by technology including AI), which can result in discrimination i.e. an adverse effect or outcome for an individual or group of people.

Technology should not be inherently biased or result in bias, unfairness or any discriminatory effect or outcome.

Good Practice:

  • Practices relying on technology, including AI, should take all reasonable steps to assure themselves that systems are tested for inbuilt bias before integration, and should test and monitor for bias or discrimination on an ongoing basis (a simple monitoring sketch follows this list). Ongoing testing may be something providers can support, and something practices may wish to include in any service level agreement or contract.
  • Should systems show evidence of bias or produce decisions or outputs which are unfair or discriminatory, practices need to take immediate steps to address this, if need be, with the system developer or other experts.
  • Technology providers should be able to provide assurance that systems process data fairly and have been tested for bias and discrimination. Providers should be able to provide appropriate evidence to support this, including information about the data systems were trained on, the metrics or proxies used in testing, tolerances and performance indicators, and what was done to address any bias, unfairness or discrimination identified.
  • Technology providers should be willing to explain automated decision making and provide an appropriate level of information to enable practices in turn to provide appropriate explanations to clients.
  • The sorts of things clients are likely to want to know and seek assurance about include: what decisions does the system make about them or their matter; does a human review those decisions; what information is used to make those decisions; what can they do if a decision is wrong or they want to challenge it; and is their personal data safe.
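
As an illustration of ongoing monitoring for bias, the Python sketch below compares the rate at which an automated check flags clients for extra verification across groups, a simple demographic parity check. The field names and the 0.05 tolerance are assumptions for illustration, not regulatory thresholds.

```python
from collections import defaultdict

def flag_rates(records: list[dict]) -> dict[str, float]:
    """Rate at which each group is flagged for extra verification."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        flagged[r["group"]] += int(r["flagged"])
    return {g: flagged[g] / totals[g] for g in totals}

def parity_gap(rates: dict[str, float]) -> float:
    """Largest difference in flag rates between any two groups."""
    return max(rates.values()) - min(rates.values())

# Toy data: group B is flagged twice as often as group A.
records = [
    {"group": "A", "flagged": True}, {"group": "A", "flagged": False},
    {"group": "B", "flagged": True}, {"group": "B", "flagged": True},
]
rates = flag_rates(records)
if parity_gap(rates) > 0.05:  # illustrative tolerance, not a legal threshold
    print("Flag-rate disparity exceeds tolerance; investigate:", rates)
```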

Principle 11: Capability

Practices should ensure they have the necessary capability to integrate technology safely and effectively and provide the necessary training and implement appropriate procedures to support its safe and effective use.

Context: Practices must determine the capability and expertise required to implement and use technology safely and effectively, and ensure that staff receive the necessary training and support to use it safely and to best effect, i.e. for its intended purposes and/or the purposes for which the practice intends it to be used.

Good practice:

  • Practices should ensure staff, including senior partners, managers and other decision makers have the necessary level of understanding and technical competence to use technology safely and effectively. This is particularly important for those using AI or other technology that produces automated decisions or outputs; staff should have a general understanding of how it works, including its limitations and the potential risks associated with its use.
  • Practices should have policies in place to ensure that staff understand the importance of human oversight, when it is necessary (i.e. what automated decisions or outputs need to be verified), how outputs should be verified (against what data, sources or records), and what to do if they are not confident in the reliability or accuracy of decisions or outputs.
  • Practices should have clear policies that specify what technology can be used, in what circumstances and by whom; this should cover technology and systems within the practice as well as the use of technology available online, including AI systems such as ChatGPT.
  • Practices should provide appropriate training for staff using technology and ensure that training and skills are kept up to date.
  • Based on an assessment of the risks involved in the use of certain technology such as AI for example, practices should have appropriate processes in place to supervise and where necessary review the work of junior colleagues and support staff using that technology.

Other resources for reference


1. AI Playbook for the UK Government provides useful information explaining what transparency is in the context of AI systems and what evidence technology providers should be able to provide to demonstrate the transparency of their systems.

2. DSIT Tracker Survey (Wave 4) on Public Attitudes to Data and AI

3. Under UK GDPR processing includes personal data processed wholly or partly by automated means (that is, information in electronic form).

4. UK GDPR defines personal data as ‘any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person’.

5. For further information on explainability, see The Government’s Artificial Intelligence Playbook and guidance published by the Turing Institute.

6. For further information on explainability, see ICO Guidance on Explaining decisions made with AI and guidance published by the Turing Institute.

7. Findings from the Public Attitudes on Information Rights Survey, 2024

8. ICO Privacy Notice Generator (note however that parts of this guidance are under review following implementation of the Data Use and Access Act 2025 which came into effect on 19 June 2025).

9. See the ICO’s guidance on data protection impact assessments.

10. R (Ayinde) v The London Borough of Haringey [2025] EWHC 1040 (Admin) offers a salutary lesson in the risks of failing to check outputs generated by AI.

11. See Equality and Human Rights Commission Guidance for businesses

12. See ICO Guidance on fairness in AI

13. See Ethical Principles 6, Code of Conduct

14. See ICO explanation of bias, discrimination and fairness