1. Understanding AI
1.1 It is essential that experts recognise when they are using AI. AI refers to computer software which simulates human intelligence by performing tasks that typically require human cognition, such as learning, problem-solving, decision-making, and perception. Unlike conventional computing, which follows explicit, pre-programmed instructions, AI technologies learn from data to identify patterns and generate outputs without being explicitly coded for every possible scenario. At the same time, AI is not sentient, and its capabilities are based on (and limited by) the tasks it is applied to and the data sets from which it learns.
1.2 There are different forms of AI used today, typically defined by their capabilities or goal. The key forms are:
- Predictive AI, which learns from historical data to forecast future outcomes or classify information. Examples include systems that predict stock prices or identify fraud risk (see the illustrative sketch after this list).
- Generative AI (GenAI), which is trained to produce novel, realistic content, including text (as produced by large language models), images, audio, or code, based on the patterns learned from its training data. ChatGPT is an example of a GenAI system.
- Agentic AI, which is becoming increasingly prevalent and refers to a system that is designed to take actions autonomously to achieve a specific goal. It plans multi-step tasks, reasons about its environment, and can interact with other systems or data sources without constant human prompting.
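By way of illustration only, the following short Python sketch contrasts conventional, rule-based software with a simple predictive AI model of the kind described above. The figures, threshold and model choice are invented for this example and do not represent any particular tool; the point is that the AI model infers its own decision rule from historical examples rather than following one that has been explicitly programmed.

```python
# Illustrative sketch only. It contrasts conventional software (an explicit,
# pre-programmed rule) with a simple predictive AI model that learns its own
# decision rule from historical examples. The figures and model choice are
# invented for illustration and do not reflect any real tool or dataset.

from sklearn.linear_model import LogisticRegression

# Conventional computing: a rule written explicitly by a human.
def flag_transaction_by_rule(amount: float) -> bool:
    return amount > 10_000  # fixed threshold chosen in advance

# Predictive AI: the model infers a decision boundary from labelled history
# (transaction amount, whether it later turned out to be fraudulent).
past_amounts = [[120], [9_500], [15_000], [22_000], [300], [18_500]]
was_fraud = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(past_amounts, was_fraud)

print(flag_transaction_by_rule(12_000))   # True: the fixed rule fires
print(model.predict([[12_000]]))          # prediction learned from the data
```

In practice, experts will rarely build such models themselves; the purpose of the sketch is simply to show why AI-driven tools can behave in ways that are not fully specified in advance by their developers.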
1.3 In some cases, it will be obvious that a tool uses AI; for example, when using prominent AI services such as ChatGPT or when using sophisticated tools which provide quick, tailored responses. In other cases, it will not be obvious; for example, when using data analytics tools, document review tools (e.g. in eDiscovery software) and even internet search engines, all of which increasingly use AI.
1.4 When using any form of technology, experts should consider carefully whether AI is involved as this is likely to have implications for their work. The following features are likely to be indicators that the technology incorporates some form of AI:
- ‘AI’ label: if the technology is labelled as an ‘AI’ tool, then this is likely to be a strong indicator (although not a guarantee) of AI. Note that AI comprises a range of technologies, so other labels – reflecting these technologies – might be used instead. Please refer to the Glossary in the Appendix for further information on these technologies.
- Automation and efficiency: where the technology automates complex tasks and/or provides results more efficiently compared to manual methods, then this could be a sign that AI is being used. However, even non-AI technologies can speed up manual processes, so this is not always a determinative indicator of AI.
- Pattern recognition: where the technology recognises patterns in data (for example, identifying related documents in a large dataset) which would be difficult for a human to discern, then this is likely to be an indicator of AI.
- Understanding language: where the technology understands human language (for example, by summarising documents, extracting key information, or translating text), then this is also likely to be an indicator of AI.
- Image, video or audio analysis: where the technology is able to analyse image, video or speech data (for example, extracting images, manipulating video or identifying speech patterns), then it is likely to use AI.
- Generating content: where the technology generates new content (whether text, image, video or audio), particularly in response to prompts, then it is also likely to use AI.
1.5 Experts should be slow to conclude that they are not using AI where the technology in question presents any of the indicators above or otherwise appears to operate in a sophisticated way. If there is any doubt about whether the technology uses AI, then the expert should undertake further due diligence to establish this.
2. Using AI
2.1 When experts use AI, a key point is that they remain ultimately responsible for their work product. Experts cannot divest responsibility or evade their duties when they use AI. AI is not a substitute for the expert’s opinion. Experts should therefore be cautious in their use of AI and should apply an appropriate level of professional oversight to any AI-generated output that they use in the course of their work.
2.2 Experts might consider using AI for the following purposes:
- Self-education: using AI to generate explanations or other educational content (for example, training material) regarding technical or complex concepts related to the subject matter of the expert’s instruction.
- Data analysis: using AI to analyse large volumes of data, identifying patterns and insights, and making predictions as to future outcomes.
- Quality control: using AI to undertake proof-reading or to identify possible errors or inaccuracies in the expert’s work product.
- Summarising content: using AI to condense large amounts of information into summaries, for example, to make it easier for the expert to digest (see the illustrative sketch after this list).
- Generating content: using AI to generate analysis or to create visuals, for example.
- Enhancing communication: using AI in communications with other stakeholders (for example, to translate text between languages, to express jargon in plain language, or to assist in virtual meetings, such as with transcription).
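By way of illustration only, the following short Python sketch shows how a generative AI service might be asked to summarise a document. It assumes access to the OpenAI Python client with an API key configured; the model name, prompt and placeholder text are illustrative rather than a recommendation of any particular tool, and the confidentiality considerations in Section 3 apply before any real material is uploaded.

```python
# Minimal, illustrative sketch of asking a generative AI service to summarise
# a document. Assumes the OpenAI Python client is installed and an API key is
# configured in the environment; the model name is illustrative only, and the
# document text is a placeholder. Confidential material should not be sent to
# such services without the checks described in Section 3.

from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

document_text = "..."  # placeholder for the (non-confidential) source text

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "Summarise the following text in plain language."},
        {"role": "user", "content": document_text},
    ],
)

summary = response.choices[0].message.content
print(summary)  # the expert must still review the summary against the source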
3. Risks of using AI
3.1 The use of AI presents risks, including practical, legal and reputational risks. Some of these risks are novel and do not arise with the use of conventional software. Experts should familiarise themselves with these risks and always keep them in mind when using AI.
3.2 Some of these risks may be more common and/or more serious in particular forms of AI (e.g. generative or agentic AI), in certain AI tools and in particular domains. Experts should therefore consider carefully the risks that AI poses in the specific contexts and circumstances in which they are using it.
3.3 The risks of using AI include:
- Inaccuracies/hallucinations: Generative AI in particular can produce incorrect or even fabricated information: text that appears confident and convincing but is factually wrong, including invented sources or citations (e.g. of academic papers) (a ‘hallucination’). Inaccuracies can also result from users failing to ask the right questions, or failing to ask them in the right way. Inadvertent reliance on AI hallucinations can have serious consequences, both for the expert’s credibility and for the client’s legal case. Perhaps most importantly, it is also likely to result in a breach by the expert of their duties to the court or tribunal. The number of reported cases in which hallucinations have come to light is increasing rapidly; typically, lawyers or litigants in person have relied upon AI to identify previous judgments, cited those judgments in submissions to the court, and the judgments have later transpired to be hallucinations. For example, this occurred in Harber v Commissioners for His Majesty’s Revenue and Customs [2023] UKFTT 1007 (TC), where nine fictitious tribunal decisions were cited, and in the 2025 High Court cases Al-Haroun v Qatar National Bank (Q.P.S.C.) [2025] EWHC 1495 (KB) (18 fake cases) and Ayinde v London Borough of Haringey [2025] EWHC 1494 (KB) (five fake cases). These incidents led to judicial criticism, regulator referrals, and reputational damage for those involved.
- Confidentiality/privilege: Many AI tools are publicly available and free to use. However, these tools typically do not offer confidentiality over the material uploaded to them; often the providers of these tools retain the right to use that material to improve the tool over time. In using these tools, experts may therefore risk inadvertently disclosing confidential client information. This could also result in a loss of any legal professional privilege which might otherwise apply to that information. This is a serious risk. Experts should check the relevant terms and information security arrangements before uploading confidential information to any tool.
- Bias/toxicity: AI tools, particularly those which use generative AI, can produce biased outputs. For example, an AI tool may exhibit a preference or inclination towards a particular outcome or viewpoint, based on the data on which it has been trained. There may also be inherent toxicity in these outputs; for example, a presumption of gender or ethnicity in a particular scenario. Depending on how they are used, such outputs could therefore result in discrimination.
- Privacy: When experts input personal data into AI tools, particularly generative AI tools (or if the output from the AI tool includes personal data), they will need to comply with applicable data protection laws. Experts should also satisfy themselves that the AI tool itself complies with these laws; for example, by reviewing any relevant policies or terms of service.
- Intellectual property (IP): The use of AI tools can raise IP concerns. For instance, the data used to train AI models or the outputs generated by these models may be subject to IP rights. Experts need to be aware of the ownership and usage rights associated with AI-generated content, as improper use could lead to infringement claims or disputes over IP ownership.
- Regulatory compliance: A number of jurisdictions are introducing regulations (or amending existing regulations) to govern the use of AI, including specifically in the context of legal proceedings. Experts must ensure that their use of AI complies with all relevant legal and regulatory frameworks. Failure to adhere to these regulations can result in penalties or undermine the credibility of the expert’s work. It is crucial for experts to stay informed about the evolving regulatory landscape surrounding AI technologies.
4. Experts’ duties
4.1 Experts are typically subject to professional duties, including to the court or tribunal. For example, under the Civil Procedure Rules of England and Wales, experts are required to:
- provide evidence which is their independent product, uninfluenced by the pressures of litigation;
- assist the court/tribunal by providing objective, unbiased opinions on matters within their expertise;
- consider all material facts, including those which might detract from their opinions; and
- make clear in their reports when a question or issue falls outside their expertise, and where they are not able to reach a definite opinion; for example, because they have insufficient information.
4.2 Similarly, in arbitrations, parties and tribunals often agree that experts should provide evidence in accordance with similar rules and principles; for example, by reference to guidelines published by arbitral organisations, such as the Chartered Institute of Arbitrators.
4.3 Prior to using any AI, experts should consider the impact that this may have on their duties and, importantly, must ensure that they can and do continue to comply with those duties at all times. As emphasised in Section 2 above, experts remain responsible for ensuring that they are compliant with their duties to the court or tribunal when using AI.