Guidance for Expert Witnesses on the use of Artificial Intelligence (AI)

Introduction

The use of artificial intelligence (AI) in the legal industry is now commonplace. AI tools are increasingly being used by legal professionals, members of the judiciary and expert witnesses to assist them in their work.

This document, published by The Academy of Experts (the Academy), is intended to provide guidance to expert witnesses on the use of AI in their work.

The term ‘expert’ is used in this guidance to refer to an expert witness acting in the capacity of an independent expert witness instructed to provide evidence in legal proceedings (as opposed to an advisory expert, for example).

This document sets out:

  • In Section A, background information on AI, how experts might use it, and the risks involved.
  • In Section B, the Academy’s guidance on the use of AI by experts, including how experts can engage with AI in a manner which is both compliant with their duties as an expert and which upholds public trust.
  • In Section C, a checklist of points for experts to consider in relation to using AI.
  • In the Appendix, a brief glossary of key AI-related terms.

The guidance in this document is not intended to replace human judgment. Experts should consider carefully their use of AI and the impact of this on their duties, their professional relationships with their client and/or instructing solicitors, and ultimately their work. However, it is hoped that this guidance provides helpful information and suggestions for experts to consider when using AI.

Thanks are given to Minesh Tanna of Simmons & Simmons LLP for his invaluable contribution in leading the project.


Note: This guidance was published on 30 January 2026.

Foreword

The release of ChatGPT (based on GPT-3.5) at the end of November 2022 was rightly perceived as heralding an artificial intelligence revolution which would have extraordinarily far-reaching effects on all areas of life involving the processing of information. Scarcely three years on, the potential extent of the revolution can be seen to be even greater than was anticipated at the time, and the speed with which the revolution has developed has startled even many of the enthusiasts and experts in the AI field.

It is now clear that both the potential benefits and the potential risks of using AI are very substantial. So it is vital that anyone whose work could benefit from AI should be aware of those benefits and those risks. There is already a mass of examples of work which has been rightly – and often publicly – rejected or criticised for uncritical use of AI owing to its hallucinations and other imperfections. And I believe that it will not be long before, at least in some areas of work, those who have not used AI will be criticised for not drawing on it as an important source of information and ideas.

So, a proper and updated understanding of the appropriate use, benefits and drawbacks of AI is vital for anyone concerned with the processing of information. And that is particularly true for those people whose work is of public significance, and that plainly extends to all those working in the legal system, and in particular in the field of dispute resolution. Expert witnesses play an important – I believe an increasingly important – role in that field. It is therefore vital that all expert witnesses are fully informed and up-to-date about the advantages, limits and risks of using AI when preparing their reports, and about the appropriate information to include in their reports as to their use of AI.

I therefore unreservedly welcome this important and clear Guidance, and I congratulate those members of the Academy who were responsible for it. I very strongly recommend all Expert Witnesses to read this Guidance and bear in mind all that it says when considering their views and evidence and when preparing their reports.


Rt Hon Lord Neuberger of Abbotsbury

Lord Neuberger graduated with a degree in natural science from Oxford and worked at a merchant bank until he entered Lincoln’s Inn and was called to the English Bar in 1974. He specialised in property law, and was appointed Queen’s Counsel in 1987 and became a Bencher of Lincoln’s Inn in 1993.

From 1996 to 2004, he served as a Judge of the Chancery Division of the High Court. He was appointed a Lord Justice of Appeal in 2004 and a Lord of Appeal in Ordinary in 2007. He became Master of the Rolls in 2009, and in 2012 he was appointed President of the Supreme Court of the United Kingdom, serving until his retirement in September 2017.

He has since been practising as an arbitrator, mediator and legal expert from One Essex Court. He became a Non-Permanent Judge of the Hong Kong Court of Final Appeal in 2009 and a Judge of the Singapore International Commercial Court in 2018, and continues to sit in both courts.

Section A: Background

1. Understanding AI

1.1 It is essential that experts recognise when they are using AI. AI refers to computer software which simulates human intelligence by performing tasks that typically require human cognition, such as learning, problem-solving, decision-making, and perception. Unlike conventional computing, which follows explicit, pre-programmed instructions, AI technologies learn from data to identify patterns and generate outputs without being explicitly coded for every possible scenario. At the same time, AI is not sentient, and its capabilities are based on (and limited by) the tasks it is set and the data sets from which it learns.

1.2 There are different forms of AI used today, typically defined by their capabilities or goals. The key forms are:

  1. Predictive AI, which learns from historical data to forecast future outcomes or classify information. Examples include systems that predict a stock price or identify fraud risk.
  2. Generative AI (GenAI), which is trained to produce novel, realistic content. This includes text (like large language models), images, audio, or code, based on the patterns learned from its training data. ChatGPT is a GenAI system.
  3. Agentic AI, which is becoming increasingly prevalent and refers to a system that is designed to take actions autonomously to achieve a specific goal. It plans multi-step tasks, reasons about its environment, and can interact with other systems or data sources without constant human prompting.
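
To make the distinction between the first two forms concrete, the following minimal Python sketch contrasts a predictive model trained on labelled historical data with a call to a generative tool. The fraud-risk features are invented toy data, the predictive half uses the scikit-learn library, and the generate function is a hypothetical placeholder rather than any real provider's API.

    # Minimal sketch contrasting predictive and generative AI.
    # The predictive half uses scikit-learn on invented toy data; the
    # generative half is a hypothetical placeholder, not a real API.

    from sklearn.linear_model import LogisticRegression

    # Predictive AI: learn from labelled historical data, then classify.
    # Toy fraud-risk features: [transaction amount (GBP 000s), hour of day].
    history = [[0.2, 14], [0.5, 10], [9.8, 3], [12.1, 2]]
    labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = flagged as fraud

    model = LogisticRegression().fit(history, labels)
    print(model.predict([[11.0, 3]]))  # forecasts a class for unseen data

    # Generative AI: produce novel content from a natural-language prompt.
    def generate(prompt: str) -> str:
        raise NotImplementedError("placeholder for a real GenAI API call")

    # e.g. generate("Explain metal fatigue to a non-specialist in 100 words.")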

1.3 In some cases, it will be obvious that a tool uses AI; for example, when using prominent AI services such as ChatGPT or when using sophisticated tools which provide quick, tailored responses. In other cases, it won’t be obvious; for example, when using data analytics tools, document review tools (e.g. in eDiscovery software) and even internet search engines, all of which are increasingly using AI.

1.4 When using any form of technology, experts should consider carefully whether AI is involved as this is likely to have implications for their work. The following features are likely to be indicators that the technology incorporates some form of AI:

  1. ‘AI’ label: if the technology is labelled as an ‘AI’ tool, then this is likely to be a strong indicator (although not a guarantee) of AI. Note that AI comprises a range of technologies, so other labels – reflecting these technologies – might be used instead. Please refer to the Glossary in the Appendix for further information on these technologies.
  2. Automation and efficiency: where the technology automates complex tasks and/or provides results more efficiently compared to manual methods, then this could be a sign that AI is being used. However, even non-AI technologies can speed up manual processes, so this is not always a determinative indicator of AI.
  3. Pattern recognition: where the technology recognises patterns in data (for example, identifying related documents in a large dataset) which would be difficult for a human to discern, then this is likely to be an indicator of AI.
  4. Understanding language: where the technology understands human language (for example, by summarising documents, extracting key information, or translating text), then this is also likely to be an indicator of AI.
  5. Image, video or audio analysis: where the technology is able to analyse image, video or speech data (for example, extracting images, manipulating video or identifying speech patterns), then it is likely to use AI.
  6. Generating content: where the technology generates new content (whether text, image, video or audio), particularly in response to prompts, then it is also likely to use AI.

1.5 Experts should be slow to conclude that they are not using AI where the technology in question presents any of the indicators above or otherwise appears to operate in a sophisticated way. If there is any doubt about whether the technology uses AI, then the expert should undertake further due diligence to establish this.

2. Using AI

2.1 When experts use AI, a key point is that they remain ultimately responsible for their work product. Experts cannot divest responsibility or evade their duties when they use AI. AI is not a substitute for the expert’s opinion. Experts should therefore be cautious in their use of AI and should apply an appropriate level of professional oversight to any AI-generated output that they use in the course of their work.

2.2 Experts might consider using AI for the following purposes:

  1. Self-education: using AI to generate explanations or other educational content (for example, training material) regarding technical or complex concepts related to the subject matter of the expert’s instruction.
  2. Data analysis: using AI to analyse large volumes of data, identifying patterns and insights, and making predictions as to future outcomes.
  3. Quality control: using AI to undertake proof-reading or to identify possible errors or inaccuracies in the expert’s work product.
  4. Summarising content: using AI to condense large amounts of information into summaries, for example, to make it easier for the expert to digest.
  5. Generating content: using AI to generate analysis or to create visuals, for example.
  6. Enhancing communication: using AI in communications with other stakeholders (for example, to translate text between languages, to render jargon in plain language, or to assist in virtual meetings, e.g. with transcription).
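
As an illustration of the summarising use case above, the sketch below asks a generative AI service to condense a non-confidential document, using the OpenAI Python SDK; the model name and input file are assumptions for illustration, and any real use should follow the safeguards set out in Section B.

    # Illustrative sketch: asking a GenAI tool to summarise a NON-confidential
    # document via the OpenAI Python SDK. The model name and input file are
    # assumptions; never upload confidential or privileged material without
    # first checking the provider's terms (see the risks in Section 3).

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    with open("public_background_note.txt") as f:
        document = f.read()

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; select per your own due diligence
        messages=[
            {"role": "system", "content": "You are a careful technical summariser."},
            {"role": "user", "content": f"Summarise in 150 words:\n\n{document}"},
        ],
    )

    # The expert must still verify the summary before relying on it.
    print(response.choices[0].message.content)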

3. Risks of using AI

3.1 The use of AI presents risks, including practical, legal and reputational risks. Some of these risks are novel and do not arise with the use of conventional software. Experts should familiarise themselves with these risks and always keep them in mind when using AI.

3.2 Some of these risks may be more common and/or more serious in particular forms of AI (e.g. generative or agentic AI), in certain AI tools and in particular domains. Experts should therefore consider carefully the risks that AI poses in the specific contexts and circumstances in which they are using it.

3.3 The risks of using AI include:

  1. Inaccuracies/hallucinations: Generative AI in particular can give incorrect or even fabricated information – text that appears to be confident and convincing superficially but is factually incorrect (including invented sources or citations, e.g. of academic papers) (a ‘hallucination’). Inaccuracies can also result from users failing to ask the right questions (or in the right way). Inadvertent reliance on AI hallucinations can have serious consequences, both for the expert’s credibility and for the client’s legal case. Perhaps most importantly, it is also likely to result in a breach by the expert of their duties to the court or tribunal. The number of reported cases in which hallucinations have come to light is increasing rapidly; typically, where lawyers or litigants in person have relied upon AI to identify previous judgments which have then been relied upon in submissions to the court, and which have later transpired to be hallucinations. For example, this occurred in Harber v Commissioners for His Majesty’s Revenue and Customs [2023] UKFTT 1007 (TC), where nine fictitious tribunal decisions were cited, and in the 2025 High Court cases Al-Haroun v Qatar National Bank (Q.P.S.C.) [2025] EWHC 1495 (KB) (18 fake cases) and Ayinde v London Borough of Haringey [2025] EWHC 1494 (KB) (five fake cases). These incidents led to judicial criticism, regulator referrals, and reputational damage for those involved.
  2. Confidentiality/privilege: Many AI tools are publicly available and free-to-use. However, these tools typically don’t offer confidentiality over the material uploaded to them; often the providers of these tools retain the right to use that material to improve the tool over time. In using these tools, experts may therefore risk inadvertently disclosing confidential client information. This could also result in a loss of any legal professional privilege which might otherwise apply to that information. This is a serious risk. Experts should check the relevant terms and the information security arrangements before they upload confidential information to any tool.
  3. Bias/toxicity: AI tools, particularly those which use generative AI, can produce biased outputs. For example, AI tools may naturally have a preference or inclination towards a particular outcome or viewpoint, based on the data on which they have been trained. There may be inherent toxicity in these outputs; for example, a presumption of gender or ethnicity in a particular scenario. The outputs, depending on how they are used, could also therefore cause discrimination.
  4. Privacy: When experts input personal data into AI tools, particularly generative AI tools (or if the output from the AI tool includes personal data), they will need to comply with applicable data protection laws. Experts should also satisfy themselves that the AI tool itself complies with these laws; for example, by reviewing any relevant policies or terms of service.
  5. Intellectual property (IP): The use of AI tools can raise IP concerns. For instance, the data used to train AI models or the outputs generated by these models may be subject to IP rights. Experts need to be aware of the ownership and usage rights associated with AI-generated content, as improper use could lead to infringement claims or disputes over IP ownership.
  6. Regulatory compliance: A number of jurisdictions are introducing regulations (or amending existing regulations) to govern the use of AI, including specifically in the context of legal proceedings. Experts must ensure that their use of AI complies with all relevant legal and regulatory frameworks. Failure to adhere to these regulations can result in penalties or undermine the credibility of the expert’s work. It is crucial for experts to stay informed about the evolving regulatory landscape surrounding AI technologies.
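
On the confidentiality risk in particular, one simple precaution, sketched below in Python with invented patterns and an assumed case-reference format, is to mask obvious identifiers before any text is sent to an external tool. Pattern-based masking is illustrative only: it will miss many identifiers and is no substitute for reviewing the tool’s terms and information security arrangements.

    # Minimal sketch of pre-upload redaction: masking obvious identifiers
    # before text is sent to an external AI tool. The patterns (and the
    # case-reference format) are invented for illustration and will miss
    # many identifiers; this does not replace proper due diligence.

    import re

    PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "PHONE": re.compile(r"\b0\d{2}\s?\d{4}\s?\d{4}\b"),   # simplified UK format
        "CASE_REF": re.compile(r"\b[A-Z]{2}-\d{4}-\d{6}\b"),  # assumed reference format
    }

    def redact(text: str) -> str:
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    print(redact("Contact j.smith@example.com re claim HQ-2024-000123 on 020 7946 0958."))
    # -> Contact [EMAIL REDACTED] re claim [CASE_REF REDACTED] on [PHONE REDACTED].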

4. Experts’ duties

4.1 Experts are typically subject to professional duties, including to the court or tribunal. For example, under the Civil Procedure Rules of England and Wales, experts are required to:

  1. provide evidence which is their independent product, uninfluenced by the pressures of litigation;
  2. assist the court/tribunal by providing objective, unbiased opinions on matters within their expertise;
  3. consider all material facts, including those which might detract from their opinions; and
  4. make clear in their reports when a question or issue falls outside their expertise, and where they are not able to reach a definite opinion; for example, because they have insufficient information.

4.2 Similarly, in arbitrations, parties and tribunals often agree that experts should provide evidence in accordance with similar rules and principles; for example, by reference to guidelines published by arbitral organisations, such as the Chartered Institute of Arbitrators.

4.3 Prior to using any AI, experts should consider the impact that this may have on their duties and, importantly, must ensure that they can and do continue to comply with those duties at all times. As emphasised in Section 2 above, experts remain responsible for ensuring that they are compliant with their duties to the court or tribunal when using AI.


Section B: Guidance on the use of AI

1. Prior to using AI

1.1 Prior to using any AI in their work, experts should:

  1. Ensure that they are permitted to use AI at all.
    This will require consideration of the relevant legal and regulatory landscape (for example, any prohibitions on the use of AI in any applicable court or tribunal rules), the expert’s professional duties, and the contractual arrangements with their instructing lawyers (which might prohibit the use of AI).
  2. Determine the purpose for which they intend to use AI.
    Experts should clearly identify the intended purpose (see Section 2 above for examples of some purposes) in order to determine the lawfulness, appropriateness, and risk level involved, as well as the safeguards that the expert should implement in using AI for this purpose (see below). Identifying the specific purpose will help the expert to consider these points properly.
  3. Carefully consider whether it is lawful and appropriate to use AI for the specific purpose identified, and the risk level involved.
    Experts should not use AI merely because it is available. The expert should assess the lawfulness and appropriateness of the use of AI for their intended purpose, having regard (without limitation) to the factors identified in point 1 above, the benefits of using AI and the risks involved.

    Experts may find it helpful to consider three categories of risk in relation to an identified use or purpose:

    1. Prohibited:
      An impermissible use under applicable law or regulation (e.g. a prohibited use under Article 5 of the EU AI Act), a use which would be a breach of the expert’s duty (e.g. the complete outsourcing of the expert’s work to an AI tool), or a contractual prohibition on the use.
    2. High-risk:
      A use which carries greater levels of risk due to the legal or regulatory position (for example, a high-risk use under the EU AI Act or use of sensitive personal data in the AI tool), or where the nature of the purpose carries increased risks. For example, where AI is used to:

      1. generate substantive content for the expert’s report (e.g. the expert’s opinion);
      2. undertake material analysis on which the expert’s opinion will be based;
      3. recreate scenarios (for example, counterfactual scenarios) to use as the basis for analysis underpinning the expert’s opinion; and/or
      4. more generally, assist the expert in forming their opinion where that opinion is likely to have an important impact on one or more individuals; for example, on their fundamental rights, health and/or financial well-being.

      This is not an exhaustive list and experts should consider carefully the risks (and the extent of those risks) of any particular purpose.

    3. Low-risk:
      A less risky purpose; for example, where AI is used to:

      1. provide information to the expert or act as a research tool;
      2. summarise, extract key information from or analyse documents (for example, in a large dataset), where this is unlikely to constitute material analysis on which the expert’s opinion will be based;
      3. review text for grammar, spelling, punctuation, and to suggest basic stylistic or tonal corrections, without altering the substantive content;
      4. undertake administrative tasks; for example, organising documents or scheduling work.

      There may be other low-risk uses, but each purpose should be considered on its own facts. For example, the uses above might be high-risk in particular circumstances (e.g. using AI to research topics with which the expert has no familiarity at all).

  4. Select the appropriate AI tool for the purpose.
    Experts should consider different AI tools for their purpose (rather than just defaulting to their preferred or the most accessible AI tool) and identify the one that is most appropriate for that purpose. This is likely to require considering a range of different AI tools, determining the intended purpose of those tools (e.g. what they were designed to do) and their capabilities, how they operate (e.g. the data they ingest and the output they provide), and their limitations. Experts should consider in particular the limitations of using general-purpose AI tools (e.g. ChatGPT or Copilot) for specialised tasks for which a more tailored AI tool might be more appropriate (for example, an AI tool trained on medical literature is likely to be more reliable in assisting a medical expert than a general-purpose AI tool). Ultimately, the expert should be prepared to explain why they have selected a particular AI tool over others for a particular task. Finally, experts should ensure that they are using the latest version of any AI tool, which is likely to deploy state-of-the-art capabilities and safety features.
  5. Understand how the AI tool works.
    Experts should ensure they have a thorough understanding of how the AI tool works, including its functionality, how it operates, the type of information it processes, and the output it provides. This is likely to require the expert to identify, obtain and review any supporting or explanatory information about the tool (to the extent not already reviewed as part of the due diligence on the tool).
  6. Liaise appropriately with instructing lawyers.
    Experts should consider whether to disclose their proposed use of AI to their instructing lawyers (or even obtain their approval). Experts should check and follow what is said in their engagement letter (or equivalent) about the use of AI and any necessary disclosures or consents. In the absence of any contractual requirements:

    1. For any proposed high-risk purpose, the expert should disclose this to their instructing lawyers and ensure that there are no objections (either from the instructing lawyers or the underlying client) before proceeding. It might be useful to discuss some of the points set out above with the instructing lawyers (e.g. particular AI tools for the proposed purpose) as well as more generally the impact of using AI on the expert’s work and opinion.
    2. For any proposed low-risk purpose, the expert may not need to disclose this, but they should satisfy themselves that there are unlikely to be any concerns raised by the instructing lawyers or the underlying client, and they should nevertheless be prepared to explain their use of AI under cross-examination. If the expert is in any doubt about the riskiness of any purpose or about their instructing lawyers’ position on this, then they should disclose this. For convenience, the expert might wish to obtain their instructing lawyers’ consent to the use of AI for low-risk purposes more generally in their work.
  7. Record key AI uses and decisions.
    Experts should ensure that they document, and have a clear record of, key decisions taken in relation to their use of AI and their particular purpose(s) (with appropriate supporting explanation); for example, why it was decided to use AI at all, the benefits and risks of using AI for the particular purpose(s), why the particular AI tool was chosen for each purpose, and for which aspect or part of the expert’s work AI was used.
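
One lightweight way to keep the records described in point 7 is a structured log entry per AI use, as in the Python sketch below. The field names and schema are assumptions that simply mirror the points above; they are an illustration, not a prescribed format.

    # Illustrative AI-use decision record with assumed field names. The
    # schema mirrors the record-keeping points above; experts should keep
    # whatever form of record their practice and instructing lawyers prefer.

    import json
    from dataclasses import asdict, dataclass, field
    from datetime import date

    @dataclass
    class AIUseRecord:
        date_recorded: str
        purpose: str               # e.g. "summarise public technical standards"
        risk_category: str         # "prohibited" / "high-risk" / "low-risk"
        tool: str                  # name and version of the AI tool
        why_ai: str                # why AI was used at all
        why_this_tool: str         # why this tool over the alternatives
        scope: str                 # which part of the expert's work it touched
        safeguards: list[str] = field(default_factory=list)

    record = AIUseRecord(
        date_recorded=str(date.today()),
        purpose="summarise a 400-page public technical standard",
        risk_category="low-risk",
        tool="ExampleLLM v4 (assumed name)",
        why_ai="manual summarisation disproportionate to its value",
        why_this_tool="handles long documents; does not train on uploads",
        scope="background reading only; not relied on in the opinion",
        safeguards=["summary checked against source", "no confidential input"],
    )

    print(json.dumps(asdict(record), indent=2))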

2. Safeguards

2.1 Experts should ensure they implement appropriate safeguards when using AI. In particular, experts should:

  1. Ensure adequate human oversight.
    It is crucial for experts to maintain adequate human oversight when using AI tools, ensuring that they do not over-rely on the technology and that they satisfy themselves that the AI tool’s output is appropriate, reliable and accurate. Experts should consider implementing processes to this end (for example, regular testing of the AI tool or regular checks of its output). Experts should adopt a proportionate approach; for example, using AI for purely administrative purposes (e.g. arranging documents) is unlikely to require testing or checking. In contrast, human oversight is particularly important for high-risk purposes and/or where generative or agentic AI tools are used.
  2. Document key steps.
    As well as documenting key decisions around the use of AI in the first place (see above), experts should also keep an ongoing written record (which they should be prepared to disclose to their instructing lawyers and in the legal proceedings) of key steps: notably, how (in broad terms) the AI tool was used, how the output was used, any issues encountered (and how they were overcome), and the safeguards implemented. These records should contain sufficient detail to enable a court or tribunal to understand why the expert took each step. Keeping a contemporaneous written record may also be a helpful way of guiding experts through each of these steps. For high-risk purposes, experts should also keep a written record of how they specifically used the AI tool; for example, any specific instructions or prompts to generate content. Ultimately, experts may be challenged on their use of AI, and so should be prepared to justify this and explain why the AI tool was reliable and had a net positive impact on the expert’s evidence.
  3. Check for inaccuracy/hallucination.
    As part of adequate human oversight, and especially when using generative AI tools in a high-risk context, experts should be vigilant in checking AI outputs for inaccuracy or hallucination. This is likely to involve cross-referencing AI outputs (including any sources cited by the AI tool) with known facts and data to ensure their validity and reliability; a minimal illustrative check is sketched after this list.
  4. Ensure compliance with applicable law and regulation.
    Experts should have identified in advance the relevant legal or regulatory requirements applicable to their intended purpose. Experts should ensure that they remain compliant with these requirements at all times when using AI. Experts should pay particular attention to requirements around confidentiality, privacy, IP, and discrimination. Experts should be especially careful when using client confidential information (or any other potentially sensitive material) in any AI tools, given the confidentiality risk highlighted above.
  5. Maintain vigilance around AI use.
    In addition to maintaining human oversight over the specific operation of AI tools, experts should remain vigilant about their use of AI in general. They should continuously reflect on their use of AI (and their specific purposes) in the broader context of the expert’s work and the case as a whole, particularly when developments occur. They should regularly ask themselves why they are using AI, what benefits it brings, and whether it remains appropriate for their particular purpose(s). Experts may need to reassess their use of AI if their instructions change or if new facts emerge.
  6. Keep in mind impact of AI on professional duties.
    Experts should always consider their professional duties and continuously reflect on whether their use of AI impacts these duties. This is likely to be especially important at particular times; for example, when experts incorporate AI-generated material into their reports or when they review or analyse AI outputs to form their opinions.
  7. Liaise with instructing lawyers.
    Experts may have already consulted with their instructing lawyers prior to using AI for their intended purposes. Nonetheless, they should consider liaising with instructing lawyers whenever it is appropriate, prudent, or helpful over the course of their instruction. This might be the case if the AI tool produces unexpected analyses or identifies novel issues, or even just to sense-check aspects of their AI use with the instructing lawyers; for example, the extent of the expert’s reliance on AI or the nature of prompts used.
  8. Consider team members and their potential use of AI.
    Experts often work in teams and rely on work conducted by others when drafting their reports or forming their opinions. It is vital for the named expert to understand if and how their team members are using or have used AI in connection with their work. If AI has been used by a team member, the named expert should ensure they have a thorough understanding of this so that they are prepared in case they are challenged on it. They should discuss this with team members early on and ensure that those team members follow this guidance.
  9. Consider disclosing use of AI.
    Experts should verify whether their professional duties and/or applicable laws and regulations (including any court rules) require them to disclose the use of AI to the court or tribunal, and/or to the opposing side. It is prudent to discuss this with instructing lawyers, especially where there is no clarity on whether there is a requirement to disclose, as it may still be advisable to do so.
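
As a minimal illustration of the citation check in point 3 above, the Python sketch below flags any cited authority in an AI-generated draft that does not appear in a list the expert has verified independently. The neutral-citation pattern and the verified list are assumptions for illustration, and no automated check replaces reading the underlying sources.

    # Minimal hallucination check for cited authorities: flag citations in
    # an AI-generated draft that are not in a list the expert has verified
    # independently. The neutral-citation pattern and the verified list are
    # illustrative assumptions; always read the underlying sources as well.

    import re

    # Citations the expert has located and read in an authoritative database.
    VERIFIED = {
        "[2023] UKFTT 1007 (TC)",
        "[2025] EWHC 1494 (KB)",
    }

    NEUTRAL_CITATION = re.compile(r"\[\d{4}\]\s+\w+\s+\d+\s+\([A-Z]+\)")

    def unverified_citations(draft: str) -> list[str]:
        """Return citations in the draft that have not been verified."""
        return [c for c in NEUTRAL_CITATION.findall(draft) if c not in VERIFIED]

    draft = ("See Harber [2023] UKFTT 1007 (TC) and the invented authority "
             "Smith v Jones [2024] EWHC 9999 (KB).")
    print(unverified_citations(draft))  # -> ['[2024] EWHC 9999 (KB)']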

3. Keeping up-to-speed with AI

3.1 Finally, it is important to remember that AI is a rapidly evolving field, with frequent significant developments in the technology (such as new models and capabilities, as well as emerging risks), and in legal and regulatory frameworks.

3.2 Experts should keep up-to-speed with these developments by following relevant academic literature, reviewing legal developments and attending practical training sessions offered by the Academy and others.


Examples

The following examples are demonstrative rather than exhaustive.
Experts remain responsible for assessing risk and applying appropriate safeguards when using AI.

AI Activity / Use Case | Description | Example Tools | Risk Level | Notes
--- | --- | --- | --- | ---
Basic internet search | Simple factual look-ups using search engines that may use AI | Google, Bing | Very Low | No AI-generated content relied on. Expert still interprets results.
Grammar / spell checking | Editorial review without altering substance | MS Word Editor, Grammarly | Very Low | Safe if not altering meaning or reasoning.
Transcription services | Automatic meeting/call transcription | Otter, Teams/Zoom transcripts | Low | Review for accuracy required.
Summarising non-sensitive documents | AI condenses large volumes of material for internal use | Claude, ChatGPT | Low | Expert must check accuracy and ensure no confidential information is exposed.
Document organisation | Classifying or tagging files | eDiscovery platforms | Low | Generally safe; does not affect expert opinion.
Pattern analysis of non-critical datasets | Identifying trends or clusters for preliminary insight | Domain-specific analytics tools | Medium | Requires expert verification before forming any opinion.
Research assistance on familiar topics | AI provides background explanations or references | ChatGPT, Gemini | Medium | Risk of hallucinations; expert must verify sources independently.
Drafting illustrative diagrams | AI generates neutral graphics or visual aids | Midjourney, DALL·E | Medium | Must ensure visuals are accurate and not misleading.
Analysis used in part to inform an expert opinion | AI helps process large datasets feeding into expert work | Specialist tools (medical, engineering, financial) | High | Expert must understand method, validate results, and maintain full responsibility.
Generating narrative content for a report (non-opinion sections) | AI drafts background or descriptive text | ChatGPT etc. | High | Must be checked rigorously; transparency may be required.
Scenario modelling | AI recreates hypothetical situations used for reasoning | Agentic/simulation tools | High | Significant risk of hidden assumptions or errors.
AI drafts substantive analysis or opinion | AI contributes to expert’s reasoning or conclusions | Any GenAI | Extreme | Expert’s opinion must be their own.
AI writes the entire expert report (even if edited afterwards) | Complete or majority outsourcing | Any GenAI | Prohibited/Extreme | Clear violation of duties; cannot be justified.
Inputting confidential case data into public AI tools | Uploading client materials to unrestricted systems | Public ChatGPT, Gemini | Prohibited | Breach of confidentiality and possibly privilege.

[AI risk scale infographic]

Appendix: Brief Glossary of Key AI-related Terms

Agentic AI: Agentic AI refers to artificial intelligence systems designed to operate with a higher degree of autonomy, initiative, and goal-directed behaviour than traditional AI models. Unlike systems that simply respond to prompts, agentic AI can plan actions, break down tasks into sub-tasks, make decisions in pursuit of objectives, and interact with tools or environments to achieve outcomes. These systems often integrate reasoning, memory, and continual feedback loops, enabling them to work more like assistants capable of taking proactive steps rather than merely generating outputs on command.

AI (Artificial Intelligence): AI refers to the development of computer systems capable of performing tasks that typically require human intelligence. AI technologies typically act with some degree of autonomy, rather than being constrained by explicitly programmed rules.

AI assistant: An AI assistant is a software application designed to help users perform tasks by providing information, offering recommendations, or automating routines. Familiar examples include virtual assistants like Siri or Google Assistant.

AI agent: An AI agent is an autonomous form of technology that interacts with its environment to achieve specific goals. It possesses the ability to make decisions, learn from its experiences, and adapt its behaviour over time, often operating with a degree of independence.

Algorithm: An algorithm is a step-by-step set of instructions or rules that a computer follows to solve a problem or complete a task. In AI, algorithms are the foundation upon which models learn and make decisions.

Generative AI: Generative AI is a type of AI designed to create new content, such as text, images, music, or videos. It achieves this by learning the patterns and structures from existing data to produce novel outputs that mimic human-like creation.

Large Language Model (LLM): LLMs are advanced AI models trained on vast amounts of text data to comprehend and generate human language. They utilise both NLP and machine learning techniques to produce coherent, contextually relevant, and often highly articulate responses.

Large Multi-Modal Model (LMM): LMMs are AI models with the capability to process and generate content across multiple types of data formats (modalities), including text, images, audio, and video. They integrate information from different modalities to provide more comprehensive and integrated outputs.

Machine learning (ML): Machine learning is a subset of AI focused on training algorithms to learn from data and subsequently improve their performance over time without explicit programming for every task. Deep learning is a specialised area within machine learning that utilises neural networks with multiple layers (often called “deep” neural networks) to model and discover complex patterns in large datasets.

Model: The model refers to the computer code that is capable of producing an output based on its training. It is the learned representation of data that the AI, guided by specific algorithms, uses to make predictions, generate content, or perform tasks.

Natural Language Processing (NLP): NLP is a branch of AI dedicated to enabling computers to understand, interpret, and generate human language.

Predictive AI: Predictive AI leverages data analysis and algorithms to forecast future outcomes or trends. It is widely applied in fields like finance, healthcare, and marketing to anticipate events based on historical data patterns.
