What happened?

On 9 March 2026, the Competition and Markets Authority (CMA) published its guidance on Complying with Consumer Law When Using AI Agents (the guidance). The guidance arrives as agentic artificial intelligence (AI) is rapidly becoming a fixture of modern business operations, with many companies deploying agentic AI systems in consumer-facing roles – including tools to handle customer queries, process refunds, recommend products and manage marketing campaigns.

As AI systems capable of taking autonomous actions on behalf of a business (such as interacting with customers, making decisions and carrying out tasks in much the same way that a member of staff would), agentic AI tools present opportunities to which businesses are understandably drawn. But the CMA’s guidance sets out a key principle for any AI deployment facing or affecting consumers: the fact that it is an AI agent, rather than a human, performing these functions does not diminish the business’s obligations under consumer protection law. The same rules apply, and businesses need to be prepared.

What does the guidance say?

At its core, the guidance functions as a practical framework that businesses can follow to ensure compliance with consumer law when implementing, deploying and maintaining AI agents. Four principles underpin the CMA’s approach:

  1. Transparency – When deploying a publicly facing AI agent, companies must be transparent with customers about when they are interacting with AI rather than a human.
  2. Compliance by design – Businesses should ensure that the operation of their AI agent has been properly grounded in the relevant consumer rights laws (for example, through fine-tuning its operation and/or through applying guardrails and consumer protection compliance rule sets at the inference layer). The guidance also recommends A/B testing as a means of evaluating whether the AI agent’s grounding is translating into compliant customer interactions.
  3. Human oversight – The guidance is clear that deploying an AI agent is not a “set it and forget it” exercise. Businesses are expected to maintain ongoing human monitoring of their AI agents to ensure they continue to operate as intended and within the law.
  4. Swift remediation – Given the scale at which AI agents can operate (potentially interacting with thousands, if not tens or hundreds of thousands, of customers in a short space of time), the guidance stresses the importance of acting quickly when an AI agent is not performing correctly. The potential for harm to spread rapidly makes a prompt and effective response essential.
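To make the “compliance by design” and “human oversight” principles concrete for technical teams, the grounding and guardrail ideas above can be sketched as a simple inference-layer check that screens an agent’s draft replies before they reach a customer. This is a minimal, hypothetical illustration only – the rule names, patterns and function names below are our own assumptions, not drawn from the CMA guidance, and a production rule set would be far more extensive.

```python
# Hypothetical sketch of an inference-layer guardrail: every draft reply
# from an AI agent is checked against a consumer protection rule set
# before it is sent. Flagged replies are escalated to human review,
# reflecting the guidance's human-oversight principle.
import re
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    pattern: re.Pattern  # replies matching this pattern are flagged

# Illustrative rules only: false urgency and late fee disclosure are two
# of the failure modes the guidance warns about.
RULES = [
    Rule("false_urgency", re.compile(r"only \d+ left|offer ends in", re.I)),
    Rule("fee_deferral", re.compile(r"fees? (will be|are) (shown|added) at checkout", re.I)),
]

def check_reply(draft: str) -> list[str]:
    """Return the names of any rules the draft reply violates."""
    return [r.name for r in RULES if r.pattern.search(draft)]

def guarded_send(draft: str, escalate) -> str:
    """Send compliant replies; route flagged ones to a human reviewer."""
    violations = check_reply(draft)
    if violations:
        return escalate(draft, violations)
    return draft
```

In practice, such checks would sit alongside fine-tuning and the A/B testing the guidance recommends, rather than replace them; the point is that compliance logic is enforced at the point of output, not merely assumed from training.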

Critically, the guidance makes clear that it is the business deploying the AI agent, not the company that designed or trained the underlying model, that bears legal responsibility for any failure to comply with consumer protection laws.

How could AI get it wrong?

UK consumer protection law is set out across multiple pieces of legislation, including, among others:

  • The Consumer Rights Act 2015
  • The Consumer Contracts (Information, Cancellation and Additional Charges) Regulations 2013
  • The Digital Markets, Competition and Consumers Act 2024

These laws cover a range of topics – including, among others, price presentation, discounting practices, marketing claims, unfair contract terms, and returns and refunds. These obligations rely on myriad different legal tests. There are, for example, “banned practices”, which are always prohibited, while other types of “misleading actions” require that a consumer made a different “transactional decision” as a result of being misled. In addition, consumers may have different remedies – e.g., in relation to refunds – depending on the circumstances of their purchase.

AI agents that fail to navigate this web of regulation properly – e.g., by not disclosing unavoidable fees and charges until late in the purchasing process, creating a false sense of urgency through their interactions with a customer or miscalculating a returns deadline – can expose the business that deploys them to significant noncompliance risk. The potential consequences could be severe, with maximum fines of up to 10% of worldwide turnover.

Things businesses should do if considering implementing an AI agent

If a business is considering deploying an AI agent, there are several practical steps it should be taking now.

  1. Confirm that the agent’s operations and knowledge set are appropriately grounded in the relevant consumer protection laws (e.g., through fine-tuning its operation, applying guardrails at the inference layer, and ensuring outputs and actions are grounded in the business’s internal compliance standards and policies).
  2. When leveraging third-party AI tooling to power AI agents, do not assume that compliance has been built in – verify this for yourself. Where possible, consider introducing appropriate protections and commitments in the underlying contractual arrangements with the provider.
  3. Conduct extensive prelaunch testing whenever changes are made to the AI agent’s implementation to ensure it continues to interact with customers in a compliant manner. Testing should not be a one-off exercise at launch; it should be an ongoing part of how the agent is managed, supported by a schedule of continuous human review and audit to catch instances of behaviours that expose the company to consumer law noncompliance risk.
  4. Have a remediation strategy ready to deploy if consumer law breaches are identified. Where instances of noncompliant actions are identified, businesses must be able to intervene quickly and effectively. The guidance is explicit that speed matters, although businesses also need to be cognisant that forward-looking remediation is not a cure for or defence against previous breaches.

What’s next?

AI agents may be a new and evolving tool, but the consumer protection obligations that govern their use are well established – and regulators are watching. Businesses that take a proactive approach now, by asking the right questions of their providers, building compliance into their contracts, testing rigorously and maintaining meaningful human oversight, will be best placed to harness the considerable commercial benefits of agentic AI without assuming undue noncompliance risk. Those that fail to do so risk not only significant financial penalties, but reputational damage that could prove costly in the long run.

If you would like to discuss what the guidance means for your business, or how to best implement AI agents in a compliant manner, please do not hesitate to get in touch with a member of the Cooley team listed below.

This content is provided for general informational purposes only, and your access or use of the content does not create an attorney-client relationship between you or your organization and Cooley LLP, Cooley (UK) LLP, or any other affiliated practice or entity (collectively referred to as "Cooley"). By accessing this content, you agree that the information provided does not constitute legal or other professional advice. This content is not a substitute for obtaining legal advice from a qualified attorney licensed in your jurisdiction, and you should not act or refrain from acting based on this content. This content may be changed without notice. It is not guaranteed to be complete, correct or up to date, and it may not reflect the most current legal developments. Prior results do not guarantee a similar outcome. Do not send any confidential information to Cooley, as we do not have any duty to keep any information you provide to us confidential. When advising companies, our attorney-client relationship is with the company, not with any individual. This content may have been generated with the assistance of artificial intelligence (AI) in accordance with our AI Principles, may be considered Attorney Advertising and is subject to our legal notices.