
CMA Review Could Help Shape UK's AI Landscape

Law360
June 16, 2023

Editor's note: Authored by Anna Caro, Leo Spicer-Phelps and Claire Temple, with a contribution from Caroline Hobson, this article was originally published in Law360.

On May 4, the U.K.'s Competition and Markets Authority announced the launch of an initial review of the market for artificial intelligence foundation models.[1]

Through this initial review, the CMA aims to establish an early understanding of the competition and consumer protection principles that could best guide the development of the market for AI foundation models in the U.K. 

The initial review is taking place against the background of the CMA's role as the U.K.'s competition and consumer protection regulator and is in line with its continued focus on the digital sector in the U.K.

The review follows the publication of the U.K. government's March 2023 white paper on AI,[2] which sets out the government's proposed framework for regulating AI. It also instructs the CMA to consider how the innovative development and deployment of AI can be supported against five overarching principles:

  • Safety, security and robustness;
  • Appropriate transparency and explainability;
  • Fairness;
  • Accountability and governance; and
  • Contestability and redress.

Depending on the CMA's findings, the initial review may be a precursor to a more in-depth investigation into the U.K. market for foundation models. Any findings are highly likely to be used by the CMA to feed into its recommendations to the U.K. government concerning future legislative and regulatory changes affecting this sector.

In this context, the CMA has also recently published a response to the government's white paper, which provides some insights into the CMA's emerging views on the subject.

Foundation Models

Broadly speaking, foundation models are a class of AI systems that are trained on massive unlabeled datasets and require significant compute resources to train.

These models can be fine-tuned or trained within specific contexts to serve as the basis or foundation of various potential deployed AI applications, which may ultimately be used by consumers or business users.

Prominent examples include:

  • OpenAI's GPT-4 and other large language models, which can be used to generate natural language responses to user prompts and engage in dialogue with users in a coherent and conversational manner; and
  • Stability AI's Stable Diffusion and other text-to-image models, which can be used to generate hyperrealistic images based on users' text prompts.

Scope of CMA's Initial Review

The launch document of the review published by the CMA emphasizes that "the best way to help emerging technologies reach their maximum potential for people and businesses is by enabling them to develop in open, competitive markets."

Consequently, the CMA is focusing its initial review on three core areas:

  • Competition and barriers to entry in the development of foundation models, e.g., in respect of access to the data and compute resources necessary to train these models, access to talent and funding, and the ways in which foundation models could disrupt or reinforce the position of the largest firms;
  • The impact foundation models may have on competition in other markets, e.g., the implications of certain foundation models and associated capabilities, which may be controlled by a limited number of large organizations, becoming necessary for companies to compete effectively in other markets, such as search or productivity; and
  • Consumer protection, e.g., in respect of risks arising from the use of foundation models in products and services made available to consumers, including in relation to false or misleading information generated as outputs from technologies supported by such models.

Within the scope of its review, the CMA will consider not only current market conditions but also how the market may develop in the near future.

The relatively narrow scope of this review is consistent with the CMA's mandate as the U.K.'s competition and consumer protection regulator.

However, the CMA's announcement emphasizes that the review is intended to operate in line with the U.K. government's March 2023 white paper on AI,[3] which sets out the government's policy plans for the development of a pro-innovation approach to AI regulation in the U.K.

CMA's Next Steps

In addition to collating and evaluating existing research in the area, the CMA will rely on various sources of evidence to perform its analysis in the focus areas noted, including:

  • Voluntary submissions: The CMA has invited interested parties to make submissions on the review.
  • Stakeholder information requests: The CMA plans to issue information requests to key stakeholders. This is to include developers, researchers and suppliers of inputs such as compute and data, as well as customers and investors.
  • Meetings: The CMA also plans to hold bilateral meetings with key interested parties.

The evidence and analysis will inform a written report that will set out the CMA's findings. The report may include recommendations to the government on implementing the legislative and regulatory aims set out in its recent white paper, as well as guidance to suppliers, developers, businesses and end users. The CMA plans to publish its findings in early September.

Potential Effects and Outcomes

Any outcomes from the CMA's initial review could help shape the U.K.'s burgeoning AI sector and determine the U.K.'s and U.K. businesses' role in the global AI landscape. It also could have a material impact on any U.K. and international businesses looking to do business in the AI sector or leveraging AI-enabled technologies or solutions in the U.K.

In preparing its next steps, the CMA will want to ensure that its actions and recommendations protect U.K. consumers against any harmful effects from AI.

However, scrutiny of its recent merger decisions may increase the pressure on the CMA to avoid overly prescriptive regulation, and it will have to strike a balance between using its tools to prevent harm to U.K. consumers and encouraging a thriving ecosystem that fosters innovation in a nascent market.

In this context, the CMA's response to the government's white paper on AI regulation[4] might foreshadow some of the effects and outcomes from its review.

In its response, the CMA gave its support to the government's proposed approach of leveraging and building on existing regulatory regimes while also establishing a central coordination function for monitoring and support in regulating AI.

The CMA also expressed its support for the government's proposal of placing the five overarching principles for the development and deployment of AI on a nonstatutory footing in the first instance, in contrast to the approach taken by the European Union, which has proposed regulating AI through the draft EU Artificial Intelligence Act.

However, in light of more recent commentary, including from U.K. Prime Minister Rishi Sunak, emphasizing the existential risks posed by AI,[5] the U.K.'s AI strategy may ultimately move away from the nonstatutory approach outlined in the white paper.

The CMA also noted that, as a result of the cross-sector and cross-discipline impact of AI, it saw value in coordination across regulators, whether through the government's Department for Science, Innovation and Technology, an independent regulator, or a more informal body akin to the existing U.K. Digital Regulation Cooperation Forum, which was established in 2020 to ensure greater cooperation between regulators on digital matters.

This is consistent with an ongoing regulatory trend in the EU and U.K. of using cross-sectoral tools and regulations to address concerns in the digital sector, such as with the EU Digital Markets Act and the draft U.K. Digital Markets, Competition and Consumers Bill.

If the CMA identifies any competition or consumer protection issues in respect of the U.K.'s market for foundation models or related industries, it may investigate these further, e.g., through a lengthier market investigation.

While market investigations can be effective tools to introduce regulation to markets that are not working well, their length (18 months from the date of reference) may make this route unappealing in this instance.

If the CMA receives evidence that entities active in the sector have breached competition or consumer protection requirements, it could also launch enforcement action in respect of those suspected breaches.

The U.K.'s focus on AI, as evidenced by the launch of the initial review, the government's white paper and recent guidance from other regulators, such as the U.K. Information Commissioner's Office, is consistent with large-scale regulatory scrutiny and activity in this sector across the globe.

While the U.K. is expected to take steps to regulate AI, whether through legislation or voluntary measures, and despite its ambition to set itself apart as a pro-innovation AI hub, the CMA's recommendations and the U.K. government's policy in this area may well end up being shaped, to a greater or lesser degree, by the policy and regulatory approaches adopted by authorities in other key markets, notably the U.S. and EU.

Indeed, it appears that this process may already be underway, with a seemingly increased appetite within the U.K. government to implement more prescriptive regulatory constraints around the development and deployment of AI.

The U.K. government is also expected to act swiftly to seek to foster international cooperation on this topic, e.g., the U.K. and U.S. recently issued the Atlantic Declaration, setting out an intention to cooperate to ensure the safe and responsible development of AI technologies.[6]

Given this backdrop, certain elements of the CMA's initial review may well be overtaken by a fast-changing landscape. The world that the CMA was instructed to review earlier this year may look significantly different come September.

[1] https://www.gov.uk/cma-cases/ai-foundation-models-initial-review.

[2] https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.

[3] https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.

[4] https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1160272/AI_regulation_-_a_pro-innovation_approach.pdf.

[5] https://www.theguardian.com/technology/2023/may/25/no-10-acknowledges-existential-risk-ai-first-time-rishi-sunak.

[6] https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1161879/THE_ATLANTIC_DECLARATION.pdf.
