News

European Commission Sets Out Approach on Future Regulation of AI

Cooley Alert
February 21, 2020

On 19 February 2020, the European Commission released a white paper with its long-awaited proposals on regulating artificial intelligence.

The white paper contains far-reaching proposals that, if adopted, could have a significant impact on all product manufacturers and developers working in AI. The white paper raises the prospect of requirements being imposed both at the design stage and once products are on the market, together with changes to safety and liability legislation to account for perceived risks posed by AI. For certain "high risk" sectors, such as healthcare, transport, energy and parts of the public sector, or for use cases affecting workers' (and possibly consumers') rights or involving biometric identification or surveillance, mandatory requirements are on the cards.

The Commission notes that AI is already subject to existing EU legislation (including on data protection, consumer law, and product safety and liability, amongst others). However, existing EU legislation may not cover all of the risks that AI brings, exposing regulatory weaknesses and gaps.

Definition

A key issue for the future regulatory framework is the definition of AI, which is not currently defined under EU legislation. The European Commission notes that any definition will need to be flexible enough to accommodate technical progress while still providing legal certainty.

High-risk applications

The Commission has proposed a risk-based approach to regulation. The approach under consideration would identify high-risk applications by combining an exhaustive list of sectors (e.g., healthcare, transport, energy and parts of the public sector) with an assessment of whether the intended use poses a risk of significant impact on legal rights, injury, death or significant material damage. The Commission has also flagged some uses that should always be treated as high risk, such as the use of AI in ways that affect workers' (and possibly consumers') rights, or its use for biometric identification or for surveillance.

The framework would include legally binding requirements for developers and users of AI, building on existing EU legislation. The European Commission highlights that this would be a targeted approach, imposing no new administrative burdens on applications deemed low risk, to which existing provisions of EU legislation would continue to apply.

Requirements

The European Commission has identified a number of requirements that would be mandatory for high-risk use cases. It envisages that the following key features would be covered by standards:

  • training data
  • data and record-keeping
  • information to be provided
  • robustness and accuracy
  • human oversight
  • specific requirements for certain AI applications, such as those used for the purposes of remote biometric identification

The Commission also suggests that it may be necessary to specify which economic actors within the supply chain would bear responsibility for these different obligations. This would be without prejudice to existing rules that impose liability for defective products on the producer.

The Commission envisages that these requirements would be subject to prior conformity assessments, with procedures for testing, inspection or certification. This is likely to pose a significant burden for SMEs, and the Commission has recognised that structures will need to be put in place to ensure that innovation is not undermined.

Safety and liability framework

The "Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics" accompanies the white paper and assesses the impact of AI, IoT and robotics on existing areas of EU product safety and liability legislation and highlights where the Commission considers amendments are required. This ties in with the Commission's review of safety and liability legislation under its Work Programme 2020, which we addressed in a Productwise blog post.

The Commission is considering specific amendments to individual pieces of EU legislation, applying a targeted risk-based approach.

We will be taking a closer look at these proposals in a future blog.

Non-high risk applications

For lower-risk applications, the European Commission has proposed a voluntary labelling scheme. Participants could choose to sign up to the mandatory requirements described above or to similar requirements established specifically for the purposes of the voluntary scheme. Adherents to the scheme would be awarded a quality label to use with their AI applications.

Governance

Under the Commission's proposals, national authorities would be entrusted with the implementation and enforcement of the future regulatory framework.

Consultation and next steps

The European Commission is inviting comments on the proposals set out in the white paper. The consultation is open until 19 May 2020. This is an important opportunity for stakeholders to have their say and shape the future of EU policy in this area. Notably, amendments to existing EU product safety and liability legislation may have a broader scope and could affect products that do not incorporate AI or other digital technologies.

The Commission then intends to follow up with a legislative proposal and impact assessment towards the end of this year.

View the European Commission's press release announcing the white paper.

Check in soon for Part 2, which will focus on the European Commission's accompanying "Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics".

This content is provided for general informational purposes only, and your access or use of the content does not create an attorney-client relationship between you or your organization and Cooley LLP, Cooley (UK) LLP, or any other affiliated practice or entity (collectively referred to as “Cooley”). By accessing this content, you agree that the information provided does not constitute legal or other professional advice. This content is not a substitute for obtaining legal advice from a qualified attorney licensed in your jurisdiction and you should not act or refrain from acting based on this content. This content may be changed without notice. It is not guaranteed to be complete, correct or up to date, and it may not reflect the most current legal developments. Prior results do not guarantee a similar outcome. Do not send any confidential information to Cooley, as we do not have any duty to keep any information you provide to us confidential. This content may be considered Attorney Advertising and is subject to our legal notices.