Beyond the UK AI Safety Summit – Outcomes and Direction of Travel
The UK hosted more than 100 representatives from across the globe at its AI Safety Summit in early November 2023. Leading up to the summit, we outlined the UK government’s objectives and its current approach to artificial intelligence (AI) regulation.
We have now reflected on the outcomes of the summit – along with recent developments in the global regulatory landscape – and have summarised our key takeaways below.
Outcomes of the AI Safety Summit
- Bletchley Declaration on AI safety – Twenty-eight countries, including the US and China, as well as the European Union, reached a consensus on the need for sustained international cooperation to address the risks posed by ‘frontier AI’.[1] Under the Bletchley Declaration, these nations have agreed to work together to ensure the development and deployment of ‘human-centric, trustworthy and responsible AI’. The declaration emphasises the need to build a ‘shared scientific and evidence-based understanding’ of the risks posed by frontier AI and ‘respective risk-based policies across countries’ to ensure safety. It also signifies that the global conversation on AI safety is far from over. Indeed, the Republic of Korea is set to co-host a ‘mini virtual summit’ on AI in May 2024, and France will host the next in-person summit in November 2024.
- AI Safety Institute – The UK announced the creation of its AI Safety Institute, tasked with researching the most advanced AI capabilities and testing the safety of emerging types of AI. Separately, the US government announced the formation of its own AI Safety Institute, which will work together with the UK’s institute. In addition to collaborating with its international counterparts and ‘like-minded’ governments, the UK’s AI Safety Institute is expected to partner with domestic organisations, including the Alan Turing Institute and private companies.
- AI testing and research – According to government materials, leading AI companies have recognised the importance of collaborating with governments, including the UK, on testing the next generation of AI models both before and after they are deployed. The UK government also announced that it has invested £300 million in its national AI Research Resource. The government’s aim is to provide enhanced AI infrastructure for research projects to maximise the benefits of AI, while supporting critical work into frontier AI risk mitigation.
- Frontier AI ‘State of the Science’ Report – Countries represented at the summit agreed to develop a ‘State of the Science’ Report on the capabilities and risks of frontier AI. The report will summarise existing scientific research on risks and identify priority areas for further research. According to government materials, the report will be published ahead of the mini virtual AI summit in Korea and will inform and complement other international initiatives.
- Accelerating safe AI development globally – According to a government press release, the UK will work with Canada, the US, the Bill and Melinda Gates Foundation and partners in Africa ‘to fund safe and responsible AI projects for development around the world’.
Global AI regulation beyond the AI Safety Summit
The summit facilitated a global conversation on AI safety and established forums intended to promote international collaboration on AI regulation. However, divergent views remain on exactly what type of regulation is required for AI, with multiple processes running in parallel – both nationally and internationally.
Just a few days before the summit, G7 leaders and the US government progressed separate efforts to regulate AI – with the G7 releasing a set of guiding principles and a voluntary code of conduct, and the Biden administration issuing an executive order on safe, secure and trustworthy AI. In addition, the UN recently launched a new Advisory Body on Artificial Intelligence, which will issue its own preliminary recommendations on building scientific consensus and ‘making AI work for all of humanity’ by the end of 2023. While these initiatives may be helpful in establishing principles and promoting knowledge-sharing, it remains to be seen whether there will be an alignment of international standards for regulating AI. The risk of divergence has the potential to make this a challenging area for businesses to navigate.
At the EU level, disagreements over the regulation of foundation models appear to have slowed negotiations on the draft EU AI Act. France, Germany and Italy have reportedly released a joint paper advocating more limited regulation of foundation models, proposing an innovation-friendly approach based on mandatory self-regulation. This contrasts with the position of other EU countries, such as Spain, which favour stricter regulation of foundation models.
What’s next for the UK?
On the UK’s domestic policy front, there was no mention of an AI bill in the King’s Speech on 7 November 2023, despite continued pressure from the House of Commons Science, Innovation and Technology Committee. Indeed, in its post-summit response to the committee’s interim report on AI governance, the government confirmed that it remains committed to a pro-innovation approach and ‘will not rush to legislation’. This echoes UK Prime Minister Rishi Sunak’s acknowledgement at the summit that binding requirements will ‘likely be necessary’ to regulate AI in the future, but that sufficient testing is needed first to ensure any legislation is grounded in empirical evidence.
The UK government is expected to issue the much-awaited response to its March 2023 AI white paper consultation later this year, and we will continue to monitor developments.
1. The UK government defines ‘frontier AI’ as highly capable, general-purpose AI models – including foundation models – that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.
Cooley trainee Mo Swart also contributed to this alert.
This content is provided for general informational purposes only, and your access or use of the content does not create an attorney-client relationship between you or your organization and Cooley LLP, Cooley (UK) LLP, or any other affiliated practice or entity (collectively referred to as “Cooley”). By accessing this content, you agree that the information provided does not constitute legal or other professional advice. This content is not a substitute for obtaining legal advice from a qualified attorney licensed in your jurisdiction and you should not act or refrain from acting based on this content. This content may be changed without notice. It is not guaranteed to be complete, correct or up to date, and it may not reflect the most current legal developments. Prior results do not guarantee a similar outcome. Do not send any confidential information to Cooley, as we do not have any duty to keep any information you provide to us confidential. This content may be considered Attorney Advertising and is subject to our legal notices.