UK AI Safety Summit 2023: What To Expect
On 1 and 2 November 2023, the UK government will host world leaders, experts and leading technology companies at the first global AI Safety Summit. The UK government’s aim is to facilitate a ‘critical global conversation’ on artificial intelligence (AI) and encourage a global coordinated approach to AI safety.
Focus and objectives
With a focus on the serious misuse of AI, the summit will cover two types of AI systems based on the risks they may pose:
- Frontier AI: highly capable, multipurpose AI models (e.g., foundation models) that match or exceed the capabilities present in today’s most advanced models – and pose significant risks associated with misuse, unpredictable advances and loss of control over the technology.
- Narrow AI: AI designed for a specific task with potentially dangerous capabilities – such as bioengineering AI models that could be used to develop bioweapons.
The UK government highlighted five key objectives for the summit:
- Build a shared understanding of the risks posed by frontier AI and the need for action.
- Initiate a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks.
- Propose appropriate measures which individual organisations should take to increase frontier AI safety.
- Identify areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance.
- Showcase how ensuring the safe development of AI will enable AI to be used for good globally.
Although the summit is expected to cover both types of AI systems set out above, published government materials make clear that the focal point is frontier AI.
On day 1, attendees of the summit are expected to weigh in on the novel challenges and risks posed by recent and next-generation frontier AI models, as well as measures to combat misuse by bad actors. Discussions are also expected to touch on topics such as the direction of AI development and how frontier AI developers can scale responsibly.
Day 2 is set to see a smaller group discussion among governments, companies and experts on measures to address the risks in emerging AI technology.
See the proposed agenda for more details.
How might this impact the UK’s approach to regulating AI?
In March 2023, the UK government issued a policy paper detailing its pro-innovation approach to regulating AI. The policy paper sets out a principles-based strategy whereby existing sector-specific regulators use existing laws to implement principles such as safety, fairness and accountability.
Despite this ‘light-touch’ strategy and the summit’s apparent focus on high-risk models only, the summit may drive the UK toward some form of specific domestic regulation, whether focused on frontier AI models alone or more widely applicable. Indeed, calls for regulation from industry leaders – and the imminent arrival of the European Union’s AI Act – may put additional pressure on the UK to revisit its domestic policies.
Alternatively, it may be that the UK will successfully find a role as a bridge between the EU’s rules-based approach (embedded in the EU AI Act) and the decentralised approach of other nations, such as the US. In any case, with varying perspectives across the globe, it will not be easy to achieve a consensus on the direction of AI safety and regulation, and it will be interesting to see how the conversation unfolds.
We will publish an overview of the key outcomes of the UK AI Safety Summit after it has taken place.
Cooley trainee solicitor Mo Swart also contributed to this alert.