Key takeaways:
- The UK government is grappling with the competing pressures of innovation and regulation
- The AI Safety Summit will focus on a coordinated approach to the risks and opportunities presented by 'frontier AI'
- How much do the different jurisdictions' roadmaps differ? Is it possible for them to co-exist and collaborate?
On 1 and 2 November 2023, the UK government will hold its first AI Safety Summit at Bletchley Park, the home of British codebreaking during World War Two. The government's aim for the summit is to promote and encourage an international, coordinated approach to AI safety, with a particular focus on 'foundation models' (large-scale networks which generate human-like language, such as OpenAI's 'GPT' and Google's 'Gemini'). The summit will bring together international policymakers, AI companies, experts and researchers to discuss the risks posed by these frontier models and to encourage their safe, responsible development.
What is the purpose of the summit?
The five objectives of the summit are:
- Develop a shared understanding of the risks posed by frontier AI and the need for action
- Put forward a process for international collaboration on frontier AI safety, including how best to support national and international frameworks
- Propose appropriate measures which individual organisations should take to improve frontier AI safety
- Identify areas for potential collaboration on AI safety research, including evaluating the potential capabilities of AI and the development of new standards to support governance
- Showcase how ensuring the safe development of AI will enable it to be used for good globally, for example through rapid advances in medicine and drug development.
The global summit will also focus on potentially dangerous risks which could arise from further advances in AI technology.
What is a 'frontier AI' model?
These are advanced foundation models that power systems such as OpenAI's GPT-3 and GPT-4 (which underpin the widely known 'ChatGPT'). Whilst these models offer enormous potential and innovation benefits, they have dangerous capabilities which should not be underestimated. As they advance towards human-level ability, such models can pose severe risks to public safety and global security: they can be used to exploit vulnerabilities in software systems and to spread persuasive disinformation at mass scale.
These complex models pose a distinct regulatory challenge because:
- New AI models can be unpredictable: dangerous capabilities can arise unexpectedly, even after intensive testing, and remain undetected until harm has been caused.
- AI is a tool that can be used to cause harm, and it is increasingly difficult for creators and regulators to specify what an AI system should be allowed to do. Dangerous and useful outputs can be hard to distinguish without knowing the context in which they will be used, which is often unknown to developers at the outset.
- Frontier models are much more difficult to train than to use, so they end up in the hands of a public that will inevitably apply them to tasks the developers never anticipated, which can result in misuse. Open-source models also make it easier for third parties to introduce dangerous capabilities at a later stage.
The UK government has recently launched the Frontier AI Taskforce (which was previously known as the Foundation Model Taskforce) to focus on the significant risks posed by these particular systems.
Current regulation of AI in the UK: a 'pro-innovation' approach
The UK government has so far promoted a pro-innovation approach and opted not to legislate in this area, although it has recognised that foundation models need more thought and intervention. In its White Paper presented on 29 March 2023, the Office for AI proposed a decentralised, self-regulatory approach in order to encourage innovation. The theory was that regulators would use existing legal frameworks to implement the following five principles, on a non-statutory basis:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
However, there is ongoing debate about whether the existing frameworks do enough to mitigate AI-based risks, and how best to balance that mitigation against the drive for innovation.
Since the White Paper was published, the government has been working with regulators to determine whether the principles are being applied in a proportionate and effective manner, or if statutory intervention is needed.
The UK Governance of Artificial Intelligence: Interim Report
In its interim report, published on 31 August 2023, the House of Commons Science, Innovation and Technology Committee pointed to specific topics for discussion at the summit and identified 12 governance challenges that must be addressed to ensure public safety and confidence in AI. The report suggested that these 12 areas form the basis of debate at the summit, noting that each requires domestic policy and/or international engagement:
- Bias
- Privacy
- Misrepresentation
- Access to data
- Access to compute
- Black box
- Open-source
- IP and copyright
- Liability
- Employment
- International co-ordination
- Existential
All of these topics will be important in framing the discussion, but in relation to frontier AI some of the most pressing challenges are bias, misrepresentation and access to data:
- Bias - AI can make grossly biased decisions based on flawed input data or programming, which can introduce or perpetuate biases against minority groups in society.
- Misrepresentation – AI can create material which deliberately misrepresents someone's behaviour, opinions or character and disseminates misinformation.
- Access to data – the more powerful AI systems require very large datasets, which are held by only a few organisations. This raises competition concerns for AI developers that are not tech giants.
The Committee also suggested that a 'tightly-focussed AI Bill' would support the government's ambition to position the UK as an AI governance leader. It warned that if the UK does not introduce new statutory regulation, other legislation, such as the EU Artificial Intelligence Act, could become the 'de facto' standard and be hard to displace.
Observers are therefore looking towards the impending summit to see how discussion about the above challenges will be framed, what the outcomes will be and how the UK's approach to risk and governance on AI could be set to change.
EU regulation
The proposed EU AI Act has been criticised by EU businesses and tech firms, who believe the rigid regulatory scheme is overly 'bureaucratic' and have argued for a more hands-off approach. There are also concerns that the compliance requirements may suppress innovation among SMEs developing AI systems, which have more limited resources than larger corporations.
One particularly significant area of debate is the draft rules for foundation models, particularly generative AI systems. These were not explicitly covered by the European Commission's original draft, but in light of the rapid advances made by foundation models, provisions addressing them have been added to the most recent draft. The EU AI Act now sets out specific compliance obligations that providers of foundation models must meet in order to place their products on the EU market. This may have been a response to growing debate that the Act's risk-class approach is too static for the AI being developed in 2023, as foundation models have the flexibility to learn new tasks and devise solutions that humans might never think of.
Looking forward – is international collaboration possible?
With divergent approaches to AI regulation across the globe, it will not be easy to achieve a consensus on the direction of AI regulation, particularly with the interplay of various frameworks and agendas.
Whilst AI remains broadly unregulated, businesses are often aiming to implement frameworks that reflect the draft requirements being discussed by UK and EU regulators and governments, so that when the somewhat inevitable regulations are imposed, they are already prepared.
If you would like to discuss the development or use of AI in your business and how to prepare practically for implementation and potential future regulations, WBD has a team of digital and data specialists who can guide your business whilst the legal landscape in this area continues to evolve.
We will be publishing a detailed review of the outcomes of the UK AI Safety Summit after it has taken place in early November.