OpenAI, Google DeepMind, and Anthropic commit to providing access to their AI models for evaluation and safety research.
The UK government has taken a significant step towards addressing the safety and evaluation of artificial intelligence (AI) technologies by securing commitments from prominent AI companies. OpenAI, Google DeepMind, and Anthropic have pledged to provide “early or priority access” to their AI models, supporting the government’s initiative to prioritize AI safety.
Prime Minister Rishi Sunak, speaking at London Tech Week, expressed his dedication to making the UK a home for cutting-edge AI safety research. Backed by a £100 million investment in an expert task force, the government says it is devoting more funding to AI safety than any other government worldwide. The task force will focus primarily on AI foundation models, ensuring comprehensive research and evaluation.
Sunak highlighted the collaboration with pioneering AI labs Google DeepMind, OpenAI, and Anthropic, announcing their commitment to grant early or priority access to their models for research and safety purposes. This access should enable researchers to develop better evaluation methods and gain a deeper understanding of both the opportunities and the risks of AI systems.
In addition to the commitments from the AI labs, the prime minister reiterated the government’s plan to host the first-ever global AI Safety Summit. Drawing inspiration from the COP climate conferences, the summit aims to build global consensus and cooperation on AI safety concerns. Sunak also set out his vision for the UK to become the intellectual and geographical home of global AI safety regulation.
This marks a significant change of course in the UK government’s approach to AI safety. It had previously adopted a pro-innovation stance, downplaying safety concerns in favor of the flexible principles outlined in its white paper. However, rapid advances in generative AI and warnings from tech leaders about potential risks have prompted a reevaluation of that strategy.
While this collaboration presents an opportunity for the UK to lead in developing effective evaluation and audit techniques, there are concerns about potential industry capture of the country’s AI safety efforts. It is crucial that the government also involve independent researchers, civil society groups, and those disproportionately at risk of harm from automation. Doing so would mitigate the risk of bias and keep the well-being of society at the center of the work.
With the support of OpenAI, Google DeepMind, and Anthropic, the UK government’s AI safety agenda is gaining momentum. By prioritizing research and evaluation, it aims to position the UK as a global leader in AI safety regulation while addressing the risks these technologies pose. This collaboration between government and leading AI labs is a significant step towards ensuring the responsible and ethical development of AI systems that benefit society as a whole.