Context
Recently, the UK and the US signed an agreement on Artificial Intelligence (AI) safety testing.
- Last year, the EU reached a deal with member states on its AI Act which includes safeguards on the use of AI within the EU.
Key Insights of the UK-US AI Safety Agreement
- Information Sharing: Both countries will share vital information about the capabilities and risks associated with AI models and systems.
- They will also exchange fundamental technical research on AI safety and security, and work to align their approaches to the safe deployment of AI systems.
- Joint Testing & Personnel Exchange: They intend to perform at least one joint testing exercise on a publicly accessible model. They also intend to tap into a collective pool of expertise by exploring personnel exchanges between the two AI Safety Institutes.
- Addressing Risks: This move comes amid the rapid proliferation of AI systems worldwide. Although these systems offer significant opportunities, they also pose serious societal risks, from the spread of misinformation to threats to election integrity.
- A Common Approach: The US and UK AI Safety Institutes have also laid out plans to build a common approach to AI safety testing and to tackle risks effectively.
- Both have also committed to developing similar partnerships with other countries to promote AI safety across the globe.