Context: In a recent development, 18 countries have expressed support for new global guidelines aimed at ensuring that artificial intelligence (AI) is secure by design, thereby raising cybersecurity standards for AI.
New Global Guidelines on AI
- Guidelines: Formulated by the UK’s National Cyber Security Centre (NCSC) and the US’s Cybersecurity and Infrastructure Security Agency (CISA).
- Signatories: US, UK, France, Germany, Australia, Canada, Norway, Israel, Japan, South Korea, Singapore, Italy, New Zealand, Poland, Chile, Czechia, Estonia and Nigeria.
- All G7 countries are among the signatories.
- Nature: The agreement is not legally binding and primarily consists of broad recommendations, including monitoring AI systems for potential misuse, safeguarding data against tampering, and thoroughly vetting software suppliers.
- Coverage: The guidelines deal with secure design, development, deployment, operation and maintenance of AI systems.
- These encompass evaluating and modeling threats, preventing attacks on AI systems, establishing incident management procedures, responsibly releasing models, monitoring the system, and ensuring security by design in system updates.
- Aim: The objective is to raise the cybersecurity standards of artificial intelligence by helping AI developers make well-informed cybersecurity decisions at every stage of the development process.
- Significance: This is a first-of-its-kind global agreement on AI cybersecurity issues.
Further Reading: Global AI Summit London 2023
News Source: Reuters