The G7’s digital and technology ministers have agreed on international guidelines to regulate generative artificial intelligence, Nikkei Asia reported last week. They will likely be approved at a virtual G7 summit in the coming weeks.
The guidelines accompany the G7’s voluntary code of conduct for artificial intelligence developers, agreed upon in October this year. Together, the two form the “centrepiece” of the G7’s “Comprehensive Policy Framework” on Artificial Intelligence. These developments come amidst efforts by Japan, the current G7 president, to set international standards and comprehensive rules for advanced artificial intelligence systems. The G7 ministers have also approved a plan to implement the framework next year.
The “Group of 7”, or G7, is an international grouping of seven major economies—France, the United States, the United Kingdom, Germany, Japan, Italy, and Canada—with the European Union participating as a non-enumerated member.
What do the guidelines cover?: According to Nikkei Asia, 11 of the guidelines’ 12 principles are borrowed from the earlier code of conduct, with both regulations applying to users and suppliers of artificial intelligence.
The code of conduct’s principles state:
- Identify, evaluate, and mitigate risks at different stages of a system’s lifecycle, whether during development or deployment.
- Identify and mitigate system vulnerabilities after deployment, once the system is on the market. Also identify incidents and patterns of system misuse.
- Publicly report an AI system’s capabilities, limitations, and cases of appropriate and inappropriate use to ensure transparency and accountability.
- Responsibly share information and report incidents among organisations developing advanced systems, including industry, civil society, academia, and governments.
- Use a risk-based approach to develop and implement AI governance policies, including privacy and risk-mitigation policies.
- Invest in physical security, cyber security, and other security controls across the system’s life cycle.
- Develop content authentication mechanisms to help users identify content that is generated by artificial intelligence.
- Prioritise artificial intelligence research to mitigate related risks, and prioritise investing in “effective mitigation measures”.
- Prioritise developing advanced systems to address global challenges like education, climate change, and global health.
- Advance the development and adoption of global technical standards for artificial intelligence.
- To protect personal data and intellectual property, implement data input measures and protections.
Nikkei Asia adds that the 12th principle under the guidelines, a new addition, centres on promoting and contributing to the “trustworthy and responsible use of advanced AI systems”. This includes improving digital literacy and sharing information on artificial intelligence-related risks, such as misinformation and disinformation. Some G7 members were reportedly concerned about these technologies being used to spread misinformation by the likes of China and Russia.