We missed this earlier: European Commission President Ursula von der Leyen proposed creating a global framework for artificial intelligence (AI) with partner countries, tech companies, and independent experts, during her address at the European Parliament on September 12. Von der Leyen said the framework should be built on three pillars: guardrails, governance, and guiding innovation.
While noting that AI can be used to improve healthcare and productivity and to address climate change, she added that the risks of AI use should not be underestimated.
Key points on AI from Ursula von der Leyen’s speech:
Guardrails: “Our number one priority is to ensure AI develops in a human-centric, transparent and responsible way,” the EU chief stated. She added that the EU’s AI Act can serve as a blueprint for other countries devising policies for AI regulation.
Governance: Von der Leyen called for a global panel, similar to the Intergovernmental Panel on Climate Change, to tackle the challenges posed by AI’s impact on societies. To better understand AI systems and develop “a fast and globally coordinated response”, she emphasised the need to work with scientists, tech giants, and experts through a global approach.
Guiding innovation: Highlighting the need for an “open dialogue on AI” with those involved in developing and deploying the technology, the EU chief also spoke of working with AI companies to encourage voluntary commitment to the principles of the EU’s AI Act before it comes into force. She cited the ongoing deliberations between the US government and tech companies on addressing AI-related risks. Additionally, she announced that Europe will give AI startups access to its high-performance computers to train their models, in order to support innovation.
Why it matters:
As governments in several countries deliberate on policies to regulate AI, a multi-stakeholder approach has emerged as a common thread among authorities and industry players alike. On September 13, US lawmakers convened a meeting with tech majors, representatives from civil society, and researchers to build consensus on government regulation of AI alongside self-regulatory measures adopted by industry actors for responsible deployment of AI. UN Secretary-General António Guterres has also proposed the creation of a new UN entity for AI governance, insisting that the process requires a universal approach and that the UN is “the ideal place” to lead these efforts.
On the other hand, in July, Google, Microsoft, OpenAI, and Anthropic together established the Frontier Model Forum, an industry body for ensuring “safe and responsible development of frontier AI models”. The forum will primarily work on evaluating AI models for safety, facilitating research into AI safety mechanisms, and sharing that knowledge with governments, academia, and civil society groups to protect people from AI-related harms. As countries carve out national strategies according to the opportunities and challenges relevant at the local level, global standards establishing principles for the ethical, safe, and non-discriminatory use of AI should serve as a guide for AI governance across regions.
