We missed this earlier: United Nations (UN) Secretary-General António Guterres on July 18 proposed the creation of a new UN entity to govern the development of artificial intelligence (AI) technology, insisting that the governance of AI requires a universal approach and that the UN is “the ideal place” to lead the efforts on this.
The Secretary-General made this proposal while addressing the UN Security Council’s first formal meeting to discuss the risks of AI. The Security Council is a 15-nation body tasked with ensuring international peace and security.
While the Secretary-General acknowledged the benefits of AI and how it could be used to “turbocharge global development,” he also warned that it could be used for harm at a massive scale.
“Let’s be clear: The malicious use of AI systems for terrorist, criminal or state purposes could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale. […] Both military and non-military applications of AI could have very serious consequences for global peace and security. […] The advent of generative AI could be a defining moment for disinformation and hate speech – undermining truth, facts, and safety; adding a new dimension to the manipulation of human behaviour; and contributing to polarization and instability on a vast scale.” — UN Secretary-General António Guterres
The proposed UN watchdog would be mandated to “support countries to maximize the benefits of AI for good, to mitigate existing and potential risks, and to establish and administer internationally agreed mechanisms of monitoring and governance,” the Secretary-General stated.
Why does this matter: The common thread among the various countries and stakeholders that participated in the UN discussion was a call for international coordination on regulating AI. The UN Secretary-General’s proposal for a global AI watchdog, if followed through, would lead to the creation of a body similar to the International Atomic Energy Agency (set up to promote the peaceful use of nuclear technology and curb proliferation), the International Civil Aviation Organization (set up to govern international air transport), or the Intergovernmental Panel on Climate Change (set up to advance the study of climate change). It’s not clear how effective such a body would be in addressing the risks of AI, because UN bodies have a mixed track record. For example, the International Atomic Energy Agency did not fully prevent the proliferation of nuclear weapons — countries like India and Pakistan developed them anyway — although the agency did help stop the spread of nuclear weapons to many other countries.
Who said what:
- AI development cannot be left to the private sector: Reiterating that AI brings both huge benefits and serious threats, Jack Clark, co-founder of Anthropic, argued that the development of AI cannot be left solely to private-sector actors, who will have corporate interests in mind, and that governments must hold companies accountable. “The Governments of the world must come together, develop State capacity and make the development of powerful AI systems a shared endeavour across all parts of society, rather than one dictated solely by a small number of firms competing with one another in the marketplace.”
- Establish rules while stakeholders are willing to unite: The representative from the United Arab Emirates suggested taking advantage of this brief window of opportunity where key stakeholders are willing to unite and come up with commonly agreed-upon rules.
- Mixed positions on the use of AI in the military: Japan’s representative emphasized that military use of AI should be responsible, transparent, and based on international law. Ecuador, however, categorically rejected the militarization of AI. “The robotization of conflict is a great challenge for our disarmament efforts and an existential challenge that this Council ignores at its peril,” Ecuador’s representative said. Malta’s representative added that AI systems in military operations cannot make human-like decisions involving the legal principles of distinction, proportionality, and precaution, and that lethal AI-based autonomous weapons should be banned.
- Don’t use AI to censor people: “We are now working with a broad group of stakeholders to identify and address AI-related human rights risks that threaten to undermine peace and security. No Member State should use AI to censor, constrain, repress or disempower people,” the US representative said.
- The General Assembly is already looking into this, why duplicate the efforts: Russia’s representative agreed with the concerns posed by AI but noted that the issue is already being discussed in the UN General Assembly and that duplicating those efforts is counterproductive. Russia also criticized the West, noting that “the West has no ethical qualms about knowingly allowing AI to generate misanthropic statements in social networks.”
- Constrain national ambitions for dominance: Ghana’s representative asked to “constrain the excesses of individual national ambitions for combative dominance” and ensure that there are frameworks that would govern AI for peaceful purposes.
- Put ethics first, give developing countries equal access: China’s representative said that ethics must be put first to ensure that technology always benefits humanity and that developing countries must enjoy equal access and use of AI technology, products, and services.
- Human oversight is always necessary: Brazil’s representative said that human oversight is essential in any autonomous system. “There is no replacement for human judgment and accountability,” he stated.
- AI’s role in peacekeeping missions: The representative from Gabon pointed to the benefits of AI in UN peacekeeping missions, explaining that AI strengthens early warning systems by rapidly analyzing vast quantities of data from various sources, making it easier to detect emerging threats.
- Carries risk of human extinction: AI carries a risk of human extinction simply because “we haven’t found a way to protect ourselves from AI’s utilization of human weakness,” Yi Zeng of the Institute of Automation at the Chinese Academy of Sciences stated. “In the long term, we haven’t given superintelligence any practical reasons why they should protect humans,” he added, while proposing that the UN create a working group on AI.