The United States government on May 4 announced new actions to promote “responsible AI” innovation and to address AI-related risks to people’s rights and safety, according to a White House statement. The statement was issued ahead of a meeting between Vice President Kamala Harris, senior administration officials, and the CEOs of companies at the forefront of AI innovation: Alphabet, Anthropic, Microsoft, and OpenAI. Acknowledging the need to mitigate risks posed by the new technology, the statement added that “companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public”.

Following are the latest announcements:

Investments to power responsible American AI research and development (R&D): The National Science Foundation, a US government agency, has announced $140 million to launch seven new national AI research institutes. These institutes are expected to advance AI R&D in critical areas such as climate, agriculture, health, education, and cybersecurity, and to collaborate with educational, industry, and federal institutions to pursue AI in a “responsible, trustworthy and ethical” manner.

Public assessments of existing generative AI systems: Leading AI developers including Google, NVIDIA, Microsoft, OpenAI, and Stability AI will participate in a “public evaluation of AI systems, consistent with responsible disclosure principles” at the AI Village at DEFCON 31, one of the largest hacker conventions in the world. Testing AI models independently of the government and the companies that developed them will provide critical insights to researchers and the public about the impact of these systems.

Policies to mitigate AI…
