Google, Microsoft, OpenAI and Anthropic have collaborated to establish the Frontier Model Forum, an industry body for ensuring the “safe and responsible development of frontier AI models”. According to a joint statement dated July 26, the forum will primarily work on evaluating AI models for safety, facilitating research into AI safety mechanisms, and sharing that knowledge with governments, academia and civil society groups to protect people from AI-related harms.
Why it matters: The collaboration comes after tech companies including Google, Microsoft, OpenAI, Anthropic and Inflection voluntarily committed to adopting seven key measures for the safe deployment of AI products at a meeting convened by the US government. Companies like OpenAI have in the past acknowledged the risks posed by AI systems and have advocated for independent testing of these tools before they are launched. As governments struggle to reach a consensus on the best-suited regulatory framework for AI, the alliance between big tech companies can be seen as an effort to self-regulate and discourage greater government regulation.
The primary objectives of the forum include:
- Advancing AI safety research for the responsible development of AI models, minimising risks, and enabling “independent, standardized evaluations of capabilities and safety”.
“The Forum will coordinate research to progress these efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors and anomaly detection. There will be a strong focus initially on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models,” the statement says.
- Identifying best practices for the responsible deployment of frontier models and helping people understand the “nature, capabilities, limitations, and impact of the technology”.
- Collaborating with governments, policymakers, academics, civil society and companies, and sharing knowledge about AI safety and risks.
- Supporting efforts to develop applications that can help meet larger public goals, such as mitigating the impact of climate change.
Course of action: According to the statement, the Forum will soon establish an Advisory Board, which will guide the group’s strategy and priorities. The founding companies will also work on “institutional arrangements including a charter, governance and funding with a working group and executive board”.
