Leading artificial intelligence companies have announced plans to create an industry-led body to develop safety standards for AI technology. The Frontier Model Forum, introduced by Google, OpenAI, Microsoft, and Anthropic, aims to promote AI safety research and technical evaluations for the next generation of AI systems, which are expected to be even more powerful than the large language models behind today's chatbots. The Forum will also facilitate the sharing of information about AI risks between companies and governments.
The establishment of the Frontier Model Forum highlights the proactive approach AI companies are taking on safety, even as policymakers in Washington continue to debate whether a government-led AI regulator is needed. The announcement also follows recent voluntary commitments by these companies to submit their systems to independent testing and to develop tools that alert the public to AI-generated content.
However, industry-led initiatives like this have drawn criticism from consumer advocates, who argue that they shift accountability to third-party experts and could derail legislation needed to regulate the industry.
While Congress is working on a framework to address the threats posed by AI, government-led regulation is not expected soon. European policymakers have been developing their own AI legislation, but its enforcement is also several years away.
In the absence of formal regulations, politicians are urging companies to take voluntary steps to mitigate AI risks. The founding companies of the Frontier Model Forum have already participated in the White House’s AI pledge and have committed to sharing data on AI system safety with the government and outside academics. Some tech executives have also endorsed the E.U. AI Pact.
The Forum is still in its early stages and aims to pursue joint initiatives with companies developing AI models more powerful than those that exist today. This focus on future capabilities aligns with statements by OpenAI CEO Sam Altman, who has argued that regulation becomes necessary when AI models reach a level of intelligence equivalent to all of human civilization.
Meta, which recently made its AI model Llama 2 available for public use, was not among the Forum's founding members.