Big Tech giants Google and Microsoft, OpenAI (creator of ChatGPT), and AI startup Anthropic have formed a new industry watchdog group called the Frontier Model Forum. The forum will focus on the “safe and responsible” development of frontier AI models. The companies released a joint statement emphasizing the need for further work on safety standards and evaluations in AI development.
The main goals of the Frontier Model Forum include advancing research on AI safety, identifying best practices for the responsible development and deployment of frontier models, collaborating with governments and civil society leaders, and supporting efforts to develop AI applications.
Membership in the Forum is open to organizations that develop and deploy frontier models. Brad Smith, vice chair and president of Microsoft, highlighted the responsibility of AI technology creators to ensure that the technology is safe, secure, and remains under human control.
The Frontier Model Forum plans to establish an advisory board in the near future to guide the group’s priorities and strategy. The founding companies also intend to consult with civil society and governments to shape the Forum’s design.
This initiative comes after a July 21 meeting at which prominent AI companies, including OpenAI, Google, Microsoft, and Anthropic, committed to the safe, secure, and transparent development of AI. Additionally, United States lawmakers introduced a bill in June to create an AI commission to address concerns within the industry.
Overall, the formation of the Frontier Model Forum demonstrates the collective effort of major tech players to promote responsible AI development and address the challenges associated with powerful AI models. Through collaboration and oversight, these companies aim to ensure that AI technology benefits humanity as a whole.