Google, Microsoft, Anthropic, and OpenAI, four prominent artificial intelligence (AI) companies, have joined forces to form a new industry group, the Frontier Model Forum. The group's primary objective is to identify best practices for ensuring safety in the field of AI and to promote the technology's application to significant societal challenges.
Amid ongoing discussion among policymakers about how to regulate AI, these companies are taking the initiative to establish safety standards within the industry themselves. Until formal regulations are in place, the group aims to maintain a proactive stance, with a focus on promoting responsible development, minimizing risks, and facilitating independent evaluations of capabilities and safety.
In a blog post, Google outlined the four main goals of the newly formed group:
1. Advancing research on AI safety to ensure responsible development of frontier models, reduce risks, and enable standardized evaluations of capabilities and safety.
2. Identifying best practices for the responsible development and deployment of frontier models, while also enhancing public understanding of the technology’s nature, limitations, and impact.
3. Collaborating with policymakers, academia, civil society, and other companies to share knowledge on trust and safety risks associated with AI.
4. Supporting efforts to develop AI applications that can effectively address society’s greatest challenges, such as climate change mitigation, early cancer detection, and combating cyber threats.
To join the group, an organization must meet certain criteria: it must develop or deploy frontier models that surpass the capabilities of the most advanced existing models, and it must demonstrate a commitment to safety through both technical and institutional approaches.
In the coming months, the group plans to establish an advisory board, composed of individuals from diverse backgrounds, to guide its priorities. The founding companies will also seek input from civil society to shape the group's governance structure and funding strategy.
As policymakers consider regulations for AI, they face the challenge of striking a balance that allows for innovation without compromising the United States' competitive position in the AI race. Senate Majority Leader Chuck Schumer has been leading efforts to create a legislative framework for AI, and several bills targeting specific areas of AI's impact have already been introduced. The White House has also been engaging with industry leaders and AI experts on AI safety, and it recently announced a voluntary pledge from leading companies to prioritize the safe development of AI.
Overall, the formation of this industry group marks a proactive step by leading AI companies toward the responsible development and deployment of AI, the pursuit of solutions to societal challenges, and broader collaboration across the field.