AI Industry Leaders Establish Forum to Regulate Large Machine Learning Models

OpenAI, Microsoft, Alphabet’s Google, and Anthropic have launched a forum aimed at regulating the development of large machine learning models. The purpose of the initiative, according to the four companies, is to ensure the safe and responsible development of what are known as “frontier AI models”: models that exceed the capabilities of today’s most advanced systems and could therefore pose severe risks to public safety.

Generative AI models, such as those behind chatbots like ChatGPT, process vast amounts of data at high speed to generate responses in the form of prose, poetry, and images. While these models have numerous use cases, government bodies including the European Union and industry leaders such as OpenAI CEO Sam Altman have called for measures to address the risks associated with AI.

The newly established industry body, the Frontier Model Forum, aims to promote AI safety research, establish best practices for deploying frontier AI models, and collaborate with policymakers, academics, and companies. It will not, however, engage in government lobbying.

Microsoft President Brad Smith emphasized that companies creating AI technology have a responsibility to ensure it is safe, secure, and under human control. The forum plans to form an advisory board, arrange funding through a working group, and establish an executive board to oversee its efforts.

Overall, the forum’s primary objective is to regulate the development of large machine learning models, ensuring their safety and responsible deployment while working closely with stakeholders across industry and the policymaking community.