A group of companies including GitHub, Hugging Face, and Creative Commons has submitted a paper to EU policymakers urging more support for open-source development of AI models as they finalize the AI Act. The paper, also cosigned by EleutherAI, LAION, and Open Future, puts forward several suggestions to the European Parliament for improving the final rules.
Their recommendations include clearer definitions of AI components and an acknowledgment that hobbyists and researchers working on open-source models do not benefit commercially from AI. They also propose allowing limited real-world testing of AI projects and establishing proportional requirements for different foundation models.
GitHub’s senior policy manager, Peter Cihon, explains that the purpose of the paper is to offer guidance to lawmakers on the most effective ways to support AI development. He emphasizes that companies want to be heard as other governments begin drafting their own AI laws.
The EU has been at the forefront of AI regulation discussions, with the proposed AI Act generating significant attention. However, critics argue that the Act's definitions of AI technologies are too broad while remaining too focused on the application layer. In the paper, the companies express hope that the AI Act can set a global precedent by addressing risks and encouraging innovation through support for an open ecosystem approach to AI.
Although the Act is intended to cover various types of AI, much of the focus has been on its regulations for generative AI. Some developers of generative AI models have embraced the open-source ethos, sharing access to their models with the larger AI community. However, this approach has raised concerns about transparency and competition, leading to companies like OpenAI limiting access to their research.
The companies behind the paper argue that the proposed regulations could disproportionately impact developers without significant financial resources. They note that involving third-party auditors is costly and, in their view, unnecessary for mitigating the risks associated with foundation models. Additionally, they assert that sharing AI tools on open-source libraries should not be subject to regulatory measures, as it does not constitute commercial activity.
Furthermore, the companies contend that rules prohibiting real-world testing of AI models would impede research and development, since open testing is crucial for improving functionality. Under the current proposal, AI applications cannot legally be tested outside closed experiments.
It comes as no surprise that AI companies have been actively advocating for their interests in the development of the EU's AI Act. OpenAI, for example, lobbied against stricter rules for generative AI, and some of its suggestions were incorporated into the latest version of the Act.