Congress Must Legislate AI Guidelines for Transparency and Labeling, Experts Urge

Washington, D.C., September 12, 2023 – Leading experts have urged the United States to adopt regulations requiring transparency and labeling from companies that develop and use artificial intelligence (AI) models. Testifying at a hearing held by the Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security, they highlighted the urgent need to address the lack of transparency surrounding AI systems.

Ramayya Krishnan, dean of Carnegie Mellon University’s Heinz College of Information Systems and Public Policy, stressed the pressing need for leading AI vendors to disclose data attributes and provenance. He proposed standardized documentation that outlines data sources and bias assessments, akin to a “nutrition label” for users, verified by auditors.
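
One way to picture such a “nutrition label” is as a small, machine-readable record attached to a model. The sketch below is purely illustrative: the field names, values, and auditor are hypothetical assumptions, not a schema proposed at the hearing.

```python
# Illustrative "nutrition label" for an AI model's training data.
# All field names and values are hypothetical; no standard schema is implied.
from dataclasses import dataclass


@dataclass
class DataNutritionLabel:
    model_name: str
    data_sources: list[str]        # where the training data came from
    collection_period: str         # e.g. "2019-2022"
    known_biases: list[str]        # findings from bias assessments
    bias_assessment_method: str    # how the assessment was performed
    auditor: str | None = None     # independent party that verified the label
    audit_date: str | None = None


label = DataNutritionLabel(
    model_name="example-model-v1",
    data_sources=["licensed news archives", "public web crawl"],
    collection_period="2019-2022",
    known_biases=["underrepresentation of non-English text"],
    bias_assessment_method="stratified sampling and manual review",
    auditor="Example Audit Co.",
    audit_date="2023-08-01",
)
```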

Witnesses from private industry and human rights advocacy also agreed that legally binding guidelines covering transparency and risk management are crucial. Victoria Espinel, CEO of the Business Software Alliance, emphasized that although the AI risk management framework developed by the National Institute of Standards and Technology is valuable, it is not sufficient on its own. She proposed mandatory impact assessments and internal risk management programs for companies operating in high-risk situations.

Recommendations put forth by the panel indicated that guidelines should differ based on whether companies develop AI models or use them, and that they should be primarily applicable to high-risk applications. These proposals align with ongoing discussions in the European Union regarding AI legislation, where the level of scrutiny varies depending on the assessed risk of a model’s application.

The hearing also addressed the need to label AI-generated content. Experts agreed that expecting consumers to identify deceptive yet realistic AI-generated images and voices is unreasonable. Sam Gregory, director of the human rights advocacy group WITNESS, suggested that comprehensive labels, which cannot be easily removed, should be mandated for AI-generated images and videos. Such labels could consist of cryptographically bound metadata.
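
To make the idea of “cryptographically bound” metadata concrete, the minimal sketch below ties a provenance label to the exact image bytes so that stripping or altering either one breaks verification. It uses a shared-secret HMAC purely for brevity; real provenance schemes rely on public-key signatures, and every name and key here is an illustrative assumption, not the approach WITNESS proposed.

```python
# Minimal sketch: bind a provenance label to image bytes so tampering is detectable.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in for a real signing key


def bind_label(image_bytes: bytes, metadata: dict) -> dict:
    # Hash the image, serialize the metadata deterministically, and MAC both together.
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(metadata, sort_keys=True)
    tag = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"metadata": metadata, "binding": tag}


def verify_label(image_bytes: bytes, record: dict) -> bool:
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(record["metadata"], sort_keys=True)
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["binding"])


image = b"...raw image bytes..."
record = bind_label(image, {"ai_generated": True, "generator": "example-model-v1"})
assert verify_label(image, record)              # intact image and label verify
assert not verify_label(image + b"x", record)   # any change to the image fails the check
```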

Experts also highlighted the importance of labeling content as AI-generated to assist developers, as generative AI models are less effective when trained on content created by other AIs. Panelists, however, raised privacy concerns about how protocols that verify content origins handle the personal information of human creators.
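
As a rough illustration of why such labels help developers, a data pipeline could use them to exclude AI-generated items before training. The snippet below is a hedged sketch built on the hypothetical label format above; it does not reflect any actual vendor pipeline, and unlabeled content slips through, which is precisely why reliable labeling matters.

```python
# Illustrative filter: drop anything explicitly labeled as AI-generated
# before it enters a training set. Data and field names are hypothetical.
documents = [
    {"text": "human-written article", "metadata": {"ai_generated": False}},
    {"text": "synthetic summary", "metadata": {"ai_generated": True}},
    {"text": "unlabeled forum post", "metadata": {}},
]


def keep_for_training(doc: dict) -> bool:
    # Unlabeled content passes through by default.
    return not doc["metadata"].get("ai_generated", False)


training_set = [d for d in documents if keep_for_training(d)]
print([d["text"] for d in training_set])
```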

As discussions surrounding AI policy continue, it is evident that transparency and labeling regulations are imperative in shaping an ethical and accountable AI ecosystem.

FAQ

Why is transparency important for AI systems?

Transparency ensures that users and regulators have access to information about how AI models are developed and trained. It allows for accountability and helps identify potential biases or ethical concerns associated with AI systems.

What is the purpose of labeling AI-generated content?

Labeling AI-generated content helps distinguish between human-created and AI-generated content. It aims to prevent deception and enable users to make informed decisions about the authenticity and reliability of the content they consume.

What are high-risk applications of AI?

High-risk applications of AI are those in which AI models make or inform consequential decisions, such as healthcare diagnoses, hiring, and autonomous driving. These applications require heightened scrutiny because of their potential impact on individuals and society.