Why the “voluntary AI commitments” are inadequate

The recent White House meeting with representatives from tech giants such as Amazon, Google, and Microsoft aimed to establish “voluntary AI commitments.” While these commitments may seem like a step in the right direction, they fall short of addressing fundamental issues related to artificial intelligence (AI).

The commitments made during the meeting include testing the security of AI systems, sharing knowledge and best practices, investing in cybersecurity, disclosing the capabilities and limitations of AI systems, and developing technologies to address societal challenges.

While these commitments are commendable, they lack specificity and enforceability. Each can be interpreted in many different ways, leaving substantial room for ambiguity. To borrow a hypothetical from the defense industry, it would be akin to a contractor promising not to supply advanced missile systems to terrorists without outlining any clear guidelines for what counts as compliance or how it would be verified.

Two critical issues that were not addressed during the White House summit are the need for continuous monitoring and improvement of AI systems and the requirement for human oversight in AI decision-making processes.

AI systems are not static like traditional products. They are dynamic and constantly evolving, and their behavior can shift away from what was originally tested, a phenomenon commonly referred to as “drift.” For example, data drift in autonomous vehicles or facial recognition technologies can have severe consequences, including loss of life or wrongful imprisonment. To mitigate these risks, it is crucial to establish regulations and guidelines for continuous monitoring, improvement, and adaptation of AI systems.
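Drift monitoring of the kind described above is routinely done by comparing the distribution of incoming data against the data a model was trained on. A minimal sketch, using the Population Stability Index (a common drift heuristic; the thresholds, bin count, and synthetic data below are illustrative assumptions, not a production recipe):

```python
import math
import random

def psi(reference, live, bins=10):
    """Population Stability Index between two numeric samples.
    A common rule of thumb: PSI < 0.1 suggests no drift,
    PSI > 0.25 suggests significant drift."""
    lo, hi = min(reference), max(reference)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        n = len(sample)
        # Floor empty buckets so the logarithm is defined
        return [max(c / n, 1e-4) for c in counts]

    ref_p, live_p = bucket_fractions(reference), bucket_fractions(live)
    return sum((l - r) * math.log(l / r) for r, l in zip(ref_p, live_p))

# Synthetic example: live data that matches training vs. data that has shifted
random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]
same = [random.gauss(0.0, 1.0) for _ in range(5000)]
shifted = [random.gauss(1.5, 1.0) for _ in range(5000)]  # the world moved

print(f"matching data: PSI = {psi(train, same):.3f}")
print(f"shifted data:  PSI = {psi(train, shifted):.3f}")
```

A check like this, run continuously against live inputs, is exactly the sort of ongoing monitoring the voluntary commitments never require: it turns “the system may drift” from an abstract worry into an alarm that can trigger retraining or human review.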

Additionally, it is necessary to incorporate human oversight into AI systems. While AI technology has made significant advancements, it is not flawless. Adversaries can exploit vulnerabilities in AI systems, as seen with the manipulation of generative AI tools like ChatGPT. Current safeguards are not enough to prevent the weaponization of AI technologies, as these exploits can be executed using natural language and prompt engineering techniques.
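Human oversight does not have to mean a human reviewing every decision. A common pattern is to let the model act only when it is confident and escalate everything else to a person. A minimal sketch (the threshold, names, and reviewer interface here are all illustrative assumptions):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def decide(prediction: str, confidence: float,
           ask_human: Callable[[str], str],
           threshold: float = 0.9) -> Decision:
    """Accept the model's output only above a confidence threshold;
    otherwise route the case to a human reviewer."""
    if confidence >= threshold:
        return Decision(prediction, confidence, "model")
    return Decision(ask_human(prediction), confidence, "human")

# Hypothetical usage: a confident case passes through automatically,
# a borderline case is escalated and the human overrides the model.
auto = decide("approve", 0.97, ask_human=lambda p: p)
held = decide("approve", 0.55, ask_human=lambda p: "deny")
print(auto.decided_by, held.decided_by, held.label)
```

Mandating this kind of escalation path for high-stakes AI decisions is a concrete, verifiable requirement, unlike the open-ended pledges the summit produced.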

To truly harness the potential of AI for good and minimize its potential for harm, we need stronger and more concrete commitments. These should include measures for continuous improvement, adaptation, and mitigation of risks, as well as guarantees of human oversight in AI decision-making processes. Without addressing these fundamental issues, the current “voluntary AI commitments” extracted by the White House are insufficient to protect society or maximize the benefits of AI technology.