AI Could Pose a Grave Threat by Enabling Bioweapons Attacks, Anthropic CEO Warns

Dario Amodei, CEO of Anthropic, an artificial intelligence (AI) safety and research company, cautioned a Senate Judiciary subcommittee on Tuesday that AI could empower malicious actors worldwide to carry out biological weapons attacks. Amodei described AI-assisted development and delivery of such weapons as a medium-term risk, and one that his company is actively working to address.

Amodei explained that certain steps in bioweapons production currently require specialized knowledge that is not readily accessible through traditional means. However, he noted that present AI tools can already fill in some of these knowledge gaps, albeit incompletely and unreliably. He emphasized that current AI systems show early signs of this danger and expressed concern that within the next two to three years, AI could bridge all of the remaining information gaps, making large-scale biological attacks more feasible.

Anthropic has already shared its assessment with government officials, who were troubled by the results. While Amodei supports the responsible development of AI, he argued that private action alone is insufficient to address this risk, and he proposed three steps for the government to consider. First, he recommended export controls on equipment used to build AI systems, to keep these technologies out of the hands of malicious individuals or groups. Second, he called for rigorous testing and auditing of powerful new AI models, insisting that such models be released to the public only after meeting these standards. Third, he emphasized the need to test the systems used to audit AI tools themselves, so that potentially harmful behaviors can be identified and mitigated.

Anthropic is one of seven AI companies that recently endorsed a set of voluntary guidelines promoted by the White House, which aim to ensure that AI tools are safe, secure, and trustworthy. Amodei, along with representatives from Amazon, Google, Inflection, Meta, Microsoft, and OpenAI, was present at the White House when President Biden announced the initiative.

It is crucial for policymakers to recognize and proactively address the risks AI poses, particularly where bioweapons capability is concerned. Through effective policy measures such as export controls, rigorous testing, and improved auditing systems, governments can play an instrumental role in mitigating the threats posed by advancing AI.