AI Governance · Dual-Use Risk

The AI Bioweapons Paradox: Dual-Use Risk & Board Oversight of Autonomous Defence Systems

The same AI that maps protein folding for cures can be “flipped” to design novel pathogens. Why directors must treat dual-use AI as a fiduciary risk.

10 min read
By June Lai, CFA, CPA, CMA
TL;DR — Executive Summary

The Core Argument in Plain English

The issue: AI is a textbook dual-use technology. The same models that accelerate drug discovery, map protein structures, and reduce animal testing can also be used to design novel pathogens, optimise biological agents for lethality, and lower the expertise barrier for bioweapons development. RAND Corporation research published in 2025 confirmed that contemporary AI models can already guide users through tasks applicable to producing pathogenic viruses.

The governance gap: OpenAI’s Pentagon contract permits use for “all lawful purposes.” This phrase creates ambiguity about dual-use applications in defence contexts. Meanwhile, Anthropic was designated a “supply chain risk” by the U.S. government for maintaining stricter guardrails—sending a signal that safety-first positions carry commercial and political penalties.

For directors: If your organisation uses AI in R&D, life sciences, or any field with dual-use potential, you have a fiduciary obligation to audit the biosecurity posture of your AI providers and your own internal guardrails.

01 · The Dual-Use Problem

The Dual-Use Problem: AI for Good and AI for Harm

The term “dual-use” refers to technology that has both civilian and military applications. A centrifuge enriches uranium for power plants or for weapons. GPS guides ambulances and cruise missiles. The technology is identical; only the intent differs.

AI in life sciences is the most consequential dual-use challenge of this decade. On the beneficial side, AI-driven research is transforming medicine: accelerating the discovery of new therapeutic compounds by analysing vast genetic variability, characterising the trade-off between drug efficacy and safety across diverse populations more precisely, and using high-fidelity computational models to significantly reduce the need for animal testing.

But the same capabilities have a darker application. An AI model that maps protein folding for drug development can be redirected to identify vulnerabilities in pathogens—or to engineer novel ones. An AI system that optimises a molecule for therapeutic effect can, when the objective function is inverted, optimise it for toxicity.

The Inversion Problem

In one widely cited experiment, researchers at a pharmaceutical AI firm instructed their generative drug-discovery platform to maximise toxicity instead of therapeutic potential. Within six hours, the system generated approximately 40,000 candidate molecules, some of which resembled known chemical warfare agents. The researchers reported that the modification required only a minor change to the model’s objective function. [Source: Urbina et al., “Dual use of artificial-intelligence-powered drug discovery,” Nature Machine Intelligence, 2022]
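The experiment's significance for governance is how little separates the two configurations. The following is a deliberately toy sketch in Python (random placeholder scores, no chemistry, and not the actual Urbina et al. platform): the beneficial and the harmful objective differ by a single constant. That is why change control and audit logging on objective functions belong in any dual-use AI control framework.

```python
import random

def generate_candidates(n: int) -> list[dict]:
    """Stand-in for a generative model: each 'candidate' is just a pair
    of placeholder scores, not a real molecule."""
    return [{"id": i, "efficacy": random.random(), "toxicity": random.random()}
            for i in range(n)]

# Intended configuration: reward efficacy, penalise toxicity.
# Flipping this one constant from -1.0 to +1.0 inverts the system's intent:
# the 'minor change to the objective function' Urbina et al. describe.
TOXICITY_WEIGHT = -1.0

def objective(candidate: dict) -> float:
    return candidate["efficacy"] + TOXICITY_WEIGHT * candidate["toxicity"]

candidates = generate_candidates(10_000)
best = max(candidates, key=objective)
print(f"selected candidate {best['id']}: "
      f"efficacy={best['efficacy']:.2f}, toxicity={best['toxicity']:.2f}")
```

An audit control as simple as alerting whenever the sign or weight of a safety-relevant objective term changes would surface this class of misuse before a run completes.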

02 · The Evidence

The Evidence: What AI Models Can Already Do

This is not a hypothetical future risk. Multiple credible research institutions have documented what current AI systems are capable of in the biosecurity domain:

RAND Corporation (2025): Researchers engaged three 2024 foundation AI models—Llama 3.1, GPT-4o, and Claude 3.5 Sonnet—and documented that the models provided accurate instructions and guidance for recovering a live poliovirus from synthetic DNA. RAND concluded that current models are already capable of guiding motivated users toward biological weapons development.

Nuclear Threat Initiative & Munich Security Conference (December 2025): An expert group warned that AI’s rapid evolution is eroding barriers to bioweapons development by malicious actors. They found it would be technically possible to use existing and emerging AI-driven biological tools to create new pathogens with pandemic-level risks—and that current security measures are unprepared for these threats.

CSIS (August 2025): The Center for Strategic and International Studies reported that Anthropic’s own safety evaluation of its Claude 3.7 Sonnet model found that it demonstrated concerning improvements in its capacity to support aspects of biological weapons development. OpenAI similarly reported that its Deep Research capability could help experts with the operational planning needed to reproduce known biological agents.

International AI Safety Report (2025 Paris AI Action Summit): The report found that large language models showed an approximately 80% improvement in 2024 in their ability to accurately respond to queries about the acquisition and formulation of deadly biological agents.

The Speed of Change

In 2024, a RAND red-team exercise concluded that large language models were not yet mature enough to meaningfully assist with bioweapons development. By early 2025, the same researchers concluded that they could. This shift took less than twelve months. For directors, the lesson is clear: biosecurity assessments based on last year’s capabilities are already outdated.

03 · The Loophole

The “All Lawful Purposes” Loophole

OpenAI’s Pentagon contract permits the U.S. military to use its AI technology for “all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.”

The governance problem is the word “lawful.” What is lawful is not static. It can be redefined by executive order, by legislative change, by classified interpretation, or by wartime authority. As the Center for Democracy & Technology observed, the contractual language contains no enforceable red lines beyond having a “lawful purpose.”

For dual-use AI applications, this ambiguity is particularly dangerous. The Biological Weapons Convention (BWC) prohibits the development and stockpiling of biological weapons. But the BWC has no formal verification mechanism and no enforcement body. If a government redefines biodefence research in ways that edge closer to offensive capability—and does so “lawfully” under domestic statute—an AI provider bound by an “all lawful purposes” contract has no independent basis to refuse.

Why This Is a Board-Level Problem

If your organisation uses an AI provider whose models have demonstrated dual-use capabilities (and most frontier models now have), and that provider has agreed to let a government use its technology for “all lawful purposes,” your board must consider the reputational, legal, and operational risk of being in that provider’s supply chain.

04 · Concentration Risk

Concentration Risk: When Safety Is Punished

The events of late February 2026 sent a clear signal to the AI industry about the commercial cost of maintaining strict ethical guardrails.

Anthropic refused to soften its red lines against mass domestic surveillance and fully autonomous weapons in its Pentagon contract negotiations. In response, the U.S. government designated Anthropic a “supply chain risk”—a label normally reserved for companies with connections to foreign adversaries, and one never before publicly applied to an American technology company. This designation could prevent Anthropic from doing business with the U.S. government entirely.

Within hours of the designation, OpenAI announced its own deal with the Pentagon, accepting the “all lawful purposes” language that Anthropic had rejected.

The Governance Signal

For directors, this sequence of events raises a systemic concern. If the company that maintains the strictest safety guardrails is punished with a “supply chain risk” designation, while the company that accepts broader usage terms is rewarded with a contract, the incentive structure actively discourages safety-first behaviour across the entire industry. This is a market-level governance failure, not just a single-company issue.

The concentration of AI defence contracts in a single provider also creates operational risk. If OpenAI becomes the sole AI provider for both the Pentagon and NATO (even on unclassified networks), it represents a single point of failure for the Western alliance’s AI infrastructure. If that system is compromised, misconfigured, or subject to a legal dispute, the impact cascades across every dependent organisation.

05 · Fiduciary Implications

Fiduciary Implications for Directors

What is this?

A fiduciary duty is the legal and ethical obligation of a director to act in the best interests of their organisation and its stakeholders. In the context of dual-use AI, this duty extends beyond data privacy to encompass the biosecurity posture of your technology supply chain.

Why does this matter?

If your organisation uses AI models with demonstrated dual-use capabilities—and most frontier models now qualify—the board has three material exposures:

Regulatory risk. Biosecurity regulation is accelerating. The EU AI Act classifies certain AI systems as high-risk. CSIS has recommended that governments institute licensing requirements for sophisticated biological design tools. If your AI provider’s models are found to facilitate biological threats, your organisation’s use of those models may face regulatory scrutiny.

Reputational risk. If the AI provider your organisation depends on is publicly linked to bioweapons-capable outputs or to defence contracts with ambiguous ethical guardrails, the association creates reputational contagion. This is particularly acute for organisations in life sciences, healthcare, and pharmaceuticals.

Operational risk. If a government designates your AI provider as a “supply chain risk” (as occurred with Anthropic), your organisation may lose access to that provider’s technology with minimal notice. AI provider diversification is not just a technical decision; it is a governance imperative.

06 · Dual-Use Checklist

The Director’s Dual-Use Checklist

As a board member, you should pose these questions to your CTO, Chief Science Officer, and legal counsel:

“Are any of our AI tools dual-use capable?”

Most frontier AI models (from OpenAI, Anthropic, Google, Meta) have demonstrated some degree of dual-use capability in biosecurity evaluations. If you use any of these models in R&D, drug discovery, genomic analysis, or materials science, the answer is likely yes.
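A practical first step is an internal register of AI tools and their dual-use exposure, so the board reviews an inventory rather than anecdotes. A minimal sketch in Python, with hypothetical entries (tools, vendors, and domains are illustrative placeholders, not recommendations):

```python
# Hypothetical internal AI-tool register; a real one would live in the
# organisation's GRC or asset-management system.
AI_TOOL_REGISTER = [
    {"tool": "protein-structure model", "provider": "vendor-a",
     "business_use": "drug discovery", "dual_use_domain": "biosecurity"},
    {"tool": "general LLM assistant", "provider": "vendor-b",
     "business_use": "R&D literature review", "dual_use_domain": "biosecurity"},
    {"tool": "invoice OCR", "provider": "vendor-c",
     "business_use": "finance operations", "dual_use_domain": None},
]

# Surface every tool that warrants a biosecurity-posture review.
flagged = [t for t in AI_TOOL_REGISTER if t["dual_use_domain"] is not None]
for t in flagged:
    print(f"{t['tool']} ({t['provider']}): review biosecurity posture")
```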

“What guardrails does our provider maintain?”

Does the provider conduct regular biosecurity red-team evaluations of its models? Does it publish the results? Does it have an escalation framework (like Anthropic’s Responsible Scaling Policy) that triggers additional safety controls when models reach certain capability thresholds?
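Escalation frameworks of this kind can be expressed as simple, auditable rules. The sketch below is hypothetical (the thresholds, scores, and actions are illustrative, not the mechanics of Anthropic's actual policy or any other provider's), but it shows the shape of answer a board should expect: evaluation results mapped deterministically to controls.

```python
from dataclasses import dataclass

@dataclass
class BiosecurityEval:
    """Hypothetical red-team result for one model release."""
    model: str
    uplift_score: float  # 0.0-1.0: fraction of red-team tasks with meaningful uplift

# Illustrative thresholds, checked from most to least severe.
ESCALATION_RULES = [
    (0.75, "halt deployment pending external review"),
    (0.40, "enhanced safeguards and board notification"),
    (0.10, "standard monitoring"),
]

def required_action(result: BiosecurityEval) -> str:
    """Map an evaluation score to the control it triggers."""
    for threshold, action in ESCALATION_RULES:
        if result.uplift_score >= threshold:
            return action
    return "no additional controls"

print(required_action(BiosecurityEval(model="frontier-model-x", uplift_score=0.52)))
# -> enhanced safeguards and board notification
```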

“What is our exposure to the ‘all lawful purposes’ language?”

If your AI provider has accepted government contracts with open-ended usage language, understand what that means for the models you use. Are the same model weights deployed commercially and for defence? Are the safety guardrails identical across both contexts?

“Do we have provider diversification?”

If your critical AI workflows depend on a single provider that could be designated a “supply chain risk” or lose government approval overnight, you have a concentration risk that the board should formally assess.
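At the engineering level, diversification means no critical workflow is hard-wired to a single vendor's API. A minimal failover sketch, assuming hypothetical provider wrappers that share a common `complete()` interface (real multi-provider routing must also handle differing capabilities, pricing, and data-handling terms):

```python
class ProviderUnavailable(Exception):
    """Raised when a provider is down or access has been revoked."""

class StubProvider:
    """Hypothetical wrapper; a real one would call the vendor's SDK."""
    def __init__(self, name: str, available: bool = True):
        self.name = name
        self.available = available

    def complete(self, prompt: str) -> str:
        if not self.available:
            raise ProviderUnavailable(self.name)
        return f"[{self.name}] response to: {prompt!r}"

def complete_with_failover(prompt: str, providers: list) -> str:
    """Try providers in priority order; fail over if one becomes
    unavailable overnight (outage, designation, contract dispute)."""
    failures = []
    for provider in providers:
        try:
            return provider.complete(prompt)
        except ProviderUnavailable as exc:
            failures.append(str(exc))
    raise RuntimeError(f"all providers failed: {failures}")

providers = [StubProvider("primary", available=False), StubProvider("secondary")]
print(complete_with_failover("summarise this board paper", providers))
```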

“Are we BWC-compliant in all jurisdictions?”

If your organisation operates in multiple countries, ensure that your AI-assisted R&D workflows comply with the Biological Weapons Convention in every jurisdiction. The BWC’s lack of enforcement does not reduce the fiduciary obligation to comply.

Deep Dive

Pentagon-OpenAI Deal: Why ‘Safety Stacks’ Are a Fiduciary Risk for Board Directors →

Related Analysis

The NATO-OpenAI Trust Gap: Why the CLOUD Act Is a Fiduciary Risk for Board Directors — How the CLOUD Act, the UK Investigatory Powers Act, and cross-border warrants create data sovereignty risk for directors adopting U.S.-based AI providers.

07 · References

References & Sources

[1] RAND Corporation, “Contemporary Foundation AI Models Increase Biological Weapons Risk,” 2025. https://www.rand.org/pubs/perspectives/PEA3853-1.html

[2] RAND Corporation, “When Should We Worry About AI Being Used to Design a Pathogen?” October 2025. https://www.rand.org/pubs/research_briefs/RBA4087-1.html

[3] CNAS, “AI and the Evolution of Biological National Security Risks,” September 2024. https://www.cnas.org/publications/reports/ai-and-the-evolution-of-biological-national-security-risks

[4] CSIS, “Opportunities to Strengthen U.S. Biosecurity from AI-Enabled Bioterrorism,” August 2025. https://www.csis.org/analysis/opportunities-strengthen-us-biosecurity-ai-enabled-bioterrorism-what-policymakers-should

[5] Euronews, “Extremists could use AI to make bioweapons capable of sparking future pandemics,” 3 December 2025. https://www.euronews.com/health/2025/12/03/extremists-could-use-ai-to-make-bioweapons-capable-of-sparking-future-pandemics-tech-exper

[6] Frontiers in AI, “Artificial intelligence challenges in the face of biological threats,” April 2024. https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2024.1382356/full

[7] Harvard Belfer Center, “Biosecurity in the Age of AI: What’s the Risk?” November 2023. https://www.belfercenter.org/publication/biosecurity-age-ai-whats-risk

[8] CIGI (Centre for International Governance Innovation), “AI Is Reviving Fears Around Bioterrorism,” 2025. https://www.cigionline.org/articles/ai-is-reviving-fears-around-bioterrorism-whats-the-real-risk/

[9] OpenAI, “Our agreement with the Department of War,” 27 February 2026. https://openai.com/index/our-agreement-with-the-department-of-war/

[10] Axios, “OpenAI-Pentagon deal faces same safety concerns,” 1 March 2026. https://www.axios.com/2026/03/01/openai-pentagon-anthropic-safety

[11] Wikipedia: Dual-use technology — https://en.wikipedia.org/wiki/Dual-use_technology (Wikidata: Q2915641)

[12] Wikipedia: Biological weapon — https://en.wikipedia.org/wiki/Biological_weapon (Wikidata: Q130796)

[13] Wikipedia: Biological Weapons Convention — https://en.wikipedia.org/wiki/Biological_Weapons_Convention (Wikidata: Q608658)

[14] Wikipedia: Lethal autonomous weapon — https://en.wikipedia.org/wiki/Lethal_autonomous_weapon (Wikidata: Q1426359)

[15] Wikipedia: CRISPR gene editing — https://en.wikipedia.org/wiki/CRISPR_gene_editing (Wikidata: Q15645384)

© 2026 AI Board Course. This article is for educational and governance training purposes. It does not constitute legal, medical, or biosecurity advice. Directors should consult qualified specialist counsel.

Next Step

Your Board Needs This Framework

The AI Board Course gives directors the language, frameworks, and technical literacy to lead on AI governance — not just defer to IT. Taught by June Lai, CFA, CPA, CMA.