Artificial intelligence is edging into territory that has long kept biodefense experts up at night. In a test run for a major AI company, microbiologist and federal biosecurity adviser Dr. David Relman tells the New York Times, a chatbot not only laid out how to tweak a dangerous pathogen to resist treatment but also mapped out how to disperse it via a vulnerability in a mass-transit system. It offered tactical ideas Relman hadn't even thought to ask for, with a "level of deviousness and cunning that I just found chilling." Other scientists shared transcripts in which models from OpenAI, Google, and Anthropic walked through buying genetic material, assembling viruses, and even evading airport security or maximizing economic damage to US agriculture.
AI firms and some researchers tout huge medical upsides to using the technology in biology, while biosecurity pros warn the tools are lowering the bar for sophisticated biological attacks. Guardrails exist but can be bypassed, "like a flimsy wooden fence that is easy to overcome," says Dr. Cassidy Nelson of the Center for Long-Term Resilience. Older models also remain accessible even after updates are made, and studies show chatbots already match or beat most virologists on technical questions. Even some AI CEOs say biology is the risk that worries them most, "because of its very large potential for destruction and the difficulty of defending against it," as Anthropic CEO and biologist Dario Amodei wrote in January. For expert reactions and specific chatbot examples, read the full piece here. Meanwhile, more here on how AI-designed toxins were able to bypass safety checks.