Microsoft says AI can create “zero day” threats in biology – MIT Technology Review

Artificial intelligence can design toxins that evade security controls.
A team at Microsoft says it used artificial intelligence to discover a “zero day” vulnerability in the biosecurity systems used to prevent the misuse of DNA.
These screening systems are designed to stop people from purchasing genetic sequences that could be used to create deadly toxins or pathogens. But now researchers led by Microsoft's chief scientist, Eric Horvitz, say they have figured out how to bypass the protections in a way previously unknown to defenders.
The team described its work today in the journal Science.
Horvitz and his team focused on generative AI algorithms that propose new protein shapes. These types of programs are already fueling the hunt for new drugs at well-funded startups like Generate Biomedicines and Isomorphic Labs, a spinout of Google. 
The problem is that such systems are potentially “dual use.” They can use their training sets to generate both beneficial molecules and harmful ones.
Microsoft says it began a “red-teaming” test of AI’s dual-use potential in 2023 in order to determine whether “adversarial AI protein design” could help bioterrorists manufacture harmful proteins. 
The safeguard that Microsoft attacked is what’s known as biosecurity screening software. To manufacture a protein, researchers typically need to order a corresponding DNA sequence from a commercial vendor, which they can then install in a cell. Those vendors use screening software to compare incoming orders with known toxins or pathogens. A close match will set off an alert.
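The core of that screening step is sequence comparison: an incoming order is matched against a database of sequences of concern, and close matches trigger an alert. As a purely illustrative sketch (not the actual screening software, which uses alignment tools and curated databases), the matching idea can be shown with a naive k-mer overlap check; all names, sequences, and thresholds below are hypothetical:

```python
from typing import Dict, List, Set

def kmers(seq: str, k: int = 8) -> Set[str]:
    """Break a DNA sequence into its set of overlapping k-mers."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order: str, threat_db: Dict[str, str],
                 k: int = 8, threshold: float = 0.5) -> List[str]:
    """Return names of database entries whose k-mer content is close to the order.

    Similarity is Jaccard overlap of k-mer sets. Purely illustrative:
    real biosecurity screening relies on far more sensitive methods.
    """
    hits = []
    order_kmers = kmers(order, k)
    for name, seq in threat_db.items():
        db_kmers = kmers(seq, k)
        union = order_kmers | db_kmers
        if union and len(order_kmers & db_kmers) / len(union) >= threshold:
            hits.append(name)
    return hits

# Hypothetical database with one listed sequence.
db = {"listed_toxin": "ATGGCTAGCTAGGCTTACGATCGATCGTACG"}

# An order identical to the listed sequence is flagged.
exact = screen_order("ATGGCTAGCTAGGCTTACGATCGATCGTACG", db)

# A redesigned variant sharing no 8-mers slips past this naive check,
# illustrating why such screens can be evaded by altered sequences.
altered = screen_order("ATGGCAAGCTAGACTTACGTTCGATGGTACG", db)
```

The point of the sketch is the weakness the Microsoft team exploited: a screen keyed to similarity with known sequences can miss a variant that has been altered enough to look different while (as predicted by the AI models) retaining its function.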
To design its attack, Microsoft used several generative protein models (including its own, called EvoDiff) to redesign toxins—changing their structure in a way that let them slip past screening software but was predicted to keep their deadly function intact.
The researchers say the exercise was entirely digital and they never produced any toxic proteins. That was to avoid any perception that the company was developing bioweapons. 
Before publishing the results, Microsoft says, it alerted the US government and software makers, who’ve already patched their systems, although some AI-designed molecules can still escape detection. 
“The patch is incomplete, and the state of the art is changing. But this isn’t a one-and-done thing. It’s the start of even more testing,” says Adam Clore, director of technology R&D at Integrated DNA Technologies, a large manufacturer of DNA, who is a coauthor on the Microsoft report. “We’re in something of an arms race.”
To make sure nobody misuses the research, the researchers say, they are not disclosing some of their code and did not reveal which toxic proteins they asked the AI to redesign. Some dangerous proteins are well known, however, such as ricin, a poison found in castor beans, and the infectious prions that cause mad-cow disease.
“This finding, combined with rapid advances in AI-enabled biological modeling, demonstrates the clear and urgent need for enhanced nucleic acid synthesis screening procedures coupled with a reliable enforcement and verification mechanism,” says Dean Ball, a fellow at the Foundation for American Innovation, a think tank in San Francisco.
Ball notes that the US government already considers screening of DNA orders a key line of security. Last May, in an executive order on biological research safety, President Trump called for an overall revamp of that system, although so far the White House hasn’t released new recommendations.
Others doubt that commercial DNA synthesis is the best point of defense against bad actors. Michael Cohen, an AI-safety researcher at the University of California, Berkeley, believes there will always be ways to disguise sequences and that Microsoft could have made its test harder.
“The challenge appears weak, and their patched tools fail a lot,” says Cohen. “There seems to be an unwillingness to admit that sometime soon, we’re going to have to retreat from this supposed choke point, so we should start looking around for ground that we can actually hold.” 
Cohen says biosecurity should probably be built into the AI systems themselves—either directly or via controls over what information they give. 
But Clore says monitoring gene synthesis is still a practical approach to detecting biothreats, since the manufacture of DNA in the US is dominated by a few companies that work closely with the government. By contrast, the technology used to build and train AI models is more widespread. “You can’t put that genie back in the bottle,” says Clore. “If you have the resources to try to trick us into making a DNA sequence, you can probably train a large language model.”

© 2025 MIT Technology Review
