Experts warn that artificial intelligence could one day trigger a pandemic

Chatbots aren’t the only AI models that have made progress in recent years. Specialized models trained on biological data have advanced just as quickly and could help speed up vaccine development, cure diseases, and breed drought-resistant crops. But the same characteristics that make these models so useful also carry potential dangers. For a model to design a safe vaccine, for example, it must first know what is harmful.

That’s why experts are calling on governments to introduce mandatory oversight and guardrails for advanced biological models in a new policy paper published August 22 in the peer-reviewed journal Science. While today’s AI models are unlikely to contribute “significantly” to biological risk, the authors write, future systems could help develop new pathogens capable of triggering a pandemic.

“The essential ingredients for developing high-concern, advanced biological models may already be in place or soon will be,” write the authors, who are public health and legal experts at Stanford School of Medicine, Fordham University and the Johns Hopkins Center for Health Security. “Establishment of effective governance systems is needed now.”

“We need to plan now,” says Anita Cicero, deputy director of the Johns Hopkins Center for Health Security and co-author of the paper. “Some structured government oversight and regulations will be necessary to reduce the risks of particularly powerful tools in the future.”


Humans have a long history of using biological agents as weapons. In the 14th century, Mongol troops are said to have catapulted plague-infested corpses over enemy walls, possibly helping to spread the Black Death in Europe. During World War II, several major powers experimented with biological weapons such as plague and typhus, which Japan deployed against several Chinese cities. And at the height of the Cold War, both the U.S. and the Soviet Union ran extensive bioweapons programs. But in 1972, both sides – along with much of the rest of the world – agreed to end such programs and ban bioweapons under the Biological Weapons Convention.

This international treaty, though widely regarded as effective, has not completely eliminated the threat posed by biological weapons. As recently as the early 1990s, the Japanese cult Aum Shinrikyo repeatedly attempted to develop and release bioweapons such as anthrax. Those efforts failed because the group lacked the technical know-how. But experts warn that future AI systems could fill that gap. “As these models become more powerful, the level of sophistication needed by a malicious actor to cause damage decreases,” says Cicero.

Not all weaponized pathogens can be transmitted from person to person, and those that can tend to become less deadly as they become more transmissible. Yet AI may be able to “figure out how a pathogen could maintain its transmissibility while sustaining its fitness,” Cicero says. A terrorist group or other malicious actor isn’t the only way this could happen: even a well-intentioned researcher without the right protocols in place could inadvertently develop a pathogen that “gets released and then spreads unchecked,” Cicero says. Bioterrorism has long been flagged as a global concern by figures including Bill Gates and U.S. Commerce Secretary Gina Raimondo, who has led the Biden administration’s approach to AI.


The gap between a virtual blueprint and a physical biological agent is surprisingly small. Many companies offer the ability to order biological material online. While there are some measures in place to prevent the purchase of dangerous genetic sequences, they are applied inconsistently both in the U.S. and abroad, making them easy to circumvent. “The dam has a lot of little holes that water spurts out of,” Cicero explains. She and her co-authors call for mandatory screening requirements, but point out that even these are not enough to completely prevent the risks of biological AI models.

So far, 175 people – including researchers, academics and industry experts from Harvard, Moderna and Microsoft – have signed a set of voluntary commitments laid out in the Responsible AI x Biodesign community statement released earlier this year. Cicero, one of the signatories, says she and her co-authors agree that while these commitments are important, they are not enough on their own to protect against the risks. The paper points out that in other high-risk areas of biology, such as work with live Ebola virus in a lab, we do not rely on voluntary commitments alone.

The authors recommend that governments work with experts in machine learning, infectious diseases and ethics to develop a “battery of tests” that biological AI models must undergo before being made available to the public, with a focus on whether they could pose “pandemic-level risks.”

Cicero explains: “There has to be some kind of floor. At a minimum, the risk-benefit assessments and pre-release testing of biological design tools and highly capable large language models would have to include an evaluation of whether these models could, among other things, lead to pandemic-level risks.”

Because testing such capabilities in an AI system can be risky in itself, the authors recommend creating proxy assessments – for example, whether an AI can synthesize a new harmless pathogen, as a proxy for its ability to synthesize a deadly one. Based on these tests, authorities can decide whether and to what extent to restrict access to a model. Oversight policies must also take into account the fact that open-source systems can be modified after release, potentially making them more dangerous.


The authors also recommend that the U.S. develop standards for the responsible sharing of large datasets on “pathogenic traits of concern,” and that a federal agency be empowered to work with the recently established U.S. AI Safety Institute. The UK AI Safety Institute, which works closely with its U.S. counterpart, has already conducted safety testing on leading AI models, including for biological risks, but that work has largely focused on assessing the capabilities of general-purpose large language models rather than biology-specific systems.

“The last thing we want is to shut down the industry and hamper our progress,” says Cicero. “It’s a balancing act.” To avoid hampering research through over-regulation, the authors recommend that regulators initially focus on only two types of models: those trained on biological data with very large computational power, and models of any size trained on particularly sensitive biological data that are not widely available, such as new information linking viral genetic sequences to their potential to cause pandemics.

Over time, the scope of models of concern could expand, especially if future AIs become able to conduct research autonomously, Cicero says. Imagine “100 million Pfizer chief science officers working 24/7 at 100 times the speed of the real ones,” Cicero says, noting that while this could lead to incredible breakthroughs in drug discovery and development, it would also greatly increase the risks.

The paper stresses the need for international cooperation to address these risks, especially given that they threaten the entire globe. However, the authors point out that while policy harmonization would be ideal, “countries with the most advanced AI technology should prioritize effective evaluations, even at the expense of international consistency.”

Given the projected advances in AI capabilities and the relative ease of obtaining biological material and engaging third parties to conduct experiments remotely, Cicero believes biological risks from AI could manifest themselves “within the next 20 years, or maybe even much sooner” if there isn’t adequate oversight. “We need to think not only about the current version of all the tools available, but also the next versions, because we’re seeing exponential growth. These tools are going to become more and more powerful,” she says.

By Olivia
