A small dog running after a ball. A more or less inspired piece of music. A deadly toxin that could kill us … Three examples of what artificial intelligences can produce despite the safeguards imposed on them, according to a team of Microsoft biosecurity researchers.
On October 2, its members published their work in the journal Science, after identifying weaknesses in the screening software that is supposed to prevent the creation of proteins capable of acting as deadly poisons.
Schematically, when biochemists want to produce a protein, they turn to specialized companies that sell the synthetic DNA sequences they need. Once the order arrives, they insert the sequence into a cell and study its effects. To prevent a scientist with terrorist intentions from doing harm, these vendors use screening software that checks whether an order resembles a known toxin or pathogen; if it does, an alert is triggered.
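To give an idea of the principle (and only the principle), here is a minimal, purely illustrative sketch in Python of how comparing an order against known sequences of concern might work. The k-mer overlap heuristic, the threshold and the example sequences are assumptions chosen for illustration; they do not describe the actual commercial screening tools mentioned in the study.

```python
# Toy illustration of sequence-similarity screening: flag an order whose
# k-mers overlap heavily with a reference sequence on a watchlist.
# Real biosecurity screening relies on curated databases and far more
# sophisticated protein-level homology searches.

def kmers(seq: str, k: int = 12) -> set[str]:
    """Return the set of all length-k substrings of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(order: str, reference: str, k: int = 12) -> float:
    """Fraction of the order's k-mers that also appear in the reference."""
    order_kmers = kmers(order, k)
    if not order_kmers:
        return 0.0
    return len(order_kmers & kmers(reference, k)) / len(order_kmers)

def screen_order(order: str, watchlist: dict[str, str],
                 threshold: float = 0.3) -> list[str]:
    """Return the names of watchlist entries the order resembles too closely."""
    return [name for name, ref in watchlist.items()
            if similarity(order, ref) >= threshold]

if __name__ == "__main__":
    # Purely fictional reference entry, used only to exercise the code.
    watchlist = {"toy_toxin_A": "ATGGCTAAGCTTGGATCCGAATTCACGTGCAAGT" * 3}
    suspicious = watchlist["toy_toxin_A"][:60] + "ACGTACGT"
    print(screen_order(suspicious, watchlist))   # -> ['toy_toxin_A']
    print(screen_order("ATCG" * 20, watchlist))  # -> []
```

It is precisely this kind of safeguard, in its real and much more elaborate form, that the Microsoft team set out to stress-test.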
Imitating deadly poisons and toxins
As part of its study, the Microsoft team focused on generative algorithms that design new protein structures, still unknown to the scientific world. This is a promising research field, attracting many laboratories and startups searching for new drugs. Obviously, these AI systems are potentially “dual use”: they can draw on their training data to generate beneficial molecules as well as harmful ones.
This problem worries Microsoft. That is why the company set up a “red team” in 2023 to identify weaknesses in biosecurity practices across the protein engineering process and to see whether bioterrorists could create dangerous proteins.
This is how Bruce Wittmann, a bioengineer at Microsoft who had until then worked on proteins likely to contribute to fighting disease or to food production, began playing the part of a budding bioterrorist.
In detail, he used AI to create digital protein designs capable of imitating deadly poisons and toxins such as ricin (already used in several terrorist attacks), botulinum toxin and Shiga toxin. With his colleagues, along with several biosecurity experts, he wanted to know what would happen if DNA sequences encoding proteins close to those found in pathogens or toxins were submitted to companies that synthesize nucleic acids. They therefore put to the test the biosecurity screening software used by DNA sequence suppliers.
Systems in need of improvement
This software is an essential safeguard. Yet it failed to detect many of the genes designed by AI. After selecting 72 different proteins subject to legal controls, the researchers generated more than 70,000 DNA sequences likely to produce variants. According to computer models, some of these variants would also be toxic.
Bruce Wittmann asked four suppliers of the biosecurity screening software used by synthetic DNA laboratories to analyze these sequences, with very uneven results. One of these tools identified 70% of the sequences, while another missed more than 75% of the potential toxins.
Updates have since been made to three of these programs. On average, they now identify 72% of the AI-generated sequences, and 97% of those the models considered most likely to yield toxins.
Obviously, Bruce Wittmann’s team did not take the exercise to its conclusion: it did not order the proteins and did not carry out the manipulations needed to create potentially lethal agents.
“It’s just a beginning”
Given the danger of this discovery, the researchers, with the agreement of the journal Science, also decided not to reveal certain information about the DNA sequences generated by the AI and the filtering systems put in place by the industry.
According to the team, other biosecurity safeguards must also be reinforced, because some DNA suppliers, which represent about 20% of the market, do not screen their orders at all. The team also considers it necessary to build additional safeguards into AI-based protein design tools.
James Diggans, vice president of policy and biosecurity at Twist Bioscience, a DNA synthesis company, points out that “the real number of people attempting misuse is perhaps very close to zero.” Certainly, but sometimes one person is enough …
Source: BFM TV
