Widely Available AI Could Have Deadly Consequences

Researchers warn that while AI is becoming more powerful and more accessible to anyone, there is almost no regulation or oversight of the technology, and only limited awareness, even among researchers such as Ekins himself, of its potential malicious uses.

“It is particularly difficult to identify dual-use equipment, materials, and knowledge in the life sciences, and decades have been spent trying to develop frameworks for doing so. There are very few countries that have specific statutory regulations in this regard,” says Filippa Lentzos, senior lecturer in science and international security at King’s College London and co-author of the paper. “There has been some discussion of dual use in the AI field in general, but the main focus has been on other social and ethical concerns, such as privacy. There has been very little discussion of dual use, let alone in the subfield of AI drug discovery,” she says.

Although a significant amount of work and expertise went into developing MegaSyn, hundreds of companies around the world already use AI for drug discovery, according to Ekins, and most of the tools needed to repeat their VX experiment are publicly available.

“While we were doing this, we realized that anyone with a computer and the limited knowledge needed to find the data sets and this kind of publicly available software, and simply put them together, could do it,” says Ekins. “How do you keep track of the thousands, maybe millions, of people who could do this and who have access to the information, the algorithms, and also the know-how?”

Since March, the paper has garnered more than 100,000 views. Some scientists have criticized Ekins and his co-authors for crossing an ethical gray line in conducting their VX experiment. “It’s really a bad way to use the technology, and it didn’t feel good to do it,” Ekins says. “Afterward, I had nightmares.”

Other researchers and bioethicists have applauded the team for providing a concrete, proof-of-concept demonstration of how AI can be misused.

“I was quite alarmed when I first read this article, but it didn’t surprise me either. We know that AI technologies are becoming more and more powerful, and the fact that they can be used this way doesn’t seem surprising,” says Bridget Williams, a public health physician and postdoctoral fellow at the Center for Population-Level Bioethics at Rutgers University.

“At first I wondered whether it was a mistake to publish this piece, since it could lead people with bad intentions to use this type of information maliciously. But the advantage of an article like this is that it could prompt more scientists, and the research community at large, including funders, journals, and preprint servers, to consider how their work might be misused and to take steps to guard against that, as the authors of this article did,” she says.

In March, the U.S. Office of Science and Technology Policy (OSTP) convened Ekins and his colleagues at the White House for a meeting. The first thing OSTP representatives asked, according to Ekins, was whether he had shared any of the deadly molecules MegaSyn had generated with anyone. (OSTP did not respond to repeated interview requests.) Their second question was whether they could have the file containing all the molecules. Ekins says he declined. “Someone else could go and do this anyway. There’s definitely no oversight. There’s no control. I mean, it’s just down to us, right?” he says. “There’s just a heavy reliance on our morals and our ethics.”
