Explainable AI could help scientists develop better antibiotics

Artificial intelligence (AI) has grown tremendously in popularity and has revolutionized various aspects of our lives, from driving cars to developing new drugs. Despite its incredible capabilities, it can be difficult to understand AI’s decisions.

Explainable AI (XAI) offers a solution by providing justifications for AI decisions, thus increasing transparency. Researchers are now using XAI to examine predictive AI models in more detail and gain deeper insights into the field of chemistry.

These groundbreaking findings by researchers at the University of Manitoba will be presented at the upcoming fall meeting of the American Chemical Society (ACS).

AI is incredibly prevalent in today’s technology, but many AI models operate like black boxes, making it difficult to understand how they arrive at their results. This lack of transparency can lead to skepticism, especially when it comes to important tasks like identifying potential drug molecules.

“As scientists, we like justifications,” explains Rebecca Davis, chemistry professor at the University of Manitoba. “If we can develop models that provide insight into AI decision-making, scientists might be able to better understand these methods.”

Explainable AI (XAI) can solve this problem by providing insight into the decision-making process of AI models. Researchers like Davis are particularly interested in applying XAI to drug discovery AI models, especially for predicting new antibiotic candidates. Given that thousands of molecules must be screened to find just one effective drug and the ongoing threat of antibiotic resistance, accurate and efficient predictive models are critical.

“I want to use XAI to better understand what information computational chemistry models need to make their predictions,” says Hunter Sturm, a chemistry graduate student in Davis’ lab who is presenting the work at the meeting.

The researchers first used AI to analyze databases of known drug molecules and predict their biological effects. They then used an XAI model to identify the specific components of the drug molecules that influenced the predictions, giving them a clearer picture of the criteria the AI model used to classify a molecule as active. They found that XAI can detect nuances a human might miss and can process far more data at once. When the team examined penicillin molecules, for example, XAI uncovered something surprising.
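The predict-then-explain workflow described above can be illustrated with a toy sketch. Here a random forest classifies synthetic binary "fingerprint" vectors (each bit standing in for a molecular substructure), and permutation importance, one common XAI technique, reveals which bits the model actually relies on. The data, bit assignments, and model choice are illustrative assumptions, not the researchers' actual pipeline:

```python
# Toy sketch of the predict-then-explain workflow (hypothetical data,
# not the University of Manitoba group's actual models).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# 200 "molecules", each a 10-bit fingerprint; bit i flags a substructure.
X = rng.integers(0, 2, size=(200, 10))
# Ground truth (known only to us): substructures 2 and 5 confer activity.
y = ((X[:, 2] == 1) & (X[:, 5] == 1)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# XAI step: permutation importance measures how much accuracy drops when
# each fingerprint bit is shuffled, exposing what the model depends on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
top_bits = np.argsort(result.importances_mean)[::-1][:2]
print("Most influential substructure bits:", sorted(top_bits.tolist()))
```

A real drug-discovery pipeline would replace the synthetic bits with chemical fingerprints (e.g., Morgan fingerprints from RDKit) and might use richer attribution methods such as SHAP, but the principle is the same: the explanation points back to the structural features driving each prediction.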

“Many chemists believe that the core of penicillin is the key site for antibiotic action,” says Davis. “But that’s not what the XAI saw.” Instead, the structures associated with this core, rather than the core itself, were identified as the determining factor for classification. “This could be the reason why some penicillin derivatives with this core have low biological activity,” explains Davis.

The researchers plan to use XAI to improve predictive AI models and guide the development of new antibiotic compounds. By understanding how the algorithms prioritize molecular structures for antibiotic activity, they can train AI models to better identify effective compounds.

Working with a microbiology lab, the team will synthesize and test the compounds identified by the improved AI models. Ultimately, they hope that XAI will enable the development of more effective antibiotic compounds to combat antibiotic-resistant pathogens.

By using AI to explain their decision-making processes, the researchers believe they can help build trust and acceptance of the technology. They see AI applications in chemistry and drug discovery as the future of the field and want to pave the way for its widespread use.

By Olivia
