Many people believe that understanding how AI reaches its conclusions is essential for creating trust. In this AI-hub project, researchers from UMCU, TU/e and UU are exploring explainable AI techniques that can help us understand how this black box works. They question whether these techniques are good enough to instill trust in AI.

Lack of trust in AI, for example among clinicians, patients, and regulators, may be one of the reasons why AI is not widely used in clinical settings. “This is understandable, especially in contexts where these decisions impact lives and could potentially be harmful,” says PhD student Alex Carriero from UMCU. “Ideally, we would be able to understand how an AI makes its decisions.”

Trust
Carriero and her colleagues are investigating the use of explainable AI techniques. These are processes and methods used to describe an AI model; without them, the model is essentially a black box. Carriero says, “There is a widespread belief that explainable AI techniques, by explaining the decision-making processes of AI systems, will help inspire trust among all stakeholders and potentially lead to new clinical insights. In our research, we question whether explainable AI techniques are good enough to help instill trust.”
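To give a concrete flavour of what such techniques do, below is a minimal sketch of permutation feature importance, one widely used model-agnostic explanation method. This is an illustrative example only, not necessarily a method studied in this project: it treats the model as a black box and measures how much its accuracy drops when each input feature is shuffled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the outcome depends only on feature 0, not feature 1.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def model_predict(X):
    # Stand-in for a trained black-box model (hypothetical; any
    # fitted classifier's predict function could be used here).
    return (X[:, 0] > 0).astype(int)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

baseline = accuracy(y, model_predict(X))

importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    # Shuffle one feature column, breaking its link to the outcome.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    # Importance = how much accuracy drops without this feature's signal.
    importances.append(baseline - accuracy(y, model_predict(X_perm)))
```

Here the explanation correctly attributes the model's predictions to feature 0 and assigns no importance to feature 1. The pitfall the researchers point to is that on real clinical models such scores are far harder to validate: there is no ground truth against which to check the explanation itself.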

“Ideally, we would be able to understand how an AI makes its decisions”

Promises and Pitfalls
The major difficulty with explainable AI techniques is that it is hard to determine whether the explanations are correct and if they offer clinically useful information, explains Carriero. “In our research, we highlight the promises and pitfalls of current explainable AI techniques for use in a clinical context. We demonstrate how using these tools appropriately can help model developers catch mistakes in their models so they can improve them, and conversely, how misinterpreting their output can be dangerous and lead to decisions that may adversely affect patients.”

Potential
“Trust, accountability, fairness, and safety are essential for AI systems in clinical practice. We investigate how well explainable AI can help achieve these things,” Carriero concludes. “I think AI has great potential to work alongside humans and help us be more efficient. But in a medical context, I don’t think they would ever replace doctors or nurses.”

Team: Maarten van Smeden (UMCU), Anna Vilanova Bartoli (TU/e), Anne de Hond (UMCU), Alex Carriero (UMCU), Karel Moons (UMCU), Dennis Collaris (TU/e), Bram Cappers (TU/e), Fernando Paulovich (TU/e), Sanne Abeln (UU)