Most AI models for CC screening in the literature are not based on XAI approaches. Their internal logic is therefore hidden from the user, who cannot verify, interpret, or understand the system's reasoning and how particular decisions are made. By exploring the use of example-based explanations to support DSS decisions, this work aims to increase the transparency of these tools and, consequently, to enhance trust and acceptance among medical professionals. To achieve that, different kinds of example-based explanations will be addressed, such as normative and comparative explanations.
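The retrieval step underlying both explanation types can be illustrated with a minimal sketch: normative explanations show training cases of the predicted class that resemble the query, while comparative explanations show the closest cases from a different class. All names, data, and the plain nearest-neighbour strategy below are illustrative assumptions, not the proposal's actual method.

```python
import numpy as np

def nearest_examples(query, examples, labels, target_label, k=2):
    """Return indices of the k training examples with the given label
    that are closest (Euclidean) to the query — the retrieval step
    shared by normative and comparative example-based explanations."""
    idx = np.where(labels == target_label)[0]
    dists = np.linalg.norm(examples[idx] - query, axis=1)
    return idx[np.argsort(dists)[:k]]

# Toy feature vectors standing in for extracted image features (hypothetical data).
X = np.array([[0.10, 0.20], [0.90, 0.80], [0.15, 0.25], [0.85, 0.90]])
y = np.array([0, 1, 0, 1])
query = np.array([0.12, 0.22])
pred = 0  # assume the model predicted class 0 for this query

normative = nearest_examples(query, X, y, pred)        # similar cases, same class
comparative = nearest_examples(query, X, y, 1 - pred)  # closest cases, other class
```

Presenting the retrieved cases alongside the model's prediction lets a clinician judge whether the decision is grounded in genuinely similar, correctly labelled examples.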
Expand FhP background knowledge in:
- Explainable Artificial Intelligence
- Machine Learning and Deep Learning
- Image Processing and Computer Vision
Author: Luís Marques
Type: MSc thesis
Partner: FEUP – Faculdade de Engenharia da Universidade do Porto