Explainable AI (XAI) is a developing area of machine learning that aims to make the black-box decisions of AI systems understandable, by inspecting the models and the steps involved in reaching a decision. One valuable type of explanation for clinicians is to justify a model's decision by pointing out real examples that are strongly similar to the sample under analysis. Examples representing the opposite outcome (counterexamples) are equally important for understanding the decision, and both help establish trust between human users and AI.
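As a minimal sketch (not part of the thesis itself), example-based explanation can be framed as nearest-neighbour retrieval in a feature space: given the feature vector of the sample under analysis, return the closest training samples with the same predicted label (supporting examples) and the closest with a different label (counterexamples). The feature vectors, labels, and `retrieve_examples` helper below are illustrative assumptions, not artifacts of the project:

```python
import numpy as np

def retrieve_examples(query, features, labels, predicted_label, k=3):
    """Return indices of the k most similar training examples with the same
    label as the prediction (supporting examples) and the k most similar
    with a different label (counterexamples), by Euclidean distance."""
    dists = np.linalg.norm(features - query, axis=1)
    order = np.argsort(dists)  # indices sorted from nearest to farthest
    examples = [i for i in order if labels[i] == predicted_label][:k]
    counterexamples = [i for i in order if labels[i] != predicted_label][:k]
    return examples, counterexamples

# Toy demo: 2-D feature vectors standing in for image embeddings.
features = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],
                     [1.0, 1.0], [1.1, 0.9], [0.9, 1.1]])
labels = np.array([0, 0, 0, 1, 1, 1])
examples, counterexamples = retrieve_examples(
    np.array([0.05, 0.05]), features, labels, predicted_label=0, k=2)
```

In practice the features would come from a trained medical-imaging model rather than raw pixels, so that "similar" reflects clinically relevant structure.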
However, using data from other patients (in particular, visual data containing biometric information) to support the explanation of a decision raises several privacy concerns, which may in turn have legal implications. The goal of this thesis is to find new methodologies for performing XAI through analogous examples present in the training set of the AI system, while at the same time ensuring the obfuscation of the biometric information present in the images.
The work will explore XAI with differential-privacy approaches for the TAMI (Transparent Artificial Medical Intelligence) project in the area of glaucoma, focusing on the obfuscation of biometric information in XAI approaches that involve retrieving similar examples for explanation.
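One standard differential-privacy building block that could apply here is the Laplace mechanism: before a training example's embedding (or any derived representation) is released as part of an explanation, calibrated noise is added so that no individual's biometric signal can be recovered exactly. The sketch below is an illustrative assumption about how such obfuscation might look, not the method the thesis will necessarily adopt; `sensitivity` and `epsilon` are the usual DP parameters:

```python
import numpy as np

def laplace_obfuscate(embedding, sensitivity=1.0, epsilon=0.5, rng=None):
    """Laplace mechanism: add i.i.d. Laplace noise with scale
    sensitivity/epsilon to each coordinate, giving epsilon-differential
    privacy with respect to the assumed L1 sensitivity."""
    rng = rng if rng is not None else np.random.default_rng()
    scale = sensitivity / epsilon
    return embedding + rng.laplace(0.0, scale, size=embedding.shape)

# Demo with a fixed seed for reproducibility.
emb = np.array([0.2, 0.7, 0.1])
noisy = laplace_obfuscate(emb, rng=np.random.default_rng(0))
```

A smaller `epsilon` gives stronger privacy but noisier (less faithful) released examples, which is exactly the utility/privacy trade-off the thesis would need to study for retrieved glaucoma images.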
Author: Fábio Araújo
Type: MSc thesis
Partner: FEUP – Faculdade de Engenharia da Universidade do Porto