This dissertation aims to explore Medical ML by learning how to build and evaluate a model through the lens of Trustworthy AI. The main focus will be on the uncertainty quantification methods that should accompany the entire ML process, from the **data** to the **model** and its **evaluation**. Although the main purpose is to develop an abstract framework applicable across different use cases and data modalities, this work will concentrate on demonstrating the practical usefulness of uncertainty quantification on several medical datasets.
The goal is to produce high-quality uncertainty estimates that streamline the process of quantifying, evaluating, improving, and communicating the uncertainty of machine learning models. This dissertation is aligned with the scientific roadmap of the AICeBlock project.
Author: Raquel Simão
Type: MSc thesis
Partner: FCT NOVA – Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa