Description:
This project focuses on uncertainty quantification for machine learning, using different classification models. The objective is to explore new techniques for dealing with model misspecification uncertainty; accordingly, each model prediction is expected to be accompanied by a confidence level. Understanding the structure of a prediction system and defensibly quantifying its uncertainty is achievable and, when done, can significantly benefit both research and practical applications of AI in the life sciences, a domain where such guarantees are critical.
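To illustrate what is meant by a prediction accompanied by a confidence level, the following minimal Python sketch attaches a per-prediction uncertainty score (predictive entropy) to the output of a standard probabilistic classifier. The dataset, model, and uncertainty measure are purely illustrative assumptions, not the specific techniques developed in this thesis; scikit-learn and NumPy are assumed to be available.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Illustrative data and model; any classifier exposing class probabilities would do.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Class probabilities for a few inputs.
    proba = model.predict_proba(X[:5])

    # Predictive entropy as a simple uncertainty score:
    # 0 means the model is certain, higher values mean less confidence.
    uncertainty = -np.sum(proba * np.log(proba + 1e-12), axis=1)

    for cls, u in zip(proba.argmax(axis=1), uncertainty):
        print(f"predicted class {cls}, uncertainty {u:.3f}")

In practice, the project is concerned with uncertainty estimates that remain defensible under model misspecification, which goes beyond simply reading off the classifier's own probabilities as in this sketch.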
Outcome:
This project focuses on uncertainty quantification applied to machine learning models, where each model decision will be accompanied by a classification uncertainty. Understanding what a machine intelligence model does not know is critical in many fields and can significantly benefit AICOS research in the innovation area of Accountable AI, supporting the adoption of AICOS technology.
Author: Catarina Pires
Type: MSc thesis
Partner: FCT NOVA – Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa
Year: 2020