This project focuses on a hot research topic: explainable artificial intelligence (XAI). The objective is to explore new techniques for delivering explanations applied to time series. The developed AI models are expected to deliver not only their predictions but also an associated explanation in a human-intelligible format. The project will explain a set of state-of-the-art time series classification problems using three approaches of incremental complexity:
1) As a starting point, conventional machine learning approaches on univariate data will be explained via feature selection, symbolic representation transformations, and sensitivity analysis;
2) The project will then move to explaining Deep Learning decisions by applying Layer-wise Relevance Propagation (LRP) to time series;
3) Finally, causal relationships in multimodal time series will be explained using Bayesian Belief Networks.
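To make the first approach concrete, the sketch below shows one common symbolic representation transformation, Symbolic Aggregate approXimation (SAX), which turns a numeric series into a short word that a human can read and compare. This is a minimal illustration, not the project's actual pipeline; the 4-letter alphabet, segment count, and Gaussian breakpoints are illustrative choices.

```python
import numpy as np

# Breakpoints splitting N(0, 1) into 4 equiprobable regions (one per symbol).
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])

def paa(series, n_segments):
    """Piecewise Aggregate Approximation: mean of n (roughly) equal segments."""
    return np.array([seg.mean() for seg in np.array_split(series, n_segments)])

def sax(series, n_segments=8, alphabet="abcd"):
    """Map a univariate series to a short symbolic word."""
    z = (series - series.mean()) / series.std()        # z-normalise
    idx = np.searchsorted(BREAKPOINTS, paa(z, n_segments))  # symbol per segment
    return "".join(alphabet[i] for i in idx)

rng = np.random.default_rng(0)
ramp = np.linspace(-2, 2, 64) + 0.1 * rng.standard_normal(64)
word = sax(ramp)
print(word)  # a short word rising from 'a' toward 'd' for this upward trend
```

The resulting word (e.g. rising letters for an upward trend) is directly interpretable, and mismatching letters between two series point a user to where and how they differ.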
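For the second approach, the idea behind LRP can be sketched on a toy network: the class score is redistributed backwards, layer by layer, in proportion to each unit's contribution, until every input time step carries a relevance value. The snippet below uses the epsilon rule on a tiny one-hidden-layer ReLU scorer with random weights standing in for a trained classifier; all names and sizes are illustrative assumptions, not the project's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny fixed ReLU scorer over a length-16 series (illustrative, untrained).
W1 = rng.standard_normal((16, 8)) * 0.3   # input -> hidden weights
w2 = rng.standard_normal(8) * 0.3         # hidden -> class-score weights

def stabilize(z, eps=1e-6):
    # epsilon-rule stabiliser: push denominators away from zero
    return z + eps * np.where(z >= 0, 1.0, -1.0)

def lrp_epsilon(x):
    """Redistribute the class score back onto the input time steps."""
    a1 = np.maximum(0.0, x @ W1)          # hidden ReLU activations
    score = a1 @ w2                       # scalar class score
    # output -> hidden: each unit's share of the score
    z2 = a1 * w2
    r1 = z2 / stabilize(z2.sum()) * score
    # hidden -> input: each time step's share of each hidden unit
    z1 = x[:, None] * W1
    r0 = (z1 / stabilize(z1.sum(axis=0)) * r1).sum(axis=1)
    return r0, score

x = np.sin(np.linspace(0, 3 * np.pi, 16))
relevance, score = lrp_epsilon(x)
# conservation property: relevances sum (approximately) to the class score
print(round(float(score), 4), round(float(relevance.sum()), 4))
```

Plotting `relevance` against the original series highlights which time steps drove the decision, which is exactly the kind of explanation the project targets.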
Explainable AI is not only of significant and topical academic interest; it will also play a pivotal role in future AI systems deployed in industry. This project will foster innovative thinking in this area, aiming to build explainable AI models and promote a Responsible AI mindset at AICOS.
Author: Maria Neves
Type: MSc thesis
Partner: FCT NOVA – Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa