Transparency and Explainability for Machine Learning


With deep learning entering everyday applications such as healthcare, interpreting models' decisions is critical both for end-users, so they can build trust in those models and contest their decisions, and for domain experts, so they can assess the fairness and reasoning behind those decisions. Among these applications, automatic navigation is one of the most challenging problems in Computer Science, spanning a wide range of tasks: finding shortest paths between pairs of points, efficiently exploring and covering unknown environments, and even complex semantic visual problems ("Where are my keys?"). Addressing such problems is important for modern applications such as autonomous vehicles, which can improve urban mobility, and social robots.

For more information about my PhD, please consult this document.