Dependable, Certifiable & Explainable Artificial Intelligence for Critical Systems
Our collection of tools
Our strong relationship with industrial partners pushed us to make our research results actionable. Throughout the project, we therefore developed five open-source libraries and one open-source dataset that provide practical tools for building dependable, reliable and explainable machine learning, and we have paid close attention to maintaining and continuously improving them.
Xplique (pronounced \ɛks.plik\) is a Python toolkit dedicated to explainability. The goal of this library is to gather the state of the art in Explainable AI to help practitioners understand complex neural network models.
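As a rough illustration of the attribution workflow (a sketch based on our understanding of Xplique's explainer API, with a toy Keras model and random inputs standing in for a real use case; exact argument conventions should be checked against the documentation):

```python
import numpy as np
import tensorflow as tf
from xplique.attributions import Saliency

# Toy Keras classifier on 32x32 RGB images (placeholder for a real trained model).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Random inputs and one-hot targets stand in for real images and labels.
inputs = np.random.rand(4, 32, 32, 3).astype("float32")
targets = tf.one_hot(np.random.randint(0, 10, size=4), depth=10)

explainer = Saliency(model)                         # wrap the model with an attribution method
explanations = explainer.explain(inputs, targets)   # one saliency heatmap per input
print(explanations.shape)
```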
Puncc (short for Predictive uncertainty calibration and conformalization) is an open-source Python library. It seamlessly integrates a collection of state-of-the-art conformal prediction algorithms and associated techniques for diverse machine learning tasks, including regression, classification and anomaly detection.
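The core idea behind conformal prediction fits in a few lines. The snippet below implements split conformal regression from scratch in NumPy as a generic illustration of the technique Puncc packages; it does not use Puncc's own API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data and a deliberately simple point predictor y ≈ 2x.
x_calib = rng.uniform(0, 1, 500)
y_calib = 2 * x_calib + rng.normal(0, 0.1, 500)

def predict(x):
    return 2 * x

# Nonconformity scores on a held-out calibration set: absolute residuals.
scores = np.abs(y_calib - predict(x_calib))

# Conformal quantile giving (1 - alpha) marginal coverage.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new point: point prediction +/- q.
x_new = 0.3
print(predict(x_new) - q, predict(x_new) + q)
```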
Deel-Lip provides an efficient implementation of k-Lipschitz layers for neural networks. Controlling the Lipschitz constant of a layer or of a whole neural network has many applications, ranging from adversarial robustness to Wasserstein distance estimation.
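As a concept sketch (not the deel-lip API), the snippet below estimates a dense layer's Lipschitz constant with respect to the L2 norm, i.e. the spectral norm of its weight matrix, by power iteration and rescales the weights so the layer becomes 1-Lipschitz:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))          # weight matrix of a dense layer y = W @ x

def spectral_norm(W, n_iter=50):
    """Estimate the largest singular value of W by power iteration."""
    v = rng.normal(size=W.shape[1])
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

sigma = spectral_norm(W)
W_1lip = W / sigma                       # rescaled weights: the layer is now 1-Lipschitz
print(sigma, np.linalg.svd(W_1lip, compute_uv=False)[0])   # largest singular value ≈ 1.0
```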
Oodeel is a library that performs post-hoc deep Out-Of-Distribution (OOD) detection on any already-trained neural network image classifier. The philosophy of the library is to favor quality over quantity and to foster easy adoption.
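As a generic illustration of post-hoc OOD detection (not Oodeel's own API), the snippet below computes the classic maximum-softmax-probability score from the logits of an already trained classifier; the example logits and threshold are made up for the sketch:

```python
import numpy as np

def msp_ood_score(logits):
    """Higher score = more likely out-of-distribution (1 - max softmax probability)."""
    z = logits - logits.max(axis=1, keepdims=True)    # shift logits for numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return 1.0 - probs.max(axis=1)

# Logits from an already trained classifier on confident (in-distribution) and unfamiliar inputs.
logits_id = np.array([[8.0, 0.5, 0.2], [6.0, 1.0, 0.8]])
logits_ood = np.array([[1.1, 1.0, 0.9]])

scores = msp_ood_score(np.vstack([logits_id, logits_ood]))
threshold = 0.5                        # would be calibrated on held-out in-distribution data
print(scores, scores > threshold)      # only the last input crosses the OOD threshold
```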
Influenciae is a Python toolkit dedicated to computing influence values for the discovery of potentially problematic samples in a dataset and the generation of data-centric explanations for deep learning models.
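As a concept sketch (not the Influenciae API), the snippet below computes a first-order, TracIn-style influence proxy: the dot product between each training sample's loss gradient and a test sample's loss gradient, whose most negative values flag potentially problematic training points. The tiny linear model and random data are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny linear regression fitted by least squares (stand-in for a trained model).
X_train = rng.normal(size=(100, 3))
y_train = X_train @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 100)
w = np.linalg.lstsq(X_train, y_train, rcond=None)[0]

def loss_grad(x, y):
    """Gradient of the squared loss 0.5 * (w @ x - y)**2 with respect to w."""
    return (w @ x - y) * x

# Influence proxy: alignment between each training gradient and the test gradient.
x_test, y_test = rng.normal(size=3), 0.7
g_test = loss_grad(x_test, y_test)
scores = np.array([loss_grad(x, y) @ g_test for x, y in zip(X_train, y_train)])

suspects = np.argsort(scores)[:5]        # most negative scores = candidate problematic samples
print(suspects, scores[suspects])
```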
LARD (Landing Approach Runway Detection) is a dataset of aerial front-view images of runways designed for the aircraft landing phase. It contains over 17K synthetic images of various runways, enriched with more than 1,800 annotated pictures from real landing footage for comparison.