Uncertainty meets Explainability

IP5: Uncertainty meets Explainability: Combining Uncertainty Quantification and Explainable Machine Learning for Crop Monitoring

Lead: Ribana Roscher

PhD student: Mohamed Farag

Central Question: How can we enhance the understanding of deep neural network (DNN) models and improve the robustness of sensing by combining explainability and uncertainty quantification?

Motivation

Trustworthiness, reliability, and robustness of machine learning (ML) models are pivotal for avoiding consequential failures when the models are deployed in the real world.

Tools

Uncertainty estimation and explainable ML can play a significant role in providing trustworthy outcomes. Let’s define both:

    • Explainability provides interpretations of a model’s predictions in a human-readable form, making highly technical concepts accessible to stakeholders from different backgrounds.
    • Uncertainty quantification deals with the spread of possible outcomes caused by uncertainties inherent in both the data (aleatoric uncertainty) and the model (epistemic uncertainty).
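The aleatoric/epistemic split above is often computed from an ensemble of predictions (e.g. MC-dropout passes or a deep ensemble). A minimal NumPy sketch, assuming softmax outputs from a hypothetical classifier ensemble, uses the standard mutual-information decomposition: total predictive entropy = mean member entropy (aleatoric) + member disagreement (epistemic).

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy of probability vectors along `axis`."""
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

def decompose_uncertainty(probs):
    """Split predictive uncertainty of an ensemble into parts.

    probs: array of shape (n_members, n_classes) with softmax outputs,
           e.g. from MC-dropout forward passes (assumed data).
    Returns (total, aleatoric, epistemic), where
    total = aleatoric + epistemic.
    """
    mean_p = probs.mean(axis=0)
    total = entropy(mean_p)                      # entropy of the mean prediction
    aleatoric = entropy(probs, axis=-1).mean()   # average per-member entropy
    epistemic = total - aleatoric                # disagreement between members
    return total, aleatoric, epistemic

# Toy inputs: agreeing members -> epistemic ~ 0; disagreement raises it.
agree = np.array([[0.7, 0.3], [0.7, 0.3]])
disagree = np.array([[0.9, 0.1], [0.1, 0.9]])
```

Here high epistemic uncertainty flags inputs the model has not learned well (more data or a better model helps), while high aleatoric uncertainty flags inherently noisy observations.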

Work Program

WP1: Uncertainty Quantification
WP2: Uncertainty Calibration
WP3 & WP4: Decomposition of uncertainty sources
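For WP2, a common way to measure miscalibration is the expected calibration error (ECE): the confidence range is split into bins, and the gap between average confidence and empirical accuracy is averaged over bins, weighted by bin occupancy. A minimal sketch with hypothetical inputs (the binning scheme and data are illustrative, not the project's actual method):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: occupancy-weighted gap between mean confidence and
    empirical accuracy over equally spaced confidence bins.

    confidences: array of predicted-class probabilities in (0, 1].
    correct: binary array, 1 where the prediction was right.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Well-calibrated toy case: 80% confidence, 4 of 5 correct -> ECE = 0.
calib = expected_calibration_error(np.full(5, 0.8),
                                   np.array([1, 1, 1, 1, 0]))
# Overconfident toy case: 90% confidence, all wrong -> ECE = 0.9.
overconf = expected_calibration_error(np.full(4, 0.9),
                                      np.zeros(4))
```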

Relation to other themes

IP5 contributes to and is closely linked with the other themes of AID4Crops. It will provide uncertainty estimates for Theme 1, leading to better models and data. Furthermore, for tactical decisions, it will deliver outcomes accompanied by uncertainty estimates to guide policymakers.