Abstract

Widely used in a growing number of domains, Deep Learning predictors are achieving remarkable results. However, the lack of transparency (i.e., opacity) of their inner mechanisms has raised concerns about their trustworthiness and applicability. In response, several approaches fostering model interpretability and explainability have been developed over the last decade. This paper combines an approach for local feature explanation (Contextual Importance and Utility, CIU) and one for global feature explanation (Explainable Layers) with a rule extraction system, ECLAIRE. The proposed pipeline has been tested in four scenarios on a breast cancer diagnosis dataset. The results show improvements such as the production of more human-interpretable rules and closer adherence of the extracted rules to the original model.
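
The sketch below is a rough, hypothetical illustration of the kind of pipeline the abstract describes, not the paper's actual implementation: it trains an opaque model on scikit-learn's breast cancer dataset, uses permutation importance as a stand-in for the CIU / Explainable Layers feature-explanation step, and fits a shallow surrogate decision tree as a stand-in for ECLAIRE's rule extraction, finishing with a fidelity score analogous to the adherence measure mentioned above.

```python
# Hypothetical sketch: opaque model -> feature-importance-guided selection
# -> rule extraction -> fidelity check. Permutation importance and a
# surrogate decision tree are used here only as stand-ins for CIU /
# Explainable Layers and ECLAIRE, which have dedicated implementations.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# 1. Opaque predictor on the breast cancer diagnosis data.
data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)

# 2. Global feature explanation (stand-in for Explainable Layers / CIU):
#    keep only the most influential features and retrain.
imp = permutation_importance(mlp, X_te, y_te, n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:10]
mlp_sel = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
mlp_sel.fit(X_tr[:, top], y_tr)

# 3. Rule extraction (stand-in for ECLAIRE): fit a shallow surrogate tree
#    on the opaque model's own predictions to obtain readable rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_tr[:, top], mlp_sel.predict(X_tr[:, top]))
print(export_text(surrogate, feature_names=list(feature_names[top])))

# 4. Fidelity: how closely the extracted rules adhere to the original model.
fidelity = accuracy_score(mlp_sel.predict(X_te[:, top]),
                          surrogate.predict(X_te[:, top]))
print(f"Rule-model fidelity: {fidelity:.3f}")
```
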
