Abstract

Deep learning models with billions of parameters hold great potential for improving everyday activities, particularly for assisting diagnosis from medical images. The detection of plus disease in retinopathy of prematurity is one example: it lies at the boundary between two treatment strategies, where a difficult diagnostic decision can make the difference between the risk of blindness and complete recovery. Explaining the complex decision-making of these models is crucial before they are incorporated into clinical workflows. Such explanations help guarantee that several requirements are met, for example the absence of unwanted biases, model fairness and trustworthiness, and, most importantly, that the model's reasoning reflects clinical expectations. This chapter begins by clarifying the confusion around the taxonomy of interpretable AI and provides a rich literature review of explainability methods. To illustrate the use of our recently developed methods for explaining the predictions of convolutional networks, we apply concept attribution to retinopathy of prematurity. In collaboration with experts in ophthalmology, we define concepts that (1) are relevant to them for the diagnosis of plus disease and (2) can be extracted automatically from the images as visual features. We then evaluate how the predictions of the convolutional neural network (CNN) are influenced by these concepts. The results suggest that the decision-making process of the CNN aligns with that of the ophthalmologists.
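For readers unfamiliar with concept attribution, the sketch below illustrates a TCAV-style analysis rather than the chapter's exact pipeline; the tiny CNN, the chosen layer, and the placeholder images are hypothetical and do not correspond to the chapter's models or data. It learns a concept activation vector (CAV) separating activations of concept images (e.g., images showing vessel tortuosity) from random images, then reports the fraction of test inputs whose class logit increases along that direction.

```python
# Minimal TCAV-style concept-attribution sketch (illustrative only; the model,
# layer, and data below are hypothetical placeholders).
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class TinyCNN(nn.Module):
    """Stand-in for a plus-disease classifier; not the chapter's model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.head = nn.Linear(8 * 4 * 4, 2)  # e.g. {no plus disease, plus disease}

    def forward(self, x):
        a = self.features(x).flatten(1)      # layer activations used for the CAV
        return self.head(a), a

model = TinyCNN().eval()

def activations(images):
    with torch.no_grad():
        _, a = model(images)
    return a

# 1) Learn a concept activation vector: a linear direction separating activations
#    of concept examples from random (negative) examples at the chosen layer.
concept_imgs = torch.randn(32, 3, 64, 64)    # placeholder concept examples
random_imgs  = torch.randn(32, 3, 64, 64)    # placeholder negative examples
acts = torch.cat([activations(concept_imgs), activations(random_imgs)]).numpy()
labels = [1] * 32 + [0] * 32
cav = torch.tensor(
    LogisticRegression(max_iter=1000).fit(acts, labels).coef_[0],
    dtype=torch.float32)

# 2) Concept sensitivity: directional derivative of the target-class logit
#    along the CAV; the TCAV score is the fraction of inputs where it is positive.
def tcav_score(images, target_class=1):
    positive = 0
    for img in images:
        logits, a = model(img.unsqueeze(0))
        grad = torch.autograd.grad(logits[0, target_class], a)[0].squeeze(0)
        positive += int(torch.dot(grad, cav) > 0)
    return positive / len(images)

test_imgs = torch.randn(16, 3, 64, 64)       # placeholder test set
print(f"TCAV score for the concept: {tcav_score(test_imgs):.2f}")
```

In practice, the concept and negative image sets would be curated with ophthalmologists, and the resulting scores compared against CAVs trained on random splits to check statistical significance.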
