Abstract

Explainable AI (XAI) has emerged in recent years as a set of techniques and methodologies for interpreting and explaining machine learning (ML) predictors. Many initiatives have been proposed to date. Nevertheless, current research efforts mainly focus on methods tailored to specific ML tasks and algorithms, such as image classification and sentiment analysis. Moreover, explanation techniques are still embryonic, and they mainly target ML experts rather than heterogeneous end-users. Furthermore, existing solutions assume data to be centralised, homogeneous, and fully/continuously accessible: circumstances seldom found altogether in practice. Arguably, a system-wide perspective is currently missing. The project named “Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge” (Expectation) aims at overcoming such limitations. This manuscript presents the overall objectives and approach of the Expectation project, focusing on the theoretical and practical advancement of the state of the art of XAI towards the construction of personalised explanations despite the decentralisation and heterogeneity of knowledge, agents, and explainees (whether human or virtual). To tackle the challenges posed by personalisation, decentralisation, and heterogeneity, the project fruitfully combines abstractions, methods, and approaches from the multi-agent systems, knowledge extraction/injection, negotiation, argumentation, and symbolic reasoning communities.
