Towards XMAS: eXplainable and trustworthy Multi-Agent Systems

Ciatto, Giovanni (University of Bologna, Italy) ; Calegari, Roberta (University of Bologna, Italy) ; Omicini, Andrea (University of Bologna, Italy) ; Calvaresi, Davide (University of Applied Sciences and Arts Western Switzerland (HES-SO Valais-Wallis))

In the context of the Internet of Things (IoT), intelligent systems (IS) are increasingly relying on Machine Learning (ML) techniques. Given the opaqueness of most ML techniques, however, humans have to rely on their intuition to fully understand the IS outcomes: helping them is the target of eXplainable Artificial Intelligence (XAI). Current solutions (mostly too specific, and simply aimed at making ML easier to interpret) cannot satisfy the needs of IoT, characterised by heterogeneous stimuli, devices, and data types concurring in the composition of complex information structures. Moreover, Multi-Agent Systems (MAS) achievements and advancements are most often ignored, even when they could bring about key features like explainability and trustworthiness. Accordingly, in this paper we (i) elicit and discuss the most significant issues affecting modern IS, and (ii) devise the main elements and related interconnections paving the way towards reconciling interpretable and explainable IS using MAS.


Conference Type:
published full paper
Faculty:
Economie et Services
School:
HEG-VS
Institute:
Institut Informatique de gestion
Subject(s):
Informatique
Date:
22 November 2019
Place:
Rende, Italy
Pagination:
Pp. 40-53
Published in:
Proceedings of the 1st Workshop on Artificial Intelligence and Internet of Things, co-located with the 18th International Conference of the Italian Association for Artificial Intelligence (AIxIA 2019)
