
Investigating Human-Centered Perspectives in Explainable Artificial Intelligence

Muhammad Suffian (Methodology); Alessandro Bogliolo (Supervision)

2023

Abstract

The widespread use of Artificial Intelligence (AI) across domains has led to a growing demand for algorithmic understanding, transparency, and trustworthiness. The field of eXplainable AI (XAI) aims to develop techniques that inspect and explain the behaviour of AI systems in a way that is understandable to humans. However, the effectiveness of explanations depends on how users perceive them, and their acceptability is tied to users' level of understanding and compatibility with their existing knowledge. So far, researchers in XAI have primarily focused on the technical aspects of explanations, largely without considering users' needs, yet attention to these needs is essential for trustworthy AI. Meanwhile, there is growing interest in human-centered approaches at the intersection of AI and human-computer interaction, termed human-centered XAI (HC-XAI). HC-XAI explores methods to achieve user satisfaction, trust, and acceptance of XAI systems. This paper presents a systematic survey of HC-XAI, reviewing 75 papers from various digital libraries. The contributions of this paper include: (1) identifying common human-centered approaches, (2) providing readers with insights into the design perspectives of HC-XAI approaches, and (3) categorising all the papers under study through quantitative and qualitative analysis. The findings stimulate discussion and shed light on ongoing and upcoming research in HC-XAI.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11576/2726372