HUMAN-CENTRED COUNTERFACTUAL EXPLANATIONS FOR EXPLAINABLE ARTIFICIAL INTELLIGENCE

SUFFIAN, MUHAMMAD
2024

Abstract

Explainable AI (XAI) research seeks to enhance the transparency and interpretability of Artificial Intelligence (AI) systems by elucidating their decision-making processes. However, existing XAI methods often fall short of user requirements, producing overly complex explanations and mismatches between what users expect and what they receive. Human-centred XAI (HC-XAI) addresses these challenges by designing explanations that are actionable, user-friendly, and customizable to individual preferences. This dissertation proposes a human-centred approach to XAI that bridges the gap between technical advances and practical usability, conceptualizing and implementing an XAI framework that provides transparent, understandable explanations to users. Existing algorithms typically search the entire feature space for counterfactuals (i.e., alternatives to the observed facts) and optimize the changes needed to reverse an undesired outcome, overlooking which features actually drive that outcome and whether the suggested changes are practical. To overcome these limitations and strengthen confidence in the explanations provided, this dissertation presents a new approach for generating user feedback-based counterfactual explanations (UFCE). UFCE incorporates user constraints to determine minimal modifications within actionable feature subsets while accounting for feature dependence. The method is model-agnostic, targets tabular datasets, and is designed around human needs, giving end users the opportunity to customize explanations of machine learning predictions; it is released as open-source software to promote open science. To evaluate the effectiveness of the approach, a novel interactive web-based game is implemented that lets users engage with the XAI system and receive explanations for machine learning decision-making tasks. Assessments of task performance, usability, and user satisfaction provide comprehensive insights into the practical applicability of UFCE, the new HC-XAI approach, in real-world decision-making scenarios. Through this research, we aim to foster greater acceptance and use of AI technologies across diverse domains.
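To illustrate the idea the abstract describes, the following minimal Python sketch searches for a counterfactual only within a user-specified actionable feature subset and user-provided value bounds, preferring the smallest modification that flips a binary prediction. This is a hypothetical, simplified sketch, not the dissertation's UFCE implementation: the names find_counterfactual, actionable, and bounds are assumptions, the search is an exhaustive grid rather than an optimized strategy, and it ignores the feature dependence that UFCE accounts for.

```python
# Hypothetical sketch of a user-constrained counterfactual search
# (illustrative only; not the dissertation's actual UFCE algorithm).
from itertools import product

import numpy as np
from sklearn.linear_model import LogisticRegression


def find_counterfactual(model, x, actionable, bounds, steps=20):
    """Search the user-constrained subspace for the smallest change to `x`
    (restricted to `actionable` feature indices, within `bounds`) that flips
    the model's binary prediction. Model-agnostic: only `predict` is called."""
    target = 1 - model.predict(x.reshape(1, -1))[0]   # desired (flipped) outcome
    grids = [np.linspace(*bounds[f], steps) for f in actionable]
    best, best_cost = None, np.inf
    for values in product(*grids):                    # exhaustive grid search
        cand = x.copy()
        for f, v in zip(actionable, values):
            cand[f] = v
        if model.predict(cand.reshape(1, -1))[0] == target:
            cost = np.abs(cand - x).sum()             # L1 size of the modification
            if cost < best_cost:
                best, best_cost = cand, cost
    return best


# Toy usage on a synthetic binary task: only feature 0 is actionable,
# and the user allows it to vary between -1 and 3.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)
x = np.array([-1.0, -0.5])                            # instance with an undesired outcome
cf = find_counterfactual(model, x, actionable=[0], bounds={0: (-1.0, 3.0)})
print("factual:", x, "counterfactual:", cf)
```

Restricting the search to the user's actionable subset and bounds is what keeps the suggested change practical; a full method would also, as the abstract notes of UFCE, model dependencies between features so that a change to one feature propagates consistently to related ones.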
15 July 2024
Files in this item:
Definitive _PhD Dissertation Human-centred XAI Muhammad_Suffian PDFA.pdf (Adobe PDF, 7.58 MB); description: Thesis; type: DT; license: not public; under embargo until 15 July 2025.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11576/2739732