Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains in which XAI was evaluated with users. These studies assessed what makes an explanation "good" from a user's perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.
This research reviews the current literature on the impact of Artificial Intelligence (AI) on the operation of autonomous Unmanned Aerial Vehicles (UAVs). The paper examines three key aspects in developing the future of Unmanned Aircraft Systems (UAS) and UAV operations: (i) design, (ii) human factors, and (iii) operation process. The use of widely accepted frameworks such as the Human Factors Analysis and Classification System (HFACS) and the Observe–Orient–Decide–Act (OODA) loop is discussed. The review found that as autonomy increases, operator cognitive workload decreases and situation awareness improves, but it also found a corresponding decline in operator vigilance and an increase in trust in the AI system. These results provide valuable insights and opportunities for improving the safety and efficiency of autonomous UAVs in the future and suggest the need to include human factors in the development process.
The Dutch healthcare system suffers from a 'care gap': it is unable to deliver adequate healthcare services to a growing number of clients. AI systems are being designed to remedy these problems, but to gain social acceptance these innovations must be realized with a human-centric approach. This research focuses on the impact of introducing social AI agents into a healthcare social network.