© 2025 SURF
This booklet, Sensing Streetscapes: Perspectives on Densification, brings together interviews with eleven design firms, three senior municipal urban designers, the former Rijksadviseur, and five leading international academic pioneers of neuro-architecture, as well as an exploration of nine remarkable Chinese housing projects. The publication is part of the two-year research project Sensing Streetscapes, in which researchers from the Amsterdam University of Applied Sciences (HvA) collaborate with practitioners and international research groups to unpack the concept of the human scale in spatial design. A draft version of the booklet was presented at the concluding seminar and excursion on 28 May 2021. Insights from that final symposium and excursion have been incorporated into this publication, together with a top-10 list of lessons for densification with a human scale at eye level.
Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains in which XAI was evaluated with users. These studies assessed what makes an explanation "good" from a user's perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.
This paper introduces the design principle of legibility as a means to examine the epistemic and ethical conditions of sensing technologies. Emerging sensing technologies create new possibilities regarding what to measure, as well as how to analyze, interpret, and communicate those measurements. In doing so, they create ethical challenges for designers to navigate, specifically how the interpretation and communication of complex data affect moral values such as (user) autonomy. Contemporary sensing technologies require layers of mediation and exposition to render what they sense intelligible and constructive to the end user, which is a value-laden design act. Legibility is positioned as both an evaluative lens and a design criterion, making it complementary to existing frameworks such as value sensitive design. To concretize the notion of legibility, and to understand how it could be utilized in both evaluative and anticipatory contexts, the paper analyzes the case study of a sensor-embedded vest and an accompanying app for patients with chronic obstructive pulmonary disease.