People tend to be hesitant toward algorithmic tools, and this aversion potentially affects whether innovations in artificial intelligence (AI) can be implemented effectively. Explanatory mechanisms for aversion are based on individual or structural issues but often lack reflection on real-world contexts. Our study addresses this gap through a mixed-method approach, analyzing seven cases of AI deployment and their public reception on social media and in news articles. Using the Contextual Integrity framework, we argue that most often it is not the AI technology itself that is perceived as problematic, but that shortcomings in the surrounding processes, in transparency, consent, and individuals' ability to exert influence, raise aversion. Future research into aversion should acknowledge that technologies cannot be extricated from their contexts if it aims to understand public perceptions of AI innovation.
Recommenders play a significant role in our daily lives, making decisions for users on a regular basis. Their widespread adoption necessitates a thorough examination of how users interact with recommenders and the algorithms that drive them. An important form of interaction in these systems is algorithmic affordances: means that provide users with perceptible control over the algorithm by, for instance, providing context ('find a movie for this profile'), weighing criteria ('most important is the main actor'), or evaluating results ('loved this movie'). The assumption is that these algorithmic affordances impact interaction qualities such as transparency, trust, autonomy, and serendipity, and as a result they impact the user experience. Currently, the precise nature of the relation between algorithmic affordances, their specific implementations in the interface, interaction qualities, and user experience remains unclear. Subjects that will be discussed during the workshop therefore include, but are not limited to: the impact of algorithmic affordances and their implementations on interaction qualities; the balance between cognitive overload and transparency in recommender interfaces containing algorithmic affordances; and the reasons why research into these types of interfaces sometimes fails to cross the research-practice gap and does not land in design practice. As a potential solution, the workshop committee proposes a library of examples of algorithmic affordance design patterns and their implementations in recommender interfaces, enriched with academic research concerning their impact. The final part of the workshop will be dedicated to formulating guiding principles for such a library.
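To make the notion of algorithmic affordances concrete, a minimal TypeScript sketch is given below, modelling the three example affordances from the abstract (providing context, weighing criteria, and evaluating results) as explicit inputs to a recommender query. All type and function names are illustrative assumptions, not taken from the workshop or from any particular recommender system.

    // Minimal sketch (assumed names): the three affordances from the abstract
    // modelled as explicit, user-controlled inputs to a recommender query.

    interface ContextAffordance {
      profileId: string;                           // "find a movie for this profile"
    }

    type CriteriaWeights = Record<string, number>; // "most important is the main actor"

    interface ResultEvaluation {
      itemId: string;
      verdict: "loved" | "liked" | "disliked";     // "loved this movie"
    }

    interface RecommenderQuery {
      context: ContextAffordance;
      weights: CriteriaWeights;
      evaluations: ResultEvaluation[];             // past feedback steers future results
    }

    // Each interface control maps one-to-one onto a query field, which is what
    // makes the user's influence on the algorithm perceptible rather than hidden.
    function buildQuery(
      context: ContextAffordance,
      weights: CriteriaWeights,
      evaluations: ResultEvaluation[]
    ): RecommenderQuery {
      return { context, weights, evaluations };
    }

    // Hypothetical usage: a profile-scoped query weighted toward the main actor.
    const query = buildQuery(
      { profileId: "user-42" },
      { mainActor: 0.8, genre: 0.2 },
      [{ itemId: "movie-7", verdict: "loved" }]
    );

The design choice the sketch illustrates is that each affordance is a visible, user-owned field of the query rather than an implicit signal, which is what distinguishes algorithmic affordances from behind-the-scenes personalization.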
In this paper, we explore the design of web-based advice robots to enhance users' confidence in acting upon the provided advice. Drawing from research on algorithm acceptance and explainable AI, we hypothesise four design principles that may encourage interactivity and exploration, thus fostering users' confidence to act. Through a value-oriented prototype experiment and value-oriented semi-structured interviews, we tested these principles, confirming three of them and identifying an additional one. The four resulting principles appear to contribute to the values of agency and trust: (1) put the context questions and the resulting advice on one page and allow live, iterative exploration; (2) use action- or change-oriented questions to adjust the input parameters; (3) actively offer alternative scenarios based on counterfactuals; and (4) show all options instead of only the recommended one(s). Our study integrates the Design Science Research approach with a Value Sensitive Design approach.
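As a rough illustration of how the four principles might surface in an interface, the following TypeScript sketch models the page state of a hypothetical advice robot. The names and structure are our own assumptions for illustration, not the prototype described in the paper.

    // Hypothetical page-state sketch; names are assumptions, not the authors' code.

    interface AdviceOption {
      label: string;
      recommended: boolean;  // principle 4: list every option, flag the recommended one(s)
    }

    interface CounterfactualScenario {
      description: string;                       // principle 3: actively offered alternatives
      changedAnswers: Record<string, string>;    // the input changes that produce them
    }

    interface AdvicePageState {
      answers: Record<string, string>;           // principle 1: questions and advice share one page
      options: AdviceOption[];
      scenarios: CounterfactualScenario[];
    }

    // Principle 2: inputs are phrased as actions or changes the user can make,
    // and (principle 1) every change recomputes the advice in place, enabling
    // live, iterative exploration of its consequences.
    function updateAnswer(
      state: AdvicePageState,
      question: string,
      value: string,
      recompute: (answers: Record<string, string>) => AdviceOption[]
    ): AdvicePageState {
      const answers = { ...state.answers, [question]: value };
      return { ...state, answers, options: recompute(answers) };
    }

Keeping the answers and the recomputed options in one state object is one way to realize the live, single-page exploration the first principle calls for; how the actual prototype implements this is not specified in the abstract.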