© 2025 SURF
BACKGROUND: Older adults want to preserve their health and autonomy and stay in their own home environment for as long as possible. This is also of interest to policy makers who try to cope with growing staff shortages and increasing health care expenses. Ambient assisted living (AAL) technologies can support the desire for independence and aging in place. However, the implementation of these technologies is much slower than expected. This has been attributed to the lack of focus on user acceptance and user needs.
OBJECTIVE: The aim of this study is to develop a theoretically grounded understanding of the acceptance of AAL technologies among older adults and to compare the relative importance of different acceptance factors.
METHODS: A conceptual model of AAL acceptance was developed using the theory of planned behavior as a theoretical starting point. A web-based survey of 1296 older adults was conducted in the Netherlands to validate the theoretical model. Structural equation modeling was used to analyze the hypothesized relationships.
RESULTS: Our conceptual model showed a good fit with the observed data (root mean square error of approximation 0.04; standardized root mean square residual 0.06; comparative fit index 0.93; Tucker-Lewis index 0.92) and explained 69% of the variance in intention to use. All but 2 of the hypothesized paths were significant at the P<.001 level. Overall, older adults were relatively open to the idea of using AAL technologies in the future (mean 3.34, SD 0.73).
CONCLUSIONS: This study contributes to a more user-centered and theoretically grounded discourse in AAL research. Understanding the underlying behavioral, normative, and control beliefs that contribute to the decision to use or reject AAL technologies helps developers to make informed design decisions based on users' needs and concerns. These insights on acceptance factors can be valuable for the broader field of eHealth development and implementation.
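The fit index reported above, the root mean square error of approximation (RMSEA), is conventionally computed from the model chi-square, its degrees of freedom, and the sample size. The sketch below illustrates that calculation using the study's sample size (N = 1296); the chi-square and degrees-of-freedom values are hypothetical, chosen only to show how an RMSEA near the reported 0.04 would arise, since the abstract does not report them.

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean square error of approximation.

    Uses the common formula sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    Note: some software (e.g. recent lavaan defaults) divides by n
    rather than n - 1; the difference is negligible for large samples.
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical model chi-square and degrees of freedom (illustrative only);
# N = 1296 is the sample size reported in the study.
value = rmsea(chi2=614.4, df=200, n=1296)
print(round(value, 2))  # close to the reported RMSEA of 0.04
```

Values at or below roughly 0.06 are commonly read as indicating good approximate fit, which is consistent with the interpretation given in the abstract.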
The subject of this textbook is a methodical approach to the complex problem-solving process of conceptual structural design. The approach leads to a controlled build-up of insight into the behaviour of the structure and supports the successive design decisions made during the conceptual design phase on the basis of a coherent set of solution components.
Artificial Intelligence (AI) offers organizations unprecedented opportunities. However, one of the risks of using AI is that its outcomes and inner workings are not intelligible. In industries where trust is critical, such as healthcare and finance, explainable AI (XAI) is a necessity. However, the implementation of XAI is not straightforward, as it requires addressing both technical and social aspects. Previous studies on XAI primarily focused on either technical or social aspects and lacked a practical perspective. This study aims to empirically examine the XAI-related aspects faced by developers, users, and managers of AI systems during the development process of the AI system. To this end, a multiple case study was conducted in two Dutch financial services companies using four use cases. Our findings reveal a wide range of aspects that must be considered during XAI implementation, which we grouped and integrated into a conceptual model. This model helps practitioners to make informed decisions when developing XAI. We argue that the diversity of aspects to consider necessitates an XAI "by design" approach, especially in high-risk use cases in industries where the stakes are high, such as finance, public services, and healthcare. As such, the conceptual model offers a taxonomy for the method engineering of XAI-related methods, techniques, and tools.