Artificial intelligence-driven technology increasingly shapes work practices and, accordingly, employees' opportunities for meaningful work (MW). In our paper, we identify five dimensions of MW: pursuing a purpose, social relationships, exercising skills and self-development, autonomy, and self-esteem and recognition. Because MW is an important good, lacking opportunities for MW is a serious disadvantage. We therefore need to know to what extent employers have a duty to provide this good to their employees. We hold that employers have a duty of beneficence to design for opportunities for MW when implementing AI technology in the workplace. We argue that this duty of beneficence is supported by the three major ethical theories, namely Kantian ethics, consequentialism, and virtue ethics. We defend this duty against two objections, including the view that it is incompatible with the shareholder theory of the firm. We then employ the five dimensions of MW as our analytical lens to investigate how AI-based technological innovation in logistics warehouses affects MW, both positively and negatively, and illustrate that designing for MW is feasible. We further support this practical feasibility with insights from organizational psychology. We end by discussing how AI-based technology affects both meaningful work (often seen as an aspirational goal) and decent work (generally seen as a matter of justice). Accordingly, ethical reflection on meaningful and decent work should become more integrated, to do justice to how AI technology inevitably shapes both simultaneously.
Through a qualitative examination, this study investigates the moral evaluations of Dutch care professionals regarding healthcare robots for eldercare, in terms of biomedical ethical principles and non-utility. Results showed that care professionals primarily focused on maleficence (potential harm done by the robot), stemming from diminished human contact. Worries about potential maleficence were more pronounced among intermediately educated than among higher-educated professionals. However, both groups deemed companion robots more beneficial than devices that monitor and assist, which they considered potentially harmful both physically and psychologically. Perceived utility was not related to the professionals' moral stances, countering prevailing views. Increasing patients' autonomy by applying robot care was not part of the discussion, and justice as a moral evaluation was rarely mentioned. Awareness of the care professionals' point of view is important for policymakers, educational institutes, and developers of healthcare robots, so that designs can be tailored to the wants of older adults along with the needs of the much-undervalued eldercare professionals.
This document presents the findings of a study into methods that can help counterterrorism professionals make decisions about ethical problems. The study was commissioned by the Research and Documentation Centre (Wetenschappelijk Onderzoek- en Documentatiecentrum, WODC) of the Dutch Ministry of Security and Justice (Ministerie van Veiligheid en Justitie), on behalf of the National Coordinator for Counterterrorism and Security (Nationaal Coördinator Terrorismebestrijding en Veiligheid, NCTV). The research team at RAND Europe was complemented by applied ethics expert Anke van Gorp from the Research Centre for Social Innovation (Kenniscentrum Sociale Innovatie) at Hogeschool Utrecht. The study provides an inventory of methods to support ethical decision-making in counterterrorism, drawing on the experience of other public sectors – healthcare, social work, policing and intelligence – and multiple countries, primarily the Netherlands and the United Kingdom.