A service of SURF
© 2025 SURF
Artificial intelligence (AI) is a technology that is increasingly being utilised in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or indeed over AI systems themselves taking dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere, and will adhere, to ethical standards, stimulating a global, multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, where we give an insight into what is happening in Australia, China, the European Union, India and the United States. We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in different countries and regions, namely Australia, China, the European Union (including national-level activities in Germany), India and the United States. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, but that the United States has been catching up in the last eighteen months.
From the article: The ethics guidelines put forward by the AI High-Level Expert Group (AI-HLEG) present a list of seven key requirements that human-centred, trustworthy AI systems should meet. These guidelines are useful for the evaluation of AI systems, but can be complemented by applied methods and tools for the development of trustworthy AI systems in practice. In this position paper we propose a framework for translating the AI-HLEG ethics guidelines into the specific context within which an AI system operates. This approach aligns well with a set of Agile principles commonly employed in software engineering. http://ceur-ws.org/Vol-2659/
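As an illustration only (the data structure and names below are our own sketch, not the framework proposed in the paper), the seven AI-HLEG requirements could be tracked per AI system as a simple checklist whose questions are tailored to the system's context and revisited each Agile iteration:

```python
from dataclasses import dataclass, field

# The seven key requirements from the AI-HLEG Ethics Guidelines for Trustworthy AI.
AI_HLEG_REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class RequirementAssessment:
    """Context-specific questions and status for one requirement."""
    requirement: str
    questions: list = field(default_factory=list)  # tailored to the system's operating context
    addressed: bool = False

def new_checklist() -> dict:
    """Start an empty per-requirement checklist for one AI system."""
    return {r: RequirementAssessment(requirement=r) for r in AI_HLEG_REQUIREMENTS}

# Example: a recommender system adds a transparency question for its own context.
checklist = new_checklist()
checklist["Transparency"].questions.append(
    "Can end users see why a recommendation was made?"
)
```

Such a checklist is deliberately lightweight: like an Agile backlog, it is meant to be filled in and re-assessed iteratively rather than completed once up front.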
The increasing use of AI in industry and society not only invites but demands that we build human-centred competencies into our AI education programmes. The computing education community needs to adapt, and while the adoption of standalone ethics modules into AI programmes, or the inclusion of ethical content into traditional applied AI modules, is progressing, it is not enough. To foster student competencies to create AI innovations that respect and support the protection of individual rights and society, a novel ground-up approach is needed. This panel presents one such approach, the development of a Human-Centred AI Masters (HCAIM), as well as the insights and lessons learned from the process. In particular, we discuss the design decisions that have led to the multi-institutional master's programme. Moreover, this panel allows for discussion on pedagogical and methodological approaches, content knowledge areas and the delivery of such a novel programme, along with the challenges faced, to inform and learn from other educators who are considering developing such programmes.
The project aims to improve palliative care in China through the competence development of Chinese teachers, professionals, and students, focusing on the horizontal priority of digital transformation. Palliative care (PC) has been recognised as a public health priority and has seen advances in several aspects during recent years. However, severe inequities in the access to and availability of PC worldwide remain. Annually, approximately 56.8 million people need palliative care, where 25.7% of the care focuses on the last year of a person's life (Connor, 2020). China has set aims for reaching the health care standards of developed countries by 2030 through the Healthy China Strategy 2030, where one of the improvement areas in health care is palliative care, thus continuing previous efforts. The project provides a constructive, holistic, and innovative set of actions aimed at lasting outcomes and the continued development of palliative care education and services. Raising the awareness of all stakeholders on palliative care, including the public, is highly relevant and needed. Evidence-based practice guidelines and education are urgently required at both general and specialised palliative care levels, to increase the competencies of health educators, professionals, and students. This is to improve the availability and quality of person-centred palliative care in China. Considering the ageing population, the increase in various chronic illnesses, the challenging care environment, and the moderate health care resources, competence development and the utilisation of digitalisation in palliative care are paramount in supporting the transition of experts into the palliative care practice environment. The general objective of the project is to enhance competences in palliative care in China through education and training, in order to improve the quality of life for citizens.
The project develops the competences of current and future health care professionals in China to transform palliative care theory and practice, so as to impact the target groups and society in the long term. As recognised by the European Association for Palliative Care (EAPC), palliative care competences need to be developed in collaboration. This includes a shared willingness to learn from each other to improve the sought outcomes in palliative care (EAPC 2019). Since all individuals have a right to health care, the project develops person-centred and culturally sensitive practices, taking into consideration ethics and social norms. As concepts around palliative care can focus on physical, psychological, social, or spiritual aspects of illness (WHO 2020), the project develops innovative pedagogy focusing on evidence-based practice, communication, and competence development utilising digital methods and tools. Concepts of reflection, values, and views are at the forefront to improve palliative care for the future. Important aspects of project development include health promotion, digital competences, and the digital health literacy skills of professionals, patients, and their caregivers. The project objective is tied to the principles of the European Commission's Digital Decade, which stresses the importance of placing people and their rights at the forefront of the digital transformation, while enhancing solidarity, inclusion, freedom of choice, and participation. In addition, the concepts of safety, security, empowerment, and the promotion of sustainable actions are valued.
(European Commission: Digital targets for 2030). Through the existing collaboration, the strategic focus areas of the partners, and the principles of the call, the PalcNet project consortium was formed by the following partners: JAMK University of Applied Sciences (JAMK), Ramon Llull University (URL), Hanze University of Applied Sciences (HUAS), Beijing Union Medical College Hospital (PUMCH), Guangzhou Health Science College (GHSC), Beihua University (BHU), and Harbin Medical University (HMU). As the project develops new knowledge, innovations, and practice through capacity building, the finalisation of the consortium considered the partners' development strategies regarding health care (especially palliative care) and their ability to create long-term impact, including the focus on enhancing higher education in line with the horizontal priority. In addition, the partners' expertise and geographical locations were considered important for facilitating the long-term impact of the results. The primary target groups of the project include the partner country's (China's) staff members, teachers, researchers, health care professionals, and bachelor-level students engaging in project implementation. The secondary target groups include those who will use the outputs and results and continue further development in palliative care beyond the lifetime of the project.
Artificial intelligence (AI) plays an increasingly important role in media organisations in the automatic creation, personalisation, distribution, and archiving of media content. This is accompanied by questions and concerns, in society and in the media sector itself, about the responsible use of AI. There are, for example, concerns about discrimination against certain groups through bias in algorithms, about increasing polarisation through the algorithmic spread of radical content and disinformation, and about privacy violations where data is handled in a non-transparent way. Many media organisations struggle with the question of how to deal responsibly with AI applications. Media organisations indicate that existing ethical instruments for responsible AI, such as the EU "Ethics Guidelines for Trustworthy AI" (European Commission, 2019) and the "AI Impact Assessment" (ECP, 2018), offer insufficient guidance for the design and deployment of responsible AI, because these instruments are not specifically tailored to the media domain. As a result, these ethical instruments are still hardly applied in the media sector, even though media organisations indicate that there is a need for them. The aim of this project is to support and guide media organisations in embedding responsible AI in their organisations and in designing, developing, and deploying responsible AI applications, by developing domain-specific ethical instruments. This is done on the basis of three practical cases put forward by media organisations: pluralistic recommender systems, inclusive speech recognition systems for the Dutch language, and collaborative production support systems. The development of the ethical instruments is carried out using a Research-through-Design approach, with multiple iterations of gathering information, analysing, prototyping, and testing.
The intended results of this practice-oriented research are: 1) new knowledge about designing responsible AI in media applications, 2) ethical instruments tailored to the media domain, and 3) change within the participating media organisations with regard to responsible AI, through close collaboration with practice partners in the research.
More and more organisations consider it important to develop 'ethically responsible' AI applications. But what exactly is ethically responsible? And how do you design AI systems that comply with ethical guidelines? In this cooperative game, you jointly design ethically responsible AI applications based on the ethical principles drawn up by the EU.