© 2025 SURF
In this post I give an overview of the theory, tools, frameworks and best practices I have found so far for testing (and debugging) machine learning applications. I will start with an overview of what makes testing machine learning applications different.
This paper introduces and contextualises Climate Futures, an experiment in which AI was repurposed as a ‘co-author’ of climate stories and a co-designer of climate-related images that facilitate reflections on present and future(s) of living with climate change. It converses with histories of writing and computation, including surrealistic ‘algorithmic writing’, recombinatory poems and ‘electronic literature’. At the core lies a reflection about how machine learning’s associative, predictive and regenerative capacities can be employed for playful, critical and contemplative purposes. Our goal is not automating writing (as in product-oriented applications of AI). Instead, as poet Charles Hartman argues, ‘the question isn’t exactly whether a poet or a computer writes the poem, but what kinds of collaboration might be interesting’ (1996, p. 5). STS scholars critique labs as future-making sites and machine learning modelling practices, describing the latter, for example, as fictions. Building on these critiques and in line with ‘critical technical practice’ (Agre, 1997), we embed our critique of ‘making the future’ in how we employ machine learning to design a tool for looking ahead and telling stories about life with climate change. This has involved engaging with climate narratives and machine learning from the critical and practical perspectives of artistic research. We trained machine learning algorithms (namely GPT-2 and AttnGAN) on climate fiction novels (as a dataset of cultural imaginaries of the future). We prompted them to produce new climate fiction stories and images, which we edited to create a tarot-like deck and a story-book, thus also playfully engaging with machine learning’s predictive associations. The tarot deck is designed to facilitate conversations about climate change. How to imagine the future beyond scenarios of resilience and the dystopian? How to aid our transition into different ways of caring for the planet and each other?
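The recombinatory idea behind this workflow (train a generative model on a corpus, then ‘prompt’ it with a seed to produce new text) can be illustrated with a deliberately tiny stand-in. The sketch below is NOT the authors’ GPT-2/AttnGAN pipeline; it is a toy word-level Markov chain over an invented miniature corpus, shown only to make the train-then-prompt pattern concrete:

```python
# Toy illustration of corpus-trained, prompt-driven text generation.
# This is a word-level Markov chain, not GPT-2; the corpus below is an
# invented stand-in (the project used climate fiction novels).
import random
from collections import defaultdict

def train(corpus_words):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    for a, b in zip(corpus_words, corpus_words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, seed, length=8, rng=None):
    """'Prompt' the chain with a seed word and sample successors at random."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Hypothetical miniature corpus standing in for the novels dataset.
corpus = "the sea rose and the city learned to live with the sea".split()
chain = train(corpus)
print(generate(chain, "the"))
```

A large language model replaces the lookup table with learned probabilities, but the interaction pattern the paper describes, seeding the model and curating its recombined output, is the same.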
Artificial intelligence (AI) is a technology increasingly used in society and the economy worldwide, but there is much disquiet over problematic and dangerous implementations of AI, or even over AI systems themselves taking dangerous and problematic actions. These developments have led to concerns about whether and how AI systems currently adhere, and will adhere, to ethical standards, stimulating a global and multistakeholder conversation on AI ethics and the production of AI governance initiatives. Such developments form the basis for this chapter, where we give an insight into what is happening in Australia, China, the European Union, India and the United States. We commence with some background to the AI ethics and regulation debates, before proceeding to give an overview of what is happening in different countries and regions, namely Australia, China, the European Union (including national-level activities in Germany), India and the United States. We provide an analysis of these country profiles, with particular emphasis on the relationship between ethics and law in each location. Overall, we find that AI governance and ethics initiatives are most developed in China and the European Union, but the United States has been catching up in the last eighteen months.
Smart city technologies, including artificial intelligence and computer vision, promise to bring a higher quality of life and more efficiently managed cities. However, developers, designers, and professionals working in urban management have started to realize that implementing these technologies poses numerous ethical challenges. Policy papers now call for human and public values in tech development, ethics guidelines for trustworthy AI, and cities for digital rights. In a democratic society, these technologies should be understandable for citizens (transparency) and open for scrutiny and critique (accountability). When implementing such public values in smart city technologies, professionals face numerous knowledge gaps. Public administrators find it difficult to translate abstract values like transparency into concrete specifications for designing new services. In the private sector, developers and designers still lack a ‘design vocabulary’ and exemplary projects that can inspire them to respond to transparency and accountability demands. Finally, both the public and private sectors see a need to include the public in the development of smart city technologies but have not yet found the right methods. This proposal aims to help these professionals develop an integrated, value-based and multi-stakeholder design approach for the ethical implementation of smart city technologies. It does so by setting up a research-through-design trajectory to develop a prototype for an ethical ‘scan car’, as a concrete and urgent example of the deployment of computer vision and algorithmic governance in public space. Three (practical) knowledge gaps will be addressed. With civil servants at municipalities, we will create methods enabling them to translate public values such as transparency into concrete specifications and evaluation criteria. With designers, we will explore methods and patterns to answer these value-based requirements.
Finally, we will further develop methods to engage civil society in this process.