Can you remember the last time the ground gave way beneath you? When you thought the ground was stable, but for some reason it wasn’t? Perhaps you encountered a pothole on the streets of Amsterdam, or you were renovating your house and broke through the floor. Perhaps there was a molehill in a park or garden. You probably had to hold on to something to steady yourself. Perhaps you even slipped or fell. While I sincerely hope that nobody here was hurt in the process, I would like you to keep that feeling in mind when reading what follows, because it is the central theme of this chapter. The ground beneath our feet today is not as stable as the streets of Amsterdam, the park around the corner or even a poorly renovated upstairs bedroom. This is because, whatever devices we use and whatever pathways we choose, we all live in hybrid physical and digital social spaces (Kitchin and Dodge 2011). Digital social spaces include social media platforms like Twitter or Facebook, but also chat apps like WhatsApp or Signal. Crucially, these social spaces are increasingly hybrid: conversations take place across digital spaces (a WhatsApp chat group) and physical spaces (meeting friends in a café) simultaneously. The ground beneath our feet is not made of concrete or stone or wood but of bits and bytes.
What you don’t know can’t hurt you: this seems to be the current approach of public regulators across the world when responding to disinformation. Nobody is able to say with any degree of certainty what is actually going on. This is in no small part because, at present, public regulators don’t have the slightest idea how disinformation actually works in practice. We believe that there are very good reasons for this state of affairs, which stem from a lack of verifiable data available to public institutions. If an election board or a media regulator wants to know what types of digital content are being shared in its jurisdiction, it has no effective mechanism for finding this data or ensuring its veracity. While there are many other reasons why governments would want access to this kind of data, disinformation provides a particularly salient example of the consequences of that lack of access for free and fair elections and informed democratic participation.

This chapter provides an overview of the main problems associated with basing public regulatory decisions on unverified data, before sketching out some ideas of what a solution might look like. To do this, the chapter develops the concept of auditing intermediaries. After setting out which problems auditing intermediaries are designed to solve, it discusses some of the main challenges associated with access to data, the potential misuse of intermediaries, and the general lack of standards for the provision of data by large online platforms. In conclusion, the chapter suggests that there is an urgent need for an auditing mechanism to ensure the accuracy of the transparency data that large online platform providers supply about the content on their services. Transparency data that have been audited would, in this context, be considered verified data. Without such a transparency verification mechanism, existing public debate rests on little more than guesswork, and digital dominance is likely only to become more pronounced.