After a week spent in Brussels and at the 25th anniversary of the Center for Internet Research in Aarhus, I’ve now arrived in Hamburg for the inaugural Search Engines and Society (SEASON) 2025 conference, which begins with a keynote by the great Matthias Spielkamp, the founder of German NGO AlgorithmWatch, who is also a partner in our ARC Centre of Excellence for Automated Decision-Making and Society. His keynote reflects on the past ten years of AlgorithmWatch’s efforts to promote algorithmic accountability.
AlgorithmWatch is a non-profit NGO based in Berlin and Zürich, seeking to ensure that algorithms serve to strengthen justice, human rights, democracy, and sustainability – so it is not inherently against algorithms and other technological systems, but rather for their prosocial and prodemocratic deployment. Most recently, of course, this has also increasingly focussed on the rise of generative artificial intelligence (GenAI).
AW works by gathering empirical evidence on the operation of algorithms in and on society, and uses this evidence to develop recommendations for their governance. This means that AW actively argues for policy change, and in that sense is also an advocacy organisation; in this, it seeks to serve the common good, unlike industry lobbying groups. This work is funded by a wide variety of institutional funders, including various private foundations and German, Swiss, and EU government organisations, as well as by financial support from its general membership.
All of this was inspired by a report by Nick Diakopoulos on “Algorithmic Accountability Reporting”, published through the Tow Center, which called for more work on deciphering the contours of algorithmic power as it impacts on society; this requires investigative work that may be more journalistic than academic in both speed and style, but is also difficult for conventional journalism to do as its own precarity has grown. Rather, then, there was a space here for a philanthropically funded organisation that would use the tools of investigative data journalism to generate societal impacts.
AW started by doing unfunded pro bono work at first, though, and went public at the 2016 Re:publica conference with its Automated Decision-Making Manifesto. This highlighted the black-box nature of ADM systems, and the need to crack open those black boxes to ensure greater accountability. A first project to do so, in 2017, supported by several German state media authorities, drew on data donations from ordinary users to investigate search results during the German federal election: a browser plugin installed by participants would automatically search for a range of topics every few hours, and report the results back to AlgorithmWatch. This became widely known through a media partnership with Der Spiegel.
In the end, this produced some 8 million data records from 1,500 participants, showing very little variation in the results obtained by different participants. It was limited, though, by its set of very generic search terms that may not have reflected how real users would search, and by the limited demographic diversity of its participants. In extension of that project, it would have been desirable to establish a permanent, diverse, and representative panel of users to continue to track results patterns.
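The basic mechanics of such a data-donation plugin can be sketched roughly as follows; this is a purely hypothetical illustration, not AlgorithmWatch’s actual code, and the search terms, payload format, and reporting logic are all invented for the example:

```python
import json
from datetime import datetime, timezone

# Hypothetical generic search terms (the actual study's term list
# is not reproduced here).
SEARCH_TERMS = ["Angela Merkel", "CDU", "SPD", "Bundestagswahl"]

def collect_results(term, search_fn):
    """Run one search and package the ranked results for donation."""
    results = search_fn(term)  # e.g. scrape the user's own result page
    return {
        "term": term,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "results": results,  # ranked list of result URLs
    }

def donation_payload(search_fn):
    """Build one batch of donated search results across all terms."""
    return [collect_results(term, search_fn) for term in SEARCH_TERMS]

if __name__ == "__main__":
    # In the real plugin, a scheduler would run this every few hours
    # and send the JSON payload to the project's collection server;
    # here we stand in a fake search function for demonstration.
    fake_search = lambda term: [f"https://example.org/{term}/result-1"]
    print(json.dumps(donation_payload(fake_search), indent=2))
```

The key design point is that the searches run in the participant’s own browser session, so the donated results reflect whatever personalisation the search engine applies to that user.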
Such data donations approaches would not be necessary, though, if there was a change in applicable laws to enforce better access to data on and from automated decision-making systems. AW was among the early organisations calling for such legally binding data access frameworks at German and European levels, and initiated a Governing Platforms Project to explore possibilities in this context. This resulted in a policy brief that called for Putting Meaningful Transparency at the Heart of the Digital Services Act (DSA), which was then being developed by EU policy-makers. Many of these proposals were in fact adopted by relevant EU leaders.
In the absence of such laws, data donation work also continued in parallel, though – one such project showed how the Instagram algorithm favoured photos of more scantily clad users, thereby shaping the behaviours of its users towards particular forms of self-representation and self-sexualisation; it also investigated how different types of posts by political actors performed. Facebook, then the parent company of Instagram, predictably responded by denying the patterns uncovered by AlgorithmWatch.
Further, in 2021 AW, in collaboration with Süddeutsche Zeitung, sought to examine the ranking of political posts on Facebook through a data donation browser plugin; Facebook soon responded with a cease-and-desist request, threatening legal action in case of non-compliance. AW felt unable to push back and discontinued the project, but it had already collected the data it needed, and went public with an open letter about the company’s suppression of such independent public-interest research. That open letter was signed by more than 6,000 supporters, and also affected the further development of DSA policies at the European level. As the DSA came into effect, AlgorithmWatch also immediately explored the operation of its Article 40, governing data access requests; this showed the considerable bureaucratic hurdles still in place for this process.
Support structures for independent data gathering for prosocial purposes are still very much needed, therefore; this will continue to include data donation approaches, too, since even under the best of conditions DSA-governed data access will remain limited. Research consortia alone cannot solve this, either, given their complexities and potentially diverging interests. Beyond such direct data work, it also remains critical to monitor the deployment of automated decision-making systems across governmental, commercial, and societal institutions, shedding more light on their uses and impacts.
This has also involved the creation of a Tech Research team within AlgorithmWatch, which is intended to further collaborations with like-minded organisations such as AI Forensics, CASM technologies, or DFRLab. Such projects are designed to have shorter project durations and more immediate impact.
For instance, through such collaborations the AW team already systematically prompted various AI chatbots for information about German state elections in 2024, examining the quality of the information provided; this is covered in the Digging Deeper into Generative AI and Elections report, was featured in news reports, and seemed to result in a measurable and immediate change in how Microsoft Copilot responded to such queries – mostly by blocking such responses altogether. But why does it need an organisation like AlgorithmWatch to produce such change; why can companies like Microsoft not be more transparently proactive about this?
There is a certain reframing of AlgorithmWatch’s approach here, from its own data audits towards arguing for more reliable data access. One outcome of this change is a very recent legal complaint filed by AlgorithmWatch and various other partners (including media industry and journalist associations at German and EU levels) against Google over its rollout of AI Overviews. This focusses on the impact of AI Overviews on click-throughs to news publishers’ Websites: it does not argue against AIOs as such, but notes the lack of a full assessment of their risks and consequences.
Any such arguments for accountability and corporate social responsibility are now also affected by the turn towards authoritarianism, especially in the US where so many leading technology companies are based; US industry and government representatives are now falsely pushing back against the EU DSA and other laws and policies as ‘censorship’, and some such rhetoric is also being adopted by European and German politicians and parties. Funding for NGOs operating in this space is also coming under pressure as a result. This is deeply concerning, and there is a significant need both to push back against this and diversify the funding sources for such civil society organisations.