The final presenters in this session at the AoIR 2024 conference are Yarden Skop and Anna Schjøtt Hansen; their focus is on the third-party fact-checking network employed by Meta. This network operates via a Meta-provided online dashboard that highlights potentially problematic content; in practice, the dashboard's operation directs fact-checking away from political content spread by major political figures and towards other forms of content.
Many fact-checking organisations around the world now substantially rely on income from Meta through their engagement in its fact-checking programme; this is part of a global post-publication debunking turn, but also creates a dependency on Meta funding, of course. Meta claims that debunking reduces the reach of problematic posts, but provides no externally validatable data on this claim.
This process can be understood as an assemblage between human and non-human elements; it brings together Meta and third-party staff, the dashboard and its algorithms, and a number of other components. The present project explored this through interviews with fact-checkers and participation in the International Fact-Checking Network’s annual meetings.
Especially initially, fact-checkers essentially had to train the Meta dashboard to better identify posts that were both problematic and fact-checkable – the fact-checkers’ assumption was that the system, which originally produced plenty of false positives, would learn from their actions. However, this was also seen as an unacknowledged labour contribution to the system, and some fact-checkers refused to participate in this way.
Fact-checkers also developed their own nuanced understanding of the veracity labels available to them, and specific labelling practices emerged over time – while those available labels in turn shaped the fact-checkers’ own thinking about truths and falsehoods. To end-users, of course, these reasonings remain opaque: they only see the final fact-checking labels.
In this sense, fact-checkers are becoming ‘machine learners’, in Adrian Mackenzie’s understanding: their processes of critical thought are being shaped by the logics and data structures of machine learning. Meta’s fact-checking programme is cementing the politics of demarcation between fact and non-fact.