'Fake News' on Facebook: A Large-Scale, Longitudinal Study of Problematic Information Dissemination between 2016 and 2021
Axel Bruns, Daniel Angus, Xue Ying (Jane) Tan, Edward Hurcombe, Nadia Jude, Phoebe Matich, Stephen Harrington, Jennifer Stromer-Galley, Karin Wahl-Jorgensen, and Scott Wright
The sessions at this Norwegian Media Researcher Conference are organised around particularly constructive feedback on work-in-progress papers – which is great as a format, but doesn’t lend itself particularly well to liveblogging. So I’ll skip forward right to the next keynote, by Raul Ferrer-Conill, whose focus is on the datafication of everyday life. This is something of a departure from his previous work on the gamification of the news.
He begins by outlining the datafication of the mundane: the way in which people’s social actions – as well as non-human actions, in fact – are being transformed into quantifiable data, especially online, so that such data become a resource that can be utilised, operationalised, and exploited. Indeed, the sense in the industry is now that ‘everything starts with data’, which reveals a particular, peculiar kind of mindset. Over the past few years, the Internet of Things has moved from an idea to a reality, and this has fuelled the “smart” delusion: the belief that more datapoints mean smarter decision-making processes (they usually don’t).
Before we launch properly into 2022 and the new Australian Laureate Fellowship that will be the main focus of my year, I need to close the loop on two more talks I presented just before my summer holidays in December, and which are now online as videos.
On 26 November 2021, I had the pleasure of presenting some thoughts on Facebook’s week-long blanket ban on news content in Australia in an invited presentation at Griffith University’s Centre for Governance and Public Policy. My sincere thanks to Max Grömping and the rest of the CGPP team for hosting me. The talk, available below, also gave me an opportunity to speak more generally about the continued challenges of researching social media platforms and their activities, and to outline some of the work that my colleagues and I in the QUT Digital Media Research Centre and the ARC Centre of Excellence for Automated Decision-Making and Society are doing to address these issues. The audio on the recording is a little soft, but I hope the overall discussion comes through clearly enough; slides and further details are linked below.
A few days later I gave a talk to the Social Media Data Science Group at the University of Sydney – many thanks to Monika Bednarek for the invitation. This was a great opportunity for me to step through a number of different but related concepts, from groups through communities to publics, and to organise some thoughts on how to distinguish these broadly similar but nonetheless distinct formations from one another. This is especially important in the context of network analysis, which all too often jumps to calling any collection of similar entities a ‘community’ without paying sufficient attention to the specific meaning of that term: not every cluster is necessarily a community in the proper sense of the word.
I’ve not yet had the chance to write much about one of the major new projects I’m involved with: the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), a large-scale, multi-institutional, seven-year research centre that investigates the impact of automated decision-making technologies (including algorithms, artificial intelligence, and other such technologies) on all aspects of our personal and professional lives. In particular, for the first year of the Centre I’ve led the News & Media Focus Area, which recently held its inaugural symposium to take stock of current research projects and plan for the future. (This was also the time for me to hand that leadership over to my colleagues Jean Burgess (QUT) and James Meese (RMIT), as I step back from that role to concentrate on another major project – more on this in a future update.)
Within News & Media, I’ve also led a major research project which we launched publicly in late July, and which is now producing first research outcomes: the Australian Search Experience. Inspired by an earlier project by our ADM+S partner organisation AlgorithmWatch in Germany, this project investigates the extent to which the search results Australian users encounter as they query search engines like Google are personalised and therefore differ from user to user; if they are, this would leave open the possibility of user being placed in so-called ‘filter bubbles’ – a concept which I’ve questioned in my recent book Are Filter Bubbles Real? We even have a promo video:
Investigating such personalisation is difficult: since every user is assumed to see a personalised set of search results, we need to compare these results across a large number of users in order to determine whether there is any significant personalisation, and which aspects of these users’ identities might drive it. Some studies approach this challenge by setting up a large number of ‘fake’ user accounts that are each given a particular persona by making them search repeatedly for specific topics expected to shape the search engine’s profile of the account. AlgorithmWatch’s earlier, German study took a different approach and instead invited a large number of real users to contribute to the study as citizen scientists: they were asked to install a browser plugin that regularly searched for a predefined set of keywords and reported the results back to AlgorithmWatch’s server.
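To give a concrete (and purely illustrative) sense of what that comparison involves, the sketch below shows one simple way to quantify how much the result lists donated by different users for the same query overlap. The data structures and the Jaccard-style overlap measure here are assumptions made for this example, not our actual analysis code.

```python
# Illustrative sketch only: comparing donated result lists across users for one query.
# Data structures and the overlap measure are assumptions, not the project's pipeline.
from __future__ import annotations

from itertools import combinations


def jaccard_overlap(results_a: list[str], results_b: list[str]) -> float:
    """Share of unique result URLs that two users' result lists have in common."""
    set_a, set_b = set(results_a), set(results_b)
    if not set_a and not set_b:
        return 1.0
    return len(set_a & set_b) / len(set_a | set_b)


def mean_pairwise_overlap(results_by_user: dict[str, list[str]]) -> float:
    """Average overlap across all pairs of users for one query at one point in time.

    Values close to 1.0 suggest everyone sees much the same results;
    lower values indicate greater divergence (and possibly personalisation).
    """
    pairs = list(combinations(results_by_user.values(), 2))
    if not pairs:
        return 1.0
    return sum(jaccard_overlap(a, b) for a, b in pairs) / len(pairs)


# Hypothetical donations for one query, collected at roughly the same time:
donations = {
    "participant_001": ["url_a", "url_b", "url_c", "url_d"],
    "participant_002": ["url_a", "url_b", "url_c", "url_e"],
    "participant_003": ["url_a", "url_b", "url_f", "url_g"],
}
print(round(mean_pairwise_overlap(donations), 3))  # 0.422 for this toy data
```

In practice such overlap scores would need to be broken down by query, by search engine, and by the demographic attributes participants have volunteered, but the basic logic of pairwise comparison remains the same.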
Our ADM+S project uses this same data donation approach, but extends it further: we query four major search engines (Google Search, Google News, Google Video, and YouTube), and we are able to vary our search terms over the duration of the project. Like the earlier project, we also ask users to provide some basic demographic information (in order to link any systematic personalisation patterns we may encounter to those demographics), but we never access any of our participants’ own search histories. Our browser plugin is available for the desktop versions of Google Chrome, Mozilla Firefox, and Microsoft Edge, and I’m pleased to say that more than 1,000 citizen scientists have now installed the plugin.
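For illustration only, a single donated record might look something like the following sketch; the field names and structure are assumptions I'm making for this example rather than the plugin's actual data format, but they capture the key point that we collect only the automated searches and some optional, self-reported demographics, and nothing from participants' own browsing.

```python
# Purely hypothetical sketch of the kind of record a single data donation might contain;
# all field names here are illustrative assumptions, not the plugin's actual data format.
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class SearchDonation:
    engine: str             # e.g. "google_search", "google_news", "google_video", "youtube"
    query: str              # one of the predefined keywords pushed to the plugin
    collected_at: datetime  # when the automated search ran in the participant's browser
    results: list[str]      # ranked list of result URLs as rendered for this participant
    demographics: dict[str, str] = field(default_factory=dict)  # optional, self-reported only


donation = SearchDonation(
    engine="google_search",
    query="climate change",
    collected_at=datetime.now(timezone.utc),
    results=["https://example.org/a", "https://example.org/b"],
    demographics={"age_bracket": "35-44", "state": "QLD"},
)
print(donation.engine, len(donation.results))
```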
Last week saw the annual conference of the Association of Internet Researchers (AoIR), which also marked the end of my six-year tenure on the AoIR Executive (serving two years each as Vice-President, President, and Past President). AoIR remains my intellectual home, and I’ve had a great time in these roles, even in spite of the additional pressures created by these past two pandemic years and the resulting need to move our annual conference to an entirely online format – I’ve worked with three excellent Executive Committees, and I’m particularly proud of the way that we didn’t just move the conference online, but created what has become a benchmark for many other online conferences. My sincere thanks to everyone who has served with me on the Exec over these six years – and with first Tama Leaver and then Nicholas A. John taking on the AoIR Presidency over the coming two terms, I know the Association is in very good hands as we return towards in-person events again, too.
But on to this year’s AoIR conference. I ended up being involved in quite a number of panels, drawing on the excellent and diverse research conducted by my colleagues in the QUT Digital Media Research Centre (DMRC) and collaborating with a range of colleagues from around the world. As the AoIR conference presentation videos themselves will be taken down again by the end of the year, we’ve now made these available via the DMRC YouTube channel, too – and since there’s only so much we can cover in AoIR’s three-minute presentation format, we’ve also recorded longer-form videos for a number of the papers on these panels. For more details on any of these presentations, click on the reference below the video.
Mis- and Disinformation
I’ll start with a panel on mis- and disinformation that is closely related to our current ARC Discovery project on Evaluating the Challenge of ‘Fake News’ and Other Malinformation. This bumper panel of five presentations brings together a large-scale study of suspected ‘fake news’ dissemination networks on Facebook over the past five years with detailed analyses of sharing and engagement patterns around two specific problematic outlets – the Russian state propaganda channel RT and the controversial commercial news channel Sky News Australia; it further combines this analysis of mis- and disinformation practices with two papers reviewing the discourse about ‘fake news’ and related phenomena in Australian media and politics, and in the Russian and Persian Twitterspheres. I must say I’m particularly excited about this panel, not least because it showcases the breadth and depth of the research being conducted at the DMRC and our partner institutions, and the diversity of our researchers – the RT paper alone covers content in English, Russian, Spanish, French, German, and Arabic, and I can’t think of too many other research centres that can readily assemble such a multi-lingual team.
Of the papers presented in the panel, we’ve recorded longer versions for two. The first of these is our large-scale, longitudinal study of ‘fake news’ sharing on Facebook. This draws on our masterlist of more than 2,300 outlets suspected of publishing mis- and disinformation, which we’ve compiled from the existing literature; we’ve gathered any posts that share links to these sites on public Facebook pages and groups, and mapped the networks between these Facebook spaces. The results are indicative of the key groups and communities, from around the world, that are involved in promoting such problematic information, and of the themes they tend to focus on – and they’re a starting point for the next stage of the work in our ARC Discovery project. Here is the long version of the presentation:
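(For those curious about what the network mapping described above involves in practice, here is a highly simplified, purely illustrative sketch of how link-sharing posts can be turned into a network between Facebook spaces. The input format and the co-sharing projection used here are assumptions for this example, not necessarily the exact method we used in the study.)

```python
# Illustrative sketch only: mapping connections between Facebook spaces (pages and
# groups) based on the suspect domains they link to. Input format and the co-sharing
# projection are assumptions for this example, not necessarily the study's exact method.
from urllib.parse import urlparse

import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical post records: which public Facebook space shared a link to which URL.
posts = [
    {"space": "Page A", "url": "https://dodgy-news.example/story-1"},
    {"space": "Group B", "url": "https://dodgy-news.example/story-2"},
    {"space": "Page A", "url": "https://other-outlet.example/article"},
]
suspect_domains = {"dodgy-news.example", "other-outlet.example"}  # drawn from the masterlist

# Build a bipartite graph linking Facebook spaces to the suspect domains they share.
B = nx.Graph()
for post in posts:
    domain = urlparse(post["url"]).netloc
    if domain in suspect_domains:
        B.add_node(post["space"], kind="space")
        B.add_node(domain, kind="domain")
        B.add_edge(post["space"], domain)

# Project onto the Facebook spaces: two spaces are connected if they link to at least
# one of the same suspect domains; edge weights count how many domains they share.
spaces = [node for node, data in B.nodes(data=True) if data["kind"] == "space"]
co_sharing = bipartite.weighted_projected_graph(B, spaces)
print(list(co_sharing.edges(data=True)))  # [('Page A', 'Group B', {'weight': 1})]
```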