The next session at the ICA 2024 conference starts with a paper that my QUT Digital Media Research Centre colleague Dan Angus and I are presenting, so I’ll blog Dan’s part and then leave it to our slides to explain my contribution. Our work is part of a large project that investigates the dissemination of problematic, ‘fake news’ content on social media platforms.
We approached this by constructing a masterlist of some 2,300 problematic information domains identified in past research (with a focus mostly on the United States), and building a research stack around that seed list. The stack drew on the list to gather public posts from Facebook’s CrowdTangle data service between 2016 and 2022 (some 42 million of them, from around 918,000 public pages and groups); identified the 1,000 most prominent pages and groups sharing problematic information; gathered all of their posts during these years, whether or not they contained problematic information (some 70 million, from the 953 still-available public pages and groups); and examined, through topic modelling and practice mapping, what else they talked about.
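The core matching step, checking whether a shared URL’s domain appears on the seed list, can be sketched in Python as follows (the domain names here are placeholders for illustration, not entries from the actual masterlist):

```python
from urllib.parse import urlparse

# Placeholder seed list standing in for the ~2,300-domain masterlist.
PROBLEMATIC_DOMAINS = {"example-fakenews.com", "dodgy-site.net"}

def shares_problematic_link(post_url: str) -> bool:
    """Check whether a shared URL points at a domain on the seed list."""
    domain = urlparse(post_url).netloc.lower()
    # Strip a leading "www." so common subdomain variants still match.
    if domain.startswith("www."):
        domain = domain[4:]
    return domain in PROBLEMATIC_DOMAINS
```

In practice such matching also has to handle link shorteners and redirects, but the principle is the same: reduce each shared URL to a canonical domain and test it against the seed list.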
Slides are here, and more live-blogging below:
The initial analysis of the first dataset found large clusters of conservative and progressive Facebook spaces from the US, various other national and language communities sharing problematic information, and a range of conspiracist groups from crypto-bros to UFOlogists. We then generated a combined ranking of these pages and groups (a rank product) to identify the top 1,000 spaces, and gathered all 70 million of their posts as the second dataset.
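A rank product combines a page’s ranks under several prominence metrics by taking their geometric mean, so a space must rank well across the board to make the top 1,000. A minimal sketch, with hypothetical metric names and toy numbers rather than the project’s actual measures:

```python
from math import prod

def rank_product(rankings: list[dict[str, int]]) -> dict[str, float]:
    """Combine several per-metric rankings into one rank-product score.

    Each input dict maps a page/group to its rank (1 = most prominent)
    under one metric; the score is the geometric mean of those ranks,
    so lower scores mean higher combined prominence.
    """
    names = rankings[0].keys()
    n = len(rankings)
    return {name: prod(r[name] for r in rankings) ** (1.0 / n) for name in names}

# Toy example with two hypothetical metrics (e.g. post volume, share counts).
by_posts = {"PageA": 1, "PageB": 3, "PageC": 2}
by_shares = {"PageA": 2, "PageB": 1, "PageC": 3}
combined = rank_product([by_posts, by_shares])
top = sorted(combined, key=combined.get)  # most prominent first
```

Here `PageA` (ranks 1 and 2) comes out ahead of `PageB` (3 and 1) and `PageC` (2 and 3), illustrating how the geometric mean rewards consistent prominence across metrics.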
Topic modelling of their posts points to some obvious overall topics: politics, world news, religion, health and wellness, law and order, popular culture, and conspiracist content. In entertainment, non-problematic links actually outnumber problematic ones, with many posts drawing on tabloid and similar sources; in law and order, the two are evenly balanced.
We then also applied our new practice mapping approach to the data (more on this at a later stage) to identify similar patterns in link-sharing, on-sharing, and YouTube-sharing across these pages and groups. Through this we identified two large clusters: generally pro-Trump, pro-MAGA pages and groups on the one hand, and ‘Berniecrat’ progressive Democrat spaces on the other.
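Practice mapping itself is the authors’ own method, to be detailed elsewhere, but one generic building block for comparing link-sharing patterns is a pairwise similarity between pages’ domain-sharing profiles. A minimal sketch using cosine similarity, with invented page and domain names:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical link-sharing profiles: domain -> number of times shared.
page_x = Counter({"site-a.com": 40, "site-b.com": 25})
page_y = Counter({"site-a.com": 35, "site-b.com": 30})
page_z = Counter({"site-c.org": 50, "site-d.com": 20})

sim_xy = cosine_similarity(page_x, page_y)  # high: similar sharing practice
sim_xz = cosine_similarity(page_x, page_z)  # zero: no shared domains
```

Pages whose profiles point in similar directions (like `page_x` and `page_y`) would end up in the same cluster, which is one plausible way such pro-Trump and ‘Berniecrat’ groupings could emerge from sharing behaviour alone.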
Their URL sharing patterns differed considerably: the Trump side focussed more strongly on sharing deeply problematic content, while the Bernie side combined some problematic information sharing with substantial sharing of a healthier mainstream media diet. This shows yet again the significantly asymmetrical nature of mis- and disinformation affinity amongst political partisans in the US.