Locked out of Social Platforms: An iCS Symposium on Challenges to Studying Disinformation (2018)

Pushed towards Dysfunction: How Social Media API Restrictions Distort Research Outcomes

Axel Bruns

Abstract

In the aftermath of the Brexit and Trump victories and similar unexpected election outcomes, considerable attention has once again been paid to the apparent role of social media as platforms for the establishment of extremist communities and channels for the distribution of their propaganda. The platforms have also come under pressure for the apparent ease with which interested state and commercial actors are able to deploy automated accounts and use them to target populations susceptible to their messaging. Such concerns are not entirely new, but they have renewed moral panics about ‘echo chambers’, ‘filter bubbles’, and – perhaps most prominently – ‘fake news’ in social media.

From the political blogs of the early 2000s to the latest generation of social media platforms, scholarly research has produced plenty of case studies of such phenomena: it is comparatively easy to find Twitter hashtags or Facebook groups that exhibit strongly polarised, exclusionary tendencies and circulate highly partisan propaganda content. But importantly, the ready accessibility of such examples, and the comparative absence of more comprehensive assessments of their placement within the wider social media ecology, are also a product of the ways in which the increasingly restrictive data access regimes of these platforms are shaping the scholarly research agenda: the limitations imposed on the standard Application Programming Interfaces (APIs) are effectively pushing independent research towards the study of dysfunction.

On Twitter, for instance, it is considerably easier to capture user activities in explicitly political and highly polarised hashtags such as #tcot or #p2, or their equivalents outside of the U.S., than it is to observe the day-to-day activities of ordinary users. As a result, we know a great deal more about the communicative patterns and practices of the small and unrepresentative subset of ‘political junkies’ on the platform than we do about those of the considerable majority of users who encounter and engage in political debate more serendipitously; this imbalance fuels popular perceptions of Twitter as sustaining ‘echo chambers’ or ‘filter bubbles’. On Facebook, we similarly have a better understanding of what happens in selected hyperpartisan public pages than of how political talk manifests across the vast social graph of private or semi-private personal profiles.
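To illustrate the asymmetry, the following is a minimal sketch of the two modes of data collection, assuming Twitter's v1.1 APIs as accessed through the tweepy library (version 3.x, as available around 2018); the credentials, the screen name and the collection sizes are placeholders, and the comments describe the general rate-limiting constraints discussed here rather than specific documented quotas.

```python
import tweepy

# Placeholder credentials; an approved developer account is assumed.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")


class HashtagListener(tweepy.StreamListener):
    """Receives every public tweet matching the tracked terms in real time."""

    def on_status(self, status):
        print(status.id_str, status.user.screen_name, status.text)


# One filter call captures an entire polarised hashtag conversation as it
# unfolds. Note that filter() blocks indefinitely, so in practice each half
# of this sketch would be run on its own.
stream = tweepy.Stream(auth=auth, listener=HashtagListener())
stream.filter(track=["#tcot", "#p2"])

# Observing 'ordinary' users, by contrast, means enumerating accounts one by
# one through rate-limited REST endpoints such as statuses/user_timeline,
# which makes anything like a representative, platform-wide sample
# impractical for independent researchers.
api = tweepy.API(auth, wait_on_rate_limit=True)
for tweet in tweepy.Cursor(api.user_timeline, screen_name="example_user").items(200):
    print(tweet.id_str, tweet.text)
```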

The glimpses of the social media iceberg below the waterline that are available to us tend to be provided by survey- and interview-based studies rather than by research drawing on API access, and they often paint a very different picture of the communicative structures enabled by social media. Seen from that perspective, Facebook serves as an engine for context collapse rather than partisan segmentation, and Twitter supports the rapid dissemination of valuable information as much as of problematic disinformation. Only a handful of API-based studies have been able to document such patterns from a ‘big social data’ perspective, often by bending the platforms’ API rules close to breaking point. For scholars, then, there is a need to push back against increasing API restrictions – and indeed a need to convince platforms that a loosening of such restrictions is in their own best interest, as it can correct the distorted public perceptions of their impact.