Some Questions about Filter Bubbles, Polarisation, and the APIcalypse

Rafael Grohmann from the Brazilian blog DigiLabour has asked me to answer some questions about my recent work – and especially my new book Are Filter Bubbles Real?, which is out now from Polity – and the Portuguese version of that interview has just been published. I thought I’d post the English-language answers here, too:

1. Why are the ‘filter bubble’ and ‘echo chamber’ metaphors so dumb?

The first problem is that they are only metaphors: the people who introduced them never bothered to define them properly. This means that these concepts might sound sensible, but they mean everything and nothing. For example, what does it mean to be inside a filter bubble or echo chamber? Do you need to be completely cut off from the world around you, which seems to be what those metaphors suggest? Only in such extreme cases – which are perhaps similar to being in a cult that has completely disconnected from the rest of society – can the severe negative effects that the supporters of the echo chamber or filter bubble theories imagine actually become reality, because they assume that people in echo chambers or filter bubbles no longer see any content that disagrees with their political worldviews.

Now, such complete disconnection is not entirely impossible, but very difficult to achieve and maintain. And most of the empirical evidence we have points in the opposite direction. In particular, the immense success of extremist political propaganda (including ‘fake news’, another very problematic and poorly defined term) in the US, the UK, parts of Europe, and even in Brazil itself in recent years provides a very strong argument against echo chambers and filter bubbles: if we were all locked away in our own bubbles, disconnected from each other, then such content could not have travelled as far, and could not have affected as many people, as quickly as it appears to have done. Illiberal governments wouldn’t invest significant resources in outfits like the Russian ‘Internet Research Agency’ troll farm if their influence operations were confined to existing ideological bubbles; propaganda depends crucially on the absence of echo chambers and filter bubbles if it seeks to influence more people than those who are already part of a narrow group of hyperpartisans.

Alternatively, if we define echo chambers and filter bubbles much more loosely, in a way that doesn’t require the people inside those bubbles to be disconnected from the world of information around them, then the terms become almost useless. With such a weak definition, any community of interest would qualify as an echo chamber or filter bubble: any political party, religious group, football club, or other civic association suddenly is an echo chamber or filter bubble because it enables people with similar interests and perspectives to connect and communicate with each other. But in that case, what’s new? Such groups have always existed in society, and society evolves through the interaction and contest between them – there’s no need to create new and poorly defined metaphors like ‘echo chambers’ and ‘filter bubbles’ to describe this.

Some proponents of these metaphors claim that our new digital and social media have made things worse, though: that they have made it easier for people to create the first, strong type of echo chamber or filter bubble, by disconnecting from the rest of the world. But although this might sound sensible, there is practically no empirical evidence for this: for example, we now know that people who receive news from social media encounter a greater variety of news sources than those who don’t, and that those people who have the strongest and most partisan political views are also among the most active consumers of mainstream media. Even suggestions that platform algorithms are actively pushing people into echo chambers or filter bubbles have been disproven: Google search results, for instance, show very little evidence of personalisation at an individual level.

Part of the reason for this is that – unlike the people who support the echo chamber and filter bubble metaphors – most ordinary people actually don’t care much at all about politics. If there is any personalisation through the algorithms of Google, Facebook, Twitter, or other platforms, it will be based on many personal attributes other than our political interests. As multi-purpose platforms, these digital spaces are predominantly engines of context collapse, where our personal, professional, and political lives intersect and crash into each other and where we encounter a broad and unpredictable mixture of content from a variety of viewpoints. Overall, these platforms enable all of us to find a more diverse range of perspectives, not a narrower one.

And this is where these metaphors don’t just become dumb, but downright dangerous: they create the impression, first, that there is a problem, and second, that the problem is caused to a significant extent by the technologies we use. This is an explicitly technologically determinist perspective, ignoring the human element and assuming that we are unable to shape these technologies to our needs. And such views then necessarily also invite technological solutions: if we assume that digital and social media have caused the current problems in society, then we must change the technologies (through technological, regulatory, and legal adjustments) to fix those problems. It’s as if a simple change to the Facebook algorithm would make fascism disappear.

In my view, by contrast, our current problems are social and societal, economic and political, and technology plays only a minor role in them. That’s not to say that the platforms are free of blame – Facebook, Twitter, WhatsApp, and others could certainly do much more to combat hate speech and abuse on their platforms, for example. But if social media and even the Internet itself suddenly disappeared tomorrow, we would still have those same problems in society, and we would be no closer to solving them. The current overly technological focus of our public debates – our tendency to blame social media for all our problems – obscures this fact, and prevents us from addressing the real issues.

2. Polarisation is a political fact, not a technological one. How do you understand political and societal polarisation today?

To me, this is the real question, and one which has not yet been researched enough. The fundamental problem is not echo chambers and filter bubbles: it is perfectly evident that the various polarised groups in society are very well aware of each other, and of each other’s ideological positions – which would be impossible if they were each locked away in their own bubbles. In fact, they monitor each other very closely: research in the US has shown that far-right fringe groups are also highly active followers of ‘liberal’ news sites like the New York Times, for example. But they no longer follow the other side in order to engage in any meaningful political dialogue, aimed at finding a consensus that both sides can live with: rather, they monitor their opponents in order to find new ways to twist their words, create believable ‘fake news’ propaganda, and attack them with such falsehoods. And yes, they use digital and social media to do so, but again this is not an inherently technological problem: if they didn’t have social media, they’d use the broadcast or print media instead, just as the fascists did in the 1920s and 1930s and as their modern-day counterparts still do today.

So, for me the key question is how we have come to this point: put simply, why do hyperpartisans do what they do? How do they become so polarised – so sure of their own worldview that they will dismiss any opposing views immediately, and will see any attempts to argue with them or to correct their views merely as a confirmation that ‘the establishment’ is out to get them? What are the (social and societal, rather than simply technological) processes by which people get drawn to these extreme political fringes, and how might they be pulled back from there? This question also has strong psychological elements, of course: how do hyperpartisans form their worldview? How do they incorporate new evidence into it? How do they interpret, and in doing so defuse, any evidence that goes against their own perspectives? We see this across so many fields today: from political argument itself to the communities of people who believe vaccinations are some kind of global mind control experiment, or to those who still deny the overwhelming scientific evidence for anthropogenic climate change. How do these people maintain their views even when – and this again is evidence for the fact that echo chambers and filter bubbles are mere myths – they are bombarded on a daily basis with evidence of the fact that vaccinations save lives and that the global climate is changing with catastrophic consequences?

And since you include the word ‘today’ in your question, the other critical area of investigation in all this is whether any of this is new, and whether it is different today from the way it was ten, twenty, fifty, or one hundred years ago. On the one hand, it seems self-evident that we do see much more evidence of polarisation today than we have in recent decades: Brexit, Trump, Bolsonaro, and many others have clearly sensitised us to these deep divisions in many societies around the world. But most capitalist societies have always had deep divisions between the rich and the poor; the UK has always had staunch pro- and anti-Europeans; the US has always been racist. I think we need more research, and better ways of assessing whether any of this has actually gotten worse in recent years, or whether it has simply become more visible.

For example, Trump and others have arguably made it socially acceptable in the US to be politically incorrect: to be deliberately misogynist; to be openly racist; to challenge the very constitutional foundations that the US political system was built on. But perhaps the people who now publicly support all this had always already been there, and had simply lacked the courage to voice their views in public – perhaps what has happened here is that Trump and others have smashed the spiral of silence that subdued such voices by credibly promising social and societal sanctions, and have instead created a spiral of reinforcement that actively rewards the expression of extremist views and leads hyperpartisans to try and outdo each other with more and more extreme statements. Perhaps the spiral of silence now works the other way, and the people who oppose such extremism now remain silent because they fear communicative and even physical violence.

Importantly, these are also key questions for media and communication research, but this research cannot take the simplistic perspective that ‘digital and social media are to blame’ for all of this. Rather, the question is to what extent the conditions and practices in our overall, hybrid media system – encompassing print and broadcast as well as digital and social media – have enabled such changes. Yes, digital and social platforms have enabled voices on the political fringes to publish their views, without editorial oversight or censorship from anyone else. But such voices often find their audience only once they have been amplified by more established outlets: for instance, once they have been covered – even if only negatively – by mainstream media journalists, or shared via social media by more influential accounts (including even the US president himself). It is true that in the current media landscape, the flows of information are different from what they were in the past – not simply because of the technological features of the media, but because of the way that all of us (from politicians and journalists through to ordinary users) have chosen to incorporate these features into our daily lives. The question then is whether and how this affects the dynamics of polarisation, and what levers are available to us if we want to change those dynamics.

3. How can we continue critical research in social media after the APIcalypse?

With great tenacity and ingenuity even in the face of significant adversity – because we have a societal obligation to do so. I’ve said throughout my answers here that we cannot simplistically blame social media for the problems our societies are now facing: the social media technologies have not caused any of this. But the ways in which we, all of us, use social media – alongside other, older media forms – clearly play a role in how information travels and how polarisation takes place, and so it remains critically important to investigate the social media practices of ordinary citizens, of hyperpartisan activists, of fringe and mainstream politicians, of emerging and established journalists, of social bots and disinformation campaigns. And of course even beyond politics and polarisation, there are also many other important reasons to study social media.

The problem now is that over the past few years, many of the leading social media platforms have made it considerably more difficult for researchers even to access public and aggregate data about social media activities – a move I have described, in deliberately hyperbolic language, as the ‘APIcalypse’. Ostensibly, such changes were introduced to protect user data from unauthorised exploitation, but a convenient consequence of these access restrictions has been that independent, critical, public-interest research into social media practices has become a great deal more difficult even while the commercial partnerships between platforms and major corporations have remained largely unaffected. This limits our ability to provide an impartial assessment of social media practices and to hold the providers themselves to account for the effects of any changes they might make to their platforms, and increasingly forces scholars who seek to work with platform data into direct partnership arrangements that operate under conditions favouring the platform providers.

This requires several parallel responses from the scholarly community. Of course we must explore the new partnership models offered by the platforms, but we should treat these with a considerable degree of scepticism and cannot rely solely on such limited data philanthropy; in particular, the platforms are especially unlikely to provide data access in contexts where scholarly research might be highly critical of their actions. We must therefore also investigate other avenues for data gathering: this includes data donations from users of these platforms (modelled for instance on ProPublica’s browser plugin that captures the political ads encountered by Facebook users) or data scraping from the platforms’ websites as an alternative to API-based data access, for example.
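To make the scraping alternative concrete, here is a minimal, hypothetical sketch of how public posts might be extracted from a platform’s web pages rather than its API. The HTML structure (class names such as ‘post’ and ‘post-text’) is entirely invented for illustration – any real platform’s markup would differ, and the ethical and Terms of Service considerations discussed below would apply:

```python
# A hypothetical sketch of website scraping as an alternative to API access.
# The class names ("post", "post-text") are invented for illustration only;
# real platform markup differs, and Terms of Service must be considered.
from html.parser import HTMLParser

class PostExtractor(HTMLParser):
    """Collects the text of elements whose class attribute includes 'post-text'."""
    def __init__(self):
        super().__init__()
        self.in_post = False
        self.posts = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "")
        if "post-text" in classes.split():
            self.in_post = True
            self.posts.append("")

    def handle_endtag(self, tag):
        self.in_post = False

    def handle_data(self, data):
        if self.in_post:
            self.posts[-1] += data.strip()

# In practice the HTML would come from an HTTP request to a public page;
# here a small embedded sample keeps the sketch self-contained.
sample_html = """
<div class="post"><p class="post-text">First public post</p></div>
<div class="post"><p class="post-text">Second public post</p></div>
"""

extractor = PostExtractor()
extractor.feed(sample_html)
print(extractor.posts)  # → ['First public post', 'Second public post']
```

This uses only the Python standard library to underline the point that such data gathering requires no privileged access – which is precisely why, as discussed next, platforms may seek to restrict it contractually rather than technically.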

Platforms may seek to shut down such alternative modes of data gathering (as Facebook sought to do with the ProPublica browser plugin), or change their Terms of Service to explicitly forbid such practices – and this should lead scholars to consider whether the benefits of their research outweigh the platform’s interests. Terms of Service are often written to the maximum benefit of the platform, and may not be legally sound under applicable national legislation; the same legislation may also provide ‘fair use’ or ‘academic freedom’ exceptions that justify the deliberate breach of Terms of Service restrictions in specific contexts. As scholars, we must remember that we have a responsibility to the users of the platform, and to society as such, as well as to the platform providers. We must balance these responsibilities, by taking care that the user data we gather remain appropriately protected as we pursue questions of societal importance, and we should minimise the impact of our research on the legitimate commercial interests of the platform unless there is a pressing need to reveal malpractice in order to safeguard society. To do so can be a very difficult balancing act, of course.

Finally, we must also maintain our pressure on the platforms to provide scholarly researchers with better interfaces for data access, well beyond limited data philanthropy schemes that exclude key areas of investigation. Indeed, we must enlist others – funding bodies, policymakers, civil society institutions, and the general public itself – in bringing that pressure to bear: it is only in the face of such collective action, coordinated around the world, that these large and powerful corporations are likely to adjust their data access policies for scholarly research. And it will be important to confirm that they act on any promises of change they might make: too often, the end results they have delivered have not lived up to the grand rhetoric with which they were announced.

In spite of all of this, however, I want to end on a note of optimism: there still remains a crucial role for research that investigates social media practices, in themselves and especially also in the context of the wider, hybrid media system of older and newer media, and we must not and will not give up on this work. In the face of widespread hyperpartisanship and polarisation, this research is now more important than ever – and the adversities we are now confronted with are also a significant source of innovation in research methods and frameworks.