The second paper in this AoIR 2017 session is by Daniela van Geenen and Mirko Schäfer, whose focus is on 'fake news' on Twitter. They began by tracking activity in the Dutch Twittersphere, and identified a number of communities within this user base; within these communities, news and other information are being shared, and a process of social filtering takes place.
Within a two-week sample of Dutch tweets, the project identified references to traditional and alternative media sources; the former represented established media including broadcasters, newspapers, and similar outlets, while the latter were often online-only, topic-focussed sites that were not necessarily run by professional journalists. Traditional media were referred to in some 211,000 tweets, while 44,000 tweets referred to alternative media.
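(Purely as an illustration of this kind of counting exercise, and not the authors' actual pipeline, classifying tweets by the domains of the links they share might look like the sketch below; the domain lists, field names, and function names are hypothetical.)

```python
from urllib.parse import urlparse
from collections import Counter

# Hypothetical domain lists -- the actual study would rely on curated lists
# of Dutch traditional and alternative media outlets.
TRADITIONAL = {"nos.nl", "nrc.nl", "volkskrant.nl"}
ALTERNATIVE = {"example-altnews.nl", "example-opinionblog.nl"}

def classify_tweet(tweet):
    """Return 'traditional', 'alternative', or None based on linked domains."""
    for url in tweet.get("urls", []):
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        if domain in TRADITIONAL:
            return "traditional"
        if domain in ALTERNATIVE:
            return "alternative"
    return None

def count_references(tweets):
    """Count tweets that refer to each media category."""
    return Counter(c for t in tweets if (c := classify_tweet(t)))
```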
Out of this dataset, the project also identified a subset of highly active retweet networks, including especially a right-wing cluster that accounted for some 50% of all alternative media references. Other clusters covered left-wing politics, environmentalism, sports, vlogging, and other topics. A smaller left-wing cluster also shared a substantial amount of alternative media content.
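(As a rough sketch of how such retweet clusters can be surfaced, and not necessarily the method used in this study, one could build a retweet graph and apply a standard community-detection algorithm; the data structure assumed here is hypothetical.)

```python
import networkx as nx

def retweet_clusters(retweets):
    """
    Build a retweet graph and detect communities.
    `retweets` is assumed to be an iterable of (retweeter, original_author) pairs.
    """
    g = nx.DiGraph()
    for retweeter, author in retweets:
        # Accumulate edge weights for repeated retweets of the same author.
        if g.has_edge(retweeter, author):
            g[retweeter][author]["weight"] += 1
        else:
            g.add_edge(retweeter, author, weight=1)

    # Greedy modularity communities on the undirected projection;
    # other algorithms (e.g. Louvain) would serve the same purpose.
    communities = nx.algorithms.community.greedy_modularity_communities(
        g.to_undirected(), weight="weight"
    )
    return [set(c) for c in communities]
```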
Tabloid content was also shared especially in the right-wing cluster, with some attention in the left-wing cluster as well (though the framing of these links may well differ across these groups). An analysis of framing approaches in a sample of these tweets showed that the framing of alternative media references is largely affirmative; traditional media references are similarly mainly affirmed, but with a greater percentage of negotiated and oppositional readings, and some tweets also concern the medium itself rather than the content published there. Tabloids are more frequently referenced than quality news sources; here, too, it is often the medium rather than the specific content that is endorsed.
There is a need to further extend the analysis from this sample to a larger subset of the dataset. Additionally, the styles of framing might intersect with the practices common on the platform. There is a bricolage of re-encoding of content: the dissemination is socially rather than algorithmically driven. This complicates the encoding/decoding model and introduces a number of additional levels of encoding. Finally, the notion of 'filter bubbles' and 'echo chambers' needs to be challenged: network visualisations, in identifying clusters, tend to promote a focus on such supposed structures, but this may obscure a more complex reality.