The next speaker in this AoIR 2019 session is Fabio Giglietto, whose focus is on inauthentic coordinated link sharing on Facebook in the run-up to the 2018 Italian and 2019 European elections in Italy. ‘Coordinated inauthentic behaviour’ is a term used by Facebook itself, especially to justify its periodic mass account take-downs; the term remains poorly defined, however, and Facebook’s own press releases mainly point to a one-minute video that it has published to define the term.
The term marks a shift from content to process (including actors, propaganda, and information cascades), but – surprise! – largely remains unaware …
Rafael Grohmann from the Brazilian blog DigiLabour has asked me to answer some questions about my recent work – and especially my new book Are Filter Bubbles Real?, which is out now from Polity – and the Portuguese version of that interview has just been published. I thought I’d post the English-language answers here, too:
1. Why are the ‘filter bubble’ and ‘echo chamber’ metaphors so dumb?
The first problem is that they are only metaphors: the people who introduced them never bothered to properly define them. This means that these concepts might sound sensible, but that they mean …
Well, it’s mid-year and I’m back from a series of conferences in Europe and elsewhere, so this seems like a good time to take stock and round up some recent publications that may have slipped through the net.
The very final session at IAMCR 2019 features a keynote by Jeff Jarvis, who begins by describing himself as ‘not a real academic, but just a journalism professor’. His interest here is in looking past mass media, past media, indeed past text, past stories, and past explanations.
We begin, however, with Gutenberg’s (re)invention of the printing press in 1450, and the subsequent invention of the newspaper in 1605 and its gradual industrialisation. But print as a commercial and copyrighted model was perhaps an aberration: Tom Pettitt has written of the Gutenberg parenthesis: a business model which emerged from the …
The final speaker in this IAMCR 2019 session is Brian Goss, whose interest is in flak as a socio-political force. This draws on the propaganda model of news media, developed to describe the contemporary United States at the end of the Cold War. Media at the time were free from formal censorship, but several factors conditioned the performance of news workers, and this led to their allegiance to an overall (then mainly anti-communist) ideological positioning.
One of these factors is flak: a set of disciplinary mechanisms exerted from outside of news organisations. Flak comes into play when internal filters are insufficient …
The next speaker in this IAMCR 2019 session is Andrea Cancino-Borbón, whose focus is on satirical ‘fake news’ in Colombia.
At present, Enrique Peñalosa, the mayor of Bogotá, is highly unpopular with citizens, and an independent media outlet has been set up to publish satire and parody news about him – but articles from this site have at times been picked up by mainstream news outlets and misunderstood as real reporting. This moves such obviously ‘fake’ stories from a harmless and humorous context to a much more problematic place.
So, how is the personal and political profile of the mayor …
The next speaker in this IAMCR 2019 session is Vanessa Cortez, whose focus is on hate speech in the recent presidential election in Brazil. This election was marked by increasing polarisation and hate speech, and to study this the project gathered content around the election itself.
Hate speech attacks others for specific individual or group characteristics. This is now quite prominent on social media in Brazil. The present project gathered data from comments around 16 leading news outlets in Brazil, and used a dictionary of some 260 hate speech terms in Brazilian Portuguese to identify hateful comments.
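The dictionary-based identification approach described here can be illustrated with a minimal sketch. The term list below is a neutral placeholder – the actual project used a dictionary of some 260 Brazilian Portuguese hate speech terms, which are not reproduced here – and the matching logic is an assumption about how such a dictionary might plausibly be applied:

```python
import re

# Placeholder entries standing in for the project's ~260-term dictionary.
HATE_TERMS = {"exampleterm1", "exampleterm2"}

def is_hateful(comment: str) -> bool:
    """Flag a comment if it contains any dictionary term as a whole word."""
    tokens = re.findall(r"\w+", comment.lower())
    return any(token in HATE_TERMS for token in tokens)

comments = [
    "This comment contains exampleterm1 explicitly.",
    "A perfectly civil comment about the election.",
]
flagged = [c for c in comments if is_hateful(c)]  # only the first is flagged
```

Whole-word matching (rather than substring matching) is one obvious design choice here, to avoid flagging innocuous words that merely contain a listed term.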
The next speakers in this IAMCR 2019 session are Changfeng Chen and Wen Shi, whose focus is on the ethical dimensions of AI-driven ‘fake news’ detection – as part of many ethical issues related to artificial intelligence more generally.
Detection mechanisms fall into two broad categories: content model-based and social context-based algorithms. The former applies deception detection approaches to news texts: it searches the articles for linguistic clues that distinguish lies from truth, and can thereby identify rumours and misinformation.
Such models build on corpora of ‘fake’ and ‘true’ news …
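The corpus-based approach sketched in this excerpt might be illustrated with a toy naive Bayes text classifier – a deliberately simplified, hypothetical stand-in for the detection models the speakers discuss, with an invented miniature corpus:

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"\w+", text.lower())

def train(corpus):
    """corpus: list of (text, label) pairs; returns per-label word counts."""
    counts = {}
    for text, label in corpus:
        counts.setdefault(label, Counter()).update(tokenize(text))
    return counts

def classify(text, counts):
    """Score text under each label with add-one-smoothed log likelihoods."""
    vocab = {word for c in counts.values() for word in c}
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        scores[label] = sum(
            math.log((c[word] + 1) / (total + len(vocab)))
            for word in tokenize(text)
        )
    return max(scores, key=scores.get)

# An invented miniature training corpus of labelled headlines.
corpus = [
    ("shocking miracle cure revealed", "fake"),
    ("you won't believe this shocking secret", "fake"),
    ("parliament passes budget bill", "true"),
    ("minister announces new budget measures", "true"),
]
model = train(corpus)
label = classify("shocking secret cure", model)  # → "fake"
```

Real detection models are of course trained on far larger corpora and richer linguistic features, but the basic principle – learning which textual signals co-occur with ‘fake’ versus ‘true’ labels – is the same.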
The final IAMCR 2019 panel I’m attending today is on ‘fake news’ and hate speech, and we start with Andrew Duffy. His focus is on why people share ‘fake news’ stories via social media.
Much of the research on ‘fake news’ points out that it damages democracy – but it can also have significant negative or positive impacts on personal relationships. The sharing of such content fits into existing sharing behaviours; sharing the news with others is now a widespread social practice, and news is shared especially when stories are useful, emotional, bizarre, positive, entertaining, or exaggerated.