Panel: Coordinated Inauthentic Behaviour in Social Media: New Methods and Findings (ECREA 2021)

Tim Graham, Marian-Andrei Rizoiu, Axel Bruns, and Dan Angus; Fabio Giglietto, Nicola Righetti, Luca Rossi, and Giada Marino; Dan Angus, Tim Graham, Tobias Keller, Brenda Moon, and Axel Bruns; and Franziska B. Keller, Sebastian Stier, David Schoch, and JungHwan Yang

Social media platforms are increasingly forced to address what Facebook now describes as ‘coordinated inauthentic behaviour’ (Gleicher 2018): online influence operations that seek to trick platform algorithms into promoting and recommending ‘problematic information’ (Jack 2017), to mislead the human users of such platforms into accepting and sharing such content, and thereby also to affect broader issue frames and news agendas in mainstream media coverage. Concerns about such coordinated inauthentic behaviour extend earlier fears about the influence of malignant social bots, but also transcend them: drawing on social bots as well as human labour, coordinated inauthentic behaviour is likely to involve a combination of manual and automated activity. This additional human factor also complicates the detection of such coordinated activities, and their distinction from genuine, organic, authentic coordinated actions.

This cross-national and interdisciplinary panel approaches the study of coordinated inauthentic behaviour from a number of directions. It outlines novel and innovative detection and analysis approaches for a number of leading social media platforms, and presents their results in the context of domestic and international political debates across several national contexts. Further, it also considers how mainstream journalism might report on and respond to such activities in order to protect news audiences from being affected by coordinated inauthentic behaviours.

The first two papers in this panel focus especially on coordinated inauthentic link-sharing practices. Paper 1 introduces Hawkes Intensity Processes (HIP), a novel technique for inferring the coordinated content promotion schedules of automated social media accounts, and applies this to a major dataset of 16.5 million tweets containing links to ten major sites identified as sources of hyperpartisan content and ‘fake news’. In doing so, it uncovers new networks of inauthentic Twitter actors. Paper 2 investigates similar coordinated link-sharing activity on Facebook in Italy during the 2018 Italian and 2019 European elections. It uncovers evidence for the involvement of dozens of pages, groups, and public profiles in such media manipulation attempts. Paper 3 complements this work by focussing especially on the temporal posting patterns in such coordinated activity. It employs the recurrence plotting technique to identify traces of inauthentic actors’ use of automated scheduling tools in systematically posting content to a network of apparently unrelated pages, focussing here especially on a group of far-right pages on Facebook. Paper 4, finally, examines ten coordinated disinformation campaigns across the globe (e.g., Hong Kong, Russia, USA, Spain and Germany) and identifies important traits that help distinguish between those participating in the disinformation campaign and the regular users they try to imitate.

Collectively, these studies contribute substantially to advancing the methodological toolkit and extending the empirical evidence base for the study of coordinated inauthentic behaviour, while also not losing sight of the stakeholders that such work seeks to support. They offer an independent assessment of the nature and extent of the problem across several leading social media platforms, complementing the platform providers’ own investigations into such activities and identifying possible responses to such concerns for both social and mainstream media actors.

Paper Details

Paper 1 – Discovering the Strategies and Promotion Schedules of Coordinated Disinformation via Hawkes Intensity Processes

Tim Graham
Digital Media Research Centre
Queensland University of Technology
timothy.graham@qut.edu.au

Marian-Andrei Rizoiu
University of Technology Sydney
Marian-Andrei.Rizoiu@uts.edu.au

Axel Bruns
Digital Media Research Centre
Queensland University of Technology
a.bruns@qut.edu.au

Dan Angus
Digital Media Research Centre
Queensland University of Technology
daniel.angus@qut.edu.au

‘Fake news’ and broader ‘information disorders’ (Wardle & Derakhshan, 2017) such as mis- and disinformation have emerged as global issues that threaten to undermine democracy and authentic political communication on social media (Benkler et al., 2018). Increasingly sophisticated coordination strategies have intensified the scale and scope of the impact that disinformation has on public opinion and democratic trust. Howard et al. (2018) found that coordinated disinformation operations are now occurring in 48 countries, and in 2019 the European External Action Service detected and exposed over 1,000 cases of disinformation within the European Union (European Commission, 2019). Whilst disinformation has attracted much scholarly attention, most studies to date have focussed on the diffusion and impact of individual content (e.g. ‘fake news’ articles) and the activity of individual accounts (e.g. bots and trolls).

An emerging problem is to understand message coordination strategies, where content authored and distributed by agents (e.g. Twitter trolls) is governed and scheduled by some unknown principal actor (Keller et al., 2019). We know that coordinated promotion (e.g. sharing, liking, retweeting) of ‘fake news’ articles by trolls and social bots can greatly increase and amplify the negative effects of these attempts to sow discord and manipulate public conversations about election candidates and partisan issues such as immigration and climate change. Likewise, it is evident that disinformation campaigns unfold via ‘collaborative work’ that co-opts and cultivates organic systems in order to produce desired effects such as increased polarisation, distrust in news media and confusion of the audience (Wilson et al., 2018). This makes identifying ‘inauthentic’ versus ‘organic’ activity ever more difficult, as they are intricately enmeshed in real-world disinformation campaigns.

In this paper, we tackle the problem of inferring the coordinated promotion schedules of ‘fake news’ articles using a novel approach known as Hawkes Intensity Processes (HIP; see Rizoiu et al., 2017). We analyse the diffusion of articles from ten major sources of hyperpartisan information and ‘fake news’ within over 16.5 million tweets that linked to content from these sites during July to September 2019. Using HIP, we uncover not only coordination strategies but also the promotion schedules of ‘fake news’ content, where agents (in this case Twitter accounts) are being centrally managed by principals (e.g. state operatives, government officials, etc.) in order to strategically promote ‘fake news’ content and maximise its virality and longevity in the social memory. This paper provides preliminary results from this ongoing research, highlighting the current challenges as well as open problems and gaps for future work.
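As a rough illustration of the modelling idea behind HIP, the sketch below simulates a discrete self-exciting popularity series in which observed activity is driven partly by an exogenous promotion schedule and partly by an endogenous, power-law-decaying response to past activity. The function name, parameter values, and promotion schedule are assumptions made for illustration only; they are not the formulation or fitted values from Rizoiu et al. (2017) or from this study.

    # Minimal, illustrative sketch of a discrete self-exciting popularity series
    # in the spirit of HIP (Rizoiu et al., 2017). Parameter names and values are
    # assumptions chosen for illustration, not the paper's formulation or fits.
    import numpy as np

    def popularity_series(s, gamma, eta, C, c, theta):
        """Expected activity xi[t] driven by an exogenous stimulus series s[t]."""
        T = len(s)
        xi = np.zeros(T)
        for t in range(T):
            past = np.arange(t)
            # power-law memory kernel over past activity (endogenous response)
            decay = (t - past + c) ** (-(1.0 + theta))
            xi[t] = gamma + eta * s[t] + C * np.sum(xi[past] * decay)
        return xi

    # Hypothetical promotion schedule: external pushes at hours 0, 24 and 48
    s = np.zeros(72)
    s[[0, 24, 48]] = 100.0
    print(popularity_series(s, gamma=0.1, eta=0.5, C=0.8, c=1.0, theta=0.6).round(2))

Under a model of this kind, fitting the parameters to an observed retweet series and recovering an exogenous stimulus that spikes at regular, repeated intervals would be one signal of a centrally managed promotion schedule.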

Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press.

European Commission. (2019). Action plan against disinformation: Report in progress. Retrieved 20 November 2019 from https://ec.europa.eu/commission/sites/beta-political/files/factsheet_disinfo_elex_140619_final.pdf.

Howard, P. N., & Kollanyi, B. (2016). Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum. SSRN Electronic Journal. doi:10.2139/ssrn.2798311

Keller, F. B., Schoch, D., Stier, S., & Yang, J. (2019). Political Astroturfing on Twitter: How to Coordinate a Disinformation Campaign. Political Communication, 1-25.

Rizoiu, M. A., Xie, L., Sanner, S., Cebrian, M., Yu, H., & Van Hentenryck, P. (2017, April). Expecting to be hip: Hawkes intensity processes for social media popularity. In Proceedings of the 26th International Conference on World Wide Web (pp. 735-744). International World Wide Web Conferences Steering Committee.

Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policymaking. Council of Europe Report DGI (2017) 09.

Wilson, T., Zhou, K., & Starbird, K. (2018). Assembling Strategic Narratives: Information Operations as Collaborative Work within an Online Community. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 183.

Paper 2 – It Takes a Village to Manipulate the Media: Coordinated Link Sharing Behaviour during 2018 and 2019 Italian Elections

Fabio Giglietto
University of Urbino Carlo Bo
Department of Communication and Human Studies
fabio.giglietto@uniurb.it

Nicola Righetti
University of Urbino Carlo Bo
Department of Communication and Human Studies
nicola.righetti@uniurb.it

Luca Rossi
IT University of Copenhagen
lucr@itu.dk

Giada Marino
University of Urbino Carlo Bo
Department of Communication and Human Studies
giada.marino@uniurb.it

Over the last few years, attempts to define, understand and fight the spread of problematic information in contemporary media ecosystems have proliferated. Most of these attempts focus on the detection of false content and/or bad actors. Using the frame of media manipulation and a revised version of the original definition of “coordinated inauthentic behavior”, we present a study based on an unprecedented combination of Facebook data, accessed through the CrowdTangle API, and two datasets of Italian political news stories published in the run-up to the 2018 Italian general election (N = 84,815) and 2019 European election (N = 164,760).

By focusing on actors’ coordinated behavior, we identified 24 (2018 election dataset) and 92 (2019 election dataset) strongly coordinated networks, composed of 82 and 606 pages, groups, and verified public profiles (“entities”) respectively, which shared the same political news articles on Facebook within a very short period of time. Some entities in our networks were openly political, while others, despite also sharing political content, deceptively presented themselves as entertainment venues. The proportion of inauthentic entities in a network affects the diversity of the news media sources shared, thus pointing to different strategies and possible motivations.
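The general detection logic can be sketched as follows: group link shares by URL, flag pairs of entities that share the same link within a very short time window, and retain pairs that do so repeatedly. The sketch below illustrates only this general idea; the field names, the 30-second window, and the repetition threshold are illustrative assumptions rather than the parameters or code used in the study.

    # Simplified sketch of the coordinated link-sharing logic described above:
    # entities that repeatedly share the same URL within a short time window
    # are flagged as a coordinated pair. Field names, the 30-second window and
    # the repetition threshold are illustrative assumptions only.
    from collections import Counter

    def coordinated_pairs(shares, window_seconds=30, min_co_shares=2):
        """shares: list of dicts with 'url', 'entity' and 'timestamp' (seconds)."""
        by_url = {}
        for share in shares:
            by_url.setdefault(share["url"], []).append(share)

        pair_counts = Counter()
        for url_shares in by_url.values():
            url_shares.sort(key=lambda share: share["timestamp"])
            for i, a in enumerate(url_shares):
                for b in url_shares[i + 1:]:
                    if b["timestamp"] - a["timestamp"] > window_seconds:
                        break  # later shares are outside the window
                    if a["entity"] != b["entity"]:
                        pair_counts[tuple(sorted((a["entity"], b["entity"])))] += 1

        # keep only pairs that rapidly co-shared links at least min_co_shares times
        return {pair: n for pair, n in pair_counts.items() if n >= min_co_shares}

    shares = [
        {"url": "http://example.org/story-1", "entity": "Page A", "timestamp": 0},
        {"url": "http://example.org/story-1", "entity": "Group B", "timestamp": 12},
        {"url": "http://example.org/story-2", "entity": "Page A", "timestamp": 500},
        {"url": "http://example.org/story-2", "entity": "Group B", "timestamp": 510},
    ]
    print(coordinated_pairs(shares))  # {('Group B', 'Page A'): 2}

Pairs retained in this way can then be treated as edges of a coordination network, whose densely connected components correspond to the kinds of networks reported above.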

The presentation will have both theoretical and empirical implications: it situates the concept of “coordinated inauthentic behavior” in the existing literature, introduces a method to detect coordinated link sharing behavior, and points out the different strategies and methods employed by networks of actors seeking to manipulate the media and public opinion.

Paper 3 – Recurrence Plotting for Detecting Duplicate Online Posting Activities

Dan Angus
Digital Media Research Centre
Queensland University of Technology
daniel.angus@qut.edu.au

Tim Graham
Digital Media Research Centre
Queensland University of Technology
timothy.graham@qut.edu.au

Tobias Keller
Digital Media Research Centre
Queensland University of Technology
tobias.keller@qut.edu.au

Brenda Moon
Digital Media Research Centre
Queensland University of Technology
brenda.moon@qut.edu.au

Axel Bruns
Digital Media Research Centre
Queensland University of Technology
a.bruns@qut.edu.au

There is significant concern regarding how bots and other institutional actors are engaging in and directing inauthentic activities in online social spaces. A specific issue at present is the orchestration of multiple accounts or pages that seek to artificially boost the visibility of content posted online through coordinated duplicate posting behaviours (Badawy, Addawood, Lerman, & Ferrara, 2019). Such artificial boosting is typified by duplication of activities and actions online across multiple accounts that may be seen by different or indeed the same online audiences (Weedon, Nuland, & Stamos, 2017).

The coordination of online activities has been studied through the use of network science and statistical techniques which often look to the specific timings of activities, or other ‘abnormal’ behaviours. However, due to the adversarial nature of these online activities it is an uphill battle to continue to accurately detect such coordination, as the orchestrators of these activities are shifting their tactics to counter new methods of detection.

To assist ongoing efforts to detect artificial boosting, this paper turns to a lesser-known method from the study of complex dynamical systems: recurrence plotting.

The recurrence plotting technique was originally developed to display and identify patterns in time series data, specifically data from high-dimensional dynamical systems (Eckmann, Kamphorst, & Ruelle, 1987). The recurrence plot is a 2D plot in which the horizontal and vertical axes both represent the time series, and individual elements of the plot mark the times at which the phase space trajectory of the system revisits the same region of phase space. Put another way, the recurrence plot locates and highlights closely matched sequences of activities, events, or data points.

In the case of online coordination, consider for example a sequence of hyperlinks to misleading websites shared to two different Facebook pages over the space of hours, days, or indeed years: regardless of the actual times at which these links were shared, if the sequence in which they were shared is similar, a recurrence plot will detect and visually indicate this. What makes detection likely in the case of inauthentic online behaviour is that, due to the labour involved in managing multiple pages or sites, the orchestration of posting may rely on automated content schedulers. While these schedulers can randomise the timing of posts, the sequencing is more difficult to randomise completely, and it is therefore the sequencing of these posts that enables the detection of inauthentic behaviour via recurrence analysis.
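A minimal sketch of how such matched sequences surface in a cross-recurrence matrix is given below, using hypothetical page names and links; diagonal runs of matches are the structures that a recurrence plot makes visible, regardless of when the individual posts were made.

    # Illustrative sketch of a cross-recurrence matrix over two pages' posting
    # sequences: cell (i, j) is 1 when item i on page 1 matches item j on page 2.
    # Diagonal runs of 1s reveal shared posting sequences even when the actual
    # posting times differ. Page names and links below are hypothetical.
    import numpy as np

    def cross_recurrence(seq_a, seq_b):
        return np.array([[int(a == b) for b in seq_b] for a in seq_a])

    page_1 = ["link3", "link1", "link4", "link2", "link5"]
    page_2 = ["link1", "link4", "link2", "link7", "link5"]

    R = cross_recurrence(page_1, page_2)
    print(R)
    # The diagonal of 1s from R[1, 0] to R[3, 2] marks the shared subsequence
    # link1 -> link4 -> link2, which appears as a line in the recurrence plot.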

In this paper we explain the use of this recurrence plot approach in exposing inauthentic posting behaviour across a number of far-right Facebook pages. We compare the use of recurrence plotting to more standard measures of comparison based on post timing, and reveal how recurrence plotting enables the detection of more, or different forms of, coordinated behaviour.

Badawy, A., Addawood, A., Lerman, K., & Ferrara, E. (2019). Characterizing the 2016 Russian IRA influence campaign. Social Network Analysis & Mining, 9(1), 31. doi:10.1007/s13278-019-0578-6

Eckmann, J. P., Kamphorst, S. O., & Ruelle, D. (1987). Recurrence Plots of Dynamical Systems. Europhysics Letters, 5, 973-977.

Weedon, J., Nuland, W., & Stamos, A. (2017). Information operations and Facebook. Retrieved from Facebook: https://fbnewsroomus.files.wordpress.com//04/facebook-and-information-operations-v1.pdf.

Paper 4 – Astroturfing in Hong Kong and Elsewhere: Patterns of Coordination in Hidden Twitter Campaigns

Franziska B. Keller
Hong Kong University of Science and Technology
Division of Social Science

Sebastian Stier
GESIS - Leibniz-Institut für Sozialwissenschaften
Computational Social Science

David Schoch
The University of Manchester
School of Social Sciences

JungHwan Yang
University of Illinois at Urbana-Champaign
Department of Communication

Political astroturfing, a centrally coordinated disinformation campaign in which participants pretend to be ordinary citizens, has been employed by a variety of state actors on social media platforms in an attempt to influence public opinion in a large number of countries. We argue that these campaigns should be defined as disinformation because, while they may not necessarily spread falsehoods, they do deceive their audience about the nature of the individuals in charge of the campaign accounts.

We examine ten such campaigns on Twitter, most of which have been identified by the company itself. These campaigns target a wide range of countries over a period of eight years and are waged in different languages and cultural contexts. Some aim at a domestic audience (Hong Kong, Russia, South Korea, Ecuador, Venezuela, and Spain), others at the public of a specific territory abroad – such as the Russian Internet Research Agency (IRA) targeting the public in the US or Germany – or at the international public at large – such as Iran or the UAE promoting their foreign policy goals. Some seek to undermine trust in institutions and polarize the target audience (e.g. the IRA’s intervention in the US elections), while others are straightforward propaganda campaigns in favor of a government – as appears to be the case in Venezuela – or against its opponents, such as China’s campaign against the Hong Kong protesters.

Despite these variations, all the campaigns share important traits that help distinguish their participants from the regular users they try to imitate. We theorize these traits as a natural outcome of the centralized command structure inherent in such campaigns – as opposed to the decentralized nature of genuine grassroots movements – and of the principal-agent problems that emerge when campaign participants are not intrinsically motivated. We show, for instance, that campaign accounts tend to be more active during office hours and on weekdays in their country of origin – indicating that participants are only active when they are actually paid and supervised for their work. The campaigns all display suspicious patterns of coordinated messaging: they contain a large number of account pairs that either post the same original message or retweet the same message at almost the same time, or else disproportionately retweet messages from other campaign accounts. This points to participants shirking by sharing pre-existing text instead of coming up with original messages for each of their accounts.
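As a rough illustration of the co-tweeting pattern described above, the sketch below flags pairs of accounts that post identical text within a few seconds of each other; the ten-second window and the sample data are illustrative assumptions, not the thresholds or code used in the study.

    # Rough sketch of the co-tweeting pattern described above: pairs of accounts
    # posting identical text within a few seconds of each other. The ten-second
    # window and the sample data are illustrative assumptions only.
    from collections import defaultdict
    from itertools import combinations

    def co_tweeting_pairs(tweets, window_seconds=10):
        """tweets: list of (account, text, unix_timestamp) tuples."""
        by_text = defaultdict(list)
        for account, text, ts in tweets:
            by_text[text].append((ts, account))

        pairs = set()
        for posts in by_text.values():
            posts.sort()
            for (ts_a, acc_a), (ts_b, acc_b) in combinations(posts, 2):
                if acc_a != acc_b and ts_b - ts_a <= window_seconds:
                    pairs.add(tuple(sorted((acc_a, acc_b))))
        return pairs

    tweets = [
        ("account_1", "Support the bill now!", 1000),
        ("account_2", "Support the bill now!", 1004),
        ("account_3", "Lovely weather today", 1500),
    ]
    print(co_tweeting_pairs(tweets))  # {('account_1', 'account_2')}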

In a series of case studies, we show that these differences persist even if we compare campaign participants not just to a random sample of ordinary users, but to politically interested users or to users who participate in the same debates as the campaign. We also show that these differences help us identify additional participants. Finally, we highlight that the “social bots” that often captivate the general public’s attention form only a small part of most campaigns, and that most participating accounts are at most partially automated.