It’s a Wednesday in Germany, and I’m in Bielefeld for a workshop of the Bots Building Bridges (3B) project. We start with an overview of the project’s activities to date, with Florian Muhle, Ole Pütz, Rob Ackland, and Matthias Orlikowski. The project focusses on online political discourse, and the dysfunctions in such discourse that are apparent in social media environments. This also addresses the questions of ‘echo chambers’, of polarisation, and of impacts on democratic discourse. Social media are not solely to blame for this: it may also be possible to support productive political discourse through social media, given the right technological supports.
Part of this raises the question of social bots and other automation technologies: these can be used for positive or negative purposes in online discourse. This also requires a focus on the dynamics of online discussions, paying attention to the use of argument and the role of incivility. From a technological point of view, this requires tools for bot detection (like the famous if controversial Botometer, which was built for Twitter and is no longer operational); mechanisms for supporting civil and balanced online discussions; and possibly also bridging bots that connect users in online communities in order to increase diversity in political discussions.
The project’s analysis of coordinated bot behaviours has identified a number of key strategies, for instance: hashtag bombing, massive coordinated retweeting, targeting of popular accounts, mass dissemination of identical content, and hashtag hijacking. All of these pre-date ChatGPT, however, and the growing availability of generative AI has added considerable further sophistication to these efforts – the content of such coordinated posts is now no longer as identical as it used to be, and is therefore much harder to detect with conventional methods. Sometimes these AI systems trip up, however, and leave behind phrases like “as an AI language model”; there are some other telltale signs that indicate AI-generated social media posts and profile content, too.
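To illustrate the point – and this is purely my own back-of-the-envelope sketch in Python, not the project’s actual detection pipeline; the phrase list and similarity threshold are assumptions for illustration only – conventional detection of such coordination might look for known AI boilerplate and for near-identical post texts:

```python
from difflib import SequenceMatcher

# Illustrative examples of boilerplate that sometimes leaks into
# AI-generated posts; any real list would need careful curation.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfil this request",
]

def looks_ai_generated(post: str) -> bool:
    """Flag posts containing known AI-assistant boilerplate."""
    text = post.lower()
    return any(phrase in text for phrase in TELLTALE_PHRASES)

def near_duplicates(posts: list[str], threshold: float = 0.9) -> list[tuple[int, int]]:
    """Return index pairs of posts whose texts are nearly identical."""
    pairs = []
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            if SequenceMatcher(None, posts[i], posts[j]).ratio() >= threshold:
                pairs.append((i, j))
    return pairs
```

Generative AI paraphrasing pushes coordinated posts below any fixed similarity threshold, of course, which is exactly why such conventional methods now struggle.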
Generative AI can also be used for good in social media contexts, however, and there is at any rate a need to understand the embedding of specific social media spaces within a much broader and more complex hybrid media system.
Another part of the project also investigated the dynamics of online discussions; one focus was on the 2020 US presidential debates on Twitter, for instance, where importantly the researchers also collected full conversations, not just the tweets that matched specific keywords. A manual analysis of some 260 of these reply chains explored the various forms of incivility present in this dataset, and identified typical dynamics within such uncivil discussions. Personal attacks emerged as the most problematic form of incivility, but such attacks do not necessarily stop the discussion: even when personal attacks occur mid-debate, the argument between participants tends to continue. This can be seen as a positive sign of debate resilience.
Such insights were then also explored further in face-to-face workshops with civil society organisations, online community managers, and other relevant stakeholders; this work sought to develop new approaches to intervening in public online discussions, by human or technical means. As part of this work, the project developed a useful taxonomy of possible interventions: non-verbal mechanical interventions (direct or indirect regulation), as well as conversational interventions including denouncing (referencing social norms or warning of consequences), empathising (empathy, humour, counterhostility), and debating approaches (providing facts and pointing out inconsistencies).
Further, the project sought to develop measures of the extent to which individuals or groups contribute positively to the information environment – with a particular focus on discursive engagement rather than the mere sharing of information. This aims to construct an Information Health Index (IHI) for each post that contributes to a conversation, assessing how and to what extent the reply activity around it produces information; this can then also be aggregated per individual or group of individuals. The IHI is based on the characteristics of the reply sequence the post is embedded in (how reciprocal it is, how politically or attitudinally diverse the group of participants is, etc.). Such an assessment is complicated, however, by the fact that each reply can be embedded within multiple subsequent reply chains.
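As a rough illustration of how such an index might work – this is again my own sketch, and the equal weighting and simple averaging are assumptions, not the project’s actual formula – a per-post IHI could combine reply-chain features and then be averaged across the multiple chains a post belongs to:

```python
from dataclasses import dataclass

@dataclass
class ReplyChainFeatures:
    """Features of one reply sequence a post is embedded in."""
    reciprocity: float          # share of reply pairs that are mutual, 0..1
    viewpoint_diversity: float  # political/attitudinal diversity of participants, 0..1

def post_ihi(chains: list[ReplyChainFeatures]) -> float:
    """Toy Information Health Index for one post.

    A post can sit in several reply chains at once, so we average an
    equal-weight score across all chains it belongs to (an assumption).
    """
    if not chains:
        return 0.0
    scores = [0.5 * c.reciprocity + 0.5 * c.viewpoint_diversity for c in chains]
    return sum(scores) / len(scores)

def aggregate_ihi(post_scores: list[float]) -> float:
    """Aggregate per-post scores for an individual or group of individuals."""
    return sum(post_scores) / len(post_scores) if post_scores else 0.0
```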
This can then also be used to identify partisan clusters of participants that exhibit low information health (described here as ‘echo chambers’), and to compare patterns across issues, events, platforms, and timeframes; the sophistication of such information health measures can of course still be improved considerably.
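Purely as a sketch of that idea, with the method as my own assumption (here: modularity-based community detection on the reply network, and an arbitrary cutoff of 0.3), flagging low-health clusters might look like this:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def low_health_clusters(
    reply_graph: nx.Graph,
    user_ihi: dict[str, float],
    cutoff: float = 0.3,
) -> list[tuple[list[str], float]]:
    """Flag participant clusters whose mean IHI falls below a cutoff.

    reply_graph: undirected who-replied-to-whom network.
    user_ihi:    hypothetical per-user IHI scores.
    """
    flagged = []
    for community in greedy_modularity_communities(reply_graph):
        members = list(community)
        mean_ihi = sum(user_ihi.get(u, 0.0) for u in members) / len(members)
        if mean_ihi < cutoff:
            flagged.append((members, mean_ihi))
    return flagged
```

Running this per issue, event, platform, or timeframe would then allow exactly the kind of comparison the project describes.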
Finally, the project has also experimented with the development of bots that engage in online debates; some such bots are designed to offer alternative views to those expressed by human participants, in order to generate constructive discussion and elicit further explicit justification for participants’ views (but not necessarily to convince human participants of those alternative views). Such bots have already been successful in eliciting further responses from participants, and thereby in extending online debates. Other bots have explored the use of humour, for instance through visual materials.