New Methods for Understanding Structural Network Polarisation and Affective Polarisation in Social Media

The keynote speaker for this session of the P³: Power, Propaganda, Polarisation ICA 2024 postconference is the wonderful Annie Waldherr from the University of Vienna, whose focus is on the use of online visual content for connective action and communication, especially also in the context of conflict. How do strategic actors and activists use visual communication, what narratives do they promote, how do audiences engage with this, and how do such narratives spread on social media as a result?

Annie’s work focusses on climate narratives in Austria and Germany, in particular, but the broader team also covers a wider transnational picture in Europe; it examines the production, pictures, publics, and propagation of climate change-related narratives across platforms. Key platforms here include Facebook, YouTube, Twitter, and TikTok, and a key interest is in concepts related to interactional, positional, and affective polarisation amongst the users who engage with relevant (visual) content.

Annie focusses here on two key aspects; the first of these is structural network polarisation. Overall, political polarisation is a process by which a social or political group is divided into two or more opposing sub-groups with conflicting and contrasting opinions. A standard method to examine this is community detection in network analysis, but this is problematic: modularity is a global network measure, and it neither provides a scale of the extent of polarisation in the network nor offers insights into the polarised stances of individual actors.
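The modularity limitation can be illustrated with a minimal sketch in Python, using a toy two-camp network; this is purely illustrative and not the project's actual code:

```python
# Illustration of the modularity limitation: the score is global, so it
# says nothing about how polarised any individual actor is.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy 'retweet network': two 5-node cliques joined by a single bridging edge
G = nx.barbell_graph(5, 0)

communities = greedy_modularity_communities(G)
Q = modularity(G, communities)

# Q is a single number for the whole partition: it tells us the network
# divides into communities, but not the *extent* of polarisation, nor
# where any individual node sits between the two camps.
print(f"communities found: {len(communities)}, modularity Q = {Q:.3f}")
```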

An alternative to this is ideal point estimation, which identifies the polarised end points of a unidimensional scale and thereby enables the assessment of individual actors’ affinity to those end points. This requires additional external information about the actors that is not always available, however.

A further extension of this is structural network polarisation, which can be studied through graph embeddings that convert network relationships into points in an n-dimensional space and enable the identification of key ‘anchor points’; this in turn enables the assessment of individual actors’ affinity with these anchors. Which anchors are selected in such an analysis depends to some extent on the choice of the dimensionality level as a hyperparameter of the analysis. Application of this approach to classic datasets already shows some very promising results.
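The anchor-point idea can be sketched as follows; the talk did not specify the embedding method, so a simple spectral embedding stands in here, and the anchor-selection heuristic (the two most distant nodes) is an assumption for illustration only:

```python
# A minimal sketch of anchor-based structural polarisation, assuming a
# spectral embedding of the adjacency matrix. Illustrative only.
import numpy as np
import networkx as nx

G = nx.barbell_graph(5, 0)  # toy two-camp network
A = nx.to_numpy_array(G)

# Embed nodes in n dimensions via the leading eigenvectors of the
# adjacency matrix (n is the dimensionality hyperparameter noted above)
n_dims = 2
eigvals, eigvecs = np.linalg.eigh(A)
embedding = eigvecs[:, -n_dims:]  # each row = one node's position

# Pick the two most distant nodes as the 'anchor points' of the two poles
dists = np.linalg.norm(embedding[:, None] - embedding[None, :], axis=-1)
i, j = np.unravel_index(dists.argmax(), dists.shape)

# Each actor's affinity to a pole = relative closeness to that anchor
to_i = np.linalg.norm(embedding - embedding[i], axis=1)
to_j = np.linalg.norm(embedding - embedding[j], axis=1)
affinity = to_j / (to_i + to_j)  # 1.0 = at anchor i, 0.0 = at anchor j

print(affinity.round(2))
```

Unlike a global modularity score, this yields a per-actor value on a continuous scale between the two poles.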

Having tested this method, the project then applied it to data collected through the Twitter Academic API, for a dataset of some 185,000 tweets and retweets of 35 German and Austrian climate activist accounts (this was intended to be monitored over a longer timeframe, but the closure of that API has made this impossible). The analysis pointed in the first place to a combination of professional climate activists and more general Twitter accounts with a personal interest in the issue, rather than any distinct ideological patterns – so it challenged the idea that there was substantial polarisation present in the dataset.

A second part of the project then focussed on affective polarisation, especially also in response to radical climate activists’ actions (such as road blockages or the symbolic defacing of artworks in art galleries); data were gathered here from TikTok videos and comment threads, and analysed for their information entropy: in other words, what was the level of surprise in the emotions expressed in comments on those TikTok videos?
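The entropy measure described here can be sketched in a few lines of Python; the emotion labels are illustrative, and this is a generic Shannon entropy rather than the project's own implementation:

```python
# Hedged sketch of 'surprise' in a comment thread's emotions: Shannon
# entropy of the distribution of emotion labels on one video's comments.
import math
from collections import Counter

def emotion_entropy(labels):
    """Shannon entropy (in bits) of a list of emotion labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A thread dominated by a single emotion carries little surprise...
low = emotion_entropy(["anger"] * 9 + ["surprise"])
# ...while an even mix of emotions is maximally surprising
high = emotion_entropy(["anger", "surprise", "happiness", "fear"])  # = 2.0 bits
print(low, high)
```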

This draws on some 1,000 TikTok videos and 110,000 comments about climate change in Austria and Germany during 2023; data gathering was based on account lists and hashtags, and comments were coded for emotions using the Llama large language model. Anger, surprise, and happiness were the dominant emotions here, while neutral reactions, fear, sadness, and disgust were a lot less prominent. The project also conducted keyframe extraction on the videos, and extracted content features from these keyframes. This enabled a clustering of videos by similarity which appears to have worked very well in identifying major video categories.
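The final clustering step can be sketched as below, assuming keyframe features have already been extracted into fixed-length vectors; the synthetic feature vectors and the nearest-seed assignment are stand-ins for the project's actual features and clustering method:

```python
# Illustrative sketch of similarity-based video clustering on
# (hypothetical) keyframe feature vectors.
import numpy as np

rng = np.random.default_rng(42)
# Synthetic feature vectors for 6 videos drawn from two content types
type_a = rng.normal(loc=0.0, scale=0.1, size=(3, 8))
type_b = rng.normal(loc=1.0, scale=0.1, size=(3, 8))
features = np.vstack([type_a, type_b])

# Assign each video to the nearer of two seed videos, grouping videos
# whose keyframe features are most similar
seeds = features[[0, 3]]
dists = np.linalg.norm(features[:, None] - seeds[None, :], axis=-1)
labels = dists.argmin(axis=1)
print(labels)  # first three videos form one cluster, last three the other
```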

Emotional responses to the videos across these clusters show a very similar distribution, however; there do not seem to be any significant differences in how TikTok users respond emotionally to the different kinds of videos.