The post-lunch session at the P³: Power, Propaganda, Polarisation ICA 2024 postconference starts with my excellent QUT colleague Tariq Choucair, whose interest is in measuring polarising discourses during election campaigns. Tariq and the team have developed a method to measure polarisation at the level of specific discourses: it is rooted in core principles and operationalised approaches that are adaptable to other contexts. Measuring polarisation at the discourse level is important; so far, much of the work on polarisation has been done using surveys on self-reported political positioning or feelings towards leaders or parties, or has drawn on voting patterns in parliaments – but in recent years there has been a growth in attention to polarising rhetoric.
This is important because it may enable us to identify highly polarised groups or actors, and assess the drivers, effects, and consequences of such discourses. Importantly, this does not assess issue-based, ideological, or interpretive forms of polarisation: instead, it focusses on how differences between groups are understood by different partisans – how speakers engage in affective, identity-based, and relational forms of polarisation.
Current measures struggle with such approaches, however. They often assess only whether discourses are polarised or not, contain emotion or not, feature hostility or not – but they do not assess who or what the objects or targets of such polarisation are.
This project explored such polarising rhetoric in the posts of political leaders in recent elections in Australia, Denmark, Peru, and Brazil. It is guided by three principles: that it matters how in-groups are constructed in polarising discourse, not just how out-groups are constructed; that there are different levels of affiliation and opposition that express polarisation between in- and out-groups; and that we need to identify towards whom polarising discourse is directed.
Following this method, each time an entity is mentioned by a political actor, that mention is classified by the level of affiliation or opposition that the actor expresses towards that entity. Such entity extraction is done automatically using Natural Language Processing tools or Large Language Models; next, a sample of these data is coded manually, and that hand-coded dataset is then used to train a Large Language Model to code the remainder of the dataset.
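As a rough illustration of what such a mention-level coding pipeline might look like, here is a minimal Python sketch. The use of spaCy for entity extraction, the five-point affiliation/opposition scale, and the classify_stance() placeholder are my own assumptions for illustration, not the team's actual tooling or codebook; in the project itself the classification step is handled by a Large Language Model trained on the hand-coded sample.

```python
# Minimal sketch of a mention-level coding pipeline (illustrative assumptions only).
import spacy

nlp = spacy.load("en_core_web_sm")  # generic NER model, assumed for illustration

def classify_stance(sentence: str, entity: str) -> str:
    """Placeholder for the trained classifier: returns an illustrative label such as
    'extreme affiliation', 'affiliation', 'neutral', 'opposition', or 'extreme opposition'."""
    return "neutral"  # stub; in practice this would be the model's prediction

def code_post(post_text: str, speaker: str) -> list[dict]:
    """Extract every entity mention in a post and attach a stance label."""
    doc = nlp(post_text)
    coded = []
    for ent in doc.ents:
        if ent.label_ in {"PERSON", "ORG", "NORP", "GPE"}:  # plausible political targets
            coded.append({
                "speaker": speaker,
                "entity": ent.text,
                "sentence": ent.sent.text,
                "stance": classify_stance(ent.sent.text, ent.text),
            })
    return coded
```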
Afterwards, it is possible to aggregate these results across each leader’s posts and assess their overall rhetorical strategies. Emerging from this is that the Latin American leaders tend to express far more extreme affiliation with and opposition to their respective in- and out-groups than the Australian and Danish leaders. Danish leaders also devote far less time to discussing their out-groups, and more to their in-groups; Brazilian leaders spend far more time attacking their respective out-groups, and the out-groups addressed by Brazilian leaders also represent a very different set of entities.
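For the aggregation step, a simple sketch of how the mention-level codes could be rolled up into a per-leader rhetorical profile might look as follows; the column names and stance categories again follow the illustrative assumptions above rather than the project's actual data structure.

```python
# Illustrative aggregation of mention-level codes into per-leader stance profiles.
import pandas as pd

mentions = pd.DataFrame(coded_mentions)  # one row per coded entity mention, as sketched above

# Count mentions per (leader, stance) pair, then convert to shares per leader,
# e.g. what proportion of a leader's mentions express extreme opposition.
counts = mentions.groupby(["speaker", "stance"]).size()
shares = counts / counts.groupby(level="speaker").transform("sum")
profile = shares.unstack(fill_value=0)
print(profile)
```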