The next speakers in this Bots Building Bridges workshop session are Ozgur Can Seckin and Bao Truong, who begin by outlining the issue of political polarisation – especially in the United States. They distinguish between polarisation on specific issues on the one hand, and affective polarisation between the partisans supporting different political groups on the other; this latter form of polarisation is fundamentally a problem of in-group and out-group exposure and engagement.
Some approaches have sought to address this by increasing exposure to out-group content and perspectives; some have attempted to encourage people to imagine the views of the other side; some have tried to correct misperceptions of the out-group; and some have sought to foster inter-group communication across partisan camps. The idea here is that same-party conversations can intensify polarisation, while cross-party disagreement can decrease it – however, poorly moderated or managed inter-group communication can also backfire and intensify perceptions of difference between the sides.
Critical here is congruence: disagreement from in-group members lowers perceptions of in-group homogeneity, while agreement from out-group members lowers perceptions of out-group difference and extremity (and can thus foster more positive feelings towards the out-group) – yet both forms of incongruence also produce cognitive dissonance (and can thus generate negative feelings towards the out-group).
The project seeks to test these assumptions using Large Language Models: LLMs can realistically model the expression of partisan interlocutors, and observing how human participants engage with such partisan LLM agents in a discussion about current political issues makes it possible to explore the short-term effects of such conversations on participants' perceptions of their in- and out-groups.
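The speakers do not detail their implementation, but as a rough illustration of how such a persona-conditioned partisan agent might be set up, the following minimal Python sketch assumes the OpenAI chat completions API; the model name, persona wording, and opening question are purely hypothetical placeholders rather than the project's actual configuration:

```python
# Minimal sketch of a persona-prompted partisan LLM agent.
# Assumes the OpenAI chat completions API; all prompts and the
# model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

PARTISAN_PERSONA = (
    "You are a committed Democrat discussing current political issues. "
    "Stay in character, express typical in-group positions, and respond "
    "conversationally in two or three sentences."
)

def partisan_reply(conversation: list[dict]) -> str:
    """Generate the agent's next turn, conditioned on the assigned persona."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{"role": "system", "content": PARTISAN_PERSONA}] + conversation,
    )
    return response.choices[0].message.content

# Example turn: a human participant's opening message.
history = [{"role": "user", "content": "What do you think about gun control?"}]
print(partisan_reply(history))
```

Keeping the partisan identity in a fixed system prompt, as here, is one straightforward way to hold the agent's political character constant across a study session while the conversation history varies.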
Before this can be done in practice, however, it is important to ensure that the chatbot reliably performs the political character assigned to it, and safeguards to prevent inappropriate behaviour will also be required. Both can be tested by getting two chatbots to talk to each other, with one of them simulating the human participant (a rough sketch of such a setup follows below). Initial test results are encouraging, and the next step is now to move to tests with human participants.
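One way to picture this bot-to-bot validation stage, again assuming the OpenAI chat API and with all prompts, turn counts, and parameters as illustrative assumptions: one agent keeps its assigned partisan persona while a second agent is prompted to act as a typical study participant, and the resulting transcripts can then be audited (manually or with an automated classifier) for persona drift and inappropriate content before any humans are involved.

```python
# Hedged sketch of the bot-to-bot test: two persona-prompted agents
# converse, one simulating the human participant, so transcripts can be
# audited for persona fidelity and safety. All prompts are assumptions.
from openai import OpenAI

client = OpenAI()

PARTISAN = "You are a committed Republican discussing politics; stay in character."
SIMULATED_HUMAN = "You are an ordinary study participant chatting about politics."

def next_turn(system_prompt: str, transcript: list[str], speaks_first: bool) -> str:
    """Ask one agent for its next message, given the alternating transcript."""
    messages = [{"role": "system", "content": system_prompt}]
    for i, text in enumerate(transcript):
        # From this agent's point of view, its own past turns are "assistant"
        # messages and the other agent's turns are "user" messages.
        own_turn = (i % 2 == 0) == speaks_first
        messages.append({"role": "assistant" if own_turn else "user", "content": text})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    return reply.choices[0].message.content

transcript = ["What's your view on immigration policy?"]  # simulated human opens
for _ in range(3):  # a few test exchanges
    transcript.append(next_turn(PARTISAN, transcript, speaks_first=False))
    transcript.append(next_turn(SIMULATED_HUMAN, transcript, speaks_first=True))

for turn in transcript:  # transcript can now be checked for drift and safety
    print(turn, "\n---")
```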