You are here

How Do AI-Based Chatbots Respond to Questions about Electoral Disinformation?

The next speaker at the P³: Power, Propaganda, Polarisation ICA 2024 postconference is Heesoo Jang, whose interest is in the potential biases in Large Language Models. In the United States, a majority of Republican nominees for office in the last mid-term elections denied or questioned the 2020 presidential election results, and in Brazil similar election denialist groups have emerged. This is worsened by political attacks on press freedoms in these and other countries; globally, the challenges to democracies by the rise of far-right authoritarianism are growing. But most existing theories and concepts still focus on ‘stable’ democracies, wherever we might still be able to find them. Out approaches now need to centre normative democratic commitments.

This should include a democracy-centred approach to the study of elections, too, incorporating a holistic framework for the analysis, first, of how they are covered by the press: foregrounding fairly contested elections as both an established norm and a political ideal, and treating election denial as a particularly egregious transgression against normative standards. Current election coverage in the US fails to do so, and rarely includes pro-democracy frames. Other aspects are how they are conducted by campaigns, and what the role of tech platforms and social media is.

But more specifically, the role of AI technologies also needs to be addressed. These will inevitably be used both in political campaigning and for the detection of deceptive content – but in addition we must also pay attention to how Large Language Models like ChatGPT answer questions about election integrity, based on the material they were trained on: a strategic non-commitment to definitive responses on critical questions (in essence, a bothsidesing of issues) is built into the design of several such AI tools in order to avoid any repercussions that would hurt their corporate owners. AI tools are explicitly told to avoid judging or the other side of major ‘culture war’ debates as good or bad. This may have normatively bad effects on public trust in democratic processes.

How might we evaluate LLMs’ responses to such critical questions, then? We might for instance systematically prompt them on critical topics, and assess their responses for their clarity about democratic practices, adherence to democratic norms, and potential implications for democracy. Such a coded dataset might also become input into the further evaluation of the democratic norm performance of LLMs.