Ethical Questions for ‘Fake News’ Detection Algorithms

The next speakers in this IAMCR 2019 session are Changfeng Chen and Wen Shi, whose focus is on the ethical dimensions of AI-driven ‘fake news’ detection – part of the many ethical issues related to artificial intelligence more generally.

Detection mechanisms fall into two broad categories: content-based and social context-based algorithms. The former applies deception detection approaches to news texts: it searches the articles for linguistic clues that distinguish lies from truth, and uses these rich linguistic cues to identify rumours and misinformation.

Such models build on corpora of ‘fake’ and ‘true’ news that are used to train detection algorithms, but they encounter a number of challenges when applied to journalistic texts. Content-based models are limited to finding linguistic clues for deliberate deception, and fail to handle false news in a broader sense. The problem here is the definition of ‘deception’ as opposed to an unintentional misrepresentation of the facts, which may not offer the same clues as an outright lie.
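To make the content-based approach more concrete, here is a minimal sketch of such a classifier in Python, using TF-IDF word frequencies and logistic regression from scikit-learn. The tiny labelled corpus is invented purely for illustration – real systems would train on far larger collections of verified ‘fake’ and ‘true’ articles, and the specific cues learned here are not meant to be representative.

```python
# Minimal sketch of a content-based 'fake news' classifier: it learns
# surface linguistic cues (word and phrase frequencies) from a labelled
# corpus. The toy corpus below is hypothetical, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training corpus: article texts with 'fake'/'true' labels.
texts = [
    "SHOCKING: miracle cure that doctors don't want you to know about",
    "You won't BELIEVE what this celebrity did next",
    "The city council approved the budget by a vote of 7 to 2 on Tuesday",
    "Researchers reported the findings in a peer-reviewed journal",
]
labels = ["fake", "fake", "true", "true"]

# TF-IDF turns each article into a vector of weighted word and bigram
# frequencies; logistic regression then learns which of these cues
# correlate with each label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Classify a new, unseen headline based on the cues learned above.
print(model.predict(["Miracle weight-loss trick the experts are hiding"]))
```

Note that a classifier like this can only flag the stylistic traces of deliberate deception it has seen in training; an unintentional misrepresentation written in sober news prose would offer it nothing to detect, which is exactly the limitation the speakers identify.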

Social context-based models focus on user–content and user–user interactions instead. This gives more weight to the people involved in the sharing of ‘fake news’ content, and draws on user features, news features, and network features in combination. Here, there are problems with the polarisation of discourses: the model might serve to empower elites and disempower vulnerable minority groups. Also, in this context the elites who operate the network algorithms have the power to define their affordances.
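A rough sketch of how such a model might combine these three feature families follows; all feature names and values here are hypothetical, chosen only to illustrate the shape of the input, not any actual system’s design.

```python
# Illustrative sketch of a social context-based approach: rather than the
# article text alone, the classifier combines features of the users who
# share a story (user features), the story's source (news features), and
# its diffusion pattern (network features). All values are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [account_age_days, follower_count,      <- user features
#            source_credibility_score,              <- news feature
#            cascade_depth, shares_per_hour]        <- network features
X = np.array([
    [12,    40, 0.1, 9, 300.0],   # new account, dubious source, viral cascade
    [3650, 900, 0.9, 2,   4.0],   # established account, credible source
    [30,    15, 0.2, 7, 150.0],
    [2000, 500, 0.8, 3,  10.0],
])
y = np.array([1, 0, 1, 0])  # 1 = flagged as 'fake', 0 = not flagged

# A random forest learns which combinations of these features co-occur
# with flagged content in the training data.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[20, 25, 0.15, 8, 250.0]]))
```

The ethical concern raised by the speakers is visible even in this toy version: whoever selects and weights features such as account age or follower count is encoding judgements about whose sharing behaviour looks suspicious, which is how such models can end up disadvantaging already marginalised user groups.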

There is thus a mismatch between the technical philosophy behind algorithmic news verification and the complex social logics of ‘fake news’. Dealing with ‘fake news’ must rely on more than just identifying problems within the texts; the dissemination of such content is also driven by social and societal factors that ought to be recognised.