News User Attitudes towards News Personalisation Algorithms

The next speaker at IAMCR 2019 is Jaron Harambam, whose focus is on the personalisation of news content to individual readers and the implications that this may have for the news that users encounter. Personalisation may help readers navigate a vast and complex information landscape, and enable news outlets to provide not only popular stories but also relevant niche content to the right audiences.

At the same time, however, it may also mean that readers no longer share the same information landscape, and this could have deleterious impacts on democratic information and participation – the emergence of a ‘filter bubble’ is the central concern here.

If this is the case, how can we achieve greater fairness in news personalisation? A number of principles will be important here: transparency in the work of algorithms; user control over the way these technologies work; explainability of their actions; credibility of their operation; diversity in their choices; and therefore also a value-sensitive design of algorithms.

The project pursued this goal in collaboration with AI specialists at one of the largest media corporations in the Netherlands, and explored these issues in focus groups with news readers – identifying their ideas, concerns, and needs; exploring specific situations; discussing possible solutions; and prioritising future developments.

Key reader concerns include a loss of general overview of the news, and of control over what news they receive. They worry that this might result in a limited news diet, and in being locked into filter bubbles. Further, there are also significant concerns about data collection and interpretation by news organisations and intermediary operators, not least in the context of broader recent concerns about datafication.

The concept of fairness remains elusive to news users in this context, and the project sought to elicit their perspectives by exploring specific situations. Ultimately, they felt that media corporations should be open and transparent about their motivations for using personalisation technologies, as well as about their practices and data security measures. Further, they also wanted to retain their own control through user profile settings and feedback and intervention opportunities. Finally, they felt that media companies had a public duty to avoid the emergence of filter bubbles.

Solutions to all this fall into three categories: legal measures, explanation, and control. Legal approaches may include the development of stricter laws, codes of conduct, or certification measures that guide the development and use of personalisation algorithms. But audiences also view these with suspicion, and feel that they may not be enforced sufficiently firmly. Another approach is to provide more detailed explanations of algorithmic choices – but such explanations also reveal how much information companies have already collected about their users, and this might trouble them. Finally, more personal controls were seen as valuable: a choice of selection algorithms, user input options to choose specific interest areas, and output options to select formats and selections were all regarded as useful.

Control, especially, seems to emerge as an important future issue here. Additionally, the ‘filter bubble’ metaphor seems to have enormous power over our personalisation imaginaries: there is a palpable moral panic about the potentials of personalisation, which may not be borne out by its actual capabilities. There is an urgent need to dismantle the highly reductive filter bubble narrative.