For the final (wow) session of AoIR 2019 I’m in a session on news automation, which starts with Marijn Martens. He begins by describing algorithms (for instance, news recommender algorithms) as a form of culture as well as a form of technical construct – and by highlighting how algorithms are imagined, perceived, and experienced through the mental models that users construct for them.
So, what assumptions do users have about the construction of a news recommender system – what is their personal algorithmic imaginary? Marijn conducted two-stage interviews with users, beginning with an in-depth interview that explored their overall algorithmic imaginaries (e.g. of existing systems like Google News or the Facebook newsfeed, or of other known systems) before they encountered new recommender systems.
The second interview followed one month later, and asked participants to reflect on their previous mental models in light of their further experience during that month. The focus here was especially on different types of personal data: data that users give to a system; data that a system extracts from users; and data that a system processes on behalf of users.
There were four types of respondents: first, unaware people had no idea that there even was an algorithm that might alter their newsfeeds. These participants assumed that systems simply responded to their actions – but when prompted, they also realised that there might be processing of the data they provided to the system.
Second, aware people focussed on the data that they provided or that the system extracted from them; they were also aware of possible privacy violations, and either perceived them as inevitable or felt wronged when such violations took place. They rationalised such data extractions, and perceived recommendation systems as black boxes.
Third, realistic respondents saw data as coding the real world; they critiqued such coded representations, identified data monoliths and data silos, and worried about the impact of recommendations based on outdated data. They also asked for data transparency, but were easily satisfied even with very limited transparency; and they were frustrated with poor recommendations because they thought recommendation systems should know them better.
Finally, critical respondents were aware of given, extracted, and processed data types, and highlighted the technical characteristics and limitations of recommender systems. These people saw the black box as something inevitable and proprietary, and deliberately fed the system specific data with the expectation of a particular return.
Mostly, respondents spoke only about themselves and their uses of recommendation systems; their rationale for using such systems was self-commodification. Few respondents were aware of developers and other agents within the system; those who were had a certain amount of trust in the ability of algorithm providers to hire expert developers and other staff, while the CEOs of such companies (such as Mark Zuckerberg) were often seen in an extremely positive or negative light. Respondents also highlighted major companies like Google or Facebook, and were unaware of the broader algorithmic ecosystem.