Understanding the Factors That Affect Facebook’s Algorithmic Profiling of Users

The first ICA 2018 session I’m seeing this Monday morning is on echo chambers, and starts with Kelley Cotter and Mel Medeiros, who outline the processes by which social media platforms generate algorithmic identities for their users. These identities determine what kind of content users encounter in their (algorithmically curated) newsfeed.

The project then examined how this works in practice: it conducted a survey of Facebook users and asked participants to provide their downloaded Facebook data for comparison. The Facebook data include aspects such as the pages participants have liked, and the interests inferred (correctly or incorrectly) from these pages. From these data, the project extracted pages and ad topics (i.e. inferred interests), and matched these against a list of politicians and parties.
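
As a rough illustration of this kind of matching step, the sketch below reads ad interests from a downloaded Facebook archive and checks them (together with liked pages) against a list of political entities. The file name, JSON structure, and the list of political entities are all assumptions for illustration, not the project's actual materials or Facebook's current archive format.

```python
import json

# Illustrative list of politicians and parties to match against; the study's
# actual matching list is not described in this post.
POLITICAL_ENTITIES = {"Democratic Party", "Republican Party", "Bernie Sanders"}


def load_ad_interests(path):
    """Load inferred ad interests from a downloaded Facebook data archive.

    The file name and JSON layout here are assumptions based on the archive
    format at the time; Facebook has changed this structure repeatedly.
    """
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    return data.get("topics", [])


def political_matches(interests, liked_pages):
    """Return the interests and liked pages that exactly match a known
    politician or party."""
    candidates = set(interests) | set(liked_pages)
    return candidates & POLITICAL_ENTITIES


if __name__ == "__main__":
    interests = load_ad_interests("ads_interests.json")          # assumed file name
    liked = ["Bernie Sanders", "Knitting Club"]                   # placeholder liked pages
    print(political_matches(interests, liked))
```

In practice a project like this would need fuzzier matching (name variants, localised party names), but exact set intersection is enough to show the basic idea.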

Some 52% of participants had no political interests listed, and another 26% had only 1-2 listed. Given such low numbers, then, how does the algorithm choose political content to push into the newsfeed? Assessments of participants’ friends’ political stances may play a role here; trace interests such as news consumption may also affect these processes. Such patterns predicted some of the algorithmic choices participants experienced.
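
To make the kind of relationship being suggested here concrete, the sketch below fits a simple logistic regression predicting political content exposure from friends' political stances, news-page likes, and the user's own listed political interests. The feature choices, values, and model are invented for illustration only and are not the study's data or method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features per participant:
# [share of friends with a stated political stance,
#  number of news pages liked,
#  number of own political interests listed in the ad profile]
X = np.array([
    [0.60, 5, 0],
    [0.10, 0, 0],
    [0.45, 3, 2],
    [0.05, 1, 0],
    [0.70, 8, 1],
    [0.20, 0, 0],
])
# Outcome: whether political content appeared in the curated newsfeed
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
print(model.coef_)                           # relative weight of each feature
print(model.predict_proba([[0.50, 4, 1]]))   # predicted probability for a new user
```

The point is simply that exposure can be predicted even for users with no listed political interests, because the other signals carry most of the weight.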

This means that there is an entanglement between personal choices, algorithmic profiling, and friends’ social media habits; together, they affect algorithmic selections. In turn, of course, political actors also use such profiling to target ads at users, and as users engage with these ads this might lead to a spiral of inequality, where somewhat interested users are profiled more and more strongly as politically interested, while others are increasingly painted as politically uninterested.