
Governance and Regulation on Social Media Platforms

It is already the middle of the first day of AoIR 2017, and I'm finally getting to see a panel, on 'fake news', which opens with Christian Katzenbach and Kirsten Gollatz. They start by noting the increasing discussion about platform governance initiatives designed to limit the circulation of 'fake news', however the term is defined; this work also builds on a considerable body of research into the politics of platforms.

But there is a conceptual gap (where and what is the governance in platforms?) and an empirical gap, with a lack of a long-term view on platform governance. Governance on platforms might mean law, terms of service, algorithmic or human governance processes, etc.; there is also a turn to practice theory and discourse theory that doesn't simply take a legalistic, regulatory approach.

The central question here is, or should be, the ordering and coordination of digital communication. For this, legal and regulatory interventions might be mobilised, but other governance approaches may also be relevant. Governance and regulation need to be distinguished: governance is a long-term, meandering process, while regulation represents intentional interventions into these processes and incorporates a number of different modes of ordering that interact with each other.

How might these theoretical ideas be put into practice? The empirical part of this study examines Facebook as a platform, and combines text and content analysis with document and discourse analysis to track the actual rule changes on this platform over time, as well as the public perception of these rules. Facebook's standards for what constitutes inappropriate content have varied over time: at first there were only vague statements about inappropriate content, which have since become increasingly detailed and complex (if not necessarily much clearer).

Policy on nudity, for example, has changed from a general restriction to a more detailed list of specific prohibited content, alongside a growing list of exceptions (e.g. pictures of breastfeeding or mastectomy scars, or images of paintings and sculptures). Hate speech has been defined in increasingly detailed statements, too, reflecting the changing issues being addressed in hate speech acts on Facebook (with recent concerns especially around hate speech against refugees and women). Policies on 'fake' content are still evolving, too; early concerns focussed on fake profiles and the impersonation of public figures, while much more attention is now being paid to the sharing of 'fake news' and other misleading material on current events.

There is, then, an ongoing battle over what content is allowed on this platform; the standards continue to change, and discourses about them enable an observation of these policy changes. Critical moments serve as occasions for contestation and justification, yet the communities involved in these discussions differ substantially between the different areas of content policy explored here.