The next speaker in this packed AoIR 2016 session is Eugenia Siapera, whose focus is on hate speech and its regulation in social media. She analyses this by examining the Terms of Service of major social media platforms, as well as through interviews with key informants from Facebook, Twitter, and YouTube. What constitutes acceptable and unacceptable speech from the point of view of these companies? What underlying ideologies does this point to?
The definition of hate speech on these platforms is usually not derived from existing legislation, but emerges from within the platforms themselves, informed especially by user reports of unacceptable behaviours. Interestingly, there is also considerable movement of policy-makers between these platforms, so the rules of one often influence the rules of another. Further, anti-hate speech enforcement is often balanced against the desire to continue growing these platforms' userbases.
All user reports of hate speech on these platforms appear to be assessed by humans rather than algorithms. The leading social media platforms have teams around the world that assess such reports, and these include native speakers of a range of languages who are able to understand the finer nuances of posts; reports are also ranked by their sensitivity before they are fully assessed. If in doubt, the content is retained, but if it is reported again it may be re-reviewed.
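As a purely illustrative aside, the triage workflow described here (prioritisation by sensitivity, human review, retaining content when in doubt, re-reviewing on repeat reports) might be sketched roughly as follows. This is a hypothetical model only: the Report and ReviewQueue names, the sensitivity scores, and all of the logic are assumptions for illustration, not any platform's actual system.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    # Lower priority value pops first; we store the negated sensitivity
    # so that the most sensitive reports are reviewed earliest.
    priority: int
    content_id: str = field(compare=False)
    times_reported: int = field(default=1, compare=False)

class ReviewQueue:
    """Hypothetical sketch of a sensitivity-ranked, human-reviewed report queue."""

    def __init__(self):
        self._heap: list[Report] = []
        self._retained: dict[str, Report] = {}  # content kept after a doubtful review

    def submit(self, content_id: str, sensitivity: int) -> None:
        """File a report; a repeat report on retained content re-queues it."""
        if content_id in self._retained:
            report = self._retained.pop(content_id)
            report.times_reported += 1
            heapq.heappush(self._heap, report)   # re-review after a repeat report
        else:
            heapq.heappush(self._heap, Report(-sensitivity, content_id))

    def review_next(self, human_verdict) -> None:
        """Hand the most sensitive pending report to a human reviewer.

        human_verdict(report) returns 'remove', 'retain', or 'doubtful'.
        """
        if not self._heap:
            return
        report = heapq.heappop(self._heap)
        if human_verdict(report) in ("retain", "doubtful"):
            # If in doubt, the content stays up, but it remains eligible
            # for re-review if it is reported again.
            self._retained[report.content_id] = report
        # A 'remove' verdict drops the report; the content would be taken down.

# Illustrative usage:
queue = ReviewQueue()
queue.submit("post-123", sensitivity=5)
queue.review_next(lambda report: "doubtful")   # retained: benefit of the doubt
queue.submit("post-123", sensitivity=5)        # second report triggers a re-review
```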
There is a general reluctance to act as an 'Internet police' here: there is a strongly stated commitment to principles of free speech, but balancing this with the need to address hate speech is difficult. The preference is for communities of users to deal with hate speech themselves, rather than for the platforms to intervene directly. Further, there are tensions between local sensibilities (and legislation) and the global nature of these platforms; these are difficult to resolve in general terms, and offending content is usually taken down on a case-by-case basis. There is generally relatively little sympathy for countries with strict laws against hate speech and other unacceptable content.
Responsibility is also largely reflected back onto the offended users themselves: the suggestion is that users should simply block the content and accounts they are offended by, rather than even reporting such content. Users are thus conditioned to perform in accordance with the companies' liberal ideologies, and the management of hate speech is individualised rather than dealt with on a more comprehensive basis.