The next presenter in this Social Media & Society 2018 session is Oluwaseun Ajao, who shifts our focus to the question of ‘fake news’ on Twitter. Why does such content circulate on the platform? In part, this is because these stories often generate more impact than ‘real’ news: they can produce significant shifts in political opinion, financial gains, or other outcomes that are desirable to the operators behind such initiatives.
The present study explores whether the veracity of a set of tweets can be ascertained through automated content analysis. Are there semantic or linguistic features that might be used for this purpose, without prior knowledge of the topics being discussed in the tweets? The project built on a dataset covering five major crisis events: the Charlie Hebdo shooting, the Sydney siege, the Ottawa shooting, the Germanwings crash, and the Ferguson shooting.
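To make the idea of topic-agnostic detection concrete: such an approach would rely on how a tweet is written rather than what it is about. The short Python sketch below shows the kind of stylistic features that might be extracted; the specific features and the function name are illustrative assumptions on my part, not the feature set used in this study.

```python
import re

def stylistic_features(tweet: str) -> dict:
    """Topic-agnostic stylistic features for one tweet.

    Illustrative assumption only: not the presenter's actual feature set.
    """
    n_chars = max(len(tweet), 1)
    return {
        "n_tokens": len(tweet.split()),
        "n_question_marks": tweet.count("?"),
        "n_exclamations": tweet.count("!"),
        "n_urls": len(re.findall(r"https?://\S+", tweet)),
        "n_hashtags": tweet.count("#"),
        "n_mentions": tweet.count("@"),
        # heavy capitalisation is a common stylistic marker in breaking-news rumours
        "uppercase_ratio": sum(c.isupper() for c in tweet) / n_chars,
    }

print(stylistic_features("BREAKING: shots fired near parliament?! https://t.co/x #Ottawa"))
```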
These tweets were assessed for their veracity by professional journalists, but there did not seem to be any obvious differences in the keywords used between true and false tweets; overall, automated name recognition also indicated that there were more male than female participants tweeting about each event. Further processing of the texts and images contained in the dataset highlighted a range of features that may be used to automatically detect ‘fake news’ content, and a classifier built on these features achieved some 80% accuracy. One particular remaining challenge is the detection of manipulated (‘photoshopped’) images.
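For readers wondering what such a feature-based classifier might look like in outline, here is a minimal sketch using scikit-learn with toy placeholder data; the features, labels, and example tweets are all my own assumptions, and the study's actual pipeline (including its image features) is not reproduced here.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def features(tweet: str) -> dict:
    # minimal stand-in for a fuller stylistic feature set (see earlier sketch)
    return {
        "n_question_marks": tweet.count("?"),
        "n_exclamations": tweet.count("!"),
        "n_tokens": len(tweet.split()),
        "n_caps_words": sum(w.isupper() and len(w) > 1 for w in tweet.split()),
    }

# Toy placeholder data: a real experiment would train on the
# journalist-annotated crisis-event tweets described above.
tweets = [
    "BREAKING: shots fired at parliament?! RT NOW",
    "unconfirmed: hostages inside the cafe???",
    "Police confirm the operation has ended.",
    "Official statement: two suspects are in custody.",
]
labels = [1, 1, 0, 0]  # 1 = unverified rumour, 0 = verified report

model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit([features(t) for t in tweets], labels)

# classify a previously unseen tweet
print(model.predict([features("UNCONFIRMED: second shooter on the roof!!")]))
```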