Reykjavík.
The next speaker at ECPR 2011 is Ben O’Loughlin, whose interest is in how near real-time semantic analysis of online public sentiment affects ongoing political processes: we may end up with a kind of semantic polling of available social media and other electronic data, which enables political actors to target their messages to voters with unprecedented precision and speed. The 2010 U.K. election may have been the first rudimentary example of such a feedback loop.
Ben’s study examined the social media data used by TV and print journalists during the election, and interviewed key actors about their emerging practices in dealing with such data. Three main types of reporting were notable: anecdotal (pulling random tweets out of the timeline); quantitative (general stats on user activity as reported by various polling companies); and semantic (processing the content of social media sources).
Nick Anstead now takes over, and discusses the quantitative analysis of the three leaders’ debates: the semantic analysis companies generated some wildly diverse results, presented in non-standard formats compared to those used by traditional polling companies; they are also not governed by the legal and industry standards that apply to conventional opinion polling.
Additionally, the question of what Twitter or other social media data actually represents is rarely raised: is the Twitter public representative of the wider British public at all, and can findings about its opinion be adjusted to account for demographic biases in the Twitter population? This matters, because reported opinions held by ‘the public’ in turn influence actual public opinion.
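To make the adjustment question concrete, here is a minimal sketch of demographic reweighting (post-stratification): each group’s observed opinion is re-scaled from its share of the Twitter sample to its share of the wider population. All figures below are invented for illustration; they are not real survey or census data.

```python
# Hypothetical shares of age groups in the general population vs. a Twitter sample.
# (Invented numbers: younger users are assumed to be over-represented online.)
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}
sample_share     = {"18-29": 0.50, "30-49": 0.35, "50+": 0.15}

# Observed approval rate for a leader within each group of the Twitter sample.
group_approval = {"18-29": 0.60, "30-49": 0.45, "50+": 0.30}

# Unweighted estimate simply mirrors the sample's skewed demographics.
unweighted = sum(sample_share[g] * group_approval[g] for g in group_approval)

# Post-stratified estimate re-scales each group to its population share.
weighted = sum(population_share[g] * group_approval[g] for g in group_approval)

print(f"unweighted approval: {unweighted:.4f}")  # 0.5025
print(f"weighted approval:   {weighted:.4f}")    # 0.4125
```

The nine percentage-point gap between the two estimates is exactly the kind of distortion an unadjusted Twitter headline figure would carry.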
And what about the quality of the data (this is also a question for mainstream public opinion polling, of course)? To what extent does current semantic analysis cope with irony and sarcasm, for example?
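The irony problem is easy to demonstrate with the simplest form of semantic analysis, lexicon-based sentiment scoring: count positive words minus negative words. This is a toy sketch, not any actual polling company’s method; the lexicon and tweets are invented for illustration.

```python
# Toy sentiment lexicon (illustrative only).
POSITIVE = {"great", "brilliant", "love", "win"}
NEGATIVE = {"bad", "awful", "hate", "lose"}

def sentiment(text: str) -> int:
    """Positive-word hits minus negative-word hits over whitespace tokens."""
    tokens = [t.strip(".!?,;:") for t in text.lower().split()]
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

# A sincere tweet scores as expected...
print(sentiment("Great debate performance, love it"))                        # 2

# ...but sarcasm scores as praise, because the word-level cues point the
# wrong way and the scorer has no notion of tone or context.
print(sentiment("Oh great, another brilliant U-turn. Just what we needed"))  # 2
```

Both tweets receive an identical positive score, even though the second is plainly hostile; anything built on word counting alone will misread sarcasm at scale.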