WebSci '16

Web Science 2016 conference, Hannover, Germany, 23-25 May 2016.

Motivations for Participating in Gamified Citizen Science Projects

The final speaker in this WebSci 2016 session is Ramine Tinati, whose focus is on citizen science platforms. Citizen science itself has been around for hundreds of years, but more recent developments in online crowdsourcing techniques have enabled even greater mass participation in such scientific activities; one early success was Zooniverse, which asks users for help in classifying galaxy types.

Patterns of YouTube Video Ad Consumption

The next paper in this WebSci 2016 session is presented by Mariana Arantes, whose interest is in the matching of video ads to YouTube videos. Such ads are displayed before some YouTube videos, and they can often be skipped after a set number of seconds. How do users consume these ads? How does their popularity change over time? What is the relationship between videos and ads, and does a better content match mean that ads are more likely to be watched all the way through?

Identifying MOOC Learners on Social Media Platforms

We start the first paper session at WebSci 2016 with a paper by Guanliang Chen that examines learner engagement with Massive Open Online Courses (MOOCs). These generate a great deal of data about learner engagement during the MOOC itself, but there's very little information about learners before and after this experience. Can we use external social Web data to identify and profile these learners, in order to better customise the learning experience for them?

Web Science and Biases in Big Data

It's a cool morning in Germany, and I'm in Hannover for the opening of the 2016 Web Science conference, where later today my colleague Katrin Weller and I will present our paper calling for more efforts to preserve social media content as a first draft of the present. But we start with an opening keynote by Yahoo!'s Ricardo Baeza-Yates, on Data and Algorithmic Bias in the Web.

Ricardo begins by pointing out that all data have a built-in bias; additional bias is added during data processing and interpretation. For instance, some researchers working with Twitter data extrapolate from it across entire populations, although Twitter's demographics are not representative of the wider public. There are even biases in the process of measuring for bias.
