I arrived a little late to Tanja Aitamurto's AoIR 2015 paper about crowdsourced journalism in northern Europe, where news sites have used their readers to gather data on home loan terms, for instance. Crowdsourcing is thus defined as a mechanism for collaborative problem-solving that is driven by the initiator of the project; the locus of power therefore remains with the media organisation.
Another crowdsourced journalistic project examined the trading documents of a large number of stock market brokers to identify cases of short selling; here, the project revealed serious misconduct, and a Finnish bank executive was fired.
Crowdsourcing may involve readers trawling through large datasets, then, as well as becoming involved in selecting and developing stories at any stage of the journalistic process; this builds on the crowd's cognitive diversity as well as its large number of participants.
Journalists have found this very helpful in a number of contexts, especially where the total volume of source materials is too large for journalistic staff alone to master. Additionally, the variety of views and experiences that crowdsourcing participants bring to the project helps to generate a fuller and more holistic picture of the issues being investigated.
But the diversity and volume of contributors can also create problems, because the logic of crowds and the logic of journalism are not necessarily compatible. Journalism does not deal well with a large and undefined crowd of participants, nor with a large number of diversely structured contributions that must be synthesised into a homogeneous whole. The most promising response to this problem is to develop better methods for evaluating and synthesising crowdsourced contributions.