CASRO: The Dark Side of Crowdsourcing

New research published by InSites Consulting (myself, Tom de Ruyck, Stephan Ludwig and Moritz Mann) is featured as of yesterday in the CASRO magazine; you can check out the online edition here, and we've embedded it below. The study provides evidence that, on average, community discussion threads optimally encompass 30 posts. The paper empirically highlights the inefficiencies of an excessive focus on the quantity, rather than the quality, of crowdsourcing contributions and assesses the drivers of relevant, on-topic community posts.

Conversations

Research has changed from asking questions to having conversations with consumers. With the rise of social media, online research communities have proven to be a viable environment in which to engage with an audience and generate insights on an ongoing basis. What makes research communities unique is that they assemble a crowd in an asynchronous, long-term setting by applying social media techniques. Companies outsource certain tasks to that crowd (e.g. product and service creation and testing) in an open call in order to bring consumers into organizations, all the way up to the boardroom. Hence, research communities are a form of "crowdsourcing". The strength of crowdsourcing lies in the extent of social interaction, namely the combination and exchange of knowledge, which is exactly what online communities stimulate.

SlideShare Presentation

As an intermezzo, find below the PowerPoint presentation that I used at the CASRO Technology Conference, June 2-3 in New York City.

The potential pitfall

Many studies have looked at and pointed to the effectiveness of crowdsourcing, and each online research community supplier has their own methods and approaches to communities. Based on our research and experience, we have learned that sample sizes of 50 (for 3-week communities) to 150 participants (for 3-month communities) are sufficient. Communities of this size generate rich qualitative insights, with good engagement between company and consumers, in a cost-efficient manner.
While many practitioners focus on the quantity of opinions and postings, past research has thus far neglected a potential pitfall of crowdsourcing methods: diminishing marginal returns in post quality in lengthy crowd discussions. More specifically, our study provides evidence that, on average, community discussion threads optimally encompass 30 posts. Beyond that number, discussions can become dysfunctional, e.g. due to social dynamics. The paper empirically highlights the inefficiencies of an excessive focus on the quantity, rather than the quality, of crowdsourcing contributions and assesses the drivers of relevant, on-topic community posts. Check it out below:
 

 
Note: In this blog post I've skipped the scientific citations and references; you can find them in the CASRO article.
