CASRO: The Dark Side of Crowdsourcing

New research published by InSites Consulting (myself, Tom De Ruyck, Stephan Ludwig and Moritz Mann) was featured yesterday in the CASRO magazine; you can check out the online edition here, and we’ve embedded it below. The study provides evidence that, on average, community discussion threads optimally encompass 30 posts. It empirically highlights the inefficiencies of an excessive focus on the quantity, rather than the quality, of crowdsourcing contributions and assesses the drivers of relevant, on-topic community posts.

Conversations

Research has changed from asking questions to having conversations with consumers. With the rise of social media, Online Research Communities have proven to be a viable environment to engage with an audience and stimulate insighting on an ongoing basis. What makes research communities unique is that they assemble a crowd in an asynchronous, long-term setting by applying social media techniques. Companies outsource certain tasks to that crowd (e.g. product and service creation and testing) in an open call in order to bring consumers into organizations, all the way up to the boardroom. Hence, research communities are a form of “crowdsourcing”. The strength of crowdsourcing is the extent of social interaction, namely the combination and exchange of knowledge, which is stimulated in online communities.

SlideShare Presentation

As an intermezzo, find below a PowerPoint presentation that I used at the CASRO Technology Conference, June 2-3 in New York City.

The potential pitfall

Many studies have looked at and pointed to the effectiveness of crowdsourcing. Each online research community supplier has their own methods and approaches to communities. Based on our research and experience, we have come to learn that sample sizes of 50 participants (for 3-week communities) to 150 participants (for 3-month communities) are sufficient. These communities generate rich qualitative insights with good engagement between company and consumers in a cost-efficient manner.
While many practitioners focus on the quantity of opinions and postings, past research has thus far neglected a potential pitfall of crowdsourcing methods: decreasing marginal returns in terms of post quality in lengthy crowd discussions. More specifically, our study provides evidence that, on average, community discussion threads optimally encompass 30 posts. Beyond that amount, discussions can become dysfunctional, e.g. due to social dynamics. This paper empirically highlights the inefficiencies of an excessive focus on the quantity, rather than the quality, of crowdsourcing contributions and assesses the drivers of relevant, on-topic community posts. Check it out below:
Note: In this blog post I’ve skipped the scientific citations and references; find them in the CASRO article.
