On the Spigit blog, Hutch Carpenter writes about models for running crowdsourced contests. He describes four models, distinguished by how the core crowdsourcing activities of gathering, filtering, and selecting among people's submissions are combined.
These activities are:
- Crowdsourced Submissions: Crowdsourcing starts with contributions from people around the globe. These submissions are aggregated on a common site and provided in a format that matches the contest objectives.
- Crowdsourced Feedback: People provide feedback on the submissions of others, in forms such as up/down votes, star ratings, comments, and buying into ideas with virtual currency. This process can be collaborative, helping to refine submissions.
- Selection Experts: Organizations establish panels of experts who review the crowdsourced submissions and select those that best meet their requirements. The experts bring the domain knowledge needed to make the final decision in the contest.
- Crowdsourced Selections: The winners of the contest are determined by people's votes and other measures. This selection process mixes overall crowd sentiment, weighted toward higher-reputation members, with the power of individuals to leverage word-of-mouth marketing (a minimal scoring sketch follows this list).
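The post gives no formula for this weighting, so here is a minimal sketch to make the idea concrete. The `Vote` and `Submission` types, the reputation values, and the simple "vote value times voter reputation" scheme are all illustrative assumptions, not Spigit's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Vote:
    voter_reputation: float  # assumed: 1.0 for new members, higher for trusted ones
    value: int               # +1 for an up-vote, -1 for a down-vote

@dataclass
class Submission:
    title: str
    votes: list[Vote] = field(default_factory=list)

    def weighted_score(self) -> float:
        # Overall crowd sentiment, weighted toward higher-reputation members.
        return sum(v.voter_reputation * v.value for v in self.votes)

def pick_winner(submissions: list[Submission]) -> Submission:
    # The crowd, not an expert panel, determines the winner.
    return max(submissions, key=lambda s: s.weighted_score())

ideas = [
    Submission("Idea A", [Vote(1.0, 1), Vote(2.0, 1)]),   # score 3.0
    Submission("Idea B", [Vote(5.0, 1), Vote(1.0, -1)]),  # score 4.0
]
print(pick_winner(ideas).title)  # "Idea B": one trusted backer outweighs two new ones
```

The example shows why the weighting matters: a submission with fewer raw votes can still win when its supporters carry more reputation.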
The above components can be combined in different ways, yielding four models for running crowdsourced contests (one plausible mapping is sketched after the list):
Model #1: Crowd Sentiment, Expert Decision
Model #2: Crowd Decision
Model #3: Expert Decision
Model #4: American Idol
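The original post spells out which components each model uses; the mapping below is one plausible reading inferred from the model names, shown as a small Python sketch rather than a definitive definition:

```python
# Assumed component mix per model; treat this as an illustration,
# not Spigit's authoritative breakdown.
MODELS = {
    "#1 Crowd Sentiment, Expert Decision": [
        "crowdsourced submissions", "crowdsourced feedback", "expert selection",
    ],
    "#2 Crowd Decision": [
        "crowdsourced submissions", "crowdsourced feedback", "crowdsourced selection",
    ],
    "#3 Expert Decision": [
        "crowdsourced submissions", "expert selection",
    ],
    "#4 American Idol": [
        # Experts narrow the field first, then the crowd picks the winner.
        "crowdsourced submissions", "expert filtering", "crowdsourced selection",
    ],
}

for model, components in MODELS.items():
    print(f"{model}: {' -> '.join(components)}")
```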
The author describes the characteristics and business objectives of each model and concludes that the biggest takeaway, for anyone considering such an initiative, is the flexibility of the approaches available for accomplishing different objectives.
Source
Spigit blog: Four Models for Competitive Crowdsourcing