Feedback Use in Crowdsourcing Ideation Contests: Bias and Debiasing

Department of Decision Sciences and Managerial Economics

Crowdsourcing ideation contests allow idea-seeking organizations to solicit solutions to specific problems from external solvers. A topic of interest is how solvers use the developmental feedback given by seekers or by competing solvers during these contests. In this study, we examine and address the anchoring effect in solvers’ use of feedback from different sources. In the first experiment, we show that solvers anchor their ideas more on seeker feedback than on peer feedback. Such behavior, arguably rational in ideation contests, leads to suboptimal outcomes such as the underuse of high-practicality peer feedback and the overuse of low-practicality seeker feedback. In the second experiment, we test the effectiveness of a feedback rating mechanism in debiasing this source-based anchoring effect. The results show that the mechanism can correct the suboptimal outcomes in solvers’ feedback use, particularly for perceptive solvers. In effect, the mechanism can lead solvers to behave less rationally (in terms of their self-interest) but more optimally (in terms of the ideation process) in contests. This study contributes by providing insights into solver behavior in ideation contests and by proposing a bias-mitigating mechanism that enhances the ideation process.