
Quality Control in Crowdsourcing: A Survey of Quality Attributes, Assessment Techniques, and Assurance Actions

Published: 04 January 2018

Abstract

Crowdsourcing enables one to leverage the intelligence and wisdom of potentially large groups of individuals toward solving problems. Common problems approached with crowdsourcing include labeling images, translating or transcribing text, and providing opinions or ideas: all tasks that computers are not good at or may even fail at altogether. Introducing humans into computations and everyday work, however, also poses critical, novel challenges in terms of quality control, as the crowd is typically composed of people with unknown and very diverse abilities, skills, interests, personal objectives, and technological resources. This survey studies quality in the context of crowdsourcing along several dimensions, so as to define and characterize it and to understand the current state of the art. Specifically, the survey derives a quality model for crowdsourcing tasks, identifies the methods and techniques that can be used to assess the attributes of the model, and describes the actions and strategies that help prevent and mitigate quality problems. An analysis of how these features are supported by the state of the art further identifies open issues and informs an outlook on promising future research directions.
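To make the kind of assessment technique the survey catalogues concrete, the sketch below shows one of the simplest quality strategies applied to labeling tasks: collect redundant judgments per task and aggregate them by majority vote, using the agreement ratio as a rough per-task confidence signal. This is a minimal illustrative example written for this summary, not code taken from the survey; the task identifiers and labels are hypothetical.

```python
# Illustrative sketch: redundancy plus majority voting, a basic quality
# assessment technique for crowdsourced labeling tasks. Task IDs and
# labels below are made up for demonstration purposes.
from collections import Counter


def majority_vote(labels_per_task):
    """Aggregate redundant crowd labels per task by simple plurality."""
    aggregated = {}
    for task_id, labels in labels_per_task.items():
        label, support = Counter(labels).most_common(1)[0]
        # The agreement ratio serves as a rough confidence estimate.
        aggregated[task_id] = (label, support / len(labels))
    return aggregated


if __name__ == "__main__":
    # Three workers label two images; workers disagree on "img2".
    crowd_labels = {
        "img1": ["cat", "cat", "cat"],
        "img2": ["dog", "cat", "dog"],
    }
    print(majority_vote(crowd_labels))
    # {'img1': ('cat', 1.0), 'img2': ('dog', 0.666...)}
```

Tasks with a low agreement ratio can then be flagged for extra judgments or expert review, which is the kind of assurance action the survey discusses.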



Published in

ACM Computing Surveys, Volume 51, Issue 1 (January 2019), 743 pages
ISSN: 0360-0300
EISSN: 1557-7341
DOI: 10.1145/3177787
Editor: Sartaj Sahni

            Copyright © 2018 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 4 January 2018
• Revised: 1 September 2017
• Accepted: 1 September 2017
• Received: 1 June 2016
• Published in ACM Computing Surveys (CSUR), Volume 51, Issue 1


Qualifiers

• Survey
• Research
• Refereed
