Abstract
Crowdsourcing enables one to leverage the intelligence and wisdom of potentially large groups of individuals toward solving problems. Common problems approached with crowdsourcing are labeling images, translating or transcribing text, and providing opinions or ideas: tasks that computers are not good at or where they may even fail altogether. The introduction of humans into computations and everyday work, however, also poses critical, novel challenges in terms of quality control, as the crowd is typically composed of people with unknown and very diverse abilities, skills, interests, personal objectives, and technological resources. This survey studies quality in the context of crowdsourcing along several dimensions, so as to define and characterize it and to understand the current state of the art. Specifically, the survey derives a quality model for crowdsourcing tasks, identifies the methods and techniques that can be used to assess the attributes of the model, and describes the actions and strategies that help prevent and mitigate quality problems. An analysis of how these features are supported by the state of the art further identifies open issues and informs an outlook on promising future research directions.
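Among the assessment and assurance techniques surveyed, two staples are redundant labeling aggregated by majority voting and the use of gold (control) questions to estimate worker accuracy. The following is a minimal Python sketch that combines the two into an accuracy-weighted vote; it is an illustration under our own assumptions, not code from the survey, and all worker and task identifiers are hypothetical.

```python
from collections import defaultdict

def worker_accuracy(answers, gold):
    """Estimate each worker's accuracy from gold (control) questions.

    answers: dict mapping (worker, task) -> submitted label
    gold:    dict mapping task -> true label, for control tasks only
    """
    hits, total = defaultdict(int), defaultdict(int)
    for (worker, task), label in answers.items():
        if task in gold:
            total[worker] += 1
            hits[worker] += int(label == gold[task])
    workers = {worker for (worker, _task) in answers}
    # Workers who never saw a gold question get a neutral prior of 0.5.
    return {w: hits[w] / total[w] if total[w] else 0.5 for w in workers}

def weighted_majority(answers, accuracy):
    """Pick, per task, the label with the highest accuracy-weighted vote."""
    scores = defaultdict(lambda: defaultdict(float))
    for (worker, task), label in answers.items():
        scores[task][label] += accuracy.get(worker, 0.5)
    return {task: max(votes, key=votes.get) for task, votes in scores.items()}

if __name__ == "__main__":
    # Three hypothetical workers label task "t1"; "g1" is a gold question.
    answers = {
        ("w1", "t1"): "cat", ("w2", "t1"): "dog", ("w3", "t1"): "cat",
        ("w1", "g1"): "cat", ("w2", "g1"): "dog", ("w3", "g1"): "cat",
    }
    gold = {"g1": "cat"}
    accuracy = worker_accuracy(answers, gold)    # w1: 1.0, w2: 0.0, w3: 1.0
    print(weighted_majority(answers, accuracy))  # {'t1': 'cat', 'g1': 'cat'}
```

Real deployments typically smooth the accuracy estimates (e.g., with a Laplace prior) and fall back on probabilistic models such as Dawid-Skene when gold questions are scarce.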