Introduction to Crowdsourcing Platforms:

Amazon’s Mechanical Turk. Amazon’s Mechanical Turk (MTurk) is a crowdsourcing platform commonly used for conducting human subjects research.

MTurk participants take part in Human Intelligence Tasks (HITs) for which they are paid. The platform was not designed for academic research, and many of the HITs posted to MTurk are not research related (for example, some involve transcription, data entry, or identifying photos of products). MTurk participants are called workers, and many workers rely on MTurk to earn their entire income. Those who recruit workers to complete HITs are called requesters.

Please review MTurk’s Acceptable Use Policy before using MTurk. Note that the Acceptable Use Policy prohibits the collection of personally identifiable information from workers, such as email addresses. Despite this policy, MTurk offers only limited protection of workers’ privacy and confidentiality. For example, Amazon stores information about the studies workers take part in. Amazon also allows researchers to automatically collect MTurk IDs, which could be used to identify workers, because MTurk IDs are associated with public Amazon accounts. Please see Amazon’s privacy notice for additional information.

Prolific. Prolific is an alternative crowdsourcing platform that was specifically designed for academic research. Prolific has established a minimum hourly payment for participants ($6.50 per hour) and has standards in place for the fair treatment of participants. Prolific also stores information about participants, including participants’ demographic information. Please see Prolific’s privacy notice for additional information.

If you are using a crowdsourcing platform to conduct Human Subjects Research (including MTurk, Prolific, or a different platform), please address the following items in the materials you submit for IRB review:

The Human Subjects Research Application: 

  • Determining Participant Eligibility. Please describe how you will determine which users of the crowdsourcing platform are eligible to take part in your study. MTurk and Prolific offer the following screening tools:
    • TurkPrime is a platform that integrates with MTurk and allows researchers to select only those workers who meet certain recruitment criteria. TurkPrime can also be used to exclude workers who have previously participated in a given study, and to collect longitudinal data.
    • Prolific allows researchers to filter participants based upon several hundred demographic variables that Prolific has already collected. If researchers require a very specific sample that is not captured by these prescreening questions, Prolific suggests running two separate studies to recruit participants. Researchers may not screen participants within the study itself, nor may they rely on the study description alone to screen participants.
  • Survey Platform: In the Human Subjects Research Application, please specify which survey platform will be used to collect data. Lehigh researchers may not use Amazon’s survey tool. This tool is not designed to protect workers’ privacy and confidentiality, and Amazon stores participants’ data on its own servers. Other survey tools, such as Qualtrics, can be used within crowdsourcing platforms and have privacy and data security standards in place. All Lehigh University faculty, staff, and students can access Qualtrics using their Lehigh credentials.
  • Identifying Information: Please describe any data that could be used to identify a participant.
    • Please note that Qualtrics collects participants’ IP addresses by default, and these IP addresses are included when data are exported from Qualtrics. IP addresses are considered identifiable information because they indicate a participant’s location. IP address collection can be turned off in Qualtrics, and doing so is always recommended.
    • MTurk worker IDs are considered identifiable information because they are associated with workers’ public Amazon accounts, which may include private information, such as participants’ first and last names. In the application, please note whether worker IDs will be collected, how these data will be protected, and when these data will be deleted. If researchers are using TurkPrime, they have the option of using Anonymized Worker IDs (more information here). If this option is selected, all Worker IDs that are downloaded as part of the dataset will be encrypted. Encrypted IDs can be used the same way as non-encrypted IDs in TurkPrime, and they can also be used to link participants’ data across time if researchers are conducting a longitudinal study. (A sketch illustrating how collected IP addresses and worker IDs might be removed or pseudonymized before analysis appears at the end of this list.)
    • Prolific IDs are not automatically collected from all participants, but are instead used in specific instances, such as when conducting a longitudinal study. Prolific IDs are not considered identifiable, as researchers are not able to use them to access identifying information about participants.
  • Rejections: If there are conditions under which workers’ participation will be rejected and they will not be paid, please describe these conditions in the Human Subjects Research Application (as well as in the study description, and in the consent form, described below). Participants could be rejected for the following reasons:
    • Participants indicate they do not meet the inclusion criteria that were clearly outlined in the study description and in the consent form. Please screen for inclusion criteria at the beginning of the survey, and program the survey so that it ends if participants do not meet the inclusion criteria.
    • Screening methods indicate that participants are bots or workers using virtual private servers and/or server farms. In the application, please describe how these determinations will be made. These screening procedures should take place at the beginning of the study, and the study should end if participants do not pass the screening.
    • Participants who meet inclusion criteria and take part in a study may not be rejected, even if they fail to pass attention checks. Rejecting participants is inconsistent with Lehigh University IRB’s “Payments to Research Participants” policy. According to this policy, payment must accrue as a study progresses. In other words, participants may not be denied payment if they do not complete a research study or if they fail to pass attention checks; this would amount to a penalty. Furthermore, each rejection harms participants’ approval rating. Both MTurk and Prolific allow researchers to screen participants based upon their approval rating, so a rejection makes it more difficult for workers to complete future work that requires a minimum approval rating. This also amounts to a penalty. If participants do not pass attention checks or do not complete the study, the best practice is to pay the participants but exclude their data from analyses (an illustrative sketch of this approach appears at the end of this list).
    • When using MTurk, do not block workers. This can cause workers’ accounts to be suspended, which could potentially have a negative impact on workers. For example, if a worker’s account is suspended, that worker could lose his or her source of income. If you are concerned about workers participating in a study more than once, MTurk offers multiple methods for preventing this that do not involve blocking participants. Prolific also offers methods for restricting participants’ access to your studies.
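
The following sketch is offered only as an illustration of the de-identification steps described above. It is written in Python with pandas and assumes a hypothetical CSV export containing columns named “IPAddress” and “workerId” (actual column and file names depend on your survey platform and export settings). The IP address column is dropped, and raw worker IDs are replaced with salted one-way hashes so that a participant’s data can still be linked across sessions without retaining the raw IDs.

# Illustrative de-identification sketch (column and file names are assumptions).
import hashlib
import pandas as pd

SALT = "replace-with-a-project-specific-secret"  # store separately from the data

def hash_worker_id(worker_id: str) -> str:
    """Replace a raw MTurk worker ID with a salted one-way hash.
    The hash can still link a participant across sessions, but it cannot be
    traced back to a public Amazon account without the raw ID."""
    return hashlib.sha256((SALT + worker_id).encode("utf-8")).hexdigest()

df = pd.read_csv("qualtrics_export.csv")  # hypothetical export file

# Drop the IP address column before the data are stored or analyzed.
df = df.drop(columns=["IPAddress"], errors="ignore")

# Pseudonymize worker IDs, then delete the raw column.
df["worker_hash"] = df["workerId"].astype(str).map(hash_worker_id)
df = df.drop(columns=["workerId"])

df.to_csv("deidentified_responses.csv", index=False)

As with any pseudonymization approach, the salt should be kept out of shared datasets, and the raw export should be deleted once it is no longer needed, consistent with the retention schedule described in the application.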
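
Similarly, and only as an illustration of the payment practice described under “Rejections,” the sketch below (reusing the hashed worker IDs from the previous sketch, with assumed attention-check column names) approves every submitted participant for payment and uses attention-check performance only to decide which responses enter the analysis dataset.

# Illustrative sketch: pay all participants; attention checks affect analysis only.
import pandas as pd

df = pd.read_csv("deidentified_responses.csv")  # hypothetical de-identified export

# Everyone who submitted the study is approved for payment,
# regardless of attention-check performance.
payment_list = df[["worker_hash"]].drop_duplicates()
payment_list.to_csv("approve_for_payment.csv", index=False)

# Attention checks determine only which rows are analyzed.
passed_all_checks = (
    (df["attention_check_1"] == "correct") & (df["attention_check_2"] == "correct")
)
analysis_df = df[passed_all_checks]
analysis_df.to_csv("analysis_dataset.csv", index=False)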

The Study Description

  • When submitting study materials to IRBNet, please include the recruitment information that will be posted to the crowdsourcing platform. This should include:
    • A brief description of the study’s procedure
    • The approximate duration of the study
    • The compensation
    • All inclusion criteria. Please include this information regardless of whether you are using the platform to screen participants for you. Please also describe any procedures used to screen for bots or server farms.
    • Any circumstances in which participants will not receive full payment. (Note that payment must not be contingent upon completing the entire study, and payment must accrue as the study progresses. See “Payments to Research Participants”)

The Consent Form

  • In addition to all required elements of informed consent, please include the following information:
    • All inclusion criteria. Please include this information regardless of whether you are using the platform to screen participants for you.
    • If you are collecting MTurk worker IDs and IP addresses, tell workers how this information will be used and retained for the purposes of research. Specify when this information will be removed from the dataset.
    • If the study involves stimuli or questions that participants may find upsetting, please describe this in the consent form.
    • Describe any circumstances in which participants will not receive full payment. (Note that payment must not be contingent upon completing the entire study, and payment must accrue as the study progresses. See “Payments to Research Participants”)

Best Practices for Crowdsourcing:

Crowdsourcing platforms are best used for methodologies that are not negatively impacted if participants become distracted. Attention checks can be used to determine whether participants are attending to study materials (however, participants should not be denied payment or other benefits if they do not pass attention checks). Furthermore, crowdsourcing platforms are not well suited for collecting rich, qualitative data, because workers typically attempt to complete studies quickly, as their income can depend upon completing as many studies as possible. Requesters should also consider that some workers might be dishonest about their demographic characteristics in order to fit requesters’ recruitment criteria. If this is a concern, it is often possible to use MTurk and Prolific to screen participants (see “Determining Participant Eligibility” above). See the following articles for further reading on best practices for use of MTurk and Prolific:

Buhrmester, M. D., Talaifar, S., & Gosling, S. D. (2018). An evaluation of Amazon’s Mechanical Turk, its rapid rise, and its effective use. Perspectives on Psychological Science, 13(2), 149-154.

Palan, S., & Schitter, C. (2018). Prolific.ac—A subject pool for online experiments. Journal of Behavioral and Experimental Finance, 17, 22-27.

Paolacci, G., & Chandler, J. (2014). Inside the Turk: Understanding Mechanical Turk as a participant pool. Current Directions in Psychological Science, 23(3), 184-188.

Peer, E., Brandimarte, L., Samat, S., & Acquisti, A. (2017). Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology, 70, 153-163.

The Ethics of Crowdsourcing:

When conducting research using crowdsourcing platforms, please consider the negative impact of low pay and rejections on participants. Many participants rely on crowdsourcing platforms to earn their entire income, or to supplement an income that is below a living wage. The following articles offer information about crowd workers’ experiences, as well as recommendations for the ethical treatment of crowd workers:

Gleibs, I. H. (2017). Are all “research fields” equal? Rethinking practice for the use of data from crowdsourcing market places. Behavior Research Methods, 49(4), 1333-1342.

Newman, A. (2019, November 15). I Found Work on an Amazon Website. I Made 97 Cents an Hour. The New York Times. Retrieved from http://www.nytimes.com