Recruiting and testing participants: not a lousy job

Picture by Thomas Hawk

Consider this blog a letter to anyone (students, supervisors) who has been captured by the idea that data collection is a menial job that anyone can do. Let me remind you why recruiting and testing participants is anything but ignoble.

Recently, I have noticed that the ‘image’ of data collection (by which I mean participant recruitment and testing) has become quite negative, as if the work were not important. It seems this work is perceived as something akin to getting coffee or copying files (not that those are unimportant either!), in other words as a lousy job. Although I personally understand how demotivating routine lab work can be, this only underlines the need for sources of motivation – and perhaps for a recap of how important this work really is.

Data (and therefore data collection) may be the most important aspect of any empirical study. Researchers may differ in the topics or fields they study and the methodologies they use, but all empirical research is based on data, which is analyzed and interpreted for information. And, at least when studying human behavior, we need human participants to provide us with that data [while we are at it: a massive thank you to all volunteers participating in research, your contributions are invaluable!]. Recruiting those participants, however, is a challenging task that takes a lot of effort and should not be underestimated. Yet it is not impossible, despite the many students who try to convince me otherwise ...

Once recruited, participants are guided through the study by an experiment leader (or ‘experimenter’) to ensure the quality of the data. It is of the greatest importance to that quality, and therefore to research in general, that this is done through standardized procedures, with the greatest precision, and with a number of issues in mind:
Remember “Clever Hans” (Pfungst, 1911), the horse that could do math? It turned out the horse picked up on the (unconscious) body language, movements, or gestures of the experimenter (be it the direction of eye gaze, a movement of the nostrils, or the raising of the eyebrows), which made it tap out the right answer to the questions. This is a classic example of how the experimenter can drive the effects in the data.

By now, numerous articles and books have been written about the ‘experimenter effect’ as put forward by Rosenthal (1966): any form of subtle cue or signal from an experiment leader may affect the responses of a participant. Such a cue or signal can be verbal or non-verbal (e.g. tone of voice or facial expression), intentional or unintentional, and even conscious or unconscious (e.g. automatic gestures). Still, it can significantly affect the outcome of an experiment. Even small differences in the instructions given to a participant may lead to different outcomes (Rosenthal, 1998).

Double-blind experiments provide a way to prevent some of these effects, which are often referred to as experimenter bias. Say we are comparing condition A to condition B, and we expect participants in condition B to demonstrate better performance: as long as neither the participant nor the experimenter knows who belongs to which condition, the effect of their expectations, or bias, on the results will at least be minimized.
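As an illustration only – the file names, group labels, and participant IDs below are made up – here is a minimal Python sketch of how such a blinded assignment could be set up: participants are randomly and evenly assigned to neutral group codes, the experimenter only ever sees those codes, and the key that maps the codes to the actual conditions is stored separately until data collection is finished.

```python
import csv
import random

# Hypothetical sketch: generate blinded condition assignments so that,
# during testing, neither the experimenter nor the participant knows
# which neutral group code corresponds to condition A or condition B.

participants = [f"P{i:03d}" for i in range(1, 41)]   # e.g. 40 participant IDs
codes = ["group_1", "group_2"]                       # neutral labels shown to the experimenter
conditions = ["A", "B"]                              # the real conditions behind the labels

rng = random.Random(20240101)                        # fixed seed so the assignment is reproducible
rng.shuffle(conditions)                              # randomly decide which code hides which condition
key = dict(zip(codes, conditions))

# Balanced assignment: half the participants per code, in random order.
assignment = codes * (len(participants) // len(codes))
rng.shuffle(assignment)

# The experimenter only ever sees this file (participant ID + neutral code).
with open("assignments_blinded.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["participant", "group"])
    writer.writerows(zip(participants, assignment))

# The code-to-condition key is stored separately (e.g. held by a colleague
# not involved in testing) and only opened once data collection is complete.
with open("unblinding_key.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["group", "condition"])
    writer.writerows(key.items())
```

Keeping the unblinding key out of the experimenter's hands until after testing is the whole point: the person interacting with participants cannot (even unconsciously) signal which condition someone is in, because they do not know it themselves.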

Although we may be able to minimize expectation effects in this way, the experimenter effect may explain why some results can only be obtained by one experimenter or group of experimenters. Failure to replicate an effect may be due in part to differences in the cues (as discussed above), explanations, or motivation of the experimenters. Considering the recent ‘replication crisis’ in the social sciences (Pashler & Wagenmakers, 2012; Stroebe & Strack, 2014), it is important to pay close attention to such factors, and to remind ourselves of their importance.

In order to minimize experimenter effects, it is important to ‘standardize’ the procedures that are followed. That is why, in my opinion, data collection is a delicate job that needs to be done with the greatest caution. This implies that any experiment leader needs to fulfill a number of tasks that require knowledge, effort, and skill:
Experimenters are required to have the knowledge and ability to guide participants through the study. This implies knowledge of the background and purpose of the study; thorough experience with the procedure; enough affinity with the population to adapt the level of explanation and to empathize with the questions, opinions, or attitudes of the participant; and verbal and non-verbal communication skills. To ensure the quality of data collection, and the equal treatment of all participants, the experimenter further needs knowledge of and experience with the specific hardware and software, an accurate and independent work attitude, a flexible and practical mindset, motivation and time, and planning skills, to name a few. This is not the easiest set of skills.

If we acknowledge data collection (participant recruitment and testing) as the most vital part of any empirical research, it follows naturally that it should be done with the greatest precision. This requires not only appropriate standardization of data collection procedures (and then adhering to them!), but also numerous skills and thorough training of experimenters. Most of all, it requires understanding and appreciation of the importance of their work.

Hróbjartsson, A., Thomsen, A. S. S., Emanuelsson, F., Tendal, B., Hilden, J., Boutron, I., ... & Brorson, S. (2012). Observer bias in randomised clinical trials with binary outcomes: Systematic review of trials with both blinded and non-blinded outcome assessors. BMJ, 344, e1119.
Pashler, H., & Wagenmakers, E. J. (2012). Editors’ introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528-530.
Pfungst, O. (1911). Clever Hans: The horse of Mr. von Osten. New York, USA: Henry Holt.
Rosenthal, R. (1966). Experimenter effects in behavioral research. East Norwalk, CT, USA: Appleton-Century-Crofts.
Rosenthal, R. (2002). Covert communication in classrooms, clinics, courtrooms, and cubicles. American Psychologist, 57(11), 839.
Stroebe, W., & Strack, F. (2014). The alleged crisis and the illusion of exact replication. Perspectives on Psychological Science, 9(1), 59-71.
