Summary of “How Random Is That?” & A TED Talk About “Bad Science”



Part A: “How Random Is That?” This article (see attachment) discusses sampling methods commonly used in many psychology department research studies. Write a 2-page summary of the article, including your opinion of it. Reflect on how we’ve discussed sampling methods in class and how this article fits in with your knowledge of sampling. Be sure to read all the “Sidebars” as well.

Part B: Here’s Ben Goldacre in a 20-minute TED video talking about his favorite topic, Bad Science. He addresses the journal-to-journalism cycle, the flaws of authority, the placebo effect, industry-sponsored trials, publication bias, correlation/causation, ethics, and more. The overall message is the importance of publishing all data, both for and against a drug, therapy, or intervention. To receive credit, write a 2-page summary of the video (http://www.ted.com/talks/ben_goldacre_battling_bad_science.html), including your opinion of it. Reflect on how we’ve discussed science, methods, and ethics in class and how this talk fits in with those discussions.

How Random Is That?

Students are convenient research subjects but they’re not a simple sample

Compared to the hard rock of empirical methods, 18- to 20-year-old college students are a wet marsh of spontaneous behavior and malleable minds. In 1971, notable personality researcher Rae Carlson called students “unfinished personalities” who may fundamentally differ from non-students in a number of psychological ways. Fifteen years later, APS Fellow and Charter Member David O. Sears wrote in the Journal of Personality and Social Psychology that “college students are likely to have less-crystallized attitudes, less-formulated senses of self, stronger cognitive skills, stronger tendencies to comply with authority, and more unstable peer group relationships.” They change personal ideologies from lecture to lecture, scuttle to and fro as their hormones direct, wake up at six o’clock — in the evening. But despite being behavioral works-in-progress, college students remain the primary subject pool for most psychological researchers, leaving some to question whether findings from this “convenient” population can generalize to the world at large.

See Also:

Engaging Research Participants


Diving Into the Subject Pool

Making Research Educational

On Both Sides of the Consent Form

“The goal of psychology is to make nomothetic laws — laws that apply to all people,” said APS Fellow Lisa Feldman Barrett, Boston College. “The question is, how well can you do that when you’re sampling by convenience?”

The question is an important one, considering that in 1999, students made up 86 percent of the samples for subject-based articles appearing in the Personality and Social Psychology Bulletin, and 63 percent for the Journal of Personality and Social Psychology, according to a study led by APS Fellow Richard C. Sherman. Since its inception in 1992, the Journal of Consumer Psychology has included college samples in another 86 percent of its empirically based articles.


Though the numbers may seem alarming, asking why students are so widely used is like asking why breathing air is the preferred method for oxygen intake — the reasons range from the obvious to the more obvious. “They are a very convenient and captive subject pool that researchers can dip into with relative ease,” said Michael Hunter, University of Victoria. So convenient, they are commonly known as the “convenience sample,” often showing up at a researcher’s door as part of a requirement for an introductory psychology class.

The price is right, too, said APS Fellow and Charter Member Peter Killeen, Arizona State University. “They’re cheaper than white rats, and they’re more similar to the population to which we hope to generalize,” he said. “And they seldom bite.” Feldman Barrett believes that without these low-cost, easy-access samples, textbooks would be as empty as journals and her lab would be as empty as either; in such a scenario, she predicts being able to run merely a quarter of the experiments she does now.

Ironically, just about the only thing college samples have not been used to study is themselves. It is this lack of empirical confirmation that APS Fellow and Charter Member Harry Reis, editor of Current Directions in Psychological Science, calls the most definitive reason why students remain the default sample. “The suspicion that the literature is flawed because of reliance on college students has been around for a long time,” said Reis, University of Rochester. “The objection is an obvious one, but I’ve never seen data, to date, showing that it is a serious problem.”

But like a formative undergrad, that might be about to change.

Emerging Evidence
Robert A. Peterson is not a psychologist, but he often plays one in the research lab. A professor in the McCombs School of Business at the University of Texas at Austin, Peterson regularly crosses paths with consumer psychology, and his research appears in publications like the Journal of Applied Psychology and Psychology and Marketing. Recently, he completed what he believes to be the most thorough empirical study on student samples, primarily because few such studies exist.

“How do you comprehensively analyze the results of psychological experiments? How good are data based on college student samples? These are questions we need to ask, since the vast majority of studies being published use students,” said Peterson, who has been interested in such methodological questions for 40 years. “If you look at the issue of using students there is virtually no empirical evidence supporting or challenging their use.”

Studies do exist on whether samples of convenience can produce research of significance, but many are anecdotal in nature, said Peterson, or are driven by logic and emotion. APS Charter Member Robert Dipboye and Michael Flanagan fell just short of implying that convenience samples were a serious problem in their 1979 paper, “Are Findings in the Field More Generalizable Than in the Laboratory?” Dipboye and Flanagan analyzed content from volumes of the Journal of Applied Psychology, Organizational Behavior and Human Performance, and Personnel Psychology to determine if field research was more generalizable than lab findings, as was the common belief. Instead, they found field research to be “as narrow as laboratory research in the actors, settings, and behaviors sampled” — hardly a glowing endorsement for either sample — and suggested using both populations whenever possible.

But few others have addressed the problem with Peterson’s scientific rigor. He and his assistants spent years scouring the literature for articles where student and non-student samples had been used in the same research. When the dust cleared, he had compared an estimated 650,000 student and non-student subjects — perhaps trying to make up for the lack of literature in one fell swoop. The result was his paper, “On the Use of College Students in Social Science Research,” and the first dent in the armor of student samples.

After analyzing 65 behavioral and psychological relationships, Peterson found that nearly one in five conclusions based on a college student sample may differ directionally from conclusions based on a non-student sample. He also found that 29 percent of the relationships differed in magnitude — the larger effect size in a pair of student and non-student studies exceeded the smaller one by a factor of two or more. Altogether, nearly half of the effect sizes observed for students and non-students “differed substantially.”
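For readers who want the two criteria spelled out, here is a minimal sketch in Python. The paired effect sizes are invented, and the checks simply restate the directional and factor-of-two rules above; this is an illustration, not Peterson’s actual procedure:

    def compare_effects(student_r, nonstudent_r):
        """Flag the two kinds of disagreement described above:
        a directional difference (opposite signs) and a magnitude
        difference (the larger effect at least double the smaller).
        """
        direction_differs = student_r * nonstudent_r < 0
        small, large = sorted([abs(student_r), abs(nonstudent_r)])
        magnitude_differs = small > 0 and large / small >= 2
        return direction_differs, magnitude_differs

    # Hypothetical pair: r = .30 for students, r = .12 for non-students.
    # Same sign, but the student effect is 2.5 times the non-student one.
    print(compare_effects(0.30, 0.12))  # (False, True)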

Despite presenting some of the topic’s most convincing evidence, Peterson’s conclusion was more cautionary than apocalyptic, suggesting outside replication of student-based research before any generalizations are made. “At a minimum,” he said, “research based on one sample of college students from one subject pool at one university needs to be replicated with students from a different university.” Instead of putting an end to one problem, however, the results bring attention to another — a core barrier to generalizability: the difficulty of reaching a non-matriculating population.

So Many Barriers
Some of the ado may be about nothing. Conveniently enough, a number of disciplines seem tailor-made for the convenience sample. “In some areas — perception, memory, attention, many of the cognitive sciences — it typically doesn’t seem to matter,” said APS Fellow and Charter Member James Cutting, editor of Psychological Science. Whatever their behavioral idiosyncrasies, undergraduates maintain the same core neural networks, some argue. When studying attachment processes, college students might even be the ideal sample, Feldman Barrett said, since they are often under self-esteem threat and are the exact population to which such research should generalize.

Other areas in psychological science are not as lucky. As a developmental researcher, APS Fellow and Charter Member Valerie Reyna often lacks the luxury of calling upon a student sample. An undergraduate might roll out of bed and into the lab, but a child subject must be located, and a parent paid (or, more often, convinced) to take off work and drive to the lab. “For people who do developmental psychology research, it’s a big job to set up the things necessary to do a study,” Cutting sympathized. “It’s hard to get subjects for experiments, and it’s harder to get non-college students.” Should Reyna or a colleague successfully find the child and get him or her to the lab, bureaucratic red tape makes it hard to secure a nearby parking spot. “Even then,” said Cutting, “an undergraduate might park there anyway.”

At times, Reyna’s biggest obstacles are obstinate school systems, hesitant to release information about a study to parents (forget about reaching children directly). “Some people understand that research is an important social good,” said Reyna, University of Texas at Arlington. “But some school systems do not let researchers give out information to see if someone would even potentially want to be in a study.” To tackle critical questions like the effectiveness of public health curricula in curbing risky adolescent behavior and the best ways to interview crime victims, “it is crucial to break down the barriers to locating and recruiting non-college populations,” she said. “Scientists are limited in the questions they ask because there are so many barriers in getting to outside populations.”

However difficult it is, reaching non-student populations remains important. Ask APS Member Tom Pyszczynski if reaching non-students takes a prohibitive amount of time, effort, and money, and he agrees — then says do it anyway. “When we do research with college students, we’re assuming that psychological processes are relatively universal,” said Pyszczynski, University of Colorado at Colorado Springs. “That’s an assumption — it’s not necessarily absolutely correct.”

To see how these processes are activated in different cultures, Pyszczynski often replicates a study in an entirely new population — comparing results from a convenient sample to those collected one state, or one hemisphere, away. To test his terror management theory, Pyszczynski asked middle-aged judges in Arizona to assign bond to a prostitute. Before assigning bond, some judges were primed with images of death, and others were not. According to the theory, people control a fear of death by buying into cultural belief systems — shared rules of behavioral conduct. A violator of these social rules, such as a prostitute, causes a fear of death and elicits a harsh reaction from the community. Judges reminded of death assigned prostitutes an average bond of $450; control judges averaged only $50.

Pyszczynski then replicated the study with college students who, lo and behold, also recommended a higher bond when reminded of death first. In this case, however, their monetary figure depended on whether they approved or disapproved of prostitution, which varies more in students than in judges. Borrowing Pyszczynski’s theory, German psychologist Randolph Oxman did the same study overseas and found that prostitution was not a great activator. At first glance, this seems at odds with both the judges’ and the students’ responses, until one point is made clear: prostitution is legal in Germany and thus seen as less of a moral transgression.

To Pyszczynski, these results show the importance of getting as many samples as possible, since even universal psychological processes may be activated differently depending on the population. “Generalization is very important, but it’s not a simple question of whether this study will yield the same results with a different person,” he said. “The question is, if you translate your psychological variables to fit the subculture you’re working with, would you find equivalent results?”

Cutting agreed that better journal submissions use many populations. One way to do this is by using Web-based research to augment laboratory findings, a method he sees happening more and more. This confronts two problems with the convenience sample: age and gender. If college students are roughly age 20, Cutting estimates, Web-based participants are five to 10 years older. In addition, it is common for two-thirds of Web samples to be male, which counteracts the predominance of females in classroom and laboratory experiments. (In 2004, nearly half the studies published in Psychological Science tested for gender; of these, 59 percent of the participants were female.)

Guinea Pigs: (from left) Ron Goode, Nikhil Baerjee, Peter Killeen, Matthew Sitomer, and Diana Posadas-Sanchez gather as participants in Killeen’s psychological studies, including research on the internal clock and economic models of reinforcement.

“When you have a lab-based study and a Web-based study, and the results match, then that’s nice,” Cutting said. “One feels a bit more confident in generalizing.”

This confidence is far from unanimous. Feldman Barrett acknowledged that Web samples may be more representative of the general population’s age, but they carry their own limitations — namely, an above-average socioeconomic status. “You’re still getting a non-representative sample on some dimension,” she said. “Scientific conclusions will be drawn on the basis of studying those who can afford to access a computer with an Internet connection.”

For once, somewhat refreshingly, the Internet might not provide the answers. Instead, a step forward might require revisiting a time when fundamental statistical methods ruled research.

Statistically Speaking
Feldman Barrett admits that some psychologists aren’t careful enough about making generalizations — in part because they lack a sufficient understanding of statistics. “Most of us are not trained in sampling theory,” she said. “There are sciences that take sampling much more seriously than we do.”

Michael Hunter is trained in sampling theory, and he does take it seriously. Co-author of the 2002 paper “Sampling and Generalizability in Developmental Research,” Hunter sees a shift away from statistical methodology in psychological research, away from a time when using analysis of variance supported an experiment’s causality and using multiple regression analyses helped infer generalizability.

“Today, and for some time now, psychological researchers judge generalizability not on statistical grounds but instead on methodology, educated guesses or common sense, and accumulated knowledge,” Hunter said. Some of these techniques are fine to draw generalizations from, he said — sometimes even highly effective — as long as they are based on random samples of the population in question. The problem, of course, is that college students are seldom a random sample of a university population, let alone a national one. “Obviously, this basis of generalizability does not augur well for the generalizability of some research with college students, who are a selective population and who are rarely randomly sampled.”

Fortunately, there is a way around taking a random sample from a broad population, says Peter Killeen. Unfortunately, that way is far less traveled. Instead of traditional statistical techniques, which are often inaccurate when used with samples of convenience, Killeen advocates permutation tests, which were difficult to implement before computers yet still remain untaught in this age of machines.

“Some researchers don’t use permutation methods because, while they let us make causal claims, they don’t support predictions to larger populations,” Killeen said. “But convenience samples preclude such generalizations in the first place. So, if we continue using convenience markets, such as introductory psychology classes, for our subjects, then we should at least get right with the statistical models we use to analyze their data.”

The permutation test Killeen referred to is often called the Pitman test, after Tasmanian statistician E.J.G. Pitman. According to Cliff Lunneborg, professor of statistics and psychology at the University of Washington, the Pitman test is known as a randomization test when used on convenience samples. The key to a randomization test is that it does not require a sample to come from a large population — in fact, randomization thrives on taking one random sample, such as a set of convenient university students, and randomly dividing it into two experimental groups, which can then be analyzed and concluded upon as though they had both been drawn randomly from a vast pool of participants. “Randomization tests should be much more widely taught than they are,” Lunneborg said. “Psychological researchers understand the value of randomization in controlling for experimental error. What they have not adopted is the notion that randomization provides a powerful basis for statistical inference as well.”
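To make the idea concrete, here is a minimal randomization-test sketch in Python. The reaction-time data are invented, and the difference-in-means statistic is just one simple choice; this illustrates the shuffle-and-recompute logic Lunneborg describes, not any particular published implementation:

    import random
    from statistics import mean

    def randomization_test(group_a, group_b, n_permutations=10000, seed=1):
        """Two-sided randomization (Pitman) test for a difference in means.

        The only randomness assumed is the random division of one
        convenience sample into two groups; no random sampling from a
        larger population is required.
        """
        rng = random.Random(seed)
        observed = abs(mean(group_a) - mean(group_b))
        pooled = list(group_a) + list(group_b)
        n_a = len(group_a)

        extreme = 0
        for _ in range(n_permutations):
            rng.shuffle(pooled)  # re-randomize the group assignment
            if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= observed:
                extreme += 1

        # p-value: the proportion of re-randomizations at least as
        # extreme as the division actually observed
        return extreme / n_permutations

    # Hypothetical reaction times (ms) from one intro-psych subject pool,
    # randomly split into treatment and control groups:
    treatment = [512, 498, 530, 476, 541, 503, 489, 520]
    control = [545, 560, 533, 579, 541, 568, 552, 547]
    print(randomization_test(treatment, control))

The inference here concerns the effect of the random assignment within the sample at hand, which is exactly the kind of causal claim Killeen argues convenience samples can support.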

Whether such methods are adopted or remain empirical orphans, Hunter said there is one unequivocal way to ensure more generalizable results, whatever the sample: “The greatest gains in generalizability will come from better measurement rather than better sampling,” he said. “The best sampling scheme in the world will not overcome poor measurement.”

Cumulative Nature to Knowledge
Like a precocious freshman in an advanced seminar who may be onto something others are not ready to hear, Peterson’s work has been met with reluctance. “Because the reviewers evaluating the submitted manuscript used college student samples, they were generally not predisposed toward the study’s findings and conclusions,” he said. “In general, I believe it is very difficult to get research published that goes against an existing methodological paradigm.” He has since had some difficulty publishing additional research on the topic, this time looking at differences in the attitudes of over 3,000 students from 58 universities. Though he preferred not to have the study quoted since its publication remains uncertain, Peterson says his new findings take the old ones slightly further, strongly urging the scientific community not to generalize based on student samples but instead to set theoretical boundaries within which further research must be performed.

Pyszczynski thinks such a parameter may be dead on. After all, science is not about answering all questions with one study — strands of smaller generalizations can be sewn over time into a variegated tapestry of behavior. “Rather than study one population for all human beings, it’s better to come up with a conceptual model and adapt it for use with another population,” he said. “No one study makes a point that anyone should take seriously. There’s a cumulative nature to knowledge.”

If nothing else, new scrutiny of college samples emphasizes an even larger, more urgent concern in the behavioral community — the ability to fund experiments whose immediate application might be impossible, but whose theoretical contributions will lay the groundwork for improved public health. Reyna already sees this happening with aging research, an area that requires older participants. The National Institute on Aging has created incentives for researchers to go beyond college populations to study aging, she said. If other funding institutions made efforts to pool and pre-screen research volunteers, science could greatly expand its reach beyond the shadows of the ivory tower.

“Scientific organizations and agencies should communicate to the public how valuable their contribution to research would be, and should help create policies that remove barriers to participation,” Reyna said. “Sometimes people see research as a luxury, when in fact it’s a necessity.”

References

· Carlson, R. (1971). Where is the person in personality research? Psychological Bulletin, 75, 203-219.

· Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow data base on social psychology’s view of human nature. Journal of Personality and Social Psychology, 51, 515-530.

· Sherman, R. C., Buddie, A. M., Dragan, K. L., End, C. M., & Finney, L. J. (1999). Twenty years of PSPB: Trends in content, design, and analysis. Personality and Social Psychology Bulletin, 25, 177-187.

· Peterson, R. A. (2001). On the use of college students in social science research: Insights from a second-order meta-analysis. Journal of Consumer Research, 28, 250-261.

· Dipboye, R. L., & Flanagan, M. F. (1979). Are findings in the field more generalizable than in the laboratory? American Psychologist, 34, 141-150.

Sidebar I: Engaging Research Participants

The nagging thought that ran through my head as I prepared to run my first study with undergraduate participants was “I sure hope they do a better job than I used to do when I was an undergrad!” As an apprehensive freshman participant, I just wanted to get a taste of what it would be like to be a psychology major and also put five or ten bucks in my pocket. I sincerely wanted to help the researchers too, and I certainly didn’t want to disappoint anyone, but on a few occasions I couldn’t help feeling like I had done just that.

Some of those studies were pretty boring, and others seemed unnecessarily taxing (“You want me to read how many pages of instructions?!”). I felt especially bad when a memory researcher asked me earnestly, “Are you sure that’s all you can remember?” after I had done my best to recall long strings of nonsense syllables. Looking back, of course, I don’t know whether his entreaties were a part of his study design or not — I guess I didn’t always read the debriefing sheets too carefully back then, either.

So you would think that, armed with these memories, I would set out to design the most compelling, fun, and educational studies that I could. But of course, that’s not how it really worked out. In my zeal to get the “right” data, it was just too easy to keep adding more and more questionnaires and filler tasks. And I heard my share of complaints about it (“why did you ask the same question over and over?” was a familiar refrain). There were always a few students who protested by responding randomly on questionnaires, or pushing random keys during computer tasks—and then there was the one who refused to take off his headphones, whose guarantee that he would leave his CD player off was not particularly reassuring.


But it seemed that most of the time, even when the tasks themselves weren’t the most inspiring, the student participants were still genuinely interested in what the study was about. I found that the little extra time I put into making the debriefing sheets and sessions educational, and into honestly taking all of the students’ questions (and even their study design suggestions) seriously, was well appreciated. An occasional student even referred friends to participate — although that might have been because I was conducting narcissism research and they thought their friends were narcissistic!

Of course, I’m not recommending designing long, arduous studies. But in my experience, as long as there is something engaging about a study, students who have already gone out of their way to participate in it will leave with a positive attitude.

The educational drive of the undergraduates in these studies makes me particularly confident that they are taking their participation seriously. But it can also have a downside. In my experience, “real world” participants are very likely to take a study at face value, while curious and savvy students (especially the ones who are in psychology classes) are more likely to try to “outwit” your design and hypotheses. Usually, this means that they try to give you the outcome they believe you’re hoping for, but occasionally their motives aren’t so benign. Luckily, they’re wrong about the details more often than they’re right. But whatever their motivation, or how accurate they are about the study’s details, the student participants who “analyzed” the studies while they were participating in them caused me great concern about the validity and generalizability of my data (especially when coupled with the demographic outlier status of most of the students). For me, that means that I’m more comfortable using undergraduates in straightforward, correlational research than in cleverly designed experiments. But plenty of my colleagues in graduate school have disagreed with me about that.

Practically speaking, having a good study pool was a tremendous help—in general, participants from psychology classes took their participation more seriously than those who were just there to make a few bucks. As for those paid participants, freshmen in their first few weeks of school, and seniors in their final ones (post-thesis!) were the most reliable. I was also surprised to find that most students prefer a guaranteed small sum of money over a chance to win a big prize in a lottery drawing. But most of all, it seemed that conveying to the undergraduates (implicitly or explicitly) that I was excited about the research, and that I cared about them and their experience as participants, led to the most rewarding experience for them, and the best data for me. And adding a bottomless bowl of candy to the mix didn’t hurt either!

SETH A. ROSENTHAL is a PhD candidate at Harvard University.

Sidebar II: Diving Into the Subject Pool

I watched as a fair-skinned, well-built man with sandy blonde hair approached a woman from behind as she crossed the street. In a flash, he grabbed her purse and ran off. The stranger beside me also saw this turn of events and immediately struck up a conversation with me. “Did you see that?” she asked, “I can’t believe the guy stole that lady’s pink purse! He looked way too skinny to be any kind of threat!”

That’s funny, I thought. He looked rather built to me, and wasn’t the purse red? Had this stranger and I watched the same event?

Minutes later an investigator came into the viewing area and invited me to a back room for some questioning. “What did the thief look like?” “What was his approximate build?” “What was he wearing?” “What color was the purse that was stolen?”

After giving my description, the investigator asked if I would be willing to view a photo array of possible suspects. As I began to flip through the photos, I began to question my memory. Could he have been skinnier than I initially thought? Did he have blonde hair or was it brown? Maybe the purse was purple, not red? After much deliberation, I finally chose someone who I believed resembled the thief. Close enough, I thought, knowing full well that I had to be in my next class in 15 minutes.


I was thanked for my time, told that the stranger in the viewing room with me really was a confederate trying to change my memory, and finally given an extra credit slip for participating in my first psychology experiment. As I walked away from that study, I remember musing, would I have been willing to point the finger if this was real life and I was down at the police station? And that really is the question: How differently do people behave in a laboratory experiment versus the real world? Moreover, do students motivated by extra credit points really care about the outcome and behave like everyday people?

The answer is some do and some do not. Ironically, I now study eyewitness memory and rely heavily on the same subject pool I once participated in to get subjects. Being on the other side of the subject pool is a whole different kettle of fish, so to speak.

As a first-year psychology student, my primary motivation was to receive the extra credit points that went along with participating. Every hour of credit equated to .05 grade points, so six hours of credit (the maximum you could do) equated to a .3 rise in grade point average. Little did I know that the studies I took part in would spark a passion in me for conducting interesting research.

As a researcher using the student subject pool, I have learned that there are three types of subject pool subjects. First, there is the “ultra-motivated” subject. This individual is motivated by grades, interested in the research matter, and wants to get their extra credit hours out of the way so they can spend more time studying. These subjects sign up at the beginning of the quarter, show up, ask questions when they are done, and — truth be told — are few and far between. The second type is the well-intentioned “no-show” subject. These individuals have every intention of finishing their extra credit hours early, so they sign up early and never show. Throughout the quarter, they continually sign up for your study and continually fail to turn up. The “no-show” subject is extremely frustrating because valuable time is wasted: I hire research assistants, and if there are no subjects to run, I still have to pay them. (Let me take this opportunity to apologize to any researchers whose studies I signed up for and didn’t turn up to when I said I would.)

Finally, there is the “end of quarter” subject, who realizes that her grades are suffering and that if she doesn’t get extra credit she might not pass the class. As a researcher, I have found that these seem to make up the majority of the subject pool, as indicated by the fact that sign-up sheets fill up toward the end of the quarter.

As a student, I tended to sign up for studies that had catchy names and sounded interesting. As a researcher, I try to come up with catchy names and make my studies interesting in order to attract students. It is almost a battle of wits among researchers running “competing” studies. I mean really, would you rather participate in a study that boasts the title: “Individual Differences in Remembering and Forgetting” or “Who Dunnit? Memory for Crime?” I know which one I would choose.

SEEMA CLIFASEFI is a Postdoctoral Fellow at the University of Washington and Past President (2002-2003) of the APS Student Caucus.

Sidebar III: Making Research Educational

My motivation as a participant in the psychology department subject pool was similar, I suspect, to that of many undergraduates: to fulfill course requirements. I did so somewhat grudgingly, not entirely understanding the benefits of research and thinking mostly about how the extra time commitments were going to burden an already busy schedule. Some of the experiments seemed interesting at the time; most did not.

With a few exceptions, my first experiment was representative of them all. I showed up on a Saturday morning and waiting for me was a bright-eyed research assistant. She was an advanced undergraduate who I had seen in some of my classes. The study, she said, “tests how people think.” (I’m still not sure which studies are not included in that description!). She said I would see a series of words on a computer screen, each printed in a different color. My task was to say the color of the word out loud as quickly as possible. I knew that she was administering some version of the Stroop task because I had read about that in a cognitive psychology text, but that’s the sum of what I gathered intellectually from the experience.


It turns out that the study was actually both socially important and very interesting. The principal investigator was Ian Gotlib, Stanford University, and the goal of the study was to examine how depressed and non-depressed individuals process emotionally valenced information. The thinking was that depressed individuals might exhibit preferential attention to and memory for depressotypic information, and that these processes may work to maintain one’s negative cognitions and, in turn, the depressive symptoms. The computer tasks that I completed were designed to detect these precise biases in cognitive functioning, which are often measured in milliseconds. Even this brief description, I believe, sounds more interesting than simply saying that the study examines “how people think.”

The research process is much more interesting from the perspective of a graduate student or professor because they understand the theoretical basis for the experimental tasks being administered. They know what the tasks are trying to detect, why certain tasks are more appropriate than others, and how the data derived from the tasks fit into the larger conceptual picture of the phenomenon under study. For these reasons, they also understand why each undergraduate’s time is valuable. Undergraduate research participants do not understand these features of the research process because these features are rarely explained to them. Debriefing forms are mandatory, but almost never interesting, let alone conceptually informative.

The opportunity to partake in the research process as a subject is often framed as an educational experience to undergraduate students. I think this promise is rarely fulfilled. Instead of vague, uninformative, and boring debriefings that say little about why the study is socially relevant and very interesting, why not use the experience as a teaching tool to convey something exciting about research methods and the specific content area under investigation? The opportunities here are endless. Undergraduates could evaluate empirical articles that are related to the experiments, write brief reaction papers to the experiments, and/or design and propose similar experiments in either a written or oral presentation format. We might not be able to make all of the experimental tasks fun, but we can certainly make the overall process of being a participant in the subject pool more exciting and educationally valuable.

GEORGE SLAVICH is a PhD candidate in clinical psychology at the University of Oregon. He received the inaugural Psi Chi-APS Albert Bandura Graduate Research Award in 2004.

Sidebar IV: On Both Sides of the Consent Form

I entered a psychology laboratory for the first time during the spring semester of my freshman year at Emory University. Like most of the other students taking introductory psychology, I viewed research participation as another hoop to jump through in the process of procuring a decent grade in the course.

Despite my reluctance to give up part of a Saturday morning, I was filled with rampant curiosity about what was going to transpire during the hour-long session. What sort of tasks would I be required to perform? Would I be in the control or the experimental group? What is the purpose of this experiment? While I had envisioned a scenario involving carefully planned deception and confederate participants (a symptom that commonly manifests itself right after the Milgram experiments are covered in class), the experiment was rather mundane.


Sitting alone in a small room, I filled out a questionnaire about relationships, both romantic and non-romantic. During the debriefing, I asked (hopefully) whether a small mirror on the wall was really a hidden observation window. Amused, the experimenter assured me that it was not. And that was my first experience in a psychological laboratory — nothing too exciting. I went on to participate in about seven more experiments during college. I experienced several different psychological paradigms, ranging from a visual attention task (in which my eye movements were recorded) to an autobiographical memory study.

Now, as a third-year graduate student at Washington University in St. Louis, I have switched roles completely. Instead of offering my brain and behavior for study, it is I who request this of others, most of them unpaid undergraduates. Some people take issue with the practice of requiring undergraduates to participate in psychology experiments for course credit, calling it nothing short of coercion. However, research participation — to invoke a frequently used justification — is an educational experience. During my hours as a participant, I learned a lot about what should and, perhaps more often, should not be done in an experiment. I gained a good understanding of how motivated the average person is to take part in a series of demanding tasks for an unknown purpose. Also, I realized how important it is to always thank participants for their time and describe the rationale behind the experiment in terms that can be easily understood.

Most of all, I gained the invaluable ability to view an experiment from the perspective of a participant. One might argue that the educational benefits of research participation only extend to those who go on to become experimental psychologists, but I disagree. Even for people who pursue an unrelated vocation, participating in an experiment provides an intimate glimpse of the scientific process. Although the knowledge gained, as in much of education, is largely implicit, it helps students to better understand psychology and science as a whole. To be sure, there is a limit to the educational benefits of participation: the marginal utility derived from participating declines sharply after about five or six experiments. Most universities (e.g., Washington University) have a cap on the number of experimental hours that people can amass. With appropriate safeguards and an emphasis on education, undergraduate research participation benefits everyone.

A few hours are a small price to pay for an enlightening experience and the advancement of psychological science. Perhaps I am overly optimistic, but I appreciate my experiences on both sides of the consent form.

ANDREW BUTLER is a graduate student at Washington University in St. Louis. He is current Graduate Advocate of the APS Student Caucus.
