Presidential Column: Nancy Baker, PhD
By Nancy Baker, PhD
When I read the morning paper, I am accustomed to reading about issues to which Div. 1’s overarching and integrative approach to the field of psychology could contribute in a meaningful way. There are the long-standing problems of racism and sexism, along with the serious issues raised by the rise of religious fundamentalism, with violence generated in the name of religion, and by the impact of catastrophic climate change (often referred to as global warming), sustained by the lack of political will to combat it. But recently, I have been distressed to find myself reading about issues in psychology. First it was the Hoffman Report and the reopening of the critique of psychology’s role in the so-called “harsh interrogations” (aka torture) of detainees at Guantanamo. More recently it has been the failures of replication in psychological research identified by the replication project of the Open Science Collaboration. But once again, there are contributions that the perspective of Div. 1 can bring. For this column, I want to focus primarily on the most recent issue that faces us specifically as psychologists — the concerns about the nature of our science.
To some extent this choice is due to the fact that most of what I would say about the organizational challenges revealed by the Hoffman Report was covered in the Div. 1 Executive Committee’s statement released prior to the August APA Annual Convention. At this point, perhaps the most important things that can be said about the issues raised are summarized in the following lines from the statement:
The structural and philosophical issues that led to this situation must be addressed. In particular, we urge APA to consider these fundamental questions:
- How does a membership organization with volunteer elected leaders operate in a way that ensures that the paid staff on which the volunteer leaders rely do not usurp the control of the organization — whether for good or bad?
- How does a profession dedicated to the good of humanity develop and maintain its ethics and ethical standards independent of the press of guild or self-interest motives?
In this process and all activities going forward, real transparency will be critical.
Whether or not those issues are addressed in a transparent manner is not under the direct control of Div. 1 or our individual members.
The same can’t be said about the issues raised by the replication project. We are, after all, a science division and most of us engage in and teach about psychological research. The state of our science is both our concern and our territory.
As most of us are probably only too aware, the Replication Project was a joint effort by a number of individuals in psychological science to replicate results published in three leading psychology journals. They went to great lengths to reproduce the methods used by the original study authors, including inviting those authors to provide materials and review planned replications. The results of that project, that 60 percent of the studies failed to replicate, were publicly released in August and published online in Science (Open Science Collaboration, 2015). It is an interesting and important article to read. Furthermore, the authors provided substantial open-access data about their efforts that will, I suspect, be the subject of many secondary analyses over the coming months or even years.
Failures to replicate raise some standard concerns. There is the critique of the practice of publishing only new results rather than replications. There is also the issue of privileging novel or unusual results. Finally, there is the charge that we worship significance levels rather than looking at effect sizes. These concerns were supported by the project’s findings. Obviously, if many studies don’t replicate, we should demand a literature replete with replications before accepting findings as something approaching received wisdom. Further, the experiments most likely to replicate were those with large effect sizes.
Today, my morning paper had an opinion piece by another psychologist, Lisa Barrett (2015), arguing that this failure to replicate is nothing to be upset about — it is merely the nature of science. She noted that failures to replicate clue us in to additional factors that determine when an effect occurs and when it doesn’t. She further notes that while “much of science still assumes that phenomenon can be explained by universal laws,” psychologists understand that context matters. Here I think that Barrett is both correct and incorrect. Context does matter.
It matters because, as her examples about rats and learned fear demonstrate, distinguishing between the situations where we do get a particular result and the situations where we don’t provides important information and scientific clues to a better understanding of the phenomenon under study. Hazel Markus and her colleagues previously demonstrated that not only are contexts and materials important, but the people participating are as well. Her work has outlined a number of areas where “classic” psychological findings are different in Japan compared to the U.S. (Shweder et al., 2006). The author of one study is reported to have asserted exactly this point, saying that her study on women’s hormonal levels influencing ratings of male attractiveness failed to replicate because she studied Italian women while the replication used American college students.
My concern is that, as a profession, we are more imbued with a belief in universal laws and less sensitive to the importance of context than Barrett admits. I fear this is particularly true when it comes to recognizing that the participants in our studies are not widgets but humans who come to us with individual, community, cultural and historical experiences that shape how they understand and react to anything we do. While we may have begun to talk about the importance of understanding people in their intersectional complexity, our research designs and reports don’t seem to reflect that appreciation. Furthermore, in many (most) psychological experiments, we don’t get an effect in 100 percent of the participants, nor do we generally get a uniform effect of the variable of interest on each participant. Currently, my students and I are looking at how frequently our studies of psychological interventions are reported in ways that allow us to address intersectional issues. The preliminary impressions are not encouraging. I certainly hope that as the field engages in discussion about the results of the Replication Project, we begin to discuss the importance of addressing human diversity rather than looking for universal, decontextualized humans following universal, context-free laws. That conversation will be reflected in my presidential theme — “Roots and branches: Envisioning a psychology free of racism and sexism.”
Barrett, L. F. (2015, September 1). Psychology is not in crisis. The New York Times, p. A21.
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349, 943-951. doi:10.1126/science.aac4716
Shweder, R. A., Goodnow, J. J., Hatano, G., LeVine, R. A., Markus, H. R., & Miller, P. J. (2006). The cultural psychology of development: One mind, many mentalities. In Handbook of child psychology (6th ed., Vol. 1, pp. 716-792). Hoboken, NJ: John Wiley & Sons. Retrieved from http://search.proquest.com/docview/621353475?accountid=10868