Author: Fonagy, Peter
Publisher: London: International Psychoanalytic Association (2nd Revised Edition)
Reviewed By: J. Stuart Ablon, Summer 2005, pp. 79-80
The first thing one cannot help but notice when picking up the second revised edition of An Open Door Review of Outcome Studies in Psychoanalysis prepared by the Research Committee of the IPA is its weight. Upon reviewing the descriptions of more than eighty empirical studies, I am reminded of a frequent refrain that goes something like this: “There is no empirical research to support psychoanalysis.” This refrain is often cited when discussing why insurance companies will not support psychoanalytic treatment and why psychoanalysis has fallen out of favor in the public eye as compared to newer, briefer, manualized, so-called empirically validated treatments. Clearly, this edition is a testimony to the inaccuracy of such claims. While there remain far fewer empirical studies of the outcome of psychoanalysis than of, say, cognitive-behavioral therapy, there is a wealth of empirical data available supporting the effectiveness of psychoanalysis.
The central question then becomes, “Why is no one listening?” One potential answer to this question is the lack of a central source of information about systematic research efforts in psychoanalysis that can be referenced easily. Fortunately, the members of the Research Committee of the IPA, all international leaders in psychoanalytic research, have taken on this monumental task. The latest revision of the book represents an impressive compilation of descriptions of empirical studies from across the world. The editor and contributors are to be commended for the work that they have done towards the common goal of using empirical research to validate the utility of psychoanalysis and psychoanalytic ideas while also advancing analytic theory and technique.
The Open Door Review begins with reflections upon the epistemological and methodological context of psychoanalytic research—a conversation that is at once an appropriate and yet odd preface to the compendium of descriptions of empirical studies that follow. In one sense, it seems that this debate might be better suited to a different context entirely, especially since the ideas are not discussed or elaborated later in the book, nor are they related specifically to the empirical findings. There also exists an unmistakable incongruity between the epistemological discussion and the presentation of empirical findings. This incompatibility stems from the fact that the former takes place at a level of abstraction that could not be more different from that of the actual studies surveyed. The chasm between these foci seems to reflect the analytic community’s ambivalent relationship with empirical data amid concerns that quantitative research is inherently too reductionistic to capture the significant elements of psychoanalysis.
This second revision of the book adds a significant contribution in the area of research methodology that will be a wonderful resource for researchers. The contributors provide a cogent description of the challenges inherent in researching psychoanalysis. Current methodological and statistical approaches are reviewed before discussing actual instruments. For those not steeped in the world of psychoanalytic research, however, it is important to note that this section details just a few of the myriad measurement instruments relevant to studying psychoanalysis. The appendix also describes several instruments that can be used to quantify psychotherapy process for empirical study. Again, it is important to note that the appendix does not represent an exhaustive list of such measures but, rather, a sampling of some of the most widely used. The dichotomous classification of these measures as either “therapy process” or “process-outcome” measures is, however, somewhat misleading. All of the measures listed describe therapy process and indeed all could be used to examine process correlates of outcome using correlational methods. For example, while the Psychotherapy Process Q-sort (PQS) is listed as a process-outcome measure, it is actually a process measure that produces quantitative data that can be analyzed to examine the relationship with independent outcome measures. The PQS itself is not a measure of therapy outcome.
The Open Door Review concludes with an acknowledgement of the many limitations of the evidence presented. Any critical review of a research study can find considerable limitations. One important limitation, missing from the discussion yet applicable to all of the studies described in this volume, is that of allegiance effects. Allegiance effects color the results of almost every treatment outcome study conducted to date. Lester Luborsky has elegantly demonstrated empirically what most of us will admit to knowing intuitively – that the results of outcome studies tend to support those theories and modalities to which the researchers themselves subscribe. The power of allegiance effects underscores the ultimate need for dialogue and collaboration with researchers outside the world of psychoanalysis. Such partnership will also be necessary to validate psychoanalytic principles and treatment using comparative trials.
The largest limitation of the research presented in this impressive volume remains, however, that, as stated earlier, no one seems to be listening – be it inside or outside the insular world of psychoanalysis. One of the most noted psychotherapy researchers of the last century, Hans Strupp, keenly observed that, despite thousands of studies attesting to the effectiveness of different psychotherapeutic approaches, we continue to focus on proving outcomes, and each new study documenting the effectiveness of psychotherapy is met with a mix of excitement and surprise. With research on the effectiveness of psychoanalysis still in its relative infancy, would it not be a great shame to invest millions of dollars and years of effort to prove what we already know—that psychoanalysis works and works about as well as other forms of treatment—only to have the findings largely ignored? Why does no one take note? Within the analytic community, a cynical answer is that no one cares – that clinicians know the value of what they do and are insulted that empirical evidence should take precedence over clinical experience and case reports which represent data of another sort. Another possible answer is that even the summaries of findings in this review are too inaccessible to clinicians.
In the next edition, might it be possible to ask the authors of each study to summarize their findings for clinicians and highlight their clinical relevance? Outside the analytic community, the problem likely revolves around prevailing perceptions of psychoanalysis. Results of the American Psychoanalytic Association’s Strategic Marketing Initiative (2002) suggested that the value of psychoanalytic theory remains widely appreciated despite the fact that analysts are seen as not relating well to other mental health professionals, as arrogant, intimidating, and uninterested in what others have to say, and that the analytic community is viewed as isolated, patronizing, not open to new ideas, resistant to change, and not interested in dialogue. Even if these impressions are inaccurate, unfortunately, it is unlikely that any amount of empirical data will overcome the strength of resistance stemming from such misconceptions. If empirical data are to help reinstate psychoanalysis to its deserved place, then steps also must be taken to communicate more openly and effectively with those outside the community.
Another reason for the skepticism about empirical findings has to do with the design of many studies themselves. The state of the art for empirically validating treatment approaches is inherently biased against treatments that are neither brief nor circumscribed. In fact, the more flexible and heterogeneous the theory and the treatment approach, the less likely it can be studied effectively using the method of randomized, controlled clinical trials. For example, since there are no treatment manuals for psychoanalysis, psychoanalysis is by default excluded from any controlled clinical trials. Furthermore, recent research suggests that results of controlled clinical trials can be highly misleading even for brief, manualized treatments. Therapies that are hypothetically distinct may be quite similar in their clinical application. For example, interpersonal psychotherapy (IPT) and cognitive-behavioral therapy (CBT) are two empirically validated psychotherapies for depression that hypothetically seem quite different.
But empirical research comparing IPT and CBT has shown that the clinical application of the two approaches is quite similar (Ablon & Jones, 1999; Ablon & Jones, 2002). Important areas of overlap have also been observed when comparing empirically the process of CBT and the process of brief psychodynamic psychotherapy. Psychodynamic therapists applied a notable amount of both psychodynamic and cognitive-behavioral strategies (Ablon & Jones, 1998). In fact, in one sample of brief psychodynamic psychotherapy there was no significant difference in the amount of psychodynamic and cognitive-behavioral process that was fostered. While the CBT treatments followed a cognitive-behavioral model more strictly, when the cognitive-behavioral therapists did foster a psychodynamic process, it was this process that was correlated significantly with positive treatment outcome. Even small “doses” of processes borrowed from other approaches can be powerful predictors of outcome. Thus, treatment process cannot be fully controlled and, as a result, brand names of therapy can be quite misleading. When evaluating psychotherapy outcomes, it is increasingly clear that one must first characterize a treatment accurately by examining what actually goes on between patient and therapist.
Clinicians are right to be skeptical of studies that attempt to control the process of interactions between patient and therapist—both because such controls restrict the extent to which a good clinician customizes treatment according to the characteristics and needs of the patient and because human interactions cannot possibly be scripted even with the most comprehensive of treatment manuals. Furthermore, the very idea of manualization implies that therapists simply apply techniques to patients, rather than recognizing that the therapeutic process is co-created by patient and therapist. The results of empirical research would likely be of much greater value to clinicians if researchers focused on identifying change processes correlated with positive outcome that already exist in naturalistic treatments, as opposed to trying to artificially control the therapeutic process in order to study it.
Indeed, psychoanalytic researchers have always struggled with the arduous and often thankless task of trying to use empirical methods to quantify incredibly rich, complicated and subtle constructs and processes in accurate and meaningful ways that advance theory and technique. The Open Door Review is a testimony to the current dilemma of balancing the need to prove the efficacy of the analytic approach while staying true to the roots of psychoanalysis, which always aimed to better understand human thoughts, feelings, and behaviors. Taken together, the more than eighty empirical studies described in this volume balance those agendas nicely. For the future of psychoanalytic treatment research, however, it is clear that the most powerful method for addressing these competing agendas simultaneously lies in the naturalistic study of change processes (not treatments) that promise to improve therapeutic effectiveness while furthering our understanding about how psychoanalysis works.
This article was originally published in the Journal of the American Psychoanalytic Association. Used with permission. © 2004 American Psychoanalytic Association. All rights reserved.