President's Column

The author examines the factors that limit the diffusion of evidence-based practices into everyday use by practitioners.

By John E. Lochman, PhD

When I was in graduate school some number of years ago (that number has now grown to over 40, which would have seemed a frightfully large number for anything age-related back then), one of the most compelling visions I had of becoming a clinical psychologist was being able to take our emerging psychological knowledge and bring it to bear in real ways in the lives of children and families. That continues to be an important vision for me, and for our division.

One thing we have learned much about in the intervening years is which types of intervention programs can effectively reduce many emotional and behavioral problems in children and adolescents and prevent later severe maladjustment. A number of websites now evaluate effective interventions for children, and these evidence-based practices have been taught in graduate school and, with varying degrees of training, provided to mental health practitioners in the field. At the policy level, evidence-based practices have been written into federal and state funding mandates for children's mental health services.

From one vantage point, we as a field have come a long way in beginning to transport our accrued knowledge about evidence-based practices into real-world settings. However, we all know children in clinic and school settings who are not receiving the kind of care we expected by now. Why is that? Limited financial and staffing resources are certainly one constraint. Many of our evidence-based programs can also be perceived by practitioners, administrators, and consumers themselves as lengthy and perhaps too cumbersome.

But beyond how resources affect the “fit” of programs to settings, what other factors limit the diffusion of evidence-based practices into truly everyday use by practitioners? In contrast to efficacy research, empirical work on the intervention dissemination process is rudimentary. Three key questions are:

How much can practitioners adjust a given program and still expect it to produce the positive effects obtained in clinical trials? To answer this, we need to know much more about the specific active mechanisms within programs (some program elements may be ancillary rather than essential) and about which active mechanisms are necessary for which subgroups of clients. With more of this knowledge, we will become better at tailoring interventions to fit particular children, and our work can become briefer.

How much and what kind of training of practitioners is necessary? Although limited evidence is available, what does exist suggests that more intensive training of practitioners translates into better outcomes for children. Intensive training can include multi-day, highly interactive workshops, followed both by regular consultation with master trainers and by review of video or audio recordings of sessions. However, is that full array of expensive training necessary in all cases? Practitioners who have already been well trained in a class of interventions may need less intensive training to implement a new intervention in that class. And how can web-based training be effectively incorporated into this dissemination process?

Finally, once clinicians have received training, why do they not sustain use of an evidence-based program in subsequent years? As with the prior questions, there is little evidence to guide us now. In one project in our center, we have found that the school counselors who sustained use of a specific evidence-based program years after training were those who perceived more teacher support for the program, had more active expectations of using the program, and saw greater perceived and actual improvement in children's behavior during the training period. Our training could better address these (and other) issues important for later sustained use.

These questions require further discussion and study in the field and in our division by boots-on-the-ground practitioners, policymakers, and dissemination researchers.