  A Conversation with John Laub, Ph.D.  
John Laub, Ph.D. is a Distinguished University Professor in the Department of Criminology and Criminal Justice at the University of Maryland, College Park. He served as the Director of the National Institute of Justice (NIJ) in the Office of Justice Programs from July 22, 2010 to January 4, 2013. Dr. Laub received the Edwin H. Sutherland Award from the American Society of Criminology in 2005, and was awarded the Stockholm Prize in Criminology along with his colleague Robert Sampson in 2011. His research focuses on crime and the life course, desistance from crime, and crime and public policy.

Catherine Salzinger, a Graduate Research Assistant at ACE!, recently contacted Dr. Laub to discuss issues related to fidelity for ACE!’s Spring 2014 Advancing Practice publication. Dr. Laub’s full responses are below, as space was limited in the publication.

What are three key issues you see with program fidelity?

  • The first thing is that it is hard to properly test an idea if it has not been properly implemented. I think of fidelity as the foundation upon which we test the success or failure of an idea. As a graduate student, I first came across the topic of fidelity through Malcolm Klein's work in the first volume of the Crime and Justice series. He was doing a rigorous evaluation of what we knew about deinstitutionalization and diversion. He called it program integrity at the time, and he was looking at what diversion was doing in practice versus what the idea was in theory.
  • A second issue with program fidelity is the extent to which it may help explain varying program effects. This could be because different clientele make the program successful for some but not for others. But it is also possible that the variation is due to varying implementation, and this needs to be examined. Researchers at MDRC (Michael Weiss, Howard Bloom, and Thomas Brock) have published a really good paper on studying program effects across a number of domains, and it has a whole section on fidelity.
  • A third key issue is the unintended consequences of an intervention. For example, the front page of the Washington Post today (March 6, 2014) carried a story about the crackdown on prescription painkillers leading to an upswing in heroin use. The piece discussed whether the White House had anticipated this occurring. One of the things we know is that when we squeeze one part of the system, something is going to happen somewhere else. Looking at unintended consequences in the context of fidelity is important.

 

What is the worst current practice you have come across with regards to program fidelity?

There is a tendency after a positive finding for others to want to replicate quickly, but with a fast and cheap solution. Scaling up or replicating quickly and cheaply without any real thought or training is a real problem. For example, consider Project HOPE (Hawaii's Opportunity Probation with Enforcement), what Judge Steven Alm did in Hawaii, and others thinking about adapting it to their own jurisdictions. One of the things that I wanted to do while at the National Institute of Justice (NIJ) was a randomized controlled trial replication of Project HOPE. When I talked about launching the study, I had a meeting with a number of stakeholders and the message was, ‘We're really interested in HOPE, but let us do it the way we want to do it.’ That’s been the strategy. You take an idea, but you don't try to seriously replicate it with fidelity. But it’s important to push for replication with fidelity, because when you turn up with mixed effects, you really miss an opportunity to learn something. Were the mixed effects due to the failure of the idea or the failure to implement it properly?



What is the best current practice you have come across with regards to program fidelity?


I think the work of David Olds, Ph.D., with the Nurse-Family Partnership National Service Office is a good example. They have identified the essential elements of the program. If you’re going to do the Nurse-Family Partnership, you must implement those essential elements, and then you allow for local variation (context) and choices at the ground level. But you start with the idea that here are the clean and replicable items to put this program in place, the exact essentials if you're going to replicate this program. One thing that is intriguing is that they have an office, developed after their initial site study, that oversees the implementation and scale-up process.


This makes me think: where is there ownership of implementation within the criminal justice system? I think one potential strategy we could develop would be to first use NIJ to generate research evidence. The second part would involve multi-site replication. Specifically, NIJ and the Bureau of Justice Assistance (BJA) could jointly replicate the program in a series of demonstration field experiments, with BJA overseeing and providing the technical assistance. Assuming continued positive results from the multi-site replication, in the third step BJA would be responsible for the scaling-up, implementation, and oversight process. This is a potential model for how we could move the field forward, with the federal government responsible for both the research and development process and for scaling up with fidelity.



What are your thoughts on improving fidelity?


We need to do a better job of data collection in terms of the implementation process. I think a question we need to ask is: what do we need to do to assess fidelity? We would want to look at things like dosage levels and who the clientele are. We need to be thinking more broadly about implementation science: things like organizational capacity, leadership, site-specific aspects such as where the program is going to take place, and information about the service delivery experience, in order to assess the whole program. I recognize that there is tension between strict replication and local adaptation, though. Fidelity and Evidence-Based Practices (EBPs) need to allow for adaptations and local needs. We need to find the right balance. I think a part of successful implementation with fidelity involves strong collaboration with frontline practitioners and Technical Assistance (TA) providers. This includes providing these programs with training, technical assistance, and coaching so that they can be sustained and delivered with fidelity.


In addition, I think our theories about programming are very weak. I think we've spent a lot of time over the last 10 to 20 years trying to identify programs that are effective and those that are promising. But ultimately this is very limiting in my view. I think we need to begin to ask why a program works and, if it does work, what the underlying mechanisms are that make it successful. Our theory-based evaluations are weak, but I do think we are making some great strides, especially in the re-entry area.


A second piece to that is that we have virtually no theories focusing on implementation. How do you implement something that is effective? Everything seems to me to be ad hoc. What are the conditions that facilitate EBPs? What are the obstacles to implementing EBPs? We just don’t have enough systematic knowledge here. I think that's where academia can make a huge contribution. I think we have a lot of ideas being kicked around, but the health field is more advanced in the area of implementation science.


Another thing to think about is how do we get evidence into the hands of policy makers and how can we help them to implement what we know works? We have a lot of knowledge that is generated in our field. Research has become so much better in terms of statistical modeling; the data are often poor but we are collecting better data all the time. My concern is that we haven't had the impact that we should have. There are still too many things being done that don't have evidentiary support. The federal government’s role in this process is absolutely crucial. 

 

References

  • Weiss, M.J., Bloom, H., & Brock, T. (2014). A Conceptual Framework for Studying the Sources of Variation in Program Effects. In MDRC Working Papers on Research Methodology. Retrieved from http://www.mdrc.org/publication/conceptual-framework-studying-sources-variation-program-effects
