The Science of Social Welfare

Complex research provides the evidence for evidence-based practice.

By Carl Vogel


VOLUME 18 | ISSUE 1 | SPRING 2011

Persuasive rhetoric and moral appeals have always been used to advocate for social work programs. But while a convincing argument is still useful, it’s the toolbox of science that increasingly tells practitioners and policymakers what interventions and treatments should be adopted and taken to scale.

[Image: Humans across all ages surrounded by math symbols]

Abstract:

Today, studies that use randomized trials, longitudinal data and other scientific methodologies to provide statistically valid results have become a major part of social work research. At SSA, examples such as Jens Ludwig’s study of the effects of the massive public housing experiment called Moving to Opportunity, Neil Guterman’s research on interventions to help fathers become more involved in efforts to combat child maltreatment, and the University of Chicago Crime Lab’s study of Youth Guidance’s Becoming a Man (B.A.M.) project illustrate the challenges of running such rigorous studies—and the benefits for public policy and programs of having the evidence of their results.

--

Published in the Spring 2011 issue of SSA Magazine

“In the late 1970s, some people began to ask some tough questions: ‘Well, how do you know if a social work service really works, that it helps people get better?’” says Neil Guterman, the dean and Mose & Sylvia Firestone Professor at SSA. “That spurred the use of scientific methods to identify practices and policies that not only sound good but have the evidence to back them up. The skepticism has helped advance the field in much more rigorous ways.”

Today, studies that use randomized trials, longitudinal data and other scientific methodologies to provide statistically valid results have become a major part of social work research. “The field has been developing, and there are more and more rigorous, randomized experiments being done,” says Jeanne Marsh, SSA’s George Herbert Jones Distinguished Service Professor and the president-elect of the Society for Social Work and Research (SSWR). “When the question is, do social programs work, these studies provide an answer. We can find out if an intervention really has the impact that is envisioned.”

With such evidence, policymakers are able to support practices that are known to make a positive difference in people’s lives. Yet the effort required to achieve these results is time-consuming, complex and intellectually challenging. The studies have ethical and practical considerations that require careful planning, and real-world complications can derail or detour a study mid-stream. Once a study of an intervention is complete, a research team must gather the data and draw conclusions based on strict statistical guidelines, or else the final results will be of little use.

Take the studies of Moving to Opportunity (MTO), a landmark demonstration project sponsored by the federal Department of Housing and Urban Development. In 1994, when MTO launched, policymakers and advocates were excited about a big idea: Could the well-being of poor people who live in urban communities with concentrated poverty be drastically improved if they were able to move to a better neighborhood?

MTO was designed to get answers to that question. The housing authorities in five cities worked with local nonprofits to recruit about 4,600 very low-income families who lived in public housing in the poorest parts of the city (where more than 40 percent of local residents live below the poverty line). A third of the participants were randomly chosen to have the option to use housing vouchers that could only be used in a neighborhood where less than 10 percent of the population was poor, a third were offered standard Section 8 vouchers with no restrictions, and a control group continued to live as they had.
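In code, a lottery of this shape is simple to sketch. The Python snippet below is purely illustrative: the three arm labels, the equal thirds and the sample size come from the description above, not from MTO’s actual assignment procedure.

```python
import random

random.seed(42)  # fixed seed so the lottery is reproducible

N_FAMILIES = 4600  # approximate enrollment described above

# One slot per family, a third of the pool for each study arm,
# then shuffle: a simple lottery with (near-)equal arm sizes.
arms = ["low_poverty_voucher", "section8_voucher", "control"]
slots = [arms[i % 3] for i in range(N_FAMILIES)]
random.shuffle(slots)

# family_id -> assigned arm
assignment = dict(enumerate(slots))

# Sanity check: each arm should hold roughly a third of families.
for arm in arms:
    count = sum(1 for a in assignment.values() if a == arm)
    print(f"{arm}: {count} families ({count / N_FAMILIES:.1%})")
```

Shuffling a fixed pool of slots, rather than drawing an arm independently for each family, keeps the three groups essentially the same size.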

The four-year program has been called the most important social experiment of the past 25 years. It was a massive effort, gathering data from thousands of participants and following families in a longitudinal study for nearly a decade. Eight sets of researchers used the information and have been producing a fascinating and complex set of conclusions, including that adults who moved with program vouchers were more likely to be satisfied with their housing and less likely to be obese or have mental health issues. But they found virtually no significant difference in employment, earnings or rates of adults receiving welfare. It turns out that giving families the option to move out of the poorest parts of a city can help in many ways, but it is no panacea.

Jens Ludwig has been a key researcher on MTO, including as one of three co-authors of a 2005 Quarterly Journal of Economics article that found that the offer to relocate reduced arrests of female youths for violent and property crimes and reduced arrests of male youths for violent crime, but increased the boys’ problem behaviors and property crime arrests. The McCormick Foundation Professor of Social Service Administration, Law and Public Policy at SSA, Ludwig says that William Julius Wilson’s influential book The Truly Disadvantaged, which argues that low-income families are deeply affected by the neighborhood they live in, made a big impression on him in grad school when it was published in 1987.

“Imagine my surprise when I started to work on MTO and we began to see findings accumulate indicating that the impacts on poor families from large changes in neighborhood environments were more mixed and complicated than most people expected,” Ludwig says.

Trained as an economist, Ludwig also points out that the results of scientific studies such as the MTO research allow policies and programs to be built on real evidence, meaning that dollars and efforts can be allocated to interventions that are known to make a difference. “This experience of working on MTO has made me appreciate how limited our theories currently are,” he says, “and makes me inclined to be relentlessly empirical and open-minded about how the world really works.”

This type of research typically starts with a review of what’s known about an issue. For example, Guterman is now working to build an intervention that sorts out the complex question of how best to involve fathers in home-based services aimed at reducing a child’s risk of future abuse or neglect. What Guterman and his team have found, though, is that there are no empirically tested strategies for reaching out to fathers, even though a body of research shows that fathers play a crucial role in shaping family and child outcomes.

“There’s a little research on interventions to improve involvement for fathers, much of which dates back to the 1990s, but these are largely peer support programs and most haven’t been rigorously tested,” says Assistant Professor Jennifer Bellamy, who’s working with Guterman on the father involvement study. “And child and family programs like Head Start have attempted to engage fathers, but that research is also underdeveloped.”

With the relevant research in mind and an understanding of how to craft an effective intervention, Guterman and Bellamy have designed a pilot test for an enhancement that they think, based on the latest evidence, will improve fathers’ involvement in early home visitation services for at-risk families. Their plan will try out the intervention on a small group of families and compare it with families getting only standard services, looking for preliminary signs of its benefits across three home visiting programs in the Chicago area.

Creating a program that can be implemented and be effective across a wide variety of situations is crucial, not only so the intervention is more likely to have a positive impact when tested more broadly in the real world, but also so it can succeed if and when it is adopted after the research is done. “The overall purpose of the work is to make a difference. By design, the purpose of intervention research is to improve individual, family or community health outcomes, not just to measure them,” says Assistant Professor Alida Bouris, whose research is centered on adolescent sexual behavior and HIV/AIDS (see the sidebar story “Giving Parents a Voice”).

Before a program can even be piloted, though, it must pass through the University’s Institutional Review Board, an ethics committee of University of Chicago social science researchers who look closely at the methodology, making sure the study will treat all the participants ethically and safely, not only those who will go through the intervention. “They look carefully to be sure that nobody is denied services for refusing to be in the study or because they don’t fit the profile of who is being studied, for instance,” says doctoral student Aaron Banman, who has worked with Guterman on several of his studies.

Once a pilot project is complete, researchers typically “manualize” the program for efficacy trials, in which the researcher measures the intervention’s impact when it is administered by staff at social service agencies. By providing a painstakingly detailed manual, the researchers try to ensure that the intervention administered in the field is delivered as it was intended, without alterations that could skew the results.

For Parents Together, an intervention created and studied by Guterman to help mothers receiving home visitation services build and optimize their informal social networks, the manual is more than 70 pages long, with sections that explain the research that led to the program’s creation and the goals and activities for each of the six group sessions. Guterman provided guidance on running the program in its first test in New York City and at agencies in and around Chicago in subsequent trials.

“Neil came and spoke with my whole team, and he went through a very specific training with me on how to deliver the model,” says Christina LePage, A.M. ’07, who was the program manager at the Infant Welfare Society of Evanston when the agency ran Parents Together as part of Guterman’s research. “He gave me weekly supervision, and after I’d run a group session, we’d debrief right away. He was very clear on what we were doing, which helped build trust among the mothers participating in the project.”

Something as simple as the onset of winter can become a headache when an intervention trial is running in the field. In 2009, the University of Chicago Crime Lab was studying Youth Guidance’s Becoming a Man—Sports Edition (B.A.M.) to see if its mix of group-counseling sessions, behavioral therapy and non-traditional after-school sports would improve educational outcomes and reduce gun violence for at-risk adolescent boys. But as the school year progressed and the days grew shorter, the shrinking daylight began to disrupt the carefully created cohorts of 800 students in 15 Chicago Public Schools, split among those receiving the after-school intervention, those receiving only the school-day intervention and those in a control group.

“When it gets dark early, some of our students wouldn’t feel safe walking home alone in their neighborhoods, so they would leave before the program started, or ask a few friends to come to the program too, and those could be kids who weren’t assigned to that intervention,” says Wendy Fine, A.M. ’00, the director of research evaluation and technology for Youth Guidance, the social service organization that originated B.A.M. and worked with the Crime Lab to expand it for the study.

Even before the program began, Fine and her team worked with Harold Pollack, the Helen Ross Professor at SSA and, along with Ludwig, co-director of the Crime Lab, to figure out how to balance the scientific requirements of sample size and demographics with the constraints and needs of the schools and students. To get statistically significant results that demonstrate a program’s efficacy, the research has to compare groups that are as similar as possible, which is why random assignment is used whenever possible. If there are notable differences between the participants in the intervention group and the control group (age, special education status, school attendance, educational achievement, etc.), those factors could be the reason one group did better or worse in measurements after the trial is complete.
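To see what “as similar as possible” means in practice, researchers often compare baseline characteristics across arms. The sketch below is hypothetical: the data are randomly generated and the column names are invented, loosely echoing the covariates and three-arm design described above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 800  # cohort size described above

# Hypothetical baseline covariates; a real study would draw these
# from school records rather than a random number generator.
df = pd.DataFrame({
    "group": rng.choice(["after_school", "school_day", "control"], size=n),
    "age": rng.integers(14, 19, size=n),
    "attendance_rate": rng.uniform(0.5, 1.0, size=n),
    "special_ed": rng.integers(0, 2, size=n),
})

def smd(data, col, arm_a, arm_b):
    """Standardized mean difference of one covariate between two arms.

    Values near zero suggest randomization left the groups comparable;
    large values flag an imbalance that could bias the comparison.
    """
    a = data.loc[data["group"] == arm_a, col]
    b = data.loc[data["group"] == arm_b, col]
    pooled_sd = np.sqrt((a.var() + b.var()) / 2)
    return (a.mean() - b.mean()) / pooled_sd

for col in ["age", "attendance_rate", "special_ed"]:
    print(col, round(smd(df, col, "after_school", "control"), 3))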

Once the groups have been crafted with random assignment, real-world circumstances—like students who miss the intervention because it’s getting dark out—can intrude. “Another of the real challenges was that we had to get the family consent to be in this intervention. That really constrained how many kids we could recruit,” Pollack says. “Then there are a lot of other after-school programs going that can compete for the student’s attention one afternoon. There were a lot of mid-course adjustments.”

Pollack, Fine and others on the team worked with Chapin Hall, the Chicago Public Schools and the Chicago Police Department to gather the necessary outcome data—another task that a research team has to plan when designing their study. “Those partnerships were really useful, because it meant that almost all the money we raised for the study could be spent on actual services to kids, instead of spending a portion on gathering data,” Pollack says.

With data in hand, the calculations of results can begin. “Cleaning” the data to get to a regression analysis can be a long process. “Everybody talks about the number of cases in your study sample in quantitative research. The smaller that gets, the more each data point is really sacred. It can be a very complex statistical process to find ways to replace missing values,” says Banman, who teaches statistics at SSA. “You have to have a vision for what to do with the data. Working with the U of C statistics department is a really nice resource, and I have been lucky to have Neil [Guterman] as a resource to talk things through.”
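A minimal sketch of the kind of step Banman describes might look like the following. Simple mean imputation stands in here for the more sophisticated methods a real analysis would use, and all of the data and variable names are invented for illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical study records with gaps; in a small sample, dropping
# every row with a missing value would throw away precious cases.
df = pd.DataFrame({
    "treated":  [1, 0, 1, 0, 1, 0],
    "baseline": [2.1, np.nan, 1.8, 2.5, np.nan, 2.2],
    "outcome":  [3.4, 2.9, 3.6, 2.7, 3.8, 2.8],
})

# Simplest possible fix: replace missing baseline scores with the
# column mean. (A real analysis would likely use multiple imputation.)
df["baseline"] = df["baseline"].fillna(df["baseline"].mean())

# Ordinary least squares by hand: regress the outcome on treatment
# status and the baseline covariate, plus an intercept column.
X = np.column_stack([np.ones(len(df)), df["treated"], df["baseline"]])
y = df["outcome"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated treatment effect: {beta[1]:.3f}")
```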

Pollack says that what he and his team are learning through the B.A.M. Sports Edition project is invaluable for their ongoing work at the Crime Lab. “Many very practical implementation insights became clearer than ever to us, including how essential it is to have partnerships with community organizations that are prepared to operate in the realistic practice environments of Chicago communities and schools,” he says. “I have to say that B.A.M. Sports Edition has been some of the most challenging work I’ve ever done, and certainly some of the most rewarding.”

--

Even tools as powerful as randomized controlled studies have their weak points. Ludwig and his research partners have pointed out, for example, that the MTO studies are specifically about families who were given the option to participate. “MTO is silent on the effects of involuntary mobility programs, which is an important point, given ongoing HOPE VI activities across the country to demolish some of our highest-poverty housing projects,” they wrote in a 2008 paper. And there are always unique factors in the time and place where a study was run, everything from the state of the economy to the effectiveness of the local schools, that limit how broadly the lessons can be applied.

Marsh notes that control group studies also aren’t able to answer every question. “Do parents do better in a program if they feel a bond with the social worker? Common sense would tell us they do. But you can’t randomly assign people to have a good relationship,” she says. “We’re getting increasingly sophisticated at applying what kind of research works for which questions. [At SSWR] we support a variety of approaches, including mixed methods, with qualitative and quantitative methodologies working together in one study. It’s really a generative period in the development and application of innovative research methodologies in social work research.”

From Ludwig’s perspective, the limits of randomized social experiments are far outweighed by the knowledge they bring. Statisticians use the term “internal validity” for the credibility of a study’s impact estimates within its own sample, and “external validity” for the degree to which those impacts can be generalized to other populations or time periods. “Experiments are often criticized for having low external validity,” he says. “But a different way to think about it is that without internal validity you can’t have external validity. I think the way to improve policy is to accumulate experimental findings strategically to try to round out the picture.”

This process of adding information piece by piece and building on what has been learned before is central to the scientific method; it’s what allowed medicine to progress, and Guterman argues that the same process is now changing social work. “Scientific breakthroughs are built on progressively creating a rigorous, expanding knowledge base,” Guterman says. “By using these scientific techniques, we can demonstrate the efficacy of interventions and build evidence-based practice.”