
The Use of Evidence in Out of School Time Initiatives: Implications for Research and Practice1

Introduction

With the recent passage of the No Child Left Behind Act, the U.S. Department of Education is moving more quickly toward an evidence-based approach to the design and delivery of education policies and programs, and the education policy community is following suit. This approach involves weighing the broad spectrum of high-quality evidence about a policy and then drawing the most reasonable conclusions. It differs markedly from evidence-backed policy, in which advocates find some data to justify continued investment in a particular direction, often without attention to the quality of the evidence or the universe of possible studies. The short history of two high-profile federally supported initiatives illustrates this tension between advocacy and evidence.

In light of troubling academic outcomes and increased risk-taking behaviors among today's youth, particularly low-income, urban youth in poorly performing schools, policymakers have shown a great deal of enthusiasm for programs that operate beyond the normal school day. Support for services such as tutoring and after school programs has grown tremendously over the past five years. For example, the America Reads Challenge program, begun in 1997 under the Clinton Administration, currently funds college work-study students at over 1,400 colleges and universities to serve as reading tutors for preschool and elementary students. Even before this program began, a survey revealed that 85 percent of higher education institutions in the United States offered some form of tutoring for elementary or secondary students (Matthews, 1993). Moreover, as part of the No Child Left Behind Act, the federal government legislated for Supplemental Educational Services, including after school programs, to assist students who attend Title I schools that are not meeting performance goals. The federal government's explicit attention to Supplemental Educational Services came at a time when appropriations for 21st Century Community Learning Centers (CCLC) had reached $1 billion. A key component of the CCLC program is the provision of academic enrichment opportunities during out of school time for children attending low-performing schools.

There is strong public support for these investments in out of school time programs. For example, nine in ten adults believe that there is a need for some type of organized activity or place where children can go after school (Afterschool Alliance, 2002). Data show that children are most vulnerable during the hours immediately after school to the risk-taking behaviors that can negatively affect academic, social, and behavioral outcomes. It stands to reason, then, that both volunteer tutoring programs and after school programs could lead to improved outcomes for participants. But what is the evidence that these programs are contributing to the intended outcomes?

Advocacy or evidence?

Policymakers and the general public support these interventions because they presuppose them to be effective, or at the very least assume that they will "do no harm." However, the research to support such presuppositions is weak at best. To date, there have been no systematic reviews of high-quality evaluation research suggesting that these interventions improve the academic, social, or behavioral outcomes of participating youth. The reviews that have been conducted in this field have several weaknesses that limit the conclusions that can be drawn across the included evaluations. Evaluations of both volunteer tutoring and after school programs range widely in quality, and most reviews do not account well for this variance. Several reviews have focused only on evaluations showing promising results, without examining the evaluations that measured null or negative effects. Finally, many published reviews of these interventions group non-comparable programs together, placing traditional after school programs in the same category as mentoring and volunteer tutoring programs, and then draw conclusions across this wide variance in program models. Clearly, the field of out of school time programming needs reviews that synthesize evidence on the effectiveness of these interventions and use that evidence to recommend how programs could better deliver quality services.

It is our strong belief that reviews of social programs like volunteer tutoring and after school programs should be conducted systematically, with several objectives in mind that will move policy and practice forward. Reviews should include rigorous, experimental evaluations of similar program models, provide descriptions of the programs, and describe the evaluation methodologies and outcome measures. Reviews should then identify programs with evidence of effectiveness and compare and contrast these successful programs with other evaluated models that may have shown null or negative effects. Finally, systematic reviews should synthesize the findings of comparable, rigorous evaluations to indicate whether, overall, the intervention is effective and, if feasible, under what conditions. Above all, the entire review process should be transparent and replicable, so that it can be expanded as future evidence becomes available.

These guidelines have been adopted by researchers forming the Campbell Collaboration, an international group of policy researchers writing systematic reviews of studies of social policy interventions.2 We are currently writing reviews for the Campbell Collaboration on volunteer tutoring and after school programs. In the search for high-quality, experimental evaluations, we have searched major databases, read prior reviews, conducted internet searches, scanned the major research organizations working in these areas, and relied upon contacts in the field to direct us to evaluation work. After months of searching, our yield of high-quality experimental studies is very slim - five tutoring program evaluations and four after school program evaluations.

What does this mean? It is fair to say that continued expansion of these two federal programs is currently not evidence-based. Policymakers have relied on a very thin knowledge base to justify the continued allocation of tremendous resources. We know little about the benefits of the programs, about who benefits most, about which types of interventions might be most effective, and about how programs might be improved. Many advocates in the field would disagree with us, citing evidence from methodologically suspect evaluations or from anecdotal and journalistic accounts of "successful" programs.

So what? If the public supports after school and volunteer tutoring programs, should evidence of effectiveness matter? Yes and no. If the goals of such programs include positive growth in the youth they target, then research on program effectiveness is necessary. Furthermore, continued political and private support for such programs is contingent upon showing that programs are indeed providing youth with quality services and that participation leads to improved outcomes. To illustrate the point, the Bush Administration recently proposed a 40 percent reduction in 21st CCLC appropriations after the release of a high-profile experimental evaluation that could not clearly document strong evidence of effectiveness (U.S. Department of Education, 2003). On the other hand, programs may simply be expected to provide positive structure, such as opportunities for cross-cultural interaction between college and elementary students or a safe haven after school for students without other positive options. In that case, evidence from high-quality experimental studies demonstrating that the programs improve academic, social, and behavioral outcomes may not be necessary. However, implementation and/or process evaluations should still be conducted, with the goal of continuously improving the programs for the youth they serve.

As education researchers who have conducted experimental and quasi-experimental evaluations of both after school and volunteer tutoring initiatives, and who believe in the necessity of such studies for answering questions about program effectiveness, we challenge the assumption that these programs are effective and the non-experimental evidence cited to support that view. For example, of the few experimental evaluations of after school programs (of varying quality) that exist, at least two have shown questionable and unintended negative effects on participants as compared to a control group.3 Programs have shown some positive effects, but not in the areas of greatest current interest to policymakers, such as grades, test scores, homework completion, and reduced television watching. Many volunteer tutoring programs are "pull-out" programs in which students are removed from their regular classes to attend tutoring sessions. Evaluations of these programs also show null or even detrimental effects for participants (Ritter, 2000).

Where do we go from here?

The use of volunteer tutoring and after school programs has grown exponentially in the past few years, despite the dearth of evidence that either intervention leads to improved outcomes for youth in urban areas. It is more important than ever to rigorously examine these efforts. Policymakers and practitioners both need to better understand what volunteer tutoring and after school programs can and cannot accomplish, and how programs might be improved to better meet their goals. The limited amount of high-quality evaluation research collected to date may not provide these answers, but this thin knowledge base can still be used as a tool to move the entire field of out of school time programming forward.

Toward this end, there is no substitute for experimental design evaluations that not only measure impacts but also integrate qualitative methodologies to look closely at the program operations and processes that help contextualize the impact findings. Given the scarcity of high-quality research, funding should be targeted toward multiple small-scale longitudinal studies designed to measure the likely range of outcomes given the program goals. In this way, the evidence from rigorous small evaluations of comparable interventions can be pooled, as sketched below, to provide overall evidence of effectiveness. With this approach, the education policy field can move more efficiently and effectively toward evidence-based practice.
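To make "pooling" concrete, the standard tool is an inverse-variance weighted average of effect sizes, as in a fixed-effect meta-analysis; this is a general technique, offered here as an illustration rather than a procedure prescribed by any of the evaluations discussed above. A minimal sketch, assuming each of $k$ comparable evaluations reports an effect estimate $\hat{\theta}_i$ (e.g., a standardized mean difference) with variance $v_i$:

$$\hat{\theta} = \frac{\sum_{i=1}^{k} w_i\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{v_i}, \qquad \operatorname{var}(\hat{\theta}) = \frac{1}{\sum_{i=1}^{k} w_i}$$

Larger, more precise studies receive more weight, and the pooled estimate is more precise than any single study - which is exactly why several rigorous small-scale evaluations of comparable interventions can, together, speak to overall effectiveness.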

Practitioners also have a role in furthering research that will document evidence of effectiveness and make a difference for program sustainability. The recent release of the first-year findings from the National Evaluation of the 21st Century Community Learning Centers Program (U.S. Department of Education, 2003) has prompted a flurry of responses from practitioners, many critiquing the use of one high-profile report to justify a decrease in proposed 21st CCLC allocations.4 Recent exchanges among practitioners who believe strongly in their work suggest that the gap between research and practice in this field is growing wider.5 Instead, we encourage practitioners (whose expertise lies in providing enriching experiences for youth during their out of school time) and researchers (whose expertise lies in high-quality research methods) to collaborate to translate the anecdotal evidence that programs are making a difference for youth into the quality of research that policymakers now expect. Practitioners and researchers will need to work together to develop measures that capture the full range of benefits these programs have for youth. Research might then communicate more effectively to policymakers how improving intermediate outcomes (like self-esteem, attachment to caring adults, and interest in post-secondary education) may be integral to improving long-term academic, social, and behavioral outcomes for youth.

Conclusion

Robert Slavin made an interesting point in a 1997 talk to a meeting of American Federation of Teachers leaders. Slavin is not optimistic about the use of volunteer tutoring programs to combat the serious reading problems in the United States:

"Imagine that President Kennedy had said, 'We are going to put a man on the moon and we are going to do whatever it takes to put a man on the moon within a certain number of years.' He knew it was attainable in principle, but it was going to take serious investment and serious time to accomplish the goal. But then to say, 'And we're going to do it with volunteer engineers' - I don't think so" (Gursky 1998, p. 13)

Just as volunteer tutoring programs presume that we can drop in any volunteer to do a teacher's work, the huge expansion in after school programs has presumed that we can implement any heavily funded model without solid evidence on best practices.6 These politically expedient approaches have been relatively easy to implement, and they have satisfied the public's captivation with volunteerism and after school programs. But, as we are all constantly reminded, real education reform takes time and needs to be informed by more than political tinkering.

In this vein, we propose an evidence-based model for funding and expanding programs. Resources might better be targeted toward programs with solid evaluations showing evidence of positive impacts and practices that could be replicated in multiple contexts. In no way does this imply a "one size fits all" approach to program implementation - we understand that a broad spectrum of program models with very different goals and strategies can positively impact youth. This approach likely means that, in the short term, fewer youth would participate in after school and tutoring programs. In the long term, however, research and practice would work more closely together to eventually provide high-quality programming for the urban youth who could benefit from such services.

References

Afterschool Alliance. (2002, November). Afterschool alert poll report, 5. Retrieved from http://www.afterschoolalliance.org/school_poll_final_2002.pdf

Gursky, D. (1998, March). Volunteer tutoring: No magic bullet. American Teacher, 13.

Lauver, S. (2002). Assessing the benefits of an after-school program for urban youth: Results of an experimental design and process evaluation. Unpublished doctoral dissertation, University of Pennsylvania.

LoSciuto, L., Freeman, M. A., Harrington, E., Altman, B., & Lanphear, A. (1997). An outcome evaluation of the Woodrock Youth Development Project. Journal of Early Adolescence, 17(1), 51-66.

Matthews, S. (1993). Helping college tutors define reading and mold active learners. Journal of Reading, 36(8), 636-640.

Ritter, G. (2000). The academic impact of volunteer tutoring in urban public elementary schools: Results of an experimental design evaluation. Unpublished doctoral dissertation, University of Pennsylvania.

U.S. Department of Education, Office of the Under Secretary. (2003). When schools stay open late: The national evaluation of the 21st Century Community Learning Centers program, first year findings. Washington, DC.

Weisman, S. A., Soule, D. A., & Womer, S. C. (2001, June). Maryland After School Community Grant Program: Report on the 1999-2000 school year evaluation of the phase 1 after-school programs. College Park, MD: University of Maryland.

Notes

1 - The three authors contributed equally to the writing of this commentary.

2 - The Campbell Collaboration (C2; http://www.campbellcollaboration.org) aims to prepare, maintain, and disseminate systematic reviews of studies of social policy interventions. In collaboration with the American Institutes for Research, C2 will establish the What Works Clearinghouse under a grant from the U.S. Department of Education to summarize evidence of effectiveness on educational policies, programs, and strategies.

3 - Both the National Evaluation of 21st Century Community Learning Centers (U.S. Department of Education, 2003) and the Maryland After School Community Grant Program (Weisman, Soule, & Womer, 2001) have shown some questionable and negative effects of programs on participants.

4 - Responses have included concerns over the one-year data collection period and measuring the effectiveness of programs in early implementation stages.

5 - Comments have been made on the Promising Practices in Afterschool (PPAS) Listserv, supported by the Academy for Educational Development (http://www.afterschool.org).

6 - 21st CCLC grantees receive an average of $300,000 per year.

Gary W. Ritter is an Assistant Professor of Education and Public Policy at the University of Arkansas, where he is the Associate Director of the interdisciplinary Public Policy Ph.D. program. He earned a Ph.D. in Education Policy in 2000 from the Graduate School of Education at the University of Pennsylvania under the advisement of Rebecca A. Maynard. Gary currently teaches courses in Education Policy, Program Evaluation, and Research Methods to graduate students. His research interests include volunteer tutoring programs, program evaluation, standards-based and accountability-based school reform, racial segregation in schools, the impact of pre-school care on school readiness, and school finance. His work has been published in Educational Evaluation and Policy Analysis, the Journal of Education Finance, the Georgetown Public Policy Review, Black Issues in Higher Education, and Education Week. He is currently working on a review of the impacts of volunteer tutoring programs for the Campbell Collaboration.

Susan Goerlich Zief is a Ph.D. candidate in Education Policy at the Graduate School of Education of the University of Pennsylvania. A former middle school science teacher, she currently works on the evaluation of an after-school program in a low-income suburb of Philadelphia, and on a Campbell Collaboration review of the impacts of after school programs. During her doctoral studies, she has worked for the Consortium for Policy Research in Education (CPRE), where she was part of the evaluation team for a local systemic change initiative sponsored by the National Science Foundation. She was also part of the planning team for an evidence-based reform initiative to improve teaching and learning in urban districts, and serves as a founding editor of Penn GSE Perspectives on Urban Education. Her research interests include the integration of qualitative and quantitative methods in the evaluation of social policies.

Sherri Lauver received her doctorate in Education Policy at the Graduate School of Education, University of Pennsylvania, in 2002. Her dissertation work involved an experimental and process evaluation of an after-school program in a Philadelphia middle school, which was funded by the Smith Richardson Foundation. She currently works as a consultant with the Campbell Collaboration and the Harvard Family Research Project's Out-of-School Time initiative. She begins a position as a Research Associate with the Center for Educational Evaluation and Technical Assistance, Institute of Education Sciences, at the U.S. Department of Education in June 2003.