
By Katie Floyd and Cassie Davis (Supervisor/editor: Paul von Hippel)

Introduction

School voucher programs, also known as opportunity scholarships, are scholarship programs – frequently government funded, but sometimes funded by private philanthropy – that pay for students to attend private schools of their choice. Many private school voucher programs have been initiated around the world with goals that include improving measures of student performance, such as test scores.

Several studies have estimated the effects of different voucher programs on student test scores. On this page we summarize the most recent and complete review of the most rigorous studies. 

The Arkansas School Voucher Meta-Analysis

In May 2016, researchers at the University of Arkansas published “The Participant Effects of Private School Vouchers Across the Globe: A Meta-Analytic and Systematic Review” (Shakeel et al., 2016). This study combines and systematically evaluates rigorous evidence from all randomized controlled trial (RCT) studies of the effects of private school vouchers on student reading and math scores. In a voucher RCT, vouchers are assigned through a random lottery, so that lottery winners who receive vouchers differ only in random ways from those who are denied vouchers. Random assignment is especially valuable in evaluating private school choice programs since, without random assignment, families who choose private schools would be “widely expected to be different from other families in unmeasurable ways that subsequently affect student achievement levels and gains” (Shakeel et al., 2016).

Summary of Meta-Analysis Findings

The University of Arkansas meta-analysis summarized the results of all RCTs that estimated the effect on test scores of receiving a voucher and actually using it to attend private school. The meta-analysis summarizes those results with "forest plots," two of which are shown below. For each study, the forest plots display a point representing the best estimate of the effect; a confidence interval (whiskers) representing a range of uncertainty about the effect; and a box representing the weight given to the study, determined primarily by its sample size. Any confidence interval that crosses zero signals that the estimated effect is not statistically significant and that the true effect could be zero. The diamonds represent the average effect across all US studies, all foreign studies, and all studies together (Shakeel et al., 2016).
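To make the plot elements concrete, here is a minimal sketch of the arithmetic behind a forest-plot row: a 95% confidence interval built from an effect estimate and its standard error, and a check of whether the interval crosses zero. The effect sizes and standard errors below are invented for illustration; they are not values from the meta-analysis.

```python
# Illustration of how a forest-plot row is read: hypothetical
# (effect estimate, standard error) pairs, NOT values from
# Shakeel et al. (2016).
studies = {
    "Study A": (0.15, 0.06),
    "Study B": (0.04, 0.08),
    "Study C": (-0.10, 0.05),
}

Z = 1.96  # critical value for a 95% confidence interval

for name, (effect, se) in studies.items():
    lower, upper = effect - Z * se, effect + Z * se
    # A confidence interval that crosses zero is not statistically
    # significant at the 5% level.
    verdict = "significant" if (lower > 0 or upper < 0) else "not significant"
    print(f"{name}: {effect:+.2f} [{lower:+.2f}, {upper:+.2f}] -> {verdict}")
```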

 

[Forest plot: treatment-on-the-treated (TOT) effects on reading scores (Shakeel et al., 2016)]

[Forest plot: treatment-on-the-treated (TOT) effects on math scores (Shakeel et al., 2016)]

Note. Results shown are the effects of "treatment on the treated" (TOT), or the effects on students who used their vouchers to attend private school. The effect of simply receiving a voucher, whether it is used or not, is estimated elsewhere in the meta-analysis.

Interpreting the Meta-Analysis

The results in these forest plots can be interpreted in several ways. We give three possible interpretations, running the gamut from voucher advocates to voucher skeptics.

The Friedman Foundation Interpretation

In a booklet published by the Friedman Foundation, a voucher advocacy organization, Greg Forster (2016) reviewed a similar though not identical body of studies. Forster used a "vote counting" method that classifies each estimate as having a positive or negative effect on the test scores of voucher students. Forster reported that 14 programs had positive effects on at least some students; more specifically, 6 programs had positive effects on all students and 8 had positive effects on some student subgroups. Forster reported that 2 programs (in Toledo and New York) had no effect, and only one program (in Louisiana) had a negative effect. Overall, Forster concluded that an overwhelming majority of programs had positive effects.

The Authors' Interpretation

The University of Arkansas researchers criticized Forster's vote-counting approach (Shakeel et al., 2016). Vote counting has a poor reputation in meta-analysis because it does not distinguish between large and small effects, ignores the size of a study, and ignores the uncertainty of effects, which the forest plot represents with confidence intervals (Hedges, Laine, & Greenwald, 1994). In a vote-counting analysis, a small and uncertain effect from a small study gets the same weight as a large effect from a large study.
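To see why this matters, the sketch below applies both approaches to the same four estimates. The numbers are invented and are not drawn from any voucher study: vote counting calls the evidence mostly positive, while the precision-weighted average is close to zero because the one large, precise study found almost no effect.

```python
# Hypothetical (effect, standard error) pairs; invented numbers,
# not drawn from any voucher study.
estimates = [(0.30, 0.15), (0.02, 0.03), (-0.05, 0.04), (0.25, 0.20)]

# Vote counting: each estimate counts once, regardless of its
# magnitude or precision.
positive = sum(1 for effect, _ in estimates if effect > 0)
print(f"Vote count: {positive} of {len(estimates)} estimates positive")

# Inverse-variance (fixed-effect) pooling: each study is weighted
# by 1/SE^2, so large, precise studies dominate the average.
weights = [1 / se**2 for _, se in estimates]
pooled = sum(w * e for w, (e, _) in zip(weights, estimates)) / sum(weights)
print(f"Inverse-variance pooled effect: {pooled:+.3f}")
```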

The University of Arkansas researchers used more conventional meta-analytic methods to estimate the average effect and the heterogeneity, or variance of effects from one study to another. In reading, the authors "find null effects in the US and large positive effects...outside of the US". Similarly, in math, they find positive effects when international programs are included, but not when the results are limited to the US (Shakeel et al., 2016). Overall, the authors interpret the achievement effects of school vouchers as positive and statistically significant, though they note larger impacts for reading than for math.
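As an illustration of how heterogeneity enters such an average, the sketch below uses the DerSimonian-Laird estimator, a common default for random-effects meta-analysis. We assume that estimator only for illustration (the meta-analysis's exact method may differ), and the numbers are invented rather than taken from the meta-analysis.

```python
# Hypothetical (effect, standard error) pairs; invented for illustration.
estimates = [(0.27, 0.08), (0.05, 0.05), (-0.03, 0.06), (0.18, 0.10)]

w = [1 / se**2 for _, se in estimates]  # fixed-effect (inverse-variance) weights
fixed = sum(wi * e for wi, (e, _) in zip(w, estimates)) / sum(w)

# DerSimonian-Laird estimate of tau^2, the between-study variance
# ("heterogeneity"): Q compares each study with the fixed-effect mean.
q = sum(wi * (e - fixed)**2 for wi, (e, _) in zip(w, estimates))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(estimates) - 1)) / c)

# Random-effects weights add tau^2 to each study's variance, which
# spreads weight more evenly when studies disagree.
w_re = [1 / (se**2 + tau2) for _, se in estimates]
random_fx = sum(wi * e for wi, (e, _) in zip(w_re, estimates)) / sum(w_re)

print(f"tau^2 = {tau2:.4f}")
print(f"fixed-effect mean = {fixed:+.3f}, random-effects mean = {random_fx:+.3f}")
```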

 

A Skeptic’s Interpretation

A more skeptical reader of these data might emphasize that many of the studies exhibit null results, that the average effect size is close to zero, and that there are a number of influential outliers (on both the positive and negative sides). Such a reader might pay close attention to the whiskers of the forest plots; on close inspection of the forest plot for TOT math results, for example, many of the programs' effects are not statistically significant. A skeptic might also question the Friedman report's assertion that a majority of programs had positive effects, noting that about half of the studies found positive impacts only for certain subgroups, predominantly African American students and students from lower-performing schools. Since not all students were positively affected, a skeptic might find the results less compelling and conclude that most of the studies reveal null effects of school choice programs on participants' academic achievement rather than positive impacts.

Examples of Outliers

The following voucher programs are special cases. When they are included in comparisons across voucher programs, they tend to have a substantial impact on the overall estimated effect, which is why they are considered outliers.

Colombia

Colombia runs a school voucher program that operates through a lottery system, the Programa de Ampliacion de Cobertura de la Educacion Secundaria (PACES), which has served over 125,000 students. The vouchers cover more than half the cost of private secondary school, and the remaining cost is covered by students' households. PACES differs from US school choice programs in that larger gaps in educational quality exist between private and public schools in Colombia: generally speaking, Colombian private schools tend to outperform public schools. Some studies attribute this discrepancy to the greater autonomy of, and stronger incentives within, private schools; moreover, there are likely unobservable factors at play (like differences in student selection policies) that widen the public-private performance gap in Colombia (Kattan et al.). In addition, PACES provides individual student incentives, which may contribute to the program's highly positive effect and thus its status as a significant outlier (Shakeel et al., 2016).

Louisiana

Most studies of Louisiana’s statewide voucher program, the Louisiana Scholarship Program (LSP), have found negative effects on participating students’ academic achievement. One prominent explanation for the LSP's negative effects is a poor initial design that required participating schools to administer state testing and to submit to inspections by public officials (Forster, 2016). Because of these strict requirements, many private schools that might otherwise have participated appear to have feared additional regulation and opted out. Schools that did join the LSP were therefore likely to be poorly performing private schools that had difficulty retaining students and needed the revenue that enrollment in the program provided (Walters, 2016). As seen in the forest plot, the LSP exhibits a relatively large negative effect on participants' academic outcomes, and its inclusion in the Arkansas meta-analysis pulls the average effect size down.

Milwaukee

Milwaukee is an important outlier because it was the first publicly funded voucher program in the country, established in 1990 (Rouse, 1997). Schools participating in the program were asked to admit students at random when classes were oversubscribed, which limited selection bias. The state's evaluation of the Milwaukee experiment measured academic achievement using the Iowa Test of Basic Skills in reading and math (Greene & Du, 1999). Both math and reading scores showed slight increases from year 1 to year 3, and the gains were statistically significant for students who remained in the program for 3-4 years (Greene & Du, 1999). Other studies using different methodologies found increases in math of 1.5-2.3 percentile points, while estimates for reading varied, with both positive and negative coefficients, making it difficult to validate the Milwaukee program's positive effect on reading achievement (Rouse, 1997). In the meta-analysis, the 3-year Milwaukee impact findings pulled up the average TOT effect for math after 3 years; Milwaukee did not, however, significantly influence the average reading results (Shakeel et al., 2016).


References

 

Angrist, Joshua, Eric Bettinger, Erik Bloom, Elizabeth King, and Michael Kremer. "Vouchers for Private Schooling in Colombia: Evidence from a Randomized Natural Experiment." The American Economic Review 92, no. 5 (2002): 1535-1558.

Forster, Greg. A Win-Win Solution: The Empirical Evidence on School Choice. 4th ed. Friedman Foundation, 2016.

Greene, Jay P., and Jiangtao Du. "The Milwaukee Experiment." Education and Urban Society (1999).

Hedges, L. V., R. D. Laine, and R. Greenwald. "An Exchange: Part I: Does Money Matter? A Meta-Analysis of Studies of the Effects of Differential School Inputs on Student Outcomes." Educational Researcher 23, no. 3 (1994): 5-14.

Kattan, Raja Bentaouet, Felipe Barrera, Amy Walter, and Bibiana Taboad. The Quality of Education in Colombia: An Analysis and Options for a Policy Agenda. Report no. 43906-CO. Latin America and the Caribbean Regional Office, Human Development Sector Management Unit.

Mills, Jonathan N., and Patrick J. Wolf. "The Effects of the Louisiana Scholarship Program on Student Achievement after Two Years." Available at SSRN 2738805 (2016).

Morgan, Claire, Anthony Petrosino, and Trevor Fronius. "The Impact of School Vouchers in Developing Countries: A Systematic Review." International Journal of Educational Research 72 (2015): 70-79.

Rouse, Cecilia Elena. Private School Vouchers and Student Achievement: An Evaluation of the Milwaukee Parental Choice Program. Working Paper no. 5964. National Bureau of Economic Research, 1997.

Shakeel, M. Danish, Kaitlin P. Anderson, and Patrick J. Wolf. "The Participant Effects of Private School Vouchers across the Globe: A Meta-Analytic and Systematic Review." University of Arkansas, Department of Education Reform (EDRE), 2016.

Walters, Christopher. "School Vouchers and Student Achievement: First-Year Evidence from the Louisiana Scholarship Program." Presented at the APPAM 2016 Fall Conference: The Role of Research in Making Government More Effective, 2016.

 
