The impact evaluation of the MMR found that it increased pass rates in the schools that received it.
A few weeks ago I wrote about the Gestão em Foco Program – Results Improvement Method (MMR), conceived by the São Paulo State Department of Education to improve student learning. The impact evaluation of the MMR found that it increased pass rates in participating schools, especially in those with larger shares of students from lower socioeconomic backgrounds. In this article, I return to the subject to describe the methodology used in the study.
The selection of which schools would receive the MMR was not randomized. Non-random assignment often makes impact evaluation unfeasible, because the program's outcomes can be confounded by unobserved factors. For example, without the knowledge of the Department of Education, teachers at schools that received the MMR could have taken part in training to improve their pedagogical practices during the same period. If that happened, the causal estimate of the MMR would be contaminated, making it impossible to distinguish whether the training or the MMR was responsible for the observed results.
In estimation contexts like this, potentially biased by external factors, other techniques are needed to measure impact. In the case of the MMR, two features made measurement possible: the criteria for including schools in the program were objective, observable, and quantified (specifically, location and past performance in assessments), and data were available characterizing both participating and non-participating schools (such as socioeconomic status).
Based on this information, it was possible to combine two procedures to evaluate the program. First, the data were used to identify which non-participating schools would have been most likely to receive the program, thus forming the counterfactual group: a group of schools whose evolution in results represents what would have happened to MMR participants had they not been included in the program. Then, it was verified whether there was a statistically significant change in the performance trajectory of participating schools after the program began, relative to these similar units. Combining these methodologies, the study found that the program improved pass rates, but not students' grades in Portuguese or mathematics exams.
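The two-step design described above (matching program schools to similar non-participating schools on observable criteria, then comparing changes in trajectories) can be sketched with synthetic data. This is a minimal illustration, not the study's actual code: all variable names, the matching rule (nearest neighbor on a logistic propensity score), and the simulated effect size are assumptions for demonstration.

```python
# Hypothetical sketch: (1) match each treated school to its nearest
# untreated neighbor on a propensity score estimated from observable
# selection criteria, (2) estimate a difference-in-differences on pass
# rates. All data are synthetic; names are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# Observable selection criteria: past performance and a socioeconomic index.
past_score = rng.normal(5.0, 1.0, n)
ses_index = rng.normal(0.0, 1.0, n)
# Treatment is more likely for lower-performing, lower-SES schools,
# mimicking objective, observable inclusion criteria.
p_treat = 1 / (1 + np.exp(2.0 * (past_score - 5.0) + 0.5 * ses_index))
treated = rng.random(n) < p_treat

# Step 1: propensity scores, then nearest-neighbor matching on them.
X = np.column_stack([past_score, ses_index])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
controls = np.flatnonzero(~treated)
dist = np.abs(ps[treated][:, None] - ps[controls][None, :])
matches = controls[np.argmin(dist, axis=1)]

# Synthetic pass rates before/after, with a true program effect of +3 points
# and a common time trend of +1 point for everyone.
before = 70 + 2 * past_score + rng.normal(0, 1, n)
after = before + 1.0 + 3.0 * treated + rng.normal(0, 1, n)

# Step 2: difference-in-differences over the matched sample.
did = (after[treated] - before[treated]).mean() - \
      (after[matches] - before[matches]).mean()
print(f"Estimated program effect: {did:.2f} points")  # close to the true +3
```

Because the matched controls share the treated schools' observable characteristics, the change in their pass rates stands in for the counterfactual trajectory, and subtracting it isolates the program's contribution from the common trend.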
In many cases, our first instinct when faced with a project already under way is to assume that an impact evaluation will not be possible. Even so, the evaluation should not be dismissed out of hand: that judgment should only be made after a thorough understanding of the project's context and available data. A manager interested in impact evaluation should not be afraid to devote part of their time to this task, as measuring impact is the most reliable way to know a project's real effects.
*Breno Salomon Reis is a researcher at EvEx, a center linked to ENAP (National School of Public Administration) dedicated to helping the government incorporate evidence into public policy decision-making. He holds a master's degree in Public Policy from Insper. His dissertation assessed the impact of the MMR program.