At MIT, there’s a School Effectiveness and Inequality Initiative (SEII), which is, of course, run out of the economics department. It just released a study of the long-term effects of universal preschool in Boston, and the study has several small lessons and one huge lesson to impart.

Does preschool work? The answer, and this is important, depends on what you mean by “works.”
Preschool had little effect on test scores in the short term, and whatever effects there were had disappeared by later years. Preschool did not raise scores on the state’s Big Standardized Test.
Preschool reduced the likelihood that a student would get in trouble in high school or ever be jailed. And preschool increased the likelihood that students would graduate from high school and that they would go on to attend college.
These are good things, and if preschool done right can deliver these results, that’s good news.
But let’s not gloss over the above findings. Because by the standards we’ve been using for twenty years to measure school and teacher effectiveness, the preschools failed. Their teachers were ineffective. In some states, under some programs, those preschools would land at the bottom of the rankings and be opened up for turnaround or takeover, maybe handed off to some charter operator. Op-eds would be written about how “student achievement” had stalled and the programs had failed to boost “educational attainment.”
Because for at least twenty years, the prevailing measure of success has been test scores. Every educational idea has been tested for “effectiveness” by checking to see if it raises test scores.
For at least twenty years, we have insisted on measuring the wrong stuff and using fake fig-leaf proxy language like “student achievement” to hide the fact that we’ve been measuring the wrong stuff. The folks at MIT were smart enough this time to use more than one bad measure. But how many schools, teachers, and education programs have been incorrectly labeled ineffective or low-quality because they failed the one bad measure: did they raise test scores?