At Michigan Future, we’ve been making the argument for some time now that our K-12 accountability systems need to measure schools based on the outcomes that matter most: what happens to students after they leave K-12. This is far different from what we do now, and as a result our current system tells us very little about actual school quality.
Our current school accountability system ranks schools almost exclusively on standardized test scores. But standardized tests capture only a small piece of what it means to be college-ready.
A couple of pieces of evidence. The book Crossing the Finish Line used a large, nationally representative set of student data to analyze what truly predicts success in college. The authors found that a student’s high school GPA – made up of grades given by individual teachers across four years of high school – was far more predictive of eventual college graduation than her SAT/ACT score. And GPA was predictive regardless of the high school attended: whether a student went to a “good” high school or a “bad” one, a good GPA predicted college success.
Why? Because while test scores measure a student’s ability on a narrow band of math and reading skills, GPA measures a diverse set of capacities – academic habits, content knowledge, and non-cognitive skills – exhibited day after day across four years of high school.
Research from Northwestern economist C. Kirabo Jackson came to a similar conclusion. Using a large set of student data from North Carolina, Jackson found that a “non-cognitive” index of grades, attendance, and disciplinary records was more predictive of long-term success than test scores. He also found that the teachers who were able to improve this index were an entirely different set from those adept at raising test scores.
The message from both cases: when we focus only on test scores, we miss the really important stuff.
The case for focusing on long-term outcomes: data from Metro-Detroit schools
The argument we often hear against using long-term outcomes to measure school success is that there are far too many factors that intervene between high school graduation and college graduation to meaningfully hold high schools accountable for postsecondary results.
Our argument, however, is that there is so much a high school can do – outside of improving test scores – to improve postsecondary outcomes that it’s irresponsible not to include this data on an accountability scorecard.
A look at local data is illustrative. Detroit’s selective high schools, Cass Tech and Renaissance, were ranked in the 21st and 45th percentile, respectively, in the 2015-16 school rankings. High-ranking suburban high schools, like Birmingham Groves and Saline High School, fell in the 84th and 95th percentile. To some extent, this is to be expected: test scores are highly correlated with socioeconomic status, and Cass and Renaissance, despite being “test-in” schools, serve a far higher proportion of economically disadvantaged students than their suburban counterparts.
Yet if we look at the high school class of 2008, postsecondary outcomes for these schools look remarkably similar. 57% of Renaissance graduates and 45% of Cass graduates earned a four-year degree in the six years after high school, with roughly another 7% at each school collecting a degree in the following two years. Meanwhile, 57% of Groves graduates and 49% of Saline graduates had earned a four-year degree six years after high school, with another 6 to 10% earning a degree in the following two years.
The postsecondary outcome data for these schools was roughly the same, despite the fact that a larger share of Cass and Renaissance students will likely face a more difficult path to college graduation.
Yet although postsecondary outcomes for these schools look largely similar, in the picture the public receives on school quality, Groves and Saline are exemplary, while Renaissance and Cass are failing.
What might be going on at Cass and Renaissance that we miss by looking only at test scores? Perhaps they’ve developed a rich college-going culture, a strong college-counseling department, or a broad curriculum that targets the wide range of skills students need to do well in college.
Regardless, this example demonstrates just how much is missed when we focus only on test scores. Groves and Saline may very well be exemplary. And Cass and Renaissance may very well have a lot of room for improvement. But we can’t tell any of that based on the picture of school success we’re given through the state accountability system.
Including long-term outcomes in our school evaluation system doesn’t give us all the information we need, but it certainly makes the picture just a bit clearer.