Ooms is a member of the West Denver Preparatory Charter School board and of several other boards involved in education reform.
The recent results of Denver’s School Performance Framework (SPF) were fairly minor news. That’s encouraging, because it means that evaluating schools, with a premium on student academic growth, is more and more part of the lexicon. No one will, or should, claim that the SPF is the only metric that matters, but it is pretty hard to argue that the data is not useful (although I’ll offer even money that someone in the comments may take up this challenge).
At the same time, after spending considerable time with the SPF, I also think it needs to evolve. Now I come to praise the SPF, not to bury it — in my opinion, the Colorado Growth Model (the engine of the SPF) is one of the most important developments in recent memory. However, let’s take the SPF seriously enough to acknowledge its limitations and look for ways to improve it.
There are three main ways I think the SPF could evolve to include and sort data to provide a fuller view of school achievement. It’s been true for too long that some board members actively resist comparative data, which allows them to support pet projects and political agendas when a hard look shows their programs to be underperforming. Moving to a data-informed opinion is critical to making any significant changes in the way we educate our children. The data I would add include a confidence interval, the percentage of selective admissions, and a comparison by FRL (free or reduced-price lunch). These are all highly important variables in school evaluation. Let me explain each.
First, SPF academic data is based on the CSAP, which is administered only in grades 3-10, so the percentage of students whose scores count toward a school’s ranking varies considerably. For example, elementary schools offer 6 grades (K-5), in which academic growth data is available only for 4th and 5th graders (a growth score requires a prior-year CSAP result, so 3rd graders taking their first test generate none). This means that — assuming every grade has an equal number of students — only 2 of 6 grades (or just 33% of students) are counted in the growth score, which is the single largest component of the SPF. There is a similar problem in high schools, in which all academic data is only available for roughly 50% of the student body (9th and 10th grades).
Assuming even distribution across grades, the percentage of students whose scores are included in the growth data varies considerably by type: elementary schools (33%); high schools (50%); K-8 (56%); 6-12 (71%); and middle schools (100%). Particularly for smaller schools (most often the elementaries), this means that a pretty small cohort of kids can determine the academic growth score for the whole school.
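As a sanity check, all of these coverage percentages fall out of a single assumption: a growth score requires both a current CSAP (grades 3-10) and a prior-year score, so only grades 4-10 contribute growth data. A minimal sketch (the grade ranges and even-enrollment assumption are mine):

```python
# Fraction of students contributing growth data, by school type.
# Assumes equal enrollment in every grade, and that growth scores
# require a prior-year CSAP -- so only grades 4-10 generate them.

GROWTH_GRADES = set(range(4, 11))  # grades 4 through 10

def growth_coverage(grades):
    """Share of a school's students who produce a growth score."""
    counted = [g for g in grades if g in GROWTH_GRADES]
    return len(counted) / len(grades)

school_types = {
    "elementary (K-5)": range(0, 6),  # kindergarten represented as grade 0
    "high (9-12)": range(9, 13),
    "K-8": range(0, 9),
    "6-12": range(6, 13),
    "middle (6-8)": range(6, 9),
}

for name, grades in school_types.items():
    print(f"{name}: {growth_coverage(grades):.0%}")
```

Running this reproduces the 33% / 50% / 56% / 71% / 100% figures above.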
What I’d like to see the SPF do is two-fold: first, there needs to be a confidence interval for each school. Now, as Paul Teske has pointed out, data is often based on sampling, and this alone does not invalidate the results. However, at a minimum one should be wary of comparisons between schools where 100% of the students contributed academic data and schools where only 33% did. The required math here is not that hard (here is an online calculator) — for a school of 300 students, to get 95% confidence that the growth score is within +/- 5 percentage points, you need a sample size of about 170 students. I don’t believe there is an elementary school in DPS that comes anywhere close to that standard, and my guess is that most have a possible swing on academic growth data of +/- 8 percentage points (so a mean growth score of 50% could be anywhere from 42% to 58%, which spans 3 SPF categories). That’s significant.
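These back-of-the-envelope numbers come from the standard sample-size formula for a proportion, with a finite population correction since a school is a small, fixed population. A sketch (the 300-student school with 100 tested students is my hypothetical, chosen to mirror 33% elementary coverage):

```python
import math

def required_sample(N, margin, z=1.96, p=0.5):
    """Students needed for +/- margin at ~95% confidence (z=1.96),
    using the worst case p=0.5 and a finite population correction."""
    n0 = z**2 * p * (1 - p) / margin**2         # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))   # corrected for a school of N

def margin_of_error(n, N, z=1.96, p=0.5):
    """Margin of error when only n of a school's N students are tested."""
    fpc = math.sqrt((N - n) / (N - 1))          # finite population correction
    return z * math.sqrt(p * (1 - p) / n) * fpc

print(required_sample(300, 0.05))               # 169, matching the ~170 above
print(f"{margin_of_error(100, 300):.1%}")       # 8.0% swing with 100 of 300 tested
```

Note that testing only 100 of 300 students yields almost exactly the +/- 8 point swing guessed at above.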
So in recognition of what will be very different confidence intervals, schools should thus be compared primarily by grades served (apples, meet apples). Compare K-8 programs first against one another and against the median score of their group, and only then against all schools. Maintain the overall ranking, but acknowledge the significant difference between the data sets of different grades served by setting them apart (example to follow).
Second, I’d like to see the percentage of students in each school that are selective admissions — students who are awarded places based on academic ability or skill. This would include both entire magnet schools as well as selective admissions programs within larger student bodies. Simply put, it is deeply unfair to compare schools that can hand-pick students with those that cannot. With few exceptions, the percentage of selective enrollment seats within many DPS programs is lost in the statistical bureaucratic muck, and badly needs some transparent light. I’ve written about this previously, and I remain at a complete loss to understand a system in which schools with these different enrollment policies are ranked as if they are equal when they are clearly not.
Third is to more explicitly consider the percentage of students in poverty (or FRL). The correlation between subpar student academic achievement and poverty remains high, and particularly if we are serious about addressing the achievement gap, we need to look more closely at schools that have FRL higher than the district average (of about 65%), and less at those schools whose demographics only resemble those of our city when inverted.
What might this new SPF look like? Here is some of the data for DPS high schools (chosen because sample size is large enough to be interesting and small enough to be manageable):
Now I don’t have a confidence interval here — which is most useful in comparing schools that serve different grades — but given that all of these schools are relying on academic data from roughly 50% of their students, I’d sure like one. Selective admissions reveals one school: GW, whose 28% selective enrollment is from their web site and may be slightly dated, but I’d bet it’s pretty close.
Note that the four lowest-scoring schools (those in the two “danger” categories) all have FRL above 85%, while of the top four (in the second highest category), only one does. Which leads us to the second part: a graph comparing the SPF score with the percentage of students who are FRL (red is the regression line):
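For readers who want to reproduce that red line, this is how the regression and each school's distance from the trendline could be computed with an ordinary least-squares fit; the (FRL, SPF) pairs below are placeholders for illustration, not the actual DPS figures:

```python
# Sketch of the FRL-vs-SPF trendline using a least-squares fit.
# The data points here are hypothetical, not real DPS numbers.
import numpy as np

frl = np.array([0.35, 0.52, 0.81, 0.90])  # hypothetical FRL shares
spf = np.array([0.62, 0.58, 0.60, 0.30])  # hypothetical SPF point percentages

slope, intercept = np.polyfit(frl, spf, 1)   # the red regression line
predicted = slope * frl + intercept
residuals = spf - predicted                  # + = above trendline, - = below
```

A school "above the trendline" (positive residual) is outperforming what its poverty level alone would predict, which is the lens used in the next paragraphs.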
What is telling here is the easily discernible pattern through the lens of FRL and achievement. The median point score — for the high school category alone — is 45%. Three schools scored significantly above that median: CEC, East and GW. East has open enrollment and an FRL of 35% (the latter is not a pejorative, nor does it discredit their high SPF score); GW has 52% FRL and a selective admissions policy for over a quarter of its students, which makes a considerable difference (my guess is that without these students GW would drop a category). The high school that is most impressive is CEC, with high SPF points, FRL of 81%,
and an open-enrollment policy,* as befits its isolated position in the top right.
Somewhat appealing are Lincoln and Manual: both received SPF scores just over the high school median, but did so with large numbers of FRL students. TJ had a somewhat higher score, but their relatively small FRL population puts them far below the trendline. Kennedy looks remarkably average or below; South, West and North are all disappointing; and laggards Montbello and DVS are already (and rightly) undergoing programmatic changes.
Now this view is largely lost in the overall SPF, which gave CEC an overall ranking of 24th and placed them in the second-highest category of “Meets Expectations.” But if you are a parent searching for a good high school program, you care a lot less about the comparison to elementary, K-8 and middle schools. And you should take a hard look at the impressive results at CEC.
So while I believe it remains important to show the relative performance of all schools, this is how I would like to see the SPF evolve. For the combination so evident in CEC is, to me, the rare trifecta that narrows the achievement gap: academic growth (hopefully with a strong confidence interval);
open-enrollment policies;* and serving a large FRL population.
This trifecta is also really hard to achieve. Last year I wrote a depressing post on the SPF which was more specific about the truly lousy prospects for high-poverty, open-enrollment students. The results this year were just not that different: the worst schools have narrowed the gap somewhat, but there is still a long way to go at the top, particularly in grades 6-12.
However we should acknowledge the achievements that are being made: for high schools that is East and especially CEC, which deserve recognition not easily apparent in the overall SPF. My guess is there are similar schools in each of the different grade structures. It would benefit all of us to have a clearer picture of who they are. Hopefully the SPF will take some tentative steps toward this evolution.
*Update: I regretfully spoke too soon about CEC’s enrollment policy. The school does not have geographic enrollment, and instead accepts students based on an application process that requests transcripts and grades, awards received, attendance data, and three recommendations. This clearly places CEC (as the school itself somewhat acknowledges) as a magnet school with 100% selective admission. To operate as a magnet with 81% FRL is commendable, but this is not a school with open enrollment, and their achievements should include this qualification.