You are viewing the EdNews Blog archives.
These archives contain blog posts from before June 7, 2011

CSAP (in advance)

Posted on Mar 29th, 2010.

Students across Colorado finished their CSAP exams last week; results will not be distributed until late summer.

Of course, most articles about the CSAPs are written after the results are public, when people dissect the numbers and look for data that supports or refutes specific arguments, often ones to which they are already attached. Expect the usual carping about the CSAP itself, too (although the protests seem to have died down overall). So before any of the data is in, it's worth asking what to look for.

The following is my list for DPS; I’m also interested in the perspectives of others — comments please!

1.  DPS Academic Growth – The district has seen encouraging news recently on enrollment (helped by an expansion of preschool/kindergarten, demographic trends, and the economic climate) and on lower dropout rates. What has been unclear is academic progress. The district has spun the results positively, for example by pointing out where the DPS increase beats the state's. Unfortunately, that is easier to achieve from a low base: a 2-point bump from a score of 47 (overall DPS reading) comes a little easier than one from a score of 68 (overall state reading). So when the 2010 CSAPs come out, start here: how much real academic growth has the district achieved?

2. District turnarounds: Cole/CASA, Trevista/Horace Mann, Gilpin – The 2007-2008 DPS reform efforts that closed several schools hinged on the promise of invigorated programs at these campuses. Give each the benefit of a transitional year, and the next round of CSAPs should show whether these schools are on the right track. So far, the data (again, from a transitional year) is mixed at best. Cole and Gilpin remain Accredited on Probation on the DPS School Performance Framework, while Trevista is Accredited on Watch. None of these schools has shown more than roughly median growth on the Colorado Growth Model. The ability of these schools to change their trajectory will say a lot about the district's chances of improving schools from within; a poor showing will be further evidence against the efficacy of an incremental approach.

3. Charter Expansions: West Denver Prep, DSST — The platform schools for DSST and WDP were ranked first and second on the DPS School Performance Framework.  The ability of these two schools to maintain their high academic standards while they grow is a critical test.  Both will now have results for a second school (DSST’s middle school; the WDP Harvey Park campus).  The results at these locations will say a lot about future expansion and their ability to reach even more families — and both organizations currently maintain substantial waiting lists.

4. Program Expansions: Kunsmiller Creative Arts Academy — KCAA was in some ways modeled on (or at least sold as) a program similar to the successful Denver School of the Arts, but without the selective admissions process and with the hope of attracting a student body that mirrors the neighborhood it serves. A first-rate principal was recruited from a top magnet elementary school, and the initial enrollment numbers seemed strong. If the program can show clear academic growth while serving its local community, it could open the door to similar attempts with other district programs, and to a movement to spread successful magnet programs across demographic groups.

One school I wish we could track is the Math and Science Leadership Academy. I have long argued that this school — developed and run by DCTA teachers — has more potential to change urban education in Denver than any other single effort. It is very much to the credit of the school's leadership to take the challenge of urban education head-on. Yet far more important than the efforts of adults, of course, are the results with kids. MSLA is only K-2 this year and won't have scores. A good eventual showing in academic growth would give these efforts considerable credibility. Teachers built it; let's hope the results come.

So, don’t wait until the scores are in.  What else?


15 Responses to “CSAP (in advance)”

  1. jj says:

    “Unfortunately, this is easier to achieve when you have a low base: a 2 point bump starting at a score of 47 (overall DPS reading) is a little easier than starting at a score of 68 (overall state reading).”

    Oh, come on, now. Do we always have to qualify good news from DPS? And speaking of being, um, “easier”: what if a school does start at 68, and the score rises only very slowly over many years? Does that mean a perfectly superior school that does not rise is really a bad school? I mean, if a school scores 100, there is no more up, and if schools are rated these days on the basis of improvement, then eventually good schools can't win in the long run. Or is there something about CSAP being meaningless that I missed?

    “So when the 2010 CSAPs come out, start here: how much real academic growth has the district achieved?”

    How much is humanly possible? And if that were possible, how would we get there? Look, there is no magic for school districts, no quick fix and possibly, no long term fix either. I would like to know if there has been a school reform movement in say the last…well, ever, that actually worked according to the demands of the reformers–one where reform lasted for years, and all the kids scored above average.

    • Alexander Ooms says:

      jj — a lot depends on what one thinks is an appropriate goal. I’d like to see proficiency rates of 80%+ in 10th grade (with a dropout rate in the single digits). So under this view:

      1. Yes, a district claiming it is “outperforming” the state while its students are 20 proficiency points lower should be placed in the correct context. I do think that is “good news” (and it is more than DPS managed for many years) but also that it should absolutely be qualified.

      DPS has averaged an annual increase of less than 2 proficiency points over the last 4 years. I did a calculation at one point showing that even if no other district in Colorado improved at all, it would still take roughly 20 years for DPS to pull even with the state. This is about preparing students for higher ed or for careers; you don’t get points in a job interview for a jump in proficiency if you lack the basic skills to be successful. And no, I think anything less than preparing students for the twin goals of college and/or a professional career is not acceptable.

      2. Yes, a school/district with high proficiency should have different growth goals. I think the Colorado Growth Model has the correct matrix. But even a high-proficiency school (the imaginary 100%-proficient one included) needs at least median growth (1 year’s progress in 1 year’s time).

      3. How much is humanly possible? At the school level, there are schools with 15+ point proficiency gains per year. Is that possible at the district level? Maybe not, but I sure think a district with the proficiency base of DPS should be able to see gains of about half that.

      4. I agree that there is no magic, and I don’t think we need any. There is a lot that could be done that is not being done. I personally don’t believe there has been a successful District reform movement precisely because most Districts settle for policies that push along incremental reforms, claim “victory” with statistics that lack context, and basically do little to change the essential pillars of the status quo. That’s my point.

      To say that there is “no quick fix and possibly no long term fix either” sounds to me like you think there is no way to fix urban districts. I’ll be happy to disagree.
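
      The years-to-parity arithmetic in point 1 can be sketched out. The figures below are illustrative only: the 47 and 68 proficiency scores quoted in the post, plus an assumed net gain of about 1 point per year ("less than 2", with the state held flat), not official data.

```python
# Back-of-the-envelope sketch of the years-to-parity calculation.
# Assumed, illustrative figures: DPS reading proficiency 47%,
# state 68%, DPS gaining ~1 point per year while the state is flat.
dps_score = 47
state_score = 68
net_gain_per_year = 1.0

# Years for DPS to close the gap at that rate.
years_to_parity = (state_score - dps_score) / net_gain_per_year
print(f"~{years_to_parity:.0f} years for DPS to pull even with the state")
```

      At roughly 1 point per year the 21-point gap takes about two decades to close, which is where the "~20 years" figure comes from.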

  2. Kevin Crosby says:

    The questions remain: Is CSAP a worthy measure of school improvement? And based on the answer to that question, what percentage of a school’s resources (including instructional time with students) should be devoted to improving those scores?

    When a district’s overall CSAP scores are below the state average, what are the (presumably) unintended results for the education of the children? As Diane Ravitch might ask, does teaching become “test cramming and bean counting”? Are programs once thought essential lost because they are not tested by the state? Do some schools neglect to teach “the whole child” in the race to improve scores?

    While we discuss CSAP let us not forget that academic achievement scores do not necessarily measure overall quality of education.

    • Alexander Ooms says:

      I have a reasonable amount of sympathy for the desire for multiple measures of quality and achievement. Then I read comments like these, and the sympathy is drained out of me.

      For the sentiment here does not offer other suggestions for measuring quality; it holds that schools cannot be measured at all. Under this view, no test could do justice to measuring learning, so schools, teachers, and the rest of the players in the system sit above any evaluation. This resistance is unique to education: in health care, which is at least as complex, no one questions the desire or the intent to measure quality of care. Things that cannot be measured are pretty hard to improve. How is it that education is somehow immune?

      The “unintended results” of tests showing that students are not meeting basic proficiency measures might include some changes to the way those subjects are taught — but instead, this view assails the test itself. Nowhere is the abdication of basic responsibility greater than in the opinion that the inability to measure the “whole child” means we should therefore not measure any part.

      • Lee says:

        Not unique to education: health care has also not addressed the “achievement” gap across SES. Do some research and you will find similar results in health care outcomes: if patients do not access the care (come to school regularly) or follow the physician’s instructions (student effort/motivation), outcomes remain low… actually quite similar to outcomes seen in education.

        • Alexander Ooms says:

          Oh my. Lee, the point is not that there is a correlation between SES and quality of health care/education, it is that hospitals and other health care facilities are open to the idea of measuring and evaluating their services to address this gap.

          • Lee says:

            The point is that there is a correlation. Check into health care outcomes: even with a continuous improvement process, which any viable mission-driven organization must practice (including educational organizations), health care organizations have not addressed the gap any better than educational services have. The purpose of measuring and evaluating services is to address outcomes. But I would look further at which patients are included in evaluations of health care outcomes: they will not include those who do not participate in the health care service, as such students are in educational outcomes.

            The point here is that the outcome would be based on constants, or differentiated based on variables (multiple measures): which patients received the service, how many times, and with what health backgrounds. Differences would be noted in the recovery rates of smoking or obese patients as opposed to those without those factors who received the same quality of service. I don’t think anyone would argue that health care providers should be able to get the same outcomes for patients who smoke, don’t exercise, or don’t keep appointments, just because they give the same quality of service. This is the argument for multiple measures for any NPO: there is not one service that fits all. Service needs to match the needs of those served, and outcomes need to measure what is intended, not variable factors.

            It is the assumption that the educational service will overcome all of the variable factors, without differentiating outcomes based on those variables, that continues to be ignored. I know the response here, and I would argue back that it is not about “blaming” the student; it is about looking at students as individuals with a variety of factors that influence their educational outcomes, which does include a viable curriculum and effective research-based instructional strategies, among other variables.

          • Alexander Ooms says:

            I think we have maxed out the ability to reply to this thread, or maybe the system is as confused as I am by Lee’s message below, but I assure you that health organizations are far superior to educational institutions in their willingness to measure outcomes. I drove past a billboard today on which a hospital displayed a quantitative ranking, and it did not qualify the SES status of its patients. Why is education so determined to resist measurement?

  3. Mark Sass says:

    “While we discuss CSAP let us not forget that academic achievement scores do not necessarily measure overall quality of education.”

    Then what does, or can we?

    You can teach the “whole child” and do well on test scores. We need to be careful of the “tyranny of the or.” It is not an either-or dichotomy; it should be the “genius of and.” See Rick DuFour’s latest blog on the “Tyranny of Or.”

  4. Kevin Crosby says:

    I agree with Sass and Rick and Becky DuFour; we’re not talking about an “either-or dichotomy” here, and we shouldn’t be, but many schools have fallen into that trap, sacrificing PE, art, social studies or other programs not tested by the state for additional instruction in reading and math. Out of fear for their very jobs, some administrators choose the either-or mentality and sacrifice “non-essential” programs in the name of raising CSAP scores.

    I believe Ooms falsely assumes that by simply questioning the utility and consequences of high-stakes testing like CSAP, I am opposed to accountability. I just think we need to keep perspective. Assessments are absolutely necessary, not only for accountability but to inform instruction. If anything we need more and different assessments, but they should not be nine-to-twelve-hour marathons, especially when we must wait months for the results. The wait makes the assessment nearly worthless except for comparing how schools score. CSAP attempts to be exhaustive, but in only a few areas. DuFour and others in the PLC world advocate for common formative assessments that are immediately useful. Summative assessments for accountability purposes need not be so exhaustive in order to be useful for evaluating schools or even teachers. If we didn’t attempt to assess every cotton-pickin’ standard, we could assess more content areas, such as PE, social studies and art.

    I would guess that some of the state-level attempts to mandate art and PE are direct reactions to the fact that these programs are disappearing, in no small part because of CSAP. ASCD did not launch its efforts for no reason, either; they, too, were a response to the narrowing of curriculum caused by NCLB and high-stakes testing.

    Not only is CSAP inefficient in terms of time, but it is embarrassingly expensive. NWEA’s MAP does just as well for a fraction of the cost, and the results are instantaneous. So, please don’t jump to conclusions that someone who questions CSAP is opposed to accountability. We need better accountability, as well as better and less expensive assessments.

    Like I said, while we discuss CSAP let us not forget that academic achievement scores do not necessarily measure overall quality of education. This does not imply that the quality of an educational institution is immeasurable, nor does it imply that we should not be conducting measurements. It doesn’t mean there should be no accountability, and I don’t understand why this would be assumed.

    • Alexander Ooms says:


      Being theoretically open to accountability while arguing against any measurement system is a bit of a paradox. A discussion of which standardized test is optimal is very different from a discussion of whether “CSAP is a worthy measure of school improvement.” Believe me, I agree that the initial assessments we use to measure school improvement will have their flaws, but you can’t get to the best possible system without some trial and error.

      I think a discussion about the best possible measurement systems is a productive one, but let’s not start it by saying that we should get rid of one of the few school measurement systems we have.

  5. Kevin Crosby says:


    Perhaps I’m not making myself clear, or maybe you’re projecting what you want to believe I mean…

    Take a look at the Piton report and you will see why I believe that we’ve done the CSAP trial, and it was an expensive boondoggle of an error:

    What we need are low-cost and efficient summative assessments for measuring school performance, and better formative assessments at the district and school level aimed at improving learning and instruction.

    Colorado does not need to spend almost $25 million per year on CSAP, plus additional district implementation costs (an estimated $900,000 for DPS alone, for example). We need to spend our money improving schools, not padding the wallets of our former president’s friends at McGraw-Hill.

    • Alexander Ooms says:

      Hi Kevin,

      As I’m saddled with an odd last name, please feel free to call me Alex. Or other names.

      There may be some significant projection on this blog; I doubt I am the only person afflicted. I know the Piton report, and I have a more benign view of it than you do (more in a sec).

      Now, I’m all for a low-cost, highly efficient assessment too (and we can add ecologically sound and other virtues), but you are comparing the CSAP with a theoretical test that does not exist. Should we do no assessments until we have one that is perfectly pure? Your initial comment was not about which assessment system to use; it was a critique of CSAP that read as if you advocated its abolition, not a substitution. Perhaps that was my error; others can decide for themselves.

      Now, Colorado spends $25M on CSAP, which against an overall K-12 education budget of what I believe is still in the neighborhood of $5.5 billion comes out to a whopping 0.45%. It’s both a decent sum of money and a rounding error in the budget. Without a statewide assessment it’s hard to track progress at all. Do we want some objective measure of academic progress in an industry where we spend $5.5B in public money each year? I sure do, and it is well worth that 0.45% to me. I’d frankly advocate a more substantial assessment system with multiple measures beyond CSAP, which would probably be even more expensive. I don’t know how one improves schools without measuring them.
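
      The budget-share arithmetic is easy to check, using the comment’s own approximate figures ($25M of a roughly $5.5B K-12 budget):

```python
# Quick check of CSAP's share of the K-12 budget, using the
# approximate figures quoted in the comment above.
csap_cost = 25_000_000       # ~$25M per year on CSAP
k12_budget = 5_500_000_000   # ~$5.5B total K-12 budget

share = csap_cost / k12_budget
print(f"CSAP is about {share:.2%} of the K-12 budget")
```

      That works out to roughly 0.45% of annual K-12 spending.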

      Back to projection — I don’t have much to say on your claim that CSAP is a conspiracy to line the pockets of ex-presidential cronies who went into publishing to cash out (because it is such a lucrative business). But I’ll project my belief that this is not particularly germane.

  6. Kevin Crosby says:

    Alex (if that really is your name):

    The mention about the McGraw-Hill connection is admittedly a bit of a conspiracy theory. I’m not inclined to dig it up at the moment, but word has it that the Bush and McGraw families are quite close, and that by signing NCLB Bush made his friend a ton of money. One could even cry conflict of interest, but germane or not, it makes one wonder. Still, it’s not so much that Bush signed the law, but that states bought the product.

    Anyway, I haven’t been “comparing the CSAP with this theoretical test that does not exist.” I’ve been comparing it quite specifically, as does the Piton report, with NWEA’s MAP. Regardless of percentages of overall budgets, a savings of $20 million a year could be better spent, or not spent at all. I mean, isn’t that 400 teachers at $50,000 each? At a time when budgets are so tight and teachers are being laid off we’re going to defend unnecessarily spending $20 million a year when there are equally effective and much less expensive alternatives? Over the last decade we could have saved $200 million!

