
DPS’ response to the credit recovery controversy

Monday, June 6th, 2011

Editor’s note: This post was submitted to Education News Colorado by Antwan Wilson, Denver Public Schools’ assistant superintendent, office of post-secondary readiness. It offers the district’s response to this blog post from EdNews Publisher Alan Gottlieb, and this article from Westword.

I wanted to take this opportunity to address the concerns raised in recent media reports about the credit-recovery program at North High School.

The issues raised in the report are very serious ones, and we are actively investigating the claims and reviewing our overall credit-recovery procedures.  Should we find violations of our guidelines or ethical standards or the need to implement clearer or stronger policies, we will take action to ensure the integrity and rigor of that program and all of our programs.  We certainly recognize that for our diplomas to have value, our programs must be – and be seen as – rigorous.

In addressing the concerns about rigor, it’s important to take a minute to discuss the purpose of credit recovery and where it fits in our overall high school programs.

First, a word on rigor. Over the past several years, Denver Public Schools has significantly strengthened the rigor of its high school programs. The district has increased the number of credits required for graduation from 220 to 240 (to our knowledge the highest in the state) by adding a fourth year of math and an additional lab-science requirement, among other changes.

We have nearly doubled the number of students taking and receiving college credit from Advanced Placement courses over the past five years, and we have also nearly tripled the number of students concurrently enrolled in college-level courses.

The percentage of concurrently enrolled students receiving As, Bs, or Cs in these college-level courses (and therefore college credit) is over 80 percent. And these increases cut across all racial and socioeconomic groups. Our district has also posted double-digit gains in math and reading proficiency on state assessments over the past five years.

Our mission at DPS is to ensure that all of our students graduate from high school, successfully pursue postsecondary opportunities, and become successful world citizens. This mission is important in that it sets a high bar: it requires us to implement a district-wide system that meets the needs of all of our students regardless of who they are, where they come from, or what their previous academic performance may have been.

Aligning mission to Denver Plan

This mission aligns with the 2010 Denver Plan goal of being the best urban school district in the country. It says that we recognize and appreciate the diversity within our student population and the many unique needs of our students, and that we are making it our responsibility to construct a system that prepares all students for success in the college and career opportunities they seek.

In order to fulfill this mission, we need to acknowledge where we are currently (a roughly 53 percent overall on-time graduation rate/66 percent for traditional high schools); we need to understand the challenges that negatively impacted efforts to improve in the past; and we need to work to construct a comprehensive system that better meets the needs of the students we serve.

Doing this requires improvement in how effectively we educate the whole child from kindergarten through 12th grade. This includes raising the bar for all students in terms of academic rigor and expectations at every grade while at the same time implementing sufficient supports to ensure that students meet these expectations. We want our most motivated and successful students to know that they are noticed and appreciated, and that they will be challenged to reach their highest potential. At the same time, we want our students who struggle to know that we expect them to be successful as well and will do what it takes to see that they, too, reach their potential.

This potential involves preparation for education beyond high school. Whether that means a four-year university or college, a two-year community college or technical school, a one-year certificate program, or military service, our goal is to prepare all of our students to enter these institutions having mastered the necessary standards and without the need for remediation.

In addition to implementing rigorous grading standards, we also recognize that we must have strong support systems when students fail to meet expectations or do not respond to the initial interventions by the classroom teacher and school leadership. Our students have a responsibility to learn, and we recognize that there are some students who have not mastered the study skills necessary to gain subject matter proficiency in their studies. In such cases, these students will earn failing scores and this will require us to provide more intensive supports to help them meet expectations.

Confronting tough challenges

Again, if we are to accomplish our mission to graduate all students and prepare them all to be postsecondary ready, we cannot give up when faced with these challenges. For these students, we will provide targeted support that helps them get back on the right path. These supports include, but are not limited to, interventions such as unit and credit recovery.

Unit recovery should be implemented as an on-time intervention after a student has failed to demonstrate mastery of content in a major unit of study while enrolled in a class. The classroom teacher and the student (with the support of school leaders) work together so that the student can retake the failed unit and demonstrate competency on its specific standards. This may occur in the classroom, online, or in a blended model.

Credit recovery, on the other hand, involves a student retaking a course they have previously failed. This is typically done in a blended learning environment involving online curriculum and assessments with instructional support provided by a teacher. We are partnering with APEX Learning on these efforts because of the rigor and comprehensiveness of their programs. Their programs are used across the nation in many urban districts to provide original credit, foundational courses, literacy intervention, Advanced Placement courses and preparation, and unit and credit recovery. APEX is accredited by the Northwest Accreditation Commission and approved by the National Collegiate Athletic Association.

In order to ensure the rigor of our credit-recovery courses, each course is supervised by a teacher, and students receive individualized instruction in addition to working online. Individual assignments emphasize mastery of essential state standards, as in traditional courses, and students must demonstrate through assignments mastery of each individual unit before they can move on to the final exam.

To pass a credit recovery course, a student must obtain a score of 80 percent or better, which is 20 points higher than in a traditional course with a required semester of seat time. Students in a blended learning environment should be supervised at all times, and all assessments should be closely monitored, as is expected in all classrooms. When taking tests and quizzes, students (except as may be provided for in an IEP) may not use books, notes, websites, or any other aids.

A thorough investigation

We are doing a thorough investigation of credit-recovery practices and auditing graduation transcripts at North High School to determine whether these guidelines were followed. To date, that investigation has determined, at a minimum, that there were serious deficiencies in following procedures and keeping records during the 2009-10 school year.

We will continue a thorough and comprehensive review of credit recovery at North and ensure that the shortcomings at that school from last year are not repeated in other programs throughout the district. We continue to believe strongly in the important role that unit and credit recovery play in our schools, as they do in districts nationwide.

It has long been clear that the old way of requiring a student who fails a course to repeat it the following year, in the same classroom setting in which the student failed it the first time, is ineffective and leads to a big increase in dropouts. Our data clearly show that the largest number of student dropouts fell off track during their ninth-grade year due to failing core classes. The data also show that it becomes harder to get these students back on track to graduate the longer they are allowed to remain off track. The solution here must be to ensure the rigor of unit and credit recovery offerings, not to do away with them.

We must also face the question, as Mr. Gottlieb puts it, of “whether the pressure exerted on high schools to improve graduation rates tacitly encourages school administrators to juke the stats to make themselves and the district look better.”

We acknowledge that this incentive exists here, as in many places elsewhere. The incentive to make oneself or one’s unit look as good as possible statistically exists regardless of whether you’re measuring graduation rates, financial performance, academic achievement, or athletic performance. The problem of teachers and schools having incentives to pass students on to graduation by reducing rigor long predates and extends far beyond credit recovery.

The question, then, is how to deal with the fact that these incentives have existed, do exist, and will exist. The answer cannot be to stop measuring or caring about our schools’ graduation rates, for that is clearly one of the most important measures of a high school. Rather, the answer can only lie in the district having a strong combination of clear procedures, ethical practices, and strong action to address any violations.

As part of this effort, earlier this year I convened a task force of teachers and school leaders to clarify and strengthen grading policies, with clear alignment to state standards. Grades should be based not on process elements, like attendance, but on proficiency demonstrated through multiple assignments and tests on the elements of the state standards the course covers.

Setting high expectations for all

Students who are demonstrating an inability to complete assignments as expected by teachers should receive immediate intervention or consequences, depending upon the reason for not completing the work. This may include mandatory tutoring classes before school, at lunch, after school, or during the school day. It may also mean shortening the student’s academic class schedule to include core academic classes and a favorite elective, and then providing targeted study sessions the remainder of the day, with very small teacher-to-student ratios focused on supporting students in completing, at mastery level, the work assigned by classroom teachers.

We cannot allow our students to choose to fail and for them to believe that we will do nothing to prevent it.  Teachers are NOT to give students either full or partial credit for work they did not do. In fact, we have taken recent action to end a grading practice at one of our high schools that allowed teachers to give a grade of 53 percent to students who missed an assignment.

Missing work is to be marked as missing in the grade book, and interventions are to be implemented immediately to support students who need additional instruction to complete the task or to hold students accountable for completing what was expected of them by their classroom teacher. Like school grading and measurement policies, school makeup work policies should be communicated effectively to all students, parents, and other stakeholders and consistently implemented throughout the school without exception.

We are here as public servants in the field of education for the sole purpose of giving ALL of our students the skills and confidence they need to make their dreams come true. We expect a lot from them and from ourselves. We work hard to challenge, support, and inspire our students. We do not accept excuses for failure; we will not tolerate dishonesty in reporting student achievement; and, we will never give up on a single student.


The graduation-proficiency gap in DPS

Monday, June 6th, 2011

Alexander Ooms is a member of the boards of the Charter School Institute, West Denver Preparatory Charter School and the Colorado chapter of Stand for Children.

The recent Westword article on Denver North High School’s manipulation of its graduation rates, the belief that “juking the stats” likely spreads beyond a single school, and a sage comment at the end of Alan’s post wondering what other Denver high schools were affected all indicate that this is a topic where rhetoric might benefit from a closer relationship with data.

At its crux, the question is whether graduation rates tell us something meaningful about how district schools are performing academically. And it sure looks like they do, but not in the way one might have hoped.

For what the North debacle — and a previous yet related controversy over Lincoln High School — bring into question is twofold. First, does a high school diploma signify a reasonable, baseline level of student achievement; and second, is the rise in DPS’s graduation rate spread evenly throughout the district, or is it being used by some schools to mask a lack of academic rigor and proficiency?

To answer the first question, we need to see if there is a pervasive gap – particularly at certain schools – between a school’s graduation rate and the ability of its alums to read, write, and do math at grade level. As one teacher at North commented for the Westword article, are we reaching a point where someone could say “Oh, they went to North? They’ll give a diploma to anyone” – and for how many schools might this be an issue?

So here is a quick graph comparing respective 2010 graduation rates (data here) and 2010 average proficiency rates* (from CDE) at a number of notable, open-enrollment DPS high schools.

The red line indicates the trend; the schools above the line will have more students who graduate with solid academic skills; those below the line will have more graduates who lack basic proficiency. How far you are from the line shows the gap: well above the line pretty much guarantees a close correlation between graduation and at least a base level of academic ability; well below the line increases the likelihood that a diploma has little relation to academic skills.

What do we see? Joining North below the trendline as prominent outliers are Bruce Randolph and MLK – both of which have graduation rates within spitting distance of 90 percent, yet proficiency rates that are but a small fraction of those numbers. Also below the trendline, but somewhat closer, are Kennedy and Montbello, while Lincoln teeters just above the line but with poor scores on both measures. And perhaps this will surprise no one, but it is exactly these schools that have made the most recent progress with graduation rates, and DPS has not been shy about trumpeting this data as a mark of success.

The recent increases in DPS graduation rates seem to be driven by precisely this same set of schools — all of which lag badly in academic proficiency. While both Bruce Randolph and MLK are graduating their first class and don’t have previous data, the other schools all have double-digit percentage increases from 2009 (North 21 percent, Kennedy 17 percent, Montbello 15 percent and Lincoln 14 percent), while the four schools with higher proficiency saw far smaller jumps (East 4 percent, GW 6 percent, DSST 8 percent, and TJ 10 percent).

So, are these schools masking their poor academic progress with the easier task of boosting graduation rates? Should we celebrate these schools for their progress with graduation rates (as President Obama did with Bruce Randolph), or question why few of their graduates are able to do basic academic work? Particularly for administrators (as the Westword article showed), it may be far easier to achieve — ethically or not — higher graduation percentages (and proclaim your school a success) than to do the more difficult work of driving better academic results. Should one obscure the other, or should the two go hand in hand?

Mind the Gap

To look at the same data a slightly different way, here is a table showing the same schools, this time ranked on the final column of a graduation-to-proficiency gap (the ratio of graduation percentage over average proficiency).

There is one school with a graduation rate significantly above the mean, and a proficiency rate significantly below the mean: Bruce Randolph. North places second, and it is testimony to its low proficiency that it does so while still ranking significantly below the mean in graduation rate. Montbello manages the largest gap with stunning inadequacy at both ends, including some single-digit proficiency scores and the second-lowest graduation rate overall. Lincoln and MLK round out the quintet of schools where the numbers look askew (with Kennedy pretty close behind). While it is a somewhat arbitrary line, a gap ratio greater than 2:1 is a good place for further examination.
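The gap ratio in the table is just graduation rate divided by average proficiency, with anything above 2:1 flagged for a closer look. A minimal sketch of that computation follows; the school names and percentages here are illustrative placeholders, not the actual CDE figures.

```python
# Compute a graduation-to-proficiency gap ratio for a set of schools.
# The percentages below are made-up placeholders, not actual CDE data.
schools = {
    "School A": {"graduation": 90.0, "proficiency": 30.0},
    "School B": {"graduation": 70.0, "proficiency": 50.0},
}

def gap_ratio(graduation_pct, proficiency_pct):
    """Ratio of graduation rate to average proficiency (both in percent)."""
    return graduation_pct / proficiency_pct

# Flag schools whose ratio exceeds the (admittedly arbitrary) 2:1 line.
flagged = [name for name, d in schools.items()
           if gap_ratio(d["graduation"], d["proficiency"]) > 2.0]
print(flagged)  # → ['School A']  (ratio 3.0; School B's 1.4 is under the line)
```

A school graduating 90 percent of students while only 30 percent test proficient gets a ratio of 3.0 and is flagged; the ratio deliberately ignores growth, since the argument here is about what a diploma signifies.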

Does this mean that some of these schools, along with North, are “juking their stats”? It’s not clear – many are also achieving higher than average academic growth (including Bruce Randolph and MLK) – but then again, diplomas are intended to indicate some measure of academic proficiency, not growth. And, as Westword pointed out, North, Montbello and Lincoln all have full-blown credit recovery centers offering a different (and, let’s be honest, far less rigorous) path to graduation. In many ways, by boosting graduation rates through lowered standards that ease the path to a diploma (as is clearly the case at North), these schools are probably digging their proficiency holes even deeper. It means not just that these schools may fulfill the fear articulated by the teacher at North of awarding a diploma to just about anyone, but that the gap may increase still further.

And, perhaps more importantly, does it even matter whether the heightened graduation rates are “juked” (with programs such as online credit recovery) or honestly achieved, if they are not accompanied by increased academic proficiency? In 2010, DPS increased its graduation rate by 5.4 percent but saw a boost in overall proficiency of just 1.3 percent (and that was for all schools – I’d bet that for traditional high schools the proficiency increase was flat). If you were a school administrator, where would you put your efforts (and what can you better control)? And if you were DPS, which measure would you prefer to highlight?

Is Graduation an Academic Measure?

For the larger issue is a point on which there is surprising disagreement: Is it the primary purpose of public schools to graduate students with a certain threshold of academic skill?

A surprising number of people – some of them friends, many of them reasonable – argue that, particularly in high-poverty urban schools, academic achievement is subordinated to other measures. Advocates of these schools would say that increased graduation rates mean kids are not dropping out, are meeting other metrics of responsibility (such as attendance and basic class assignments) to earn passing grades, and are absorbing critical social and other skills that leave them more mature and better equipped for their lives after high school. Under this rubric, it is an achievement simply to keep these kids in school at all.

Detractors would argue that the purpose of schools is not simply to warehouse kids in a safe facility and build social aptitude, but to impart some basic level of academic ability, and that allowing them to graduate without these skills may do more harm than good, particularly when many of these students — who have, after all, successfully passed their classes — have no idea that they are ill-prepared compared to many of their peers, and will quickly find that the demands of college or the modern workforce far outstrip their preparation. There is no second chance at K-12 education.

A related problem involves rising remediation rates – the percentage of students who go to college unprepared and have to retake classes at a high school level. As Alan pointed out just over a year ago, this is a statewide issue, but many of these same DPS schools (North, Montbello, Lincoln) are again leading the pack. There is a good and reasonable debate about what these remediation numbers really mean, but at a minimum, the relative differences between schools are cause for apprehension. And in looking at proficiency scores, we are talking about something even more fundamental – not just whether students are prepared to continue on to higher education, but whether those who have decided to stop (or are unable to continue) their scholastic careers have the academic skills one might expect after 13 years of public education.

Several states now require some independent assessment for graduation. California, by way of example, has a High School Exit Exam, which survived a considerable legal challenge on its way to becoming law. When the state first instituted the test, nearly 20 percent of seniors failed it. Recent classes have done better. This exam is hardly draconian: students get eight chances to pass, the test measures English at a 10th-grade level and math at an 8th-grade level, and it takes 60 percent or fewer correct answers to pass. But if you have a high school diploma in California, it has a set meaning – one that connotes something of value to both its student recipients and the employers who seek to hire them. Does a diploma in Denver have the same meaning?

For these diplomas are widely viewed as a critical and central measure of public education. In the most recent (and final) mayoral debate, both candidates criticized DPS’s current 52 percent graduation rate and singled out graduation percentages as an important metric they would track to better understand the success and progress (or lack thereof) of public education in Denver. Graduation rates were mentioned more times than any other single metric, academic or otherwise.

As moderator of the debate, I asked both candidates about the graduation problems at North, and if they favored an independent academic assessment at graduation (or at other points in K-12 education) so that a DPS diploma would indicate a certain level of academic achievement. Both candidates somewhat slipped past the question without answering it directly (hear the question and responses in the full podcast at 36:30 to 40:40 via link or download).

Asking for a higher graduation rate without also wanting to measure or interpret what it may mean is the norm, and not just for politicians. This is partly due to the heightened political climate of Denver’s education debate, where a reform-oriented administration pumps up some stats beyond what they may deserve, while any negative news is seized by defenders of the status quo as a way to criticize the superintendent and  weaken the administration and its reforms. This discourse makes rational discussion increasingly difficult.

But aside from the political theatre, the people who are harmed the most by the graduation-proficiency gap are the legitimate students from many of these schools who have worked hard and justly earned their diplomas, only to find this achievement largely debased both by the actions of their peers, and a system that — rightly or wrongly — seems to increasingly use the mantra of “multiple measures of achievement” to boost graduation and other metrics while undermining academic preparation and proficiency. This, after all, is the blunt narrative at the heart of public education’s problems: adults fighting each other to protect jobs and for political supremacy while kids suffer.

* Note: It might be more accurate for a particular class to use 10th-grade proficiency from 2008 (since this will be the graduating class in 2010), but I thought it more complete to look at proficiency for the school overall, and also more fair if a school has made significant academic progress in the intervening two years.


Remediation rates misused, misunderstood

Monday, April 25th, 2011

Holly Yettick is a doctoral student in the Educational Foundations, Policy and Practice program at the School of Education at the University of Colorado in Boulder.

College remediation rates are the school accountability measure du jour. Once relegated to the dusty realms of higher education, a topic largely ignored unless it involves athletics or scandals (or athletic scandals), remediation rates took center court when Colorado and other states linked K-12 and postsecondary databases.

All of a sudden, we could evaluate individual districts and schools by examining what percentages of their graduates were assigned to remedial college courses.

Like too many educational measures, college remediation rates are often misused and misunderstood. Specifically, I have noticed that they are often mentioned in the same breath as high school graduation rates. By this I mean that they are treated as if they are alternatives to standardized exams—i.e. as holistic assessments of the ultimate outcome of 12-plus years of education.

They are not.

At least not in Colorado and many other states. According to Colorado’s remedial education policy, students are referred to remedial courses based upon their math, reading and/or writing scores on ACT, SAT or ACCUPLACER exams. For instance, first-time undergraduates are referred to math remediation if they earn less than a 19 on the ACT math section, less than a 470 on the SAT math section or less than an 85 on the ACCUPLACER Elementary Algebra test.
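The referral rule described above is a single statewide cutoff per exam, which is what makes it quick and consistent. A sketch of that logic, using the three math cutoffs quoted from the policy (ACT below 19, SAT below 470, ACCUPLACER Elementary Algebra below 85); the function and dictionary names are illustrative, not part of any state system:

```python
# Math-remediation referral under Colorado's single statewide cutoff rule,
# as described for first-time undergraduates. Cutoff values are the ones
# quoted in the text; the code structure itself is only an illustration.
MATH_CUTOFFS = {"ACT": 19, "SAT": 470, "ACCUPLACER": 85}

def referred_to_math_remediation(exam, score):
    """Return True when the score falls below the statewide cutoff for that exam."""
    if exam not in MATH_CUTOFFS:
        raise ValueError(f"unknown exam: {exam}")
    return score < MATH_CUTOFFS[exam]

print(referred_to_math_remediation("ACT", 18))   # → True  (below the 19 cutoff)
print(referred_to_math_remediation("SAT", 470))  # → False (meets the cutoff)
```

Note how blunt the rule is: one point on one test, with no weighing of grades, coursework, or other measures, determines the referral.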

So unless you believe that these tests are holistic indicators of a student’s past performance and future potential (and some do), remediation rates are not alternative measures that prove or disprove the success or failure of the standardized testing movement or add a new, non-test-related dimension to the test score data already supplied by the state. At least not in Colorado. In Colorado, remediation rates ARE standardized exams.

There are certainly advantages to using a single, statewide cutoff score to assign students to remedial courses: It is quick. It is consistent. It is cost-efficient.

But is it valid? Questions were raised for me by a February Gates Foundation-funded report that Katherine L. Hughes and Judith Scott-Clayton wrote for the Community College Research Center at Columbia University. In their working paper, Hughes and Scott-Clayton describe the results of a 2009 meta-analysis (a quantitative summary of multiple studies) by the College Board, which administers the ACCUPLACER: Among community college students whose test scores are high enough to exempt them from remediation, the correlation between the ACCUPLACER score and receiving a C in the relevant course ranges from .25 for the algebra exam to .10 for the reading comprehension test. (Correlations range from 0, meaning no relationship between the test and the course grade, to 1, meaning the test score perfectly predicts the course grade.)
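To make those correlation figures concrete, here is a small sketch of the Pearson correlation coefficient being described, run on synthetic numbers (not the College Board data): scores that track grades perfectly yield a correlation of 1, while the .10 to .25 reported above indicates a much weaker relationship.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic example: course grades that rise in lockstep with placement scores.
scores = [60, 70, 80, 90]
grades = [1.0, 2.0, 3.0, 4.0]
print(round(pearson_r(scores, grades), 2))  # → 1.0, a perfect linear relationship
```

A correlation of .10, by contrast, means the placement score explains only about 1 percent of the variation in course outcomes, which is why the validity question matters.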

Hughes and Scott-Clayton do conclude that ACCUPLACER and its ACT counterpart COMPASS are “reasonably valid predictors” at least for the students who place out of remediation. (The meta-analysis did not account for students who were assigned to remediation based upon their test scores.)

But they note that the tests are better at predicting results in math than in literacy and better at discerning which students will receive a B than which students will fail. Further, the tests do not take into account many factors that are important for college success (e.g. study skills, the presence of a strong support person). The test vendors themselves recommend using multiple measures to more accurately assign students to remediation.

An eye-opening session I attended at the recent Education Writers Association conference in New Orleans raised more questions for me about remediation rates based on test scores. During a panel discussion, Bruce Vandal of the non-profit, non-partisan Education Commission of the States noted that students often walk onto college campuses completely unaware that they will be required to take a test that day, much less a high-stakes test that will have a profound effect on their chances of postsecondary success. Math is the most commonly flunked exam. Is it possible that a recent high school graduate has simply forgotten what he learned three or four years earlier in high school algebra?

For these reasons and others it is perhaps unsurprising that half of the students assigned to remediation have this reaction: They never even sign up for their remedial course.


Effectiveness Council recommendations ‘comprehensive’

Monday, April 18th, 2011

This post and the one that follows present starkly different views of  the work of the State Council for Educator Effectiveness, which issued its report to the State Board of Education last week.

This post was submitted by Matt Smith, chair, State Council for Educator Effectiveness (Vice President, Engineering, United Launch Alliance) and Nina Lopez, vice-chair, State Council for Educator Effectiveness (Special Assistant to the Commissioner, Colorado Department of Education).

We know great principals and great teachers can make all the difference in a child’s education.

In Colorado, we want to recruit, retain and reward more great teachers and school leaders.

In response, the state legislature last year passed a new law – one that garnered national attention – dramatically changing the way teachers and principals are evaluated and compensated.

Leading this bold effort is the State Council for Educator Effectiveness. Governor Bill Ritter, Jr., appointed its 15 members in March 2010.

Over the last year, the Council held in-depth conversations about what effective teaching and school leadership are, how they are measured, and strategies for continuous improvement.

The Council studied research and best practices, and spoke with experts in local school districts and across the country. And members talked extensively about what’s best for Colorado, all while balancing state requirements with local values.

There was lots of input along the way too. More than 1,750 people responded in March to the Council’s online survey alone. A majority were teachers – plus principals, superintendents, school board members, legislators, and parents.

Their best hopes for the new educator evaluation system were improving instruction, fostering collaboration, using a common understanding of “effective” performance, and providing meaningful and regular feedback to educators.

The end result: A set of comprehensive recommendations presented last week to the Colorado State Board of Education that will help to ensure that every child has an effective teacher and an effective principal.

What’s different? Colorado now has common statewide definitions of teacher and principal effectiveness, clearer expectations for job performance, and consistent scoring guides to rate job performance.

In the past, state law required districts to rate educators as either satisfactory or not satisfactory. Now, teachers will fall into one of four categories: highly effective, effective, partially effective and ineffective. An educator’s nonprobationary status now is based on effectiveness in the classroom – not years of service.

The Council also recommends aligning teacher preparation programs in colleges and universities with the new performance standards for teachers and principals. Students who graduate from teacher prep programs will have more of the skills Colorado wants to see before they step into a classroom.

Districts are urged, but not required, to use student and parent survey data to inform teacher and principal evaluations.

The Colorado State Board of Education is now reviewing the Council’s recommendations and is expected to rule on them by November. The new educator evaluation system then returns to the state legislature in 2012 for a final review. State legislators can veto individual rules set forth by the state board of education.

According to state law, the new educator evaluation system will be rolled out in every district in fall 2013. The Council is recommending the new system begin in fall 2014 to allow time for further refinements after it's piloted in select school districts.

As in other states, Colorado's public schools are reeling from deep budget cuts that are resulting in hundreds of staff layoffs, bigger class sizes, and school closures. Many are asking how much a new educator evaluation system will cost.

A cost study estimates that districts could incur a one-time start-up cost of $53 per student. For ongoing annual evaluation, estimates for teachers and principals vary depending on their rating of effectiveness.

We know these costs will be a burden for districts already under severe financial pressure. To lessen the impact, the Council will advise the state to provide the maximum assistance allowable to districts. Districts, in turn, may need to explore reallocating resources and securing grant funding.

In the long run, the investment will result in better outcomes for students, educators and, ultimately, Colorado.

Effectiveness Council grade: Partially effective

Monday, April 18th, 2011

Van Schoales is executive director of Education Reform Now, a national advocacy group based in Denver.

Last week the State Council for Educator Effectiveness released its long-awaited report on implementing the "Great Teachers and Leaders" law, otherwise known as SB 10-191. This report and the implementation of 191 could have significant implications not only for Colorado classrooms, but also for the rest of the nation, as multiple states follow Colorado's lead in tying teacher evaluation and employment to teacher effectiveness.

Despite all of the time and resources spent by the council in drafting this report (not to mention the 177 pages of text), the council came up short in providing necessary and specific recommendations about how both teachers and principals should be evaluated. The report does a nice job of giving a broad overview of the work that needs to be done to implement an effective teacher and principal evaluation system, but gives very few specifics about what that evaluation system should entail.

I am not alone. Many in the education and business communities – including Colorado Stand for Children, Colorado Concern, and the Metro Chamber of Commerce – have some of these same concerns.

The timing for this could not be worse with so much pressure now being placed on the Colorado State Board of Education and Colorado Department of Education (CDE) to make sure SB 10-191 works.   The permanent commissioner is still unknown and will likely not be able to start until later this summer.  If ever Colorado needed a bold, politically adept and reform-minded state commissioner, now is the time.

All of these complications are exacerbated by the fact that key CDE leader Rich Wenning is no longer able to shepherd such important and complicated projects through the bureaucracy.  Much will depend upon the state board and CDE staff stepping up to ensure that SB 10-191 is implemented well.

I had hoped that the council's report would have distilled much of what is possible, providing a detailed roadmap for teacher and principal evaluation.  Ideally it would have included some snapshots of a system like DC's IMPACT teacher evaluation system, so that CDE could have clearer guidance for making the detailed rules.

Instead, there were very broad recommendations that included a long list of standards, 27 for teachers and 26 for principals, many of which are neither easily observed nor easily measured.

Here’s one example:

“Teacher Standard 1.4: Teachers make instruction and content relevant to students. Teachers incorporate postsecondary and workforce readiness and 21st century skills* into their teaching deliberately, strategically and broadly. These skills include creativity and innovation, collaboration, strong work ethic, critical thinking and problem-solving, civic responsibility, communication, personal responsibility, global and cultural awareness, IT skills, and the ability to discern, evaluate and use information.”

How does one evaluate based upon such vague and all-encompassing standards?  And, more importantly, how does one ascertain whether or not a teacher meets these standards based upon that teacher's students' performance or work product? The standards set forth in the report must be more clearly defined and measurable on a quantitative basis.

Furthermore, the number of standards in the report needs to be reduced and simplified. The report currently outlines 27 different standards to which each teacher must be held. Could you imagine being evaluated on 27 different standards?

An effective system design would have fewer standards with a set of detailed indicators that could be fairly evaluated.   The council’s work is a start, to be sure, but it is far from finished.

The report did make an important recommendation to add a fourth non-probationary performance category.  This fourth category, "Partially Effective," will allow districts more flexibility in retaining effective teachers and letting non-performers go.  The range of teacher performance ratings needs to be as robust as any good student evaluation system.

While some will be concerned about the one-time cost estimate of $53 per student for implementing the law, critics should note that these costs are already built into what effective schools do every day in evaluating educators. Good schools regularly update and improve their educator evaluation systems; it's at the core of what they do.

And for those schools that are not spending time evaluating educators, these costs are similar to what districts currently spend on student assessments. These costs represent less than 1 percent of what the state currently spends on education. Given the immense value of a great teacher, it seems wise to allocate less than one percent of current spending toward an evaluation system that will reward effective teachers and replace consistently low-performing teachers.

One final problem with the report is there are no detailed recommendations on how CDE will monitor and enforce school district implementation of SB 10-191.

What are the incentives for districts to implement the law effectively? And what are the consequences for those districts that do not live up to the intent of the law?

Because of SB 10-191’s passage, Colorado is now in the national spotlight when it comes to teacher and principal effectiveness. Not only will Colorado’s students greatly benefit from a new effective teacher and principal evaluation system tied to retention and promotion, but many other states are lined up to follow Colorado’s lead.

We have to get this right.

Higher ed: The next bubble?

Sunday, April 17th, 2011

A provocative hypothesis is newly making the rounds: Does higher education currently have the basic characteristics of a speculative economic bubble?

Given new life by investor Peter Thiel, it is an idea that has been around since at least 2009, when this article appeared in the Chronicle of Higher Education. On the back of a provocative discussion about the revenues needed to fund higher education in Colorado, I've been noticing an increasing number of data points that seem to fit this hypothesis surprisingly well.

The most spectacular bubbles in recent years were the Internet (circa 2000) and housing (circa 2008). The hypothesis notes that bubbles such as these have certain qualities, among them: 1) everyone believes that the underlying value is both irrefutable and will continue to grow; 2) prices are rising exponentially faster than other goods or services; and 3) these prices are being met due in large part to the easy availability of capital (generally debt). To take these in turn.

That college is seen as inherently valuable is a truism.  Here is how Peter Thiel puts it:

“A true bubble is when something is overvalued and intensely believed,” he says. “Education may be the only thing people still believe in in the United States. To question education is really dangerous. It is the absolute taboo. It’s like telling the world there’s no Santa Claus.”

Bubble observers have heard this before — housing prices “always” go up (and owning a house is “always” good). Internet commerce, freed from the economics of retail stores, will enable unprecedented growth.   Does a dogmatic belief in the intrinsic value of a college degree make us unable to accurately assess what it is really worth?

Among those who think it might is Paul Krugman.  Part of the faith in a college education is the belief that the value of education will become increasingly important — that more and more jobs will require the basic reasoning and analytical skills that any college graduate should possess.  This is a universally accepted truth. Yet, as Krugman says, "what everyone knows is wrong":

The belief that education is becoming ever more important rests on the plausible-sounding notion that advances in technology increase job opportunities for those who work with information — loosely speaking, that computers help those who work with their minds, while hurting those who work with their hands.

However, as Krugman points out, technology has progressed to a point where it is not just replacing menial labor, but a broad swath of middle-class, white-collar jobs that require cognitive abilities.  In contrast, Krugman notes:

Most of the manual labor still being done in our economy seems to be of the kind that’s hard to automate. [...] Meanwhile, quite a lot of white-collar work currently carried out by well-educated, relatively well-paid workers may soon be computerized. Roombas are cute, but robot janitors are a long way off; computerized legal research and computer-aided medical diagnosis are already here.

Even accepting that college is valuable, it is clear that its specific value is awfully hard to quantify, which makes an accurate assessment problematic. That college graduates make more money and have more productive careers than non-graduates is undoubtedly true.  But it is not entirely clear that it is their years at college that enable this success. What one would like to see is a study that matches the success of college graduates against people who were admitted to college but chose not to go (or who left voluntarily) – admittedly a data set that is probably so small and self-selecting as to be virtually useless.

In lieu of such a control group, other attempts to measure the value of higher education are raising a lot of questions.  At least one book, based on an analysis of more than 2,300 undergraduates at twenty-four institutions, found that "45 percent of these students demonstrate no significant improvement in a range of skills—including critical thinking, complex reasoning, and writing—during their first two years of college," and that 36 percent showed no progress in four years.  To rub salt in the wound, there is also increasing evidence that the prestige of elite colleges — and their accompanying higher price tag — makes little difference. One pundit went so far as to make a lucid comparison between American universities today and American car companies at their zenith, presaging a long and sordid decline.

While value resists measurement, cost does not.  College has a price, and there is no doubt that it and the rise of technology stocks and housing prices — albeit on shorter cycles — have a common trajectory. According to the National Center for Public Policy and Higher Education, over the last generation, average college tuition and fees have risen by 440 percent — more than four times the rate of inflation and almost twice the rate of medical care. In fact, if there is a broadly consumed service that has risen more in price than college over the same time period, I’m not aware of it. The impact is particularly disproportionate on students from low-income families, who are more likely to be the first member of their family to attend college, and often have the most to gain.

Now the standard reply to the current price tag is that higher education is charging what the market will bear.  Of course, that was the argument in other bubbles too.  What is also similar is the influx of cheap capital, which echoes the previous bubbles in housing (mortgage debt) and technology companies (equity investments).  In this case, the market depends increasingly on student loans.

How do students and families afford the rising price of higher education? As I've written previously – and which is even more chilling with new data – student debt is dangerously high, approaching $1 trillion.  As a 2006 report from the American Association of State Colleges and Universities (AASCU) states:

Students are deeper in debt today than ever before. Two out of three college students graduate with debt and the average borrower who graduates from a public [my emphasis] college owes $17,250 from student loans. Ten years ago, the average student borrower attending a public college or university graduated owing $8,000 from student loans (adjusted for inflation). [...] The number of college graduates with at least $40,000 in student loan debt has increased 10–fold in the past decade.

Student loan default rates are also rising, and at the same time there will likely be less assistance available (both scholarships and financial aid).  And in much the same way that the housing bubble was partially funded by government-supported mortgages (Fannie Mae), student debt is often supported by, um, government-supported loans (Sallie Mae). And, much like housing, the debt burden seems particularly inappropriate for those who can afford it least. The same AASCU report notes that 20% of the students who drop out of college without a degree have accumulated debt in excess of $20,000. And one important distinction remains: unlike other debt, one can't declare bankruptcy and walk away from college loans. This is debt that is with you for life, with all sorts of unhealthy implications.
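A quick back-of-the-envelope check (my own arithmetic, not from the AASCU report) shows just how fast that average debt burden has grown in real terms:

```python
# Implied average annual real growth in average public-college debt,
# using the AASCU figures quoted above: $8,000 -> $17,250 over ten
# years, both figures already adjusted for inflation.
start, end, years = 8_000, 17_250, 10
annual = (end / start) ** (1 / years) - 1
print(f"{annual:.1%}")  # prints "8.0%" — roughly 8% per year above inflation
```

In other words, the typical graduate's debt has been compounding at about 8 percent a year faster than inflation, which is the kind of trajectory one expects from a bubble.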

So the primary components of a speculative bubble in higher education — a blinding belief in inherent virtue, rising prices, and increasing debt — all seem to be in place. But the main characteristic of bubbles, of course, is that they eventually pop.  So, is higher education due for a sharp decline similar to housing prices and internet stocks?

Personally, I don’t think so, mostly for a simple reason: there is a lack of viable alternatives.  Housing bubble? You can rent instead of own.  Technology bubble? You can do lots of things with your money besides buy shares of internet companies.  But what else would one suggest that a smart, ambitious 18-year-old do if they are not going to attend college?  For better or worse, it’s simply hard to see large groups of students who have the option of going to college – even at an exorbitant cost – deciding they would rather do anything else.

And there are other perverse strengths to the college model.  Not the least is that it intentionally promotes both scarcity and elitism.  Pretty much any other organization that, as Harvard University did last year, denied 92% of willing applicants the privilege of paying tuition and board in excess of $45,000 a year would probably find a way to make a little more space available (which Harvard could clearly do without a degradation in quality).  And, as Tiger Moms everywhere know all too well, this elitism trickles down. The inherent belief in the value of an Ivy League or similar education is so entrenched in our collective thinking – even without much hard evidence in support — that even if demand somehow fell by 50% at the most prestigious schools they would still have far more applicants than spaces. It is hard to see a decline in all of higher ed, all at once (which is, of course, exactly what people said about the housing market).

But there is no doubt that even if the bubble does not pop, there are some segments where it is going to deflate (or has already started to), and fast. One place is the Tier III private colleges that are both expensive and not very good; their high price and questionable value are already driving students to state universities and community colleges.

Another place likely to see a deflating bubble is specialized schools, and the first of these seems already to have felt some impact.  The New York Times recently did a remarkable piece positing pretty clearly that for recent graduates, the value of a law degree is often not worth the crippling debt that many students took on (no matter how determined a spin law schools place on their employment statistics). And it is no coincidence that law school applications in 2011 were down 11.5%, to the lowest level since 2001. Clearly, unlike heading off to college, there are a number of attractive opportunities a young person might pursue instead of three years of paper chasing and incurring substantial debt.

All in all – and I don’t think there is a definitive answer here — it is a fascinating hypothesis, and it will bear watching to see if the three winds of value, price and debt continue to blow, increasing the tension on the surface area of higher education.

Lastly, as someone who spends more time thinking about K-12 than higher education, let me also say that I remain a fierce advocate of the option — if not the necessity — of college for every child.  It is one thing if, after reasoned consideration, a student elects not to attend a four-year college to which she has been accepted; it is another thing altogether for a student to lack the basic academic skills that would permit her to attend, and graduate from, a quality four-year college. For it is not the idea and allure of college that may be out of balance, but only its actual application.

Reform, the Adams 14 way

Tuesday, January 11th, 2011

Editor’s note: Susan Chandler, Ph.D., is superintendent of the Adams 14 School District.

Last week, Adams 14 was featured in an article written by Education News Colorado reporter Nancy Mitchell entitled “Census shows education gaps by district,” highlighting a variety of challenges that Adams 14 students face daily. While we cannot control challenges at home, we can strive to control teacher quality.

I appreciate Nancy closing the article with a description of the Adams 14 strategy:

But the key emphasis for Adams 14 – while some other struggling Adams County districts have implemented dramatic structural reforms such as Mapleton’s small schools and Westminster’s standards-based system – is improving teacher quality in a traditional setting.

“We believe, regardless of where our students start on the achievement level, they will still realize growth over time if we have highly effective teachers in our classrooms,” Albright said. “We’ve built all of our work around that.”

Adams 14 arrived at its strategy in 2008, after an external review team concluded that a systemic approach to reform was necessary to improve student learning. The review team found that Adams 14 was a district of schools as opposed to a school district. In a medium-sized, demographically homogeneous district with 13 schools and 7,500 students, it was evident that we could improve results by adopting initiatives that would be implemented with fidelity across all schools.

From this work, we arrived at the Adams 14 strategy: to improve student achievement so that 80 percent of students assessed will be on grade level by 2014 by ensuring that each classroom has a dynamic, standards-based teacher who provides powerful 21st century learning experiences to all students.

A system cannot improve what it cannot measure, so we selected the Balanced Scorecard as our performance management tool. All parts of the system must be measured to determine strengths and opportunities for growth. Through the Scorecard, we measure instructional quality throughout the year. This allows us to adjust teaching practices through professional development and coaching before it’s too late to make midcourse corrections.

To measure improvement in instructional practice, we first had to define effective instruction in Adams 14. In the spring of 2009, Adams 14 adopted a set of research-based teaching practices to set a baseline for high quality instruction in all classrooms. Every teacher receives training and feedback regularly. The baseline Adams 14 teaching practices – adapted from West Ed’s Teach for Success (T4S) – include three main elements:

  • Aligning instruction to the appropriate level of rigor to address the standards as outlined in the District curriculum, and including formative assessment in each lesson to gauge student understanding;
  • Gradually releasing responsibility for learning to students through explicit explanation and modeling (I Do), teacher-led practice (We Do), student independent practice with teacher support and monitoring (You Do) and small group instruction; and
  • Making engagement in learning mandatory for all students through a variety of teacher strategies.

As we have learned through our work with Teach for Success, any question that is important enough to ask one student is important enough to ask all students. With mandatory engagement, teachers instill accountability for learning in every student.

Our strategic plan outlines initiatives for increasing effective instruction, effective use of data to inform instruction and collaboration. Our most important reform thus far has been the implementation of weekly walk-throughs. As a walk-through team leader, I conduct weekly spot observations with two purposes: to measure the state of instruction in all classrooms, and to provide specific, timely and relevant feedback for each teacher observed. I am encouraged that the walk-through process we began in August 2009 is now accepted as standard practice in our district. This didn’t happen overnight, and launching this initiative was difficult; but our latest teacher feedback shows that teachers now understand the process and the purpose of walk-throughs.

To date, student achievement gains lag behind the improvements we have made in teacher practice; however, that does not shake my confidence in the work. And while this type of reform may not be considered revolutionary, we are excited to have long-term strategies in place to ensure consistent, quality instruction.

Realistically, we have just scratched the surface of the comprehensive reform necessary to improve results in our schools, but I am pleased with our instructional improvements thus far.

Heritage, related reports on Fla. don’t withstand scrutiny

Tuesday, November 30th, 2010

Many of you might recall an article last month in the Denver Post that discussed the release of a report from Colorado Succeeds, a new organization of business leaders focused on education issues in the state. The report was called “Proving the Possible: A case study of Florida’s K-12 education reforms and lessons for Colorado.”  At the time, I was intrigued, because our Think Twice think tank review project had just asked professor Madhabi Chatterji of Columbia University to review a similar report published by the Heritage Foundation called Closing the Racial Achievement Gap: Learning from Florida’s Reforms.

Professor Chatterji’s review of the Heritage report was released this morning. Before returning to the Colorado Succeeds report, let me quote the six key paragraphs from the press release about the Heritage report:

Did a collection of Florida education policies—ranging from grade retention to school choice and virtual schools—improve achievement and narrow the test-score gap? A recent Heritage Foundation report is part of a larger campaign to convince us that the answer to that question is “yes”—but a new review finds fundamental flaws in the Heritage report that render its conclusions untenable.

The Heritage report, authored by Matthew Ladner and Lindsey Burke, contends that Florida’s “far-reaching” education policies have caused test scores to increase and the achievement gap to narrow. In particular, the report focuses on fourth grade reading scores on the National Assessment of Educational Progress (NAEP). These research claims are also made by Dr. Ladner, the Vice President of Research at Arizona’s free-market Goldwater Institute, in a dozen other reports and articles similar to the Heritage report reviewed by professor Chatterji.

The claims, however, do not withstand scrutiny. “The report’s key conclusions are unwarranted and insufficiently supported by research,” Chatterji states in her review. Most importantly, she points out the very direct effects of the state’s grade-retention policy, causing the report’s comparisons to be largely meaningless. By analogy, consider growth in height instead of growth in test scores. If two states wanted to measure the average height of their fourth graders, but one state (Florida) first identified the shortest 20% of third graders and held them back to grow an additional year before measurement, the study’s results would not be useful.

That, in brief, is the key problem that professor Chatterji identifies with the Heritage report. Florida’s retention policy, instituted in 2002, focuses on third graders, who are held back when their reading scores are low. The Heritage report focuses on NAEP fourth grade reading scores. Low scoring readers—mostly black and Hispanic—were screened out of grade 4 tests, which resulted in inflated and erroneous fourth grade scores. “Chatterji’s review explains very clearly why the simplistic comparison of fourth graders before and after Florida’s grade retention policy is a predictable and worthless exercise,” says Kevin Welner, professor of education at the University of Colorado and the director of NEPC.

The review also points out that NAEP scores at other grade levels and even NAEP scores in fourth grade math do not show the same jump as NAEP fourth grade reading. That is, the report cherry-picks the best data. Also, even if scores in Florida are in fact increasing, the report’s methods are too weak to allow for a causal inference. The report uses only descriptive test score trends to compare states and then make sweeping generalizations. Moreover, many other changes occurred in Florida during the period analyzed, including the phasing in of one of the nation’s most ambitious class-size reduction reforms—yet the report never mentions these other possible causes of any improvements.

“In sum, the report’s analyses are highly biased and of very limited value,” Chatterji concludes. “The major elements of Florida’s education reform policies are in need of continuing and more careful examination, individually and collectively, before they can be recommended for wider policy adoption.”

So that’s the Heritage report.  What about the Colorado Succeeds report? It turns out that it is essentially the same as the Heritage report and, in fact, the same as similar reports published by free-market think tanks in New Mexico, Arizona, Wisconsin, Oklahoma, Utah, and Indiana, as well as by the Hoover Institution and the Pacific Research Institute, and in articles in National Review Online – but with a Colorado focus added on. In each of those (but not for the Colorado Succeeds report), Dr. Ladner was listed as either author or co-author.

In sum, the Colorado Succeeds report was (whether the organization wanted to take credit for it or not) part of a larger campaign by Dr. Ladner to market the Florida reforms. Unfortunately, as shown by Professor Chatterji, the marketing was for a seriously flawed product.

Proceed with caution on value-added

Monday, November 29th, 2010

For fairness in blogging, I feel compelled to write about the relatively new Brookings report “Evaluating Teachers: The Important Role of Value-Added” that takes a different perspective on using (value-added) student achievement data than the EPI report (“Problems with the use of student test scores to evaluate teachers”) released a few months ago.

Both studies were co-authored by a large group of distinguished researchers, so there is lots of food for thought in this debate. But let’s be clear that this is not just another case of “top researchers disagree on facts, so I can just ignore all of that confusing research and support what my gut says.”

The EPI researchers point out many flaws in the current technologies of using student value-added achievement data, and therefore recommend against its use in high-stakes decisions (like teacher rewards or firing).  The Brookings researchers agree that there are many flaws in value-added data, but ask the reasonable question “compared to what,” noting that other current methods of evaluating teachers are not very good either.

So it is more an issue of interpretation than what the facts are.

Perhaps the most interesting thing the Brookings researchers do is compare one problem with value-added data – the low correlation for a given teacher across years (the same teacher compared in years 1, 2 and 3) – with parallel correlations in other domains, like baseball batting averages, insurance salesperson rankings and mutual fund performance rankings.

The relatively low year-to-year correlation of teachers’ rankings (in the 0.3-0.4 range) has been cited by critics as a reason not to use value-added data, since it appears to have too much measurement “noise” to be as accurate as we would like (that is, in a disturbingly large number of cases, the same teacher is ranked as “good” one year and “bad” the next).

The Brookings researchers suggest that the “noise” is not greater than what we see in these other domains and that this is an argument to use value-added for high-stakes decisions, even with its flaws. In some sense, this is argument by analogy, the accuracy of which we should examine.

First we have to think about whether we believe a teacher varies greatly in her performance over time.  Does a teacher have a “true type” (good, bad, average) and our challenge is to sample and measure that (pretty consistent) “type” correctly, or does the teacher actually vary greatly in her ability over time?  (see my earlier blog on this topic.)

If we think the teacher type is relatively “fixed,” then the low correlation is a big problem: it represents “noise,” our inability to sample well enough to find the “true type.”  Personally, that perspective makes more sense to me than a belief that teacher quality varies greatly from year to year.
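To make the “fixed type versus noise” question concrete, here is a small, purely illustrative Python simulation (the teacher count, normal distributions, and noise level are my assumptions, not figures from either study). Each simulated teacher has a fixed true quality; each year’s measured value-added is that quality plus independent measurement noise, sized so that the year-to-year correlation lands in the 0.3-0.4 range the critics cite:

```python
import random
import statistics

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 10_000  # simulated teachers (an assumed number)
# Fixed "true type" for each teacher, plus fresh measurement noise each year.
true_type = [random.gauss(0, 1) for _ in range(n)]
noise_sd = 1.3  # assumed: noise somewhat larger than the spread in true quality
year1 = [t + random.gauss(0, noise_sd) for t in true_type]
year2 = [t + random.gauss(0, noise_sd) for t in true_type]

r = pearson(year1, year2)
print("year-to-year correlation:", round(r, 2))

# How often is a year-1 "top quartile" teacher rated below the median in year 2?
cutoff = sorted(year1)[3 * n // 4]
median2 = statistics.median(year2)
stars = [(a, b) for a, b in zip(year1, year2) if a >= cutoff]
flip_rate = sum(1 for a, b in stars if b < median2) / len(stars)
print("year-1 stars below median in year 2:", round(flip_rate, 2))
```

Under these assumptions the correlation comes out in the upper 0.3s, and roughly three in ten year-1 “stars” fall below the median the next year even though no teacher’s true quality changed at all: the very pattern critics describe, produced entirely by measurement noise around a fixed type.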

Second, let’s examine the analogies more carefully, starting with baseball.  Hitting a baseball thrown 60 feet at 90 miles per hour is a notoriously difficult and fickle skill – concentration and mental state do seem to matter a great deal – and perhaps so do the quality of the pitching, non-random choices about which hitter to put up against which pitcher (lefty against righty, fastball versus curveball), etc.

It makes sense to me that top hitters one year might be less effective the next year, when they might be facing a divorce, or a contract year, or whatever.  Also, baseball players are well-known to be highly compensated, perhaps partly for the risky elements of their performance over time.

This seems to me a weak analogy to a teacher who has six hours and 180 days a year in a more self-controlled environment to perform “good, bad or average.”

Second, let’s look at mutual fund performance.  Here, from living 15 years in NYC, where the financial markets are eaten for breakfast, I am quite confident that the top investors in boom times (buy more tech stocks in 1997!) are also likely to be among the worst investors in downturns (buy more tech stocks in 2000!) or slow growth periods (when the top performers are cautious, diverse portfolio investors).

Academic studies of financial markets strongly suggest “random walks” and very little likelihood that we should expect high correlations across years in top performers, making this, again, a poor analogy.

Third, insurance sales seem similar to financial investment to me – perhaps during good economic times a particular type of salesperson is more easily able to get families to spend money on insurance, while it might require a different set of skills to be a high sales performer in a recession.  Thus, the low correlation is in fact caused by factors outside the salesperson’s control (as with the financial markets, and probably partly in baseball too).

The Brookings researchers’ best analogy might seem to be the use of ACT/SAT scores for college admission despite a relatively low correlation with student GPAs (and the fact that no other measurable admissions factor has a higher correlation).  While I at first found this argument more compelling, it is severely flawed by the fact that they are now correlating two different things (actual student course achievement and a specific test), not the same thing over multiple years; a statistician would expect more “noise” when correlating two different measures.

So it seems that the Brookings researchers can’t have it both ways.  Either you believe teacher type is fairly fixed, in which case the low correlation reflects a real shortcoming of our measurement technologies and a true problem with using the data.

Or you believe that teachers’ quality varies as much as professionals’ in these other fields, where it seems quite clear that much of this variation is a function of the external environment.   Many teachers have argued that their lack of control over their environment is a major barrier to how much they can move student achievement – the non-random assignment of teachers to groups of students, the variation in students from year to year, variations in other supports in the school, changing curricula, etc.

So where does this leave us in a discussion that is not just theoretical? After all, Colorado’s State Council for Educator Effectiveness is working, even as we blog, on implementing the SB 191 requirement that 50 percent of a teacher’s high-stakes evaluation be based upon student achievement.   Researchers agree that value-added has problems – they disagree about how severe those problems are, and whether focusing too much on these flaws, and not on problems with other forms of evaluation, makes the “perfect the enemy of the good.”

My belief is that there are many current problems with using value-added – some are fixable, with more and better tests, with better administration of tests (to avoid outright cheating and overly teaching to the tests), and with a clear understanding of the challenges created.

But other problems are not easily fixable, and perhaps we don’t want to fix them because they have value within schools – the non-random assignment of students to teachers within a school, the group nature of teaching, especially in higher grades, the difficulty assessing performance improvement in the arts, physical education, and other domains.

These issues suggest using value-added data very carefully, and as only one component in a meaningful, high-stakes teacher evaluation system.


Anger, blindness – and grading schools

Tuesday, November 23rd, 2010

“Do you see this? … Look there, look there!” King Lear, Act V, scene III

Last Thursday’s “Montbello vote stirs up passions” (Denver Post) ended with these paragraphs. It quoted from opponents of the turnaround plan, who referred to previous district efforts at Manual and North High.

“Where is justice?” said Ed Augden, who said the reforms of the past have left students of color in worse positions.

But Stacie Gilmore, co-chairwoman of the Far Northeast Community Committee that has been meeting since April on how to remake the schools, said the community needs to put the vitriol behind.

“I would appreciate through this process going forward that we could really come together,” she said. “When there is so much fear and anger around an issue, it usually means there is a lot of hurt and pain. We need to start to heal as a community, as a city and as a people.”

During the Thanksgiving and holiday seasons this English teacher often asked his 7th and 8th graders to write an essay on their values.  It is a time of year when we are, perhaps, more reflective about deeper hopes and beliefs.  This fall I ask myself to take on that assignment.

My themes are anger and blindness.  And there’s a suggestion, if it helps us to see.


First: Anger. Vitriol. Fear.  When there is outrage at efforts to make major changes in a school community like Denver’s Far Northeast, it is puzzling.  Puzzling because, according to the district’s 2010 School Performance Framework, 12 out of 16 schools there are either Accredited on Watch, Accredited on Priority Watch, or Accredited on Probation. Puzzling because most of these 16 were among the 69 schools DPS rated as not “meeting expectations” in 2009 as well—and it’s been the case for too long.  Resisting change—when it is so necessary?  Isn’t this defending the indefensible?

An outsider to the community probably cannot understand.  An outsider without ties to the local schools—no friends and family who work there, no history with parents and grandparents attending those schools, no years of hearing “downtown” propose “solutions” that made little difference—and hence years (generations?) of mistrust that can’t be overcome even by a diligent effort to get community input—an outsider can try to empathize, but no, I can’t step inside the shoes of those in the Far Northeast community who feel they have not been heard.

Those close to the community tell me: “You have the legacy of so many failed reforms. Why believe this will be any better?” And as schools are often “a central crossroads” for the neighborhood, it is more than the school—it is the fabric of our community “they” are breaking up when decisions are made to turn around and transform schools. And it’s not just 900 Grant Street.  They remind me of the suspicion and upset that comes from a history of feeling betrayed by top-down decisions from government agencies; by seeing your neighborhood transformed—in places gentrified; by a loss of the community you and your family have called home for decades.  They tell me it becomes “territorial, you and me versus them.”

Maybe it’s a bridge too far; maybe too many of us cannot connect with this anger, this mistrust.  And yet we must try.

But there is another kind of anger.  Like that also expressed Thursday night by Gregory Hatcher, a recent graduate of Denver School of Science and Technology: “There are too many students failing in the Far Northeast, and it’s not fair. It’s an injustice.”  Yes, where is justice, when year after year school achievement shows little improvement?  When so many drop out? When the results, for kids, are tragic?


Second: We seek truth, but are often blind. Part of the tragedy of being human—an idea explored in western literature as far back as Oedipus Rex, on to King Lear, and one that continues to underlie our most enduring works of fiction.  In Oedipus the king uncovers the truth, which is so damning he blinds himself.  In Lear the blindness is first the King’s inability to see the truth and the lies in his daughters’ words, and a blindness to himself (as one deceitful daughter says, “he hath ever but slenderly known himself”); later the blindness is more literal, the gruesome moment in which Cornwall gouges out the eyes of Gloucester, who has also failed to see treachery in one son, devotion in the other.

The dramas build towards a painful realization of the truth, new “sight”—too late to change the fate of our main characters—but in time perhaps to offer them some redemption.

If self-deception and a refusal to accept the truth seems part of the universal condition, we shouldn’t be surprised to see it evident in debates about the economy, government entitlements, Iraq and Afghanistan, and—naturally—in K-12 education and how we look at our schools.

It would be arrogant to tell parents: You don’t see the truth about your son or daughter’s school.  But a question asked by DPS school board member Theresa Peña this past fall has stayed with me.  Ray Cortines, superintendent of the Los Angeles school district, was visiting Denver. Peña asked him about the challenge of finding parents more upset about schools being told to turnaround than about the ongoing low performance of their schools.

That’s my paraphrase of her concern. I sensed frustration in her voice: Where is the outrage, she seemed to imply, when the school in your community is consistently showing poor achievement and growth? Many of us wonder too: Why doesn’t that bring out local protest—rather than efforts by the district (and state and federal government) to say, “No more!”  To set up a process that compels low-performing schools to undergo major change—or to close.

Parents and community members (and teachers) might respond: what “truth” are you talking about, when you tell me our school is failing?  A truth based on state assessments? Based on data fed into a computer, a School Performance Framework (SPF) rating that knows nothing of the intangibles—the care, devotion, and tremendous effort we see in our school’s faculty day after day?  That fails to capture the marked improvement, the new climate we feel when we enter the school building these days….?

I hope it is not unfair to say, though, that this may be where emotion, loyalty, friendship—even familiarity—can blind us.  I would never say the whole truth was there in the 72-page PowerPoint presentation by the Far Northeast Community Committee, the “Proposed FNE Scenario,” given on Sept. 28, 2010, at Noel Middle School. (See just one small part of that presentation, page 5).  I would never say all the facts can be found in CSAP tests, or the shocking rate of Montbello graduates unable to enter college without needing remedial classes, or the SPF rating.  But put them all together and they tell us something important, yes?  Add a different formula from the state—with similar results, putting 44 Denver schools in its lowest category, requiring a Turnaround Plan—and at some point we need to admit that the evidence is overwhelming.  We are fooling ourselves to defend the status quo.

Rating schools – what would help parents?

In response to Theresa Peña’s question, Los Angeles superintendent Cortines sounded sympathetic, but could only stress how vital it is to provide good information to parents.   Make sure families have the needed facts.  Colorado and DPS have made good progress here.  And yet, in spite of all the data, how can we still be in denial?  When one looks at the facts on page 5 from the FNCC about those six schools, how can a number of parents and community members join teachers to fight to maintain the current structures?  Which leads some to ask: in our goal of transparency, is it now all too complex – over 20 columns on Denver’s School Performance Framework? Is it possible to have TOO MUCH INFORMATION?

So now, one more question:  why not use all the information—and then give each school a letter grade?

Prior to the 2000 legislative session, Gov. Bill Owens presented his agenda of public education reform. I took issue with his plan to grade schools. In Another View #11 (Dec. 14, 1999) I wrote of the 15-30 page reports that I had helped work on for six schools as a member of an external team of 5-7 educators, after two-day visits. Even after submitting a report that detailed, our team would have said:

“… it is only a preliminary drawing, not a full portrait.  The comments are offered with a degree of modesty that most educators consider respectful and appropriate.

“This is why I cannot fathom how outsiders who have never even been into the building would have the gall to grade a school community based on data and paperwork.  It is not respectful.  I find it exceedingly presumptuous.  I hope the state does not head down this road.

“On the other hand, I realize that the intent is not simply to give a school a B or a D.  The governor stated that the report card ‘will equip parents with the knowledge they need to make an informed decision as to which school is best for their child.’  This element I endorse wholeheartedly.”

For most of the past decade—many of them teaching in schools classified as Excellent or High on the state’s accountability reports—I would have held to that position.  I knew folks working as hard or harder, more gifted than me in reaching struggling students, in schools rated Low or Unsatisfactory.  If we were grading schools, these might have been labeled D’s and F’s—and that felt wrong.  The difference?  Not much, I suppose. Maybe it just seemed too harsh.

And yet now I hear the other side of the argument—and it seems time we consider it. Proving the Possible – A case study of Florida’s K-12 education reforms and lessons for Colorado (Oct. 2010), the recent report by Colorado Succeeds, recommends we borrow several practices from the Sunshine State.  Among them:

“improve the Colorado Growth Model by replacing fuzzy school descriptors of Performance, Improvement, Priority Improvement, and Turnaround with the letter grades A, B, C, D, and F …. An opportunity exists to more clearly and accurately label schools.  Parents can much more easily understand grades, which convey a ranking scale in a way that a collection of descriptors will not. Many parents may not be too concerned if their child is going to an ‘improvement’ school; however, they will likely not be satisfied with a school that has earned a grade of C.”

Last spring the Arizona legislature passed SB 1286, which, according to Gov. Jan Brewer, “took an important step by changing the way schools are labeled.  We eliminated the ‘fuzzy labels’ of ‘Performing and Performing Plus’ and changed them to ‘A, B, C, D, and F’.”

And a number of other states just elected governors who plan to follow Florida and Arizona in making a similar change (see below).

If this were 1999 and we graded schools exclusively on CSAP achievement scores, I would still oppose this idea.  Or even 2008 when we were limited to CSAP, ACT, and growth scores. Kudos to Rich Wenning, associate commissioner at CDE, who has had a hand in how both the state and DPS have developed ways to include growth and a wide range of factors in assessing schools—for high schools even including college readiness.  Many now see the ratings as quite comprehensive, and—in great part—fair.

Is it nicer to call a school Accredited on Probation than to say it’s an F school? Sure. Does it oversimplify? Yes.  But is it helping parents? Is it communicating the appropriate urgency? Is it possible we have been more considerate of teachers, who will find a school grade discouraging, even humiliating, than of families—who want to make the best choices for their children?  Would we be less blind, less complacent, if we said that—by my count—this year Denver parents are sending about 10,000 students to schools Accredited on Probation, and instead we told them their child’s school received a grade of F?

Would it help us to see—and to act?  I am not sure.  But it seems a debate we should welcome.

The tragedy for Oedipus and Lear—for all of us—is when we see, too late to change.  The tragedy for many of our kids will be our fault, if we fail to open our eyes in time to improve their schools.

FNE Schools – School Performance Framework (SPF)

School         SPF Overall: 07-08 / 08-09 / 09-10   Rationale (for Turnaround)
Green Valley   27% / 32% / 35%                      Consistent low performance, bottom 10% of all schools
McGlone        30% / 33% / 33%                      Consistent low performance, bottom 10% of all schools
Oakland        31% / 29% / 26%                      Declining performance last 3 years, 2nd lowest rated school in FNE
Ford           42% / 35% / 25%                      Declining performance last 3 years, lowest rated school in FNE
Noel           24% / 30% / 27%                      2nd lowest rated middle school in the city
Montbello      45% / 41% / 35%                      Lowest rated comprehensive high school in the city. Graduation rate is only 59%, and for every 100 students who enter as freshmen, only 4 go on to graduate and go to college without requiring remediation.

From a presentation by the Far Northeast Community Committee at Noel Middle School, Sept. 28, 2010.  Part of a 72-page PowerPoint presentation produced by A+ Denver and Denver Public Schools.

Grading schools – Governors and other states

Studying the education positions of gubernatorial candidates across the country earlier this fall, I heard at least half a dozen candidates—from both parties—support the idea of grading schools.  Here are statements from three recently elected governors, who are likely to bring this idea to their legislatures.

Pennsylvania: Gov.-elect Tom Corbett has called for “the General Assembly to develop a school grading system to better explain educational success and identify those schools that are in need of the most assistance. A grading system is recognizable and allows families to become more involved in the education of their children through a system that readily explains the quality of education and educational opportunities their children are receiving. Parents often can ‘trigger’ a turnaround in low-performing schools and can continue to push high-achieving schools to do better. To calculate grades – A, B, C, D, and F – the school grading system would be based on student performance on state assessments and other objective measures of student achievement, including proficiency rates, learning gains, closing achievement gaps, graduation rates, accelerated coursework and college and workforce readiness.”

New Mexico: Gov.-elect Susana Martinez’s education platform stated: “The Legislative Finance Committee currently grades the Public Education Department on several performance standards, but parents, students, and teachers have no easy format to understand the performance of individual schools throughout the state. We should adopt an easy to understand, easy to implement system of grading our schools based on the traditional school grading format. Schools will be assigned letter grades of A, B, C, D, or F and these grades will be posted to an easily accessible website for parents, students, and teachers to access, which will help to increase performance in our schools as well as increasing transparency in our school system. We can only take steps to correct failure if we first identify it, and reward success if we measure it.”

Nevada: Gov.-elect Brian Sandoval’s education position during the campaign included this goal:

Grade Schools Like We Grade Students

“Brian believes parents have a right to know how their schools are performing. Therefore, Brian will implement a simple, effective school accountability process that does two things:

■ Assigns a letter grade (A, B, C, D or F) to indicate school achievement; and

■ Evaluates student growth as well as proficiency scores for a more complete picture.

In high schools, the grade will include graduation and remediation rate progress.

Brian’s accountability model will include financial incentives for schools that earn an “A” grade and schools that move up two letter grades in any one year. Incentives will be paid directly to the school, to be allocated by a site-based committee of school personnel and parents, for instructional supplies or programs. … failing administrators will not be allowed to continue. If a school receives a failing grade, the school administration will be issued a warning. If a school receives failing grades in consecutive years, school administrators will be dismissed and replaced. Period. No more delays. No more excuses.”

And next door, in Utah, we hear “State ed leaders to consider grading schools” (Lisa Schencker, The Salt Lake Tribune, Nov. 5, 2010). “Sen. Wayne Niederhauser, R-Sandy, plans to sponsor a bill to hold schools accountable by giving them A-F grades.  (Now) State Superintendent Larry Shumway said Friday his office is also working on a rule to grade Utah schools based at least partly on academic performance. ‘It’s probably something that’s coming anyway,’ Shumway told state Board of Education members Friday, acknowledging that most educators would probably prefer not to assign letter grades to schools. ‘I would just as soon see you all involved in making the rule rather than less connected people further from the schools than you are.’”

