You are viewing the EdNews Blog archives.
These archives contain blog posts from before June 7, 2011

Author Archive

Fair and balanced churning

Thursday, October 28th, 2010

I really value EdNews’ daily churn discussion of current issues in Colorado education policy and coverage of breaking events outside of Colorado. These are typically objective, news-oriented pieces.

That is why I am a bit concerned about the Thursday (October 28) daily churn item on Lorrie Shepard. The item seems to suggest there should be major concern that Professor Shepard, dean at CU Boulder, recently co-authored a major national report on teacher evaluation issued by the Economic Policy Institute (EPI) while also serving on the Colorado State Council for Educator Effectiveness.

The report, which actually was released a few months ago, is a very thorough and scholarly discussion of problems with using student test score data to evaluate teachers. It takes a point of view and supports it quite well with research evidence. This perspective would seem to be important for the Colorado Council to keep in mind as they deliberate.

Colorado is very fortunate to have such a knowledgeable national expert on teacher effectiveness in Professor Shepard (her relevant credentials include recently serving as president of the American Educational Research Association and of the National Council on Measurement in Education, and serving on the National Research Council’s Board on Testing and Assessment).

Also among the 10 authors of this report are Robert Linn, CU Boulder professor emeritus, and Professor Helen Ladd, from Duke, who has also done outstanding, data-driven work on charter school performance. It is an impressive gathering of expertise on this topic. To even hint that this EPI panel is timed to be part of some kind of crazy left-wing effort to kill serious teacher evaluation is highly misleading. (“We didn’t see any National Education types on the Institute Board.” – I’m not sure what this even means).

EPI itself is a very respected, left-of-center public policy think tank – Richard Rothstein, former education writer for the New York Times, has been their central education policy scholar. While they are certainly pro-labor and supported by unions (teachers unions and others), their work is thoughtful and reasonable, and a fair counterpoint to work coming out of AEI, Brookings, Ed Sector, and others in DC.

And, perhaps I’ve missed it, but has there been a daily churn item noting the funding and board membership of groups like AEI, DFER, and others when they release reports?


On state funding, it’s “Groundhog Day” not “Superman”

Tuesday, September 21st, 2010

I’m excited about the upcoming “Waiting for Superman” film – it will be great to stimulate more discussion and action around education reform.

At the same time, the deafening silence around Colorado’s education funding in the next fiscal year is troubling. The September revenue update is in, and it isn’t pretty. With current projections, another $257 million will have to be cut from the current-year state budget, and a $1.1 billion gap is looming starting July 1, 2011, just nine months from now.

And this is all part of the “rosy” scenario that assumes the 3 fiscal ballot initiatives don’t pass – if any pass, all bets are off.

Another movie analogy might be “Groundhog Day” – this seems like a slightly different version of the same bleak fiscal picture the state has faced for nearly a decade. Except now the stakes are higher – the possibility of zero state funding for higher ed is becoming more realistic, as other options for cuts and/or new revenues have already been exhausted.

And, in some ways, this slow-moving train wreck scenario is even more bizarre now. The governor is a lame duck, though history suggests he will do his best, in his last proposed budget, to shelter education from bearing all of the burden of cuts. We won’t know who will control the state legislature until after November 2, and a new legislature and a new governor will need time to figure out their priorities and budget options.

Given few other budget options, it is likely that K-12 will take additional multi-hundred-million-dollar cuts, and higher ed cuts will likely wipe out at least half of the current (48th in the nation) funding, perhaps all of it. But given the current antipathy to taxes, no candidate, even the most likely winners, can talk about any solutions to this dilemma before the election. I don’t see a concerted business/nonprofit/responsible-institutions coalition coming together at the level needed to address these problems, as happened with Ref C. State higher ed strategic plans, while important to develop, seem quite out of touch with the fiscal reality that starts in just nine months.

Meanwhile, I am reminded that we are not a stagnant state – our population, including K-12 and higher ed students, continues to grow, even as our funding does not. An issue brief from the Colorado Fiscal Policy Institute shows that, even before we make these draconian cuts, the total state General Fund (GF) budget was about the same in 2010 as it was in 2001, but the state is now serving 700,000 new residents, including 70,000 more K-12 students and 35,000 more students in higher education.

As the adage goes, I guess we’ll plan to make up for our losses with higher volume.


DFER, teachers’ unions and elections

Wednesday, September 15th, 2010

I’ve been thinking about this post for quite some time, though yesterday’s defeat of Adrian Fenty in DC may provide a timely example, as part of Alan’s concern about “backlash.”

In a nutshell: How concerned are DFER types about the way a split on education issues affects the Democratic party’s chances in close elections, if the teachers unions don’t strongly support Democratic candidates with DFER leanings? And if they are concerned, what can or should be done?

In some ways, this issue really first popped up at the pre-DNC event at the Denver Art Museum, when Al Sharpton and an array of reform minded superintendents and mayors, largely from the east coast, started to publicly state that the teachers unions were a big part of the problem in education.  This opened up what had previously been a quiet, but growing, internal fissure in the Democratic party.

Since then, in my view, the DFER agenda has rested uneasily next to the Broader, Bolder (BB) agenda for ed reform in the Democratic party. (And EdNews’ opinion and commentary section has largely followed the DFER line, though certainly with diverse specific opinions.) And the Obama administration has largely followed the DFER approach, through R2T and other programs, while also trying not to alienate the BB and union agenda.

I view the DFER agenda as being pro-charter and pro-choice, with a focus upon measurable student achievement gains, more accountability for schools and leaders based upon this evidence, more rigorous teacher evaluation, and probably greater autonomy at the school level. Apart from public vouchers for private schools, it doesn’t differ greatly from the mainstream Republican party agenda for ed reform of the past decade.

The BB perspective – schools can’t truly improve until a range of wider societal conditions improve – would focus more government resources on combating concentrated urban poverty, poor health services for lower income families, limited pre-school opportunities, pre-natal care, etc., in addition to more resources in schools.

My sense is that most DFERs believe the BB agenda too easily excuses the schools, since some schools, especially charters, have demonstrated success even in the face of these societal challenges, and that delivering on the BB agenda may be even harder than fixing our schools. That said, most DFERs would probably also support more government programs in health, nutrition, early childhood, housing integration, etc.

What I wonder about is what I’ll call the Democratic SABE (Shared Agenda Beyond Education) – do DFERs think the Democratic party can win enough elections to implement these government programs beyond education if the teachers unions’ support is not strong?

While unionized workers in America’s private sector have declined from 25-30 percent post-WWII to less than 10 percent today, public sector unions, and especially teachers unions, remain strong political forces. They can raise money (as per the $600,000 from CEA against Props 60, 61, and 101), put lots of boots on the ground for campaigns, mobilize members who vote, etc. Can the Democrats do well electorally without strong union support?

Wherever one comes down on this divide, it creates a fascinating political dynamic in the Democratic party. It may be true that the teachers unions have “nowhere to go” in a spatial electoral sense – they seem unlikely to become more Republican in their leanings. But they can decide not to turn out, not to build campaigns for candidates, to promote primary challenges from the left, etc.

Republicans should like this, and could hope to gain politically from these fissures.

Perhaps the SABE can still pull DFERs and BBs together in most elections. Perhaps the unions can change internally and adopt more of a DFER-like agenda.

Where do people think this is heading?


You did learn all you needed to know in kindergarten

Friday, September 3rd, 2010

Some intriguing study results came out during the R2T decisions, and I’ve been meaning to circle back to them, because they are both interesting and potentially relevant to teacher evaluation discussions. (Links: Study PowerPoints and NY Times summary.)

Raj Chetty of Harvard, and an impressive array of colleagues, linked Tennessee Project STAR students from the 1980s and ’90s, some of whom are now 30 years old, with current tax data, to see what their economic lives look like.

Those students who showed higher test score results in early grades, because they had more experienced kindergarten teachers and smaller class sizes (and “better” peers in some cases), are now making more money than comparable peers without those influences. And, they are also more likely to have gone to college and to have other characteristics associated with more stable economic and family lives.

For economists this is really a big deal, because to them actual earning ability is a far more important and meaningful measure than test scores or graduation rates.

But it is also really interesting because the prior research had shown that the impact of smaller class sizes and better teachers in kindergarten and first grade (the only grades at which class size was shown to matter) faded to some extent by middle and high school.  As with programs like Head Start, there was some concern about whether, in a cost/benefit sense, it was good policy to fund these early school interventions if any test score gains largely “wore off” later.

If you believe these studies, it suggests that even if the test score outcomes do fade away, the other skills students get in those better learning environments have a lasting effect that leads to better educated and better compensated adults.  Chetty suggests that some of this is probably related to social and emotional skills learned, which don’t necessarily show up on tests.

I also find this compelling because it very much parallels the longitudinal studies about very high quality pre-school programs – where students end up with much more positive life outcomes 20 years later.

As always, this is just one study in one state, but it’s a powerful one, with a long time series, strongly significant results, and a randomized initial treatment.


Teacher evaluation is a sampling process

Thursday, September 2nd, 2010

I appreciate the recent blog discussion on student value-added methods for teacher evaluation, and I hope it can perhaps help bridge some of the “insider/outsider” rift on this topic.

My point in noting problems with value-added test score data is not to derail those efforts but to improve them. Using these data in some manner is certainly better than having no information at all about how teachers influence student achievement. But we have to be careful, because the data have many flaws – they are necessarily only a sample of the “true” quality of a teacher, which is very hard to know without observing a teacher in class for 180 days per year, six hours per day. That, of course, is impossible.

My concern (and a problem with the LA Times publication of such data) is that some people now think we have ironclad, precise data on teacher influence on student achievement, which we can now just plug in to evaluate the teacher.  The recent EPI report, and nearly all other recent research, point out very real problems with using these data, not just overly wonky anxiety.

When you sample, you always have implicit or explicit “confidence intervals” around the estimate.  A principal observing teacher quality via classroom activities is sampling.  If that principal only watches a particular teacher one time during the year, for 10 minutes, that is a very imprecise sample.   That teacher could be terrible most of the time, but the principal happened to catch her on part of a very good day, and (wrongly) writes down that she is an excellent teacher.  Obviously, the more observations a principal does, the more likely that the sample is an accurate reflection of the “true” teacher quality (Mike Miles in Harrison 2 has addressed this well, with 8-16 observations required per year).

(We know that flipping a fair coin yields heads 50 percent of the time, in a large sample. But flip it twice, and 25 percent of the time you get two tails, 25 percent two heads, and 50 percent a mix. So two random observations of a “good” teacher (“heads”) leave a 25 percent chance of concluding that she is not a good teacher (two “tails”).)
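
The coin-flip arithmetic above is easy to check with a quick simulation. This is just an illustrative sketch: it assumes, purely for the sake of the example, that a "good" teacher has a 50/50 chance of looking good on any randomly chosen day.

```python
import random

random.seed(1)  # reproducible run

def looks_good_today(p_good_day=0.5):
    """One random observation: does the 'good' teacher happen to look good today?"""
    return random.random() < p_good_day

trials = 100_000
misjudged = 0
for _ in range(trials):
    first, second = looks_good_today(), looks_good_today()
    if not first and not second:  # both visits happened to catch "bad" days
        misjudged += 1

# Roughly a quarter of principals who observe only twice would
# wrongly conclude this good teacher is not good.
print(misjudged / trials)  # close to 0.25
```

The simulated fraction lands very close to the 25 percent figure in the post, which is just 0.5 x 0.5 for two independent observations.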

Note that this also requires “random” observation – if the teacher knows the principal is coming to watch, she will obviously improve her performance at that time (a corollary to teaching to the test).

It is equally important to note that using test score data is also a sampling of “real student learning.” If the tests are valid and reliable and measure the learning we want students to have, they are better than tests that lack those characteristics (a description that fits most of our current tests). But one test a year (or two, or a few) is only a sample of student learning. If many of a teacher’s students are ill for March CSAPs, the sampling of learning won’t be terribly accurate. The better the tests, and the more tests we give, the more likely the sample is accurate – in statistical terms, the smaller the confidence interval around the estimate, meaning we could say with more precision that teacher X is very good, and be correct about that.
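
The "more tests means a smaller confidence interval" point is just the standard-error formula at work. Here is a minimal sketch; the standard deviation of 20 scale-score points is a made-up number for illustration, not from any actual CSAP data.

```python
import math

def ci_halfwidth(sd, n, z=1.96):
    """Half-width of an approximate 95% confidence interval
    for a mean estimated from n independent samples."""
    return z * sd / math.sqrt(n)

# Hypothetical: student gain scores with a standard deviation of 20 points.
sd = 20
for n_tests in (1, 2, 4, 9):
    print(n_tests, round(ci_halfwidth(sd, n_tests), 1))

# The interval shrinks with the square root of the number of tests:
# quadrupling the tests (1 -> 4) only halves the uncertainty.
```

The square-root relationship is the sobering part: precision improves with more testing, but slowly.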

Once-a-year CSAPs provide very imprecise estimates for individual teachers, which, along with other problems noted in other blog posts, should caution about high stakes use of these data.

Even with these caveats, the data can be used in some basic ways, probably helping to sort teachers into three categories.   I agree with Kevin Welner that we can think about using test score (and other) data to sort teachers into a small category of “excellent teachers” who seem to drive high student achievement over several years and also rank high on principal and/or peer evaluations.  At the other end, consistently low student achievement scores and low observational ratings should identify truly poor teachers.   The vast number of teachers will be in a middle ground, and trying to sort them more precisely will be going well beyond the ability of our sampling techniques.


A teachable moment?

Tuesday, August 31st, 2010

Complaints about some evaluators not fully understanding our true value … a sense that points were taken away unfairly, despite reviewer training in the appropriate rubrics … evaluators not understanding, and not crediting us for, the things we do well … a sense that someone in a higher position should reverse the injustice. It all feels unfair.

Yes, but, most of these Colorado complaints about the round two R2T scoring could also be applied to premature teacher evaluation based upon the inappropriate use of faulty test score data.

Isn’t there some irony in the fact that some of the folks complaining about unfair R2T scoring of Colorado’s application are also among the ones who turned a deaf ear to, or brushed aside, some of the legitimate concerns about using current test scores to evaluate teachers?

My colleague Robert Reichardt made a similar point in April, after Colorado lost round 1 of R2T.  Now we feel twice the pain.

Let me be clear. I support better teacher evaluation, and we need to move in that direction, using multiple measures: better and more frequent principal and peer evaluations, along with some appropriate use of student test scores.

There are certainly some individuals and groups who have looked for any reason not to advance real teacher evaluation, because they want to preserve the status quo (which is basically no useful teacher evaluation), and I don’t want to support that position.  At the same time, there are lots of others who see legitimate problems with the current technology that ties student test results to specific teacher evaluations, and want to proceed carefully, in order to do this right.  I was surprised how little attention policy makers gave to that latter group this spring.

As SB 191 moves into the implementation stage, but now without federal funding to support it, we should keep these concerns in mind.

There are at least four reasons why we can’t now validly and reliably link teacher evaluations to student test scores.  When we address some of these elements, we will be able to more fairly and more effectively evaluate teachers.

First, we don’t have good value-added tests. An annual March CSAP test is not good enough (you need valid beginning-of-year and end-of-year tests given to the same students whose gain you want to assess), and more than half of Colorado grades/subjects don’t even have the annual CSAP available anyway.

Second, students are probably not randomly assigned to teachers, as this evaluation process requires. If teacher Jane is known by her principal to be good at teaching students with serious family problems, and thus gets assigned a group of difficult students and moves their knowledge forward by 0.75 grade levels, while teacher Joan is known to not be good with difficult students, gets all of the easier ones, and advances their knowledge by 1.0 grade level, who has done a better job? (It isn’t clear that we can, or want to, “fix” this, but it is a reality that skews the data.)

Third, one year of data is not a large enough sample to use for a teacher – you probably need three. Classes of 26 students, with the 50 percent mobility levels that are not uncommon in urban areas, leave 13 students with a particular teacher all year – that is not enough data to make a reliable judgment about teacher quality.
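
The sample-size concern in this third point can be made concrete with a back-of-the-envelope calculation. The spread of 0.5 grade levels in individual student gains is a hypothetical number chosen only for illustration:

```python
import math

def se_of_mean(sd, n):
    """Standard error of a class-average gain estimated from n students."""
    return sd / math.sqrt(n)

# Hypothetical: individual student gains vary with sd = 0.5 grade levels.
sd = 0.5
for n in (13, 26, 78):  # half a class, a full class, roughly three years of classes
    print(n, round(se_of_mean(sd, n), 3))

# With 13 students the standard error is about 0.14 grade levels --
# large relative to the 0.25 grade-level gap between the hypothetical
# Jane (0.75) and Joan (1.0) in the second point above.
```

Under these made-up but plausible numbers, one mobile urban classroom simply cannot distinguish a 0.75-grade-level teacher from a 1.0-grade-level teacher with any confidence; three years of data get much closer.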

Fourth, lots of good teaching is joint and collaborative, especially at the secondary level.   The social science teacher may be as responsible for improved student writing as is the English teacher.  We don’t want teaching to only be a solitary practice with no sharing and collaboration.

Added to these concerns, making student test scores very high-stakes will greatly increase the likelihood of outright cheating, as well as more subtle “teaching to the test” (and not the good kind, where people teach the subjects they are supposed to teach, but the overly narrowing kind where you only ask the types of questions known to be on the test).

I won’t try to make this post double-ironic, but part of the beauty of Denver’s own ProComp is that it was put together by and with teachers, advanced by a teacher vote, and built on multiple measures, recognizing that we can’t really nail down a single dimension of teaching to assess and reward. It is disappointing that we couldn’t summon that kind of process at the state level.

To see a different way of handling this issue, Chad Aldeman of The Quick and the Ed blog (a strongly pro-reform voice) recently contrasted LA’s handling of teacher data with Tennessee’s approach:

“In contrast, Tennessee has been using a value-added model since the late 1980’s, and every year since the mid-1990’s every single eligible teacher has received a report on their results. When these results were first introduced, teachers were explicitly told their results would never be published in newspapers and that the data may be used in evaluations. In reality, they had never really been used in evaluations until the state passed a law last January requiring the data to make up 35 percent of a teacher’s evaluation. This bill, and 100% teacher support for the state’s Race to the Top application that included it, was a key reason the state won a $500 million grant in the first round.”


At least Denver earns a high score

Wednesday, August 25th, 2010

While we learned that Colorado’s Race to the Top application was ranked 17th out of the 19 finalists by objective reviewers, Denver was ranked 4th out of 30 cities examined in a Fordham Institute report issued yesterday on urban district reform efforts and capacities.

(Fordham probably didn’t realize that the Race to the Top results announcement would dominate this week’s ed news world, but hopefully this urban report will still get the attention it deserves).

New Orleans, with its post-Katrina reform efforts, is ranked #1, followed by Washington, DC and New York City, then Denver.

Urban districts are ranked on their human capital (Denver is 5th here), financial capital (7th), charter environment (8th), quality control (14th), district environment (10th) and municipal environment (4th), for an aggregate Denver ranking of 4th.

As with all such ranking exercises, one can argue with the ratings themselves, the categories or some of the more subjective judgments.   And, a change in superintendent, school board, or mayor can alter these perspectives pretty quickly.

But, this national report does at least support the widely-shared local notion that Denver’s reform efforts are near the cutting edge of national reform, a notion that was shaken by the R2T ratings for Colorado.


Wuz we robbed?

Tuesday, August 24th, 2010

Peer-reviewed, discretionary federal grants, like Race to the Top, are indeed, um, discretionary. It will be interesting to see more information about why Colorado “lost” to states like Hawaii (which furloughed students and teachers on Fridays for the past year), Ohio, Maryland, and some others that were not perceived as national reformers (other winners, like Florida, were heavy favorites in any event).

If you think these decisions are mainly political, Colorado should have been a winner, with Senator Bennet in an important political race, a Democratic incumbent governor, and with DPS well-regarded by the Gates Foundation, which has lots of ties with US ED staff.

If you are less cynical, and view these decisions as mostly merit-based, the combination of CAP4K, Colorado’s growth model, local teacher compensation reforms like Procomp, all sealed with the “tough” new teacher evaluation bill, again Colorado should have been a winner.

And Colorado did try hard to play this game well. The approach in round 1 included a public participation process that was wider in scope than in any other state, and a clear alliance with the teachers unions, to demonstrate implementation “buy-in.” When the teacher evaluation process was scored as weak, for round 2 Colorado produced important new legislation, in a tough political fight, meant to address that weakness. Since that fight alienated union support, it will be ironic indeed if lack of union buy-in is cited as a fatal flaw in the round 2 negative decision.

In any case, this leaves Colorado without the federal financial support that would have been used to jump-start the implementation of several of these reforms. Given the state and district budget cutbacks already baked into this current fiscal year, and the larger ones looming in fiscal 2011-12, it will be a real challenge to finance these reform efforts.

Who has got some “gifts, grants, and private donations”?


Summer doldrums are no excuse

Friday, July 16th, 2010

Summer is a wonderful season, and a great chance to relax, on many dimensions. But as I watch my somewhat bored children squabble daily, I wonder about the wisdom of the long summer break, for parents as well as for kids.

And I remember the very solid research on the summer achievement gap, by Karl Alexander and his colleagues. This shows that as much as two-thirds of the K-12 achievement gap can be related to larger, accumulated summer learning losses for low-income students.

It is a little hard to get overly worked up about anything in 90-degree summer heat, but I always think that this is one of our real scandals in education policy.

We know, for sure – combining common sense, good brain theory and solid empirical evidence – that a 10-week summer break is bad for students in terms of their learning trends, and it is particularly bad for low-income students, who don’t get exposed to the summer reading programs, museum visits and education-oriented camps and vacations that many middle-income families enjoy.

Politically, it is also pretty clear why we don’t reduce or eliminate the long summer break for students: many parents don’t like it when it has been tried in some districts (though surely some parents would like to reduce the hassle of figuring out what to do with kids for 10 weeks of no school), the long summer break is traditional, recreational and barbecue industries lobby to preserve it (they really do, just as they have a stake in daylight saving time issues), we don’t want to pay more for more teaching time, many school buildings are not air-conditioned and cooling them would cost more money, etc.

But this is a pretty stark case where we know, with absolute certainty, that our current policies are bad for all students and are especially bad for low-income students. Yet we allow these other political preferences to outweigh the possibility of actually utilizing the known silver bullet of summer learning time. There is a whole organization devoted to this issue.

True, a smattering of good summer intervention programs are targeted at low-income kids, such as this one described recently in EdNews. These efforts are worthy and important but, like voluntary charity generally, there aren’t nearly enough resources to come near solving the whole problem.

A promising recent study suggests that just giving low-income students books might be a cost-effective way to reduce some summer reading loss.

Still, it is frustrating that we don’t seem to want to summon the energy to take this on, full-bore.


College is still a great deal

Tuesday, July 6th, 2010

The lifetime financial rate of return on a college degree has long been very high in the U.S. – much higher than the financial rate of return on other investments available (stock market, real estate, bonds, etc.).

And, of course, many of the best things about a college education and experience can’t be measured in dollars.

There has been some recent concern that rising college costs, rising debt levels for students and a changing job market are reducing the financial returns from college.  But, in the current recession, college graduates have an unemployment rate about half that of the non-college educated workforce.

And, a new study reported in The Wall Street Journal shows that the financial returns to a college investment remain high – about 10 percent on average.
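
To get a purely illustrative feel for what a sustained 10 percent annual return means, consider a simple compounding comparison. The dollar figures below are invented for the example, not from the study, and a degree's return is really an internal rate of return on incremental earnings over a career, not a lump sum left in an account:

```python
def future_value(principal, rate, years):
    """Compound a one-time amount at a fixed annual rate for a number of years."""
    return principal * (1 + rate) ** years

# Hypothetical: $100,000 total cost of a degree, viewed over a 40-year career.
cost = 100_000
print(round(future_value(cost, 0.10, 40)))  # at 10%: roughly $4.5 million
print(round(future_value(cost, 0.07, 40)))  # at a 7% stock-like return: roughly $1.5 million
```

Even against an optimistic long-run stock return, three extra percentage points compound into a roughly threefold difference over a working life, which is why economists keep calling the degree a good investment.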

The study, using some great, self-reported compensation data from graduates, also shows costs and earnings by specific institutions. The top rates of return tend to come from top engineering schools and elite universities, but in-state tuition at good state schools remains an excellent investment.

As state higher education funding in Colorado continues to plummet, and as tuition increases more and more, these figures are worth keeping in mind.

