Accountability, denial, and the unit of improvement

You can count on presidential campaigns to make clear the conventional “wisdom” on education. Any reader can recount the main ingredients: [a] public schools are failing or in crisis; [b] if we don’t fix this, we can’t be Top Nation, because all those unskilled students will inhibit our economy; [c] teachers and students are not doing their best, so we need to set high standards to tell them what is important; [d] accept no excuses; [e] deploy an elaborate system of accountability; and [f] let market competition produce innovation and Quality, and don’t worry too much if the accountability system gives you bad news about the “innovations.” You can’t make an omelette without breaking eggs, and in any case, things will get better after just a little while longer.

In recent weeks, all the candidates have been asked about their educational policies, and the one term that’s cropped up in all mouths (at least, all the mouths returning relevant answers), from the most conservative to the most “progressive,” is “accountability.” This usually carries two components: measurement and responsibility. The candidates reflect the mainstream view (and in the case of education the mainstream stretches from bank to bank) that numbers matter, that they tell clear stories, and that we are measuring the right things. It is also mainstream (part of the triumph of the technocratic mindset) to treat the components of the system as isolable elements whose behaviors can be interpreted with no reference to the rest of the system.

Inconveniently, there is strong evidence that this is a poor model. If you make other assumptions, and actually incorporate more of the complexity of the real world, for example in teacher evaluation, you get a very different picture:

the fundamental message from the research is that the percentage of … year-to-year, class-to-class, and school-to-school effective and ineffective teachers appears to be much smaller than is thought to be the case. When the class is the unit of analysis, and student growth is the measure we use to judge teacher effectiveness, what we find is a great deal of adequacy, competency, and adeptness by teachers in response to the complexity of the classroom. And, we see much less of the extraordinarily great and horribly bad teachers of political and media myth. (David C. Berliner here)

Here let me introduce a definition of the “Semmelweis fallacy”: “the reflex-like tendency to reject new evidence or new knowledge because it contradicts established norms, beliefs or paradigms.”

The blog “Chronotope” discussed this fallacy last June in the context of education research, and how easily such research is ignored owing to prior intellectual commitments (the post is here). Carl Hendrick, the blogger, mentions several paradoxes that can be attributed to this mindset. There is, for example, what he calls “Whole school cognitive dissonance”:

Whole School Cognitive Dissonance: What is the value in a school preaching Growth Mindsets in an assembly yet basing their entire school enterprise on the reductive and fixed mode of target grades and narrow assessment measures based on poor data? Why are kids explicitly told that their brain is malleable but implicitly told their target grades are not?

Now, much of the thrust of the blog post is about the resistance of teachers and school people to research results. I think, however, that much of the resistance we are seeing is healthy. So many mandates have been imposed, lifted, revised, and contradicted by others over the past few years, with such varying justifications and sanctions, that it’s hard not to see schools as pawns in some other game; under those circumstances, endless patient compliance cannot be expected from the human subjects.

And every mandate has involved someone (students and teachers, usually) accounting to someone else (not a teacher) for something (achievement measures). Yet no matter how much accounting is going on, no one is satisfied with the answers. Perhaps we’re looking at the wrong things. Or holding the wrong people to account.

But as we enter another school year, I’d like to hear: Is the data collected at your school improving the students’ STEM learning experience? That is, if John or Judy takes a test this year, are the results going to help them next year? Is it the “system” that is being measured, by sampling the anonymous flow of student achievers every few months, or is STEM education — particular people’s STEM education — getting better? By this I don’t mean, Are they pushed to get better test scores, but Do the test results mean that John or Judy themselves, next year, encounter more engaging classes, more authentic experiences of science (etc.) practices, immersion in more meaningful science (etc.) content?
