Data, data, data. Everybody wants more and better, it seems. A lot of educational research is aimed at using all this data to change… what? At this point, it’s hard to see how most of the data has resulted in research that enables improved practice.
In an August post (here), I asked:
Is the data being collected where you are improving the students’ STEM learning experience? That is, if John or Judy takes a test this year, are the results going to help them next year? Is it the “system” that is being measured, by sampling the anonymous flow of student achievers every few months, or is STEM education — particular people’s STEM education — getting better?
There were some very interesting comments, but Louise Wilson gave the most direct answer to the question (I excerpt):
Generally, no. The tests are taken, and feedback is within a day in our school, although teachers are not generally trained on how to find the results. But students are assigned to classes without regard to their current skill level (so students in grade level equivalents of 2nd to 11th grade can be found in the same 10th grade math course) and everyone is expected to make the best of it. Nobody gets a better experience from the tests, because sorting students according to current skill level is apparently bad.
Such comments, echoed by many voices in many forums, always raise the question: who are the data for? And this can’t be answered simply by naming some constituency of possible users or beneficiaries. After the past couple of decades, in which we have developed data collection, instrument design, and other assessment technologies to a high level of sophistication, we have to look at who is actually using the data, and to what ends.
Before I go on, it’s worth noting that most of the headlines, commission reports, regulations, and expenditures relate to data being collected on classrooms by people outside classrooms. These data then exist as a resource accessible, to varying degrees, mostly to agents (people) outside classrooms, who make evaluations, allocations, or other decisions about what goes on inside them. Teachers have some access, sometimes, but the trend is to create systems which operate on data rather than on judgment. (The algorithms, rubrics, and other mechanisms are a crude facsimile of an “expert system,” that is, an intelligent software system designed to make inferences and decisions about, say, medical diagnosis or the management of inventories, using knowledge “captured” from actual experts in the field.)
It can be better than this, of course, if the school or district culture is one of teacher learning and engagement for shared pedagogical purposes, rather than external demands. In a research project on inquiry-based science teaching that Joni Falk and I conducted some years ago (see a paper here), we watched as the first Massachusetts high-stakes tests were implemented in two of our study districts, which happened to be adjacent towns. In one, a long-term movement towards an inquiry approach was largely subverted by anxiety about test scores. Next door, the district interpreted the testing reform in the light of their long-standing commitment to inquiry, and exerted a lot of ingenuity in seeking ways to use the new regime to reinforce their system, a response made possible by the system-wide vision of inquiry teaching and learning.
Larry Cuban has posted an interesting reflection on “Data driven teaching practices,” which I recommend. Cuban characterizes the hope behind data-driven practice in positive terms:
data-driven instruction: a way of making teaching less subjective, more objective, less experience-based, more scientific. Ultimately, a reform that will make teaching systematic and effective. Standardized test scores, dropout figures, percentages of non-native speakers proficient in English are collected, disaggregated by ethnicity and school grade, and analyzed. Then, with access to data warehouses, staff can obtain electronic packets of student performance data that can be used to make instructional decisions to increase academic performance. Data-driven instruction, advocates say, is scientific and consistent with how successful businesses have used data for decades in making decisions that increased their productivity.
He then asks, “What’s the evidence that this is making a difference?” and reviews a few studies (there do not seem to be very many) that look at test scores not as a value in themselves, but as tools for instructional improvement. Despite the espoused aims of the assessment systems, teachers (and others) experience them as an external process with which they must comply; they don’t tend to see the assessments as having actual instructional value. As the volume and variety of data collection has increased, even a school culture with a clear and committed pedagogical vision may have trouble remembering what it values, and “filtering” the “reforms” to harmonize with those values. Cuban concludes:
Thus far, then, not an enviable research record on data-driven (or informed) decision-making being linked to classroom practices and student outcomes.
Numbers may be facts. Numbers may be objective. Numbers may smell scientific. But numbers have to be interpreted by those who do the daily work of classroom teaching. Data-driven instruction may be a worthwhile reform, but as a driver of evidence-based educational practice linked to student achievement, rhetoric notwithstanding, it is not there yet.
I still want to hear (and I think many people would be glad to hear) from people who are using data to improve actual practice, rather than to drive up test scores (or drive them down, as seems to be the case with some recent assessments). But I will hope to encourage stories about other ways people are using research — not necessarily testing results — to improve practice. Stay tuned — but share stories, too. (There are about ten thousand of you out there, so there must be 10 or 20 good stories we should hear!)