Testing Education
“Anything you can’t measure you can’t manage.” So said Wilbur Ross in his confirmation hearings to become commerce secretary. Data-driven decisions are usually better than those based on hunches, intuitions, and common-sense heuristics (hello, Kahneman and Tversky), but measurements, even good and reliable ones, will not always tell you how to manage. And sometimes there must be management without good measurement. These and related thoughts were prominently in my mind while reading about another confirmation hearing, the one for education secretary.
We measure reading and math proficiency, but choices often have to be made about the significance of the data. For example, imagine two third grade classes, each with twenty students. At the beginning of the year, each class has ten students who read at the appropriate second grade level and ten who read only at the first grade level. At the end of the school year, assume thirteen students in one class are reading at the appropriate third grade level, but the remaining seven are still first grade readers. In the other class, ten students are at the third grade mark, and the other ten have made it up to the second grade level. Which class has been more effective? One class has 65% of its little charges reading at the appropriate level while the other has only 50% doing so. But in the second class, all the kids improved, while 35% in the first class did not. If the two classes were taught by different methods, which method would you most encourage? However you answer that question, your answer is driven by values that don’t depend on the measurements.
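The trade-off can be made concrete with a quick tally. This is only an illustrative sketch: the class rosters, the `start` distribution, and both helper functions are hypothetical constructions built from the numbers in the example above.

```python
# Hypothetical end-of-year reading levels for the two classes described
# above (3 = reading at the appropriate third grade level).
class_a = [3] * 13 + [1] * 7    # 13 reach grade level, 7 stay at first grade
class_b = [3] * 10 + [2] * 10   # 10 reach grade level, 10 rise to second grade

# Both classes began the year identically: ten second-grade readers,
# ten first-grade readers, paired by position with the lists above.
start = [2] * 10 + [1] * 10

def at_grade_level(end, target=3):
    """Share of students reading at or above the target grade level."""
    return sum(s >= target for s in end) / len(end)

def improved(end, begin):
    """Share of students who finished at a higher level than they began."""
    return sum(e > b for e, b in zip(end, begin)) / len(end)

print(at_grade_level(class_a))   # 0.65 -> 65% at grade level
print(at_grade_level(class_b))   # 0.5  -> 50% at grade level
print(improved(class_a, start))  # 0.65 -> 35% did not improve
print(improved(class_b, start))  # 1.0  -> every student improved
```

The two metrics rank the classes in opposite orders, which is the essay's point: the numbers are clean, but choosing which metric matters is a value judgment, not a measurement.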
As my own education taught me, not all that is important can be meaningfully measured. I thought that I had received a good education before going to college. My state, and certainly my town, were, as many places are, self-congratulatory about how good the educational system was. I did learn to read and do math. I did those things quite well, and my scores on the national tests placed me in the tiniest top fraction in the country. For me, the standardized testing of my day was a blessing, and I got into one of the nation’s top colleges.
Some publication gave the median SAT scores for my college class. Mine were well above that median, and I gloated to myself about how well I was going to do. But then school began.
My classes had Choate and Exeter and New Trier and Stuyvesant boys, and while their standardized scores were not likely to be as good as mine, I realized quickly that in some fundamental ways, they were better educated than I was. It was not that they knew more things than I did; it was that they knew how to think better than I did. We could all read The Sound and the Fury, but they had a better understanding of that book’s Easter symbolism. I could learn the facts of the World War I peace process as well as anyone, but they could think better than I about the consequences of the treaties. I could memorize facts with the best of them, but I had never learned to think about the meaning of those facts.
I quickly learned that while I could read, and while I knew grammar and sentence and paragraph construction, I could not really write. One of my early papers, I think, actually had written on it, “You can’t write.” This was even though I had gotten an A on everything I wrote in high school. In college, I thought back on how little meaningful feedback I had ever gotten on those papers. Few outside of the math teachers in my high school had actually pushed me to be better.
I started to learn that often my writing problem was a thinking problem. If my writing was unclear, my thinking probably was too. I learned that the writing and thinking processes went together: often the best way to think about an issue was to write about it. I had to clarify my thoughts, make my thinking more logical, and consider whether and how my sources supported my positions. In other words, I learned I could not write well until I could think well.
Perhaps what was hardest for me to learn was that I needed not only to consider what I did know, but to ponder what I did not know. Could those gaps affect my thinking, and if so, was there a way to get that knowledge? I learned I needed to be skeptical about what I thought I knew so that I would challenge myself to fill the gaps in my understanding.
When years later I taught law students at a school that did not attract top college graduates, many of my students would tell me that they had a writing problem. They knew the material, they would say, but could not explain it on paper. However, when I probed their knowledge, it was almost always deficient. They couldn’t get the material on paper because they didn’t know it well, or didn’t know how to think about what they did know.
On the standardized tests, only a few students in a hundred or a thousand entered college with better scores than mine, but those tests did not test everything that was important, and my education up to that point had been crucially deficient. But even if that education had been better, would there have been good measurements for what I lacked? I accept that there were reasonably good and objective measurements of my geometry prowess, my reading comprehension skills, and my factual knowledge of history and literature. But I am agnostic about how well thinking can be measured. If it can’t be, what then? Thinking, which means something in addition to reading comprehension and solving equations, is crucial to a good education. Wilbur Ross said that if it can’t be measured, it can’t be managed. But often, as in much of education, we need good management even when there might not be good measurements. Can that be done?