I was watching a sitcom the other day. The mother asked her son what he had learned at school that day. "How to do well on the weekly standardized test so the school gets financial aid," replied the kid. Transparent satire, but that doesn't mean there isn't a grain of truth.
At one local school, students last year spent around an hour of class time each semester, in English language arts, mathematics and social studies, taking a computerized test called Measure of Academic Progress. MAP's implicit purpose is to target skills to be tested on the Palmetto Assessment of State Standards (PASS), which largely determines school report card ratings and funding. However, the tests weren't aligned with required state standards, so students might have faced any number of items on which they had never received instruction.
Not counting losses from fire drills, school dances, assemblies and other interruptions, each course at the school has 180 hours of instructional time per year. We've accounted for two.
Students also were tested quarterly in all core academic subjects using a multiple-choice test created to prepare them for PASS. Each test took two or three hours per course, and the tests often weren't available to teachers until a week or two before they were given, so most of the intervening class time was spent drilling and cramming. Conservatively, we've accounted for another eight hours.
In March, the first halves of two days of instruction were devoted to PASS writing. In May, another two days were charitably donated to PASS reading and math, as well as social studies and science portions, which were randomly distributed. (No student took both, and these assignments were not correlated with the sections the students had taken the year before, so the value, beyond that of helping the test company design questions, is puzzling.) As of September, schools haven't received any scores from this test, so arguments that the test is used to drive instruction are also moot. That leaves around 166 hours of instruction to go, after at least 14 lost to testing.
As PASS neared, administrators instructed teachers to focus instruction on students who had a chance of moving to the next level of the test (from "below basic" to "basic," from "proficient" to "advanced"). These cash-cow bubble kids have the greatest potential for increasing a school's report card grade.
Now, before gathering an angry pitchfork-wielding mob and burning down the local district office, bear in mind that administrators are merely caving to the demands of No Child Left Behind, the federal law that requires an emphasis on standardized test scores, or - bad school! - no more federal funding. If this seems a little unfair to the kids who actually have to take the tests, it is.
Testing fervor can be traced back to the A Nation at Risk report of the 1980s, which scared a lot of people by comparing our standardized scores with those of countries such as Japan, and coming to the conclusion that our schools must be really terrible if we weren't able to compete. Problem is, that's not a fair comparison. It is based, first of all, on the unfounded assumption that standardized testing is a good predictor of success in college and the workplace. Research says it is not.
What do we really need from public schools? Conservatives may bemoan their "failings" (while busily promoting "school choice" options that can only benefit from the stock of public schools dropping), and liberals may misguidedly buy into the idea that they're "saving" underprivileged kids by focusing on stricter school accountability measures. But lower-income and minority students historically do not do well on standardized tests; they do not perform in a way that other measures of achievement suggest they should. (They are underrepresented in private schools, as well.) And whether your kids go to a public school or not, they'll be living and working beside those who do.
Schools are now rewarded for doing well on tests, not for creating moral, responsible citizens and competitors in a global economy. In a way, demanding standardized accountability encourages schools to fail, by taking away autonomy and focusing on one main skill - test-taking - over more useful and marketable skills.
If we want to "fix" schools, the first step might be to use a different verb. A doctor who tries to "fix" patients might be seen as a little megalomaniacal, the kind of person who might prescribe a full course of radiation before trying a small biopsy. In the case of our schools, what is needed is a series of biopsies, not the massive shock to the system of top-down federal regulations and standardized tests.