Another year, another PD, and I just had to share…
I love my students. I love the new classes I teach (I transitioned into teaching the International Baccalaureate Diploma Programme this year). I love my colleagues. I even love my new admin team (all but one of our building’s administrators resigned at the end of last year) and think they’re doing great things for the students and teachers.
I hate the state tests. I always have, and I always will. I led a portion of a district-wide professional development before the start of school, and even referred to them in my out-loud voice as “the wretched state tests” in front of an entire room of English teachers. Yes, I did it on purpose.
Now, thanks to some information presented during a professional development session today, I have another reason to question the true purpose of their existence and to be frustrated by their impact on the careers of teachers. Theoretically (meaning that they’re still sorting out how it all works), new legislation requires that my evaluation be based in part on students’ state test results. That’s problematic for me, since 11th and 12th grade are not included in the state tests: students only test through 10th grade. In 11th grade, they take the ACT, but that is a national test used mostly for college admissions. So far, I haven’t heard anything about how those results would affect my evaluations.
Here’s the real problem I learned about in PD today: the CSAP powers-that-be have not released any items from the test since 2004. (Released items show real questions used on the test, and they allow teachers to help students “unpack” the questions and figure out how to answer them.) Without any indicator of what the questions look like (other than teachers peeking over students’ shoulders during testing to figure out what the questions require), teachers are, and will continue to be, held accountable for their students’ ability to respond appropriately to those questions.
Here’s a digression by way of example:
When I start each year, I present my students with a writing survey in order to assess their attitudes toward writing, their experiences, and their skills, and to trick them into giving me a sample of their writing. One of the questions on the survey is: “What do you need to become a stronger writer?” The answers always vary, but a large percentage of students always say they would like to see models of the papers I will require them to write. Let’s face it, they aren’t the only ones who need to see what they’re supposed to produce. We all (myself included) learn in part by imitating the forms of things we’ve seen, whether in an academic setting or not.
Given that idea about how people learn, the same logic applies to teachers and the state tests. If we as a group know what the test questions look like, we have a better idea of how to transfer some of that knowledge into our students’ learning, right? Otherwise, how can you hold us accountable for something about which we have no knowledge? I know this brings up the whole issue of “teaching to the test,” which is a separate problem in American public education. I’m frustrated that the conundrum exists in the first place, but my specific frustration in this case is that we’re being held accountable for kids taking tests about which we have no concrete knowledge.
Therein lies my “archery in the dark” analogy. Trying to guide students toward proficiency on the state tests is becoming more and more like trying, at the darkest time of night, to hit a target on an archery range. The tension of the string matters so much, as does the strength of the arms holding the string and bow steady, and most important of all, the aiming of the arrow. Outside factors matter too, like the strength and direction of the wind, the distance to the target, etc. You see how it works. Or, since it’s so dark, you don’t.