1 to 100,000 by Allison Dreon, Art Teacher

This week I have a team of teachers scoring assessment results from the past school year using a highly refined rubric… WAIT! I know what you are thinking… “Send me this highly refined rubric!” …but my reference to our rubric is a little bit of my highly refined sarcasm.
In an effort to provide data to monitor student progress in art, the schools I work with deliver an annual district-wide art assessment. My experiences developing and managing this assessment over the past several years have provided a fascinating glimpse into the challenges of generating valid and reliable data in an educational environment that demands accountability.
The details and content of our district art assessment will have to be deferred to another day. For our purposes here, it should suffice to know that the assessment is an authentic artmaking experience, very similar to what students would experience in their regular art class. Students are introduced to a situation and presented with an artmaking challenge, then are given time to plan for artmaking, create an artwork, and reflect on the process. The challenge, and topic for today, is figuring out how to measure student performance in a valid and reliable fashion.
When we created the assessment several years ago, we developed a rubric along with it. The rubric is a relatively simple tool that measures performance, on a 4-point scale, in three strands of our curriculum: Process, Product, and Understanding. A team of teachers worked, over the span of a couple of years, to refine it (and the assessment as a whole) before it was rolled out to the whole district. In total, many minds and many hours went into making it the very best we knew how.
Despite all of the thinking and revising that went into creating this measurement tool, the rubric we are using (this week) to score the art assessment is flawed in a number of ways. I believe these flaws are characteristic of the challenges we face developing valid and reliable data in art. Below, I will summarize the expectations for each strand briefly and address our biggest challenges with each.
The process rubric is looking for exploration of multiple possible solutions appropriate to the challenge, and evidence of planning and refinement. Evidence can be gleaned from the planning document, preliminary sketches, the artwork itself, or the reflection document where the student may, for example, reveal solutions they considered or decisions they made that are not evident elsewhere. In short, the process score is determined holistically from all available evidence.
Our biggest challenge with this part of the rubric has been defining what we mean by “multiple” ideas. The artistic/creative/design process can look very different in different situations and with different artists. Is it more or less effective to explore several dramatically different solutions to a challenge, or to identify a solution and experiment with several subtly different ways to execute it? If you believe, as I do, that both of these approaches are valuable, then should one be considered a higher order of thinking than the other? This is one example of the difficulty of defining specific learning expectations. As art educators, we can certainly teach these skills, but in an authentic artmaking situation, who is to say what the “right” approach is?
The product portion of the rubric looks at whether the student meets the criteria of the challenge, applies appropriate media processes and techniques, shows good craftsmanship, and applies appropriate techniques for representing subject matter. While craftsmanship and techniques must be evident in the artwork itself, evidence that the work meets the established criteria sometimes appears in the planning or reflection sheets, so these must be considered here as well.
Wait, what did that say up there? Yep. You read that correctly. It says, “good craftsmanship.” What were we thinking?! On several occasions, I have brought groups of art teachers together to improve inter-rater reliability in the use of these rubrics. Inevitably, craftsmanship is the most difficult hurdle to overcome. “Good” is a judgement, and each teacher (trust me on this) has a very different idea of what an appropriate level of craftsmanship should be. Our inability to come to an agreement on an acceptable level of craftsmanship has been a huge barrier in using this rubric. Good thing most art teachers are smarter than we are and don’t include craftsmanship in their evaluations. (I hope you’ve been paying attention. That was more sarcasm.) If you walk into an art classroom almost anywhere in the country, a huge amount of attention is paid to craftsmanship, both in instruction and assessment practices. Yet it is incredibly difficult to evaluate in a consistent manner.
The understanding portion of the rubric is looking for evidence that the student understands the nature of the authentic challenge, as well as the media and representation techniques they are using. Evidence of understanding may be visual (in the artwork or sketches), or verbal (in the planning or reflection documents).
Here we should reference the backward design model of planning for instruction (Wiggins and McTighe): What are the desired learning outcomes? What evidence are you willing to accept that the student has learned it?… There’s the rub! Our rubric is looking for evidence of understanding, but the types of evidence can vary so much. It’s easy when a student responds to a reflection question articulately and describes all of their thinking, but this is rarely the case. We have been committed from the start that we would not be assessing the student’s ability to write, but rather their learning in art. Should proper and effective use of a painting technique demonstrate that the student understands that technique, even if they can’t (or just don’t) explain it in words?
There is an abundance of challenges with this rubric; these are just the three most problematic. Yet they represent the challenges every art teacher faces when attempting to gather valid data. We are left wondering, then: can a discipline that places a premium on creativity and divergent thinking ALSO define specific learning outcomes? I have heard even my most revered art education experts declare that we should “just say no” to the demand for data. I appreciate where they are coming from, but I believe we can and we must find a way.
The visual arts have a long history of declaring themselves “subjective,” but the current educational environment demands data and accountability for student learning. We operate in a standards-based world, and if we want to be a valued part of education, we need to be able to provide valid and reliable data that shows our students are learning.