Last year I had the brilliant idea to give a presentation to art educators called Stop Teaching Art. Classy, right?! This year, I presented the sequel: Stop Grading Art!
I don’t know why I keep choosing these subversive titles. Maybe I am playing on the anti-establishment tendencies of art folks to generate some interest in what I have to say, but regardless, I feel like I need to quickly explain what I mean by it, and what I don’t mean.
Last year’s topic, Stop Teaching Art (which you can read about here), was not meant to suggest that we shouldn’t have art in our schools or that my audience should leave the profession. On the contrary! The purpose of that presentation was to take a look at how we teach and consider ways we can be more student-focused and more learning-focused by being less art-focused — stop teaching art, and start teaching students.
Similarly, Stop Grading Art is not meant to suggest that we should not have to give grades in art (though I suspect some attended the session with hopes this was the case). Rather, it is meant to explore the many possibilities for finding evidence of student learning by looking in places other than the final product — stop grading [the] art and start grading the learning. (I guess that is the punch line, so those of you with short attention spans can leave now.)
Grading practices are deeply rooted in beliefs about the purpose of grading, and I know we do not all agree on this topic. In fact, I was able to illustrate this with the group in the room. When the group was offered several choices between learning-focused purposes for grading (research-based theory) and the functions of grading often used in schools (practice), it was clear that there was not a lot of agreement. Choices such as this one left many unable to choose.
This uncertainty goes right back to the first topic, Stop Teaching Art, which is why I included a synopsis here. If we are overly art-focused, then the choice on the left may seem appropriate, but if we are learning-focused, we must select the option on the right. Indeed, educational research supports the latter, but I believe we, as art educators, place too much emphasis on the artmaking, and on the quality of the product.
We are asking the wrong questions when grading. Rather than asking, “Did the student learn?” we are asking, “Is this a good artwork?”
The purpose of grading should be to support learning, and therefore our grading practices must align with instructional practices, which must align with the intended learning outcomes. I like to think of this with the CIA acronym. I don’t know who first used this (it can be found referenced in a lot of different places), but I was introduced to it by Christopher Gareis, Ed.D., of the William & Mary School of Education. He describes CIA (curriculum, instruction, and assessment) as three forms of the same thing — like three states of matter. Each should work in unison with the others.
The backward design model made popular in Understanding by Design (Wiggins and McTighe) describes a similar relationship from an instructional design perspective. In this model, the teacher first determines what the students will learn (curriculum), then what evidence will be accepted that the students have learned it (assessment), and then the learning activities that will lead to this end (instruction).
No matter which of these resonates with you, the conclusion is the same: assessment and grading are meant to align directly with and support the learning goals — the standards — and not all of our standards are about the product.
Whether you are working with standards from the relatively new National Visual Arts Standards (in the US) or a state or local curriculum, perhaps organized according to the older DBAE model, it is clear that not all of the learning goals in art are strictly about artmaking. DBAE addresses not only visual communication and production, but also art history, criticism, and aesthetics. The National standards are organized into four strands: creating, presenting, responding, and connecting. In both cases, the product is primarily addressed in only one of the four strands.
Maybe I should, but I am not going to suggest this means the product should count for no more than twenty-five percent of grading. I don’t need to get down to numbers. I will, however, suggest that the majority of us do not do a sufficient job of assessing learning in the strands outside of art production, and I hope you will take these ideas into consideration and decide what is appropriate for yourself.
So what is a teacher to do? If, as I hope, a few of you are thinking, “Hey, you’re right. I should do a better job of grading for learning in these other parts of my curriculum,” how do you move forward? Consider these three steps.
- Identify data sources (in addition to the art product) that will provide evidence of learning.
- Identify and “unpack” the standard (or benchmark or indicator) you are measuring with this data source.
- Identify or develop an assessment tool that will provide measurable data aligned with the standard.
These can be done in a different order, but starting with the data source is often the most concrete way to begin, since it is so closely connected to instructional activities.
IDENTIFY A DATA SOURCE
It may sound sterile or clinical, but all that is meant by “data source” is the other things you can look at (besides the art product) that can provide evidence of learning. Look first at the instructional activities that already happen in your classroom. Planning sheets, sketchbooks, warm-up activities, reflection activities, artist statements, critiques, and many more have the potential to provide standards-based evidence of learning.
UNPACK THE STANDARD
Think about the standard that describes the intended learning outcome related to the instructional activity and data source. Understand the standard and describe for yourself, in more detail, exactly what you want to see from your students that will show you they are learning it. (This is a great activity to do collaboratively.)
As you unpack the standard, take a look at the data source and make sure it is designed to give you the evidence you need. If you need to make an adjustment, such as rewording a prompt to better align the instructional activity with the standard, this will make your instruction and assessment all the stronger.
DEVELOP AN ASSESSMENT TOOL
Finally, you need a way to measure whether your students are meeting the learning expectation. Make no mistake, it is difficult work to create a good rubric that is learning-focused and aligned with a selected standard, so I would like to offer a “generic” tool that may help. The table below shows four levels for measuring the degree to which a student is meeting the learning expectations. It may not be perfect for every scenario, but it is designed to connect with any given standard, and best of all, it is a standards-based, learning-focused design.
Grading practices are a tough topic. As we speak, my school division is taking a hard look at grading practices in middle and high schools, and we expect to get some new guidelines for grading that will be applied district-wide. These will, no doubt, cause some anxiety for teachers who are accustomed to grading a certain way, but in the end it will be better for students, and better for learning.
I don’t know what the recommendations and guidelines will be, but let me close by sharing this resource and its 15 fixes for broken grades (language abbreviated from the text). As you read through this list, ask yourself how many of these align with your grading practices.
- Don’t include behaviors (effort)
- Don’t reduce marks for late work
- Don’t use extra credit or bonus points
- Don’t punish with grades
- Don’t consider attendance
- Don’t include group work
- Organize by standards/learning goals
- Provide clear expectations
- Don’t compare students
- Use only quality assessments
- Don’t average
- Don’t give zeros
- Don’t include formative assessments
- Emphasize recent achievement
- Involve students in the process