I hate marking. No, I hate the idea of it. I like it once I get going, but it’s awfully time-consuming. In principle I value it, as it’s one of the most important things a teacher can do for a student. Although years later people often remember great teachers or great moments from particular classes, what seems to matter most to students when they are at college is the marks they get. Or maybe that’s just what teachers say. Often I think that what matters most to my students, anyway, is the music they make (and how much they can drink). But when they’re paying for a degree, and working very hard to do well at it (most are, although “very hard” is relative – I had no idea what that really meant ‘til I started doing a PhD whilst working full-time in two jobs and playing drums for three bands), students deserve their marks back on time.
Assessing students’ work should be an honest, equitable and transparent process, grounded in soundly made judgments about students’ performance on given tasks. Marks need to be a fair reflection of students’ performances against the criteria prescribed by the dreaded grading rubric, and feedback should probably take into account the opportunities given to students to learn, study for, and provide evidence of whatever understanding or knowledge a particular assessment purports to be testing. This is complicated by the fact that, as Lucy Green (2000: 103) notes, “many musical qualities will always escape any system of either evaluation or assessment”. Feedback is perhaps as important as the letter or number handed down – a point discussed eloquently and brilliantly by Swanwick (1997). When students receive good marks, though, I doubt they often care much for reading feedback, as the punch line is that they did great. For those students who (in their minds or others’) under-performed, constructive (not sarcastic – and that can be a test sometimes; it amazes me what people do, write, say and appear to believe, and to believe acceptable, in their submitted work) feedback on how to improve is more helpful. As I noted in a paper on assessment a couple of years ago, “summative grades carry with them a tangibility, a permanence, and a legacy in ways that formative remarks and discussion, while profoundly helpful, do not” (Smith, 2011: 38). On a really bad piece of work, knowing when to stop with the red pen or comments in MS Word, and whether to leave an unlikely-to-be-pursued “please come and see me”, is both a fine art and a sledgehammer.
While marking is never straightforward, in a horribly simplistic way the transaction is of course incredibly non-complex – students do their work, and I tell them how good it is. That’s how I remember it from college. But I am perpetually crippled by insecurities. Who or what gives me the right to pass judgment? How do I know I’m right? What happens when I didn’t set the work or teach the class, and I’ve been asked to mark 50 essays? What about (as so often seems to happen) when the task set clearly does not allow the student to achieve what is required of the module or course learning outcome? What about when I fundamentally disagree with the expectations of either my fellow faculty members or the stated aims of the module? Or when I am confident students could or should never achieve what is asked, nor could anyone else, because the wording is wrong or the rationale outdated? My confusion is compounded by Colwell’s (2006: 220-221) observation that “rubrics are highly effective in focusing student effort (narrowing it)” and that “it is difficult to imagine a rubric providing feedback that would be helpful”. Thus I frequently find myself asking the question, what on earth am I doing?!
How best do I serve the students, or wider society (whom these nurtured creatures will soon be joining as fully fledged graduates), in double-marking situations where I feel that colleagues have totally ignored or misunderstood key words in the grading criteria and have wantonly distributed A and B (or, equally frightening, E and F) grades to all and sundry? Do I commensurately inflate and deflate other students’ grades? Am I ethically obliged to assess work in a way that (only?) I consider accurate, thus potentially highlighting my senior colleagues’ possible shortcomings or my misguided arrogance to them or our boss, thereby giving my students differing grades from everyone else, in turn raising questions about my teaching, my students’ learning, and sealing my fate as a soon-to-be ex-employee? Or do I just “play ball” for a few years ‘til I get tenure somewhere more prestigious, and only then start to push for all students to work really hard, and for blatantly market-driven admissions processes to move to ensure an at-least-vaguely level playing field from the outset (don’t even get me started on this), wishing for the sake of all under-served alumni that I’d had the conviction to do this years ago? And what of when I meet former students years down the line? Do we speak of mutual tacit multiple complicities with a broken system that served them with mediocrity, papered over my morals, and only gave fuel to a blog about how annoyed I am to have a Pretty Sweet Job?
Where are my drums?!
Colwell, R. (2006). Assessment’s potential in music education. In R. Colwell (Ed.), MENC handbook of research methodologies (pp. 199-269). New York: Oxford University Press.
Green, L. (2000). On the evaluation and assessment of music as a media art. In R. Sinker & J. Sefton-Green (Eds.), Evaluation issues in media arts production (pp. 89-106). London: Routledge.
Smith, G. D. (2011). Freedom to versus freedom from: Frameworks and flexibility in assessment on an Edexcel BTEC Level 3 Diploma popular music performance program. Music Education Research International, 5, 34-45.
Swanwick, K. (1997). Assessing musical quality in the national curriculum. British Journal of Music Education, 14(3), 205-215.