
Marking Work: Music Education, Feedback and Assessment

I hate marking. No, I hate the idea of it. I like it once I get going, but it’s awfully time-consuming. In principle I value it, as it’s one of the most important things a teacher can do for a student. Although years later people often remember great teachers or great moments from particular classes, what seems to matter most to students when they are at college is the marks they get. Or maybe that’s just what teachers say. Often I think that what matters most to my students, anyway, is the music they make (and how much they can drink). But when they’re paying for a degree, and working very hard to do well at it (most are, although “very hard” is relative – I had no idea what that really meant ‘til I started doing a PhD whilst working full-time in two jobs and playing drums for three bands), students deserve their marks back on time.

Assessing students’ work should be an honest, equitable and transparent process, grounded in soundly made judgments about students’ performance on given tasks. Marks need to be a fair reflection of students’ performances against the criteria prescribed by the dreaded grading rubric, and feedback should probably take into account the opportunities given to students to learn, study for, and provide evidence of whatever understanding or knowledge a particular assessment purports to be testing. This is complicated by the fact that, as Lucy Green (2000: 103) notes, “many musical qualities will always escape any system of either evaluation or assessment”. Feedback is perhaps as important as the letter or number handed down – a point discussed eloquently and brilliantly by Swanwick (1997). When students receive good marks, though, I doubt they often care much for reading feedback, as the punch line is that they did great. For those students who (in their minds or others’) under-performed, constructive (not sarcastic – and that can be a test sometimes; it amazes me what people do, write, say and appear to believe, and to believe acceptable, in their submitted work) feedback on how to improve is more helpful. As I noted in a paper on assessment a couple of years ago, “summative grades carry with them a tangibility, a permanence, and a legacy in ways that formative remarks and discussion, while profoundly helpful, do not” (Smith, 2011: 38). On a really bad piece of work, knowing when to stop with the red pen or comments in MS Word, and whether to leave an unlikely-to-be-pursued “please come and see me” is both a fine art and a sledgehammer.

While marking is never straightforward, in a horribly simplistic way the transaction is of course incredibly simple – students do their work, and I tell them how good it is. That’s how I remember it from college. But I am perpetually crippled with insecurities. Who or what gives me the right to pass judgment? How do I know I’m right? What happens when I didn’t set the work or teach the class, and I’ve been asked to mark 50 essays? What about (as so often seems to happen) when the task set clearly does not allow the student to achieve what the module or course learning outcomes require? What about when I fundamentally disagree with the expectations of either my fellow faculty members or the stated aims of the module? Or when I am confident students could or should never achieve what is asked, nor could anyone else, because the wording is wrong or the rationale outdated? My confusion is compounded by Colwell’s (2006: 220-221) observation that “rubrics are highly effective in focusing student effort (narrowing it)” and that “it is difficult to imagine a rubric providing feedback that would be helpful”. Thus I frequently find myself asking the question, what on earth am I doing?!

How best do I serve the students or wider society (which these nurtured creatures will soon be joining as fully fledged graduates) in double-marking situations where I feel that colleagues have totally ignored or misunderstood key words in the grading criteria and have wantonly distributed A and B (or, equally frightening, E and F) grades to all and sundry? Do I commensurately inflate and deflate other students’ grades? Am I ethically obliged to assess work in a way that (only?) I consider accurate, thus potentially highlighting my senior colleagues’ possible shortcomings or my misguided arrogance to them or our boss, thereby giving my students differing grades from everyone else, in turn raising questions about my teaching, my students’ learning, and sealing my fate as a soon-to-be ex-employee? Or do I just “play ball” for a few years ‘til I get tenure somewhere more prestigious, and only then start to push for all students to work really hard, and for blatantly market-driven admissions processes to change so as to ensure an at-least-vaguely level playing field from the outset (don’t even get me started on this), wishing for the sake of all under-served alumni that I’d had the conviction to do this years ago? And what of when I meet former students years down the line? Do we speak of our mutual, tacit, multiple complicities with a broken system that served them with mediocrity, papered over my morals, and only gave fuel to a blog about how annoyed I am to have a Pretty Sweet Job?

Where are my drums?!

References

Colwell, R. (2006). Assessment’s potential in music education. In R. Colwell (Ed.), MENC handbook of research methodologies (pp. 199-269). New York: Oxford University Press.

Green, L. (2000). On the evaluation and assessment of music as a media art. In R. Sinker & J. Sefton-Green (Eds.), Evaluation issues in media arts production (pp. 89-106). London: Routledge.

Smith, G. D. (2011). Freedom to versus Freedom From: Frameworks and flexibility in assessment on an Edexcel BTEC Level 3 Diploma popular music performance program. Music Education Research International, 5, 34-45.

Swanwick, K. (1997). Assessing musical quality in the national curriculum. British Journal of Music Education, 14(3), 205-215.


Gareth Dylan Smith is based in London, and is an endorsee of EcHo Custom Drums and TreeWorks Chimes. His expertise is in demand worldwide as a performer, educator, and academic. He drums in punk, musical theatre, blues, cabaret and alt. rock bands, recording and performing around the UK, Europe and the US. Recent collaborations include Roger Glover (Deep Purple), Richard O’Brien (Rocky Horror Show), Will Gompertz (BBC Arts Editor), Sony, Victoria’s Secret and Bloomberg. He has appeared on recordings by the Eruptörs, Stephen Wheel, Mark Ruebery, Gillian Glover and Neck. Gareth teaches drums, ensemble studies and research skills at the Institute of Contemporary Music Performance in London, history and philosophy of music education for Boston University, and rock and roll pedagogy at the University of Michigan. In 2013 Gareth’s book I Drum, Therefore I Am: Being and becoming a drummer – the world’s first academic study of drummers – was published by Ashgate. Gareth’s research interests include music making and leisure; embodiment in performance; intersections of music, education and entrepreneurship; and pedagogy, gender, democracy and social justice in music education. Gareth has presented research on five continents and is published widely in peer-reviewed journals and books. He has written for Rhythm and Drummer magazines, and maintains an observational comedic blog at DrDrumsBlog.com, where he also writes album and gig reviews. Gareth is on the review boards for The British Journal of Music Education, Psychology of Music and Malaysian Music Journal. He writes limericks for all occasions, and is passionate about good coffee, red wine and prog rock.

2 thoughts on “Marking Work: Music Education, Feedback and Assessment”

  1. Great blog post Gareth, most of which I can totally empathise with. I too have found myself in moments of insecurity, where I have revisited marks and feedback time after time before they are returned to the students, wondering if I have been consistently ‘fair and egalitarian’ in the awarding of those marks. I have to reassure myself that I constructed and set the assignment and that I have not asked them to try to achieve unachievable aims. I always make it clear to my students that there is a specific set of marking criteria, listed on the assignment brief, and that they will be marked against those and nothing else – in other words, not to include ideas and concepts that won’t be marked, and to use the word count carefully and constructively. Although the students are VERY clear on what they get marked against, it does at times seem a prescriptive exercise for many of them, rather than an opportunity to explore more abstract ideas.

    I am somewhat surprised you are being tasked with marking work into which you had no input at any level; this seems very counterproductive and leaves the assessor somewhat exposed to criticism and to questioning of their ability to be fair and consistent. In my institution all assignment briefs are reviewed by another member of staff, and staff mark the work for the module they teach, with a moderation process involving a blind sample reviewed by another member of staff with knowledge of that particular field of study or discipline. Then of course there is the external examiner, who also reviews a sample.

    As many of us appreciate, in these days of increasing fees and students’ perception of themselves as paying clients of a ‘service industry’, the onus is on us as academics to make sure all is good from our end: checks, double checks, relevance, efficacy, etc. As you rightly say, feedback, whether it is read or not, has to be constructive and useful, and must clearly signpost and suggest what the student needs to do next time to improve their marks. It becomes ever more apparent to me, though, that when students book an appointment to ‘discuss’ their mark, they have not read the feedback, because I ensure that the feedback clearly explains why they have received the mark they have (a very annoying and pointless use of my time and theirs). Whilst I am always happy to discuss students’ marks with them, I always ask them to come along with a rationale that points to why their mark should be higher (as it is never the other way round). It then puts the onus on them to thoroughly review their submission and the feedback.
    Just some thoughts, and, as it turns out, a useful distraction from the pile of marking I have in front of me.

  2. Dear Doctor Smith

    I found this article very interesting, both as an educator and educational ‘assessor’ with experience of educational research in the area of assessment, and as a student of music, albeit of an informal sort.

    Your frustration with the assessment regimes within which you have to work was clear, and I am sure that a great many teachers will agree with you. Your comments on feedback chime with the comments I made here about the conflict between different functions of assessment:

    http://thinkingaboutmusic.com/?p=525

    I found your article here very interesting:

    http://cmer.arts.usf.edu/content/articlefiles/3345-MERI05pp34-45Smith.pdf

    I wondered, after reading a post here by Dr Moir, what the music assessment rubrics might look like, and so I was pleased to see an example. I admit to being somewhat shocked by its brevity. I agree wholeheartedly that it doesn’t provide much of a basis for ‘feedback’ to students.

    I think I can shed some light on why BTEC qualifications are the way they are.

    Vocational Education and Training in this country has been very much influenced by strands of educational psychology developed in the United States, especially the ideas of criterion-referenced assessment and mastery learning, both of which were developed, broadly speaking, within the context of behaviourist approaches to knowledge.

    Knowledge is defined, put simply, as a change in behaviour. All educational outcomes are supposed to be defined using behavioural verbs, and not mental ones such as ‘understand’, ‘appreciate’ and so on. A seminal paper by Robert Glaser played a big part in getting this particular ball rolling.

    The BTEC rubrics in your article demonstrate this general approach, with the verbs being ‘explain’, ‘plan’, ‘develop’ and ‘perform’. You quote Swanwick’s idea of learning as the ‘residue of experience’. Such mentalistic ideas of learning are anathema to more or less the whole of modern assessment practice. However, the appearance of subjective terms like ‘flair’ undermines the apparent objectivity of the rubric.

    Returning to the criteria, I must confess myself unable to understand how an individual learner can ‘develop as a musical ensemble’. The only explanations of this I can come up with involve a mixture of split personalities and time travel!

    Other aspects of this rubric are very familiar, as, for example, the specification of degrees of tutor support in strands 2 and 3.

    These behaviourist-based approaches have appeal, as you acknowledge, and some justification, when opposed to more subjective and unclear approaches to the assessment of the work of students. More specifically, they are often opposed to ‘norm referenced’ systems of assessment in which the grades or marks of students are worked out in comparison either to the rest of their cohort or with reference to some statistically based distribution curve.

    The BTEC system you describe appears to be based on the mastery and competence-based approaches to vocational training. There is a great deal of work on the problems associated with a close cousin of the BTEC examination, the NVQ (National Vocational Qualification), which has also been influenced by the behaviourist, mastery-based educational thinking I mentioned above. If you dipped into this literature, I am sure you would find thoughts and experiences in other fields which mirror your own.

    You comment that “Edexcel does not prescribe course content or curricula.”

    This is entirely in line with the behaviourist-based approach to assessment. What matters is not how the learner gets to know how to do things but whether or not they can do them. In criterion-referenced assessment schemes the assessment criteria tend to become the focus of the curriculum, and they certainly become a focus of student concern. You are not alone in expressing a degree of concern about this. The comment that these assessment regimes often leave teachers unsure about what they are supposed to be teaching has been made with some elegance and force by Professor Alison Wolf in her seminal works on the topic, among others.

    Professor Wolf spells out one reason for the lack of a syllabus: there has been a conscious intention to decouple assessment from teaching. In theory a student can present themselves for assessment without having followed a course, sometimes using ‘accreditation of prior learning’ evidence to claim a qualification. My belief is that this is partly due to the desire to set up a market in assessments, to make assessment a free-standing business whose interests will be independent from those of the educators.

    I read your comment that the BTEC was unfair because, unless a student obtained a distinction in all papers, their grade reflected the lowest level that they had achieved. This is part and parcel of the ‘mastery’ approach. In this approach, students are not supposed to progress to the next level until they have thoroughly mastered the work from the previous one. Tactics such as ‘averages’, in which the best and worst aspects of student performance balance each other out, are at odds with the mastery approach to testing and assessment. I’m not taking a position on this; I simply aim to set out the philosophy behind the approach. It is supposed to motivate students to do well. (A tiny illustrative sketch of the difference between these two aggregation rules appears at the end of this comment, below the references.)

    You comment that the criteria seem ‘fragmentary’ and potentially ‘narrowing’, though you acknowledge that they are relatively ‘manageable’. Assessment processes should, as I have noted in another post on this web site, be valid, reliable, manageable and useful. Validity and reliability are notoriously difficult to reconcile, and to do this while maintaining manageability is harder still. The work of Professor Alison Wolf showed how, in a vain attempt to increase the value of NVQs, more and more detail was inserted into the specifications. Yet the aim – of precise and clear criteria – was not met. Why? I suspect that the answer lies in the philosophy of language: in most everyday situations, language under-specifies meaning.

    I will also comment that you are lucky that BTEC and teachers of music do not appear to have been forced to incorporate ‘key’ or ‘core’ skills into music qualifications. Usually defined in such terms as ‘Application of Number’, ‘Literacy’, ‘ICT’, ‘Problem Solving’ and ‘Working with Others’, these ‘skills’ are specified in levels designed to dovetail with the national curriculum levels you describe in your paper, and attempts have been made, with varying degrees of success, to integrate them into both academic and vocational qualifications across a range of subjects and levels. If they have not yet been inflicted upon music, somebody somewhere has missed a trick, since the ‘music business’ strands would be the ideal place to make students use ICT skills to produce formal documents applying mathematical skills (graphs, averages and so on, for example), and ensemble work looks like an ideal place to develop the key skill of ‘working with others’. I once earned money teaching vocational students key skills, since the vocational teachers could not teach or assess some of them themselves!

    I empathise completely with your view that musicians have all sorts of knowledge and understanding by which they can assess students, but, sadly, in a world where centres of education are often paid by results, the time when teachers would be trusted to do this has long since passed. Indeed, Wolf’s work reveals that fraud cases were in some instances brought in respect of the misuse of awarding!

    I have just been following an online MOOC (Massive Open Online Course) giving an introduction to improvisation. As you may know, assessment on MOOCs is via a mixture of multiple choice type tests and peer assessment. In general I felt ill-qualified to assess the work of my peers and so my policy was to award top marks unless they had not bothered to complete the set task. I felt even less qualified to offer constructive feedback.

    Finally, here are a few references which, if you were interested enough and had the time, might help you to see why you find yourself coping with the sort of assessment rubric in question, and to learn more about the way the underlying principles have been applied in the UK in the past.

    Wolf, A. (1995). Competence-based assessment. Buckingham: Open University Press.

    https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/180504/DFE-00031-2011.pdf

    Glaser, R. (1963). Instructional technology and the measurement of learning outcomes: Some questions. American Psychologist, 18(8), 519-521.

    Block, J. H., & Burns, R. B. (1976). Mastery learning. Review of Research in Education.
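    As promised above, here is a minimal, purely illustrative sketch (in Python) of the difference between the ‘mastery’ rule, where the lowest unit grade caps the overall result, and simple averaging. The three-point grade scale and the function names are my own invention for illustration only, not anything prescribed by BTEC or Edexcel.

    # Hypothetical three-point grade scale, invented purely for illustration.
    GRADE_POINTS = {"Distinction": 3, "Merit": 2, "Pass": 1}
    POINTS_GRADE = {points: grade for grade, points in GRADE_POINTS.items()}

    def mastery_grade(unit_grades):
        # Mastery-style aggregation: the weakest unit caps the overall grade.
        return POINTS_GRADE[min(GRADE_POINTS[g] for g in unit_grades)]

    def averaged_grade(unit_grades):
        # Averaging: strong and weak units balance each other out.
        mean = sum(GRADE_POINTS[g] for g in unit_grades) / len(unit_grades)
        return POINTS_GRADE[round(mean)]

    units = ["Distinction", "Distinction", "Pass"]
    print(mastery_grade(units))   # Pass  - the single weak unit determines the result
    print(averaged_grade(units))  # Merit - the weak unit is offset by the strong ones

    The same three unit grades yield a Pass under the mastery rule but a Merit under averaging, which is exactly the ‘balancing out’ that the mastery philosophy rejects.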
