Turning assessment into a learning experience for higher education students
By Peter Collison – Head of Formative Assessment and School Platforms at RM Results
During my 17-year career in the education sector, working with schools, awarding organisations and higher education institutions, I have witnessed assessment shift from a largely summative focus (‘assessment of learning’ at the end of a course or key stage) towards formative assessment, often described as ‘assessment for learning’ or ‘learning through evaluation’. Formative assessment helps both teachers and learners understand where to direct their focus for continued progress, so that each learner or cohort can achieve the best possible outcomes.
One example of how learning through evaluation can be applied in the higher education sector was explained to me by Professor Scott Bartholomew who had coached a high-school robotics team early in his career. A wide-ranging curriculum had been developed to introduce and explain various principles of robotics to the students. However, he discovered that the students preferred their own approach to acquiring the necessary knowledge: YouTube.
YouTube provided the teams with something unique: the ability to view video footage of a variety of existing robots, evaluate the performance of each, and identify potential aspects for integration into their own designs. They were able to make comparisons between a host of posted videos and incorporate what they had identified as key performance attributes into the design and build of their own robots.
Professor Bartholomew found himself discussing with colleagues the potential for intentionally using evaluation as a learning activity. This dovetailed nicely with his own research interests into improving student learning in design settings. In time, these ideas developed into a study involving 550 students using Adaptive Comparative Judgement (ACJ), the results of which have fundamentally changed his approach to learning through the act of evaluating or assessing work.
Introducing Adaptive Comparative Judgement (ACJ)
ACJ is an approach based on the ‘law of comparative judgement’, which states that humans are better equipped to make paired, comparative judgements than absolute ones. ACJ tools offer an alternative to traditional marking: instead of assessing each piece of work against a mark scheme, an assessor is presented with two pieces of work and simply uses their professional judgement to choose which better meets the assessment criteria. An adaptive algorithm is embedded in the pairing process and intelligently selects pieces of work for comparison; as the algorithm learns, it pairs similarly ranked pieces, optimising the judgement process and ensuring the accuracy of the final rank order that is produced.
One can probably best explain the concept of comparative judgement by considering a visit to an optometrist. When a patient takes an eye test, they aren’t presented with twenty options and asked to pick the clearest prescription; that would be a complex and stressful exercise, likely to result in a sub-par prescription. Instead, the optometrist repeatedly asks the patient to choose the better of just two presented options until, eventually, they know precisely which prescription is the most suitable. The same logic underpins ACJ assessment: choosing from all the potential options on a mark scheme when assigning a mark can be difficult; making a comparative decision is much easier.
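To make the mechanics concrete, here is a minimal, illustrative sketch of comparative-judgement ranking in Python. It is not RM Compare’s actual algorithm: the Elo-style rating update, the neighbour-pairing heuristic, the `judge` callback and the parameter values (`rounds`, `k`) are all assumptions chosen purely for illustration.

```python
import random

def rank_by_comparative_judgement(items, judge, rounds=50, k=32):
    """Illustrative sketch: build a rank order from pairwise judgements.

    `judge(a, b)` returns whichever of the two items is preferred.
    NOTE: this Elo-style update and neighbour pairing are assumptions
    for demonstration, not RM Compare's proprietary algorithm.
    """
    ratings = {item: 1000.0 for item in items}
    for _ in range(rounds):
        # "Adaptive" pairing: order by current rating and compare
        # neighbours, so similarly ranked pieces meet each other.
        ordered = sorted(items, key=ratings.get)
        for a, b in zip(ordered[::2], ordered[1::2]):
            winner = judge(a, b)
            loser = b if winner == a else a
            # Logistic expected score for the winner; the rating update
            # is larger when the outcome is more surprising.
            expected = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
            delta = k * (1.0 - expected)
            ratings[winner] += delta
            ratings[loser] -= delta
    # Final rank order, best first.
    return sorted(items, key=ratings.get, reverse=True)

# Toy "judge" that always prefers the numerically larger piece of work.
random.seed(0)
pieces = list(range(10))
random.shuffle(pieces)
order = rank_by_comparative_judgement(pieces, judge=max)
```

In a real ACJ session the judge would be a human assessor choosing the better of two pieces of student work; here a toy judge that prefers larger numbers stands in, so repeated comparisons should recover the numeric order.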
ACJ is an assessment approach particularly well suited to subjects with multiple potentially correct answers, such as literature, video production, and design. In creative subjects, ‘correct’ solutions are numerous, and consequently any mark scheme often has to remain somewhat abstract. ACJ can replace the point-based mark scheme with comparisons, and judging can be carried out by a single assessor or collaboratively, with multiple assessors working on the same ACJ session. However, beyond its value in fundamentally changing the assessment process, Professor Bartholomew perceived a bigger potential for ACJ: using it as a learning tool.
Using ACJ as a learning tool – a study
The study was implemented at Purdue University with 550 design and technology students split into two groups: a control group and a treatment group. The course required students to engage with open-ended problems and the design process, culminating in the production of a design portfolio. The curriculum was identical for both groups, with a single exception: the use of ACJ for peer assessment.
On the day of the intervention the students in the control group participated in a traditional instructor-led ‘think-pair-share’ activity where students exchanged work and feedback. The students in the treatment group used the ACJ software RM Compare to evaluate work submitted by students in a previous term who had worked on a different design brief. This simple intervention took approximately 20 minutes but provided the students with a ‘learning by evaluating’ experience.
Professor Bartholomew observed that by evaluating each other’s work, students quickly learned to identify the qualities and traits that characterised both higher- and lower-quality work. Repeatedly comparing items and justifying the choice of one item over another appeared to act as a learning activity that helped solidify the students’ understanding of what constitutes quality work.
Following this experience, all students proceeded with the remainder of the class and the completion of their design portfolios. Once the portfolios were completed, a team of course instructors assessed the work of all students across both groups, also using ACJ. The results were striking: seven of the top ten highest performers for the year came from the treatment group, which had used ACJ for peer assessment. Moreover, the average student in the treatment group outperformed 66% of their peers in the control group, and the difference was statistically significant (p = .032).
Read the full case study to find out more about how student attainment was significantly improved through the use of RM Compare for peer assessment.
Extending the benefits of ACJ to primary schools
The benefits of ACJ technology do not start and end with peer assessment in higher education. A recent case study of 14 UK primary schools using RM Compare for the marking and moderation of a creative writing assignment showed that ACJ technology can reduce teacher workload whilst increasing student attainment.
New technologies with proven pedagogical impact are becoming increasingly crucial at every level of education, enabling learning and assessment to be delivered flexibly, efficiently and effectively, to ensure students have the best chance of getting the outcomes they deserve. The recent case studies for RM Compare certainly make the case for ACJ to be amongst these technologies.
Disclaimer: Purdue University does not endorse any technology.