Digital Assessment Practice, by the Centre for Distance Education at University of London


Re-published with kind permission from the Centre for Distance Education at University of London

The annual CDE ‘Supporting Student Success’ workshop this year focused on one issue that has weighed on many academics’ minds since the spring lockdown: how assessment practice can and should change in the transition to e-learning. As Mary Stiasny, Pro Vice-Chancellor (International) of the University of London, put it in her introduction to the workshop, ‘the toothpaste is now out of the tube’: e-learning is here to stay. It is, therefore, increasingly unlikely that paper-based exams will return even once the pandemic has passed into memory, and we will need to make sure that current and future students are well served by whatever follows.

Written by Dr Clare Sansom, CDE Fellow 

In the session on transition to online, CDE Head Linda Amrane-Cooper and the Centre’s joint executive leads for research and dissemination, Stylianos Hatzipanagos and Alan Tait, described the University of London’s marathon effort to put all its spring and summer 2020 assessments online. A team comprising Chris Cobb and Craig O’Callahan from the University of London and Brunel University’s Mariann Rand Weaver gave a talk with the intriguing title ‘Assessment Re-booted’, and Martha Gibson from Heriot-Watt University in Edinburgh explained what she wished she had known about e-assessment when the crisis began. Finally, Simon Walker and Norbert Pachler from University College London addressed the question of assessment in these ‘times of disruption’.

Putting 110,000 Assessments Online

Professor Hatzipanagos began with a summary of some background research into online assessment, before and during the pandemic. Even 10 years ago, researchers were noting a growing student interest in replacing pen-and-paper exams with online equivalents, citing prompt feedback as a driver. However, student performance in online exams has until now been patchy, and some still have difficulty accessing the technology. Furthermore, some staff consider the COVID pandemic and subsequent shift online to have provided a ‘perfect storm’ for contract cheating. When the pandemic hit, the University of London faced the challenge of moving online more than 110,000 exams for 35,000 students in over 20 time zones at very short notice. The University therefore commissioned a project to explore students’ experience of online exams in 2020, with a focus on unseen assessments.

Dr Amrane-Cooper explained that the project is examining four aspects of assessment: student behaviour before and during the exams, including interactions with the VLE; students’, lecturers’ and examiners’ opinions, assessed through a student survey and interviews; outcomes; and some important operational issues. She then handed over to Professor Tait to discuss the survey. This was sent to about 30,000 students before results were issued and over 8,500 responses were received; most were from undergraduates, about half of whom attended one of the University’s Teaching Centres. Interestingly, student take-up of exams was slightly higher than it had been with traditional assessments in previous years, and most felt they had been able to demonstrate their learning. Many of those who chose not to take exams blamed the impact of the pandemic on their study in general rather than their assessment, and poor access to technology was rarely a problem. Furthermore, most students’ experience of online assessment was generally positive, and more than half the students expressed a preference for continuing to take exams online at home. Students with poor Internet access were, not surprisingly, significantly less likely to select this option. These results suggest that continuing online exams after the pandemic will be popular with students, which has important operational implications. Selecting the best formats for these exams; invigilation, proctoring and assessment offences; and improving infrastructure and communications will remain live issues for some time.

Assessment Re-Booted

Chris Cobb set out a ‘2030 vision’ of assessment throughout the next decade. The report behind this vision had been commissioned before the pandemic, and it was clear that the shock move online had only speeded up a transition that was already underway. He suggested that assessment will need to stay relevant, adaptable and trustworthy as the context changes. Taking these in turn, one simple way in which e-assessment has established its relevance is in the switch from handwriting to keyboarding, and going digital also makes it easier to assess collaboration and group work. E-assessments may mitigate the disadvantages associated with some disabilities, and they can be adapted to students’ circumstances, examining them ‘anytime, anywhere’ whenever they are ready and taking prior learning into account.

One of 2020’s major challenges has been scaling up digital assessments to cover more students. Global student numbers are likely to increase dramatically in the next decade, and it is not far-fetched to think of AI playing a significant role in their assessment by 2030. Staff and students need to be assured that digital assessments will, at the very least, not be more open to plagiarism and fraud than conventional ones; here, trustworthy methods of validating student identity and data ownership will be key.

Dr Rand Weaver, Vice Provost of Education at Brunel, described how e-assessment practice there had developed since 2016, when students were first allowed to bring their own devices to on-campus exams. In 2020, of course, all this had to be replaced with home-based exams. Over the preceding four years the WISEflow assessment platform had been adopted for both coursework and exams, gradually until the pandemic and then almost instantaneously. All of the more than 800 exams held since lockdown were open book and time-limited, supported by an embedded chat facility that students found very valuable and used often. Around 80% of the students chose not to defer spring exams to August, but typically ‘disadvantaged’ groups – BAME, low-income and disabled students – were the most likely to do so. Staff were encouraged to include questions that required higher-order thinking; multiple-choice questions were minimised and randomised, and marking criteria were adjusted to allow for open book conditions. Rand Weaver, like others, is now using 2020 as a ‘catalyst for re-imagining assessments’, maintaining a focus on the pedagogy of assessment with technology as the enabler, not the driver, of change.

Dr O’Callahan’s experience at the University of London’s International Programmes has been very different. There, about 80% of some 35,000 students were assessed by examination only, and the University had only two months to achieve a transition that had taken Brunel four years. Furthermore, many students lived in countries with very low bandwidth, making a commercial platform like WISEflow impracticable. The only possible solution in the time available was a Moodle-based one without proctoring, relying on Turnitin to pick up plagiarism. The technology worked well, although more irregularities were detected than in previous years, but Moodle was much less suited to handling the marking of very large numbers of assessments than a commercial platform would have been. This poses many questions for assessment in the ‘new normal’, where e-assessment will, at the very least, be a lot more common.

Transitioning to e-assessment: What I wished I had known at the outset

Ms Gibson’s presentation took delegates back to ‘the halcyon pre-COVID days’ in describing how, starting in 2015, the graduate business school at Heriot-Watt University had taken its assessment online. This works in a similar way to London’s International Programmes but on a smaller scale, with (in 2015) 12,000 registered students in 166 countries. A large majority of the students are mid-career professionals working flexibly towards MBAs, assessed by exams only, with four exam sessions each year. The gradual transition to e-assessment arose in response to student feedback; it was intended to fit in with the students’ professional lives and business practices, improve exam security and reduce printing and shipping costs. There were, of course, challenges, not least the enormous worldwide variation in students’ contexts and experience, and in assumptions about what ‘e-assessment’ would actually mean.

Rather than taking the opportunity to re-design the whole assessment strategy, they chose to deliver existing face-to-face exams through an e-assessment platform. This led to technical issues; whole courses, including the core Economics module, were initially excluded from the system because the platform did not allow students to submit diagrams. There was also no way to mitigate technical problems during the exams. Initial student feedback was distinctly patchy, but once staff and students had experienced the system few wanted to return to pen and paper, and uptake of e-exams increased gradually from one exam session to the next.

So, after a generally successful launch, what did Ms Gibson wish she had known before she started the project? In brief:

  • What all stakeholders understood by the term ‘e-assessment’
  • The situation and the subsequent challenges in each country and at each local exam centre
  • How other UK-based global programmes were managing the same transition
  • How much practice, preparation and training students and staff would need

And what would she do if she was starting again from scratch? She would use a hybrid model of remote invigilation and test centres to reach a wider group of students, make e-assessment the default and, crucially, take the opportunity of a ‘root and branch’ refresh of assessment design rather than re-purposing existing paper-based exams. She recommended that delegates join the e-Assessment Association, which is free.

Dynamic responses to assessment in times of disruption

The final presentation, in contrast, focused on how the pandemic had disrupted assessment at UCL, and what can be learned from this for future years. UCL is a global university, and very many of its overseas students have been unable or unwilling to return to the UK. In this context, the dominance of exams as a mode of assessment is, as Professor Pachler explained, ‘perhaps unhelpful’, and assessment is likely to place too many demands on students. E-assessment is clearly the way forward, but the crisis has merely accelerated this change in strategy. It is clearly led by pedagogy, with technology as an enabler.

In a normal year, students at UCL would take about 2,500 exams between March and June. In 2020 these were replaced by a mixture of coursework and timed online assessments, with a discipline-specific ‘capstone assessment’ for all first years. This change to the previous exam-based system was a key part of a broader University-wide strategy for crisis management.

Looking beyond the immediate crisis, UCL is now setting out its strategy for the academic year that has just started in an environment that is still disrupted and subject to rapid and unpredictable change. Professor Walker described how their online assessment model was developing ‘at scale and at pace’ in a community the size of a small town. Assessment load for staff and students is a serious issue. Planning for 2020-21 started at the beginning of the spring lockdown by developing a guide for alternative assessment strategies beyond exams. A ‘connected learning essentials’ course was developed for staff training, together with a model for calculating assessment load to ensure equivalence across different programmes.

This is now used when new modules are developed. An evaluation of staff and student experience of online assessment generated positive feedback but also highlighted some problems; many students reported being more stressed by these exams than by the paper-based equivalents.

The final piece of UCL’s online assessment strategy for 2020-21 and beyond will be a dedicated digital assessment platform. This supports the whole assessment cycle from design through to moderation and can be used in a wide range of languages and disciplines. Following proof of concept and piloting, the system should be available for all students taking exams at the end of the academic year. In a large institution like UCL, maintaining a common assessment strategy, improving communication with staff and students, and ensuring prompt feedback are key, and the platform should support all of these.

Designing Digital Assessments

The second strand of the workshop took online assessment design as its theme. As each of the four contrasting presentations highlighted, the pandemic has only accelerated a trend that was already under way. There will be no mass return to paper-based exams, and assessment design must continue to reflect this change in practice.

Firstly, Alison Sturrock and Gil Myers from UCL medical school discussed how medical students can be assessed virtually in this most practical of disciplines. CDE Fellows Stephen Brown and David Baume presented the results of a project on student learning behaviour in online courses and made some recommendations for assessment. CDE Fellow Alan Parkinson, also from UCL, reflected on his experience with criterion-based marking schemes, and Yaping Gao from the quality assurance programme Quality Matters suggested ways of rethinking assessments to improve student outcomes.

Assessing UCL Medical Students Virtually

Medicine is one of the most challenging of disciplines to convert to e-assessment. In many countries including the UK, clinical practice is assessed through Objective Structured Clinical Examinations (OSCEs) in which all students are assessed on the same clinical tasks in a restricted timeframe. When COVID gripped the UK in March, Dr Sturrock, academic lead for clinical and practical skills at UCL, initially decided to keep the face-to-face OSCE for students but replace real patients with actors.  After lockdown, however, even this became impossible and a virtual OSCE had to be designed for students in years 4 and 5.  A certain amount of ingenuity was necessary in assessing some of the key skills: much could be done with video and audio, but some practical tasks have to be assessed, including vaccination. “We asked each student to provide an orange for this task”, she explained. Students were generally satisfied with these assessments, but some face-to-face practical exams were reinstated after full lockdown ended, with actors again replacing patients and with social distancing.

Medical educators are no better placed than anyone else to know what the future holds. Dr Myers, a senior clinical teaching fellow, explained UCL’s plans for different options in 2020-21. These depend on the progress of the pandemic but will hopefully include a mix of both socially distant and virtual assessments and with a wider range of video options. A lively discussion after the presentation included questions on scaling up the assessments.

What Student Learning Behaviour Online tells us about Assessment in Course Design

Professor Brown and Dr Baume, both members of the CDE’s executive team, presented some results of a short research study of students’ e-learning behaviour and the implications for assessment. Students on four University of London programmes – postgraduate law, accounting, computer science and the MBA – had been asked to log their learning activities and rank those they found most useful. The outcome of this exercise was unambiguous but, frankly, worrying. Almost unanimously, students of all four programmes found interaction with the course content – basically, learning facts – the most useful activity, and more social and interactive exercises such as peer collaboration and discussion forums were the least useful. Student interviews confirmed that they viewed the acquisition of knowledge rather than higher-level skills as most essential for success. This contrasts with educators’ view of the importance of active learning.

Students have always been driven by assessment, and it seems obvious that the value students place on factual knowledge is linked to their understanding that this is the key to passing exams. Therefore, the best way to encourage them to value deep learning and higher-level skills is likely to be to build these more explicitly into assessment strategies. This was not a critique of students, who will have been brought up on factual assessments and who, having reached higher education at all, are likely to be at least fairly good at them; it was, rather, a critique of how their learning is assessed.

The presenters ended by stressing the ‘iron triangle’ connecting a course’s core components with its learning outcomes and its assessment, and then giving eight recommendations for course and assessment design to encourage active learning.

  1. Explain and illustrate the fact that learning is an active process
  2. Redesign assessment questions using the stated learning outcomes
  3. Match all learning outcomes to activities
  4. Explicitly show how the activities and the learning outcomes relate to each other
  5. Design activities that go beyond testing memory
  6. Build skills development into the learning outcomes
  7. Provide activities that help students recognise as well as develop these skills
  8. Make some of the activities that explicitly relate to learning outcomes collaborative ones.

Using Criterion-based Marking Schemes to assess Differentiated Attainment of Learning Outcomes

Academic teachers often take a relaxed attitude to assessment and marking; it is not surprising to come across comments like ‘I forgot the assignment had to be set by Monday’ on a Friday afternoon. This can be a particular problem when student numbers are large, as the time spent assessing each one is necessarily limited. Professor Parkinson, deputy director for education in UCL’s School of Management, thinks that a more professional approach to assessment would be beneficial. He presented some reflections on this topic from his own experience, student surveys and the literature, using the topic of marking criteria to illustrate his points.

Student surveys and literature reports both highlight assessment as a contentious issue for both staff and students.  Parkinson quoted lecturers’ published views of it as ‘a necessary evil’ and ‘a serious and often tragic enterprise’. Students’ views of assessment captured by the UK’s National Student Survey seem on the surface to be positive, with over 70% agreeing with statements like ‘the assessment criteria used are fair’ and ‘feedback is prompt and helpful’. However, this compares badly with much higher average scores for other aspects of their learning.

Both students and lecturers will agree that marking criteria, including mark schemes, must enable clear differentiation between grades. Parkinson ran through an example from the Professional Accountancy course where the same answer, including both correct and incorrect points, received very different marks depending on the scheme used. It may seem obvious, but we should think through exactly what knowledge or understanding students are expected to demonstrate, and how to distinguish between great, good and passable answers before setting assessment criteria; a formal assessment design inventory can help with this. However, complex marking criteria that work well when assessing a small group of students may be impracticable with a class of hundreds.

Alignment Matters: Rethinking Assessments for Student Engagement and Success

Dr Gao began her talk by introducing Quality Matters as a quality assurance programme that helps educational institutions at all levels improve and assess their courses. It works with over 1,350 institutions, including, so far, four HEIs in the UK. They have established a total of 42 interrelated sets of course quality standards that are based on published research; focus on pedagogy rather than technology; and are flexible enough to be applied at different levels. Institutions are assessed through peer review.

The talk focused on the theme of alignment. Gao adapted the old proverb ‘it takes a village to raise a child’ into ‘it takes the whole institution to educate each student’. Alignment matters at all levels: the institution, in terms of enrolment and accreditation; the programme, in terms of academic rigour; and the individual course, in terms of learning objectives, assessment, student engagement and support. At the programme level, programme design, teaching support and learning support are equally important for ensuring student success.

So, where does assessment fit in? At the course level, certainly. The Quality Matters approach to assessment starts with the learning outcomes and is an exercise in reverse engineering: what do we need to assess in order to ensure that the students meet those outcomes, and that those outcomes can be fully tested? The need to align assessment with objectives, and for quality and integrity in assessment, is exactly the same for online assessments as for paper-based ones. When the pandemic hit and higher education, including assessment, moved abruptly online, every institution used a different set of technologies to teach and assess a different set of subjects, and employed staff with different technical knowledge and skills. Quality Matters has been offering a wide range of products and tools to help each institution, in its individual circumstances, implement high quality assessments.

And – whatever the stated preferences of the students in Brown and Baume’s surveys – much research has shown that students need to learn, and to be tested on, higher-level skills such as analysis and evaluation. Gao encouraged delegates to apply Bloom’s taxonomy – a hierarchical ordering of cognitive skills – in curriculum and assessment design. She presented a table of activities linked to this taxonomy that can be used in digital learning, while acknowledging that certain activities, particularly in experimental science and clinical medicine, will be particularly difficult to assess in this way. Assessments should also be designed and run in a way that promotes engagement between students and instructors, and this is one of the benefits – but not the only one – of providing prompt, high-quality feedback.

Gao highlighted that, during the pandemic, Quality Matters is offering two specific resources to institutions, to enable them to put quality activities and assessments online:

  • An emergency checklist for those rapidly moving materials (including assessment) online
  • A ‘bridge to quality course design’ guide for use after the first emergency

She ended by stressing that students see assessment as the most important part of their ‘learning journeys’. High-quality assessments will be closely linked to the learning objectives, will assess core elements of the course and test high-level skills, and will do so in a way that engages them.

Across both strands, several speakers picked up on Professor Stiasny’s initial metaphor. Today, the toothpaste is not only out of the tube: the horse has bolted, leaving the stable door far behind. E-assessment is here to stay and the experiences of 2020’s rapid transition will pay dividends even when social distancing is a distant memory.
