E-assessment provides greater quality assurance
Over the past decade, when I have been ‘on the road’ talking about electronic assessment, a frequent accusation I have had to counter is that computers are ‘easier to cheat’ and that e-assessment is therefore, by inference, less reliable. This in part explains the continued insistence from some public bodies on ‘wet signatures’, as if these cannot be forged. It would not, for example, be too difficult to ‘forge’ the signature of Vince Cable, the minister who signs off all the funding for the sector. It’s a large C with a dot in the middle!
There is, however, a growing recognition that it is very difficult to ‘cheat’ the e-assessment of vocational skills, because every decision and action is date-stamped and ascribed to a person’s password-protected username. It is simply not possible to alter that date, and for every action taken there is a clear audit trail.
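To make the idea concrete, such an audit trail amounts to an append-only log in which every entry carries a user and a timestamp, and past entries can be read but never rewritten. The sketch below is illustrative only, assuming a simple in-memory store; the class and field names are hypothetical, not those of any particular e-assessment product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass(frozen=True)
class AuditEntry:
    """One immutable record: who did what, and when."""
    username: str
    action: str
    timestamp: datetime

@dataclass
class AuditTrail:
    """Append-only log: entries can be added and read, never altered."""
    _entries: List[AuditEntry] = field(default_factory=list)

    def record(self, username: str, action: str) -> AuditEntry:
        # The timestamp is assigned by the system, not supplied by the user,
        # so the date of an action cannot be chosen or back-dated.
        entry = AuditEntry(username, action, datetime.now(timezone.utc))
        self._entries.append(entry)
        return entry

    def history(self) -> List[AuditEntry]:
        # Return a copy so callers cannot rewrite the trail itself.
        return list(self._entries)

trail = AuditTrail()
trail.record("student_42", "uploaded evidence for unit 5")
trail.record("assessor_01", "signed off competency 3.2")
```

In a real system the log would live in tamper-evident storage rather than memory, but the principle is the same: each action is attributed and dated at the moment it happens.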
This ‘cheat-proof’ quality is most applicable to systems designed to support rigorous assessment, where, although the portfolio belongs to the learner, the learner is not, for obvious reasons, in total control of what happens within it. The systems I am referring to are those able to control who has permission to see and do what, depending on the group a user belongs to and the role they play.
The transparency that e-assessment provides goes well beyond this level of quality assurance. Electronic assessment brings decisions about assessment methodology to the surface, where in traditional systems they often get hidden amongst stacks of paper. Computers are not very good at accommodating the common scenario in human judgement-based assessment of ‘this student looks like a fail to me, but under some, often ill-defined, conditions the student could pass…’. In e-assessment, all the rules of assessment need to be clearly defined and understood in order to work at all, and this is largely a benefit (although it might not always seem so).
The process of creating an e-assessment system often results in awkward questions being asked of the assessment methodology itself. In my experience this is often true within Higher Education, where assessment methodologies are not always ‘standardised’ across a department, never mind across the wider university. In one sense this is not surprising: a health course at a university often has to be validated by three separate bodies: the university itself, the relevant professional body and the regulator. Standardisation across these three is likely to draw more attention than standardisation with other departments teaching disciplines seen as very different.
However, there is increasing interest in standardising the processes of courses covering similar vocational subjects. This is driven in part by the desire for an electronic approach and the recognition that accommodating all the different requirements of individual courses involves significant extra expense, particularly where those requirements are not core to the assessment process. It is also driven by a demand from those being assessed, and their assessors, for processes that are clear.
When e-assessment was first introduced for vocational assessment, there was a sense that the early adopters were carried away by all the opportunities it potentially offered, in terms of how evidence was captured, the possibilities of using the portfolio beyond the course, and so on. As a result, people often lost sight of the need to fully reflect the assessment journey. This is perhaps why, even though there have been many electronic assessment projects in HE, very few have secured widespread adoption or had a significant impact.
The real irony is that the very thing perceived to be the Achilles heel of e-assessment, namely that using computers makes assessment less rigorous and open to cheating, turns out to be the opposite. Well-designed e-assessment systems actually offer universities and other similar institutions the opportunity to strengthen their quality assurance systems. In our experience with Southampton University, the pressure on mentors to make ‘quick assessments’, because the student must leave with their paper portfolio, is replaced by an opportunity for the mentor to reflect, in their own time, on the judgements they need to make. From the tutors’ perspective, they can see at any time the progress their students are making, rather than waiting until the portfolio lands on their desk at the end of the placement, by which time it is often too late. For the student, the pain of carrying around a 300-page document is replaced by a tablet or mobile.
In this and many other ways, e-assessment can be said to strengthen rather than undermine quality assurance.
Chris Peat Axia Interactive http://www.axiainteractive.net/