the e-Assessment Association

Video items in e-Assessment – what happened?

Video is now a key factor in personalized learning: it affords learners access to educational sources how and when they like. Yet the e-Assessment world tends to shrug its shoulders when asked why video hasn't really taken off for high-stakes e-testing, and why e-Portfolio (using video as evidence for qualifications) has not yet reached the early/late majority of the adoption lifecycle in many sectors.

Almost every buzz-term in the education technology field (flipped classroom, MOOC, BYOD, personalized or social learning) has video as a key component. Proponents such as Michal Tsur of Kaltura call video “the glue that ties our new learning methods together and facilitates them in the first place”.

Over 20 years ago, research from Teachers College at Columbia University claimed that students exposed to video content integrated into their learning outperformed non-exposed peers on exams. A 2012 report by Cisco claims that video accounts for more than 50% of all mobile traffic, and will reach at least 66% by 2017.

So why is video the ‘shy kid in the corner’ of e-Assessment? Part of the answer is cost: for much of e-Assessment (and especially e-testing), the cost shifts from delivery (essentially shifting paper) to the creation/production of items and tests, or to retooling processes for greater efficiency. For video items, this means that item writers and qualification development professionals must consider skills that are not traditionally in their skill set, such as storyboarding, lighting, scripting and location finding. While digital cameras and phones have transformed our ability to capture memories, special occasions and other events in ways that were not possible even 10 years ago, video fit for high-stakes (or ‘meaningful’) exams still has a number of barriers to overcome.

Furthermore, assessment specialists need to consider how video changes the assessment, what precisely the assessment is designed to measure, and how reliable scoring will be implemented. All of these factors can change as the medium of prompt and/or response changes. Consider the following:

  1. Face validity: Does the test look like a genuine test of its intended purpose? Many e-Assessments still rely on paper conversions (text and image), while fewer take a ‘ground zero’ approach and consider the real potential of media for forms of testing that are simply not possible with text-and-picture e-Assessment items.
  2. Scalability: How easy is it to build a sustainable item bank that can withstand exposure and remain relevant to the curriculum? Some commentators in the e-learning world lazily call e-Assessment ‘cheap’. Question writing is a discipline involving regular, scheduled refresh, analysis and editing. This helps to ensure not only that each candidate receives a fair, valid and reliable test, but also that sustainable processes can accommodate candidate growth within existing programmes and in new and diversifying candidate bases. When assessment is meaningful, it is a cost worth paying.
  3. Use of Non-Standardised Hardware: A fundamental principle of e-Assessment is to present a test in a standardised way for all candidates, so that no candidate is unfairly disadvantaged. For example, video reproduced at different frame rates by non-standardised hardware in a test centre can create significant differences for candidates sitting in the same room.
  4. Security: Holding items on dispersed, local servers can create security issues, especially with video files being many times the size of a text-based multiple-choice question. CAPS-compliant laptops, as users with enterprise-wide encryption and anti-virus software will testify, can suffer severely hampered performance.
  5. Access to Broadband: Test preparation for video items may seem an easy project to set up from an exam creator’s point of view: ‘flat’ video files served up and embedded on a website, or linked from YouTube, can show a candidate what to expect. However, broadband ‘not-spots’ mean that any practice test using video becomes a tiresome waste of time for prospective candidates, affecting their ability to perform at their best. Equally, the upload/download of evidence for e-Portfolio away from a suitable (and free-of-charge for the student) corporate or educational internet connection can hamper a student in their pursuit of a qualification.

In July 2013, Jon Brodkin wrote a fascinating article for Ars Technica describing how the never-ending argument between Internet Service Providers and video services (over how much one network should pay to connect to another) directly impacts video streaming quality. Needless to say, this turf war compounds the problem of increasing demand for better video quality, which exam creators appear reticent to engage with. Learner expectations of video quality have risen since the smudgy days of VHS (or 405 lines for our younger readers), with Blu-ray and HD now prevalent in many Western households and 4K consumer hardware on the horizon.

The growth of gaming over the internet has also raised expectations of the user experience and of the ability to interact (read: shoot-‘em-up) over broadband. Needless to say, an exam board with little experience of using video to test candidate abilities will inevitably steer clear of a radical change in assessment instrument, except perhaps where driven by pressure from tutors and candidates through curriculum design and pedagogy.

While organisations such as the Serious Games Institute at Coventry University have made considerable strides in the use of gaming in education within computer-generated environments (e.g. soft skills, career guidance), the high-stakes e-testing and e-Portfolio worlds are conspicuous by their absence.

Outliers such as the UK Driving Test continue to be beacons for the use of video. A study on Hazard Perception Testing (HPT) performed by TRL in 2002 indicated that, while there were a number of research procedures for measuring candidates’ hazard perception, none was deemed suitable for licensure. The HPT test was deemed (at that time) to be statistically reliable, and adequate training of learner drivers could help them achieve hazard awareness closer to that of experienced drivers. The test was introduced in November 2002.

The creation of film clips that offer face validity is costly, and fraught with operational difficulty. Interestingly, the recent move to convert 130 of the HPT clips to CGI/animation not only removes the use of film, but offers more control over environments, vehicles and characteristics of movement within each clip. While the test hazards remain the same, and are still relevant, contemporary hazards can be introduced at reduced cost.

The tipping point for the use of video may be reached with Bring Your Own Device (BYOD), where candidates and learners are invited to use their own hardware to take their test (subject to meeting minimum technical criteria, and perhaps coupled with rendering and/or security software). As issues with proprietary formats over the internet are slowly overcome by tools such as HTML5, developers and exam creators will no longer be able to ignore the bow wave of video in everyday learning – the exam must reflect and align with how the training/learning was performed.
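To illustrate the HTML5 route in concrete terms: the standard `<video>` element lets an item author offer the same clip in more than one open, widely supported format, with each candidate’s browser picking the first source it can decode natively – no proprietary plug-in required. This is a minimal sketch only; the file names, dimensions and codecs shown are illustrative assumptions, not part of any particular exam platform.

```html
<!-- Sketch of an HTML5 video item: the browser plays the first
     <source> it supports (WebM, then MP4/H.264), avoiding the
     plug-in dependencies of older proprietary formats.
     File names and dimensions are hypothetical. -->
<video width="640" height="360" controls preload="metadata">
  <source src="hazard-clip-01.webm" type="video/webm">
  <source src="hazard-clip-01.mp4" type="video/mp4">
  Your browser does not support HTML5 video.
</video>
```

Listing multiple sources in this way also gives BYOD programmes a practical fallback path: candidates on differing devices and operating systems can all be served the same item without the exam body mandating a single codec.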

by Geoff Chapman, Calibrand and e-Assessment Association
[email protected]
