Simulation in assessment
I was delighted to have the opportunity to write a blog on the work that the Dell Technologies Proven Professional Program has been doing with Pearson on incorporating simulation-based items into its certification exams. I have worked in e-assessment for over 25 years now, and I am always interested in anything that improves the overall assessment experience for candidates. I know that there has been a lot of work on simulations over the years, but they remain a comparative novelty as a widely used assessment approach. Whilst simulation makes obvious sense, the challenge is that it takes a lot of time and money to create, limiting the number of organisations that have the means to go down this route.
In my role with Certiverse, I speak to many tech companies on a daily basis, and whilst we support what I will refer to as ‘technology-enhanced items,’ most discussions are still focused on multiple-choice item types. I think using technology to improve the test experience by measuring something that is otherwise difficult to measure is laudable. However, although not the case here, I have often seen complex items created for complexity’s sake, when in all honesty, the candidate would have been better served by a well-written multiple-choice item.
One challenge with technology-enhanced item types or simulations is that they are more memorable, and as a result are more easily recalled (and therefore compromised). Another challenge is that when there are lots of ways of completing a task, you need either to program in all the possible options, or, if you are looking for one correct way of doing something, to sign-post that approach. I thought that using a multiple-choice question that required you to find the answer via the simulation — in essence, combining two item types to create a more complex question — was inspired. Although my technological understanding is admittedly somewhat limited, the use of Storyline allowed the simulations to be re-used, with different variables being independently changed to allow for additional variations in the content. Whilst this doesn’t negate the challenges of the time and cost of developing simulation content, it certainly reduces them, because it enables the test sponsor, in this case Dell Technologies, to generate more usable content from each simulation.
Having said all this, I thought that the Dell Technologies solution was rather good, and I had the benefit of being able to speak to both Juan and Orest, who presented at the “Beyond Multiple Choice” conference, to discuss in more detail the work they are doing. Whilst I don’t think that the use of simulations is new, the application in Dell’s exams certainly is, and it better reflects the real-world environment. Multiple-choice content still has its place and is great for assessing most knowledge. But simulations such as the one created by Dell Technologies and Pearson provide a means of assessing more complex practical performance. Dell Technologies only uses simulations in its higher-level certification exams, which limits item exposure. Perhaps in the future, the simulations could form a standalone exam, ensuring greater accessibility to the more efficient and affordable multiple-choice content, while limiting exposure of the higher-cost simulations to only those candidates at the right level.
I do hope organisations like Dell Technologies and Pearson continue to invest in innovative exam design, and that other organisations follow their lead in embedding simulations into their exams when and where it makes the most sense and offers value to test sponsors and candidates alike.
Chief Revenue Officer for Certiverse