the e-Assessment Association

Skynet or Wall-E? The implications of Generative AI for exam security

A blog by Paul Muir, Chief Strategy and Partnership Officer Surpass Assessment and eAA Vice Chair

We all remember the days before ChatGPT and Generative AI seemed to be in every education news story, don’t we? You know, when AI wasn’t going to kill us all or make assessments entirely redundant?

Back in late 2021, I sat down and wrote a blog post titled ‘AI: Friend or Foe?’ and followed it up with a session of the same name at the 2022 ATP conference in Orlando with Professor Barry O’Sullivan and Marten Roorda. There, we debated the topic with fellow assessment professionals, who mostly agreed it was indeed our friend.

AI was going to be our constructive ally in assessment, supporting us with tasks such as ID verification, content creation, auto-marking, enhanced security for remote proctoring and stronger data-forensic capabilities within examinations.

So, what’s changed? Or has anything really changed from those innocent days of early 2022? 

Even back in 2021, before ChatGPT raised its head, the European Commission had placed the use of AI in education in the ‘High-Risk’ category of its experimental AI framework. This was the middle category, the others being ‘Limited Risk’ and ‘Unacceptable Risk’. Does Generative AI move this towards unacceptable? I think it’s probably too soon to decide. Personally, I think the biggest change is that the discussion is now very much focussed on Generative AI and Large Language Models rather than AI as an overall concept.

As a reminder, Generative AI refers to technologies that can autonomously generate content, such as text, images and videos. These systems employ machine learning algorithms and neural networks to analyse and learn from vast amounts of data, enabling them to create highly realistic and believable outputs. Believable is key there, as we know hallucination is a real problem. That’s for another paper though… 


While Generative AI has opened up exciting possibilities in various industries, including education and assessment more widely, it also presents significant challenges in maintaining the integrity of examinations. 

Three key implications for exam security are: 

  • Cheating & Plagiarism: One of the most significant security implications of Generative AI in exams is the increased risk of cheating and plagiarism. Students can use Generative AI models to generate answers or even entire essays that appear to be original, making it difficult for traditional plagiarism-detection tools to flag such instances. This poses a challenge for exam owners and institutions in ensuring the authenticity of student work.
  • Impersonation: Have we all seen the deepfake Tom Cruise? Or the new AI software that can ‘replace’ eye movement to fool remote proctoring services and software? The rise of deepfake technology, itself a form of Generative AI, presents a significant risk in exams that require identification or authentication of the student. AI-powered ‘solutions’ can be used to create sophisticated audio and video for impersonation or identity-theft purposes. For instance, a student could use deepfake technology to mimic another student’s voice during an oral exam or to manipulate facial features during a remotely proctored test.
  • Content Development (closed AI environment): We already know that some assessment organisations and exam owners use Generative AI to produce content more cost-efficiently. But at what cost? Using a publicly accessible AI solution, such as ChatGPT, could result in exposure of test content, giving students the opportunity to access, generate and share new questions closely related to the original ‘seed’ content, compromising the integrity of the assessment process.


However, it’s not all doom and gloom, and as a sector we are already addressing the threats described above (and many more!) head on with various interventions and preventions such as:

  • Advanced Proctoring Solutions: Whether in-person at a test centre or with remote proctoring solutions, there are a number of tools readily available to mitigate the risk posed by Generative AI. Two-camera solutions, AI-enhanced anti-plagiarism technology and the use of cutting-edge multi-modal and behavioural biometrics will be critical in the battle against those who wish to use Generative AI for nefarious purposes.
  • Secure Test Drivers: Utilising secure platforms/test drivers (such as Surpass) or Learning Management Systems (LMS) that incorporate encryption solutions, have lockdown functionality and advanced authentication mechanisms can help safeguard exam materials, ensuring question confidentiality and preventing unauthorised access. 
  • Collaboration: It might be controversial, but collaboration between educational institutions and AI developers can help us as a sector to develop robust anti-cheating mechanisms. By understanding the nuances of generative AI, exam owners and educational institutions can stay ahead of potential threats and work towards creating secure exam environments. 

While I’ve only listed three key mitigations above, there are clearly many more available, and some of these will be discussed in an upcoming ATP Security Committee white paper on the security implications of AI, due towards the end of July. We’ll post a link on the eAA website once it’s published.

I want to end on a positive note though. Generative AI has the potential to revolutionise the assessment industry, not just from the negative perspective that many initially run straight to. It’s not going away, so we need to figure out as an industry how to mitigate the risks, but just as importantly how to ‘embrace with caution’ the positives it brings.

In the next eAA newsletter, we’ll cover the more positive sides of Generative AI, the role of the regulator in the UK and what I believe are the five core principles for the use of AI in education and assessment.

So, going back to the beginning of this newsletter, are we now in ‘Skynet’ or ‘Wall-E’ territory with our opinion of AI?


What do you think?


And if you’d like to see how Surpass is embracing the future of assessment creation with AI, take a look at Surpass Copilot, our upcoming AI-powered toolset that could revolutionise your item authoring process.


