the e-Assessment Association

Role of AI in Detecting Collusion in Assessments

An article by Vali Huseyn, Strategic Assessment Advisor, Vretta

The integration of Artificial Intelligence (AI) in educational assessments marks a significant shift in the testing landscape, involving two key players: the assessment administering bodies and the candidates. The rapid integration, especially with generative AI systems like ChatGPT by OpenAI or Bard by Google, offers vast opportunities but also poses challenges. A notable concern is the pace at which candidates adopt AI compared to its slower institutional integration by assessment bodies, potentially impacting the fairness and objectivity of assessments. As regulatory mechanisms struggle to keep pace with AI’s evolution, navigating this new reality of human-AI collaboration is essential for preserving the integrity of assessments and staying competitive in a rapidly advancing market.

This article aims to explore AI’s role in detecting and preventing collusion, a crucial aspect in ensuring the effectiveness of assessment operations in this dynamic, AI-saturated environment.

Navigating the Dual Edges of AI in Educational Assessments

Understanding the Capabilities of AI in Assessment Design

The primary strength of AI, particularly generative AI as demonstrated by tools like ChatGPT, lies in its remarkable content creation capabilities. This technology has also transformed the design of assessments, enabling the generation of diverse items and the construction of complex problem-solving scenarios, thereby significantly expanding the scope for innovative assessment authoring. However, while AI greatly enhances the possibilities for item creation, it also introduces unique challenges. One such challenge is the potential for collusion in responding to assessment questions when these tools are used by candidates in assessment settings.

Addressing Collusion in Assessments

Collusion has long been a challenge in assessments, especially when students illegitimately cooperate on assignments meant to be completed independently, which often occurs due to a lack of readiness. The methods of collusion adapt to the assessment environment, whether it’s paper-based, computer-based, or remotely invigilated. In online assessments, even before the pandemic brought about a surge in remote invigilation, a common form of collusion was the use of online search engines like Google to find answers. Recently, however, search engines have often been replaced by AI platforms such as ChatGPT. Students may use these platforms to generate answers through human-AI collaboration, which they then copy and paste into their responses in remotely conducted exams. This issue affects both low- and high-stakes exams in various jurisdictions. The use of AI-generated responses by students can undermine the fairness and integrity of these assessments, casting doubt on their overall validity. It is therefore essential to continuously understand and address this form of collusion to uphold public trust in the fairness and integrity of educational assessments.

Detecting and Preventing Collusion through AI

A key focus for policymakers responsible for assessment policy worldwide is the detection and prevention of collusion, and various methods are being explored and trialled in the field. Two solutions currently under trial are keystroke analysis (including copy/paste controls) and behaviour monitoring (sometimes called a softlock). The first approach tracks key combinations (such as Ctrl-C and Ctrl-V), right-click pasting actions, and the insertion of large blocks of text, in order to monitor typing patterns. The second detects when a user navigates away from the assessment environment, for example by changing tabs or browsers or exiting full-screen mode, along with the timing of these activities. Suspicious activities identified by these methods are flagged for further investigation, helping to ensure the integrity of the assessment process. Furthermore, a browser lockdown feature within the assessment environment can restrict the use of unauthorized resources on the same device during an exam, especially in high-stakes assessments.
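The two approaches above can be sketched in a browser context. The following is a minimal, illustrative example, not any vendor's actual implementation: the function names, thresholds, and tolerance factor are assumptions for the sake of demonstration. The insertion check flags text that appears faster than the candidate could plausibly have typed it, and the behaviour-monitoring snippet uses the standard Page Visibility API to record when a candidate leaves the assessment tab.

```typescript
// Keystroke-analysis signal: was this insertion too large to have been
// typed in the elapsed time? (Names and thresholds are illustrative.)
interface InsertionEvent {
  charsInserted: number; // size of the inserted text
  elapsedMs: number;     // time since the previous keystroke
}

function isSuspiciousInsertion(
  ev: InsertionEvent,
  typingSpeedCps: number,   // candidate's baseline characters per second
  toleranceFactor = 3       // hypothetical slack so fast typists are not penalised
): boolean {
  // Characters the candidate could plausibly have typed in the interval,
  // with generous tolerance applied.
  const plausibleChars = (ev.elapsedMs / 1000) * typingSpeedCps * toleranceFactor;
  return ev.charsInserted > plausibleChars;
}

// Behaviour monitoring ("softlock"): in a browser, navigation away from
// the assessment tab can be observed via the Page Visibility API.
if (typeof document !== "undefined") {
  document.addEventListener("visibilitychange", () => {
    if (document.visibilityState === "hidden") {
      // Record the timestamp for later human review rather than blocking outright.
      console.log(`Candidate left the assessment tab at ${Date.now()}`);
    }
  });
}
```

Under these assumptions, 800 characters appearing two seconds after the last keystroke would be flagged for a candidate whose baseline is five characters per second, while a short autocorrect-sized burst in the same interval would not.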

Balancing Privacy, Fairness, and Surveillance

The incorporation of AI into assessment environments naturally leads to significant questions concerning privacy and fairness. When detecting behaviours like large-scale text insertions, it is important to consider factors such as a candidate’s typing speed and the timing of their activity. This approach helps prevent bias against individuals who are proficient in typing and ensures that responses are authentically self-generated, not sourced from generative AI tools. Therefore, it is essential for AI-assisted surveillance systems to be designed impartially, with a high regard for students’ rights. The balance between their efficiency in identifying potential collusion and adherence to ethical standards is crucial. Navigating these ethical challenges will be the key to the sustainable integration of AI in the domain of educational assessments.
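One way to operationalise this fairness concern is to calibrate flagging thresholds against each candidate's own observed typing speed rather than a fixed population-wide limit. The sketch below is a hypothetical illustration of that idea; the class name, the warm-up window, and the default rate are all assumptions, not a description of any real proctoring product.

```typescript
// Illustrative per-candidate typing baseline, so that flagging adapts to
// the individual rather than penalising proficient typists.
class TypingBaseline {
  private totalChars = 0;
  private totalMs = 0;

  // Record a burst of ordinary typing (not a paste).
  recordTyping(chars: number, elapsedMs: number): void {
    this.totalChars += chars;
    this.totalMs += elapsedMs;
  }

  // Characters per second observed so far; falls back to a conservative
  // default until enough data (here, five seconds) has accumulated.
  charsPerSecond(defaultCps = 4): number {
    if (this.totalMs < 5000) return defaultCps;
    return (this.totalChars / this.totalMs) * 1000;
  }
}
```

A candidate who demonstrably types six characters per second would then be judged against their own rate, not a slower default, reducing the risk of bias against fast typists.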

Practical Recommendations: Equipping Stakeholders for a Transformed Landscape

Once collusion is identified in remotely invigilated assessments, decisions need to be contextually aligned with the stakes and nature of the assessment. Various stakeholders, including education policy makers at the top, administrative staff of assessment bodies, guardians responsible for students, and school principals and teachers overseeing learning, need to address the following specific scenarios:

  • Unusual Patterns Detected in Student Responses
    Recommendation: Implement comprehensive training for staff and educators on AI technologies to recognize and respond to these patterns. Develop clear communication channels with guardians to inform students about the potential and limitations of AI in assessments.
  • Student Admits to Using AI Assistance
    Recommendation: Create robust policy frameworks that define the consequences of using AI for malpractice and guidelines for its fair use. Offer counselling and constructive feedback to the student to address the underlying reasons for their actions.
  • Rise in AI Tool Accessibility Among Students
    Recommendation: Invest in technological safeguards like advanced proctoring solutions to monitor assessment integrity. Regularly review and update these tools to keep up with evolving AI capabilities.
  • Confusion Among Guardians About AI’s Role in Assessments
    Recommendation: Conduct educational sessions for guardians, clarifying how AI tools are used in assessments and the importance of academic integrity. Collaborate with teachers and administrators to ensure a unified approach.
  • Need for Continuous Improvement in Assessment Design
    Recommendation: Design assessments to evaluate the creative and critical thinking skills fostered by appropriate AI use, ensuring that they truly reflect students’ understanding and effort and thereby supporting both educational advancement and integrity in assessments.
  • Ethical Concerns About AI Monitoring in Assessments
    Recommendation: Advocate for the ethical use of AI in education through workshops, seminars, and public discussions, emphasizing the balance between technological advancement and ethical considerations.
  • AI-assisted Learning and AI-facilitated Collusion
    Recommendation: Encourage educators to integrate generative AI tools like ChatGPT into the educational process for tasks such as brainstorming and organization, while remaining mindful of their potential role in facilitating collusion. Guide both educators and students through training that emphasizes the distinction between AI-assisted learning and AI-facilitated collusion, ensuring ethical use and the importance of originality in student work.

Innovations in AI present new opportunities for enhancing assessments, but they also require educators and students to be equipped with the necessary skills and awareness to navigate this transformed landscape effectively.

Reflecting on the Journey and Looking Ahead

As we delve into the dynamics of test administration in an AI-saturated landscape, it becomes clear that balancing AI’s potential with fair and reliable assessment practices is crucial. This article has aimed to shed light on the challenges and opportunities at both the backend and frontend of test administration, offering practical insights for a future where human-AI collaboration is the norm. The goal is to inspire a vision of responsible AI integration in educational assessments, fostering a future that leverages AI’s capabilities while upholding the highest standards of integrity and fairness.

__________________________________

About the Author

Vali Huseyn is an educational assessment specialist with experience in enhancing key phases of the assessment lifecycle, including item authoring (item banking), registration, administration, scoring, data analysis, and reporting. His work involves strategic collaboration with a range of assessment technology providers, certification authorities, and research institutions, contributing to the advancement of the assessment community. At The State Examination Centre of Azerbaijan, Vali played a crucial role in modernizing local large-scale assessments. As the Head of Strategic Partnerships and Project Management Unit Lead, he co-implemented several regional development projects focused on learning and assessment within the Post-Soviet region.

Feel free to connect with Vali on LinkedIn to learn more about the best practices in transitioning to an online assessment environment.