The AI in Assessment Innovation Award is a new Award for 2025, recognizing groundbreaking initiatives that harness the transformative power of artificial intelligence in assessment, paving the way for smarter, fairer, and more efficient evaluation practices.
Read on for more information on the finalists in this Award category at the 2025 International e-Assessment Awards.
HiringBranch with Bell: Reducing Candidate Drop-off, Attrition and Hiring Costs Using the HiringBranch AI Skills Assessment
Bell is a global telecom company that employs 45,132 people and generated more than 24 billion in revenue last year. Typically, it hires 8,000 new agents in over one hundred segments across customer care, technical, sales, and loyalty programs. Soft skills are particularly important for the roles Bell is hiring for. Veronique LaCasse, its Senior Manager of Recruiting, Training and Onboarding, explains:
“Agents now have to use multiple systems and need to support technical, sales, loyalty, and customer service all at once. They need to have technical skills, people skills, and be able to navigate the processes to solve customer problems, all while the customer is on the line. That’s why it’s critical that they show the foundational soft skills required on the first day on the job and reassure and prompt the client through all this, doing it in a certain amount of seconds and keeping good performance metrics. So the agent’s adaptability and resilience skills are even greater in this position than they used to be. Multitasking through high-quality contacts and performance must be at the forefront of their day to day.”
Given how critical soft skills are to the success of Bell’s employees, their hiring teams needed a reliable way to measure these consistently and at scale. To get the data on soft skills that they needed, Bell turned to HiringBranch to leverage its AI soft skill assessment. This move paid off. Aside from saving time and curbing drop-off with new hires, Bell has been able to rely on the HiringBranch AI skills assessment to correlate agent skills in the hiring process to proficiency on the job and, ultimately, a return on their investment.
Commenting on being a finalist, HiringBranch said, “In a compelling showcase of technological advancement and real-world impact, HiringBranch has submitted its 2025 E-Assessment Awards entry, Bell: Reducing Drop-off, Attrition and Hiring Costs Using the HiringBranch AI Skills Assessment, to the inaugural AI in Assessment Innovation category. The nomination highlights a powerful collaboration with Bell, one of the world’s leading telecommunications companies, and illustrates how AI assessments are reshaping talent acquisition at scale.
The e-Assessment Association, a vanguard of excellence in educational technology and assessment, selected HiringBranch as a finalist following a rigorous review process. This marks the second consecutive year the company has achieved finalist status—an uncommon distinction that underscores its sustained commitment to research, development, and measurable client success.
For HiringBranch, the recognition not only affirms the effectiveness of its Soft Skills AI™ assessment but also signals a broader shift in how enterprises approach hiring—moving away from traditional models in favor of scalable, data-backed evaluation tools.
‘We’re honored to be recognized by such an esteemed organization,’ said the HiringBranch Chief Research and Development Officer, Assaf Bar-Moshe, PhD, in a statement. ‘This nomination is a testament to the dedication of our researchers, the trust of our partners at Bell, and the growing global appetite for more intelligent, equitable hiring solutions.’ ”
MTS with MTS’s AI-powered IELTS Mock Test platform with human-like avatar Speaking test ‘Examiners’
Project Summary
MTS’ AI-powered IELTS Mock Test platform (https://ielts.mtsglobal.uk.com), launched in October 2024, is an advanced e-assessment tool designed to replicate the real IELTS on Computer English language test experience. With millions of people taking the IELTS test annually, our platform provides a valuable opportunity to practise the test online. It offers AI-generated instant indicative results and detailed feedback on Speaking and Writing, helping test-takers identify areas for improvement. Many users report increased confidence and better performance after using our platform. Additionally, test-takers can gauge their readiness for the IELTS exam; lower-than-expected scores may signal the need for additional preparation before sitting the real exam. Our platform integrates Large Language Models (LLMs), specialised e-assessment algorithms, and expert ‘human-in-the-machine’ feedback to deliver reliable indicative IELTS band score predictions across all components of the exam.
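To illustrate how component scores roll up into a headline result, here is a minimal sketch of an overall-band calculation using the published IELTS rounding convention (the mean of the four skill bands, rounded to the nearest half band, with averages ending in .25 or .75 rounding upward). How the platform derives its indicative scores internally is not described here, so treat this purely as an illustration.

```python
import math

def overall_band(listening: float, reading: float,
                 writing: float, speaking: float) -> float:
    """Mean of the four skill bands, rounded to the nearest half band.
    Averages ending in .25 or .75 round upward, per the published
    IELTS convention."""
    mean = (listening + reading + writing + speaking) / 4
    return math.floor(mean * 2 + 0.5) / 2

# Example: 6.5, 6.0, 5.5 and 7.0 give a mean of 6.25,
# which reports as an overall band of 6.5.
print(overall_band(6.5, 6.0, 5.5, 7.0))
```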
Key features:
• Instant indicative IELTS band scores for mock tests covering all four components of the IELTS test (Listening, Reading, Writing, and Speaking).
• AI-powered avatar ‘Examiners’ with various accents, offering a near-real IELTS Speaking test experience that reflects the variety of English accents test-takers may encounter on test day.
• Auto-generated improved versions of Writing task essays for reflective learning.
• Instant feedback on Speaking and Writing, analysing task response, coherence, lexical resource, and grammar.
• Progress tracking over multiple attempts.
• Intuitive and user-friendly interface for easy navigation of the platform.
Benefits:
• Familiarity and confidence: Simulating the IELTS test environment helps test-takers feel more prepared and confident, often leading to better performance.
• Early readiness insights: The platform answers the crucial question, “Am I ready for IELTS?” by assessing English proficiency before the real exam.
• Personalised learning: Test-takers receive targeted recommendations to strengthen weaker areas while building on existing strengths.
• Time and cost efficiency: Reduces reliance on costly preparation courses or 1-2-1 teaching, and minimises the number of potentially unnecessary and costly exam retakes.
• Convenience: Available 24/7, and accessible from anywhere with a stable internet connection on a desktop or laptop.
• A shift away from old-style mock tests: Our platform moves beyond static mock tests, unengaging ‘model answers’, and old-fashioned teaching methods by providing an interactive and engaging e-assessment experience.
We are immensely proud of our innovative AI-powered IELTS Mock Test platform. By delivering accurate, actionable insights and enhancing test-takers’ confidence, it provides an accessible and effective way for people to prepare for their all-important IELTS exam – whether for academic, professional, or migration purposes.
Commenting on being a finalist, MTS said, “We are delighted that MTS has been shortlisted as a finalist for the e-Assessment Association’s ‘AI in Assessment Innovation Award’ 2025. As a global exam services company dedicated to raising standards and redefining assessment experiences, this recognition is an important milestone for us.
Our finalist product, an AI-powered IELTS Mock Test platform with a human-like digital avatar for the Speaking Test, brings together advanced AI, intuitive feedback to guide user learning, and an IELTS Mock Speaking Test that is immersive and human-like.
Being acknowledged by the e-Assessment Association as a finalist for this prestigious award affirms our commitment to harnessing cutting-edge technology in service of quality, access and learner confidence. This recognition strengthens our mission to help our students get the outcomes they need through our high-quality IELTS exam preparation offering, and reinforces our position as a forward-thinking leader in the global assessment landscape.
We are grateful to the e-Assessment Association for this opportunity and for their continued commitment to spotlighting innovation in the sector.
Finally, we’d like to thank the team at MTS who brought this vision to life. Your expertise and creativity made this achievement possible.”
Sentira XR with Pioneering Fair, Scalable, and AI-Driven VR Assessments
Project Summary
Sentira XR’s AI-driven VR assessment platform is transforming competency evaluation in medical education by integrating advanced artificial intelligence with immersive virtual reality. Our dual AI approach combines AI-driven virtual patients, powered by conversational AI, with AI-based assessment analytics, ensuring authentic student-patient interactions and objective performance evaluation.
The platform addresses critical challenges in medical training, including subjectivity in grading, scalability limitations, and the lack of immediate feedback. Traditional assessments often fail to capture real-world decision-making and procedural skills. Our solution eliminates these shortcomings by providing real-time, data-driven feedback, enhancing assessment fairness, reliability, and scalability.
Student feedback highlights the platform’s impact: 88.7% of learners found it helpful, 92.9% reported increased confidence in speaking with real patients, and 89.3% believed it improved their clinical placement effectiveness. Additionally, institutions benefit from reduced resource dependency, lower costs, and a streamlined assessment process.
Beyond educational benefits, our platform aligns with sustainability initiatives. A doctoral research study, undertaken using our solution and published in the British Journal of Anaesthesia, highlights VR’s potential to reduce the carbon footprint of medical training by minimising travel-related emissions, resource consumption, and operational costs. These environmental advantages extend to assessment, supporting institutional efforts towards sustainability.
Our development processes take W3C accessibility guidelines into consideration to enhance inclusivity in XR environments. Additionally, we use AI to generate ethnically diverse avatars, with approximately 40% of our simulations featuring a patient from an ethnic minority background. This promotes cultural competence and ensures learners interact with a diverse range of patient scenarios.
By offering remote accessibility, ensuring compliance with GDPR and ISO 42001 standards, and embedding ethical AI practices, our platform is setting a new benchmark in AI-driven assessment. Sentira XR’s innovation represents a scalable, fair, and transformative approach to medical training, equipping the next generation of healthcare professionals with the skills they need for real-world success.
Commenting on being a finalist, Sentira said, “Being shortlisted for the AI in Assessment Innovation Award is a valuable recognition for Sentira XR and the work we’ve done to integrate AI and VR in medical and healthcare training. Our platform addresses key challenges in traditional assessment methods, such as subjectivity, scalability, and real-world applicability, by combining AI-powered virtual patients with real-time performance analytics.
This recognition affirms the relevance and potential of our solution to improve educational outcomes and help make competency-based training more accessible and effective. It also brings attention to the role that AI and VR can play in transforming assessment in healthcare education.
We appreciate the opportunity to share the impact of our work and the support we’ve received from educators, institutions, and clinical professionals who have provided feedback and collaborated with us along the way. This recognition highlights the collective effort of our team, partners, and stakeholders who have helped make this platform a reality.
As we continue to grow and refine our platform, we are motivated to keep advancing the capabilities of our solution and ensure we are making a positive impact on healthcare education.”
Smartail Pvt Ltd with Deepgrade AI: Revolutionizing Handwritten and Descriptive Assessment Grading and Analytics Through AI
Project Summary
Revolutionizing Assessments with Deepgrade AI
Smartail’s Deepgrade AI is an advanced assessment solution that automates the grading of handwritten and descriptive responses with unparalleled accuracy and scalability. By leveraging cutting-edge AI, natural language processing, and machine learning algorithms, Deepgrade addresses a critical bottleneck in education—manual grading of complex and subjective answers.
In 2023-2024, Deepgrade AI ran a pilot across 8 schools in Tamil Nadu, India, belonging to a large school chain, and a medical university in Bangalore, Karnataka, grading more than 10,000 handwritten answer scripts in subjects including science, math, English, social science, and medicine (pathology). The AI demonstrated 93% grading accuracy compared to manual evaluation, reducing grading time by 70%. Educators benefited from detailed analytics, including learning gaps and performance trends, enabling targeted interventions.
The solution is now operational in 70+ K-12 schools and institutions, serving 40,000+ students daily across CBSE, ICSE, Cambridge, and State boards. With proven global scalability, Deepgrade is also piloting in the UK, Oman, and Singapore. By combining innovation with educational impact, Deepgrade AI is setting a new benchmark for assessments worldwide.
Commenting on being a finalist, Smartail said, “We are deeply honoured to be shortlisted as a finalist for the prestigious e-Assessment Awards 2025. This recognition is a powerful validation of our mission at Smartail to reimagine assessment through our deeptech-powered platform, Deepgrade AI. Being among global leaders in educational innovation reinforces our belief that the future of learning hinges on intelligent, inclusive, and equitable assessment practices.
This moment is not just a milestone for our organisation but a tribute to the relentless efforts of our cross-functional team, academic partners, and forward-thinking educators who have been part of this journey. From humble beginnings in India to pilots now running in UK schools, our platform has been built with a clear purpose — to empower educators and unlock student potential by transforming how handwritten assessments are graded, analysed, and acted upon.
The recognition serves as a significant boost to our global outreach strategy and accelerates our vision of supporting student learning across borders, especially in underserved regions where digital equity is a challenge. We are incredibly grateful to the eAA jury for this opportunity, and we extend heartfelt thanks to the schools, universities, and stakeholders who believed in our solution.
We look forward to the final round with immense excitement and renewed commitment — to deliver trust, accuracy, and actionable insights through AI-driven assessments. This nomination is not just an honour; it is a responsibility to keep pushing the boundaries of educational transformation.”
Imperial College London with AI Innovation in Formative Assessment: Local LLMs for Enhanced Medical Education
Project Summary
Imperial College London’s MSc Molecular Medicine programme has pioneered an innovative approach to developing formative assessment in collaboration with medical professionals while maintaining strict data privacy and security through local instances of open-source Large Language Models (LLMs).
The MSc Molecular Medicine programme features an intensive curriculum of 60 lectures delivered by 52 different academics over just 10 weeks. This diversity of instructors and rapid pace created a challenge: ensuring consistency and quality in formative assessments across all sessions. Recognising this gap, I, as an e-learning technologist, devised a pioneering AI-driven solution, inspired by a presentation I attended at Imperial on Ollama, a tool for running language models locally, and by the pressing need for more online materials for MSc Molecular Medicine students.
With the programme director’s support, I leveraged Ollama to autonomously generate high-quality quizzes, which were then refined through academic review before being transformed into dynamic, interactive Rise e-modules. Crucially, by providing academics with AI-generated drafts, I eliminated the most difficult step—the blank page—making it significantly easier for them to contribute materials.
This approach not only streamlines assessment creation but also enhances learning by providing students with engaging, AI-powered revision tools—all while safeguarding sensitive academic materials. By integrating cutting-edge AI with pedagogical best practices, this project redefines how learning materials and formative assessments are developed, setting a new standard for innovation in education.
Technical Innovation in Assessment
Unlike conventional cloud-based AI solutions, our project utilizes Open WebUI and Ollama—a locally hosted LLM platform that processes all assessment data on institutional hardware. This approach ensures:
1. Complete data privacy—no lecture materials are uploaded to external servers
2. No training of external models with our proprietary content
3. Secure handling of sensitive research data when creating assessments
The formative assessment workflow optimizes faculty time while maximizing educational value (a minimal code sketch of the generation step follows the list):
• Lecture slides, lecture transcripts and supplementary readings are collected
• Materials are processed through the local LLM instance
• Structured formative assessments are generated as Word documents
• Faculty perform accuracy verification before student release
• Approved assessment materials are formatted into interactive e-modules in Articulate Rise
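To make the generation step concrete, the sketch below sends collected lecture material to a local Ollama instance over its default REST endpoint and asks for draft questions. The model name, prompt wording, and input file are illustrative assumptions; the project’s actual prompts and output handling are not described in the entry. Crucially, the request never leaves institutional hardware.

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing is sent to external servers.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3"  # illustrative open-source model; use whatever is installed locally

def draft_quiz(lecture_text: str, n_questions: int = 5) -> str:
    """Ask the local LLM for draft questions that faculty verify before release."""
    prompt = (
        f"From the lecture material below, write {n_questions} multiple-choice "
        "questions with four options each, indicating the correct answer.\n\n"
        f"{lecture_text}"
    )
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]  # draft only; faculty check accuracy

if __name__ == "__main__":
    with open("lecture_transcript.txt") as f:  # hypothetical input file
        print(draft_quiz(f.read()))
```

In the workflow above, drafts like these would then be saved as Word documents for the faculty verification step before being built into interactive Articulate Rise e-modules.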
Assessment Impact
Student response to the AI-generated formative assessments has been overwhelmingly positive. The assessment e-modules provide:
• Consistent practice opportunities through scaffolded quizzes
• Knowledge check questions reinforcing key concepts
• Open-ended formative tasks promoting deeper engagement and reflection
So far, eleven comprehensive assessment e-modules have been created covering advanced topics in molecular medicine.
Innovation in Assessment Accessibility
This approach democratizes access to advanced formative assessment technologies. By utilizing open-source models on local hardware, we demonstrate how institutions with limited resources can implement AI-enhanced assessment without expensive vendor contracts or cloud computing costs.
Commenting on being a finalist, Imperial College said, “Being shortlisted for the AI in Assessment Innovation Award is an incredible honour and a meaningful recognition of the potential that innovative, ethical uses of AI hold for higher education. It reflects not just the success of a single project, but a broader commitment at Imperial College London to harness emerging technologies in ways that are practical, secure, and deeply student-focused.
This project began as a response to a real challenge—supporting time-poor academics in delivering high-quality formative assessment at scale. To see it recognised in this way affirms that AI can genuinely empower educators, enhance student learning, and still uphold the principles of academic integrity and data security that are core to postgraduate education.
I’m especially grateful to Dr Paras Anand, Course Director of the MSc Molecular Medicine programme, whose support from the very beginning gave this project momentum and credibility. My thanks also go to Agata Sadza, Head of E-Learning at the Digital Education Office, with whom I first shared the idea—her encouragement was a key catalyst in turning it into reality. I’d also like to thank Adrian Cowell, Innovation Lead, who introduced me to Ollama and installed it on my machine, and Tya Asgari, Digital Education Lead and an exceptional line manager, who pushed me to share this work with the world.
This recognition is also a tribute to the many academics who generously contributed their time, materials, and feedback. Their openness and collaboration made it possible to co-create something scalable, transformative, and impactful.
Ultimately, being shortlisted inspires us to continue exploring what’s possible at the intersection of education and technology—and to keep building tools that serve both educators and students with thoughtfulness, creativity, and innovation.”
For more information on all finalists in the 2025 International e-Assessment Awards, visit our finalists webpage.