2024 Winners and Finalists

The e-Assessment Awards Winners and Finalists

We are delighted to present the finalists and winners of the 2024 e-Assessment Awards.

Thank you to our 2024 Awards Headline Sponsor

Thank you to the British Council, which has sponsored the International e-Assessment Awards since 2022.

We are incredibly excited to work with the British Council worldwide to spread the news about our Awards programme and draw in even more international examples of best practice, excellence and innovation.

Our Awards attract entries from right around the world - from Asia, Africa, Europe, Oceania and North America.

Lifetime Contribution Award

Winner: John Kleeman

As the Founder and Chairman of Questionmark, John Kleeman pioneered computerised assessments, including the development of the world's first web-based assessment software. His leadership has not only shaped the industry but also advanced standards initiatives, such as IMS Question and Test Interoperability, in collaboration with prestigious organisations like ATP, BSI, and ISO.

His career trajectory showcases his commitment to innovation. Notably, he wrote the original Questionmark software and later transitioned to web-based assessment, reflecting a career-long responsiveness to technological advancements.

John's involvement in standards initiatives and his leadership roles underscore his continuous engagement with evolving industry practices and technologies. His contributions to standards development and his work on diverse committees highlight his proactive approach to staying abreast of advancements in the field, and his focus on test security and on diversity, equity and inclusion in assessments demonstrates his responsiveness to emerging trends in the sector.

Best Practitioner of the Year (Team) Award

Winner: Education Quality and Accountability Office (EQAO) with Modernization and Transformation of Ontario's K-12 Large-Scale Assessment program

EQAO uses innovation to maximize the accessibility and inclusiveness of its assessments so that students do not experience barriers when demonstrating their learning.

The modernized assessments introduced over the past two years include innovative features that meet the needs of the hundreds of thousands of students who participate each year. These features are the product of an extensive consultation process with teachers, administrators, and advocacy groups who have specialized knowledge on how to remove barriers for students. Their advice informs product development and careful testing procedures to ensure the best possible student experience.

Best Practitioner of the Year (Individual) Award

Winner: Dr Liberty Munson

Dr Liberty Munson has contributed to the assessment community consistently throughout her long career, and her contributions have always been above and beyond, easily digestible and accessible to any member of that community. One such contribution met a real need: guidance for writing assessment items (questions). Dr Munson developed a guide that helps item writers take diversity and inclusion into consideration as they create exam and assessment content.

Best Workplace or Talent Assessment Project Award

Sponsored by City & Guilds

Winner: Mercer | Mettl & Sky Italy's Butterfly Project

The Butterfly Project is a tool created for the prominent media and entertainment company Sky Italy to assist with its organisational transformation; it was highly innovative and very successful in its implementation. The landscape of broadcasting and customer service is evolving, and our solution was designed to make the process of internal employee mobility smoother and to help employees reskill for new roles.

The project’s innovative nature is evident: The tool can measure proficiency in future roles, incorporate learning assessments to optimise individual learning journeys and proactively address the company’s future needs.

The solution is robust, reliable and scalable thanks to a meticulous customisation process, close collaboration with subject-matter experts and successful implementation on a large scale. All these factors contributed to the Butterfly Project’s ability to effectively meet the organisation’s requirements and accommodate its evolving needs.

Finalists

HiringBranch with Skill Assessment Implementation Leads to 400% Reduction in Bad Hire Rate.

HiringBranch delivers significant value to the organizations that have implemented its assessment technology. One global organization saw a 400% reduction in bad hire rates and millions of dollars in savings from using the HiringBranch skills-based hiring assessment.

The HiringBranch assessment is a robust, reliable, and ethical solution for hiring teams to scale their hiring efforts and analyze a high volume of candidates easily and effectively.

telc gGmbH with Empowering EU Talents: Comprehensive Digital Language Assessments.

The objective was to develop a digital language assessment tool that offers valid, reliable, and fair testing across all 24 official EU languages, accommodating special requirements and varying time zones.

The development of an online testing platform has transformed the way EU employees and agency staff can demonstrate their language competencies. This state-of-the-art assessment tool is meticulously designed to align with the EU's 2+1 language skills framework and with the Common European Framework of Reference (CEFR). Content and tasks are culturally pertinent and directly relevant to the professional environment of the EU.

A pivotal advantage of this digital system lies in its uniform application of evaluation criteria across all 24 languages. This ensures that every language is assessed with consistent fairness. All receptive items are meticulously calibrated to match CEFR standards, while the platform empowers assessors to appraise test-takers' productive responses with more uniformity. This level of calibration in marking, unattainable with traditional paper-based assessments, underscores the online model's superiority in fairness, accuracy, and reliability in multilingual evaluation.

Best Transformational Project Award

Winner: Pearson with Remote Invigilation Service - International GCSEs

Online schools are forging ahead in supporting a growing demand for flexible education that can be accessed from anywhere in the world. However, one of the biggest challenges for students, parents/carers and the schools themselves, remains high-stakes assessment.

Traditionally, these students must source brick-and-mortar exam centres to take their exams, a setting that's unfamiliar and different to their ways of working. For many King's InterHigh students and families, this brings additional geographical, physical and psychological/emotional pressures. In response to this emerging challenge, Pearson and King's InterHigh delivered a first-of-its-kind pilot in summer 2023, with 150 students choosing and successfully completing their Pearson Edexcel International GCSE exams from their chosen setting via Pearson-enabled remote invigilation.

By harnessing new technologies, we enabled students in exceptional circumstances – many with SEND, anxiety, health issues or geographical challenges who otherwise may not have been able to sit their exams – to take their exams either onscreen or on paper in their chosen environment and with adaptations such as extra time and rest breaks.

Remote invigilators monitored students through three different camera views from three different recording devices, covering their screen or printed exam paper and their location, and utilising their webcam and microphone. There was also a chat function between students and remote invigilators where students could ask for assistance, for example to check their set-up or to indicate when they needed rest breaks. We worked with each student at every step, from exam set-up to submitting their exams. In total, over 19 days, 1,554 remotely invigilated exam sessions took place across 16 different qualifications.

All 150 students successfully completed their exams remotely and received the grades they deserved, enabling their next steps in education. As with all our Pearson Edexcel exams, we're committed to ensuring fairness and security and to following consistent processes, regardless of where or how the exam is completed. To ensure fairness, students involved in the pilot could not access or download their papers or start their exams until 30 minutes after students sitting exams in schools.

Finalists

Open Assessment Technologies with Building the Evidence Base for Digital Assessment: Click Learning Addresses Literacy & Numeracy Outcomes for Learners in South Africa

Addressing South Africa's literacy crisis and longstanding youth unemployment rate requires a genuine transformation in the way student literacy and numeracy skills are measured and taught. The partnership with TAO empowers Click Learning to acquire vast, on-demand student performance data from underserved districts in South Africa and address literacy and numeracy skills in ways they never could before. Previously impossible to access, this data reveals crucial insights into student performance, helping guide remediation programs for student success, while also holding the potential to influence new standards and transformations to education policies.

A2i Aspire to Innovate with Noipunno

Noipunno is a transformational platform as it provides a comprehensive evaluation of students' learning and competencies, going beyond traditional assessment methods. Aligned with the new National Curriculum, it ensures consistency and relevance in evaluations. Teachers are empowered as primary users, with the app significantly reducing manual workload during assessments, allowing them to focus more on teaching. Real-time monitoring and data retrieval at various administrative levels contribute to effective decision-making. Noipunno's transformative impact lies in its comprehensive approach, alignment with standards, teacher empowerment, real-time monitoring, adaptability, user-friendly interface, self-sufficiency features, extensive scale, and support for career guidance.

Education Quality and Accountability Office (EQAO) with Modernization and Transformation of Ontario's K-12 Large-Scale Assessment program

EQAO's large-scale assessments have undergone substantial transformation from paper-based tests to shorter, more student-friendly e-assessments. Ontario's K-12 education system is made up of 72 school districts and approximately 4,900 schools. EQAO's digitalized assessment solution had to be responsive to diverse regional needs during a time of rapid change. The modernization of Ontario's assessments has greatly improved students' experience and increased the flexibility of administration for schools. The digital transformation has also led to improvements in the collection, scoring and reporting of data that has informed public policy and improvement planning in literacy and mathematics.

The British Council with British Council Primary English Test

The Primary English Test is a test for the 9-12 young learner age group which contains a unique combination of features with the potential to introduce a new era in assessment and language learning. It is the first test for this age group to be immersive and scenario-based and to use machine-driven automated scoring. This use of Artificial Intelligence in a test delivered digitally, flexibly and cost-effectively on site in schools around the world could transform young learner assessment practices, especially in terms of multimodal presentation and the testing of productive language skills (speaking and writing) at scale.

Best Research Award

Sponsored by TCS iON

Winner: Duolingo with Measuring Variability in Proctor Decision Making on High-Stakes Assessments: Improving Test Security in the Digital Age

This submission is based on peer-reviewed research published in Educational Measurement: Issues and Practice, which provides novel insights into the variability of proctor decision making in a remotely proctored, high-stakes testing environment. In particular, it highlights how differences in proctor judgments can impact the fairness and integrity of high-stakes assessments, and it underscores the importance of mitigating this variability to ensure credible and reliable assessment outcomes, an area previously under-explored in e-assessment research.

Our results show that (1) proctors systematically differ in their decision making and (2) these differences are trait-like (i.e., ranging from lenient to strict), but (3) systematic variability in decisions can be reduced. Based on these findings, we recommend that test security providers conduct regular measurements of proctors’ judgments and take actions to reduce variability in proctor decision making.

The paper can be found here: https://onlinelibrary.wiley.com/doi/10.1111/emip.12591
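
To make the recommendation concrete, here is a minimal, hypothetical sketch of the kind of routine measurement of proctors' judgments the authors advocate. It is not the paper's model; the decision data below are invented for illustration, and a real monitor would also adjust for differences in the sessions each proctor happens to see.

```python
# Hypothetical sketch: track per-proctor flag rates with uncertainty intervals.
# Wide intervals warn against over-interpreting a proctor's rate at small n.
from collections import defaultdict
import math

# (proctor_id, flagged) pairs from past sessions -- invented example data
decisions = [
    ("p1", 1), ("p1", 0), ("p1", 1), ("p1", 1),
    ("p2", 0), ("p2", 0), ("p2", 1), ("p2", 0),
]

counts = defaultdict(lambda: [0, 0])  # proctor -> [flags, sessions]
for proctor, flagged in decisions:
    counts[proctor][0] += flagged
    counts[proctor][1] += 1

Z = 1.96  # 95% confidence
for proctor, (flags, n) in sorted(counts.items()):
    rate = flags / n
    # Wilson score interval for a binomial proportion
    centre = (rate + Z * Z / (2 * n)) / (1 + Z * Z / n)
    half = Z * math.sqrt(rate * (1 - rate) / n + Z * Z / (4 * n * n)) / (1 + Z * Z / n)
    print(f"{proctor}: flag rate {rate:.2f} (95% CI {centre - half:.2f}-{centre + half:.2f})")
```

Persistently high or low rates relative to peers would be the signal, in the spirit of the paper's "lenient to strict" trait, that a proctor's judgments deserve calibration or retraining.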

Finalists

AlphaPlus Consultancy Ltd with Welsh Government National Reading and Numeracy Onscreen Personalised Assessments (OPAs)

e-Assessment is a mature technology. It works. Formative assessment is shown to have significant beneficial impact on learning but is difficult to scale.

The OPAs are an innovative formative assessment solution: a national e-assessment system based on adaptive assessment approaches, rolled out to all learners from years 2 to 9.

This is a serious attempt to deploy best practice in a national live educational environment. It has been informed by research from start to finish. This application is not about a single research study, but about the ongoing deployment of research to support a national government's educational objectives.

Pearson with "I can read without letters doing backflips": understanding the SEND learner experience and shaping inclusive digital assessments

This research programme investigates how we can improve the accessibility of digital assessments to ensure they are as fair, valid and fit-for-purpose as possible. In emphasising and focusing on student voice, the impact on and experiences of SEND learners in relation to digital learning and assessment, our research contributes original thought and new insights into how we can better design and develop digital assessments – a currently under-researched and misunderstood topic. We believe this can add to, and enrich, existing bodies of research within the assessment community and is already contributing to tangible improvements to assessment experiences for SEND learners.

University of Massachusetts Amherst with The Massachusetts Adult Proficiency Tests (MAPT)

The MAPT is a multistage-adaptive assessment of mathematics and reading that leverages adaptive testing technology to meet accountability and educational demands. We have done extensive research on designing, developing, and validating the assessments.

This research is documented in a 280-page technical manual, about 50 pages of which describes validation research and results. It is an outstanding example of building and validating a 21st-century assessment, and includes research on standard setting, and five sources of validity evidence.

Most Innovative Use of Technology in Assessment Award

Sponsored by SQA

Winner: A2i Aspire to Innovate with Noipunno

Noipunno revolutionizes education assessment in Bangladesh, integrating technology seamlessly into the educational landscape. Developed by the National Curriculum and Textbook Board and the Ministry of Education, it employs a comprehensive digital platform for assessing secondary-level students. With features like real-time data retrieval, behavioral assessments, and automated report cards, it ensures transparency and reduces the manual workload for teachers. The platform's adaptability to the new curriculum and its ability to manage vast amounts of student and teacher information demonstrate an innovative use of technology. Noipunno not only enhances the educational experience but also contributes to career guidance and economic development.

Finalists

Learnosity with Author Aide

The newest offering from Learnosity, Author Aide is an AI-assisted authoring tool that enables content creators to produce higher-quality questions, faster.

Powered by GPT-4, Author Aide helps authors put their expertise to greater use while increasing their output by as much as 10x.

We've carefully designed Author Aide to meet the highly specific needs of assessment authors. Following months of rigorous testing, we've released an AI-assisted authoring tool that seamlessly integrates into the authoring workflow and offers content creators the fine-tuned control required to create, review, and refine content for assessments.

Our AI-enhanced UI enables content creators to reach high standards in record time. With its built-in capability to tailor content complexity based on Bloom's taxonomy, authors can rapidly generate consistently high-quality, high-volume item banks in any subject, at any level, in multiple languages.

And by using their own learning material as training data, they can align that content with specific curricula, training guides, and standards.

We ensure assessment organizations can be confident in their content. Security is guaranteed, as OpenAI will never store or "learn" from the prompts entered by our users.

BCS, The Chartered Institute for IT with Using generative AI for instant, personalised feedback

Generative AI has advanced rapidly in recent years, raising ethical concerns about risks ranging from security breaches to discrimination. By learning to make ethical decisions, organisations can protect their reputation, their customers and the public from the dangers, and they need expert support to do this. To help companies and tech professionals get their approach right, we recently created our own Foundation Certificate in the Ethical Build of AI.
As part of the new Lord Mayor of London's (Michael Mainelli) Ethical AI Initiative, the course teaches professionals how to manage the risks of designing and building AI-powered systems by applying a clear set of ethical principles. In collaboration with our supplier, Tintisha Technologies, as well as specialist subject-matter groups and internal and external stakeholders, we have devised an AI-powered assessment solution that is safe, effective, and easily adaptable to other projects.

TCS iON with AI based CCTV Surveillance integration with Assessment Platform

TCS iON has been providing secure and scalable assessments for several years, delivering over 92% of high-stakes assessments in India. To further ensure the sanctity and security of assessments, integration with CCTV surveillance monitoring has been made an integral part of the assessment offering. This integration enables us to provide the world's first surveillance-integrated offering in assessment. This innovation includes:
1. Round-the-clock access to live streaming of 13,500+ CCTV cameras across test centers.
2. Auto-navigation to relevant CCTV streams on occurrence of security incidents.
3. One-click navigation to candidate/room CCTV.
4. AI-based monitoring of invigilators and candidates.

This world-first surveillance-integrated digital assessment solution helps differentiate our existing platform. The integration provides unique user experiences and unlocks value with complete traceability between candidates, nodes and video feeds. It provides health, coverage and blind-spot reports, and analytics-driven alerts ensure more efficient proctoring. The integration of surveillance, automation, governance controls and analytics will provide new experiences and dimensions for our solutions.

Tai Kwong Hilary College with Equipping Students for an Interconnected Future: An Innovative VA, ICT and AI Solution for English Language and Artistic Development

Our school has been exploring AI learning opportunities since the 2021/22 school year. We originally planned Microsoft AI certification, but the emergence of generative AI led us to design a cross-disciplinary project for Grade 7 students instead. Teachers from ICT, English, and Visual Arts collaborated on a term-long project incorporating generative AI into their lessons. Students learned AI fundamentals and used Procreate for digital artwork. Their English descriptive writing was input into Midjourney to generate images. Prompts were also used to fuel generative storytelling and art-reference generation. Lessons were assessed formatively and summatively. The project strengthened interdisciplinary learning, hands-on skill building, and critical thinking. Moving forward, we aim to involve more departments and tools while designing new assessments that support our evolving teaching strategies with generative AI.

Best International Implementation Award

Winner: Australian Council for Educational Research with ACER Maple - Enhancing Global Educational Assessment

ACER Maple's involvement in the Programme for International Student Assessment (PISA) 2025 represents a significant endeavour in global education evaluation. PISA is a large-scale international survey that is highly influential in shaping national and global educational policies and practices. PISA now covers over 90 countries, assessing a substantial cohort of more than 600,000 students every three years. At the core of this operation is ACER Maple, a progressive web application designed to streamline participant sampling, assessment allocation, and participant tracking. ACER Maple enhances the transparency and efficiency of sampling activities, which is crucial to ensuring the validity of PISA data and the comparability of its outcomes.

The redevelopment of ACER Maple was necessary due to the ever-increasing scope and size of PISA. This redevelopment included incorporating a configurable sampling algorithm to cater to the diverse educational systems and specific requirements of various clients. A key aspect of this development phase was integrating automation into critical processes, alongside an emphasis on user experience and efficient data processing. The project employed continuous integration and deployment methods, actively involving clients in the development process, and swiftly incorporating their feedback. This collaborative approach contributed to the refinement of the software and fostered strong client relationships, as indicated by positive feedback and the efficiency of training and data submission processes.

Managing the deployment of ACER Maple across different countries presented challenges, primarily due to the need for custom software configurations for each client's national options. To address this, the team implemented an intuitive survey configuration interface with multiple sub-menus for data handling and sophisticated sampling processes. The effectiveness of these solutions was reflected in the quicker data submission timelines and the positive feedback from clients.

The implementation of ACER Maple was characterised by clear success metrics, including the effectiveness of user training and the quality of data submissions. The automation of critical sampling processes significantly improved the validity and reliability of the data. Support services were robust, featuring webinars, manuals, and a 24/7 helpdesk, ensuring smooth adoption and high satisfaction levels among clients.

Recognising the diverse cultural and regional needs in the PISA programme, ACER Maple included extensive customisation features such as multilingual support and right-to-left text functionality. These features, along with streamlined operational processes, minimised non-technical barriers and enhanced accessibility across different regions.

ACER Maple faced several technical challenges, including bandwidth limitations and data protection laws, necessitating flexible deployment options. The team addressed these challenges through a variety of installer options, ranging from fully offline solutions to cloud-based, encrypted systems. The global helpdesk played a crucial role in providing swift and effective issue resolution.

The scale and complexity of PISA 2025 are effectively managed by ACER Maple, which demonstrates its capability in handling large-scale, international educational assessments. This project stands out for its client-centred development approach, technical flexibility, and efficient management strategies, successfully navigating the challenges inherent in global educational evaluations. ACER Maple's processing of millions of student records, while adeptly handling intricate data requirements, sets a new benchmark in the field.

Finalists

Open Assessment Technologies with MEXCBT & TAO: Implementing a Standardized Approach to Digital Assessment in Japan

The implementation of national computer-based testing (CBT) through TAO and the Japanese Ministry of Education's MEXCBT program has had a transformative impact on Japan's education landscape. The strategic partnership between the TAO team in Luxembourg and Uchida Yoko / Infosign addressed Japan's digital transformation needs in education by adopting a standardized approach to CBT. The project demonstrated unparalleled flexibility, seamless integration, and cost-effectiveness. Despite cultural and technical challenges, proactive engagement, meticulous planning and adherence to 1EdTech standards ensured successful implementation, which has not only accelerated technology adoption but also helped establish a set of recognized standards for global cooperation in EdTech.

Talview with Global Proctoring Initiative: Navigating Challenges, Ensuring Integrity

Cambridge aimed to address the urgent demand for online proctoring solutions amidst the pandemic while ensuring the integrity, security, and user experience of assessments worldwide. Talview emerged as the chosen partner due to its AI-powered proctoring solution, demonstrating compliance with GDPR and other data protection regulations. The decision to collaborate with a single provider aimed to streamline recommendations and ensure consistency for agents seeking remote proctored exams.

Beyond technical considerations, the project acknowledged the diverse levels of understanding and knowledge about remote proctoring across regions, as well as resistance in certain cultures. Localization efforts, such as creating videos with local language subtitles, were implemented to cater to regional preferences. Additionally, targeted training and guidelines were designed for candidates and agents, addressing privacy and data security concerns specific to regional and cultural sensitivities.

The global implementation of the program marked a significant achievement, engaging over 5,000 candidates using the Talview solution. This endeavor involved managing a vast network of agents and adapting to various geographical, cultural, and regulatory environments. Collaboration with more than 200 Cambridge agents worldwide underscored a truly global implementation, with regions spanning from Japan to India, the Middle East, Latin America, and Europe. Overcoming the complexities of designing a solution and process that catered to diverse regional requirements proved challenging but was successfully executed, emphasizing standardization as crucial for serving a global audience effectively.

Best Formative Assessment Project Award

Winner: BCS, The Chartered Institute for IT with Using generative AI to assess open-response questions

The Future of Jobs Report 2023 indicates that by 2027, 43% of work tasks will be automated as a result of innovation, and it is therefore predicted that skills such as critical thinking and evaluation, ethical decision-making, and basic AI literacy are going to be highly needed across most job roles.

Generative AI has advanced rapidly in recent years, raising ethical concerns about risks ranging from security breaches to discrimination. By learning to make ethical decisions, organisations can protect their reputation, their customers and the public from the dangers, and they need expert support to do this. To help companies and tech professionals get their approach right, we recently created our own Foundation Certificate in the Ethical Build of AI.

In collaboration with our supplier, Tintisha Technologies, as well specialist subject matter groups and internal and external stakeholders, we have devised an AI-powered assessment solution that is safe, effective, and easily adaptable to other projects.

As part of the new Lord Mayor of London’s (Michael Mainelli) Ethical AI Initiative, the course teaches professionals how to manage the risks of designing and building AI-powered systems by applying a clear set of ethical principles.

By completing the course, learners are able to:

1. Gain a practical understanding of how to apply ethical thinking, principles and frameworks when developing AI applications in their own contexts.
2. Study the course in their own time online and attain a BCS professional certification.
3. Sit a final online exam, similar to many of our professional certification exams, supported by shorter formative assessment tasks within the learning.

The course is designed to encourage learners to apply their ethical thinking to given situations as they are learning, in their own time. As such, we identified an opportunity to incorporate the use of AI to evaluate learners’ responses within formative assessment activities and provide detailed, individualised feedback.

Within our solution we have incorporated the means for learners to comment on the usefulness of the feedback provided to them by the AI and to rate it out of five. This allows learners to develop their ability to evaluate generative AI output, which is essential to using it ethically. It also allows BCS to monitor and evaluate the performance of the AI through the quantitative and qualitative data generated by the user comments and ratings, so as to evaluate the user experience, the effectiveness of the tool, and how we can improve its use as a learning and assessment tool.

Many of our customers are education and training providers, and we’re eager to share our experiences to support further thinking and best practice in implementing this sort of AI use.
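
For illustration only, here is a hedged sketch of what an AI-marked formative task of this general kind could look like. This is not BCS's implementation: the model choice, rubric and prompt below are invented, and it assumes the official openai Python package with an API key in the environment.

```python
# Hypothetical sketch of AI-generated formative feedback on an open response.
# Rubric, prompt and model are invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = (
    "Assess the learner's answer against these criteria: identifies the ethical "
    "risk, applies a named ethical principle, and proposes a mitigation. "
    "Give detailed, individualised feedback; do not award a pass/fail grade."
)

def formative_feedback(question: str, learner_answer: str) -> str:
    """Ask the model for individualised feedback on one open response."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Question: {question}\nAnswer: {learner_answer}"},
        ],
    )
    return response.choices[0].message.content

# Learners would then rate the feedback out of five; storing the
# (feedback, rating, comment) triples yields the monitoring data described above.
```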

Finalists

Buckinghamshire New University with e-Formative Peer Assessment (eFPA) of oral presentations

Students believe assessment and feedback are the teacher's sole responsibility; however, research shows that peer assessment (PA) "contributes to the development of student learning and promotes ownership of assessment processes" (Bryant and Carless, 2010:3). To promote shared responsibility, the eFPA of oral presentations was included in the 2023-24 Blended Postgraduate Certificate in Practice Education (PgCPE) programme for the student practice educators (SPEs) to articulate aspects of their practice as educators in a professional multi-disciplinary environment.

In the 15-minute oral presentation, SPEs explained how they planned, implemented and reviewed an assessment for their own learners, considering the need to support learners' engagement with the process and to promote inclusivity and social justice within the assessment and the wider learning environment. The presentations took place in ClassCollaborate, the institution's web-conferencing platform in the VLE, used for synchronous communication, while asynchronous discussions took place on the Discussion Board, promoting self-regulatory skills; this design also takes into account the SPEs' situational, institutional and dispositional barriers to learning. Because the design and delivery of the eFPA, marking, reporting and storage of the recorded presentations enable evaluation from different perspectives (SPEs, course team, internal/external moderators), this initiative is considered e-assessment (Joint Information Systems Committee (JISC), 2007).
36 SPEs, a mature group with diverse backgrounds and work experiences in the health care sector, participated in the initiative from January to February 2024. For efficiency, nine virtual presentation rooms were created in ClassCollaborate, with three or four students each enrolled as moderators to enable them to autonomously manage the assessment and feedback processes in a safe, secure space. For three weeks, the SPEs received guided, hands-on experience of using the tools in the virtual presentation rooms, learning to upload slides and to record mock presentations. In addition, training on how to use the rubric contributed to the SPEs' understanding of how the course team would assess their oral presentations. Examples of constructive feedback and evaluative judgements were provided to mirror the course team's feedback practice.
Review of the recorded mock presentations, as well as the course team's weekly observations of the SPEs' engagement with the eFPA and feedback processes, was iterative throughout the period, enabling timely changes and improvements to the process.

The interim findings suggest that the eFPA empowered the SPEs to take ownership of their own learning process and to become a resource for their peers; commenting on others' work appeared to improve their understanding of assessment criteria with a focus on success, helping them to become more engaged in learning and to develop their interpersonal skills and their attitude to assessment and feedback as a shared responsibility. For the course team, eFPA can potentially reduce the assessment and feedback workload.

Best Summative Assessment Project Award

Sponsored by City & Guilds

Winner: The British Council with British Council Primary English Test

The British Council Primary English Test is a digitally-delivered general proficiency test targeted at students aged 9-12 around the world who are learning English as a Foreign Language at primary and lower secondary schools and language tuition centres. It is a 4-skills (listening, reading, speaking, writing) experience, characterised by digital multimodal tasks that assess independent and integrated English oral and literacy skills. The test is storyline-driven, with engaging animations, featuring innovative question/task types and fun elements. The assessment is designed to explore progressively deeper, more complex skills over the course of the test. The aim is to replicate the immersive experience of playing a digital game with a strong storyline and narrative. Such gamification provides enjoyment and motivation for learners (e.g., through instant auditory or pictorial feedback).
The test has been developed by global experts in English teaching and assessment in collaboration with specialist AI and platform partners. The Primary English Test takes full advantage of significant recent advances in autoscoring technology. Automatic speech recognition and scoring systems are trained to assess a diverse range of speakers.
Technical and preparation support is provided for administrators, teachers and students. There are technical and administrative guides, including an orientation video and demo. A free practice test familiarises students with the test format, and offers listening and reading feedback.
Scores are generally available to teachers instantly on submission of the test. The scoring report consists of a student certificate and a detailed breakdown of the student's test scores and CEFR levels in reading, listening, speaking and writing. There are also Lexile measures for reading and listening, and additional information including statements describing what the student is able to do in the language.
Extensive piloting and field testing generated data to verify high levels of item performance and test reliability. Data was also used to train automated scoring models according to human raters’ scores on a range of measures of communicative competence.

Finalists

Kaplan Testing Services, Kaplan International Pathways with Using computerised-adaptive testing for high-stakes English language assessments in Higher Education

The English for Academic Purposes (EAP) module is compulsory for all Pathways students and is taken by over 5,000 students each year across nine geographically dispersed Kaplan Pathways Colleges in the UK. Its assessment is incredibly high-stakes, as it is a prerequisite to securing progression to university.

Prior to the transformation, the assessment was paper-based and posed challenges with maintenance and management due to its high-stakes nature and the continuous need for resource-heavy production of multiple new assessment versions every year. The decision to transition to a digital, adaptive assessment platform was driven by the desire for greater sustainability of assessment processes for Kaplan Pathways, increased precision of results and improved personalization of the assessment experience for Pathways students.

The new approach retained the existing four discrete skills testing model used in the EAP module but replaced paper-based examinations with online adaptive KTE assessments for listening, reading, and writing, while maintaining face-to-face speaking examinations. The adaptive nature of KTE, powered by Item Response Theory (IRT), resulted in personalized testing experiences for students, reduced error rates and efficiency gains through instant, automatic scoring; a generic sketch of IRT-driven item selection follows this entry. Moreover, the new approach improved operational sustainability, increased flexibility in test administration and enhanced overall assessment security.

Addressing potential barriers to adoption, Kaplan International Pathways and KTS engaged staff and students in the validation process. Staff validated the alignment of the new assessments with the EAP module learning outcomes through careful content reviews and mapping of test constructs with the EAP curriculum. Students were consulted on their experience with the new assessment strategy, and their feedback revealed that they overwhelmingly endorsed the online delivery, remote accessibility and adaptive nature of the assessments.

After an initial rollout and scaling across all colleges, the institution observed incremental improvements in student outcomes and satisfaction levels, attributing these to reduced stress and increased personalization of the testing experience. The positive impact on student performance and module feedback led to enhanced confidence from university partners and widespread adoption of the KTE adaptive test for various additional purposes within Kaplan International Pathways, including admissions and progress testing.

Overall, the case study highlights the successful implementation of computer-adaptive assessments in a high-stakes English language module, resulting in improved sustainability, precision, personalization, and student satisfaction. It underscores the effectiveness of adaptive testing in enhancing learning measurement and test-taking experiences, ultimately contributing to the institution's educational goals and partnerships.
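
As a generic sketch of the IRT-driven item selection named above (not Kaplan's or KTE's actual implementation; the item bank and parameters are invented), an adaptive test re-estimates the candidate's ability after each response and serves the unused item that is most informative at that estimate:

```python
# Minimal sketch of adaptive item selection under a 2PL IRT model.
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# Item bank: (item_id, discrimination a, difficulty b) -- invented values
bank = [("q1", 1.2, -0.5), ("q2", 0.8, 0.0), ("q3", 1.5, 0.7), ("q4", 1.0, 1.4)]

def next_item(theta, administered):
    """Pick the unused item that is most informative at the current estimate."""
    candidates = [it for it in bank if it[0] not in administered]
    return max(candidates, key=lambda it: item_information(theta, it[1], it[2]))

print(next_item(theta=0.6, administered={"q1"}))  # -> ('q3', 1.5, 0.7)
```

Production systems layer exposure control and content balancing on top of this core loop, which the sketch omits; the payoff described in the entry (precision with fewer items, and instant scoring) comes from always testing near the candidate's current ability estimate.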

Excelsoft Technologies with Digitizing Assessment Processes at Kasetsart University, Thailand

Kasetsart University (KU) stands as a global leader in academia, offering an extensive range of undergraduate, postgraduate, and research programs spanning various fields, including Business Administration, Agriculture, Humanities, Economics, Engineering, and Agro-tech. KU aspires to be a world-class institution dedicated to education, research, and innovation aimed at achieving sustainable societal development rooted in indigenous knowledge.

The management sought a unified, centrally deployed, cost-effective solution capable of catering to diverse school needs. They aimed for a secure test delivery mechanism to safeguard their departmental item banks and tests, scalability to accommodate a large number of students taking tests simultaneously, and timely results with comprehensive visibility for top management. Considering that certain programs were offered in both Thai and English and some exclusively in Thai, the university required a solution that would enable tests and metadata in both languages, ensuring seamless administration across language preferences.

Conducting thorough market analysis, KU explored both open-source and proprietary software options. Teaming up with Excelsoft, they implemented a new platform enabling nationwide access for students to take exams online, even in areas with unreliable internet connectivity. Leveraging the fact that over 90% of students already owned iPads, KU adopted a Bring Your Own Device (BYOD) model, facilitating greater participation in exams while reducing costs associated with providing computers.

The implementation of this solution led to a swift adoption of digital summative exams, with over 72,000 items and 3,000 exams efficiently generated, resulting in the submission of over 733,000 summative exams within three years. Presently, more than 2 million students can take the university entrance exam using the digital platform.

Education Quality and Accountability Office (EQAO) with Modernization and Transformation of Ontario's K-12 Large-Scale Assessment Program

EQAO transformed Ontario’s provincial large-scale assessments in response to two major catalysts: the urgent need for reliable data on student learning created by the COVID-19 pandemic, and the release of new curriculum over the same period. The transformation leveraged relevant, digital assessment models and provided an engaging, accessible assessment experience for all students. The assessments continue to be aligned to the Ontario Curriculum and follow best practices for psychometric measurement and data quality. The modernization also included substantial enhancements to assessment reporting by leveraging interactive dashboards that improve access to timely and meaningful data to inform practice and improve student achievement. EQAO ensured the success of the end-to-end transformation through active collaboration with and support to partners in the education system; a steadfast focus on the needs of all students; and an agile and evidence-informed approach to implementation.

An example of how EQAO's transformation is impacting the education system in Ontario is the ongoing effort of educators and administrators to improve math learning in the province. Following the pandemic, and the launch of a revised mathematics curriculum, year-over-year tracking of mathematics data was necessary to guide decision making at the school, school board and provincial levels. EQAO provided the education sector with sub-score reports that give educators information on how students do on items mapped to specific strands and skills of the revised mathematics curriculum. For example, schools can see how students performed on the geometry, algebra and data sections of the assessment. This information allows them to make data-informed decisions around instruction and professional development as educators become accustomed to the new curriculum. The reports were highlighted in 2023 as part of the Ministry of Education's Mathematics Achievement Action Plan as an important way to determine areas where students may be struggling in mathematics. EQAO's School Support Team has been supporting schools in accessing and using the reports to better understand math results.

Watch the Winners and Finalists videos on our YouTube channel.