Best Research Award

Sponsored by TCS iON


This submission is based on peer-reviewed research published in Educational Measurement: Issues and Practice, which provides novel insights into the variability of proctor decision making in a remotely proctored, high-stakes testing environment. In particular, it highlights how differences in proctor judgments can affect the fairness and integrity of high-stakes assessments, and it underscores the importance of mitigating this variability to ensure credible and reliable assessment outcomes, an area previously under-explored in e-assessment research.


Our results show that (1) proctors systematically differ in their decision making and (2) these differences are trait-like (i.e., ranging from lenient to strict), but (3) systematic variability in decisions can be reduced. Based on these findings, we recommend that test security providers conduct regular measurements of proctors’ judgments and take actions to reduce variability in proctor decision making.


The paper can be found here: https://onlinelibrary.wiley.com/doi/10.1111/emip.12591
 

Finalists:

AlphaPlus Consultancy Ltd with Welsh Government National Reading and Numeracy Onscreen Personalised Assessments (OPAs)

e-Assessment is a mature technology. It works. Formative assessment has been shown to have a significant beneficial impact on learning but is difficult to scale. The OPAs are an innovative formative assessment solution: a national e-assessment system, based on adaptive assessment approaches, rolled out to all learners from years 2–9. This is a serious attempt to deploy best practice in a national live educational environment, and it has been informed by research from start to finish. This application is not about a single research study, but about the ongoing deployment of research to support a national government's educational objectives.

Pearson with "I can read without letters doing backflips": understanding the SEND learner experience and shaping inclusive digital assessments

This research programme investigates how we can improve the accessibility of digital assessments to ensure they are as fair, valid and fit-for-purpose as possible. By emphasising student voice and focusing on the experiences of SEND learners in relation to digital learning and assessment, our research contributes original thought and new insights into how we can better design and develop digital assessments, a currently under-researched and misunderstood topic. We believe this can add to, and enrich, existing bodies of research within the assessment community, and it is already contributing to tangible improvements to assessment experiences for SEND learners.

University of Massachusetts Amherst with The Massachusetts Adult Proficiency Tests (MAPT)

The MAPT is a multistage-adaptive assessment of mathematics and reading that leverages adaptive testing technology to meet accountability and educational demands. We have done extensive research on designing, developing, and validating the assessments. This research is documented in a 280-page technical manual, about 50 pages of which describe validation research and results. It is an outstanding example of building and validating a 21st-century assessment, and it includes research on standard setting and five sources of validity evidence.
