Surpass Assessment wins Bronze at the 2023 International Business Awards®
Surpass Assessment has been named winner of a Bronze Stevie® Award in the 20th Annual International Business Awards®.
Surpass was a Bronze Award winner in the ‘Most Innovative Tech Company of the Year – Up to 2,500 Employees’ category. The submission highlighted the company’s strong commitment to innovation, with particular focus on its Customizable Question Type (CQT) framework.
On winning the award, Sonya Whitworth, co-CEO at Surpass, said:
“This International Business Award reflects our strong commitment to invest in innovation, with particular focus on our Customizable Question Type (CQT) framework. It showcases our dedication to advancing assessment and improving the services we offer the Surpass Community, collaborating with them to create the best technology and services on the market. I’m delighted with the recognition for the fantastic team that drives the evolution of Surpass.”
About the Stevie Awards
The International Business Awards are the world’s only international, all-encompassing business awards program.
Stevie Award winners were determined by the average scores of more than 230 executives worldwide who participated in the judging process.
Winners were selected from more than 3,700 nominations submitted by organizations in 61 nations.
A complete list of all 2023 Gold, Silver and Bronze Stevie Award winners by category is available here.
Understanding Total Cost of Ownership (TCO) in Digital Assessments: Navigating the Financial Landscape
By the GamaLearn Blog Team
Welcome to the GamaLearn blog series. Join them on their journey to unravel the multifaceted realm of TCO in the context of digital assessments.
Investing in modern technology solutions such as e-assessment tools is no longer just a trend – it’s a strategic necessity.
While the direct benefits of these tools are evident in the form of enhanced teaching and learning experiences, it’s their financial impact – in terms of Total Cost of Ownership (TCO) and Return on Investment (ROI) – that truly underscores their value.
This transition isn’t just about swapping paper for screens; it represents a revolutionary leap towards sustainable, scalable, and effective methods of assessment. Traditional assessments have presented various challenges that have been increasingly hard to ignore. The need for physical infrastructure, the substantial time investment for administration, and the manual processing of results have limited the scalability of such exams.
Pen-and-paper methods often fail to provide immediate feedback, which is crucial for ongoing learning and improvement. They also inherently lack accessibility for people with disabilities and for remote students. The inconsistency in grading, primarily due to human error, has also been a notable challenge in traditional assessments.
With e-assessment, on the other hand, instant scoring and feedback enable students to identify gaps in their knowledge faster, correct course more quickly, and improve learning outcomes. This accelerated learning curve can lead to higher course completion rates, better grades, and improved institutional reputation – all of which enhance the value proposition for potential students, further driving revenue growth.

Shifting from CapEx to OpEx
A crucial element to consider in the cost of e-assessment is the financial transition from Capital Expenditure (CapEx) to Operational Expenditure (OpEx). Historically, assessment systems have relied heavily on CapEx, with the investment required in physical infrastructure such as buildings, printers, and paper. This method often resulted in significant upfront costs and depreciation of assets over time.
In contrast, the cost of e-assessment primarily encompasses OpEx, which includes expenses for cloud storage, subscription-based assessment software, and ongoing service support. This shift results in more predictable and manageable costs, relieving institutions from the burden of ownership and maintenance of depreciating assets. The migration towards OpEx allows for a more sustainable financial model and offers increased flexibility, enabling educational institutions to adapt swiftly to emerging technological trends and pedagogical methodologies.
The move towards e-assessments signifies more than just a shift in the evaluation method; it’s an enhancement, marking a notable advance in addressing the challenges of traditional exams, all while aligning with the financial dynamics of the digital age.

The TCO of E-Assessments: A Breakdown
The Total Cost of Ownership (TCO) is crucial for any organization seeking to adopt or optimize an e-assessment system. TCO provides a comprehensive view of the direct and indirect costs associated with a product or service over its lifecycle – in this case, an e-assessment platform. It allows organizations to understand and predict the financial implications of implementing and maintaining e-assessments, incorporating not only the purchase price but also implementation, operation, maintenance, and disposal costs. In the e-assessment context, TCO is not just about the upfront purchase or subscription price of the platform; it also covers the long-term operational costs and the potential financial impact of the system on an organization’s efficiency and productivity.
Delving into the TCO of e-assessment platforms, direct costs include purchasing or subscribing to the platform, implementing the system, and ongoing maintenance. These are often the most visible and anticipated expenses. However, the indirect costs can be just as significant, if not more so. These can include expenses related to training staff to use the new system, technical support, time spent on administration, potential downtime, and even the impact on student or user satisfaction. Factors like data security and compliance, system upgrades, and integration with existing systems are crucial considerations that could affect the TCO of e-assessments.
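To make the breakdown concrete, here is a minimal sketch of a lifecycle TCO calculation. All figures and cost categories are hypothetical placeholders rather than real pricing; the point is simply that indirect costs (training, support, administration time) accumulate alongside the more visible direct costs:

```python
# Illustrative TCO sketch for an e-assessment platform (all figures hypothetical).
# Direct costs: implementation and subscription. Indirect costs: training,
# support, and administration time converted into a monetary estimate.

def total_cost_of_ownership(years: int) -> int:
    """Sum direct and indirect costs over the platform's lifecycle."""
    implementation = 20_000          # one-off setup and integration (year 1)
    subscription_per_year = 15_000   # recurring licence/SaaS fee
    training_per_year = 3_000        # staff training and refreshers
    support_per_year = 2_000         # technical support contract
    admin_hours_per_year = 200       # staff time spent on administration
    hourly_staff_cost = 30           # fully loaded staff cost per hour

    direct = implementation + subscription_per_year * years
    indirect = (training_per_year + support_per_year
                + admin_hours_per_year * hourly_staff_cost) * years
    return direct + indirect

if __name__ == "__main__":
    for years in (1, 3, 5):
        print(f"{years}-year TCO: £{total_cost_of_ownership(years):,}")
```

Running the sketch shows why a low subscription price alone is a poor proxy for cost: in this toy model, indirect costs make up over a third of the five-year total.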
Reducing the Total Cost of Ownership (TCO)
The cost of e-assessment can sound overwhelming, but there are ways of reducing TCO:
Utilizing cloud-based e-assessment platforms: These platforms typically require lower upfront costs and maintenance fees, as the cloud service provider manages the upkeep. Cloud-based platforms are inherently scalable, secure, and come with built-in managed services, removing these aspects as overhead costs for the institution and its IT team. They ensure high uptime, reducing the risk of service disruptions during crucial exam periods. Cloud-based solutions are designed to handle future workloads, such as the integration of AI-based features, without incurring substantial additional costs.
Scalability: Choose a system that can scale according to your needs. Cloud-based solutions often provide easy scalability, allowing you to adjust resources based on demand.
Training and Support: Invest in training your staff to effectively use the e-assessment system. Well-trained staff can maximize the system’s potential and minimize errors, reducing long-term costs.
Automating question bank creation: Automation reduces the time and effort needed to input and organize questions manually. Some systems might incorporate AI and ML techniques to generate relevant questions based on set parameters, reducing the time teachers and administrators spend building tests and cutting labor costs (see the sketch after this list).
Maintenance and Updates: Regularly update and maintain the system to ensure it remains efficient and secure. This prevents potential issues and the need for costly emergency fixes.
Interoperability and Extensibility: Choose a system that can integrate with your existing tools and platforms, reducing the need for additional software or complex workarounds.
Security Measures: Invest in robust security measures to protect sensitive data and prevent potential breaches. The cost of a security breach can far outweigh the initial savings of a cheaper solution.
Long-Term Planning: Plan for the long term. A system that aligns with your institution’s long-term goals and requirements can save costs associated with frequent system changes.
Vendor Partnerships: Establish strong relationships with your e-assessment vendors. This could lead to potential discounts, priority support, and a better understanding of your specific needs.
Analytics and Reporting: Utilize the analytics and reporting features of your e-assessment system to identify areas for improvement and optimize processes, potentially saving time and resources.
Bring Your Own Device (BYOD) and Accessibility: Choose a platform that allows individuals to use their personal devices, such as smartphones, laptops, or tablets, to access and interact with digital resources and services. This approach lets users work or learn on devices they are comfortable with, potentially enhancing productivity and convenience. By also prioritizing accessibility, you provide students with a system designed to be usable and understandable by individuals with diverse abilities and needs.
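Even before AI or ML enters the picture, question-bank automation can start with something as simple as parameterized templates. The sketch below is purely illustrative and assumes nothing about any particular platform’s pipeline: each template expands into many concrete items, so authors maintain a handful of templates instead of hundreds of questions.

```python
# Illustrative template-based item generation (hypothetical, not any vendor's
# actual AI pipeline). Each run expands parameterized templates into fresh,
# answer-keyed questions.
import random

TEMPLATES = [
    "A train travels {d} km in {t} hours. What is its average speed in km/h?",
    "If {n} tickets cost {c} pounds in total, what does one ticket cost?",
]

def generate_item(rng: random.Random) -> dict:
    """Pick a template, fill in random parameters, and compute the answer key."""
    template = rng.choice(TEMPLATES)
    if "{d}" in template:
        d, t = rng.randint(60, 300), rng.randint(1, 5)
        return {"question": template.format(d=d, t=t), "answer": d / t}
    n, c = rng.randint(2, 20), rng.randint(10, 200)
    return {"question": template.format(n=n, c=c), "answer": c / n}

if __name__ == "__main__":
    rng = random.Random(42)  # seeded for reproducible item sets
    for _ in range(3):
        item = generate_item(rng)
        print(item["question"], "->", round(item["answer"], 2))
```

An AI-assisted system generalizes the same idea: instead of hand-written templates, a model proposes candidate items, which authors then review and approve.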
Keep an eye out for the second part of our blog series, where we’ll explore how the return on investment (ROI) of digital assessments becomes evident in educational institutions.
Discover more by visiting GamaLearn’s blog pages here.
The Age of AI in Education: Promises and Concerns
A blog by Sirsendu Das, Senior Learning Architect, Excelsoft
Remember AlphaZero, which defeated Stockfish, then the strongest chess engine in the world? In 2017, it won 28 games, drew 72, and lost none. The next year, playing 1,000 games against Stockfish, it won 155, lost 6, and drew the rest.
So what was so special about AlphaZero?
AlphaZero had no predefined moves or strategies from human play. It was a genuine product of Artificial Intelligence (AI) training: developers gave it only the rules of chess and coded the program to develop strategies that improve its chances of winning. Just four hours of self-training made AlphaZero the world’s strongest chess engine. No human has ever beaten it.
It’s time we take AI and Machine Learning (ML) seriously. They are showing immense potential to transform business operations and create entirely new kinds of businesses, with striking results.
To gauge AI’s popularity, I started my research with Google. Searching “What is Human Intelligence?” fetched 10 billion results in 0.42 seconds, whereas “What is Artificial Intelligence?” generated about 5.8 billion results in 0.52 seconds.
The day is not far when AI (inorganic intelligence) will leave human intelligence (organic intelligence) behind.
Welcome to the new world, where organic is meeting the inorganic.
AI is making that happen everywhere, and it’s viral!
AI is not an industry or a domain, let alone a single product. It is an enabler with the capacity to learn, evolve, and surprise. It will disrupt and transform the human experience to levels never experienced before.
So, can we call this a new age?
As you know, human civilizations progressed through the Stone, Bronze, and Iron Ages, developing competencies using the materials and technology of each age. Until the end of the 20th century, humans with better cognitive abilities enjoyed success at work. With the computing advancements of the 21st century, however, we are experiencing a disruption that has launched us into a new age of freedom, where information and decisions come for free.
We live in the AGE of AI
Every day, everywhere, AI is gaining popularity, and one of the biggest beneficiaries might be education. While AI has the potential to revolutionize the way we think about education, there are still many challenges and concerns that need to be addressed.
The market presence of AI in Education is significant: valued at $4 billion in 2022, it is projected to expand at over 10% CAGR from 2023 to 2032 (AI in Education Market Statistics, Trends & Growth Opportunity 2032, gminsights.com).
In our recent Townhall, almost every conversation on Education Technology converged on AI and its impact. A question that gained everyone’s attention was, “In Education, who should benefit the most from AI?”
Our CEO’s response was candid: “AI should impact the student experience and improve performance outcomes.” He added that other stakeholders will also get their share of the pie by analyzing the insights from student engagement.
Let me summarize the takeaway.
In the EdTech domain, AI should serve the “King” aka THE LEARNER.
When they benefit, other stakeholders will get a share of the pie.
The big question revolves around the impact that AI will bring on education.
On my way to Mumbai, I had a conversation with a principal. He was both excited and skeptical about AI and was seeking answers to his questions.
- What does AI-enabled education look like?
- Will AI replace human intellect or critical thinking?
- What do AI-enabled assistants mean to children?
- Will AI-based assessments that shape human actions be allowed?
- Will cheating be rampant and cripple the learning model in education?
- Should we teach children how to frame compelling questions and let AI do the rest?
I explained to him that the integration of AI in education could provide dynamic learning environments that are accessible, engaging, effective, and offer personalized learning experiences with intelligent tutoring systems.
But I still saw the worry in his eyes. He said, “My friend, it is important to strike an equilibrium between technology and human interaction, ensuring safe and secure learning environments.”
His words made me think about the opportunities that AI can offer to students and also its ramifications. AI can:
- Personalize learning with tailored learning interventions, real-time feedback and opportunity for graded practice.
- Enhance learning support and prioritize learning interventions along with improved assessment quality.
- Present immersive experiences and enhance learner participation in regulated real-world situations.
- Increase accessibility for learners.
- Save costs by automating difficult tasks and facilitating customized instructions.
- Revolutionize smart content creation by generating tailored learning materials.
- Enhance the ease of performing administrative tasks and improve the efficiency of learning delivery.
- Provide access to educational resources, particularly for students with limited access.
- Analyze data patterns using AI algorithms to detect early warning signs and alert educators for timely interventions (see the sketch below).
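As a hedged illustration of that last point, here is a minimal early-warning sketch. It uses only raw assessment scores and invented threshold values; a real system would draw on much richer engagement data and validated models.

```python
# Hypothetical early-warning sketch: flag learners whose recent assessment
# scores fall meaningfully below their earlier baseline so educators can
# intervene in time. Thresholds and data are illustrative only.

def flag_at_risk(scores: dict[str, list[float]],
                 window: int = 3, drop_threshold: float = 10.0) -> list[str]:
    """Return learners whose mean score over the last `window` assessments
    dropped by more than `drop_threshold` points versus their baseline."""
    at_risk = []
    for learner, history in scores.items():
        if len(history) <= window:
            continue  # not enough history to establish a baseline
        baseline = sum(history[:-window]) / len(history[:-window])
        recent = sum(history[-window:]) / window
        if baseline - recent > drop_threshold:
            at_risk.append(learner)
    return at_risk

if __name__ == "__main__":
    scores = {
        "amira": [78, 82, 80, 75, 60, 58, 55],  # declining: flagged
        "ben":   [65, 66, 70, 68, 72, 71, 69],  # stable: not flagged
    }
    print(flag_at_risk(scores))  # -> ['amira']
```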
However, we also need to be responsible enough to address the following:
- Address bias and inequality in algorithms through continuous monitoring. This will help remove learning and assessment bias.
- Address privacy and security concerns as AI-powered learning systems gather a wide range of student information, including their behavior, learning progress, and personal data.
- Address technology dependencies and their impact on learners’ critical thinking and problem-solving skills. The need to continuously upgrade technology to ride this AI wave is also worrisome.
In conclusion, AI holds great promise for the future and will play a pivotal role in shaping our lives. The integration of AI in education holds tremendous potential for transforming learning environments. However, legislatures, policymakers, entrepreneurs, educators, and technology enthusiasts must work together to ensure that AI-driven learning platforms and tools are used ethically and responsibly. By leveraging AI’s capabilities, education can become more accessible, engaging, and effective.
Ensuring EDI in assessment content
In this blog, Ben Rockliffe, AlphaPlus’s Deputy Director of Assessment explains how to ensure assessment content is inclusive of culture, equality and diversity.
What is culture?
Culture is the way of life, especially the general customs and beliefs, of a particular group of people. Assessments may be used across different cultures and will need to account for the varying customs and beliefs of these groups.
What is equality?
Equality means making sure that everyone is treated fairly and with dignity and respect. In the context of assessment, this means removing barriers, so that everyone has opportunities to demonstrate the required standard.
What is diversity?
Diversity is about recognising different values, abilities, and perspectives, and celebrating people’s differences. This means developing assessments that allow for diverse backgrounds, thinking, skills and experiences.
Why are culture, equality and diversity important for content?
Assessment content needs to be equally accessible to all learners within its target cohort to be valid. If learners’ cultural setting or personal circumstances affect, either negatively or positively, their ability to understand the context of a question or source material within an assessment, then the level playing field ceases to exist.
Assessments are vital in educational and professional landscapes, acting as a gateway to the next stage or position. Assessments must, therefore, avoid stereotyping within content, as stereotypes included in assessments could be transferred into the training and teaching associated with preparing for them.
Assessment developers need to get this right for several reasons. Assessments that discriminate produce results that do not reflect how a learner has actually performed, and failure to get this right can, in some cases, contravene regulatory requirements. Most importantly, creating fair assessments that allow all learners the chance to do their best is the right thing to do.
How does this work in practice?
We ask all our assessment authors and quality reviewers to consider these issues when they are writing and reviewing assessment content. Here are some examples of the types of things that we look out for.
Is the question content equally relevant to the situations of all learners in the cohort?
This is particularly relevant to international assessments. For example, you may have an author in the UK writing questions for an international assessment that will be taken by some learners based in hot, dry, desert countries. If they create a question scenario with a context based on wet, cold weather, this context will be less familiar to those learners and easier to access/interpret for learners from countries where such conditions are common, potentially providing the latter with an unfair advantage.
Similarly, culturally assumed points of reference can cause issues in assessments taken across a range of jurisdictions. A numeracy question with a scenario written for a UK context that assumes lunchtime is at 12:00 pm may confuse learners where this is not the norm and therefore put them at a disadvantage. Another common example is a question about a culturally celebrated event or day: some learners may not celebrate Christmas, Thanksgiving, or Eid, and will therefore be less prepared to deal with a question situated in such a context.
Is the question content appropriate for learners in different financial circumstances?
Many assessments will be undertaken by learners from a cross-section of socio-economic backgrounds. Care needs to be taken to ensure that contexts are equally open to all of these groups. Scenarios that assume a high level of income can be less familiar to learners from a lower-income background and put them at a disadvantage. Examples might include scenarios that involve budgets for very expensive items or expensive holidays in exotic locations.
Does the question content include inappropriate stereotyping?
While it may be appropriate to have some question scenarios with examples that conform to a stereotype, e.g. engineers who are men and nurses who are women, the content should try to balance this out with reverse stereotypes, e.g. girls playing rugby or examples of non-typical families, as appropriate to the subject and context in which the assessment is taking place.
It is worth noting that sometimes reverse stereotyping can be overdone and then this becomes an issue in its own right. For instance, if all families represented are “non-typical” then this can appear unusual. The key to success is to create a “balance” across typical and non-typical examples.
Are there opportunities for positive role models or casting?
When developing source materials or context for assessments it may be possible sometimes to put underrepresented groups in key positive roles. For example, you may be able to cast someone with a disability as a leader or a hero within a story. This can help make the assessment relatable to these groups and also provide positive reinforcement for other learners undertaking the assessment.
Are there any exceptions?
Yes. It is important to consider all of the factors above when developing an assessment; however, the core purpose of the assessment overrides them. This means that if a particular skill is an essential requirement for demonstrating competence in that area, then the assessment still needs to incorporate that skill, even if some learners find it less familiar. For example, a candidate for an HGV driving assessment who grew up in a hot climate may have less awareness of what to do if there is ice on the road. However, they will still need to know this to pass their assessment if it is undertaken in a cold country.
What are the benefits of getting this right?
The primary benefit of creating assessment content that reflects its target cohort is that all learners have an equal chance of success. Other advantages include:
- Reputation: Blatant errors of this nature in content can cause significant reputational damage for the assessment provider.
- Legal compliance: Some adaptations may be required to support protected characteristics in law.
- Innovation: Thinking about assessments through this lens can encourage you to try new approaches.
- Business: Ensuring assessments cater for all learners means they will be appropriate to the widest market possible.
Click here to find out more about our assessment services
Benefits of using a Digital Grading Platform to Evaluate Student Learning
A blog by Manjider Kainth, CEO, Graide
In a nutshell
Putting the “AI” into grAIding to “aid” educators, Graide interfaces with Replay Grading to reduce educator workload. The platform is designed with simplicity in mind. No programming is required. It’s as simple as clicking or drawing regions and typing the relevant feedback.
The benefits are significant.
1. Improved accuracy
Automation ensures accuracy, the cornerstone of assessment. We all know and accept that marking inconsistencies can occur over time. And having more than one marker only increases the likelihood of this. Furthermore, the sheer volume of marking can lead to “clerical errors”. Graide, however, marks in the same way, every single time.
Integrating with leading learning management systems (LMSs) such as Canvas, Blackboard, and Moodle, Graide provides educators with comprehensive data and analytics on student performance. Using powerful analytic tools, it collates results and helps identify patterns or anomalies in students’ performance. That way, problems are spotted early and addressed promptly, and instructional practice can later be reviewed accordingly.
2. Increased student engagement
Students expect and deserve an assessment which is accurate: i.e. free from subjectivity and human error. They also want detailed quality feedback, to show them the way. Thoughtful, accurate feedback provides the road map for future improvement. Studying without this is like driving blindfolded, with only occasional glimpses of the road to course-correct. Regular, consistent, detailed grading and feedback are fundamental to student learning. They also take time.
Using a grading platform speeds up every stage of the process, from submission, to marking, and feedback. Submission of student work is effectively instantaneous. Gone are the days of handing in work physically, a boon to everyone involved, especially those students with access or mobility issues.
And not having to mark the same type of answer twice speeds up grading for teachers!
In fact, when we compared Graide with grading on paper, median grading times were reduced by 74% and the number of words of feedback given increased by a factor of 7.2. We also estimated that a university with 3,500 STEM students using Graide could save over £240,000 a year.
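For readers who want to see how an estimate of that magnitude might be constructed, here is a back-of-the-envelope sketch. The per-student marking hours and hourly staff cost below are illustrative assumptions, not Graide’s published methodology; only the 74% reduction and the 3,500-student cohort come from the figures above.

```python
# Back-of-the-envelope saving estimate (illustrative assumptions only).
students = 3_500                  # cohort size from the text
grading_hours_per_student = 4.0   # assumed annual marking load per student
staff_cost_per_hour = 24.0        # assumed fully loaded marker cost, GBP
time_reduction = 0.74             # median time reduction reported above

baseline_cost = students * grading_hours_per_student * staff_cost_per_hour
saving = baseline_cost * time_reduction
print(f"Estimated annual saving: £{saving:,.0f}")  # -> £248,640
```

With those assumptions the sketch lands in the same ballpark as the £240,000 figure, which is the point: modest per-student time savings compound quickly at institutional scale.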
Getting marking off your desk double-quick is always a nice feeling, but it is also fundamental to student learning. For marking and feedback, time is of the essence. It is no good if the student has forgotten or “moved on” in their minds from the assessment. Feedback is far more effective when the work is fresh in their minds, and they are still emotionally involved or committed. University policies commonly stipulate feedback within two to three weeks. As a student, I wanted my work back yesterday!
When students see the immediate feedback given by a digital grading system, they can quickly enjoy successes, identify strengths, and address areas for improvement. They can act quickly, with the help of their teachers, to put things right. Speeding up this process indubitably minimises uncertainty and stress for students.
3. Easier collaboration
Student learning is a team sport, a shared endeavour. If everyone is pulling in the same direction, progress can be swift and that, in turn, can be motivating.
Working with paper scripts can slow down the process, for instance, when there is more than one marker and time is lost waiting for the other one to finish their section. With a grading platform, scripts can be marked in parallel.
Better still, teachers, students, and administrators can review or discuss student projects or assessments in real-time online.
Skynet or Wall-E? The implications of Generative AI for exam security
A blog by Paul Muir, Chief Strategy and Partnership Officer Surpass Assessment and eAA Vice Chair
We all remember the days before ChatGPT and Generative AI seemed to be in every education news story, don’t we? You know, when AI wasn’t going to kill us all or make assessments entirely redundant?
Back in late 2021, I sat down and wrote a blog post titled ‘AI: Friend or Foe?’ and followed this up with a session of the same name at the 2022 ATP conference in Orlando with Professor Barry O’Sullivan and Marten Roorda. There, we debated the topic with fellow assessment professionals, who mostly agreed it was indeed our friend.
AI was going to be our constructive ally in assessment, supporting us with topics such as ID verification, content creation, auto-marking, enhanced security for Remote Proctoring, and enhanced data forensic capabilities within examinations.
So, what’s changed? Or has anything really changed from those innocent days of early 2022?
Even back in 2021, before ChatGPT raised its head, the European Commission had placed the use of AI in Education in the ‘High-Risk’ category of its experimental AI Framework. This was the middle category, with others being ‘Limited Risk’ and ‘Unacceptable Risk’. Does Generative AI move this towards unacceptable? I think it’s probably too soon to decide. Personally, I think the biggest change is that the discussion is now very much focussed on Generative AI or Large Language Models rather than AI as an overall concept.
As a reminder, Generative AI refers to technologies that can autonomously generate content, such as text, images and videos. These systems employ machine learning algorithms and neural networks to analyse and learn from vast amounts of data, enabling them to create highly realistic and believable outputs. Believable is key there, as we know hallucination is a real problem. That’s for another paper though…
Threats
While Generative AI has opened up exciting possibilities in various industries, including education and assessment more widely, it also presents significant challenges in maintaining the integrity of examinations.
Three key implications for exam security are:
- Cheating & Plagiarism: One of the significant security implications of Generative AI in exams is the increased risk of cheating and plagiarism. Students can use generative AI models to generate answers or even entire essays that appear to be original, making it difficult for traditional plagiarism detection tools to detect such instances. This poses a challenge for exam owners and institutions in ensuring the authenticity of student work.
- Impersonation: Have we all seen the Deep Fake Tom Cruise? Or the new AI software that can ‘replace’ eye movement to fool remote proctoring services/software? The rise of deepfake technology, which is a form of Generative AI, presents a significant risk in exams that require identification or authentication of the student. AI-powered ‘solutions’ can be used to create sophisticated audio and video for impersonation or identity theft purposes. For instance, a student could use deepfake technology to mimic another student’s voice during an oral exam or to manipulate facial features during a remotely proctored test.
- Content Development (closed AI environment): We already know that some assessment organisations/exam owners use Generative AI to produce more cost-efficient content. But at what cost? Using a publicly accessible AI solution, such as ChatGPT, could result in exposure of test content, allowing students the opportunity to access, generate and share new questions very closely related to the original ‘seed’ content, compromising the integrity of the assessment process.
Mitigations
However, it’s not all doom and gloom and as a sector we are already addressing those threats described above (and many more!) head on with various interventions and preventions such as:
- Advanced Proctoring Solutions: Whether in-person at a test centre or with remote proctoring solutions, there are a number of tools readily available to mitigate the risk posed by Generative AI. Two-camera solutions, AI-enhanced anti-plagiarism technology, and the use of more cutting-edge multi-modal and behavioural biometrics will be critical in the battle against those who wish to use Generative AI for nefarious purposes.
- Secure Test Drivers: Utilising secure platforms/test drivers (such as Surpass) or Learning Management Systems (LMS) that incorporate encryption solutions, have lockdown functionality and advanced authentication mechanisms can help safeguard exam materials, ensuring question confidentiality and preventing unauthorised access.
- Collaboration: It might be controversial, but collaboration between educational institutions and AI developers can help us as a sector to develop robust anti-cheating mechanisms. By understanding the nuances of generative AI, exam owners and educational institutions can stay ahead of potential threats and work towards creating secure exam environments.
While I’ve only listed three key mitigations above, there are clearly many more available, and some of these will be discussed in an upcoming ATP Security Committee White Paper on the Security Implications of AI, due out towards the end of July. We’ll post a link on the eAA website once it’s published.
I want to end on a positive note though. Generative AI has the potential to revolutionise the assessment industry, not just from the negative perspective that many initially jump to. It’s not going away, so as an industry we need to figure out how to mitigate the risks, but just as importantly how to ‘embrace with caution’ the positives it brings.
In the next eAA newsletter, we’ll cover the more positive sides of Generative AI, the role of the regulator in the UK, and what I believe are the 5 core principles for the use of AI in education and assessment.
So, going back to the beginning of this newsletter, are we now in FRIEND or FOE territory with our opinion of AI?
What do you think?
And if you’d like to see how Surpass is embracing the future of assessment creation with AI, take a look at Surpass Copilot, our upcoming AI-powered toolset that could revolutionize your item authoring process: surpass.com/copilot