the e-Assessment Association

Inclusivity by Design – Essential or Optional? A closer look at principles and examples

What is Inclusivity by Design and Why Is It Important in Assessments?

If every student is unique, why should assessments be one-size-fits-all? When assessment software is designed to be inclusive, it is built to accommodate the varying needs of all learners to ensure every student has the chance to thrive. Many individuals have special needs that require content to be altered, accommodated, or displayed in a way that optimizes their ability to succeed.

Inclusion has always been crucial to assessment, but with digital assessment management platforms, the testing process is more inclusive than ever. Digital assessments allow for built-in accommodations and personalized testing experiences. A study published in the Journal of Technology even highlights that, when assessments are designed with inclusivity in mind, students achieve significantly higher test scores on computer-based assessments than on traditional paper-and-pencil tests. The same study also reveals that most students prefer the computer-based testing experience (Robert et al.). Using high-quality digital assessment software that is inclusive by design empowers all students to unlock their potential.

How Can Online Assessment Software Increase Inclusivity by Design?

Breakthroughs in digital assessment software have helped to ensure that all students have access to a personalized, optimized, and accommodating assessment experience. Here are some characteristics of assessment management platforms that are inclusive by design.

  • Custom Student Profiles:
    While many assessment platforms still need to implement features to support students with special needs, SwiftAssess has already developed the needed modules for students with dyslexia, dyscalculia, ADHD, autism, deafness, blindness, color blindness, epilepsy, and other special needs, allowing each student to have a disability profile as well as a custom assessment profile. This profile recognizes and documents any special needs a student has and any accommodations required. With this feature, optimized and inclusive test delivery matches the student’s needs every time a test is administered. Even if an examiner has not adjusted a test’s settings to add accommodations, the software will automatically optimize delivery for that examinee. For example, if a student with impaired vision requires enlarged text on exams, the profile will apply that accommodation every time.
  • Accessibility Scoring Feature:
    Educators have a lot on their plates. Even with the best of intentions, they may sometimes miss opportunities to optimize the inclusivity of an assessment. Additionally, they may not be familiar with all of an assessment platform’s accessibility features when creating a test. This is why the accessibility scoring feature of digital assessment platforms is so powerful and important! After creating an assessment, educators automatically receive an accessibility score report. They are alerted to any problems or incompatibilities affecting test inclusivity, including test items that are not fit for their students with special needs. In addition to identifying non-inclusive test items, the software also suggests alternative ways of presenting the question to ensure an inclusive and accommodating exam.
  • Accommodations And Features Compliant With Web Content Accessibility Guidelines:
    A truly inclusive digital assessment platform will follow the Web Content Accessibility Guidelines. Here are a few of the features that allow platforms to be inclusive by design and accessible to all.
    – Extra Time: Educators can customize the amount of time given to students to complete an assessment.
    – Adjusting Color Scheme: Educators can adjust the color scheme of assessments to increase readability and accessibility.
    – Controlling Question Fonts: The font of assessments can be changed to ensure that questions are easy to read and clear to understand.
    – Magnification: When taking an assessment, users can zoom in on and magnify images or text. This ensures that difficulty seeing a test item does not negatively impact a test score.
    – Multi-Lingual Interface: Assessment platforms can be inclusive of students from all backgrounds with varying languages by offering multi-lingual interfaces.
    – Audio and Video Transcripts: Audio and video transcripts are available for in-test items to aid students with hearing impairments or difficulty processing auditory information.
    – Speech to Text and Text to Speech: With speech to text, examinees can answer questions verbally and have their answers recorded. With text to speech, users can have written parts of the test read out loud.

How Can Schools and Institutions Support Their Educators in Increasing Inclusivity?

For all the reasons listed above, digital assessment platforms can be monumental in creating an inclusive learning environment. However, as with all technology, it is important to ensure that educators are properly trained in inclusivity. Educational stakeholders should hold workshops, trainings, or professional development sessions that focus on inclusivity by design.

First and foremost, educators need to understand the why behind inclusivity. Why is it so important? Why does it matter? Emphasize the value and the legality behind ensuring every child has equal access to a quality education regardless of their varying needs. Every student deserves to have their needs met to thrive; inclusivity should be a daily practice that empowers students to be their best selves. Once educators understand the why, it’s time to focus on the how. School leaders can hold trainings that provide ways to make classrooms more inclusive. If the district has access to software such as a digital assessment platform, they should highlight and demonstrate the accessibility features to raise awareness and increase implementation. When using educational technology, don’t be afraid to reach out to the software’s team as well. Oftentimes, companies can provide resources and even hold their own trainings to familiarize educators with best practices in inclusive learning.

Increasing Inclusivity by Design

It is imperative that institutions offer assessment solutions that are inclusive by design. As technology advances, research reveals that computer-based testing may be the best way to increase inclusivity and academic performance on assessments. Through custom student profiles, accessibility scoring reports, and adaptive testing features, digital assessment software can increase inclusivity and offer every student a chance to excel.
SwiftAssess helps institutions who are looking for comprehensive digital assessment software that is designed to be inclusive.

To learn more about SwiftAssess and its accessibility features, contact a team member here.


How to Select the Best Assessment Platform for Higher Ed?

A blog by Neha Nougai, Presales Consultant, Excelsoft

Assessments are central to learning for any higher-ed program, offering educators the data to enhance learning, define goals, and promote institutional change. Without the right software, gathering this data can be laborious and time-consuming, so you must ensure that your software can meet your requirements.

The global pandemic gave rise to a shift towards online and blended learning, leading to a surge in digital assessment platforms. Understanding how these solutions can add value to the learning process at your institution is imperative.

Let’s first understand what digital assessments are before delving into any details.

Digital assessment is the application of digital technologies to create, deliver, grade, report, and manage examinations that assess students’ learning. Digital assessments can be deployed remotely online or in person on campus. Whether digital or analog, the goal of assessment remains evaluation for effective learning and credentialing.

Why higher-ed institutions are increasingly adopting digital assessments

  • Save time and effort: Automated grading simplifies evaluation and takes less faculty time, allowing more time for instruction and learning. Depending on the class size, commercial off-the-shelf software has been shown to cut grading time by 30% to 50% on average.
  • Advanced capabilities: A good assessment platform offers educators complex workflows for collaborative authoring, the capability of tagging questions, question-level analysis, management, and more.
  • Formative Feedback Opportunities: A frequently cited advantage of online assessment is the ease with which detailed feedback can be provided to students. Feedback can be provided on time and frequently in various formats, including written, audio-recorded, and video-recorded.
  • Accessibility and Flexibility: Both students and teachers value how accessible online tests are. Instead of being forced to work within the confines of a classroom, students have more freedom in approaching their assignments because they can decide when and where to complete them. Students with jobs, family obligations, or other reasons that may limit their capacity to be present on campus may feel greatly relieved.

It should come as no surprise that an effective assessment platform may help your institution tremendously. Selecting the ideal software is a complex choice. For easy integration and quick results, you should consider a few crucial criteria when selecting the assessment platform.

What features should be considered for choosing an e-Assessment platform?

  • User-friendly interface: The interface of the assessment platform needs to be user-friendly. An intuitive and simple-to-use platform is favorably perceived by both educators and students. The assessments must be simple for the candidates to interact with and complete. For educators, it should be easy to create and evaluate the tests and generate results instantly.
  • Rich media support: The platform should enable creating questions with rich media (e.g., audio and video-based assessment, Geometry, Mathematics, Scientific Equations, and more) to make assessments more engaging and effective.
  • Enhanced Security: The platform must offer a high level of security to prevent candidate malpractices. The integrity of an online test can be ensured by using screen, audio, and video proctoring. The platform should also be secured so that the data is not exposed to any vulnerability.
  • Accessibility Standards: The platform should adhere to mandated requirements for accessibility, such as Section 508 accessibility guidelines and WCAG 2.0 Level AA requirements.
  • Multi-language compatibility: The platform should offer compatibility with regional languages, regardless of the standardization of the language, to make it accessible for both educators and students.
  • Reporting and Analysis: The platform should offer comprehensive reporting and analytics to provide educators with rich diagnostics and support for student remediation.
  • Availability of technical support: In addition to assisting students in self-regulating their learning, timely support enables instructors to feel confident and competent using e-learning tools. Technical support can be offered on-campus or through active platform support.


The Impact of ChatGPT on Higher Education

A blog by Manjinder Kainth, PhD, CEO, Graide

Rarely does a technology come along that promises to revolutionise the way we live, work, and learn. Rarer still is that technology adopted with such alacrity. Released on 30 November 2022, OpenAI’s ChatGPT gained 1 million users in just 5 days. To put that into perspective, it took Netflix 3 ½ years, Facebook 10 months, Spotify 5 months, and Instagram 2 ½ months.

So popular is OpenAI’s ChatGPT, it cannot always meet demand. From coding to essay writing, its range of uses is impressive.
Let’s take a look at the possible implications in higher education.

What is ChatGPT?

In essence, it’s a chatbot – a very advanced one.

GPT stands for Generative Pre-Trained Transformer. Its large language model, trained on a vast data set with billions of parameters, uses statistics to work out the most probable next word. It mimics language rather than thinking. You might ask it to write an essay on love in the comedies of William Shakespeare, but it has no idea who William Shakespeare is, and it has never been in love!
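
The “predict the next word from statistics” idea can be illustrated with a deliberately tiny sketch. The snippet below uses simple bigram counts over a toy corpus; a real GPT model replaces these counts with a transformer network and billions of parameters, but the underlying task is the same. The corpus and function names here are invented for illustration and have nothing to do with OpenAI’s actual implementation.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast text data a GPT model trains on.
corpus = ("to be or not to be that is the question "
          "to be is to do to do is to be").split()

# Count which word follows which: a bigram model, vastly simpler than a
# transformer, but the same "next word" prediction task.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_probable_next(word):
    """Return the statistically most likely next word after `word`."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("to"))  # "be" follows "to" most often here
```

Like ChatGPT, this predictor has no idea what any of the words mean; it only knows which word tends to come next.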

How to use ChatGPT

In many ways, it could not be simpler, which is why it’s proving so popular. It was designed to be used as a chatbot, so users just have to ask questions or make requests.

“Write me an essay on the theme of love in William Shakespeare.”

“Tell me about the importance of love in the style of William Shakespeare.”

ChatGPT’s appeal is plain to see when compared to, let’s say, Google search. Google finds web pages with relevant information. ChatGPT writes the answer for you (without all the ads!).

Short-term implications in education

With technology this impressive, plagiarism is an obvious concern, but don’t worry – we’ll be able to detect ChatGPT content very soon. OpenAI is working on a digital watermark. Scott Aaronson, a University of Texas computer science professor and guest researcher at OpenAI, told TechCrunch that the company is “studying hiding cryptographic signals, called watermarks, in ChatGPT results, so that they’ll be more easily identifiable by companies like Turnitin [who detect online plagiarism].”

And there’s also GPT-2 Output Detector Demo and GPTZero.

Let’s not forget, educators know their students. They’re not daft. A professor suddenly receives a flawless piece of prose from someone who was previously suffering from writer’s block. Gut instinct tells them something is afoot. As one lecturer observed, “I think the chat writes better than 95% of my students could ever.”

Another wrote, “As a lecturer, I gave it last year’s exam question on a third-year module. It was more fluent and readable than the majority of answers I got, but the level of detail and analysis would have only secured it a 2.2 at best, probably a 3rd.”

Poppy Wood of inews used ChatGPT for GCSE questions and got a professional examiner to mark them. It didn’t get perfect marks!

One way to deter plagiarism is to follow students’ work from early outlines and first drafts to completion. It’s worth remembering that ChatGPT literally makes things up because it’s designed to write the next most probabilistic word. Moreover, it can’t do maths.

AI coding

ChatGPT can code at speed, with some degree of accuracy, but there are caveats. The homepage warns of its “limitations” ⚠️:

“May occasionally generate incorrect information”

OpenAI co-founder Sam Altman has been very clear in numerous interviews, advising against using ChatGPT for important tasks.

Furthermore, concerned about its accuracy and dependability, the public coding platform Stack Overflow no longer accepts ChatGPT-generated code, having been inundated with faulty answers.

Long-term benefits of ChatGPT in education

All that said, ChatGPT will only get more reliable and more sophisticated, and the possibilities are tantalising.

Generating ideas is one of its strengths, and so it will be increasingly used by educators for course creation, writing syllabuses, lectures, assignments, and grading rubrics, along with carrying out much of the tedious admin, to boot. Producing practice questions or example essays for “peer”-like feedback will take hours rather than days and weeks. Consider, for example, pre-trained transformer models in marketing, which have been around for a while now. They can magically conjure up a perfectly reasonable starting point. Users find they save time as it’s often easier to edit them than to write a first draft from scratch.

Effect of AI on higher education

GPT-like tools are already making their mark in the world of work, so along with harnessing their enormous potential, we also need to teach students how to use them skilfully, with questioning scepticism and discernment. We want students to exploit ChatGPT’s powers; we want them to be empowered, but not overly reliant.

This entails exploring with them, in an open, honest way, what ChatGPT can do well and, just as importantly, what it can’t do. If, for instance, it’s being used for the writing process, teachers should outline those stages where it’s most useful. It can help with generating content or plugging the gaps in early brainstorming, but its written content, though impressively fluent and convincing, can be rather generic or vague. Educators and students need to explore together ChatGPT’s limitations and how best to work with or around them.

Artificial intelligence works best alongside human intelligence. The quality of what it produces is highly dependent on the quality of the prompts. The maxim “garbage in, garbage out” applies. Students need to learn not to accept its first responses, but rather follow up with modified questions and commands to ensure quality. There’s an art to asking the right questions or asking for clarification.

The smallest tweaks to the wording can make a huge difference. Students should also question the veracity of what ChatGPT produces.

If students are using it to fix code, context is important. They need to learn how to frame explanations of what exactly they are trying to achieve.

Working in groups on all of this, with teacher input and interventions, will help students develop essential 21st-century AI skills. AI won’t necessarily replace humans so much as AI-empowered humans will replace those who are yet to catch on.

Final thoughts

It’s early days and the tech is in its infancy, but these are exciting times, for students and teachers alike. Early adopters who embrace AI in the spirit of play, discovery, and experimentation will reap the rewards. ChatGPT and its ilk will certainly free up time for other cognitive elements of academia.

This is one of the many reasons we devised Graide: we wanted to harness artificial intelligence to help speed up the marking process in higher education, allowing educators to deliver assignments, get immediate insights into student performance, and provide tailored feedback in a flash.

Find out more here: How it works | Graide


Bridging the gap between current item Adaptive assessments and our Multi-Stage Assessments

A blog by Gunter Maris, Senior Scientist, Statistical Analysis and Programming (TCS iON)

Education, the birthright of every child, is their primary means to achieve success. Examination and education go hand in hand. Examinations judge a student’s progress at every stage, thus revealing any existing gaps.

However, data shows that even after 12 years of extensive schooling and exams, many students (more than one in three) do not achieve the socially expected levels. The chief reason behind this failure is not always the learner; the school and our wider education system also have a massive role to play.

Our current conception of learning is almost a straight line. Placing questions across the 12-year curriculum so that their position reflects their difficulty allows us to place learners on this line based on how well they perform on assessments. The Rasch model statistically encodes this idea. If one learner is positioned further along the line than another (or than themselves at an earlier stage), the model implies they are somewhat better at everything. Similarly, if a question is placed further along the line than another, we conclude that it is more difficult for everyone. Following this logic, a learner must get better at everything to progress. It is an admirable aspiration, but it is neither practical nor realistic for any learner.
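
To make the “better at everything” consequence concrete, the standard Rasch model can be written in a few lines. It assumes the probability of a correct answer depends only on the difference between a learner’s ability and an item’s difficulty, both positions on the same line. This is a generic sketch of the textbook model, not any particular platform’s implementation, and the ability/difficulty values are invented for illustration.

```python
import math

def rasch_probability(ability, difficulty):
    """Rasch model: P(correct) = 1 / (1 + exp(-(ability - difficulty)))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Under the model, a learner placed further along the line has a higher
# success probability on *every* item -- exactly the "better at
# everything" implication described above.
weaker, stronger = 0.0, 1.0
for difficulty in (-1.0, 0.5, 2.0):  # easy, medium, hard items
    assert (rasch_probability(stronger, difficulty)
            > rasch_probability(weaker, difficulty))
```

Because ability and difficulty enter only through their difference, the model has no way to represent a learner who is strong in one topic but weak in another.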

Examinations should empower educators to predict gaps in learning. They should allow us to estimate the probability of success after compulsory education and to infer how it would vary if a specific (malleable) learning gap is closed. Authors Zwitser, Glaser and Maris (2017) provide a methodology that enables us to make this necessary prediction while guaranteeing comparability among learners and across time. The technique presented by these authors is sufficiently adaptable to detect attainment gaps at the individual level, giving educators knowledge of the areas where particular learners require intervention.

Let’s take a simple example. We’ve all heard of the high school student who claims to know nothing about linear equations while excelling in trigonometry. Given a test covering both topics this week and another the next week, this student would probably perform poorly on the linear-equations items while doing far better on the trigonometry items. A Rasch model, however, implies that the reverse pattern is equally likely, which is clearly not the case. Since the teacher cannot detect the root problem, they cannot fix it.

How can we ensure exams help identify the root cause and fix it? By bringing in a personalised learner/learning support system that

  • does justice to the diversity found among learners,
  • assists each one of them to succeed,
  • scales in terms of the number of users and questions, and
  • reduces the carbon footprint.

Personalized assessments are praiseworthy if and when they encourage learning and the achievement of learning objectives. Adaptive testing is seen by many as the pinnacle of personalized assessment. Yet although it yields a minor increase in precision from the same number of item responses, it offers little opportunity to diagnose individual strengths and weaknesses and promote learning. In the ideal adaptive test, every item presented to a learner has a ½ probability of being answered correctly.
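
The ½-probability property follows from how adaptive tests pick items. Under the Rasch model, an item is most informative when its difficulty matches the learner’s current ability estimate, at which point the success probability is exactly ½. Below is a minimal sketch of that selection rule; the item bank and values are invented for illustration, not drawn from any real test.

```python
import math

def rasch_probability(ability, difficulty):
    """Rasch success probability for a learner on a single item."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def pick_next_item(ability_estimate, item_difficulties):
    """Maximum-information selection under the Rasch model: the most
    informative item is the one whose difficulty is closest to the
    current ability estimate, giving a success probability near 1/2."""
    return min(item_difficulties, key=lambda d: abs(d - ability_estimate))

item_bank = [-2.0, -1.0, 0.0, 0.6, 1.5, 3.0]   # hypothetical difficulties
chosen = pick_next_item(0.5, item_bank)
print(chosen, rasch_probability(0.5, chosen))  # 0.6, probability ~0.475
```

Because every administered item sits near the 50% mark, wrong answers carry almost no diagnostic signal about *which* topic the learner struggles with, which is precisely the limitation argued above.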

In such a test, if our learner who does not understand linear equations fails three linear-equation items, this outcome is not surprising under the Rasch model, so we would not conclude that this specific learner struggles with linear equations. But if we test the hypothesis that this particular learner does not understand linear equations on those same three responses, taking circumstantial evidence into account (how the learner did on trigonometry), the same result would be surprising. It would point strongly to the correct conclusion. Once we detect the root cause, we can deal with it and ensure the student advances in the proper direction.

Our idea here is not to argue that the current assessments are inadequate but rather to show where we can improve by modifying the technical nature of the examinations. The aim is to empower our future generations with all the skills they need, and personalised examinations can be the perfect ally to a teacher.


Latest Trends and Developments in Online Assessments

The pandemic in 2020 changed the way we teach and learn. Schools continue to close and reopen in many parts of the world, demanding alternatives for instructional continuity. This has drawn considerable investment in modernizing and digitizing educational systems. A wide range of digital tools is emerging to support learning and assessment and to enhance learning outcomes.

Neha Nougai, Presales Consultant, Excelsoft, asks: What does it mean for online assessments in the near future?

The online exam software market was valued at USD 5.3 billion in 2020 and is projected to reach USD 11 billion by 2028, growing at a CAGR of 9.01% from 2021 to 2028. Beyond what LMSs have traditionally offered, external digital assessments are increasingly being adopted. Significant developments in computer-based testing are redefining the potential of assessment. It would be helpful for practitioners, decision-makers, researchers, and system developers to identify trends in the characteristics of online assessment systems in order to select or create them effectively.

Emerging trends in online assessments

• Formative assessments are rising and becoming more interactive

With the necessity of early feedback and student remediation, digital tools are increasingly used over analog methods to create and deploy assessments faster. Educators prefer to create exams with various types of questions such as multiple-choice questions, fill-in-the-blanks, match the columns, essay type, Likert scale, image labeling, etc. Multimedia, like audio, images, video, etc., is included to make assessments more engaging.

• Assessments are becoming more candidate-centric

There is a trend toward more actionable assessment data from which insights can be drawn. Rather than imposing rigid standards to evaluate students’ competencies, educators are tailoring assessments to students’ responses to personalize learning. Using technology tools, teachers can simplify and shorten the feedback loop to drive their instruction. Platforms are incorporating Item Response Theory (IRT) based question and test calibration. This allows educators to appraise student performance using metrics such as depth of knowledge level, skills, guessing factor, and more.
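
As a rough sketch of what IRT-based calibration with a guessing factor looks like, the three-parameter logistic (3PL) model below extends the basic logistic curve with a discrimination parameter and a chance-success floor. The parameter values are illustrative only and are not taken from any particular platform.

```python
import math

def three_pl_probability(ability, difficulty, discrimination, guessing):
    """3-parameter logistic IRT model: even a low-ability examinee has
    a floor probability (`guessing`) of answering correctly by chance."""
    logistic = 1.0 / (1.0 + math.exp(-discrimination * (ability - difficulty)))
    return guessing + (1.0 - guessing) * logistic

# A hypothetical 4-option multiple-choice item: chance success ~0.25.
p_low = three_pl_probability(ability=-3.0, difficulty=0.0,
                             discrimination=1.2, guessing=0.25)
p_high = three_pl_probability(ability=3.0, difficulty=0.0,
                              discrimination=1.2, guessing=0.25)
print(round(p_low, 2), round(p_high, 2))  # ~0.27 and ~0.98
```

Calibrating these parameters from response data is what lets a platform report metrics such as the guessing factor mentioned above, rather than raw percent-correct scores.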

• There is a greater emphasis on inclusivity

With a growing concern for accessible education, there has been an increased focus on making assessments more diverse, inclusive, and fair. By utilizing assistive technologies, test platforms are becoming more accessible to users with various needs, challenges, and disabilities. Some important ways to make the test interface perceptible to users are to provide text alternatives for non-text content, to make the content easier to see and hear, to provide different input modalities beyond the keyboard, and to provide multilingual support.

• Traditional assessment delivery is being phased out

Institutes are no longer dependent only on pen-and-paper delivery. Although there has been a slow transition back to physical spaces, educators are well aware of the advantages of online proctoring. Many have adopted a blended or online assessment delivery model, allowing educators to monitor candidate test progress remotely. Proctors can see students’ live webcam feeds and on-screen activities, e.g., navigation, click activity, etc. The latest remote proctoring platforms offer features like an AI anti-cheat engine and a lockdown browser to ensure the sanctity of the exam.

• AI for auto-marking is becoming popular

Educators are increasingly using the auto-marking function of online assessment platforms, which frees up significant time for instruction. In addition to scoring multiple-choice or short-form questions, these platforms use AI to grade longer answers and written work, making auto-marking an indispensable platform function.

Digital tools are becoming more readily available to support instruction and assessment. Since the adoption of online assessments has been soaring, it is crucial to stay abreast of new advances and trends in eAssessments to use them effectively.


Reed in Partnership and Eintech partner to form testing powerhouse

The deal, which sees Reed in Partnership making a strategic investment in Eintech, brings together Eintech’s flexible, configurable assessment and e-learning technology with the scale, infrastructure and reputation of Reed’s Test Centre network.

This partnership establishes a new proposition for awarding organisations and large corporate and government clients for which Reed is known. The new proposition covers the spectrum of organisations’ assessment needs, from online testing with remote proctoring, to in-person Test Centres for high stakes examinations, to e-portfolio for evidence-based qualifications, and tailored e-learning for course delivery.

Rhodri Thomas, Managing Director of Reed in Partnership, joins his Reed colleague Louise Edwards in seats on the Eintech board to support future growth. He comments:

“We are delighted to be part of Eintech’s next stage of growth and development. The Eintech team have developed innovative and scalable solutions for the assessment market. Our partnership with Eintech enables Reed Assessment to provide test owners with flexible test delivery either through our Test Centres or via Eintech’s secure remote proctoring technology.”

James Carter, CEO of Eintech, adds:

“This partnership is a natural fit that brings benefits to all. Our remote learning and testing technology is already proven in over 180 countries, and thanks to the reach and dependability of Reed we can now deploy it at an even larger scale, in new testing environments and to a growing client base.”

Reed Assessment provides assessment solutions to certify people’s skills and expertise.

Reed Assessment delivers secure, compliant and inclusive testing over a Test Centre network of 135 centres nationwide, and delivers over two million tests annually for clients including UK Government.

The business is part of the Reed Group, which encompasses 11 companies all created with the purpose of “improving lives through work”. Founded in 1960 by Sir Alec Reed, the group of companies remains a UK family business today, with annual revenues in excess of £1 billion.

Photo caption: The Eintech board, L-R: James Carter, Marc Carpenter, Rhodri Thomas, Louise Edwards