the e-Assessment Association

The anatomy of a successful certification program in an AI world


What does the new era of AI mean for certification programs, and how will it impact them in terms of security and the future of testing? To address these questions and provide insight into what makes a successful certification program, we turn to Questionmark’s Chief Product Officer, Neil McGough.

Can you tell us a little bit about your background Neil, and how you got involved with product development within the EdTech sector?

Sure, I started working in EdTech around ten years ago when I joined Learnosity. Before that, I’d been working in the Enterprise Telecom space and was looking to move to an industry that felt like it had more of a positive impact on the world, and EdTech obviously fits that bill completely.

So, Learnosity was my first introduction to education technology, and my role started very much on the sales engineering and support side, which involved getting our customers up and running. During this time, I saw many EdTech and assessment platforms: the good, the bad, the ugly, and the really, really positive. Seeing all these implementations meant I developed a really keen understanding of what makes a high-quality assessment experience in a product, and as the organization grew, it ultimately led me to the product team, where I could make the meaningful product changes our customers wanted and that would best support them.

What are the most common challenges organizations face when delivering successful certification programs?

I think there’s a broad range of challenges, and it typically depends on what stage of the life cycle an organization is at. In the earliest stages, many organizations we encounter are still in a digital transformation era with their certification programs, which entails a lot of shifting from pen-and-paper assessments and migrating offline content creation cycles into a digital space. It’s one of the most difficult processes organizations often face, and success hinges on using reliable vendors and finding experts who can guide them correctly. The added challenge is also figuring out how to make those things successful from day one and not end up in a continuous cycle of attempting to transform and then hitting roadblocks.

So, due to this common challenge, organizations vary in their success levels with the digital transformation process, leading to different pain points for each. For example, some organizations we encounter have excelled in the student testing experience, but their pipeline for content creation remains a lengthy, laborious, and offline process. This often involves relying on people working in spreadsheets and manually entering data into the end system. As a result, they fail to fully leverage technology to streamline and improve the process for both themselves and the test-taker.

In other examples, we’ve had organizations start the digital transformation process and have the solution in place for test-takers, but the unfortunate reality is that they chose a subpar vendor, and now they’re stuck in a cycle of hitting roadblocks for five or six years before they can realistically reinvest to replace it. It’s why I think choosing the right vendor early on is essential for success.

It’s never a one-size-fits-all scenario, of course. You can have organizations that have successfully created good content workflows but are still very much tied to a brick-and-mortar approach for test delivery and not necessarily able to capitalize on the worldwide technological advancements we’ve made. This includes assessing test-takers remotely and allowing them to be assessed when it suits them rather than when there’s an open test window. It’s a real journey for an organization to go on, and the key thing is to get to a stage where, as an organization, you can be nimble and really adapt to the needs of the audience you’re aiming to certify or assess.

You mention that successful certification programs often rely on choosing the right vendor. What advice can you give to organizations on making that choice? 

In broad terms, you’re looking for a cross-section in your vendors of reliability in what they’re able to provide and trustworthiness in what they’re able to deliver. Just as importantly, you should be looking for organizations that are innovative in their approach and moving with the technology, because the reality is that once you adopt a vendor and they’re a key part of delivering your certification program to your test takers, you become hugely reliant on them, and moving away from that vendor can be extremely complex, costly, and difficult.

So it’s about ensuring that your vendors are going to be able to deliver on what they say, to the scale needed, and that they are going to be the best vendor for your needs not just now but for the numerous years that you will probably work with them, given the complexity of change.

AI is being presented globally as an ‘industry disrupter.’ In what ways do you think it’s also going to impact how organizations manage their certification programs in terms of efficiency or security? 

When we talk about AI in these terms, I always like to break it up into challenges and opportunities because they’re equal elements, and I think focusing on one without the other is kind of painting an unfair picture of the topic.

AI in the certification space from a test security or cheating point of view obviously faces challenges. We’re in a world where generating content and essay answers has become infinitely easier than it ever was, and now some of the old ways of cheating, via essay mills, for example, are all but gone because why would you bother spending money with an essay mill when you can use an LLM and get something crafted up in an instant?

So yes, there’s a risk, but while the risk here is new, the response of high-quality security should remain the same, and as with most security conversations, there is no silver bullet. Instead, you have to implement layered defenses on top of each other to deter and catch fraudulent behavior and minimize the opportunities for people to succeed in test fraud. These layers can be made up of online proctoring tools, secure browser features, and test experiences that occur in a controlled environment.

Interestingly, despite the huge fear at the moment that AI is an insurmountable threat to valid testing, much of the current security risk isn’t AI-driven but stems from proxy testing. The focus then, for anyone concerned about test security, is asking how well they can secure the system or platform and whether there are more layers that could be added.

The other thing to consider when talking about AI and the risks of cheating is to welcome the very pertinent conversation about how we assess and whether we’re assessing in the right way. So, for example, should we adopt more micro-credentialing approaches to certifications and build from low to higher stakes? And how much do we balance changing or deterring fraudulent behaviors and how much do we accept that there is an element of risk you can’t plan for?

These conversations inevitably also open the door to moving into more real-world testing, as with performance-based testing in the IT space, for example, where you aren’t looking for a written reflection of knowledge but an actual demonstration of on-the-job skills in a lifelike environment. In moving towards these more observational and demonstrative assessments it becomes much more difficult to cheat, but as we talked about before, there’s no one silver bullet, so it’s always about stacking security measures and becoming more flexible in how we build certifications.

The flip side of AI is that there really is a huge amount of opportunity for organizations to streamline how they do a lot of things. Content creation is an obvious example: with a tool like Author Aide, organizations can boost their content volume and create fully fleshed-out item banks without compromising on quality. But AI-assisted authoring could also support test security by streamlining the content creation process, allowing for more complex questions with reduced exposure rates amongst test-takers and minimizing content leakage.

Of course, it’s not only security where AI can really move the needle but also the efficacy of certification programs, especially in terms of fixing bottlenecks. One of the biggest problems for organizations is typically getting access to subject matter experts to create test content throughout the creation process. With AI as an aid, however, you can use already written and expert-approved training or source material to generate assessment content, and the added advantage is that your subject matter expert no longer needs to be involved all the way through your assessment process, save for a short review of the content at the end.

Similarly, AI can be used as an assistive aid for scoring and grading complex subjective material, and again, the aim isn’t to push the human out of the interaction but more to make the human process sufficiently more efficient. In these ways and more, AI can be a true solution to all-too-common bottlenecks found within certification programs.

Lastly, and perhaps most powerfully, AI presents an opportunity to assess people differently. To date, we’ve been in a world where the two main methods used in certification programs are the objective multiple-choice question and the subjective essay. With AI, there are multiple new ways and approaches to understand how much an individual understands and can apply that knowledge. Instead of a traditional essay, for example, we might see more chat or conversational style assessments that can provide a picture of a test taker’s understanding that might otherwise be missed if they aren’t a strong writer.

So, I think in summary, AI has the potential to really change how we assess and can remove a huge amount of the inefficiency commonly found throughout the certification process.

There’s been a lot of talk in recent months about an IT skills gap. How do you see certifications solving that, and are there any unique challenges to be aware of?

I would say that in some way, shape, or form, we’ve been talking about an IT skills gap for the last twenty years, and it’s realistically a challenge that’s only going to accelerate. The reason we discuss this specific industry and its skill gap challenges is because everything is online now, and everything is an application these days. The explosion of tools and tech, along with the need for people to work with these tools, creates a real chasm that’s challenging for individuals to keep abreast of. This prompts an interesting conversation about how AI will impact things through tools like GitHub Copilot and others like it. Realistically, all they will do is drive the demand for technical skills higher and cause IT skills to change more rapidly than ever before.

In Ireland, where I’m from, this is a highly discussed and common problem. For years, we’ve tried to address IT skills gaps with college course placements and programs. What is very apparent is that the three or four-year structure is not the way to solve it. What we need are routes for people to reskill frequently. A key part of that process, of course, is certifications because you need a structure for people to learn within, and you need a training program to support it.

The solution then lies in building out certification programs in the IT space that allow people to develop their skill sets and reliably prove them in order to expand their careers. Likewise, developing certification programs that allow organizations to be able to grow their employee base with the skills they need or upskill existing employees is also absolutely key for the tech industry.

The trick to this approach succeeding, however, is ensuring that certification programs are built with the experience of learning and testing in mind. It needs to be as streamlined as it possibly can be for test takers, because what we know for sure is that if the process around getting a certification is hard (not the certification itself, but the process around it), then you lose good people who are more than capable of being certified but aren’t in a position to jump through complicated process hoops. It’s why best-in-class technology and choosing the right vendor that will reliably meet your needs is essential right from the start.

Going back to your initial thoughts on what makes a good vendor, what is it about the Questionmark platform that makes it so powerful for creating successful certification programs?

I think there are many integral points within this, but the most important thing you can look for in a vendor in this space is experience and a deep understanding of what is required. It’s also the ability to act as a trusted partner that can guide you to what you truly need, rather than what you might initially think you need; plenty of vendors will say yes to every request whether it’s in your best interest or not.

With the Questionmark platform, not only do we have vast experience in getting customers from onboarding to the first test, but there is also a great attitude of shared success with our customer base that’s really powerful in getting a solution that truly stands the test of time.

The second thing that I’m personally very proud of is that Questionmark as a technology platform is designed to grow with your certification program. Very often, organizations might know what they want the end result to be, but getting there is a process to be taken in chunks. So, for organizations that need to get from onboarding to the first test as fast as humanly possible, we have an application that can seamlessly handle everything from test scheduling to commerce to reporting, all within one application.

Our other strength is that as an organization’s program grows in complexity or scale, we can grow with them, integrating flexibly with their other vendors and becoming part of their overall certification ecosystem, which again, makes the test taker experience all the more simple and easily navigable.

And the third part is really about the incorporation of innovative technology. We provide a solution, especially via our Advanced Assessment offering, that’s highly responsive across any device and makes it simple for organizations to build complex, scenario-based, and high-quality content, supported by best-in-class tools like proctoring solutions and observational assessments that really transform the test-taker experience.

In keeping ahead of the innovations, we remain at the forefront of EdTech, and by extension, our customers do too. It’s this commitment to scaling with our customers and adapting to new tech that’s truly paved the way for how we tackle and take on AI opportunities too. Our AI roadmap and the areas we’re investing in across the company (including bringing all our existing and future Learnosity AI tech into the Questionmark platform) are looking towards more AI-assisted tools to free up time and enhance the test-taker experience: think Author Aide today and, in the future, AI rubric scoring.

Having this future-facing attitude toward technological innovation combined with our ability to grow with our customers means we’re extremely well-placed to be the best vendor and partner for now and in ten years’ time too.
