Global Medical and Healthcare Assessment Special Interest Group


About this SIG

About the Global Medical and Healthcare Assessment Special Interest Group
The e-Assessment Association's Global Medical and Healthcare Assessment Special Interest Group (SIG) brings together assessment professionals with a shared interest in medical and healthcare digital assessment to collaborate, learn, and advance their expertise.


Previous Meetings


Lessons Learned on Evolving Assessment Models in the Certification & Credentialing Industry

The certification and credentialing industry often talks about innovation, but are we truly evolving, or are we refining the same models we’ve used for decades?
In this thought-provoking 60-minute webinar led by Manfred Straehle, we will explore key lessons from over 20 years of real-world assessment transformation across healthcare, technology, and professional certification programs. Drawing from practical experience and industry observation, this session challenges conventional approaches to exam strategy, job analysis, recertification, microcredentials, accreditation, and emerging neuroassessment technologies.
There will be a discussion with the Global Medical & Healthcare Assessment Special Interest Group, followed by an opportunity for Q&A with the audience.

Article: Lessons Learned on Evolving Assessment Models in the Certification & Credentialing Industry


Lessons Learned on Evolving Assessment Models in the Certification & Credentialing Industry
By Manny Straehle, PhD, GISF

When I decided to write this piece, I wanted to say that assessments were evolving, but in general, I don’t believe they are. I think we are “stuck” using many of the same tools (e.g., Excel, R, SPSS, SAS, online survey applications, qualitative and quantitative methods) and following the same validation guidelines and arguments. I do confess, the latter is especially important for providing a fair and legally defensible assessment. The question is: can we develop examinations that measure the breadth and depth needed to ensure that a person is competent while maintaining fairness and rigor (e.g., validity, reliability)?

While I do believe we are “stuck” at the present moment, I think assessments will evolve with technological innovations, including AI, wireless brain communication, and small, portable, affordable bio- and brain-detection systems. I predict these brain measures will be scored to interpret individual performance with much greater precision and in less time (see Lesson 7 below). So, stay tuned for a tipping point here.

So what lessons will I be sharing? I want to share practical lessons I have learned from real-world assessment transformations across healthcare, technology, and professional certification organizations that I have been involved with, or have observed, over 20 years of doing this work. These lessons can begin to spark an evolution and move the industry to think and practice differently in these areas, and perhaps more effectively, to support a more competent workforce.

Lesson 1: Assessment Strategy Is Critical – So Do It Before, Not Afterwards
Too many certification programs treat assessment strategy as an afterthought—something to be considered once the exam is written or the blueprint is approved. That approach almost guarantees misalignment, weak defensibility, and poor decision-making downstream. I often see organizations creating certification programs without fundamental components such as scope, purpose, target audience, alignment with current and future products (e.g., stackable credentials, microcredentials, badging, training/education, aids), key internal and external stakeholders (e.g., marketing and communication team leadership, influential external individuals), B2B strategies, endorsements, and many others. These components are critical inputs to exam development activities and business decisions. Certification is a unique product in which positive revenue may never be achieved, so its value lies in identifying competent professionals and communicating that value effectively.

A credible assessment strategy must be planned intentionally from the start, clearly described to stakeholders, rigorously reported, and consistently executed over time. Planning defines what competence looks like and how it will be measured. Description creates transparency and shared understanding. Reporting turns data into insight. Execution is where credibility is either earned or lost. Miss any one of these, and the entire system becomes fragile.

Strong programs recognize that assessment strategy is not a document—it’s an operational discipline. It connects job analysis to test design, scoring to governance, and the product to the teams that communicate and promote its value. When assessment decisions are well planned, clearly articulated, properly documented, and executed, certification bodies gain internal confidence and external trust. Regulators, employers, and candidates can see not just that decisions were made, but why they were made and how they are supported by evidence, including business decisions. In an environment of increasing scrutiny, this level of discipline is no longer optional—it is the foundation of a defensible, future-ready credential.

Lessons Learned

Begin the assessment strategy before any assessment development activities start.
Clearly define scope, purpose, and target audience prior to any activities.
Ensure alignment between certification and current/future product offerings (e.g., stackable credentials, microcredentials, badging, training).
Identify and engage key internal and external stakeholders early (e.g., marketing, leadership, SMEs, influencers, B2B partners).
Define competence (for assessment products and services) explicitly before determining how it will be measured.
Treat assessment strategy as an operational discipline—not a one-time document.
Intentionally plan, clearly describe, rigorously report, and consistently execute assessment decisions.
Document not only what decisions were made, but why they were made and how they are supported by evidence.
Recognize that certification’s primary value is credibility and professional competence, not guaranteed revenue.
Consider engaging an experienced credentialing professional to guide strategy development.


Lesson 2: Focus on Outcomes Rather than Relying Only on the KSAOs of the Job Analysis

Outcomes matter in a job analysis because work does not exist in a vacuum—jobs exist to produce results. Focusing on outcomes clarifies why a role exists and what success looks like when the job is performed competently. Tasks describe activity; outcomes describe value (e.g., AERE’s C-Suite Big 5 Outcomes: increasing revenue, decreasing costs, efficiency, quality, customer service). Without outcomes, job analyses drift into exhaustive but shallow inventories of duties that obscure what truly matters and that, by the way, provide only the bare minimum needed to develop an examination outline (test specification, exam specification). By defining outcomes, we highlight the consequences of performance—especially failure—and can identify which aspects of the job carry the greatest risk, responsibility, and impact. This is what separates meaningful job analysis from clerical documentation. In fact, when we develop competency models, we also focus on common incompetencies to determine the common failures, which is incredibly useful and aligns with an individual's and an organization's needs.

An outcomes-based approach also provides the structural backbone for everything that follows the job analysis. Competency models, training programs, performance metrics, and assessments only have integrity when they are traceable to outcomes that matter in the real world. When outcomes are explicit, we can defensibly specify the knowledge, skills, and judgment required to achieve them reliably, under real conditions, and at an acceptable level of quality. This strengthens validity, supports legal and professional defensibility, and keeps workforce decisions grounded in evidence rather than tradition or intuition. In short, outcomes turn job analysis from a descriptive exercise into a decision-enabling one. For these reasons, when we develop competency models, we identify outcomes that can be analyzed across the various levels (e.g., novice, intermediate, expert). Therefore, outcomes are incredibly important as we evolve assessments: a list of KSAOs alongside outcomes, mapped to how they relate to each other.

Lessons Learned

Focus on outcomes to clarify why a role exists and what competent performance actually delivers.
Distinguish between tasks (activity) and outcomes (value and impact).
Avoid over-reliance on exhaustive KSAO lists that obscure what truly matters for performance.
Identify high-risk, high-impact outcomes to prioritize what should be measured and assessed.
Use outcomes to reveal consequences of failure—not just descriptions of work.
Align competency models, training, performance metrics, and assessments directly to real-world outcomes.
Define the knowledge, skills, and judgment required to achieve outcomes reliably and at acceptable quality levels.
Analyze outcomes across competency levels (e.g., novice, intermediate, expert) to support developmental pathways.
Incorporate both KSAOs and outcomes into examination/blueprint specifications.
Use outcomes to transform job analysis from a descriptive exercise into a defensible, decision-enabling framework.

Lesson 3: Maintaining Competency – The Broken World of Recertification

Is how we recertify credential holders broken? Yes! I have seen so many programs with poor rationales and justifications for the types of recertification activities they require and for how many activities or hours (or another metric) support their decisions. All of these practices risk credential holders who may no longer be competent carrying the same credential as someone who has just passed the current examination. The primary question is whether someone who has completed these activities over this period of time would be able to pass the current examination. And should they be required to pass the current examination set at a higher passing standard (score)? We conduct studies like this to validate recertification activities.
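To make that concrete, below is a minimal sketch, in Python, of the kind of validation study described above. Every number in it is hypothetical: the cohort scores, the current cut score, and the elevated standard are placeholders rather than data from any real program.

```python
# Sketch of a recertification validation check: would a sample of recertified
# credential holders clear the cut score on the current exam form?
# All values below are hypothetical placeholders.
import math

CURRENT_CUT = 70    # hypothetical passing score on the current form
ELEVATED_CUT = 75   # hypothetical elevated standard for recertificants

# Hypothetical scores earned by recertified holders on the current form.
scores = [62, 71, 68, 80, 74, 66, 77, 83, 59, 72,
          69, 75, 81, 64, 78, 70, 73, 67, 85, 61]

def pass_rate_with_ci(scores, cut, z=1.96):
    """Proportion passing at `cut`, with a Wilson 95% confidence interval."""
    n = len(scores)
    p = sum(s >= cut for s in scores) / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, (center - half, center + half)

for label, cut in [("current standard", CURRENT_CUT),
                   ("elevated standard", ELEVATED_CUT)]:
    p, (lo, hi) = pass_rate_with_ci(scores, cut)
    print(f"{label} (cut = {cut}): pass rate {p:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")
```

A real study would, of course, use a representative sample, the program's documented standard-setting results, and appropriate controls, but even this simple check turns the question into one answerable with evidence rather than assumption.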

Maintaining competency was never meant to be a box-checking exercise, yet that’s exactly what recertification has become in far too many certification programs. We’ve built systems that reward attendance/participation over ability, time served over skills demonstrated, and compliance over competence. Professionals collect continuing education credits like airline miles—often disconnected from what they actually do on the job—while employers and the public assume that a renewed credential indicates continued competency or a level higher than when they initially earned the certification. It doesn’t. In a fast-changing world, knowledge decays, practice evolves, and roles shift. When recertification fails to reflect those realities, it quietly erodes the very trust credentials are supposed to protect. In other words, the value is lost. 

If credentials are to remain relevant, recertification must move away from simply making renewal easy for the credential holder and toward a defensible, evidence-driven process that genuinely supports professionals' competency, safeguards the public, and restores confidence in what it means to be “currently competent.” So, hire a professional to ensure that your recertification program is valid by conducting quality research.

Lessons Learned

Question whether your recertification model truly reflects continued competence—or merely compliance.
Avoid “box-checking” systems that reward attendance over demonstrated ability.
Do not assume that continuing education hours equate to maintained or improved performance, or that the candidate remains minimally qualified/competent.
Regularly evaluate whether recertified individuals could pass the current examination—especially at an appropriate or elevated standard.
Align recertification requirements with real-world practice changes, evolving knowledge, and role shifts.
Validate recertification activities through defensible research and empirical studies.
Prioritize high-impact competencies and risks when designing maintenance requirements.
Ensure that recertification protects the public and reinforces employer trust.
Move from convenience-driven renewal models to evidence-driven competency assurance.
Engage experienced credentialing and psychometric professionals to design and validate your recertification framework.

Lesson 4: Focus on Developing Future Certifications at a Lower Competency Level

Most certification programs develop their next certification at a higher competency level and continue in this manner until someone realizes there is no volume to support a revenue stream.  In addition, the future of certification depends on recognizing lower competency not as a weakness, but as an entry point. When we ignore early-stage competence, we force programs to choose between being inaccessible or irrelevant. New entrants, career changers, and emerging roles need credentials that reflect what they can safely and meaningfully do now, not what experts mastered after ten years on the job. Designing certification only at the top end creates artificial barriers, shrinks pipelines, and leaves entire segments of the workforce uncertified and unsupported.

A smarter approach is to intentionally design future certifications from the ground up—starting with lower competency and building upward. That means clearly defining foundational outcomes, minimum safe practice, and realistic performance expectations, then mapping progressive pathways toward advanced competence.  When done well, this approach strengthens the entire credentialing ecosystem: candidates see a future, employers gain clarity, and certification bodies remain relevant in a world that increasingly demands flexibility without sacrificing rigor.

So, is it worth developing credentials at a higher level? Perhaps only after the lower-level ones have been developed, so that they meet a broader need and help the certification organization survive and generate the revenue required to sustain itself.

Lessons Learned

Avoid building only upward—higher-level certifications alone rarely sustain volume or long-term revenue.
Recognize lower competency certifications as strategic entry points, not diluted standards.
Design credentials that reflect what individuals can safely and meaningfully perform (public welfare and safety) now, not only expert-level mastery.
Reduce artificial barriers that shrink pipelines and exclude emerging professionals.
Define foundational outcomes and minimum safe practice clearly before building advanced pathways.
Create progressive certification ladders that map realistic development from novice to expert.
Strengthen workforce pipelines by supporting new entrants and career changers.
Align credential design with market demand and organizational sustainability.
Preserve rigor while increasing accessibility through structured competency tiers.
Develop higher-level credentials strategically—after establishing lower-level programs that support ecosystem growth and financial viability. 

Lesson 5: Microcredentials – Are They Worth It or a Waste of Time?

For years, as a psychometrician, I did not understand how a competency-based certification could be reduced to a smaller number of exam items without potentially compromising the reliability or soundness of the exam. However, I recalled that when we administered clinical personality assessments as therapists, we used short versions, some with fewer than 10 items, while still providing a sound validity argument. While I remain on the fence, I can see how a stackable model, in which earning all of the microcredentials forms a competency-based path equivalent to a full certification, can progress along the career continuum. Therein lies the value: measuring competency along the career path.
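The classic worry about shorter forms can be quantified with the Spearman–Brown prophecy formula, which predicts how reliability changes when a test is shortened or lengthened. Here is a minimal sketch; the full-form reliability and item counts are illustrative, not drawn from any real program.

```python
# Spearman–Brown prophecy formula: predicted reliability when a test with a
# known reliability is shortened or lengthened by a factor k = new_len / old_len.
def spearman_brown(reliability: float, k: float) -> float:
    return (k * reliability) / (1 + (k - 1) * reliability)

FULL_FORM_ITEMS = 150
FULL_FORM_RELIABILITY = 0.90  # illustrative reliability of the full exam

for items in (150, 60, 30, 10):
    k = items / FULL_FORM_ITEMS
    r = spearman_brown(FULL_FORM_RELIABILITY, k)
    print(f"{items:>3} items -> predicted reliability {r:.2f}")
```

Under these illustrative numbers, a 10-item form retains a predicted reliability of only about 0.38, which is exactly why the claim a microcredential makes must be narrow enough for the lighter evidence to support it.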

The more I examined microcredentials through a job-analysis and outcomes lens, the more the conversation shifted from item counts to decision purpose. A microcredential is not, and should not pretend to be, a compressed version of a high-stakes certification. Its value lies in the precision of the claim it makes. When the outcomes are narrow, observable, and well-defined—tied to a discrete set of tasks or decisions—the assessment burden can justifiably be lighter without being psychometrically irresponsible. In other words, validity is not a function of length alone; it is a function of alignment between the competency claim, the evidence collected, and the inference drawn.

That said, microcredentials become problematic when organizations use them as a marketing shortcut rather than a measurement strategy. Fragmentation without coherence leads to credential inflation, learner confusion, and employer skepticism. A well-designed microcredential ecosystem must be intentionally scaffolded, psychometrically governed, and anchored to a defensible competency framework. When done right, microcredentials are not a waste of time—they are a modular expression of professional growth. When done poorly, they are little more than digital merit badges. The difference is not philosophical; it is methodological.

So, when strategically implemented, a microcredential can be an evolution, offering candidates a more accessible way to earn a credential.

Lessons Learned

Length is not the enemy—misalignment is. Short assessments can support defensible decisions when the competency claim is narrow, the outcomes are explicit (see above), and the evidence is fit for purpose. But relying on only a handful of exam items can weaken your validity argument.
Microcredentials must make modest, precise claims. The moment a microcredential overreaches—implying full occupational competence, when it can typically only support a claim of minimal competency in a specific area of the job rather than the entire job—it undermines its own credibility.
Stackability is a design principle, not a marketing slogan. Microcredentials only work when they are intentionally sequenced, governed, and cumulatively mapped to a broader competency framework, rather than designed solely to join a club of marketed, unvalidated products.
Validity arguments scale down—but they do not disappear. Fewer items require more rigor in defining outcomes, task relevance, and decision rules, not less.
Employer trust is the real barometer. If employers cannot quickly understand what a microcredential represents and how it differs from a certification, the credential has already failed. A new microcredential also often competes with existing certifications when introduced.
Microcredentials are neither a panacea nor a waste of time. They are a tool—and like any tool in assessment, their value depends entirely on how responsibly they are designed, implemented, and interpreted.

Lesson 6: Meeting an Accreditation Standard Is the Floor, Not the Ceiling
Many organizations fail to understand that earning accreditation status does not mean you stop there, as many credentialing standards focus on continued improvement. As an ANAB ISO/IEC 17024 assessor, I see many newly accredited programs realize that the standard can help their organization with continuous improvement. However, while they may recognize this, some don't implement improvements immediately, or at all, and this can be for a number of reasons (e.g., a small-staffed program).

In credentialing, accreditation is often treated as the finish line rather than the starting point. Organizations work tirelessly to demonstrate compliance, assemble documentation, and earn the accreditation status—only to exhale and quietly revert to business as usual. While accreditation standards are essential for protecting the public and ensuring baseline quality, they are intentionally written to be minimum thresholds. They define what must be in place, not what good, innovative, or future-ready credentialing looks like. Confusing compliance with excellence is one of the most persistent risks in our field.

The organizations that truly lead do not ask, “Are we compliant?” but rather, “Are our decisions defensible, relevant, and improving?” Accreditation does not guarantee meaningful outcomes, employer trust, or candidate value—it simply affirms that foundational safeguards are in place. Going beyond the standard requires continuous investment in job analysis quality, outcome clarity, assessment design, and post-launch evaluation. Accreditation may get you in the room, but it is psychometric rigor, transparency, and responsiveness to the profession that ultimately earn credibility.

Lessons Learned

Accreditation establishes minimum expectations, not necessarily best practices.
Compliance should trigger reflection and improvement, not complacency.
Standards cannot replace thoughtful judgment and professional accountability.
Ongoing validation and outcome review matter more than passing a single audit.
True credibility is earned through sustained rigor, not through accreditation alone.

Lesson 7: Neuroassessments Can Be the Next Disrupter

For almost a century, we have relied on self-report, multiple-choice testing, interviews, and performance simulations as proxies for competence, readiness, and potential. They have been valid, and MCQs in particular meet business and candidate needs, but let’s be honest: they require a great deal of inference (we ask people what they know and then estimate what they actually know) and considerable time to develop, so we are guessing that candidates will be competent with some stated degree of confidence. And all of this measurement occurs at one point in time rather than continuously. Neuroassessments flip that model on its head. Instead of asking people what they know or can do, we begin to observe how the brain actually processes information—attention, working memory, cognitive load, emotional regulation, and decision-making under pressure. This is no longer science fiction. Advances in EEG, eye-tracking, reaction-time analytics, and AI-driven pattern recognition now make it possible to measure cognitive functioning directly, continuously, and with a level of objectivity traditional assessments simply cannot match.

The real disruption is not that neuroassessments will supplement existing assessments—it’s that they may ultimately replace them in many high-stakes contexts. When we can directly observe cognitive efficiency, adaptability, fatigue, and stress responses in real time, the value of proxy measures declines rapidly. Credentials of the future may be grounded less in what someone claims to know and more in neural patterns consistently associated with expert performance.

Lessons Learned

Assessment is moving closer to biology, not further into abstraction.
Proxy/inference measures (tests, surveys, interviews, observations) will lose dominance as direct cognitive evidence becomes viable.
Validity will increasingly be defined by observed cognitive functioning, not just response data from which competence must be inferred.
Neuroassessments will shift credentials from static snapshots to dynamic, continuous measurement.


Stakeholder Communication in High-Stakes Healthcare Programs


When your decisions affect patient safety, workforce mobility, and public trust, stakeholder communication is a core programmatic responsibility. Effective communication with stakeholders strengthens the credibility of your credentialing program and mitigates reputational risk. By contrast, even the most technically sound assessment programs can crumble with poor communication.

When I was studying for my Project Management Professional (PMP®) certification, one of the core knowledge areas that resonated most with me was stakeholder management. It goes well beyond simply identifying who your stakeholders are. It requires a deliberate plan for engaging them, managing those engagements over time, and monitoring how stakeholder needs and perceptions evolve. Those lessons have stayed with me.

As Executive Director of the Certification Board of Infection Control and Epidemiology, my professional background is not in test development or psychometrics. Instead, I have spent my career managing assessment programs and have learned valuable lessons, both from successful launches and from those that were less successful. There are few things worse than announcing a major programmatic change and then realizing, “Oh, we forgot to notify ____.”

When developing a new credential or implementing changes to an existing one, I keep a simple stakeholder checklist handy and regularly ask myself: What are their priorities? What is their risk tolerance? Below are a few questions your team may want to work through during the planning phase of a new or revised program:

Candidates and certificants

How will this new program impact existing credential holders?
What are the consequences—intended or unintended—of introducing a new credential?
Are there issues of fairness or eligibility that need to be addressed prior to launch?

Employers and healthcare organizations

What is the impact on workforce readiness and hiring decisions?
How will credibility be perceived by employers and leaders in the field?

Regulators and accreditors

If you are an accredited program, will you pursue accreditation immediately for the new credential?
If accreditation is planned, how does that decision affect your development timeline and resources?

Subject matter experts and volunteers

What role will volunteers play throughout development and implementation?
Who are the decision-makers, and how are decisions communicated back to contributors?

Patients and the public

What will the outcomes of this assessment signal to the public about competence and quality?
One-size-fits-all communication does not work for high-stakes programs, especially those with international reach. Messaging that is appropriate for regulators may overwhelm or confuse candidates, while public-facing communications may lack the depth or specificity regulators expect. Segmentation matters.

Helpful Hints
Be transparent without overexposing. Clearly explaining how decisions are made and by whom builds trust. However, disclosing overly technical psychometric details can confuse rather than inform.
Apply consistency across communications. Alignment between customer service responses, written policies, and public messaging reduces appeals, complaints, and perceptions of unfairness.
Anticipate emotional responses. High-stakes assessments are inherently stressful. Plain, direct language, especially around pass/fail decisions, is essential. Be explicit about next steps, timelines, and available options.

Lessons Learned
Don’t rely on dense legal language that obscures meaning. If you and your staff don’t understand it, others won’t either!
Communicate policy changes well in advance. Budgets and staffing decisions are often made months ahead.
Ensure internal teams are fully prepared before making major programmatic announcements. Once again, if your team doesn’t understand it, others won’t either!

Conclusion
High-stakes healthcare assessments require psychometric rigor and technical quality, but those elements alone are not enough. Stakeholders evaluate programs not only on outcomes, but on how those outcomes are explained and contextualized. Effective stakeholder communication reinforces perceptions of fairness and legitimacy and ultimately supports long-term program sustainability.


Beyond Borders: What Global Accreditation Teaches Us About Program Governance

Article by Terri Hinkley, Chief Executive Officer/Executive Leadership and Burgeoning Futurist


When international credentialing organizations negotiate mutual recognition agreements, they don't just compare course catalogs. The American Speech-Language-Hearing Association's (ASHA) agreement with counterparts from Canada, the UK, Australia, New Zealand, and Ireland examined "educational and other requirements expected of each other's certificate holders, including academic course content, the amount and distribution of clinical practice hours prior to certification being awarded, degree designations, accreditation of academic programs, experience, and assessment mechanisms" (American Speech-Language-Hearing Association, n.d.). This deep dive reveals international accreditation's lesson for governance: it forces us to distinguish between essential standards and inherited assumptions.

The World Federation for Medical Education (WFME), founded in 1972 in partnership with the World Health Organization (WHO), provides healthcare's most instructive accreditation example. WFME doesn't accredit individual medical schools; it evaluates and recognizes the accreditation agencies themselves, examining "the legal standing, accreditation process, post-accreditation monitoring, and decision-making processes" (World Federation for Medical Education, n.d.-a). WFME's Recognition Criteria cover four areas: (1) Background: scope of authority and acceptance; (2) Accreditation standards: existence, appropriateness, and review; (3) Process and procedures: site visits, qualifications, decisions, complaints; and (4) Policies and resources: conflict controls, consistent application, due process, records, information dissemination (World Federation for Medical Education, n.d.-b).

The Enhanced Nurse Licensure Compact (eNLC), enabling nurses to practice across 43 US jurisdictions with one license, implemented 11 uniform requirements including mandatory federal background checks and standardized disciplinary provisions (National Council of State Boards of Nursing, n.d.). The compact distinguished what's truly necessary for public protection versus what's customary but negotiable.

Similarly, ISO/IEC 17024—harmonizing personnel certification worldwide—requires demonstrating that "members of the governing body do not have a conflict of interest in their overall capacity to serve that could compromise the integrity of the certification process" (International Organization for Standardization, 2012). This demands structural separation: board members shouldn't simultaneously serve on examination committees or participate in appeals involving their practice areas.

The National Commission for Certifying Agencies (NCCA) Standards for the Accreditation of Certification Programs require certification programs to show "the governance structure and the process for selection and removal of certification board members protects against undue influence" (Institute for Credentialing Excellence, 2021, Standard 2.A), taking concrete forms: public representation with actual decision-making authority, documented authority flows, and financial independence.

What International Accreditation Reveals About Governance
International accreditation and mutual recognition can help credentialing organizations understand which governance requirements are critical and which are more flexible, able to vary with geographic or organizational preferences. These are the key building blocks of an exemplary governance structure:


• Conflict of interest controls: Systematic controls that prevent conflicts from compromising decisions—not just disclosure forms.

• Consistent application: Decisions must follow documented, consistently applied processes rather than individual preferences or institutional knowledge.

• Qualification and training: Strong governance ensures people implementing processes are qualified and trained.

• Public accountability: Stakeholders can independently verify accreditation status, which builds public trust.

• Public representation: Not tokenism, but actual decision-making authority. ISO/IEC 17024 and NCCA both require demonstrated stakeholder representation on governance bodies.

• Due process and appeals: The recognition criteria require demonstrated due process mechanisms. Accreditation decisions must be defensible, documented, and subject to appropriate appeal processes.

• Documented authority flows: Clear policies showing which bodies recommend versus decide.

• Financial independence: Especially critical when certification programs exist within membership associations.

The Path Forward
International accreditation offers a diagnostic tool: Would our governance withstand review by external auditors using accreditation criteria? Could we demonstrate systemic conflict controls, consistent application with documented evidence, qualification requirements for decision-makers, adherence to due process, and public accountability?

The goal isn't standardization – it’s governance maturity. Strong governance demonstrates clear role boundaries, evidence-based requirements, carefully structured stakeholder input, systemic conflict management, and focus on competency rather than credentials as proxies.

Apply the same scrutiny to your governance that international reviewers would. Not because you're seeking mutual recognition, but because questions about authority, accountability, evidence, and impartiality are fundamental to credentialing integrity wherever you operate. If your governance can withstand that examination, you're building something that deserves stakeholder trust.

References:
American Speech-Language-Hearing Association. (n.d.). FAQs: Mutual recognition agreement. https://www.asha.org/certification/mutual-recognition-agreement-faqs-general-information/
Institute for Credentialing Excellence. (2021). National Commission for Certifying Agencies standards for the accreditation of certification programs. https://www.credentialingexcellence.org/Accreditation/EarnAccreditation/NCCA
International Organization for Standardization. (2012). ISO/IEC 17024:2012: Conformity assessment — General requirements for bodies operating certification of persons. https://www.iso.org/standard/52993.html
World Federation for Medical Education. (n.d.-a). Recognition programme. https://wfme.org/recognition/bme-recognition/
World Federation for Medical Education. (n.d.-b). BME recognition criteria. https://wfme.org/recognition/bme-recognition/bme-recognition-criteria/



eAA Global Medical and Healthcare Assessment Special Interest Group

The eAA's first Global Medical and Healthcare Assessment Special Interest Group (SIG) Webinar brought together experts and practitioners working at the intersection of healthcare, education, and assessment. This introductory webinar explored the current landscape of medical and healthcare assessment, the unique challenges faced in clinical education, and the opportunities digital assessment offers for improving practice and outcomes. Our guest speaker was Professor Chris McManus, Emeritus Professor of Psychology and Medical Education, UCL Medical School. Professor McManus trained in medicine at Cambridge and Birmingham and has spent decades leading research in the fields of neuropsychology, medical education, and assessment. With appointments at institutions including Imperial College and UCL, and ongoing work with MRCP(UK), he brings invaluable insight into the evolution of medical assessment and the role of psychology and data in improving healthcare outcomes.


Global Medical and Healthcare Assessment SIG meeting July 2025

