the e-Assessment Association

AI In Assessments – Automated Item Generation

A blog by Vishwanath Subbanna, Principal Pre-sales Consultant, Excelsoft Technologies

In the assessment space, items are the basis for building intellectual property. Different industries use various processes and techniques to ensure the uniqueness and quality of these items. But how can they ensure that uniqueness, consistency, and quality at scale? That is where Artificial Intelligence (AI) can play a vital role, meticulously weaving together elements such as item type, item language, item content (including stimuli, stem, and distractors), item meta-data, item difficulty, and taxonomy levels.

Let us look at item design, item templates, and the power of AI in creating new items and refactoring existing ones.

Factors considered for Item design:

  • Interactivity: How the test taker will interact with the Item (e.g., choice-based selection vs. free text response)
  • Response Format: The type of response expected or captured (e.g., multiple choice single response, multiple choice multiple responses, fill-in-the-blank with free text, and a few more)
  • Scoring: How the Item is scored based on the response (e.g., auto scoring, semi-auto scoring, and manual scoring)

Every item is crafted with these factors in mind, ensuring it captures all the parameters needed for presentation, evaluation, and analysis.
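The three design factors above can be captured as a small data model. This is an illustrative sketch only: the type names, enum values, and fields are assumptions for the sake of example, not a published item-design schema.

```python
from dataclasses import dataclass
from enum import Enum

class Interactivity(Enum):
    CHOICE_SELECTION = "choice-based selection"
    FREE_TEXT = "free text response"

class ResponseFormat(Enum):
    MC_SINGLE = "multiple choice, single response"
    MC_MULTIPLE = "multiple choice, multiple responses"
    FILL_IN_BLANK = "fill-in-the-blank"

class Scoring(Enum):
    AUTO = "auto scoring"
    SEMI_AUTO = "semi-auto scoring"
    MANUAL = "manual scoring"

@dataclass
class ItemDesign:
    interactivity: Interactivity
    response_format: ResponseFormat
    scoring: Scoring

# A classic multiple-choice item: the test taker selects one option,
# and the response can be scored automatically.
mc_design = ItemDesign(Interactivity.CHOICE_SELECTION,
                       ResponseFormat.MC_SINGLE,
                       Scoring.AUTO)
```

Making these factors explicit, machine-readable fields is what later allows an AI pipeline to reason about them (for example, suggesting an alternative response format for the same objective).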

Item template design:

Just as architectural templates guide construction, item templates are the foundation for consistent, high-quality item creation. While each item type has its own template outlining specific mandatory and optional parameters, all items share a set of standard parameters necessary for their organization and analysis:

  • Item Stimuli
  • Item Stem
  • Item Distractors (in case of a predefined list of responses)
  • Response placeholder (in case of open-ended responses)
  • Answer Key (in the context of Objective Items)
  • Model answer (in the context of subjective items)
  • Complexity
  • Taxonomy classifications
  • Meta-data

Once all the required parameters listed above are captured, AI can significantly support authors in building new items and refactoring legacy items.
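A minimal sketch of such a shared template follows. The field names simply mirror the parameters listed above; they are hypothetical and do not represent any particular vendor's schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ItemTemplate:
    stem: str                                   # Item Stem (the question itself)
    stimulus: Optional[str] = None              # Item Stimuli (passage, image reference, ...)
    distractors: list = field(default_factory=list)   # predefined response options
    response_placeholder: Optional[str] = None  # for open-ended responses
    answer_key: Optional[str] = None            # for objective items
    model_answer: Optional[str] = None          # for subjective items
    complexity: str = "medium"                  # difficulty classification
    taxonomy: list = field(default_factory=list)      # e.g., Bloom's levels
    metadata: dict = field(default_factory=dict)

# An objective item instance built from the template.
item = ItemTemplate(
    stem="Which gas do plants absorb during photosynthesis?",
    distractors=["Oxygen", "Nitrogen", "Hydrogen", "Carbon dioxide"],
    answer_key="Carbon dioxide",
    taxonomy=["Remember"],
    metadata={"subject": "Biology", "grade": "6"},
)
```

Because every item, whatever its type, carries the same standard parameters, both human authors and AI tooling can validate, search, and transform items uniformly.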

Leveraging AI for Efficient, Effective, and Rapid Item Generation:

By generating a set of items based on a predefined knowledge model, AI can significantly improve the speed of item generation and reduce the manual effort required from subject matter experts (SMEs). SMEs can focus on refining and fine-tuning the AI-generated items to ensure they precisely align with the desired learning objectives and assessment outcomes. The AI-generated item pools can serve as a springboard for collaboration and fine-tuning.

  • AI can generate a broader range of assessment items from a knowledge bank (a content model of related information about a particular subject or topic), saving time and resources.
  • AI can create distractors for objective item types, reducing the effect of guessing and promoting a deeper understanding of the topic. It can also generate model answers for items used in auto or manual marking.
  • AI can analyze existing items and suggest modifications to meet newer objectives and outcomes, such as changing stem content, difficulty levels, and taxonomy levels and introducing newer distractors.
  • AI can suggest alternative item types based on the objective and outcome of an existing item (e.g., convert a multiple-choice item into a fill-in-the-blank format while ensuring the same knowledge or skill is assessed).
  • AI can create meta-data for each Item based on the outcome and analytical parameters defined.
  • AI can analyze items to identify potential biases based on language, content, or difficulty level. It can be used to group equivalent items and mark them as enemy items.
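To make the distractor-generation point concrete, here is one way such a step might be wired up. This is a sketch under assumptions: `call_llm` is a placeholder for whatever model API you use, and the prompt wording is illustrative, not a fixed recipe.

```python
def build_distractor_prompt(stem: str, answer_key: str, n: int = 3) -> str:
    """Assemble a prompt asking a language model for plausible wrong answers."""
    return (
        "You are an assessment item author.\n"
        f"Question stem: {stem}\n"
        f"Correct answer: {answer_key}\n"
        f"Write {n} plausible but incorrect options (distractors) that reflect "
        "common misconceptions. Return one per line."
    )

def generate_distractors(stem, answer_key, call_llm, n=3):
    """Generate candidate distractors via an injected LLM callable."""
    raw = call_llm(build_distractor_prompt(stem, answer_key, n))
    # Keep non-empty lines and drop any accidental repeats of the answer key.
    return [line.strip() for line in raw.splitlines()
            if line.strip() and line.strip() != answer_key][:n]
```

Note that the output here is only a candidate list: as the blog stresses, an SME still reviews and fine-tunes AI-generated distractors before they enter the item pool.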

The future of assessment lies in a balanced collaboration between human expertise and AI capabilities. By adopting AI in item generation, the assessment space can unlock greater efficiency, cost-effectiveness, consistency, speed of development, and semantic richness of content, producing an impactful item pool for all stakeholders involved.
