Posts on psychometrics: The Science of Assessment

Three Standard Errors

One of my graduate school mentors once said in class that there are three standard errors that everyone in the assessment or I/O Psychology field needs to know: the standard error of the mean, the standard error of measurement, and the standard error of the estimate. These concepts are distinct in their purpose and application but can be easily confused by those with minimal training.

In fact, I have personally seen the standard error of the mean incorrectly reported as the standard error of measurement, which is a critical mistake in statistical reporting.

In this post, I will briefly describe each of these standard errors to clarify their differences. 

Standard Error of the Mean (SEM)

The standard error of the mean is a fundamental concept introduced in introductory statistics courses. It is closely tied to the Central Limit Theorem, which serves as the foundation of statistical analysis. The purpose of this metric is to provide an index of accuracy—or conversely, error—in a sample mean.

Since different samples from a population yield slightly different means, the standard error of the mean estimates the expected variation in sample means. It is calculated using the following formula:

SE_mean = SD / √n

Where:

  • SD = Sample standard deviation
  • n = Number of observations in the sample

This formula is useful for creating confidence intervals to estimate the true population mean.

Example:

Suppose we conduct a large-scale educational assessment where 50,000 students take a test. The sample mean score is 71, with a standard deviation of 12.3. Applying the formula:

SE_mean = 12.3 / √50,000 ≈ 0.055

Thus, a 95% confidence interval for the true population mean is 71 ± 1.96 × 0.055, or roughly 70.89 to 71.11.
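For those who like to verify the arithmetic, here is a minimal Python sketch of the calculation (the function name is mine, not from any particular library):

```python
import math

def mean_ci(mean, sd, n, z=1.96):
    """Standard error of the mean and a normal-approximation confidence interval."""
    se = sd / math.sqrt(n)                      # SE_mean = SD / sqrt(n)
    return se, (mean - z * se, mean + z * se)

se, (lo, hi) = mean_ci(71, 12.3, 50_000)
print(f"SE = {se:.3f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# SE = 0.055, 95% CI = (70.89, 71.11)
```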

Key Takeaway: The standard error of the mean applies to group averages and has nothing to do with psychometrics directly. However, it can be used in assessment data when analyzing sample-based studies.

Click here to read more.

Standard Error of Measurement (SEM)

The standard error of measurement is particularly important in educational assessment and psychometrics. It estimates the accuracy of an individual’s test score, rather than a group mean.

This error can be calculated using Classical Test Theory (CTT) or Item Response Theory (IRT), though each defines it differently.

Classical Test Theory Formula:

SEM = SD × √(1 − r)

Where:

  • SD = Standard deviation of observed test scores
  • r = Reliability of the test (e.g., coefficient alpha)

This formula provides an estimate of how much a test-taker’s score would vary if they retook the test multiple times under ideal conditions.
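As a quick illustration, here is a one-line implementation in Python; the SD and reliability values below are hypothetical:

```python
import math

def sem_ctt(sd, reliability):
    """Classical test theory SEM: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

# A test with SD = 10 and coefficient alpha = 0.91 (hypothetical values)
print(sem_ctt(10, 0.91))  # 3.0
```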

Item Response Theory Perspective:

Under IRT, SEM is not a fixed value but varies across the ability spectrum. A test is more accurate (lower error) in areas where it has more high-quality items. For instance, a test with mostly mid-range difficulty questions will provide more accurate scores in the middle range but be less precise for very high or low scorers.
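To see this numerically, here is a small sketch under the 2PL model with hypothetical item parameters. The conditional SEM is the inverse square root of the test information at each ability level, so it is smallest where the items are concentrated:

```python
import numpy as np

def sem_irt(theta, a, b):
    """Conditional SEM under the 2PL: 1 / sqrt(test information at theta)."""
    p = 1 / (1 + np.exp(-a * (theta - b)))   # 2PL response probabilities
    info = np.sum(a**2 * p * (1 - p))        # test information = sum of item information
    return 1 / np.sqrt(info)

a = np.array([1.0, 1.2, 0.8, 1.5])   # discriminations (hypothetical)
b = np.array([-0.5, 0.0, 0.5, 1.0])  # difficulties (hypothetical, mid-range)
for theta in (-2.0, 0.0, 2.0):
    print(theta, round(sem_irt(theta, a, b), 2))  # larger SEM at the extremes
```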

[Figure: Standard error of measurement and the test information function]

For a deeper discussion of SEM, click here. 

Standard Error of the Estimate (SEE)

The standard error of the estimate measures the accuracy of predictions in linear regression models. It estimates the expected deviation between actual and predicted values.

The formula for SEE is:

SEE = SD_y × √(1 − r²)

Where:

  • SD_y = Standard deviation of the dependent variable
  • r = Correlation coefficient between predictor and outcome variables

Example:

Imagine we want to predict job performance using a 40-item job knowledge test. We analyze 1,000 job incumbents’ scores and find that those who scored 30/40 had performance ratings ranging from 61 to 89.

Using statistical software (such as Microsoft Excel), we compute:

  • SEE = 4.69
  • Regression slope = 1.76
  • Intercept = 29.93

Using these, the estimated job performance for a test score of 30 is:

Predicted performance = 29.93 + 1.76 × 30 = 82.73

To construct a 95% confidence interval:

82.73 ± 1.96 × 4.69 ≈ 82.73 ± 9.19

Resulting in a prediction range of approximately 73.5 to 91.9.
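Here is a minimal sketch of the same computation in Python, using the regression coefficients reported above (the helper function is mine):

```python
def predict_with_see(score, slope, intercept, see, z=1.96):
    """Point prediction plus a rough 95% band of +/- z * SEE."""
    pred = intercept + slope * score
    return pred, (pred - z * see, pred + z * see)

pred, (lo, hi) = predict_with_see(30, slope=1.76, intercept=29.93, see=4.69)
print(f"predicted = {pred:.2f}, 95% range = ({lo:.2f}, {hi:.2f})")
# predicted = 82.73, 95% range = (73.54, 91.92)
```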

Practical Application:

If an employer wants to hire only candidates likely to score 80 or higher in job performance, they can set a cutscore of 30 on the test based on this estimate.

 

Key Takeaways: Avoiding Common Mistakes

  • The standard error of the mean applies to sample averages and is part of general statistics.
  • The standard error of measurement relates to individual test scores and is crucial in assessment and psychometrics.
  • The standard error of the estimate is used in predictive modeling (e.g., regression) to assess the accuracy of predictions.

Final Tip: When reporting standard errors in assessment or statistical studies, be sure to use the correct one to maintain accuracy and credibility.

If you found this post helpful, share it with your colleagues or leave a comment below!


Machine Learning in Psychometrics: Old News?

In the past decade, terms like machine learning (ML), artificial intelligence (AI), and data mining have become buzzwords, thanks to the increasing availability of data and advancements in computing power. These technologies power innovations like self-driving cars, but machine learning has been used in psychometrics for decades—making much of the hype just that: hype.

What is Machine Learning?

There is no universally agreed-upon definition of machine learning. Wikipedia broadly defines it as:

“The study and construction of algorithms that can learn from and make predictions on data.”

ML is often divided into two main categories:

  • Supervised learning – there is a criterion variable being predicted (a “label” in ML parlance).
  • Unsupervised learning – there is no criterion variable to predict; you are simply looking for patterns in the data.

The latter has been present in psychometrics since the early days of the field, in the form of factor analysis. It also includes item response theory and other quantitative methods of analyzing exam data to evaluate reliability and internal validity. Supervised learning includes methods like linear regression to evaluate predictive validity.

Machine Learning in Psychometrics

Psychometrics and test development have leveraged machine learning methodologies for decades. Below are some key areas where these techniques play a significant role.

1. Dimensionality Reduction

One of the fundamental applications of machine learning in psychometrics is dimensionality reduction. This involves identifying underlying structures in unstructured data, particularly through factor analysis and cluster analysis.

The concept isn’t new—Charles Spearman pioneered these methods over 100 years ago while studying intelligence. His work laid the foundation for modern latent trait models and is even referenced in online machine learning courses today.
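For readers who want to try this, here is a brief sketch using scikit-learn’s factor analysis on simulated item scores (all data below are simulated for illustration):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
ability = rng.normal(size=(500, 1))              # one simulated latent trait
loadings = rng.uniform(0.4, 0.9, size=(1, 20))   # item loadings on that trait
scores = ability @ loadings + rng.normal(scale=0.5, size=(500, 20))

fa = FactorAnalysis(n_components=2).fit(scores)
print(fa.components_.round(2))  # one dominant factor should emerge
```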

2. Classification

 

Classification problems are central to machine learning. A well-known example is the MNIST dataset for handwritten digit recognition. But in psychometrics, classification applies to areas such as:

  • Test cutscore setting – The contrasting groups method is a simple classification technique, using training data to determine where a passing score should be set.
  • Automated essay scoring – A supervised learning technique where essays are first classified by human raters, and then an algorithm is trained to replicate human scoring. This technology has been in use for over three decades; a toy sketch of the idea follows this list.
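As a toy illustration of the supervised-learning framing (real essay-scoring engines use far richer features and much larger training sets), one could fit a simple text-regression pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training data: essays with human-assigned scores
essays = [
    "Sports teach teamwork, discipline, and respect for others.",
    "sports are good and fun",
    "Organized sports build fitness and lifelong friendships.",
]
human_scores = [4, 1, 3]

model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(essays, human_scores)     # learn to replicate the human raters
print(model.predict(["Sports develop discipline and teamwork."]))
```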

3. Anomaly Detection

ML techniques are also used in psychometric forensics—identifying cheating behavior, test fraud, or data anomalies. Additionally, anomaly detection helps in model fit evaluation, ensuring that test items and responses align with psychometric models.

4. Learning from Data: Item Response Theory (IRT)

[Figure: five item response functions]

Item Response Theory (IRT) is one of the clearest examples of machine learning in psychometrics. It involves fitting a probabilistic model to response data: estimating item parameters such as difficulty and discrimination, then scoring each examinee on a latent ability scale.

Unlike standard classification problems, IRT doesn’t rely on predefined “true” labels, such as pass/fail. Instead, it infers ability levels based on probability models.
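As a concrete example of this learning from data, here is a brute-force maximum likelihood estimate of ability under the 2PL model, with hypothetical item parameters:

```python
import numpy as np

def log_likelihood(theta, a, b, responses):
    """Log-likelihood of a 2PL response vector at a given ability."""
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

a = np.array([1.0, 1.2, 0.8])     # discriminations (hypothetical)
b = np.array([-1.0, 0.0, 1.0])    # difficulties (hypothetical)
responses = np.array([1, 1, 0])   # correct, correct, incorrect

grid = np.linspace(-4, 4, 801)    # grid-search maximum likelihood
theta_hat = grid[np.argmax([log_likelihood(t, a, b, responses) for t in grid])]
print(round(float(theta_hat), 2))
```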

5. Reinforcement Learning

While reinforcement learning isn’t widely used in psychometrics, online IRT calibration is a notable exception. This involves updating item parameters over time, ensuring tests remain valid and reliable. However, practical implementation remains rare.

6. Using Test Scores for Prediction

Machine learning isn’t just used within tests—it also applies beyond test-taking. A prime example is using pre-employment test scores to predict job performance. When combined with other variables, these scores can enhance predictive validity.

7. Automation in Psychometrics

Automation is another significant way that machine learning impacts psychometrics. Automated test assembly (ATA) and automated essay scoring are well-known applications. However, more subtle automation can have an even greater impact.

For example, software like Iteman and Xcalibre automatically generates psychometric reports in Word format, saving psychometricians hours of manual work. Automating the entire test development cycle—from job analysis to test delivery—would revolutionize the field.

The Future of Machine Learning in Psychometrics

As technology evolves, psychometrics will continue to integrate machine learning. However, the biggest challenge is not innovation but adoption. Many organizations still use outdated methods, despite having the data and resources to implement modern psychometric techniques.

The field must focus on overcoming barriers to adoption, ensuring that machine learning-driven methodologies become standard practice.

Final Thoughts

Machine learning is not new to psychometrics—it has been part of the field for over a century. Factor analysis, classification, and predictive modeling have long been integral to test development and validation.

Rather than getting caught up in AI buzzwords, the focus should be on removing obstacles to widespread adoption. Machine learning isn’t a futuristic concept in psychometrics—it’s already here. The question isn’t if it will be used, but rather, how we ensure it is used effectively and ethically.


What Are Cognitive Diagnostic Models?

Cognitive diagnostic models (CDMs) are a psychometric framework designed to enhance how tests are structured and scored. Instead of providing just an overall test score, CDMs aim to generate a detailed profile of an examinee’s mastery of specific skills. This approach offers deeper insights into individual strengths and weaknesses, making it particularly useful in educational and psychological assessments.

CDMs have seen significant growth in research and application over the past decade, even though their mathematical foundations trace back to Macready and Dayton (1977). Their rising popularity stems from the need for more precise assessments in many contexts where a single overall score is insufficient. For instance, in formative educational assessments, understanding a student’s specific strengths and weaknesses is crucial for providing meaningful feedback. Conversely, professional certification or licensure exams often rely on a single pass/fail score derived from overall performance.

Understanding Cognitive Diagnostic Models: A Different Approach

Since the 1980s, Item Response Theory (IRT), also known as latent trait theory, has been the dominant psychometric paradigm. CDMs, however, belong to a distinct framework called latent class theory. Rather than assuming a unidimensional measurement, latent class theory categorizes examinees based on multiple attributes or skills.

The ultimate goal with CDMs isn’t to produce a single numerical score but to develop a comprehensive profile indicating which skills an examinee has mastered and which they haven’t. These skills, often related to cognitive abilities, help identify areas of strength and weakness. This diagnostic capability makes CDMs especially valuable for formative assessments, where targeted feedback can foster improvement.

 

A Practical Example: Fractions in Mathematics

To illustrate how CDMs work, let’s consider a formative assessment designed to evaluate students’ understanding of fractions. Imagine the curriculum focuses on these distinct skills:

  • Finding the lowest common denominator
  • Adding fractions
  • Subtracting fractions
  • Multiplying fractions
  • Dividing fractions
  • Converting mixed numbers to improper fractions

Now, suppose one of the questions on the test is:

What is 2 3/4 + 1 1/2?

Answering this question requires mastery of three skills:

  1. Finding the lowest common denominator
  2. Adding fractions
  3. Converting mixed numbers to improper fractions

Researchers use a tool called the Q-Matrix to map out which skills each question assesses. Here’s a simplified example:

 

| Item   | Find lowest common denominator | Add fractions | Subtract fractions | Multiply fractions | Divide fractions | Convert mixed number |
| Item 1 | X | X |   |   |   | X |
| Item 2 | X |   | X |   |   |   |
| Item 3 |   |   |   | X |   | X |
| Item 4 |   |   |   |   | X | X |

 

How Do You Determine an Examinee’s Skill Profile?

This is where cognitive diagnostic models shine. The term models is plural because there are several types of CDMs available—just as there are various models within IRT, such as the Rasch model, 2-parameter model, and generalized partial credit model. The choice of CDM depends on the test’s characteristics and the specific needs of the assessment.

One of the simplest CDMs is the DINA model. This model uses two parameters for each item:

  • Slippage (s): The likelihood that a student who has the required skills answers the item incorrectly.
  • Guessing (g): The probability that a student without the necessary skills answers the item correctly.

The process of determining a skill profile involves complex mathematical calculations based on maximum likelihood estimation. The steps include:

  1. Listing all possible skill profiles.
  2. Calculating the likelihood of each profile using item parameters and the examinee’s responses.
  3. Selecting the profile with the highest likelihood (a small worked sketch follows this list).
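To make these steps concrete, here is a small sketch of DINA scoring for one examinee; the Q-matrix, item parameters, and responses are all hypothetical:

```python
import numpy as np
from itertools import product

Q = np.array([[1, 1, 0],                 # hypothetical Q-matrix: 4 items x 3 skills
              [1, 0, 1],
              [0, 1, 0],
              [0, 0, 1]])
s = np.array([0.10, 0.15, 0.10, 0.20])   # slippage per item (hypothetical)
g = np.array([0.20, 0.10, 0.25, 0.20])   # guessing per item (hypothetical)
responses = np.array([1, 0, 1, 1])       # one examinee's answers

best = (None, -1.0)
for profile in product([0, 1], repeat=3):               # 1. all possible skill profiles
    eta = np.all(np.array(profile) >= Q, axis=1)        # has every required skill?
    p = np.where(eta, 1 - s, g)                         # P(correct) under DINA
    like = np.prod(np.where(responses == 1, p, 1 - p))  # 2. likelihood of this profile
    if like > best[1]:
        best = (profile, like)                          # 3. keep the most likely

print(best)  # ((1, 1, 0), 0.1458): mastered skills 1 and 2, not skill 3
```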

Estimating item parameters is computationally intensive. While IRT parameter estimation can be done using software like Xcalibre, CDMs require more advanced tools such as MPlus or R.

Beyond simply identifying the most likely skill profile, CDMs can also provide the probability that an examinee has mastered each specific skill. This level of diagnostic detail is invaluable in contexts like formative assessment, where personalized feedback can drive improvement.

How Can You Implement Cognitive Diagnostic Models?

To implement CDMs effectively, start by analyzing your data to assess how well the models fit your test items. As mentioned earlier, software like MPlus or R can help with this analysis.

Publishing a fully operational assessment that uses CDMs for scoring is a more challenging task. Most CDM-based tests are proprietary and developed by large educational companies that employ psychometricians to create and refine their assessments. These companies often provide banks of formative assessments for schools, particularly for students in grades 3–12.

If you’re interested in developing your own CDM-based assessments, options are currently limited. However, platforms like FastTest can support you in test development, delivery, and analytics.

Ready to Learn More?

If you’re eager to dive deeper into cognitive diagnostic models, here are some excellent resources:

  • Alan Huebner offers a fascinating article on adaptive testing using the DINA model, with an informative introduction to CDMs.
  • Jonathan Templin, a leading expert from the University of Iowa, provides fantastic resources on his website.
  • A free PDF book by Rupp, Templin, and Hansen is available at Mindful Measurement.
  • For a more comprehensive understanding, check out this highly recommended textbook on CDMs.

CDMs offer powerful tools for more nuanced assessments, allowing educators and researchers to gain detailed insights into examinees’ cognitive skills. Whether you’re new to this approach or looking to implement it in your testing framework, exploring CDMs can significantly enhance the value of your assessments.




The Goal: Quality Assessment

On March 31, 2017, I read an article in The Industrial-Organizational Psychologist (the journal published by the Society for Industrial Organizational Psychology) that really resonated with me:

Has Industrial-Organizational Psychology Lost Its Way? By Deniz S. Ones, Robert B. Kaiser, Tomas Chamorro-Premuzic, Cicek Svensson

Why? Because many of their points apply to psychometrics and its current state of innovation. They summarize their concerns in six bullet points:

  • Overemphasis on theory
  • Fixation on trivial methodological minutiae
  • Suppression of exploration and innovation
  • Obsession with publication while ignoring practical issues
  • Distraction by fads
  • Loss of real-world influence to other fields

 

So What Is Psychometrics Supposed to Be Doing?


What has irked me most over the years is the overemphasis on theory and minutiae rather than solving practical problems. This is why I stopped attending NCME conferences and instead attended practical ones like ATP. My goal is to improve the quality of assessment worldwide. Developing esoteric DIF methodology, new multidimensional IRT models, or CAT sub-algorithms that improve efficiency by only 0.5% won’t significantly impact the many poor assessments out there or the flawed decisions they lead to. There is a place for research, but practical improvements are underserved.

Example: Credentialing

Credentialing is a prime example of where assessment quality matters. Many licensure and certification tests used to make critical decisions are poorly constructed. Some organizations simply don’t care, while others face external constraints. I once worked with a Department of Agriculture in a western U.S. state where the legislature mandated licensure tests for professions with as few as three test-takers per year.

How do we encourage such groups to follow best practices? Traditionally, they would need to hire expensive consultants, which isn’t feasible for small organizations. Why spend $5,000 on an Angoff study for just three candidates per year? I don’t blame them for avoiding it, but the result is low-quality tests certifying unqualified practitioners.



An Opportunity for Innovation


There is still innovation happening in our field, but it is often misdirected. Large corporations hire fresh PhDs in psychometrics and then assign them repetitive tasks like running SAS scripts or conducting Angoff studies over and over. I experienced this firsthand, and after 18 months, I was eager for something more.

Worse, much of the innovation isn’t focused on better measurement. I once saw someone advocating for gamification in assessments, claiming measurement precision didn’t matter. This is, of course, ridiculous. An appealing UI is meaningless if the results are just random numbers.

Innovation in Psychometrics at ASC

At ASC, much of our innovation is aimed at solving these issues. I developed Iteman 4 and Xcalibre 4 to allow organizations to generate professional psychometric reports without hiring expensive consultants. Unlike other programs that produce text files or Excel spreadsheets, our tools generate reports directly in Microsoft Word, making them easy to use.

Similarly, our FastTest platform was designed to modernize assessment processes. Conducting an Angoff study with items on a projector and SMEs writing ratings on paper is outdated. FastTest allows this to be done online, enabling remote SME participation and reducing costs. Want to publish an adaptive (CAT) exam without coding? We’ve built that directly into the test publishing interface.

Back to My Original Point

The question was: What should psychometrics be doing? My answer: improving assessment. Advanced mathematical research is useful, but only for the top 5% of organizations. It’s our duty as psychometricians to find ways to help the other 95%.

The Future: Automation in Psychometrics

The future lies in automation. Iteman 4, Xcalibre 4, and FastTest were effective machine learning tools before the term became trendy. Other fields, like Big Data, are gaining influence by applying concepts psychometrics has used for decades. Item Response Theory, for example, is a form of machine learning that has been around for 50 years!

If you’re looking to improve your assessment practices, ASC’s tools can help. Contact us today to learn how automation and innovation can enhance your psychometric processes.



 

Big Data in Assessment and Psychometrics

Assessments have long been a cornerstone of education, hiring, and psychological research. However, with advancements in artificial intelligence and machine learning, these assessments are evolving faster than ever. Moreover, Big Data is revolutionizing how we measure personality, cognitive ability, and behavioral traits. From adaptive testing to bias detection, it is improving accuracy, efficiency, and fairness in psychological measurement. But with these advancements come challenges—concerns about privacy, bias, and ethical AI implementation.

In this article, we’ll break down how Big Data is transforming testing, where it’s being applied, and what the future holds.

What is Big Data?

Big Data refers to large-scale, complex datasets that can be analyzed to reveal patterns, trends, and associations. In psychometrics, this means massive amounts of test-taker data—not just scores, but response times, keystroke dynamics, facial expressions, and even biometric data.

Traditional assessments relied on small samples and fixed test items. Today, platforms collect millions of data points in real time, enabling AI-driven insights that were previously impossible. Machine learning algorithms can analyze these rich datasets to detect patterns in behavior, predict future performance, and enhance the precision of assessments.

This shift has led to more precise, adaptive, and fairer assessments—but also raises ethical and practical concerns, such as data privacy, bias in algorithms, and the need for transparency in decision-making.

How Big Data is Transforming Assessment

1. Smarter Assessment


Big Data is used to drive the development, administration, and scoring of future-focused assessments. Some examples of this are process data, machine learning personalization, and adaptive testing. By leveraging data-driven decision making inside the assessment, we can make it smarter, faster, and fairer.

Process data refers to the use of data other than answer selection, such as keystroke/mouse dynamics.  An example is a drag-and-drop question where a student has to classify animals into Reptile, Mammal, or Amphibian; instead of just recording the final locations, we can evaluate what they dragged first and where, if they changed their mind, how much time they took to answer the question, etc.  This can provide greater insight into both scoring and feedback.  We can also evaluate the use of tools like rulers or calculators (Liao & Sahin, 2020).  However, it might be of limited use in high-stakes exams where the final answer is what matters.

Machine learning algorithms based on Big Data can improve assessment by personalizing the assessment or even the selection of assessment.  Next-generation learning systems are designed to be adaptive from a very high level, understanding what students know and recommending the next modules, not unlike how your favorite video streaming service learns what shows you like or not and recommends new ones.  This could also be done inside the assessment, transforming it from assessment of learning (AoL) to assessment for learning (AfL).

Big Data can enhance Computerized Adaptive Testing (CAT), where test difficulty adjusts in real time based on the test-taker’s responses. Instead of presenting a fixed set of questions, the algorithm selects each new question based on the test-taker’s previous answers, making the assessment more efficient and tailored to individual ability levels.  This approach has been utilized for decades with large-scale exams, with well-known benefits like shorter tests, reduced anxiety, and increased engagement.  The Graduate Management Admission Test (GMAT) transitioned to a computer-adaptive format in 1997. Similarly, the Graduate Record Examination (GRE) introduced a computer-adaptive format in 1993 and made it mandatory in 1997. You can read more about adaptive testing in action here.

2. AI-Driven Personality and Behavioral Analysis

Psychometric models like the Big Five and HEXACO are now enhanced by machine learning and natural language processing (NLP).

AI can analyze how people respond, not just what they answer—including text responses, speech patterns, and decision-making behaviors. Companies like HireVue analyze facial expressions and speech cadence to assess job candidates’ traits.

3. Detecting Bias and Improving Fairness

One of the biggest concerns in psychological testing is bias. Traditional tests have been criticized for favoring certain demographics over others.

With Big Data, AI can flag biased questions by analyzing how different groups respond.
Example: If women or minority groups consistently underperform on a test question despite having equal qualifications, the item may be unfair and flagged for review.

This helps ensure that assessments are more equitable and inclusive.
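One classical (and pre-“Big Data”) way to flag such items is the Mantel-Haenszel differential item functioning procedure, which compares groups after matching on total score. A bare-bones sketch, with the function name and interface my own:

```python
import numpy as np

def mantel_haenszel_odds_ratio(item_correct, group, total_score):
    """Common odds ratio for one item, stratified by total score.
    group: 1 = reference, 0 = focal; values near 1.0 suggest little DIF."""
    num = den = 0.0
    for stratum in np.unique(total_score):
        m = total_score == stratum
        a = np.sum((group == 1) & (item_correct == 1) & m)  # reference, correct
        b = np.sum((group == 1) & (item_correct == 0) & m)  # reference, incorrect
        c = np.sum((group == 0) & (item_correct == 1) & m)  # focal, correct
        d = np.sum((group == 0) & (item_correct == 0) & m)  # focal, incorrect
        n = a + b + c + d
        if n:
            num += a * d / n
            den += b * c / n
    return num / den if den else float("nan")
```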

4. Predicting Job Performance and Talent Retention

Companies are increasingly using psychometric Big Data to predict:

  • Which candidates will succeed in a role
  • Who is at risk of burnout
  • How leadership potential can be identified early

Example: Google uses Big Data psychometric analysis to refine its hiring process, ensuring long-term employee success and cultural fit. You can read a more in-depth look at their hiring process here.

5. Real-Time Fraud Detection in Assessments

Just like SIFT detects test fraud, Big Data helps identify cheating in online exams. AI can analyze eye movement, response times, and typing behavior to detect suspicious activity.  The models to do so are trained on large datasets; for example, a proctoring provider might have millions of past exam records, with stored videos as well as human-confirmed flags of whether cheating occurred. Universities and companies now use AI-powered proctoring tools to prevent cheating in high-stakes tests.

The Challenges of Big Data in Assessment

The integration of Big Data into psychometrics offers significant advancements but also presents several challenges that must be carefully managed:

1. Privacy and Data Security Risks


Collecting extensive psychological data introduces serious ethical concerns. Regulations like the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the United States mandate strict data protection protocols to safeguard individuals’ personal information. Ensuring compliance with these laws is essential to protect test-takers’ confidentiality and maintain public trust.

2. Algorithmic Bias in AI

AI models trained on historical data can inadvertently perpetuate existing biases if not properly managed. A notable example is Amazon’s hiring algorithm, which was discontinued after it demonstrated bias against female candidates. This incident underscores the importance of developing strategies to identify and mitigate bias in AI-driven assessments to ensure fairness and equity. You can read more about the incident here.

3. Necessity of Human Oversight

While AI-driven assessments can provide valuable insights, human oversight remains crucial. Misinterpretation or overreliance on automated results can lead to incorrect hiring decisions, misdiagnoses, or unethical outcomes. Professionals in psychology and human resources play a vital role in interpreting data within the appropriate context, ensuring that decisions are informed by both technological tools and human judgment.

Addressing these challenges requires a balanced approach that leverages the benefits of Big Data and AI while implementing safeguards to protect individual rights and uphold ethical standards.

The Future of Big Data in Psychometrics

The integration of Big Data into psychometrics is paving the way for innovative advancements that promise to enhance the precision, security, and ethical standards of assessments. Here are some emerging trends:

1. Wearable Biometric Assessments


The incorporation of data from wearable devices, such as heart rate monitors and electroencephalogram (EEG) sensors, into cognitive tests is on the rise. These devices provide real-time physiological data that can offer deeper insights into a test-taker’s cognitive and emotional states, potentially leading to more comprehensive assessments. For instance, in healthcare, wearable technology has been utilized for continuous monitoring and personalized care, highlighting its potential application in psychometric evaluations.

2. More Transparent AI Models

The demand for explainable AI in assessments is growing. Developing AI models that provide clear, understandable explanations for their decisions is crucial to ensure ethical use and to mitigate biases. This transparency is essential for building trust in AI-driven assessments and for adhering to ethical standards in testing. You can learn about that here.

3. Blockchain for Secure Testing

Blockchain technology is being explored for enhancing the security and integrity of assessments. Its decentralized ledger system can be used for secure credential verification, ensuring that test results are tamper-proof and verifiable. This approach can uphold the integrity of assessments and protect against fraudulent activities.

4. Integration of Assessment with Learning

The future of testing includes a closer integration of assessment and learning processes. Adaptive learning platforms that utilize Big Data can provide personalized learning experiences, adjusting content and assessments in real-time to meet individual learner needs. This approach not only enhances learning outcomes but also provides continuous assessment data that can inform educational strategies.

5. Greater Insight into Open-Response Items

Advancements in natural language processing (NLP) and machine learning are enabling more sophisticated analysis of open-response items in assessments. These technologies can evaluate the content, structure, and sentiment of written responses, providing deeper insights into test-takers’ understanding, reasoning, and communication skills. This development allows for a more nuanced evaluation of competencies that are difficult to measure with traditional multiple-choice questions.

As Big Data continues to shape the field of psychometrics, it is imperative to balance innovation with ethical responsibility. Ensuring data privacy, mitigating biases, and maintaining transparency are crucial considerations as we advance toward more sophisticated and personalized assessment methodologies.

Final Thoughts: The Future is Here, but Caution is Key

Big Data is revolutionizing assessment, making tests faster, more precise, and highly adaptive. AI-driven testing can unlock deeper insights into human cognition and behavior, transforming everything from education to hiring. However, these advancements come with profound ethical dilemmas—issues of privacy, bias, and opaque AI decision-making must be addressed to ensure responsible implementation.

To move forward, assessment professionals must embrace AI’s potential while demanding fairness, transparency, and rigorous data governance. The future of testing is not just about innovation but about striking the right balance between technology and ethical responsibility. As we harness AI to enhance assessments, we must remain vigilant in protecting individuals’ rights and ensuring that data-driven insights serve to empower rather than exclude.


What is a Rubric?

A rubric is a set of rules for converting unstructured responses on assessments—such as essays—into structured data that can be analyzed psychometrically. It helps educators evaluate qualitative work consistently and fairly.

Why Do We Need Rubrics?

Measurement is a quantitative endeavor. In psychometrics, we aim to measure knowledge, achievement, aptitude, or skills. Rubrics help convert qualitative data (like essays) into quantitative scores. While qualitative feedback remains valuable for learning, quantitative data is crucial for assessments.

For example, a teacher might score an essay using a rubric (0 to 4 points) but also provide personalized feedback to guide student improvement.

How Many Rubrics Do I Need?

The number of rubrics you need depends on what you’re assessing:

  • Mathematics: Often, a single rubric suffices because answers are either right or wrong.
  • Writing: More complex. You might assess multiple skills like grammar, argument structure, and spelling, each with its own rubric.

Examples of Rubrics

Spelling Rubric for an Essay

Points | Description
0 | Essay contains 5 or more spelling mistakes
1 | Essay contains 1 to 4 spelling mistakes
2 | Essay does not contain any spelling mistakes

 

Argument Rubric for an Essay

Prompt: “Your school is considering eliminating organized sports. Write an essay for the School Board with three reasons to keep sports, supported by explanations.”

Points | Description
0 | Student does not include any reasons with explanations (this includes giving 3 reasons but no explanations)
1 | Student provides 1 reason with a clear explanation
2 | Student provides 2 reasons with clear explanations
3 | Student provides 3 reasons with clear explanations

 

Answer Rubric for Math

Points | Description
0 | Student provides no response, or a response that does not indicate understanding of the problem
1 | Student provides a response that indicates understanding of the problem, but either does not arrive at the correct answer or gives the correct answer with no supporting work
2 | Student provides the correct answer with supporting work that explains the process

 

How Do I Score Tests with a Rubric?

Traditionally, rubric scores are added to the total score. This method aligns with classical test theory, using statistics like coefficient alpha (reliability) and Pearson correlation (discrimination).

However, item response theory (IRT) offers a more advanced approach. Techniques like the generalized partial credit model analyze rubric data deeply, enhancing score accuracy. (Muraki, 1992; resources on that here and here).

Example: Imagine an essay scored 0–4 points. The graph below shows the probability of earning each point level as a function of ability (Theta). An average student (Theta = 0.0) is most likely to earn 2 points (the yellow line), while a student at Theta = 1.0 is most likely to earn 3 points. Note that the middle curves are bell-shaped, while those at the ends rise toward an asymptote of 1.0: the more able the student, the more likely a 4 out of 4, though that probability can never exceed 100%.

[Figure: Generalized partial credit model category probability curves]
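For the technically inclined, here is a short sketch of how curves like these are computed under the generalized partial credit model; the step difficulties below are hypothetical:

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Category probabilities under the GPCM.
    b holds step difficulties for scores 1..m (score 0 has no step)."""
    steps = np.concatenate(([0.0], np.cumsum(a * (theta - b))))
    expv = np.exp(steps - steps.max())   # stabilize before normalizing
    return expv / expv.sum()

b = np.array([-1.5, -0.5, 0.5, 1.5])    # hypothetical steps for a 0-4 essay
for theta in (0.0, 1.0):
    print(theta, gpcm_probs(theta, a=1.0, b=b).round(2))
# theta 0.0 -> score 2 most likely; theta 1.0 -> score 3 most likely
```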

How Can I Efficiently Implement a Scoring Rubric?

Efficiency improves with online assessment platforms that support rubrics. Look for platforms with:

  • Integrated psychometrics
  • Multiple rubrics per item
  • Multi-rater support
  • Anonymity features

These tools streamline grading, improve consistency, and save time.


 

What About Automated Essay Scoring?

Automated essay scoring (AES) uses machine learning models trained on human-scored data. While AES isn’t flawless, it can significantly reduce grading time when combined with human oversight.  Of course, you can also ask LLMs to grade essays for you, but this lacks documented accuracy and validity evidence – unlike a model that was trained and evaluated on, say, 10,000 human-scored essays.

Final Thoughts

Rubrics are essential tools for educators, offering structured, fair, and consistent ways to assess complex student work. Whether you’re grading essays, math problems, or projects, implementing clear rubrics improves both assessment quality and student learning outcomes.

Ready to improve your assessments? Request a demo of our online platform with an integrated essay marking module!

 

Time Limits in Assessment

Time limits are an essential parameter for just about every type of assessment.  A time limit is the length of time given to individuals to complete their assessment or a defined portion of it. Managing test timing effectively ensures fairness, accuracy, and a pleasant experience for all test-takers – and is therefore a component of validity, which means we need to thoughtfully research and establish time limits. In this blog post, we will explore the concept of test timing, how time limits are determined, and how accommodations are provided for those who need extra time.

Power vs Speeded vs Timed

When it comes to the role of time, there are three types of assessment. This article focuses on timed tests.  More on speeded vs power tests can be found in this article.

Power: This test is untimed, meaning that the examinee has as much time as they want to show just how much they can do and how far they can go.  The goal of the test is to find the maximum performance level of the examinee.  For example, we might give them a math test with items that are years ahead of what they have learned, but bright students might be able to figure them out if given enough time.

Speeded: A speeded test is one where the time limit is set low enough that it affects performance, and the goal of the test is more to evaluate the velocity of the examinee.  For example, we might provide you with a list of 100 simple math problems and see how many you can finish in 30 seconds.  Or, provide you with a list of 100 words to proofread in 30 seconds.  In these cases, getting the correct answers is still the score, but it is driven by the time limit.  There are, of course, assessments where the time is the score; many of us had to run a mile when we were in school, and of course there is only one “item” on the test, the score is entirely the time taken to finish.

Timed: A timed test is one where there is a time limit, but it is intentionally designed to not impact the majority of examinees, and is therefore more for practical purposes.  Most assessments fall into this category.  There might be 100 items with a 2-hour time limit, and most examinees finish within 1.5 hours.  The time limit exists so that the student does not sit there all day, but it has virtually no impact on their performance.  In some cases, the only students who hit the time limit are the high achievers who are so intent on getting 100% that they take every minute possible to keep checking their work!

Factors Considered When Determining Time Limits

Several factors are considered when deciding on time limits for assessments. One important factor is the complexity of the content being assessed. For example, a math or science test requiring intricate problem-solving will require more time to complete than a verbal reasoning test or a test of general cognitive ability.

Unsurprisingly, the time load of the questions is important as well.  If there are reading passages, videos, complex images like X-rays, or other assets which require a lot of time to digest before even starting the question, then this obviously needs to be taken into account.

The intended purpose of the exam also influences time limits. For high-stakes exams, such as licensing or certification tests, the goal is to ensure the candidate demonstrates a high level of knowledge, and we need very high reliability and validity for the test.  This usually means that the test has more questions, and we want to provide an opportunity for the candidate to show what they know.  On the other hand, a quick screening test to see if a kid has learned the most recent unit of 4th grade math has much lower stakes, so a shorter time limit does not affect the purpose, and in this case actually aligns with the goal of “quick.”  Screening tests for pre-employment purposes, or medical surveys, also align with this.

Finally, test security is another consideration.  Some people take tests with the goal of stealing the intellectual property.  If we give them extra time to sit there and memorize the items so they can put them on illegal websites, that does no good to anyone.

Determining Time Limits for Linear Tests

Historical data and statistical modeling are often used to estimate the optimal time for test-takers. Test developers rely on empirical evidence and past performance to predict how long an average test-taker might need to finish the test, and to adjust the time limit accordingly.

You might pilot a test and determine that the questions take 1 minute on average, and if the test is 100 items, then 120 minutes (2 hours) makes a plausible time limit.  More complex modeling might also be useful, such as this article discussing lognormal distributions.

Determining Time Limits for Adaptive Tests

Unlike traditional fixed-form assessments, in which every test-taker answers the same questions, Computerized Adaptive Testing (CAT) adjusts the test in real time based on how an individual performs. While some adaptive tests provide the same number of items to each examinee, the difficulty will vary widely.  Moreover, some tests will vary in the number of items used.  For example, the NCLEX (nursing licensure exam) will range from 85 to 150 items.

The US Armed Services Vocational Aptitude Battery (ASVAB) is adaptive, and applied the linear timing approach to this situation.  Suppose again that items take 1 minute on average.  If we know that the test delivers an average of 100 items with a standard deviation of 10, then about 98% of examinees will receive fewer than 120 items, which means about 98% of examinees will finish under 120 minutes with no speededness.  Such research questions are discussed in technical reports like this one, and also in the landmark book Computerized Adaptive Testing: From Inquiry to Operation (American Psychological Association).
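As a back-of-envelope sketch of that logic, one could estimate a limit from pilot response-time data, assuming roughly independent item times (the pilot data here are simulated):

```python
import numpy as np

rng = np.random.default_rng(7)
times = rng.lognormal(mean=np.log(55), sigma=0.4, size=2000)  # simulated seconds per item

n_items = 100
total_mean = n_items * times.mean()
total_sd = np.sqrt(n_items) * times.std(ddof=1)  # independence assumption
limit = total_mean + 2.054 * total_sd            # ~98% should finish in time
print(round(limit / 60), "minutes for a 100-item test")
```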

Time Limit Extensions: Accommodations for Test-Takers

In a diverse world in which test-takers have varying needs, it is essential that accommodations are available for those who may require additional time to complete a test. Test-takers with learning disabilities, attention disorders, or physical impairments may find standard time limits difficult to manage, and they may benefit from time limit extensions.

For example, individuals with dyslexia or ADHD might take longer to process information or may require breaks to stay focused. Providing extra time, usually as a time multiplier or a fixed number of additional minutes, can help level the playing field for these candidates. Examinees with visual impairments might need special software or even a human to read the questions to them, requiring extra time.

To ensure fairness, accommodations are typically offered based on documentation of the individual’s specific needs. This can include medical assessments, educational records, or reports from licensed professionals that substantiate the need for extra time. The multiplier for extra time can differ from examinee to examinee, as seen below. These accommodations are not intended to provide unfair advantages, but to ensure every test-taker may demonstrate their knowledge.

[Figure: test time accommodation settings]

Types of Time Limits

We’ve focused this discussion on time limits for a test, but there are actually several levels of time limits that can be applied.

Item: For some tests, you might set a hard limit of, say, 30 seconds per item.  This is usually only when it is relevant to the measurement, such as a working memory test.

Section: Longer tests are broken into sections, so the time limit might be more relevant at this level.  You might have an hour for section 1, a 10 minute break, and then an hour for section 2.  Setting an overall time limit of 2:10 then seems unnecessary.

Test: As discussed above, this is what we typically think of, with a test that has one section and a time limit overall.

Session: Some assessments have multiple tests, which is known as a battery.  While the individual tests might each have time limits, there might also be a session maximum.

Functionality regarding test limits also needs to mesh with other functionality regarding test security, such as re-entry.  For example, you can see that our Assess.ai platform has an option to allow an examinee to leave, but continue the clock ticking while they are out, with automatic submission when the time limit is complete.

[Figure: session security and time limit settings]

Conclusion

Time limits are essential aspects in assessment, having a direct effect on the validity of scores as well as the experience of examinees. Determining appropriate time limits requires careful consideration of multiple factors, including the complexity of the content, the number of questions, and test security concerns. At the same time, it is essential to provide accommodations for test-takers who require additional time to ensure that individuals can perform to the best of their abilities, regardless of any physical or cognitive challenges. By including these considerations, testing organizations can ensure that their assessments are both fair and accessible, promoting a more inclusive and equal testing experience for all.

SIFT: Data Forensics for Test Security

Introduction

Test fraud is an extremely common occurrence.  We’ve all seen articles about examinee cheating.  However, there are very few defensible tools to help detect it.  I once saw a webinar from an online testing provider that proudly touted their reports on test security… but it turned out that all they provided was a simple export of student answers that you could subjectively read and form conjectures.

The goal of SIFT is to provide a tool that implements real statistical indices from scientific research on statistical detection of test fraud. It’s user-friendly enough to be used by someone without a PhD in psychometrics or experience in data forensics. SIFT provides more collusion indices and analysis than any other software, making it the industry standard from the day of its release. The science behind SIFT is also implemented in our world-class online testing platform, FastTest which supports computerized adaptive testing known to increase test security.

Interested?  Download a free trial version of SIFT!

What is Test Fraud?

As long as tests have been around, people have been trying to cheat them. Anytime there’s a system with stakes or incentives involved, people will try to game it. The root culprit is the system itself, not the test. Blaming the test is just shooting the messenger.

In most cases, the system serves a useful purpose. K-12 assessments provide information on curriculum and teachers, certification tests identify qualified professionals, and so on. To preserve the system’s integrity, we must minimize test fraud.

When it comes to test fraud, the old cliché is true: an ounce of prevention is worth a pound of cure. While I recommend implementing preventative measures to deter fraud, some cases will always occur. SIFT is designed to help find those cases. Additionally, the knowledge that such analysis is being conducted can deter some examinees.

How can SIFT help me with statistical detection of test fraud?

Like other psychometric software, SIFT does not interpret results for you. For example, software for item analysis like Iteman and Xcalibre doesn’t tell you which items to retire or how to revise them—they provide output for practitioners to analyze. SIFT offers a wide range of outputs to help identify:

  • Copying
  • Proctor assistance
  • Suspect test centers
  • Brain dump usage
  • Low examinee motivation

YOU decide what’s important for detecting test fraud and look for relevant evidence. More details are provided in the manual, but here’s a glimpse.

SIFT Test Security Data Forensics

SIFT calculates various indices and summaries to evaluate potential fraud:

  • Collusion Indices: SIFT calculates a number of collusion indices for each pair of students and summarizes the number of flags. Unfortunately, these indices can differ substantially in their conclusions. (A toy example of the descriptive approach is sketched below.)
  • Brain Dump Detection: Compare examinee responses with known brain dump content. A certification organization could use SIFT to look for evidence of brain dump makers and takers by evaluating the similarity between examinee response vectors and the answers on a brain dump site – especially if those were intentionally seeded by the organization.
  • Adjacent Examinee Analysis: Identify examinees in the same location who group together in the collusion index output, suggesting suspiciously similar responses.
  • Response Time Data: Evaluate the time examinees spend on questions to detect irregularities.
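To give a flavor of what a descriptive index looks at (this toy function is mine and is far simpler than the published indices SIFT implements), consider raw answer agreement between two examinees:

```python
import numpy as np

def exact_agreement(resp_a, resp_b):
    """Toy descriptive index: proportion of identical answer choices.
    Probabilistic indices model the agreement expected by chance; this does not."""
    return float(np.mean(np.asarray(resp_a) == np.asarray(resp_b)))

print(exact_agreement(list("ABCDABCDAB"), list("ABCDABCCAB")))  # 0.9
```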

Group-Level Analysis

SIFT rolls up statistics to the group level. For example, one teacher might have suspiciously high scores without much time spent per question. Is it cheating? Possibly. But perhaps the teacher had a group of accelerated students. Another teacher may show high scores with notably shorter times—perhaps due to unauthorized assistance.

The Story of SIFT

I started developing SIFT in 2012. ASC previously sold a program called Scrutiny!, but we stopped due to compatibility issues with newer Windows versions. Despite that, we kept receiving inquiries.

Determined to create a better tool, I set out to develop SIFT. I aimed to include the analysis from Scrutiny! (the Bellezza & Bellezza index) and much more. After years of business hurdles and countless hours, SIFT was released in July 2016.

Version 1.0 of SIFT includes:

  • 10 Collusion Indices (5 probabilistic, 5 descriptive)
  • Response Time Analysis
  • Group-Level Analysis
  • Additional tools for detecting test fraud

While it does not implement every analysis found in the literature, SIFT surpasses the other options available to practitioners.

Suggestions? We’d love to hear from you!

Job Task Analysis and KSAOs

KSAOs (Knowledge, Skills, Abilities, and Other Characteristics) is an approach to defining the human attributes necessary for success on a job.  It is an essential aspect of the human resources and organizational development fields, impacting critical business processes from recruitment to selection to compensation.  This post provides an introduction to KSAOs and then discusses how they impact assessments in the workplace, such as pre-employment screening or certification/licensure exams.

Need help developing an assessment based on sound psychometrics such as job analysis and KSAOs?  Or just need a software platform to make it easier for you to do so?  Get in touch!

What is a KSAO? Knowledge, Skills, Abilities, and Other Characteristics

KSAO is an acronym that represents four essential components:

Knowledge refers to the understanding or awareness of concepts, facts, and information required to perform a job. This can include both formal education and practical experience. For example, a software developer needs knowledge of programming languages like Python or Java.

Skills are the learned proficiency or expertise in specific tasks. Skills are often technical in nature and can be developed through practice. For instance, an accountant needs strong skills in financial analysis or spreadsheet management.

Abilities are the natural or developed traits that determine how well someone can perform certain tasks. This includes cognitive abilities such as problem-solving or physical abilities like manual dexterity. A surgeon, for example, must have the ability to remain calm under pressure and possess fine motor skills to perform delicate operations.

Other characteristics refer to personal attributes or traits that may influence job performance but don’t necessarily fall into the above categories. These could include things like personality traits, work ethics, or attitudes. For instance, a customer service representative should have a positive attitude and excellent communication skills.

Examples of KSAOs

Here are some examples of KSAOs in various roles:

Registered Nurse:

  • Knowledge: Medical terminology, patient care protocols, pharmacology.
  • Skills: Administering injections, operating medical equipment, record-keeping.
  • Abilities: Emotional resilience, critical thinking, physical stamina.
  • Other characteristics: Compassion, teamwork, attention to detail.

Marketing Manager:

  • Knowledge: Market research, digital marketing trends, consumer behavior.
  • Skills: Data analysis, content creation, campaign management.
  • Abilities: Strategic thinking, multitasking, creative problem-solving.
  • Other characteristics: Leadership, adaptability, communication skills.

Software Engineer:

  • Knowledge: Programming languages, software development methodologies.
  • Skills: Debugging code, designing algorithms, testing software.
  • Abilities: Logical reasoning, attention to detail, time management.
  • Other characteristics: Innovation, teamwork, problem-solving attitude.

Why Are KSAOs Important in Human Resources, Recruitment, and Selection?

KSAOs are integral in various parts of the HR cycle, primarily by providing quality information about jobs to decision-makers.

  1. Drive Recruitment: KSAOs provide a clear framework for matching candidates with job roles. By assessing whether a candidate has the required knowledge, skills, abilities, and other characteristics, employers can avoid mismatches that could result in poor performance or turnover.
  2. Clear Job Expectations: When a job is defined by its KSAOs, both employers and employees have a clearer understanding of what is expected in terms of performance. This reduces confusion and ensures that candidates understand the responsibilities and qualifications required.
  3. Improved Hiring Decisions: Analyzing KSAOs allows employers to focus on evaluating specific traits and capabilities.  This helps to ensure that they choose individuals who are best equipped to handle the job. It helps streamline the hiring process and avoid subjective decisions based solely on resumes or interviews by focusing on reliable, objective assessments.
  4. Enhanced Training and Development: Once KSAOs are clearly defined, employers can identify skill gaps and focus their training efforts on developing specific areas. This targeted approach makes employee development more efficient. It can also improve retention.
  5. Legal Compliance and Fairness: By focusing on specific job-related criteria like KSAOs, employers are less likely to make hiring decisions based on irrelevant factors, such as personal biases. This helps maintain legal compliance and promotes fairness in hiring practices.
  6. Informs Compensation Structures: Comparing KSAOs between different job families within the hierarchy of an organization can help provide reasoning and documentation behind compensation plans.

So… how does this pertain to assessment?

Workforce-related assessments such as certification/licensure or pre-employment tests have to be supported by validity, which is evidence and documentation that the tests measure and predict what we want them to. This then begs the question: what do we want them to measure and predict? The KSAOs!

For example, if you are developing a certification test for widgetmakers, you have to make sure it covers the right content, so that if (when!) the test is challenged or submitted for accreditation, you have evidence that it does. You can’t just go down to your basement and write 100 questions willy-nilly then put them up on some cheap online exam platform. Instead, you need to do a job analysis.

Job analysis is the process of systematically studying a job to determine the duties, responsibilities, required skills, and qualifications. This is the foundation upon which KSAOs are defined. By analyzing the job thoroughly, employers can define the exact KSAOs that are necessary for someone to perform well in that position.

There are two general approaches: focus groups and surveys. For a focus group, you might get 8 expert widgetmakers in a room for a day to discuss and agree on the KSAOs or tasks done on the job, then define weights for each on the exam. For a survey, you make a list of KSAOs or tasks and send it out to hundreds of widgetmakers, asking them to rate how important and how frequent each one is for successful performance.

The result is a well-defined list of KSAOs and tasks that are critical for the job, with relative weighting. This can provide input into important business processes like those discussed above (interview questions, job postings, recruitment), but for higher-stakes exams, you use this information to build formal exam blueprints. Blueprints ensure that professionals who earn the certification have demonstrated a certain level of expertise on the KSAOs. They are also useful for pre-employment assessments that screen applicants, highlighting those who are more qualified than others.
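
To make the weighting step concrete, here is a minimal sketch (in Python) of how survey ratings might be turned into blueprint weights. The tasks and ratings are hypothetical, and the criticality index used here (mean importance × mean frequency) is just one common convention; your program may use a different rubric.

```python
# Minimal sketch: turning hypothetical job-analysis survey ratings into
# blueprint weights. Assumes 1-5 ratings from surveyed SMEs; the criticality
# index (mean importance x mean frequency) is one common choice, not the only one.

ratings = {
    # task: (importance ratings, frequency ratings)
    "Operate widget press":   ([5, 4, 5, 4], [5, 5, 4, 5]),
    "Inspect widget quality": ([4, 4, 3, 4], [3, 4, 3, 3]),
    "Maintain safety logs":   ([3, 2, 3, 3], [2, 2, 3, 2]),
}

def mean(xs):
    return sum(xs) / len(xs)

criticality = {task: mean(imp) * mean(freq)
               for task, (imp, freq) in ratings.items()}
total = sum(criticality.values())

for task, crit in criticality.items():
    print(f"{task}: {100 * crit / total:.1f}% of exam blueprint")
```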

Conclusion: The Value of KSAOs

Incorporating KSAOs as the foundation of your employee hiring, development, and assessment processes can provide invaluable insight, increase validity, and have a positive impact on the company's bottom line. By understanding and evaluating these components, organizations can make informed decisions that contribute to long-term success.

Whether you’re a hiring manager or an HR professional, embracing KSAO-based assessments can enhance your ability to cultivate a strong, capable workforce. Contact us if you wish to talk with an expert about developing exams that meet international psychometric standards.  Additionally, our powerful yet easy-to-use online platform is ideal for developing and delivering your assessments.

 

computerized adaptive testing

Computerized adaptive testing is an AI-based approach to assessment where the test is personalized based on your performance as you take the test, making the test shorter, more accurate, more secure, more engaging, and fairer.  If you do well, the items get more difficult, and if you do poorly, the items get easier.  If an accurate score is reached, the test stops early.  By tailoring question difficulty to each test-taker’s performance, CAT ensures an efficient and secure testing process.  The AI algorithms are almost always based on Item Response Theory (IRT), an application of machine learning to assessment, but can be based on other models as well. 

 

Prefer to learn by doing?  Request a free account in FastTest, our powerful adaptive testing platform.


What is computerized adaptive testing?

Computerized adaptive testing (CAT), sometimes called computer-adaptive testing, adaptive assessment, or adaptive testing, is an algorithm that personalizes how an assessment is delivered to each examinee.  It is coded into a software platform, using the machine-learning approach of IRT to select items and score examinees.  The algorithm proceeds in a loop until the test is complete.  This makes the test smarter, shorter, fairer, and more precise.

 

The steps in the diagram above are adapted from Kingsbury and Weiss (1984), and are based on the components listed below.

Components of a CAT

  1. Item bank calibrated with IRT
  2. Starting point (theta level before someone answers an item)
  3. Item selection algorithm (usually maximum Fisher information)
  4. Scoring method (e.g., maximum likelihood)
  5. Termination criterion (stop the test at 50 items, or when standard error is less than 0.30?  Both?)
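
Conceptually, these five components form the configuration of the CAT engine. Below is a minimal sketch of how they might be represented in code; the names and default values are hypothetical, not those of any particular platform.

```python
# A minimal, hypothetical representation of the five CAT components.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    a: float   # discrimination (3PL)
    b: float   # difficulty
    c: float   # pseudo-guessing

@dataclass
class CATConfig:
    item_bank: list           # 1. item bank calibrated with IRT
    start_theta: float = 0.0  # 2. starting point
    selection: str = "max_fisher_information"  # 3. item selection rule
    scoring: str = "maximum_likelihood"        # 4. scoring method
    max_items: int = 50       # 5. termination: item cap...
    target_sem: float = 0.30  # ...and/or precision target
```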

How the components work

For starters, you need an item bank that has been calibrated with a relevant psychometric or machine learning model.  That is, you can’t just write a few items and subjectively rank them as Easy, Medium, or Hard difficulty.  That’s an easy way to get sued.  Instead, you need to write a large number of items (rule of thumb is 3x your intended test length) and then pilot them on a representative sample of examinees.  The sample must be large enough to support the psychometric model you choose, and can range from 100 to 1000.  You then need to perform simulation research – more on that later.

Once you have an item bank ready, here is how the computerized adaptive testing algorithm works for a student who sits down to take the test, with options for how to implement each step.

  1. Starting point: there are three options for selecting the starting score, which psychometricians call theta
    • Everyone gets the same value, like 0.0 (average, in the case of non-Rasch models)
    • Randomized within a range, to help test security and item exposure
    • Predicted value, perhaps from external data, or from a previous exam
  2. Select item
    • Find the item in the bank that has the highest information value
    • Often, you need to balance this with practical constraints such as Item Exposure or Content Balancing
  3. Score the examinee
    • Usually IRT, maximum likelihood or Bayes modal
  4. Evaluate termination criterion: using a predefined rule supported by your simulation research
    • Is a certain level of precision reached, such as a standard error of measurement <0.30?
    • Are there no good items left in the bank?
    • Has a time limit been reached?
    • Has a Max Items limit been reached?

The algorithm works by looping through steps 2 through 4 until the termination criterion is satisfied.
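
To make the loop concrete, here is a minimal sketch of steps 1 through 4 under the 3PL model, using maximum Fisher information for selection and a crude grid-based maximum likelihood score. The item parameters and the simulated examinee are invented; a production engine would add exposure control, content balancing, and a more robust estimator.

```python
import math, random

# Hypothetical 3PL item bank: (a, b, c) = discrimination, difficulty, guessing
BANK = [(1.2, -1.5, 0.2), (0.9, -0.5, 0.2), (1.5, 0.0, 0.2),
        (1.1, 0.7, 0.2), (1.4, 1.6, 0.2)]
TRUE_THETA = -0.8   # simulated examinee ability

def p_correct(theta, a, b, c):
    """3PL probability of a correct response."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def information(theta, a, b, c):
    """Fisher information of a 3PL item at theta."""
    p = p_correct(theta, a, b, c)
    return a ** 2 * ((1 - p) / p) * ((p - c) / (1 - c)) ** 2

def mle_score(responses):
    """Crude maximum-likelihood estimate of theta over a grid."""
    grid = [g / 10 for g in range(-40, 41)]   # -4.0 to +4.0
    def loglik(t):
        return sum(math.log(p_correct(t, *BANK[i])) if u
                   else math.log(1 - p_correct(t, *BANK[i]))
                   for i, u in responses)
    return max(grid, key=loglik)

theta, used, responses = 0.0, set(), []        # Step 1: starting point
while len(used) < len(BANK):
    item = max((i for i in range(len(BANK)) if i not in used),
               key=lambda i: information(theta, *BANK[i]))   # Step 2: select
    used.add(item)
    correct = random.random() < p_correct(TRUE_THETA, *BANK[item])
    responses.append((item, correct))
    theta = mle_score(responses)               # Step 3: score
    test_info = sum(information(theta, *BANK[i]) for i, _ in responses)
    if 1 / math.sqrt(test_info) < 0.30:        # Step 4: terminate on SEM
        break
print(f"Estimated theta after {len(used)} items: {theta:.2f}")
```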

How does the test adapt? By Difficulty or Quantity?

CATs operate by adapting both the difficulty and quantity of items seen by each examinee.

Difficulty
Most characterizations of computerized adaptive testing focus on how item difficulty is matched to examinee ability. High-ability examinees receive more difficult items, while low-ability examinees receive easier items, which has important benefits for the student and the organization. An adaptive test typically begins by delivering an item of medium difficulty; if you get it correct, you get a tougher item, and if you get it incorrect, you get an easier item.  This pattern continues.

Quantity: Fixed-Length vs. Variable-Length
A less publicized facet of adaptation is the number of items. Adaptive tests can be designed to stop when certain psychometric criteria are reached, such as a specific level of score precision. Some examinees finish very quickly, so adaptive tests typically require about half as many questions as a conventional test while delivering at least as much accuracy. Since test length differs from examinee to examinee, these adaptive tests are referred to as variable-length. Obviously, this is a massive benefit: cutting testing time in half, on average, can substantially decrease testing costs.

Some adaptive tests use a fixed length and only adapt item difficulty. This is usually done for public relations reasons, namely to avoid dealing with examinees who feel they were treated unfairly by the CAT, even though it is arguably fairer and more valid than a conventional test.  In general, it is best practice to meld the two: allow test length to vary, but put caps on either end to prevent tests that are inadvertently too short or that could drag on to 400 items.  For example, the NCLEX has a minimum test length of 75 items and a maximum of 145 items.  A simple sketch of such a hybrid rule follows.
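
This hybrid rule might look like the following minimal sketch; the thresholds shown are illustrative defaults, not the NCLEX's actual specifications.

```python
def should_stop(n_items, sem, min_items=75, max_items=145, target_sem=0.30):
    """Hybrid variable-length stopping rule: never stop before min_items,
    always stop at max_items, and in between stop once the standard error
    of measurement reaches the precision target."""
    if n_items < min_items:
        return False
    if n_items >= max_items:
        return True
    return sem <= target_sem
```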

 

Example of the computerized adaptive testing algorithm

Let’s walk through an oversimplified example.  Here, we have an item bank with 5 questions.  We will start with an item of average difficulty, and answer as would a student of below-average ability.

Below are the item information functions for five items in a bank.  Let’s suppose the starting theta is 0.0.  

item information functions

 

  1. We find the first item to deliver.  Which item has the highest information at 0.0?  It is Item 4.
  2. Suppose the student answers incorrectly.
  3. We run the IRT scoring algorithm, and suppose the score is -2.0.  
  4. Check the termination criterion; we certainly aren’t done yet, after 1 item.
  5. Find the next item.  Which has the highest information at -2.0?  Item 2.
  6. Suppose the student answers correctly.
  7. We run the IRT scoring algorithm, and suppose the score is -0.8.  
  8. Evaluate termination criterion; not done yet.
  9. Find the next item.  Item 2 is the highest at -0.8 but we already used it.  Item 4 is next best, but we already used it.  So the next best is Item 1.
  10. Item 1 is very easy, so the student gets it correct.
  11. New score is -0.2.
  12. Best remaining item at -0.2 is Item 3.
  13. Suppose the student gets it incorrect.
  14. New score is perhaps -0.4.
  15. Evaluate termination criterion.  Suppose that the test has a max of 3 items, an extremely simple criterion.  We have met it.  The test is now done and automatically submitted.
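
The selection logic in this walkthrough is easy to verify in code. The sketch below uses an invented five-item 2PL bank and prints the most informative unused item at each provisional theta; because the parameters are made up, the specific picks will not exactly match the walkthrough above.

```python
import math

# Invented 2PL parameters (a = discrimination, b = difficulty) for a
# five-item bank; these will not exactly reproduce the figure above.
BANK = {1: (0.9, -3.0), 2: (1.4, -1.8), 3: (1.0, 0.8),
        4: (1.0, 0.0), 5: (1.3, 1.5)}

def info(theta, a, b):
    """2PL item information: a^2 * p * (1 - p)."""
    p = 1 / (1 + math.exp(-a * (theta - b)))
    return a ** 2 * p * (1 - p)

used = set()
for theta in [0.0, -2.0, -0.8, -0.2]:   # provisional scores from the walkthrough
    best = max((i for i in BANK if i not in used),
               key=lambda i: info(theta, *BANK[i]))
    used.add(best)
    print(f"At theta = {theta:+.1f}, deliver Item {best}")
```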

A Longer Example

Computerized adaptive testing standard errors

As more items are delivered, the score typically becomes more accurate, unless the examinee is responding oddly.  Operationally, this means that the standard error of measurement will continue to decrease. For this reason, a common termination criterion for a point-estimate CAT is to stop when the SEM drops below some threshold, like 0.30.  If the purpose of the test is a pass/fail decision, we can instead take a confidence interval around the theta estimate and compare it to the cutscore: when the entire interval is above the cutscore, the examinee passes; when it is entirely below, they fail; if the interval straddles the cutscore, we administer another item.
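
Here is a minimal sketch of that decision rule, assuming a 95% confidence interval of theta ± 1.96 × SEM; the example values are hypothetical.

```python
def classify(theta, sem, cutscore, z=1.96):
    """Pass/fail decision rule for a confidence-interval CAT.
    Returns 'pass', 'fail', or 'continue' (administer another item)."""
    lower, upper = theta - z * sem, theta + z * sem
    if lower > cutscore:
        return "pass"      # entire interval above the cutscore
    if upper < cutscore:
        return "fail"      # entire interval below the cutscore
    return "continue"      # interval straddles the cutscore

# Hypothetical example: theta = 0.5, SEM = 0.25, cutscore = -0.2
print(classify(0.5, 0.25, -0.2))  # interval (0.01, 0.99) is above -0.2 -> 'pass'
```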

The graph here shows what often happens in a CAT, with the example of a pass/fail decision that is made as a Pass after 34 items.  It shows the theta estimate (red) and the upper and lower bounds of the confidence interval as we proceed through the CAT, item by item from left to right.  The theta estimate will bounce a lot for the first 5-10 items but then settle into a groove.  The SEM decreases quickly early in the test; here, the width of the band is wide after one item but small after about 5, and only shrinks a little with each further item.  If you think about it, going from 1 item to 2 items doubles the information, so this is not surprising.  But going from 28 to 29 items does not provide much marginal information, and certainly not going from 128 to 129.  So, many CATs will stop after some large number like 150 items, even if a decision has not yet been made.

Advantages of computerized adaptive testing

By making the test more intelligent, adaptive testing provides a wide range of benefits.  Some of the well-known advantages of adaptive testing, recognized by scholarly psychometric research, are listed below.  
 

Shorter tests

Research has found that adaptive tests produce anywhere from a 50% to 90% reduction in test length.  This is no surprise.  Suppose you have a pool of 100 items.  A top student is practically guaranteed to get the easiest 70 correct; only the hardest 30 will make them think.  Vice versa for a low-ability student.  Middle-ability students do not need the super-hard or the super-easy items.

Why does this matter?  Primarily, it can greatly reduce costs.  Suppose you are delivering 100,000 exams per year in testing centers, and you are paying $30/hour.  If you can cut your exam from 2 hours to 1 hour, you just saved $3,000,000.  Yes, there will be increased costs from the development and maintenance of computer adaptive tests, but you will likely save money in the end.

For K-12 assessment, you aren’t paying for seat time, but there is the opportunity cost of lost instruction time.  If students are taking formative assessments 3 times per year to check on progress, and you can reduce each by 20 minutes, that is 1 hour per student; if there are 500,000 students in your state, then you just saved 500,000 hours of learning.

More precise scores

CAT will make tests more accurate, in general.  Its algorithms are designed specifically to obtain more accurate scores without wasting examinee time.

More control of score precision (accuracy)

CAT can give all students scores of equivalent precision, making the test much fairer.  Traditional tests measure middle-ability students well, but not the top or bottom students.  Which is better: (A) students see the same items but receive scores of drastically different accuracy, or (B) students see different items but receive scores of equivalent accuracy?

Better test security

Since each student essentially receives an assessment tailored to them, test security is better than when everyone sees the same 100 items.  Item exposure is greatly reduced; note, however, that this introduces its own challenges, and adaptive testing algorithms need item exposure controls of their own.

A better experience for examinees, with reduced fatigue

Computer adaptive tests tend to be less frustrating for examinees across all ranges of ability.  Moreover, implementing variable-length stopping rules (e.g., once we know you are a top student, we don’t give you the 70 easy items) reduces fatigue.

Increased examinee motivation

Since examinees only see items relevant to them, this provides an appropriate challenge.  Low-ability examinees will feel more comfortable and get many more items correct than with a linear test.  High-ability students will get the difficult items that make them think.

Frequent retesting is possible

The whole “unique form” idea applies to the same student taking the same exam twice.  Suppose you take the test in September, at the beginning of a school year, and take the same one again in November to check your learning.  You’ve likely learned quite a bit and are higher on the ability range; you’ll get more difficult items, and therefore a new test.  If it was a linear test, you might see the same exact test.

This is a major reason that CAT plays a huge role in formative testing for K-12 education, delivered several times per year to millions of students in the US alone.

Individual pacing of tests

Examinees can move at their own speed.  Some might move quickly and be done in only 30 items.  Others might waver, also seeing 30 items but taking more time.  Still others might see 60 items.  The algorithms can be designed to make the best use of each examinee’s time.

Advantages of computerized testing in general

Of course, the advantages of using a computer to deliver a test are also relevant.  Here are a few:
  • Immediate score reporting
  • On-demand testing can reduce printing, scheduling, and other paper-based concerns
  • Storing results in a database immediately makes data management easier
  • Computerized testing facilitates the use of multimedia in items
  • You can immediately run psychometric reports
  • Timelines are reduced with an integrated item banking system

 

How to develop an adaptive assessment that is valid and defensible

CATs are the future of assessment. They operate by adapting both the difficulty and number of items to each individual examinee. The development of an adaptive test is no small feat, and requires five steps integrating the expertise of test content developers, software engineers, and psychometricians.

The development of a quality adaptive test is complex and requires experienced psychometricians in both item response theory (IRT) calibration and CAT simulation research. FastTest can provide you the psychometrician and software; if you provide test items and pilot data, we can help you quickly publish an adaptive version of your test.

   Step 1: Feasibility, applicability, and planning studies. First, extensive Monte Carlo simulation research must occur, and the results must be formulated as business cases, to evaluate whether adaptive testing is feasible and applicable for your program.
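
As a rough illustration of this step, a toy Monte Carlo study might simulate examinees against an invented Rasch bank, run a simple maximum-information CAT with an SEM stopping rule, and summarize test length and score recovery. Real feasibility studies use dedicated software such as CATSim and far more detailed designs; this sketch only conveys the idea.

```python
import math, random, statistics

random.seed(1)
BANK = [random.uniform(-3, 3) for _ in range(300)]  # invented Rasch difficulties

def p(theta, b):
    """Rasch probability of a correct response."""
    return 1 / (1 + math.exp(-(theta - b)))

def run_cat(true_theta, target_sem=0.35, max_items=60):
    theta, used, resp = 0.0, set(), []
    grid = [g / 10 for g in range(-40, 41)]
    while len(used) < max_items:
        # select the unused item with maximum Rasch information p * (1 - p)
        item = max((i for i in range(len(BANK)) if i not in used),
                   key=lambda i: p(theta, BANK[i]) * (1 - p(theta, BANK[i])))
        used.add(item)
        resp.append((BANK[item], random.random() < p(true_theta, BANK[item])))
        # crude grid-based maximum-likelihood score
        theta = max(grid, key=lambda t: sum(
            math.log(p(t, b)) if u else math.log(1 - p(t, b)) for b, u in resp))
        info = sum(p(theta, b) * (1 - p(theta, b)) for b, _ in resp)
        if 1 / math.sqrt(info) < target_sem:   # stop once precise enough
            break
    return theta, len(used)

true_thetas = [random.gauss(0, 1) for _ in range(100)]
results = [run_cat(t) for t in true_thetas]
print(f"Average test length: {statistics.mean(n for _, n in results):.1f} items")
print(f"Correlation(true, estimated): "
      f"{statistics.correlation(true_thetas, [th for th, _ in results]):.2f}")
```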

   Step 2: Develop item bank. An item bank must be developed to meet the specifications recommended by Step 1.

   Step 3: Pretest and calibrate item bank. Items must be pilot tested on 200-1000 examinees (depends on IRT model) and analyzed by a Ph.D. psychometrician.

   Step 4: Determine specifications for final CAT. Data from Step 3 is analyzed to evaluate CAT specifications and determine the most efficient algorithms, using CAT simulation software such as CATSim.

   Step 5: Publish live CAT. The adaptive test is published in a testing engine capable of fully adaptive tests based on IRT.  There are not many of them on the market.  Sign up for a free account in our platform FastTest and try it for yourself!

Want to learn more about our one-of-a-kind model? Click here to read the seminal article by our two co-founders.  More adaptive testing research is available here.

Minimum requirements for computerized adaptive testing

Here are some minimum requirements to evaluate if you are considering a move to the CAT approach.

  • A large item bank piloted so that each item has at least 100 valid responses (Rasch model) or 500 (3PL model)
  • 500 examinees per year
  • Specialized IRT calibration and CAT simulation software like Xcalibre and CATSim.
  • Staff with a Ph.D. in psychometrics or an equivalent level of experience. Or, leverage our internationally recognized expertise in the field.
  • Items (questions) that can be scored objectively correct/incorrect in real-time
  • An item banking system and CAT delivery platform
  • Financial resources: Because it is so complex, the development of a CAT will cost at least $10,000 (USD) — but if you are testing large volumes of examinees, it will be a significantly positive investment. If you pay $20/hour for proctoring seats and cut a test from 2 hours to 1 hour for just 1,000 examinees… that’s a $20,000 savings.  If you are doing 200,000 exams?  That is $4,000,000 in seat time that is saved.

Adaptive testing: Resources for further reading

Visit the links below to learn more about adaptive assessment.  

  • We recommend that you first read this landmark article by our co-founders.
  • Read this article on producing better measurements with CAT from Prof. David J. Weiss.
  • International Association for Computerized Adaptive Testing: www.iacat.org
  • Here is the link to the webinar on the history of CAT, by the godfather of CAT, Prof. David J. Weiss.

Examples of CAT

Many large-scale assessments utilize adaptive technology.  The GRE (Graduate Record Examination) is a prime example of an adaptive test, as are the NCLEX (nursing exam in the USA), the GMAT (business school admissions), Paramedic/EMT certification exams, and many formative assessments like the NWEA MAP or iReady.  The SAT has recently transitioned to a multistage adaptive format.

How to implement CAT on an adaptive testing platform

Computerized adaptive testing options

Our revolutionary platform, FastTest, makes it easy to publish a CAT.  It is designed as a user-friendly ecosystem to build, deliver, and validate assessments, with a focus on modern psychometrics like IRT and CAT.

  1. Upload your items
  2. Deliver a pilot exam
  3. Calibrate with our IRT software Xcalibre
  4. Upload the IRT parameters into the FastTest adaptive testing platform
  5. Assemble the pool of items you want to publish
  6. Specify the adaptive testing software parameters (screenshot)
  7. Deliver your adaptive test!

 

Ready to roll?  Contact us to sign up for a free account in our industry-leading CAT platform or to discuss with one of our PhD psychometricians.