Test Development

Nathan Thompson earned his PhD in Psychometrics from the University of Minnesota, with a focus on computerized adaptive testing. His undergraduate degree was from Luther College, with a triple major in Mathematics, Psychology, and Latin. He is primarily interested in the use of AI and software automation to augment and replace the work done by psychometricians, work that has given him extensive experience in software design and programming. Dr. Thompson has published over 100 journal articles and conference presentations, but his favorite remains https://scholarworks.umass.edu/pare/vol16/iss1/1/ .
Psychometrics, Test Development

Item Writing: Tips for Authoring Test Questions

Item writing (aka item authoring) is a science as well as an art, and if you have done it, you know just how challenging it can be!  You are experts at what you do, and

Psychometrics, Test Development

Validity Threats and Psychometric Forensics

Validity threats are issues with a test or assessment that hinder the interpretations and use of scores, such as cheating, inappropriate use of scores, unfair preparation, or non-standardized delivery.  It is important to establish a

Test Development

Item Review Workflow for Exam Development

Item review is the process of putting newly written test questions through a rigorous peer review to ensure that they are high quality and meet industry standards. What is an item review workflow? Developing

Psychometrics, Test Development

What is Job Task Analysis for Certification?

Job Task Analysis (JTA) is an essential step in designing a test to be used in the workforce, such as pre-employment or certification/licensure, by analyzing data on what is actually being done in the job. 

Test Development

Question and Test Interoperability (QTI)

Question and Test Interoperability® (QTI®) is a set of standards around the format of import/export files for test questions in educational assessment and HR/credentialing exams.  This facilitates the movement of questions from one software platform

Psychometrics, Test Development

Classical Test Theory: Item Statistics

Classical Test Theory (CTT) is a psychometric approach to analyzing, improving, scoring, and validating assessments.  It is based on relatively simple concepts, such as averages, proportions, and correlations.  One of the most frequently used aspects
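
To make the "averages, proportions, and correlations" concrete, here is a small Python sketch (illustrative only, not code from any particular psychometric package) that computes the two workhorse CTT item statistics from a 0/1 scored response matrix: the item difficulty P value and the corrected item-total (point-biserial) correlation. The data are invented.

```python
import numpy as np

# Hypothetical scored responses: rows = examinees, columns = items (1 = correct, 0 = incorrect)
scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])

total = scores.sum(axis=1)  # each examinee's total score

for i in range(scores.shape[1]):
    item = scores[:, i]
    p = item.mean()                          # item difficulty: proportion answering correctly
    rest = total - item                      # total score excluding this item (corrected index)
    r_pb = np.corrcoef(item, rest)[0, 1]     # item-rest point-biserial as a discrimination index
    print(f"Item {i + 1}: P = {p:.2f}, corrected point-biserial = {r_pb:.2f}")
```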

Psychometrics, Test Development

Classical Test Theory vs. Item Response Theory

Classical Test Theory and Item Response Theory (CTT & IRT) are the two primary psychometric paradigms.  That is, they are mathematical approaches to how tests are analyzed and scored.  They differ quite substantially in substance
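
As a quick illustration of how different the two paradigms are mathematically (a standard textbook summary, not an excerpt from the article itself): CTT decomposes the observed total score into a true score plus error, whereas IRT models the probability of answering a single item correctly as a function of examinee ability and item parameters, for example with the three-parameter logistic (3PL) model.

```latex
% CTT: observed score = true score + measurement error
\[ X = T + E \]

% IRT (3PL): probability that an examinee with ability \theta answers item i correctly,
% given discrimination a_i, difficulty b_i, and pseudo-guessing c_i
\[ P_i(\theta) = c_i + \frac{1 - c_i}{1 + e^{-a_i(\theta - b_i)}} \]
```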

Psychometrics, Test Development

Coefficient Alpha Reliability Index

Coefficient alpha reliability, sometimes called Cronbach’s alpha, is a statistical index that is used to evaluate the internal consistency or reliability of an assessment. That is, it quantifies how consistent we can expect scores to
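
For readers who want to see the arithmetic, here is a minimal Python sketch of the usual coefficient alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), applied to a small invented data set (not code from any particular package).

```python
import numpy as np

def coefficient_alpha(scores: np.ndarray) -> float:
    """Coefficient (Cronbach's) alpha for a matrix of item scores.

    scores: rows = examinees, columns = items.
    """
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinee total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 0/1 scored responses for 6 examinees on 4 items
scores = np.array([
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
])
print(f"alpha = {coefficient_alpha(scores):.3f}")
```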

Psychometrics, Test Development

“Dichotomous” vs “Polytomous” in IRT?

What is the difference between the terms dichotomous and polytomous in psychometrics?  Well, these terms represent two subcategories within item response theory (IRT), which is the dominant psychometric paradigm for constructing, scoring, and analyzing assessments.
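
As a quick anchor for the terminology (standard definitions, not a quote from the article): a dichotomous item has exactly two score categories, such as correct/incorrect, and is typically modeled with a function like the two-parameter logistic (2PL), while a polytomous item has more than two score categories (for instance a 0-4 rubric) and uses models such as the graded response or partial credit models, which assign a probability to every category.

```latex
% Dichotomous item (two categories): 2PL model for the probability of a correct response
\[ P(X_i = 1 \mid \theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}} \]

% Polytomous item (categories k = 0, 1, ..., m): graded response model, built from
% cumulative probabilities of scoring in category k or higher
\[ P(X_i \ge k \mid \theta) = \frac{1}{1 + e^{-a_i(\theta - b_{ik})}}, \qquad
   P(X_i = k \mid \theta) = P(X_i \ge k \mid \theta) - P(X_i \ge k+1 \mid \theta) \]
```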

Test Development

Meta-analysis in Assessment

Meta-analysis is a research process of collating data from multiple independent but similar scientific studies in order to identify common trends and findings by means of statistical methods. To put it simply, it is a
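
As the simplest possible illustration of the "statistical methods" involved, here is a hedged Python sketch of fixed-effect (inverse-variance weighted) pooling of effect sizes from a few invented studies; real meta-analyses typically also examine heterogeneity and may use random-effects models.

```python
import math

# Hypothetical effect sizes (e.g., standardized mean differences) and their variances
studies = [
    {"name": "Study A", "effect": 0.30, "variance": 0.02},
    {"name": "Study B", "effect": 0.45, "variance": 0.05},
    {"name": "Study C", "effect": 0.20, "variance": 0.01},
]

# Fixed-effect meta-analysis: weight each study by the inverse of its variance
weights = [1 / s["variance"] for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

print(f"Pooled effect = {pooled:.3f} (SE = {se_pooled:.3f})")
```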

Test Development

Test validation

Test validation is the process of verifying, based on solid evidence, whether the requirements of each test development stage have been fulfilled. In particular, test validation is an ongoing process of developing an argument

Education, Psychometrics, Test Development

Automated Item Generation

Automated item generation (AIG) is a paradigm for developing assessment items (test questions), utilizing principles of artificial intelligence and automation. As the name suggests, it tries to automate some or all of the effort involved
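
As a toy example of the template-based ("item model") flavor of AIG, and not a depiction of any specific AIG tool, the Python sketch below fills a simple math stem with randomly generated numbers to produce many parallel items, each with a key and plausible distractors.

```python
import random

# A simple "item model": a stem template plus rules for generating variables and the key
TEMPLATE = "A train travels {speed} km/h for {hours} hours. How many km does it travel?"

def generate_items(n: int, seed: int = 0):
    rng = random.Random(seed)
    items = []
    for _ in range(n):
        speed = rng.randint(40, 120)
        hours = rng.randint(2, 8)
        key = speed * hours
        distractors = sorted({key + speed, key - speed, key + hours})  # plausible wrong answers
        items.append({
            "stem": TEMPLATE.format(speed=speed, hours=hours),
            "key": key,
            "distractors": distractors,
        })
    return items

for item in generate_items(3):
    print(item["stem"], "->", item["key"], "| distractors:", item["distractors"])
```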

Test Development

Borderline Group Method of Standard Setting

The borderline group method of standard setting is one of the most common approaches to establishing a cutscore for an exam.  In comparison with the item-centered standard setting methods such as modified-Angoff, Nedelsky, and Ebel,
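
The arithmetic behind the method is simple once judges have flagged which examinees they consider borderline (minimally competent): the cutscore is typically the median of that group's scores. A minimal Python sketch with invented scores:

```python
import statistics

# Hypothetical test scores of examinees whom the panel judged to be "borderline"
# (minimally competent). The cutscore is typically the median of this group.
borderline_scores = [61, 64, 58, 66, 63, 60, 62, 65]

cutscore = statistics.median(borderline_scores)
print(f"Borderline-group cutscore: {cutscore}")
```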

Credentialing, Psychometrics, Test Development

Ebel Method of Standard Setting

The Ebel method of standard setting is a psychometric approach to establish a cutscore for tests consisting of multiple-choice questions. It is usually used for high-stakes examinations in the fields of higher education, medical and health
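
A minimal Python sketch of the usual Ebel arithmetic, using an invented relevance-by-difficulty grid: the panel counts the items in each cell and estimates the percentage a borderline candidate should answer correctly, and the cutscore is the weighted sum across cells.

```python
# Hypothetical Ebel ratings: for each (relevance, difficulty) cell, the panel records
# how many items fall in the cell and the proportion a borderline candidate should answer correctly.
cells = [
    # (relevance,  difficulty, n_items, expected_pct_correct)
    ("essential",  "easy",     10, 0.90),
    ("essential",  "medium",    8, 0.70),
    ("important",  "easy",      6, 0.80),
    ("important",  "hard",      4, 0.40),
    ("acceptable", "medium",    2, 0.50),
]

total_items = sum(n for _, _, n, _ in cells)
expected_correct = sum(n * pct for _, _, n, pct in cells)
cutscore = expected_correct / total_items  # expressed as a proportion of the total test

print(f"Raw cutscore: {expected_correct:.1f} of {total_items} items ({cutscore:.0%})")
```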

Psychometrics, Test Development, Test Security

Item Parameter Drift

Item parameter drift (IPD) refers to the phenomenon in which the parameter values of a given test item change over multiple testing occasions within the item response theory (IRT) framework. This phenomenon is often relevant
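
As a bare-bones illustration of the idea (operational IPD analyses use formal statistical tests rather than a raw cutoff), the Python sketch below compares hypothetical IRT difficulty (b) parameters from two calibrations of the same items and flags large shifts.

```python
# Hypothetical IRT b (difficulty) parameters for the same items calibrated at two time points.
# A very simple drift screen: flag items whose difficulty shifts by more than a chosen threshold.
b_time1 = {"ITEM_001": -0.45, "ITEM_002": 0.10, "ITEM_003": 1.20, "ITEM_004": 0.55}
b_time2 = {"ITEM_001": -0.40, "ITEM_002": 0.85, "ITEM_003": 1.15, "ITEM_004": 0.50}

THRESHOLD = 0.5  # arbitrary illustrative cutoff on the logit scale

for item_id in b_time1:
    drift = b_time2[item_id] - b_time1[item_id]
    flag = "DRIFT?" if abs(drift) > THRESHOLD else "ok"
    print(f"{item_id}: b1={b_time1[item_id]:+.2f}  b2={b_time2[item_id]:+.2f}  change={drift:+.2f}  {flag}")
```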

Education, Test Development

Speeded Test vs Power Test

The distinction between a speeded test and a power test is one of the ways of differentiating psychometric or educational assessments. In the context of educational measurement and depending on the assessment goals and time constraints, tests are

Psychometrics, Test Development

Distractor Analysis for Test Items

Distractor analysis refers to the process of evaluating the performance of incorrect answers vs the correct answer for multiple choice items on a test.  It is a key step in the psychometric analysis process to
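
To show the kind of output this produces, here is a small Python sketch (invented data) that reports, for each answer option, the proportion of examinees who chose it and their mean total score; a healthy item usually shows the key attracting the higher-scoring examinees and each distractor drawing at least a few of the lower-scoring ones.

```python
import numpy as np

# Hypothetical responses to one 4-option item (the key is "B") plus each examinee's total score
chosen = np.array(["B", "C", "B", "A", "B", "D", "B", "C", "A", "B"])
totals = np.array([ 38,  22,  35,  25,  40,  18,  33,  27,  24,  36])
KEY = "B"

for option in ["A", "B", "C", "D"]:
    picked = chosen == option
    prop = picked.mean()                                        # proportion selecting this option
    mean_total = totals[picked].mean() if picked.any() else float("nan")
    label = "key" if option == KEY else "distractor"
    print(f"Option {option} ({label}): chosen by {prop:.0%}, mean total score = {mean_total:.1f}")
```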

Test Development

What is multi-modal test delivery?

Multi-modal test delivery refers to an exam that is capable of being delivered in several different ways, or to an online testing software platform designed to support this process. For example, you might provide the