Nathan Thompson articles

Nathan Thompson earned his PhD in Psychometrics from the University of Minnesota, with a focus on computerized adaptive testing. His undergraduate degree was from Luther College, with a triple major in Mathematics, Psychology, and Latin. He is primarily interested in using AI and software automation to augment and replace the work done by psychometricians, which has given him extensive experience in software design and programming. Dr. Thompson has published over 100 journal articles and conference presentations, but his favorite remains https://scholarworks.umass.edu/pare/vol16/iss1/1/ .
Psychometrics

The IACAT Journal: JCAT

Do you conduct adaptive testing research? Perhaps a thesis or dissertation? Or maybe you have developed adaptive tests and have a technical report or validity study? I encourage you to check out the Journal of Computerized Adaptive Testing (JCAT).

Psychometrics, Test Development

IRT Test Information Function

The IRT Test Information Function is a concept from item response theory (IRT) that is designed to evaluate how well an assessment differentiates examinees, and at what ranges of ability. For example, we might expect

Psychometrics

What Is The Standard Error of Measurement?

The standard error of measurement (SEM) is one of the core concepts in psychometrics.  One of the primary assumptions of any assessment is that it is accurately and consistently measuring whatever it is we want
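Under classical test theory, the SEM is typically computed from the observed-score standard deviation and the reliability; a minimal sketch (the function name is my own):

```python
import math

def standard_error_of_measurement(sd, reliability):
    """Classical SEM: observed-score SD times sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

# A test with SD = 10 and reliability = 0.91 has SEM = 3.0,
# so a rough 95% band around an observed score spans about +/- 6 points.
```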

Psychometrics

Question Banks: An Introduction

What is a question bank? A question bank refers to a pool of test questions to be used on various assessments across time.

Psychometrics

Psychometrist: What do they do?

Psychometrist is an important profession within the world of assessment and psychology.  A psychometrist's primary role is to deliver and interpret assessments, typically the sorts of assessments that are delivered in a one-on-one clinical situation.

Psychometrics, Test Development

What is Classical Item Difficulty (P Value)?

One of the core concepts in psychometrics is item difficulty.  This refers to the probability that examinees will get the item correct for educational/cognitive assessments or respond in the keyed direction with psychological/survey assessments (more on that
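For dichotomously scored items, the classical P value is just the proportion of examinees answering correctly; a minimal sketch (the function name is my own):

```python
def p_value(item_scores):
    """Classical item difficulty: mean of 0/1 item scores, i.e. proportion correct."""
    return sum(item_scores) / len(item_scores)
```

Note that higher P values mean an *easier* item, which is why some authors prefer the term "item facility."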

Psychometrics

Item Banks: 6 Ways To Improve

The foundation of a decent assessment program is the ability to develop and manage strong item banks. Item banks are a central repository of test questions, each stored with important metadata such as Author or

Psychometrics

Responses in Common (RIC) Index

This collusion detection (test cheating) index simply calculates the number of responses in common between a given pair of examinees.  For example, both answered ‘B’ to a certain item regardless of whether it was correct
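A minimal sketch of the count described above (the function name is my own):

```python
def responses_in_common(resp_a, resp_b):
    """Count items on which two examinees selected the same option,
    regardless of whether that option was correct."""
    return sum(1 for x, y in zip(resp_a, resp_b) if x == y)
```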

Psychometrics

Exact Errors in Common (EEIC) collusion detection

Exact Errors in Common (EEIC) is an extremely basic collusion detection index that simply calculates the number of exact errors in common between a given pair of examinees, i.e., items that both answered incorrectly with the same wrong option. For example, suppose two examinees got 80/100 correct on
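A minimal sketch of the count (the function name is my own; responses and key are strings of selected options):

```python
def exact_errors_in_common(resp_a, resp_b, key):
    """Count items both examinees answered incorrectly with the SAME wrong option."""
    return sum(1 for x, y, k in zip(resp_a, resp_b, key) if x == y and x != k)
```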

Psychometrics

Errors in Common (EIC) Exam Cheating Index

This exam cheating index (collusion detection) simply calculates the number of errors in common between a given pair of examinees.  For example, two examinees got 80/100 correct, meaning 20 errors, and they answered all of
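A minimal sketch of the count (the function name is my own); unlike EEIC, this counts shared wrong items even when the two examinees chose different wrong options:

```python
def errors_in_common(resp_a, resp_b, key):
    """Count items both examinees answered incorrectly, whether or not
    they chose the same wrong option."""
    return sum(1 for x, y, k in zip(resp_a, resp_b, key) if x != k and y != k)
```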

Psychometrics

Harpp, Hogan, & Jennings (1996): Response Similarity Index

Harpp, Hogan, and Jennings (1996) revised their Response Similarity Index somewhat from Harpp and Hogan (1993). This produced a new equation for a statistic to detect collusion and other forms of exam cheating.

Psychometrics

Harpp & Hogan (1993): Response Similarity Index

Harpp and Hogan (1993) suggested a response similarity index defined as the ratio EEIC/EIC, where EEIC denotes the number of exact errors in common (identically wrong responses) and EIC the number of errors in common. This
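As a minimal sketch of that ratio of exact errors in common to errors in common (the function name and the zero-EIC handling are my own):

```python
def response_similarity(eeic, eic):
    """Harpp & Hogan (1993) index: exact errors in common divided by errors in common.
    Returns 0.0 when the pair shares no errors (EIC = 0)."""
    return eeic / eic if eic else 0.0
```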

Psychometrics

Bellezza & Bellezza (1989): Error Similarity Analysis

This index evaluates error similarity analysis (ESA), namely estimating the probability that a given pair of examinees would have the same exact errors in common (EEIC), given the total number of errors they have in

Psychometrics

Frary, Tideman, & Watts (1977): g2 Collusion Index

The Frary, Tideman, and Watts (1977) g2 index is a collusion (cheating) detection index, which is a standardization that evaluates the number of common responses between two examinees in the typical standardized format: observed common

Psychometrics

Wollack 1997 Omega Collusion Index

Wollack (1997) adapted the standardized collusion index of Frary, Tideman, and Watts (1977), g2, to item response theory (IRT) and produced the Wollack Omega (ω) index.  It is clear that the graphics in the original

Psychometrics

Wesolowsky (2000) Zjk Collusion Detection Index

Wesolowsky’s (2000) index is a collusion detection index, designed to look for exam cheating by finding similar response vectors amongst examinees. It is in the same family as g2 and Wollack’s ω.  Like those, it

Psychometrics

Response Time Effort

Wise and Kong (2005) defined an index to flag examinees not putting forth minimal effort, based on their response time.  It is called the response time effort (RTE) index. Let K be the number of
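The excerpt is cut off, but the RTE index is the proportion of a test's items on which the examinee's response time suggests solution behavior rather than rapid guessing. A minimal sketch (the function name and per-item thresholds are my own; choosing those thresholds is the substantive part of the method):

```python
def response_time_effort(response_times, thresholds):
    """Wise & Kong (2005) RTE: fraction of items whose response time meets or
    exceeds that item's solution-behavior threshold (e.g., in seconds)."""
    flags = [rt >= th for rt, th in zip(response_times, thresholds)]
    return sum(flags) / len(flags)
```

Examinees with RTE below some cutoff (Wise and Kong suggested values near 0.90) can be flagged as not putting forth effort.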

Psychometrics

Holland K Index and K Variants for Forensics

The Holland K index and variants are probability-based indices for psychometric forensics, like the Bellezza & Bellezza indices, but make use of conditional information in their calculations. All three estimate the probability of observing wij
