Entries by Nathan Thompson, PhD

The IACAT Journal: JCAT

Do you conduct adaptive testing research? Perhaps a thesis or dissertation? Or maybe you have developed adaptive tests and have a technical report or validity study? I encourage you to check out the Journal of Computerized Adaptive Testing as a publication outlet for your adaptive testing research. JCAT is the official journal of the International […]


IRT Test Information Function

The IRT Test Information Function is a concept from item response theory (IRT) that is designed to evaluate how well an assessment differentiates examinees, and at what ranges of ability. For example, we might expect an exam composed of difficult items to do a great job of differentiating top examinees, but be worthless for […]

Psychometrist: What do they do?

Psychometrist is an important profession within the world of assessment and psychology. Their primary role is to deliver and interpret assessments, typically the sorts of assessments that are delivered in a one-on-one clinical situation. For example, they might give IQ tests to kids to identify those who qualify as Gifted, then explain the results […]

Item Banks: 6 Ways To Improve

The foundation of a decent assessment program is the ability to develop and manage strong item banks. Item banks are a central repository of test questions, each stored with important metadata such as Author or Difficulty. They are designed to treat items as reusable objects, which makes it easier to publish new exam forms. Of […]

Responses in Common (RIC)

This collusion detection (test cheating) index simply calculates the number of responses in common between a given pair of examinees.  For example, both answered ‘B’ to a certain item regardless of whether it was correct or incorrect.  There is no probabilistic evaluation that can be used to flag examinees.  However, it could be of good […]
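A minimal sketch of the count described above, assuming each examinee's responses are stored as a list of chosen options in the same item order:

```python
def responses_in_common(resp_a, resp_b):
    """Count items where two examinees chose the same option,
    regardless of whether that option was correct or incorrect."""
    return sum(1 for x, y in zip(resp_a, resp_b) if x == y)

# Hypothetical responses from two examinees on a 5-item test
examinee_1 = ["B", "C", "A", "D", "B"]
examinee_2 = ["B", "A", "A", "D", "C"]
print(responses_in_common(examinee_1, examinee_2))  # 3
```

Because high-ability examinees legitimately share many correct answers, a raw count like this carries no probabilistic flagging threshold on its own.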

Exact Errors in Common (EEIC) collusion detection

Exact Errors in Common (EEIC) is an extremely basic collusion detection index that simply counts the items on which a given pair of examinees chose the same incorrect answer. For example, suppose two examinees got 80/100 correct on a test. Of the 20 each got wrong, they had 10 in common. Of those, they gave the same wrong […]
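The count can be sketched as below, assuming response lists aligned with an answer key; an item contributes only when both examinees are wrong and chose the identical option:

```python
def exact_errors_in_common(resp_a, resp_b, key):
    """Count items where both examinees were wrong AND chose the
    same incorrect option (identically wrong)."""
    return sum(
        1
        for x, y, k in zip(resp_a, resp_b, key)
        if x == y and x != k
    )

# Hypothetical 4-item example
key = ["A", "B", "C", "D"]
resp_1 = ["A", "C", "C", "A"]
resp_2 = ["B", "C", "C", "A"]
print(exact_errors_in_common(resp_1, resp_2, key))  # 2
```

Note that shared correct answers (item 3 above) and non-matching errors (item 1) are excluded; only identically wrong choices count.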

Errors in Common (EIC) exam cheating index

This exam cheating index (collusion detection) simply calculates the number of errors in common between a given pair of examinees. For example, if two examinees each got 80/100 correct, meaning 20 errors, and they answered all of the same questions incorrectly, the EIC would be 20. If they both scored 80/100 but had only 10 wrong questions […]
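A sketch of EIC under the same assumed format as above (aligned response lists plus an answer key); unlike EEIC, an item counts whenever both examinees missed it, even if they chose different wrong options:

```python
def errors_in_common(resp_a, resp_b, key):
    """Count items that both examinees answered incorrectly,
    regardless of which wrong option each chose."""
    return sum(
        1
        for x, y, k in zip(resp_a, resp_b, key)
        if x != k and y != k
    )

# Hypothetical 4-item example: both miss items 2 and 4
key = ["A", "B", "C", "D"]
resp_1 = ["A", "C", "C", "A"]
resp_2 = ["B", "C", "C", "A"]
print(errors_in_common(resp_1, resp_2, key))  # 2
```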

Harpp, Hogan, & Jennings: Response Similarity

Harpp, Hogan, and Jennings (1996) revised their Response Similarity Index somewhat from Harpp and Hogan (1993). This produced a new equation for a statistic to detect collusion and other forms of exam cheating: EEIC / D. Explanation of Response Similarity Index: EEIC denotes the number of exact errors in common, or identically wrong responses; D is the number […]
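A sketch of the ratio, assuming D is the number of items on which the pair's responses differ (the response format and flagging threshold here are assumptions, not taken from the paper itself):

```python
def hhj_index(resp_a, resp_b, key):
    """Harpp-Hogan-Jennings style ratio: exact errors in common (EEIC)
    divided by D, the number of items where the two responses differ."""
    eeic = sum(1 for x, y, k in zip(resp_a, resp_b, key) if x == y and x != k)
    d = sum(1 for x, y in zip(resp_a, resp_b) if x != y)
    # Identical response strings give D = 0; treat as maximally suspicious
    return eeic / d if d else float("inf")

# Hypothetical 4-item example: EEIC = 2, D = 1, ratio = 2.0
key = ["A", "B", "C", "D"]
resp_1 = ["A", "C", "C", "A"]
resp_2 = ["B", "C", "C", "A"]
print(hhj_index(resp_1, resp_2, key))
```

Intuitively, pairs with many identically wrong answers but few differences produce a large ratio, which is the pattern the index is designed to flag.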