Entries by Nathan Thompson, PhD

Item Banks: 6 Ways To Improve

The foundation of a sound assessment program is the ability to develop and manage strong item banks. Item banks are a central repository of test questions, each stored with important metadata such as Author or Difficulty. They are designed to treat items as reusable objects, which makes it easier to publish new exam forms. Of […]
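
As a minimal sketch of the "items as reusable objects" idea, here is one way such a record might look in Python; the field names (author, difficulty, and so on) are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class Item:
    """One reusable item in the bank; field names are illustrative."""
    item_id: str
    stem: str
    options: list[str]
    key: str                 # the correct answer
    author: str = ""         # metadata: who wrote the item
    difficulty: float = 0.0  # metadata: e.g., a classical p-value

# A new exam form is then just a selection of item_ids from the bank,
# so the same item can appear on many forms without duplication.
```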

Responses in Common (RIC) Index

This collusion detection (test cheating) index simply counts the number of responses in common between a given pair of examinees. For example, both answered ‘B’ to a certain item, regardless of whether ‘B’ was correct. There is no probabilistic evaluation that can be used to flag examinees. However, it could be of good […]
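
A minimal sketch of the computation, assuming each examinee's responses are stored as a sequence of selected options:

```python
def responses_in_common(resp_a, resp_b):
    """RIC: count items on which two examinees gave the same
    response, whether that response was right or wrong."""
    return sum(a == b for a, b in zip(resp_a, resp_b))

# Both chose the same option on 3 of the 4 items:
print(responses_in_common("BBCA", "BBCD"))  # 3
```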

Exact Errors in Common (EEIC) collusion detection

Exact Errors in Common (EEIC) is an extremely basic collusion detection index that simply counts the number of identically wrong responses between a given pair of examinees. For example, suppose two examinees got 80/100 correct on a test. Of the 20 each got wrong, they had 10 in common. Of those, they gave the same wrong […]
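
A sketch of the count, assuming responses and the scoring key are stored as parallel sequences of options:

```python
def exact_errors_in_common(resp_a, resp_b, key):
    """EEIC: count items that both examinees got wrong with the
    *same* wrong option (a == b already implies b != k here)."""
    return sum(a == b and a != k for a, b, k in zip(resp_a, resp_b, key))

# Both missed item 3, and both chose the same wrong option "D":
print(exact_errors_in_common("ABDD", "ABDA", "ABCD"))  # 1
```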

Errors in Common (EIC) Exam Cheating Index

This exam cheating index (collusion detection) simply counts the number of errors in common between a given pair of examinees. For example, if two examinees each got 80/100 correct, meaning 20 errors each, and they answered all of the same questions wrongly, the EIC would be 20. If they both scored 80/100 but had only 10 wrong questions […]
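
A sketch matching the worked example above; both examinees being wrong on an item counts, regardless of which wrong option each chose:

```python
def errors_in_common(resp_a, resp_b, key):
    """EIC: count items that both examinees answered incorrectly,
    no matter which wrong option each of them selected."""
    return sum(a != k and b != k for a, b, k in zip(resp_a, resp_b, key))

# Items 3 and 4: both wrong, with different wrong options -> both count.
print(errors_in_common("ABDA", "ABAC", "ABCD"))  # 2
```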

Harpp, Hogan, & Jennings (1996): Response Similarity Index

Harpp, Hogan, and Jennings (1996) revised their Response Similarity Index somewhat from Harpp and Hogan (1993). This produced a new equation for a statistic to detect collusion and other forms of exam cheating: EEIC / D. Explanation of Response Similarity Index: EEIC denotes the number of exact errors in common, or identically wrong responses; D is the number […]
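
A sketch of the ratio, under the assumption that D counts the items on which the two response vectors differ (the excerpt truncates its definition):

```python
def hhj_ratio(resp_a, resp_b, key):
    """Harpp-Hogan-Jennings-style ratio EEIC / D, assuming
    D = number of items the two examinees answered differently."""
    eeic = sum(a == b and a != k for a, b, k in zip(resp_a, resp_b, key))
    d = sum(a != b for a, b in zip(resp_a, resp_b))
    return eeic / d if d else float("inf")  # identical response vectors
```

A large ratio means the pair shares many identical wrong answers relative to how often they disagree at all.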

Harpp & Hogan (1993): Response Similarity Index

Harpp and Hogan (1993) suggested a response similarity index defined as EEIC / EIC. Explanation of Response Similarity Index: EEIC denotes the number of exact errors in common, or identically wrong responses; EIC is the number of errors in common. This is calculated for all pairs of examinees that the researcher wishes to compare. One advantage of this approach is […]
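
A sketch of the 1993 ratio using the two counts just defined:

```python
def hh_ratio(resp_a, resp_b, key):
    """Harpp & Hogan (1993) ratio EEIC / EIC: of the items both
    examinees got wrong, what share were wrong in the same way?"""
    triples = list(zip(resp_a, resp_b, key))
    eeic = sum(a == b and a != k for a, b, k in triples)
    eic = sum(a != k and b != k for a, b, k in triples)
    return eeic / eic if eic else 0.0
```

Values near 1 mean nearly every shared error was identically wrong, which is the pattern copying tends to produce.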

Bellezza & Bellezza (1989): Error Similarity Analysis

This index evaluates error similarity analysis (ESA), namely estimating the probability that a given pair of examinees would have the same exact errors in common (EEIC), given the total number of errors they have in common (EIC) and the aggregated probability P of selecting the same distractor. Bellezza and Bellezza utilize the notation of k = EEIC […]
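
Under the description above this is a binomial tail probability: the chance of k or more identical errors out of EIC shared errors when matches occur independently with probability P. A sketch:

```python
from math import comb

def esa_tail(k, eic, p):
    """P(X >= k) for X ~ Binomial(eic, p): the probability of k or
    more exact errors in common, given eic errors in common and
    probability p of selecting the same distractor by chance."""
    return sum(comb(eic, i) * p**i * (1 - p)**(eic - i)
               for i in range(k, eic + 1))

# 8 identical errors out of 10 shared errors, with p = 0.25:
print(esa_tail(8, 10, 0.25))  # ~0.0004, an unlikely match by chance
```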

Frary, Tideman, & Watts (1977): g2 Collusion Index

The Frary, Tideman, and Watts (1977) g2 index is a collusion (cheating) detection index. It is a standardization that evaluates the number of common responses between two examinees in the typical standardized format: observed common responses minus the expected common responses, divided by the expected standard deviation of common responses. It compares all pairs […]
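
The standardization itself is simple once per-item match probabilities are in hand; how those probabilities are estimated is what distinguishes g2 from the IRT-based indices below, so they are taken as given in this sketch:

```python
from math import sqrt

def standardized_index(observed_matches, match_probs):
    """(observed - expected) / SD of the number of common responses,
    where match_probs[i] is the estimated probability that the pair
    matches on item i (assumed supplied by the particular method).
    Treats items as independent Bernoulli trials."""
    expected = sum(match_probs)
    sd = sqrt(sum(p * (1 - p) for p in match_probs))
    return (observed_matches - expected) / sd
```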

Wollack (1997) Omega Collusion Index

Wollack (1997) adapted the standardized collusion index of Frary, Tideman, and Watts (1977), g2, to item response theory (IRT) and produced the Wollack Omega (ω) index. The graphics in the original article by Frary, Tideman, and Watts (1977) were crude classical approximations of an item response function, so Wollack replaced the […]
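
A sketch of an omega-style computation, assuming the nominal-response-model probabilities have already been estimated elsewhere; nrm_probs[i][option] here is an assumed structure giving the probability that the alleged copier selects that option on item i:

```python
from math import sqrt

def omega(copier_resp, source_resp, nrm_probs):
    """Omega-style index: standardized count of matches with the
    source's responses, with expectations taken from an IRT
    (nominal response) model rather than a classical approximation."""
    match_probs = [nrm_probs[i][s] for i, s in enumerate(source_resp)]
    observed = sum(c == s for c, s in zip(copier_resp, source_resp))
    expected = sum(match_probs)
    sd = sqrt(sum(p * (1 - p) for p in match_probs))
    return (observed - expected) / sd
```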

Wesolowsky (2000) Zjk Collusion Detection Index

Wesolowsky’s (2000) Zjk is a collusion detection index designed to look for exam cheating by finding similar response vectors amongst examinees. It is in the same family as g2 and Wollack’s ω. Like those, it creates a standardized statistic by evaluating the difference between observed and expected common responses and dividing by a standard error. […]
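
Whichever index in this family is used, it is applied to every pair of examinees, which matters when setting a flagging threshold: with n examinees there are n(n-1)/2 comparisons. A sketch of the scan, with the index function and cutoff as assumed inputs:

```python
from itertools import combinations

def flag_pairs(index_fn, responses, cutoff):
    """Apply a standardized pairwise index to all pairs of examinees
    and return those exceeding the cutoff (the cutoff should account
    for the n*(n-1)/2 implicit significance tests)."""
    flagged = []
    for (id_a, resp_a), (id_b, resp_b) in combinations(responses.items(), 2):
        z = index_fn(resp_a, resp_b)
        if z > cutoff:
            flagged.append((id_a, id_b, z))
    return flagged
```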