Entries by nthompson

Tips to Improve Assessment Security During COVID-19

The COVID-19 pandemic is drastically changing all aspects of our world, and one of the most impacted areas is educational assessment and other types of assessment. Many organizations still deliver tests with methodologies from 50 years ago, such as putting 200 examinees in a large room with desks, paper exams, and a pencil. COVID-19 is forcing […]

Responses in Common (RIC)

This collusion detection (test cheating) index simply calculates the number of responses in common between a given pair of examinees.  For example, a response in common occurs when both answered ‘B’ to a certain item, regardless of whether ‘B’ was correct or incorrect.  There is no probabilistic evaluation that can be used to flag examinees.  However, it could be of good […]
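As a quick illustration, the RIC count is just a tally of matching responses; here is a minimal Python sketch (the function name and letter-string encoding are only for this example):

```python
def ric(resp_a, resp_b):
    """Responses in Common: count items where two examinees chose the same option,
    regardless of whether that option was correct."""
    return sum(a == b for a, b in zip(resp_a, resp_b))

# Two examinees on a 5-item test; they match on items 1, 2, and 4
print(ric("BCADB", "BCBDA"))  # 3
```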

Exact Errors in Common (EEIC) – Collusion Detection

This extremely basic collusion detection index simply calculates the number of exact errors in common between a given pair of examinees: items on which both gave the same incorrect answer.  For example, suppose two examinees each got 80/100 correct on a test. Of the 20 each got wrong, they had 10 in common. Of those, they gave the same wrong answer on 5 items. This means […]
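The EEIC tally needs the answer key as well as the two response strings; a minimal sketch (names and encoding assumed for illustration):

```python
def eeic(resp_a, resp_b, key):
    """Exact Errors in Common: items where both examinees gave the SAME wrong answer."""
    return sum(a == b and a != k for a, b, k in zip(resp_a, resp_b, key))

# Item 2: both chose 'B' where the key is 'A' -- one exact error in common
print(eeic("ABBBC", "ABBBA", "AABBC"))  # 1
```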

Errors in Common (EIC) exam cheating index

This exam cheating index (collusion detection) simply calculates the number of errors in common between a given pair of examinees.  For example, if two examinees each got 80/100 correct, meaning 20 errors, and they answered all of the same questions wrongly, the EIC would be 20. If they both scored 80/100 but had only 10 wrong questions […]
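Unlike EEIC, the EIC count does not require the wrong answers to match, only that both examinees missed the item; a minimal sketch (illustrative names and encoding):

```python
def eic(resp_a, resp_b, key):
    """Errors in Common: items that BOTH examinees answered incorrectly,
    whether or not they chose the same wrong option."""
    return sum(a != k and b != k for a, b, k in zip(resp_a, resp_b, key))

# Items 2 and 3 are wrong for both (different wrong answers on item 2 still count)
print(eic("ABBA", "ACBA", "AAAA"))  # 2
```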

Harpp, Hogan, and Jennings (1996): Response Similarity Index

Harpp, Hogan, and Jennings (1996) revised their Response Similarity Index somewhat from Harpp and Hogan (1993). This produced a new equation for a statistic to detect collusion and other forms of exam cheating: EEIC/D, where EEIC denotes the number of exact errors in common or identically wrong, D is the number of items with a different […]

Harpp and Hogan (1993) Response Similarity Index

Harpp and Hogan (1993) suggested a response similarity index defined as EEIC/EIC, where EEIC denotes the number of exact errors in common or identically wrong, and EIC is the number of errors in common. This is calculated for all pairs of examinees that the researcher wishes to compare.  One advantage of this approach is that it […]
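Building on the EEIC and EIC counts defined above, the 1993 ratio can be sketched as follows (function names and the zero-EIC guard are this example's own choices, not part of the original):

```python
def harpp_hogan_1993(resp_a, resp_b, key):
    """Harpp & Hogan (1993) response similarity: EEIC / EIC, i.e. the share of
    common errors on which the pair chose the SAME wrong answer."""
    eeic = sum(a == b and a != k for a, b, k in zip(resp_a, resp_b, key))
    eic = sum(a != k and b != k for a, b, k in zip(resp_a, resp_b, key))
    return eeic / eic if eic else 0.0

# Three common errors, two with identical wrong answers -> ratio 2/3
print(harpp_hogan_1993("ABBCA", "ABBDA", "AAAAA"))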

Bellezza & Bellezza (1989): Error Similarity Analysis

This index evaluates error similarity analysis (ESA), namely estimating the probability that a given pair of examinees would have the same exact errors in common (EEIC), given the total number of errors they have in common (EIC) and the aggregated probability P of selecting the same distractor.  Bellezza and Bellezza utilize the notation of k=EEIC […]
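Under that setup, the tail probability of observing at least k identical wrong answers out of N common errors follows a binomial model; a minimal sketch (the aggregated probability P is an assumed input here, not Bellezza and Bellezza's estimator for it):

```python
from math import comb

def esa_probability(k, n, p):
    """Probability of AT LEAST k exact errors in common (EEIC) out of n errors in
    common (EIC), under a binomial model where p is the aggregated probability
    of two examinees independently selecting the same distractor."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# e.g. 5 identical wrong answers among 10 common errors, with p = 1/3
print(esa_probability(5, 10, 1/3))
```

A small resulting probability flags the pair as more similar than chance would predict.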

Frary, Tideman, and Watts (1977): g2 collusion index

The Frary, Tideman, and Watts (1977) g2 index is a collusion (cheating) detection index, which is a standardization that evaluates the number of common responses between two examinees in the typical standardized format: observed common responses minus the expectation of common responses, divided by the expected standard deviation of common responses.  It compares all pairs of […]
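The standardization itself is straightforward once per-item match probabilities are in hand; a generic sketch in the spirit of g2 (how those probabilities are estimated is the substance of Frary et al.'s method, so here they are simply assumed inputs):

```python
from math import sqrt

def standardized_collusion_index(matches, match_probs):
    """(observed matches - expected matches) / SD of matches, where match_probs[i]
    is the chance probability that this pair gives the same response on item i.
    Estimating those probabilities is the core of the actual g2 method; they are
    assumed inputs in this sketch."""
    expected = sum(match_probs)
    sd = sqrt(sum(p * (1 - p) for p in match_probs))
    return (matches - expected) / sd

# 4 observed matches where chance predicts 2 with SD 1 -> index of 2.0
print(standardized_collusion_index(4, [0.5, 0.5, 0.5, 0.5]))  # 2.0
```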

Wollack Omega – Wollack (1997) ω

Wollack (1997) adapted the standardized collusion index of Frary, Tideman, and Watts (1977) g2 to item response theory (IRT) and produced the Wollack Omega (ω) index.  It is clear that the graphics in the original article by Frary et al. (1977) were crude classical approximations of an item response function, so Wollack replaced the probability […]
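To show the IRT twist, here is a loose sketch: the real ω derives each match probability from the nominal response model for the specific option the source examinee chose, whereas this example substitutes a simple 2PL correct-response probability as a stand-in, so it conveys the structure rather than Wollack's actual estimator:

```python
from math import exp, sqrt

def p_2pl(theta, a, b):
    """2PL item response function: probability of a correct response for
    ability theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + exp(-a * (theta - b)))

def omega_sketch(matches, theta, items):
    """Standardize observed matches against IRT-based expectations, in the
    spirit of Wollack's omega. `items` is a list of (a, b) parameter pairs;
    the actual omega uses nominal-response-model probabilities instead."""
    probs = [p_2pl(theta, a, b) for a, b in items]
    expected = sum(probs)
    sd = sqrt(sum(p * (1 - p) for p in probs))
    return (matches - expected) / sd

# 3 matches on 4 items, each with chance probability 0.5 at theta = 0
print(omega_sketch(3, 0.0, [(1.0, 0.0)] * 4))  # 1.0
```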

Wesolowsky (2000) Zjk collusion detection index

Wesolowsky’s (2000) index is a collusion detection index, designed to look for exam cheating by finding similar response vectors amongst examinees. It is in the same family as g2 and Wollack’s ω.  Like those, it creates a standardized statistic by evaluating the difference between observed and expected common responses and dividing by a standard error.  […]