
Exact Errors in Common (EEIC) – collusion detection


Exact Errors in Common (EEIC) is an extremely basic collusion detection index that simply counts the number of items on which a given pair of examinees selected the same incorrect response.

For example, suppose two examinees each got 80/100 correct on a test. Of the 20 items each got wrong, 10 were in common. On 5 of those, they gave the same wrong answer, so the EEIC would be 5. Why does this index provide evidence of collusion? Well, if you and I both get 20 items wrong on a test (the same score), that's not going to raise any eyebrows. But what if we got the same 20 items wrong? A little more concerning. And what if we gave the exact same wrong answer on all 20? Definitely cause for concern!
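Here is what that calculation looks like in practice. This is a minimal sketch in Python; the function name and data layout are illustrative assumptions, not the SIFT implementation.

```python
# Minimal sketch of EEIC for one pair of examinees (illustrative, not SIFT's code).
# Responses are the selected options (e.g., "A"-"D"); `key` holds the correct options.

def eeic(responses_1, responses_2, key):
    """Count items where both examinees chose the same *wrong* answer."""
    return sum(
        1
        for r1, r2, k in zip(responses_1, responses_2, key)
        if r1 == r2 and r1 != k
    )

# Toy 5-item test: the pair shares the same wrong answer on items 1 and 3.
key = ["A", "B", "C", "D", "A"]
examinee_1 = ["B", "B", "D", "D", "A"]
examinee_2 = ["B", "C", "D", "D", "A"]
print(eeic(examinee_1, examinee_2, key))  # 2
```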

There is no probabilistic evaluation that can be used to flag examinees. However, the index can be of good use from a descriptive or investigative perspective. Because it is of limited use by itself, it has been incorporated into more advanced indices, such as that of Harpp, Hogan, and Jennings (1996).
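One commonly described way to build on EEIC, in the spirit of the Harpp-Hogan line of work, is to divide it by the number of items the pair answered differently, so that pairs with many shared errors and few disagreements stand out. Treat the exact definition below as an assumption for illustration; consult the original papers before using anything like it operationally.

```python
def exact_error_ratio(responses_1, responses_2, key):
    """EEIC divided by the number of items answered differently.

    This ratio form is often attributed to the Harpp-Hogan indices, but the
    definition here is an illustrative assumption, not taken from the 1996 paper.
    """
    exact_errors = sum(
        1 for r1, r2, k in zip(responses_1, responses_2, key)
        if r1 == r2 and r1 != k
    )
    differences = sum(1 for r1, r2 in zip(responses_1, responses_2) if r1 != r2)
    return exact_errors / differences if differences else float("inf")
```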

Note that because Exact Errors in Common is not standardized in any way, its range and a relevant flag cutoff will depend on the number of items in your test and on how much your examinees' responses vary. For a 100-item test, you might want to set the flag at 10 items. But for a 20-item test, a cutoff of 10 is obviously too high, and you might want to set it at 5 (because most examinees will probably not even make 10 errors).
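In a descriptive workflow, you would typically compute EEIC for every pair of examinees and flag those at or above whatever cutoff fits your test. A sketch under the same assumptions as above (the names and data layout are illustrative):

```python
from itertools import combinations

def flag_pairs(responses, key, cutoff):
    """Return (id1, id2, eeic) for pairs whose EEIC meets a user-chosen cutoff.

    `responses` maps examinee ID to a response vector; the cutoff is
    test-specific (see the discussion above), not a standardized value.
    """
    flagged = []
    for (id1, r1), (id2, r2) in combinations(responses.items(), 2):
        count = sum(1 for a, b, k in zip(r1, r2, key) if a == b and a != k)
        if count >= cutoff:
            flagged.append((id1, id2, count))
    return flagged

responses = {
    "ex1": ["B", "B", "D", "D", "A"],
    "ex2": ["B", "C", "D", "D", "A"],
    "ex3": ["A", "B", "C", "D", "A"],  # all correct, so never flagged
}
print(flag_pairs(responses, key=["A", "B", "C", "D", "A"], cutoff=2))
# [('ex1', 'ex2', 2)]
```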

EEIC is easy to calculate on your own, but you can also download the SIFT software for free.


