This extremely basic collusion detection index simply counts the number of identical incorrect responses between a given pair of examinees. For example, suppose two examinees each got 80/100 correct on a test. Of the 20 items each got wrong, 10 were wrong for both of them, and on 5 of those they gave the same wrong answer. The EEIC for this pair would be 5. Why does this index provide evidence of collusion? Well, if you and I both get 20 items wrong on a test (same score), that's not going to raise any eyebrows. But what if we get the same 20 items wrong? A little more concerning. What if we gave the exact same wrong answers on all 20? Definitely cause for concern!
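The count described above is straightforward to compute. Here is a minimal sketch in Python, assuming each examinee's responses and the answer key are stored as strings of option letters (the function name `eeic` and the toy data are my own illustration, not part of any published implementation):

```python
def eeic(resp_a, resp_b, key):
    """Count items where both examinees answered incorrectly AND identically."""
    return sum(
        1
        for a, b, k in zip(resp_a, resp_b, key)
        if a != k and b != k and a == b
    )

# Toy example: 5-item test with answer key "ABCDA".
key = "ABCDA"
alice = "ABCCB"  # wrong on items 4 and 5
bob = "ABCCC"    # wrong on items 4 and 5
# Both chose "C" on item 4; their item-5 errors differ (B vs. C).
print(eeic(alice, bob, key))  # → 1
```

Note that items where only one examinee erred, or where both erred but chose different distractors, contribute nothing to the count.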
There is no probabilistic evaluation attached to EEIC that can be used to flag examinees. However, it can be quite useful from a descriptive or investigative perspective. Because it is of limited use by itself, it has been incorporated into more advanced indices, such as that of Harpp, Hogan, and Jennings (1996).
Note that because EEIC is not standardized in any way, its range and the relevant flag cutoff will depend on the number of items in your test and on how much your examinee responses vary. For a 100-item test, you might set the flag at 10 items. But for a 20-item test, that cutoff is obviously irrelevant, and you might want to set it at 5 (because most examinees will probably not even make more than 10 errors).
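To make the cutoff idea concrete, here is a hedged sketch that screens every pair of examinees and flags those at or above a chosen EEIC threshold. The function names, the toy 10-item data, and the cutoff value of 2 are all illustrative assumptions; in practice the cutoff must be tuned to your test length and response variability, as discussed above:

```python
from itertools import combinations

def eeic(resp_a, resp_b, key):
    """Count items where both examinees answered incorrectly AND identically."""
    return sum(1 for a, b, k in zip(resp_a, resp_b, key)
               if a != k and b != k and a == b)

def flag_pairs(responses, key, cutoff):
    """Return (id_a, id_b, eeic) for every examinee pair at or above the cutoff."""
    return [
        (i, j, score)
        for (i, ra), (j, rb) in combinations(responses.items(), 2)
        if (score := eeic(ra, rb, key)) >= cutoff
    ]

# Toy 10-item test; ex2 and ex3 share two identical wrong answers.
key = "ABCDABCDAB"
responses = {
    "ex1": "ABCDABCDAB",  # perfect score, shares no errors with anyone
    "ex2": "ABCDACCCAB",  # wrong on items 6 and 8, both answered "C"
    "ex3": "ABCDACCCAB",  # same two errors with the same wrong answers
}
print(flag_pairs(responses, key, cutoff=2))  # → [('ex2', 'ex3', 2)]
```

Since every pair is compared, this screen is O(n²) in the number of examinees, which is fine for a classroom but worth noting for very large testing programs.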
EEIC is easy enough to calculate yourself, but you can also download the SIFT software for free.