# Wollack 1997 Omega Collusion Index

Wollack (1997) adapted the standardized collusion index g2 of Frary, Tideman, and Watts (1977) to item response theory (IRT), producing the Wollack Omega (ω) index.  The graphics in the original Frary et al. (1977) article make clear that g2's probabilities were crude classical approximations of an item response function, so Wollack replaced the classical probability calculations with probabilities from IRT.

The probabilities can be calculated with any IRT model.  Wollack suggested Bock's Nominal Response Model, since it is designed for multiple-choice data, but that model is rarely used in practice and very few IRT software packages support it.  SIFT instead supports the common dichotomous models: 1-parameter, 2-parameter, 3-parameter, and Rasch.
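As a concrete illustration of the dichotomous models mentioned above, here is a minimal sketch of the 3-parameter logistic (3PL) item response function; the 2PL and Rasch/1PL curves fall out as special cases.  The function name and interface are my own for illustration, not SIFT's:

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model:
    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b))).
    Setting c = 0 gives the 2PL; additionally fixing a gives the Rasch/1PL."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# An examinee at theta = 0 on an item with a = 1, b = 0, c = 0
# has a 50% chance of answering correctly.
print(p_3pl(0.0, 1.0, 0.0, 0.0))  # 0.5
```

Under ω, probabilities like these (or their nominal-model analogues) feed the expected-match calculation described below.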

Because ω is based on IRT, it requires additional input.  You must include the IRT item parameters on the control tab, as well as examinee theta values on the examinee tab.  If any of that input is missing, the ω output will not be produced.

The ω index is defined as

$$\omega = \frac{c_{ab} - \sum_{i=1}^{n} P_i(\theta_a)}{\sqrt{\sum_{i=1}^{n} P_i(\theta_a)\,\bigl(1 - P_i(\theta_a)\bigr)}}$$

where $P_i(\theta_a)$ is the probability that an examinee with ability $\theta_a$ selects the response that examinee b selected on item i, and $c_{ab}$ is the observed RIC (responses in common).  Summing these probabilities over the items gives the number of responses a copier with $\theta_a$ would be expected to share with the source; that sum can be interpreted as the expected RIC.
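The standardization described above can be sketched in a few lines.  This is my own illustrative implementation, not SIFT's code; it assumes you have already computed, for each item, the probability that the copier would give the source's response:

```python
import math

def omega(match_probs, ric):
    """Wollack-style omega: standardize the observed responses-in-common
    (RIC) against its expectation under the no-collusion assumption.

    match_probs -- for each item, P(copier with theta_a gives the
                   response that source b actually gave)
    ric         -- observed count of identical responses (c_ab)
    """
    expected = sum(match_probs)                       # expected RIC
    variance = sum(p * (1.0 - p) for p in match_probs)  # sum of Bernoulli variances
    return (ric - expected) / math.sqrt(variance)

# Hypothetical 4-item example: each match has probability 0.5,
# so 2 matches are expected; observing all 4 gives omega = 2.0.
print(omega([0.5, 0.5, 0.5, 0.5], 4))  # 2.0
```

The numerator compares observed to expected agreement, and the denominator is the standard deviation of the match count, which is what puts ω on a z-metric.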

Note: This uses all responses, not just errors.

How to interpret ω?  The value will be higher when the copier has more responses in common with the source than we would expect from a person of that (often lower) ability.  The index is standardized onto a z-metric, and can therefore easily be converted to whatever probability threshold you wish to use.

A standardized value of 3.09 is the default cutoff for g2, ω, and Zjk because it corresponds to an upper-tail probability of 0.001.  A value beyond 3.09, then, represents an event that is expected to be very rare under the assumption of no collusion.
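Since ω is on a z-metric, converting a cutoff to a probability is just the upper tail of the standard normal distribution.  A quick check of the 3.09 default, using only the standard library:

```python
import math

def upper_tail_p(z):
    """One-tailed p-value for a standard normal z-score,
    computed via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# The default flag cutoff of 3.09 corresponds to p of about 0.001.
print(round(upper_tail_p(3.09), 4))  # 0.001
```

Any other desired Type I error rate can be converted to a cutoff the same way in reverse (e.g. with `statistics.NormalDist().inv_cdf`).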

Nathan Thompson, PhD, is CEO and Co-Founder of Assessment Systems Corporation (ASC). He is a psychometrician, software developer, author, researcher, and evangelist for AI and automation. His mission is to elevate the profession of psychometrics by using software to automate psychometric work like item review, job analysis, and Angoff studies, so we can focus on more innovative work. His core goal is to improve assessment throughout the world.

Nate was originally trained as a psychometrician, with an honors degree from Luther College (a triple major in Math, Psychology, and Latin) and then a PhD in Psychometrics from the University of Minnesota. He has since worked in multiple roles in the testing industry, including item writer, test development manager, essay test marker, consulting psychometrician, software developer, project manager, and business leader. He is also cofounder and Membership Director of the International Association for Computerized Adaptive Testing (iacat.org). He has published 100+ papers and presentations, but his favorite remains https://scholarworks.umass.edu/pare/vol16/iss1/1/.
