The Nedelsky method is an approach to setting the cutscore of an exam. Originally suggested by Nedelsky (1954), it was an early attempt to bring a quantitative, rigorous procedure to the process of standard setting. Quantitative approaches are needed to reduce the arbitrariness and subjectivity that would otherwise dominate the process of setting a cutscore. The most obvious and common example of this is simply setting the cutscore at a round number like 70%, regardless of the difficulty of the test or the ability level of the examinees. It is for this reason that a cutscore must be set with a defensible method, such as the Nedelsky approach, to hold up legally or meet accreditation standards.
How to implement the Nedelsky method
The first step, as in several other standard-setting methods, is to gather a panel of subject matter experts (SMEs). The next step is for the panel to discuss the concept of a minimally competent candidate (MCC): the type of candidate that should just barely pass this exam, sitting on the borderline of competence. The panel then reviews a test form, paying specific attention to each of the items on the form. For every item, each rater estimates the number of options that an MCC would be able to eliminate. This translates into a probability of a correct response, assuming that the candidate guesses at random among the remaining options. If an MCC can eliminate only one option of a four-option item, they have a 1/3 = 33% chance of getting the item correct; if two, then 1/2 = 50%.
These ratings are then averaged across all items and all raters. The result represents the percentage score expected of an MCC on this test form, as defined by the panel. This makes a compelling, quantitative argument for what the cutscore should be, because we would expect anyone who is minimally competent to score at that point or higher.
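The two steps above can be sketched in a few lines of Python. The ratings here are hypothetical, and the function name is mine, not from any standard library; it is just a minimal illustration of the arithmetic.

```python
def nedelsky_probability(num_options, num_eliminated):
    """Probability that an MCC answers correctly, assuming a random
    guess among the options they could not eliminate."""
    remaining = num_options - num_eliminated
    return 1.0 / remaining

# Hypothetical ratings: ratings[rater][item] = number of options (out of 4)
# that the rater believes an MCC can eliminate on each item.
ratings = [
    [1, 2, 0, 3],   # Rater A, four 4-option items
    [2, 2, 1, 3],   # Rater B
]

# Convert each rating to a probability, then average across all
# items and all raters to obtain the recommended cutscore.
probs = [nedelsky_probability(4, e) for rater in ratings for e in rater]
cutscore = sum(probs) / len(probs)
print(f"Cutscore: {cutscore:.1%}")  # prints "Cutscore: 55.2%"
```

In practice the panel would rate far more items, but the computation is exactly this simple: one probability per rating, then a grand mean.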
Drawbacks to the Nedelsky method
This approach only works with multiple-choice items, because it depends on evaluating the probability of guessing among options. It is also a gross oversimplification. If an item has four options, there are only four possible values for the Nedelsky rating: 25%, 33%, 50%, and 100%. This is all the more striking when you consider that most items tend to have a percent-correct value between 50% and 100%, and the Nedelsky method offers no values in that range to reflect this. Obviously, more goes into answering a question than simply eliminating one or two of the distractors. This is one reason that another method is generally preferred and has superseded this one.
Nedelsky vs Modified-Angoff
The Nedelsky method has been superseded by the modified-Angoff method. The modified-Angoff method is essentially the same process, but it allows for finer variations and can be applied to other item types. It also subsumes the Nedelsky method, since a rater can still apply the Nedelsky approach within that paradigm. In fact, I often tell raters to use the Nedelsky approach as a starting point or benchmark. For example, if they think that the examinee can easily eliminate two options, but is slightly more likely to guess the correct answer of the remaining two, the rating is not 50% but rather 60%. The modified-Angoff approach also allows for a second round of ratings after discussion to increase consensus (the Delphi method). Raters can slightly adjust their ratings without being hemmed into one of only four possible values.
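The contrast can be shown in a short sketch, using the hypothetical numbers from the example above: a modified-Angoff rating can fall anywhere between chance level and 100%, while a Nedelsky rating for a four-option item is confined to four discrete values.

```python
# The only possible Nedelsky ratings for a four-option item
# (0, 1, 2, or 3 options eliminated).
nedelsky_values = [1/4, 1/3, 1/2, 1.0]

# A modified-Angoff rater can start from the Nedelsky benchmark
# (two options eliminated -> 50%) and adjust upward because the MCC
# slightly favors the correct answer of the two remaining options.
benchmark = 1/2
angoff_rating = 0.60  # hypothetical adjusted rating

# 60% lies between two Nedelsky values and is not expressible
# under the Nedelsky method.
assert benchmark in nedelsky_values
assert angoff_rating not in nedelsky_values
```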
Nathan Thompson, PhD, is CEO and Co-Founder of Assessment Systems Corporation (ASC). He is a psychometrician, software developer, author, researcher, and evangelist for AI and automation. His mission is to elevate the profession of psychometrics by using software to automate psychometric work like item review, job analysis, and Angoff studies, so we can focus on more innovative work. His core goal is to improve assessment throughout the world.
Nate was originally trained as a psychometrician, earning an honors degree from Luther College with a triple major in Math/Psych/Latin, and then a PhD in Psychometrics at the University of Minnesota. He has since worked in multiple roles in the testing industry, including item writer, test development manager, essay test marker, consulting psychometrician, software developer, project manager, and business leader. He is also cofounder and Membership Director at the International Association for Computerized Adaptive Testing (iacat.org). He has published 100+ papers and presentations, but his favorite remains https://scholarworks.umass.edu/pare/vol16/iss1/1/.