
Computerized adaptive tests (CATs) are a sophisticated method of test delivery based on item response theory (IRT). They operate by adapting both the difficulty and quantity of items seen by each examinee.

Difficulty
Most characterizations of adaptive testing focus on how item difficulty is matched to examinee ability: high-ability examinees receive more difficult items, while low-ability examinees receive easier items, which has important benefits for both the examinee and the organization. An adaptive test typically begins by delivering an item of medium difficulty; an examinee who answers correctly receives a more difficult item, while one who answers incorrectly receives an easier item. This basic algorithm continues until the test is finished, though it usually includes subalgorithms for important concerns such as content distribution and item exposure.
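The up/down logic above can be sketched in a few lines of code. This is a minimal illustration, not a production item-selection algorithm: it assumes item difficulties on an IRT logit scale, a fixed step size of 0.5 (an arbitrary choice for illustration), and a hypothetical `answer_item` callback standing in for the examinee's response.

```python
def simulate_basic_cat(item_bank, answer_item, n_items=20, step=0.5):
    """Sketch of the basic adaptive algorithm: correct -> tougher item,
    incorrect -> easier item.

    item_bank   : list of distinct item difficulties (IRT b-parameters)
    answer_item : callback taking a difficulty, returning True if correct
    """
    bank = sorted(item_bank)
    target = 0.0          # start with an item of medium difficulty
    administered = []
    for _ in range(n_items):
        # pick the unused item whose difficulty is closest to the target
        candidates = [b for b in bank if b not in administered]
        if not candidates:
            break
        item = min(candidates, key=lambda b: abs(b - target))
        administered.append(item)
        if answer_item(item):
            target += step    # correct: move toward harder items
        else:
            target -= step    # incorrect: move toward easier items
    return administered
```

A real CAT replaces the fixed step with an IRT ability estimate updated after every response, and layers content-balancing and exposure-control subalgorithms on top of the selection rule.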

Quantity
A less publicized facet of adaptation is the number of items. Adaptive tests can be designed to stop when certain psychometric criteria are reached, such as a specific level of score precision. Some examinees finish very quickly with few items; on average, an adaptive test needs only about half as many questions as a conventional test while providing at least as much accuracy. Because test length differs across examinees, such adaptive tests are referred to as variable-length. This is a massive benefit: cutting testing time in half, on average, can substantially decrease testing costs. Nevertheless, some adaptive tests use a fixed length and adapt only item difficulty. This is usually done for public relations reasons, namely to avoid complaints from examinees who feel they were treated unfairly by the CAT, even though it is arguably more fair and valid than a conventional test.
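A common precision-based termination rule stops the test once the standard error of measurement (SEM) falls below a target. The sketch below assumes the Rasch model, where item information at ability theta is p(1-p) and SEM is 1/sqrt(total information); the 0.30 SEM target and 50-item cap are illustrative assumptions, not recommendations.

```python
import math

def rasch_info(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def standard_error(theta, administered):
    """SEM = 1 / sqrt(total test information) under the Rasch model."""
    info = sum(rasch_info(theta, b) for b in administered)
    return 1.0 / math.sqrt(info) if info > 0 else float("inf")

def should_stop(theta, administered, se_target=0.30, max_items=50):
    """Variable-length termination rule: stop once the score is precise
    enough (SEM at or below target) or a safety cap on length is hit."""
    return (len(administered) >= max_items
            or standard_error(theta, administered) <= se_target)
```

Because well-matched items carry the most information, an examinee whose responses are consistent reaches the SEM target quickly, while a noisier response pattern keeps the test running longer; that is precisely the variable-length behavior described above.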

Advantages of adaptive testing

By making the test more intelligent, adaptive testing provides a wide range of benefits. Some of the well-known advantages of adaptive testing, recognized by scholarly psychometric research, are listed below. However, the development of an adaptive test is a complex process that requires substantial expertise in item response theory (IRT) and CAT simulation research. Our Ph.D. psychometricians can provide your organization with the requisite expertise to implement adaptive testing and benefit from these advantages. Contact us or read this white paper to learn more.
  • Shorter tests, anywhere from a 50% to 90% reduction, reducing cost, examinee fatigue, and item exposure
  • More precise scores: matching item difficulty to examinee ability yields more accurate ability estimates
  • More control of score precision (accuracy): CAT can ensure that all examinees are measured with comparable accuracy, making the test much more fair; traditional tests measure middle-ability examinees well but not those at the top or bottom
  • Increased efficiency
  • Greater test security, because not everyone sees the same form
  • A better experience for examinees, as they only see items relevant to them, providing an appropriate challenge
  • The better experience can lead to increased examinee motivation
  • Immediate score reporting
  • More frequent retesting is possible, minimizing practice effects
  • Individual pacing of tests; examinees move at their own speed
  • On-demand testing can reduce printing, scheduling, and other paper-based concerns
  • Storing results in a database immediately makes data management easier
  • Computerized testing facilitates the use of multimedia in items
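The precision claims in the list above can be illustrated numerically. Under the Rasch model, the SEM at an ability level is 1/sqrt(total test information). The sketch below compares a hypothetical conventional form clustered at medium difficulty against idealized adaptive forms whose items sit near each examinee's ability; the specific difficulties are invented for illustration.

```python
import math

def rasch_info(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def sem(theta, difficulties):
    """Standard error of measurement for a test at ability theta."""
    return 1.0 / math.sqrt(sum(rasch_info(theta, b) for b in difficulties))

# A conventional fixed form clustered at medium difficulty (30 items)...
fixed_form = [-0.5, -0.25, 0.0, 0.0, 0.25, 0.5] * 5

# ...measures middle-ability examinees well but extremes poorly:
for theta in (-2.0, 0.0, 2.0):
    print(f"theta={theta:+.1f}  fixed-form SEM={sem(theta, fixed_form):.2f}")

# An adaptive test instead administers items near each examinee's ability,
# so precision is roughly constant across the ability range:
for theta in (-2.0, 0.0, 2.0):
    tailored = [theta] * 30   # idealized: every item perfectly targeted
    print(f"theta={theta:+.1f}  adaptive SEM={sem(theta, tailored):.2f}")
```

Running this shows the fixed form's SEM growing substantially at theta = ±2 while the tailored forms hold it constant, which is the "same accuracy for all examinees" property claimed above.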


Computerized adaptive tests (CATs) are the future of assessment. They operate by adapting both the difficulty and number of items to each individual examinee. The development of an adaptive test is no small feat, and requires five steps integrating the expertise of test content developers, software engineers, and psychometricians. The development of a quality adaptive test is not possible without a Ph.D. psychometrician experienced in both item response theory (IRT) calibration and CAT simulation research. FastTest can provide you the psychometrician and software; if you provide test items and pilot data, we can help you quickly publish an adaptive version of your test.
Step 1: Feasibility, applicability, and planning studies. First, extensive Monte Carlo simulation research must occur, and the results formulated as business cases, to evaluate whether adaptive testing is feasible, applicable, or even possible.
Step 2: Develop item bank. An item bank must be developed to meet the specifications recommended by Step 1.
Step 3: Pretest and calibrate item bank. Items must be pilot tested on 200-1000 examinees (depending on the IRT model) and analyzed by a Ph.D. psychometrician.
Step 4: Determine specifications for final CAT. Data from Step 3 are analyzed to evaluate CAT specifications and determine the most efficient algorithms using CAT simulation software such as CATSim.
Step 5: Publish live CAT. The adaptive test is published in a testing engine capable of fully adaptive tests based on IRT. Want to learn more about our one-of-a-kind model? Click here to read the seminal article by two of our psychometricians. More adaptive testing research is available here.
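The Monte Carlo simulation research in Steps 1 and 4 can be sketched end to end: generate simulated examinees, run each through a CAT against a hypothetical item bank, and summarize score recovery and test length. This is a toy illustration, not any vendor's algorithm; it assumes the Rasch model, EAP scoring on a coarse grid, maximum-information item selection, and arbitrary illustrative settings (0.4 SEM target, 40-item cap, 100-item bank).

```python
import math
import random

def p_correct(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def eap_estimate(responses):
    """EAP ability estimate on a grid with a standard-normal prior.
    responses: list of (difficulty, correct) tuples."""
    grid = [g / 10.0 for g in range(-40, 41)]          # -4.0 .. 4.0
    post = []
    for t in grid:
        like = math.exp(-0.5 * t * t)                  # N(0, 1) prior
        for b, u in responses:
            p = p_correct(t, b)
            like *= p if u else (1.0 - p)
        post.append(like)
    total = sum(post)
    return sum(t * w for t, w in zip(grid, post)) / total

def run_cat(true_theta, bank, se_target=0.4, max_items=40, rng=random):
    """Administer one simulated CAT and return (estimate, items used)."""
    responses, used, theta_hat = [], set(), 0.0
    while len(responses) < max_items:
        # max-information selection: Rasch info peaks where b == theta_hat
        idx = min((i for i in range(len(bank)) if i not in used),
                  key=lambda i: abs(bank[i] - theta_hat))
        used.add(idx)
        b = bank[idx]
        correct = rng.random() < p_correct(true_theta, b)
        responses.append((b, correct))
        theta_hat = eap_estimate(responses)
        info = sum(p * (1.0 - p) for p in
                   (p_correct(theta_hat, d) for d, _ in responses))
        if info > 0 and 1.0 / math.sqrt(info) <= se_target:
            break                                      # precise enough: stop
    return theta_hat, len(responses)

def monte_carlo_study(n_simulees=200, bank_size=100, seed=1):
    """Simulate many examinees; report RMSE of scores and mean test length."""
    rng = random.Random(seed)
    bank = [rng.uniform(-3.0, 3.0) for _ in range(bank_size)]
    sq_err, lengths = [], []
    for _ in range(n_simulees):
        theta = rng.gauss(0.0, 1.0)
        est, n = run_cat(theta, bank, rng=rng)
        sq_err.append((est - theta) ** 2)
        lengths.append(n)
    return math.sqrt(sum(sq_err) / n_simulees), sum(lengths) / n_simulees
```

In a real feasibility study, results like the RMSE and average length from `monte_carlo_study` would be compared across candidate banks, stopping rules, and selection algorithms to build the business case in Step 1 and to finalize the specifications in Step 4.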

What do I need for adaptive testing?

Minimum requirements:

  • A large item bank piloted on at least 500 examinees
  • 1,000 examinees per year
  • Specialized IRT calibration and CAT simulation software
  • Staff with a PhD in psychometrics or an equivalent level of experience. Or, leverage our internationally recognized expertise in the field.
  • Items (questions) that can be scored objectively correct/incorrect in real time
  • Item banking system and CAT delivery platform
  • Financial resources: Because it is so complex, development of a CAT will cost at least $10,000 (USD) — but if you are testing large volumes of examinees, it will be a significant positive investment.


Visit the links below to learn more about adaptive testing.  We also recommend that you first read this landmark article by our industry-leading psychometricians and our white paper detailing the requirements of CAT.  Additionally, you will likely be interested in this article on producing better measurements with CAT.

Nathan Thompson earned his PhD in Psychometrics from the University of Minnesota, with a focus on computerized adaptive testing. His undergraduate degree was from Luther College, with a triple major of Mathematics, Psychology, and Latin. He is primarily interested in the use of AI and software automation to augment and replace the work done by psychometricians, an interest that has given him extensive experience in software design and programming. Dr. Thompson has published over 100 journal articles and conference presentations, but his favorite remains https://pareonline.net/getvn.asp?v=16&n=1.