
What Are Cognitive Diagnostic Models?

Cognitive diagnostic models (CDMs) are a psychometric framework designed to enhance how tests are structured and scored. Instead of providing just an overall test score, CDMs aim to generate a detailed profile of an examinee’s mastery of specific skills. This approach offers deeper insights into individual strengths and weaknesses, making it particularly useful in educational and psychological assessments.

CDMs have seen significant growth in research and application over the past decade, even though their mathematical foundations trace back to Macready and Dayton (1977). Their rising popularity stems from the need for more precise assessments in many contexts where a single overall score is insufficient. For instance, in formative educational assessments, understanding a student’s specific strengths and weaknesses is crucial for providing meaningful feedback. Conversely, professional certification or licensure exams often rely on a single pass/fail score derived from overall performance.

Understanding Cognitive Diagnostic Models: A Different Approach

Since the 1980s, Item Response Theory (IRT), also known as latent trait theory, has been the dominant psychometric paradigm. CDMs, however, belong to a distinct framework called latent class theory. Rather than placing each examinee on a single continuous scale, latent class theory classifies examinees into groups; in the case of CDMs, those groups are defined by which of several attributes or skills an examinee has mastered.

The ultimate goal with CDMs isn’t to produce a single numerical score but to develop a comprehensive profile indicating which skills an examinee has mastered and which they haven’t. These skills, often related to cognitive abilities, help identify areas of strength and weakness. This diagnostic capability makes CDMs especially valuable for formative assessments, where targeted feedback can foster improvement.

 

A Practical Example: Fractions in Mathematics

To illustrate how CDMs work, let’s consider a formative assessment designed to evaluate students’ understanding of fractions. Imagine the curriculum focuses on these distinct skills:

  • Finding the lowest common denominator
  • Adding fractions
  • Subtracting fractions
  • Multiplying fractions
  • Dividing fractions
  • Converting mixed numbers to improper fractions

Now, suppose one of the questions on the test is:

What is 2 3/4 + 1 1/2?

Answering this question requires mastery of three skills:

  1. Finding the lowest common denominator
  2. Adding fractions
  3. Converting mixed numbers to improper fractions
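
Working the item through shows each skill in action: convert the mixed numbers (2 3/4 = 11/4 and 1 1/2 = 3/2), find the lowest common denominator (3/2 = 6/4), then add (11/4 + 6/4 = 17/4, or 4 1/4). A student who has not mastered any one of these three skills is likely to get the item wrong, even if the other two are solid.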

Researchers use a tool called the Q-Matrix to map out which skills each question assesses. Here’s a simplified example:

 

| Item   | Find the lowest common denominator | Add fractions | Subtract fractions | Multiply fractions | Divide fractions | Convert mixed number to improper fraction |
|--------|------------------------------------|---------------|--------------------|--------------------|------------------|-------------------------------------------|
| Item 1 | X                                  | X             |                    |                    |                  | X                                         |
| Item 2 | X                                  |               | X                  |                    |                  |                                           |
| Item 3 |                                    |               |                    | X                  |                  | X                                         |
| Item 4 |                                    |               |                    |                    | X                | X                                         |
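
In code, a Q-Matrix is simply a binary matrix with one row per item and one column per skill. Below is a minimal sketch in Python; the rows mirror the illustrative table above, which is a made-up example rather than a real assessment.

```python
# Skills, in the same order as the Q-matrix columns
SKILLS = [
    "Find the lowest common denominator",
    "Add fractions",
    "Subtract fractions",
    "Multiply fractions",
    "Divide fractions",
    "Convert mixed number to improper fraction",
]

# One row per item; a 1 means the item requires that skill
Q_MATRIX = [
    [1, 1, 0, 0, 0, 1],  # Item 1: 2 3/4 + 1 1/2
    [1, 0, 1, 0, 0, 0],  # Item 2
    [0, 0, 0, 1, 0, 1],  # Item 3
    [0, 0, 0, 0, 1, 1],  # Item 4
]

def skills_required(item_index):
    """Names of the skills an item requires, according to the Q-matrix."""
    return [skill for skill, q in zip(SKILLS, Q_MATRIX[item_index]) if q == 1]

print(skills_required(0))  # Item 1 -> lowest common denominator, adding, converting
```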

 

How Do You Determine an Examinee’s Skill Profile?

This is where cognitive diagnostic models shine. The term models is plural because there are several types of CDMs available—just as there are various models within IRT, such as the Rasch model, 2-parameter model, and generalized partial credit model. The choice of CDM depends on the test’s characteristics and the specific needs of the assessment.

One of the simplest CDMs is the DINA (“deterministic inputs, noisy ‘and’ gate”) model. This model uses two parameters for each item:

  • Slippage (s): The likelihood that a student who has the required skills answers the item incorrectly.
  • Guessing (g): The probability that a student without the necessary skills answers the item correctly.
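
Putting the two parameters together: under DINA, an examinee who has mastered every skill an item requires answers correctly with probability 1 - s, while an examinee missing at least one required skill answers correctly with probability g. Here is a minimal sketch of that item response function in Python; the parameter values in the usage lines are illustrative, not calibrated from real data.

```python
def dina_probability(profile, q_row, s, g):
    """Probability of a correct response to one item under the DINA model.

    profile : 0/1 mastery indicators for each skill (the examinee)
    q_row   : 0/1 Q-matrix entries for the item (skills required)
    s       : slipping parameter (has the skills, still answers incorrectly)
    g       : guessing parameter (lacks a skill, still answers correctly)
    """
    # The "and gate": eta is 1 only when every required skill is mastered
    eta = all(a >= q for a, q in zip(profile, q_row))
    return (1 - s) if eta else g

# Item 1 requires the lowest common denominator, adding, and converting skills
q_item1 = [1, 1, 0, 0, 0, 1]
print(dina_probability([1, 1, 0, 0, 0, 1], q_item1, s=0.10, g=0.20))  # 0.90
print(dina_probability([1, 1, 0, 0, 0, 0], q_item1, s=0.10, g=0.20))  # 0.20
```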

The process of determining a skill profile involves complex mathematical calculations based on maximum likelihood estimation. The steps include:

  1. Listing all possible skill profiles.
  2. Calculating the likelihood of each profile using item parameters and the examinee’s responses.
  3. Selecting the profile with the highest likelihood.
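
Here is a minimal sketch of those three steps for a single examinee, reusing dina_probability and Q_MATRIX from the sketches above; the responses and item parameters are illustrative.

```python
from itertools import product

def estimate_profile(responses, q_matrix, slip, guess):
    """Maximum-likelihood skill profile for one examinee under the DINA model.

    responses   : 0/1 scored responses, one per item
    q_matrix    : list of 0/1 rows, one per item
    slip, guess : per-item slipping and guessing parameters
    """
    n_skills = len(q_matrix[0])
    likelihoods = {}
    # Step 1: list every possible skill profile (2^K of them)
    for profile in product([0, 1], repeat=n_skills):
        # Step 2: likelihood of the observed responses given this profile
        likelihood = 1.0
        for x, q_row, s, g in zip(responses, q_matrix, slip, guess):
            p = dina_probability(profile, q_row, s, g)
            likelihood *= p if x == 1 else (1 - p)
        likelihoods[profile] = likelihood
    # Step 3: select the profile with the highest likelihood
    best = max(likelihoods, key=likelihoods.get)
    return best, likelihoods

responses = [1, 1, 0, 0]              # scores on Items 1-4
slip, guess = [0.1] * 4, [0.2] * 4    # illustrative item parameters
best_profile, likelihoods = estimate_profile(responses, Q_MATRIX, slip, guess)
```

With only four items and six skills, this toy example cannot distinguish all 64 possible profiles, which is why operational CDM assessments include many items per skill.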

Estimating item parameters is computationally intensive. While IRT parameter estimation can be done using software like Xcalibre, CDMs require more specialized tools such as Mplus or R.

Beyond simply identifying the most likely skill profile, CDMs can also provide the probability that an examinee has mastered each specific skill. This level of diagnostic detail is invaluable in contexts like formative assessment, where personalized feedback can drive improvement.
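
Continuing the sketch above, one simple way to obtain those per-skill probabilities is to treat the profile likelihoods as posterior weights (assuming a flat prior over profiles) and add up the weight of every profile in which a given skill is mastered.

```python
def skill_mastery_probabilities(likelihoods):
    """Posterior probability of mastery for each skill, assuming a flat prior."""
    total = sum(likelihoods.values())
    n_skills = len(next(iter(likelihoods)))
    return [
        sum(lik for profile, lik in likelihoods.items() if profile[k] == 1) / total
        for k in range(n_skills)
    ]

for skill, prob in zip(SKILLS, skill_mastery_probabilities(likelihoods)):
    print(f"{skill}: {prob:.2f}")
```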

How Can You Implement Cognitive Diagnostic Models?

To implement CDMs effectively, start by analyzing your data to assess how well the models fit your test items. As mentioned earlier, software such as Mplus or R (for example, the CDM and GDINA packages) can help with this analysis.

Publishing a fully operational assessment that uses CDMs for scoring is a more challenging task. Most CDM-based tests are proprietary and developed by large educational companies that employ psychometricians to create and refine their assessments. These companies often provide banks of formative assessments for schools, particularly for students in grades 3–12.

If you’re interested in developing your own CDM-based assessments, options are currently limited. However, platforms like FastTest can support you in test development, delivery, and analytics.

Ready to Learn More?

If you’re eager to dive deeper into cognitive diagnostic models, here are some excellent resources:

  • Alan Huebner offers a fascinating article on adaptive testing using the DINA model, with an informative introduction to CDMs.
  • Jonathan Templin, a leading expert from the University of Iowa, provides fantastic resources on his website.
  • A free PDF book by Rupp, Templin, and Henson is available at Mindful Measurement.
  • For a more comprehensive understanding, check out this highly recommended textbook on CDMs.

CDMs offer powerful tools for more nuanced assessments, allowing educators and researchers to gain detailed insights into examinees’ cognitive skills. Whether you’re new to this approach or looking to implement it in your testing framework, exploring CDMs can significantly enhance the value of your assessments.



Nathan Thompson, PhD

Nathan Thompson earned his PhD in Psychometrics from the University of Minnesota, with a focus on computerized adaptive testing. His undergraduate degree was from Luther College, where he triple-majored in Mathematics, Psychology, and Latin. He is primarily interested in using AI and software automation to augment and replace the work done by psychometricians, which has given him extensive experience in software design and programming. Dr. Thompson has published over 100 journal articles and conference presentations, but his favorite remains https://scholarworks.umass.edu/pare/vol16/iss1/1/.