
10 Ways To Improve Assessment With Psychometrics


Education, to me, is the never-ending cycle of instruction and assessment. This can range from extremely small scale (watching a YouTube video on how to change a bike tire, then doing it) to large scale (teaching a 5th grade math curriculum and then assessing it nationwide). Psychometrics is the Science of Assessment – using scientific principles to make the assessment side of that equation more efficient, accurate, and defensible. How can psychometrics, especially its intersection with technology, improve assessment? Here are 10 important avenues to improve assessment with psychometrics.


Job analysis

If you are assessing anything job-related, from pre-employment screening tests of basic skills to a nationwide licensure exam for a high-profile profession, a job analysis is the essential first step. It uses a range of scientifically vetted, quantitative approaches to help you define the scope of the exam; one common output is a criticality index for each job task, as in the sketch below.
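As a hedged illustration, here is a minimal Python sketch that combines subject matter expert (SME) ratings of task frequency and importance into a criticality index and ranks the tasks. The tasks, ratings, and the frequency-times-importance weighting are invented for this example; real studies vary in their rating scales and weighting rules.

```python
# Hypothetical job-analysis summary: combine SME ratings of task frequency
# and importance (1-5 scales) into a criticality index, then rank tasks.
tasks = {
    "Interpret lab results":    {"frequency": [4, 5, 4], "importance": [5, 5, 4]},
    "Document patient history": {"frequency": [5, 4, 5], "importance": [3, 4, 3]},
    "Calibrate equipment":      {"frequency": [2, 1, 2], "importance": [4, 5, 5]},
}

def mean(xs):
    return sum(xs) / len(xs)

# Criticality = mean frequency x mean importance (one common convention).
criticality = {
    name: mean(r["frequency"]) * mean(r["importance"])
    for name, r in tasks.items()
}

for name, score in sorted(criticality.items(), key=lambda kv: -kv[1]):
    print(f"{score:5.1f}  {name}")
```

The ranked list then feeds directly into the exam blueprint: the most critical tasks receive the most items.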


Standard-setting studies

If a test has a cutscore, you need a defensible method to set it. Simply selecting a round number like 70% is asking for disaster. The scientific literature offers a number of approaches that improve this process, including the Angoff method and the Contrasting Groups method; the Angoff calculation itself is simple, as the sketch below shows.
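Here is a minimal Python sketch of the modified Angoff method, with invented ratings: each subject matter expert estimates the proportion of minimally competent candidates who would answer each item correctly, and the recommended cutscore is the sum of the item-level mean ratings.

```python
# Modified-Angoff sketch: rows are SMEs, columns are items; each cell is the
# estimated proportion of minimally competent candidates answering correctly.
ratings = [
    [0.70, 0.55, 0.80, 0.60],
    [0.65, 0.60, 0.85, 0.55],
    [0.75, 0.50, 0.90, 0.65],
]

n_items = len(ratings[0])
item_means = [sum(r[i] for r in ratings) / len(ratings) for i in range(n_items)]
cutscore = sum(item_means)  # expected raw score of a borderline candidate

print(f"Recommended cutscore: {cutscore:.2f} of {n_items} points "
      f"({100 * cutscore / n_items:.0f}%)")
```

A full study also examines rater agreement and typically includes discussion rounds before the final cutscore is adopted.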

Automated item generation

Newer assessment platforms include functionality for automated item generation (AIG). There are two types: template-based generation and AI text generation. A toy version of the template-based approach is sketched below.
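Here is a minimal, hypothetical Python sketch of template-based generation: a parent template plus pools of values yields many sibling items, with distractors derived from plausible student errors. The template and the error rules are illustrative assumptions.

```python
# Template-based item generation: one parent template, many sibling items.
import random

TEMPLATE = "A train travels {speed} km/h for {hours} hours. How far does it go?"

def generate_items(n, seed=0):
    rng = random.Random(seed)
    items = []
    for _ in range(n):
        speed = rng.choice([40, 60, 80, 100])
        hours = rng.choice([2, 3, 4, 5])
        key = speed * hours
        # Distractors from plausible errors: adding instead of multiplying,
        # and off-by-one on the number of hours.
        distractors = sorted({speed + hours, speed * (hours - 1), speed * (hours + 1)})
        items.append({
            "stem": TEMPLATE.format(speed=speed, hours=hours),
            "key": key,
            "distractors": distractors,
        })
    return items

for item in generate_items(3):
    print(item)
```

AI text generators, by contrast, draft free-form items from a prompt and still require human review before use.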

Workflow management

Items are the basic building blocks of the assessment. If they are not high quality, everything else is a moot point. There need to be formal processes in place to develop and review test questions, and you should be using item banking software that helps you manage this process; a toy version of such a workflow is sketched below.
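As an illustration only (not a description of any particular item banking product), here is a toy Python state machine for an item review workflow, with hypothetical statuses and transitions.

```python
# Hypothetical item-review workflow: each status may only advance to a
# defined set of next statuses, so no item skips a review step.
ALLOWED = {
    "draft":            {"content_review"},
    "content_review":   {"draft", "editorial_review"},
    "editorial_review": {"draft", "approved"},
    "approved":         {"retired"},
}

def advance(item, new_status):
    if new_status not in ALLOWED.get(item["status"], set()):
        raise ValueError(f"Cannot move {item['id']} from "
                         f"{item['status']} to {new_status}")
    item["status"] = new_status
    return item

item = {"id": "MATH-0042", "status": "draft"}
advance(item, "content_review")
advance(item, "editorial_review")
advance(item, "approved")
print(item)  # {'id': 'MATH-0042', 'status': 'approved'}
```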

Linking

Linking and equating refer to the process of statistically determining comparable scores on different forms of an exam, including tracking a scale across years and across completely different sets of items. If you have multiple test forms or track performance across time, you need this, and item response theory (IRT) provides far superior methodologies; one simple IRT method is sketched below.
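As a worked example, here is a minimal Python sketch of mean-mean IRT linking, which uses anchor items shared by two forms to estimate the linear transformation that places Form Y's theta scale onto Form X's. The parameter estimates are invented.

```python
# Mean-mean IRT linking: theta_X = A * theta_Y + B, where
#   A = mean(a_Y) / mean(a_X)  and  B = mean(b_X) - A * mean(b_Y),
# computed from the anchor items' discrimination (a) and difficulty (b).
anchor_x = {"a": [1.1, 0.9, 1.3], "b": [-0.5, 0.2, 0.8]}   # Form X estimates
anchor_y = {"a": [1.3, 1.0, 1.5], "b": [-0.9, -0.2, 0.4]}  # same items, Form Y

def mean(xs):
    return sum(xs) / len(xs)

A = mean(anchor_y["a"]) / mean(anchor_x["a"])
B = mean(anchor_x["b"]) - A * mean(anchor_y["b"])

print(f"theta_X = {A:.3f} * theta_Y + {B:.3f}")
```

More robust methods such as Stocking-Lord characteristic-curve linking follow the same idea but match test characteristic curves rather than parameter means.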

Automated test assembly

The assembly of test forms – selecting items to match blueprints – can be incredibly laborious.  That’s why we have algorithms to do it for you.  Check out TestAssembler.
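To illustrate the idea (real assemblers, including TestAssembler, typically use mathematical optimization rather than this toy heuristic), here is a greedy Python sketch that fills a content blueprint while preferring the most discriminating items. The bank and blueprint are invented.

```python
# Toy greedy test assembly: meet a per-domain blueprint, picking the items
# with the highest discrimination (a) in each domain.
bank = [
    {"id": 1, "domain": "algebra",  "a": 1.2},
    {"id": 2, "domain": "algebra",  "a": 0.8},
    {"id": 3, "domain": "geometry", "a": 1.5},
    {"id": 4, "domain": "geometry", "a": 1.0},
    {"id": 5, "domain": "algebra",  "a": 1.4},
]
blueprint = {"algebra": 2, "geometry": 1}  # items required per domain

form = []
for domain, n_needed in blueprint.items():
    candidates = [i for i in bank if i["domain"] == domain]
    candidates.sort(key=lambda i: i["a"], reverse=True)
    form.extend(candidates[:n_needed])

print([i["id"] for i in form])  # [5, 1, 3]
```

Real assembly juggles many simultaneous constraints (enemy items, word counts, statistical targets), which is exactly why algorithmic help pays off.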

Item/Distractor analysis

If you are using items with selected responses (including multiple choice, multiple response, and Likert), a distractor/option analysis is essential to determine whether those basic building blocks are indeed up to snuff. Our reporting platform in FastTest, as well as software like Iteman and Xcalibre, is designed for this purpose. The core statistics are sketched below.
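As a sketch of the core calculations (with invented response data), the Python below computes, for each option, the proportion of examinees choosing it and its point-biserial correlation with the rest-of-test score. The key should show a positive point-biserial; distractors should be negative.

```python
# Distractor analysis: option proportions and point-biserial correlations.
key = "B"
responses = ["B", "B", "A", "C", "B", "D", "B", "A", "B", "C"]
totals    = [ 18,  17,   9,   8,  16,   7,  15,  10,  14,   9]  # rest-of-test

def point_biserial(indicator, scores):
    n = len(scores)
    mean_all = sum(scores) / n
    sd = (sum((s - mean_all) ** 2 for s in scores) / n) ** 0.5
    chose = [s for s, i in zip(scores, indicator) if i]
    if not chose or len(chose) == n or sd == 0:
        return float("nan")
    p = len(chose) / n
    mean_chose = sum(chose) / len(chose)
    return (mean_chose - mean_all) / sd * (p / (1 - p)) ** 0.5

for option in "ABCD":
    ind = [r == option for r in responses]
    p = sum(ind) / len(ind)
    flag = "  <- key" if option == key else ""
    print(f"{option}: P = {p:.2f}   Rpbis = {point_biserial(ind, totals):+.2f}{flag}")
```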

Item response theory (IRT)

This is the modern paradigm for developing large-scale assessments. Most of the world's important exams over the past 40 years have used it, across all areas of assessment: licensure, certification, K-12 education, postsecondary education, language, medicine, psychology, pre-employment… the trend is clear, and for good reason. It will improve assessment. The core of the model is sketched below.
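At the heart of IRT is an item response function: the probability of a correct answer as a function of examinee ability (theta) and item parameters. Here is the three-parameter logistic (3PL) model in Python, with illustrative parameter values.

```python
# 3PL item response function: discrimination a, difficulty b,
# pseudo-guessing c; theta is examinee ability on a z-score-like scale.
import math

def p_3pl(theta, a, b, c=0.0):
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# Example item: moderate discrimination, average difficulty, 4 options.
for theta in (-2, -1, 0, 1, 2):
    print(f"theta = {theta:+d}   P(correct) = {p_3pl(theta, 1.2, 0.0, 0.25):.3f}")
```

Several of the other techniques here (linking, automated assembly, adaptive testing) build directly on functions like this one.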

Automated essay scoring

This technology has become more widely available to improve assessment. If your organization scores large volumes of essays, you should probably consider it. Learn more about it here. There was even a public Kaggle competition on it in the past. The basic supervised-learning framing is sketched below.
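As a bare-bones sketch of that framing (production engines use far richer features and far more training data), the Python below fits TF-IDF features with ridge regression to human-scored essays. It assumes scikit-learn is installed; the essays and scores are invented.

```python
# Minimal automated essay scoring: learn to predict human scores from text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

train_essays = [
    "The experiment shows that temperature affects the reaction rate...",
    "I think it is good because it is good and I like it.",
    "Increasing temperature raises molecular kinetic energy, so the rate...",
]
human_scores = [4, 1, 5]  # ratings from trained human markers

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(train_essays, human_scores)

print(model.predict(["Temperature changes the reaction rate in the experiment."]))
```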

Computerized adaptive testing (CAT)

Tests should be smart, and CAT makes them so. Why waste vast amounts of examinee time on items that don't contribute to a reliable score and merely discourage examinees? There are many other advantages too. The core adaptive loop is sketched below.
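Here is a skeleton CAT loop in Python, under simplifying assumptions: a tiny invented 2PL item bank, canned responses, ability re-estimated by EAP on a grid with a standard normal prior, and items selected for maximum Fisher information. Real systems add exposure control, content balancing, and stopping rules.

```python
# Skeleton adaptive test: select item with max information at current theta,
# score the response, re-estimate theta, repeat.
import math

bank = [{"a": a, "b": b} for a, b in
        [(1.4, -1.5), (1.1, -0.5), (1.3, 0.0), (0.9, 0.7), (1.5, 1.4)]]

def p2pl(theta, item):
    return 1 / (1 + math.exp(-item["a"] * (theta - item["b"])))

def info(theta, item):  # Fisher information of a 2PL item
    p = p2pl(theta, item)
    return item["a"] ** 2 * p * (1 - p)

def eap(items, responses):  # posterior mean of theta on a grid, N(0,1) prior
    grid = [g / 10 for g in range(-40, 41)]
    post = []
    for t in grid:
        like = math.exp(-t * t / 2)
        for item, u in zip(items, responses):
            p = p2pl(t, item)
            like *= p if u else (1 - p)
        post.append(like)
    total = sum(post)
    return sum(t * w for t, w in zip(grid, post)) / total

canned = [1, 1, 0, 1, 0]  # pretend answers, keyed to bank order
theta, seen, answers = 0.0, [], []
for step in range(3):
    item = max((i for i in bank if i not in seen), key=lambda i: info(theta, i))
    seen.append(item)
    answers.append(canned[bank.index(item)])
    theta = eap(seen, answers)
    print(f"step {step + 1}: gave item b = {item['b']:+.1f}, theta = {theta:+.2f}")
```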

Nathan Thompson, PhD

Nathan Thompson, PhD, is CEO and Co-Founder of Assessment Systems Corporation (ASC). He is a psychometrician, software developer, author, researcher, and evangelist for AI and automation. His mission is to elevate the profession of psychometrics by using software to automate psychometric work like item review, job analysis, and Angoff studies, so we can focus on more innovative work. His core goal is to improve assessment throughout the world.

Nate was originally trained as a psychometrician, with an honors degree from Luther College (a triple major in Math, Psychology, and Latin) and a PhD in Psychometrics from the University of Minnesota. He then worked in multiple roles in the testing industry, including item writer, test development manager, essay marker, consulting psychometrician, software developer, project manager, and business leader. He is also co-founder and Membership Director of the International Association for Computerized Adaptive Testing (iacat.org). He has published 100+ papers and presentations, but his favorite remains https://scholarworks.umass.edu/pare/vol16/iss1/1/.
