Posts on psychometrics: The Science of Assessment

SIFT test security data forensics

Introduction

Test fraud is an extremely common occurrence.  We’ve all seen articles about examinee cheating.  However, there are very few defensible tools to help detect it.  I once saw a webinar from an online testing provider that proudly touted their reports on test security… but it turned out that all they provided was a simple export of student answers that you could subjectively read and form conjectures.

The goal of SIFT is to provide a tool that implements real statistical indices from scientific research on the statistical detection of test fraud, while remaining user-friendly enough for someone without a PhD in psychometrics or experience in data forensics. SIFT provides more collusion indices and analysis than any other software, making it the industry standard from the day of its release. The science behind SIFT is also implemented in our world-class online testing platform, FastTest, which supports computerized adaptive testing, an approach known to increase test security.

Interested?  Download a free trial version of SIFT!

What is Test Fraud?

As long as tests have been around, people have been trying to cheat them. Anytime there’s a system with stakes or incentives involved, people will try to game it. The root culprit is the system itself, not the test. Blaming the test is just shooting the messenger.

In most cases, the system serves a useful purpose. K-12 assessments provide information on curriculum and teachers, certification tests identify qualified professionals, and so on. To preserve the system’s integrity, we must minimize test fraud.

When it comes to test fraud, the old cliché is true: an ounce of prevention is worth a pound of cure. While I recommend implementing preventative measures to deter fraud, some cases will always occur. SIFT is designed to help find those cases. Additionally, the knowledge that such analysis is being conducted can deter some examinees.

How can SIFT help me with statistical detection of test fraud?

Like other psychometric software, SIFT does not interpret results for you. For example, item analysis software like Iteman and Xcalibre don’t tell you which items to retire or how to revise them—they provide output for practitioners to analyze. SIFT offers a wide range of outputs to help identify:

  • Copying
  • Proctor assistance
  • Suspect test centers
  • Brain dump usage
  • Low examinee motivation

YOU decide what’s important for detecting test fraud and look for relevant evidence. More details are provided in the manual, but here’s a glimpse.

SIFT Test Security Data Forensics

SIFT calculates various indices to evaluate potential fraud:

  • Collusion Indices: SIFT calculates ten collusion indices (five probabilistic, five descriptive) for each pair of students and summarizes the number of flags. Note that these indices can differ substantially in their conclusions. (A simplified sketch of this family of indices appears after this list.)
  • Brain Dump Detection: Compare examinee responses with known brain dump content. A certification organization could use SIFT to look for evidence of brain dump makers and takers by evaluating the similarity between examinee response vectors and answers posted on a brain dump site – especially if those answers were intentionally seeded by the organization.
  • Adjacent Examinee Analysis: Identify examinees in the same location whose responses group together suspiciously in the collusion index output.
  • Response Time Data: Evaluate the time examinees spend on questions to detect irregularities; SIFT provides this analysis as well.
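
To make the idea concrete, here is a minimal sketch of one common family of pairwise collusion indices: count the exact errors in common (identical wrong answers) between two examinees and compare that count to a chance baseline. The function names, the flat chance probability, and the binomial model are illustrative assumptions for this sketch, not SIFT’s actual computations.

    from math import comb

    def exact_errors_in_common(resp_a, resp_b, key):
        """Count items where both examinees chose the same wrong option."""
        return sum(1 for a, b, k in zip(resp_a, resp_b, key) if a == b and a != k)

    def flag_probability(eeic, n_jointly_missed, p_match=0.25):
        """Probability of at least `eeic` identical wrong answers among the items
        both examinees missed, if each match occurs by chance with probability
        `p_match` (a simplifying assumption)."""
        return sum(
            comb(n_jointly_missed, k) * p_match**k * (1 - p_match)**(n_jointly_missed - k)
            for k in range(eeic, n_jointly_missed + 1)
        )

A pair would be flagged when this probability falls below a small threshold; operational indices, including those in SIFT, use more defensible models of the chance baseline.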

Group-Level Analysis

Additionally, SIFT rolls up many of these statistics to the group level, such as classrooms or test centers. For example, one teacher might have suspiciously high scores without students spending much more time. Cheating? Possibly. On the other hand, that class might also have the smallest N, so perhaps the teacher simply had a group of accelerated students. Another teacher might show high scores with notably shorter times – perhaps the teacher was helping?

The Story of SIFT

I started developing SIFT in 2012. ASC previously sold a program called Scrutiny!, but we stopped due to compatibility issues with newer Windows versions. Despite that, we kept receiving inquiries.

Determined to create a better tool, I set out to develop SIFT. I aimed to include the analysis from Scrutiny! (the Bellezza & Bellezza index) and much more. After years of business hurdles and countless hours, SIFT was released in July 2016.

Version 1.0 of SIFT includes:

  • 10 Collusion Indices (5 probabilistic, 5 descriptive)
  • Response Time Analysis
  • Group-Level Analysis
  • Additional tools for detecting test fraud

While it does not implement every analysis in the literature, SIFT surpasses other options available to practitioners.

Suggestions? We’d love to hear from you!


KSAOs (Knowledge, Skills, Abilities, and Other Characteristics) are an approach to defining the human attributes necessary for success on a job.  They are an essential aspect of the human resources and organizational development fields, impacting critical business processes from recruitment to selection to compensation.  This post provides an introduction to KSAOs and then discusses how they impact assessments in the workplace, such as pre-employment screening or certification/licensure exams.

Need help developing an assessment based on sound psychometrics such as job analysis and KSAOs?  Or just need a software platform to make it easier for you to do so?  Get in touch!

What is a KSAO? Knowledge, Skills, Abilities, and Other Characteristics

KSAO is an acronym that represents four essential components:

Knowledge refers to the understanding or awareness of concepts, facts, and information required to perform a job. This can include both formal education and practical experience. For example, a software developer needs knowledge of programming languages like Python or Java.

Skills are the learned proficiency or expertise in specific tasks. Skills are often technical in nature and can be developed through practice. For instance, an accountant needs strong skills in financial analysis or spreadsheet management.

Abilities are the natural or developed traits that determine how well someone can perform certain tasks. This includes cognitive abilities such as problem-solving or physical abilities like manual dexterity. A surgeon, for example, must have the ability to remain calm under pressure and possess fine motor skills to perform delicate operations.

Other characteristics refer to personal attributes or traits that may influence job performance but don’t necessarily fall into the above categories. These could include things like personality traits, work ethics, or attitudes. For instance, a customer service representative should have a positive attitude and excellent communication skills.

Examples of KSAOs

Here are some examples of KSAOs in various roles:

Registered Nurse:

  • Knowledge: Medical terminology, patient care protocols, pharmacology.
  • Skills: Administering injections, operating medical equipment, record-keeping.
  • Abilities: Emotional resilience, critical thinking, physical stamina.
  • Other characteristics: Compassion, teamwork, attention to detail.

Marketing Manager:

  • Knowledge: Market research, digital marketing trends, consumer behavior.
  • Skills: Data analysis, content creation, campaign management.
  • Abilities: Strategic thinking, multitasking, creative problem-solving.
  • Other characteristics: Leadership, adaptability, communication skills.

Software Engineer:

  • Knowledge: Programming languages, software development methodologies.
  • Skills: Debugging code, designing algorithms, testing software.
  • Abilities: Logical reasoning, attention to detail, time management.
  • Other characteristics: Innovation, teamwork, problem-solving attitude.

Why Are KSAOs Important in Human Resources, Recruitment, and Selection?

KSAOs are integral in various parts of the HR cycle, primarily by providing quality information about jobs to decision-makers.

  1. Drive Recruitment: KSAOs provide a clear framework for matching candidates with job roles. By assessing whether a candidate has the required knowledge, skills, abilities, and other characteristics, employers can avoid mismatches that could result in poor performance or turnover.
  2. Clear Job Expectations: When a job is defined by its KSAOs, both employers and employees have a clearer understanding of what is expected in terms of performance. This reduces confusion and ensures that candidates understand the responsibilities and qualifications required.
  3. Improved Hiring Decisions: Analyzing KSAOs allows employers to focus on evaluating specific traits and capabilities.  This helps to ensure that they choose individuals who are best equipped to handle the job. It helps streamline the hiring process and avoid subjective decisions based solely on resumes or interviews by focusing on reliable, objective assessments.
  4. Enhanced Training and Development: Once KSAOs are clearly defined, employers can identify skill gaps and focus their training efforts on developing specific areas. This targeted approach makes employee development more efficient. It can also improve retention.
  5. Legal Compliance and Fairness: By focusing on specific job-related criteria like KSAOs, employers are less likely to make hiring decisions based on irrelevant factors, such as personal biases. This helps maintain legal compliance and promotes fairness in hiring practices.
  6. Informs Compensation Structures: Comparing KSAOs between different job families within the hierarchy of an organization can help provide reasoning and documentation behind compensation plans.

So… how does this pertain to assessment?

Workforce-related assessments such as certification/licensure or pre-employment tests have to be supported by validity, which is evidence and documentation that the tests measure and predict what we want them to. This raises the question: what do we want them to measure and predict? The KSAOs!

For example, if you are developing a certification test for widgetmakers, you have to make sure it covers the right content, so that if (when!) the test is challenged or submitted for accreditation, you have evidence that it does. You can’t just go down to your basement and write 100 questions willy-nilly then put them up on some cheap online exam platform. Instead, you need to do a job analysis.

Job analysis is the process of systematically studying a job to determine the duties, responsibilities, required skills, and qualifications. This is the foundation upon which KSAOs are defined. By analyzing the job thoroughly, employers can define the exact KSAOs that are necessary for someone to perform well in that position.

There are two general approaches: focus groups and surveys. For focus groups, you might get 8 expert widgetmakers in a room for a day to discuss and agree on the KSAOs or tasks done on the job, then define weights for each on the exam. For the survey approach, you make a list of KSAOs or tasks and send it out to hundreds of widgetmakers, asking them to rate how important and how frequent each one is for successful performance.

The result is a well-defined list of KSAOs and tasks that are critical for the job, with relative weighting. This can provide input into important business processes like those discussed above (interview questions, job postings, recruitment), but for higher-stakes exams, you use this information to build formal exam blueprints. Blueprints ensure that professionals who earn the certification have demonstrated a certain level of expertise on the KSAOs. They are also useful for pre-employment assessments to screen applicants, highlighting those who are more qualified than others.
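
As a rough illustration of how survey results can become blueprint weights, here is a small sketch that combines mean importance and frequency ratings into a normalized weight per task. The task names, the 1-5 rating scale, and the multiplicative criticality rule are illustrative assumptions; real job analyses use more formal procedures.

    # Hypothetical mean ratings (1-5 scale) from a job-analysis survey.
    survey_ratings = {
        "Assemble widget housing":   (4.6, 4.2),   # (importance, frequency)
        "Calibrate widget tension":  (4.9, 2.8),
        "Document quality checks":   (3.8, 4.7),
        "Train junior widgetmakers": (3.1, 1.9),
    }

    # One common convention: criticality = importance x frequency, then normalize.
    criticality = {task: imp * freq for task, (imp, freq) in survey_ratings.items()}
    total = sum(criticality.values())
    blueprint = {task: round(100 * value / total) for task, value in criticality.items()}

    print(blueprint)  # approximate percentage of exam content per task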

Conclusion: The Value of KSAOs

Incorporating KSAOs as the foundation of your employee hiring, development, and assessment processes can provide invaluable insight, increase validity, and have a positive impact on the company’s bottom line. By understanding and evaluating these components, organizations can make informed decisions that contribute to long-term success.

Whether you’re a hiring manager or an HR professional, embracing KSAO-based assessments can enhance your ability to cultivate a strong, capable workforce. Contact us if you wish to talk with an expert about developing exams which meet international psychometric standards.  Additionally, our powerful yet easy-to-use online platform is ideal to develop and deliver your assessments.

 

computerized adaptive testing

Computerized adaptive testing is an AI-based approach to assessment where the test is personalized based on your performance as you take the test, making the test shorter, more accurate, more secure, more engaging, and fairer.  If you do well, the items get more difficult, and if you do poorly, the items get easier.  If an accurate score is reached, the test stops early.  By tailoring question difficulty to each test-taker’s performance, CAT ensures an efficient and secure testing process.  The AI algorithms are almost always based on Item Response Theory (IRT), an application of machine learning to assessment, but can be based on other models as well. 

 

Prefer to learn by doing?  Request a free account in FastTest, our powerful adaptive testing platform.


What is computerized adaptive testing?

Computerized adaptive testing (CAT), sometimes called computer-adaptive testing, adaptive assessment, or adaptive testing, is an algorithm that personalizes how an assessment is delivered to each examinee.  It is coded into a software platform, using the machine-learning approach of IRT to select items and score examinees.  The algorithm proceeds in a loop until the test is complete.  This makes the test smarter, shorter, fairer, and more precise.

[Diagram: steps of the computerized adaptive testing algorithm]

The steps in the diagram above are adapted from Kingsbury and Weiss (1984) and are based on the components listed below.

Components of a CAT

  1. Item bank calibrated with IRT
  2. Starting point (theta level before someone answers an item)
  3. Item selection algorithm (usually maximum Fisher information)
  4. Scoring method (e.g., maximum likelihood)
  5. Termination criterion (stop the test at 50 items, or when standard error is less than 0.30?  Both?)

How the components work

For starters, you need an item bank that has been calibrated with a relevant psychometric or machine learning model.  That is, you can’t just write a few items and subjectively rank them as Easy, Medium, or Hard difficulty.  That’s an easy way to get sued.  Instead, you need to write a large number of items (rule of thumb is 3x your intended test length) and then pilot them on a representative sample of examinees.  The sample must be large enough to support the psychometric model you choose, and can range from 100 to 1000.  You then need to perform simulation research – more on that later.


Once you have an item bank ready, here is how the computerized adaptive testing algorithm works for a student that sits down to take the test, with options for how to do so.

  1. Starting point: there are three options for selecting the starting score, which psychometricians call theta
    • Everyone gets the same value, like 0.0 (average, in the case of non-Rasch models)
    • Randomized within a range, to help test security and item exposure
    • Predicted value, perhaps from external data, or from a previous exam
  2. Select item
    • Find the item in the bank that has the highest information value
    • Often, you need to balance this with practical constraints such as Item Exposure or Content Balancing
  3. Score the examinee
    • Usually with IRT, via maximum likelihood or Bayes modal estimation
  4. Evaluate termination criterion: using a predefined rule supported by your simulation research
    • Is a certain level of precision reached, such as a standard error of measurement <0.30?
    • Are there no good items left in the bank?
    • Has a time limit been reached?
    • Has a Max Items limit been reached?

The algorithm works by looping through steps 2-4 until the termination criterion is satisfied.
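
Below is a minimal, hedged sketch of that loop for a tiny two-parameter (2PL) item bank, using maximum-information selection, grid-based maximum likelihood scoring, and a standard-error stopping rule. The item parameters, the theta grid, and the thresholds are invented for illustration; production CAT engines add many refinements (exposure control, content balancing, robust scoring for all-correct or all-incorrect patterns).

    import numpy as np

    bank = np.array([          # columns: a (discrimination), b (difficulty)
        [1.2, -2.0],
        [0.9, -1.0],
        [1.5,  0.0],
        [1.1,  1.0],
        [1.3,  2.0],
    ])
    theta_grid = np.linspace(-4, 4, 161)

    def p_correct(theta, a, b):
        """2PL probability of a correct response."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def information(theta, a, b):
        """Fisher information of a 2PL item at theta."""
        p = p_correct(theta, a, b)
        return a**2 * p * (1.0 - p)

    def run_cat(answer_fn, max_items=20, se_target=0.30, theta=0.0):
        """Deliver items adaptively; answer_fn(item_index) returns 1 or 0."""
        used, responses = [], []
        while len(used) < max_items:
            # Step 2: select the unused item with the highest information at theta
            info = [information(theta, a, b) if i not in used else -1.0
                    for i, (a, b) in enumerate(bank)]
            item = int(np.argmax(info))
            if info[item] <= 0:
                break                      # bank exhausted
            used.append(item)
            responses.append(answer_fn(item))
            # Step 3: re-score by maximizing the log-likelihood over the theta grid
            loglik = np.zeros_like(theta_grid)
            for i, u in zip(used, responses):
                p = p_correct(theta_grid, *bank[i])
                loglik += u * np.log(p) + (1 - u) * np.log(1.0 - p)
            theta = theta_grid[np.argmax(loglik)]
            # Step 4: stop once the standard error (1/sqrt(test information)) is small
            test_info = sum(information(theta, *bank[i]) for i in used)
            if 1.0 / np.sqrt(test_info) < se_target:
                break
        return theta, used

For example, run_cat(lambda i: 1) simulates an examinee who answers every item correctly; with a bank this tiny, the maximum likelihood estimate runs to the top of the grid and the test ends only when the bank is exhausted.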

How does the test adapt? By Difficulty or Quantity?

CATs operate by adapting both the difficulty and quantity of items seen by each examinee.

Difficulty
Most characterizations of computerized adaptive testing focus on how item difficulty is matched to examinee ability. High-ability examinees receive more difficult items, while low-ability examinees receive easier items, which has important benefits for the student and the organization. An adaptive test typically begins by delivering an item of medium difficulty; if you get it correct, you get a tougher item, and if you get it incorrect, you get an easier item.  This pattern continues.

Quantity: Fixed-Length vs. Variable-Length
A less publicized facet of adaptation is the number of items. Adaptive tests can be designed to stop when certain psychometric criteria are reached, such as a specific level of score precision. Some examinees finish very quickly with few items; on average, adaptive tests require only about half as many questions as a conventional test, with at least as much accuracy. Since some examinees have longer tests than others, these adaptive tests are referred to as variable-length. Obviously, this makes for a massive benefit: cutting testing time in half, on average, can substantially decrease testing costs.

Some adaptive tests use a fixed length and adapt only item difficulty. This is usually done for public relations reasons, namely to avoid dealing with examinees who feel they were unfairly treated by the CAT, even though CAT is arguably more fair and valid than conventional tests.  In general, it is best practice to blend the two: allow test length to vary, but put caps on either end to prevent inadvertently short tests or tests that could potentially run on to 400 items.  For example, the NCLEX has a minimum length of 75 items and a maximum length of 145 items.
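
As a small illustration of such a blended stopping rule, the sketch below checks a minimum length, a maximum length, and a precision target on each pass through the loop. The specific numbers simply echo the NCLEX-style caps above and the 0.30 standard-error target mentioned elsewhere in this post; they are configuration choices, not requirements.

    def should_stop(n_items_given, standard_error,
                    min_items=75, max_items=145, se_target=0.30):
        """Blended variable-length termination rule with hard caps on both ends."""
        if n_items_given < min_items:
            return False                      # never stop before the minimum length
        if n_items_given >= max_items:
            return True                       # hard cap on test length
        return standard_error <= se_target    # otherwise stop once precise enough

In a loop like the one sketched earlier, this check would replace the simple standard-error comparison.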

 

Example of the computerized adaptive testing algorithm

Let’s walk through an oversimplified example.  Here, we have an item bank with 5 questions.  We will start with an item of average difficulty and answer as a student of below-average ability would.

Below are the item information functions for five items in a bank.  Let’s suppose the starting theta is 0.0.  

[Figure: item information functions for the five items in the bank]

 

  1. We find the first item to deliver.  Which item has the highest information at 0.0?  It is Item 4.
  2. Suppose the student answers incorrectly.
  3. We run the IRT scoring algorithm, and suppose the score is -2.0.  
  4. Check the termination criterion; we certainly aren’t done yet, after 1 item.
  5. Find the next item.  Which has the highest information at -2.0?  Item 2.
  6. Suppose the student answers correctly.
  7. We run the IRT scoring algorithm, and suppose the score is -0.8.  
  8. Evaluate termination criterion; not done yet.
  9. Find the next item.  Item 2 is the highest at -0.8 but we already used it.  Item 4 is next best, but we already used it.  So the next best is Item 1.
  10. Item 1 is very easy, so the student gets it correct.
  11. New score is -0.2.
  12. Best remaining item at -0.2 is Item 3.
  13. Suppose the student gets it incorrect.
  14. New score is perhaps -0.4.
  15. Evaluate termination criterion.  Suppose that the test has a max of 3 items, an extremely simple criterion.  We have met it.  The test is now done and automatically submitted.

 

Advantages of computerized adaptive testing

By making the test more intelligent, adaptive testing provides a wide range of benefits.  Some of the well-known advantages of adaptive testing, recognized by scholarly psychometric research, are listed below.  
 

Shorter tests

Research has found that adaptive tests produce anywhere from a 50% to 90% reduction in test length.  This is no surprise.  Suppose you have a pool of 100 items.  A top student is practically guaranteed to get the easiest 70 correct; only the hardest 30 will make them think.  Vice versa for a low-ability student.  Middle-ability students do not need the super-hard or the super-easy items.

Why does this matter?  Primarily, it can greatly reduce costs.  Suppose you are delivering 100,000 exams per year in testing centers, and you are paying $30/hour.  If you can cut your exam from 2 hours to 1 hour, you just saved $3,000,000.  Yes, there will be increased costs from the development and maintenance of computer adaptive tests, but you will likely save money in the end.

For K-12 assessment, you aren’t paying for seat time, but there is the opportunity cost of lost instruction time.  If students are taking formative assessments 3 times per year to check on progress, and you can reduce each by 20 minutes, that is 1 hour per student; if there are 500,000 students in your state, then you just saved 500,000 hours of learning.

More precise scores

CAT will make tests more accurate, in general.  It does this by designing the algorithms specifically around how to get more accurate scores without wasting examinee time.

More control of score precision (accuracy)

CAT ensures that all students are measured with comparable accuracy, making the test much fairer.  Traditional tests measure the middle students well but not the top or bottom students.  Which is better: (A) students see the same items but can have drastically different score accuracy, or (B) students have equivalent score accuracy but see different items?

Better test security

Since all students are essentially getting an assessment that is tailored to them, there is better test security than everyone seeing the same 100 items.  Item exposure is greatly reduced; note, however, that this introduces its own challenges, and adaptive assessment algorithms have considerations of their own item exposure.

A better experience for examinees, with reduced fatigue

Computer adaptive tests tend to be less frustrating for examinees across all ranges of ability.  Moreover, implementing variable-length stopping rules (e.g., once we know you are a top student, we don’t give you the 70 easy items) reduces fatigue.

Increased examinee motivation

Since examinees only see items relevant to them, this provides an appropriate challenge.  Low-ability examinees will feel more comfortable and get many more items correct than with a linear test.  High-ability students will get the difficult items that make them think.

Frequent retesting is possible

The whole “unique form” idea applies to the same student taking the same exam twice.  Suppose you take the test in September, at the beginning of a school year, and take the same one again in November to check your learning.  You’ve likely learned quite a bit and are higher on the ability range; you’ll get more difficult items, and therefore a new test.  If it was a linear test, you might see the same exact test.

This is a major reason that CAT plays a huge role in formative testing for K-12 education, delivered several times per year to millions of students in the US alone.

Individual pacing of tests

Examinees can move at their own speed.  Some might move quickly and be done in only 30 items.  Others might waver, also seeing 30 items but taking more time.  Still, others might see 60 items.  The algorithms can be designed to maximize the process.

Advantages of computerized testing in general

Of course, the advantages of using a computer to deliver a test are also relevant.  Here are a few:
  • Immediate score reporting
  • On-demand testing can reduce printing, scheduling, and other paper-based concerns
  • Storing results in a database immediately makes data management easier
  • Computerized testing facilitates the use of multimedia in items
  • You can immediately run psychometric reports
  • Timelines are reduced with an integrated item banking system

 

How to develop an adaptive assessment that is valid and defensible

CATs are the future of assessment. They operate by adapting both the difficulty and number of items to each individual examinee. The development of an adaptive test is no small feat, and requires five steps integrating the expertise of test content developers, software engineers, and psychometricians.

The development of a quality adaptive test is complex and requires experienced psychometricians in both item response theory (IRT) calibration and CAT simulation research. FastTest can provide you the psychometrician and software; if you provide test items and pilot data, we can help you quickly publish an adaptive version of your test.

   Step 1: Feasibility, applicability, and planning studies. First, extensive Monte Carlo simulation research must occur, and the results must be formulated as business cases, to evaluate whether adaptive testing is feasible, applicable, or even possible.

   Step 2: Develop item bank. An item bank must be developed to meet the specifications recommended by Step 1.

   Step 3: Pretest and calibrate item bank. Items must be pilot tested on 200-1000 examinees (depends on IRT model) and analyzed by a Ph.D. psychometrician.

   Step 4: Determine specifications for final CAT. Data from Step 3 is analyzed to evaluate CAT specifications and determine most efficient algorithms using CAT simulation software such as CATSim.

   Step 5: Publish live CAT. The adaptive test is published in a testing engine capable of fully adaptive tests based on IRT.  There are not very many of them out in the market.  Sign up for a free account in our platform FastTest and try for yourself!

Want to learn more about our one-of-a-kind model? Click here to read the seminal article by our two co-founders.  More adaptive testing research is available here.

Minimum requirements for computerized adaptive testing

Here are some minimum requirements to evaluate if you are considering a move to the CAT approach.

  • A large item bank piloted so that each item has at least 100 valid responses (Rasch model) or 500 (3PL model)
  • 500 examinees per year
  • Specialized IRT calibration and CAT simulation software like Xcalibre and CATSim
  • Staff with a Ph.D. in psychometrics or an equivalent level of experience. Or, leverage our internationally recognized expertise in the field.
  • Items (questions) that can be scored objectively correct/incorrect in real-time
  • An item banking system and CAT delivery platform
  • Financial resources: Because it is so complex, the development of a CAT will cost at least $10,000 (USD) — but if you are testing large volumes of examinees, it will be a significantly positive investment. If you pay $20/hour for proctoring seats and cut a test from 2 hours to 1 hour for just 1,000 examinees… that’s a $20,000 savings.  If you are doing 200,000 exams?  That is $4,000,000 in seat time that is saved.

Adaptive testing: Resources for further reading

Visit the links below to learn more about adaptive assessment.  

  • We recommend that you first read this landmark article by our co-founders.
  • Read this article on producing better measurements with CAT from Prof. David J. Weiss.
  • International Association for Computerized Adaptive Testing: www.iacat.org
  • Here is the link to the webinar on the history of CAT, by the godfather of CAT, Prof. David J. Weiss.

Examples of CAT

Many large-scale assessments utilize adaptive technology.  The GRE (Graduate Record Examination) is a prime example of an adaptive test. So is the NCLEX (nursing exam in the USA), GMAT (business school admissions), Paramedic/EMT certification exam, and many formative assessments like the NWEA MAP or iReady.  The SAT has recently transitioned to a multistage adaptive format.

How to implement CAT on an adaptive testing platform


Our revolutionary platform, FastTest, makes it easy to publish a CAT.  It is designed as a user-friendly ecosystem to build, deliver, and validate assessments, with a focus on modern psychometrics like IRT and CAT.

  1. Upload your items
  2. Deliver a pilot exam
  3. Calibrate with our IRT software Xcalibre
  4. Upload the IRT parameters into the FastTest adaptive testing platform
  5. Assemble the pool of items you want to publish
  6. Specify the adaptive testing software parameters
  7. Deliver your adaptive test!

 

Ready to roll?  Contact us to sign up for a free account in our industry-leading CAT platform or to discuss with one of our PhD psychometricians.

Situational judgment tests (SJTs) are a type of assessment typically used in a pre-employment context to assess candidates’ soft skills and decision-making abilities. As the name suggests, we are not trying to assess something like knowledge, but rather the judgments or likely behaviors of candidates in specific situations, such as an unruly customer. These tests have become a critical component of modern recruitment, offering employers valuable insights into how applicants approach real-world scenarios, with higher fidelity than traditional assessments.

The importance of tools like SJTs becomes even clearer when considering the significant costs of poor hiring decisions. The U.S. Department of Labor suggests that the financial impact of a poor hiring decision can amount to roughly 30% of the employee’s annual salary. Similarly, CareerBuilder reports that around three-quarters of employers face an average loss of approximately $15,000 for each bad hire due to various costs such as training, lost productivity, and recruitment expenses. Gallup’s State of the Global Workplace Report 2022 further emphasizes the broader implications, revealing that disengaged employees—often a result of poor hiring practices—cost companies globally $8.8 trillion annually in lost productivity.

In this article, we’ll define situational judgment tests, explore their benefits, and provide an example question to better understand how they work.

What is a Situational Judgment Test?

A Situational Judgment Test (SJT) is a psychological assessment tool designed to evaluate how individuals handle hypothetical workplace scenarios. These tests present a series of realistic situations and ask candidates to choose or rank responses that best reflect how they would act. Unlike traditional aptitude tests that measure specific knowledge or technical skills, SJTs focus on soft skills like problem-solving, teamwork, communication, and adaptability. They can provide a critical amount of incremental validity over cognitive and job knowledge assessments.

SJTs are widely used in recruitment for roles where interpersonal and decision-making skills are critical, such as management, customer service, and healthcare. They can be administered in various formats, including multiple-choice questions, multiple-response items, video scenarios, or interactive simulations.

Example of a Situational Judgment Test Question

Here’s a typical SJT question to illustrate the concept:

 

Scenario:

You are leading a team project with a tight deadline. One of your team members, who is critical to the project’s success, has missed several key milestones. When you approach them, they reveal they are overwhelmed with personal issues and other work commitments.


Question:

What would you do in this situation?

– Report the issue to your manager and request their intervention.

– Offer to redistribute some of their tasks to other team members to ease their workload.

– Have a one-on-one meeting to understand their challenges and develop a plan together.

– Leave them to handle their tasks independently to avoid micromanaging.

 

Answer Key:

While there’s no definitive “right” answer in SJTs, some responses align better with desirable workplace behaviors. In this example, Option 3 demonstrates empathy, problem-solving, and leadership, which are highly valued traits in most professional settings.

 

Because SJTs typically do not have an overtly correct answer, they will sometimes have a partial credit scoring rule. In the example above, you might elect to give 2 points to Option 3 and 1 point to Option 2. Perhaps even a negative point to some options!
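
As a concrete illustration, a partial-credit key for the example item above might be stored and applied as follows. The point values mirror the suggestion in the previous paragraph and are purely hypothetical; every program sets its own scoring rules.

    # Hypothetical partial-credit key for the team-deadline scenario above.
    sjt_key = {
        "Report the issue to your manager": 0,
        "Redistribute tasks to other team members": 1,
        "One-on-one meeting to develop a plan together": 2,
        "Leave them to handle tasks independently": -1,
    }

    def score_option(choice, key):
        """Return the partial-credit points for the chosen option (0 if unlisted)."""
        return key.get(choice, 0)

    print(score_option("One-on-one meeting to develop a plan together", sjt_key))  # 2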

Potential topics for SJTs

Customer service – Given a video of an unruly customer, how would you respond?

Difficult coworker situation – Like the previous example, how would you find a solution?

Police/Fire – If you made a routine traffic stop and the driver was acting intoxicated and belligerent, what would you do?

How to Develop and Deliver an SJT

Development of an SJT is typically more complex than that of knowledge-based tests, because it is more difficult to come up with the topic/content of the item as well as plausible distractors and scoring rules. It can also get expensive if you are utilizing simulation formats or high-quality videos for which you hire real actors!

Here are some suggested steps:

  1. Define the construct you want to measure
  2. Draft item content
  3. Establish the scoring rules
  4. Have items reviewed by experts
  5. Create videos/simulations
  6. Set your cutscore (Standard setting)
  7. Publish the test

SJTs are almost always delivered by computer nowadays because it is so easy to include multimedia. Below is an example of what this will look like, using ASC’s FastTest platform.

[Screenshot: a situational judgment test item in FastTest]

Advantages of Situational Judgment Tests

1. Realistic Assessment of Skills

Unlike theoretical tests, SJTs mimic real-world situations, making them a practical way to evaluate how candidates might behave in the workplace. This approach helps employers identify individuals who align with their organizational values and culture.

2. Focus on Soft Skills

Technical expertise can often be measured through other assessments or qualifications, but soft skills like emotional intelligence, adaptability, and teamwork are harder to gauge. SJTs provide insights into these intangible qualities that are crucial for success in many roles.

3. Reduced Bias

SJTs focus on behavior rather than background, making them a fairer assessment tool. They can help level the playing field by emphasizing practical decision-making over academic credentials or prior experience.

4. Efficient Screening Process

For roles that receive a high volume of applications, SJTs offer a quick and efficient way to filter candidates. By identifying top performers early, organizations can save time and resources in subsequent hiring stages.

5. Improved Candidate Experience

Interactive and scenario-based assessments often feel more engaging to candidates than traditional tests. This positive experience can enhance a company’s employer brand and attract top talent.

Tips for Success in Taking an SJT

If you’re preparing to take a situational judgment test, keep these tips in mind:

– Understand the Role: Research the job to better understand the types of situations that might be encountered, and think through your responses ahead of time.

– Understand the Company: Research the organization to align your responses with their values, culture, and expectations.

– Prioritize Key Skills: Many SJTs assess teamwork, leadership, and conflict resolution, so focus on demonstrating these attributes.

– Practice: Familiarize yourself with sample questions to build confidence and improve your response strategy.

Conclusion

Situational judgment tests are a powerful tool for employers to evaluate candidates’ interpersonal and decision-making abilities in a realistic context, and in a way that is much more scalable than 1-on-1 interviews.  For job seekers, they offer an opportunity to showcase soft skills that might not be evident from a resume or educational record alone. As their use continues to grow across industries, understanding and preparing for SJTs can give candidates a competitive edge in the job market.

Interested in developing and delivering your own SJTs on a world-class platform?  ASC’s software is designed to support such usage; contact us for a demo.

Additional Resources on SJTs

Lievens, F., & Sackett, P. R. (2012). The validity of interpersonal skills assessment via SJTs: A review. Journal of Applied Psychology, 97(1), 3–17.

Weekley, J. A., & Ployhart, R. E. (Eds.). (2005). Situational judgment tests: Theory, measurement, and application. Psychology Press.

Christian, M. S., Edwards, B. D., & Bradley, J. C. (2010). Situational judgment tests: Constructs assessed and a meta-analysis of their criterion-related validity. Personnel Psychology, 63(1), 83–117.


Confidence intervals (CIs) are a fundamental concept in statistics, used extensively in assessment and measurement to estimate the reliability and precision of data. Whether in scientific research, business analytics, or health studies, confidence intervals provide a range of values that likely contain the true value of a parameter, giving us a better understanding of uncertainty. This article dives into the concept of confidence intervals, how they are used in assessments, and real-world applications to illustrate their importance.

What Is a Confidence Interval?

A CI is a range of values, derived from sample data, that is likely to contain the true population parameter. Instead of providing a single estimate (like a mean or proportion), it gives a range of plausible values, often expressed with a specific level of confidence, such as 95% or 99%. For example, if a survey estimates the average height of adult males to be 175 cm with a 95% CI of 170 cm to 180 cm, it means that we can be 95% confident that the true average height of all adult males falls within this range.

These intervals are constructed by taking the point estimate plus or minus a factor times the standard error, creating a lower bound and an upper bound for the range.

Upper Bound = Estimate + factor * standard_error

Lower Bound = Estimate – factor * standard_error

The value of the factor depends upon your assumptions and the desired percentage in the range.  You might want 90%, 95%, or 99%.  With the standard normal distribution, the factor for 95% is 1.96 (see any table of z-scores), which you can round to 2 for quick mental math to get a rough idea.  So, in the example above, we might find that the average height for a sample of 100 adult males is 175 cm with a standard deviation of 25.  The standard error of the mean is SD/sqrt(N), which in this case is 25/sqrt(100) = 2.5.  If we take plus or minus roughly 2 times the standard error of 2.5, that is how we get a confidence interval indicating that the true population mean is 95% likely to be between 170 and 180.
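
For readers who prefer to see the arithmetic spelled out, here is a quick sketch of that calculation using the running example (n = 100, mean = 175, SD = 25); the numbers are simply those from the paragraph above.

    from math import sqrt

    n, mean, sd = 100, 175.0, 25.0
    se = sd / sqrt(n)                 # standard error of the mean = SD / sqrt(N)
    z = 1.96                          # factor for a 95% interval under normality
    lower, upper = mean - z * se, mean + z * se
    print(f"95% CI: {lower:.1f} to {upper:.1f} cm")   # roughly 170 to 180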

This example is from general statistics, but confidence intervals are used for specific reasons in assessment.

 

How Confidence Intervals Are Used in Assessment and Measurement


1. Statistical Inference

CIs play a crucial role in making inferences about a population based on sample data. For instance, researchers studying the effect of a new drug might calculate a CI for the average improvement in symptoms. This helps determine if the drug has a significant effect compared to a placebo.

2. Quality Control

In industries like manufacturing, CIs help assess product consistency and quality. For example, a factory producing light bulbs may use CIs to estimate the average lifespan of their products. This ensures that most bulbs meet performance standards.

3. Education and Testing

In educational assessments, CIs can provide insights into test reliability and student performance. For instance, a standardized test score might come with a CI to account for variability in test conditions or scoring methods.

Real-World Examples of Confidence Intervals

1. Medical Research


In clinical trials, CIs are often used to estimate the effectiveness of treatments. Suppose a study finds that a new vaccine reduces the risk of a disease by 40%, with a 95% CI of 30% to 50%. This means there’s a high probability that the true effectiveness lies within this range, helping policymakers make informed decisions.

2. Business Analytics

Businesses use CIs to forecast sales, customer satisfaction, or market trends. For example, a company surveying customer satisfaction might report an average satisfaction score of 8 out of 10, with a 95% CI of 7.5 to 8.5. This helps managers gauge customer sentiment while accounting for survey variability.

3. Environmental Studies

Environmental scientists use CIs to measure pollution levels or climate changes. For instance, if data shows that the average global temperature has increased by 1.2°C over the past century, with a CI of 0.9°C to 1.5°C, this range provides a clearer picture of the uncertainty in the estimate.

Confidence Intervals in Education: A Closer Look

CIs are particularly valuable in education, where they help assess the reliability and validity of test scores and other measurements. By understanding and applying CIs, educators, test developers, and policymakers can make more informed decisions that impact students and learning outcomes.

1. Estimating a Range for True Score

CIs are often paired with the standard error of measurement (SEM) to provide insights into the reliability of test scores. SEM quantifies the amount of error expected in a score due to various factors like testing conditions or measurement tools.  It gives us a range for the true score around the observed score (see the technical note near the end on this).

For example, consider a standardized test with a scaled score range of 200 to 800. If a student scores 700 with an SEM of 20, the 95% CI for their true score is calculated as:

     Score ± (SEM × Z-value for 95% confidence)

     700 ± (20 × 1.96) = 700 ± 39.2

Thus, the 95% CI is approximately 660 to 740. This means we can be 95% confident that the student’s true score lies within this range, accounting for potential measurement error.  Because this matters, the CI is sometimes factored into high-stakes decisions, such as setting a cutscore for hiring at a company based on a screening test.

The reasoning for this is accurately described by this quote from Prof. Michael Rodriguez, noted by Mohammed Abulela on LinkedIn:

A test score is a snapshot estimate, based on a sample of knowledge, skills, or dispositions, with a standard error of measurement reflecting the uncertainty in that score – because it is a sample. Fair test score interpretation employs that standard error and does not treat a score as an absolute or precise indicator of performance.

2. Using Standard Error of Estimate (SEE) for Predictions

The standard error of the estimate (SEE) is used to evaluate the accuracy of predictions in models, such as predicting student performance based on prior data.

For instance, suppose that a college readiness score ranges from 0 to 500, and is predicted by a student’s school grades and admissions test score.  If a predictive model estimates a student’s college readiness score to be 450, with an SEE of 25, the 95% confidence interval for this predicted score is:

     450 ± (25 × 1.96) = 450 ± 49

This results in a confidence interval of 401 to 499, indicating that the true readiness score is likely within this range. Such information helps educators evaluate predictive assessments and develop better intervention strategies.

3. Evaluating Group Performance


CIs are also used to assess the performance of groups, such as schools or districts. For instance, if a district’s average math score is 75 with a 95% CI of 73 to 77, policymakers can be fairly confident that the district’s true average falls within this range. This insight is crucial for making fair comparisons between schools or identifying areas that need improvement.

4. Identifying Achievement Gaps

When studying educational equity, CIs help measure differences in achievement between groups, such as socioeconomic or demographic categories. For example, if one group scores an average of 78 with a CI of 76 to 80 and another scores 72 with a CI of 70 to 74, the overlap (or lack thereof) in intervals can indicate whether the gap is statistically significant or might be due to random variability.

5. Informing Curriculum Development

CIs can guide decisions about curriculum and instructional methods. For instance, when pilot-testing a new teaching method, researchers might use CIs to evaluate its effectiveness. If students taught with the new method have scores averaging 85 with a CI of 83 to 87, compared to 80 (78 to 82) for traditional methods, educators might confidently adopt the new approach.

6. Supporting Student Growth Tracking

In long-term assessments, CIs help track student growth by providing a range around estimated progress. If a student’s reading level improves from 60 (58–62) to 68 (66–70), educators can confidently assert growth while acknowledging measurement variability.

Key Benefits of Using Confidence Intervals

  • Enhanced Decision-Making: CIs provide a range, rather than a single estimate, making decisions more robust and informed.
  • Clarity in Uncertainty: By quantifying uncertainty, confidence intervals allow stakeholders to understand the limitations of the data.
  • Improved Communication: Reporting findings with CIs ensures transparency and builds trust in the results.

 

How to Interpret Confidence Intervals

A common misconception is that a 95% CI means there’s a 95% chance the true value falls within the interval. Instead, it means that if we repeated the study many times, 95% of the calculated intervals would contain the true parameter. Thus, it’s a statement about the method, not the specific interval.  This is similar to the common misinterpretation of an experimental p-value that it is the probability that our alternative hypothesis is true; instead, it is the probability of our experiment’s results if the null is true.
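
One way to see this interpretation in action is a small simulation: draw many samples from a known population, build a 95% CI from each, and count how often the intervals contain the true mean. The population values below reuse the height example and are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(7)
    true_mean, sd, n, trials = 175.0, 25.0, 100, 10_000

    covered = 0
    for _ in range(trials):
        sample = rng.normal(true_mean, sd, n)
        se = sample.std(ddof=1) / np.sqrt(n)
        lower, upper = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
        covered += lower <= true_mean <= upper

    print(covered / trials)   # close to 0.95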

Final Thoughts

CIs are indispensable in assessment and measurement, offering a clearer understanding of data variability and precision. By applying them effectively, researchers, businesses, and policymakers can make better decisions based on statistically sound insights.

Whether estimating population parameters or evaluating the reliability of a new method, CIs provide the tools to navigate uncertainty with confidence. Start using CIs today to bring clarity and precision to your analyses!

General intelligence, often symbolized as “g,” is a concept that has been central to psychology and cognitive science since the early 20th century. First introduced by Charles Spearman, general intelligence represents an individual’s overall cognitive ability. This foundational concept has evolved over the years and remains crucial in both academic and applied settings, particularly in assessment and measurement. Understanding general intelligence can help in evaluating mental abilities, predicting academic and career success, and creating reliable and valid assessment tools. This article delves into the nature of general intelligence, its assessment, and its importance in measurement fields.

What is General Intelligence?


General intelligence (GI), or “g,” is a theoretical construct referring to the common cognitive abilities underlying performance across various mental tasks. Spearman proposed that a general cognitive ability contributes to performance in a wide range of intellectual tasks. This ability encompasses multiple cognitive skills, such as reasoning, memory, and problem-solving, which are thought to be interconnected. In Spearman’s model, a person’s performance on any cognitive test relies partially on “g” and partially on task-specific skills.

For example, both solving complex math problems and understanding a new language involve specific abilities unique to each task but are also underpinned by an individual’s GI. This concept has been pivotal in shaping how we understand cognitive abilities and the development of intelligence tests.

To further explore the foundational aspects of intelligence, the Positive Manifold phenomenon demonstrates that most cognitive tasks tend to be positively correlated, meaning that high performance in one area generally predicts strong performance in others. You can read more about it in our article on Positive Manifold.

GI in Assessment and Measurement

The assessment of GI has been integral to psychology, education, and organizational settings for decades. Testing for “g” provides insight into an individual’s mental abilities and often serves as a predictor of various outcomes, such as academic performance, job performance, and life success.

  1. Intelligence Testing: Intelligence tests, like the Wechsler Adult Intelligence Scale (WAIS) and Stanford-Binet, aim to provide a measurement of GI. These tests typically consist of a variety of subtests measuring different cognitive skills, including verbal comprehension, working memory, and perceptual reasoning. The results are aggregated to produce an overall IQ score, representing a general measure of “g.” These scores are then compared to population averages to understand where an individual stands in terms of cognitive abilities relative to their peers.
  2. Educational Assessment: GI is often used in educational assessments to help identify students who may need additional support or advanced academic opportunities. For example, cognitive ability tests can assist in identifying gifted students who may benefit from accelerated programs or those who need extra resources. Schools also use “g” as one factor in admission processes, relying on tests like the SAT, GRE, and similar exams, which assess reasoning and problem-solving abilities linked to GI.
  3. Job and Career Assessments: Many organizations use cognitive ability tests as part of their recruitment processes. GI has been shown to predict job performance across many types of employment, especially those requiring complex decision-making and problem-solving skills. By assessing “g,” employers can gauge a candidate’s potential for learning new tasks, adapting to job challenges, and developing in their role. This approach is especially prominent in fields requiring high levels of cognitive performance, such as research, engineering, and management. One notable example is the Armed Services Vocational Aptitude Battery (ASVAB), a multi-test battery that assesses candidates for military service. The ASVAB includes subtests like arithmetic reasoning, mechanical comprehension, and word knowledge, all of which reflect diverse cognitive abilities. These individual scores are then combined into the Armed Forces Qualifying Test (AFQT) score, an overall measure that serves as a proxy for GI. The AFQT score acts as a threshold across military branches, with each branch requiring minimum scores.

Here are a few ASVAB-style sample questions that reflect different cognitive areas while collectively representing general intelligence:

  1. Arithmetic Reasoning:
    If a train travels at 60 mph for 3 hours, how far does it go?
    Answer: 180 miles
  2. Word Knowledge:
    What does the word “arduous” most nearly mean?
    Answer: Difficult
  3. Mechanical Comprehension:
    If gear A turns clockwise, which direction will gear B turn if it is directly connected?
    Answer: Counterclockwise

 

How GI is Measured


In measuring GI, psychometricians use a variety of statistical techniques to ensure the reliability and validity of intelligence assessments. One common approach is factor analysis, a statistical method that identifies the relationships between variables and ensures that test items truly measure “g” as intended.

Tests designed to measure general intelligence are structured to cover a range of cognitive functions, capturing a broad spectrum of mental abilities. Each subtest score contributes to a composite score that reflects an individual’s general cognitive ability. Assessments are also periodically normed, or standardized, so that scores remain meaningful and comparable over time. This standardization process helps maintain the relevance of GI scores in diverse populations.
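
To illustrate how subtest scores can feed a composite, here is a simplified sketch that standardizes raw subtest scores against norm-group means and SDs, averages the z-scores, and rescales to an IQ-like metric (mean 100, SD 15). The subtest names and norm values are invented, and operational batteries norm the composite itself rather than simply averaging z-scores.

    # Hypothetical norm-group means and SDs for three subtests.
    norms = {
        "verbal":       (50.0, 10.0),
        "quantitative": (40.0,  8.0),
        "memory":       (30.0,  6.0),
    }
    examinee = {"verbal": 62, "quantitative": 37, "memory": 33}

    # Standardize each subtest, average, and rescale to mean 100 / SD 15.
    z_scores = [(examinee[t] - mean) / sd for t, (mean, sd) in norms.items()]
    composite = 100 + 15 * (sum(z_scores) / len(z_scores))
    print(round(composite, 1))   # about 106.6 for this invented examinee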

 

The Importance of GI in Modern Assessment

GI continues to be a critical measure for various practical and theoretical applications:

  • Predicting Success: Numerous studies have linked GI to a wide array of outcomes, from academic performance to career advancement. Because “g” encompasses the ability to learn and adapt, it is often a better predictor of success than task-specific skills alone. In fact, meta-analyses indicate that g accounts for approximately 25% of the variance in job performance, highlighting its unparalleled predictive power in educational and occupational contexts.
  • Validating Assessments: In psychometrics, GI is used to validate and calibrate assessment tools, ensuring that they measure what they intend to. Understanding “g” helps in creating reliable test batteries and composite scores, making it essential for effective educational and professional testing.
  • Advancing Cognitive Research: GI also plays a vital role in cognitive research, helping psychologists understand the nature of mental processes and the structure of human cognition. Studies on “g” contribute to theories about how people learn, adapt, and solve problems, fueling ongoing research in cognitive psychology and neuroscience.

 

The Future of GI in Assessment

With advancements in technology, the assessment of GI is becoming more sophisticated and accessible. Computerized adaptive testing (CAT) and machine learning algorithms allow for more personalized assessments, adjusting test difficulty based on real-time responses. These innovations not only improve the accuracy of GI testing but also provide a more engaging experience for test-takers.

As our understanding of human cognition expands, the concept of GI remains a cornerstone in both educational and occupational assessments. The “g” factor offers a powerful framework for understanding mental abilities and continues to be a robust predictor of various life outcomes. Whether applied in the classroom, the workplace, or in broader psychological research, GI is a valuable metric for understanding human potential and guiding personal and professional development.

Personality plays a crucial role in shaping our behaviors, attitudes, and overall interactions with the world. One of the most widely accepted frameworks for understanding personality is the Big Five Personality Traits model, also known as the OCEAN model, which consists of five broad dimensions that describe human personality. These traits are:

  1. Openness to Experience
  2. Conscientiousness
  3. Extraversion
  4. Agreeableness
  5. Neuroticism

In the field of psychological and educational assessment, personality measurements serve as powerful tools for understanding individuals and predicting outcomes like job performance, academic success, and personal satisfaction. In this blog post, we will explore the Big Five’s role in assessments and highlight its relevance in various applications.

The Big Five and Assessment: Practical Applications

The Big Five Personality Traits are widely used in assessments across fields, from human resources to clinical psychology, and even in education. These traits provide a measurable, standardized way of understanding an individual’s personality. Through assessments based on this model, employers can predict how well a potential employee might fit within a team, or educational institutions can gauge a student’s learning preferences and potential areas of growth.

Here’s a breakdown of how the Big Five dimensions translate into practical assessments:


  • Openness to Experience: This trait reflects creativity, curiosity, and a willingness to embrace new ideas. Assessments can use this dimension to identify candidates or students who thrive in environments requiring innovation and adaptability.
  • Conscientiousness: Often linked to self-discipline, responsibility, and goal-directed behavior, conscientiousness assessments are crucial in predicting job performance. Employees high in conscientiousness tend to be reliable, organized, and effective in roles requiring careful planning and execution. A study by Sanchez-Ruiz and Khoury (2019) highlights conscientiousness as a strong predictor of academic success, explaining up to 14% of the variance in academic performance (e.g., GPA).
  • Extraversion: Assessing extraversion helps understand how individuals interact with their environment. In team-based settings, extraverted individuals are often proactive, enthusiastic, and effective communicators. This trait is particularly relevant in leadership assessments, as discussed in our post on general intelligence and leadership dynamics.
  • Agreeableness: Personality assessments measuring agreeableness can reveal how cooperative, empathetic, and considerate a person is. High levels of agreeableness are often associated with positive teamwork dynamics, which is valuable for roles involving collaboration and conflict resolution.
  • Neuroticism: This trait measures emotional stability. Individuals with lower neuroticism scores are better able to cope with stress and are less likely to experience negative emotions frequently. Understanding this trait helps organizations manage workplace stress and predict an employee’s reaction to pressure.

A critical method used to develop and validate the Big Five framework is factor analysis, a statistical technique that helps identify underlying variables or factors that explain the patterns observed in data. Factor analysis ensures that each trait in the model reflects distinct personality dimensions, contributing to its scientific rigor and practical applicability in assessments.
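To make the factor-analytic step concrete, here is a minimal sketch of an exploratory factor analysis in Python. It assumes scikit-learn is available and uses simulated Likert-type data purely for illustration; in a real study you would substitute actual questionnaire responses and check whether items written for the same trait load on the same factor.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Illustrative only: an (n_respondents x n_items) matrix of 1-5 Likert ratings.
# Real Big Five data would come from an actual questionnaire administration.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(500, 25)).astype(float)

# Standardize the items, then extract five factors with a varimax rotation,
# mirroring the five-dimension structure the Big Five model proposes.
z = StandardScaler().fit_transform(responses)
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0)
factor_scores = fa.fit_transform(z)   # one row of five factor scores per respondent
loadings = fa.components_.T           # item-by-factor loading matrix

# In practice, you would inspect the loadings to confirm that items intended
# to measure the same trait cluster together on the same factor.
print(np.round(loadings, 2))
```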

Adapting Big Five Assessments Across Populations: Example Items

The Big Five Personality Traits offer a flexible framework for assessing personality across diverse groups. Tailoring assessment items to specific populations—such as high school students, job candidates, or employees—ensures that the results are relevant and actionable. Below are examples of how Big Five items might vary to suit these distinct contexts:

High School Students

High school assessments aim to capture personality traits in a way that is relatable to young learners’ academic and social experiences. The language is straightforward, focusing on school-related scenarios and responsibilities.

  • Openness to Experience: “I enjoy trying new activities in school, even if they’re unfamiliar to me.”
  • Conscientiousness: “I complete my homework on time and organize my assignments.”
  • Extraversion: “I often speak up in class discussions and like working with others on group projects.”
  • Agreeableness: “I try to understand my classmates’ feelings and help them when I can.”
  • Neuroticism: “I get nervous before tests or when I have to speak in front of the class.”

College Students


For college assessments, items often bridge between academic and emerging professional experiences. Questions highlight independence and social interactions typical in a college environment.

  • Openness to Experience: “I seek out new learning experiences that help broaden my academic interests.”
  • Conscientiousness: “I manage my study schedule effectively to stay on top of my coursework.”
  • Extraversion: “I feel comfortable participating in campus organizations or social activities.”
  • Agreeableness: “I respect my classmates’ ideas during group assignments and try to work toward common goals.”
  • Neuroticism: “I can manage academic pressure and stay calm during exams.”

Job Candidates or Employees

For employee selection and professional development, items are adapted to reflect workplace scenarios. The questions target traits relevant to job performance, teamwork, and adaptability in a professional setting.

  • Openness to Experience: “I am comfortable taking on new projects, even if they involve skills I haven’t fully developed yet.”
  • Conscientiousness: “I am meticulous in planning and meeting deadlines at work.”
  • Extraversion: “I enjoy collaborating with team members and sharing ideas in meetings.”
  • Agreeableness: “I make an effort to understand my colleagues’ perspectives and offer help when needed.”
  • Neuroticism: “I handle work-related stress effectively, staying calm under pressure.”

These examples illustrate the adaptability of the Big Five model, ensuring that assessments are both meaningful and relevant to the unique demands of each population. By adjusting the context and language, personality assessments can yield insights that are more closely aligned with individuals’ day-to-day environments and challenges.
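Once items like these have been administered, scoring is straightforward. Below is a minimal sketch in Python of averaging Likert responses within each trait; the item IDs, trait assignments, and the reverse-keyed item are hypothetical and only meant to show the mechanics.

```python
# Hypothetical item map: item ID -> Big Five trait it is written to measure.
ITEMS = {
    "o1": "Openness", "c1": "Conscientiousness", "e1": "Extraversion",
    "a1": "Agreeableness", "n1": "Neuroticism", "n2": "Neuroticism",
}
# Hypothetical reverse-keyed item, e.g., "I stay calm under pressure"
# scored against Neuroticism.
REVERSE_KEYED = {"n2"}

def score_big_five(responses: dict[str, int], scale_max: int = 5) -> dict[str, float]:
    """Average 1-to-scale_max Likert responses within each trait,
    reverse-scoring items keyed in the opposite direction."""
    totals: dict[str, float] = {}
    counts: dict[str, int] = {}
    for item_id, rating in responses.items():
        if item_id in REVERSE_KEYED:
            rating = scale_max + 1 - rating
        trait = ITEMS[item_id]
        totals[trait] = totals.get(trait, 0) + rating
        counts[trait] = counts.get(trait, 0) + 1
    return {trait: totals[trait] / counts[trait] for trait in totals}

print(score_big_five({"o1": 4, "c1": 5, "e1": 2, "a1": 4, "n1": 2, "n2": 4}))
```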

Assessing Personality in Recruitment and Development


Personality assessments based on the Big Five are particularly valuable in recruitment and professional development. These tests allow employers to gain insights into whether a candidate’s personality traits align with the demands of the role or the company culture. For instance, a high score in conscientiousness might indicate a strong fit for roles requiring meticulous attention to detail, while high extraversion may be suited for sales or customer-facing positions.

In the educational domain, personality assessments can guide personalized learning experiences. By understanding students’ traits, teachers and administrators can tailor learning environments that optimize student engagement and motivation, aligning personality with learning approaches. The use of personality data alongside other cognitive measures, like general intelligence tests, provides a more holistic view of an individual’s strengths and areas for growth. For instance, this paper investigates the impact of Big Five personality traits on the academic performance of university students.

The Value of Big Five Assessments in Modern Contexts

Modern assessments incorporating the Big Five model provide organizations and institutions with actionable insights. They help predict employee retention, job satisfaction, academic achievement, and overall well-being. In an era where data-driven decisions are crucial, using personality assessments allows for more informed hiring, training, and development processes.

As technology continues to advance, these assessments are becoming more efficient and accessible through tools like computerized adaptive testing (CAT) and online platforms. These modern approaches ensure that personality assessments are more personalized and adaptive to the test-taker’s responses, enhancing the accuracy and relevance of the results.

Exploring Open-Source Personality Assessments

For those interested in learning more about Big Five personality assessments, the Open Psychometrics Project provides access to an IPIP-based Big Five assessment. This resource offers a practical introduction to personality testing, allowing users to explore the model’s dimensions through a public, scientifically supported tool.

Conclusion

The Big Five Personality Traits are foundational in the field of assessment, offering a reliable and scientifically grounded framework to understand and measure human behavior. Like the RIASEC model, which categorizes individuals based on career-oriented personality factors, the Big Five provides a structured approach to capture personality dimensions, whether applied in recruitment, educational development, or personal growth. Assessments based on the Big Five offer valuable insights that help individuals and organizations thrive.

 

The introduction of Bring Your Own Device (BYOD) policies by academic institutions has led to significant changes in education, particularly in the context of assessments. These policies permit students to bring and use their personal devices—such as laptops, tablets, or smartphones—during tests and exams, allowing them to take assessments in a familiar environment and potentially improving their performance. According to this source, 59% of companies had implemented BYOD policies by 2024. However, implementing BYOD for assessments brings challenges alongside the opportunities these policies provide.

Impact on Student Performance

Implementing BYOD in assessments can significantly improve student performance. Students benefit from reduced anxiety and a seamless experience when utilizing a device they are familiar with. For instance, a student accustomed to the accessibility features on their personal tablet, like text-to-speech or customized display settings, can perform better when these features are readily available during assessments.

Because they are working on a familiar device, students may face fewer technical issues, such as navigating an unfamiliar operating system or using a non-preferred input method. For example, left-handed students might struggle with a right-handed mouse setup on school devices but would be comfortable using their own device configured to their preferences.

BYOD allows for a greater variety of resources to be utilized, such as educational applications and online materials, which can aid students’ problem-solving and critical thinking. In mathematics assessments, students might use graphing tools they’re proficient with, leading to deeper understanding and application of concepts. Furthermore, formative assessments can be conducted through engaging platforms that provide immediate feedback, aiding educators in tailoring instruction to meet individual students’ needs. Platforms like Kahoot! and Quizlet make learning interactive and can be seamlessly integrated into BYOD environments.

Security Concerns

Despite these advantages, BYOD presents potential security concerns, especially regarding the integrity of assessments. Students may access unauthorized resources, such as websites or applications, to find answers during an exam. For instance, a student might use messaging apps to communicate with peers or search engines to look up answers, compromising the fairness of the assessment.

To mitigate these concerns, strict guidelines and monitoring practices should be implemented by institutions and instructors. Secure testing environments are crucial, including both the physical location and the use of specific software that restricts access to unauthorized resources. Lockdown browsers like Respondus LockDown Browser or Safe Exam Browser can prevent students from navigating away from the test interface or opening other applications during the exam.

Creating a culture of academic integrity is of the utmost importance. Educators can discourage students from cheating by informing them of the ethical and academic implications of dishonest practices and emphasizing the importance of honest assessment. Investing in robust security measures, such as remote proctoring services and secure networks, can ensure the integrity of the assessment while still providing the benefits of BYOD. For non-educational exams such as certifications, the use of a non-disclosure agreement (NDA) or similar legal document is essential. For example, the agreement can state that the candidate will take the test on their own device, will not attempt to cheat or steal content, and could face consequences such as being barred from retaking the test for one year if they do.

Implementation Strategies

Successful implementation of BYOD requires careful planning and execution. Institutions should begin by providing clear guidelines that define acceptable use of personal devices and the responsibilities assigned to students. This includes specifying which devices are permitted and which applications may be used during the assessment.

Training should be provided to staff to ensure they understand the permissions and restrictions and can provide a secure environment for students. Educators should be familiar with the technology used in assessments to troubleshoot issues and prevent security breaches. For example, teachers can be trained to use monitoring software that alerts them if a student tries to access unauthorized resources during an exam.

Considerations should also be given to those who do not own a personal device. Institutions might provide loaned devices or partner with organizations to supply devices, ensuring all students have equal access. For instance, a school could establish a device loan program where students can borrow laptops for the duration of the assessment.

Regular evaluations of the BYOD policy will allow for accommodations for new technologies and evolving student needs, ensuring the policy remains effective and beneficial for all involved parties.

Security Checklist for Implementing BYOD in Assessment

  1. Establish Clear Policies and Guidelines

    • Define acceptable devices and software.

    • Outline prohibited applications and websites.

    • Communicate consequences for policy violations.

  2. Use Secure Assessment Platforms

    • Implement lockdown browsers to prevent access to unauthorized resources.

    • Ensure compatibility with various devices and operating systems.

    • Regularly update platforms to address security vulnerabilities.

  3. Authenticate Student Identity

    • Require secure logins for assessment access.

    • Consider biometric verification for high-stakes exams.

  4. Monitor Assessment Environments

    • Utilize proctoring software to oversee student activity.

    • Train staff to recognize and address suspicious behaviors.

  5. Provide Technical Support

    • Offer assistance before and during assessments.

    • Prepare for common technical issues related to diverse devices.

  6. Promote Academic Integrity

    • Educate students on the importance of honest practices.

    • Include honor codes or integrity pledges in assessments.

  7. Ensure Equity of Access

    • Provide devices for students without personal ones.

    • Offer alternative assessment methods if necessary.

  8. Regularly Review and Update Security Measures

    • Stay informed about emerging security threats.

    • Adjust policies and technologies accordingly.

  9. Secure Network Infrastructure

    • Use encrypted networks during assessments.

    • Restrict network access to authorized users.

  10. Data Privacy Compliance

    • Adhere to laws and regulations regarding student data.

    • Ensure all platforms used comply with privacy standards.

Conclusion

The BYOD approach to assessments offers exciting opportunities to improve student performance but also introduces challenges related to security and implementation. In a world of constant technological innovation, institutions must balance leveraging technology to enhance learning with maintaining the integrity of assessments. Addressing these concerns through effective strategies allows institutions to benefit from the more dynamic and responsive assessment environment that BYOD provides.

Bloom’s Taxonomy is a hierarchical classification of cognitive levels ranging from lower to higher order thinking, which provides a valuable framework for test development. The development of effective assessments is a cornerstone of educational practice, essential for measuring student achievement and informing instructional decisions, and effective use of Bloom’s Taxonomy can improve the validity of assessments.

 

Why use Bloom’s Taxonomy in Assessment?

By integrating Bloom’s Taxonomy into processes like test blueprinting and item design, educators can create assessments that not only evaluate basic knowledge but also foster critical thinking, problem-solving, and the application of concepts in new contexts (generalization).

Utilizing Bloom’s Taxonomy in test blueprints involves aligning learning objectives with their corresponding cognitive levels (remembering, understanding, applying, analyzing, evaluating, creating). A test blueprint serves as a strategic plan that outlines the distribution of test items across various content areas and cognitive skills. By mapping each learning objective to a specific level of Bloom’s Taxonomy, educators ensure a balanced assessment that reflects the intended curriculum emphasis. This alignment guarantees that the test measures a range of abilities, from factual recall (remembering) to complex analysis and evaluation (analyzing, evaluating).

In item design, Bloom’s Taxonomy guides the creation of test questions that target specific cognitive levels. For example, questions aimed at the “understanding” level might ask students to paraphrase a given passage, while those at the “applying” level could present real-world scenarios requiring the use of learned principles (e.g., calculating the perimeter of a rectangle to determine how many meters of fencing to buy). Higher-order questions at the “analyzing”, “evaluating”, or “creating” levels challenge students to distinguish between arguments, critique methodologies, or design original solutions. This deliberate crafting of items ensures that assessments are not disproportionately focused on lower-order skills but instead promote deeper cognitive engagement.

Moreover, incorporating Bloom’s Taxonomy into test development enhances the validity and reliability of assessments and aids in identifying specific areas where students may struggle, allowing for targeted instructional interventions. By fostering a comprehensive evaluation of both foundational knowledge and advanced thinking skills, Bloom’s Taxonomy contributes to more meaningful assessments that support student growth and achievement, whether in education, certification, or other types of assessment.

Bloom’s Taxonomy is an important tool for developing valid educational assessments, because it targets the content at the appropriate complexity for the intended population. In the world of psychometrics and assessments, understanding cognitive levels is essential for creating effective exams that accurately measure a candidate’s knowledge, skills, and abilities. Cognitive levels, often referred to as levels of cognition, are typically categorized into a hierarchy that reflects how deeply an individual understands and processes information. This concept is foundational in education and testing, including professional certification exams, where assessing not just knowledge but how it is applied is critical.

One of the most widely recognized frameworks for cognitive levels is Bloom’s Taxonomy, developed by educational psychologist Benjamin Bloom in the 1950s. Bloom’s Taxonomy classifies cognitive abilities into six levels, ranging from basic recall of facts to more complex evaluation and creation of new knowledge. For exam creators, this taxonomy is valuable because it helps design assessments that challenge different levels of thinking.

Here’s an overview of the cognitive levels in Bloom’s Taxonomy, with examples relevant to assessment design:

  1. Remembering: This is the most basic level of cognition and involves recalling facts or information. In the context of an exam, this could mean asking candidates to memorize specific definitions or procedures.
    • Example: “Define the term ‘psychometrics.'”
    • Exam Insight: While important, questions that assess only the ability to remember facts may not provide a full picture of a candidate’s competence. They’re often used in conjunction with higher-level questions to ensure foundational knowledge.
  2. Understanding: The next level involves comprehension – being able to explain ideas or concepts. Rather than simply recalling information, the test-taker demonstrates an understanding of what the information means.
    • Example: “Explain the difference between formative and summative assessments.”
    • Exam Insight: Understanding questions helps gauge whether a candidate can interpret concepts correctly, which is essential for fields like psychometrics, where understanding testing methods and principles is key.
  3. Applying: Application involves using information in new situations. This level goes beyond understanding by asking candidates to apply their knowledge in a practical context.
    • Example: “Given a set of psychometric data, identify the most appropriate statistical method to analyze test reliability.”
    • Exam Insight: This level of cognition is crucial in high-stakes exams, especially in certification contexts where candidates must demonstrate their ability to apply theoretical knowledge to real-world scenarios.
  4. Analyzing: At this level, candidates are expected to break down information into components and explore relationships among the parts. Analysis questions often require deeper thinking and problem-solving skills.
    • Example: “Analyze the factors that could lead to bias in a psychometric assessment.”
    • Exam Insight: Analytical skills are critical for assessing a candidate’s ability to think critically about complex issues, which is essential in roles like test development or evaluation in assessment ecosystems.
  5. Evaluating: Evaluation involves making judgments about the value of ideas or materials based on criteria and standards. This might include comparing different solutions or assessing the effectiveness of a particular approach.
    • Example: “Evaluate the effectiveness of different psychometric models for ensuring the fairness of certification exams.”
    • Exam Insight: Evaluation questions are typically found in advanced assessments, where candidates are expected to critique processes and propose improvements. This level is vital for ensuring that individuals in leadership roles can make informed decisions about the tools they use.
  6. Creating: The highest cognitive level is creation, where candidates generate new ideas, products, or solutions based on their knowledge and analysis. This level requires innovative thinking and often asks for the synthesis of information.
    • Example: “Design an assessment framework that incorporates both traditional and modern psychometric theories.”
    • Exam Insight: Creating-level questions are rare in most standardized tests but may be used in specialized certifications where innovation and leadership are critical. This level of cognition assesses whether the candidate can go beyond existing knowledge and contribute to the field in new and meaningful ways.

 

Bloom’s Taxonomy and Cognitive Levels in High-Stakes Exams

When designing high-stakes exams – such as those used for professional certifications or employment tests – it’s important to strike a balance between assessing lower and higher cognitive levels. While remembering and understanding provide insight into the candidate’s foundational knowledge, analyzing and evaluating help gauge their ability to think critically and apply knowledge in practical scenarios.

For example, a psychometric exam for certifying test developers might include questions across all cognitive levels:

  • Remembering: Questions that assess knowledge of psychometric principles; basic definitions, processes, and so on.
  • Understanding: Questions that ask for an explanation of item response theory.
  • Applying: Scenarios where candidates must apply these theories to improve test design.
  • Analyzing: Situations where candidates analyze a poorly performing test item.
  • Evaluating: Questions that ask candidates to critique the use of certain assessment methods.
  • Creating: Tasks where candidates design new assessment tools.

Bloom’s Taxonomy and Learning Psychometrics: An Applied Example for Test Blueprint Development

Developing a test blueprint using Bloom’s Taxonomy ensures that assessments in psychometrics effectively measure a range of cognitive skills. Here is an example of how you can apply Bloom’s Taxonomy to create a comprehensive test blueprint for a course in psychometrics.

Step 1. Identify content strands and their cognitive demands

Let’s consider the following strands and their corresponding cognitive demand:

 

| Content Strand | Cognitive Demand |
|---|---|
| Foundations of Psychometrics | Remembering: Recall basic definitions and concepts in psychometrics. Understanding: Explain fundamental principles and their significance. |
| Classical Test Theory | Understanding: Describe components of CTT. Applying: Use CTT formulas to compute test scores. Analyzing: Interpret the implications of test scores and error. |
| Item Response Theory | Understanding: Explain the basics of IRT models. Applying: Apply IRT to analyze test items. Analyzing: Compare IRT with CTT in terms of advantages and limitations. |
| Reliability and Validity | Understanding: Define different types of reliability and validity. Evaluating: Assess the reliability and validity of given tests. Analyzing: Identify factors affecting reliability and validity. |
| Test Development and Standardization | Applying: Develop test items following psychometric principles. Creating: Design a basic blueprint. Evaluating: Critique test items for bias and fairness. |

Step 2. Use Bloom’s Taxonomy to elaborate a test blueprint

Using Bloom’s Taxonomy, we can create a test blueprint that aligns the content strands with appropriate cognitive levels.

This would be a test blueprint table:

| Content Strand | Bloom’s Level | Learning Objective | # Items | Item Type |
|---|---|---|---|---|
| Foundations of Psychometrics | Remembering | Recall definitions and concepts | 5 | Multiple-choice |
| Foundations of Psychometrics | Understanding | Explain principles and their significance | 4 | Short-answer |
| Classical Test Theory | Understanding | Describe components of CTT | 3 | True/False with explanation |
| Classical Test Theory | Applying | Compute test scores using CTT formulas | 5 | Calculation problems |
| Classical Test Theory | Analyzing | Interpret test scores and error implications | 4 | Data analysis questions |
| Item Response Theory | Understanding | Explain basics of IRT models | 3 | Matching |
| Item Response Theory | Applying | Analyze test items using IRT | 4 | Problem solving |
| Item Response Theory | Analyzing | Compare IRT and CTT | 3 | Comparative essays |
| Reliability and Validity | Understanding | Define types of reliability and validity | 4 | Fill-in-the-blank |
| Reliability and Validity | Analyzing | Identify factors affecting reliability and validity | 4 | Case studies |
| Reliability and Validity | Evaluating | Assess the reliability and validity of tests | 5 | Critical evaluations |
| Test Development and Standardization | Applying | Develop test items using psychometric principles | 4 | Item writing exercises |
| Test Development and Standardization | Creating | Design a basic test blueprint | 2 | Project-based tasks |
| Test Development and Standardization | Evaluating | Critique test items for bias and fairness | 3 | Peer review assignments |
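Blueprints like this are easy to represent and sanity-check in code. Here is a minimal sketch in Python, assuming the strand names, levels, and item counts from the table above; in practice the blueprint would more likely live in an item-banking system than in a script.

```python
# Represent each blueprint row above as (content strand, Bloom level, # items).
from collections import Counter

BLUEPRINT = [
    ("Foundations of Psychometrics", "Remembering", 5),
    ("Foundations of Psychometrics", "Understanding", 4),
    ("Classical Test Theory", "Understanding", 3),
    ("Classical Test Theory", "Applying", 5),
    ("Classical Test Theory", "Analyzing", 4),
    ("Item Response Theory", "Understanding", 3),
    ("Item Response Theory", "Applying", 4),
    ("Item Response Theory", "Analyzing", 3),
    ("Reliability and Validity", "Understanding", 4),
    ("Reliability and Validity", "Analyzing", 4),
    ("Reliability and Validity", "Evaluating", 5),
    ("Test Development and Standardization", "Applying", 4),
    ("Test Development and Standardization", "Creating", 2),
    ("Test Development and Standardization", "Evaluating", 3),
]

items_per_strand = Counter()
items_per_level = Counter()
for strand, level, n_items in BLUEPRINT:
    items_per_strand[strand] += n_items
    items_per_level[level] += n_items

# Quick checks: total test length, and whether higher-order levels are represented.
print("Total items:", sum(items_per_level.values()))
print("Items per Bloom level:", dict(items_per_level))
print("Items per content strand:", dict(items_per_strand))
```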

By first delineating the content strands and their associated cognitive demands, and then applying Bloom’s Taxonomy, educators can develop a test blueprint that is both systematic and effective. This method ensures that assessments are comprehensive, balanced, and aligned with educational goals, ultimately enhancing the measurement of student learning in psychometrics.

Benefits of this approach include:

  • Comprehensive Coverage: Ensures all important content areas and cognitive skills are assessed.
  • Balanced Difficulty: Provides a range of item difficulties to discriminate between different levels of student performance.
  • Enhanced Validity: Aligns assessment with learning objectives, improving content validity.
  • Promotes Higher-Order Thinking: Encourages students to engage in complex cognitive processes.

Conclusion: Why Cognitive Levels Matter in Assessments

Cognitive levels play a crucial role in assessment design. By incorporating questions that target different levels of cognition, exams can provide a more complete picture of a candidate’s abilities.

By aligning exam content with cognitive levels, you ensure that your assessments are not just measuring rote memorization but the full spectrum of cognitive abilities – from basic understanding to advanced problem-solving and creativity. This creates a more meaningful and comprehensive evaluation process for both candidates and employers alike.

 


Psychometrics is the science of educational and psychological assessment, using data to ensure that tests are fair and accurate.  Ever felt like you took a test which was unfair, too hard, didn’t cover the right topics, or was full of questions that were simply confusing or poorly written?  Psychometricians are the people who help organizations fix these things using data science, as well as more advanced topics like how to design an AI algorithm that adapts to each examinee.

Psychometrics is a critical aspect of many fields.  Having accurate information on people is essential to education, human resources, workforce development, corporate training, professional certifications/licensure, medicine, and more.  It scientifically studies how tests are designed, developed, delivered, validated, and scored.

Key Takeaways on Psychometrics

  • Psychometrics is the study of how to measure and assess mental constructs, such as intelligence, personality, or knowledge of accounting law
  • Psychometrics is NOT just screening tests for jobs
  • Psychometrics is dedicated to making tests more accurate and fair
  • Psychometrics is heavily reliant on data analysis and machine learning, such as item response theory

What is Psychometrics? Definition & Meaning


Psychometrics is the study of assessment itself, regardless of what type of test is under consideration. In fact, many psychometricians don’t even work on a particular test; they work on psychometrics itself, such as new methods of data analysis.  Most professionals don’t care what the test is measuring, and will often switch to jobs on completely unrelated topics, such as moving from a K-12 testing company to psychological measurement to an accountancy certification exam.  We often refer to whatever we are measuring simply as “theta” – a term from item response theory.

Psychometrics tackles fundamental questions around assessment, such as how to determine if a test is reliable or if a question is of good quality, as well as much more complex questions like how to ensure that a score today on a university admissions exam means the same thing as it did 10 years ago.  Additionally, it examines phenomena like the positive manifold, where different cognitive abilities tend to be positively correlated, supporting the consistency and generalizability of test scores over time.

Psychometrics is a branch of data science.  In fact, it was around long before that term became a buzzword.  Don’t believe me?  Check out this Coursera course on Data Science: the first example they give of a foundational historical project in data science is… psychometrics!  (Early research on factor analysis of intelligence.)

Even though assessment is everywhere and Psychometrics is an essential aspect of assessment, to most people it remains a black box, and professionals are referred to as “psychomagicians” in jest. However, a basic understanding is important for anyone working in the testing industry, especially those developing or selling tests.

Psychometrics is NOT limited to very narrow types of assessment.  Some people use the term interchangeably with concepts like IQ testing, personality assessment, or pre-employment testing.  These are each but tiny parts of the field!  Also, it is not the administration of a test.

 

Why do we need Psychometrics?

The purpose of tests is to provide useful information about people, such as whether to hire them, certify them in a profession, or determine what to teach them next in school.  Better tests mean better decisions.  Why?  The scientific evidence is overwhelming that tests provide better information for decision makers than many other sources, such as interviews, resumes, or educational attainment.  Thus, tests serve an extremely useful role in our society.

The goal of psychometrics is to provide validity: evidence to support that interpretations of scores from the test are what we intended.  If a certification test is supposed to mean that someone passing it meets the minimum standard to work in a certain job, we need a lot of evidence about that, especially since the test is so high stakes in that case.  Meta-analysis, a key tool in psychometrics, aggregates research findings across studies to provide robust evidence on the reliability and validity of tests. By synthesizing data from multiple studies, meta-analysis strengthens the validity claims of tests, especially crucial in high-stakes certification exams where accuracy and fairness are paramount.

 

What does Psychometrics do?

Building and maintaining a high-quality test is not easy.  A lot of big issues can arise.  Much of the field revolves around solving major questions about tests: what should they cover, what is a good question, how do we set a good cutscore, how do we make sure that the test predicts job performance or student success, etc.  Many of these questions align with the test development cycle – more on that later.

How do we define what should be covered by the test? (Test Design)


Before writing any items, you need to define very specifically what will be on the test.  If the test is in credentialing or pre-employment, psychometricians typically run a job analysis study to form a quantitative, scientific basis for the test blueprints.  A job analysis is necessary for a certification program to get accredited.  In Education, the test coverage is often defined by the curriculum.

How do we ensure the questions are good quality? (Item Writing)

There is a corpus of scientific literature on how to develop test items that accurately measure whatever you are trying to measure.  A great overview is the book by Haladyna.  This is not just limited to multiple-choice items, although that approach remains popular.  Psychometricians leverage their knowledge of best practices to guide the item authoring and review process in a way that the result is highly defensible test content.  Professional item banking software provides the most efficient way to develop high-quality content and publish multiple test forms, as well as store important historical information like item statistics.

How do we set a defensible cutscore? (Standard Setting)

Test scores are often used to classify candidates into groups, such as pass/fail (Certification/Licensure), hire/non-hire (Pre-Employment), and below-basic/basic/proficient/advanced (Education).  Psychometricians lead studies to determine the cutscores, using methodologies such as Angoff, Beuk, Contrasting-Groups, and Borderline.
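To make this concrete, here is a minimal sketch of the arithmetic behind a modified-Angoff study, assuming each subject matter expert has rated the probability that a minimally competent candidate answers each item correctly; the ratings below are hypothetical.

```python
import numpy as np

# Hypothetical modified-Angoff ratings: rows = SMEs, columns = items.
# Each value is the judged probability that a minimally competent
# candidate answers that item correctly.
ratings = np.array([
    [0.70, 0.55, 0.80, 0.60],
    [0.65, 0.60, 0.85, 0.55],
    [0.75, 0.50, 0.80, 0.65],
])

item_means = ratings.mean(axis=0)      # expected performance on each item
raw_cutscore = item_means.sum()        # cutscore on the raw (number-correct) metric
percent_cut = 100 * raw_cutscore / ratings.shape[1]

print(f"Raw cutscore: {raw_cutscore:.2f} out of {ratings.shape[1]} items ({percent_cut:.1f}%)")
```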

How do we analyze results to improve the exam? (Psychometric Analysis)

Psychometricians are essential for this step, as the statistical analyses can be quite complex.  Smaller testing organizations typically utilize classical test theory, which is based on simple mathematics like proportions and correlations.  Large, high-profile organizations typically use item response theory (IRT), which is based on a type of nonlinear regression analysis.  Psychometricians evaluate overall reliability of the test, difficulty and discrimination of each item, distractor analysis, possible bias, multidimensionality, linking multiple test forms/years, and much more.  Software such as  Iteman  and  Xcalibre  is also available for organizations with enough expertise to run statistical analyses internally.  Scroll down below for examples.
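As a small illustration of the classical approach, here is a sketch of computing two workhorse statistics of classical test theory, the P-value (difficulty) and the point-biserial (discrimination), from a scored 0/1 response matrix; the data are simulated purely for the example.

```python
import numpy as np

# Simulated scored responses: 200 examinees x 10 items, 1 = correct, 0 = incorrect.
rng = np.random.default_rng(1)
scored = (rng.random((200, 10)) < 0.7).astype(int)

total_scores = scored.sum(axis=1)
p_values = scored.mean(axis=0)                 # proportion correct per item

point_biserials = []
for j in range(scored.shape[1]):
    rest_score = total_scores - scored[:, j]   # exclude the item from its own total
    r = np.corrcoef(scored[:, j], rest_score)[0, 1]
    point_biserials.append(r)

for j, (p, r) in enumerate(zip(p_values, point_biserials), start=1):
    print(f"Item {j:2d}: P = {p:.2f}, point-biserial = {r:+.2f}")
```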

How do we compare scores across groups or years? (Equating)

This is referred to as linking and equating.  There are some psychometricians that devote their entire career to this topic.  If you are working on a certification exam, for example, you want to make sure that the passing standard is the same this year as last year.  If you passed 76% last year and this year you passed 25%, not only will the candidates be angry, but there will be much less confidence in the meaning of the credential.
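The statistical machinery varies (mean, linear, equipercentile, and IRT-based methods are all common), but as one simple illustration, linear equating places a Form X score onto the Form Y scale by matching means and standard deviations:

$$ y^{*} = \frac{\sigma_Y}{\sigma_X}\,(x - \mu_X) + \mu_Y $$

where x is an observed score on Form X; when the groups taking the two forms are not equivalent, the means and standard deviations are typically estimated with anchor-item methods such as Tucker or Levine equating.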

How do we know the test is measuring what it should? (Validity)

Validity is the evidence provided to support score interpretations.  For example, we might interpret scores on a test to reflect knowledge of English, and we need to provide documentation and research supporting this.  There are several ways to provide this evidence.  A straightforward approach is to establish content-related evidence, which includes the test definition, blueprints, and item authoring/review.  In some situations, criterion-related evidence is important, which directly correlates test scores to another variable of interest.  Delivering tests in a secure manner is also essential for validity.

 

Where is Psychometrics Used?

Certification/Licensure/Credentialing

In certification testing, psychometricians develop the test via a documented chain of evidence following a sequence of research outlined by accreditation bodies, typically: job analysis, test blueprints, item writing and review, cutscore study, and statistical analysis.  Web-based item banking software like  FastTest  is typically useful because the exam committee often consists of experts located across the country or even throughout the world; they can then easily log in from anywhere and collaborate.

Pre-Employment

In pre-employment testing, validity evidence relies primarily on establishing appropriate content (a test on PHP programming for a PHP programming job) and the correlation of test scores with an important criterion like job performance ratings (shows that the test predicts good job performance).  Adaptive tests are becoming much more common in pre-employment testing because they provide several benefits, the most important of which is cutting test time by 50% – a big deal for large corporations that test a million applicants each year. Adaptive testing is based on item response theory, and requires a specialized psychometrician as well as specially designed software like  FastTest.

K-12 Education

Most assessments in education fall into one of two categories: lower-stakes formative assessment in classrooms, and higher-stakes summative assessments like year-end exams.  Psychometrics is essential for establishing the reliability and validity of higher-stakes exams, and for equating scores across different years.  Psychometricians are also important for formative assessments, which are moving towards adaptive formats because of the 50% reduction in test time, meaning that students spend less time testing and more time learning.

Universities

Universities typically do not give much thought to psychometrics even though a significant amount of testing occurs in higher education, especially with the move to online learning and MOOCs.  Given that many of the exams are high stakes (consider a certificate exam after completing a year-long graduate program!), psychometricians should be involved in establishing legally defensible cutscores and in statistical analysis to ensure reliable tests, and professionally designed assessment systems should be used for developing and delivering tests, especially where enhanced security is needed.

Medicine/Psychology

Have you ever taken a survey at your doctor’s office, or before/after a surgery?  Perhaps a depression or anxiety inventory at a psychotherapist?  Psychometricians have worked on these.

 

The Test Development Cycle

Psychometrics is the core of the test development cycle, which is the process of developing a strong exam.  It is sometimes called by similar names, such as the assessment lifecycle.


You will recognize some of the terms from the introduction earlier.  What we are trying to demonstrate here is that those questions are not standalone topics, or something you do once and simply file a report.  An exam is usually a living thing.  Organizations will often be republishing a new version every year or 6 months, which means that much of the cycle is repeated on that timeline.  Not all of it is; for example, many orgs only do a job analysis and standard setting every 5 years.

Consider a certification exam in healthcare.  The profession does not change quickly because things like anatomy never change and medical procedures rarely change (e.g., how to measure blood pressure).  So, every 5 years it does a job analysis of its certificants to see what they are doing and what is important.  This is then converted to test blueprints.  Items are re-mapped if needed, but most likely do not need it because there are probably only minor changes to the blueprints.  Then a new cutscore is set with the modified-Angoff method, and the test is delivered this year.  It is delivered again next year, but equated to this year rather than starting again.  However, the item statistics are still analyzed, which leads to a new cycle of revising items and publishing a new form for next year.

 

Example of Psychometrics in Action

Here is some output from our Iteman software.  This is deeply analyzing a single question on English vocabulary, to see if the student knows the word alleviate.  About 70% of the students answered correctly, with a very strong point-biserial.  Each distractor was selected by only a small minority of students, and the distractor point-biserials were negative, which adds evidence to the validity.  The graph shows that the line for the correct answer is going up while the others are going down, which is good.  If you are familiar with item response theory, you’ll notice how the blue line is similar to an item response function.  That is not a coincidence.


Now, let’s look at another one, which is more interesting.  Here’s a vocab question about the word confectioner.  Note that only 37% of the students get it right… even though there is a 25% chance just from guessing!!!  However, the point-biserial discrimination remains very strong at 0.49.  That means it is a really good item.  It’s just hard, which means it does a great job of differentiating amongst the top students.


A Glossary of Psychometric Terms

Accreditation: Accreditation by an outside agency affirms that an organization has met a certain level of standards. Certification testing programs may become accredited by meeting specified standards in test development, psychometrics, bylaws, management, etc.  Learn more.

Adaptive Test: A test that is delivered with an AI-based algorithm that personalizes it to each examinee, thereby making it much more secure and accurate while decreasing test length. Learn more.

Achievement: The psychometric term for measuring something that a student has learned, such as knowledge of the 9th grade biology curriculum, rather than an innate construct such as intelligence or conscientiousness.

Aptitude: A construct that is measured which is innate, usually in a cognitive context.  For example, logical reasoning ability.

Biserial Correlation: A classical index of item discrimination, highly similar to the more commonly used point-biserial. The biserial correlation assumes that the item scores and test scores reflect an underlying normal distribution, which is not always the case.

Blueprint: A test blueprint, or test specification, details how an exam is to be constructed. It includes important information, such as the total number of items, the number of items in each content area or domain, the number of items that are recall versus reasoning, and the item formats to be utilized.

Certification: A non-mandatory testing program that certifies that candidates have achieved a minimum standard of knowledge or performance.

Classical Test Theory (CTT): A psychometric analysis and test development paradigm based on correlations, proportions, and other statistics that are relatively simple compared to IRT. It is, therefore, more appropriate for smaller samples, especially those with fewer than 100 examinees.

Classification: The use of tests for classifying candidates into categories, such as pass/fail, nonmaster/master, or basic/proficient/advanced.

Cognitive Diagnostic Models (CDMs) aka Diagnostic Measurement Models (DMMs): A relatively new psychometric paradigm that frames the measurement problem not as one latent trait, but rather as individual skills that must be mastered.  So rather than 4th grade math achievement as a single scale, there are locations for adding fractions, dividing fractions, multiplying decimals, etc.  Can be used in concert with IRT.  Learn more.

Computerized Adaptive Testing (CAT): A dynamic method of test administration where items are selected one at a time to match item difficulty and candidate ability as closely as possible. This helps prevent candidates from being presented with items that are too difficult or too easy for them, which has multiple benefits. Often, the test only takes half as many items to obtain a similar level of accuracy to form-based tests. This reduces the testing time per examinee and also reduces the total number of times an item is exposed, as well as increasing security by the fact that nearly every candidate will receive a different set of items.

Computerized Classification Testing (CCT): An approach similar to CAT, but with different algorithms to reflect the fact that the purpose of the test is only to make a broad classification and not obtain a highly accurate point estimate of ability.

Concurrent Validity: An aspect of validity (see below) that correlates a test to other variables at the same time, to which we hope it correlates.  A university admissions test should correlate with high school grade point average – but not perfectly, since they are not exactly the same construct, and then what is the point of having the test?

Cutscore: Also known as a passing score, the cutscore is the score that a candidate must achieve to obtain a certain classification, such as “pass” on a licensure or certification exam.

Criterion-Referenced: A test score (not a test) is criterion-referenced if it is interpreted with regard to a specified criterion and not compared to scores of other candidates. For instance, providing the number-correct score does not relate any information regarding a candidate’s relative standing.

Differential item functioning: A specific type of analysis that evaluates whether an item is biased towards a subgroup.  This is different than overall test bias.

Distractors: Distractors are the incorrect options of a multiple-choice item. A distractor analysis is an important part of psychometric review, as it helps determine whether a distractor is inadvertently functioning as a keyed response.  Learn more.

Equating: A psychometric term for the process of determining comparable scores on different forms of an examination. For example, if Form A is more difficult than Form B, it might be desirable to adjust scores on Form A upward for the purposes of comparing them to scores on Form B. Usually, this is done statistically based on items that are on both forms, which are called equator, anchor, or common items. Because the groups who took the two forms are different, this is called a common items non-equivalent groups design.

Factor Analysis: An approach to analyzing complex data that seeks to break it down into major components or factors.  Used in many fields nowadays, but originally developed for psychometrics.  Two of the most common examples are the extensive research finding that personality items/measures boil down to the Big Five, and that intelligence items/measures boil down to general cognitive ability (though there is also evidence of distinct aspects within the strong positive manifold).

Form: Forms are specific sets of items that are administered together for a test. For example, if a test included a certain set of 100 items this year and a different set of 100 items next year, these would be two distinct forms.

Item: The basic component of a test, often colloquially referred to as a “question,” but items are not necessarily phrased as a question. They can be as varied as true/false statements, rating scales, and performance task simulations, in addition to the ubiquitous multiple-choice item.

Item Bank: A repository of items for a testing program, including items at all stages, such as newly written, reviewed, pretested, active, and retired.

Item Banker: A specialized software program that facilitates the maintenance and growth of an item bank by recording item stages, statistics, notes, and other characteristics.

Item Difficulty: A statistical index of how easy/hard the item is with respect to the underlying ability/trait. That is, an item is difficult if not many people get it correct or respond in the keyed direction.

Item Discrimination: A statistical index of the quality of the item, assessing how well it differentiates examinees of high versus low ability. Items with low discrimination are considered poor quality and are candidates to be revised or retired.

Item Response Theory (IRT): A comprehensive approach to psychometric analysis and test development that utilizes complex mathematical models. This provides several benefits, including the ability to design CATs, but requires larger sample sizes. A common rule of thumb is 100 candidates for the one-parameter model and 500 for the three-parameter model.

     a: The item response theory index of item discrimination, analogous to the point-biserial and biserial correlations in classical test theory. It reflects the slope of the item response function. Often ranging from 0.1 to 2.0 in practice, a higher value indicates a better-performing item.

     b: The item response theory index of item difficulty or location, analogous to the P-value (P+) of classical test theory. Typically ranging from -3.0 to 3.0 in practice, a higher value indicates a more difficult item.

     c: The item response theory pseudo-guessing parameter, representing the lower asymptote of the item response function. It is theoretically near the value of 1/k, where k is the number of alternatives. For example, with the typical four-option multiple-choice item, a candidate has a base chance of 25% of guessing the correct answer.
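Putting the three parameters together, the three-parameter logistic (3PL) model, in one common parameterization, gives the probability of a correct response as

$$ P(\theta) = c + (1 - c)\,\frac{1}{1 + e^{-a(\theta - b)}} $$

so the curve rises from a lower asymptote of c toward 1.0 as the examinee’s ability θ moves above the item difficulty b, with the steepness of the rise controlled by a. (Some programs include an additional scaling constant D ≈ 1.7 in the exponent.)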

Item Type: Items (test questions) can be a huge range of formats.  We are all familiar with single best answer multiple choice, but there are many others.   Some of these are: multiple response, drag and drop, essay, scored short answer, and equation editor.

Job Analysis: Also known as practice analysis or role delineation study, job analysis is a formal study used to determine the structure of a job and the KSAs important to success or competence. This is then used to establish the test blueprint for a professional testing program, a critical step in the chain of evidence for validity.

Key: The key is the correct response to an item.

KSA: KSA is an acronym for knowledge, skills, and abilities. A critical step in testing for employment or professional credentials is to determine the KSAs that are important in a job. This is often done via a job analysis study.

Licensure: A testing program mandated by a government body. The test must be passed in order to perform the task in question, whether it is to work in the profession or drive a car.

Norm-Referenced: A test score (not a test) is norm-referenced if it is interpreted with regard to the performance of other candidates. Percentile rank is an example of this because it does not provide any information regarding how many items the candidate got correct.

P-value: A classical index of item difficulty, presented as the proportion of candidates who correctly responded to the item. A value above 0.90 indicates an easy item, while a value below 0.50 indicates a relatively difficult item. Note that it is inverted; a higher value indicates less difficulty.

Point-Biserial Correlation: A classical index of item discrimination, calculated as the Pearson correlation between the item score and the total test score. If below 0.0, low-scoring candidates are actually doing better than high-scoring candidates, and the item should be revised or retired. Low positive values are marginal, higher positive values are ideal.

Polytomous: A psychometric term for data where there are more than two possible score points.  Multiple choice items, while having 3-5 options, are usually still only dichotomous (0/1 points).  Examples of a polytomous item are a Likert-style rating scale (“rate on a scale of 1 to 5”) and partial credit items or rubrics (scoring an essay as 0 to 5 points).

Power Test: A test where the goal is to measure the maximal knowledge, ability, or trait of the examinee.  For example, a medical certification exam with a generous time limit.

Predictive Validity: An aspect of validity (see below) that focuses on how well the test predicts important outcomes.  A university admissions test should predict 4-year graduation probability very well, and a pre-employment test on MS Excel should predict job performance for bookkeepers.

Pretest (or Pilot) Item: An item that is administered to candidates simply for the purposes of obtaining data for future psychometric analysis. The results on this item are not included in the score. It is often prudent to include a small number of pretest items in a test.

Reliability: A psychometric term for the repeat-ability or consistency of the measurement process. Often, this is indexed by a single number, most commonly the internal consistency index coefficient alpha or its dichotomous formulation, KR-20. Under most conditions, these range from 0.0 to 1.0, with 1.0 being a perfectly reliable measurement. However, just because a test is reliable does not mean that it is valid (i.e.,  measures what it is supposed to measure).
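For the most common index, coefficient alpha, the calculation is

$$ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right) $$

where k is the number of items, σ_i² is the variance of item i, and σ_X² is the variance of total scores; KR-20 is the same formula with item variances of p(1 − p) for dichotomously scored items.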

Scaling: Scaling is a process of converting scores obtained on an exam to an arbitrary scale. This is done so that all the forms and exams used by a testing organization are on a common scale. For example, suppose an organization had two testing programs, one with 50 items and one with 150 items. All scores could be put on the same scale to standardize score reporting.

Speeded Test: A test where the purpose is to see how fast the examinee can answer questions.  The questions are therefore not usually knowledge based.  For example, seeing how many 5-digit zip codes they can correctly type in 60 seconds.  Learn more.

Standard Error of Measurement: A psychometric term for a concept that quantifies the amount of error in an examinee’s score, since psychometrics is not perfect; even with the best Math test, a student might have variation in their result today vs next week.  The concept differs substantially in classical test theory vs. item response theory.
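In classical test theory, the usual formula is

$$ SEM = \sigma_X\,\sqrt{1 - \rho_{XX'}} $$

where σ_X is the standard deviation of observed scores and ρ_XX′ is the reliability; for example, a test with a standard deviation of 10 and a reliability of 0.91 has an SEM of 10 × √0.09 = 3 score points. In item response theory, the standard error instead varies across the score scale.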

Standard-Setting Study: A formal study conducted by a testing organization to determine standards for a testing program, which are manifested as a cutscore. Common methods include the Angoff, Bookmark, Contrasting Groups, and Borderline Survey methods.

Subject Matter Expert (SME): An extremely knowledgeable person within the test development process. SMEs are necessary to write items, review items, participate in standard-setting studies and job analyses, and oversee the testing program to ensure its fidelity to its true intent.

Validity: Validity is the concept that test scores can be interpreted as intended. For example, a test for certification in a profession should reflect basic knowledge of that profession, and not intelligence or other constructs, and scores can, therefore, be interpreted as evidencing professional competence. Validity must be formally established and maintained by empirical studies as well as sound psychometric and test development practices.  Learn more.

Psychometrics looks fun!  How can I join the band?

You will need a graduate degree.  I recommend you look at the NCME website (ncme.org) with resources for students.  Good luck!

Already have a degree and looking for a job?  Here are the two sites that I recommend:

  • NCME – Also has a job listings page that is really good (ncme.org)
  • Horizon Search – Headhunter for Psychometricians and I/O Psychologists