Predictive Validity

Pre-employment testing

Predictive validity is one type of test score validity. Validity, in a general sense, is the evidence we have to support intended interpretations of test scores, and there are different types of evidence we can gather to do so. Predictive validity refers to evidence that the test predicts the outcomes it is supposed to predict. Quantitative data supporting such conclusions makes the test more defensible and can improve the efficiency of its use. For example, if a university admissions test does a great job of predicting success at university, then universities will want to use it to select students who are more likely to succeed.
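In practice, predictive validity evidence is usually summarized as a correlation (a "validity coefficient") between test scores and a later criterion. As a minimal sketch with made-up numbers, here is how you might correlate admissions test scores with first-year GPA; the data are hypothetical and for illustration only.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: admissions test scores and the same students'
# first-year GPA, gathered a year later
test_scores = [410, 520, 480, 600, 550, 470, 630, 500]
first_year_gpa = [2.4, 3.1, 2.9, 3.6, 3.2, 2.7, 3.8, 3.0]

validity = pearson_r(test_scores, first_year_gpa)
print(round(validity, 3))
```

A strongly positive coefficient like this would support using the test for selection; in real research you would also report sample size, confidence intervals, and corrections discussed below.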


Examples of Predictive Validity

Predictive validity evidence can be gathered for a variety of assessment types.


  1. Pre-employment: Since the entire purpose of a pre-employment test is to positively predict desirable outcomes such as job performance, or negatively predict undesirable ones such as employee theft or short tenure, a great deal of effort goes into developing tests that function this way, and then documenting that they do.
  2. University Admissions: Like pre-employment testing, the entire purpose of university admissions exams is predictive. They should positively correlate with desirable outcomes (first-year GPA, four-year graduation rate) and negatively predict undesirable outcomes such as academic probation or dropping out.
  3. Prep Exams: Preparatory or practice tests are designed to predict performance on their target test. For example, if a prep test is designed to mimic the Scholastic Aptitude Test (SAT), one way to validate it is to gather examinees' actual SAT scores later and correlate them with their prep test scores.
  4. Certification & Licensure: The primary purpose of credentialing exams is not to predict job performance, but to ensure that the candidate has mastered the material necessary to practice the profession. Predictive validity is therefore less important than content-related validity, such as test blueprints based on a job analysis. However, some credentialing organizations do research on the "value of certification," linking it to improved job performance, reduced clinical errors, and related external variables such as higher salary.
  5. Medical/Psychological: Some assessments are used in clinical situations where predictive validity is essential. For instance, an assessment of knee pain used during initial treatment (physical therapy, injections) might be predictively correlated with later surgery. The same assessment might then be used after surgery to track rehabilitation.


Pre-employment testing is perhaps the most common use of this type of validity evidence. A recent study (Sackett, Zhang, Berry, & Lievens, 2022) presented a meta-analysis of the various types of pre-employment tests and other selection procedures (e.g., the structured interview), comparing their predictive power. It was a modern update to the classic article by Schmidt & Hunter (1998). While the past consensus was that cognitive ability tests provide the best predictive power across the widest range of situations, the new article suggests otherwise. It recommends structured interviews and job knowledge tests, which are more targeted toward the role in question, so it is not surprising that they perform well. This in turn suggests that you should not buy pre-fab ability tests and use them in a shotgun approach under an assumption of validity generalization, but instead leverage an online testing platform like FastTest that allows you to build high-quality exams specific to your organization.
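One technical issue at the heart of such meta-analyses is range restriction: a validity coefficient computed only on people who were hired understates the correlation in the full applicant pool, because the hires have a narrower range of scores. A standard fix is the Thorndike Case II correction. The sketch below implements that well-known formula with hypothetical numbers; the variable names and example values are my own, not from the study.

```python
import math

def correct_range_restriction(r, sd_unrestricted, sd_restricted):
    """Thorndike Case II correction: estimate the validity coefficient
    in the full applicant pool from the coefficient observed among
    hires, whose score variance is restricted by the selection itself."""
    u = sd_unrestricted / sd_restricted  # ratio of score SDs
    return (r * u) / math.sqrt(1 + r ** 2 * (u ** 2 - 1))

# Hypothetical: observed r = .30 among hires, whose score SD (8) is
# narrower than the full applicant pool's SD (12)
corrected = correct_range_restriction(0.30, 12.0, 8.0)
print(round(corrected, 3))
```

How aggressively to apply corrections like this is exactly what separates the newer Sackett et al. (2022) estimates from the older Schmidt & Hunter (1998) figures, which the newer authors argue were overcorrected.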


Predictive validity is one type of test score validity, referring to evidence that scores from a given test predict their intended target variables. Its most common application is pre-employment testing, but it is useful in other situations as well. Validity is an extremely important and wide-ranging topic, however, so predictive evidence is not the only type you should gather.


Nathan Thompson, PhD

Nathan Thompson earned his PhD in Psychometrics from the University of Minnesota, with a focus on computerized adaptive testing. His undergraduate degree was from Luther College with a triple major of Mathematics, Psychology, and Latin. He is primarily interested in the use of AI and software automation to augment and replace the work done by psychometricians, which has provided extensive experience in software design and programming. Dr. Thompson has published over 100 journal articles and conference presentations.