The more complex the psychometrics, the more we love the project – from Machine Learning for automated essay scoring to multidimensional item response theory.
Psychometrics is a field that is always evolving. We relish the opportunity to work with advanced psychometrics like adaptive testing, as well as the integration of complex approaches from other fields such as AI, Natural Language Processing, and Machine Learning.
Have a unique problem? We’d love to hear about it.
We have been one of the world leaders in computerized adaptive testing (CAT) since its inception; ASC was founded out of the seminal CAT research at the University of Minnesota in the 1970s. We’ve implemented psych/medical assessments with CAT using multidimensional item response theory, adaptive language assessments, multistage testing, and much more.
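The core idea behind CAT is simple: after each response, select the next item that is most informative at the examinee's current ability estimate. As a hedged illustration only (a toy sketch, not ASC's actual engine; the item bank, parameters, and function names here are invented for the example), item selection under a 2PL IRT model might look like:

```python
import math

def prob_correct(theta, a, b):
    # 2PL item response function: probability of a correct response
    # for ability theta, discrimination a, difficulty b
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    # Fisher information of a 2PL item at ability theta:
    # a^2 * P * (1 - P); peaks when difficulty is near ability
    p = prob_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, bank):
    # maximum-information selection: pick the item in the bank
    # that is most informative at the current ability estimate
    return max(bank, key=lambda item: fisher_info(theta, item["a"], item["b"]))

bank = [{"id": 1, "a": 1.0, "b": -1.0},
        {"id": 2, "a": 1.2, "b": 0.1},
        {"id": 3, "a": 0.8, "b": 1.5}]

print(next_item(0.0, bank)["id"])  # selects item 2, whose difficulty is closest to theta = 0
```

Real CAT engines layer much more on top of this (exposure control, content balancing, stopping rules), but maximum-information selection is the classic starting point.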
Are you a government agency that is required to hire external contractors for QA/Replication efforts? Or do you have other psychometric work that doesn't fit into large-scale, multi-year contracts? We love that stuff.
We use machine learning and other quantitative methods to search for evidence of unusual data from students. This can range from simple low motivation, to copying within a single classroom, to widespread organized cheating. You can also download the SIFT software for free.
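One basic idea behind such forensics is response-similarity analysis: pairs of examinees whose answers agree far more than chance would predict warrant a closer look. As a toy illustration of that idea only (this is not one of SIFT's actual statistics, and the names here are invented), a raw agreement flag might be sketched as:

```python
from itertools import combinations

def identical_responses(r1, r2):
    # count items where two examinees chose the same option
    return sum(1 for x, y in zip(r1, r2) if x == y)

def flag_pairs(responses, threshold):
    # flag pairs whose raw agreement meets the threshold;
    # real collusion indices compare observed agreement to a chance model
    return [(i, j) for (i, r1), (j, r2) in combinations(responses.items(), 2)
            if identical_responses(r1, r2) >= threshold]

responses = {"s1": "ABCDA", "s2": "ABCDA", "s3": "BDACC"}
print(flag_pairs(responses, threshold=5))  # [('s1', 's2')]
```

Published indices go much further, modeling the probability of matching answers given each examinee's ability, so that high agreement between two strong students is not over-flagged.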
How do you define what scores mean? How should they be reported? How can you compare scores across grade levels? These are some thorny problems.
IRT is an application of machine learning (logistic regression on a latent trait) to solving some major issues in the world of assessment. We were one of the first organizations to adopt it, and remain one of the leaders.
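The "logistic regression on a latent trait" description can be made concrete with the 2PL item response function: the probability of a correct response is a logistic function of the gap between the examinee's latent ability and the item's difficulty. A minimal sketch (parameter values here are illustrative):

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability that an examinee with
    ability theta answers correctly, given item discrimination a and
    difficulty b. This is the same logistic form used in logistic
    regression, with the latent trait theta as the predictor."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals difficulty, the probability is exactly 0.5:
print(p_correct(theta=0.0, a=1.0, b=0.0))   # 0.5

# A high-ability examinee on an average-difficulty item:
print(round(p_correct(theta=1.5, a=1.2, b=0.0), 3))
```

In practice, the item parameters and examinee abilities are estimated jointly from response data, which is where the machine-learning flavor of IRT really shows.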
If your test has multiple forms out at one time, or switches to a new form every year, how do you know that results are comparable? Does a 67 on Form A mean the same as a 67 on Form B, or is Form B a little harder or easier? This is the problem of test equating, and the answer is nowhere near as simple as you might think.
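The simplest equating method illustrates the logic: if randomly equivalent groups take the two forms, any difference in group means reflects form difficulty, so Form B scores can be shifted onto the Form A scale. This is a hedged sketch of mean equating only (the data are invented, and real programs use stronger designs and methods such as linear or IRT equating):

```python
def mean_equate(scores_a, scores_b, score_on_b):
    # mean equating: shift a Form B score by the difference in group means,
    # assuming randomly equivalent groups took the two forms
    shift = sum(scores_a) / len(scores_a) - sum(scores_b) / len(scores_b)
    return score_on_b + shift

form_a = [60, 65, 70, 75, 80]   # group mean 70
form_b = [58, 63, 68, 73, 78]   # group mean 68, so Form B looks ~2 points harder

print(mean_equate(form_a, form_b, 67))  # a 67 on Form B is worth about 69 on Form A
```

So under this toy model, a 67 on Form B does not mean the same thing as a 67 on Form A, which is exactly why equating matters.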