Entries by nthompson

What’s the difference between Assess.ai and FastTest?

FastTest has been ASC’s flagship platform for the past decade, securely delivering millions of exams and implementing best practices like computerized adaptive testing and item response theory. FastTest is based on decades of experience in computerized test delivery and item banking, from MICROCAT in the 1980s to FastTest PC/LAN in the 1990s. And now the […]

What is online proctoring?

Online proctoring has been around for over a decade. But given the recent outbreak of COVID-19, educational and workforce/certification institutions are scrambling to change their operations, and a huge part of this shift is an incredible surge in online proctoring. This blog post is intended to provide an overview of the online proctoring industry […]


What is an item distractor?

An item distractor, also known as a foil or a trap, is an incorrect option for a selected-response item on an assessment. What makes a good item distractor? One word: plausibility.  We need the item distractor to attract examinees.  If it is so irrelevant that no one considers it, then it does not do any […]


What are the best practices for test item review?

What is item review?  It is the process of performing quality control on items before they are ever delivered to examinees.  This is an absolutely essential step in the development of items for medium and high stakes exams; while a teacher might not have other teachers review questions on a 4th grade math quiz, items […]


What are technology enhanced items?

Technology enhanced items are assessment items (questions) that utilize technology to improve the interaction of the item, over and above what is possible with paper.  Technology enhanced items can improve examinee engagement (important with K12 assessment), assess complex concepts with higher fidelity, improve precision/reliability, and enhance face validity/sellability.  To some extent, the last word is […]


What is automated item generation?

Automated item generation (AIG) is a paradigm for developing assessment items, aka test questions, utilizing principles of artificial intelligence and automation. As the name suggests, it tries to automate some or all of the effort involved with item authoring, as that is one of the most time-intensive aspects of assessment development – which is no […]


What is the generalized partial credit model?

The generalized partial credit model (GPCM; Muraki, 1992) is one of the family of models from item response theory.  It is designed to work, as you might have guessed, with items that are scored with partial credit.  That is, instead of just right/wrong as possible scoring, an examinee can receive partial points for completing some aspects of […]
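
As a sketch of how the GPCM turns an ability estimate into category probabilities, here is a minimal Python function. The function name, discrimination value, and step difficulties below are illustrative assumptions, not parameters from the original post:

```python
import math

def gpcm_probs(theta, a, steps):
    """Category probabilities under the generalized partial credit model.

    theta : examinee ability
    a     : item discrimination
    steps : step difficulties b_1..b_m (category 0 has no step term)
    """
    # Cumulative sums of a*(theta - b_v), with 0 for the lowest category
    z = [0.0]
    for b in steps:
        z.append(z[-1] + a * (theta - b))
    denom = sum(math.exp(v) for v in z)
    return [math.exp(v) / denom for v in z]
```

For a three-category item with steps at -1.0 and 1.0, the probabilities always sum to 1, and higher ability shifts mass toward the top category.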

What are the possible transformations for scaled scoring?

I was recently asked about scaled scoring and if the transformation must be based on the normal curve. This is an important question, especially since most human traits fall in a fairly normal distribution, and item response theory places items and people on that latent scale. The short answer is “no” – there are other […]
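
One such non-normal option is a simple linear transformation of the IRT theta onto a reporting scale. The slope, intercept, and 200-800 bounds below are hypothetical examples for illustration, not a standard:

```python
def linear_scaled_score(theta, slope=100.0, intercept=500.0, lo=200.0, hi=800.0):
    """Linearly transform an IRT theta to a reporting scale, clamped to bounds.

    slope/intercept and the 200-800 range are illustrative choices; any
    linear mapping preserves the ordering of examinees on the theta scale.
    """
    raw = slope * theta + intercept
    return min(max(raw, lo), hi)
```

For example, theta = 0.0 maps to the scale midpoint of 500, and extreme thetas are clamped to the floor and ceiling.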


What is computerized adaptive testing?

Computerized adaptive tests (CATs) are a sophisticated method of test delivery based on item response theory (IRT). They operate by adapting both the difficulty and quantity of items seen by each examinee.

Difficulty

Most characterizations of adaptive testing focus on how item difficulty is matched to examinee ability. High-ability examinees receive more difficult items, […]
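
The adaptive loop described above can be sketched minimally in Python. The fixed-step ability update and the tiny item bank are simplifications for illustration; operational CATs estimate ability with maximum-likelihood or Bayesian methods rather than a fixed step:

```python
def pick_item(theta, bank, used):
    """Select the unused item whose difficulty is closest to theta.

    Under the Rasch model, item information is maximal when the item
    difficulty b equals the current ability estimate theta.
    """
    return min((i for i in range(len(bank)) if i not in used),
               key=lambda i: abs(bank[i] - theta))

def run_cat(bank, respond, n_items=5, theta=0.0, step=0.6):
    """Run a toy fixed-length CAT.

    bank    : list of item difficulties
    respond : callback taking an item index, returning True if answered correctly
    """
    used = set()
    for _ in range(n_items):
        i = pick_item(theta, bank, used)
        used.add(i)
        # Crude fixed-step update: move theta up on a correct response, down otherwise
        theta += step if respond(i) else -step
    return theta
```

With a five-item bank, an examinee who answers everything correctly is driven steadily upward, and one who answers everything incorrectly is driven downward, which is the adaptation the excerpt describes.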