The COVID-19 pandemic has drastically changed all aspects of our world, and one of the most heavily impacted areas is educational and professional assessment. Many organizations still deliver tests with methodologies from 50 years ago, such as putting 200 examinees in a large room with desks, paper exams, and pencils. COVID-19 is forcing many of these organizations to pivot, which provides an opportunity to modernize assessments. But how can we maintain assessment security – and therefore validity – through these changes? Here are some suggestions, all of which can be easily implemented on ASC’s industry-leading assessment platforms. Get started by signing up for a free account at http://assess.ai.

True item banking with content access

A good assessment starts with good items. While learning management systems (LMSs) and other platforms not purpose-built for assessment include some item-authoring functionality, they usually don’t meet the basic requirements for true item banking, such as:

  • Reusable items
  • Item version tracking
  • User edit tracking and audits
  • Author content controls
  • Metadata storage, such as item response theory statistics
  • Item usage tracking across tests
  • Item review workflows
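To make these requirements concrete, here is a minimal sketch of what a single item record might track. The field names and workflow statuses are illustrative assumptions, not a real platform schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ItemRecord:
    """One reusable item in the bank (illustrative fields, not a real schema)."""
    item_id: str
    stem: str
    version: int = 1
    status: str = "draft"            # e.g. draft -> in_review -> approved
    irt_a: Optional[float] = None    # IRT discrimination parameter
    irt_b: Optional[float] = None    # IRT difficulty parameter
    used_on_tests: List[str] = field(default_factory=list)
    edit_log: List[str] = field(default_factory=list)

    def revise(self, new_stem: str, editor: str) -> None:
        """Track every edit: bump the version and append to the audit log."""
        self.stem = new_stem
        self.version += 1
        self.edit_log.append(f"v{self.version} edited by {editor}")


item = ItemRecord(item_id="MATH-001", stem="2 + 2 = ?")
item.revise("What is 2 + 2?", editor="nthompson")
print(item.version)  # 2
```

Even this toy record covers reuse, versioning, edit audits, metadata, and usage tracking from the list above; a production item bank adds review workflows and access controls on top.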

Role-based access

All users should be limited by roles, such as Item Author, Item Reviewer, Test Publisher, and Examinee Manager. So, for example, someone in charge of managing the roster of examinees/students might never see any test questions.
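The idea can be sketched as a simple role-to-permission lookup. The role names and permission strings below are assumptions for illustration, not ASC's actual access model.

```python
# Minimal role-permission check (illustrative roles and permissions only)
ROLE_PERMISSIONS = {
    "item_author":      {"create_item", "edit_item"},
    "item_reviewer":    {"view_item", "approve_item"},
    "test_publisher":   {"assemble_test", "publish_test"},
    "examinee_manager": {"manage_roster"},  # note: no item permissions at all
}


def can(role: str, action: str) -> bool:
    """Allow an action only if the role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())


print(can("examinee_manager", "view_item"))  # False: roster managers never see items
```

The key design choice is deny-by-default: an unknown role, or an action a role does not explicitly hold, is always refused.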

Data Forensics

There are many ways you can analyze your test results to look for possible security/validity threats. Our SIFT software provides a free platform to get started.
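As one simple illustration of the kind of analysis involved, the sketch below flags examinee pairs with unusually high response agreement. Real data-forensics tools such as SIFT compute many statistically grounded collusion indices; the 0.95 threshold and the exact-match metric here are simplifying assumptions.

```python
from itertools import combinations


def agreement(resp_a: str, resp_b: str) -> float:
    """Proportion of items on which two examinees chose the same answer."""
    same = sum(a == b for a, b in zip(resp_a, resp_b))
    return same / len(resp_a)


responses = {
    "ex1": "ABCDABCDAB",
    "ex2": "ABCDABCDAB",  # identical to ex1 -- suspicious
    "ex3": "BCADDCABCA",
}

# Flag pairs whose agreement is unusually high (threshold is illustrative)
flags = [(a, b) for a, b in combinations(responses, 2)
         if agreement(responses[a], responses[b]) > 0.95]
print(flags)  # [('ex1', 'ex2')]
```

In practice you would also account for item difficulty and chance agreement rather than raw match rates.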

Randomization

When tests are delivered online, you should have the option to randomize both item order and answer-option order. When printing to paper, there should be an option to generate multiple randomized forms.
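Both kinds of randomization can be sketched in a few lines. The item and option content here is made up for illustration; a seed is included only so a printed paper form can be reproduced.

```python
import random


def randomized_form(items, seed=None):
    """Shuffle item order and, within each item, answer-option order."""
    rng = random.Random(seed)
    form = []
    for stem, options in items:
        opts = options[:]  # copy so the item bank itself is untouched
        rng.shuffle(opts)
        form.append((stem, opts))
    rng.shuffle(form)
    return form


bank = [("2+2=?", ["3", "4", "5"]),
        ("Capital of France?", ["Paris", "Lyon", "Nice"])]
print(randomized_form(bank, seed=42))
```

Each examinee (or each printed form) gets a different seed, so neighbors cannot usefully copy answer letters from each other.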

Linear on the fly testing (LOFT)

LOFT creates a uniquely randomized test for each examinee. For example, you might have a pool of 300 items spread across 4 domains, and specify that each examinee will receive 100 items, with 25 from each domain. This greatly increases security.
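The core of the 300-item, 4-domain example above can be sketched as stratified random sampling. The domain names and item IDs are placeholders.

```python
import random


def loft_form(pool, per_domain=25, seed=None):
    """Draw a fixed number of items at random from each content domain.

    `pool` maps domain name -> list of item IDs
    (e.g. 4 domains x 75 items = a 300-item pool).
    """
    rng = random.Random(seed)
    form = []
    for domain, items in pool.items():
        form.extend(rng.sample(items, per_domain))
    return form


pool = {d: [f"{d}-{i}" for i in range(75)] for d in ["A", "B", "C", "D"]}
form = loft_form(pool, per_domain=25, seed=7)
print(len(form))  # 100 items, 25 per domain
```

Because each examinee sees a different 100-item subset, no two test forms are alike, yet every form has the same domain blueprint.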

Computerized adaptive testing (CAT)

CAT takes the personalization even further, and adapts the difficulty of the exam and the number of items seen to each student, based on certain psychometric goals and algorithms. This makes the test extremely secure.
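A toy version of the adaptive loop looks like this. It assumes the Rasch model and uses a deliberately crude fixed-step ability update; production CAT engines use maximum-likelihood or Bayesian estimation and richer item-selection and stopping rules.

```python
import math


def prob(theta: float, b: float) -> float:
    """Rasch model: probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))


def next_item(theta, bank, seen):
    """Pick the unseen item most informative at theta (difficulty closest to theta)."""
    candidates = [i for i in bank if i not in seen]
    return min(candidates, key=lambda i: abs(bank[i] - theta))


def update_theta(theta, correct, step=0.5):
    """Crude ability update: step up if correct, down if not (illustration only)."""
    return theta + step if correct else theta - step


bank = {"i1": -1.0, "i2": -0.5, "i3": 0.0, "i4": 0.5, "i5": 1.0}
theta, seen = 0.0, []
for correct in [True, True, False]:  # simulated examinee answers
    item = next_item(theta, bank, seen)
    seen.append(item)
    theta = update_theta(theta, correct)
print(seen, theta)  # ['i3', 'i4', 'i5'] 0.5
```

Note how the two correct answers push the exam toward harder items: each examinee walks a different path through the bank, which is what makes CAT so hard to compromise.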

Lockdown browser

Want to ensure that the student can’t surf for answers, or take screenshots of items? You need a lockdown browser. ASC’s assessment platforms, Assess.ai and FastTest, both come with this out of the box and at no extra cost.

Examinee test codes

Want to make sure the right person takes the right exam? Generate unique one-time passwords to be given out by a proctor after identity verification.
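Generating such codes is straightforward with a cryptographically secure random source; the code length and alphabet below are arbitrary choices for illustration.

```python
import secrets
import string


def test_code(length: int = 8) -> str:
    """Generate a cryptographically random one-time code for one examinee."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))


# One code per examinee; a proctor releases it only after identity verification
roster = ["alice", "bob", "carol"]
codes = {examinee: test_code() for examinee in roster}
print(codes)
```

The important detail is using the `secrets` module rather than `random`, so codes cannot be predicted from earlier ones.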

Proctor codes

Want an extra layer on the test kickoff procedure? After a student has their identity verified, and enters their code, then the proctor needs to also enter a different password that is unique to them on that day.
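One possible way to implement a per-proctor, per-day code is to derive it server-side from a secret, so nothing needs to be distributed in advance. This scheme, and the names in it, are assumptions for illustration, not how any particular platform does it.

```python
import datetime
import hashlib


def proctor_code(proctor_id: str, secret: str, day: datetime.date,
                 length: int = 6) -> str:
    """Derive a code unique to one proctor on one day (illustrative scheme)."""
    material = f"{proctor_id}:{day.isoformat()}:{secret}".encode()
    digest = hashlib.sha256(material).hexdigest().upper()
    return digest[:length]


today = datetime.date(2020, 9, 18)
print(proctor_code("proctor42", "server-side-secret", today))
```

Because the code depends on the date, yesterday's code is useless today even if an examinee learned it.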

Date/Time Windows

Want to prevent examinees from logging in early or late? Set up a specific time window, such as 9:00 AM to 12:00 PM on Friday.
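The check itself is a one-liner; the example below uses the Friday 9:00 AM to 12:00 PM window mentioned above, with naive datetimes for simplicity (a real system would handle time zones).

```python
from datetime import datetime


def in_window(now: datetime, start: datetime, end: datetime) -> bool:
    """Allow login only between start (inclusive) and end (exclusive)."""
    return start <= now < end


start = datetime(2020, 9, 18, 9, 0)    # Friday 9:00 AM
end = datetime(2020, 9, 18, 12, 0)     # Friday 12:00 PM
print(in_window(datetime(2020, 9, 18, 8, 55), start, end))   # False: too early
print(in_window(datetime(2020, 9, 18, 10, 30), start, end))  # True
```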

AI-based proctoring

This level of proctoring is relatively inexpensive and scalable, and does a great job of validating the results of an individual examinee. However, it does nothing to protect the intellectual property of your exam questions. If an examinee steals all the questions, you won’t know until later.

Live Online Proctoring

If you can’t do in-person test centers because of COVID, this is the next best option. Live proctors can check in the candidate, verify identity, and implement all the other things above. In addition, they can verify the examinee’s environment and stop the exam if they see the examinee stealing questions or other major issues. MonitorEDU is a great example of this.

How can I start?

Need a hand implementing some of these measures? Or just want to talk about the possibilities? Email ASC at solutions@assess.com.


Nathan Thompson earned his PhD in Psychometrics from the University of Minnesota, with a focus on computerized adaptive testing. His undergraduate degree was from Luther College with a triple major of Mathematics, Psychology, and Latin. He is primarily interested in the use of AI and software automation to augment and replace the work done by psychometricians, which has provided extensive experience in software design and programming. Dr. Thompson has published over 100 journal articles and conference presentations, but his favorite remains https://pareonline.net/getvn.asp?v=16&n=1.