True item banking with content access

A good assessment starts with good items. While learning management systems (LMSs) and other platforms not purpose-built for assessment include some item authoring functionality, they usually don't meet the basic requirements for true item banking. Best practices for item banking are standard at large-scale assessment organizations (e.g., US state departments of education), but they are surprisingly rare in professional certification/licensure programs, universities, and other organizations. Here are some examples.
- Reusable items (no need to re-upload them for each test where they are used)
- Item version tracking
- User edit tracking and audits
- Author content controls (e.g., math teachers can only see math items)
- Metadata storage, such as item response theory parameters and classical statistics
- Item usage tracking across tests
- Item review workflows
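As a concrete sketch of what a banked item record might store, here is a minimal, hypothetical Python data structure; the field names are illustrative, not ASC's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical item-bank record, assuming a 3PL-style IRT parameterization.
# Field names are illustrative, not any platform's real schema.
@dataclass
class BankedItem:
    item_id: str
    version: int
    domain: str                       # content area, e.g. "Math"
    irt_a: float = 1.0                # discrimination
    irt_b: float = 0.0                # difficulty
    irt_c: float = 0.0                # pseudo-guessing
    p_value: Optional[float] = None   # classical difficulty (proportion correct)
    used_on_tests: List[str] = field(default_factory=list)

item = BankedItem("ITM-001", version=3, domain="Math", irt_b=-0.42)
item.used_on_tests.append("FORM-2024A")
```

Storing IRT parameters and usage history directly on the item record is what makes reuse, statistics-based selection, and exposure tracking possible.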
Role-based access

All users should be limited by roles, such as Item Author, Item Reviewer, Test Publisher, and Examinee Manager. For example, someone in charge of managing the roster of examinees/students might never see any test questions.
Data Forensics

There are many ways to analyze your test results for possible security and validity threats. Our free SIFT software provides a platform to help you implement this modern methodology. You can evaluate collusion indices, which quantify how similar the responses are for any pair of examinees, as well as response times, group performance, and roll-up statistics.
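To illustrate the idea behind collusion indices, here is a toy Python function that computes the raw proportion of identical responses between two examinees. This is only a sketch; published indices (such as those implemented in SIFT) use far more sophisticated statistical models.

```python
def response_similarity(resp_a, resp_b):
    """Proportion of items on which two examinees chose the same option.

    A toy illustration of the idea behind collusion indices; real indices
    model expected chance agreement rather than raw matching.
    """
    if len(resp_a) != len(resp_b):
        raise ValueError("Examinees must have responses to the same items")
    matches = sum(a == b for a, b in zip(resp_a, resp_b))
    return matches / len(resp_a)

# Two examinees' selected options on a 10-item test
sim = response_similarity("ABCDABCDAB", "ABCDABCDCC")  # 8 of 10 match
```

An unusually high similarity between two examinees seated near each other is the kind of signal a forensics analysis would flag for investigation.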
Randomization

When tests are delivered online, you should have the option to randomize both the item order and the answer order. When printing to paper, there should be an option to randomize item order as well, although paper naturally limits how far randomization can go.
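A minimal sketch of both kinds of randomization, assuming a simple dictionary representation of items: shuffle the item order, then independently shuffle the answer options within each item while tracking where the key ends up.

```python
import random

# Sketch only: shuffle item order, then shuffle answer options per item,
# recording the new position of the correct answer.
def randomize_form(items, seed=None):
    rng = random.Random(seed)
    form = []
    for item in rng.sample(items, k=len(items)):  # randomized item order
        options = item["options"][:]
        rng.shuffle(options)                      # randomized answer order
        form.append({
            "stem": item["stem"],
            "options": options,
            "key_index": options.index(item["key"]),
        })
    return form

items = [
    {"stem": "2 + 2 = ?", "options": ["3", "4", "5", "6"], "key": "4"},
    {"stem": "3 * 3 = ?", "options": ["6", "9", "12", "8"], "key": "9"},
]
form = randomize_form(items, seed=1)
```

Tracking `key_index` is essential: once answer order is randomized, the scoring key must be remapped per examinee.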
Linear on-the-fly testing (LOFT)

LOFT creates a uniquely randomized test for each examinee. For example, you might have a pool of 300 items spread across 4 domains, and specify that each examinee will receive 100 items, with 25 from each domain. This greatly increases security.
Computerized adaptive testing (CAT)

CAT takes the personalization even further, adapting the difficulty of the exam and the number of items each student sees, based on psychometric goals and algorithms. This makes the test extremely secure.
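The adaptive loop can be sketched in a few lines under a Rasch model: administer the unused item whose difficulty is closest to the current ability estimate (where Fisher information peaks), score the response, and update the estimate. This toy version uses a crude fixed-step update; real CATs use maximum-likelihood or Bayesian ability estimation plus content balancing and exposure control.

```python
import math
import random

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Toy CAT loop; the fixed-step theta update is a simplification.
def run_cat(item_bank, answer_fn, n_items=10, step=0.5):
    theta, administered = 0.0, []
    available = dict(item_bank)  # item_id -> difficulty b
    for _ in range(n_items):
        # Most informative Rasch item: difficulty nearest current theta
        item_id = min(available, key=lambda i: abs(available[i] - theta))
        b = available.pop(item_id)
        correct = answer_fn(item_id, b)
        theta += step if correct else -step
        administered.append(item_id)
    return theta, administered

bank = {f"ITM-{i}": random.uniform(-3, 3) for i in range(50)}
# Simulated examinee with true ability 1.0
theta, seen = run_cat(bank, lambda i, b: rasch_prob(1.0, b) > random.random())
```

Because each examinee sees a different, ability-tailored subset of the pool, item exposure drops sharply compared to a fixed form.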
Lockdown browser

Want to ensure that the student can't surf for answers or take screenshots of items? You need a lockdown browser. ASC's assessment platforms, Assess.ai and FastTest, both include this out of the box at no extra cost.
Examinee test codes

Want to make sure the right person takes the right exam? Generate unique one-time passwords to be given out by a proctor after identity verification. This is especially useful with remote proctoring: the examinee receives no login information before the exam, other than instructions to start the virtual proctoring session. Once the proctor verifies the examinee's identity, they provide the unique one-time password.
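Generating such codes is straightforward with a cryptographically secure random source. Here is an illustrative sketch using Python's `secrets` module; the code format and length are assumptions, not any specific platform's scheme.

```python
import secrets
import string

# Illustrative one-time test-code generator; the 8-character uppercase
# alphanumeric format is an assumption, not a real platform's scheme.
def generate_test_codes(n, length=8):
    alphabet = string.ascii_uppercase + string.digits
    codes = set()
    while len(codes) < n:  # the set guarantees uniqueness
        codes.add("".join(secrets.choice(alphabet) for _ in range(length)))
    return sorted(codes)

codes = generate_test_codes(30)  # one code per examinee in a session
```

The important properties are that codes are unpredictable (hence `secrets`, not `random`), unique per examinee, and invalidated after one use.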
Proctor codes

Want an extra layer in the test kickoff procedure? After a student has their identity verified and enters their code, the proctor must also enter a separate password that is unique to them on that day.
Date/Time Windows

Want to prevent examinees from logging in early or late? Set up a specific time window, such as 9 a.m. to 12 p.m. on a Friday.
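The check itself is a simple range comparison. A minimal sketch, using naive local times for brevity; a production system would handle time zones and server-side clock authority.

```python
from datetime import datetime

# Sketch of a login-window check (naive local times; a real system
# would be timezone-aware and use the server clock).
def login_allowed(now, window_start, window_end):
    return window_start <= now <= window_end

start = datetime(2024, 5, 17, 9, 0)   # Friday, 9:00 a.m.
end = datetime(2024, 5, 17, 12, 0)    # Friday, 12:00 p.m.

on_time = login_allowed(datetime(2024, 5, 17, 10, 30), start, end)
too_late = login_allowed(datetime(2024, 5, 17, 13, 0), start, end)
```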
AI-based proctoring

This level of proctoring is relatively inexpensive and scalable, and does a great job of validating the results of an individual examinee. However, it does nothing to protect the intellectual property of your exam questions; if an examinee steals all the questions, you won't know until later. It is therefore very useful for low- or mid-stakes exams, but less so for high-stakes exams such as certification or licensure. Learn more about our remote proctoring options. I also recommend this blog post for an overview of the remote proctoring industry.
Live Online Proctoring

If you can't run in-person test centers because of COVID, this is the next best option. Live proctors can check in the candidate, verify identity, and implement all of the measures above. In addition, they can inspect the examinee's environment and stop the exam if they see the examinee stealing questions or other major issues. MonitorEDU is a great example of this.
How can I start?
Need a hand implementing some of these measures? Or just want to talk about the possibilities? Email ASC at firstname.lastname@example.org
Nathan Thompson, PhD, is CEO and Co-Founder of Assessment Systems Corporation (ASC). He is a psychometrician, software developer, author, researcher, and evangelist for AI and automation. His mission is to elevate the profession of psychometrics by using software to automate psychometric work like item review, job analysis, and Angoff studies, so we can focus on more innovative work. His core goal is to improve assessment throughout the world.
Nate was originally trained as a psychometrician, earning an honors degree from Luther College with a triple major in math, psychology, and Latin, followed by a PhD in psychometrics from the University of Minnesota. He has since worked in multiple roles in the testing industry, including item writer, test development manager, essay test marker, consulting psychometrician, software developer, project manager, and business leader. He is also cofounder and Membership Director of the International Association for Computerized Adaptive Testing (iacat.org). He has published 100+ papers and presentations, but his favorite remains https://scholarworks.umass.edu/pare/vol16/iss1/1/.