If you are looking around for an online testing platform, you have certainly discovered there are many out there. How do you choose? Well, for starters, it is important to understand that the range of quality and sophistication is incredible. This post will help you identify some salient features and benefits that you should consider in first establishing your requirements and then selecting an online testing platform.
Types of online testing platforms
- There are, of course, many survey engines that are designed for just that: unproctored, unscored surveys. No stakes involved, no quality required.
- At a slightly higher level are platforms for simple or formative assessment. For example, a learning management system will typically include a component to deliver multiple choice questions. However, this is obviously not their strength – you should no more use an LMS as an assessment platform than you should use an assessment platform as an LMS.
- At the top end of the spectrum are assessment platforms that are designed for real assessment. That is, they implement best practices in the field, like Angoff studies, item response theory, and adaptive testing. The type of assessment effort being done by school teachers is quite different from that being done by a company producing high-stakes international tests. FastTest is an example of such a platform.
This post describes some of the aspects that separate the third level from the lower two levels. If you need these aspects, then you likely need a “Level 3” system. Of course, there are many testing situations where such a high quality system is complete overkill.
Another consideration is that many online testing platforms are closed-content. That is, they are 100% proprietary and used only within the organization that built them. You likely need an open-content system, which allows anyone to build and deliver tests.
Test development aspects
Reusable items: If you write an item for this semester’s test, you should be able to easily reuse it next semester. Or let another instructor use it. Surprisingly, many systems do not have this basic functionality. This is a core aspect of item banking.
Item metadata: All items should have extensive metadata fields, such as author, content area, depth of knowledge, etc. The system should also store item statistics or IRT parameters, and more importantly, actually use them. For example, test form assembly should utilize item statistics to evaluate form difficulty and reliability.
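As a concrete illustration, an item-bank record might look something like the sketch below. This is plain Python, not any platform's actual schema – the field names and the simple difficulty check are hypothetical, chosen to show how stored statistics can feed directly into form assembly.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    """One entry in an item bank; metadata travels with the item when it is reused."""
    item_id: str
    stem: str
    author: str
    content_area: str
    depth_of_knowledge: int          # e.g., a DOK level from 1 to 4
    p_value: Optional[float] = None  # classical difficulty: proportion correct
    point_biserial: Optional[float] = None
    irt_a: Optional[float] = None    # IRT discrimination parameter
    irt_b: Optional[float] = None    # IRT difficulty parameter

def estimated_form_difficulty(items: list[Item]) -> float:
    """Mean P value across items with statistics — a rough check on assembled-form difficulty."""
    stats = [i.p_value for i in items if i.p_value is not None]
    return sum(stats) / len(stats)
```

With records like this, a form-assembly tool can filter by content area or author and flag a draft form whose average difficulty drifts outside the intended range.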
Form assembly: The system needs advanced functionality for form assembly, including extensive search functionality, automation, support for parallel forms, etc.
Publication options: The system needs to support all the publication situations that your organization might use: paper vs. online, implementation of time limits, control of widgets like calculators, etc.
Item types: The field of assessment is moving rapidly toward technology-enhanced items (TEIs) in an effort to assess more deeply and authentically. While this brings challenges, online testing platforms obviously need to support these.
Best practices in online testing
Psychometrics: Psychometrics is the Science of Assessment. The system needs to implement real psychometrics like item response theory, computerized adaptive testing, distractor analysis, and test fraud analysis. Note: just looking at P values and point-biserials is extremely outdated.
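To make these statistics concrete, here is a minimal sketch in plain Python (not any platform's actual code) of the classical P value and point-biserial mentioned above, alongside the two-parameter logistic (2PL) IRT model:

```python
import math

def p_value(item_scores):
    """Classical item difficulty: proportion of examinees answering correctly (0/1 scores)."""
    return sum(item_scores) / len(item_scores)

def point_biserial(item_scores, total_scores):
    """Correlation between a dichotomous item score and the total test score."""
    n = len(item_scores)
    mean_total = sum(total_scores) / n
    sd_total = math.sqrt(sum((t - mean_total) ** 2 for t in total_scores) / n)
    p = p_value(item_scores)
    mean_correct = (sum(t for s, t in zip(item_scores, total_scores) if s == 1)
                    / sum(item_scores))
    return (mean_correct - mean_total) / sd_total * math.sqrt(p / (1 - p))

def irt_2pl(theta, a, b):
    """2PL IRT model: probability of a correct response given ability theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

The point is not that these formulas are hard to compute, but that a serious platform computes them (and far more sophisticated analyses, like adaptive item selection) automatically across every item and form.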
Logs: The online testing platform should log user and examinee activity, and have it available for queries and reports.
Reporting: You need reports on various aspects of the system, including item banks, users, tests, examinees, and domains.
Essay scoring: If you deliver open-response items, the platform should have a module for scoring them.
Security: Security is obviously essential to high-stakes testing organizations, and there are many aspects: user roles, content access control, availability windows, browser lockdown, remote proctoring options, proctor management, examinee access, and more.
Reliability: The online testing platform needs to have very little downtime. Does the provider have a system status page? Best-in-class hosting like Amazon Web Services?
Scalability: The online testing platform needs to be able to scale up to large volumes.
Configurability: The functionality throughout the system needs to be configurable to meet the needs of your organization and individual users.
Request a demo with ASC’s online testing platform
Contact us to get started! We’d love to explain why the functionality above is so important!
Nathan Thompson, PhD, is CEO and Co-Founder of Assessment Systems Corporation (ASC). He is a psychometrician, software developer, author, researcher, and evangelist for AI and automation. His mission is to elevate the profession of psychometrics by using software to automate psychometric work like item review, job analysis, and Angoff studies, so we can focus on more innovative work. His core goal is to improve assessment throughout the world.
Nate was originally trained as a psychometrician, earning an honors degree from Luther College with a triple major in Math/Psych/Latin, and then a PhD in Psychometrics at the University of Minnesota. He has since worked in multiple roles in the testing industry, including item writer, test development manager, essay test marker, consulting psychometrician, software developer, project manager, and business leader. He is also cofounder and Membership Director at the International Association for Computerized Adaptive Testing (iacat.org). He has published 100+ papers and presentations, but his favorite remains https://scholarworks.umass.edu/pare/vol16/iss1/1/.