It is not easy to find an online exam platform that provides a professional level of functionality. There are many software products out in the wild that provide at least some functionality for online testing, but the range in quality is enormous, and there are other differentiators as well, such as some platforms being built only to deliver pre-built employment skill tests rather than for general use.
So how do you know what level of quality you need in an online exam platform? It mostly depends on the stakes of your test, which governs the need for quality in the test itself, which in turn drives the need for a quality platform to build and deliver the test. This post helps you identify the types of functionality that set apart “real” online exam platforms, so that you can evaluate which components are most critical for you when you go shopping.
Prefer to get your hands dirty? Sign up for a free account in our platform or request a personalized demonstration.
What is a professional online exam platform, anyway?
An online exam platform is much more than an assessment module in a learning management system (LMS) or an inexpensive quiz/survey maker. A real online exam platform is designed for professionals, that is, people whose entire job is to make assessments. A good comparison is a customer relationship management (CRM) system. That is a platform designed for use by people whose job is to manage customers, whether existing customers or the sales process. While it is entirely possible to use a spreadsheet to manage such things at a small scale, any organization operating at scale will leverage a true CRM like SalesForce or Zoho. You wouldn’t hire a team of professional sales experts and then have them waste hours each day in a spreadsheet; you would give them SalesForce to make them much more effective.
The same is true for online testing and assessment. If you are a teacher making math quizzes, then Microsoft Word might be sufficient. But there are many organizations that are doing a professional level of assessment, with dedicated staff. Some examples, by no means an exhaustive list:
- Professional credentialing: Certification and licensure exams that a person passes to work in a profession, such as chiropractors
- Employment: Evaluating job applicants to make sure they have relevant skills, ability, and experience
- Universities: Not for classroom assessments, but rather for topics like placement exams of all incoming students, or for nationwide admissions exams
- K-12 benchmark: If you are a government that tests all 8th graders at the end of the year, or a company that delivers millions of formative assessments
The traditional vs modern approach
For starters, one important thing to consider is the approach that the exam software takes to assessment: traditional platforms stop at fixed forms and simple scoring, while modern platforms support stronger psychometrics such as item response theory and adaptive delivery. Several of these aspects reappear in the detailed discussion below.
OK, here is the list! As with the one above, this is by no means exhaustive.
Goal 1: Item banking that makes your team more efficient
True item banking:
The platform should allow you to configure how items are scored and presented, such as font size, answer layout, and weighting.
Audio, video, and images should be stored in their own banks, with their own metadata fields, as reusable objects. If an image is used in 7 questions, you should not have to upload it 7 times; you upload it once and the system tracks which items use it.
Statistics and other metadata:
All items should have many fields that are essential metadata: author name, date created, tests which use the item, content area, Bloom’s taxonomy, classical statistics, IRT parameters, and much more.
You should be able to create any new metadata fields that you like.
Item review workflow:
Professionally built items will go through a review process, like Psychometric Review, English Editing, and Content Review. The platform should manage this, allowing you to assign items to people with due dates and email notifications.
The exam platform should include functionality to help you do standard setting like the modified-Angoff approach.
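To make the modified-Angoff approach concrete, here is a minimal sketch of the arithmetic behind it, assuming hypothetical rating data: each rater estimates, for each item, the probability that a minimally competent candidate answers correctly, and the cutscore is derived from those estimates. Real studies add discussion rounds and impact data, which a platform should also support.

```python
# Modified-Angoff sketch (illustrative data only).
# Each rater estimates, per item, the probability that a minimally
# competent candidate answers correctly.

ratings = {  # rater -> per-item probability estimates (hypothetical)
    "rater_1": [0.60, 0.75, 0.50, 0.85],
    "rater_2": [0.55, 0.70, 0.45, 0.90],
    "rater_3": [0.65, 0.80, 0.55, 0.80],
}

# Each rater's recommended raw cutscore = sum of their item estimates
rater_cuts = {rater: sum(est) for rater, est in ratings.items()}

# Panel cutscore = mean of the raters' recommendations
cutscore = sum(rater_cuts.values()) / len(rater_cuts)
print(round(cutscore, 2))
```

A platform that manages this workflow saves the panel from juggling spreadsheets and lets the psychometrician see rater agreement immediately.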
Automated item generation:
There should be functionality for automated item generation, such as template-based item cloning or AI-driven generation of new items.
Powerful test assembly:
When you publish a test, there should be many options, including sections, navigation limits, paper vs online, scoring algorithms, instructional screens, score reports, etc. You should also have aids in psychometric aspects, such as a Test Information Function.
Equation editor:
Many math exams need a professional equation editor for writing the items, embedded in the item authoring interface.
Goal 2: Professional exam delivery
Flexible scheduling:
Date ranges, retakes, alternate forms, passwords, etc.
Item response theory:
Item response theory is the modern psychometric paradigm used by organizations dedicated to stronger assessment.
Linear on the fly testing (LOFT):
Suppose you have a pool of 200 questions, and you want every student to get 50 randomly picked, but balanced so that there are 10 items from each of 5 content areas.
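The balanced random draw described above can be sketched in a few lines of Python; the pool structure and content-area tags here are hypothetical, and a real LOFT engine would add constraints like item exposure control.

```python
import random

# LOFT sketch: draw a fixed number of items per content area at random,
# so every examinee gets a unique but content-balanced form.
# Hypothetical pool: 200 items tagged with one of 5 content areas.
pool = [{"id": i, "area": f"Area{i % 5 + 1}"} for i in range(200)]

def assemble_loft_form(pool, per_area=10, rng=random):
    """Randomly select `per_area` items from each content area."""
    form = []
    areas = sorted({item["area"] for item in pool})
    for area in areas:
        candidates = [it for it in pool if it["area"] == area]
        form.extend(rng.sample(candidates, per_area))
    rng.shuffle(form)  # so areas are interleaved, not blocked
    return form

form = assemble_loft_form(pool)
print(len(form))  # 50 items, 10 from each of the 5 areas
```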
Computerized adaptive testing:
This uses AI and machine learning to customize the test uniquely to every examinee. Adaptive testing is much more secure, more accurate, more engaging, and can reduce test length by 50-90%.
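At its core, adaptive testing repeatedly selects the item that is most informative at the examinee's current ability estimate. Here is a minimal sketch of that selection step under the two-parameter logistic (2PL) IRT model; the item parameters are hypothetical, and production engines add exposure control, content balancing, and proper ability estimation.

```python
import math

# CAT item-selection sketch under the 2PL IRT model.
# Hypothetical items: (id, discrimination a, difficulty b)
items = [
    ("Q1", 1.2, -0.5),
    ("Q2", 0.8, 0.0),
    ("Q3", 1.5, 0.4),
]

def prob_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * p * (1 - p)."""
    p = prob_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, items, administered):
    """Pick the unadministered item with maximum information at theta."""
    available = [it for it in items if it[0] not in administered]
    return max(available, key=lambda it: information(theta, it[1], it[2]))

print(next_item(0.0, items, administered=set())[0])
```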
Tech-enhanced item types:
Drag and drop, audio/video, hotspot, fill-in-the-blank, etc.
Scalability:
Because most “real” exams serve thousands, tens of thousands, or even hundreds of thousands of examinees, the online exam platform needs to be able to scale up.
Online essay marking:
The online exam platform should have a module to score open-response items, preferably with advanced options such as multiple markers per response and marker anonymity.
Goal 3: Maintaining test integrity and security
Delivery security options:
There should be choices for how to create/disseminate passcodes, set time/date windows, disallow movement back to previous sections, etc.
There should be an option to deliver with lockdown software that restricts the examinee's computer while they are in the test.
There should be an option for remote (online) proctoring.
There should be functionality that facilitates live human proctoring, such as in computer labs at a university.
User roles and content access:
There should be various roles for users, as well as options to limit them by content. For example, a math teacher serving as a reviewer could be limited to doing nothing but reviewing math items.
If items are compromised or successfully challenged, you need functionality to easily remove them from scoring for an exam and rescore all candidates.
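The rescoring operation itself is conceptually simple, as this sketch shows with hypothetical response data (the hard part is doing it safely at scale, with an audit trail):

```python
# Rescoring sketch: drop a compromised item from scoring and recompute
# raw scores for all candidates. Data structures are hypothetical.
responses = {  # candidate -> item scores (1 = correct)
    "cand_1": {"Q1": 1, "Q2": 0, "Q3": 1},
    "cand_2": {"Q1": 1, "Q2": 1, "Q3": 1},
}

def rescore(responses, excluded=frozenset()):
    """Recompute raw scores, ignoring any items in `excluded`."""
    return {
        cand: sum(v for item, v in scored.items() if item not in excluded)
        for cand, scored in responses.items()
    }

# Scores before and after removing the compromised item Q2
print(rescore(responses))
print(rescore(responses, excluded={"Q2"}))
```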
You should be able to see who is in the online exam, stop them if needed, and restart or re-register if needed.
Goal 4: Powerful reporting and exporting
Support for QTI:
You should be able to import and export items with QTI, as well as common formats like Word or Excel.
Detailed psychometric analytics:
You should be able to see reports on reliability, standard error of measurement, point-biserial item discriminations, and all the other statistics that a psychometrician needs.
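As one example of these statistics, the point-biserial item discrimination is just the Pearson correlation between a dichotomous (0/1) item score and examinees' total scores. A minimal sketch with a toy response matrix:

```python
import math

# Toy data: rows = examinees, columns = items (1 = correct, 0 = incorrect)
responses = [
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
]

def point_biserial(item_index, matrix):
    """Pearson correlation between one item's 0/1 scores and total scores."""
    item = [row[item_index] for row in matrix]
    totals = [sum(row) for row in matrix]
    n = len(matrix)
    mean_i = sum(item) / n
    mean_t = sum(totals) / n
    cov = sum((i - mean_i) * (t - mean_t) for i, t in zip(item, totals)) / n
    sd_i = math.sqrt(sum((i - mean_i) ** 2 for i in item) / n)
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in totals) / n)
    return cov / (sd_i * sd_t)

print(round(point_biserial(0, responses), 3))
```

A good platform computes this for every item automatically, flagging low or negative discriminations for review.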
Exporting of detailed raw files:
You should be able to easily export the examinee response matrix, item times, item comments, scores, and all other result data.
You should have options to set up APIs to other platforms, like an LMS or CRM.
OK, now how do I find a real online exam platform?
If you are out shopping, ask about the aspects in the list above, and be sure to check vendors' websites for documentation on these features.
Want to save yourself some time? Click here to request a free account in our platform.
Nathan Thompson, PhD, is CEO and Co-Founder of Assessment Systems Corporation (ASC). He is a psychometrician, software developer, author, researcher, and evangelist for AI and automation. His mission is to elevate the profession of psychometrics by using software to automate psychometric work like item review, job analysis, and Angoff studies, so we can focus on more innovative work. His core goal is to improve assessment throughout the world.
Nate was originally trained as a psychometrician, earning an honors degree from Luther College with a triple major in Math/Psych/Latin, and then a PhD in Psychometrics at the University of Minnesota. He then worked in multiple roles in the testing industry, including item writer, test development manager, essay test marker, consulting psychometrician, software developer, project manager, and business leader. He is also cofounder and Membership Director of the International Association for Computerized Adaptive Testing (iacat.org). He has published 100+ papers and presentations, but his favorite remains https://scholarworks.umass.edu/pare/vol16/iss1/1/.