Why online essay marking?
Essay questions and other extended constructed response (ECR) items remain a mainstay of educational assessment. From a purely psychometric perspective, they are usually not efficient: under an item response theory paradigm, the amount of information they add per minute of testing time is less than that of other item types. But because ECR items have extensive face validity for assessing deeper constructs and other aspects not easily measured by traditional item types, there will likely always be a place for them.
So, if we are going to keep using ECR items such as those used on PARCC, PISA, and other assessments, it would behoove us to find more effective ways of implementing them. The days of a handwritten essay on paper, graded on a global rubric, are long gone. To get the most out of ECR items, you need to leverage both technology and psychometrics. For this reason, we’ve built an online essay marking system that’s directly integrated into our educational assessment platform, FastTest. So how is using such a system better than the old paper-based methods?
1. It’s more efficient… way more efficient
Raters can rate more responses per hour online than on paper, and you eliminate time-consuming manual processes such as managing and distributing stacks of paper essays, gathering the bubble sheets with marks, scanning them, and uploading the scan results into the system. When marking online, marks are entered into the database immediately. How much more efficient is it? One of our clients estimated a 60% reduction in the number of teacher hours needed to complete all of their essay marking.
2. Faster turnaround
Because of the increase in efficiency, and the fact that marks are stored directly in the database, the time needed to turn around scores for reporting to students is drastically reduced. The 60% reduction mentioned above covered only the actual essay marking time; when you add the time saved on shipping/distributing papers and gathering/scanning bubble sheets, overall turnaround time for student scores can be reduced by 80% or more.
3. You can use remote raters
Are some teachers home sick? Has summer/spring/winter break already started? Are you working with a national or statewide test? Do you want essays marked by markers in a different location than the students? All of these are much easier to handle when markers can simply log in online.
4. It saves paper
This one is fairly obvious. You’ll easily be saving tens of thousands of sheets of paper, even for a relatively small project.
5. Easier to flag responses
Do you flag responses, whether for use as anchor items next year or because you see a possible critical student issue? Both are handled automatically in our online essay marking module.
6. Integrated psychometrics and real-time reporting
Because all the marks go directly into the database, you can act on those numbers immediately. Want to track the performance of your markers by calculating inter-rater reliability and agreement every day, as well as velocity? No problem. Want to export the data matrix as soon as marking is complete so you can run an item response theory calibration with Xcalibre, then score all the students with IRT the next day? Once again, the system is built for that. You can attach IRT parameters to your rubrics, ensuring a stronger test with better student feedback. And pretty soon, we’ll have Xcalibre right there in the online platform too!
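To illustrate the kind of rater-quality tracking described above, here is a minimal sketch of two standard metrics — percent agreement and Cohen's kappa — computed on a pair of raters' marks. The data and function names are hypothetical, invented for illustration; they are not taken from FastTest or its API.

```python
# Hypothetical example: two raters each score the same 10 essays on a 0-4 rubric.
from collections import Counter

rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
rater_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]

def percent_agreement(a, b):
    """Proportion of responses where both raters gave the same mark."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = percent_agreement(a, b)
    freq_a, freq_b = Counter(a), Counter(b)
    # Expected chance agreement from each rater's marginal score frequencies.
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

print(f"Agreement: {percent_agreement(rater_a, rater_b):.2f}")  # 0.80
print(f"Kappa:     {cohens_kappa(rater_a, rater_b):.2f}")       # 0.71
```

Because kappa corrects for the agreement raters would reach by chance alone, it is a more conservative daily monitoring statistic than raw percent agreement; a marker whose kappa drifts downward may need retraining even if raw agreement still looks acceptable.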
OK, so how can I implement online essay marking?
Nathan Thompson, PhD, is CEO and Co-Founder of Assessment Systems Corporation (ASC). He is a psychometrician, software developer, author, and researcher, and evangelist for AI and automation. His mission is to elevate the profession of psychometrics by using software to automate psychometric work like item review, job analysis, and Angoff studies, so we can focus on more innovative work. His core goal is to improve assessment throughout the world.
Nate was originally trained as a psychometrician, earning an honors degree from Luther College with a triple major in Math/Psych/Latin, and then a PhD in Psychometrics at the University of Minnesota. He then worked multiple roles in the testing industry, including item writer, test development manager, essay test marker, consulting psychometrician, software developer, project manager, and business leader. He is also cofounder and Membership Director at the International Association for Computerized Adaptive Testing (iacat.org). He has published 100+ papers and presentations, but his favorite remains https://scholarworks.umass.edu/pare/vol16/iss1/1/.