Pre-knowledge in exam cheating

In the realm of academic dishonesty and high-stakes exams such as certification, “pre-knowledge” is an important concern for test security and validity. Understanding what pre-knowledge entails, and how it plays out in exam cheating, offers valuable insight into how the integrity of tests and their scores can be compromised. According to Cizek and Wollack (2016), nearly 50% of high school students in the United States acknowledge having cheated on an exam at least once in the past year. Furthermore, around 10% of university students admit to copying answers from peers during exams (McCabe, 2024). These statistics highlight the prevalence of academic dishonesty across educational levels and underscore the importance of robust exam security measures.

What is Pre-Knowledge in Exam Cheating?

Pre-knowledge refers to having advance access to exam information or materials that are not available to all examinees. This can manifest in various ways, from receiving the actual questions that will be on the test to obtaining detailed information about the exam’s content through unauthorized means. Essentially, it is an unfair advantage that undermines the level playing field assessments are meant to provide; psychometricians would call it a threat to validity.

How Pre-Knowledge Occurs

The concept of pre-knowledge encompasses several scenarios. For instance, a student might gain insight into the exam questions through illicit means such as hacking into a teacher’s email, collaborating with insiders, or even through more subtle means like overhearing discussions. This sort of advantage is particularly problematic because it negates the purpose of assessments, which is to evaluate a student’s understanding and knowledge acquired through study and effort.

Perhaps the greatest source of pre-knowledge is “brain dump” websites, where examinees go soon after taking the test and write down all the questions they can remember, dumping their short-term memory. Over time, this can expose large portions of the item bank to the public. These sites are often intentionally based in countries where such activity is not legally recognized as an offense, leaving little to no legal recourse against their operations.

Implications of Pre-Knowledge

Pre-knowledge not only disrupts the fairness of the examination process but also poses significant challenges for test sponsors. Institutions must be vigilant and proactive in safeguarding against such breaches. Implementing robust security measures, such as secure testing environments and rigorous monitoring, is crucial to prevent the unauthorized dissemination of exam content.

Pre-knowledge and other forms of cheating can have far-reaching consequences beyond the immediate context of the exam. For instance, students who gain an unfair advantage may achieve higher grades, which can influence their future academic and career opportunities. This creates an inequitable situation where hard work and honest efforts are overshadowed by dishonesty.

Engaging in brain-dump activities is a form of intellectual property theft. Organizations spend a great deal of money to develop high-quality assessments, sometimes thousands of dollars per item! Cheating in this manner ultimately increases the cost of certification exams or educational opportunities. Moreover, organizations can pursue legal action against cheaters.

Most importantly, cheating on exams can invalidate scores and therefore the entire credential. If everyone is cheating and passing a certification exam, the certification itself becomes worthless, so by participating in brain-dump activities you are actually hurting yourself. And if the purpose of the test is to protect the public, such as ensuring that people working in healthcare are actually qualified, you are hurting the general public. If you are going into healthcare to help patients, realize that brain-dump activity only ends up hurting patients in the end.

Tips to Prevent Pre-Knowledge

Educational institutions and other test sponsors play a pivotal role in addressing pre-knowledge. Here are some effective strategies to help prevent it and maintain the integrity of assessments. You might also benefit from this discussion of credentialing exams.

  1. Secure Testing Environments: Ensure exams are administered in a controlled environment where access to unauthorized materials is restricted.
  2. Monitor and Supervise: Use proctors and surveillance to oversee exam sessions and detect any suspicious behavior.
  3. Randomize Questions: Utilize question banks with randomized questions to prevent the predictable pattern of exam content.
  4. Strengthen Digital Security: Implement robust cybersecurity measures to protect exam content and prevent unauthorized access.
  5. Educate Examinees: Foster a culture of academic integrity by educating students about the consequences of cheating and the importance of honest efforts.
  6. Regularly Update Procedures: Review and update security measures and examination procedures regularly to address emerging threats.
  7. Encourage Open Communication: Provide support and resources for students struggling with their studies to reduce the temptation to seek unfair advantages.
  8. Strong NDA: This is one of the most important approaches. If examinees sign a non-disclosure agreement stating that cheating can mean not only failing the exam but also being barred from the profession and facing lawsuits, it can be a huge deterrent.
  9. Multiple forms: Make multiple versions of the exam, and rotate them frequently.
  10. LOFT/CAT: Linear on-the-fly testing (LOFT) assembles nearly infinite parallel versions of the exam, increasing security, and computerized adaptive testing (CAT) creates a personalized exam for each candidate; a sketch of this kind of form assembly follows this list.
  11. Scan the internet: You can search the internet for text strings from your items, helping you find the brain dump sites and address them.
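
To make ideas 3, 9, and 10 a bit more concrete, here is a minimal Python sketch of linear-on-the-fly (LOFT) style form assembly. The item bank, domain names, and blueprint counts are entirely made up for illustration; a real implementation would also balance statistical properties and item exposure rates, not just content coverage.

```python
import random

# Hypothetical item bank: each item has an ID and a content domain.
ITEM_BANK = {
    "pharmacology": [f"PHARM-{i:03d}" for i in range(1, 121)],
    "patient_care": [f"CARE-{i:03d}" for i in range(1, 151)],
    "ethics":       [f"ETH-{i:03d}"  for i in range(1, 61)],
}

# Content blueprint: how many items each form must draw from each domain.
BLUEPRINT = {"pharmacology": 20, "patient_care": 25, "ethics": 5}

def assemble_loft_form(seed=None):
    """Randomly assemble one form that satisfies the content blueprint."""
    rng = random.Random(seed)
    form = []
    for domain, count in BLUEPRINT.items():
        form.extend(rng.sample(ITEM_BANK[domain], count))
    rng.shuffle(form)  # also randomize presentation order
    return form

# Example: two candidates receive overlapping but non-identical forms.
form_a = assemble_loft_form(seed=1)
form_b = assemble_loft_form(seed=2)
print(len(form_a), "items per form;",
      len(set(form_a) & set(form_b)), "items shared between the two forms")
```

Because every candidate sees a different sample from the bank, no single fixed form can be harvested and dumped in its entirety, which is exactly the security benefit described in point 10.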

 

What if it happens?

There are two aspects to this: finding that it happened, and then deciding what to do about it.

To find that it happened, there are several types of evidence.

  1. Video – If there was proctoring, do we have video, perhaps of someone taking pictures of the exam with their phone?
  2. Technology – Perhaps a post on a brain dump site was done with someone’s email address or IP address.
  3. Eyewitness – You might get tipped off by another candidate that has higher integrity.
  4. Psychometric forensics – Statistical analysis of response and timing data can reveal pre-knowledge. On the brain-dump side, you might find candidates who do not attempt all the items on the test but spend most of their time on a few, because they are memorizing them. On the other end, you might find candidates who get a high score in a very short amount of time (a rough sketch of both flags follows this list). There are also much more advanced methods, as discussed in this book.
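
As a rough illustration of those two patterns, here is a minimal Python sketch that flags candidates who spent most of their time on only a handful of items (consistent with memorizing them for a brain dump) and candidates who earned a very high score in very little time (consistent with pre-knowledge). The data layout and all thresholds are made-up placeholders; operational programs rely on validated indices and statistically derived cutoffs.

```python
from dataclasses import dataclass

@dataclass
class CandidateRecord:
    candidate_id: str
    item_times: list          # seconds spent on each item, in order
    score_percent: float      # final percent-correct score

def flag_candidates(records,
                    concentration_cutoff=0.5,   # half the time spent on...
                    concentration_items=5,      # ...only 5 items
                    fast_time_seconds=20 * 60,  # finished in under 20 minutes
                    high_score_percent=90.0):
    """Return simple flags for two suspicious patterns (illustrative only)."""
    flags = []
    for r in records:
        total_time = sum(r.item_times)
        top_items_time = sum(sorted(r.item_times, reverse=True)[:concentration_items])
        # Pattern 1: a handful of items consumed most of the testing time.
        if total_time > 0 and top_items_time / total_time >= concentration_cutoff:
            flags.append((r.candidate_id, "harvesting-pattern"))
        # Pattern 2: a very high score achieved implausibly fast.
        if total_time <= fast_time_seconds and r.score_percent >= high_score_percent:
            flags.append((r.candidate_id, "fast-high-score"))
    return flags

# Example with fabricated data for two candidates.
records = [
    CandidateRecord("C001", [300, 280, 250, 240, 230] + [5] * 95, 42.0),
    CandidateRecord("C002", [10] * 100, 96.0),
]
print(flag_candidates(records))
```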

What should your organization do?  First of all, you need a test security plan ahead of time.  In that, you should lay out the processes as well as the penalties.  This ensures that examinees have due process.  Unfortunately, some organizations just stick their head in the sand; as long as candidates are paying the registration fees and the end stakeholders are not complaining, why rock the boat?

Summary

In conclusion, pre-knowledge in exam cheating represents a serious breach of academic integrity that undermines the educational process. It highlights the need for vigilance, robust security measures, and a culture of honesty within educational institutions. By understanding and addressing the nuances of pre-knowledge, educators and students can work together to uphold the values of fairness and integrity in academic assessments.

SIFT, developed by Assessment Systems Corporation (ASC), is specialized software designed to compute various collusion indices for detecting potential cheating in exams. Unlike many data forensics methods that require psychometricians to write custom code, SIFT provides a streamlined solution by automating these analyses. The software can also aggregate results based on group variables like testing locations or classrooms, making it particularly useful for large-scale test security evaluations.

References

Cizek, G. J., & Wollack, J. A. (2016). Exploring cheating on tests: The context, the concern, and the challenges. In Handbook of quantitative methods for detecting cheating on tests (pp. 3-19). Routledge. https://doi.org/10.4324/9781315743097

McCabe, D. (2024). Cheating and honor: Lessons from a long-term research project. In Second Handbook of Academic Integrity (pp. 599-610).

Student exam cheating has always been a major concern, even before the shift to remote proctoring and online exams during the pandemic. According to a 2007 study, 60.8% of college students admitted to cheating, and a whopping 95% of those said they were never caught. Other reporting has shown that even renowned institutions such as Harvard and Yale are vulnerable to exam cheating. And that is just the tip of the iceberg of academic dishonesty. In 2020, things took a turn for the worse as many institutions scrambled to implement digital transformation in response to the pandemic, and the common misconception that cheating in online exams is easier than in traditional exams made matters worse.

However, that perception is not an accurate reflection of cheating in online exams. Is cheating in online exams as easy as students perceive it to be? No. Just like in-person exams, online exams have some loopholes that students can exploit, but these loopholes can be mitigated with the right online assessment software and strategies. Perhaps the most important technology is remote proctoring, which saw an incredible boom during the pandemic.

This article provides two things:

  • An overview of how students cheat on online tests
  • Tips and ideas on how to deter and detect cheating

What is exam cheating?


Cheating refers to actual, intended, or attempted deception and/or dishonesty in relation to academic or employment-related assessments. It is only one part of the larger issue of test integrity; there are other issues, such as teacher help on K-12 summative tests or “faking good” on personality assessments. This in turn is part of the larger scope of academic dishonesty.

There are two pathways to dealing with academic dishonesty and cheating: Deter and Detect.

  • Deter: Make it harder to cheat, or put punishment policies in place that make examinees avoid cheating. Examples include lockdown browser and remote proctoring.
  • Detect: Find cheating that has already happened, such as psychometric forensics (download free software for that here), or reviewing a proctor video.

With exam security, experts follow the adage that an ounce of prevention is worth a pound of cure, so deterrence is especially important. With those definitions of academic dishonesty and cheating in mind, let’s look at the most common methods and how to address them.

Need an exam platform with more security, or want to discuss security with a psychometrician?

Click here to get in touch.

 

7 ways of cheating in online exams and how to catch or deter them

1. Impersonation (proxy tester)

Impersonation refers to asking or hiring someone else to take an exam on your behalf. This is not a new trick; it has long been used in traditional assessments, but in online assessments the risk is even higher. Impersonation can take place at two stages of the exam delivery process: before and during the assessment.

Before the assessment, students share their login details with the imposter. The login details vary depending on the software in use, but some online proctoring software makes this much harder by using advanced identification features such as biometrics or facial recognition. Impersonation during the exam can take place in many ways; one trick is providing remote access to the imposter through tools such as virtual machines. This sounds like something pulled out of a spy movie, but you wouldn’t believe what students are willing to do to avoid failure.

How to curb impersonation in online exams

  • Using authentication technologies such as biometric systems and two-factor authentication to verify registered candidates.
  • Use online assessment software with an AI-flagging feature, which enables instructors to spot and act on suspicious activity in exam video and audio recordings, such as extra voices or unusual body language.
  • Use software such as BioSig-ID which uses keystroke analysis to identify metrics such as typing patterns, style, rhythm, and pressure. These are then used to create a unique persona that is almost impossible to replicate.

2. The ‘Old school’

The ‘old school’ approach is popular in traditional exams. It includes tricks such as writing notes on your palm or on sticky notes, and using sign language to communicate with classmates. Many institutions make the mistake of assuming these tricks cannot be used in online exams. They can, but remote proctoring can catch them.

How to deal with the ‘old-school tricks’

Most old-school cheating tricks can be identified with online proctoring software. In many cases, AI-based flagging is sufficient, but for high-stakes exams you definitely need a human proctor in real time, because that not only protects the integrity of that examinee’s score but also greatly deters them from stealing the test content.

3. Advanced tech gadgets and software

The advancement of technology has turned some students into ‘James Bond’ types of cheaters who rely on sophisticated tech gadgets and software to cheat in online proctored exams. Here are some creative ways students use technology to try to outwit the system;

  • Tech gadgets such as smartwatches and microphones – These devices let students communicate about the exam with people outside the room. Smartwatches can also store answers to the test.
  • Remote-access software – This is one of the most popular ways of cheating in online assessments. Students install remote software to connect with outside help; some even use web versions of applications such as WhatsApp.
  • Projectors or cameras – Students may set up cameras on their desks and have family or friends project the answers in front of them.
  • Screen mirroring – Using certain tools, students can grant monitor access to friends and family in the room, who can then help them with the test.

How to stop the ‘tech-savvy’ students from cheating

  • Use a lockdown browser to stop the student from surfing the web or using any software other than the exam itself.
  • Use AI flagging to recognize and act on suspicious behavior
  • Collaborate with a remote proctoring provider that offers audio and video proctoring services. You can also add a human proctor to the equation to increase assessment security. Good proctoring services will use two cameras, require room scans, lock down the computer, and other approaches to addressing tech-based cheating.

4. “Brain Dumps” and sharing of test content


In many cases, this form of exam cheating can be thought of as the ‘charity’ of academic dishonesty. It is common in traditional exams, where students share exam questions with others who have not yet taken the exam because they feel they are being helpful, even though it can end up hurting everyone, themselves included. Candidates may take screenshots or use cameras to record questions and share them, or simply try to memorize items.

How to curb sharing of test content

Use ASC’s lockdown browser functionality to eliminate the ability to take screenshots or record the screen. Proctoring services will flag students who take pictures with their phones, and psychometric forensics can look for unusual time patterns.

5. The ‘Water or Bathroom’ break

A student pretends to have a ‘bathroom emergency’ or a ‘bad connection,’ and the first thing they do after logging off is grab a device or textbook to cheat. To maintain high standards of academic integrity, institutions should not take chances here.

How to deal with the ‘Water or Bathroom Break’

This is a tricky one, as there is no way of knowing whether an excuse is genuine. However, institutions should establish strict examination policies for such circumstances. For instance, breaks can be limited and structured: a 200-item test might be split into two 100-item sections with a 10-minute break between them, so there is no chance to come back to items already seen.

6. Help from family, experts, and friends

This is a method many students consider ‘safe’. There are several ways family, experts, and friends can help students cheat on online assessments. One is having a family member or friend in the same room while the student takes the exam; the student shares the questions and the helper browses the internet for answers.

Another approach is to hire ‘expert tutors’ to help with the exam, or to take it on the student’s behalf. In fact, demand for online academic tutors of this kind has increased.

How to stop students from getting external help

  • Tighten up your authentication process with technologies such as biometric systems and two-factor authentication, making it much harder for an outside expert to take the exam in the candidate’s place.
  • Utilize lockdown browser to make sure that students don’t have access to remote software or have the freedom to surf the internet.
  • Use AI flagging to make sure that the candidates don’t display suspicious behavior.

7. Googling or ChatGPT-ing the answer

Doesn’t access to a computer and the internet provide students with endless opportunities to cheat in online proctored tests? Yes, if the test is not proctored or locked down. Fortunately, this method is relatively easy to address.

How to stop students from using the internet during an exam

Use ASC’s lockdown browser function to stop students from accessing other websites or resources on personal devices. Utilize proctoring services like MonitorEDU or Proctor360. Do not allow cell phones to be present at the desk.

Tips for students

Read the candidate/student handbook: This is actually VERY important. It will not only tell you what to expect, but also give you important information about the exam itself.

Practice test: If one is available, make sure to take it. It will help you understand the process and feel much more confident come Exam Day.

Download beforehand: Some software for an online proctored exam will require you to download something, such as a secure browser or Chrome plugin. Make sure to do this before Exam Day. Sometimes, it is part of the Practice Test.

Watch instructional videos: If the test sponsor provides instructional videos, watch them so you know what to expect.

System check: Similarly, some software to deliver an online proctored exam will perform an automated system check to make sure that you have enough internet bandwidth, that your video is streaming well, etc. Do this before Exam Day, and sometimes it is also part of the Practice Test.

Know the rules: Make sure you are aware of the rules for your particular exam. Are you allowed a bottle of water? Bathroom breaks? Open-book?

Prepare the area: Once you know the rules, make sure your desk area is ready. Do you need to clean the top of your desk from all books and papers? Obtain an external webcam or microphone? Ask your roommates to leave? There are many things like this which might be required, and more than you can do in 30 seconds as you log into your exam.

Have water: If you are allowed water, make sure you have it ready, just in case.

Study: YES!!!!! Obviously, this is more important than anything else… if you actually know the material, you are more likely to pass. Cheating is not the way to go; it isn’t worth the risk when you could put that effort into actually studying.

Data Forensics

There is a distinction between primary and secondary collusion, and there are specific collusion detection indices and methods to investigate aberrant testing behavior, such as those implemented in dedicated data forensics software like SIFT.

 

Conclusion

Student exam cheating remains common, and these are some of the most popular ways students do it. However, these tricks should not scare you away from reaping the benefits of online assessments. By using the solutions suggested in this article, you can reduce the risk of cheating and achieve high standards of academic integrity. Assessment Systems regularly participates in the Conference on Test Security (COTS) to stay at the forefront of the latest strategies and technologies for preventing cheating in assessments.

Ready to minimize and deter student cheating in your online proctored exams? Contact us today and get a free consultation!

There are many online proctoring providers on the market today, so how can you strategically shop for one?  This post provides some considerations, and a list of most vendors, to help you with the process.

Online proctoring refers to the use of video for proctoring in educational or professional tests when the proctor is not in the same room as the examinee. This means that it is done with a video stream or recording using a webcam and sometimes an additional device, which are monitored by a human and/or AI. It is also referred to as remote proctoring or invigilation. Online proctoring offers a compelling alternative to in-person proctoring, with security and costs somewhere in between unproctored at-home tests and tests delivered at an expensive testing center in an office building. This makes it a perfect fit for medium-stakes exams, such as university placement, pre-employment screening, and many types of certification/licensure tests.

The Importance of Online Proctoring

Online proctoring software has become essential in maintaining the integrity and fairness of remote examinations. This technology enables educational institutions and certification bodies to administer tests securely, ensuring that the assessment process is free from cheating and other forms of malpractice. By using advanced features such as facial recognition, screen monitoring, and AI-driven behavior analysis, online proctoring software can detect and flag suspicious activities in real time, making it a reliable solution for maintaining academic standards.

Moreover, online proctoring software offers significant convenience and flexibility. Students and professionals can take exams from any location, reducing the need for physical test centers and making it easier for individuals in remote or underserved areas to access educational opportunities. This flexibility not only broadens access to education and certification but also saves time and resources for both test-takers and administrators. Additionally, the data collected during online proctored exams can provide valuable insights into test-taker behavior, helping organizations to refine their testing processes and improve overall assessment quality.

List of Online Proctoring Providers

Looking to evaluate potential vendors? Here is a great place to start.

# Name Website Country Proctor Service
1 Aiproctor https://www.aiproctor.com/ USA AI
2 Centre Based Test (CBT) https://www.conductexam.com/center-based-online-test-software/ India Live, Record and Review
3 Class in Pocket classinpocket.com (Website now defunct) India AI
4 Constructor https://constructor.tech/ Switzerland AI, Record & Review, Bring Your Own Proctor, Live
5 Datamatics https://www.datamatics.com/industries/education-technology/proctoring/ India AI, Live, Record and Review
6 DigiProctor https://www.digiproctor.com/ India AI
7 Disamina https://disamina.in/ India AI
8 Examity https://www.examity.com/ USA Live
9 ExamMonitor https://examsoft.com/ USA Record and Review
10 ExamOnline https://examonline.in/remote-proctoring-solution-for-employee-hiring/ India AI, Live
11 Eduswitch https://eduswitch.com/ India AI
12 EasyProctor https://www.excelsoftcorp.com/products/assessment-and-proctoring-solutions/ India AI, Live, Record and Review
13 HonorLock https://honorlock.com/ USA AI, Record and Review
14 Internet Testing Systems https://www.testsys.com/ USA Bring Your Own Proctor
15 Invigulus https://www.invigulus.com/ USA AI, Live, Record and Review
16 Iris Invigilation https://www.irisinvigilation.com/ Australia AI
17 Mettl https://mettl.com/en/online-remote-proctoring/ India AI, Live, Record and Review
18 MonitorEdu https://monitoredu.com/proctoring/ USA Live
19 OnVUE https://home.pearsonvue.com/Test-takers/OnVUE-online-proctoring.aspx/ USA Live
20 Oxagile https://www.oxagile.com/competence/edtech-solutions/proctoring/ USA AI, Live, Record and Review
21 Parakh https://parakh.online/blog/remote-proctoring-ultimate-solution-for-secure-online-exam/ India AI, Live, Record and Review
22 ProctorFree https://www.proctorfree.com/ USA AI, Live
23 Proctor360 https://proctor360.com/ USA AI, Bring Your Own Proctor, Live, Record and Review
24 ProctorEDU https://proctoredu.com/ Russia AI, Live, Record and Review
25 ProctorExam https://proctorexam.com/ Netherlands Bring Your Own Proctor, Live, Record and Review
26 Proctorio https://proctorio.com/products/online-proctoring/ USA AI, Live
27 Proctortrack https://www.proctortrack.com/ USA AI, Live
28 ProctorU https://www.proctoru.com/ USA AI, Live, Record and Review
29 Proview https://www.proview.io/en/ USA AI, Live
30 PSI Bridge https://www.psionline.com/en-gb/platforms/psi-bridge/ USA Live, Record and Review
31 Respondus Monitor https://web.respondus.com/he/monitor/ USA AI, Live, Record and Review
32 Rosalyn https://www.rosalyn.ai/ USA AI, Live
33 SmarterProctoring https://smarterservices.com/smarterproctoring/ USA AI, Bring Your Own Proctor, Live
34 Sumadi https://sumadi.net/ Honduras AI, Live, Record and Review
35 Suyati https://suyati.com/ India AI, Live, Record and Review
36 TCS iON Remote Assessments https://www.tcsion.com/hub/remote-assessment-marking-internship/ India AI, Live
37 Think Exam https://www.thinkexam.com/remoteproctoring/ India AI, Live
38 uxpertise XP https://uxpertise.ca/en/uxpertise-xp/ Canada AI, Live, Record and Review
39 Proctor AI https://www.visive.ai/solutions/proctor-ai/ India AI, Live, Record and Review
40 Wise Proctor https://www.wiseattend.com/wise-proctor/ USA AI, Record and Review
41 Xobin https://xobin.com/online-remote-proctoring/ India AI
42 Youtestme https://www.youtestme.com/online-proctoring/ Canada AI, Live

 

Our recommendations for the best online proctoring software

ASC’s platforms are integrated with some of the leading remote proctoring software providers.

Type Vendors
Live MonitorEDU
AI Constructor, Proctor360, ProctorFree
Record and Review Constructor, Proctor360, ProctorFree
Bring Your Own Proctor Constructor, Proctor360

 

How do I select an online proctoring provider?

First, you need to determine which type of proctoring you need. There are four levels of online proctoring; more detail on this later!

  1. Live: Professional proctors watch in real time
  2. Record and review: Video is recorded, and professional proctors watch later
  3. AI: Video is recorded, and analyzed with AI to look for flags like two faces
  4. Bring your own proctor: Same as the ones above, but YOU need to watch all the videos

There is a cost trade-off here. Live proctoring with professionals can cost $30 to $100 or more per exam, while AI proctoring can be as little as a few dollars. So Live is usually used for high-stakes, low-volume exams like certification, while AI is better for low-stakes, high-volume exams like university quizzes.
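
As a back-of-the-envelope illustration of that trade-off, the sketch below compares total proctoring cost at a few volumes. The per-exam prices are assumptions chosen to fall inside the ranges quoted above, not quotes from any vendor.

```python
# Rough cost comparison between live and AI proctoring (assumed prices).
LIVE_COST_PER_EXAM = 40.0   # assumed; the text cites roughly $30-$100+
AI_COST_PER_EXAM = 3.0      # assumed; "as little as a few dollars"

def total_cost(volume, cost_per_exam):
    return volume * cost_per_exam

for volume in (200, 5_000, 100_000):
    live = total_cost(volume, LIVE_COST_PER_EXAM)
    ai = total_cost(volume, AI_COST_PER_EXAM)
    print(f"{volume:>7} exams: live ~= ${live:,.0f}, AI ~= ${ai:,.0f}")
```

For a few hundred certification candidates the difference is modest relative to the stakes, while for a hundred thousand low-stakes quizzes the live option quickly becomes untenable.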

Next, evaluate some vendors to see which group they fall into; note that some vendors can do all of them! Then, ask for some demos so you understand the business processes involved and the UX on the examinee side, both of which could substantially impact the soft costs for your organization. Finally, start negotiating with the vendor you want!

What do I need? Two Distinct Markets

First, I would describe the online proctoring industry as actually falling into two distinct markets, so the first step is to determine which of these fits your organization.

  1. Large scale, lower cost (when large scale), lower security systems designed to be used only as a plugin to major LMS platforms like Blackboard or Canvas. These systems are therefore designed for lower-stakes exams like an Intro to Psychology midterm at a university.
  2. Lower scale, higher cost, higher security systems designed to be used with standalone platforms dedicated to high stakes exams. These are generally for tests like certification or workforce, or perhaps special use at universities like Admissions and Placement exams.

How to tell the difference? The first type will advertise about easy integration with systems like Blackboard or Canvas as a key feature. They will also often focus on AI review of videos, rather than using real humans. Another key consideration is to look at the existing client base, which is often advertised.

Other ways that vendors can differ

Screen capture: Some online proctoring providers have an option to record/stream the screen as well as the webcam. Some also provide the option to only do this (no webcam) for lower stakes exams.

Mobile phone as the second camera: Some newer platforms provide the option to easily integrate the examinee’s mobile phone as a second camera (a third stream, if you include screen capture), which replicates much of what an in-person proctor can see. Examinees are instructed to use the video to show under the table, behind the monitor, etc., before starting the exam. They might then be instructed to stand the phone 2 meters away with a clear view of the entire room while the test is being delivered. This is in addition to the webcam.

API integrations: Some systems require software developers to set up an API integration with your LMS or assessment platform. Others are more flexible, and you can just log in yourself, upload a list of examinees, and you are all set.

On-Demand vs. Scheduled: Some platforms involve the examinee scheduling a time slot. Others are purely on-demand, and the examinee can show up whenever they are ready. MonitorEDU is a prime example of this: examinees show up at any time, present their ID to a live human, and are then started on the test immediately – no downloads/installs, no system checks, no API integrations, nothing.

What are the types of online proctoring?

There are many types of online proctoring software on the market, spread across dozens of vendors, including many new ones that sought to capitalize on the pandemic and were not involved with assessment beforehand. With so many options, how can you more effectively select among the types of remote proctoring? There are four types of remote proctoring platforms, which can be adapted to a particular use case, sometimes varying between different tests within a single organization. ASC supports all four types and partners with five different vendors to help provide the best solution to our clients. In descending order of security:

Live with professional proctors (the most secure of the types of remote proctoring)

What it entails for you:

  • You register a set of examinees in FastTest, and tell us when they are to take their exams and under what rules.
  • We provide the relevant information to the proctors.
  • You send all the necessary information to your examinees.

What it entails for the candidate:

  • The examinee goes to ascproctor.com, where they initiate a chat with a proctor.
  • After confirmation of their identity and workspace, they are provided information on how to take the test.
  • The proctor then watches a video stream from their webcam as well as a phone on the side of the room, ensuring that the environment is secure. They do not see the screen, so your exam content is not exposed. They maintain exam invigilation continuously.
  • When the examinee is finished, they notify the proctor and are excused.

Live, bring your own proctor (BYOP)

What it entails for you:

  • You upload examinees into FastTest, which will generate links.
  • You send relevant instructions and the links to examinees.
  • Your staff logs into the admin portal and awaits examinees.
  • Videos with AI flagging are available for later review if needed.

What it entails for the candidate:

  • The examinee clicks a link, which launches the proctoring software.
  • An automated system check is performed.
  • The proctoring is launched. Proctors ask the examinee to provide identity verification, then launch the test.
  • The examinee is watched on the webcam and screencast. AI algorithms help to flag irregular behavior.
  • The examinee concludes the test.

Record and Review (with option for AI)

What it entails for you:

  • You upload examinees into FastTest, which will generate links.
  • You send relevant instructions and the links to examinees.
  • After examinees take the test, your staff (or ours) logs in to review all the videos and report on any issues. AI will automatically flag irregular behavior, making your reviews more time-efficient.

What it entails for the candidate:

  • The examinee clicks a link, which launches the proctoring software.
  • An automated system check is performed.
  • The proctoring is launched. The system asks the examinee to provide identity verification, then launches the test.
  • The examinee is recorded on the webcam and screencast. AI algorithms help to flag irregular behavior.
  • The examinee concludes the test.

AI only

What it entails for you:

  • You upload examinees into FastTest, which will generate links.
  • You send relevant instructions and the links to examinees.
  • Videos are stored for 1 month if you need to check any.

What it entails for the candidate:

  • The examinee clicks a link, which launches the proctoring software.
  • An automated system check is performed.
  • The proctoring is launched. The system asks the examinee to provide identity verification, then launches the test.
  • The examinee is recorded on the webcam and screencast. AI algorithms help to flag irregular behavior.
  • The examinee concludes the test.

 

Some case studies for different types of exams

We’ve worked with all types of remote proctoring software, across many types of assessment:

  • ASC delivers high-stakes certification exams for a number of certification boards, in multiple countries, using the live proctoring with professional proctors. Some of these are available continuously on-demand, while others are on specific days where hundreds of candidates log in.
  • We partnered with a large university in South America, where their admissions exams were delivered using Bring Your Own Proctor, enabling them to drastically reduce costs by utilizing their own staff.
  • We partnered with a private company to provide AI-enhanced record-and-review proctoring for applicants, where ASC staff reviews the results and provides a report to the client.
  • We partner with an organization that delivers civil service exams for a country, and utilizes both unproctored and AI-only proctoring, differing across a range of exam titles.

 

More security: Test delivery options

A professional-grade exam system will also come with its own functionality to enhance test security: randomization, automated item generation, computerized adaptive testing, linear-on-the-fly testing, professional item banking, item response theory scoring, scaled scoring, psychometric analytics, equating, lockdown delivery, and more. In the context of online proctoring, perhaps the most salient is the lockdown delivery. In this case, the test will completely take over the examinee’s computer and they can’t use it for anything else until the test is done.

LMS systems rarely include any of this functionality, because it is not needed for an Intro to Psychology midterm. However, most assessments in the world that have real stakes – university admissions, certification, workforce hiring, etc. – depend heavily on such functionality. It’s not just habit or tradition, either: such methods are considered essential by international standards including AERA/APA/NCME, ITC, and NCCA.

 

Want some more information?

Get in touch with us, we’d love to show you a demo or introduce you to partners!

Email solutions@assess.com.

 


A test security plan (TSP) is a document that lays out how an assessment organization addresses the security of its intellectual property, in order to protect the validity of the exam scores.  If a test is compromised, the scores become meaningless, so security is obviously important.  The test security plan helps an organization anticipate test security issues, establish deterrent and detection methods, and plan responses.  It can also cover validity threats that are not security-related, such as how to deal with examinees who have low motivation.  Note that it is not limited to test delivery; it often includes topics like how to manage item writers.

Since the first tests were developed 2000 years ago for entry into the civil service of Imperial China, test security has been a concern.  The reason is quite straightforward: most threats to test security are also validity threats. The decisions we make with test scores could therefore be invalid, or at least suboptimal.  It is therefore imperative that organizations that use or develop tests should develop a TSP.

Why do we need a test security plan?

There are several reasons to develop a test security plan.  First, it drives greater security and therefore validity.  The TSP will enhance the legal defensibility of the testing program.  It helps to safeguard the content, which is typically an expensive investment for any organization that develops tests themselves.  If incidents do happen, they can be dealt with more swiftly and effectively.  It helps to manage all the security-related efforts.

The development of such a complex document requires a strong framework.  We advocate a framework with three phases: planning, implementation, and response.  In addition, the TSP should be revised periodically.

Phase 1: Planning

The first step in this phase is to list all potential threats to each assessment program at your organization.  This could include harvesting of test content, preknowledge of test content from past harvesters, copying other examinees, proxy testers, proctor help, and outside help.  Next, these should be rated on axes that are important to the organization; a simple approach would be to rate on potential impact to score validity, cost to the organization, and likelihood of occurrence.  This risk assessment exercise will help the remainder of the framework.
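
Here is a minimal sketch of that rating exercise in Python, using the threats listed above. The 1-5 ratings and the simple product used to prioritize are placeholders; your organization would substitute whatever rating scheme its stakeholders agree on.

```python
# Illustrative risk-rating exercise for a test security plan.
# Ratings are on a 1 (low) to 5 (high) scale and are placeholders.
threats = {
    # threat: (impact_on_validity, cost_to_org, likelihood)
    "content harvesting":       (5, 5, 4),
    "preknowledge from dumps":  (5, 4, 4),
    "copying other examinees":  (3, 2, 3),
    "proxy testers":            (4, 3, 2),
    "proctor help":             (4, 3, 2),
    "outside help":             (3, 2, 3),
}

def priority(ratings):
    """Simple priority score: product of the three ratings."""
    impact, cost, likelihood = ratings
    return impact * cost * likelihood

# Rank the threats so the plan addresses the biggest risks first.
ranked = sorted(threats.items(), key=lambda kv: priority(kv[1]), reverse=True)
for name, ratings in ranked:
    print(f"{name:<28} priority = {priority(ratings):>3}")
```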

Next, the organization should develop the test security plan itself.  The first piece is to identify deterrents and procedures that reduce the possibility of issues.  This includes delivery procedures (such as a lockdown browser or proctoring), proctor training manuals, a strong candidate agreement, anonymous reporting pathways, confirmation testing, and candidate identification requirements.  The second piece is to explicitly plan for psychometric forensics.  This can range from complex collusion indices based on item response theory to simple flags, such as a candidate responding with a certain multiple-choice option more than 50% of the time, or obtaining a score in the top 10% while spending an amount of time in the lowest 10%.

The third piece is to establish planned responses.  What will you do if a proctor reports that two candidates were copying each other?  What if someone obtains a high score in an unreasonably short time?  What if someone obviously did not try to pass the exam, but still sat there for the allotted time?  If a candidate were to lose a job opportunity due to your response, it helps your defensibility to show that the process was established ahead of time with the input of important stakeholders.
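
To illustrate the two simple flags just mentioned, here is a short Python sketch. The data layout is hypothetical, the 50% and 10% cutoffs come straight from the text, and a real program would pair flags like these with the more formal collusion indices.

```python
from collections import Counter
from statistics import quantiles

def modal_option_flag(responses, threshold=0.5):
    """True if one response option (e.g., 'C') was chosen more than half the time."""
    counts = Counter(responses)
    top_count = counts.most_common(1)[0][1]
    return top_count / len(responses) > threshold

def fast_high_scorers(candidates):
    """Flag candidates in the top 10% of scores but the bottom 10% of testing time.

    `candidates` maps candidate ID -> (score, total_time_in_seconds).
    """
    scores = [score for score, _ in candidates.values()]
    times = [time for _, time in candidates.values()]
    score_p90 = quantiles(scores, n=10)[-1]   # 90th percentile of scores
    time_p10 = quantiles(times, n=10)[0]      # 10th percentile of times
    return [cid for cid, (score, time) in candidates.items()
            if score >= score_p90 and time <= time_p10]

# Example with fabricated data:
# modal_option_flag(["C", "C", "B", "C", "C", "A", "C"])  ->  True
# fast_high_scorers({"C01": (98, 900), "C02": (75, 3500), "C03": (81, 3300)})
```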

Phase 2: Implementation

The second phase is to implement the relevant aspects of the Test Security Plan, such as training all proctors in accordance with the manual and login procedures, setting IP address limits, or ensuring that a new secure testing platform with lockdown is rolled out to all testing locations.  There are generally two approaches.  Proactive approaches attempt to reduce the likelihood of issues in the first place, and reactive methods happen after the test is given.  The reactive methods can be observational, quantitative, or content-focused.  Observational methods include proctor reports or an anonymous tip line.  Quantitative methods include psychometric forensics, for which you will need software like SIFT.  Content-focused methods include automated web crawling.

Both approaches require continuous attention.  You might need to train new proctors several times per year, or update your lockdown browser.  If you use a virtual proctoring service based on record-and-review, flagged candidates must be periodically reviewed.  The reactive methods are similar: incoming anonymous tips or proctor reports must be dealt with at any given time.  The least continuous aspect is some of the psychometric forensics, which depend on a large-scale data analysis; for example, you might gather data from tens of thousands of examinees in a testing window and can only do a complete analysis at that point, which could take several weeks.

Phase 3: Response

The third phase, of course, is to put your planned responses into motion when issues are detected.  Some of these could be relatively innocuous; if a proctor is reported as not following procedures, they might need some remedial training, and it is certainly possible that no security breach occurred.  The more dramatic responses include actions taken against the candidate.  The most lenient is to provide a warning or simply ask them to retake the test.  The most extreme is a full invalidation of the score with future sanctions, such as a five-year ban on retaking the test, which could prevent someone from entering a profession for which they spent eight years and hundreds of thousands of dollars preparing.

What does a test security plan mean for me?

It is clear that test security threats are also validity threats, and that the extensive (and expensive!) measures warrant a strategic and proactive approach in many situations.  A framework like the one advocated here will help organizations identify and prioritize threats so that the measures are appropriate for a given program.  Note that the results can be quite different if an organization has multiple programs, from a practice test to an entry level screening test to a promotional test to a professional certification or licensure.

Another important distinction is between test sponsors/publishers and test consumers.  For an organization that purchases off-the-shelf pre-employment tests, the validity of score interpretations is the more direct concern, while theft of content might not be an immediate worry.  Conversely, the publisher of such tests has invested heavily in the content and could be massively impacted by theft, while two examinees copying from each other at the hiring organization is not its immediate concern.

In summary, there are more security threats, deterrents, procedures, and psychometric forensic methods than can be discussed in one blog post, so the focus here is on the framework itself.  For starters, begin thinking strategically about test security and how it impacts your assessment programs by using the multi-axis rating approach, then develop a Test Security Plan.  The end goal is to improve the health and validity of your assessments.


Want to implement some of the security aspects discussed here, like online delivery lockdown browser, IP address limits, and proctor passwords?

Sign up for a free account in FastTest!


Item parameter drift (IPD) refers to the phenomenon in which the parameter values of a given test item change over multiple testing occasions within the item response theory (IRT) framework. It is often relevant to student progress monitoring assessments, where a set of items is used several times in one year, or across years, to track student growth. Observing trends in student achievement depends upon stable linking (anchoring) between assessment occasions; if the item parameters are not stable, the scale is not stable, and comparisons across time are not either. Some psychometricians consider IPD a special case of differential item functioning (DIF), but the two are distinct issues and should not be confused with each other.

Reasons for Item Parameter Drift

IRT modeling is attractive for the assessment field because of its property of item parameter invariance: parameter estimates do not, in principle, depend on the particular sample of test-takers. That assumption enables important things like strong equating of tests across time and computerized adaptive testing. However, item parameters are not always invariant in practice, and there are many possible reasons for IPD. One possibility is curricular change driven by assessment results, or more concentrated instruction on certain topics. Other feasible causes are item exposure, cheating, or curricular misalignment with standards. Whatever the cause, the presence of IPD can bias estimates of student ability, and it can be highly detrimental to reliability and validity in high-stakes examinations. It is therefore crucial to detect item parameter drift when anchoring assessment occasions over time, especially when the same anchor items are used repeatedly.

Perhaps the simplest example is item exposure.  Suppose a 100-item test is delivered twice per year, with 20 items always remaining as anchors.  Eventually students will share memories, and the content of those anchors will become known.  More students will get them correct over time, making the items appear easier than they really are.

Identifying Item Parameter Drift

There are several methods for detecting IPD. Some are simpler because they do not require estimation of anchoring (linking) constants, and some are more involved because they do. Simple methods include the “3-sigma p-value”, the “0.3 logits”, and the “3-sigma IRT” approaches. More complex methods include the “3-sigma scaled IRT”, the Mantel-Haenszel, and the “area between item characteristic curves” approaches; the last two treat IPD as a special case of DIF, and therefore draw upon a massive body of existing research on DIF methodologies.
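
As an illustration, here is a rough Python version of the “3-sigma p-value” and “0.3 logits” checks. The standard error is a simple binomial approximation and the example numbers are invented; published procedures define the criterion more carefully, so treat this as a sketch of the idea rather than a reference implementation.

```python
import math

def p_value_drift_flag(p1, n1, p2, n2, sigma=3.0):
    """Rough '3-sigma' check on the change in classical difficulty (p-value).

    Uses a simple binomial standard error for the difference between the two
    administrations; operational procedures may compute this differently.
    """
    se_diff = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p2 - p1) > sigma * se_diff

def logit_drift_flag(b1, b2, threshold=0.3):
    """Flag an anchor item whose IRT b-parameter moved more than 0.3 logits."""
    return abs(b2 - b1) > threshold

# Example: an anchor item answered correctly by 60% of 1,000 examinees in the
# fall and 68% of 1,200 examinees in the spring, with b drifting by -0.35 logits.
print(p_value_drift_flag(0.60, 1000, 0.68, 1200))   # True -> investigate
print(logit_drift_flag(-0.10, -0.45))               # True -> investigate
```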

Handling Item Parameter Drift

Not all psychometricians agree that removing outlying anchor items is the best solution for item parameter drift, but if drifting items are not eliminated from the equating process, they will distort the transformation of ability estimates, not just the item parameters. Imagine an examination that classifies examinees as failing or passing, or into four performance categories; with IPD, 10-40% of students could be misclassified. In high-stakes settings where classification carries sanctions or rewards, IPD should be minimized as much as possible. As soon as some items are found to exhibit IPD, they should be referred to subject-matter experts for further investigation; if a faster decision is needed, flagged anchor items should simply be removed. Afterwards, psychometricians re-estimate the linking constants and evaluate IPD again, repeating the process until none of the anchor items shows drift.
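
A minimal sketch of that iterative purification loop is below. It uses mean-sigma linking on the anchor b-parameters and reuses a 0.3-logit residual as the drift criterion; both choices are assumptions for illustration, and a production workflow would use the organization’s preferred linking method and full IRT output.

```python
from statistics import mean, stdev

def mean_sigma_constants(old_b, new_b):
    """Mean-sigma linking: slope A and intercept B placing new_b on the old scale."""
    A = stdev(old_b) / stdev(new_b)
    B = mean(old_b) - A * mean(new_b)
    return A, B

def purify_anchors(anchor_b_old, anchor_b_new, drift_threshold=0.3):
    """Iteratively drop drifting anchors and re-estimate linking constants.

    anchor_b_old / anchor_b_new map item IDs to b-parameters from the two
    calibrations. Returns the surviving anchor set and the final constants.
    """
    anchors = sorted(anchor_b_old)                 # item IDs common to both
    while True:
        if len(anchors) < 3:
            raise ValueError("Too few stable anchors remain to link the scales.")
        old_b = [anchor_b_old[i] for i in anchors]
        new_b = [anchor_b_new[i] for i in anchors]
        A, B = mean_sigma_constants(old_b, new_b)
        # Place the new calibration on the old scale, then check residual drift.
        drifting = [i for i in anchors
                    if abs((A * anchor_b_new[i] + B) - anchor_b_old[i]) > drift_threshold]
        if not drifting:
            return anchors, (A, B)
        anchors = [i for i in anchors if i not in drifting]
```

Each pass re-estimates the constants on the surviving anchors only, so a few badly drifting items cannot distort the final transformation.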

HR assessment is a critical part of the HR ecosystem, used to select the best candidates with pre-employment testing, assess training, certify skills, and more.  But there is a huge range in quality, as well as a wide range in the types of assessment it can be designed for.  This post will break down the different approaches and help you find the best solution.

HR assessment platforms help companies create effective assessments, thus saving valuable resources, improving candidate experience & quality, providing more accurate and actionable information about human capital, and reducing hiring bias.  But, finding software solutions that can help you reap these benefits can be difficult, especially because of the explosion of solutions in the market.  If you are lost on which tools will help you develop and deliver your own HR assessments, this guide is for you.

What is HR assessment?

HR assessment is a comprehensive process used by human resources professionals to evaluate various aspects of potential and current employees’ abilities, skills, and performance. This process encompasses a wide range of tools and methodologies designed to provide insights into an individual’s suitability for a role, their developmental needs, and their potential for future growth within the organization.


The primary goal of HR assessment is to make informed decisions about recruitment, employee development, and succession planning. During the recruitment phase, HR assessments help in identifying candidates who possess the necessary competencies and cultural fit for the organization.

There are various types of assessments used in HR.  Here are four main areas, though this list is by no means exhaustive.

  1. Pre-employment tests to select candidates
  2. Post-training assessments
  3. Certificate or certification exams (can be internal or external)
  4. 360-degree assessments and other performance appraisals

 

Pre-employment tests

Finding good employees in an overcrowded market is a daunting task. In fact, according to the Harvard Business Review, 80% of employee turnover is attributed to poor hiring decisions. Bad hires are not only expensive, but can also adversely affect cultural dynamics in the workforce. This is one area where HR assessment software shows its value.

There are different types of pre-employment assessments. Each of them achieves a different goal in the hiring process. The major types of pre-employment assessments include:

Personality tests: Despite rapidly finding their way into HR, these types of pre-employment tests are widely misunderstood. Personality tests measure traits on the social and behavioral spectrum; one of their main goals is to predict a candidate’s likelihood of success based on behavioral traits.

Aptitude tests: Unlike personality tests or emotional intelligence tests, which tend to lie on the social spectrum, aptitude tests measure problem-solving, critical thinking, and agility.  These tests are popular because they predict job performance better than any other type, tapping into areas that cannot be found in resumes or job interviews.

Skills testing: These kinds of tests can be considered a measure of job experience, ranging from high-end skills to basic skills such as typing or Microsoft Excel. Skill tests can measure specific skills, such as communication, or generalized skills, such as numeracy.

Emotional intelligence tests: These assessments are a newer concept but are becoming important in the HR industry. With strong emotional intelligence (EI) associated with benefits such as improved workplace productivity and good leadership, many companies are investing heavily in developing these kinds of tests.  Although they can be administered to any candidate, they are generally recommended for people seeking leadership positions or those expected to work in highly social contexts.

Risk tests: As the name suggests, these tests help companies reduce risk. Risk assessments offer assurance to employers that their workers will commit to established work ethics and not engage in activities that may cause harm to themselves or the organization.  There are different types of risk tests: safety tests, which are popular in contexts such as construction, measure the likelihood that a candidate will engage in activities that could cause harm, while integrity tests are another common type.

 

Post-training assessments

This refers to assessments that are delivered after training.  It might be a simple quiz after an eLearning module, or a full certification exam after months of training (see the next section); often, it is somewhere in between.  For example, you might spend an afternoon sitting through a training course, after which you take a formal test that is required to do something on the job.  When I was a high school student working in a lumber yard, I did exactly this to become an OSHA-approved forklift driver.

 

Certificate or certification exams

Sometimes the exam process is high-stakes and formal; it is then a certificate, certification, or sometimes a licensure exam.  More on that here.  These can be internal to the organization or external.

Internal certification: The credential is awarded by the training organization, and the exam is specifically tied to a certain product or process that the organization provides in the market.  There are many such examples in the software industry.  You can get certifications in AWS, SalesForce, Microsoft, etc.  One of our clients makes MRI and other medical imaging machines; candidates are certified on how to calibrate/fix them.

External certification: The credential is awarded by an external board or government agency, and the exam is industry-wide.  An example of this is the SIE exams offered by FINRA.  A candidate might go to work at an insurance company or other financial services company, who trains them and sponsors them to take the exam in hopes that the company will get a return by the candidate passing and then selling their insurance policies as an agent.  But the company does not sponsor the exam; FINRA does.

 

360-degree assessments and other performance appraisals

Job performance is one of the most important concepts in HR, and also one that is often difficult to measure.  John Campbell, one of my thesis advisors, was known for developing an 8-factor model of performance.  Some aspects are subjective, and some are easily measured by real-world data, such as number of widgets made or number of cars sold by a car salesperson.  Others involve survey-style assessments, such as asking customers, business partners, co-workers, supervisors, and subordinates to rate a person on a Likert scale.  HR assessment platforms are needed to develop, deliver, and score such assessments.

 

The Benefits of Using Professional-Level Exam Software

Now that you have a good understanding of what pre-employment and other HR tests are, let’s discuss the benefits of integrating pre-employment assessment software into your hiring process. Here are some of the benefits:

Saves Valuable resources

Unlike lengthy and costly traditional hiring processes, pre-employment assessment software helps companies increase their ROI by eliminating HR snags such as unnecessary face-to-face interactions or geographical restrictions. Pre-employment testing tools can also reduce the time it takes to make good hires, while reducing the risk of facing the financial consequences of a bad hire.

Supports Data-Driven Hiring Decisions

Data runs the modern world, and hiring is no different. You are better off letting algorithms crunch the numbers and help you decide which talent is a fit, as opposed to hiring on a hunch or with less accurate methods like an unstructured interview.  Pre-employment assessment software helps you analyze assessments and generate reports and visualizations so that you can choose the right candidates from a large talent pool.

Improving candidate experience 

Candidate experience is an important aspect of a company’s growth, especially considering that 69% of candidates say they would not apply for a job at a company after a negative experience. A good candidate experience means you get access to the best talent in the world.

Elimination of Human Bias

Traditional hiring processes are often based on instinct, which is not effective since it is easy for candidates to provide false information on their resumes and cover letters. The use of pre-employment assessment software helps eliminate this hurdle: the tools level the playing field, so only the best candidates are considered for a position.

 

What To Consider When Choosing HR Assessment Tools

Now that you have a clear idea of what pre-employment tests are and the benefits of integrating pre-employment assessment software into your hiring process, let’s see how you can find the right tools.

Here are the most important things to consider when choosing the right pre-employment testing software for your organization.

Ease-of-use

Candidates should be your top priority when sourcing pre-employment assessment software, because ease of use correlates directly with a good candidate experience. Good software should have simple navigation and be easy to understand.

Here is a checklist to help you decide whether pre-employment assessment software is easy to use:

  • Are the results easy to interpret?
  • What is the UI/UX like?
  • How does it automate tasks such as applicant management?
  • Does it have good documentation and an active community?

Test Delivery and Remote Proctoring

Good online assessment software should include strong online proctoring functionality, because many remote jobs accept applications from all over the world. It is therefore advisable to choose pre-employment testing software with secure remote proctoring capabilities. Here are some things to look for in remote proctoring:

  • Does the platform support security processes such as IP-based authentication, lockdown browser, and AI-flagging?
  • What types of online proctoring does the software offer? Live real-time, AI review, or record and review?
  • Does it let you bring your own proctor?
  • Does it offer test analytics?

Test & data security, and compliance

Test security is ultimately about defensibility. There are several layers of security to consider for pre-employment testing, and when evaluating this aspect you should look at what the software does to achieve a high level of security at each layer; data breaches are extremely expensive.

The first layer of security is the test itself. The software should support security technologies and frameworks such as lockdown browser, IP-flagging, and IP-based authentication.

The other layer of security is on the candidate’s side. As an employer, you will have access to the candidate’s private information. How can you ensure that your candidate’s data is secure? That is reason enough to evaluate the software’s data protection and compliance guidelines.

Good pre-employment testing software should comply with regulations such as GDPR. The software should also be flexible enough to adapt to compliance requirements from different parts of the world.

Questions you need to ask:

  • What mechanisms does the software employ to deter and detect cheating?
  • Is their remote proctoring function reliable and secure?
  • Are they compliant with security standards and regulations such as ISO and GDPR? Do they support single sign-on (SSO)?
  • How does the software protect user data?

Psychometrics

Psychometrics is the science of assessment; it helps produce accurate scores from defensible tests, and also makes tests more efficient, reduces bias, and provides a host of other benefits.  You should ensure that your solution supports the level of psychometrics your program needs, such as classical item analysis, reliability estimation, and modern approaches like adaptive testing.
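
As a small, concrete illustration of the item-analysis piece, here is a minimal sketch (in Python) that computes classical item difficulty (proportion correct) and a point-biserial discrimination, the kinds of statistics an item analysis report provides; the tiny 0/1 response matrix is an illustrative assumption:

from statistics import mean, pstdev

# Minimal sketch of classical item analysis on a scored 0/1 response matrix.
# Rows are examinees, columns are items; the data are illustrative assumptions.
responses = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]

total_scores = [sum(row) for row in responses]

for j in range(len(responses[0])):
    item = [row[j] for row in responses]
    p = mean(item)  # classical difficulty: proportion of examinees answering correctly
    sx, sy = pstdev(item), pstdev(total_scores)
    if sx == 0 or sy == 0:
        rpb = float("nan")  # undefined if the item or the total score has no variance
    else:
        cov = mean(x * y for x, y in zip(item, total_scores)) - mean(item) * mean(total_scores)
        rpb = cov / (sx * sy)  # point-biserial: item-total correlation
    print(f"Item {j + 1}: P = {p:.2f}, point-biserial = {rpb:.2f}")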

 

User experience

A good user experience is a must-have when you are sourcing any enterprise software. Modern pre-employment testing software should design the user experience with both candidates and employers in mind. Some signs that software offers a seamless user experience include:

  • User-friendly interface
  • Simple and easy to interact with
  • Easy to create and manage item banks
  • Clean dashboard with advanced analytics and visualizations

Tailoring the candidate experience to candidates’ expectations helps attract high-quality talent.

 

Scalability and automation

With a single job post attracting approximately 250 candidates, scalability isn’t something you should overlook. Good pre-employment testing software should be able to handle any workload without sacrificing assessment quality.

It is also important to check the automation capabilities of the software. The hiring process has many repetitive tasks that can be automated with technologies such as machine learning (ML), artificial intelligence (AI), and robotic process automation (RPA).

Here are some questions you should consider in relation to scalability and automation:

  • Does the software offer Automated Item Generation (AIG)?
  • How many candidates can it handle?
  • Can it support candidates from different locations worldwide?

Reporting and analytics


Good pre-employment assessment software will not leave you hanging after helping you develop and deliver the tests; it will enable you to derive important insights from the assessments.

The analytics reports can then be used to make data-driven decisions about which candidates are suitable and how to improve the candidate experience. Here are some questions to ask about reporting and analytics:

  • Does the software have a good dashboard?
  • What format are reports generated in?
  • What are some key insights that prospects can gather from the analytics process?
  • How good are the visualizations?

Customer and Technical Support

Customer and technical support is not something you should overlook. Good pre-employment assessment software should have an omni-channel support system that is available 24/7, because some situations need a fast response. Here are some of the questions you should ask when vetting customer and technical support:

  • What support channels does the vendor offer, and how prompt is their support?
  • How good is their FAQ/resources page?
  • Do they offer support in multiple languages?
  • Do they have dedicated managers to help you get the best out of your tests?

 

Conclusion

Finding the right HR assessment software is a lengthy process, yet profitable in the long run. We hope this article sheds some light on the important aspects to look for when evaluating such tools. Also, don’t forget to take a pragmatic approach when implementing such tools in your hiring process.

Are you stuck on how you can use pre-employment testing tools to improve your hiring process? Feel free to contact us and we will guide you through the entire process, from concept development to implementation. Whether you need off-the-shelf tests or a comprehensive platform to build your own exams, we can provide the guidance you need.  We also offer free versions of our industry-leading software, FastTest and Assess.ai – visit our Contact Us page to get started!

If you are interested in delving deeper into leadership assessments, you might want to check out this blog post.  For more insights and an example of how HR assessments can fail, check out our blog post called Public Safety Hiring Practices and Litigation. The blog post titled Improving Employee Retention with Assessment: Strategies for Success explores how strategic use of assessments throughout the employee lifecycle can enhance retention, build stronger teams, and drive business success by aligning organizational goals with employee development and engagement.


AI remote proctoring has seen an incredible increase in usage during the COVID-19 pandemic. ASC works with a very wide range of clients, with a wide range of remote proctoring needs, and therefore we partner with a range of remote proctoring vendors that match our clients’ needs, in some cases bringing on a new vendor at the request of a client.

At a broad level, we recommend live online proctoring for high-stakes exams such as medical certifications, and AI remote proctoring for medium-stakes exams such as pre-employment assessment. Suppose you have decided that you want to find a vendor for AI remote proctoring. How can you start evaluating solutions? 

Well, it might be surprising, but even within the category of “AI remote proctoring” there can be substantial differences in the functionality provided, and you should therefore closely evaluate your needs and select the vendor that makes the most sense – which is the approach we at ASC have always taken.  We have also prepared a list of the best online proctoring software platforms for your use.

What is AI Remote Proctoring?

AI remote proctoring is a cutting-edge technology that uses artificial intelligence to monitor and supervise online examinations. Unlike traditional in-person proctoring, which requires human invigilators to oversee exams, AI remote proctoring leverages advanced algorithms to ensure the integrity and security of the testing process. This system typically employs a combination of facial recognition, screen monitoring, and behavior analysis to detect any suspicious activity during the exam.

The main benefits of using AI for remote proctoring

One of the primary advantages of AI remote proctoring is its ability to provide a scalable and flexible solution for educational institutions, certification bodies, and corporations. It allows students and test-takers to complete their exams from the comfort of their own homes or any other location with an internet connection. This not only reduces logistical challenges but also opens up opportunities for a wider range of candidates to participate in assessments.

Moreover, AI remote proctoring enhances the fairness and accuracy of the examination process. The technology can continuously analyze multiple data points in real time, identifying potential instances of cheating such as looking away from the screen, using unauthorized devices, or receiving external help. These automated systems can flag suspicious behavior for further review by human proctors, ensuring a robust and reliable evaluation process.

In addition to maintaining the integrity of the exams, AI remote proctoring also respects the privacy of test-takers. Many systems are designed with stringent data protection measures to ensure that personal information and exam data are securely handled and stored. This combination of security, flexibility, and reliability makes AI remote proctoring an increasingly popular choice for modern educational and professional assessments.

How does AI remote proctoring work?

Because this approach is fully automated, we need to train machine learning models to look for things that we would generally consider a potential flag.  These have to be very specific!  Here are some examples:

  1. Two faces in the screen
  2. No faces in the screen
  3. Voices detected
  4. Black or white rectangle approximately 2-3 inches by 5 inches (phone is present)
  5. Face looking away or down
  6. White rectangle approximately 8 inches by 11 inches (notebook or extra paper is present)

These are continually monitored at each slice of time, perhaps twice per second.  The video frames or still images are evaluated with machine learning models, such as support vector machines, to determine the probability that each flag is occurring, and then an overall cheating score or probability is calculated.  You can see this happening in real time in this very helpful video.
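
To make the idea of an overall score concrete, here is a minimal sketch (in Python) of how per-frame flag probabilities might be rolled up into a single risk score; the flag names, weights, threshold, and aggregation rule are illustrative assumptions, not any vendor's actual model:

# Minimal sketch: combine per-frame flag probabilities into an overall risk score.
# Flag names, weights, threshold, and the aggregation rule are illustrative only;
# real proctoring engines use trained models rather than hand-picked weights.
frames = [
    # each dict holds model-estimated probabilities for one time slice (~0.5 s)
    {"two_faces": 0.02, "no_face": 0.01, "voices": 0.10, "phone_shape": 0.03},
    {"two_faces": 0.01, "no_face": 0.02, "voices": 0.65, "phone_shape": 0.04},
    {"two_faces": 0.03, "no_face": 0.01, "voices": 0.70, "phone_shape": 0.55},
]

weights = {"two_faces": 1.0, "no_face": 0.8, "voices": 0.6, "phone_shape": 1.0}
threshold = 0.5  # per-flag probability that counts as a "hit" worth reviewing

flag_hits = {flag: sum(f[flag] >= threshold for f in frames) for flag in weights}
overall = sum(weights[flag] * hits for flag, hits in flag_hits.items()) / len(frames)

print("Per-flag hits:", flag_hits)
print(f"Overall risk score: {overall:.2f}")  # higher scores get routed to human review first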

If you are a fan of the show Silicon Valley you might recall how a character built an app for recognizing food dishes with AI… and the first iteration was merely “hot dog” vs. “not hot dog.”  This was a nod towards how many applications of AI break problems down into simple chunks.

Some examples of differences in AI remote proctoring

   1. Video vs. Stills: Some vendors record full HD video of the student’s face with the webcam, while others are designed for lower-stakes exams, and only take a still shot every 10 seconds.  In some cases, the still-image approach is better, especially if you are delivering exams in low-bandwidth areas.

   2. Audio: Some vendors record audio, and flag it with AI.  Some do not record audio.

   3. Lockdown browser: Some vendors provide their own lockdown browser, such as Sumadi.  Some integrate with market leaders like Respondus or SafeExam.  Others do not provide this feature.

   4. Peripheral detection: Some vendors can detect certain hardware situations that might be an issue, such as a second monitor. Others do not.

   5. Second camera: Some vendors have the option to record a second camera; typically this is from the examinee’s phone, which is placed somewhere to see the room, since the computer webcam can usually only see their face.

   6. Screen recording: Some vendors record full video of the examinee’s screen as they take the exam. It can be argued that this increases security, but it is often not required if there is a lockdown browser. It can also be argued that this decreases security, because images of all your items then reside somewhere other than your assessment platform.

   7. Additional options: For example, Examus has a very powerful feature for Bring Your Own Proctors, where your staff can provide live online proctoring, enhanced with AI in real time. This would allow you to scale up security for certain exams that have high stakes but low volume, where live proctoring is more appropriate.

Want some help in finding a solution?

We can help you find the right solution and implement it quickly for a sound, secure digital transformation of your assessments. Contact solutions@assess.com to request a meeting with one of our assessment consultants. Or, sign up for a free account in our assessment platforms, FastTest and Assess.ai and see how our Rapid Assessment Development (RAD) can get you up and running in only a few days. Remember that the assessment platform itself also plays a large role in security: leveraging modern methods like computerized adaptive testing or linear on-the-fly testing will help a ton.

What are current topics and issues regarding AI remote proctoring?

ASC hosted a webinar in August 2022, specifically devoted to this topic.  You can view the video below.

Paper-and-pencil testing used to be the only way to deliver assessments at scale.  The introduction of computer-based testing (CBT) in the 1980s was a revelation – higher fidelity item types, immediate scoring & feedback, and scalability all changed with the advent of the personal computer and then later the internet.  Delivery mechanisms including remote proctoring provided students with the ability to take their exams anywhere in the world.  This all exploded tenfold when the pandemic arrived.  So why are some exams still offline, with paper and pencil?

Many educational institutions are unsure which examination model to stick with.  Should they continue with the online model they used when everyone was stuck at home?  Should they adopt multi-modal examination models, or should they go back to the traditional pen-and-paper method?

This blog post will provide you with an evaluation of whether paper-and-pencil exams are still worth it in 2021. 

 

Paper-and-pencil testing: The good, the bad, and the ugly

The Good

Offline exams have been a stepping stone toward the development of more effective modern assessment models. We can’t ignore the fact that traditional exams have several advantages.

Some advantages of paper-and-pencil testing include students’ familiarity with the format, the social connection between learners in a shared testing room, immunity to technical glitches, and affordability. Some schools don’t have the resources for anything else, and pen-and-paper assessments are the only option available.

This is especially true in areas of the world that do not have the internet bandwidth or other technology necessary to deliver internet-based testing.

Another advantage of paper exams is that they can often work better for students with special needs, such as blind students who need a reader.

Paper and pencil testing is often more cost-efficient in certain situations where the organization does not have access to a professional assessment platform or learning management system.

 

The Bad and The Ugly

However, paper-and-pencil testing does have a number of shortfalls.

1. Needs a lot of resources to scale

Delivery of paper-and-pencil testing at large scale requires a lot of resources. You are printing and shipping, sometimes with hundreds of trucks around the country.  Then you need to get all the exams back, which is even more of a logistical lift.

2. Prone to cheating

Most people think that offline exams are cheat-proof, but that is not the case. Most offline exams count on invigilators and supervisors to make sure that cheating does not occur, yet many pen-and-paper assessments are open to leakage of the test content. A high candidate-to-invigilator ratio is another factor that contributes to cheating in offline exams.

3. Poor student engagement

We live in a world of instant gratification, and that is the same when it comes to assessments. Unlike online exams, which have options to keep students engaged, offline exams are open to constant distraction from external factors.

Offline exams also have few options when it comes to question types. 

4. Time to score

“To err is human.” But when it comes to assessments, accuracy and consistency are essential. Traditional methods of hand-scoring paper tests are slow and labor-intensive, and instructors take a long time to evaluate tests. This delay defeats much of the purpose of assessment.

5. Poor result analysis

Pen-and-paper exams depend on instructors to analyze the results and come up with insights. This requires a lot of human resources and expensive software. It is also difficult to find out whether your learning strategy is working or needs adjustment.

6. Time to release results

Online exams can be immediate.  If you ship paper exams back to a single location, score them, perform psychometrics, then mail out paper result letters?  Weeks.

7. Slow availability of results to analyze

Similarly, psychometricians and other stakeholders do not have immediate access to results.  This delays psychometric analysis, timely feedback to students and teachers, and other downstream work.

8. Accessibility

Online exams can be built with tools for zoom, color contrast changes, automated text-to-speech, and other things to support accessibility.

9. Convenience


Online tests are much more easily distributed.  If you publish one on the cloud, it can immediately be taken, anywhere in the world.

10. Support for diversified question types

Unlike traditional exams which are limited to a certain number of question types, online exams offer many question types.  Videos, audio, drag and drop, high-fidelity simulations, gamification, and much more are possible.

11. Lack of modern psychometrics

Paper exams cannot use computerized adaptive testing, linear-on-the-fly testing, process data, computational psychometrics, and other modern innovations.

12. Environmental friendliness

Sustainability is an important aspect of modern civilization.  Online exams eliminate the need to use resources that are not environmentally friendly such as paper. 

 

Conclusion

Is paper-and-pencil testing still useful?  In most situations, it is not.  The disadvantages outweigh the advantages.  However, there are many situations where paper remains the only option, such as poor tech infrastructure.

How ASC Can Help 

Transitioning from paper-and-pencil testing to the cloud is not a simple task.  That is why ASC is here to help you every step of the way, from test development to delivery.  We provide you with the best assessment software and access to the most experienced team of psychometricians.  Ready to take your assessments online?  Contact us!

 

So, yeah, the use of “hacks” in the title is definitely on the ironic and gratuitous side, but there is still a point to be made: are you making full use of current technology to keep your tests secure?  Gone are the days when you are limited to linear test forms on paper in physical locations.  Here are some quick points on how modern assessment technology can deliver assessments more securely, effectively, and efficiently than traditional methods:

1.  AI delivery like CAT and LOFT

Psychometrics was one of the first areas to apply modern data science and machine learning (see this blog post for a story about a MOOC course).  But did you know it was also one of the first areas to apply artificial intelligence (AI)?  Early forms of computerized adaptive testing (CAT) were suggested in the 1960s and became widely available in the 1980s.  CAT delivers a unique test to each examinee by using complex algorithms to personalize the test.  This makes it much more secure, and can also reduce test length by 50-90%.
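
Here is a minimal sketch (in Python) of one CAT step under a two-parameter logistic (2PL) IRT model: administer the unused item with maximum Fisher information at the current ability estimate. The item parameters and the provisional theta are illustrative assumptions:

import math

# Minimal sketch of one CAT step: pick the unused item with maximum information
# at the current ability estimate (theta), under a 2PL IRT model.
items = {  # item_id: (discrimination a, difficulty b); illustrative values
    "item1": (0.8, -1.0),
    "item2": (1.2,  0.0),
    "item3": (1.5,  0.5),
    "item4": (0.9,  1.5),
}

def p_correct(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta, a, b):
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)  # Fisher information for a 2PL item

def next_item(theta, administered):
    candidates = {i: information(theta, a, b)
                  for i, (a, b) in items.items() if i not in administered}
    return max(candidates, key=candidates.get)

theta_hat = 0.3            # provisional ability estimate after a few items
already_seen = {"item2"}
print("Next item to administer:", next_item(theta_hat, already_seen))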

2. Psychometric forensics

Modern psychometrics has suggested many methods for finding cheaters and other invalid test-taking behavior.  These can range from very simple rules like flagging someone for having a top 5% score in a bottom 5% time, to extremely complex collusion indices.  These approaches are designed explicitly to keep your test more secure.
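
As an illustration of the simple end of that range, here is a minimal sketch (in Python) that flags anyone with a top-5% score earned in a bottom-5% response time; the scores, times, and percentile cutoffs are illustrative assumptions:

from statistics import quantiles

# Minimal sketch of a simple data-forensics flag: top 5% score in a bottom 5% time.
# The scores, times, and 5% cutoffs are illustrative assumptions.
scores = [62, 71, 55, 88, 93, 40, 99, 65, 80, 97]   # percent-correct scores
times  = [48, 52, 60, 45, 50, 63, 12, 55, 47, 58]   # minutes to complete the test

score_cut = quantiles(scores, n=20)[-1]   # approximately the 95th percentile
time_cut  = quantiles(times, n=20)[0]     # approximately the 5th percentile

flagged = [i for i, (s, t) in enumerate(zip(scores, times))
           if s >= score_cut and t <= time_cut]
print("Flagged examinee indices:", flagged)   # examinee 6: very high score, very fast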

3. Tech enhanced items

Tech enhanced items (TEIs) are test questions that leverage technology to be more complex than is possible on paper tests.  Classic examples include drag and drop or hotspot items.  These items are harder to memorize and therefore contribute to security.

4. IP address limits

Suppose you want to make sure that your test is only delivered in certain school buildings, campuses, or other geographic locations.  You can build a test delivery platform that limits your tests to a range of IP addresses, which implements this geographic restriction.
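
Here is a minimal sketch (in Python) of such a restriction using only the standard library; the allowed network ranges are illustrative placeholders:

import ipaddress

# Minimal sketch of an IP-based delivery restriction; the ranges are placeholders.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.12.0.0/16"),      # e.g., main campus network
    ipaddress.ip_network("192.168.50.0/24"),   # e.g., a testing-center lab
]

def is_allowed(client_ip: str) -> bool:
    ip = ipaddress.ip_address(client_ip)
    return any(ip in network for network in ALLOWED_NETWORKS)

print(is_allowed("10.12.34.56"))   # True  -> deliver the exam
print(is_allowed("203.0.113.9"))   # False -> block or redirect the examinee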

5. Lockdown browser

A lockdown browser is special software that locks a computer screen onto a test in progress, so that, for example, a student cannot open Google in another tab and simply search for answers.  Advanced versions can also scan the computer for software that is considered a threat, like screen-capture software.

6. Identity verification

Tests can be built to require unique login procedures, such as requiring a proctor to enter their employee ID and the test-taker to enter their student ID.  Examinees can also be required to show photo ID, and of course, there are new biometric methods being developed.

7. Remote proctoring

The days are gone when you need to hop in the car and drive 3 hours to sit in a windowless room at a community college to take a test.  Nowadays, proctors can watch you and your desktop via webcam.  This is arguably as secure as in-person proctoring, and certainly more convenient and cost-effective.

So, how can I implement these to deliver assessments more securely?

Some of these approaches are provided by vendors specifically dedicated to that space, such as ProctorExam for remote proctoring.  However, if you use ASC’s FastTest platform, all of these methods are available for you right out of the box.  Want to see for yourself?  Sign up for a free account!

Guttman errors are a concept derived from the Guttman Scaling approach to evaluating assessments.  There are a number of ways that they can be used.  Meijer (1994) suggests an evaluation of Guttman errors as a way to flag aberrant response data, such as cheating or low motivation.  He quantified this with two different indices, G and G*.

What is a Guttman error?

It occurs when an examinee answers an item incorrectly when we expect them to get it correct, or vice versa.  Here, we describe the Goodenough methodology as laid out in Dunn-Rankin, Knezek, Wallace, & Zhang (2004).  Goodenough is a researcher’s name, not a comment on the quality of the algorithm!

In Guttman scaling, we begin by taking the scored response matrix (0s and 1s for dichotomous items) and sorting both the columns and rows.  Rows (persons) are sorted by observed score and columns (items) are sorted by observed difficulty.  The following table is sorted in such a manner, and all the data fit the Guttman model perfectly: all 0s and 1s fall neatly on either side of the diagonal.

 

             Score   Item 1   Item 2   Item 3   Item 4   Item 5
P =                  0.0      0.2      0.4      0.6      0.8
Person 1     1       1        0        0        0        0
Person 2     2       1        1        0        0        0
Person 3     3       1        1        1        0        0
Person 4     4       1        1        1        1        0
Person 5     5       1        1        1        1        1

 

Now consider the following table.  Ordering remains the same, but Person 3 has data that falls outside of the diagonal.

 

             Score   Item 1   Item 2   Item 3   Item 4   Item 5
P =                  0.0      0.2      0.4      0.6      0.8
Person 1     1       1        0        0        0        0
Person 2     2       1        1        0        0        0
Person 3     3       1        1        0        1        0
Person 4     4       1        1        1        1        0
Person 5     5       1        1        1        1        1

 

Some publications on the topic are unclear as to whether this is one error (two cells are flipped) or two errors (a cell that is 0 should be 1, and a cell that is 1 should be 0).  In fact, this article changes the definition from one to the other while looking at two rows of the same table.  The Dunn-Rankin et al. book is quite clear: you must subtract the examinee response vector from the perfect response vector for that person’s score, and each cell with a difference counts as an error.

 

             Score   Item 1   Item 2   Item 3   Item 4   Item 5
P =                  0.0      0.2      0.4      0.6      0.8
Perfect      3       1        1        1        0        0
Person 3     3       1        1        0        1        0
Difference           0        0        1        -1       0

 

Thus, there are two errors.

Usage of Guttman errors in data forensics

Meijer suggested the use of G, raw Guttman error count, and a standardized index he called G*:

G* = G / (r(k - r))

Here, k is the number of items on the test and r is the person’s score.

How is this relevant to data forensics?  Guttman errors can be indicative of several things:

  1. Preknowledge: A low-ability examinee memorizes answers to the 20 hardest questions on a 100-item test. Of the 80 they actually answer, they get half correct; correct answers on the hardest items combined with misses on much easier items produce a large number of Guttman errors.
  2. Poor motivation or other non-cheating issues: in a K12 context, a smart kid who is bored might answer the difficult items correctly but get a number of easy items incorrect.
  3. External help: a teacher might be giving answers to some tough items, which would show in the data as a group having a suspiciously high number of errors on average compared to other groups.

How can I calculate G and G*?

Because the calculations are simple, it’s feasible to do both in a simple spreadsheet for small datasets, or with a few lines of code, as sketched below. But for a data set of any reasonable size, you will need specially designed software for data forensics, such as SIFT.
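
For a small dataset, a minimal sketch like the following (in Python) reproduces the Goodenough counting described above; it reuses the aberrant response vector for Person 3 from the example table:

# Minimal sketch of the Goodenough method: count Guttman errors and compute G*.
# Items must be ordered from easiest to hardest, as in the sorted tables above.
def guttman_errors(responses):
    """responses: list of 0/1 item scores, columns ordered easiest to hardest."""
    r = sum(responses)                     # the person's observed score
    k = len(responses)                     # number of items
    perfect = [1] * r + [0] * (k - r)      # perfect Guttman vector for score r
    g = sum(1 for obs, exp in zip(responses, perfect) if obs != exp)
    g_star = g / (r * (k - r)) if 0 < r < k else 0.0   # G* is undefined at r = 0 or r = k
    return g, g_star

person3 = [1, 1, 0, 1, 0]                  # the aberrant row from the example
print(guttman_errors(person3))             # (2, 0.3333...)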

What’s the big picture?

Guttman error indices are by no means perfect indicators of dishonest test-taking, but can be helpful in flagging potential issues at both an individual and group level.  That is, you could possibly flag individual students with high numbers of Guttman errors, or if your test is administered in numerous separate locations such as schools or test centers, you can calculate the average number of Guttman errors at each and flag the locations with high averages.

As with all data forensics, though, a flag does not necessarily mean there are nefarious goings-on.  Instead, it simply gives you a possible reason to open a deeper investigation.