Item parameter drift (IPD) refers to the phenomenon in which the parameter values of a given test item change across multiple testing occasions within the item response theory (IRT) framework. IPD is especially relevant to student progress-monitoring assessments, where a set of items is used several times in one year, or across years, to track student growth. Observing trends in student achievement depends on stable linking (anchoring) between assessment occasions over time: if the item parameters are not stable, the scale is not stable, and neither are comparisons across time points. Some psychometricians consider IPD a special case of differential item functioning (DIF), but the two are distinct issues and should not be confused.
Reasons for Item Parameter Drift
IRT modeling is attractive to the assessment field because of its property of item parameter invariance: parameter estimates do not depend on the particular sample of test-takers used to obtain them. That assumption enables important applications such as strong equating of tests across time and computerized adaptive testing. However, item parameters are not always invariant, and there are many possible reasons for IPD. One possibility is curricular change driven by assessment results, or more concentrated instruction. Other plausible causes are item exposure, cheating, or curricular misalignment with standards. Whatever the cause, the presence of IPD can bias estimates of student ability. In particular, IPD can be highly detrimental to the reliability and validity of high-stakes examinations. It is therefore crucial to check for item parameter drift when anchoring assessment occasions over time, especially when the same anchor items are used repeatedly.
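To make the invariance idea concrete, here is a minimal sketch, using entirely hypothetical parameter values, of the two-parameter logistic (2PL) item response function, showing how drift in an item's difficulty changes the probability of a correct response for the same examinee:

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response
    for an examinee with ability theta on an item with discrimination a
    and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical anchor item: discrimination a = 1.2, difficulty b = 0.0
theta = 0.5
p_year1 = p_correct(theta, 1.2, 0.0)

# If exposure makes the item easier (b drifts down by 0.4 logits),
# the same examinee now has a higher chance of answering correctly.
p_year2 = p_correct(theta, 1.2, -0.4)

print(round(p_year1, 3), round(p_year2, 3))
```

If the Year 2 estimates were naively treated as being on the Year 1 scale, this shift would instead be absorbed into ability estimates, which is exactly how IPD biases scores.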
Perhaps the simplest example is item exposure. Suppose a 100-item test is delivered twice per year, with the same 20 items always retained as anchors. Eventually students will share memories, and the content of those items will become known. More students will answer them correctly over time, making the items appear easier.
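A toy simulation can illustrate this pattern. Assuming, purely hypothetically, that each administration leaks the item to enough additional examinees to raise the true proportion correct by 5 percentage points, the observed classical p-value creeps upward:

```python
import random

random.seed(1)

def administer(n_students, p_true):
    """Simulate one administration; return the observed proportion correct."""
    return sum(random.random() < p_true for _ in range(n_students)) / n_students

# Hypothetical anchor item whose effective difficulty erodes as content leaks
p_vals = []
p_true = 0.55
for occasion in range(6):
    p_vals.append(administer(2000, p_true))
    p_true = min(1.0, p_true + 0.05)  # exposure makes it easier each time

print([round(p, 2) for p in p_vals])  # proportion correct creeps upward
```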
There are several methods for detecting IPD. Some are simpler because they do not require estimation of anchoring constants; others are more difficult because they do. Simple methods include the “3-sigma p-value,” the “0.3 logits,” and the “3-sigma IRT” approaches. More complex methods include the “3-sigma scaled IRT,” the “Mantel-Haenszel,” and the “area between item characteristic curves” approaches; the last two treat IPD as a special case of DIF, which makes it possible to draw on the large body of existing research on DIF methodology.
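As a minimal sketch of the two simplest flags mentioned above, applied to hypothetical Rasch difficulty estimates for five anchor items at two occasions:

```python
# Hypothetical difficulty (b) estimates for five anchor items at two occasions
b_time1 = {"item1": -0.50, "item2": 0.10, "item3": 0.75, "item4": -1.20, "item5": 0.40}
b_time2 = {"item1": -0.55, "item2": 0.05, "item3": 0.10, "item4": -1.25, "item5": 0.45}

diffs = {i: b_time2[i] - b_time1[i] for i in b_time1}

# "0.3 logits" rule: flag any anchor whose difficulty shifts more than 0.3 logits
flag_03 = {i for i, d in diffs.items() if abs(d) > 0.3}

# "3-sigma" rule: flag shifts more than 3 SDs away from the mean shift
mean_d = sum(diffs.values()) / len(diffs)
sd_d = (sum((d - mean_d) ** 2 for d in diffs.values()) / (len(diffs) - 1)) ** 0.5
flag_3sigma = {i for i, d in diffs.items() if abs(d - mean_d) > 3 * sd_d}

print(flag_03)
```

Note that with only five anchors, the drifting item inflates the standard deviation of the shifts, so in this example the 3-sigma rule misses it while the fixed 0.3-logit rule catches it; this is one reason drift checks are usually iterated after removing flagged items.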
Not all psychometricians agree that removing outlying anchor items is the best solution for item parameter drift, but if drifting items are not eliminated from the equating process, they will affect the transformation of ability estimates, not only the item parameters. Consider an examination that classifies examinees as failing or passing, or into four performance categories; under IPD, 10-40% of students could be misclassified. In high-stakes settings where classification carries sanctions or rewards, IPD should be minimized as much as possible. Once items are detected to exhibit IPD, they should be referred to subject-matter experts for further investigation. If a faster decision is needed, flagged anchor items can simply be removed immediately. Psychometricians then re-estimate the linking constants and evaluate IPD again, repeating the process until none of the anchor items shows drift.
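The iterate-until-clean loop described above can be sketched as follows, using a simple mean-difference linking constant and a 0.3-logit flag; all values are hypothetical:

```python
def purify_anchors(b_old, b_new, threshold=0.3):
    """Iteratively drop drifting anchors and re-estimate the linking constant.

    b_old/b_new: dicts of anchor difficulty estimates on the two occasions.
    Returns (linking_constant, surviving_anchor_ids).
    """
    anchors = set(b_old)
    while True:
        # Mean-difference linking constant from the current anchor set
        A = sum(b_old[i] - b_new[i] for i in anchors) / len(anchors)
        # After placing new estimates on the old scale, flag remaining drifters
        drifting = {i for i in anchors if abs(b_new[i] + A - b_old[i]) > threshold}
        if not drifting:
            return A, anchors
        anchors -= drifting

b_old = {"i1": -0.50, "i2": 0.10, "i3": 0.75, "i4": -1.20, "i5": 0.40}
b_new = {"i1": -0.45, "i2": 0.15, "i3": 1.40, "i4": -1.15, "i5": 0.45}

A, kept = purify_anchors(b_old, b_new)
print(round(A, 3), sorted(kept))
```

Here the drifting anchor i3 is dropped on the first pass and the linking constant is re-estimated from the surviving anchors. A production implementation would use a proper linking method such as Stocking-Lord and guard against running out of anchors.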
Certification exam delivery is the process of administering a certification test to candidates. This might seem straightforward, but it is surprisingly complex. The greater the scale and the stakes, the more potential threats and pitfalls.
1. Determine Approach to Certification Exam Delivery
The first step is to determine how you want to deliver the certification exam. Here are your options for exam delivery.
Test Centers (in-person proctoring)
Remotely Proctored (Recorded)
Remotely Proctored (Live)
Unproctored Online
Multi-modal (a mix of the above)
Which approach to use for certification exam delivery depends on several factors.
Are you going to seek accreditation?
Do you have an easy way to make your own locations? One example is that you have quarterly regional conferences for your profession, where you could simply get a side room to deliver your test to candidates since they will already be there. Another is that most of your candidates are coming from training programs at universities, and you are able to use classrooms at those universities.
What are the stakes of the exam? In rare cases, it can be unproctored online. Low- to medium-stakes can be delivered with recorded video proctoring. The highest stakes exams will almost always be at test centers with in-person proctoring, including strong measures like fingerprinting and retinal scanning. Learn more about remote proctoring.
How many exams are you planning to deliver? If a large volume, higher-cost approaches might be out of your budget.
What is your testing cycle like? Some exams are on-demand year-round, while some might be one or two Saturdays per year.
2. Shop for Vendors
Once you have some idea what you are looking for, start shopping for vendors that provide services for certification exam delivery, development, and scoring. In some cases, you might not settle on a certain approach right away, and that’s OK. See what is out there and compare prices. Perhaps the cost of Live Remote Proctoring is more affordable than you anticipated, and you can upgrade to that.
3. Create Policies and Documentation for Certification Exam Delivery
Once you have finalized your vendors, you need to write policies and documentation around them. For example, if your vendor has a certain login page for proctoring (we have ascproctor.com), you should take relevant screenshots and write up a walkthrough so candidates know what to expect. Much of this should go into your Candidate Handbook. Some of the things to cover:
How to prepare for the exam
How to take a practice test
What is allowed during the exam
What is not allowed
ID needed and the check-in process
Details on specific locations (if using locations)
Time limits and other practical considerations in the exam
4. Let Everyone Know
Once you have written up everything, make sure all the relevant stakeholders know. Publish the new Candidate Handbook and announce to the world. Send emails to all upcoming candidates with instructions and an opportunity for a practice exam. Put a link on your homepage. Get in touch with all the training programs or universities in your field. Make sure that everyone has ample opportunity to know about the new process!
5. Roll Out
Finally, of course, you can implement the new approach to certification exam delivery. You might launch a new certification exam from scratch, or perhaps you are moving one from paper to online with remote proctoring, or some other change. Either way, you need a date to start using it and a change management process. The good news is that, even though it’s probably a lot of work to get here, the new approach is probably going to save you time and money in the long run. Roll it out!
Student exam cheating has always been a major concern, even before the massive shift to remote proctoring and online exams during the pandemic. According to a study done in 2007, 60.8% of college students admitted to cheating, while a whopping 95% of those admitted they were never caught! Another article revealed that even renowned institutions such as Harvard and Yale are vulnerable to exam cheating. And this is just the tip of the iceberg when it comes to academic dishonesty.
In 2020, things took a turn for the worse as many institutions struggled with the rapid digital transformation forced by the pandemic. Coupled with the common misconception that cheating is easier in online exams than in traditional exams, the situation seemed catastrophic.
However, this is not an accurate picture of cheating in online exams. Is cheating in online exams as easy as students perceive it to be? No. Just like in-person exams, online exams have loopholes that students can exploit for academic gain. But these loopholes can be mitigated with the right online assessment software and strategies. Perhaps the most important technology is remote proctoring, which has seen an incredible boom during the pandemic.
With exam security, experts follow the adage that “an ounce of prevention is worth a pound of cure,” so deterrence is important. Before going forward, let’s define academic dishonesty and cheating.
Need an online exam platform with more security?
Request a free account or demo with FastTest.
Quickly publish high-quality exams with remote proctoring: live, recorded, or AI.
Cheating refers to actual, intended, or attempted deception and/or dishonesty in relation to academic or employment-related assessments. It is only one part of the larger issue of test integrity; there are other issues, such as teacher assistance on K-12 summative tests or “faking good” on personality assessments.
Cheating in Online Exams: Misconceptions vs. Reality
Here is a simple infographic to demonstrate the common misconceptions on cheating in online exams.
7 ways of cheating in online exams and how to catch or deter them
1. Impersonation (proxy tester)
Impersonation refers to the act of asking or hiring someone else to take an exam on your behalf. This is not a new trick and has long been used in traditional assessments; in online assessments, the risks are even higher. Impersonation can take place at two stages of the exam delivery process: before and during the assessment.
Before the assessment, students share their login details with the imposter. The login details vary depending on the software in use; however, some online proctoring software makes this nearly impossible by using advanced identification features such as biometrics or facial recognition. Impersonation during the exam can take place in many ways. One trick students use is giving their imposters remote access through tools such as virtual machines. This sounds like something pulled out of a spy movie, but you wouldn’t believe what students are willing to do to avoid failure.
Impersonation is a common act of exam cheating and can be reduced by using the following approaches:
How to curb impersonation in online exams
Using authentication technologies such as biometric systems and two-factor authentication to verify registered candidates.
Use online assessment software with an AI-flagging feature, which enables instructors to spot and act on suspicious activity in exam video and audio recordings, such as extra voices or unusual body language.
Use software such as BioSig-ID which uses keystroke analysis to identify metrics such as typing patterns, style, rhythm, and pressure. These are then used to create a unique persona that is almost impossible to replicate.
2. The ‘Old school’
The ‘old school’ approach is popular in traditional exams. It includes tricks such as writing notes on your palms or sticky notes and using sign language to communicate with classmates. Many institutions make the mistake of assuming these tricks cannot be used in online exams. They can, but remote proctoring can catch them.
How to deal with the ‘old-school tricks’
Most of the old-school tricks of cheating can be identified with online proctoring software. In many cases, AI-based flagging is sufficient. But for high-stakes exams, you definitely need a human proctor in real time, because that not only protects the integrity of that examinee’s score, but greatly deters them from stealing the test content.
3. Advanced tech gadgets and software
The advancement of technology has turned some students into ‘James Bond’ types of cheaters. They rely on sophisticated tech gadgets and software to cheat in online proctored exams. Here are some creative ways students use software and technology to try to outwit the system:
Tech gadgets such as smartwatches and microphones– These devices enable students to communicate about the exam with people from the outside. Smartwatches can also be used to store answers to the tests.
Using remote software– This is one of the most popular ways of cheating in online assessments. Students install remote software to connect them with help from the outside. Some even use web versions of applications such as WhatsApp to cheat in exams.
Projectors or cameras– This is another way students can cheat in online proctored exams. Students may set up cameras on their desks and have their family or friends project the answers in front of them.
Screen Mirroring– Using certain tools, students can grant monitor access to friends and family in the room, who can help them with the test.
How to stop the ‘tech-savvy’ students from cheating
Use a lockdown browser to stop the student from surfing the web or using any software apart from the exam itself.
Use AI flagging to recognize and act on suspicious behavior
Collaborate with a remote proctoring provider that offers audio and video proctoring services. You can also add a human proctor to the equation to increase assessment security. Good proctoring services will use two cameras, require room scans, lock down the computer, and other approaches to addressing tech-based cheating.
4. “Brain Dumps” and sharing of test content
In many cases, this form of student exam cheating can be called the ‘charity’ of academic dishonesty. It is popular in traditional exams: students share exam questions with others who have not yet taken the exam because they feel they are being helpful, even if it could end up hurting themselves. Candidates may take screenshots or use cameras to record questions and share them with others, or simply try to memorize items.
5. The ‘Water or Bathroom Break’
A student pretends to have a ‘bathroom emergency’ or ‘bad connection,’ and the first thing they do after logging off the system is grab a device or textbook to cheat. To maintain high standards of academic integrity, institutions should not take chances.
How to deal with the ‘Water or Bathroom Break’
This is a tricky one, as there is no way of knowing whether the excuse is genuine. However, institutions should establish strict examination policies for such circumstances. For instance, bathroom breaks can be confined to scheduled points: a 200-item test might be split into two 100-item sections with a 10-minute break between them, so there is no chance to return to items already seen.
6. Help from family, experts, and friends
This is a method that many students consider ‘safe’. There are several ways family, experts, and friends can help students cheat in online assessments. One way is having a family member or friend in the same room a student is taking the exam. The student can share the questions and the friend or family member can browse the internet and provide answers.
Another way students can cheat is through ‘expert tutors’. Students can hire tutors to help them with the exams or take them on their behalf. In fact, there has been an increased demand for online academic tutors.
How to stop students from getting external help
Tighten up your authentication process by using technologies such as biometric systems and two-factor authentication. This makes it much harder for students to hand the exam off to an outside expert.
Utilize lockdown browser to make sure that students don’t have access to remote software or have the freedom to surf the internet.
Use AI flagging to make sure that the candidates don’t display suspicious behavior.
7. Googling or ChatGPT-ing the answer
Doesn’t access to a computer and the internet provide students with endless opportunities to cheat in online proctored tests? Yes, if the test is not proctored or locked down. Fortunately, this method is relatively easy to address.
How to stop students from using the internet during an exam
Use ASC’s lockdown browser function to stop students from accessing other websites or resources on personal devices. Utilize a proctoring service. Do not allow cell phones at the desk.
Student exam cheating remains common. These are some of the popular ways in which students cheat on exams. However, these tricks and hacks should not scare you from reaping the benefits of online assessments. In fact, by using the solutions suggested in this article, we believe you can reduce the risks of cheating and achieve high standards of academic integrity.
Ready to minimize and deter student cheating in your online proctored exams? Contact us today and get a free consultation!
Paper-and-pencil testing used to be the only way to deliver assessments at scale. The introduction of computer-based testing (CBT) in the 1980s was a revelation: higher-fidelity item types, immediate scoring and feedback, and scalability all changed with the advent of the personal computer and, later, the internet. Delivery mechanisms such as remote proctoring allowed students to take their exams anywhere in the world. All of this exploded tenfold when the pandemic arrived. So why are some exams still offline, with paper and pencil?
Many educational institutions are unsure which examination model to stick with. Should they continue with the online model they used when everyone was stuck at home? Should they adopt multi-modal examination models, or go back to the traditional pen-and-paper method?
This blog post will provide you with an evaluation of whether paper-and-pencil exams are still worth it in 2021.
Paper-and-pencil testing: The good, the bad, and the ugly
Offline exams have been a stepping stone towards the development of modern assessment models that are more effective. We can’t ignore the fact that there are several advantages of traditional exams.
Some advantages of paper-and-pencil testing include student familiarity with the format, development of a social connection between learners, freedom from technical glitches, and affordability. Some schools don’t have the resources for anything else, and pen-and-paper assessments are the only option available.
This is especially true in areas of the world that do not have the internet bandwidth or other technology necessary to deliver internet-based testing.
Another advantage of paper exams is that they can often work better for students with special needs, such as blind students who need a reader.
Paper and pencil testing is often more cost-efficient in certain situations where the organization does not have access to a professional assessment platform or learning management system.
The Bad and The Ugly
However, paper-and-pencil testing does have a number of shortfalls.
1. Needs a lot of resources to scale
Delivery of paper-and-pencil testing at large scale requires a lot of resources. You are printing and shipping, sometimes with hundreds of trucks around the country. Then you need to get all the exams back, which is even more of a logistical lift.
2. Prone to cheating
Most people think that offline exams are cheat-proof, but that is not the case. Most offline exams rely on invigilators and supervisors to make sure cheating does not occur, yet many pen-and-paper assessments are open to leaks. A high candidate-to-invigilator ratio is another factor that contributes to cheating in offline exams.
3. Poor student engagement
We live in a world of instant gratification, and assessments are no different. Unlike online exams, which have options to keep students engaged, offline exams are open to constant distraction from external factors.
Offline exams also have few options when it comes to question types.
4. Time to score
“To err is human.” But when it comes to assessments, accuracy and consistency are essential. Traditional methods of hand-scoring paper tests are slow and labor-intensive; instructors take a long time to evaluate tests. This defeats the entire purpose of assessment.
5. Poor result analysis
Pen-and-paper exams depend on instructors to analyze the results and come up with insights. This requires a lot of human resources or expensive software. It is also difficult to find out whether your learning strategy is working or needs adjustment.
6. Time to release results
Online exams can be immediate. If you ship paper exams back to a single location, score them, perform psychometric analysis, and then mail out paper result letters? Weeks.
7. Slow availability of results to analyze
Similarly, psychometricians and other stakeholders do not have immediate access to results. This prevents psychometric analysis, timely feedback to students/teachers, and other issues.
8. Accessibility
Online exams can be built with tools for zooming, color-contrast changes, automated text-to-speech, and other features to support accessibility.
9. Ease of distribution
Online tests are much more easily distributed. If you publish one to the cloud, it can be taken immediately, anywhere in the world.
10. Support for diversified question types
Unlike traditional exams which are limited to a certain number of question types, online exams offer many question types. Videos, audio, drag and drop, high-fidelity simulations, gamification, and much more are possible.
11. Sustainability
Sustainability is an important aspect of modern civilization. Online exams eliminate the need for resources that are not environmentally friendly, such as paper.
Is paper-and-pencil testing still useful? In most situations, it is not. The disadvantages outweigh the advantages. However, there are many situations where paper remains the only option, such as poor tech infrastructure.
Transitioning from paper-and-pencil testing to the cloud is not a simple task. That is why ASC is here to help you every step of the way, from test development to delivery. We provide you with the best assessment software and access to the most experienced team of psychometricians. Ready to take your assessments online?
HR assessment software is everywhere, but there is a huge range in quality, as well as a wide range in the type of assessment that it is designed for.
HR assessment platforms help companies create effective assessments, thus saving valuable resources, improving candidate experience & quality, providing more accurate and actionable information about human capital, and reducing hiring bias. But, finding software solutions that can help you reap these benefits can be difficult, especially because of the explosion of solutions in the market. If you are lost on which tools will help you develop and deliver your own HR assessments, this guide is for you.
What is HR assessment?
There are various types of assessments used in HR. Here are four main areas, though this list is by no means exhaustive.
Pre-employment tests to select candidates
Post-training assessments
Certificate or certification exams (can be internal or external)
360-degree assessments and other performance appraisals
Finding good employees in an overcrowded market is a daunting task. In fact, according to research by CareerBuilder, 74% of employers admit to having hired the wrong person. Bad hires are not only expensive but can also adversely affect cultural dynamics in the workforce. This is one area where HR assessment software shows its value.
There are different types of pre-employment assessments. Each of them achieves a different goal in the hiring process. The major types of pre-employment assessments include:
Personality tests: Despite rapidly finding their way into HR, these pre-employment tests are widely misunderstood. They address questions on the social spectrum; one of their main goals is to predict a candidate’s success based on behavioral traits.
Aptitude tests: Unlike personality or emotional intelligence tests, which lie on the social spectrum, aptitude tests measure problem-solving, critical thinking, and agility. These tests are popular because they predict job performance better than any other type: they tap into areas that cannot be found in resumes or job interviews.
Skills Testing: These tests can be considered a measure of job experience, ranging from high-end skills down to low-end skills such as typing or Microsoft Excel. Skill tests can measure either specific skills, such as communication, or generalized skills, such as numeracy.
Emotional Intelligence tests: These assessments are a newer concept but are becoming important in the HR industry. With strong emotional intelligence (EI) associated with benefits such as improved workplace productivity and good leadership, many companies are investing heavily in developing these tests. Although they can be administered to any candidate, it is recommended that they be reserved for people seeking leadership positions or those expected to work in social contexts.
Risk tests: As the name suggests, these types of tests help companies reduce risks. Risk assessments offer assurance to employers that their workers will commit to established work ethics and not involve themselves in any activities that may cause harm to themselves or the organization. There are different types of risk tests. Safety tests, which are popular in contexts such as construction, measure the likelihood of the candidates engaging in activities that can cause them harm. Other common types of risk tests include Integrity tests.
Post-training assessments
This refers to assessments delivered after training. It might be a simple quiz after an eLearning module, or a certification exam after months of training (see next section). Often, it is somewhere in between. For example, you might spend an afternoon sitting through a training course, after which you take a formal test that is required to do something on the job. When I was a high school student, I worked in a lumber yard and did exactly this to become an OSHA-approved forklift driver.
Certificate or certification exams
Sometimes, the exam process can be high-stakes and formal. It is then a certificate or certification, or sometimes a licensure exam. More on that here. This can be internal to the organization, or external.
Internal certification: The credential is awarded by the training organization, and the exam is specifically tied to a certain product or process that the organization provides in the market. There are many such examples in the software industry. You can get certifications in AWS, SalesForce, Microsoft, etc. One of our clients makes MRI and other medical imaging machines; candidates are certified on how to calibrate/fix them.
External certification: The credential is awarded by an external board or government agency, and the exam is industry-wide. An example is the SIE exam offered by FINRA. A candidate might go to work at an insurance company or other financial services firm, which trains them and sponsors them to take the exam, in hopes of a return when the candidate passes and then sells insurance policies as an agent. But the company does not own the exam; FINRA does.
360-degree assessments and other performance appraisals
Job performance is one of the most important concepts in HR, and also one that is often difficult to measure. John Campbell, one of my thesis advisors, was known for developing an 8-factor model of performance. Some aspects are subjective, and some are easily measured by real-world data, such as number of widgets made or number of cars sold by a car salesperson. Others involve survey-style assessments, such as asking customers, business partners, co-workers, supervisors, and subordinates to rate a person on a Likert scale. HR assessment platforms are needed to develop, deliver, and score such assessments.
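As a small illustration of the scoring side, here is a sketch, with entirely made-up ratings, of how a 360-degree report might contrast an employee's self-rating with the mean rating from each other rater group:

```python
from statistics import mean

# Hypothetical 1-5 Likert ratings of one employee, grouped by rater role
ratings = {
    "self":         [4, 4, 5],
    "supervisor":   [3, 4, 3],
    "peers":        [4, 3, 4, 4, 3],
    "subordinates": [5, 4, 4],
    "customers":    [3, 3, 4, 3],
}

# A 360-degree report typically contrasts the self-rating with each group mean
group_means = {role: round(mean(vals), 2) for role, vals in ratings.items()}

# Self-other gap: a large positive gap suggests the employee rates
# themselves more favorably than others do
gap = group_means["self"] - mean(
    m for role, m in group_means.items() if role != "self")

print(group_means, round(gap, 2))
```

A real platform would also handle rater anonymity, minimum group sizes, and missing responses, which this sketch omits.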
HR Assessment Software: The Benefits
Now that you have a good understanding of what pre-employment tests are, let’s discuss the benefits of integrating pre-employment assessment software into your hiring process. Here are some of the benefits:
Saves Valuable resources
Unlike lengthy and costly traditional hiring processes, pre-employment assessment software helps companies increase their ROI by eliminating HR snags such as mandatory face-to-face interactions and geographical restrictions. Pre-employment testing tools can also reduce the time it takes to make good hires while reducing the risk of the financial consequences of a bad hire.
Supports Data-Driven Hiring Decisions
Data runs the modern world, and hiring is no different. You are better off letting complex algorithms crunch the numbers and help you decide which talent is a fit, as opposed to hiring based on a hunch or less-accurate methods like an unstructured interview. Pre-employment assessment software helps you analyze assessments and generate reports/visualizations to help you choose the right candidates from a large talent pool.
Improving candidate experience
Candidate experience is an important aspect of a company’s growth, especially considering that 69% of candidates admit they would not apply for a job at a company after a negative experience. A good candidate experience means you get access to the best talent in the world.
Elimination of Human Bias
Traditional hiring processes are based on instinct. They are not effective, since it’s easy for candidates to provide false information on their resumes and cover letters. The use of pre-employment assessment software has helped eliminate this hurdle: these tools level the playing field so that only the best candidates are considered for a position.
What To Consider When Choosing HR assessment software
Now that you have a clear idea of what pre-employment tests are and the benefits of integrating pre-employment assessment software into your hiring process, let’s see how you can find the right tools.
Here are the most important things to consider when choosing the right pre-employment testing software for your organization.
Ease of use
The candidates should be your top priority when you are sourcing pre-employment assessment software, because ease of use directly correlates with a good candidate experience. Good software should have simple navigation and be easy to comprehend.
Here is a checklist to help you decide if a pre-employment assessment software is easy to use:
Are the results easy to interpret?
What is the UI/UX like?
What ways does it use to automate tasks such as applicant management?
Does it have good documentation and an active community?
Test Delivery (Remote proctoring)
Good online assessment software should feature strong online proctoring functionality, because most remote jobs accept applications from all over the world. It is therefore advisable to choose pre-employment testing software with secure remote proctoring capabilities. Here are some things you should look for in remote proctoring:
Does the platform support security processes such as IP-based authentication, lockdown browser, and AI-flagging?
What types of online proctoring does the software offer? Live real-time, AI review, or record and review?
Does it let you bring your own proctor?
Does it offer test analytics?
Test & data security, and compliance
Defensibility is what defines test security, and there are several layers of security associated with pre-employment testing. When evaluating this aspect, consider what the pre-employment testing software does to achieve the highest level of security, because data breaches are wildly expensive.
The first layer of security is the test itself. The software should support security technologies and frameworks such as lockdown browser, IP-flagging, and IP-based authentication. If you are interested in knowing how to secure your assessments, learn more about it here.
The other layer of security is on the candidate’s side. As an employer, you will have access to the candidate’s private information. How can you ensure that your candidate’s data is secure? That is reason enough to evaluate the software’s data protection and compliance guidelines.
A good pre-employment testing software should be compliant with regulations such as GDPR. The software should also be flexible enough to adapt to compliance guidelines from different parts of the world.
Questions you need to ask:
What mechanisms does the software employ to deter cheating?
Is their remote proctoring function reliable and secure?
Are they compliant with security standards and regulations such as ISO or GDPR? Do they support single sign-on (SSO)?
How does the software protect user data?
A good user experience is a must-have when you are sourcing any enterprise software. A modern pre-employment testing software should create user experience maps with both the candidate and the employer in mind. Some ways you can tell if a software offers a seamless user experience include:
Simple and easy to interact with
Easy to create and manage item banks
Clean dashboard with advanced analytics and visualizations
Customizing your user-experience maps to fit candidates’ expectations attracts high-quality talent.
With a single job post attracting approximately 250 candidates, scalability isn't something you should overlook. A good pre-employment testing software should thus be able to handle any kind of workload without sacrificing assessment quality.
It is also important to check the automation capabilities of the software. The hiring process has many repetitive tasks that can be automated with technologies such as machine learning (ML), artificial intelligence (AI), and robotic process automation (RPA).
Here are some questions you should consider in relation to scalability and automation:
Can it support candidates from different locations worldwide?
Reporting and analytics
A good pre-employment assessment software will not leave you hanging after helping you develop and deliver the tests. It will enable you to derive important insights from the assessments.
The analytics reports can then be used to make data-driven decisions on which candidates are suitable and how to improve the candidate experience. Here are some questions to ask about reporting and analytics:
Does the software have a good dashboard?
What format are reports generated in?
What are some key insights that prospects can gather from the analytics process?
How good are the visualizations?
Customer and Technical Support
Customer and technical support is not something you should overlook. A good pre-employment assessment software should have an omnichannel support system that is available 24/7, mainly because some situations need a fast response. Here are some of the questions you should ask when vetting customer and technical support:
What channels of support does the software offer? How prompt is their support?
How good is their FAQ/resources page?
Do they offer multi-language support mediums?
Do they have dedicated managers to help you get the best out of your tests?
Finding the right HR assessment software is a lengthy process, yet profitable in the long run. We hope this article sheds some light on the important aspects to consider when evaluating such tools. Also, don't forget to take a pragmatic approach when implementing such tools into your hiring process.
Are you stuck on how you can use pre-employment testing tools to improve your hiring process? Feel free to contact us and we will guide you on the entire process, from concept development to implementation. Whether you need off-the-shelf tests or a comprehensive platform to build your own exams, we can provide the guidance you need. We also offer free versions of our industry-leading software FastTest and Assess.ai; visit our Contact Us page to get started!
AI remote proctoring has seen an incredible increase in usage during the COVID pandemic. ASC works with a very wide range of clients with a wide range of remote proctoring needs, and therefore we partner with a range of remote proctoring vendors that match our clients' needs, and in some cases bring on a new vendor at the request of a client.
At a broad level, we recommend live online proctoring for high-stakes exams such as medical certifications, and AI remote proctoring for medium-stakes exams such as pre-employment assessment. Suppose you have decided that you want to find a vendor for AI remote proctoring. How can you start evaluating solutions?
Well, it might be surprising, but even within the category of “AI remote proctoring” there can be substantial differences in the functionality provided, and you should therefore closely evaluate your needs and select the vendor that makes the most sense – which is the approach we at ASC have always taken.
How does AI remote proctoring work?
Because this approach is fully automated, we need to train machine learning models to look for things that we would generally consider to be a potential flag. These have to be very specific! Here are some examples:
Two faces in the screen
No faces in the screen
Black or white rectangle approximately 2-3 inches by 5 inches (phone is present)
Face looking away or down
White rectangle approximately 8 inches by 11 inches (notebook or extra paper is present)
These are continually monitored at each slice of time, perhaps twice per second. The video frames or still images are evaluated with machine learning models, such as support vector machines, to determine the probability that each flag is occurring, and then an overall cheating score or probability is calculated. You can see this happening in real time in this very helpful video.
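To make the "overall cheating score" idea concrete, here is a hypothetical sketch in Python. The flag names, weights, and the simple weighted average are illustrative assumptions only, not any vendor's actual algorithm; real systems use trained models.

```python
# Hypothetical sketch: combine per-frame flag probabilities into an
# overall proctoring risk score between 0 and 1.

FLAG_WEIGHTS = {          # assumed relative severity of each flag
    "two_faces": 1.0,
    "no_face": 0.8,
    "phone_detected": 1.0,
    "looking_away": 0.4,
    "paper_detected": 0.7,
}

def frame_risk(flag_probs):
    """Weighted risk for a single frame, given model probabilities per flag."""
    return sum(FLAG_WEIGHTS[f] * p for f, p in flag_probs.items())

def session_risk(frames):
    """Average frame risk across a session, scaled to the 0-1 range."""
    if not frames:
        return 0.0
    max_risk = sum(FLAG_WEIGHTS.values())
    return sum(frame_risk(f) for f in frames) / (len(frames) * max_risk)

frames = [
    {"two_faces": 0.02, "no_face": 0.01, "phone_detected": 0.05,
     "looking_away": 0.10, "paper_detected": 0.02},
    {"two_faces": 0.90, "no_face": 0.01, "phone_detected": 0.60,
     "looking_away": 0.20, "paper_detected": 0.05},
]
print(round(session_risk(frames), 3))
```

In practice a vendor might also weight consecutive flagged frames more heavily than isolated ones, since a phone visible for ten seconds is more suspicious than a single noisy frame.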
If you are a fan of the show Silicon Valley you might recall how a character built an app for recognizing food dishes with AI… and the first iteration was merely “hot dog” vs. “not hot dog.” This was a nod towards how many applications of AI break problems down into simple chunks.
Some examples of differences in AI remote proctoring
1. Video vs. Stills: Some vendors record full HD video of the student’s face with the webcam, while others are designed for lower-stakes exams, and only take a still shot every 10 seconds. In some cases, the still-image approach is better, especially if you are delivering exams in low-bandwidth areas.
2. Audio: Some vendors record audio, and flag it with AI. Some do not record audio.
3. Lockdown browser: Some vendors provide their own lockdown browser, such as Sumadi. Some integrate with market leaders like Respondus or SafeExam. Others do not provide this feature.
4. Peripheral detection: Some vendors can detect certain hardware situations that might be an issue, such as a second monitor. Others do not.
5. Second camera: Some vendors have the option to record a second camera; typically this is from the examinee’s phone, which is placed somewhere to see the room, since the computer webcam can usually only see their face.
6. Screen recording: Some vendors record full video of the examinee’s screen as they take the exam. It can be argued that this increases security, but is often not required if there is lockdown browser. It can also be argued that this decreases security, because now images of all your items reside in someplace other than your assessment platform.
7. Additional options: For example, Examus has a very powerful feature for Bring Your Own Proctors, where staff would be able to provide live online proctoring, enhanced with AI in real time. This would allow you to scale up the security for certain classifications that have high stakes but low volume, where live proctoring would be more appropriate.
Want some help in finding a solution?
We can help you find the right solution and implement it quickly for a sound, secure digital transformation of your assessments. Contact firstname.lastname@example.org to request a meeting with one of our assessment consultants. Or, sign up for a free account in our assessment platforms, FastTest and Assess.ai and see how our Rapid Assessment Development (RAD) can get you up and running in only a few days. Remember that the assessment platform itself also plays a large role in security: leveraging modern methods like computerized adaptive testing or linear on-the-fly testing will help a ton.
What are current topics and issues regarding AI remote proctoring?
ASC hosted a webinar in August 2022, specifically devoted to this topic. You can view the video below.
AI Remote Proctoring: How To Choose A Solution, by Nathan Thompson, PhD. Published 2021-05-17, updated 2023-10-23.
Psychometric forensics is a surprisingly deep and complex field. Many of the indices are incredibly sophisticated, but a good high-level and simple analysis to start with is overall time vs. scores, which I call Time-Score Analysis. This approach uses simple flagging on two easily interpretable metrics (total test time in minutes and number correct raw score) to identify possible pre-knowledge, clickers, and harvester/sleepers. Consider the four quadrants that a bivariate scatterplot of these variables would produce.
High scores with high time: diligent examinees (typically no flag)
High scores with low time: possible pre-knowledge (flag: top 50% score and bottom 5% time)
Low scores with low time: "clickers" or other low motivation (flag: bottom 5% time and score)
Low scores with high time: harvesters, sleepers, or just very low ability (flag: top 5% time and bottom 5% scores)
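The quadrant flags above can be sketched in a few lines of Python. The cutoffs and toy data here are assumptions for illustration; a real program would apply the 5%/95% cutoffs to a large sample, whereas the demo uses looser cutoffs so the tiny dataset produces flags.

```python
# A minimal sketch of time-score flagging using percentile ranks
# on raw score and total test time.

def percentile_rank(values, x):
    """Fraction of values at or below x."""
    return sum(v <= x for v in values) / len(values)

def flag_examinees(records, low=0.05, high=0.95):
    """records: list of (examinee_id, raw_score, minutes) tuples."""
    scores = [r[1] for r in records]
    times = [r[2] for r in records]
    flags = {}
    for eid, score, minutes in records:
        sp = percentile_rank(scores, score)
        tp = percentile_rank(times, minutes)
        if tp <= low and sp >= 0.5:
            flags[eid] = "possible preknowledge"
        elif tp <= low and sp <= low:
            flags[eid] = "clicker / low motivation"
        elif tp >= high and sp <= low:
            flags[eid] = "harvester or sleeper"
    return flags

records = [("A", 95, 55), ("B", 92, 12), ("C", 40, 13),
           ("D", 35, 59), ("E", 70, 48)]
print(flag_examinees(records, low=0.4, high=0.8))
```

With this toy data, examinee B (high score, very low time), C (low score, low time), and D (low score, maximum time) are flagged, while the diligent high scorer A is not.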
An example of Time-Score Analysis
Consider the example data below. What can this tell us about the performance of the test in general, and about specific examinees?
This test had 100 items, scored classically (number-correct), and a time limit of 60 minutes. Most examinees took 45-55 minutes, so the time limit was appropriate. A few examinees spent 58-59 minutes; there will usually be some diligent students like that. There was a fairly strong relationship between time and score, in that examinees who took longer scored higher.
Now, what about the individuals? I’ve highlighted 5 examples.
1. This examinee had the shortest time and one of the lowest scores. They apparently did not care very much; they are an example of a low-motivation examinee that moved through quickly. One of my clients calls these "clickers."
2. This examinee also took a short time but had a suspiciously high score. They are definitely an outlier on the scatterplot and should perhaps be investigated.
3. This examinee is simply super-diligent. They went right up to the 60-minute limit and achieved one of the highest scores.
4. This examinee also went right up to the 60-minute limit but had one of the lowest scores. They are likely low ability or low motivation. That same client of mine calls these "sleepers": a candidate that is forced to take the exam but doesn't care, so just sits there and dozes. Alternatively, it might be a harvester: someone who has been assigned to memorize test content, so they spend all the time they can, but only look at half the items so they can focus on memorization.
5. This examinee had by far the lowest score and one of the lowest times. Perhaps they didn't even answer every question. Again, there is most likely a motivation/effort issue here.
How useful is time-score analysis?
Like other aspects of psychometric forensics, this is primarily useful for flagging purposes. We do not know yet if #4 is a harvester or just low motivation. Instead of accusing them, we open an investigation. How many items did they attempt? Are they a repeat test-taker? At what location did they take the test? Do we have proctor notes, site video, remote proctoring video, or other evidence that we can review?
There is a lot that can go into such an investigation. Moreover, simple analyses such as this are merely the tip of the iceberg when it comes to psychometric forensics. In fact, there is so much that I've heard some organizations simply stick their heads in the sand and don't even bother checking out someone like #4. It just isn't in the budget.
However, test security is an essential aspect of validity. If someone has stolen your test items, the test is compromised, and you are guaranteed that scores do not mean the same thing they meant when the test was published. It's now apples and oranges, even though the items on the test are the same. Perhaps you will not challenge individual examinees, but instead institute a plan to publish new test forms every 6 months. Regardless, your organization needs to have some difficult internal discussions and establish a test security plan.
So, yeah, the use of “hacks” in the title is definitely on the ironic and gratuitous side, but there is still a point to be made: are you making full use of current technology to keep your tests secure? Gone are the days when you are limited to linear test forms on paper in physical locations. Here are some quick points on how modern assessment technology can deliver assessments more securely, effectively, and efficiently than traditional methods:
1. AI delivery like CAT and LOFT
Psychometrics was one of the first areas to apply modern data science and machine learning (see this blog post for a story about a MOOC course). But did you know it was also one of the first areas to apply artificial intelligence (AI)? Early forms of computerized adaptive testing (CAT) were suggested in the 1960s and had become widely available in the 1980s. CAT delivers a unique test to each examinee by using complex algorithms to personalize the test. This makes it much more secure, and can also reduce test length by 50-90%.
2. Psychometric forensics
Modern psychometrics has suggested many methods for finding cheaters and other invalid test-taking behavior. These can range from very simple rules like flagging someone for having a top 5% score in a bottom 5% time, to extremely complex collusion indices. These approaches are designed explicitly to keep your test more secure.
3. Tech enhanced items
Tech enhanced items (TEIs) are test questions that leverage technology to be more complex than is possible on paper tests. Classic examples include drag and drop or hotspot items. These items are harder to memorize and therefore contribute to security.
4. IP address limits
Suppose you want to make sure that your test is only delivered in certain school buildings, campuses, or other geographic locations. You can build a test delivery platform that limits your tests to a range of IP addresses, which implements this geographic restriction.
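A minimal sketch of how such an IP restriction might work, using Python's standard ipaddress module. The network ranges shown are hypothetical campus CIDR blocks, and a real platform would perform this check server-side at test launch.

```python
# Restrict test delivery to configured campus networks.
import ipaddress

ALLOWED_NETWORKS = [                      # hypothetical campus ranges
    ipaddress.ip_network("10.20.0.0/16"),
    ipaddress.ip_network("192.168.5.0/24"),
]

def is_allowed(client_ip):
    """Return True if the client IP falls inside any allowed network."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ALLOWED_NETWORKS)

print(is_allowed("10.20.33.7"))   # inside the /16, so True
print(is_allowed("8.8.8.8"))      # outside all ranges, so False
```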
5. Lockdown browser
A lockdown browser is a special software that locks a computer screen onto a test in progress, so for example a student cannot open Google in another tab and simply search for answers. Advanced versions can also scan the computer for software that is considered a threat, like a screen capture software.
6. Identity verification
Tests can be built to require unique login procedures, such as requiring a proctor to enter their employee ID and the test-taker to enter their student ID. Examinees can also be required to show photo ID, and of course, there are new biometric methods being developed.
7. Remote proctoring
The days are gone when you need to hop in the car and drive 3 hours to sit in a windowless room at a community college to take a test. Nowadays, proctors can watch you and your desktop via webcam. This is arguably as secure as in-person proctoring, and certainly more convenient and cost-effective.
So, how can I implement these to deliver assessments more securely?
Some of these approaches are provided by vendors specifically dedicated to that space, such as ProctorExam for remote proctoring. However, if you use ASC’s FastTest platform, all of these methods are available for you right out of the box. Want to see for yourself? Sign up for a free account!
A test security plan (TSP) is a document that lays out how an assessment organization addresses the security of its intellectual property, to protect the validity of the exam scores. If a test is compromised, the scores become meaningless, so security is obviously important. The test security plan helps an organization anticipate test security issues, establish deterrent and detection methods, and plan responses. It can also cover validity threats that are not security-related, such as how to deal with examinees that have low motivation. Note that it is not limited to delivery; it can often include topics like how to manage item writers.
Since the first tests were developed 2000 years ago for entry into the civil service of Imperial China, test security has been a concern. The reason is quite straightforward: most threats to test security are also validity threats. The decisions we make with test scores could therefore be invalid, or at least suboptimal. It is therefore imperative that organizations that use or develop tests should develop a TSP.
There are several reasons to develop a test security plan. First, it drives greater security and therefore validity. The TSP will enhance the legal defensibility of the testing program. It helps to safeguard the content, which is typically an expensive investment for any organization that develops tests themselves. If incidents do happen, they can be dealt with more swiftly and effectively. It helps to manage all the security-related efforts.
The development of such a complex document requires a strong framework. We advocate a framework with three phases: planning, implementation, and response. In addition, the TSP should be revised periodically.
Phase 1: Planning
The first step in this phase is to list all potential threats to each assessment program at your organization. This could include harvesting of test content, preknowledge of test content from past harvesters, copying other examinees, proxy testers, proctor help, and outside help. Next, these should be rated on axes that are important to the organization; a simple approach would be to rate on potential impact to score validity, cost to the organization, and likelihood of occurrence. This risk assessment exercise will help the remainder of the framework.
Next, the organization should develop the test security plan. The first piece is to identify deterrents and procedures to reduce the possibility of issues. This includes delivery procedures (such as a lockdown browser or proctoring), proctor training manuals, a strong candidate agreement, anonymous reporting pathways, confirmation testing, and candidate identification requirements. The second piece is to explicitly plan for psychometric forensics.
This can range from complex collusion indices based on item response theory to simple flags, such as a candidate responding to a certain multiple choice option more than 50% of the time or obtaining a score in the top 10% but in the lowest 10% of time. The third piece is to establish planned responses. What will you do if a proctor reports that two candidates were copying each other? What if someone obtains a high score in an unreasonably short time?
What if someone obviously did not try to pass the exam, but still sat there for the allotted time? If a candidate were to lose a job opportunity due to your response, it helps your defensibility to show that the process was established ahead of time with the input of important stakeholders.
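The simple flags described above can be sketched in a few lines. The thresholds and function names here are illustrative assumptions; a production system would tune cutoffs to the program and combine many such indices.

```python
# A hypothetical sketch of two simple flags: overuse of one response
# option, and a top score obtained in very little time.
from collections import Counter

def flag_option_overuse(responses, threshold=0.5):
    """responses: list of selected options, e.g. ['A', 'C', 'A', ...].
    Flags a candidate who chose one option more than `threshold` of the time."""
    option, n = Counter(responses).most_common(1)[0]
    return n / len(responses) > threshold

def flag_score_time(score_pct, time_pct):
    """Flags a top-10% score achieved in the lowest 10% of total time."""
    return score_pct >= 0.90 and time_pct <= 0.10

print(flag_option_overuse(["A", "A", "B", "A", "A", "C"]))  # True: 4/6 > 0.5
print(flag_score_time(0.95, 0.05))                          # True
```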
Phase 2: Implementation
The second phase is to implement the relevant aspects of the Test Security Plan, such as training all proctors in accordance with the manual and login procedures, setting IP address limits, or ensuring that a new secure testing platform with lockdown is rolled out to all testing locations. There are generally two approaches. Proactive approaches attempt to reduce the likelihood of issues in the first place, and reactive methods happen after the test is given. The reactive methods can be observational, quantitative, or content-focused. Observational methods include proctor reports or an anonymous tip line. Quantitative methods include psychometric forensics, for which you will need software like SIFT. Content-focused methods include automated web crawling.
Both approaches require continuous attention. You might need to train new proctors several times per year, or update your lockdown browser. If you use a virtual proctoring service based on record-and-review, flagged candidates must be periodically reviewed. The reactive methods are similar: incoming anonymous tips or proctor reports must be dealt with at any given time. The least continuous aspect is some of the psychometric forensics, which depend on a large-scale data analysis; for example, you might gather data from tens of thousands of examinees in a testing window and can only do a complete analysis at that point, which could take several weeks.
Phase 3: Response
The third phase, of course, is to put your planned responses into motion if issues are detected. Some of these could be relatively innocuous; if a proctor is reported as not following procedures, they might need some remedial training, and it's certainly possible that no security breach occurred. The more dramatic responses include actions taken against the candidate. The most lenient is to provide a warning or simply ask them to retake the test. The most extreme methods include a full invalidation of the score with future sanctions, such as a five-year ban on taking the test again, which could prevent someone from entering a profession for which they spent 8 years and hundreds of thousands of dollars in educational preparation.
What does a test security plan mean for me?
It is clear that test security threats are also validity threats, and that the extensive (and expensive!) measures warrant a strategic and proactive approach in many situations. A framework like the one advocated here will help organizations identify and prioritize threats so that the measures are appropriate for a given program. Note that the results can be quite different if an organization has multiple programs, from a practice test to an entry level screening test to a promotional test to a professional certification or licensure.
Another important difference is between test sponsors/publishers and test consumers. In the case of an organization that purchases off-the-shelf pre-employment tests, the validity of score interpretations is of more direct concern, while the theft of content might not be an immediate concern. Conversely, the publisher of such tests has invested heavily in the content and could be massively impacted by theft, while the copying of two examinees in the hiring organization is not of immediate concern.
In summary, there are more security threats, deterrents, procedures, and psychometric forensic methods than can be discussed in one blog post, so the focus here is on the framework itself. To get started, think strategically about test security and how it impacts your assessment programs by using the multi-axis rating approach, then begin to develop a Test Security Plan. The end goal is to improve the health and validity of your assessments.
Want to implement some of the security aspects discussed here, like a lockdown browser for online delivery, IP address limits, and proctor passwords?
How do I develop a test security plan? By Nathan Thompson, PhD. Published 2018-01-05, updated 2023-09-27.
Guttman errors are a concept derived from the Guttman Scaling approach to evaluating assessments. There are a number of ways that they can be used. Meijer (1994) suggests an evaluation of Guttman errors as a way to flag aberrant response data, such as cheating or low motivation. He quantified this with two different indices, G and G*.
What is a Guttman error?
A Guttman error occurs when an examinee answers an item incorrectly when we expect them to get it correct, or vice versa. Here, we describe the Goodenough methodology as laid out in Dunn-Rankin, Knezek, Wallace, & Zhang (2004). Goodenough is a researcher's name, not a comment on the quality of the algorithm!
In Guttman scaling, we begin by taking the scored response matrix (0s and 1s for dichotomous items) and sorting both the columns and rows. Rows (persons) are sorted by observed score and columns (items) are sorted by observed difficulty. The following table is sorted in such a manner, and all the data fit the Guttman model perfectly: all 0s and 1s fall neatly on either side of the diagonal.
Now consider the following table. Ordering remains the same, but Person 3 has data that falls outside of the diagonal.
Some publications on the topic are unclear as to whether this is one error (two cells are flipped) or two errors (a cell that is 0 should be 1, and a cell that is 1 should be 0). In fact, one article changes the definition from one to the other while looking at two rows of the same table. The Dunn-Rankin et al. book is quite clear: you must subtract the examinee response vector from the perfect response vector for that person's score, and each cell with a difference counts as an error.
Thus, there are two errors.
Usage of Guttman errors in data forensics
Meijer suggested the use of G, the raw Guttman error count, and a standardized index he called G*:

G* = G / (r(k - r))

Here, k is the number of items on the test and r is the person's score; r(k - r) is the maximum possible number of Guttman errors for that score, so G* falls between 0 and 1.
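A minimal sketch of computing G and G* under the Goodenough method described above: order the items easiest-first, build the "perfect" vector for each person's score r (1s on the r easiest items), and count every cell that differs from the observed vector. The function name and toy response matrix are for illustration.

```python
# Compute Guttman errors G and the standardized index G* per person.

def guttman_errors(response_matrix):
    """response_matrix: list of 0/1 lists; rows = persons, cols = items."""
    n_persons = len(response_matrix)
    k = len(response_matrix[0])
    # Proportion correct per item; a higher p-value means an easier item
    p_values = [sum(row[j] for row in response_matrix) / n_persons
                for j in range(k)]
    order = sorted(range(k), key=lambda j: -p_values[j])  # easiest first
    results = []
    for row in response_matrix:
        r = sum(row)
        observed = [row[j] for j in order]
        perfect = [1] * r + [0] * (k - r)
        g = sum(a != b for a, b in zip(observed, perfect))
        g_star = g / (r * (k - r)) if 0 < r < k else 0.0
        results.append((g, round(g_star, 3)))
    return results

matrix = [
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],  # this person answers outside the Guttman diagonal
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
]
print(guttman_errors(matrix)[2])  # (2, 0.333)
```

The third person gets an easy item wrong and a hard item right, which counts as two errors under the subtraction rule above, giving G = 2 and G* = 2 / (3 × 2) ≈ 0.333.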
How is this relevant to data forensics? Guttman errors can be indicative of several things:
Preknowledge: A low ability examinee memorizes answers to the 20 hardest questions on a 100 item test. Of the 80 they actually answer, they get half correct.
Poor motivation or other non-cheating issues: in a K12 context, a smart kid that is bored might answer the difficult items correctly but get a number of easy items incorrect.
External help: a teacher might be giving answers to some tough items, which would show in the data as a group having a suspiciously high number of errors on average compared to other groups.
How can I calculate G and G*?
Because the calculations are simple, it's feasible to compute both in a simple spreadsheet for small datasets. But for a dataset of any reasonable size, you will need specially designed software for data forensics, such as SIFT.
What’s the big picture?
Guttman error indices are by no means perfect indicators of dishonest test-taking, but can be helpful in flagging potential issues at both an individual and group level. That is, you could possibly flag individual students with high numbers of Guttman errors, or if your test is administered in numerous separate locations such as schools or test centers, you can calculate the average number of Guttman errors at each and flag the locations with high averages.
As with all data forensics, though, a flag does not necessarily mean there are nefarious goings-on. Instead, it simply gives you a possible reason to open a deeper investigation.