Posts on psychometrics: The Science of Assessment

Computerized adaptive testing (CAT; also known as adaptive testing, computer-adaptive testing, or adaptive assessment) is an AI-based technology that has existed since the 1980s.  Yet most assessments in the world still don’t capitalize on the benefits of adaptive testing.

There is, of course, a large body of scientific research on the topic spanning more than 40 years, but many organizations are not aware of the benefits of adaptive testing, do not have the expertise, or think it is simply too complex and expensive (spoiler alert: it isn’t!).

What is adaptive testing?

Adaptive testing is the delivery of a test to an examinee using an algorithm that adapts the difficulty of the test to their ability, as well as adapting the number of items used (no need to waste their time).  It is sort of like the High Jump in Track & Field.  You start the bar in a middling-low position.  If you make it, the bar is raised.  This continues until you fail, and then it is dropped a little.  Or if you fail the first one, it can be dropped until you succeed.
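That up-and-down logic can be sketched in a few lines of Python.  This is a toy illustration (the function name and step size are hypothetical); real CAT engines select items by maximizing statistical information under IRT, but the bar-raising idea is the same:

```python
def adapt_difficulty(responses, difficulty, step=1.0):
    """Toy up/down rule: raise the difficulty after a correct answer,
    lower it after an incorrect one, like moving the high-jump bar."""
    if responses[-1]:               # last answer was correct -> harder item
        return difficulty + step
    return difficulty - step        # incorrect -> easier item

# Simulate a short test for an examinee who can clear difficulty <= 3
difficulty, history = 0.0, []
for _ in range(6):
    correct = difficulty <= 3.0     # stand-in for a real response model
    history.append(correct)
    difficulty = adapt_difficulty(history, difficulty)

print(difficulty, history)          # 4.0 [True, True, True, True, False, True]
```

The bar ends up oscillating around the examinee’s ability level, which is exactly the behavior the high-jump analogy describes.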

For more info, visit this post.

Benefits of adaptive testing

As you might imagine, by making the test more intelligent, adaptive testing provides a wide range of advantages.  Some of the well-known benefits of adaptive testing, recognized by scholarly psychometric research, are listed below.

Shorter tests

Research has found that adaptive tests produce anywhere from a 50% to 90% reduction in test length.  This is no surprise.  Suppose you have a pool of 100 items.  A top student is practically guaranteed to get the easiest 70 correct; only the hardest 30 will make them think.  Vice versa for a low student.  Middle-ability students do not need the super-hard or the super-easy items.

Why does this matter?  Primarily, it can greatly reduce costs.  Suppose you are delivering 100,000 exams per year in testing centers, and you are paying $30/hour.  If you can cut your exam from 2 hours to 1 hour, you just saved $3,000,000.  Yes, there will be increased costs from the use of adaptive testing, but you will likely save money in the end.

For K-12 assessment, you aren’t paying for seat time, but there is the opportunity cost of lost instruction time.  If students are taking formative assessments 3 times per year to check on progress, and you can reduce each by 20 minutes, that is 1 hour per student; if there are 500,000 students in your state, you just saved 500,000 hours of learning.
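The savings arithmetic in these examples is easy to check directly (the figures are the article’s illustrations, not real pricing):

```python
# Seat-time savings: 100,000 exams/year at $30/hour, cut from 2 hours to 1
exams, hourly_rate = 100_000, 30
savings = exams * hourly_rate * (2 - 1)
print(f"${savings:,}")                      # $3,000,000

# K-12 instruction time: 3 assessments/year, 20 minutes saved on each
students = 500_000
hours_saved_per_student = 3 * 20 / 60       # = 1 hour
print(students * hours_saved_per_student)   # 500000.0 hours
```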

More precise scores

CAT generally makes tests more accurate, because the item-selection algorithms are designed specifically to maximize score precision without wasting examinee time.

More control of score precision (accuracy)

CAT can ensure that all students are measured with the same precision, making the test much fairer.  Traditional tests measure middle-ability students well, but not the top or bottom students.  Which is better: A) all students see the same items but can have drastically different score accuracy, or B) students see different items but have equivalent score accuracy?

Better test security

Since all students are essentially getting an assessment that is tailored to them, there is better test security than everyone seeing the same 100 items.  Item exposure is greatly reduced; note, however, that adaptive testing introduces item-exposure considerations of its own, which the algorithms must manage.

A better experience for examinees, with reduced fatigue

Adaptive tests will tend to be less frustrating for examinees on all ranges of ability.  Moreover, implementing variable-length stopping rules (e.g., once we know you are a top student, we don’t give you the 70 easy items) reduces fatigue.

Increased examinee motivation

Since examinees only see items relevant to them, this provides an appropriate challenge.  Low-ability examinees will feel more comfortable and get many more items correct than with a linear test.  High-ability students will get the difficult items that make them think.

Frequent retesting is possible

The whole “unique form” idea applies to the same student taking the same exam twice.  Suppose you take the test in September, at the beginning of a school year, and take the same one again in November to check your learning.  You’ve likely learned quite a bit and are higher on the ability range; you’ll get more difficult items, and therefore a new test.  If it was a linear test, you might see the same exact test.

This allows adaptive tests to be used quite frequently for formative assessments in schools; it is an ideal fit.

Individual pacing of tests

Examinees can move at their own speed.  Some might move quickly and be done in only 30 items.  Others might waver, also seeing 30 items but taking more time.  Still others might see 60 items.  The algorithms can be designed to make the best use of each examinee’s time.

Want to learn more?

Interested in learning more about the benefits of adaptive testing?  If you want a full book, I recommend Computerized Adaptive Testing: A Primer by Howard Wainer.  Prefer a short article for now?  Here’s my favorite.

Want to implement the benefits of adaptive testing?

The first step is to perform simulation studies to evaluate the potential benefits for your organization.  We can help you with those, or recommend software if you prefer to do it on your own.  Ready to develop and deliver your own adaptive tests?  Sign up for a free account on our platform!

The two terms Norm-Referenced and Criterion-Referenced are commonly used to describe tests, exams, and assessments.  They are often some of the first concepts learned when studying assessment and psychometrics.

Norm-referenced means that we are referencing how your score compares to other people.  Criterion-referenced means that we are referencing how your score compares to a criterion such as a cutscore or a body of knowledge.

Do we say a test is “Norm-Referenced” vs. “Criterion-Referenced”?

Actually, that’s a slight misuse.

The terms Norm-Referenced and Criterion-Referenced refer to score interpretations.  Most tests can actually be interpreted in both ways, though they are usually designed and validated for only one or the other.

Hence the shorthand of saying “this is a norm-referenced test,” even though it just means that norm-referencing is the primary intended interpretation.

Examples of Norm-Referenced vs. Criterion-Referenced

Suppose you received a score of 90% on a Math exam in school.  This could be interpreted in both ways.  If the cutscore was 80%, you clearly passed; that is the criterion-referenced interpretation.  If the average score was 75%, then you performed at the top of the class; this is the norm-referenced interpretation.

What if the average score was 95%?  Well, that changes your norm-referenced interpretation (you are now below average) but the criterion-referenced interpretation does not change.
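The distinction can be captured in two tiny functions, a sketch using the article’s numbers (the function names are mine):

```python
def criterion_referenced(score, cutscore):
    """Compare the score to a fixed standard (e.g., a cutscore of 80%)."""
    return "pass" if score >= cutscore else "fail"

def norm_referenced(score, class_scores):
    """Compare the score to other examinees: percentile rank in the group."""
    below = sum(s < score for s in class_scores)
    return 100 * below / len(class_scores)

# A 90% score with an 80% cutscore passes, regardless of the class average
print(criterion_referenced(90, 80))              # pass
# The norm-referenced reading depends entirely on the other scores
print(norm_referenced(90, [70, 75, 80, 95]))     # 75.0
print(norm_referenced(90, [92, 95, 96, 98]))     # 0.0
```

Changing the other examinees’ scores changes only the second function’s output; the first is untouched, which is the whole point of the distinction.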

Now consider a certification exam.  This is an example of a test that is specifically designed to be criterion-referenced.  It is supposed to measure that you have the knowledge and skills to practice in your profession.  It doesn’t matter whether all candidates pass or only a few candidates pass; the cutscore is the cutscore.

However, you could interpret your score by looking at your percentile rank compared to other examinees; it just doesn’t impact the cutscore.

On the other hand, we have an IQ test.  There is no criterion-referenced cutscore of whether you are “smart” or “passed.”  Instead, the scores are located on the standard normal curve (mean=100, SD=15), and all interpretations are norm-referenced.  Namely, where do you stand compared to others?

Is this impacted by item response theory (IRT)?

If you have looked at item response theory (IRT), you know that it scores examinees on what is effectively the standard normal curve (though this is shifted for Rasch models).  But IRT-scored exams can still be criterion-referenced: an exam can still be designed to measure a specific body of knowledge and have a cutscore that is fixed and stable over time.

Even computerized adaptive testing can be used like this.  An example is the NCLEX exam for nurses in the United States.  It is an adaptive test, but the cutscore is -0.18 (NCLEX-PN on Rasch scale) and it is most definitely criterion-referenced.

Building and validating an exam

The process of developing an assessment is surprisingly difficult as there are many forces at play. The greater the stakes, volume, and incentives for stakeholders, the more effort that goes into developing and validating.  ASC’s expert consultants can help you navigate these rough waters.

 

Item discrimination is a psychometric concept regarding the quality of a test item, and the point-biserial coefficient is one of several indices for this concept.

What is item discrimination?

While the word “discrimination” has a negative connotation, it is actually a really good thing for an item to have.  It means that it is differentiating between examinees, which is entirely the reason that an assessment item exists.  If a math item on Fractions is good, then students with good knowledge of fractions will tend to get it correct, while students with poor knowledge will get it wrong.  If this isn’t the case, and the item is essentially producing random data, then it has no discrimination.  If the reverse is the case, then the discrimination will be negative.  This is a total red flag; it means that good students are getting the item wrong and poor students are getting it right, which almost always means that there is incorrect content or the item is miskeyed.

What is the point-biserial?

The point-biserial coefficient is a Pearson correlation between scores on the item (usually 0=wrong and 1=correct) and the total score on the test.  As such, it is sometimes called an item-total correlation.

Consider the example below.  There are 10 examinees that got the item wrong, and 10 that got it correct.  The scores are definitely higher for the Correct group.  If you fit a regression line, it would have a positive slope.  If you calculated a correlation, it would be around 0.10.

[Figure: example data for the point-biserial]
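As a quick sketch, the point-biserial can be computed from scratch as a Pearson correlation; the data below are hypothetical, not the figure’s:

```python
from math import sqrt

def point_biserial(item, total):
    """Pearson correlation between 0/1 item scores and total test scores."""
    n = len(item)
    mean_i, mean_t = sum(item) / n, sum(total) / n
    cov = sum((i - mean_i) * (t - mean_t) for i, t in zip(item, total))
    var_i = sum((i - mean_i) ** 2 for i in item)
    var_t = sum((t - mean_t) ** 2 for t in total)
    return cov / sqrt(var_i * var_t)

item  = [0, 0, 0, 1, 1, 1, 1, 1]          # 0 = wrong, 1 = correct
total = [12, 15, 18, 16, 20, 22, 25, 27]  # total test scores

print(round(point_biserial(item, total), 3))  # 0.707
```

Examinees who answered correctly tend to have higher totals, so the correlation is strongly positive; a miskeyed item would flip it negative.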

How do you calculate the point-biserial?

Since it is a Pearson correlation, you can easily calculate it with the CORREL function in Excel or similar software.  Of course, psychometric software like Iteman will also do it for you, and many more important things besides (e.g., the point-biserial for each of the incorrect options!).  This is an important step in item analysis.  The image below is example output from Iteman, where Rpbis is the point-biserial.  This item is very good, as it has a very high point-biserial for the correct answer and strongly negative point-biserials for the incorrect answers (which means the not-so-smart students are selecting them).

[Figure: example item analysis output from Iteman]

How do you interpret the point-biserial?

Well, most importantly consider the points above about near-zero and negative values.  Besides that, a minimal-quality item might have a point-biserial of 0.10, a good item of about 0.20, and strong items 0.30 or higher.  But, these can vary with sample size and other considerations.  Some constructs are easier to measure than others, which makes item discrimination higher.

Are there other indices of item discrimination?

There are two other indices commonly used in classical test theory.  There is the cousin of the point-biserial, the biserial.  There is also the top/bottom coefficient, where the sample is split into a high-performing group and a low-performing group based on total score, the P value (proportion correct) is calculated for each, and the two are subtracted.  So if 85% of top examinees got it right and 60% of bottom examinees got it right, the index would be 0.25.
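The top/bottom example works out as a simple difference of proportions:

```python
def upper_lower_index(p_top, p_bottom):
    """Discrimination as P(correct | top group) minus P(correct | bottom group)."""
    return p_top - p_bottom

print(round(upper_lower_index(0.85, 0.60), 2))  # 0.25
```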

Of course, there is also the a parameter from item response theory.  There are a number of advantages to that approach, most notably that the classical indices try to fit a linear model on something that is patently nonlinear.  For more on IRT, I recommend a book like Embretson & Reise (2000).

 

The conditional standard error of measurement (CSEM) is a concept from psychometrics which seeks to characterize error in the process of measuring examinees on a test or assessment.

What is measurement error?

We can all agree that assessments are not perfect, from a 4th grade math quiz to a Psych 101 exam at university to a driver’s license test.  Suppose you got 80% on an exam today.  If we wiped your brain clean and you took the exam tomorrow, what score would you get?  Probably a little higher or lower.  Psychometricians consider you to have a true score which is what would happen if the test was perfect, you had no interruptions or distractions, and everything else fell into place.  But in reality, you of course do not get that score each time.  So psychometricians try to estimate the error in your score, and use this in various ways to improve the assessment and how scores are used.

The Original Approach: Classical Test Theory

Classical test theory (CTT) is a psychometric paradigm that is extremely useful in many situations, but generally oversimplifies things.  Its approach to measurement error certainly qualifies.  CTT assumes that measurement error – called the standard error of measurement – is the same for every examinee.  It is calculated as SEM = SD*sqrt(1-r), where SD is the standard deviation of raw scores on the exam, and r is the reliability of the exam.
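The CTT formula SEM = SD*sqrt(1-r) is a one-liner (the SD and reliability values below are illustrative):

```python
from math import sqrt

def sem(sd, reliability):
    """Classical standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * sqrt(1 - reliability)

# e.g., a raw-score SD of 10 and a reliability of 0.91
print(round(sem(10, 0.91), 1))  # 3.0
```

Note that a more reliable test shrinks the error: at r = 0.99 the same SD of 10 gives an SEM of only 1.0.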

Why conditional standard error of measurement?

Early researchers realized that this assumption is unreasonable.  Suppose that a test has a lot of easy questions.  It will therefore measure low-ability examinees quite well.  Imagine that it is a Math placement exam for university, and has a lot of Geometry and Algebra questions at a high school level.  It will measure students at that level well, but do a very poor job of measuring strong students.

There was an initial suggestion of calculating a conditional standard error of measurement in classical test theory, but at that time, a new paradigm called item response theory was being developed.  It considered error to be a function of ability, not a single number.  In the previous example, the standard error for low students should be much less than the standard error for high students.

An example of this is shown below.  On the right is the conditional standard error of measurement function, and on the left is the test information function, to which it is inversely related.  Clearly, this test has a lot of items around -1.0 on the theta spectrum, which is around the 15th percentile.  Students above 1.0 (85th percentile) are not being measured well.

 

[Figure: standard error of measurement and test information function]

How is CSEM used?

A useful way to think about conditional standard error of measurement is with confidence intervals.  Suppose your score on a test is 0.5 with item response theory.  If the CSEM is 0.25 (see above), then we can get a 95% confidence interval by taking plus or minus 2 standard errors.  This means that we are 95% certain that your true score lies between 0.0 and 1.0.  For a theta of 2.5 with a CSEM of 0.5, that band is then 1.5 to 3.5 – which might seem wide, but remember that is like the 93rd percentile to well beyond the 99th.
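Those bands follow directly from the CSEM.  Under IRT, the CSEM at a given theta is the inverse square root of the test information, and an approximate 95% interval is theta plus or minus 2 CSEM (a sketch with illustrative numbers):

```python
from math import sqrt

def csem_from_information(info):
    """IRT conditional standard error: CSEM(theta) = 1 / sqrt(information)."""
    return 1 / sqrt(info)

def confidence_band(theta, csem, z=2):
    """Approximate 95% band: theta plus or minus 2 standard errors."""
    return theta - z * csem, theta + z * csem

print(confidence_band(0.5, 0.25))   # (0.0, 1.0)
print(confidence_band(2.5, 0.5))    # (1.5, 3.5)
print(csem_from_information(16))    # 0.25
```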

You will sometimes see scores reported in this manner.  I once saw a report on an IQ test that did not give a single score, but instead said “we can expect that 9 times out of 10 that you would score between X and Y.”

There are various ways to use the CSEM and related functions in the design of tests, including the assembly of parallel linear forms and the development of computerized adaptive tests.  To learn more about this, I recommend you delve into a book on IRT, such as Embretson and Reise (2000).  That’s more than I can cover here.

The California Department of Human Resources (CalHR, calhr.ca.gov/) has selected Assessment Systems Corporation (ASC, assess.com) as its vendor for an online assessment platform. CalHR is responsible for the personnel selection and hiring of many job roles for the State, and delivers hundreds of thousands of tests per year to job applicants. CalHR seeks to migrate to a modern cloud-based platform that allows it to manage large item banks, quickly publish new test forms, and deliver large-scale assessments that align with modern psychometrics like item response theory (IRT) and computerized adaptive testing (CAT).

ASC’s landmark assessment platform Assess.ai was selected as a solution for this project. ASC has been providing computerized assessment platforms with modern psychometric capabilities since the 1980s, and released Assess.ai in 2019 as a successor to its industry-leading platform FastTest. It includes modules for item authoring, item review, automated item generation, test publishing, online delivery, and automated psychometric reporting.

 


 

 

There are many types of remote proctoring on the market, spread across dozens of vendors, especially new ones that sought to capitalize on the pandemic without any prior involvement in assessment.  With so many options, how can you effectively select amongst the types of remote proctoring?

What is remote proctoring?

Remote proctoring refers to the proctoring (invigilation) of educational or professional assessments when the proctor is not in the same room as the examinee.  This means that it is done with a video stream or recording, which is monitored by a human and/or AI.  It is also referred to as online proctoring.

Remote proctoring offers a compelling alternative to in-person proctoring, somewhere in between unproctored at-home tests and tests delivered in an expensive testing center.  This makes it a perfect fit for medium-stakes exams, such as university placement, pre-employment screening, and many types of certification/licensure tests.

What are the types of remote proctoring?

There are four types of remote proctoring, which can be adapted to a particular use case, sometimes varying between different tests in a single organization.  ASC supports all four types, and partners with 5 different vendors to help provide the best solution to our clients.  In descending order of security:

 

Each approach is described below: what it entails for you, and what it entails for the candidate.

Live with professional proctors

What it entails for you:

  • You register a set of examinees in FastTest, and tell us when they are to take their exams and under what rules.
  • We provide the relevant information to the proctors.
  • You send all the necessary information to your examinees.

This is the most secure of the types of remote proctoring.

What it entails for the candidate:

  • Examinee goes to ascproctor.com, where they will initiate a chat with a proctor.
  • After confirmation of their identity and workspace, they are provided information on how to take the test.
  • The proctor then watches a video stream from their webcam as well as a phone on the side of the room, ensuring that the environment is secure.  They do not see the screen, so your exam content is not exposed.
  • When the examinee is finished, they notify the proctor, and are excused.

Live, bring your own proctor (BYOP)

What it entails for you:

  • You upload examinees into FastTest, which will generate links.
  • You send relevant instructions and the links to examinees.
  • Your staff logs into the admin portal and awaits examinees.
  • Videos with AI flagging are available for later review if needed.

What it entails for the candidate:

  • Examinee will click on a link, which launches the proctoring software.
  • An automated system check is performed.
  • The proctoring is launched.  Proctors ask the examinee to provide identity verification, then launch the test.
  • Examinee is watched on the webcam and screencast.  AI algorithms help to flag irregular behavior.
  • Examinee concludes the test.

Record and Review (with option for AI)

What it entails for you:

  • You upload examinees into FastTest, which will generate links.
  • You send relevant instructions and the links to examinees.
  • After examinees take the test, your staff (or ours) logs in to review all the videos and report on any issues.  AI will automatically flag irregular behavior, making your reviews more time-efficient.

What it entails for the candidate:

  • Examinee will click on a link, which launches the proctoring software.
  • An automated system check is performed.
  • The proctoring is launched.  The system asks the examinee to provide identity verification, then launches the test.
  • Examinee is recorded on the webcam and screencast.  AI algorithms help to flag irregular behavior.
  • Examinee concludes the test.

AI only

What it entails for you:

  • You upload examinees into FastTest, which will generate links.
  • You send relevant instructions and the links to examinees.
  • Videos are stored for 1 month if you need to check any.

What it entails for the candidate:

  • Examinee will click on a link, which launches the proctoring software.
  • An automated system check is performed.
  • The proctoring is launched.  The system asks the examinee to provide identity verification, then launches the test.
  • Examinee is recorded on the webcam and screencast.  AI algorithms help to flag irregular behavior.
  • Examinee concludes the test.

 

Some case studies

We’ve worked with all types of remote proctoring, across many types of assessment:

  • ASC delivers high-stakes certification exams for a number of certification boards, in multiple countries, using live proctoring with professional proctors.  Some of these are available continuously on-demand, while others are on specific days when hundreds of candidates log in.
  • We partnered with a large university in South America, where their admissions exams were delivered using Bring Your Own Proctor, enabling them to drastically reduce costs by utilizing their own staff.
  • We partnered with a private company to provide AI-enhanced record-and-review proctoring for applicants, where ASC staff reviews the results and provides a report to the client.
  • We partner with an organization that delivers civil service exams for a country, and utilizes both unproctored and AI-only proctoring, differing across a range of exam titles.

How do I select a vendor?

First, determine the level of security necessary and the trade-off with costs.  Live proctoring with professionals can cost $20 to $100 or more, while AI proctoring can be as little as a few dollars.  Next, evaluate some vendors to see which group they fall into; note that some vendors can do all of them!  Then ask for some demos so you understand the business processes involved and the UX on the examinee side, both of which could substantially impact the soft costs for your organization.  Finally, start negotiating with the vendor you want!

 

Want some more information?

Get in touch with us, we’d love to show you a demo!

Email solutions@assess.com.

 

 

Finding good employees in an overcrowded market is a daunting task.  In fact, according to research by CareerBuilder, 74% of employers admit to hiring the wrong employees.  Bad hires are not only expensive, but can also adversely affect cultural dynamics in the workforce.  This is where pre-employment assessment software shows its value.

Pre-employment testing tools help companies create effective assessments, thus saving valuable resources, improving candidate experience and quality of hire, and reducing hiring bias.  But finding pre-employment testing software that can help you reap these benefits can be difficult, especially given the explosion of software solutions on the market.  If you are lost on which tools will help you develop and deliver your own pre-employment assessments, this guide is for you.

First things first: you need to understand the basics of pre-employment tests. 

What is a pre-employment test?

A pre-employment test refers to an examination given to job seekers before hiring.  The main reasons for administering these tests include measuring important candidate attributes such as cognitive ability, job experience, and personality traits.  The popularity of pre-employment tests has skyrocketed in recent years because of their ability to help companies manage large volumes of candidate applications.  This helps increase quality hires by providing access to a diversified network of professionals while eliminating roadblocks such as ‘Resume Spammers’.

Types of pre-employment tests

There are different types of pre-employment assessments. Each of them achieves a different goal in the hiring process. The major types of pre-employment assessments include:

Personality tests: Despite rapidly finding their way into HR, these types of pre-employment tests are widely misunderstood.  Personality tests measure where candidates fall on the social and behavioral spectrum.  One of the main goals of these tests is to predict a candidate’s success based on behavioral traits.

Aptitude tests: Unlike personality tests or emotional intelligence tests, which tend to lie on the social spectrum, aptitude tests measure problem-solving, critical thinking, and agility.  These types of tests are popular because they predict job performance better than other types, tapping into areas that cannot be found in resumes or job interviews.

Skills testing: These kinds of tests can be considered a measure of job experience, ranging from high-end skills to low-end skills such as typing or Microsoft Excel.  Skill tests can either measure specific skills such as communication, or generalized skills such as numeracy.

Emotional Intelligence tests: These kinds of assessments are a newer concept but are becoming important in the HR industry.  With strong Emotional Intelligence (EI) being associated with benefits such as improved workplace productivity and good leadership, many companies are investing heavily in developing these kinds of tests.  Although they can be administered to any candidate, it is recommended that they be reserved for people seeking leadership positions or those expected to work in social contexts.

Risk tests: As the name suggests, these types of tests help companies reduce risk.  Risk assessments offer assurance to employers that their workers will commit to established work ethics and not involve themselves in any activities that may cause harm to themselves or the organization.  There are different types of risk tests.  Safety tests, which are popular in contexts such as construction, measure the likelihood of candidates engaging in activities that can cause them harm.  Other common types of risk tests include integrity tests.

Pre-employment testing software: The Benefits 

Now that you have a good understanding of what pre-employment tests are, let’s discuss the benefits of integrating pre-employment assessment software into your hiring process. Here are some of the benefits:

Saves Valuable resources

Unlike the lengthy and costly traditional hiring processes, pre-employment assessment software helps companies increase their ROI by eliminating HR snags such as face-to-face interactions or geographical restrictions.  Pre-employment testing tools can also reduce the amount of time it takes to make good hires while reducing the risk of facing the financial consequences of a bad hire.

Supports Data-Driven Hiring Decisions

Data runs the modern world, and hiring is no different. You are better off letting complex algorithms crunch the numbers and help you decide which talent is a fit, as opposed to hiring based on a hunch. 

Pre-employment assessment software helps you analyze assessments and generate reports/visualizations to help you choose the right candidates from a large talent pool. 

Improving candidate experience 

Candidate experience is an important aspect of a company’s growth, especially considering that 69% of candidates admit they would not apply for a job at a company after having a negative experience.  A good candidate experience means you get access to the best talent in the world.

Elimination of Human Bias

Traditional hiring processes are based on instinct. They are not effective since it’s easy for candidates to provide false information on their resumes and cover letters. 

But, the use of pre-employment assessment software has helped in eliminating this hurdle. The tools have leveled the playing ground, and only the best candidates are considered for a position. 

Need some help deciding how you can reap the mentioned benefits of pre-employment assessment software? Click the button below to get help.


What To Consider When Choosing pre-employment assessment software

Now that you have a clear idea of what pre-employment tests are and the benefits of integrating pre-employment assessment software into your hiring process, let’s see how you can find the right tools. 

Here are the most important things to consider when choosing the right pre-employment testing software for your organization.

Ease-of-use

The candidates should be your top priority when you are sourcing pre-employment assessment software.  This is because ease of use directly correlates with a good candidate experience.  Good software should have simple navigation and be easy to comprehend.

Here is a checklist to help you decide if a pre-employment assessment software is easy to use:

  • Are the results easy to interpret?
  • What is the UI/UX like?
  • What ways does it use to automate tasks such as applicant management?
  • Does it have good documentation and an active community?

Test delivery (remote proctoring)

[Figure: remote proctoring (courtesy of FastTest)]

Good online assessment software should feature good online proctoring functionality.  This is because most remote jobs accept applications from all over the world.  It is therefore advisable to choose pre-employment testing software that has secure remote proctoring capabilities.  Here are some things you should look for in remote proctoring:

  • Does the platform support security processes such as IP-based authentication, lockdown browser, and AI-flagging?
  • What types of online proctoring does the software offer? Live real-time, AI review, or record and review?
  • Does it let you bring your own proctor?
  • Does it offer test analytics?

Test & data security, and compliance

Test security is ultimately about defensibility.  There are several layers of security associated with pre-employment testing.  When evaluating this aspect, you should consider what the pre-employment testing software does to achieve the highest level of security, because data breaches are wildly expensive.

The first layer of security is the test itself. The software should support security technologies and frameworks such as lockdown browser, IP-flagging, and IP-based authentication. If you are interested in knowing how to secure your assessments, check this post out.

The other layer of security is on the candidate’s side. As an employer, you will have access to the candidate’s private information. How can you ensure that your candidate’s data is secure? That is reason enough to evaluate the software’s data protection and compliance guidelines.

Good pre-employment testing software should be compliant with regulations such as GDPR.  The software should also be flexible enough to adapt to compliance guidelines from different parts of the world.

Questions you need to ask:

  • What mechanisms does the software employ to prevent cheating?
  • Is their remote proctoring function reliable and secure?
  • Are they compliant with security standards and regulations such as ISO and GDPR?
  • How does the software protect user data?

User experience

A good user experience is a must-have when you are sourcing any enterprise software.  New-age pre-employment testing software should create user experience maps with both the candidates and the employer in mind.  Some ways you can tell if a software offers a seamless user experience include:

  • User-friendly interface
  • Simple and easy to interact with
  • Easy to create and manage item banks
  • Clean dashboard with advanced analytics and visualizations

Customizing your user-experience maps to fit candidates’ expectations attracts high-quality talent. 

Scalability and automation

With a single job post attracting approximately 250 candidates, scalability isn’t something you should overlook. Good pre-employment testing software should therefore be able to handle any workload without sacrificing assessment quality.

It is also important to check the automation capabilities of the software. The hiring process has many repetitive tasks that can be automated with technologies such as machine learning, artificial intelligence (AI), and robotic process automation (RPA).

Here are some questions you should consider in relation to scalability and automation:

  • Does the software offer Automated Item Generation (AIG)?
  • How many candidates can it handle? 
  • Can it support candidates from different locations worldwide?


Reporting and analytics

Example of reporting and visualization functionality to help you make data-driven hiring decisions (courtesy of FastTest and Assess.ai)

Good pre-employment assessment software will not leave you hanging after helping you develop and deliver the tests. It will enable you to derive important insights from the assessments.

The analytics reports can then be used to make data-driven decisions about which candidates are suitable and how to improve the candidate experience. Here are some questions to ask about reporting and analytics:

  • Does the software have a good dashboard?
  • What format are reports generated in?
  • What are some key insights that prospects can gather from the analytics process?
  • How good are the visualizations?

Customer and Technical Support

Customer and technical support is not something you should overlook. Good pre-employment assessment software should have an omnichannel support system that is available 24/7, because some situations need a fast response. Here are some of the questions you should ask when vetting customer and technical support:

  • What channels of support does the software offer, and how prompt is that support?
  • How good is their FAQ/resources page?
  • Do they offer multi-language support mediums?
  • Do they have dedicated managers to help you get the best out of your tests?

Conclusion

Finding the right pre-employment testing software is a lengthy process, but it pays off in the long run. We hope this article sheds some light on the important aspects to look for in such tools. Also, don’t forget to take a pragmatic approach when implementing them in your hiring process.

Are you stuck on how you can use pre-employment testing tools to improve your hiring process? Feel free to contact us and we will guide you through the entire process, from concept development to implementation. Whether you need off-the-shelf tests or a comprehensive platform to build your own exams, we can provide the guidance you need. We also offer free versions of our industry-leading software, FastTest and Assess.ai; visit our Contact Us page to get started!

 


The impact of artificial intelligence in education is hardly quantifiable but is game-changing. AI is making the world a better place by powering innovations across all industries including healthcare and education. Learning is an important part of being a contributing member of society and AI is changing how we learn. We are yet to see humanoid robot teachers as depicted in science fiction but AI is rapidly finding its way into the education industry.

Despite rapid advances in technology, most education systems in use today were designed for the industrial age. How, then, is AI working to disprove Donald Clark’s claim that “education is a bit of a slow learner”?

From smart tutoring and smart content to facial recognition and AI flagging for online assessment security, AI is making a positive impact on education. This blog post gets down to the brass tacks of AI in education.

Let’s jump right in!

How is AI changing the education industry?

Artificial intelligence has an impact on education in many ways, but here are the most dominant ones: 

AI is Powering personalized learning

Personalized learning has been proven to boost engagement and improve results among students, and AI systems play an important role in elevating this experience. We all have different learning abilities, and what works for one student may not work for another. Machine learning solutions can be tailored so that students at different levels of the learning spectrum can thrive in one class.

While it is impractical for a single teacher to personalize instruction for every student, AI systems can easily provide individual learning opportunities. Machine learning algorithms can predict outcomes, allowing tutors to deliver content based on learners’ individual goals and past performance.

AI systems not only help identify student strengths and weaknesses but also report on the effectiveness of learning methods. These reports can then be used to boost learning engagement and improve test results.

Task automation

According to a recent study, teachers spend only 43% of their time actually teaching, with 11% spent marking tests, 7% on administrative tasks, and 13% planning lessons. This overhead is tiresome and affects teachers’ ability to provide students with a good learning experience. This is where AI systems come in handy.

The internet is full of systems that help teachers automate certain learning tasks including lesson planning, generating progress reports, and so much more! 

Assess.com, for example, leverages the power of AI to let tutors create, manage, and grade personalized online tests.


Assess.com online item marker.

This helps tutors to be more productive, and a lot of resources are saved in the process. 

Online Assessment Security and Effectiveness

Assessment is a crucial part of the learning process. Online assessments have become popular across K-12 and higher education, and AI systems are playing an important role in making sure they are effective and secure.

Here are some interesting ways AI is positively influencing online exams:

  • AI flagging

AI flagging improves assessment security by helping supervisors spot suspicious behavior, using audio and video captured during the examination. The human proctor can then take action according to governing policies. Behaviors that may indicate cheating include background audio, extra faces on the screen, and suspicious body language.

  • AI-powered remote proctoring

Remote proctoring allows students to take online exams from anywhere while safeguarding exam integrity. All a student needs is a computer with a webcam.  AI has revolutionized how online exams are managed by powering functionality such as live streaming, lockdown browsers, and so much more!

  • Automated Item Generation

AI plays an important role in automated item generation. While no AI system yet generates entire item banks automatically (we may see these in the near future), platforms such as Assess.com provide users with intelligent templates that can be customized to increase effectiveness.

AI systems are playing an important role in making online exams more effective, and we can expect even more in the near future.

Facial recognition 

Facial recognition is not only useful for national security organizations identifying criminals in crowds; it is also improving the education industry in many ways. It is gaining momentum in educational institutions as an identification mechanism, increasing student safety by identifying people who pose a threat.

Facial recognition is also being used in some institutions to improve the learning process. The systems used in this process collect insight by reading facial cues to determine how students feel during a lesson.  These systems are also used during online exams to uphold academic integrity.

Smart Content

What is smart content? 

Sometimes referred to as “dynamic content,” smart content is a diversified collection of learning material, including video lectures, textbooks, and digitized textbook guides.

The main aim of smart content is to ease the learning process. 

AI-powered smart content solutions provide real-time feedback, practice exams, interactive content, and many other functions aimed at breaking down complex ideas into easily digestible chunks. This helps students achieve their academic goals much faster. 

Understanding and changing learning experiences

AI is not only helping in understanding the learning process but also changing it.

AI systems and machine learning algorithms collect data from learning ecosystems and use it to identify learning methods that work and those that don’t. 

Other ways AI systems are changing the learning experience include automatic grading, augmenting recruitment processes, and providing constructive criticisms to students while helping them achieve their personalized goals. 

With a projected market revenue of up to $120 billion by 2021, AI tutoring is yet another way to increase student retention by providing personalized learning. AI tutoring comes in many forms including the use of chatbots and virtual reality. 

Artificial Intelligence in Education: The Flip Side

Just like any innovation ever made, AI has its dark side. 

However, it is not the exaggerated dark side we see in science fiction, where AI tries to wipe out the entire human race.


Here are some drawbacks of using AI in education: 

AI is expensive: The custom development and maintenance of education AI systems can be very expensive. This limits its use to well-funded institutions. However, there are a lot of budget-friendly AI education solutions on the internet that you can easily adopt.  

Data Loss and Information Security: Cybercrime is on the rise, and AI systems are not immune.

Risk of Addiction: Technology can be addictive and this may affect the performance of some students.

Lack of human connection: AI systems can have a negative effect on the psychological well-being of students. 

The Future of AI in education

AI is growing exponentially, and its future in the education industry is bright. The impact of artificial intelligence in education is immense. Not only does it increase efficiency in the learning process, but it also continues to revolutionize education in ways we cannot yet imagine.

Despite the many concerns about AI systems not being able to connect emotionally with people, or even replacing educators, we believe AI will open up a wealth of opportunities in the industry.

Are you interested in getting access to AI-powered online testing solutions? Feel free to check out our solutions and consulting services, including test development, computerized adaptive testing, remote proctoring, item banking, and much more!


Tips for choosing the right assessment tool

There is little doubt that online assessments are more effective than traditional testing methods. They have become popular across K-12, marketing, higher education, and recruitment. Online assessments are not only cheaper, more secure, and more effective; they are also the foundation of e-learning. But with an overcrowded digital assessment marketplace, it can be hard to decide which online assessment tool will work for you.

How hard can it be, you ask? Think about factors such as pricing, scalability, and advanced functionality like remote proctoring, CAT, reporting, and user-friendliness.

 It’s a lot to take in, isn’t it? To help you crack this enigma, this blog will guide you on how to choose the best online assessment tools.  

Don’t have enough time to read the entire article? Skip to the checklist section and get the main ideas.

Reporting, Analysis, and Visualization

This is the most important consideration to make when evaluating online assessment tools. Reports are a measure of progress. They help educational institutions and businesses adjust their learning processes to improve assessment effectiveness.

Tools to look for in reporting and analysis include psychometric software such as XCalibre (IRT analysis), Iteman (classical analysis), and CITAS. These tools should have visualization capabilities, such as graphs and charts of the assessment results, and the output should be in a format that is easy to interpret.

Example of a report generated by the Assess.ai free version of Iteman

Interested in getting free access to some of these psychometric analytical tools including XCalibre (IRT Analysis), Iteman (Classical analysis), and many others? Fill out this form to get free access to the tools. 

Built-In Remote Proctor Feature (AI-Enabled)

This is yet another component of an online assessment tool that you should consider in the evaluation process. Proctoring refers to supervising examinees during a test to safeguard its integrity.

The right assessment software should have 360° proctoring features, including anti-cheat systems, live streaming, audio/video recording, and screen sharing. If the platform also offers live human online proctors for increased security, so much the better.

It is also important to find out whether the tool in question is flexible enough to let you oversee the entire proctoring process.

Scalability

The most common mistake people make when looking for online assessment tools is not evaluating how robust the platform is. This can disrupt the learning process and prove costly, since you will have to get another system.

The ideal platform should be robust enough to handle any form of workload.

All ASC’s online assessment tools are robust enough to handle any kind of workload and are a favorite among the world’s best institutions and organizations.

Ease of Use

“Everything should be made as simple as possible, but no simpler.” – Albert Einstein

The best software offers sophisticated solutions in a way that anyone can use. Online assessment software, especially in education, should be as simple as possible to use.

The interface shouldn’t be intimidating, and it should have important functions such as autosaving answers to avoid frustrating the examinee. The software should be cloud-based with no need to install it on devices. 

The process of creating and managing item banks should be as simple as possible.

Item Banking refers to the development and management of a large pool of high-quality test questions.  Items are treated as reusable objects, which allows you to more efficiently publish new test forms.  Items are stored with extensive historical metadata to drive validity.
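The item-banking concept above can be sketched as a simple data structure: items are reusable objects that carry their own statistics and usage history, and new test forms are assembled from the pool. This is a minimal illustration; the class and field names are hypothetical, not the schema of any particular product.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Item:
    """One reusable test question with historical metadata."""
    item_id: str
    stem: str                     # the question text
    options: list[str]            # answer choices
    key: int                      # index of the correct option
    p_value: float | None = None  # classical difficulty (proportion correct)
    history: list[dict] = field(default_factory=list)  # stats per administration

class ItemBank:
    """Pool of reusable items from which new test forms are published."""
    def __init__(self):
        self._items: dict[str, Item] = {}

    def add(self, item: Item) -> None:
        self._items[item.item_id] = item

    def record_usage(self, item_id: str, form: str, p_value: float) -> None:
        # Keep statistics from every form the item appeared on, so the
        # accumulated history can support validity documentation.
        item = self._items[item_id]
        item.history.append({"form": form, "p_value": p_value})
        item.p_value = p_value

    def publish_form(self, item_ids: list[str]) -> list[Item]:
        # Assemble a new test form from existing, reusable items.
        return [self._items[i] for i in item_ids]

bank = ItemBank()
bank.add(Item("MATH-001", "2 + 2 = ?", ["3", "4", "5"], key=1))
bank.record_usage("MATH-001", form="2023A", p_value=0.92)
form = bank.publish_form(["MATH-001"])
```

Because items are stored once and referenced by ID, publishing a new form never duplicates content, which is what makes banked items efficient to reuse.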


The right online assessment tool should also support the best practices in workflow management and support collaboration.

Compatibility With Existing Systems

Most businesses and educational institutions already have a Learning Management System (LMS) in their workflow. The right online assessment tool should therefore be easy to integrate with the existing system.

This is important because it would be costly and time-consuming to re-develop their entire system to integrate an assessment tool into the process.

Enhanced Digital Assessment Security

Cheating is one of the greatest concerns when it comes to online assessments. It is important to check the technologies and methods the software uses to curb it. Here are some functionalities you should look for in digital assessment security:

Lockdown Browser

This is a feature that limits examinees to one tab, which stops them from accessing files in local storage or getting help online. If an examinee attempts to access external software or a private browser tab, a notification is sent to the proctor, who can take action.

AI Flagging

AI flagging helps supervisors spot suspicious behavior using audio and video captured during the examination. Behaviors that may indicate cheating include background audio, extra faces on the screen, and suspicious body language.

AI-Flagging In Action (Assess.com)
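The flagging logic described above can be sketched as a simple rule-based scan over session events: anything suspicious is queued for the proctor to review. This is a minimal illustration; the event fields and thresholds are hypothetical, and production systems use computer-vision and audio models rather than fixed rules.

```python
# Hypothetical event schema: each dict is one observation from the
# proctoring feed. Thresholds are illustrative only.
def flag_session(events):
    """Return (timestamp, reason) flags for a human proctor to review."""
    flags = []
    for e in events:
        faces = e.get("faces_detected", 1)
        if faces > 1:
            flags.append((e["timestamp"], "extra face on camera"))
        elif faces == 0:
            flags.append((e["timestamp"], "examinee left the frame"))
        if e.get("background_audio_db", 0) > 40:
            flags.append((e["timestamp"], "background audio detected"))
    return flags

events = [
    {"timestamp": "00:05:12", "faces_detected": 1, "background_audio_db": 10},
    {"timestamp": "00:17:40", "faces_detected": 2, "background_audio_db": 10},
    {"timestamp": "00:31:03", "faces_detected": 1, "background_audio_db": 55},
]
flags = flag_session(events)
# flags → [("00:17:40", "extra face on camera"),
#          ("00:31:03", "background audio detected")]
```

Note that the rules only surface events; as the text says, it is the human proctor who decides what action to take.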

IP-Based Authentication

This is an interesting feature because it deters impersonation by using the examinee’s IP address for user identification. It can also prevent cheating through remote-access tools.
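As a rough illustration, IP-based authentication can be as simple as checking the examinee’s address against networks registered in advance, and verifying that the address does not change between login and the exam session (a possible sign of remote-access tools). A minimal sketch in Python using the standard `ipaddress` module; the address ranges are illustrative, not any vendor’s actual configuration.

```python
import ipaddress

# Illustrative registered networks (documentation ranges, not real sites).
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # e.g. a testing-center LAN
    ipaddress.ip_network("198.51.100.0/24"),  # e.g. a corporate VPN egress
]

def ip_allowed(client_ip: str) -> bool:
    """True if the address falls inside any registered network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

def authenticate(session_ip: str, login_ip: str) -> bool:
    # Reject sessions from unregistered networks, and sessions whose
    # address differs from the one used at login.
    return ip_allowed(session_ip) and session_ip == login_ip

ok = authenticate("203.0.113.45", "203.0.113.45")   # inside the registered range
bad = authenticate("203.0.113.45", "198.51.100.9")  # address changed mid-process
```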

These are a few of the functionalities to look for when vetting the security of an online assessment platform. If you want to learn more about digital assessment security, feel free to check out this blog post.

Good Customer Support

We all get stuck once in a while, and good customer support shouldn’t be ignored when looking for an assessment tool.

  • How long do they take to reply to a query by a customer? 
  • Do they have a FAQs page?
  • How well is the software documented?
  • Have they implemented self-service support in their process?

Consider asking these questions when vetting customer support in online assessment tools. 

Final Thoughts: Online Assessment Tool Checklist

To sum up the article, here is a checklist to help you find the right assessment tool for your needs.

  • What cheating-prevention methods does it offer? (Lockdown browser, IP-based authentication, and IP flagging)
  • How good is their item authoring functionality? Go for one with tech-enhanced item types, classical item statistics storage, and a separate module for managing multimedia files.
  • How does the software handle online delivery? Check for adaptive testing capabilities, customizable options, and brandability.
  • What is their reputation? Always check what other people say about the brand and the software. How are their reviews online? Have they won any ed-tech awards?
  • How good is their reporting? Choose a tool that offers classical item performance reports with Iteman and has visualization capabilities.
  • Does the software support remote proctoring?
  • Are all their test development modules in alignment with best psychometric practices?
  • Do they offer multichannel support? How good is their documentation?
  • Is the software easy to use? Is it accessible from anywhere? Always go for user-friendly software.
  • How well does it integrate with existing systems?
  • What types of assessments (formative, diagnostic, summative, synoptic, ipsative, or work-integrated) are you looking for? Use this resource to help you differentiate between the types of online assessments.
  • Do they have an experienced team to help with test development and other consulting services?

Finding the right software for your needs is hard, especially in this competitive market. We hope this article, the long checklist specifically, helps you find the right assessment software. If you are interested in having access to the best online assessment tools and psychometrics consulting, feel free to contact us to discuss your needs.

Assessment Systems Corporation (ASC; https://assess.com) announced today that Giulia Bianchi has joined the team as an Assessment Solutions Consultant. Giulia has a strong background in the assessment industry, with 6 years of experience in sales, marketing, and operations for International Testing and Training Services Ltd (https://www.ittservices.net/). Giulia helped grow ITTS from its infancy into a testing services provider with a network of more than 200 testing centers.

“I’m excited to have Giulia join our team, especially as we continue to grow our business during the COVID pandemic, as test sponsors are searching for cost-effective solutions that allow them to quickly implement a digital transformation,” says Nathan Thompson, PhD, the CEO of ASC. “In particular, I’m thrilled about Giulia’s experience in the assessment industry outside the US, and her multilingualism. She can help us better serve our partners all across the world.” Giulia is bilingual in English and Italian, and also speaks French and Spanish.

For more information about ASC’s international partnerships, please contact solutions@assess.com.