Posts on psychometrics: The Science of Assessment

Digital assessment security is a hot topic in the Ed-Tech community. The use of digital assessments in education has led to increased concerns about academic integrity. While digital assessments have many advantages over the outdated pen-and-paper method, they draw plenty of criticism over academic integrity. However, cheating can be greatly reduced using the right tools and best practices.

Before we get into the nitty-gritty of creating secure digital assessments, let’s define cheating.

What is cheating? 

Cheating refers to actual, intended, or attempted deception and/or dishonesty in relation to academic or corporate assessments. 

Some of the ways in which examinees cheat in digital assessments include:

  • Plagiarism– Taking someone else’s work and turning it in as one’s own.
  • Access to external resources– Using external resources such as the internet, books, etc.
  • Using freelancing services– Paying someone else, often through academic freelancing sites such as Studypool, to do the work.
  • Impersonation– Letting someone else take an assessment in one’s place.
  • Abetting cheating– Helping others cheat, for example by leaking exam papers early or discussing answers with examinees who have not yet taken the exam.
  • Using remote access tools to get external help.

When addressing cheating in digital assessments, most professionals ask the wrong questions: How do we stop cheating? Which digital assessment security techniques should we use?

But we believe that is not an effective way to approach this problem. The right questions to ask are: Why do students cheat? How can we make students and employees confident enough to take assessments without cheating? Which digital assessment tools address the weakest links in the digital assessment life cycle? For instance, integrating techniques such as adaptive testing or multistage testing helps students and employees feel confident taking exams.

In this blog post, we are going to share digital assessment techniques and best practices that can help you create effective assessments. 

Part 1: Digital assessment security techniques 

1. Online Proctoring

Online/remote proctoring is the process of monitoring digital assessments remotely. It gives supervisors and invigilators the freedom to monitor examinations from any location.

To increase assessment security in this process, you can utilize these methods:

  • Image Capturing

This technique increases the effectiveness of online testing by taking photos of the candidate’s activity at set intervals. The images are later analyzed to flag any form of dishonesty. It is a basic mechanism, yet effective in situations where students don’t have access to a stable internet connection.
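The interval-based capture idea can be sketched as a simple scheduler. The capture function, interval, and duration below are all illustrative stand-ins for a real webcam or screenshot hook:

```python
import time

def capture_at_intervals(capture_fn, interval_s, duration_s):
    """Call capture_fn repeatedly for the duration of an exam session.

    capture_fn is a hypothetical hook (e.g. a webcam snapshot) supplied
    by the caller; timestamped captures are collected for later review.
    """
    captures = []
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        captures.append((time.monotonic(), capture_fn()))
        time.sleep(interval_s)
    return captures

# A stub capture function stands in for a real camera call here.
shots = capture_at_intervals(lambda: "frame", interval_s=0.01, duration_s=0.05)
```

In a real proctoring tool the captured frames would be uploaded when connectivity allows, which is why this approach suits unstable connections.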

  • Audio Capturing

This is similar to image/video capturing but keeps audio logs instead. The audio logs are later analyzed to identify any voice abnormalities in the surrounding environment. This reduces risks such as a candidate getting help from family members or friends in the same room.

  • Live Video Recording and/or Streaming

This is one of the most important elements of security in digital assessments. Live video streaming keeps the candidate under constant surveillance, and invigilators can suspend the examination if they notice suspicious behavior. Some online testing tools even integrate AI systems to help identify suspicious activity. Video recording, on the other hand, works like audio capturing.

Remote Proctoring: Record and Review

The video is recorded, stored in the cloud, and later analyzed to spot any suspicious behavior. 

To give you the freedom to be creative and flexible in the assessment process, Assess.com even lets you bring your own proctor.

2. Lockdown Browser

High-speed broadband and access to hundreds of tools to do anything. Sounds like the perfect opportunity to cheat? Yes, to some extent. However, with the lockdown browser feature, students are limited to a single tab: the one hosting the exam.

If the student tries accessing any external software or opening a new tab, a notification is sent to the invigilator. The feature starts by scanning the computer for video/image capture software. Once the test begins, the examinee is restricted from surfing the internet for answers.

3. Digital assessment security through AI Flagging

AI Flagging In Remote Proctoring

As noted in the article 10 EdTech Trends and Predictions To Watch Out For In 2021, AI will keep growing in popularity in Ed-Tech. Facial recognition (a form of AI) can be used to increase security not only on campus but also in digital assessments.

By using AI-integrated systems, supervisors can identify suspicious behavior and take action in real time. For example, AI systems can flag two faces on one screen. This feature works hand in hand with video streaming/recording, helping increase examination integrity while cutting costs.
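As a toy illustration, a flagging rule on top of a face detector might compare the number of faces found per frame against the expected count. The detector output here is simulated, not produced by a real model:

```python
def flag_frame(face_count, expected=1):
    """Flag a frame whose detected-face count differs from the expected
    number of authorized test-takers. face_count would come from a real
    face-detection model, which is assumed rather than implemented here."""
    return face_count != expected

# Simulated per-frame detector output: a second face appears in frame 2,
# and the candidate leaves the frame entirely in frame 4.
frame_faces = [1, 1, 2, 1, 0]
alerts = [i for i, n in enumerate(frame_faces) if flag_frame(n)]
```

Frames flagged this way would be queued for human review rather than triggering automatic suspension.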

4. IP-Based Authentication

Identity authentication is an integral step in achieving academic integrity in digital assessments. IP-based authentication uses a user’s IP address to confirm the identity of students undertaking the exam. This comes in handy when examinees try to cheat by giving remote access to other users. In a recent paper exploring e-authentication systems, other components of authentication include biometric systems, knowledge-based mechanisms, and document analysis. Using IP-based authentication in digital assessment may reduce some risks of cheating.
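A minimal sketch of the IP-consistency check follows; a real e-authentication system would combine this with the other factors mentioned above, and the addresses used are documentation examples:

```python
def same_session_origin(login_ip, request_ips):
    """Return True only if every in-exam request came from the IP seen
    at login; a mismatch may indicate remote-access cheating."""
    return set(request_ips) == {login_ip}

# Consistent session: all requests from the login IP.
clean = same_session_origin("203.0.113.7", ["203.0.113.7", "203.0.113.7"])
# Suspicious session: a request arrived from a second address.
flagged = not same_session_origin("203.0.113.7", ["203.0.113.7", "198.51.100.2"])
```

Note that legitimate network changes (e.g. a mobile handoff) can also trip this check, which is why it should feed a review queue rather than auto-fail the exam.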

5. Prevent Cheating By Data Encryption

Cybercrime has become a threat to many industries, and education faces a lot of challenges when it comes to information security. In fact, according to a recent study, 83% of schools in the UK experienced at least one cyberattack, despite having protection mechanisms such as antivirus software and firewalls. Another study found that education records can sell for as much as $265 on the black market.

CyberSecurity By Industry

This kind of insecurity finds its way into digital assessment because most tests are developed and delivered online. Some students can get early access to digital assessments, or change their grades, if they find a loophole in the assessment systems. To prevent this, it is important to implement cybersecurity practices in digital assessment systems. Data encryption is a key part of ensuring that exams and personal data are safe.
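Production systems should encrypt data with a vetted cryptography library, but the closely related idea of making grade records tamper-evident can be sketched with the standard library alone. The record format and key handling below are illustrative assumptions:

```python
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # assumption: kept server-side, never exposed

def sign_record(record):
    """Attach an HMAC so any tampering with a stored grade is detectable."""
    return hmac.new(SECRET_KEY, record.encode(), hashlib.sha256).hexdigest()

def verify_record(record, signature):
    # compare_digest avoids timing side-channels when checking signatures
    return hmac.compare_digest(sign_record(record), signature)

sig = sign_record("student=42;grade=87")
valid = verify_record("student=42;grade=87", sig)     # untouched record
tampered = verify_record("student=42;grade=99", sig)  # grade was altered
```

This does not hide the data (encryption does that); it guarantees that a student who finds a loophole and edits a stored grade cannot do so undetected.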

6. Audit Logging

The audit logging function tracks user activity in all aspects of the testing process using unique identifiers such as IP addresses. It keeps track of activities including clicks, stand-by time, software accessed during the assessment, and so much more. These logs are then analyzed for signs of dishonesty.
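A bare-bones sketch of such an audit trail, with the field names and event types chosen purely for illustration:

```python
import time

audit_log = []

def log_event(user_ip, event, detail=""):
    """Append a timestamped, structured entry keyed by the user's IP."""
    audit_log.append({"ts": time.time(), "ip": user_ip,
                      "event": event, "detail": detail})

log_event("203.0.113.7", "click", "question-3")
log_event("203.0.113.7", "blur", "window lost focus")

# Later review: count focus-loss events as a crude suspicion signal.
blur_count = sum(1 for e in audit_log if e["event"] == "blur")
```

In practice these entries would go to append-only storage so the log itself cannot be edited after the fact.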

Part II: Digital assessment security best practices

Digital Assessment Life Cycle

Most of the technologies discussed above focus on the last few stages of the evaluation process. The strategies below cover techniques that fall outside the main stages of the process, yet are still critical.

1. Creating Awareness On Cheating

Many institutions don’t create enough awareness of the dangers of cheating in relation to academic integrity. You can use media such as videos to give students guidelines on how to take exams authentically and on what counts as cheating. This prepares them to take the exams without thinking about cheating.

2. Don’t Use Publisher Item Banks

Course textbooks and guides may come with complimentary test banks, and you should refrain from using them in assessments. This is because most course materials are available on the internet, which invites cheating. Instead, write your own questions based on your learning objectives and methodologies.

3. Disable Backtracking

This increases assessment effectiveness by making sure that students answer what they know, and it helps locate their weakest links. Allowing backtracking gives students time to go back and try to locate the correct answers, which reduces effectiveness.

4. Diversity In Question Types

Fear of failure is one thing that makes examinees cheat. By creating diverse question types, you improve students’ engagement with the assessment, and therefore their confidence. Don’t give your examinees multiple-choice questions only; add in some yes/no questions, essays, and so on.

5. Make Them Sign Academic Integrity Contracts

Contracts have a psychological effect on people, and examinees are more likely to be honest if they sign some form of contract. The contract can be tied to the policies around academic integrity.

If you are interested in other best practices to increase online assessment security, here is an important resource that can help.

Conclusion

Digital assessment security is an important aspect of online education and should not be ignored. Despite the immaturity of existing frameworks and methodologies in the market, techniques such as remote proctoring, lockdown browser, and audit logging have proven to be effective. 

Creating and delivering high-quality digital assessments is not a walk in the park. If you need professional help in creating online assessments with alignment to the best psychometrics practices, contact Assess for a free consultation. 

You can also get access to online assessment tools with security features including AI flagging, remote proctoring, lockdown browser, and so much more!

STILLWATER, Minnesota USA, and MARILAO, BULACAN, Philippines — May 5, 2021 — ASC, the world’s leading provider of educational assessment software, has announced its first client for online educational assessment in the Philippines. St. Philomena School Kids Comfort Zone took a step forward in revolutionizing educational assessments, joining ASC’s community of organizations that drive high-quality assessment with sophisticated assessment platforms.

The software is expected to help the school improve educational assessment effectiveness, especially after the disruption caused by the Covid-19 pandemic. This will in turn shape the learning experience by improving feedback for students.

St. Philomena School Kids Comfort Zone started using ASC’s cloud software platform to help them with tasks such as tracking & analyzing assessments, test development & delivery, and more. All of this functionality is geared towards creating educational assessments and strategies that positively impact learning. 

“Moving from traditional testing methods and developing online exams that are effective has always been a challenge, especially during the pandemic” admitted Engr. Alphonsus C. De Alban, MAEd, LPT, the owner of St. Philomena School Kids Comfort Zone. “However, with the adoption of this new technology, we believe that we can create assessments that work in our students’ best interests.”

“Digitalization of educational assessment is a trend that has many positive benefits to schools and learning, and it’s refreshing to see educational entities such as St Philomena School Kids Comfort Zone capitalize on it,” said Nathan Thompson, CEO at ASC. “For us, this means staying at the forefront of the revolution and pioneering technologies and solutions that shape the future of digital assessments for the best.”

About Assessment Systems

Assessment Systems Corporation (ASC; assess.com) is the world’s leading provider of innovative digital assessment software solutions and consulting services across Education and HR.  Its mission is to leverage AI, automation, and psychometrics best practices to positively impact education and employment opportunities by helping clients develop digital assessments with agility, reliability, and validity.

About St. Philomena School Kids Comfort Zone
St. Philomena School: Kids’ Comfort  Zone Inc. remains committed as a SCHOOL OF LOVE that aims to provide highly individualized and specialized educational services to ALL students with different abilities in an inclusive environment ACROSS ALL PLATFORMS at ALL TIMES.

The Covid-19 pandemic caused a radical shift in the education sector, which has certainly impacted EdTech trends and accelerated the adoption of education technology solutions across K-12, e-learning, and higher education. Some trends and technologies that dominated the marketplace in 2019, including video-based learning, AI, machine learning, digital assessments, learning analytics, and many others, are expected to take a new turn in 2021.

To give you a complete glimpse into the future of education post-pandemic, we have prepared 10 EdTech Trends and Predictions To Watch Out For In 2021. What will happen to the education technology marketplace? Which trends and technologies will disrupt education as we know it? 

Let’s find out. 

Important EdTech Stats To Note:

  • 86% of educators say technology is the core of learning
  • EdTech expenditure will hit $404 billion by 2025, growing at a 16% CAGR between 2019 and 2025

Image Credit: Bryce Durbin / TechCrunch

  • Utilization of Technology for Engagement is on the rise (84% of Educators support the idea)
  • Global investments in education technology are on the rise, with a total of $10 billion in 2020. They are expected to reach $87 billion in the next decade. 

Part 1: EdTech Trends To Watch Out For

  1. E-Learning

E-learning is the biggest EdTech trend in 2021. Its scalability and cost-effectiveness have made this model the go-to solution for students all over the world, especially during the pandemic. Recent research also shows that e-learning improves retention rates by 25-60%, making it a better deal for students.

The industry is booming and is expected to hit the $375 billion mark by 2026. 

To capitalize on this trend, many institutions have started offering degrees online. Platforms such as Coursera have partnered with the world’s greatest universities to provide short courses, certifications, and bachelor’s degrees online at lower costs.

Other platforms offering eLearning services include Lessonly, Udemy, Masterclass, and Skillshare. This trend is not likely to slow down post-covid, but there will be barriers that should be eliminated in order to accelerate the adoption of this trend. 

  2. Digital Assessments Become A Necessity

Assessments are a vital part of the learning process as they gauge the effectiveness of learning methods. Over the past few years, online tests have been replacing traditional testing methods, which mostly involved pen and paper. Some of the main advantages of digital assessments include:

  • Reliability
  • Cost-effectiveness
  • Flexibility
  • Increased security and efficiency
  • Equal opportunity for students with disabilities, since online testing can support technologies such as speech-to-text
  • Immediate results that reduce stress and anxiety among students
  • Learning analytics that help improve assessment quality

Online exams have increased the effectiveness of the testing process by adding technologies and strategies such as adaptive testing, remote proctoring, and learning analytics into the equation.

This has helped students improve retention rates and exam developers enhance their testing strategies. 

Cheating, which has been a major concern in assessments, has been reduced by online assessment techniques such as lockdown browsers and time windows.

Digital assessments are a must-have for all institutions looking to capitalize on the advantages of online testing in 2021.

  3. Extended Reality (XR)

Virtual Reality Learning Retention (Source: FrontCore)

Virtual Reality and Augmented Reality are revolutionizing the education sector by changing the way students receive and process information. 2021 will witness increased adoption of XR technologies and solutions across education and corporate learning environments.

Extended Reality technologies and solutions are improving retention rates and learning experiences in students by creating immersive worlds where students can learn new skills and visualize concepts in an interactive way. 

According to research, Virtual Reality has higher retention rates compared to other learning methods such as lecturing, demonstration, and audio-visuals.

Some other ways in which XR is improving the education sector are offering cost-effective and interesting field trips, enhancing lifelong learning, and transforming hands-on learning. 

  4. Chatbots In Higher Education

Chatbots took the sales and marketing industry by storm over the past few years and are finding their way into the education sector. During the Covid-19 pandemic, many higher education institutions integrated chatbots into their systems to help in the automation of tasks. 

Some tasks that can be automated include lead generation for courses and resolving student queries on pressing issues such as fees. This improves cost-effectiveness in institutional administration.

 This EdTech trend will be integrated into more institutions in 2021. 

  5. Video-Based Learning At The Forefront

Video has become king in learning and entertainment, and 2021 will see an exponential increase in video consumption. In relation to learning, the rapid increase in internet-connected devices and new technological innovations such as smart TVs will accelerate the adoption of video-based learning.

There are many advantages to video-based learning, including cost-friendliness, higher retention rates, and support for self-paced learning. Its reliability will push higher education institutions to invest further in it.

Using video-based learning is not a complex procedure, especially because there are many tools and platforms online that can help with the process. The cost of producing high-quality videos and lectures has also fallen.

  6. Artificial Intelligence (AI)

Artificial Intelligence (AI) has been at the forefront of pioneering innovations that drive humanity to the new ages of civilization. In relation to education, AI has played a big role in enhancing learning experiences, improving assessment practices and so much more! 

In 2021, AI will make headlines as it will be integrated into important aspects of higher education such as campus security, student performance evaluation, and so much more! 

Facial recognition, which is a component of AI, can be used in tracking student attendance, securing campus premises, and reducing dishonesty during online testing.

AI also plays a vital part in student evaluation and in improving the assessment process by powering technologies such as adaptive testing. Adaptive testing improves the testing process by tailoring each assessment to the individual student, selecting questions based on the student’s responses and estimated ability.
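The core loop of adaptive testing can be sketched as follows. The item bank, the nearest-difficulty selection rule, and the fixed-step ability update are deliberate simplifications of the item-response-theory methods real systems use:

```python
def next_item(bank, ability):
    """Pick the item whose difficulty is closest to the ability estimate."""
    return min(bank, key=lambda item: abs(item["difficulty"] - ability))

def update_ability(ability, correct, step=0.5):
    """Naive update: move up on a correct answer, down otherwise.
    Real systems re-estimate ability with item response theory."""
    return ability + step if correct else ability - step

bank = [{"id": i, "difficulty": d} for i, d in enumerate([-1.0, 0.0, 1.0, 2.0])]
theta = 0.0
item = next_item(bank, theta)                # difficulty 0.0 is nearest
theta = update_ability(theta, correct=True)  # estimate rises to 0.5
```

Because every examinee follows a different path through the bank, adaptive testing also makes answer-sharing between examinees far less useful.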

The power of AI is unlimited and there is no predicting what 2021 will bring to the education industry.

I guess we will have to wait and see what it throws at us. 

  7. Big Data and Learning Analytics

Image Source: Unsplash

Learning analytics is another trend poised to disrupt the education industry. It is a set of technologies and strategies used to empower educators, track learning processes, and help instructors make data-driven decisions.

 Some components of learning analytics include Artificial Intelligence, visualizations, machine learning, statistics, personalized adaptive learning, and educational data mining. 

As we all know, data runs everything in the information era, and whoever has the best data analytics model wins. In education, analytics has played an important role in shaping instructional design and assessment processes for the better.

2021 will see increased adoption of analytics tools into Learning and Development.

Here are some of the ways learning analytics will shape education for the better:

  • Behavior and performance prediction– Using predictive analytics, instructors can predict the performance of learners based on their present and past performances.
  • Increased retention rates– By having a clear picture of what students are good at and what they are struggling with (You can identify this by analyzing their digital assessment results), one can shape the learning process to increase retention rates. 
  • Support for personalized learning– This is the star attraction of learning analytics. By using data analytics, instructors can create personalized learning experiences for different students based on several metrics.
  • Increase cost-efficiency– Education and training are expensive, and learning analytics can help improve the quality of learning outcomes while cutting out unnecessary costs.
  • Helps in the definition of learning strategy– By understanding what you need to achieve, and the purpose of your goals, you can use data analytics to create a roadmap to help you in the process. 
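The performance-prediction idea above can be illustrated with a toy trend estimate. Real learning-analytics systems use far richer models, and the weighting here is an arbitrary assumption:

```python
def predict_next_score(past_scores, weight_recent=0.7):
    """Blend the most recent score with the historical mean as a crude
    stand-in for the predictive models used in learning analytics."""
    if not past_scores:
        raise ValueError("need at least one past score")
    mean = sum(past_scores) / len(past_scores)
    return weight_recent * past_scores[-1] + (1 - weight_recent) * mean

# A student trending upward: mean is 70, latest score is 80.
pred = predict_next_score([60, 70, 80])  # 0.7*80 + 0.3*70 = 77.0
```

Even a crude estimate like this lets an instructor spot a student whose predicted score is drifting downward and intervene early.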

  8. Gamification

Gamification is not a new concept in education. It has been around for ages, but it’s expected to take a different turn through the integration of game mechanics in learning. Minecraft, the game everybody can’t seem to get enough of, is a good example: it has come in handy for teaching important concepts such as storytelling.

Programming and game design, skills that are important in our digital world, have received a huge boost from games such as Roblox, which attracts more than 100 million players each month.

In 2021, we are likely to see the rise of more gamification strategies in the ed-tech space.

Part 2: Predictions

  1. Accelerated EdTech Investments

Over the past decade, global EdTech venture capital has seen exponential growth, from $500 million in 2010 to more than $10 billion in 2020.

As the adoption of digital EdTech solutions rises, this number is expected to rise rapidly in the next decade. Some reports estimate an $87 billion increase in investments by 2030.

With Asia being the leading adopter of EdTech, startups from other continents will be looking to up their game to capitalize on opportunities arising from the industry.

  2. EdTech Market Value Will Reach New Heights

The Covid-19 pandemic increased the adoption of education technology solutions. Expenditure in the sector is expected to reach new heights ($404 billion by 2025).

What began as a short-term remedy for the education sector will become the new norm.

Many schools and colleges have not started their transition to the new models of learning, but most of them are laying out strategies to start the process. Integrating some of the trends discussed above may be a good way to start your journey.

Final Thoughts

The uncertainty that comes in the aftermath of the Covid-19 pandemic makes it hard to predict the ed-tech trends that will lead the way in 2021. But we believe that 2021 will be a year of many firsts and unknowns in the education sector. That said, there is immense power in understanding how the above-discussed trends and predictions may shape L&D, and how you can capitalize on the opportunities that arise from them.

If you are interested in leveraging the power of digital assessments and psychometrics practices for your education and corporate campaigns, feel free to sign up for access to our free assessment tools or consult with our experienced team for services such as test development, job analysis, implementation services and so much more!

Online corporate training assessment is an integral part of improving the performance and productivity of employees. To gauge the effectiveness of the training, online assessments are the go-to solution by many businesses.  While traditional on-premise assessments do the trick, online corporate training assessments have proved to be more cost-effective, reliable, and secure. (Image Source: Unsplash)

Also known as digital assessments or e-assessments, online assessments are the new black in recruitment and learning & development (L&D). In fact, according to a Talent Assessment Study conducted in 2016-2017, the adoption of online assessment grew by 116% in L&D and 114% in recruitment. Made up of several approaches, online assessments aim to assess technical and soft skills and individuals’ personalities. They can be adopted in industries such as education, technology, and even corporate environments!

However, despite the many benefits of online assessments, developing assessments that are effective has proven to be a problem for many businesses. 

In this blog, we explore 5 tips to create effective online corporate training assessments to make sure that your training achieves its main objectives.

But before getting into the brass tacks of online assessment, here are some benefits of using online testing in corporate training assessment:


Benefits Of Online Corporate Training Assessments

Insight On Company Strengths and Weaknesses

Online testing gives businesses and organizations insight into the positives and negatives of their training programs. For example, if an organization realizes that certain employees can’t grasp certain concepts, they may decide to modify how they are delivered or eliminate them completely. The employees can also work on their areas of weaknesses after the assessments, hence improving productivity. 

Helps in Measuring Performance

Unlike traditional testing, where it is difficult to perform analytics on assessments or measure performance with high fidelity, online assessments can quantify initial goals such as call center skills. By measuring performance, businesses can create data-driven roadmaps for how their employees can reach peak performance.

Advocate For Ideas and Concepts That Can Be Integrated Into The Real World

Workers learn every day, and sometimes what they learn is not used in driving the business towards its objectives. This can lead to burnout and information overload in employees, which in turn lowers performance and work quality. By using online assessments, you can customize tests to help workers attain skills that are in alignment with your business goals. This can be done with methods such as adaptive testing.

Other Benefits Include: 

  • The assessments can be taken from anywhere in the world
  • They save the company a lot of time and resources
  • Improved security compared to traditional assessments
  • Improved accuracy and reliability
  • Scalability and flexibility

Components Of Effective Corporate Training Assessments

Effective online testing is all about having a well-defined strategy. What is it you are trying to achieve with the corporate assessment? 

It is therefore important that you have data-driven insight into key questions such as what you are trying to assess, why you are assessing it, when to assess learners, which online assessment tools to use, and how to go about the assessments.

These help you formulate assessments with learning goals and objectives in mind, which improves their effectiveness.

Tips To Improve Your Online Corporate Training Assessments

1. Personalized Testing

Most businesses have an array of training needs. Employees have different backgrounds and responsibilities, which makes it difficult to create effective generalized tests. To achieve the main objectives of your training, it is important to differentiate the assessments: sales, technical, and management assessments cannot be the same. Even within the same department, there can be diversity in skills and responsibilities.

One way to achieve personalized testing is with methods such as Computerized Adaptive Testing. Through the power of AI and machine learning, this method lets you create tests that are unique to each employee. Not only does personalized testing improve effectiveness in your workforce, it is also cost-effective, secure, and in alignment with the best psychometrics practices in the corporate world. Keep the components of effective online assessments in mind when creating personalized tests.

2. Analyzing Assessment Results

Many businesses don’t see the importance of analyzing corporate training assessment results. How do you expect to improve your training programs and assessments if you don’t derive value from the assessment results? 

Example of Assessment analysis on Iteman

Analyze assessment results using psychometric analytics software such as Iteman to get important insights such as successful participants, item performance issues, and many others. This provides you with a blueprint to improve your assessments and employee training programs. 

3. Integrating Online Assessments Into Company Culture

Getting the best out of online assessments is not about getting it right once, but getting it right over a long period of time. Integrating assessments into company culture is one great way to achieve this. This will make assessments part of the systems and employees will always look forward to improving their skills. You can also use strategies such as gamification to make sure that your employees enjoy the process. It is also critical to give the employees the freedom to provide feedback on the assessments and training programs. 

4. Diversify Your Assessment Types

One great myth about online assessments is that they are limited in terms of questions and problems you can present to your employees. However, this is not true!

By using methods such as item banking, assessment systems are able to provide users with the ability to develop assessments using different question types. Some modern question types include:

  • Drag & drop 
  • Multiple correct 
  • Embedded audio or video
  • Cloze or fill in the blank
  • Number lines
  • Situational judgment test items
  • Counter or timer for performance tests

Diversification of question types improves comprehension in employees and helps them develop skills to approach problems from multiple angles. 
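Assembling a mixed-type form from an item bank might look like the sketch below; the bank contents, type names, and per-type counts are invented for illustration:

```python
import random

def assemble_form(bank, per_type, seed=None):
    """Draw a fixed number of items of each question type from the bank,
    so every examinee receives a diverse (optionally randomized) form."""
    rng = random.Random(seed)
    form = []
    for qtype, count in per_type.items():
        pool = [item for item in bank if item["type"] == qtype]
        form.extend(rng.sample(pool, count))
    return form

# A toy bank: five multiple-choice items and three drag-and-drop items.
bank = (
    [{"id": f"mc{i}", "type": "multiple_choice"} for i in range(5)]
    + [{"id": f"dd{i}", "type": "drag_drop"} for i in range(3)]
)
form = assemble_form(bank, {"multiple_choice": 2, "drag_drop": 1}, seed=7)
```

Randomizing the draw per examinee has a side benefit for security: no two employees receive exactly the same form.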

5. Choose Your Online Assessment Tools Carefully

This is among the most important considerations you should make when creating a corporate assessment strategy. This is because online assessment software and tools are the core of how your campaigns turn out. 

There are many online assessment tools available, but choosing one that meets your requirements can be a daunting task. Apart from the key considerations of budget, functionality, etc., there are many other factors to keep in mind before choosing online assessment tools.

To help you choose an online assessment tool that will help you in your assessment journey, here are a few things to consider:

Ease-of-use

Most people are new to online assessments, and as powerful as some functionalities can be, they may overwhelm candidates and the test development staff, which can cause candidates to underperform. It is therefore important to vet the platform and its functionalities to make sure they are easy to use.

Functionality 

Online assessments are becoming popular and new innovations appear every day. Does the online assessment software have the latest innovations in the industry? Do you get value for your money? Does it support modern psychometrics like item response theory? These are just a few of the questions to ask when vetting a platform for functionality.

Assessment Reporting and Visualizations

One major advantage of online assessments over traditional ones is that they offer access to instant assessment reporting. You should therefore look for a platform that offers advanced reporting and visualizations in metrics such as performance, question strengths, and many others. 

Cheating precautions and Security

When it comes to online corporate assessments, there are two security concerns: how secure are the assessments, and how secure is the platform? For the tests themselves, the platform should provide anti-cheating precautions and technologies such as a lockdown browser. It should also have measures in place to keep user data secure.

Reliable Support System

This is one consideration that many businesses overlook, and end up regretting in the long run. Which channels does the corporate training assessment platform use to provide support? Does it offer resources such as whitepapers and documentation in case you need them? How fast is its support team?

These are questions you should ask before selecting a platform to take care of your assessment needs. 

Scalability

A good online testing platform should be able to provide you with resources should your needs go beyond expectation. This includes delivery volume – server scalability – but also being able to manage more item authors, more assessments, more examinees, and greater psychometric rigor.

Final Thoughts

Adopting effective online corporate training assessments can be a daunting task with a lot of forces at play, and we hope these tips will help you get the best out of your assessments. Online assessments are revolutionizing many industries, and it's about time you integrated them into your workflow.

Do you want to integrate online assessments into your corporate environment or any industry but feel overwhelmed by the process? Feel free to contact an experienced team of professionals to help you create an assessment strategy that helps you achieve your long-term goals and objectives.  

You can also sign up to get free access to our online assessment suite including 60 item types, IRT, adaptive testing, and so much more functionality!

We are proud to announce that ASC’s next-generation platform, Assess.ai, was recently announced as a Finalist for the 2021 EdTech Awards. This follows the successful launch of the landmark platform in 2020, where it quickly garnered attention as a finalist for the Innovation award at the 2020 conference of the Association of Test Publishers.

What is Assess.ai?

Assess.ai brings together best practices in assessment with modern approaches to automation, AI, and software design. It is built to make it easier to develop high-quality assessments with automated item review and automated item generation, and deliver with advanced psychometrics like adaptive and multistage testing.

An example of this approach is shown below; a modern UI/UX presents a visual platform for publishing MultiStage Tests based on information functions from item response theory (IRT). Users can easily publish tests with this modern AI-based approach, without having to write a single line of code.

[Figure: Multistage testing]

What are the EdTech Awards?

See the full list of finalists, and learn more about the awards, via the link below.

Item analysis refers to the process of statistically analyzing assessment data to evaluate the quality and performance of your test items. This is an extremely important step in the test development cycle, not only because it helps improve the quality of your test, but because it provides documentation for validity: evidence that your test performs well and score interpretations mean what you intend.

The Goals of Item Analysis

Item analysis boils down to two goals:

  1. Find the items that are not performing well (difficulty and discrimination, usually)
  2. Figure out WHY those items are not performing well

There are different ways to evaluate performance, such as whether the item is too difficult/easy, too confusing (not discriminating), mis-keyed, or perhaps even biased to a minority group. Moreover, there are two completely different paradigms for this analysis: classical test theory (CTT) and item response theory (IRT). On top of that, the analyses can differ based on whether the item is dichotomous (right/wrong) or polytomous (2 or more points). Because of the possible variations, item analysis is actually a very deep and complex topic. And that doesn’t even get into evaluation of test performance. In this post, we’ll cover some of the basics for each theory, at the item level.

Implementing Item Analysis

To implement item analysis, you should utilize dedicated software designed for this purpose. If you utilize an online assessment platform, it will provide you output like you see below because it will have such dedicated software already integrated (if not, it isn’t a real assessment platform). In some cases, you might utilize standalone software. CITAS provides a simple spreadsheet-based approach to help you learn the basics.

Classical Test Theory

Classical Test Theory provides a very simple and intuitive approach to item analysis. It utilizes nothing more complicated than proportions, averages, counts, and correlations. For this reason, it is useful for small-scale exams, or for groups that do not have psychometric expertise.

Item Difficulty

CTT quantifies item difficulty for dichotomous items as the proportion (P value) of examinees that correctly answer it. If P = 0.95, that means the item is very easy. If P = 0.35, the item is very difficult. Note that because the scale is inverted (lower value means higher difficulty), this is sometimes referred to as item facility.

For polytomous items, we evaluate the mean score. If the item is an essay that is scored 0 to 5 points, is the average score 1.9 (difficult) or 4.1 (easy)?
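As a quick sketch, both difficulty statistics can be computed in a few lines. The response data below is invented purely for illustration:

```python
import numpy as np

# Hypothetical 0/1 scored responses: rows = examinees, columns = items.
scores = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
    [1, 0, 0],
])

# Classical difficulty for dichotomous items: proportion correct (P value).
p_values = scores.mean(axis=0)
print(p_values)  # item 1 here is easy (P = 0.8), item 3 is hard (P = 0.2)

# For a polytomous item (say an essay scored 0-5), difficulty is the mean score.
essay = np.array([2, 4, 5, 1, 3])
print(essay.mean())
```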

Item Discrimination

In psychometrics, discrimination is a GOOD thing. The entire point of an exam is to discriminate amongst examinees; smart students should get a high score and not-so-smart students should get a low score. If everyone gets the same score, there is no discrimination, and no point to the exam! Item discrimination evaluates this concept.

CTT uses the point-biserial item-total correlation (Rpbis) as its primary statistic for this. It correlates scores on the item to the total score on the test. If the item is strong, and it measures the topic well, then examinees who get the item right will tend to score higher on the test. This will mean the correlation will be 0.20 or higher. If it is around 0.0, that means the item is just a random data generator, and worthless on the exam.
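The item-total correlation is easy to sketch with invented data; note that operational software usually also reports a corrected version that excludes the item from its own total:

```python
import numpy as np

def point_biserial(item, total):
    """Pearson correlation between a 0/1 item score and the total test score."""
    return np.corrcoef(item, total)[0, 1]

# Hypothetical data for a well-behaved test: higher-ability examinees
# (higher totals) tend to answer each item correctly.
responses = np.array([
    [1, 1, 1, 1],   # total 4
    [1, 1, 1, 0],   # total 3
    [1, 1, 0, 0],   # total 2
    [0, 1, 0, 0],   # total 1
    [0, 0, 0, 0],   # total 0
])
totals = responses.sum(axis=1)

for j in range(responses.shape[1]):
    print(f"Item {j + 1}: Rpbis = {point_biserial(responses[:, j], totals):.2f}")
```

All four items correlate positively with the total, as a strong item should; a value near 0.0 would flag a worthless item.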

Key and Distractor Analysis

In the case of many item types, it pays to evaluate the answer options. A distractor is an incorrect option. We want to make sure that no distractor is selected by more examinees than the key (its P value), and that no distractor has a higher discrimination than the key. The latter would mean that smart students are selecting a wrong answer, while not-so-smart students are selecting what is supposedly correct. In some cases, the item is just bad. In others, the answer is simply recorded incorrectly, perhaps by a typo; we call this a miskey. In both cases, we want to flag the item and then dig into the distractor statistics to figure out what is wrong.
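A minimal distractor analysis computes, for each option, the proportion choosing it and an option-level point-biserial. The choices and totals below are invented; on a healthy item the key has a positive correlation and the distractors negative ones, and a reversal suggests a miskey:

```python
import numpy as np

# Hypothetical data for one multiple-choice item; the key is "B".
choices = np.array(["B", "B", "A", "B", "B", "C", "B", "D", "B", "B"])
totals = np.array([9, 8, 4, 7, 8, 3, 5, 2, 9, 6])  # total test scores
key = "B"

stats = {}
for option in "ABCD":
    picked = (choices == option).astype(float)
    p = picked.mean()  # proportion selecting this option
    # Option-level discrimination: correlate selecting the option with total score.
    r = np.corrcoef(picked, totals)[0, 1]
    stats[option] = (p, r)
    label = " (key)" if option == key else ""
    print(f"{option}{label}: P = {p:.2f}, Rpbis = {r:+.2f}")
```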

Example

Below is an example output for one item from our Iteman software, which you can download for free. You might also be interested in this video. Here, we see that 82% of students answered this item correctly, with very high Rpbis. This is a very well performing item.

[Figure: Iteman item analysis output]

Item Response Theory

Item Response Theory (IRT) is a very sophisticated paradigm for item analysis and many other psychometric tasks. It requires much larger sample sizes and more extensive expertise than CTT, so it isn't relevant for small-scale exams like classroom quizzes. However, it is used by virtually every "real" exam you will take in your life, from K-12 benchmark exams to university admissions to professional certifications.

If you haven’t used IRT, I recommend you check out this blog post first.

Item Difficulty

IRT evaluates item difficulty for dichotomous items as a b-parameter, which is sort of like a z-score for the item on the bell curve: 0.0 is average, 2.0 is hard, and -2.0 is easy. (This can differ somewhat with the Rasch approach, which rescales everything.) In the case of polytomous items, there is a b-parameter for each threshold, or step between points.

Item Discrimination

IRT evaluates item discrimination by the slope of its item response function, which is called the a-parameter. Often, values above 0.80 are good and below 0.80 are less effective.

Key and Distractor Analysis

In the case of polytomous items, the multiple b-parameters provide an evaluation of the different answers. For dichotomous items, the IRT model does not distinguish among the incorrect answers. Therefore, we utilize the CTT approach for distractor analysis, which remains extremely important for diagnosing issues in multiple-choice items.

Example

Here is an example of what output from an IRT analysis program (Xcalibre) looks like. You might also be interested in this video. Here, we have a polytomous item, utilizing the generalized partial credit model. It has a strong classical discrimination (0.62) but poor IRT discrimination (0.466).

[Figure: Xcalibre item response theory output]

Do you conduct scholarly research on adaptive testing? Perhaps a thesis or dissertation? Or maybe you have developed adaptive tests and have a technical report or validity study? I encourage you to check out the Journal of Computerized Adaptive Testing as a publication outlet for your adaptive testing research. JCAT is the official journal of the International Association for Computerized Adaptive Testing, a nonprofit org dedicated to improving the science and technology of assessment.

JCAT has an absolutely stellar board of editors, and was founded to focus on improving the dissemination of adaptive testing research. The IACAT website also contains a comprehensive bibliography of adaptive testing research, across all journals and tech reports, for the past 50 years!

The COVID-19 pandemic is drastically changing every aspect of our world, and one of the most affected areas is educational assessment and other types of testing. Many organizations were still delivering tests with 50-year-old methodology, such as putting 200 examinees in a large room with desks, paper exams, and a pencil. COVID-19 is forcing many organizations to pivot, which provides an opportunity to modernize assessments. But how can we maintain test security, and therefore validity, through these changes? Below are some suggestions, all of which can be easily implemented in ASC's industry-leading assessment platforms. Start by signing up for a free account at http://assess.ai.

True Item Banking with Content Access

A good assessment starts with good items. While Learning Management Systems (LMS) and other platforms that are not truly assessment platforms include some item-authoring functionality, they generally do not meet the basic requirements for true item banking. There are item banking best practices that are standard at large-scale assessment organizations (e.g., State Departments of Education in the US) but are surprisingly rare for professional certification/licensure exams, universities, and other organizations. Here are some examples.

• Items are reusable (no need to upload them for each test in which they are used)

• Item version tracking

• Tracking and audits of user edits

• Author content controls (math teachers can only see math items)

• Storage of metadata such as item response theory (IRT) parameters and classical statistics

• Tracking of item usage across tests

[Figure: Item review workflow]

Role-Based Access

All users should be limited by roles, such as Item Author, Item Reviewer, Test Editor, and Examinee Manager. So, for example, someone in charge of managing the list of examinees/students might never see any exam questions.

Data Forensics

There are many ways to analyze your test results to look for potential security/validity threats. Our SIFT software provides a free platform to help you implement this modern methodology. You can evaluate collusion indices, which quantify how similar the responses are for any pair of examinees. You can also evaluate response times, group performance, and aggregate statistics.
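The raw ingredient behind collusion indices is pairwise response agreement. The sketch below computes only that naive building block with invented data; published indices (and tools like SIFT) go further by modeling the agreement expected by chance:

```python
import numpy as np
from itertools import combinations

# Hypothetical selected options (A-D) for 4 examinees on 8 items.
responses = np.array([
    list("ABCDABCD"),
    list("ABCDABCA"),   # agrees with examinee 0 on 7 of 8 items
    list("DCBADCBA"),
    list("BADCBADC"),
])

# Naive collusion screen: count identical responses for every pair of examinees.
for i, j in combinations(range(len(responses)), 2):
    matches = int((responses[i] == responses[j]).sum())
    print(f"Examinees {i} and {j}: {matches}/8 identical responses")
```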

Randomization

When tests are delivered online, you should have the option to randomize the order of items and also the order of answer options. When printing on paper, there should also be an option to randomize the order, though of course you are much more limited in this regard with paper.

Linear On-the-Fly Testing (LOFT)

LOFT creates a unique randomized test for each examinee. For example, you might have a pool of 300 items spread across 4 domains, and each examinee receives 100 items, with 25 from each domain. This greatly increases security.
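The LOFT assembly described above (300 items, 4 domains, 25 drawn per domain) can be sketched as constrained random sampling. The pool and item IDs here are hypothetical:

```python
import random

# Hypothetical pool: 300 item IDs spread evenly across 4 domains (75 each).
pool = {d: [f"D{d}-item{i}" for i in range(75)] for d in range(1, 5)}

def build_loft_form(pool, per_domain=25, seed=None):
    """Draw a unique randomized form: per_domain items from each domain."""
    rng = random.Random(seed)
    form = []
    for domain, items in pool.items():
        form.extend(rng.sample(items, per_domain))  # sample without replacement
    rng.shuffle(form)  # also randomize item order across domains
    return form

form = build_loft_form(pool, seed=42)
print(len(form))  # 100 items for this examinee
```

Each examinee gets a different seed (or no seed at all), so no two forms are likely to be identical.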

Computerized Adaptive Testing (CAT)

CAT takes personalization even further, adapting the difficulty of the exam and the number of items each examinee sees, based on psychometric algorithms and targets. This makes the test extremely secure.
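A common CAT item-selection rule is maximum Fisher information at the current ability estimate. The sketch below assumes a 2PL model and a tiny hypothetical calibrated bank; operational CAT engines add exposure control and content constraints on top of this:

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# Hypothetical calibrated bank: item id -> (a, b).
bank = {"i1": (1.0, -1.5), "i2": (1.2, 0.0), "i3": (0.8, 0.5), "i4": (1.5, 1.8)}

def select_next_item(theta, bank, administered):
    """Pick the unused item with maximum information at the current theta."""
    candidates = {k: v for k, v in bank.items() if k not in administered}
    return max(candidates, key=lambda k: item_information(theta, *candidates[k]))

print(select_next_item(0.0, bank, administered={"i1"}))  # → i2
```

Because every examinee's path depends on their own responses, no two examinees see the same sequence of items, which is exactly the security benefit described above.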

Lockdown Browser

Want to make sure the examinee cannot browse for answers or take screenshots of items? You need a lockdown browser. ASC's assessment platforms, Assess.ai and FastTest, come with this out of the box at no additional cost.

Examinee Test Codes

Want to make sure the right person takes the right exam? Generate unique, one-time passwords to be released by a proctor after identity verification. This is especially useful in remote proctoring; the examinee never receives any pre-exam information about how to enter, except how to start the virtual proctoring session. Once the proctor verifies the examinee's identity, they provide the unique one-time password.
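Generating such one-time codes is straightforward with a cryptographically secure random source; this is only an illustrative sketch (the code format and examinee IDs are made up), not how any particular platform implements it:

```python
import secrets
import string

def one_time_code(length=8):
    """Generate a cryptographically random one-time test entry code."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique code per scheduled examinee, released by the proctor after ID check.
codes = {examinee: one_time_code() for examinee in ["e1001", "e1002", "e1003"]}
print(codes)
```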

Proctor Codes

Want an additional step in the test launch procedure? Once an examinee's identity is verified and they enter their code, the proctor must also enter a separate password that is unique to them for that day.

Date/Time Windows

Want to prevent examinees from entering early or late? Set up a specific time window, such as Friday from 9 am to 12 pm.

AI-Based Proctoring

This level of proctoring is relatively inexpensive and does a great job of validating the results of an individual examinee. However, it does not protect the intellectual property of your exam questions; if an examinee steals all the questions, you won't know right away. It is therefore very useful for low- or medium-stakes exams, but not as useful for high-stakes exams such as certification or licensure. Learn more about our remote proctoring options. I also recommend this blog post for an overview of the remote proctoring industry.

Live Online Proctoring

If you cannot use in-person test centers due to COVID, this is the next best option. Live proctors can check in the candidate, verify identity, and implement everything described above. They can also check the examinee's environment and stop the exam if they see the examinee stealing questions or other major issues. MonitorEDU is a great example of this.

How Can I Get Started?

Need help implementing some of these measures? Or just want to talk through the possibilities? Email ASC at solutions@assess.com.

Translated from the blog post written by Dr. Nathan Thompson.

Nathan Thompson earned his PhD in psychometrics from the University of Minnesota, with a focus on computerized adaptive testing. His undergraduate degree was from Luther College, with a triple major in Mathematics, Psychology, and Latin. He is primarily interested in the use of AI and software automation to augment and replace the work done by psychometricians, which has given him extensive experience in software design and programming. Dr. Thompson has published over 100 journal articles and conference presentations, but his favorite remains https://pareonline.net/getvn.asp?v=16&n=1.

Online proctoring has been around for more than a decade. But given the recent COVID-19 outbreak, educational and workforce/certification institutions are scrambling to change their operations, and a big part of that is an incredible increase in online proctoring. This blog post is intended to provide an overview of the online proctoring industry for someone who is new to the topic, or who is just starting to shop and is overwhelmed by all the options out there.

Online Proctoring: Two Distinct Markets

First of all, I would describe the online proctoring industry as falling into two distinct markets, so the first step is to determine which of them fits your organization.

1. Larger-scale, lower-cost (at scale), lower-security systems designed to be used only as an add-on to major LMS platforms such as Blackboard or Canvas. These online proctoring systems are therefore designed for medium-stakes exams, such as an Introduction to Psychology midterm at a university.

2. Smaller-scale, higher-cost, higher-security systems designed to be used with standalone assessment platforms. These are generally for higher-stakes exams such as certification or workforce testing, or perhaps for special use at universities such as Admissions and Placement exams.

How do you tell the difference? The first type will advertise easy integration with systems like Blackboard or Canvas as a key feature. They will also often focus on AI review of video rather than real-time humans. Another key consideration is to look at the existing customer base, which is usually advertised.

Other Ways Online Proctoring Systems Can Differ

AI vs. humans: Some systems rely exclusively on artificial intelligence algorithms to flag video recordings of examinees. Other systems use real humans.

Record-and-review vs. real-time humans: If humans are used, there are two approaches. First, it can be live and real-time, meaning there is a human on the other end of the video who can confirm identity before allowing the test to begin, and stop the test if there is illicit activity. Record-and-review instead records the session, and a human checks it within 24 to 48 hours. This is more flexible, but you cannot stop the test if someone is stealing content; you probably won't know until the next day.

Screen capture: Some online proctoring vendors offer the option to record/stream the screen as well as the webcam. Some also offer the option of doing only this (no webcam) for lower-stakes exams.

Mobile phone as a third camera: Some newer platforms offer the option of easily integrating the examinee's mobile phone as a third camera, effectively functioning like a human proctor. Examinees are instructed to use the video to show under the desk, behind the monitor, and so on before starting the exam. They can then be instructed to place the phone 2 meters away with a clear view of the entire room while the test is taken.

Using your own proctors: Some online proctoring systems allow you to use your own staff as proctors, which is especially useful if the test is delivered in a narrow time window. If it is delivered continuously 24×7 throughout the year, you probably want to use the vendor's highly trained staff.

API integrations: Some systems require software developers to set up an API integration with your LMS or assessment platform. Others are more flexible: you can log in yourself, upload a list of examinees, and be ready to test.

On-demand vs. scheduled: Some platforms require a time slot to be scheduled for examinees to take the test. Others are purely on-demand, and the examinee can show up whenever they are ready. MonitorEDU is a great example of this: examinees show up at any time, present their ID to a human in real time, and then start the test immediately, with no downloads/installs, no system checks, no API integrations, nothing.

More Security: A Better Test Delivery System

A good test delivery platform will also come with its own functionality to improve test security: randomization, automated item generation, computerized adaptive testing, linear on-the-fly testing, professional item banking, item response theory scoring, scaled scoring, psychometric analytics, equating, lockdown delivery, and more. In the context of online proctoring, perhaps the most notable is lockdown delivery, where the test completely takes over the examinee's computer so that it cannot be used for anything else until the test is finished.

LMS platforms rarely include this functionality, because it is not needed for an Introduction to Psychology midterm. However, much is at stake in most real-world assessments (university admissions, certifications, employee hiring, etc.), and those tests depend heavily on such functionality. Nor is it just custom or tradition: such methods are considered essential under international standards, including AERA/APA/NCME, ITC, and NCCA.

ASC's Online Proctoring Partners

ASC provides its clients a ready-to-use solution because it partners with some of the leaders in this space, including MonitorEDU, ProctorExam, Examity, and Proctor360. Learn more on our website about that functionality, and about the concept of configurable test security.

Translated from the blog post written by Dr. Nathan Thompson.


The Test Information Function is a concept from item response theory (IRT) that is designed to evaluate how well an assessment differentiates examinees, and at what ranges of ability. For example, we might expect an exam composed of difficult items to do a great job in differentiating top examinees, but it is worthless for the lower half of examinees because they will be so confused and lost. The reverse is true of an easy test; it doesn’t do any good for top examinees. The test information function quantifies this, and has a number of other important applications and interpretations.

Test Information Function: How To Calculate It

The test information function is not something you can calculate by hand. First, you need to estimate item-level IRT parameters, which define the item response function. The only way to do this is with specialized software; there are a few options in the market, but we recommend Xcalibre. Next, the item response function is converted to an item information function for each item. The item information functions can then be summed into a test information function. Lastly, the test information function is often inverted into the conditional standard error of measurement function, which is extremely useful in test design and evaluation.

IRT Item Parameters

Software like Xcalibre will estimate a set of item parameters. Which parameters you use depends on the item types and other aspects of your assessment. For this example, let's use the 3-parameter model, which estimates a, b, and c, with a small test of 5 items. These are ordered by difficulty: Item 1 is very easy and Item 5 is very hard.

Item    a       b       c
1       1.00   -2.00    0.20
2       0.70   -1.00    0.40
3       0.40    0.00    0.30
4       0.80    1.00    0.00
5       1.20    2.00    0.25

Item Response Function

The item response function uses the IRT equation to convert the parameters into a curve. The purpose of the item parameters is to fit this curve for each item, like a regression model, to describe how it performs. Here are the response functions for those 5 items. Note the scale on the x-axis, similar to the bell curve, with the easy items to the left and hard ones to the right.

[Figure: Item response functions for the 5 items]
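These curves come from the standard 3PL equation, P(theta) = c + (1 - c) / (1 + exp(-a(theta - b))), applied to the parameters in the table. A minimal sketch:

```python
import numpy as np

def irf_3pl(theta, a, b, c):
    """3PL item response function: probability of a correct response at ability theta."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# The five items from the table above: (a, b, c), ordered easy to hard.
items = [(1.00, -2.0, 0.20), (0.70, -1.0, 0.40), (0.40, 0.0, 0.30),
         (0.80, 1.0, 0.00), (1.20, 2.0, 0.25)]

theta = np.linspace(-3, 3, 61)  # the ability scale on the x-axis
for i, (a, b, c) in enumerate(items, start=1):
    curve = irf_3pl(theta, a, b, c)  # these values trace the plotted curve
    print(f"Item {i}: P(correct | theta=0) = {irf_3pl(0.0, a, b, c):.2f}")
```

An average examinee (theta = 0) almost certainly answers easy Item 1 correctly, but has a much lower chance on hard Item 5, never dropping below its guessing floor c.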

Item Information Function

The item information function is based on the derivative (slope) of the item response function: an item provides more information about examinees in the range where its response function is steepest. For example, consider Item 5: it is difficult, so it is not very useful for examinees in the bottom half of the ability range. The slope of the Item 5 IRF is nearly 0 for that entire range, which means its information function is also nearly 0 there.

[Figure: Item information functions]

Test Information Function

The test information function then sums the item information functions, summarizing where the test provides information. If you imagine adding the graphs above, you can picture humps near the bottom and top of the range, where the most prominent IIFs sit. This is indeed what happens.

[Figure: Test information function]

Conditional Standard Error of Measurement Function

The test information function can be inverted into an estimate of the conditional standard error of measurement: SEM(theta) = 1 / sqrt(TIF(theta)). What do we mean by conditional? If you are familiar with classical test theory, you know that it estimates the same standard error of measurement for everyone who takes the test. But given the concepts above, that is an unreasonable expectation. If a test has only difficult items, it measures top students well and lower students poorly, so why should we say their scores are equally accurate? The conditional standard error of measurement turns the error into a function of ability. Also, note that it refers to the theta scale, not the number-correct scale.

[Figure: Conditional standard error of measurement function]
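Putting the pieces together, the TIF is the sum of the five IIFs, and the conditional SEM is its inverse square root. A short sketch using the parameters from the table:

```python
import numpy as np

def irf_3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def iif_3pl(theta, a, b, c):
    """3PL item information function."""
    p = irf_3pl(theta, a, b, c)
    return (a ** 2) * ((p - c) / (1.0 - c)) ** 2 * ((1.0 - p) / p)

items = [(1.00, -2.0, 0.20), (0.70, -1.0, 0.40), (0.40, 0.0, 0.30),
         (0.80, 1.0, 0.00), (1.20, 2.0, 0.25)]

theta = np.linspace(-3, 3, 7)
tif = sum(iif_3pl(theta, a, b, c) for a, b, c in items)  # test information
csem = 1.0 / np.sqrt(tif)                                # conditional SEM

for t, info, se in zip(theta, tif, csem):
    print(f"theta = {t:+.0f}: TIF = {info:.3f}, CSEM = {se:.3f}")
```

Where the test provides more information, the conditional standard error is smaller, which is exactly the inversion the figure shows.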

How can I implement all this?

For starters, I recommend delving deeper into an item response theory book. My favorite is Item Response Theory for Psychologists by Embretson and Reise. Next, you need some item response theory software. Xcalibre can be downloaded as a free version for learning purposes, and is the easiest program to learn how to use (no 1980s-style command code… how is that still a thing?). But if you are an R fan, there are plenty of resources in that community as well.

Tell me again: Why are we doing this?

The purpose of all this is to effectively model how items and tests work, namely how they interact with examinees. This then allows us to evaluate their performance so that we can improve them, thereby enhancing reliability and validity. Classical test theory had a number of shortcomings in this endeavor, which led to IRT being invented. IRT also facilitates some modern approaches to assessment, such as linear on the fly testing, adaptive testing, and multistage testing.