Happy birthday to us! We’re 40 years old! To celebrate our 40th birthday, we thought we’d do something big. But before we cheers to the Big 4-0, we want to travel back to 1979. 

In the ’70s, Dr. David J. Weiss and his team at the University of Minnesota were tasked with designing and developing the first Computerized Adaptive Test (CAT) for the U.S. Department of Defense. Dr. Weiss, whom we lovingly refer to as the Grandfather of Computerized Adaptive Testing, brought this innovation to life. CAT revolutionized testing by making it smarter, faster, and fairer. Ensuring tests are smart, fast, and fair continues to be our mission at Assessment Systems.

40 years later, Dr. Weiss, Dr. Nathan Thompson, and the rest of us here at Assessment Systems continue to push the field of psychometrics forward. To celebrate the last 40 years, we want to share two big Assessment Systems updates. 

Have You Met Ada?

Ada is AI-Driven Assessment and the future of psychometrics. We’ve been working hard on Ada for months and we’re ready for you to check it out. 

Ada is built by assessment professionals for assessment professionals. Ada incorporates a number of assessment necessities: Item Authoring, Item Banking, 50+ Item Types, and Examinee Management, to name a few. Plus some things you’ve never seen before, like Automated Item Generation. Long story short, Ada has been designed specifically to guide you along best practices and sound psychometrics, enabling you to make strong, defensible, and engaging assessments.

Ada is currently in its most basic form; we’re in the process of building additional powerful features and microservices. Click here to get an early look and start using Ada.

40 Has Never Looked This Good

After 40 years, we’ve decided to update our visual identity! This includes a new logo, brand colors, fonts, and a new website. You’ll see these changes on our website, social sites like LinkedIn, and our products.

Assess logos through the years

Our new logo is an evolution of the logos that precede it. Anchored by an interpretation of the IRT curve in orange, it is a more modern, simplified, and accessible visualization of who we are and what we do: we assess. As a data-driven company, we are relentless about using data and assessment internally, and passionate about empowering our partners, through our products and consulting, to assess with precision and ease.

We’re proud to release Ada and our new visual identity into the world. We’ve spent 40 years innovating and building stronger and more efficient assessment tools, and we’re looking forward to seeing what we will accomplish in the next 40 years. Thank you for being part of the Assess community. Whether you just read the blogs or you’ve been a partner for years, we’re grateful to celebrate our 40th birthday with you. 

Assessment Systems, international leaders in psychometrics and assessment software, has completed the Service Organization Controls (SOC) 2 Type 2 audit, reaffirming its commitment to data security for its employees and client partners.

Assessment Systems completed the SOC 2 audit in partnership with A-Lign, which applies the criteria defined by the American Institute of Certified Public Accountants (AICPA). The audit examined multiple components by which Assessment Systems provides its services, including: infrastructure (physical, IT, and hardware); software; people (governance and operations); processes (automated and manual); and data (transaction streams, files, databases, and outputs used or processed by a system).

“Our team drove this initiative as an effort to ensure we’re always at our best for our partners. We’re incredibly proud to complete the SOC 2 Type 2 audit,” said Dave Saben, President and CEO at Assessment Systems. “As we continue to develop software and programs that elevate psychometrics and assessment, it is paramount to secure all aspects of our organization and do the right thing for our partners.”

In addition to successfully completing the SOC 2 Type 2 audit, Assessment Systems is proud to continuously innovate and develop features and enhancements for its flagship products – FastTest, Certifior, Iteman, and Xcalibre – ahead of the anticipated release of its new assessment platform, āda. āda is AI-Driven Assessment and the future of psychometrics and assessment. āda represents Assessment Systems’ commitment, as international leaders in psychometrics, assessment, and certification programs, to continuously developing and delivering smarter, faster, and fairer assessments. Click here to stay up to date with āda developments. To request a copy of the full SOC 2 audit report, email HR@54.89.150.95.

MINNEAPOLIS, MN, September 7, 2018 – Assessment Systems, global leaders in psychometrics and assessment software, has added the National Institute for Automotive Service Excellence (ASE) to its growing list of valued partners.

Since 1972, ASE has worked to elevate the quality of vehicle repair and service by assessing and certifying automotive professionals. This partnership combines the power and sophistication of Assessment Systems’ flagship products – FastTest, Iteman, and Xcalibre – with ASE’s long-standing and renowned certification in the automotive industry.

“Our values align,” said Cassandra Bettenberg, Executive Director of Strategic Partnerships at Assessment Systems. “Like Assessment Systems, ASE values psychometrics and wants to develop and deliver more valid and reliable exams.”

As Assessment Systems continues to diversify its list of partners across industries, it continues to improve its best-in-class assessment technology and its new assessment platform, Ada. Assessment Systems recently earned a spot on the Inc. 5000 – Inc. Magazine’s list of America’s Fastest-Growing Private Companies – for the second year in a row.

About The National Institute for Automotive Service Excellence
The National Institute for Automotive Service Excellence was established in 1972 as a non-profit organization to help improve the quality of automotive service and repair through the voluntary testing and certification of automotive technicians and parts specialists. Today, there are nearly 400,000 ASE-certified professionals at work in dealerships, independent shops, collision repair shops, auto parts stores, fleets, schools and colleges throughout the country.

About Assessment Systems Corporation
Assessment Systems is the trusted provider of high-stakes assessment and psychometric services for over 250 partners worldwide, delivering over 2,000,000 assessments every year. Powered by decades of research in psychometrics, Assessment Systems offers best-in-class software platforms and consulting services to support high-quality measurement and completely scalable solutions. Assessment Systems’ success is driven by a commitment to make assessments smarter, faster, and fairer to ensure bad tests don’t hurt good people.

###

MINNEAPOLIS, MN, September 5, 2018 – Assessment Systems, industry leaders in psychometrics and assessment software, recently added The BARBRI Group to its growing list of partners.

BARBRI is the leader in preparing future lawyers for the most important exam of their lives, the Bar Exam. BARBRI has partnered with Assessment Systems and its suite of powerful high-stakes testing software to deliver a new formative assessment program that will help law professors make better and faster exams for future lawyers.

As Assessment Systems continues to build its list of partners around the globe, it continues to improve its best-in-class assessment technology and its flagship products – FastTest, Certifior, Iteman, and Xcalibre – ahead of the anticipated release of its new assessment platform, Ada. Assessment Systems recently earned a spot on the Inc. 5000 – Inc. Magazine’s list of America’s Fastest-Growing Private Companies – for the second year in a row.

About The BARBRI Group
The BARBRI Group companies meet the legal education, cyber skills education and specialized training needs of law students, attorneys, IT students and other professionals throughout their careers. The companies offer a comprehensive portfolio of learning solutions for higher education institutions and law- and finance-related businesses. At the core of The BARBRI Group Companies is BARBRI Bar Review, which has helped more than 1.3 million lawyers around the world pass a U.S. bar exam. The BARBRI Group, founded in 1967, is a Leeds Equity Partners portfolio company headquartered in Dallas with offices throughout the United States and around the world.

About Assessment Systems Corporation
Assessment Systems is the trusted provider of high-stakes assessment and psychometric services for over 250 partners worldwide, delivering over 2,000,000 assessments every year. Powered by decades of research in psychometrics, Assessment Systems offers best-in-class software platforms and consulting services to support high-quality measurement and completely scalable solutions. Assessment Systems’ success is driven by a commitment to make assessments smarter, faster, and fairer to ensure bad tests don’t hurt good people.

###

The traditional Learning Management System (LMS) is designed to serve as a portal between educators and their learners. Platforms like Moodle are successful in facilitating cooperative online learning in a number of groundbreaking ways: course management, interactive discussion boards, assignment submissions, and delivery of learning content. While all of this is great, we’ve yet to see an LMS that implements best practices in assessment and psychometrics to ensure that medium- or high-stakes tests meet international standards.

To put it bluntly, LMS platforms have assessment functionality that is usually good enough for short classroom quizzes but falls far short of what is required for a test that is used to award a credential. A white paper on this topic is available here, but some examples include:

  • Treatment of items as reusable objects
  • Item metadata and historical use
  • Collaborative item review and versioning
  • Test assembly based on psychometrics
  • Psychometric forensics to search for non-independent test-taking behavior
  • Deeper score reporting and analytics

Assessment Systems is pleased to announce the launch of an easy-to-use bridge between FastTest and Moodle that allows users to seamlessly deliver sound assessments from within Moodle while taking advantage of the sophisticated test development and psychometric tools available within FastTest. In addition to seamless delivery for learners, all candidate information is transferred to FastTest, eliminating the examinee import process. The bridge makes use of the international Learning Tools Interoperability (LTI) standard.

If you are already a FastTest user, you can find a step-by-step tutorial on establishing the connection in the FastTest User Manual: log into your FastTest workspace and select Manual in the upper right-hand corner. The guide is in Appendix N.

If you are not yet a FastTest user and would like to discuss how FastTest can improve your assessments while still letting you leverage Moodle or another LMS for learning content, sign up for a free account here.

Desperation is seldom fun to see.

Some years ago, shortly after we released our online marking functionality, I was reviewing it in a customer workspace when I was intrigued to see “Beyonce??” mentioned in a marker’s comments on an essay. The student’s essay was evaluating some poetry and had completely misunderstood the use of metaphor in the poem in question. The student also clearly knew that her interpretation was way off, but didn’t know how, and had reached the end of her patience. So after a desultory attempt at answering, with a cry from the heart reminiscent of William Wallace’s call for freedom, she wrote “BEYONCE” with about seventeen exclamation points. It felt good to see that her spirit was not broken, and it was a moment of empathy that drove home the damage that standardized tests are inflicting on our students. That vignette plays itself out millions of times each year in this country; the following explains why.

What are “Standardized Tests”?

We use standardized tests for a variety of reasons, but underlying every reason (curriculum effectiveness, college/career preparedness, teacher effectiveness, etc.) is the understanding that the test measures what a student has learned. To know how all our students are doing, we give them all essentially the same set of tests; that is what makes a test “standardized.” This is a difficult endeavor given the wide range of students and number of tests, and it raises the question: how do we do this reliably and in a reasonable amount of time?

Accuracy and Difficulty vs. Length

We all want tests to reliably measure students’ learning. To make a test reliable, we need to supply questions of varying difficulty, from very easy to very difficult, to cover a wide range of abilities. To keep the test short, most of the questions fall in the medium-easy to medium-difficult range, because that is where most students’ ability levels fall. So the test that best balances length and accuracy for the whole population is constructed such that the number of questions at any difficulty is proportionate to the number of students at that ability.

Why are most questions in the medium difficulty range? Imagine creating a test to measure 10th graders’ math ability. A small number of the students might have a couple of years of calculus. If the test covered those topics, imagine the experience of most students, who would often not even understand the notation in the question. Frustrating, right? On the other hand, if the test were also constructed to measure students with only rudimentary math knowledge, average to advanced students would be frustrated and bored from answering a lot of questions on basic math facts. The solution most organizations use is to present only a few questions that are really easy or really difficult, and accept that scores are not as accurate as they would prefer for students at either end of the ability range.
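
Item response theory (IRT), the framework behind modern test design, makes this trade-off concrete: a question tells you the most about an examinee when its difficulty matches their ability, and tells you very little when it is far too easy or far too hard. Here is a minimal sketch under the Rasch (one-parameter logistic) model; the function names are illustrative, not from any particular library:

```python
import math

def rasch_probability(theta, b):
    """Probability of a correct answer under the Rasch (1PL) model,
    given examinee ability theta and item difficulty b (logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information for a Rasch item: p * (1 - p),
    which peaks when difficulty matches ability (p = 0.5)."""
    p = rasch_probability(theta, b)
    return p * (1.0 - p)

# An item matched to the examinee is far more informative than one
# that is much too hard for them.
print(item_information(0.0, 0.0))  # matched item: prints 0.25, the maximum
print(item_information(0.0, 3.0))  # far too difficult: ~0.045
```

Every question far from a student’s level contributes almost nothing to measuring them, which is exactly why a fixed medium-difficulty test serves the middle of the curve well and both tails poorly.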

These Tests are Inaccurate and Mean-Spirited

The problem is that while this might work OK for a lot of kids, it exacts a pretty heavy toll on others. Almost one in five students will not know the answer to 80% of the questions on these tests, and scoring about 20% on a test certainly feels like failing. It feels like failing every time a student takes such a test. Over the course of an academic career, students in the bottom quintile will guess on or skip 10,000 questions. That is 10,000 times the student is told that school, learning, or success is not for them. Even biasing the test to be easier only makes a slight improvement.

Test performance across the bell curve. The shaded area represents students who will miss at least 80% of questions.

It isn’t necessarily better for the top students, whose every testing experience assures them that they are already very successful, when the reality is that they are likely being outperformed by a significant percentage of their future colleagues.

In other words, at both ends of the Bell Curve, we are serving our students very poorly, inadvertently encouraging lower performing students to give up (there is some evidence that the two correlate) and higher performing students to take it easy. It is no wonder that people dislike standardized tests.

There is a Solution

A computerized adaptive test (CAT) solves all the problems outlined above. Properly constructed, a CAT makes testing faster, fairer, and more valid:

  • Every examinee completes the test in less time (fast)
  • Every examinee gets a more accurate score (valid)
  • Every examinee receives questions tuned to their ability, so they get about half right (fair)
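
The mechanism behind those three properties is simple: after each response, the test updates its estimate of the examinee’s ability and picks the next question to match it. Here is a toy sketch, with simulated responses and a crude step-size ability update rather than the maximum-likelihood or Bayesian scoring a production CAT would use; all names are illustrative:

```python
import math
import random

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def adaptive_test(bank, true_theta, n_items=10, seed=1):
    """Toy CAT: repeatedly administer the unused item whose difficulty
    is closest to the current ability estimate, then nudge the estimate
    up or down based on the (simulated) response."""
    rng = random.Random(seed)
    remaining = list(bank)
    theta, step = 0.0, 1.0
    for _ in range(n_items):
        item = min(remaining, key=lambda b: abs(b - theta))  # most informative item
        remaining.remove(item)
        correct = rng.random() < rasch_p(true_theta, item)   # simulate the answer
        theta += step if correct else -step
        step = max(step * 0.7, 0.2)  # shrink steps as evidence accumulates
    return theta

bank = [d / 10.0 for d in range(-30, 31)]  # item difficulties from -3.0 to +3.0
estimate = adaptive_test(bank, true_theta=1.5)
```

With a reasonably deep item pool, the estimate homes in near the true ability after only a handful of questions, and every examinee, wherever they sit on the curve, answers roughly half the items correctly.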

Given all the advantages of CAT, it may seem hard to believe that it is not used more often. While it is starting to catch on, adoption is not fast enough given the heavy toll that the old methods exact on our students. It is true that few testing providers can deliver CATs, but that is no excuse. If a standardized test is delivered to as few as 500 students, it can be made adaptive. It probably isn’t, but it could be. All that is needed are computers or tablets, an Internet connection, and some effort. We should expect more.

How can my organization implement CAT?

While CAT used to be feasible only for large organizations that tested hundreds of thousands or millions of examinees per year, a number of advances have changed this landscape. If you’d like to do something about your test, it might be worthwhile for you to evaluate CAT. We can help you with that evaluation; if you’d like to chat, here is a link to schedule a meeting. If you’d like to discuss the math or related ideas, please drop me a note.

One of the best aspects of my position is the opportunity to travel the world and talk with many experts about psychometrics and educational assessment.  In December 2017, I was lucky enough to travel to Monterrey, Mexico, with the dual purpose of a conference and a Psychometrics Seminar.  It was an exciting week that taught me a lot about education in Latin America.

The first half of the week was the Congreso Internacional de Innovacion Educativa, the premier conference in educational technology in Latin America, with more than 3,000 attendees. I had the opportunity to present on a project we are doing with Tecnologico de Monterrey to implement adaptive testing in admissions exams. Tecnologico is the #2 university in Mexico, and therefore a leader in all of Latin America, as well as being the host of the conference. In addition, I heard a number of interesting talks, including one by the CEO of Coursera on the role of MOOCs in higher education. I’m personally a MOOC addict, though I tend to start new courses far faster than I ever finish one.

 

adaptive testing seminar

The second half of the week was the 2017 ASC Learning Summit, a two-day seminar to teach item response theory and adaptive testing to anyone who wanted to learn. We had 22 attendees, which was extremely successful given the short notice. (Tec had to move the entire conference from Mexico City to Monterrey after an earthquake.) We had 18 from Mexico, two from the US, one from Barbados, and one from the UK. They included university professors, K12 assessment professionals, higher education admissions staff, certification test psychometricians, and more.

If time allows, I hope to record my presentations from that summit as a series of webinars or videos, so join our mailing list to stay tuned if you aren’t already on it.  In addition, I’ll be holding similar seminars in the future.  If you are interested in having me visit your country for such an event, get in touch with me at nthompson@54.89.150.95.