The “Opt-Out of Educational Assessment” Movement: Just What Exactly Do They Stand For?

I recently read a disturbing article in the New York Times regarding the “Opt-Out of Educational Assessment” movement in the State of New York. I find this downright appalling. There are many sorts of standardized assessments, and the ones in question here exist to improve the quality of education. They check whether students are learning the curriculum, thereby providing a measure of accountability, as well as an index of improvement, to teachers, schools, districts, and States. They are used to help students, not hinder them. By saying that you do not want your children involved in this, you are saying that you don’t want any accountability, and that you do not really care about education at an aggregate level. You are content to just send your kids to school and hope that the teacher teaches them something over the course of the year.

It’s also interesting that the article attributes the movement, in part, to the fact that test scores “plummeted” after the introduction of Common Core. The author here is clearly biased and/or uninformed. Why were scores lower? Because States were no longer allowed to doctor their data. Under No Child Left Behind, States were required to assess students but could set the cutscores wherever they wanted. So the backwoods States obviously set the bar quite low and reported that large numbers of their students were proficient. They were also free to adjust their curriculum, i.e., teach 4th grade math in 5th grade so that, wow… the 5th graders seem to do well on math! The move to Common Core was in part to prevent these two schemes, and of course such States saw far fewer 5th graders score as proficient once they were tested on actual 5th grade math, against a standard similar to that of other States. The Opt-Out of Educational Assessment movement is fighting this.

What is happening here, then, is that people are blaming the test purely out of reactionary tendency. An analogy: suppose you are on a weight loss program and have for years intentionally miscalibrated your bathroom scale to read 10 pounds low. You get a fitness coach who calls you out on it and makes you calibrate the scale correctly, and then actually weigh yourself once a week to see whether you are losing weight. So you blame 1) the scale, and 2) your fitness coach? That is probably not going to help improve your fitness. Or the education system. But go ahead and opt out of weighing yourself.

Nathan Thompson, PhD

CEO at Assessment Systems
Nathan Thompson, PhD, is CEO and Co-Founder of Assessment Systems Corporation (ASC). He is a psychometrician, software developer, author, and researcher, and evangelist for AI and automation. His mission is to elevate the profession of psychometrics by using software to automate psychometric work like item review, job analysis, and Angoff studies, so we can focus on more innovative work. His core goal is to improve assessment throughout the world. Nate was originally trained as a psychometrician, with an honors degree at Luther College with a triple major of Math/Psych/Latin, and then a PhD in Psychometrics at the University of Minnesota. He then worked multiple roles in the testing industry, including item writer, test development manager, essay test marker, consulting psychometrician, software developer, project manager, and business leader. He is also cofounder and Membership Director at the International Association for Computerized Adaptive Testing (iacat.org). He's published 100+ papers and presentations, but his favorite remains https://scholarworks.umass.edu/pare/vol16/iss1/1/.
