The California Department of Human Resources (CalHR, calhr.ca.gov/) has selected Assessment Systems Corporation (ASC, assess.com) as its vendor for an online assessment platform. CalHR is responsible for the personnel selection and hiring of many job roles for the State, and delivers hundreds of thousands of tests per year to job applicants. CalHR seeks to migrate to a modern cloud-based platform that allows it to manage large item banks, quickly publish new test forms, and deliver large-scale assessments that align with modern psychometrics like item response theory (IRT) and computerized adaptive testing (CAT).

ASC’s landmark assessment platform Assess.ai was selected as a solution for this project. ASC has been providing computerized assessment platforms with modern psychometric capabilities since the 1980s, and released Assess.ai in 2019 as a successor to its industry-leading platform FastTest. It includes modules for item authoring, item review, automated item generation, test publishing, online delivery, and automated psychometric reporting.

 


Nathan Thompson, Ph.D., was recently invited to talk about ASC and the future of educational assessment on the EdNorth EdTech Podcast.

EdNorth is an association dedicated to continuing the long history of innovation in educational technology that has been rooted in the Twin Cities of Minnesota (Minneapolis / Saint Paul). Click below to listen online, or find it on Apple or other podcast aggregators.

Dr. Thompson discusses the history of ASC, ASC’s mission to improve assessment with quality psychometrics, and how AI and automation are being used more often – even though they’ve been part of the psychometrics field for a century.

Thank you to Dave Swerdlick and the team at EdNorth for the opportunity to speak!


The Covid-19 pandemic caused a radical shift in the education sector, which has certainly impacted EdTech trends. It accelerated the adoption of education technology solutions across K-12, eLearning, and higher education. Trends and technologies that dominated the marketplace in 2019, including video-based learning, AI, machine learning, digital assessments, and learning analytics, are expected to take a new turn in 2021.

To give you a glimpse into the future of education post-pandemic, we have prepared 10 EdTech trends and predictions to watch out for in 2021. What will happen to the education technology marketplace? Which trends and technologies will disrupt education as we know it?

Let’s find out. 

Important EdTech Stats To Note:

  • 86% of educators say technology is the core of learning
  • EdTech expenditure is expected to hit $404 billion by 2025, growing at a 16% CAGR from 2019 (see the quick compounding check after this list)
  • Use of technology for engagement is on the rise (84% of educators support the idea)
  • Global investment in education technology is on the rise, totaling $10 billion in 2020 and expected to reach $87 billion over the next decade.
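
As a quick sanity check on that compounding claim (my own back-of-the-envelope arithmetic, not a figure from the cited reports): growing at 16% per year over the six years from 2019 to 2025, a $404 billion figure in 2025 implies a 2019 base of roughly $166 billion.

```python
# Back-of-the-envelope check (assumption: 16% annual growth, 2019-2025).
# What 2019 market size does a $404B 2025 figure imply?
base_2019 = 404 / (1.16 ** 6)   # divide out six years of compounding
print(f"implied 2019 market size: ${base_2019:.0f}B")   # ~$166B
```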
  1. E-Learning

E-learning is the biggest EdTech trend in 2021. Its scalability and cost-effectiveness have made this model the go-to solution for students all over the world, especially during the pandemic. Recent research also shows that eLearning improves retention rates by 25-60%, which makes it a better deal for students.

The industry is booming and is expected to hit the $375 billion mark by 2026. 

To capitalize on this trend, many institutions have started offering degrees online. Platforms such as Coursera have partnered with the world’s greatest universities to provide short courses, certifications, and bachelor’s degrees online at lower costs.

Other platforms offering eLearning services include Lessonly, Udemy, Masterclass, and Skillshare. This trend is not likely to slow down post-Covid, but there are barriers that must be eliminated to accelerate its adoption.

  2. Digital Assessments Become A Necessity

Assessments are a vital part of the learning process, as they gauge the effectiveness of learning methods. Over the past few years, online tests have been replacing traditional testing methods, which mostly involve pen and paper. Some of the main advantages of digital assessments include:

  • Reliability
  • Cost-effectiveness
  • Flexibility
  • Increased security and efficiency
  • Equal opportunity for students with disabilities, since online testing can support technologies such as speech-to-text
  • Immediate results, which reduce stress and anxiety among students
  • Learning analytics that help improve assessment quality

Online exams have made the testing process more effective by bringing technologies and strategies such as adaptive testing, remote proctoring, and learning analytics into the equation.

This has helped students improve retention rates and exam developers enhance their testing strategies. 

Cheating, which has been a major concern in assessment, has also been reduced by online techniques such as lockdown browsers and time windows.

Digital assessments are a must-have for all institutions looking to capitalize on the advantages of online testing in 2021.

  3. Extended Reality (XR)

Figure: Virtual reality learning retention (Source: FrontCore)

Virtual Reality and Augmented Reality are revolutionizing the education sector by changing the way students receive and process information. 2021 will witness increased adoption of XR technologies and solutions across education and corporate learning environments.

Extended Reality technologies are improving student retention rates and learning experiences by creating immersive worlds where students can learn new skills and visualize concepts in an interactive way.

According to research, Virtual Reality has higher retention rates compared to other learning methods such as lecturing, demonstration, and audio-visuals.

Some other ways in which XR is improving the education sector are offering cost-effective and interesting field trips, enhancing lifelong learning, and transforming hands-on learning. 

  4. Chatbots In Higher Education

Chatbots took the sales and marketing industry by storm over the past few years and are finding their way into the education sector. During the Covid-19 pandemic, many higher education institutions integrated chatbots into their systems to help in the automation of tasks. 

Tasks that can be automated include lead generation for courses and resolution of student queries on pressing issues such as fees. This improves cost-effectiveness in institutional administration.

This EdTech trend will be integrated into more institutions in 2021.

  5. Video-Based Learning At The Forefront

Video has become king in learning and entertainment, and 2021 will see an exponential increase in video consumption. For learning in particular, the rapid growth of internet-connected devices and new innovations such as smart TVs will accelerate the adoption of video-based learning.

There are many advantages to video-based learning, including cost-friendliness, higher student retention rates, and support for self-paced learning. Its reliability will drive higher education institutions to invest in it.

Using video-based learning is not a complex procedure, especially because there are many tools and platforms online that can help with the process. The cost of producing high-quality videos and lectures has also dropped.

  6. Artificial Intelligence (AI)

Artificial Intelligence (AI) has been at the forefront of pioneering innovations that drive humanity to the new ages of civilization. In relation to education, AI has played a big role in enhancing learning experiences, improving assessment practices and so much more! 

In 2021, AI will make headlines as it will be integrated into important aspects of higher education such as campus security, student performance evaluation, and so much more! 

Facial recognition, a component of AI, can be used to track student attendance, secure campus premises, and deter cheating during online testing.

AI also plays a vital part in student evaluation and in improving the assessment process by powering technologies such as adaptive testing. Adaptive testing improves the testing process by tailoring each assessment to the individual student based on their demonstrated ability.
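
To illustrate the adaptive idea in the simplest possible terms, here is a toy staircase rule (my own sketch, not how production systems work): difficulty moves up after a correct answer and down after an incorrect one. Real adaptive tests replace this with item response theory-based selection.

```python
# Toy "staircase" adaptivity sketch (illustrative only; real CATs use IRT).
def next_difficulty(current, answered_correctly, step=1, lo=1, hi=10):
    """Return the difficulty level (1-10) for the next question."""
    if answered_correctly:
        return min(hi, current + step)   # correct answer -> harder question
    return max(lo, current - step)       # incorrect answer -> easier question

level = 5                                # start every student at medium difficulty
for correct in [True, True, False, True, False]:   # one student's responses
    level = next_difficulty(level, correct)
    print(f"next question difficulty: {level}")
```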

The power of AI is unlimited and there is no predicting what 2021 will bring to the education industry.

I guess we will have to wait and see what it throws at us. 

  7. Big Data and Learning Analytics


Learning analytics is another trend poised to disrupt the education industry. Learning analytics is a set of technologies and strategies used to empower educators, track learning processes, and help instructors make data-driven decisions.

Some components of learning analytics include artificial intelligence, visualizations, machine learning, statistics, personalized adaptive learning, and educational data mining.

As we all know, data runs everything in the information era, and whoever has the best data analytics model wins. In education, analytics has played an important role in shaping instructional design and assessment processes for the better.

2021 will see increased adoption of analytics tools into Learning and Development.

Here are some of the ways learning analytics will shape education for the better:

  • Behavior and performance prediction – Using predictive analytics, instructors can predict learners’ performance based on their present and past results (a tiny prediction sketch follows this list).
  • Increased retention rates – With a clear picture of what students are good at and where they struggle (identifiable by analyzing their digital assessment results), one can shape the learning process to increase retention rates.
  • Support for personalized learning – This is the hot cake of learning analytics. Using data analytics, instructors can create personalized learning experiences for different students based on several metrics.
  • Increased cost-efficiency – Education and training are expensive, and learning analytics can help improve the quality of learning outcomes while cutting unnecessary costs.
  • Support for defining learning strategy – By understanding what you need to achieve and the purpose of your goals, you can use data analytics to create a roadmap for the process.
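
Here is the prediction bullet made concrete: a minimal sketch using toy data and a plain linear trend (my own illustration, not any vendor’s actual model) to forecast a learner’s next score from past scores.

```python
# Minimal predictive-analytics sketch: fit a linear trend to a learner's past
# scores and extrapolate one test ahead. Toy data; real systems use richer models.
import numpy as np

past_scores = np.array([62.0, 68.0, 71.0, 75.0])   # scores on tests 1-4
occasions = np.arange(len(past_scores))            # 0, 1, 2, 3
slope, intercept = np.polyfit(occasions, past_scores, deg=1)
predicted_next = slope * len(past_scores) + intercept
print(f"predicted score on test 5: {predicted_next:.1f}")
```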
  8. Gamification

Gamification is not a new concept in education. It has been around for ages, but it’s expected to take a different turn through the integration of game mechanics in learning. Minecraft, the game everybody can’t seem to get enough of, is a good example; it has come in handy for teaching important concepts such as storytelling.

Programming and game design, skills that are important in our digital world, have received a huge boost from games such as Roblox, which attracts more than 100 million players each month.

In 2021, we are likely to see the rise of more gamification strategies in the ed-tech space.

Part 2: Predictions

  9. Accelerated EdTech Investments

Over the past decade, global EdTech venture capital has grown exponentially, from $500 million in 2010 to more than $10 billion in 2020.

As the adoption of digital EdTech solutions rises, this number is expected to grow rapidly in the next decade, with some reports estimating that investments will reach $87 billion by 2030.

With Asia leading the adoption of EdTech, startups from other continents will be looking to up their game to capitalize on opportunities arising from the industry.

  10. EdTech Market Value Will Reach New Heights

The Covid-19 pandemic increased the adoption of education technology solutions, and sector expenditure is expected to reach new heights, hitting $404 billion by 2025.

What began as a short-term fallback for the education sector will become the new norm.

Many schools and colleges have not started their transition to the new models of learning, but most of them are laying out strategies to start the process. Integrating some of the trends discussed above may be a good way to start your journey.

Final Thoughts

The uncertainty in the aftermath of the Covid-19 pandemic makes it hard to predict the EdTech trends that will lead the way in 2021. But we believe that 2021 will be a year of many firsts and unknowns in the education sector. That said, there is immense power in understanding how the trends and predictions discussed above may shape L&D, and how you can capitalize on the opportunities that arise from them.

If you are interested in leveraging the power of digital assessments and psychometric practices for your education and corporate programs, feel free to sign up for access to our free assessment tools or consult with our experienced team for services such as test development, job analysis, implementation services, and so much more!

The traditional Learning Management System (LMS) is designed to serve as a portal between educators and their learners. Platforms like Moodle are successful in facilitating cooperative online learning in a number of groundbreaking ways: course management, interactive discussion boards, assignment submissions, and delivery of learning content. While all of this is great, we’ve yet to see an LMS that implements best practices in assessment and psychometrics to ensure that medium or high stakes tests meet international standards.

To put it bluntly, most LMSs have assessment functionality that is good enough for short classroom quizzes but falls far short of what is required for a test that is used to award a credential.  A white paper on this topic is available here, but some examples include:

  • Treatment of items as reusable objects
  • Item metadata and historical use
  • Collaborative item review and versioning
  • Test assembly based on psychometrics
  • Psychometric forensics to search for non-independent test-taking behavior
  • Deeper score reporting and analytics

Assessment Systems is pleased to announce the launch of an easy-to-use bridge between FastTest and Moodle that will allow users to seamlessly deliver sound assessments from within Moodle while taking advantage of the sophisticated test development and psychometric tools available within FastTest. In addition to seamless delivery for learners, all candidate information is transferred to FastTest, eliminating the examinee import process.  The bridge makes use of the international Learning Tools Interoperability standards.
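
For the technically curious, here is a rough sketch of what an LTI 1.1 launch looks like under the hood. The URL, key, and secret below are placeholders rather than actual FastTest values, and the real bridge handles all of this for you; the sketch uses the Python oauthlib package to produce the OAuth 1.0a signature that LTI 1.1 requires.

```python
# Illustrative sketch of an LTI 1.1 basic launch (placeholder values throughout).
from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

LAUNCH_URL = "https://assessment.example.com/lti/launch"   # placeholder endpoint
CONSUMER_KEY = "example-key"                               # placeholder credentials
SHARED_SECRET = "example-secret"

launch_params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "quiz-42",          # identifies the placement in the LMS
    "user_id": "student-123",               # examinee info passed to the tool
    "lis_person_name_full": "Jane Learner",
}

# LTI 1.1 launches are OAuth 1.0a-signed form POSTs; the signature goes in the body.
client = Client(CONSUMER_KEY, client_secret=SHARED_SECRET,
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    LAUNCH_URL,
    http_method="POST",
    body=launch_params,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(body)  # form-encoded parameters, now including oauth_signature
```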

If you are already a FastTest user, watch a step-by-step tutorial on how to establish the connection, in the FastTest User Manual by logging into your FastTest workspace and selecting Manual in the upper right-hand corner. You’ll find the guide in Appendix N.

If you are not yet a FastTest user and would like to discuss how it can improve your assessments while still allowing you to leverage Moodle or other LMS systems for learning content, sign up for a free account here.

Desperation is seldom fun to see.

Some years ago, having recently released our online marking functionality, I was reviewing a customer workspace when I was intrigued to see “Beyonce??” mentioned in a marker’s comments on an essay. The student’s essay was evaluating some poetry and had completely misunderstood the use of metaphor in the poem in question. The student also clearly knew that her interpretation was way off, but didn’t know how, and had reached the end of her patience. So after a desultory attempt at answering, with a cry from the heart reminiscent of William Wallace’s call for freedom, she wrote “BEYONCE” with about seventeen exclamation points. It felt good to see that her spirit was not broken, and it was a moment of empathy that drove home the damage that standardized tests are inflicting on our students. That vignette plays itself out millions of times each year in this country; the following explains why.

What are “Standardized Tests”?

We use standardized tests for a variety of reasons, but underlying every reason (curriculum effectiveness, college/career preparedness, teacher effectiveness, etc.) is the understanding that the test measures what a student has learned. To know how all our students are doing, we give them all standardized tests, meaning every student takes essentially the same test. This is a difficult endeavor given the wide range of students and number of tests, and it raises the question: how do we do this reliably and in a reasonable amount of time?

Accuracy and Difficulty vs Length

We all want tests to reliably measure students’ learning. To make a test reliable, we need to supply questions of varying difficulty, from very easy to very difficult, to cover a wide range of abilities. To keep the test short, most questions fall in the medium-easy to medium-difficult range, because that is where most students’ ability levels fall. So the test that best balances length and accuracy for the whole population is constructed such that the number of questions at any difficulty is proportional to the number of students at that ability level.

Why are most questions in the medium difficulty range? Imagine creating a test to measure 10th graders’ math ability. A small number of the students might have a couple years of calculus. If the test covered those topics, imagine the experience of most students who would often not even understand the notation in the question. Frustrating, right? On the other hand, if the test was also constructed to measure students with only rudimentary math knowledge, these average to advanced students would be frustrated and bored from answering a lot of questions on basic math facts. The solution most organizations use is to present only a few questions that are really easy or difficult, and accept that this score is not as accurate as they would prefer for the students at either end of the ability range.
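
To make that tradeoff concrete, here is a small simulation (my own illustration, assuming a Rasch IRT model with items clustered at medium difficulty, not data from any particular test): it computes the expected percent-correct for examinees across the ability range.

```python
# Sketch: expected score on a fixed-form test under a Rasch model,
# with items clustered near medium difficulty as described above.
import numpy as np

rng = np.random.default_rng(42)
item_difficulty = rng.normal(loc=0.0, scale=0.5, size=50)  # mostly medium items
abilities = [-2.0, -1.0, 0.0, 1.0, 2.0]                    # theta, z-score units

def p_correct(theta, b):
    """Rasch probability of answering an item correctly."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

for theta in abilities:
    expected = p_correct(theta, item_difficulty).mean()
    print(f"ability {theta:+.1f}: expected score = {expected:.0%}")
```

At two standard deviations below the mean, the expected score lands in the 10-15% range, which is exactly the experience described in the next section.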

These Tests are Inaccurate and Mean-Spirited

The problem is that while this might work OK for a lot of kids, it exacts a pretty heavy toll on others. Almost one in five students will not know the answer to 80% of the questions on these tests, and scoring about 20% on a test certainly feels like failing. It feels like failing every time a student takes such a test. Over the course of an academic career, students in the bottom quintile will guess on or skip 10,000 questions. That is 10,000 times the student is told that school, learning, or success is not for them. Even biasing the test to be easier only makes a slight improvement.

Figure: test performance across the bell curve. The shaded area represents students who will miss at least 80% of questions.

It isn’t necessarily better for the top students, whose every testing experience assures them that they are already very successful, when the reality is that they are likely being outperformed by a significant percentage of their future colleagues.

In other words, at both ends of the Bell Curve, we are serving our students very poorly, inadvertently encouraging lower performing students to give up (there is some evidence that the two correlate) and higher performing students to take it easy. It is no wonder that people dislike standardized tests.

There is a Solution

A computerized adaptive test (CAT) solves the problems outlined above. Properly constructed, a CAT makes testing faster, fairer, and more valid (a sketch of the core algorithm follows the list below):

  • Every examinee completes the test in less time (fast)
  • Every examinee gets a more accurate score (valid)
  • Every examinee receives questions tuned to their ability so gets about half right (fair)
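
Here is a toy sketch of that core loop (my own illustration, assuming a Rasch-calibrated item bank): select the most informative unused item at the current ability estimate, score the response, and update the estimate. Production CAT engines add content balancing, exposure control, and better estimators, but the logic is the same.

```python
# Toy CAT loop: maximum-information item selection with a grid-based MLE.
import numpy as np

bank = np.linspace(-3, 3, 200)     # hypothetical Rasch item difficulties
grid = np.linspace(-4, 4, 161)     # theta grid for ability estimation
rng = np.random.default_rng(1)

def prob(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def next_item(theta_hat, used):
    info = prob(theta_hat, bank) * (1.0 - prob(theta_hat, bank))  # item information
    info[list(used)] = -np.inf                                    # skip given items
    return int(np.argmax(info))

true_theta = 1.2                   # simulated examinee ability
theta_hat, used, responses = 0.0, set(), []
for _ in range(20):
    j = next_item(theta_hat, used)
    used.add(j)
    correct = int(rng.random() < prob(true_theta, bank[j]))       # simulated answer
    responses.append((bank[j], correct))
    # grid maximum-likelihood estimate from all responses so far
    loglik = sum(x * np.log(prob(grid, b)) + (1 - x) * np.log(1.0 - prob(grid, b))
                 for b, x in responses)
    theta_hat = float(grid[np.argmax(loglik)])

print(f"final ability estimate: {theta_hat:+.2f} (true value {true_theta:+.2f})")
```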

Given all the advantages of CAT, it may seem hard to believe that they are not used more often. While they are starting to catch on, adoption is not fast enough given the heavy toll that the old methods exact on our students. It is true that few testing providers can deliver CATs, but that is no excuse. If a standardized test is delivered to as few as 500 students, it can be made adaptive. It probably isn’t, but it could be. All that is needed are computers or tablets, an Internet connection, and some effort. We should expect more.

How can my organization implement CAT?

While CAT used to be feasible only for large organizations that tested hundreds of thousands or millions of examinees per year, a number of advances have changed this landscape.  If you’d like to do something about your test, it might be worthwhile to evaluate CAT.  We can help with that evaluation; if you’d like to chat, here is a link to schedule a meeting.  Or if you’d like to discuss the math or related ideas, please drop me a note.

Today I read an article in The Industrial-Organizational Psychologist (the colloquial journal published by the Society for Industrial Organizational Psychology) that really resonated with me.

Has Industrial-Organizational Psychology Lost Its Way?
-Deniz S. Ones, Robert B. Kaiser, Tomas Chamorro-Premuzic, Cicek Svensson

Why?  Because I think a lot of the points they are making are also true about the field of psychometrics.  They summarize their point in six bullet points that they suggest present a troubling direction for their field.  Though honestly, I suppose a lot of academia falls under these, while some great innovation is happening on free MOOCs and the like because they aren’t fettered by the chains of the purely or partially academic world.

  • an overemphasis on theory
  • a proliferation of, and fixation on, trivial methodological minutiae
  • a suppression of exploration and a repression of innovation
  • an unhealthy obsession with publication while ignoring practical issues
  • a tendency to be distracted by fads
  • a growing habit of losing real-world influence to other fields.

So what is psychometrics supposed to be doing?

The part that has irked me the most about psychometrics over the years is the overemphasis on theory and minutiae rather than solving practical problems.  This is the main reason I stopped attending the NCME conference and instead attend practical conferences like ATP.  It stems from my desire to improve the quality of assessment throughout the world.  Development of esoteric DIF methodology, new multidimensional IRT models, or a new CAT sub-algorithm when there are already dozens and the new one offers a 0.5% increase in efficiency… stuff like that isn’t going to fix all the terrible assessment being done in the world and the terrible decisions being made about people based on those assessments.  Don’t get me wrong, there is a place for substantive research, but I feel the practical side is underserved.

The Goal: Quality Assessment

And it’s that point that is driving the work that I do.  There is a lot of mediocre or downright bad assessment out there in the world.  I once talked to a pre-employment testing company and asked if I could help implement strong psychometrics to improve their tests as well as validity documentation.  Their answer?  It was essentially “No thanks, we’ve never been sued so we’re OK where we are.”  Thankfully, they fell in the mediocre category rather than the downright bad category.

Of course, in many cases, there is simply a lack of incentive to produce quality assessment.  Higher Education is a classic case of this.  Professional schools (e.g., Medicine) often have accreditation tied in some part to demonstrating quality assessment of their students.  There is typically no such constraint on undergraduate education, so your Intro to Psychology and freshman English Comp classes still do assessment the same way they did 40 years ago… with no psychometrics whatsoever.  Many small credentialing organizations lack incentive too, until they decide to pursue accreditation.

I like to describe the situation this way: take all the assessments in the world and assign each a percentile rank in psychometric quality.  The top 5% are the big organizations, such as nursing licensure in the US, that have in-house psychometricians, large volumes, and huge budgets.  We don’t have to worry about them, as they will be doing good assessment (and that substantive research I mentioned might be of use to them!).  The bottom 50% or more are like university classroom assessments; they’ll probably never use real psychometrics.  I’m concerned about the 50th to 95th percentiles.

Example: Credentialing

A great example of this level is the world of credentialing.  There are a TON of poorly constructed licensure and certification tests that are being used to make incredibly important decisions about people’s lives.  Some are simply because the organization is for-profit and doesn’t care.  Some are caused by external constraints.  I once worked with a Department of Agriculture for a western US State, where the legislature mandated that licensure tests be given for certain professions, even though only like 3 people per year took some tests.

So how do we get groups like that to follow best practices in assessment?  In the past, the only way to get psychometrics done was for them to pay a consultant a ton of money that they don’t have.  Why spend $5k on an Angoff study or classical test report for 3 people/year?  I don’t blame them.  The field of psychometrics needs to find a way to help such groups.  Otherwise, the tests are low quality and they are giving licenses to unqualified practitioners.

There are some bogus providers out there, for sure.  I’ve seen Certification delivery platforms that don’t even store the examinee responses, which would be necessary to do any psychometric analysis whatsoever.  Obviously they aren’t doing much to help the situation.  Software platforms that focus on things like tracking payments and prerequisites simply miss the boat too.  They are condoning bad assessment.

Similarly, mathematically complex advancements such as multidimensional IRT are of no use to this type of organization.  It’s not helping the situation.

An Opportunity for Innovation

I think there is still a decent amount of innovation in our field.  There are organizations that are doing great work to develop innovative items, psychometrics, and assessments.  However, it is well known that large corporations will snap up fresh PhDs in Psychometrics and then lock them in a back room to do uninnovative work like run SAS scripts or conduct Angoff studies over and over and over.  This happened to me and after only 18 months I was ready for more.

Unfortunately, I have found that a lot of innovation is not driven by producing good measurement.  I was in a discussion on LinkedIn where someone was pushing gamification for assessments and declared that measurement precision was of no interest.  This, of course, is ludicrous.  It’s OK to produce random numbers as long as the UI looks cool for students?

Innovation in Psychometrics at ASC

Much of the innovation at ASC is targeted towards the issue I have presented here.  I originally developed Iteman 4 and Xcalibre 4 to meet this type of usage.  I wanted to enable an organization to produce professional psychometric analysis reports on their assessments without having to pay massive amounts of money to a consultant.  Additionally, I wanted to save time; there are other software programs which can produce similar results, but they drop them in text files or Excel spreadsheets instead of Microsoft Word, which is of course what everyone uses to draft a report.
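
As a flavor of what such a report automates, here is a toy sketch of two statistics at the heart of classical item analysis (illustrative Python on made-up data, not Iteman’s actual code): item difficulty as the proportion correct, and discrimination as the item-rest point-biserial correlation.

```python
# Classical item statistics on toy data (rows = examinees, cols = 0/1 items).
import numpy as np

scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])
total = scores.sum(axis=1)
for j in range(scores.shape[1]):
    p = scores[:, j].mean()                       # difficulty: proportion correct
    rest = total - scores[:, j]                   # total score excluding this item
    r_pb = np.corrcoef(scores[:, j], rest)[0, 1]  # point-biserial discrimination
    print(f"item {j + 1}: p = {p:.2f}, r_pb = {r_pb:.2f}")
```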

Much of our FastTest platform is designed with a similar bent.  Tired of running an Angoff study with items on a projector and the SMEs writing all their ratings with pencil and paper, only to be transcribed later?  You can do this online.  Moreover, because it is online, you can use SMEs remotely rather than paying to fly them to a central office.  Want to publish an adaptive (CAT) exam without writing code?  We have it built directly into our test publishing interface.

Back to My Original Point

So the title is “What is Psychometrics Supposed to be Doing?”  My answer, of course, is improving assessment.  The issue I take with the mathematically advanced research is that it is only relevant for the top 5% of organizations mentioned above.  It is also our duty as psychometricians to find better ways to help the other 95%.

What else can we be doing?  I think the future here is automation.  Iteman 4 and Xcalibre 4, as well as FastTest, were really machine learning and automation platforms before those things became so in vogue.  As the SIOP article mentioned at the beginning points out, other scholarly areas like Big Data are gaining more real-world influence even if they are doing things that psychometrics has done for a long time.  Item Response Theory is a form of machine learning and it’s been around for 50 years!

 

Technavio recently released a market analysis on K-12 Assessment Trends.  They concluded that there are three big trends currently impacting the testing world:

  • Emphasis on cognitive testing and assessment
  • Increase in number of standardized test-optional institutions
  • Rising penetration of cloud computing

The second of the 3 K-12 assessment trends is not about assessment itself (or psychometrics, the science of assessment) but about how tests are used.  Not the same domain.  In fact, I just attended the Conference on Test Security, where a keynote address by Prof. Greg Cizek noted that such consequential aspects are really outside the purview of assessment and as such shouldn’t be part of our validity framework; instead, he suggested that we include security protocols in our validity framework.

So that leaves the other two… and ASC is at the leading edge of both.  I love that those two are getting recognition from external sources.  We’ve been doing computerized assessment since the Days of DOS – starting with the revolutionary MICROCAT platform – and are now focusing on massively scalable cloud-based delivery platforms.  Moreover, we are driving the forefront of cloud computing on the back end; the new world of a geographically distributed workforce needs a test development platform (FastTest) and certification management system (Certifior) that support best practices while allowing for geographic dispersion.

Similarly, our company is founded on the principles of evidence-based research and psychometric science, with the goal that assessment is not just a pretty face and instead drives at measuring the construct.  One of the issues facing the testing industry is that many assessments are driven by pedagogy and politics rather than scientific evidence.  We want to see that change.

Want to stay ahead on K-12 Assessment Trends?  Interested in how these cutting-edge concepts can help your organization?  This list is only the tip of the iceberg – it doesn’t even delve into topics like computerized adaptive testing, item response theory, or tech-enhanced items. Get in touch!

Want to improve the quality of your assessments?

Sign up for our newsletter and hear about our free tools, product updates, and blog posts first! Don’t worry, we would never sell your email address, and we promise not to spam you with too many emails.


The “opt out” movement is a supposedly grass-roots movement against K-12 standardized testing, primarily encouraging parents to refuse to allow their kids to take tests.  The absolutely bizarre part is that large-scale test scores are rarely used for individual impact on the student, and tests take up only a tiny fraction of school time throughout the year.  An extremely well-written paper exploring this befuddling situation was recently released by Randy E. Bennett at Educational Testing Service (ETS).  Dr. Bennett is an internationally renowned researcher whose opinion is quite respected.  He came to an interesting conclusion.

After a brief background, he states the situation:

Despite the fact that reducing testing time is a recurring political response, the evidence described thus far suggests that the actual time devoted to testing might not provide the strongest rationale for opting out, especially in the suburban low-poverty schools in which test refusal appears to occur more frequently.

A closer look at New York, the state with the highest opt-out rates, found a less obvious but stronger relationship (page 7):

It appears to have been the confluence of a revamped teacher evaluation system with a dramatically harder, Common Core-aligned test that galvanized the opt-out movement in New York State (Fairbanks, 2015; Harris & Fessenden, 2015; PBS Newshour, 2015). For 2014, 96% of the state’s teachers had been rated as effective or highly effective, even though only 31% of students had achieved proficiency in ELA and only 36% in mathematics (NYSED, 2014; Taylor, 2015). These proficiency rates were very similar to ones achieved on the 2013 NAEP for Grades 4 and 8 (USDE, 2013a, 2013b, 2013c, 2013d). The rates were also remarkably lower than on New York’s pre-Common-Core assessments. The new rates might be taken to imply that teachers were doing a less-than-adequate job and that supervisors, perhaps unwittingly, were giving them inflated evaluations for it.

That view appears to have been behind a March 2015 initiative from New York Governor Andrew Cuomo (Harris & Fessenden, 2015; Taylor, 2015). At his request, the legislature reduced the role of the principal’s judgment, favored by teachers, and increased from 20% to 50% the role of test-score growth indicators in evaluation and tenure decisions (Rebora, 2015). As a result, the New York State United Teachers union urged parents to boycott the assessment so as to subvert the new teacher evaluations and disseminated information to guide parents specifically in that action (Gee, 2015; Karlin, 2015).

I am certainly sympathetic to the issues facing teachers today, being the son of two teachers and having a sibling who is a teacher, as well as having wanted to be a high school teacher myself until I was 18.  The lack of resources and low pay facing most educators is appalling.  However, the situation described above is simply an extension of the soccer syndrome that many in our society decry: the notion that all kids should be allowed to play and be rewarded equally, merely for participation and not performance.  With no measure of performance, there is no external impetus to perform – and we all know the role that motivation plays in performance.

It will be interesting to see the role that the Opt Out movement plays in the post-NCLB world.


Every spring, the Association of Test Publishers (ATP) hosts its annual conference, Innovations in Testing.  This is the leading conference in the testing industry, with nearly 1,000 people from major testing vendors and a wide range of test sponsors, from school districts to certification boards to employment testing companies.  While the technical depth is much lower than at purely scholarly conferences like NCME and IACAT, it is the top conference for networking, business contacts, and discussion of practical issues.

The conference is typically held in a warm location at the end of a long winter.  This year did not disappoint, with Orlando providing us with a sunny 75 degrees each day!

Interested in attending a conference on assessment and psychometrics?  We provide this list to help you decide.

ATP Presentations

Here are the four presentations that were presented by the Assessment Systems team:

Let the CAT out of the Bag: Making Adaptive Testing more Accessible and Feasible

This session explored the barriers to implementing adaptive testing and how to address them.  CAT remains underutilized, and it is our job as a profession to fix that.

FastTest: A Comprehensive System for Assessment

FastTest revolutionizes how assessments are developed and delivered with scalable security.  We provided a demo session with an open discussion.  Want to see a demo yourself?  Get in touch!

Is Remote Proctoring Really Less Secure?

Rethink security. How can we leverage technology to improve proctoring?  Is the 2,000-year-old approach still the most valid?  Let’s critically evaluate remote proctoring and how technology can make it better than one person watching a room of 30 computers.

The Best of Both Worlds: Leveraging TEIs without Sacrificing Psychometrics

Anyone can dream up a drag-and-drop item. But can you make it automatically scored with IRT in real time?  We discussed our partnership with Charles County (MD) Public Schools to develop a system that allowed a single school district to develop quality assessments on par with what a major testing company would do.  Learn more about our item authoring platform.

 

In addition to speaking, I served on the scientific committee – here’s a pic of the volunteers (I am third from left in the front row).

 


Photo courtesy of the Association of Test Publishers


The United States Congress recently passed the 2016 National Defense Authorization Act.  This provides policies and authorizations for 2016 but no budgetary appropriations, which come in subsequent legislation.  While these Acts and related ones happen all the time, they rarely have a direct impact on the field of testing and assessment.

This one is different.

There is one Section that specifically addresses a major portion of the testing industry.  Section 559 of the 2016 NDAA (full text below) requires that all Professional Certifications obtained by members of the armed forces must be accredited by an independent third party.  For a discussion on the difference between certification and accreditation, read this post, but here’s more on this specific Act.

What does this mean for military certifications?

The US Military spends a lot of money to help its people gain relevant skills within the military as well as for transferring their skills to the civilian world.  If you work on information technology in the military, you probably want one of those many certifications.  If you work as a medical assistant, you probably want a certification.  If you are soon to exit the military and need to find a job, a certification is obviously beneficial.  The government will pay for these before you leave.  This of course is all good – we want to facilitate members of the military with a smooth transition to civilian life as well as a well-paying job that they like.

Of course, the private sector saw an opportunity.  There was nothing to stop anyone from sitting in their basement, writing 50 test questions on medical assisting, and putting them up on a website as a certification… and charging the US government hundreds of dollars each.  The legislation below is intended to stop that.  An independent third party must come in and give your certification a stamp of approval that it has been built according to generally accepted standards.  This is of course quite reasonable.  The only disconcerting piece is that the timeline is 3 years – meaning that fly-by-night operations can still keep going for now.

Who are these third parties?  There are really two:

  • National Commission for Certifying Agencies (NCCA)
  • American National Standards Institute (ANSI)

 

Want to learn more about accreditation?

Do you have a certification program that you are looking to get accredited, for this or any other reason?  We specialize in helping groups like yours.  Get in touch and one of our consultants will explain more about what is needed.


SEC. 559. QUALITY ASSURANCE OF CERTIFICATION PROGRAMS AND STANDARDS FOR PROFESSIONAL CREDENTIALS OBTAINED BY MEMBERS OF THE ARMED FORCES.

Section 2015 of title 10, United States Code, as amended by section 551 of the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015 (Public Law 113–291; 128 Stat. 3376), is further amended—

(1) by redesignating subsections (c) and (d) as subsections (d) and (e), respectively; and

(2) by inserting after subsection (b) the following new subsection (c):

“(c) Quality Assurance Of Certification Programs And Standards.— (1) Commencing not later than three years after the date of the enactment of the National Defense Authorization Act for Fiscal Year 2016, each Secretary concerned shall ensure that any credentialing program used in connection with the program under subsection (a) is accredited by an accreditation body that meets the requirements specified in paragraph (2).

“(2) The requirements for accreditation bodies specified in this paragraph are requirements that an accreditation body—

“(A) be an independent body that has in place mechanisms to ensure objectivity and impartiality in its accreditation activities;

“(B) meet a recognized national or international standard that directs its policy and procedures regarding accreditation;

“(C) apply a recognized national or international certification standard in making its accreditation decisions regarding certification bodies and programs;

“(D) conduct on-site visits, as applicable, to verify the documents and records submitted by credentialing bodies for accreditation;

“(E) have in place policies and procedures to ensure due process when addressing complaints and appeals regarding its accreditation activities;

“(F) conduct regular training to ensure consistent and reliable decisions among reviewers conducting accreditations; and

“(G) meet such other criteria as the Secretary concerned considers appropriate in order to ensure quality in its accreditation activities.”.
