Sometimes the best things come to those who wait! This trip is almost over, but it just keeps getting better. Yesterday, we helped a partner in the Middle East roll out a CAT (computerized adaptive test) on a national level for the first time. I hope Dr. David Weiss and Dr. Nathan Thompson are smiling from Minnesota right now, because everything went flawlessly!
Today, I attended the BETT Middle East conference and talked with innovative companies about what is next in EdTech. Within 30 minutes at the conference, it was very clear that the United States is not the only country that innovates. Innovation can come from anywhere in the world, so it is always important to look for it everywhere.
Well, we are in the home stretch, and this has been an amazing trip. I am looking forward to getting back to the US to see my family and to talk with my colleagues at Assessment Systems about everything I have learned and seen on this trip.

The traditional Learning Management System (LMS) is designed to serve as a portal between educators and their learners. Platforms like Moodle are successful in facilitating cooperative online learning in a number of groundbreaking ways: course management, interactive discussion boards, assignment submissions, and delivery of learning content. While all of this is great, we’ve yet to see an LMS that implements best practices in assessment and psychometrics to ensure that medium- or high-stakes tests meet international standards.

To put it bluntly, the assessment functionality in a typical LMS is good enough for short classroom quizzes but falls far short of what is required for a test used to award a credential.  A white paper on this topic is available here; some examples of what is missing include:

  • Treatment of items as reusable objects
  • Item metadata and historical use
  • Collaborative item review and versioning
  • Test assembly based on psychometrics
  • Psychometric forensics to search for non-independent test-taking behavior
  • Deeper score reporting and analytics

Assessment Systems is pleased to announce the launch of an easy-to-use bridge between FastTest and Moodle that allows users to seamlessly deliver sound assessments from within Moodle while taking advantage of the sophisticated test development and psychometric tools available in FastTest. In addition to seamless delivery for learners, all candidate information is transferred to FastTest automatically, eliminating the examinee import process.  The bridge makes use of the international Learning Tools Interoperability (LTI) standards.
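For readers curious what an LTI connection involves under the hood: a classic LTI 1.1 launch is just an HTML form POST from the LMS to the tool, signed with OAuth 1.0a HMAC-SHA1 so the tool can verify it came from a trusted consumer. The following is a minimal sketch of that signing step in Python, not FastTest’s actual implementation; the launch URL, key, and secret are placeholders you would get from the tool provider.

```python
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote


def sign_lti_launch(launch_url, params, consumer_key, consumer_secret):
    """Sign an LTI 1.1 launch (OAuth 1.0a HMAC-SHA1, body-parameter signing)."""
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # Normalize: RFC 3986 percent-encode keys and values, sort, join as k=v with '&'
    pairs = sorted((quote(k, safe=""), quote(v, safe="")) for k, v in all_params.items())
    normalized = "&".join(f"{k}={v}" for k, v in pairs)
    base_string = "&".join(["POST", quote(launch_url, safe=""), quote(normalized, safe="")])
    # Signing key is consumer_secret + '&' (no token secret in an LTI launch)
    key = quote(consumer_secret, safe="") + "&"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params  # POST these as form fields to the tool's launch URL


signed = sign_lti_launch(
    "https://tool.example.org/lti/launch",          # placeholder launch URL
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "course-42-quiz-1",
        "user_id": "learner-7",                     # this is how the LMS passes candidate identity
    },
    "my-consumer-key",
    "my-consumer-secret",
)
```

Because the user identity rides along as launch parameters (`user_id`, optionally name and email fields), the tool can create the examinee record on the fly, which is what eliminates the manual import step.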

If you are already a FastTest user, you can find a step-by-step tutorial on establishing the connection in the FastTest User Manual: log into your FastTest workspace, select Manual in the upper right-hand corner, and see Appendix N.

If you are not yet a FastTest user and would like to discuss how it can improve your assessments while still letting you leverage Moodle or another LMS for learning content, sign up for a free account here.

As we jump headfirst into 2018, we’re reflecting on our successes from the past year. One such success was our inclusion in the Minneapolis/St. Paul Business Journal’s list of Best Places to Work in 2017. We’re honored to be recognized!


So, what makes Assessment Systems one of the best places to work?

Though founded in 1979, we run our company with the mindset and energy of a startup. This means we have a strong foundation on which to create world-class software, but at the same time, we’re constantly innovating, working with the newest technologies and taking risks.

Our leadership team drives this startup mentality, which encourages employees to constantly be on their toes. With experts in a variety of areas, including assessment, psychometrics, entrepreneurship, and tech, not only do all team members play an important role in the business, they also have a real opportunity to make a difference.

We have great company values.

It’s easy for our employees to be inspired every day because of our company’s values. Our CEO stresses the importance of doing the right thing and being kind, which everyone on the team is proud to stand behind. Principles such as these are fundamental to the success of our employees. Ask anyone who’s partnered with us and they’ll tell you that we’re a small company with a big heart that wants to provide the best product and service to our clients.


Last, but certainly not least, we love what we do!

Our unique company culture, diverse team, and values make it easy to love where we work. As a result, we’re all the more motivated to make a difference in our industry and to keep improving our company culture.

We may not have a big team, but we have incredible skill sets and a collaborative environment where we rely on each other to make great things happen. We are a small company that is changing the way people test online and improving the world one test at a time.


Sound interesting? Check out our careers page or learn more about what we’re doing at assess.com.

Desperation is seldom fun to see.

Some years ago, shortly after we released our online marking functionality, I was reviewing a customer workspace and was intrigued to see “Beyonce??” mentioned in a marker’s comments on an essay. The student’s essay evaluated some poetry and had completely misunderstood the use of metaphor in the poem in question. The student also clearly knew that her interpretation was way off, but didn’t know how, and had reached the end of her patience. So after a desultory attempt at answering, with a cry from the heart reminiscent of William Wallace’s call for freedom, she wrote “BEYONCE” with about seventeen exclamation points. It felt good to see that her spirit was not broken, and it was a moment of empathy that drove home the damage that standardized tests are inflicting on our students. That vignette plays itself out millions of times each year in this country, and the following explains why.

What are “Standardized Tests”?

We use standardized tests for a variety of reasons, but underlying every reason (curriculum effectiveness, college/career preparedness, teacher effectiveness, etc.) is the understanding that the test is measuring what a student has learned. In order to know how all our students are doing, we give them standardized tests, meaning every student receives essentially the same test. This is a difficult endeavor given the wide range of students and the number of tests, and it raises the question: how do we do this reliably and in a reasonable amount of time?

Accuracy and Difficulty vs Length

We all want tests to reliably measure students’ learning. To make a test reliable, we need to supply questions of varying difficulty, from very easy to very difficult, to cover a wide range of abilities. To keep the test short, however, most of the questions fall in the medium-easy to medium-difficult range, because that is where most students’ ability levels fall. So the test that best balances length and accuracy for the whole population should be constructed such that the number of questions at any difficulty is proportionate to the number of students at that ability.

Why are most questions in the medium difficulty range? Imagine creating a test to measure 10th graders’ math ability. A small number of the students might have a couple years of calculus. If the test covered those topics, imagine the experience of most students who would often not even understand the notation in the question. Frustrating, right? On the other hand, if the test was also constructed to measure students with only rudimentary math knowledge, these average to advanced students would be frustrated and bored from answering a lot of questions on basic math facts. The solution most organizations use is to present only a few questions that are really easy or difficult, and accept that this score is not as accurate as they would prefer for the students at either end of the ability range.
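The mismatch between ability and difficulty can be made concrete with the Rasch model, the simplest item response theory model, where the probability of a correct answer depends only on the gap between student ability and item difficulty. A quick sketch (the ability and difficulty values are illustrative):

```python
import math


def p_correct(theta, b):
    """Rasch model: probability that a student of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))


# A struggling student (theta = -1.5) facing a hard item (b = 2.0):
print(round(p_correct(-1.5, 2.0), 3))   # ~0.029: essentially reduced to guessing

# The same student on an item matched to their ability (b = -1.5):
print(round(p_correct(-1.5, -1.5), 3))  # 0.5: the most informative possible item
```

When the gap is 3.5 logits, the student has under a 3% chance of answering from knowledge, which is exactly the frustrating experience described above.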

These Tests are Inaccurate and Mean Spirited

The problem is that while this might work OK for a lot of kids, it exacts a pretty heavy toll on others. Almost one in five students will not know the answer to 80% of the questions on these tests, and scoring about 20% on a test certainly feels like failing. It feels like failing every time a student takes such a test. Over the course of an academic career, students in the bottom quintile will guess on or skip 10,000 questions. That is 10,000 times the student is told that school, learning, or success is not for them. Even biasing the test to be easier only makes a slight improvement.

Figure: test performance across the bell curve of student ability. The shaded area represents students who will miss at least 80% of questions.

It isn’t necessarily better for the top students, whose every testing experience assures them that they are already very successful, when the reality is that they are likely being outperformed by a significant percentage of their future colleagues.

In other words, at both ends of the Bell Curve, we are serving our students very poorly, inadvertently encouraging lower performing students to give up (there is some evidence that the two correlate) and higher performing students to take it easy. It is no wonder that people dislike standardized tests.

There is a Solution

A computerized adaptive test (CAT) solves all the problems outlined above. Properly constructed, a CAT is faster, fairer, and more valid:

  • Every examinee completes the test in less time (fast)
  • Every examinee gets a more accurate score (valid)
  • Every examinee receives questions tuned to their ability so gets about half right (fair)

Given all the advantages of CAT, it may seem hard to believe that it is not used more often. It is starting to catch on, but not fast enough given the heavy toll that the old methods exact on our students. It is true that few testing providers can enable CATs, but that is simply an excuse. If a standardized test is delivered to as few as 500 students, it can be made adaptive. It probably isn’t, but it could be. All that is needed are computers or tablets, an Internet connection, and some effort. We should expect more.

How can my organization implement CAT?

While CAT used to be feasible only for large organizations that tested hundreds of thousands or millions of examinees per year, a number of advances have changed this landscape.  If you’d like to do something about your test, it might be worthwhile to evaluate CAT.  We can help you with that evaluation; here is a link to schedule a meeting.  If you’d like to discuss the math or related ideas, please drop me a note.

Today I read an article in The Industrial-Organizational Psychologist (the colloquial journal published by the Society for Industrial and Organizational Psychology) that really resonated with me.

Has Industrial-Organizational Psychology Lost Its Way?
by Deniz S. Ones, Robert B. Kaiser, Tomas Chamorro-Premuzic, and Cicek Svensson

Why?  Because I think a lot of their points are also true of the field of psychometrics.  They summarize their argument in six bullet points that they suggest represent a troubling direction for their field.  Honestly, I suppose much of academia falls under these, while some great innovation is happening over on free MOOCs and the like, because they aren’t fettered by the chains of the purely or partially academic world.

  • an overemphasis on theory
  • a proliferation of, and fixation on, trivial methodological minutiae
  • a suppression of exploration and a repression of innovation
  • an unhealthy obsession with publication while ignoring practical issues
  • a tendency to be distracted by fads
  • a growing habit of losing real-world influence to other fields.

The part that has irked me the most about psychometrics over the years is the overemphasis on theory and minutiae rather than solving practical problems.  This is the main reason I stopped attending the NCME conference and instead attend practical conferences like ATP.  It stems from my desire to improve the quality of assessment throughout the world.  Development of esoteric DIF methodology, new multidimensional IRT models, or a new CAT sub-algorithm when there are already dozens and the new one offers a 0.5% increase in efficiency… stuff like that isn’t going to impact all the terrible assessment being done in the world and the terrible decisions being made about people based on those assessments.  Don’t get me wrong, there is a place for substantive research, but I feel the practical side is underserved.

The Goal: Quality Assessment

And it’s that point that is driving the work that I do.  There is a lot of mediocre or downright bad assessment out there in the world.  I once talked to a pre-employment testing company and asked if I could help implement strong psychometrics to improve their tests as well as their validity documentation.  Their answer?  It was essentially “No thanks, we’ve never been sued, so we’re OK where we are.”  Thankfully, they fell in the mediocre category rather than the downright bad category.

Of course, in many cases, there is simply a lack of incentive to produce quality assessment.  Higher Education is a classic case of this.  Professional schools (e.g., Medicine) often have accreditation tied in some part to demonstrating quality assessment of their students.  There is typically no such constraint on undergraduate education, so your Intro to Psychology and freshman English Comp classes still do assessment the same way they did 40 years ago… with no psychometrics whatsoever.  Many small credentialing organizations lack incentive too, until they decide to pursue accreditation.

I like to describe the situation this way: take all the assessments in the world and give each a percentile rank for psychometric quality.  The top 5% are the big organizations, such as nursing licensure in the US, that have in-house psychometricians, large volumes, and huge budgets.  We don’t have to worry about them, as they will be doing good assessment (and the substantive research I mentioned might be of use to them!).  The bottom 50% or more are like university classroom assessments; they’ll probably never use real psychometrics.  I’m concerned about the 50th to 95th percentiles.

Example: Credentialing

A great example of this level is the world of credentialing.  There are a TON of poorly constructed licensure and certification tests being used to make incredibly important decisions about people’s lives.  Some are bad simply because the organization is for-profit and doesn’t care.  Others are caused by external constraints.  I once worked with the Department of Agriculture for a western US state, where the legislature mandated licensure tests for certain professions, even though some tests were taken by only three people per year.

So how do we get groups like that to follow best practices in assessment?  In the past, the only way to get psychometrics done was to pay a consultant a ton of money that they don’t have.  Why spend $5k on an Angoff study or classical test report for 3 people/year?  I don’t blame them.  The field of psychometrics needs to find a way to help such groups.  Otherwise, the tests are low quality and they are giving licenses to unqualified practitioners.

There are some bogus providers out there, for sure.  I’ve seen Certification delivery platforms that don’t even store the examinee responses, which would be necessary to do any psychometric analysis whatsoever.  Obviously they aren’t doing much to help the situation.  Software platforms that focus on things like tracking payments and prerequisites simply miss the boat too.  They are condoning bad assessment.

Similarly, mathematically complex advancements such as multidimensional IRT are of no use to this type of organization.  It’s not helping the situation.

An Opportunity for Innovation

I think there is still a decent amount of innovation in our field.  There are organizations that are doing great work to develop innovative items, psychometrics, and assessments.  However, it is well known that large corporations will snap up fresh PhDs in Psychometrics and then lock them in a back room to do uninnovative work like run SAS scripts or conduct Angoff studies over and over and over.  This happened to me and after only 18 months I was ready for more.

Unfortunately, I have found that a lot of innovation is not driven by producing good measurement.  I was in a discussion on LinkedIn where someone was pushing gamification for assessments and declared that measurement precision was of no interest.  This, of course, is ludicrous.  It’s OK to produce random numbers as long as the UI looks cool for students?

Innovation in Psychometrics at ASC

Much of the innovation at ASC is targeted at the issue I have presented here.  I originally developed Iteman 4 and Xcalibre 4 for exactly this type of use.  I wanted to enable an organization to produce professional psychometric analysis reports on their assessments without having to pay massive amounts of money to a consultant.  Additionally, I wanted to save time; there are other software programs that can produce similar results, but they drop them into text files or Excel spreadsheets instead of Microsoft Word, which is of course what everyone uses to draft a report.
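For context, the two workhorse statistics in a classical item analysis report are the item p-value (difficulty) and the corrected point-biserial correlation (discrimination). The sketch below shows just those two on toy data; it is not Iteman’s actual code, and the real reports compute far more (distractor analysis, reliability, subscores, and so on).

```python
import math


def pearson(x, y):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0


def classical_item_stats(responses):
    """responses[s][i] is examinee s's 0/1 score on item i.
    Returns, per item: (p-value, corrected point-biserial), where the
    point-biserial correlates the item with the total score EXCLUDING
    that item (the "rest" score), to avoid self-correlation."""
    n_items = len(responses[0])
    stats = []
    for j in range(n_items):
        item = [r[j] for r in responses]
        rest = [sum(r) - r[j] for r in responses]
        p_value = sum(item) / len(item)
        stats.append((p_value, pearson(item, rest)))
    return stats


# Four examinees, three items (made-up data):
data = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
for p, rpb in classical_item_stats(data):
    print(round(p, 2), round(rpb, 2))
```

Even this toy version shows why automation matters: the computation is trivial, but formatting it into a report a board can read is where the hours used to go.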

Much of our FastTest platform is designed with a similar bent.  Tired of running an Angoff study with items on a projector and the SMEs writing all their ratings with pencil and paper, only to be transcribed later?  Well, you can do this online.  Moreover, because it is online, you can use SMEs remotely rather than paying to fly them to a central office.  Want to publish an adaptive (CAT) exam without writing code?  We have it built directly into our test publishing interface.
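The arithmetic behind a modified Angoff study is itself simple; what tooling adds is collecting, versioning, and reporting the ratings. As a reminder of the method, each SME estimates, per item, the probability that a minimally competent candidate answers correctly, and the recommended cut score is the sum of the mean ratings. A minimal sketch with made-up ratings:

```python
def angoff_cut_score(ratings):
    """ratings[s][i]: SME s's estimate of the probability that a minimally
    competent candidate answers item i correctly (0.0 to 1.0).
    The recommended raw cut score is the sum over items of the mean rating."""
    n_smes = len(ratings)
    n_items = len(ratings[0])
    item_means = [sum(r[i] for r in ratings) / n_smes for i in range(n_items)]
    return sum(item_means)


# Three SMEs rating a five-item test:
ratings = [
    [0.9, 0.7, 0.6, 0.8, 0.5],
    [0.8, 0.6, 0.7, 0.9, 0.4],
    [0.7, 0.8, 0.5, 0.7, 0.6],
]
print(round(angoff_cut_score(ratings), 2))  # 3.4 out of 5 points
```

In practice a facilitator would also look at rater agreement per item and often run a second round of ratings after discussion; the point here is only that the core computation is a mean and a sum.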

Back to My Original Point

So the title is “What is Psychometrics Supposed to be Doing?”  My answer, of course, is improving assessment.  The issue I take with the mathematically advanced research is that it is only relevant for the top 5% of organizations mentioned above.  It is also our duty as psychometricians to find better ways to help the other 95%.

What else can we be doing?  I think the future here is automation.  Iteman 4 and Xcalibre 4, as well as FastTest, were really machine learning and automation platforms before those things became so en vogue.  As the SIOP article quoted at the beginning points out, other scholarly areas like Big Data are gaining more real-world influence even when they are doing things that psychometrics has done for a long time.  Item response theory is a form of machine learning, and it’s been around for 50 years!
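To make that last claim concrete: estimating an examinee’s ability under the Rasch model is literally gradient ascent on a log-likelihood, the same loop you would write for logistic regression. A toy sketch with known item difficulties (the values are illustrative):

```python
import math


def estimate_theta(responses, difficulties, lr=0.5, iters=200):
    """Maximum-likelihood ability estimate under the Rasch model, fit by
    gradient ascent. The gradient of the log-likelihood is simply the sum
    of residuals (observed score minus predicted probability), exactly as
    in logistic regression."""
    theta = 0.0
    for _ in range(iters):
        grad = sum(u - 1.0 / (1.0 + math.exp(-(theta - b)))
                   for u, b in zip(responses, difficulties))
        theta += lr * grad
    return theta


# Correct on the easy and medium items, wrong on the hard one:
print(round(estimate_theta([1, 1, 0], [-1.0, 0.0, 1.0]), 2))  # ~0.8
```

Swap in a weight per item and you have the two-parameter model; the estimation machinery is the same kind of optimization that today gets branded as machine learning.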


Want to improve the quality of your assessments?

Sign up for our newsletter and hear about our free tools, product updates, and blog posts first! Don’t worry, we would never sell your email address, and we promise not to spam you with too many emails.

Newsletter Sign Up
First Name*
Last Name*
Email*
Company*
Market Sector*
Lead Source

Technavio recently released a market analysis on K-12 Assessment Trends.  They concluded that there are three big trends currently impacting the testing world:

  • Emphasis on cognitive testing and assessment
  • Increase in number of standardized test-optional institutions
  • Rising penetration of cloud computing

The second of the three K-12 assessment trends is not about assessment itself (or psychometrics, the science of assessment) but about how tests are used.  Not the same domain.  In fact, I just attended the Conference on Test Security, where a keynote address by Prof. Greg Cizek argued that such consequential aspects are really outside the purview of assessment and as such shouldn’t be part of our validity framework; instead, he suggested that we include security protocols in our validity framework.

So that leaves the other two… and ASC is at the leading edge of both.  I love that those two are getting recognition from external sources.  We’ve been doing computerized assessment since the days of DOS – starting with the revolutionary MICROCAT platform – and are now focusing on massively scalable cloud-based delivery platforms.  Moreover, we are driving the forefront of cloud computing on the back end; the new world of a geographically distributed workforce needs a test development platform (FastTest) and a certification management system (Certifior) that support best practices while allowing for geographic dispersion.

Similarly, our company is founded on the principles of evidence-based research and psychometric science, with the goal that assessment is not just a pretty face but actually drives at measuring the construct.  One of the issues facing the testing industry is that many assessments are driven by pedagogy and politics rather than scientific evidence.  We want to see that change.

Want to stay ahead on K-12 Assessment Trends?  Interested in how these cutting-edge concepts can help your organization?  This list is only the tip of the iceberg – it doesn’t even delve into topics like computerized adaptive testing, item response theory, or tech-enhanced items. Get in touch!


The “opt out” movement is a supposedly grass-roots movement against K-12 standardized testing, primarily encouraging parents to refuse to let their kids take the tests.  The absolutely bizarre part of this is that large-scale test scores are rarely used for decisions about an individual student, and that tests take up only a tiny fraction of school time throughout the year.  An extremely well-written paper exploring this befuddling situation was recently released by Randy E. Bennett at Educational Testing Service (ETS).  Dr. Bennett is an internationally renowned researcher whose opinion is quite respected.  He came to an interesting conclusion.

After a brief background, he states the situation:

Despite the fact that reducing testing time is a recurring political response, the evidence described thus far suggests that the actual time devoted to testing might not provide the strongest rationale for opting out, especially in the suburban low-poverty schools in which test refusal appears to occur more frequently.

A closer look at New York, the state with the highest opt-out rates, found a less obvious but stronger relationship (page 7):

It appears to have been the confluence of a revamped teacher evaluation system with a dramatically harder, Common Core-aligned test that galvanized the opt-out movement in New York State (Fairbanks, 2015; Harris & Fessenden, 2015; PBS Newshour, 2015). For 2014, 96% of the state’s teachers had been rated as effective or highly effective, even though only 31% of students had achieved proficiency in ELA and only 36% in mathematics (NYSED, 2014; Taylor, 2015). These proficiency rates were very similar to ones achieved on the 2013 NAEP for Grades 4 and 8 (USDE, 2013a, 2013b, 2013c, 2013d). The rates were also remarkably lower than on New York’s pre-Common-Core assessments. The new rates might be taken to imply that teachers were doing a less-than-adequate job and that supervisors, perhaps unwittingly, were giving them inflated evaluations for it.

That view appears to have been behind a March 2015 initiative from New York Governor Andrew Cuomo (Harris & Fessenden, 2015; Taylor, 2015). At his request, the legislature reduced the role of the principal’s judgment, favored by teachers, and increased from 20% to 50% the role of test-score growth indicators in evaluation and tenure decisions (Rebora, 2015). As a result, the New York State United Teachers union urged parents to boycott the assessment so as to subvert the new teacher evaluations and disseminated information to guide parents specifically in that action (Gee, 2015; Karlin, 2015).

I am certainly sympathetic to the issues facing teachers today, being the son of two teachers and having a sibling who is a teacher, as well as having wanted to be a high school teacher myself until I was 18.  The lack of resources and low pay facing most educators is appalling.  However, the situation described above is simply an extension of the soccer syndrome that many in our society decry: the idea that all kids should be allowed to play and be rewarded equally, merely for participation rather than performance.  With no measure of performance, there is no external impetus to perform – and we all know the role that motivation plays in performance.

It will be interesting to see the role that the opt-out movement plays in the post-NCLB world.


Every spring, the Association of Test Publishers (ATP) hosts its annual conference, Innovations in Testing.  This is the leading conference in the testing industry, with nearly 1,000 people from major testing vendors and a wide range of test sponsors, from school districts to certification boards to employment testing companies.  While the technical depth is much lower than at purely scholarly conferences like NCME and IACAT, it is the top conference for networking, business contacts, and discussion of practical issues.

The conference is typically held in a warm location at the end of a long winter.  This year did not disappoint, with Orlando providing us with a sunny 75 degrees each day!

Interested in attending a conference on assessment and psychometrics?  We provide this list to help you decide.

ATP Presentations

Here are the four presentations given by the Assessment Systems team:

Let the CAT out of the Bag: Making Adaptive Testing more Accessible and Feasible

This session explored the barriers to implementing adaptive testing and how to address them.  CAT remains underutilized, and it is our job as a profession to fix that.

FastTest: A Comprehensive System for Assessment

FastTest revolutionizes how assessments are developed and delivered with scalable security.  We provided a demo session with an open discussion.  Want to see a demo yourself?  Get in touch!

Is Remote Proctoring Really Less Secure?

Rethink security. How can we leverage technology to improve proctoring?  Is the 2,000-year-old approach still the most valid?  Let’s critically evaluate remote proctoring and how technology can make it better than one person watching a room of 30 computers.

The Best of Both Worlds: Leveraging TEIs without Sacrificing Psychometrics

Anyone can dream up a drag-and-drop item. Can you make it automatically scored with IRT in real time?  We discussed our partnership with Charles County (MD) Public Schools to develop a system that allowed a single school district to develop quality assessments on par with what a major testing company would do.  Learn more about our item authoring platform.

 

In addition to speaking, I served on the scientific committee – here’s a pic of the volunteers (I am third from left in the front row).

 


Photo courtesy of the Association of Test Publishers


The United States Congress recently passed the 2016 National Defense Authorization Act.  This provides policies and authorizations for 2016 but not budgetary appropriations, which come in subsequent legislation.  While these Acts and related ones happen all the time, they rarely have a direct impact on the field of testing and assessment.

This one is different.

There is one section that specifically addresses a major portion of the testing industry.  Section 559 of the 2016 NDAA (full text below) requires that all professional certifications obtained by members of the armed forces be accredited by an independent third party.  For a discussion of the difference between certification and accreditation, read this post; here is more on this specific Act.

What does this mean for military certifications?

The US Military spends a lot of money to help its people gain relevant skills within the military as well as for transferring their skills to the civilian world.  If you work on information technology in the military, you probably want one of those many certifications.  If you work as a medical assistant, you probably want a certification.  If you are soon to exit the military and need to find a job, a certification is obviously beneficial.  The government will pay for these before you leave.  This of course is all good – we want to facilitate members of the military with a smooth transition to civilian life as well as a well-paying job that they like.

Of course, the private sector saw an opportunity.  There was nothing to stop anyone from sitting in their basement, writing 50 test questions on medical assisting, and putting them up on a website as a certification… and charging the US Govt hundreds of dollars each.  The legislation below is intended to stop that.  An independent third party must come in and give your certification a stamp of approval that it has been built according to generally accepted standards.  This is of course quite reasonable.  The only disconcerting piece is that the timeline is 3 years – meaning that fly-by-night operations can still keep going for now.

Who are these third parties?  There are really two:

NCCA: National Commission for Certifying Agencies

ANSI: American National Standards Institute

Want to learn more about accreditation?

Do you have a certification program that you are looking to get accredited, for this or any other reason?  We specialize in helping groups like yours.  Just get in touch, and one of our consultants will contact you and explain more about what is needed.


SEC. 559. QUALITY ASSURANCE OF CERTIFICATION PROGRAMS AND STANDARDS FOR PROFESSIONAL CREDENTIALS OBTAINED BY MEMBERS OF THE ARMED FORCES.

Section 2015 of title 10, United States Code, as amended by section 551 of the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015 (Public Law 113–291; 128 Stat. 3376), is further amended—

(1) by redesignating subsections (c) and (d) as subsections (d) and (e), respectively; and

(2) by inserting after subsection (b) the following new subsection (c):

“(c) Quality Assurance Of Certification Programs And Standards.— (1) Commencing not later than three years after the date of the enactment of the National Defense Authorization Act for Fiscal Year 2016, each Secretary concerned shall ensure that any credentialing program used in connection with the program under subsection (a) is accredited by an accreditation body that meets the requirements specified in paragraph (2).

“(2) The requirements for accreditation bodies specified in this paragraph are requirements that an accreditation body—

“(A) be an independent body that has in place mechanisms to ensure objectivity and impartiality in its accreditation activities;

“(B) meet a recognized national or international standard that directs its policy and procedures regarding accreditation;

“(C) apply a recognized national or international certification standard in making its accreditation decisions regarding certification bodies and programs;

“(D) conduct on-site visits, as applicable, to verify the documents and records submitted by credentialing bodies for accreditation;

“(E) have in place policies and procedures to ensure due process when addressing complaints and appeals regarding its accreditation activities;

“(F) conduct regular training to ensure consistent and reliable decisions among reviewers conducting accreditations; and

“(G) meet such other criteria as the Secretary concerned considers appropriate in order to ensure quality in its accreditation activities.”.
