
An assessment-based certificate (ABC) is a type of credential that is smaller and more focused than a Certification, License, or Degree. ABCs serve a distinct role in education and workforce development by providing a formal recognition of skills after a structured learning experience.

An assessment-based certificate is typically based on an educational experience like an online course, but instead of receiving the credential for simple completion, the candidate needs to pass an examination first. However, the examination is less extensive than that of a certification or license, and the educational component is less extensive than that of a degree.

In this blog post, we’ll dive into what an assessment-based certificate is, how it differs from other credentials, and what it takes to develop one. We’ll also touch on the importance of accreditation by the Assessment-based Certificate Accreditation Council (ACAC) affiliated with the Institute for Credentialing Excellence, which adds credibility and value to these programs.

Understanding Assessment-Based Certificates

In today’s competitive professional landscape, individuals and the organizations that employ them are continuously seeking ways to validate and expand their knowledge, skills, and competencies. However, not all credential programs are created equal.

An assessment-based certificate program is a structured learning and assessment process that leads to the issuance of a certificate, indicating that a learner has successfully completed the course and passed an assessment verifying their knowledge or competence in a particular area. ABCs are typically aligned with specific skills or knowledge areas that are immediately applicable to the learner’s profession or interests.

This type of certificate is not a certification in the traditional sense. In traditional certification programs, individuals are usually assessed on their ability to perform at a particular level of competence, often in a broad professional context. Traditional certifications also often require ongoing recertification through continued education or professional development. In contrast, assessment-based certificates are focused on verifying the acquisition of very specific competencies, usually through a one-time assessment after a learning intervention.

Example: Marketing Project Management Certificate

To make the concept clearer, imagine a marketing professional who wants to enhance their project management skills. They enroll in a short-term marketing project management course that covers the fundamentals of managing digital channels, budgets, and resources. After completing the course, they take an exam that tests their understanding of these concepts. If they pass, they receive an assessment-based certificate confirming that they’ve demonstrated knowledge in marketing project management principles. This certificate indicates they have completed the training and have been assessed, but it’s not equivalent to a full-fledged Project Management Professional (PMP) certification, which requires extensive knowledge across domains, experience, and ongoing renewal.

Key Characteristics of Assessment-Based Certificates

Tied to a specific course or training: The certificate is directly linked to the completion of a specific course or training session, followed by an assessment of the learner’s understanding. Certifications are, by definition, not tied to a certain course or training.

One-time assessment: ABCs generally focus on a one-time assessment rather than ongoing evaluations or renewal processes, as seen with certifications.

Focused on specific skills or knowledge: ABCs assess a specific skill set or knowledge area, making them highly focused and often shorter in duration than broader certification programs.

Why Pursue an Assessment-Based Certificate?

For learners, earning an ABC can be a practical way to gain recognition for skills that are immediately applicable in their jobs. For organizations, offering these certificates can enhance workforce development by ensuring that employees have the necessary competencies to perform specific roles. This is particularly relevant in industries like healthcare, IT, and project management, where specific technical skills are essential for success.

In sectors such as healthcare, for example, assessment-based certificates can serve to train and validate skills for specialized tasks. A nurse may obtain an assessment-based certificate in wound care management, which certifies their ability to properly treat and manage different types of wounds, even though they may not be pursuing a broader nursing certification.

Accreditation by the ACAC

When it comes to ensuring the quality and credibility of an assessment-based certificate, accreditation by a third party such as the Assessment-based Certificate Accreditation Council (ACAC) is key. ACAC accreditation provides a formal recognition that the certificate program meets high standards in terms of fairness, validity, and reliability.

The ACAC is an independent organization that accredits professional certification programs and has a separate set of accreditation standards for ABCs. Although ACAC accreditation is not required for all assessment-based certificates, it serves as a seal of quality for those that achieve it. Accreditation ensures that the program follows industry best practices and has established processes for ensuring the assessment accurately reflects the competencies required for the role or skill being tested.

Why ACAC Accreditation Matters

Credibility: ACAC accreditation gives employers, clients, and learners confidence that the certificate program meets high standards of integrity and validity.

Rigorous Standards: To earn ACAC accreditation, certificate programs must demonstrate that their assessments are fair, unbiased, and based on psychometrically sound principles. They must also show that the learning materials effectively prepare candidates for the assessments.

Employer Recognition: Many employers look for certificates that carry ACAC accreditation as an indicator that the individual has undergone a rigorous assessment process.

Improved Employment Opportunities: For individuals, earning an assessment-based certificate from a program that is ACAC-accredited can improve their employment prospects, as it signals a commitment to quality and a verified mastery of the required skills.

 

How to Develop an Assessment-Based Certificate Program

Creating a successful ABC program requires careful planning, as it involves not just designing a learning experience, but also developing a fair and valid assessment process to measure the learner’s acquisition of the required competencies. Below are the essential steps to consider when developing an assessment-based certificate program:

1. Identify the Target Audience and Competencies

First, it’s crucial to understand who the certificate is for and what specific skills or knowledge areas the program will cover. The target audience could be professionals seeking to upgrade their skills, recent graduates looking to specialize in a particular area, or even current employees needing a refresher on specific tasks. Clearly identifying the competencies that learners must demonstrate is key to shaping both the curriculum and the assessment.

For example, if you are creating an ABC for IT security professionals, the program should focus on the core competencies such as network security, data encryption, and threat detection. It’s important to ensure that the competencies are clearly defined and relevant to current industry standards.

2. Develop the Learning Content

Next, you’ll need to create structured learning materials that align with the competencies you’ve identified. The learning experience can take various forms, including online courses, workshops, or self-study modules. The content should be designed to effectively teach the competencies and prepare learners for the assessment.

For instance, if you’re creating an ABC program for marketing analytics, the content should cover topics like data analysis, performance metrics, and campaign optimization.

3. Design the Assessment

The assessment is the heart of an assessment-based certificate program. It should be designed to test the learner’s ability to perform the competencies at an acceptable level of proficiency. You’ll need to choose an appropriate assessment format—whether it’s a multiple-choice exam, performance-based assessment, or a practical demonstration of skills. Depending on the sophistication of the certificate program, you might need to perform job task analysis or standard setting.

If your ABC program is for customer service training, for example, the assessment could include a role-playing scenario where the learner must respond to a customer inquiry and resolve an issue using the techniques taught in the course.

4. Ensure Psychometric Soundness

It’s essential that the assessment is reliable and valid, meaning it consistently measures what it is intended to measure. Working with experts in psychometrics, the science of educational measurement, can ensure that the assessment is fair, unbiased, and accurate. This step is particularly important if you plan to pursue ACAC accreditation.
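
As a concrete starting point, here is a minimal Python sketch of one common reliability check, coefficient (Cronbach’s) alpha, computed on a matrix of scored responses. The data are hypothetical, and a real psychometric review would go well beyond alpha.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha for an (examinees x items) matrix of scored responses."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 0/1 responses: 5 examinees by 4 items
responses = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```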

5. Implement the Program and Monitor Its Effectiveness

Once the learning materials and assessments are in place, it’s time to launch the program. This involves creating a system for learners to enroll, complete the training, take the assessment, and receive their certificates. After launching, it’s important to monitor the program’s effectiveness by collecting feedback from participants and tracking pass rates on the assessments. Regular review and updates ensure that the content and assessment remain relevant.

Conclusion

An assessment-based certificate offers a flexible and targeted way to validate specific competencies in a given profession or field. Whether for individual career growth or organizational development, these certificates provide formal recognition of skills without the long-term commitments of traditional certification programs. ACAC accreditation plays a significant role in ensuring the credibility and quality of assessment-based certificates, offering both individuals and employers confidence in the value of the credential.

Developing an ABC program requires careful planning, content development, and assessment design to ensure that learners not only gain knowledge but are also evaluated accurately on their competency. In an increasingly specialized workforce, assessment-based certificates can serve as powerful tools for professional development and recognition.


In the realm of academic dishonesty and high-stakes exams such as certification, the term “pre-knowledge” refers to an important concern in test security and validity. Understanding what pre-knowledge entails and its implications in exam cheating can offer valuable insights into how the integrity of tests and their scores can be compromised. According to studies by Cizek and Wollack (2016), nearly 50% of high school students in the United States acknowledge having cheated on an exam at least once in the past year. Furthermore, around 10% of university students admit to copying answers from peers during exams (McCabe, 2024). These statistics highlight the prevalence of academic dishonesty across different educational levels, underscoring the importance of robust exam security measures.

What is Pre-Knowledge in Exam Cheating?

Pre-knowledge refers to having advance access to information or materials that are not available to all students before an exam. This can manifest in various ways, from receiving the actual questions that will be on the test to obtaining detailed information about the exam’s content through unauthorized means. Essentially, it’s an unfair advantage that undermines the principles of a level playing field in assessments.  Psychometricians would refer to it as a threat to validity.

How Pre-Knowledge Occurs

The concept of pre-knowledge encompasses several scenarios. For instance, a student might gain insight into the exam questions through illicit means such as hacking into a teacher’s email, collaborating with insiders, or even through more subtle means like overhearing discussions. This sort of advantage is particularly problematic because it negates the purpose of assessments, which is to evaluate a student’s understanding and knowledge acquired through study and effort.

Perhaps the greatest source of pre-knowledge is “brain dump” websites, where test takers go soon afterwards to post all the questions they can remember, dumping their short-term memory.  Over time, this can lead to large portions of the item bank being exposed to the public. These sites are often intentionally based in countries where such activity is not legally actionable, leaving test sponsors little to no legal recourse to prevent their operations.

Implications of Pre-Knowledge

Pre-knowledge not only disrupts the fairness of the examination process but also poses significant challenges for test sponsors. Institutions must be vigilant and proactive in safeguarding against such breaches. Implementing robust security measures, such as secure testing environments and rigorous monitoring, is crucial to prevent the unauthorized dissemination of exam content.

Pre-knowledge and other forms of cheating can have far-reaching consequences beyond the immediate context of the exam. For instance, students who gain an unfair advantage may achieve higher grades, which can influence their future academic and career opportunities. This creates an inequitable situation where hard work and honest efforts are overshadowed by dishonesty.

Engaging in brain-dump activities is also a form of intellectual property theft.  Many organizations spend a lot of money to develop high-quality assessments, sometimes thousands of dollars per item!  Cheating in this manner will therefore increase the costs of certification exams or educational opportunities.  Moreover, the organizations can pursue legal action against cheaters.

Most importantly, cheating on exams can invalidate scores and therefore the entire credential.  If everyone is cheating and passing a certification exam, then the certification itself becomes worthless.  So by participating in brain-dump activities, you are actually hurting yourself.  And if the purpose of the test is to protect the public, such as ensuring that people working in healthcare fields are actually qualified, you are hurting the general public.  If you are going into healthcare to help patients, you need to realize that brain-dump activity only ends up hurting patients in the end.

Tips to Prevent Pre-Knowledge

Educational institutions and other test sponsors play a pivotal role in addressing the issue of pre-knowledge. Here are some effective strategies to help prevent pre-knowledge and maintain the integrity of academic assessments.  You might also benefit from this discussion on credentialing exams.

  1. Secure Testing Environments: Ensure exams are administered in a controlled environment where access to unauthorized materials is restricted.
  2. Monitor and Supervise: Use proctors and surveillance to oversee exam sessions and detect any suspicious behavior.
  3. Randomize Questions: Utilize question banks with randomized questions to prevent predictable patterns of exam content.
  4. Strengthen Digital Security: Implement robust cybersecurity measures to protect exam content and prevent unauthorized access.
  5. Educate Examinees: Foster a culture of academic integrity by educating students about the consequences of cheating and the importance of honest efforts.
  6. Regularly Update Procedures: Review and update security measures and examination procedures regularly to address emerging threats.
  7. Encourage Open Communication: Provide support and resources for students struggling with their studies to reduce the temptation to seek unfair advantages.
  8. Strong NDA: This is one of the most important approaches.  If someone signs a Non-Disclosure Agreement stipulating that if they engage in cheating they might not only fail the exam but also be blocked from the entire profession and subject to lawsuits, this can be a huge deterrent.
  9. Multiple forms: Make multiple versions of the exam, and rotate them frequently.
  10. LOFT/CAT: Linear on the fly testing (LOFT) makes nearly infinite parallel versions of the exam, increasing security.  Computerized adaptive testing (CAT) creates a personalized exam for each candidate.
  11. Scan the internet: You can search the internet for text strings from your items, helping you find the brain dump sites and address them; a minimal sketch of this check follows this list.
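
As an illustration of tip 11, here is a minimal Python sketch that checks whether any item stems appear verbatim on a list of suspect web pages. The URLs and stems below are hypothetical placeholders; a production system would use a crawler or search API, plus fuzzy matching to catch paraphrased items.

```python
import requests

# Hypothetical inputs: secure item stems and pages you want to check
item_stems = [
    "Which layer of the OSI model handles routing?",
    "A patient presents with a stage 2 pressure ulcer",
]
suspect_urls = ["https://example.com/exam-dumps"]  # placeholder URL

for url in suspect_urls:
    try:
        page_text = requests.get(url, timeout=10).text.lower()
    except requests.RequestException as err:
        print(f"Could not fetch {url}: {err}")
        continue
    for stem in item_stems:
        if stem.lower() in page_text:
            print(f"Possible exposure: '{stem[:40]}...' found at {url}")
```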

 

What if it happens?

There are two aspects to this: finding that it happened, and then deciding what to do about it.

To find that it happened, there are several types of evidence.

  1. Video – If there was proctoring, do we have video, perhaps of someone taking pictures of the exam with their phone?
  2. Technology – Perhaps a post on a brain dump site was done with someone’s email address or IP address.
  3. Eyewitness – You might get tipped off by another candidate that has higher integrity.
  4. Psychometric forensics – Statistical analysis of response data can reveal suspicious patterns.  On the brain dump side, you might find candidates that don’t attempt all the items on the test but spend most of the time on a few… because they are memorizing them.  On the other end, you might find candidates that get a high score in a very short amount of time; a simple flagging sketch follows this list.  There are also much more advanced methods, as discussed in this book.
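
To illustrate the last point, here is a minimal Python sketch of a crude speed-score flag: examinees whose total score is unusually high while their total testing time is unusually low. The data and percentile thresholds are hypothetical; operational forensics, such as the collusion indices computed by SIFT, are far more sophisticated.

```python
import numpy as np

def flag_speed_score(scores, times, score_pct=90, time_pct=10):
    """Flag examinees with an unusually high score earned in unusually
    little time -- a crude indicator of possible pre-knowledge."""
    scores = np.asarray(scores, dtype=float)
    times = np.asarray(times, dtype=float)
    high_score = scores >= np.percentile(scores, score_pct)
    fast = times <= np.percentile(times, time_pct)
    return np.where(high_score & fast)[0]   # indices of flagged examinees

# Hypothetical data: total scores and total minutes for 8 candidates
scores = [95, 62, 70, 88, 97, 55, 81, 74]
times = [22, 85, 90, 70, 25, 95, 80, 77]
print(flag_speed_score(scores, times, score_pct=75, time_pct=25))  # -> [0 4]
```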

What should your organization do?  First of all, you need a test security plan ahead of time.  In that plan, you should lay out the processes as well as the penalties.  This ensures that examinees have due process.  Unfortunately, some organizations just stick their heads in the sand; as long as candidates are paying the registration fees and the end stakeholders are not complaining, why rock the boat?

Summary

In conclusion, pre-knowledge in exam cheating represents a serious breach of academic integrity that undermines the educational process. It highlights the need for vigilance, robust security measures, and a culture of honesty within educational institutions. By understanding and addressing the nuances of pre-knowledge, educators and students can work together to uphold the values of fairness and integrity in academic assessments.

SIFT, developed by Assessment Systems Corporation (ASC), is specialized software designed to compute various collusion indices for detecting potential cheating in exams. Unlike many data forensics methods that require psychometricians to write custom code, SIFT provides a streamlined solution by automating these analyses. The software can also aggregate results based on group variables like testing locations or classrooms, making it particularly useful for large-scale test security evaluations.

References

Cizek, G. J., & Wollack, J. A. (2016). Exploring cheating on tests: The context, the concern, and the challenges. In Handbook of quantitative methods for detecting cheating on tests (pp. 3-19). Routledge. https://doi.org/10.4324/9781315743097

McCabe, D. (2024). Cheating and honor: Lessons from a long-term research project. Second Handbook of Academic Integrity, 599-610.


Setting a cutscore on a test scored with item response theory (IRT) requires some psychometric knowledge.  This post will get you started.

How do I set a cutscore with item response theory?

There are two approaches: directly with IRT, or using CTT then converting to IRT.

  1. Some standard setting methods work directly with IRT, such as the Bookmark method.  Here, you calibrate your test with IRT, rank the items by difficulty, and have an expert panel place “bookmarks” in the ranked list.  The average IRT difficulty of their bookmarks is then a defensible IRT cutscore.  The Contrasting Groups method and the Hofstee method can also work directly with IRT.
  2. Cutscores set with classical test theory, such as the Angoff, Nedelsky, or Ebel methods, are easy to implement when the test is scored classically.  But if your test is scored with the IRT paradigm, you need to convert your cutscores onto the theta scale.  The easiest way to do that is to reverse-calculate the test response function (TRF) from IRT.

The Test Response Function

The TRF (sometimes called a test characteristic curve) is an important method of characterizing test performance in the IRT paradigm.  The TRF predicts a classical score from an IRT score, as you see below.  Like the item response function and the test information function, it uses the theta scale as the X-axis.  The Y-axis can be either the number-correct metric or the proportion-correct metric.

[Figure: test response function for a 10-item exam]

In this example, you can see that a theta of -0.3 translates to an estimated number-correct score of approximately 7, or 70%.

Classical cutscore to IRT

So how does this help us with the conversion of a classical cutscore?  Well, we hereby have a way of translating any number-correct score or proportion-correct score.  So any classical cutscore can be reverse-calculated to a theta value.  If your Angoff study (or Beuk) recommends a cutscore of 7 out of 10 points (70%), you can convert that to a theta cutscore of -0.3 as above.  If the recommended cutscore was 8 (80%), the theta cutscore would be approximately 0.7.
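
Here is a minimal Python sketch of this reverse calculation, assuming you have 3PL item parameters for the form. The TRF is the average of the item response functions, and because it is monotonic in theta, a simple bisection search recovers the theta corresponding to any proportion-correct cutscore. The item parameters below are hypothetical.

```python
import numpy as np

def trf(theta, a, b, c):
    """Test response function: expected proportion-correct under the 3PL."""
    p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
    return p.mean()

def theta_for_cutscore(target, a, b, c, lo=-4.0, hi=4.0, tol=1e-4):
    """Find the theta whose TRF equals a classical cutscore (e.g., 0.70
    from an Angoff study), via bisection on the monotonic TRF."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if trf(mid, a, b, c) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical 3PL parameters for a 10-item test
a = np.full(10, 1.0)                      # discrimination
b = np.linspace(-2, 2, 10)                # difficulty
c = np.full(10, 0.20)                     # pseudo-guessing
print(theta_for_cutscore(0.70, a, b, c))  # theta equivalent of a 70% cutscore
```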

This works because IRT scores examinees on the same scale with any set of items, as long as those items have been part of a linking/equating study.  Therefore, a single standard-setting study on a set of items can be applied to any other linear test form, LOFT pool, or CAT pool.  This makes it possible to apply the classically-focused Angoff method to IRT-focused programs.  You can even set the cutscore with a subset of your item pool, in a linear sense, with the full intention to apply it on CAT tests later.

Note that the number-correct metric only makes sense for linear or LOFT exams, where every examinee receives the same number of items.  In the case of CAT exams, only the proportion correct metric makes sense.

How do I implement IRT?

Interested in applying IRT to improve your assessments?  Download a free trial copy of  Xcalibre  here.  If you want to deliver online tests that are scored directly with IRT, in real time (including computerized adaptive testing), check out  FastTest.


A modified-Angoff study is one of the most common ways to set a defensible cutscore on an exam.  Using it means that the pass/fail decisions made by the test are more trustworthy than if you picked an arbitrary round number like 70%.  If your doctor, lawyer, accountant, or other professional has passed an exam where the cutscore was set with this method, you can place more trust in their skills.

What is the Angoff method?

The Angoff method is a scientific way of setting a cutscore (pass point) on a test.  If you have a criterion-referenced interpretation, it is not legally defensible to just conveniently pick a round number like 70%; you need a formal process.  There are a number of acceptable methodologies in the psychometric literature for standard-setting studies, which establish cutscores or passing points.  Some examples include Angoff, modified-Angoff, Bookmark, Contrasting Groups, and Borderline.  The modified-Angoff approach is by far the most popular.  It is used especially frequently for certification, licensure, certificate, and other credentialing exams.

It was originally suggested as a mere footnote by renowned researcher William Angoff at Educational Testing Service. Studies have found that panelists involved in modified-Angoff sessions typically demonstrate high agreement, with inter-rater reliability often surpassing 0.85, showcasing the method’s robustness in decision consistency.

How does the Angoff approach work?

First, you gather a group of subject matter experts (SMEs), with a minimum of 6, though 8-10 is preferred for better reliability, and have them define what they consider to be a Minimally Competent Candidate (MCC).  Next, you have them estimate the percentage of minimally competent candidates that will answer each item correctly.  You then analyze the results for outliers or inconsistencies.  If experts disagree, you will need to evaluate inter-rater reliability and agreement, and after that have the experts discuss and re-rate the items to gain better consensus.  The average final rating is then the expected percent-correct score for a minimally competent candidate.

Advantages of the Angoff method

  1. It is defensible.  Because it is the most commonly used approach and is widely studied in the scientific literature, it is well-accepted.
  2. You can implement it before a test is ever delivered.  Some other methods require you to deliver the test to a large sample first.
  3. It is conceptually simple, easy enough to explain to non-psychometricians.
  4. It incorporates the judgment of a panel of experts, not just one person or a round number.
  5. It works for tests with both classical test theory and item response theory.
  6. It does not take long to implement – for a short test, it can be done in a matter of hours!
  7. It can be used with different item types, including polytomously scored items (multi-points).

Disadvantages of the Angoff method

  1. It does not use actual data, unless you implement the Beuk method alongside.
  2. It can lead to the experts overestimating the performance of entry-level candidates, as they may have forgotten what it was like to start out 20-30 years ago.  This is one reason to use the Beuk method as a “reality check” by showing the experts that if they stay with the cutscore they just picked, the majority of candidates might fail!

Example of the Modified-Angoff Approach

First of all, do not expect a straightforward, easy process that leads to an unassailably correct cutscore.  All standard-setting methods involve some degree of subjectivity.  The goal of the methods is to reduce that subjectivity as much as possible.  Some methods focus on content, others on examinee performance data, while some try to meld the two.

Step 1: Prepare Your Team

The modified-Angoff process depends on a representative sample of SMEs, usually 6-20. By “representative” I mean they should represent the various stakeholders. For instance, a certification for medical assistants might include experienced medical assistants, nurses, and physicians, from different areas of the country. You must train them about their role and how the process works, so they can understand the end goal and drive toward it.

Step 2: Define The Minimally Competent Candidate (MCC)

This concept is the core of the modified-Angoff method, though it is known by a range of terms or acronyms, including minimally qualified candidates (MQC) or just barely qualified (JBQ).  The reasoning is that we want our exam to separate candidates that are qualified from those that are not.  So we ask the SMEs to define what makes someone qualified (or unqualified!) from a perspective of skills and knowledge. This leads to a conceptual definition of an MCC. We then want to estimate what score this borderline candidate would achieve, which is the goal of the remainder of the study. This step can be conducted in person, or via webinar.

Step 3: Round 1 Ratings

Next, ask your SMEs to read through all the items on your test form and estimate the percentage of MCCs that would answer each correctly.  A rating of 100 means the item is a slam dunk; it is so easy that every MCC would get it right.  A rating of 40 means the item is very difficult.  Most ratings are in the 60-90 range if the items are well-developed. The ratings should be gathered independently; if everyone is in the same room, let them work on their own in silence. This can easily be conducted remotely, though.

Step 4: Discussion

This is where it gets fun.  Identify the items with the most disagreement (as defined by grouped frequency distributions or the standard deviation of ratings) and have the SMEs discuss them.  Maybe two SMEs thought an item was super easy and gave it a 95, while two other SMEs thought it was super hard and gave it a 45.  They will try to convince the other side of their folly. Chances are that there will be no shortage of opinions and you, as the facilitator, will find your greatest challenge is keeping the meeting on track. This step can be conducted in person, or via webinar.
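
Here is a minimal Python sketch of how a facilitator might flag high-disagreement items using the standard deviation of ratings. The ratings and the SD threshold of 10 are hypothetical.

```python
import numpy as np

# Hypothetical Round 1 Angoff ratings: rows = raters, columns = items
ratings = np.array([
    [95, 70, 60, 85],
    [90, 75, 80, 80],
    [45, 72, 65, 90],
    [92, 68, 85, 88],
])

sd_per_item = ratings.std(axis=0, ddof=1)
for i, sd in enumerate(sd_per_item):
    tag = " <- discuss in Step 4" if sd > 10 else ""
    print(f"Item {i + 1}: mean={ratings[:, i].mean():.1f}, SD={sd:.1f}{tag}")
```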

Step 5: Round 2 Ratings

Raters then re-rate the items based on the discussion.  The goal is that there will be a greater consensus.  In the previous example, it’s not likely that every rater will settle on a 70.  But if your raters all end up from 60-80, that’s OK. How do you know there is enough consensus?  We recommend the inter-rater reliability suggested by Shrout and Fleiss (1979), as well as looking at inter-rater agreement and dispersion of ratings for each item. This use of multiple rounds is known as the Delphi approach; it pertains to all consensus-driven discussions in any field, not just psychometrics.

Step 6: Evaluate Results and Final Recommendation

Evaluate the results from Round 2 as well as Round 1.  An example of this is below.  What is the recommended cutscore, which is the average or sum of the Angoff ratings depending on the scale you prefer?  Did the reliability improve?  Estimate the mean and SD of examinee scores (there are several methods for this). What sort of pass rate do you expect?  Even better, utilize the Beuk Compromise as a “reality check” between the modified-Angoff approach and actual test data.  You should take multiple points of view into account, and the SMEs need to vote on a final recommendation. They, of course, know the material and the candidates so they have the final say.  This means that standard setting is a political process; again, reduce that effect as much as you can.

Some organizations do not set the cutscore at the recommended point, but at one standard error of judgment (SEJ) below the recommended point.  The SEJ is based on the inter-rater reliability; note that it is NOT the standard error of the mean or the standard error of measurement.  Some organizations use the latter; the former is just plain wrong (though I have seen it used by amateurs).
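
To make Steps 5 and 6 concrete, here is a minimal Python sketch that computes the panel’s recommended cutscore along with one common formulation of the SEJ: the standard deviation of the raters’ implied cutscores divided by the square root of the number of raters. The ratings are hypothetical.

```python
import numpy as np

# Hypothetical Round 2 ratings: rows = raters, columns = items (percent scale)
round2 = np.array([
    [90, 72, 70, 85],
    [88, 75, 75, 82],
    [80, 70, 68, 86],
    [91, 69, 78, 84],
])

rater_cutscores = round2.mean(axis=1)           # each rater's implied cutscore
cutscore = rater_cutscores.mean()               # panel recommendation (percent)
k = round2.shape[0]                             # number of raters
sej = rater_cutscores.std(ddof=1) / np.sqrt(k)  # one common SEJ formulation

print(f"Recommended cutscore: {cutscore:.1f}%")
print(f"SEJ: {sej:.2f}; cutscore minus one SEJ: {cutscore - sej:.1f}%")
```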

 

[Figure: example summary of modified-Angoff ratings across rounds]

Step 7: Write Up Your Report

Validity refers to evidence gathered to support test score interpretations.  Well, you have lots of relevant evidence here. Document it.  If your test gets challenged, you’ll have all this in place.  On the other hand, if you just picked 70% as your cutscore because it was a nice round number, you could be in trouble.

Additional Topics

In some situations, there are more issues to worry about.  Multiple forms?  You’ll need to equate in some way.  Using item response theory?  You’ll have to convert the cutscore from the modified-Angoff method onto the theta metric using the Test Response Function (TRF).  New credential and no data available? That’s a real chicken-and-egg problem there.

Where Do I Go From Here?

Ready to take the next step and actually apply the modified-Angoff process to improving your exams?  Sign up for a free account in our  FastTest item banker. You can also download our Angoff analysis tool for free.

References

Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86(2), 420-428.


Scaling is a psychometric term regarding the establishment of a score metric for a test, and it often has two meanings. First, it involves defining the method used to operationally score the test, establishing an underlying scale on which people are being measured.  A common example is the T-score, which transforms raw scores into a standardized scale with a mean of 50 and a standard deviation of 10, making it easier to compare results across different populations or test forms.  Second, it refers to score conversions used for reporting scores, especially conversions that are designed to carry specific information.  The latter is typically called scaled scoring.

Examples of Scaling

You have all been exposed to this type of scaling, though you might not have realized it at the time. Most high-stakes tests like the ACT, SAT, GRE, and MCAT are reported on scales that are selected to convey certain information, with the actual numbers selected more or less arbitrarily. The SAT and GRE have historically had a nominal mean of 500 and a standard deviation of 100, while the ACT has a nominal mean of 18 and standard deviation of 6. These are essentially the same scale, because each is nothing more than a converted z-score (standard score), simply because no examinee wants to receive a score report that says they got a score of -1. The numbers above were arbitrarily selected, and then the score range bounds were selected based on the fact that roughly 99.7% of the population is within plus or minus three standard deviations. Hence, the SAT and GRE range from 200 to 800 and the ACT ranges from 0 to 36. This leads to the urban legend of receiving 200 points for writing your name correctly on the SAT; again, it feels better for the examinee. A score of 300 might seem like a big number and 100 points above the minimum, but it just means that someone is at about the 2nd percentile.

Now, notice that I said “nominal.” I said that because the tests do not actually have those means observed in samples, because the samples have substantial range restriction. Because these tests are only taken by students serious about proceeding to the next level of education, the actual sample is of higher ability than the population. The lower third or so of high school students usually do not bother with the SAT or ACT. So many states will have an observed average ACT of 21 and standard deviation of 4. This is an important issue to consider in developing any test. Consider just how restricted the population of medical school students is; it is a very select group.

How can I select a score scale?


For various reasons, actual observed scores from tests are often not reported, and only converted scores are reported.  If there are multiple forms which are being equated, scaling will hide the fact that the forms differ in difficulty, and in many cases, differ in cutscore.  Scaled scores can facilitate feedback.  They can also help the organization avoid explanations of IRT scoring, which can be a headache to some.

When deciding on the conversion calculations, there are several important questions to consider.

First, do we want to be able to make fine distinctions among examinees? If so, the range should be sufficiently wide. My personal view is that the scale should be at least as wide as the number of items; otherwise you are voluntarily giving up information. This in turn means you are giving up variance, which makes it more difficult to correlate your scaled scores with other variables, as when the MCAT is correlated with success in medical school. This, of course, means that you are hampering future research – unless that research is able to revert back to actual observed scores to make sure all information possible is used. For example, suppose a test with 100 items is reported on a 5-point grade scale of A-B-C-D-F. That scale is quite restricted, and therefore difficult to correlate with other variables in research. But you have the option of reporting the grades to students and still using the original scores (0 to 100) for your research.

Along the same lines, we can swing completely in the other direction. For many tests, the purpose of the test is not to make fine distinctions, but only to broadly categorize examinees. The most common example of this is a mastery test, where the examinee is being assessed on their mastery of a certain subject, and the only possible scores are pass and fail. Licensure and certification examinations are an example. An extension of this is the “proficiency categories” used in K-12 testing, where students are classified into four groups: Below Basic, Basic, Proficient, and Advanced. This is used in the National Assessment of Educational Progress. Again, we see the care taken for reporting of low scores; instead of receiving a classification like “nonmastery” or “fail,” the failures are given the more palatable “Below Basic.”

Another issue to consider, which is very important in some settings but irrelevant in others, is vertical scaling. This refers to the chaining of scales across various tests that are at quite different levels. In education, this might involve linking the scales of exams in 8th grade, 10th grade, and 12th grade (graduation), so that student progress can be accurately tracked over time. Obviously, this is of great use in educational research, such as the medical school process. But for a test to award a certification in a medical specialty, it is not relevant because it is really a one-time deal.

Lastly, there are three calculation options: pure linear (ScaledScore = RawScore * Slope + Intercept), standardized conversion (Old Mean/SD to New Mean/SD), and nonlinear approaches like Equipercentile.
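
Here is a minimal Python sketch of the first two options; equipercentile conversion is omitted because it requires full score distributions from a reference group. The slope, intercept, and target mean/SD below are hypothetical.

```python
import numpy as np

raw = np.array([12, 25, 31, 40, 47])   # hypothetical raw scores

# Pure linear: ScaledScore = RawScore * Slope + Intercept
scaled_linear = raw * 4 + 200

# Standardized conversion: old mean/SD -> new mean/SD (here 500/100, SAT-style)
z = (raw - raw.mean()) / raw.std(ddof=1)
scaled_std = z * 100 + 500

print(scaled_linear)          # [248 300 324 360 388]
print(np.round(scaled_std))
```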

Perhaps the most important issue is whether the scores from the test will be criterion-referenced or norm-referenced. Often, this choice will be made for you because it distinctly represents the purpose of your tests. However, it is quite important and usually misunderstood, so I will discuss this in detail.

Criterion-Referenced vs. Norm-Referenced


This is a distinction between the ways test scores are used or interpreted. A criterion-referenced score interpretation means that the score is interpreted with regards to defined content, blueprint, or curriculum (the criterion), and ignores how other examinees perform (Bond, 1996). A classroom assessment is the most common example; students are scored on the percent of items correct, which is taken to imply the percent of the content they have mastered. Conversely, a norm-referenced score interpretation is one where the score provides information about the examinee’s standing in the population, but no absolute (or ostensibly absolute) information regarding their mastery of content. This is often the case with non-educational measurements like personality or psychopathology. There is no defined content which we can use as a basis for some sort of absolute interpretation. Instead, scores are often either z-scores or some linear function of z-scores.  IQ is historically scaled with a mean of 100 and standard deviation of 15.

It is important to note that this dichotomy is not a characteristic of the test, but of the test score interpretations. This fact is more apparent when you consider that a single test or test score can have several interpretations, some of which are criterion-referenced and some of which are norm-referenced. We will discuss this deeper when we reach the topic of validity, but consider the following example. A high school graduation exam is designed to be a comprehensive summative assessment of a secondary education. It is therefore specifically designed to cover the curriculum used in schools, and scores are interpreted within that criterion-referenced context. Yet scores from this test could also be used for making acceptance decisions at universities, where scores are only interpreted with respect to their percentile (e.g., accept the top 40%). The scores might even do a fairly decent job at this norm-referenced application. However, this is not what they are designed for, and such score interpretations should be made with caution.

Another important note is the definition of “criterion.” Because most tests with criterion-referenced scores are educational and involve a cutscore, a common misunderstanding is that the cutscore is the criterion. It is still the underlying content or curriculum that is the criterion, because we can have this type of score interpretation without a cutscore. Regardless of whether there is a cutscore for pass/fail, a score on a classroom assessment is still interpreted with regards to mastery of the content.  To further add to the confusion, Industrial/Organizational psychology refers to outcome variables as the criterion; for a pre-employment test, the criterion is typically Job Performance at a later time.

This dichotomy also leads to some interesting thoughts about the nature of your construct. If you have a criterion-referenced score, you are assuming that the construct is concrete enough that anybody can make interpretations regarding it, such as mastering a certain percentage of content. This is why non-concrete constructs like personality tend to be only norm-referenced. There is no agreed-upon blueprint of personality.

Multidimensional Scaling


An advanced topic worth mentioning is multidimensional scaling (see Davison, 1998). The purpose of multidimensional scaling is similar to factor analysis (a later discussion!) in that it is designed to evaluate the underlying structure of constructs and how they are represented in items. This is therefore useful if you are working with constructs that are brand new, so that little is known about them, and you think they might be multidimensional. This is a pretty small percentage of the tests out there in the world; I encountered the topic in my first year of graduate school – only because I was in a Psychological Scaling course – and have not encountered it since.

Summary of test scaling

Scaling is the process of defining the scale on which your measurements will take place. It raises fundamental questions about the nature of the construct. Fortunately, in many cases we are dealing with a simple construct that has a well-defined content, like an anatomy course for first-year medical students. Because it is so well-defined, we often take criterion-referenced score interpretations at face value. But as constructs become more complex, like job performance of a first-year resident, it becomes harder to define the scale, and we start to deal more in relatives than absolutes. At the other end of the spectrum are completely ephemeral constructs where researchers still can’t agree on the nature of the construct and we are pretty much limited to z-scores. Intelligence is a good example of this.

Some sources attempt to delineate the scaling of people and items or stimuli as separate things, but this is really impossible, as they are so confounded: people define item statistics (the percent of people that get an item correct) and items define people scores (the percent of items a person gets correct). It is for this reason that item response theory, the most advanced paradigm in measurement theory, was designed to place items and people on the same scale. It is also for this reason that item writers should consider how items are going to be scored and therefore lead to person scores. But because we start writing items long before the test is administered, and the nature of the construct is caught up in the scale, the issues presented here need to be addressed at the very beginning of the test development cycle.


Certification exams are a critical component of workforce development for many professions and play a significant role in the global Testing, Inspection, and Certification (TIC) market, which was valued at approximately $359.35 billion in 2022 and is projected to grow at a compound annual growth rate (CAGR) of 4.0% from 2023 to 2030. As such, a lot of effort goes into exam development and delivery, working to ensure that the exams are valid and fair, then delivered securely yet with enough convenience to reach the target market. If you work for a certification organization or awarding body, this article provides a guidebook to that process and how to select a vendor.

Certification Exam Development

Certification exam development is a well-defined process governed by accreditation guidelines such as NCCA, requiring steps such as job task analysis and standard setting studies.  For certification, and other credentialing like licensure or certificates, this process is incredibly important to establishing validity.  Such exams serve as gatekeepers into many professions, often after people have invested a ton of money and years of their life in preparation.  Therefore, it is critical that the tests be developed well, and have the necessary supporting documentation to show that they are defensible.

So what exactly goes into developing a quality exam, sound psychometrics, and establishing the validity documentation, perhaps enough to achieve NCCA accreditation for your certification? Well, there is a well-defined and recognized process for certification exam development, though it is rarely the exact same for every organization.  In general, the accreditation guidelines say you need to address these things, but leave the specific approach up to you.  For example, you have to do a cutscore study, but you are allowed to choose Bookmark vs. Angoff vs. other methods.

Job Analysis / Practice Analysis

A job analysis study provides the vehicle for defining the important job knowledge, skills, and abilities (KSA) that will later be translated into content on a certification exam. During a job analysis, important job KSAs are obtained by directly analyzing job performance of highly competent job incumbents or surveying subject-matter experts regarding important aspects of successful job performance. The job analysis generally serves as a fundamental source of evidence supporting the validity of scores for certification exams.

Test Specifications and Blueprints

The results of the job analysis study are quantitatively converted into a blueprint for the certification exam.  Basically, it comes down to this: if the experts say that a certain topic or skill is done quite often or is very critical, then it deserves more weight on the exam, right?  There are different ways to do this.  My favorite article on the topic is Raymond & Neustel (2006).  Here’s a free tool to help.
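
As an illustration, here is a minimal Python sketch of one common weighting scheme: multiply mean frequency and criticality ratings from the JTA survey, then normalize into blueprint percentages. The domains and ratings are hypothetical, and Raymond & Neustel discuss several alternative models.

```python
import numpy as np

# Hypothetical mean JTA survey ratings per content domain (1-5 scales)
domains = ["Assessment", "Planning", "Intervention", "Ethics"]
frequency = np.array([4.5, 3.0, 4.0, 2.5])
criticality = np.array([4.0, 3.5, 4.5, 5.0])

importance = frequency * criticality     # one common multiplicative model
weights = importance / importance.sum()  # normalize to blueprint proportions

for domain, w in zip(domains, weights):
    print(f"{domain}: {w:.0%} of the exam")  # translate to item counts later
```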

[Figure: the test development cycle, from job task analysis onward]

Item Development

After important job KSAs are established, subject-matter experts write test items to assess them. The end result is the development of an item bank from which exam forms can be constructed. The quality of the item bank also supports test validity.  A key operational step is the development of an Item Writing Guide and holding an item writing workshop for the SMEs.

Pilot Testing

There should be evidence that each item in the bank actually measures the content that it is supposed to measure; in order to assess this, data must be gathered from samples of test-takers. After items are written, they are generally pilot tested by administering them to a sample of examinees in a low-stakes context—one in which examinees’ responses to the test items do not factor into any decisions regarding competency. After pilot test data is obtained, a psychometric analysis of the test and test items can be performed. This analysis will yield statistics that indicate the degree to which the items measure the intended test content. Items that appear to be weak indicators of the test content are generally removed from the item bank or flagged so that subject matter experts can review them for correctness and clarity.

Note that this is not always possible, and is one of the ways that different organizations diverge in how they approach exam development.

Standard Setting

Standard setting also is a critical source of evidence supporting the validity of professional credentialing exam (i.e. pass/fail) decisions made based on test scores.  Standard setting is a process by which a passing score (or cutscore) is established; this is the point on the score scale that differentiates between examinees that are and are not deemed competent to perform the job. In order to be valid, the cutscore cannot be arbitrarily defined. Two examples of arbitrary methods are the quota (setting the cut score to produce a certain percentage of passing scores) and the flat cutscore (such as 70% on all tests). Both of these approaches ignore the content and difficulty of the test.  Avoid these!

Instead, the cutscore must be based on one of several well-researched criterion-referenced methods from the psychometric literature.  There are two types of criterion-referenced standard-setting procedures (Cizek, 2006): examinee-centered and test-centered.

The Contrasting Groups method is one example of a defensible examinee-centered standard-setting approach. This method compares the scores of candidates previously defined as Pass or Fail. Obviously, this has the drawback that a separate method already exists for classification. Moreover, examinee-centered approaches such as this require data from examinees, but many testing programs wish to set the cutscore before publishing the test and delivering it to any examinees. Therefore, test-centered methods are more commonly used in credentialing.

The most frequently used test-centered method is the Modified Angoff Method (Angoff, 1971) which requires a committee of subject matter experts (SMEs).  Another commonly used approach is the Bookmark Method.

Equating

If the test has more than one form – which is required by NCCA Standards and other guidelines – they must be statistically equated.  If you use classical test theory, there are methods like Tucker or Levine.  If you use item response theory, you can either bake the equating into the item calibration process with software like Xcalibre, or use conversion methods like Stocking & Lord.

What does this process do?  Well, if this year’s certification exam had an average of 3 points higher than last year’s, how do you know if this year’s version was 3 points easier, or this year’s cohort was 3 points smarter, or a mixture of both?  Learn more here.
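
As an illustration, here is a minimal Python sketch of mean-sigma linking, a simpler cousin of the Stocking & Lord approach. It uses the difficulty estimates of anchor items that appear on both forms to place this year’s calibration onto last year’s scale. The parameter values are hypothetical.

```python
import numpy as np

# Hypothetical difficulty (b) estimates of anchor items on two forms
b_old = np.array([-1.2, -0.4, 0.3, 1.1])  # last year's calibration
b_new = np.array([-0.9, -0.1, 0.6, 1.5])  # this year's calibration

# Mean-sigma linking: find A and B such that b_old ~= A * b_new + B
A = b_old.std(ddof=1) / b_new.std(ddof=1)
B = b_old.mean() - A * b_new.mean()

print(f"theta_old = {A:.3f} * theta_new + {B:.3f}")
```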

Psychometric Analysis & Reporting

This part is an absolutely critical step in the exam development cycle for professional credentialing.  You need to statistically analyze the results to flag any items that are not performing well, so you can replace or modify them.  This looks at statistics like item p-value (difficulty), item point biserial (discrimination), option/distractor analysis, and differential item functioning.  You should also look at overall test reliability/precision and other psychometric indices.  If you are accredited, you need to perform year-end reports and submit them to the governing body.  Learn more about item and test analysis.
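
Here is a minimal Python sketch of the first two statistics on a small hypothetical data set. Note that this simple point-biserial correlates each item with a total score that includes the item itself; operational analyses typically also report a corrected version that removes the item from the total first.

```python
import numpy as np

# Hypothetical scored responses: rows = examinees, columns = items (1 = correct)
X = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])

p_values = X.mean(axis=0)   # item difficulty: proportion answering correctly
total = X.sum(axis=1)       # total score per examinee
pbis = [np.corrcoef(X[:, j], total)[0, 1] for j in range(X.shape[1])]

for j in range(X.shape[1]):
    flag = " <- review" if p_values[j] < 0.30 or pbis[j] < 0.10 else ""
    print(f"Item {j + 1}: p={p_values[j]:.2f}, r_pbis={pbis[j]:.2f}{flag}")
```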

Exam Development: It’s a Vicious Cycle

Now, consider the big picture: in many cases, an exam is not a one-and-done thing.  It is re-used, perhaps continually.  Often there are new versions released, perhaps based on updated blueprints or simply to swap out questions so that they don’t get overexposed.  That’s why this is better conceptualized as an exam development cycle, like the circle shown above.  Often some steps like Job Analysis are only done once every 5 years, while the rotation of item development, piloting, equating, and psychometric reporting might happen with each exam window (perhaps you do exams in December and May each year).

ASC has extensive expertise in managing this cycle for professional credentialing exams, as well as many other types of assessments.  Get in touch with us to talk to one of our psychometricians.

Certification Exam Delivery & Administration

Certification exam administration and proctoring is a crucial component of the professional credentialing process.  Certification exams are expensive to develop well, so an organization wants to protect that investment by delivering the exam with appropriate security so that items are not stolen.  Moreover, there is an obvious incentive for candidates to cheat.  So, a certification body needs appropriate processes in place to deliver the certification exams.  Here are some tips.

1. Determine the best approach for certification exam administration and proctoring

Here are a few of the considerations to take into account.  These can be crossed with each other, such as delivering paper exams at Events vs. Test Centers.

Timing: Cohorts/Windows vs Continuous Availability

Do you have cohorts, where events make more sense, or do you need continuous availability?  For example, if the test is tied to university training programs that graduate candidates in December and May each year, that affects your need for delivery.  Alternatively, some certifications are not tied to such training; you might have to only show work experience.  In those cases, candidates are ready to take the test continuously throughout the year.

Mode: Paper vs Computer

Does it make more sense to deliver the test on paper or on computer?  This used to be a cost issue, but now the cost of computerized delivery, especially with online proctoring at home, has dropped significantly while saving so much time for candidates.  Also, some exam types like clinical simulations can only be delivered on computers.

Location: Test centers vs Online proctored vs Events vs Multi-Modal

Some types of tests require events, such as a clinical assessment in an actual clinic with standardized patients.  Some tests can be taken anywhere.  Exam events can also coincide with other events; perhaps you have online delivery through the year but deliver a paper version of the test at your annual conference, for convenience.

Do you have an easy way to make your own locations, if you are considering that?  One example is that you have quarterly regional conferences for your profession, where you could simply get a side room to deliver your test to candidates since they will already be there.  Another is that most of your candidates are coming from training programs at universities, and you are able to use classrooms at those universities.


Geography: State, National, or International

If your exam is for a small US state or a small country, it might be easy to require exams in a test center, because you can easily set up only one or two test centers to cover the geography.  Some certifications are international, and need to deliver on-demand throughout the year; those are a great fit for online.

Security: Low vs High

If your test has extremely high stakes, there is extremely high incentive to cheat.  An entry-level certification on WordPress is different than a medical licensure exam.  The latter is a better fit for test centers, while the former might be fine with online proctoring on-demand.

Online proctoring: AI vs Recorded vs Live

If you choose to explore this approach, here are three main types to evaluate.

A. AI only: AI-only proctoring means that there are no humans.  The examinee is recorded on video, and AI algorithms flag potential issues, such as if they leave their seat, and then notify an administrator (usually a professor) of students with a high number of flags.  This approach is usually not relevant for certifications or other credentialing exams; it is more for low-stakes exams like a Psychology 101 midterm at your local university.  The vendors for this approach are interested in large-scale projects, such as proctoring all midterms and finals at a university, perhaps hundreds of thousands of exams per year.

B. Record and Review: Record-and-review proctoring means that the examinee is recorded on video, but that video is watched by a real human and flagged if they think there is cheating, theft, or other issues.  This is much higher quality, and higher price, but has one major flaw that might be concerning for certification tests: if someone steals your test by taking pictures, you won’t find out until the next day.  But at least you know who it was and you are certain of what happened, with video proof.  Perhaps useful for microcredentials or recertification exams.

C. Live Online Proctoring: Live online proctoring (LOP), or what I call “live human proctoring” (because some AI proctoring is also “live” in real time!) means that there is a professional human proctor on the other side of the video from the examinee.  They check the examinee in, confirm their identity, scan the room, provide instructions, and actually watch them take the test.  Some providers like MonitorEDU even have the examinee make a second video stream on their phone, which is placed on a bookshelf or similar spot to see the entire room through the test.  Certainly, this approach is a very good fit with certification exams and other credentialing.  You protect the test content as well as the validity of that individual’s score; that is not possible with the other two approaches.

We have also prepared a list of the best online proctoring software platforms.

2. Determine other technology, psychometric, and operational needs

Next, your organization should establish any other needs for your exams that could impact the vendor selection.

  1. Do you require special item types, such that the delivery platform needs to support or integrate with them?
  2. Do you have simulations or OSCEs?
  3. Do you have specific needs around accessibility and accommodations for your candidates?
  4. Do you need adaptive testing or linear-on-the-fly testing?
  5. Do you need extensive psychometric consulting services?
  6. Do you need an integrated registration and payment portal?  Or a certification management system to track expirations and other important information?

Write all these up so that you can use the list to shop for a provider.
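
As a lightweight illustration, those answers can be captured in a simple structured checklist so you can score each vendor against it.  This Python sketch is purely illustrative; the keys mirror the questions above and are not tied to any procurement tool.

    # Illustrative requirements checklist; the keys mirror the questions above.
    requirements = {
        "special_item_types": True,            # e.g., drag-and-drop, hotspot
        "simulations_or_osces": False,
        "accessibility_accommodations": True,
        "adaptive_or_loft_delivery": False,    # adaptive / linear-on-the-fly
        "psychometric_consulting": True,
        "registration_and_payment_portal": True,
        "certification_management_system": True,
    }

    def unmet_needs(vendor_capabilities):
        """Return the requirements a vendor does not claim to cover."""
        return [need for need, required in requirements.items()
                if required and not vendor_capabilities.get(need, False)]

    # Example: a proctoring-only vendor leaves most needs unmet.
    print(unmet_needs({"accessibility_accommodations": True}))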

3. Find a provider – or several!

While it might seem easier to find a single provider for everything, that’s often not the best solution.  Look for those vendors that specifically fit your needs.

For example, most providers of remote proctoring are just that: remote proctoring.  They do not have a professional platform to manage item banks, schedule examinees, deliver tests, create custom score reports, and analyze psychometrics.  Some do not even integrate with such platforms, only with learning management systems like Moodle, since their entire target market is low-stakes university exams.  So if you are seeking a vendor for certification testing or other credentialing, the list of potential vendors is smaller.

Likewise, there are some vendors that only do exam development and psychometrics, but lack a software platform and proctoring services for delivery.  In these cases, they might have very specific expertise, and often have lower costs due to lower overhead.  An example is JML Testing Services.

Once you have some idea of what you are looking for, start shopping for vendors that provide services for certification exam delivery, development, and scoring.  In some cases, you might not settle on a certain approach right away, and that’s OK.  See what is out there and compare prices.  Perhaps live online proctoring is more affordable than you anticipated, and you can upgrade to that.

Besides a simple Google search, some good places to start are the member listings of the Association of Test Publishers and the Institute for Credentialing Excellence.

4. Establish the new process with policies and documentation

Once you have finalized your vendors, you need to write policies and documentation around them.  For example, if your vendor has a certain login page for proctoring (we have ascproctor.com), you should take relevant screenshots and write up a walkthrough so candidates know what to expect.  Much of this should go into your Candidate Handbook.  Some of the things to cover that are specific to exam day for the candidates:

  • How to prepare for the exam
  • How to take a practice test
  • What is allowed during the exam
  • What is not allowed
  • ID needed and the check-in process
  • Details on specific locations (if using locations)
  • Rules for accessibility and accommodations
  • Time limits and other practical considerations in the exam

Next, consider all the things that are impacted other than exam day.

  • Eligibility pathways and applications
  • Registration and scheduling
  • Candidate training and practice tests
  • Reporting: just to the candidates, or perhaps to training programs as well?
  • Accounting and other operations: consider your business needs, such as how you manage money, monthly accounting reports, etc.
  • Test security plan: What do you do if someone is caught taking pictures of the exam with their phone, or another security incident occurs?

5. Let Everyone Know

Once you have written up everything, make sure all the relevant stakeholders know.  Publish the new Candidate Handbook and announce to the world.  Send emails to all upcoming candidates with instructions and an opportunity for a practice exam.  Put a link on your homepage.  Get in touch with all the training programs or universities in your field.  Make sure that everyone has ample opportunity to know about the new process!

6. Roll Out

Finally, of course, you can implement the new approach to certification exam delivery.  You might launch a new certification exam from scratch, or perhaps you are moving one from paper to online with remote proctoring, or making some other change.  Either way, you need a date to start using it and a change management process.  The good news is that, even though it’s a lot of work to get here, the new approach will likely save you time and money in the long run.  Roll it out!

Also, remember that this is not a single point in time.  You’ll need to keep things updated into the future.  You should also consider implementing audits or quality control as a way to drive improvement.

 

Ready to start?

Certification exam delivery is the process of administering a certification test to candidates.  This might seem straightforward, but it is surprisingly complex.  The greater the scale and the stakes, the more potential threats and pitfalls.  Assessment Systems Corporation is one of the world leaders in the development and delivery of certification exams.  Contact us to get a free account in our platform and experience the examinee process, or to receive a demonstration from one of our experts.

 

 

Test preparation for a high-stakes exam can be a daunting task.  Degrees, certifications, and other significant credentials can accelerate your career and open new opportunities in your chosen field, so the exam is a pivotal step toward advancing your career and demonstrating your expertise.  However, success requires more than diligent study: you must also incorporate strategies that address your unique needs so you can study effectively.  In this article, we will explore eleven essential strategies to aid in preparing for your exam.

Also, remember that this test serves an important purpose.  If it is certification/licensure, it is designed to protect the public, ensuring that only qualified professionals work in the field.  If it is a pre-employment test, it is designed to predict job performance and fit in the organization – both to make the organization more efficient, and to help ensure you have the opportunity to be successful.  Tests are not designed to be a personal hurdle to you!

Tips for Test Preparation

  1. Apply Active Learning Strategies: Research indicates that students who employ active learning techniques, such as self-quizzing and summarization, tend to perform better on exams. A study found that 13.5% of students recognized the effectiveness of active strategies and aimed to incorporate them more into their study routines.
  2. Study Early, Study Regularly: To avoid the effects of procrastination, such as stress the evening before the exam, begin your studies as soon as possible. Starting early lets you thoroughly review the material that is likely to be on the exam and identify topics where you may need help. Also, creating a schedule that breaks studying into manageable “chunks” ensures thorough review without the task becoming overwhelming. Research has shown that strategic test preparation is more effective, so create a schedule to help manage the process.
  3. Create Organized Notes: Gather your materials, including notes prepared in class, textbooks, and any other helpful resources. If possible, organize your notes by topic, chapter, date, or any other method that makes the material easier for you to understand. Underline or highlight relevant sections to speed up later review and to better retain the information.
  4. Understand the Exam Format: Begin by familiarizing yourself with the exam’s format and structure. Understand the types of questions you’ll encounter, such as multiple-choice, essays, or practical demonstrations. Review the blueprints. Knowing what to expect will help you tailor your study plan accordingly and manage your time during the exam.
  5. Focus on Weak Areas: Identify your weaknesses early in the study process and prioritize them. Spend extra time reviewing challenging concepts and seeking clarification from instructors or peers if needed. Don’t neglect these areas in favor of topics you find easier, as they are likely to appear on the exam.
  6. Utilize Memory Aids: Memory aids can be incredibly helpful in associating relevant information with questions. Examples include acronyms, such as the oft-promoted “PEMDAS” for remembering the order of operations in mathematics, and story-writing to create context for, or association with, the material, such as the types of force experienced on a particular amusement-park ride and how the associated formula should be applied.
  7. Familiarize yourself with the rules: Review the candidate handbook and other materials. Exams differ in structure across subjects, incorporating elements such as essay-writing, image recognition, or reading passages with corresponding questions. Familiarizing yourself with the grading weights and general structure of an exam can help you allocate study and practice time to particular sections.
  8. Take care of yourself: It is essential to care for your physical and mental wellbeing while you are studying. Adequate sleep is essential both for acquiring new information and for retaining previously-acquired information. Eat and drink regularly, and build opportunities to do so into your schedule. Exercise may also be beneficial, either as a chance to review relevant material or as a break from your studies.
  9. Take practice tests: Practice is essential for exam success.  Many high-stakes exams offer a practice test, either through the test sponsor or through a third party, like these MCAT tests with Kaplan. Practice tests provide a further opportunity to review the exam content, help identify topics you may need to revisit, and build your confidence in your ability to pass. They also simulate the test environment, so focus on both content knowledge and test-taking strategies to reduce anxiety on exam day.
  10. Simulate Exam Conditions: To complete your test preparation, get as close as possible to the real thing.  Test yourself under a time limit in a room similar to the exam room. Subtle factors such as room temperature or sound insulation can help or hinder your performance, so it may be useful to practice in a similar environment.  Also practice your test-taking strategies, such as process of elimination for multiple-choice questions and managing your time effectively.
  11. Maintain a Positive Attitude: Remaining optimistic can improve both your perception of your efforts and your performance on the exam. Treat mistakes as opportunities to improve, rather than incidents to dwell on. An optimistic perspective can also help mitigate stress in the moments before the exam begins.

Final Thoughts on Test Preparation

In conclusion, by incorporating these strategies into your test preparation routine, you’ll be better equipped to tackle your exam with confidence and competence. Remember that success is not just about how much you know but also how well you prepare. Stay focused, stay disciplined, and trust in your ability to succeed. Good luck!

Stackable credentials refer to the practice of accumulating a series of credentials such as certifications or microcredentials over time.  In today’s rapidly evolving job market, staying ahead of the curve is crucial for career success. Employers are increasingly looking for candidates with a diverse set of skills and qualifications. This has given rise to the concept of “stackable credentials,” a strategy that can supercharge your career trajectory. In this blog post, we’ll explore what stackable credentials are, why they matter, and how you can leverage them to open new doors in your professional journey.

What is a Stackable Credential?

Stackable credentials let you add multiple, related credentials to your resume over time.  Instead of pursuing a single, traditional degree – which often takes a long time, like 4 years – individuals can enhance their skill set by stacking various credentials on top of each other while they work in the field. These could include industry certifications, specialized training programs, or workshops that target specific skills.  For example, instead of getting a generalized 4-year degree in Marketing, you might take a 3-month bootcamp in WordPress and then start working in the field, while adding other credentials in search engine optimization, Google Ads, or other related topics.

Why Stackable Credentials Matter

  1. Adaptability in a Changing Job Market: The job market is dynamic, and industries are constantly evolving. Stackable credentials allow professionals to adapt to these changes quickly. By continually adding relevant certifications, individuals can stay current with industry trends and remain competitive in the job market.
  2. Customized Learning Paths: Unlike traditional degrees that follow a predetermined curriculum, stackable credentials empower individuals to tailor their learning paths. This flexibility enables professionals to focus on acquiring skills that align with their career goals, making their journey more efficient and purposeful.
  3. Incremental Career Advancement: Stackable credentials provide incremental steps for career advancement. As professionals accumulate qualifications, they can showcase a diverse skill set, making them more appealing to employers seeking candidates with a broad range of competencies. This can lead to promotions, salary increases, and expanded job responsibilities.
  4. Demonstration of Lifelong Learning: In a world where continuous learning is essential, stackable credentials demonstrate a commitment to lifelong learning. This commitment is highly valued by employers who seek individuals eager to stay informed about industry developments and emerging technologies.

Leveraging Stackable Credentials for Success

  1. Identify Your Career Goals: Begin by identifying your long-term career goals. What skills are in demand in your industry? What certifications are recognized and respected? Understanding your objectives will guide you in selecting the most relevant stackable credentials.
  2. Research Accredited Programs: It’s crucial to choose reputable and accredited programs when pursuing stackable credentials. Look for certifications and courses offered by well-known organizations or institutions. This ensures that your efforts are recognized and valued in the professional sphere.
  3. Networking Opportunities: Many stackable credential programs provide opportunities for networking with industry professionals. Take advantage of these connections to gain insights into current trends, job opportunities, and potential mentors who can guide you in your career journey.
  4. Stay Informed and Updated: Stackable credentials lose their value if they become outdated. Stay informed about changes in your industry and continuously update your skill set. Consider regularly adding new credentials to your stack to remain relevant and marketable.

External Resources for Further Learning:

  1. edX – MicroMasters Programs: Explore edX’s MicroMasters programs for in-depth, specialized learning that can be stacked for advanced qualifications.
  2. LinkedIn Learning: Access a vast library of courses on LinkedIn Learning to acquire new skills and stack credentials relevant to your career.
  3. CompTIA Certifications: Explore CompTIA’s certifications for IT professionals, offering a stackable approach to building expertise in various technology domains.
  4. Coursera Specializations: Coursera offers specializations that allow you to stack credentials in specific fields, providing a comprehensive learning path.

Summary

In conclusion, stackable credentials offer a strategic approach to career development in an ever-changing professional landscape. By embracing this mindset and continuously adding valuable qualifications, you can unlock new opportunities and elevate your career to new heights. Stay curious, stay relevant, and keep stacking those credentials for a brighter future.

Digital badges (aka e-badges) have emerged in today’s digitalized world as a powerful tool for recognizing and showcasing individuals’ accomplishments in an online format that is more comprehensive, immediate, brandable, and easily verifiable than traditional paper certificates or diplomas. In this blog post, we will delve into the world of digital badges, explore best badging practices, discuss their advantages in education, and highlight examples of badge-awarding platforms.

Digital Badges and Their Utilization

Digital badges are visual representations or icons that recognize and verify an individual’s achievement or mastery of a particular skill or knowledge area in the online space. They serve as a form of digital credentialing or micro-credentialing, giving individuals a way to display their skills, knowledge, or accomplishments in a specific domain, helping them stand out and gain recognition in the digital landscape.

Digital badges often contain metadata, such as the name of the issuing organization, a brief description of the achievement, criteria for earning the badge, and evidence of the individual’s accomplishment. This metadata is embedded within the badge image using a technology called Open Badges, allowing anyone to verify the authenticity and details of the badge.
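
As a rough illustration, here is what that embedded metadata can look like for an Open Badges 2.0 assertion, sketched as a Python dict that mirrors the underlying JSON; all URLs, names, and dates are placeholders.

    # Simplified sketch of Open Badges 2.0 assertion metadata (placeholder data).
    assertion = {
        "@context": "https://w3id.org/openbadges/v2",
        "type": "Assertion",
        "id": "https://example.org/assertions/123",  # where this assertion is hosted
        "recipient": {"type": "email", "hashed": False,
                      "identity": "learner@example.org"},
        # The BadgeClass URL, which in turn describes the achievement,
        # the earning criteria, and the issuing organization.
        "badge": "https://example.org/badges/widget-design",
        "issuedOn": "2024-01-15T00:00:00Z",
        "evidence": "https://example.org/results/123",  # the accomplishment evidence
        "verification": {"type": "hosted"},  # verifiers re-fetch the hosted JSON
    }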

Digital badges are typically issued and displayed electronically, which makes them easily shareable and accessible across various digital platforms, such as social media profiles, resumes, portfolios, or online learning platforms. Digital badges are widely used in various contexts, including education, professional development, training programs, online courses, and gamification.

Digital Badging Best Practices

To ensure the credibility of digital badges, it is important to adhere to the most effective badging practices. Here are some key practices to consider:

  • Clearly Defined Criteria: Badges should have well-defined criteria that outline the specific skills or achievements required to earn the badge. This ensures that the badge holds value and meaning.
  • Authenticity and Verification: Badges should incorporate metadata via technologies like Open Badges, enabling easy verification of the badge’s authenticity and details. This helps maintain trust and credibility in the digital badge ecosystem.
  • Issuer Credibility: Reputable organizations, institutions, or experts in the field should issue badges. The credibility of the issuer adds value to the badge and increases its recognition.
  • Visual Design: Badges should be visually appealing and distinct, making them easily recognizable and shareable. Thoughtful design elements can enhance the badge’s appeal and encourage individuals to display them proudly.

 

Advantages of Using Digital Badges in Education and Workforce Development

Digital badges offer numerous advantages in educational settings, transforming the way achievements and skills are recognized and valued. Here are some of the key advantages:

  • Granular Skill Recognition: Badges provide a way to recognize and demonstrate specific skills or achievements that might not be captured by traditional grades or degrees. This allows individuals to highlight their unique strengths and expertise.
  • Motivation and Engagement: Badges can act as a motivational tool, driving learners to actively pursue goals, and master new skills. The visual nature of badges and the ability to display them publicly create a sense of achievement and pride.
  • Portable and Shareable: Digital badges can be easily shared across various platforms, such as social media profiles, resumes, or online portfolios. This increases the visibility of accomplishments and facilitates networking and professional opportunities.
  • Lifelong Learning: Badges promote a culture of lifelong learning by recognizing and acknowledging continuous skill development. Individuals can earn badges for completing online courses, attending workshops, or acquiring new competencies, fostering a commitment to ongoing personal and professional growth.
  • Brand enhancement: Badges provide exposure to the brand of the issuing institution.
  • Turnaround time: Badges can often be available immediately on a profile page after the credential is awarded.
  • Verifiability: Digital badges are easily verifiable when using an appropriate platform/technology. This helps the consumer, hiring manager, or other stakeholder to determine if a person has the skills and knowledge that are relevant to the credential.

 

Examples of Badge-Awarding Platforms

Several platforms have emerged to facilitate the creation, issuance, and display of digital badges. Here are a few notable examples:

  • Credly is a widely used platform that allows organizations to design, issue, and manage digital badges. It also provides features for verifying badges and displaying them on various online platforms.
  • Open Badge Factory is an open-source platform that enables badge creation, management, and verification. It offers customizable badge templates and integration with various learning management systems.
  • Badgr is a platform that supports the creation and awarding of digital badges. It provides features for badge design, verification, and integration with various learning management systems (LMS) and online platforms.
  • BadgeCert is another platform that supports digital badges and can be integrated with other platforms.

 

A Certification Management System (CMS) or Credential Management System plays a pivotal role in streamlining the key processes surrounding the certification or credentialing of people, namely that they have certain knowledge or skills in a profession.  It helps with ensuring compliance, reducing business operation costs, and maximizing the value of certifications. In this article, we explore the significance of adopting a CMS and its benefits for both nonprofits and businesses.

What is a certification management system?

A certification management system is an enterprise software platform designed specifically for organizations whose primary goal is to award credentials to people for professional skills and knowledge.  Such an organization is often a nonprofit like an association or board, and is sometimes called an “awarding body” or similar term.  However, nowadays there are many for-profit corporations that offer certifications.  For example, many IT/software companies will certify people on their products.  Here’s a page that does nothing but list the certifications offered by Salesforce!

These organizations often offer various credentials within a field.

  • Initial certifications (high stakes and broad) – Example: Certified Widgetmaker
  • Certificates or microcredentials – Example: Advanced Widget Design Specialist
  • Recertification exams – Example: taking a test every 5 years to maintain your Certified Widgetmaker status
  • Benchmark/progress exams – Example: Widgetmaker training programs are 2 years long and you take a benchmark exam at the end of Year 1
  • Practice tests – Example: old items from the Certified Widgetmaker test provided in a low-stakes fashion for practice

A credentialing body will need to manage the many business and operation aspects around these.  Some examples:

  • Applications, tracking who is applying for which credential
  • Payment processing
  • Eligibility pathways and documentation (e.g., diplomas)
  • Pass/Fail results
  • Retake status
  • Expiration dates

There will often be functionality that makes these things easier, like automated emails to remind professionals when their certification is expiring so they can register for their recertification exam.
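
As a minimal sketch, the reminder logic can be as simple as a scheduled job that scans expiration dates.  The record layout below is hypothetical (a real CMS tracks far more), and a real system would hand the results to its email service rather than print them.

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class CertificationRecord:
        holder_email: str
        credential: str        # e.g., "Certified Widgetmaker"
        expires_on: date

    def due_for_reminder(records, window_days=90, today=None):
        """Yield records whose expiration falls within the reminder window."""
        today = today or date.today()
        cutoff = today + timedelta(days=window_days)
        for rec in records:
            if today <= rec.expires_on <= cutoff:
                yield rec

    # Example: a nightly job would email each holder a reminder to register
    # for their recertification exam.
    records = [CertificationRecord("pat@example.org", "Certified Widgetmaker",
                                   date(2025, 3, 1))]
    for rec in due_for_reminder(records, today=date(2025, 1, 15)):
        print(f"Remind {rec.holder_email}: {rec.credential} expires {rec.expires_on}")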

 

Reasons to use a certification management system

  1. Enhancing Compliance and Regulatory Adherence: A comprehensive CMS provides a centralized repository where organizations can securely store, track, and manage certifications and credentials. This supports compliance with industry standards and regulatory requirements, and simplifies audits.  A certification management system will also help your organization achieve accreditation such as NCCA or ANSI/ISO 17024.
  2. Streamlining Certification Tracking and Renewals: Managing certifications manually can be a time-consuming and error-prone process. A CMS simplifies this task by automating certification tracking, renewal reminders, and verification processes. By digitizing the management of certifications, organizations can save valuable time and resources, eliminating the need for tedious paperwork and manual record-keeping.
  3. Improving Workforce Efficiency and Development: An efficient CMS empowers organizations to optimize their workforce’s knowledge and skill development. By capturing comprehensive data on certifications, skills, and training, organizations gain valuable insights into their employees’ capabilities. This information can guide targeted training initiatives, succession planning, and talent management efforts. Moreover, employees can leverage the CMS to identify skill gaps, explore potential career paths, and pursue professional development opportunities.
  4. Enhancing Credential Verification and Fraud Prevention: Verifying the authenticity of certifications is critical, especially in industries where credentials hold significant weight. A CMS with built-in verification features enables employers, clients, and other stakeholders to authenticate certifications quickly and accurately; a sketch of one such check appears after this list. By incorporating advanced security measures, such as blockchain technology or encrypted digital badges, CMSs provide an added layer of protection against fraud and credential forgery. This not only safeguards the reputation of organizations but also fosters trust and confidence among customers, partners, and regulatory bodies.
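
Here is a hedged sketch of what such a check can look like when the credential is a hosted Open Badge (described earlier): re-fetch the assertion from its own URL and confirm that the issuer still serves it and has not revoked it.  The URL is a placeholder, and error handling is omitted.

    import json
    import urllib.request

    def verify_hosted_badge(assertion_url):
        """Verify a hosted Open Badges assertion by re-fetching it from the issuer.

        If the issuer no longer serves the assertion, or has marked it revoked,
        the credential should not be trusted.  Real verifiers also validate the
        BadgeClass and issuer profile; that is omitted here for brevity.
        """
        with urllib.request.urlopen(assertion_url) as response:
            assertion = json.load(response)
        return (assertion.get("id") == assertion_url
                and not assertion.get("revoked", False))

    # Placeholder URL; a real check would use the badge's published assertion id.
    # print(verify_hosted_badge("https://example.org/assertions/123"))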

 

Of course, the bottom line is that a certification management system will save money, because this is a lot of information for the awarding body to track, and it is mission-critical.

 

Conclusion

Implementing a Certification Management System is a strategic investment for organizations seeking to streamline their certification processes and maximize their value. By centralizing certification management, enhancing compliance, streamlining renewals, improving workforce development, and bolstering credential verification, a robust CMS empowers organizations to stay ahead in a competitive landscape while ensuring credibility and regulatory adherence.