
In the realm of academic dishonesty on high-stakes exams such as certification, pre-knowledge is an important concern for test security and validity. Understanding what pre-knowledge entails and its implications for exam cheating can offer valuable insight into how the integrity of tests and their scores can be compromised.

What is Pre-Knowledge in Exam Cheating?

Pre-knowledge refers to having advance access to information or materials that are not available to all students before an exam. This can manifest in various ways, from receiving the actual questions that will be on the test to obtaining detailed information about the exam’s content through unauthorized means. Essentially, it’s an unfair advantage that undermines the principles of a level playing field in assessments.  Psychometricians would refer to it as a threat to validity.

How Pre-Knowledge Occurs

The concept of pre-knowledge encompasses several scenarios. For instance, a student might gain insight into the exam questions through illicit means such as hacking into a teacher’s email, collaborating with insiders, or even through more subtle means like overhearing discussions. This sort of advantage is particularly problematic because it negates the purpose of assessments, which is to evaluate a student’s understanding and knowledge acquired through study and effort.

Perhaps the greatest source of pre-knowledge is “brain dump” websites, where people who have taken the test go soon afterwards and write down all the questions they can remember, dumping their short-term memory.  Over time, this can lead to large portions of the item bank being exposed to the public. Often, these sites are intentionally located in countries where test sponsors have little legal recourse to stop them.

Implications of Pre-Knowledge

Pre-knowledge not only disrupts the fairness of the examination process but also poses significant challenges for test sponsors. Institutions must be vigilant and proactive in safeguarding against such breaches. Implementing robust security measures, such as secure testing environments and rigorous monitoring, is crucial to prevent the unauthorized dissemination of exam content.

Pre-knowledge and other forms of cheating can have far-reaching consequences beyond the immediate context of the exam. For instance, students who gain an unfair advantage may achieve higher grades, which can influence their future academic and career opportunities. This creates an inequitable situation where hard work and honest efforts are overshadowed by dishonesty.

Engaging in brain-dump activities is actually a form of intellectual property theft.  Many organizations spend a lot of money to develop high-quality assessments, sometimes thousands of dollars per item!  Cheating in this manner will therefore drive up the costs of certification exams or educational opportunities.  Moreover, the organizations can then pursue legal action against cheaters.

Most importantly, cheating on exams can invalidate scores and therefore the entire credential.  If everyone is cheating and passing a certification exam, then the certification itself becomes worthless.  So by participating in brain-dump activities, you are actually hurting yourself.  And if the purpose of the test is to protect the public, such as ensuring that people working in healthcare fields are actually qualified, you are hurting the general public.  If you are going into healthcare to help patients, you need to realize that brain-dump activity only ends up hurting patients in the end.

Tips to Prevent Pre-Knowledge

Educational institutions and other test sponsors play a pivotal role in addressing the issue of pre-knowledge. Here are some effective strategies to help prevent pre-knowledge and maintain the integrity of academic assessments.  You might also benefit from this discussion on credentialing exams.

  1. Secure Testing Environments: Ensure exams are administered in a controlled environment where access to unauthorized materials is restricted.
  2. Monitor and Supervise: Use proctors and surveillance to oversee exam sessions and detect any suspicious behavior.
  3. Randomize Questions: Utilize question banks with randomized item selection to prevent predictable patterns of exam content.
  4. Strengthen Digital Security: Implement robust cybersecurity measures to protect exam content and prevent unauthorized access.
  5. Educate Examinees: Foster a culture of academic integrity by educating students about the consequences of cheating and the importance of honest efforts.
  6. Regularly Update Procedures: Review and update security measures and examination procedures regularly to address emerging threats.
  7. Encourage Open Communication: Provide support and resources for students struggling with their studies to reduce the temptation to seek unfair advantages.
  8. Strong NDA: This is one of the most important approaches.  A Non-Disclosure Agreement that warns candidates they might not only fail the exam but also be barred from the entire profession and subject to lawsuits if they engage in cheating can be a huge deterrent.
  9. Multiple forms: Make multiple versions of the exam, and rotate them frequently.
  10. LOFT/CAT: Linear on-the-fly testing (LOFT) makes nearly infinite parallel versions of the exam, increasing security.  Computerized adaptive testing (CAT) creates a personalized exam for each candidate.
  11. Scan the internet: You can search the internet for text strings from your items, helping you find the brain dump sites and address them (a minimal sketch of this idea follows this list).
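As a rough illustration of tip 11, here is a minimal sketch that checks how much of each item stem appears verbatim on web pages you have already downloaded. The function names, threshold, and example data are illustrative assumptions; a real scan would add search, crawling, and paraphrase detection.

```python
import difflib
import re

def normalize(text):
    """Lowercase and collapse whitespace so cosmetic edits don't hide a match."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def flag_exposed_items(item_stems, pages, coverage_threshold=0.8):
    """Flag item stems whose text largely appears verbatim on a scraped page.

    item_stems: dict of item_id -> stem text from your item bank
    pages:      dict of url -> page text you have already downloaded
    coverage_threshold: fraction of the stem that must appear contiguously
    """
    flags = []
    for item_id, stem in item_stems.items():
        stem_norm = normalize(stem)
        for url, page_text in pages.items():
            page_norm = normalize(page_text)
            matcher = difflib.SequenceMatcher(None, stem_norm, page_norm, autojunk=False)
            match = matcher.find_longest_match(0, len(stem_norm), 0, len(page_norm))
            coverage = match.size / max(len(stem_norm), 1)
            if coverage >= coverage_threshold:
                flags.append((item_id, url, round(coverage, 2)))
    return flags

# Illustrative usage with made-up data
item_stems = {"ITEM001": "Which of the following is the first step in sterile technique?"}
pages = {"https://example-braindump.example": "Q1: which of the following is the first step in sterile technique? Answer: ..."}
print(flag_exposed_items(item_stems, pages))
```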

 

What if it happens?

There are two aspects to this: finding that it happened, and then deciding what to do about it.

To find that it happened, there are several types of evidence.

  1. Video – If there was proctoring, do we have video, perhaps of someone taking pictures of the exam with their phone?
  2. Technology – Perhaps a post on a brain dump site was done with someone’s email address or IP address.
  3. Eyewitness – You might get tipped off by another candidate that has higher integrity.
  4. Psychometric forensics – Statistical analysis of response data can also surface pre-knowledge.  On the brain dump side, you might find candidates that don’t attempt all the items on the test but spend most of their time on a few… because they are memorizing them.  At the other end, you might find candidates that get a high score in a very short amount of time.  There are also much more advanced methods, as discussed in this book.  A simple flagging sketch follows this list.
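As a rough illustration of the simplest flags described in item 4, here is a minimal sketch that screens candidates for the two patterns mentioned above. The data structure and cutoff values are illustrative assumptions; operational programs use far more sophisticated statistics.

```python
from statistics import median

def flag_candidates(candidates, n_items):
    """Very simple response-time screens for possible item harvesting or pre-knowledge.

    candidates: list of dicts with keys
        'id', 'prop_correct' (0-1), 'minutes_used', 'items_attempted'
    n_items: number of items on the form
    The thresholds below are illustrative only; set them from your own data.
    """
    med_time = median(c["minutes_used"] for c in candidates)
    flags = []
    for c in candidates:
        # Possible harvester: attempts few items but spends most of the allotted time
        if c["items_attempted"] < 0.5 * n_items and c["minutes_used"] > med_time:
            flags.append((c["id"], "possible item harvesting"))
        # Possible pre-knowledge: very high score in a fraction of the typical time
        if c["prop_correct"] > 0.9 and c["minutes_used"] < 0.5 * med_time:
            flags.append((c["id"], "possible pre-knowledge"))
    return flags

# Illustrative usage with made-up data
candidates = [
    {"id": "A", "prop_correct": 0.95, "minutes_used": 22, "items_attempted": 100},
    {"id": "B", "prop_correct": 0.40, "minutes_used": 85, "items_attempted": 35},
    {"id": "C", "prop_correct": 0.72, "minutes_used": 70, "items_attempted": 100},
]
print(flag_candidates(candidates, n_items=100))
```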

What should your organization do?  First of all, you need a test security plan ahead of time.  In that, you should lay out the processes as well as the penalties.  This ensures that examinees have due process.  Unfortunately, some organizations just stick their head in the sand; as long as candidates are paying the registration fees and the end stakeholders are not complaining, why rock the boat?

Summary

In conclusion, pre-knowledge in exam cheating represents a serious breach of academic integrity that undermines the educational process. It highlights the need for vigilance, robust security measures, and a culture of honesty within educational institutions. By understanding and addressing the nuances of pre-knowledge, educators and students can work together to uphold the values of fairness and integrity in academic assessments.


Setting a cutscore on a test scored with item response theory (IRT) requires some psychometric knowledge.  This post will get you started.

How do I set a cutscore with item response theory?

There are two approaches: directly with IRT, or using CTT then converting to IRT.

  1. Some standard setting methods work directly with IRT, such as the Bookmark method.  Here, you calibrate your test with IRT, rank the items by difficulty, and have an expert panel place “bookmarks” in the ranked list.  The average IRT difficulty of their bookmarks is then a defensible IRT cutscore.  The Contrasting Groups method and the Hofstee method can also work directly with IRT.
  2. Cutscores set with classical test theory, such as the Angoff, Nedelsky, or Ebel methods, are easy to implement when the test is scored classically.  But if your test is scored with the IRT paradigm, you need to convert your cutscores onto the theta scale.  The easiest way to do that is to reverse-calculate the test response function (TRF) from IRT.

The Test Response Function

The TRF (sometimes called a test characteristic curve) is an important method of characterizing test performance in the IRT paradigm.  The TRF predicts a classical score from an IRT score, as you see below.  Like the item response function and test information function, it uses the theta scale as the X-axis.  The Y-axis can be either the number-correct metric or proportion-correct metric.

[Figure: test response function for a 10-item exam, with an Angoff-based cutscore]

In this example, you can see that a theta of -0.3 translates to an estimated number-correct score of approximately 7, or 70%.

Classical cutscore to IRT

So how does this help us with the conversion of a classical cutscore?  Well, we hereby have a way of translating any number-correct score or proportion-correct score.  So any classical cutscore can be reverse-calculated to a theta value.  If your Angoff study (or Beuk) recommends a cutscore of 7 out of 10 points (70%), you can convert that to a theta cutscore of -0.3 as above.  If the recommended cutscore was 8 (80%), the theta cutscore would be approximately 0.7.
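To make the conversion concrete, here is a minimal sketch in Python of reverse-calculating a theta cutscore from the TRF. It assumes the 3PL model with the common 1.7 scaling constant, and the item parameters, values, and function names are purely illustrative rather than real calibration output.

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model."""
    return c + (1 - c) / (1 + math.exp(-1.7 * a * (theta - b)))

def trf(theta, items):
    """Test response function: expected number-correct score at a given theta."""
    return sum(p_3pl(theta, *item) for item in items)

def theta_for_cutscore(raw_cut, items, lo=-4.0, hi=4.0, tol=1e-4):
    """Find the theta whose expected number-correct equals the classical cutscore,
    using bisection (the TRF is monotonically increasing in theta)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if trf(mid, items) < raw_cut:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative item parameters (a, b, c) for a 10-item test -- not real data
items = [(1.0, -1.5, 0.2), (0.8, -1.0, 0.2), (1.2, -0.7, 0.25), (0.9, -0.4, 0.2),
         (1.1, -0.2, 0.2), (1.0, 0.0, 0.2), (0.7, 0.3, 0.25), (1.3, 0.6, 0.2),
         (0.9, 1.0, 0.2), (1.1, 1.4, 0.2)]

# If the Angoff study recommends 7 out of 10, convert that to the theta scale:
print(round(theta_for_cutscore(7, items), 2))
```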

IRT scores examinees on the same scale with any set of items, as long as those items have been part of a linking/equating study.  Therefore, a single standard-setting study on a set of items can be equated to any other linear test form, LOFT pool, or CAT pool.  This makes it possible to apply the classically focused Angoff method to IRT-focused programs.  You can even set the cutscore with a subset of your item pool, in a linear sense, with the full intention of applying it to CAT tests later.

Note that the number-correct metric only makes sense for linear or LOFT exams, where every examinee receives the same number of items.  In the case of CAT exams, only the proportion correct metric makes sense.

How do I implement IRT?

Interested in applying IRT to improve your assessments?  Download a free trial copy of  Xcalibre  here.  If you want to deliver online tests that are scored directly with IRT, in real time (including computerized adaptive testing), check out  FastTest.


A modified-Angoff study is one of the most common ways to set a defensible cutscore on an exam.  Using it means that the pass/fail decisions made by the test are more trustworthy than if you picked an arbitrary round number like 70%.  If your doctor, lawyer, accountant, or other professional has passed an exam where the cutscore was set with this method, you can place more trust in their skills.

What is the Angoff method?

The Angoff method is a scientific way of setting a cutscore (pass point) on a test.  If you have a criterion-referenced interpretation, it is not legally defensible to just conveniently pick a round number like 70%; you need a formal process.  There are a number of acceptable methodologies in the psychometric literature for standard-setting studies, which establish cutscores or passing points.  Some examples include Angoff, modified-Angoff, Bookmark, Contrasting Groups, and Borderline.  The modified-Angoff approach is by far the most popular.  It is used especially frequently for certification, licensure, certificate, and other credentialing exams.

It was originally suggested as a mere footnote by renowned researcher William Angoff, at Educational Testing Service.

How does the Angoff approach work?

First, you gather a group of subject matter experts (SMEs), with a minimum of 6, though 8-10 is preferred for better reliability, and have them define what they consider to be a Minimally Competent Candidate (MCC).  Next, you have them estimate the percentage of minimally competent candidates that will answer each item correctly.  You then analyze the results for outliers or inconsistencies.  If experts disagree, you will need to evaluate inter-rater reliability and agreement, and after that have the experts discuss and re-rate the items to gain better consensus.  The average final rating is then the expected percent-correct score for a minimally competent candidate.

Advantages of the Angoff method

  1. It is defensible.  Because it is the most commonly used approach and is widely studied in the scientific literature, it is well-accepted.
  2. You can implement it before a test is ever delivered.  Some other methods require you to deliver the test to a large sample first.
  3. It is conceptually simple, easy enough to explain to non-psychometricians.
  4. It incorporates the judgment of a panel of experts, not just one person or a round number.
  5. It works for tests with both classical test theory and item response theory.
  6. It does not take long to implement – for a short test, it can be done in a matter of hours!
  7. It can be used with different item types, including polytomously scored (multi-point) items.

Disadvantages of the Angoff method

  1. It does not use actual data, unless you implement the Beuk method alongside.
  2. It can lead to the experts overestimating the performance of entry-level candidates, as they may have forgotten what it was like to start out 20-30 years ago.  This is one reason to use the Beuk method as a “reality check” by showing the experts that if they stay with the cutscore they just picked, the majority of candidates might fail!

Example of the Modified-Angoff Approach

First of all, do not expect a straightforward, easy process that leads to an unassailably correct cutscore.  All standard-setting methods involve some degree of subjectivity.  The goal of the methods is to reduce that subjectivity as much as possible.  Some methods focus on content, others on examinee performance data, while some try to meld the two.

Step 1: Prepare Your Team

The modified-Angoff process depends on a representative sample of SMEs, usually 6-20. By “representative” I mean they should represent the various stakeholders. For instance, a certification for medical assistants might include experienced medical assistants, nurses, and physicians, from different areas of the country. You must train them about their role and how the process works, so they can understand the end goal and drive toward it.

Step 2: Define The Minimally Competent Candidate (MCC)

This concept is the core of the modified-Angoff method, though it is known by a range of terms or acronyms, including minimally qualified candidates (MQC) or just barely qualified (JBQ).  The reasoning is that we want our exam to separate candidates that are qualified from those that are not.  So we ask the SMEs to define what makes someone qualified (or unqualified!) from a perspective of skills and knowledge. This leads to a conceptual definition of an MCC. We then want to estimate what score this borderline candidate would achieve, which is the goal of the remainder of the study. This step can be conducted in person, or via webinar.

Step 3: Round 1 Ratings

Next, ask your SMEs to read through all the items on your test form and estimate the percentage of MCCs that would answer each correctly.  A rating of 100 means the item is a slam dunk; it is so easy that every MCC would get it right.  A rating of 40 is very difficult.  Most ratings are in the 60-90 range if the items are well-developed. The ratings should be gathered independently; if everyone is in the same room, let them work on their own in silence. This can easily be conducted remotely, though.

Step 4: Discussion

This is where it gets fun.  Identify the items where there is the most disagreement (as defined by grouped frequency distributions or standard deviation) and have the SMEs discuss them.  Maybe two SMEs thought an item was super easy and gave it a 95, while two other SMEs thought it was super hard and gave it a 45.  They will try to convince the other side of their folly. Chances are that there will be no shortage of opinions and you, as the facilitator, will find your greatest challenge is keeping the meeting on track. This step can be conducted in person, or via webinar.

Step 5: Round 2 Ratings

Raters then re-rate the items based on the discussion.  The goal is greater consensus.  In the previous example, it’s not likely that every rater will settle on a 70.  But if your raters all end up between 60 and 80, that’s OK. How do you know there is enough consensus?  We recommend the inter-rater reliability suggested by Shrout and Fleiss (1979), as well as looking at inter-rater agreement and dispersion of ratings for each item. This use of multiple rounds is known as the Delphi approach; it pertains to consensus-driven discussions in any field, not just psychometrics.
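For readers who want to compute the Shrout and Fleiss index themselves, here is a minimal sketch of ICC(2,k), the two-way random-effects reliability of the average of k raters, computed from an items-by-raters matrix of Angoff ratings. The rating data are invented, and operational work would typically use established statistical software rather than this hand-rolled version.

```python
def icc_2k(ratings):
    """Shrout & Fleiss (1979) ICC(2,k): reliability of the average of k raters.
    ratings: list of lists, one row per item, one column per rater."""
    n = len(ratings)          # number of items (targets)
    k = len(ratings[0])       # number of raters (judges)
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)       # between items
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)       # between raters
    ss_err = ss_total - ss_rows - ss_cols

    bms = ss_rows / (n - 1)             # between-items mean square
    jms = ss_cols / (k - 1)             # between-raters mean square
    ems = ss_err / ((n - 1) * (k - 1))  # residual mean square

    return (bms - ems) / (bms + (jms - ems) / n)

# Made-up Round 2 ratings: 5 items x 4 raters
ratings = [
    [80, 85, 75, 80],
    [60, 65, 70, 60],
    [90, 95, 90, 85],
    [70, 65, 75, 70],
    [55, 60, 50, 55],
]
print(round(icc_2k(ratings), 3))
```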

Step 6: Evaluate Results and Final Recommendation

Evaluate the results from Round 2 as well as Round 1.  An example of this is below.  Calculate the recommended cutscore, which is the average or sum of the Angoff ratings, depending on the scale you prefer.  Did the reliability improve?  Estimate the mean and SD of examinee scores (there are several methods for this). What sort of pass rate do you expect?  Even better, utilize the Beuk Compromise as a “reality check” between the modified-Angoff approach and actual test data.  You should take multiple points of view into account, and the SMEs need to vote on a final recommendation. They, of course, know the material and the candidates, so they have the final say.  This means that standard setting is partly a political process; again, reduce that effect as much as you can.

Some organizations do not set the cutscore at the recommended point, but at one standard error of judgment (SEJ) below the recommended point.  The SEJ is based on the inter-rater reliability; note that it is NOT the standard error of the mean or the standard error of measurement.  Some organizations use the standard error of measurement instead; using the standard error of the mean is just plain wrong (though I have seen it done by amateurs).
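As a rough illustration of Step 6, here is a minimal sketch that turns a Round 2 rating matrix into a recommended cutscore, per-item disagreement indices, and a standard error of judgment. The SEJ here uses one common formulation, the standard deviation of each rater's implied cutscore divided by the square root of the number of raters; the data and function name are invented.

```python
from statistics import mean, pstdev, stdev

def angoff_summary(ratings):
    """Summarize an items-by-raters matrix of Angoff ratings (percent scale).

    Returns the recommended cutscore (mean of all ratings), the per-item
    standard deviations (useful for spotting disagreement), and one common
    formulation of the standard error of judgment (SEJ).
    """
    k = len(ratings[0])                                   # number of raters
    cutscore = mean(x for row in ratings for x in row)    # percent-correct cut
    item_sd = [round(pstdev(row), 1) for row in ratings]  # disagreement per item
    rater_cuts = [mean(row[j] for row in ratings) for j in range(k)]
    sej = stdev(rater_cuts) / (k ** 0.5)
    return cutscore, item_sd, sej

ratings = [   # made-up Round 2 ratings: 5 items x 4 raters
    [80, 85, 75, 80],
    [60, 65, 70, 60],
    [90, 95, 90, 85],
    [70, 65, 75, 70],
    [55, 60, 50, 55],
]
cut, item_sd, sej = angoff_summary(ratings)
print(f"Recommended cutscore: {cut:.1f}%  SEJ: {sej:.1f}  Item SDs: {item_sd}")
```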

 

[Figure: example of modified-Angoff ratings]

Step 7: Write Up Your Report

Validity refers to evidence gathered to support test score interpretations.  Well, you have lots of relevant evidence here. Document it.  If your test gets challenged, you’ll have all this in place.  On the other hand, if you just picked 70% as your cutscore because it was a nice round number, you could be in trouble.

Additional Topics

In some situations, there are more issues to worry about.  Multiple forms?  You’ll need to equate in some way.  Using item response theory?  You’ll have to convert the cutscore from the modified-Angoff method onto the theta metric using the Test Response Function (TRF).  New credential and no data available? That’s a real chicken-and-egg problem there.

Where Do I Go From Here?

Ready to take the next step and actually apply the modified-Angoff process to improving your exams?  Sign up for a free account in our  FastTest item banker. You can also download our Angoff analysis tool for free.

References

Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86(2), 420–428.


Scaling is a psychometric term regarding the establishment of a score metric for a test, and it often has two meanings. First, it involves defining the method used to operationally score the test, establishing an underlying scale on which people are being measured.  A common example is the T-score, which transforms raw scores into a standardized scale with a mean of 50 and a standard deviation of 10, making it easier to compare results across different populations or test forms.  Second, it refers to score conversions used for reporting scores, especially conversions that are designed to carry specific information.  The latter is typically called scaled scoring.

Examples of Scaling

You have all been exposed to this type of scaling, though you might not have realized it at the time. Most high-stakes tests like the ACT, SAT, GRE, and MCAT are reported on scales that are selected to convey certain information, with the actual numbers selected more or less arbitrarily. The SAT and GRE have historically had a nominal mean of 500 and a standard deviation of 100, while the ACT has a nominal mean of 18 and standard deviation of 6. These are actually the same scale, because they are nothing more than a converted z-score (standard or zed score), simply because no examinee wants to receive a score report that says you got a score of -1. The numbers above were arbitrarily selected, and then the score range bounds were selected based on the fact that 99% of the population is within plus or minus three standard deviations. Hence, the SAT and GRE range from 200 to 800 and the ACT ranges from 0 to 36. This leads to the urban legend of receiving 200 points for writing your name correctly on the SAT; again, it feels better for the examinee. A score of 300 might seem like a big number and 100 points above the minimum, but it just means that someone is in the 3rd percentile.

Now, notice that I said “nominal.” I said that because the tests do not actually have those means observed in samples, because the samples have substantial range restriction. Because these tests are only taken by students serious about proceeding to the next level of education, the actual sample is of higher ability than the population. The lower third or so of high school students usually do not bother with the SAT or ACT. So many states will have an observed average ACT of 21 and standard deviation of 4. This is an important issue to consider in developing any test. Consider just how restricted the population of medical school students is; it is a very select group.

How can I select a score scale?


For various reasons, actual observed scores from tests are often not reported, and only converted scores are reported.  If there are multiple forms which are being equated, scaling will hide the fact that the forms differ in difficulty, and in many cases, differ in cutscore.  Scaled scores can facilitate feedback.  They can also help the organization avoid explanations of IRT scoring, which can be a headache to some.

When deciding on the conversion calculations, there are several important questions to consider.

First, do we want to be able to make fine distinctions among examinees? If so, the range should be sufficiently wide. My personal view is that the scale should be at least as wide as the number of items; otherwise you are voluntarily giving up information. This in turn means you are giving up variance, which makes it more difficult to correlate your scaled scores with other variables, as the MCAT is correlated with success in medical school. This, of course, means that you are hampering future research – unless that research is able to revert back to actual observed scores to make sure all possible information is used. For example, suppose a test with 100 items is reported on a 5-point grade scale of A-B-C-D-F. That scale is quite restricted, and therefore difficult to correlate with other variables in research. But you have the option of reporting the grades to students and still using the original scores (0 to 100) for your research.

Along the same lines, we can swing completely in the other direction. For many tests, the purpose of the test is not to make fine distinctions, but only to broadly categorize examinees. The most common example of this is a mastery test, where the examinee is being assessed on their mastery of a certain subject, and the only possible scores are pass and fail. Licensure and certification examinations are an example. An extension of this is the “proficiency categories” used in K-12 testing, where students are classified into four groups: Below Basic, Basic, Proficient, and Advanced. This is used in the National Assessment of Educational Progress. Again, we see the care taken for reporting of low scores; instead of receiving a classification like “nonmastery” or “fail,” the failures are given the more palatable “Below Basic.”

Another issue to consider, which is very important in some settings but irrelevant in others, is vertical scaling. This refers to the chaining of scales across various tests that are at quite different levels. In education, this might involve linking the scales of exams in 8th grade, 10th grade, and 12th grade (graduation), so that student progress can be accurately tracked over time. Obviously, this is of great use in educational research, such as the medical school process. But for a test to award a certification in a medical specialty, it is not relevant because it is really a one-time deal.

Lastly, there are three calculation options: pure linear (ScaledScore = RawScore * Slope + Intercept), standardized conversion (Old Mean/SD to New Mean/SD), and nonlinear approaches like Equipercentile.
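As a quick illustration of the first two options, here is a minimal sketch of the pure linear and standardized conversions; the scale parameters (T-score 50/10, SAT-style 500/100) and the raw-score mean and SD are illustrative assumptions.

```python
def linear_scale(raw, slope, intercept):
    """Pure linear conversion: ScaledScore = RawScore * Slope + Intercept."""
    return raw * slope + intercept

def standardized_scale(raw, old_mean, old_sd, new_mean, new_sd):
    """Standardized conversion: map the old mean/SD onto a new mean/SD via a z-score."""
    z = (raw - old_mean) / old_sd
    return z * new_sd + new_mean

# Example: raw scores with an assumed mean of 30 and SD of 5, reported as T-scores (50/10)
print(standardized_scale(38, old_mean=30, old_sd=5, new_mean=50, new_sd=10))   # 66.0

# The same z-score logic behind an SAT-style scale with nominal mean 500 and SD 100
print(standardized_scale(38, old_mean=30, old_sd=5, new_mean=500, new_sd=100)) # 660.0
```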

Perhaps the most important issue is whether the scores from the test will be criterion-referenced or norm-referenced. Often, this choice will be made for you because it distinctly represents the purpose of your tests. However, it is quite important and usually misunderstood, so I will discuss this in detail.

Criterion-Referenced vs. Norm-Referenced


This is a distinction between the ways test scores are used or interpreted. A criterion-referenced score interpretation means that the score is interpreted with regards to defined content, blueprint, or curriculum (the criterion), and ignores how other examinees perform (Bond, 1996). A classroom assessment is the most common example; students are scored on the percent of items correct, which is taken to imply the percent of the content they have mastered. Conversely, a norm-referenced score interpretation is one where the score provides information about the examinee’s standing in the population, but no absolute (or ostensibly absolute) information regarding their mastery of content. This is often the case with non-educational measurements like personality or psychopathology. There is no defined content which we can use as a basis for some sort of absolute interpretation. Instead, scores are often either z-scores or some linear function of z-scores.  IQ is historically scaled with a mean of 100 and standard deviation of 15.

It is important to note that this dichotomy is not a characteristic of the test, but of the test score interpretations. This fact is more apparent when you consider that a single test or test score can have several interpretations, some of which are criterion-referenced and some of which are norm-referenced. We will discuss this deeper when we reach the topic of validity, but consider the following example. A high school graduation exam is designed to be a comprehensive summative assessment of a secondary education. It is therefore specifically designed to cover the curriculum used in schools, and scores are interpreted within that criterion-referenced context. Yet scores from this test could also be used for making acceptance decisions at universities, where scores are only interpreted with respect to their percentile (e.g., accept the top 40%). The scores might even do a fairly decent job at this norm-referenced application. However, this is not what they are designed for, and such score interpretations should be made with caution.

Another important note is the definition of “criterion.” Because most tests with criterion-referenced scores are educational and involve a cutscore, a common misunderstanding is that the cutscore is the criterion. It is still the underlying content or curriculum that is the criterion, because we can have this type of score interpretation without a cutscore. Regardless of whether there is a cutscore for pass/fail, a score on a classroom assessment is still interpreted with regards to mastery of the content.  To further add to the confusion, Industrial/Organizational psychology refers to outcome variables as the criterion; for a pre-employment test, the criterion is typically Job Performance at a later time.

This dichotomy also leads to some interesting thoughts about the nature of your construct. If you have a criterion-referenced score, you are assuming that the construct is concrete enough that anybody can make interpretations regarding it, such as mastering a certain percentage of content. This is why non-concrete constructs like personality tend to be only norm-referenced. There is no agreed-upon blueprint of personality.

Multidimensional Scaling


An advanced topic worth mentioning is multidimensional scaling (see Davison, 1998). The purpose of multidimensional scaling is similar to factor analysis (a later discussion!) in that it is designed to evaluate the underlying structure of constructs and how they are represented in items. This is therefore useful if you are working with constructs that are brand new, so that little is known about them, and you think they might be multidimensional. This is a pretty small percentage of the tests out there in the world; I encountered the topic in my first year of graduate school – only because I was in a Psychological Scaling course – and have not encountered it since.

Summary of test scaling

Scaling is the process of defining the scale on which your measurements will take place. It raises fundamental questions about the nature of the construct. Fortunately, in many cases we are dealing with a simple construct that has well-defined content, like an anatomy course for first-year medical students. Because it is so well-defined, we often take criterion-referenced score interpretations at face value. But as constructs become more complex, like job performance of a first-year resident, it becomes harder to define the scale, and we start to deal more in relatives than absolutes. At the other end of the spectrum are completely ephemeral constructs where researchers still can’t agree on the nature of the construct and we are pretty much limited to z-scores. Intelligence is a good example of this.

Some sources attempt to delineate the scaling of people and the scaling of items or stimuli as separate things, but this is really impossible because they are so confounded: people define item statistics (the percent of people that get an item correct) and items define people’s scores (the percent of items a person gets correct). It is for this reason that item response theory, the most advanced paradigm in measurement theory, was designed to place items and people on the same scale. It is also for this reason that item writers should consider how items will be scored and therefore lead to person scores. But because we start writing items long before the test is administered, and the nature of the construct is caught up in the scale, the issues presented here need to be addressed at the very beginning of the test development cycle.


Certification exam development is a well-defined process governed by accreditation guidelines such as those from the NCCA, requiring steps such as a job task analysis and a standard-setting study.  For certification, and other credentialing like licensure or certificates, this process is incredibly important to establishing validity.  Such exams serve as gatekeepers into many professions, often after people have invested a great deal of money and years of their life in preparation.  Therefore, it is critical that the tests be developed well, and have the necessary supporting documentation to show that they are defensible.

So what exactly goes into developing a quality exam, sound psychometrics, and the validity documentation, perhaps enough to achieve NCCA accreditation for your certification? Well, there is a well-defined and recognized process for certification exam development, though it is rarely exactly the same for every organization.  In general, the accreditation guidelines say you need to address these things, but leave the specific approach up to you.  For example, you have to do a cutscore study, but you are allowed to choose the Bookmark method vs. Angoff vs. another method.

Key Stages in Certification Exam Development

Job Analysis / Practice Analysis

A job analysis study provides the vehicle for defining the important job knowledge, skills, and abilities (KSA) that will later be translated into content on a certification exam. During a job analysis, important job KSAs are obtained by directly analyzing job performance of highly competent job incumbents or surveying subject-matter experts regarding important aspects of successful job performance. The job analysis generally serves as a fundamental source of evidence supporting the validity of scores for certification exams.

Test Specifications and Blueprints

The results of the job analysis study are quantitatively converted into a blueprint for the certification exam.  Basically, it comes down to this: if the experts say that a certain topic or skill is done quite often or is very critical, then it deserves more weight on the exam, right?  There are different ways to do this.  My favorite article on the topic is Raymond and Neustel (2006).  Here’s a free tool to help.
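As a rough illustration, here is a minimal sketch of one common weighting scheme: multiply each domain's mean frequency rating by its mean criticality rating and normalize to 100%. The domain names and ratings are invented, and Raymond and Neustel discuss several alternative schemes, so treat this as a sketch rather than the required method.

```python
def blueprint_weights(tasks):
    """Convert mean frequency and criticality ratings into exam blueprint percentages.

    tasks: dict of domain -> (mean_frequency, mean_criticality) from the survey.
    One common scheme multiplies the two ratings and normalizes to 100%.
    """
    raw = {domain: freq * crit for domain, (freq, crit) in tasks.items()}
    total = sum(raw.values())
    return {domain: round(100 * value / total, 1) for domain, value in raw.items()}

# Invented survey results on 1-5 scales
tasks = {
    "Patient assessment": (4.5, 4.8),
    "Documentation":      (4.8, 3.2),
    "Infection control":  (3.9, 4.9),
    "Office procedures":  (3.1, 2.5),
}
print(blueprint_weights(tasks))
```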

[Figure: the test development cycle, including job task analysis]

Item Development

After important job KSAs are established, subject-matter experts write test items to assess them. The end result is the development of an item bank from which exam forms can be constructed. The quality of the item bank also supports test validity.  A key operational step is the development of an Item Writing Guide and holding an item writing workshop for the SMEs.

Pilot Testing

There should be evidence that each item in the bank actually measures the content that it is supposed to measure; in order to assess this, data must be gathered from samples of test-takers. After items are written, they are generally pilot tested by administering them to a sample of examinees in a low-stakes context—one in which examinees’ responses to the test items do not factor into any decisions regarding competency. After pilot test data is obtained, a psychometric analysis of the test and test items can be performed. This analysis will yield statistics that indicate the degree to which the items measure the intended test content. Items that appear to be weak indicators of the test content are generally removed from the item bank, or flagged so they can be reviewed by subject matter experts for correctness and clarity.

Note that this is not always possible, and is one of the ways that different organizations diverge in how they approach exam development.

Standard Setting

Standard setting also is a critical source of evidence supporting the validity of professional credentialing exam (i.e. pass/fail) decisions made based on test scores.  Standard setting is a process by which a passing score (or cutscore) is established; this is the point on the score scale that differentiates between examinees that are and are not deemed competent to perform the job. In order to be valid, the cutscore cannot be arbitrarily defined. Two examples of arbitrary methods are the quota (setting the cut score to produce a certain percentage of passing scores) and the flat cutscore (such as 70% on all tests). Both of these approaches ignore the content and difficulty of the test.  Avoid these!

Instead, the cutscore must be based on one of several well-researched criterion-referenced methods from the psychometric literature.  There are two types of criterion-referenced standard-setting procedures (Cizek, 2006): examinee-centered and test-centered.

The Contrasting Groups method is one example of a defensible examinee-centered standard-setting approach. This method compares the score distributions of candidates who have already been classified as Pass or Fail by some other means. Obviously, this has the drawback that a separate classification method must already exist. Moreover, examinee-centered approaches such as this require data from examinees, but many testing programs wish to set the cutscore before publishing the test and delivering it to any examinees. Therefore, test-centered methods are more commonly used in credentialing.
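For illustration, here is a minimal sketch of the Contrasting Groups logic: fit a distribution to each group's scores and take the point where the two curves cross as the cutscore. Approximating the crossing point from fitted normal curves is only one of several ways to operationalize the method, and the scores and function names below are invented.

```python
from statistics import mean, stdev
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution fit to a group's scores."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def contrasting_groups_cut(fail_scores, pass_scores):
    """Approximate the cutscore as the point between the group means
    where the two fitted normal curves cross."""
    mu_f, sd_f = mean(fail_scores), stdev(fail_scores)
    mu_p, sd_p = mean(pass_scores), stdev(pass_scores)
    best_x, best_gap = mu_f, float("inf")
    x = mu_f
    while x <= mu_p:
        gap = abs(normal_pdf(x, mu_f, sd_f) - normal_pdf(x, mu_p, sd_p))
        if gap < best_gap:
            best_x, best_gap = x, gap
        x += 0.01
    return round(best_x, 2)

fail_scores = [52, 55, 58, 60, 61, 63, 65]   # invented scores of the "not competent" group
pass_scores = [68, 70, 72, 74, 75, 78, 82]   # invented scores of the "competent" group
print(contrasting_groups_cut(fail_scores, pass_scores))
```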

The most frequently used test-centered method is the Modified Angoff Method (Angoff, 1971) which requires a committee of subject matter experts (SMEs).  Another commonly used approach is the Bookmark Method.

Equating

If the test has more than one form – which is required by NCCA Standards and other guidelines – they must be statistically equated.  If you use classical test theory, there are methods like Tucker or Levine.  If you use item response theory, you can either bake the equating into the item calibration process with software like Xcalibre, or use conversion methods like Stocking & Lord.

What does this process do?  Well, if this year’s certification exam had an average 3 points higher than last year’s, how do you know whether this year’s version was 3 points easier, this year’s cohort was 3 points smarter, or a mixture of both?  Learn more here.
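To make the idea concrete, here is a minimal sketch of linear equating under a random-groups design, which simply matches means and standard deviations. It is deliberately simpler than the Tucker, Levine, or Stocking and Lord methods named above, and the score data are invented.

```python
from statistics import mean, pstdev

def linear_equate(form_y_scores, form_x_scores):
    """Return a function that converts a Form Y raw score onto the Form X scale
    by matching means and standard deviations (random-groups linear equating)."""
    my, sy = mean(form_y_scores), pstdev(form_y_scores)
    mx, sx = mean(form_x_scores), pstdev(form_x_scores)
    slope = sx / sy
    intercept = mx - slope * my
    return lambda y: slope * y + intercept

# Invented data: this year's form (Y) ran about 3 points easier than last year's (X)
form_x = [60, 65, 70, 72, 75, 80, 85]
form_y = [63, 68, 73, 75, 78, 83, 88]
to_x_scale = linear_equate(form_y, form_x)
print(round(to_x_scale(75), 1))   # a 75 on Form Y maps to about a 72 on the Form X scale
```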

Psychometric Analysis & Reporting

This part is an absolutely critical step in the exam development cycle for professional credentialing.  You need to statistically analyze the results to flag any items that are not performing well, so you can replace or modify them.  This looks at statistics like item p-value (difficulty), item point biserial (discrimination), option/distractor analysis, and differential item functioning.  You should also look at overall test reliability/precision and other psychometric indices.  If you are accredited, you need to perform year-end reports and submit them to the governing body.  Learn more about item and test analysis.
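As a small illustration of the first two statistics mentioned, here is a minimal sketch that computes item p-values and corrected item-total (point-biserial) correlations from a matrix of 0/1 scored responses. The response data are invented, and operational analyses would use dedicated psychometric software.

```python
from statistics import mean, pstdev

def item_analysis(responses):
    """Classical item statistics from a persons-by-items matrix of 0/1 scores.

    Returns (p_value, point_biserial) per item. The point-biserial here is the
    correlation between the item score and the total score on the other items
    (a corrected item-total correlation).
    """
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]
    stats = []
    for j in range(n_items):
        item = [row[j] for row in responses]
        rest = [t - x for t, x in zip(totals, item)]      # total excluding this item
        p = mean(item)
        sd_item, sd_rest = pstdev(item), pstdev(rest)
        if sd_item == 0 or sd_rest == 0:
            stats.append((round(p, 2), None))             # no variance, no correlation
            continue
        cov = mean(x * r for x, r in zip(item, rest)) - p * mean(rest)
        stats.append((round(p, 2), round(cov / (sd_item * sd_rest), 2)))
    return stats

# Invented responses: 6 examinees x 4 items
responses = [
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 0],
]
print(item_analysis(responses))
```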

Exam Development: It’s a Vicious Cycle

Now, consider the big picture: in many cases, an exam is not a one-and-done thing.  It is re-used, perhaps continually.  Often there are new versions released, perhaps based on updated blueprints or simply to swap out questions so that they don’t get overexposed.  That’s why this is better conceptualized as an exam development cycle, like the circle shown above.  Often some steps like Job Analysis are only done once every 5 years, while the rotation of item development, piloting, equating, and psychometric reporting might happen with each exam window (perhaps you do exams in December and May each year).

ASC has extensive expertise in managing this cycle for professional credentialing exams, as well as many other types of assessments.  Get in touch with us to talk to one of our psychometricians.  I also suggest reading two other blog posts on the topic: ‘Certification Exam Administration and Proctoring‘ and ‘Certification Exam Delivery: Guidelines for Success‘ for a comprehensive understanding.

Test preparation for a high-stakes exam can be a daunting task. Obtaining degrees, certifications, and other significant achievements can accelerate your career and present new opportunities to learn in your chosen field of study, so the exam is a pivotal step towards advancing your career and demonstrating your expertise. However, achieving success requires more than just diligent study: you must also incorporate strategies that address your unique needs so you can study effectively. In this article, we will explore ten essential strategies to aid in preparing for your exam.

Also, remember that this test serves an important purpose.  If it is certification/licensure, it is designed to protect the public, ensuring that only qualified professionals work in the field.  If it is a pre-employment test, it is designed to predict job performance and fit in the organization – both to make the organization more efficient, and to help ensure you have the opportunity to be successful.  Tests are not designed to be a personal hurdle to you!

Tips for Test Preparation

  1. Study Early, Study Regularly: To avoid the effects of procrastination, such as stress on the evening before the exam begins, begin your studies as soon as possible. By beginning your studies early, one can thoroughly review the material that will likely be on the exam, as well as identify topics that one may need assistance in reviewing. Also, creating a schedule will ensure thorough review of the material by making “chunks” to ensure the task of studying does not become overwhelming.  Research has shown that strategic test preparation will be more effective. Create a schedule to help manage the process.
  2. Create Organized Notes: Gather your materials, including notes prepared in class, textbooks, and any other helpful materials. If possible, organize your notes by topic, chapter, date, or any other method that will make the material easier for you to understand. Make use of underlining or highlighting relevant sections to facilitate faster review times in the future and to better retain the information you are reviewing.
  3. Understand the Exam Format: Begin by familiarizing yourself with the exam’s format and structure. Understand the types of questions you’ll encounter, such as multiple-choice, essays, or practical demonstrations. Review the blueprints. Knowing what to expect will help you tailor your study plan accordingly and manage your time during the exam.
  4. Focus on Weak Areas: Identify your weaknesses early in the study process and prioritize them. Spend extra time reviewing challenging concepts and seeking clarification from instructors or peers if needed. Don’t neglect these areas in favor of topics you find easier, as they are likely to appear on the exam.
  5. Utilize Memory Aids: Memory aids can be incredibly helpful in associating relevant information with questions. Examples include acronyms, such as “PEMDAS,” often promoted by teachers for learning the order of operations in mathematics, and story-writing to create context for or association with the material, such as imagining the amounts and types of force experienced by riders on an amusement-park ride and how the associated formula should be applied.
  6. Familiarize yourself with the rules: Review the candidate handbook and other materials. Exams may differ in structure across varying subjects, incorporating stipulations such as essay-writing, image recognition, or passages to be read with corresponding questions. Familiarizing oneself with the grading weights and general structure of an exam can aid in allocating time to particular sections to study and/or practice for.
  7. Take care of yourself: It is essential that your physical and mental wellbeing is cared for while you are studying. Adequate sleep is essential both for acquiring new information and for reviewing previously-acquired information. Also, it is necessary to eat and drink regularly, and opportunities to do so should be included in your schedule. Exercise may also be beneficial in providing opportunities for relevant material to be reviewed, or as a break from your studies.
  8. Take practice tests: Practice is essential for exam success.  Many high-stakes exams offer a practice test, either through the test sponsor or through a third party, like these MCAT tests with Kaplan. Along with providing a further opportunity to review the content of the exam, practice tests can aid in identifying topics that one may need to further review, as well as increasing confidence for the exam-taker in their ability to pass the exam.  Practice exams simulate the test environment and help identify areas where you need improvement. Focus on both content knowledge and test-taking strategies to build confidence and reduce anxiety on exam day.
  9. Simulate Exam Conditions: To complete your test preparation, try to get as close as possible to the real thing.  Test yourself under a time-limit in a room similar to the exam room. Subtle factors such as room temperature or sound insulation may aid or distract one’s studies, so it may be helpful to simulate taking the exam in a similar environment.  Practice implementing your test-taking strategies, such as process of elimination for multiple-choice questions or managing your time effectively.
  10. Maintain a Positive Attitude: Remaining optimistic can improve one’s perceptions of their efforts, and their performance in the exam. Use mistakes as opportunities to improve oneself, rather than incidents to be recounted in the future. Maintaining an optimistic perspective can also help mitigate stress in the moments before one begins their exam.

 

Final Thoughts on Test Preparation

In conclusion, by incorporating these strategies into your test preparation routine, you’ll be better equipped to tackle your exam with confidence and competence. Remember that success is not just about how much you know but also how well you prepare. Stay focused, stay disciplined, and trust in your ability to succeed. Good luck!


Stackable credentials refer to the practice of accumulating a series of credentials such as certifications or microcredentials over time.  In today’s rapidly evolving job market, staying ahead of the curve is crucial for career success. Employers are increasingly looking for candidates with a diverse set of skills and qualifications. This has given rise to the concept of “stackable credentials,” a strategy that can supercharge your career trajectory. In this blog post, we’ll explore what stackable credentials are, why they matter, and how you can leverage them to open new doors in your professional journey.

What is a Stackable Credential?

Stackable credentials are where you add multiple, related credentials to your resume.  Instead of pursuing a single, traditional degree – which often takes a long time, like 4 years – individuals can enhance their skill set by stacking various credentials on top of each other while they work in the field. These could include industry certifications, specialized training programs, or workshops that target specific skills.  For example, instead of getting a generalized 4-year degree in Marketing, you might take a 3-month bootcamp in WordPress and then start working in the field, while adding other credentials on search engine optimization, Google Ads, or other related topics.

Why Stackable Credentials Matter

  1. Adaptability in a Changing Job Market: The job market is dynamic, and industries are constantly evolving. Stackable credentials allow professionals to adapt to these changes quickly. By continually adding relevant certifications, individuals can stay current with industry trends and remain competitive in the job market.
  2. Customized Learning Paths: Unlike traditional degrees that follow a predetermined curriculum, stackable credentials empower individuals to tailor their learning paths. This flexibility enables professionals to focus on acquiring skills that align with their career goals, making their journey more efficient and purposeful.
  3. Incremental Career Advancement: Stackable credentials provide incremental steps for career advancement. As professionals accumulate qualifications, they can showcase a diverse skill set, making them more appealing to employers seeking candidates with a broad range of competencies. This can lead to promotions, salary increases, and expanded job responsibilities.
  4. Demonstration of Lifelong Learning: In a world where continuous learning is essential, stackable credentials demonstrate a commitment to lifelong learning. This commitment is highly valued by employers who seek individuals eager to stay informed about industry developments and emerging technologies.

Leveraging Stackable Credentials for Success

  1. Identify Your Career Goals: Begin by identifying your long-term career goals. What skills are in demand in your industry? What certifications are recognized and respected? Understanding your objectives will guide you in selecting the most relevant stackable credentials.
  2. Research Accredited Programs: It’s crucial to choose reputable and accredited programs when pursuing stackable credentials. Look for certifications and courses offered by well-known organizations or institutions. This ensures that your efforts are recognized and valued in the professional sphere.
  3. Networking Opportunities: Many stackable credential programs provide opportunities for networking with industry professionals. Take advantage of these connections to gain insights into current trends, job opportunities, and potential mentors who can guide you in your career journey.
  4. Stay Informed and Updated: Stackable credentials lose their value if they become outdated. Stay informed about changes in your industry and continuously update your skill set. Consider regularly adding new credentials to your stack to remain relevant and marketable.

External Resources for Further Learning:

  1. edX – MicroMasters Programs: Explore edX’s MicroMasters programs for in-depth, specialized learning that can be stacked for advanced qualifications.
  2. LinkedIn Learning: Access a vast library of courses on LinkedIn Learning to acquire new skills and stack credentials relevant to your career.
  3. CompTIA Certifications: Explore CompTIA’s certifications for IT professionals, offering a stackable approach to building expertise in various technology domains.
  4. Coursera Specializations: Coursera offers specializations that allow you to stack credentials in specific fields, providing a comprehensive learning path.

Summary

In conclusion, stackable credentials offer a strategic approach to career development in an ever-changing professional landscape. By embracing this mindset and continuously adding valuable qualifications, you can unlock new opportunities and elevate your career to new heights. Stay curious, stay relevant, and keep stacking those credentials for a brighter future.

Online proctoring software refers to platforms that proctor educational or professional assessments (exams or tests) when the proctor is not in the same room as the examinee. This means that it is done with a video stream or recording using a webcam and sometimes an additional device, which are monitored by a human and/or AI. It is also referred to as remote proctoring or invigilation. Online proctoring offers a compelling alternative to in-person proctoring, somewhere in between unproctored at-home tests and tests delivered at an expensive testing center in an office building. This makes it a perfect fit for medium-stakes exams, such as university placement, pre-employment screening, and many types of certification/licensure tests.

The Importance of Online Proctoring Software

Online proctoring software has become essential in maintaining the integrity and fairness of remote examinations. This technology enables educational institutions and certification bodies to administer tests securely, ensuring that the assessment process is free from cheating and other forms of malpractice. By using advanced features such as facial recognition, screen monitoring, and AI-driven behavior analysis, online proctoring software can detect and flag suspicious activities in real time, making it a reliable solution for maintaining academic standards.

Moreover, online proctoring software offers significant convenience and flexibility. Students and professionals can take exams from any location, reducing the need for physical test centers and making it easier for individuals in remote or underserved areas to access educational opportunities. This flexibility not only broadens access to education and certification but also saves time and resources for both test-takers and administrators. Additionally, the data collected during online proctored exams can provide valuable insights into test-taker behavior, helping organizations to refine their testing processes and improve overall assessment quality.

List of Online Proctoring Software Providers

Looking to evaluate potential vendors? Here is a great place to start.

# Name Website Country Proctor Service
1 Aiproctor https://www.aiproctor.com/ USA AI
2 Centre Based Test (CBT) https://www.conductexam.com/center-based-online-test-software/ India Live, Record and Review
3 Class in Pocket classinpocket.com (Website now defunct) India AI
4 Datamatics https://www.datamatics.com/industries/education-technology/proctoring/ India AI, Live, Record and Review
5 DigiProctor https://www.digiproctor.com/ India AI
6 Disamina https://disamina.in/ India AI
7 Examity https://www.examity.com/ USA Live
8 ExamMonitor https://examsoft.com/ USA Record and Review
9 ExamOnline https://examonline.in/remote-proctoring-solution-for-employee-hiring/ India AI, Live
10 Eduswitch https://eduswitch.com/ India AI
11 Examus https://examus.com/ Russia AI, Bring Your Own Proctor, Live
12 EasyProctor https://www.excelsoftcorp.com/products/assessment-and-proctoring-solutions/ India AI, Live, Record and Review
13 HonorLock https://honorlock.com/ USA AI, Record and Review
14 Internet Testing Systems https://www.testsys.com/ USA Bring Your Own Proctor
15 Invigulus https://www.invigulus.com/ USA AI, Live, Record and Review
16 Iris Invigilation https://www.irisinvigilation.com/ Australia AI
17 Mettl https://mettl.com/en/online-remote-proctoring/ India AI, Live, Record and Review
18 MonitorEdu https://monitoredu.com/proctoring/ USA Live
19 OnVUE https://home.pearsonvue.com/Test-takers/OnVUE-online-proctoring.aspx/ USA Live
20 Oxagile https://www.oxagile.com/competence/edtech-solutions/proctoring/ USA AI, Live, Record and Review
21 Parakh https://parakh.online/blog/remote-proctoring-ultimate-solution-for-secure-online-exam/ India AI, Live, Record and Review
22 ProctorFree https://www.proctorfree.com/ USA AI, Live
23 Proctor360 https://proctor360.com/ USA AI, Bring Your Own Proctor, Live, Record and Review
24 ProctorEDU https://proctoredu.com/ Russia AI, Live, Record and Review
25 ProctorExam https://proctorexam.com/ Netherlands Bring Your Own Proctor, Live, Record and Review
26 Proctorio https://proctorio.com/products/online-proctoring/ USA AI, Live
27 Proctortrack https://www.proctortrack.com/ USA AI, Live
28 ProctorU https://www.proctoru.com/ USA AI, Live, Record and Review
29 Proview https://www.proview.io/en/ USA AI, Live
30 PSI Bridge https://www.psionline.com/en-gb/platforms/psi-bridge/ USA Live, Record and Review
31 Respondus Monitor https://web.respondus.com/he/monitor/ USA AI, Live, Record and Review
32 Rosalyn https://www.rosalyn.ai/ USA AI, Live
33 SmarterProctoring https://smarterservices.com/smarterproctoring/ USA AI, Bring Your Own Proctor, Live
34 Sumadi https://sumadi.net/ Honduras AI, Live, Record and Review
35 Suyati https://suyati.com/ India AI, Live, Record and Review
36 TCS iON Remote Assessments https://www.tcsion.com/hub/remote-assessment-marking-internship/ India AI, Live
37 Think Exam https://www.thinkexam.com/remoteproctoring/ India AI, Live
38 uxpertise XP https://uxpertise.ca/en/uxpertise-xp/ Canada AI, Live, Record and Review
39 Proctor AI https://www.visive.ai/solutions/proctor-ai/ India AI, Live, Record and Review
40 Wise Proctor https://www.wiseattend.com/wise-proctor/ USA AI, Record and Review
41 Xobin https://xobin.com/online-remote-proctoring/ India AI
42 Youtestme https://www.youtestme.com/online-proctoring/ Canada AI, Live

 

Our recommendations for the best online proctoring software

ASC’s online assessment platforms are integrated with some of the leading remote proctoring software providers.

Type Vendors
Live MonitorEDU
AI Constructor, Proctor360, ProctorFree
Record and Review Constructor, Proctor360, ProctorFree
Bring Your Own Proctor Constructor, Proctor360

 

How do I select an online proctoring software?

First, you need to determine which type of proctoring you need. There are four levels of online proctoring; more detail on this later!

  1. Live: Professional proctors watch in real time
  2. Record and review: Video is recorded, and professional proctors watch later
  3. AI: Video is recorded, and analyzed with AI to look for flags like two faces
  4. Bring your own proctor: Same as the ones above, but YOU need to watch all the videos

There is a trade-off with costs here. Live proctoring with professionals can cost $30 to $100 or more, while AI proctoring can be as little as a few dollars. So, Live is usually used for high-stakes low-volume exams like Certification, while AI is better for low-stakes high-volume exams like university quizzes.

Next, evaluate some vendors to see which group they fall into; note that some vendors can do all of them! Then, ask for some demos so you understand the business processes involved and the UX on the examinee side, both of which could substantially impact the soft costs for your organization. Finally, start negotiating with the vendor you want!

What do I need? Two Distinct Markets

First, I would describe the online proctoring industry as actually falling into two distinct markets, so the first step is to determine which of these fits your organization:

  1. Large-scale, lower-cost (at volume), lower-security systems designed to be used primarily as a plugin to major LMS platforms like Blackboard or Canvas. These systems are therefore designed for lower-stakes exams like an Intro to Psychology midterm at a university.
  2. Lower-scale, higher-cost, higher-security systems designed to be used with standalone assessment platforms. These are generally for higher-stakes exams like certification or workforce hiring, or special uses at universities such as admissions and placement exams.

How to tell the difference? The first type will advertise easy integration with systems like Blackboard or Canvas as a key feature. They will also often focus on AI review of videos rather than on real human proctors. Another key consideration is the existing client base, which is often advertised.

Other ways that online proctoring software can differ

Screen capture: Some online proctoring providers have an option to record or stream the screen as well as the webcam. Some also provide the option to capture only the screen (no webcam) for lower-stakes exams.

Mobile phone as a second camera: Some newer platforms provide the option to easily integrate the examinee’s mobile phone as a second camera (a third stream, if you include screen capture), which gives much of the room coverage a human proctor would have. Examinees are instructed to use the phone’s video to show under the table, behind the monitor, etc., before starting the exam. They might then be instructed to prop the phone up 2 meters away with a clear view of the entire room while the test is being delivered. This is in addition to the webcam.

API integrations: Some systems require software developers to set up an API integration with your LMS or assessment platform. Others are more flexible, and you can just log in yourself, upload a list of examinees, and you are all set.
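
To illustrate what the API-integration route can look like, here is a hypothetical sketch of registering examinees with a proctoring provider over REST. The base URL, endpoint, payload fields, and authentication scheme are all invented for illustration; every vendor's actual API is different, so consult their documentation.

```python
# Hypothetical sketch of registering examinees with a proctoring vendor's REST API.
# The base URL, endpoint, payload fields, and auth scheme are illustrative only;
# consult your vendor's API documentation for the real interface.
import requests

API_BASE = "https://api.example-proctor.com/v1"   # placeholder URL
API_TOKEN = "YOUR_API_TOKEN"                      # issued by the vendor

def register_examinee(exam_id, first_name, last_name, email):
    response = requests.post(
        f"{API_BASE}/exams/{exam_id}/examinees",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"first_name": first_name, "last_name": last_name, "email": email},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # typically contains a unique launch link for the examinee

# Example:
# launch_info = register_examinee("cert-2024-q3", "Jane", "Doe", "jane.doe@example.com")
```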

On-Demand vs. Scheduled: Some platforms involve the examinee scheduling a time slot. Others are purely on-demand, and the examinee can show up whenever they are ready. MonitorEDU is a prime example of this: examinees show up at any time, present their ID to a live human, and are then started on the test immediately – no downloads/installs, no system checks, no API integrations, nothing.

What are the types of online proctoring?

There are many types of online proctoring software on the market, spread across dozens of vendors, including many new entrants that sought to capitalize on the pandemic and were not involved with assessment beforehand. With so many options, how can you select among the types of remote proctoring more effectively? There are four types of remote proctoring platform, which can be adapted to a particular use case and may even vary between different tests within a single organization. ASC supports all four types and partners with five different vendors to help provide the best solution to our clients. In descending order of security:

For each approach, here is what it entails for you and what it entails for the candidate.

Live with professional proctors (the most secure of the types of remote proctoring)

What it entails for you:
  • You register a set of examinees in FastTest, and tell us when they are to take their exams and under what rules.
  • We provide the relevant information to the proctors.
  • You send all the necessary information to your examinees.

What it entails for the candidate:
  • Examinee goes to ascproctor.com, where they will initiate a chat with a proctor.
  • After confirmation of their identity and workspace, they are provided information on how to take the test.
  • The proctor then watches a video stream from their webcam as well as a phone on the side of the room, ensuring that the environment is secure. They do not see the screen, so your exam content is not exposed. They maintain exam invigilation continuously.
  • When the examinee is finished, they notify the proctor, and are excused.

Live, bring your own proctor (BYOP)

What it entails for you:
  • You upload examinees into FastTest, which will generate links.
  • You send relevant instructions and the links to examinees.
  • Your staff logs into the admin portal and awaits examinees.
  • Videos with AI flagging are available for later review if needed.

What it entails for the candidate:
  • Examinee will click on a link, which launches the proctoring software.
  • An automated system check is performed.
  • The proctoring is launched. Proctors ask the examinee to provide identity verification, then launch the test.
  • Examinee is watched on the webcam and screencast. AI algorithms help to flag irregular behavior.
  • Examinee concludes the test.

Record and Review (with option for AI)

What it entails for you:
  • You upload examinees into FastTest, which will generate links.
  • You send relevant instructions and the links to examinees.
  • After examinees take the test, your staff (or ours) logs in to review all the videos and report on any issues. AI will automatically flag irregular behavior, making your reviews more time-efficient.

What it entails for the candidate:
  • Examinee will click on a link, which launches the proctoring software.
  • An automated system check is performed.
  • The proctoring is launched. The system asks the examinee to provide identity verification, then launches the test.
  • Examinee is recorded on the webcam and screencast. AI algorithms help to flag irregular behavior.
  • Examinee concludes the test.

AI only

What it entails for you:
  • You upload examinees into FastTest, which will generate links.
  • You send relevant instructions and the links to examinees.
  • Videos are stored for one month if you need to check any.

What it entails for the candidate:
  • Examinee will click on a link, which launches the proctoring software.
  • An automated system check is performed.
  • The proctoring is launched. The system asks the examinee to provide identity verification, then launches the test.
  • Examinee is recorded on the webcam and screencast. AI algorithms help to flag irregular behavior.
  • Examinee concludes the test.

 

Some case studies for different types of exams

We’ve worked with all types of remote proctoring software, across many types of assessment:

  • ASC delivers high-stakes certification exams for a number of certification boards, in multiple countries, using live proctoring with professional proctors. Some of these are available continuously on demand, while others run on specific days when hundreds of candidates log in.
  • We partnered with a large university in South America, where their admissions exams were delivered using Bring Your Own Proctor, enabling them to drastically reduce costs by utilizing their own staff.
  • We partnered with a private company to provide AI-enhanced record-and-review proctoring for applicants, where ASC staff reviews the results and provides a report to the client.
  • We partner with an organization that delivers civil service exams for a country, utilizing both unproctored delivery and AI-only proctoring, which differ across its range of exam titles.

 

More security: Better test delivery software

A good test delivery platform will also come with its own functionality to enhance test security: randomization, automated item generation, computerized adaptive testing, linear-on-the-fly testing, professional item banking, item response theory scoring, scaled scoring, psychometric analytics, equating, lockdown delivery, and more. In the context of online proctoring, perhaps the most salient is lockdown delivery: the test completely takes over the examinee’s computer, and it cannot be used for anything else until the test is done.
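
To give a flavor of one of these psychometric features, here is a minimal sketch of item response theory scoring under the three-parameter logistic (3PL) model, using a simple grid search for the most likely ability estimate. This is a textbook illustration with assumed item parameters, not the scoring engine of any particular platform.

```python
# Minimal sketch of 3PL item response theory scoring: find the ability (theta)
# that maximizes the likelihood of an examinee's 0/1 response pattern.
# Textbook illustration only; production engines use MLE/EAP with more care.
import math

def p_correct(theta, a, b, c):
    """3PL probability of a correct response."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def estimate_theta(responses, items, grid_step=0.01):
    """responses: list of 0/1; items: list of (a, b, c) parameter tuples."""
    best_theta, best_loglik = 0.0, float("-inf")
    theta = -4.0
    while theta <= 4.0:
        loglik = 0.0
        for u, (a, b, c) in zip(responses, items):
            p = p_correct(theta, a, b, c)
            loglik += math.log(p) if u == 1 else math.log(1.0 - p)
        if loglik > best_loglik:
            best_theta, best_loglik = theta, loglik
        theta += grid_step
    return best_theta

# Example: three items with assumed parameters; examinee answered the first two correctly
# items = [(1.2, -0.5, 0.2), (0.9, 0.0, 0.25), (1.5, 1.0, 0.2)]
# theta_hat = estimate_theta([1, 1, 0], items)
```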

LMS systems rarely include any of this functionality, because it is not needed for an Intro to Psychology midterm. However, most assessments in the world that have real stakes – university admissions, certifications, workforce hiring, etc. – depend heavily on such functionality. It’s not just out of habit or tradition, either: such methods are considered essential by international standards including AERA/APA/NCME, ITC, and NCCA.

 

Want some more information?

Get in touch with us; we’d love to show you a demo or introduce you to our partners!

Email solutions@assess.com.

 

The value of digital badges

Digital badges (aka e-badges) have emerged in today’s digitalized world as a powerful tool for recognizing and showcasing individuals’ accomplishments in an online format that is more comprehensive, immediate, brandable, and easily verifiable than traditional paper certificates or diplomas. In this blog post, we will delve into the world of digital badges, explore best badging practices, discuss their advantages in education, and highlight examples of badge-awarding platforms.

Digital Badges and Their Utilization

Digital badges are visual representations or icons used to recognize and verify an individual’s achievement or mastery of a particular skill or knowledge area in the online space. They serve as a form of digital credentialing or micro-credentialing, providing a way for individuals to display their skills, knowledge, or accomplishments in a specific domain, helping them stand out and gain recognition in the digital landscape.

Digital badges often contain metadata, such as the name of the issuing organization, a brief description of the achievement, criteria for earning the badge, and evidence of the individual’s accomplishment. This metadata is embedded within the badge image using a technology called Open Badges, allowing anyone to verify the authenticity and details of the badge.
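
For readers curious what that embedded metadata can look like, here is a simplified sketch of an Open Badges style assertion built as a Python dictionary. The field names loosely follow the Open Badges 2.0 vocabulary, but the URLs, identifiers, and values are invented placeholders; real badging platforms generate and host this metadata for you.

```python
# Simplified sketch of Open Badges style assertion metadata, built as a Python dict.
# Field names loosely follow the Open Badges 2.0 vocabulary; URLs and values are
# invented placeholders. Real issuing platforms generate and host this for you.
import json

assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://badges.example.org/assertions/12345",       # placeholder URL
    "recipient": {"type": "email", "hashed": False,
                  "identity": "jane.doe@example.com"},
    "badge": "https://badges.example.org/badges/certified-widgetmaker",  # BadgeClass URL
    "issuedOn": "2024-06-01T00:00:00Z",
    "verification": {"type": "hosted"},
    "evidence": "https://badges.example.org/evidence/12345",
}

print(json.dumps(assertion, indent=2))
# This JSON can be hosted at the assertion's "id" URL and/or "baked" into the
# badge image so that consumers can verify the achievement.
```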

Digital badges are typically issued and displayed electronically, which makes them easily shareable and accessible across various digital platforms, such as social media profiles, resumes, portfolios, or online learning platforms. They are widely used in various contexts, including education, professional development, training programs, online courses, and gamification.

Digital Badging Best Practices


To ensure the credibility of digital badges, it is important to adhere to the most effective badging practices. Here are some key practices to consider:

  • Clearly Defined Criteria: Badges should have well-defined criteria that outline the specific skills or achievements required to earn the badge. This ensures that the badge holds value and meaning.
  • Authenticity and Verification: Badges should incorporate metadata via technologies like Open Badges, enabling easy verification of the badge’s authenticity and details (see the verification sketch after this list). This helps maintain trust and credibility in the digital badge ecosystem.
  • Issuer Credibility: Reputable organizations, institutions, or experts in the field should issue badges. The credibility of the issuer adds value to the badge and increases its recognition.
  • Visual Design: Badges should be visually appealing and distinct, making them easily recognizable and shareable. Thoughtful design elements can enhance the badge’s appeal and encourage individuals to display them proudly.
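
Building on the Open Badges example above, here is a hypothetical sketch of how a “hosted” assertion could be verified by fetching its JSON from the issuer and checking a few key fields. The URL and checks are illustrative; real verification services also handle hashed recipient identities, revocation, expiry, and issuer validation.

```python
# Hypothetical sketch of verifying a "hosted" Open Badges style assertion:
# fetch the assertion JSON from its id URL and check a few key fields.
# Illustrative only; real assertions often hash the recipient identity, and real
# verifiers also check revocation lists, expiry dates, and the issuer profile.
import requests

def verify_hosted_assertion(assertion_url, expected_email):
    response = requests.get(assertion_url, timeout=30)
    response.raise_for_status()
    assertion = response.json()

    checks = {
        "is_assertion": assertion.get("type") == "Assertion",
        "hosted_verification": assertion.get("verification", {}).get("type") == "hosted",
        "recipient_matches": assertion.get("recipient", {}).get("identity") == expected_email,
    }
    return all(checks.values()), checks

# Example:
# ok, details = verify_hosted_assertion(
#     "https://badges.example.org/assertions/12345", "jane.doe@example.com")
```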

Advantages of Using Digital Badges in Education and Workforce Development

Digital badges offer numerous advantages in education and workforce development, transforming the way achievements and skills are recognized and valued. Some key advantages are listed below:

  • Granular Skill Recognition: Badges provide a way to recognize and demonstrate specific skills or achievements that might not be captured by traditional grades or degrees. This allows individuals to highlight their unique strengths and expertise.
  • Motivation and Engagement: Badges can act as a motivational tool, driving learners to actively pursue goals, and master new skills. The visual nature of badges and the ability to display them publicly create a sense of achievement and pride.
  • Portable and Shareable: Digital badges can be easily shared across various platforms, such as social media profiles, resumes, or online portfolios. This increases the visibility of accomplishments and facilitates networking and professional opportunities.
  • Lifelong Learning: Badges promote a culture of lifelong learning by recognizing and acknowledging continuous skill development. Individuals can earn badges for completing online courses, attending workshops, or acquiring new competencies, fostering a commitment to ongoing personal and professional growth.
  • Brand enhancement: Badges provide exposure to the brand of the issuing institution.
  • Turnaround time: Badges can often be available immediately on a profile page after the credential is awarded.
  • Verifiability: Digital badges are easily verifiable when using an appropriate platform/technology. This helps the consumer, hiring manager, or other stakeholder to determine if a person has the skills and knowledge that are relevant to the credential.

Examples of Badge-Awarding Platforms


Several platforms have emerged to facilitate the creation, issuance, and display of digital badges. Here are a few notable examples:

  • Credly is a widely used platform that allows organizations to design, issue, and manage digital badges. It also provides features for verifying badges and displaying them on various online platforms.
  • Open Badge Factory is an open-source platform that enables badge creation, management, and verification. It offers customizable badge templates and integration with various learning management systems.
  • Badgr is a platform that supports the creation and awarding of digital badges. It provides features for badge design, verification, and integration with various learning management systems (LMS) and online platforms.
  • BadgeCert is another platform which supports digital badges and can be integrated into other platforms.
Certification management systems

In today’s fast-paced and competitive business landscape, managing certifications and credentials efficiently is crucial for organizations. A Certification Management System (CMS) or Credential Management System plays a pivotal role in streamlining the key processes surrounding the certification or credentialing of people, namely, verifying that they have certain knowledge or skills in a profession. It helps with ensuring compliance, reducing business operation costs, and maximizing the value of certifications. In this article, we explore the significance of adopting a CMS and its benefits for both nonprofits and businesses.

What is a certification management system?

A certification management system is an enterprise software platform designed specifically for organizations whose primary goal is to award credentials to people for professional skills and knowledge. Such an organization is often a nonprofit like an Association or Board, and is sometimes called an “awarding body” or similar term. However, nowadays there are many for-profit corporations which offer certifications; for example, many IT/software companies will certify people on their products. Here’s a page that does nothing but list the certifications offered by Salesforce!

These organizations often offer various credentials within a field.

  • Initial certifications (high stakes and broad) – Example: Certified Widgetmaker
  • Certificates or microcredentials – Example: Advanced Widget Design Specialist
  • Recertification exams – Example: taking a test every 5 years to maintain your Certified Widgetmaker status
  • Benchmark/progress exams – Example: Widgetmaker training programs are 2 years long and you take a benchmark exam at the end of Year 1
  • Practice tests – Example: old items from the Certified Widgetmaker test provided in a low-stakes fashion for practice

A credentialing body will need to manage the many business and operation aspects around these.  Some examples:

  • Applications, tracking who is applying for which credential
  • Payment processing
  • Eligibility pathways and documentation (e.g., diplomas)
  • Pass/Fail results
  • Retake status
  • Expiration dates

There will often be functionality that makes these things easier, like automated emails to remind the professionals when their certification is expiring so they can register for their Recertification exam.
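
As a concrete illustration of that kind of automation, here is a hypothetical sketch of a nightly job that finds certifications expiring within a reminder window and queues emails to their holders. The Certification record and send_email helper are invented for illustration; a real certification management system provides this functionality out of the box.

```python
# Hypothetical sketch of a nightly reminder job: find certifications that expire
# within 90 days and email the holders. The Certification record structure and
# send_email helper are invented for illustration; a real CMS handles this for you.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Certification:
    holder_email: str
    credential_name: str
    expires_on: date

def send_email(to, subject, body):
    print(f"To: {to}\nSubject: {subject}\n{body}\n")  # stand-in for a real mailer

def send_expiration_reminders(certifications, days_ahead=90):
    cutoff = date.today() + timedelta(days=days_ahead)
    for cert in certifications:
        if date.today() <= cert.expires_on <= cutoff:
            send_email(
                cert.holder_email,
                f"Your {cert.credential_name} expires on {cert.expires_on}",
                "Please register for your recertification exam to maintain your status.",
            )

# Example:
# send_expiration_reminders([
#     Certification("pat@example.com", "Certified Widgetmaker", date(2024, 9, 1)),
# ])
```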

 

Reasons to use a certification management system

  1. Enhancing Compliance and Regulatory Adherence: In industries with stringent compliance requirements, such as healthcare, finance, and IT, adhering to regulations and maintaining accurate records of certifications is paramount. A comprehensive CMS provides a centralized repository where organizations can securely store, track, and manage certifications and credentials. This ensures compliance with industry standards, regulatory bodies, and audits. With automated alerts and renewal notifications, organizations can stay on top of certification expirations, reducing the risk of non-compliance and associated penalties.  A certification management system will also help your organization achieve accreditation like NCCA or ANSI/ISO 17024.
  2. Streamlining Certification Tracking and Renewals: Managing certifications manually can be a time-consuming and error-prone process. A CMS simplifies this task by automating certification tracking, renewal reminders, and verification processes. By digitizing the management of certifications, organizations can save valuable time and resources, eliminating the need for tedious paperwork and manual record-keeping. Additionally, employees can easily access their certification status, track progress, and initiate renewal processes through a user-friendly interface, enhancing transparency and self-service capabilities.
  3. Improving Workforce Efficiency and Development: An efficient CMS empowers organizations to optimize their workforce’s knowledge and skill development. By capturing comprehensive data on certifications, skills, and training, organizations gain valuable insights into their employees’ capabilities. This information can guide targeted training initiatives, succession planning, and talent management efforts. Moreover, employees can leverage the CMS to identify skill gaps, explore potential career paths, and pursue professional development opportunities. This aligns individual aspirations with organizational goals, fostering a culture of continuous learning and growth.
  4. Enhancing Credential Verification and Fraud Prevention: Verifying the authenticity of certifications is critical, especially in industries where credentials hold significant weight. A CMS with built-in verification features enables employers, clients, and other stakeholders to authenticate certifications quickly and accurately. By incorporating advanced security measures, such as blockchain technology or encrypted digital badges, CMSs provide an added layer of protection against fraud and credential forgery. This not only safeguards the reputation of organizations but also fosters trust and confidence among customers, partners, and regulatory bodies.

 

Of course, the bottom line is that a certification management system will save money, because this is a lot of information for the awarding body to track, and it is mission-critical.

 

Conclusion

Implementing a Certification Management System or Credential Management System is a strategic investment for organizations seeking to streamline their certification processes and maximize their value. By centralizing certification management, enhancing compliance, streamlining renewals, improving workforce development, and bolstering credential verification, a robust CMS empowers organizations to stay ahead in a competitive landscape while ensuring credibility and regulatory adherence.