Automated Essay Scoring with Machine Learning

Automated essay scoring (AES) is an important application of machine learning and artificial intelligence to the field of psychometrics and assessment.  In fact, it’s been around far longer than “machine learning” and “artificial intelligence” have been buzzwords in the general public!  The field of psychometrics has been doing such groundbreaking work for decades.

So how does AES work, and how can you apply it?

 

 

What is automated essay scoring?

The first and most critical thing to know is that there is no algorithm that “reads” the student essays.  Instead, you need to train an algorithm.  That is, if you are a teacher and don’t want to grade your essays, you can’t just throw them into an essay scoring system.  You have to actually grade the essays (or at least a large sample of them) and then use that data to fit a machine learning algorithm.  Data scientists call this “training the model,” which sounds complicated, but if you have ever done simple linear regression, you have experience with training models.

 

There are three steps for automated essay scoring:

  1. Establish your data set (collate student essays and grade them).
  2. Determine the features (predictor variables that you want to pick up on).
  3. Train the machine learning model.

 

Here’s an extremely oversimplified example:

  1. You have a set of 100 student essays, which you have scored on a scale of 0 to 5 points.
  2. The essay is on Napoleon Bonaparte, and you want students to know certain facts, so you give them “credit” in the model if they use words like Corsica, Consul, Josephine, Emperor, Waterloo, Austerlitz, or St. Helena.  You might also add other features such as word count, number of grammar errors, and number of spelling errors.
  3. You create a map of which students used each of these words, as 0/1 indicator variables.  You can then fit a multiple regression with 7 predictor variables (did they use each of the 7 words) and the 0-to-5 score as your criterion variable, and use this model to predict each student’s score from their essay text alone.
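The indicator-variable step can be sketched in a few lines of code. This is a minimal illustration only; the essay text and function names below are invented for the example, not taken from any real scoring system:

```python
# Keywords we want to give "credit" for, per the Napoleon example.
KEYWORDS = ["corsica", "consul", "josephine", "emperor",
            "waterloo", "austerlitz", "st. helena"]

def keyword_indicators(essay_text):
    """Return a 0/1 vector: did the essay mention each keyword?"""
    text = essay_text.lower()
    return [1 if kw in text else 0 for kw in KEYWORDS]

features = keyword_indicators(
    "Napoleon was born in Corsica and crowned himself Emperor."
)
# features -> [1, 0, 0, 1, 0, 0, 0]
```

Each essay then becomes one row of predictor values, ready to feed into a regression.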

 

Obviously, this example is too simple to be of use, but the same general idea is done with massive, complex studies.  The establishment of the core features (predictive variables) can be much more complex, and models are going to be much more complex than multiple regression (neural networks, random forests, support vector machines).

Here’s an example of the very start of a data matrix for features, from an actual student essay.  Imagine that you also have data on the final scores, 0 to 5 points.  You can see how this is then a regression situation.

Examinee  Word Count  i_have  best_jump  move  and_that  the_kids  well
1         307         0       1          2     0         0         1
2         164         0       0          1     0         0         0
3         348         1       0          1     0         0         0
4         371         0       1          1     0         0         0
5         446         0       0          0     0         0         2
6         364         1       0          0     0         1         1
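Given a feature matrix like the one above plus the 0-to-5 scores, fitting the regression takes only a few lines. This is a hedged sketch: the scores in `y` are invented for illustration, and a real AES project would use far more data and more complex models:

```python
import numpy as np

# The feature matrix above: word count plus keyword indicator/count columns.
X = np.array([
    [307, 0, 1, 2, 0, 0, 1],
    [164, 0, 0, 1, 0, 0, 0],
    [348, 1, 0, 1, 0, 0, 0],
    [371, 0, 1, 1, 0, 0, 0],
    [446, 0, 0, 0, 0, 0, 2],
    [364, 1, 0, 0, 0, 1, 1],
], dtype=float)
y = np.array([4, 2, 3, 4, 3, 3], dtype=float)  # invented 0-5 scores

# Add an intercept column and solve the least-squares problem.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Predict a score for a new essay's feature vector (leading 1 = intercept).
new_essay = np.array([1, 320, 0, 1, 1, 0, 0, 0], dtype=float)
predicted = float(new_essay @ coef)
```

With only 6 essays and 7 features this is badly underdetermined, which is exactly why real studies use hundreds of graded essays.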

 

How do you score the essay?

If they are on paper, then automated essay scoring won’t work unless you have extremely good optical character recognition software to convert them into a digital database of text.  Most likely, you have delivered the exam as an online assessment and already have the database.  If so, your platform should include functionality to manage the scoring process, including multiple custom rubrics.  An example of our FastTest platform is provided below.

[Screenshot: essay marking in FastTest]

Some rubrics you might use:

  • Grammar
  • Spelling
  • Content
  • Style
  • Supporting arguments
  • Organization
  • Vocabulary / word choice

 

How do you pick the Features?

This is one of the key research problems.  In some cases, it might be something similar to the Napoleon example.  Suppose you had a complex item on accounting, where examinees review reports and spreadsheets and need to summarize a few key points.  You might pull out a few key terms (mortgage amortization) or numbers (2.375%) and consider them to be features.  I saw a presentation at Innovations In Testing 2022 that did exactly this.  Think of this as giving the students “points” for using those keywords, though because you are using complex machine learning models, it is not simply a single point per keyword; each keyword contributes to a regression-like model with a positive weight.

In other cases, you might not know.  Maybe it is an item on an English test delivered to English language learners, and you ask them to write about what country they want to visit someday.  You have no idea what they will write about.  But what you can do is tell the algorithm to find the words or terms that are used most often, and try to predict the scores with those.  Maybe words like “jetlag” or “edification” show up in essays from students who tend to get high scores, while words like “clubbing” or “someday” tend to be used by students with lower scores.  The AI might also pick up on spelling errors.  I worked as an essay scorer in grad school, and I can’t tell you how many times I saw kids use “ludacris” (the name of an American rap artist) instead of “ludicrous” when trying to describe an argument; they had literally never seen the word used or spelled correctly.  Maybe the model learns to give that a negative weight.  That’s the next section!
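Discovering candidate features when you don’t know them in advance can be as simple as counting term frequencies across the graded essays. This is a hedged toy sketch with an invented mini-corpus; real projects would use text-mining packages with stop-word removal and weighting:

```python
from collections import Counter

# Hypothetical mini-corpus of student responses (illustrative only).
essays = [
    "someday i want to visit japan despite the jetlag",
    "the jetlag would be worth it for the edification",
    "i want to go clubbing in spain someday",
]

# Count how often each word appears across all essays...
counts = Counter(word for essay in essays for word in essay.split())

# ...and keep the most common words as candidate features for the model.
candidate_features = [word for word, _ in counts.most_common(5)]
```

Each candidate word then becomes a column in the feature matrix, and the model fitting decides which ones carry positive or negative weight.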

 

How do you train a model?


Well, if you are familiar with data science, you know there are TONS of models, and many of them have a bunch of parameterization options.  This is where more research is required.  What model works the best on your particular essay, and doesn’t take 5 days to run on your data set?  That’s for you to figure out.  There is a trade-off between simplicity and accuracy.  Complex models might be accurate but take days to run.  A simpler model might take 2 hours but with a 5% drop in accuracy.  It’s up to you to evaluate.

If you have experience with Python and R, you know that there are many packages which provide this analysis out of the box – it is a matter of selecting a model that works.
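As a toy illustration of that speed/accuracy trade-off, here is a hedged sketch that times two invented models (ordinary least squares as the “simple” one, k-nearest neighbours as the “complex” one) on synthetic data; real work would use a package such as scikit-learn or caret:

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature matrix (rows = essays) and 0-5 scores, invented for demo.
X = rng.normal(size=(200, 10))
y = np.clip(np.round(X[:, 0] * 2 + 3 + rng.normal(size=200)), 0, 5)

def fit_predict_linear(X_tr, y_tr, X_te):
    """'Simple' model: ordinary least squares with an intercept."""
    X1 = np.column_stack([np.ones(len(X_tr)), X_tr])
    coef, *_ = np.linalg.lstsq(X1, y_tr, rcond=None)
    return np.column_stack([np.ones(len(X_te)), X_te]) @ coef

def fit_predict_knn(X_tr, y_tr, X_te, k=5):
    """'Complex' model: k-nearest neighbours, slower at predict time."""
    dists = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
    nearest = np.argsort(dists, axis=1)[:, :k]
    return y_tr[nearest].mean(axis=1)

# Hold out 25% of the data and compare error and run time for each model.
split = 150
results = {}
for name, model in [("linear", fit_predict_linear), ("knn", fit_predict_knn)]:
    start = time.perf_counter()
    preds = model(X[:split], y[:split], X[split:])
    elapsed = time.perf_counter() - start
    rmse = float(np.sqrt(np.mean((preds - y[split:]) ** 2)))
    results[name] = (rmse, elapsed)
```

The `results` dictionary is exactly the kind of evidence you would weigh when deciding whether a complex model’s accuracy gain justifies its run time.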

 

How well does automated essay scoring work?

Well, as psychometricians love to say, “it depends.”  You need to do the model fitting research for each prompt and rubric.  It will work better for some than others.  The general consensus in research is that AES algorithms work as well as a second human, and therefore serve very well in that role.  But you shouldn’t use them as the only score; of course, that’s impossible in many cases.

Here’s a graph from some research we did on our algorithm, showing the correlation of human to AES.  The three lines are for the proportion of sample used in the training set; we saw decent results from only 10% in this case!  Some of the models correlated above 0.80 with humans, even though this is a small data set.   We found that the Cubist model took a fraction of the time needed by complex models like Neural Net or Random Forest; in this case it might be sufficiently powerful.

Automated essay scoring results

 

How can I implement automated essay scoring without writing code from scratch?

There are several products on the market.  Some are standalone, while others are integrated with a human-based essay scoring platform.  ASC’s platform for automated essay scoring is SmartMarq; click here to learn more.  It is currently available as a standalone tool, like you see below, making it extremely easy to use.  It is also in the process of being integrated into our online assessment platform, alongside human scoring, to provide an efficient and easy way of obtaining a second or third rater for QA purposes.

Want to learn more?  Contact us to request a demonstration.

 

[Screenshot: SmartMarq automated essay scoring]

AI in Education

 

Artificial intelligence (AI) is poised to address some of the challenges that education faces today, through innovation in teaching and learning processes. By applying AI-in-education technologies, educators can determine student needs more precisely, keep students more engaged, improve learning, and adapt teaching accordingly to boost learning outcomes. The use of AI in education began in the 1970s with the search for a substitute for one-on-one tutoring and has seen multiple improvements since then. This article will look at some of the latest AI developments used in education, their potential impact, and their drawbacks.

Application of AI


AI technologies have recently permeated nearly all aspects of the educational process. Research conducted since 2009 shows that AI has been extensively employed in the management, instruction, and learning sectors. In management, AI tools are used to review and grade student assignments, sometimes even more accurately than educators do. There are AI-based interactive tools that teachers use to build and share student knowledge. Learning can be enhanced through customization and personalization of content, enabled by new technological systems that leverage machine learning (ML) and adaptability.

Below is a list of major educational areas where AI technologies are actively involved and worth developing further.

  • Personalized learning: This educational approach tailors the learning trajectory to individual student needs and interests. AI algorithms analyze student information (e.g., learning style and performance) to create customized learning paths, recommending exercises and materials based on student strengths and weaknesses. AI technologies are increasingly pivotal in online learning apps, personalizing education and making it more accessible to a diverse learner base.
  • Adaptive learning: This approach does the same as personalized learning, but in real time, keeping learners engaged and motivated. ALEKS is a good example of an adaptive learning program.
  • Learning courses: These AI-powered online platforms are designed for eLearning and course management, and enable learners to browse for specific courses and study at their own pace. The platforms offer learning activities in increasing order of difficulty, aiming at ultimate educational goals. Examples include advanced learning management systems (LMS) and massive open online courses (MOOCs).
  • Learning assistants/teaching robots: AI-based assistants can supply support and resources to learners on request. They can answer questions, provide personalized feedback, and guide students through learning content. Such virtual assistants can be especially helpful for learners who cannot access offline support.
  • Adaptive testing: In this mode of test delivery, each examinee responds to specific questions that correspond to their level of expertise based on their previous responses. This is made possible by AI algorithms enabled by ML and psychometric methods such as item response theory (IRT). You can get more information about adaptive testing from Nathan Thompson’s blog post.
  • Remote proctoring: This software allows examiners to coordinate an assessment process remotely while maintaining confidentiality and preventing examinees from cheating. In addition, a virtual proctor can assist examinees in resolving any issues that arise during the process. The functionality of proctoring software differs substantially depending on the stakes of the exam and the preferences of stakeholders. You can read more on this topic on the ASC blog.
  • Test assembly: Automated test assembly (ATA) is a widely used, valid, and efficient method of test construction based on either classical test theory (CTT) or item response theory (IRT). ATA lets you assemble test forms that are equivalent in content distribution and psychometric statistics in seconds. ASC designed TestAssembler to minimize the laborious, time-consuming process of form building.
  • Automated grading: Grading student assignments is one of the biggest challenges educators face. AI-powered grading systems automate this routine work, reducing bias and inconsistency in assessment results and increasing validity. ASC has developed an AI essay scoring system, SmartMarq. If you are interested in automated essay scoring, you should definitely read this post.
  • Item generation: Teachers are often asked to write a batch of items for assessment purposes, on top of lesson planning and other duties. Automated item generation is very helpful for saving time and producing quality items.
  • Search engines: We now mostly rely on huge search engines built to carry out web searches. AI-powered search engines help us find an abundance of information; results depend heavily on how we formulate queries, choose keywords, and navigate between sites. Google is the biggest search engine so far.
  • Chatbots: Last but not least, chatbots are software applications that employ AI and natural language processing (NLP) to hold humanized conversations with people. AI-powered chatbots can provide learners with additional personalized support and resources. ChatGPT is the most prominent example of a chatbot today.

 

Highlights of AI and challenges to address


Today, AI-powered functions such as speech recognition, NLP, and emotion detection are revolutionizing education. AI technologies enable identifying patterns, building algorithms, presenting knowledge, sensing, making and following plans, maintaining true-to-life interactions with people, managing complex learning activities, magnifying human abilities in learning contexts, and supporting learners in accordance with their individual interests and needs. AI allows students to use handwriting, gestures, or speech as input while studying or taking a test.

Along with numerous opportunities, the evolution of AI brings risks and challenges that should be thoroughly investigated and addressed. When applying AI in education, it is important to proceed with caution and consideration to make sure it is done in a responsible and ethical way, and not to get caught up in the hype, since some AI tools draw on billions of data points available to everyone on the web. Another challenge associated with AI is the variability of its performance: some functions are performed at a superior level (such as identifying patterns in data) while others remain quite primitive (such as supporting an in-depth conversation). Even though AI is very powerful, human beings still play a crucial role in verifying AI’s output to avoid plagiarism and the falsification of information.

 

Conclusion

AI is already massively applied in education around the world. With the right guidance and frameworks in place, AI-powered technologies can help build more efficient and equitable learning experiences. Today we have an opportunity to witness how AI- and ML-based approaches contribute to development of individualized, personalized, and adaptive learning.

ASC’s CEO, Dr Thompson, presented several topics on AI at the 2023 ATP Conference in Dallas, TX. If you are interested in utilizing AI-powered services provided by ASC, please do not hesitate to contact us!

 


Gamification in Learning and Assessment

Gamification in assessment and psychometrics presents new opportunities to improve the quality of exams. While many adults view games with caution because of their potential to cause addiction and harm young minds, games can be extremely beneficial for learning and assessment if employed thoughtfully. Gamification not only provides learners with multiple opportunities to learn in context, but is also instrumental in developing the digital literacy skills that are highly necessary in modern times.

What is Gamification?

Gamification means that elements of games (such as point-scoring, team collaboration, competition, and prizes) are incorporated into processes that would not otherwise have them. For example, software for managing a sales team might award points for the number of phone calls and emails, split the staff into two “teams” to compete against each other on those points, and award a prize at the end of the month. Such ideas can also be incorporated into learning and assessment. A student might get points for each module they complete correctly, and a badge for each test they pass to show mastery of a skill, which are then displayed on their profile in the learning system.

Gamification equals motivation?

It is a fact that learning is much more effective when learners are motivated. What can motivate learners, you might ask? Engagement comes first; that is the core of learning. Engaged learners grasp knowledge because they are interested in the learning process and the material itself, and they are curious to discover more. In contrast, unengaged learners just wait for the lesson to end.

A traditional educational process usually involves several lessons in which students learn one unit, and at the end of the unit they take a cumulative test that gauges their level of acquisition. This model usually provides a minimum of context for learning throughout the unit, so learners are expected simply to learn and memorize things until they are given the chance to succeed or fail on the test.

Gamification can change this approach. When lessons and tests are gamified, learners obtain an opportunity to learn in context and use their associations and imagination—they become participants of the process, not just executors of instructions.

Gamification: challenges and ways to overcome them

While gamified learning and assessment are very efficacious, they might be challenging for educators in terms of development and implementation. Below you may check some challenges and how they can be tackled.

  • More work: Interactive lessons containing gamified elements demand more time and effort from educators, which is why many of them, overwhelmed with other obligations, give up and stick with a traditional style of teaching. However, if the whole team handles the planning and preparation before starting a new unit, there will be less work and less stress.
  • Preparation: Gamified learning and assessment can be difficult for educators who lack creativity or experience with it. Senior managers, such as heads of departments, should take the lead here: organize training courses and support their staff.
  • Distraction: When developing gamified learning or assessment, it is important not to get distracted by fancy features and to keep focused on the targeted learning objectives.
  • Individual needs: Gamified learning and assessment cannot be one-size-fits-all, so educators will have to customize their materials to meet learner needs.

Gamified assessment

Psychometric tests have been evolving over time to provide more benefits to educators and learners, employers and candidates, and other stakeholders. Gamification is the next stage in that evolution, having gained positive feedback from scientists and practitioners.

Gamified assessment is applied by human resources departments in the hiring process, much like psychometric tests that evaluate a candidate’s knowledge and skills. However, game-based assessment is quicker and more engaging than traditional aptitude tests due to its user-friendly, interactive format. The same features are true of computerized adaptive testing (CAT), and I believe these two approaches can complement each other to double the benefits.

There are several ways to incorporate gamification into assessment. Here are some ideas, but this is by no means exhaustive.

  • High-fidelity items and/or assignments: Instead of multiple choice items asking about a task (e.g., operating a construction crane), create a simulation that is similar to a game.
  • Badging: Candidates win badges for passing exams, which can be displayed in places like their LinkedIn profile or email signature.
  • Points: Obviously, most tests have “points” as part of the exam score, but points can be used in other ways, such as how many modules or quizzes you pass per month.
  • Teams: Subdivide a class or other group into teams, and have them compete on any of the aspects above.

Reflecting on my personal experience, I remember using the kahoot.it tool in my Math classes to interact with students and engage them in formative assessment activities. Students were highly motivated to take such tests because they were rewarding: it felt like a competition, and sometimes they got sweets. It was fun!

Summary

Obviously, gamified learning and assessment require more time and effort from creators than traditional non-gamified ones, but they are worth it. Both educators and learners are likely to benefit from the experience in different ways. If you are ready to apply gamified assessment by employing CAT technologies, our experts are ready to help. Contact us!

 

Multistage testing algorithm

Multistage testing (MST) is a type of computerized adaptive testing (CAT).  This means it is an exam delivered on computers which dynamically personalize it for each examinee or student.  Typically, this is done with respect to the difficulty of the questions, by making the exam easier for lower-ability students and harder for high-ability students.  Doing this makes the test shorter and more accurate while providing additional benefits.  This post will provide more information on multistage testing so you can evaluate if it is a good fit for your organization.

Already interested in MST and want to implement it?  Contact us to talk to one of our experts and get access to our powerful online assessment platform, where you can create your own MST and CAT exams in a matter of hours.

 

What is multistage testing?

[Diagram: multistage testing algorithm]

Like CAT, multistage testing adapts the difficulty of the items presented to the student. But while adaptive testing works by adapting each item one by one using item response theory (IRT), multistage works in blocks of items.  That is, CAT will deliver one item, score it, pick a new item, score it, pick a new item, etc.  Multistage testing will deliver a block of items, such as 10, score them, then deliver another block of 10.

The design of a multistage test is often referred to as panels.  There is usually a single routing test or routing stage which starts the exam, and then students are directed to different levels of panels for subsequent stages.  The number of levels is sometimes used to describe the design; the example on the right is a 1-3-3 design.  Unlike CAT, there are only a few potential paths, unless each stage has a pool of available testlets.

As with item-by-item CAT, multistage testing is almost always done using IRT as the psychometric paradigm, selection algorithm, and scoring method.  This is because IRT can score examinees on a common scale regardless of which items they see, which is not possible using classical test theory.
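The block-level logic can be sketched as follows. This is a hedged toy example: a real MST routes on an IRT ability estimate with calibrated cutpoints, while this sketch uses the proportion correct on the routing block and invented cutoffs for brevity:

```python
# Toy 1-3 routing: score the routing block, then pick the next testlet.
# Cutoffs and testlet names are invented for illustration.
def route(responses, cut_low=0.4, cut_high=0.7):
    """Pick the next stage's testlet from the routing-block score."""
    prop_correct = sum(responses) / len(responses)
    if prop_correct < cut_low:
        return "easy_testlet"
    if prop_correct < cut_high:
        return "medium_testlet"
    return "hard_testlet"

next_block = route([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])  # 7 of 10 correct
# next_block -> "hard_testlet"
```

In production, `prop_correct` would be replaced by an IRT theta estimate, but the branch-on-a-block structure is the same.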

To learn more about MST, I recommend this book.

Why multistage testing?

Item-by-item CAT is not the best fit for all assessments, especially those that naturally tend towards testlets, such as language assessments where there is a reading passage with 3-5 associated questions.

Multistage testing allows you to realize some of the well-known benefits of adaptive testing (see below), with more control over content and exposure.  In addition to controlling content at an examinee level, it also can make it easier to manage item bank usage for the organization.

 

How do I implement multistage testing?

1. Develop your item banks using items calibrated with item response theory

2. Assemble a test with multiple stages, defining pools of items in each stage as testlets

3. Evaluate the test information functions for each testlet

4. Run simulation studies to validate the delivery algorithm with your predefined testlets

5. Publish for online delivery

Our industry-leading assessment platform manages much of this process for you.  The image to the right shows our test assembly screen where you can evaluate the test information functions for each testlet.

Multistage testing

 

Benefits of multistage testing

There are a number of benefits to this approach, which are mostly shared with CAT.

  • Shorter exams: because difficulty is targeted, you waste less time
  • Increased security: There are many possible configurations, unlike a linear exam where everyone sees the same set of items
  • Increased engagement: Lower ability students are not discouraged, and high ability students are not bored
  • Control of content: CAT has some content control algorithms, but they are sometimes not sufficient
  • Supports testlets: CAT does not support tests that have testlets, like a reading passage with 5 questions
  • Allows for review: CAT does not usually allow for review (students can go back a question to change an answer), while MST does

 

Examples of multistage testing

MST is often used in language assessment, which means that it is often used in educational assessment, such as benchmark K-12 exams, university admissions, or language placement/certification.  One of the most famous examples is the Scholastic Aptitude Test from The College Board; it is moving to an MST approach in 2023.

Because of the complexity of item response theory, most organizations that implement MST have a full-time psychometrician on staff.  If your organization does not, we would love to discuss how we can work together.

 

Speeded vs Power Tests

The concept of speeded vs. power tests is one way of differentiating psychometric or educational assessments. In the context of educational measurement, and depending on the assessment goals and time constraints, tests are categorized as speeded or power. There is also the concept of a timed test, which is really a power test. Let’s look at these types more carefully.

Speeded test

In this test, examinees are expected to answer as many questions as possible, but there is an unreasonably short time limit that prevents even the best examinees from completing the test, and therefore forces speed.  Items are delivered sequentially, from the first to the last.  All items are usually relatively easy; sometimes they increase in difficulty.  If the time limit and difficulty level are set correctly, none of the test takers will be able to reach the last item before time runs out.  A speeded test is supposed to demonstrate how fast an examinee can respond to questions within a time limit; examinees’ answers are not as important as their speed in answering.  The total score is usually computed as the number of questions answered correctly when the time limit is reached, and differences in scores are mainly attributed to individual differences in speed rather than knowledge.

An example of this might be a mathematical calculation speed test: examinees are given 100 multiplication problems and told to solve as many as they can in 20 seconds.  Most examinees know the answers to all the items; it is a question of how many they can finish.  Another might be a 10-key task, where examinees are given a list of 100 five-digit strings and told to type as many as they can in 20 seconds.

Pros of a speeded test:

  • A speeded test is appropriate when you actually want to test the speed of examinees; the 10-key task above would be useful in selecting data entry clerks, for example. The concept of “knowledge of five-digit strings” in this case is not relevant and doesn’t even make sense.
  • Tests can sometimes be very short but still discriminating.
  • When a test mixes items of different difficulty, examinees might save time on easier items in order to respond to more difficult ones. This can create an increased spread in scores.

Cons of a speeded test:

  • Most testing situations call for evaluating knowledge, not speed.
  • The nature of the test provokes examinees to commit errors even when they know the answers, which can be stressful.
  • A speeded test does not account for individual differences among examinees.

Power Test

A power test provides examinees with sufficient time to attempt all items and express their true level of knowledge or ability. This testing category therefore focuses on assessing the knowledge, skills, and abilities of the examinees.  The total score is often computed as the number of questions answered correctly (or with item response theory), and individual differences in scores are attributed to differences in the ability under assessment, not to differences in basic cognitive abilities such as processing speed or reaction time.

There is also the concept of a timed test. This has a time limit, but the limit is NOT a major factor in how examinees respond to questions, nor does it affect their scores. For example, the time limit might be set so that 95% of examinees are not affected at all, and the remaining 5% are only slightly hurried. This is done with the CAT-ASVAB.
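One simple way to set such a limit is to take a high percentile of pilot response times. This is a hedged sketch with invented data, not the CAT-ASVAB's actual procedure:

```python
# Choose a "timed test" limit so roughly 95% of examinees are
# unaffected, using pilot response-time data (values are invented).
def time_limit_for(times_minutes, unaffected_fraction=0.95):
    """Return the response time at the given percentile of pilot data."""
    ordered = sorted(times_minutes)
    index = int(unaffected_fraction * (len(ordered) - 1))
    return ordered[index]

pilot_times = [22, 25, 27, 28, 30, 31, 33, 35, 38, 52]
limit = time_limit_for(pilot_times)  # 38 minutes for this pilot sample
```

Only the slowest examinee in this invented sample would be hurried by a 38-minute limit, which matches the "timed but not speeded" intent.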

Pros of a power test:

  • There are no meaningful time restrictions for test-takers
  • A power test is well suited to evaluating the knowledge, skills, and abilities of examinees
  • A power test reduces the chance that examinees commit errors on items they actually know
  • A power test accounts for individual differences among examinees

Cons of a power test:

  • It can be time consuming (some of these exams are 8 hours long or even more!)
  • This test format sometimes does not suit competitive examinations because of administrative issues (too much test time across too many examinees)
  • A power test is sometimes poor for discriminative purposes, since all examinees have a high chance of performing well.  There are certainly some pass/fail knowledge exams where almost everyone passes.  But the purpose of those exams is not to differentiate for selection, but to make sure students have mastered the material, so in that case it is a good thing.

Speeded vs power test

Whether a test is a speeded or power test depends on the assessment purpose. For instance, an arithmetic test containing many relatively easy questions might be a speeded test for Grade 8 students, while the same test could be a power test for Grade 7 students. A speeded test effectively measures power when all of the items can be answered correctly within the time limit; similarly, a power test turns into a speeded test once a strict time limit is imposed. Today, a pure speeded or power test is rare; what we usually meet in practice is a mixture of both, typically a timed test.

Below you may find a comparison of a speeded vs power test, in terms of the main features.

 

Speeded test:
  • The time limit is fixed, and it affects all examinees
  • The goal is to evaluate speed only, or a combination of speed and correctness
  • Questions are relatively easy in nature
  • The test format increases the chance of committing errors

Power test:
  • There is no time limit, or there is one that only affects a small percentage of examinees
  • The goal is to evaluate correctness, in the sense of knowledge, skills, and abilities of test-takers
  • Questions are relatively difficult in nature
  • The test format reduces the chance of committing errors

 

Educational Assessment of Mathematics

Educational assessment of Mathematics achievement is a critical concern for educational ministries and programs across the world.  One might say that all school subjects are equally important, and that would be largely true. However, Mathematics stands out among the rest, because it is more than just an academic subject. Here are three reasons why Math is so important:

Math is everywhere. Hardly any job can be completed without mathematical knowledge. Executives, musicians, accountants, fashion designers, and parents use Math in their daily lives. In particular, Math is essential for decision-making in the fast-growing digital world.

Math designs thinking paths. Math enables people, especially children, to analyze and solve real-world problems by developing logical and critical thinking. Einstein’s words describe this fact inimitably, “Pure mathematics is, in its way, the poetry of logical ideas”.

Math is a language of science. Math gives us the tools for understanding and developing engineering, science, and technology. Mathematical language, including symbols and their meanings, is the same throughout the world, so scientists use Math to communicate concepts.

No matter which profession a student chooses, they will likely need solid knowledge of Math to enter an undergraduate or graduate program. Some well-known tests that contain a Math section are TIMSS, PISA, ACT, SAT, SET, and GRE.

The role of educational assessment in Math

Therefore, an important subject like Math needs careful and accurate assessment approaches starting from school. Educational assessment is the process of collecting data on student progress in knowledge acquisition to inform future academic decisions towards learning goals. This is true at the individual student level, teacher or school level, district level, and state or national level. There are different types of assessment depending on its scale, purpose, and functionality of the data collected.

In general, educational authorities in many countries apply a criteria-based approach for classroom and external assessment of Mathematics. Criteria help divide a construct of knowledge into digestible portions so that students understand what they have to acquire, and teachers can intervene positively in students’ individual learning paths to make sure that students ultimately achieve the learning goals.

Classroom assessment, or assessment for learning, is curriculum-based. Teachers use learning objectives from the Math curriculum to form assessment criteria and write tasks accordingly. Teachers employ assessment results to make informed decisions at the student level.

External assessment, or assessment of learning, is also curriculum-based, but it covers many more topics than classroom assessment. Tasks are written by external specialists, usually from an independent educational institution. The assessment procedure itself is likely to be invigilated, and its results are used by various authorities, not just teachers, to evaluate not only student progress in learning Math but also the curriculum itself.

Applications of educational assessment of Mathematics

The aforementioned types of assessment operate at the classroom and school levels, and both are mostly formatted as paper-and-pencil tests. There are also internationally recognized assessment programs focusing on Math, such as the Programme for International Student Assessment (PISA). PISA set a global trend of applying knowledge and skills in Math to solving real-world problems.

In 2018, PISA became a computerized adaptive test, which is a great shift favoring students at all levels of knowledge in Math. Applying adaptive technologies to the assessment and evaluation of Math could greatly motivate students, because the majority of them are not big fans of the subject. In turn, teachers and other stakeholders get more valid and reliable data on student progress in learning Math.

Implementation

The first step towards implementing modern technologies for educational assessment of Math at schools and colleges is extensive research and planning. Second, there has to be a pool of good items written according to the best international practices. Third, assessment procedures have to be standardized. Last but not least, schools would need a consultant with rich expertise in adaptive technologies and psychometrics.

An important consideration is the item types or formats to be used.  FastTest allows you to use not only traditional formats like multiple choice, but also advanced formats like drag and drop or the presentation of an equation editor to the student.  An example of that is below.

 

Equation editor item type

 

Why is educational assessment of Math so important?

Educational assessment of Math is one of the major focuses of PISA and other assessments for good reason.  Since Math skills translate to job success in many fields, especially STEM fields, a well-educated workforce is one of the necessary components of a modern economy.  So an educational system needs to know that it is preparing students for the future needs of the economy.  One aspect of this is progress monitoring, which tracks learning over time so that we can not only help individual students but also effect the aggregate changes needed to improve the educational system.

 

healthcare certification

An Objective Structured Clinical Examination (OSCE Exam) is an assessment designed to measure performance of tasks, typically medical, in a high-fidelity way.  It is more a test of skill than knowledge.  For example, I used to work at a certification board for ophthalmic assistants; there were 3 levels, and the top two levels included both a knowledge test (200 multiple choice items) and an OSCE (level 2 was a digital simulation, level 3 was live human patients).

OSCE exams serve a very important purpose in many fields, forging a critical bridge between learning and practice.  This post will cover some of the basics.

What is an Objective Structured Clinical Examination?

An OSCE exam typically works by defining very specific tasks that the examinee is required to perform, while examiners (often professors) observe and grade them via a rubric or checklist.  Each of the tasks is often called a station, and the OSCE will often have multiple stations.  Consider the components of the name:

  • Objective: We are trying to be as objective as possible, boiling down a potentially very complex patient scenario and task into a checklist or rubric. We want to make it quantitative, measurable, and reliable.
  • Structured: The task itself is very boxed, such as using retinoscopy to measure astigmatism (perhaps one thing of 20 that might happen at a visit to your ophthalmologist).
  • Clinical: The task is something to be done in a clinical setting; this is to increase fidelity and validity.
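The station concept lends itself to simple quantitative scoring. Below is a minimal sketch of how one station's examiner checklist might be turned into a score; the checklist items and the 75% pass threshold are entirely hypothetical.

```python
# Minimal sketch of scoring one OSCE station from an examiner's checklist.
# The checklist items and the pass threshold are hypothetical.
station_checklist = {
    "washed_hands": True,
    "explained_procedure": True,
    "used_correct_technique": False,
    "communicated_results": True,
}

# Proportion of checklist behaviors the examiner observed
station_score = sum(station_checklist.values()) / len(station_checklist)
passed_station = station_score >= 0.75  # hypothetical pass threshold
```

In a real OSCE, rubric points are often weighted and the examinee must pass a minimum number of stations, but the quantification principle is the same.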

A great summary is provided by Zayyan (2011):

The Objective Structured Clinical Examination is a versatile multipurpose evaluative tool that can be utilized to assess health care professionals in a clinical setting. It assesses competency, based on objective testing through direct observation. It is precise, objective, and reproducible allowing uniform testing of students for a wide range of clinical skills. Unlike the traditional clinical exam, the OSCE could evaluate areas most critical to performance of health care professionals such as communication skills and ability to handle unpredictable patient behavior.

There are a few key points here.

  • It is a clinical setting, rather than a lecture hall setting (though in non-medical fields, “clinical setting” is relative!)
  • It is assessing competency of clinical skills
  • It is based on observation, where examiners rate the examinee
  • It will often include assessment of “soft skills” or other non-knowledge aspects

Where are OSCE Exams used?

OSCE exams are very important in the medical professions.  This report shows that many medical schools use it, though it curiously does not say how many schools were part of the survey.

However, it is most certainly not limited to medical fields.  You don’t hear the term very often outside medical education, but the approach is used widely.  Professions where someone physically performs a task are more likely to use OSCEs.  An accountant, on the other hand, does not physically do something, and their equivalent of an OSCE is more like a complex accounting scenario that needs to be completed in MS Excel and then graded.

Examples of OSCE exams

Of course, there are many medical examples.  I work with the American Board of Chiropractic Sports Physicians, who have a practical exam.  Check out their DACBSP webpage and scroll down to the Practical Exam resources, including instructional videos for some stations.

Nurse skill test

I once worked with a crane operator certification.  They had a performance test where you had to drive the crane into a certain position, lift and place certain objects, and then move a wrecking ball through a path of oil drums without knocking anything over – all while being rated by an examiner with a checklist.  Sounds a lot like an OSCE, doesn’t it?

Perhaps the most common OSCE is one that you have likely taken: a driver’s test.  In addition to taking a knowledge test, you were also likely asked to drive a car while an examiner armed with a checklist told you to perform various “stations” like parallel parking, perpendicular parking, or navigating a stoplight.

Tell me more!

There are dedicated resources in the world of medical education and assessment, such as Downing and Yudkowsky (2019) Assessment in Health Professions Education (https://www.routledge.com/Assessment-in-Health-Professions-Education/Yudkowsky-Park-Downing/p/book/9781315166902).   You might also be interested in my Lecture Notes from a course taught using that textbook.

bias scales

One of the primary goals of psychometrics and assessment research is to ensure that tests, their scores, and interpretations of the scores are reliable, valid, and fair. The concepts of reliability and validity are discussed quite often and are well-defined, but what do we mean when we say that a test is fair or unfair? We’ll discuss that here. Note that fairness is technically part of validity: if there is bias, then the interpretations being made from scores are usually biased as well.

What do we mean by bias?

Well, there are actually three types of bias in assessment.

1. Differential item functioning / differential test functioning

This type of bias occurs when a single item, or sometimes a test, is biased against a group when ability/trait level is constant. For example, suppose that the reference group (usually the majority) and focal group (usually a minority) perform similarly on the test overall, but on one item we find that the focal group was less likely to get the item correct after adjusting for total score performance. This is known as differential item functioning. Content experts should review the question.
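The logic of the Mantel-Haenszel approach mentioned later in this post can be sketched in a few lines: stratify examinees by total score, then compare the two groups' odds of answering the studied item correctly within each stratum. The function and data below are hypothetical; a common odds ratio near 1.0 suggests no DIF.

```python
# Hypothetical sketch: Mantel-Haenszel common odds ratio for one item,
# stratifying examinees by total test score. A value near 1.0 suggests
# no DIF; values far from 1.0 flag the item for expert review.
from collections import defaultdict

def mantel_haenszel_odds_ratio(total_scores, groups, item_responses):
    # One 2x2 table (group x item correct) per total-score stratum
    strata = defaultdict(lambda: {"a": 0, "b": 0, "c": 0, "d": 0})
    for score, group, x in zip(total_scores, groups, item_responses):
        t = strata[score]
        if group == "ref":
            t["a" if x else "b"] += 1  # reference correct / incorrect
        else:
            t["c" if x else "d"] += 1  # focal correct / incorrect
    num = den = 0.0
    for t in strata.values():
        n = t["a"] + t["b"] + t["c"] + t["d"]
        if n > 0:
            num += t["a"] * t["d"] / n
            den += t["b"] * t["c"] / n
    return num / den if den > 0 else float("inf")

# Toy no-DIF example: both groups perform identically within the stratum
scores = [5] * 40
groups = ["ref"] * 20 + ["focal"] * 20
responses = [1] * 10 + [0] * 10 + [1] * 10 + [0] * 10
alpha_mh = mantel_haenszel_odds_ratio(scores, groups, responses)
```

Operational programs typically convert this odds ratio to the ETS delta scale and apply significance tests before flagging items, but the stratify-and-compare idea is the core.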

2. Overall test bias

With this type of bias, we find that the entire test is biased against the focal group, so that they receive lower scores (ability/trait estimates) than the reference group. This is especially concerning if there is data from another test or variable showing that the two groups should be of equal ability. However, there are many cases where the focal group has lower scores not because the test is biased, but for some other reason. For example, if the focal group is economically disadvantaged and receives subpar educational opportunities, the test could very well be valid and simply reflect these well-known inequities.

3. Predictive bias

This is a complex situation. Suppose that the test itself is not biased, but it is used to predict something like job performance or university admissions, and the test scores systematically underpredict performance for the focal group. This bias manifests in the predictive model, such as a linear regression, and not in the test scores. There is also selection bias, where the focal group ends up not being selected as often.  In the USA, a common rule of thumb is the four-fifths rule.
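The four-fifths rule itself is simple arithmetic: compare the focal group's selection rate to the reference group's, and flag potential adverse impact when the ratio falls below 0.8. The numbers below are made up for illustration.

```python
# Hypothetical sketch of the four-fifths (80%) rule for adverse impact.
def adverse_impact_ratio(selected_focal, total_focal, selected_ref, total_ref):
    focal_rate = selected_focal / total_focal
    ref_rate = selected_ref / total_ref
    return focal_rate / ref_rate

# Made-up numbers: focal selection rate 20%, reference selection rate 50%
ratio = adverse_impact_ratio(20, 100, 50, 100)  # 0.4
adverse_impact_flag = ratio < 0.8  # True: well below the four-fifths threshold
```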

Other types of unfairness

There are other ways that a test can be considered unfair. One is the case of unequal precision. This refers to the situation, common to almost all traditional exams, where there are plenty of items of middle difficulty but not as many items that are easy or difficult. This can lead to very inaccurate scores for examinees at the top or bottom of the distribution. It is one reason that scaled scores are often capped at the ends of the spectrum; the difference between a person at the 98th percentile vs. the 99th percentile is most likely not meaningful, even if there is a wide difference in the raw scores.

Another is the case of test adaptation and translation. Here, the original test and its items might be unbiased, but when the test is translated or adapted to a different language or culture, it becomes biased. In such cases, the data might manifest itself as DIF/DTF or test bias as described above. I recall a story that a friend of mine told me about an item that was translated to Spanish, where the original item in English was quite strong and unbiased, but when used in Latin America it touched on a cultural aspect that was not present in USA/Canada, and performed poorly.

How can we find test bias?

Psychometricians have a number of statistical methods that are designed to specifically look for the situations described here. Differential item functioning in particular has a ton of scientific literature devoted to it. One example method, which is older but still commonly used, is the Mantel-Haenszel statistic. For predictive bias, I remember learning about the partial F-test in graduate school, but have not had the opportunity to perform such analyses since then.

How do we address or avoid test bias?

As with many things, an ounce of prevention is worth a pound of cure. High-stakes exams such as university admissions will invest heavily in avoiding bias. They will create detailed item writing guidelines, heavily train the item writers, and pay for items to be reviewed not only by experts but by people who are representative of target populations. Of course, some issues will always slip through this process, which is why it is important to perform the statistical analyses afterwards to validate the items, the test, and predictive models.

Where can I learn more?

Here are some relevant resources to help you learn more about test bias.

Handbook of Methods for Detecting Test Bias

Test Bias in Employment Selection Testing: A Visual Introduction

Differential Item Functioning

student-progress-monitoring

Progress monitoring is an essential component of a modern educational system. Are you interested in tracking learners’ academic achievements during a period of learning, such as a school year? Then you need to design a valid and reliable progress monitoring system that enables educators to assist students in achieving a performance target. Progress monitoring is a standardized process of assessing a specific construct or skill that should take place often enough to make pedagogical decisions and take appropriate actions.

 

Why Progress monitoring?

Progress monitoring mainly serves two purposes: to identify students in need and to adjust instruction based on assessment results. Such adjustments can be made at both the individual and aggregate levels of learning. Educators should use progress monitoring data to decide whether appropriate interventions should be employed to ensure that students obtain support that propels their learning and matches their needs (Issayeva, 2017).

This assessment is usually criterion-referenced rather than normed. Data collected after administration can show a discrepancy between students’ performance and the expected outcomes, and can be graphed to display the change in rate of progress over time.
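In practice, the graphed data are often summarized by a trend line whose slope is the student's rate of improvement. Here is a minimal sketch with hypothetical weekly scores; the function name and data are invented for illustration.

```python
# Hypothetical sketch: rate of improvement (trend-line slope) from
# weekly progress-monitoring scores, via ordinary least squares.
def rate_of_improvement(weeks, scores):
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    cov = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    var = sum((w - mean_w) ** 2 for w in weeks)
    return cov / var  # score gain per week

# Made-up data, e.g., words read correctly per minute over six weeks
weeks = [1, 2, 3, 4, 5, 6]
scores = [12, 14, 15, 17, 18, 20]
slope = rate_of_improvement(weeks, scores)  # about 1.54 points per week
```

Comparing this slope against an expected growth rate is one common way to decide whether an intervention is working.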

Progress monitoring dates back to the 1970s, when Deno and his colleagues at the University of Minnesota initiated research on applying this type of assessment to observe student progress and identify the effectiveness of instructional interventions (Deno, 1985, 1986; Foegen et al., 2008). Positive research results suggested using progress monitoring as a potential solution to the educational assessment issues of the late 1980s and early 1990s (Will, 1986).

 

Approaches to development of measures

Two approaches to item development are highly applicable these days: robust indicators and curriculum sampling (Fuchs, 2004). It is interesting to note that the advantages of one approach tend to mirror the disadvantages of the other.

According to Foegen et al. (2008), robust indicators represent core competencies integrating a variety of concepts and skills. Classic examples of robust indicator measures are oral reading fluency in reading and estimation in Mathematics. The most popular illustration of this case is the Programme for International Student Assessment (PISA) that evaluates preparedness of students worldwide to apply obtained knowledge and skills in practice regardless of the curriculum they study at schools (OECD, 2012).

When using the second approach, a curriculum is analyzed and sampled in order to construct measures based on its proportional representation. Due to the direct link to the instructional curriculum, this approach enables teachers to evaluate student learning outcomes, consider instructional changes, and determine eligibility for other educational services. Progress monitoring is especially applicable when a curriculum is spiral (Bruner, 2009), since it allows students to revisit the same topics with increasing complexity.

 

CBM and CAT

Curriculum-based measures (CBMs) are commonly used for progress monitoring purposes. They typically embrace standardized procedures for item development, administration, scoring, and reporting. CBMs are usually conducted under timed conditions, as this yields evidence of a student’s fluency in a targeted skill.

Computerized adaptive tests (CATs) are gaining more and more popularity these days, particularly within the progress monitoring framework. CATs were primarily developed to replace traditional fixed-length paper-and-pencil tests and have proven to be a helpful tool for determining each learner’s achievement level (Weiss & Kingsbury, 1984).

CATs utilize item response theory (IRT) and provide students with subsequent items based on difficulty level and their answers in real time. In brief, IRT is a statistical method that parameterizes items and examinees on the same scale, and facilitates stronger psychometric approaches such as CAT (Weiss, 2004). Thompson and Weiss (2011) provide step-by-step guidance on how to build CATs.
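The adaptive step can be sketched as follows: given a provisional theta estimate, pick the unadministered item with the highest Fisher information, which under the Rasch model peaks when item difficulty is closest to theta. The item bank below is made up for illustration.

```python
# Hypothetical sketch of CAT item selection under the Rasch model:
# choose the item with maximum Fisher information at the current theta.
import math

def rasch_prob(theta, b):
    # Probability of a correct response for ability theta, difficulty b
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    p = rasch_prob(theta, b)
    return p * (1.0 - p)  # Fisher information for a Rasch item

def pick_next_item(theta, item_bank):
    # item_bank: dict mapping item id -> difficulty (logits)
    return max(item_bank, key=lambda i: item_information(theta, item_bank[i]))

bank = {"item1": -1.0, "item2": 0.1, "item3": 1.5}  # made-up difficulties
next_item = pick_next_item(0.0, bank)  # "item2": difficulty closest to theta
```

A full CAT loop would re-estimate theta after each response and apply exposure control and content constraints, but maximum information is the core selection rule.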

 

Progress monitoring vs. traditional assessments

Progress monitoring differs significantly from traditional classroom assessments for several reasons. First, it provides objective, reliable, and valid data on student performance, e.g., in terms of mastery of a curriculum. Subjective judgment is unavoidable when teachers prepare classroom assessments for their students; in contrast, progress monitoring measures and procedures are standardized, which guarantees relative objectivity, as well as reliability and validity of assessment results (Deno, 1985; Foegen & Morrison, 2010). In addition, progress monitoring results are not graded, and there is no preparation prior to the test. Second, it leads to thorough feedback from teachers to students. Competent feedback helps teachers adapt their teaching methods or instructions in response to their students’ needs (Fuchs & Fuchs, 2011). Third, progress monitoring enables teachers to help students achieve long-term curriculum goals by tracking their progress in learning (Deno et al., 2001; Stecker et al., 2005). According to Hintze, Christ, and Methe (2005), progress monitoring data assist teachers in identifying specific actions towards instructional changes in order to help students master all learning objectives from the curriculum. Ultimately, this results in more effective preparation of students for final high-stakes exams.

 

References

Bruner, J. S. (2009). The process of education. Harvard University Press.

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232.

Deno, S. L. (1986). Formative evaluation of individual student programs: A new role of school psychologists. School Psychology Review, 15, 358-374.

Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30(4), 507-524.

Foegen, A., & Morrison, C. (2010). Putting algebra progress monitoring into practice: Insights from the field. Intervention in School and Clinic, 46(2), 95-103.

Foegen, A., Olson, J. R., & Impecoven-Lind, L. (2008). Developing progress monitoring measures for secondary mathematics: An illustration in algebra. Assessment for Effective Intervention, 33(4), 240-249.

Fuchs, L. S. (2004). The past, present, and future of curriculum-based measurement research. School Psychology Review, 33, 188-192.

Fuchs, L. S., & Fuchs, D. (2011). Using CBM for Progress Monitoring in Reading. National Center on Student Progress Monitoring.

Hintze, J. M., Christ, T. J., & Methe, S. A. (2005). Curriculum-based assessment. Psychology in the Schools, 43, 45–56. doi: 10.1002/pits.20128

Issayeva, L. B. (2017). A qualitative study of understanding and using student performance monitoring reports by NIS Mathematics teachers [Unpublished master’s thesis]. Nazarbayev University.

OECD (2012). Lessons from PISA for Japan, Strong Performers and Successful Reformers in Education. OECD Publishing.

Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to improve student achievement: Review of research. Psychology in the Schools, 42(8), 795-819.

Thompson, N. A., & Weiss, D. J. (2011). A framework for the development of computerized adaptive tests. Practical Assessment, Research, and Evaluation, 16(1), 1.

Weiss, D. J., & Kingsbury, G. G. (1984). Application of computerized adaptive testing to educational problems. Journal of Educational Measurement, 21(4), 361-375.

Weiss, D. J. (2004). Computerized adaptive testing for effective and efficient measurement in counseling and education. Measurement and Evaluation in Counseling and Development, 37(2), 70-84.

Will, M. C. (1986). Educating children with learning problems: A shared responsibility. Exceptional Children, 52(5), 411-415.

 

vertical scaling

Vertical scaling is the process of placing scores from educational assessments that measure the same knowledge domain, but at different ability levels, onto a common scale (Tong & Kolen, 2008). The most common example is putting K-12 Mathematics or Language assessments onto a single scale. For example, you might have a Grade 4 math curriculum, Grade 5, Grade 6… instead of treating them all as islands, we consider the entire journey and link the grades together in a single item bank. While general information about scaling can be found at What is Scaling?, this article will focus specifically on vertical scaling.

Why vertical scaling?

A vertical scale is incredibly important, as it enables inferences about student progress from one point in time to another, e.g., from elementary to high school grades, and can be considered a developmental continuum of student academic achievement. In other words, students move along that continuum as they develop new abilities, and their scale score changes as a result (Briggs, 2010).

This is important not only for individual students, because we can track learning and assign appropriate interventions or enrichments, but also in an aggregate sense.  Which schools are growing more than others?  Are certain teachers better?  Perhaps there is a noted difference between instructional methods or curricula?  Here we come to the fundamental purpose of assessment: just as you need a bathroom scale to track your weight in a fitness regime, if a government implements a new Math instructional method, how does it know that students are learning more effectively?

Using a vertical scale can create a common interpretive framework for test results across grades and, therefore, provide important data that inform individual and classroom instruction. To be valid and reliable, these data have to be gathered based on properly constructed vertical scales.

Vertical scales can be compared to rulers that measure student growth in a subject area from one testing occasion to another. Similarly to height or weight, student capabilities are assumed to grow with time.  However, if you have a ruler that is only 1 meter long and you are trying to measure growth from 3-year-olds to 10-year-olds, you would need to link two rulers together.

Construction of Vertical Scales

Construction of a vertical scale is a complicated process which involves making decisions on test design, scaling design, scaling methodology, and scale setup. Interpretation of progress on a vertical scale depends on the resulting combination of such scaling decisions (Harris, 2007; Briggs & Weeks, 2009). Once a vertical scale is established, it needs to be maintained over different forms and time. According to Hoskens et al. (2003), a method chosen for maintaining vertical scales affects the resulting scale, and, therefore, is very important.

The measurement model used to place student abilities on a vertical scale is item response theory (IRT; Lord, 2012; De Ayala, 2009) or the Rasch model (Rasch, 1960).  This approach allows direct comparisons of assessment results based on different item sets (Berger et al., 2019). Thus, each student can work with a selected set of items that differs from the items taken by other students, yet their results remain comparable with those of others, as well as with their own results from other assessment occasions.

The image below shows how student results from different grades can be conceptualized on a common vertical scale.  Suppose you calibrate data from each grade separately, but have anchor items between the three groups.  A linking analysis might suggest that Grade 4 is 0.5 logits above Grade 3, and Grade 5 is 0.7 logits above Grade 4.  You can think of the bell curves overlapping like you see below.  A theta of 0.0 on the Grade 5 scale is equivalent to 0.7 on the Grade 4 scale, and 1.2 on the Grade 3 scale.  If you have a strong linking, you can put Grade 3 and Grade 4 items/students onto the Grade 5 scale… as well as all other grades using the same approach.
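The arithmetic here is simply a chain of linking shifts. Below is a minimal sketch using Grade 5 as the base scale, with the 0.5 and 0.7 logit gaps from the example; the function name is invented for illustration.

```python
# Sketch of chaining grade-to-grade linking shifts onto a base scale.
# shifts[g] = mean logit gap between grade g and grade g+1 (example values)
shifts = {3: 0.5, 4: 0.7}

def theta_on_lower_grade(theta_grade5, grade):
    """Express a Grade 5 scale theta on a lower grade's scale."""
    return theta_grade5 + sum(shifts[g] for g in range(grade, 5))

theta_g4 = theta_on_lower_grade(0.0, 4)  # 0.7 on the Grade 4 scale
theta_g3 = theta_on_lower_grade(0.0, 3)  # 1.2 on the Grade 3 scale
```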

Vertical-scaling

Test design

Kolen and Brennan (2014) name three types of test designs aiming at collecting student response data that need to be calibrated:

  • Equivalent group design. Student groups with presumably comparable ability distributions within a grade are randomly assigned to answer items related to their own or an adjacent grade;
  • Common item design. Identical items are administered to students from adjacent grades (not requiring equivalent groups) to establish a link between two grades and to align overlapping item blocks within one grade, such as putting some Grade 5 items on the Grade 6 test, some Grade 6 items on the Grade 7 test, etc.;
  • Scaling test design. This type is very similar to common item design but, in this case, common items are shared not only between adjacent grades; there is a block of items administered to all involved grades besides items related to the specific grade.

From a theoretical perspective, the design most consistent with a domain definition of growth is the scaling test design. Common item design is the easiest to implement in practice, but only if administering the same items to adjacent grades is reasonable from a content perspective. Equivalent group design requires more complicated administration procedures within one school grade to ensure samples with equivalent ability distributions.

Scaling design

The scaling procedure can use observed scores or it can be IRT-based. The most commonly used scaling design procedures in vertical scale settings are the Hieronymus, Thurstone, and IRT scaling (Yen, 1986; Yen & Burket, 1997; Tong & Harris, 2004). An interim scale is chosen in all these three methodologies (von Davier et al., 2006).

  • Hieronymus scaling. This method uses a total number-correct score for dichotomously scored tests or a total number of points for polytomously scored items (Petersen et al., 1989). The scaling test is constructed in a way to represent content in an increasing order in terms of level of testing, and it is administered to a representative sample from each testing level or grade. The within- and between-level variability and growth are set on an external scaling test, which is the special set of common items.
  • Thurstone scaling. According to Thurstone (1925, 1938), this method first creates an interim-score-scale and then normalizes distributions of variables at each level or grade. It assumes that scores on an underlying scale are normally distributed within each group of interest and, therefore, makes use of a total number-correct scores for dichotomously scored tests or a total number of points of polytomously scored items to conduct scaling. Thus, Thurstone scaling normalizes and linearly equates raw scores, and it is usually conducted within equivalent groups.
  • IRT scaling. This method of scaling considers person-item interactions. Theoretically, IRT scaling is applied for all existing IRT models, including multidimensional IRT models or diagnostic models. In practice, only unidimensional models, such as the Rasch and/or partial credit models (PCM) or the 3PL models, are used (von Davier et al., 2006).

Data calibration

When all decisions have been made, including test design and scaling design, and tests have been administered to students, the items need to be calibrated with software like Xcalibre to establish a vertical measurement scale. According to Eggen and Verhelst (2011), item calibration within the context of the Rasch model involves establishing model fit and estimating the difficulty parameter of each item based on response data, by means of maximum likelihood estimation procedures.

Two procedures, concurrent and grade-by-grade calibration, are employed to link IRT-based item difficulty parameters to a common vertical scale across multiple grades (Briggs & Weeks, 2009; Kolen & Brennan, 2014). Under concurrent calibration, all item parameters are estimated in a single run by means of linking items shared by several adjacent grades (Wingersky & Lord, 1983).  In contrast, under grade-by-grade calibration, item parameters are estimated separately for each grade and then transformed into one common scale via linear methods. The most accurate method for determining linking constants by minimizing differences between linking items’ characteristic curves among grades is the Stocking and Lord method (Stocking & Lord, 1983). This is accomplished with software like IRTEQ.
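To make the grade-by-grade idea concrete, here is a hedged sketch of mean/sigma linking, a simpler linear method than Stocking-Lord: the anchor items' difficulty estimates from the new grade's calibration are matched in mean and standard deviation to their estimates on the base scale. All numbers below are invented.

```python
# Hypothetical sketch: mean/sigma linear linking of separately calibrated
# Rasch/IRT item difficulties onto a base grade's scale via anchor items.
# (Stocking-Lord instead matches test characteristic curves; this simpler
# linear method is shown for illustration only.)
import statistics as st

def mean_sigma_constants(b_new, b_base):
    """Linking constants A, B so that A * b_new + B lives on the base scale."""
    A = st.pstdev(b_base) / st.pstdev(b_new)
    B = st.mean(b_base) - A * st.mean(b_new)
    return A, B

# Invented anchor-item difficulties from a Grade 6 run and the base Grade 7 run
b_grade6 = [-0.5, 0.0, 0.8]
b_grade7 = [-0.9, -0.4, 0.4]
A, B = mean_sigma_constants(b_grade6, b_grade7)
b_grade6_linked = [A * b + B for b in b_grade6]  # now on the Grade 7 scale
```

Once the constants are found (by this or a characteristic-curve method), the same transformation is applied to all items and thetas from the new grade's calibration, not just the anchors.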

Summary of Vertical Scaling

Vertical scaling is an extremely important topic in the world of educational assessment, especially K-12 education.  As mentioned above, this is not only because it facilitates instruction for individual students, but is the basis for information on education at the aggregate level.

There are several approaches to implement vertical scaling, but the IRT-based approach is very compelling.  A vertical IRT scale enables representation of student ability across multiple school grades and also item difficulty across a broad range of difficulty. Moreover, items and people are located on the same latent scale. Thanks to this feature, the IRT approach supports purposeful item selection and, therefore, algorithms for computerized adaptive testing (CAT). The latter use preliminary ability estimates for picking the most appropriate and informative items for each individual student (Wainer, 2000; van der Linden & Glas, 2010).  Therefore, even if the pool of items is 1,000 questions stretching from kindergarten to Grade 12, you can deliver a single test to any student in the range and it will adapt to them.  Even better, you can deliver the same test several times per year, and because students are learning, they will receive a different set of items.  As such, CAT with a vertical scale is an incredibly fitting approach for K-12 formative assessment.

Additional Reading

Reckase (2010) notes that the literature on vertical scaling is scarce, despite research dating back to the 1920s, and recommends several contemporary practice-oriented studies:

Paek and Young (2005). This study examined the effects of Bayesian priors on the estimation of student locations on the continuum when using a fixed-item-parameter linking method. First, a within-grade calibration was done for one grade level; then the parameters from the common items in that calibration were fixed to calibrate the next grade level. This approach forces the parameter estimates to be the same for the common items at adjacent grade levels. The results showed that the prior distributions could affect the outcomes and that careful checks should be done to minimize these effects.

Reckase and Li (2007). This book chapter describes a simulation study of the impact of dimensionality on vertical scaling. Both multidimensional and unidimensional IRT models were employed to simulate data in order to observe growth across three achievement constructs. The results showed that the multidimensional model recovered the gains better than the unidimensional models, but that those gains were underestimated, mostly due to the common-item selection. This emphasizes the importance of using common items that cover all of the content assessed at adjacent grade levels.

Li (2007). The goal of this doctoral dissertation was to determine whether multidimensional IRT methods could be used for vertical scaling and what factors might affect the results. The study was based on a simulation designed to match state assessment data in mathematics. The results showed that multidimensional approaches were feasible, but that it was important for the common items to include all the dimensions assessed at the adjacent grade levels.

Ito, Sykes, and Yao (2008). This study compared concurrent and separate grade-group calibration while developing a vertical scale spanning nine consecutive grades, tracking student competencies in Reading and Mathematics. The study used the BMIRT software, which implements Markov chain Monte Carlo estimation. The results showed that concurrent and separate grade-group calibrations produced different results for Mathematics than for Reading. This, in turn, confirms that implementing vertical scaling is very challenging, and that decisions about its construction can have noticeable effects on the results.

Briggs and Weeks (2009). This study was based on real data, using item responses from the Colorado Student Assessment Program. It compared vertical scales based on the 3PL model with those based on the Rasch model. In general, the 3PL model produced vertical scales with greater year-to-year gains in performance, but also greater increases in within-grade variability, than the Rasch-based scale did. All methods resulted in growth curves showing smaller gains at higher grade levels, whereas the standard deviations did not differ much in size across grade levels.

References

Berger, S., Verschoor, A. J., Eggen, T. J., & Moser, U. (2019, October). Development and validation of a vertical scale for formative assessment in mathematics. In Frontiers in Education (Vol. 4, p. 103). Frontiers. Retrieved from https://www.frontiersin.org/articles/10.3389/feduc.2019.00103/full

Briggs, D. C., & Weeks, J. P. (2009). The impact of vertical scaling decisions on growth interpretations. Educational Measurement: Issues and Practice, 28(4), 3–14.

Briggs, D. C. (2010). Do Vertical Scales Lead to Sensible Growth Interpretations? Evidence from the Field. Online Submission. Retrieved from https://files.eric.ed.gov/fulltext/ED509922.pdf

De Ayala, R. J. (2009). The Theory and Practice of Item Response Theory. New York: Guilford Publications Incorporated.

Eggen, T. J. H. M., & Verhelst, N. D. (2011). Item calibration in incomplete testing designs. Psicológica 32, 107–132.

Harris, D. J. (2007). Practical issues in vertical scaling. In Linking and aligning scores and scales (pp. 233–251). Springer, New York, NY.

Hoskens, M., Lewis, D. M., & Patz, R. J. (2003). Maintaining vertical scales using a common item design. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago, IL.

Ito, K., Sykes, R. C., & Yao, L. (2008). Concurrent and separate grade-groups linking procedures for vertical scaling. Applied Measurement in Education, 21(3), 187–206.

Kolen, M. J., & Brennan, R. L. (2014). Item response theory methods. In Test Equating, Scaling, and Linking (pp. 171–245). Springer, New York, NY.

Li, T. (2007). The effect of dimensionality on vertical scaling (Doctoral dissertation, Michigan State University. Department of Counseling, Educational Psychology and Special Education).

Lord, F. M. (2012). Applications of item response theory to practical testing problems. Routledge.

Paek, I., & Young, M. J. (2005). Investigation of student growth recovery in a fixed-item linking procedure with a fixed-person prior distribution for mixed-format test data. Applied Measurement in Education, 18(2), 199–215.

Petersen, N. S., Kolen, M. J., & Hoover, H. D. (1989). Scaling, norming, and equating. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 221–262). New York: Macmillan.

Rasch, G. (1960). Probabilistic Models for Some Intelligence and Attainment Tests. Copenhagen: Danmarks Paedagogiske Institut.

Reckase, M. D., & Li, T. (2007). Estimating gain in achievement when content specifications change: a multidimensional item response theory approach. Assessing and modeling cognitive development in school. JAM Press, Maple Grove, MN.

Reckase, M. (2010). Study of best practices for vertical scaling and standard setting with recommendations for FCAT 2.0. Unpublished manuscript. Retrieved from https://www.fldoe.org/core/fileparse.php/5663/urlt/0086369-studybestpracticesverticalscalingstandardsetting.pdf

Stocking, M. L., & Lord, F. M. (1983). Developing a common metric in item response theory. Applied Psychological Measurement, 7(2), 201–210. doi:10.1177/014662168300700208

Thurstone, L. L. (1925). A method of scaling psychological and educational tests. Journal of Educational Psychology, 16(7), 433–451.

Thurstone, L. L. (1938). Primary mental abilities (Psychometric monographs No. 1). Chicago: University of Chicago Press.

Tong, Y., & Harris, D. J. (2004, April). The impact of choice of linking and scales on vertical scaling. Paper presented at the annual meeting of the National Council on Measurement in Education, San Diego, CA.

Tong, Y., & Kolen, M. J. (2008). Maintenance of vertical scales. Paper presented at the annual meeting of the National Council on Measurement in Education, New York, NY.

van der Linden, W. J., & Glas, C. A. W. (eds.). (2010). Elements of Adaptive Testing. New York, NY: Springer.

von Davier, A. A., Carstensen, C. H., & von Davier, M. (2006). Linking competencies in educational settings and measuring growth. ETS Research Report Series, 2006(1), i–36. Retrieved from https://files.eric.ed.gov/fulltext/EJ1111406.pdf

Wainer, H. (Ed.). (2000). Computerized adaptive testing: A Primer, 2nd Edn. Mahwah, NJ: Lawrence Erlbaum Associates.

Wingersky, M. S., & Lord, F. M. (1983). An Investigation of Methods for Reducing Sampling Error in Certain IRT Procedures (ETS Research Reports Series No. RR-83-28-ONR). Princeton, NJ: Educational Testing Service.

Yen, W. M. (1986). The choice of scale for educational measurement: An IRT perspective. Journal of Educational Measurement, 23(4), 299–325.

Yen, W. M., & Burket, G. R. (1997). Comparison of item response theory and Thurstone methods of vertical scaling. Journal of Educational Measurement, 34(4), 293–313.