
N choose K

The Knewton Tech Blog


The Mismeasure of Students: Using Item Response Theory Instead of Traditional Grading to Assess Student Proficiency
Posted on June 7, 2012 by alejandro.

Imagine for a second that you're teaching a math remediation course full of fourth graders. You've just administered a test with 10 questions. Of those 10 questions, two questions are trivial, two are incredibly hard, and the rest are equally difficult. Now imagine that two of your students take this test and answer nine of the 10 questions correctly. The first student answers an easy question incorrectly, while the second answers a hard question incorrectly. How would you try to identify the student with higher ability?

Under a traditional grading approach, you would assign both students a score of 90 out of 100, grant both of them an A, and move on to the next test. This approach illustrates a key problem with measuring student ability via testing instruments: test questions do not have uniform characteristics. So how can we measure student ability while accounting for differences in questions?

Item response theory (IRT) attempts to model student ability using question-level performance instead of aggregate test-level performance. Instead of assuming all questions contribute equally to our understanding of a student's abilities, IRT provides a more nuanced view of the information each question provides about a student.

What kind of features can a question have? Let's consider some examples. First, think back to an exam you have previously taken. Sometimes you breeze through the first section, work through a second section of questions, then battle with a final section until the exam ends. In the traditional grading paradigm described earlier, a correct answer on the first section would count just as much as a correct answer on the final section, despite the fact that the first section is easier than the last! Similarly, a student demonstrates greater ability as she answers harder questions correctly; the traditional grading scheme, however, completely ignores each question's difficulty when grading students!

The one-parameter logistic (1PL) IRT model attempts to address this by allowing each question to have an independent difficulty variable. It models the probability of a correct answer using the following logistic function:
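A common way to write this function, using the theta and beta parameters defined just below, is:

P_j(\theta) = \Pr(X_j = 1 \mid \theta) = \frac{1}{1 + e^{-(\theta - \beta_j)}}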

where j represents the question of interest, theta is the current student's ability, and beta is item j's difficulty. This function is also known as the item response function. We can examine its plot (with different values of beta) below to confirm a couple of things:

1. For a given ability level, the probability of a correct answer increases as item difficulty decreases. It follows that, between two questions, the question with the lower beta value is easier.

2. Similarly, for a given question difficulty level, the probability of a correct answer increases as student ability increases. In fact, the curves displayed above take a sigmoidal form, implying that the probability of a correct answer increases monotonically as student ability increases.
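As a quick numerical check of both points, here is a minimal sketch of the 1PL item response function in Python, with arbitrary ability and difficulty values chosen purely for illustration:

import math

def p_correct_1pl(theta, beta):
    # 1PL item response function: probability of a correct answer given
    # student ability theta and item difficulty beta.
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

# Point 1: at a fixed ability, an easier item (lower beta) is more likely
# to be answered correctly.
print(p_correct_1pl(0.0, beta=-1.0))  # ~0.73 for an easy item
print(p_correct_1pl(0.0, beta=1.0))   # ~0.27 for a hard item

# Point 2: for a fixed item, the probability rises monotonically with ability.
for theta in (-2.0, 0.0, 2.0):
    print(theta, p_correct_1pl(theta, beta=0.0))  # ~0.12, 0.50, 0.88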

Now consider using the 1PL model to analyze test responses provided by a group of students. If one student answers one question, we can only draw information about that student's ability from that first question. Now imagine a second student answers the same question as well as a second question, as illustrated below. We immediately have the following additional information about both students and both test questions:

1. We now know more about student 2's ability relative to student 1's, based on student 2's answer to the first question. For example, if student 1 answered correctly and student 2 answered incorrectly, we know that student 1's ability is greater than student 2's ability.

2. We also know more about the first question's difficulty after student 2 answered the second question. Continuing the example from above, if student 2 answers the second question correctly, we know that Q1 likely has a higher difficulty than Q2 does.

3. Most importantly, however, we now know more about the first student! Continuing the example even further, we now know that Q1 is more difficult than initially expected. Student 1 answered the first question correctly, suggesting that student 1 has greater ability than we initially estimated!

This form of message passing via item parameters is the key distinction between IRT's estimates of student ability and other naive approaches (like the grading scheme described earlier). Interestingly, it also suggests that one could develop an online version of IRT that updates ability estimates as more questions and answers arrive! But let's not get ahead of ourselves.

Instead, let's continue to develop item response theory by considering the fact that students of all ability levels might have the same probability of correctly answering a poorly written question. When discussing IRT models, we say that these questions have a low discrimination value, since they do not discriminate between students of high or low ability. Ideally, a good question (i.e., one with high discrimination) will maximally separate students into two groups: those with the ability to answer correctly, and those without. This gets at an important point about test questions: some questions do a better job than others of distinguishing between students of similar abilities. The two-parameter logistic (2PL) IRT model incorporates this idea by attempting to model each item's level of discrimination between high- and low-ability students. This can be expressed as a simple tweak to the 1PL:
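A common form of the 2PL item response function, with alpha as the discrimination parameter discussed below, is:

P_j(\theta) = \frac{1}{1 + e^{-\alpha_j (\theta - \beta_j)}}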

How does the addition of alpha, the item discrimination parameter, affect our model? As above, we can take a look at the item response function while changing alpha a bit:
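As a rough numerical illustration of alpha's effect (with arbitrary parameter values chosen for this sketch), consider the 2PL probabilities for two students whose abilities sit just below and just above an item's difficulty:

import math

def p_correct_2pl(theta, alpha, beta):
    # 2PL item response function: alpha controls how sharply the item
    # separates students whose abilities straddle the difficulty beta.
    return 1.0 / (1.0 + math.exp(-alpha * (theta - beta)))

for alpha in (0.5, 1.0, 3.0):
    below = p_correct_2pl(-0.5, alpha, beta=0.0)  # student slightly below the item's difficulty
    above = p_correct_2pl(0.5, alpha, beta=0.0)   # student slightly above the item's difficulty
    print(alpha, round(below, 2), round(above, 2))
# alpha = 0.5 -> 0.44 vs. 0.56 (weak separation)
# alpha = 3.0 -> 0.18 vs. 0.82 (strong separation)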

As previously stated, items with high discrimination values can distinguish between students of similar ability. If we're attempting to compare students with abilities near zero, a higher discrimination sharply decreases the probability that a student with ability < 0 will answer correctly, and increases the probability that a student with ability > 0 will answer correctly.

We can even go a step further here, and state that an adaptive test could use a bank of high-discrimination questions of varying difficulty to optimally identify a student's abilities. As a student answers each of these high-discrimination questions, we could choose a harder question if the student answers correctly (and vice versa). In fact, one could even identify the student's exact ability level via binary search, if the student is willing to work through a test bank with an infinite number of high-discrimination questions of varying difficulty!
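Here is a minimal sketch of that binary-search idea, assuming a hypothetical answers_correctly function backed by an unlimited supply of perfectly discriminating items (an idealization, as noted above):

def estimate_ability(answers_correctly, low=-4.0, high=4.0, steps=20):
    # Binary search over a difficulty range: serve an item whose difficulty
    # sits at the midpoint of the current ability interval, then narrow the
    # interval based on whether the student answers it correctly.
    for _ in range(steps):
        difficulty = (low + high) / 2.0
        if answers_correctly(difficulty):
            low = difficulty   # correct answer: ability lies above this difficulty
        else:
            high = difficulty  # incorrect answer: ability lies below this difficulty
    return (low + high) / 2.0

# Hypothetical student with true ability 1.3 who answers a perfectly
# discriminating item correctly iff her ability exceeds its difficulty.
true_ability = 1.3
print(estimate_ability(lambda difficulty: true_ability > difficulty))  # ~1.3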

Of course, the above scenario is not completely true to reality. Sometimes students will identify the correct answer by simply guessing! We know that answers can result from concept mastery or from filling in your Scantron like a Christmas tree. Additionally, students can increase their odds of guessing a question correctly by ignoring answers that are obviously wrong. We can thus model each question's guessability with the three-parameter logistic (3PL) IRT model. The 3PL's item response function looks like this:
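A common form of the 3PL item response function, with chi as the pseudoguess parameter described below, is:

P_j(\theta) = \chi_j + (1 - \chi_j) \cdot \frac{1}{1 + e^{-\alpha_j (\theta - \beta_j)}}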

where chi represents the item's pseudoguess value. Chi is not considered a pure guessing value, since students can use some strategy or knowledge to eliminate bad guesses. Thus, while a pure guess would be the reciprocal of the number of options (e.g., a student has a one-in-four chance of guessing the answer to a multiple-choice question with four options), those odds may increase if the student manages to eliminate an answer (e.g., that same student increases her guessing odds to one-in-three if she knows one option isn't correct). As before, let's take a look at how the pseudoguess parameter affects the item response function curve:
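Numerically, the floor that chi introduces looks like this (again with arbitrary parameter values chosen purely for illustration):

import math

def p_correct_3pl(theta, alpha, beta, chi):
    # 3PL item response function: chi sets a floor on the probability of a
    # correct answer, representing the chance of guessing correctly.
    return chi + (1.0 - chi) / (1.0 + math.exp(-alpha * (theta - beta)))

theta = -3.0  # a low-ability student facing an item of average difficulty
print(p_correct_3pl(theta, alpha=1.0, beta=0.0, chi=0.0))   # ~0.05 (equivalent to the 2PL)
print(p_correct_3pl(theta, alpha=1.0, beta=0.0, chi=0.25))  # ~0.29, never drops below 0.25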

Note that students of low ability now have a higher probability of guessing the question's answer. This is also clear from the 3PL's item response function (chi is an additive term and the second term is non-negative, so the probability of answering correctly is at least as high as chi).

Note that there are a few general concerns in the IRT literature regarding the 3PL, especially regarding whether an item's guessability is instead part of a student's testing wisdom, which arguably represents some kind of student ability. Regardless, at Knewton we've found IRT models to be extremely helpful when trying to understand our students' abilities by examining their test performance.

References

de Ayala, R.J. (2008). The Theory and Practice of Item Response Theory. New York, NY: The Guilford Press.

Kim, J.S., & Bolt, D. (2007). Estimating Item Response Theory Models Using Markov Chain Monte Carlo Methods. Educational Measurement: Issues and Practice, 38(51).

Sheng, Y. (2008). Markov Chain Monte Carlo Estimation of Normal Ogive IRT Models in MATLAB. Journal of Statistical Software, 25(8).

Also, thanks to Jesse St. Charles, George Davis, and Christina Yu for their helpful feedback on this post!