Iteman
Classical Item and Test Analysis
Version 4.3
June 2013
Contact Information
Assessment Systems Corporation
6053 Hudson Road, Suite 345
Woodbury, MN 55125
Voice: (651) 647-9220
Fax: (651) 647-0412
www.assess.com
Bookmarks
To view PDF Bookmarks for this manual, select the Bookmark tab on the left side of the Acrobat
window. The bookmark entries are hyperlinks that will take you directly to any section of the
manual that you select.
License
Unless you have purchased multiple licenses for Iteman 4.3, your license is a single-user license.
Instructions for transferring your license between computers are in Appendix E.
Citation
Guyer, R., & Thompson, N. A. (2013). User's Manual for Iteman 4.3. Woodbury, MN:
Assessment Systems Corporation.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any
form or by any means (electronic, mechanical, photocopying, recording, or otherwise) without
the prior written consent of the publisher.
Copyright 2013 by Assessment Systems Corporation
All Rights Reserved
Table of Contents
1. Introduction ...................................................................................................................... 1
Your Iteman 4 License and Unlocking Your Copy ............................................................ 1
2. Input Files ......................................................................................................................... 4
The Data Matrix File........................................................................................................... 4
Iteman 3 Data Format ......................................................................................................... 4
Delimited Data Matrix File ................................................................................................. 5
The Item Control File ......................................................................................................... 5
3. Running the Program ...................................................................................................... 7
Using the Menus ................................................................................................................. 7
The File Menu ................................................................................................................. 7
Using the Main Program ..................................................................................................... 8
The Files Tab .................................................................................................................. 8
The Input Format Tab ..................................................................................................... 9
The Scoring Options Tab .............................................................................................. 11
The Output Options Tab ............................................................................................... 13
Using Multiple Runs Files ................................................................................................ 15
Creating a Multiple Runs File ....................................................................................... 15
Opening a Multiple Runs File ....................................................................................... 17
A Sample MRF File ...................................................................................................... 17
Running the sample files................................................................................................... 19
4. Interpreting the Output ................................................................................................. 22
Test-Level Output (Examinee Scores) .............................................................................. 22
Test-Level Output (Reliability Analysis) ......................................................................... 23
Test-Level Output (Graphics) ........................................................................................... 24
Conditional Standard Error of Measurement (CSEM) ..................................................... 24
Item-Level Output............................................................................................................. 25
What to Look For .............................................................................................................. 29
Item Difficulty .............................................................................................................. 29
The P value (Multiple Choice)........................................................................... 29
The Item Mean (Polytomous) ............................................................................ 30
Item Correlations .......................................................................................................... 30
Multiple Choice Items ........................................................................................ 30
Polytomous Items ............................................................................................... 30
DIF Statistics ................................................................................................................. 31
Mantel-Haenszel ................................................................................................ 31
z-test Statistic ..................................................................................................... 31
p .......................................................................................................................... 32
Bias Against ....................................................................................................... 32
1. Introduction
Iteman is a Windows application designed to provide detailed item and test analysis reports
using classical test theory (CTT). The purpose of these reports is to help testing programs
evaluate the quality of test items by examining their psychometric characteristics.
Iteman has a friendly graphical user interface (GUI) that makes it easy to run the program, even
if you are not familiar with psychometrics. The GUI is organized into five tabs: Settings, Files,
Input Format, Scoring Options, and Output Options. These are discussed in detail in Chapter 3:
Running the Program.
Iteman 4.3 offers several substantial advantages over Iteman 3:
1. The most important advantage is the addition of graphics. It is now possible to produce
an item quantile plot for each item. Moreover, you control the number of points in the
plot.
2. Iteman 4 is able to handle pretest (trial or unscored) items: items that are not included in
the final score but for which statistics are still desired.
3. More statistics are calculated, including the alpha (KR-20) reliability coefficient with
each item deleted, several split-half reliability coefficients (both with and without
Spearman-Brown correction), conditional standard error of measurement, and subgroup P
(proportion correct) statistics for up to seven ordered groups.
4. Instead of simple ASCII text files, the output is now produced in rich text format (RTF),
prepared as a formal report, and in a comma-separated values (CSV) format that can be
manipulated (sorted, highlighted, etc.) in spreadsheet software. A CSV file of examinee
scores is also produced.
5. Scaled scores and subscores can be added to the output.
6. Scores can be classified into two groups at a specified cut score, and you can supply the
labels used for the two groups.
7. Items can be analyzed relative to an external score rather than the total score on a test.
8. The maximum number of items that can be analyzed has been increased to 10,000.
9. A batch capability, using a Multiple Runs File, has been added to allow you
to run multiple data sets without having to use the graphical user interface for each run.
Multiple Runs Files can be created outside Iteman in a text editor or interactively within
Iteman.
desktop and a laptop) so long as there is no possibility that the two copies of the software will be
in use simultaneously. If you would like to use Iteman 4 on a network or by more than one user,
please contact us to arrange for the appropriate number of additional licenses.
Iteman 4 is shipped as a functionally-limited demonstration copy. It is limited to no more than 50
items and 50 examinees, but has no expiration date. We can permanently convert your demo
copy to the fully functioning software by email, phone, or fax once you have completed the
license purchase. To unlock Iteman 4, please email/phone/fax to ASC:
1. Your name and email address.
2. Your organization or affiliation.
3. Your invoice number (in the top right corner of your invoice). You should make a
record of your invoice number since you might be asked for it if you request technical
support.
4. The unlock codes, which are two numeric codes that are unique to the installation of
Iteman 4 on any given computer. To obtain these two codes, click on the Unlock
Program button when Iteman 4 starts (Figure 1.1). This license window can also be
reached by clicking on the License button and selecting Unlock when Iteman 4 is
running in demo mode.
Figure 1.1: Screen Visible When Iteman 4 is Locked
From the unlock screen you will need to send us the two blue Computer ID and Session ID
numbers (Figure 1.2). For your convenience we have provided a Copy IDs to Clipboard
button. This will copy both IDs to the Windows clipboard along with a brief message and the
email address to which to send your payment information. This can then be pasted into an email
message, filled in, and sent to sales@assess.com. If you have already paid for your Iteman 4
license, be sure to add your invoice number to this message.
When we receive these codes from you, we will respond with a single numeric Activation Code
(if you have purchased a permanent license) or two codes (if you have purchased an annual
subscription license) that you will need to enter into this same window from which you obtained
your Activation Codes (the red labels in Figure 1.2). Once you enter the code(s) that we send
you, your copy will be unlocked and fully functional.
Figure 1.2: The Unlock Screen
Note that if you install Iteman 4 on a second computer, you will need to repeat this process for
that computer since the unlock codes are specific to a given computer.
Iteman 4 is permanently unlocked for academic use, but is an annual subscription for
non-academic use. The license status box (see Figure 3.1) will display the current
license status, including the number of days remaining for your subscription. As the
subscription nears the end, the background color of the box will change to alert you to the
need to renew your subscription for another year (red if you have less than 30 days
remaining, yellow if 30-90 days, and green if more than 90 days).
2. Input Files
Iteman 4 requires two input files: the Data Matrix File and an Item Control File. The formats for
these files are described in the following sections.
4213323412
1213323410
3323123413
1223323414
2214323411
Additional columns can be ignored, so it is not necessary to delete any data if your data file has
information other than ID and responses. For example, your file might contain exam dates,
locations, education level, or sensitive personal data that you do not want included in the output.
An example is shown in Figure 2.2: you might want to include examinee ID numbers (the
first six columns) in your output but not names; in this example, group membership information
is included in column 19. Chapter 3: Running the Program describes how to skip these columns.
Figure 2.2: Example of an Input Data File (Columns to Ignore)
6153425  John Doe   M  4213323412
5947824  Jane Doe   F  1213323410
5976281  Jack Hall  M  3323123413
1359687  Jim Hill   M  1223323414
9778236  Jen Smith  F  2214323411
N = No (not included),
P = Pretest;
6. Item type:
M = Unscored multiple-choice items with responses that begin at 1 or A. For
scored multiple-choice items, see P below.
R = Rating scale items: polytomous items with responses that begin at 1 or A.
P = Items with numeric responses that begin at 0 (e.g., 0, 1, 2, 3). This includes
multiple-category partial credit items, and dichotomously scored multiple-choice
items (scored 0 or 1);
An example of the control file is shown in Figure 2.5. There are ten items: nine multiple-choice
items and one partial credit item. The first five are in Domain 1, while the latter five are
in Domain 2. The first four items in each domain are scored, while the fifth item in each domain
is a pretest item. The keyed answer is 1, 2, 3, or 4 for each multiple-choice item, since each
item has four alternatives. Keys can be alphabetical or numeric. Item 7 has two keyed responses
(3 and 1); responses to Item 7 will be scored as correct if the examinee answers either 3 or 1.
If an item is polytomously scored, the key should be + if positively scored and - if negatively
(reverse) scored. Item 10 is a positively scored (+) partial credit item with item responses that
begin at 0. For item 10, the item responses will be 0, 1, 2, 3, and 4, since the item has five
options.
The control file should have as many lines as there are items in the test. The program counts the
lines of information in the control file, and that serves as the total number of items in the test.
There is a maximum of 10,000 items (lines) in Iteman 4.3.
Figure 2.5: Example of an Item Control File
Item01  1   4  Science  Y  M
Item02  2   4  Science  Y  M
Item03  3   4  Science  Y  M
Item04  4   4  Science  Y  M
Item05  1   4  Science  P  M
Item06  2   4  Reading  Y  M
Item07  31  4  Reading  Y  M
Item08  4   4  Reading  Y  M
Item09  1   4  Reading  Y  M
Item10  +   5  Reading  P  P
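If you script your own file preparation, a control file line like those in Figure 2.5 can be parsed with a few lines of code. This is an illustrative sketch only: the whitespace-separated column order (item ID, key, number of options, domain, inclusion status, item type) is assumed from the example above, and the function name is hypothetical, not part of Iteman.

```python
# Hedged sketch: parse one line of an Iteman-style item control file.
# The column order is assumed from the Figure 2.5 example, not from a
# formal specification of the format.

def parse_control_line(line):
    item_id, key, n_options, domain, status, item_type = line.split()
    if status not in ("Y", "N", "P"):
        raise ValueError("inclusion status must be Y, N, or P: %r" % status)
    if item_type not in ("M", "R", "P"):
        raise ValueError("item type must be M, R, or P: %r" % item_type)
    return {
        "id": item_id,
        # "31" means two keyed responses (3 and 1); "+"/"-" mark polytomous items
        "keys": key if key in ("+", "-") else list(key),
        "n_options": int(n_options),
        "domain": domain,
        "scored": status == "Y",
        "pretest": status == "P",
        "type": item_type,
    }

item7 = parse_control_line("Item07 31 4 Reading Y M")
```

With this layout, Item 7's two keyed responses come back as ["3", "1"], and a polytomous "+" key is passed through unchanged.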
for each file. This will activate a standard dialog window to specify the path and name of each
file.
If the Data Matrix File has an Iteman 3 (ITAP) header, be sure to check this box:
The Item Control file box will be disabled when the Iteman 3 Header box is checked, as will the
options on the Input Format tab.
The output file must have an .rtf extension. The fourth box is used if you have a file containing
examinee scores that have been produced by some method other than number-correct that you
wish to use as the basis for your statistics (for example, a scaled score reported by your testing
vendor). The scores in this file, one line per examinee, must be in the same order as those in the
examinee data file. The fifth box allows you to use a previously saved options file. The
selected options file will override the current program defaults when opened. The last text
box allows you to provide a title for your report.
If the Data Matrix File is delimited, indicate this by checking the Data matrix is delimited by a
box, then specify whether the data file is delimited by a tab character or a comma. Selecting that the
data matrix file is delimited will disable the Data matrix file includes an Iteman 3 Header box
and the three fixed-width column boxes. If the delimited response matrix does not include the
examinee ID in the first column, make sure that the Response matrix includes examinee ID
box is not checked.
If you have a special character in your data representing omitted responses or not-administered
items, these are specified next. These responses will be treated separately, with frequencies
provided in the output. If all items were answered by all examinees, you can leave these
characters as the default value, and of course no examinees will be noted as having such
characters.
If your Data Matrix File includes an Iteman 3 header, the options on this tab will be deactivated and
the following message will be displayed:
To request a DIF analysis for each scored dichotomous item, select the checkbox next to that option.
If you are performing a DIF analysis, you must specify the column in which the group code
appears. This option is not valid for delimited input and will remain deactivated in that case.
The DIF code will not be included as part of the examinee ID.
The create X ability levels for the DIF test option specifies the number of levels created
for purposes of the Mantel-Haenszel DIF test.
Specify the characters used to identify the reference and focal groups. These characters are
not case sensitive.
Specify the labels for the reference and focal groups. The labels provided will be used in the
output when the DIF test is significant.
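Conceptually, the Mantel-Haenszel test stratifies examinees into the requested number of ability levels and pools a 2x2 (group by correct/incorrect) table from each stratum. The following is a minimal sketch of the standard M-H chi-square with continuity correction; it is the textbook formulation, not Iteman's actual implementation.

```python
import math

# Hedged sketch of the Mantel-Haenszel chi-square for DIF detection.
# Each stratum is a 2x2 table given as
# (ref_correct, ref_incorrect, focal_correct, focal_incorrect).
def mantel_haenszel(strata):
    sum_a = sum_ea = sum_var = 0.0
    for a, b, c, d in strata:
        t = a + b + c + d
        if t < 2:
            continue  # a stratum this small contributes nothing usable
        sum_a += a
        sum_ea += (a + b) * (a + c) / t                      # E[a] under no DIF
        sum_var += (a + b) * (c + d) * (a + c) * (b + d) / (t * t * (t - 1))
    chi2 = (abs(sum_a - sum_ea) - 0.5) ** 2 / sum_var        # continuity-corrected
    p = math.erfc(math.sqrt(chi2 / 2))                       # 1-df chi-square tail
    return chi2, p
```

When the reference and focal groups answer alike within every stratum, the chi-square stays near zero and p stays large; a large one-sided imbalance drives p below 0.05.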
If your testing program reports scaled scores based on raw number-correct scores, these can
be calculated directly. Scaled scores are computed using the scaling function (detailed
below) for the total number correct scores and/or the domain number-correct scores. Scaled
scoring is often used to mask details about the test, such as exact number of items or raw
cutoff score, or to express scores on a different scale than number correct.
o Linear scaling: The raw scores are first multiplied by the slope coefficient, then the
intercept is added to the product. For example, if you want the scores to be reported on a
scale of 100 to 200 for a test of 50 items, the scaled score could be specified as SCALE =
RAW × 2 + 100.
o Standardized scaling: The raw scores are converted to have a mean of X and a standard
deviation of Y. This form of scaling is useful if you desire to center the mean of the test
around a constant value (e.g., 50) for use in a report.
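Both scaling functions are one-line transformations. A sketch follows; the function names are illustrative, and the standardized form assumes the population standard deviation.

```python
def linear_scale(raw, slope, intercept):
    # SCALE = RAW * slope + intercept
    return raw * slope + intercept

def standardized_scale(raw_scores, target_mean, target_sd):
    # Convert each raw score to a z-score, then rescale to the target mean/SD.
    n = len(raw_scores)
    mean = sum(raw_scores) / n
    sd = (sum((x - mean) ** 2 for x in raw_scores) / n) ** 0.5
    return [target_mean + target_sd * (x - mean) / sd for x in raw_scores]

# With slope 2 and intercept 100, a 50-item test maps onto the 100-200 scale:
top = linear_scale(50, 2, 100)
```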
If you want to perform dichotomous classification of the total number-correct scores, click
the box next to that statement. It is possible to classify based on either the total number-correct
or the scaled total number-correct scores.
o Cutpoint: The cutpoint is the value at which scores are classified as in the high group.
Scores below the cutpoint are classified as being in the low group.
o Low group label: Label used in the Scores output file for those in the low group.
o High group label: Label used in the Scores output file for those in the high group.
Item statistic flagging allows you to specify an acceptable range for a statistic. For example, if
you want to identify all items that have a P (proportion correct) between 0.20 and 1.00, it can be
specified here, and then the output will label items with low P as LP and high P as HP.
Figure 3.4 is set up with these bounds, as well as a minimum point-biserial of 0.10. The
acceptable item mean range is used to flag the item means of polytomous items to identify
outlier items. Flags are further explained in Chapter 4.
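Flagging amounts to a range check per statistic. As a sketch, using the bounds from the example above (P between 0.20 and 1.00, point-biserial of at least 0.10); the function name and defaults are illustrative, not Iteman's:

```python
def flag_item(p, rpbis, p_lo=0.20, p_hi=1.00, r_lo=0.10):
    # Return flag codes in the style of the output:
    # LP/HP = P below/above its bounds, LR = point-biserial below its bound.
    flags = []
    if p < p_lo:
        flags.append("LP")
    if p > p_hi:
        flags.append("HP")
    if rpbis < r_lo:
        flags.append("LR")
    return flags
```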
Selecting the Exclude omits from option statistics box will prevent omits from having the full
complement of option statistics computed for them. The default of scoring omits as incorrect
affects the reliability coefficients, and provides the full complement of option statistics for omits.
For polytomous items, omits are automatically excluded from the option statistics.
If you want to have the point-biserial and biserial correlations corrected for spuriousness,
click the check box next to that statement. Spuriousness refers to the fact that an item's
scores are included in the total score, so correlating an item with the total score means that it
is being correlated with itself to some extent. This effect is negligible if there are a large
number of items on the test (e.g., more than 30), but Iteman 4 provides the option to correct
for this issue, which should be used for tests of 30 items or fewer.
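One standard way to correct an item-total correlation for spuriousness is to remove the item's own score from the total before correlating. The sketch below shows that form; whether Iteman applies this exact correction is not stated here.

```python
def pearson(x, y):
    # Plain Pearson correlation (for 0/1 item scores this is the point-biserial).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def corrected_point_biserial(item_scores, total_scores):
    # Correlate the item with (total - item) so the item is not
    # correlated with itself.
    rest = [t - i for i, t in zip(item_scores, total_scores)]
    return pearson(item_scores, rest)
```

For a positively discriminating item, the corrected value is typically below the uncorrected one, and the gap shrinks as the test gets longer.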
Produce quantile plots for each item will produce a graphical plot of the specified number
of subgroups (up to 7) for each item; interpretation of these plots is discussed in Chapter 4:
Interpreting the Output. The quantile plot will be produced for only the first 9
alternatives for an item. Click the check box for this option if you wish to produce quantile
plots for each item, with every page of the output containing the plot and the statistics table
for a given item.
Produce the quantile plot data table will provide a table for each item that contains the
proportions in each subgroup that are shown graphically in the quantile plot. The quantile
plot data table will present the subgroup proportions for up to 15 alternatives plus the omit
and not administered codes.
Use X points for quantile plots allows you to increase or decrease the number X of
examinee groups used for constructing the quantile plots. This number can range from 2 to
7. Larger numbers of points are recommended only for large sample sizes of at least 1,000
examinees.
Produce collusion index matrix (multiple-choice items only) will provide a matrix of indices
from a response similarity analysis in a separate BBO-matrix.csv file. The analysis compares
all possible pairs of examinees to see if their responses might be suspiciously similar. A pair
of responses is considered suspect (i.e., flagged) if the index value is below 0.0001, based on
the criteria developed by Bellezza and Bellezza (1989). Iteman produces the response
similarity analysis only for unscored multiple-choice items.
If you need to convert multiple-choice (ABCD) data into dichotomously scored (0/1) data,
Iteman 4 provides an option for this; the same option applies if the item responses are
recorded as numbers (1, 2, 3, 4). This option is present because some psychometric software,
such as PARSCALE, requires scored data. The scored item responses will be saved with the
name of your primary output file, but with a .TXT extension.
o Include omit codes in the data matrix and Include not administered codes in the data
matrix determines whether omit/not administered codes are kept in the scored matrix or
scored as incorrect (0). Omit/not administered codes are automatically left in the data
matrix for polytomous items.
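As a sketch of the conversion (the omit code and function name here are illustrative; in Iteman the omit and not-administered codes are whatever you set on the Input Format tab):

```python
def score_responses(row, keys, omit_code="O", keep_omits=True):
    # Convert one examinee's letter (or digit) responses to 0/1 characters.
    # An entry in `keys` may contain several keyed responses (e.g. "31").
    scored = []
    for resp, key in zip(row, keys):
        if resp == omit_code:
            scored.append(omit_code if keep_omits else "0")
        else:
            scored.append("1" if resp in key else "0")
    return "".join(scored)
```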
To save the Item Control File to an external file, check this box. The control file will also be
saved with the same name as the output file, but with Control.txt appended to the end of the
filename.
The Flags panel allows you to specify the key flag, the low and high flags for the P value,
point-biserial correlation, and item mean, and the DIF flag.
To create an MRF:
1. Select the folder where the files used for the analysis are stored. Click Add Path to add
the Path to the MRF. (You must complete steps 2, 3, and 4 to perform an analysis.)
2. Select the Options File:
a. If you saved the program options to an external file, open this file and select Add
Options. The Options file will be added to the MRF.
b. If you wish to use the program defaults, select Use Defaults. The Keyword
DEFAULTS will appear in the MRF text editor next to OPTS.
3. Select the item control file (the data file(s) must follow the item control keyword):
a. If you are using an Item Control File, use the file open icon to select the file, then
select Add Control. The name of the control file will appear in the MRF box next to
CTRL.
b. If the data matrix includes an Iteman 3 Header then select the Skip Control box. A
blank space will appear next to the CTRL statement in the MRF box.
4. Select the data file(s) and click Add Data.
Note that if you enter a file name that does not exist in the selected folder, and select Add, the
program will not add the file to the MRF. It is important to note that the options*, control**, and
data files for a specific analysis all must reside within the same folder.
*Unless the defaults are used
**Unless an Iteman 3 Header is used
You may delete entries in the MRF text editor by clicking on the line and pressing Delete or
Backspace. However, the following file sequence must be observed for the MRF to work
correctly:
1. The first PATH keyword must be followed by the OPTS, CTRL, and DATA lines
2. If you wish to use a different OPTS file, that file must appear after the PATH statement.
3. The CTRL statement must be followed by the DATA line(s).
To save the text in the MRF editor box to an external file, select the Save MRF button. This
will allow you to save the MRF to a folder of your choosing.
An example of a completed MRF file is shown below.
To run the MRF, select the RUN MRF box. Note that the text in the MRF editor box will
automatically be saved to an external file when you run the MRF. The saved MRF text file will
have the word MRF appended to the end of the filename of the last selected data file.
The following output files will be generated for each DATA file in the MRF:
1. DATA.rtf
The main rich text output file that includes the graphics and tables
2. DATA.csv
The comma-separated values output file
3. DATA Scores.csv
The scores saved as a comma-separated values file
The following output files are optional and will be generated for each DATA file in the MRF if
requested in the Options File:
4. DATA Matrix.txt
The scored data matrix file
5. DATA Control.txt
The item control file if the original data matrix file used an Iteman 3
Header and a scored data matrix was requested
PATH C:\Sample Files\
OPTS Sample.options
CTRL Control.txt
DATA Exam1.txt
DATA Exam2.txt
DATA Exam3.txt
CTRL ITEMAN 3
DATA Exam4.txt
The data files Exam1.txt, Exam2.txt, and Exam3.txt all make use of the control file
Control.txt. The data file Exam4.txt uses an Iteman 3 Header, so the CTRL line with
ITEMAN 3 precedes the DATA line. The new CTRL line overrides the previous CTRL file
Control.txt and the keyword ITEMAN 3 deactivates the input of the control file. A new
PATH statement at the end of this file would change the folder location of any following OPTS,
CTRL and DATA files to be analyzed. An MRF file can have any number of lines.
Figure 3.7 shows the multiple runs window following the successful completion of the multiple
runs analysis. The window above the Add Path button reports the following information:
1. The dataset being analyzed
2. One of two things:
a. If no errors were encountered, "The analysis was completed successfully" will be
reported.
b. Any error messages will be reported here if any are encountered. See Figure 3.9
below for sample error messages that may be encountered.
3. If the analysis was completed successfully then the number of items and examinees is
reported on the third line for that data file.
Figure 3.7: The Multiple Runs Window Following the Sample Analysis
Figure 3.10: The Input Format Tab for the Sample Files
3. Specify any Scoring Options and Output Options you wish. The program will run successfully
even if you do not make any changes on the third and fourth tabs.
Once the program has successfully run, you will be shown the message in Figure 3.11 to tell you that
the run is complete, and where to find the output file. Clicking Yes will open the relevant
directory.
Label       Explanation
Score       The portion of the test that the row is describing
Items       Number of items in that portion of the test
Mean        Average number-correct score
SD          Standard deviation, a measure of dispersion (a range of two SDs from the
            mean includes approximately 95% of the examinees, if their number-correct
            scores are normally distributed)
Min score   The minimum number of items an examinee answered correctly
Max score   The maximum number of items an examinee answered correctly
Mean P      Average item difficulty statistic for that portion; also the average
            proportion-correct score if there are no omitted responses (not reported
            if there are no multiple-choice items)
Item Mean   Average of the item means for polytomous items (not reported if there
            are no polytomous items)
Mean R      Average item-total correlation for that portion of the test
The test-level summary table (Table 4.3) allows you to make important comparisons between
these various parts of the test. For example, are the new pretest items of comparable difficulty to
the current scored items? Are items in Domain 2 more difficult than Domain 1? Were the mean
and standard deviation (SD) of the raw scores what should be expected?
Table 4.1: Example Summary Statistics
Score          Items  Mean    SD     Min Score  Max Score  Mean P  Item Mean  Mean R
All items      42     38.560  5.288  27         46         0.863   2.020      0.224
Scored Items   36     33.600  4.703  23         40         0.869   2.020      0.223
Pretest items  6      4.960   1.087  2          6          0.827   0.000      0.230
Domain 1       8      7.360   0.776  5          8          0.920   0.000      0.130
Domain 2       16     13.600  2.185  7          16         0.850   0.000      0.259
Domain 3       12     12.640  2.926  7          17         0.860   2.020      0.239
Score          Alpha  SEM
All items      0.765  2.561
Scored items   0.731  2.439
Pretest items  0.519  0.754
Domain 1       0.073  0.747
Domain 2       0.642  1.307
Domain 3       0.590  1.874
[The table also reports three split-half reliability coefficients (random, first-last, and odd-even) and their Spearman-Brown (S-B) corrected values for each score; those columns are not reproduced here.]
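The reliability columns follow standard classical formulas: coefficient alpha (KR-20 for dichotomous items), the SEM derived from it as SD times the square root of (1 - alpha), and split-half correlations stepped up by the Spearman-Brown formula. A sketch of those textbook formulas (not Iteman's source):

```python
def coefficient_alpha(matrix):
    # matrix: rows = examinees, columns = item scores (0/1 gives KR-20).
    n, k = len(matrix), len(matrix[0])
    totals = [sum(row) for row in matrix]
    item_var = 0.0
    for j in range(k):
        col = [row[j] for row in matrix]
        m = sum(col) / n
        item_var += sum((x - m) ** 2 for x in col) / n
    tm = sum(totals) / n
    total_var = sum((t - tm) ** 2 for t in totals) / n
    return k / (k - 1) * (1 - item_var / total_var)

def sem(sd, reliability):
    # Standard error of measurement: SD * sqrt(1 - reliability).
    return sd * (1 - reliability) ** 0.5

def spearman_brown(r_half):
    # Step a half-test correlation up to full test length.
    return 2 * r_half / (1 + r_half)
```

Plugging the All-items row of Table 4.1 into sem() (SD 5.288, alpha 0.765) reproduces the reported SEM of about 2.56.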
If a dichotomous classification was performed, and all the scored items are multiple choice, the
Livingston decision consistency index is computed at the cut-score (expressed as number-correct
scores). The equation for the Livingston index is provided in Appendix C.
After the histograms for the scored items, histograms for the item statistics are provided, each
followed by a table of numerical values corresponding to the histograms. If there were scored
multiple-choice items, histograms for the item P values and Rpbis correlations are provided.
If there were scored polytomous items, histograms for the item means and the Pearson r
correlations are provided.
Next, scatterplots are provided of the P value by Rpbis if there are scored multiple-choice items,
and of the item mean by Pearson's r if there are scored polytomous items.
If dichotomous classification was performed, then the CSEM is reported at the cutscore
(expressed as number correct). If you used a scaled cutscore, this scaled cutscore is converted to
the raw number-correct scale for reporting.
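A common closed form for the conditional SEM of number-correct scores is Lord's binomial error model, CSEM(x) = sqrt(x(n - x)/(n - 1)). Iteman's exact formula is given in its appendices, so treat this sketch as illustrative:

```python
def binomial_csem(score, n_items):
    # Lord's binomial-error CSEM at a given number-correct score;
    # largest mid-range, zero at the floor and ceiling of the scale.
    return (score * (n_items - score) / (n_items - 1)) ** 0.5
```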
Item-Level Output
After the test-level statistics, a detailed table of the statistics for each item is provided, one item
to a page. If the quantile plots option is selected, that is also provided on the same page, as
shown in Figure 4.3 for a dichotomously scored item and Figure 4.4 for a polytomous item.
These quantile plots can be pasted into the item record for test items that are stored in ASC's
FastTEST item banker, FastTEST Pro, or FastTEST Web.
The quantile plot, as seen in Figure 4.3, can be difficult to interpret, but is arguably the best way
to graphically depict the performance of an item with classical test theory. It is constructed by
dividing the sample into X groups based on overall number-correct score (or an external score, if
used) and then calculating the proportion of each group that selected each option. For a
four-option multiple-choice item with three score groups, as in the example, there are 12 data points.
The 3 points for a given option are connected by a colored line. A good item will typically have
a positive slope on the line for the correct/keyed answer, while the slopes for the incorrect options
should be negative.
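The construction just described can be sketched as follows (group boundaries and tie handling here are illustrative; Iteman's exact grouping may differ):

```python
def quantile_plot_data(responses, totals, options, n_groups=3):
    # responses: one selected option per examinee; totals: matching scores.
    # Sort examinees by total score, cut into n_groups roughly equal groups,
    # and compute the proportion of each group that selected each option.
    order = sorted(range(len(totals)), key=lambda i: totals[i])
    size = len(order) / n_groups
    table = []
    for g in range(n_groups):
        members = order[round(g * size):round((g + 1) * size)]
        table.append({opt: sum(responses[i] == opt for i in members) / len(members)
                      for opt in options})
    return table  # table[g][opt] = proportion of group g choosing opt
```

For a four-option item with three groups this yields the 12 data points mentioned above; a rising keyed-option line across groups is the visual signature of a good item.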
Figure 4.3: Item statistics and quantile plot data for a dichotomously scored item

Item information
Seq.  ID  Key
5     5   C

Item statistics
N     P      Domain Rpbis  Domain Rbis  Total Rpbis  Total Rbis  Alpha w/o
1699  0.563  0.571         0.718        0.561        0.706       0.946

Option statistics
Option     N    Prop.   Rpbis    Rbis    Mean    SD      Color
A          81   0.048   -0.224   -0.481  54.617  17.678  Maroon
B          179  0.105   -0.189   -0.319  63.855  18.060  Green
C **KEY**  956  0.563    0.561    0.706  86.265  15.566  Blue
D          482  0.284   -0.383   -0.509  62.689  18.756  Olive
Omit       1    0.001    0.008    0.092  85.000  0.000
Not Admin  0

Quantile plot data table
Option     N    0-20%  20-40%  40-60%  60-80%  80-100%  Color
A          81   0.134  0.067   0.027   0.003   0.008    Maroon
B          179  0.196  0.152   0.102   0.048   0.031    Green
C **KEY**  956  0.131  0.364   0.604   0.793   0.910    Blue
D          482  0.539  0.416   0.267   0.156   0.051    Olive
Figure 4.4: Item statistics and quantile plot data for a polytomous item

Item information
Seq.  ID      Key
56    Poly 6  +

Item statistics
N     Mean   Total R
1000  3.092  0.682

Option statistics
Option     Weight  N    Prop.   Rpbis    Rbis    Mean     SD      Color
1          1       147  0.147   -0.420   -0.647  65.265   15.574  Maroon
2          2       208  0.208   -0.310   -0.438  76.178   18.970  Green
3          3       261  0.261   -0.020   -0.027  91.042   19.000  Blue
4          4       174  0.174    0.196    0.289  103.230  19.285  Olive
5          5       210  0.210    0.513    0.726  118.081  16.855  Gray
Omit               0
Not Admin          0

Quantile plot data table
Option  N    0-20%  20-40%  40-60%  60-80%  80-100%  Color
1       147  0.476  0.197   0.045   0.015   0.015    Maroon
2       208  0.356  0.370   0.196   0.085   0.035    Green
3       261  0.136  0.298   0.442   0.303   0.119    Blue
4       174  0.026  0.101   0.211   0.308   0.219    Olive
5       210  0.005  0.034   0.106   0.289   0.612    Gray
Page 27
The item information table in Figures 4.3 and 4.4 provides the item sequence number, item ID,
keyed response, number of options, and the domain the item is in. The item statistics table
provides item-level statistics and is described separately for multiple-choice and polytomous
items.
Multiple-Choice Items

Label          Explanation
N              Number of examinees that responded to the item
P              Proportion correct
Domain Rpbis*  Point-biserial correlation of keyed response with domain score
Domain Rbis*   Biserial correlation of keyed response with domain score
Total Rpbis    Point-biserial correlation of keyed response with total score
Total Rbis     Biserial correlation of keyed response with total score
Alpha w/o      The coefficient alpha of the test if the item was removed
Flags          Any flags, given the bounds provided; LP = Low P, HP = High P,
               LR = Low rpbis, HR = High rpbis, K = Key error (rpbis for a
               distractor is higher than rpbis for the key), DIF for any item
               with a significant DIF test result
If requested, the DIF test results also appear in the classical statistics table and are defined as:
Label         Explanation
M-H           The Mantel-Haenszel DIF statistic
p             The p-value associated with the M-H test statistic
Bias Against  If p is less than 0.05, the group the item is biased against

Polytomous Items

Label         Explanation
N             Number of examinees that responded to the item
Mean          Average score for the item
Domain r*     Correlation of the item (Pearson's r) with domain score
Domain Eta*+  Coefficient eta from an ANOVA using item and domain scores
Total r       Correlation of the item (Pearson's r) with total score
Total Eta+    Coefficient eta from an ANOVA using item and total scores
Alpha w/o     The coefficient alpha of the test if the item was removed
Flags         Any flags, given the bounds provided; same as dichotomous,
              except that the mean score is used instead of P
Label    Explanation
Option   Letter/number of the option
Weight   Scoring weight for polytomous items
N        Number of examinees that selected the option
Prop.    Proportion of examinees that selected the option
Rpbis    Point-biserial correlation (rpbis) of the option with total score
Rbis     Biserial correlation of the option with total score
Mean     Average score of examinees that selected the option
Color    Color of the option on the quantile plot
(key)    The keyed answer is denoted by **KEY** for multiple-choice items
The final table in Figures 4.3 and 4.4 presents the calculations for the quantile plots. The number
of columns in this table will match the number of score groups you specified on the Output
Options tab.
Iteman 4.3 was designed to produce RTF output instead of PDF output to allow you to
make additions to the report. A very useful addition would be to paste item text and
comments below the plot/table for each item (Figures 4.3 and 4.4). The report can then be
delivered to content experts with an easy-to-read plot, detailed tables, and the item text
neatly arranged on each page, one page for each item.
Item Difficulty
The P value (Multiple Choice)
The P value is the proportion of examinees that answered an item correctly (or in the keyed
direction). It ranges from 0.0 to 1.0. A high value means that the item is easy, and a low value
means that the item is difficult.
The minimum P value bound represents what you consider the cut point for an item being too
difficult. For a relatively easy test, you might specify 0.50 as a minimum, meaning that at least
50% of the examinees answered the item correctly. For a test where we expect examinees
to do poorly, the minimum might be lowered to 0.4 or even 0.3. The minimum should take into
account the possibility of guessing: if the item is multiple-choice with four options, there is a
25% chance of randomly guessing the answer, so the minimum should probably not be set as
low as 0.20.
The maximum P value represents the cut point for what you consider to be an item that is too
easy. The primary consideration here is that if an item is so easy that nearly everyone gets it
correct, it is not providing much information about the examinees. In fact, items with a P of 0.95
or higher typically have very poor point-biserial correlations.
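As a minimal sketch of these two bounds, the following Python fragment computes P and applies hypothetical lower and upper cut points (the 0.25 and 0.95 defaults here are illustrative choices for the example, not Iteman's built-in values):

```python
def p_value(responses, key):
    """Proportion of examinees answering in the keyed direction (classical difficulty)."""
    return sum(1 for r in responses if r == key) / len(responses)

def flag_difficulty(p, low=0.25, high=0.95):
    """Flag an item against illustrative bounds: LP = too hard, HP = too easy.

    The defaults are hypothetical; in Iteman you specify your own bounds
    on the Output Options tab.
    """
    if p < low:
        return "LP"
    if p > high:
        return "HP"
    return ""
```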
Iteman 4.3 Manual
Item Correlations
The item point-biserial (r-pbis) correlation. The Pearson point-biserial correlation (rpbis) is a measure of the discrimination, or differentiating strength, of the item. It ranges from
-1.0 to 1.0. A good item is able to differentiate between examinees of high and low ability, and
will have a higher point-biserial, though rarely above 0.50. A negative point-biserial is indicative
of a very poor item, because it means the high-ability examinees are answering incorrectly while
the low-ability examinees are answering correctly. A point-biserial of 0.0 provides no
differentiation between low-scoring and high-scoring examinees, essentially random noise.
The minimum item-total correlation bound represents the lowest discrimination you are willing
to accept. This is typically a small positive number, like 0.10 or 0.20. If your sample size is
small, it could possibly be reduced.
The maximum item-total correlation bound is almost always 1.0, because it is typically desired
that the r-pbis be as high as possible.
The item biserial (r-bis) correlation. The biserial correlation is also a measure of the
discrimination, or differentiating strength, of the item. It ranges from -1.0 to 1.0. The biserial
correlation is computed between the item and total score as if the item were a continuous measure
of the trait. Because the biserial is an estimate of Pearson's r, it will be larger in absolute magnitude
than the corresponding point-biserial. The biserial makes the stricter assumption that the score
distribution is normal, and it is therefore not recommended for traits where the score
distribution is known to be non-normal (e.g., pathology).
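Both correlations can be computed from raw 0/1 item scores and total scores. The sketch below uses only the Python standard library and the textbook formulas (the biserial is the point-biserial rescaled by sqrt(pq)/y, where y is the standard normal ordinate at the p quantile); it illustrates the definitions rather than Iteman's internal code:

```python
from statistics import NormalDist, mean, pstdev

def point_biserial(correct, scores):
    """Point-biserial: correlation of a 0/1 item with total score (population SD)."""
    p = mean(correct)
    mp = mean(s for s, c in zip(scores, correct) if c == 1)  # mean score, correct group
    mq = mean(s for s, c in zip(scores, correct) if c == 0)  # mean score, incorrect group
    return (mp - mq) / pstdev(scores) * (p * (1 - p)) ** 0.5

def biserial(correct, scores):
    """Biserial: point-biserial rescaled by the normal ordinate at the p quantile."""
    p = mean(correct)
    y = NormalDist().pdf(NormalDist().inv_cdf(p))  # ordinate of N(0,1) at the split
    return point_biserial(correct, scores) * (p * (1 - p)) ** 0.5 / y
```

Note that the biserial is always larger in absolute magnitude than the point-biserial, consistent with the text above; in small samples it can even exceed 1.0.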
Polytomous Items
The r correlation. The Pearson r correlation is computed between the item score and the total
score, under the assumption that the item responses for an item form a continuous variable. The
r correlation and the r-pbis are equivalent for a 2-category item.
The minimum item-total correlation bound represents the lowest discrimination you are willing
to accept. Since the typical r correlation (0.5) will be larger than the typical rpbis (0.3)
correlation, you may wish to set the lower bound higher for a test with polytomous items (0.2 to
0.3). If your sample size is small, it could possibly be reduced.
The maximum item-total correlation bound is almost always 1.0, because it is typically desired
that the r-pbis be as high as possible.
Eta coefficient. The eta coefficient is computed using an analysis of variance with the
item response as the independent variable and total score as the dependent variable. The eta
coefficient is the ratio of the between groups sum of squares to the total sum of squares and has a
range of 0 to 1. The eta coefficient does not assume that the item responses are continuous and
also does not assume a linear relationship between the item response and total score. As a result,
the eta coefficient will always be equal to or greater than the absolute value of Pearson's r. Note
that the biserial correlation will be reported if the item has only 2 categories.
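A minimal sketch of the eta computation, following the one-way ANOVA decomposition described above (eta squared is SS-between over SS-total; eta is reported as its square root). The function name is invented for the example:

```python
from statistics import mean

def eta(item_responses, total_scores):
    """Correlation ratio of total score on item response categories.

    eta^2 = SS_between / SS_total from a one-way ANOVA with the item
    response as the grouping factor and total score as the outcome.
    """
    grand = mean(total_scores)
    ss_total = sum((y - grand) ** 2 for y in total_scores)
    # Group total scores by the response category chosen.
    groups = {}
    for x, y in zip(item_responses, total_scores):
        groups.setdefault(x, []).append(y)
    ss_between = sum(len(ys) * (mean(ys) - grand) ** 2 for ys in groups.values())
    return (ss_between / ss_total) ** 0.5
```

Because the grouping makes no linearity assumption, eta is at least as large as the absolute value of the Pearson correlation computed on the same data.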
DIF Statistics
Differential item functioning (DIF) occurs when the performance of an item differs across groups of
examinees. These groups are typically called the reference (usually majority) and focal (usually
minority) groups. The goal of this analysis is to flag items that are potentially biased against one
group.
There are a number of ways to evaluate DIF. The current version of Iteman utilizes the
Mantel-Haenszel statistic, in which each group is split into several ability levels and the
probability of a correct response is compared between the focal and reference groups at each
level. See Appendix C for the equations. Results of this analysis are added into both the CSV
and RTF output files.
Mantel-Haenszel
The Mantel-Haenszel (M-H) coefficient is reported for each item as an odds ratio. The
coefficient is a weighted average of the odds ratios for each level. If the odds ratio is less than
1.0, then the item is more likely to be correctly endorsed by the reference group than the focal
group. Likewise, odds ratios greater than 1.0 indicate that the focal group was more likely to
correctly endorse the item than the reference group. The RTF file contains the overall M-H
coefficient for an item; the CSV output file also includes the odds ratios for each level. These
ratios can be used to determine whether the DIF present was constant for all abilities (uniform
DIF) or varied conditional on ability (crossing DIF). The M-H coefficient is not sensitive to
crossing DIF, so null results should be checked to confirm that crossing DIF was not present.
z-test Statistic
The negative of the natural logarithm of the M-H odds ratio is divided by its standard error to
obtain the z-test statistic used to test the significance of the M-H coefficient against a null of
zero DIF (an odds ratio of 1.0). This test statistic is provided in the CSV output file.
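Under the notation of Appendix C (C = number correct, I = number incorrect, for the reference and focal groups at each ability level), the common odds ratio can be sketched as follows. This is an illustration of the standard Mantel-Haenszel weighting, not Iteman's implementation, and it omits the standard-error calculation needed for the z-test:

```python
def mantel_haenszel(tables):
    """Common M-H odds ratio from a list of 2x2 tables, one per ability level.

    Each table is (c_ref, i_ref, c_foc, i_foc):
      c_ref = reference group correct,  i_ref = reference group incorrect,
      c_foc = focal group correct,      i_foc = focal group incorrect.
    """
    # Weighted average of per-level odds ratios, each weighted by 1/N_k.
    num = sum(c_ref * i_foc / (c_ref + i_ref + c_foc + i_foc)
              for c_ref, i_ref, c_foc, i_foc in tables)
    den = sum(c_foc * i_ref / (c_ref + i_ref + c_foc + i_foc)
              for c_ref, i_ref, c_foc, i_foc in tables)
    return num / den
```

A single level reduces to the ordinary 2x2 odds ratio; values above 1.0 favor the focal group and values below 1.0 favor the reference group, as described above.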
p
The two-tailed p-value associated with the z-test for DIF. Items with p-values less than .05 will
be flagged as having significant DIF.
Bias Against
The group the item is biased against when the p-value is less than .05. In the context of the M-H
test for DIF, the group that the item is biased against has a lower probability of a correct
response than the other group, controlling for ability level.
Option statistics
Each option has a P value and an r-pbis. The values for the keyed response serve as the statistics
for the item as a whole, but it is the values for the incorrect options (the distractors) that provide
the opportunity to diagnose issues with the item. A high P for a distractor means that many
examinees are choosing that distractor; a high positive r-pbis means that many high-ability
examinees are choosing that distractor. Such a situation identifies a distractor that is too
attractive, and could possibly be argued as correct.
13. Classification: If dichotomous classification was performed, the results of the classification
will be provided. Classification can be performed with the raw total number-correct scores or
the scaled total number-correct scores.
14. CSEM III: The conditional standard error of measurement, Formula III from Lord (1984),
reported if there are no scored polytomous items.
15. CSEM IV: The conditional standard error of measurement, Formula IV from Lord (1984),
reported if there are no scored polytomous items. Across all observed scores, this formula is
most comparable to the classical SEM found in Table 4.2.
Figure 4.5: Sample Examinee Scores Output
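Lord's (1984) conditional SEM formulas (see Appendix C) can be sketched directly. The K term below uses the variance of the item proportions correct together with the mean and variance of the number-correct scores, as defined in Appendix C; treat this as an illustration of the formulas rather than Iteman's exact code:

```python
def csem_iii(x, n):
    """Lord's (1984) Formula III conditional SEM at number-correct score x of n items."""
    return (x * (n - x) / (n - 1)) ** 0.5

def csem_iv(x, n, var_p, mean_x, var_x):
    """Lord's Formula IV: Formula III shrunk by sqrt(1 - K).

    var_p  = variance of the item proportions correct
    mean_x = mean of the number-correct scores
    var_x  = variance of the number-correct scores
    """
    k = n * (n - 1) * var_p / (mean_x * (n - mean_x) - var_x - n * var_p)
    return ((1 - k) ** 0.5) * csem_iii(x, n)
```

When the items are all of equal difficulty (var_p = 0), K is zero and Formula IV reduces to Formula III.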
References
Bellezza, F. S., & Bellezza, S. F. (1989). Detection of cheating on multiple-choice tests by using
error-similarity analysis. Teaching of Psychology, 16, 151-155.
Lord, F. M. (1984). Standard errors of measurement at different ability levels. Journal of
Educational Measurement, 21(3), 239-243.
A data file with an Iteman 3 control header consists of five primary components:
1.
2.
3.
4.
5.
Comments may also be included in the data file. Each of these elements is described in the
following sections.
The first entry in the Iteman 3 header file specifies the number of items to be scored. Unlike
Iteman 3, Iteman 4.3 does not require that this number be located in a fixed position on the first
line. A space or tab must separate the number of items from the next character, the omit code.
The column immediately following the space/tab must contain the alphanumeric code for items
that the examinee has omitted. This may be a digit larger than the number of alternatives, a
letter, or some other character, including a blank. For example, it might be 9 for a
five-alternative item, an O for omitted, or a period. Following the omit character must be a space
or tab. Immediately following the space/tab must be the alphanumeric code for items that the
examinee did not reach and therefore did not have a chance to answer. Like the omission code, it
may be a digit larger than the number of alternatives or any other character. In Figure A.1, the
letter O indicates an omitted item, and N indicates a not-reached item.
A space or tab must separate the not-reached code from the number of ID columns. In Iteman
4, this value can range from 0 to 1,000 columns of examinee identification. A zero must
be placed on the control line when there is no examinee ID information provided. The
example in Figure A.1 indicates that there are 5 characters of identification for each examinee;
in the data lines (beginning on line 5 of the input file in Figure A.1), you will note that
examinees are identified by the characters EX001 through EX005.
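A control line of this form can be parsed with a simple helper. This is a hypothetical sketch that assumes the omit and not-reached codes are non-blank characters (a blank omit code, which the format permits, would require fixed-column parsing instead):

```python
def parse_control_line(line):
    """Parse the first line of an Iteman 3 header.

    Expected fields, separated by spaces or tabs: number of items,
    omit code, not-reached code, and number of ID columns.
    """
    n_items, omit, not_reached, id_cols = line.split()[:4]
    return int(n_items), omit, not_reached, int(id_cols)
```

For the example in Figure A.1, a line such as "30 O N 5" would parse to 30 items, omit code O, not-reached code N, and 5 ID columns.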
Appendix B: Troubleshooting
The following section documents the different error messages you might encounter when you
use Iteman 4.
If you are using different characters for the omitted responses in a single data set, then you
should consider consolidating them for use in Iteman 4. Unidentified responses will be scored as
incorrect, but will not have any option statistics calculated for them.
Figure B.6: Unidentified Response Character Error
Check the data matrix file, examinee XXX did not respond to all
XXX items
You will receive this error when Iteman 4 reaches the end of a line before all of the item
responses have been read in for any examinee other than the first one. If you receive this error,
you should check the following:
1. Whether one or more examinees have an incomplete identification record.
2. Whether one or more examinees are missing item responses (or did not respond to all of
the items on the test and responses were not coded as not reached).
Figure B.7: Examinee Did Not Respond to All Items Error
It should be noted that the examinee number reported in the dialog box is only the last examinee
in the data matrix to have an incomplete record. It is possible that multiple examinees did not
have a complete record.
Appendix C: Formulas
Conditional Standard Error of Measurement Formulas

    CSEM III = \sqrt{\frac{x(n - x)}{n - 1}}                                  (C.1)

where:
    x = number-correct score
    n = number of items

    CSEM IV = \sqrt{1 - K} \, [CSEM III]                                      (C.2)

where:

    K = \frac{n(n - 1) s_P^2}{\bar{x}(n - \bar{x}) - s_x^2 - n s_P^2}         (C.3)

and
    s_P^2 = variance of the item proportions correct
    \bar{x} = mean of the number-correct scores
    s_x^2 = variance of the number-correct scores

    L = \frac{\alpha s_x^2 + (\bar{x} - n p_c)^2}{s_x^2 + (\bar{x} - n p_c)^2}    (C.4)

where:
    \alpha = Cronbach's alpha
    p_c = proportion correct at the cutscore

Note: L equals \alpha when the cutscore is at the mean of the number-correct scores.

    \alpha_k = \frac{C_{Rk} I_{Fk}}{C_{Fk} I_{Rk}},                           (C.5)

where C and I denote the numbers of correct and incorrect responses, the subscripts R and F
denote the reference and focal groups, and k indexes the ability level.

    \alpha_{MH} = \frac{\sum_k C_{Rk} I_{Fk} / N_k}{\sum_k C_{Fk} I_{Rk} / N_k},  (C.6)

where N_k is the total number of examinees at ability level k.

    P(k \text{ or more}) = \sum_{i=k}^{n} \frac{n!}{i!(n - i)!} P^i (1 - P)^{n - i},  (C.7)

where
    k denotes the number of EEIC,
    n is the number of EIC.

Note that the sum runs from i = k to n to estimate the probability of having k or more EEIC out
of n EIC.
The calculation of P is left to the researcher to some extent. Published resources on the topic
note that if examinees always selected randomly among distractors, the probability of an
examinee selecting a given distractor is 1/d, where d is the number of incorrect answers, usually
one less than the total number of possible responses. The probability of two examinees randomly
selecting the same distractor would be (1/d)(1/d). Summing across the d distractors by
multiplying by d, the calculation of P would be

    P = d \left(\frac{1}{d}\right) \left(\frac{1}{d}\right) = \frac{1}{d}.    (C.8)

That is, for a four-option multiple-choice item, d = 3 and P = 0.3333. For a five-option item,
d = 4 and P = 0.25.
However, examinees most certainly do not select randomly among distractors. Suppose a
four-option multiple-choice item was answered correctly by 50% (0.50) of the sample. The first
distractor might be chosen by 0.30 of the sample, the second by 0.15, and the third by 0.05.
Iteman uses these observed probabilities to provide a more realistic estimate of P.
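The tail probability in Equation C.7, with either the naive P of Equation C.8 or an observed estimate, can be sketched as follows (function names are invented for the example):

```python
from math import comb

def prob_k_or_more(k, n, p):
    """Upper-tail binomial probability of observing k or more exact errors
    in common (EEIC) out of n errors in common (EIC), per Equation C.7."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

def naive_p(d):
    """Chance probability that two examinees pick the same distractor when
    choosing randomly among d distractors (Equation C.8): d*(1/d)*(1/d) = 1/d."""
    return 1 / d
```

For example, with four-option items (d = 3) the naive P is 1/3; substituting observed option proportions for 1/d, as described above, yields a smaller and more realistic P.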
The defaults file allows you to change the values for the components of Iteman 4.3 listed below.
The lines of the default file include the following information (this information is case sensitive).
All entries must be separated by a single space unless otherwise indicated. Note that there must
be an entry for each option even if the option is not relevant to a given run.
Line 1. The file internal identifier, which is the following string: "Iteman 4.3"
Line 2. Run title
Line 3. The following options control the specifications found on the Files tab and must be
separated by a single space:
a)
b)
Line 4. The following options control the starting values found on the Input Format tab:
a)
b)
c)
d)
Omit character
e)
f)
g)
h)
b)
c)
d)
e)
Line 6. The following options control the specifications found on the Scoring Options tab:
a)
b)
c)
d)
e)
f)
g)
h)
i)
j)
k)
l)
High group label for classification (separated from low label by a tab)
Line 7. The following options control the specifications found on the Output Options tab:
a)
b)
Lower and upper bounds for acceptable item means (rating scale items)
c)
d)
e)
f)
g)
h)
i)
j)
k)
l)
m)
n)
Line 8. The following flag characters must each be separated by a tab character:
a)
b)
c)
d)
e)
DIF flag
All of the program options can be saved to the defaults file by making changes to the options in
the GUI and clicking "Save the Program Defaults" under the pull-down menu on the File tab.
You will be notified that the defaults file is missing upon start-up of Iteman 4.3 if you move,
rename, or delete the file. If the defaults file is missing, you can easily save a new one by
clicking the "Save an Options File" option on the File menu.
Select "Start Transfer" and follow the prompts. Be sure to connect the appropriate drive for use
as the transfer drive when prompted, if it isn't already connected (Figure E.3). Remember the
drive letter assignment for this drive.
Once OK is clicked, the drive dialog is displayed (Figure E.4). "Removable (A:)" will always be
the floppy drive. Internal hard drives are marked by their drive letter only. USB flash/thumb
drives and other externally connected drives will be marked as "Removable."
Figure E.4: Choose a Drive
Select the drive to carry the transfer file. Once the process is complete, if a USB flash/thumb
drive or external hard drive is used, carefully disconnect it. If there is a problem during this step,
an error message will be shown. Please note any error codes and report the error to Assessment
Systems at support@assess.com.
The program will ask for confirmation, then prompt once again to connect the drive or diskette
carrying the transfer file (Figure E.6). If this has not been done already, please do so, and
remember which drive letter Windows assigns to it.
Figure E.6: Drive Dialog
Follow the prompts to the drive dialog (Figure E.6), and select the appropriate drive, which
might have a different drive letter on the original licensed computer than on the original demo
computer. The program will transfer the license to the transfer file and will indicate that it is now
in demo/trial mode (Figure E.7).
Figure E.7: Notification of Change in Mode
Carefully disconnect the drive once this step is complete. If there have been any errors, please
note them along with any specific codes and report them to Assessment Systems at
support@assess.com.
Follow the prompts to connect the transfer drive, if this hasn't already been done, and to select
the drive. If the license transfer was successful, a message will appear.
Figure E.9: Successful Transfer
If there have been any errors, please note them along with any specific codes and report them to
Assessment Systems at support@assess.com.