
Mastering Modern Psychological Testing: Theory and Methods, 1st Edition by Cecil R. Reynolds - Test Bank
To purchase this complete Test Bank with answers, click the link below:

https://exambanks.com/?product=mastering-modern-psychological-testing-theory-and-methods-1st-edition-by-cecil-r-reynolds-test-bank

If you face any problems or need further information, contact us at Exambanks123@gmail.com.

Description
INSTANT DOWNLOAD WITH ANSWERS
Mastering Modern Psychological Testing: Theory and Methods, 1st Edition by Cecil R. Reynolds - Test Bank

Chapter 6 Test Questions

1. An oral examination, scored by examiners who use a manual and rubric, is an example of
_________ scoring.
1. objective
2. subjective X
3. projective
4. validity

2. A fill-in-the-blank question is a ___________ item.


1. constructed-response X
2. selected-response
3. typical-response
4. objective-response
3. Which of the following formats is a selected-response format?
1. Multiple-choice
2. True-false
3. Matching
4. All of the above X

4. How many distracters is it recommended to provide for multiple-choice items?
1. 2
2. 2 to 6
3. 3 to 5 X
4. 4

5. When writing true-false items, one should include approximately _________ true and ________
false.
1. 30%; 70%
2. 50%; 50% X
3. 70%; 30%

6. When developing matching items, one should keep the lists as ___________ as possible.
1. heterogeneous
2. homogeneous X
3. sequential
4. simultaneous

7. What is a strength of selected-response items?


1. Selected-response items are easy and quick to write.
2. Selected-response items can be used to assess all constructs.
3. Selected-response items can be objectively scored. X

8. ___________ require examinees to complete a process or produce a project in a real-life simulation.
1. Projective tests
2. Performance assessments X
3. Selected-response tests
4. Multi-trait/multi-method tasks
9. A strength of constructed-response items is that they:
1. eliminate random guessing. X
2. produce highly reliable scores.
3. can be quickly completed by examinees.
4. eliminate “feigning.”

10. You are creating a test designed to assess a flute player’s ability. Which format would assess this
domain most effectively?
1. Performance assessment X
2. Matching
3. Selected-response
4. True-false

11. General guidelines for writing test items include:


1. the frequent use of negative statements.
2. the use of complex, compound sentences to challenge the examinees.
3. the avoidance of inadvertent cues to the answers. X
4. arranging items in a non-systematic manner.

12. When developing maximum performance tests, it is best to arrange the items:
1. from easiest to hardest. X
2. from hardest to easiest.
3. in the order the information was taught.

13. Including more selected-response and other time-efficient items can:


1. enhance the sampling of the content domain and increase reliability. X
2. enhance the sampling of the content domain and decrease reliability.
3. introduce construct irrelevant variance.
4. decrease validity.

14. In order to determine the number of items to include on a test, one should consider the:
1. age of examinees.
2. purpose of test.
3. types of items.
4. type of test.
5. All of the above X

15. __________ are reported as the most popular selected-response items.


1. Essays
2. Matching
3. Multiple-choice X
4. True-false

16. When writing multiple-choice items, one advantage of the ______________ is that it may present the problem in a more concise manner.
1. direct-question format X
2. incomplete sentence format
3. indirect question format

17. What would be the recommended multiple-choice format for the stem: ‘What does 10×10
equal?’
1. Best answer
2. Correct answer X
3. Closed negative
4. Double negative

18. Which multiple-choice answer format requires the examinee to make subtle distinctions among
distracters?
1. Best answer X
2. Correct answer
3. Closed negative
4. Double negative

19. Which of the following is NOT a guideline for developing true-false items?
1. Include more than one idea in the statement. X
2. Avoid using specific determiners such as all, none, or never.
3. Ensure that true and false statements are approximately the same length.
4. Avoid using moderate determiners such as sometimes and usually.
20. What is a strength of true-false items?
1. They can measure very complex objectives.
2. Examinees can answer many items in a short period of time. X
3. They are not vulnerable to guessing.

21. _________ scoring rubrics identify different aspects or dimensions, each of which is scored
separately.
1. Analytic X
2. Holistic
3. Sequential
4. Simultaneous

22. With a _______ rubric, a single grade is assigned based on the overall quality of the response.
1. analytic
2. holistic X
3. reliable
4. structured

23. One way to increase the reliability of short-answer items is to:


1. give partial credit.
2. provide a word bank.
3. use the incomplete sentence format with multiple blanks.
4. use a scoring rubric. X

24. What item format is commonly used in both maximum performance tests and typical response
tests?
1. Constructed-response
2. Multiple-choice
3. Rating scales
4. True-false X

25. For typical-response tests, which format provides more information per item and thus can
increase the range and reliability of scores?
1. Constructed-response
2. Frequency ratings X
3. True-false
4. Matching

26. Which format is the most popular when assessing attitudes?


1. Constructed-response
2. Forced choice
3. Frequency scales
4. Likert items X
5. True-false

27. What is a guideline for developing typical response items?


1. Include more than one construct per item to increase variability.
2. Include items that are worded in both positive and negative directions. X
3. Include more than 5 options on rating scales in order to increase reliability.
4. Include statements that most people will endorse in a specific manner.

28. Examinees tend to overuse the neutral response when Likert items use ________ and may omit
items when Likert items use __________.
1. an odd number of options; an even number of options X
2. an even number of options; an odd number of options
3. homogeneous options; heterogeneous options
4. heterogeneous options; homogeneous options

29. Which of the following item formats is difficult to score reliably and subject to feigning?
1. Constructed-response X
2. True-false
3. Selected-response
4. Forced choice

30. Guttman scales provide which scale of measurement?


1. Nominal
2. Ordinal X
3. Interval
4. Ratio
31. Which assessment would best use a Thurstone scale?
1. Constructed-response test
2. Maximum performance test
3. Speed test
4. Power test
5. Typical response test X

32. According to a study by Powers and Kaufman (2002) regarding the relationship between
performance on the GRE and creativity, depth, and quickness, what were the findings?
1. There is substantial evidence that creative, deep-thinkers are penalized by multiple-
choice items.
2. There was no evidence that creative, deep-thinkers are penalized by multiple-choice
items. X
3. There was a significant negative correlation between GRE scores and depth.
4. There was a significant negative correlation between GRE scores and creativity.

33. _________ are a form of performance assessment that involves the systematic collection and
evaluation of work products.
1. Rubrics
2. Virtual exams
3. Practical assessments
4. Portfolio assessments X

34. Distracters are:
1. rubric grading criteria.
2. the incorrect responses on a multiple-choice item. X
3. words inserted in an item intended to “trick” the examinee.
4. unintentional clues to the correct answer.

Chapter 7 Test Questions

1. Reliability relates to test ___________.


1. items
2. length
3. scores X
4. constructs

2. On a maximum performance test administered to 100 students, 60 students correctly answer item #4. The item difficulty index equals:
1. 0.40
2. 0.60 X
3. 40
4. 60

3. Item 10 on an exam had an item difficulty index equal to .00. From this information, one knows
that:
1. the item was very difficult and all students answered it incorrectly. X
2. the item was very easy and all students answered it correctly.
3. the item was of medium difficulty and half of the students answered it correctly.
4. nothing since there is not enough information provided.

4. Items with a difficulty index of _______ do not contribute to the measurement characteristics of a test.
1. 0.25
2. 0.50
3. 0.75
4. 1.0 X

5. Your employer wants a test that will help him to select the upper 30% of employees to consider for new positions. It would be beneficial for the item difficulty index to average around:
1. 0.20
2. 0.30 X
3. 0.70
4. 0.80
5. 0.90

6. On ___________ tests, measures of item difficulty largely reflect the position of the item in the test.
1. power
2. typical response
3. multidimensional
4. speed X

7. As a general rule, the authors of the chapter suggest that items with D values greater than ______ are acceptable.
1. 0.25
2. 0.30 X
3. 0.50
4. 0.70

8. The item-total correlation is typically calculated using which correlation?


1. Coefficient Alpha
2. Pearson product moment correlation
3. Point-biserial correlation X
4. Spearman rank order correlation

9. The item-total correlation for an item is 0.50. Which of the following interpretations is correct?
1. 0.5% of the total test variance is predictable from performance on the item.
2. 5% of the total test variance is predictable from performance on the item.
3. 25% of the total test variance is predictable from performance on the item. X
4. 75% of the total test variance is predictable from performance on the item.

10. What is the optimal Item Difficulty Index on a test consisting of only constructed response
items?
1. 0.00
2. 0.25
3. 0.50 X
4. 0.75
5. 1.00

11. What is the approximate optimal Item Difficulty Index for a multiple-choice item with 4 choices?
1. 0.00
2. 0.25
3. 0.50
4. 0.75 X
5. 1.00

12. A general recommendation is to use items with p values that have a range of approximately
_________ around the optimal value.
1. 0.05
2. 0.10
3. 0.15
4. 0.20 X
5. 0.25

13. While the item difficulty index is only applicable for _____________, the percent endorsement
statistic can be calculated for ____________.
1. maximum performance tests; constructed-response tests
2. maximum performance tests; typical-response tests X
3. typical-response tests; constructed-response tests
4. typical-response tests; maximum performance tests

14. On a reading comprehension exam, the proportion of examinees in the top group that answered
item 5 correctly equaled 0.70 and the proportion of examinees in the bottom group that
answered item 5 correctly equaled 0.10. What is the discrimination index for item 5?
1. 0.36
2. 0.49
3. 0.60 X
4. 0.80

15. When a test item has a discrimination index ______, it is considered to be acceptable by the
chapter authors.
1. greater than 0.30 X
2. less than 0.30
3. greater than 0.60
4. less than 0.60

16. A test item with a p = .70 and D = .45 is:


1. relatively easy and discriminates well. X
2. relatively easy and does not discriminate well.
3. relatively difficult and discriminates well.
4. relatively difficult and does not discriminate well.
5. of intermediate difficulty and does not discriminate well.

17. A test item with a p = .30 and D = .15 is:


1. relatively easy and discriminates well.
2. relatively easy and does not discriminate well.
3. relatively difficult and does discriminate well.
4. relatively difficult and does not discriminate well. X
5. of intermediate difficulty and does not discriminate well.

18. A test item with p = .80 and D = .40 is:


1. relatively easy and discriminates well. X
2. relatively easy and does not discriminate well.
3. relatively difficult and does discriminate well.
4. relatively difficult and does not discriminate well.
5. of intermediate difficulty and does not discriminate well.

19. A test item with p = .50 and D = .40 is:


1. relatively easy and discriminates well.
2. relatively easy and does not discriminate well.
3. relatively difficult and does discriminate well.
4. relatively difficult and does not discriminate well.
5. of intermediate difficulty and discriminates well. X

20. In a class of 100 students, 70 answer item #4 correctly and 30 answer it incorrectly. What is the
p value of this item?
1. 0.30
2. 0.40
3. 0.49
4. 0.70 X
21. An effective distracter should be selected by:
1. at least some examinees and demonstrate positive discrimination.
2. at least some examinees and demonstrate negative discrimination. X
3. no examinees and demonstrate zero discrimination.
4. all examinees and demonstrate positive discrimination.

22. Item difficulty and distracter analysis are related to:


1. Classical Test Theory. X
2. Factor Analytic Theory.
3. Item Characteristic Curve Theory.
4. Item Response Theory.

23. An Item Characteristic Curve is a graph with ____________ reflected on the horizontal axis and
______________reflected on the vertical axis.
1. ability; probability of a correct response X
2. probability of a correct response; ability
3. ability; probability of an incorrect response
4. probability of an incorrect response; ability

24. The one-parameter IRT model assumes that items only differ in:
1. the chance of a random correct answer.
2. difficulty. X
3. discrimination.
4. scree point.

25. The ___________ IRT model takes into consideration the possibility of an examinee with no
‘ability’ answering the item correctly by chance.
1. one-parameter
2. Rasch model
3. two-parameter
4. three-parameter X
26. On an Item Characteristic Curve, the point halfway between the lower and upper asymptotes is
referred to as the __________ and represents the difficulty of the item.
1. medial apex
2. modal beta
3. beta index
4. inflection point X
27. __________ illustrates the reliability of measurement at different points along the distribution.
1. Classical Test Theory
2. Item Characteristic Curve
3. Reliability Curve
4. Test Information Function X

28. In a class of 100 students, only 20 answered item 10 correctly. The p value for this item equals:
1. 0.20 X
2. 0.40
3. 0.64
4. 0.80

29. For item 21, pT is 0.80 and pB is 0.30. What is the value of D for this item?
1. 0.20
2. 0.25
3. 0.50 X
4. 0.70

30. The optimal p value for a selected-response item with two choices is approximately:
1. 0.55
2. 0.65
3. 0.75
4. 0.85 X

31. It is reasonable to present items with p = 1.0 at the beginning of an exam to:
1. enhance the overall variability of the exam.
2. enhance examinees’ confidence. X
3. enhance the measurement characteristics of the exam.
4. ensure examinees do not become overconfident.

32. A test item’s statistics are displayed in the following table. Based on this data, what is the most obvious problem with this item?

Item #5 (p = 0.50, D = 0.14)

Option:                   A*    B     C     D
Number in top group:      17    9     0     4
Number in bottom group:   13    6     2     9

* correct answer

1. All distracters are performing well but the item is too easy.
2. Distracter B demonstrates positive discrimination and should be revised. X
3. Distracter D demonstrates negative discrimination and should be revised.
4. No problem, retain the item as is.

33. A test analysis is displayed below. Based on this information, what is the most obvious problem?

Item #7 (p = 0.40, D = 0.54)

Option:                   A     B     C*    D
Number in top group:      0     1     10    4
Number in bottom group:   4     3     2     6

* correct answer

1. All distracters are performing well but the item is too easy.
2. Distracter B demonstrates negative discrimination and should be revised.
3. Distracter A demonstrates negative discrimination and should be revised.
4. No problem, retain the item as is. X

34. A test analysis is displayed below. Based on this information, what is the most obvious problem?

Item #3 (p = 0.40, D = 0.13)

Option:                   A     B     C     D*
Number in top group:      6     1     1     7
Number in bottom group:   2     4     4     5

* correct answer

1. All distracters are performing well.
2. Distracter A displays positive discrimination and should be revised. X
3. Distracter B displays positive discrimination and should be revised.
4. No problem, retain the item as is.

35. Which of the following was described as a qualitative approach to item analysis?
1. Set the test aside and review at a later time.
2. Have a colleague review the test.
3. Allow examinees to provide feedback.
4. All of the above. X

36. Which of the following statements regarding p values is correct?


1. They can range from -1.0 to 1.0.
2. Items with a high p value are more difficult.
3. Items with a low p value are more difficult. X
4. Items with a p value of 1.0 were answered incorrectly by all students.

37. Items with a value of _____ and ______ do not contribute to the variability of test scores.
1. -1.0; 0.00
2. 0.00; 1.0 X
3. -1.0; 1.0
38. Item difficulty indexes on mastery tests tend to be ________ item difficulty indexes on tests
designed to produce norm-referenced scores.
1. larger than X
2. lower than
3. equal to

39. The optimal p value for a true-false item is approximately:
1. 0.55
2. 0.65
3. 0.75
4. 0.85 X

40. How can having many items with p values close to 1.0 reduce reliability?
1. The total score is transformed in an oblique manner.
2. The total score is an area transformation.
3. This results in a restricted range of scores. X
4. This results in an artificially inflated validity coefficient.