
PSYCHOLOGICAL ASSESSMENT OUTLINE

PART 1
I. Psychological Assessment and Psychological Testing
II. Types of Tests
III. History
IV. Statistics
a. Scales of Measurement
b. Describing Data (Graphs)
c. Normal Curve
d. Standard Scores
e. Correlation and Regression
V. Of Tests and Testing
a. Assumptions
b. Norms
VI. Reliability
VII. Validity
VIII. Test Development
a. IRT
b. CTT
IX. Test Administration
X. Bias in Testing

PART 2
I. Theories of Intelligence
II. Stanford-Binet 5
III. Wechsler
IV. Personality Assessment
V. Personality Tests
VI. Neuropsychological
VII. Corporate

PSYCHOLOGICAL ASSESSMENT AND PSYCHOLOGICAL TESTING
● Psychological Measurement
o Psychometrician
● Psychological Assessment
o Types
▪ Typical
▪ Collaborative
▪ Therapeutic
▪ Dynamic
▪ Alternate
o Process
▪ Referral Question
▪ Prepare Tools
▪ Assessment
▪ Psychological Report
▪ Feedback
o Tools
▪ Psychological Testing
● Psychological Testing vs. Psychological Assessment
● Variables
o Content
o Format
o Administration Procedures
o Scoring Procedures
▪ Score
▪ Scoring
▪ Raw score
o Interpretation
o Technical Quality
▪ Reliability
▪ Validity
▪ Utility
● Test Protocol: test materials
● Testing Protocol: standard procedures
▪ Interview
▪ Portfolio
▪ Case History
● Psychological Autopsy
▪ Observation
● Behavioral
● Naturalistic
▪ Roleplay Tests
▪ Computers
● CAPA (Computer Assisted Psychological Assessment): assists the test user, not the test taker.
● CAT (Computer Adaptive Testing)
● Computers as Administrators
o Local processing
o Central processing
o Teleprocessing
● Computers when Reporting
o Simple Scoring Report
o Extended Scoring Report
o Interpretative Scoring Report
o Consultative Scoring Report
o Integrative Report
o Parties Involved
▪ Test Developer/Authors
▪ Test Publisher
▪ Test Reviewers
▪ Test Sponsors
▪ Test User
▪ Test Taker
▪ Society at Large

Types of Evaluation
● Diagnostic: before instruction
● Formative: during and after instruction
● Summative: at the end of a specified time

TYPES OF TESTS

According to Number
● Individual Tests
● Group Tests

According to Variable Being Measured
● Ability Tests/Test of Maximal Performance
o Achievement Test
o Aptitude Test
o Intelligence Test
Format
✓ Speed Tests
✓ Power Tests
➢ Alternate-Choice Format
➢ Free-Response Format
● Personality Tests/Test of Typical Performance
o Structured Test
▪ Self-Report
o Unstructured Test/Projective Test
▪ Example: Essays

According to Qualifications
● Level A:
● Level B: Psychometrician (Group Intelligence and objective tests)
● Level C: Psychologist (Projective Tests)

According to Use
● Classification: based on two or more tests
○ Placement: where they will perform better; based on one score
○ Screening: potential clients/people
○ Certification
○ Selection: choosing the most qualified
● Diagnosis and Treatment
● Self-Knowledge
● Program Evaluation
● Research
● Treatment has nothing to do with assessment.

HISTORY

STATISTICS
● Descriptive Statistics: describe
● Inferential Statistics: inferences (logical)
● Measurement: application of rules for assigning numbers to objects.
● Scales:
o Continuous Scale: approximation (usually involves an error)
o Discrete Scale: exact
● Properties of Scales
o Magnitude
o Equivalent Interval
o Absolute Zero
● Scales of Measurement
o Nominal
o Ordinal
o Interval
o Ratio
IQ = ordinal scale (Cohen); interval scale (Kaplan)

Describing Data
● Raw Score
● Distribution: data is arranged/summarized
o Frequency Distribution
▪ Simple Frequency Distribution
▪ Grouped Frequency Distribution/Test-Score Intervals/Class Intervals: has upper and lower limits
● Percentile Rank
● Percentage: proportion
● Percentile: location
o Percentile rank = (number of students beaten ÷ total number of students) × 100

Describing Distributions
● Graph
o Histogram: continuous trend
o Bar Graph: for categorical data
o Frequency Polygon/Line Graph
● Measures of Central Tendency
o Mean/Arithmetic Mean/Average
o Median: middle score, located at position (N + 1) ÷ 2
o Mode: most frequent score
▪ No mode (e.g., 1,1,2,2,3,3,4,4)
▪ Unimodal
▪ Bimodal
▪ Multimodal
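A quick worked example of the percentile-rank and median formulas above, using made-up numbers: if a student outscores 15 of the 20 students in a class, the percentile rank is (15 ÷ 20) × 100 = 75. For the median of those 20 ordered scores, the middle position is (20 + 1) ÷ 2 = 10.5, so the median is the average of the 10th and 11th scores.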

● Measures of Variability
o Variability: how scattered or dispersed the scores are
o Range: difference between the highest and lowest scores
▪ HS − LS = Range
o Interquartile and Semi-interquartile Ranges
▪ Quartile: 3 dividing points; 4 quarters
▪ Decile: 9 dividing points; 10 parts
o Standard Deviation
o Variance
o Skewness
o Kurtosis

Normal Curve
o Abraham DeMoivre
o Marquis de Laplace
o Karl Friedrich Gauss (named it the "Laplace-Gaussian" curve)
o Karl Pearson (finally named it the Normal Curve)

Standard Scores
● z score
● T score
o derived by W.A. McCall (to honor E.L. Thorndike)
● Stanine
● Deviation IQ
● GRE/SAT
● Linear Transformation
o New Score = SD(z) + Mean
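A worked sketch of the linear transformation above, using the conventional score distributions (T score: mean 50, SD 10; deviation IQ: mean 100, SD 15; these values are assumed, not stated in these notes): a raw score of 65 on a test with mean 50 and SD 10 gives z = (65 − 50) ÷ 10 = 1.5; the corresponding T score = 10(1.5) + 50 = 65, and the corresponding deviation IQ = 15(1.5) + 100 = 122.5.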

Correlation and Regression


● Correlation: degree (strength) and direction of a relationship
o Correlation Coefficient/ Coefficient of Correlation

Direction of Correlation
o Positive Correlation
o Negative Correlation
o No Correlation (zero)

Strength of Correlation

● Regression: used to make predictions about scores on one variable from knowledge of scores on another variable.
o Scatter Diagram/Scattergram/Scatter Plot/Bivariate Distribution
o Regression Line:
▪ Best-fitting straight line through a set of points in a scatter diagram.
▪ The running mean or the line of least squares in two dimensions or in the space created by two variables.
▪ Found by using the Principle of Least Squares: minimizes the squared deviations around the regression line.
o Regression Coefficient/slope of the regression line: how much Y changes for each one-unit increase in X.
o Covariance: how much two scores vary together.
o Intercept: value of Y when X is 0.
o Residual: difference between the actual score and the predicted score
▪ Actual Score − Predicted Score = Residual
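A small worked example of these regression terms, with an assumed (not given) regression line Y′ = 5 + 2X: the intercept is 5, and the slope (regression coefficient) of 2 means Y is predicted to rise 2 points for every 1-point increase in X. For X = 10, the predicted score is Y′ = 5 + 2(10) = 25; if the actual Y is 28, the residual is 28 − 25 = 3. The least-squares regression line is the one that makes the sum of such squared residuals as small as possible.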
● Pearson r: 2 continuous variables
● Spearman Rho: 2 ordinal-scale variables
● Kendall's: 3 or more sets of ranks
● Biserial: 1 artificial dichotomous & 1 continuous variable
● Point-Biserial: 1 true dichotomous & 1 continuous variable
● Phi Coefficient: 2 dichotomous variables (at least 1 true dichotomous)
● Tetrachoric Correlation: 2 artificial dichotomous variables
● Multivariate Analysis
o Relationship among three or more variables
o Multiple Regression: studies the linear relationship of many predictors and one outcome, as well as the relationships among the predictors.
▪ For continuous variables only
o Discriminant Analysis: when we want to find the linear combination of variables that provides maximum discrimination between categories.
▪ For nominal outcome variables only (e.g., when the variables lead to a passed/failed decision)
o Factor Analysis: studies the interrelationships among a set of variables without reference to a criterion.
▪ Principal Components/Factors
▪ Factor Loadings
▪ Methods of Rotation:
● Oblique Rotation
● Orthogonal Rotation
● Meta-Analysis
o Analysis of data from several studies
o Combines the effect sizes of all the studies
o Effect Size: size of association independent of the sample size

Issues in Correlation
● Residual
● Standard Error of Estimate
o Standard deviation of the residuals
o Measure of the accuracy of prediction
o Prediction is most accurate when this is small
● Coefficient of Determination
o Coefficient of correlation squared (r^2)
o Suggests the percentage of variance in one variable accounted for by the other variable
● Coefficient of Alienation
o Measure of non-association between two variables
o 1.0 − coefficient of determination
o Suggests the percentage not accounted for by the other variable
● Shrinkage
o Amount of decrease observed in the explained variance when a regression equation created for one population is used on another population.
● Cross Validation
o Validating the predicted scores against the actual scores
● Correlation-Causation Problem
o Correlation does not necessarily mean causation
● Third Variable Explanation
o Influence of external factors on the result
● Restricted Range
o If variability is restricted, then significant correlations are difficult to find.
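Worked arithmetic for the two coefficients defined above, using an assumed correlation of r = 0.70: the coefficient of determination is 0.70^2 = 0.49, i.e., about 49% of the variance in one variable is accounted for by the other; following the definition given here, the coefficient of alienation is then 1.0 − 0.49 = 0.51, the share not accounted for.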
RELIABILITY: A gauge of how much error entered the test
· Reliability: Consistency
· Error: Inconsistency
· r = 0.7-0.9 (Research)
· r = 0.9-0.95 (Clinical Decision)
● Inter-item Consistency: degree of correlation among all the items
○ For assessing the homogeneity of the test (whether the items in a scale are unifactorial)
○ Homogeneous items = high internal consistency
○ Heterogeneous items = low internal consistency
● Average Proportional Distance (APD): a measure used to evaluate the internal consistency of a test by focusing on the degree of difference between item scores.
· Test-Retest Reliability: Time Sampling Reliability
o Error Variance: corresponds to the random fluctuation of performance from one test session to another due to the time interval
o For static characteristics
o Carry-Over Effects: the 1st testing affects the 2nd testing (due to a short interval)
o Practice Effects
· Parallel-Forms Reliability/Item Sampling Reliability
o Both tests should have the same number of items, type, content, and level of difficulty.
o For static characteristics
o Best reliability to use out of all
· Internal Consistency
o Administered only once
o For dynamic characteristics
o Unidimensional: measuring only one construct
o High inter-correlation among the items
o Split-Half Reliability
§ Splitting the test into two (odd/even split)
§ Problem: few items = low reliability
§ Solution: Spearman-Brown Formula (worked example at the end of this section)
§ Spearman-Brown Prophecy: computes how many items you need to add
o Cronbach Alpha
§ For unequal variances
§ Provides the lowest estimate (lower bound) of reliability
§ Used when the test uses a Likert scale
o Kuder-Richardson
§ If items are dichotomous (scored right or wrong)
§ KR-20: varying levels of difficulty
§ KR-21: same level of difficulty
· Inter-Rater Reliability
o Evaluators
o Kappa Statistics
§ 2 raters (Cohen); 3 raters (Fleiss)
Ø If reliability is low
o Increase the number of items
o Factor analysis and item analysis
o Correction for Attenuation
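A worked sketch of the Spearman-Brown correction referred to above. The formula itself is not written out in these notes; the standard form is r_new = (n × r) ÷ [1 + (n − 1) × r], where n is the factor by which the test is lengthened. For an assumed split-half correlation of 0.60, correcting to full length (n = 2) gives r_new = (2 × 0.60) ÷ (1 + 0.60) = 0.75, which is why a short test's split-half figure understates its full-length reliability.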

Validity
● Content Validity
● Construct Validity
○ Convergent
○ Divergent
● Criterion-Related
○ Concurrent
○ Predictive

Test Development
1. Test Conceptualization
○ Pilot Work
2. Test Construction
○ Scaling
■ Types
● Age-based
● Grade-based
■ Methods
● Summative
● Likert
● Method of paired comparison
● Comparative scaling
● Categorical scaling
● Guttman scale
○ Scalogram analysis
○ Writing items
■ Item format
● Selected-response format
● Constructed-response format
● Multiple choice format
● Matching item
● Binary-choice item
○ True or false
○ Agree or disagree
○ Yes or no
● Completion item/short-answer item
● Essay item
● Omnibus Spiral Format: items are arranged from easy to hard
○ Scoring items
■ Cumulative scoring
■ Class scoring/category scoring
■ Ipsative scoring
3. Test tryout
○ Tryout sample: should not be fewer than 5, and preferably more than 10, test takers per item
4. Item Analysis
○ Item-Difficulty Index (for achievement tests)/Item-Endorsement Index (for personality tests)
■ The larger the p = the easier the item
■ Optimal Item Difficulty
● [(1 ÷ number of choices) + 1.00] ÷ 2
○ Item-Reliability Index
■ The higher the index = the greater the internal consistency
○ Item-Validity Index
■ The higher the index = the greater the criterion-related validity
○ Item-Discrimination Index
■ Discriminates high scorers from low scorers
■ d = (U − L) ÷ n, where U = number of high scorers who got the item right, L = number of low scorers who got it right, and n = number of test takers in each group (see the worked examples after this list)
■ The higher the d = the more the item can discriminate high scorers from low scorers.

5. Revision
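Worked examples for the item-analysis formulas above, with made-up numbers: for a four-option multiple-choice item, the optimal item difficulty is [(1 ÷ 4) + 1.00] ÷ 2 = 1.25 ÷ 2 ≈ .63. For the discrimination index, suppose the upper and lower scoring groups each contain n = 30 examinees, U = 24 of the upper group answer the item correctly, and L = 12 of the lower group do; then d = (24 − 12) ÷ 30 = 0.40, a positive value showing the item favors high scorers.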
