
Quantitative Research for the Behavioral Sciences
WHY STUDY RESEARCH METHODS?
 To learn about/predict behavior of people and objects
 Must be able to sort/evaluate research/info others provide
 Need to evaluate other programs/policies for success
 Will make decisions (including lifestyle ones) based on research of others
 Need to read/report research for classes
 Research is about answering questions using the scientific
method
End Introduction
------------------------------------------------------------------------------------------------------------
CHAPTER 1

Nature and Kinds of Research


 Quantitative research is a systematic way of answering
questions with different purposes
 Descriptive/observational – describe event, situation (snapshot of what is
- U.S. Census)
 Exploratory – investigate situations not clear or familiar (moves beyond
description to explaining or predicting, often correlational)
 Theoretical (pure or basic research) – usually cause/effect, to answer why
questions
 Applied (evaluation research) – solve real world problems, evaluate
programs (either summative or formative evaluations)
 Problems with applied include poor controls; guinea pig effect; measuring problems

Research Paradigms
 Quantitative research
 Focuses on quantifying or counting variables; researcher is objective and removed from
involvement
 Examples: The amount of marketing done by colleges; employers’ use of different
operating systems
 Qualitative research
 Focuses on beliefs/feelings; researcher deeply involved with participants
 Examples are: Year long shadowing of executive to provide picture of that job;
unstructured interviews on experience in grad school to identify contributors to success
 Qualitative research requires more expertise in research methods to prevent
the researcher’s bias from contaminating results

Video
------------------------------------------------------------------------------------------------------------
Chapter 2

THE NATURE OF SCIENCE


(behavioral scientists study human behavior)
 Goals of science are to describe, predict, explain,
control phenomena
 Description (observe and record in a measured way, then
describe situation)
 Prediction (after observe, try to predict what might happen –
correlational research – see what variables move together)
 Explanation (determine why something happens)
 Control (make things happen, modify behavior)

WAYS OF KNOWING
(these evolved into scientific method)
 Traditionalism (authority based)
 Rationalism (reasoning as source of knowledge)
 Deductive reasoning (general to specific – all dogs have fleas,
Max is a dog, therefore Max has fleas)
 Uses syllogism (general principle and specific assumption to
reach a specific conclusion)
 Foundation of deductive reasoning is valid assumptions (Do all
dogs have fleas?)

WAYS OF KNOWING
(Falsification)
 Empiricism (problems w/ valid assumptions so move to knowledge from
experience – observe events and draw conclusions)
 Science is empirical (observation carries more weight than just reasoning)
 Tries to escape making assumptions that may not hold true by checking beliefs against
observations
 Unscientific empiricism – observation that is not systematically tested often leads to
erroneous conclusions or false generalizations
 Falsification avoids the problems with causal reasoning
 Maintains that no theory can ever be proved
 Show beliefs are false by finding observations that disagree w/ belief
 Test beliefs after they are created – never believe something is true – never say
something is true/proven (always subject to more testing)

System of Falsification
 Statements to be tested must be falsifiable
 Meaning there is some observation that would show them to be false
 Cannot be all-inclusive, i.e., cover every possible outcome (either it will rain or it won’t)
 Cannot be definitions (all men are male)
 Cannot be vapor statements (so vague that there is no specific prediction – it might rain
soon)
 If a specific event shows the theory is not true, then it is falsified
 If a specific event shows that it is true, then the theory is confirmed or supported
 Repeated attempts that result in no falsification lead us to believe the
theory is true, but it is never proven since the next observation might result in a
falsification
 It is always possible that there was an error in methods that led to a wrong decision (so
only confirm or disconfirm, DO NOT PROVE)

Scientific Method
 Science consists of repeated attempts to falsify theories
 If falsified, then modified or replaced but if confirmed, then let the theory stand, but
still keep trying to falsify it
 Scientific method involves these steps:
1. Observe some aspect of the universe.
2. Develop a tentative description, called a hypothesis, that is consistent with
what you have observed.
3. Use the hypothesis to make predictions.
4. Test those predictions by experiments or further observations and modify the
hypothesis in the light of your results.
5. Repeat steps 3 and 4 until there are no discrepancies between theory and
experiment and/or observation.

End Chapter 2
------------------------------------------------------------------------------------------------------------

CHAPTER 3

Ethics of Research
 Beliefs about what is right/wrong
 Questionable activities have greatest potential for ethical violations
 APA Guidelines (1982) – no physical OR mental harm to
subjects (came from Zeller, Milgram studies)
 Must get informed consent before subjects participate (they must be told
of risks and that participation is voluntary)
 Anonymity v. confidentiality (subjects told in cover letter)
 Buckley Amendment guarantees a right to privacy (not observing behavior
one would not expect to be observed)
 At Tusculum, must complete Ethics in Research form (based on APA) and
have it signed by supervisor and instructor

Covert versus Overt Observation
 Overt (subjects know they are being observed and why)
 Covert or hidden observation (most ethical problems here)
 Urinal filming example
 If told of an illegal act in the past, no duty to reveal, but there is for
future events
 Deception can be active (tell lies) or passive (not tell whole truth)
 Passive is often needed in research to avoid Hawthorne effect
 Subjects must be debriefed afterwards and should:
 find it acceptable once purpose is revealed
 be allowed to refuse use of information gathered
 be subjected to follow-up and remedy if needed

COERCION OF SUBJECTS
(making people do something they don’t want to do)
 Research carried out on the powerless which may be
indirect coercion
 Students, homeless, prisoners, military
 Once subjects start, they feel compelled to complete
(i.e. Orne study on adding digits)
 Must be informed they can quit at any time
 Researcher must stop them if they look uneasy

PLAGIARISM
(stealing another’s ideas and presenting them as yours)
 A grave ethical lapse
 Plagiarism has resulted in dismissal from school and
even jail time!
 Biggest problems are
 Using an idea in your report without giving full credit
 Publishing the work of someone else as yours (Dr.
Hewish/Bell pulsar discovery – Nobel prize)
 Fabricating data (book discusses Sir Cyril Burt, but recently his
records were released)

Evaluating Ethical Issues


 Cost benefit analysis – weigh the benefits of the research
against the cost, but hard to judge long term cost/effects (Milgram
studies … what if help prevent another holocaust?)
 National Research Act of 1974
 Ethics oversight committees oversee and investigate complaints
 Researchers have a moral obligation to stop unethical things
 In Tusculum’s program your proposal must be approved before starting
data collection (Ethics in Research signed) and minors’ parental permission must
be gained
End Chapter 3
------------------------------------------------------------------------------------------------------------

CHAPTER 4

Theory of Measurement
 What is measurement?
 Measure a property, usually with a number that tells something
about the property
 Measurement is the relationship between a label and an object
 Ruler tells the relationship between height (property) and a label (66
inches); if we know label, know height
 All measurements tell the relationship between property of objects/events
and labels (intelligence is a property, while an IQ score is a label)

Constructs and Observations


 Constructs are properties that cannot be observed, can only
observe their effects (constructed)
 Abstract properties (not concrete); cannot touch or see constructed
properties
 Ex. anxiety, intelligence, attitude
 Many behavioral science observations measure theoretical
constructs
 By observing indicators of their presence
 Empirical observations provide info about invisible constructs

VARIABLES AND VARIABILITY


(a variable is anything that can vary)
 Properties that have different measurements (take on different values) are
variables
 Ex. height, weight, intelligence, performance, gender, attitude, honesty
 Variability refers to differences in measurements of a property
 When measure property (e.g. honesty on a scale from 1=low to 10=high),
the measure is made up of three parts
 True value (the person’s honesty level)
 Irrelevant values (might measure intelligence in figuring out the scale)
 Random error (things that just happen, such as guesses)
 A good measure has little variability because it reduces or eliminates
irrelevant values (aka systematic errors or bias) and random errors
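
A minimal sketch of this three-part breakdown, using hypothetical numbers (the honesty value, bias, and noise level below are illustrative assumptions, not from the text):

```python
import random

def observed_score(true_value, bias=0.5, noise_sd=1.0):
    """One measurement = true value + systematic bias (irrelevant value) + random error."""
    return true_value + bias + random.gauss(0, noise_sd)

# Hypothetical person whose true honesty is 7 on the 1-10 scale
scores = [round(observed_score(7), 2) for _ in range(5)]
print(scores)  # scores scatter around 7.5: the bias shifts them, the random error spreads them
```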

Measuring a
Person’s Intelligence
 How intelligent they are
 Whether the test scorer likes them
- How to overcome this problem? *
 How they feel; not paying close attention or marking answer
sheet correctly; room is too hot
- How to overcome these problems? *

(Use objective scoring; give standard instructions and, if the person feels poorly, have them take it
on another day; ask many questions, not just a few; standardize testing conditions – 75-degree room)

EVALUATING MEASURES
 Good measures have little variability because they
reduce or eliminate both random errors and systematic
errors (biases or irrelevant values) and measure the true
value
 The measurement is not the construct…just indicates it is there
 Good measures start with operational definitions
Operational Definitions-Part 1
 Specify for readers how a measurement was obtained
 Give empirical observations collected to measure
construct
 Researcher decides how to define the construct (For
example…measure intelligence by how quickly solve a
puzzle, or by GPA, or by annual salary)
Operational Definitions-Part 2
 Constructs are subjective (different people measure them differently), but
operational definitions are objective (set rules so everyone measures the same
way)
 If intelligence is defined as how large your annual salary is, there may be
disagreement about the definition, but if everyone measures annual salary the
same way, this eliminates variation in measuring
 Good research relies on good operational definitions, frequently by using
converging operations…several definitions converge on the construct, so we have
more faith that we are measuring the true value - For intelligence: measure puzzle
solution time, GPA, and annual salary
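
A minimal sketch of converging operations, assuming three hypothetical operational definitions of intelligence (puzzle time, GPA, salary) that are standardized and averaged; the data and the equal weighting are illustrative assumptions:

```python
from statistics import mean, stdev

def z_scores(values):
    """Convert raw values to z-scores so different units can be combined."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical data for five people: puzzle time (seconds, lower = better), GPA, salary
puzzle_time = [120, 90, 150, 60, 110]
gpa = [3.1, 3.8, 2.9, 3.9, 3.4]
salary = [42000, 55000, 39000, 61000, 48000]

# Reverse-score puzzle time (faster = higher), then average the three z-scores
z_puzzle = [-z for z in z_scores(puzzle_time)]
composite = [mean(trio) for trio in zip(z_puzzle, z_scores(gpa), z_scores(salary))]
print([round(c, 2) for c in composite])
```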

LEVELS OF MEASUREMENT
(Lower/Categorical)
 Nominal (name only categories, no order)
 Names as symbols of properties, no amounts…Numbers may be used, but only as ID
tags such as 1=male, 2=female
 Math functions on these name tags make no sense (report frequency/percent, mode);
Examples: gender, race
 Ordinal (ordered categories)
 Specified order, but not distance
 Tells quantity only relative to others (more/less)
 Numbers only indicate relative position, not quantity, so only limited math applies (adding or
subtracting ranks is not meaningful)
 Simply ranks (report frequency/percent, median, range); Examples: Place in finishing in
race; arrival to class (first, second, etc.)

LEVELS OF MEASUREMENT
(Higher/Continuous)
 Interval (real numbers, in order, according to quantity)
 Zero point is arbitrary, can have negative numbers; Most math functions are okay, but
ratios don’t make sense
 Report mean, median, mode, range, SD; Examples are temperature (C or F); test scores;
GPA
 Ratio (same as interval, plus true zero)
 Absolute zero indicates lack of the property, no negative numbers; All math functions are
appropriate, including ratios (twice as much, half as much)
 Report mean, median, mode, range, SD; Examples are income (in dollars); number of
children
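
A short sketch of which summary statistics fit each level, using hypothetical data and the Python standard library:

```python
from statistics import mode, median, mean, stdev
from collections import Counter

gender = ["male", "female", "female", "male", "female"]        # nominal
race_place = [3, 1, 2, 5, 4]                                   # ordinal (place in finishing a race)
test_score = [72, 85, 90, 66, 78]                              # interval
income = [30000, 45000, 52000, 28000, 61000]                   # ratio

print(Counter(gender), mode(gender))        # nominal: frequencies and mode only
print(median(race_place))                   # ordinal: median (and range) make sense
print(mean(test_score), stdev(test_score))  # interval: mean/SD okay, but ratios do not apply
print(income[4] / income[3])                # ratio: "about twice as much" is meaningful
```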

HOW GOOD IS MEASUREMENT - I


(validity and reliability)
 Reliability = consistency
 Unreliable measures give a different answer each time you measure
(court witness tells different story each time)
 Random error affects reliability
 Methods to test reliability
 Test/retest (measure, wait, measure again; correlate tests)
 Split half (divide test in half and correlate two sets)
 Alternate forms (different test versions; correlate tests)
 Inter- and intra-rater (consistency among judges – correlate their scores;
internal consistency of one judge – correlate that judge’s repeated scores)
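
A minimal sketch of two of these reliability checks (test/retest and split-half with the Spearman-Brown correction), using a hand-rolled Pearson correlation and hypothetical scores:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of scores."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Test/retest: the same 6 people measured twice (hypothetical scores)
time1 = [10, 14, 9, 18, 12, 16]
time2 = [11, 13, 10, 17, 13, 15]
print("test/retest r:", round(pearson_r(time1, time2), 2))

# Split-half: correlate odd-item and even-item totals, then apply Spearman-Brown
odd_half = [5, 7, 4, 9, 6, 8]
even_half = [6, 7, 5, 8, 6, 8]
r_half = pearson_r(odd_half, even_half)
print("split-half (Spearman-Brown):", round(2 * r_half / (1 + r_half), 2))
```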
HOW GOOD IS MEASUREMENT - II
(validity and reliability)
 Validity = soundness (on target)
 Invalid measures miss the target (jetlag/geography text example)
 Systematic error (bias) affects validity
 A measure can be reliable, but still not valid (court witness tells same lie
repeatedly)
 Methods to test validity
 Facial (On the surface does it measure the construct?)
 Criterion (Does it accurately indicate the criterion?)
 Concurrent (now) and Predictive (future)
 Content (Are all aspects of construct covered?)
 Construct (Does it give similar results to other measures of this
construct?)

Reliability versus Validity


 Which is harder to determine – validity or reliability?

 Can you have validity without reliability?

 How about reliability without validity?

Reliability versus Validity


 Which is harder to determine – validity or reliability?
VALIDITY

 Can you have validity without reliability? NO

 How about reliability without validity? YES

End Chapter 4
 See Checkpoints
------------------------------------------------------------------------------------------------------------

Chapter 5

ISSUES IN MEASUREMENT
(Sampling)
 What/who to measure depends on what you will do with the
results
 Sampling
 A population is every member of a well-defined group
 A sample is a small group of examples chosen from the population for
measuring
 Sampling techniques (or methods) are the procedures used to select the
sample out of the population
 Generalization – the process of taking measurements
on samples and applying the results to the population
Generalizations
 A sample is representative when measurements of it apply to
the population
 The sample must come from the population to represent it
 In 1936 Literary Digest wrongly predicted Landon over FDR (based on a
sample of its readers); Gallup (a newcomer) got it right (by using representative
random sample)
 But in 1948 Gallup used old census data for sampling and predicted
Dewey as winner over Truman (this led him to refine his sampling techniques)
 The sampling method is how you determine the
representativeness of the sample

SAMPLING METHODS
(Probability versus Non-Probability)
 Non-probability means that not every population member has an equal
(or known) chance to be in the sample
 These are NOT representative samples, so cannot generalize (cannot
determine probability of being selected for the sample)
 Examples: Haphazard/convenience; Self-selection/ volunteers; Quota
sampling (Gallup in 1948); Snowball
 Probability (random) sampling gives every population member
an equal and known chance or probability of being selected

SAMPLING METHODS
(Probability)
 Simple random (numbers/names in a hat)
 Requires list of everyone in the pop. as a sample frame
 Can use a table of random numbers
 Systematic or Nth (N = pop. size ÷ sample size)
 Take a randomly ordered list, start at randomly chosen point
 Stratified (divide population into strata/levels, select from each
stratum)
 Proportional or disproportional
 Cluster (use existing groups/clusters as sample units)
 For geographically dispersed pop. or when an experimental treatment can
only be applied to a group (multi-cluster too)
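
A minimal sketch of simple random, systematic (Nth), and proportional stratified selection with the standard library; the sampling frame and strata below are hypothetical:

```python
import random

population = [f"person_{i}" for i in range(1, 101)]   # hypothetical sampling frame of 100

# Simple random: names in a hat
simple = random.sample(population, 10)

# Systematic / Nth: N = population size / sample size, start at a random point
n = len(population) // 10
start = random.randrange(n)
systematic = population[start::n]

# Stratified (proportional): sample within each stratum
strata = {"freshman": population[:60], "senior": population[60:]}
stratified = [p for name, group in strata.items()
              for p in random.sample(group, max(1, len(group) // 10))]
print(len(simple), len(systematic), len(stratified))
```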
SAMPLE SIZE
 Must be large enough to be representative
 The more variation in the population, the larger the sample
needs to be
 The more subtle the effect that must be detected, the larger the sample
needs to be
 Can calculate sample accuracy (e.g., ±3%)
 Research/statistics books often provide tables
 General guide is 10% of the population, with a
minimum of 30, a maximum of 1,000; but small populations
(<100) include everyone (census)
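
A small sketch of this rule of thumb only (10% of the population, minimum 30, maximum 1,000, census below 100); it follows the slide's guideline, not a formal power or accuracy calculation:

```python
def suggested_sample_size(population_size):
    """Rule of thumb from the notes: 10% of the population, min 30, max 1,000;
    small populations (<100) include everyone (a census)."""
    if population_size < 100:
        return population_size
    return min(1000, max(30, round(0.10 * population_size)))

for pop in (80, 250, 5000, 50000):
    print(pop, "->", suggested_sample_size(pop))   # 80 -> 80, 250 -> 30, 5000 -> 500, 50000 -> 1000
```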

Types Of Measurement
 Behavioral measures (observe behaviors)
 Often only way to avoid social desirability bias
 Some behaviors cannot be observed without changing the behavior
(guinea pig/Hawthorne effect)
 Surveys or self observation (election/opinion polls,
marketing/attitude surveys)
 Questionnaires designed around a single research question
 Respondents must have and be willing to share the information the
researcher is after

Developing Surveys
 Write many individual questions and pretest them (have others
read them and give feedback)
 Must decide on open ended (easy to write, hard to analyze) versus closed
ended (pigeon hole responses) or combination of both
 Once questionnaire is finished, have it reviewed for
facial/content validity
 Pilot test questionnaire and procedures to collect data
 See question examples, p. 107 and handouts

Survey Administration
 In person, on telephone, by mail (less personal bias here)
 Essential that all respondents be asked the same questions in same
manner to reduce variation
 Do not reveal purpose of study in cover letter unless must (to avoid
acquiescence bias)
 Non-response is a problem and must be addressed (essentially
self selection bias and those who feel most strongly are most likely to
respond)
 Random non-response is okay, but systematic is a problem (best to get
two-thirds back, but at least half)
 Can do follow-up calls (compare respondents to non-respondents)
 Can compare respondents’ characteristics to known population factors

Interviews
 Structured or unstructured
 Often do a few unstructured and then develop
structured interview from results
 Interviewing (in person/telephone)
 Stick to standard 9am-9pm hours
 Talk, look, act like those you are interviewing
 Be polite and professional
 Can tape interviews (ask permission and tell them they can
shut off recorder if desired – they often forget it’s there)

UNOBTRUSIVE RESEARCH
(those observed are not aware of it)
 Archival research (existing databases/records)
 Must establish accuracy of info
 Must have info in the desired form (unit of measure)
 Tusculum requires that you collect your own data
 Content analysis (systematically analyze the content of recorded info)
 Count the number of words/ideas (Pres. Clinton’s speech, entitled “The Era of Big
Government is Over” included 39 new government programs)
 Content analysis of school board meetings showed the butterfly effect of random events
(no garbage collection)
 Trace measures (deposits and withdrawals) – museum tiles
 Hidden observations (beware of privacy laws) – urinal example
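
A minimal content-analysis sketch: counting predefined keywords in a piece of recorded text; the snippet and keyword list are hypothetical stand-ins, not the Clinton speech:

```python
import re
from collections import Counter

# Hypothetical transcript snippet and coding categories chosen ahead of time
transcript = "We will create a new program. The program funds new training and new research."
keywords = ["new", "program", "research"]

words = re.findall(r"[a-z']+", transcript.lower())
counts = Counter(w for w in words if w in keywords)
print(counts)   # Counter({'new': 3, 'program': 2, 'research': 1})
```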

Systematic Observation
 Decide beforehand what to observe and how to measure
 Use check-off sheets
 Define variables (ex: when measuring classroom misbehavior
record out-of-seat, talking, hitting, etc.)
 Use one of these types of sampling
 Continuous sampling (record all behaviors in a time interval such as an
hour…requires several observers)
 Time point sampling (look & record, rest, look & record)
 Time interval sampling (record all behaviors within a short time, such as 5
minutes)
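
A small sketch of tallying a check-off sheet after time point sampling; the behavior categories come from the classroom example above, and the recorded codes are hypothetical:

```python
from collections import Counter

# Behaviors defined before observation begins (check-off sheet categories)
categories = {"out-of-seat", "talking", "hitting"}

# Hypothetical codes recorded at each time point (look & record, rest, repeat)
observations = ["talking", "out-of-seat", "talking", "talking", "out-of-seat"]

tally = Counter(obs for obs in observations if obs in categories)
print(tally)  # Counter({'talking': 3, 'out-of-seat': 2})
```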
End Chapter 5
See Checkpoints
------------------------------------------------------------------------------------------------------------

Chapter 6
Measuring and Using Correlation
(Essentially descriptive research, but goes further)
 Correlation research focuses on relationships between
two variables
 Used to describe and predict (but NOT explain…does not
establish cause and effect)
 Finding relationships (co-relate means one variable
moves with another variable)
 Measure two variables (x and y) on one set of subjects
 Can be positive, negative, or no linear relationship

Correlations
 Positive (direct) relationship means a high score on one
variable is associated with a high score on the other (and
vice versa); HI/HI & LO/LO
 Negative (inverse) relationship means a high score on
one variable is associated with a low score on the other (and
vice versa); HI/LO & LO/HI
 No linear relationship means no apparent linear pattern
Correlation Coefficients
 Report DIRECTION and STRENGTH
 For linear data, use Pearson Product Moment Correlation Coefficient
(ppm), r
 Check scatterplot for linearity (other coefficients for curves)
 Direction is indicated by the sign…a positive sign means a direct/positive
(HI/HI;LO/LO) relationship while a negative sign means inverse (HI/LO;LO/HI)
 Strength is indicated by the absolute value of r , zero means no
relationship
 r is from -1.0 → 0 →+1.0 (written r=.76 or r=-.68)
 Example of positive relationship is grades and time studying, while
negative would be grades and missed classes
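
A minimal sketch computing r for the grades/time-studying idea with hypothetical numbers (statistics.correlation requires Python 3.10+):

```python
from statistics import correlation  # Python 3.10+

# Hypothetical data: hours studied (x) and exam grade (y) for 8 students
hours = [2, 5, 1, 8, 4, 6, 3, 7]
grade = [65, 80, 60, 92, 75, 85, 70, 88]

r = correlation(hours, grade)
print(round(r, 2))  # close to +1.0: a strong positive (HI/HI, LO/LO) relationship
```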

Making Predictions
 If two things correlate, can use one to predict the other
 The stronger the correlation (closer to ± 1.0), the more accurate the
prediction
 Regression equations use a formula whereby a new y is predicted based
on a given x
 If the prediction works, or is accurate, can use it even if there are
arguments of biased measures (SAT and grades)
 Beware that relationships change over time and samples AND restricted
range in one variable causes problems (such as graduate school GPA)
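
A minimal sketch of using a least-squares regression line to predict a new y from a given x; the SAT/GPA numbers are hypothetical, and statistics.linear_regression requires Python 3.10+:

```python
from statistics import linear_regression  # Python 3.10+

# Hypothetical data: SAT score (x) and first-year GPA (y)
sat = [900, 1050, 1100, 1200, 1300, 1400]
gpa = [2.4, 2.8, 2.9, 3.2, 3.5, 3.7]

slope, intercept = linear_regression(sat, gpa)
predicted = slope * 1250 + intercept   # predict GPA for a new applicant with SAT = 1250
print(round(predicted, 2))
```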

Correlation is NOT Causation


(One variable does not make the other happen)

 If a cause and effect relationship exists, the two variables will be correlated,
but just because two variables are correlated does not mean there is cause and effect
 Example: Number of bars and number of churches
 Must meet three canons of causation for cause and effect to be
present
 A and B must be related
 A must come before B to cause it
 Must be able to rule out other causes of B (this requires experimental
designs)
Correlational Studies
 Only measure and describe relationships (no manipulation or
changing of variables)
 Smoking and lung cancer link in people – evidence is only correlational
 Can still predict, even if do not know why the relationship is present (ice
cream/violent crime)
 Identify x,y variables; gather sample info on both variables;
plot the data to check linearity (explanatory variable is x on plot);
then calculate coefficient
 Conduct correlational studies prior to experimental to save time
and money (if no correlation, cannot be cause and effect, so save
money, do not do experiment)
End Chapter 6
