
DEFINITION OF RESEARCH DESIGN:

The research design refers to the overall strategy that you choose to integrate the different components of the study in
a coherent and logical way, thereby ensuring you will effectively address the research problem; it constitutes the
blueprint for the collection, measurement, and analysis of data.



Meaning
A research design is a systematic plan to study a scientific problem. The design of a study defines the study type
(descriptive, correlational, semi-experimental, experimental, review, meta-analytic) and sub-type (e.g., descriptive-
longitudinal case study), research question, hypotheses, independent and dependent variables, experimental design, and,
if applicable, data collection methods and a statistical analysis plan. Research design is the framework that has been
created to seek answers to research questions.
A detailed outline of how an investigation will take place. A research design will typically include how data is to
be collected, what instruments will be employed, how the instruments will be used, and the intended means for
analyzing the data collected.










ADVANTAGES
Addresses Specific Research Issues
Carrying out its own research allows a marketing organization to address issues specific to its own situation. Primary
research is designed to collect the information the marketer wants to know and to report it in ways that benefit the
marketer. For example, while information reported in secondary research may not fit the marketer's needs (e.g.,
different age groupings), no such problem exists with primary research, since the marketer controls the research
design.
Greater Control
Not only does primary research enable the marketer to focus on specific issues, it also enables the marketer to have a
higher level of control over how the information is collected. In this way the marketer can decide on such issues as size
of project (e.g., how many responses), location of research (e.g., geographic area) and time frame for completing the
project.
Efficient Spending for Information
Unlike secondary research, where the marketer may pay for information that is not needed, primary data collection
focuses on issues specific to the researcher, improving the chances that research funds will be spent efficiently.
Proprietary Information
Information collected by the marketer using primary research is the marketer's own and is generally not shared with others. Thus,
information can be kept hidden from competitors and potentially offer an information advantage to the company that
undertook the primary research.



DISADVANTAGES
Cost
Compared to secondary research, primary research may be very expensive, since there is a great deal of marketer
involvement and the expense of preparing and carrying out the research can be high.
Time Consuming
To be done correctly, primary data collection requires the development and execution of a research plan. Going from
the start-point of deciding to undertake a research project to the end-point of having results often takes much longer
than acquiring secondary data.
Not Always Feasible
Some research projects, while potentially offering information that could prove quite valuable, are not within the reach
of a marketer. Many are just too large to be carried out by all but the largest companies and some are not feasible at all.
For instance, it would not be practical for McDonald's to attempt to interview every customer who visits its stores on
a certain day, since doing so would require hiring a huge number of researchers at an unrealistic expense. Fortunately,
there are other methods (e.g., sampling) that allow McDonald's to meet its needs without talking with all customers.




FEATURES
Quantitative Design
Quantitative, or fixed, design allows the researcher to actively change the circumstances of the experiment. In this type
of research, the researcher can control the conditions that lead to changes in behavior. This can also be considered
explanatory research, as the focus is on the question of "why."
For example, a city is experiencing an increasing crime rate. To combat this problem the city puts more officers on the
street. Suppose that this did nothing to change the crime rate initially. The city would look at why it did not work, and
what can be done to change that outcome. After studying the problem, the city determines they need to train officers to
deal with gangs and gang activities. Once the training is completed, crime rates begin to drop.
This shows how studying the problem, considering different solutions, deciding on a method to solve the problem and
then implementing that solution caused a different result than the initial method of simply hiring more officers. This
provides the answer to the question of "why."
Qualitative Design
Qualitative, or flexible, design deals with the study aspect of solving a problem. This is an answer to "what," and
identifies the problem itself. This works hand in hand with quantitative design, as problems cannot be tested for
solutions until they have been identified.
Using the same example, when the city initially tried to reduce crime rates, it simply hired more officers. What was
wrong with that approach was that the city did not have an adequate description of the problem. When the city
did further investigation and determined that the officers needed training in gangs and gang activities, they had a better
idea of what the problem was. Their "what" was identified, which led to options of how to solve the problem.
Career Options
Research design careers are available in almost every field and sector of the economy. For example, public opinion
researchers determine the issues people are facing; what is wrong, what is good and so on. A public opinion analyst
would then look at why these issues are occurring and propose solutions to make things better. Another example would
be a research engineer in the field of commercial kitchen ventilation. In this case the researcher determines just how
much energy is needed for specific commercial kitchen appliances to vent cooking effluent outside the kitchen. This
knowledge can be translated into savings for the restaurant, as well as resulting in a positive ecological outcome.
Who Should Consider a Career in Research Design
This field is perfect for those who like to figure out what a problem is, or what solutions can fix it.
Research design is the right career for individuals who want to know more and are not satisfied with the status quo.









HISTORY
The Design Research Society was founded in the UK in 1966. The origins of the Society lay in the Conference on
Design Methods, held in London in 1962, which enabled a core of people to be identified who shared interests in new
approaches to the process of designing.
The purpose of the DRS, as embodied in its first statement of rules, was 'to promote the study of and research into the
process of designing in all its many fields'. This established the intention of being an interdisciplinary, learned society.
The DRS promoted its aims through a series of one-day conferences and the publication of a quarterly newsletter to
members.
However, within a few years, fruitless attempts to establish a published journal, and equally fruitless internal debate
about the Society's goals led to inactivity. The Society was revived by its first major international conference, on
Design Participation, held in Manchester in 1971. At that conference a meeting of DRS members led to a call for a
special general meeting of the Society, and to changes of officers and council members. Subsequently, a series of
international conferences was held through the 1970s and 80s: in London (1973), Portsmouth (1976, 1980), Istanbul
(1978), and Bath (1984).
In the mid-1970s DRS also collaborated with the Design Methods Group, based in the USA, including publishing a
joint journal, Design Research and Methods. By the late 1970s there was enough enthusiasm, and evidence of design
research activity around the world, for the DRS to approach IPC Press (now Elsevier) with a successful proposal for its
own journal. Design Studies, the international journal for design research, was launched in 1979.
A new biennial series of DRS conferences began in 2002 with the 'Common Ground' conference in London.
Subsequent ones have been in Melbourne, Australia (2004), Lisbon, Portugal (2006), Sheffield, UK (2008), Montreal,
Canada (2010).
Design Research Society - holders of the Chair:
1967-69 John Page
1969-71 William Gosling
1971-73 Chris Jones
1973-77 Sydney Gregory
1977-80 Thomas Maver
1980-82 Nigel Cross
1982-84 James Powell
1984-88 Robin Jacques
1988-90 Bruce Archer
1990-94 Sebastian Macmillan
1994-98 Conall O'Cathain
1998-06 David Durling
2006-09 Chris Rust
2009- Seymour Roworth-Stokes

Honorary President:
1992-00 Bruce Archer
2000-06 Richard Buchanan
2006- Nigel Cross
Historical Design
So Stan decides that he wants to figure out why the Nazis acted the way they did. He wants to do historical research,
which involves interpreting past events to predict future ones. In Stan's case, he's interested in examining the reasons
behind the Holocaust to try to prevent it from happening again.
Historical research design involves synthesizing data from many different sources. Stan could interview former
Nazis, or read diaries from Nazi soldiers to try to figure out what motivated them. He could look at public records and
archives, examine Nazi propaganda, or look at testimony in the trials of Nazi officers. There are several steps that
someone like Stan has to go through to do historical research:
1. Formulate an idea: This is the first step of any research, to find the idea and figure out the research question. For
Stan, this came from his mother, but it could come from anywhere. Many researchers find that ideas and questions
arise when they read other people's research.
2. Formulate a plan: This step involves figuring out where to find sources and how to approach them. Stan could
make a list of all the places he could find information (libraries, court archives, private collections) and then figure
out where to start.
3. Gather data: This is when Stan will actually go to the library or courthouse or prison to read or interview or
otherwise gather data. In this step, he's not making any decisions or trying to answer his question directly; he's just
trying to get everything he can that relates to the question.
4. Analyze data: This step is when Stan goes through the data he just collected and tries more directly to answer his
question. He'll look for patterns in the data. Perhaps he reads in the diary of the daughter of a Nazi that her father
didn't believe in the Nazi party beliefs but was scared to stand up for his values. Then he hears the same thing from a
Nazi soldier he interviews. A pattern is starting to emerge.
5. Analyze the sources of data: Another thing that Stan has to do when analyzing data is to assess the
veracity of his data. The daughter's diary is a secondary source, so it might not be as reliable as a primary source, like the
diary of her father. Likewise, people have biases and motivations that might cloud their account of things; perhaps the
Nazi soldier Stan interviews is up for parole, and he thinks that if he says he was scared and not a true Nazi believer,
he might get out of jail.
Once Stan has gone through all of these steps, he should have a good view of what he wants to know about his
question. If he doesn't, then he goes back to step two (formulating a plan) and starts again. He will keep doing steps
two through five until he finds something that he can use.
Educational research has changed dramatically in both its stated objectives and techniques since the late
nineteenth century. From its early beginnings in the ivory towers of many research universities to the present day,
educational research has not only undergone a variety of transformations but it has been a topic of significant scholarly,
societal and political debate over the years. This brief essay will provide a historical overview of the key
transformations educational research has gone through since the late nineteenth century until present day by focusing
on important individuals who led these efforts. It is not an attempt to validate or discredit certain educational scholars
or the research approaches they embraced. It will primarily focus on the development of educational research in the
United States with particular attention to the University of Chicago.




TYPES
ACTION RESEARCH DESIGN:
Definition and Purpose
The essentials of action research design follow a characteristic cycle: initially an exploratory stance is adopted, an
understanding of a problem is developed, and plans are made for some form of intervention strategy. Then the
intervention is carried out (the "action" in Action Research), during which time pertinent observations are collected
in various forms. New intervention strategies are then carried out, and the cyclic process repeats, continuing until a
sufficient understanding of (or implementable solution for) the problem is achieved. The protocol is iterative or
cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and
particularizing the problem and moving through several interventions and evaluations.
What do these studies tell you?
1. A collaborative and adaptive research design that lends itself to use in work or community situations.
2. Design focuses on pragmatic and solution-driven research rather than testing theories.
3. When practitioners use action research it has the potential to increase the amount they learn consciously from
their experience. The action research cycle can also be regarded as a learning cycle.
4. Action research studies often have direct and obvious relevance to practice.
5. There are no hidden controls or preemption of direction by the researcher.
What these studies don't tell you:
1. It is harder to do than conducting conventional studies because the researcher takes on responsibilities of
advocating for change as well as for researching the topic.
2. Action research is much harder to write up because you probably can't use a standard format to report your
findings effectively.
3. Personal over-involvement of the researcher may bias research results.
4. The cyclic nature of action research to achieve its twin outcomes of action (e.g. change) and research (e.g.
understanding) is time-consuming and complex to conduct.

5. Case Study Design
Definition and Purpose
A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey. It is often
used to narrow down a very broad field of research into one or a few easily researchable examples. The case study
research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real
world. It is a useful design when not much is known about a phenomenon.
What do these studies tell you?
1. Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a
limited number of events or conditions and their relationships.
2. A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to
investigate a research problem.
3. Design can extend experience or add strength to what is already known through previous research.
4. Social scientists, in particular, make wide use of this research design to examine contemporary real-life
situations and provide the basis for the application of concepts and theories and extension of methods.
5. The design can provide detailed descriptions of specific and rare cases.
What these studies don't tell you?
1. A single case, or a small number of cases, offers little basis for establishing reliability or for generalizing the
findings to a wider population of people, places, or things.
2. The intense exposure to study of the case may bias a researcher's interpretation of the findings.
3. Design does not facilitate assessment of cause and effect relationships.
4. Vital information may be missing, making the case hard to interpret.
5. The case may not be representative or typical of the larger problem being investigated.
6. If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem for
study, then your interpretation of the findings can only apply to that particular case.
Causal Design
Definition and Purpose
Causality studies may be thought of as understanding a phenomenon in terms of conditional statements of the form "If
X, then Y." This type of research is used to measure what impact a specific change will have on existing norms and
assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic
perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in
variation in another phenomenon, the dependent variable.
Conditions necessary for determining causality:
Empirical association--a valid conclusion is based on finding an association between the independent variable
and the dependent variable.
Appropriate time order--to conclude that causation was involved, one must see that cases were exposed to
variation in the independent variable before variation in the dependent variable.
Nonspuriousness--a relationship between two variables that is not due to variation in a third variable.
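The three conditions can be illustrated with a small Python simulation (all variables and numbers here are hypothetical, invented purely for illustration): a third variable Z drives both X and Y, so X and Y show a strong empirical association that disappears once Z's contribution is removed. That is exactly the kind of spurious relationship the nonspuriousness condition rules out.

```python
import random

random.seed(0)

# Hypothetical data: a confounder Z drives both X and Y.
z = [random.gauss(0, 1) for _ in range(5000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def corr(a, b):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va * vb) ** 0.5

# Empirical association: X and Y look strongly related.
print(round(corr(x, y), 2))

# "Control" for Z by correlating what is left of X and Y after
# subtracting Z's contribution; the association vanishes.
rx = [xi - zi for xi, zi in zip(x, z)]
ry = [yi - zi for yi, zi in zip(y, z)]
print(round(corr(rx, ry), 2))
```

Here the X-Y association fails the nonspuriousness test: it is entirely due to the third variable Z, so concluding that X causes Y would be wrong.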
What do these studies tell you?
1. Causal research designs help researchers understand why the world works the way it does through the
process of testing for a causal link between variables and eliminating other possibilities.
2. Replication is possible.
3. There is greater confidence the study has internal validity due to the systematic subject selection and equity of
groups being compared.
What these studies don't tell you?
1. Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events
appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive
years but, the fact remains, he's just a big, furry rodent].
2. Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding
variables that exist in a social environment. This means causality can only be inferred, never proven.
3. For causation, the cause must come before the effect. However, even though two variables
might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore,
to establish which variable is the actual cause and which is the actual effect.
Cohort Design
Definition and Purpose
Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a
study conducted over a period of time involving members of a population who are united by some commonality or
similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized
subgroup, united by the same or similar characteristics that are relevant to the research problem being investigated,
rather than within the general population. Using a qualitative framework, cohort studies generally gather data using
methods of observation. Cohorts can be either "open" or "closed."
Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is
defined simply by being part of the study in question (and being monitored for the outcome). Dates of
entry and exit from the study are individually defined; therefore, the size of the study population is not constant.
In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants
thereof.
Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who
enter into the study at one defining point in time and where it is presumed that no new participants can enter the
cohort. Given this, the number of study participants remains constant (or can only decrease).
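As a sketch of what "rate-based data" means for an open cohort, the incidence rate is the number of new cases divided by the total person-time at risk. The Python example below uses invented numbers, not data from any real study, just to show the arithmetic:

```python
# Each tuple is one participant: (years observed, developed the outcome?).
# In an open cohort, entry and exit dates differ, so follow-up time
# varies from person to person.
participants = [
    (5.0, False),
    (2.5, True),
    (4.0, False),
    (1.5, True),
    (7.0, False),
]

person_years = sum(years for years, _ in participants)  # total time at risk
new_cases = sum(1 for _, case in participants if case)  # outcomes observed

incidence_rate = new_cases / person_years  # cases per person-year
print(incidence_rate)  # 2 cases / 20.0 person-years = 0.1
```

In words: 10 cases per 100 person-years of observation, a quantity that stays meaningful even though the cohort's membership changes over time.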
What do these studies tell you?
1. The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you
cannot deliberately expose people to asbestos, you can only study its effects on those who have already been
exposed. Research that measures risk factors often relies on cohort designs.
2. Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that
these causes preceded the outcome, thereby avoiding the debate as to which is the cause and which is the
effect.
3. Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of
different types of changes [e.g., social, cultural, political, economic, etc.].
4. Either original data or secondary data can be used in this design.
What these studies don't tell you?
1. In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to
asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two
groups. These factors are known as confounding variables.
2. Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of
interest to develop within the group. This also increases the chance that key variables change during the course
of the study, potentially impacting the validity of the findings.
3. Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs
where the researcher randomly assigns participants.
Cross-Sectional Design
Definition and Purpose
Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences
rather than change following intervention; and groups selected based on existing differences rather than random
allocation. The cross-sectional design can only measure differences between or among a variety of people,
subjects, or phenomena rather than change. As such, researchers using this design can only employ a relatively passive
approach to making causal inferences based on findings.
What do these studies tell you?
1. Cross-sectional studies provide a 'snapshot' of the outcome and the characteristics associated with it, at a
specific point in time.
2. Unlike the experimental design where there is an active intervention by the researcher to produce and measure
change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing
differences between people, subjects, or phenomena.
3. Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple
measures over an extended period of time, cross-sectional research is focused on finding relationships between
variables at one moment in time.
4. Groups identified for study are purposely selected based upon existing differences in the sample rather than
seeking random sampling.
5. Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational
studies, are not geographically bound.
6. Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole
population.
7. Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive
and take up little time to conduct.
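Points 4 through 7 can be sketched in a few lines of Python. The survey records below are hypothetical, invented for illustration: prevalence is simply the share of the sample with the outcome at the single point of measurement, and groups are compared on existing differences (here, age group) rather than random allocation.

```python
from collections import defaultdict

# Hypothetical cross-sectional survey: every respondent is measured
# once, at the same point in time.
respondents = [
    {"age_group": "18-34", "has_condition": False},
    {"age_group": "18-34", "has_condition": True},
    {"age_group": "35-54", "has_condition": False},
    {"age_group": "35-54", "has_condition": True},
    {"age_group": "55+", "has_condition": True},
]

# Overall prevalence: share of the sample with the outcome right now.
prevalence = sum(r["has_condition"] for r in respondents) / len(respondents)
print(prevalence)  # 3 of 5 respondents = 0.6

# Compare pre-existing groups rather than randomly allocated ones.
by_group = defaultdict(list)
for r in respondents:
    by_group[r["age_group"]].append(r["has_condition"])
group_prevalence = {g: sum(v) / len(v) for g, v in by_group.items()}
print(group_prevalence)
```

Note what the design cannot give you: there is no second wave of measurement, so nothing here can describe change or establish cause and effect.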
What these studies don't tell you?
1. Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be
difficult.
2. Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical
or temporal contexts.
3. Studies cannot be utilized to establish cause and effect relationships.
4. This design only provides a snapshot of analysis so there is always the possibility that a study could have
differing results if another time-frame had been chosen.
5. There is no follow up to the findings.

Descriptive Design
Definition and Purpose
Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated
with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive
research is used to obtain information concerning the current status of the phenomena and to describe "what exists"
with respect to variables or conditions in a situation.
What do these studies tell you?
1. The subject is being observed in a completely natural and unchanged environment. True experiments,
whilst giving analyzable data, often adversely influence the normal behavior of the subject.
2. Descriptive research is often used as a precursor to more quantitative research designs, the general overview
giving some valuable pointers as to what variables are worth testing quantitatively.
3. If the limitations are understood, they can be a useful tool in developing a more focused study.
4. Descriptive studies can yield rich data that lead to important recommendations in practice.
5. Approach collects a large amount of data for detailed analysis.
What these studies don't tell you?
1. The results from descriptive research cannot be used to discover a definitive answer or to disprove a
hypothesis.
2. Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the
results cannot be replicated.
3. The descriptive function of research is heavily dependent on instrumentation for measurement and observation.
Experimental Design
Definition and Purpose
A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of
an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental Research is
often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal
relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic
experimental design specifies an experimental group and a control group. The independent variable is administered to
the experimental group and not to the control group, and both groups are measured on the same dependent variable.
Subsequent experimental designs have used more groups and more measurements over longer periods. True
experiments must have control, randomization, and manipulation.
What do these studies tell you?
1. Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer
the question, "What causes something to occur?"
2. Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo
effects from treatment effects.
3. Experimental research designs support the ability to limit alternative explanations and to infer direct causal
relationships in the study.
4. Approach provides the highest level of evidence for single studies.
What these studies don't tell you?
1. The design is artificial, and results may not generalize well to the real world.
2. The artificial settings of experiments may alter subject behaviors or responses.
3. Experimental designs can be costly if special equipment or facilities are needed.
4. Some research problems cannot be studied using an experiment because of ethical or technical reasons.
5. It is difficult to apply ethnographic and other qualitative methods to experimentally designed research studies.

Exploratory Design
Definition and Purpose
An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to. The
focus is on gaining insights and familiarity for later investigation; it is typically undertaken when problems are in a
preliminary stage of investigation.
Exploratory research is intended to produce the following possible insights:
Familiarity with basic details, settings, and concerns.
A well-grounded picture of the situation being developed.
Generation of new ideas and assumptions, and development of tentative theories or hypotheses.
Determination of whether a study is feasible in the future.
Refinement of issues for more systematic investigation and formulation of new research questions.
Direction for future research and development of new techniques.
What do these studies tell you?
1. Design is a useful approach for gaining background information on a particular topic.
2. Exploratory research is flexible and can address research questions of all types (what, why, how).
3. Provides an opportunity to define new terms and clarify existing concepts.
4. Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
5. Exploratory studies help establish research priorities.
What these studies don't tell you?
1. Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to
the population at large.
2. The exploratory nature of the research inhibits an ability to make definitive conclusions about the findings.
3. The research process underpinning exploratory studies is flexible but often unstructured, leading to only
tentative results that have limited value in decision-making.
4. Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for
exploration could be to determine what method or methodologies could best fit the research problem.
Historical Design
Definition and Purpose
The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts
that defend or refute your hypothesis. It uses secondary sources and a variety of primary documentary evidence, such
as logs, diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual
recordings]. The limitation is that the sources must be both authentic and valid.
What do these studies tell you?
1. The historical research design is unobtrusive; the act of research does not affect the results of the study.
2. The historical approach is well suited for trend analysis.
3. Historical records can add important contextual background required to more fully understand and interpret a
research problem.
4. There is no possibility of researcher-subject interaction that could affect the findings.
5. Historical sources can be used over and over to study different research problems or to replicate a previous
study.
What these studies don't tell you?
1. The ability to fulfill the aims of your research is directly related to the amount and quality of documentation
available to understand the research problem.
2. Since historical research relies on data from the past, there is no way to manipulate it to control for
contemporary contexts.
3. Interpreting historical sources can be very time consuming.
4. The sources of historical materials must be archived consistently to ensure access.
5. Original authors bring their own perspectives and biases to the interpretation of past events, and these biases are
more difficult to ascertain in historical sources.
6. Due to the lack of control over external variables, historical research is very weak with regard to the demands of
internal validity.
7. It is rare that the entirety of the historical documentation needed to fully address a research problem is available
for interpretation; therefore, gaps need to be acknowledged.
Longitudinal Design
Definition and Purpose
A longitudinal study follows the same sample over time and makes repeated observations. With longitudinal surveys,
for example, the same group of people is interviewed at regular intervals, enabling researchers to track changes over
time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe
patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on
each variable over two or more distinct time periods. This allows the researcher to measure change in variables over
time. It is a type of observational study and is sometimes referred to as a panel study.
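The way repeated measurements support the description of change can be shown with a minimal sketch, in which the same respondents are scored at three waves; all respondent names and values here are invented for illustration:

```python
# Hypothetical panel data: the same three respondents measured at three
# waves on a single variable; all values are invented for illustration.
panel = {
    "respondent_1": [52, 55, 61],
    "respondent_2": [48, 47, 50],
    "respondent_3": [60, 58, 57],
}

# Because each person is measured repeatedly, within-person change can be
# described directly: the wave-to-wave differences and the overall change.
for person, scores in panel.items():
    wave_changes = [later - earlier for earlier, later in zip(scores, scores[1:])]
    overall = scores[-1] - scores[0]
    print(person, wave_changes, overall)
```

A cross-sectional design, by contrast, would measure different people at each wave and could describe only aggregate differences, not these individual trajectories.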
What do these studies tell you?
1. Longitudinal data allow the analysis of duration of a particular phenomenon.
2. Enables survey researchers to get close to the kinds of causal explanations usually attainable only with
experiments.
3. The design permits the measurement of differences or change in a variable from one period to another [i.e., the
description of patterns of change over time].
4. Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
What these studies don't tell you?
1. The data collection method may change over time.
2. Maintaining the integrity of the original sample can be difficult over an extended period of time.
3. It can be difficult to show more than one variable at a time.
4. This design often needs qualitative research to explain fluctuations in the data.
5. A longitudinal research design assumes present trends will continue unchanged.
6. It can take a long period of time to gather results.
7. There is a need to have a large sample size and accurate sampling to reach representativeness.
Meta-Analysis Design
Definition and Purpose
Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a
number of individual studies, thereby increasing the overall sample size and the ability of the researcher to study
effects of interest. The purpose is not simply to summarize existing knowledge but to develop a new understanding of a
research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the
results among studies and increasing the precision by which effects are estimated. A well-designed meta-analysis
depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study
to properly analyze their findings. Lack of information can severely limit the type of analyses and conclusions that can
be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more
difficult it is to justify interpretations that govern a valid synopsis of results.
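The way pooling studies increases precision can be illustrated with a fixed-effect, inverse-variance combination of effect sizes; this is a minimal sketch, and all study effects and variances below are invented for illustration:

```python
# Hypothetical effect sizes (e.g., mean differences) and their variances
# from five individual studies; all numbers are illustrative only.
effects = [0.30, 0.45, 0.12, 0.50, 0.28]
variances = [0.04, 0.09, 0.02, 0.16, 0.05]

# Fixed-effect model: weight each study by the inverse of its variance,
# so more precise (typically larger) studies contribute more.
weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# The variance of the pooled estimate shrinks as studies are added,
# which is how meta-analysis increases the precision of the estimate.
pooled_variance = 1 / sum(weights)

print(round(pooled, 3))          # pooled effect estimate
print(round(pooled_variance, 4)) # its (smaller) variance
```

A fixed-effect model assumes the studies estimate one common effect; when heterogeneity among studies is substantial, a random-effects model that widens the pooled variance is the more defensible choice.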

A meta-analysis needs to fulfill the following requirements to ensure the validity of your findings:
Clearly defined description of objectives, including precise definitions of the variables and outcomes that are
being evaluated;
A well-reasoned and well-documented justification for identification and selection of the studies;
Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those
studies;
Description and evaluation of the degree of heterogeneity among the sample size of studies reviewed; and,
Justification of the techniques used to evaluate the studies.
What do these studies tell you?
1. Can be an effective strategy for determining gaps in the literature.
2. Provides a means of reviewing research published about a particular topic over an extended period of time and
from a variety of sources.
3. Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research
results from multiple studies.
4. Provides a method for overcoming small sample sizes in individual studies that previously may have had little
relationship to each other.
5. Can be used to generate new hypotheses or highlight research problems for future studies.
What these studies don't tell you?
1. Small violations in defining the criteria used for content analysis can lead to findings that are difficult to
interpret and/or meaningless.
2. A large sample size can yield reliable, but not necessarily valid, results.
3. A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how
findings are measured within the sample of studies you are analyzing, can make the process of synthesis
difficult to perform.
4. Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time
consuming.

Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-
Analysis. 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson and Raymond
A. Katzell. Meta-Analysis Analysis. In Research in Organizational Behavior, Volume 9. (Greenwich, CT: JAI Press,
1987), pp. 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis. Thousand Oaks, CA: Sage
Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington
University; Timulak, Ladislav. Qualitative Meta-Analysis. In The SAGE Handbook of Qualitative Data Analysis.
Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Michael
W. Kattan. "Meta-Analysis: Its Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-
439.
Observational Design
Definition and Purpose
This type of research design draws a conclusion by comparing subjects against a control group, in cases where the
researcher has no control over the experiment. There are two general types of observational designs. In direct
observations, people know that you are watching them. Unobtrusive measures involve any method for studying
behavior where individuals do not know they are being observed. An observational study allows a useful insight into a
phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.
What do these studies tell you?
1. Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis
about what you expect to observe (data is emergent rather than pre-existing).
2. The researcher is able to collect a depth of information about a particular behavior.
3. Can reveal interrelationships among multifaceted dimensions of group interactions.
4. You can generalize your results to real life situations.
5. Observational research is useful for discovering what variables may be important before applying other methods
like experiments.
6. Observational research designs account for the complexity of group behaviors.
What these studies don't tell you?
1. Reliability of data is low because observing behaviors over and over again is time consuming and difficult to
replicate.
2. In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized
to other groups.
3. There can be problems with bias as the researcher may only "see what they want to see."
4. There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
5. Sources or subjects may not all be equally credible.
6. Any group that is studied is altered to some degree by the very presence of the researcher, thereby skewing to
some degree any data collected (the Heisenberg Uncertainty Principle).

Atkinson, Paul and Martyn Hammersley. Ethnography and Participant Observation. In Handbook of Qualitative
Research. Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261;
Observational Research. Research Methods by Dummies. Department of Psychology. California State University,
Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods. Chapter 6, Fieldwork Strategies
and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Rosenbaum, Paul R. Design of Observational
Studies. New York: Springer, 2010.
Philosophical Design
Definition and Purpose
Understood more as a broad approach to examining a research problem than as a methodological design, philosophical
analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an
area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models,
and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates,
to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem.
These overarching tools of analysis can be framed in three ways:
Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is
fundamental and what is derivative?
Epistemology -- the study that explores the nature of knowledge; for example, what do knowledge and
understanding depend upon, and how can we be certain of what we know?
Axiology -- the study of values; for example, what values does an individual or group hold and why? How are
values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a
matter of fact and a matter of value?
What do these studies tell you?
1. Can provide a basis for applying ethical decision-making to practice.
2. Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
3. Brings clarity to general guiding practices and principles of an individual or group.
4. Philosophy informs methodology.
5. Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
6. Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality
(metaphysics).
7. Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
What these studies don't tell you?
1. Limited application to specific research problems [answering the "So What?" question in social science
research].
2. Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
3. While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the
writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and
documentation.
4. There are limitations in the use of metaphor as a vehicle of philosophical analysis.
5. There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and
application to the phenomenal world.

Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa;
Labaree, Robert V. and Ross Scimeca. The Philosophical Problem of Truth in Librarianship. The Library Quarterly
78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide.
Washington, D.C.: Falmer Press, 1994; Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, CSLI,
Stanford University, 2013.
Sequential Design
Definition and Purpose
Sequential research is that which is carried out in a deliberate, staged approach [i.e., serially] where one stage will be
completed, followed by another, then another, and so on, with the aim that each stage will build upon the previous one
until enough data is gathered over an interval of time to test your hypothesis. The sample size is not predetermined.
After each sample is analyzed, the researcher can accept the null hypothesis, accept the alternative hypothesis, or select
another pool of subjects and conduct the study once again. This means the researcher can obtain a limitless number of
subjects before finally making a decision whether to accept the null or alternative hypothesis. Using a quantitative
framework, a sequential study generally utilizes sampling techniques to gather data and applies statistical methods to
analyze the data. Using a qualitative framework, sequential studies generally utilize samples of individuals or groups of
individuals [cohorts] and use qualitative methods, such as interviews or observations, to gather information from each
sample.
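The quantitative logic described above, sampling in batches, testing after each batch, and stopping once a decision can be made, can be sketched as follows. The baseline, batch size, threshold, and simulated data are all assumptions for illustration, and a real sequential design would also adjust the threshold for repeated looks at the data:

```python
import random
import statistics

random.seed(42)  # make the simulated data reproducible

BASELINE = 100   # hypothesized population mean under the null hypothesis
BATCH = 25       # number of subjects added at each stage
Z_CUT = 1.96     # illustrative threshold; ignores the multiple-looks problem
MAX_BATCHES = 20

observations = []
decision = "inconclusive"
for _ in range(MAX_BATCHES):
    # Draw the next batch (simulated scores whose true mean is 103).
    observations += [random.gauss(103, 10) for _ in range(BATCH)]
    mean = statistics.mean(observations)
    se = statistics.stdev(observations) / len(observations) ** 0.5
    z = (mean - BASELINE) / se
    if abs(z) > Z_CUT:
        decision = "reject null"  # stop sampling; evidence of a difference
        break

print(decision, "after", len(observations), "subjects")
```

A formal sequential procedure such as Wald's sequential probability ratio test replaces the fixed threshold with stopping boundaries chosen to control the overall error rates across the repeated analyses.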

What do these studies tell you?
1. The researcher has limitless options when it comes to sample size and the sampling schedule.
2. Due to the repetitive nature of this research design, minor changes and adjustments can be done during the
initial parts of the study to correct and hone the research method. Useful design for exploratory studies.
3. There is very little effort on the part of the researcher when performing this technique. It is generally not
expensive, time consuming, or workforce intensive.
4. Because the study is conducted serially, the results of one sample are known before the next sample is taken and
analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis.
What these studies don't tell you?
1. The sampling method is not representative of the entire population. The only possibility of approaching
representativeness is when the researcher chooses to use a very large sample, large enough to represent a
significant portion of the entire population. In this case, moving on to study a second or more specific sample
can be difficult.
2. Because the sampling technique is not randomized, the design cannot be used to create conclusions and
interpretations that pertain to an entire population. Generalizability from findings is limited.
3. Difficult to account for and interpret variation from one sample to another over time, particularly when using
qualitative methods of data collection.


THE BIRTH OF EDUCATION AS A FIELD OF STUDY AND RESEARCH

The study of and research on education traces its roots back to the late 1830s and early 1840s with the revival of the
common school, when for the first time both school supervision and planning were influenced by systematic data
collection.[1] These data collection efforts, according to Robert Travers, involved an examination of the ideas on
which education was based, an intellectual crystallization of the function of education in a democracy, and the
development of a literature on education that attempted to make available to teachers and educators important new
ideas related to education that had emerged in various countries.[2] Horace Mann and Henry Barnard were early
pioneers in educational data collection and in the production and dissemination of educational literature during the mid-
to late-nineteenth century. Additionally, they held prominent educational leadership positions by being the first
secretaries of educational boards of Massachusetts (Horace Mann) and of Connecticut (Henry Barnard).[3] In many
ways, the trends in the early history of educational research were components of the trends in American culture of the
time.[4]

The founding of Johns Hopkins University as the first research university in 1876 set the stage for new elite research
universities to be founded, such as Stanford University and the University of Chicago.[5] Additionally, the Morrill Act
of 1862[6] allowed for the establishment of land-grant colleges and universities across the United States, many of
which would rival the more established elite institutions on the East Coast in research and knowledge production. As Ellen
Condliffe Lagemann points out, research universities quickly became the leaders in creating and disseminating new
knowledge and in the professionalization of many professions, and they became the spawning grounds for research on
education at the end of the nineteenth century.[7] During this time period, there was a belief that the social world could
be acted on and changed through scientific practices and that teaching and the social welfare professions embodied
scientific analysis and planning.[8] The restructuring of higher education in the United States from a focus on teaching
to a new focus that included both teaching and research activities led to new schools of thought and approaches to
science. Professors at universities were now expected to teach and to plan and conduct original research.[9] Numerous
pioneers of American education began their work and research in the major research institutions of the day. Perhaps
one of the most well known of these scholars was John Dewey, Chair of the Department of Philosophy at the
University of Chicago from 1896 to 1904, who introduced a new approach to the study of education and became a
leader in pedagogy. Dewey's experimental Laboratory School was based more on psychology than on the behaviorism
that had long influenced educational research activities.[10] John Dewey's progressive education philosophy
opposed testing and curriculum tracking and relied more on argument than on scientific research and its
evidence.[11] He worked to combine philosophy, psychology and education. Surprisingly, John Dewey never proposed
future areas of inquiry or suggested future research directions in his writings and he never published any evidence on
the effects his Laboratory School experiment had on children.[12] Dewey's influence on educational practice outside of
his Laboratory School was quite limited and overestimated.[13] Ellen Condliffe Lagemann summed up John Dewey's
legacy on educational research as follows: "to suggest that Dewey had served as something of a cultural icon,
alternatively praised and damned by thinkers on both the right and left, might capture his place in the history of
education more accurately than to say he was important as a reformer. Certainly, his ideas about a science of education
did not create a template for educational study."[14] In 1904, John Dewey left the University of Chicago for Teachers
College at Columbia University, where he remained as a professor of philosophy until his death in 1952.

Within five years of John Dewey's departure, Charles Judd arrived in 1909 to serve as Chair of the School of Education
at the University of Chicago. Charles Judd differed substantially from John Dewey in his approach to educational
research. Charles Judd, a psychologist, sought to bring a rigorous and scientific approach to the study of education.
Judd was a proponent of the scientific method and worked to integrate it into educational research. This was evidenced
by the reorganization of the University of Chicago's School of Education into the Department of Education within the
Division of the Social Sciences shortly after Judd's arrival on campus. Judd's preference for quantitative data collection
and analysis and his emphasis on the scientific method, with a particular focus on psychology, represented one of the
leading schools of thought on educational research during the early decades of the twentieth century.

In 1904, the same year that John Dewey left the University of Chicago for Teachers College at Columbia University,
the psychologist Edward Thorndike, also of Teachers College, published An Introduction to the Theory of Mental and
Social Measurements which argued for a strong positivistic theoretical approach to educational research. Thorndike
held a similar epistemological approach to the study of education to that of Charles Judd at Chicago. Thorndike favored
the separation of philosophy and psychology. He did not care for the collection of data for census purposes but rather
the production of statistics and precise measurements that could be analyzed. Thorndike became a very influential
educational scholar and his approach to educational research was widely accepted and adopted across academia both in
the United States and abroad.[15] What Ellen Condliffe Lagemann describes as Edward Thorndike's triumph and John
Dewey's defeat was critical to the field and to attempts to define an educational science.[16]

EDUCATIONAL RESEARCH AND THE INTER-WAR YEARS

The inter-war years were a time of transformation in educational research. By 1915, the study of education had been
established at the university level with 300 of the 600 institutions of higher education in the United States offering
courses on education. This time period also saw an increase in doctoral-level study, with enrollments in education
higher than in any discipline other than chemistry.[17] While faculty at institutions such as Harvard,
Teachers College and the University of Chicago, which had dominated the educational research landscape decades
earlier, continued to make significant contributions to the study of education, there were scholars at many other
institutions making additional valuable contributions to educational research scholarship. At the conclusion of World
War I, the focus of educational reform in the United States began to shift toward social control and efficiency, and
there was an opportunity for many educationists to provide guidance to public schools in the United
States.[18] Disagreement among educational research scholars persisted during this time period and there was little
consensus on the aims of education. With the population of the United States growing rapidly and the demographic
make-up of its people changing due to the arrival of immigrants from across the globe, coupled with the migration of
African Americans from rural areas and the Southern states to cities in the Northeast and Midwest, the student
bodies at public schools were diversifying at a rapid pace. The arrival of new immigrants to the United States coincided
with the testing movement that emerged during World War I when the United States Army was testing its recruits.
The most prominent psychologists of the time, including Edward Thorndike, were either involved with or supported the
Army's testing. The testing movement attracted both psychologists and sociologists alike and it was the sociologists,
primarily in the Department of Sociology at the University of Chicago, who challenged and actively researched the
racial differences in intelligence quotients. Otto Klineberg from Columbia University also played a leading role in
studying racial and cultural differences in intelligence quotients and their measures.

EDUCATIONAL RESEARCH POST WORLD WAR II AND THE FUTURE

Educational research continued to flourish in the years and decades after World War II ended. During this time period,
the growth in schools of education and in the number of courses on education at institutions of higher education
continued to rise. Additionally, more academic journals with a focus on educational issues emerged as a means to
disseminate new knowledge. These exciting changes in educational scholarship were not confined to the ivory towers
in the United States. Even as Europe was rebuilding, the study of education across the continent was on the rise and in
the United Kingdom, for example, the rise of professional graduate degrees in education was significant.[19] Scholarly
debates on the aims of education as well as epistemological discourse persisted.

In the decades after World War II, and in particular at the start of the 1960s, a post-positivist movement in educational
research starts taking shape.[20] While positivistic approaches to educational research continued to be put forth during
the post-war years and continued to be favored by many social scientists, we start to see the introduction of, and in
some cases the reemergence of, other epistemological approaches.[21] Constructivist, functionalist, and
postmodernist theoretical frameworks, among others, have offered strong criticisms of positivism.[22] Vigorous
debates on the virtues of the various theoretical perspectives about knowledge, science and methodologies have played
a very important role in educational research. Frequently, these critical discussions and analyses have found both a
platform and a captive audience in the field of comparative education. These philosophical debates continue today both
in and outside of academia.

The United States federal government also began to take a much more active role in educational research in the post-
war years. Specifically, in 1954, the United States Congress passed the Cooperative Research Act. The Cooperative
Research Act was passed as a means for the federal government to take a more active role in advancing and funding
research on education in academia.[23] Additional legislation and federal initiatives during the 1950s and 1960s that
supported and/or funded educational research and provided a means for the dissemination of new educational
knowledge included the National Defense Education Act of 1958 and the establishment of the Educational Resources
Information Center (ERIC) in 1966. These are but a few of the many examples of the new role the federal government
was playing in educational research during this time period. To be sure, the federal government has continued to play a
significant role in educational research since this time period. Since the 1970s, according to Robert Travers, virtually
every bill authorizing particular educational programs has included a requirement that the particular program be
evaluated to determine whether the program was worth the money spent upon it.[24] For a long period of time, public
attention to education and schools focused on resource allocation, student access, and the content of the curriculum and
paid relatively little attention to results.[25] This new evidence-based movement[26] is one that remains with us
today. Patti Lather describes the evidence-based movement as a governmental incursion into legislating scientific
method in the realm of educational research, and argues that the federal government's focus on evidence-based
knowledge is much more about policy for science than it is about science for policy.[27] The federal government has a
vested interest in and support for applied research over basic or pure research. This, of course, is challenging for social
scientists and educational researchers who are positivist in their approach to science and knowledge.[28]

A distinctive form of research emerged from the new assessment or evaluation movement in recent decades.
Educational assessment, in many ways, is a form of action research.[29] Action research does not aim to produce
new knowledge. Instead, action research aims to improve practice and in the context of education it aims to improve
the educational practice of teachers.[30] Action research, as Richard Pring points out, might be supported and funded
with a view to knowing the most effective ways of attaining particular goals: goals or targets set by government or
others external to the transaction which takes place between teacher and learner.[31] Action research proponents
Yvonna Lincoln and Egon Guba highlight that the call for action differentiates between positivist and postmodern
criticalist theorists.[32]
A new and interesting approach to educational research can be found today at the University of Chicago. The
Department of Education at the University of Chicago was closed in 1997 to much surprise around the world. Despite
the closing of the Department of Education, a sprinkling of educational research activities by faculty and available
education course offerings can be found in a variety of academic departments and professional schools. In addition, the
University of Chicago has also operated the North Kenwood/Oakland Charter School under the Center for Urban
School Improvement since 1998. Campus interest in urban schools and educational research led to the creation of a
new Committee on Education in 2005, with a home in the Division of the Social Sciences, chaired by Stephen
Raudenbush, who joined the faculty in the Department of Sociology and whom the University lured from the School of
Education at the University of Michigan. The University of Chicago Chronicle highlighted the arrival of Stephen
Raudenbush and noted that the Committee on Education "will bring together distinguished faculty from several
departments and schools considered to be among the best in the world into common research projects, seminars and
training programs. The committee will engage faculty and graduate students from such areas as public policy,
sociology, social service administration, economics, business, mathematics and the sciences to collaborate on the most
critical issues affecting urban schools."[33] The interdisciplinary focus of Chicago's Committee on Education and its
Urban Education Initiative plans to create a "Chicago Model" for urban schools that will draw on and test the best
ideas about teaching, learning, school organization, school governance, teacher preparation, and social service
provision.[34] While interdisciplinary research and collaboration is no stranger to the University of Chicago, it is a
new and innovative approach to the study of education. The Committee on Education at the University of Chicago is
highly quantitatively driven and data focused.[35] If this interdisciplinary approach to educational research is
successful and is modeled by other institutions of higher education, both in the United States and abroad, it will be
interesting to see if a positivistic approach similar to that found at Chicago is followed or if a more relativistic approach
is pursued. Either way, interdisciplinary collaboration may very well be the next chapter in the history of educational
research.

CONCLUSION

From its inception, educational research has been a subject of debate. Educational research has grown significantly over
time and the variety of theoretical approaches that have been implemented in the research has diversified greatly over
time. This essay identified many, but certainly not all, of the key transformations in educational research from the late
nineteenth century to present day. Also, this essay is not an attempt to recommend one theoretical approach over
another in the study and research of education. Rather, it is an attempt to provide a brief history of the types of
educational research efforts and to highlight the epistemological debates that have occurred during this time period.

REFERENCES

Bowen, J. 1981. A History of Western Education; Volume III: The Modern West. London: Methuen.

Cohen, D.K, and C.A. Barnes. 1999. Research and the Purposes of Education, in Issues in Educational Research:
Problems and Possibilities, ed. E.C. Lagemann and L.S. Shulman, 17-41. San Francisco: Jossey-Bass Publishers.

Committee on Education. 2008. The Role of the Committee on Education. Chicago:
The Committee on Education, University of Chicago. http://coe.uchicago.edu/about/index.shtml.

Fuchs, E. 2004. Educational Sciences, Morality and Politics: International Educational Congresses in the Early
Twentieth Century. Paedagogica Historica 40, no. 5: 757-784.

Greenwood, D.J., and M. Levin. 2003. Reconstructing the Relationships between Universities and Society through
Action Research in The Landscape of Qualitative Research: Theories and Issues, 2nd ed., ed. Norman K. Denzin and
Yvonna S. Lincoln, 131-166. Thousand Oaks, CA: SAGE Publications.

Hamilton, D. 2002. Noisy, Fallible and Biased Though It Be (On the Vagaries of Educational Research). British
Journal of Educational Studies 50, no. 1: 144-164.

Harms, W. 2005. Raudenbush to Chair New Committee on Education. The University of Chicago Chronicle. May
26. http://chronicle.uchicago.edu/050526/raudenbush.shtml.

Hofstetter, R., and B. Schneuwly. 2004. Introduction: Educational Sciences in Dynamic and Hybrid
Institutionalization. Paedagogica Historica 40, no. 5: 569-589.

Kliebard, H.M. 1986. The Struggle for the American Curriculum, 1893-1958. Boston: Routledge & Kegan Paul.

Lagemann, E.C. 2000. An Elusive Science: The Troubling History of Education Research. Chicago: The University of
Chicago Press.

Lather, P. 2004. Scientific Research in Education: A Critical Perspective. British Educational Research Journal 30, no.
6: 759-772.

Lincoln, Y.S., and E.G. Guba. 2003. Paradigmatic Controversies, Contradictions, and Emerging Confluences in The
Landscape of Qualitative Research: Theories and Issues, 2nd ed., ed. Norman K. Denzin and Yvonna S. Lincoln, 253-
291. Thousand Oaks, CA: SAGE Publications.

Popkewitz, T.S. 1998. The Culture of Redemption and the Administration of Freedom as Research. Review of
Educational Research 68, no. 1: 1-34.

Pring, R. 2000. Philosophy of Educational Research, 2nd ed. New York: Continuum.

Suskie, L. 2004. Assessing Student Learning: A Common Sense Guide. Bolton, MA: Anker Publishing Company, Inc.

Travers, R.M.W. 1983. How Research Has Changed American Schools: A History from 1840 to the Present.
Kalamazoo, MI: Mythos Press.




Case study
From Wikipedia, the free encyclopedia
In the social sciences and life sciences, a case study (or case report) is a descriptive, exploratory or explanatory
analysis of a person, group or event. An explanatory case study is used to explore causation in order to find underlying
principles.[1][2] Case studies may be prospective (in which criteria are established and cases fitting the criteria are
included as they become available) or retrospective (in which criteria are established for selecting cases from historical
records for inclusion in the study).
Thomas[3] offers the following definition of case study: "Case studies are analyses of persons, events, decisions,
periods, projects, policies, institutions, or other systems that are studied holistically by one or more methods. The case
that is the subject of the inquiry will be an instance of a class of phenomena that provides an analytical frame (an
object) within which the study is conducted and which the case illuminates and explicates." According to J.
Creswell, data collection in a case study occurs over a "sustained period of time."[4]

Another suggestion is that case study should be defined as a research strategy, an empirical inquiry that investigates a
phenomenon within its real-life context. Case study research can mean single and multiple case studies, can include
quantitative evidence, relies on multiple sources of evidence, and benefits from the prior development of theoretical
propositions. Case studies should not be confused with qualitative research; they can be based on any mix of
quantitative and qualitative evidence. Single-subject research provides the statistical framework for making inferences
from quantitative case-study data.[2][5] This is also supported and well formulated by Lamnek (2005): "The case study
is a research approach, situated between concrete data taking techniques and methodologic paradigms."
The case study is sometimes mistaken for the case method, but the two are not the same.
Case selection and structure
An average, or typical, case is often not the richest in information. In clarifying lines of history and causation it is more
useful to select subjects that offer an interesting, unusual or particularly revealing set of circumstances. A case selection
that is based on representativeness will seldom be able to produce these kinds of insights. When selecting a subject for
a case study, researchers will therefore use information-oriented sampling, as opposed to random
sampling. Outlier cases (that is, those which are extreme, deviant or atypical) reveal more information than the
potentially representative case. Alternatively, a case may be selected as a key case, chosen because of the inherent
interest of the case or the circumstances surrounding it. Or it may be chosen because of researchers' in-depth local
knowledge; where researchers have this local knowledge they are in a position to "soak and poke," as Fenno[6] puts it,
and thereby to offer reasoned lines of explanation based on this rich knowledge of setting and circumstances.
Three types of cases may thus be distinguished:
1. Key cases
2. Outlier cases
3. Local knowledge cases
Whatever the frame of reference for the choice of the subject of the case study (key, outlier, local knowledge), there is a
distinction to be made between the subject and the object of the case study. The subject is the "practical, historical
unity"[7] through which the theoretical focus of the study is being viewed. The object is that theoretical focus, the
analytical frame. Thus, for example, if a researcher were interested in US resistance to communist expansion as a
theoretical focus, then the Korean War might be taken to be the subject, the lens, the case study through which the
theoretical focus, the object, could be viewed and explicated.[8]

Beyond decisions about case selection and the subject and object of the study, decisions need to be made about
purpose, approach and process in the case study. Thomas[3] thus proposes a typology for the case study wherein
purposes are first identified (evaluative or exploratory), then approaches are delineated (theory-testing, theory-building
or illustrative), then processes are decided upon, with a principal choice being between whether the study is to be single
or multiple, and choices also about whether the study is to be retrospective, snapshot or diachronic, and whether it is
nested, parallel or sequential. It is thus possible to take many routes through this typology, with, for example, an
exploratory, theory-building, multiple, nested study, or an evaluative, theory-testing, single, retrospective study. The
typology thus offers many permutations for case study structure.
A closely related study in medicine is the case report, which identifies a specific case as treated and/or examined by the
authors, presented in a novel form. These are similar to the case study in that many contain a review of the relevant
literature on the topic, developed through the thorough examination of an array of published cases that fit the criteria
of the report being presented. These case reports can be thought of as brief case studies whose principal discussion
concerns the newly presented case at hand and its novel interest.
Generalizing from case studies
A critical case is defined as having strategic importance in relation to the general problem. A critical case allows the
following type of generalization: "If it is valid for this case, it is valid for all (or many) cases." In its negative form, the
generalization would be: "If it is not valid for this case, then it is not valid for any (or valid for only few) cases."
The case study is also effective for generalizing using the type of test that Karl Popper called falsification, which forms
part of critical reflexivity. Falsification is one of the most rigorous tests to which a scientific proposition can be
subjected: if just one observation does not fit with the proposition it is considered not valid generally and must
therefore be either revised or rejected. Popper himself used the now famous example of, "All swans are white," and
proposed that just one observation of a single black swan would falsify this proposition and in this way have general
significance and stimulate further investigations and theory-building. The case study is well suited for identifying
"black swans" because of its in-depth approach: what appears to be "white" often turns out on closer examination to be
"black."
Galileo Galilei's rejection of Aristotle's law of gravity was based on a case study selected by information-oriented
sampling and not random sampling. The rejection consisted primarily of a conceptual experiment and later on of a
practical one. These experiments, with the benefit of hindsight, seem self-evident. Nevertheless, Aristotle's incorrect
view of gravity dominated scientific inquiry for nearly two thousand years before it was falsified. In his experimental
thinking, Galileo reasoned as follows: if two objects with the same weight are released from the same height at the
same time, they will hit the ground simultaneously, having fallen at the same speed. If the two objects are then stuck
together into one, this object will have double the weight and will, according to the Aristotelian view, therefore fall
faster than the two individual objects. This conclusion seemed contradictory to Galileo. The only way to avoid the
contradiction was to eliminate weight as a determinant factor for acceleration in free fall.[9]

History of the case study
It is generally believed that the case-study method was first introduced into social science by Frederic Le Play in 1829
as a handmaiden to statistics in his studies of family budgets (Les Ouvriers Europeens, 2nd edition, 1879).[10]

The use of case studies for the creation of new theory in social sciences has been further developed by the
sociologists Barney Glaser and Anselm Strauss who presented their research method, Grounded theory, in 1967.
The popularity of case studies in testing hypotheses has developed only in recent decades. One of the areas in which
case studies have been gaining popularity is education, and in particular educational evaluation.[11] (MacDonald, B.,
and R. Walker. 1975. Case Study and the Social Philosophy of Educational Research. Cambridge Journal of Education
5: 2-11; MacDonald, B. 1978. The Experience of Innovation. CARE Occasional Publications no. 6. Norwich, UK:
CARE, University of East Anglia; Kushner, S. 2000. Personalizing Evaluation. Thousand Oaks: Sage Publications.)
Case studies have also been used as a teaching method and as part of professional development, especially in business
and legal education. The problem-based learning (PBL) movement is such an example. When used in (non-business)
education and professional development, case studies are often referred to as critical incidents.
Ethnography is an example of a type of case study, commonly found in communication case studies. Ethnography is
the description, interpretation, and analysis of a culture or social group, through field research in the natural
environment of the group being studied. The main method of ethnographic research is observation, in which the
researcher observes the participants over an extended period of time within the participants' own environment.[12]

When the Harvard Business School was started, the faculty quickly realized that there were no textbooks suitable to a
graduate program in business. Their first solution to this problem was to interview leading practitioners of business and
to write detailed accounts of what these managers were doing. Cases are generally written by business school faculty
with particular learning objectives in mind and are refined in the classroom before publication. Additional relevant
documentation (such as financial statements, time-lines, and short biographies, often referred to in the case as
"exhibits"), multimedia supplements (such as video-recordings of interviews with the case protagonist), and a carefully
crafted teaching note often accompany cases.
The strength of the case study is its subject and relevance. In a case study, you are deliberately trying to isolate a small
study group, one individual case or one particular population.
For example, statistical analysis may have shown that birthrates in African countries are increasing. A case study on
one or two specific countries becomes a powerful and focused tool for determining the social and economic pressures
driving this.
In the design of a case study, it is important to plan and design how you are going to address the study and make sure
that all collected data is relevant. Unlike a scientific report, there is no strict set of rules so the most important part is
making sure that the study is focused and concise; otherwise you will end up having to wade through a lot of irrelevant
information.
It is best if you make yourself a short list of four or five bullet points that you are going to try to address during the
study. If you make sure that all research refers back to these, then you will not be far wrong.
With a case study, even more than a questionnaire or survey, it is important to be passive in your research. You are
much more of an observer than an experimenter, and you must remember that, even in a multi-subject case, each case
must be treated individually; only then can cross-case conclusions be drawn.
How to Analyze the Results
Analyzing the results of a case study tends to rely more on opinion than on statistical methods. The usual idea is to
collate your data into a manageable form and construct a narrative around it.
Use examples in your narrative whilst keeping things concise and interesting. It is useful to show some numerical data
but remember that you are only trying to judge trends and not analyze every last piece of data. Constantly refer back to
your bullet points so that you do not lose focus.
It is always a good idea to assume that a person reading your research may not possess a lot of knowledge of the
subject so try to write accordingly.
In addition, unlike a scientific study which deals with facts, a case study is based on opinion and is very much designed
to provoke reasoned debate. There really is no right or wrong answer in a case study.
Introduction to Design
What is Research Design?
Research design can be thought of as the structure of research -- it is the "glue" that holds all of the elements in a
research project together. We often describe a design using a concise notation that enables us to summarize a complex
design structure efficiently. What are the "elements" that a design includes? They are:
Observations or Measures
These are symbolized by an 'O' in design notation. An O can refer to a single measure (e.g., a measure of body weight),
a single instrument with multiple items (e.g., a 10-item self-esteem scale), a complex multi-part instrument (e.g., a
survey), or a whole battery of tests or measures given out on one occasion. If you need to distinguish among specific
measures, you can use subscripts with the O, as in O1, O2, and so on.
Treatments or Programs
These are symbolized with an 'X' in design notation. The X can refer to a simple intervention (e.g., a one-time surgical
technique) or to a complex hodgepodge program (e.g., an employment training program). Usually, a no-treatment
control or comparison group has no symbol for the treatment (some researchers use X+ and X- to indicate the treatment
and control respectively). As with observations, you can use subscripts to distinguish different programs or program
variations.
Groups
Each group in a design is given its own line in the design structure. If the design notation has three lines, there are three
groups in the design.
Assignment to Group
Assignment to group is designated by a letter at the beginning of each line (i.e., group) that describes how the group
was assigned. The major types of assignment are:
R = random assignment
N = nonequivalent groups
C = assignment by cutoff
Time
Time moves from left to right. Elements that are listed on the left occur before elements that are listed on the right.
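The elements above can be sketched in code. The following is a minimal, illustrative Python sketch of this notation; the string form (e.g., "R O X O") and the function name are conventions invented here for the example, not part of the original notation.

```python
def parse_design(lines):
    """Parse Trochim-style design notation.

    Each string in `lines` is one group; time runs left to right.
    The first symbol, if it is R, N or C, records how the group was
    assigned; the remaining symbols are observations (O) and
    treatments (X) in time order.
    """
    groups = []
    for line in lines:
        symbols = line.split()
        assignment = symbols[0] if symbols[0] in ("R", "N", "C") else None
        sequence = symbols[1:] if assignment else symbols
        groups.append({"assignment": assignment, "sequence": sequence})
    return groups

# Pretest-posttest randomized experiment: two randomly assigned groups,
# treatment (X) only on the top line.
design = parse_design(["R O X O", "R O O"])
print(len(design))            # 2 groups, one per line
print(design[0]["sequence"])  # ['O', 'X', 'O']
```

Reading the parsed structure back follows the same rules as reading the notation: the number of lines gives the number of groups, and the position of each O relative to the X tells you whether it is a pretest or a posttest.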
Design Notation Examples
It's always easier to explain design notation through examples than it is to describe it in words. The figure shows the
design notation for a pretest-posttest (or before-after) treatment versus comparison group randomized experimental
design. Let's go through each of the parts of the design. There are two lines in the notation, so you should realize that
the study has two groups. There are four Os in the notation, two on each line and two for each group. When the Os are
stacked vertically on top of each other it means they are collected at the same time. In the notation you can see that we
have two Os that are taken before (i.e., to the left of) any treatment is given -- the pretest -- and two Os taken after the
treatment is given -- the posttest. The R at the beginning of each line signifies that the two groups are randomly
assigned (making it an experimental design). The design is a treatment versus comparison group one because the top
line (treatment group) has an X while the bottom line (control group) does not. You should be able to see why many of
my students have called this type of notation the "tic-tac-toe" method of design notation -- there are lots of Xs and Os!
Sometimes we have to be more specific in describing the Os or Xs than just using a single letter. In the second figure,
we have the identical research design with some subscripting of the Os. What does this mean? Because all of the Os
have a subscript of 1, there is some measure or set of measures that is collected for both groups on both occasions. But
the design also has two Os with a subscript of 2, both taken at the posttest. This means that there was some measure or
set of measures that were collected only at the posttest.
Types of Designs
What are the different major types of research designs? We can classify designs into a simple threefold classification
by asking some key questions. First, does the design use random assignment to groups? [Don't forget that random
assignment is not the same thing as random selection of a sample from a population!] If random assignment is used, we
call the design a randomized experiment or true experiment. If random assignment is not used, then we have to ask a
second question: Does the design use either multiple groups or multiple waves of measurement? If the answer is yes,
we would label it a quasi-experimental design. If no, we would call it a non-experimental design. This threefold
classification is especially useful for describing the design with respect to internal validity. A randomized experiment
generally is the strongest of the three designs when your interest is in establishing a cause-effect relationship. A
non-experiment is generally the weakest in this respect. I have to hasten to add here that I don't mean that a
non-experiment is the weakest of the three designs overall, but only with respect to internal validity or causal
assessment. In fact, the simplest form of non-experiment is a one-shot survey design that consists of nothing but a
single observation O. This is probably one of the most common forms of research and, for some research questions --
especially descriptive ones -- is clearly a strong design. When I say that the non-experiment is the weakest with respect
to internal validity, all I mean is that it isn't a particularly good method for assessing the cause-effect relationship that
you think might exist between a program and its outcomes.
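The two questions above amount to a small decision procedure, which can be sketched in a few lines of Python (the function name and arguments are invented for this illustration):

```python
def classify_design(random_assignment, n_groups, n_waves):
    """Threefold classification of research designs:

    1. Random assignment to groups? -> randomized (true) experiment.
    2. Otherwise, multiple groups OR multiple waves of measurement?
       -> quasi-experiment.
    3. Otherwise -> non-experiment.
    """
    if random_assignment:
        return "randomized experiment"
    if n_groups > 1 or n_waves > 1:
        return "quasi-experiment"
    return "non-experiment"

print(classify_design(True, 2, 2))    # randomized experiment
print(classify_design(False, 2, 2))   # quasi-experiment
print(classify_design(False, 1, 1))   # non-experiment (one-shot survey)
```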
To illustrate the different types of designs, consider one of each in design notation. The first design is a posttest-only
randomized experiment. You can tell it's a randomized experiment because it has an R at the beginning of each line,
indicating random assignment. The second design is a pre-post nonequivalent groups quasi-experiment. We know it's
not a randomized experiment because random assignment wasn't used. And we know it's not a non-experiment because
there are both multiple groups and multiple waves of measurement. That means it must be a quasi-experiment. We add
the label "nonequivalent" because in this design we do not explicitly control the assignment and the groups may be
nonequivalent or not similar to each other (see nonequivalent group designs). Finally, we show a posttest-only
non-experimental design. You might use this design if you want to study the effects of a natural disaster like a flood or
tornado and you want to do so by interviewing survivors. Notice that in this design, you don't have a comparison group
(e.g., interview people in a town down the road that didn't have the tornado to see what differences the tornado caused)
and you don't have multiple waves of measurement (e.g., a pre-tornado level of how people in the ravaged town were
doing before the disaster). Does it make sense to do the non-experimental study? Of course! You could gain lots of
valuable information by well-conducted post-disaster interviews. But you may have a hard time establishing which of
the things you observed are due to the disaster rather than to other factors like the peculiarities of the town or
pre-disaster characteristics.
Factorial Designs
A Simple Example
Probably the easiest way to begin understanding factorial designs is by looking at an example. Let's imagine a design
where we have an educational program and we would like to look at a variety of program variations to see which
works best. For instance, we would like to vary the amount of time the children receive instruction, with one group
getting 1 hour of instruction per week and another getting 4 hours per week. And we'd like to vary the setting, with one
group getting the instruction in-class (probably pulled off into a corner of the classroom) and the other group being
pulled out of the classroom for instruction in another room. We could think about having four separate groups to do
this, but when we are varying the amount of time in instruction, what setting would we use: in-class or pull-out? And
when we were studying setting, what amount of instruction time would we use: 1 hour, 4 hours, or something else?
With factorial designs, we don't have to compromise when answering these questions. We can have it both ways if we
cross each of our two time-in-instruction conditions with each of our two settings. Let's begin by doing some defining
of terms. In factorial designs, a factor is a major independent variable. In this example we have two factors: time in
instruction and setting. A level is a subdivision of a factor. In this example, time in instruction has two levels and
setting has two levels. Sometimes we depict a factorial design with a numbering notation. In this example, we can say
that we have a 2 x 2 (spoken "two-by-two") factorial design. In this notation, the number of numbers tells you how
many factors there are and the number values tell you how many levels. If I said I had a 3 x 4 factorial design, you
would know that I had 2 factors and that one factor had 3 levels while the other had 4. The order of the numbers makes
no difference and we could just as easily term this a 4 x 3 factorial design. The number of different treatment groups
that we have in any factorial design can easily be determined by multiplying through the number notation. For
instance, in our example we have 2 x 2 = 4 groups. In our notational example, we would need 3 x 4 = 12 groups.
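The "multiply through the notation" rule is a one-liner in Python; `factorial_groups` is a name chosen here for illustration:

```python
from math import prod

def factorial_groups(levels):
    """Number of treatment groups in a factorial design: the product of
    the number of levels of each factor (one entry per factor)."""
    return prod(levels)

print(factorial_groups([2, 2]))   # a 2 x 2 design needs 4 groups
print(factorial_groups([3, 4]))   # a 3 x 4 design needs 12 groups
```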
We can also depict a factorial design in design notation. Because of the
treatment level combinations, it is useful to use subscripts on the treatment (X)
symbol. We can see in the figure that there are four groups, one for each combination of levels of factors. It is also
immediately apparent that the groups were randomly assigned and that this is a posttest-only design.
Now, let's look at a variety of different results we might get from this simple 2 x 2 factorial design. Each of the
following figures describes a different possible outcome. And each outcome is shown in table form (the 2 x 2 table with
the row and column averages) and in graphic form (with each factor taking a turn on the horizontal axis). You should
convince yourself that the information in the tables agrees with the information in both of the graphs. You should also
convince yourself that the pair of graphs in each figure show the exact same information graphed in two different ways.
The lines that are shown in the graphs are technically not necessary -- they are used as a visual aid to enable you to
easily track where the averages for a single level go across levels of another factor. Keep in mind that the values shown
in the tables and graphs are group averages on the outcome variable of interest. In this example, the outcome might be a
test of achievement in the subject being taught. We will assume that scores on this test range from 1 to 10 with higher
values indicating greater achievement. You should study carefully the outcomes in each figure in order to understand
the differences between these cases.
In the second main effect graph we see that in-class training was better than pull-out training for all amounts of time.
An interaction effect exists when differences on one factor depend on which level you are on of another factor. It is
important to recognize that an interaction is between factors, not levels. We wouldn't say there's an interaction between
4 hours/week and in-class treatment. Instead, we would say that there's an interaction between time and setting, and
then we would go on to describe the specific levels involved.
How do you know if there is an interaction in a factorial design? There are three ways you can determine whether
there's an interaction. First, when you run the statistical analysis, the statistical table will report on all main effects and
interactions. Second, you know there's an interaction when you can't talk about the effect of one factor without
mentioning the other factor. If you can say at the end of our study that time in instruction makes a difference, then you
know that you have a main effect and not an interaction (because you did not have to mention the setting factor when
describing the results for time). On the other hand, when you have an interaction it is impossible to describe your
results accurately without mentioning both factors. Finally, you can always spot an interaction in the graphs of group
means -- whenever there are lines that are not parallel, there is an interaction present! If you check out the main effect
graphs above, you will notice that all of the lines within a graph are parallel. In contrast, for all of the interaction
graphs, you will see that the lines are not parallel.
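The "non-parallel lines" check can be sketched for a 2 x 2 table of group means. This is an illustrative check on the pattern of means only, not a substitute for the statistical test mentioned above, and the function name is invented here:

```python
def has_interaction(means, tol=1e-9):
    """Check a 2 x 2 table of group means for an interaction pattern.

    `means` has rows = levels of factor A, columns = levels of factor B.
    The plotted lines are parallel exactly when the difference across
    factor B is the same at both levels of factor A; any other pattern
    means the lines cross or diverge, i.e. an interaction.
    """
    diff_at_a0 = means[0][1] - means[0][0]
    diff_at_a1 = means[1][1] - means[1][0]
    return abs(diff_at_a0 - diff_at_a1) > tol

# Parallel lines (main effects only): no interaction.
print(has_interaction([[5, 7], [3, 5]]))   # False
# Cross-over pattern: interaction present.
print(has_interaction([[7, 5], [5, 7]]))   # True
```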
In the first interaction effect graph, we see that one combination of levels -- 4 hours/week and in-class setting -- does
better than the other three. In the second interaction we have a more complex "cross-over" interaction. Here, at 1
hour/week the pull-out group does better than the in-class group, while at 4 hours/week the reverse is true.
Furthermore, both of these combinations of levels do equally well.
Summary
Factorial design has several important features. First, it has great flexibility for exploring or enhancing the signal
(treatment) in our studies. Whenever we are interested in examining treatment variations, factorial designs should be
strong candidates as the designs of choice. Second, factorial designs are efficient. Instead of conducting a series of
independent studies we are effectively able to combine these studies into one. Finally, factorial designs are the only
effective way to examine interaction effects.
So far, we have only looked at a very simple 2 x 2 factorial design structure. You may want to look at some factorial
design variations to get a deeper understanding of how they work. You may also want to examine how we approach
the statistical analysis of factorial experimental designs.
BIBLIOGRAPHY OF RESEARCH METHODS TEXTS
The ACRL Instruction Section Research and Scholarship Committee supports research and scholarship opportunities
for academic and research librarians by identifying instruction-related topics for study and research. The Bibliography
of Research Methods Texts provides information on research methods relevant to library and information science, and
is intended to complement the Research Agenda for Library Instruction and Information Literacy.
This site serves as the current research methods texts bibliography. It lists English language texts that are currently in
print and that focus on research methods in librarianship or the social sciences. The books are arranged into general
subject categories and then listings appear alphabetically by author. Paired with each citation is a review authored by
committee members.
The bibliography is reviewed and updated every three years, and is intended to be selective rather than exhaustive. The
annotated bibliography is a work in progress: current selections are based on recommendations from committee
members.
The IS Research and Scholarship Committee welcomes suggestions of citations for inclusion or comments on current
selections. Please contact the current Research & Scholarship Committee chair with any questions or suggestions.
