Meaning of research
Research in common parlance refers to a search for knowledge. One can also define research as a scientific
and systematic search for pertinent information on a specific topic. In fact, research is an art of scientific
investigation. The Advanced Learner's Dictionary of Current English lays down the meaning of research as
"a careful investigation or inquiry especially through search for new facts in any branch of knowledge."[1]
Redman and Mory define research as "a systematized effort to gain new knowledge."[2] Some people consider
research as a movement, a movement from the known to the unknown. It is actually a voyage of discovery.
We all possess the vital instinct of inquisitiveness for, when the unknown confronts us, we wonder and our
inquisitiveness makes us probe and attain full and fuller understanding of the unknown. This inquisitiveness
is the mother of all knowledge and the method, which man employs for obtaining the knowledge of whatever
the unknown, can be termed as research.
Research is an academic activity and as such the term should be used in a technical sense.
According to Clifford Woody, research comprises defining and redefining problems, formulating hypotheses
or suggested solutions; collecting, organizing and evaluating data; making deductions and reaching
conclusions; and at last carefully testing the conclusions to determine whether they fit the formulated
hypothesis. D. Slesinger and M. Stephenson in the Encyclopedia of Social Sciences define research as "the
manipulation of things, concepts or symbols for the purpose of generalizing to extend, correct or verify
knowledge, whether that knowledge aids in construction of theory or in the practice of an art."[3] Research is,
thus, an original contribution to the existing stock of knowledge making for its advancement. It is the pursuit
of truth with the help of study, observation, comparison and experiment. In short, the search for knowledge
through objective and systematic method of finding solution to a problem is research. The systematic approach
concerning generalization and the formulation of a theory is also research. As such the term research refers
to the systematic method consisting of enunciating the problem, formulating a hypothesis, collecting the facts
or data, analyzing the facts and reaching certain conclusions either in the form of solution(s) towards the
concerned problem or in certain generalizations for some theoretical formulation.
Objectives of research
The purpose of research is to discover answers to questions through the application of scientific procedures.
The main aim of research is to find out the truth which is hidden and which has not been discovered as yet.
Though each research study has its own specific purpose, we may think of research objectives as falling into
a number of following broad groupings:
1. To gain familiarity with a phenomenon or to achieve new insights into it (studies with this object in view
are termed as exploratory or formulative research studies);
2. To portray accurately the characteristics of a particular individual, situation or a group (studies with this
object in view are known as descriptive research studies);
3. To determine the frequency with which something occurs or with which it is associated with something else
(studies with this object in view are known as diagnostic research studies);
4. To test a hypothesis of a causal relationship between variables (such studies are known as hypothesis-testing
research studies).
Motivation in research
What makes people undertake research? This is a question of fundamental importance. The possible motives
for doing research may be either one or more of the following:
1. Desire to get a research degree along with its consequential benefits;
2. Desire to face the challenge in solving the unsolved problems, i.e., concern over practical problems initiates
research;
3. Desire to get intellectual joy of doing some creative work;
4. Desire to be of service to society;
5. Desire to get respectability.
However, this is not an exhaustive list of factors motivating people to undertake research studies.
Many more factors such as directives of government, employment conditions, curiosity about new things,
desire to understand causal relationships, social thinking and awakening, and the like may as well motivate
(or at times compel) people to perform research operations.
Significance of Research
"All progress is born of inquiry. Doubt is often better than overconfidence, for it leads to inquiry, and
inquiry leads to invention" is a famous Hudson Maxim quotation in the context of which the significance of
research can well be understood. Increased amounts of research make progress possible. Research inculcates scientific
and inductive thinking and it promotes the development of logical habits of thinking and organization.
The role of research in several fields of applied economics, whether related to business or to the economy as
a whole, has greatly increased in modern times. The increasingly complex nature of business and
government has focused attention on the use of research in solving operational problems. Research, as an aid
to economic policy, has gained added importance, both for government and business.
Research provides the basis for nearly all government policies in our economic system.
For instance, government budgets rest in part on an analysis of the needs and desires of the people and on
the availability of revenues to meet these needs. The cost of needs has to be equated to probable revenues
and this is a field where research is most needed. Through research we can devise alternative policies and
can as well examine the consequences of each of these alternatives.
Pure and applied research
There are different types of research, ranging from pure research to applied research. But what does this mean?
Pure research or curiosity-driven research involves seeking systematically and methodically for knowledge
without having any particular application in mind. A distinction is often made between pure basic research
and focused basic research, where the latter can be viewed as providing a platform for applications. Pure
research is not necessarily economically profitable in itself but may offer conditions for future innovations
and scientific breakthroughs.
Applied research: Applied research involves the systematic and methodical search for knowledge with a
specific application in mind.
Development work: In development work research findings are used to create a new product.
Sectorial research: Sectorial research includes all the concepts described above restricted to a specific social
sector.
Two more terms are used in Sweden as well: contract research and research with individual responsibility.
Contract research: In contract research the focus of the project, its extent and the level of ambition is
determined by whoever commissions it.
Research with individual responsibility: Research with individual responsibility can be funded either by a
research council or through the budget of a higher education institution and refers to research that is justified
on sectorial or industrial grounds in which the aim is long-term development of knowledge. The researchers
themselves initiate this research and are responsible for its results.
Pure and applied research: Pure research (also known as basic or fundamental research) is exploratory
in nature and is conducted without any practical end-use in mind. It is driven by gut instinct, interest, curiosity
or intuition, and simply aims to advance knowledge and to identify/explain relationships between variables.
However, as the term fundamental suggests, pure research may provide a foundation for further, sometimes
applied research. In general, applied research is not carried out for its own sake but in order to solve specific,
practical questions or problems. It tends to be descriptive, rather than exploratory and is often based upon pure
research. However, the distinction between applied and pure research may sometimes be unclear; for example,
is research into the genetic codes of plants being carried out simply to advance knowledge or for possible
future commercial exploitation? It could be argued that the only real difference between these two categories
of research is the length of time between research and reasonably foreseeable practical applications, either in
the public or private sectors.
The terms quantitative research and qualitative research are commonly used within the research
community and implicitly indicate the nature of research being undertaken and the types of assumptions being
made. In reality, many research activities do not fall neatly into one or other category, as we shall discuss
later. However, as a staging post in our exploration of research, it is useful to discuss each term. The terms
will be explored in the next section of this theme.
poorly designed experiments, badly executed experiments or faulty interpretations. As such the researcher
must pay all possible attention while developing the experimental design and must state only probable
inferences. The purpose of survey investigations may also be to provide scientifically gathered information
to work as a basis for the researchers for their conclusions.
The scientific method is, thus, based on certain basic postulates which can be stated as under:
1. It relies on empirical evidence;
2. It utilizes relevant concepts;
3. It is committed to only objective considerations;
4. It presupposes ethical neutrality, i.e., it aims at nothing but making only adequate and correct statements
about population objects;
5. It results into probabilistic predictions;
6. Its methodology is made known to all concerned for critical scrutiny and for use in testing the conclusions
through replication;
7. It aims at formulating most general axioms or what can be termed as scientific theories.
Thus, the scientific method encourages a rigorous, impersonal mode of procedure dictated by the demands
of logic and objective procedure. Accordingly, scientific method implies an objective, logical and
systematic method, i.e., a method free from personal bias or prejudice, a method to ascertain demonstrable
qualities of a phenomenon capable of being verified, a method wherein the researcher is guided by the rules
of logical reasoning, a method wherein the investigation proceeds in an orderly manner and a method that
implies internal consistency.
Two Research Fallacies
A fallacy is an error in reasoning, usually based on mistaken assumptions. Researchers should be familiar with
the ways they could go wrong and the fallacies they are susceptible to. Here, I discuss two of the most
important. The ecological fallacy occurs when you make conclusions about individuals based only on
analyses of group data. For instance, assume that you measured the math scores of a particular classroom
and found that they had the highest average score in the district. Later (probably at the mall) you run into
one of the kids from that class and you think to yourself, 'She must be a math whiz.' Aha! Fallacy! Just
because she comes from the class with the highest average doesn't mean that she is automatically a high-
scorer in math. She could be the lowest math scorer in a class that otherwise consists of math geniuses.
An exception fallacy is sort of the reverse of the ecological fallacy. It occurs when you reach a group
conclusion on the basis of exceptional cases. This kind of fallacious reasoning is at the core of a lot of
sexism and racism. The stereotype is of the guy who sees a woman make a driving error and concludes that
women are terrible drivers. Wrong! Fallacy!
Both of these fallacies point to some of the traps that exist in research and in everyday reasoning. They also
point out how important it is to do research. It is important to determine empirically how individuals
perform, rather than simply rely on group averages. Similarly, it is important to look at whether there are
correlations between certain behaviors and certain groups.
In logic, a distinction is often made between two broad methods of reasoning known as the deductive and
inductive approaches. Deductive reasoning works from the more general to the more specific. Sometimes
this is informally called a top-down approach. You might begin with thinking up a theory about your topic
of interest. You then narrow that down into more specific hypotheses that you can test. You narrow down
even further when you collect observations to address the hypotheses. This ultimately leads you to be able to
test the hypotheses with specific data: a confirmation (or not) of your original theories.
Theory → Hypothesis → Observation → Confirmation
Inductive reasoning works the other way, moving from specific observations to broader generalizations and
theories. Informally, this is sometimes called a bottom-up approach. (Please note that it's bottom-up and not
bottoms up, which is the kind of thing the bartender says to customers when he's trying to close for the
night!) In inductive reasoning, you begin with specific observations and measures, begin detecting patterns
and regularities, formulate some tentative hypotheses that you can explore, and finally end up developing
some general conclusions or theories. These two methods of reasoning have a different feel to them when
you're conducting research. Inductive reasoning, by its nature, is more open-ended and exploratory,
especially at the beginning. Deductive reasoning is narrower in nature and is concerned with testing or
confirming hypotheses. Even though a particular study may look like it's purely deductive (for example, an
experiment designed to test the hypothesized effects of some treatment on some outcome), most social
research involves both inductive and deductive reasoning processes at some time in the project. Even in the
most constrained experiment, the researchers might observe patterns in the data that lead them to develop
new theories.
Observation → Pattern → Tentative Hypothesis → Theory
Ethics in Research
This is a time of profound change in the understanding of the ethics of applied social research. From the
time immediately after World War II until the early 1990s, there was a gradually developing consensus
about the key ethical principles that should underlie the research endeavor. Two marker events stand out
(among many others), as symbolic of this consensus. The Nuremberg War Crimes Trial following World
War II brought to public view the ways German scientists had used captive human beings as subjects in
often gruesome experiments. In the 1950s and 1960s, the Tuskegee Syphilis Study involved the withholding
of known effective treatment for syphilis from African-American participants who were infected. Events
like these forced the reexamination of ethical standards and the gradual development of a consensus that
potential human subjects needed to be protected from being used as guinea pigs in scientific research.
By the 1990s, the dynamics of the situation changed. Cancer patients and persons with AIDS fought publicly
with the medical research establishment about the length of time needed to get approval for and complete
research into potential cures for fatal diseases. In many cases, it is the ethical assumptions of the previous
thirty years that drive this go-slow mentality. According to previous thinking, it is better to risk denying
treatment for a while until there is enough confidence in a treatment, than risk harming innocent people (as
in the Nuremberg and Tuskegee events). Recently, however, people threatened with fatal illness have been
saying to the research establishment that they want to be test subjects, even under experimental conditions of
considerable risk. Several vocal and articulate patient groups who wanted to be experimented on came up
against an ethical review system designed to protect them from being the subjects of experiments! Although
the last few years in the ethics of research have been tumultuous ones, a new consensus is beginning to
evolve that involves the stakeholder groups most affected by a problem participating more actively in the
formulation of guidelines for research. Although it's not entirely clear, at present, what the new consensus
will be, it is almost certain that it will not fall at either extreme: protecting against human experimentation at
all costs versus allowing anyone who is willing to be the subject of an experiment.
The Language of Ethics: As in every other aspect of research, the area of ethics has its own vocabulary. In
this section, I present some of the most important language regarding ethics in research. The principle of
voluntary participation requires that people not be coerced into participating in research. This is especially
relevant where researchers had previously relied on captive audiences for their subjects: prisons,
universities, and places like that. Closely related to the notion of voluntary participation is the requirement
of informed consent. Essentially, this means that prospective research participants must be fully informed
about the procedures and risks involved in research and must give their consent to participate. Ethical
standards also require that researchers not put participants in a situation where they might be at risk of harm
as a result of their participation. Harm can be defined as both physical and psychological. Two standards are
applied to help protect the privacy of research participants. Almost all research guarantees the participants
confidentiality; they are assured that identifying information will not be made available to anyone who is
not directly involved in the study. The stricter standard is the principle of anonymity, which essentially
means that the participant will remain anonymous throughout the study, even to the researchers themselves.
Clearly, the anonymity standard is a stronger guarantee of privacy, but it is sometimes difficult to
accomplish, especially in situations where participants have to be measured at multiple time points (for
example in a pre-post study). Increasingly, researchers have had to deal with the ethical issue of a person's
right to service. Good research practice often requires the use of a no-treatment control group, a group of
participants who do not get the treatment or program that is being studied. But when that treatment or
program may have beneficial effects, persons assigned to the no-treatment control may feel their rights to
equal access to services are being curtailed. Even when clear ethical standards and principles exist, at times
the need to do accurate research runs up against the rights of potential participants. No set of standards can
possibly anticipate every ethical circumstance. Furthermore, there needs to be a procedure that assures that
researchers will consider all relevant ethical issues in formulating research plans. To address such needs
most institutions and organizations have formulated an Institutional Review Board (IRB), a panel that
reviews grant proposals with respect to ethical implications and decides whether additional
actions need to be taken to assure the safety and rights of participants. By reviewing proposals for research,
IRBs also help protect the organization and the researcher against potential legal implications of neglecting
to address important ethical issues of participants.
Definitions:
Concept: An abstraction encompassing observed events; a word that represents the similarities or common
aspects of objects or events that are otherwise quite different from one another. The purpose of a concept is to
simplify thinking by including a number of events (or the common aspects of otherwise diverse things) under
one general heading. Ex: chair, dog, tree, liquid, doughnut, etc.
Concepts are abstract ideas which have been "defined" according to particular characteristics or
generalizations (constructs) about them.
Construct: Constructs are the highest-level abstractions of complicated objects and events, created by
combining concepts and less complex constructs. They are used to account for observed regularities and
relationships, and to summarize observations and explanations. A construct is a concept with the added
meaning of having been deliberately and consciously invented or seriously adopted for a special scientific
purpose: it enters into theoretical schemes and is theoretically related in various ways to other constructs.
Scientists measure things in three classes: direct observables, indirect observables (not experienced or
observed first hand), and constructs. Constructs are theoretical creations based on observations that can be
neither observed directly nor observed indirectly. Ex: motivation, visual acuity, justice, problem-solving
ability.
A construct is based on concepts, or can be thought of as a conceptual model that has measurable aspects.
This will allow the researcher to "measure" the concept and have a common acceptable platform when other
researchers do similar research. Constructs are built from the logical combination of a number of more
observable concepts. In the case of source credibility, we could define the construct as the combination of the
concepts of expertise, objectivity, and status. Each of these concepts can be more directly observed in an
individual. We might also consider some of these terms to be constructs themselves, and break them down
into combinations of still more concrete concepts.
Advertising effectiveness is a construct, and related concepts would be brand awareness and
consumer behavior. Pain is a concept, a theoretical model of pain would be a construct, and a pain assessment
tool would give a measurable variable.
Some definitions of constructs:
PERMISSIVENESS:
Oxford def. a) tolerant; liberal; b) giving permission
REINFORCEMENT:
Oxford def. Strengthen or support, especially with additional personnel, material etc.
READING ABILITY:
Oxford def. none
ACHIEVEMENT:
Oxford def. a) something achieved b) act of achieving; achieve: a) attain by effort; acquire; gain; earn
b) accomplish
Measured def. To put your research findings into a written format to be used as documentation.
INTERESTS:
Oxford def. 1) a) curiosity, concern b) quality exciting curiosity c) noteworthiness, importance 2) subject or
hobby in which one is concerned 3) advantage or profit 4) self-interest; to excite the curiosity or attention;
to take a personal interest.
Experimental def. An educational topic that concerns you and is worthy of your research.
Measured def. same as above
NEEDS:
Oxford def. (archaic) of necessity; requirement
Measured def. The requirements for charting the research findings in order to give documented
support.
TRANSFER OF TRAINING:
Oxford def. none
Experimental def. To give another researcher the information needed so that they can continue
researching from where you left off.
LEADERSHIP:
Oxford def. A person who leads or is followed by others.
Experimental def. A person who has the ability to direct others through a research experiment, and
who will set the tone of the research.
Measured def. A person who will design the way in which the research findings will be documented.
CLASS ATMOSPHERE:
Oxford def. none
Experimental def. The tone of a class setting that will allow for similar testing conditions.
DELINQUENCY:
Oxford def. Failing in one's duty
Experimental def. Failing to plan out the way you are going to conduct your research so that your
findings are valid.
Measured def. Failing to document your findings in a way that another researcher can duplicate your
research findings.
ORGANIZATIONAL CONFLICT:
Oxford def. none
The operational definition must be very closely associated with the theoretical definition. It must state clearly
how observations will be made so they will reflect as fully as possible the meaning associated with the verbal
concept or construct. The operational definition must tell us how to observe and quantify the concept in the
real world. This connection between theoretical and operational definitions is quite critical. This connection
establishes the validity of the measurement.
Two Types of Operational Definitions (illustrated here with the concept of hunger):
Measured Operational Definition: An actual value (score) from a test or questionnaire the researchers
would develop to measure hunger.
Experimental Operational Definition: A manipulated scenario to produce the condition of hunger. (Such as
preventing the subject from consuming anything for x number of hours)
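As a rough sketch, the measured operational definition above could be implemented as a questionnaire score; the items and the 0-5 rating scale here are hypothetical, not taken from any validated instrument:

```python
# A sketch of a *measured* operational definition of hunger: the concept is
# operationalized as the total score on a short questionnaire. Items and
# scale are invented for illustration.
responses = {
    "stomach_discomfort": 4,        # each item rated 0 (not at all) to 5 (extreme)
    "preoccupation_with_food": 3,
    "urge_to_eat": 5,
}

hunger_score = sum(responses.values())
max_score = 5 * len(responses)
print(f"hunger score = {hunger_score} / {max_score}")   # 12 / 15
```

The point is that the abstract concept "hunger" becomes a number another researcher can reproduce by applying the same instrument the same way.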
Without knowing explicitly what a term means, we cannot evaluate the research or determine whether the
researcher has carried out what was proposed in the problem statement. We need not necessarily agree with such
a definition, but as long as we know what the researcher means when using the term, we are able to understand
and appraise it appropriately. A formal definition contains three parts: (a) the term to be defined; (b) the
genera, or the general class to which the concept being defined belongs; and (c) the differentia, the specific
characteristics or traits that distinguish the concept being defined from all other members of the general
classification.
Polytomous: A variable that takes on multiple discrete values. Example: religion (Catholicism, Islam,
Judaism, Hinduism, Buddhism, etc.)
Continuous: A variable that takes on an infinite number of values within a range. Example: Height &
Weight
Attribute: Any variable that cannot be manipulated by the researcher. For example, all human
characteristics are attribute variables: intelligence, sex, socioeconomic status, etc.
Dependent: The dependent variable is the phenomenon that is the object of study and investigation
(also: Outcome, Response, Criterion, and Effect).
Categorical: Referred to as nominal measurements. One creates categories and classifies all cases
that fall under a category's definition, without rank order. All cases within the same category are
considered of equal value and are not differentiated.
Latent: An unobserved entity that stands between the independent variable and the dependent
variable, and mediates the effect of the independent variable on the dependent variable. It is dependent
on the independent variable as well as other constructs, yet still plays a role in determining the outcome
(possibly: Intervening, Mediating, Hypothetical construct).
Control: An independent variable that is measured in a study because it potentially influences the
dependent variable. It is a clearly defined independent variable, measured in an attempt to eliminate all bias
in regard to its effects on the dependent variable. (Keeps the study in check.)
Confounding: Variables not actually measured or observed in a study; they exist, yet their influence
cannot be directly detected or understood in the study. One often becomes aware of a confounding variable at
the end of a study, upon realizing that there is an effect that was not measured or accounted for but should
be addressed.
Every Problem Needs Further Delineation
To comprehend fully the meaning of the problem, the researcher should eliminate any possibility of
misunderstanding by:
Stating the hypotheses and/or research questions: Describing the specific hypotheses being
tested or questions being asked.
Delimiting the research: Fully disclosing what the researcher intends to do and, conversely,
does not intend to do.
Defining the terms: Giving the meanings of all terms in the statements of the problem and
subproblems that have any possibility of being misunderstood.
Stating the assumptions: Presenting a clear statement of all assumptions on which the research
will rest.
These matters, which facilitate understanding of the research, are called the setting of the problem.
Theory:
There are almost as many theoretical conceptualizations as there are researchers conducting research and using
theories. Researchers view theory in different ways and across different research disciplines. The systematic
nature of theory is to provide explanatory leverage on a problem, to describe innovative features of a
phenomenon, or to provide predictive utility. A theory can be defined as a group of logically organized laws or
relationships that constitutes explanation in a discipline. Theory that is driven by research is directly
relevant to practice and beneficial to the field. Although substantive theory is often used as a theoretical
framework and a strategic link in the formulation and generation of grounded formal theory, grounded
theory emphasizes the concept of emergence that inspires new research. It is important to note that the
process of grounded theory and substantive theory produces four primary constructs: (a) heuristics
(expansion of the existing body of knowledge, discovery, and problem solving), (b) description, (c)
delimitation, and (d) parsimony. The relationship between theory, practice, and research is central to the
discussion of theory as a conceptualized cycle of development and facilitation.
However, theory is speculative, and one's theory seems to follow one's chosen philosophical commitment,
even to a degree that advocates of different philosophical stances do not necessarily understand each other's
conceptions of theory. The theoretical process puts boundaries on what is examined or studied.
Stating the Hypotheses and/or Research Questions:
Hypotheses are tentative, intelligent guesses posited for the purpose of directing one's thinking toward the
solution of the problem, necessary in searching for relevant data and in establishing a tentative goal.
Hypotheses are neither proved nor disproved. They are nothing more than tentative propositions set forth to
assist in guiding the investigation of a problem or to provide possible explanations for the observations made,
to be either accepted or rejected.
Hypotheses have nothing to do with proof. Their acceptance or rejection is dependent on what the data, and
the data alone, ultimately reveal. Hypotheses may originate in the subproblems, often in a one-to-one
correspondence. A hypothesis provides a position from which a researcher begins to explore the problem and
subproblems, and checkpoints against which to test the findings that the data reveal in order to accept or
reject the hypotheses.
If the data do not support the research hypothesis, don't be disturbed; it merely means that the educated guess
about the outcome of the investigation was incorrect. Frequently, rejected hypotheses are a source of genuine
and gratifying surprise: a truly unexpected discovery.
Null Hypothesis: The null hypothesis is an indicator only; it reveals whether some influences, forces, or
factors have resulted in a statistical difference, or whether no such difference exists.
Null Hypothesis Dynamics:
If the null hypothesis shows the presence of dynamics, then the next logical questions are as follows:
For example, let's say that a team of social workers believe that one type of after-school programme
for teenagers (we'll call it Programme A) is more effective than another programme (we'll call it
Programme B) in terms of reducing high school dropout rates.
The null hypothesis, stating that there will be no difference in the high school graduation rates of
teenagers enrolled in Programme A and those enrolled in Programme B, has been rejected. This is
encouraging news, but it is only an intermediate (mezzanine) conclusion.
What specifically were the factors within the programme that caused the null hypothesis to be rejected?
These fundamental questions will uncover facts that may lie very close to the discovery of new
substantive knowledge, the purpose of all research.
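The Programme A versus Programme B example can be sketched as a simple significance test. The graduation counts below are invented for illustration, and the permutation test is one of several ways to test such a null hypothesis, chosen here because it needs no statistical tables:

```python
import random

# Hypothetical graduation outcomes (1 = graduated, 0 = dropped out); the
# counts are made up for illustration, not real programme data.
prog_a = [1] * 45 + [0] * 5    # Programme A: 90% graduation rate
prog_b = [1] * 35 + [0] * 15   # Programme B: 70% graduation rate

def rate(outcomes):
    return sum(outcomes) / len(outcomes)

observed = rate(prog_a) - rate(prog_b)

# Permutation test: under the null hypothesis the programme labels are
# interchangeable, so shuffle the pooled outcomes and count how often a
# difference at least as large as the observed one arises by chance alone.
random.seed(0)
pooled = prog_a + prog_b
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = rate(pooled[:len(prog_a)]) - rate(pooled[len(prog_a):])
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")
```

A small p-value leads to rejecting the null hypothesis of no difference; as the text notes, that rejection says nothing yet about *which* factors within the programme produced the difference.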
about why this is so. We may have many possibly incompatible hunches and will need to collect
information that enables us to see which hunches work best empirically.
Answering the `why' questions involves developing causal explanations.
Causal explanations argue that phenomenon Y (e.g. income level) is affected by factor X (e.g. gender).
Some causal explanations will be simple while others will be more complex. For example, we might
argue that there is a direct effect of gender on income (i.e. simple gender discrimination). We might
argue for a causal chain, such as that gender affects choice of field of training which in turn affects
occupational options, which are linked to opportunities for promotion, which in turn affect income
level. Or we could posit a more complex model involving a number of interrelated causal chains.
Prediction, correlation and causation
People often confuse correlation with causation. Simply because one event follows another, or two
factors co-vary, does not mean that one causes the other. The link between two events may be
coincidental rather than causal.
The divorce rate changed over the twentieth century, and the crime rate increased a few years later. But this
does not mean that divorce causes crime. Rather than divorce causing crime, divorce and crime rates
might both be due to other social changes. Students at fee-paying private schools typically perform better in
their final year of schooling than those at government funded schools. But this need not be because
private schools produce better performance. It may be that attending a private school and better final-
year performance are both the outcome of some other cause (see later discussion).
Confusing causation with correlation also confuses prediction with causation and prediction with
explanation. Where two events or characteristics are correlated we can predict one from the other.
Knowing the type of school attended improves our capacity to predict academic achievement. But this
does not mean that the school type affects academic achievement. Predicting performance on the basis
of school type does not tell us why private school students do better. Good prediction does not depend
on causal relationships. Nor does the ability to predict accurately demonstrate anything about causality.
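The school-type example can be made concrete with a toy simulation, under entirely made-up assumptions: a hidden common cause (call it "family resources") drives both school type and final-year performance, so the two co-vary even though neither causes the other.

```python
import random

random.seed(0)

# Toy model (illustrative only): hidden factor Z drives BOTH school type X
# and performance Y. X has no causal effect on Y, yet they are correlated.
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]                 # hidden common cause
x = [1 if zi + random.gauss(0, 1) > 0 else 0 for zi in z]  # 1 = private school
y = [10 * zi + random.gauss(0, 3) for zi in z]             # performance: depends on Z only

def mean(v):
    return sum(v) / len(v)

def corr(a, b):
    ma, mb = mean(a), mean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    sa = (sum((ai - ma) ** 2 for ai in a) / len(a)) ** 0.5
    sb = (sum((bi - mb) ** 2 for bi in b) / len(b)) ** 0.5
    return cov / (sa * sb)

# X predicts Y quite well even though X never enters the formula for Y.
print(f"corr(school type, performance) = {corr(x, y):.2f}")
```

Knowing school type here genuinely improves prediction of performance, which is exactly why good prediction demonstrates nothing about causation.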
Recognizing that causation is more than correlation highlights a problem. While we can observe
correlation we cannot observe cause. We have to infer cause. These inferences however are
'necessarily fallible . . . [they] are only indirectly linked to observables' (Cook and Campbell, 1979:
10). Because our inferences are fallible we must minimize the chances of incorrectly saying that a
relationship is causal when in fact it is not. One of the fundamental purposes of research design in
explanatory research is to avoid invalid inferences.
some phenomenon. In other words, when designing research we need to ask: given this research
question (or theory), what type of evidence is needed to answer the question (or test the theory) in a
convincing way?
Research design deals with a logical problem and not a logistical problem (Yin, 1989: 29). Before a
builder or architect can develop a work plan or order materials, they must first establish the type of
building required, its uses and the needs of the occupants. The work plan follows from this. Similarly, in social
research the issues of sampling, method of data collection (e.g. questionnaire, observation, and
document analysis), and design of questions are all subsidiary to the matter of `What evidence do I
need to collect?'
Too often researchers design questionnaires or begin interviewing far too early before thinking
through what information they require to answer their research questions. Without attending to these
research design matters at the beginning, the conclusions drawn will normally be weak and
unconvincing and fail to answer the research question.
Research approaches are plans and procedures for research that span the steps from broad assumptions
to detailed methods of data collection, analysis, and interpretation. This plan involves several decisions, and
they need not be taken in the order in which they make sense to me and are presented here.
The overall decision involves which approach should be used to study a topic. Informing this decision should
be the philosophical assumptions the researcher brings to the study; procedures of inquiry (called research
designs); and specific research methods of data collection, analysis, and interpretation. The selection of a
research approach is also based on the nature of the research problem or issue being addressed, the
researcher's personal experiences, and the audiences for the study. Thus, in this book, research approaches,
research designs, and research methods are three key terms that represent a perspective about research that
presents information in a successive way from broad constructions of research to the narrow procedures of
methods.
Often the distinction between qualitative research and quantitative research is framed in terms of using
words (qualitative) rather than numbers (quantitative), or using closed-ended questions (quantitative hypoth-
eses) rather than open-ended questions (qualitative interview questions). A more complete way to view the
gradations of differences between them is in the basic philosophical assumptions researchers bring to the
study, the types of research strategies used in the research (e.g., quantitative experiments or qualitative case
studies), and the specific methods employed in conducting these strategies (e.g., collecting data quantitatively
on instruments versus collecting qualitative data through observing a setting). Moreover, there is a historical
evolution to both approaches, with the quantitative approaches dominating the forms of research in the social
sciences from the late 19th century up until the mid-20th century. During the latter half of the 20th century,
interest in qualitative research increased and along with it, the development of mixed methods research. With
this background, it should prove helpful to view definitions of these three key terms as used in this book:
Qualitative research is an approach for exploring and understanding the meaning individuals or
groups ascribe to a social or human problem. The process of research involves emerging questions and
procedures, data typically collected in the participants' setting, data analysis inductively building from
particulars to general themes, and the researcher making interpretations of the meaning of the data. The final
written report has a flexible structure. Those who engage in this form of inquiry support a way of looking at
research that honors an inductive style, a focus on individual meaning, and the importance of rendering the
complexity of a situation.
Quantitative research is an approach for testing objective theories by examining the relationship
among variables. These variables, in turn, can be measured, typically on instruments, so that numbered data
can be analyzed using statistical procedures. The final written report has a set structure consisting of
introduction, literature and theory, methods, results, and discussion. Like qualitative researchers, those who
engage in this form of inquiry have assumptions about testing theories deductively, building in protections
against bias, controlling for alternative explanations, and being able to generalize and replicate the findings.
Two important components in each definition are that the approach to research involves philosophical
assumptions as well as distinct methods or procedures. The broad research approach is the plan or proposal
to conduct research and involves the intersection of philosophy, research designs, and specific methods. A
framework can be used to explain the interaction of these three components. To reiterate, in planning a study,
researchers need to think through the philosophical worldview assumptions that they bring to the study, the
research design that is related to this worldview, and the specific methods or procedures of research that trans-
late the approach into practice.
Philosophical Worldviews
Although philosophical ideas remain largely hidden in research (Slife & Williams, 1995), they still
influence the practice of research.
A framework for research: the interconnection of worldviews, design, and research methods. The selection
of a research approach draws together philosophical worldviews, research approaches (qualitative and
quantitative, e.g., ethnographies and experiments), and research methods (questions, data collection, data
analysis, interpretation, validation).
Worldviews arise based on discipline orientations, the inclinations of students' advisors and mentors, and
past research experiences. The types of beliefs held by individual researchers based on these factors will
often lead to embracing a qualitative or quantitative approach in their research.
Research Designs
The researcher not only selects a qualitative, quantitative, or mixed methods study to conduct; the inquirer
also decides on a type of study within these three choices. Research designs are types of inquiry within the
qualitative, quantitative, and mixed methods approaches that provide specific direction for procedures in a
research study. Others
have called them strategies of inquiry (Denzin & Lincoln, 2011). The designs available to the researcher have
grown over the years as computer technology has advanced our data analysis and ability to analyze complex
models and as individuals have articulated new procedures for conducting social science research.
Quantitative Designs
During the late 19th and throughout the 20th century, strategies of inquiry associated with quantitative
research were those that invoked the postpositivist worldview and that originated mainly in psychology. These
include true experiments and the less rigorous experiments called quasi-experiments (see, an original, early
treatise on this, Campbell & Stanley, 1963). An additional experimental design is applied behavioral analysis
or single-subject experiments in which an experimental treatment is administered over time to a single
individual or a small number of individuals (Cooper, Heron, & Heward, 2007; Neuman & McCormick, 1995).
One type of non-experimental quantitative research is causal-comparative research in which the investigator
compares two or more groups in terms of a cause (or independent variable) that has already happened. Another
non-experimental form of research is the correlational design in which investigators use the correlational
statistic to describe and measure the degree of association (or relationship) between two or more variables or
sets of scores (Creswell, 2012). These designs have been elaborated into more complex relationships among
variables found in techniques of structural equation modeling, hierarchical linear modeling, and logistic
regression. More recently, quantitative strategies have involved complex experiments with many variables
and treatments (e.g., factorial designs and repeated measure designs). They have also included elaborate struc-
tural equation models that incorporate causal paths and the identification of the collective strength of multiple
variables. Rather than discuss all of these quantitative approaches, I will focus on two designs: surveys and
experiments.
Survey research provides a quantitative or numeric description of trends, attitudes, or opinions of a
population by studying a sample of that population. It includes cross-sectional and longitudinal studies using
questionnaires or structured interviews for data collection, with the intent of generalizing from a sample to
a population (Fowler, 2008).
Experimental research seeks to determine if a specific treatment influences an outcome. The researcher
assesses this by providing a specific treatment to one group and withholding it from another and then
determining how both groups scored on an outcome. Experiments include true experiments, with the random
assignment of subjects to treatment conditions, and quasi-experiments that use nonrandomized assignments
(Keppel, 1991). Included within quasi-experiments are single-subject designs.
Qualitative Designs
In qualitative research, the numbers and types of approaches have also become more clearly visible during the
1990s and into the 21st century. The historic origin for qualitative research comes from anthropology,
sociology, the humanities, and evaluation. Books have summarized the various types, and complete
procedures are now available on specific qualitative inquiry approaches. For example, Clandinin and Connelly
(2000) constructed a picture of what narrative researchers do. Moustakas (1994) discussed the philosophical
tenets and the procedures of the phenomenological method; Charmaz (2006), Corbin and Strauss (2007), and
Strauss and Corbin (1990, 1998) identified the procedures of grounded theory. Fetterman (2010) and Wolcott
(2008) summarized ethnographic procedures and the many faces and research strategies of ethnography, and
Stake (1995) and Yin (2009, 2012) suggested processes involved in case study research. In this book,
illustrations are drawn from the following strategies, recognizing that approaches such as participatory action
research (Kemmis & McTaggart, 2000), discourse analysis (Cheek, 2004), and others not mentioned are also
viable ways to conduct qualitative studies:
Narrative research is a design of inquiry from the humanities in which the researcher studies the lives of
individuals and asks one or more individuals to provide stories about their lives (Riessman, 2008). This
information is then often retold or restoried by the researcher into a narrative chronology. Often, in the end,
the narrative combines views from the participant's life with those of the researcher's life in a collaborative
narrative (Clandinin & Connelly, 2000).
Phenomenological research is a design of inquiry coming from philosophy and psychology in which the
researcher describes the lived experiences of individuals about a phenomenon as described by participants.
This description culminates in the essence of the experiences for several individuals who have all
experienced the phenomenon. This design has strong philosophical underpinnings and typically involves
conducting interviews (Giorgi, 2009; Moustakas, 1994).
Grounded theory is a design of inquiry from sociology in which the researcher derives a general, abstract
theory of a process, action, or interaction grounded in the views of participants. This process involves using
multiple stages of data collection and the refinement and interrelationship of categories of information
(Charmaz, 2006; Corbin & Strauss, 2007).
Ethnography is a design of inquiry coming from anthropology and sociology in which the researcher studies
the shared patterns of behaviors, language, and actions of an intact cultural group in a natural setting over a
prolonged period of time. Data collection often involves observations and interviews.
Case studies are a design of inquiry found in many fields, especially evaluation, in which the researcher
develops an in-depth analysis of a case, often a program, event, activity, process, or one or more individuals.
Cases are bounded by time and activity, and researchers collect detailed information using a variety of data
collection procedures over a sustained period of time (Stake, 1995; Yin, 2009, 2012).
Formulating the research problem: There are two types of research problems, viz., those which relate to
states of nature and those which relate to relationships between variables. At the very outset the researcher
must single out the problem he wants to study, i.e., he must decide the general area of interest or aspect of a
subject-matter that he would like to inquire into. Initially the problem may be stated in a broad general way
and then the ambiguities, if any, relating to the problem be resolved. Then, the feasibility of a particular
solution has to be considered before a working formulation of the problem can be set up. The formulation of
a general topic into a specific research problem, thus, constitutes the first step in a scientific enquiry.
Essentially two steps are involved in formulating the research problem, viz., understanding the problem
thoroughly, and rephrasing the same into meaningful terms from an analytical point of view.
The best way of understanding the problem is to discuss it with one's own colleagues or with those having
some expertise in the matter. In an academic institution the researcher can seek help from a guide, who is
usually experienced and has several research problems in mind.
Often, the guide puts forth the problem in general terms and it is up to the researcher to narrow it down and
phrase the problem in operational terms. In private business units or in governmental organizations, the
problem is usually earmarked by the administrative agencies with whom the researcher can discuss as to
how the problem originally came about and what considerations are involved in its possible solutions.
The researcher must at the same time examine all available literature to get himself acquainted with the
selected problem. He may review two types of literature: the conceptual literature concerning the concepts
and theories, and the empirical literature consisting of studies made earlier which are similar to the one
proposed. The basic outcome of this review will be the knowledge as to what data and other materials are
available for operational purposes which will enable the researcher to specify his own research problem in a
meaningful context. After this the researcher rephrases the problem into analytical or operational terms i.e.,
to put the problem in as specific terms as possible. This task of formulating, or defining, a research problem
is a step of greatest importance in the entire research process. The problem to be investigated must be
defined unambiguously for that will help discriminating relevant data from irrelevant ones. Care must,
however, be taken to verify the objectivity and validity of the background facts concerning the problem.
Professor W.A. Neiswanger correctly states that the statement of the objective is of basic importance
because it determines the data which are to be collected, the characteristics of the data which are relevant,
relations which are to be explored, the choice of techniques to be used in these explorations and the form of
the final report. If there are certain pertinent terms, the same should be clearly defined along with the task of
formulating the problem. In fact, formulation of the problem often follows a sequential pattern where a
number of formulations are set up, each formulation more specific than the preceding one, each one phrased
in more analytical terms, and each more realistic in terms of the available data and resources.
Sampling
Sampling is the process of selecting units (e.g., people, organizations) from a population of interest so that by
studying the sample we may fairly generalize our results back to the population from which they were chosen.
Let's begin by covering some of the key terms in sampling like "population" and "sampling frame." Then,
because some types of sampling rely upon quantitative models, we'll talk about some of the statistical terms
used in sampling. Finally, we'll discuss the major distinction between probability and Nonprobability sampling
methods and work through the major types in each.
Probability Sampling
A probability sampling method is any method of sampling that utilizes some form of random selection. In
order to have a random selection method, you must set up some process or procedure that assures that the
different units in your population have equal probabilities of being chosen. Humans have long practiced
various forms of random selection, such as picking a name out of a hat, or choosing the short straw. These
days, we tend to use computers as the mechanism for generating random numbers as the basis for random
selection.
Some Definitions
Before I can explain the various probability methods we have to define some basic terms. These are:
N = the number of cases in the sampling frame
n = the number of cases in the sample
NCn = the number of combinations (subsets) of n from N
f = n/N = the sampling fraction
That's it. With those terms defined we can begin to define the different probability sampling methods.
The simplest form of random sampling is called simple random sampling. Pretty tricky, huh? Here's the
quick description of simple random sampling:
Objective: To select n units out of N such that each NCn has an equal chance of being selected.
Procedure: Use a table of random numbers, a computer random number generator, or a mechanical
device to select the sample.
Neither of these mechanical procedures is very feasible and, with the development of inexpensive computers,
there is a much easier way. Here's a simple procedure that's especially useful if you have the names of the
clients already on the computer. Many computer programs can generate a series of random numbers. Let's
assume you can copy and paste the list of client names into a column in an EXCEL spreadsheet. Then, in the
column right next to it paste the function =RAND() which is EXCEL's way of putting a random number
between 0 and 1 in the cells. Then, sort both columns -- the list of names and the random number -- by the
random numbers. This rearranges the list in random order from the lowest to the highest random number.
Then, all you have to do is take the first hundred names in this sorted list. Pretty simple. You could probably
accomplish the whole thing in under a minute.
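The spreadsheet trick above can be sketched in Python. The client list here is a hypothetical placeholder standing in for the names you would paste in:

```python
import random

# The same trick as the EXCEL =RAND() column: attach a random number to
# every name, sort by it, and keep the first n names.
clients = [f"client_{i}" for i in range(1, 1001)]      # N = 1000 placeholder names
n = 100                                                # desired sample size

keyed = [(random.random(), name) for name in clients]  # the =RAND() column
keyed.sort()                                           # sort by the random key
sample = [name for _, name in keyed[:n]]               # first n of the sorted list

print(len(sample), "names sampled")
```

In practice, `random.sample(clients, n)` does the same job in one call; the longer version is shown only because it mirrors the spreadsheet procedure step by step.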
Simple random sampling is simple to accomplish and is easy to explain to others. Because simple random
sampling is a fair way to select a sample, it is reasonable to generalize the results from the sample back to the
population. Simple random sampling is not the most statistically efficient method of sampling and you may,
just because of the luck of the draw, not get good representation of subgroups in a population. To deal with
these issues, we have to turn to other sampling methods.
Stratified Random Sampling, also sometimes called proportional or quota random sampling, involves
dividing your population into homogeneous subgroups and then taking a simple random sample in each
subgroup. In more formal terms:
Objective: Divide the population into non-overlapping groups (i.e., strata) N1, N2, N3, ... Ni, such that N1 +
N2 + N3 + ... + Ni = N. Then do a simple random sample of f = n/N in each stratum.
There are several major reasons why you might prefer stratified sampling over simple random sampling. First,
it assures that you will be able to represent not only the overall population, but also key subgroups of the
population, especially small minority groups. If you want to be able to talk about subgroups, this may be the
only way to effectively assure you'll be able to. If the subgroup is extremely small, you can use different
sampling fractions (f) within the different strata to randomly over-sample the small group (although you'll
then have to weight the within-group estimates using the sampling fraction whenever you want overall
population estimates). When we use the same sampling fraction within strata we are conducting proportionate
stratified random sampling. When we use different sampling fractions in the strata, we call this
disproportionate stratified random sampling. Second, stratified random sampling will generally have more
statistical precision than simple random sampling. This will only be true if the strata or groups are
homogeneous. If they are, we expect that the variability within-groups is lower than the variability for the
population as a whole. Stratified sampling capitalizes on that fact.
For example, let's say that the population of clients for our agency can be divided into three groups:
Caucasian, African-American and Hispanic-American. Furthermore, let's assume that both the African-
Americans and Hispanic-Americans are relatively small minorities of the clientele (10% and 5%
respectively). If we just did a simple random sample of n=100 with a sampling fraction of 10%, we would
expect by chance alone that we would only get 10 and 5 persons from each of our two smaller groups. And,
by chance, we could get fewer than that! If we stratify, we can do better. First, let's determine how many
people we want to have in each group. Let's say we still want to take a sample of 100 from the population of
1000 clients over the past year. But we think that in order to say anything about subgroups we will need at
least 25 cases in each group. So, let's sample 50 Caucasians, 25 African-Americans, and 25 Hispanic-
Americans. We know that 10% of the population, or 100 clients, are African-American. If we randomly sample
25 of these, we have a within-stratum sampling fraction of 25/100 = 25%. Similarly, we know that 5% or 50
clients are Hispanic-American. So our within-stratum sampling fraction will be 25/50 = 50%. Finally, by
subtraction we know that there are 850 Caucasian clients. Our within-stratum sampling fraction for them is
50/850 = about 5.88%. Because the groups are more homogeneous within-group than across the population
as a whole, we can expect greater statistical precision (less variance). And, because we stratified, we know we
will have enough cases from each group to make meaningful subgroup inferences.
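The worked example above can be sketched in code. The population split (850 / 100 / 50) and the target stratum sizes (50 / 25 / 25) come from the example; the client identifiers are illustrative placeholders:

```python
import random

# 1,000 clients split 85% / 10% / 5% across three strata, as in the example.
population = (["Caucasian"] * 850
              + ["African-American"] * 100
              + ["Hispanic-American"] * 50)
targets = {"Caucasian": 50, "African-American": 25, "Hispanic-American": 25}

strata = {}
for i, group in enumerate(population):               # ids 0..999 grouped by stratum
    strata.setdefault(group, []).append(i)

sample = []
for group, ids in strata.items():
    picked = random.sample(ids, targets[group])      # SRS within each stratum
    fraction = targets[group] / len(ids)             # within-stratum sampling fraction
    print(f"{group}: {len(ids)} clients, f = {fraction:.2%}")
    sample.extend(picked)

print("total sample size:", len(sample))
```

The printed fractions reproduce the ones computed in the text: 5.88% for Caucasians, 25% for African-Americans, and 50% for Hispanic-Americans.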
Here are the steps you need to follow in order to achieve a systematic random sample: number the units in
the population from 1 to N; decide on the n (sample size) that you want or need; compute the interval size
k = N/n; randomly select an integer between 1 and k; then take every k-th unit.
All of this will be much clearer with an example. Let's assume that we have a population that only has
N=100 people in it and that you want to take a sample of n=20. To use systematic sampling, the population
must be listed in a random order. The sampling fraction would be f = 20/100 = 20%. In this case, the interval
size, k, is equal to N/n = 100/20 = 5. Now, select a random integer from 1 to 5. In our example, imagine that
you chose 4. Now, to select the sample, start with the 4th unit in the list and take every k-th unit (every 5th,
because k=5). You would be sampling units 4, 9, 14, 19, and so on to 100 and you would wind up with 20
units in your sample.
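The example above translates directly into a short sketch; the numbered units here stand in for a population list assumed to be in random order:

```python
import random

# Systematic sample of n=20 from N=100 units: k = N/n = 5, random start 1..k.
population = list(range(1, 101))    # units numbered 1..100, assumed randomly ordered
N, n = len(population), 20
k = N // n                          # sampling interval, 100/20 = 5

start = random.randint(1, k)        # random integer from 1 to 5
sample = population[start - 1::k]   # start at the chosen unit, take every k-th

print("start =", start, "->", len(sample), "units sampled")
```

With a start of 4 this yields units 4, 9, 14, 19, and so on, exactly as in the worked example.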
For this to work, it is essential that the units in the population are randomly ordered, at least with respect to
the characteristics you are measuring. Why would you ever want to use systematic random sampling? For one
thing, it is fairly easy to do. You only have to select a single random number to start things off. It may also be
more precise than simple random sampling. Finally, in some situations there is simply no easier way to do
random sampling. For instance, I once had to do a study that involved sampling from all the books in a library.
Once selected, I would have to go to the shelf, locate the book, and record when it last circulated. I knew that
I had a fairly good sampling frame in the form of the shelf list (which is a card catalog where the entries are
arranged in the order they occur on the shelf). To do a simple random sample, I could have estimated the total
number of books and generated random numbers to draw the sample; but how would I find book #74,329
easily if that is the number I selected? I couldn't very well count the cards until I came to 74,329! Stratifying
wouldn't solve that problem either. For instance, I could have stratified by card catalog drawer and drawn a
simple random sample within each drawer. But I'd still be stuck counting cards. Instead, I did a systematic
random sample. I estimated the number of books in the entire collection. Let's imagine it was 100,000. I
decided that I wanted to take a sample of 1000 for a sampling fraction of 1000/100,000 = 1%. To get the
sampling interval k, I divided N/n = 100,000/1000 = 100. Then I selected a random integer between 1 and
100. Let's say I got 57. Next I did a little side study to determine how thick a thousand cards are in the card
catalog (taking into account the varying ages of the cards). Let's say that on average I found that two cards
that were separated by 100 cards were about .75 inches apart in the catalog drawer. That information gave me
everything I needed to draw the sample. I counted to the 57th by hand and recorded the book information.
Then, I took a compass. (Remember those from your high-school math class? They're the funny little metal
instruments with a sharp pin on one end and a pencil on the other that you used to draw circles in geometry
class.) Then I set the compass at .75", stuck the pin end in at the 57th card and pointed with the pencil end to
the next card (approximately 100 books away). In this way, I approximated selecting the 157th, 257th, 357th,
and so on. I was able to accomplish the entire selection procedure in very little time using this systematic
random sampling approach. I'd probably still be there counting cards if I'd tried another random sampling
method. (Okay, so I have no life. I got compensated nicely, I don't mind saying, for coming up with this
scheme.)
The problem with random sampling methods when we have to sample a population that's dispersed across a
wide geographic region is that you will have to cover a lot of ground geographically in order to get to each of
the units you sampled. Imagine taking a simple random sample of all the residents of New York State in order
to conduct personal interviews. By the luck of the draw you will wind up with respondents who come from
all over the state. Your interviewers are going to have a lot of traveling to do. It is for precisely this problem
that cluster or area random sampling was invented.
Multi-Stage Sampling
The four methods we've covered so far -- simple, stratified, systematic and cluster -- are the simplest random
sampling strategies. In most real applied social research, we would use sampling methods that are considerably
more complex than these simple variations. The most important principle here is that we can combine the
simple methods described earlier in a variety of useful ways that help us address our sampling needs in the
most efficient and effective manner possible. When we combine sampling methods, we call this multi-stage
sampling.
For example, consider the idea of sampling New York State residents for face-to-face interviews. Clearly we
would want to do some type of cluster sampling as the first stage of the process. We might sample townships
or census tracts throughout the state. But in cluster sampling we would then go on to measure everyone in the
clusters we select. Even if we are sampling census tracts we may not be able to measure everyone who is in
the census tract. So, we might set up a stratified sampling process within the clusters. In this case, we would
have a two-stage sampling process with stratified samples within cluster samples. Or, consider the problem of
sampling students in grade schools. We might begin with a national sample of school districts stratified by
economics and educational level. Within selected districts, we might do a simple random sample of schools.
Within schools, we might do a simple random sample of classes or grades. And, within classes, we might even
do a simple random sample of students. In this case, we have three or four stages in the sampling process and
we use both stratified and simple random sampling. By combining different sampling methods we are able to
achieve a rich variety of probabilistic sampling methods that can be used in a wide range of social research
contexts.
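The grade-school example might be sketched as a simple two-stage draw. The districts and students here are hypothetical placeholders, and the design is reduced to two stages (cluster sample of districts, then simple random sample of students) for brevity:

```python
import random

# Toy two-stage design: stage 1 samples districts (clusters), stage 2 takes
# a simple random sample of students within each selected district.
districts = {f"district_{d}": [f"student_{d}_{s}" for s in range(200)]
             for d in range(50)}                      # 50 districts, 200 students each

stage1 = random.sample(list(districts), 5)            # stage 1: 5 of 50 districts
sample = []
for d in stage1:
    sample.extend(random.sample(districts[d], 20))    # stage 2: 20 students per district

print(len(stage1), "districts,", len(sample), "students")
```

A fuller version of the example would add a stratification step at stage 1 (districts grouped by economics and educational level) before sampling within each stratum.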
Nonprobability Sampling
The difference between nonprobability and probability sampling is that nonprobability sampling does not
involve random selection and probability sampling does. Does that mean that nonprobability samples aren't
representative of the population? Not necessarily. But it does mean that nonprobability samples cannot depend
upon the rationale of probability theory. At least with a probabilistic sample, we know the odds or probability
that we have represented the population well. We are able to estimate confidence intervals for the statistic.
With nonprobability samples, we may or may not represent the population well, and it will often be hard for
us to know how well we've done so. In general, researchers prefer probabilistic or random sampling methods
over non-probabilistic ones, and consider them to be more accurate and rigorous. However, in applied social
research there may be circumstances where it is not feasible, practical or theoretically sensible to do random
sampling. Here, we consider a wide range of non-probabilistic alternatives.
We can divide nonprobability sampling methods into two broad types: accidental or purposive. Most sampling
methods are purposive in nature because we usually approach the sampling problem with a specific plan in
mind. The most important distinctions among these types of sampling methods are the ones between the
different types of purposive sampling approaches.
Accidental, Haphazard or Convenience Sampling
One of the most common methods of sampling goes under the various titles listed here. I would include in this
category the traditional "man on the street" (of course, now it's probably the "person on the street") interviews
conducted frequently by television news programs to get a quick (although non-representative) reading of
public opinion. I would also argue that the typical use of college students in much psychological research is
primarily a matter of convenience. (You don't really believe that psychologists use college students because
they believe they're representative of the population at large, do you?). In clinical practice, we might use
clients who are available to us as our sample. In many research contexts, we sample simply by asking for
volunteers. Clearly, the problem with all of these types of samples is that we have no evidence that they are

representative of the populations we're interested in generalizing to -- and in many cases we would clearly
suspect that they are not.
Purposive Sampling
In purposive sampling, we sample with a purpose in mind. We usually would have one or more specific
predefined groups we are seeking. For instance, have you ever run into people in a mall or on the street who
are carrying a clipboard and who are stopping various people and asking if they could interview them? Most
likely they are conducting a purposive sample (and most likely they are engaged in market research). They
might be looking for Caucasian females between 30 and 40 years old. They size up the people passing by and
anyone who looks to be in that category they stop to ask if they will participate. One of the first things they're
likely to do is verify that the respondent does in fact meet the criteria for being in the sample. Purposive
sampling can be very useful for situations where you need to reach a targeted sample quickly and where
sampling for proportionality is not the primary concern. With a purposive sample, you are likely to get the
opinions of your target population, but you are also likely to overweight subgroups in your population that are
more readily accessible.
All of the methods that follow can be considered subcategories of purposive sampling methods. We might
sample for specific groups or types of people as in modal instance, expert, or quota sampling. We might
sample for diversity as in heterogeneity sampling. Or, we might capitalize on informal social networks to
identify specific respondents who are hard to locate otherwise, as in snowball sampling. In all of these methods
we know what we want -- we are sampling with a purpose.
Modal Instance Sampling
In statistics, the mode is the most frequently occurring value in a distribution. In sampling, when we do a
modal instance sample, we are sampling the most frequent case, or the "typical" case. In a lot of informal
public opinion polls, for instance, they interview a "typical" voter. There are a number of problems with this
sampling approach. First, how do we know what the "typical" or "modal" case is? We could say that the modal
voter is a person who is of average age, educational level, and income in the population. But, it's not clear that
using the averages of these is the fairest (consider the skewed distribution of income, for instance). And, how
do you know that those three variables -- age, education, income -- are the only or even the most relevant for
classifying the typical voter? What if religion or ethnicity is an important discriminator? Clearly, modal
instance sampling is only sensible for informal sampling contexts.
Expert Sampling
Expert sampling involves the assembling of a sample of persons with known or demonstrable experience and
expertise in some area. Often, we convene such a sample under the auspices of a "panel of experts." There are
actually two reasons you might do expert sampling. First, because it would be the best way to elicit the views
of persons who have specific expertise. In this case, expert sampling is essentially just a specific sub case of
purposive sampling. But the other reason you might use expert sampling is to provide evidence for the validity
of another sampling approach you've chosen. For instance, let's say you do modal instance sampling and are
concerned that the criteria you used for defining the modal instance are subject to criticism. You might
convene an expert panel consisting of persons with acknowledged experience and insight into that field or
topic and ask them to examine your modal definitions and comment on their appropriateness and validity. The
advantage of doing this is that you aren't out on your own trying to defend your decisions -- you have some
acknowledged experts to back you. The disadvantage is that even the experts can be, and often are, wrong.
Quota Sampling
In quota sampling, you select people non-randomly, according to some fixed quota. There are two types of
quota sampling: proportional and non-proportional. In proportional quota sampling you want to represent
the major characteristics of the population by sampling a proportional amount of each. For instance, if you
know the population has 40% women and 60% men, and that you want a total sample size of 100, you will
continue sampling until you get those percentages and then you will stop. So, if you've already got the 40
women for your sample, but not the sixty men, you will continue to sample men but even if legitimate women
respondents come along, you will not sample them because you have already "met your quota." The problem
here (as in much purposive sampling) is that you have to decide the specific characteristics on which you will
base the quota. Will it be by gender, age, education, race, religion, etc.?
Non proportional quota sampling is a bit less restrictive. In this method, you specify the minimum number
of sampled units you want in each category. Here, you're not concerned with having numbers that match the
proportions in the population. Instead, you simply want to have enough to assure that you will be able to talk
about even small groups in the population. This method is the nonprobabilistic analogue of stratified random
sampling in that it is typically used to assure that smaller groups are adequately represented in your sample.
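A minimal sketch of the proportional quota logic described above, in Python. The stream of passers-by and the 40/60 gender quota are hypothetical, echoing the example in the text:

```python
import random

random.seed(0)  # reproducible illustration

def quota_sample(respondents, quotas):
    """Accept respondents until each category's quota is met;
    turn away anyone whose category is already full."""
    counts = {cat: 0 for cat in quotas}
    sample = []
    for person, category in respondents:
        if counts[category] < quotas[category]:
            sample.append(person)
            counts[category] += 1
        if counts == quotas:
            break  # every quota met -- stop sampling
    return sample

# Hypothetical stream of passers-by: we want 40 women and 60 men out of 100.
stream = [(f"person_{i}", random.choice(["woman", "man"])) for i in range(500)]
sample = quota_sample(stream, {"woman": 40, "man": 60})
print(len(sample))  # 100
```

Note that once the "woman" quota fills, later women in the stream are skipped even though they would have been legitimate respondents, exactly as the text describes.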
Heterogeneity Sampling
We sample for heterogeneity when we want to include all opinions or views, and we aren't concerned about
representing these views proportionately. Another term for this is sampling for diversity. In many
brainstorming or nominal group processes (including concept mapping), we would use some form of
heterogeneity sampling because our primary interest is in getting a broad spectrum of ideas, not identifying the
"average" or "modal instance" ones. In effect, what we would like to be sampling is not people, but ideas. We
imagine that there is a universe of all possible ideas relevant to some topic and that we want to sample this
population, not the population of people who have the ideas. Clearly, in order to get all of the ideas, and
especially the "outlier" or unusual ones, we have to include a broad and diverse range of participants.
Heterogeneity sampling is, in this sense, almost the opposite of modal instance sampling.
Snowball Sampling
In snowball sampling, you begin by identifying someone who meets the criteria for inclusion in your study.
You then ask them to recommend others they know who also meet the criteria. Although this method
would hardly lead to representative samples, there are times when it may be the best method available.
Snowball sampling is especially useful when you are trying to reach populations that are inaccessible or hard
to find. For instance, if you are studying the homeless, you are not likely to be able to find good lists of
homeless people within a specific geographical area. However, if you go to that area and identify one or two,
you may find that they know very well who the other homeless people in their vicinity are and how you can
find them.
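The referral-chain logic of snowball sampling can be sketched as a simple traversal. The referral network below is hypothetical; in practice, referrals arrive through interviews rather than a lookup table:

```python
from collections import deque

# Hypothetical referral network: each person names others they know
# who also meet the study's inclusion criteria.
referrals = {
    "seed_1": ["p2", "p3"],
    "p2": ["p4"],
    "p3": ["p4", "p5"],
    "p4": [],
    "p5": ["p6"],
    "p6": [],
}

def snowball_sample(referrals, seeds, max_size=10):
    """Start from initially identified participants and follow
    referral chains until no new names (or max_size) remain."""
    sample, queue = [], deque(seeds)
    seen = set(seeds)
    while queue and len(sample) < max_size:
        person = queue.popleft()
        sample.append(person)
        for referred in referrals.get(person, []):
            if referred not in seen:     # each person enters the sample once
                seen.add(referred)
                queue.append(referred)
    return sample

print(snowball_sample(referrals, ["seed_1"]))
# ['seed_1', 'p2', 'p3', 'p4', 'p5', 'p6']
```

The `seen` set matters: the same hard-to-reach person is often named by several earlier respondents, as the homeless example suggests.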
Measurement
Measurement is the process of observing and recording the observations that are collected as part of a research
effort. There are two major issues that will be considered here.
First, you have to understand the fundamental ideas involved in measuring. Here we consider two of the major
measurement concepts. In Levels of Measurement, I explain the meaning of the four major levels of
measurement: nominal, ordinal, interval and ratio. Then we move on to the reliability of measurement,
including consideration of true score theory and a variety of reliability estimators.
Second, you have to understand the different types of measures that you might use in social research. We
consider four broad categories of measurements. Survey research includes the design and implementation of
interviews and questionnaires. Scaling involves consideration of the major methods of developing and
implementing a scale. Qualitative research provides an overview of the broad range of non-numerical
measurement approaches. And unobtrusive measures presents a variety of measurement methods that don't
intrude on or interfere with the context of the research.

Levels of Measurement

The level of measurement refers to the relationship among the values that are assigned to the attributes of a variable. Why is level of measurement important?
First, knowing the level of measurement helps you decide how to interpret the data from that variable. When
you know that a measure is nominal (like the one just described), then you know that the numerical values are
just short codes for the longer names. Second, knowing the level of measurement helps you decide what
statistical analysis is appropriate on the values that were assigned. If a measure is nominal, then you know that
you would never average the data values or do a t-test on the data.
Nominal
Ordinal
Interval
Ratio
In nominal measurement the numerical values just "name" the attribute uniquely. No ordering of the cases is
implied. For example, jersey numbers in basketball are measured at the nominal level. A player with number
30 is not more of anything than a player with number 15, and is certainly not twice whatever number 15 is.
In ordinal measurement the attributes can be rank-ordered. Here, distances between attributes do not have
any meaning. For example, on a survey you might code Educational Attainment as 0=less than H.S.; 1=some
H.S.; 2=H.S. degree; 3=some college; 4=college degree; 5=post college. In this measure, higher numbers
mean more education. But is the distance from 0 to 1 the same as from 3 to 4? Of course not. The interval between values
is not interpretable in an ordinal measure.
In interval measurement the distance between attributes does have meaning. For example, when we measure temperature (in Fahrenheit), the distance from 30 to 40 is the same as the distance from 70 to 80. The interval between values is interpretable, so it makes sense to compute an average of an interval variable, where it doesn't make sense for an ordinal one. But note that in interval measurement ratios don't make any sense: 80 degrees is not twice as hot as 40 degrees.
Finally, in ratio measurement there is always an absolute zero that is meaningful. This means that you can
construct a meaningful fraction (or ratio) with a ratio variable. Weight is a ratio variable. In applied social
research most "count" variables are ratio, for example, the number of clients in the past six months. Why? Because
you can have zero clients and because it is meaningful to say that "...we had twice as many clients in the past
six months as we did in the previous six months."
It's important to recognize that there is a hierarchy implied in the level of measurement idea. At lower levels
of measurement, assumptions tend to be less restrictive and data analyses tend to be less sensitive. At each
level up the hierarchy, the current level includes all of the qualities of the one below it and adds something
new. In general, it is desirable to have a higher level of measurement (e.g., interval or ratio) rather than a lower
one (nominal or ordinal).
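A few lines of Python make the hierarchy concrete, reusing the examples from the text (the jersey numbers, the educational-attainment coding, and the client counts are all illustrative):

```python
# Nominal: numbers only "name" the attribute -- arithmetic is meaningless.
jerseys = [30, 15, 23, 15]  # basketball jersey numbers
# sum(jerseys) / len(jerseys) yields a "mean jersey number", but player 30
# is not "twice" player 15, so that average has no interpretation.

# Ordinal: order is meaningful, but distances between codes are not.
education = {0: "less than H.S.", 1: "some H.S.", 2: "H.S. degree",
             3: "some college", 4: "college degree", 5: "post college"}
assert 4 > 2  # more education -- a valid ordinal comparison
# but (1 - 0) and (4 - 3) are not comparable "amounts" of education

# Ratio: a true zero makes ratios interpretable.
clients_prev, clients_now = 15, 30
print(clients_now / clients_prev)  # 2.0 -- "twice as many clients"
```

Each level permits the operations of the levels below it plus its own, which is why analysts prefer interval or ratio measures when they can get them.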
Scaling
Scaling is the branch of measurement that involves the construction of an instrument that associates qualitative
constructs with quantitative metric units. Scaling evolved out of efforts in psychology and education to
measure "unmeasurable" constructs like authoritarianism and self esteem. In many ways, scaling remains one
of the most arcane and misunderstood aspects of social research measurement. And, it attempts to do one of
the most difficult of research tasks -- measure abstract concepts.
Most people don't even understand what scaling is. The basic idea of scaling is described in General Issues in
Scaling, including the important distinction between a scale and a response format. Scales are generally
divided into two broad categories: unidimensional and multidimensional. The unidimensional scaling methods
were developed in the first half of the twentieth century and are generally named after their inventor. We'll
look at three types of unidimensional scaling methods here:
Thurstone or Equal-Appearing Interval Scaling
Likert or "Summative" Scaling
Guttman or "Cumulative" Scaling
In the late 1950s and early 1960s, measurement theorists developed more advanced techniques for creating
multidimensional scales. Although these techniques are not considered here, you may want to look at the
method of concept mapping that relies on that approach to see the power of these multivariate methods.
Likert Scaling
Like Thurstone or Guttman Scaling, Likert Scaling is a unidimensional scaling method. Here, I'll explain the
basic steps in developing a Likert or "Summative" scale.
Defining the Focus. As in all scaling methods, the first step is to define what it is you are trying to measure.
Because this is a unidimensional scaling method, it is assumed that the concept you want to measure is one-
dimensional in nature. You might operationalize the definition as an instruction to the people who are going
to create or generate the initial set of candidate items for your scale.
Generating the Items. Next, you have the group create a set of potential scale
items. It's desirable to have as large a set of potential items as possible at this stage; about 80-100 would be
best.
Rating the Items. The next step is to have a group of judges rate the
items. Usually you would use a 1-to-5 rating scale where:
1 = strongly unfavorable to the concept
2 = somewhat unfavorable to the concept
3 = undecided
4 = somewhat favorable to the concept
5 = strongly favorable to the concept
Notice that, as in other scaling methods, the judges are not telling you
what they believe -- they are judging how favorable each item is with
respect to the construct of interest.
Selecting the Items. The next step is to compute the intercorrelations between all pairs of items, based on the
ratings of the judges. In making judgements about which items to retain for the final scale there are several
analyses you can do:
Throw out any items that have a low correlation with the total (summed) score across all items
In most statistics packages it is relatively easy to compute this type of Item-Total correlation. First,
you create a new variable which is the sum of all of the individual items for each respondent. Then,
you include this variable in the correlation matrix computation (if you include it as the last variable in
the list, the resulting Item-Total correlations will all be the last line of the correlation matrix and will
be easy to spot). How low should the correlation be for you to throw out the item? There is no fixed
rule here -- you might eliminate all items with a correlation with the total score less than .6, for example.
For each item, get the average rating for the top quarter of judges and the bottom quarter. Then, do a
t-test of the differences between the mean value for the item for the top and bottom quarter judges.
Higher t-values mean that there is a greater difference between the highest and lowest judges. In more
practical terms, items with higher t-values are better discriminators, so you want to keep these items.
In the end, you will have to use your judgement about which items are most sensibly retained. You
want a relatively small number of items on your final scale (e.g., 10-15) and you want them to have
high Item-Total correlations and high discrimination (e.g., high t-values).
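The item-selection computations above can be sketched in Python. The judges' ratings below are fabricated for illustration, and the .6 cutoff is just the example threshold mentioned in the text:

```python
import statistics

# Hypothetical favorability ratings: one row per judge, one column per item,
# on the 1-to-5 favorability scale described above.
ratings = [
    [5, 4, 2, 5],
    [4, 5, 1, 4],
    [5, 5, 2, 5],
    [2, 1, 4, 2],
    [1, 2, 5, 1],
    [2, 2, 4, 2],
]

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Item-Total correlations: correlate each item with the summed score.
totals = [sum(row) for row in ratings]
for i in range(len(ratings[0])):
    item = [row[i] for row in ratings]
    r = pearson(item, totals)
    # Example cutoff from the text: drop items correlating below .6
    print(f"item {i}: item-total r = {r:+.2f}, keep = {r >= 0.6}")
```

In this fabricated data, item 2 runs against the other items and correlates negatively with the total, so the rule would flag it for removal; the top-versus-bottom-quarter t-test described above would reach the same verdict.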
Administering the Scale. You're now ready to use your Likert scale. Each respondent is asked to rate each
item on some response scale. For instance, they could rate each item on a 1-to-5 response scale where:
1. = strongly disagree
2. = disagree
3. = undecided
4. = agree
5. = strongly agree
There are a variety of possible response scales (1-to-7, 1-to-9, 0-to-4). All of these odd-numbered scales have a
middle value that is often labeled Neutral or Undecided. It is also possible to use a forced-choice response scale
with an even number of responses and no middle neutral or undecided choice. In this situation, the respondent
is forced to decide whether they lean more towards the agree or disagree end of the scale for each item.
The final score for the respondent on the scale is the sum of their ratings for all of the items (this is why this
is sometimes called a "summated" scale). On some scales, you will have items that are reversed in meaning
from the overall direction of the scale. These are called reversal items. You will need to reverse the response
value for each of these items before summing for the total. That is, if the respondent gave a 1, you make it a
5; if they gave a 2 you make it a 4; 3 = 3; 4 = 2; and, 5 = 1.
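The reversal rule above (1 becomes 5, 2 becomes 4, and so on) amounts to subtracting each response from one more than the scale maximum. A minimal sketch, with a hypothetical respondent:

```python
def score_likert(responses, reversal_items, scale_max=5):
    """Sum a respondent's item ratings, flipping reversal items first.

    On a 1-to-5 scale, flipping is (scale_max + 1) - value,
    so 1 -> 5, 2 -> 4, 3 -> 3, 4 -> 2, 5 -> 1.
    """
    total = 0
    for i, value in enumerate(responses):
        if i in reversal_items:
            value = (scale_max + 1) - value
        total += value
    return total

# Hypothetical respondent: item 2 is worded opposite to the scale's overall
# direction, so their rating of 1 on it actually indicates a high standing.
print(score_likert([5, 4, 1, 5], reversal_items={2}))  # 5 + 4 + 5 + 5 = 19
```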
Here's an example of a ten-item Likert Scale that attempts to estimate the level of self esteem a person has on
the job. Notice that this instrument has no center or neutral point -- the respondent has to declare whether
he/she is in agreement or disagreement with the item.
INSTRUCTIONS: Please rate how strongly you agree or disagree with each of the following statements by
placing a check mark in the appropriate box.
1. I feel good about my work on the job.
   Strongly Disagree   Somewhat Disagree   Somewhat Agree   Strongly Agree
2. On the whole, I get along well with others at work.
   Strongly Disagree   Somewhat Disagree   Somewhat Agree   Strongly Agree
...
10. I can tell that my coworkers respect me.
   Strongly Disagree   Somewhat Disagree   Somewhat Agree   Strongly Agree
Quantitative Data Collection Methods

Quantitative data collection methods rely on random sampling and structured data collection
instruments that fit diverse experiences into predetermined response categories. They produce results that are
easy to summarize, compare, and generalize.
Quantitative research is concerned with testing hypotheses derived from theory and/or being able to estimate
the size of a phenomenon of interest. Depending on the research question, participants may be randomly
assigned to different treatments. If this is not feasible, the researcher may collect data on participant and
situational characteristics in order to statistically control for their influence on the dependent, or outcome,
variable. If the intent is to generalize from the research participants to a larger population, the researcher will
employ probability sampling to select participants.
Typical quantitative data collection methods include:
Experiments/clinical trials.
Observing and recording well-defined events (e.g., counting the number of patients waiting in
emergency at specified times of the day).
Obtaining relevant data from management information systems.
Administering surveys with closed-ended questions (e.g., face-to face and telephone interviews,
questionnaires, etc.). (http://www.achrn.org/quantitative_methods.htm)
Interviews
In Quantitative research (survey research), interviews are more structured than in Qualitative research.
In a structured interview, the researcher asks a standard set of questions and nothing more (Leedy and Ormrod,
2001).
Face-to-face interviews have a distinct advantage of enabling the researcher to establish rapport with
potential participants and therefore gain their cooperation. These interviews yield the highest response rates in
survey research. They also allow the researcher to clarify ambiguous answers and, when appropriate, seek
follow-up information. Disadvantages include being impractical when large samples are involved, as well as
being time consuming and expensive (Leedy and Ormrod, 2001).
Telephone interviews are less time consuming and less expensive and the researcher has ready access to
anyone on the planet who has a telephone. Disadvantages are that the response rate is not as high as with the
face-to-face interview, but it is considerably higher than with the mailed questionnaire. The sample may be biased to the
extent that people without phones are part of the population about whom the researcher wants to draw
inferences.
Computer Assisted Personal Interviewing (CAPI): is a form of personal interviewing, but instead of
completing a questionnaire, the interviewer brings along a laptop or hand-held computer to enter the
information directly into the database. This method saves time involved in processing the data, as well as
saving the interviewer from carrying around hundreds of questionnaires. However, this type of data collection
method can be expensive to set up and requires that interviewers have computer and typing skills.
Questionnaires
Paper-pencil questionnaires can be sent to a large number of people and save the researcher time and
money. People are more truthful while responding to questionnaires, regarding controversial issues in
particular, due to the fact that their responses are anonymous. But they also have drawbacks. The majority of the
people who receive questionnaires don't return them, and those who do might not be representative of the
originally selected sample (Leedy and Ormrod, 2001).
Web based questionnaires: A new and inevitably growing methodology is the use of Internet based research.
This would mean receiving an e-mail containing a link that would take you to a secure
web-site to fill in a questionnaire. This type of research is often quicker and less detailed. Some disadvantages
of this method include the exclusion of people who do not have a computer or are unable to access a computer.
Also the validity of such surveys is in question as people might be in a hurry to complete it and so might not
give accurate responses. Questionnaires often make use of Checklist and rating scales. These devices help
simplify and quantify people's behaviors and attitudes. A checklist is a list of behaviors, characteristics, or
other entities that the researcher is looking for. Either the researcher or survey participant simply checks
whether each item on the list is observed, present, or true, or vice versa. A rating scale is more useful when a
behavior needs to be evaluated on a continuum. They are also known as Likert scales. (Leedy and Ormrod,
2001)
Qualitative Data Collection Methods

Qualitative data collection methods play an important role in impact evaluation by providing information
useful to understand the processes behind observed results and assess changes in people's perceptions of their
well-being. Furthermore, qualitative methods can be used to improve the quality of survey-based quantitative
evaluations by helping generate evaluation hypotheses, strengthening the design of survey questionnaires, and
expanding or clarifying quantitative evaluation findings. These methods are characterized by the following
attributes:
they tend to be open-ended and have less structured protocols (i.e., researchers may change the data
collection strategy by adding, refining, or dropping techniques or informants)
they rely more heavily on interactive interviews; respondents may be interviewed several times to
follow up on a particular issue, clarify concepts or check the reliability of data
they use triangulation to increase the credibility of their findings (i.e., researchers rely on multiple data
collection methods to check the authenticity of their results)
generally their findings are not generalizable to any specific population, rather each case study
produces a single piece of evidence that can be used to seek general patterns among different studies
of the same issue
Regardless of the kinds of data involved, data collection in a qualitative study takes a great deal of time. The
researcher needs to record any potentially useful data thoroughly, accurately, and systematically, using field
notes, sketches, audiotapes, photographs and other suitable means. The data collection methods must observe
the ethical principles of research.
The qualitative methods most commonly used in evaluation can be classified in three broad categories:
in depth interview
observation methods
document review
Observational method
Observation is a way of gathering data by watching behavior or events, or noting physical characteristics, in their
natural setting. Observations can be overt (everyone knows they are being observed) or covert (no one knows
they are being observed and the observer is concealed). The benefit of covert observation is that people are
more likely to behave naturally if they do not know they are being observed. However, you will typically
need to conduct overt observations because of ethical problems related to concealing your observation.
Observations can also be either direct or indirect. Direct observation is when you watch interactions,
processes, or behaviors as they occur; for example, observing a teacher teaching a lesson from a written
curriculum to determine whether they are delivering it with fidelity. Indirect observations are when you watch
the results of interactions, processes, or behaviors; for example, measuring the amount of plate waste left by
students in a school cafeteria to determine whether a new food is acceptable to them.

When should you use observation for evaluation?
When you are trying to understand an ongoing process or situation. Through observation you can monitor or watch a process or situation that you are evaluating as it occurs.
When you are gathering data on individual behaviors or interactions between people. Observation allows you to watch people's behaviors and interactions directly, or watch for the results of behaviors or interactions.
When you need to know about a physical setting. Seeing the place or environment where something takes place can help increase your understanding of the event, activity, or situation you are evaluating. For example, you can observe whether a classroom or training facility is conducive to learning.
When data collection from individuals is not a realistic option. If respondents are unwilling or unable to provide data through questionnaires or interviews, observation is a method that requires little from the individuals for whom you need data.

How do you plan for observations?
Determine the focus. Think about the evaluation question(s) you want to answer through observation and select a few areas of focus for your data collection. For example, you may want to know how well a new curriculum is being implemented in the classroom. Your focus areas might be interactions between students and teachers, and teachers' knowledge, skills, and behaviors.
Design a system for data collection. Once you have focused your evaluation, think about the specific items for which you want to collect data and then determine how you will collect the information you need. There are three primary ways of collecting observation data, and they can be combined to meet your data collection needs. Recording sheets and checklists are the most standardized way of collecting observation data and include both preset questions and responses; these forms are typically used for collecting data that can be easily described in advance. Observation guides list the interactions, processes, or behaviors to be observed, with space to record open-ended narrative data. Field notes are the least standardized way of collecting observation data and do not include preset questions or responses; they are open-ended narrative data that can be written or dictated onto a tape recorder.
Select the sites. Select an adequate number of sites to help ensure they are representative of the larger population and will provide an understanding of the situation you are observing.
Select the observers. You may choose to be the only observer or you may want to include others in conducting observations. Stakeholders, other professional staff members, interns and graduate students, and volunteers are potential observers.
Train the observers. It is critical that the observers are well trained in your data collection process to ensure high quality and consistent data. The level of training will vary based on the complexity of the data collection and the individual capabilities of the observers.
Time your observations appropriately. Programs and processes typically follow a sequence of events. It is critical that you schedule your observations so you are observing the components of the activity that will answer your evaluation questions. This requires advance planning.

What are the advantages of observation?
Collects data where and when an event or activity is occurring.
Does not rely on people's willingness or ability to provide information.
Allows you to directly see what people do rather than relying on what people say they did.

What are the disadvantages of observation?
Susceptible to observer bias.
Susceptible to the Hawthorne effect; that is, people usually perform better when they know they are being observed, although indirect observation may decrease this problem.
Can be expensive and time-consuming compared to other data collection methods.
Does not increase your understanding of why people behave as they do.
EXPERIMENTAL DESIGNS
1. True Designs
2. Quasi Designs
3. Ex Post Facto Designs
Quasi Designs
The primary difference between true designs and quasi designs is that quasi designs do not use random
assignment into treatment or control groups since this design is used in existing naturally occurring settings.
Groups are given pretests, then one group is given a treatment and then both groups are given a post-test. This
raises continual questions of internal and external validity, since the subjects are self-selected. The steps
used in a quasi-design are the same as true designs.
Ex Post Facto Designs
An ex post facto design will determine which variables discriminate between subject groups.
1. Formulate the research problem including identification of factors that may influence dependent
variable(s).
2. Identify alternate hypotheses that may explain the relationships.
3. Identify and select subject groups.
4. Collect and analyze data.
Ex post facto studies cannot prove causation, but may provide insight into the understanding of a phenomenon.
Delphi Method
The Delphi method was developed to structure discussions and summarize opinions from a selected group to:
avoid meetings, collect information/expertise from individuals spread out over a large geographic area, and
save time through the elimination of direct contact.
Although the data may prove to be valuable, the collection process is very time consuming. When time is
available and respondents are willing to be queried over a period of time, the technique can be very powerful
in identifying trends and predicting future events.
The technique requires a series of questionnaires and feedback reports to a group of individuals. Each series
is analyzed and the instrument/statements are revised to reflect the responses of the group. A new
questionnaire is prepared that includes the new material, and the process is repeated until a consensus is
reached.
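The iterate-until-consensus loop can be sketched as follows. The panelists' numeric forecasts and the spread-based stopping rule are hypothetical; real Delphi studies define consensus in many different ways:

```python
import statistics

def delphi_round(responses):
    """Summarize one round: the group median and spread are fed back
    to panelists, who then revise their estimates for the next round."""
    return statistics.median(responses), statistics.pstdev(responses)

# Hypothetical panel: numeric forecasts converging across rounds
# as each panelist sees the previous round's summary.
rounds = [
    [10, 40, 25, 60, 30],   # round 1: wide disagreement
    [20, 35, 25, 45, 30],   # round 2: revised toward the median
    [28, 32, 30, 34, 30],   # round 3: near consensus
]
for i, responses in enumerate(rounds, start=1):
    median, spread = delphi_round(responses)
    print(f"round {i}: median={median}, spread={spread:.1f}")
    if spread < 3:          # example stopping rule: spread small enough
        print("consensus reached -- stop iterating")
        break
```

In practice each "round" is a revised questionnaire plus a feedback report, so the loop body hides the time-consuming analysis and instrument-revision work the text mentions.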
Types of Surveys
Surveys can be divided into two broad categories: the questionnaire and the interview. Questionnaires are
usually paper-and-pencil instruments that the respondent completes. Interviews are completed by the
interviewer based on what the respondent says. Sometimes, it's hard to tell the difference between a questionnaire
and an interview. For instance, some people think that questionnaires always ask short closed-ended
questions while interviews always ask broad open-ended ones. But you will see questionnaires with open-
ended questions (although they do tend to be shorter than in interviews) and there will often be a series of
closed-ended questions asked in an interview.
Survey research has changed dramatically in the last ten years. We have automated telephone surveys that
use random dialing methods. There are computerized kiosks in public places that allow people to ask for
input. A whole new variation of group interview has evolved as focus group methodology. Increasingly,
survey research is tightly integrated with the delivery of service. Your hotel room has a survey on the desk.
Your waiter presents a short customer satisfaction survey with your check. You get a call for an interview
several days after your last call to a computer company for technical assistance. You're asked to complete a
short survey when you visit a web site. Here, I'll describe the major types of questionnaires and interviews,
keeping in mind that technology is leading to rapid evolution of methods. We'll discuss the relative
advantages and disadvantages of these different survey types in Advantages and Disadvantages of Survey
Methods.
Questionnaires
When most people think of questionnaires, they think of the mail survey. All of us
have, at one time or another, received a questionnaire in the mail. There are many
advantages to mail surveys. They are relatively inexpensive to administer. You can
send the exact same instrument to a wide number of people. They allow the
respondent to fill it out at their own convenience. But there are some disadvantages as well. Response rates
from mail surveys are often very low. And, mail questionnaires are not the best vehicles for asking for
detailed written responses.
What's the difference between a group administered questionnaire and a group interview or focus group? In
the group administered questionnaire, each respondent is handed an instrument and asked to complete it
while in the room. Each respondent completes an instrument. In the group interview or focus group, the
interviewer facilitates the session. People work as a group, listening to each other's comments and answering
the questions. Someone takes notes for the entire group -- people don't complete an interview individually.
Interviews
Interviews are a far more personal form of research than questionnaires. In the
personal interview, the interviewer works directly with the respondent. Unlike with
mail surveys, the interviewer has the opportunity to probe or ask follow-up questions.
And, interviews are generally easier for the respondent, especially when opinions or
impressions are sought. Interviews can be very time consuming and they are resource
intensive. The interviewer is considered a part of the measurement instrument and
interviewers have to be well trained in how to respond to any contingency.
Constructing a survey instrument is an art in itself. There are numerous small decisions that must be made --
about content, wording, format, placement -- that can have important consequences for your entire study.
While there's no one perfect way to accomplish this job, we do have lots of advice to offer that might
increase your chances of developing a better final product.
First of all, you'll learn about the two major types of surveys that exist, the questionnaire and the interview,
and the different varieties of each. Then you'll see how to write questions for surveys. There are three areas
involved in writing a question: determining the question content, scope and purpose; choosing the response
format; and figuring out how to word the question to get at the issue of interest.
Finally, once you have your questions written, there is the issue of how best to place them in your survey.
You'll see that although there are many aspects of survey construction that are just common sense, if you are
not careful you can make critical errors that have dramatic effects on your results.
Types of Data
We'll talk about data in lots of places in The Knowledge Base, but here I just want to make a fundamental
distinction between two types of data: qualitative and quantitative. The way we typically define them, we
call data 'quantitative' if it is in numerical form and 'qualitative' if it is not. Notice that qualitative data could
be much more than just words or text. Photographs, videos, sound recordings and so on, can be considered
qualitative data.
Personally, while I find the distinction between qualitative and quantitative data to have some utility, I think
most people draw too hard a distinction, and that can lead to all sorts of confusion. In some areas of social
research, the qualitative-quantitative distinction has led to protracted arguments with the proponents of each
arguing the superiority of their kind of data over the other. The quantitative types argue that their data is
'hard', 'rigorous', 'credible', and 'scientific'. The qualitative proponents counter that their data is 'sensitive',
'nuanced', 'detailed', and 'contextual'.
For many of us in social research, this kind of polarized debate has become less than productive. And, it
obscures the fact that qualitative and quantitative data are intimately related to each other. All quantitative
data is based upon qualitative judgments; and all qualitative data can be described and manipulated
numerically. For instance, think about a very common quantitative measure in social research -- a self
esteem scale. The researchers who develop such instruments had to make countless judgments in
constructing them: how to define self esteem; how to distinguish it from other related concepts; how to word
potential scale items; how to make sure the items would be understandable to the intended respondents; what
kinds of contexts it could be used in; what kinds of cultural and language constraints might be present; and
on and on. The researcher who decides to use such a scale in their study has to make another set of
judgments: how well does the scale measure the intended concept; how reliable or consistent is it; how
appropriate is it for the research context and intended respondents; and on and on. Believe it or not, even the
respondents make many judgments when filling out such a scale: what is meant by various terms and
phrases; why is the researcher giving this scale to them; how much energy and effort do they want to expend
to complete it, and so on. Even the consumers and readers of the research will make lots of judgments about
the self esteem measure and its appropriateness in that research context. What may look like a simple,
straightforward, cut-and-dried quantitative measure is actually based on lots of qualitative judgments made
by lots of different people.
On the other hand, all qualitative information can be easily converted into quantitative form, and there are many
times when doing so would add considerable value to your research. The simplest way to do this is to divide
the qualitative information into units and number them! I know that sounds trivial, but even that simple
nominal enumeration can enable you to organize and process qualitative information more efficiently.
Perhaps more to the point, we might take text information (say, excerpts from transcripts) and pile these
excerpts into piles of similar statements. When we do something even as easy as this simple grouping or
piling task, we can describe the results quantitatively. For instance, if we had ten statements and we grouped
these into five piles (as shown in the figure), we
could describe the piles using a 10 x 10 table of 0's
and 1's. If two statements were placed together in
the same pile, we would put a 1 in their row-column
juncture. If two statements were placed in different
piles, we would use a 0. The resulting matrix or
table describes the grouping of the ten statements in
terms of their similarity. Even though the data in
this example consists of qualitative statements (one
per card), the result of our simple qualitative
procedure (grouping similar excerpts into the same
piles) is quantitative in nature. "So what?" you ask.
Once we have the data in numerical form, we can manipulate it numerically. For instance, we could have
five different judges sort the 10 excerpts and obtain a 0-1 matrix like this for each judge. Then we could
average the five matrices into a single one that shows the proportions of judges who grouped each pair
together. This proportion could be considered an estimate of the similarity (across independent judges) of
the excerpts. While this might not seem too exciting or useful, it is exactly this kind of procedure that I use
as an integral part of the process of developing 'concept maps' of ideas for groups of people (something that
is useful!).
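The pile-sorting procedure described above can be sketched in a few lines of code. This is a minimal illustration, not part of the original text: the statements are identified only by number, and the two judges' pile groupings are hypothetical examples.

```python
# Build the 0/1 co-occurrence matrix described in the text:
# a 1 where two statements were placed in the same pile, else 0.

def cooccurrence_matrix(piles, n_statements):
    """Return an n x n matrix with 1 where two statements share a pile."""
    matrix = [[0] * n_statements for _ in range(n_statements)]
    for pile in piles:
        for i in pile:
            for j in pile:
                matrix[i][j] = 1
    return matrix

# One judge sorts 10 statements (numbered 0-9) into 5 piles.
judge_1 = [[0, 3], [1, 4, 7], [2], [5, 6], [8, 9]]
m1 = cooccurrence_matrix(judge_1, 10)

# A second judge may sort the same statements differently.
judge_2 = [[0, 3, 5], [1, 4], [2, 7], [6], [8, 9]]
m2 = cooccurrence_matrix(judge_2, 10)

# Averaging the judges' matrices gives, for each pair of statements,
# the proportion of judges who grouped them together -- the
# similarity estimate discussed in the text.
similarity = [[(a + b) / 2 for a, b in zip(r1, r2)]
              for r1, r2 in zip(m1, m2)]

print(similarity[0][3])  # both judges grouped 0 and 3 -> 1.0
print(similarity[0][5])  # only judge 2 grouped 0 and 5 -> 0.5
```

With more judges, the same averaging yields finer-grained proportions, which is exactly the kind of matrix used as input to concept-mapping procedures.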
Unit of Analysis
One of the most important ideas in a research project is the unit of analysis. The unit of analysis is the major
entity that you are analyzing in your study. For instance, any of the following could be a unit of analysis in a
study:
individuals
groups
artifacts (books, photos, newspapers)
geographical units (town, census tract, state)
social interactions (dyadic relations, divorces, arrests)
Why is it called the 'unit of analysis' and not something else (like, the unit of sampling)? Because it is the
analysis you do in your study that determines what the unit is. For instance, if you are comparing the
children in two classrooms on achievement test scores, the unit is the individual child because you have a
score for each child. On the other hand, if you are comparing the two classes on classroom climate, your unit
of analysis is the group, in this case the classroom, because you only have a classroom climate score for the
class as a whole and not for each individual student. For different analyses in the same study you may have
different units of analysis. If you decide to base an analysis on student scores, the individual is the unit. But
you might decide to compare average classroom performance. In this case, since the data that goes into the
analysis is the average itself (and not the individuals' scores) the unit of analysis is actually the group. Even
though you had data at the student level, you use aggregates in the analysis. In many areas of social research
these hierarchies of analysis units have become particularly important and have spawned a whole area of
statistical analysis sometimes referred to as hierarchical modeling. This is true in education, for instance,
where we often compare classroom performance but collected achievement data at the individual student
level.
Questionnaires
Questionnaires are a popular means of collecting data, but are difficult to design and often require many
rewrites before an acceptable questionnaire is produced.
Advantages:
Can be used as a method in its own right or as a basis for interviewing or a telephone survey.
No interviewer bias.
Disadvantages:
Design problems.
Respondents can read all the questions beforehand and then decide whether or not to complete the
questionnaire, perhaps because it is too long, too complex, uninteresting, or too personal.
The general theme of the questionnaire should be made explicit in a covering letter. You should state who you
are; why the data is required; give, if necessary, an assurance of confidentiality and/or anonymity; and provide
a contact address or telephone number. This ensures that the respondents know what they are committing
themselves to, and also that they understand the context of their replies. If possible, you should offer an
estimate of the completion time. Instructions for return should be included with the return date made obvious.
For example: It would be appreciated if you could return the completed questionnaire by... if at all possible.
Instructions for completion
You need to provide clear and unambiguous instructions for completion. Within most questionnaires these are
general instructions and specific instructions for particular question structures. It is usually best to separate
these, supplying the general instructions as a preamble to the questionnaire, but leaving the specific
instructions until the questions to which they apply. The response method should be indicated (circle, tick,
cross, etc.). Wherever possible, and certainly if a slightly unfamiliar response system is employed, you should
give an example.
Appearance
Appearance is usually the first feature of the questionnaire to which the recipient reacts. A neat and
professional look will encourage further consideration of your request, increasing your response rate. In
addition, careful thought to layout should help your analysis. There are a number of simple rules to help
improve questionnaire appearance:
Consistent positioning of response boxes, usually to the right, speeds up completion and also avoids
inadvertent omission of responses.
Differentiate between instructions and questions. Both lower case and capitals can be used, or
responses can be boxed.
Length
There may be a strong temptation to include any vaguely interesting questions, but you should resist this at all
costs. Excessive size can only reduce response rates. If a long questionnaire is necessary, then you must give
even more thought to appearance. It is best to leave pages unnumbered; for respondents to flick to the end and
see page 27 can be very disconcerting!
Order
Probably the most crucial stage in questionnaire response is the beginning. Once the respondents have started
to complete the questions they will normally finish the task, unless it is very long or difficult. Consequently,
you need to select the opening questions with care. Usually the best approach is to ask for biographical details
first, as the respondents should know all the answers without much thought. Another benefit is that an easy
start provides practice in answering questions.
Once the introduction has been achieved the subsequent order will depend on many considerations. You
should be aware of the varying importance of different questions. Essential information should appear early,
just in case the questionnaire is not completed. For the same reasons, relatively unimportant questions can be
placed towards the end. If questions are likely to provoke the respondent and remain unanswered, these too
are best left until the end, in the hope of obtaining answers to everything else.
Coding
If analysis of the results is to be carried out using a statistical package or spreadsheet it is advisable to code
non-numerical responses when designing the questionnaire, rather than trying to code the responses when they
are returned. An example of coding is:
Male [ ] Female [ ]
1 2
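Pre-coding like this makes the returned forms straightforward to enter and tabulate. As a minimal sketch (the categories and responses here are illustrative, not from the text):

```python
# Code non-numerical responses at design time, as in the
# Male = 1, Female = 2 example above.
codes = {"Male": 1, "Female": 2}

# Hypothetical returned responses, converted to their codes.
responses = ["Female", "Male", "Female", "Female"]
coded = [codes[r] for r in responses]
print(coded)  # [2, 1, 2, 2]

# The coded column can then be tallied directly.
tally = {label: coded.count(code) for label, code in codes.items()}
print(tally)  # {'Male': 1, 'Female': 3}
```

Deciding the codes on the questionnaire itself, rather than after the forms come back, keeps the coding consistent across every response.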
Thank you
Respondents to questionnaires rarely benefit personally from their efforts and the least the researcher can do
is to thank them. Even though the covering letter will express appreciation for the help given, it is also a nice
gesture to finish the questionnaire with a further thank you.
Questions
Keep the questions short, simple and to the point; avoid all unnecessary words.
Use words and phrases that are unambiguous and familiar to the respondent. For example, dinner has
a number of different interpretations; use an alternative expression such as evening meal.
Only ask questions that the respondent can answer. Hypothetical questions should be avoided. Avoid
calculations and questions that require a lot of memory work, for example, How many people stayed
in your hotel last year?
Avoid loaded or leading questions that imply a certain answer. For example, by mentioning one
particular item in the question, Do you agree that Colgate toothpaste is the best toothpaste?
Vacuous words or phrases should be avoided. Generally, usually, or normally are imprecise terms
with various meanings. They should be replaced with quantitative statements, for example, at least
once a week.
Questions should only address a single issue. For example, a question like Do you take annual
holidays to Spain? should be broken down into two discrete stages: first find out if the respondent
takes an annual holiday, and then find out if they go to Spain.
Do not ask two questions in one by using and. For example, Did you watch television last night and
read a newspaper?
Avoid double negatives. For example, Is it not true that you did not read a newspaper yesterday?
Respondents may tackle a double negative by switching both negatives and then assuming that the
same answer applies. This is not necessarily valid.
State the units required, but do not aim for too high a degree of accuracy. For instance, ask for an
interval rather than an exact figure.
Avoid emotive or embarrassing words usually connected with race, religion, politics, sex, money.
Types of questions
Closed questions
A question is asked and then a number of possible answers are provided for the respondent. The respondent
selects the answer which is appropriate. Closed questions are particularly useful in obtaining factual
information:
Some Yes/No questions have a third category, Do not know. Experience shows that as long as this
alternative is not mentioned people will make a choice. Also, the phrase Do not know is ambiguous.
What was your main way of travelling to the hotel? Tick one box only.
Car [ ]
Coach [ ]
Motor bike [ ]
Train [ ]
With such lists you should always include an other category, because not all possible responses might have
been included in the list of answers.
Sometimes the respondent can select more than one from the list. However, this makes analysis difficult:
Why have you visited the historic house? Tick the relevant answer(s). You may tick as many as you like.
I enjoy visiting historic houses [ ]
The weather was bad and I could not enjoy outdoor activities [ ]
I have visited the house before and wished to return [ ]
Other reason, please specify [ ]
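One common way to make such "tick all that apply" responses analyzable is to treat each option as a separate yes/no variable. The sketch below illustrates this with hypothetical responses to the historic-house question above:

```python
# Encode multi-select responses as one 0/1 indicator per option.
options = ["enjoy historic houses", "bad weather", "visited before", "other"]

# Each respondent's ticks, recorded as a set of option names
# (hypothetical data).
responses = [
    {"enjoy historic houses", "visited before"},
    {"bad weather"},
    {"enjoy historic houses", "bad weather", "other"},
]

# One row per respondent, one 0/1 column per option.
encoded = [[1 if opt in r else 0 for opt in options] for r in responses]

# Column totals show how many respondents ticked each reason.
totals = [sum(col) for col in zip(*encoded)]
print(dict(zip(options, totals)))
```

Because each option becomes its own variable, the totals no longer need to sum to the number of respondents, which is what makes multi-select questions harder to summarize than single-choice ones.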
Attitude questions
Frequently questions are asked to find out the respondent's opinions or attitudes to a given situation. A Likert
scale provides a battery of attitude statements. The respondent then indicates how much they agree or disagree
with each one:
Read the following statements and then indicate by a tick whether you strongly agree, agree, disagree or
strongly disagree with the statement.
There are many variations on this type of question. One variation is to have a middle category, for example,
Neither agree nor disagree. However, many respondents take this as the easy option. Only having four
categories, as above, forces the respondent into making a positive or negative choice. Another variation is to
rank the various attitude statements; however, this can cause analysis problems:
Which of these characteristics do you like about your job? Indicate the best three in order, with the best being
number 1.
Varied work [ ]
Good salary [ ]
Opportunities for promotion [ ]
Good working conditions [ ]
High amount of responsibility [ ]
Friendly colleagues [ ]
A semantic differential scale attempts to see how strongly an attitude is held by the respondent. With these
scales double-ended terms are given to the respondents who are asked to indicate where their attitude lies on
the scale between the terms. The response can be indicated by putting a cross in a particular position or circling
a number:
Difficult 1 2 3 4 5 6 7 Easy
Useless 1 2 3 4 5 6 7 Useful
Interesting 1 2 3 4 5 6 7 Boring
For summary and analysis purposes, a score of 1 to 7 may be allocated to the seven points of the scale, thus
quantifying the various degrees of opinion expressed. This procedure has some disadvantages. It is implicitly
assumed that two people with the same strength of feeling will mark the same point on the scale. This almost
certainly will not be the case. When faced with a semantic differential scale, some people will never, as a
matter of principle, use the two end indicators of 1 and 7. Effectively, therefore, they are using a five-point
scale. Also scoring the scale 1 to 7 assumes that they represent equidistant points on the continuous spectrum
of opinion. This again is probably not true. Nevertheless, within its limitations, the semantic differential can
provide a useful way of measuring and summarizing subjective opinions.
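The scoring described above can be sketched as follows. The responses here are hypothetical, and the caveats just discussed still apply: averaging the 1-to-7 scores assumes equidistant points and comparable use of the scale across respondents.

```python
# Summarize semantic differential responses by averaging the
# 1-7 score given by each respondent on each scale.
scales = ["Difficult-Easy", "Useless-Useful", "Interesting-Boring"]

# Rows are respondents; columns follow the scales list
# (hypothetical data).
responses = [
    [6, 5, 2],
    [4, 6, 3],
    [5, 7, 2],
]

# Mean score per scale, across respondents.
means = [sum(col) / len(col) for col in zip(*responses)]
for scale, mean in zip(scales, means):
    print(f"{scale}: {mean:.2f}")
```

Note that a low mean is not necessarily negative: on the Interesting-Boring scale, a score near 1 indicates the respondents found the subject interesting, so the direction of each scale must be kept in mind when reading the summaries.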
Open questions
An open question such as What are the essential skills a manager should possess? should be used as an
adjunct to the main theme of the questionnaire and could allow the respondent to elaborate upon an earlier
more specific question. Open questions inserted at the end of major sections, or at the end of the questionnaire,
can act as safety valves, and possibly offer additional information. However, they should not be used to
introduce a section since there is a high risk of influencing later responses. The main problem of open
questions is that many different answers have to be summarized and possibly coded.
Questionnaire design is fraught with difficulties and problems. A number of rewrites will be necessary,
together with refinement and rethinking on a regular basis. Do not assume that you will write the questionnaire
accurately and perfectly at the first attempt. If the questionnaire is poorly designed, you will collect
inappropriate or inaccurate data, and good analysis cannot then rectify the situation.
To refine the questionnaire, you need to conduct a pilot survey. This is a small-scale trial prior to the main
survey that tests all your question planning. Amendments to questions can be made. After making some
amendments, the new version would be re-tested. If this re-test produces more changes, another pilot would
be undertaken, and so on. For example, open-ended questions may become closed questions; questions
which are all answered in the same way can be omitted; difficult words can be replaced; and so on.
It is usual to pilot the questionnaires personally so that the respondent can be observed and questioned if
necessary. By timing each question, you can identify any questions that appear too difficult, and you can also
obtain a reliable estimate of the anticipated completion time for inclusion in the covering letter. The result can
also be used to test the coding and analytical procedures to be performed later.
The questionnaire should be checked for completeness to ensure that all pages are present and that none is
blank or illegible. It is usual to supply a prepaid addressed envelope for the return of the questionnaire. You
need to explain this in the covering letter and reinforce it at the end of the questionnaire, after the Thank you.
Finally, many organizations are approached continually for information. Many, as a matter of course, will not
respond in a positive way.
Step 2: Review the Literature
Now that the problem has been identified, the researcher must learn more about the topic under
investigation. To do this, the researcher must review the literature related to the research problem. This
step provides foundational knowledge about the problem area. The review of literature also educates the
researcher about what studies have been conducted in the past, how these studies were conducted, and the
conclusions in the problem area. In the obesity study, the review of literature enables the programmer to
discover horrifying statistics related to the long-term effects of childhood obesity in terms of health issues,
death rates, and projected medical costs. In addition, the programmer finds several articles and information
from the Centers for Disease Control and Prevention that describe the benefits of walking 10,000 steps a
day. The information discovered during this step helps the programmer fully understand the magnitude of
the problem, recognize the future consequences of obesity, and identify a strategy to combat obesity (i.e.,
walking).
Step 3: Clarify the Problem
Many times the initial problem identified in the first step of the process is too large or broad in scope. In
step 3 of the process, the researcher clarifies the problem and narrows the scope of the study. This can only
be done after the literature has been reviewed. The knowledge gained through the review of literature
guides the researcher in clarifying and narrowing the research project. In the example, the programmer has
identified childhood obesity as the problem and the purpose of the study. This topic is very broad and could
be studied based on genetics, family environment, diet, exercise, self-confidence, leisure activities, or
health issues. All of these areas cannot be investigated in a single study; therefore, the problem and purpose
of the study must be more clearly defined. The programmer has decided that the purpose of the study is to
determine if walking 10,000 steps a day for three days a week will improve the individual's health. This
purpose is more narrowly focused and researchable than the original problem.
Step 4: Clearly Define Terms and Concepts
Terms and concepts are words or phrases used in the purpose statement of the study or the description of
the study. These items need to be specifically defined as they apply to the study. Terms or concepts often
have different definitions depending on who is reading the study. To minimize confusion about what the
terms and phrases mean, the researcher must specifically define them for the study. In the obesity study,
the concept of an individual's health can be defined in hundreds of ways, such as physical, mental,
emotional, or spiritual health. For this study, the individual's health is defined as physical health. The
concept of physical health may also be defined and measured in many ways. In this case, the programmer
decides to more narrowly define individual health to refer to the areas of weight, percentage of body fat,
and cholesterol. By defining the terms or concepts more narrowly, the scope of the study is more
manageable for the programmer, making it easier to collect the necessary data for the study. This also
makes the concepts more understandable to the reader.
Step 5: Define the Population
Research projects can focus on a specific group of people, facilities, park development, employee
evaluations, programs, financial status, marketing efforts, or the integration of technology into the
operations. For example, if a researcher wants to examine a specific group of people in the community, the
study could examine a specific age group, males or females, people living in a specific geographic area, or
a specific ethnic group. Literally thousands of options are available to the researcher to specifically identify
the group to study. The research problem and the purpose of the study assist the researcher in identifying
the group to involve in the study. In research terms, the group to involve in the study is always called the
population. Defining the population assists the researcher in several ways. First, it narrows the scope of the
study from a very large population to one that is manageable. Second, the population identifies the group
that the researcher's efforts will be focused on within the study. This helps ensure that the researcher stays
on the right path during the study. Finally, by defining the population, the researcher identifies the group
that the results will apply to at the conclusion of the study. In the example in table 2.4, the programmer has
identified the population of the study as children ages 10 to 12 years. This narrower population makes the
study more manageable in terms of time and resources.
Step 6: Develop the Instrumentation Plan
The plan for the study is referred to as the instrumentation plan. The instrumentation plan serves as the
road map for the entire study, specifying who will participate in the study; how, when, and where data will
be collected; and the content of the program. This plan is composed of numerous decisions and
considerations that are addressed in chapter 8 of this text. In the obesity study, the researcher has decided
to have the children participate in a walking program for six months. The group of participants is called
the sample, which is a smaller group selected from the population specified for the study. The study cannot
possibly include every 10- to 12-year-old child in the community, so a smaller group is used to represent
the population. The researcher develops the plan for the walking program, indicating what data will be
collected, when and how the data will be collected, who will collect the data, and how the data will be
analyzed. The instrumentation plan specifies all the steps that must be completed for the study. This ensures
that the programmer has carefully thought through all these decisions and that she provides a step-by-step
plan to be followed in the study.
Step 7: Collect Data
Once the instrumentation plan is completed, the actual study begins with the collection of data. The
collection of data is a critical step in providing the information needed to answer the research question.
Every study includes the collection of some type of data, whether it is from the literature or from
subjects, to answer the research question. Data can be collected in the form of words on a survey, with a
questionnaire, through observations, or from the literature. In the obesity study, the programmers will be
collecting data on the defined variables: weight, percentage of body fat, cholesterol levels, and the number
of days the person walked a total of 10,000 steps during the class.
The researcher collects these data at the first session and at the last session of the program. These two sets
of data are necessary to determine the effect of the walking program on weight, body fat, and cholesterol
level. Once the data are collected on the variables, the researcher is ready to move to the final step of the
process, which is the data analysis.
Step 8: Analyze the Data
All the time, effort, and resources dedicated to steps 1 through 7 of the research process culminate in this
final step. The researcher finally has data to analyze so that the research question can be answered. In the
instrumentation plan, the researcher specified how the data will be analyzed. The researcher now analyzes
the data according to the plan. The results of this analysis are then reviewed and summarized in a manner
directly related to the research questions. In the obesity study, the researcher compares the measurements
of weight, percentage of body fat, and cholesterol that were taken at the first meeting of the subjects to the
measurements of the same variables at the final program session. These two sets of data will be analyzed
to determine if there was a difference between the first measurement and the second measurement for each
individual in the program. Then, the data will be analyzed to determine if the differences are statistically
significant. If the differences are statistically significant, the study validates the theory that was the focus
of the study. The results of the study also provide valuable information about one strategy to combat
childhood obesity in the community.
As you have probably concluded, conducting studies using the eight steps of the scientific research process
requires you to dedicate time and effort to the planning process. You cannot conduct a study using the scientific
research process when time is limited or the study is done at the last minute. Researchers who do this conduct
studies that result in either false conclusions or conclusions that are not of any value to the organization.
Modes of Classification
There are four types of classification, viz., (i) qualitative; (ii) quantitative; (iii) temporal and (iv) spatial.
(i) Qualitative classification: It is done according to attributes or non-measurable characteristics; like social
status, sex, nationality, occupation, etc. For example, the population of the whole country can be classified
into four categories as married, unmarried, widowed and divorced. When only one attribute, e.g., sex, is
used for classification, it is called simple classification. When more than one attribute, e.g., deafness, sex
and religion, is used for classification, it is called manifold classification.
(ii) Quantitative classification: It is done according to numerical size like weights in kg or heights in cm.
Here we classify the data by assigning arbitrary limits known as class-limits.
The quantitative phenomenon under study is called a variable. For example, the population of the whole
country may be classified according to different variables like age, income, wage, price, etc. Hence this
classification is often called classification by variables.
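The assignment of class-limits described above can be sketched as a simple binning routine; the wage figures and limits below are purely illustrative:

```python
def classify(values, limits):
    """Count observations in each class; class i covers [limits[i], limits[i+1])."""
    counts = [0] * (len(limits) - 1)
    for v in values:
        for i in range(len(limits) - 1):
            if limits[i] <= v < limits[i + 1]:
                counts[i] += 1
                break
    return counts

# Hypothetical daily wages classified with arbitrary class-limits 10, 20, 30, 40, 50
wages = [12, 25, 37, 41, 18, 29, 33, 45, 22, 39]
print(classify(wages, [10, 20, 30, 40, 50]))
```

Each position of the result is one class (10-20, 20-30, and so on); treating a value equal to an upper limit as belonging to the next class is one common convention, not the only possible one.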
(a) Variable: A variable in statistics means any measurable characteristic or quantity which can assume a
range of numerical values within certain limits, e.g., income, height, age, weight, wage, price, etc. A
variable can be classified as either discrete or continuous.
(1) Discrete variable: A variable which can take up only exact values and not any fractional values, is
called a discrete variable. Number of workmen in a factory, members of a family, students in a class,
number of births in a certain year, number of telephone calls in a month, etc., are examples of discrete
variables.
(2) Continuous variable: A variable which can take up any numerical value (integral/fractional) within a
certain range is called a continuous variable. Height, weight, rainfall, time, temperature, etc., are examples
of continuous variables. Age of students in a school is a continuous variable as it can be measured to the
nearest fraction of time, i.e., years, months, days, etc.
(iii) Temporal classification: It is done according to time, e.g., index numbers arranged over a period of
time, population of a country for several decades, exports and imports of India for different five year plans,
etc.
(iv) Spatial classification: It is done with respect to space or places, e.g., production of cereals in quintals in
various states, population of a country according to states, etc.
The above examples are just samples of bibliography entries and may be used, but one should also
remember that they are not the only acceptable forms. The only important thing is that, whatever method
one selects, it must be applied consistently.
Writing the final draft: This constitutes the last step. The final draft should be written in a concise and
objective style and in simple language, avoiding vague expressions such as "it seems", "there may be", and
the like. While writing the final draft, the researcher must avoid abstract terminology and technical
jargon. Illustrations and examples based on common experiences must be incorporated in the final draft as
they happen to be most effective in communicating the research findings to others. A research report should
not be dull, but must enthuse people and maintain interest and must show originality. It must be remembered
that every report should be an attempt to solve some intellectual problem and must contribute to the solution
of a problem and must add to the knowledge of both the researcher and the reader.
LAYOUT OF THE RESEARCH REPORT
Anybody who reads the research report must be told enough about the study to place it in its general
scientific context, judge the adequacy of its methods and thus form an opinion of how seriously the findings
are to be taken. For this purpose a proper layout of the report is needed. The layout of the report means
what the research report should contain. A comprehensive layout
of the research report should comprise (A) preliminary pages; (B) the main text; and (C) the end matter. Let
us deal with them separately.
(A) Preliminary Pages
In its preliminary pages the report should carry a title and date, followed by acknowledgements in the form
of Preface or Foreword. Then there should be a table of contents followed by list of tables and
illustrations so that the decision-maker or anybody interested in reading the report can easily locate the
required information in the report.
(B) Main Text
The main text provides the complete outline of the research report along with all details. Title of the research
study is repeated at the top of the first page of the main text and then follows the other details on pages
numbered consecutively, beginning with the second page. Each main section of the report should begin on a
new page. The main text of the report should have the following sections:
(i) Introduction; (ii) Statement of findings and recommendations; (iii) The results; (iv) The implications
drawn from the results; and (v) The summary.
(i) Introduction: The purpose of introduction is to introduce the research project to the readers. It should
contain a clear statement of the objectives of research, i.e., enough background should be given to make clear
to the reader why the problem was considered worth investigating. A brief summary of other relevant
research may also be stated so that the present study can be seen in that context. The hypotheses of study, if
any, and the definitions of the major concepts employed in the study should be explicitly stated in the
introduction of the report.
The methodology adopted in conducting the study must be fully explained. The scientific reader would like
to know in detail about such things: How was the study carried out? What was its basic design? If the study
was an experimental one, then what were the experimental manipulations? If the data were collected by
means of questionnaires or interviews, then exactly what questions were asked (The questionnaire or
interview schedule is usually given in an appendix)? If measurements were based on observation, then what
instructions were given to the observers? Regarding the sample used in the study the reader should be told:
Who were the subjects? How many were there? How were they selected? All these questions are crucial for
estimating the probable limits of generalizability of the findings. The statistical analysis adopted must also
be clearly stated. In addition to all this, the scope of the study should be stated and the boundary lines be
demarcated. The various limitations, under which the research project was completed, must also be narrated.
(ii) Statement of findings and recommendations: After introduction, the research report must contain a
statement of findings and recommendations in non-technical language so that it can be easily understood by
all concerned. If the findings happen to be extensive, at this point they should be put in the summarized
form.
(iii) Results: A detailed presentation of the findings of the study, with supporting data in the form of tables
and charts together with a validation of results, is the next step in writing the main text of the report. This
generally comprises the main body of the report, extending over several chapters. The result section of the
report should contain statistical summaries and reductions of the data rather than the raw data. All the results
should be presented in logical sequence and split into readily identifiable sections. All relevant results
must find a place in the report. But how one is to decide about what is relevant is the basic question. Quite
often guidance comes primarily from the research problem and from the hypotheses, if any, with which the
study was concerned. But ultimately the researcher must rely on his own judgment in deciding the outline of
his report. Nevertheless, it is still necessary that he states clearly the problem with which he was concerned,
the procedure by which he worked on the problem, the conclusions at which he arrived, and the bases for his
conclusions.
(iv) Implications of the results: Toward the end of the main text, the researcher should again put down the
results of his research clearly and precisely. He should state the implications that flow from the results of
the study, for the general reader is interested in the implications for understanding human behaviour.
Such implications may have three aspects as stated below:
(a) A statement of the inferences drawn from the present study which may be expected to apply in similar
circumstances.
(b) The conditions of the present study which may limit the extent of legitimate generalizations of the
inferences drawn from the study.
(c) The relevant questions that still remain unanswered or new questions raised by the study along with
suggestions for the kind of research that would provide answers for them.
It is considered a good practice to finish the report with a short conclusion which summarizes and
recapitulates the main points of the study. The conclusion drawn from the study should be clearly related to
the hypotheses that were stated in the introductory section. At the same time, a forecast of the probable
future of the subject and an indication of the kind of research which needs to be done in that particular field
is useful and desirable.
(v) Summary: It has become customary to conclude the research report with a very brief summary, restating in
brief the research problem, the methodology, the major findings and the major conclusions drawn from the
research results.
(C) End Matter
At the end of the report, appendices should be given in respect of all technical data such as questionnaires,
sample information, mathematical derivations and the like. Bibliography of sources consulted should
also be given. Index (an alphabetical listing of names, places and topics along with the numbers of the pages
in a book or report on which they are mentioned or discussed) should invariably be given at the end of the
report. The value of the index lies in the fact that it works as a guide for the reader to the contents of the report.
TYPES OF REPORTS
Research reports vary greatly in length and type. In each individual case, both the length and the form are
largely dictated by the problems at hand. For instance, business firms prefer reports in the letter form, just
one or two pages in length. Banks, insurance organizations and financial institutions are generally fond of
the short balance-sheet type of tabulation for their annual reports to their customers and shareholders.
Mathematicians prefer to write the results of their investigations in the form of algebraic notations. Chemists
report their results in symbols and formulae. Students of literature usually write long reports presenting the
critical analysis of some writer or period or the like with a liberal use of quotations from the works of the
author under discussion. In the field of education and psychology, the favourite form is the report on the
results of experimentation accompanied by the detailed statistical tabulations. Clinical psychologists and
social pathologists frequently find it necessary to make use of the case-history form.
News items in the daily papers are also forms of report writing. They represent firsthand, on-the-scene
accounts of the events described or compilations of interviews with persons who were on the scene. In such
reports the first paragraph usually contains the important information in detail and the succeeding
paragraphs contain material which is progressively less and less important.
Book-reviews, which analyze the content of a book and report on the author's intentions, his success or
failure in achieving his aims, his language, his style, scholarship, bias or his point of view, also happen
to be a kind of short report. The reports prepared by governmental bureaus,
special commissions, and similar other organizations are generally very comprehensive reports on the issues
involved. Such reports are usually considered as important research products. Similarly, Ph.D. theses and
dissertations are also a form of report-writing, usually completed by students in academic institutions.
The above narration throws light on the fact that the results of a research investigation can be presented in a
number of ways viz., a technical report, a popular report, an article, a monograph or at times even in the
form of oral presentation. Which method(s) of presentation are to be used in a particular study depends on the
circumstances under which the study arose and the nature of the results. A technical report is used whenever
a full written report of the study is required whether for recordkeeping or for public dissemination. A
popular report is used if the research results have policy implications. We give below a few details about the
said two types of reports:
(A) Technical Report
In the technical report the main emphasis is on (i) the methods employed, (ii) assumptions made in the
course of the study, (iii) the detailed presentation of the findings including their limitations and supporting
data.
A general outline of a technical report can be as follows:
1. Summary of results: A brief review of the main findings just in two or three pages.
2. Nature of the study: Description of the general objectives of study, formulation of the problem in
operational terms, the working hypothesis, the type of analysis and data required, etc.
3. Methods employed: Specific methods used in the study and their limitations. For instance, in sampling
studies we should give details of sample design viz., sample size, sample selection, etc.
4. Data: Discussion of data collected, their sources, characteristics and limitations. If secondary data are
used, their suitability to the problem at hand be fully assessed. In case of a survey, the manner in which data
were collected should be fully described.
5. Analysis of data and presentation of findings: The analysis of data and presentation of the findings of the
study with supporting data in the form of tables and charts be fully narrated. This, in fact, happens to be the
main body of the report usually extending over several chapters.
6. Conclusions: A detailed summary of the findings and the policy implications drawn from the results be
explained.
7. Bibliography: Bibliography of various sources consulted be prepared and attached.
8. Technical appendices: Appendices be given for all technical matters relating to questionnaire,
mathematical derivations, elaboration on particular technique of analysis and the like ones.
9. Index: Index must be prepared and be given invariably in the report at the end.
The order presented above only gives a general idea of the nature of a technical report; the order of
presentation may not necessarily be the same in all the technical reports. This, in other words, means that the
presentation may vary in different reports; even the different sections outlined above will not always be the
same, nor will all these sections appear in any particular report.
It should, however, be remembered that even in a technical report, simple presentation and ready availability
of the findings remain an important consideration and as such the liberal use of charts and diagrams is
considered desirable.
(B) Popular Report
The popular report is one which gives emphasis on simplicity and attractiveness. The simplification should
be sought through clear writing, minimization of technical, particularly mathematical, details and liberal use
of charts and diagrams. An attractive layout along with large print, many subheadings and even an occasional
cartoon is another characteristic feature of the popular report.
Besides, in such a report emphasis is given on practical aspects and policy implications.
We give below a general outline of a popular report.
1. The findings and their implications: Emphasis in the report is given on the findings of most practical
interest and on the implications of these findings.
2. Recommendations for action: Recommendations for action on the basis of the findings of the study are
made in this section of the report.
3. Objective of the study: A general review of how the problem arose is presented along with the specific
objectives of the project under study.
4. Methods employed: A brief and non-technical description of the methods and techniques used, including a
short review of the data on which the study is based, is given in this part of the report.
5. Results: This section constitutes the main body of the report wherein the results of the study are presented
in clear and non-technical terms with liberal use of all sorts of illustrations such as charts, diagrams and the
like ones.
6. Technical appendices: More detailed information on methods used, forms, etc. is presented in the form of
appendices. But the appendices are often not detailed if the report is meant entirely for the general public.
There can be several variations of the form in which a popular report can be prepared. The only important
thing about such a report is that it gives emphasis on simplicity and policy implications from the operational
point of view, avoiding the technical details of all sorts to the extent possible.
ORAL PRESENTATION
At times oral presentation of the results of the study is considered effective, particularly in cases where
policy recommendations are indicated by project results. The merit of this approach lies in the fact that it
provides an opportunity for give-and-take decisions which generally lead to a better understanding of the
findings and their implications. But the main demerit of this sort of presentation is the lack of any permanent
record concerning the research details, and it may well happen that the findings fade away from
people's memory even before action is taken. In order to overcome this difficulty, a written report may be
circulated before the oral presentation and referred to frequently during the discussion. Oral presentation is
effective when supplemented by various visual devices. Use of slides, wall charts and blackboards is quite
helpful in contributing to clarity and in reducing boredom, if any. Distributing a broad outline, with a
few important tables and charts concerning the research results, keeps the listeners attentive by giving them a
ready outline on which to focus their thinking. This very often happens in academic institutions where the
researcher discusses his research findings and policy implications with others either in a seminar or in a
group discussion.
Thus, research results can be reported in more than one way, but the usual practice adopted, in academic
institutions particularly, is that of writing the Technical Report and then preparing several research papers to
be discussed at various forums in one form or the other. But in practical field and with problems having
policy implications, the technique followed is that of writing a popular report.
Researches done on governmental account or on behalf of some major public or private organizations are
usually presented in the form of technical reports.
MECHANICS OF WRITING A RESEARCH REPORT
There are very definite and set rules which should be followed in the actual preparation of the research
report or paper. Once the techniques are finally decided, they should be scrupulously adhered to, and no
deviation permitted. The criteria of format should be decided as soon as the materials for the research paper
have been assembled. The following points deserve mention so far as the mechanics of writing a report are
concerned:
1. Size and physical design: The manuscript should be written on un-ruled paper 8½″ × 11″ in size. If it is
to be written by hand, then black or blue-black ink should be used. A margin of at least one and one-half
inches should be allowed at the left hand and of at least half an inch at the right hand of the paper. There
should also be one-inch margins, top and bottom. The paper should be neat and legible. If the manuscript is
to be typed, then all typing should be double-spaced on one side of the page only except for the insertion of
the long quotations.
2. Procedure: Various steps in writing the report should be strictly adhered to (all such steps have already
been explained earlier in this chapter).
3. Layout: Keeping in view the objective and nature of the problem, the layout of the report should be
thought of and decided and accordingly adopted (The layout of the research report and various types of
reports have been described in this chapter earlier which should be taken as a guide for report-writing in case
of a particular problem).
4. Treatment of quotations: Quotations should be placed in quotation marks and double spaced, forming an
immediate part of the text. But if a quotation is of a considerable length (more than four or five type written
lines) then it should be single-spaced and indented at least half an inch to the right of the normal text margin.
5. The footnotes: Regarding footnotes one should keep in view the following:
(a) The footnotes serve two purposes viz., the identification of materials used in quotations in the report and
the notice of materials not immediately necessary to the body of the research text but still of supplemental
value. In other words, footnotes are meant for cross references, citation of authorities and sources,
acknowledgement and elucidation or explanation of a point of view. It should always be kept in view that a
footnote is neither an end in itself nor a means of displaying scholarship. The modern tendency is to make
minimum use of footnotes, for scholarship does not need to be displayed.
(b) Footnotes are placed at the bottom of the page on which the reference or quotation which they identify or
supplement ends. Footnotes are customarily separated from the textual material by a space of half an inch
and a line about one and a half inches long.
(c) Footnotes should be numbered consecutively, usually beginning with 1 in each chapter separately. The
number should be put slightly above the line, say at the end of a quotation.
At the foot of the page, again, the footnote number should be indented and typed a little above the line.
Thus, consecutive numbers must be used to correlate the reference in the text with its corresponding note at
the bottom of the page, except in case of statistical tables and other numerical material, where symbols such
as the asterisk (*) or the like may be used to prevent confusion.
(d) Footnotes are always typed in single space though they are divided from one another by double space.
6. Documentation style: Regarding documentation, the first footnote reference to any given work should be
complete in its documentation, giving all the essential facts about the edition used. Such documentary
footnotes follow a general sequence. The common order may be described as under:
(i) Regarding the single-volume reference
1. Author's name in normal order (and not beginning with the last name as in a bibliography) followed by a
comma;
2. Title of work, underlined to indicate italics;
3. Place and date of publication;
4. Pagination references (The page number).
Example
John Gassner, Masters of the Drama, New York: Dover Publications, Inc., 1954, p. 315.
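As an illustration of the fixed ordering just listed, a small helper can assemble a single-volume footnote string. This is a hypothetical sketch of the sequence (author, title, place and date of publication, pagination); actual style manuals differ in punctuation details:

```python
def footnote(author, title, place, publisher, year, page):
    """Join the footnote elements in the order described above:
    author in normal order, title, place and date of publication, pagination."""
    return f"{author}, {title}, {place}: {publisher}, {year}, p. {page}."

print(footnote("John Gassner", "Masters of the Drama",
               "New York", "Dover Publications, Inc.", 1954, 315))
```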
(ii) Regarding multivolume reference
1. Author's name in the normal order;
2. Title of work, underlined to indicate italics;
3. Place and date of publication;
4. Number of volume;
5. Pagination references (The page number).
(iii) Regarding works arranged alphabetically
For works arranged alphabetically such as encyclopedias and dictionaries, no pagination reference is usually
needed. In such cases the order is illustrated as under:
Example 1
"Salamanca", Encyclopaedia Britannica, 14th Edition.
Example 2
"Mary Wollstonecraft Godwin", Dictionary of National Biography.
But if there should be a detailed reference to a long encyclopedia article, volume and pagination reference
may be found necessary.
(iv) Regarding periodicals reference
1. Name of the author in normal order;
2. Title of article, in quotation marks;
3. Name of periodical, underlined to indicate italics;
4. Volume number;
5. Date of issuance;
6. Pagination.
(v) Regarding anthologies and collections reference
Quotations from anthologies or collections of literary works must be acknowledged not only by author, but
also by the name of the collector.
(vi) Regarding second-hand quotations reference
In such cases the documentation should be handled as follows:
1. Original author and title;
2. "quoted in" or "cited in";
3. Second author and work.
Example
J.F. Jones, Life in Polynesia, p. 16, quoted in History of the Pacific Ocean Area, by R.B. Abel, p. 191.
(vii) Case of multiple authorship
If there are more than two authors or editors, then in the documentation the name of only the first is given
and the multiple authorship is indicated by "et al." or "and others".
Subsequent references to the same work need not be as detailed as stated above. If the work is cited again
without any other work intervening, it may be indicated as ibid., followed by a comma and the page number.
A single page should be referred to as p., but more than one page as pp. If several consecutive pages are
referred to, the practice is often to use the first page number with a suffix, for example, pp. 190ff, which
means page 190 and the following pages; p. 190f means page 190 and the following page only.
Roman numerals are generally used to indicate the number of the volume of a book. Op. cit. (opera citato, in
the work cited) and Loc. cit. (loco citato, in the place cited) are two of the very convenient abbreviations used
in footnotes. Op. cit. or Loc. cit. after the writer's name would suggest that the reference is to a work by
the writer which has been cited in detail in an earlier footnote but intervened by some other references.
7. Punctuation and abbreviations in footnotes: The first item after the number in the footnote is the author's
name, given in the normal signature order. This is followed by a comma. After the comma, the title of the
book is given: the article (such as "A", "An", "The", etc.) is omitted and only the first word and proper nouns
and adjectives are capitalized. The title is followed by a comma.
Information concerning the edition is given next. This entry is followed by a comma. The place of
publication is then stated; it may be mentioned in an abbreviated form, if the place happens to be a famous
one such as Lond. for London, N.Y. for New York, N.D. for New Delhi and so on. This entry is followed by
a comma. Then the name of the publisher is mentioned and this entry is closed by a comma. It is followed
by the date of publication if the date is given on the title page. If the date appears in the copyright notice on
the reverse side of the title page or elsewhere in the volume, the comma should be omitted and the date
enclosed in square brackets [c 1978], [1978]. The entry is followed by a comma. Then follow the volume
and page references, separated by a comma if both are given. A period closes the complete
documentary reference. But one should remember that the documentation regarding acknowledgements
from magazine articles and periodical literature follow a different form as stated earlier while explaining the
entries in the bibliography.
Certain English and Latin abbreviations are quite often used in bibliographies and footnotes to eliminate
tedious repetition. The following is a partial list of the most common abbreviations frequently used in
report-writing (the researcher should learn to recognize them as well as he should learn to use them):
anon., anonymous
ante., before
art., article
aug., augmented
bk., book
bull., bulletin
cf., compare
ch., chapter
col., column
diss., dissertation
ed., editor, edition, edited.
ed. cit., edition cited
e.g., exempli gratia: for example
enl., enlarged
et al., and others
et seq., et sequens: and the following
ex., example
f., ff., and the following
fig(s)., figure(s)
fn., footnote
ibid., ibidem: in the same place (when two or more successive footnotes refer to the
same work, it is not necessary to repeat the complete reference for the second
footnote; ibid. may be used. If different pages are referred to, pagination
must be shown).
id., idem: the same
ill., illus., or illust(s)., illustrated, illustration(s)
Intro., intro., introduction
l. or ll., line(s)
loc. cit., loco citato: in the place cited; used like op. cit. (when a new reference
is made to the same pagination as cited in the previous note)
MS., MSS., Manuscript or Manuscripts
N.B., nota bene: note well
n.d., no date
n.p., no place
no pub., no publisher
no(s)., number(s)
o.p., out of print
op. cit., opera citato: in the work cited (if reference has been made to a work
and a new reference is to be made, ibid. may be used; if an intervening
reference has been made to a different work, op. cit. must be used. The
name of the author must precede.)
p. or pp., page(s)
passim: here and there
post: after
rev., revised
tr., trans., translator, translated, translation
vid or vide: see, refer to
viz., namely
vol. or vol(s)., volume(s)
vs., versus: against
8. Use of statistics, charts and graphs: A judicious use of statistics in research reports is often considered a
virtue for it contributes a great deal towards the clarification and simplification of the material and research
results. One may well remember that a good picture is often worth more than a thousand words. Statistics
are usually presented in the form of tables, charts, bar and line graphs and pictograms. Such presentation
should be self-explanatory and complete in itself. It should be suitable and appropriate in view of the
problem at hand. Finally, statistical presentation should be neat and attractive.
9. The final draft: Revising and rewriting the rough draft of the report should be done with great care before
writing the final draft. For the purpose, the researcher should put to himself questions like: Are the sentences
written in the report clear? Are they grammatically correct? Do they say what is meant? Do the various
points incorporated in the report fit together logically? Having at least one colleague read the report just
before the final revision is extremely helpful. Sentences that seem crystal-clear to the writer may prove quite
confusing to other people; a connection that had seemed self-evident may strike others as a non sequitur. A
friendly critic, by pointing out passages that seem unclear or illogical, and perhaps suggesting ways of
remedying the difficulties, can be an invaluable aid in achieving the goal of adequate communication.
10. Bibliography: Bibliography should be prepared and appended to the research report as discussed earlier.
11. Preparation of the index: At the end of the report, an index should invariably be given, the value of
which lies in the fact that it acts as a good guide, to the reader. Index may be prepared both as subject index
and as author index. The former gives the names of the subject-topics or concepts along with the numbers of
the pages on which they have appeared or been discussed in the report, whereas the latter gives similar
information regarding the names of authors. The index should always be arranged alphabetically. Some
people prefer to prepare only one index common for names of authors, subject-topics, concepts and the like
ones.
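Mechanically, preparing such an index amounts to collecting (topic, page) pairs, grouping the pages under each topic and sorting the topics alphabetically. A minimal sketch with made-up entries:

```python
from collections import defaultdict

def build_index(occurrences):
    """Map each term to the sorted list of pages on which it occurs,
    with the terms themselves returned in alphabetical order."""
    pages = defaultdict(set)
    for term, page in occurrences:
        pages[term].add(page)
    return {term: sorted(pages[term]) for term in sorted(pages)}

entries = [("sampling", 12), ("hypothesis", 4), ("sampling", 30), ("bias", 7)]
print(build_index(entries))
```

An author index would be built the same way from (author, page) pairs; a combined index simply merges the two sets of entries before sorting.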
PRECAUTIONS FOR WRITING RESEARCH REPORTS
Research report is a channel of communicating the research findings to the readers of the report. A good
research report is one which does this task efficiently and effectively. As such it must be prepared keeping
the following precautions in view:
1. While determining the length of the report (since research reports vary greatly in length), one should keep
in view the fact that it should be long enough to cover the subject but short enough to maintain interest. In
fact, report-writing should not be a means to learning more and more about less and less.
2. A research report should not, if this can be avoided, be dull; it should be such as to sustain the reader's
interest.
3. Abstract terminology and technical jargon should be avoided in a research report. The report should be
able to convey the matter as simply as possible. This, in other words, means that report should be written in
an objective style in simple language, avoiding expressions such as "it seems", "there may be" and the like.
4. Readers are often interested in acquiring a quick knowledge of the main findings, and as such the report must make the findings readily available. For this purpose, charts, graphs and statistical tables may be used for the various results in the main report, in addition to the summary of important findings.
5. The layout of the report should be well thought out and must be appropriate and in accordance with the
objective of the research problem.
6. The reports should be free from grammatical mistakes and must be prepared strictly in accordance with
the techniques of composition of report-writing such as the use of quotations, footnotes, documentation,
proper punctuation and use of abbreviations in footnotes and the like.
7. The report must present the logical analysis of the subject matter. It must reflect a structure wherein the
different pieces of analysis relating to the research problem fit well.
8. A research report should show originality and should necessarily be an attempt to solve some intellectual
problem. It must contribute to the solution of a problem and must add to the store of knowledge.
9. Towards the end, the report must also state the policy implications relating to the problem under consideration. It is usually considered desirable if the report makes a forecast of the probable future of the subject concerned and indicates the kinds of research that still need to be done in that particular field.
10. Appendices should be provided for all the technical data in the report.
11. Bibliography of sources consulted is a must for a good report and must necessarily be given.
12. Index is also considered an essential part of a good report and as such must be prepared and appended at
the end.
13. Report must be attractive in appearance, neat and clean, whether typed or printed.
14. Calculated confidence limits must be mentioned, and the various constraints experienced in conducting the research study may also be stated in the report.
15. Objective of the study, the nature of the problem, the methods employed and the analysis techniques
adopted must all be clearly stated in the beginning of the report in the form of introduction.
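Precaution 14 above asks that calculated confidence limits be reported. As a hedged illustration of what such a calculation involves, the following Python sketch computes 95 per cent confidence limits for a sample mean using the normal approximation (z = 1.96); the sample values are invented, and a real study might instead use the t-distribution for small samples.

```python
# Illustrative only: 95 per cent confidence limits for a sample mean,
# using the normal approximation (z = 1.96). Data values are invented.
import math

def confidence_limits(sample, z=1.96):
    """Return (lower, upper) confidence limits for the mean of `sample`."""
    n = len(sample)
    mean = sum(sample) / n
    # Sample variance, with n - 1 in the denominator.
    variance = sum((x - mean) ** 2 for x in sample) / (n - 1)
    std_error = math.sqrt(variance / n)
    margin = z * std_error
    return mean - margin, mean + margin

scores = [62, 68, 71, 59, 74, 66, 70, 65]
low, high = confidence_limits(scores)
print(f"95% confidence limits: {low:.2f} to {high:.2f}")
```

Reporting the limits themselves, rather than only the point estimate, is precisely the ready availability of findings that the precautions above call for.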
CONCLUSION
In spite of all that has been stated above, one should always keep in view the fact that report-writing is an art which is learnt by practice and experience, rather than by mere indoctrination.
QUALITATIVE VERSUS QUANTITATIVE RESEARCH

Purpose
  Qualitative: To understand and interpret social interactions.
  Quantitative: To test hypotheses, look at cause and effect, and make predictions.
Group Studied
  Qualitative: Smaller and not randomly selected.
  Quantitative: Larger and randomly selected.
Variables
  Qualitative: Study of the whole, not variables.
  Quantitative: Specific variables studied.
Type of Data Collected
  Qualitative: Words, images, or objects.
  Quantitative: Numbers and statistics.
Form of Data Collected
  Qualitative: Qualitative data such as open-ended responses, interviews, participant observations, field notes, and reflections.
  Quantitative: Quantitative data based on precise measurements using structured and validated data-collection instruments.
Type of Data Analysis
  Qualitative: Identify patterns, features, themes.
  Quantitative: Identify statistical relationships.
Objectivity and Subjectivity
  Qualitative: Subjectivity is expected.
  Quantitative: Objectivity is critical.
Role of Researcher
  Qualitative: The researcher and their biases may be known to participants in the study, and participant characteristics may be known to the researcher.
  Quantitative: The researcher and their biases are not known to participants in the study, and participant characteristics are deliberately hidden from the researcher (double-blind studies).
Results
  Qualitative: Particular or specialized findings that are less generalizable.
  Quantitative: Generalizable findings that can be applied to other populations.
Scientific Method
  Qualitative: Exploratory or bottom-up: the researcher generates a new hypothesis and theory from the data collected.
  Quantitative: Confirmatory or top-down: the researcher tests the hypothesis and theory with the data.
View of Human Behavior
  Qualitative: Dynamic, situational, social, and personal.
  Quantitative: Regular and predictable.
Most Common Research Objectives
  Qualitative: Explore, discover, and construct.
  Quantitative: Describe, explain, and predict.
Focus
  Qualitative: Wide-angle lens; examines the breadth and depth of phenomena.
  Quantitative: Narrow-angle lens; tests a specific hypothesis.
Nature of Observation
  Qualitative: Study behavior in a natural environment.
  Quantitative: Study behavior under controlled conditions; isolate causal effects.
Nature of Reality
  Qualitative: Multiple realities; subjective.
  Quantitative: Single reality; objective.
Final Report
  Qualitative: Narrative report with contextual description and direct quotations from research participants.
  Quantitative: Statistical report with correlations, comparisons of means, and statistical significance of findings.
STAGES IN THE RESEARCH PROCESS
[Figure: diagram of the stages in the research process]