
PLT COLLEGE, INC.

Bayombong, Nueva Vizcaya

Institute of Health Sciences –College of Nursing

Nursing Research
(Julie Ann T. Garnace, RN – Instructor)

"If you have not taught students how to be independent learners, you have not taught them how to be more
updated than they are the day they graduate. They're essentially frozen in time," says Jeanne Grace, Ph.D.,
RN-C., associate professor of clinical nursing.

Beyond The Bedside: Nurses Improve Patients' Lives Through Research

The sentence in the quote above really says it all. How many of us who have
been working for years fall into this category? We all know nurses like this.
I would like to believe that most nurses are interested in new data and new
research that can improve patient education and care, as well as our
professional lives and safety.

Many of us have worked in the same specialty area for years, and we can
become stagnant in our thinking and bored with our work. If the excitement
about what we do every day is gone and it has become just a way to make a
living, reaching out for new learning experiences and taking in new information
is a safe way to jump-start our creativity and keep us on our toes.

Basic nursing research is always valuable and sometimes pays off in unexpected
ways.

Introduction

The ability to conduct research is becoming an ever more important skill. The ultimate purpose of nursing is to provide high-quality patient care, and clinical practice without research is practice based on tradition without validation. Research is needed to evaluate the effectiveness of nursing treatment modalities, to determine the impact of nursing care on the health of patients, or to test theory. Nursing practice is undergoing tremendous changes and challenges, and in order to meet social challenges and needs, nursing practice must be research based (Lanuza, 1999).

As the context of health care changes with pharmaceutical services and technological advances, nurses and other health care professionals need to be prepared to respond in knowledgeable and practical ways. Research in nursing evolved predominantly when nursing education became part of higher education and the profession was seeking its own body of knowledge, distinct from that of medicine. Nursing's first researchers were prepared in fields other than nursing and brought to nursing the various paradigms of those fields (Munhall, 2001).

Scientific research is a systematic and objective attempt to provide answers to certain questions. The purpose of scientific research is to discover and develop an organized body of knowledge. Nursing research refers to the use of systematic, controlled, empirical, and critical investigation in attempting to discover or confirm facts that relate to a specific problem or question about the practice of nursing. Nursing research is defined as the application of scientific inquiry to the phenomena of concern to nursing. The systematic investigation of patients and their health experiences is the primary concern of nursing (Schlotfeldt, 1977).
Definition and Nature of Research

RESEARCH is defined as a careful, systematic study in a field of knowledge, undertaken to discover or establish facts or principles (Webster, 1984). It is also defined as a systematic process of collecting and analyzing data to find an answer to a question or a solution to a problem, or to validate or test an existing theory.

Research is systematic inquiry that uses disciplined methods to answer questions or solve problems. The ultimate goal
of research is to develop, refine, and expand a body of knowledge (Polit & Beck, 2004)

A systematic search for and validation of knowledge about issues of importance to the nursing profession (Polit &
Hungler).

Nursing research has also been described as research concerned with knowledge that directly or indirectly influences clinical nursing practice, and as a systematic, objective process of analyzing phenomena of importance to nursing.

Nursing Research is systematic inquiry designed to develop knowledge about issues of importance to the nursing
profession, including nursing practice, education, administration, and informatics.

Clinical Nursing Research is research designed to generate knowledge to guide nursing practice and to improve the health and quality of life of nurses' clients.

Characteristics of Research

Scientific research has several characteristics (Singh, 2002):

 Research is always directed towards the solution of a problem.
 Research is always based on empirical and observational evidence.
 Research involves precise observation and accurate description.
 Research emphasizes the development of theories, principles, and generalizations.
 Research is characterized by systematic, objective, and logical procedures.
 Research is marked by patience, courage, and unhurried activity.
 Research requires that the researcher be thoroughly familiar with the problem being studied.
 Research is replicable.
 Research uses a systematic method of problem solving.
 In research, the factors which are not under study are controlled.
 Research requires skill in writing the report.

Importance of nursing research

There are several reasons for conducting research: description of phenomena, exploration, or prediction of the occurrence of a specific phenomenon. In nursing, the purposes of research are:

 to build a body of nursing knowledge
 to validate improvements in nursing practice
 to make health care efficient and cost-effective
 to produce evidence-based practice
 to strengthen the credibility of the nursing profession
 to ensure accountability for nursing practice

Role of Nurses in Research

1. Principal investigator
 Demands preparation beyond the BSN level.
2. Member of a research team
 Data collector; administers experiments or interventions.
3. Identifier of researchable problems
 E.g., the nurse at the bedside can identify problem areas that merit investigation.
4. Evaluator of research findings
 Determines the usefulness of findings; beginning researchers should critique research articles.
5. User of research findings
6. Patient or client advocate
7. Subject of studies

The Consumer-Producer Continuum in Nursing Research

Nurses increasingly are expected to adopt an evidence-based practice (EBP), which is broadly defined as the use of the best clinical evidence in making patient care decisions. Nurses are accepting the need to base specific nursing actions and decisions on evidence indicating that the actions are clinically appropriate, cost-effective, and result in positive outcomes for clients. With the current emphasis on EBP, it has become every nurse's responsibility to engage in one or more roles along a continuum of research participation. At one end of the continuum are those nurses whose involvement in research is indirect. Consumers of nursing research read research reports to develop new skills and to keep up to date on relevant findings that may affect their practice. Nurses increasingly are expected to maintain this level of involvement with research, at a minimum. Research utilization, the use of research findings in a practice setting, depends on intelligent nursing research consumers.

At the other end of the continuum are the producers of nursing research: nurses who actively participate in designing and implementing research studies. At one time, most nurse researchers were academics who taught in schools of nursing, but research is increasingly being conducted by practicing nurses who want to find out what works best for their patients.

There are research activities in which nurses engage as a way of improving their effectiveness and enhancing their professional lives. These include the following:

 Participating in a journal club in a practice setting, which involves regular meetings among nurses to discuss and critique research articles.
 Attending research presentations at professional conferences.
 Discussing the implications and relevance of research findings with clients.
 Giving clients information and advice about participation in studies.
 Assisting in the collection of research information.
 Reviewing a proposed research plan with respect to its feasibility in a clinical setting and offering clinical expertise to improve the plan.
 Collaborating in the development of an idea for a clinical research project.
 Participating on an institutional committee that reviews the ethical aspects of proposed research before it is undertaken.
 Evaluating completed research for its possible use in practice, and using it when appropriate.
Approaches to Nursing Research

There are two broad approaches to nursing research: the quantitative and the qualitative approach. Quantitative research is an approach to structuring knowledge by determining how much of a given behaviour, characteristic, or phenomenon is present. Quantitative research methods are particularly concerned with objectivity and the ability to generalize findings to others. The approach is based on the fundamental assumptions of prediction, manipulation, and control (Brockopp & Hastings-Tolsma, 2003).

By the quantitative method of research we mean the traditional scientific methods characterized by deductive reasoning, objectivity, quasi-experiments, statistical techniques, and control. In contrast, the qualitative method is characterized by inductive reasoning, subjectivity, discovery, description, and process orientation (Reichardt & Cook, 1979). The outcome, depending on the method, can be derived from description, interpretation, and analysis (Ashworth, 1997). Qualitative research is an approach to structuring knowledge that utilizes methods of inquiry emphasizing subjectivity and the meaning of the experience to the individual. It is an inductive approach to discovering or expanding knowledge, and it requires the involvement of the researcher in the identification of the meaning or relevance of a particular phenomenon to the individual. Analysis and interpretation in this method are not generally dependent upon the quantification of observations (Brockopp & Hastings-Tolsma, 2003).

A qualitative research approach can take several forms: phenomenological, philosophical, historical, grounded theory, or ethnographic research. The differentiation between qualitative and quantitative research is less than clear-cut (Polit & Hungler, 1999). Further categorization of research approaches also includes basic research, applied research, and epidemiological research.

Basic Research

Basic research refers to studies that are designed to seek knowledge for its own sake and therefore do not specify an application of the findings. Basic research is conducted in order to understand the relationships among phenomena; it is not aimed toward the solution of problems or the facilitation of decision making (LoBiondo-Wood & Haber, 1997).

Applied Research

Applied research is designed to produce findings that can be used to remediate or modify a given situation. The term refers to studies that have as their purpose an identified practical use or application: a problem is investigated, and some resolution is sought by way of the research findings (Polit & Hungler, 1995).

Epidemiologic Research

Epidemiology is an approach to generating knowledge that uses quantitative research methods to understand the incidence, distribution, and control of health problems within a population. Epidemiologic studies can be categorized as observational or experimental. Observational studies include cohort, case-control, cross-sectional, and ecological designs; experimental studies include randomized controlled trials and cross-over designs. Observational studies do not attempt to manipulate variables in a systematic fashion; instead, inferences are made on the basis of an ongoing series of observations. Some of the most common observational studies are the cohort study, the panel study, and the case-control study. In a cohort study, groups of people who share some common characteristic are followed over the course of time. These studies, which are often prospective, resample the same population of individuals on repeated occasions; however, the exact participants in the study may not be the same on repeated observations.
A panel study is similar to a cohort study; however, it has the stricter requirement that exactly the same individuals who were in the
original sample are followed at each repeated assessment. Cohort and panel studies are considered to be longitudinal designs, which
make inferences about changes over the course of time. Cross-sectional studies differ from longitudinal studies in that they examine
different groups of individuals at the same point in time. To make inferences about drug use in college, for example, the cross-
sectional method would require sampling of each current class such that freshmen could be compared to sophomores, juniors, and
seniors. These individuals would not be members of the same class or birth cohort.

A case-control methodology compares a group of people with a diagnosed disease (cases) with one or more groups that have not been given the same diagnosis (controls). Case-control studies are typically retrospective because they make inferences about events that have caused currently diagnosed cases. Longitudinal studies are often prospective and have the advantage of documenting the antecedents of new cases.
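As a small illustration of how case-control data are often summarized, the sketch below computes an odds ratio from a hypothetical 2x2 table of exposure by case status; all counts are invented for illustration.

```python
# Hypothetical 2x2 case-control table (all counts invented):
#              exposed   unexposed
# cases           40         60
# controls        20         80
a, b = 40, 60   # exposed / unexposed among cases
c, d = 20, 80   # exposed / unexposed among controls

# The odds ratio compares the odds of exposure among cases
# with the odds of exposure among controls.
odds_ratio = (a / b) / (c / d)
print(f"Odds ratio: {odds_ratio:.2f}")  # 2.67 here: exposure is more common among cases
```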
Observational studies often use correlational and multivariate statistical techniques. Variables that are uncontrolled through the experimental design are often adjusted for using statistical methods. In contrast to observational studies, in which important variables are not controlled, experimental studies typically involve the systematic manipulation of variables. Methodological approaches to nursing research can be of the following nature.

Experimental Designs

The experiment is a design that is commonly used in the natural or basic sciences. It refers to a research design characterized by a comparison among groups that are as equal as possible, manipulation of the independent variable, the use of inferential statistics, and stringent control of extraneous factors. The design permits the researcher to establish cause-and-effect relationships and therefore to accurately predict and explain phenomena. Here the investigator attempts to establish that the results of the study can be accurately attributed to the manipulation of the variable under examination.

Characteristics of Experiments

 Two or more groups are compared.
 The results of the study can be attributed to the manipulation of the variable and not to differences between the groups.
 Extraneous factors are controlled.

Experimental studies are easier to conduct in the natural sciences than in the social sciences because the objects or units can be more readily controlled and manipulated.

Non-experimental design or Descriptive Studies

Non-experimental studies are present-oriented: they attempt to describe what exists. Variables are not deliberately manipulated, nor is the setting controlled. The analysis of data often leads to the formation of hypotheses that can then be tested experimentally.

Non-experimental or descriptive studies can be exploratory (simply exploring what exists without any prior research data in the area), explanatory (explaining a particular phenomenon), or correlational (exploring the relationship between different states).

RESEARCH PROCESS

The research process is the examination and analysis of systematically gathered facts about a particular problem. Its aim is the discovery or validation of knowledge; it is a systematic process of problem solving. The research process has the following steps:

1. Conceptualization of the research problem
2. Development of the conceptual framework
 Identifying assumptions
 Defining variables
 Stating hypothesis
3. Research Design
 Selecting Research Approach
 Planning for data collection
 Selection of Sample
 Pilot Study
 Planning of Data Processing
 Planning for Data analysis
 Planning for Interpretation
4. Data Collection
5. Data Analysis
6. Interpretation of Results
7. Writing the Report
8. Critique and Publication
9. Application of results

1. Conceptualization of Research problem

Conceptualization of the research problem may include several activities, such as stating the problem and defining the objectives, purposes, and terms to be used in the study. A research problem originates from a situation of need, where unresolved difficulties occur. Conceptualization begins with identification of the problem area and the research problem, and with definition of the problem. The problem under study is written up as the "statement of the problem". The problem should be significant to nursing, practical in nature, and feasible, and the findings should add to the knowledge of nursing. The researcher needs to be interested in doing the study and should possess the required qualifications. Research questions may arise from personal intuition, personal observation of the environment, or personal beliefs, but ideas are more often developed through interaction with others. Similarly, research questions tend to arise from an examination of others' work, which can be found in the research journals and textbooks of different scientific disciplines.

2. Development of Conceptual framework

Introduction

 RESEARCH – the process of inquiry
 THEORY – the product of knowledge
 SCIENCE – the result of the relationship between research and theory
 To effectively build knowledge, the research process should be developed within some theoretical structure that facilitates the analysis and interpretation of findings.
 The relationship between theory and research in nursing is not well understood.

Need to Link Theory and Research

 Research without theory results in discrete information or data which do not add to the accumulated knowledge of the discipline.
 Theory guides the research process, frames the research questions, and aids in the design, analysis, and interpretation of a study.
 It enables the scientist to weave facts together.

Theories from Nursing or Other Disciplines?

 Nursing science is a blend of knowledge that is unique to nursing and knowledge that is borrowed from other disciplines.
 The debate is whether the use of borrowed theory has hindered the development of the discipline.
 Borrowed theory has contributed to problems in connecting research and theory in nursing.

Historical Overview of Research and Theory in Nursing

 Florence Nightingale supported her theoretical propositions through research: statistical data and prepared graphs were used to depict the impact of nursing care on the health of British soldiers.
 Afterwards, for almost a century, reports of nursing research were rare.
 Research and theory developed separately in nursing.
 Between 1928 and 1959, only 2 of 152 studies reported a theoretical basis for the research design.
 In the 1970s, a growing number of nurse theorists sought researchers to test their models in research and clinical application.
 Grand nursing theories are still not widely used; in the 1990s, borrowed theories were used more.
 The focus of research and theory has now moved towards middle-range theories.

Purpose of Theory in Research


 To identify meaningful and relevant areas for study.
 To propose plausible approaches to health problems.
 To develop or refine theories.
 To define concepts and propose relationships between concepts.
 To interpret research findings.
 To develop clinical practice protocols.
 To generate nursing diagnoses.

Types of theory and corresponding research

Type of theory          Corresponding type of research

Descriptive             Descriptive or explanatory
Explanatory             Correlational
Predictive              Experimental

How Theory is used in Research

An example is the theory of planned behaviour, a causal theory that can be used to frame studies of health behaviour.

Theory Generating Research

 Theory-generating research is designed to develop and describe relationships between and among phenomena without imposing preconceived notions.
 It is inductive and includes field observations and phenomenology.
 During the theory-generating process, the researcher moves by logical thought from fact to theory by means of a proposition stated as an empirical generalization.

Grounded Theory Research

 An inductive research technique developed by Glaser and Strauss (1967).
 Grounded theory provides a way to describe what is happening and to understand the process of why it happens.
 Methodology – the researcher observes, collects data, organizes data, and forms theory from the data at the same time.
 Data may be collected by interview, observation, records, or a combination of these techniques.
 Data are coded in preparation for analysis.
 Category development – categories are identified and named.
 Category saturation – comparison of similar characteristics within each of the categories.
 Concept development – defining the categories.
 Search for additional categories – the data are continually examined for additional categories.
 Category reduction – higher-order categories are selected.
 Linking of categories – the researcher seeks to understand relationships among categories.
 Selective sampling of the literature.
 Emergence of the core variable – the central theme becomes the focus of the theory.
 Concept modification and integration – explaining the phenomenon.

Theory testing research

 In theory-testing research, theoretical statements are translated into questions and hypotheses. It requires a deductive reasoning process.
 The interpretation determines whether the study supports or contradicts the propositional statement.
 If a conceptual model is merely used as a theoretical framework for research, that is not theory testing.
 Theory testing requires detailed examination of theoretical relationships.

Theory as a conceptual framework

 The problem being investigated is fitted into an existing theoretical framework, which guides the study and enriches the value of its findings.
 The conceptual definitions are drawn from the framework.
 The data collection instrument is congruent with the framework.
 Findings are interpreted in light of explanations provided by the framework.
 Implications are based on the explanatory power of the framework.

A Typology of Research

Deductive pole:
 Testing, analyzing, experimentation, deducting
 Deductive research
 Quantitative research
 The scientific method
 Theory/hypothesis testing
 Assaying

Inductive pole:
 Refining, interpreting, reflecting, inducing
 Inductive research
 Qualitative research
 Phenomenological research
 Theory generation
 'Divining'; 'heuristic' research
Guidelines for writing about a research study's theoretical framework

In the study's problem statement:
o Introduce the framework.
o Briefly explain why it is a good fit for the research problem area.

At the end of the literature review:
o Thoroughly describe the framework and explain its application to the present study.
o Describe how the framework has been used in studies about similar problems.

In the study's methodology section:
o Explain how the framework is being operationalized in the study's design.
o Explain how data collection methods (such as questionnaire items) reflect the concepts in the framework.

In the study's discussion section:
o Describe how study findings are consistent (or inconsistent) with the framework.
o Offer suggestions for practice and further research that are congruent with the framework's concepts and propositions.

Conclusion

The relationship between research and theory is undeniable, and it is important to recognize the impact of this relationship on the development of nursing knowledge. Theory and research should therefore be interfaced: by generating theories, by testing theories, and by using theory as the conceptual framework that drives a study.

References

 George, J. B. Nursing Theories: The Base for Professional Nursing Practice (3rd ed.). Norwalk: Appleton & Lange.
 Polit, D. F., & Hungler, B. P. Nursing Research: Principles and Methods. Philadelphia: J. B. Lippincott; 1998.
 Burns, N., & Grove, S. K. The Practice of Nursing Research (4th ed.). Philadelphia: W. B. Saunders; 2001.
 Treece, J. W., & Treece, E. W. Elements of Research in Nursing (3rd ed.). St. Louis: Mosby; 1982.

3. Research Design

ETHICAL ISSUES IN NURSING RESEARCH

Issues of ethical behavior are central to the health professions. In conducting clinical trials and research projects, ethical issues must be taken into consideration. It is unethical for an investigator not to give patients the best possible treatment. Conversely, it is unethical not to discover whether a new treatment is an improvement, since this would deny future patients the possibility of a cure. It is also unethical to perform bad trials that give misleading results and thereby encourage others not to use a treatment that is beneficial, or to use a treatment that is not beneficial or may indeed have harmful effects.

Two important areas of ethical consideration are the rights of human subjects and freedom from harm. Three factors are important regarding the rights of participants: confidentiality, anonymity, and voluntary participation. These can be ensured through an informed consent that clearly explains the study objectives and states the participants' right to accept or refuse to participate (Fowler & Chevannes, 2002).

Research Participants at Risk

Research participants at risk are individuals who may be harmed physically, emotionally, spiritually, economically, socially, or legally through participation in a research study. A basic responsibility of the researcher, and of those assisting in carrying out the project, is to protect all research participants from harm while they are participating in an investigation or as a result of the study.

Informed Consent

Informed consent is the process of providing an individual with sufficient, understandable information regarding his or her participation in a research project. It includes providing potential participants with information about their rights and responsibilities within the project and documenting the nature of the agreement. All consent forms need to assure potential participants of their right to withdraw from a research study at any time. Informed consent is the researcher's conscious and deliberate attempt to clearly and fully provide the potential participant with information about the study.

It is a fundamental responsibility of the investigator in human research to ensure that research participants understand the nature of the project and the implications of participation, and that each individual is able to decide freely whether to participate, without fear of reprisal. When the researcher fails to adequately inform potential research participants about the full nature of the research, thereby preventing them from making an informed decision about their participation, this is called deception.

Confidentiality & Anonymity

Confidentiality refers to the researcher’s responsibility to protect all data gathered within the scope of the project from being divulged
to others. Anonymity refers to the act of keeping individuals nameless in relation to their participation in a research project.

Regulatory Bodies

The researcher may need to obtain permission from regulatory bodies for conducting research investigations, particularly when the subjects are human beings. There are regulatory bodies in most countries which grant permission after considering the ethical issues of the study. The Indian Council of Medical Research (ICMR) released its 'Ethical guidelines for biomedical research in human subjects' in the year 2000; these are similar to Good Clinical Practice (GCP) guidelines and the prevalent international guidelines, and they regulate all biomedical research in human subjects in India.

REVIEW OF LITERATURE

A review of literature is a comprehensive description as well as an evaluation of the evidence related to a given topic. The review of literature sets the stage for the remainder of the article. An effective review of relevant literature includes those studies which have been completely executed, clearly reported, and closely related to the research problem. Well-written reviews of literature include evaluative statements regarding the studies described; comments about sample size, instruments used, research design, and other components of the research process can help the reader better understand the value of the results of the investigations. In conducting an in-depth search of the literature, the investigator needs to identify all relevant publications in the area of interest.

The investigator starts with the most recent publications in order to find the most relevant information. When searching the literature, both primary and secondary sources may need to be considered. A primary source refers to the publication in its original form; a secondary source is one in which an author writes about another author's work. Primary sources are generally preferred because a distortion of ideas can occur in a secondary source (Polit & Hungler, 1999).

The investigator critically evaluates the information gathered by examining each component of the publication. Analysis of a clinical
opinion article requires the reader to evaluate the logic to validate the author’s conclusion. When examining a research report, the
reader must examine each component of the research process and make judgments about the appropriateness of the methods used in
relation to the conclusion drawn (Brockopp & Hastings-Tolsma, 2003).

There are two methods of searching the literature: performing computer searches in databases, and examining books and periodicals manually. A combination of the two methods can be comprehensive. For a manual search for information related to a given topic, the investigator can use library facilities with the help of library mechanisms like card catalogs or computer catalogs. Computerized databases are increasingly popular and necessary as the volume of published material continues to grow.

Computer searches are more advantageous than hand searches of indexes because they take less time and allow concepts to be linked (Sinclair, 1987). Searching by computer can be accomplished using a library-based computer system or a personal computer and online resources in the home or workplace. The computer also allows the researcher to choose only the references of interest from those presented and to obtain a printed list of the articles chosen for review.

Searching nursing literature on the Internet

The Internet possesses an enormous number of medical and nursing databases, which are very useful for nursing professionals. It is quite impossible to calculate the quantity of medical information on the Internet: so many new resources are created each day that nobody could possibly keep abreast of them and present an exhaustive analysis of all existing medical resources. MEDLINE is a comprehensive database for health literature which is managed by the National Library of Medicine, USA; the International Nursing Index and Index Medicus are included in MEDLINE. The Cochrane Review (www.cochranereview.org) is another database which gives extensive search options. The Cumulative Index to Nursing and Allied Health Literature (CINAHL) is a database of exclusively nursing articles. The Internet search engine Google has developed Google Scholar (http://scholar.google.com), which helps to confine search options to academic papers only. Open access journals (http://doaj.org) are another source for searching the literature; these journals are free to access and can be used without restriction provided their policies are accepted. Many standard journals provide their archives for free online access after a period of time to developing countries. Elsevier publications' Internet database for nursing journals is accessible at http://sciencediretct.com/. The Nursing Center (http://nursingcenter.com) is the online access site for Lippincott Williams & Wilkins' nursing journals; these journals require subscription, and most of them are indexed in MEDLINE and CINAHL. Many journals are published by professional organizations themselves. The Indian Medlars Centre of the National Informatics Centre, New Delhi, has designed two databases: IndMED, a bibliographic database of peer-reviewed Indian biomedical journals, and medIND, full-text of selected IndMED journals. These are accessible free of cost from the Centre's site <http://indmed.nic.in/> (Ameen, 2004).

Bradford-Hill guidelines

The Bradford-Hill criteria are nine specific criteria used to evaluate studies for the existence of a cause-and-effect relationship. The nine criteria are: the strength of the association; confounding variables and bias; temporality; biologic gradient; specificity; consistency; biologic plausibility; studies appropriately done (having a clear comparison group, blinding, a description of the methods used, and analysis consistent with the study design); and freedom from bias and confounding variables. Consistent use of the criteria helps in determining that an increased relative risk is not likely the result of bias or other factors (Bradford-Hill, 1971).

HYPOTHESIS AND ESTIMATION

Hypothesis

A hypothesis is a statement or declaration of the expected outcome of a research study. It is based on a logical rationale and has empirical possibilities for testing. Hypotheses are formulated in experimental research; in some non-experimental correlational studies, hypotheses may also be developed. Normally, there are four elements in a hypothesis:

1. the dependent and independent variables,
2. some type of relationship between the independent and dependent variables,
3. the direction of the change, and
4. the subjects, i.e., the population being studied.

A hypothesis is defined as "a tentative assumption made in order to draw out and test its logical or empirical consequences" (Webster, 1968). Standards in formulating a hypothesis (Ahuja, 2001):

1. It should be empirically testable, whether it is right or wrong.
2. It should be specific and precise.
3. The statements in the hypothesis should not be contradictory.
4. It should specify the variables between which the relationship is to be established.
5. It should describe one issue only.

Characteristics of a Hypothesis (Treece & Treece, 1989)

1. It is testable
2. It is logical
3. It is directly related to the research problem
4. It is factually or theoretically based
5. It states a relationship between variables
6. It is stated in such a form that it can be accepted or rejected

Hypothesis formation
A directional hypothesis predicts an outcome in a particular direction; a nondirectional hypothesis simply states that there will be a difference between the groups. There can be two hypotheses: the research hypothesis and the null hypothesis. The null hypothesis is formed for the statistical purpose of negating it. If the research hypothesis states that there is a positive correlation between smoking and cancer, the null hypothesis states that there is no relation between smoking and cancer. It is easier to negate a statement than to establish it.
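To make the smoking-and-cancer example concrete, here is a minimal sketch of testing a null hypothesis of "no correlation", assuming the scipy package is available; all data values are invented for illustration.

```python
# Testing H0: "no correlation between smoking and disease severity".
# A minimal sketch with invented data, assuming scipy is installed.
from scipy import stats

cigarettes_per_day = [0, 0, 5, 10, 15, 20, 25, 30, 35, 40]
severity_score = [1, 2, 2, 4, 5, 5, 7, 8, 8, 10]

r, p_value = stats.pearsonr(cigarettes_per_day, severity_score)
print(f"r = {r:.2f}, p = {p_value:.4f}")

# A small p-value (conventionally < .05) leads us to reject the null
# hypothesis of "no relation", which supports the (directional) research
# hypothesis of a positive correlation.
if p_value < 0.05 and r > 0:
    print("Reject H0: evidence of a positive correlation")
```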

Dependent and independent variables

1. An independent variable is the presumed cause of the dependent variable, which is the presumed effect.
2. The independent variable is the one which explains or accounts for variations in the dependent variable.
3. A dependent variable is one which changes in relation to changes in another variable.
4. An independent variable is one whose change results in change in another variable.
5. In experiments, the independent variable is the variable manipulated by the experimenter.
6. A variable which is dependent in one study may be independent in another.
7. An intervening variable is one that comes between the independent and dependent variables.

INSTRUMENTS IN NURSING RESEARCH

An instrument in a research study is a device used to measure the concept of interest in a research project. An ideal measuring instrument is one which results in measures that are relevant, accurate, objective, sensitive, and efficient. Physical and physiological measures have a higher chance of attaining these goals than psychological and behavioral measures. Instruments can be observation scales, questionnaires, or interview schedules.

Validity and reliability are two statistical properties used to evaluate the quality of research instruments (Anastasi, 1986). It is
important that assessment techniques possess both validity and reliability.

Validity

Validity in relation to research is a judgment regarding the degree to which the components of the research reflect the theory, concept, or variable under study (Streiner & Norman, 1996). The validity of the instrument used and the validity of the research design as a whole are important criteria in evaluating the worth of the results of the research conducted. Internal validity refers to the likelihood that the experimental manipulation was indeed responsible for the differences observed. External validity refers to the extent to which the results of the study can be generalized to the larger population (Polit & Hungler, 1999).

Four types of validity are used to judge the accuracy of an instrument:

(1) content validity

(2) predictive validity

(3) concurrent validity, and

(4) construct validity.


Content validity is the extent to which the different items in an assessment measure the trait or phenomenon they were meant to measure. A high level of content validity indicates that the test items accurately reflect the trait being measured. A questionnaire to assess anxiety, for example, would be high in content validity if it included questions about known symptoms of anxiety such as muscle tension and a rapid pulse rate.

Predictive validity is the ability of an assessment measure to predict someone's future behaviour in a related but different situation. An assessment measure with high predictive validity is capable of making accurate predictions of future behaviour; low predictive validity means that a measure is of little use in predicting a particular behaviour.

Concurrent validity reflects how well different measures of the same trait agree with one another. If a test possesses a high degree of concurrent validity, it can be expected to give results very similar to other measures of the same characteristic.

Construct validity is the extent to which a theoretical construct such as a personality trait can be empirically defined.

Reliability

Reliability of an instrument reflects its stability and consistency within a given context. Reliability is the consistency of measurement over time: whether the instrument provides the same results on repeated trials. It is defined as a characteristic of an instrument that reflects the degree to which the instrument provokes consistent responses. For example, a scale developed to measure intelligence might not be reliable for the measurement of personality. Three characteristics of reliability are commonly evaluated:

(1) stability,

(2) internal consistency, and

(3) equivalence.

Test-retest reliability or stability refers to the degree to which research participants' responses change over time. The test-retest method is used to test the stability of the tool: an instrument is given to the same individuals on two occasions within a relatively short period of time, and a correlation coefficient is calculated to determine how closely the participants' responses on the second occasion matched their responses on the first occasion.
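A minimal sketch of the test-retest calculation, assuming numpy is available; the scale scores below are invented for illustration.

```python
# Test-retest reliability: correlate the same participants' scores
# from two administrations of the same instrument.
import numpy as np

# Anxiety-scale scores for the same 6 participants on two occasions (invented)
time1 = np.array([12, 18, 25, 9, 30, 16])
time2 = np.array([14, 17, 27, 10, 28, 15])

# The correlation between the two administrations estimates stability:
# values close to 1.0 indicate a stable (reliable) instrument.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: r = {r:.2f}")
```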

Split-half reliability or internal consistency is a measure of reliability that is frequently used with scales designed to assess psychosocial characteristics. Instruments can be assessed for internal consistency using the split-half technique (i.e., answers to one half of the items are compared with answers to the other half), by calculating the alpha coefficient, or by using the Kuder-Richardson formula. With the alpha coefficient and the Kuder-Richardson formula, the result is a coefficient that ranges from 0 to 1.00.
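The alpha coefficient (Cronbach's alpha) can be computed directly from its definition. A minimal sketch, assuming numpy; the item responses are invented for illustration.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / total variance)
import numpy as np

# Rows = participants, columns = items on a 5-point scale (invented data)
items = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])

k = items.shape[1]                         # number of items
item_vars = items.var(axis=0, ddof=1)      # variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")   # values near 1.0 suggest high consistency
```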

Interrater reliability, or the notion of equivalence, is often a concern when different observers are using the same instrument to collect data at the same time. A coefficient can be calculated, or other statistical or nonstatistical procedures can be used, to assess the agreement between raters.

SAMPLING, DATA COLLECTION AND ANALYSIS

Data Collection

Once the problem has been decided and the methodology planned, the systematic collection of reliable and valid evidence is the next step in the research process. Data collection should be systematic and meticulous. The purpose of gathering data is to transform them into information in order to identify variables, measure variables, describe behaviour, and obtain empirical evidence.

In view of the statistical analysis, the levels of measurement should be defined as nominal, ordinal, interval, or ratio level data. The sources for collection of data for a research study vary with the interest of the researcher and the type of study. Sources of data can be documentary (primary and secondary sources), field sources (subjects in person, and conditions, environments, and events that are observable and measurable), and historical data. The methods of collection include questioning using interview schedules and questionnaires, observation with the help of structured or unstructured instruments, and measuring with standardized instruments.

A pilot study is done to establish the feasibility and practicability of the whole research design. It helps to find out whether any changes in the methodology are required.

Sampling

Sampling can be probability sampling or non-probability sampling. Probability sampling, also called random sampling, is a selection process that gives each member of the population the same probability of being selected. Random sampling is the best method for ensuring that a sample is representative of the larger population; it includes simple random sampling, stratified random sampling, and cluster sampling. Nonprobability sampling is a selection process in which the probability that any one individual or subject is selected is not equal to the probability that another individual or subject may be chosen; the probability of inclusion, and the degree to which the sample represents the population, are unknown. The major problem with nonprobability sampling is that sampling bias can occur. Nonprobability sampling can be convenience sampling, purposive sampling, or quota sampling.
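A minimal sketch of simple random and (proportionally allocated) stratified random sampling, using only the Python standard library; the population and strata are invented for illustration.

```python
import random

population = [f"patient_{i}" for i in range(1, 101)]  # 100 hypothetical patients

# Simple random sampling: every patient has the same chance of selection.
simple_sample = random.sample(population, k=10)

# Stratified random sampling: sample proportionally from each stratum
# (here, two invented wards of 60 and 40 patients).
strata = {"ward_A": population[:60], "ward_B": population[60:]}
stratified_sample = []
for ward, members in strata.items():
    n = round(10 * len(members) / len(population))  # proportional allocation
    stratified_sample.extend(random.sample(members, k=n))

print(simple_sample)
print(stratified_sample)
```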

Extraneous Variable

Extraneous variables are those variables that can influence the relationship between the independent and dependent variables. They must be controlled through statistical analysis or research design.

There are six methods of controlling extraneous variables (Polit & Hungler, 1999):

(1) ensuring subjects are homogeneous,
(2) including the extraneous variable as an independent variable,
(3) matching subjects in relation to extraneous variables,
(4) using statistical procedures to control undesirable variables,
(5) randomly assigning subjects to experimental and control groups, and
(6) using a repeated measures design.

Analysis

The purpose of analyzing data in a study is to describe the data in meaningful terms. Statistics help to answer important research questions, and it is the answers to such questions that further our understanding of the field and provide for academic study. The researcher is required to have an understanding of which tools are suitable for a particular research study. Depending on the kinds of variables identified (nominal, ordinal, interval, and ratio) and the design of the particular study, a number of statistical techniques are available to analyze data.

There are two approaches to the statistical analysis of data: the descriptive approach and the inferential approach. Descriptive statistics convert data into a picture of the information that is readily understandable. The inferential approach helps to decide whether the outcome of the study is a result of factors planned within the design of the study or determined by chance. The two approaches are often used sequentially: first the data are described with descriptive statistics, and then additional statistical manipulations are made to draw inferences, through inferential statistics, about the likelihood that the outcome was due to chance.

Interpreting the results

The results section of the research report is followed by a section which focuses on interpretation of the results. In this task, the investigator tries to interpret the results within the given conceptual framework and draws conclusions based on the results. If hypotheses have been formed, this section discusses the support or lack of support for each hypothesis; if hypotheses have not been formed, the descriptive findings are discussed.

Basic Statistical Concepts for Nurses

Statistics are simply tools that researchers employ to help answer research questions.

Introduction

As the context of health care changes with pharmaceutical services and technological advances, nurses and other health care professionals need to be prepared to respond in knowledgeable and practical ways. Health information is very often expressed in statistical terms to make it concise and understandable. Statistics play a vitally important role in research: they help to answer important research questions, and it is the answers to such questions that further our understanding of the field and provide for academic study. The researcher is required to have an understanding of which tools are suitable for a particular research study. It is essential for healthcare professionals to have an understanding of the basic concepts of statistics, as it enables them to read and evaluate reports and other literature and to undertake independent research investigations by selecting the most appropriate statistical test for their problems. The purpose of analyzing data in a study is to describe the data in meaningful terms.

Descriptive approach and inferential approach

Depending on the kinds of variables identified (nominal, ordinal, interval, and ratio) and the design of the particular study, a number of statistical techniques are available to analyze data. There are two approaches to the statistical analysis of data: the descriptive approach and the inferential approach. Descriptive statistics convert data into a picture of the information that is readily understandable. The inferential approach helps to decide whether the outcome of the study is a result of factors planned within the design of the study or determined by chance. The two approaches are often used sequentially: first the data are described with descriptive statistics, and then additional statistical manipulations are made to draw inferences about the likelihood that the outcome was due to chance. When the descriptive approach is used, terms like mean, median, mode, variation, and standard deviation are used to communicate the analysis of the data. When the inferential approach is used, probability values (p) are used to communicate the significance, or lack of significance, of the results (Streiner & Norman, 1996).
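A minimal sketch of the two approaches used in sequence, assuming numpy and scipy are available; the two groups' scores are invented for illustration.

```python
import numpy as np
from scipy import stats

control   = np.array([6.1, 5.8, 7.0, 6.4, 5.9, 6.6])  # e.g., invented pain scores
treatment = np.array([5.2, 4.9, 5.6, 5.1, 4.7, 5.4])

# Descriptive approach: summarize each group in understandable terms.
print(f"control:   mean={control.mean():.2f}, sd={control.std(ddof=1):.2f}")
print(f"treatment: mean={treatment.mean():.2f}, sd={treatment.std(ddof=1):.2f}")

# Inferential approach: is the observed difference likely due to chance?
t, p = stats.ttest_ind(control, treatment)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < .05 suggests the difference is not chance
```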

Measurement

Measurement is defined as the "assignment of numerals according to rules" (Tyler, 1963). Regardless of the variables under study, in order to make sense of the data collected, each variable must be measured in such a way that its magnitude or quantity is clearly identified. The specific strategy for a particular study depends upon the research problem, the sample under study, the availability of instruments, and the general feasibility of the project (Brockopp & Hastings-Tolsma, 2003). A variety of measurement methods are available for use in nursing research. Four measurement scales are used: nominal, ordinal, interval, and ratio.

The nominal level of measurement

The nominal level of measurement is the most primitive, or lowest, level of classifying information. Nominal variables comprise named categories of people, events, and other phenomena; the categories are exhaustive in nature, mutually exclusive, discrete, and noncontinuous. For nominal measurement, the admissible statistical operations are counting of frequency, percentage, proportion, mode, and the coefficient of contingency.

The ordinal level of measurement

The ordinal level of measurement is second in terms of its refinement as a means of classifying information. Ordinal implies that the
values of variables can be rank-ordered from highest to lowest.

Interval Level of Measurement

The interval level of measurement is quantitative in nature: the individual units are equidistant from one point to the other, but interval data do not have an absolute zero. For example, temperature is measured in Celsius or Fahrenheit. The interval level refers to the third level of measurement in relation to the complexity of statistical techniques that can be used to analyze data. Variables at this level of measurement are assessed incrementally, and the increments are equal.

Ratio Level of Measurement

The ratio level of measurement is characterized by variables that are assessed incrementally, with equal distances between the increments, on a scale that has an absolute zero. Ratio variables exhibit the characteristics of ordinal and interval measurement and can also be compared by describing one value as two or three times another, or as one-third, one-quarter, and so on. Variables like time, length, and weight are ratio variables and can also be measured using nominal or ordinal scales. The mathematical properties of interval and ratio scales are very similar, so the statistical procedures are common to both scales.
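A minimal sketch of matching statistics to measurement levels, using only the Python standard library; the variable names and data values are invented for illustration.

```python
from statistics import mean, median, mode

blood_group = ["A", "O", "O", "B", "O", "AB"]   # nominal: named categories only
pain_rank   = [1, 2, 2, 3, 4, 4]                # ordinal: rank order
temp_c      = [36.5, 37.0, 38.2, 36.8]          # interval: no absolute zero
weight_kg   = [58.0, 72.5, 64.3, 80.1]          # ratio: absolute zero exists

print(mode(blood_group))    # nominal data support frequencies and the mode
print(median(pain_rank))    # ordinal data additionally support the median
print(mean(temp_c))         # interval data support means and differences
print(max(weight_kg) / min(weight_kg))  # only ratio data support ratios
```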

Errors of measurement

When a variable is measured there is the potential for errors to occur. Some of the sources of error in measurement are instrument clarity, variations in administration, situational variations, response-set bias, transitory personal factors, response sampling, and instrument format.

Population, Sample, Variable


A population is defined as the entire collection of a set of objects, people, or events in a particular context: the entire group of persons or objects that is of interest to the investigator. In statistics, a population means any collection of individual items or units that is the subject of investigation, that is, the collection of all items upon which statements will be based. This might include all patients with schizophrenia in a particular hospital, or all depressed individuals in a certain community. Characteristics of a population that differ from individual to individual are called variables. A variable is a concept (construct) that has been so specifically defined that precise observations, and therefore measurement, can be accomplished. Length, age, weight, temperature, and pulse rate are a few examples of variables.

The sample is a subset of the population selected by the investigator to participate in a research study; it is a subset of observations selected from the population. It would be unusual for an investigator to describe only patients with schizophrenia in a particular hospital, and it is unlikely that an investigator will measure every depressed person in a community. As it is rarely practicable to obtain measures of a particular variable from all the units in a population, the investigator has to collect information from a smaller group or subset that represents the group as a whole; this subset is called a sample. Each unit in the sample provides a record, such as a measurement, which is called an observation. The sample represents the population with respect to the critical characteristics the investigator plans to study.

Dependent and independent variables

An independent variable is the presumed cause of the dependent variable, which is the presumed effect. The independent variable is the one which explains or accounts for variations in the dependent variable, and one whose change results in change in another variable. In experiments, the independent variable is the variable manipulated by the experimenter. A dependent variable is one which changes in relation to changes in another variable. A variable which is dependent in one study may be independent in another. An intervening variable is one that comes between the independent and dependent variables.

Hypothesis

A hypothesis is a statement or declaration of the expected outcome of a research study. It is based on a logical rationale and has empirical possibilities for testing. Hypotheses are formulated in experimental research; in some non-experimental correlational studies, hypotheses may also be developed. Normally, there are four elements in a hypothesis:

 (1) the dependent and independent variables,
 (2) some type of relationship between the independent and dependent variables,
 (3) the direction of the change, and
 (4) the subjects, i.e., the population being studied.

A hypothesis is defined as "a tentative assumption made in order to draw out and test its logical or empirical consequences" (Webster, 1968).

Standards in formulating a hypothesis (Ahuja, 2001):

 It should be empirically testable, whether it is right or wrong.
 It should be specific and precise.
 The statements in the hypothesis should not be contradictory.
 It should specify the variables between which the relationship is to be established.
 It should describe one issue only.

Characteristics of a Hypothesis (Treece & Treece, 1989)

 It is testable.
 It is logical.
 It is directly related to the research problem.
 It is factually or theoretically based.
 It states a relationship between variables.
 It is stated in such a form that it can be accepted or rejected.

A directional hypothesis predicts an outcome in a particular direction; a nondirectional hypothesis simply states that there will be a difference between the groups. There can be two hypotheses: the research hypothesis and the null hypothesis. The null hypothesis is formed for the statistical purpose of negating it. If the research hypothesis states that there is a positive correlation between smoking and cancer, the null hypothesis states that there is no relation between smoking and cancer. It is easier to negate a statement than to establish it.

The null hypothesis is a statistical statement that there is no difference between the groups under study. A statistical test is used to determine the probability that the null hypothesis is not true, so that it can be rejected; i.e., inferential statistics are used in an effort to reject the null hypothesis, thereby showing that a difference does exist. The null hypothesis is a technical necessity when using inferential statistics, with statistical significance used as the criterion.
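Returning to the smoking-and-cancer example, a chi-square test of independence is one common way to test such a null hypothesis when both variables are categorical. A minimal sketch, assuming scipy is available and using invented counts.

```python
# H0: smoking status and cancer status are unrelated (counts invented).
from scipy.stats import chi2_contingency

#                 cancer  no cancer
table = [[30, 70],   # smokers
         [10, 90]]   # non-smokers

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# A small p-value leads us to reject the null hypothesis of "no relation",
# which indirectly supports the research hypothesis of an association.
```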

Types of errors

When the null hypothesis is rejected, the observed differences between groups are deemed improbable under chance alone. For example, if drug A is compared to a placebo for its effects on depression and the null hypothesis is rejected, the investigator concludes that the observed differences most likely are not explainable simply by sampling error. The key word in these statements is probable. In offering this conclusion, the investigator has the odds on his or her side; however, what are the chances of the statement being incorrect?

In statistical inference there is no way to say with certainty that rejection or retention of the null hypothesis was correct. There are two
types of potential errors. A type I error occurs when the null hypothesis is rejected when indeed it should have been retained; a type II
error occurs if the null hypothesis is retained when indeed it should have been rejected.

Type I Error

Type I errors occur when the null hypothesis is rejected but should have been retained, such as when a researcher decides that two means are different. He or she might conclude that the treatment works, or that the groups are not sampled from the same population, whereas in reality the observed differences are attributable only to sampling error. In a conservative scientific setting, type I errors should be made rarely: there is a great disadvantage to advocating treatments that really do not work.

The probability of a type I error is denoted by the Greek letter alpha (α). Because of the desire to avoid type I errors, statistical models have been created so that the investigator has control over the probability of a type I error. At the .05 significance (alpha) level, a type I error is expected to occur in 5 percent of all cases; at the .01 level, in 1 percent of all cases. Thus, at the .05 α level, one type I error is expected in every 20 independent tests; at the .01 α level, one in every 100 independent tests.

Type II Error

The motivation to avoid a type I error might increase the probability of making a second type of error. In this case the null hypothesis
is retained when it actually was wrong. For example, an investigator may reach the conclusion that a treatment does not work when
actually it is efficacious. The probability of a type II error is symbolized by the Greek capital letter beta (B). Here the decision is not to
reject the null hypothesis when in actuality the null hypothesis was false. This is a type II error with the probability of beta (B).

Statistical Power

There are several maneuvers that will increase control over the probability of different types of errors and correct decisions. One type
of correct decision is the probability of rejecting the null hypothesis and being correct in that decision. Power is defined as the
probability of rejecting the null hypothesis when it should have been rejected. Ultimately, the statistical evaluation will be more
meaningful if it has high power.

It is particularly important to have high statistical power when the null hypothesis is retained. Retaining the null hypothesis with high
power gives the investigator more confidence in stating that differences between groups were non-significant. One factor that affects
the power is the sample size. As the sample size increases, power increases. The larger the sample, greater the probability that a
correct decision will be made in rejecting or retaining the null hypothesis.

Another factor that influences power is the significance level. As significance increases, the power increases. For instance, if the .05
level is selected rather than the .01 level, there will be a greater chance of rejecting the null hypothesis. However, there will also be a
higher probability of a type I error. By reducing the chances of a type I error, the chances of correctly identifying the real difference
(power) are also reduced. Thus, the safest manipulation to affect power without affecting the probability of a type I error is to increase
the sample size.
The third factor affecting power is effect size. The larger the true differences between two groups, the greater the power. Experiments
attempting to detect a very strong effect, such as the impact of a very potent treatment, might have substantial power even with small
sample sizes. The detection of subtle effects may require very large samples in order to achieve reasonable statistical power. It is worth
noting that not all statistical tests have equal power. The probability of correctly rejecting the null hypothesis is higher with some
statistical methods than with others. For example, nonparametric statistics are typically less powerful than parametric statistics, for
example.

Introduction

As the context of health care is changing due to pharmaceutical services and technological advances, nurses and other health care professionals need to be prepared to respond in knowledgeable and practical ways. Health information is very often expressed in statistical terms to make it concise and understandable. Statistics plays a vitally important role in research: it helps to answer important research questions, and it is the answers to such questions that further our understanding of the field. The researcher is therefore required to understand which statistical tools are suitable for a particular research study. A basic understanding of statistical concepts is essential for health care professionals, as it enables them to read and evaluate reports and other literature and to undertake independent research investigations by selecting the most appropriate statistical test for their problems. The purpose of analyzing data in a study is to describe the data in meaningful terms.

Descriptive approach and inferential approach

Depending on the kinds of variables identified (nominal, ordinal, interval, and ratio) and the design of the particular study, a number of statistical techniques are available to analyze data. There are two approaches to the statistical analysis of data: the descriptive approach and the inferential approach. Descriptive statistics convert data into a picture of the information that is readily understandable. The inferential approach helps to decide whether the outcome of the study is a result of factors planned within the design of the study or determined by chance. The two approaches are often used sequentially: first the data are described with descriptive statistics, and then additional statistical manipulations are done to make inferences about the likelihood that the outcome was due to chance. When the descriptive approach is used, terms like mean, median, mode, variation, and standard deviation are used to communicate the results of the analysis. When the inferential approach is used, probability values (P) are used to communicate the significance or lack of significance of the results (Streiner & Norman, 1996).
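
A minimal sketch of this sequence in Python (assuming the NumPy and SciPy packages; the blood-pressure readings and group names are hypothetical) might look as follows:

    import numpy as np
    from scipy import stats

    # Hypothetical systolic blood pressure readings for two groups of patients
    control = np.array([128, 135, 121, 140, 132, 126, 138, 130])
    treated = np.array([118, 125, 115, 130, 122, 117, 128, 120])

    # Descriptive approach: summarize each group
    print("control: mean=%.1f median=%.1f sd=%.1f" %
          (control.mean(), np.median(control), control.std(ddof=1)))
    print("treated: mean=%.1f median=%.1f sd=%.1f" %
          (treated.mean(), np.median(treated), treated.std(ddof=1)))

    # Inferential approach: probability (P) that the difference arose by chance
    t, p = stats.ttest_ind(control, treated)
    print("t=%.2f, P=%.3f" % (t, p))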

Measurement

Measurement is defined as the “assignment of numerals according to rules” (Tyler 1963:7). Regardless of the variables under study, in order to make sense of the data collected, each variable must be measured in such a way that its magnitude or quantity is clearly identified. The specific strategy for a particular study depends upon the particular research problem, the sample under study, the availability of instruments, and the general feasibility of the project (Brockopp & Hastings-Tolsma, 2003). A variety of measurement methods are available for use in nursing research. Four measurement scales are used: nominal, ordinal, interval and ratio.

The nominal level of measurement

The nominal level of measurement is the most primitive or lowest level of classifying information. Nominal variables place people, events, and other phenomena into named categories that are exhaustive and mutually exclusive. These categories are discrete and noncontinuous. For nominal measurement, the admissible statistical operations are counting of frequency, percentage, proportion, mode, and the coefficient of contingency.

The ordinal level of measurement

The ordinal level of measurement is second in terms of its refinement as a means of classifying information. Ordinal implies that the
values of variables can be rank-ordered from highest to lowest.

Interval Level of Measurement

Interval level of measurement is quantitative in nature. The individual units are equidistant from one point to the other, but interval data do not have an absolute zero. For example, temperature is measured in Celsius or Fahrenheit. The interval level is the third level of measurement in relation to the complexity of statistical techniques that can be used to analyze data. Variables within this level of measurement are assessed incrementally, and the increments are equal.

Ratio Level of Measurement


Ratio level of measurement is characterized by variables that are assessed incrementally with equal distances between the increments and a scale that has an absolute zero. Ratio variables exhibit the characteristics of ordinal and interval measurement and can also be compared as ratios, that is, described as two or three times another value, or as one-third, one-quarter, and so on. Variables like time, length, and weight are ratio scaled and can also be measured using nominal or ordinal scales. The mathematical properties of interval and ratio scales are very similar, so the statistical procedures are common to both scales.

Errors of measurement

When a variable is measured there is the potential for errors to occur. Some of the sources of measurement error are instrument clarity, variations in administration, situational variations, response-set bias, transitory personal factors, response sampling, and instrument format.

Population, Sample, Variable

Population is defined as the entire collection of a set of objects, people, or events in a particular context. The population is the entire group of persons or objects that is of interest to the investigator. In statistics, a population is any collection of individual items or units that is the subject of investigation, that is, the collection of all items upon which statements will be based. This might include all patients with schizophrenia in a particular hospital, or all depressed individuals in a certain community. Characteristics of a population that differ from individual to individual are called variables. A variable is a concept (construct) that has been so specifically defined that precise observations, and therefore measurement, can be accomplished. Length, age, weight, temperature, and pulse rate are a few examples of variables.

The sample is a subset of the population selected by the investigator to participate in a research study. A sample refers to a subset of observations selected from the population. An investigator rarely wants to describe only the patients with schizophrenia in one particular hospital, and it is unlikely that an investigator can measure every depressed person in a community. As it is rarely practicable to obtain measures of a particular variable from all the units in a population, the investigator has to collect information from a smaller group or sub-set that represents the group as a whole. This sub-set is called a sample. Each unit in the sample provides a record, such as a measurement, which is called an observation. The sample represents the population on those critical characteristics the investigator plans to study.

Dependent and independent variables

An independent variable is the presumed cause of the dependent variable, which is the presumed effect. The independent variable is the one which explains or accounts for variations in the dependent variable: a change in the independent variable results in change in the other variable. In experiments, the independent variable is the variable manipulated by the experimenter. A dependent variable is one which changes in relationship to changes in another variable. A variable which is dependent in one study may be independent in another. An intervening variable is one that comes between the independent and the dependent variable.

Hypothesis

A hypothesis is a statement or declaration of the expected outcome of a research study. It is based on a logical rationale and can be tested empirically. Hypotheses are formulated in experimental research, and may also be developed in some non-experimental correlational studies. Normally, there are four elements in a hypothesis:

 (1) the dependent and independent variables,
 (2) some type of relationship between the independent and dependent variables,
 (3) the direction of the change, and
 (4) the subjects, i.e. the population being studied.

It is defined as “A tentative assumption made in order to draw out and test its logical or empirical consequences” (Webster 1968).

Standards in formulating a hypothesis (Ahuja, R. 2001):

 It should be empirically testable, whether it is right or wrong.
 It should be specific and precise.
 The statements in the hypothesis should not be contradictory.
 It should specify the variables between which the relationship is to be established.
 It should describe one issue only.
Characteristics of a Hypothesis (Treece & Treece, 1989)

 It is testable
 It is logical
 It is directly related to the research problem
 It is factually or theoretically based
 It states a relationship between variables
 It is stated in such a form that it can be accepted or rejected

A directional hypothesis predicts an outcome in a particular direction, while a nondirectional hypothesis simply states that there will be a difference between the groups. There can be two hypotheses: the research hypothesis and the null hypothesis. The null hypothesis is formed for the statistical purpose of negating it. If the research hypothesis states that there is a positive correlation between smoking and cancer, the null hypothesis states that there is no relation between smoking and cancer. It is easier to negate a statement than to establish it.

The null hypothesis is a statistical statement that there is no difference between the groups under study. A statistical test is used to determine the probability that the null hypothesis is true; if that probability is sufficiently low, the null hypothesis is rejected. That is, inferential statistics are used in an effort to reject the null, thereby showing that a difference does exist. The null hypothesis is a technical necessity when using inferential statistics, with statistical significance used as the criterion.

Types of errors

When the null hypothesis is rejected, the observed differences between groups are deemed unlikely to have occurred by chance alone. For example, if drug A is compared to a placebo for its effects on depression and the null hypothesis is rejected, the investigator concludes that the observed differences most likely are not explainable simply by sampling error. The key word in these statements is probable. When offering this conclusion, the investigator has the odds on his or her side. However, what are the chances of the statement being incorrect?

In statistical inference there is no way to say with certainty that rejection or retention of the null hypothesis was correct. There are two
types of potential errors. A type I error occurs when the null hypothesis is rejected when indeed it should have been retained; a type II
error occurs if the null hypothesis is retained when indeed it should have been rejected.

Type I Error

Type I errors occur when the null hypothesis is rejected but should have been retained, such as when a researcher decides that two means are different. He or she might conclude that the treatment works, or that the groups were not sampled from the same population, whereas in reality the observed differences are attributable only to sampling error. In a conservative scientific setting, type I errors should be made rarely: there is a great disadvantage to advocating treatments that really do not work.

The probability of a type I error is denoted by the Greek letter alpha (α). Because of the desire to avoid type I errors, statistical models have been created so that the investigator has control over the probability of a type I error. At the .05 significance or alpha level, a type I error is expected to occur in 5 percent of all cases; at the .01 level, in 1 percent of all cases. Thus, at the .05 α level, one type I error is expected in each 20 independent tests, and at the .01 α level, one in each 100 independent tests.
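
The meaning of the alpha level can be illustrated with a small simulation, sketched here in Python (assuming NumPy and SciPy; the population parameters are arbitrary). Both samples are drawn from the same population, so every rejection is, by construction, a type I error:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha = 0.05
    n_tests = 2000
    false_rejections = 0

    # Both samples come from the SAME population, so any rejection is a type I error
    for _ in range(n_tests):
        a = rng.normal(loc=100, scale=15, size=30)
        b = rng.normal(loc=100, scale=15, size=30)
        if stats.ttest_ind(a, b).pvalue < alpha:
            false_rejections += 1

    print("observed type I error rate: %.3f (expected about %.2f)" %
          (false_rejections / n_tests, alpha))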

Type II Error

The motivation to avoid a type I error might increase the probability of making a second type of error, in which the null hypothesis is retained when it is actually false. For example, an investigator may reach the conclusion that a treatment does not work when actually it is efficacious. The decision not to reject the null hypothesis when in actuality it is false is a type II error, and its probability is symbolized by the Greek letter beta (β).

Statistical Power

There are several maneuvers that will increase control over the probability of different types of errors and correct decisions. One type
of correct decision is the probability of rejecting the null hypothesis and being correct in that decision. Power is defined as the
probability of rejecting the null hypothesis when it should have been rejected. Ultimately, the statistical evaluation will be more
meaningful if it has high power.

It is particularly important to have high statistical power when the null hypothesis is retained. Retaining the null hypothesis with high power gives the investigator more confidence in stating that differences between groups were non-significant. One factor that affects power is the sample size. As the sample size increases, power increases: the larger the sample, the greater the probability that a correct decision will be made in rejecting or retaining the null hypothesis.

Another factor that influences power is the significance level. As the significance level (α) is raised, power increases. For instance, if the .05 level is selected rather than the .01 level, there will be a greater chance of rejecting the null hypothesis; however, there will also be a higher probability of a type I error. Conversely, by reducing the chances of a type I error, the chances of correctly identifying a real difference (power) are also reduced. Thus, the safest manipulation for increasing power without affecting the probability of a type I error is to increase the sample size.

The third factor affecting power is effect size. The larger the true difference between two groups, the greater the power. Experiments attempting to detect a very strong effect, such as the impact of a very potent treatment, might have substantial power even with small sample sizes. The detection of subtle effects may require very large samples in order to achieve reasonable statistical power. It is worth noting that not all statistical tests have equal power: the probability of correctly rejecting the null hypothesis is higher with some statistical methods than with others. For example, nonparametric statistics are typically less powerful than parametric statistics.
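
Power calculations of this kind are usually done with software. A brief sketch, assuming Python with the statsmodels package (the effect size of 0.5 is an arbitrary "medium" value chosen for illustration):

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Sample size per group needed to detect an effect of d = 0.5
    # with alpha = .05 and power = .80 (roughly 64 per group)
    n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print("required n per group: %.0f" % n)

    # Power achieved with only 20 subjects per group for the same effect
    power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=20)
    print("power with n=20 per group: %.2f" % power)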

Sampling

The process of selecting a fraction of the sampling units (i.e. a collection with specified dimensions) of the target population for inclusion in the study is called sampling. Sampling can be probability sampling or non-probability sampling.

Probability Sampling or Random sampling

Probability sampling, also called random sampling, is a selection process that gives each member of the population the same probability of being selected. Probability sampling is the process of selecting samples based on probability theory, which quantifies the possibility that events occur by chance. Random sampling is the best method for ensuring that a sample is representative of the larger population. Random sampling can be simple random sampling, stratified random sampling, or cluster sampling.

Nonprobability sampling

Nonprobability sampling is a selection process in which the probability that any one individual or subject is selected is not necessarily equal to the probability that another individual or subject is chosen. The probability of inclusion, and the degree to which the sample represents the population, are unknown. The major problem with nonprobability sampling is that sampling bias can occur. Nonprobability sampling can be convenience sampling, purposive sampling, or quota sampling.

Sampling Error (Standard Error)

Sampling error refers to the discrepancies that inevitably occur when a small group (sample) is selected to represent the characteristics of a larger group (population). It is defined as the difference between a parameter and an estimate of that parameter which is derived from a sample (Lindquist, 1968:8). The means and standard deviations calculated from the data collected on a given sample will not be exactly the same as those calculated from data collected on the entire population. It is this discrepancy between the characteristics of the sample and the population that constitutes sampling error.
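
Sampling error can be made concrete with a short sketch in Python (assuming NumPy; the population is simulated, so all values are hypothetical). Note how the standard error of the mean shrinks as the sample grows:

    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical population of 100,000 systolic blood pressure values
    population = rng.normal(loc=120, scale=15, size=100_000)
    print("population mean: %.2f" % population.mean())

    # Sample means differ from the population mean: that is sampling error
    for n in (10, 100, 1000):
        sample = rng.choice(population, size=n, replace=False)
        sem = sample.std(ddof=1) / np.sqrt(n)  # standard error of the mean
        print("n=%4d  sample mean=%.2f  SEM=%.2f" % (n, sample.mean(), sem))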

Descriptive statistics

Descriptive statistics are techniques which help the investigator to organize, summarize and describe measures of a sample. Here no
predictions or inferences are made regarding population parameters. Descriptive statistics are used to summarize observations and to
place these observations within context. The most common descriptive statistics include measures of central tendency and measures of
variability.

Central tendency or “measures of the middle”

There are three commonly used measures of central tendency: the mean, the median, and the mode. They are calculated to identify the average, the most typical, and the most common values, respectively, among the data collected. The mean is the arithmetic average, the median is the point representing the 50th percentile in a distribution, and the mode is the most common score. Sometimes each of these measures is the same; on other occasions, the mean, the median, and the mode can be different. The mean, median, and mode are the same when the distribution of scores is normal; under most circumstances they will not be exactly the same. The mode is most likely to misrepresent the underlying distribution and is rarely used in statistical analysis. The mean and the median are the most commonly reported measures of central tendency.

The major consideration in choosing between them is how much weight should be given to extreme scores. The mean takes into account each score in the distribution; the median finds only the halfway point. Because the mean best represents all subjects, and because of its desirable mathematical properties, the mean is typically favored in statistical analysis. Despite the advantages of the mean, there are
also some advantages to the median. In particular, the median disregards outlier cases, whereas the mean moves further in the
direction of the outliers. Thus, the median is often used when the investigator does not want scores in the extreme of the distribution to
have a strong impact. The median is also valuable for summarizing data for a measure that might be insensitive toward the higher
ranges of the scale. For instance, a very easy test may have a ceiling effect but does not show the true ability of some test-takers. A
ceiling effect occurs when the test is too easy to measure the true ability of the best students. Thus, if some scores stack up at the
extreme, the median may be more accurate than the mean. If the high scores had not been bounded by the highest obtainable score, the
mean may actually have been higher.

The mean, median, and mode are exactly the same in a normal distribution. However, not all distributions of scores have a normal or
bell-shaped appearance. The highest point in a distribution of scores is called the modal peak. A distribution with the modal peak off to
one side or the other is described as skewed. The word skew literally means "slanted."
The direction of skew is determined by the location of the tail or flat area of the distribution. Positive skew occurs when the tail goes
off to the right of the distribution. Negative skew occurs when the tail or low point is on the left side of the distribution. The mode is
the most frequent score in the distribution. In a skewed distribution, the mode remains at the peak whereas the mean and the median
shift away from the mode in the direction of the skewness. The mean moves furthest in the direction of the skewness, and the median
typically falls between the mean and the mode. Mode is the best measure of central tendency when nominal variables are used.
Median is the best measure of central tendency when ordinal variables are used. Mean is the best measure of central tendency when
interval or ratio scales are used.
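
The behaviour of the three measures under skew can be seen in a small Python sketch (assuming NumPy and a recent SciPy for the keepdims argument; the length-of-stay figures are hypothetical, with one extreme outlier creating positive skew):

    import numpy as np
    from scipy import stats

    # Hypothetical hospital length-of-stay data (days), positively skewed
    stay = np.array([2, 2, 3, 3, 3, 4, 4, 5, 6, 30])

    print("mode  :", stats.mode(stay, keepdims=False).mode)  # most frequent: 3
    print("median:", np.median(stay))                        # halfway point: 3.5
    print("mean  : %.1f" % stay.mean())                      # pulled toward tail: 6.2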

Measures of Variability

If there were no variability within populations, there would be no need for statistics: a single item or sampling unit would tell us all that is needed to know about the population as a whole. Three indices are used to measure variation or dispersion among scores: (1) the range, (2) the variance, and (3) the standard deviation (Cozby, 2000). The range describes the difference between the largest and smallest observations made; the variance and standard deviation are based on the average difference, or deviation, of observations from the mean.

Measures of central tendency, such as the mean and median, are used to summarize information. They are important because they
provide information about the average score in the distribution. Knowing the average score, however, does not provide all the
information required to describe a group of scores. In addition, measures of variability are required. The simplest method of describing
variability is the range, which is simply the difference between the highest score and lowest score.

Another statistic, known as the interquartile range, describes the interval of scores bounded by the 25th and 75th percentile ranks; the interquartile range thus covers the middle 50 percent of the distribution. In contrast to ranges, which are used infrequently in statistical analysis, the variance and standard deviation are used commonly. Since the mean is the average score in a distribution, the sum of the deviations around the mean will always equal zero. Yet, in order to understand the characteristics of a distribution of scores, some estimate of deviation around the mean is important. Although the deviations themselves sum to zero, the squared deviations around the mean yield a meaningful index: the variance is the sum of the squared deviations around the mean divided by the number of cases.

Range

Range is the simplest method of examining variation among scores and refers to the difference between the highest and lowest values produced. It shows how wide the distribution is over which the measurements are spread. For continuous variables, the range is the arithmetic difference between the highest and lowest observations in the sample. In the case of counts or measurements, 1 should be added to the difference because the range is inclusive of the extreme observations. The range takes account of only the most extreme observations; it is therefore limited in its usefulness, because it gives no information about how the observations are distributed. The interquartile range is the area between the lowest quartile and the highest quartile, or the middle 50% of the scores.

Variance

The variance is a very useful statistic and is commonly employed in data analysis. However, its calculation requires finding the squared deviations around the mean rather than the simple or absolute deviations around the mean. Thus, when the variance is calculated, the result is expressed in squared units rather than the natural units of the measurement. Taking the square root of the variance puts the observations back into their original metric: the square root of the variance is known as the standard deviation. The standard deviation is an approximation of the average deviation around the mean. Although the standard deviation is not technically equal to the average deviation, it gives an approximation of how much the average score deviates from the mean. One method for calculating the variance is to first calculate the deviation scores; since the sum of a set of deviation scores equals zero, the deviations are squared before averaging. The variance is the square of the standard deviation; conversely, the standard deviation is the square root of the variance.
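
The calculation can be followed step by step in Python (assuming NumPy; the five scores are arbitrary):

    import numpy as np

    scores = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
    mean = scores.mean()                                # 8.0

    deviations = scores - mean
    print("sum of deviations:", deviations.sum())       # always 0.0

    variance = (deviations ** 2).sum() / len(scores)    # population variance
    sd = np.sqrt(variance)                              # back in original units
    print("variance=%.1f  sd=%.2f" % (variance, sd))    # variance=8.0  sd=2.83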

Standard Deviation

The standard deviation is the most widely applied measure of variability. When observations have been obtained from every item or sampling unit in a population, the symbol for the standard deviation is σ (lower-case sigma); this is a parameter of the population. When it is calculated from a sample it is symbolized by s. The standard deviation of a distribution of scores is the square root of the variance. Large standard deviations suggest that scores do not cluster around the mean; they are probably widely scattered. Similarly, small standard deviations suggest that there is very little difference among scores.

Normal Distribution

The normal distribution is a mathematical construct which suggests that naturally occurring observations follow a given pattern. The pattern is the normal curve, which places most observations at the mean and fewer observations at either extreme. This curve, or bell-shaped distribution, reflects the tendency of the observations concerning a specific variable to cluster in a particular manner.

The normal curve can be described for any set of data given the mean and standard deviation of the data and the assumption that the characteristic under study is normally distributed within the population. A normal distribution of the data implies that about 68% of observations fall within one standard deviation of the mean, about 95% fall within two standard deviations of the mean, and about 99.7% fall within three standard deviations of the mean. Theoretically, the range of the curve is unlimited.
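
These percentages can be verified from the standard normal distribution, sketched here in Python (assuming SciPy):

    from scipy.stats import norm

    # Proportion of a normal distribution within 1, 2 and 3 SDs of the mean
    for k in (1, 2, 3):
        p = norm.cdf(k) - norm.cdf(-k)
        print("within %d SD: %.1f%%" % (k, 100 * p))
    # prints approximately 68.3%, 95.4% and 99.7%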

Standard Scores

One of the problems with means and standard deviations is that their meanings are not independent of context. For example, a mean of 45.6 means little unless the scale and spread of the scores are known. The Z-score is a transformation into standardized units that provides a context for the interpretation of scores. The Z-score is the difference between a score and the mean, divided by the standard deviation. To make comparisons between groups, standard scores rather than raw scores can be used. Standard scores enable the investigator to examine the position of a given score by measuring its deviation from the mean of all scores.

Most often, the units on the x axis of the normal distribution are in Z-units. Any variable transformed into Z-units will have a mean of
0 and a standard deviation of 1. Translation of Z-scores into percentile ranks is accomplished using a table for the standard normal
distribution. Certain Z-scores are of particular interest in statistics and psychological testing. The Z-score 1.96 represents the 97.5th
percentile in a distribution whereas -1.96 represents the 2.5th percentile. A Z-score of less than -1.96 or greater than +1.96 falls outside
of a 95 percent interval bounding the mean of the Z-distribution. Some statistical definitions of abnormality view these defined
deviations as cutoff points. Thus, a person who is more than 1.96 Z-scores from the mean on some attribute might be regarded as
abnormal. In addition to the interval bounded by 95 percent of the cases, the interval including 99 percent of all cases is also
commonly used in statistics.
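
A brief Python sketch of the transformation (assuming SciPy; the test score, mean and SD are hypothetical):

    from scipy.stats import norm

    score, mean, sd = 130, 100, 15        # hypothetical score on a scale
    z = (score - mean) / sd               # Z = 2.0
    percentile = norm.cdf(z) * 100        # percentile rank from the standard normal
    print("z=%.1f, percentile=%.1f" % (z, percentile))   # z=2.0, percentile=97.7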

Confidence Intervals

In most statistical inference problems the sample mean is used to estimate the population mean. Each sample mean is considered to be
an unbiased estimate of the population mean. Although the sample mean is unlikely to be exactly the same as the population mean,
repeated random samples will form a sampling distribution of sample means. The mean of the sampling distribution is an unbiased
estimate of the population mean. However, taking repeated random samples from the population is also difficult and expensive.
Instead, it is necessary to estimate the population mean based on a single sample; this is done by creating an interval around the
sample mean.

The first step in creating this interval is finding the standard error of the mean. The standard error of the mean is the standard deviation
divided by the square root of the sample size. Statistical inference is used to estimate the probability that the population mean will fall
within some defined interval. Because sample means are distributed normally around the population mean, the sample mean is most
probably near the population value. However, it is possible that the sample mean is an overestimate or an underestimate of the
population mean. Using information about the standard error of the mean, it is possible to put a single observation of a mean into
context.
The ranges that are likely to capture the population mean are called confidence intervals. Confidence intervals are bounded by
confidence limits. The confidence interval is defined as a range of values with a specified probability of including the population
mean. A confidence interval is typically associated with a certain probability level. For example, the 95 percent confidence interval has
a 95 percent chance of including the population mean. A 99 percent confidence interval is expected to capture the true mean in 99 of
each 100 cases. The confidence limits are defined as the values for the points that bound the confidence interval. Creating a confidence interval requires a mean, a standard error of the mean, and the Z-value associated with the interval.
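
A minimal sketch of the calculation in Python (assuming NumPy; the readings are hypothetical):

    import numpy as np

    data = np.array([118, 125, 115, 130, 122, 117, 128, 120])
    mean = data.mean()
    sem = data.std(ddof=1) / np.sqrt(len(data))   # standard error of the mean

    # 95 percent confidence interval using the Z-value 1.96
    lower, upper = mean - 1.96 * sem, mean + 1.96 * sem
    print("mean=%.1f, 95%% CI = (%.1f, %.1f)" % (mean, lower, upper))

Strictly, with a sample this small the Z-value 1.96 should be replaced by the corresponding value from the t-distribution; the Z-value is used here only to keep the sketch simple.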

Inferential statistics

Inferential statistics are mathematical procedures which help the investigator to predict or infer population parameters from sample
measures. This is done by a process of inductive reasoning based on the mathematical theory of probability (Fowler, J., Jarvis, P. &
Chevannes M. 2002).

Probability

The idea of probability is basic to inferential statistics. The goal of inferential statistical techniques is the same: to determine as precisely as possible the probability of an occurrence. Probability can be regarded as quantifying the chance that a stated outcome of an event will take place; it refers to the likelihood that the differences between groups under study are the result of chance. Probability theory concerns the likelihood of any given event out of all possible outcomes, and the probabilities of all mutually exclusive outcomes add up to one. When a coin is tossed it has two outcomes, head or tail, i.e. a 0.5 chance of a head and a 0.5 chance of a tail; when these two chances are added they give 1. Similarly, in a class of fifty students, the chance of a given student ranking first in the class is 1 in 50 (i.e. .02). By convention, probability values fall on a scale between 0 (impossibility) and 1 (certainty), but they are sometimes expressed as percentages, so the probability scale has much in common with the proportion scale. The chance of committing a type I error is decided by testing the hypothesis against its probability value. In the behavioural sciences, <.05 is taken as the alpha value for testing the hypothesis; when more stringent outcomes are required, <.01 or <.001 is taken as the alpha or p value.

Statistical Significance (alpha level)

The level of significance (or alpha level) is set to identify the probability that the difference between the groups has occurred by chance rather than in response to the manipulation of variables. The decision of whether the null hypothesis should be rejected depends on the level of error that can be tolerated. This tolerance is expressed as a level of significance or alpha level. The usual level of significance or alpha level is 0.05, although levels of 0.01 or 0.001 may be used when a high level of accuracy is required. In testing the significance of obtained statistics, if the investigator rejects the null hypothesis when, in fact, it is true, he or she commits a type I or alpha error; and if the investigator accepts the null hypothesis when, in fact, it is false, he or she commits a type II or beta error (Singh AK, 2002).

Parametric and Non-parametric Tests

Parametric and non-parametric tests are commonly employed in behavioural research.

Parametric Tests

A parametric test is one which specifies certain conditions about the parameters of the population from which a sample is taken. Such statistical tests are considered to be more powerful than non-parametric tests and should be used if their basic requirements or assumptions are met. The assumptions for using parametric tests are:

 The observations must be independent.
 The observations must be drawn from a normally distributed population.
 The samples drawn from a population must have equal variances (homogeneity of variance); this condition is the more important if the size of the samples is particularly small.
 The variables must be expressed on interval or ratio scales.
 The variables under study should be continuous.

Examples of parametric tests are t-test, z-test and F-test.

Non-parametric tests
A non-parametric test is one that does not specify any conditions about the parameters of the population from which the sample is drawn. These tests are called distribution-free statistics; they require only that the observations be independent. The requisites for using a non-parametric statistical test are:

 The shape of the distribution of the population from which the sample is drawn is not known to be a normal curve.
 The variables have been quantified on the basis of nominal measures (or frequency counts).
 The variables have been quantified on the basis of ordinal measures or rankings.
 A non-parametric test should be used only when the parametric assumptions cannot be met.

Common non-parametric tests

 Chi-square test
 Mann-Whitney U test
 Rank difference methods (Spearman's rho and Kendall's tau)
 Coefficient of concordance (W)
 Median test
 Kruskal-Wallis test
 Friedman test

Tips on using appropriate tests in experimental design

The following rules of thumb match common experimental designs to tests; a short code sketch of the same decision logic appears after the list.

Two unmatched (unrelated) groups, experimental and control (e.g. patients receiving a prepared therapeutic intervention for depression and a control group of patients on routine care):

 See the distribution, whether normal or non-normal.
 If normal, use a parametric test (independent t-test).
 If non-normal, go for a nonparametric test (Mann-Whitney U test), or make the data normal through natural-log transformation or z-transformation.

Two matched (related) groups, pre-post design (the same group is rated before the intervention and rated again after the period of intervention, i.e. two ratings in the same or related group):

 See the distribution, whether normal or non-normal.
 If normal, use the parametric paired t-test.
 If non-normal, use the nonparametric Wilcoxon signed-rank (W) test.

More than two unmatched (unrelated) groups (for example three groups: schizophrenia, bipolar and control):

 See the distribution, whether normal or non-normal.
 If normally distributed, use parametric one-way ANOVA.
 If non-normal, use the nonparametric Kruskal-Wallis test.

More than two matched (related) groups (for example, in an ongoing intervention, ratings at different times: t1, t2, t3, t4 …):

 See the distribution, whether normal or non-normal.
 If the data are normal, use parametric repeated-measures ANOVA.
 If the data are non-normal, use the nonparametric Friedman test.
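
As a rough illustration only, the decision logic above can be captured in a few lines of Python (the function and its names are hypothetical, not a standard library routine):

    def choose_test(groups: int, matched: bool, normal: bool) -> str:
        """Map a design (number of groups, matched or not, normality) to a test."""
        if groups == 2:
            if matched:
                return "paired t-test" if normal else "Wilcoxon signed-rank test"
            return "independent t-test" if normal else "Mann-Whitney U test"
        if matched:
            return "repeated-measures ANOVA" if normal else "Friedman test"
        return "one-way ANOVA" if normal else "Kruskal-Wallis test"

    print(choose_test(groups=2, matched=False, normal=True))   # independent t-test
    print(choose_test(groups=3, matched=False, normal=False))  # Kruskal-Wallis test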

Matched (related) and unmatched (unrelated) observations

When analyzing bivariate data such as correlations, a single sampling unit gives a pair of observations representing two different variables. The observations comprising a pair are uniquely linked and are said to be matched or paired. For example, the systolic blood pressures of 10 patients and the measurements of another 10 patients after drug administration are unmatched, whereas the measurements of the same 10 patients before and after administration of the drug are matched. It is possible to conduct more sensitive analyses if the observations are matched.

Common Statistical tests


Chi-square (χ2) Test (analyzing frequencies)

The chi-square test is one of the important non-parametric tests; Guilford (1956) has called it the 'general-purpose statistic'. Chi-square tests are widely referred to as tests of homogeneity, randomness, association, independence and goodness of fit. The chi-square test is used when the data are expressed in terms of frequencies, proportions, or percentages. The test applies only to discrete data, but any continuous data can be reduced to categories in such a way that they can be treated as discrete. The chi-square statistic is used to evaluate the relative frequency or proportion of events in a population that fall into well-defined categories. For each category, there is an expected frequency that is obtained from knowledge of the population or from some other theoretical perspective. There is also an observed frequency for each category, obtained from observations made by the investigator. The chi-square statistic expresses the discrepancy between the observed and the expected frequencies.

There are several uses of the chi-square test:

1. The chi-square test can be used as a test of the equal-probability hypothesis (by the equal-probability hypothesis is meant that the probability of having frequencies in all the given categories is equal).

2. It can be used for testing the significance of the independence hypothesis (the independence hypothesis means that one variable is not affected by, or related to, another variable and hence the two variables are independent).

3. The chi-square test can be used to test a hypothesis regarding the shape of a frequency distribution, such as normality (goodness of fit).

4. The chi-square test is used in testing the significance of several statistics like the phi coefficient, the coefficient of concordance, and the coefficient of contingency.

5. In the chi-square test, the frequencies we observe are compared with those we expect on the basis of some null hypothesis. If the discrepancy between the observed and expected frequencies is great, the value of the calculated test statistic will exceed the critical value at the appropriate number of degrees of freedom, and the null hypothesis is rejected in favor of some alternative. Mastery of the method lies not so much in the computation of the test statistic itself as in the calculation of the expected frequencies.

6. The chi-square statistic does not give any information regarding the strength of a relationship; it only conveys the existence or non-existence of a relationship between the variables investigated. To establish the extent and nature of the relationship, additional statistics such as phi, Cramer's V, or the contingency coefficient can be used (Brockopp & Hastings-Tolsma, 2003).
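
A minimal chi-square sketch in Python (assuming NumPy and SciPy; the 2×2 frequencies are invented for illustration):

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 table: exposure (rows) by disease status (columns)
    observed = np.array([[30, 70],
                         [10, 90]])

    chi2, p, df, expected = chi2_contingency(observed)
    print("chi2=%.2f, df=%d, P=%.4f" % (chi2, p, df))
    print("expected frequencies:\n", expected)

For a 2×2 table, chi2_contingency applies Yates's continuity correction by default, in line with the tip below.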

Tips on analyzing frequencies

 All versions of the chi-square test compare the agreement between a set of observed frequencies and those expected if some null hypothesis is true.
 All objects counted must fall into categories on a nominal scale, or into unambiguous intervals on a continuous scale (such as successive days or months).
 Apply Yates's correction in the chi-square test when there is only one degree of freedom, i.e. in a 'one-way' test with two categories or in a 2×2 contingency table.

Testing normality of a data

Parametric statistical techniques depend upon the mathematical properties of the normal curve. They usually assume that samples are drawn from populations that are normally distributed. Before adopting a statistical test, it is therefore essential to determine whether the data are normal or non-normal. The normality of data can be checked in two ways: by plotting the data to see if they look normal, or by using statistical procedures. The commonest statistical test of normality is the Kolmogorov-Smirnov test. If the test shows no significant departure from normality (P > .05), a parametric test can ideally be used for analyzing the data; if the departure is significant (P < .05), a non-parametric test should be used for analysis. The Shapiro-Wilk test is another commonly used test of normality. Statistical packages like SPSS can be used for doing these tests.
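
A short sketch of both tests in Python (assuming NumPy and SciPy; the data are simulated, so they are only illustrative):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    data = rng.normal(loc=50, scale=10, size=40)  # hypothetical, roughly normal data

    # Shapiro-Wilk test: P > .05 means no significant departure from normality
    w, p = stats.shapiro(data)
    print("Shapiro-Wilk: W=%.3f, P=%.3f" % (w, p))

    # Kolmogorov-Smirnov test against a normal distribution with the sample's
    # own mean and SD (estimating the parameters this way is a simplification)
    d, p = stats.kstest(data, "norm", args=(data.mean(), data.std(ddof=1)))
    print("Kolmogorov-Smirnov: D=%.3f, P=%.3f" % (d, p))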

t-test and z-test (comparing means)

In experimental sciences, comparisons between groups are very common. Usually, one group is the treatment, or experimental group,
while the other group is the untreated, or control group. If patients are randomly assigned to these two groups, it is assumed that they
differ only by chance prior to treatment. Differences between groups after the treatment are usually used to estimate treatment effect.
The task of the statistician is to determine whether any observed differences between the groups following treatment should be
attributed to chance or to the treatment. The t-test is commonly used for this purpose. There are actually several different types of t-tests.

Types of t-Tests

 Comparison of a sample mean with a hypothetical population mean.
 Comparison between two scores in the same group of individuals.
 Comparison between observations made on two independent groups.

The t-test and z-test are parametric inferential statistical techniques used when a comparison of two means is required. They test the null hypothesis that there is no difference in means between the two groups. The reporting of the results of a t-test generally includes the df, the t-value, and the probability level. A t-test can be one-tailed or two-tailed: if the hypothesis is directional, a one-tailed test is generally used, and if the hypothesis is non-directional, a two-tailed test is used. The t-test is used when the sample size is less than 30, and the z-test when the sample size is more than 30.

There are dependent and independent t-tests, and the formula used to calculate a t-test differs depending on whether the samples involved are dependent or independent. Samples are independent when there are two separate groups, such as an experimental and a control group. Samples are dependent when the participants from the two groups are paired in some manner. The form of the t-test used with a dependent sample may be termed paired, dependent, matched, or correlated (Brockopp & Hastings-Tolsma, 2003).
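
Both forms can be sketched in Python (assuming NumPy and SciPy; the blood-pressure values are hypothetical):

    import numpy as np
    from scipy import stats

    before = np.array([140, 152, 138, 147, 155, 143])  # same patients, pre-treatment
    after  = np.array([132, 145, 135, 139, 148, 137])  # same patients, post-treatment
    other  = np.array([141, 149, 136, 150, 144, 139])  # an independent group

    # Dependent (paired) t-test: the same subjects measured twice
    t, p = stats.ttest_rel(before, after)
    print("paired:      t=%.2f, P=%.4f" % (t, p))

    # Independent t-test: two unrelated groups
    t, p = stats.ttest_ind(before, other)
    print("independent: t=%.2f, P=%.4f" % (t, p))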

Degree of freedom (df)

Degrees of freedom (df) is a mathematical concept that describes the number of events or observations that are free to vary; for each statistical test there is a formula for calculating the appropriate degrees of freedom (e.g. n - 1).

Mann-Whitney U-test

The Mann-Whitney U test is a non-parametric substitute for the parametric t-test, used for comparing the medians of two unmatched samples. For application of the U test, data must be obtained on an ordinal or interval scale. We can use the Mann-Whitney U-test, for example, to compare the median time taken to perform a task by a sample of subjects who had not drunk alcohol with that of another sample who had drunk a standardized volume of alcohol. The test is used to examine group differences when the data are non-normal and the groups are independent, and it can be applied to groups of equal or unequal size.
Some key points about using the Mann-Whitney U-test are:

 The test can be applied to interval data (measurements), to counts of things, to derived variables (proportions and indices) and to ordinal data (rank scales, etc.).
 Unlike some test statistics, the calculated value of U has to be smaller than the tabulated critical value in order to reject the null hypothesis.
 The test is for a difference in medians. It is a common error to record a statement like 'the Mann-Whitney U-test showed there is a significant difference in means'. There is, however, no need to calculate the medians of each sample to do the test.
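
A sketch of the test in Python (assuming NumPy and SciPy; the task times are invented):

    import numpy as np
    from scipy.stats import mannwhitneyu

    # Hypothetical task times (seconds): subjects without and with alcohol
    sober   = np.array([12, 14, 11, 13, 15, 12, 16])
    alcohol = np.array([18, 15, 20, 17, 22, 19, 16])

    u, p = mannwhitneyu(sober, alcohol, alternative="two-sided")
    print("U=%.1f, P=%.4f" % (u, p))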

Wilcoxon test (matched pairs)

The Wilcoxon test for matched pairs is a non-parametric test for comparing the medians of two matched samples. It calls for a test statistic T whose probability distribution is known. The observations must be drawn on an interval scale; it is not possible to use this test on ordinal measurements. The Wilcoxon test is used with matched-pair samples. The test is for a difference in medians and assumes that the samples have been drawn from parent populations that are symmetrically, though not necessarily normally, distributed.

Pearson Product-Moment Correlation Coefficient

The Pearson product-moment correlation is a parametric test and a common method of assessing the association between two variables under study. In this test an estimation of at least one parameter is involved, measurement is at the interval level, and it is assumed that the variables under study are normally distributed within the population.

Spearman Rank Correlation Coefficient

Spearman's rho is a nonparametric test, the non-parametric equivalent of the parametric Pearson r. Spearman's rank correlation technique is used when the conditions for the product-moment correlation coefficient do not apply. The test is widely used by health scientists; it uses the ranks of the x and y observations, and the raw data themselves are discarded.

Tips on using correlation tests

 When observations of one or both variables are on an ordinal scale, or are proportions, percentages, indices or counts of things, use Spearman's rank correlation coefficient. The number of units in the sample, i.e. the number of paired observations, should be between 7 and 30.
 When observations are measured on an interval scale, the product-moment correlation coefficient should be considered. Sample units must be obtained randomly, and the data should be bivariate normal, i.e. both x and y normally distributed.
 The relationship between the variables should be rectilinear (a straight line), not curved. Certain mathematical transformations (e.g. logarithmic transformation) will 'straighten up' curved relationships.
 A strong and significant correlation does not mean that one variable is necessarily the cause of the other. It is possible that some additional, unidentified factor is the underlying source of variability in both variables.
 Correlations measured in samples estimate correlations in the populations. A correlation in a sample is not 'improved' or strengthened by obtaining more observations; however, larger samples may be required to confirm the statistical significance of weaker correlations.
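
Both coefficients can be computed side by side in Python (assuming NumPy and SciPy; the paired anxiety and pulse values are hypothetical):

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    # Hypothetical paired observations: anxiety score (x) and pulse rate (y)
    x = np.array([20, 25, 30, 35, 40, 45, 50, 55])
    y = np.array([72, 75, 74, 80, 83, 82, 88, 90])

    r, p1 = pearsonr(x, y)      # parametric: interval data, bivariate normal
    rho, p2 = spearmanr(x, y)   # non-parametric: uses ranks only

    print("Pearson r=%.2f (P=%.4f)" % (r, p1))
    print("Spearman rho=%.2f (P=%.4f)" % (rho, p2))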


Regression Analysis

Regression analysis is often used to predict the value of one variable given information about another variable. The procedure can
describe how two continuous variables are related. Regression analysis is used to examine relationships among continuous variables
and is most appropriate for data that can be plotted on a graph. Data are usually plotted, so that the independent variable is seen on the
horizontal (x) axis and the dependent variable on the vertical (y) axis. The statistical procedure for regression analysis includes a test
for the significance of the relationship between two variables. Given a significant relationship between two variables, knowledge of
the value of the independent variable permits a prediction of the value of the dependent variable.
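
A simple linear regression sketch in Python (assuming NumPy and SciPy; the hours of patient education and self-care scores are invented):

    import numpy as np
    from scipy.stats import linregress

    x = np.array([1, 2, 3, 4, 5, 6, 7, 8])          # independent variable
    y = np.array([52, 55, 61, 60, 68, 70, 74, 77])  # dependent variable

    result = linregress(x, y)
    print("slope=%.2f intercept=%.2f r=%.2f P=%.4f" %
          (result.slope, result.intercept, result.rvalue, result.pvalue))

    # Given a significant relationship, predict y for a new value of x
    print("predicted y at x=10: %.1f" % (result.intercept + result.slope * 10))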

One-Way Analysis of Variance (ANOVA)

When there are three or more samples, and the data from each sample are thought to be distributed normally, analysis of variance (ANOVA) may be the technique of choice. One-way analysis of variance, developed by R. A. Fisher, is a parametric inferential statistical test that enables investigators to compare two or more group means. The reporting of the results includes the df, the F value and the probability level. ANOVA is of two types: simple analysis of variance, and complex or two-way analysis of variance. One-way analysis of variance is an extension of the t-test which permits the investigator to compare more than two means simultaneously.

Researchers studying two or more groups can use ANOVA to determine whether there are differences among the groups. For example, nurse investigators who want to assess the levels of helplessness among three groups of patients (long-term, acute care and outpatient) can administer an instrument designed to measure levels of helplessness and then calculate an F ratio. If the F ratio is sufficiently large, the conclusion can be drawn that there is a difference between at least two of the means.

The larger the F-ratio, the more likely it is that the null hypothesis can be rejected. Other tests, called post hoc comparisons, can be used to determine which of the means differ significantly. Fisher's LSD, Duncan's new multiple range test, the Newman-Keuls, Tukey's HSD, and Scheffe's test are the post hoc comparison tests most frequently used following ANOVA. In some instances a post hoc comparison is not necessary because the means of the groups under consideration readily convey the differences between the groups (Brockopp & Hastings-Tolsma, 2003).
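
A one-way ANOVA can be sketched in Python (assuming NumPy and SciPy; the helplessness scores for the three groups are hypothetical):

    import numpy as np
    from scipy.stats import f_oneway

    # Hypothetical helplessness scores for three patient groups
    long_term  = np.array([42, 45, 39, 48, 44])
    acute      = np.array([35, 38, 33, 36, 40])
    outpatient = np.array([28, 31, 27, 33, 30])

    f, p = f_oneway(long_term, acute, outpatient)
    print("F=%.2f, P=%.5f" % (f, p))

A significant F says only that at least two of the means differ; a post hoc test such as Tukey's HSD is then needed to identify which pairs differ.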

Kruskal-Wallis test-more than two samples

The Kruskal-Wallis test is a simple non-parametric test to compare the medians of three or more samples. Observations may be
interval measurements, counts of things, derived variables, or ordinal ranks. If there are only three samples, then there must be at least
five observations in each sample. Samples do not have to be of equal sizes. The statistic K is used to indicate the test value.

Multivariate Analysis

Two-way or Factorial Analysis of Variance


Factorial analysis of variance permits the investigator to analyze the effects of two or more independent variables on the dependent
variable (one-way ANOVA is used with one independent variable and one dependent variable). The term factor is interchangeable with
independent variable and factorial ANOVA therefore refers to the idea that data having two or more independent variables can be
analyzed using this technique.

Analysis of Covariance (ANCOVA)

ANCOVA is an inferential statistical test that enables investigators to adjust statistically for group differences that may interfere with obtaining results that relate specifically to the effects of the independent variable(s) on the dependent variable(s).

Multivariate Analysis

Multivariate analysis refers to a group of inferential statistical tests that enable the investigator to examine multiple variables
simultaneously. Unlike other statistical techniques, these tests permit the investigator to examine several dependent and independent
variables simultaneously.

Choosing the appropriate test

If the data fulfill the parametric assumptions, any of the parametric tests which suit the purpose can be used. On the other hand, if the data do not fulfill the parametric requirements, any suitable non-parametric statistical test can be selected. Other factors which decide the selection of an appropriate statistical test are the number of independent and dependent variables and the nature of the variables (whether nominal, ordinal, interval or ratio). When both independent and dependent variables are interval measures and there is more than one of each, multiple correlation is the most appropriate statistic; when they are interval measures and there is only one of each, the Pearson r may be used. With ordinal and nominal measures, non-parametric statistics are the common choice.

Computer Aided Analysis

The availability of computer software has greatly facilitated the execution of most statistical techniques. Many statistical packages run on different types of platforms or computer configurations. For general data analysis, the Statistical Package for the Social Sciences (SPSS), the BMDP series, and the Statistical Analysis System (SAS) are recommended. These are general-purpose statistical packages that perform essentially all the analyses common to biomedical research. In addition, a variety of other packages have emerged.

SYSTAT runs on both IBM-compatible and Macintosh systems and performs most of the analyses commonly used in biomedical
research. The popular SAS program has been redeveloped for Macintosh systems and is sold under the name JMP. Other commonly
used programs include Stata, which is excellent for the IBM-compatible computers. The developers of Stata release a regular
newsletter providing updates, which makes the package very attractive. StatView is a general-purpose program for the Macintosh
computer.

Newer versions of StatView include an additional program called SuperANOVA, which is an excellent set of ANOVA routines. StatView is user-friendly and also has superb graphics. For users interested in epidemiological analyses, Epilog is a relatively low-cost program that runs on IBM-compatible platforms; it is particularly valuable for rate calculations, analysis of disease-clustering patterns, and survival analysis. GB-STAT is a low-cost, multipurpose package that is very comprehensive.

SPSS (Statistical Package for Social Sciences) is one among the popular computer programs for data analysis. This software provides
a comprehensive set of flexible tools that can be used to accomplish a wide variety of data analysis tasks (Einspruch, 1998). SPSS is
available in a variety of platforms. The latest product information and free tutorial are available at www.spss.com.

Computer software programs that provide easy access to highly sophisticated statistical methodologies represent both opportunities
and dangers. On the positive side, no serious researcher need be concerned about being unable to utilize precisely the statistical
technique that best suits his or her purpose, and to do so with the kind of speed and economy that was inconceivable just two decades
ago. The danger is that some investigators may be tempted to employ after-the-fact statistical manipulations to salvage a study that
was flawed to start with, or to extract significant findings through use of progressively more sophisticated multivariate techniques.

References & Bibliography

1. Ahuja R (2001). Research Methods. Rawat Publications, New Delhi. 71-72.
2. Brockopp D Y & Hastings-Tolsma M (2003). Fundamentals of Nursing Research, 3rd Edition. Jones and Bartlett, Boston.
3. Cozby P C (2000). Methods in Behavioral Research, 7th Edition. Mayfield Publishing Co., Toronto.
4. Kerr A W, Hall H K, Kozub S A (2002). Doing Statistics with SPSS. Sage Publications, London.
5. Einspruch E L (1998). An Introductory Guide to SPSS for Windows. Sage Publications, Calif.
6. Fowler J, Jarvis P & Chevannes M (2002). Practical Statistics for Nursing and Health Care. John Wiley & Sons, England.
7. Guilford J P (1956). Fundamental Statistics in Psychology and Education. McGraw-Hill Book Co., New York.
8. Lindquist E F (1968). Statistical Analysis in Educational Research. Oxford and IBH Publishing Co., New Delhi.
9. Singh A K (2002). Tests, Measurements and Research Methods in Behavioural Sciences. Bharati Bhavan, New Delhi.
10. Singleton, Royce A. & Straits, Bruce (1999). Approaches to Social Research, 3rd Edition. Oxford University Press, New York.
11. Streiner D & Norman G (1996). PDQ Epidemiology, 2nd Edition. Mosby, St. Louis.
12. Baker, Therese L (1988). Doing Social Research. McGraw Hill Book Co., New York.
13. Treece E W & Treece J H (1989). Elements of Research in Nursing. The C.V. Mosby Co., St. Louis.
14. Tyler L E (1963). Tests and Measurements. Prentice Hall, Englewood Cliffs, New Jersey.
15. Chalmers T C, Celano P, Sacks H, Smith H (1983). Bias in treatment assignment in controlled clinical trials. N Engl J Med 309:1358.
16. Cohen J (1988). Statistical Power Analysis for the Behavioral Sciences. Erlbaum, Hillsdale, NJ.
17. Cook T D, Campbell D G (1979). Quasi-experimentation: Design and Analysis Issues for Field Studies. Rand-McNally, Chicago.
18. Daniel W W (1995). Biostatistics: A Foundation for Analysis in the Health Sciences, ed 6. Wiley, New York.
19. Daniel W W (1990). Applied Nonparametric Statistics, ed 2. PWS-Kent, Boston.
20. Dawson-Saunders B, Trapp R G (1994). Basic and Clinical Biostatistics, ed 2. Appleton & Lange, Norwalk, CT.
21. Edwards L K, editor (1993). Applied Analysis of Variance in Behavioral Science. Marcel Dekker, New York.
22. Efron B, Tibshirani R (1991). Statistical data analysis in the computer age. Science 253:390.
23. Jaccard J, Becker M A (1997). Statistics for the Behavioral Sciences, ed 3. Brooks/Cole Publishing Co., Pacific Grove, CA.
24. Keppel G (1991). Design and Analysis. Prentice-Hall, Englewood Cliffs, NJ.
25. Kaplan R M, Grant I (2000). Statistics and Experimental Design. In Kaplan & Sadock's Comprehensive Textbook of Psychiatry, 7th Edition.
26. McCall R (1994). Fundamental Statistics for Psychology, ed 6. Harcourt Brace & Jovanovich, New York.
27. Pett M A (1997). Nonparametric Statistics for Health Care Research: Statistics for Small Samples and Unusual Distributions. Sage Publications, Thousand Oaks, CA.
28. Sacks H, Chalmers D C, Smith H (1982). Randomized versus historical controls for clinical trials. Am J Med 72:233.
29. Ware M E, Brewer C L, editors (1999). Handbook for Teaching Statistics and Research Methods, ed 2. Erlbaum, Mahwah, NJ.

REPORTING AND COMMUNICATION OF RESEARCH FINDINGS

Research Report

The purpose of writing a research report is to document the research findings, to share the results with other interested groups, and to
apply the results in practice. It is a challenging job and requires imagination, creativity, and resourcefulness. The research report aims
at telling the readers the problem identified and investigated, the methods adopted, the results found, and the conclusions reached. The
highest standard of correct usage of words and sentences is expected. The outcome of the study should be presented in a way that the
consumer can understand the findings.

The results can be presented through the written word or through various kinds of pictorial displays. Graphs and tables are the two
common methods of communicating results. Graphs are generally used to describe the data in question, and tables are used to
summarize the findings (Brockopp & Hastings-Tolsma, 2003). Criteria for evaluating the effectiveness of both graphs and tables
include the clarity of the presentation, its conciseness, and its adequacy in conveying appropriate information (Wilson, 1987, p. 295).
Bar graphs, histograms, frequency polygons, pie diagrams, and pictorial charts are the common methods of displaying results
diagrammatically.
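
As a simple illustration, the short Python sketch below (using the matplotlib library) draws a bar graph of findings; the group labels
and mean scores are invented for demonstration only.

    # Minimal sketch: displaying hypothetical results as a bar graph.
    import matplotlib.pyplot as plt

    groups = ["Control", "Intervention"]   # hypothetical study groups
    mean_scores = [62.0, 74.5]             # hypothetical mean outcome scores

    plt.bar(groups, mean_scores)
    plt.ylabel("Mean outcome score")
    plt.title("Hypothetical results by study group")
    plt.savefig("results_bar.png")         # save the figure for the report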

Tables are generally used to summarize the meaningful results of a study. They should be numbered in sequence and referred to
within the text. Tables should be accompanied by a factual, precise description of their meaning. Scientific writing is the presentation
of a set of reasons in support of a thesis, or proposition. The format suggested by the Publication Manual of the American
Psychological Association can be consulted for detailed matters of style. A scientific report requires the same attention to good
writing as does any other form of written persuasion. Key concepts are clarity, brevity, and felicity. Authors should be careful to avoid
sexism and ethnic bias.

References are cited in the text by author name and date of publication. The Harvard style and the Vancouver style are the commonly
used methods of writing references. The reference list contains an entry for each work cited in the text, and no others. The parts of a
paper are:

(1) title,
(2) authors and their affiliations,
(3) abstract,
(4) introduction,
(5) method,
(6) results,
(7) discussion,
(8) references,
(9) footnotes,
(10) tables,
(11) figure captions, and
(12) figures.

The title should convey the main idea of the paper in a few words. The authors of the paper are listed in the order of the importance of
their contributions. The abstract is a brief summary of the paper and includes elements from the introduction, method, results, and
discussion sections. The introduction states the general problem the paper deals with, discusses the relevant literature, and states what
the paper will contribute to the understanding of the problem. The method section tells what you did in the experiment in such a way
that another person can evaluate the validity of the conclusions of the study and can repeat it in all essentials.

The method section describes the subjects, apparatus, design, and procedure. The results section describes the results and their
statistical analysis. Graphs and tables are described here. The discussion section interprets the results and relates them to the literature.
It states the contributions that the study makes to the understanding of the problem posed in the introduction, and it deals with any
weakness in the data or any qualifications of the conclusions.

Communicating Research Results

Scientific communication takes place in many ways, including archival publication in scholarly journals and informal communication
among groups of scientists, known as invisible colleges. Research outcomes need to be shared with other professionals, regardless of
the study's findings. The investigator can present the findings in an oral format (conference presentations) or a written format
(journals or scientific publications). Nursing is a relatively new profession, and its body of knowledge needs to be developed.
Publication of research findings in international journals makes the findings of a study available to professionals of other countries.

The investigator should decide the appropriate format for presenting the findings. The steps in the publication process include
choosing the journal, submitting the final manuscript along with a cover letter, revising the paper to account for reviewers' comments,
resubmitting the paper, reviewing the copyedited manuscript, and reading the page proofs. Oral presentations include most of the
elements of the written paper in a specified format.

Practicing the talk before a sympathetic audience, preparing good visual aids, and speaking from an outline rather than reading the
paper directly are keys to a good presentation. Poster presentations are an increasingly popular form of communicating results at
scientific meetings. The various parts of the paper are placed on a vertical surface in such a way that they can be read from a distance
of several feet. The author remains near the poster to discuss the results with passersby.

Research Utilization

Nursing research contributes positively to the health care system. Research utilization is the process of transferring research
knowledge into practice, thus facilitating an innovative change in practice or the verification of existing practice protocols.
Knowledge about published materials and what other people have tried is vital when exploring solutions to a problem. It is the
professional responsibility of nurses to determine what constitutes best practice, under what conditions, and in which circumstances.

To enhance the integration of research and practice, nurses must have an organizational environment in which inquiry and critical
thinking are valued. Research utilization helps improve nursing practice by providing answers to clinical questions, evaluating the
effectiveness of nursing actions, testing theories relevant to nursing practice, and expanding nursing knowledge (Lanuza, 1999).

WRITING RESEARCH PROPOSALS


Research Proposal

The writing of a research proposal is an important aspect of the research process. A research proposal is a detailed plan of the research
to be conducted. A written research proposal follows the general format of a journal article, with the following nine general sections:

1. Problem
2. Definitions, assumptions, limitations or delimitations
3. Review of related literature
4. Hypothesis
5. Methods
6. Time schedule
7. Expected results
8. References
9. Appendix

NURSING RESEARCH TERMINOLOGY

1. Feasibility

Feasibility of a study refers to the ease with which the particular study can be completed.

2. Purpose of the study

The purpose of the study describes why the study has been designed. The purpose reflects the intent of the investigator and use of the
knowledge derived.

3. Theory

A theory is composed of specific concepts and propositions that attempt to account for a particular notion observed in the real world.
Theory assumes that a particular conceptual model is utilized. The purpose of using theory is to describe a notion, to explain an idea,
or to predict what might be observed.

4. Propositions

Propositions are statements that suggest a specific relationship between two or more concepts. A proposition may take the form of an
axiom or a theorem. An axiom is a statement that links the concepts of a theory; the links or relationships between concepts are
assumed to be true. A theorem is a statement that designates a relationship between concepts that is deduced from relationships
already formed by axioms.

5. Construct

A construct reflects the specific, potentially observable characteristics of a concept and thus facilitates testing of the idea.

6. Variable

A variable is a concept (construct) that has been so specifically defined that precise observations and therefore measurement can be
accomplished.

7. Deductive Reasoning

Deductive reasoning is a method of thinking that begins with a general statement of belief and moves toward specific observations.
Reasoning moves from the general to the specific.

8. Inductive Reasoning

Inductive reasoning involves the collection of observations related to a particular event. From these observations, a theory or general
explanation regarding the event can evolve. Reasoning moves from the specific to the general.

9. Bias
Bias is a feeling or influence that strongly favors a particular outcome or finding in a research project. When the chance of bias is not
addressed, the reliability of the scientific findings is considered to be highly questionable.

10. The Problem Statement

The problem statement presents the topic under study, provides a rationale for the choice of topic, represents a synthesis of fact and
theory, and directs the selection of the design.

11. Qualitative and quantitative variables

A quantitative variable is one whose values or categories consist of numbers and whose differences between categories can be
expressed numerically (age, income, size, etc.). A qualitative variable is one that consists of discrete categories rather than numerical
values (sex, religion, etc.). Relationships among quantitative variables may be either positive or negative (Singleton and Straits,
1999:76). A positive relationship exists if an increase in the value of one variable is accompanied by an increase in the value of the
other, or a decrease in one is accompanied by a decrease in the other. A negative relationship exists if a decrease in the value of one
variable is accompanied by an increase in the value of the other.
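
As a small illustration, the sketch below (Python with scipy; all values are hypothetical) shows how the sign of a computed
correlation coefficient reflects the direction of the relationship:

    # Tiny sketch: the sign of a correlation shows the relationship's direction.
    from scipy import stats

    age     = [25, 35, 45, 55, 65]
    income  = [20, 28, 35, 41, 50]   # rises with age -> positive relationship
    stamina = [50, 44, 37, 30, 22]   # falls with age -> negative relationship

    print(stats.pearsonr(age, income)[0] > 0)   # True: positive correlation
    print(stats.pearsonr(age, stamina)[0] < 0)  # True: negative correlation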

12. Scholarly publications

Scholarly publications are the documents that serve to communicate to other professionals the methods and achievements produced
through academic study and research investigation. Scholarly publications are used to disseminate scholarly work within a discipline,
which is crucial for the growth of its members.

13. Delimitation and Limitation

Delimitations indicate the cut-off points beyond which the researcher does not intend to probe. They include those restrictions the
researcher placed on the study prior to gathering data. Delimitations are considered at every decision point during the planning stage.
Limitations indicate the weaknesses of the entire study, as the researcher perceives them. Delimitations are set during the planning
stage, whereas limitations are experienced during the implementation stage, and these uncontrollable elements are reported in the
research report.

14. Dependent and Independent variables

The independent variable (often referred to in an experimental or quasi-experimental study as the experimental or treatment variable)
is an antecedent to other variables. In an experiment or quasi-experiment, it is the variable that is manipulated, and its effect on the
dependent variable is observed. The dependent variable represents the area of interest under investigation. It reflects the effect of or
the response to the independent variable.

15. Defining Terms

There are two types of definitions: the conceptual (dictionary) definition and the operational definition. An operational definition
assigns meaning to a variable and describes the activities required to measure it.

16. Operational Definition

Operational definition of variables refers to defining the terms used in the study in such a way that the variables are expressed in
measurable and quantifiable terms.

17. Probability Sampling

Probability sampling is the process of selecting samples based on probability theory, so that each element of the population has a
known, non-zero chance of being selected.
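
A minimal sketch of simple random sampling, the most basic form of probability sampling, is shown below (Python standard library;
the population size and sample size are hypothetical):

    # Minimal sketch: simple random sampling from a hypothetical frame.
    import random

    population = list(range(1, 501))        # hypothetical frame of N = 500
    random.seed(42)                         # for a reproducible illustration
    sample = random.sample(population, 50)  # draw n = 50 without replacement

    # Each element's chance of selection is known: n / N = 50/500 = 0.1
    print(len(sample), sample[:5])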

18. Population

The population is the entire group of persons or objects that is of interest to the investigator.

19. Sample
The sample is a subset of the population selected by the investigator to participate in a research study.

20. Validity in Relation to Research Design

There can be two kinds of validity related to research design: internal and external validity (Brockopp & Hastings-Tolsma, 2003).
Internal validity refers to whether the independent variable actually made a difference and the results are not due to extraneous
factors. External validity refers to the extent to which the results of the study can be generalized to the larger population.

21. Meta-analysis

Meta-analysis is a technique in which the findings from several small clinical trials are analyzed together. Although the findings from
each study alone may not be powerful enough to allow for decisions affecting clinical practice, when analyzed together the findings
may be much more useful. Meta-analysis is a statistical procedure that compares similar studies to determine the readiness of the
outcomes for implementation in clinical practice (Massay & Loomis, 1988).
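
A minimal sketch of one common pooling approach, fixed-effect inverse-variance weighting, appears below. The effect sizes and
standard errors are hypothetical, and real meta-analyses involve further steps (study selection, heterogeneity testing) not shown here.

    # Minimal sketch: fixed-effect, inverse-variance pooling of effect sizes
    # from several small (hypothetical) trials.
    import math

    effects = [0.40, 0.25, 0.55]   # hypothetical effect estimates per study
    ses     = [0.20, 0.15, 0.30]   # their standard errors

    weights = [1.0 / se ** 2 for se in ses]    # weight each study by 1 / SE^2
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))

    print("Pooled effect = %.3f (SE = %.3f)" % (pooled, pooled_se))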

22. Incidence

Incidence is a mathematical reflection of the number of cases of a health problem in a given population. The term incidence describes
the number of new cases within a specific time period.

23. Prevalence

Prevalence is a mathematical reflection of the number of cases of a health problem in a given population. The term prevalence
describes all cases of a health problem in a given population.
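
A small worked example (hypothetical figures) may make the distinction between the two terms concrete:

    # Worked example with hypothetical figures: incidence vs. prevalence.
    population = 10_000        # people at risk in the community
    new_cases = 50             # cases arising during one year
    existing_cases = 200       # all cases present at a point in time

    incidence = new_cases / population        # 0.005 = 5 per 1,000 per year
    prevalence = existing_cases / population  # 0.02  = 20 per 1,000

    print("Incidence  = %.1f per 1,000 per year" % (incidence * 1000))
    print("Prevalence = %.1f per 1,000" % (prevalence * 1000))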

References & Bibliography

1. Ahuja, R. (2001). Research Methods. Rawat Publications, New Delhi. pp. 71-72.
2. Ahuja, N. (2002). A Short Textbook of Psychiatry (5th Edition). Jaypee Publications, New Delhi. p. 241.
3. Ameen, S. & Nizamie, S.H. (2004). The internet revolution: implications for mental health professionals. Indian Journal of Social Psychiatry, 20(1-4), 16-26.
4. Ashworth, P.D. (1997). The variety of qualitative research (Part 2: Non-positivistic approaches). Nurse Education Today, 17(3), 219-224.
5. Bradford-Hill, A. (1971). Principles of Medical Statistics (9th Edition). Oxford University Press, New York. pp. 309-323.
6. Brockopp, D.Y. & Hastings-Tolsma, M. (2003). Fundamentals of Nursing Research (3rd Edition). Jones and Bartlett, Boston.
7. Cozby, P.C. (2000). Methods in Behavioral Research (7th Edition). Mayfield Publishing Co., Toronto.
8. Fowler, J., Jarvis, P. & Chevannes, M. (2002). Practical Statistics for Nursing and Health Care. John Wiley & Sons, England.
9. Lanuza, D.M. (1999). Research and practice. In Mathew, M.A. & Kirchoff, K.T. (Eds.), Using and Conducting Nursing Research in the Clinical Setting. p. 11.
10. Lindquist, E.F. (1968). Statistical Analysis in Educational Research. Oxford and IBH Publishing Co., New Delhi.
11. LoBiondo-Wood, G. & Haber, J. (1997). Nursing Research: Methods, Critical Appraisal and Utilization (3rd Edition). Mosby, Boston.
12. Massay, J. & Loomis, M. (1988). When should nurses use research findings? Applied Nursing Research, 1, 32-40.
13. Polit, D.F. & Hungler, B.P. (1995). Nursing Research: Principles and Methods (5th Edition). J.B. Lippincott, Philadelphia.
14. Polit, D.F. & Hungler, B.P. (1999). Nursing Research: Principles and Methods (6th Edition). J.B. Lippincott, Philadelphia.
15. Reichardt, C. & Cook, T. (Eds.) (1979). Qualitative and Quantitative Methods in Evaluation Research. Sage, Beverly Hills, CA.
16. Schotfetdt, R.M. (1977). Nursing research: Reflection of values. Nursing Research, 26(1), 4-8.
17. Sinclair, V. (1987). Literature searches by computer. Image: Journal of Nursing Scholarship, 19, 35-37.
18. Singh, A.K. (2002). Tests, Measurements and Research Methods in Behavioural Sciences. Bharati Bhawan, New Delhi.
19. Singleton, R.A. & Straits, B. (1999). Approaches to Social Research (3rd Edition). Oxford University Press, New York.
20. Streiner, D. & Norman, G. (1996). PDQ Epidemiology (2nd Edition). Mosby, St. Louis.
21. Baker, T.L. (1988). Doing Social Research. McGraw-Hill Book Co., New York.
22. Treece, E.W. & Treece, J.H. (1989). Elements of Research in Nursing. The C.V. Mosby Co., St. Louis.
23. Wilson, H. (1987). Introducing Research in Nursing. Addison-Wesley, Menlo Park, CA.
