
How to Critique and Analyze a Quantitative Research Report

Author: EMS Village Staff

To perform a truly objective critique of a quantitative research report, the entire study must be read and the following types of questions considered.
Theoretical/Conceptual Framework:
1. Is a conceptual framework described? If not, does the absence detract from the significance of the research?
2. Are the concepts to be studied identified and defined?
3. Are measures for each of the concepts identified and described?
4. Does the research problem flow naturally from the conceptual framework?
Protection of Human Rights:
1. Is there evidence of an independent ethics review by a board or a committee?
2. Has the study been designed to minimize risk and maximize benefits to participants?
3. Is there an indication that participants gave voluntary, informed consent?
4. Is there evidence in the study that individuals can be identified?
The Research Problem
1. Was the problem statement introduced promptly?
2. Is the problem significant to nursing, and is the significance described?
3. Has the purpose for conducting the research been explained?
4. What are the research variables, and are they explained?
5. Will an answer to the problem provide insight into its clinical applicability?
Research Questions/Hypotheses
1. Are research questions or hypotheses formally stated? If not, should they be included?
2. Do the research questions and hypotheses flow naturally from the research problem and theoretical framework?
3. Does each research question or hypothesis contain at least two variables?
4. Are the research questions or hypotheses worded clearly and objectively? Is a prediction evident?
Review of the Literature
1. Is the review comprehensive, logical and relevant to the problem?
2. Is the relationship to the research purpose evident?
3. Does it include recent research and theoretical work?
4. Can a case be made for conducting this study based on the review?
Research Design
1. What design has been used for the study?
2. Is the design appropriate for the research question and the purpose of the research?
3. Has enough information been given to permit replication?
Sampling
1. Is the target population carefully described?
2. Are sample selection procedures clearly defined?
3. Does the sampling method fit the research design?
4. Are potential sample biases described?
5. Is the sample sufficiently large? How has size been justified?
6. To whom can study results be generalized?
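Item 5 above asks how sample size has been justified; in quantitative reports this is usually done with a power calculation. As an illustrative sketch only (the formula is the standard two-group comparison of means, and all numbers are hypothetical), the required per-group size can be computed in Python:

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_means(sigma, delta, alpha=0.05, power=0.80):
    """Per-group n for detecting a mean difference `delta` between two
    groups with common standard deviation `sigma` (two-sided test)."""
    z = NormalDist()                     # standard normal distribution
    z_alpha = z.inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # about 0.84 for 80% power
    n = 2 * ((z_alpha + z_beta) ** 2) * sigma ** 2 / delta ** 2
    return ceil(n)                       # round up to a whole participant

# Hypothetical example: SD = 10 mmHg, clinically important difference = 5 mmHg
print(sample_size_two_means(sigma=10, delta=5))  # 63 per group
```

With alpha = 0.05 and 80% power the familiar multipliers 1.96 and 0.84 appear, giving 63 participants per group for these hypothetical values; a report justifying its sample size should state the assumed effect size, variability, alpha, and power in just this way.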
Data Collection
1. Describe the instruments used for data collection.
2. Has rationale been given for the selection of instruments?
3. Are instruments congruent with the research question?
4. Are instruments suitable for use with the study sample?
5. Have procedures for testing the reliability and validity of the instruments been described? Do the results support their use?
Quantitative Analysis
1. Do the research design and the study questions fit with the analysis methods used?
2. Does the level of measurement of the data fit with the type of statistics used?
3. Is the link between the analysis and the findings logical and clear?
4. Are the statistical results presented clearly, both in the text and in numerical form?
5. Are graphic displays clear, simple, and accurate?
Conclusions and Recommendations
1. What are the assumptions and limitations of the study? Are they listed, or do you have to infer them?
2. Are the results of data analysis clearly explained in reference to the research questions, hypotheses and theoretical framework?
3. Have significant findings been appropriately generalized beyond the study sample to the population?
4. What recommendations for nursing practice and future research have been made? Are these recommendations supported by the data?

Understanding and critiquing qualitative research papers
18 July, 2006

The first article in this series on understanding research (Lee, 2006a) examined the basic terminology used by
researchers and identified that qualitative research produced non-numerical (qualitative) data. This type of research
aims to report a situation as it actually is in a natural rather than a laboratory setting.
Qualitative researchers justify this approach by suggesting that it is not possible to separate the context or setting in which the
phenomenon occurs from the phenomenon itself (Morse and Field, 1996).
VOL: 102, ISSUE: 29, PAGE NO: 30
Polly Lee, MSc, BA, RSCN, RGN, RM, DipN, ILTM, is lecturer in child health nursing, City University, London

Understanding qualitative research
Some of the terminology that relates to qualitative and quantitative research and how these relate to different worldviews
(paradigms) was introduced in the first article in this series. The notion of qualitative (non-numerical) and quantitative (numerical)
data was also introduced. Readers of qualitative research need a sound understanding of the terminology specific to this type of
research to make full sense of the report.
Within qualitative research there are different approaches (methodologies). These may be more easily understood by returning to
the example used in the first article of the series. This discussed a patient being asked about their experiences of receiving different treatments for hypertension.
A researcher undertaking a study on this topic may: follow a particular ethnic group over a prolonged period of time - in the past
this would have involved living among that population (this is known as ethnography); explore what patients understand about the
different treatments and build a theory as the research progresses (grounded theory); or explore the lived experience of
hypertension (phenomenology). These are the most common approaches used in qualitative research.

Critiquing frameworks
There are several frameworks for critiquing research, some of which have been constructed to critique or evaluate both qualitative
and quantitative studies. Others, however, have been constructed to critique only one of these. Box 1 shows some of the commonly
utilised frameworks within the nursing literature.

This article focuses on one framework designed for critiquing research. The Critical Appraisal Skills Programme (CASP) framework
has been chosen as it has separate frameworks for qualitative and quantitative research. It is also available on the internet, with
more detailed questions than can be discussed within this article (www.phru.nhs.uk/casp/critical_appraisal_tools.htm).
The 10 main questions that the CASP asks of qualitative research are listed in Box 2.
- Is there a clear statement of the aims of the research?
Qualitative research needs to answer questions set by the researcher (there is no hypothesis). The intended aim(s) of the research
should therefore be stated and the questions the research seeks to address should be identified.
- Is a qualitative methodology appropriate?
Some research questions are best addressed by qualitative enquiry and others by quantitative enquiry. Returning to the example of
hypertension, a researcher who chooses to ask patients about their experiences of receiving different treatments for hypertension is
clearly seeking to use a qualitative paradigm, as the patients' thoughts and feelings are being considered, so qualitative (non-numerical) data will be collected. It would not be possible to use the quantitative paradigm and collect numerical data for this.
- Was the research design appropriate to address the aims of the research?
The research approach normally influences its design. That is, the research design tells readers how the researcher actually
implemented the research approach.
- Was the recruitment strategy appropriate to the aims of the research?
Recruitment to any research project is an important consideration but qualitative researchers often want to obtain the thoughts or
opinions of a specific group of people who have experienced a phenomenon. It is more important therefore for qualitative
researchers to ensure that participants have experienced the phenomenon rather than randomly selecting people who may not be
able to answer the questions.
- Was data collected in a way that addressed the research issue?
Within qualitative research the most common methods of data collection are interview, questionnaire or observation. These different
methods are used to obtain slightly different data. For example, it is not possible to obtain patients' thoughts and feelings about their different treatments for hypertension by observing them. The method of data collection should therefore address the research issue/question.
- Has the relationship between researcher and participants been adequately considered?
Within qualitative research the researcher and participant often have a close relationship, especially if the researcher returns to the
participant on several occasions over a period of time - which may even run into many months. Researchers must be careful not to
introduce bias by accidentally reporting their interpretation of participants' feelings. Within some qualitative approaches
(phenomenology) researchers must separate out (bracket) and declare their feelings at the beginning of the research project.
- Have ethical issues been taken into consideration?
Ethical issues are important and should be considered at every step of the research process. This is not just about obtaining ethical
approval for a study but also ensuring the rights of participants are not violated. When reporting qualitative research, participants' anonymity and confidentiality must not be breached. The Central Office for Research Ethics Committees (www.corec.org.uk) offers
guidance for researchers planning to undertake any research within the NHS.
- Was the data analysis sufficiently rigorous?
Data analysis is both time-consuming and rigorous within qualitative research. Although it is easy for qualitative researchers to make assumptions and want to interpret the data quickly, they really need to be immersed in the data over a period of time as the first stage of data analysis.

- Was there a clear statement of findings?
Once the data has been analysed thoroughly, the findings should be clearly displayed. Although there are many ways to analyse
qualitative data, researchers normally organise it into common groups/topics (themes). These should be examined within the
research report with examples (quotes) from each theme being given.
- How valuable is the research?
The majority of research is never published even though it may have the potential to make a valuable contribution to the
development of nursing knowledge. Caution should still be exercised as to whether published research merely repeats previous research, whether it is well conducted (hence the need for a critiquing framework), and whether it adds to the body of nursing knowledge.

What are the critiquing frameworks for qualitative research trying to do?
The questions in frameworks for critiquing qualitative research tend to be sequential. It is vital for readers to understand the aims
and questions of the research in order to answer sections of the framework. Critiquing frameworks enable readers to make a
judgement regarding the soundness of the research. While it is possible to critique a piece of research without them, frameworks
serve as useful aides-memoires for those who are not used to critiquing research.

What do academic journals expect?
Many of the issues affecting quantitative research reported in the second article in this series (Lee, 2006b) also apply to qualitative
research, such as restrictions on word limits. This makes it important for readers to examine author guidelines for the particular
journal.
It is clearly not possible to cover every aspect of a study in a journal report. Readers should therefore be careful before boldly
stating that a researcher did not consider a certain aspect, since it may have been discussed at length in the original (unpublished)
research report. By reading the author guidelines of professional journals readers are better able to determine what authors can
include (and therefore by implication exclude), and then relate this to the critiquing framework outlined above.
The terminology used in some professional journals would require inexperienced readers to explore meanings further. For example,
the term 'constant comparative analysis' would be familiar to qualitative researchers - particularly those who use the grounded theory approach (methodology) - even though there may not be space within the journal to explain why this is unique to one
particular research approach. Once data analysis has been completed, then qualitative researchers discuss their findings, although
generalisability is not normally possible with qualitative research.
Nursing journals have different expectations regarding how much discussion should be included regarding the trustworthiness of
qualitative research. In fact not all qualitative researchers report these ideas in the same way - some use the notion of
trustworthiness, whereas others use the terms validity and reliability, which are actually related more to quantitative research.
Finally, readers should consider implications of the research for nursing and midwifery. While these implications can relate to the
practice, education, research and management of nursing and midwifery, not all research reports detail the implications for all
aspects of the professions. Indeed, the word allowance in many health journals will only allow for detailed discussion of a few
implications.

How qualitative research assists practitioners
Careful examination of qualitative research gives practitioners a better understanding of how a group of people view or understand
a particular situation. It can therefore enable individual practitioners to enhance their practice and contribute to evidence-based
practice.

Conclusion
This series outlines the two main approaches to research (recognising that there are other more specialised approaches) and
explains how to read and critique qualitative and quantitative research (see last week's issue for part two in the series). The
development of such skills should assist pre-registration students with relevant assignments. They should also help practitioners to
determine if a piece of research is relevant and suitable to be implemented in their practice. The skills outlined in this series are
also essential prerequisites for those intending to undertake a critical review of literature, begin their own programme of research,
or undertake systematic reviews of research.
As practitioners gain a deeper understanding of critiquing a single piece of research, they should consider critiquing several
research studies on a particular topic, searching for common themes. They could then write a critical review of the literature on that
chosen topic.

Learning objectives
Each week Nursing Times publishes a guided learning article with reflection points to help you with your CPD. After reading the
article you should be able to:

Understand the nature and purpose of qualitative research;
Know the role of critiquing frameworks;
Understand what these frameworks aim to achieve;
Be familiar with how qualitative research can help practitioners.

Guided reflection
Use the following points to write a reflection for your PREP portfolio:

Outline where you work and the relevance of this article to your practice;
Identify the last time you came across a piece of qualitative research;
Discuss something new you have learnt about qualitative research in this article;
Explain how this information could have informed your care of a patient;
Outline how you intend to disseminate what you have learnt among your colleagues.

Tips on How to Critique a Research Paper
How do you critique a research paper? Many students need to know the answer. This article explains what a research paper critique is and how to critique a research paper.
1. First of all, a research paper critique includes evaluation of the hypothesis. The hypothesis (or proposal) has to be stated clearly, and it has to be closely related to the topic of the research paper.
2. Secondly, a research paper critique will focus on the introduction. It should help readers understand what is studied and why this particular topic was chosen. It also has to show the direction of the research, so that readers can decide whether they are interested in reading further.
3. Thirdly, a research paper critique will try to determine whether the method is clearly stated.
4. Then comes the research itself. Critiquing a research paper presupposes a detailed study of it, and many points will be examined, so you have to be ready to answer different questions; that is why revising a research paper is necessary. For example, critics will try to find out whether the data are connected with your hypothesis, whether the conclusion follows, and whether your research paper answers the question stated in the introduction. You should be ready to answer any kind of question and to prove all the facts and statements.
Practical Tips in Starting a Journal Club
Journal Club Definition: A group of individuals that meet to discuss and critique research that appears in
professional journals.
A. Identify Purpose & Goals
1. Generally the purpose is to generate questions & disseminate knowledge
2. Potential goals: Improve critical literature appraisal skills, to discuss controversies, to improve clinical practice,
to generate ideas for future research
B. Designate a Format (i.e., which staff are targeted for participation)
1. Unit-based (within one specific nursing unit)
2. Hospital-based (all nurses within a facility)
3. Multidisciplinary (open to other disciplines such as Respiratory Therapy, Pharmacy)
4. Online/Internet (the institution's Informatics department may need to help set this up, if feasible)
5. Formal versus Informal (in the informal format, members do not follow a critique checklist)
C. Choose a Design (what to present at the meeting)
1. One article (most common & easiest to conduct)
a. Identify the audience; if the one-article design is chosen, select a study that will appeal to the group
2. One topic (examine several research studies on a single topic; requires expertise to critique)
3. One journal (review all articles within a single journal; NOTE: these may not all be research articles)
D. Enlist Nursing Leadership Support
1. Seek support not only for the concept but also for attendance & the ability of staff to leave the bedside
2. Financial assistance for snacks/meals
3. Determine if nursing CEs may be awarded by working with Staff Education
E. Designate a Leader
1. Person must be dedicated to the journal club concept & have a basic knowledge
2. Options: APN, Educator, Nursing Manager
3. Leader's responsibilities:
a. Schedule meetings
b. Disseminate article to be read
c. Develop discussion questions in advance
d. May also be discussion leader or rotate that assignment to interested members
F. Identify Length of Meeting/Location/Frequency/Schedule
1. Length: 30 to 60 minutes (do not make the meeting longer than 60 minutes, or participants lose interest)
2. Location: make it convenient for the nursing staff
3. Frequency: varies & depends upon resources (monthly/bimonthly/quarterly, on a repeating schedule)
4. Schedule: consider lunch-and-learn or breakfast meetings; survey staff
G. Meeting Structure (running the Journal Club meetings)
1. First meeting
a. Establish purpose/goals; choose format/design/length/frequency of meetings
b. Determine discussion leader requirements (present the article & lead the critique)
c. Determine participant requirements (example: reading the article before the meeting)
d. Perform a mock critique
2. Incorporate brief sessions at the beginning of the first couple of meetings that cover:
a. What is included in a review of literature
b. Different types of study designs and what each means (quasi-experimental, descriptive)
c. The p value and its meaning in statistics
3. All meetings
a. Discuss & critique the article; identify implications for nursing
b. Evaluate each meeting
c. Identify topics for future review
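The suggested mini-session on the p value (item 2.c above) can be made concrete with a hands-on demonstration: a p value is the probability, assuming there is no real group difference, of observing a difference at least as extreme as the one seen. A minimal permutation-test sketch in Python, using made-up symptom scores, illustrates the idea:

```python
import random

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=42):
    """Two-sided permutation test for a difference in means.

    Shuffles the group labels repeatedly; the p value is the fraction of
    shuffles whose mean difference is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        mean_a = sum(pooled[:n_a]) / n_a
        mean_b = sum(pooled[n_a:]) / (len(pooled) - n_a)
        if abs(mean_a - mean_b) >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical data: symptom scores for control vs. intervention groups
control = [7, 6, 8, 7, 9, 6, 7, 8]
intervention = [5, 4, 6, 5, 4, 6, 5, 5]
print(permutation_p_value(control, intervention))  # small p: unlikely by chance
```

Running the demonstration in a meeting shows participants that a small p value simply means the observed difference would rarely arise by random relabelling alone, which is the intuition behind the significance tests reported in most quantitative articles.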
H. Other Potential Journal Club Activities
1. Use a debate-team format during study critique
2. Writing a letter to the editor regarding a study
3. Consider replicating a study
I. Develop a Standard Discussion/Review Critique Checklist (allows for consistency; see the example below)
Journal Club Article Discussion Review Critique Checklist Example
The overall goals of a research critique are to formulate a general evaluation of the merits of a study and to evaluate its applicability to clinical practice.
General targeted areas when critiquing a research article:
The introduction and background information: Is the problem statement/introduction clearly described? Is it relevant to the clinical topic selected? What are the objectives or aims of the research article?

The presentation of the article: Is the research question or hypothesis clear? In the literature review: is it informative, is it research-based, and does it support the purpose/problem? Are the references current and from respected sources?
What study design and methods are used to collect the data? What are the sample size and characteristics? What statistics are utilized, and are they appropriate?
What are the results & conclusions drawn by the author? Are there any implications for clinical practice? Can the conclusions be generalized to various settings and populations of people?
A. Description of the Study
What was the purpose of the research?
Why is the research being conducted & why is it considered significant/important?
Were the research questions, objectives or hypothesis(es) clearly stated?
B. Literature Evaluation
Does the literature review seem thorough & recent (within the last 5 years)?
Does the content of the literature review relate directly to the research problem?
C. Conceptual Framework
Does the research use a theoretical or conceptual model?
Does the model guide the research and seem appropriate?
D. Sample
Who were the subjects?
Were the inclusion/exclusion criteria specified?
How representative is the sample?
Was there any selection bias evident in the sample selection?
E. Method and Design
Describe the study design; is it appropriate?
How was the research conducted (the study procedure itself) & data collected?
Were the subjects' rights protected?
Was IRB approval obtained?
F. Analysis
How were the data analyzed?
Do the selected statistical tests appear appropriate?
Were the results significant?
G. Results
What were the findings of the study?
Are the results presented in a clear and understandable way?
How did the authors interpret the results?
Were there any study limitations discussed?
H. Clinical Significance
What were the implications of this study to clinical nursing practice?
How does the study contribute to the body of knowledge?
Could the study be replicated?
What additional questions does the study raise?
SVN Copyright 2008 - All rights reserved.
Writing a Critical Review
The advice in this brochure is a general guide only. We strongly recommend that you also follow
your assignment instructions and seek clarification from your lecturer/tutor if needed.
Purpose of a Critical Review
The critical review is a writing task that asks you to summarise and evaluate a text. The critical review can be of a book, a chapter, or a journal article. Writing the critical review usually requires you to read the selected text in detail and to also read other related texts so that you can present a fair and reasonable evaluation of the selected text.
What is meant by critical?
At university, to be critical does not mean to criticise in a negative manner. Rather it requires you to question the
information and opinions in a text and present your evaluation or judgement of the text. To do this well, you should attempt to understand the topic from different perspectives (i.e. read related texts) and in relation to the theories, approaches and frameworks in your course.
What is meant by evaluation or judgement?
Here you decide the strengths and weaknesses of a text. This is usually based on specific criteria. Evaluating requires an understanding of not just the content of the text, but also of a text's purpose, the intended audience, and why it is structured the way it is.
What is meant by analysis?
Analysing requires separating the content and concepts of a text into their
main components and then understanding how these interrelate, connect
and possibly influence each other.
Structure of a Critical Review
Critical reviews, both short (one page) and long (four pages), usually have a similar structure.
Check your assignment instructions for formatting and structural specifications. Headings are usually
optional for longer reviews and can be helpful for the reader.
Introduction
The length of an introduction is usually one paragraph for a journal article review and two or three paragraphs for a longer book review. Include a few opening sentences that announce the author(s) and the title, and briefly explain the topic of the text. Present the aim of the text and summarise the main finding or key argument. Conclude the introduction with a brief statement of your evaluation of the text. This can be a positive or negative evaluation or, as is usually the case, a mixed response.
Summary
Present a summary of the key points along with a limited number of examples. You can also briefly explain the author's purpose/intentions throughout the text, and you may briefly describe how the text is organised. The summary should only make up about a third of the critical review.
Critique
The critique should be a balanced discussion and evaluation of the strengths, weaknesses and notable features of the text. Remember to base your discussion on specific criteria. Good reviews also include other sources to support your evaluation (remember to reference).
You can choose how to sequence your critique. Here are some examples to get you started:

Most important to least important conclusions you make about the text.
If your critique is more positive than negative, then present the negative points first and the positive last.
If your critique is more negative than positive, then present the positive points first and the negative last.
If there are both strengths and weaknesses for each criterion you use, you need to decide overall what your judgement is. For example, you may want to comment on a key idea in the text and have both positive and negative comments. You could begin by stating what is good about the idea and then concede and explain how it is limited in some way. While this example shows a mixed evaluation, overall you are probably being more negative than positive.
In long reviews, you can address each criterion you choose in a paragraph, including both negative and positive points. For very short critical reviews (one page or less), where your comments will be briefer, include a paragraph of positive aspects and another of negative ones.
You can also include recommendations for how the text could be improved in terms of the ideas, research approach, or theories and frameworks used.
Conclusion
This is usually a very short paragraph.
Restate your overall opinion of the text.
Briefly present recommendations.
If necessary, some further qualification or explanation of your judgement can be included. This can help your critique sound fair and reasonable.
References
If you have used other sources in your review, you should also include a list of references at the end of the review.

A critique of a research article from a professional journal

Evidence-based practice (EBP) is a wide-ranging term with a large and multi-faceted meaning.
Traditionally, a narrow definition may refer to EBP as '...de-emphasising intuition, unsystematic clinical experience...and stresses the examination of clinical evidence from research' (Evidence-Based Medicine Working Group, 1992). This definition misses the current broad and overarching nature of evidence-based practice. A broader and more current definition by the McMaster University Evidence-Based Medicine Group (1996) identifies implications for the research used, for example concepts such
as validity and appropriate data collection methods, as well as acknowledging patient preference as an
important factor. Evidence-based practice has become a cornerstone of professional conduct; the Nursing and Midwifery Council (NMC), for example, mandates that all advice given to patients is based upon the best available evidence (NMC, 2008). The evidence provided by
research does not, however, necessarily mandate a change in practice: the whole purpose of EBP is to
use available research to inform practice, and as a result of good judgement by practitioners ensure
that as healthcare professionals we do what is best by our patients (Sackett, 1996).
The paper selected for analysis is called 'Effective and Sustainable Multimedia Education for Children with Asthma: A Randomized Control Trial' (Krishna et al., 2006). Asthma is a common condition,
affecting more than 5.2 million people in the UK as of 2004 (Asthma UK, 2004), costing the British
economy more than £2.3 billion a year in a combination of NHS costs, lost days due to sickness, etc. (Asthma UK, 2004). In 2001, 69,000 hospital admissions were directly related to asthma: more than
40,000 of these were adult admissions (Department of Health, 2001). With a combination of good
education and access to appropriate healthcare services, these admissions could be reduced: children
in good control of their condition are much less likely to require hospital admissions after transfer to
adult services (Department of Health, 2004). Therefore, as an adult nurse, I can see that the correct
education in relation to asthma as a child can only benefit the patients that I take care of in the future.
A possible specific question that the researchers aim to answer is presented as part of an introductory
sentence. The overall aim of the study appears to be to improve asthma care by trying a different
method of information-giving (i.e. multimedia presentation). The question appears to be equivocal:
according to Cormack and Benton (2000) a good question will involve some mention of the different
variables involved, something that this question fails to do. Following on from this is a list of five
specific hypotheses that the study aimed to examine. Despite the lack of a clear and explicit research
question, these hypotheses serve to focus the research: they form a clear, measurable guide as to
what the researchers expect from the results (Hek & Moule, 2006). However, this particular study only
examines two of the five hypotheses, as the other three were already examined thoroughly in a
previous study. This indicates that the bulk of the study was already completed, possibly in some form
of pilot study. Therefore, this specific study only examines a small proportion of what it initially
intended. Unfortunately, a copy of the previous research could not be found, and therefore specific
details cannot be ascertained.
The paper appears to be quantitative in nature: the researchers are seeking to test hypotheses, have
operationalised the concepts to be measured and have created, in advance, the tools with which to
measure the outcomes (Parahoo, 2006). Furthermore, the title of the article states that the research is
a randomised control trial (RCT). Research of quantitative design is intended to look at facts and
figures rather than opinions, be objective, rather than subjective and produce hard and fast data that
can be applied to a larger population (Carter, 1996). This study aims to test two hypotheses: one will
be tested using numbers (the results of spirometry) and the other using results from a Likert-scale
questionnaire, again producing a number (a percentage). Therefore, a qualitative design, whereby
opinions and feelings would be recorded, would be inappropriate (Carter, 1996). It could, of course, be
argued that asking for opinions is inherently qualitative rather than quantitative, as any results
obtained would be subjective, that is, personal to the respondent. This would potentially make the
study a mixed-methods design, whereby both quantitative and qualitative
design aspects are incorporated into a single study (Hek & Moule, 2006). This design has advantages,
such as increasing the scope of the research: not only is factual data obtained, but it is then
complemented by the thoughts and feelings of the target group, allowing conclusions to be broader and
more relevant (Arthur & Nazroo, 2003). Conversely, a mixed-methods study may produce contradictory
results, and it may be difficult, if not impossible, to ascertain which data are accurate, potentially
rendering the research useless (Maggs-Rapport, 2000). True to the design of an RCT, the study incorporates two
groups: an experimental group (receiving both traditional and multimedia interventions) and a control
group (receiving only the traditional intervention). The purpose of the control group is primarily to give
a comparison, in this case between the new intervention and the old. This arrangement raises ethical
concerns, which are discussed in later sections.
As previously stated, the study claims to be a randomised controlled trial, more specifically an open
experiment, meaning that everyone participating in the study was aware of who was in which group
and it was conducted within a controlled environment. In this case, it is quite appropriate to use an
RCT, as the questions posed by the study seem best answered by this means. Randomised controlled
studies are designed to be carried out within a practice environment, within which variables can be
easily controlled or manipulated (Hek & Moule, 2006). Unfortunately, although experiment-design
studies are easier to control, they do have some disadvantages. For example, they can be seen as
particularly susceptible to the Hawthorne effect, whereby participants' responses are skewed by the
knowledge that they are in a trial (Carter, 1996). An open design, in this specific case, was probably
the only way to make this study feasible. Blinding the participants to the theme of the study would
have proved extremely difficult, and also quite unethical given the ages of the participants (Parahoo,
2006).
A possible alternative would have been a more qualitative design, placing greater emphasis on the
personal experience of the patient after using the multimedia software and the traditional methods. A
semi-structured interview, in which all of the participants are interviewed using a set of questions to
provide a loose structure (Hek & Moule, 2006), would have given a richer view of patient experiences,
but such data are entirely subjective, as well as expensive to collect, difficult to measure and almost
impossible to generalise to an entire population (Bell, 2005). It is
also a method fraught with reliability issues: for example, interviewing is not an innate skill, and
those conducting the interviews need to be experienced so as not to inadvertently lead the participant
or provide cues that may influence the responses (Hek & Moule, 2006).
The sample used in this experiment initially consisted of 246 children fitting the recruitment criteria,
falling to 228 after attrition for various reasons. The sample appears to be a convenience sample: the
participants were obtained from a population to which the researchers had easy access (Parahoo, 2006).
It appears that the initial 246 children were those who agreed to participate out of the 1,000 children
approached. This, however, is not explicitly stated, and has been interpreted from the given
information.