
An Introduction to Critical Appraisal

Kholoud S. Al Ghamdi (MD, PhD)


November 27, 2014





"Criticism is important in the continuation of the human life cycle, since every human, without exception, has his or her own shortcomings in one or more aspects of life, and every person admits this. So as long as shortcomings exist, there is always a place for criticism."
Dr. Salman Alowdah

It is astonishing with how little reading a doctor can practice medicine, but it is not astonishing how badly he may do it.

What is evidence-based practice?


Evidence-based practice is the integration of individual clinical expertise with the best available external clinical evidence from systematic research, and patients' values and expectations.

What is the Relation Between Evidence-Based Practice AND Critical Appraisal?

The evidence-based practice (EBP) process includes:

Decision or question arising from a patient's care.
Formulate a focused question.
Search for the best evidence.
Appraise the evidence.
Apply the evidence.

Do NOT believe anything people write until you've convinced yourself it was a well-done study with valid conclusions.

What is critical appraisal?


It is the process of systematically examining research evidence to assess its
validity, results, and relevance before using it to inform a decision
(Hill and Spittlehouse, 2001)

A critical review must identify the strengths and weaknesses in a piece of research
and this should be carried out in a systematic manner
(Eachus, 2003)

It is a skill that needs to be practiced by all health professionals as part of their work.

Why do we need to critically appraise?


Studies which don't report their methods fully overstate the benefits of treatments by around 25%.
(Khan et al., Arch Intern Med, 1996; Maher et al., Lancet, 1998)

Studies funded by a pharmaceutical company were found to be four times as likely as independent studies to give results favourable to the company.
(Lexchin et al., BMJ, 2003)

How do I appraise?
You need to know if this research is:
Valid & Relevant
Mostly common sense.
You don't have to be a statistical expert!
Checklists help you focus on the most important aspects of
the article.
Different checklists for different types of research.

General Questions to Ask


What is the objective/hypothesis of this manuscript?

What outcomes are being measured?

What type of data was gathered?
Is the study biased, is there confounding, can the results be explained by chance?
Were subject selection and data collection appropriate?
Are the conclusions supported by the study data?
Were appropriate statistics used, with adequate power?
Was the study ethical and free of conflicts of interest?

Research methods
Quantitative
Uses numbers to describe and analyse
Useful for finding precise answers to defined questions
Qualitative
Uses words to describe and analyse
Useful for finding detailed information about people's perceptions and attitudes

What type of study is it?


Descriptive studies
Data used for descriptive purposes and not used to make predictions
Correlational studies, case reports or series, cross sectional surveys
Measures of central tendency (mean, median, mode)

Inferential studies
Use data from study sample to derive conclusions and/or make
predictions about the population
Statistics are used to test (reject or fail to reject) hypotheses

What type of study is it?

Systematic reviews
Randomized controlled trials
Prospective studies (cohort studies)
Retrospective studies (case control)
Case series and reports

What is the intervention?


What are the dependent, independent, and confounding variables of the study?

Who are the subjects?


Ideally, the patients studied come from a random sample.
Most studies do not use a truly random sample: selection bias!
Subject inclusion/exclusion criteria?

Is the study internally valid?


Do I believe it?
Assess the hypothesis/objective of the study:
Research hypothesis: what the researcher predicts.

Null hypothesis (H0): there is no difference in outcome between the two groups; in general, expect the null hypothesis to be rejected, because the researcher usually predicts a difference between groups.
Alternate hypothesis (H1): there is a difference between the groups; typically, the researcher expects this to be supported, so this is the research hypothesis.

Systematic reviews

Thorough search of literature

All RCTs (or other studies) on a similar subject synthesised and summarised.

Meta-analysis to combine statistical findings of similar studies.
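
To make "combine statistical findings" concrete, here is a minimal sketch of fixed-effect (inverse-variance) pooling, the simplest form of meta-analysis. The effect sizes and standard errors are invented for illustration; real meta-analyses also assess heterogeneity and often use random-effects models.

```python
# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# The effect sizes and standard errors below are hypothetical.
import math

effects = [0.30, 0.45, 0.20]       # e.g. log odds ratios from three trials
std_errors = [0.15, 0.20, 0.10]

weights = [1 / se**2 for se in std_errors]    # more precise studies weigh more
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval (1.96 is the z-value for alpha = 0.05)
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI ({low:.3f}, {high:.3f})")
```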

Appraising systematic reviews

Was a thorough literature search carried out?

How was the quality of the studies assessed?

If results were combined, was this appropriate?

Randomised controlled trials (RCTs)

Normal treatment/placebo versus new treatment.

Participants are randomised.

If possible, the study should be blinded:
The best design is when neither the investigator nor the subject knows which group they are in (double blinding).

Intention to treat analysis
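
As a small illustration of randomisation, the sketch below performs simple 1:1 allocation in Python. The participant IDs are hypothetical; real trials use concealed, pre-generated allocation sequences (often block randomisation to keep the arms balanced).

```python
# Simple 1:1 randomisation sketch; participant IDs are hypothetical.
import random

random.seed(42)  # fixed seed so the allocation list is reproducible
participants = [f"P{i:03d}" for i in range(1, 11)]
allocation = {p: random.choice(["treatment", "control"]) for p in participants}

for participant, arm in allocation.items():
    print(participant, "->", arm)
```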

Appraising RCTs

Recruitment and sample size

Randomisation method and controls

Confounding factors: factors that 'get in the way'

Blinding

Follow-up (flow diagram)

Intention to treat analysis

Adjusting for multiple analyses

As a non-statistician, I tend only to look for three numbers in the methods:
Size of sample
Duration of follow-up
Completeness of follow-up
(Greenhalgh, 2010)

Assess adequate sample size

Especially important for descriptive statistics:
An 83% success rate in 6 patients is different from the same success rate in 600 patients.

In general, inferential statistics take sample size into consideration:
Results may trend towards significance (e.g. p just above 0.05) with a low sample size but become significant if more subjects were enrolled.

In research design, however, power calculations should be done to assess adequate sample size.
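
As a sketch of both points, the Python snippet below (assuming the statsmodels package is available; all numbers are illustrative) shows how a 95% confidence interval around "83% success" narrows from 6 to 600 patients, and a design-stage power calculation for a hypothetical medium effect size.

```python
# Sketch: sample size, confidence intervals, and power (numbers illustrative).
from statsmodels.stats.proportion import proportion_confint
from statsmodels.stats.power import TTestIndPower

# "83% success" is far less certain in 6 patients than in 600:
for successes, n in [(5, 6), (500, 600)]:
    low, high = proportion_confint(successes, n, alpha=0.05, method="wilson")
    print(f"{successes}/{n}: 95% CI {low:.2f}-{high:.2f}")

# Design-stage power calculation: subjects per group needed to detect a
# medium effect (Cohen's d = 0.5) with alpha = 0.05 and power = 0.80.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"required n per group: {n_per_group:.0f}")
```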

Cohort studies

Prospective: groups (cohorts) defined by their exposure to a risk factor are followed over a period of time, and rates of development of an outcome of interest are compared.

Watch for confounding factors and bias.

Case-control studies

Retrospective

Subjects confirmed with a disease (cases) are compared with non-diseased subjects (controls) in relation to possible past exposure to a risk factor.

Watch for confounding factors and bias.

Appraising cohort/case control studies

Recruitment selection bias

Exposure - measurement, recall or classification bias

Confounding factors & adjustment

Time-frames

Plausibility: is the relationship between the putative cause and the outcome biologically plausible?

Publication bias

Papers with more interesting results are more likely to be:

Submitted for publication
Accepted for publication
Published in a major journal
Published in the English language

Results

How was data collected?

Which statistical analyses were used?

How precise are the results?

How are the results presented?

Statistical tests
Type of test used depends upon:

Type of data: categorical, continuous, etc.

One- or two-tailed (or sided) significance

Independence and number of samples

Number of observations and variables

Distribution of data, e.g. normal
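
For instance, the distribution of the data can decide between a parametric and a non-parametric test. The Python sketch below (assuming numpy and scipy; data simulated for illustration) uses a normality-check-then-choose heuristic, which is one common simplification rather than a universal rule.

```python
# Sketch: letting the data's distribution guide the choice of test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)  # simulated measurements
group_b = rng.normal(loc=5.8, scale=1.0, size=30)

# Check approximate normality before choosing a parametric test.
normal = (stats.shapiro(group_a).pvalue > 0.05
          and stats.shapiro(group_b).pvalue > 0.05)
if normal:
    result = stats.ttest_ind(group_a, group_b)     # parametric, two-tailed
else:
    result = stats.mannwhitneyu(group_a, group_b)  # non-parametric fallback
print(result)
```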

P values

P stands for probability: how likely is the result to have occurred by chance?

A P value of less than 0.05 means the likelihood of the results being due to chance is less than 1 in 20 = statistically significant.
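
A worked example may help. The sketch below (assuming numpy and scipy; all data simulated) compares blood-pressure reductions in two hypothetical groups with a two-sample t-test.

```python
# Worked p-value example on simulated blood-pressure reductions (mmHg).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treated = rng.normal(loc=8.0, scale=5.0, size=40)  # true mean reduction 8
control = rng.normal(loc=5.0, scale=5.0, size=40)  # true mean reduction 5

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# p < 0.05: under the null hypothesis of no true difference, a result this
# extreme would occur by chance less than 1 time in 20.
```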

P values
When a difference is shown, it could be due to
(1) chance or
(2) a true finding
Chance (Type I error), False positive
Generally we accept less than a 5% chance of a type I error, so check that the alpha (significance) level is set at 0.05.

P values
When no difference is found (accept null hypothesis), it could be:
(1) The truth
(2) False negative (Type II error, beta)
Beta is typically set at 20%
Power of the study is defined as the probability of detecting a true difference (correctly rejecting the null hypothesis when the alternate hypothesis is true), typically 0.80 (i.e. there is an 80% chance of detecting the difference if one truly exists)
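
Power can also be understood by simulation: draw many samples in which a true difference exists and count how often the test detects it. The sketch below (assuming numpy and scipy) uses an invented effect size of d = 0.5 with 30 subjects per group, which turns out to be well below 80% power.

```python
# Sketch: estimating power by simulation (effect size and n are invented).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials, n_per_group, detected = 2000, 30, 0

for _ in range(n_trials):
    a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    b = rng.normal(loc=0.5, scale=1.0, size=n_per_group)  # true effect d = 0.5
    if stats.ttest_ind(a, b).pvalue < 0.05:               # detected at alpha 0.05
        detected += 1

print(f"estimated power: {detected / n_trials:.2f}")      # roughly 0.5, not 0.8
```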

Statistical significance

Statistical significance does not necessarily equal clinical significance.

Statistical non-significance may be due to a small sample size.

A bigger sample may result in a statistically significant difference.

You will also need to look for the following:

Look for Sources of Bias in the Study


Bias
Things that may influence the research and lead to a systematic
deviation from the truth
May occur in each stage of data manipulation:

Collection
Analysis
Interpretation
Publication
Review

Examples of Bias in a Study

Design bias
Sampling bias
Observer bias
Reviewer bias

Sampling bias is introduced when the sample used is not representative of the
population or inappropriate for the question asked.

Look for Confounding Variables


A confounding variable is one that is associated with the predictor variable and is independently a cause of the outcome variable.
Example: an association was seen in a study between coffee drinking and MI. However, if more coffee drinkers were also smokers, then smoking is the confounding variable (see the simulation sketch below).
- So you need to know the other risk factors for the disease.
- Randomization reduces confounding, but this research design is not always possible.
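
The coffee/MI example can be made concrete with a small simulation (assuming numpy; all probabilities are invented). The crude comparison shows an apparent coffee-MI association that disappears once the analysis is stratified by smoking.

```python
# Sketch of confounding: smoking drives both coffee drinking and MI here,
# so coffee looks harmful until the analysis is stratified by smoking.
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
smoker = rng.random(n) < 0.3
coffee = rng.random(n) < np.where(smoker, 0.8, 0.3)  # smokers drink more coffee
mi = rng.random(n) < np.where(smoker, 0.10, 0.02)    # smoking raises MI risk

def mi_risk(mask):
    return mi[mask].mean()

print(f"crude: coffee {mi_risk(coffee):.3f} vs none {mi_risk(~coffee):.3f}")
for is_smoker, label in [(True, "smokers"), (False, "non-smokers")]:
    stratum = smoker == is_smoker
    print(f"{label}: coffee {mi_risk(coffee & stratum):.3f} "
          f"vs none {mi_risk(~coffee & stratum):.3f}")
```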

Look for adequate follow-up


In general, follow-up should be at least 80%.
Inadequate follow-up or too many losses to follow-up is a serious flaw in research: what if only the happy patients followed up with the study?

The authors should account for all patients lost to follow-up, and at least discuss the potential bias and data scenarios.

Look for statistical measures used and findings


What is the data type measured for outcomes?

Nominal (categorical)
Ordinal (rank order)
Continuous
Ratio

Look for statistical measures used and findings

Check that the proper statistical analysis was performed; for example:

Categorical data: chi-square
Ordinal data: Mann-Whitney U test, Spearman's rho, weighted kappa
Continuous data: t-test, Z-test

Keep a statistical reference book handy to review new statistical terms while reading journal articles until you are familiar with them.
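
As a sketch of this mapping, the snippet below (assuming numpy and scipy; all data invented) runs one test of each kind.

```python
# Sketch: matching data type to statistical test (all data invented).
import numpy as np
from scipy import stats

# Categorical: 2x2 table of outcome (improved / not improved) by treatment arm.
table = np.array([[30, 10],
                  [20, 20]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Ordinal: pain scores (rank order) in two groups.
pain_a = [1, 2, 2, 3, 4, 4, 5]
pain_b = [2, 3, 4, 4, 5, 5, 5]
u_stat, p_u = stats.mannwhitneyu(pain_a, pain_b)

# Continuous: systolic blood pressure in two groups.
rng = np.random.default_rng(4)
bp_a = rng.normal(120, 10, 25)
bp_b = rng.normal(128, 10, 25)
t_stat, p_t = stats.ttest_ind(bp_a, bp_b)

print(f"chi-square p={p_chi:.3f}, Mann-Whitney p={p_u:.3f}, t-test p={p_t:.3f}")
```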

Look for Disclosure


Disclosure
the act of revealing something

Medical Disclosure
Authors, editors, and reviewers must disclose any financial or personal relationships that could inappropriately influence (bias) their actions.

"Moderation of Critical Appraisal session/ Journal club


The role of a moderator of a session:
1: Identifying or help in selection of paper. The paper must not be a 'perfect'
paper published in 'Nature' or 'Lancet', It must have some deficiencies to
generate critical appraisal/discussion
2: Following the critical appraisal checklists as regards critique of different
items: critique of title, abstract, introduction, methods, results, discussion,
conclusions, references.
3: Minimal interference. Listening and interfering only when the presenting
student or attending students make a blunder or attending faculty do not
contribute in a matter of difference of opinion. Let the presenting student
'present' and attending students discuss

"Moderation of Critical Appraisal session/ Journal club


The role of a moderator of a session:
4: Preparation: so that moderator is capable of supporting or negating some
views if the need is there.
5: Awareness of the methods and particularly statistics used in the paper.
6: Evaluation of presenting and attending students.
7: Advertising: Ensuring that the student sends information/invitation and
reminder to all students and faculty. Encourage faculty to attend, add
'attractants' like prizes or refreshments!

Critical Appraisal: Online Notes and Checklists

http://www.sign.ac.uk/methodology/checklists.html

Dr. I. Selvaraj (i.selvarajirms@yahoo.com)
Dr. Sarah Lawson (sarah.lawson@kcl.ac.uk)
Dr. Samira Alsenany (ssenany@hotmail.com)

Thank you.
