
How to critique a paper

A search of the Web of Science database found that in 2008 there were 13,590 papers published in peer reviewed journals or at conferences on the topic of HIV. Many smaller, independent pieces of research were also published that year in the form of reports or conference abstracts (grey literature). With so much research about HIV, how do we assess which findings are relevant to our everyday work? When we read a paper or report, how do we judge whether we should immediately incorporate the study results into our work plans or wait for more evidence? This NAHIP toolkit describes how to critique a paper and provides a step-by-step guide to assessing the quality and reliability of peer reviewed and grey literature about HIV and AIDS.

1. Where was the article published?


Peer review is a well-accepted indicator of quality scholarship. Authors submit their papers to a journal to be reviewed by other researchers or practitioners (peers) in their field. Articles are only accepted by peer reviewed journals if they meet expected standards. Journal popularity and/or importance can be judged by impact factor: a measure of how many times a journal's articles are cited over a three-year period. Journals with higher impact factors tend to be more selective about which manuscripts they accept. Generic medical publications (e.g. The Lancet) tend to have a higher impact factor; articles about HIV and AIDS that appear in these journals are thought to be of the highest scientific value. Box 1 shows journals that publish articles about sexual health and their respective impact factors. Research published as grey literature is often specialised and only applicable to small populations. This does not mean the findings are not valuable or reliable; however, an especially critical eye should be cast over such research, as it has often not been judged by recognised experts in the field.
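At its core, an impact factor is a ratio: citations received in a year to a journal's recent articles, divided by the number of those articles. The sketch below illustrates only that arithmetic; the figures are invented, and the official Journal Citation Reports calculation fixes the citation and publication windows precisely.

```python
# Simplified sketch of an impact-factor-style calculation.
# All figures are hypothetical, for illustration only.

def impact_factor(citations_to_recent_articles: int, recent_articles: int) -> float:
    """Citations per citable article over the counting window."""
    if recent_articles == 0:
        raise ValueError("journal published no citable articles in the window")
    return citations_to_recent_articles / recent_articles

# A hypothetical journal: 180 citable articles in the window, cited 522 times.
print(round(impact_factor(522, 180), 1))  # → 2.9
```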

Box 1. Sexual health-related peer reviewed journals and their 2008 Impact Factors

Journal Name                                                      2008 Impact Factor
Ethnicity & Health                                                0.9
International Journal of STD & AIDS                               1.1
AIDS Care - Psychological and Socio-Medical Aspects of AIDS/HIV   1.5
AIDS Education and Prevention                                     1.5
AIDS Patient Care and STDs                                        2.4
Social Science & Medicine                                         2.6
Sexually Transmitted Infections                                   2.6
AIDS & Behavior                                                   2.7
Sexually Transmitted Diseases                                     2.9
HIV Medicine                                                      3.1
Journal of Epidemiology and Community Health                      3.2
AIDS Reviews                                                      3.3
JAIDS - Journal of Acquired Immune Deficiency Syndromes           4.6
Journal of Sexual Medicine                                        5.4
AIDS                                                              5.5
International Journal of Epidemiology                             5.8
British Medical Journal                                           12.8
The Lancet                                                        28.4
Nature                                                            31.4
New England Journal of Medicine                                   50.0

2. Who wrote the paper?


It might be helpful to establish whether the author is well known for his/her work. A good reputation is important, and experienced, well-known researchers are unlikely to put their names to research that might damage their professional standing.

NAHIP Toolkit_part9.indd 1

07/08/2009 18:41:29

3. What is the relevance to prevention?


Having read the paper, it is important to first understand its relevance to HIV prevention and ask what the study adds to the established knowledge base. Perhaps this study builds on the work of others by using more rigorous methods. Maybe it provides new insight into risk behaviour or highlights the effectiveness of a new prevention intervention. If it is hard to understand the relevance perhaps the original study was not designed to answer questions about HIV prevention, indicating that the article should be viewed with some caution.

4. What was the target population?

It's important to know who took part in the study and how they were recruited. A study about the prevention needs of black Africans that only recruits individuals from health services might be biased in favour of those whose needs are already being met. Who was included in the study? Was it appropriate to include the views and experiences of people already living with HIV? Who was excluded? Studies sometimes exclude young people under 18 years of age, making the results less applicable when working with teenagers.

5. Were the methods used appropriate?

This can be very difficult to assess if you are not a practising researcher. An important rule of thumb is to look out for bias and other factors that may mean the study findings are not valid. For example, if researchers choose participants for the intervention group based on shoe size, a systematic bias may be introduced (men tend to have bigger feet than women). Researchers control for bias and other factors either at the study design stage (e.g. using randomisation) or at the analysis stage using statistical techniques. Always note whether the authors have accounted for bias and other factors.

Sample size. Judging how many people to include in a study depends on complex statistical calculations. Statisticians calculate how many participants are needed to detect a true difference between two populations. Large numbers (e.g. 100+) are often needed, but some studies do not have enough people in the sample to detect a difference even though one may exist. P-values indicate the probability that an observed difference is due to chance: the closer the p-value is to zero, the less likely the difference is due to chance, and p-values below 0.05 are said to be significant. Good papers take account of the sample size in their methods section and note p-values for any differences they detect.

Qualitative research. Qualitative studies often have much smaller sample sizes than quantitative studies. This is because qualitative research is about choosing individuals so that a range of experiences is included, providing rich, in-depth answers to complex questions. It is about how and why things are experienced, not how many people experience them. Qualitative research includes a range of methods, such as focus groups, interviews, review of documents, and participant and non-participant observation. Qualitative analysis, like quantitative analysis, should be easy to follow, systematic and reproducible by others.

6. Are the findings from the study reasonable and consistent with other studies?

It is rare that a piece of research is so original that there is no other data on the area of interest. If the authors' findings are not consistent with other studies, the reasons for this should be clearly spelled out in the article.

7. Are there any unanswered questions? What are the limitations of the study?
When critiquing a paper or report, look out for the questions that researchers do not address. For example, how many people refused to take part in the study? How many people dropped out during the study? People who completed the study may have been different from those who refused or dropped out; how did the authors handle this bias? How generalisable are the results? For example, can findings from a study with African men in Manchester apply to African men who have sex with men in London? A good article addresses the study limitations and anticipates and answers most questions the reader has.

At the end of this 7-step process, judging the strength of the research presented should be easier. Casting a critical eye over research papers improves our understanding of HIV prevention and allows us to reflect on our own work, ensuring the very highest standards are reached.
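To make the p-value idea from the Sample Size discussion in step 5 concrete, here is a minimal sketch using a pooled two-proportion z-test (a standard textbook method, not one prescribed by the toolkit). The groups and counts are invented for illustration.

```python
import math

def two_proportion_p_value(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value for the difference between two proportions,
    using the pooled normal approximation (z-test)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided tail probability of a standard normal variable.
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical example: 45 of 100 in one group report a risk behaviour,
# versus 30 of 100 in a comparison group.
p = two_proportion_p_value(45, 100, 30, 100)
print(f"p = {p:.3f}")  # below 0.05, so the difference is called significant
```

Note the sample-size point made above: with the same proportions but only 20 people per group (9 of 20 versus 6 of 20), this test gives p of roughly 0.3, and the difference would not be detected even though it may be real.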
NAHIP Toolkit written and produced by: Ibidun Fakoya, Migration Ethnicity and Sexual Health (MESH) Programme, University College London. 3rd Floor, Mortimer Market Centre, London, WC1E 6JB

