READ THIS SERIES IF YOU WANT TO:
• BE MORE EVIDENCE-BASED IN YOUR PRACTICE
• FEEL MOTIVATED TO READ JOURNAL ARTICLES
• INFLUENCE THE DEVELOPMENT OF YOUR SERVICE
Over the past year, this programme has focused primarily on development opportunities for staff in electronic searching skills and critical appraisal. Applying research findings to practice and evaluating one's own practice will be a focus in future years. In Fife, our critical appraisal education is delivered through a series of small group journal clubs within each of our three client care groups (adult learning disability; adults with acquired disorders; paediatrics).
Figure 1: RCSLT expectations

The Royal College of Speech and Language Therapists (RCSLT), in its current Research Strategy, lists the following underpinning principles:
1. All practitioners engaged in meeting the speech, language, communication and swallowing needs and disorders in the population / of their clients must use the evidence base to inform and support their clinical decision-making, and as part of judging the safety, efficacy and appropriateness of their clinical practice.
2. The RCSLT expects all members will engage in a range of research-related activities (including self-directed and work-based learning) that will enable them to continue to develop their skills and knowledge throughout their careers. Speech and language therapists must demonstrate a personal commitment to ongoing education in order to continue developing their knowledge and skills when undertaking research.
3. The RCSLT expects every practitioner will have a minimum 'skills set' which will allow them to be evidence-based practitioners. These skills should include searching the evidence, critical appraisal, applying research findings to practice, and methods for evaluating their own practice.
Appraisal help
There are now many appraisal tools available to help practitioners evaluate health-related research literature, for example the Critical Appraisal Skills Programme (CASP) tools published in 2006 by the Public Health Resource Unit in England. RCSLT Clinical Guidelines (2005: Appendix 2) also provide the very detailed set of checklists used during development of the guidelines. With these sorts of tools, you choose a specific checklist according to the methodology of the research article in question. Each checklist is a structured set of questions with points for consideration and room to record your appraisal notes. I encountered some problems when I tried to use the currently available appraisal tools in our journal clubs. Most were initially developed for appraisal of medical literature, so there is a strong emphasis on clinical trials of medical treatments. Novice users may be put off by the arcane terminology of the randomised controlled trial. By and large, current evaluation tools also favour quantitative over qualitative study designs (although the CASP toolkit does include a checklist for qualitative research). This is a problem given that qualitative methods are the ones of choice when the research either involves an under-explored area (lots of those in our practice!) or seeks to understand clients' experiences, attitudes or beliefs.
Table 1: Primary or secondary study?
Secondary or integrative studies (for example systematic reviews, meta-analyses and economic analyses) summarise and draw conclusions from primary studies.

Table 3: The Hierarchy of Evidence
Level 1: Systematic reviews and meta-analyses. All primary studies on a particular subject are hunted out and critically appraised according to rigorous criteria.
Level 2: Randomised controlled trials (RCTs) with (statistically) definitive results.
Level 3: RCTs with non-definitive results. In an RCT, participants are randomly allocated to one intervention or another. Both groups are followed up for a specific time period and analysed in terms of specific outcomes defined at the outset of the study. Because, on average, the groups are identical apart from the intervention, any differences in outcomes are, in theory, attributable to the intervention.
Level 4: Cohort studies. Two or more groups are selected on the basis of differences in their exposure to a particular agent (for example, prematurity) and followed up to see how many in each group develop a particular outcome (for example, language impairment).
Level 5: Case-control studies. People with a particular condition are identified and matched with controls. Data are then collected for both groups on their past exposure to possible causal agents.
Level 6: Cross-sectional surveys. A representative sample of people are interviewed, examined or otherwise studied to gain answers to a particular clinical question.
Level 7: Case reports. Considered relatively weak scientific evidence, but they have the advantage of being richer in information and easier to understand and remember!
Once you have found your article (more on this later in the series), your first task is, in Greenhalgh's words, 'getting your bearings' (2006, p.40). The following three questions may be helpful.
1. Why was the study done? (What clinical questions did it address?) Sometimes the authors will explicitly tell you their clinical questions. If they don't, we have found it useful to try to reformulate the authors' aims as one or more questions. The PICO framework can be helpful with this:
P = population, problem
I = intervention
C = control, comparison
O = outcome
For example: In children aged under 6 years with speech sound disorder (P), is there any difference in rate of speech sound development (O) when intervention includes non-oral motor exercises (I) compared to speech sound intervention alone (C)? When you reword it like this, it is immediately obvious what things the authors needed to define and measure. So, in
the example above, their definition of speech sound disorder should link explicitly to their participant selection criteria for the study. How are they defining and measuring rate of speech sound development and does this accord with your expectations? For some research designs, you only need part of the PICO. For example: Are teenagers with a history of specific (i.e. primary) language impairment (P) more at risk of negative mental health (O) than their peers (C)? At this point, you may decide to discontinue if the question(s) posed are really not what you are looking for answers to.
2. What type of study was done? This is a crucial step as the choice of an appraisal framework is largely driven by the design and methods used in the study. Tables 1, 2 and 3 provide some definitions to help you. 3. Was the study design appropriate to the broad field of research addressed? This question gets easier with experience - the most important issues are covered within the individual appraisal tools. Broadly speaking, you are checking how well the study has been designed so as to minimise the possibility that the results are untrue, biased, misleading or unreliable. An intervention study needs to demonstrate that the outcomes resulted
Table 2: Quantitative, qualitative or mixed methods?
Quantitative broad fields of research include diagnosis, screening, prognosis, causation and psychometric studies.
Qualitative design methods involve exploration and interpretation via data generation; they are stronger on validity (closeness to the truth) and are the preferred methods for poorly understood or relatively unexplored phenomena. Data generation methods include documents, passive observation, participant observation, semi-structured interview, narrative interview and focus groups.

[Photo: Delegates at the 3rd East African Speech and Language Therapy Conference]
from specific aspects of the intervention and not from, for example, the passage of time or receiving general attention from a nice therapist (the speech and language therapy equivalent of the placebo effect, which I think I once heard Professor Pam Enderby describe as 'random niceness'!). You also need to have a think about whether the design of an intervention study was more about efficacy (does it work in ideal conditions with carefully selected participants?) or effectiveness (does it work under typical clinical conditions with a range of clients?). Logically, you're supposed to do efficacy first but, well, the path of research is sometimes more about serendipity than logic. However, if you can't work out what the researchers thought they were doing in this respect, be suspicious. There is an established pecking order within study designs in terms of weight and quality of evidence (see the Hierarchy of Evidence in table 3). However, how well the research was conducted (methodological quality) should influence how you rate it just as much as the level of evidence of the study
design. Common sense judgement is needed as well as hierarchies of study design when assessing a study's relative contribution to clinical evidence. If the study design is very wide of the mark, you may wish to conclude that it is not worth the effort of continuing with the appraisal. If, on the other hand, you wish to continue, you can select from your appraisal toolkit the framework that appears to fit best with the study design. (Be aware that a mixed methods study may need you to use bits of more than one tool.)
I shall be presenting a range of critical appraisal tools over coming issues of the magazine. For this issue, here is an appraisal framework for expert opinion articles which are not based on systematic research or which go beyond the evidence base. It can be applied to a narrative or simple (non-systematic) review of intervention, management or decision-making for a specific clinical population, problem or issue, as well as to the sorts of articles that offer overviews of or advice on specific areas of clinical practice. You may download this tool as a document set up for you to print off and use as an individual or with colleagues in a journal club from www.speechmag.com/Members/CASLT. The original set of questions came from a very useful recent article in an American journal (Lass & Pannbacker, 2008):
Check in the references, though you may need to follow this up with a literature search. Most academic journals claim to be peer-reviewed these days. It is tempting to think that an article must surely be credible if it has got through peer review into print. Unfortunately, it seems the process is less than watertight, and reviewers may not share a clinician's main concerns. It might help to understand that there is a pecking order amongst academic journals. People whose careers hinge on the rankings of their publications (that is, any current or aspiring academic) want to get their articles published in journals that are highly ranked and cited by other academics in their field, such as the Journal of Speech, Language and Hearing Research. The journal Child Language Teaching and Therapy may not be so highly ranked by academics, but I bet it is considerably more widely read by speech and language therapy practitioners. Aspiring authors have to play the game by the rules of the academic journals; this has an impact on perceptions of scientific rigour and credibility, as well as on readability and impact on practitioners.
Question 4a: Does the expert consider the quality of the quoted evidence? They should be considering methodological quality and not just its level in the evidence hierarchy (see table 3).
This is where you need your speech and language therapy craft knowledge to help you, as well as any awareness of what research has been undertaken in this area. Of course, there may be no relevant research, and that's why you may be reviewing an expert opinion article in the first place!
Question 7: Did the expert make full disclosure of any financial interests related to products such as materials and publications?
There are lots of definitions of evidence-based healthcare, but Justice (2006) stresses the importance for speech and language therapy of four key types of knowledge:
1. Information from high quality research studies and systematic literature reviews.
2. Clinicians' expertise (speech and language therapy craft knowledge), arising from their theoretical knowledge and clinical experience.
3. Understanding of client preferences, to allow us to work effectively with people and their communities.
4. Institutional norms and policies, which constrain the scope of clinical decision-making.
References
Read between the lines and look for sources of bias or unspoken influence, for example from institutional affiliations or personal beliefs. Question 9: Does the expert provide a comprehensive overview (both sides)?
Greenhalgh, T. (2006) How to read a paper: the basics of evidence-based medicine (3rd edn). Oxford: Blackwell Publishing Ltd.
Justice, L. (2006) 'Evidence-based practice briefs: an introduction', EBP Briefs. Available at: http://www.speechandlanguage.com/ebp/justice-intro.asp (Accessed: 19 July 2010).
Lass, N.J. & Pannbacker, M. (2008) 'The application of evidence-based practice to nonspeech oral motor treatments', Language, Speech and Hearing Services in Schools, 39(3), pp.408-421.
Public Health Resource Unit (2006) Critical Appraisal Skills Programme (CASP). Oxford: PHRU. Available at: www.phru.nhs.uk/Pages/PHD/CASP.htm (Accessed: 19 July 2010).
Taylor-Goh, S. (2005) RCSLT Clinical Guidelines. Milton Keynes: Speechmark.
Ward, D. (2008) 'The aetiology and treatment of developmental stammering in childhood', Archives of Disease in Childhood, 93(1), pp.68-71.
Is there another interpretation or approach that the expert appears to have overlooked or ignored? Question 10: Does the expert mainly cite his or her own work?
Download the expert opinion framework document from www.speechmag.com/Members/CASLT. Use it yourself or with colleagues in a journal club, and let us know how you get on (email avrilnicoll@speechmag.com). If you are a member of the Royal College of Speech & Language Therapists, you can also see information to help you start and make the best of journal clubs at www.rcslt.org/members/cpd/journal_clubs.
Check the references and the text. This can be a bit of a giveaway. It's okay to cite yourself (it racks up citation points for your academic ratings!) so long as you cite other authorities as well.