
Available online at www.sciencedirect.com

International Journal of Nursing Studies 46 (2009) 576–587
www.elsevier.com/ijns
© 2008 Elsevier Ltd. All rights reserved. doi:10.1016/j.ijnurstu.2008.05.006

Knowledge translation and implementation research in nursing


Lars Wallin *
Department of Neurobiology, Care Sciences and Society, Division of Nursing, Karolinska Institutet and Clinical Research Utilization (CRU), Karolinska University Hospital, Eugeniahemmet T4:02, SE-171 76 Stockholm, Sweden
Received 27 January 2008; received in revised form 20 May 2008; accepted 20 May 2008
* Tel.: +46 8 517 754 54; fax: +46 8 517 760 95. E-mail address: lars.wallin@karolinska.se.

Keywords: Nursing; Knowledge translation; Implementation research; Interventions

What is already known about the topic?

- Knowledge translation research in nursing is dominated by descriptive studies.
- A valid knowledge base for issuing recommendations on implementation strategies is lacking.

What this paper adds?

- A description of intervention strategies used in recent implementation research in nursing.
- An analysis and discussion of issues involved in the evaluation of complex interventions for implementing evidence-based practice.

1. Introduction

Knowledge translation and implementation research is an emerging field in healthcare science. It is certainly not a new idea; the use of research has been on the healthcare agenda for a long time, but it has only received greater attention during the past two decades. It is a research field with significance for all healthcare professionals and has immense global implications (Sanders and Haines, 2006). A striking example is the estimation that up to 70% of the 4 million infants who die each year in the neonatal period could be saved if basic evidence-based practices (primarily nursing care interventions) were used (Lawn et al., 2005; Darmstadt et al., 2005). Another example is the persistent problem of pressure ulcers in hospitalized patients. Inadequate compliance with existing guidelines results in a high prevalence of ulcers, leading to patient suffering, prolonged hospitalization, a need for continued intensive care and a financial burden on the healthcare system (Laat et al., 2006). The Institute for Healthcare Improvement in the US has added pressure ulcer prevention as 1 of 12 interventions to reach the campaign goal of saving 5 million lives from medical harm (IHI, 2006). In a widely cited report based on data from the US and the Netherlands, Grol and Grimshaw (2003) reported that 30–40% of all patients do not receive healthcare based on current relevant knowledge and that as much as 20–25% of all patients receive harmful or unnecessary care. These figures largely concern medical treatment, but there is no reason to assume that nursing care would prove to be more evidence based if such information were available. The World Health Organization (WHO) has realized the serious nature of this situation, stating that "stronger emphasis should be placed on translating knowledge into action to improve public health by bridging the gap of what is known and what is actually done" (WHO, 2004, p. V). This urgent request conveys implications for nursing research. It is no longer reasonable to predominantly do research on, for example, patients' experiences of healthcare or surveys on the working conditions of nursing staff. I am well aware that qualitative research is essential in understanding and improving nursing care. However, internationally there is a growing demand for more intervention research in order to establish evidence on the effectiveness of various nursing practices (Rahm-Hallberg, 2006). It is indeed necessary to raise the level of evidence on the impact of nursing, though it is also appropriate to take that reasoning one step further. For evidence-based knowledge to be used, intervention research is needed on how to implement such knowledge. In joint efforts, researchers, decision-makers and practitioners need to enhance our understanding of how to get evidence into practice and, through that, improve the processes and outcomes of healthcare. Thus, the aim of this paper is to present a discussion of issues in the field of knowledge translation and implementation research. The discussion is primarily based on the literature of the past 3–4 years and by that strategy adds to Titler's (2004) paper on methods in translational science.

2. Concepts and definitions

A large number of terms are used in the literature for the process of getting knowledge into practice, including knowledge utilization, knowledge transfer, evidence-based practice and innovation diffusion (Graham et al., 2006; Estabrooks et al., 2006). Terms vary depending on the discipline from which they originate and thus may differ slightly in meaning. This paper uses the concepts of knowledge translation (KT) and implementation research (IR). The Canadian Institutes of Health Research (CIHR) defined KT as "... a dynamic and iterative process that includes synthesis, dissemination, exchange and ethically sound application of knowledge to improve the health of Canadians, provide more effective health services and products and strengthen the health care system" (CIHR, 2008). IR is "the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services and care" (Eccles and Mittman, 2006, p. 1). As you may have recognized, these two definitions complement each other. The KT definition is general and highlights the importance of the exchange between researchers and users, the need to synthesize research outcomes before application and the fact that implementation is a complex undertaking involving a good deal of social interaction (Graham and Tetroe, 2007). The IR definition emphasizes the need to study research uptake scientifically. Both perspectives unite in the overall objective of improving the quality of healthcare.

A third concept of interest in this field is complex interventions. Interventions in healthcare organizations aimed at changing behaviours and organizing those behaviours, such as implementing clinical guidelines, are usually comprised of multiple components that may act both independently and inter-dependently (Medical Research Council, MRC, 2000; Blackwood, 2006). "The greater the difficulty in defining what, exactly, are the active ingredients of an intervention and how they relate to each other, the greater the likelihood you are dealing with a complex intervention" (MRC, 2000, p. 1). Evaluations of complex interventions require specific considerations, part of which will be discussed in this paper. Finally, as evidence-based practice (EBP) is the ultimate objective of KT, it is appropriate to give the concept of EBP a precise meaning by referring to the definition proposed by Dawes et al. (2005, p. 7): "EBP requires that decisions about health care are based on the best available, current, valid and relevant evidence. These decisions should be made by those receiving care, informed by the tacit and explicit knowledge of those providing care, within the context of available resources."

3. Nursing and research uptake: what do we know?

The use of evidence in nursing practice is a field that has received increased attention over the years. This growth can be noted in the appearance of journals such as Evidence-Based Nursing and Worldviews on Evidence-Based Nursing, in the increasing number of systematic reviews on various clinical topics and in the very large number of publications on evidence-based practice. There is a history of 40 years of research on research utilization in nursing, with a marked increase of reports from the beginning of the 1990s, which runs parallel to the development of the evidence-based movement (Estabrooks et al., 2004). As in nursing research generally, the bulk of the work is descriptive. For example, some 45 studies using the Barriers Scale (measuring barriers to research use among nurses; Funk et al., 1991) have been published (Hutchinson and Johnston, 2006), but only one entailed evaluation of an intervention (Fink et al., 2005). A large number of papers have examined predictors of research use among nurses, predominantly individual characteristics (Estabrooks et al., 2003) and, to a minor extent, organizational factors (Meijers et al., 2006). Such research has largely been based on bi-variate correlational designs, failing to take into account complex and interactive associations between determinants and research utilization. An exception to this pattern is two recent articles by Estabrooks' research group in which structural equation modelling and multilevel analysis were used in combination with sophisticated statistical techniques (Estabrooks et al., 2007; Cummings et al., 2007). These two papers revealed factors (such as Internet use, lower levels of emotional exhaustion, facilitation, nurse-to-nurse collaboration, staff development and ability to control policy) that predicted nurses' use of research. However, in these two studies, as in other studies examining predictors of research use, research utilization is treated as the outcome variable of interest. Estabrooks (2007) suggested that researchers must strive to use research utilization as an intermediate outcome, focussing on patient or system outcomes as endpoints. Such study designs would enhance the opportunities to understand the course of events in the 'black box' of implementation.

A further deficiency of existing KT research is its limited use of theory. This is particularly interesting as prominent KT researchers in nursing, such as Stetler (1994) and Titler et al. (1994), had already in the early 1990s proposed models for research utilization. However, with some exceptions, such as the diffusion of innovations theory (Rogers, 2003) and the Promoting Action on Research Implementation in Health Services (PARIHS) framework (Rycroft-Malone et al., 2002), there is a poor record of theory use to guide study design and develop measures for KT research (Estabrooks et al., 2006). Most striking in viewing nursing KT history is, however, the fact that rigorous research evaluating interventions aimed at changing nurses' behaviour is still scarce. In Estabrooks et al.'s (2004) review of the literature on research utilization in nursing and allied health professions, only 1.3% (7/544) of the identified articles evaluated implementation strategies. Notably, 60% of these 544 articles were classified as general opinion papers.

4. Implementation strategies

Several systematic reviews have been published on interventions for changing practitioners' behaviour (e.g. Bero et al., 1998; Grimshaw et al., 2001; Grol and Grimshaw, 2003; Grimshaw et al., 2004). A range of implementation interventions appear to be useful (see Box 1), although at present researchers are unable to make recommendations on when to use a specific intervention to support implementation in a specific setting (Grimshaw et al., 2004). These reviews, however, were largely limited to studies on medical practice and outcomes linked to physician performance. Because the nature and social structure of work differ substantially between the medical and the nursing profession, it is not reasonable to draw far-reaching implications for nursing practice from these reports (Thompson et al., 2007).

Within nursing, a review on the effectiveness of dissemination and implementation of guidelines was published as early as 1999 (Thomas et al., 1999). Findings indicated that the included studies were of poor quality, that educational materials were frequently used for dissemination and that educational interventions might be effective for implementing guidelines. Considering the continuous stream of publications on EBP in nursing, these results should by now be outdated and no longer relevant. But remarkably, despite nearly one additional decade of research, the general findings of this review still appear to be relevant. In a recent review on interventions aimed at increasing research use in nursing (Thompson et al., 2007), only four studies (Dufault et al., 1995; Hong et al., 1990; Tranmer et al., 2002; Tsai, 2003) met the inclusion criteria. The most common intervention was education (three of the four studies evaluated educational meetings), but findings were inconclusive regarding the effectiveness of intervention strategies. The inclusion criteria allowed only randomized controlled trials (RCTs) or controlled before and after (CBA) designs, and outcomes had to be derived from the measurement of research use or compliance with all behaviour recommendations in a specific guideline. Similar to the 1999 review, the quality of the studies reviewed was judged as poor.

Box 1. Overview of strategies for implementation of evidence (a)

Dissemination/educational strategies
  Educational materials: Mixed effects
  Conferences, courses: Mixed effects
  Education, different strategies: Mixed effects
  Educational outreach visits: Effective
  Mass media campaigns: Mostly effective

Social interaction strategies
  Interactive small-group meetings: Mostly effective
  Feedback on performance: Mixed effects
  Opinion leaders: Mixed effects
  Multiprofessional collaboration: Effective

Decision-support strategies
  Reminders: Mostly effective
  Computerized decision support: Mostly effective

Organizational strategies
  Computers into practice: Mostly effective
  Substitution of tasks: Mixed effects
  Total quality management/quality improvement: Limited effects
  Financial interventions: Effective

Patient-oriented strategies
  Patient-mediated interventions: Mixed effects

(a) Adapted from Grol and Grimshaw (2003). The division into subgroups is tentatively made by the author.


To find more concrete examples of interventions in implementation research in nursing, I selected the 10 studies that were excluded in the final assessment in Thompson et al.'s (2007) study. The following interventions directed to nurses were evaluated in these reports: interactive workshops and information through existing communication channels (Davies et al., 2002), a marketing strategy (Hodnett et al., 1996), e-mail reminders (McDonald et al., 2005; Murtaugh et al., 2005; Feldman et al., 2005), a patient self-care guide and training to support teaching and support skills (Feldman et al., 2004), a ward-based teaching package (Gould and Chamberlain, 1997), interactive educational sessions, change agents and feedback reports (Jones et al., 2004), peer feedback (Moongtui et al., 2000) and teaching and organizational interventions (committees) (Krichbaum et al., 2005). Eight of the 10 studies showed either no or uncertain effects.

To extend the list of studied interventions further, a simple search on Medline for the past 3 years (2005–2007) was performed using "guideline implementation intervention nurs*" as the search phrase. A review of the tables of contents of the journal Worldviews on Evidence-Based Nursing for this 3-year period was also done. These measures yielded an additional 11 studies reporting on KT interventions. These interventions included a self-learning packet and one-on-one teaching (Abbott et al., 2006), education and information meetings (Blackwood and Wilson-Barnett, 2007), information communication, reminders, audit and feedback and education (Bucknall, 2007), audit and feedback and educational outreach (Cheater et al., 2006), educational sessions, facilitators and information tools (Edwards et al., 2006), a consensus process, audit and feedback and educational activities (Helm et al., 2006), organizational measures, such as an EBP philosophy in job descriptions, journal clubs and EBP workshops (Fink et al., 2005), education, information through hospital communication channels and new utensils (mattresses) (Laat et al., 2006), a coaching protocol, on-line tutorial, workshop and performance feedback (Stacey et al., 2006), small group training sessions (Taylor, 2006) and facilitation (Wallin et al., 2005). About half of these studies showed either no or uncertain effects.

From the 25 above-mentioned studies, it appears that study designs in general were weak, with only a few of the studies being RCTs. Most of these studies were quasi-experiments with before and after comparisons, but some of these did not include control group comparisons. Interventions were most often multi-faceted, with information activities and education as prominent elements. Even if the use of multiple components might appear logical in relation to enhancing the strength of the intervention, it must be noted that it is extremely difficult to evaluate which component(s), if any, is effective. Furthermore, considering that in their review Grimshaw et al. (2004) found that multi-faceted interventions were no more effective than single interventions, some caution is warranted when the focus is limited to multi-faceted interventions. While there are many interventions available for changing practice, we should not cling to education as the universal implementation strategy, especially when traditional didactic education has not shown much potential in accomplishing change (Grol and Grimshaw, 2003). Thus, it is appealing that interventions such as reminders and audit and feedback, which have shown promise in changing behaviour among physicians (Grimshaw et al., 2004), appear as strategies in nursing KT studies.

Another issue is the terminology used to designate the different strategies for implementing evidence. In the present review of implementation strategies in nursing, I relied on the descriptions used by each author. Interventions, however, were not always adequately described, and labelling seemed inconsistent. This problem is probably related to the lack of a common theoretical framework for KT and contributes to the problems of implementation. Leeman et al. (2006) suggested a theory-based taxonomy to inform the comprehension and selection of methods for implementing change in nursing. A different classification framework of interventions was developed by the Cochrane Effective Practice and Organization of Care Group (EPOC). Both these approaches deserve further attention as they can contribute both theoretically and operationally to getting evidence into practice. A careful description of the intervention is also required to follow the Consolidated Standards of Reporting Trials (CONSORT) statement (Moher et al., 2001) and the CONSORT statement for cluster RCTs (Campbell et al., 2004). Being clear about the components of an intervention (e.g., duration and frequency, deliverer and receiver, and mode of delivery) is important in achieving a better understanding of implementation strategies (Thompson et al., 2007); a small, hypothetical illustration of such a component-level description is sketched at the end of this section.

Because the present overview of the literature on interventions was not a systematic review, quality assessment or a deeper analysis of the referred studies was not undertaken. Conclusions based on the overview are also hampered by the primitive search strategy used. However, although the number of intervention studies appears to be increasing, we still know stunningly little about the effectiveness of various approaches to implementing evidence in clinical practice. Unfortunately, it is quite appropriate to echo Titler's conclusion from 2004: "although there are a myriad of initiatives aimed at increasing use of evidence in practice, there is little systematic evidence of the effectiveness of these initiatives" (p. 38). As long as knowledge is this limited, it might be advisable, both in research and clinical practice, to choose a simple and cheap implementation strategy over a complicated and expensive one.
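As a minimal sketch of what such component-level description could look like in practice, the code below records a hypothetical multi-faceted strategy in a small data structure. All class names, fields and example values are my own assumptions for illustration; they are not drawn from CONSORT or from any of the studies reviewed.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InterventionComponent:
    """One component of a (possibly multi-faceted) implementation strategy."""
    label: str             # e.g., an EPOC-style category such as "audit and feedback"
    deliverer: str         # who delivers the component (e.g., "facilitator")
    receiver: str          # who receives it (e.g., "ward nurses")
    mode_of_delivery: str  # e.g., "face-to-face small group"
    duration_minutes: int  # length of each session or contact
    frequency: str         # e.g., "monthly for 6 months"

@dataclass
class ImplementationIntervention:
    guideline: str
    components: List[InterventionComponent] = field(default_factory=list)

# A hypothetical two-component strategy, described at reporting-level detail.
strategy = ImplementationIntervention(
    guideline="pressure ulcer prevention",
    components=[
        InterventionComponent("educational outreach visit", "facilitator",
                              "ward nurses", "face-to-face", 45, "monthly for 6 months"),
        InterventionComponent("audit and feedback", "quality officer",
                              "nurse managers", "written report", 0, "quarterly"),
    ],
)
for c in strategy.components:
    print(f"{c.label}: {c.deliverer} -> {c.receiver}, {c.mode_of_delivery}, {c.frequency}")
```

Recording components in this structured way would make it easier to report them consistently and to compare strategies across studies.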

5. Implementation research in nursing: thoughts and suggestions

The observant reader has probably already noticed my point of view: if we want to understand what strategies work in changing practice to be more evidence-based, then we must test these strategies. There are, however, a number of issues to consider in evaluating complex KT interventions, including the level of knowledge on the clinical topic of interest and its change priority, the design of the study, linkage to theory, the influence of contextual factors, and measurement and outcomes. In the following sections some comments are given on these issues.

5.1. Clinical topic

To set up an implementation study there must be something to implement, i.e. there is a need for well-developed and accurate knowledge on the clinical problem of interest. Thus, a history of rigorous research on the topic and a synthesis of the knowledge generated are required. The level of knowledge in a specific field might also reflect the opportunities to perform evaluation research: with less developed knowledge, there might be a lack of useful measures to evaluate changes in care processes and patient outcomes. Yet, well-developed knowledge is not the only criterion. There should also be a potential for and a need of improvement in the clinical area.

5.2. Study design

In the medical field RCTs have been the gold standard in implementation research. The Medical Research Council in the UK suggests that RCTs are likely the optimal study design to minimize bias and provide the most accurate estimate of a complex intervention's benefits (MRC, 2000, p. 2). The RCT is generally considered the most accurate method to assess cause and effect relationships (Thompson, 2004), and a well-conducted RCT should generate a relatively precise estimation of the effectiveness of an intervention. For example, it has been shown that guideline implementation through reminders has a median improvement effect of 14% (changes in practitioner behaviour or patient outcomes) and that various intervention strategies (reminders, audit and feedback, educational outreach and dissemination of educational materials) have an overall improvement effect of 9% (Grimshaw et al., 2004). RCTs reduce bias by decreasing the impact of various contextual factors, while unknown predictors of study outcomes are balanced between control and intervention groups. In fact, data from the ideal RCT are context-free. However, some authors challenge the usefulness of RCTs in evaluating complex social interventions. Walsh (2007), for instance, claims that the variance in context, content, application and outcomes is too high in these kinds of evaluations for RCTs to provide valid and reliable answers to the "does it work" question. He means that the answer is too often "yes, sometimes". Seers (2007) also hesitates to use RCTs in the evaluation of complex interventions. Rhetorically, she asks if the results are really robust when so many factors additional to the intervention are at play.

Such weaknesses, together with the equal distribution of confounders between control and intervention groups, appear to contribute to the regular RCT approach not providing helpful information about the circumstances under which a specific intervention works particularly well or not. This is clearly shown in Grimshaw et al.'s (2004) report. Despite reviewing 235 studies, this research team was unable to provide recommendations on when a specific implementation strategy should be used (or not used). To some extent this lack of guiding recommendations was a consequence of the many different combinations of interventions that were tested in the reviewed studies. In many studies researchers appear to have a preference for studying a novel strategy or combination of known strategies instead of replicating and evaluating previously used interventions. This approach makes it hard to aggregate findings and generate clear conclusions. Another recurrent shortcoming in the reviewed studies was the lack of examination of the implementation process. The RCT design in implementation research should preferably be complemented with process evaluations and specific measurements of context in order to increase explanatory power and understanding of the generalizability of a specific intervention (MRC, 2000; Seers, 2007). Process evaluation uses both quantitative and qualitative methods and can take many forms: individual interviews with key stakeholders, questionnaire surveys, observations and focus groups with staff and/or patients may elucidate the process of change and the important ingredients in that process (Blackwood, 2006; Oakley et al., 2006). In its document on the evaluation of complex interventions, the Medical Research Council in the UK (2000) emphasizes that qualitative research is particularly helpful in understanding why something happens and in identifying the active ingredients of such interventions. A good example of such research is the study by Graham et al. (2004), which explored the factors influencing the introduction of guidelines for fetal health surveillance, where Davies et al. (2002) had previously reported on the impact of the implementation strategy used (interactive workshops). Referring to my own experience, I achieved better insight into how the intervention was performed and how it was perceived by the nursing staff by using focus groups in a quasi-experiment on the implementation of guidelines in neonatal care (Wallin et al., 2005).

Obviously, RCTs have not been used in implementation research in nursing to any greater extent; rather, RCTs are the exception, and single-site interventions and quasi-experiments the rule. This is understandable considering that RCTs are complicated, costly and require a well-developed knowledge base in both the area of interest and trial methodology. Clinical research is a practical undertaking and thus, besides scientific considerations, designing a study is always a matter of feasibility and affordability. This suggests that single-site studies will probably continue to occur, since many practitioners who implement innovations and guidelines want to evaluate the effects of their efforts. Titler (2004) advocates investigating natural experiments because it is a way to capitalize on real-world initiatives that otherwise would be too costly or not feasible to investigate. We might be able to learn from single-site interventions that are conducted in a rigorous and reproducible way and by adding up recurrent findings from cases. However, single-site studies with before and after measurement provide results that are highly open to alternative interpretations regarding relationships between intervention and outcomes. In fact, most often it is not clear that the intervention is the cause of the effects. This problem is mirrored by the low quality ratings of the included studies in the published reviews (Thomas et al., 1999; Thompson et al., 2007).

If RCTs are difficult and expensive to perform and single-site studies present results of questionable validity, what are the alternatives? Other approaches suggested for evaluating complex interventions include the interrupted time series design, action research, detailed case studies and controlled before and after studies (Shojania and Grimshaw, 2005; Seers, 2007). Some of the advantages and disadvantages of these approaches are briefly described here. The interrupted time series design does not require an intervention that repeatedly needs to be turned on and off. It is an approach that looks at multiple time periods (e.g., monthly outcomes the year before and the year after the intervention); a minimal analysis sketch of this design is given at the end of this section. This procedure provides a much more accurate estimation of effects than short measurement periods before and after the intervention. Action research can offer useful information on the impact of contextual factors but has a problem with the generalizability of its findings. Sustainability might also be a problem, as the researcher, after being highly involved in the intervention, often leaves the field of study at a given time point. Case studies should also be useful for in-depth information about contextual effects, but they are similar to action research in that it is difficult to draw any general conclusions. Controlled before and after studies normally generate much more valid outcomes than single-site experiments because the experimental institutions are compared with similar institutions that did not implement the intervention. General trends in outcomes can thus be detected that otherwise might be mistaken for an impact of the intervention.

In summary, in implementation research the choice of study design ultimately depends on the research question. Research is an additive process, and many approaches have been used to establish the current level of knowledge. Recognizing the need of establishing evidence-based practice, it is time to scale up the learning about implementing guidelines and well-supported nursing interventions. Although there are several problems with the RCT method, it seems to be the right way to go at the present time, particularly when it is supplemented with a thorough process evaluation of the intervention under study.
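The sketch referred to above is given here: a segmented regression analysis of an interrupted time series, assuming 12 monthly observations before and 12 after an intervention. The data, effect sizes and variable names are simulated assumptions, not values from any study cited in this paper.

```python
# Minimal sketch: segmented regression for an interrupted time series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
months = np.arange(24)                             # months 0..23; intervention after month 11
post = (months >= 12).astype(float)                # indicator for the post-intervention period
time_since = np.where(post == 1, months - 11, 0)   # months elapsed since the intervention

# Simulated guideline-adherence rates: a secular trend plus a level change of 8 points.
adherence = 50 + 0.3 * months + 8 * post + rng.normal(0, 2, 24)

X = sm.add_constant(np.column_stack([months, post, time_since]))
model = sm.OLS(adherence, X).fit()
print(model.params)  # intercept, secular trend, level change, post-intervention slope change
```

The coefficient on the post-intervention indicator estimates the level change at the point of intervention, separated from the underlying secular trend; that separation is precisely what short before and after measurement periods cannot provide.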

5.3. Theoretical basis of implementation interventions

The need for theory to guide implementation research is currently under intensive debate (Rycroft-Malone, 2007). A main argument for the use of theory is the need to gain a fuller understanding of the range of factors at different levels that interact and determine whether and to what extent an implementation intervention results in change (Grol et al., 2007). In planning for any intervention it is essential to identify such factors and their potential interaction and effect. Available theories can be helpful in several ways: for example, they can help in describing and deriving these factors, in setting up testable hypotheses and in discussing the outcomes of a study. Estabrooks et al. (2006) and Grol et al. (2007) provide reviews of theoretical perspectives useful in developing testable KT interventions. Michie and Abraham (2004, p. 33) defined theory as "a system of ideas or statements held as an explanation or account of a group of factors or phenomena". Grol et al. (2007) divided theories into impact and process theories, where impact theories describe how an intervention will facilitate change and process theories can be used for planning and organizing implementation activities. In his criticism of RCTs for evaluating complex interventions, Walsh (2007) asserted that the theoretical basis for an intervention (why and how it works) is even more important than its empirical performance (whether it works). Opponents claim that instead of theory, more common sense and rigorous evaluations of important outcomes are needed (Oxman et al., 2005). I concur with the proponents of theory use (e.g. Eccles et al., 2005) that well-developed theory is a required prerequisite if progress is to be made in the KT field. An indication of the underdeveloped linking of theory to implementation research is that only 10% of the studies in Grimshaw et al.'s (2004) review provided an explicit theoretical rationale for the intervention under evaluation.

In KT there are primarily two theoretical frameworks that have been used in nursing: Rogers' Diffusion of innovations (2003), originally developed in rural sociology, and Promoting Action on Research Implementation in Health Services (PARIHS), developed in the 1990s as a reaction to prevailing linear models for research uptake (Kitson et al., 1998; Rycroft-Malone et al., 2002). Rogers suggested that four main elements influence the spread of a new idea: the innovation, communication channels, time and the social system. His work has been very influential in KT research generally. In nursing it guided the development of measures like the Barriers Scale (Funk et al., 1991) and the nurses' practice questionnaire (Brett, 1987). Both these instruments have been extensively used, however mainly in descriptive studies. The Diffusion of innovations framework has also been applied in recent studies, e.g. by Squires et al. (2007) in exploring the role of organizational policies for promoting research use among nurses, but also in the framing of intervention studies, e.g. Fink et al. (2005) and Abbott et al. (2006). The developers of PARIHS emphasized the strength of and interplay between the evidence being used, the capacity of the context in terms of coping with change and the type of facilitation needed to ensure a successful change process. PARIHS has attracted a large group of researchers in the field of KT in nursing. There are a number of publications from recent years (e.g. Alkema and Frey (2006), Brown and McCormack (2005), Cummings et al. (2007), Doran and Sidani (2007), Ellis et al. (2005) and Wallin et al. (2006a)) that either use PARIHS to establish a study framework or evaluate components of it. Although this framework is obviously useful and has potential for further development, I believe there is a need for more specific theories on issues such as individual learning and behaviour change as well as organizational learning and change. In its richness, Grol et al.'s (2007) review should be highly useful for further guidance on the development of implementation strategies. The review covers theories from the fields of cognition, learning, social influence, motivation, communication, leadership, professional development, organization, economy and other domains.

Somewhat tentatively, but based on the literature reviewed for this paper, I would like to point out a few implementation strategies of interest for evaluation in further studies. While various educational interventions have been the main target for nursing researchers, I think a broader range of strategies should be considered. Decision support in the form of reminders has shown strong potential in medical studies. It would be highly appropriate to examine whether that also holds for nurses, preferably by evaluating the effects of computer-based decision support linked to the patient record. Audit and feedback is another field where nursing studies are lacking. Is it a useful strategy to enhance the use of evidence in practice? Evaluation is part of the PARIHS framework, which puts a strong emphasis on facilitation as the primary method to support the implementation of evidence. This emphasis, however, is not based on an extensive empirical base. The validity of the PARIHS framework needs to be examined through intervention studies, and the areas which I believe are most interesting are the impact of facilitation, leadership and evaluation. These components should be highly feasible for controlled studies. A final thought on the issue of interventions to evaluate is the need for studies where only one strategy, maybe two, is evaluated. Enhanced knowledge on the effects of separate interventions should provide greater opportunities to design multi-faceted strategies.

5.4. The influence of contextual factors

The difficulties of implementing evidence might largely be explained in terms of contextual influences. Proving that an intervention works in one setting does not necessarily mean it will work in a different context (McCormack et al., 2002; Greenhalgh et al., 2004; Rycroft-Malone et al., 2004; Dopson and Fitzgerald, 2005; French, 2005; Wallin et al., 2006b; Estabrooks et al., 2007; Cummings et al., 2007). Factors frequently described as influencing the success or failure of a process of change include leadership, resources, time, support functions, staff development, interpersonal relationships, job pressure and organizational culture and climate. These factors, however, are probably not important in all settings.

Rather, it seems to be the case that each practice environment (such as a particular unit or department) offers its own specific set of factors that hinder or promote the implementation of new knowledge. A key element in the advancement of KT research is to find ways to integrate contextual factors into the study design in order to better understand the impact of such factors.

An approach advocated to enhance the potential for successful change processes is the identification of barriers to change, i.e. mapping the terrain before starting an implementation project and, based on the identified barriers, developing specific interventions to decrease their impact. Shaw et al. (2005) reviewed 15 RCTs using this approach. Barriers were predominantly identified through interviews and focus groups with professionals. The content of the barriers varied widely, as did the interventions to limit their influence. Shaw and co-workers, however, could not identify an overall significant positive impact of this strategy. Similar findings were reported in a multiple-case analysis by Bosch et al. (2007). This research group noted that few of the included studies used a consistent approach in linking the improvement intervention to the identified barriers, which resulted in a mismatch between barriers and interventions. Unfortunately, these two studies underscore that, even if barriers are identified, little is known about how to match barriers to interventions and which interventions are effective in overcoming barriers. In nursing there has been extensive use of the Barriers Scale as a survey tool, but not as a diagnostic component in intervention studies (Hutchinson and Johnston, 2006). The Barriers Scale measures perceptions of barriers to research utilization regarding the nurse, the setting, the research and the presentation of research. In a recent study, nurses' perceptions of barriers were compared with their ratings of research use in clinical practice (Boström et al., 2008). Significant differences in the perceptions of barriers were identified between research users and non-research users on the subscales 'the nurse', 'the research' and 'the presentation'. Knowing that these factors actually are linked to research use might provide some confidence for researchers assessing barriers to research use. However, the lack of difference between research users and non-research users on the subscale 'the setting' is problematic. Organizational barriers are often highlighted as the most prominent barriers to evidence-based practice. According to Boström et al.'s findings, either this is not completely true, or the Barriers Scale is not assessing the right barriers. Similar to the findings in Bosch's and Shaw's studies, Boström et al. conclude that the instrument identifies barriers at a generic level. Assessing barriers with such wide-ranging characteristics makes it difficult not only to link potential barriers to specific implementation strategies but also to design tailored interventions to decrease the barriers. There is probably a need for more specific measures of barriers. As in many other areas of implementation research, a lot of work remains.

Another path to take is to make the assessment of contextual factors a built-in component of the implementation study. In their review on innovation diffusion in healthcare, Greenhalgh et al. (2004) emphasized that implementation research needs to involve contextual aspects; otherwise, the contribution of a study to understanding the change process runs the risk of being rather limited. Assessing context could obviously be part of the previously described process evaluation, in which both qualitative and quantitative approaches are appropriate. Within the dimension of context, the concepts of organizational culture and organizational climate have received much attention. In a review on the measurement of these concepts, Gershon et al. (2004) identified 12 instruments, among them the nursing work index (NWI) (Aiken and Patrician, 2000). Personally, I have experience of working with the quality-work-competence (QWC) questionnaire (Wallin et al., 2006b) and the creative climate questionnaire (CCQ) (Boström et al., 2007). Both the QWC and CCQ instruments were useful in assessing work environment factors and in linking outcomes to staff perceptions of change of practice. However, instruments specifically developed to measure features of context that are known to be valid or potential predictors of KT are rare. One recent attempt is the context assessment index (CAI) (McCormack and McCarthy, 2007), an instrument developed to assess contextual factors in continence care. It links to the PARIHS framework by focusing on the measurement of leadership, culture and evaluation. In Canada, Estabrooks' research team is developing the Alberta Context Tool, but no report is yet available on this instrument.

Presuming that quantitative data collection on context is feasible and generates valid and reliable information, such data should preferably be integrated in the statistical analysis of intervention outcomes. Using (multilevel) multivariate regression analysis to include data on contextual factors would address the relative contributions of intervention and contextual factors in explaining the variation in outcome variables (Brown and Prescott, 2006). Such a procedure requires careful consideration in the planning stage of the study and access to a skilled statistician. In return, the opportunities to understand the influences of contextual factors would be greatly improved.
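As a concrete illustration of the kind of analysis suggested above, the sketch below fits a random-intercept mixed model in which a unit-level intervention indicator and a unit-level context score jointly predict individual nurses' research use. All data, scales and variable names are simulated assumptions, not measures from the instruments discussed in this section.

```python
# Minimal sketch: a two-level (nurses within units) mixed model that estimates
# the intervention effect while adjusting for a unit-level context score.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_units, per_unit = 20, 15
unit = np.repeat(np.arange(n_units), per_unit)
intervention = np.repeat(rng.integers(0, 2, n_units), per_unit)  # unit-level allocation
context = np.repeat(rng.normal(0, 1, n_units), per_unit)         # e.g., a climate/leadership score
unit_effect = np.repeat(rng.normal(0, 1.5, n_units), per_unit)   # shared between-unit variation

research_use = (10 + 2.0 * intervention + 1.2 * context + unit_effect
                + rng.normal(0, 2, unit.size))
df = pd.DataFrame({"research_use": research_use, "intervention": intervention,
                   "context": context, "unit": unit})

# The random intercept for unit captures the clustering of nurses within units.
fit = smf.mixedlm("research_use ~ intervention + context", df, groups=df["unit"]).fit()
print(fit.summary())
```

Comparing the estimated intervention and context coefficients indicates their relative contributions, while the random intercept absorbs unexplained between-unit variation.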

5.5. Outcome measurement

Outcome measurement in implementation research is primarily an issue of measuring the performance of care processes and the outcomes of care. Performance measures normally focus on practitioners' behaviour (e.g., adherence to guideline recommendations), while outcome indicators assess the impact on patients when evidence-based practice is used. In Grimshaw et al.'s (2004) review, 225/235 studies used process measures and 50/235 studies used outcome measures. This domination of process measurement in the evaluations probably reflects difficulties in developing valid and reliable measures of care outcomes. Another reason for the lesser focus on the measurement of outcomes is that it might take time for change to occur, while researchers at the same time are pressured to conduct the study and report its findings. Neither time nor resources are available for waiting; therefore the true impact of the intervention might be missed. The related issue of the sustainability of change is most often a neglected area of research. It is clearly challenging, both financially and methodologically, to extend intervention studies to be able to examine outcomes in a longitudinal perspective.

Thompson et al.'s (2007) review on interventions aimed at increasing nurses' use of research highlights some of the methodological challenges associated with measurement. Many studies in implementation research measure multiple outcomes (e.g., compliance with a large number of guideline recommendations). Such an approach makes it difficult to determine whether the intervention had the intended effect. For instance, is that the case when there is a significant difference between intervention and control on 3 of 15 measured outcomes, or when there is a difference on 5 of 10, or must the difference appear on all 15 variables? In line with many others (e.g. Grimshaw et al. (2004), Shojania and Grimshaw (2005) and Thompson et al. (2007)), I want to underline the importance of deciding on a primary outcome. Having a primary outcome variable is necessary for sample size calculations (power analysis), and the primary outcome should preferably constitute the variable with the highest weight in the interpretation of the results. Which variable should then be selected as the primary outcome? I am well aware of the hardship of identifying or developing appropriate patient outcome measures; nevertheless, I believe patient outcomes should be prioritized in evaluating the impact of implementing evidence-based practice and should constitute the first-hand alternatives. Another measurement issue, closely discussed in Thompson et al.'s review, is the potential need to assess what is happening in the 'black box' of implementation. It would be highly useful to follow the course of learning and research utilization. Such assessment would provide information about the implementation process and generate data that could be used in comparisons between different studies and different implementation strategies.

Another important issue in designing a study and interpreting its results is how the intervention is targeted and what the unit of analysis is. This issue is closely linked to sample size. Most implementation studies in nursing are, or should be, based on provider institutions (or units/groups/clusters) instead of individual patients or providers. For many reasons, it is difficult to conduct implementation targeted to specific patients or individual staff members within a unit, and thus it is normally better that the whole unit serves as the unit of analysis. Consequently, an intervention is (randomly) allocated to one or more units (often called clusters in this kind of research) and effects are compared with units to which no (or another) intervention is allocated. This means that the sample size of primary interest is the number of clusters, not the number of individuals within the clusters. A unit of analysis error occurs when the data are analysed as if individuals in the intervention clusters could be compared directly with individuals in the control clusters, without accounting for the dependency between individual observations and the cluster to which they belong. Unfortunately, this is a frequent mistake in implementation research in nursing (Thompson et al., 2007). Ignoring clustering effects normally results in underestimation of the required sample size, which may seriously flaw the results of the study (Wears, 2002; Dawson, 2004).
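To make the consequences of clustering concrete, the sketch below first computes a conventional two-arm sample size and then inflates it by the standard design effect, 1 + (m - 1) * ICC, where m is the cluster size and ICC is the intracluster correlation coefficient. The effect size, cluster size and ICC are illustrative assumptions only.

```python
# Minimal sketch: inflating an individually randomized sample size by the
# design effect to account for clustering of individuals within units.
import math
from scipy import stats

def n_per_arm_individual(effect_size, alpha=0.05, power=0.8):
    """Standard two-sample approximation for n per arm at a standardized effect size."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 / effect_size ** 2)

def design_effect(cluster_size, icc):
    """DE = 1 + (m - 1) * ICC: the variance inflation caused by clustering."""
    return 1 + (cluster_size - 1) * icc

n_ind = n_per_arm_individual(effect_size=0.4)    # assume a moderate standardized effect
de = design_effect(cluster_size=25, icc=0.05)    # assume 25 nurses per unit, ICC 0.05
n_clustered = math.ceil(n_ind * de)
print(n_ind, de, n_clustered,
      math.ceil(n_clustered / 25))               # individuals and clusters needed per arm
```

Even the modest ICC of 0.05 assumed here more than doubles the required sample per arm, which is exactly the inflation that an analysis ignoring clustering fails to account for.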

6. Conclusions

The current overview has examined some of the recent literature in the knowledge translation and implementation research field in nursing. It shows that progress is being made in several important areas, but researchers still need to contend with a number of challenging issues in moving the field from descriptive research to intervention studies on implementation strategies. Compared to Titler's (2004) paper on methods in translational science, many of the issues discussed remain unchanged. The recent and growing literature has, however, made it possible to look at this field of research with somewhat updated glasses. Current issues may be summarized as follows:

- Directing intervention studies to clinical areas where the implementation of evidence-based practice has a strong improvement potential.
- Selecting the most promising intervention strategies for evaluation, and keeping the focus there.
- Increasing the use of theory in developing intervention strategies and designing studies to increase the potential to answer such questions as what works, for whom and when.
- Mobilizing knowledge and resources for setting up more robustly designed studies (if possible, RCTs).
- Complementing trials with a careful process evaluation in order to understand which components of context and intervention are active in accomplishing change.
- Developing the assessment of barriers to change and increasing the understanding of how to overcome barriers with tailored interventions.
- Including contextual factors in the planning and evaluation of implementation studies.
- Deciding on primary outcome variables and the unit of analysis to carry out a valid evaluation of intervention effects.

The successful handling of such issues does not involve a straightforward and simple research agenda. Rather, it requires programmatic, purposeful and sustainable work. It entails a strong network of collaboration among nursing researchers within and across nations as well as collaboration with researchers from other disciplines; implementation research is fundamentally interdisciplinary. Last but not least, carrying out useful implementation research requires a strong mobilization of funding.

Acknowledgement

I would like to thank Anna Ehrenberg for useful comments in processing this manuscript.

Conflict of interest

None.

Funding

This article was written through financial support from The Centre for Health Care Science at Karolinska Institutet.

Ethical approval

None.

References
Abbott, C., Dremsa, T., Stewart, D.W., Mark, D.D., Swift, C.C., 2006. Adoption of a ventilator-associated pneumonia clinical practice guideline. Worldviews on Evidence-Based Nursing 3 (4), 139–152.
Aiken, L.H., Patrician, P.A., 2000. Measuring organizational traits of hospitals: the revised nursing work index. Nursing Research 49 (3), 146–153.
Alkema, G.E., Frey, D., 2006. Implications of translating research into practice: a medication management intervention. Home Health Care Services Quarterly 25 (1/2), 33–54.
Bero, L., Grilli, R., Grimshaw, J., Harvey, E., Oxman, A., Thomson, M., 1998. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. British Medical Journal 317, 465–468.
Blackwood, B., 2006. Methodological issues in evaluating complex healthcare interventions. Journal of Advanced Nursing 54 (5), 612–622.
Blackwood, B., Wilson-Barnett, J., 2007. The impact of nurse-directed protocolised-weaning from mechanical ventilation on nursing practice: a quasi-experimental study. International Journal of Nursing Studies 44 (2), 209–226.
Bosch, M., van der Weijden, T., Wensing, M., Grol, R., 2007. Tailoring quality improvement interventions to identified barriers: a multiple case analysis. Journal of Evaluation in Clinical Practice 13, 161–168.
Boström, A.-M., Wallin, L., Nordström, G., 2007. Evidence-based practice and determinants of research use in elderly care in Sweden. Journal of Evaluation in Clinical Practice 13, 665–673.
Boström, A.-M., Nilsson Kajermo, K., Nordström, G., Wallin, L., 2008. Barriers to research utilization and research use among registered nurses working in the care of older people: does the BARRIERS Scale discriminate between research users and non-research users on perceptions of barriers? Implementation Science 3, 24, doi:10.1186/1748-5908-3-24.

Brett, J.L.L., 1987. Use of nursing practice research findings. Nursing Research 36 (6), 344–349.
Brown, D., McCormack, B., 2005. Developing postoperative pain management: utilising the promoting action on research implementation in health services (PARIHS) framework. Worldviews on Evidence-Based Nursing 2 (3), 131–141.
Brown, H., Prescott, R., 2006. Applied Mixed Models in Medicine, 2nd ed. John Wiley & Sons Ltd., Chichester.
Bucknall, T., 2007. Implementing guidelines to improve medication safety for hospitalised patients: experiences from Western Health, Australia. Worldviews on Evidence-Based Nursing 4 (1), 51–53.
Campbell, M.K., Elbourne, D.R., Altman, D.G., 2004. CONSORT group. CONSORT statement: extension to cluster randomised trials. British Medical Journal 328 (7441), 702–708.
Canadian Institutes of Health Research (CIHR), 2008. About Knowledge Translation. The KT Portfolio at CIHR. Accessed 13.01.2008 at http://www.cihr-irsc.gc.ca/e/29418.html#The.
Cheater, F., Baker, R., Reddish, S., Spiers, N., Wailoo, A., Gillies, C., Robertson, N.D., Cawood, C., 2006. Cluster randomized controlled trial of the effectiveness of audit and feedback and educational outreach on improving nursing practice and patient outcomes. Medical Care 44 (6), 542–551.
Cochrane: Effective Practice and Organization of Care Group (EPOC), 2008. EPOC-specific resources for review authors. Accessed 18.01.2008 at http://www.epoc.cochrane.org/en/handsearchers.html.
Cummings, G.C., Estabrooks, C.A., Midodzi, W.K., Wallin, L., Hayduk, L., 2007. Influence of organizational characteristics and context on research utilization. Nursing Research 56 (4), S24–39.
Darmstadt, G.L., Bhutta, Z.A., Cousens, S., Adam, T., Walker, N., de Bernis, L., 2005. Evidence-based, cost-effective interventions: how many newborn babies can we save? Lancet 365 (9463), 977–988.
Davies, B., Hodnett, E., Hannah, M., O'Brien-Pallas, L., Pringle, D., Wells, G., 2002. Fetal health surveillance: a community-wide approach versus a tailored intervention for the implementation of clinical practice guidelines. Canadian Medical Association Journal 167, 469–474.
Dawes, M., Summerskill, W., Glasziou, P., Cartabellotta, A., Martin, J., Hopayain, K., Porzsolt, F., Burls, A., Osborne, J., 2005. Sicily statement on evidence-based practice. BMC Medical Education 5, 1.
Dawson, J., 2004. Quantitative analytical methods in translational research. Worldviews on Evidence-Based Nursing 1 (3), S13–S20.
Dopson, S., Fitzgerald, L., 2005. The active role of context. In: Dopson, S., Fitzgerald, L. (Eds.), Knowledge to Action? University Press, Oxford, pp. 79–102.
Doran, D.M., Sidani, S., 2007. Outcomes-focused knowledge translation: a framework for knowledge translation and patient outcomes improvement. Worldviews on Evidence-Based Nursing 4 (1), 3–13.
Dufault, M., Bielecki, C., Collins, E., Willey, C., 1995. Changing nurses' pain assessment practice: a collaborative research utilization approach. Journal of Advanced Nursing 21, 634–645.
Eccles, M., Grimshaw, J., Walker, A., Johnston, M., Pitts, M., 2005. Changing the behavior of healthcare professionals: the use of theory in promoting the uptake of research findings. Journal of Clinical Epidemiology 58, 107–112.
Eccles, M., Mittman, B., 2006. Welcome to implementation science. Implementation Science 1, 1, doi:10.1186/1748-5908-1-1.
Edwards, N., Peterson, W.E., Davies, B.L., 2006. Evaluation of a multiple component intervention to support the implementation of a therapeutic relationships best practice guideline on nurses' communication skills. Patient Education and Counseling 63 (1/2), 3–11.
Ellis, I., Howard, P., Larson, A., Robertson, J., 2005. From workshop to work practice: an exploration of context and facilitation in the development of evidence-based practice. Worldviews on Evidence-Based Nursing 2 (2), 84–93.
Estabrooks, C.A., Floyd, J.A., Scott-Findlay, S., O'Leary, K.A., Gushta, M., 2003. Individual determinants of research utilization: a systematic review. Journal of Advanced Nursing 43 (5), 506–520.
Estabrooks, C.A., Scott-Findlay, S., Winther, C., 2004. A nursing and allied health sciences perspective on knowledge utilization. In: Lemieux-Charles, L., Champagne, F. (Eds.), Using Knowledge and Evidence in Health Care. University of Toronto Press, Toronto, pp. 242–280.
Estabrooks, C.A., Thompson, D.S., Lovely, J.E., Hofmeyer, A., 2006. A guide to knowledge translation theory. The Journal of Continuing Education in the Health Professions 26, 25–36.
Estabrooks, C., 2007. Prologue. A program of research in knowledge translation. Nursing Research 56 (4), S4–S6.
Estabrooks, C.A., Midodzi, W.K., Cummings, G.C., Wallin, L., Adewale, A., 2007. Predicting research use in nursing organizations: a multilevel analysis. Nursing Research 56 (4), S7–23.
Feldman, P.H., Murtaugh, C.M., Pezzin, L.E., McDonald, M.V., Peng, T.R., 2005. Just-in-time evidence-based e-mail reminders in home health care: impact on patient outcomes. Health Services Research 40, 865–885.
Feldman, P.H., Peng, T., Murtaugh, C., Kelleher, C., Donelson, S., McCann, E., 2004. A randomized intervention to improve heart failure outcomes in community-based home health care. Home Health Care Services Quarterly 23, 1–23.
Fink, R., Thompson, C.J., Bonnes, D., 2005. Overcoming barriers and promoting the use of research in practice. Journal of Nursing Administration 35 (3), 121–129.
French, B., 2005. Contextual factors influencing research use in nursing. Worldviews on Evidence-Based Nursing 2 (4), 172–183.
Funk, S.G., Champagne, M.T., Wiese, R.A., Tornquist, E.M., 1991. BARRIERS: the barriers to research utilization scale. Applied Nursing Research 4, 39–45.
Gershon, R.R.M., Stone, P.W., Bakken, S., Larson, E., 2004. Measurement of organizational culture and climate in healthcare. Journal of Nursing Administration 34, 33–40.
Gould, D., Chamberlain, A., 1997. The use of a ward-based educational teaching package to enhance nurses' compliance with infection control procedures. Journal of Clinical Nursing 6, 55–67.
Graham, I.D., Logan, J., Davies, B., Nimrod, C., 2004. Changing the use of electronic fetal monitoring and labor support: a case study of barriers and facilitators. Birth 31 (4), 293–301.
Graham, I.D., Logan, J., Harrison, M.B., Straus, S.E., Tetroe, J., Caswell, W., Robinson, N., 2006. Lost in knowledge translation: time for a map? The Journal of Continuing Education in the Health Professions 26, 13–24.
Graham, I.D., Tetroe, J., 2007. Whither knowledge translation. Nursing Research 56 (4S), 86–88.

