
The practice problem identified by the authors of the chosen article, "Evaluation of a preoperative team briefing: a new communication routine results in improved clinical practice," was the administration of preoperative antibiotics within a time period that meets practice standards, generally accepted as within one hour of the first surgical incision (Haynes et al., 2009). The identified practice problem has important implications for nursing, in that it is the nursing service that is responsible for carrying out provider instructions for the delivery of care to the patient. It is therefore incumbent upon the charge nurse to ensure that the nurses under her direction carry out orders competently; that includes administering medications accurately and on time, so that each medication has an optimal chance of producing the intended benefit for the patient, and documenting the antibiotic administration appropriately, accurately, and promptly. Haynes et al. (2009) used a quantitative approach: a 19-item surgical checklist was analyzed and the results were measured and applied in an appropriate quantitative way, on the premise that improvements in team communication and consistency of care would reduce the complications and deaths associated with surgery. They did not use a single-item checklist to estimate its solo effect on surgery-related, possibly preventable, infections. In the study reviewed here, by contrast, the single checklist item concerned consistent documentation of the time an antibiotic was administered, not the actual, potentially discordant, administration time itself.

The Haynes group's primary end point was the rate of complications, including death, during hospitalization within the first 30 days after the operation (Haynes et al., 2009). The applicability of those data is readily apparent, whereas the end point of the study reviewed here was the time documentation of an antibiotic's administration, not a well-defined clinical end point like that measured by Haynes et al. The reviewed study also relied on a limited combination of retrospective and prospective samplings. Its only defensible conclusion, then, would have been a suggestion rather than a recommendation, given such limited data and the absence of outcome statistics; a significant, demonstrated impact on outcomes is precisely what would make hospital units likely to accept and implement a recommendation. The Haynes group used a 19-item checklist, not a single item, and certainly not one that had not been shown to be dominant in determining surgical complications. A good quantitative method is designed to produce numerical results that can be summarized as evidence supporting the validity of what was obtained. The result obtained here appears all but useless, except as something to have in place when seeking accreditation from an organization such as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), and it was insufficient to support the authors' conclusion. Too many variables went unaccounted for: some teams or units refused to join the study from the outset, and others dropped out when piggy-backing was instituted. Had the reasons for declining, or for later leaving the study before its completion once piggy-back duties were instituted, been examined, as in the article by Rosenberg et al. (2008), they might well have been important in bringing

forth a more viable conclusion, and recommendations with far stronger evidence-based support. A quantitative study expresses its results in meaningful numbers, in order to reflect more accurately the point being sought or the position taken by the researchers. Experimental results can be expressed in various ways, and one must be careful to assess fully what the results represent and what the researchers' primary reasons were for undertaking a particular study. Their guideline grew out of the earlier work of Haynes et al. (2009), which evaluated whether a pre-surgical checklist would improve clinical practice. The authors of the guideline for a preoperative team briefing, as a new communication routine to improve clinical practice, were Lingard, Regehr, Cartmill, Orser, Espin, Bohnen, Reznick, Baker, Rotstein, and Doran. Dr. Lorelei Lingard is a leading researcher in the study of communication and collaboration on healthcare teams; she is a Professor in the Department of Medicine at the University of Western Ontario (UWO) and the inaugural Director of the Centre for Education Research & Innovation at the Schulich School of Medicine & Dentistry. From the beginning of the article, "Evaluation of a preoperative team briefing: a new communication routine results in improved clinical practice," the researchers stated that the research was undertaken because they wanted to quantify the impact of the preoperative team briefing on direct patient care. What was actually studied was only the timing of preoperative antibiotic administration compared with accepted treatment guidelines, in three patient-care settings, not a quantification of a process or procedure, and with no

confirming evidence of the effect the process or procedure had on outcomes in their particular study. The study determined nothing about patient care; it improved only documentation, not care itself, and an actual impact on patient care cannot be demonstrated without observed outcome data. What was done would hardly constitute sufficient sampling, or a sufficient analytic approach, to extrapolate (without outcome data) into recommendations for the entire medical community.

One of the references the researchers used to support their conclusion stated, on the basis of a more comprehensive analysis, that researchers tracking prophylactic antibiotic timing as an outcome measure "should be aware that documentation of antibiotic timing in the patient chart is frequently incomplete and unclear, and these inconsistencies should be accounted for in analyses"; it recommended that general adoption of a systematic approach to analyzing this type of data would allow for cross-study comparisons and ensure that interpretation of results rests on timing practices rather than documentation practices (Cartmill et al., 2009). The current study certainly did not do that. Nor did the evidence gathered seem sufficiently quantified, or quantified in a way that would yield recommendations the surgical community would seriously entertain, in terms of whether to adopt them. All the present study did was report whether the percentage rate of on-time documentation, not on-time administration, of the antibiotic rose significantly; that is definitely not sufficient to recommend the standardization of

a protocoled checklist of duties said to improve on-time preoperative antibiotic administration for an entire surgical and nursing community, when the study did not measure that. In an evidence-based project, the evidence should support the conclusion drawn from the numbers produced by the experiment. In quantitative research, the values observed should be displayed and elucidated in the results section of the write-up and then used in drawing conclusions, regardless of whether they were expected or surprising; a surprising result may itself herald a new, serendipitously found breakthrough in the field being studied. "The majority of published research overwhelmingly suggests an association between properly timed antibiotic prophylaxis and reduced rates of surgical site infections. However, diverse data collection methods and approaches to analysis make comparisons across studies difficult and impede knowledge building in this field" (Cartmill et al., 2009). The present study's design was also too small to support the conclusion that was drawn: that surgical units need a standardized checklist to ensure duties and responsibilities are discharged in a timely manner, so as to increase the likelihood of optimal patient care. First, no real accountability information was gathered, since the checklist did not have to be signed by the person responsible for a duty or assignment once it was completed. Second, it would have been pertinent to obtain data from the people who refused to institute the checklist for the study; and although comments on sustainability would have been helpful, none were forthcoming.

It was apparent from the sustainability reference they used, Rosenberg et al. (2008), that compliance dropped by over 2% by the end of the examination period; that study found, for sustainability measurement purposes, that sustainability declined after an antibiotic prophylaxis safety initiative was piggy-backed onto a pre-existing surgical time-out. This finding may indicate that such a checklist needs to be incorporated within an already existing accountability format rather than added onto one.

Although the present study was not designed to look at sustainability, the study of Rosenberg et al. either should not have been presented or should have been compared with what may have been found in this study. That comparison could itself be the subject of an independent research study: one that might identify the simplest measure or procedure to accomplish the intended goal (here, on-time antibiotic administration) and alert the researchers to glitches that could keep a very promising process from ever seeing the light of day. This is particularly true since the researchers here also used the results of the 2009 study by Haynes et al., which marked an important step forward in linking briefing practices to morbidity and mortality outcomes. Lastly, the researchers drew a flawed conclusion from their study: patient safety is not necessarily adversely affected by whether a pre-surgical, prophylactic antibiotic administration is documented in a timely manner; the important thing is that the antibiotic was given on time, even if it was not documented on time.

Furthermore, their conclusion did not quantitatively show, as they had intended, that the orchestrated intervention had any direct, measurable effect that could be categorized as significantly impacting patient safety; the study did not demonstrate that clearly or convincingly. A qualitative conclusion would have been more appropriate here, since the outcome was a change in communication, which is hard to quantify, rather than an effect on clinical outcome. An appropriate question to answer would have been the effect of the increased communication on measures more synonymous with true patient safety, rather than an intervention that, while having some effect, may not have appreciably changed the number and rate of infections from what the surgical staffs involved were achieving previously. This study drew an unsubstantiated conclusion that failed to consider, or even excluded, some very important data, and it appears only to add another task for an already burdened staff: one that, while shown by other research to be important, did not produce results expressed here in any meaningful and accurate way. Nor did the study show the effect of the intervention on improving the timing of antibiotic administration, as opposed to merely improving documentation, which may or may not have been significant in terms of the examined institutions' post-surgical infection rates. It may, however, have provided some intriguing data useful for in-depth analysis across other, similar research.

References

Cartmill, C., Lingard, L., Regehr, G., Espin, S., Bohnen, J., Baker, R., & Rotstein, L. (2009). Timing of surgical antibiotic prophylaxis administration: Complexities of analysis. BMC Medical Research Methodology.

Haynes, A. B., Weiser, T. G., Berry, W. R., et al. (2009). A surgical safety checklist to reduce morbidity and mortality in a global population. New England Journal of Medicine, 360, 491.

Lingard, L., Regehr, G., Cartmill, C., Orser, B., Espin, S., Bohnen, J., Reznick, R., Baker, R., Rotstein, L., & Doran, D. (2011). Evaluation of a preoperative team briefing: A new communication routine results in improved clinical practice. BMJ Quality & Safety, 20(6), 475-482. doi:10.1136/bmjqs.2009.032326

Rosenberg, A. D., Wambold, D., Kraemer, L., et al. (2008). Ensuring appropriate timing of antimicrobial prophylaxis. Journal of Bone and Joint Surgery (American), 90, 226-232.
