
art & science research


Investigating the use of simulation as a teaching strategy


Shepherd CK et al (2010) Investigating the use of simulation as a teaching strategy. Nursing Standard. 24, 35, 42-48. Date of acceptance: December 11 2009.

Abstract
Aim To compare the performance of two groups of pre-registration nursing students exposed to two different methods of simulation as a teaching strategy, with the aim of providing an evidence base to assist in the selection of appropriate teaching methods, and to inform resource allocation with regard to teaching clinical skills.

Method A longitudinal, comparative quasi-experimental design, including a validated and piloted assessment tool, was used to evaluate students' performance within three domains: cognitive (knowledge and decision making), motor and affective. Students also completed self-assessments of confidence and anxiety levels. Data were statistically and thematically analysed.

Results Students who had been exposed to different forms of simulated teaching showed no significant difference in performance within the cognitive and motor domains. However, one form of simulation was more effective in enabling students' learning in the affective domain (students' interpersonal, communication and professional nursing skills). An unexpected finding was students' inability to measure vital signs manually.

Conclusion Simulation as a teaching strategy contributes to students' learning. Education providers and clinicians need to recognise that overuse of automated equipment may potentially de-skill future generations of nurses.

Author
Chew Kim Shepherd, Margaret McCunnis and Lynn Brown, lecturers, School of Health, Nursing and Midwifery, and Mario Hair, lecturer, School of Science, University of the West of Scotland, Paisley. Email: kim.shepherd@uws.ac.uk

Keywords
Nursing students, patient assessment, simulation, skills

These keywords are based on subject headings from the British Nursing Index. All articles are subject to external double-blind peer review and checked for plagiarism using automated software. For author and research article guidelines visit the Nursing Standard home page at www.nursing-standard.co.uk. For related articles visit our online archive and search using the keywords.

THE NURSING AND MIDWIFERY COUNCIL (NMC) (2005) carried out a review to assess if nursing students are fit to practise at the point of registration. This was in spite of the NMC's (2004) clear, comprehensive and exhaustive guidance covering the required standards of proficiency for pre-registration nursing. These standards stipulate the overarching principles necessary to practise as a nurse. The NMC's (2005) review raised concerns regarding the variation in competence of newly qualified registered nurses in certain fundamental and essential nursing skills, for example communication, medication administration and decision making (Wilford and Doyle 2006). However, the NMC (2005) indicated that students must be fit for practice at the point of registration, and employers are asking educators to do a better job of preparing students for the real world of nursing (Jeffries 2005).

Following this review (NMC 2005), support for the use of simulation as a pre-registration teaching and learning strategy has gained momentum (Prescott and Garside 2009). Simulation assists a student by consolidating his or her skills and addressing skills deficits (Wilford and Doyle 2006). Billings and Halstead (2005) defined simulation as: 'A near representation of an actual life event; may be presented by using computer software, role-play, case studies, or games that represent realities and actively involve learners in applying the content of the lesson.' Simulation is the promotion of understanding through doing (Billings and Halstead 2005).

In response to the concerns raised by the review, the NMC implemented a simulation and practice project, which consulted on how best to ensure competence in practice and indicated the need to look more closely at how simulation could support the development of direct care skills needed for safe and effective nursing practice. The findings of the project were positive and suggested that simulated learning achieves the following (NMC 2007):
- Helps students to achieve clinical learning outcomes.
- Provides students with learning opportunities that are not possible in a clinical setting.
- Increases students' confidence in the clinical environment.

Nurses of today must be critical thinkers (Nehring et al 2002, Dickerson 2005), effective decision makers (Bakalis and Watson 2005) and competent. Clinical decision making is where discriminative thinking is used to choose a specific course of action (Cioffi 1998), and is essential to the future of professional nursing practice (Tschikota 1993). Experienced nurses, managers and those responsible for staff development find that many students and newly qualified nurses lack the critical thinking skills needed to work in the increasingly complex clinical environment (Aronson et al 1997). Critical thinking, when applied to nursing, incorporates data collection and analysis, explores what is known in relation to the outcome, and examines the individual and determines the best course of action (Dickerson 2005). Qualified nurses must use effective decision-making skills to provide safe nursing care (Paul 1993). It is also recognised that higher education affects the quality of decision making and the intellectual maturity of the student (Glen 1995).

Simulation is a fast-moving trend in nursing education. Considerable funds are being invested in the development of sophisticated clinical simulation rooms to help create a variety of healthcare environments, from critical care areas to general wards and community healthcare settings. However, it could be suggested that equipment is often purchased without a clear plan for implementing appropriate learning practices. As the use of simulated learning continues to increase, more research is needed to identify the hallmarks of good simulation (Jeffries 2006).

This study seeks to establish if simulation promotes effective learning. In addition, it intends to inform resource allocation with regard to teaching clinical skills to assist in the selection of appropriate teaching methods. The literature suggests that there is confusion and ambiguity regarding definitions of competence (Eraut 1998, McMullan et al 2003).
In this study, competence in relation to effective nursing skills involved knowledge and understanding, the ability to solve problems and make decisions, and to practise in a professional context using appropriate interpersonal and communication skills (Baillie 2005).

Literature review
Clinical skills laboratories are widely used to help integrate theory and practice, and to address problems associated with insufficient practice placements (Morgan 2006). Universities in Scotland use various methods, including simulation, to help clinicians develop new skills as well as assess existing skills (NHS Education for Scotland 2005). With simulation technology, undergraduate students can gain and improve skills in a safe, non-threatening experiential environment that also provides opportunities for decision making, critical thinking and team building (Medley and Horne 2005).

Various studies have examined the effect of a variety of simulation techniques on student learning (Jeffries et al 2003, Haskvitz and Koop 2004, Comer 2005, Jeffries 2005, Larew et al 2006, Harlow and Sportsman 2007). Haskvitz and Koop (2004) recognised that there was no quantitative evidence to prove that clinical simulation using manikins was more effective than alternative teaching methods. Many studies are subjective, qualitative and only consider the students' perspective of simulation (Henrichs et al 2002, Ramsay et al 2008, Prescott and Garside 2009). Even though these evaluations were positive, the studies did not measure if learning had taken place. It has been suggested that confidence may improve if simulation is used (Alinier 2003, Mayne et al 2004). Fletcher (2005) indicated that simulation increased confidence levels, but did not provide any evidence to support this assumption. Prescott and Garside (2009) also found that confidence gained during simulation was disseminated into clinical practice, but again with no supporting evidence. Following the use of electronic patient simulators, only half of the students in Harlow and Sportsman's (2007) study thought that the skills could be transferred to real-life situations. Comer (2005) found that role play techniques can be an effective substitute for clinical simulation when teaching clinical skills. Comer (2005) also identified that role play increased students' understanding and improved their examination performance.
Alinier et al (2006) critically appraised the value of simulation in nurse education by comparing the performance of two student groups using an objective structured clinical examination (OSCE). The experimental group was exposed to simulation training. The researchers found that the experimental group's performance significantly improved in the second OSCE (Alinier et al 2006). It is important, however, to acknowledge that identical OSCE stations were used at the time of re-testing and that global scores using a checklist were recorded rather than scores for individual components of competence.

The authors' study intends to establish if a specific method of simulation is more effective in assisting learning in relation to practical nursing skills (knowledge, understanding, decision making and problem solving), motor skills and affective skills.



Aim
The main aims of this study were to:
- Compare the performance of two groups of pre-registration nursing students exposed to two different methods of simulation: role play versus the use of manikins.
- Provide an evidence base to assist in the selection of appropriate teaching methods within pre-registration nursing programmes.
- Inform resource allocation with regard to teaching clinical skills.


Method
This longitudinal study used a quantitative quasi-experimental design. Problem-solving nursing scenarios were used to compare students' performance in relation to measuring and assessing vital signs in a simulated environment. The study was carried out between January 2008 and February 2009. Participants were in their final year of a three-year pre-registration adult nursing programme. The first phase took place in the middle of year three (June/July 2008) and the second phase towards the end of the programme (January 2009). A pilot study was carried out in January 2008 with a sample of five students from a different cohort of students to those in the main study. This allowed the researchers to assess the feasibility of the study and to test the data collection tools for validity and reliability. It also allowed the researchers to trial the recording equipment.

Sample All third-year students were invited to participate, of which 28 students agreed to take part. Participants were allocated to a site: site A (n=18) and site B (n=10).

Data collection A validated and piloted assessment tool was used to evaluate the students' performances within the cognitive (knowledge and understanding, and decision making and problem solving), motor and affective domains (University of the West of Scotland 2008). Each student was awarded a percentage to enable comparisons to be made. The total score possible equalled 100% (knowledge and understanding: 25%, decision making and problem solving: 25%, motor: 25% and affective: 25%). Students also completed self-assessments of confidence (Schwarzer 1992) and anxiety (Spielberger 1983) levels before and following the simulation exercise. No automated equipment was used at either site to measure vital signs.

Phase 1 Participants at both sites were assessed in the skills laboratory using a clinical scenario appropriate to the stage of their programme, which was the third year of a Bachelor of Science adult nursing programme. On completion of the patient assessment, specific questions were asked to assess students' knowledge and understanding, and decision making and problem solving, relating to measuring and assessing vital signs. At site A, a volunteer was recruited and briefed to act as a patient in the role play simulation. At site B, a high-fidelity manikin that simulates vital signs and verbal sounds was used. All participants were examined within a specified time frame and their performances were recorded. These recordings allowed the students' performance to be viewed remotely and negated the need for an assessor to be present in the room. The recordings were sent to two external assessors experienced in the assessment of clinical skills.

Phase 2 Following six months of clinical practice, at the end of their programme, participants were reassessed using a similar scenario to that used in phase 1. This provided an opportunity to test whether there were any changes in the students' performance following an extensive period of practice. In phase 2, 24 students continued to participate (site A (n=15) and site B (n=9)) and four students declined to participate any further.

Data analysis The study sought to establish whether there were any significant differences in performance between the two sites. Recordings were randomly assigned to the external assessors and both received recordings from both sites. Inter-marker reliability was addressed by each assessor cross-marking five performances. Quantitative and qualitative data were generated. Quantitative data were analysed by a statistician, and descriptive and inferential statistics were produced. Qualitative data were analysed thematically by the researchers.
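The cross-marking check described above can be illustrated in code. The article does not state which agreement statistic was used, and the marks below are hypothetical; the sketch simply shows one common way of screening five cross-marked percentage scores for inter-marker reliability (correlation between assessors plus the average size of their disagreement):

```python
# Illustrative inter-marker reliability check on cross-marked performances.
# The study reports only that each assessor cross-marked five recordings;
# the agreement statistic and these percentage marks are hypothetical.
from scipy import stats

assessor_1 = [55, 62, 48, 70, 59]  # hypothetical marks from assessor 1
assessor_2 = [53, 60, 50, 68, 61]  # hypothetical marks from assessor 2

# Pearson correlation: do the assessors rank the performances similarly?
r, p = stats.pearsonr(assessor_1, assessor_2)

# Mean absolute difference: how far apart are the actual marks?
mean_abs_diff = sum(abs(a - b) for a, b in zip(assessor_1, assessor_2)) / 5

print(f"Pearson r = {r:.2f}, mean absolute difference = {mean_abs_diff:.1f} marks")
```

A high correlation with a small mean absolute difference would support treating the two assessors' marks as interchangeable; a more formal analysis might use an intraclass correlation coefficient instead.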
Ethical issues Ethical approval was sought from the university's ethics committee. Following approval, the principal investigator and research team approached the cohort of students to be studied at both sites to recruit them for the study. In April 2008, potential participants were approached regarding their participation in the study. If students chose to take part, they were asked to sign an opt-in form at recruitment and to sign a consent form before participation. At these recruitment sessions, students were issued with an information pack containing details of the study and what would be required of them should they choose to participate. They were informed that they could withdraw from the study at any stage, even if they had decided to participate, without having to provide an explanation. Participants were informed that all data collected would be anonymised and all results would be kept securely by the principal investigator.

TABLE 1 Results of phase 1

Domain (maximum score)                               Site A (n=18)   Site B (n=10)
                                                     Mean (SD)       Mean (SD)
Total (100)                                          55.17 (11.60)   55.10 (8.99)
Cognitive: knowledge and understanding (25)          13.67 (3.16)    14.30 (2.54)
Cognitive: decision making and problem solving (25)  13.67 (2.50)    13.40 (3.27)
Total for cognitive domain (50)                      27.33 (5.39)    27.70 (5.19)
Motor (25)                                           12.94 (3.72)    12.80 (2.53)
Affective (25)                                       14.89 (4.14)    14.60 (3.75)

Results
Phase 1 Table 1 shows the mean and standard deviation of scores at each site for the overall total mark and the three separate domains that make up the total (cognitive is the sum of both cognitive elements: knowledge and understanding, and decision making and problem solving). Mean scores for the overall total and for each domain are similar, with no site being consistently better or worse. The mean difference between sites for each score was formally tested using an independent sample t-test. There was no evidence to suggest that either site achieved better results than the other.

However, with any statistical test there is always a possibility that the test is not sufficiently powerful to detect a real difference. The probability that the test could detect a real difference is called the power of the test. A post hoc power analysis was conducted to determine the likelihood that the test could detect a real difference. In this case, the power of the test is such that it can be reasonably concluded that any real difference between sites in excess of 10% could be detected by these tests (Figure 1). A difference of 10% equates to an effect size of one and gives a power of 70%.

Anxiety and confidence Both pre-test anxiety and confidence were higher in participants at site A than site B. However, neither of these differences was statistically significant. To confirm whether anxiety or confidence had any significant effect on test scores, an analysis of covariance was undertaken with total score as the dependent variable, the site as the fixed factor and both pre-test anxiety and confidence levels as covariates. There was no significant difference between sites (F(1, 24)=0.03, P=0.863).

There was, however, a significant negative correlation (r=-0.683, P≤0.01) between pre-test anxiety and change in anxiety. The more anxious students were before the test, the smaller the change in anxiety after the test. Those who were least anxious before the test experienced the biggest increase in anxiety after the test. The pattern was similar at both sites. There was also a significant negative correlation between change in confidence and change in anxiety after the test (r=-0.572, P<0.01). The greater the change in confidence after the test, the smaller the change in anxiety. Those students who showed the greatest increase in confidence following simulation also showed the least increase in anxiety. Once again there was a similar pattern at both sites.
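The shape of the phase 1 analysis can be sketched with standard tools. The raw scores are not published, so the arrays below are synthetic, parameterised only by the means and standard deviations in Table 1; the sketch shows the two tests named in the text (an independent-samples t-test between sites, and a Pearson correlation between pre-test anxiety and change in anxiety):

```python
# Sketch of the phase 1 comparisons using synthetic data.
# The study's raw data are not published; Table 1 means/SDs are used only
# to parameterise illustrative samples, and the anxiety values are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
site_a = rng.normal(55.17, 11.60, 18)  # total scores, site A (Table 1)
site_b = rng.normal(55.10, 8.99, 10)   # total scores, site B (Table 1)

# Independent-samples t-test between sites (two-sided, equal variances)
t, p = stats.ttest_ind(site_a, site_b)
print(f"t({len(site_a) + len(site_b) - 2}) = {t:.2f}, P = {p:.3f}")

# Correlation between pre-test anxiety and change in anxiety:
# the synthetic 'change' is built to fall as pre-test anxiety rises,
# mirroring the negative correlation reported in the study.
pre_anxiety = rng.normal(40, 8, 28)
change = -0.7 * (pre_anxiety - 40) + rng.normal(0, 4, 28)
r, p_r = stats.pearsonr(pre_anxiety, change)
print(f"r = {r:.2f}, P = {p_r:.3f}")
```

With real data, the ANCOVA described in the text (total score on site, with anxiety and confidence as covariates) would typically be fitted with a linear-model package rather than scipy.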

FIGURE 1 Post hoc power analysis (line plot of detectable effect size, 0.6 to 1.4, against power (1 - probability of a type 2 error), 0.2 to 0.9)
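The power figure reported for phase 1 can be checked with a short calculation from the noncentral t distribution. This is a sketch under stated assumptions: a two-sided independent-samples t-test at alpha = 0.05 with the phase 1 group sizes (18 and 10) and an effect size (Cohen's d) of 1.0, roughly the 10% difference discussed in the text; the tool the authors actually used is not named:

```python
# Post hoc power of a two-sample t-test, computed from the noncentral t
# distribution. Group sizes are the phase 1 samples (site A n=18, site B n=10).
from scipy import stats

def ttest_power(d, n1, n2, alpha=0.05):
    """Power of a two-sided, equal-variance two-sample t-test for effect size d."""
    df = n1 + n2 - 2
    nc = d * (n1 * n2 / (n1 + n2)) ** 0.5      # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)    # two-sided critical value
    # Probability of rejecting H0 when the true effect size is d
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

print(f"power = {ttest_power(1.0, 18, 10):.2f}")
```

For an effect size of one this gives a power of approximately 0.68-0.70, consistent with the 70% quoted in the Results; at d = 0 the function returns alpha, as it should.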

TABLE 2 Results of phase 2

Domain (maximum score)                               Site A (n=15)   Site B (n=9)
                                                     Mean (SD)       Mean (SD)
Total (100)                                          55.87 (8.38)    51.78 (8.86)
Cognitive: knowledge and understanding (25)          14.60 (2.69)    14.00 (2.83)
Cognitive: decision making and problem solving (25)  12.60 (1.81)    13.56 (2.35)
Total for cognitive domain (50)                      27.20 (3.80)    27.56 (5.10)
Motor (25)                                           13.20 (3.45)    11.78 (2.77)
Affective (25)                                       15.47 (3.23)    12.44 (2.56)

Phase 2 Table 2 shows the mean and standard deviation of scores at each site for the overall total mark and the three separate domains that make up the total. The mean overall total scores were higher at site A than at site B. The cognitive scores were similar. However, site A scored higher in both the motor and affective domains. The mean difference between sites for each score was formally tested using an independent sample t-test. Despite the large difference in mean overall scores between sites, that difference was not significant because of the large variability in overall scores.

However, there was a significant difference in the mean affective scores (t(22)=2.39, P<0.05). The mean affective score at site A was significantly higher than the mean score at site B. The 95% confidence interval for the difference in mean affective scores shows that the mean at site A is between 0.4 and 5.64 greater than the mean at site B. As the sample sizes were small, the difference was retested using a non-parametric Mann-Whitney test. The test confirmed that the difference between the sites was significant.

A similar post hoc power analysis was conducted as for phase 1 and gave similar results. In this case, the tests could detect a real difference of 10% with a power of 62%.
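The affective-domain comparison above can be reconstructed directly from the summary statistics in Table 2, since the t-statistic and confidence interval depend only on the group means, standard deviations and sizes. This is a check, not the authors' own computation, and the raw scores needed to reproduce the Mann-Whitney re-test are not published:

```python
# Recomputing the phase 2 affective-domain t-test and 95% CI from Table 2.
from scipy import stats

mean_a, sd_a, n_a = 15.47, 3.23, 15  # site A (role play), affective domain
mean_b, sd_b, n_b = 12.44, 2.56, 9   # site B (manikin), affective domain

df = n_a + n_b - 2
sp = (((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / df) ** 0.5  # pooled SD
se = sp * (1 / n_a + 1 / n_b) ** 0.5                            # SE of the difference
t = (mean_a - mean_b) / se
t_crit = stats.t.ppf(0.975, df)
ci = (mean_a - mean_b - t_crit * se, mean_a - mean_b + t_crit * se)

print(f"t({df}) = {t:.2f}")                                   # about 2.39, as reported
print(f"95% CI for the difference: {ci[0]:.2f} to {ci[1]:.2f}")  # close to 0.4 to 5.64
```

With the raw per-student scores, the non-parametric confirmation would be a call to scipy.stats.mannwhitneyu on the two groups of affective marks.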


Limitations
It is recognised that the sample size in this study was small. It would be useful to replicate this study with more participants; for example, all third-year adult branch students at the university could be included. Furthermore, it would be advantageous to carry out a comparative study at another school of nursing in a different higher education institution.

Discussion
Although the results for performance in the cognitive domain in phase 1 were not significantly different between sites, the scores achieved in this domain were relatively low (between 27.33 and 27.70 out of a possible score of 50). The lowest scores in phase 1 were in the motor domain (between 12.80 and 12.94 out of 25). This is of concern as these students had previously been exposed to two years of practice placements. Nonetheless, neither type of simulation was shown to be more effective than the other in phase 1.

Nehring et al (2002) recommended that students' comfort with simulation should be measured. In this study, students' anxiety and confidence scores did not appear to affect their performance. These results for anxiety concur with Spielberger's (1983) research regarding anxiety levels before sitting examinations. The rationale for asking participants to evaluate their confidence and anxiety pre- and post-simulation was to ensure that these factors did not unreasonably affect performance.

Affective scores in phase 2 were significantly different and at one site they dropped even further when compared with phase 1 scores. In this study, students using role play achieved greater scores in the affective domain in phase 2 (15.47 versus 12.44). This concurs with Ramsay et al's (2008) assertion that role play was useful when rehearsing communication. Education providers must therefore be mindful, when teaching communication and interpersonal skills, to select the most appropriate method of simulation.

In phase 2, the total cognitive scores fell even further at both sites (27.20 versus 27.56 out of a possible 50). This is a concern as it could be assumed that knowledge and understanding, and decision making and problem solving, would improve with further clinical experience. Although there was a slight improvement in psychomotor scores at one site, these results remained poor. Motor scores fell even further at the other site (12.80 to 11.78) and there was a lack of awareness regarding what equipment was required to assess patients. Furthermore, students demonstrated poor manual dexterity when using

Qualitative findings of phase 1 and phase 2


Cognition On questioning, all students recognised that the changes in the patient's vital signs were significant and that they demonstrated deterioration in the patient's condition. Furthermore, all students suggested appropriate interventions that should be taken. At site B, although the monitor was switched off, some students continued to pay a great deal of attention to it.

Cognition and motor The majority of students demonstrated indecision when choosing the equipment required to measure vital signs, with continual back and forth movement observed. Also, manual dexterity differed between sites. Site B students did not demonstrate either competence or confidence in these domains.

Affective Communication varied between students. Some participants were able to initiate reasonable conversation, while others focused on procedural conversation, for example: 'I am going to take your blood pressure.' In general, students did not communicate well with the manikin. In addition, several students exposed the manikin's chest to obtain a respiratory rate.

this equipment, with some unable to apply the blood pressure cuff correctly or demonstrating incorrect use of the stethoscope. A lack of confidence was also demonstrated regarding the physiological measurements obtained. Of great concern was the inability of some students to locate radial and brachial pulses. This may be because students routinely use automated equipment to measure pulse rate, but omit other essential pulse characteristics, such as volume, strength and rhythm. This is a cause for concern considering that measuring vital signs is a crucial component of monitoring individuals' physiological status, as vital signs enable prompt detection of deterioration in a patient's condition. Holcomb et al (2005) identified the importance of manually establishing pulse characteristics, not just pulse rate, to initiate life-saving interventions.

Early warning systems are increasingly being used to identify at-risk patients, but depend on accurate measurements of vital signs. If, however, these measurements are incomplete, as would be the situation if only the pulse rate is recorded when using automated equipment, then patients' deterioration may not be noted early enough to initiate treatment. Parish (2008) highlighted the changes made at a hospital trust where all vital signs measurements were assessed manually, including pulse characteristics. This resulted in a marked decrease in patients experiencing cardiac arrest: the number of arrests dropped by two thirds. In addition, it has been suggested by Evans et al (2001) that some automated equipment is used in clinical areas because it is available and not because robust evidence shows that it improves care delivery, outcomes or cost. In this study, students still looked at monitors that were switched off, as if for affirmation of vital signs gathered. Some participants in this study measured respiratory rates by placing their hand on the manikin's chest. This contradicts good practice, which suggests that patients should be unaware that their breathing pattern is being assessed (Baillie 2005, Nicol et al 2008).

Ensuring patient safety is an important component of the Scottish Government's (2009) draft quality strategy document. A hands-on approach to patient

References
Alinier G (2003) Nursing students' and lecturers' perspectives of objective structured clinical examination incorporating simulation. Nurse Education Today. 23, 6, 419-426.
Alinier G, Hunt B, Gordon R, Harwood C (2006) Effectiveness of intermediate-fidelity simulation training technology in undergraduate nursing education. Journal of Advanced Nursing. 54, 3, 359-369.
Aronson BS, Rosa JM, Anfinson J, Light N (1997) Teaching tools. A simulated clinical problem-solving experience. Nurse Educator. 22, 6, 17-19.
Baillie L (2005) Developing Practical Nursing Skills. Second edition. Hodder Arnold, London.
Bakalis NA, Watson R (2005) Nurses' decision-making in clinical practice. Nursing Standard. 19, 23, 33-39.
Billings DM, Halstead JA (2005) Teaching in Nursing: A Guide for Faculty. Second edition. Elsevier, St Louis MO.
Cioffi J (1998) Education for clinical decision making in midwifery practice. Midwifery. 14, 1, 18-22.
Comer SK (2005) Patient care simulations: role playing to enhance clinical understanding. Nursing Education Perspectives. 26, 6, 357-361.
Dickerson PS (2005) Nurturing critical thinkers. Journal of Continuing Education in Nursing. 36, 2, 68-72.
Eraut M (1998) Concepts of competence. Journal of Interprofessional Care. 12, 2, 127-139.
Evans D, Hodgkinson B, Berry J (2001) Vital signs in hospital patients: a systematic review. International Journal of Nursing Studies. 38, 6, 643-650.
Fletcher M (2005) Unique lab broadens education options. Canadian Nurse. 101, 2, 10.
Glen S (1995) Developing critical thinking in higher education. Nurse Education Today. 15, 3, 170-176.
Harlow KC, Sportsman S (2007) An economic analysis of patient simulators for clinical training in nursing education. Nursing Economics. 25, 1, 24-29.
Haskvitz LM, Koop EC (2004) Students struggling in clinical? A new role for the patient simulator. Journal of Nursing Education. 43, 4, 181-184.
Henrichs B, Rule A, Grady M, Ellis W (2002) Nurse anesthesia students' perceptions of the anesthesia patient simulator: a qualitative study. American Association of Nurse Anesthetists Journal. 70, 3, 219-225.
Holcomb JB, Salinas J, McManus JM, Miller CC, Cooke WH, Convertino VA (2005) Manual vital signs reliably predict need for life-saving interventions in trauma patients. The Journal of Trauma. 59, 4, 821-829.
Jeffries PR (2005) A framework for designing, implementing, and evaluating simulations used as teaching strategies in nursing. Nursing Education Perspectives. 26, 2, 96-103.
Jeffries PR (2006) Designing simulations for nursing education. Annual Review of Nursing Education. 4, 161-177.
Jeffries PR, Woolf S, Linde B (2003) Technology-based vs. traditional instruction. A comparison of two methods for teaching the skill of performing a 12-lead ECG. Nursing Education Perspectives. 24, 2, 70-74.
Larew C, Lessans S, Spunt D, Foster D, Covington BG (2006) Innovations in clinical simulation: application of Benner's theory in an interactive patient care simulation. Nursing Education Perspectives. 27, 1, 16-21.
Mayne W, Jootun D, Young B, Marland G, Harris M, Lyttle CP (2004) Enabling students to develop confidence in basic clinical skills. Nursing Times. 100, 24, 36-39.





assessment uses manual equipment and a full minute of pulse recording to include pulse characteristics and not just rate. It must also be recognised that automated equipment is for single-patient use and should be recalibrated if used for multiple patients. These procedures can only assist person-centred, safe and clinically effective care for 'everybody, every time, now and for the future' (Scottish Government 2009).

&

IMPLICATIONS FOR PRACTICE


- It should be the concern of all educators and healthcare providers that nursing students and qualified nurses are potentially becoming deskilled, and patient safety is being compromised, because there is an over-reliance on automated equipment in clinical areas.
- A hands-on assessment of patients' status, including pulse characteristics, is essential to enable early recognition of patient deterioration.
- All nursing students should be fully versed in manual observation and measurement skills as well as being sufficiently conversant with the use of automated equipment.
- Educators should be aware that learning within different domains requires different forms of simulated teaching. Educators therefore should select the appropriate type of simulation depending on the learning outcome they are striving to achieve.
- Those in control of the budget in higher education institutions should consider carefully what type of simulation resources to purchase in view of how students learn. Hand-in-hand with selective purchasing is the often neglected, but vital, ongoing need for educators to be trained in the purpose and use of simulation resources.

Conclusion
This study demonstrated that both forms of simulation (role play and the use of manikins) realised similar outcomes in terms of students' scores, except in the affective domain, where role play achieved significantly better results. Participants' cognitive scores were relatively poor in relation to knowledge and understanding, and decision making and problem solving: vital components of a registered nurse's practice. However, an unexpected and concerning finding was the inability of many senior nursing students to assess patients' vital signs manually. Furthermore, following six months of clinical practice, participants' performance in this study did not improve NS

Acknowledgement The authors would like to thank Rosemary Mullen and Marie Robertson for their assistance.

McMullan M, Endacott R, Gray MA et al (2003) Portfolios and assessment of competence: a review of literature. Journal of Advanced Nursing. 41, 3, 283-294.
Medley CF, Horne C (2005) Using simulation technology for undergraduate nursing education. The Journal of Nursing Education. 44, 1, 31-34.
Morgan R (2006) Using clinical skills laboratories to promote theory-practice integration during first practice placement: an Irish perspective. Journal of Clinical Nursing. 15, 2, 155-161.
Nehring WM, Lashley FR, Ellis WE (2002) Critical incident nursing management using human patient simulators. Nursing Education Perspectives. 23, 3, 128-132.
NHS Education for Scotland (2005) Clinical Skills Units: Improving Patient Safety through Simulated Practice. NHS Education for Scotland, Edinburgh.

Nicol M, Bavin C, Cronin P, Rawlings-Anderson K (2008) Essential Nursing Skills. Third edition. Elsevier, Edinburgh.
Nursing and Midwifery Council (2004) Standards of Proficiency for Pre-Registration Nursing Education. NMC, London.
Nursing and Midwifery Council (2005) Consultation on Proposals Arising from a Review of Fitness for Practice at the Point of Registration. NMC, London.
Nursing and Midwifery Council (2007) Supporting Direct Care through Simulated Practice Learning in the Pre-Registration Nursing Programme. NMC, London.
Parish C (2008) Hands-on pulse monitoring sees cardiac arrests cut by two thirds. Nursing Standard. 22, 37, 7.
Paul RW (1993) Critical Thinking: What Every Person Needs to Survive in a Rapidly Changing World. Third edition. Foundation for Critical Thinking, Santa Rosa CA.
Prescott S, Garside J (2009) An evaluation of simulated practice for adult branch students. Nursing Standard. 23, 22, 35-40.

Ramsay J, Keith G, Ker JS (2008) Use of simulated patients for a communication skills exercise. Nursing Standard. 22, 19, 39-44.
Schwarzer R (Ed) (1992) Self-Efficacy: Thought Control of Action. Hemisphere, Washington DC.
Scottish Government (2009) The Healthcare Quality Strategy for Scotland: Draft Strategy Document. Scottish Government, Edinburgh.
Spielberger CD (1983) State-Trait Anxiety Inventory for Adults. Mind Garden, Redwood City CA.
Tschikota S (1993) The clinical decision-making processes of student nurses. The Journal of Nursing Education. 32, 9, 389-398.
University of the West of Scotland (2008) Expected Behaviours and Assessment Criteria (Level 9). University of the West of Scotland, Paisley.
Wilford A, Doyle TJ (2006) Integrating simulation training into the nursing curriculum. British Journal of Nursing. 15, 17, 926-930.

48 may 5 :: vol 24 no 35 :: 2010

