ABSTRACT:
Over the past 40 years many businesses have developed comprehensive business continuity plans. These plans were developed in a post-war environment in which enterprises were willing to share information and experiences within a framework of cooperation. During this period of cooperation, the establishment of business continuity best practice has focused on the low-hanging fruit: the 80% of probable and foreseeable business continuity threats. As we enter a new era of dependence upon infrastructure and technology, enterprises face a new societal expectation of business resilience while also confronting the remaining 20% of continuity challenges. According to the Pareto principle (the 80-20 rule), this 20% of challenges represents 80% of the total risk.
Introduction
At the conclusion of World War II, the focus of the business world shifted from a wartime footing to a period of rapid growth, primarily motivated by gaining market share of the post-war economy. A quarter of a century later, those enterprises which had prevailed in the boom era were faced with the challenge of protecting the gains they had made as they confronted the political upheavals of the Cold War, the energy crisis and dramatic social change. As a result of these challenges, and a desire to retain market dominance, many enterprises invested heavily in measures that would make their operations resilient to social, economic and environmental changes and anomalies. Through the 1970s and into the 1980s, enterprises began developing backup strategies which would allow them to recover their operations in the event of a catastrophe. Through the 1990s and into the new millennium, these backup strategies were augmented by high-availability strategies which provided system redundancy that significantly reduced the chances of a catastrophic failure. Although the business continuity achievements of the past 40 years have been significant, it remains clear that, when put to the test, most continuity plans fail in the face of a real catastrophe. They fail not because they neglect the 80% of predictable challenges, but because they do not address the remaining 20% of improbable and unforeseeable scenarios. As the events of 9/11 and the Global Financial Crisis (GFC) have unfolded, it has become clearer that post-war thinking and the paradigm of "too big to fail" may not be in the best interests of individual enterprises, nor, within a context of market competition, in the interests of economies as a whole.
While a spirit of cooperation has, to date, resulted in the creation of business continuity best practice, as enterprises come to understand that business continuity provides competitive advantage, the finer details of business continuity will increasingly be developed within completely proprietary frameworks. This white paper discusses the five remaining challenges of business continuity. These challenges have been widely overlooked, yet they have been the main contributors to past business continuity failures and will, until they are addressed, continue to put enterprises at significant risk. These challenges are:
1. Moral Hazard.
2. The widening gap between base load and tertiary technology.
3. The generational avalanche.
4. Risk Compensation.
5. An overreliance on procedure.
The widening gap between base load and tertiary technology
The reality is that most of the infrastructure on which business continuity relies was developed and implemented, on average, three decades ago.
While this infrastructure has served us well, the sheer market dominance of existing vendors has resulted in a lack of innovation and competition. This lack of innovation has also resulted in a failure to provision for the future, as enterprises have delayed investment while waiting for replacement technologies which simply didn't materialize. As a result, enterprises have been left with legacy technology providing base load services, along with a diverse collection of tertiary technology implemented as stop-gap measures while they waited for replacement base load technology. This tertiary technology, along with the middleware it relies upon, is often overlooked when it comes to business resilience, is often poorly supported, and is rarely tested for recovery.
Risk Compensation
Until recently, conventional thinking held that employees were responsible for their actions, and that the fear of being fired, combined with the resulting stigma, was a major motivating factor in workplace quality control.
Within the past decade, our understanding of decision processes and motivations has revealed that people are not capable of entirely free thought; they are constrained by their natural abilities and personal experience. It is also understood that as technological advances reduce levels of risk, people change their behavior to compensate for the reduction. This behavioral change is known as the Peltzman effect, or Risk Compensation. As we embrace this new way of understanding employee behavior, technology is rapidly evolving to meet this demand. As we become reliant on technology to help us avoid simple mistakes, much of the burden shifts off individuals to check their work and onto enterprises to provide technology and tools that simply can't fail. This shift in expectation and burden presents a significant challenge to future business continuity considerations. While it brings the benefits of much greater workplace efficiency and the potential for error reduction, it also significantly complicates the recovery process and broadens the potential impact when things go wrong.
An overreliance on procedure
Having a set procedure for business continuity is very important, but in most cases a procedure alone is simply not enough. Today's businesses use incredibly complex systems, and in many cases it is simply not feasible to maintain a documented recovery procedure. The inability to properly provide redundancy, or to properly document a recovery procedure, is not in and of itself the biggest risk to business. A much greater risk is the temptation to satisfy the expectation that a documented recovery procedure exists by creating a set of procedures which cannot be followed and are not tested, or, worse still, are tested and falsely reported as successful. Where it is feasible for a recovery procedure to be written, it is critical that those involved in recovery planning understand that these procedures are no more than a set of steps that were designed to work and have worked during testing. Procedures and test results must always be viewed with suspicion, and the burden of proof must be on those who write and test the procedures to demonstrate that they have been successful. Where possible, quality control and benchmarking mechanisms should also be implemented to underpin the quality of any procedure.
About GazillaByte
GazillaByte LLC is based in Colorado, USA, where it develops and supports its flagship TapeTrack tape management software. Today TapeTrack is used by over 4000 enterprises around the world. These companies range from the top of the Fortune 500 through to newly created technology companies that you have yet to hear of. To learn more about TapeTrack, visit the product website at www.tapetrack.com, or call GazillaByte LLC at +1-720-583-8880 to arrange a free 90-day, no-obligation trial of our unique technology.