
Business Continuity and the Pareto Principle
GazillaByte LLC, November 2012

ABSTRACT:
Over the past 40 years many businesses have developed comprehensive business continuity plans. These plans were developed in a post-war environment in which enterprises were willing to share information and experience within a framework of cooperation. During this period of cooperation, the establishment of business continuity best practice focused on the low-hanging fruit: the 80% of probable and foreseeable business continuity threats. As we enter a new era of dependence upon infrastructure and technology, enterprises face a new societal expectation of business resilience while, at the same time, dealing with the remaining 20% of continuity challenges. According to the Pareto principle (or the 80-20 rule), this 20% of challenges represents 80% of the total risk.

Introduction
At the conclusion of World War II, the focus of the business world shifted from a wartime footing to a period of rapid growth, motivated primarily by gaining market share of the post-war economy. A quarter of a century later, those enterprises which had prevailed in the boom era faced the challenge of protecting the gains they had made as they confronted the political upheavals of the Cold War, the energy crisis and dramatic social change. As a result of these challenges, and out of a desire to retain market dominance, many enterprises invested heavily in measures that would make their operations resilient to social, economic and environmental changes and anomalies. Through the 1970s and into the 1980s, enterprises began developing backup strategies which would allow them to recover their operations in the event of a catastrophic event. Through the 1990s and into the new millennium, these backup strategies were augmented by high availability strategies which provided system redundancy that significantly reduced the chances of a catastrophe.

Although the business continuity achievements of the past 40 years have been significant, it remains clear that, when put to the test, most continuity plans fail to overcome a real catastrophe. They fail not because of a failure to address the 80% of predictable challenges, but because they do not address the remaining 20% of less predictable, and often unforeseeable, scenarios. As the events of 9/11 and the Global Financial Crisis (GFC) have unfolded, it has become clearer that post-war thinking and the paradigm of "too big to fail" may not be in the best interests of individual enterprises nor, within a context of market competition, of economies as a whole. While a spirit of cooperation has, to date, resulted in the creation of business continuity best practice, as enterprises come to understand that business continuity provides competitive advantage, the finer details of business continuity will increasingly be developed within completely proprietary frameworks.

This white paper discusses the five remaining challenges of business continuity. These challenges have been widely overlooked, yet have been the main contributors to past business continuity failures and will, until they are addressed, continue to put enterprises at significant risk. These challenges are:
1. Moral Hazard.
2. The widening gap between base load and tertiary technology.
3. The Generational Avalanche.
4. Risk Compensation.
5. An overreliance on procedure.

The remaining challenges


Moral Hazard
When facing a catastrophic event, many enterprises are horrified to learn that they have overestimated their preparedness and their actual ability to continue business. The major cause of this overestimation is belief in a regime of testing which, up until the real event, has always succeeded, or has always been reported as successful. In most cases the belief that prior tests have succeeded is based upon:
1. A failure to define the objective of each subsequent test, for fear of it failing (see the sketch after this list).
2. A reluctance to report on failures.
3. Exaggeration of the positive outcomes of the test and underreporting of the negatives.
4. Lower management wanting to report success to upper management.
5. A lack of technical expertise by those performing the test, and a lack of comprehension by those who witness it.
6. A desire to justify the purchasing decisions of recovery technologies.
7. Under-provisioning for a complete disaster.
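One way to remove the ambiguity behind the first three causes is to state the pass criteria before the test is run and then measure against them afterwards. The following is a minimal sketch of that idea; the RTO/RPO targets, the "orders_db" system name and the measured figures are illustrative assumptions, not GazillaByte recommendations.

```python
# Minimal sketch: declare recovery objectives (RTO/RPO) up front, then judge the
# test only against those declared targets. All names and figures are illustrative.
from dataclasses import dataclass
from datetime import timedelta


@dataclass
class RecoveryTestObjective:
    system: str
    max_recovery_time: timedelta   # target RTO, agreed before the test
    max_data_loss: timedelta       # target RPO, agreed before the test


@dataclass
class RecoveryTestResult:
    actual_recovery_time: timedelta
    actual_data_loss: timedelta

    def passed(self, objective: RecoveryTestObjective) -> bool:
        # The test only passes if both measured values meet the stated targets;
        # there is no room for after-the-fact reinterpretation.
        return (self.actual_recovery_time <= objective.max_recovery_time
                and self.actual_data_loss <= objective.max_data_loss)


objective = RecoveryTestObjective("orders_db",
                                  max_recovery_time=timedelta(hours=4),
                                  max_data_loss=timedelta(minutes=15))
result = RecoveryTestResult(actual_recovery_time=timedelta(hours=6),
                            actual_data_loss=timedelta(minutes=5))

print(f"{objective.system}: {'PASS' if result.passed(objective) else 'FAIL'}")
```

Because the targets are committed to before the test, a result like the one above can only be reported as a failure, however positive the narrative around it.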

The widening gap between base load and tertiary technology


The base load of government and corporate infrastructure is carried by post-war era systems, ranging from power generation plants and telecommunications networks through to the mainframe and UNIX computer systems which run enterprise information systems. Although there is a widely held view that the technology we now rely on was invented post-Facebook, the reality is that most of it was designed before Mark Zuckerberg was born. For instance:
- The average age of a US coal-fired power plant is 44 years.
- The average age of a US nuclear power plant is 34 years.
- The Internet is 44 years old, and many of the technologies on which it relies are 30 years old.
- The IBM mainframe is 60 years old.
- UNIX is 42 years old.
- Microsoft Windows is 29 years old.
- Symantec NetBackup is 25 years old.

The reality is that most of the infrastructure on which business continuity relies was developed and implemented, on average, three decades ago.

While this infrastructure has served us well, the sheer market dominance of existing vendors has resulted in a lack of innovation and competition. This lack of innovation has also resulted in a failure to provision for the future, as enterprises have waited to invest in replacement technologies which simply didn't materialize. As a result, enterprises have been left with legacy technology providing base load services, alongside a diverse collection of tertiary technology implemented as stop-gap measures while they waited for replacement base load technology. This tertiary technology, along with the middleware on which it relies, is often overlooked when it comes to business resilience, is poorly supported and is rarely tested for recovery.
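One hedged way to make this blind spot visible is a recovery-coverage audit that flags any system, tertiary or otherwise, without a recent, documented recovery test. The sketch below is illustrative only: the system names, dates and one-year threshold are assumptions, not findings about any particular enterprise.

```python
# Sketch of a recovery-coverage audit: anything without a recent, documented
# recovery test is flagged rather than assumed to be recoverable.
from datetime import date

# Hypothetical inventory: system -> date of last successful recovery test (None = never tested).
last_recovery_test = {
    "mainframe_batch": date(2012, 6, 1),
    "unix_erp": date(2011, 5, 20),
    "departmental_wiki": None,          # tertiary system, never tested
    "message_queue_middleware": None,   # middleware, never tested
}

MAX_TEST_AGE_DAYS = 365
today = date(2012, 11, 1)

for system, tested in sorted(last_recovery_test.items()):
    if tested is None:
        print(f"{system}: NEVER TESTED")
    elif (today - tested).days > MAX_TEST_AGE_DAYS:
        print(f"{system}: stale test (last tested {tested.isoformat()})")
    else:
        print(f"{system}: ok (last tested {tested.isoformat()})")
```

Even a list this simple forces the conversation onto the middleware and stop-gap systems that would otherwise never appear in a continuity review.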

The Generational Avalanche


As the technical knowledge involved in the design, provision, implementation and day-to-day operation of backup and high availability solutions is highly demanding, those employed to support these systems require at least 20-30 years of experience to qualify for any newly advertised position. This is reflected in the fact that the average age of those working on the IBM mainframes which carry the base load of government and corporate data processing is 53. The current generation of baby boomers who work in these positions have little to no exit strategy and, because the current economic circumstances have eroded their net wealth, tend to be overprotective of their knowledge and position, which makes it near impossible for younger employees to cross-train. It is inevitable that the baby boomer generation will be replaced by a subsequent generation; it is also quite probable that Generation X (the subsequent generation) will not be the one to fill these roles. If this hand-over of responsibility is not generationally contiguous, there will be a significant experience differential between those who support critical infrastructure today and those who will support it in the near future. This rapid change in experience will also be combined with a significant change in personal expectation and perspective.

Risk Compensation
Until recently, the prevailing view was that employees were responsible for their actions, and that the fear of being fired, combined with the resulting stigma, was a major motivating factor in workplace quality control.

Within the past decade, our understanding of decision processes and motivations has revealed that people are not capable of entirely free decision-making; they are in fact constrained by their natural abilities and personal experience. What is also understood is that as technological advances reduce levels of risk, people change their behavior to compensate for the reduction. This behavioral change is known as the Peltzman effect, or risk compensation. As we embrace this new way of understanding employee behavior, technology is rapidly evolving to meet the demand. As we become reliant on technology to help us avoid simple mistakes, much of the burden shifts off individuals to check their own work, and onto enterprises to provide technology and tools that simply can't fail. This shift in expectation and burden presents a significant challenge to future business continuity planning. While it brings the benefits of much greater workplace efficiency and the potential for error reduction, it also significantly complicates the recovery process and broadens the potential impact when things go wrong.

An overreliance on procedure
Having a set procedure for business continuity is very important, but in most cases a procedure alone is simply not enough. Today's businesses use incredibly complex systems, and in many cases it is simply not feasible to maintain a documented recovery procedure. The inability to provide proper redundancy, or to properly document a recovery procedure, is not in itself the biggest risk to a business. A much greater risk is the temptation to satisfy the expectation that a documented recovery procedure exists by creating a set of procedures which cannot be followed and are not tested, or worse still are tested and falsely reported as successful. Where it is feasible for a recovery procedure to be written, it is critical that those involved in recovery planning understand that these procedures are no more than a set of steps that were designed to work and have worked during testing. Procedures and test results must always be viewed with suspicion, and the burden of proof must be on those who write the procedures and those who test them to demonstrate that they have been successful. Where possible, quality control and benchmarking mechanisms should also be implemented to underpin the quality of any procedure.
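Where recovery procedures do exist, part of that burden of proof can be automated. The following is a minimal sketch, and not a TapeTrack feature, of backing a "recovery succeeded" claim with evidence: checksums recorded at backup time are compared against the restored files. The manifest format and paths are assumptions made for illustration.

```python
# Sketch: verify a restore by comparing restored files against checksums recorded
# at backup time, so a "success" report is backed by evidence rather than opinion.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_restore(manifest_file: Path, restore_root: Path) -> bool:
    # manifest.json (assumed format) maps relative file paths to checksums taken at backup time.
    manifest = json.loads(manifest_file.read_text())
    ok = True
    for rel_path, expected in manifest.items():
        restored = restore_root / rel_path
        if not restored.exists():
            print(f"MISSING  {rel_path}")
            ok = False
        elif sha256_of(restored) != expected:
            print(f"MISMATCH {rel_path}")
            ok = False
    return ok


if __name__ == "__main__":
    success = verify_restore(Path("manifest.json"), Path("/restore/target"))
    print("RECOVERY VERIFIED" if success else "RECOVERY FAILED VERIFICATION")
```

A report produced this way is harder to exaggerate than a verbal assurance, because every missing or mismatched file is listed explicitly.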

About GazillaByte
GazillaByte LLC is based in Colorado, USA, where it develops and supports its flagship TapeTrack tape management software. Today TapeTrack is used by over 4,000 enterprises around the world, ranging from the top of the Fortune 500 through to newly created technology companies that you have yet to hear of. To learn more about TapeTrack, visit the product website at www.tapetrack.com, or call GazillaByte LLC at +1-720-583-8880 to organize a free 90-day, no-obligation trial of our unique technology.
