Kybernetes, Vol. 43 No. 2, 2014, pp. 156-177
© Emerald Group Publishing Limited, ISSN 0368-492X
DOI 10.1108/K-11-2013-0252
www.emeraldinsight.com/0368-492X.htm

Business continuity management: a systemic framework for implementation

Nijaz Bajgoric
School of Economics and Business, University of Sarajevo, Sarajevo, Bosnia and Herzegovina

Received 19 November 2013
Revised 14 January 2014
Accepted 17 January 2014
Abstract
Purpose – The paper aims to define a systemic framework for the implementation of business continuity management (BCM). The framework is based on the assertion that BCM should be implemented through the systemic implementation of an “always-on” enterprise information system.
Design/methodology/approach – A systems approach is used to design a systemic framework for the implementation of continuous computing technologies within the concept of an always-on enterprise information system.
Findings – A conceptual framework is proposed for the systemic implementation of several continuous computing technologies that enhance business continuity (BC) in the form of an “always-on” enterprise information system.
Originality/value – The paper identifies BC as a business pressure in the internet era and suggests a systemic framework for implementation.
Keywords Information technology, Information systems, Systemic approach, Business continuity
Paper type Conceptual paper
1. Introduction
The concepts of business continuity (BC) and business continuity management (BCM)
were introduced more than 20 years ago in response to several kinds of business
interruptions resulting from operational, organizational and environmental factors.
In the modern e-business era, an information technology (IT) dimension of business (dis)continuity risk has been added as well. The IT dimension relates to the high dependency of modern business on its information system infrastructure.
If business-critical applications are up, running and providing services to end-users (e.g. employees, customers, prospects, suppliers), the business is said to be “up” or “in business”. However, if an application server running a business-critical application is down for any reason, the business may simply “go down” and become “out of business”. Therefore, the process of addressing, mitigating and managing these kinds of IT-business risks has become one of the major issues in IT-management and in organizational management as a whole.
Aberdeen Group (Csaplar, 2012) found that between June 2010 and February 2012, the cost per hour of downtime increased on average by 65 percent. In February 2010, the average cost per hour of downtime was reported to be about $100,000. Martin (2011) cited the results of a study by Emerson Network Power and the Ponemon Institute which revealed that the average data centre downtime event costs $505,500, with the average incident lasting 90 minutes: for every minute the data centre remains down, a company effectively loses $5,600. The study took statistics from 41 US data centres in a range of industries, including financial institutions, healthcare companies and colocation providers. Some examples of cloud outages, such as those of Amazon and Google from April 2011 (Maitland, 2011), showed that even within “by-default” highly available infrastructures such as cloud providers’ data centres, system downtime is still possible and requires attention. Butler (2013) reported that a 49-minute failure of Amazon’s services on January 31, 2013 resulted in close to $5 million in missed revenue. Similar failures happened in January, February and March 2013 to Dropbox, Facebook, Microsoft, Google Drive and Twitter.
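The per-minute figures above follow directly from the reported incident costs. The short Python sketch below is illustrative only: its inputs are the numbers reported by the cited studies, and real downtime costs vary widely by business and incident.

```python
# Back-of-the-envelope check of the downtime cost figures cited above.

def cost_per_minute(incident_cost: float, duration_minutes: float) -> float:
    """Average cost of one minute of downtime for a given incident."""
    return incident_cost / duration_minutes

# Emerson Network Power/Ponemon study: $505,500 per incident, 90 minutes.
print(round(cost_per_minute(505_500, 90)))      # close to the quoted $5,600/minute

# Amazon outage (Butler, 2013): roughly $5M missed revenue over 49 minutes.
print(round(cost_per_minute(5_000_000, 49)))
```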
The paper aims to define a systemic framework for the implementation of BCM based on Churchman’s (1968) systems approach. In addition, the framework is based on the assertion that:
- the information system has a business-critical role in modern BC; and
- the organizational implementation of BCM should be done through the systemic implementation of an “always-on” enterprise information system.
While modern BC depends on several non-ICT factors, the paper primarily focuses on the ICT dimension of BC, that related to the availability of enterprise servers and the mission-critical applications running on them. A Forrester Report (2013) recently noted that:
[. . .] across all industries, there is less and less tolerance for any kind of downtime. And in a
hyper-connected world, news of downtime spreads rapidly, making it ever more difficult to
repair damaged reputations. As a result, key stakeholders in the organization are demanding
much higher levels of IT service availability.
In other words, we consider the role of enterprise information system in achieving
higher levels of availability of business-critical applications and hence enhancing BC.
3. BC (re)defined
“Business continuity” or “business continuance” in the e-business era is a term that emphasizes the ability of a business to continue its operations and services if some sort of failure or disaster occurs on its computing platform. Related terms such as “business resilience” and “always-on business” are used in today’s e-business environment as well.
The ICT dimension is not the only factor in BC; several non-ICT factors may be crucial as well (staff availability, physical premises, supply chains, etc.). Lavastre et al. (2012) introduced the term “supply chain continuity planning framework” within the concept of supply chain risk management (SCRM). Craighead et al. (2007) considered BC issues in terms of supply chain mitigation capabilities and supply chain disruption severity. Wolf (2011) presented a qualitative analysis of the German manufacturing industry in the context of sustainable supply chain management integration and its relation to long-term BC. Rebman et al. (2013) assessed BC related to biologic events, the incentives businesses provide to maximize worker surge capacity, and seasonal influenza vaccination policy. However, BC in the e-business era relies significantly on “continuous computing” – an information system infrastructure which is theoretically envisaged to be “always-on” with “zero downtime” or, in terms of more realistic expectations, “highly available” or having “near-zero downtime”. Avery Gomez (2011) underscored the fact that
resilience and the continuity of business are factors that rely on sustainable ICT
infrastructures. Lewis and Pickren (2003) pointed out that disaster recovery is
receiving increasing attention as firms grow more dependent on uninterrupted
information system functioning. As stated by Forrester Report (2013) “[. . .] key
stakeholders in the organization are demanding much higher levels of IT service
availability.”
Several standards have been developed over the last two decades, such as ISO 22301, BS 25999 and BS 25777, in order to set up frameworks, methodologies, methods, techniques and implementation procedures related to BC. The ISO 22301 standard, published in May 2012, supersedes BS 25999 and is considered the new international standard for BCM. Organizations certified to BS 25999 are in the process of transitioning to certification under ISO 22301. BS 25777, a British Standards Institution standard released in 2008, focuses on ICT continuity management within the framework of BS 25999. According to the ISO 22301 standard (ISO 22301:2012), BC is defined as “[. . .] the capability of the organization to continue delivery of products or services at acceptable predefined levels following a disruptive incident”, while BCM is defined as a:
[. . .] holistic management process that identifies potential threats to an organization and the
impacts to business operations those threats, if realized, might cause, and which provides a
framework for building organizational resilience with the capability of an effective response that
safeguards the interests of its key stakeholders, reputation, brand and value-creating activities.
In addition to the BCM standards presented above, several authors have suggested frameworks for BCM implementation. Lindstrom et al. (2010) noted that organizations rely too much on the checklists provided in existing BC standards. They proposed a multi-usable BC planning methodology based on a staircase or capability maturity model.
Today’s business seeks an information infrastructure supported by an “always-on” information system (Bajgoric, 2006). Therefore, there is a need to identify an additional requirement (pressure) with regard to the Turban et al. (2005) and Turban and Volonino (2010) model. This new “pressure” is depicted in Figure 1 and relates to BC.
Today’s information systems are mainly based on the three major types of
information architecture:
(1) legacy-mainframe and its variant – web-enabled legacy-mainframe system;
(2) client-server (c/s) architecture; and
(3) cloud computing-based platforms.
Several modifications of the standard c/s architecture are in use today as well, in the form of newly created computing paradigms or models such as: web-enabled legacy systems, utility/on-demand computing, software-as-a-service (SaaS), web-based software agents and services, subscription computing, grid computing, clustering, ubiquitous or pervasive computing, and cloud computing. They are implemented primarily in order to reduce costs and enhance uptime. Recently, Gartner introduced the cloud/client architecture (Coony, 2013) in which “the client is a rich application running
on an Internet-connected device, and a server is a set of application services hosted in an increasingly elastically scalable cloud computing platform”.
In this paper, BC will be considered as:
- an additional business pressure in the e-business era; and
- an ultimate objective of modern business with regard to the capability of its information system to provide continuous computing.
However, high availability systems, those characterized by “five-nines”, “six-nines” or even “seven-nines” availability, are expensive, complex to administer, and require a powerful IT-infrastructure and skilled IT-professionals. For instance, the “five-nines” scenario, with 99.999 percent uptime and about five minutes of annual downtime, is difficult to achieve for many businesses and sometimes may not be worth the cost. Some businesses would benefit from these expenditures, but some would not. The importance of a high availability ratio varies by business and business application, with some applications, called “mission-critical applications”, having higher levels of “data criticality”. These businesses/applications are therefore characterized by a recovery time objective (RTO) expressed in minutes or even seconds, while other businesses/applications may tolerate hours-long downtime and have an RTO expressed in days. There may be businesses that need BC only on an “8 hours × 5 days” basis. When assessing solutions for achieving high availability ratios, a comprehensive analysis beyond system uptime is needed, including business impact analysis (BIA), total cost of ownership (TCO) and return on investment (ROI).

Figure 1. BC: an internet era’s requirement and business pressure
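The relationship between an availability ratio and the downtime it permits can be made concrete with a short calculation. The following Python sketch (an illustration, not part of the original paper) reproduces the “five-nines, about five minutes of annual downtime” figure, assuming a non-leap year:

```python
# Annual downtime permitted by a given availability ratio ("nines").

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def annual_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year allowed at the given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for label, avail in [("three nines (99.9%)",    0.999),
                     ("four nines (99.99%)",    0.9999),
                     ("five nines (99.999%)",   0.99999),
                     ("six nines (99.9999%)",   0.999999)]:
    print(f"{label}: {annual_downtime_minutes(avail):.2f} minutes/year")
# five nines -> about 5.26 minutes of downtime per year
```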
A number of IT-related problems that cause downtime may occur within all types of information architectures, such as hardware glitches on server components, operating system crashes, network component malfunctions, WAN/internet disconnections, application bugs, system or network administrator errors, and natural disasters that hit the main computer centre or a cloud provider’s site (Figures 2 and 3).
These downtime points are considered critical in implementing continuous computing solutions for enhancing BC. In both client-server and cloud-based architectures, a number of downtime-related critical points can be identified, such as:
(1) client (end-user) computing devices, client operating system, client applications;
(2) network infrastructure disconnections (LAN/WAN/internet);
(3) server operating platform crashes on server hardware and server operating system; and
(4) storage media and devices: hard disk crash, corrupted files, broken tapes.

Figure 2. Downtime points in a client-server architecture
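Continuous monitoring of these critical points is a natural companion to continuous computing solutions. The sketch below is a hypothetical Python illustration (the hostnames, ports and layer names are placeholders, not from the paper): each layer gets a simple TCP reachability probe, and the business is considered “up” only if every probe succeeds.

```python
# Hypothetical probes for the downtime-critical points listed above.
# Hostnames/ports are placeholders; a real deployment would probe its
# own network links, application servers and storage back-ends.

import socket
from typing import Callable, Dict

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_checks(checks: Dict[str, Callable[[], bool]]) -> Dict[str, bool]:
    """Run every probe and report per-layer status."""
    return {name: probe() for name, probe in checks.items()}

checks = {
    "network (WAN link)": lambda: tcp_probe("gateway.example.com", 443),
    "server (app tier)":  lambda: tcp_probe("app1.example.com", 8080),
    "storage (database)": lambda: tcp_probe("db1.example.com", 5432),
}

status = run_checks(checks)
business_is_up = all(status.values())  # "up" only if every layer answers
```

In practice such probes would run on a schedule and feed alerting or automatic failover, rather than a one-shot check as sketched here.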
Today, cloud computing tends to replace standard c/s concepts and is becoming a platform of choice. Forrester predicted that the global cloud computing market would grow from $40.7 billion in 2011 to more than $241 billion in 2020 (Dignan, 2011). However, it is most likely that the role of servers and the server operating environment will remain almost the same, as business-critical applications are still run by server computers; the only difference is that these servers reside within the boundaries of the cloud computing provider rather than in the organization’s computer centre. As reported by CIO (2013):
Web-based services can crash and burn just like any other type of technology. If the companies behind them are smart, you shouldn’t lose any data in the long run – but you’ll likely lose a bit of sanity during the time the service is offline.
Marshall (2013) provided a story about the cloud storage provider Nirvanix that “[. . .] has told its customers they have two weeks to find another home for their terabytes of data because the company was closing its doors and shutting down its services”.
As noted earlier, servers can be affected by several types of hardware glitches and failures, system software crashes, application software bugs, hackers’ malicious acts, system administrators’ mistakes, and natural disasters like floods, fires, earthquakes, hurricanes, or terrorist attacks. Therefore, the process of addressing and managing the [. . .]

Figure 3. Downtime points in a cloud-based information architecture

Figure 4. Always-on business (organizational pressure) and always-on information system (organizational response): a systemic view
Physical threats result from any kind of physical damage that may occur to IT-centres, servers or communication devices. Natural-catastrophic events such as fire, lightning, flood, earthquake, hurricane, tornado or snow can damage IT-centres and cause application/data unavailability for some time. Logical threats may take different forms, such as deleted system files, corrupted files, broken processes, a corrupted file system or a crashed operating system. Technical glitches relate to hardware component failures that may occur on any computer component/device within the IT infrastructure (memory chips, fans, mainboards, hard disks, disk controllers, tapes or tape drives, network cards, switches, routers, communication lines, power supplies, etc.). Server operating system crashes are situations that make all applications and data stored on the crashed enterprise server unavailable to end-users. A typical example of such a situation is the well-known “blue screen of death” (BSOD) on Windows Server systems. Application software defects, failures and crashes may take different forms, such as bugs in programs, non-integrated or badly integrated applications, user interventions on desktop computers and file corruption. LAN/WAN/internet infrastructure problems, in addition to possible hardware glitches on data communication devices, include issues with domain controllers, active directory, DNS servers, network configuration files and so on.
Human errors comprise several forms of accidental or intentional file deletion, unskilled operations, and intentional hazardous activities including sabotage, strikes, epidemic situations and vandalism. Accidental, or in some cases intentional, removal of system file(s) by a system administrator can shut down the whole operating system and make the applications and data stored on the server unavailable.

Figure 5. Major threats (glitches, faults, failures, disasters) on IT infrastructure

Another example is the loss of key IT personnel or the departure of expert staff for several reasons, for instance managerial faults or bad decisions on IT-staffing policy. Neglecting a system administrator’s request for a higher salary, for example, can lead to a situation where he/she decides to leave the organization overnight with all admin/root passwords in his/her head, or executes a destructive command and intentionally deletes all data.
Recently, the Quorum Disaster Recovery Report (Quorum Report, 2013) revealed that hardware failures are the most common cause of disasters within small and mid-sized businesses, accounting for 55 percent, while in 22 percent of the disasters the reason was human error. Human errors include system and network administrators’ mistakes. According to the report, software failure accounts for 18 percent, with operating system failures being the main contributor to software problems. The report states that:
Flood, earthquakes, tornadoes, etc. are common natural disasters which can physically
damage the system components. In spite of the fact that natural disasters happen very
frequently, this cause merely takes 5 percent of the overall reasons.
Venkatraman (2013) noted that:
More than a third of respondents viewed human error as the most likely cause of downtime.
Equipment failure and external threat of power outages were the second and third likely
causes cited by respondents.
Clancy (2013) noted that:
Hardware failure is the biggest culprit, representing about 55 percent of all downtime events
at SMBs, while human error accounts for about 22 percent of them [. . .] That compares with
about 5 percent for natural disasters. It takes an average of 30 hours to recover from failures,
which can be devastating for a business of any size.
According to the Information Today Report (2012), network outages (50 percent) were the leading cause of unplanned downtime within the last year. Human error (45 percent), server failures (45 percent) and storage failures (42 percent) followed closely behind. An example of human error is the accidental or intentional removal of files by a system administrator on any server platform. By using the very simple command rm at the root level on UNIX or Linux servers, a system administrator can remove all files (Bajgoric, 2010). A typical example of such a situation was PlusNet’s loss of customer emails (Oates, 2006) due to a problem that was “caused by the unfortunate engineer’s first attempt to fix the original mistake”.
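One common mitigation for this class of administrator error is to wrap destructive operations in a guard that refuses to act outside an allowed directory. The following Python sketch is a hypothetical illustration (SAFE_ROOT and the example paths are invented, not from the paper); it complements, rather than replaces, backups and sound privilege management.

```python
# Hypothetical guard against the "rm at root level" class of admin error:
# deletion is allowed only strictly inside a configured safe directory.

from pathlib import Path
import shutil

SAFE_ROOT = Path("/srv/app-data")  # invented example path; real policies differ

def guarded_delete(target: str) -> None:
    """Delete a directory tree only if it lies strictly inside SAFE_ROOT."""
    path = Path(target).resolve()
    if path == SAFE_ROOT or SAFE_ROOT not in path.parents:
        raise PermissionError(f"refusing to delete outside {SAFE_ROOT}: {path}")
    shutil.rmtree(path)

# guarded_delete("/")                    # would raise PermissionError
# guarded_delete("/srv/app-data/cache")  # would be allowed by the guard
```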
In short, in order to reduce the above risks, modern business seeks a solution that provides the following capabilities in its information system:
- a highly reliable, available and scalable operating platform designed and implemented for high availability, reliability and scalability;
- integrated applications designed for high availability, reliability and scalability;
- redundant hardware and redundant networking capabilities;
- a fault-tolerant system with fault tolerance/disaster tolerance technologies;
- an efficient and effective backup and recovery system;
- skilled system and network administration;
- a secure environment: user authentication, intrusion detection, secure transactions; and
- holistic IT-management integrated with organizational management.
Figure 6. “Always-on” enterprise information system – framework for BCM implementation: a systemic model

[. . .] computing platform for BC. It contains the technologies related to server hardware, server operating system, server applications and, in broader terms, storage and network technologies. The main component of such an infrastructure in today’s dominant client-server or even cloud computing architecture is the server operating environment: a collection of server, server operating system and serverware components aimed at enhancing system uptime ratios. Such an environment can be implemented/organized “on-premises” and/or within the cloud computing provider’s premises. This approach leads to identifying the “always-on” enterprise information system, which can be defined as an information system with 100 percent uptime/zero downtime. These components are built and implemented within three continuous computing layers, as shown in Figure 7:
(1) Layer 1. Server operating system (environment).
(2) Layer 2. Storage, backup and recovery technologies.
(3) Layer 3. Networking infrastructure.
Figure 7. Layers of an “always-on” enterprise information system

Server operating systems are expected to provide high ratios in terms of availability, reliability and scalability for server configurations running business-critical applications. They do this by providing features such as automatic failover, system recovery, reloadable kernel, online upgrade, crash handling, SMP, VLM/VLDB, virtualization, etc. Therefore, server operating systems must be viewed not only as operating systems from a computer science perspective, but from a business perspective as well, having in mind their role in assuring continuous computing and an “always-on” information system (Bajgoric, 2008).
Server operating systems must be implemented and considered in their broader form – that of the “server operating environment”, a concept that includes several server-based applications and extensions for fault tolerance, disaster tolerance/recovery and high availability/reliability/scalability that are necessary for enhancing system uptime. Modern server operating systems provide, in addition to the core operating platform, several additional components (software modules) called serverware solutions. Today’s server processors and server configurations are designed and configured in the form of integrated hardware platforms implemented in order to enhance availability ratios. Some of the HA-enabling technologies include: 64-bit processors, multi-core processors, L1/L2/L3 cache, ECC, MEC, memory double-chip spare, automatic deconfiguration of memory and processors, hot-swappable components, fault tolerance, redundant units, etc. Server platforms are run by server operating systems. If a server operating platform is enhanced by fault-tolerant and disaster-tolerant technologies, it can continue to operate even in the case of several types of failures and disasters.
Vendors of server operating systems follow the achievements in server hardware related to fault tolerance/disaster tolerance, hardware mirroring, hot-spared disks, high availability storage systems, clustering and redundant units. They are expected to enhance their OS platforms in order to support embedded high availability and fault-tolerant hardware capabilities. For instance, two major components of HP’s HP-UX server operating environment (www.hp.com) are the high availability OE and the data center OE. They are intended to enhance the levels of availability, reliability and scalability. Some of the BC-oriented features included in this server operating platform are: the JFS file system, JFS snapshot technology, hardware and software mirroring, system recovery routines, hot-spare technology, HA storage systems, HA monitoring, the DRD feature, HP Serviceguard, the concept of compartments, fine-grained privileges, role-based access control, etc.
Figure 8. BC enablers: the “Onion” model
References
Asgary, A. and Naini, A.S. (2011), “Modelling the adaptation of business continuity planning by
businesses using neural networks”, Intelligent Systems in Accounting, Finance and
Management, Vol. 18, pp. 89-104.
Avery Gomez, E. (2011), “Towards sensor networks: improved ICT usage behavior for business
continuity”, Proceedings of SIGGreen Workshop, Sprouts: Working Papers on Information
Systems, Vol. 11 No. 13, available at: sprouts.aisnet.org/11-13.
Bajgoric, N. (2006), “Information systems for e-business continuance: a systems approach”,
Kybernetes: The International Journal of Systems and Cybernetics, Vol. 35 No. 5,
pp. 632-652.
Bajgoric, N. (2010), “Server operating environment for business continuance: framework for
selection”, Int. J. Business Continuity and Risk Management, Vol. 1 No. 4, pp. 317-338.
Bartel, V.W. and Rutkowski, A.F. (2006), “A fuzzy decision support system for IT service
continuity threat assessment”, Decision Support Systems, Vol. 42 No. 3, pp. 1931-1943.
Bertrand, C. (2005), “Business continuity and mission critical applications”, Network Security,
Vol. 20 No. 8, pp. 9-11.
Boehman, W. (2009), “Survivability and business continuity management system according to
BS 25999”, Proceedings of the 2009 Third International Conference on Emerging Security
Information, Systems and Technologies, pp. 142-147.
Botha, J. and Von Solms, R. (2004), “A cyclic approach to business continuity planning”,
Information Management & Computer Security, Vol. 12 No. 4, pp. 328-337.
Broder, J.F. and Tucker, E. (2012), Business Continuity Planning, Risk Analysis and the Security
Survey, 4th ed., Elsevier, London.
Butler, B. (2013), “Amazon.com suffers outage: nearly $5M down the drain?”, Network World, January 31, available at: www.networkworld.com/news/2013/013113-amazoncom-suffers-outage-nearly-5m-266314.html (accessed September 12, 2013).
Butler, B. and Gray, P.H. (2006), “Reliability, mindfulness, and information systems”, MIS Quarterly, Vol. 30 No. 2, pp. 211-224, available at: www.jstor.org/stable/25148728 (accessed July 19, 2013).
Cerullo, V. and Cerullo, R. (2004), “Business continuity planning: a comprehensive approach”, Information Systems Management, Vol. 21 No. 3, pp. 70-78.
Churchman, C.W. (1968), The Systems Approach, Delacorte Press, New York, NY.
Clancy, H. (2013), “Most common cause of SMB downtime? The answer may
surprise you”, ZDNET, available at: www.zdnet.com/most-common-cause-of-smb-
downtime-the-answer-may-surprise-you-7000011137 (accessed September 11, 2013).
Coony, M. (2013), “Gartner: the top 10 IT-altering predictions for 2014”, available at: www.infoworld.com/t/it-management/gartner-the-top-10-it-altering-predictions-2014-228419?page=0,2&source=IFWNLE_nlt_mobilehdwr_2013-10-14 (accessed October 14, 2013).
Craighead, C.W., Blackhurst, J., Rungtusanatham, M.J. and Handfield, R.B. (2007), “The severity
of supply chain disruptions: design characteristics and mitigation capabilities”, Decision
Sciences, Vol. 38 No. 1, pp. 131-155.
Csaplar, D. (2012), “The cost of downtime is rising”, available at: http://blogs.aberdeen.com/it-
infrastructure/the-cost-of-downtime-is-rising/ (accessed March 12, 2012).
Dignan, L. (2011), “Cloud computing market: $241 billion in 2020”, available at: www.zdnet.com/
blog/btl/cloud-computing-market-241-billion-in-2020/47702 (accessed January 6, 2012).
Forrester Report (2013), How Organizations Are Improving Business Resiliency with Continuous
IT Availability, February, available at: www.emc.com/collateral/analyst-report/forrester-
improve-bus-resiliency-continuous-it-avail-ar.pdf (accessed November 11, 2013).
Gibb, F. and Buchanan, S. (2006), “A framework for business continuity management”,
International Journal of Information Management, Vol. 26, pp. 128-141.
Greening, P. and Rutherford, C. (2011), “Disruptions and supply networks: a multi-level,
multi-theoretical relational perspective”, The International Journal of Logistics
Management, Vol. 22 No. 1, pp. 104-126.
Herbane, B. (2010), “The evolution of business continuity management: a historical review of
practices and drivers”, Business History, Vol. 52 No. 6, pp. 978-1002.
Herbane, B., Elliott, D. and Swartz, E.M. (2004), “Business continuity management: time for a
strategic role?”, Long Range Planning, Vol. 37, pp. 435-457.
Information Today Report (2012), Enterprise Data and The Cost of Downtime: 2012 IOUG
Database Availability Survey, available at: www.oracle.com/us/products/database/2012-
ioug-db-survey-1695554.pdf (accessed September 11, 2013).
Jarvelainen, J. (2013), “IT incidents and business impacts: validating a framework for continuity
management in information systems”, International Journal of Information Management,
Vol. 33, pp. 583-590.
Kadam, A. (2010), “Personal business continuity planning”, Information Security Journal:
A Global Perspective, Vol. 19 No. 1, pp. 4-10.
Lavastre, O., Gunasekaran, A. and Spalanzani, A. (2012), “Supply chain risk management in
French companies”, Decision Support Systems, Vol. 52, pp. 828-838.
Lewis, W.R. and Pickren, A. (2003), “An empirical assessment of IT disaster risk”,
Communication of ACM, September, pp. 201-206.
Lindstrom, J. (2012), “A model to explain a business contingency process”, Disaster Prevention and Management, Vol. 21 No. 2, pp. 269-281.
Lindstrom, J., Samuelson, S. and Hagerfors, A. (2010), “Business continuity planning methodology”, Disaster Prevention and Management, Vol. 19 No. 2, pp. 243-255.
Maitland, J. (2011), “A really bad week for Google and Amazon”, available at: http://searchcloudcomputing.techtarget.com/news/2240035039/A-really-bad-week-for-Google-and-Amazon?asrc=EM_NLN_13718724&track=NL-1324&ad=826828 (accessed April 23, 2011).
Mansoori, B., Rosipko, B., Erhard, K.K. and Sunshine, J.L. (2013), “Design and implementation of
disaster recovery and business continuity solution for radiology PACS”, J. Digit Imaging,
August.
Marshall, D. (2013), “Cloud storage provider Nirvanix is closing its doors”, available at: www.
infoworld.com/d/virtualization/cloud-storage-provider-nirvanix-closing-its-doors-227289
(accessed October 1, 2013).
Martin, N. (2011), “The true costs of data center downtime”, available at: http://itknowledgeexchange.techtarget.com/data-center/the-true-costs-of-data-center-downtime/ (accessed January 8, 2012); also available at: www.emersonnetworkpower.com/en-US//Brands/Liebert/Pages/LiebertGatingForm.aspx?gateID=777 (accessed January 8, 2012).
Nollau, B. (2009), “Disaster recovery and business continuity”, Journal of GXP Compliance,
Vol. 13 No. 3, p. 51, ABI/INFORM Global.
Oates, J. (2006), “PlusNet admits customer emails are lost forever”, available at: www.theregister.
co.uk/2006/08/03/plusnet_loses_deleted_emails/ (accessed September 13, 2013).
Quorum Report (2013), Quorum Disaster Recovery Report, Q1 2013, available at: www.quorum.
net/news-events/press-releases/quorum-disaster-recovery-report-exposes-top-causes-of-
downtime (accessed September 09, 2013).
Rebman, T., Wang, J., Swick, Z., Reddick, D. and delRosario, J.L. (2013), “Business continuity and
pandemic preparedness: US health care versus non-health care agencies”, American
Journal of Infection Control, Vol. 41, pp. 27-33.
Sapateiro, C., Baloian, N., Antunes, P. and Zurita, G. (2011), “Developing a mobile collaborative
tool for business continuity management”, Journal of Universal Computer Science, Vol. 17
No. 2, pp. 164-182.
Speight, P. (2011), “Business continuity”, Journal of Applied Security Research, Vol. 6, pp. 529-554.
Swartz, E., Elliott, D. and Herbane, B. (2003), “Greater than the sum of its parts:
business continuity management in the UK finance sector”, Risk Management, Vol. 5 No. 1,
pp. 65-80.
Tammineedi, R.L. (2010), “Business continuity management: a standards-based approach”,
Information Security Journal: A Global Perspective, Vol. 19, pp. 36-50.
Tan, Y. and Takakuwa, S. (2011), “Use of simulation in a factory for business continuity planning”, International Journal of Simulation Modelling (IJSIMM), Vol. 10, pp. 17-26.
Turban, E. and Volonino, L. (2010), Information Technology For Management – Transforming
Organizations in the Digital Economy, 7th ed., Wiley, New York, NY.
Turban, E., Rainer, R.K. and Potter, R.E. (2005), Introduction to Information Technology, Wiley,
New York, NY.
Venkatraman, A. (2013), “Human error most likely cause of datacentre downtime, finds study”,
Computerweekly, available at: www.computerweekly.com/news/2240179651/Human-
error-most-likely-cause-of-datacentre-downtime-finds-study (accessed September 11,
2013).
Winkler, U., Fritzsche, M., Gilani, W. and Marshall, A. (2010), “A model-driven framework for process-centric business continuity management”, Proceedings of the 2010 Seventh International Conference on the Quality of Information and Communications Technology, Porto, Portugal, pp. 248-252.
Wolf, J. (2011), “Sustainable supply chain management integration: a qualitative analysis of the
German manufacturing industry”, Journal of Business Ethics, Vol. 102, pp. 221-235.
Further reading
Bajgoric, N. (2007), “Toward always-on enterprise information systems”, in Gunasekaran, A. (Ed.),
Modeling and Analysis of Enterprise Information Systems – Advances in Enterprise
Information Systems, IGI Global, Hershey, PA, pp. 250-284.
Churchman, C.W. (1971), The Design of Inquiring Systems: Basic Concepts of Systems and
Organizations, Basic Books, New York, NY.
Herbane, B., Elliot, D. and Swartz, E.M. (1997), “Contingency and continua: achieving excellence
through business continuity planning”, Business Horizons, Vol. 40 No. 6.
Corresponding author
Nijaz Bajgoric can be contacted at: nijaz.bajgoric@efsa.unsa.ba