
The Journal of Operational Risk (1–14)

Volume 6/Number 1, Spring 2011

The most insidious operational risk: lack of effective information sharing


Steven Francis
Massachusetts Institute of Technology, Sloan School of Management, 600 Memorial Drive, W98-200 Cambridge, MA 02139-4822, USA; email: stevefr@mit.edu

Poorly integrated information systems often lead to risks that are high in frequency and low in severity. However, many catastrophic failures and loss events can also be traced back to a lack of integration. No single software application, consulting engagement or other silver bullet can solve the problem: doing so requires vision, discipline and an understanding of why many integration initiatives do not succeed.

1 INTRODUCTION
Sharing information across departments, software applications and other silos may pose the greatest existing systemic operational risk in any industry. The problem hinders decision making across nearly every organizational function. The Senior Supervisors Group, a group of senior managers from a large number of international financial firms (see Senior Supervisors Group (2009)), concluded that one of the four primary causes of the financial crisis was inadequate and often fragmented technological infrastructure that hindered effective risk identification and measurement. Management's lack of commitment to such risk control, as well as a lack of resources to develop the required information technology (IT) infrastructure, were cited as ongoing obstacles to improvement. Poor integration of data across systems and departments is repeatedly mentioned throughout the document as an impediment to risk management.

These problems are in no way specific to banking, or to managing credit risk or market risk. Consider the operational risks that the US intelligence community faces because of the same problem. The terrorist attack of September 11, 2001 and the attempted attack on December 25, 2009 could both have been prevented if data had been shared more effectively (National Commission on Terrorist Attacks (2004)). Operational risks should be considered in terms of their relevance to an organization's ability to perform its mission. There is perhaps no greater example of mission failure.

Salespeople lose customers, and service personnel often provide slow or ineffective service, because key information needed to serve customers is spread all over the organization. Sales, service and operations staff spend tremendous amounts of time entering nearly identical data into multiple systems, and it is not always entered in the same way, causing process errors and mistakes later on. A study by Vanson Bourne (2009) showed the following: (1) 89% of respondents stated that they cannot get a single view of process performance because information on business processes is held in multiple operational systems; (2) 80% of respondents use middleware to try to bring data together, in a way that is unsatisfactory to those in charge; and (3) 67% admitted that they hear about problems in service from customers before they identify the problems themselves.

Why are firms so bad at this? Making the mental connection between integration infrastructure and customer attrition due to poor service can be a difficult leap. There are other fairly simple reasons, and thankfully there are also remedies. First we define the problem with some examples and focus on what some of the solutions look like. We then look at some of the risks and difficulties associated with implementing these solutions.

The ability to serve customers faster, reduce processing errors and dramatically reduce the cycle times of core business processes are direct benefits of having integrated data. Event-processing infrastructure can enable real-time visibility into key performance metrics that would otherwise only be available weeks or months after the relevant point in time. This allows for rapid responses in a constantly changing business environment. If customer waiting times go up, we should know right away. If customer complaints increase, we should know right away. If orders decrease, we should know right away. If input costs increase, we should know right away. Each of these simple examples presents a measurable operational risk that, if not addressed promptly, can lead to substantial loss events. Information technology organizations already spend around 30% of their budgets on enterprise information integration (Gannon et al (2009)), yet most of these efforts are not conducted in a proactive or systematic fashion. Integrations are built and maintained in a reactive way, with little thought for consistency, reusability or any kind of strategy.
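To make the real-time monitoring idea concrete, the following sketch shows how an event-processing layer might check each incoming metric reading against a threshold and raise an alert immediately, rather than at month end. It is a minimal illustration; the metric names, threshold and alert handling are assumptions made for the example, not part of any particular product.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MetricEvent:
    name: str       # e.g. "customer_wait_minutes", "open_complaints" (illustrative)
    value: float

def make_threshold_monitor(limit: float, alert: Callable[[MetricEvent], None]):
    """Return a handler that raises an alert the moment a metric crosses its limit,
    instead of waiting for a month-end report."""
    def handle(event: MetricEvent) -> None:
        if event.value > limit:
            alert(event)
    return handle

alerts: List[str] = []
on_wait_time = make_threshold_monitor(
    limit=10.0, alert=lambda e: alerts.append(f"ALERT: {e.name}={e.value}"))

# Events would normally arrive from operational systems in real time;
# here we simply simulate two readings.
on_wait_time(MetricEvent("customer_wait_minutes", 7.5))   # below limit: no action
on_wait_time(MetricEvent("customer_wait_minutes", 14.0))  # above limit: alert fires
print(alerts)  # ['ALERT: customer_wait_minutes=14.0']
```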

1.1 How do we spot integration problems?


We now give some examples of how this problem might occur, and a short description of straight-through processing. Straight-through processing is interesting because it
is a formalized way of implementing some of the recommendations in this paper, specifically in the financial-services industry. Because the stakes of data accuracy, integrity and timeliness are so high in financial services, best practices for integration are formalized and broadly adopted. For these reasons, many financial-services firms have become very adept at process automation and integration.

A large manufacturing firm in the Pacific Northwest had a very difficult time obtaining a complete view of its committed costs. It did not know what had been spent in a given month or quarter. Only when the books were rolled up at the end of the period was the company able to see what had actually been spent, and to compare this with budgeted numbers. This obviously made it difficult to control spending. Spending would frequently exceed what was budgeted, and redundant stocks of spare parts were common. Hundreds of thousands or even millions of dollars were being wasted. The reason for these problems was that there were many remote offices where spending took place. Some of the larger remote offices had developed their own systems for tracking expenses and spending, while some of the small offices just kept this data in spreadsheets. The remote offices would re-enter the data into the corporate financial system, or send batch uploads of the data, on an infrequent and inconsistent basis. The company solved this problem by integrating the larger remote office systems into the corporate financial system in real time, and by rolling out a web-based expense-tracking tool to the smaller offices that directly updated the corporate financial system. The result was greatly improved expense management, cash management, vendor management and inventory management. All of these benefits arose from just a little integration.

An oil and gas company that I worked with in Calgary several years ago was finding it difficult to decide where to invest tens of millions of dollars because it lacked visibility into which projects were paying off best (or had the most potential to pay off). Without an integrated view of how different projects were producing, related production costs, potential for future production, and past and future maintenance costs, the organization had little confidence that it was investing its capital optimally. To make matters worse, its economic situation changed daily due to fluctuations in spot and future oil prices. Capital allocation was being performed based on monthly and quarterly data, in an environment where the economics were changing every day. One executive said that he felt like he was investing tens of millions of dollars with a blindfold on. To address this problem, the organization deployed technology to consolidate information into a reporting database. It standardized definitions of critical data across applications and delivered this information to users through personalized dashboards. These dashboards were updated throughout the day as data was delivered to the reporting database via events in real time. The organization also automated many core business processes to ensure that when data was updated in one application it stayed in sync with other applications. This solution gave the customer
vastly improved visibility of its business and increased confidence that it was making the right investments at the right time.

In financial services, sharing information across partners and departments is broadly known as straight-through processing (STP). Due to the high stakes of data quality in the financial-services industry, many firms have become very good at this. Straight-through processing represents a major shift from the common T+3 settlement cycle to same-day settlement. This is accomplished by sharing information more effectively between departments and between partners in a transaction. Straight-through processing helps to keep the front office and back office synchronized by integrating data and systems. This kind of integration can drive significant operational costs out of a business. Rather than relying on operational staff to manually enter data from trading systems into back-office systems, which is expensive and leads to errors, STP automates these activities. By implementing STP, firms increase the probability that a contract or an agreement is settled on time, thereby reducing settlement risk. The sharing of information between transaction partners is another important aspect of STP and is a great way for firms to find a competitive advantage and to further reduce costs. In essence, however, STP is just a way to share information more effectively in order to improve transaction processes.
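As a rough illustration of the STP idea, the sketch below shows a trade captured by a front-office function automatically producing a settlement instruction for a back-office queue, removing the manual re-keying step described above. The system interfaces, field names and same-day settlement rule are hypothetical simplifications, not any firm's actual workflow.

```python
from datetime import date

def capture_trade(trade: dict, settlement_queue: list) -> None:
    """Front-office trade capture. In an STP setup the settlement instruction is
    derived and forwarded automatically; the field names here are illustrative."""
    instruction = {
        "trade_id": trade["trade_id"],
        "counterparty": trade["counterparty"],
        "amount": trade["quantity"] * trade["price"],
        "settle_on": date.today(),   # same-day settlement rather than T+3
    }
    settlement_queue.append(instruction)   # no manual re-entry in the back office

back_office_queue: list = []
capture_trade({"trade_id": "T-9001", "counterparty": "ACME Bank",
               "quantity": 1_000, "price": 25.50}, back_office_queue)
print(back_office_queue[0]["amount"])  # 25500.0
```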

1.2 Risk management technology


When people think about software for risk management, integration infrastructure is not usually the first thing that comes to mind: indeed, it is typically not even the third or fourth thing. There are a handful of technologies that are often associated with and applied to risk management problems. Document management packages, business continuity technology, assessment and survey tools, security technology, and business applications all play a role. These technologies are often combined to address a given risk or regulatory requirement. For example, electric utilities facing Federal Energy Regulatory Commission requirements may be very concerned with ensuring the continuing operation of the grid; they may therefore have to prevent any unauthorized access to systems that affect grid operations or, in the event of a catastrophe, they may need to ensure that systems that affect grid operations continue to run. Business continuity and identity and access management solutions may therefore be very important. Organizations strengthening Sarbanes–Oxley compliance may be interested in auditing and document management. Many governance, risk and compliance vendors package software components, industry best practices and implementations of regulatory requirements into governance, risk and compliance suites addressing these specific problems.

We focus now on technologies, solutions and best practices for sharing information across an enterprise. Although such solutions are more broadly focused, they are still highly relevant from a risk management perspective, and the
impact of such solutions may be far greater than the impact of more specific, or narrowly focused, solutions. Lam (2003) identifies three clear benefits that should come from operational risk management endeavors; data sharing initiatives satisfy all of them.

(1) Rigorous operational risk management should both minimize day-to-day losses and reduce the potential for occurrences of more costly incidents.

(2) Effective operational risk management improves a company's ability to achieve its business objectives. As such, management can focus its efforts on revenue-generating activities, as opposed to managing one crisis after another.

(3) Finally, accounting for operational risk strengthens the overall enterprise risk management system.

1.3 How did it get so bad?


As organizations evolve they typically adopt more and more software applications and data sources, and divisions occur naturally. The launch of a new project, the acquisition of a company or the introduction of a product often involves the rollout of new software applications. With each new application, data that already exists often gets duplicated. As a result there are often multiple copies of customer data, vendor data, product data or asset data. This causes processes to become more lengthy and complex. What was once a simple transaction can become quite unwieldy and may require the use of several systems and the knowledge of many arbitrary rules. As this natural evolution occurs, problems typically manifest in one of two ways.

Operational efficiency: the proliferation of systems and data often makes completing simple tasks more difficult, which leads to poor data quality, process errors and efficiency problems. What used to be accomplished by accessing only one system may now require access to, and knowledge of, three or four systems.

Business visibility: getting a complete view across systems can become very difficult as the number of systems grows. As data is distributed across multiple applications and data sources, retaining a single rational view of a customer, a vendor or an asset can become very challenging. This leads to decisions that may be based on inaccurate or incomplete information.

Aside from process breakdowns that occur due to inconsistent data, or bad decisions that are made due to fragmented views of information, data proliferation causes other serious problems. One sure sign of a data-visibility problem is the proliferation of spreadsheets and desktop databases. Users often extract data from software applications and put it into spreadsheets or desktop databases. As soon as they become
reliant on these spreadsheets or small databases to do their jobs, it becomes difficult to change the upstream application because this will prevent users from keeping their spreadsheets up to date. This limits IT flexibility, which in turn limits the flexibility of the whole enterprise. Such desktop databases are also a security risk: it is very easy for employees to e-mail databases or to store them on a personal drive.

Another persistent problem often occurs in the way in which siloed applications are integrated. Applications are typically integrated manually, through duplicate data entry, or by creating all manner of different software connections between different applications. We have already covered the problems with manual integration and multiple entry. However, the problems created by excessive software connections, or interfaces, are also profound. These connections can severely inhibit the flexibility of an entire organization. They are created using whatever technology the developer desires, and they are usually not monitored or created in a consistent way. They also get duplicated frequently, as data from one application may need to be integrated with several other applications. The maintenance of these connections consumes significant resources, and the connections themselves prevent organizations from upgrading or switching software applications because of the effort that would be required to rebuild them if a new application were installed. Benefitting from new technology is very hard when an organization is tightly wired to a legacy technology infrastructure. The next section outlines several ways of addressing these visibility and operational efficiency problems.

2 COMMON APPROACHES

2.1 Data warehousing and business intelligence


Perhaps the most recognized and widely used way of integrating information is by consolidating data from multiple applications into a data mart or data warehouse. A data warehouse is a database that has been optimized for reporting and decision support functions. It usually includes data that is summarized at different levels in order to quickly answer common questions, such as questions based on a time period, a geography or a department. Reporting and business-intelligence tools are usually used in conjunction with data warehouses to enable ad hoc reporting and analysis. Common reports or frequently sought-after information may be displayed in personalized dashboards, with varying levels of summarization and visibility permitted for different roles or levels within an organization. The expense and effort required to implement such a system is significant: these tend to be lengthy and expensive projects. However, the value of a successful data warehouse and decision support system can be tremendous. It can provide visibility and business insights that were previously impossible to obtain. Although these
systems can help immensely with data visibility problems, they do little to address the operational efficiency problems related to siloed information. In fact, the new connections that are created in order to push data into a warehouse often make the problem worse. In reality, as organizations grow, data warehouses often become a necessity. Without a consistent, consolidated and summarized source of information, reporting becomes increasingly difficult over time. Data-warehouse solutions also fit well with the project-oriented nature of many IT departments. Although data warehouses can be used for some predictive analysis, they are typically backward looking and intended for historical reporting.
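To illustrate the summarization idea, the following sketch loads a few consolidated records into an in-memory table and answers two common questions (by time period and by geography) from pre-aggregated queries. The table, column names and figures are invented for the example; a real warehouse would be populated from many source systems.

```python
import sqlite3

# A toy "warehouse" table: facts already consolidated from several source systems.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_fact (period TEXT, region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales_fact VALUES (?, ?, ?)", [
    ("2011-Q1", "EMEA", 120.0), ("2011-Q1", "AMER", 200.0),
    ("2011-Q2", "EMEA", 90.0),  ("2011-Q2", "AMER", 260.0),
])

# Summaries answer common questions (by time period, by geography)
# without touching the operational systems the data came from.
by_period = conn.execute(
    "SELECT period, SUM(amount) FROM sales_fact GROUP BY period ORDER BY period").fetchall()
by_region = conn.execute(
    "SELECT region, SUM(amount) FROM sales_fact GROUP BY region ORDER BY region").fetchall()
print(by_period)  # [('2011-Q1', 320.0), ('2011-Q2', 350.0)]
print(by_region)  # [('AMER', 460.0), ('EMEA', 210.0)]
```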

2.2 Event-based or service-based integration


The concept of building an integration infrastructure is often overlooked as a possible means of controlling risk. This may be because it is not technology that end users ever see or directly interact with. However, a well-developed integration strategy and infrastructure can improve operational efficiency, data quality, process quality and decision making. The power of such solutions is tremendous.

For example, consider a personal financial advisor who uses four different applications to service his customers: perhaps a marketing and recommendations application, a customer information and service application, a portfolio management application and another application for insurance products. Let us consider how a common process might unfold if these applications are not integrated. When a customer notifies his broker of an address change or a life-event change, such as a marriage, the broker may need to transfer the call to the operations department, or personally update this information in three or four systems. Either of these scenarios would be likely to irritate a customer. If these systems were integrated in a consistent fashion, they would simply stay in sync. The customer would be serviced more quickly and there would be fewer process breakdowns down the road, because data quality would improve. It would also not be necessary to create a laundry list of hard-wired software connections that inhibit IT and organizational flexibility. Michelson (2006) gives the following description of an event:
A service may generate an event. The event may signify a problem or impending problem, an opportunity, a threshold, or a deviation. Upon generation, the event is immediately disseminated to all interested parties (human or automated). The interested parties evaluate the event, and optionally take action. The event-driven action may include the invocation of a service, the triggering of a business process, and/or further information publication/syndication. In this interaction, the service is purely one of many event sources in a broader event-driven architecture.
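A minimal sketch of this publish/subscribe style, applied to the advisor example above: the application where the address change is entered publishes one event, and every subscribing application updates its own copy. The bus, topic name and records are stand-ins for real messaging middleware, not any vendor's API.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class EventBus:
    """Minimal in-memory publish/subscribe bus (a stand-in for real middleware)."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Every interested party receives the event and decides what to do with it.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()

# Each downstream application keeps its own copy of customer data in sync by
# subscribing to the same business event, instead of being hard-wired to the
# application where the change was first entered.
crm_records = {"C123": {"address": "old"}}
portfolio_records = {"C123": {"address": "old"}}

bus.subscribe("customer.address_changed",
              lambda e: crm_records[e["customer_id"]].update(address=e["new_address"]))
bus.subscribe("customer.address_changed",
              lambda e: portfolio_records[e["customer_id"]].update(address=e["new_address"]))

# The service application publishes one event; it does not need to know who listens.
bus.publish("customer.address_changed",
            {"customer_id": "C123", "new_address": "42 Main St, Boston, MA"})

print(crm_records["C123"], portfolio_records["C123"])
```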

Organizations have always been driven by business events, so why is this not the case for IT departments? Chandy and Schulte (2007) cite three reasons for the increased importance of event-based integration: (1) increased business competitiveness; (2) an increased level of responsiveness and real-time interaction due to the broad adoption of internet technologies; and (3) the increased power and capability of hardware and software, accompanied by a dramatic increase in technical capabilities.

2.3 Process automation tools


Process automation tools allow organizations to create new processes, or new microapplications, that run across existing applications. A few products that have this capability are listed below (note that there are many other products as well, and this is not meant as an endorsement of any kind):

(1) IBM WebSphere Business Modeler;
(2) Oracle BPM Suite;
(3) Pegasystems SmartBPM Suite;
(4) Software AG Business Process Management Suite.

Such tools make it relatively easy to talk to and integrate with existing applications and trading partners. A business process that crosses multiple applications can often be graphically defined and implemented using the same tool. These tools are not just pretty picture drawers that facilitate the documentation of processes: many modern tools have the power to actually automate and reengineer processes across a variety of existing systems. Such tools are indispensable for firms that are serious about STP. They generate audit trails of process executions and enable reporting on executed processes based on the data that flowed through them. Such tools enable the creation of composite applications: applications that are composed of functions and services from existing applications. This ensures that neither data nor functionality is unnecessarily duplicated. Through the use of adapters, some process automation tools enable IT departments to talk to all different kinds of applications and technologies in a consistent way. This frees users from being required to know the nitty-gritty details or specific programming language of legacy software applications. Such tools make most software look almost identical.

These tools are especially useful for STP when automating long-running transactions such as loan originations. Rather than rekeying data for a loan application,
financial analysis, scoring, credit decision or document preparation, business-process tools can automate much of this process by exposing and reusing functions in the existing systems, and then connecting these existing functions in new ways. Process tools typically enable complex rule definition as well. For example, if a customer is going to be denied a loan, the deny step of the process can automatically check whether the customer qualifies for a credit card, which may be another path to profit and a way to help a customer rebuild damaged credit. This same rule could be reused in another process, such as a process for a customer who is opening a new checking account.

Process automation tools may be implemented with or without an event-based and service-based integration infrastructure. However, the value that process automation tools provide is often greater if an event and services infrastructure is already in place. Event-based and service-based integration infrastructure increases the usefulness of this technology for two reasons. First, it cuts down on the number of hard-wired connections, which are very damaging to organizational flexibility. Event technology enables business processes to tell the messaging technology about an important event that needs to be shared across applications, or, vice versa, enables the messaging technology to tell a process about such an event. This contrasts with hard-wired connections between applications, which involve massive duplication and do not allow for reuse.

Let us now consider an example that includes business-process automation as well as event-based integration. Consider what part of a loan application process might look like if systems are not integrated. First, a loan officer has to manually enter data into a loan processing system, a credit management system and a document preparation system. Next, they walk to (or call) the credit department to find a credit analyst and tell them to hurry up with the credit processing. The credit analyst has some difficulty locating the credit request because the identification in the credit system is different from that in the loan processing system. After locating the record, the credit analyst informs the loan officer that the request has not yet been completed, or even begun, because the loan officer had entered some of the customer's information incorrectly and the customer's identification could not be validated. This is just the beginning of the process.

As an alternative, imagine that when the loan officer entered information into the loan processing application it automatically created a loan request event. Other applications, such as the credit management and document preparation applications, might be interested in this event and have a subscription to it. This event might also be subscribed to by a business process called loan application. This process can tie together the activities of the loan application across all of the different applications that it touches, and it can also assign tasks to the people who are required to advance the loan application process. The loan request event has all of the loan application data
attached to it, and the subscribing process and systems are automatically updated with this information. As soon as the loan request event is received by these other systems, credit analysis and document preparation will begin. When the credit analyst completes his credit analysis task and submits his feedback, this fires an event called credit feedback, to which the loan application process subscribes, causing the process to wake up from the state in which it was waiting for feedback. The processing of the loan application then resumes. All system interaction and human interaction are orchestrated by the loan application process, and each event and activity in the process is recorded and fully auditable. There are countless examples of other improvements that are possible in almost any kind of business.
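The following sketch captures the shape of that event-driven loan process: the orchestrating process reacts to a loan request event, waits for credit feedback, records every event for audit, and reuses a simple rule on the deny path. The event names, fields and score threshold are assumptions made for the illustration, not a product API.

```python
class LoanApplicationProcess:
    """Illustrative orchestration of the loan example above: the process reacts to
    events rather than to hard-wired calls between applications."""

    def __init__(self) -> None:
        self.state = "NEW"
        self.audit_trail = []  # every event is recorded, so the process is auditable

    def on_loan_requested(self, event: dict) -> None:
        self.audit_trail.append(("loan.requested", event))
        # Downstream systems (credit analysis, document preparation) would be
        # triggered here through their own subscriptions to the same event.
        self.state = "WAITING_FOR_CREDIT_FEEDBACK"

    def on_credit_feedback(self, event: dict) -> None:
        self.audit_trail.append(("credit.feedback", event))
        # The process "wakes up" from its waiting state and resumes.
        self.state = "APPROVED" if event["score"] >= 650 else "CHECK_CREDIT_CARD_OFFER"

process = LoanApplicationProcess()
process.on_loan_requested({"applicant_id": "A-17", "amount": 250_000})
process.on_credit_feedback({"applicant_id": "A-17", "score": 610})
print(process.state)             # CHECK_CREDIT_CARD_OFFER: the deny path reuses another rule
print(len(process.audit_trail))  # 2 recorded events
```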

3 IMPLEMENTATION CHALLENGES AND COMMON OBSTACLES


Implementing event-based and process-based integration is no easy task. The technology is not inherently difficult to use, but IT organizations become very adept at creating hard-wired integrations whenever a need arises. When project timelines are being squeezed and someone needs to save a day or two rolling out a new application, quick-and-dirty integration can be a tempting way to cut corners. There is also a considerable amount of planning and coordination that must be done up front in order to develop standard definitions and representations of data. Over time, event-based integration makes integration much faster and easier; however, some up-front planning and investment is definitely required.

The benefits of this technology are very compelling, but many organizations have started down this path without achieving success. There are some good reasons for this. Here are a few of them, as well as some tips on how to overcome these obstacles. This is by no means an exhaustive list, but these are certainly some of the more common factors inhibiting successful integration initiatives.

(1) Information technology is usually project driven, and the value of integration within a single project is quite low compared with the value of integration across an enterprise. The benefits of integration typically accrue to an organization, not to a project.

(2) Developers tend to be very independent and like to do things their own way. They may resist such changes. Incentives for developers are often overlooked.

(3) Projects are usually carried out on very tight schedules, and if an integration infrastructure is new, this can slow things down for a couple of days when first used. Switching back to the old way can therefore please both developers and project managers.
(4) Lack of standard definitions for data can slow things down. It is easy to get stuck in the weeds discussing what data constitutes a customer or an order. This is further complicated by differences in the semantic representations of data in different applications. For example, height can be measured in feet, inches, centimeters, etc. Weights and distances often have different measurements as well (Gannon et al (2009)). Such issues complicate the construction of standard definitions and, for those who have not dealt with such issues, the task can seem quite daunting.

Figure 1 depicts a causal loop diagram showing how standardized integration initiatives can be thwarted by some of the forces described above. The diagram is easiest to read by starting on the right-hand side, at project schedule pressure, and following the arrows.

FIGURE 1 Policy resistance to integration standards. [Causal loop diagram with nodes: new project or software application; project schedule pressure; burden for project/developers; burden for department; haphazard integration; standardized integration initiatives; data quality; data visibility; benefit to organization.]

4 TIPS FOR OVERCOMING COMMON OBSTACLES


In Figure 1, the arrows leading to haphazard integration that have a positive polarity represent the forces supporting the status quo and preventing improved integration practices. These dynamics need to be disrupted in order to achieve success. The following subsections give some ways of doing this effectively.

4.1 Measurement
By creating a database of loss events, the measurement and analysis of these risks becomes far easier. Whether it is customer attrition, a work stoppage or something more serious, the objective reporting of such events into a single repository will increase visibility and awareness. The reporting of such events needs to be independent, however. A salesperson or serviceperson may be reluctant to disclose a loss, or may attribute a loss incorrectly in order to manage his or her performance metrics. For this reason, reports of loss events should be reviewed in a short process or workflow before being entered into the database. Even when there is a high level of independence in the reporting process, attribution is still difficult; however, as long as the reporting process is fairly independent, this can be addressed later. Such measurement efforts make it much easier to achieve strategic management support and sponsorship, which is obviously critical.

It is also important to measure benefits as well as losses. There is some art here and some science, but data sharing undoubtedly creates some direct and measurable benefits. Many business processes can be optimized and shortened, driving costs out of the business. The total amount of integration code that needs to be created and maintained is reduced. Troubleshooting becomes less costly because process audit trails are maintained and can easily be reported on. Such improved flexibility can actually become a competitive advantage that attracts new customers, although attribution of such effects is difficult.
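A minimal sketch of such a loss-event repository, assuming illustrative fields and a single review flag to represent the independent review workflow; only reviewed events contribute to the measured total.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LossEvent:
    description: str
    category: str           # e.g. "customer attrition", "work stoppage", "process error"
    estimated_loss: float
    reported_by: str
    reviewed: bool = False  # set by an independent reviewer, not by the reporter

class LossEventRepository:
    """Single repository for loss events; only reviewed events are counted."""
    def __init__(self) -> None:
        self._events: List[LossEvent] = []

    def submit(self, event: LossEvent) -> None:
        self._events.append(event)

    def review(self, event: LossEvent) -> None:
        event.reviewed = True   # a short review workflow before the event "counts"

    def total_reviewed_loss(self) -> float:
        return sum(e.estimated_loss for e in self._events if e.reviewed)

repo = LossEventRepository()
e = LossEvent("Order re-entered with wrong price", "process error", 12_000.0, "sales")
repo.submit(e)
repo.review(e)
print(repo.total_reviewed_loss())  # 12000.0
```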

4.2 Recognition
Effective incentives and controls are also essential. Communicating standards for how integration should be performed within IT, and publishing these standards, is a good start. Recognition is a powerful motivator too: by publicly and positively acknowledging compliance with such standards, successful adoption becomes much more likely.

4.3 Using expandable definitions


To avoid the trap of getting stuck arguing about how to define data, organizations can use canonical definitions of data (basically just a union of all data elements). This is a sound best practice and allows progress to occur immediately. These canonical definitions can be expanded over time. However, there will also be challenges with data translation. Because similar data might have different units of measurement in different systems, a translation or rules engine is useful so that translation logic does not get duplicated.
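A small sketch of the canonical-definition idea, assuming two hypothetical source records for the same customer: the canonical record is essentially the union of their fields, and the unit translation lives in one shared rule so the logic is not duplicated.

```python
# Hypothetical source records for the same customer, with different field names
# and different units (centimeters versus inches).
crm_record = {"cust_id": "C123", "name": "A. Smith", "height_cm": 180}
policy_record = {"customer_no": "C123", "risk_class": "B", "height_in": 70.9}

# One shared translation rule, kept in a single place so it is not duplicated.
def inches_to_cm(value: float) -> float:
    return round(value * 2.54, 1)

def to_canonical(crm: dict, policy: dict) -> dict:
    """Canonical customer definition: essentially the union of both systems' fields,
    normalized to one set of names and units. The mapping is illustrative only."""
    return {
        "customer_id": crm["cust_id"],
        "name": crm["name"],
        "risk_class": policy["risk_class"],
        "height_cm": crm.get("height_cm") or inches_to_cm(policy["height_in"]),
    }

print(to_canonical(crm_record, policy_record))
# {'customer_id': 'C123', 'name': 'A. Smith', 'risk_class': 'B', 'height_cm': 180}
```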

4.4 Do not forget the cloud


Cloud computing, and specifically software as a service, poses new integration challenges as well. As organizations move applications from their premises into the cloud, where they are controlled by a third party, integration becomes more complex. Integrating with something that is hosted elsewhere, in an environment that supports numerous customers, may sometimes not be possible; for example, the vendor might not allow it for security or quality control reasons. Where it is possible, there will almost certainly be limitations on what kind of integration is permitted (batch integration only, web-services-based integration only, security restrictions, etc).

4.5 Timing
There are good times and bad times to roll out integration infrastructure. It is unrealistic to imagine that any organization would throw away years' worth of hard-wired software connections and replace them overnight. This would be a large and risky project, with little immediate or obvious benefit to the business. However, upgrading or replacing large systems creates a great opportunity. Large systems have many software connections, or interfaces. When these systems are replaced, many software connections must also be replaced. This is the ideal time to start doing things the right way.

4.6 Vision and management support


Having a clear vision of a desired future state that is supported by senior management is also critical to success. Such a vision needs to align with an organization's goals and mission. Making technology selections, decisions or purchases before clearly defining a desired future state, or before securing support from senior management, is not recommended. Without such a vision, it is difficult to answer even the basic questions, such as "why are we doing this?"

5 CONCLUSION
Data, along with customers and employees, is increasingly identified as one of an organization's most valuable assets (Pitney Bowes Business Insight (2009)). Organizations must maintain this asset just as they take care of their other key assets, such as employees and customers. However, efforts to improve data quality, visibility and integration take a great deal of patience and commitment, and there are many places where such efforts can be compromised. Although they may be difficult to track and measure, loss events resulting from poor-quality data are rampant. The examples provided in this paper offer a good starting point for beginning to
recognize data problems and attribute loss events to them. Progress will be gradual and iterative but, over time, data-quality improvements can profoundly change an organization, making it more adaptable and flexible. Gradually, they will improve the quality of work of every employee who uses software to do his or her job. Process errors and breakdowns will diminish, and customers will be served faster and more effectively.

REFERENCES
Chandy, K. M., and Schulte, W. R. (2007). What is event driven architecture (EDA) and why does it matter? Discussion Paper (July).

Chandy, K. M., and Schulte, W. R. (2009). Event Processing: Designing IT Systems for Agile Companies. McGraw-Hill, New York.

De Fontnouvelle, P., DeJesus-Rueff, V., Jordan, J., and Rosengren, E. (2003). Using loss data to quantify operational risk. Working Paper, Federal Reserve Bank of Boston.

Gannon, T., Madnick, S., Moulton, A., Sabbouh, M., Siegel, M., and Zhu, H. (2009). Framework for the analysis of the adaptability, extensibility, and scalability of semantic information integration and the context mediation approach. In Proceedings of the 42nd Hawaii International Conference on System Sciences, January 5–8, Big Island, HI. Institute of Electrical and Electronics Engineers, New York.

Gustafsson, J., Nielsen, J. P., Pritchard, P., and Roberts, D. (2006). Quantifying operational risk guided by kernel smoothing and continuous credibility: a practitioner's view. The Journal of Operational Risk 1(1), 43–55.

Lam, J. (2003). Enterprise Risk Management: From Incentives to Controls. John Wiley & Sons.

Michelson, B. M. (2006). Event-Driven Architecture Overview: Event-Driven SOA Is Just Part of the EDA Story. Report, Patricia Seybold Group (February).

National Commission on Terrorist Attacks (2004). The 9/11 Commission Report: Executive Summary (July).

Pitney Bowes Business Insight (2009). Managing your data assets. White Paper, IDC Government Insights.

Senior Supervisors Group (2009). Risk management lessons from the global banking crisis of 2008. Report, Senior Supervisors Group (October).

Vanson Bourne (2009). Overtaken by events? The quest for operational responsiveness. Study commissioned by Progress Software (September).

