
Article from: Risk Management, August 2008, Issue No. 13

Risk Identification: A Critical First Step in Enterprise Risk Management

Sim Segal

Sim Segal, FSA, CERA, MAAA, is US Leader of ERM Services at Watson Wyatt in New York. He can be reached at sim.segal@watsonwyatt.com.

Enterprise Risk Management (ERM) is often defined as a process to identify, measure, manage and disclose all key risks to increase value to stakeholders. ERM is still an emerging concept, and those companies adopting it are in varying stages of implementation. The first phase in the ERM process cycle, after developing the initial ERM framework and plan, is risk identification.

Risk identification typically involves three types of activities:

- Defining and categorizing risks;
- Conducting internal qualitative surveys on the frequency and severity of each risk; and
- Scanning the external environment for emerging risks.

Since risk identification is the first phase in the ERM cycle, some assume that by now the approach must have matured, and that common practice is essentially best practice. However, through our research and client work, we have found that common practice in risk identification is suboptimal in several aspects, and produces misleading information not only in risk identification, but also in all downstream ERM phases: risk quantification, risk management and risk disclosure. Relying upon this flawed information puts management at risk of:

- Focusing on the wrong priorities;
- Making poor decisions; and
- Producing improper risk disclosures.

To have a successful ERM risk identification phase and avoid these problems, companies must:

1. Define risks by source
2. Categorize risks with consistent granularity
3. Identify risks prospectively
4. Gather data appropriately
5. Define frequency-severity clearly

Defining Risks by Source

Risks are often defined by their outcome rather than their source. For example, reputation risk is a risk commonly found on a company's key risk list. However, this is not a source of risk, but rather an outcome of other risks. There are several risks, such as poor product quality, poor service, fraud, etc., that might rise to a level whereby reputation is negatively impacted.

Another example is ratings downgrade. Again, this is not a source of risk, but an outcome that can result from several different risk sources, e.g., strategy risk, execution risk, etc. A poor strategy, for example, might result in a rating agency downgrading the company.

This is a common practice, yet defining risks by their outcome, rather than their source, results in several suboptimal ERM steps. It degrades the qualitative survey results; survey participants have an inconsistent understanding of the risk they are assessing, since each person may be considering a different risk source and scenario triggering the event. This also makes risk quantification more challenging and uneven; risk experts have difficulty constructing specific risk scenarios for quantification, since the risk is defined so ambiguously. Finally, management struggles to identify and evaluate mitigation alternatives, since risks are generally mitigated at the source rather than at the outcome. For example, it is easier to consider mitigation of potential sources of reputation risk (e.g., poor product quality, poor service, internal fraud) than it is to mitigate an amorphous concept like reputation damage in the abstract.

To avoid these difficulties, management must define risks by their source. In our prior example of reputation risk, we listed three examples of risk sources that might involve reputation damage in an extreme scenario. Chart A shows these risks along with a partial illustration of the relationship of risk sources to intermediate impact(s) and to outcomes. In the chart, the arrows show how each risk can trigger media coverage, resulting in reputation damage, followed by financial repercussions.

With risks defined by their source, the ERM steps flow well. There is data integrity in the qualitative survey; since each risk is clearly defined by its source, survey participants have a consistent understanding of each risk, resulting in a coherent assessment. This also makes risk quantification easier. Since risks are defined so clearly, each with its own specific source, risk experts can more easily develop risk scenarios, following logical downstream impacts from each originating source. Finally, management can clearly identify and evaluate both pre-event and post-event mitigation alternatives, since both the source of risk and the downstream events are apparent.
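
As a purely illustrative sketch (not part of the original article or Chart A), the source-to-outcome relationships described above can be held in a simple data structure so that risks are always tracked by source, with reputation damage appearing only as a downstream outcome. The risk names and chains below are assumptions drawn from the reputation risk example.

    # Illustrative sketch only: risks keyed by source, each mapped to the
    # downstream chain it can trigger (source -> intermediate impact -> outcome).
    REPUTATION_RELATED_SOURCES = {
        "poor product quality": ["media coverage", "reputation damage", "financial repercussions"],
        "poor service": ["media coverage", "reputation damage", "financial repercussions"],
        "internal fraud": ["media coverage", "reputation damage", "financial repercussions"],
    }

    def downstream_chain(source):
        """Return the chain of downstream impacts for a given risk source."""
        return REPUTATION_RELATED_SOURCES.get(source, [])

    # Mitigation is evaluated at the source (e.g., "internal fraud"), while
    # "reputation damage" remains an outcome rather than a risk in itself.
    print(downstream_chain("internal fraud"))
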

Categorizing Risks with Consistent Granularity

Risks are often categorized with inconsistent levels of granularity: either at too high a level or too low a level.

It is common to find a risk list that includes some risks defined at too high a level of abstraction; the "risk" is really a category of risks that should be refined into a set of smaller, individual component risks. For example, "talent management," a type of human resources risk, should be broken down into its individual risks, such as ability to recruit/retain, succession planning, etc.

Defining risks at too high a level results in suboptimal internal qualitative surveys. It leads to uneven scoring by survey participants, since the larger category obscures its several component risks. However, when risks are consistently defined at the individual risk level, the assessment is more meaningful, since participants can consider and assess each risk individually.

It is even more common to find risks defined at too low a level of abstraction; the "risk" is really only one of a larger category of risks. For example, "lack of innovative products" is only one specific risk in a larger category. This should be elevated to a higher level of abstraction, and included in the category of "strategy execution."

Defining risks at too low a level threatens the environmental scanning activity. It can cause a failure to identify all related types of risk in the larger category. In our example, management may not have considered other risks to strategy execution, for example, inability to achieve planned growth, failure to expand into key new markets, etc.

A partial example of how to categorize risks at a consistent level of granularity is shown in Chart B for human resources risks.
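
A minimal sketch of what a consistently granular risk list might look like in practice follows. The categories and individual risks are taken from the examples above; the structure itself is an illustrative assumption, not the actual contents of Chart B.

    # Illustrative sketch only: a category -> individual-risk hierarchy kept
    # at a consistent level of granularity.
    RISK_HIERARCHY = {
        "human resources": [
            "ability to recruit/retain",
            "succession planning",
        ],
        "strategy execution": [
            "lack of innovative products",
            "inability to achieve planned growth",
            "failure to expand into key new markets",
        ],
    }

    # Surveys score the individual risks (the leaves), never the categories,
    # so every participant assesses the same, clearly bounded risk.
    survey_items = [risk for risks in RISK_HIERARCHY.values() for risk in risks]
    print(survey_items)
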
Identifying Risks Prospectively

Risks are often identified retrospectively. Some risks are on the key risk list merely because they occurred recently and management wants to see them there. This is called "fighting the last battle" syndrome. In addition, these risks are often defined at too low a level of granularity, since they are descriptive of the recent specific event.

Including these on the risk list, in this way, can skew the qualitative survey results. These risks are often over-weighted; participants are more sensitized to them and are not fully aware of the mitigation that has likely been put in place following the recent occurrence. Retrospectively defining risks also negatively impacts environmental scanning; it is a distraction from identifying the next risk event (as opposed to the last risk event).

Identifying risks prospectively can help avoid these difficulties. It reduces some of the bias in the risk assessment, by not confusing recent experience with future likelihood and impact. It also focuses management away from the past, and concentrates attention on what might impact the company's ability to deliver on its strategic objectives going forward. This enables a robust, untainted examination of where the company is, where it is headed and what could get in the way.

Gathering Data Appropriately

In the risk identification phase, qualitative survey participants are usually asked to assess the frequency and severity of a large list of risks. However, in most cases, there is also an attempt to gather a large amount of additional data at this stage: key risk indicators; exposure metrics; historical frequency and severity; current mitigation in place; planned mitigation; anecdotal experience at competitors, etc.

However, it is counter-productive to gather all this data during the risk identification phase. Too much data is gathered. Most of this data is only needed for the key risks, rather than the long list of risks provided to survey participants. The primary purpose of the risk identification phase is to prioritize, narrowing down a list of (potentially hundreds of) risks to those key risks that will go to the next ERM phases: risk quantification, risk management and risk disclosure. All that is needed for prioritization is the frequency-severity scoring.

In addition, the data is collected too early. The data that is needed, the data for the key risks, is not needed until the risk quantification phase, because it is used to develop and quantify risk scenarios. Since the data is collected too early, it is often deposited in a database where it languishes and, as time passes, the quality decreases.

Finally, the burden of the sheer volume of data requested results in survey fatigue. This overwhelms survey participants and decreases the quality of the critical input: the frequency and severity assessment.

These difficulties can be resolved by gathering the appropriate data at the proper stage in the ERM process. In the risk identification phase, the qualitative survey should focus participants primarily on assessing frequency and severity. At the risk quantification phase, data should be gathered for developing and quantifying risk scenarios for the key risks. This avoids gathering too much data, since the larger data request is not unnecessarily performed for those risks that are not key risks. In addition, data is more current, since it is gathered closer to the time it is needed. Finally, survey participants can do a better job, since they are not overwhelmed by excessive volume.
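
To make the prioritization step described above concrete, here is a minimal sketch of narrowing a long risk list down to the key risks using nothing but the frequency-severity scores. The risk names, scores, the frequency-times-severity ranking rule and the cutoff are all assumptions for illustration; the article itself prescribes only that prioritization rely on the frequency-severity scoring.

    # Illustrative sketch only: prioritize using the qualitative
    # frequency and severity scores (1-5 scales) and nothing else.
    survey_scores = {
        # risk name: (frequency score, severity score)
        "internal fraud": (2, 5),
        "succession planning": (3, 3),
        "failure to expand into key new markets": (2, 4),
        "ability to recruit/retain": (4, 2),
    }

    def key_risks(scores, top_n=2):
        """Rank by a simple frequency x severity product and keep the top N."""
        ranked = sorted(scores, key=lambda r: scores[r][0] * scores[r][1], reverse=True)
        return ranked[:top_n]

    # Only the key risks move on to risk quantification, where the larger
    # data request (exposure metrics, mitigation in place, etc.) is made.
    print(key_risks(survey_scores))
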

Defining Frequency-Severity Clearly

When survey participants are asked to qualitatively assess a list of potential risks, the most common approach is to ask them to score each risk on both a frequency and severity scale. Guidance is usually provided in terms of scoring criteria. A simplified example is shown in Chart C.

Chart C: Illustrative Scoring Criteria

  Frequency         Severity
  5 = Very High     5 = Impact of $100M+
  4 = High          4 = Impact of $50M - $100M
  3 = Moderate      3 = Impact of $25M - $50M
  2 = Low           2 = Impact of $10M - $25M
  1 = Very Low      1 = Impact of less than $10M

However, this approach often results in disparate impressions among survey participants as to how to score both frequency and severity, negatively impacting survey results.

To score frequency, participants must consider a specific risk scenario. Is it an end-of-the-world scenario? Is it a most likely scenario? The former would solicit a lower frequency score than the latter. However, such guidance is rarely provided. As a result, each participant tends to imagine a different scenario, and collectively they are essentially not scoring frequency for the same risk event.

To score severity, participants must understand the metric impacted. Is it an earnings hit? Is it a one-time or cumulative hit (and for how many years)? Is it a capital hit? Is it a hit to market capitalization? While guidance usually includes magnitude, as in our example, sufficient detail regarding the impact is often omitted. Again, participants have an inconsistent understanding and are not assessing on the same basis.

To resolve this, it is important to more clearly define frequency and severity prior to the qualitative risk assessment.

To define frequency clearly, participants must be given guidance as to the type of risk scenario to consider. One example of how to do this is to focus participants on a particular type of risk event, as shown in Chart D. A range of data points is shown in the chart, each representing a potential risk event. The ellipse illustrates that survey participants should consider a credible worst case: not an (extremely unlikely) end-of-the-world event and not an event that occurs with moderate frequency.

To more clearly define severity, more specificity should be provided on the metric(s) intended. A leading practice is to express the scoring criteria in terms of a single metric that can capture all potential impacts: impacts to the income statement, balance sheet, required capital and cost of capital. The only metric that captures all of these impacts appropriately is enterprise value, the present value of projected cash and capital flows into the future, where the projection is consistent with the strategic plan. This is not market capitalization. Rather, it is the value an investor should pay today if the company were to perfectly execute its strategic plan and everything were to go precisely as expected.
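
Since enterprise value is defined here as the present value of projected cash and capital flows consistent with the strategic plan, a minimal sketch of that calculation, and of expressing a risk event as a change in it, may help. The projected cash flows, discount rate and shocked projection below are illustrative assumptions only, not figures from the article.

    # Illustrative sketch only: enterprise value as the present value of a
    # projection of distributable cash flows, and a risk event expressed as
    # a relative change in that value.
    def enterprise_value(cash_flows, discount_rate):
        """Present value of a projection of distributable cash flows."""
        return sum(cf / (1 + discount_rate) ** t
                   for t, cf in enumerate(cash_flows, start=1))

    baseline = [120.0, 130.0, 140.0, 150.0, 160.0]   # strategic-plan projection ($M), assumed
    shocked = [120.0, 100.0, 110.0, 120.0, 130.0]    # same plan under an assumed risk scenario

    ev_base = enterprise_value(baseline, 0.10)
    ev_shock = enterprise_value(shocked, 0.10)

    # Severity expressed as the relative impact on enterprise value, the
    # single metric recommended above for scoring.
    print(f"Impact on enterprise value: {(ev_shock - ev_base) / ev_base:.1%}")
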
The enterprise value metric is initially less tangible to some, since it is a complex calculation. However, it is intuitive: the value of the firm is a concept everyone understands. In addition, simple illustrations of selected risk events and their relative impact on enterprise value provide survey participants with a general feel for this metric that is sufficient for qualitative assessment purposes.

Though risk identification is the first step in the ERM process cycle, appears to be the simplest, and is the most traveled, common practices are fraught with issues that can damage an ERM program. To avoid this, management must: define risks by source; categorize risks with consistent granularity; identify risks prospectively; gather data appropriately; and define frequency-severity clearly. Companies adopting these better practices have found that the risk identification phase is quicker, easier and more widely understood, and produces higher quality results, paying dividends as well in downstream ERM phases. Those continuing with common practices may find themselves more at risk of focusing on the wrong priorities, making poor mitigation decisions, and ultimately producing improper risk disclosures.
