
"Take back your security infrastructure"

Anton Chuvakin, Ph.D., GCIA, GCIH, GCFA

WRITTEN: 2004

DISCLAIMER:
Security is a rapidly changing field of human endeavor. The threats we face literally change
every day; moreover, many security professionals consider the rate of change to be
accelerating. On top of that, to stay in touch with such an ever-changing reality, one has to
evolve with the space as well. Thus, even though I hope that this document will be useful to
my readers, please keep in mind that it was possibly written years ago. Also, keep in mind
that some of the URLs might have gone 404; please Google around.

This paper discusses the question of optimizing security decisions in an organization, based on the
information provided by the technical security infrastructure.

Imagine you work for one of those companies where information security is taken seriously, senior
management support is taken for granted, the appropriate IT defenses are deployed, and users are educated
on the security policy (a security utopia, no less). Firewalls are humming along, intrusion detection systems
are installed, and an incident response team is trained and ready for action. This goes a long way towards
creating a more secure enterprise computing environment.

In this context, let's look at it through the prevention-detection-response model. Prevention is most likely
handled by a combination of the organization's firewalls, intrusion prevention devices, and vulnerability
scanning, as well as hardened hosts and applications. Intrusion detection systems seek to provide
detection, while a team of experts armed with forensic and other investigative tools provides response.
Admittedly, the above picture is a grand simplification, but even so, the separation between prevention,
detection and response is artificial to a large degree. Firewalls greatly help in detection by providing logs of
allowed and denied connections, an IDS can be configured to respond to incidents automatically, and
security professionals are at the core of all three components.

The above complex interplay between prevention, detection and response is further complicated by the
continuous decision-making process: 'what to respond to?', 'how to prevent catastrophic loss?', 'do I care
about this thing I just detected?', etc. Such decisions are based on the information provided by the security
infrastructure components. Paradoxically, the more technical security defenses one deploys, the more
firewalls are blocking messages, and the more detection systems are sending alerts, the harder it is to make
the right decisions about how to react. The volume and obscurity of the information emitted by the security
infrastructure contribute to such difficulties. And at some point, the question of trying to predict what fire
will flare up next (or, 'being proactive' in marketspeak) will come up.

What are the common options for optimizing the security decisions made by the company's security decision-
makers? The security information flow needs to be converted into a decision. Attempts to create a fully
automated solution for making such a decision, some even based on artificial intelligence, have not yet
reached a commercially viable stage. The problem is thus to create a system that reduces the information
deluge sufficiently and then provides some guidance to the system's human operators in order to make the
right security decision. Note that this does not preclude a certain degree of automation.

In addition to facilitating decision-making in case of a security event (defined as a single communication
instance from a security device) or an incident (defined as a confirmed attempted intrusion or other attack or
discovered abuse), reducing the information flow is required for implementing security benchmarks and
metrics. Assessing the effectiveness of deployed security controls is an extremely valuable part of an
organization's security program. Such an assessment can be used to calculate a security Return On
Investment (ROI or ROSI) and to enable other methods for marrying security and business needs.
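As a rough illustration of what such a calculation can look like, here is a minimal ROSI sketch; the loss-expectancy figures and the mitigation ratio below are invented placeholders, not data from any real assessment.

```python
# Minimal ROSI sketch; all figures are hypothetical placeholders.
def rosi(ale_before, mitigation_ratio, annual_cost):
    """Return on security investment: (risk reduction - cost) / cost."""
    risk_reduction = ale_before * mitigation_ratio
    return (risk_reduction - annual_cost) / annual_cost

# Example: $500k annual loss expectancy, controls expected to cut 60% of it,
# costing $100k per year -> ROSI of 2.0 (i.e., 200%).
print(rosi(ale_before=500_000, mitigation_ratio=0.6, annual_cost=100_000))
```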
The commonly encountered scenarios can be loosely categorized as follows:
• install-and-forget: don't look at the information, avoid decisions (unfortunately, all too common);
• manual data reduction: reliance on a particular person to extract and analyze the meaningful audit
records;
• in-house automation: tools such as scripts and utilities aimed at processing the information flow (a
minimal sketch of such a utility follows this list).
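The sketch below illustrates the third option; it assumes iptables-style 'DENY' messages in a local log file, and both the file path and the message format are assumptions made purely for illustration.

```python
# Minimal in-house log-reduction sketch: count firewall denies per source IP.
# The log path and the 'DENY ... SRC=' message format are assumptions.
import re
from collections import Counter

deny_pattern = re.compile(r"DENY.*SRC=(\d+\.\d+\.\d+\.\d+)")
counts = Counter()

with open("/var/log/firewall.log") as log:
    for line in log:
        match = deny_pattern.search(line)
        if match:
            counts[match.group(1)] += 1

# Report the ten noisiest sources for a human to review.
for src, hits in counts.most_common(10):
    print(f"{src}: {hits} denied connections")
```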

Let us briefly look at the advantages and disadvantages of the above methods.

Is there a chance that the first approach - deploying and leaving the security infrastructure unsupervised -
has a business justification? Indeed, some people do drive their cars without mandatory car insurance, but
companies are unlikely to be moved by the same reasons that motivate reckless drivers. Most CSI
members have probably heard that 'having a firewall does not provide 100% security' many times. In fact, 0-
day exploits (i.e., previously unpublished and unknown to the software vendor) and new vulnerabilities are,
overall, less of a threat than the company's own employees. Technology solutions are rarely effective against
social and human problems, such as malicious insiders or those duped by social engineering attacks.
Advanced firewalls can probably be made to mitigate the threat from new exploits, but not from firewall
administrators' mistakes or deliberate tampering from inside the protected perimeter. In addition, a
total lack of feedback on security technology performance will prevent a company from taking a proactive
stance against new threats and adjusting its defenses against the flood of attacks hitting its bastions.
Security metrics will also be largely non-existent under such circumstances.

Does relying on human experts to understand your security information and to provide effective response
guidelines based on the gathered evidence constitute a viable alternative to doing nothing? Specifically,
two approaches to the problem are common in this scenario. First, a security professional can study the
evidence AFTER the incident. Careful examination of the evidence collected by various security devices will
certainly shed light on the incident, will likely help to figure out what happened, and will allow lessons to be
drawn from it to prevent a recurrence. However, in case extensive damage is done to the organization, it
is already too late: preventing future incidents of the same kind will not return the stolen intellectual
property or win back disappointed business partners. Expert response after the fact has a good chance of
being 'too little, too late' in the age of fast automated attack tools and worms. The second option is to review
the accumulated audit trail data periodically. A simple calculation is in order. A single border router will
produce hundreds of log messages per second on a busy network, and so will the firewall. Adding host
messages from even several servers will increase the flow by hundreds more per second. Now, if one is to
scale this to an average large company's network infrastructure, the information flow will increase
hundredfold. No human expert or team will be able to review, let alone analyze, the incoming flood of signals.
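To make this calculation concrete, here is a back-of-the-envelope sketch treating 'hundreds per second' as a round 200 messages per device; the device mix and rates are illustrative round numbers, not measurements.

```python
# Back-of-the-envelope log volume; all rates are illustrative round numbers.
msgs_per_sec = {"border router": 200, "firewall": 200, "several servers": 300}
seconds_per_day = 86_400

total_rate = sum(msgs_per_sec.values())     # ~700 messages/second
daily_total = total_rate * seconds_per_day  # ~60.5 million messages/day
print(f"{daily_total:,} messages/day from a handful of devices")

# Scaling a hundredfold for a large enterprise: ~6 billion messages/day.
print(f"{daily_total * 100:,} messages/day at enterprise scale")
```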

But what if a security professional chooses to automate the task by writing a script or a program to alert him
or her to significant events? Such a program may help with data collection (a centralized syslog server or a
database) and alerting (email, pager, voice mail). However, a series of important questions arises. Collected
data will greatly help with an incident investigation, but what about the timeliness of the response?
Separating meaningful events from mere chaff is not a trivial task, especially in a large multi-vendor
environment. Moreover, even devices sold by a single vendor might have varying event prioritization
schemes and protocols. Thus, designing the right data reduction and analysis scheme that optimizes the
security decision process might require significant time and capital investment and still not reach the set
goals due to a lack of specific analysis expertise.
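One small facet of that multi-vendor analysis problem is severity normalization; the sketch below maps invented vendor-specific priority labels onto a single common scale. The vendor names and labels are hypothetical, not drawn from any real product.

```python
# Sketch: normalize vendor-specific event priorities onto one 1-10 scale.
# Vendor names and priority labels here are hypothetical examples.
severity_map = {
    ("vendor_a_ids", "high"): 9,
    ("vendor_a_ids", "medium"): 5,
    ("vendor_b_firewall", "crit"): 10,
    ("vendor_b_firewall", "warn"): 4,
}

def normalize(vendor, label, default=3):
    """Map a (vendor, label) pair to a common severity; unknowns get a low default."""
    return severity_map.get((vendor, label.lower()), default)

print(normalize("vendor_a_ids", "HIGH"))     # 9
print(normalize("vendor_c_router", "info"))  # 3 (unknown device, default)
```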

In addition, alerting on raw event data (such as 'if you see a specific IDS signature, send an email') will
quickly turn into the 'boy who cried wolf' story, with pagers screaming for attention and not getting it. In light
of the above problems with prioritization, simply alerting on 'high-priority' events is not an effective solution.
Indeed, an IDS can be tuned to produce fewer alerts, but to tune the system effectively one needs
access to the full feedback provided by the security infrastructure and not just to raw IDS logs. For
example, outside and inside firewall logs are very useful for tuning an IDS deployed in the DMZ.
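A common first step toward quieting the screaming pagers is suppressing duplicate alerts within a time window; the sketch below shows one way to do it. The one-hour window and the (signature, source) alert key are arbitrary choices for illustration, not a recommendation.

```python
# Sketch: suppress duplicate alerts inside a time window to reduce pager noise.
# The 1-hour window and the (signature, source) alert key are arbitrary choices.
import time

SUPPRESS_WINDOW = 3600  # seconds
last_alerted = {}       # (signature, source) -> time of last alert

def should_alert(signature, source, now=None):
    """Alert only if this (signature, source) pair has not fired recently."""
    now = time.time() if now is None else now
    key = (signature, source)
    if now - last_alerted.get(key, float("-inf")) < SUPPRESS_WINDOW:
        return False  # duplicate within the window - stay quiet
    last_alerted[key] = now
    return True

print(should_alert("WEB-IIS cmd.exe access", "10.0.0.5", now=1000))  # True
print(should_alert("WEB-IIS cmd.exe access", "10.0.0.5", now=1500))  # False
```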

Overall, it appears that simply investing in more and more security devices does not create more security.
One needs to keep in close touch with the deployed devices, and the only way to do so is by using special-
purpose automated tools to analyze all the information they produce, correlate the results, and draw
meaningful conclusions aimed at optimizing the effectiveness of the IT defenses. While having internal staff
write code to help accumulate and map the data might be acceptable as a stopgap in small
environments, the maintenance, scalability and continued justification of such systems likely yield a very low
ROI. In fact, this is what led to the birth of Security Information Management (SIM) products that have, as
their primary focus, the collection and correlation of this data in order to optimize security decision-making,
as well as to act automatically in select circumstances.
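To give a flavor of what such correlation can look like, the sketch below escalates an IDS alert only when the firewall log confirms that the offending connection was actually allowed through. The event records and field names are invented for illustration and do not reflect any particular SIM product.

```python
# Sketch: cross-device correlation - escalate an IDS alert only when the
# firewall confirms the connection was allowed. Field names are invented.
firewall_allowed = {("203.0.113.7", "10.0.0.5", 80)}  # (src, dst, port) accepts

ids_alerts = [
    {"src": "203.0.113.7", "dst": "10.0.0.5", "port": 80, "sig": "web attack"},
    {"src": "198.51.100.9", "dst": "10.0.0.5", "port": 80, "sig": "web attack"},
]

for alert in ids_alerts:
    key = (alert["src"], alert["dst"], alert["port"])
    if key in firewall_allowed:
        print("ESCALATE:", alert["sig"], "from", alert["src"])  # reached target
    else:
        print("suppress:", alert["sig"], "blocked at the firewall")
```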

ABOUT THE AUTHOR:

This is an updated author bio, added to the paper at the time of reposting in 2009.

Dr. Anton Chuvakin (http://www.chuvakin.org) is a recognized security expert in the field of
log management and PCI DSS compliance. He is the author of the books "Security Warrior"
and "PCI Compliance" and a contributor to "Know Your Enemy II", "Information Security
Management Handbook" and others. Anton has published dozens of papers on log
management, correlation, data analysis, PCI DSS, and security management (see the list at
www.info-secure.org). His blog http://www.securitywarrior.org is one of the most popular
in the industry.

In addition, Anton teaches classes and presents at many security conferences across the
world; he recently addressed audiences in the United States, the UK, Singapore, Spain, Russia
and other countries. He works on emerging security standards and serves on the
advisory boards of several security start-ups.

Currently, Anton is developing his security consulting practice, focusing on logging and
PCI DSS compliance for security vendors and Fortune 500 organizations. Dr. Anton
Chuvakin was formerly a Director of PCI Compliance Solutions at Qualys. Previously,
Anton worked at LogLogic as a Chief Logging Evangelist, tasked with educating the world
about the importance of logging for security, compliance and operations. Before LogLogic,
Anton was employed by a security vendor in a strategic product management role. Anton
earned his Ph.D. degree from Stony Brook University.
