

Basic Network Configuration

This section covers basic network configuration, setup, and testing. Also covered are basic
concepts and operations, including the difference between LAN and WAN networks and how IP
addressing is used. In a networked environment, such as a company, there are typically many computers
connected together using a router or a switch (for more information, see router or switch in the
definitions section). In larger companies, there may be several different routers distributed in buildings
and plant locations.
A router allows any LAN-side computer to communicate with computers and devices outside the
LAN (local area network). Routers send data packets from one place to another place on a network.
Routers use network addresses to route packets to the correct destination. For example, in a TCP/IP
network, the IP (Internet Protocol) address of the network interface is used to direct packets to their destinations.
Because routers help computers inside the LAN “talk” with computers outside of the LAN, the security
of a company’s LAN may be compromised by open ports in the router. Security measures may
have been instituted to compensate for these vulnerabilities. Consult your network administrator to learn
about the security measures taken to protect your network. VPN, or virtual private network, is one such
security measure to protect the intelligence of the LAN. A computer outside the LAN must have an
address or key known by the VPN to allow access to the LAN. Many companies use a VPN to connect
two different LANs, thus allowing the transfer of data between the two networks.

LAN (local area network) vs WAN (wide area network)

Local Area Network
Simply put, a LAN is a computer network that connects a relatively small area (a single building
or group of buildings). Most LANs connect workstations and computers to each other. Each computer
(also known as a “node”) has its own processing unit and executes its own programs; however, it can also
access data and devices anywhere on the LAN. This means that many users can access and share the same
information and devices. A good example of a LAN device is a network printer. Most companies cannot
afford the budgetary or hardware expense of providing printers for each of their users. Therefore, one
printer (i.e., device) is placed on the LAN where every user can access it.
The LAN uses IP addresses to route data to different destinations on the network. An IP address is
a 32-bit numeric address written as four numbers separated by periods (for example, 192.168.1.10).
Note: For more information on IP addresses, see your local network administrator.
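The dotted-quad form is just a human-readable rendering of the underlying 32 bits. A minimal Python sketch of the mapping, using only the standard library (the address value is only an example):

```python
import socket
import struct

def ip_to_int(dotted: str) -> int:
    """Convert a dotted-quad IPv4 address to its 32-bit integer form."""
    return struct.unpack("!I", socket.inet_aton(dotted))[0]

def int_to_ip(value: int) -> str:
    """Convert a 32-bit integer back to dotted-quad notation."""
    return socket.inet_ntoa(struct.pack("!I", value))

# Each of the four numbers is one byte: 192.168.1.10 -> 0xC0A8010A
print(hex(ip_to_int("192.168.1.10")))  # 0xc0a8010a
```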

Figure 1.1 Local Area Network Diagram
Wide Area Network
A wide area network connects two or more LANs and can span a relatively large geographical
area. For example, Telex Headquarters in Burnsville, MN is connected to several of its branch offices in
Nebraska and Arkansas over the wide area network. The largest WAN in existence is the Internet.

Figure 1.2 Wide Area Network Diagram

Configuring Bastion Host

A bastion host is a system identified by the firewall administrator as a critical strong point in the
network's security. Typically, the bastion host serves as a platform for an application-level or circuit-level
gateway. Common characteristics of a bastion host include the following:

● The bastion host hardware platform executes a secure version of its operating system, making it a
trusted system.
● Only the services that the network administrator considers essential are installed on the bastion host.
These include proxy applications such as Telnet, DNS, FTP, SMTP, and user authentication.
● The bastion host may require additional authentication before a user is allowed access to the proxy
services. In addition, each proxy service may require its own authentication before granting user access.
● Each proxy is configured to support only a subset of the standard application's command set.
● Each proxy is configured to allow access only to specific host systems. This means that the limited
command/feature set may be applied only to a subset of systems on the protected network.
● Each proxy maintains detailed audit information by logging all traffic, each connection, and the
duration of each connection. The audit log is an essential tool for discovering and terminating intruder attacks.
● Each proxy module is a very small software package specifically designed for network security.
Because of its relative simplicity, it is easier to check such modules for security flaws. For example, a
typical UNIX mail application may contain over 20,000 lines of code, while a mail proxy may contain
fewer than 1000.
● Each proxy is independent of other proxies on the bastion host. If there is a problem with the operation
of any proxy, or if a future vulnerability is discovered, it can be uninstalled without affecting the
operation of the other proxy applications. Also, if the user population requires support for a new service,
the network administrator can easily install the required proxy on the Bastion host.
● A proxy generally performs no disk access other than to read its initial configuration file. This makes it
difficult for an intruder to install Trojan horse sniffers or other dangerous files on the bastion host.
● Each proxy runs as a nonprivileged user in a private and secured directory on the bastion host.
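The restricted command subset and connection logging described above can be illustrated in miniature. The following is a hypothetical sketch, not code from any real proxy; the allowed verbs are standard SMTP commands chosen for illustration:

```python
# Hypothetical sketch of a proxy that accepts only a subset of SMTP commands
# and logs every request. The ALLOWED set and logging format are illustrative
# assumptions, not taken from any real proxy implementation.
import logging

ALLOWED = {"HELO", "EHLO", "MAIL", "RCPT", "DATA", "QUIT"}  # permitted subset

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("smtp-proxy")

def is_allowed(line: str) -> bool:
    """Return True if the command verb is in the proxy's permitted subset."""
    verb = line.strip().split(" ", 1)[0].upper()
    return verb in ALLOWED

def handle(line: str) -> str:
    """Log every command and reject anything outside the subset."""
    if is_allowed(line):
        log.info("permitted: %s", line.strip())
        return "250 OK"
    log.warning("denied: %s", line.strip())
    return "502 Command not implemented"

print(handle("MAIL FROM:<user@example.com>"))  # permitted
print(handle("VRFY root"))                     # denied: not in the subset
```

Because the filter is only a few lines, it is easy to audit, which mirrors the point above about proxy modules being small enough to check for security flaws.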

Firewall Configurations:
In addition to the use of a simple configuration consisting of a single system, such as a single packet-filtering
router or a single gateway, more complex configurations are possible and indeed more common.
The figure below illustrates three common firewall configurations. We examine each of these in turn.

Figure 1.3 Firewall Types

Figure 1.4 Firewall Configurations

In the screened host firewall, single-homed bastion configuration, the firewall consists of two systems:
a packet-filtering router and a bastion host. Typically, the router is configured so that
1. For traffic from the Internet, only IP packets destined for the bastion host are allowed in.
2. For traffic from the internal network, only IP packets from the bastion host are allowed out.
The bastion host performs authentication and proxy functions. This configuration has greater security
than simply a packet-filtering router or an application-level gateway alone, for two reasons. First, this
configuration implements both packet-level and application-level filtering, allowing for considerable
flexibility in defining security policy. Second, an intruder must generally penetrate two separate systems
before the security of the internal network is compromised.
This configuration also affords flexibility in providing direct Internet access. For example, the
internal network may include a public information server, such as a Web server, for which a high level of
security is not required. In that case, the router can be configured to allow direct traffic between the
information server and the Internet.

In the single-homed configuration just described, if the packet-filtering router is completely
compromised, traffic could flow directly through the router between the Internet and other hosts on the
private network. The screened host firewall, dual-homed bastion configuration physically prevents
such a security breach. The advantages of dual layers of security that were present in the previous
configuration are present here as well. Again, an information server or other hosts can be allowed direct
communication with the router if this is in accord with the security policy.
The screened subnet firewall configuration is the most secure of those we have considered. In
this configuration, two packet-filtering routers are used, one between the bastion host and the Internet and
one between the bastion host and the internal network. This configuration creates
an isolated subnetwork, which may consist of simply the bastion host but may also include one or more
information servers and modems for dial-in capability. Typically, both the Internet and the internal
network have access to hosts on the screened subnet, but traffic across the screened subnet is blocked.
This configuration offers several advantages:
● There are now three levels of defense to thwart intruders.
● The outside router advertises only the existence of the screened subnet to the Internet;
therefore, the internal network is invisible to the Internet.
● Similarly, the inside router advertises only the existence of the screened subnet to the internal network;
therefore, the systems on the inside network cannot construct direct routes to the Internet.
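The packet-filtering rules used in these configurations (for example, the two router rules of the screened host, single-homed bastion setup) can be sketched as filter functions. The following is a toy illustration; the addresses are invented RFC 1918 / TEST-NET values, not taken from the figures:

```python
# Illustrative sketch of the router's filtering logic in a screened host
# configuration: inbound packets are permitted only if destined for the
# bastion host; outbound packets only if they originate from it.
BASTION = "192.168.10.2"  # example address for the bastion host

def filter_inbound(src: str, dst: str) -> str:
    """Permit inbound traffic only if it is addressed to the bastion host."""
    return "permit" if dst == BASTION else "deny"

def filter_outbound(src: str, dst: str) -> str:
    """Permit outbound traffic only if it originates from the bastion host."""
    return "permit" if src == BASTION else "deny"

print(filter_inbound("203.0.113.9", BASTION))          # permit
print(filter_inbound("203.0.113.9", "192.168.10.50"))  # deny: bypasses bastion
```

Forcing all traffic through the bastion host in this way is what makes its application-level proxies effective as the second layer of defense.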

Troubleshooting Cisco IOS Firewall Configurations
• In order to reverse (remove) an access list, put a “no” in front of the access-group command in interface
configuration mode:
int <interface>
no ip access-group # in|out

• If too much traffic is denied, study the logic of your list or try to define an additional broader list, and
then apply it instead.
For example:
access-list # permit tcp any any
access-list # permit udp any any
access-list # permit icmp any any
int <interface>
ip access-group # in|out

• The show ip access-lists command shows which access lists are applied and what traffic is denied by
them. Compare the denied-packet count for the relevant source and destination IP addresses before and
after the failed operation; if the access list is blocking the traffic, this count increases.
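This before-and-after comparison can be automated by summing the match counters from two captures of the command output. A sketch, assuming the usual “(N matches)” suffix in the output (verify the exact format on your platform and IOS version):

```python
# Sketch: compare deny counters from two captures of "show ip access-lists".
# The "(N matches)" suffix follows typical IOS output; treat the exact
# format as an assumption and adjust the pattern to your platform.
import re

MATCHES = re.compile(r"\((\d+) match(?:es)?\)")

def total_matches(output: str) -> int:
    """Sum all per-entry match counters in a show ip access-lists capture."""
    return sum(int(m) for m in MATCHES.findall(output))

before = "10 deny tcp any any eq telnet (12 matches)"
after = "10 deny tcp any any eq telnet (15 matches)"

# A rising deny counter between the two captures means the access list
# is blocking the traffic under test.
print(total_matches(after) - total_matches(before))  # 3
```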

• If the router is not heavily loaded, debugging can be done at a packet level on the extended or ip inspect
access list. If the router is heavily loaded, traffic is slowed through the router. Use discretion with
debugging commands.
Temporarily add the no ip route-cache command to the interface:
int <interface>
no ip route-cache
Then, in enable (but not config) mode:
term mon
debug ip packet # det
produces output similar to this:
*Mar 1 04:38:28.078: IP: s= (Serial0),d= (Ethernet0), g=, len 100,
*Mar 1 04:38:28.086: IP: s= (Ethernet0), d= (Serial0), g=, len 100, forward
• Extended access lists can also be used with the “log” option at the end of the various statements:
access-list 101 deny ip host host log
access-list 101 permit ip any any

You therefore see messages on the screen for permitted and denied traffic:
*Mar 1 04:44:19.446: %SEC-6-IPACCESSLOGDP:
list 111 permitted icmp
-> (0/0), 15 packets
*Mar 1 03:27:13.295: %SEC-6-IPACCESSLOGP:
list 118 denied tcp
->, 1 packet
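These log messages can also be collected and parsed mechanically, for example by a script on a syslog server. A sketch follows; the sample line is synthetic because the excerpt above has its addresses stripped, and the pattern may need adjusting for your IOS version:

```python
# Sketch: extract the list name, action, and protocol from
# %SEC-6-IPACCESSLOG* syslog messages. The sample line is synthetic;
# field layout follows the general IOS pattern shown above.
import re

PATTERN = re.compile(
    r"%SEC-6-IPACCESSLOG\w*:\s+list (\S+) (permitted|denied) (\w+)"
)

def parse(line: str):
    """Return (list name, action, protocol), or None for non-ACL log lines."""
    m = PATTERN.search(line)
    return m.groups() if m else None

line = ("*Mar 1 03:27:13: %SEC-6-IPACCESSLOGP: list 118 denied tcp "
        "10.0.0.5(1234) -> 10.0.0.9(23), 1 packet")
print(parse(line))  # ('118', 'denied', 'tcp')
```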
• If the ip inspect list is suspect, the debug ip inspect <type_of_traffic> command produces
output such as this:
Feb 14 12:41:17 56: 3d05h: CBAC* sis 258488 pak 16D0DC TCP P ack 3195751223 seq 3659219376(2) ( => (
Feb 14 12:41:17 57: 3d05h: CBAC* sis 258488 pak 17CE30 TCP P ack 3659219378 seq 3195751223(12) ( <= (

Troubleshooting Routers
Cisco Router Basic Troubleshooting Checklist
Excerpted from the book The Accidental Administrator:
Cisco Router Step-by-Step Configuration
Guide (Crawley, Don R., Seattle, WA, ISBN 978-0983660729)
When a router isn’t functioning, here are some steps to perform to eliminate basic faults as the
source of trouble:
Physical Layer Stuff: Check power issues. Look for power lights, check plugs, and circuit breakers.
Check the Interfaces: Use the command show ip interface brief or show ipv6 interface brief to ensure
that desired interfaces are up and configured properly.
Ping: Use the ping and trace commands to check for connectivity.
Check the Routing Table: Use the show ip route or show ipv6 route command to find out what the
router knows.
Is there either an explicit route to the remote network or a gateway of last resort?
Is there a Firewall on the Computer? If the problem involves a computer, check to ensure that its firewall
is not blocking packets.
Sometimes there are computers at client locations with firewalls in operation without the client’s knowledge.
Any Access Lists? If the above steps don’t resolve the issue, check for access-control lists that block
traffic. There is an implicit “deny any” at the end of every access-control list, so even if you don’t see a
statement explicitly denying traffic, it might be blocked by an implicit “deny any.”
Is the VPN Up? If a VPN is part of the connection, check to ensure that it is up. Use the show crypto family of commands to
check VPN connections. With VPN connections, each end of the connection must mirror the other. For
example, even something as seemingly inconsequential as a different timeout value or a different key
lifetime can prevent a connection.
Do the Protocols Match? If you are trying to gain remote access to a server, ensure that it supports the
protocol you’re attempting to use. For example, if the router hasn’t been configured to support SSH and
you use the default settings in PuTTY which call for SSH, you won’t be able to connect. Also, some
admins change the default port numbers, so you may expect to use port 22 with SSH, but the admin may
have configured it to use a non-standard port.
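A quick way to check the protocol-and-port question from a workstation is a plain TCP connection test. A minimal Python sketch follows; the host and port values are placeholders for your own:

```python
# Sketch: test whether a host accepts TCP connections on a given port, e.g.
# to check whether SSH is listening on 22 or on an admin-chosen
# non-standard port.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host: standard SSH is port 22, but it may have been moved.
print(port_open("127.0.0.1", 22, timeout=1.0))
```

Note that a successful connection only shows something is listening on the port; it does not confirm which protocol is being spoken there.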
Check for Human Error: User errors can also be a source of trouble. Check to ensure that correct
usernames and passwords are being used, and that you and the admin on the other end of the connection are
using the same network addresses and matching subnet masks.
Verify Settings: Do not make assumptions.
Verify everything! Often, by using the above steps, you can solve the problem. If that doesn’t do it, then
proceed to more advanced show and debug commands to isolate the problem.

Router Troubleshooting Tools

Using Router Diagnostic Commands
Cisco routers provide numerous integrated commands to assist you in monitoring and troubleshooting
your internetwork. The following sections describe the basic use of these commands:
• The show commands help monitor installation behaviour and normal network behaviour, as well as
isolate problem areas.
• The debug commands assist in the isolation of protocol and configuration problems.
• The ping commands help determine connectivity between devices on your network.
• The trace commands provide a method of determining the route by which packets reach their destination
from one device to another.

Using show Commands

The show commands are powerful monitoring and troubleshooting tools. You can use the show
commands to perform a variety of functions:
• Monitor router behaviour during initial installation
• Monitor normal network operation
• Isolate problem interfaces, nodes, media, or applications
• Determine when a network is congested
• Determine the status of servers, clients, or other neighbours

Following are some of the most commonly used show commands:

• show interfaces—Use the show interfaces exec command to display statistics for all interfaces
configured on the router or access server. The resulting output varies, depending on the network for which
an interface has been configured.
Some of the more frequently used show interfaces commands include the following:
° show interfaces ethernet
° show interfaces tokenring
° show interfaces fddi
° show interfaces atm
° show interfaces serial

• show controllers—This command displays statistics for interface card controllers. For example, the
show controllers mci command provides the following fields:
MCI 0, controller type 1.1, microcode version 1.8
128 Kbytes of main memory, 4 Kbytes cache
22 system TX buffers, largest buffer size 1520
Restarts: 0 line down, 0 hung output, 0 controller
Interface 0 is Ethernet0, station address 0000.0c00.d4a6
15 total RX buffers, 11 buffer TX queue limit, buffer size 1520
Transmitter delay is 0 microseconds
Interface 1 is Serial0, electrical interface is V.35 DTE
15 total RX buffers, 11 buffer TX queue limit, buffer size 1520
Transmitter delay is 0 microseconds
High speed synchronous serial interface
Interface 2 is Ethernet1, station address aa00.0400.3be4
15 total RX buffers, 11 buffer TX queue limit, buffer size 1520
Transmitter delay is 0 microseconds
Interface 3 is Serial1, electrical interface is V.35 DCE
15 total RX buffers, 11 buffer TX queue limit, buffer size 1520
Transmitter delay is 0 microseconds
High speed synchronous serial interface

Some of the most frequently used show controllers commands include the following:
° show controllers token
° show controllers FDDI
° show controllers LEX
° show controllers ethernet
° show controllers E1
° show controllers MCI
° show controllers cxbus
° show controllers t1
• show running-config—Displays the router configuration currently running
• show startup-config—Displays the router configuration stored in nonvolatile RAM (NVRAM)
• show flash—Group of commands that display the layout and contents of flash memory
• show buffers—Displays statistics for the buffer pools on the router
• show memory—Shows statistics about the router’s memory, including free pool statistics
• show processes—Displays information about the active processes on the router
• show stacks—Displays information about the stack utilization of processes and interrupt routines, as
well as the reason for the last system reboot
• show version—Displays the configuration of the system hardware, the software version, the names and
sources of configuration files, and the boot images

There are hundreds of other show commands available.

Using debug Commands
The debug privileged exec commands can provide a wealth of information about the traffic being seen (or
not seen) on an interface, error messages generated by nodes on the network, protocol-specific diagnostic
packets, and other useful troubleshooting data.
To access and list the privileged exec commands, complete the following tasks:
Step 1 Enter the privileged exec mode:
Router> enable
Password: XXXXXX
Router#
Step 2 List privileged exec commands:
Router# debug ?
Exercise care when using debug commands. Many debug commands are processor intensive
and can cause serious network problems (such as degraded performance or loss of connectivity) if they
are enabled on an already heavily loaded router. When you finish using a debug command, remember to
disable it with its specific no debug command (or use the no debug all command to turn off all debugging).

Use debug commands to isolate problems, not to monitor normal network operation. Because the high
processor overhead of debug commands can disrupt router operation, you should use them only when
you are looking for specific types of traffic or problems and have narrowed your problems to a likely
subset of causes.

Output formats vary with each debug command. Some generate a single line of output per packet, and
others generate multiple lines of output per packet. Some generate large amounts of output, and others
generate only occasional output. Some generate lines of text, and others generate information in field format.

To minimize the negative impact of using debug commands, follow this procedure:

Step 1 Use the no logging console global configuration command on your router.
This command disables all logging to the console terminal.

Step 2 Telnet to a router port and enter the enable exec command. The enable exec command will place
the router in the privileged exec mode. After entering the enable password, you will receive a prompt that
will consist of the router name with a # symbol.

Step 3 Use the terminal monitor command to copy debug command output and system error messages
to your current terminal display. By redirecting output to your current terminal display, you can view
debug command output remotely, without being connected through the console port.

If you use debug commands at the console port, character-by-character processor interrupts are
generated, maximizing the processor load already caused by using debug. If you intend to keep the
output of the debug command, spool the output to a file.

Using Third-Party Diagnostic Tools

In many situations, using third-party diagnostic tools can be more useful and less intrusive
than using debug commands.
Using the ping Command
To check host reachability and network connectivity, use the ping exec (user) or privileged exec
command. After you log in to the router or access server, you are automatically in user exec command

mode. The exec commands available at the user level are a subset of those available at the privileged
level. In general, the user exec commands allow you to connect to remote devices, change terminal
settings on a temporary basis, perform basic tests, and list system information. The ping command can be
used to confirm basic network connectivity on AppleTalk, ISO Connectionless Network Service (CLNS),
IP, Novell, Apollo, VINES, DECnet, or XNS networks. For IP, the ping command sends Internet Control
Message Protocol (ICMP) Echo messages. ICMP is the Internet protocol that reports errors and provides
information relevant to IP packet addressing. If a station receives an ICMP Echo message, it sends an
ICMP Echo Reply message back to the source.
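The Echo message ping sends can be sketched by constructing the packet by hand. The following uses only the Python standard library; actually sending it would require a raw socket (typically root), so only construction and the Internet checksum are shown:

```python
# Sketch: build an ICMP Echo Request (type 8, code 0) as ping does.
# The checksum is the 16-bit ones'-complement sum defined in RFC 1071.
import struct

def checksum(data: bytes) -> int:
    """Compute the Internet checksum over data (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """Type 8 (Echo), code 0, checksum computed over the whole message."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = echo_request(ident=1, seq=1)
print(len(pkt), checksum(pkt))  # a valid packet checksums to 0
```

The receiver verifies the message the same way: recomputing the checksum over a valid packet, checksum field included, yields zero.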

The extended command mode of the ping command permits you to specify the supported IP header
options. This allows the router to perform a more extensive range of test options. To enter ping extended
command mode, enter yes at the extended commands prompt of the ping command.

It is a good idea to use the ping command when the network is functioning properly to see how the
command works under normal conditions and so you have something to compare against when troubleshooting.

Using the trace Command

The trace user exec command discovers the routes that a router’s packets follow when traveling to their
destinations. The trace privileged exec command permits the supported IP header options to be specified,
allowing the router to perform a more extensive range of test options.

The trace command works by using the error message generated by routers when a datagram exceeds its
time-to-live (TTL) value. First, probe datagrams are sent with a TTL value of 1. This causes the first
router to discard the probe datagrams and send back “time exceeded” error messages.

The trace command then sends several probes and displays the round-trip time for each. After every third
probe, the TTL is increased by one. Each outgoing packet can result in one of two error messages. A
“time exceeded” error message indicates that an intermediate router has seen and discarded the probe. A
“port unreachable” error message indicates that the destination node has received the probe and discarded
it because it could not deliver the packet to an application. If the timer goes off before a response comes
in, trace prints an asterisk (*).
The trace command terminates when the destination responds, when the maximum TTL is exceeded, or
when the user interrupts the trace with the escape sequence. As with ping, it is a good idea to use the
trace command when the network is functioning properly to see how the command works under normal
conditions and so you have something to compare against when troubleshooting.
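The TTL mechanism just described can be simulated without raw sockets. A toy sketch of the probe logic follows; the hop names are invented for illustration:

```python
# Simulation of the trace command's TTL mechanism: probes start with TTL 1,
# and the TTL grows until the destination answers "port unreachable".
def trace(path: list[str], max_ttl: int = 30) -> list[tuple[int, str, str]]:
    """Return (ttl, responder, message) per TTL, as trace would report."""
    hops = []
    for ttl in range(1, max_ttl + 1):
        if ttl < len(path):
            # An intermediate router decrements TTL to 0 and discards
            # the probe, answering with a "time exceeded" message.
            hops.append((ttl, path[ttl - 1], "time exceeded"))
        else:
            # The destination cannot deliver the probe to an application.
            hops.append((ttl, path[-1], "port unreachable"))
            break
    return hops

for hop in trace(["r1", "r2", "destination"]):
    print(hop)
```

Each returned tuple corresponds to one line of trace output; a real implementation would also record round-trip times and print `*` for probes that time out.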

Incident Response
An incident is a set of one or more security events or conditions that requires action and closure
in order to maintain an acceptable risk profile. In the haystack of events, organizations must find the
“needles” that are the security incidents. Events are isolated and disconnected, but incidents add the
context that enables security administrators to gain understanding and take action. Defined in this way—
as a set of events or conditions requiring response and closure—incidents comprise not only the
significant threats that jeopardize business and require intervention, but also more mundane
situations that occur on a daily basis and only threaten the business if no action is taken. Examples of
these routine situations include “low and slow” port scans and some varieties of email worms. Most
organizations face thousands of instances of the latter types of threats, together with the higher profile
blended threats like Code Red, Nimda, and Klez. Besides attacks, known system vulnerabilities or
discovered policy violations are also incidents that require a response in order to protect the business.
When related events (e.g. attacks, vulnerabilities, and policy violations) are viewed together, the true
nature (or type) of the incident becomes evident.

Introduction to Incident Handling and Response

Computer security incident response has become an important component of information
technology (IT) security programs. An incident response capability is therefore necessary for rapidly
detecting incidents, minimizing loss and destruction, mitigating the weaknesses that were exploited, and
restoring IT services.

Different types of information security incidents

• Incidents due to peripheral devices such as external/removable media
• Incidents triggered by Attrition (brute force methods that compromise, degrade, or destroy systems,
networks, or services)
• Incidents linked to a website or web-based application
• Incidents via an email message or attachment
• Incidents resulting from violations of an organization’s acceptable usage policies by an
authorized user
• Incidents resulting from loss or theft of equipment
• Incidents due to other factors

These can be classified into:
• Malicious Code incidents
• Network reconnaissance incidents
• Unauthorised Access incidents
• Inappropriate Usage incidents
• Multiple component incidents

Impact of information security incidents

• Functional impact (current and likely future negative impact to business functions)
• Information impact (e.g., effect on the confidentiality, integrity, and availability of the organization’s
information)
• Recoverability from the incident (e.g., the time and types of resources that must be spent on recovering
from the incident)
Organizations prioritize information security incidents based on the weights they
give to each of the above categories for a particular incident. For example, an organization that deals with
massive amounts of personally identifiable information (PII) might weight information impact more heavily
than recoverability, while an emergency response agency might prioritize functional impact to
ensure the continued delivery of emergency services.
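This weighting idea can be sketched as a simple scoring function. The weights and the 0-3 impact ratings below are illustrative assumptions, not values from any standard:

```python
# Sketch: prioritize incidents by weighting functional, information, and
# recoverability impact. Weights and 0-3 ratings are illustrative only.
def priority(functional: int, information: int, recoverability: int,
             weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted impact score; a higher score means respond sooner."""
    wf, wi, wr = weights
    return wf * functional + wi * information + wr * recoverability

# A PII-heavy organization might weight information impact most heavily:
pii_weights = (0.2, 0.6, 0.2)
print(priority(1, 3, 1, pii_weights))  # a data breach scores high here
print(priority(3, 0, 1, pii_weights))  # a pure outage scores lower
```

An emergency response agency would instead raise the first weight, so that loss of function outranks loss of information.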

Need for Incident Response

• to respond quickly and effectively when security breaches occur
• to be able to use information gained during incident handling to better prepare for handling
future incidents
• to provide stronger protection for systems and data
• to help deal properly with legal issues that may arise during incidents
• to comply with law, regulations, and policy directing a coordinated, effective defense against information security threats

Goals of Incident Response

• to establish a formal, focused, and coordinated approach to responding to incidents
• to adhere to the organization’s mission, size, structure, and functions
• to formulate policies, plans, and procedures to counter adverse events
• to provide stronger protection for systems and data
• to minimize loss or theft of information and disruption of services
• to respond quickly and effectively when security breaches occur

How to Identify an Incident

• use of incident analysis hardware and software to identify an incident
• use of appropriate incident handling communications means and facilities
• use of appropriate incident analysis resources to identify an incident
• use of appropriate incident mitigation software to identify an incident
• use of different response strategies to identify incidents through attack vectors such as
external/removable media, attrition, web, email, impersonation, improper usage
by an organization’s authorized users, loss or theft of equipment, and others beyond the scope of
those mentioned above

Signs of Security Incident

Two main types of signs of an incident are:
Precursor: A sign that an incident may occur in the future.
Indicator: A sign that an incident may have occurred or may be occurring now.

Some of the common signs of security incident are:

• Web server log entries that show the usage of a vulnerability scanner
• An announcement of a new exploit that targets a vulnerability of the organization’s mail server
• A threat from a group stating that the group will attack the organization
• A network intrusion detection sensor alerts when a buffer overflow attempt occurs against a database server
• Antivirus software alerts when it detects that a host is infected with malware
• A system administrator sees a filename with unusual characters
• A host records an auditing configuration change in its log
• An application logs multiple failed login attempts from an unfamiliar remote system
• An email administrator sees a large number of bounced emails with suspicious content
• A network administrator notices an unusual deviation from typical network traffic flows
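Several of these indicators can be checked mechanically. A sketch for the failed-login indicator follows; the log format and the threshold are invented for illustration:

```python
# Sketch: flag the "multiple failed login attempts from an unfamiliar
# remote system" indicator by counting failures per source in a log.
from collections import Counter

def failed_login_sources(lines: list[str], threshold: int = 3) -> list[str]:
    """Return sources with at least `threshold` failed login entries."""
    counts = Counter(
        line.rsplit(" from ", 1)[1].strip()
        for line in lines
        if "failed login" in line.lower()
    )
    return [src for src, n in counts.items() if n >= threshold]

log = [
    "Failed login for admin from 198.51.100.7",
    "Failed login for root from 198.51.100.7",
    "Accepted login for alice from 10.0.0.8",
    "Failed login for admin from 198.51.100.7",
]
print(failed_login_sources(log))  # ['198.51.100.7']
```

In practice this kind of correlation is performed by an IDPS or SIEM rather than an ad hoc script, but the underlying logic is the same.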

Incident information
One can get information about incidents from various sources:

• Alerts: Reviewing alerts based on supporting data from sources such as Intrusion detection and
prevention systems (IDPS); Security Information and Event Management (SIEM) alerts; Antivirus and
antispam software; File integrity checking software; Third-party monitoring services; etc.

• Logs: Analysing logs from sources such as operating system, service, and application logs and network
device logs in correlation with event information

• Network flow: Using routers and other networking devices to provide information and locate
anomalous network activity caused by malware, data exfiltration, and other malicious acts

• Publicly Available Information: Updating and integrating new vulnerabilities and exploits published
by authorized agencies such as National Vulnerability Database (NVD)

• People: Validating reports registered by Users, system administrators, network administrators, security
staff, and others within the organization; and also reports originating from external sources or parties

Handling Different Types of Information Security Incidents

Handling Incidents
There are five important incident handling phases—
• Preparation: establishing and training an incident response team, and acquiring the necessary tools and resources
• Detection and analysis: detecting security breaches and alerting the organization to any imminent threats
• Containment: mitigating the impact of the incident by containing it
• Eradication and recovery: carrying out the detection and analysis cycle to eradicate the incident and
ultimately initiate recovery
• Post-incident activity: preparing a detailed report of the cause and cost of the incident and future
preventive measures against similar attacks
This is similar to the tasks contained within incident management plans:
• identify
• contain
• cleanse
• recover
• close
Organisations should have a plan to respond to various types of incidents detailing various aspects of
incident handling including the above.

Incident Response Plan

An incident response plan is an organization’s foundation for a formal, focused, and coordinated approach to
incident response.
Purpose of Incident Response Plan
The objective of establishing an incident response plan is to provide the roadmap for implementing the
incident response capability.

The incident response plan acts as a defence mechanism against hackers, malware, human error and a
series of other security threats.

Requirements of Incident Response Plan

An incident response plan provides the structure for building an organization’s incident
response capability. Emphasis on computing security policies and practices is a main objective of
most organizations in their overall risk management strategies. Elements that are recommended as
important to an incident response plan are:
• Organization’s mission towards the plan
• Organization’s strategies and goals to determine the structure of incident response capability
• Senior management approval in the structuring of the proposed plan
• Organizational approach to incident response
• How the incident response team will communicate with the rest of the organization and with other
organizations
• Metrics for measuring the incident response capability and its effectiveness
• Roadmap for maturing the incident response capability (e.g. regular reviews, audits and tests, etc.)
• How the program fits into the overall organization

Incident Response Plan Checklist

Developing an incident response plan checklist can minimize the threat of security breaches in the form of attacks on websites and servers, inadvertent leakage of sensitive data, etc. Instating a structure that ensures the latest developments are captured, understood, evaluated as threats to the business, documented, and distributed will help ensure an effective incident response.

An incident response plan checklist should be an amalgamation of the following key practices:
• provide a roadmap for implementing an incident response program based on the organization’s policy
• organize both short- and long-term goals for the program, including metrics for measuring the program
• highlight incident handlers’ training needs and other technical requirements
• ensure existing and new cyber technologies are adequately addressed in policies and procedures
• conduct regular reviews, audits, and tests to protect against security breaches
• classify business data in order of its sensitivity and security requirements
• select an appropriate incident response team structure
• comply with security-related incident regulations and law enforcement procedures

Preparation for Incident Response and Handling

Create a Core Team

The integrity of business security demands an effective incident response team, which can be achieved through the selection of appropriate structure and staffing models. Typically, a designated
incident response team or personnel function as the first point of contact (POC) in a situation involving
security breach in an organization. The incident handlers may then analyse the incident data, determine
the impact of the incident, and act appropriately to limit the damage and restore normal services. The
incident response team’s success depends on the participation and cooperation of individuals throughout
the organization. Therefore, an organization must create a core team, identify suitable individuals, discuss
incident response team models, and provide advice on selecting an appropriate model.

A team model may be based on the following models:

• Central Incident Response Team: A functional model for small organizations with limited or no
geographic presence wherein a single incident response team handles core security computing.

• Distributed Incident Response Teams: This model is effective for large organizations (e.g., one team
per division) and for organizations with major computing resources at distant locations (e.g., one team per
geographic region, one team per major facility).

• Coordinating Team: An incident response team provides advice to other teams without having
authority over those teams—for example, a department-wide team may assist individual agencies’ teams;
it is in effect modelled as a CSIRT for CSIRTs.

Create a Tool Kit, Systems, and Instrumentation

A jump kit is a portable case instrumental to incident response teams; it contains items such as a laptop, appropriate software (packet sniffers, digital forensics tools), backup devices, blank media, etc.
Listed below is a range of tools, systems, and instrumentation that may be useful in an incident response:
• Incident Handler Communications and Facilities: contact information of team members and others within and outside the organization; an on-call information matrix; incident reporting mechanisms such as phone numbers, email addresses, and online forms; incident tracking systems; smartphones for round-the-clock communication; encryption software for internal team members; a secure storage facility for security materials; etc.

• Incident Analysis Hardware and Software:

Digital forensic workstations and/or backup devices to create disk images, preserve log files, and save other relevant incident data; laptops; spare workstations, servers, and networking equipment, or the virtualized equivalents, for storing and trying out malware; blank removable media; packet sniffers and protocol analyzers; digital forensic software; evidence-gathering accessories such as digital cameras, audio recorders, chain of custody forms, etc.

• Incident Analysis Resources: Port lists, including commonly used ports and Trojan horse ports; documentation for OSs, applications, protocols, etc.; network diagrams and lists of critical assets, such as database servers; current baselines of expected network, system, and application activity; cryptographic hashes of critical files to speed incident analysis, verification, and eradication
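To illustrate why cryptographic hashes of critical files speed up analysis and verification, here is a minimal sketch that baselines SHA-256 digests and later reports which files have changed. The file name is hypothetical.

```python
# Sketch: baseline SHA-256 hashes of critical files so later comparison can
# quickly verify whether a file was altered during an incident.
import hashlib

def file_hash(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(baseline):
    """Return the files whose current hash differs from the baseline."""
    return [p for p, digest in baseline.items() if file_hash(p) != digest]

# Demo: build a baseline for an illustrative config file, then tamper with it.
with open("critical.cfg", "w") as f:
    f.write("setting=1\n")
baseline = {"critical.cfg": file_hash("critical.cfg")}

with open("critical.cfg", "a") as f:
    f.write("backdoor=1\n")        # simulated unauthorized modification

changed = verify(baseline)         # → ["critical.cfg"]
```

Comparing digests against a trusted baseline is far faster than manually inspecting file contents during eradication.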

• Incident Mitigation Software: Access to images of clean OS and application installations for restoration and recovery purposes

Incident Response Team
Incident Response Team Members
A single employee, with one or more designated alternates, should be in charge of incident response. In a
fully outsourced model, this person oversees and evaluates the outsourcer’s work. All other models
generally have a team manager and one or more deputies who assume authority in the absence of the
team manager. Every team member should have good problem-solving and critical thinking skills.

Incident Response Team Members Roles and Responsibilities

An incident response team member should possess technical skills, such as system administration, network administration, programming, technical support, or intrusion detection. An incident response team should be a combination of members skilled in core technology areas (e.g. operating systems and applications) and in other technical areas such as network intrusion detection, malware analysis, or forensics.

Roles and responsibilities

A team member in an incident response unit is expected to have a basic understanding of the technologies used and their applications. The individual should be capable of comprehending and handling the following aspects of security incidents:
• the type of incident activity that is being reported or seen by the community
• the way in which incident response team services are being provided (the level and depth of technical
assistance provided to the constituency)
• the responses that are appropriate for the team (e.g., what policies and procedures or other regulations
must be considered or followed while undertaking the response)
• the level of authority the incident response team has in taking any specific actions when applying
technical solutions to an incident reported to the incident response team

Developing Skills in Incident Response Personnel
• maintain, enhance, and expand proficiency in technical areas and security disciplines, as well as less technical topics such as the legal aspects of incident response
• incentivize participation in staff conferences
• promote deeper technical understanding
• engage external facilitators with deep technical knowledge in needed areas to impart learning and development
• provide opportunities to perform other tasks in non-functional areas
• rotate staff members across functions to gain new technical skills
• create a mentoring program to enable senior technical staff to help less experienced staff learn incident handling
• develop incident handling scenarios and conduct team discussions

Incident Response Team Structure

After successfully selecting a functional core team, team members should be further integrated into an appropriate staffing model based on the magnitude of incident response and the size of the organization.

Find details of the three types of staffing methods below:

• In-house employees
• Partially outsourced
• Fully outsourced

An organization must consider the following factors before selecting an appropriate incident response team structure:

• The need for 24/7 availability: Real-time availability is considered one of the best options for incident response because the longer an incident lasts, the more potential there is for damage and loss.

• Full-time versus part-time team members: Organizations with limited funding, staffing, or incident
response needs may have only part-time incident response team members, serving as more of a virtual
incident response team. An existing group such as the IT help desk can act as a first POC for incident
reporting and perform initial investigation and data collection.

• Employee morale: Segregate administrative work and core incident response to minimize stress on
employees and to help boost morale

• Cost: Implement sufficient funding for training and skills development of incident response team members, as the work demands broad knowledge of IT
• Staff Expertise: Incident handling requires specialized knowledge and experience in several technical
areas; the breadth and depth of knowledge required varies based on the severity of the organization’s risks

• In the case of outsourced work, the organization must consider not only the current quality (breadth and depth) of the outsourcer’s work, but also efforts to ensure the quality of future work
• Document the division of authority for outsourced incident response work appropriately and ensure actions for these decision points are handled
• Divide incident response responsibilities and restrict access to sensitive information
• Provide the outsourcer with regularly updated documents that define which incidents the organization is concerned about
• Create correlation among multiple data sources
• Maintain basic incident response skills in-house

Incident Response Team Dependencies

It is important to identify other groups within the organization and rely on the expertise, judgment, and abilities of others, including: management, which establishes incident response policy, budget, and staffing; information security staff members during certain stages of incident handling such as prevention, containment, eradication, and recovery; IT technical experts, e.g. system and network administrators; legal departments to review plans, policies, documents, etc.; public affairs and media relations; human resources; business continuity planning; and physical security and facilities management.

Different Methods and Techniques used when working with others

Incident Response Team Services

The main focus of an incident response team is performing incident response; however, it may also undertake the provision of the following services:

• Intrusion detection: Incident response team analyzes incidents more quickly and accurately,
based on the knowledge it gains of intrusion detection technologies

• Advisory Distribution: The team may also issue advisories within the organization regarding new vulnerabilities and threats through automated methods

• Education and Awareness: Promote education and awareness so that users and technical staff know about detecting, reporting, and responding to incidents, through means such as workshops, websites, newsletters, posters, and even stickers on monitors and laptops

• Information Sharing: Manage the organization’s incident information sharing efforts

Defining the Relationship between Incident Response, Incident Handling, and Incident Management

Incident response means responding to computer security incidents systematically, following a consistent incident handling methodology so that the appropriate actions are taken in a timely manner. It is a mechanism to minimize loss or theft of information and disruption of services caused by incidents.

Incident handling refers to the phases of the incident response process, i.e. preparation, detection and analysis, containment, eradication and recovery, and post-incident activity, required for adequate handling of an incident.

Incident management is a term used to describe the overall computing security management needed to detect the occurrence of an incident, initiate and handle an incident response, and prevent any future recurrence.

Routine Operational Procedures and Tasks Required to Co-ordinate and Respond to Information Security Incidents
• Preparing to Handle Incidents
• Incident Analysis Hardware and Software
• Incident Analysis Resources
• Incident Mitigation Software
• Management responsible for coordinating incident response among various stakeholders, minimizing
damage, and reporting to Congress, OMB, the General Accounting Office (GAO), and other parties
• Information security staff members may be needed during certain stages of incident handling
(prevention, containment, eradication, and recovery)—for example, to alter network security controls
(e.g., firewall rule sets)
• IT technical experts (e.g., system and network administrators) can ensure that the appropriate actions are
taken for the affected system, such as whether to disconnect an attacked system
• Coordinate with relevant Legal experts to review incident response plans, policies, and procedures to
ensure their compliance with law and Federal guidance, including the right to privacy
• Coordinate and inform the media and, by extension, the public
• Ensure that incident response policies and procedures and business continuity processes are in sync

• Coordinate with Physical Security and Facilities Management to access facilities during incident

Incident Response Process

Step 1: Identification
Obtaining and validating information related to information security issues

In incident handling, detection may be the most difficult task. Incident response teams in an organization are equipped to handle security incidents using well-defined response strategies beginning with information gathering. Preparing a list of the most common attack vectors, such as external/removable media, web, email, impersonation, improper use by authorized users, etc., can narrow the choice down to the most appropriate incident handling procedure. Therefore, it is important to validate each incident using defined standard procedures and to document each step taken accurately.

Common issues and incidents of information security that may require action, and who to report these to

An indicator may not always translate into a security incident, given the possibility of technical faults due to human error in cases such as a server crash or modification of critical files. Determining whether a particular event is actually an incident is sometimes a matter of judgment. It may be necessary to collaborate with other technical and information security personnel to make a decision. Therefore, incident handlers need to report the matter to highly experienced and proficient staff members who can analyse the precursors and indicators effectively and take appropriate actions.

Mentioned below are some of the means to conduct initial analysis for validation:
• Profiling networks and systems in order to measure the characteristics of expected activity, so that changes to it can be more easily identified; profiling can be used as one of several detection and analysis techniques

• Studying networks, systems and applications to understand what their normal behavior is so that
abnormal behavior can be recognized more easily

• Creating and implementing a log retention policy that specifies how long log data should be
maintained may be extremely helpful in analysis because older log entries may show reconnaissance
activity or previous instances of similar attacks

• Correlating events using evidence of an incident captured in several logs, wherein each log may contain different types of data—a firewall log may have the source IP address that was used, whereas an application log may contain a username

• Synchronizing host clocks using protocols such as the Network Time Protocol (NTP) so that timestamps recorded across log sources are consistent

• Maintain and Use a Knowledge Base of Information that handlers need for referencing quickly
during incident analysis

• Use Internet Search Engines for Research to help analysts find information on unusual Activity

• Run Packet Sniffers to Collect Additional Data: configuring the sniffer to record only traffic that matches specified criteria should keep the volume of data manageable and minimize the inadvertent capture of other information

• Filter the Data to segregate categories of indicators that tend to be insignificant
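The event-correlation bullet above can be sketched in code: a firewall log carries the source IP while the application log carries the username, and joining on the shared IP links the two. The log entries and field names below are illustrative assumptions.

```python
# Sketch: correlating events across two logs. The firewall log has the
# source IP address; the application log has the username. Joining on the
# shared IP attributes a username to the suspicious connection.

firewall_log = [
    {"time": "10:01", "src_ip": "203.0.113.7", "action": "ALLOW", "port": 443},
    {"time": "10:03", "src_ip": "198.51.100.9", "action": "DENY", "port": 22},
]
app_log = [
    {"time": "10:01", "src_ip": "203.0.113.7", "user": "jdoe", "event": "login"},
]

def correlate(fw, app):
    """Pair firewall entries with application entries sharing a source IP."""
    by_ip = {}
    for entry in app:
        by_ip.setdefault(entry["src_ip"], []).append(entry)
    return [(f, a) for f in fw for a in by_ip.get(f["src_ip"], [])]

pairs = correlate(firewall_log, app_log)
```

Note that this join only makes sense when host clocks are synchronized (e.g. via NTP), which is why the two practices appear together in the list above.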

Step 2: Incident Recording

Any occurrence of an incident must be recorded, and the incident response team should update the status of incidents, along with other pertinent information. Observations and facts of the incident may be stored in any of several forms, such as logbooks, laptops, audio recorders, and digital cameras.

Incident record samples and template

Documenting system events, conversations, and observed changes in files can lead to a more efficient,
more systematic and error-free handling of the problem. Using an application or a database, such as an
issue tracking system helps ensure that incidents are handled and resolved in a timely manner.
The following useful information is to be included in an incident record template:
• current status of the incident as new, in progress, forwarded for investigation, resolved, etc.
• summary of the incident
• indicators related to the incident
• other incidents related to this incident
• actions taken by all incident handlers on this incident
• chain of custody, if applicable
• impact assessments related to the incident
• contact information for other involved parties (e.g., system owners, system administrators)
• list of evidence gathered during the incident investigation
• comments from incident handlers
• next steps to be taken (e.g., rebuild the host, upgrade an application).
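The template fields above can be sketched as a simple record structure. This is an illustrative data model, not a mandated schema; field names and status values are assumptions.

```python
# Sketch of an incident record capturing the template fields listed above.
from dataclasses import dataclass, field

@dataclass
class IncidentRecord:
    summary: str
    status: str = "new"                 # new, in progress, forwarded, resolved
    indicators: list = field(default_factory=list)
    related_incidents: list = field(default_factory=list)
    actions_taken: list = field(default_factory=list)
    chain_of_custody: list = field(default_factory=list)
    impact_assessment: str = ""
    contacts: dict = field(default_factory=dict)   # system owners, admins, ...
    evidence: list = field(default_factory=list)
    handler_comments: list = field(default_factory=list)
    next_steps: list = field(default_factory=list)

# Illustrative usage, as an issue tracking system might record it:
rec = IncidentRecord(summary="Suspicious outbound traffic from web server")
rec.indicators.append("beaconing to 203.0.113.7 every 60s")
rec.next_steps.append("rebuild the host")
rec.status = "in progress"
```

Keeping the record in a structured form like this is what lets an issue tracking system report on status and ensure timely resolution.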

Step 3: Initial Response

Commence initial response to an incident based on the type of incident, the criticality of the resources and
data that are affected, the severity of the incident, existing Service Level Agreements (SLA) for affected

resources, the time and day of the week, and other incidents that the team is handling. Generally, the
highest priority is handling incidents that are likely to cause the most damage to the organization or to
other organizations.

Step 4: Communicating the Incident

The incident should be communicated through appropriate procedures via the organization’s points of contact (POC) for reporting incidents internally. It is important for an organization to structure its incident response capability accordingly: some organizations will have all incidents reported directly to the incident response team, whereas others will use existing support structures, such as the IT help desk, as the initial POC.

Assigning and escalating information on information security incidents

Organizations should also establish an escalation process for those instances when the team does not
respond to an incident within the designated time. This can happen for many reasons: for example, cell
phones may fail or people may have personal emergencies. The escalation process should state how long
a person should wait for a response and what to do if no response occurs. On failure to respond within the stipulated time, the incident should be escalated to the next higher level of management. This process should be repeated until the incident is successfully handled.
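The escalation process described above can be sketched as walking an escalation chain until someone responds. The chain levels are illustrative assumptions, and the "response received" check is simulated rather than driven by real timers.

```python
# Sketch of the escalation process: if no response arrives within the
# designated wait time, notify the next level of management, repeating
# until the incident is handled. Levels are illustrative.

ESCALATION_CHAIN = ["on-call handler", "team manager", "CISO"]

def escalate(responded_at_level):
    """Walk the chain until some level responds; return who handled it
    and everyone who was notified along the way."""
    notified = []
    for level in ESCALATION_CHAIN:
        notified.append(level)
        # Stand-in for "response received within the stipulated time":
        if level == responded_at_level:
            return level, notified
    return None, notified   # nobody responded: incident remains unhandled

handler, notified = escalate("team manager")
```

In a real deployment each step would wait the stipulated time (and page via phone/email) before moving to the next level; the loop structure is the same.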

Step 5: Containment
Containment and Quarantine

Containment is important before an incident overwhelms resources or increases damage. Most incidents require containment, so that is an important consideration early in the course of handling each incident. Containment provides time for developing a tailored remediation strategy. An essential part of containment is decision-making, where the situation may demand immediate action such as shutting down a system, disconnecting it from the network, or disabling certain functions.
Criteria for determining the appropriate containment strategy include:
• Potential damage to and theft of resources
• Need for evidence preservation
• Service availability (e.g., network connectivity, services provided to external parties)
• Time and resources needed to implement the strategy
• Effectiveness of the strategy (e.g., partial containment, full containment)
• Duration of the solution (e.g., emergency workaround to be removed in four hours, temporary
workaround to be removed in two weeks, permanent solution)
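One way to apply the criteria above is to score candidate containment strategies and compare them. The weights and ratings below (1 = poor, 5 = good) are purely illustrative assumptions; real decisions weigh these factors qualitatively.

```python
# Sketch: comparing containment strategies by scoring them against the
# criteria listed above. All weights and ratings are illustrative.

CRITERIA = ["damage_prevention", "evidence_preservation",
            "service_availability", "low_cost", "effectiveness", "duration"]

def score(strategy, weights=None):
    """Weighted sum over the containment criteria (equal weights by default)."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    return sum(strategy[c] * weights[c] for c in CRITERIA)

disconnect = {"damage_prevention": 5, "evidence_preservation": 3,
              "service_availability": 1, "low_cost": 5,
              "effectiveness": 5, "duration": 3}
sandbox = {"damage_prevention": 3, "evidence_preservation": 5,
           "service_availability": 4, "low_cost": 2,
           "effectiveness": 3, "duration": 4}

best = max([("disconnect", disconnect), ("sandbox", sandbox)],
           key=lambda kv: score(kv[1]))
```

Adjusting the weights (e.g. raising `evidence_preservation` when prosecution is likely) changes which strategy wins, which is exactly the trade-off the criteria list is meant to surface.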

Handling an incident may necessitate strategies to contain the existing predicament; one such method is redirecting the attacker to a sandbox (a form of containment) so that the team can monitor the attacker’s activity, usually to gather additional evidence. However, delayed containment is risky: once a system has been compromised, allowing the compromise to continue may let the attacker use the compromised system to attack other systems.

Understand Network Damage

On the other hand, containment may give rise to another potential issue: some attacks cause additional damage when they are contained. For example, if malware on a compromised host periodically pings another system, then when the incident handler contains the incident by disconnecting the compromised host from the network, the subsequent pings will fail. As a result of the failure, the malicious process may overwrite or encrypt all the data on the host’s hard drive.

Identify and Isolate the Trust Model

Networked information systems are vulnerable to threats, and benign nodes are often compromised because of unknown, incomplete, or distorted information obtained while interacting with external sources. In this case, malicious nodes need to be identified and isolated from the environment. The solution to this insecurity can be found in the establishment of trust. A trust model can be formed based on node characteristics, the information sources available for computation, the most relevant and reliable information source, the experience of other members of the community, etc.

Step 6: Formulating a Response Strategy

An analysis of the recoverability from an incident determines the possible responses that the team may
take when handling the incident. An incident with a high functional impact and low effort to recover from
is an ideal candidate for immediate action from the team. In situations involving large-scale data exfiltration and exposure of sensitive information, the incident response team may formulate a response by transferring the case to a strategic-level team. Each response strategy should be formulated based on the business impact caused by the incident and the estimated effort required to recover from it. Incident response policies should include provisions concerning incident reporting—at a minimum, what must be reported to whom and at what times. Parties to be informed include: the CIO, head of information security, local information security officer, other incident response teams within the organization, external incident response teams (if appropriate), the system owner, human resources (for cases involving employees, such as harassment through email), public affairs, etc.

Step 7: Incident Classification

Classifying and prioritizing information security incidents

An incident may be broadly classified based on common attack vectors such as External/Removable Media; Attrition; Web; Email; Improper Usage; Loss or Theft of Equipment; and Miscellaneous.
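A first-pass triage tool can map an incident report onto these attack-vector categories by keyword. The keyword lists below are illustrative assumptions; real triage needs analyst judgment.

```python
# Sketch: first-pass classification of a report onto the common attack
# vectors listed above. Keyword lists are illustrative only.

VECTORS = {
    "External/Removable Media": ["usb", "removable", "flash drive"],
    "Web": ["website", "drive-by", "xss", "sql injection"],
    "Email": ["phishing", "attachment", "spear"],
    "Improper Usage": ["policy violation", "unauthorized software"],
    "Loss or Theft of Equipment": ["stolen", "lost laptop"],
}

def classify(report):
    """Return the first matching vector, or Miscellaneous if none match."""
    text = report.lower()
    for vector, keywords in VECTORS.items():
        if any(k in text for k in keywords):
            return vector
    return "Miscellaneous"

label = classify("User reported a phishing attachment in their inbox")
```

Falling back to "Miscellaneous" mirrors the catch-all category in the classification above, ensuring every report is routed somewhere.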

Incident Prioritization
• Functional impact of the incident on the existing functionality of the affected systems and future
functional impact of the incident if it is not immediately contained

• Information Impact of the incident that may amount to information exfiltration and impact on
organization’s overall mission; impact of exfiltration of sensitive information on other organizations if
any of the data pertain to a partner organization

• Recoverability from the incident: determine the amount of time and resources that must be spent on recovering from the incident. Weigh the necessity of actually recovering from an incident carefully against the value the recovery effort will create and any requirements related to incident handling.

Incident classification guidelines and templates

Organizations should document their guidelines and templates to handle any incident, but should focus on being prepared to handle incidents that use common attack vectors. Capturing the attack pattern formally with the required information may help in understanding specific parts of an attack and how it is designed and executed, provides the adversary’s perspective on the problem and the solution, and gives guidance on ways to mitigate the attack’s effectiveness:

• Requirements Gathering – Identification of relevant security requirements, misuse and abuse cases
• Architecture and Design – Provide context for architectural risk analysis and guidance for security
• Implementation and Development – Prioritize and guide review activities
• Testing and Quality Assurance – Provide context for appropriate risk-based and penetration testing
• System Operation – Leverage lessons learned from security incidents into preventative guidance
• Policy and Standard Generation – Guide the identification of appropriate prescriptive organizational policies and standards

Incident prioritization guidelines and templates

Creating written guidelines for prioritizing incidents is a good practice and helps achieve effective information sharing within an organization. This step may also help in identifying situations that are of greater severity and demand immediate attention. An ideal template for incident prioritization should be formulated based on relevant factors such as the functional impact of the incident (e.g., current and likely future negative impact to business functions), the information impact of the incident (e.g., effect on the confidentiality, integrity, and availability of the organization’s information), and the recoverability from the incident (e.g., the time and types of resources that must be spent on recovering from the incident).
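The three prioritization factors above can be combined into a simple severity rating. The 0–3 rating scale, weights, and thresholds below are illustrative assumptions, not a standard scheme.

```python
# Sketch: deriving a priority from the three factors named above.
# Ratings run 0 (none) to 3 (high); weights and thresholds are illustrative.

def priority(functional_impact, information_impact, recoverability_effort):
    """Weight the impact factors more heavily than recovery effort and
    bucket the total into a priority level."""
    total = 2 * functional_impact + 2 * information_impact + recoverability_effort
    if total >= 10:
        return "high"
    if total >= 5:
        return "medium"
    return "low"

# Exfiltration of sensitive data with moderate service disruption:
p = priority(functional_impact=2, information_impact=3, recoverability_effort=1)
```

Writing the rule down as code (or a table) is one way to make the written prioritization guidelines unambiguous and shareable across the organization.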

Step 8: Incident Investigation

One of the key tasks of an incident response team is to receive information on possible incidents,
investigate them, and take action to ensure that the damage caused by the incidents is minimized.

Following up an Incident Investigation

In the course of the work, the team must adhere to the following procedures deemed appropriate to a
given situation:
• receive initial investigation results and gathered data from IT help desk members, and escalate to a higher-level specialist if the situation demands
• use appropriate materials that may be needed during an investigation
• become acquainted with various law enforcement representatives before an incident occurs to discuss the conditions under which incidents should be reported to them
• maintain chain of custody records: forms should detail each transfer of evidence from person to person and include each party’s signature
• be careful to give out only appropriate information—the affected parties may request details about internal investigations that should not be revealed publicly
• ensure law enforcement is available to investigate incidents wherever necessary
• collect the required list of evidence gathered during the incident investigation
• collect evidence in accordance with procedures that meet all applicable laws and regulations, developed from previous discussions with legal staff and appropriate law enforcement agencies, so that any evidence can be admissible in court

Lessons learnt from the Security Incident

Handling and rectifying security incidents works best in a “learning and improving” model. Therefore, incident handling teams must evolve to reflect new threats, improved technology, and lessons learned. Each lessons-learned brief should address the following agenda:
• Exactly what happened, and at what times?
• How well did staff and management perform in dealing with the incident? Were the documented
procedures followed? Were they adequate?
• What information was needed sooner?
• Were any steps or actions taken that might have inhibited the recovery?

• What would the staff and management do differently the next time a similar incident occurs?
• How could information sharing with other organizations have been improved?
• What corrective actions can prevent similar incidents in the future?
• What precursors or indicators should be watched for in the future to detect similar incidents?
• What additional tools or resources are needed to detect, analyze, and mitigate future incidents?

Process change for the future

Because of the changing nature of information technology and changes in personnel, the incident
response team should review all related documentation and procedures for handling incidents at
designated intervals. A study of incident characteristics (data collected of previous incidents) may
indicate systemic security weaknesses and threats, as well as changes in incident trends.
Incident data can also be collected to determine if a change to incident response capabilities causes a
corresponding change in the team’s performance (e.g., improvements in efficiency, reductions in costs).

Incident Record Keeping

Incident record keeping, i.e. collecting data that are actionable rather than collecting data simply because they are available, will be useful to the organization in several capacities. It may help in deriving the following information:
• Systemic security weaknesses and threats, as well as changes in incident trends
• Selection and implementation of additional controls
• Measures of the success of the incident response team
• Expected return on investment from the data

Step 9: Data Collection

Chain of Custody
Evidence collected should be accounted for at all times; whenever evidence is transferred from person to
person, chain of custody forms should detail the transfer and include each party’s signature. A detailed log
should be kept for all evidence, including the following:
• Identifying information (e.g., the location, serial number, model number, hostname, media access
control (MAC) addresses, and IP addresses of a computer)
• Name, title, and phone number of each individual who collected or handled the evidence during the investigation
• Time and date (including time zone) of each occurrence of evidence handling
• Locations where the evidence was stored
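A chain-of-custody log entry covering the fields above can be sketched as follows. All names, serial numbers, and locations are illustrative.

```python
# Sketch of a chain-of-custody log: one entry per transfer or handling
# event, covering the fields listed above. All values are illustrative.
from dataclasses import dataclass

@dataclass
class CustodyEntry:
    item: str        # identifying info: serial number, hostname, MAC/IP, ...
    handler: str     # name, title, and phone of the person handling it
    timestamp: str   # time and date, including time zone
    location: str    # where the evidence is stored

custody_log = []

def transfer(item, handler, timestamp, location):
    """Record a handling event; each transfer appends a new entry."""
    entry = CustodyEntry(item, handler, timestamp, location)
    custody_log.append(entry)
    return entry

transfer("laptop SN-4411", "A. Gray, Analyst, x1234",
         "2024-03-01 10:15 UTC", "evidence locker 2")
transfer("laptop SN-4411", "B. Lee, Forensics, x5678",
         "2024-03-01 14:02 UTC", "forensics lab")
```

In practice each transfer would also carry both parties' signatures on the paper form; the log here is the machine-readable registry the text calls for.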

Step 10: Forensic Analysis

Incident handling requires some team members to be specialized in particular technical areas, such as
network intrusion detection, malware analysis, or forensics. Many incidents cause a dynamic chain of
events to occur; an initial system snapshot may do more good in identifying the problem and its source
than most other actions that can be taken at this stage. Therefore, it is appropriate to obtain snapshots
through full forensic disk images, not file system backups. Disk images should be made to sanitized
write-protectable or write once media. This process is superior to a file system backup for investigatory
and evidentiary purposes. Imaging is also valuable in that it is much safer to analyze an image than it is to
perform analysis on the original system because the analysis may inadvertently alter the original. Some of the useful resources in forensic aspects of incident analysis may include digital forensic workstations and/or backup devices to create disk images, preserve log files, and save other relevant incident data.
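The core idea, that analysis happens on a verified copy while the original stays untouched, can be shown with a small simulation. The "disk" here is just a byte string standing in for a real device; in practice a write blocker and an imaging tool (such as dd or a dedicated forensic imager) would be used.

```python
# Sketch: imaging vs. analyzing the original. Take a full byte-for-byte
# image of a (simulated) source disk, verify it by hash, then analyze only
# the copy so the original is provably unchanged. Illustrative only.
import hashlib

def sha256(data):
    return hashlib.sha256(data).hexdigest()

source = b"\x00MBR...filesystem bytes..."   # stand-in for the original disk
image = bytes(source)                        # full bit-stream copy

# Verify the image against the original before any analysis:
verified = sha256(image) == sha256(source)

# Analysis reads only the copy; re-hashing afterwards shows the original
# (and the image) are unchanged, preserving evidentiary value.
analysis_view = image[:4]
unchanged = sha256(source) == sha256(image)
```

Recording the pre- and post-analysis hashes is what lets the image stand up for investigatory and evidentiary purposes.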

Step 11: Evidence Protection

Keeping evidence relating to information security incidents is important, but collecting evidence from
computing resources presents some challenges. It is generally desirable to acquire evidence from a system
of interest as soon as one suspects that an incident may have occurred. Users and system administrators
should be made aware of the steps that they should take to preserve evidence. In addition, evidence
should be accounted for at all times: whenever evidence is transferred from person to person, chain of
custody forms should detail the transfer and include each party’s signature, and a registry or log of the
location of the stored evidence should be maintained.
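The chain-of-custody record described above can be sketched as an append-only log; the field names and item IDs here are illustrative, not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CustodyTransfer:
    # One entry per hand-off; both parties would sign the paper form.
    item_id: str
    released_by: str
    received_by: str
    storage_location: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class CustodyLog:
    """Append-only log: entries are recorded, never edited or removed."""
    def __init__(self):
        self._entries = []

    def record(self, transfer: CustodyTransfer) -> None:
        self._entries.append(transfer)

    def history(self, item_id: str) -> list:
        return [e for e in self._entries if e.item_id == item_id]

log = CustodyLog()
log.record(CustodyTransfer("HDD-042", "J. Doe", "A. Smith", "Evidence locker 3"))
log.record(CustodyTransfer("HDD-042", "A. Smith", "Forensics lab", "Lab safe 1"))
print(len(log.history("HDD-042")))  # 2
```

Because the log is never rewritten, the full custody history of any item can be reconstructed for court.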

Step 12: Notify External Agencies

An organization’s incident response team should plan its incident coordination with external parties before
incidents occur to ensure that all parties know their roles and that effective lines of communication are
established. These external parties may include other incident response teams, law enforcement agencies,
Internet service providers, constituents, legal departments, and customers or system owners.

Step 13: Eradication

Eliminating components of the incident, such as deleting malware and disabling breached user accounts,
as well as identifying and mitigating all vulnerabilities that were exploited, follows successful
containment and quarantine. During the process, it is important to identify all affected hosts within the
organization so that they can be remediated. For some incidents, eradication is either not necessary or is
performed during recovery.

Identify Data Backup Holes

Verify data backup and restore procedures. Incident responders should be aware of the location of backup
data storage, along with the maintenance, user access, and security procedures for data restoration and system recovery.

Following are the suggested data backup sources:

• spare workstations, servers, and networking equipment, or the virtualized equivalents, which may be
used for many purposes, such as restoring backups and trying out malware
• other important materials, including backup devices, blank media, and basic networking equipment and
cables

Operating System Updates and Patch management

All hosts should be patched appropriately, hardened using standard configurations, and configured to follow the principle of
least privilege—granting users only the privileges necessary for performing their authorized tasks. Hosts
should have auditing enabled and should log significant security-related events; the security of hosts and their
configurations should be continuously monitored. Some organizations use Security Content
Automation Protocol (SCAP)-expressed operating system and application configuration checklists to
assist in securing hosts consistently and effectively.

Infrastructure and Security Policy Improvement

Security cannot be achieved by merely implementing various security systems, tools, or products.
However, security failures are less likely when security policy, processes, procedures, and products are
implemented together. Multiple layers of defence need to be applied to design a fail-safe security
system. The organization should also report all changes and updates made to its IT infrastructure, network
configuration, and systems, and should focus on longer-term changes (e.g., infrastructure
changes) and ongoing work to keep the enterprise as secure as possible.

Step 14: Systems Recovery

In recovery, administrators restore systems to normal operation, confirm that the systems are functioning
normally, and (if applicable) remediate vulnerabilities to prevent similar incidents. Recovery may involve
such actions as restoring systems from clean backups, rebuilding systems from scratch, replacing
compromised files with clean versions, installing patches, changing passwords, and tightening network
perimeter security (e.g., firewall rulesets, boundary router access control lists). Higher levels of system
logging or network monitoring are often part of the recovery process. Once a resource is successfully
attacked, it is often attacked again, or other resources within the organization are attacked in a similar
manner.

Step 15: Incident Documentation

A logbook is an effective and simple medium for recording all facts regarding incidents. Documenting
system events, conversations, and observed changes in files can lead to more efficient, more systematic,
and less error-prone handling of the problem. Every step taken from the time the incident was detected to
its final resolution should be documented and time-stamped. Every document regarding the incident
should be dated and signed by the incident handler, as such information can also be used as evidence in a
court of law if legal prosecution is pursued. Keeping records and evidence relating to information
security incidents is therefore important: the incident response team should maintain records about the status of
incidents, along with other pertinent information. Using an application or a database, such as an issue
tracking system, helps ensure that incidents are handled and resolved in a timely manner.
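As a sketch, the time-stamped logbook might look like the following; the field names are illustrative, and the handler name stands in for a dated signature:

```python
from datetime import datetime, timezone

class IncidentLogbook:
    """Every entry is time-stamped and attributed to a handler."""
    def __init__(self, incident_id: str):
        self.incident_id = incident_id
        self.entries = []

    def log(self, handler: str, note: str) -> dict:
        entry = {
            "incident": self.incident_id,
            "time": datetime.now(timezone.utc).isoformat(),
            "handler": handler,  # stands in for the handler's signature
            "note": note,
        }
        self.entries.append(entry)
        return entry

book = IncidentLogbook("INC-2024-007")
book.log("handler1", "IDS alert received; snapshot taken")
book.log("handler1", "Host isolated from network")
print(len(book.entries))  # 2
```

In practice these entries would feed the issue tracking system mentioned above, so that every step from detection to resolution remains traceable.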

Audio and Video Documentation Strategies

Recording details with evidence-gathering accessories, including hard-bound notebooks, digital cameras,
audio recorders, and chain of custody forms, is one of the common strategies used to track incidents and
security. Laptops, audio recorders, and digital cameras can serve the same purpose: documenting system
events, conversations, and observed changes in files leads to more efficient, more systematic, and less
error-prone handling of the problem.

Updating the status of information security incidents

The incident handling team may need to provide status updates to certain parties, and in some cases the
entire organization. The team should plan and prepare several communication methods, including
out-of-band methods (e.g., in person, paper), and select the methods that are appropriate for a particular
incident.
Possible communication methods include:
• Email
• Website (internal, external, or portal)
• Telephone calls
• In person (e.g., daily briefings)
• Voice mailbox greeting (e.g., set up a separate voice mailbox for incident updates, and update the
greeting message to reflect the current incident status; use the help desk’s voice mail greeting)
• Paper (e.g., post notices on bulletin boards and doors, hand out notices at all entrance points)

Incident status template

An incident status report should state the current status of the incident so that communications
with the media are consistent and up to date. The template may include the following details:
• current status of the incident (new, in progress, forwarded for investigation, resolved, etc.)
• summary of the incident
• indicators related to the incident
• other incidents related to this incident
• actions taken by all incident handlers on this incident
• chain of custody, if applicable
• impact assessments related to the incident

• contact information for other involved parties (e.g., system owners, system administrators)
• list of evidence gathered during the incident investigation
• comments from incident handlers
• next steps to be taken (e.g., rebuild the host, upgrade an application)
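The template above can be sketched as a simple record; every key mirrors one bullet, and the field names are illustrative:

```python
# Each field mirrors a bullet in the incident status template.
def new_status_report(summary: str) -> dict:
    return {
        "status": "new",  # new / in progress / forwarded / resolved
        "summary": summary,
        "indicators": [],
        "related_incidents": [],
        "actions_taken": [],
        "chain_of_custody": None,
        "impact_assessment": None,
        "contacts": [],
        "evidence": [],
        "handler_comments": [],
        "next_steps": [],
    }

report = new_status_report("Phishing campaign against finance staff")
report["status"] = "in progress"
report["actions_taken"].append("Blocked sender domain at mail gateway")
print(report["status"])  # in progress
```

Keeping every incident in the same shape makes it easy to load the reports into an issue tracking system or database.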

Preparing reports on information security incidents

Another important post-incident activity is creating a follow-up report for each incident, which can be
quite valuable for future use. The report provides a reference that can be used to assist in handling similar
incidents, and its estimate of monetary damage may become the basis for subsequent prosecution activity
by entities such as the U.S. Attorney General’s office. Follow-up reports should be kept for a period of
time as specified in record retention policies.

Incident report templates

The incident report template should provide a formal chronology of events, including time-stamped
information such as log data from systems (important for legal reasons) and a monetary estimate of the
amount of damage the incident caused. Additionally, the following information may also be a part of the report:

• Number of Incidents Handled
• Time Per Incident
• Objective Assessment of Each Incident
• Subjective Assessment of Each Incident
Organizations should specify which incidents must be reported, when they must be reported, and to
whom. The parties most commonly notified are the CIO, head of information security, local information
security officer, other incident response teams within the organization, and system owners.

Submitting information security reports

Security follow-up reports are usually kept for a period of time as specified in record retention policies.
Most organizations have data retention policies that state how long certain types of data may be kept. For
example, an organization may state that email messages should be retained for only 180 days. If a disk
image contains thousands of emails, the organization may not want the image to be kept for more than
180 days unless it is absolutely necessary.

Step 16: Incident Damage and Cost Assessment

After the incident is adequately handled, the organization issues a report that details the cause and cost of
the incident and the steps the organization should take to prevent future incidents. The incident data,

particularly the total hours of involvement and the cost, may be used to justify additional funding of the
incident response team. Cost of storing evidence and the cost of retaining functional computers that can
use the stored hardware and media can be substantial. Cost is a major factor, especially if employees are
required to be onsite 24/7. Organizations may fail to include incident response-specific costs in budgets,
such as sufficient funding for training and maintaining skills.

Step 17: Review and Update the Response Policies

The organization must review and update response policies and related activities, gather information from
the handlers, provide incident updates to other groups, and ensure that the team’s needs are met. The
gamut of the work may also include periodically reviewing and updating threat information
through briefings, web postings, and mailing lists published by authorized agencies or public bodies.

Step 18: Training and Awareness

Organizations must create, provision, and operate a formal incident response capability.

Security Awareness and Training Checklist

Establishing incident response training and awareness should include the following actions:
• Creating an incident response training and awareness policy and plan
• Developing procedures for performing incident handling and reporting
• Setting guidelines for communicating with outside parties regarding incidents
• Training IT staff on complying with the organization’s security standards and making users aware of
policies and procedures regarding appropriate use of networks, systems, and applications
• Training users of SOPs (delineations of the specific technical processes, techniques,
checklists, and forms)
• Staffing and training the incident response team
• Providing a solid training program for new employees
• Training staff to maintain networks, systems, and applications in accordance with the organization’s
security standards
• Raising awareness of policies and procedures regarding appropriate use of networks, systems, and applications

Incident response knowledge base

The knowledge base is consolidated incident data collected into a common incident database;
organizations can create their own knowledge base or refer to those established by several groups and
organizations. Although it is possible to build a knowledge base with a complex structure, a simple
approach can be effective. Text documents, spreadsheets, and relatively simple databases provide
effective, flexible, and searchable mechanisms for sharing data among team members. The knowledge
base should also contain a variety of information, including explanations of the significance and validity
of precursors and indicators, such as IDPS alerts, operating system log entries, and application error
messages.

Accessing and updating knowledge base

An incident handler may need to access knowledge base information quickly during incident analysis; a
centralized knowledge base provides a consistent, maintainable source of information. The knowledge
base should include general information, such as data on precursors and indicators of previous incidents.

Importance of Tracking Progress

Several groups collect and consolidate incident data from various organizations into incident databases.
This information sharing may take place in many forms, such as trackers and real-time blacklists. The
organization can also check its own knowledge base or issue tracking system for related activity.

Corrective and Preventative Actions for Information Security Incidents

In the absence of security controls, higher volumes of incidents may occur, overwhelming the incident
response team. An incident response team may be able to identify problems that the organization is
otherwise not aware of; the team can play a key role in risk assessment and training by identifying gaps.

The following text provides a brief overview of some of the main recommended practices for
securing networks, systems, and applications:
• Perform periodic risk assessments of systems and applications to determine what risks are posed by
combinations of threats and vulnerabilities
• Harden hosts appropriately using standard configurations while keeping each host properly patched;
hosts should be configured to follow the principle of least privilege—granting users only the privileges
necessary for performing their authorized tasks
• The network perimeter should be configured to deny all activity that is not expressly permitted
• Use software to detect and stop malware

Data Backup
Backup is the activity of copying files or databases so that they will be preserved in case of equipment
failure or other catastrophe. Backup is usually a routine part of the operation of large businesses with
mainframes as well as the administrators of smaller business computers. For personal computer users,
backup is also necessary but often neglected. The retrieval of files you backed up is called restoring them.


All electronic information considered of institutional value should be copied onto secure storage media
on a regular basis (i.e., backed up) for disaster recovery and business resumption. Special backup needs
identified through technical risk analysis that exceed these requirements should be accommodated on an
individual basis.
Data custodians are responsible for providing adequate backups to ensure the recovery of data and
systems in the event of failure. Backup provisions allow business processes to be resumed in a reasonable
amount of time with minimal loss of data. Since hardware and software failures can take many forms, and
may occur over time, multiple generations of institutional data backups need to be maintained.

Types of Backup
Full backup
Full backup is a method of backup where all the files and folders selected for the backup are backed
up. It is commonly used as an initial or first backup, followed by subsequent incremental or differential
backups. After several incremental or differential backups, it is common to start over with a fresh full
backup again. Some also like to do full backups for all backup runs, typically for smaller folders or
projects that do not occupy too much storage space.
Restores are fast and easy to manage, as the entire list of files and folders is in one backup set, and it is
easy to maintain and restore different versions.
Backups can take very long, as each file is backed up again every time the full backup is run. Full backups
consume the most storage space compared to incremental and differential backups; the exact same files
are stored repeatedly, resulting in inefficient use of storage.

Incremental backup
Incremental backup is a backup of all changes made since the last backup. The last backup can be a full
backup or simply the last incremental backup. With incremental backups, one full backup is done first,
and subsequent backup runs cover just the changed files and new files added since the last backup.
Backups are much faster, and storage space is used efficiently as files are not duplicated; much less
storage space is used compared to running full backups and even differential backups.
Restores are slower than with full backups and differential backups, and a little more complicated: all
backup sets (the first full backup and all incremental backups) are needed to perform a restore.

Differential backups
Differential backups fall in the middle between full backups and incremental backups. A differential
backup is a backup of all changes made since the last full backup. With differential backups, one full
backup is done first, and subsequent backup runs cover the changes made since the last full backup. The
result is a much faster backup than a full backup for each backup run. Storage space used is less than for a
full backup but more than with incremental backups. Restores are slower than with a full backup but
usually faster than with incremental backups.
Backups are much faster than full backups, and storage is used more efficiently than with full backups,
since only files changed since the last full backup are copied on each differential backup run. Restores are
faster than with incremental backups.
Backups are slower than incremental backups, and storage use is not as efficient as with incremental
backups: all files added or edited after the initial full backup are duplicated again in each subsequent
differential backup. Restores are slower than with full backups and a little more complicated, but simpler
than with incremental backups. Only the full backup set and the last
differential backup are needed to perform a restore.
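The three strategies above differ only in which cutoff time decides whether a file is copied. A minimal sketch, using a hypothetical file catalog with last-modified times:

```python
from datetime import datetime

# Hypothetical catalog: file name -> last-modified time.
files = {
    "report.doc": datetime(2024, 1, 10),
    "budget.xls": datetime(2024, 1, 14),
    "notes.txt": datetime(2024, 1, 2),
}

last_full = datetime(2024, 1, 5)     # time of the last full backup
last_backup = datetime(2024, 1, 12)  # time of the most recent backup of any kind

def select(strategy: str) -> list:
    if strategy == "full":
        return sorted(files)               # everything, every run
    if strategy == "differential":
        cutoff = last_full                 # changes since the last full backup
    elif strategy == "incremental":
        cutoff = last_backup               # changes since the last backup of any kind
    else:
        raise ValueError(strategy)
    return sorted(name for name, mtime in files.items() if mtime > cutoff)

print(select("full"))          # ['budget.xls', 'notes.txt', 'report.doc']
print(select("differential"))  # ['budget.xls', 'report.doc']
print(select("incremental"))   # ['budget.xls']
```

Real backup software also tracks new, renamed, and deleted files, but the cutoff comparison is the core of the trade-off between backup time and restore complexity.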

Mirror backups
Mirror backups are as the name suggests a mirror of the source being backed up. With mirror backups,
when a file in the source is deleted, that file is eventually also deleted in the mirror backup. Because of
this, mirror backups should be used with caution as a file that is deleted by accident, sabotage or through
a virus may also cause that same file in mirror to be deleted as well. Some do not consider a mirror to be
a backup. Many online backup services offer a mirror backup with a 30 day delete. This means that when
you delete a file on your source, that file is kept on the storage server for at least 30 days before it is
eventually deleted. This helps strike a balance offering a level of safety while not allowing the backups to
keep growing since online storage can be relatively expensive. Many backup software utilities do provide
support for mirror backups.
The backup is clean and does not contain old and obsolete files.
There is a chance that files deleted from the source accidentally, by sabotage, or through a virus may also
be deleted from the backup mirror.
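The delayed-delete idea described above can be sketched as follows; the grace-period bookkeeping here is an illustration, not any particular service's implementation:

```python
from datetime import datetime, timedelta

GRACE = timedelta(days=30)  # deleted files survive in the mirror this long

def sync_mirror(source: set, mirror: dict, now: datetime) -> dict:
    """Mirror the source; a file missing from the source is kept for GRACE days.

    `mirror` maps file name -> time the file was first noticed missing
    (None while the file is still present in the source).
    """
    updated = {}
    for name in source:
        updated[name] = None          # present in source: not scheduled for deletion
    for name, deleted_at in mirror.items():
        if name in source:
            continue
        when = deleted_at or now      # first noticed missing: start the clock
        if now - when < GRACE:
            updated[name] = when      # keep until the grace period expires
    return updated

now = datetime(2024, 3, 1)
mirror = {"a.txt": None, "b.txt": None}
mirror = sync_mirror({"a.txt"}, mirror, now)          # b.txt marked, still kept
print(sorted(mirror))  # ['a.txt', 'b.txt']
mirror = sync_mirror({"a.txt"}, mirror, now + timedelta(days=31))
print(sorted(mirror))  # ['a.txt']
```

The grace period is what gives a mirror some of the safety of a true backup without letting the storage grow without bound.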

Full PC backup

Full PC backup, or full computer backup, typically involves backing up entire images of the computer’s
hard drives rather than individual files and folders. The drive image is like a snapshot of the drive. It may
be stored compressed or uncompressed. With other file backups, only the user’s documents, pictures,
videos, and music files can be restored, while the operating system, programs, etc. need to be reinstalled
from their source download or disc media. With a full PC backup, however, you can restore the hard
drives to their exact state when the backup was done. Hence, not only can the documents, pictures,
videos, and audio files be restored, but also the operating system, hardware drivers, system files, registry,
programs, emails, etc. In other words, a full PC backup can restore a crashed computer to its exact state at
the time the backup was made. Full PC backups are sometimes called “drive image backups”.
A crashed computer can be restored in minutes with all programs, databases, emails, etc. intact. There is
no need to install the operating system and programs or redo settings, making this an ideal backup
solution for a hard drive failure.
It may not be possible to restore on a completely new computer with a different motherboard, CPU,
display adapters, sound card, etc. Any problems that were present on the computer (like viruses,
misconfigured drivers, unused programs, etc.) at the time of the backup may still be present after a full restore.

Local backup
A local backup is any backup where the storage medium is kept close at hand. Typically, the storage
medium is plugged in directly to the source computer being backed up or is connected through a local
area network to the source being backed up.
Local backup offers good protection from hard drive failures, virus attacks, accidental deletes, and
deliberate employee sabotage on the source data. Backups and restores are very fast. Storage cost can be
very cheap when the right storage medium is used, like external hard drives, and data transfer cost to the
storage medium can be negligible. Since the backups are stored close by, they are very conveniently
obtained whenever needed for backups and restores, and there is full internal control over the backup
storage media and the security of the data on it. There is no need to entrust the storage media to third parties.
Since the backup is stored close to the source, it does not offer good protection against theft, fire,
flood, earthquakes, and other natural disasters. When the source is damaged by any of these
circumstances, there is a good chance the backup will also be damaged.

Offsite Backup

Any backup where the backup storage medium is kept at a different geographic location from the source
is known as an offsite backup. The backup may be done locally at first on the usual storage devices but
once the storage medium is brought to another location, it becomes an offsite backup.
Offsite backup offers additional protection when compared to local backup, such as protection from theft,
fire, flood, earthquakes, hurricanes, and more.
Except for online backups, it requires more diligence to bring the storage media to the offsite location,
and it may cost more as people usually need to rotate between several storage devices. For example, when
keeping backups in a bank deposit box, people usually use two or three hard drives and rotate between
them, so at least one drive is in storage at any time while another is removed to perform the backup.
Because of increased handling of the storage devices, the risk of damaging delicate hard disks is higher.
(This does not apply to online storage.)

Online backup
An online backup is a backup done on an ongoing basis to a storage medium that is always connected to
the source being backed up. The term “online” refers to the storage device or facility being always
connected. Typically the storage medium or facility is located offsite and connected to the backup source
by a network or Internet connection. It does not involve human intervention to plug in drives and storage
media for backups to run. Many commercial data centers now offer this as a subscription service to
consumers. The storage data centers are located away from the source being backed up and the data is
sent from the source to the storage center securely over the Internet. Typically a client application is
installed on the source computer being backed up. Users can define what folders and files they want to
back up and at what times of the day they want the backups to run. The data may be compressed and
encrypted before being sent over the Internet to the storage data center. The storage facility is a
commercial data center located away from the source computers being backed up. Typically these are
built to certain fire and earthquake safety specifications. They have higher security standards, with CCTV
and round-the-clock monitoring. They typically have backup generators to deal with grid power outages,
and the facility is temperature controlled. Data is not just stored on one physical medium but replicated
across several devices. These facilities are usually serviced by multiple redundant Internet connections, so
there is no single point of failure to bring the service down.

Online backup offers the best protection against fires, theft, and natural disasters. Because data is
replicated across several storage media, the risk of data loss from hardware failure is very low. Because
backups are frequent or continuous, data loss is very minimal compared to other backups that are run less
frequently. Because it is online, it requires little human or manual interaction after it is set up.
It is a more expensive option than local backups, and initial or first backups can be a slow process
spanning a few days or weeks depending on Internet connection speed and the amount of data backed up.
It can also be slow to restore.

Remote backups
Remote backups are a form of offsite backup, with the difference being that you can access, restore, or
administer the backups while located at your source location or another physical location. The term
“remote” refers to the ability to control or administer the backups from another location. You do not need
to be physically present at the backup storage facility to access the backups. Putting your backup hard
drive in your bank safe deposit box would not be considered a remote backup: you cannot administer or
access it without making a trip to the bank. The term “remote backup” is often used loosely and
interchangeably with “online backup” and “cloud backup”.
Remote backup offers much better protection from natural disasters than local backups, and easier
administration, as it does not need a physical trip to the offsite backup location.
It is more expensive than local backups, and can take longer to back up and restore than local backups.

Cloud backup
Cloud backup is a term often used loosely and interchangeably with Online Backup and Remote Backup.
This is a type of backup where data is backed up to a storage server or facility connected to the source via
the Internet. With the proper login credentials, that backup can then be accessed securely from any other
computer with an Internet connection. The term “cloud” refers to the backup storage facility being
accessible from the Internet.
Since this is an offsite backup, it offers protection from fire, floods, earthquakes, and other natural
disasters. It is easy to connect to and access the backup with just an Internet connection. Data is replicated
across several storage devices and usually serviced by multiple Internet connections, so the system is not
at the mercy of a single point of failure. When the service is provided by a good commercial data center,
the service is managed and the protection is unparalleled.
Cloud backup is more expensive than local backups, and can take longer to back up and restore.

FTP Backup
This is a kind of backup where the backup is done via the File Transfer Protocol (FTP) over the Internet
to an FTP server. Typically the FTP server is located in a commercial data center away from the source
data being backed up. When the FTP server is located at a different location, this is another form of
offsite backup.
Since this is an offsite backup, it offers protection from fire, floods, earthquakes, and other natural
disasters. It is easy to connect to and access the backup with just an Internet connection.
It is more expensive than local backups, and can take longer to back up and restore; backup and restore
times are dependent on the Internet connection speed.
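A minimal sketch of an FTP upload using Python's standard ftplib; the host, credentials, and file path would come from your own backup configuration, and they are illustrative here (plain FTP sends data unencrypted, so FTP_TLS is usually preferable):

```python
from ftplib import FTP  # ftplib also provides FTP_TLS for an encrypted channel
from pathlib import Path

def upload_backup(host: str, user: str, password: str, local_file: Path) -> str:
    """Upload one backup file to an FTP server; returns the remote file name."""
    remote_name = local_file.name
    with FTP(host) as ftp:
        ftp.login(user, password)
        with local_file.open("rb") as f:
            # STOR transfers the file to the server in binary mode
            ftp.storbinary(f"STOR {remote_name}", f)
    return remote_name

# The transfer command can be built and inspected without connecting anywhere:
command = f"STOR {Path('/backups/2024-01-15.tar.gz').name}"
print(command)  # STOR 2024-01-15.tar.gz
```

Calling `upload_backup("ftp.example.com", "user", "secret", Path("backup.tar.gz"))` would perform the actual transfer against a reachable server.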

Backup Procedures
The 3-2-1 Rule
The simplest way to remember how to back up your data safely is to use the 3-2-1 rule:
• Keep 3 copies of any important file (a primary and two backups).
• Keep the files on 2 different media types (such as hard drive and optical media), to protect against
different types of hazards.
• Store 1 copy offsite (or at least offline).
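The rule can be expressed as a simple check; the media labels below are illustrative:

```python
def satisfies_3_2_1(copies: list) -> bool:
    """Check a list of copies, each e.g. {"media": "hdd", "offsite": False}."""
    enough_copies = len(copies) >= 3                 # 3 copies
    media_types = {c["media"] for c in copies}
    enough_media = len(media_types) >= 2             # on 2 media types
    has_offsite = any(c["offsite"] for c in copies)  # 1 copy offsite
    return enough_copies and enough_media and has_offsite

copies = [
    {"media": "internal-hdd", "offsite": False},  # primary
    {"media": "external-hdd", "offsite": False},  # first backup
    {"media": "optical", "offsite": True},        # second backup, kept offsite
]
print(satisfies_3_2_1(copies))      # True
print(satisfies_3_2_1(copies[:2]))  # False (only 2 copies, nothing offsite)
```

Running such a check against a backup inventory makes it easy to spot which of the three conditions a given file fails.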
The data backup procedures must include:
• Frequency
• Data backup retention
• Testing
• Media replacement
• Recovery time
• Roles and responsibilities

Local data backup procedures must include the following:
• Data Backup Retention - Retention of backup data must meet System and institution requirements for
critical data.
• Testing - Restoration of backup data must be performed and validated on all types of media in use.
• Media Replacement - Backup media should be replaced according to manufacturer recommendations.
• Recovery Time - The recovery time objective (RTO) must be defined and support business requirements.
• Roles and Responsibilities - Appropriate roles and responsibilities must be defined for data backup and
restoration to ensure timeliness and accountability.
• Offsite Storage - Removable backup media taken offsite must be stored in an offsite location that is
insured and bonded, or in a locked, media-rated fire safe.

• Onsite Storage - Removable backup media kept onsite must be stored in a locked container with
restricted physical access.
• Media Destruction - How to dispose of data storage media in various situations.
• Encryption - Non-public data stored on removable backup media must be encrypted. Non-public data
must be encrypted in transit and at rest when sent to an offsite backup facility, either physically or via
electronic transmission.
• Third Parties - Third parties’ backup handling and storage procedures must meet System or institution
policy or procedure requirements related to data protection, security, and privacy. These procedures must
cover contract terms that include bonding, insurance, disaster recovery planning and requirements for
storage facilities with appropriate environmental controls.

Archive: An archive is a collection of historical data specifically selected for long-term retention and
future reference. It is usually data that is no longer actively used, and is often stored on removable media.
Backup: A copy of data that may be used to restore the original in the event the latter is lost or damaged
beyond repair. It is a safeguard for data that is being used. Backups are not intended to provide a means to
archive data for future reference or to maintain a versioned history of data to meet specific retention
requirements.
Critical Data: Data that needs to be preserved in support of the institution’s ability to recover from a
disaster or to ensure business continuity.
Data: Information collected, stored, transferred or reported for any purpose, whether in computers or in
manual files. Data can include financial transactions, lists, identifying information about people, projects
or processes, and information in the form of reports. Because data has value, and because it has various
sensitivity classifications defined by federal law and state statute, it must be protected.
Destruction: Destruction of media includes disintegration, incineration, pulverizing, shredding, and
melting. Information cannot be restored in any form following destruction.
Media-Rated Fire Safe: A safe designed to maintain internal temperature and humidity levels low enough
to prevent damage to CDs, tapes, and other computer storage devices in a fire. Safes are rated based on
the length of time the contents of a safe are preserved while directly exposed to fire and high temperatures.
Information Technology Resources: Facilities, technologies, and information resources used for System
information processing, transfer, storage, and communications. Included in this definition are computer
labs, classroom technologies, computing and electronic communications devices and services, such as
modems, e-mail, networks, telephones (including cellular), voice mail, fax transmissions, video,
multimedia, and instructional materials.
This definition is not all-inclusive, but rather reflects examples of System equipment, supplies, and
services.
Recovery Point Objective (RPO): Acceptable amount of service or data loss measured in time.
The RPO is the point in time prior to service or data loss that service or data will be recovered to.
Recovery Time Objective (RTO): Acceptable duration from the time of service or data loss to the time of
restoration.

Automated Backup
If the data backup plan defines a daily interval, making manual backups becomes quite time consuming,
and one may discover now and then that backups have been skipped because something else more
important came up at the same time. It is better to foresee the risk of not making backups and automate
the whole backup process as much as possible.
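A sketch of the scheduling decision at the heart of an automated backup: given the interval the plan defines, decide whether a backup is due (the dates are illustrative):

```python
from datetime import datetime, timedelta
from typing import Optional

def backup_due(last_backup: Optional[datetime],
               interval: timedelta,
               now: datetime) -> bool:
    """True when the scheduled interval has elapsed, or no backup exists yet."""
    if last_backup is None:
        return True
    return now - last_backup >= interval

daily = timedelta(days=1)
now = datetime(2024, 1, 15, 2, 0)
print(backup_due(None, daily, now))                         # True
print(backup_due(datetime(2024, 1, 14, 2, 0), daily, now))  # True
print(backup_due(datetime(2024, 1, 15, 1, 0), daily, now))  # False
```

A scheduler (cron, Task Scheduler, or a backup agent) would run this check and launch the backup job whenever it returns True, removing the human factor from the routine.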

Types of storage
Local Storage Options
1. External Hard Drive
These are hard drives similar to the type installed within a desktop or laptop computer. The difference is
that they can be plugged into the computer, or removed and kept separate from the main computer.
External hard drives are a very good option for local backups of large amounts of data, and the cheapest
storage option in terms of cost per GB. They are very reliable when handled with care, but can be very
delicate and may be damaged if dropped or through an electrical surge.

2. Solid State Drive (SSD)

Solid State Drives look and function much like traditional mechanical/magnetic hard drives, but the
similarities stop there. Internally, they are completely different: they have no moving parts or rotating
platters, relying solely on semiconductors and electronics for data storage, which makes them more
reliable and robust than traditional magnetic drives. Having no moving parts also means that they use less
power than traditional hard drives and are much faster too. With the prices of Solid State Drives coming
down and their lower power usage, SSDs are used extensively in laptops and mobile devices. External
SSDs are also a viable option for data backups.
Faster read and write performance, more robust and reliable than traditional magnetic hard drives, and
highly portable, so they can easily be taken offsite. However, they are still relatively expensive compared
to traditional hard drives, and their storage space is typically smaller.

3. Network Attached Storage (NAS)
A NAS is simply one or more regular IDE or SATA hard drives plugged into an array storage enclosure
and connected to a network router or hub through an Ethernet port. Some of these NAS enclosures have
ventilating fans to protect the hard drives from overheating.
A very good option for local backups, especially for networks and small businesses. As several hard
drives can be plugged in, a NAS can hold very large amounts of data and can be set up with redundancy
(RAID), increasing reliability and/or read and write performance. Depending on the RAID level used, the
NAS can still function even if one hard drive in the RAID set fails, or two hard drives can be set up to
double the read and write speed of a single hard drive. The drives are always connected and available to
the network, making the NAS a good option for implementing automated scheduled backups.
Significantly more expensive than single external hard drives, and difficult to bring offsite, making it very
much a local backup and hence still susceptible to events like theft, floods, fire, etc.

4. USB Thumb Drive or Flash Drive

These are similar to Solid State Drives except that they are much smaller in size and capacity. They have
no moving parts, making them quite robust, and they are extremely portable: they can fit on a keychain.
They are ideal for backing up a small amount of data that needs to be brought with you on the go.
The most portable storage option; fitting on a keychain makes it an offsite backup when you bring it with
you, and it is much more robust than traditional magnetic hard drives. However, it is relatively expensive
per GB, so it can only be used for backing up a small amount of data.

5. Optical Drive (CD/ DVD)

CDs and DVDs are ideal for storing a list of songs, movies, media or software for distribution or for
giving to a friend due to the very low cost per disk. They do not make good storage options for backups
due to their shorter lifespan, small storage space and slower read and write speeds.
Low cost per disk, but a relatively shorter lifespan than other storage options and not as reliable as
options like external hard disks and SSDs. One damaged disk in a backup set can make the whole backup
unusable.
Remote Storage Options

6. Cloud Storage
Cloud storage is storage space in a commercial data center, accessible from any computer with Internet
access. It is usually provided by a service provider, with a limited storage space offered free and more
space available for a subscription fee. Examples of service providers are Amazon S3, Google Drive, and
SkyDrive.

A very good offsite backup, not affected by local events and disasters such as theft, floods, fire, etc.

More expensive than traditional external hard drives, and often requires an ongoing subscription.
Requires an Internet connection to access the cloud storage, and is much slower than local backups.

Features of a Good Backup Strategy

The following are features to aim for when designing your backup strategy:
• Able to recover from data loss in all circumstances: hard drive failure, virus attacks, theft, accidental
deletion or data entry errors, sabotage, fire, flood, earthquakes and other natural disasters.
• Able to recover to an earlier state if necessary, for example after data entry errors or accidental deletion.
• Able to recover as quickly as possible with minimum effort, cost and data loss.
• Requires minimum ongoing human interaction and maintenance after the initial setup, and hence able to
run automated or semi-automated.

Planning Your Backup Strategy

1. What to Backup
The first step in planning your backup strategy is identifying what needs to be backed up: the files and
folders that you cannot afford to lose. This involves going through your documents, databases, pictures,
videos, music and program setup or installation files. Some of these media, like pictures and videos, may
be irreplaceable; others, like documents and databases, may be tedious or costly to recover from hard
copies. These are the files and folders that need to be in your backup plan.
2. Where to Backup to
This is another fundamental consideration in your backup plan. In light of some content being
irreplaceable, the backup strategy should protect against all events, and hence a good backup strategy
should employ a combination of local and offsite backups. Local backups are needed for their lower cost,
allowing you to back up a huge amount of data, and for their very fast restore speed, allowing you to get
back online in minimal time. Offsite backups are needed for their wider scope of protection from major
disasters or catastrophes not covered by local backups.

3. When to Backup
Frequency: How often you backup your data is the next major consideration when planning your backup
policy. Some folders are fairly static and do not need to be backed up very often. Other folders are
frequently updated and should correspondingly have a higher backup frequency like once a day or more.
Your decision regarding backup frequency should be based on a worst-case scenario. For example, if
tragedy struck just before the next backup was scheduled to run, how much data would you lose since the
last backup? How long would it take, and how much would it cost, to re-key that lost data? Backup Start
Time: You would typically want to run your backups when there’s minimal usage on the computers.
Backups may consume some computer resources that may affect performance. Also, files that are open or
in use may not get backed up. Scheduling backups to run after business hours is a good practice, provided
the computer is left on overnight. Backups will not normally run when the computer is in “sleep” or
“hibernate mode”. Some backup software will run immediately upon boot up if it missed a scheduled
backup the previous night. So if the first hour on a business day morning is your busiest time, you would
not want your computer doing its backups then. If you always shut down or put your computer in sleep or
hibernate mode at the end of a work day, maybe your lunch time would be a better time to schedule a
backup. Just leave the computer on but logged-off when you go out for lunch. Since servers are usually
left running 24 hours, overnight backups for servers are a good choice.

4. Backup Types
Most backup software offers several backup types, such as full backup, incremental backup and
differential backup. Each backup type has its own advantages and disadvantages. Full backups are useful
for projects, databases or small websites where many different files (text, pictures, videos, etc.) are
needed to make up the entire project and you may want to keep different versions of the project.
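
As a rough sketch of the distinction, an incremental backup copies only files modified since the last backup, while a full backup is the special case where everything is copied. The function and parameter names below are illustrative, not from any particular product:

```python
import shutil
from pathlib import Path

def incremental_backup(source: str, dest: str, last_backup_time: float) -> list:
    """Copy files under source modified after last_backup_time (a Unix
    timestamp) into dest, preserving the folder structure.
    Passing last_backup_time=0 makes this a full backup."""
    copied = []
    for path in Path(source).rglob("*"):
        if path.is_file() and path.stat().st_mtime > last_backup_time:
            target = Path(dest) / path.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)  # copy2 preserves file timestamps
            copied.append(target)
    return copied
```

A differential backup would compare against the time of the last full backup rather than the last backup of any type.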

5. Compression & Encryption

As part of your backup plan, you also need to decide if you want to apply any compression to your
backups. For example, when backing up to an online service, you may want to apply compression to save
on storage cost and upload bandwidth. You may also want to apply compression when backing up to
storage devices with limited space like USB thumb drives.
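
For instance, Python's standard tarfile module can apply gzip compression while creating a backup archive; this sketch assumes nothing beyond the standard library:

```python
import tarfile
from pathlib import Path

def compress_backup(source: str, archive_path: str) -> int:
    """Pack the source folder into a gzip-compressed tar archive and
    return the archive size in bytes."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(source, arcname=Path(source).name)
    return Path(archive_path).stat().st_size
```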

If you are backing up very private or sensitive data to an offsite service, some backup tools and services
also offer support for encryption. Encryption is a good way to protect your content should it fall into
malicious hands. When applying encryption, always ensure that you remember your encryption key: you
will not be able to restore your backups without the encryption key or passphrase.

6. Testing Your Backup

A backup is only worth doing if it can be restored when you need it most. It is advisable to periodically
test your backup by attempting to restore it. Some backup utilities offer a validation option for your
backups. While this is a welcome feature, it is still a good idea to test your backup with an actual restore
once in a while.
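
An actual restore is the definitive test, but a lighter-weight spot check can compare checksums between source and backup. A sketch using SHA-256 from Python's standard library:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source: str, backup: str) -> list:
    """Return relative paths of files missing from the backup or whose
    contents differ from the source copy."""
    problems = []
    for path in Path(source).rglob("*"):
        if path.is_file():
            rel = path.relative_to(source)
            copy = Path(backup) / rel
            if not copy.is_file() or file_digest(copy) != file_digest(path):
                problems.append(rel)
    return problems
```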

7. Backup Utilities & Services

Simply copying and pasting files and folders to another drive would be considered a backup. However,
the aim of a good backup plan is to set it up once and leave it to run on its own. You would check up on it
occasionally, but the backup strategy should not depend on your ongoing interaction to continue backing
up. A good backup plan would therefore incorporate the use of good-quality, proven backup software
utilities and backup services.

Event Logs - Concepts
A log is a record of the events occurring within an organization’s systems and networks. Logs are
composed of log entries; each entry contains information related to a specific event that has occurred
within a system or network. Originally, logs were used primarily for troubleshooting problems, but logs
now serve many functions within most organizations, such as optimizing system and network
performance, recording the actions of users, and providing data useful for investigating malicious activity.
Logs have evolved to contain information related to many different types of events occurring within
networks and systems. Within an organization, many logs contain records related to computer security;
common examples of these computer security logs are audit logs that track user authentication attempts
and security device logs that record possible attacks.

Key Concepts
Log management: Log management refers to the broad practice of collecting, aggregating and analysing
network data for a variety of purposes. Data logging devices collect incredible amounts of information on
security, operational and application events — log management comprises the tools to search and parse
this data for trends, anomalies and other relevant information.

Security information event management (SIEM): Like log management, SIEM also involves the
collection and analysis of data. The key distinction to be made is that SIEM is a specialized tool for
information security. SIEM appliances enable event reduction and real-time alerting, and they provide
specific workflows to address security breaches as they occur. Another key feature of SIEM is the
incorporation of non-event based data, such as vulnerability scanning reports, for correlation and analysis.
A lot of money has been invested in security products such as firewalls, intrusion detection, and strong
authentication over the past several years. However, system penetration attempts continue to occur and go
unnoticed until it is too late. It is not that security countermeasures are ineffective against intrusive
activity. Indeed, they can be very effective within an organization where security policies and procedures
require analysis of security events and appropriate incident response. However, deploying and analysing a
single device in an effort to maintain situational awareness with respect to the state of security within an
organization is the “computerized version of tunnel vision”. Security events must be analysed from as
many sources as possible in order to assess threats and formulate appropriate responses. Extraordinary
levels of security awareness can be attained in an organization’s network simply by listening to what its
devices are telling you.
• Security software logs primarily contain computer security-related information.
• Operating system logs and application logs typically contain a variety of information, including
computer security-related data.

Security Software
Most organizations use several types of network-based and host-based security software to detect
malicious activity, protect systems and data, and support incident response efforts. Accordingly, security
software is a major source of computer security log data. Common types of network-based and host-
based security software include the following:

Antimalware Software
The most common form of antimalware software is antivirus software, which typically records all
instances of detected malware, file and system disinfection attempts, and file quarantines. Additionally,
antivirus software might also record when malware scans were performed and when antivirus signature
or software updates occurred. Antispyware software and other types of antimalware software (e.g.,
rootkit detectors) are also common sources of security information.

Intrusion Detection and Intrusion Prevention Systems
Intrusion detection and intrusion prevention systems record detailed information on suspicious behaviour
and detected attacks, as well as any actions intrusion prevention systems performed to stop malicious
activity in progress. Some intrusion detection systems, such as file integrity checking software, run
periodically instead of continuously, so they generate log entries in batches instead of on an ongoing
basis.

Remote Access Software

Remote access is often granted and secured through virtual private networking (VPN). VPN systems
typically log successful and failed login attempts, as well as the dates and times each user connected and
disconnected, and the amount of data sent and received in each user session. VPN systems that
support granular access control, such as many Secure Sockets Layer (SSL) VPNs, may log detailed
information about the use of resources.

Web Proxies
Web proxies are intermediate hosts through which Web sites are accessed. Web proxies make Web page
requests on behalf of users, and they cache copies of retrieved Web pages to make additional accesses to
those pages more efficient. Web proxies can also be used to restrict Web access and to add a layer of
protection between Web clients and Web servers. Web proxies often keep a record of all URLs accessed
through them.

Vulnerability Management Software

Vulnerability management software, which includes patch management software and vulnerability
assessment software, typically logs the patch installation history and vulnerability status of each host,
which includes known vulnerabilities and missing software updates. Vulnerability management software
may also record additional information about hosts’ configurations. Vulnerability management software
typically runs occasionally, not continuously, and is likely to generate large batches of log entries.

Authentication Servers
Authentication servers, including directory servers and single sign-on servers, typically log each
authentication attempt, including its origin, username, success or failure, and date and time.

Routers
Routers may be configured to permit or block certain types of network traffic based on a policy. Routers
that block traffic are usually configured to log only the most basic characteristics of blocked activity.

Firewalls
Like routers, firewalls permit or block activity based on a policy; however, firewalls use much more
sophisticated methods to examine network traffic. Firewalls can also track the state of network traffic and
perform content inspection. Firewalls tend to have more complex policies and generate more detailed
logs of activity than routers.

Network Quarantine Servers

Some organizations check each remote host’s security posture before allowing it to join the network. This
is often done through a network quarantine server and agents placed on each host. Hosts that do not
respond to the server’s checks or that fail the checks are quarantined on a separate virtual local area
network (VLAN) segment. Network quarantine servers log information about the status of checks,
including which hosts were quarantined and for what reasons.

Operating System Logs
Operating systems (OS) for servers, workstations, and networking devices (e.g., routers, switches)
usually log a variety of information related to security. The most common types of security-related OS
data are as follows:

System Events
System events are operational actions performed by OS components, such as shutting down the system or
starting a service. Typically, failed events and the most significant successful events are logged, but many
OSs permit administrators to specify which types of events will be logged. The details logged for each
event also vary widely; each event is usually timestamped, and other supporting information could
include event, status, and error codes; service name; and user or system account associated with an event.

Audit Records
Audit records contain security event information such as successful and failed authentication attempts,
file accesses, security policy changes, account changes (e.g., account creation and deletion, account
privilege assignment), and use of privileges. OSs typically permit system administrators to specify which
types of events should be audited and whether successful and/or failed attempts to perform certain actions
should be logged. OS logs are most beneficial for identifying or investigating suspicious activity
involving a particular host. After suspicious activity is identified by security software, OS logs are often
consulted to get more information on the activity.

Applications
Operating systems and security software provide the foundation and protection for applications, which
are used to store, access, and manipulate the data used for the organization’s business processes. Most
organizations rely on a variety of commercial off-the-shelf (COTS) applications, such as e-mail servers
and clients, Web servers and browsers, file servers and file sharing clients, and database servers and
clients. Some applications generate their own log files, while others use the logging capabilities of the OS
on which they are installed. Applications vary significantly in the types of information that they log.
The following lists some of the most commonly logged types of information and the potential benefits of
each:
• Client requests and server responses, which can be very helpful in reconstructing sequences of events
and determining their apparent outcome. If the application logs successful user authentications, it is
usually possible to determine which user made each request. Some applications can perform highly
detailed logging, such as e-mail servers recording the sender, recipients, subject name, and attachment
names for each e-mail; Web servers recording each URL requested and the type of response provided by
the server; and business applications recording which financial records were accessed by each user. This
information can be used to identify or investigate incidents and to monitor application usage for
compliance and auditing purposes.
• Account information such as successful and failed authentication attempts, account changes (e.g.,
account creation and deletion, account privilege assignment), and use of privileges. In addition to
identifying security events such as brute force password guessing and escalation of privileges, it can be
used to identify who has used the application and when each person has used it.
• Usage information such as the number of transactions occurring in a certain period (e.g., minute, hour)
and the size of transactions (e.g., e-mail message size, file transfer size). This can be useful for certain
types of security monitoring (e.g., a ten-fold increase in e-mail activity might indicate a new e-mail–borne
malware threat; an unusually large outbound e-mail message might indicate inappropriate release of
information).
• Significant operational actions such as application startup and shutdown, application failures, and major
application configuration changes. This can be used to identify security compromises and operational
failures.
Much of this information, particularly for applications that are not used through unencrypted network
communications, can only be logged by the applications, which makes application logs particularly
valuable for application-related security incidents, auditing, and compliance efforts. However, these logs
are often in proprietary formats that make them more difficult to use, and the data they contain is often
highly context-dependent, necessitating more resources to review their contents.

Log Management and its need
Log management can benefit an organization in many ways. It helps to ensure that computer security
records are stored in sufficient detail for an appropriate period of time. Routine log reviews and analysis
are beneficial for identifying security incidents, policy violations, fraudulent activity, and operational
problems shortly after they have occurred, and for providing information useful for resolving such
problems. Logs can also be useful for performing auditing and forensic analysis, supporting the
organization’s internal investigations, establishing baselines, and identifying operational trends and
long-term problems.
A log management infrastructure typically comprises the following three tiers:

Log Generation
The first tier contains the hosts that generate the log data. Some hosts run logging client applications or
services that make their log data available through networks to log servers in the second tier. Other hosts
make their logs available through other means, such as allowing the servers to authenticate to them and
retrieve copies of the log files.

Log Analysis and Storage

The second tier is composed of one or more log servers that receive log data or copies of log data from
the hosts in the first tier. The data is transferred to the servers either in a real-time or near-real-time
manner, or in occasional batches based on a schedule or the amount of log data waiting to be transferred.
Servers that receive log data from multiple log generators are sometimes called collectors or aggregators.
Log data may be stored on the log servers themselves or on separate database servers.

Log Monitoring
The third tier contains consoles that may be used to monitor and review log data and the results of
automated analysis. Log monitoring consoles can also be used to generate reports. In some log
management infrastructures, consoles can also be used to provide management for the log servers and
clients. Also, console user privileges sometimes can be limited to only the necessary functions and data
sources for each user. Log management infrastructures typically perform several functions that assist in
the storage, analysis, and disposal of log data. These functions are normally performed in such a way that
they do not alter the original logs.
The following items describe common log management infrastructure functions:


Log parsing is extracting data from a log so that the parsed values can be used as input for another
logging process. A simple example of parsing is reading a text-based log file that contains 10 comma-
separated values per line and extracting the 10 values from each line. Parsing is performed as part of
many other logging functions, such as log conversion and log viewing.
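
The comma-separated example above can be sketched with Python's standard csv module (the field names here are invented for illustration):

```python
import csv
import io

def parse_csv_log(text: str, fields: list) -> list:
    """Parse a text-based log with comma-separated values per line into
    a list of dictionaries keyed by the given field names."""
    reader = csv.reader(io.StringIO(text))
    return [dict(zip(fields, row)) for row in reader]
```
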
Event filtering is the suppression of log entries from analysis, reporting, or long-term storage because
their characteristics indicate that they are unlikely to contain information of interest. For example,
duplicate entries and standard informational entries might be filtered because they do not provide useful
information to log analysts. Typically, filtering does not affect the generation or short-term storage of
events because it does not alter the original log files.

In event aggregation, similar entries are consolidated into a single entry containing a count of the number
of occurrences of the event. For example, a thousand entries that each record part of a scan could be
aggregated into a single entry that indicates how many hosts were scanned. Aggregation is often
performed as logs are originally generated (the generator counts similar related events and periodically
writes a log entry containing the count), and it can also be performed as part of log reduction or event
correlation processes, which are described below.
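
Both functions can be sketched in a few lines of Python; the entry format here (dictionaries with "severity" and "event" keys) is an assumption, not a standard:

```python
from collections import Counter

def filter_entries(entries, drop_severities=("info",)):
    """Event filtering: suppress entries whose characteristics suggest
    they are unlikely to be of interest (here, by severity)."""
    return [e for e in entries if e["severity"] not in drop_severities]

def aggregate_entries(entries):
    """Event aggregation: consolidate similar entries into a count of
    occurrences per event type."""
    return Counter(e["event"] for e in entries)
```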

Log rotation is closing a log file and opening a new log file when the first file is considered to be
complete. Log rotation is typically performed according to a schedule (e.g., hourly, daily, weekly) or
when a log file reaches a certain size.
The primary benefits of log rotation are preserving log entries and keeping the size of log files
manageable. When a log file is rotated, the preserved log file can be compressed to save space. Also,
during log rotation, scripts are often run that act on the archived log. For example, a script might analyze
the old log to identify malicious activity, or might perform filtering that causes only log entries meeting
certain characteristics to be preserved. Many log generators offer log rotation capabilities; many log files
can also be rotated through simple scripts or third-party utilities, which in some cases offer features not
provided by the log generators.

Log archival is retaining logs for an extended period of time, typically on removable media, a storage
area network (SAN), or a specialized log archival appliance or server.
Logs often need to be preserved to meet legal or regulatory requirements. There are two types of log
archival: retention and preservation. Log retention is archiving logs on a regular basis as part of standard
operational activities. Log preservation is keeping logs that normally would be discarded, because they
contain records of activity of particular interest. Log preservation is typically performed in support of
incident handling or investigations.
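
Many platforms provide rotation out of the box; as one example, Python's logging module offers size-based rotation (the file name and limits below are arbitrary choices for illustration):

```python
import logging
from logging.handlers import RotatingFileHandler

def make_rotating_logger(logfile: str) -> logging.Logger:
    """Create a logger whose file is rotated at roughly 1 KB, keeping up
    to three archived copies (logfile.1, logfile.2, logfile.3)."""
    handler = RotatingFileHandler(logfile, maxBytes=1024, backupCount=3)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger = logging.getLogger("rotation-demo")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

Time-based rotation (hourly, daily, weekly) is available through the analogous TimedRotatingFileHandler.
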
Log compression is storing a log file in a way that reduces the amount of storage space needed for the file
without altering the meaning of its contents. Log compression is often performed when logs are rotated or
archived.

Log reduction is removing unneeded entries from a log to create a new log that is smaller. A similar
process is event reduction, which removes unneeded data fields from all log entries. Log and event
reduction are often performed in conjunction with log archival so that only the log entries and data fields
of interest are placed into long-term storage.
Log conversion is parsing a log in one format and storing its entries in a second format. For example,
conversion could take data from a log stored in a database and save it in an XML format in a text file.
Many log generators can convert their own logs to another format; third-party conversion utilities are also
available. Log conversion sometimes includes actions such as filtering, aggregation, and normalization.
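
A conversion of the kind described can be sketched with Python's standard libraries, here turning a comma-separated text log into XML (the field names are invented):

```python
import csv
import io
import xml.etree.ElementTree as ET

def csv_log_to_xml(csv_text: str, fields: list) -> str:
    """Convert a comma-separated text log into an XML document with one
    <entry> element per log line."""
    root = ET.Element("log")
    for row in csv.reader(io.StringIO(csv_text)):
        entry = ET.SubElement(root, "entry")
        for name, value in zip(fields, row):
            ET.SubElement(entry, name).text = value
    return ET.tostring(root, encoding="unicode")
```
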
In log normalization, each log data field is converted to a particular data representation and categorized
consistently. One of the most common uses of normalization is storing dates and times in a single format.
For example, one log generator might store the event time in a twelve-hour format (2:34:56 P.M. EDT)
categorized as Timestamp, while another log generator might store it in twenty-four (14:34) format
categorized as Event Time, with the time zone stored in different notation (-0400) in a different field
categorized as Time Zone. Normalizing the data makes analysis and reporting much easier when
multiple log formats are in use. However, normalization can be very resource-intensive, especially for
complex log entries (e.g., typical intrusion detection logs).
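
The timestamp example can be sketched in Python: both representations are parsed into datetime objects and emitted in one normalized form (ISO 8601, UTC). The function name is mine, not from any logging product:

```python
from datetime import datetime, timedelta, timezone

def normalize_timestamp(time_str: str, fmt: str, utc_offset_hours: int) -> str:
    """Parse a local time string in the given strptime format and return
    it as an ISO 8601 UTC timestamp."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    local = datetime.strptime(time_str, fmt).replace(tzinfo=tz)
    return local.astimezone(timezone.utc).isoformat()
```
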
Log file integrity checking involves calculating a message digest for each file and storing the message
digest securely to ensure that changes to archived logs are detected. A message digest is a digital
signature that uniquely identifies data and has the property that changing a single bit in the data causes a
completely different message digest to be generated. The most commonly used message digest algorithms
are MD5 and Secure Hash Algorithm 1 (SHA-1). If the log file is modified and its message digest is
recalculated, it will not match the original message digest, indicating that the file has been altered. The
original message digests should be protected from alteration through FIPS-approved encryption
algorithms, storage on read-only media, or other suitable means.

Event correlation is finding relationships between two or more log entries. The most common form of
event correlation is rule-based
correlation, which matches multiple log entries from a single source or multiple sources based on logged
values, such as timestamps, IP addresses, and event types. Event correlation can also be performed in
other ways, such as using statistical methods or visualization tools. If correlation is performed through
automated methods, generally the result of successful correlation is a new log entry that brings together
the pieces of information into a single place. Depending on the nature of that information, the
infrastructure might also generate an alert to indicate that the identified event needs further investigation.
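
A toy rule-based correlation, which (ignoring timestamps for brevity) brings together entries from different sources that share a source IP address; the entry format is an assumption:

```python
from collections import defaultdict

def correlate_by_ip(entries):
    """Rule-based correlation sketch: group log entries from multiple
    sources that share the same source IP address. IPs seen only once
    are not reported as correlated."""
    by_ip = defaultdict(list)
    for e in entries:
        by_ip[e["src_ip"]].append(e)
    return {ip: es for ip, es in by_ip.items() if len(es) > 1}
```
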
Log viewing is displaying log entries in a human-readable format. Most log generators provide some sort
of log viewing capability; third-party log viewing utilities are also available. Some log viewers provide
filtering and aggregation capabilities.

Log reporting is displaying the results of log analysis. Log reporting is often performed to summarize
significant activity over a particular period of time or to record detailed information related to a particular
event or series of events.


Log clearing is removing all entries from a log that precede a certain date and time. Log clearing is often
performed to remove old log data that is no longer needed on a system because it is not of importance or
it has been archived.

Log Management Process

System-level and infrastructure administrators should follow standard processes for managing the logs for
which they are responsible.
Major operational processes for log management are as follows:
• Configure the log sources, including log generation, storage, and security
• Perform analysis of log data
• Initiate appropriate responses to identified events
• Manage the long-term storage of log data.

Configure Log Sources

System-level administrators need to configure log sources so that they capture the necessary information
in the desired format and locations, as well as retain the information for the appropriate period of time.
The process includes:

• Administrators determine which of their hosts and host components must or should participate in the log
management infrastructure.
• A single log file might contain information from several sources, such as an OS log containing
information from the OS itself and several security software programs and applications. Administrators
ascertain which log sources use each log file.
• For each identified log source, administrators determine which types of events each log source must or
should log, as well as which data characteristics must or should be logged for each type of event. The
administrator’s ability to configure each log source is dependent on the features offered by that particular
type of log source. For example, some log sources offer very granular configuration options, while some
offer no granularity at all—logging is simply enabled or disabled, with no control over what is logged.
This section discusses log source configuration in three categories: log generation, log storage and
disposal, and log security.
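The three categories (generation, storage and disposal, and security) can be illustrated with Python's standard logging module. This is a minimal sketch, not a prescription; the logger name, format string, size limit, and backup count are placeholder assumptions.

```python
import logging
import os
import stat
from logging.handlers import RotatingFileHandler

def configure_log_source(log_path, max_bytes=10 * 1024 * 1024, backups=5):
    """Configure one log source: generation, storage/disposal, and security."""
    logger = logging.getLogger("app")          # illustrative source name
    logger.setLevel(logging.INFO)              # generation: which severities are captured

    # Storage and disposal: cap the file size and keep a fixed number of archives
    handler = RotatingFileHandler(log_path, maxBytes=max_bytes, backupCount=backups)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)s %(name)s %(message)s"))  # data characteristics per event
    logger.addHandler(handler)
    logger.info("log source configured")

    # Security: restrict the log file to its owner (POSIX permissions)
    os.chmod(log_path, stat.S_IRUSR | stat.S_IWUSR)
    return logger
```

Granularity varies by log source, as noted above; here the level, format, and rotation policy are the available knobs.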

Event Logs
Event logs are special files that record significant events on your computer, such as when a user logs on
to the computer or when a program encounters an error.

Example: Windows Event Log

Whenever significant events of these types occur, Windows records the event in an event log that you can
read by using Event Viewer. Advanced users might find the details in event logs helpful when
troubleshooting problems with Windows and other programs. Event Viewer tracks information in several
different logs.
Windows Logs include:

• Application (program) events

Events are classified as error, warning, or information, depending on the severity of the event. An error is
a significant problem, such as loss of data. A warning is an event that isn’t necessarily significant, but
might indicate a possible future problem. An information event describes the successful operation of a
program, driver, or service.

• Security-related events
These events are called audits and are described as successful or failed depending on the event, such as
whether a user trying to log on to Windows was successful.
• Setup events
Computers that are configured as domain controllers will have additional logs displayed here.
• System events
System events are logged by Windows and Windows system services, and are classified as error, warning,
or information.
• Forwarded events
These events are forwarded to this log by other computers.

Applications and Services Logs vary. They include separate logs about the programs that run on your
computer, as well as more detailed logs that pertain to specific Windows services.

To open Event Viewer, click the Start button, click Control Panel, click System and Security, click
Administrative Tools, and then double-click Event Viewer. Administrator permission is required: if you’re
prompted for an administrator password or confirmation, type the password or provide confirmation.
Click an event log in the left pane, then double-click an event to view the details of the event.
Configuring Windows Event Log
Authorized administrators can define security settings for the event logs. The choices are somewhat
limited, and include log size, the length of time a log should be stored, and when the log should be
cleared. Each event log can be configured individually.
1. Click Start, select Programs, select Administrative Tools, and click Computer Management.
2. In the console tree, click Event Viewer. Right-click Security and select Properties.

3. The Security Properties window will appear. Here authorized administrators can set the Maximum log
size and select what action to take when the maximum log size is reached.
° To restore the default settings, click Restore Defaults.
° To clear the log, click Clear Log.

Under Log size, select one of these options:

If the log is not to be archived, click Overwrite events as needed. To archive the log at scheduled
intervals, click Overwrite events older than and specify the appropriate number of days. Be sure that the
Maximum log size is large enough to accommodate the interval. To retain all the events in the log, click
Do not overwrite events (clear log manually). This option requires that logs be cleared manually; when
the maximum log size is reached, new events are discarded. If the event log is not cleared and archived
regularly, a warning message will appear.
1. After establishing the security log settings, click the Apply button.
2. The Security Properties window also provides the ability to set filters on the event log to perform
searches and sorting of audit data. To filter an existing event log in order to view or save specific security
events, select the Filter tab and configure the filter.

3. To configure the filter, select the Event types that will be included by checking or unchecking a
selection box next to Information, Warning, Error, Success Audit, and/or Failure audit, then input any
additional desired filtering requirements by Event source, Category, Event ID, User, or Computer.
4. By default the entire event log will be filtered for viewing by the parameters selected above. If desired,
select a date and time range for the logs that will be filtered for viewing. This is accomplished by first
clicking on the From: drop down menu and changing the selection to Events On.
The date and time dialog boxes will become active. Change the date by selecting the drop down menu
and choosing a date from the calendar that is presented. Change the time by scrolling the up and down
arrows in the time dialog box. Follow the same procedures clicking on the To: drop down menu and
changing the selection to Events On. Set the end date and time as described above.

5. Once all the desired filtering options have been selected, click the Apply button and click OK. The
Event Viewer will filter the log and display the information as defined by the filter.

Windows Logon Types

Logon types are recorded in the Logon Type field of logon events (event IDs 528 and 540 for successful
logons, and 529-537 and 539 for failed logons). Windows supports the following logon types and
associated logon type values:
Type 2: Interactive logon—This is used for a logon at the console of a computer. A type 2 logon is logged when
you attempt to log on at a Windows computer’s local keyboard and screen.
Type 3: Network logon—This logon occurs when you access remote file shares or printers.
Also, most logons to Internet Information Services (IIS) are classified as network logons, other than IIS
logons that use the basic authentication protocol (those are logged as logon type 8).
Type 4: Batch logon—This is used for scheduled tasks. When the Windows Scheduler service
starts a scheduled task, it first creates a new logon session for the task, so that it can run in the security
context of the account that was specified when the task was created.
Type 5: Service logon—This is used for services and service accounts that log on to start a service. When a
service starts, Windows first creates a logon session for the user account that is specified in the service’s
configuration.
Type 7: Unlock—This is used whenever you unlock your Windows machine.
Type 8: Network clear text logon—This is used when you log on over a network and the password is sent in
clear text. This happens, for example, when you use basic authentication to authenticate to an IIS server.
Type 9: New credentials-based logon—This is used when you run an application using the Run As command
and specify the /netonly switch. When you start a program with Run As using /netonly, the program
starts in a new logon session that has the same local identity (this is the identity of the user you are
currently logged on with), but uses different credentials (the ones specified in the runas command) for
other network connections. Without /netonly, Windows runs the program on the local computer and on
the network as the user specified in the runas command, and logs the logon event with type 2.
Type 10: Remote Interactive logon—This is used for RDP-based applications like Terminal Services,
Remote Desktop or Remote Assistance.
Type 11: Cached Interactive logon—This is logged when users log on using cached credentials, which basically
means that in the absence of a domain controller, you can still log on to your local machine using your
domain credentials. Windows supports logon using cached credentials to ease the life of mobile users and
users who are often disconnected.
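For log analysis, the standard Windows logon type values can be kept in a simple lookup table. The following Python sketch is illustrative only (the function and its output format are not part of Windows); it classifies a logon event from its event ID and Logon Type field.

```python
# Standard Windows logon type values (type 6 is reserved/unused)
LOGON_TYPES = {
    2: "Interactive",
    3: "Network",
    4: "Batch",
    5: "Service",
    7: "Unlock",
    8: "NetworkCleartext",
    9: "NewCredentials",
    10: "RemoteInteractive",
    11: "CachedInteractive",
}

def describe_logon(event_id, logon_type):
    """Return a short description of a logon event from its ID and Logon Type field.

    Event IDs 528 and 540 indicate successful logons; 529-537 and 539 are failures.
    """
    outcome = "successful" if event_id in (528, 540) else "failed"
    name = LOGON_TYPES.get(logon_type, "Unknown")
    return f"{outcome} {name} logon (type {logon_type})"
```

A summary built this way (e.g., counting failed RemoteInteractive logons per account) is often the first step in spotting suspicious remote access.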

How to Read the Windows Application, Security, and System Log Files
The Windows application, security, and system log files can be read with a Windows application
called “Event Viewer,” which is accessed through the Control Panel:
• Click the Start button on the desktop’s Taskbar
• Click the Control Panel menu item
• The Control Panel’s window will open
• In the Control Panel, double-click the Administrative Tools icon
• The Administrative Tools window will open with a list of different icons
• Double click the Event Viewer icon

How to Read Other Windows Log Files

Many log files that software applications use are written as plain text files, making it possible
to use any text editor, such as “Notepad” or “WordPad”, to read the generated log files.
To read .txt files in WordPad:
• Click the Start button on the desktop’s Taskbar
• Click All Programs option
• Click Accessories menu item
• Click WordPad application
• A new WordPad window will open
• Click the File menu
• Click the Open menu item
• Navigate to the desired log file and click the Open button
There are also programs that allow the user to monitor log files as they are written, in real time. Examples of
such software include Tail For Win32 and Hoo Win Tail. These programs make it easy to read new entries
from the bottom (tail) of the log file.
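The idea behind such tail-style tools can be sketched in a few lines of Python. This is an illustration, not the implementation those tools use: the function remembers a file offset between calls and returns only the entries appended since the last call.

```python
import os

def read_new_lines(path, offset):
    """Read lines appended to a log file since the last known offset.

    Returns (new_lines, new_offset); calling it repeatedly follows the
    file's tail, the way real-time log monitors do.
    """
    size = os.path.getsize(path)
    if size < offset:              # file was truncated or rotated; start over
        offset = 0
    with open(path, "r") as f:
        f.seek(offset)
        new_lines = f.read().splitlines()
        return new_lines, f.tell()
```

A monitoring loop would sleep briefly between calls and pass each batch of new lines to whatever analysis or alerting logic is in place.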
IIS log files
Internet Information Services (IIS) is a web server developed by Microsoft for use with Windows Server.
The server is meant for a variety of hosting uses while attempting to maintain a high level of flexibility
and scalability.
To help with server use and analysis, IIS is integrated with several types of log files. These log
file formats provide information on a range of websites and specific statistics, including Internet Protocol
(IP) addresses, user information and site visits as well as dates, times and queries.
Log File Formats in IIS (IIS 6.0)
IIS provides six different log file formats that you can use to track and analyze information about your
IIS-based sites and services. In addition to the six available formats, you can create your own custom log
file format.

The following log file formats and logging options are available in IIS:
• W3C Extended Log File Format Text-based, customizable format for a single site. This is the default format.
• W3C Centralized Logging All data from all Web sites is recorded in a single log file in the W3C Extended format.
• NCSA Common Log File Format Text-based, fixed format for a single site.
• IIS Log File Format Text-based, fixed format for a single site.
• ODBC Logging Fixed format for a single site. Data is recorded in an ODBC-compliant database.
• Centralized Binary Logging Binary-based, unformatted data that is not customizable.
Data is recorded from multiple Web sites and sent to a single log file. To interpret the
data, you need a special parser.
• HTTP.sys Error Log Files Fixed format for HTTP.sys-generated errors.
You can read text-based log files using a text editor such as Notepad, which is included with Windows,
but administrators often import the files into a report-generating software tool for further analysis. IIS
logs, when properly analyzed, provide information about demographics and usage of the IIS web server.
By tracking usage
data, web providers can better tailor their services to support specific regions, time frames or IP ranges.
Log filters also allow providers to track only the data deemed necessary for analysis.
Analyze an IIS Log file
IIS logs contain crucial information for improving the web site. Log files for an IIS server are the key
source of information for managing the websites hosted on the server. The log files contain a record of
each request from a web user and the response provided by the IIS server. This data is crucial for
marketing, site performance and security. Logs are often the only indication that a user is attempting to
hack into your IIS server. Patterns and trends can be spotted in this data to help you segment your users
for marketing opportunities. IIS log analysis is a critical tool in improving your website. Internet
Information Services (IIS) 6.0 offers a number of ways to record the activity of your Web sites, File
Transfer Protocol (FTP) sites, Network News Transfer Protocol (NNTP) service, and Simple Mail
Transfer Protocol (SMTP) service and allows you to choose the log file format that works best for your
environment. IIS logging is designed to be more detailed than the event logging or performance
monitoring features of the Microsoft® Windows® Server 2003, Standard Edition, Windows® Server
2003, Enterprise Edition, and Windows® Server 2003, Datacenter Edition, operating systems. IIS log
files can include information such as who has visited your site, what was viewed, and when the
information was last viewed. You can monitor attempts to access your sites, virtual folders, or files and
determine whether attempts were made to read or write to your files. IIS log file formats allow you to
record events independently for any site, virtual folder, or file.

Using a text editor, the following steps can be used to analyze an IIS log file:

• Open the log file labeled as “ex010110.log” in your text editor. The six digits in the log file name are the
year, month, and day the file was created (exYYMMDD).
• Locate the header information. This is a line starting with “#Fields:.” Use this line to determine
the corresponding values in each column.
• Use the date and time to identify when the request was created. The “site name” and “computer name”
will indicate what server responded to the request.
• Identify the visitor to your web server by the “c-ip”, which is the IP address of the visitor’s computer.
• The “cs-method” column will most often contain either “post” or “get” depending on the request made
by the visitor’s browser. The fields “cs-Uri-stem” and “cs-Uri-query” will denote the resource such as an
image or web page the visitor requested.
• Use the “sc-status” column to determine whether the web server was capable of correctly responding to
the request.
• Use the “cs(User-Agent)” to determine what type of browser the visitor used, or whether the visitor
is actually a search engine crawler.
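The steps above can be sketched in Python. This is a minimal illustration that assumes well-formed, space-separated W3C Extended entries with no embedded spaces in field values; the sample field list mirrors the columns discussed above.

```python
def parse_w3c_log(text):
    """Parse W3C Extended log text into a list of dicts keyed by the #Fields header."""
    fields, records = [], []
    for line in text.splitlines():
        if line.startswith("#Fields:"):
            fields = line.split()[1:]           # column names follow "#Fields:"
        elif line.startswith("#") or not line.strip():
            continue                            # skip other directives and blank lines
        else:
            records.append(dict(zip(fields, line.split())))
    return records
```

The resulting dictionaries can then be fed to collections.Counter to total sc-status codes, c-ip addresses, or user agents.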
Log Analysis and Response
Analyze Log Data
Effective analysis of log data is often the most challenging aspect of log management, but is also usually
the most important. Although analyzing log data is sometimes perceived by administrators as
uninteresting and inefficient (e.g., little value for much effort), having robust log management
infrastructures and automating as much of the log analysis process as possible can significantly improve
analysis so that it takes less time to perform and produces more valuable results. The most effective way
to gain a solid understanding of log data is to review and analyze portions of it regularly (e.g., every day).
The goal is to eventually gain an understanding of the baseline of typical log entries, likely encompassing
the vast majority of log entries on the system. (Because a few types of entries often comprise a significant
percentage of the log entries, this is not as difficult as it may first sound.) Daily log reviews should
include those entries that have been deemed most likely to be important, as well as some of the entries
that are not yet fully understood. Because it can take considerable effort to understand the significance
of most log entries, the initial days, weeks, or even months of performing the log analysis process are the
most challenging and time-consuming. Over time, as the baseline of normal activity is broadened and
deepened, the daily log reviews should take less time and be more focused on the most important log
entries, thus leading to more valuable analysis results.
Another motivation for understanding the log entries is so that the analysis process can be automated as
much as possible. By determining which types of log entries are of interest and which are not,
administrators can configure automated filtering of the log entries. This allows events known to be
malicious to be recognized and responded to automatically (e.g., alerting administrators, reconfiguring

other security controls). Another purpose for filtering is to ensure that the manual analysis performed by
administrators is prioritized appropriately. The filtering should be configured so that it presents
administrators with a reasonable number of entries for manual analysis. Web log analysis software (also
called a web log analyzer) is a kind of web analytics software that parses a server log file from a web
server, and based on the values contained in the log file, derives indicators about when, how, and by
whom a web server is visited. Usually reports are generated from the log files immediately, but the log
files can alternatively be parsed into a database and reports generated on demand. There are free, open
source and paid software tools available for log analysis or management.
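Automated filtering of known-benign entries might be sketched as follows. The patterns here are invented examples; a production filter would be driven by the site's own baseline of typical log entries.

```python
import re

# Patterns for entries already understood as routine; anything that does not
# match is surfaced for manual review. These patterns are illustrative only.
KNOWN_BENIGN = [
    re.compile(r"session opened for user \w+"),
    re.compile(r"health-check OK"),
]

def filter_for_review(entries):
    """Return only the log entries that match no known-benign pattern."""
    return [e for e in entries
            if not any(p.search(e) for p in KNOWN_BENIGN)]
```

As the baseline broadens over time, more patterns move into the known-benign list and the manual review queue shrinks, which is exactly the payoff described above.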

Response to events
During their log analysis, infrastructure and system-level administrators may identify events of
significance, such as incidents and operational problems that necessitate some type of response. When an
administrator identifies a likely computer security incident, as defined by the organization’s incident
response policies, the administrator should follow the organization’s incident response procedures to
ensure that it is addressed appropriately. Examples of computer security incidents include a host being
infected by malware and a person gaining unauthorized access to a host. Administrators should perform
their own responses to non-incident events, such as minor operational problems (e.g., misconfiguration of
host security software). Some organizations require system-level administrators to report incidents and
logging-related operational problems to infrastructure administrators so that the infrastructure
administrators can better identify additional instances of the same activities and patterns that cannot be
seen at the individual system level. Infrastructure and system-level administrators should also be prepared
to assist incident response teams with their efforts. For example, when an incident occurs, affected
system-level administrators may be asked to review their systems’ logs for particular signs of malicious
activity or to provide copies of their logs to incident handlers for further analysis. Administrators should
also be prepared to alter their logging configurations as part of a response. Adverse events such as worms
often cause unusually large numbers of events to be logged. This can cause various negative impacts,
such as slowing system performance, overwhelming logging processes, and overwriting recent log
entries. Analysts may not be able to see other events of significance because their records are hidden
among all of the other log entries. Accordingly, administrators may need to reconfigure logging for the
short term, long term, or permanently, depending on the source of the log data, to prevent it from
overwhelming the system and the logs. Administrators may also need to adjust logging to capture more
data as part of a response effort, such as collecting additional information on a particular type of activity.
To identify similar incidents, especially in the short term, administrators may need to perform additional
log monitoring and analysis, such as more closely examining the types of logging sources that recorded
pertinent information on the initial incident.


Handling Network security incidents

Network Reconnaissance Incidents
Intruders probe computer networks to gather information about computer systems and resources.
A probe is any attempt launched to detect:

• Active hosts and networks that are reachable over a public or an accessible medium
• The services and applications they are running that could be connected to any vulnerability that these
services and applications may have, which could be exposed and taken advantage of.

Probes can be classified appropriately into three main activities: host detection, port enumeration and
vulnerability assessment.
Host detection essentially aims to establish aliveness of a host, along with its network address. Hardware
addresses may also be sought by intruders having access to the same segment as the target.
Port enumeration concerns the listing of TCP/UDP services running on a host. This may be a list of all
services or only those of particular interest to an intruder, along with the port address they are running on.
Vulnerability assessment seeks to establish information on the type and version of the operating system
and the different applications running on a machine. Version and patch level details about an operating
system and applications are important to judge the possible exploits that could be used to attack the host.
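As an illustration of detecting port enumeration from logs, the following Python sketch counts distinct destination ports per source address. The (src_ip, dst_port) record format and the threshold of 20 ports are assumptions for the sketch, not a standard.

```python
from collections import defaultdict

def detect_port_scans(connections, threshold=20):
    """Flag source IPs that touched an unusually high number of distinct ports.

    `connections` is an iterable of (src_ip, dst_port) pairs, e.g. extracted
    from firewall or flow logs; the threshold is a tunable assumption.
    """
    ports_by_src = defaultdict(set)
    for src, dport in connections:
        ports_by_src[src].add(dport)
    return {src for src, ports in ports_by_src.items()
            if len(ports) >= threshold}
```

A real detector would also window the counts in time, since a scan spread over weeks looks very different from one completed in seconds.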

A probe could be seen to be launched by an intruder in two modes: active and passive.
An active probe involves some attempted interaction over the network on behalf of the intruder. This may
involve sending a packet directly to a target host or a network, or some Intermediary used for the
purposes of probing.

A passive probe, on the other hand, would involve an intruder restricting herself to sniffing and logging
traffic, originating from and destined to a potential or an identified target, and obtaining relevant
information. The choice of being passive may be due to reasons of configuration or access, or it may be a
deliberate act by an intruder to avoid detection. Passive probes by their nature are hard to detect. Any
reconnaissance information gained using such tactics, however, is limited to the traffic visible to an
intruder. Active probes are necessary if an intruder wishes to gather information that is both timely and of
her choosing.
A variety of techniques exist for active probes, including making use of mechanisms such as the TCP
handshake to judge a host’s liveness, fingerprinting the protocol stack (which often indicates the
operating system the host is running), probing DNS servers, and grabbing service
banners volunteering information on the host. Most active probes make use of techniques that use the
core protocols of the modern day communications, namely IP, ICMP, TCP and UDP.

Common approaches to counter probing activity at this level include:

• Filtering inbound ICMP probes (responses to which are used to determine what machine is alive)

• Filtering outbound ICMP responses to UDP port scanning attempts (where a lack of response allows an
intruder to determine a live host)

• Filtering inbound TCP probes with different combinations of flags set (the response, or lack of it,
depending on the flags set and the operating system probed, may indicate to an intruder whether a host is
live or not)

• Using a variety of firewalling techniques that allow throttling of probes and stateful mechanisms that
disallow unsolicited packets aimed at generating responses from target hosts.

A somewhat more proactive approach is suggested by Kang et al. who propose to generate false positive
responses to any probes attempting to detect hosts or enumerate ports targeting an unused address space
or closed ports on active hosts. Their approach, referred to as all-positive response (APR), is designed to
make it difficult for an intruder to distinguish active hosts from inactive ones, and open ports from closed
ones. To an intruder, all machines appear active and all ports appear open.

Such an approach could also help in detecting any packets that follow up after initial probes, which
attempt to probe the host further, enumerating ports or assessing some vulnerability. Using false
responses is useful in hiding any information about the network that an intruder may try to gather. But an
all-positive approach will certainly indicate to an intruder that false responses are being generated to all
probes.
Another important issue is that generating false responses for a very large network may require untenably
large resources, and may therefore not be scalable. Some factors to consider here are the size of the entire
(used and unused) address space that the false response needs to be generated for, the rate at which the
network is probed, the various types of probes launched (that need to be responded to) and memory state
required to detect any attempts at intrusion that follow up a false response.

Generating a false positive response to probes targeting a closed port on an active host could also result in
a conflict: an active host may have a port closed at the time of the probe, but the port may open (upon the
host initiating a connection or starting a service, for instance) sometime after the false response is sent.
Some alternatives to APR could be designed so that such responses are generated:
• Randomly, where some probes are replied to and some are not (chosen entirely at random)
• To a specified subset of the unused address space. This subset could be chosen randomly (from a given
chunk of addresses) or strategically (from an address space used non-contiguously)
• For all probes destined for the unused address space. This is similar to APR, except that only probes
destined for the unused parts of the address space are replied to and one or a few services depicted.
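The "specified subset" alternative could, for instance, be implemented with a keyed hash, so the chosen subset stays stable across probes yet remains unpredictable to the intruder. This sketch is one possible interpretation, not taken from the Kang et al. proposal; the secret key and fraction are placeholders.

```python
import hashlib

def should_fake_response(dst_ip, fraction=0.25, secret=b"site-key"):
    """Decide deterministically whether to generate a false positive response
    for a probe to `dst_ip`.

    Hashing the address with a secret keeps the chosen subset consistent
    (the same unused address always answers, or never does) while giving the
    intruder no way to predict which addresses will respond.
    """
    digest = hashlib.sha256(secret + dst_ip.encode()).digest()
    return digest[0] < int(256 * fraction)
```

Consistency matters here: a purely random policy would answer the same address differently on repeated probes, which itself reveals that responses are being faked.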

Handling specific types of incidents

• Denial of Service (DoS)—an attack that prevents or impairs the authorized use of a network, system, or application

• Malicious Code—a virus, worm, Trojan horse, or other code-based malicious entity that infects a host

• Unauthorized Access—a user gains access without permission to a network, system, application, data,
or other resource

• Inappropriate Usage—a user violates acceptable computing use policies

• Multiple Component—a single incident that encompasses two or more incidents; for example, a
malicious code infection leads to unauthorized access to a host, which is then used to gain unauthorized
access to additional hosts.

Denial of Service Incidents

DoS attacks prevent the authorized use of IT resources. The following are tips for responding to a network
distributed denial-of-service (DDoS) incident.

General Considerations

• DDoS attacks (Fig 4.1) often take the form of flooding the network with unwanted traffic; some attacks
focus on overwhelming resources of a specific system.

Fig 4.1 DDoS Diagram

• It will be very difficult to defend against the attack without specialized equipment or your ISP’s help.

• Often, too many people participate during incident response; limit the number of people on the team.
• DDoS incidents may span days. Consider how your team will handle a prolonged attack; humans get
tired, so plan for staff rotation.
• Understand your equipment’s capabilities in mitigating a DDoS attack. Many administrators underestimate
the capabilities of their devices, or overestimate their performance.

Prepare for a Future Incident

• If you do not prepare for a DDoS incident in advance, you will waste precious time during the attack.
• Contact your ISP to understand the paid and free DDoS mitigation it offers and what process you should
follow to engage it.
• Create a white list of the source IPs and protocols you must allow if prioritizing traffic during an attack.
Include your big customers, critical partners, etc.
• Confirm DNS time-to-live (TTL) settings for the systems that might be attacked. Lower the TTLs, if
necessary, to facilitate DNS redirection if the original IPs get attacked.

• Establish contacts for your ISP, law enforcement, IDS, firewall, systems, and network teams.

• Document your IT infrastructure details, including business owners, IP addresses and circuit IDs;
prepare a network topology diagram and an asset inventory.

• Understand the business implications (e.g., lost revenue) of likely DDoS attack scenarios.
• If the risk of a DDoS attack is high, consider purchasing specialized DDoS mitigation products or services.
• Collaborate with your BCP/DR planning team, to understand their perspective on DDoS incidents.
• Harden the configuration of network, OS, and application components that may be targeted by DDoS.
• Baseline your current infrastructure’s performance, so you can identify the attack faster and more accurately.

Analyze the Attack

• Understand the logical flow of the DDoS attack and identify the infrastructure components affected by it.
• Review the load and logs of servers, routers, firewalls, applications, and other affected infrastructure.
• Identify what aspects of the DDoS traffic differentiate it from benign traffic (e.g., specific source IPs,
destination ports, URLs, TCP flags, etc.).
• If possible, use a network analyzer (e.g. tcpdump, ntop, Aguri, MRTG, a NetFlow tool) to review the traffic.
• Contact your ISP and internal teams to learn about their visibility into the attack, and to ask for help.
• If contacting the ISP, be specific about the traffic you’d like to control (e.g., which network blocks to
black hole, which source IPs to rate-limit).
• Find out whether the company received an extortion demand as a precursor to the attack.
• If possible, create a NIDS signature to differentiate between benign and malicious traffic.
• Notify your company’s executive and legal teams; upon their direction, consider involving law enforcement.
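As one simple aid to the analysis steps above, per-source request counts from server or flow logs can highlight candidate flood sources to distinguish from benign traffic. A minimal sketch, where the input format (an iterable of source-IP strings) is an assumption:

```python
from collections import Counter

def top_talkers(events, n=5):
    """Rank source IPs by request count to help separate flood sources from
    benign traffic during DDoS analysis.

    `events` is an iterable of source-IP strings taken from server, firewall,
    or flow logs; returns the n most frequent sources with their counts.
    """
    return Counter(events).most_common(n)
```

The resulting list feeds directly into the mitigation step: the top sources are candidates for rate-limiting or black-holing, once checked against the whitelist of critical customers and partners prepared in advance.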

Mitigate the Attack’s Effects

• While it is very difficult to fully block DDoS attacks, you may be able to mitigate their effects.

• Attempt to throttle or block DDoS traffic as close to the network’s “cloud” as possible via a router,
firewall, load balancer, specialized device, etc.

• Terminate unwanted connections or processes on servers and routers and tune their TCP/IP settings.
• If possible, switch to alternate sites or networks using DNS or another mechanism. Black hole
DDoS traffic targeting the original IPs.
• If the bottleneck is a particular feature of an application, temporarily disable that feature.
• If possible, add servers or network bandwidth to handle the DDoS load. (This is an arms race, though.)
• If possible, route traffic through a traffic scrubbing service or product via DNS or routing changes.
• If adjusting defenses, make one change at a time, so you know the cause of any effects you observe.
• Configure egress filters to block the traffic your systems may send in response to DDoS traffic, to avoid
adding unnecessary packets to the network.

Wrap-Up the Incident and Adjust

• Consider what preparation steps you could have taken to respond to the incident faster or more effectively.
• If necessary, adjust assumptions that affected the decisions made during DDoS incident preparation.
• Assess the effectiveness of your DDoS response process, involving people and communications.
• Consider what relationships inside and outside your organizations could help you with future incidents.

Key DDoS Incident Response Steps

• Preparation: Establish contacts, define procedures, and gather tools to save time during an attack.
• Analysis: Detect the incident, determine its scope, and involve the appropriate parties.
• Mitigation: Mitigate the attack’s effects on the targeted environment.
• Wrap-up: Document the incident’s details, discuss lessons learned, and adjust plans and defenses.

Unauthorized Access Incidents

Examples of unauthorized access include:

• Performing a remote root compromise of an e-mail server

• Defacing a Web server
• Guessing and cracking passwords

• Copying a database containing credit card numbers
• Viewing sensitive data, including payroll records and medical information, without authorization
• Running a packet sniffer on a workstation to capture usernames and passwords
• Using a permission error on an anonymous FTP server to distribute pirated software and music files
• Dialing into an unsecured modem and gaining internal network access
• Posing as an executive, calling the help desk, resetting the executive’s e-mail password, and learning the
new password
• Using an unattended, logged-in workstation without permission.

The following measures help detect and prevent unauthorized access incidents:

• Configure network-based and host-based IDS software (such as file integrity checkers and log monitors)
to identify and alert on attempts to gain unauthorized access. Each type of intrusion detection software
may detect attacks that others are not able to detect.

• Use centralized log servers so pertinent information from hosts across the organization is stored in a
single secured location.
• Establish procedures to be followed when all users of an application, system, trust domain, or
organization should change their passwords because of a password compromise. The procedures should
adhere to the organization’s password policy.
• Discuss unauthorized access incidents with system administrators so that they understand their roles in
the incident handling process.
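The file integrity checking mentioned above can be sketched in a few lines of Python; the baseline format and monitored paths here are illustrative assumptions, not any particular product's behavior.

```python
import hashlib
import os

def hash_file(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record current hashes for the monitored files."""
    return {p: hash_file(p) for p in paths if os.path.isfile(p)}

def check_integrity(baseline):
    """Return files that were removed or whose hashes no longer match."""
    return [p for p, digest in baseline.items()
            if not os.path.isfile(p) or hash_file(p) != digest]
```

A real deployment would also protect the baseline itself, for example by storing it off-host, since an attacker with root access could otherwise rewrite it along with the files it covers.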


Network Security

Configure the network perimeter to deny all incoming traffic that is not expressly permitted. Properly
secure all remote access methods, including modems and VPNs. An unsecured modem can provide easily
attainable unauthorized access to internal systems and networks. War dialing is the most efficient
technique for identifying improperly secured modems. When securing remote access, carefully consider
the trustworthiness of the clients; if they are outside the organization’s control, they should be given as
little access to resources as possible, and their actions should be closely monitored. Put all publicly
accessible services on secured demilitarized zone (DMZ) network segments. The network perimeter can
then be configured so that external hosts can establish connections only to hosts on the DMZ, not internal
network segments. Use private IP addresses for all hosts on internal networks. This will severely restrict
the ability of attackers to establish direct connections to internal hosts.
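Whether an address falls in a private range can be checked mechanically; for instance, Python's standard ipaddress module classifies the RFC 1918 ranges (note that it also treats loopback and other reserved ranges as private):

```python
import ipaddress

# RFC 1918 private ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
def is_internal(addr):
    """True if addr is in a private (or otherwise reserved) range."""
    return ipaddress.ip_address(addr).is_private
```

A perimeter audit script could use a check like this to flag any internal host that has been assigned a routable public address by mistake.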

Host Security

• Perform regular vulnerability assessments to identify serious risks and mitigate the risks to an
acceptable level.

• Disable all unneeded services on hosts. Separate critical services so they run on different hosts. If an
attacker then compromises a host, immediate access should be gained only to a single service.

• Run services with the least privileges possible, to reduce the immediate impact of successful exploits.

• Use host-based firewall software to limit individual hosts’ exposure to attacks.

• Limit unauthorized physical access to logged-in systems by requiring hosts to lock idle screens
automatically and asking users to log off before leaving the office.

• Regularly verify the permission settings for critical resources, including password files, sensitive
databases and public Web pages. This process can easily be automated to report changes in permissions
on a regular basis.
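Automating the permission check described above might look like the following sketch, which flags world-writable files among a list of critical paths; the paths themselves would come from your own asset inventory.

```python
import os
import stat

def world_writable(paths):
    """Return the subset of paths that are writable by any user."""
    flagged = []
    for path in paths:
        try:
            mode = os.stat(path).st_mode
        except FileNotFoundError:
            continue  # a missing critical file is worth its own alert
        if mode & stat.S_IWOTH:
            flagged.append(path)
    return flagged
```

Run on a schedule (e.g., from cron) and diffed against the previous report, this provides the regular change reporting the text recommends.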

Authentication and Authorization

• Create a password policy that requires the use of complex, difficult-to-guess passwords,
forbids password sharing, and directs users to use different passwords on different systems, especially
external hosts and applications.
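A password policy's complexity rules can be enforced mechanically. The thresholds below (minimum length 12, at least three of four character classes) are illustrative assumptions, not values from the text:

```python
import re

def meets_policy(password, min_length=12):
    """Check a candidate password against illustrative complexity rules."""
    if len(password) < min_length:
        return False
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    # Require at least three of the four character classes.
    return sum(bool(re.search(c, password)) for c in classes) >= 3
```

A check like this would typically run at password-change time, alongside a screen against known-compromised password lists.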

• Require sufficiently strong authentication, particularly for accessing critical resources.

• Create authentication and authorization standards for employees and contractors to follow when
developing software. For example, passwords should be strongly encrypted using a FIPS 140-2 validated
algorithm when they are transmitted or stored.
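As one example of a NIST-approved construction for protecting stored passwords, PBKDF2-HMAC-SHA-256 is available in Python's standard library; the iteration count below is an illustrative modern value, not a figure from the text.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=600_000):
    """Derive a storage-safe digest with PBKDF2-HMAC-SHA-256."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=600_000):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Storing only the salt and digest (never the password) means a copied credential database yields no directly usable passwords.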

• Establish procedures for provisioning and deprovisioning user accounts. These should include an
approval process for new account requests and a process for periodically disabling or deleting accounts
that are no longer needed.

Physical Security

• Implement physical security measures that restrict access to critical resources.

Detection and Analysis

Because unauthorized access incidents can occur in many forms, they can be detected through dozens of
types of precursors and indications.

Precursor: Unauthorized access incidents are often preceded by reconnaissance activity to map hosts and
services and to identify vulnerabilities. Activity may include port scans, host scans, vulnerability scans,
pings, trace routes, DNS zone transfers, OS fingerprinting, and banner grabbing. Such activity is detected
primarily through IDS software and secondarily through log analysis.

Response: Incident handlers should look for distinct changes in reconnaissance patterns, for example, a
sudden interest in a particular port number or host. If this activity points out a vulnerability that could be
exploited, the organization may have time to block future attacks by mitigating the vulnerability (e.g.,
patching a host, disabling an unused service, modifying firewall rules).
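The "sudden interest in a particular port number or host" mentioned above can be detected from firewall or IDS logs; in this sketch, log records are assumed to be parsed into (source IP, destination port) pairs, and the threshold is an illustrative value.

```python
from collections import defaultdict

def detect_scanners(events, threshold=10):
    """Flag source IPs contacting an unusually broad set of ports.

    `events` is an iterable of (src_ip, dst_port) pairs, e.g. parsed
    from firewall deny logs.
    """
    ports_by_src = defaultdict(set)
    for src, port in events:
        ports_by_src[src].add(port)
    return {src for src, ports in ports_by_src.items()
            if len(ports) >= threshold}
```

Comparing the flagged set against the previous day's results highlights the distinct changes in reconnaissance patterns that handlers are advised to look for.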

Precursor: A new exploit for gaining unauthorized access is released publicly, and it poses a
significant threat to the organization.

Response: The organization should investigate the new exploit and, if possible, alter security controls to
minimize the potential impact of the exploit for the organization.

Precursor: Users report possible social engineering attempts—attackers trying to trick them into
revealing sensitive information, such as passwords, or encouraging them to download or run programs
and file attachments.

Response: The incident response team should send a bulletin to users with guidance on handling the
social engineering attempts. The team should determine what resources the attacker was interested in and
look for corresponding log-based precursors, as it is likely that the social engineering is only part of the attack.

Precursor: A person or system may observe a failed physical access attempt (e.g., outsider attempting to
open a locked wiring closet door, unknown individual using a cancelled ID badge).

Response: If possible, security should detain the person. The purpose of the activity should be
determined, and it should be verified that the physical and computer security controls are strong enough
to block the apparent threat. (An attacker who cannot gain physical access may perform remote
computing-based attacks instead.) Physical and computer security controls should be strengthened if
necessary.
Malicious Action: Root compromise of a host
• Hacker tools on system
• Unusual traffic to / from host
• System configuration changes
• Modification of critical files
• Unexplained account usage
• Strange OS / application log messages

Malicious Action: Unauthorized usage of standard user account

• Access attempts to critical files (e.g., password files)
• Unexplained account usage (e.g., idle account in use, account in use from multiple
locations at once, commands that are unexpected from a particular user, large number of locked-out accounts)
• Web proxy log entries showing the download of hacker tools

Malicious Action: Unauthorized data modification

(e.g., Web server defacement, FTP warez server)

• Network intrusion detection alerts
• Increased resource utilization
• User reports of the data modification (e.g., defaced Web site)
• Modifications to critical files (e.g., Web pages)
• New files or directories with unusual names (e.g., binary characters, leading spaces, leading dots)
• Significant changes in expected resource usage (e.g., CPU, network activity, full logs or file systems)
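The "unusual names" indication above (binary characters, leading spaces, leading dots) lends itself to a simple directory scan. This sketch only illustrates the idea and would need tuning in practice, since leading-dot names are routine on Unix systems:

```python
import os

def suspicious_names(directory):
    """Flag names with leading spaces, leading dots, or non-printable characters."""
    return [name for name in os.listdir(directory)
            if name.startswith((" ", ".")) or not name.isprintable()]
```

Pointing such a scan at public FTP and Web document roots is a cheap way to spot the hidden warez directories this section describes.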

Containment, Eradication and Recovery

Initial containment elements

• Isolating the affected system

• Disabling the affected service
• Eliminating the attacker's route into the environment
• Disabling user accounts used in the attack
• Enhancing physical security

Eradication and Recovery

Successful attackers frequently install rootkits, which modify or replace dozens or hundreds of files,
including system binaries. Rootkits hide much of what they do, making it tricky to identify what was
changed. Therefore, if an attacker appears to have gained root access to a system, handlers cannot trust
the operating system software. Typically, the best solution is to restore the system from a known good
backup or reinstall the operating system and applications from scratch, and then secure the system
properly so the incident cannot recur.
Changing all passwords on the system, and possibly on all systems that have trust relationships
with the victim system, is also highly recommended. Some unauthorized access incidents involve the
exploitation of multiple vulnerabilities, so it is important for handlers to identify all vulnerabilities that
were used and to determine strategies for correcting or mitigating each vulnerability. Other vulnerabilities
that are present should be mitigated as well, or an attacker
may use them instead.

If an attacker gains only a lesser level of access than administrator level, eradication and recovery actions
should be based on the extent to which the attacker gained access. Vulnerabilities that were used to gain
access should be mitigated appropriately. Additional actions should be performed as merited to identify
and address weaknesses systemically. For example, if an attacker gained user-level access by guessing a
weak password, then not only should that account’s password be changed to a stronger password, but also
the system administrator and owner should consider enforcing stronger password requirements.

If the system was in compliance with the organization’s password policies, the organization
should consider revising its password policies.

Key recommendations for handling unauthorized access incidents are summarized
• Configure intrusion detection software to alert on attempts to gain unauthorized access. Network and
host-based intrusion detection software (including file integrity checking software) is valuable for
detecting attempts to gain unauthorized access. Each type of software may detect incidents that the other
types of software cannot, so the use of multiple types of computer security software is highly recommended.

• Configure all hosts to use centralized logging. Incidents are easier to detect if data from all hosts across
the organization is stored in a centralized, secured location.
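As a minimal sketch of the centralized logging recommended above, Python's standard logging module can forward host events to a syslog collector; the collector address here is a placeholder you would replace with your own log server.

```python
import logging
import logging.handlers

def make_central_logger(server="127.0.0.1", port=514):
    """Build a logger that forwards records to a central syslog server.

    The default address is a placeholder; point it at your collector.
    """
    logger = logging.getLogger("incident-audit")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=(server, port))
    handler.setFormatter(logging.Formatter("%(name)s: %(message)s"))
    logger.addHandler(handler)
    return logger
```

In production the forwarding channel itself should be secured (e.g., syslog over TLS) so attackers cannot tamper with evidence in transit.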

• Establish procedures for having all users change their passwords. A password compromise
may force the organization to require all users of an application, system, or trust domain—or perhaps the
entire organization—to change their passwords.

• Configure the network perimeter to deny all incoming traffic that is not expressly permitted. By limiting
the types of incoming traffic, attackers can reach fewer targets and can reach those targets only via
designated protocols. This should reduce the number of unauthorized access incidents.

• Secure all remote access methods, including modems and VPNs. Unsecured modems provide easily
attainable unauthorized access to internal systems and networks. Remote access clients are often outside
the organization’s control, so granting them access to resources increases risk.

• Put all publicly accessible services on secured DMZ network segments. This permits the organization to
allow external hosts to initiate connections to hosts on the DMZ segments only, not to hosts on internal
network segments. This should reduce the number of unauthorized access incidents.

• Disable all unneeded services on hosts and separate critical services. Every service that is running
presents another potential opportunity for compromise. Separating critical services is important because if
an attacker compromises a host that is running a critical service, immediate access should be gained only
to that one service.

• Use host-based firewall software to limit individual hosts’ exposure to attacks. Deploying host-based
firewall software to individual hosts and configuring it to deny all activity that is not expressly permitted
should further reduce the likelihood of unauthorized access incidents.

• Create and implement a password policy. The password policy should require the use of complex,
difficult-to-guess passwords and ensure that authentication methods are sufficiently strong for accessing
critical resources. Weak and default passwords are likely to be guessed or cracked, leading to
unauthorized access.

• Provide change management information to the incident response team. Indications such as system
shutdowns, audit configuration changes, and executable modifications are probably caused by routine
system administration, rather than attacks. When such indications are detected, the team should be able to
use change management information to verify that the indications are caused by authorized activity.

• Select containment strategies that balance mitigating risks and maintaining services. Incident handlers
should consider moderate containment solutions that focus on mitigating the risks as much as is practical
while maintaining unaffected services.

• Restore or reinstall systems that appear to have suffered a root compromise. The effects of root
compromises are often difficult to identify completely. The system should be restored from a known good
backup, or the operating system and applications should be reinstalled from scratch. The system should
then be secured properly so the incident cannot recur.

Inappropriate usage incident

An inappropriate usage incident occurs when a user performs actions that violate acceptable computing
use policies. Although such incidents are often not security-related, handling them is very similar to
handling security-related incidents. Therefore, it has become commonplace for incident response teams to
handle many inappropriate usage incidents.

Examples of incidents a team might handle include users who—

• Download password cracking tools or pornography
• Send spam promoting a personal business
• E-mail harassing messages to co-workers
• Set up an unauthorized Web site on one of the organization’s computers
• Use file or music sharing services to acquire or distribute pirated materials
• Transfer sensitive materials from the organization to external locations.


Key recommendations for handling inappropriate usage incidents include:

• Discuss the handling of inappropriate usage incidents with the organization’s human resources
and legal departments. Processes for monitoring and logging user activities should comply with the
organization’s policies and all applicable laws. Procedures for handling incidents that directly involve
employees should incorporate discretion and confidentiality.

• Discuss liability issues with the organization’s legal departments. Liability issues may arise during
inappropriate usage incidents, particularly for incidents that are targeted at outside parties. Incident
handlers should understand when they should discuss incidents with the allegedly attacked party and what
information they should reveal.

• Configure network-based intrusion detection software to detect certain types of inappropriate usage.
Intrusion detection software has built-in capabilities to detect certain inappropriate usage incidents, such
as the use of unauthorized services, outbound reconnaissance activity and attacks, and improper mail
relay usage (e.g.,sending spam).

• Log basic information on user activities. Basic information on user activities such as FTP commands,
Web requests, and e-mail headers may be valuable for investigative and evidentiary purposes.

• Configure all e-mail servers so they cannot be used for unauthorized mail relaying. Mail relaying is
commonly used to send spam.

• Implement spam filtering software on all e-mail servers. Spam filtering software can block much of the
spam sent by external parties to the organization’s users, as well as spam that is sent by internal users.

• Implement URL filtering software. URL filtering software prevents access to many inappropriate Web
sites. Users should be required to use the software, typically by preventing access to external Web sites
unless the traffic passes through a server that performs URL filtering.

Multiple component incidents

A multiple component incident is a single incident that encompasses two or more incidents.

For example, the following could comprise a multiple component incident:

1. Malicious code spread through e-mail compromises an internal workstation.

2. An attacker (who may or may not be the one who sent the malicious code) uses the infected
workstation to compromise additional workstations and servers.

3. An attacker (who may or may not have been involved in steps 1 or 2) uses one of the compromised
hosts to launch a DDoS attack against another organization.

This multiple component incident consists of a malicious code incident, several unauthorized access
incidents, and a DoS incident.


The key recommendations for handling multiple component incidents are given below:

• Use centralized logging and event correlation software. Incident handlers should identify an incident as
having multiple components more quickly if all precursors and indications are accessible from a single
point of view.

• Contain the initial incident and then search for signs of other incident components. It can take an
extended period of time for a handler to authoritatively determine that an incident has only a single
component; meanwhile, the initial incident has not been contained. It is generally better to contain the
initial incident first.

• Separately prioritize the handling of each incident component. Resources are probably too limited to
handle all incident components simultaneously. Components should be prioritized based on response
guidelines for each component and how current each component is.

Handling Malicious Code Incidents

Malicious code refers to a program that is covertly inserted into another program with the intent to
destroy data, run destructive or intrusive programs, or otherwise compromise the security or integrity of
the victim’s data. Generally, malicious code is designed to perform these nefarious functions without the
system’s user knowing. Malicious code attacks can be divided into five categories: viruses, Trojan horses,
worms, mobile code, and blended attacks.

Incident Handling Preparation

Preparation is the first step in incident handling. It involves not only establishing an incident
response capability so that the organization is ready to respond to incidents, but also preventing incidents
by ensuring that systems, networks, and applications are sufficiently secure.

Incident handling requires the following resources:

• Contact information for team members and others within and outside the organization (primary and
backup contacts), such as law enforcement and other incident response teams
• On-call information for other teams within the organization, including escalation information
• Incident reporting mechanisms, such as phone numbers, email addresses, online forms, and secure
instant messaging systems that users can use to report suspected incidents;
• Issue tracking system for tracking incident information, status, etc.
• Encryption software to be used for communications among team members, within the organization and
with external parties; for Federal agencies, software must use a FIPS-validated encryption algorithm
• Digital forensic workstations and/or backup devices to create disk images, preserve log files, and save
other relevant incident data
• Laptops for activities such as analyzing data, sniffing packets, and writing reports
• Portable printer to print copies of log files and other evidence from non-networked systems
• Packet sniffers and protocol analyzers to capture and analyze network traffic
• Port lists, including commonly used ports and Trojan horse ports
• Documentation for OSs, applications, protocols, and intrusion detection and antivirus products

• Network diagrams and lists of critical assets, such as database servers
• Current baselines of expected network, system, and application activity
• Cryptographic hashes of critical files to speed incident analysis, verification, and eradication
• Access to images of clean OS and application installations for restoration and recovery purposes

For malicious code incidents, the following preparation steps can be taken:
• Make Users Aware of Malicious Code Issues.
This information should include a basic review of the methods that malicious code uses to propagate and
the symptoms of infections. Holding regular user education sessions helps to ensure that users are aware
of the risks that malicious code poses.

• Read Antivirus Vendor Bulletins. Sign up for mailing lists from antivirus vendors that provide timely
information on new malicious code threats.

• Deploy Host-Based Intrusion Detection Systems to Critical Hosts. Host-based IDS software can
detect signs of malicious code incidents, such as configuration changes and system executable
modifications. File integrity checkers are useful in identifying the affected components of a system. Some
organizations configure their network perimeters to block connections to specific common Trojan horse
ports, with the goal of preventing Trojan horse client and server component communications. However,
this approach is generally ineffective. Known Trojan horses use hundreds of different port numbers, and
many Trojan horses can be configured to use any port number. Also, some Trojan horses use the same
port numbers that legitimate services use, so their communications cannot be blocked by port number.
Some organizations also implement port blocking incorrectly, so legitimate connections are sometimes
blocked. Implementing filtering rules for each Trojan horse port will also increase the demands placed on
the filtering device. Generally, a Trojan horse port should be blocked only if the organization has a
serious Trojan horse infestation.

Incident Prevention

Incident prevention aims to reduce the number of incidents and to minimize their business impact (e.g.,
more extensive damage, longer periods of service and data unavailability). Although incident response
teams are generally not responsible for securing resources, they can be advocates of sound security
practices. A team may be able to identify problems that the organization is otherwise not aware of, and it
can play a key role in risk assessment and training by identifying gaps.

Some of the recommended practices for securing networks, systems, and applications include:

• Periodic risk assessments of systems and applications

• Hardening of hosts appropriately using standard configurations
• Configuring the network perimeter and securing all connection points, such as virtual private networks
(VPNs) and dedicated connections to other organizations
• Deploying malware protection at the host level (e.g., server and workstation operating systems), the
application server level (e.g., email server, web proxies), and the application client level (e.g., email
clients, instant messaging clients)
• Applying lessons learned from previous incidents and sharing with users so they can see how their
actions could affect the organization

For preventing malicious code incidents the following steps can be taken:

• Use Antivirus Software. Antivirus software is a necessity to combat the threat of malicious code and
limit damage. The software should be running on all hosts throughout the organization, and all copies
should be kept current with the latest virus signatures so that the newest threats can be thwarted. Antivirus
software should also be used for applications used to transfer malicious code, such as e-mail, file transfer,
and instant messaging software. The software should be configured to perform periodic scans of the
system as well as real-time scans of each file as it is downloaded, opened, or executed.

The antivirus software should also be configured to disinfect and quarantine infected files. Some antivirus
products not only look for viruses, worms, and Trojan horses, but they also examine HTML, ActiveX,
JavaScript, and other types of mobile code for malicious content.

• Block Suspicious Files. Configure e-mail servers and clients to block attachments with file extensions
that are associated with malicious code (e.g., .pif, .vbs), and suspicious file extension combinations
(e.g., .txt.vbs, .htm.exe).
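The extension and double-extension checks just described can be sketched as a simple filter; the extension list below is illustrative and would be extended to match your organization's policy.

```python
# Illustrative set of extensions associated with malicious code.
RISKY_EXTENSIONS = {".pif", ".vbs", ".bat", ".com", ".exe", ".scr"}

def is_suspicious_attachment(filename):
    """Flag attachments whose extension chain contains a risky extension.

    Catches both plain cases ("setup.exe") and double-extension
    tricks ("invoice.txt.vbs").
    """
    parts = filename.lower().split(".")
    return any("." + p in RISKY_EXTENSIONS for p in parts[1:])
```

A mail gateway would apply this to every attachment name before delivery, quarantining matches rather than silently dropping them.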

• Limit the Use of Nonessential Programs With File Transfer Capabilities. Examples include peer-to-
peer file and music sharing programs, instant messaging software, and IRC clients and servers. These
programs are frequently used to spread malicious code among users.

• Educate Users on the Safe Handling of E-mail Attachments. Antivirus software should be configured
to scan each attachment before opening it. Users should not open suspicious attachments or attachments
from unknown sources. Users should also not assume that an attachment is safe just because the sender is
known. Senders may not know that their systems are infected with malicious code that can extract e-mail
addresses from files and send copies of the malicious code to those addresses. This activity creates the
impression that the e-mails are coming from a trusted person even though that person is not aware the
messages have been sent. Users can also be educated on file types that they should never open (e.g., .bat,
.com, .exe, .pif, .vbs). Although user awareness of good practices should lessen the number and severity
of malicious code incidents, organizations should assume that users will make mistakes and infect
systems.

• Eliminate Open Windows Shares. Many worms spread through unsecured shares on hosts running
Windows. If one host in the organization is infected with a worm, it could rapidly spread to hundreds or
thousands of other hosts within the organization through their unsecured shares. Organizations should
routinely check all hosts for open shares and direct the system owners to secure the shares properly. Also,
the network perimeter should be configured to prevent traffic that uses NetBIOS ports from entering or
leaving the organization’s networks. This should not only prevent external hosts from directly infecting
internal hosts through open shares but should also prevent internal worm infections from spreading to
other organizations through open shares.

• Use Web Browser Security to Limit Mobile Code. All Web browsers should have their security settings
configured so as to prevent unsigned ActiveX and other mobile code vehicles from unknowingly being
downloaded to and executed on local systems. Organizations should consider establishing an Internet
security policy that specifies which types of mobile code may be used from various sources (e.g., internal
servers, external servers).

• Configure E-mail Clients to Act More Securely.

E-mail clients throughout the organization should be configured to avoid actions that may inadvertently
permit infections to occur. For example, e-mail clients should not automatically execute attachments.

Detection of Malicious Code

Detecting malicious code requires preparation to handle incidents that arrive through common attack
vectors.

Some key practices for detecting malicious code include:

• Screening attack vectors such as removable media and other peripheral devices
• Monitoring network flow information from routers and other networking devices, which can be used to
find anomalous network activity caused by malware, data exfiltration, and other malicious acts
• Monitoring alerts from IDPS products, which use attack signatures to identify malicious activity; the
signatures must be kept up to date so that the newest attacks can be detected
• Observing antivirus software, which detects various forms of malware, generates alerts, and prevents
the malware from infecting hosts
• Maintaining and using a knowledge base that explains the significance and validity of precursors and
indicators, such as IDPS alerts, operating system log entries, and application error codes
• Following appropriate containment procedures, such as disconnecting the host from the network to
prevent further damage

Because malicious code incidents can take many forms, they may be detected via a number of precursors
and indications. Some precursors and possible responses are listed below:
Precursor: An alert warns of new malicious code that targets software that the organization uses.
Response: Research the new virus to determine whether it is real or a hoax. This can be done through
antivirus vendor Web sites and virus hoax sites. If the malicious code is confirmed as authentic, ensure
that antivirus software is updated with virus signatures for the new malicious code. If a virus signature is
not yet available, and the threat is serious and imminent, the activity might be blocked through other
means, such as configuring e-mail servers or clients to block e-mails matching characteristics of the new
malicious code. The team might also want to notify antivirus vendors of the new virus.

Precursor: Antivirus software detects and successfully disinfects or quarantines a newly received
infected file.
Response: Determine how the malicious code entered the system and what vulnerability or weakness it
was attempting to exploit. If the malicious code might pose a significant risk to other users and hosts,
mitigate the weaknesses that the malicious code used to reach the system and would have used to infect
the target host.

Similarly there are certain indications that can highlight the onset of a malicious action.
For example:
Malicious Action: A virus that spreads through e-mail infects a host.
• Antivirus software alerts of infected files
• Sudden increase in the number of e-mails being sent and received
• Changes to templates for word processing documents, spreadsheets, etc.
• Deleted, corrupted, or inaccessible files
• Unusual items on the screen, such as odd messages and graphics
• Programs start slowly, run slowly, or do not run at all

• System instability and crashes

Malicious Action: A worm that spreads through a vulnerable service infects a host.
• Antivirus software alerts of infected files
• Port scans and failed connection attempts targeted at the vulnerable service (e.g., open Windows shares)
• Increased network usage
• Programs start slowly, run slowly, or do not run at all
• System instability and crashes

Malicious Action: Malicious mobile code on a Web site is used to infect a host with a virus, worm or
Trojan horse.
• Indications listed above for the pertinent type of malicious code
• Unexpected dialog boxes, requesting permission to do something
• Unusual graphics, such as overlapping or overlaid message boxes

Malicious Action: A Trojan horse is installed and running on a host.

• Antivirus software alerts of Trojan horse versions of files
• Network intrusion detection alerts of Trojan horse client-server communications
• Firewall and router log entries for Trojan horse client-server communications
• Network connections between the host and unknown remote systems
• Unusual and unexpected ports open
• Unknown processes running
• High amounts of network traffic generated by the host, particularly if directed at external host(s)
• Programs start slowly, run slowly, or do not run at all
• System instability and crashes

Containment Strategy

Containment strategies vary based on the type of incident. For example, the strategy for containing an
an e-mail-borne malware infection is quite different from that of a network-based DDoS attack.
Organizations should create separate containment strategies for each major incident type, with criteria
documented clearly to facilitate decision-making.

Criteria for determining the appropriate strategy include:

• Potential damage to and theft of resources

• Need for evidence preservation
• Service availability (e.g., network connectivity, services provided to external parties)
• Time and resources needed to implement the strategy
• Effectiveness of the strategy (e.g., partial containment, full containment)
• Duration of the solution (e.g., emergency workaround to be removed in four hours,
temporary workaround to be removed in two weeks, permanent solution)

A containment strategy for malicious code incidents may include any of the actions listed in the
prevention section above, as well as the following:

Identifying and Isolating Other Infected Hosts: Antivirus alert messages are a good source of
information, but not every infection will be detected by antivirus software. Incident handlers may need to
search for indications of infection through other means, such as:

• Performing port scans to detect hosts listening on a known Trojan horse or backdoor port
• Using antivirus scanning and clean-up tools released to combat a specific instance of malicious code
• Reviewing logs from e-mail servers, firewalls, and other systems that the malicious code may have
passed through, as well as individual host logs
• Configuring network and host intrusion detection software to identify activity associated with infections
• Auditing the processes running on systems to confirm that they are all legitimate.
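The port-scan step above can be sketched with Python's standard socket module. This is a minimal illustration, not a production scanner; the backdoor port number used in any real scan would come from the relevant advisory.

```python
import socket

def scan_for_backdoor(hosts, port, timeout=0.5):
    """Return the subset of hosts accepting TCP connections on the given port.

    A host listening on a known Trojan horse or backdoor port
    warrants follow-up investigation by the incident handlers.
    """
    suspicious = []
    for host in hosts:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                suspicious.append(host)  # port open: flag for follow-up
        except OSError:
            pass  # closed, filtered, or unreachable: nothing to report
    return suspicious
```

A handler would call this with the address range of a subnet and the advisory's port number, e.g. `scan_for_backdoor(hosts, 31337)`, and triage the returned hosts.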

Sending Unknown Malicious Code to Antivirus Vendors: Occasionally, malicious code that cannot be
definitively identified by antivirus software may enter the environment. Eradicating the malicious code
from systems and preventing additional infections may be difficult or impossible without having updated
antivirus signatures from the vendor. Incident handlers should be familiar with the procedures for
submitting copies of unknown malicious code to the organization’s antivirus vendors.

Configuring E-mail Servers and Clients to Block E-mails: Many e-mail programs can be configured
manually to block e-mails with particular subjects, attachment names, or other criteria that correspond to
the malicious code. This is neither a foolproof nor an efficient solution, but it may be the best option
available if an imminent threat exists and antivirus signatures are not yet available.
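A simplified sketch of such filtering, using Python's standard email package; the blocklist entries below are illustrative placeholders, since real criteria would come from the incident handlers or a vendor advisory.

```python
from email import message_from_string

# Illustrative blocklist entries only; real values would be supplied
# by the incident response team for the specific malicious code.
BLOCKED_SUBJECTS = {"important message", "your invoice"}
BLOCKED_EXTENSIONS = (".exe", ".scr", ".pif")

def should_block(raw_message: str) -> bool:
    """Return True if the message matches a blocked subject or carries
    an attachment whose filename has a blocked extension."""
    msg = message_from_string(raw_message)
    subject = (msg.get("Subject") or "").strip().lower()
    if subject in BLOCKED_SUBJECTS:
        return True
    for part in msg.walk():
        filename = part.get_filename()
        if filename and filename.lower().endswith(BLOCKED_EXTENSIONS):
            return True
    return False
```

A mail server hook would call `should_block()` on each inbound message and quarantine or reject matches.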

Blocking Outbound Access: If the malicious code attempts to generate outbound e-mails or connections,
handlers should consider blocking access to IP addresses or services to which the infected system may be
attempting to connect.

Shutting Down E-mail Servers: During the most severe malicious code incidents, with hundreds or
thousands of internal hosts infected, e-mail servers may become completely overwhelmed by viruses
trying to spread via e-mail. It may be necessary to shut down an e-mail server to halt the spread of e-mail-
borne viruses.

Isolating Networks from the Internet: Networks may become overwhelmed with worm traffic when a
severe worm infestation occurs. Occasionally a worm generates so much traffic throughout the
Internet that network perimeters are completely overwhelmed. It may be better to disconnect the
organization from the Internet, particularly if the organization’s Internet access is essentially useless as a
result of the volume of worm traffic. Disconnecting protects the organization’s systems from being
attacked by external worms and, if the organization’s systems are already infected, prevents them from
attacking other systems and adding to the traffic congestion.

Evidence Gathering and Handling

The primary reason for gathering evidence during an incident is to resolve the incident; however,
evidence may also be needed for legal proceedings. During incident analysis, evidence is collected and
preserved using appropriate hardware, software, and related accessories, such as hard-bound notebooks,
digital cameras, audio recorders, chain-of-custody forms, evidence storage bags and tags, and evidence
tape. With respect to legal proceedings, it is important to clearly document how all evidence, including
compromised systems, has been preserved. Evidence should be collected according to procedures that
meet all applicable laws and regulations, developed in consultation with legal staff and appropriate law
enforcement agencies, so that the evidence is admissible in court. Users and system administrators should
therefore be made aware of the steps they should take to preserve evidence.
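One common preservation step is recording a cryptographic hash of each evidence file at collection time, so any later tampering can be detected. A minimal sketch using Python's hashlib:

```python
import hashlib

def evidence_hash(path: str, algorithm: str = "sha256") -> str:
    """Compute a hash of an evidence file, reading in chunks so that
    large disk images do not need to fit in memory.

    Recording the returned digest on the chain-of-custody form lets
    anyone re-verify later that the file is unchanged.
    """
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```

Re-running `evidence_hash()` on the stored copy and comparing digests verifies integrity before the evidence is presented.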

Eradication and Recovery

After an incident has occurred, it is important to identify all affected hosts within the organization
so that they can be remediated. For some incidents, eradication is either not necessary or is performed
during recovery.

In recovery, administrators restore systems to normal operation, confirm that the systems are functioning
normally, and (if applicable) remediate vulnerabilities to prevent similar incidents.

Eradication procedures may be performed in the following ways:

• Identify and mitigate all vulnerabilities that were exploited

• Remove malware, inappropriate materials, and other components
• If more affected hosts are discovered (e.g., new malware infections), repeat the Detection and Analysis
steps to identify all other affected hosts
• Contain and eradicate the incident in accordance with appropriate procedures

Recovery may involve such actions as restoring systems from clean backups, rebuilding systems from
scratch, replacing compromised files with clean versions, installing patches, changing passwords, and
tightening network perimeter security (e.g., firewall rule sets, boundary router access control lists).
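As one concrete illustration of tightening a boundary router access control list, the fragment below uses classic Cisco IOS extended ACL syntax; the ACL number, attacker address, and interface name are placeholders, not values from the text.

```
! Illustrative only: block a known-hostile address, keep established
! sessions working, then apply the list inbound on the WAN interface.
access-list 110 deny ip host 203.0.113.50 any
access-list 110 permit tcp any any established
access-list 110 permit ip any any
!
interface Serial0
 ip access-group 110 in
```

In a real recovery effort the final `permit ip any any` would typically be replaced with rules reflecting the organization's perimeter policy.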

Some of the recommended practices in recovery procedures are:

• Return affected systems to an operationally ready state

• Confirm that the affected systems are functioning normally
• If necessary, implement additional monitoring to look for future related activity

Eradication and recovery should be done in a phased approach so that remediation steps are prioritized.

Antivirus Systems
Antivirus software effectively identifies and removes malicious code infections; however, some infected
files cannot be disinfected. (Files can be deleted and replaced with clean backup copies; in the case of an
application, the affected application can be reinstalled.) If the malicious code provided attackers with
root-level access, it may not be possible to determine what other actions the attackers may have
performed. In such cases, the system should either be restored from a previous, uninfected backup or be
rebuilt from scratch. Of course, the system should then be secured so that it will not be susceptible to
another infection from the same malicious code.

Antivirus software detects various forms of malware, generates alerts when it finds that a host is
infected, and prevents the malware from infecting hosts. Current antivirus products are effective at
stopping many instances of malware if their signatures are kept up to date. Antispam software is used to
detect spam and prevent it from reaching users’ mailboxes. Spam may contain malware, phishing attacks,
and other malicious content, so alerts from antispam software may indicate attack attempts.
Recommendations for organizing a computer security incident handling capability are summarized below:
• Develop an incident response plan based on the incident response policy
• Develop incident response procedures
• Establish policies and procedures regarding incident-related information sharing
• Consider the relevant factors when selecting an incident response team model
• Profile Networks and Systems
• Understand the normal behaviors of networks, systems, and applications
• Create a Log Retention Policy
• Perform Event Correlation
• Acquire tools and resources that may be of value during incident handling
• Prevent incidents from occurring by ensuring that networks, systems, and applications are sufficiently secure
• Identify precursors and indicators through alerts generated by several types of security software
• Establish mechanisms for outside parties to report incidents
• Require a baseline level of logging and auditing on all systems, and a higher baseline level on all critical systems
• Keep all host clocks synchronized
• Maintain and use a knowledge base of information

A summary of recommendations for handling malicious code incidents includes:
• Make users aware of malicious code issues.
• Deploy host-based intrusion detection systems, including file integrity checkers, to critical hosts.
• Use antivirus software, and keep it updated with the latest virus signatures.
• Configure software to block suspicious files.
• Eliminate open Windows shares.
• Contain malicious code incidents as quickly as possible.

15. Additional topics

- Elaboration of CISCO Tools
- Key DDoS Incident Response

Purpose of each Mode

User EXEC mode is the initial startup mode. A router configuration session can be initiated using
terminal emulation programs such as Kermit, HyperTerminal, or telnet.

Privileged EXEC mode is the system administrator mode. In this mode configuration files can be read,
the router can be rebooted, and operating parameters can be changed.

Global configuration mode is used to modify system-wide configuration parameters, such as routing
tables and routing algorithms.

Interface configuration mode is used to modify the Ethernet and serial port configurations.
How to Change Modes

User EXEC mode is entered by starting a terminal emulation program, such as Kermit or HyperTerminal,
or by starting a telnet session. The workstation must be physically connected to the console port on the
router by a rollover cable (Kermit or HyperTerminal) or to an Ethernet port by a standard patch
cable (telnet). See Figure 1 for port locations. Typically a password must be entered to establish the
connection. The user EXEC mode prompt has the following form: RouterName>

Privileged EXEC mode is entered from user EXEC mode by typing enable. A password must be supplied
to complete the connection. The privileged EXEC mode prompt has the following form: RouterName#

Global configuration mode is entered from privileged EXEC mode by typing configure terminal or
config t. No password is required. The global configuration mode prompt has the following form:
RouterName(config)#

Interface configuration mode is entered from global configuration mode by typing interface
InterfaceName, where InterfaceName is Ethernet0, Serial0, or Serial1. The interface configuration
mode prompt has the following form: RouterName(config-if)#
The following table describes some of the most commonly used modes, how to enter the modes, and the
resulting prompts. The prompt helps you identify which mode you are in and, therefore, which commands
are available to you.
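The mode transitions described above can be seen in a short console session; RouterName and the interface chosen are illustrative.

```
RouterName> enable
Password:
RouterName# configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
RouterName(config)# interface Serial0
RouterName(config-if)# exit
RouterName(config)# end
RouterName#
```

Note how each prompt (`>`, `#`, `(config)#`, `(config-if)#`) identifies the current mode at a glance.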

16. University Question papers of previous years