NETWORKING
A network is a set of devices (often referred to as nodes) connected by communication links. A node can
be a computer, printer, or any other device capable of sending and/or receiving data generated by
other nodes on the network.
There should be a security framework of policies dealing with all aspects of physical
security, personnel security, and information security.
There should be clear roles for users and information security officers on the security system
steering committee.
RISK ANALYSIS
RISK MANAGEMENT
Information system
Personnel
Physical
Review method
The word data refers to information presented in whatever form is agreed upon by the parties
creating and using the data.
Data communications are the exchange of data between two devices via some form of
transmission medium such as a wire cable.
For data communication to occur, the communicating devices must be part of a communication
system made up of a combination of hardware (physical equipment) and software (programs).
Moreover, data can flow in three different ways, namely simplex, half-duplex, and full-duplex.
In simplex mode, the communication is unidirectional: only one of the two devices on a link can
transmit; the other can only receive.
In half-duplex mode, each station can both transmit and receive, but not at the same time.
i.e. When one device is sending, the other can only receive, and vice versa.
In full-duplex mode (also called duplex), both stations can transmit and receive
simultaneously.
A multipoint connection is one in which more than two devices share a single link.
Syntax: The term syntax refers to the structure or format of the data, meaning the order in
which they are presented.
Semantics: The word semantics refers to the meaning of each section of bits. How is a
particular pattern to be interpreted, and what action is to be taken based on that interpretation?
Timing: The term timing refers to two characteristics: when data should be sent and how fast
they can be sent.
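The three protocol elements above can be sketched with a small, invented frame format in Python. The field layout, sizes, and the broadcast convention are all assumptions for illustration, not a real protocol:

```python
import struct

# Hypothetical 8-byte frame illustrating the protocol elements.
# Syntax: the fixed field order and sizes (1-byte sender, 1-byte receiver,
#         2-byte length, 4-byte checksum) -- the ">BBHI" big-endian layout.
# Semantics: what each field MEANS (e.g. receiver == 0xFF is a broadcast).
# Timing is not captured by the format itself; it governs when and how
# fast frames like this may be sent.
FRAME_FORMAT = ">BBHI"

def pack_frame(sender, receiver, length, checksum):
    return struct.pack(FRAME_FORMAT, sender, receiver, length, checksum)

def unpack_frame(raw):
    sender, receiver, length, checksum = struct.unpack(FRAME_FORMAT, raw)
    # Semantics: interpret the receiver field.
    is_broadcast = (receiver == 0xFF)
    return {"sender": sender, "receiver": receiver, "length": length,
            "checksum": checksum, "broadcast": is_broadcast}
```

Both sides must agree on the syntax (the format string) before the semantics of any field can be applied.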
Standards provide guidelines to manufacturers, vendors, government agencies, and other service
providers to ensure the kind of interconnectivity necessary in today's marketplace and in international
communication. Standards are developed through the cooperation of standards creation committees,
forums, and government regulatory agencies. The standards creation committees include ISO, ITU-T,
ANSI, IEEE, and EIA.
THE OSI REFERENCE MODEL
The OSI reference model divides the problem of moving information between computers
over a network medium into SEVEN smaller and more manageable problems. This separation into
smaller more manageable functions is known as layering.
The process of breaking up the functions or tasks of networking into layers reduces complexity.
Each layer provides a service to the layer above it in the protocol specification. Each layer
communicates with the same layer's software or hardware on other computers. The lower four layers
(transport, network, data link, and physical: Layers 4, 3, 2, and 1) are concerned with the flow of data
from end to end through the network. The upper three layers of the OSI model (application, presentation,
and session: Layers 7, 6, and 5) are oriented more toward services to the applications. Data is
encapsulated with the necessary protocol information as it moves down the layers before network
transit. A message begins at the top application layer and moves down the OSI layers to the bottom
physical layer. As the message descends, each successive OSI model layer adds a header to it. A
header is layer-specific information that basically explains what functions the layer carried out.
Conversely, at the receiving end, headers are stripped from the message as it travels up the
corresponding layers.
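As a rough sketch of this encapsulation, each layer below prepends its header on the way down and the receiver strips them in reverse order. The layer names follow the text; the header strings are invented:

```python
# Minimal sketch of OSI-style encapsulation: each layer prepends its own
# header as data moves down, and the receiver strips them travelling back up.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link"]

def encapsulate(message):
    for layer in LAYERS:
        # header is layer-specific information explaining what the layer did
        message = f"[{layer}-hdr]{message}"
    return message

def decapsulate(frame):
    # the outermost (last-added) header is stripped first
    for layer in reversed(LAYERS):
        prefix = f"[{layer}-hdr]"
        assert frame.startswith(prefix), f"missing {layer} header"
        frame = frame[len(prefix):]
    return frame
```

The data-link header ends up outermost on the wire, which is why decapsulation runs in the reverse order of encapsulation.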
PHYSICAL LAYER
Defines rules by which bits are passed from one system to another on a physical
communication medium.
Covers all mechanical, electrical, functional, and procedural aspects of physical
communication. Such characteristics as voltage levels, timing of voltage changes, physical data
rates, maximum transmission distances, physical connectors, and other similar attributes are
defined by physical layer specifications.
DATA LINK LAYER
The data link layer attempts to provide reliable communication over the physical layer interface.
Breaks the outgoing data into frames and reassembles the received frames.
NETWORK LAYER
Defines the most optimum path the packet should take from the source to the
destination.
Handles congestion in the network. The network layer also defines how to fragment a
packet into smaller packets to accommodate different media.
TRANSPORT LAYER
Purpose of this layer is to provide a reliable mechanism for the exchange of data between
two processes in different computers.
SESSION LAYER
Session layer provides mechanism for controlling the dialogue between the two end systems.
It defines how to start, control and end conversations (called sessions) between
applications.
This layer provides services like dialogue discipline which can be full duplex or half duplex.
Session layer can also provide check-pointing mechanism such that if a failure of some sort
occurs between checkpoints, all data can be retransmitted from the last checkpoint.
PRESENTATION LAYER
Presentation layer defines the format in which the data is to be exchanged between the two
communicating entities.
APPLICATION LAYER
Application layer interacts with application programs and is the highest level of OSI model.
Examples are applications such as file transfer, electronic mail, remote login etc.
Companies usually pay an outside local carrier to supply the physical media necessary for
transmitting data. The equipment and services provided by these vendors are usually on a monthly
fee-for-service basis, with a one-time installation and set-up charge. One example is when a company leases
telephone lines from a telecommunications company.
WANs can use either analog (telephone lines) or digital (such as satellite transmission) signals, or
a combination of both. WANs can be privately owned by large corporations or they can be public. One
difference between public MANs and WANs is that the telephone company used is a long distance rather
than local carrier.
ENTERPRISE NETWORKS
An Enterprise Network is the sum of the networked parts of an organization, encompassing all of
the organization's LANs, MANs (Metropolitan Area Networks), and WANs, as well as clients, servers,
printers, and other networked nodes.
NETWORK TOPOLOGIES
There are different topologies that make up computer networks. Topology is the physical layout
of computers, cables, and other components on a network. Many networks are a combination of the
various topologies that we will look at:
Bus
Star
Mesh
Ring
BUS TOPOLOGIES
A bus topology uses one cable to connect multiple computers. The cable is also called a trunk, a
backbone, and a segment. Most of the time, as seen in the figure below, T-connectors are used to connect to
the cabled segment. They are called T-connectors because they are shaped like the letter T. You will
commonly see coaxial cable used in bus topologies.
In a bus topology, all computers are connected on one linear cable. Another key component of a bus
topology is the need for termination. To prevent packets from bouncing up and down the cable, devices
called terminators must be attached to both ends of the cable. A terminator absorbs an electronic signal
and clears the cable so that other computers can send packets on the network. If there is no termination,
the entire network fails. Only one computer at a time can transmit a packet on a bus topology. Computers
in a bus topology listen to all traffic on the network but accept only the packets that are addressed to
them. Broadcast packets are an exception because all computers on the network accept them. When a
computer sends out a packet, it travels in both directions from the computer. A bus is a passive topology.
The computers on a bus topology only listen or send data. They do not take data and send it on or
regenerate it. So if one computer on the network fails, the network is still up.
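The "listen to everything, accept only your own packets" behaviour on a shared bus can be sketched as follows. The station addresses and the broadcast value are invented:

```python
# Sketch of packet acceptance on a bus: every station sees every packet,
# but only the addressee accepts it -- unless the packet is a broadcast,
# which every station accepts.
BROADCAST = "FF"

def stations_accepting(packet_dest, stations):
    """Return the stations that actually accept a packet on the bus."""
    if packet_dest == BROADCAST:
        return list(stations)                 # broadcast: everyone accepts
    return [s for s in stations if s == packet_dest]
```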
ADVANTAGES
Cost - uses less cable than the star topology or the mesh topology.
Ease of installation - simply connect the workstation to the cable segment, or backbone.
DISADVANTAGES
Difficulty of troubleshooting - When the network goes down, usually it is from a break in
the cable segment. With a large network this can be tough to isolate.
STAR TOPOLOGIES
In a star topology, all computers are connected through one central hub or switch, as illustrated in
Figure below. This is a very common network scenario.
Computers in a star topology are all connected to a central hub. A star topology actually comes from the
days of the mainframe system. The mainframe system had a centralized point where the terminals
connected.
ADVANTAGES
Centralization of cabling. With a hub, if one link fails, the remaining workstations are not
affected like they are with other topologies.
DISADVANTAGES
If the hub fails, the entire network, or a good portion of the network, comes down.
Cost - to connect each workstation to a centralized hub, much more cable is used.
MESH TOPOLOGIES
A mesh topology is not very common in computer networking, but you will have to know it for
the exam. The mesh topology is more commonly seen with something like the national phone network.
With the mesh topology, every workstation has a connection to every other component of the network.
Computers in a mesh topology are all connected to every other component of the network.
ADVANTAGES
Fault tolerance - the redundant links mean a single cable break does not bring down the
network, since traffic can take another path.
DISADVANTAGES
Cost - With a large network, the amount of cable needed to connect and the interfaces on the
workstations would be very expensive.
RING TOPOLOGIES
In a ring topology, all computers are connected with a cable that loops around. As shown in
Figure, the ring topology is a circle that has no start and no end. Terminators are not necessary in a ring
topology. Signals travel in one direction on a ring while they are passed from one computer to the next.
Each computer checks the packet for its destination and passes it on as a repeater would. If one of the
computers fails, the entire ring network goes down.
ADVANTAGES
Equal access - every computer is given equal access to the ring, so no single machine can
monopolize the network.
DISADVANTAGES
If one computer fails or the cable link is broken the entire network could go down.
INFORMATION FLOW
Network information theory studies the limits of information flow in networks. Unlike point-to-point
problems, almost all network information theory problems are open.
Suppose each source wants to communicate with its corresponding destination at rate Ri.
Information flow: the transmission of information from one place to another. It may be absolute or
probabilistic.
Confidentiality:
What subjects can see what objects. So, confidentiality specifies what is
allowed.
Flow:
Controls what subjects actually see. So, information flow describes how policy is
enforced.
y = x; // what do we know before & after assignment?
y = x/z;
A command sequence c causes a flow of information from x to y if the value of y after the commands
allows one to deduce information about the value of x before the commands executed.
tmp = x;
y = tmp;
Here the flow is transitive: information flows from x to tmp and from tmp to y, and hence from x to y.
Consider a conditional statement
if x == 1 then y = 0 else y = 1
what do we know before & after execution?
What about: if x == 1 then y = 0
There is no explicit assignment to y in one case, yet the final value of y still reveals information
about x. This is called an implicit information flow.
There are two categories of information flows:
Explicit - the operations causing the flow are independent of the value of x, e.g. the assignment
operation y = x.
Implicit - the flow arises from the control structure of the program rather than a direct
assignment, e.g. if x == 1 then y = 0.
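A minimal sketch of the two categories, using nothing beyond the statements in the text:

```python
# In both functions the value of x can be deduced from y afterwards, so
# information flows from x to y -- explicitly in the first, implicitly
# (via control flow, with no direct assignment from x) in the second.

def explicit_flow(x):
    y = x          # explicit: the assignment copies x into y
    return y

def implicit_flow(x):
    # implicit: y is never assigned from x, yet y reveals whether x == 1
    if x == 1:
        y = 0
    else:
        y = 1
    return y
```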
There are two mechanisms for checking information flows:
Compiler-based
Execution-based
Both analyze code. Execution-based checking typically requires tracking the security level of the PC as the
program executes.
Covert Channel
A communication channel is covert if it is neither designed nor intended to transfer information at all. It is
based on transmission by storage into variables that describe resource states.
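As a toy illustration, a storage covert channel can signal one bit per interval through the existence of an agreed-upon file; the file-name convention is invented:

```python
import os
import tempfile

# Toy storage covert channel: the sender encodes a bit in a resource state
# (a file's existence); the receiver reads the bit back from that state.
# No "message" is ever transmitted through a designed channel.

def send_bit(path, bit):
    if bit:
        open(path, "w").close()     # state "file exists" encodes 1
    elif os.path.exists(path):
        os.remove(path)             # state "file absent" encodes 0

def receive_bit(path):
    return 1 if os.path.exists(path) else 0
```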
Information Security Models bridge the gap between security policy statements (which explain
which users should have access to data) and the operating system implementation (which allows
an administrator to configure access control).
The models help map abstract goals onto mathematical relationships that underpin whichever
implementation is eventually chosen (Windows, Unix, MacOS etc).
BELL-LAPADULA (BLP) MODEL
Each user (subject) and information object has a fixed security class label.
A subject s has read access to an object o iff the class of the subject C(s) is greater than or
equal to the class of the object C(o).
Dominance Relation: the clearance level of a user (subject) maps to the classification of files
(object). Users with a particular clearance will only be able to access files of a particular
classification and below.
Data flows upwards: BLP enforces the confidentiality aspect of access control in that data can
only move up the lattice from lower levels of classification to higher.
Given its concentration on protecting information from flowing in the wrong direction, BLP is also
categorized as an Information-Flow Model.
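A minimal sketch of the BLP rules described above; the level names and their ordering are illustrative:

```python
# BLP in miniature: read is allowed only "down" the lattice (C(s) >= C(o))
# and write only "up" (C(s) <= C(o)), so data can flow only from lower to
# higher classifications.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def blp_can_read(subject_level, object_level):
    return LEVELS[subject_level] >= LEVELS[object_level]   # no read up

def blp_can_write(subject_level, object_level):
    return LEVELS[subject_level] <= LEVELS[object_level]   # no write down
```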
BIBA MODEL
The major drawback of the BLP model was that it only considered the confidentiality of data.
There was no consideration given to the need-to-know principle: users were free to read all data at their
own and lower levels of classification. Therefore, shortly after the development of BLP, Ken Biba
developed a model that considered data integrity. Focused on the commercial sector where, at the time,
the integrity of data had more importance than its confidentiality, the Biba model is concerned with
preventing data from low-integrity environments polluting high-integrity data.
Like BLP, Biba has three properties:
The Simple Integrity Property: Data can be read from a higher integrity level.
The Star Integrity Property: Data can be written to a lower integrity level.
The Invocation Property: A user cannot request service from (invoke) a higher integrity level.
Biba is the opposite of BLP: whereas BLP is a WURD model (Write Up, Read Down), Biba is RUWD
(Read Up, Write Down).
The invocation property is sometimes summarized as "no execute up".
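The three Biba properties listed above can be sketched as the dual of BLP; the integrity level names are illustrative:

```python
# Biba in miniature: read up, write down, and no invocation of a higher
# integrity level -- the opposite orientation to BLP.
INTEGRITY = {"low": 0, "medium": 1, "high": 2}

def biba_can_read(subject, obj):
    return INTEGRITY[subject] <= INTEGRITY[obj]     # simple integrity: read up only

def biba_can_write(subject, obj):
    return INTEGRITY[subject] >= INTEGRITY[obj]     # star integrity: write down only

def biba_can_invoke(subject, target):
    return INTEGRITY[subject] >= INTEGRITY[target]  # invocation: no invoke up
```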
Clark-Wilson Model
A well-formed transaction, as defined by Clark-Wilson, is one that only permits modification of data if
that modification meets the three integrity goals listed above.
Brewer Nash Model
The Brewer Nash model, also known as the Chinese Wall model, provides access controls that change
dynamically depending on the previous actions of a user. It is typically used to protect against conflicts of
interest. Once a particular user has accessed a particular object in one half of a data store, their access to
the other half is immediately revoked. Again, Brewer Nash is an Information Flow Model: no
information can flow between two entities that could result in a conflict of interest.
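A sketch of the dynamic Chinese Wall rule, assuming a single conflict-of-interest class with two invented competitors:

```python
# Chinese Wall in miniature: once a user touches an object belonging to one
# member of a conflict-of-interest class, the other members of that class
# become off limits for that user.
CONFLICT_CLASSES = [{"BankA", "BankB"}]    # BankA and BankB compete

def can_access(history, company):
    for conflict in CONFLICT_CLASSES:
        if company in conflict:
            # blocked if the user already accessed a *different* member
            if any(prev in conflict and prev != company for prev in history):
                return False
    return True

def access(history, company):
    if not can_access(history, company):
        raise PermissionError(f"conflict of interest: {company}")
    history.add(company)
```

The access decision depends on the user's history, which is what makes the control dynamic rather than a static label comparison.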
Graham-Denning Model
Graham-Denning is much less abstract than the models previously considered. Whilst those models don't
define how security or integrity ratings are defined or modified, Graham-Denning introduces several
critical primitive protection rights:
Create Object
Create Subject
Delete Object
Delete Subject
Read Access Right
Grant Access Right
Delete Access Right
Transfer Access Right
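The create/delete primitives listed above can be sketched against a simple access matrix; the dict-of-dicts structure and the "owner" right are assumptions for illustration:

```python
# Toy access matrix: subject -> {object: set of rights}. The four
# create/delete primitives manipulate rows and cells of the matrix.

def create_subject(matrix, s):
    matrix.setdefault(s, {})

def create_object(matrix, s, o):
    create_subject(matrix, s)
    matrix[s][o] = {"owner"}        # the creator becomes the owner

def delete_object(matrix, s, o):
    if "owner" in matrix.get(s, {}).get(o, set()):
        for rights in matrix.values():
            rights.pop(o, None)     # remove the object from every row

def delete_subject(matrix, s):
    matrix.pop(s, None)             # remove the subject's whole row
```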
Security attacks are classified as either passive attacks, which include unauthorized reading of a
message or file and traffic analysis, or active attacks, such as modification of messages or files
and denial of service.
A security mechanism is any process (or a device incorporating such a process) that is designed
to detect, prevent, or recover from a security attack. Examples of mechanisms are encryption
algorithms, digital signatures, and authentication protocols.
Security services include authentication, access control, data confidentiality, data integrity,
nonrepudiation, and availability.
The field of network and Internet security consists of measures to deter, prevent, detect, and correct
security violations that involve the transmission of information. Consider the following examples of
security violations:
1. User A transmits a file to user B. The file contains sensitive information (e.g., payroll
records) that is to be protected from disclosure. User C, who is not authorized to read the file,
is able to monitor the transmission and capture a copy of the file during its transmission.
2. A network manager, D, transmits a message to a computer, E, under its management. The
message instructs computer E to update an authorization file to include the identities of a
number of new users who are to be given access to that computer. User F intercepts the
message, alters its contents to add or delete entries, and then forwards the message to
computer E, which accepts the message as coming from manager D and updates its
authorization file accordingly.
3. Rather than intercept a message, user F constructs its own message with the desired entries
and transmits that message to computer E as if it had come from manager D. Computer E
accepts the message as coming from manager D and updates its authorization file
accordingly.
4. An employee is fired without warning. The personnel manager sends a message to a server
system to invalidate the employee's account. When the invalidation is accomplished, the
server is to post a notice to the employee's file as confirmation of the action. The employee is
able to intercept the message and delay it long enough to make a final access to the server to
retrieve sensitive information. The message is then forwarded, the action taken, and the
confirmation posted. The employee's action may go unnoticed for some considerable time.
5. A message is sent from a customer to a stockbroker with instructions for various transactions.
Subsequently, the investments lose value and the customer denies sending the message.
Threat
A potential for violation of security, which exists when there is a circumstance, capability, action, or
event that could breach security and cause harm. That is, a threat is a possible danger that might exploit
a vulnerability.
Attack
An assault on system security that derives from an intelligent threat; that is, an intelligent act that is a
deliberate attempt (especially in the sense of a method or technique) to evade security services and
violate the security policy of a system.
ITU-T Recommendation X.800, Security Architecture for OSI, defines such a systematic
approach. The OSI security architecture is useful to managers as a way of organizing the task of providing
security. The OSI security architecture focuses on security attacks, mechanisms, and services. These can
be defined briefly as
Security attack: Any action that compromises the security of information owned by an
organization.
Security mechanism: A process (or a device incorporating such a process) that is designed to
detect, prevent, or recover from a security attack.
Security service: A processing or communication service that enhances the security of the data
processing systems and the information transfers of an organization. The services are intended to
counter security attacks, and they make use of one or more security mechanisms to provide the
service.
SECURITY ATTACKS
A useful means of classifying security attacks, used both in X.800 and RFC 2828, is in terms of passive
attacks and active attacks.
A passive attack attempts to learn or make use of information from the system but does not affect
system resources.
Passive Attacks
Passive attacks are in the nature of eavesdropping on, or monitoring of, transmissions. The goal
of the opponent is to obtain information that is being transmitted. Two types of passive attacks are the
release of message contents and traffic analysis.
The release of message contents is easily understood from the figure. A telephone conversation, an
electronic mail message, and a transferred file may contain sensitive or confidential information.
We would like to prevent an opponent from learning the contents of these transmissions.
A second type of passive attack, traffic analysis, is subtler (Figure). Suppose that we had a way
of masking the contents of messages or other information traffic so that opponents, even if they
captured the message, could not extract the information from the message.
If we had encryption protection in place, an opponent might still be able to observe the pattern of
these messages.
The opponent could determine the location and identity of communicating hosts and could
observe the frequency and length of messages being exchanged. This information might be useful
in guessing the nature of the communication that was taking place.
Passive attacks are very difficult to detect, because they do not involve any alteration of the data.
Typically, the message traffic is sent and received in an apparently normal fashion, and neither
the sender nor receiver is aware that a third party has read the messages or observed the traffic
pattern.
However, it is feasible to prevent the success of these attacks, usually by means of encryption.
Thus, the emphasis in dealing with passive attacks is on prevention rather than detection.
Active Attacks
Active attacks involve some modification of the data stream or the creation of a false stream and
can be subdivided into four categories: masquerade, replay, modification of messages, and denial of
service.
A masquerade takes place when one entity pretends to be a different entity (Figure 1.3a). A
masquerade attack usually includes one of the other forms of active attack. For example,
authentication sequences can be captured and replayed after a valid authentication sequence has
taken place, thus enabling an authorized entity with few privileges to obtain extra privileges by
impersonating an entity that has those privileges.
Replay involves the passive capture of a data unit and its subsequent retransmission to produce
an unauthorized effect (Figure 1.3b).
Modification of messages simply means that some portion of a legitimate message is altered, or
that messages are delayed or reordered, to produce an unauthorized effect (Figure 1.3c). For
example, a message meaning "Allow John Smith to read confidential file accounts" is modified
to mean "Allow Fred Brown to read confidential file accounts."
The denial of service prevents or inhibits the normal use or management of communications
facilities (Figure 1.3d). This attack may have a specific target; for example, an entity may
suppress all messages directed to a particular destination (e.g., the security audit service). Another
form of service denial is the disruption of an entire network, either by disabling the network or by
overloading it with messages so as to degrade performance.
Active attacks present the opposite characteristics of passive attacks. Whereas passive attacks are
difficult to detect, measures are available to prevent their success.
On the other hand, it is quite difficult to prevent active attacks absolutely because of the wide
variety of potential physical, software, and network vulnerabilities.
Instead, the goal is to detect active attacks and to recover from any disruption or delays caused by
them. If the detection has a deterrent effect, it may also contribute to prevention.
SECURITY SERVICES
X.800 defines a security service as a service that is provided by a protocol layer of
communicating open systems and that ensures adequate security of the systems or of data transfers.
Perhaps a clearer definition is found in RFC 2828, which provides the following definition: "a processing
or communication service that is provided by a system to give a specific kind of protection to system
resources; security services implement security policies and are implemented by security mechanisms."
X.800 divides these services into five categories and fourteen specific services (Table 1.2).
AUTHENTICATION
The authentication service is concerned with assuring that a communication is authentic. In the
case of a single message, such as a warning or alarm signal, the function of the authentication service is to
assure the recipient that the message is from the source that it claims to be from. In the case of an ongoing
interaction, such as the connection of a terminal to a host, two aspects are involved. First, at the time of
connection initiation, the service assures that the two entities are authentic, that is, that each is the entity
that it claims to be. Second, the service must assure that the connection is not interfered with in such a
way that a third party can masquerade as one of the two legitimate parties for the purposes of
unauthorized transmission or reception. Two specific authentication services are defined in X.800:
Peer entity authentication: Provides for the corroboration of the identity of a peer entity in an
association. Two entities are considered peers if they implement the same protocol in different
systems; e.g., two TCP modules in two communicating systems. Peer entity authentication is
provided for use at the establishment of, or at times during the data transfer phase of, a
connection. It attempts to provide confidence that an entity is not performing either a masquerade
or an unauthorized replay of a previous connection.
Data origin authentication: Provides for the corroboration of the source of a data unit. It does
not provide protection against the duplication or modification of data units. This type of service
supports applications like electronic mail, where there are no prior interactions between the
communicating entities.
ACCESS CONTROL
In the context of network security, access control is the ability to limit and control the access to
host systems and applications via communications links. To achieve this, each entity trying to gain access
must first be identified, or authenticated, so that access rights can be tailored to the individual.
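This identify-then-tailor pattern can be sketched as follows. The credential store, ACL, and all the names are invented, and a real system would never store plaintext passwords:

```python
# Toy access control: an entity is first authenticated, then an
# access-control list tailored to that individual is consulted.
USERS = {"alice": "s3cret"}                      # invented credential store
ACL = {"alice": {"payroll.db": {"read"}}}        # invented per-user rights

def authenticate(user, password):
    return USERS.get(user) == password

def authorize(user, resource, right):
    return right in ACL.get(user, {}).get(resource, set())

def request(user, password, resource, right):
    # access is granted only when both steps succeed
    return authenticate(user, password) and authorize(user, resource, right)
```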
DATA CONFIDENTIALITY
Confidentiality is the protection of transmitted data from passive attacks. With respect to the
content of a data transmission, several levels of protection can be identified. The broadest service protects
all user data transmitted between two users over a period of time. For example, when a TCP connection is
set up between two systems, this broad protection prevents the release of any user data transmitted over
the TCP connection. Narrower forms of this service can also be defined, including the protection of a
single message or even specific fields within a message. These refinements are less useful than the broad
approach and may even be more complex and expensive to implement. The other aspect of confidentiality
is the protection of traffic flow from analysis. This requires that an attacker not be able to observe the
source and destination, frequency, length, or other characteristics of the traffic on a communications
facility.
DATA INTEGRITY
As with confidentiality, integrity can apply to a stream of messages, a single message, or selected
fields within a message. Again, the most useful and straightforward approach is total stream protection. A
connection-oriented integrity service, one that deals with a stream of messages, assures that messages are
received as sent with no duplication, insertion, modification, reordering, or replays. The destruction of
data is also covered under this service. Thus, the connection-oriented integrity service addresses both
message stream modification and denial of service. On the other hand, a connectionless integrity service,
one that deals with individual messages without regard to any larger context, generally provides
protection against message modification only.
We can make a distinction between service with and without recovery. Because the integrity
service relates to active attacks, we are concerned with detection rather than prevention. If a violation of
integrity is detected, then the service may simply report this violation, and some other portion of software
or human intervention is required to recover from the violation. Alternatively, there are mechanisms
available to recover from the loss of integrity of data, as we will review subsequently. The incorporation
of automated recovery mechanisms is, in general, the more attractive alternative.
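Integrity detection without recovery can be sketched with an HMAC tag: the sender attaches a tag, and the receiver recomputes it and reports a violation if the message was altered in transit. The shared key is an assumption:

```python
import hmac
import hashlib

# Integrity detection in miniature: any modification of the message in
# transit changes the recomputed tag, so the violation is detected --
# recovery is left to some other mechanism or to human intervention.
KEY = b"shared-secret-key"   # assumed to be shared by sender and receiver

def tag(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    # constant-time comparison avoids leaking tag bytes through timing
    return hmac.compare_digest(tag(message), received_tag)
```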
NONREPUDIATION
It prevents either sender or receiver from denying a transmitted message. Thus, when a message
is sent, the receiver can prove that the alleged sender in fact sent the message. Similarly, when a message
is received, the sender can prove that the alleged receiver in fact received the message.
Availability Service
Both X.800 and RFC 2828 define availability to be the property of a system or a system resource being
accessible and usable upon demand by an authorized system entity, according to performance
specifications for the system (i.e., a system is available if it provides services according to the system
design whenever users request them). A variety of attacks can result in the loss of or reduction in
availability. Some of these attacks are amenable to automated countermeasures, such as authentication
and encryption, whereas others require some sort of physical action to prevent or recover from loss of
availability of elements of a distributed system. X.800 treats availability as a property to be associated
with various security services. However, it makes sense to call out specifically an availability service. An
availability service is one that protects a system to ensure its availability. This service addresses the
security concerns raised by denial-of-service attacks. It depends on proper management and control of
system resources and thus depends on access control service and other security services.
SECURITY MECHANISMS
Table lists the security mechanisms defined in X.800. The mechanisms are divided into those that are
implemented in a specific protocol layer, such as TCP or an application-layer protocol, and those that are
not specific to any particular protocol layer or security service.
The table, based on one in X.800, indicates the relationship between security services and security
mechanisms.
Some secret information shared by the two principals and, it is hoped, unknown to the opponent.
An example is an encryption key used in conjunction with the transformation to scramble the
message before transmission and unscramble it on reception.
A trusted third party may be needed to achieve secure transmission. For example, a third party
may be responsible for distributing the secret information to the two principals while keeping it
from any opponent. Or a third party may be needed to arbitrate disputes between the two
principals concerning the authenticity of a message transmission.
This general model shows that there are four basic tasks in designing a particular security service:
1. Design an algorithm for performing the security-related transformation. The algorithm should be
such that an opponent cannot defeat its purpose.
2. Generate the secret information to be used with the algorithm.
3. Develop methods for the distribution and sharing of the secret information.
4. Specify a protocol to be used by the two principals that makes use of the security algorithm and the
secret information to achieve a particular security service.
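The four tasks above can be walked through with a toy example that uses XOR with a one-time random key as a stand-in for the security-related transformation. This is a sketch only, not a real cipher or key-distribution design:

```python
import secrets

def transform(data: bytes, key: bytes) -> bytes:        # task 1: the algorithm
    # XOR stands in for the security-related transformation
    return bytes(d ^ k for d, k in zip(data, key))

def generate_key(length: int) -> bytes:                 # task 2: secret information
    return secrets.token_bytes(length)

def distribute(key: bytes) -> bytes:                    # task 3: distribution/sharing
    return key      # placeholder for a real key-exchange mechanism

def protocol(message: bytes) -> bytes:                  # task 4: the protocol
    key = generate_key(len(message))
    ciphertext = transform(message, key)                # sender scrambles
    receiver_key = distribute(key)                      # both principals share secret
    return transform(ciphertext, receiver_key)          # receiver unscrambles
```

Applying the same XOR transformation twice with the same key recovers the original message, which is why the receiver's step mirrors the sender's.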
A general model of these other situations is illustrated by Figure 1.5, which reflects a concern for
protecting an information system from unwanted access. We are familiar with the concerns caused by the
existence of hackers, who attempt to penetrate systems that can be accessed over a network. The hacker
can be someone who, with no malign intent, simply gets satisfaction from breaking and entering a
computer system. The intruder can be a disgruntled employee who wishes to do damage or a criminal
who seeks to exploit computer assets for financial gain (e.g., obtaining credit card numbers or performing
illegal money transfers).
Another type of unwanted access is the placement in a computer system of logic that exploits
vulnerabilities in the system and that can affect application programs as well as utility programs, such as
editors and compilers. Programs can present two kinds of threats:
Information access threats: Intercept or modify data on behalf of users who should not have
access to that data.
Service threats: Exploit service flaws in computers to inhibit use by legitimate users. Viruses
and worms are two examples of software attacks. Such attacks can be introduced into a system by
means of a disk that contains the unwanted logic concealed in otherwise useful software. They
can also be inserted into a system across a network; this latter mechanism is of more concern in
network security.
The first category might be termed a gatekeeper function. It includes password-based login
procedures that are designed to deny access to all but authorized users and screening logic that is designed
to detect and reject worms, viruses, and other similar attacks. Once either an unwanted user or unwanted
software gains access, the second line of defense consists of a variety of internal controls that monitor
activity and analyze stored information in an attempt to detect the presence of unwanted intruders.
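A minimal sketch of the gatekeeper function, assuming a salted-and-stretched password store (PBKDF2 with a nominal iteration count, chosen here for illustration):

```python
import hashlib
import hmac
import os

# Enrollment stores a salt and a slow hash of the password, never the
# password itself; the 100,000 iteration count is a nominal choice.
def enroll(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# The gatekeeper check: recompute the hash and compare in constant time.
def login(password: str, salt: bytes, digest: bytes) -> bool:
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)

salt, digest = enroll("correct horse battery")
assert login("correct horse battery", salt, digest)
assert not login("wrong guess", salt, digest)
```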
COMPUTER SECURITY
The protection afforded to an automated information system in order to attain the applicable
objectives of preserving the integrity, availability, and confidentiality of information system resources
(includes hardware, software, firmware, information/ data, and telecommunications).
This definition introduces three key objectives that are at the heart of computer security:
Confidentiality: This term covers two related concepts:
Data confidentiality: Assures that private or confidential information is not made available or
disclosed to unauthorized individuals.
Privacy: Assures that individuals control or influence what information related to them may be
collected and stored and by whom and to whom that information may be disclosed.
Integrity: This term covers two related concepts:
Data integrity: Assures that information and programs are changed only in a specified and
authorized manner.
System integrity: Assures that a system performs its intended function in an unimpaired manner,
free from deliberate or inadvertent unauthorized manipulation of the system.
Availability: Assures that systems work promptly and service is not denied to authorized users.
These three concepts form what is often referred to as the CIA triad.
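The data-integrity objective can be illustrated in miniature: record a digest of the data while it is in its authorized state, and any unauthorized change is later detectable.

```python
import hashlib

# Record a digest while the data is in its authorized state.
original = b"balance: 1000"
recorded_digest = hashlib.sha256(original).hexdigest()

def unchanged(data: bytes) -> bool:
    """Integrity check: does the data still match the recorded digest?"""
    return hashlib.sha256(data).hexdigest() == recorded_digest

assert unchanged(b"balance: 1000")       # untouched data passes
assert not unchanged(b"balance: 9000")   # any modification is detected
```

Note that a bare digest only detects change; protecting the recorded digest itself from the attacker is a separate problem.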
Availability: Ensuring timely and reliable access to and use of information. A loss of availability is the
disruption of access to or use of information or an information system.
Examples
We now provide some examples of applications that illustrate the requirements just enumerated.
For these examples, we use three levels of impact on organizations or individuals should there be a breach
of security (i.e., a loss of confidentiality, integrity, or availability). These levels are defined in FIPS PUB
199:
Low: The loss could be expected to have a limited adverse effect on organizational operations,
organizational assets, or individuals. A limited adverse effect means that, for example, the loss of
confidentiality, integrity, or availability might (i) cause a degradation in mission capability to an
extent and duration that the organization is able to perform its primary functions, but the
effectiveness of the functions is noticeably reduced; (ii) result in minor damage to organizational
assets; (iii) result in minor financial loss; or (iv) result in minor harm to individuals.
Moderate: The loss could be expected to have a serious adverse effect on organizational
operations, organizational assets, or individuals. A serious adverse effect means that, for
example, the loss might (i) cause a significant degradation in mission capability to an extent and
duration that the organization is able to perform its primary functions, but the effectiveness of the
functions is significantly reduced; (ii) result in significant damage to organizational assets; (iii)
result in significant financial loss; or (iv) result in significant harm to individuals that does not
involve loss of life or serious, life-threatening injuries.
High: The loss could be expected to have a severe or catastrophic adverse effect on
organizational operations, organizational assets, or individuals. A severe or catastrophic adverse
effect means that, for example, the loss might (i) cause a severe degradation in or loss of mission
capability to an extent and duration that the organization is not able to perform one or more of its
primary functions; (ii) result in major damage to organizational assets; (iii) result in major
financial loss; or (iv) result in severe or catastrophic harm to individuals involving loss of life or
serious, life-threatening injuries.
mechanisms used to meet those requirements can be quite complex, and understanding them may involve
rather subtle reasoning.
2. In developing a particular security mechanism or algorithm, one must always consider potential attacks
on those security features. In many cases, successful attacks are designed by looking at the problem in a
completely different way, thereby exploiting an unexpected weakness in the mechanism.
3. Because of point 2, the procedures used to provide particular services are often counterintuitive.
Typically, a security mechanism is complex, and it is not obvious from the statement of a particular
requirement that such elaborate measures are needed. It is only when the various aspects of the threat are
considered that elaborate security mechanisms make sense.
4. Having designed various security mechanisms, it is necessary to decide where to use them. This is true
both in terms of physical placement (e.g., at what points in a network are certain security mechanisms
needed) and in a logical sense [e.g., at what layer or layers of an architecture such as TCP/IP
(Transmission Control Protocol/Internet Protocol) should mechanisms be placed].
5. Security mechanisms typically involve more than a particular algorithm or protocol. They also require
that participants be in possession of some secret information (e.g., an encryption key), which raises
questions about the creation, distribution, and protection of that secret information. There also may be a
reliance on communications protocols whose behavior may complicate the task of developing the security
mechanism. For example, if the proper functioning of the security mechanism requires setting time limits
on the transit time of a message from sender to receiver, then any protocol or network that introduces
variable, unpredictable delays may render such time limits meaningless.
6. Computer and network security is essentially a battle of wits between a perpetrator who tries to find
holes and the designer or administrator who tries to close them. The great advantage that the attacker has
is that he or she need only find a single weakness, while the designer must find and eliminate all
weaknesses to achieve perfect security.
7. There is a natural tendency on the part of users and system managers to perceive little benefit from
security investment until a security failure occurs.
8. Security requires regular, even constant, monitoring, and this is difficult in today's short-term,
overloaded environment.
9. Security is still too often an afterthought to be incorporated into a system after the design is complete
rather than being an integral part of the design process.
10. Many users and even security administrators view strong security as an impediment to efficient and
user-friendly operation of an information system or use of information.
public computer such as in a library in order to secretly monitor other users. While the term spyware
suggests software that secretly monitors the user's computing, the functions of spyware extend well
beyond simple monitoring. Spyware programs can collect various types of personal information, such as
Internet surfing habits and sites that have been visited, but can also interfere with user control of the
computer in other ways, such as installing additional software and redirecting Web browser activity.
Spyware is known to change computer settings, resulting in slow connection speeds, different home
pages, and/or loss of Internet or functionality of other programs. Spyware is also known more formally as
privacy-invasive software.
ADWARE
Adware, or advertising-supported software, is any software package that automatically plays,
displays, or downloads advertisements to a computer after the software is installed on it or while the
application is being used. Common forms of this type of malware are on websites where popup windows
appear when you land on the website. Some types of adware are also spyware.
CRIMEWARE
Crimeware is a class of malware designed specifically to automate cybercrime. Its purpose is to
carry out identity theft. It is most often targeted at financial services companies such as banks, online
retailers, etc., for the purpose of taking funds from those accounts or making unauthorized transactions to
benefit the thief controlling the crimeware.
SPAM
Spam is the abuse of electronic messaging systems to send unsolicited bulk messages
indiscriminately. While the most widely recognized form of spam is e-mail spam, the term is applied to
similar abuses in other media: instant messaging spam, web search engine spam, and social networking
spam, for example.
PHISHING
Phishing is an e-mail fraud method in which the criminal sends out legitimate-looking email in an
attempt to gather personal and financial information from recipients. Typically, the messages appear to
come from well-known and trustworthy Web sites. Web sites that are frequently spoofed by phishers
include PayPal, eBay, MSN and Yahoo. A phishing expedition, like the fishing expedition it's named
after, is a speculative venture. The criminal could then use the information to take money from the
person's account, for example.
PRECAUTIONARY STEPS:
Never open an email attachment unless you are certain what the file contains. This is especially
true for emails received from someone you do not know.
Be careful when visiting websites, especially if they are going to download a file to your
computer.
Never give out your personal details unless you are absolutely certain that the request is from a
reliable source.
Always make sure that antivirus and other protection software is up to date and turned on.
If you must leave your computer unattended whilst you are logged on, make sure that the screen is
locked so that no one can use it.
When shopping on the Internet, make sure that you use sites where the data is encrypted when you
send personal or financial details. You can tell this from the lock that appears at the bottom of the
browser window.
SYSTEM SECURITY
Security refers to providing a protection system to computer system resources such as the CPU, memory,
disk, software programs and, most importantly, the data/information stored in the computer system. If a
computer program is run by an unauthorized user, then he/she may cause severe damage to the computer or the data
stored in it. So a computer system must be protected against unauthorized access, malicious access to
system memory, viruses, worms, etc.
Authentication
One time passwords
Program Threats
System threats
Computer security classifications
AUTHENTICATION
Authentication refers to identifying each user of the system and associating the executing programs
with those users. It is the responsibility of the Operating System to create a protection system which
ensures that a user who is running a particular program is authentic. Operating Systems generally
identify/authenticate users in the following three ways:
Username / Password - The user needs to enter a registered username and password with the Operating
System to login into the system
User card/key - The user needs to punch a card in the card slot, or enter a key generated by a key generator in
the option provided by the operating system, to login into the system
User attribute - fingerprint/eye retina pattern/signature - The user needs to pass his/her attribute via a
designated input device used by the operating system to login into the system
ONE TIME PASSWORDS
One-time passwords provide additional security along with normal authentication. A unique password is
required every time a user tries to login and, once used, it cannot be used again. One-time passwords are
implemented in various ways:
Random numbers - Users are provided cards with numbers printed along with corresponding
alphabets. The system asks for the numbers corresponding to a few alphabets chosen randomly.
Secret key - Users are provided a hardware device which can create a secret id mapped with the user
id. The system asks for such a secret id, which is to be generated every time prior to login.
Network password - Some commercial applications send a one-time password to the user on a registered
mobile/email, which is required to be entered prior to login.
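The secret-key hardware device can be sketched as a time-based one-time password in the spirit of RFC 6238: both the server and the user's device derive a short code from a shared secret and the current 30-second time step. This is an illustrative sketch, not a description of any particular commercial token:

```python
import hashlib
import hmac
import struct

# Derive a 6-digit code from the shared secret and the 30-second time step.
def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

secret = b"shared-secret"
assert totp(secret, 30) == totp(secret, 45)   # same 30-second window
assert len(totp(secret, 0)) == 6
```

Because the code changes every time step, capturing one code does not let an attacker login later, which is the point of a one-time password.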
PROGRAM THREATS
An operating system's processes and kernel perform the designated tasks as instructed. If a user program makes
these processes do malicious tasks, then it is known as a program threat. A common example of a
program threat is a program installed on a computer which can store and send user credentials via the network
to some hacker. Following is a list of some well-known program threats:
Trojan Horse - Such a program traps user login credentials and stores them to send to a malicious
user, who can later login to the computer and access system resources.
Trap Door - If a program which is designed to work as required has a security hole in its code
and performs illegal actions without the knowledge of the user, then it is said to have a trap door.
Logic Bomb - A logic bomb is a situation when a program misbehaves only when certain
conditions are met; otherwise it works as a genuine program. It is harder to detect.
Virus - A virus, as the name suggests, can replicate itself on a computer system. Viruses are highly
dangerous and can modify/delete user files and crash systems. A virus is generally a small code
embedded in a program. As the user accesses the program, the virus starts getting embedded in other
files/programs and can make the system unusable for the user.
SYSTEM THREATS
System threats refer to the misuse of system services and network connections to cause trouble for users. System
threats can be used to launch program threats on a complete network, called a program attack. System
threats create an environment in which operating system resources/user files are misused. Following is
a list of some well-known system threats:
Worm - A worm is a process which can choke down a system's performance by using system
resources to extreme levels. A worm process generates multiple copies of itself, where each copy uses
system resources and prevents all other processes from getting the required resources. Worm processes can
even shut down an entire network.
Port Scanning - Port scanning is a mechanism or means by which a hacker can detect system
vulnerabilities to make an attack on the system.
Denial of Service - Denial of service attacks normally prevent a user from making legitimate use of the
system. For example, a user may not be able to use the internet if a denial of service attack targets the
browser's content settings.
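The port-scanning mechanism described above can be sketched as a minimal TCP "connect" scan: try a connection to each port, and a successful connection means the port is open. This is a bare educational sketch, not a real scanning tool:

```python
import socket

# Try a TCP connection to each port; connect_ex returns 0 when the port accepts.
def scan(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Demo against a listener we control on the loopback interface.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))        # the OS picks a free port
listener.listen(1)
open_port = listener.getsockname()[1]
found = scan("127.0.0.1", [open_port])
listener.close()
assert found == [open_port]
```

This is also why firewalls and intrusion detection systems watch for rapid sequences of connection attempts across many ports.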
Countermeasures against unauthorized access via networks - While multifunction products can be
found over the network, intruders must not be allowed to access their internal features. User authentication
and filtering reduce the risk of information leaks via networks.
Countermeasures against unauthorized access via telephone lines - Although telephone lines
connected to devices can be a lead-in for external access, access to the internal networks via telephone
lines must not be allowed, so that people of malicious intent cannot access the internal networks of the
company via a telephone line for fax.
Countermeasures against tapping and alteration of information over the network - Multifunction
products exchange critical information with personal computers and servers over networks. Unprotected,
this information is exposed to risks of alteration by people with malicious intent who tap into the network.
Countermeasures against unauthorized access via the operator panel - When multifunction products
are installed in an office, they are exposed to security risks of unauthorized operations via the operator
panel. Many cases of information leaks are reportedly committed by insiders. Using the user
authentication features to properly set up access privileges to individual users reduces those risks. It is
important to properly manage and run devices without letting users access the information and functions
they do not need.
Countermeasures against information leaks via storage media - Multifunction products have a built-in
storage device, such as a hard disk drive, for storing address books and accumulated documents. The hard
disk drive also contains temporary work images for transmission, reception and printing. If the storage
devices are removed, your important information may be read elsewhere. Using the data encryption and
overwrite-and-erase functions reduces the risk of information leaks.
Countermeasures against information leaks via hard copies - If a document is left on the tray of a
multifunction copier, it can be taken away or viewed by unauthorized persons. It can be a source of
information leakage. The risk can be minimized by using the user authentication and locked printing
features. Make sure that users make just as many copies as required and that they do not leave hardcopy
output unattended on the tray.
Countermeasures against information leaks due to carelessness - Sometimes one can make copies of
confidential information without knowing it, and the information can be spread and taken away.
Sometimes one can fax a document to the wrong destination. Carelessness can be a source of information
leaks. Ricoh's multifunction copiers feature functions that can help minimize the risk of information leaks
due to carelessness of the user.
COMMUNICATION SECURITY
Communications security (COMSEC) is the discipline of preventing unauthorized interceptors
from accessing telecommunications in an intelligible form, while still delivering content to the intended
recipients. COMSEC is used to protect both classified and unclassified traffic on communications
networks, including voice, video, and data. It is used for both analog and digital applications, and both
wired and wireless links.
SPECIALTIES
Cryptosecurity: The component of communications security that results from the provision of
technically sound cryptosystems and their proper use. This includes ensuring message
confidentiality and authenticity.
Emission Security (EMSEC): The protection resulting from all measures taken to deny
unauthorized personnel information of value that might be derived from communications systems
and cryptographic equipment intercepts and the interception and analysis of compromising
emanations.
Physical security: The component of communications security that results from all physical
measures necessary to safeguard classified equipment, material, and documents from access
thereto or observation thereof by unauthorized persons.
Protection of Privacy: The communications security system shall not allow for identification of
a person through personally-identifiable information (PII) within messaging contents.
Secure Communications: All communications transmitted and received from a vehicle shall be
secure. This includes both one-way and two-way communications. Messages will support
delivery and management of security credentials and will be encrypted to prevent eavesdropping
and tampering over the communication channel.
Scalability to Enable Nationwide Adoption: The security approach shall be scalable to support
a population of over 250 million vehicles using the system.
Secure communication is when two entities are communicating and do not want a third party to
listen in. For that they need to communicate in a way not susceptible to eavesdropping or interception.
Secure communication includes means by which people can share information with varying degrees of
certainty that third parties cannot intercept what was said.
Types of security
Security can be broadly categorized under the following headings, with examples:
Code
Encryption
Steganography
Identity Based
"Crowds" and similar anonymous group structures - it is difficult to identify who said
what when it comes from a "crowd"
Anonymous proxies
Random traffic - creating random data flow to make the presence of genuine
communication harder to detect and traffic analysis less reliable
Code
A rule to convert a piece of information (for example, a letter, word, phrase, or gesture) into
another form or representation (one sign into another sign), not necessarily of the same type. In
communications and information processing, encoding is the process by which information from a source
is converted into symbols to be communicated. Decoding is the reverse process, converting these code
symbols back into information understandable by a receiver. One reason for coding is to enable
communication in places where ordinary spoken or written language is difficult or impossible. For
example, semaphore, where the configuration of flags held by a signaler or the arms of a semaphore tower
encodes parts of the message, typically individual letters and numbers. Another person standing a great
distance away can interpret the flags and reproduce the words sent.
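The encode/decode pair described above is just a rule and its inverse. A minimal sketch, using a hypothetical codebook of flag positions standing in for semaphore configurations:

```python
# A hypothetical codebook: each letter maps to a pair of flag positions,
# standing in for the configurations of a semaphore signaler.
CODE = {"A": "up/down", "B": "left/down", "C": "up/left"}
DECODE = {v: k for k, v in CODE.items()}

def encode(text: str) -> list[str]:
    """Convert source symbols into code symbols."""
    return [CODE[ch] for ch in text]

def decode(symbols: list[str]) -> str:
    """The reverse process: code symbols back into source symbols."""
    return "".join(DECODE[s] for s in symbols)

assert encode("CAB") == ["up/left", "up/down", "left/down"]
assert decode(encode("CAB")) == "CAB"
```

Note that a code by itself provides no secrecy once the codebook is known; that is what distinguishes coding from encryption.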
Encryption
Encryption is where data is rendered hard to read by an unauthorised party. Since encryption can
be made extremely hard to break, many communication methods either use deliberately weaker
encryption than possible, or have backdoors inserted to permit rapid decryption. In some cases
government authorities have required backdoors be installed in secret. Many methods of encryption are
also subject to "man in the middle" attacks, whereby a third party who can 'see' the establishment of the
secure communication is made privy to the encryption method; this would apply, for example, to
interception of computer use at an ISP. Provided it is correctly programmed, sufficiently powerful, and
the keys are not intercepted, encryption would usually be considered secure. The article on key size examines
the key requirements for certain degrees of encryption security.
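Why key size matters can be shown with back-of-the-envelope arithmetic: the expected brute-force effort doubles with every added key bit. The guess rate below (10^12 keys per second) is an illustrative assumption, not a measured figure:

```python
# Expected brute-force effort doubles with every added key bit; the guess
# rate below (1e12 keys/second) is an illustrative assumption.
def years_to_search(key_bits: int, guesses_per_second: float = 1e12) -> float:
    expected_guesses = 2 ** key_bits / 2      # success expected halfway
    seconds = expected_guesses / guesses_per_second
    return seconds / (365 * 24 * 3600)

assert years_to_search(56) < 1        # a 56-bit key falls in hours
assert years_to_search(128) > 1e15    # a 128-bit key is out of reach
```

This is why attacks on well-chosen keys target the endpoints and the key handling rather than the search space itself.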
Steganography
Steganography ("hidden writing") is the means by which data can be hidden within other, more
innocuous data: for example, a watermark proving ownership embedded in the data of a picture, in such a way
that it is hard to find or remove unless you know how to find it, or, for communication, the hiding of important
data (such as a telephone number) in apparently innocuous data (an MP3 music file). An advantage of
steganography is plausible deniability: unless one can prove the data is there (which is usually not
easy), it is deniable that the file contains any.
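A classic minimal sketch of the idea hides one message bit in the least significant bit of each cover byte (for instance pixel values), leaving the cover essentially unchanged:

```python
# Hide one message bit in the least significant bit of each cover byte
# (e.g. pixel values), leaving the cover essentially unchanged.
def hide(cover: bytes, bits: list[int]) -> bytes:
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return bytes(stego)

def reveal(stego: bytes, n_bits: int) -> list[int]:
    return [byte & 1 for byte in stego[:n_bits]]

cover = bytes([200, 201, 202, 203])
secret_bits = [1, 0, 1, 1]
stego = hide(cover, secret_bits)
assert reveal(stego, 4) == secret_bits
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))  # barely altered
```

Each cover byte changes by at most one unit, which is visually imperceptible in an image, while the full message is recoverable by anyone who knows where to look.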
Identity based networks
Unwanted or malicious behavior is possible on the web since it is inherently anonymous. True
identity based networks replace the ability to remain anonymous and are inherently more trustworthy
since the identity of the sender and recipient are known. (The telephone system is an example of an
identity based network.)
Anonymized networks
Recently, anonymous networking has been used to secure communications. In principle, a large
number of users running the same system, can have communications routed between them in such a way
that it is very hard to detect what any complete message is, which user sent it, and where it is ultimately
going from or to. Examples are Crowds, Tor, I2P, Mixminion, various anonymous P2P networks, and
others.
Anonymous communication devices
In theory, an unknown device would not be noticed, since so many other devices are in use. This
is not altogether the case in reality, due to the presence of systems such as Carnivore and Echelon which
can monitor communications over entire networks, and the fact that the far end may be monitored as
before. Examples include payphones, Internet cafes, etc.
BIOMETRIC SYSTEMS
Biometric authentication, or simply biometrics, is the process of making sure that a person is who
he claims to be. Authentication of the identity of a user can be done in three ways: 1) something the person
knows (password), 2) something the person has (key, special card), 3) something the person is
(fingerprints, footprint).
Biometrics is based on the anatomic uniqueness of a person, and as such it can be used for
biometric identification of a person. Unique characteristics can be used to prevent unauthorized access to
the system with the help of an automated method of biometric control which, by checking unique
physiological features or behavioral characteristics, identifies the person.
Biometric functionality
1. Universality - something that each person has
2. Uniqueness - something that separates this very person from others. This means that not all
characteristics are suitable for biometrics.
3. Permanence - the biometric measurement should be constant over time for each person.
4. Measurability (collectability) - it should be easy to measure and should not demand much time or cost.
5. Performance - speed, accuracy and robustness.
6. Acceptability - how well people accept biometrics.
7. Circumvention - how easy it is to fool the system. This becomes very important as the value of
information grows rapidly. One should be ready for two kinds of attacks: 1) a privacy attack,
when the attacker accesses data to which he is not authorized, and 2) a subversive attack,
when the attacker gains an opportunity to manipulate the system.
Matching Unit
This module compares the current input with the template. If the system performs identity
verification, it compares the new characteristics to the user's master template and produces a score or
match value (one-to-one matching). A system performing identification matches the new characteristics
against the master templates of many users, resulting in multiple match values (one-to-many matching).
Decision Maker
This module accepts or rejects the user based on a security threshold and matching score.
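The matching unit and decision maker can be sketched together. The similarity measure below is a stand-in for a real matcher, and the feature vectors and threshold are invented for illustration:

```python
# A stand-in similarity score between a stored template and a live sample.
def score(template: list[float], sample: list[float]) -> float:
    return 1.0 - sum(abs(a - b) for a, b in zip(template, sample)) / len(template)

def verify(template, sample, threshold=0.9):
    """One-to-one matching: accept if the score clears the threshold."""
    return score(template, sample) >= threshold

def identify(templates: dict, sample, threshold=0.9):
    """One-to-many matching: best-scoring user, if good enough."""
    best = max(templates, key=lambda user: score(templates[user], sample))
    return best if verify(templates[best], sample, threshold) else None

db = {"alice": [0.2, 0.8, 0.5], "bob": [0.9, 0.1, 0.4]}
assert identify(db, [0.21, 0.79, 0.5]) == "alice"   # close match accepted
assert identify(db, [0.5, 0.5, 0.9]) is None        # below threshold
```

Raising the threshold makes false accepts rarer at the cost of more false rejects; choosing it is the security/usability trade-off of the decision maker.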
PHYSIOLOGICAL BIOMETRICS
Finger Scan
This is a technology that uses the unique fingerprint patterns present on the human finger to identify
or verify the identity of the individual. Several acquisition techniques can be used:
optical scanning
ultrasound scanning
It is a mature and proven core technology that has been rigorously tested and delivers high
accuracy levels. It is also a flexible technology that can be used in a wide range of environments. It has
the advantage of employing ergonomic, easy-to-use devices. By performing multiple finger scans (of
different fingers) for each individual, the system's ease of use can be increased.
Some weaknesses prevent it from being useful in certain applications: most devices are unable to
enroll some small percentage of users, which is attributed to hardware limitations as well as physiological
reasons in special population groups, and performance generally tends to deteriorate over time (for
example, fingerprints can change due to aging or wear and tear).
Facial Scan
This technology is suited for both authentication and identification. It is based on the analysis of
facial features. It can be easily integrated in an environment that already uses image acquisition
equipment. It can also be used to search against static images such as driver's license photographs. In
addition, it does not always require the user's cooperation to obtain the necessary data.
A drawback is the presence of many variables which constitute an implementation challenge and which can
greatly reduce the system's matching rate: for example, a change in the environment surrounding the
individual, or changes in the individual's physiological characteristics.
Iris Scan
This is a technology based on using the unique features of the human iris for
identification/authentication. So far, the technology has been successfully implemented in ATMs and is
currently being promoted for desktop usage. The technology promises exceptionally high levels of
accuracy, as the characteristics of the human iris maintain a high level of stability over the individual's
lifetime. Nevertheless, the challenges to the technology stem from the image acquisition process, which
requires the use of proprietary devices and accurate positioning, and thus some specialized training. In
addition, for some users, using an eye-based technology represents a major discomfort.
Voice Scan
This is a technology that uses the unique aspects of an individual's voice for identification or
authentication purposes. This technique is text-dependent, which means that the system cannot verify any
phrase spoken by the user, but rather a specific phrase associated with that user's account. Voice scan is
often coupled with speech recognition in systems that use verbal passwords. The processes of data
acquisition and data storage represent the main obstacles to this technique. Gathering accurate voice data
is entirely dependent on the quality of the capture devices used and thus on the absence of noise.
Hand Scan
This technology uses distinctive features of the hand, such as the geometry of the hand and fingers, for
identity verification. Hand scan is a more application-specific solution than most biometric technologies,
used exclusively for physical access and time-and-attendance applications. The main advantage of this
method is that it is generally considered to be non-intrusive from the user's perspective. On the other hand, this
technology is of limited accuracy, and the ergonomic design limits usage by certain populations.
Retina Scan
This technology uses the distinct features of the retina for identification and authorization. It is
considered one of the least used technologies in the field of biometrics, almost only used in highly
classified government and military facilities. Even though this technique delivers very high levels of
accuracy, its unpopularity is attributed to the difficulty of usage, in addition to the user's discomfort.
DNA Matching
A relatively new technology that relies on the analysis of DNA sequences for identification and
authentication. The technology raises many concerns over privacy issues, invasiveness and data
misuse, and currently cannot be fully automated.
Vein Identification
Another fairly new technology that uses the vein patterns on the back of the hand for
identification and authentication. The technology has the potential of delivering high accuracy, in
addition to the advantage of being non-intrusive to the user.
BEHAVIORAL BIOMETRICS
Signature Scan
This technology uses the human written signature for identity verification. This technique is non-invasive
to the user and flexible in the sense that it can be changed by the user (unlike most of the other
biometric technologies), yet the error rates can be very high due to inconsistencies in one's signature. This
static analysis can be extended to incorporate dynamic features (e.g. velocity, acceleration, pressure),
claiming increased accuracy and reduced privacy concerns.
Keystroke Scan
This technology uses a person's distinctive typing patterns for verification. This technique is
combined with the traditional password scheme for increased security. It doesn't require any special
hardware for data acquisition, since all data is gathered from the keyboard. Furthermore, the process is
practically invisible to the user, since the user is merely asked to type his/her password. In addition, the
technique is highly flexible, as it accommodates password changes. However, the method is fairly new,
and the underlying concepts have not been fully developed. In addition, keystroke scan inherits all the
flaws of password-based systems.
Gait Recognition
A technology based on the analysis of the rhythmic patterns associated with the walking stride. This
is another new concept, currently under development. Both of these techniques are based on rather informal
studies, but can nevertheless be considered scientific.
The capacitive fingerprint scanners in the test allowed an attacker to restore latent images on the
surface of the scanner using graphite powder and adhesive film. This technique also allows
capturing residual fingerprints on other objects. Matsumoto managed to produce gelatin fingers out of
the captured fingerprints. Using these gummy fingers, he was able to fool 11 different types of
fingerprint systems.
Face recognition systems could be fooled by displaying (secretly captured) photos or video clips
on a notebook screen presented to the camera. Even systems that claimed to have live detection
could be taken in by video clips.
Iris Systems were fooled using a high-quality photo of a human iris printed on special paper. A
hole was cut in the middle, and the attacker held the photo in front of his eyes, such that his own
pupils were visible through the hole. That was sufficient to overcome the live detection of the
tested systems.
Performance monitoring
Fault monitoring
Account monitoring
These goals are three of the five functional areas of network management proposed by OSI (Open
Systems Interconnection). The other two functional areas, configuration management and security
management, are not related to network monitoring.
Performance monitoring deals with measuring the performance of the network. There are three
important issues in performance monitoring. First, performance monitoring information is usually used to
plan future network expansion and to locate current network usage problems. Second, the time frame of
performance monitoring must be long enough to establish a model of network behavior. Third, choosing
what to measure is important: there are too many measurable things in a network, but the list of items to
be measured should be meaningful and cost-effective. These items are called network indicators because
they indicate attributes of the network.
Common network indicators and their descriptions:

Circuit Availability: The actual time that a user can dial up to a network and the network connection is maintained.
Node Availability: The actual time that a user can use network nodes, multiplexers, and routers without error.
Blocking Factor: The number of users who, in theory, cannot access the network because of a busy signal.
Response Time: The time to transmit a signal and receive a response for the signal.
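Response time, for instance, can be estimated programmatically. The sketch below is a minimal illustration: it times a TCP connection setup, and a throwaway loopback server stands in for a real network node.

```python
import socket
import threading
import time

def response_time(host, port, timeout=2.0):
    """Estimate response time as the duration of a TCP connection setup
    (the three-way handshake plus local overhead)."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return time.monotonic() - start

# Local demo server so the example is self-contained
server = socket.socket()
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=lambda: server.accept(), daemon=True).start()

rtt = response_time("127.0.0.1", port)
print(f"response time: {rtt * 1000:.2f} ms")  # a few milliseconds on loopback
```

In a real deployment the measurement would be repeated periodically against actual network nodes and the results recorded as one of the indicators above.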
Fault monitoring deals with measuring the problems in the network. There are two important
issues in fault monitoring. First, fault monitoring deals with various layers of the network. When a
problem occurs, it can be at different layers of the network, so it is important to know which layer is
having the problem. Second, fault monitoring requires establishing the normal characteristics of the
network over an extended period of time. There are always errors in the network, but the presence of
errors does not by itself mean the network has a persistent problem; some errors are expected to occur.
For example, noise in a network link can cause transmission errors. The network only has a problem
when the number of errors suddenly increases above its normal level. Thus, a record of normal behavior
is important.
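The baseline idea can be sketched as follows; the error counts and the three-standard-deviation threshold are illustrative assumptions, not values from the text:

```python
import statistics

def is_fault(error_counts, latest, threshold=3.0):
    """Flag the latest error count as a fault only if it exceeds the
    recorded normal behavior by `threshold` standard deviations."""
    mean = statistics.mean(error_counts)
    stdev = statistics.pstdev(error_counts) or 1.0  # guard against zero spread
    return latest > mean + threshold * stdev

# Hourly transmission-error counts over a normal period (noise is expected)
baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]

print(is_fault(baseline, 7))    # False: within normal variation
print(is_fault(baseline, 40))   # True: sudden jump above normal behavior
```

The point is exactly the one made above: a single error count means nothing without the recorded baseline to compare it against.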
Account monitoring deals with how users use the network. The network keeps a record of which
devices of the network are used by users and how often they are used. This type of information is used for
billing users for network usage, and for predicting future network usage.
MONITORING THE INTERNET
The internet is a network of many networks. Each individual network is owned and operated by a
different organization. Monitoring the internet is different from monitoring a single network: in a
single network, all components are usually under the control of a single network management team, but in
the case of the internet, each individual network has a different base-layer platform and is managed by a
different network management team.
Monitoring difficulties
The internet is getting more and more difficult to monitor because more and more users are added
to it every day, and there is a lack of measurements of quality for the internet as a whole. There is no
standardized metric for measuring the internet, although host response time, time delay, and loss rate are
commonly measured by individual networks. Users of the internet have to measure the aspects of the
internet that tell them the performance of their network applications.
There is no standardized tool for monitoring the internet; different people use different tools. The
most common internet monitoring tools are public domain software packages, because they are available
on the internet at extremely low cost and can be easily customized.
Right now, there is no standardized effort to monitor the internet as a whole, and none is being
researched or developed. The only way to monitor the internet now is to use existing public software and
extend its functionality. There are a couple of problems with this approach. First, this public software is
not intended for monitoring; its usage eats up network capacity, allowing only a small amount of
monitoring activity. Second, monitoring the internet is difficult and not many people are doing it. As a
result, problems are not often reported and consequently are solved infrequently, and internet performance
degrades. This phenomenon created by the lack of monitoring is referred to as gridlock.
Objective-driven monitoring
Objective-driven monitoring is a new idea which can be useful to monitor the internet. The basic
idea is to use knowledge base to control a large number of sensors installed on the network. These large
numbers of sensors can be installed on different parts of network and work together to monitor the
network as a whole. Currently, there is no practical implementation for objective driven monitoring. But it
can possibly be applied to internet in the future.
Objective-driven monitoring is designed for monitoring a distributed computing environment which
carries diverse classes of traffic and traffic patterns. Traditional network monitoring uses log files to record
the events or states of the network. In objective-driven monitoring, many sensors are installed on the
network, but they are not always turned on as in traditional network monitoring strategies. The
number and placement of sensors to be turned on are determined by a set of rules. A knowledge base
applies the rules and gives instructions on which sensors to turn on. The readings from the sensors can
then be recorded and analyzed to provide specific answers to questions about network monitoring.
For example, the time delay of a packet can be measured by adding up the time delays in the switch
buffer, the link, and the switch fabric.
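Since no practical implementation exists, the idea can only be sketched; the rule and sensor names below are invented for illustration:

```python
# Knowledge base: each rule maps a monitoring objective to the sensors
# that must be switched on to answer it. All names are illustrative.
RULES = {
    "packet_delay": {"switch_buffer_timer", "link_timer", "fabric_timer"},
    "loss_rate":    {"ingress_counter", "egress_counter"},
    "link_health":  {"link_timer", "crc_counter"},
}

def sensors_to_enable(objectives):
    """Union of the sensors required by the active objectives; every other
    sensor stays off, unlike always-on traditional log-based monitoring."""
    needed = set()
    for objective in objectives:
        needed |= RULES.get(objective, set())
    return needed

print(sorted(sensors_to_enable(["packet_delay"])))
print(sorted(sensors_to_enable(["packet_delay", "loss_rate"])))
```

For the packet-delay example in the text, the three enabled timers would correspond to the switch buffer, the link, and the switch fabric, and their readings would be summed.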
Low cost, preferably constructed from parts taken from decommissioned PCs.
Offer secure (encrypted) network connections with other similar stations and with the
workstations of the network management staff.
Be resistant to tampering. In the case where there are indications that the station has been hacked,
its original configuration must be easily restored.
Offer a standard platform for the execution of common network management and monitoring
tools. It must also support the SNMP protocol. It must offer ways of establishing connections
with network elements of various vendors for the purposes of administration and configuration.
Finally, for troubleshooting purposes, it must be able to be deployed with minimal overheads in
any part of the network.
Campus Network
The network monitoring stations have been deployed within the campus in three roles:
Controller: the monitoring station is connected directly to a router or other network device (e.g. switch,
access server, etc.) so that configuration and administration of the device is carried out through the secure
network. These connections can be serial links connecting one of the station's built-in serial ports to the
router console. Logins to the network device through its network ports are disabled so that administration
can be carried out only via the console port. Hosts with serial consoles (e.g. SUNs) can also be controlled
in this way.
Traffic Monitor: We can monitor traffic on various LAN segments using tcpdump and send the output
via syslogd to a central logging host. The syslog traffic goes over the IPsec links so that it cannot be
intercepted.
Router: by adding high speed serial cards to the Network Monitoring Station we have created an
emergency router for 2Mbps connections to buildings located outside the main campus and linked to the
main building via leased lines (see figure 4). The OpenBSD kernel can support IP routing and through the
addition of routing software (e.g. gated), it can exchange routing information (OSPF) with dedicated
routers.
Handshaking is an automated process of negotiation that dynamically sets the parameters of
communication between two entities. However, within the TCP/IP RFCs, the term "handshake" is most
commonly used to reference the TCP three-way handshake.
A simple handshaking protocol might only involve the receiver sending a message meaning "I received
your last message and I am ready for you to send me another one." A more complex handshaking protocol
might allow the sender to ask the receiver if he is ready to receive or for the receiver to reply with a
negative acknowledgement meaning "I did not receive your last message correctly, please resend it" (e.g.
if the data was corrupted en route).
Handshaking makes it possible to connect relatively heterogeneous systems or equipment over a
communication channel without the need for human intervention to set parameters. One classic example
of handshaking is that of modems, which typically negotiate communication parameters for a brief period
when a connection is first established, and thereafter use those parameters to provide optimal information
transfer over the channel as a function of its quality and capacity. The "squealing" (which is actually a
sound that changes in pitch 100 times every second) noises made by some modems with speaker output
immediately after a connection is established are in fact the sounds of modems at both ends engaging in a
handshaking procedure; once the procedure is completed, the speaker might be silenced, depending on the
settings of the operating system or the application controlling the modem.
1. The first host (Alice) sends the second host (Bob) a "synchronize" (SYN) message with
its own sequence number.
2. Bob replies with a synchronize-acknowledgment (SYN-ACK) message carrying his own
sequence number and an acknowledgment number equal to Alice's sequence number plus one.
3. Alice replies with an "acknowledge" (ACK) message acknowledging Bob's sequence number
plus one; the connection is now established.
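The exchange of sequence and acknowledgment numbers can be sketched as a toy simulation (this models only the numbers, not real TCP segments, which the OS kernel handles):

```python
import random

def three_way_handshake():
    """Toy simulation of the TCP three-way handshake sequence numbers."""
    # 1. Alice -> Bob: SYN carrying Alice's initial sequence number (ISN)
    alice_isn = random.randrange(2**32)
    # 2. Bob -> Alice: SYN-ACK with Bob's own ISN, acknowledging alice_isn + 1
    bob_isn = random.randrange(2**32)
    syn_ack = {"seq": bob_isn, "ack": alice_isn + 1}
    # 3. Alice -> Bob: ACK acknowledging bob_isn + 1; the connection is open
    ack = {"seq": alice_isn + 1, "ack": bob_isn + 1}
    return syn_ack, ack

syn_ack, ack = three_way_handshake()
print(ack["seq"] == syn_ack["ack"])  # True: Alice continues from the agreed number
```

Each side acknowledges the other's number plus one, which is how both hosts confirm that the opposite direction of the channel works.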
An example showing a grid computing system connecting many personal computers over the internet
using inter-process network communication.
In computer science, inter-process communication (IPC) is the activity of sharing data across multiple and
commonly specialized processes using communication protocols. Typically, applications using IPC are
categorized as clients and servers, where the client requests data and the server responds to client
requests.[1] Many applications are both clients and servers, as commonly seen in distributed computing.
Methods for achieving IPC are divided into categories which vary based on software requirements, such
as performance and modularity requirements, and system circumstances, such as network bandwidth and
latency.[1]
There are several reasons for implementing inter-process communication systems:
Sharing information; for example, web servers use IPC to share web documents and media with
users through a web browser.
Distributing labor across systems; for example, Wikipedia uses multiple servers that
communicate with one another using IPC to process user requests.
Privilege separation; for example, HMI software systems are separated into layers based on
privileges to minimize the risk of attacks. These layers communicate with one another using
encrypted IPC.
The main IPC methods and short descriptions of each:

File: A record stored on disk, or a record synthesized on demand by a file server, which can be accessed by multiple processes.
Signal: A system message sent from one process to another, not usually used to transfer data but instead used to remotely command the partnered process.
Socket: A data stream sent over a network interface, either to a different process on the same computer or to another computer on the network.
Message queue: A data stream, typically implemented by the operating system, that allows multiple processes to read and write to the message queue without being directly connected to each other.
Pipe: A two-way data stream between two processes interfaced through standard input and output and read in one character at a time.
Named pipe: A pipe implemented through a file on the file system instead of standard input and output. Multiple processes can read and write to the file as a buffer for IPC data.
Semaphore: A simple structure that synchronizes multiple processes acting on shared resources.
Shared memory: Multiple processes are given access to the same block of memory, which creates a shared buffer for the processes to communicate with each other.
Message passing: Allows multiple programs to communicate using message queues and/or non-OS-managed channels; commonly used in concurrency models.
Memory-mapped file: A file mapped to RAM that can be modified by changing memory addresses directly instead of outputting to a stream. This shares the same benefits as a standard file.
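As a minimal illustration of one of these mechanisms, the sketch below passes a message through an anonymous pipe. For simplicity both ends live in a single process; in real IPC the two ends would be shared across processes, for example after a fork.

```python
import os

# A pipe is a byte stream: data written on one end can be read
# from the other. os.pipe() returns (read_fd, write_fd).
read_fd, write_fd = os.pipe()

os.write(write_fd, b"request: /index.html")  # "client" side writes
os.close(write_fd)                           # closing signals end-of-data

message = os.read(read_fd, 1024)             # "server" side reads
os.close(read_fd)

print(message.decode())  # request: /index.html
```

The other mechanisms in the table differ mainly in addressing (named pipes and sockets can be found by unrelated processes) and in whether message boundaries are preserved (message queues) or not (plain pipes and stream sockets).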
A network management system (NMS) is a set of hardware and/or software tools that allow an IT
professional to supervise the individual components of a network within a larger network management
framework. Network management system components assist with:
Network device monitoring - monitoring at the device level to determine the health of
network components and the extent to which their performance matches capacity plans and
intra-enterprise service-level agreements (SLAs).
Intelligent notifications - configurable alerts that will respond to specific network scenarios
by paging, emailing, calling or texting a network administrator.
PHYSICAL SECURITY
Physical security is almost everything that happens before you start typing commands on the
keyboard. It's the building alarm system. It's the key lock on your computer's power supply, the locked
computer room with the closed-circuit camera, and the uninterruptible power supply and power
conditioners.
One approach is to place computers under strong tables. Also consider physically attaching the computer
to the surface on which it is resting; you can use bolts, tie-downs, straps, or other implements.
Temperature extremes
Computers, like people, operate best within certain temperature ranges. Most computer systems should be
kept between 10 and 32 degrees Celsius (50 and 90 degrees Fahrenheit). If the ambient temperature around
your computer gets too high, the computer cannot adequately cool itself, and internal components can be
damaged. If the temperature gets too cold, the system can undergo thermal shock when it is turned on,
causing circuit boards or integrated circuits to crack.
Electrical noise
Motors, fans, heavy equipment, and even other computers generate electrical noise that can cause
intermittent problems with the computer you are using. This noise can be transmitted through space or
nearby power lines. Electrical surges are a special kind of electrical noise that consists of one (or a few)
high-voltage spikes. If possible, each computer should have a separate electrical circuit with an isolated
ground and power-filtering equipment; in no case should a computer share a circuit with heavy
equipment.
Lightning
Lightning generates large power surges that can damage even computers with otherwise protected
electrical supplies. If lightning strikes your building's metal frame (or hits your building's lightning rod),
the resulting current can generate an intense magnetic field on its way to the ground. Computers should be
unplugged during lightning storms; if that's not possible, invest in surge suppression devices. Although
they won't protect against a direct strike, they can help when storms are distant. Magnetic media should
be stored as far as possible from the building's structural steel members. Never run copper network cable
outdoors unless it's in a metal conduit.
Water
Water can destroy your computer. The primary danger is an electrical short, which can happen if water
bridges between a circuit board trace carrying voltage and a trace carrying ground. Water usually comes
from rain or flooding. Sometimes it comes from an errant sprinkler system. Water also may come from
strange places, such as a toilet overflowing on a higher floor, vandalism, or the fire department. Keep
computers out of basements that are prone to flooding. Mount water sensors on the floor of computer
rooms, as well as under raised floors, and use them to automatically cut off power in the event of a flood.
Food and drink
Food, especially oily food, collects on people's fingers and from there gets on anything that a person
touches. Often this includes dirt-sensitive surfaces such as magnetic tapes and optical disks. One of the
fastest ways of putting a desktop keyboard out of commission is to pour a soft drink or cup of coffee
between the keys. Generally, the simplest rule is the safest: keep all food and drink away from your
computer systems.
Other environmental hazards
Several other environmental hazards bear consideration:
Dust. Keep computer rooms as dust-free as possible, and use a computer vacuum with a
microfilter on a regular basis.
Explosion. If you need to operate a computer in an area where there is a risk of explosion, you
might consider purchasing a system with a ruggedized case.
Insects. Take active measures to limit the amount of insect life in your machine room.
Vibration. In a high-vibration environment, place computers on a rubber or foam mat if you can
do so without blocking ventilation openings.
Environmental monitoring
To detect spurious problems, continuously monitor and record your computer room's temperature
and relative humidity. As a general rule of thumb, every 1,000 square feet of office space should have its
own recording equipment. Log and check recordings on a regular basis.
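The logging loop described above can be sketched as follows. The temperature range comes from the earlier guidance; the humidity bounds and log format are assumptions made for illustration:

```python
import datetime

TEMP_RANGE_C = (10, 32)       # safe range from the guidance above
HUMIDITY_RANGE = (20, 80)     # assumed percent-RH bounds, for illustration

def check_reading(temp_c, humidity, log):
    """Record one reading and flag anything outside the safe ranges."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    status = "OK"
    if not TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]:
        status = "TEMPERATURE ALARM"
    elif not HUMIDITY_RANGE[0] <= humidity <= HUMIDITY_RANGE[1]:
        status = "HUMIDITY ALARM"
    log.append(f"{stamp} {temp_c:.1f}C {humidity:.0f}%RH {status}")
    return status

log = []
print(check_reading(22.5, 45, log))   # OK
print(check_reading(38.0, 45, log))   # TEMPERATURE ALARM
```

In practice the readings would come from the recording equipment's sensors and the loop would run continuously, with the log reviewed on the regular schedule recommended above.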
Controlling Physical Access
Simple common sense will tell you to keep your computer in a locked room. But how safe is that
room? Sometimes a room that appears to be safe is actually wide open.
Raised floors and dropped ceilings
In many modern office buildings, internal walls do not extend above dropped ceilings or beneath
raised floors. This type of construction makes it easy for people in adjoining rooms, and sometimes
adjoining offices, to gain access.
Entrance through air ducts
If the air ducts that serve your computer room are large enough, intruders can use them to gain
entrance to an otherwise secured area. Areas that need a lot of ventilation should be served by several
small ducts, or should have screens welded over the air vents or inside the ducts. In a very high-security
environment, motion detectors can be placed inside air ducts.
Glass walls
Although glass walls and large windows frequently add architectural panache, they can be severe
security risks. Glass walls are easy to break; a brick and a bottle of gasoline thrown through a window can
cause an incredible amount of damage. An attacker can also gain critical knowledge, such as passwords or
information about system operations, simply by watching people on the other side of a glass wall or
window. It may even be possible to capture information from a screen by analyzing its reflective glow.
Interior glass walls are good for rooms which must be guarded but which the guard is not allowed to
enter; in most other cases, avoid them.
Defending Against Vandalism
Computer systems are good targets for vandalism. Reasons for vandalism include revenge, riots,
strikes, political or ideological statements, or simply entertainment for the feebleminded. In principle, any
part of a computer system, or the building that houses it, may be a target for vandalism. In practice,
some targets are more vulnerable than others.
Network cables
In many cases, a vandal can disable an entire subnet of workstations by cutting a single wire with
a pair of wire cutters. Compared with Ethernet, fiber optic cables are at the same time more vulnerable
(they can be more easily damaged), more difficult to repair (they are difficult to splice), and more
attractive targets (they often carry more information). Temporary cable runs often turn into permanent
installations, so take extra time and effort to install cable correctly the first time. One simple method for
protecting a network cable is to run it through physically secure locations. For example, Ethernet can be
run through steel conduits. Besides protecting against vandalism, this practice protects against some
forms of network eavesdropping, and may help protect your cables in the event of a small fire. Fiber optic
cable can suffer small fractures if someone steps on it. A fracture of this type is difficult to locate because
there is no break in the coating. Some high-security installations use double-walled, shielded conduits
with a pressurized gas between the layers. Pressure sensors on the conduit cut off all traffic or sound a
warning bell if the pressure ever drops, as might occur if someone breached the walls of the pipe.
Network connectors
In addition to cutting a cable, a vandal who has access to a network's endpoint (a network connector)
can electronically disable or damage the network. All networks based on wire are vulnerable to attacks
with high voltage. In many buildings, electrical, gas, or water cutoffs may be accessible, sometimes even
from the outside of the building. Because computers require electrical power, and because temperature
control systems may rely on gas heating or water cooling, these utility connections represent points of
attack for a vandal.
Preventing Theft
Computer theft, especially laptop theft, can be merely annoying or can be an expensive ordeal.
But if the computer contains information that is irreplaceable or extraordinarily sensitive, it can be
devastating.
Locks
One very good way to protect your computer from theft is to physically secure it. A variety of
physical tie-down devices are available to bolt computers to tables or cabinets. Although they cannot
prevent theft, they make it more difficult. Mobility is one of the great selling points of laptops. It is also
the key feature that leads to laptop theft. One of the best ways to decrease the chance of having your
laptop stolen is to lock it, at least temporarily, to a desk, a pipe, or another large object. Most laptops sold
today are equipped with a security slot. For less than $50 you can purchase a cable lock that attaches to a
nearby object and locks into the security slot. Once set, the lock cannot be removed without either using
the key or damaging the laptop case, which makes it very difficult to resell the laptop. These locks
prevent most grab-and-run laptop thefts.
Tagging
Another way to decrease the chance of theft and increase the likelihood of return is to etch
equipment with your name and phone number or tag it with permanent or semi permanent equipment
tags. Tags make it very difficult for potential buyers or sellers to claim that they didn't know that the
computer was stolen. The best equipment tags are clearly visible and individually serial-numbered, so that
an organization can track its property.
Laptop recovery software and services
Several companies now sell PC tracing programs. The tracing program hides in several
locations on a laptop and places a call to the tracing service on a regular basis to reveal its location. The
calls can be made using either a telephone line or an IP connection. Normally these "calls home" are
ignored, but if the laptop is reported stolen to the tracing service, the police are notified about the location
of the stolen property.
Component theft
At times when RAM has been expensive, businesses and universities have suffered rashes of RAM thefts.
Many computer businesses and universities have also had major thefts of advanced processor chips. RAM
and late-model CPU chips are easily sold on the open market. They are virtually untraceable. And, when
thieves steal only some of the RAM inside a computer, weeks or months may pass before the theft is
noticed. If a user complains that a computer is suddenly running more slowly than it did the day before,
check its RAM, and then check to see that its case is physically secured.
Encryption
If your computer is stolen, the information it contains will be at the mercy of the equipment's new
owners. They may erase it or they may read it. Sensitive information can be sold, used for blackmail, or
used to compromise other computer systems.
Routinely inspect all cables and wires carrying data for physical damage or modification, and
consider using shielded or armored cable to make wiretapping more difficult. If you are very
security-conscious, place cable in steel conduit.
Make sure unused offices do not have live Ethernet ports. Use Ethernet switches instead of hubs.
Run LAN monitoring software like arpwatch that detects packets with previously unknown MAC
addresses, or use switches that can perform MAC address filtering. Use fiber optic cables in
preference to twisted-pair networks when possible; they are harder to tap undetected.
Avoid using wireless networks; if you must build a wireless network, enable all possible security
features for defense-in-depth (e.g. encryption, firewalling, disabling SSID broadcasts, MAC
filters, etc.) Because most of these features provide very little security, educate your users to
always use a VPN or other encrypted tunnel for wireless networking. Place the wireless access
point outside your firewall (or between two firewalls).
Encryption provides significant protection against eavesdropping. Thus, in many cases, it makes
sense to assume that your communications are being monitored and to encrypt all
communications as a matter of course. When this is not feasible, at least encrypt all sensitive
traffic (such as login names and passwords for remote services).
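The arpwatch-style check mentioned above can be sketched as follows. This is a toy version: real tools capture ARP traffic from the network interface, while here the observed source addresses are supplied as plain strings.

```python
# Known MAC addresses previously seen on the LAN (illustrative values)
known_macs = {"aa:bb:cc:00:11:22", "aa:bb:cc:00:11:23"}

def check_frame(src_mac, known):
    """Return True (and remember the address) if this source MAC is new."""
    mac = src_mac.lower()       # MAC addresses compare case-insensitively
    if mac in known:
        return False
    known.add(mac)              # report a new address once, then treat as known
    return True

print(check_frame("AA:BB:CC:00:11:22", known_macs))  # False: already known
print(check_frame("de:ad:be:ef:00:01", known_macs))  # True: previously unknown
print(check_frame("de:ad:be:ef:00:01", known_macs))  # False: reported already
```

A previously unknown MAC address appearing on a supposedly unused port is exactly the kind of event that should trigger an administrator alert.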
Protecting Backups
Backups should be a prerequisite of any computer operationsecure or otherwisebut the
information stored on backup tapes is extremely vulnerable. Protect your backups at least as well as you
normally protect your computers themselves. Never leave them unattended in a generally accessible area,
keep them in physically secure locations (ideally, some in a location away from your computers), and be
careful who you trust to ship them from location to location. Most backup programs allow you to encrypt
the data before it is written to backup. Encrypted backups dramatically reduce the chance that a backup
tape or CD-ROM, if stolen, will be useful to an adversary. If you encrypt backups, be sure you protect the
encryption key, both so that an attacker cannot learn it and so that your key will not be lost if you should
change staff. Sometimes, backups in archives are slowly erased by environmental conditions. Magnetic
tape is also susceptible to a process called "print through," in which the magnetic domains on one piece of
tape wound on a spool affect the next layer. The only way to find out if this process is harming your
backups is to test them periodically.
Develop a clear system for labeling and inventorying of backup media. You can choose any system of
labeling and cataloging that you find effective, as long as you choose one and document it clearly.
Sanitizing Media before Disposal
When you discard disk drives, CD-ROMs, or tapes, make sure that the data on the media has been
completely erased. This process is called sanitizing. Simply deleting a file that is on your hard disk
doesn't delete the data associated with the file. Parts of the original data, and sometimes entire files,
can usually be easily recovered. Hard disks must be sanitized with special software that is specially
written for each particular disk drive's model number and revision level. For tapes, you can use a
degaussing machine or bulk eraser, a hand-held electromagnet that has a hefty field. Experiment with
reading back the information stored on tapes that you have bulk erased until you know how much
erasing is necessary to eliminate your data. Some software exists to overwrite optical media, thus erasing
the contents of even write-once items. However, the effectiveness of these methods varies from media
type to media type, and the overwriting may still leave some residues. For this reason, physical
destruction may be preferable. Incinerators and acid baths do a remarkably good job of destroying tapes,
but are not environmentally friendly.
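Overwriting before deletion, as such sanitizing software does, can be sketched as follows. This is a simplified illustration: on SSDs and journaling filesystems the drive or OS may keep copies of the data elsewhere, so it is no substitute for device-specific sanitizing tools or physical destruction.

```python
import os
import tempfile

def overwrite_and_delete(path, passes=3):
    """Overwrite a file's contents in place with random bytes before
    unlinking it, so the on-disk blocks no longer hold the original data."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())   # push each pass out to the device
    os.remove(path)

# Demo on a throwaway file
path = os.path.join(tempfile.gettempdir(), "secret.tmp")
with open(path, "wb") as f:
    f.write(b"confidential data")
overwrite_and_delete(path)
print(os.path.exists(path))  # False
```

The multiple passes mirror the "experiment until you know how much erasing is necessary" advice above: one pass is usually enough for magnetic disks, but verify against your own recovery tools.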
Sanitizing Printed Media
Printed material that may find its way into the trash may contain information that is useful to
criminals or competitors. This includes printouts of software (including incomplete versions), memos,
design documents, preliminary code, planning documents, internal newsletters, company phone books,
manuals, and other material. Other information that may find its way into your dumpster includes the
types and versions of your operating systems and computers, serial numbers, patch levels, and so on. It
may include hostnames, IP numbers, account names, and other information critical to an attacker. We
have heard of some firms disposing of listings of their complete firewall configuration and filter rules, a
gold mine for someone seeking to infiltrate the computers.
SECURITY MECHANISM IN OS
Security mechanisms focus on providing systems security solutions through a hardening process
that includes planning, installation, configuration, update, and maintenance. A system has a number of
layers, with the physical hardware at the bottom and the base operating system above it, including
privileged kernel code.
Computer client and server systems are central components of the IT infrastructure for most
organizations; they may hold critical data and applications, and are necessary tools for the functioning of
an organization. Accordingly, the presence of vulnerabilities in the OS and applications must be
recognized. It is quite possible for a system to be compromised during the installation process, before it
can install the latest patches or implement other hardening measures. Hence building and deploying a
system should be a planned process designed to counter such a threat, and to maintain security during its
operational lifetime.
The process must
Secure the underlying operating system and then the key applications
The categories of the users of the system and the type of information they access
Who will administer the system and how will they manage the system.
Harden and configure the OS to adequately address the identified security needs of the system.
Install and configure additional security controls, such as antivirus, host based firewalls and
intrusion detection systems
Test the security of the basic OS to ensure that the steps taken adequately address its security
needs.
Not all users with access to the system will have the same access to all data and resources on that
system. Nearly all modern OS provide some form of discretionary access controls. Some systems provide
role based or mandatory access control mechanism. The system planning process should consider the
categories of users on the system, the privileges they have, the types of information they can access and
how and where they are defined and authenticated.
Some users will have elevated privileges to administer the system; others will be normal users, sharing
appropriate access to files and other data as required. It is highly desirable that users with elevated
privileges use them only to perform tasks that require them, and otherwise access the system as normal
users. This improves security by providing a smaller window of opportunity for an attacker to exploit the
actions of such privileged users. Some OSs provide special tools or access mechanisms to assist
administrative users to elevate their privileges only when necessary and to appropriately log these actions.
Configure resource controls
Once the users and their associated groups are defined, appropriate permissions can be set on data
and resources to match the specified policy. This may be to limit which users can execute some programs,
especially those that modify the system state. Or it may be to limit which users can read or write data in
certain directory trees. Many of the security hardening guides provide lists of recommended changes to
the default access configuration to improve security.
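Tightening permissions on a sensitive data file, for example, might look like this on a POSIX system (a minimal sketch using owner-only read/write permissions; the exact mode is a policy choice):

```python
import os
import stat
import tempfile

# Create a stand-in for a sensitive data file
fd, path = tempfile.mkstemp()
os.close(fd)

# Restrict the file so only its owner can read or write it (rw-------),
# matching a policy that limits which users can read certain data.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))   # 0o600 on POSIX systems
os.remove(path)
```

Hardening guides typically list many such permission changes, covering system binaries and configuration files as well as user data.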
Install additional security controls
Further security improvement may be possible by installing and configuring additional security
tools such as antivirus software and host-based firewalls. Some of these may be supplied as part of the OS
installation, but not configured and enabled by default. Others are third-party products that must be
acquired and used. Firewalls are traditionally configured to limit access by port or protocol for some or all
external systems. Another control is to whitelist applications: this limits the programs that can execute on
the system to just those on an explicit list.
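Application whitelisting can be sketched as a hash check against an explicit list. The "programs" here are toy byte strings; a real implementation would hook program execution inside the OS and hash the actual binaries.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Content hash used to identify a program regardless of its filename."""
    return hashlib.sha256(data).hexdigest()

# Illustrative program contents; a real allowlist would be maintained by
# administrators from the hashes of approved binaries.
approved_program = b"#!/bin/sh\necho approved tool\n"
tampered_program = b"#!/bin/sh\necho malware\n"

ALLOWLIST = {sha256_of(approved_program)}

def may_execute(program_bytes: bytes) -> bool:
    """Permit execution only for programs whose hash is on the explicit list."""
    return sha256_of(program_bytes) in ALLOWLIST

print(may_execute(approved_program))  # True
print(may_execute(tampered_program))  # False
```

Hashing the content, rather than trusting the file name or path, also catches approved programs that have been modified in place.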
Test the system security
The final step in the process of initially securing the base OS is security testing. The goal is to
ensure that the previous security configuration steps are correctly implemented, and to identify any
possible vulnerabilities that must be corrected or managed. Suitable checklists are included in many
security hardening guides. There are also programs specifically designed to review a system to ensure that
it meets the basic security requirements, and to scan for known vulnerabilities and poor
configuration practices. This should be done following the initial hardening of the system, and then
repeated periodically as part of the security maintenance process.
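One simple configuration check of the kind such review tools perform is a scan for world-writable files, which usually indicate a permissions mistake. The sketch below shows the idea on a POSIX system; the helper `find_world_writable` is invented for this illustration.

```python
import os
import stat


def find_world_writable(root):
    """Walk a directory tree and report files that any user may write to,
    a common finding in security hardening checklists."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if mode & stat.S_IWOTH:  # 'other' write bit is set
                findings.append(path)
    return findings
```

Run over a system directory such as /etc, a properly hardened host should yield an empty list.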
APPLICATION SECURITY
Once the base OS is installed and appropriately secured, the required services and applications
must next be installed and configured. The concern, as with the base OS, is to install only the software
required to meet the desired functionality, in order to reduce the number of places where
vulnerabilities may be found. Software that provides remote access or remote services is of particular
concern, since an attacker may be able to exploit it to gain remote access to the system. Hence any
such software needs to be carefully selected, configured, and updated to the most recent version
available. Each selected service must be installed and then patched to the most recent supported secure
version appropriate for the system. This may come from additional packages provided with the OS
distribution, or from a separate third-party package. As with the base OS, using an isolated, secure
build network is preferred.
Application Configuration
Any application-specific configuration is then performed. This may include creating and
specifying appropriate data storage areas for the application, and making appropriate changes to the
application or service default configuration details. Some applications or services may include default
data, scripts, or user accounts. These should be reviewed and retained only if required. As part of the
configuration process, careful consideration should be given to the access rights granted to the
application. Again, this is of particular concern with remotely accessed services, such as web and file
transfer services. The server application should not be granted the right to modify files unless that
function is specifically required.
Encryption Technology
Encryption is a key enabling technology that may be used to secure data both in transit and when
stored. If such technologies are required for the system, they must be configured, and appropriate
cryptographic keys created, signed and secured. Cryptographic file systems are another use of encryption.
If desired, they must be created and secured with suitable keys.
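The Python standard library does not include a symmetric cipher, but the key-management half of this passage can still be illustrated: the sketch below generates a key from the OS CSPRNG via `secrets` and stores it readable by the owner only (on a POSIX system). The `create_and_secure_key` helper is invented for this example, and signing the key or enrolling it in a key management system is omitted.

```python
import os
import secrets
import tempfile


def create_and_secure_key(path, nbytes=32):
    """Generate a random symmetric key and store it with owner-only access.

    The key comes from the OS CSPRNG; the file is created with mode 0o600
    from the start, so it is never group- or world-readable, even briefly.
    """
    key = secrets.token_bytes(nbytes)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(key)
    return key


key_path = os.path.join(tempfile.mkdtemp(), "server.key")
key = create_and_secure_key(key_path)
print(len(key))  # 32 bytes, i.e. a 256-bit key
```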
SECURITY MAINTENANCE
Once the system is appropriately built, secured, and deployed, the process of maintaining security is
continuous. This results from the constantly changing environment, the discovery of new vulnerabilities, and
hence exposure to new threats. Security maintenance includes the following steps:
Using appropriate software maintenance processes to patch and update all critical software, and to
monitor and revise configuration if required.
Logging
Logging is a reactive control: it can only inform you about bad things that have already
happened. But effective logging helps to ensure that, in the event of a system breach or failure, system
administrators can quickly and accurately identify what happened and thus more effectively focus their
remediation and recovery efforts. The key is to ensure that the correct data are captured in the logs, and
that the logs are then appropriately monitored and analyzed. Logging information can be generated by the system,
the network, and applications. The range of logging data acquired should be determined during the system
planning stage, as it depends on the security requirements and information sensitivity of the server. Logging
can generate significant volumes of information, so it is important that sufficient space is allocated for it.
A suitable automatic log rotation and archive system should also be configured to assist in managing
the overall size of the logging information.
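As one concrete approach, Python's standard `logging` module can cap log growth with size-based rotation. The sketch below uses illustrative file names and sizes: it rotates the log once it reaches about 1 KB and keeps three old copies.

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()  # stand-in for e.g. /var/log/myapp
log_path = os.path.join(log_dir, "server.log")

# Rotate once the log reaches ~1 KB, keeping 3 old copies, so the
# logging data cannot grow without bound. Production limits are larger.
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=1024, backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("server")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

for i in range(200):  # enough traffic to force several rotations
    logger.info("event %d", i)

print(sorted(os.listdir(log_dir)))  # server.log plus rotated .1/.2/.3 copies
```

With `backupCount=3`, the oldest copy is silently discarded on each further rotation, which implements the bounded-size policy the text describes; archiving logs for longer retention would be a separate step.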
Data Backup and Archive
Performing regular backups of data on a system is another critical control that assists with
maintaining the integrity of the system and user data. There are many reasons why data can be lost from a
system, including hardware or software failures and accidental or deliberate corruption. There may also be legal or operational
requirements for the retention of data. Backup is the process of making copies of data at regular intervals,
allowing the recovery of lost or corrupted data over relatively short time periods of a few hours to some
weeks. Archive is the process of retaining copies of data over extended periods of time, measured in months or
years, in order to meet legal and operational requirements to access past data. These two procedures are often
linked and managed together.
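A minimal backup step can be sketched with Python's standard `tarfile` module. The `backup` helper and its naming scheme are invented for this illustration; a real scheme would schedule it at regular intervals and prune old archives according to the retention policy.

```python
import os
import tarfile
import tempfile
from datetime import datetime, timezone


def backup(data_dir, backup_dir):
    """Create a timestamped, compressed copy of a directory tree."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    archive = os.path.join(backup_dir, f"backup-{stamp}.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(data_dir, arcname=os.path.basename(data_dir))
    return archive


# Demonstrate with throwaway directories
data_dir = tempfile.mkdtemp()
with open(os.path.join(data_dir, "users.db"), "w") as f:
    f.write("example data")
archive = backup(data_dir, tempfile.mkdtemp())
print(tarfile.is_tarfile(archive))  # True: archive was written successfully
```

Embedding a UTC timestamp in each archive name keeps successive backups distinct, which is what later allows recovery of data as it existed at a particular point in time.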
VIRTUAL SECURITY
Virtualization, at its core, is the ability to emulate hardware via software. If we walk through the
process, some form of operating system still needs to be booted from the hardware. This may be a full-blown
OS such as Linux or Windows, or it may be a stripped-down OS specifically designed to provide
virtualization. In either case, an operating system is first booted and then an emulation software stack is
loaded, which is referred to as a hypervisor. The hypervisor is the component which is responsible for
emulating specific hardware configurations to guest operating systems. When a guest is loaded into a
virtual machine (VM), the hardware that gets detected is the simulated hardware via the hypervisor, not
the actual hardware itself. The guest OS is abstracted from the true hardware, adding a component of
versatility. The hypervisor is capable of creating multiple simulated environments, or multiple VMs,
which permits us to run multiple operating systems that may have slightly different hardware
requirements.
The diagram above (not reproduced here) represents the layout of a virtualized platform. The hypervisor
abstracts the VMs from the actual hardware by emulating these components. Virtualization is available in two different
flavors: host-OS based or bare metal. When virtualization is run on a host OS, it runs like any
other application. This permits the administrator to leverage any tools that are capable of running on the host OS
while managing the environment. On the down side, the host OS increases the amount of code being
executed on the system, and thus increases the surface area of potential attacks. A bare metal system still
uses an operating system, but the OS has been stripped down to only support the virtualized environment.
This reduces the number of available tools, but also decreases the amount of code that can potentially be
exploited. It can also be argued that a bare metal system can be more difficult to patch and upgrade.
One of the benefits of virtualization is that system resources can be re-allocated as needed. This
permits the administrator to better optimize the environment. In a legacy network, some semblance of an air gap exists between operating systems.
For example, two systems connected to the same Ethernet network can only communicate with each other via
the Ethernet network. If that network is disconnected or firewalled, the systems will be unable to
communicate with each other. In a virtualized environment, however, the hypervisor always creates a
software connection between systems. There is no way to completely isolate one operating system from
another without migrating one of the operating systems to a different hardware platform. It is this
persistent software connection that has led many to feel that virtualization can never be configured as
securely as a legacy network.
When working with virtualized resources, pay attention to whether you are working with physical
or logical resources. A physical resource is an actual physical device, such as a network card or storage drive. A
logical resource is a virtualized resource configured to appear as a physical resource. For example, a
virtual machine may see a 50 GB hard drive. In reality this may simply be a logical partition of a 300
GB physical drive, which is being shared across multiple virtual machines (VMs).
While resources in a virtualized environment are typically shared between VMs, it is possible to allocate a
physical resource to a specific VM instance. For example, a specific storage array could be dedicated for
use by a specific VM. While this reduces flexibility and increases cost, it does have the benefit of ensuring
that data storage cannot be inadvertently accessed across VMs.
Typically, when we determine which servers to virtualize, we look at performance metrics such as
average server utilization. The lower the utilization level, the more likely the server will make a good candidate
for virtualization. Security also needs to be part of this equation. When we virtualize a server with no
additional security controls (such as hypervisor malware control), we can potentially increase the risk to
that server. This may be acceptable for low-value data, or it may be completely unacceptable for
extremely sensitive information. A good risk analysis will guide us either way. This is where the risk
zones shown above come into our design. For example, all of the virtualized servers in the medium trust
zone will most likely require only minor security enhancements to mitigate risk to the proper level. The
high trust zone, however, will contain servers that will most likely require additional security
precautions. So by grouping our servers by risk level, we not only enhance manageability but make more
efficient use of our security resources.