
University of Babylon

College of Information Technology


Department of Information Networks

Network Security Protocols and Administration

Lecture 3
Types of Attacks

Assistant Lecturer
Rasha Hussein
3rd Class - Second Course

Types of Attacks
1. Denial of Service Attacks
1.1. Denial of Service Attacks Definition
The first type of attack to examine is the denial of service (DoS). A denial
of service attack is any attack that aims to deny legitimate users of the use
of the target system. This class of attack does not actually attempt to
infiltrate a system or to obtain sensitive information. It simply aims to
prevent legitimate users from accessing a given system.
This type of attack is one of the most common categories of attack. Many
experts feel that it is so common because most forms of denial of service
attacks are fairly easy to execute. The ease with which these attacks can
be executed means that even attackers with minimal technical skills can
often successfully perform a denial of service.
The concept underlying the denial of service attack is based on the fact
that any device has operational limits. This fact applies to all devices, not
just computer systems. For example, bridges are designed to hold weight
up to a certain limit, aircraft have limits on how far they can travel
without refuelling, and automobiles can accelerate only up to a certain point.
All of these various devices share a common trait: They have set
limitations to their capacity to perform work. Computers are no different
from these, or any other machine; they, too, have limits. Any computer
system, web server, or network can only handle a finite load.
How a workload (and its limits) is defined varies from one machine to
another. A workload for a computer system might be defined in a number
of different ways, including the number of simultaneous users, the size of
files, the speed of data transmission, or the amount of data stored.
Exceeding any of these limits will stop the system from responding. For
example, if you can flood a web server with more requests than it can
process, it will be overloaded and will no longer be able to respond to
further requests. This reality underlies the DoS attack. Simply overload
the system with requests, and it will no longer be able to respond to
legitimate users attempting to access the web server.
A DDoS attack uses many agents to send a large number of useless packets to the
victim in a short time, which makes the system's resources unavailable to
legitimate users. It overwhelms network resources with harmful packets
and prevents normal users from accessing the system resources. Since it
is very difficult to set predefined rules that correctly identify genuine
network traffic, an anomaly-based Intrusion Detection System (IDS) for
network security is commonly used to detect and prevent DDoS attacks.
In a DDoS attack, the compromised computers are called agents
(zombies). Figure (1) illustrates the architecture of DDoS attack.

Figure (1): Architecture of Distributed Denial of Service (DDoS) Attack

As we can notice in Figure (1), the DDoS attacker performs a multistep
attack consisting of the following steps [1]:
1.1.1. Agent Selection Step
The DDoS attacker gains access to the agents indirectly, through the
handlers. In the first step the attacker chooses handlers that have
security vulnerabilities and intrudes into them because of this security
weakness. The attacker chooses as many network handlers and agents as
possible. These network-connected systems (handlers and agents) are
located outside the attacker's network and in different domains from the
victim.

[1] W. Bhaya and M. E. Manaa, "A Dynamic DDoS Attack Detection Approach Using Data Mining Techniques".


1.1.2. Compromising Step


The attacker compromises the weakly secured hosts to install malicious
code at a specific time. The Internet Control Message Protocol (ICMP) is
usually used in this step. Furthermore, the DDoS attacker tries to hide
the malicious code from detection: the security analyst cannot detect the
malicious code because the attacker's agent uses only a small amount of
resources (both memory and bandwidth). This leads to a minimal change in
performance for the users of the system's resources.
1.1.3. Communication Step
In this step, the attacker communicates with a large number of handlers
and identifies which agents are running and active. The DDoS attacker has
full control over these agents and is able to update them to schedule the
attack. Protocols such as TCP, UDP, or ICMP are used to run the
communication.
1.1.4. Attacking Step
The compromised agents send a large number of useless packets to the
victim simultaneously. The victim is overwhelmed and its service
availability is shut down, using many types of protocols such as TCP,
UDP, and ICMP. In most cases the attacker uses a spoofed IP address and
random ports during the attack in order to conceal his identity. Overall,
the DDoS attack is easy to carry out, very harmful, hard to trace,
difficult to prevent, and its threat is therefore serious.
1.2. Types of DDoS Attack
Table 1: Classification of DDoS Attacks

1- Volume Based Attacks
• Flood the bandwidth of the target server
• Units: bits per second (bps)
• Examples: TCP flood, UDP flood, ICMP flood

2- Protocol Based Attacks
• Directly occupy the target server's resources
• Units: packets per second (pps)
• Examples: Ping flood, Smurf attack, SYN flood

3- Application Layer Attacks
• Crash the server by exploiting application layer vulnerabilities
• Units: requests per second
• Examples: Hash DoS attack, Teardrop attack


1.2.1. SYN Flood


A SYN flood is a traffic-based DDoS attack that targets the TCP protocol.
When a client and a server communicate, a "TCP three-way handshake" must
occur for both parties to understand each other. The client sends the
server a synchronize message (SYN), the server then acknowledges the
synchronize (SYN-ACK), and the client acknowledges that acknowledgment
(ACK). So, effectively:
1. Client requests connection from server (step 1)
2. Server acknowledges the connection request (step 2)
3. Client acknowledges that the server is ok with connecting (step 3)

When these three steps happen, a connection is established between the
client and the server. This particular attack depends on the hacker’s
knowledge of how connections are made to a server. When a session is
initiated between the client and server in a network using the TCP
protocol, a small buffer space in memory is set aside on the server to
handle the “hand-shaking” exchange of messages that sets up the session.
The session-establishing packets include a SYN field that identifies the
sequence in the message exchange.

Figure 2: TCP SYN flood attack compared with a normal session


A SYN flood attempts to disrupt this process. In this attack, an attacker
sends a number of connection requests very rapidly and then fails to
respond to the reply that is sent back by the server. In other words, the
attacker requests connections, and then never follows through with the
rest of the connection sequence. This has the effect of leaving
connections on the server half open, and the buffer memory allocated for
them is reserved and not available to other applications. Although the
packet in the buffer is dropped after a certain period of time (usually
about three minutes) without a reply, the effect of many of these false
connection requests is to make it difficult for legitimate requests for a
session to be established.
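To make the half-open connection idea concrete, the sketch below simulates a server's backlog of pending handshakes. It is a minimal illustration in Python, not a real TCP implementation; the backlog size, the timeout, and all addresses are assumed values chosen only for the example.

# Conceptual sketch (not a real TCP stack): a server keeps a fixed-size
# backlog of half-open connections. SYNs that never complete the handshake
# occupy slots until a timeout, so a burst of bogus SYNs can crowd out
# legitimate clients. All names and numbers are illustrative.
import time

BACKLOG_SIZE = 5          # how many half-open connections the server can hold
SYN_TIMEOUT = 180.0       # seconds before a half-open entry is dropped (~3 minutes)

half_open = {}            # maps client address -> time the SYN arrived

def receive_syn(client_addr, now):
    """Handle an incoming SYN; return True if a slot was allocated."""
    # Drop entries that have waited too long without the final ACK.
    for addr, t in list(half_open.items()):
        if now - t > SYN_TIMEOUT:
            del half_open[addr]
    if len(half_open) >= BACKLOG_SIZE:
        return False      # backlog full: legitimate clients are refused
    half_open[client_addr] = now
    return True

def receive_ack(client_addr):
    """Final ACK of the handshake: the connection becomes fully open."""
    half_open.pop(client_addr, None)

# An attacker sends many SYNs from spoofed addresses and never ACKs.
now = time.time()
for i in range(10):
    receive_syn(f"10.0.0.{i}", now)

# A legitimate client now finds the backlog exhausted.
print(receive_syn("192.168.1.50", now))   # False: connection refused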
1.2.2. Smurf Attack
The Smurf attack is a popular type of DoS attack. It was named after the
application first used to execute this attack. In the Smurf attack, an ICMP
packet is sent out to the broadcast address of a network, but its return
address has been altered to match one of the computers on that network,
most likely a key server. All the computers on the network will then
respond by pinging the target computer.
ICMP packets use the Internet Control Message Protocol to send control
and error messages on the Internet. Because the packets are sent to a
broadcast address, they are echoed out to all hosts on the network, which
then reply to the spoofed source address.
Continually sending such packets will cause the network itself to perform
a DoS attack on one or more of its member servers. This attack is both
clever and simple. The greatest difficulty is getting the packets started on
the target network. This can be accomplished via some software such as a
virus or Trojan horse that will begin sending the packets.

Figure 3: Smurf Attack



1.2.3. UDP Flood


UDP (User Datagram Protocol) is a connectionless protocol and it does
not require any connection setup procedure to transfer data. TCP packets
connect and wait for the recipient to acknowledge receipt before sending
the next packet. Each packet is confirmed. UDP simply sends packets
without confirmation. This allows packets to be sent much faster,
making it easier to perform a DoS attack.
A UDP flood attack occurs when an attacker sends a UDP packet to a
random port on the victim system. When the victim system receives a
UDP packet, it will determine what application is waiting on the
destination port. When it realizes that no application is waiting on the
port, it will generate an ICMP destination unreachable packet addressed to
the forged source address. If enough UDP packets are delivered to ports on
the victim, the system goes down.
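Earlier it was noted that anomaly-based detection is commonly used against such floods. The following is a minimal, hedged sketch of that idea: it counts UDP packets per source over a short sliding window and flags sources that exceed a threshold. The window length, the threshold, and the addresses are illustrative assumptions, not recommended values.

# A minimal rate-based check in the spirit of an anomaly-based IDS:
# count UDP packets per source over a short window and flag sources
# that exceed a threshold. The threshold and window are illustrative.
from collections import defaultdict, deque
import time

WINDOW = 1.0          # seconds
THRESHOLD = 1000      # UDP packets per source per window considered abnormal

arrivals = defaultdict(deque)   # source IP -> arrival timestamps

def observe_udp_packet(src_ip, now=None):
    """Record one UDP packet and return True if the source looks like a flood."""
    now = time.time() if now is None else now
    q = arrivals[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW:   # slide the window
        q.popleft()
    return len(q) > THRESHOLD

# Example: simulate a burst from one source.
t0 = 0.0
flagged = False
for i in range(1500):
    flagged = observe_udp_packet("203.0.113.7", now=t0 + i * 0.0005)
print("flood suspected:", flagged)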
1.3. DoS Tools
One reason that DoS attacks are becoming so common is that a number of
tools are available for executing DoS attacks. These tools are widely
available on the Internet, and in most cases are free to download. This
means that any cautious administrator should be aware of them. In
addition to their obvious use as an attack tool, they can also be useful for
testing your anti-DoS security measures.
Low Orbit Ion Cannon (LOIC) is probably the most well-known and one
of the simplest DoS tools. You first put the URL or IP address into the
target box. Then click the Lock On button. You can change settings
regarding what method you choose, the speed, how many threads, and
whether or not to wait for a reply. Then simply click the IMMA
CHARGIN MAH LAZER button and the attack is underway.
High Orbit Ion Cannon (HOIC) is a bit more advanced than LOIC, but
actually simpler to run. Click the + button to add targets. A popup
window will appear where you put in the URL as well as a few settings.
2. Buffer Overflow Attacks
Another way of attacking a system is called a buffer overflow (or buffer
overrun) attack. Some experts would argue that the buffer overflow
occurs as often as the DoS attack, but this is less true now than it was a
few years ago. A buffer overflow attack is designed to put more data in a
buffer than the buffer was designed to hold. This means that although this
threat might be less than it once was, it is still a very real threat.
Any program that communicates with the Internet or a private network
must receive some data. This data is stored, at least temporarily, in a
space in memory called a buffer. If the programmer who wrote the
application was careful, the buffer will truncate or reject any information
that exceeds the buffer limit.
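The sketch below illustrates the two careful-programmer behaviors just described, truncating or rejecting over-long input. It is a conceptual Python illustration only; languages with unchecked memory access (such as C) are where real overflows occur, and the buffer size here is an assumed value.

# Conceptual sketch of the "careful programmer" behavior described above:
# a fixed-size buffer either truncates or rejects input that exceeds its
# limit instead of letting the excess spill elsewhere. Names are illustrative.
BUFFER_LIMIT = 1024   # bytes the buffer was designed to hold

def store_truncating(data: bytes) -> bytes:
    """Keep only what fits; silently drop the rest."""
    return data[:BUFFER_LIMIT]

def store_rejecting(data: bytes) -> bytes:
    """Refuse over-long input outright."""
    if len(data) > BUFFER_LIMIT:
        raise ValueError(f"input of {len(data)} bytes exceeds {BUFFER_LIMIT}-byte buffer")
    return data

oversized = b"A" * 2048
print(len(store_truncating(oversized)))   # 1024: extra bytes never reach memory
try:
    store_rejecting(oversized)
except ValueError as e:
    print("rejected:", e)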
Given the number of applications that might be running on a target
system and the number of buffers in each application, the chance of
having at least one buffer that was not written properly is significant
enough to cause any cautious system administrator some concern. A
person moderately skilled in programming can write a program that
purposefully writes more data into the buffer than it can hold. For
example, if the buffer can hold 1024 bytes of data and you try to fill it
with 2048 bytes, the extra 1024 bytes are then simply loaded into memory.
If the extra data is actually a malicious program, then it has just been
loaded into memory and is running on the target system. Or perhaps the
perpetrator simply wants to flood the target machine’s memory, thus
overwriting other items that are currently in memory and causing them to
crash. Either way, the buffer overflow is a very serious attack.
Fortunately, buffer overflow attacks are a bit harder to execute than the
DoS or a simple MS Outlook script virus. To create a buffer overflow
attack, a hacker must have a good working knowledge of some
programming language (C or C++ is often chosen) and understand the
target operating system/application well enough to know whether it has a
buffer overflow weakness and how it might exploit the weakness.
3. IP Spoofing
IP spoofing is essentially a technique used by hackers to gain
unauthorized access to computers. Although this is the most common
reason for IP spoofing, it is occasionally done simply to mask the origins
of a DoS attack. In fact, DoS attacks often use spoofing to mask the
actual IP address from which the attack originates.
With IP spoofing, the intruder sends messages to a computer system with
an IP address indicating that the message is coming from a different IP
address than it is actually coming from. If the intent is to gain
unauthorized access, then the spoofed IP address will be that of a system
the target considers a trusted host.
To successfully perpetrate an IP spoofing attack, the hacker must first
find the IP address of a machine that the target system considers a trusted
source. Hackers might employ a variety of techniques to find an IP
address of a trusted host. After they have that trusted IP address, they can
then modify the packet headers of their transmissions so it appears that
the packets are coming from that host.
IP spoofing, unlike many other types of attacks, was actually known to
security experts on a theoretical level before it was ever used in a real
attack. The concept of IP spoofing was initially discussed in academic
circles as early as the 1980s. Although the concept behind this technique
was known for some time, it was primarily theoretical until Robert Morris
discovered a security weakness in the TCP protocol known as sequence
prediction.
IP spoofing attacks are becoming less frequent, primarily because the
venues they use are becoming more secure and in some cases are simply
no longer used. However, spoofing can still be used, and all security
administrators should address it.
A couple of different ways to address IP spoofing include:
• Do not reveal any information regarding your internal IP addresses.
This helps prevent those addresses from being “spoofed.”
• Monitor incoming IP packets for signs of IP spoofing using
network monitoring software. One popular product is Netlog. This
and similar products seek incoming packets to the external
interface that have both the source and destination IP addresses in
your local domain, which essentially means an incoming packet
that claims to be from inside the network, when it is clearly coming
from outside your network. Finding one means an attack is
underway.
The danger from IP spoofing is that some firewalls do not examine
packets that appear to come from an internal IP address. Routing packets
through filtering routers is possible if they are not configured to filter
incoming packets whose source address is in the local domain.
Examples of router configurations that are potentially vulnerable include:

• Routers to external networks that support multiple internal interfaces
• Proxy firewalls where the proxy applications use the source IP
address for authentication
• Routers with two interfaces that support subnetting on the internal
network
• Routers that do not filter packets whose source address is in the
local domain.
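The monitoring idea described above, flagging packets that arrive on the external interface yet claim an internal source address, can be sketched as follows. This is a minimal illustration, not the behavior of any particular product such as Netlog; the internal prefix and the sample addresses are assumptions.

# A minimal sketch of the ingress check described above: a packet arriving
# on the external interface whose source address lies inside the local
# domain is almost certainly spoofed. The prefix below is illustrative.
import ipaddress

LOCAL_DOMAIN = ipaddress.ip_network("192.168.0.0/16")   # assumed internal range

def looks_spoofed(src_ip: str, arrived_on_external_interface: bool) -> bool:
    """Flag packets that claim an internal source but arrive from outside."""
    return arrived_on_external_interface and \
        ipaddress.ip_address(src_ip) in LOCAL_DOMAIN

print(looks_spoofed("192.168.4.20", True))    # True: claims inside, came from outside
print(looks_spoofed("8.8.8.8", True))         # False: ordinary external source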

Security Association

Security Association is a very important aspect of IPSec. IPSec requires a logical
relationship, called a Security Association (SA), between two hosts. This section
first discusses the idea and then shows how it is used in IPSec.

Idea of Security Association

A Security Association is a contract between two parties; it creates a secure channel
between them. Let us assume that Alice needs to unidirectionally communicate with
Bob. If Alice and Bob are interested only in the confidentiality aspect of security,
they can create a shared secret key between themselves. We can say that there are
two Security Associations (SAs) between Alice and Bob; one outbound SA and one
inbound SA.

Each of them stores the value of the key in a variable and the name of the encryption/
decryption algorithm in another. Alice uses the algorithm and the key to encrypt a
message to Bob; Bob uses the algorithm and the key when he needs to decrypt the
message received from Alice. Figure 30.8 shows a simple SA.
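A minimal sketch of this idea is shown below: each party keeps a record holding the shared key and the agreed algorithm, Alice's outbound SA mirroring Bob's inbound SA. The field names, key, and algorithm label are illustrative assumptions, not the actual SA format defined by IPSec.

# A minimal sketch of the idea of a Security Association: each side keeps a
# record holding the shared key and the agreed algorithm. Alice's outbound SA
# mirrors Bob's inbound SA. Field names are illustrative, not from the RFCs.
from dataclasses import dataclass

@dataclass
class SecurityAssociation:
    peer: str             # who the SA is with
    direction: str        # "outbound" or "inbound"
    algorithm: str        # e.g. the negotiated encryption algorithm
    key: bytes            # the shared secret key

shared_key = b"0123456789abcdef"          # agreed out of band for this sketch

alice_outbound = SecurityAssociation("Bob", "outbound", "AES-128-CBC", shared_key)
bob_inbound    = SecurityAssociation("Alice", "inbound", "AES-128-CBC", shared_key)

# Alice encrypts with the key/algorithm in her outbound SA; Bob decrypts with
# the matching values in his inbound SA.
assert alice_outbound.key == bob_inbound.key
assert alice_outbound.algorithm == bob_inbound.algorithm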

The Security Associations can be more involved if the two parties need message
integrity and authentication. Each association needs other data such as the algorithm
for message integrity, the key, and other parameters. It can be much more complex if
the parties need to use specific algorithms and specific parameters for different
protocols, such as IPSec AH or IPSec ESP.

Security Association Database (SAD)


A Security Association can be very complex. This is particularly true if Alice
wants to send messages to many people and Bob needs to receive messages
from many people.
In addition, each site needs to have both inbound and outbound SAs to allow
bidirectional communication. In other words, we need a set of SAs that can be
collected into a database. This database is called the Security Association
Database (SAD). The database can be thought of as a two-dimensional table
with each row defining a single SA. Normally, there are two SADs, one inbound
and one outbound. Figure 30.9 shows the concept of outbound or inbound SADs
for one entity.

When a host needs to send a packet that must carry an IPSec header, the host needs to
find the corresponding entry in the outbound SAD to find the information for
applying security to the packet. Similarly, when a host receives a packet that carries
an IPSec header, the host needs to find the corresponding entry in the inbound SAD
to find the information for checking the security of the packet. This searching must
be specific in the sense that the receiving host needs to be sure that correct
information is used for processing the packet.
**Each entry in an inbound SAD is selected using a triple index:

1-security parameter index (a 32-bit number that defines the SA at the destination),

2-destination address, and

3- protocol (AH or ESP).
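A small sketch of an inbound SAD lookup keyed by this triple index is given below. The SPI values, addresses, and stored parameters are assumed for illustration; a real implementation stores many more attributes per SA.

# A minimal sketch of an inbound Security Association Database (SAD):
# entries are selected by the triple (SPI, destination address, protocol).
# Values and field names are illustrative.
inbound_sad = {
    # (security parameter index, destination address, protocol) -> SA parameters
    (0x00001234, "10.0.0.5", "ESP"): {"cipher": "AES-128-CBC", "auth": "HMAC-SHA1"},
    (0x00005678, "10.0.0.5", "AH"):  {"auth": "HMAC-SHA1"},
}

def lookup_inbound_sa(spi: int, dst: str, protocol: str):
    """Find the SA to use when checking an arriving IPSec packet."""
    return inbound_sad.get((spi, dst, protocol))

print(lookup_inbound_sa(0x00001234, "10.0.0.5", "ESP"))
print(lookup_inbound_sa(0x0000FFFF, "10.0.0.5", "ESP"))   # None: no SA, packet dropped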

Security Policy

Another important aspect of IPSec is the Security Policy (SP), which defines the
type of security applied to a packet when it is to be sent or when it has arrived. Before
using the SAD, a host must determine the predefined policy for the packet.

Security Policy Database

Each host that is using the IPSec protocol needs to keep a Security Policy Database
(SPD). Again, there is a need for an inbound SPD and an outbound SPD. Each entry
in the SPD can be accessed using a sextuple index: source address, destination
address, name, protocol, source port, and destination port, as shown in Figure 30.10.

Source and destination addresses can be unicast, multicast, or wildcard addresses.


The name usually defines a DNS entity. The protocol is either AH or ESP. The
source and destination ports are the port addresses for the process running at the
source and destination hosts.
Internet Key Exchange (IKE)

The Internet Key Exchange (IKE) is a protocol designed to create both inbound and
outbound Security Associations. As we discussed in the previous section, when a
peer needs to send an IP packet, it consults the Security Policy Database (SPD) to see
if there is an SA for that type of traffic. If there is no SA, IKE is called to establish
one.

IKE is a complex protocol based on three other protocols: Oakley, SKEME, and
ISAKMP, as shown in Figure 30.13.

The Oakley protocol was developed by Hilarie Orman. It is a key creation protocol.
SKEME, designed by Hugo Krawczyk, is another protocol for key exchange. It uses
public-key encryption for entity authentication in a key-exchange protocol. The
Internet Security Association and Key Management Protocol (ISAKMP) is a
protocol designed by the National Security Agency (NSA) that actually implements
the exchanges defined in IKE. It defines several packets, protocols, and parameters
that allow the IKE exchanges to take place in standardized, formatted messages to
create SAs. We leave the discussion of these three protocols for books dedicated to
security.
Virtual Private Network (VPN)

One of the applications of IPSec is in virtual private networks. A virtual private
network (VPN) is a technology that is gaining popularity among large organizations
that use the global Internet for both intra- and inter-organization communication, but
require privacy in their intra-organization communication.

VPN is a network that is private but virtual. It is private because it guarantees privacy
inside the organization. It is virtual because it does not use real private WANs; the
network is physically public but virtually private.

Figure 30.14 shows the idea of a virtual private network. Routers R1 and R2 use VPN
technology to guarantee privacy for the organization. VPN technology uses ESP
protocol of IPSec in the tunnel mode. A private datagram, including the header, is
encapsulated in an ESP packet. The router at the border of the sending site uses its
own IP address and the address of the router at the destination site in the new
datagram. The public network (Internet) is responsible for carrying the packet from
R1 to R2. Outsiders cannot decipher the contents of the packet or the source and
destination addresses. Deciphering takes place at R2, which finds the destination
address of the packet and delivers it.
TRANSPORT LAYER SECURITY

Two protocols are dominant today for providing security at the transport layer: the
Secure Sockets Layer (SSL) protocol and the Transport Layer Security (TLS)
protocol. We discuss SSL in this section; TLS is very similar. Figure 30.15 shows
the position of SSL and TLS in the Internet model.

**One of the goals of these protocols is to provide server and client authentication,
data confidentiality, and data integrity. Application-layer client/server programs, such
as HTTP, that use the services of TCP can encapsulate their data in SSL packets.
If the server and client are capable of running SSL (or TLS) programs, then the client
can use the URL https://... instead of http://... to allow HTTP messages to be
encapsulated in SSL (or TLS) packets. For example, credit card numbers can be
safely transferred via the Internet for online shoppers.
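As a small illustration of HTTP being encapsulated in SSL/TLS, the sketch below opens a TLS connection with Python's standard ssl module and sends one request. The host name is only an example and the code assumes network access; it shows the client side only.

# A small sketch of "HTTP encapsulated in SSL/TLS" from the client side,
# using Python's standard library. The host name is illustrative and the
# request assumes network access.
import socket, ssl

context = ssl.create_default_context()        # verifies the server certificate
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print("negotiated:", tls_sock.version())          # e.g. TLSv1.3
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200).decode(errors="replace"))  # status line and headers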

SSL Architecture

SSL is designed to provide security and compression services to data generated from
the application layer. Typically, SSL can receive data from any application layer
protocol, but usually the protocol is HTTP. The data received from the application is
compressed (optional), signed, and encrypted.

The data is then passed to a reliable transport layer protocol such as TCP.
Netscape developed SSL in 1994. Versions 2 and 3 were released in 1995. In this
section, we discuss SSLv3.
Services

SSL provides several services on data received from the application layer.

❑ Fragmentation. First, SSL divides the data into blocks of 2^14 bytes or less.

❑ Compression. Each fragment of data is compressed using one of the lossless
compression methods negotiated between the client and server. This service is
optional.

❑ Message Integrity. To preserve the integrity of data, SSL uses a keyed-hash
function to create a MAC.

❑ Confidentiality. To provide confidentiality, the original data and the MAC are
encrypted using symmetric-key cryptography.

❑ Framing. A header is added to the encrypted payload. The payload is then
passed to a reliable transport layer protocol.
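The order of these services can be sketched as a simple processing pipeline, as below. This is a conceptual illustration, not the real SSL record format: HMAC-SHA-256 and zlib stand in for the negotiated MAC and compression algorithms, and the encryption step is left as a labelled placeholder.

# A conceptual sketch of the order of SSL record services: fragment,
# (optionally) compress, add a keyed-hash MAC, encrypt, then frame with a
# header. The MAC and compression use standard-library stand-ins; the
# encryption step is left as a labelled placeholder rather than a real cipher.
import hmac, hashlib, zlib, struct

FRAGMENT_LIMIT = 2 ** 14          # SSL fragments are at most 2^14 bytes

def ssl_like_records(data: bytes, mac_key: bytes, compress: bool = True):
    records = []
    for i in range(0, len(data), FRAGMENT_LIMIT):
        fragment = data[i:i + FRAGMENT_LIMIT]                 # 1. fragmentation
        if compress:
            fragment = zlib.compress(fragment)                # 2. optional compression
        mac = hmac.new(mac_key, fragment, hashlib.sha256).digest()   # 3. message integrity
        payload = fragment + mac                              # 4. would be encrypted here
        header = struct.pack("!BH", 23, len(payload))         # 5. framing: type + length
        records.append(header + payload)
    return records

recs = ssl_like_records(b"GET / HTTP/1.1\r\n" * 2000, mac_key=b"demo-mac-key")
print(len(recs), "records,", len(recs[0]), "bytes in the first")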

Key Exchange Algorithms

To exchange an authenticated and confidential message, the client and the server
each need a set of cryptographic secrets. However, to create these secrets, one
pre-master secret must be established between the two parties. SSL defines several
key-exchange methods to establish this pre-master secret.

Some Concepts:
Encryption/Decryption Algorithms

The client and server also need to agree to a set of encryption and decryption
algorithms.

Hash Algorithms

SSL uses hash algorithms to provide message integrity (message
authentication). Several hash algorithms have also been defined for this purpose.

Cipher Suite

The combination of key exchange, hash, and encryption algorithms defines a
cipher suite for each SSL session.

Compression Algorithms

Compression is optional in SSL. No specific compression algorithm is defined.
Therefore, a system can use whatever compression algorithm it desires.
Cryptographic Parameter Generation

To achieve message integrity and confidentiality, SSL needs six cryptographic
secrets: four keys and two IVs (initialization vectors). The client needs one key for
message authentication, one key for encryption, and one IV as the original block in
the calculation. The server needs the same. SSL requires that the keys for one
direction be different from those for the other direction; if there is an attack in one
direction, the other direction is not affected. The parameters are generated using the
following procedure:

1. The client and server exchange two random numbers; one is created by the client
and the other by the server.

2. The client and server exchange one pre-master secret using one of the predefined
key-exchange algorithms.

3. A 48-byte master secret is created from the pre-master secret by applying two
hash functions (SHA-1 and MD5).

4. The master secret is used to create variable-length key material by applying the
same set of hash functions and prepending with different constants, as shown in
Figure 30.17. The module is repeated until key material of adequate size is created.
Note that the length of the key material block depends on the cipher suite selected
and the size of the keys needed for this suite.
5. Six different secrets are extracted from the key material, as shown in Figure 30.18.
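A hedged sketch of steps 2-4 for SSLv3 is shown below: the 48-byte master secret is built from the pre-master secret and the two random numbers using SHA-1 and MD5 with the constants 'A', 'BB', and 'CCC'. It follows the SSLv3 scheme for illustration only; SSLv3 is obsolete and this derivation should not be used in new systems.

# A hedged sketch of the SSLv3-style master-secret derivation described above:
# the 48-byte master secret is built from the pre-master secret and the two
# exchanged random numbers by combining SHA-1 and MD5 with the constants
# 'A', 'BB', 'CCC'. For illustration only.
import hashlib, os

def sslv3_master_secret(pre_master: bytes, client_random: bytes, server_random: bytes) -> bytes:
    out = b""
    for constant in (b"A", b"BB", b"CCC"):
        inner = hashlib.sha1(constant + pre_master + client_random + server_random).digest()
        out += hashlib.md5(pre_master + inner).digest()   # each MD5 adds 16 bytes
    return out                                            # 3 x 16 = 48 bytes

pre_master = os.urandom(48)
client_random, server_random = os.urandom(32), os.urandom(32)
master = sslv3_master_secret(pre_master, client_random, server_random)
print(len(master), "bytes")    # 48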

Sessions and Connections

SSL differentiates a connection from a session. A session is an association between a
client and a server. After a session is established, the two parties have common
information such as the session identifier, the certificate authenticating each of them
(if necessary), the compression method (if needed), the cipher suite, and a master
secret that is used to create keys for message authentication and encryption.

For two entities to exchange data, the establishment of a session is necessary, but not
sufficient; they need to create a connection between themselves. The two entities
exchange two random numbers and create, using the master secret, the keys and
parameters needed for exchanging messages involving authentication and
privacy.

A session can consist of many connections. A connection between two parties can be
terminated and re-established within the same session. When a connection is
terminated, the two parties can also terminate the session, but it is not mandatory. A
session can be suspended and resumed again.

Four Protocols

We have discussed the idea of SSL without showing how SSL accomplishes its
tasks. SSL defines four protocols in two layers, as shown in Figure 30.19.
1. The Record Protocol is the carrier. It carries messages from three other protocols
as well as the data coming from the application layer. Messages from the Record
Protocol are payloads to the transport layer, normally TCP.

2. The Handshake Protocol provides security parameters for the Record Protocol. It
establishes a cipher set and provides keys and security parameters. It also
authenticates the server to the client and the client to the server if needed. In its
final phase (Phase IV: finalizing and finishing), the client and server send
messages to change the cipher specification and to finish the handshaking protocol.

3. The ChangeCipherSpec Protocol is used for signaling the readiness of
cryptographic secrets.

4. The Alert Protocol is used to report abnormal conditions.


Internet Security Objectives

❑To introduce the idea of Internet security at the network layer and the IPSec protocol
that implements that idea in two modes: transport and tunnel.

❑ To discuss two protocols in IPSec, AH and ESP, and explain the security services
each provide.

❑ To introduce security association and its implementation in IPSec.

❑ To introduce virtual private networks (VPN) as an application of IPSec in the
tunnel mode.

❑ To introduce the idea of Internet security at the transport layer and the SSL protocol
that implements that idea.

❑ To show how SSL creates six cryptographic secrets to be used by the client and the
server.

❑ To discuss four protocols used in SSL and how they are related to each other.

❑ To introduce Internet security at the application level and two protocols, PGP and
S/MIME, that implement that idea.

❑ To show how PGP and S/MIME can provide confidentiality and message
authentication.

❑ To discuss firewalls and their applications in protecting a site from intruders.

NETWORK LAYER SECURITY

We start with the discussion of security at the network layer. Although in the next two
sections we discuss security at the transport and application layers, we also need
security at the network layer for three reasons. First, not all client/server programs
are protected at the application layer. Second, not all client/server programs at the
application layer use the services of TCP to be protected by the transport layer security
that we discuss for the transport layer; some programs use the service of UDP.

Third, many applications, such as routing protocols, directly use the service of IP;
they need security services at the IP layer.

IP Security (IPSec) is a collection of protocols designed by the Internet Engineering
Task Force (IETF) to provide security for a packet at the network level. IPSec helps
create authenticated and confidential packets for the IP layer.

Two Modes

IPSec operates in one of two different modes: transport mode or tunnel mode.

Transport Mode

In transport mode, IPSec protects what is delivered from the transport layer to the
network layer. In other words, transport mode protects the payload to be encapsulated
in the network layer, as shown in Figure 30.1.

Note that transport mode does not protect the IP header. In other words, transport
mode does not protect the whole IP packet; it protects only the packet from the
transport layer (the IP layer payload). In this mode, the IPSec header (and trailer) are
added to the information coming from the transport layer. The IP header is added later.
Transport mode is normally used when we need host-to-host (end-to-end) protection
of data.

The sending host uses IPSec to authenticate and/or encrypt the payload delivered from
the transport layer. The receiving host uses IPSec to check the authentication and/or
decrypt the IP packet and deliver it to the transport layer. Figure 30.2 shows this
concept.
Tunnel Mode
In tunnel mode, IPSec protects the entire IP packet. It takes an IP packet, including
the header, applies IPSec security methods to the entire packet, and then adds a new IP
header, as shown in Figure 30.3.

The new IP header has different information than the original IP header. Tunnel mode
is normally used between two routers, between a host and a router, or between a router
and a host, as shown in Figure 30.4. The entire original packet is protected from
intrusion between the sender and the receiver, as if the whole packet goes through an
imaginary tunnel.
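The sketch below illustrates tunnel-mode encapsulation: the whole original datagram is protected and hidden inside a new packet whose outer header carries only the two gateway addresses. Headers are plain dictionaries and the ESP protection step is a labelled placeholder, so this is a conceptual model, not real packet processing.

# A conceptual sketch of tunnel-mode encapsulation: the entire original
# datagram (header included) is protected and wrapped, and a new outer IP
# header carries the addresses of the two gateway routers. Headers are
# plain dictionaries; the "protect" step stands in for real ESP processing.
def esp_protect(inner_packet_bytes: bytes) -> bytes:
    # Placeholder: real ESP would encrypt and authenticate these bytes.
    return b"<ESP>" + inner_packet_bytes + b"</ESP>"

def tunnel_encapsulate(original_packet: dict, gateway_src: str, gateway_dst: str) -> dict:
    inner = repr(original_packet).encode()          # the whole packet, header and payload
    return {
        "src": gateway_src,                          # new outer header: router R1
        "dst": gateway_dst,                          # new outer header: router R2
        "payload": esp_protect(inner),               # original packet hidden inside
    }

original = {"src": "10.1.1.7", "dst": "10.2.2.9", "payload": "private data"}
outer = tunnel_encapsulate(original, gateway_src="203.0.113.1", gateway_dst="198.51.100.1")
print(outer["src"], "->", outer["dst"])              # only router addresses are visible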

Comparison
In transport mode, the IPSec layer comes between the transport layer and the network
layer. In tunnel mode, the flow is from the network layer to the IPSec layer and then
back to the network layer again. Figure 30.5 compares the two modes.
Two Security Protocols
IPSec defines two protocols, the Authentication Header (AH) Protocol and the
Encapsulating Security Payload (ESP) Protocol, to provide authentication and/or
encryption for packets at the IP level.

Authentication Header (AH)


The Authentication Header (AH) Protocol is designed to authenticate the source
host and to ensure the integrity of the payload carried in the IP packet. The protocol
uses a hash function and a symmetric (secret) key to create a message digest; the digest
is inserted in the authentication header. The AH is then placed in the appropriate
location, based on the mode (transport or tunnel). Figure 30.6 shows the fields and the
position of the authentication header in transport mode.

When an IP datagram carries an authentication header, the original value in the
protocol field of the IP header is replaced by the value 51. A field inside the
authentication header (the next header field) holds the original value of the protocol
field (the type of payload being carried by the IP datagram). The addition of an
authentication header follows these steps:
1. An authentication header is added to the payload with the authentication data field
set to 0.
2. Padding may be added to make the total length even for a particular hashing
algorithm.
3. Hashing is based on the total packet. However, only those fields of the IP header
that do not change during transmission are included in the calculation of the message
digest (authentication data).
4. The authentication data are inserted in the authentication header.
5. The IP header is added after changing the value of the protocol field to 51.
A brief description of each field follows:
❑ Next header. The 8-bit next header field defines the type of payload carried by the
IP datagram (such as TCP, UDP, ICMP, or OSPF).
❑ Payload length. The name of this 8-bit field is misleading. It does not define the
length of the payload; it defines the length of the authentication header in 4-byte
multiples, but it does not include the first 8 bytes.
❑ Security parameter index. The 32-bit security parameter index (SPI) field plays
the role of a virtual circuit identifier and is the same for all packets sent during a
connection called a Security Association (discussed later).
❑ Sequence number. A 32-bit sequence number provides ordering information for a
sequence of datagrams. The sequence numbers prevent a playback. Note that the
sequence number is not repeated even if a packet is retransmitted. A sequence number
does not wrap around after it reaches 2^32; a new connection must be established.
❑ Authentication data. Finally, the authentication data field is the result of applying
a hash function to the entire IP datagram except for the fields that are changed during
transit (e.g., time-to-live).
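The core of the AH computation, a keyed hash over the datagram with the mutable fields and the authentication-data field zeroed, can be sketched as follows. HMAC-SHA-1 stands in for the negotiated algorithm and the packet layout is a simplified assumption, not the real wire format.

# A minimal sketch of the AH idea: a keyed hash (here HMAC-SHA1, as a
# stand-in for the negotiated algorithm) is computed over the datagram with
# the mutable fields (e.g. time-to-live) and the authentication-data field
# zeroed, so sender and receiver hash exactly the same bytes.
import hmac, hashlib

def ah_digest(packet: dict, key: bytes) -> bytes:
    stable = dict(packet)
    stable["ttl"] = 0                 # mutable field excluded from the digest
    stable["auth_data"] = b""         # authentication data field set to 0
    material = repr(sorted(stable.items())).encode()
    return hmac.new(key, material, hashlib.sha1).digest()

key = b"shared-ah-key"
packet = {"src": "10.0.0.1", "dst": "10.0.0.5", "ttl": 64,
          "payload": b"hello", "auth_data": b""}
packet["auth_data"] = ah_digest(packet, key)      # sender fills the field

# Receiver recomputes the digest the same way; a changed TTL does not matter,
# but a changed payload does.
packet["ttl"] = 62
print(hmac.compare_digest(packet["auth_data"], ah_digest(packet, key)))   # True
packet["payload"] = b"tampered"
print(hmac.compare_digest(packet["auth_data"], ah_digest(packet, key)))   # False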

Encapsulating Security Payload (ESP)


The AH protocol does not provide confidentiality, only source authentication and data
integrity. IPSec later defined an alternative protocol, Encapsulating Security Payload
(ESP), that provides source authentication, integrity, and confidentiality. ESP adds a
header and trailer. Note that ESP’s authentication data are added at the end of the
packet, which makes its calculation easier. Figure 30.7 shows the location of the ESP
header and trailer.
When an IP datagram carries an ESP header and trailer, the value of the protocol
field in the IP header is 50. A field inside the ESP trailer (the next-header field) holds
the original value of the protocol field (the type of payload being carried by the IP
datagram, such as TCP or UDP).
The ESP procedure follows these steps:
1. An ESP trailer is added to the payload.
2. The payload and the trailer are encrypted.
3. The ESP header is added.
4. The ESP header, payload, and ESP trailer are used to create the authentication
data.
5. The authentication data are added to the end of the ESP trailer.
6. The IP header is added after changing the protocol value to 50.
The fields for the header and trailer are as follows:
❑Security parameter index. The 32-bit security parameter index field is similar to
the one defined for the AH protocol.
❑Sequence number. The 32-bit sequence number field is similar to the one defined
for the AH protocol.
❑Padding. This variable-length field (0 to 255 bytes) of 0s serves as padding.
❑Pad length. The 8-bit pad-length field defines the number of padding bytes. The
value is between 0 and 255; the maximum value is rare.
❑Next header. The 8-bit next-header field is similar to that defined in the AH
protocol. It serves the same purpose as the protocol field in the IP header before
encapsulation.
❑Authentication data. Finally, the authentication data field is the result of
applying an authentication scheme to parts of the datagram. Note the difference
between the authentication data in AH and ESP. In AH, part of the IP header is
included in the calculation of the authentication data; in ESP, it is not.

IPv4 and IPv6


IPSec supports both IPv4 and IPv6. In IPv6, however, AH and ESP are part of the
extension header.
AH versus ESP
The ESP protocol was designed after the AH protocol was already in use. ESP does
whatever AH does with additional functionality (confidentiality). The question is, why
do we need AH? The answer is that we don’t. However, the implementation of AH is
already included in some commercial products, which means that AH will remain part
of the Internet until these products are phased out.
Services Provided by IPSec
The two protocols, AH and ESP, can provide several security services for packets at
the network layer. Table 30.1 shows the list of services available for each protocol.

Access Control
IPSec provides access control indirectly using a Security Association Database
(SAD), as we will see in the next section. When a packet arrives at a destination,
and there is no Security Association already established for this packet, the packet
is discarded.
Message Integrity
Message integrity is preserved in both AH and ESP. A digest of the data is
created and sent by the sender to be checked by the receiver.

Entity Authentication
The Security Association and the keyed-hash digest of the data sent by the sender
authenticate the sender of the data in both AH and ESP.
Confidentiality
The encryption of the message in ESP provides confidentiality. AH, however, does not
provide confidentiality. If confidentiality is needed, one should use ESP instead of AH.
Replay Attack Protection
In both protocols, the replay attack is prevented by using sequence numbers and a
sliding receiver window. Each IPSec header contains a unique sequence number when
the Security Association is established. The number starts from 0 and increases until
the value reaches 232 − 1. When the sequence number reaches the maximum, it is reset
to 0 and, at the same time, the old Security Association (see the next section) is deleted
and a new one is established.
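A simplified sketch of this sequence-number and sliding-window mechanism is shown below. The window size and the exact acceptance rules are illustrative assumptions; real IPSec implementations maintain the window as a bitmap, as described in the RFCs.

# A simplified sketch of replay protection with sequence numbers and a
# sliding receiver window: packets far to the left of the window or already
# seen are rejected. Window size and behaviour are illustrative.
WINDOW_SIZE = 64

class ReplayWindow:
    def __init__(self):
        self.highest = 0          # highest sequence number accepted so far
        self.seen = set()         # sequence numbers accepted within the window

    def accept(self, seq: int) -> bool:
        if seq <= self.highest - WINDOW_SIZE:
            return False          # too old: outside the window
        if seq in self.seen:
            return False          # duplicate: replay detected
        self.seen.add(seq)
        if seq > self.highest:
            self.highest = seq
            # forget entries that fell off the left edge of the window
            self.seen = {s for s in self.seen if s > self.highest - WINDOW_SIZE}
        return True

w = ReplayWindow()
print(w.accept(1), w.accept(2), w.accept(2))    # True True False (replay)
print(w.accept(200), w.accept(100))             # True False (100 is now too old)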
Key Distribution Using Asymmetric Encryption

One of the major roles of public-key encryption is to address the
problem of key distribution. There are actually two distinct aspects to
the use of public-key encryption in this regard.

• The distribution of public keys.


• The use of public-key encryption to distribute secret keys.

1-Public-Key Certificates
The point of public-key encryption is that the public key is public. Thus, if there is
some broadly accepted public-key algorithm, such as RSA, any
participant can send his or her public key to any other participant or
broadcast the key to the community at large. Although this approach is
convenient, it has a major weakness.
Anyone can forge such a public announcement. That is, some user
could pretend to be user A and send a public key to another participant
or broadcast such a public key. Until such time as user A discovers the
forgery and alerts other participants, the forger is able to read all
encrypted messages intended for A and can use the forged keys for
authentication.
The solution to this problem is the public-key certificate. A certificate
consists of a public key plus a user ID of the key owner, with the
whole block signed by a trusted third party.
*Typically, the third party is a certificate authority (CA) that is trusted
by the user community, such as a government agency or a financial
institution.
* A user can present his or her public key to the authority in a secure
manner and obtain a certificate. The user can then publish the
certificate.
*Anyone needing this user’s public key can obtain the certificate and
verify that it is valid by way of the attached trusted signature. Figure
4.3 illustrates the process.
One scheme has become universally accepted for formatting
public-key certificates: the X.509 standard. X.509 certificates are
used in most network security applications, including IP security,
secure sockets layer (SSL), secure electronic transactions (SET), and
S/MIME.
2-Public-Key Distribution of Secret Keys

With conventional encryption, a fundamental requirement for two
parties to communicate securely is that they share a secret key. With
conventional encryption, Bob and his correspondent, say, Alice, must
come up with a way to share a unique secret key that no one else
knows. How are they going to do that? He could encrypt this key
using conventional encryption and e-mail it to Alice, but this means
that Bob and Alice must share a secret key to encrypt this new secret
key.
One approach is the use of Diffie-Hellman key exchange. This
approach is indeed widely used. However, it suffers the drawback
that, in its simplest form, Diffie-Hellman provides no authentication
of the two communicating partners.
A powerful alternative is the use of public-key certificates. When
Bob wishes to communicate with Alice, Bob can do the following:
1. Prepare a message.
2. Encrypt that message using conventional encryption with a one-
time conventional session key.
3. Encrypt the session key using public-key encryption with Alice’s
public key.

4. Attach the encrypted session key to the message and send it to
Alice.
Only Alice is capable of decrypting the session key and therefore of
recovering the original message. If Bob obtained Alice’s public key
by means of Alice's public-key certificate, then Bob is assured that it is
a valid key.
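The four steps above can be sketched with the third-party Python 'cryptography' package (assumed to be installed): a one-time symmetric session key encrypts the message, and Alice's public key encrypts that session key. The key sizes and message are illustrative.

# A sketch of the four steps above using the third-party 'cryptography'
# package: encrypt the message with a one-time symmetric session key, then
# encrypt that session key with Alice's public key, so only Alice (holding
# the private key) can recover it.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Alice's key pair; in practice Bob would take the public key from Alice's certificate.
alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()

# Bob's side: steps 1-4.
message = b"Meet at noon."                                   # 1. prepare a message
session_key = Fernet.generate_key()                          # one-time session key
ciphertext = Fernet(session_key).encrypt(message)            # 2. conventional encryption
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
wrapped_key = alice_public.encrypt(session_key, oaep)        # 3. encrypt the session key
package = (wrapped_key, ciphertext)                          # 4. attach and send

# Alice's side: recover the session key, then the message.
recovered_key = alice_private.decrypt(package[0], oaep)
print(Fernet(recovered_key).decrypt(package[1]))             # b'Meet at noon.'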
X.509 Certificates
ITU-T recommendation X.509 (issued in 1988) is part of the X.500
series of recommendations that define a directory service. The
directory is a server or distributed set of servers that maintains a
database of information about users. The information includes a
mapping from user name to network address, as well as other
attributes and information about the users.
X.509 defines a framework for the provision of authentication services
by the X.500 directory to its users. The directory may serve as a
repository of public-key certificates. Each certificate contains the
public key of a user and is signed with the private key of a trusted
certification authority.
In addition, X.509 defines alternative authentication protocols based
on the use of public-key certificates. X.509 is an important standard
because the certificate structure and authentication protocols defined
in X.509 are used in a variety of contexts. For example, the X.509
certificate format is used in S/MIME, IP Security, and SSL/TLS
protocols.
X.509 is based on the use of public-key cryptography and digital
signatures. The standard does not dictate the use of a specific
algorithm but recommends RSA. The digital signature scheme is
assumed to require the use of a hash function. Again, the
standard does not dictate a specific hash algorithm. Figure 4.3
illustrates the generation of a public-key certificate.
Certificates
The heart of the X.509 scheme is the public-key certificate associated
with each user. These user certificates are assumed to be created by
some trusted certification
authority (CA) and placed in the directory by the CA or by the user.
The directory server itself is not responsible for the creation of public
keys or for the certification function; it merely provides an easily
accessible location for users to obtain certificates. Figure 4.4a shows
the general format of a certificate, which includes the following
elements.
• Version: Differentiates among successive versions of the certificate
format; the default is version 1. If the Issuer Unique Identifier or
Subject Unique Identifier are present, the value must be version 2. If
one or more extensions are present, the version must be version 3.
• Serial number: An integer value, unique within the issuing CA, that
is unambiguously associated with this certificate.
• Signature algorithm identifier: The algorithm used to sign the
certificate, together with any associated parameters. Because this
information is repeated in
the Signature field at the end of the certificate, this field has little, if
any, utility.
• Issuer name: X.500 name of the CA that created and signed this
certificate.
• Period of validity: Consists of two dates: the first and last on which
the certificate is valid.
• Subject name: The name of the user to whom this certificate refers.
That is, this certificate certifies the public key of the subject who
holds the corresponding private key.
The CA signs the certificate with its private key. If the corresponding
public key is known to a user, then that user can verify that a
certificate signed by the CA is valid. This is the typical digital
signature approach, as illustrated in Figure 4.5.
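The certificate fields just listed can be seen directly by building and reading back a toy self-signed certificate with the third-party 'cryptography' package (assumed installed). The names, validity period, and algorithms below are illustrative assumptions.

# A sketch that creates a toy self-signed certificate and then reads back the
# fields described above: version, serial number, signature algorithm, issuer,
# period of validity, and subject. All names and dates are illustrative.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Demo CA")])
now = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)                      # subject == issuer: self-signed
    .issuer_name(name)
    .public_key(key.public_key())            # the public key being certified
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())              # CA signs with its private key
)

print(cert.version)                          # Version.v3
print(cert.serial_number)                    # unique within the issuing CA
print(cert.signature_hash_algorithm.name)    # sha256
print(cert.issuer.rfc4514_string())          # CN=Demo CA
print(cert.not_valid_before, cert.not_valid_after)
print(cert.subject.rfc4514_string())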

OBTAINING A USER'S CERTIFICATE
User certificates generated by a CA have the following characteristics:
• Any user with access to the public key of the CA can verify the user
public key that was certified.
• No party other than the certification authority (CA) can modify the
certificate without this being detected.
Because certificates are unforgeable, they can be placed in
a directory without the need to make special efforts to protect them.
If all users subscribe to the same CA, then there is a common trust of
that CA.
All user certificates can be placed in the directory for access by all
users. In addition, a user can transmit his or her certificate directly to
other users. In either case, once B is in possession of A’s certificate, B
has confidence that messages it encrypts with A’s public key will be
secure from eavesdropping and that messages signed with A’s private
key are unforgeable.
If there is a large community of users, it may not be practical for all
users to subscribe to the same CA. This CA’s public key must be
provided to each user in an absolutely secure way (with respect to
integrity and authenticity) so that the user has confidence in the
associated certificates. Thus, with many users, it may be more
practical for there to be a number of CAs, each of which securely
provides its public key to some fraction of the users.
X.509 Version 3
The X.509 version 2 format does not convey all of the information
that recent design and implementation experience has shown to be
needed. [FORD95] lists the following requirements not satisfied by
version 2:
1. The Subject field is inadequate to convey the identity of a key
owner to a public-key user. X.509 names may be relatively short and
lacking in obvious identification details that may be needed by the
user.
2. The Subject field is also inadequate for many applications, which
typically recognize entities by an Internet e-mail address, a URL, or
some other Internet-related identification.
3. There is a need to indicate security policy information. This enables
a security
application or function, such as IPSec, to relate an X.509 certificate to
a given policy.
4. There is a need to limit the damage that can result from a
faulty or malicious CA by setting constraints on the applicability
of a particular certificate.
5. It is important to be able to identify different keys used by the same
owner at different times. This feature supports key life cycle
management, in particular the ability to update key pairs for users and
CAs on a regular basis or under exceptional circumstances.
KEY AND POLICY INFORMATION These extensions convey
additional information about the subject and issuer keys, plus
indicators of certificate policy. A certificate policy is a named set of
rules that indicates the applicability of a certificate to a particular
community and/or class of application with common security
requirements. This area includes:
• Authority key identifier: Identifies the public key to be used to
verify the signature on this certificate or certificate revocation list
(CRL). Enables distinct keys of the same CA to be differentiated.
• Subject key identifier: Identifies the public key being certified. A
subject may have multiple key pairs for different purposes (e.g., digital
signature and encryption key agreement).
• Key usage: Indicates a restriction imposed as to the purposes for
which, and the policies under which, the certified public key may be
used. e.g. digital signature, nonrepudiation, key encryption, data
encryption, key agreement, CA signature verification on certificates,
and CA signature verification on CRLs.
• Private-key usage period: Indicates the period of use of the private
key corresponding to the public key.
• Certificate policies: Certificates may be used in environments
where multiple policies apply.
• Policy mappings: Used only in certificates for CAs issued by other
CAs. Policy mappings allow an issuing CA to indicate that one or
more of that issuer’s policies can be considered equivalent to another
policy used in the subject CA’s domain.
Public-Key Infrastructure
RFC 4949 (Internet Security Glossary) defines public-key
infrastructure (PKI) as the set of hardware, software, people, policies,
and procedures needed to create, manage, store, distribute, and revoke
digital certificates based on asymmetric cryptography.
The principal objective for developing a PKI is to enable secure,
convenient, and efficient acquisition of public keys. The
Internet Engineering Task Force (IETF) Public Key Infrastructure
X.509 (PKIX) working group has been the driving force behind
setting up a formal (and generic) model based on X.509 that is
suitable for deploying a certificate-based architecture on the Internet.
This section describes the PKIX model. Figure 4.7 shows the
interrelationship among the key elements of the PKIX model. These
elements are:
• End entity: A generic term used to denote end users, devices (e.g.,
servers, routers), or any other entity that can be identified in the
subject field of a public key certificate. End entities typically consume
and/or support PKI-related services.
• Certification authority (CA): The issuer of certificates and
(usually) certificate
revocation lists (CRLs). It may also support a variety of administrative
functions, although these are often delegated to one or more
registration authorities.
• Registration authority (RA): An optional component that can
assume a number of administrative functions from the CA. The RA is
associated with the end entity registration process, but can assist in a
number of other areas as well.
• CRL issuer: An optional component that a CA can delegate to
publish CRLs.
• Repository: A generic term used to denote any method for storing
certificates and CRLs so that they can be retrieved by end entities.
PKIX Management Functions
PKIX identifies a number of management functions that potentially
need to be supported by management protocols. These are indicated in
Figure 4.7 and include the following:
• Registration: This is the process whereby a user first makes itself known to a
CA (directly, or through an RA), prior to that CA issuing a certificate
or certificates for that user. Registration begins the process of
enrolling in a PKI. Registration usually involves some off-line or
online procedure for mutual authentication. Typically, the end entity is
issued one or more shared secret keys used for subsequent
authentication.
• Initialization: Before a client system can operate securely, it is
necessary to install key materials that have the appropriate
relationship with keys stored elsewhere in the infrastructure. For
example, the client needs to be securely initialized with the public key
and other assured information of the trusted CA(s) to be used in
validating certificate paths.
• Certification: This is the process in which a CA issues a certificate
for a user’s public key and returns that certificate to the user’s client
system and/or posts that certificate in a repository.
KEY DISTRIBUTION AND USER AUTHENTICATION

In most computer security contexts, user authentication is the fundamental
building block and the primary line of defense. User authentication is the
basis for most types of access control and for user accountability. RFC 4949
(Internet Security Glossary) defines user authentication.

4.1 Symmetric Key Distribution Using Symmetric Encryption

For symmetric encryption to work, the two parties to an exchange must
share the same key, and that key must be protected from access by others.
Furthermore, frequent key changes are usually desirable to limit the
amount of data compromised if an attacker learns the key. Therefore, the
strength of any cryptographic system rests with the key distribution
technique, a term that refers to the means of delivering a key to two parties
that wish to exchange data, without allowing others to see the key. Key
distribution can be achieved in a number of ways. For two parties A and B,
the following options exist:
1. A key could be selected by A and physically delivered to B.
2. A third party could select the key and physically deliver it to A and B.
3. If A and B have previously and recently used a key, one party could
transmit the new key to the other, using the old key to encrypt the new key.
4. If A and B each have an encrypted connection to a third party C, C could
deliver a key on the encrypted links to A and B.
4.2 Kerberos
Kerberos is a key distribution and user authentication service
developed at the Massachusetts Institute of Technology (MIT) and specified
in RFC 4120. The problem that Kerberos addresses is this: Assume an
open distributed environment in which users at workstations
wish to access services on servers distributed throughout the
network. We would like for servers to be able to restrict access to
authorized users and to be able to authenticate requests for
service. In this environment, a workstation cannot be trusted to
identify its users correctly to network services.
In particular, the following three threats exist:

1. A user may gain access to a particular workstation and pretend to be
another user operating from that workstation.
2. A user may alter the network address of a workstation so that the
requests sent from the altered workstation appear to come from the
impersonated workstation.
3. A user may eavesdrop on exchanges and use a replay attack to gain
entrance to a server or to disrupt operations.

In any of these cases, an unauthorized user may be able to gain access to
services and data that he or she is not authorized to access. Rather than
building elaborate authentication protocols at each server, Kerberos
provides a centralized authentication server whose function is to
authenticate users to servers and servers to users. Kerberos relies
exclusively on symmetric encryption, making no use of public-key
encryption.

**Two versions of Kerberos are in use. Version 4 [MILL88, STEI88]
implementations still exist, although this version is being phased out.
Version 5 [KOHL94] corrects some of the security deficiencies of version 4
and has been issued as a proposed Internet Standard (RFC 4120).
Kerberos Version 4

Version 4 of Kerberos makes use of DES, in a rather elaborate protocol, to provide
the authentication service. In an unprotected network environment, any client
can apply to any server for service. An opponent can pretend to be
another client and obtain unauthorized privileges on server machines. To
counter this threat, servers must be able to confirm the identities of
clients who request service.

Solution

So, use an authentication server (AS) that knows the passwords of all
users and stores these in a centralized database. In addition, the AS shares
a unique secret key with each server. These keys have been distributed
physically or in some other secure manner.

The heart of the first problem is the lifetime associated with the ticket-
granting ticket (TGT). If this lifetime is very short (e.g., minutes), then the
user will be repeatedly asked for a password. If the lifetime is long (e.g.,
hours), then an opponent has a greater opportunity for replay. An opponent
could eavesdrop on the network, capture a copy of the ticket-granting ticket,
and then wait for the legitimate user to log out.
The opponent could then forge the legitimate user's network address and
send the message of step (3) to the TGS. This would give the opponent
unlimited access to the resources and files available to the legitimate user.

Similarly, if an opponent captures a service-granting ticket and
uses it before it expires, the opponent has access to the corresponding
service. Thus, we arrive at an additional requirement. A network service
(the TGS or an application service) must be able to prove that the person
using a ticket is the same person to whom that ticket was issued.

The second problem is that there may be a requirement for servers to
authenticate themselves to users. Without such authentication, an opponent
could sabotage the configuration so that messages to a server were
directed to another location. The false server would then be in a position to
act as a real server, capture any information from the user, and deny the
true service to the user.

Consider the following hypothetical dialogue:


(1) C → AS:  IDC || PC || IDV
(2) AS → C:  Ticket
(3) C → V:   IDC || Ticket

where  Ticket = E(Kv, [IDC || ADC || IDV])

*In this scenario, the user logs on to a workstation and requests access to server V.
The client module C in the user’s workstation requests the user’s password
and then sends a message to the AS that includes the user’s ID, the server’s
ID, and the user’s password.

**The AS checks its database to see if the user has supplied the proper
password for this user ID and whether this user is permitted access to
server V. If both tests are passed, the AS accepts the user as authentic and
must now convince the server that this user is authentic.
C   = client
AS  = authentication server
V   = server
IDC = identifier of user on C
IDV = identifier of V
PC  = password of user on C
ADC = network address of C
Kv  = secret encryption key shared by AS and V
To do so, the AS creates a ticket that contains the user’s ID and network
address and the server’s ID. This ticket is encrypted using the secret key
shared by the AS and this server. This ticket is then sent back to C.
**Because the ticket is encrypted, it cannot be altered by C or by an
opponent.
With this ticket, C can now apply to V for service. C sends a message to V
containing C’s ID and the ticket.
V decrypts the ticket and verifies that the user ID in the ticket is the same as
the unencrypted user ID in the message. If these two match, the server
considers the user authenticated and grants the requested service.
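
To make the flow of this simple dialogue concrete, the following Python sketch mimics the three messages with a toy ticket. It is an illustration only, not real Kerberos: the Fernet recipe stands in for DES, and all names and values (kv, user_db, the addresses) are invented for the example.

from cryptography.fernet import Fernet

kv = Fernet.generate_key()                   # secret key shared by AS and server V
user_db = {"alice": "alice-password"}        # AS password database (toy)

# (1) C -> AS : IDC || PC || IDV
id_c, p_c, id_v = "alice", "alice-password", "printserver"

# (2) AS -> C : Ticket = E(Kv, [IDC || ADC || IDV])
assert user_db[id_c] == p_c                  # AS verifies the password
ad_c = "10.0.0.7"                            # network address of C
ticket = Fernet(kv).encrypt(f"{id_c}|{ad_c}|{id_v}".encode())

# (3) C -> V : IDC || Ticket
# V decrypts with Kv and compares the ID inside the ticket with the claimed ID.
inside_id, inside_ad, inside_v = Fernet(kv).decrypt(ticket).decode().split("|")
if inside_id == id_c:
    print("access granted to", id_v)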
Kerberos Realms: A full-service Kerberos environment
consisting of a Kerberos server, a number of clients, and a number of
application servers requires the following:
1. The Kerberos server must have the user ID and hashed passwords of all
participating users in its database. All users are registered with the
Kerberos server.
2. The Kerberos server must share a secret key with each server. All servers
are registered with the Kerberos server.
A Kerberos realm is a set of managed nodes that share the same Kerberos
database. The Kerberos database resides on the Kerberos master
computer system, which should be kept in a physically secure room. A
read-only copy of the Kerberos database might also reside on other
Kerberos computer systems.
**However, all changes to the database must be made on the master
computer system. Changing or accessing the contents of a Kerberos
database requires the
Kerberos master password. For two realms to support interrealm
authentication, a third requirement is added:
3. The Kerberos server in each interoperating realm shares a secret key
with the server in the other realm. The two Kerberos servers are registered
with each other.

In this scheme (Figure 15.3), a user wishing service on a server in another realm
needs a ticket for that server. The user's client follows the usual
procedures to gain access to the local TGS and then requests a ticket-
granting ticket for a remote TGS (the TGS in another realm). The client can then
apply to the remote TGS for a service-granting ticket for the desired server
in the realm of the remote TGS.

Kerberos Version 5

Kerberos version 5 is specified in RFC 4120 and provides a


number of improvements over version 4 [KOHL94]. To begin, we
provide an overview of the changes from version 4 to version 5
and then look at the version 5 protocol.
DIFFERENCES BETWEEN VERSIONS 4 AND 5: Version 5 is intended
to address the limitations of version 4 in two areas: environmental
shortcomings and technical deficiencies. We briefly summarize the
improvements in each area. Kerberos version 4 did not fully address the
need to be of general purpose. This led to the following
environmental shortcomings.

1. Encryption system dependence: Version 4 requires the use of
DES. Export restrictions on DES as well as doubts about the
strength of DES were thus of concern. In version 5, ciphertext is
tagged with an encryption-type identifier so that any encryption
technique may be used. Encryption keys are tagged with a type
and a length, allowing the same key to be used in different
algorithms and allowing the specification of different variations
on a given algorithm.
2. Internet protocol dependence: Version 4 requires the use of
Internet Protocol (IP) addresses. Other address types, such as the
ISO network address, are not accommodated. Version 5
network addresses are tagged with type and length, allowing any
network address type to be used.

3. Message byte ordering: In version 4, the sender of a message employs a
byte ordering of its own choosing and tags the message to indicate least
significant byte in lowest address or most significant byte in lowest address.
This technique works but does not follow established conventions.
In version 5, all message structures are defined using Abstract
Syntax Notation One (ASN.1) and Basic Encoding Rules (BER), which
provide an unambiguous byte ordering.

4. Ticket lifetime: Lifetime values in version 4 are encoded in an 8-bit
quantity in units of five minutes. Thus, the maximum lifetime that can be
expressed is 2^8 × 5 = 1280 minutes (a little over 21 hours). This may be
inadequate for some applications (e.g., a long-running simulation that
requires valid Kerberos credentials throughout execution). In version 5,
tickets include an explicit start time and end time, allowing tickets with
arbitrary lifetimes (see the short sketch after this list).
5. Authentication forwarding: Version 4 does not allow credentials
issued to one client to be forwarded to some other host and used by some
other client. This capability would enable a client to access a server and
have that server access another server on behalf of the client. For example,
a client issues a request to a print server that then accesses the client's file
from a file server, using the client's credentials for access. Version 5
provides this capability.
6. Interrealm authentication: In version 4, interoperability among N
realms requires on the order of N² Kerberos-to-Kerberos relationships.
Version 5 supports a method that requires fewer relationships, as described
shortly.
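
As promised in item 4, the following small Python sketch contrasts the two lifetime encodings. It is a simplified illustration, not the actual wire format; the field names and dates are invented for the example.

from datetime import datetime, timedelta

# Kerberos v4: lifetime is an 8-bit count of 5-minute units.
max_units = 2**8                          # 256
max_v4_lifetime = timedelta(minutes=5 * max_units)
print(max_v4_lifetime)                    # 21:20:00 -> a little over 21 hours

# Kerberos v5: the ticket simply carries explicit start and end times.
ticket_v5 = {
    "starttime": datetime(2024, 1, 1, 8, 0),
    "endtime":   datetime(2024, 1, 3, 8, 0),   # a 48-hour ticket is possible
}
print(ticket_v5["endtime"] - ticket_v5["starttime"])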
Table 1 Notable encryption and authentication methods
ISO-OSI Reference Model:

The ISO (International Organization for Standardization) developed the OSI (Open
Systems Interconnection) model in 1982 for computer network connection. The OSI
reference model specifies seven layers of functionality, as shown in
Figure (3).

Figure (3): OSI Reference Model.

The layers represent different activities performed in the actual transmission of a
message. Figure (4) shows a typical message that has been acted upon by the
seven layers to prepare it for transmission. Layer 6 breaks the original message
data into blocks. At layer 5, a session header is added to show the sender,
the receiver, and some sequencing information. Layer 4 adds information
concerning the logical connection between the sender and receiver. At
layer 3, routing information is added and the message is divided into units
called 'packets', which are the standard units of communication in a network.
Layer 2 adds both a header and a trailer to ensure correct sequencing of the
message blocks, and to detect and correct transmission errors. The individual
bits of the message and the control information are transmitted on the physical
medium by layer 1.

**All the additions to the message are checked and removed by the

corresponding layer on the receiving side.

Figure (4): Message Prepared for Transmission

S: Session Header: Sequence Info.; Sender/Receiver Identification

T: Transport Header: Connection Info.

N: Network Header: Routing Info.

B: Data Link Header: Sequence Info.

E: Data Link Trailer: Error Correction Info
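
A toy sketch of this layering idea in Python, under the simplifying assumption that every header is just a text prefix and the trailer a text suffix; none of the field names below are real protocol fields.

# Sender side: each layer wraps the data handed down from the layer above.
def encapsulate(message: str) -> str:
    segment = "T[conn=1]" + message                  # layer 4: transport header
    packet  = "N[route=A>B]" + segment               # layer 3: network header
    frame   = "B[seq=7]" + packet + "E[crc=ok]"      # layer 2: header + trailer
    return frame                                     # layer 1 carries the raw bits

# Receiver side: each layer strips what its peer added, in reverse order.
def decapsulate(frame: str) -> str:
    packet  = frame.removeprefix("B[seq=7]").removesuffix("E[crc=ok]")
    segment = packet.removeprefix("N[route=A>B]")
    return segment.removeprefix("T[conn=1]")

assert decapsulate(encapsulate("hello")) == "hello"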


TCP/IP Model:

The TCP/IP four-layer model was created with reference to the seven-layer OSI
model, as shown in Figure (5). The two models share many similarities, but there
are philosophical and practical differences between them.
However, they both deal with communications among heterogeneous
computers.

Figure (5): TCP/IP model and its protocols

1.Network Access Layer:

The network access layer contains protocols that provide access to a

communication network.

2. Internet Layer:

The Internet layer provides a routing function. This layer consists of the Internet

Protocol (IP) and the Internet Control Message Protocol (ICMP).


3. Transport Layer:

The transport layer delivers data between two processes on different host

computers. This layer contains the Transmission Control Protocol (TCP) and the

User Datagram Protocol (UDP).

4. Application Layer:

This layer provides a direct interface with users or applications. Some of the
important application protocols are the File Transfer Protocol (FTP) for file
transfers, the Hypertext Transfer Protocol (HTTP) for the World Wide Web, the
Simple Network Management Protocol (SNMP) for controlling network devices,
the Simple Mail Transfer Protocol (SMTP), the Post Office Protocol (POP), and
the Internet Mail Access Protocol (IMAP) for e-mail, and Privacy Enhanced Mail
(PEM), Pretty Good Privacy (PGP), and Secure/Multipurpose Internet Mail
Extensions (S/MIME) for e-mail security.

========================================================

IP Addresses:

An IP address is the address of a computer attached to a TCP/IP network
(e.g., the Internet). Every client device (defined as a requester of
services) and server device (defined as a provider of services) must have a
unique IP address.
Client workstations have either a static address or a dynamic address that is
assigned to them for each session. IP version 4 (IPv4) addresses are written
as four sets of numbers separated by periods; for example, 192.168.111.222,
10.123.1.102, or 172.16.4.30.

Ports:

A port represents an endpoint in the establishment of a connection between

two or more computers. For the computer acting as the client, the destination

port number will typically identify the type of application/service being hosted

by the server.

For example:

TCP port 21 is the destination port number used when communicating with an FTP server.

TCP port 22 is the destination port number used when communicating with an SSH server.

TCP port 23 is the destination port number used when communicating with a Telnet server.

TCP port 25 is the destination port number used when communicating with an SMTP server.

TCP port 80 is the destination port number used when communicating with an HTTP server.

TCP port 110 is the destination port number used when communicating with a POP3 server.

TCP port 5190 is the destination port number used when communicating with an AOLIM server.

TCP port 6667 is the destination port number used when communicating with an IRC server.

The above is a small selection from a possible 65,535 (64K) port numbers.

The port numbers are divided into three ranges: the Well Known Ports (from 0

through 1023), the Registered Ports (from 1024 through 49151), and the

Dynamic and/or Private Ports (from 49152 through 65535).
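
As a small illustration of how a client reaches a well-known destination port, the Python snippet below opens a TCP connection to port 80 of a host and sends a minimal HTTP request. The host name example.com is only a placeholder assumption, and the snippet of course needs network access to run.

import socket

HOST, PORT = "example.com", 80   # destination port 80 = HTTP (well-known range)

with socket.create_connection((HOST, PORT), timeout=5) as s:
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(s.recv(200).decode(errors="replace"))   # first bytes of the server's reply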


Networks are Systems :-

A computer network is a large computing system that contains other computing
systems. Computing networks share similar characteristics.

Figure (6): Distributed Computing System.

Computer networks offer several advantages over single-processor systems:

1- Resource Sharing: Users of a network can access a variety of resources

through the network.

2- Increased Reliability: The failure of one system in the network (which
consists of more than one computing system) does not block users from
continuing to compute. If similar systems exist, users can move their computing
tasks to another system when one fails.

3- Distributing the Workload: The workload can be shifted from a heavily

loaded system to an underutilized one.

4- Expandability: Network systems can be expanded easily by adding new

nodes.

=========================================================

The OSI Security Architecture:-

The OSI security architecture focuses on security attacks, mechanisms, and
services. These can be defined briefly as follows:

1. Security attack: Any action that compromises the security of information
owned by an organization.

2. Security mechanism: A process (or a device incorporating such a process)
that is designed to detect, prevent, or recover from a security attack.

3. Security service: A processing or communication service that enhances the
security of the data processing systems and the information transfers of an
organization. The services are intended to counter security attacks, and they
make use of one or more security mechanisms to provide the service.

** The terms threat and attack are commonly used to mean more or less the
same thing. The following definitions are taken from RFC 2828, Internet Security
Glossary.
Threat

A potential for violation of security, which exists when there is a circumstance,
capability, action, or event that could breach security and cause harm.
That is, a threat is a possible danger that might exploit a vulnerability.

Attack

An assault on system security that derives from an intelligent threat; that
is, an intelligent act that is a deliberate attempt (especially in the sense
of a method or technique) to evade security services and violate the security
policy of a system.

Security Mechanisms

Table 1.3 lists the security mechanisms defined in X.800. As can be seen, the
mechanisms are divided into those that are implemented in a specific protocol
layer and those that are not specific to any particular protocol layer or
security service. X.800 distinguishes between reversible encipherment mechanisms
and irreversible encipherment mechanisms. A reversible encipherment mechanism
is simply an encryption algorithm that allows data to be encrypted and
subsequently decrypted. Irreversible encipherment mechanisms include hash
algorithms and message authentication codes, which are used in digital
signature and message authentication applications.


A Model for Network Security

A general model for network security is shown in Figure 1.5. A message is to be
transferred from one party to another across some sort of internet. The two
parties, who are the principals in this transaction, must cooperate for the
exchange to take place. A logical information channel is established by defining
a route through the internet from source to destination and by the cooperative
use of communication protocols (e.g., TCP/IP) by the two principals.

Figure 1.5. Model for Network Security

Security aspects come into play when it is necessary or desirable to protect the
information transmission from an opponent who may present a threat to
confidentiality, authenticity, and so on. All the techniques for providing security
have two components:

● A security-related transformation on the information to be sent. Examples
include the encryption of the message, which scrambles the message so that it
is unreadable by the opponent, and the addition of a code based on the
contents of the message, which can be used to verify the identity of the sender.

● Some secret information shared by the two principals and, it is hoped,
unknown to the opponent. An example is an encryption key used in
conjunction with the transformation to scramble the message before
transmission and unscramble it on reception. [5]
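
Both components can be seen together in a short Python sketch: a secret shared by the two principals plus a transformation (here a keyed hash) that produces a code based on the message contents. This is only an illustration of the idea; the key and message values are invented.

import hmac, hashlib

shared_secret = b"only-sender-and-receiver-know-this"   # the secret information
message = b"transfer 100 to account 42"

# Transformation: a code computed over the message with the shared secret.
code = hmac.new(shared_secret, message, hashlib.sha256).hexdigest()

# The receiver recomputes the code and compares; an opponent without the
# secret cannot produce a matching code for an altered message.
ok = hmac.compare_digest(
    code, hmac.new(shared_secret, message, hashlib.sha256).hexdigest())
print(ok)   # True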

A trusted third party may be needed to achieve secure transmission. For
example, a third party may be responsible for distributing the secret
information to the two principals while keeping it from any opponent. Or a
third party may be needed to arbitrate disputes between the two principals
concerning the authenticity of a message transmission.

This general model shows that there are four basic tasks in designing a

particular security service:

1. Design an algorithm for performing the security-related transformation. The
algorithm should be such that an opponent cannot defeat its purpose.
2. Generate the secret information to be used with the algorithm.
3. Develop methods for the distribution and sharing of the secret information.
4. Specify a protocol to be used by the two principals that makes use of the
security algorithm and the secret information to achieve a particular security
service.
What Is Network Security?
As we know, computer networks are distributed networks of
computers that are either strongly connected, meaning that they share a
lot of resources from one central computer, or loosely connected,
meaning that they share only those resources that can make the
network work. When we talk about computer network security, the
object of study is no longer one computer but a network.
Computer network security is therefore a broader study than computer
security. It is still a branch of computer science, but a much broader one.
It involves creating an environment in which a computer network,
including all its many resources, all the data in it both in
storage and in transit, and all its users, is secure. Because it is wider
in scope than computer security, it is a more complex field of study,
involving more detailed mathematical designs of cryptographic,
communication, transport, and exchange protocols and best practices.
How does network security work?
Network security combines multiple layers of defenses at the edge and
in the network. Each network security layer implements policies and
controls. Authorized users gain access to network resources, but
malicious actors are blocked from carrying out exploits and threats.

Securing the Computer Network


In short, we protect objects. System objects are either real (tangible) or
not real (intangible). In a computer network model, the real objects are the
hardware resources in the system, and the not-real objects are the information
and data in the system, both in transit and static in storage.
1.Hardware
Protecting hardware resources include protecting :-
• End user objects that include the user interface hardware components
such as all client system input components, including a keyboard,
mouse, touch screen, light pens, and others.
• Network objects like firewalls, hubs, switches, routers and gateways
which are vulnerable to hackers.
• Network communication channels to prevent eavesdroppers from
intercepting network communications.
2.Software
Protecting software resources includes protecting hardware-based
software, operating systems, server protocols, browsers, application
software, and intellectual property stored on network storage
disks and databases.
It also involves protecting client software such as investment
portfolios, financial data, real estate records, images or pictures, and
other personal files commonly stored on home and business
computers.
Forms of Protection
This section surveys the ways and forms of protecting these objects. Prevention
of unauthorized access to system resources is achieved through a number
of services that include:
1. access control,
2. authentication,
3. confidentiality, and
4. integrity.

1. Access Control
Access control is a service the system uses, together with user-provided
identification information such as a password, to determine who may use
which of its services. Let us look at some forms of access control based on
hardware and software.
1.1 Hardware Access Control Systems
Access control tools falling in this category include the following:
• Access terminal. Access-terminal checks can be done in a variety of ways,
including fingerprint verification and real-time anti-break-in sensors.
• Visual event monitoring. This is a combination of many
technologies into one very useful and rapidly growing form of access
control using a variety of real time technologies including video and
audio signals, aerial photographs, and global positioning system
(GPS) technology to identify locations.
• Identification cards. Sometimes called proximity cards, these cards
have become very common these days as a means of access control in
buildings, financial institutions, and other restricted areas.
The cards come in a variety of forms, including magnetic, bar coded,
contact chip, and a combination of these.
• Biometric identification. This is perhaps the fastest growing form
of control access tool today. Some of the most popular forms include
fingerprint, iris, and voice recognition. However, fingerprint
recognition offers a higher level of security.
• Video surveillance. This is a replacement for the CCTV of yesteryear,
and it is gaining popularity as an access control tool. With fast
networking technologies and digital cameras, images can now be
taken and analyzed very quickly, and action taken in minutes.
1.2 Software Access Control Systems
Software access control falls into two types: point of access monitoring and
remote monitoring.
a. In point of access (POA) monitoring, personal activities can be
monitored by a PC-based application. The application can even be
connected to a network or to other machines.
b. In remote mode, the terminals can be linked in a variety of ways,
including the use of modems, telephone lines, and all forms of
wireless connections.

2. Authentication
Authentication is a service used to identify a user, especially a remote user
whose identity is otherwise hard to establish.
This service provides a system with the capability to verify that a user
is the very one he or she claims to be, based on what the user is,
knows, and has.
Physically, we can authenticate users by checking one or more
of the following user items:
• User name (sometimes screen name)
• Password

• Retinal images: The user looks into an electronic device that maps
his or her eye retina image; the system then compares this map with a
similar map stored on the system.
• Fingerprints: The user presses on or sometimes inserts a particular
finger into a device that makes a copy of the user fingerprint and then
compares it with a similar image on the system user file.
• Physical location: The physical location of the system initiating an
entry request is checked to ensure that a request is actually originating
from a known and authorized location. In networks, to check the
authenticity of a client’s location a network or Internet protocol (IP)
address of the client machine is compared with the one on the system
user file.
This method is used mostly in addition to other security measures
because it alone cannot guarantee security. If used alone, it provides
access to the requested system to anybody who has access to the client
machine.
• Identity cards: Increasingly, cards are being used as authenticating
documents. Whoever carries the card gains access to the requested
system. Card authentication is therefore usually used as a second-level
authentication tool, because whoever has access to the card
automatically gains access to the requested system.
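
Passwords from the list above are normally stored and checked as salted hashes rather than in the clear. The following Python fragment is a minimal sketch of that idea; the iteration count, function names, and example passwords are illustrative assumptions, not a recommendation.

import hashlib, hmac, os

def register(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest        # only the salt and digest are stored, never the password

def authenticate(password: str, salt: bytes, stored_digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = register("correct horse battery staple")
print(authenticate("correct horse battery staple", salt, digest))   # True
print(authenticate("wrong guess", salt, digest))                     # False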
3. Confidentiality
The confidentiality service protects system data and information from
unauthorized disclosure.
This service uses encryption algorithms to ensure that no third party, such as
a cryptanalyst or a man-in-the-middle, can read the data while it is in the
network.
Encryption protects the communications channel from sniffers.
Sniffers are programs written for and installed on the communication
channels to eavesdrop on network traffic, examining all traffic on
selected network segments. Sniffers are easy to write and install and
difficult to detect. The encryption algorithm can either be symmetric
or asymmetric.
*Symmetric encryption or secret key encryption, as it is usually
called, uses a common key and the same cryptographic algorithm to
scramble and unscramble the message.
* Asymmetric encryption, commonly known as public-key encryption,
uses two different keys: a public key known by all and a private key
known only by its owner.
The sender and the receiver each have a pair of these keys, one
public and one private.
To encrypt a message, the sender uses the receiver's public key, which
has been published. Upon receipt, the recipient of the message decrypts it
with his or her private key.
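
A compact sketch of the two approaches, using the third-party cryptography package purely as an illustration: the Fernet recipe stands in for a generic secret-key cipher and RSA-OAEP for a generic public-key cipher; the message is an invented example.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

message = b"meet at noon"

# Symmetric (secret-key): one shared key scrambles and unscrambles the message.
shared_key = Fernet.generate_key()
ciphertext = Fernet(shared_key).encrypt(message)
assert Fernet(shared_key).decrypt(ciphertext) == message

# Asymmetric (public-key): encrypt with the receiver's public key,
# decrypt with the receiver's private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext2 = public_key.encrypt(message, oaep)
assert private_key.decrypt(ciphertext2, oaep) == message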
4. Integrity
The integrity service protects data against active threats, such as those
that may alter it. Just as with data confidentiality, data in transit
between the sending and receiving parties is susceptible to
many threats from hackers, eavesdroppers, and cryptanalysts whose
goal is to intercept the data and alter it based on their motives.
This service, through encryption and hashing algorithms, ensures that
the integrity of the data in transit is intact.
A hash function takes an input message M and creates a code from it.
The code is commonly referred to as a hash or a message digest.
A one-way hash function is used to create a signature of the message,
just like a human fingerprint.
The hash function is therefore used to provide the message's
integrity and authenticity. The signature is attached to the
message before it is sent by the sender to the recipient.
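
A minimal sketch of this fingerprint idea with Python's standard hashlib (the messages are invented examples): the digest travels with the message, and any change to the message produces a different digest at the receiver.

import hashlib

message = b"pay vendor 500"
digest = hashlib.sha256(message).hexdigest()             # the message "fingerprint"

# Receiver recomputes the digest over what actually arrived.
received = b"pay vendor 5000"                            # altered in transit
print(hashlib.sha256(received).hexdigest() == digest)    # False -> tampering detected
print(hashlib.sha256(message).hexdigest() == digest)     # True  -> intact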
