
INTERNETWORKING BASICS

What Is an Internetwork?


An internetwork is a collection of individual networks, connected by intermediate networking devices, that
functions as a single large network. Internetworking refers to the industry, products, and procedures that meet
the challenge of creating and administering internetworks. Figure 1 illustrates some different kinds of
network technologies that can be interconnected by routers and other networking devices to create an
internetwork.
Figure 1: Different Network Technologies Can Be Connected to Create an Internetwork

History of Internetworking

The first networks were time-sharing networks that used mainframes and attached terminals. Both IBM’s
Systems Network Architecture (SNA) and Digital’s network architecture implemented such environments.
Local-area networks (LANs) evolved around the PC revolution. LANs enabled multiple users in a relatively
small geographical area to exchange files and messages, as well as access shared resources such as file servers
and printers.
Wide-area networks (WANs) interconnect LANs with geographically dispersed users to create connectivity.
Some of the technologies used for connecting LANs include T1, T3, ATM, ISDN, ADSL, Frame Relay, radio links,
and others. New methods of connecting dispersed LANs are appearing every day.
Today, high-speed LANs and switched internetworks are becoming widely used, largely because they operate
at very high speeds and support such high-bandwidth applications as multimedia and videoconferencing.
Internetworking evolved as a solution to three key problems: isolated LANs, duplication
of resources, and a lack of network management. Isolated LANs made electronic communication between
different offices or departments impossible. Duplication of resources meant that the same hardware and
software had to be supplied to each office or department, as did separate support staff. Finally, the lack of
network management meant that no centralized method of managing and troubleshooting networks existed.

Internetworking Challenges



Implementing a functional internetwork is no simple task. Many challenges must be faced, especially in the
areas of connectivity, reliability, network management, and flexibility. Each area is key in establishing an
efficient and effective internetwork.
The challenge when connecting various systems is to support communication among disparate technologies.
Different sites, for example, may use different types of media operating at varying speeds, or may even include
different types of systems that need to communicate.
Because companies rely heavily on data communication, internetworks must provide a certain level of
reliability. This is an unpredictable world, so many large internetworks include redundancy to allow for
communication even when problems occur.
Furthermore, network management must provide centralized support and troubleshooting capabilities in an
internetwork. Configuration, security, performance, and other issues must be adequately addressed for the
internetwork to function smoothly. Security within an internetwork is essential. Many people think of network
security from the perspective of protecting the private network from outside attacks. However, it is just as
important to protect the network from internal attacks, especially because most security breaches come from
inside. Networks must also be secured so that the internal network cannot be used as a tool to attack other
external sites.
Early in the year 2000, many major web sites were the victims of distributed denial of service (DDoS) attacks.
These attacks were possible because a great number of private networks connected to the Internet were not
properly secured. These private networks were used as tools for the attackers. Because nothing in this
world is stagnant, internetworks must be flexible enough to change with new demands.
Internetworking Models
When networks first came into being, computers could typically communicate only with computers from the
same manufacturer. For example, companies ran either a complete DECnet solution or an IBM solution—not
both together. In the late 1970s, the OSI (Open Systems Interconnection) model was created by the International
Organization for Standardization (ISO) to break this barrier. The OSI model was meant to help vendors create
interoperable network devices. Like world peace, it’ll probably never happen completely, but it’s still a great
goal. The OSI model is the primary architectural model for networks. It describes how data and network
information are communicated from applications on one computer, through the network media, to an
application on another computer. The OSI reference model breaks this approach into layers.
The Layered Approach

A reference model is a conceptual blueprint of how communications should take place. It addresses all the
processes required for effective communication and divides these processes into logical groupings called layers.
When a communication system is designed in this manner, it’s known as layered architecture.
Think of it like this: You and some friends want to start a company. One of the first things you’d do is sit down
and think through what tasks must be done, who will do them, what order they will be done in, and how they relate
to each other. Ultimately, you might group these tasks into departments. Let’s say you decide to have an order-
taking department, an inventory department, and a shipping department. Each of your departments has its own
unique tasks, keeping its staff members busy and requiring them to focus on only their own duties.
Similarly, software developers can use a reference model to understand computer communication processes
and to see what types of functions need to be accomplished on any one layer. If they are developing a protocol
for a certain layer, all they need to concern themselves with is the specific layer’s functions, not those of any
other layer. Another layer and protocol will handle the other functions. The technical term for this idea is
binding. The communication processes that are related to each other are bound, or grouped together, at a
particular layer.



Advantages of Reference Models

The OSI model is hierarchical, and the same benefits and advantages can apply to any layered model. The
primary purpose of all models, and especially the OSI model, is to allow different vendors to interoperate. The
benefits of the OSI model include, but are not limited to, the following:
• Dividing the complex network operation into more manageable layers
• Changing one layer without having to change all layers. This allows application developers to specialize
in design and development.
• Defining a standard interface for "plug-and-play" multi-vendor integration

Open System Interconnection Reference Model


The Open System Interconnection (OSI) reference model describes how information from a software application
in one computer moves through a network medium to a software application in another computer. The OSI
reference model is a conceptual model composed of seven layers, each specifying particular network functions.
The model was developed by the International Organization for Standardization (ISO) in 1984, and it is now
considered the primary architectural model for intercomputer communications. The OSI model divides the tasks
involved with moving information between networked computers into seven smaller, more manageable task
groups. A task or group of tasks is then assigned to each of the seven OSI layers. Each layer is reasonably self-
contained so that the tasks assigned to each layer can be implemented independently. This enables the
solutions offered by one layer to be updated without adversely affecting the other layers. The following list
details the seven layers of the Open System Interconnection (OSI) reference model:
• Layer 7—Application
• Layer 6—Presentation
• Layer 5—Session
• Layer 4—Transport
• Layer 3—Network
• Layer 2—Data link
• Layer 1—Physical

The OSI Reference Model Contains Seven Independent Layers

Characteristics of the OSI Layers


The seven layers of the OSI reference model can be divided into two categories: upper layers and lower layers.
The upper layers of the OSI model deal with application issues and generally are implemented only in software.
The highest layer, the application layer, is closest to the end user. Both users and application layer processes
interact with software applications that contain a communications component. The term upper layer is
sometimes used to refer to any layer above another layer in the OSI model.



The lower layers of the OSI model handle data transport issues. The physical layer and the data link layer are
implemented in hardware and software. The lowest layer, the physical layer, is closest to the physical network
medium (the network cabling, for example) and is responsible for actually placing information on the medium.
Figure 2 illustrates the division between the upper and lower OSI layers.
Figure 2: Two Sets of Layers Make Up the OSI Layers (Application layers: Application, Presentation, Session;
Data Transport layers: Transport, Network, Data Link, Physical)

The OSI model provides a conceptual framework for communication between computers, but the model itself is
not a method of communication. Actual communication is made possible by using communication protocols. In
the context of data networking, a protocol is a formal set of rules and conventions that governs how computers
exchange information over a network medium. A protocol implements the functions of one or more of the OSI
layers.
A wide variety of communication protocols exist. Some of these protocols include LAN protocols, WAN
protocols, network protocols, and routing protocols. LAN protocols operate at the physical and data link layers
of the OSI model and define communication over the various LAN media. WAN protocols operate at the lowest
three layers of the OSI model and define communication over the various wide-area media. Routing protocols
are network layer protocols that are responsible for exchanging information between routers so that the
routers can select the proper path for network traffic. Finally, network protocols are the various upper-layer
protocols that exist in a given protocol suite. Many protocols rely on others for operation. For example, many
routing protocols use network protocols to exchange information between routers. This concept of building
upon the layers already in existence is the foundation of the OSI model.

OSI Model and Communication between Systems

Information being transferred from a software application in one computer system to a software application in
another must pass through the OSI layers. For example, if a software application in System A has information to
transmit to a software application in System B, the application program in System A will pass its information to
the application layer (Layer 7) of System A. The application layer then passes the information to the
presentation layer (Layer 6), which relays the data to the session layer (Layer 5), and so on down to the
physical layer (Layer 1). At the physical layer, the information is placed on the physical network medium and is
sent across the medium to System B. The physical layer of System B removes the information from the physical
medium, and then its physical layer passes the information up to the data link layer (Layer 2), which passes it
to the network layer (Layer 3), and so on, until it reaches the application layer (Layer 7) of System B. Finally,
the application layer of System B passes the information to the recipient application program to complete the
communication process.

Interaction between OSI Model Layers


A given layer in the OSI model generally communicates with three other OSI layers: the layer directly above it,
the layer directly below it, and its peer layer in other networked computer systems. The data link layer in
System A, for example, communicates with the network layer of System A, the physical layer of System A, and
the data link layer in System B. Figure 3 illustrates this example.

Figure 3: OSI Model Layers Communicate with Other Layers

OSI Model Layers and Information Exchange
The seven OSI layers use various forms of control information to communicate with their peer layers in other
computer systems. This control information consists of specific requests and instructions that are exchanged
between peer OSI layers.
Control information typically takes one of two forms: headers and trailers. Headers are prepended to data that
has been passed down from upper layers. Trailers are appended to data that has been passed down from upper
layers. An OSI layer is not required to attach a header or a trailer to data from upper layers.
Headers, trailers, and data are relative concepts, depending on the layer that analyzes the information unit. At
the network layer, for example, an information unit consists of a Layer 3 header and data. At the data link layer,
however, all the information passed down by the network layer (the Layer 3 header and the data) is treated as
data.
In other words, the data portion of an information unit at a given OSI layer potentially
can contain headers, trailers, and data from all the higher layers. This is known as encapsulation. Figure 4
shows how the header and data from one layer are encapsulated into the header of the next lowest layer.

Figure 4: Headers and Data Can Be Encapsulated During Information Exchange



Information Exchange Process
The information exchange process occurs between peer OSI layers. Each layer in the source system adds
control information to data, and each layer in the destination system analyzes and removes the control
information from that data.
If System A has data from a software application to send to System B, the data is passed to the application layer.
The application layer in System A then communicates any control information required by the application layer
in System B by prepending a header to the data. The resulting information unit (a header and the data) is
passed to the presentation layer, which prepends its own header containing control information intended for
the presentation layer in System B. The information unit grows in size as each layer prepends its own header
(and, in some cases, a trailer) that contains control information to be used by its peer layer in System B. At the
physical layer, the entire information unit is placed onto the network medium.
The physical layer in System B receives the information unit and passes it to the data link layer. The data link
layer in System B then reads the control information contained in the header prepended by the data link layer
in System A. The header is then removed, and the remainder of the information unit is passed to the network
layer. Each layer performs the same actions: The layer reads the header from its peer layer, strips it off, and
passes the remaining information unit to the next highest layer. After the application layer performs these
actions, the data is passed to the recipient software application in System B, in exactly the form in which it was
transmitted by the application in System A.
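As a rough illustration of this exchange, the following Python sketch mimics the process: each layer on the
sending side prepends a header, the data link layer also appends a trailer, and the receiving side strips them
off in reverse order. The layer names and bracketed header strings are invented placeholders, not real
protocol formats.

    LAYERS = ["application", "presentation", "session",
              "transport", "network", "data-link"]

    def encapsulate(data: str) -> str:
        unit = data
        for layer in LAYERS:                        # top of the stack down
            unit = f"[{layer}-hdr]" + unit          # control info for the peer layer
        return unit + "[data-link-trailer]"         # e.g. a frame check sequence

    def decapsulate(unit: str) -> str:
        unit = unit.removesuffix("[data-link-trailer]")
        for layer in reversed(LAYERS):              # bottom of the stack up
            unit = unit.removeprefix(f"[{layer}-hdr]")
        return unit

    frame = encapsulate("hello from System A")
    assert decapsulate(frame) == "hello from System A"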

OSI Model Physical Layer


The physical layer defines the electrical, mechanical, procedural, and functional specifications for activating,
maintaining, and deactivating the physical link between communicating network systems. Physical layer
specifications define characteristics such as voltage levels, timing of voltage changes, physical data rates,
maximum transmission distances, and physical connectors. Physical layer implementations can be categorized
as either LAN or WAN specifications. Figure 5 illustrates some common LAN and WAN physical layer
implementations.

Figure 5: Physical Layer Implementations Can Be LAN or WAN Specifications

OSI Model Data Link Layer


The data link layer provides reliable transit of data across a physical network link. Different data link layer
specifications define different network and protocol characteristics, including physical addressing, network
topology, error notification, sequencing of frames, and flow control. Physical addressing (as opposed to
network addressing) defines how devices are addressed at the data link layer. Network topology consists of the
data link layer specifications that often define how devices are to be physically connected, such as in a bus or a
ring topology. Error notification alerts upper-layer protocols that a transmission error has occurred, and the
sequencing of data frames reorders frames that are transmitted out of sequence. Finally, flow control
moderates the transmission of data so that the receiving device is not overwhelmed with more traffic than it
can handle at one time.
The Institute of Electrical and Electronics Engineers (IEEE) has subdivided the data link layer into two
sublayers: Logical Link Control (LLC) and Media Access Control (MAC). Figure 6 illustrates the IEEE sublayers of
the data link layer.

Figure 6: The Data Link Layer Contains Two Sublayers
The Logical Link Control (LLC) sublayer of the data link layer manages communications between devices over a
single link of a network. LLC is defined in the IEEE 802.2 specification and supports both connectionless and
connection-oriented services used by higher-layer protocols. IEEE 802.2 defines a number of fields in data link
layer frames that enable multiple higher-layer protocols to share a single physical data link. The Media Access
Control (MAC) sublayer of the data link layer manages protocol access to the physical network medium. The
IEEE MAC specification defines MAC addresses, which enable multiple devices to uniquely identify one another
at the data link layer.

OSI Model Network Layer


The network layer defines the network address, which differs from the MAC address. Some network layer
implementations, such as the Internet Protocol (IP), define network addresses in such a way that route selection
can be determined systematically by comparing the source network address with the destination network
address and applying the subnet mask. Because this layer defines the logical network layout, routers can use
this layer to determine how to forward packets. Because of this, much of the design and configuration work for
internetworks happens at Layer 3, the network layer.

OSI Model Transport Layer


The transport layer accepts data from the session layer and segments the data for transport across the
network. Generally, the transport layer is responsible for making sure that the data is delivered error-free and
in the proper sequence. Flow control generally occurs at the transport layer.
Flow control manages data transmission between devices so that the transmitting device does not send more
data than the receiving device can process. Multiplexing enables data from several applications to be
transmitted onto a single physical link. Virtual circuits are established, maintained, and terminated by the
transport layer. Error checking involves creating various mechanisms for detecting transmission errors, while
error recovery involves taking action, such as requesting that data be retransmitted, to resolve any errors that occur.
The transport protocols used on the Internet are TCP and UDP.
Flow Control Basics
Flow control is a function that prevents network congestion by ensuring that transmitting devices do not
overwhelm receiving devices with data. A high-speed computer, for example, may generate traffic faster than
the network can transfer it, or faster than the destination device can receive and process it. The three
commonly used methods for handling network congestion are buffering, transmitting source-quench messages,
and windowing.
Buffering is used by network devices to temporarily store bursts of excess data in memory until they can be
processed. Occasional data bursts are easily handled by buffering. Excess data bursts can exhaust memory,
however, forcing the device to discard any additional datagrams that arrive.
Source-quench messages are used by receiving devices to help prevent their buffers from overflowing. The
receiving device sends source-quench messages to request that the source reduce its current rate of data
transmission. First, the receiving device begins discarding received data due to overflowing buffers. Second, the
receiving device begins sending source-quench messages to the transmitting device at the rate of one message
for each packet dropped. The source device receives the source-quench messages and lowers the data rate until
it stops receiving the messages. Finally, the source device then gradually increases the data rate as long as no
further source-quench requests are received.
Windowing is a flow-control scheme in which the source device requires an acknowledgment from the
destination after a certain number of packets have been transmitted. With a window size of 3, the source
requires an acknowledgment after sending three packets, as follows. First, the source device sends three
packets to the destination device. Then, after receiving the three packets, the destination device sends an
acknowledgment to the source. The source receives the acknowledgment and sends three more packets. If the
destination does not receive one or more of the packets for some reason, such as overflowing buffers, it does
not receive enough packets to send an acknowledgment. The source then retransmits the packets at a reduced
transmission rate.
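The windowing behavior just described can be sketched in a few lines of Python. This is only an illustration
under simplified assumptions: the packet names, window size, and simulated loss rate are invented, and the
retransmission here happens at the same rate rather than a reduced one.

    import random

    WINDOW = 3          # acknowledgment expected after every three packets

    def deliver(window, loss_rate):
        """Simulate the medium: the window is acked only if every packet arrives."""
        return all(random.random() >= loss_rate for _ in window)

    def transmit(packets, loss_rate=0.1):
        i = 0
        while i < len(packets):
            window = packets[i:i + WINDOW]
            if deliver(window, loss_rate):
                i += WINDOW                     # ack received: slide the window forward
            else:
                print("no ack for", window, "- retransmitting")

    transmit([f"pkt-{n}" for n in range(9)])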
Error-Checking Basics
Error-checking schemes determine whether transmitted data has become corrupt or otherwise damaged while
traveling from the source to the destination. Error checking is implemented at several of the OSI layers.
One common error-checking scheme is the cyclic redundancy check (CRC), which detects and discards
corrupted data. Error-correction functions (such as data retransmission) are left to higher-layer protocols. A
CRC value is generated by a calculation that is performed at the source device. The destination device compares
this value to its own calculation to determine whether errors occurred during transmission. First, the source
device performs a predetermined set of calculations over the contents of the packet to be sent. Then, the source
places the calculated value in the packet and sends the packet to the destination. The destination performs the
same predetermined set of calculations over the contents of the packet and then compares its computed value
with that contained in the packet. If the values are equal, the packet is considered valid. If the values are
unequal, the packet contains errors and is discarded.
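As a small, hedged illustration of that sequence, the sketch below uses Python's built-in CRC-32 (zlib.crc32)
as the predetermined calculation. Real link layers typically compute a CRC over the frame in hardware, but the
compare-and-discard logic is the same.

    import zlib

    def send(payload: bytes) -> bytes:
        crc = zlib.crc32(payload)
        return payload + crc.to_bytes(4, "big")        # place the calculated value in the packet

    def receive(packet: bytes):
        payload, received_crc = packet[:-4], int.from_bytes(packet[-4:], "big")
        if zlib.crc32(payload) == received_crc:
            return payload                             # values equal: the packet is valid
        return None                                    # values unequal: discard the packet

    packet = send(b"some upper-layer data")
    assert receive(packet) == b"some upper-layer data"
    corrupted = bytes([packet[0] ^ 0xFF]) + packet[1:] # bits flipped in transit
    assert receive(corrupted) is None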
OSI Model Session Layer
The Session layer is responsible for setting up, managing, and then tearing down sessions between Presentation
layer entities. The Session layer also provides dialog control between devices, or nodes. It coordinates
communication between systems and serves to organize their communication by offering three different
modes:
• Simplex
• Half-duplex
• Full-duplex
The Session layer basically keeps different applications' data separate from other applications' data.
OSI Model Presentation Layer
The Presentation layer gets its name from its purpose: It presents data to the Application layer. It’s essentially a
translator and provides coding and conversion functions. A successful data transfer technique is to adapt the
data into a standard format before transmission. Computers are configured to receive this generically
formatted data and then convert the data back into its native format for actual reading (for example, EBCDIC to
ASCII). By providing translation services, the Presentation layer ensures that data transferred from the
Application layer of one system can be read by the Application layer of another host. The OSI has protocol
standards that define how standard data should be formatted. Tasks like data compression, decompression,
encryption, and decryption are associated with this layer. Some Presentation layer standards are involved in
multimedia operations. The following serve to direct graphic and visual image presentation:
PICT: This is a picture format used by Macintosh or PowerPC programs for transferring QuickDraw graphics.
TIFF: The Tagged Image File Format is a standard graphics format for high-resolution, bitmapped images.
JPEG: The Joint Photographic Experts Group brings us these photo standards. Other standards guide movies
and sound:
MIDI: The Musical Instrument Digital Interface is used for digitized music.
MPEG: The Moving Picture Experts Group's standard for the compression and coding of motion video for CDs is
increasingly popular. It provides digital storage and bit rates up to 1.5 Mbps.
OSI Model Application Layer
The application layer is the OSI layer closest to the end user, which means that both the OSI application layer
and the user interact directly with the software application.
This layer interacts with software applications that implement a communicating component. Such application
programs fall outside the scope of the OSI model. Application layer functions typically include identifying
communication partners, determining resource availability, and synchronizing communication.
When identifying communication partners, the application layer determines the identity and availability of
communication partners for an application with data to transmit. When determining resource availability, the
application layer must decide whether sufficient network resources for the requested communication exist. In
synchronizing communication, all communication between applications requires cooperation that is managed
by the application layer.
Some examples of application layer implementations include Telnet, File Transfer Protocol (FTP), and Simple
Mail Transfer Protocol (SMTP).

Information Formats
The data and control information that is transmitted through internetworks takes a variety of forms. The terms
used to refer to these information formats are not used consistently in the internetworking industry but
sometimes are used interchangeably. Common information formats include frames, packets, datagrams,
segments, messages, cells, and data units.
A frame is an information unit whose source and destination are data link layer entities. A frame is composed of
the data link layer header (and possibly a trailer) and upper-layer data. The header and trailer contain control
information intended for the data link layer entity in the destination system. Data from upper-layer entities is
encapsulated in the data link layer header and trailer. Figure 7 illustrates the basic components of a data link
layer frame.

Figure 7: Data from Upper-Layer Entities Makes Up the Data Link Layer Frame
A packet is an information unit whose source and destination are network layer entities. A packet is composed
of the network layer header (and possibly a trailer) and upper-layer data. The header and trailer contain
control information intended for the network layer entity in the destination system. Data from upper-layer
entities is encapsulated in the network layer header and trailer. Figure 8 illustrates the basic components of
a network layer packet.



Figure 8: Three Basic Components Make Up a Network Layer Packet
The term datagram usually refers to an information unit whose source and destination are network layer
entities that use connectionless network service.
The term segment usually refers to an information unit whose source and destination are transport layer
entities.
A message is an information unit whose source and destination entities exist above the network layer (often at
the application layer).
A cell is an information unit of a fixed size whose source and destination are data link layer entities. Cells are
used in switched environments, such as Asynchronous Transfer Mode (ATM) and Switched Multimegabit Data
Service (SMDS) networks. A cell is composed of the header and payload. The header contains control
information intended for the destination data link layer entity and is typically 5 bytes long. The payload
contains upper-layer data that is encapsulated in the cell header and is typically 48 bytes long.
The lengths of the header and the payload fields are always the same for each cell.
Figure 9 depicts the components of a typical cell.

Figure 9: Two Components Make Up a Typical Cell
Data unit is a generic term that refers to a variety of information units. Some common data units are service
data units (SDUs), protocol data units (PDUs), and bridge protocol data units (BPDUs). SDUs are information
units from upper-layer protocols that define a service request to a lower-layer protocol. PDU is OSI terminology
for a packet. BPDUs are used by the spanning-tree algorithm as hello messages.

Connection-Oriented and Connectionless Network Services


In general, transport protocols can be characterized as being either connection-oriented or connectionless.
Connection-oriented services must first establish a connection with the desired service before passing any data.
A connectionless service can send the data without any need to establish a connection first. In general,
connection-oriented services provide some level of delivery guarantee, whereas connectionless services do not.
Connection-oriented service involves three phases: connection establishment, data transfer, and
connection termination.
During connection establishment, the end nodes may reserve resources for the connection. The end nodes also
may negotiate and establish certain criteria for the transfer, such as a window size used in TCP connections.
This resource reservation is one of the things exploited in some denial of service (DoS) attacks. An attacking
system will send many requests for establishing a connection but then will never complete the connection. The
attacked computer is then left with resources allocated for many never-completed connections. Then, when an
end node tries to complete an actual connection, there are not enough resources for the valid connection.
The data transfer phase occurs when the actual data is transmitted over the connection. During data transfer,
most connection-oriented services will monitor for lost packets and handle resending them. The protocol is
generally also responsible for putting the packets in the right sequence before passing the data up the protocol
stack.
When the transfer of data is complete, the end nodes terminate the connection and release resources reserved
for the connection.
Connection-oriented network services have more overhead than connectionless ones. Connection-oriented
services must negotiate a connection, transfer data, and tear down the connection, whereas a connectionless
transfer can simply send the data without the added overhead of creating and tearing down a connection. Each
has its place in internetworks.

MAC Addresses
Media Access Control (MAC) addresses consist of a subset of data link layer addresses. MAC addresses identify
network entities in LANs that implement the IEEE MAC addresses of the data link layer. As with most data-link
addresses, MAC addresses are unique for each LAN interface. Figure 10 illustrates the relationship between
MAC addresses, data-link addresses, and the IEEE sublayers of the data link layer.

Figure 10: MAC Addresses, Data-Link Addresses, and the IEEE Sublayers of the Data Link Layer Are All Related
MAC addresses are 48 bits in length and are expressed as 12 hexadecimal digits. The first 6 hexadecimal digits,
which are administered by the IEEE, identify the manufacturer or vendor and thus comprise the
Organizationally Unique Identifier (OUI). The last 6 hexadecimal digits comprise the interface serial number, or
another value administered by the specific vendor. MAC addresses sometimes are called burned-in addresses
(BIAs) because they are burned into read-only memory (ROM) and are copied into random-access memory
(RAM) when the interface card initializes. Figure 11 illustrates the MAC address format.



Figure 11: The MAC Address Contains a Unique Format of Hexadecimal Digits
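A quick sketch of that format: splitting a MAC address into its IEEE-administered OUI (the first 6 hexadecimal
digits) and the vendor-assigned portion (the last 6). The sample address below is made up for illustration.

    def split_mac(mac: str):
        digits = mac.replace(":", "").replace("-", "").upper()
        if len(digits) != 12:
            raise ValueError("a MAC address is 48 bits, i.e. 12 hexadecimal digits")
        return digits[:6], digits[6:]

    oui, vendor_part = split_mac("00:60:2F:3A:07:BC")
    print(oui)          # 00602F - identifies the interface's manufacturer
    print(vendor_part)  # 3A07BC - serial number or other vendor-administered value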

Mapping Addresses
Because internetworks generally use network addresses to route traffic around the network, there is a need to
map network addresses to MAC addresses. When the network layer has determined the destination station's
network address, it must forward the information over a physical network using a MAC address. Different
protocol suites use different methods to perform this mapping, but the most popular is Address Resolution
Protocol (ARP). Different protocol suites use different methods for determining the MAC address of a device.
The following three methods are used most often:
• Address Resolution Protocol (ARP) maps network addresses to MAC addresses.
• The Hello protocol enables network devices to learn the MAC addresses of other network devices.
• MAC addresses either are embedded in the network layer address or are generated by an algorithm.
Address Resolution Protocol (ARP) is the method used in the TCP/IP suite. When a network device needs to
send data to another device on the same network, it knows the source and destination network addresses for
the data transfer. It must somehow map the destination address to a MAC address before forwarding the data.
First, the sending station will check its ARP table to see if it has already discovered this destination station's
MAC address. If it has not, it will send a broadcast on the network with the destination station's IP address
contained in the broadcast. Every station on the network receives the broadcast and compares the embedded
IP address to its own. Only the station with the matching IP address replies to the sending station with a packet
containing the MAC address for the station. The first station then adds this information to its ARP table for
future reference and proceeds to transfer the data.
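A hedged sketch of those steps in Python follows. The hosts, addresses, and tables are invented, and a real
implementation sends an actual Ethernet broadcast rather than looking up a dictionary, but the cache-first
logic is the same.

    segment = {                       # every station on the local network: IP -> MAC
        "10.0.0.5": "00:11:22:33:44:55",
        "10.0.0.9": "66:77:88:99:AA:BB",
    }
    arp_table = {}                    # the sending station's ARP cache

    def resolve(target_ip):
        if target_ip in arp_table:            # step 1: check the ARP table first
            return arp_table[target_ip]
        # step 2: "broadcast" - every station compares the embedded IP with its own,
        # and only the matching station replies with its MAC address
        mac = segment[target_ip]
        arp_table[target_ip] = mac            # step 3: remember it for future transfers
        return mac

    print(resolve("10.0.0.5"))   # resolved by the broadcast
    print(resolve("10.0.0.5"))   # answered from the ARP table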
When the destination device lies on a remote network, one beyond a router, the process is the same except that
the sending station sends the ARP request for the MAC address of its default gateway. It then forwards the
information to that device. The default gateway will then forward the information over whatever networks
necessary to deliver the packet to the network on which the destination device resides. The router on the
destination device's network then uses ARP to obtain the MAC address of the actual destination device and delivers the
packet. The Hello protocol is a network layer protocol that enables network devices to identify one another and
indicate that they are still functional. When a new end system powers up, for example, it broadcasts hello
messages onto the network. Devices on the network then return hello replies, and hello messages are also sent
at specific intervals to indicate that they are still functional. Network devices can learn the MAC addresses of
other devices by examining Hello protocol packets.


Network Layer Addresses
A network layer address identifies an entity at the network layer of the OSI model. Network addresses usually
exist within a hierarchical address space and sometimes are called virtual or logical addresses.
The relationship between a network address and a device is logical and unfixed; it typically is based either on
physical network characteristics (the device is on a particular network segment) or on groupings that have no
physical basis (the device is part of an AppleTalk zone). End systems require one network layer address for
each network layer protocol that they support. (This assumes that the device has only one physical network
connection.) Routers and other internetworking devices require one network layer address per physical
network connection for each network layer protocol supported. For example, a router with three interfaces
each running AppleTalk, TCP/IP, and OSI must have three network layer addresses for each interface. The
router therefore has nine network layer addresses. Figure 12 illustrates how each network interface must be
assigned a network address for each protocol supported.
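The arithmetic behind that example, spelled out (interface count and protocol names as given above):

    interfaces = 3
    protocols = ["AppleTalk", "TCP/IP", "OSI"]
    addresses_needed = interfaces * len(protocols)   # one address per interface per protocol
    print(addresses_needed)                          # 9 network layer addresses on the router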

Figure 12: Each Network Interface Must Be Assigned a Network Address for Each Protocol Supported



Address Assignments
Addresses are assigned to devices as one of two types: static and dynamic. Static addresses are assigned by a
network administrator according to a preconceived internetwork addressing plan. A static address does not
change until the network administrator manually changes it. Dynamic addresses are obtained by devices when
they attach to a network, by means of some protocol-specific process. A device using a dynamic address often
has a different address each time that it connects to the network. Some networks use a server to assign
addresses. Server-assigned addresses are recycled for reuse as devices disconnect. A device is therefore likely
to have a different address each time that it connects to the network.
Addresses versus Names
Internetwork devices usually have both a name and an address associated with them. Internetwork names
typically are location-independent and remain associated with a device wherever that device moves (for
example, from one building to another). Internetwork addresses usually are location-dependent and change
when a device is moved (although MAC addresses are an exception to this rule). As with network addresses
being mapped to MAC addresses, names are usually mapped to network addresses through some protocol. The
Internet uses Domain Name System (DNS) to map the name of a device to its IP address. For example, it's easier
for you to remember www.cisco.com instead of some IP address. Therefore, you type www.cisco.com into your
browser when you want to access Cisco's web site. Your computer performs a DNS lookup of the IP address for
Cisco's web server and then communicates with it using the network address.
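That lookup can be seen directly from Python's standard library: the resolver maps the location-independent
name to whatever IP address is currently published for it, so the result will differ depending on when and
where you run it.

    import socket

    name = "www.cisco.com"
    address = socket.gethostbyname(name)     # the DNS lookup
    print(f"{name} currently resolves to {address}")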

TCP/IP Model
The TCP/IP model is a condensed version of the OSI model. It is composed of four layers instead of seven:
• The Process/Application layer
• The Host-to-Host layer
• The Internet layer
• The Network Access layer
The figure below shows a comparison of the TCP/IP or DoD model and the OSI reference model. As you can
see, the two are similar in concept, but each has a different number of layers with different names.

A vast array of protocols combines at the DoD model’s Process/Application layer to integrate the various
activities and duties spanning the focus of the OSI’s corresponding top three layers (Application, Presentation,
and Session). The Process/Application layer defines protocols for node-to-node application communication and
also controls user-interface specifications. The Host-to-Host layer parallels the functions of the OSI's Transport
layer, defining protocols for setting up the level of transmission service for applications. It tackles issues like
creating reliable end-to-end communication and ensuring the error-free delivery of data. It handles packet
sequencing and maintains data integrity.
The Internet layer corresponds to the OSI’s Network layer, designating the protocols relating to the logical
transmission of packets over the entire network. It takes care of the addressing of hosts by giving them an IP
(Internet Protocol) address, and it handles the routing of packets among multiple networks. It also controls the
communication flow between two hosts. At the bottom of the model, the Network Access layer monitors the
data exchange between the host and the network. The equivalent of the Data Link and Physical layers of the OSI
model, the Network Access layer oversees hardware addressing and defines protocols for the physical
transmission of data. While the DoD and OSI models are alike in design and concept and have similar functions
in similar places, how those functions occur is different. The figure below shows the TCP/IP protocol suite
and how its protocols relate to the DoD model layers.

The Process/Application Layer Protocols


In this section, we will describe the different applications and services typically used in IP networks. The
different protocols and applications covered in this section include the following:
• TELNET
• FTP
• TFTP
• NFS
• SMTP
• LPD
• X Window
• SNMP
• DNS
• DHCP
Telnet
Telnet is the chameleon of protocols—its specialty is terminal emulation. It allows a user on a remote client
machine, called the Telnet client, to access the resources of another machine, the Telnet server. Telnet achieves
this by pulling a fast one on the Telnet server and making the client machine appear as though it were a
terminal directly attached to the local network. This projection is actually a software image, a virtual terminal
that can interact with the chosen remote host. These emulated terminals are of the text-mode type and can
execute refined procedures like displaying menus that give users the opportunity to choose options from them
and access the applications on the duped server. Users begin a Telnet session by running the Telnet client
software and then logging on to the Telnet server.
File Transfer Protocol (FTP)
The File Transfer Protocol (FTP) is the protocol that actually lets us transfer files; it can facilitate this between
any two machines using it. But FTP isn’t just a protocol; it’s also a program. Operating as a protocol, FTP is used
by applications. As a program, it’s employed by users to perform file tasks by hand. FTP also allows for access
to both directories and files and can accomplish certain types of directory operations, like relocating into
different ones. FTP teams up with Telnet to transparently log you in to the FTP server and then provides for the
transfer of files. Accessing a host through FTP is only the first step, though. Users must then be subjected to an
authentication login that’s probably secured with passwords and usernames implemented by system
administrators to restrict access. But you can get around this somewhat by adopting the username
“anonymous”—though what you’ll gain access to will be limited. Even when employed by users manually as a
program, FTP’s functions are limited to listing and manipulating directories, typing file contents, and copying
files between hosts. It can’t execute remote files as programs.

Trivial File Transfer Protocol (TFTP)


The Trivial File Transfer Protocol (TFTP) is the stripped-down, stock version of FTP, but it’s the protocol of
choice if you know exactly what you want and where to find it. It doesn’t give you the abundance of functions
that FTP does, though. TFTP has no directory-browsing abilities; it can do nothing but send and receive files.
This compact little protocol also skimps in the data department, sending much smaller blocks of data than FTP,
and there’s no authentication as with FTP, so it’s insecure. Few sites support it because of the inherent security
risks.
Network File System (NFS)
Network File System (NFS) is a jewel of a protocol specializing in file sharing. It allows two different types of
file systems to interoperate. It works like this: Suppose the NFS server software is running on an NT server, and
the NFS client software is running on a Unix host. NFS allows for a portion of the RAM on the NT server to
transparently store Unix files, which can, in turn, be used by Unix users. Even though the NT file system and
Unix file system are unlike—they have different case sensitivity, filename lengths, security, and so on—both
Unix users and NT users can access that same file with their normal file systems, in their normal way.
Simple Mail Transfer Protocol (SMTP)
Simple Mail Transfer Protocol (SMTP), answering our ubiquitous call to e-mail, uses a spooled, or queued,
method of mail delivery. Once a message has been sent to a destination, the message is spooled to a device—
usually a disk. The server software at the destination posts a vigil, regularly checking this queue for messages.
When it detects them, it proceeds to deliver them to their destination. SMTP is used to send mail; POP3 is used
to receive mail.
Line Printer Daemon (LPD)
The Line Printer Daemon (LPD) protocol is designed for printer sharing. The LPD, along with the LPR (Line
Printer) program, allows print jobs to be spooled and sent to the network’s printers using TCP/IP.
X Window
Designed for client-server operations, X Window defines a protocol for the writing of graphical user interface–
based client/server applications. The idea is to allow a program, called a client, to run on one computer and
have it display things by way of a program, called a window server, running on another computer.

Simple Network Management Protocol (SNMP)



Simple Network Management Protocol (SNMP) collects and manipulates valuable network information. It
gathers data by polling the devices on the network from a management station at fixed or random intervals,
requiring them to disclose certain information. When all is well, SNMP receives something called a baseline— a
report delimiting the operational traits of a healthy network. This protocol can also stand as a watchdog over
the network, quickly notifying managers of any sudden turn of events. These network watchdogs are called
agents, and when aberrations occur, agents send an alert called a trap to the management station.
Domain Name Service (DNS)
Domain Name Service (DNS) resolves host names, specifically Internet names, like www.routersim.com. You
don’t have to use DNS; you can just type in the IP address of any device you want to communicate with. An IP
address identifies hosts on a network and the Internet as well. However, DNS was designed to make our lives
easier. Also, what would happen if you wanted to move your Web page to a different service provider? The IP
address would change and no one would know what the new one was. DNS allows you to use a domain name to
specify an IP address. You can change the IP address as often as you want and no one will know the difference.
The Host-to-Host Layer Protocols
The Host-to-Host layer’s main purpose is to shield the upper-layer applications from the complexities of the
network. This layer says to the upper layer, “Just give me your data stream, with any instructions, and I’ll begin
the process of getting your information ready to send.” The following sections describe the two protocols at this
layer:
• Transmission Control Protocol (TCP)
• User Datagram Protocol (UDP)
Transmission Control Protocol (TCP)
The Transmission Control Protocol (TCP) takes large blocks of information from an application and breaks
them into segments. It numbers and sequences each segment so that the destination’s TCP protocol can put the
segments back into the order the application intended. After these segments are sent, TCP (on the transmitting
host) waits for an acknowledgment of the receiving end’s TCP virtual circuit session, retransmitting those that
aren’t acknowledged. Before a transmitting host starts to send segments down the model, the sender’s TCP
protocol contacts the destination’s TCP protocol to establish a connection. What is created is known as a virtual
circuit. This type of communication is called connection-oriented. During this initial handshake, the two TCP
layers also agree on the amount of information that’s going to be sent before the recipient’s TCP sends back an
acknowledgment. With everything agreed upon in advance, the path is paved for reliable communication to
take place. TCP is a full-duplex, connection-oriented, reliable, accurate protocol, and establishing all these terms
and conditions, in addition to error checking, is no small task. TCP is very complicated and, not surprisingly,
costly in terms of network overhead. Since today’s networks are much more reliable than those of yore, this
added reliability is often unnecessary.
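A minimal sketch of that connection-oriented behavior using Python's socket library: the three-way handshake,
sequencing, acknowledgments, and retransmissions all happen inside the operating system's TCP
implementation. The host and port here are placeholders for any reachable web server.

    import socket

    with socket.create_connection(("example.com", 80)) as sock:     # handshake happens here
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        reply = sock.recv(4096)         # delivered in order; lost segments are retransmitted
    print(reply.splitlines()[0])        # e.g. b'HTTP/1.0 200 OK'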
User Datagram Protocol (UDP)
Application developers can use the User Datagram Protocol (UDP) in place of TCP. UDP is the scaled-down
economy model and is considered a thin protocol. Like a thin person on a park bench, a thin protocol doesn’t
take up a lot of room—or in this case, much bandwidth on a network. UDP also doesn’t offer all the bells and
whistles of TCP, but it does do a fabulous job of transporting information that doesn’t require reliable delivery
— and it does so using far fewer network resources. There are some situations where it would definitely be
wise for application developers to opt for UDP rather than TCP. Remember the watchdog SNMP up there at the
Process/Application layer? SNMP monitors the network, sending intermittent messages and a fairly steady flow
of status updates and alerts, especially when running on a large network. The cost in overhead to establish,
maintain, and close a TCP connection for each one of those little messages would reduce what would be an
otherwise healthy, efficient network to a dammed-up bog in no time. Another circumstance calling for UDP
over TCP is when the matter of reliability is already accomplished at the Process/Application layer. Network
File System (NFS) handles its own reliability issues, making the use of TCP both impractical and redundant.
However, the application developer decides whether to use UDP or TCP, not the user who wants to transfer
data faster. UDP receives upper-layer blocks of information, instead of data streams as TCP does, and breaks
them into segments. Like TCP, each UDP segment is given a number for reassembly into the intended block at
the destination. However, UDP does not sequence the segments and does not care in which order the segments
arrive at the destination. At least it numbers them, though. But after that, UDP sends the segments off and
forgets about them. It doesn’t follow through, check up on them, or even allow for an acknowledgment of safe
arrival—complete abandonment. Because of this, it’s referred to as an unreliable protocol. This does not mean
that UDP is ineffective, only that it doesn’t handle issues of reliability. Further, UDP doesn’t create a virtual
circuit, nor does it contact the destination before delivering information to it. It is, therefore, also considered a
connectionless protocol. Since UDP assumes that the application will use its own reliability method, it doesn’t
use any. This gives an application developer a choice when running the Internet Protocol stack: TCP for
reliability or UDP for faster transfers.
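For contrast, here is the connectionless equivalent: the datagram below is simply sent off with no handshake,
acknowledgment, or retransmission. The address and port are placeholders from the documentation range, and
any reliability would be up to the application.

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"status update", ("192.0.2.10", 5000))   # fire and forget - no virtual circuit
    sock.close()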
The Internet Layer Protocols
There are two main reasons for the Internet layer’s existence: routing, and providing a single network interface
to the upper layers. None of the upper- or lower-layer protocols have any functions relating to routing. The
complex and important task of routing is the job of the Internet layer. The Internet layer’s second job is to
provide a single network interface to the upper-layer protocols. Without this layer, application programmers
would need to write “hooks” into every one of their applications for each different Network Access protocol.
This would not only be a pain in the neck, but it would lead to different versions of each application—one for
Ethernet, another one for Token Ring, and so on. To prevent this, IP provides one single network interface for
the upper-layer protocols. That accomplished, it’s then the job of IP and the various Network Access protocols
to get along and work together. All network roads don’t lead to Rome—they lead to IP. And all the other
protocols at this layer, as well as all those at the upper layers, use it. Never forget that. All paths through the
model go through IP. The following sections describe the protocols at the Internet layer. These are the
protocols that work at the Internet layer:
• Internet Protocol (IP)
• Internet Control Message Protocol (ICMP)
• Address Resolution Protocol (ARP)
• Reverse Address Resolution Protocol (RARP)
Internet Protocol (IP)
The Internet Protocol (IP) essentially is the Internet layer. The other protocols found here merely exist to
support it. IP contains the big picture and could be said to “see all,” in that it is aware of all the interconnected
networks. It can do this because all the machines on the network have a software, or logical, address called an IP
address. IP looks at each packet’s address. Then, using a routing table, it decides where a packet is to be sent
next, choosing the best path. The Network Access–layer protocols at the bottom of the model don’t possess IP’s
enlightened scope of the entire network; they deal only with physical links (local networks). Identifying devices
on networks requires answering these two questions: Which network is it on? And what is its ID on that
network? The first answer is the software, or logical, address (the correct street). The second answer is the
hardware address (the correct mailbox). All hosts on a network have a logical ID called an IP address. This is
the software, or logical, address and contains valuable encoded information greatly simplifying the complex
task of routing. IP receives segments from the Host-to-Host layer and fragments them into datagrams (packets).
IP then reassembles datagrams back into segments on the receiving side. Each datagram is assigned the IP
address of the sender and of the recipient. Each router (layer-3 device) that receives a datagram makes routing
decisions based upon the packet’s destination IP address. Every time user data from the upper layers is sent to
a remote network, it has to pass through IP.
Internet Control Message Protocol (ICMP)
The Internet Control Message Protocol (ICMP) works at the Network layer and is used by IP for many different
services. ICMP is a management protocol and messaging service provider for IP. Its messages are carried as IP
datagrams. RFC 1256, ICMP Router Discovery Messages, is an annex to ICMP, which affords hosts extended
capability in discovering routes to gateways. Periodically, router advertisements are announced over the
network, reporting IP addresses for the router's network interfaces. Hosts listen for these network infomercials
to acquire route information. A router solicitation is a request for immediate advertisements and may be sent
by a host when it starts up. If a router can’t send an IP datagram any further, it uses ICMP to send a message
back to the sender, advising it of the situation. For example, if a router receives a packet destined for a network
that the router doesn’t know about, it will send an ICMP Destination Unreachable message back to the sending
station.
Buffer Full: If a router’s memory buffer for receiving incoming datagrams is full, it will use ICMP to send out
this message.
Hops: Each IP datagram is allotted a certain number of routers, called hops, which it may go through. If it
reaches its limit of hops before arriving at its destination, the last router to receive that datagram deletes it. The
executioner router then uses ICMP to send an obituary message, informing the sending machine of the demise
of its datagram.
Ping: Packet Internet Groper uses ICMP echo messages to check the physical connectivity of machines on an
internetwork.
Traceroute: Using ICMP time-outs, traceroute is used to find the path a packet takes as it traverses an
internetwork. A network analyzer capture of an ICMP echo request shows that even though ICMP works at the
Network layer, it still uses IP to carry the Ping request.
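Crafting ICMP echo requests by hand needs raw sockets (and usually administrator rights), so a simple way to
see the messages described above is to drive the system's ping utility, which sends them for you. This sketch
assumes a Unix-style ping (on Windows the count flag is -n rather than -c), and the target address is a
placeholder.

    import subprocess

    result = subprocess.run(["ping", "-c", "3", "192.0.2.1"],
                            capture_output=True, text=True)
    print(result.stdout)     # echo replies (or timeouts) for the three ICMP requests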
Address Resolution Protocol (ARP)
The Address Resolution Protocol (ARP) finds the hardware address of a host from a known IP address. Here’s
how it works: When IP has a datagram to send, it must inform a Network Access protocol, such as Ethernet or
Token Ring, of the destination’s hardware address on the local network. (It has already been informed by
upper-layer protocols of the destination’s IP address.) If IP doesn’t find the destination host’s hardware address
in the ARP cache, it uses ARP to find this information. As IP’s detective, ARP interrogates the local network by
sending out a broadcast asking the machine with the specified IP address to reply with its hardware address. In
other words, ARP translates the software (IP) address into a hardware address—for example, the destination
machine’s Ethernet board address—and from it, deduces its whereabouts. This hardware address is technically
referred to as the media access control (MAC) address or physical address. The figure below shows how an
ARP broadcast might look to a local network.
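The exchange can be reproduced on a lab host. Below is a minimal sketch using the third-party Scapy library (an assumption, not something the text prescribes); the target address 192.168.1.10 is hypothetical and the script needs root privileges to send raw frames.

# Minimal ARP-request sketch using Scapy (third-party library; run as root).
# The target IP 192.168.1.10 is a hypothetical host on the local segment.
from scapy.all import ARP, Ether, srp

# Broadcast frame asking "who has 192.168.1.10?"
request = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.1.10")

# srp() sends at layer 2 and collects replies; give up after 2 seconds.
answered, _ = srp(request, timeout=2, verbose=False)

for _, reply in answered:
    # reply.hwsrc is the MAC (hardware) address resolved for the IP.
    print(f"{reply.psrc} is at {reply.hwsrc}")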

Reverse Address Resolution Protocol (RARP)


When an IP machine happens to be a diskless machine, it has no way of initially knowing its IP address, but it
does know its MAC address. The Reverse Address Resolution Protocol (RARP) discovers the identity of the IP
address for diskless machines by sending out a packet that includes its MAC address and a request for the IP



address assigned to that MAC address. A designated machine, called a RARP server, responds with the answer,
and the identity crisis is over. RARP uses the information it does know about the machine’s MAC address to
learn its IP address and complete the machine’s ID portrait.

Ways of Communication
Unicasting
• Communication between two devices is one-to-one. It creates the least traffic, and it is best when one
device needs to communicate with exactly one other device, since no other hosts on the segment are
bothered. It is inefficient for one-to-many communication, because the sending device would have to
transmit a separate copy of the same packet to every host and process an acknowledgement from each
of them.
Broadcasting
• Communication is one-to-all, meaning all the hosts in the network on the same switch. When a host
sends a packet to the broadcast address, the switch duplicates the packet and forwards it to every host
in the network.
Multicasting
• One-to-one and one-to-all communication both have limitations, such as excess traffic and security
exposure. Multicasting is used when one-to-group, one-way communication is required, for example
live telecasting of a video stream on the Internet: the receivers are a group of interested users rather
than every host, so each user joins the particular multicast group to receive that particular stream.
(A socket-level sketch of all three delivery modes follows this list.)
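The three delivery modes can be sketched at the socket level with Python's standard library. This is a minimal illustration only; the addresses, port 5000, and the multicast group 239.1.1.1 are example values, not part of the original text.

# Sketch of the three delivery modes with UDP sockets (addresses are examples).
import socket, struct

data = b"hello"

# Unicast: one sender, one specific receiver.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.sendto(data, ("192.168.0.10", 5000))

# Broadcast: delivered to every host on the local subnet.
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
s.sendto(data, ("192.168.0.255", 5000))

# Multicast: delivered only to hosts that joined the group 239.1.1.1.
r = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
r.bind(("", 5000))
membership = struct.pack("4sl", socket.inet_aton("239.1.1.1"), socket.INADDR_ANY)
r.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
# Any datagram now sent to 239.1.1.1:5000 is received by every member of the group.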

IP Addressing
One of the most important topics in any discussion of TCP/IP is IP addressing. An IP address is a
numeric identifier assigned to each machine on an IP network. It designates the location of a device on
the network. An IP address is a software address, not a hardware address—the latter is hardcoded on
a network interface card (NIC) and used for finding hosts on a local network. IP addressing was
designed to allow a host on one network to communicate with a host on a different network,
regardless of the type of LAN the hosts are participating in.
IP stands for Internet Protocol; it is a communications protocol used from the smallest private network
to the massive global Internet. An IP address is a unique identifier given to a single device on an IP
network. The IP address consists of a 32-bit number that ranges from 0 to 4294967295. This means
that theoretically, the Internet can contain approximately 4.3 billion unique objects. But to make such



a large address block easier to handle, it was chopped up into four 8-bit numbers, or "octets,"
separated by a period. Instead of 32 binary base-2 digits, which would be too long to read, it's
converted to four base-256 digits. Octets are made up of numbers ranging from 0 to 255. The numbers
below show how IP addresses increment.
0.0.0.0
0.0.0.1
...increment 252 hosts...
0.0.0.254
0.0.0.255
0.0.1.0
0.0.1.1
...increment 252 hosts..
0.0.1.254
0.0.1.255
0.0.2.0
0.0.2.1
...increment 4+ billion hosts...
255.255.255.255
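The point that an IPv4 address is nothing more than a 32-bit number displayed as four octets can be checked with Python's standard ipaddress module:

import ipaddress

print(int(ipaddress.IPv4Address("0.0.1.0")))          # 256
print(ipaddress.IPv4Address(256))                      # 0.0.1.0
print(int(ipaddress.IPv4Address("255.255.255.255")))  # 4294967295, the top of the range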
IP Terminology
Here are a few of the most important terms: -
Bit One digit; either a 1 or a 0.
Byte 8 bits.
Octet Always 8 bits; just another name for a byte when discussing IP addresses.
Network address The designation used in routing to send packets to a remote network, for example, 10.0.0.0,
172.16.0.0, and 192.168.10.0.
Broadcast address
Used by applications and hosts to send information to all nodes on a network. Examples include
255.255.255.255, which is all networks, all nodes; 172.16.255.255, which is all subnets and hosts on network
172.16.0.0; and 10.255.255.255, which broadcasts to all subnets and hosts on network 10.0.0.0.



The Hierarchical IP Addressing Scheme
An IP address consists of 32 bits of information. These bits are divided into four sections, referred to as octets
or bytes, each containing 1 byte (8 bits).
You can depict an IP address using one of three methods:
• Dotted-decimal, as in 172.16.30.56
• Binary, as in 10101100.00010000.00011110.00111000
• Hexadecimal, as in AC 10 1E 38
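A short standard-library sketch showing the same address in all three notations:

# The address 172.16.30.56 in dotted-decimal, binary (per octet), and hexadecimal.
ip = "172.16.30.56"
octets = [int(o) for o in ip.split(".")]

print(".".join(str(o) for o in octets))          # 172.16.30.56
print(".".join(f"{o:08b}" for o in octets))      # 10101100.00010000.00011110.00111000
print(" ".join(f"{o:02X}" for o in octets))      # AC 10 1E 38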

Network Addressing
The network address uniquely identifies each network. Every machine on the same network shares that
network address as part of its IP address. In the IP address 172.16.30.56, for example, 172.16 is the network
address.
The node address is assigned to, and uniquely identifies, each machine on a network. This part of the address
must be unique because it identifies a particular machine—an individual—as opposed to a network, which is a
group. This number can also be referred to as a host address. In the sample IP address 172.16.30.56, .30.56 is
the node address. The designers of the Internet decided to create classes of networks based on network size.
For the small number of networks possessing a very large number of nodes, they created the rank Class A
network. At the other extreme is the Class C network, which is reserved for the numerous networks with a small
number of nodes. The class distinction for networks between very large and very small is predictably called the
Class B network. Subdividing an IP address into a network and node address is determined by the class
designation of one’s network.
Figure summarizes the three classes of networks: -

Network Address Range: Class A


The designers of the IP address scheme said that the first bit of the first byte in a Class A network address must
always be off, or 0. This means a Class A address must be between 0 and 127.
Here is how those numbers are defined:
0xxxxxxx: If we turn the other 7 bits all off and then turn them all on, we will find your Class A range of network
addresses.
00000000=0
01111111=127
Network Address Range: Class B
In a Class B network, the RFCs state that the first bit of the first byte must always be turned on, but the second
bit must always be turned off. If you turn the other six bits all off and then all on, you will find the range for a
Class B network:
10000000=128
10111111=191
As you can see, this means that a Class B network can be defined when the first byte is configured from 128 to
191.
Network Address Range: Class C
For Class C networks, the RFCs define the first two bits of the first octet always turned on, but the third bit can
never be on. Following the same process as the previous classes, convert from binary to decimal to find the
range.
Here is the range for a Class C network:
11000000=192
11011111=223



So, if you see an IP address that starts at 192 and goes to 223, you’ll know it is a Class C IP address.
Network Address Ranges: Classes D and E
The addresses with a first octet between 224 and 255 are reserved for Class D and E networks.
Class D (224 to 239) is used for multicast addresses and Class E (240 to 255) for scientific/experimental purposes.
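A small sketch that classifies an address by its first octet, following the ranges above (the Class D/E split at 239/240 is the conventional one, not stated explicitly in the text):

# Determine the class of an IPv4 address from its first octet.
def ip_class(address: str) -> str:
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"                       # leading bit 0
    if first <= 191:
        return "B"                       # leading bits 10
    if first <= 223:
        return "C"                       # leading bits 110
    if first <= 239:
        return "D (multicast)"
    return "E (scientific/experimental)"

print(ip_class("10.1.1.1"))         # A
print(ip_class("172.16.30.56"))     # B
print(ip_class("192.168.100.102"))  # C
print(ip_class("224.0.0.5"))        # D (multicast)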

Network Addresses: Special Purpose


Some IP addresses are reserved for special purposes, and network administrators shouldn't assign these
addresses to nodes. The entries below list the members of this exclusive little club and why they're included
in it.

Network –Id
• The ID that represents the whole group of host addresses in the same network in the topology. It
cannot be assigned to any host in the network. When all the host bits are zero, the address is the
network-ID; put simply, the first address of the network is always the Network-ID.
Broadcast-Id
• The address to which packets are sent when they should be received by all the hosts in the network.
This address is used when every host in the network is supposed to get the same message. It cannot
be assigned to any host in the network. When all the host bits are one, the address is the
broadcast-ID; put simply, the last address of the network is always the Broadcast-ID.
Class A Addresses
In a Class A network address, the first byte is assigned to the network address and the three remaining bytes
are used for the node addresses. The Class A format is Network.Node.Node.Node For example, in the IP
address 49.22.102.70, 49 is the network address, and 22.102.70 is the node address. Every machine on this
particular network would have the distinctive network address of 49. Class A network addresses are one byte long, with



the first bit of that byte reserved and the seven remaining bits available for manipulation. As a result, the
maximum number of Class A networks that can be created is 128. Why?
Because each of the seven bit positions can either be a 0 or a 1, thus 2^7 or 128.
To complicate matters further, the network address of all 0s (0000 0000) is reserved to designate the default
route. Additionally, the address 127, which is reserved for diagnostics, can’t be used either, which means that
you can only use the numbers 1 to 126 to designate Class A network addresses. This means the actual number
of usable Class A network addresses is 128 minus 2, or 126. Got it? Each Class A address has three bytes (24-bit
positions) for the node address of a machine. Thus, there are 2^24, or 16,777,216, unique combinations and,
therefore, precisely that many possible unique node addresses for each Class A network. Because addresses
with the two patterns of all 0s and all 1s are reserved, the actual maximum usable number of nodes for a Class
A network is 2^24 minus 2, which equals 16,777,214.
Class A Valid Host IDs
Here is an example of how to figure out the valid host IDs in a Class A network address: 10.0.0.0 All host bits off
is the network address. 10.255.255.255 All host bits on is the broadcast address. The valid hosts are the
number in between the network address and the broadcast address: 10.0.0.1 through 10.255.255.254. Notice
that 0s and 255s are valid host IDs. All you need to remember when trying to find valid host addresses is that
the host bits cannot all be turned off or on at the same time.
Class B Addresses
In a Class B network address, the first two bytes are assigned to the network address, and the remaining two
bytes are used for node addresses. The format is Network. Network. Node. Node. For example, in the IP
address 172.16.30.56, the network address is 172.16, and the node address is 30.56. With a network address
being two bytes (eight bits each), there would be 2^16 unique combinations. But the Internet designers decided
that all Class B network addresses should start with the binary digit 1, then 0. This leaves 14 bit positions to
manipulate, therefore 16,384 (2^14) unique Class B network addresses. A Class B address uses two bytes for
node addresses. This is 2^16 minus the two reserved patterns (all 0s and all 1s), for a total of 65,534 possible
node addresses for each Class B network.
Class B Valid Host IDs
Here is an example of how to find the valid hosts in a Class B network: 172.16.0.0 All host bits turned off is the
network address. 172.16.255.255 All host bits turned on is the broadcast address. The valid hosts would be the
numbers in between the network address and the broadcast address: 172.16.0.1 through 172.16.255.254.
Class C Addresses
The first three bytes of a Class C network address are dedicated to the network portion of the address, with
only one measly byte remaining for the node address. The format is Network.Network.Network.Node. Using
the example IP address 192.168.100.102, the network address is 192.168.100, and the node address is 102. In a
Class C network address, the first three bit positions are always the binary 110. The calculation is such: 3 bytes,
or 24 bits, minus 3 reserved positions, leaves 21 positions. Hence, there are 2^21, or 2,097,152, possible Class C
networks. Each unique Class C network has one byte to use for node addresses. This leads to 2^8 or 256,
minus the two reserved patterns of all 0s and all 1s, for a total of 254 node addresses for each Class C network.
Class C Valid Host IDs
Here is an example of how to find a valid host ID in a Class C network: 192.168.100.0 All host bits turned off is
the network ID. 192.168.100.255 All host bits turned on is the broadcast address. The valid hosts would be the
numbers in between the network address and the broadcast address: 192.168.100.1 through 192.168.100.254

So, while assigning IP addresses to hosts, two addresses can never be assigned: one is the Network-Id and the
other is the Broadcast-Id. Always subtract 2 from the total number of IPs in the network.

Network         Subnet-mask       Total No. of IPs     Usable IPs    Network-Id / Broadcast-Id
10.0.0.0        255.0.0.0         2^24 = 16,777,216    16,777,214    10.0.0.0 / 10.255.255.255
172.31.0.0      255.255.0.0       65,536               65,534        172.31.0.0 / 172.31.255.255
192.168.0.0     255.255.255.0     256                  254           192.168.0.0 / 192.168.0.255
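The table above can be reproduced with Python's standard ipaddress module; the example networks are the ones already listed.

# Network ID, broadcast ID, and usable host count for each example network.
import ipaddress

for net in ("10.0.0.0/8", "172.31.0.0/16", "192.168.0.0/24"):
    n = ipaddress.ip_network(net)
    usable = n.num_addresses - 2    # subtract the network ID and the broadcast ID
    print(f"{n.network_address}  {n.netmask}  total={n.num_addresses}  "
          f"usable={usable}  broadcast={n.broadcast_address}")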

Subnetting
The word subnet is short for sub network--a smaller network within a larger one. The smallest subnet
that has no more subdivisions within it is considered a single "broadcast domain," which directly
correlates to a single LAN (local area network) segment on an Ethernet switch. The broadcast domain
serves an important function because this is where devices on a network communicate directly with
each other's MAC addresses, which don't route across multiple subnets, let alone the entire Internet.
MAC address communications are limited to a smaller network because they rely on ARP broadcasting
to find their way around, and broadcasting can be scaled only so much before the amount of broadcast
traffic brings down the entire network with sheer broadcast noise. For this reason, the most common
smallest subnet is 8 bits, or precisely a single octet, although it can be smaller or slightly larger.
Subnetting is simply the concept of borrowing bits from the host part of an address and adding them to the
network part. This increases the number of available networks and decreases the number of hosts per subnet.
In this way IP addresses can be assigned more efficiently, with the least possible waste, since IPv4 addresses
are limited in number.
Subnets have a beginning and an ending, and the beginning number is always even and the ending
number is always odd. The beginning number is the "Network ID" and the ending number is the
"Broadcast ID." You're not allowed to use these numbers because they both have special meaning with
special purposes. The Network ID is the official designation for a particular subnet, and the ending
number is the broadcast address that every device on a subnet listens to.
With subnetting, one bigger network can be broken down into a number of smaller sub networks, and each
sub network must have its own Network-Id and Broadcast-Id.
For example
192.168.0.0 255.255.255.0
Network-Id 192.168.0.0    Broadcast-Id 192.168.0.255
Writing the last octet in binary gives the following:
192.168.0.00000000
Here the last 8 bits are host bits and the first 24 bits are for the network and are reserved.
Let N be the number of IP addresses required per subnet.
We have to find out how many bits must be reserved for the hosts; the remaining bits become subnet bits.
With N hosts we also require one Network-Id and one Broadcast-Id, so the total number of IPs required is
N + 2. To provide that many addresses we need to reserve M (say) bits for the host part.
N + 2 ≤ 2^M (general for all classes)

Now the number of subnet networks (out of the 8 host bits available in this Class C octet) will be:
2^(8-M)
Consider a requirement of 60 people:
The number of IPs required is N + 2 = 62, where N = 60.
Putting the values into the formula gives M = 6 (since 2^6 = 64 ≥ 62).
So the number of subnets will be 2^(8-6) = 4,
and the number of addresses in each subnet will be 2^6 = 64 (62 of them usable by hosts).
192.168.0. 00 000000
Subnet bits Host bits
The four subnets will be:
192.168.0.00 ****** Decimal Form 192.168.0.0
192.168.0.01 ****** Decimal Form 192.168.0.64
192.168.0.10 ****** Decimal Form 192.168.0.128
192.168.0.11 ****** Decimal Form 192.168.0.192

Binary Form                                     Decimal Form
Network-Id           Broadcast-Id               Network-Id      Broadcast-Id
192.168.0.00000000   192.168.0.00111111         192.168.0.0     192.168.0.63
192.168.0.01000000   192.168.0.01111111         192.168.0.64    192.168.0.127
192.168.0.10000000   192.168.0.10111111         192.168.0.128   192.168.0.191
192.168.0.11000000   192.168.0.11111111         192.168.0.192   192.168.0.255
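The same four subnets can be generated with the standard ipaddress module; borrowing two host bits (prefixlen_diff=2) turns the /24 into four /26 networks.

import ipaddress

base = ipaddress.ip_network("192.168.0.0/24")
for subnet in base.subnets(prefixlen_diff=2):       # borrow 2 host bits -> four /26s
    hosts = list(subnet.hosts())                     # usable addresses only
    print(f"{subnet}: network-ID {subnet.network_address}, "
          f"broadcast-ID {subnet.broadcast_address}, "
          f"usable {hosts[0]}-{hosts[-1]} ({len(hosts)} hosts)")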

IP Variable Length Subnet Masking (VLSM)


Conventional Subnet masking replaces the two-level IP addressing scheme with a more flexible three-
level method. Since it lets network administrators assign IP addresses to hosts based on how they are
connected in physical networks, subnetting is a real breakthrough for those maintaining large IP
networks. It has its own weaknesses though, and still has room for improvement. The main weakness
of conventional subnetting is in fact that the subnet ID represents only one additional hierarchical
level in how IP addresses are interpreted and used for routing.
The Problem With Single-Level Subnetting
It may seem “greedy” to look at subnetting and say “what, only one additional level?” However, in
large networks, the need to divide our entire network into only one level of subnetworks doesn't
represent the best use of our IP address block. Furthermore, we have already seen that since the
subnet ID is the same length throughout the network, we can have problems if we have subnetworks
with very different numbers of hosts on them—the subnet ID must be chosen based on whichever
subnet has the greatest number of hosts, even if most of the subnets have far fewer. This is inefficient
even in small networks, and can result in the need to use extra addressing blocks while wasting many
of the addresses in each block.
For example, consider a relatively small company with a Class C network, 201.45.222.0/24. They have
six subnetworks in their network. The first four subnets (S1, S2, S3 and S4) are relatively small,
containing only 10 hosts each. However, one of them (S5) is for their production floor and has 50
hosts, and the last (S6) is their development and engineering group, which has 100 hosts.
The total number of hosts needed is thus 190. Without subnetting, we have enough hosts in our Class
C network to handle them all. However, when we try to subnet, we have a big problem. In order to
have six subnets we need to use 3 bits for the subnet ID. This leaves only 5 bits for the host ID, which
means every subnet has the identical capacity of 30 hosts. This is enough for the smaller subnets but



not enough for the larger ones. The only solution with conventional subnetting, other than shuffling
the physical subnets, is to get another Class C block for the two big subnets and use the original for the
four small ones. But this is expensive, and means wasting hundreds of IP addresses.

Suppose the requirement is as follows (a sketch allocating these groups with VLSM appears after the list):


120 people for Marketing
60 people for Finance
30 Telecallers
14 Team Leaders
6 Managers
2 Directors
2 Senate Members
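One possible VLSM allocation for this requirement list, sketched in Python under the assumption that the block 192.168.0.0/24 is available; the allocator simply serves the largest group first, which also keeps every allocation aligned.

import ipaddress
import math

groups = [("Marketing", 120), ("Finance", 60), ("Telecallers", 30),
          ("Team Leaders", 14), ("Managers", 6), ("Directors", 2), ("Senate", 2)]

next_free = ipaddress.ip_address("192.168.0.0")      # assumed starting block
for name, hosts in sorted(groups, key=lambda g: g[1], reverse=True):
    # Need hosts + network-ID + broadcast-ID addresses, rounded up to a power of two.
    prefix = 32 - math.ceil(math.log2(hosts + 2))
    subnet = ipaddress.ip_network(f"{next_free}/{prefix}")
    print(f"{name:12s} needs {hosts:3d} hosts -> {subnet} "
          f"(broadcast {subnet.broadcast_address})")
    next_free = subnet.broadcast_address + 1          # continue right after this subnet

The seven allocations (/25, /26, /27, /28, /29, /30, /30) add up to exactly 256 addresses, so the whole requirement fits in a single /24.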



TRANSMISSION MEDIUM USED
Unshielded Twisted Pair (UTP) Cable
Unshielded Twisted Pair (UTP) is undoubtedly the most common transmission system. Twisted pair
cables are available unshielded (UTP) or shielded (STP). UTP is the most common. STP is used in noisy
environments where the shield protects against excessive electromagnetic interference. Both UTP and
STP come in stranded and solid wire varieties. The stranded wire is the most common and is also very
flexible for bending around corners. Solid wire cable has less attenuation and can span longer
distances, but is less flexible than stranded wire and cannot be repeatedly bent.
Shielded Twisted Pair (STP) involves a metal foil, or shield, that surrounds each pair in a cable,
sometimes with another shield surrounding all the pairs in a multi-pair cable.
The shields serve to block ambient interference by absorbing it and conducting it to ground. That
means that the foils have to be spliced just as carefully as the conductors, and that the connection to

ground has to be rock-solid.


Twisted pair comes in the following categories: -

1. UTP Analog voice


2. UTP Digital voice (1 Mbps data)
3. UTP, STP Digital voice (16 Mbps data)
4. UTP, STP Digital voice (20 Mbps data)
5. UTP, STP Digital voice (100 Mbps data)

Unshielded Twisted Pair (UTP) Cable


Twisted pair cabling comes in two varieties: shielded and unshielded .

Unshielded twisted pair


The quality of UTP may vary from telephone-grade wire to extremely high-speed cable. The cable has four pairs
of wires inside the jacket. Each pair is twisted with a different number of twists per inch to help eliminate
interference from adjacent pairs and other electrical devices. The tighter the twisting, the higher the supported
transmission rate and the greater the cost per foot.
Unshielded Twisted Pair Connector



The standard connector for unshielded twisted pair cabling is an RJ-45 connector. This is a plastic connector that
looks like a large telephone-style connector (fig.). A slot allows the RJ-45 to be inserted only one way. RJ stands
for Registered Jack, implying that the connector follows a standard borrowed from the telephone industry. This
standard designates which wire goes with each pin inside the connector.
The RJ-45 connector is clear so you can see the eight colored wires that connect to the connector’s pins. These
wires are twisted into four pairs. Four wires (two pairs) carry the voltage and are considered tip. The other four
wires are grounded and are called ring. The RJ-45 connector is crimped onto the end of the wire, and the
pin locations of the connector are numbered from the left, 8 to 1.

RJ-45 connector
Pin Wire Pair (T is tip, R is Ring)

1 Pair 2 T2
2 Pair 2 R2
3 Pair 3 T3
4 Pair 1 R1
5 Pair 1 T1
6 Pair 3 R3
7 Pair 4 T4
8 Pair 4 R4

Straight-Through
In a UTP implementation of a straight-through cable, the wires on both cable ends are in the same order.
You can use a straight-through cable for the following tasks:
 Connecting a router to a hub or switch
 Connecting a server to a hub or switch
 Connecting workstations to a hub or switch

Crossover
In the implementation of a crossover, the wires on each end of the cable are crossed. Transmit to
receive and receive to Transmit on each side, for both tip and ring.
You can use a crossover cable for the following tasks:
 Connecting uplinks between switches
 Connecting hubs to switches
 Connecting a hub to another hub

Coaxial Cable

Coaxial cabling has a single copper conductor at its center. A plastic layer provides insulation between the center
conductor and a braided metal shield. The metal shield helps to block any outside interference from fluorescent
lights, motors, and other computers.



Coaxial cable
Although coaxial cabling is difficult to install, it is highly resistant to signal interference. In addition, it can
support greater cable lengths between network devices than twisted pair cable. The two types of coaxial cabling
are thick coaxial and thin coaxial.
Coaxial Cable Connectors
The most common type of connector used with coaxial cables is the Bayonet Neill-Concelman (BNC) connector.
Different types of adapters are available for BNC connectors, including a T-connector, barrel connector, and
terminator. Connectors on the cable are the weakest points in any network.

BNC connector
Fiber Optic Cable
Fiber optic cabling consists of a center glass core surrounded by several layers of protective materials. It
transmits light rather than electronic signals eliminating the problem of electrical interference. This makes it
ideal for certain environments that contain a large amount of electrical interference. It has also made it the
standard for connecting networks between buildings, due to its immunity to the effects of moisture and lightning.
Fiber optic cable has the ability to transmit signals over much longer distances than coaxial and twisted pair. It
also has the capability to carry information at vastly greater speeds. This capacity broadens communication
possibilities to include services such as video conferencing and interactive services.

Fiber optic cable


Fiber Optic Connector
The most common connector used with fiber optic cable is an ST connector. It is barrel shaped, similar to a BNC
connector. A newer connector, the SC, has a squared face and is easier to connect in a confined space.

Switches

A switch is an intelligent device that forwards a packet only to the segment where its destination lies.

Here we will discuss the 3Com Super Stack 3 Switch 3300 in detail: -
3com Switch:
The Super Stack 3 Switch 3300 connects your existing 10Mbps devices, connects high-performance
workgroups with a 100Mbps backbone or server connection, and connects power users to dedicated
100Mbps ports - all in one switch. In addition, as part of the 3Com Super Stack 3 range of products,
you can combine it with any Super Stack 3 system as your network grows.



Features:

The Switch has the following hardware features:


• 12 or 24 Fast Ethernet auto-negotiating 10BASE-T/100BASE-TX ports
• Matrix port for connecting units in the Switch 1100/3300 family to form a stack:
• Connect two units back-to-back using a single Matrix Cable
• Connect up to four units using Matrix Cables linked to a Matrix Module
• Slot for an Expansion Module

Front view:

Rear View:

Switches occupy the same place in the network as hubs. Unlike hubs, switches examine each packet and process
it accordingly rather than simply repeating the signal to all ports. Switches map the Ethernet addresses of the
nodes residing on each network segment and then allow only the necessary traffic to pass through the switch.
When a packet is received by the switch, the switch examines the destination and source hardware addresses and
compares them to a table of network segments and addresses. If the segments are the same, the packet is
dropped ("filtered"); if the segments are different, then the packet is "forwarded" to the proper segment.
Additionally, switches prevent bad or misaligned packets from spreading by not forwarding them.



Filtering of packets and the regeneration of forwarded packets enables switching technology to split a network
into separate collision domains. Regeneration of packets allows for greater distances and more nodes to be used
in the total network design, and dramatically lowers the overall collision rates. In switched networks, each
segment is an independent collision domain. In shared networks all nodes reside in one, big shared collision
domain. Easy to install, most switches are self-learning. They determine the Ethernet addresses in use on each
segment, building a table as packets are passed through the switch. This "plug and play" element makes switches
an attractive alternative to hubs.
Switches can connect different networks types (such as Ethernet and Fast Ethernet) or networks of the same
type. Many switches today offer high-speed links, like Fast Ethernet or FDDI that can be used to link the switches
together or to give added bandwidth to important servers that get a lot of traffic. A network composed of a
number of switches linked together via these fast uplinks is called a "collapsed backbone" network.
Dedicating ports on switches to individual nodes is another way to speed access for critical computers. Servers
and power users can take advantage of a full segment for one node, so some networks connect high traffic nodes
to a dedicated switch port.

Hubs
In data communications, a hub is the pivot of convergence where data arrives from one or more directions and is
forwarded out in one or more directions. A hub usually includes a switch (in telecommunications, a switch is a
network device that selects a path or circuit for sending a unit of data to its next destination) of some kind. The
distinction seems to be that the hub is the point where data comes together and the switch is what determines
how and where data is forwarded from the place where data comes together. A hub is a hardware device that acts
as a central connecting point and joins lines in a star network configuration.

Routers
A router is a device that interconnects two or more computer networks, and selectively interchanges
packets of data between them. Each data packet contains address information that a router can use to
determine if the source and destination are on the same network, or if the data packet must be
transferred from one network to another. A router is a device whose software and hardware are
customized to the tasks of routing and forwarding information. A router has two or more network
interfaces, which may be to different types of network or different network standards.
Types of routers
Basically these are of two types–
1) Modular: - these routers do not have fixed interfaces. These can be added and removed
according to need.
2) Non-modular routers:- These routers have fixed interfaces and these cannot be removed.
Ports
We can connect to a Cisco router to configure it, verify its configuration and check the statistics by
using various ports. There are many ports but the most important is the console port.
Console Port: -



• The console port is usually an RJ-45 connection located at the back of the router. The console is
used to configure the router when it is freshly booted and at any time the administrator wants to
change the running configuration.
• We can also connect to the Cisco router by using an auxiliary port, which is the same as the
console port. But the auxiliary port also allows us to configure modem commands.

Router Components

Some of the parts of a cisco router are: Chassis, motherboard, processor, RAM, NVRAM, flash memory,
Power supply, Rom etc.
ROM:
• The ROM in a router contains the bootstrap program that searches for a suitable system image
when the router is switched on. When the router is switched on, the ROM performs a power-on
self-test (POST) to check the hardware; POST checks whether everything is working properly.
The ROM also provides a monitor mode that can be used for recovering from a crisis.
The information present in the ROM cannot be erased in normal operation; it contains the basic
code the device needs in order to start up.
Flash Memory:
• Flash memory is an erasable, reprogrammable ROM that holds the system image and the
microcode. Flash memory gets its name from the fact that sections of its memory cells are
erased in a single action or flash. Flash memory is commonly called Flash. Flash is a variation of
EEPROM (Electrically Erasable Programmable Read-Only Memory). The process of erasing and
rewriting in EEPROM is slow, while flash is erased and rewritten faster. Flash memory holds
the Operating System of a router. The operating system of a Cisco router is IOS (Internetwork
Operating System). When a router is switched on, it checks for the compressed form of IOS in
Flash memory. If the IOS is present, it continues booting from it; otherwise it looks for the image on a
TFTP (Trivial File Transfer Protocol) server.

RAM:
• RAM is much faster to read from and write to than other kinds of storage; it provides caching,
buffers network packets, and stores routing table information. RAM contains the running
configuration file, which is the current configuration file. All configuration changes are saved to
this file unless we explicitly save the changes to the NVRAM. Information in the RAM requires a
constant power source to be sustained. When the router is powered down, or there is a power
cycle, data stored in RAM ceases to exist. NVRAM is Nonvolatile Random Access Memory.
Information in NVRAM is retained in storage when the router is switched off or rebooted.
NVRAM
• NVRAM (Nonvolatile RAM) is the general name used to describe any type of random access memory which does
not lose its information when power is turned off. The startup configuration is stored in the
NVRAM of the router. When the router reboots, it searches the NVRAM for a startup-config; if one
is available, the router copies it into the running configuration.
Internal part of a router
CPU:-
• The CPU executes instructions coded in the operating system and its subsystems to perform the
basic operations necessary to accomplish the functionality of the router, for example, all of the
routing functions, network module high-level control, and system initialization.
Motherboard Serves the same function as in a computer or laptop.
Router Interface Types
Network Module A type of circuit board on which WIC cards are installed; it also has built-in Fast
Ethernet or Ethernet ports.
WIC Cards Used to connect the router to other routers in the network or to the wide area
network, for example over leased lines or a Frame Relay switch.
• Smart serial
• Serial
Fast Ethernet Cards with a maximum speed of 100 Mbps that follow the Ethernet standards.
Ethernet Cards with a maximum speed of 10 Mbps that follow the Ethernet standards.

Boot Sequence
Complete these steps:
1. After you power on the router, the ROM monitor starts first. ROMMON/BOOTSTRAP functions
are important at router boot, and complete these operations at boot up:



o Configure power-on register settings—these settings are of the Control Registers of the
processor and of other devices such as Dual Universal Asynchronous Receiver
Transmitter (DUART) for console access, as well as the configuration register.
o Perform power-on diagnostics—Tests are performed on NVRAM and DRAM, writing
and reading various data patterns.
o Initialize the hardware—Initialization of the interrupt vector and other hardware is
performed, and memory, for example, DRAM, SRAM, and so forth, is sized.
o Initialize software structures—Initialization of the NVRAM data structure occurs so that
information about the boot sequence, stack trace, and environment variables can be
read. Also, information about accessible devices is collected in the initial device table.
2. Next, the ROM looks for the Cisco IOS software image in the Flash. Even if you want to boot the
router with the Trivial File Transfer Protocol (TFTP), you need a valid image in the Flash in
order to boot that image first, and to use that image as a boot-helper image in order to initialize
the system, and bring up the interfaces in order to load the main image from the TFTP server.
3. After the router finds the image, the router decompresses it and loads it into the Dynamic RAM.
Then the Cisco IOS software image starts to run. Cisco IOS software performs important
functions during boot up, such as:
o Recognition and analysis of interfaces and other hardware
o Setup of proper data structures such as Interface Descriptor Blocks (IDBs)
o Allocation of buffers
o Reading the configuration from NVRAM to RAM (startup-config) and the configuration
of the system

This is an example of a boot sequence from a 2600 router:


System Bootstrap, Version 11.3(2)XA4, RELEASE SOFTWARE (fc1)
Copyright (c) 1999 by cisco Systems, Inc.
TAC:Home:SW:IOS:Specials for info
C2600 platform with 65536 Kbytes of main memory

program load complete, entry point: 0x80008000, size: 0x43b7fc

Self decompressing the image:


######################################################################
######################################################################
######################################################################
######################################################################
####################################################### [OK]

Restricted Rights Legend

Use, duplication, or disclosure by the Government is


subject to restrictions as set forth in subparagraph
(c) of the Commercial Computer Software - Restricted
Rights clause at FAR sec. 52.227-19 and subparagraph
(c) (1) (ii) of the Rights in Technical Data and Computer
Software clause at DFARS sec. 252.227-7013.

cisco Systems, Inc.



170 West Tasman Drive
San Jose, California 95134-1706

Cisco Internetwork Operating System Software


IOS (tm) C2600 Software (C2600-I-M), Version 12.1(8), RELEASE SOFTWARE (fc1)
Copyright (c) 1986-2001 by cisco Systems, Inc.
Compiled Tue 17-Apr-01 04:55 by kellythw
Image text-base: 0x80008088, data-base: 0x8080853C

cisco 2611 (MPC860) processor (revision 0x203) with 56320K/9216K bytes of memory.
Processor board ID JAD05020BV5 (1587666027)
M860 processor: part number 0, mask 49
Bridging software.
X.25 software, Version 3.0.0.
2 Ethernet/IEEE 802.3 interface(s)
2 Serial(sync/async) network interface(s)
32K bytes of non-volatile configuration memory.
16384K bytes of processor board System flash (Read/Write)

Press RETURN to get started!


00:00:09: %LINK-3-UPDOWN: Interface Ethernet0/0, changed state to up
00:00:09: %LINK-3-UPDOWN: Interface Ethernet0/1, changed state to up
00:00:09: %LINK-3-UPDOWN: Interface Serial0/0, changed state to up
00:00:09: %LINK-3-UPDOWN: Interface Serial0/1, changed state to up
00:00:10: %SYS-5-CONFIG_I: Configured from memory by console
00:00:10: %LINEPROTO-5-UPDOWN: Line protocol on Interface Ethernet0/0,
changed state to up
00:00:10: %LINEPROTO-5-UPDOWN: Line protocol on Interface Ethernet0/1,
changed state to up
00:00:10: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/0,
changed state to up
00:00:10: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/1,
changed state to up
00:00:13: %SYS-5-RESTART: System restarted --
Cisco Internetwork Operating System Software
IOS (tm) C2600 Software (C2600-I-M), Version 12.1(8), RELEASE SOFTWARE (fc1)
Copyright (c) 1986-2001 by cisco Systems, Inc.
Compiled Tue 17-Apr-01 04:55 by kellythw

router>

DCE and DTE


DCE stands for Data Communication Equipment. The DCE end of a link determines the speed of the
link. The DCE end is usually located at the service provider end. It controls the speed of the DTE end by
providing the clock rate, which is defined in bits per second. It is essential to configure the clock rate on
the DCE side; no communication will start between the routers without it.
DTE stands for Data Terminal Equipment. The DTE end is connected to the device. The services
available to the DTE are most often accessed via modem or channel service unit/data service unit
(CSU/DSU). There is no need to configure a clock rate on the DTE end.



Configuring a Router
A router can be configured in three ways:
• Console
• Telnet
• Auxiliary line telephone link (not used these days)
By default a new router has no configuration. To start configuring it, we first enter privileged mode with the enable command:
Router>enable
Command-Line Interface (CLI)

To use the CLI, press Enter after the router finishes booting up. After you do that, the router will
respond with messages that tell you all about the status of each and every one of its interfaces and then
display a banner and ask you to log in

Cisco Router Basic Operations

Enter privileged mode Router> enable

Return to user mode from privileged disable

Exit Router Logout or exit or quit

Recall last command up arrow or <Ctrl-P>

Recall next command down arrow or <Ctrl-N>

Suspend or abort <Shift> and <Ctrl> and 6 then x

Refresh screen output <Ctrl-R>

Complete Command TAB

Cisco Router Copy Commands (On Privilege Mode)



Save the current configuration from DRAM to Router# copy running-config startup-config
NVRAM

Merge NVRAM configuration to DRAM Router# copy startup-config running-config

Copy DRAM configuration to a TFTP server Router# copy running-config tftp

Merge TFTP configuration with current router Router# copy tftp running-config
configuration held in DRAM

Backup the IOS onto a TFTP server Router# copy flash tftp

Upgrade the router IOS from a TFTP server Router# copy tftp flash

Cisco Router Debug Commands (On Privilege Mode)

Enable debug for RIP Router# debug ip rip

To See IP Packet Router# debug ip packet

To debug ip reply packet Router# debug ip icmp

Switch all debugging off Router# no debug all


Router# u all

Some basic commands

Set a console password to cisco Router(config)# line con 0


Router(config-line)# login
Router(config-line)# password cisco

Set a telnet password Router(config)# line vty 0 4


Router(config-line)# login
Router(config-line)# password cisco

Stop console timing out Router(config)# line con 0


Router(config-line)# exec-timeout 0 0

Set the enable password to cisco Router(config)# enable password cisco

Set the enable secret password to peter. Router(config)# enable secret peter
This password overrides the enable password
and is encrypted within the config file

To enter in Interface mode Router(config)# interface serial x/y or


Router(config)# interface fastethernet
x/y

Enable an interface Router(config-if)#no shutdown

To disable an interface Router(config-if)#shutdown

Set the clock rate for a router with a DCE Router(config-if)#clock rate 64000
cable to 64K

Set a logical bandwidth assignment of 64K to Router(config-if)#bandwidth 64


the serial interface Note that the zeroes are not missing

To add an IP address to an interface Router(config-if)# ip address 10.1.1.1


255.255.255.0

Disable CDP for the whole router Router(config)# no cdp run

Enable CDP for the whole router Router(config)# cdp run

Disable CDP on an interface Router(config-if)# no cdp enable

Cisco Router Show Commands (Privilege Mode)

View version information Router# show version

View current configuration (DRAM) Router# show running-config

View startup configuration (NVRAM) Router# show startup-config

Show IOS file and flash space Router# show flash

Shows all logs that the router has in its memory Router# show log

Overview all interfaces on the router Router# show ip interfaces brief

Display a summary of connected cdp devices Router#show cdp neighbor

Display detailed information on all devices Router#show cdp entry *

Display current routing protocols Router#show ip protocols



Display IP routing table Router#show ip route

Display Interface Properties Router# Show Interface serial x/y


Router# show interface fastethernet x/y

Display IP Properties of Interface Router# Show ip interface serial x/y


Router# show ip interface fastethernet x/y

Ping
Ping is a computer network administration utility used to test whether a particular host is
reachable across an Internet Protocol (IP) network and to measure the round-trip time for
packets sent from the local host to a destination computer, including the local host's own
interfaces.
By default the packet will take the source address of the outgoing interface from which the packet
is supposed to leave for the destination.
Ping operates by sending Internet Control Message Protocol (ICMP) echo request packets to the
target host and waits for an ICMP response. In the process it measures the round-trip time and
records any packet loss. The results of the test are printed in form of a statistical summary of
the response packets received, including the minimum, maximum, and the mean round-trip
times, and sometimes the standard deviation of the mean.
The command can be used in the given format on any device, whether a Microsoft OS host or a Cisco router:
C:\> ping Address(IP or www.xyz.com)
C:\>ping 127.0.0.254
Pinging 127.0.0.254 with 32 bytes of data:
Reply from 127.0.0.254: bytes=32 time<1ms TTL=128
Reply from 127.0.0.254: bytes=32 time<1ms TTL=128
Reply from 127.0.0.254: bytes=32 time<1ms TTL=128
Reply from 127.0.0.254: bytes=32 time<1ms TTL=128

Ping statistics for 127.0.0.254:


Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms

Router# ping A.B.C.D (IP Address)


Router#ping 1.1.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 1.1.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/3/4 ms

Extended Ping
Ping has various options depending on the implementation that enable special operational
modes, such as specifying the packet size used as the probe, automatic repeated operation for
sending a specified count, the request timeout, and the source address carried by the ping
packet.

Router# ping
Protocol [ip]: ip
Target IP address: 1.1.1.1
Repeat count [5]: 1000
Datagram size [100]: 200
Timeout in seconds [2]: 1
Extended commands [n]: y    (to change the source address, answer y)
Source address or interface: 1.1.1.1
Type of service [0]:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 1000, 200-byte ICMP Echos to 1.1.1.1, timeout is 1 seconds:
Packet sent with a source address of 1.1.1.1
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 100 percent (1000/1000), round-trip min/avg/max = 1/1/4 ms

Traceroute
Traceroute is a computer network tool used to show the route taken by packets across an IP network.
It is used to find out on which router the packets are actually dropped if the packet is unable to reach
the destination. It is a very useful tool for network professionals.
Traceroute works by increasing the "time-to-live" value of each successive batch of packets sent. The
first three packets sent have a time-to-live (TTL) value of one (implying that they are not forwarded
by the next router and make only a single hop). The next three packets have a TTL value of 2, and so
on. When a packet passes through a host, normally the host decrements the TTL value by one, and
forwards the packet to the next host. When a packet with a TTL of one reaches a host, the host
discards the packet and sends an ICMP time exceeded packet to the sender, or an echo reply if its IP
address matches the IP address that the packet was originally sent to. The traceroute utility uses these
returning packets to produce a list of hosts that the packets have traversed in transit to the
destination.
Command for Microsoft Operating systems.
C:\>tracert www.google.com



Tracing route to www.l.google.com [209.85.231.104] over a maximum of 30 hops:

1 2 ms 2 ms 1 ms 10.16.32.96
2 18 ms 18 ms 16 ms 122.160.236.2
3 17 ms 16 ms 16 ms ABTS-North-Static-014.236.160.122.airtelbroadband.in
[122.160.236.14]
4 18 ms 15 ms 15 ms 125.19.65.101
5 75 ms 85 ms 74 ms 203.101.100.210
6 77 ms 96 ms 76 ms 72.14.216.229
7 82 ms 82 ms 81 ms 66.249.94.170
8 88 ms 89 ms 92 ms 72.14.238.90
9 83 ms 83 ms 81 ms maa03s01-in-f104.1e100.net [209.85.231.104]

Trace complete

C:\>tracert 4.2.2.2
Tracing route to vnsc-bak.sys.gtei.net [4.2.2.2]
over a maximum of 30 hops:
1 2 ms 1 ms 1 ms 10.16.32.96
2 16 ms 20 ms 17 ms ABTS-North-Static-002.236.160.122.airtelbroadband.in
[122.160.236.2]
3 18 ms 17 ms 16 ms ABTS-North-Static-006.236.160.122.airtelbroadband.in
[122.160.236.6]
4 17 ms 19 ms 16 ms 125.19.65.101
5 71 ms 69 ms 73 ms 203.101.95.30
6 225 ms 222 ms 221 ms so-5-3-0-dcr2.par.cw.net [195.10.54.77]
7 221 ms 221 ms 231 ms xe-4-3-0-xcr1.par.cw.net [195.2.9.233]
8 216 ms 225 ms 215 ms xe-0-1-0-xcr1.fra.cw.net [195.2.9.225]
9 328 ms 322 ms 319 ms 212.162.4.201
10 304 ms 307 ms 304 ms vnsc-bak.sys.gtei.net [4.2.2.2]
Trace complete.

For Cisco routers


Router#traceroute A.B.C.D

Routed Protocols
- Protocol that can be routed by a router. It is used between routers to carry user traffic. A router must
be able to interpret the logical internetwork as specified by that routed protocol. Examples of routed
protocols include AppleTalk, DECnet, IP, and IPX.
Routing Protocols
- Protocol that accomplishes routing through the implementation of a specific routing algorithm.
Examples of routing protocols include IGRP, OSPF, and RIP. It is used between routers to maintain
tables. Dynamic Routing is performed by Routing Protocols

Routing



Routing is the act of moving information across an internetwork from a source to a destination.
Routing is used for taking a packet from one device and sending it through the network to another
device on a different network. If your network has no routers, then you are not routing. Routers route
traffic to all the networks in your inter-network. Routing directs packet forwarding, the transit of
logically addressed packets from their source toward their ultimate destination through intermediate
nodes; typically hardware devices called routers, bridges, gateways, firewalls, or switches. General-
purpose computers with multiple network cards can also forward packets and perform routing,
though they are not specialized hardware and may suffer from limited performance. The routing
process usually directs forwarding on the basis of routing tables which maintain a record of the routes
to various network destinations. Thus, constructing routing tables, which are held in the routers'
memory, is very important for efficient routing. Most routing algorithms use only one network path at
a time, but multipath routing techniques enable the use of multiple alternative paths.
To be able to route packets, a router must know, at a minimum, the following:
• Destination address
• Neighbor routers from which it can learn about remote networks
• Possible routes to all remote networks
• The best route to each remote network

Different Types of Routing


• Static routing
• Default routing
• Dynamic routing

How to maintain and verify routing information


The router learns about remote networks from neighbor routers or from an administrator. The router
then builds a routing table that describes how to find the remote networks. If the network is directly
connected, then the router already knows how to get to the network. If the networks are not attached,
the router must learn how to get to the remote network with either static routing, which means that
the administrator must hand-type all network locations into the routing table, or use dynamic routing.

What is the Routing Table?


The routing table is the table in which the best routes to all the networks the router has learned about are
placed. All decisions taken by the routing engine are based on the routing table, so the routing table should be
populated with the latest entries and the latest updates for those networks.
The routing table is populated on the basis of the following criteria (a small sketch of this selection logic
follows the list):
1. Longest (highest) subnet mask of the network.
For example, if 2.0.0.0/28 is advertised by RIP and 2.0.0.0/24 is advertised by OSPF, then destinations
2.0.0.0 through 2.0.0.15 are matched by the RIP route (the longer prefix), and all the remaining IPs of
the 2.0.0.0/24 network are reached via the OSPF route.
2. Lowest administrative distance of the advertised route.
For example, if the same network is advertised by two routing protocols, the protocol with the lower
administrative distance is considered to provide the best route to that particular network.
3. Lowest metric, if multiple routes to the particular network are advertised by the same routing protocol.
For example, if RIP advertises the 2.0.0.0/24 network as 4 hops away via one interface and 5 hops away
via another, the advertisement with the lowest metric is selected as the best route.
4. Load balancing, if the metrics of the routes are equal; how many parallel paths are used depends on the
routing protocol.
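A toy Python sketch of that selection order (longest prefix first, then administrative distance, then metric); the candidate routes and their AD/metric values below are illustrative only.

import ipaddress

# Candidate routes: (prefix, administrative distance, metric, learned from)
routes = [
    (ipaddress.ip_network("2.0.0.0/28"), 120, 1, "RIP"),
    (ipaddress.ip_network("2.0.0.0/24"), 110, 20, "OSPF"),
]

def best_route(destination: str):
    dst = ipaddress.ip_address(destination)
    matches = [r for r in routes if dst in r[0]]
    # Longest prefix wins; ties are broken by lower AD, then by lower metric.
    return max(matches, key=lambda r: (r[0].prefixlen, -r[1], -r[2]))

print(best_route("2.0.0.5"))    # inside 2.0.0.0/28, so the RIP route is chosen
print(best_route("2.0.0.100"))  # only 2.0.0.0/24 matches, so the OSPF route is chosen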

Contents of routing tables

The routing table consists of at least three information fields:


• The network id: i.e. the destination network id
• Cost: i.e. the cost or metric of the path through which the packet is to be sent
• Next hop: The next hop, or gateway, is the address of the next station to which the packet is to
be sent on the way to its final destination
Router#show ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route

Gateway of last resort is not set

1.0.0.0/24 is subnetted, 5 subnets


R 1.0.1.0 [120/1] via 192.168.0.2, 00:00:22, Serial0/0
R 1.0.0.0 [120/1] via 192.168.0.2, 00:00:22, Serial0/0
R 1.0.3.0 [120/1] via 192.168.0.2, 00:00:22, Serial0/0
R 1.0.2.0 [120/1] via 192.168.0.2, 00:00:22, Serial0/0
R 1.0.4.0 [120/1] via 192.168.0.2, 00:00:22, Serial0/0
C 192.168.0.0/24 is directly connected, Serial0/0

Load Balancing
Load sharing, also known as load balancing, allows routers to take advantage of multiple paths to the
same destination by sending packets over all the available routes.

Load sharing can be equal cost or unequal cost, where cost is a generic term referring to whatever
metric (if any) is associated with the route.
Equal-cost load sharing distributes traffic equally among multiple paths with equal metrics.
Unequal-cost load sharing distributes packets among multiple paths with different metrics. The
traffic is distributed inversely proportional to the cost of the routes. That is, paths with lower costs are
assigned more traffic, and paths with higher costs are assigned less traffic.



Some routing protocols support both equal-cost and unequal-cost load sharing, whereas others
support only equal cost. Static routes, which have no metric, support only equal-cost load sharing.
Routing protocols like RIP, OSPF, and IS-IS support only equal-cost load balancing, whereas EIGRP supports
both equal-cost and unequal-cost load balancing. Load sharing is also either per destination or per packet.

Per Destination Load Sharing and Fast Switching


Per destination load balancing distributes the load according to destination address. Given two paths
to the same network, all packets for one destination on the network may travel over the first path, all
packets for a second destination on the same network may travel over the second path, all packets for
a third destination may again be sent over the first path, and so on. This type of load balancing occurs
in Cisco routers when they are fast switching, the default Cisco switching mode. Fast switching works
as follows: When a router switches the first packet to a particular destination, a route table lookup is
performed and an exit interface is selected. The necessary data-link information to frame the packet
for the selected interface is then retrieved (from the ARP cache, for instance), and the packet is
encapsulated and transmitted. The retrieved route and data-link information is then entered into a
fast switching cache, and as subsequent packets to the same destination enter the router, the
information in the fast cache allows the router to immediately switch the packet without performing
another route table and ARP cache lookup. While switching time and processor utilization are
decreased, fast switching means that all packets to a specific destination are routed out the same
interface. When a packet addressed to a different host on the same network enters the router and an
alternate route exists, the router may send all packets for that destination on the alternate route.
Therefore, the best the router can do is balance traffic on a per destination basis.
Per Packet Load Sharing and Process Switching
Per packet load sharing means that one packet to a destination is sent over one link, the next packet to
the same destination is sent over the next link, and so on, given equal-cost paths. If the paths are
unequal cost, the load balancing may be one packet over the higher-cost link for every three packets
over the lower-cost link, or some other proportion depending upon the ratio of costs. Cisco routers
will do per packet load balancing when they are process switching.
Process switching simply means that for every packet, the router performs a route table lookup,
selects an interface, and then looks up the data link information. Because each routing decision is
independent for each packet, all packets to the same destination are not forced to use the same
interface.
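
On Cisco routers the switching mode, and therefore the load-sharing behaviour, can be changed per interface; a minimal sketch, assuming interface Serial0/0 (the interface name is chosen only for illustration):

Router(config)# interface Serial0/0
Router(config-if)# no ip route-cache        ! disable fast switching: packets are process switched, giving per-packet load sharing
Router(config-if)# ip route-cache           ! re-enable fast switching: per-destination load sharing resumes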

Loopbacks: - A loopback is a virtual network interface implemented in software only and not
connected to any hardware, but fully integrated into the router's internal network
infrastructure. Any traffic that the router sends to a loopback interface is immediately received on the
same interface.
Any address can be given to a loopback, and to all other devices it behaves like a real interface: traffic
sent to the loopback is equivalent to traffic sent to a real interface or host, and a proper reply is sent
back to the sender. Because a large real network cannot be created in a testing environment,
loopbacks are the main tool for building a large virtual network.

Router(config)#interface loopback ?
<0-2147483647> Loopback interface number



Router(config)#interface loopback 1
Router(config-if)#
*Mar 1 00:01:32.399: %LINEPROTO-5-UPDOWN: Line protocol on Interface Loopback1, changed
state to up
Router(config-if)#ip address 1.0.0.1 255.255.255.0
Router(config-if)#no shut

Router# show ip interface brief


Interface IP-Address OK? Method Status Protocol
Loopback0 unassigned YES unset up up
Loopback1 1.0.0.1 YES manual up up

As shown above, the loopback interface number on any router can range from 0 to 2147483647, and
this number identifies the particular loopback.
The loopback number is locally significant to the router: it must be different for all the loopbacks on a
particular router but need not be different on other routers. For example, Loopback 0 can be created
on all the routers in the network, but every loopback must have a different network address.

Router# show ip route


Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route

Gateway of last resort is not set

1.0.0.0/24 is subnetted, 1 subnets


C 1.0.0.0 is directly connected, Loopback1

Static Routing
Static routing is the process of an administrator manually adding routes in each router’s routing
table. There are benefits and disadvantages to all routing processes. Static routing is not really a
protocol, simply the process of manually entering routes into the routing table via a configuration file
that is loaded when the routing device starts up.

In these systems, routes through a data network are described by fixed paths (statically). These routes
are usually entered into the router by the system administrator. An entire network can be configured
using static routes, but this type of configuration is not fault tolerant. When there is a change in the



network or a failure occurs between two statically defined nodes, traffic will not be rerouted. This
means that anything that wishes to take an affected path will either have to wait for the failure to be
repaired or for the static route to be updated by the administrator before restarting its journey. Most
requests will time out (ultimately failing) before these repairs can be made. There are, however, times
when static routes make sense and can even improve the performance of a network.
Static routing has the following benefits:
• No overhead on the router CPU
• No bandwidth usage between routers for routing updates
• Security (because the administrator allows routing only to certain networks)

Static routing has the following disadvantages:


• The administrator must really understand the internetwork and how each router is connected
to configure the routes correctly.
• If one network is added to the internetwork, the administrator must add a route to it on all
routers.
• It’s not feasible in large networks because it would be a full-time job.
• One major problem with static routing is that the administrator has to select the best route to each
network manually whenever redundant paths are available.

Static routing works well in small networks, in hub-and-spoke topologies, and where low-end routers
cannot take on the additional burden of a routing protocol.
In a large network, however, static routing becomes very complicated: the administrator has to
design the network manually, selecting all the best routes and the backup routes, and the addition of
a new router may require revising the whole design to make better use of the resources.

The command used to add a static route to a routing table is

Router(config)# ip route [destination-network] [mask] [next-hop-address or exit-interface] [administrative-distance] [permanent]

• Ip route The command used to create the static route.


• Destination network The network you are placing in the routing table.
• Mask Indicates the subnet mask being used on the network.
• Next hop address The address of the next hop router that will receive the packet and forward
it to the remote network. This is a router interface that is on a directly connected network. You
must be able to ping the router interface before you add the route.
• Exit interface Used in place of the next hop address if desired. Must be on a point-to-point link,
such as a WAN. This command does not work on a LAN; for example, Ethernet.
• Administrative distance By default, static routes have an administrative distance of 1. You
can change the default value by adding an administrative weight at the end of the command.
• Permanent If the interface is shut down or the router cannot communicate to the next hop
router, the route is automatically discarded from the routing table. Choosing the permanent
option keeps the entry in the routing table no matter what happens.
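
The administrative-distance parameter is often used to create a floating static route, a backup route that is installed only when the primary route disappears. A hedged sketch (the prefix and next-hop addresses below are hypothetical and unrelated to Figure 1):

Router(config)# ip route 10.50.0.0 255.255.0.0 192.168.12.2        ! primary path, default administrative distance of 1
Router(config)# ip route 10.50.0.0 255.255.0.0 192.168.13.2 50     ! backup path with distance 50, installed only if the primary route is removed
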
Figure 1
R1(config)# ip route 2.0.0.0 255.255.255.0 fastethernet 0/0 192.168.12.2
R1(config)# ip route 2.0.1.0 255.255.255.0 fastethernet 0/0 192.168.12.2
R1(config)# ip route 3.0.0.0 255.255.255.0 fastethernet 0/0 192.168.12.2
R1(config)# ip route 3.0.1.0 255.255.255.0 fastethernet 0/0 192.168.12.2
R1(config)# ip route 4.0.0.0 255.255.255.0 fastethernet 0/0 192.168.12.2
R1(config)# ip route 4.0.1.0 255.255.255.0 fastethernet 0/0 192.168.12.2
R1(config)# ip route 192.168.23.0 255.255.255.0 fastethernet 0/0 192.168.12.2
R1(config)# ip route 192.168.34.0 255.255.255.0 fastethernet 0/0 192.168.12.2

R2(config)# ip route 1.0.0.0 255.255.255.0 fastethernet 0/0 192.168.12.1


R2(config)# ip route 1.0.1.0 255.255.255.0 fastethernet 0/0 192.168.12.1
R2(config)# ip route 3.0.0.0 255.255.255.0 fastethernet 0/1 192.168.23.2
R2(config)# ip route 3.0.1.0 255.255.255.0 fastethernet 0/1 192.168.23.2
R2(config)# ip route 4.0.0.0 255.255.255.0 fastethernet 0/1 192.168.23.2
R2(config)# ip route 4.0.1.0 255.255.255.0 fastethernet 0/1 192.168.23.2
R2(config)# ip route 192.168.34.0 255.255.255.0 fastethernet 0/1 192.168.23.2

R3(config)# ip route 1.0.0.0 255.255.255.0 fastethernet 0/0 192.168.23.1


R3(config)# ip route 1.0.1.0 255.255.255.0 fastethernet 0/0 192.168.23.1
R3(config)# ip route 2.0.0.0 255.255.255.0 fastethernet 0/0 192.168.23.1
R3(config)# ip route 2.0.1.0 255.255.255.0 fastethernet 0/0 192.168.23.1
R3(config)# ip route 4.0.0.0 255.255.255.0 fastethernet 0/1 192.168.34.2
R3(config)# ip route 4.0.1.0 255.255.255.0 fastethernet 0/1 192.168.34.2
R3(config)# ip route 192.168.12.0 255.255.255.0 fastethernet 0/0 192.168.23.1

R4(config)# ip route 1.0.0.0 255.255.255.0 fastethernet 0/0 192.168.34.1


R4(config)# ip route 1.0.1.0 255.255.255.0 fastethernet 0/0 192.168.34.1
R4(config)# ip route 2.0.0.0 255.255.255.0 fastethernet 0/0 192.168.34.1



R4(config)# ip route 2.0.1.0 255.255.255.0 fastethernet 0/0 192.168.34.1
R4(config)# ip route 3.0.0.0 255.255.255.0 fastethernet 0/0 192.168.34.1
R4(config)# ip route 3.0.1.0 255.255.255.0 fastethernet 0/0 192.168.34.1
R4(config)# ip route 192.168.12.0 255.255.255.0 fastethernet 0/0 192.168.34.1
R4(config)# ip route 192.168.23.0 255.255.255.0 fastethernet 0/0 192.168.34.1

Test your Skill

Configure the routers for static routes



Configure R1's loopback 1.1.1.1 to reach R4 via the path R1 – R5 – R2 – R4,
and R4's loopback 4.4.4.4 to reach R1 via the path R4 – R3 – R5 – R1.

Default Routing
Default routing is routing in which all packets destined for unknown networks are sent out a
particular interface of the router; that interface acts as the default gateway for the router, and a
router can have only one such gateway of last resort.
Router(config)# ip route 0.0.0.0 0.0.0.0 (exit interface) (next-hop address) (admin distance)
ip route 0.0.0.0 0.0.0.0 Serial x/y A.B.C.D 20
The administrative distance is used to set the priority of the default route.
With this command the default gateway is set on the router; in the output of the show ip route
command, the gateway of last resort is then set to the next-hop address of the adjacent router.
Default routing works well in a hub-and-spoke topology, in which all spoke routers have a default
route pointing to the hub router and the hub router is configured with static routes to all the spoke
routers' networks.

Router(config)#ip route 0.0.0.0 0.0.0.0 loopback 0 1.0.0.1 ?


<1-255> Distance metric for this route
name Specify name of the next hop
permanent permanent route
tag Set tag for this route



track Install route depending on tracked item
<cr>

Router(config)#ip route 0.0.0.0 0.0.0.0 loopback 0 1.0.0.1 20 ?


name Specify name of the next hop
permanent Permanent route
tag Set tag for this route
track Install route depending on tracked item
<cr>

Router(config)#ip route 0.0.0.0 0.0.0.0 loopback 0 2.0.0.1 20 permanent

Router#show ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route

Gateway of last resort is 0.0.0.0 to network 0.0.0.0

1.0.0.0/24 is subnetted, 1 subnets


C 1.0.0.0 is directly connected, Loopback1
S* 0.0.0.0/0 is directly connected, Null0

Figure 2
R1(config)# ip route 0.0.0.0 0.0.0.0 serial 0/0 192.168.0.2

R3(config)# ip route 0.0.0.0 0.0.0.0 serial 0/0 192.168.1.2

R2(config)# ip route 1.0.0.0 255.255.255.0 serial 0/0 192.168.0.1


R2(config)# ip route 1.0.1.0 255.255.255.0 serial 0/0 192.168.0.1



R2(config)# ip route 1.0.2.0 255.255.255.0 serial 0/0 192.168.0.1
R2(config)# ip route 2.0.0.0 255.255.255.0 serial 0/1 192.168.1.1
R2(config)# ip route 2.0.1.0 255.255.255.0 serial 0/1 192.168.1.1
R2(config)# ip route 2.0.2.0 255.255.255.0 serial 0/1 192.168.1.1

Dynamic routing
Dynamic Routing is the process of routing protocols running on the router communicating with
neighbor routers. The routers then update each other about all the networks they know about. If a
change occurs in the network, the dynamic routing protocols automatically inform all routers about
the change. If static routing is used, the administrator is responsible for updating all changes by hand
into all routers.

Dynamic routing adjusts automatically to network topology or traffic changes and is therefore also
called adaptive routing: the routes used are adjusted automatically by the network routing protocol as
the topology or traffic changes. The success of dynamic routing depends on two basic router functions:
• Maintenance of a routing table
• Timely distribution of knowledge in the form of routing updates to other routers

This is the process of using protocols to find and update routing tables on routers. It is easier than
static or default routing, but it comes at the expense of router CPU cycles and bandwidth on the
network links.
A routing protocol defines the set of rules used by a router when it communicates between neighbor
routers.

Dynamic Routing is of two types:-


• Distance Vector Routing Protocols
Routing Information Protocol (RIP)
Enhanced Interior Gateway Routing Protocols (EIGRP)
• Link State Routing Protocols
Open Shortest Path First (OSPF)
Integrated Intermediate Systems-Intermediate Systems (IS-IS)

Routing Protocols are of two types:-


• Interior Routing Protocol
Routing Information Protocol
Enhanced Interior Gateway Routing Protocols
Integrated Intermediate Systems-Intermediate Systems
Open Shortest Path First (OSPF)

• Exterior Routing Protocols


Border Gateway Protocol (BGP)
Routing Protocols Basics
All dynamic routing protocols are built around an algorithm. Generally, an algorithm is a step-by-step
procedure for solving a problem. A routing algorithm must, at a minimum, specify the following:

• A procedure for passing reachability information about networks to other routers
• A procedure for receiving reachability information from other routers
• A procedure for determining optimal routes based on the reachability information it has and
for recording this information in a route table
• A procedure for reacting to, compensating for, and advertising topology changes in an
internetwork

A few issues common to any routing protocol are path determination, metrics, convergence, and
load balancing.

Figure 3

Path Determination
• All networks within an internetwork must be connected to a router, and wherever a router has
an interface on a network that interface must have an address on the network. This address is
the originating point for reach-ability information.
As shown in the figure above, in this simple three-router internetwork Router A knows about
networks 192.168.1.0, 192.168.2.0, and 192.168.3.0 because it has interfaces on those networks with
corresponding addresses and appropriate address masks. Likewise, Router B knows about 192.168.3.0,
192.168.4.0, 192.168.5.0, and 192.168.6.0; Router C knows about 192.168.6.0, 192.168.7.0, and
192.168.1.0. Each interface implements the data link and physical protocols of the network to
which it is attached, so the router also knows the state of the network (up or down).

Each router knows about its directly connected networks from its assigned addresses and
masks. Networks that are not directly connected must be made known to the router through
static routing or dynamic routing.



• Router A examines its IP addresses and associated masks and deduces that it is attached to
networks 192.168.1.0, 192.168.2.0, and 192.168.3.0.
• Router A enters these networks into its route table, along with some sort of flag indicating that
the networks are directly connected.
• Router A places the information into a packet: "My directly connected networks are
192.168.1.0, 192.168.2.0, and 192.168.3.0."
• Router A transmits copies of these route information packets, or routing updates, to routers B
and C.
• Routers B and C, having performed the same steps, have sent updates with their directly
connected networks to A. Router A enters the received information into its route table, along
with the source address of the router that sent the update packet. Router A now knows about
all the networks, and it knows the addresses of the routers to which they are attached.

Metrics
• When there are multiple routes to the same destination, a router must have a mechanism for
calculating the best path. A metric is a variable assigned to routes as a means of ranking them
from best to worst or from most preferred to least preferred.
Different routing protocols use different metrics, so the metrics of two or more routing protocols
cannot be compared to decide which protocol is better; a metric is used only to find the best route
within a single routing protocol.

RIP v1 & v2: Hop count
OSPF: Bandwidth
EIGRP: Bandwidth + Delay
IS-IS: Reference value

Hop Count
• Hop count simply counts how many hops away the destination network is; the hops are the routers
along the path to that network. RIP uses hop count as its metric and considers no other parameters,
so if the path with the lower hop count is a poor one (for example, a slow link) and the path with the
higher hop count is a better one, RIP will still choose the lower hop count.



Figure 4

Bandwidth
• A bandwidth metric would choose a higher-bandwidth path over a lower-bandwidth link.
However, bandwidth by itself still may not be a good metric. What if one or both of the T1 links
are heavily loaded with other traffic and the 56K link is lightly loaded? Or what if the higher-
bandwidth link also has a higher delay?

Load
• This metric reflects the amount of traffic utilizing the links along the path. The best path is the
one with the lowest load.

Delay
• Delay is a measure of the time a packet takes to traverse a route. A routing protocol using delay
as a metric would choose the path with the least delay as the best path. There may be many
ways to measure delay. Delay may take into account not only the delay of the links along the
route but also such factors as router latency and queuing delay. On the other hand, the delay of
a route may be not measured at all; it may be a sum of static quantities defined for each
interface along the path. Each individual delay quantity would be an estimate based on the type
of link to which the interface is connected.
Reliability
• Reliability measures the likelihood that the link will fail in some way and can be either variable
or fixed. Examples of variable-reliability metrics are the number of times a link has failed or the
number of errors it has received within a certain time period. Fixed-reliability metrics are
based on known qualities of a link as determined by the network administrator. The path with
highest reliability would be selected as best.

Cost
• This metric is configured by a network administrator to reflect more- or less-preferred routes.
Cost may be defined by any policy or link characteristic or may reflect the arbitrary judgment
of the network administrator. The term cost is often used as a generic term when speaking of
route choices. For example, "RIP chooses the lowest-cost path based on hop count." Another
generic term is shortest, as in "RIP chooses the shortest path based on hop count." When used
in this context, either lowest-cost (or highest-cost) and shortest (or longest) merely refer to a
routing protocol's view of paths based on its specific metrics.
Cost is the value derived by applying the metric of that particular protocol.
For example, RIP uses the hop count directly, while OSPF calculates its cost with the formula
(reference bandwidth of 100 Mbps / interface bandwidth in Mbps): for a T1 serial link (1.544 Mbps)
the OSPF cost is 100/1.544 ≈ 64, and for Fast Ethernet (100 Mbps) it is 100/100 = 1.
Convergence
• A dynamic routing protocol must include a set of procedures for a router to inform other
routers about its directly connected networks, to receive and process the same information
from other routers, and to pass along the information it receives from other routers. Further, a
routing protocol must define a metric by which best paths may be determined. RIP converges
very slowly, whereas EIGRP converges very fast. The faster the convergence, the more bandwidth the
protocol tends to use for its updates; the slower the convergence, the longer the protocol takes to
recover from a failure.

Administrative distance: -
Administrative distance is the measure used by Cisco routers to select the best path when two or
more routes to the same destination have been learned from two different routing protocols.
Administrative distance defines the believability of a routing protocol: each routing protocol is
prioritized from most to least reliable (believable) using an administrative distance value, which is
assigned on the basis of the reliability and convergence characteristics of the protocol. The default
values are listed below.
Protocol                Default AD     Protocol            Default AD
Directly connected      0              RIP                 120
Static route            1              EGP                 140
EIGRP summary route     5              ODR                 160
External BGP            20             External EIGRP      170
EIGRP (internal)        90             Internal BGP        200
OSPF                    110            Unknown             255
IS-IS                   115
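
The defaults above can be overridden when a particular protocol should be trusted more or less on a given router; a hedged sketch (the value 89 is chosen only for illustration):

Router(config)# router rip
Router(config-router)# distance 89      ! lower RIP's administrative distance below EIGRP's 90, on this router only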

Distance Vector Routing Protocols (DVRP)


Most routing protocols fall into one of two classes: distance vector or link state. The basics of distance
vector routing protocols are examined here; the next section covers link state routing protocols.
Distance vector algorithms are based on work done by R. E. Bellman, L. R. Ford, and D. R. Fulkerson
and for this reason are occasionally referred to as Bellman-Ford or Ford-Fulkerson algorithms.
• R. E. Bellman. Dynamic Programming. Princeton, New Jersey: Princeton University Press; 1957.
• L. R. Ford Jr. and D. R. Fulkerson. Flows in Networks. Princeton, New Jersey: Princeton University
Press; 1962.



The name distance vector is derived from the fact that routes are advertised as vectors of (distance,
direction), where distance is defined in terms of a metric and direction is defined in terms of the next-
hop router. For example, "Destination A is a distance of 5 hops away, in the direction of next-hop
router X."
As that statement implies, each router learns routes from its neighboring routers' perspectives and
then advertises the routes from its own perspective. Because each router depends on its neighbors for
information, which the neighbors in turn may have learned from their neighbors, and so on, distance
vector routing is sometimes facetiously referred to as "Routing by Rumor."

Routing by rumor means that each router relies on the information obtained from its neighbors and
does not verify it itself, so there is a potential risk of routing loops forming in the network.

Distance vector routing protocols include the following:


• Routing Information Protocol (RIP) for IP
• Xerox Networking System's XNS RIP
• Novell's IPX RIP
• Cisco's Interior Gateway Routing Protocol (IGRP)
• DEC's DNA Phase IV
• AppleTalk's Routing Table Maintenance Protocol (RTMP)
Common Characteristics
A typical distance vector routing protocol uses a routing algorithm in which routers periodically send
routing updates to all neighbors by broadcasting their entire route tables. The preceding statement
contains a lot of information; the following sections consider it in more detail.

Periodic Updates
Periodic updates means that at the end of a certain time period, updates will be transmitted. This
period typically ranges from 10 seconds for AppleTalk's RTMP to 90 seconds for Cisco's IGRP. At issue
here is the fact that if updates are sent too frequently, congestion may occur; if updates are sent too
infrequently, convergence time may be unacceptably high.

Neighbors
In the context of routers, neighbors always mean routers sharing a common data link. A distance
vector routing protocol sends its updates to neighboring routers and depends on them to pass the
update information along to their neighbors. For this reason, distance vector routing is said to use
hop-by-hop updates.

Broadcast or Multicast Updates


When a router first becomes active on a network, how does it find other routers and how does it
announce its own presence? Several methods are available. The simplest is to send the updates to the
broadcast address (in the case of IP, 255.255.255.255). Neighboring routers speaking the same
routing protocol will hear the broadcasts and take appropriate action. Hosts and other devices
uninterested in the routing updates will simply drop the packets. Nowadays, however, most routing
protocols use a multicast address to send their updates to neighboring routers; RIP version 1 is the
only one of these that still sends its updates to the broadcast address.
RIP Ver. 2: 224.0.0.9
OSPF: 224.0.0.5, 224.0.0.6
EIGRP: 224.0.0.10

Distance vector protocols converge hop-by-hop

At time t1, the first updates have been received and processed by the routers. Look at R1's table at t1.
R2's update to R1 said that R2 can reach networks 10.0.0.0 and 10.0.2.0, both 0 hops away. If the
networks are 0 hops from R2, they must be 1 hop from R1. R1 incremented the hop count by 1 and
then examined its route table. It already knew about 10.0.0.0, and the hop count (0) was less than the
hop count R2 advertised, (1), so R1 disregarded that information.
Network 10.0.2.0 was new information, however, so R1 entered this in the route table. The source
address of the update packet was router R2's interface (10.0.0.2) so that information is entered along
with the calculated hop count.
Notice that the other routers performed similar operations at the same time t1. R3, for instance,
disregarded the information about 10.0.3.0 from R2 and 10.0.4.0 from R4 but entered information
about 10.0.0.0, reachable via R2's interface address 10.0.2.1, and 4.0.0.0, reachable via R4's interface
10.0.3.2. Both networks were calculated as 1 hop away. At time t2, the update period has again expired
and another set of updates has been broadcast. R2 sent its latest table; R1 again incremented R2's
advertised hop counts by 1 and compared. The information about 10.0.0.0 is again discarded for the
same reason as before. 10.0.2.0 is already known, and the hop count hasn't changed, so that
information is also discarded. 10.0.3.0 is new information and is entered into the route table.
The network is converged at time t3. Every router knows about every network, the address of the
next-hop router for every network, and the distance in hops to every network.

Distance vector algorithms provide road signs to networks. They provide the direction and the
distance, but no details about what lies along the route. And like the sign at the fork in the trail, they
are vulnerable to accidental or intentional misdirection. Following are some of the difficulties and
refinements associated with distance vector algorithms.

Route Invalidation Timers


Now that the internetwork in Figure 5 is fully converged, how will it handle re-convergence when
some part of the topology changes? If network 4.0.0.0 goes down, the answer is simple enough—R4, in
its next scheduled update, flags the network as unreachable and passes the information along.
But what if, instead of 4.0.0.0 going down, router R4 fails? Routers R1, R2, R3 still have entries in their
route tables about 4.0.0.0; the information is no longer valid, but there's no router to inform them of
this fact. They will unknowingly forward packets to an unreachable destination—a black hole has
opened in the internetwork.
This problem is handled by setting a route invalidation timer for each entry in the route table. For
example, when R3 first hears about 4.0.0.0 and enters the information into its route table, R3 sets a
timer for that route. At every regularly scheduled update from router R4, R3 discards the update's
already-known information about 4.0.0.0, as described in "Routing by Rumor," but as R3 does so, it
resets the timer on that route.
If router R4 goes down, R3 will no longer hear updates about 4.0.0.0. The timer will expire, and R3 will
flag the route as unreachable and pass the information along in the next update.



Typical periods for route timeouts range from three to six update periods. A router would not want to
invalidate a route after a single update has been missed, because this event may be the result of a
corrupted or lost packet or some sort of network delay. At the same time, if the period is too long, re-
convergence will be excessively slow.

Figure 5

Link State Routing Protocols


The information available to a distance vector router has been compared to the information available
from a road sign. Link state routing protocols are like a road map. A link state router cannot be fooled
as easily into making bad routing decisions, because it has a complete picture of the network. The
reason is that unlike the routing-by-rumor approach of distance vector, link state routers have
firsthand information from all their peer routers. Each router originates information about itself, its
directly connected links, and the state of those links (hence the name). This information is passed
around from router to router, each router making a copy of it, but never changing it. The ultimate



objective is that every router has identical information about the internetwork, and each router will
independently calculate its own best paths.

Link state protocols, sometimes called shortest path first or distributed database protocols, are built
around a well-known algorithm from graph theory, E. W. Dijkstra's shortest path algorithm.

Examples of link state routing protocols are:


• Open Shortest Path First (OSPF) for IP
• The ISO's Intermediate System to Intermediate System (IS-IS) for CLNS and IP
• DEC's DNA Phase V
• Novell's NetWare Link Services Protocol (NLSP)
Although link state protocols are rightly considered more complex than distance vector protocols, the
basic functionality is not complex at all:
1. Each router establishes a relationship—an adjacency—with each of its neighbors.
2. Each router sends link state advertisements (LSAs), sometimes called link state packets (LSPs),
to each neighbor. One LSA is generated for each of the router's links, identifying the link, the
state of the link, the metric cost of the router's interface to the link, and any neighbors that may
be connected to the link. Each neighbor receiving an advertisement in turn forwards (floods)
the advertisement to its own neighbors.
3. Each router stores a copy of all the LSAs it has seen in a database. If all works well, the
databases in all routers should be identical.
4. The completed topological database, also called the link state database, describes a graph of the
internetwork. Using the Dijkstra algorithm, each router calculates the shortest path to each
network and enters this information into the route table.
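
On a Cisco router running a link state protocol such as OSPF, the database and the resulting routes can be inspected directly; the commands below are an illustrative sketch and assume OSPF is already configured:

Router# show ip ospf database      ! the link state database, which should be identical on all routers in an area
Router# show ip route ospf         ! the best paths that the SPF (Dijkstra) calculation placed in the routing table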



Routing Information Protocol (RIP)
Distance vector protocols, based on the algorithms developed by Bellman, Ford, and Fulkerson, were
implemented as early as 1969 in networks such as ARPANET and CYCLADES. In the mid-1970s Xerox
developed a protocol called PARC Universal Protocol, or PUP, to run on its 3Mbps experimental
predecessor to modern Ethernet. PUP was routed by the Gateway Information Protocol (GWINFO). PUP
evolved into the Xerox Network Systems (XNS) protocol suite; concurrently, the Gateway Information
Protocol became the XNS Routing Information Protocol. In turn, XNS RIP has become the precursor of
such common routing protocols as Novell's IPX RIP, AppleTalk's Routing Table Maintenance Protocol
(RTMP), and, of course, IP RIP.

The metric for RIP is hop count.


The RIP process operates from UDP port 520; all RIP messages are encapsulated in a UDP segment with
both the Source and Destination Port fields set to that value.

RIP defines two message types:


• Request messages
A Request message is used to ask neighboring routers to send an update.
• Response messages.
A Response message carries the update.
The metric used by RIP is hop count, with 1 signifying a directly connected network of the advertising
router.
On startup, RIP broadcasts a packet carrying a Request message out each RIP-enabled interface. The RIP
process then enters a loop, listening for RIP Request or Response messages from other routers. Neighbors
receiving the Request send a Response containing their routing table. When the requesting router receives
the Response messages, it processes the enclosed information. If a particular route entry included in the
update is new, it is entered into the routing table along with the address of the advertising router, which is
read from the source address field of the update packet. If the route is for a network that is already in the
table, the existing entry will be replaced only if the new route has a lower hop count. If the advertised hop
count is higher than the recorded hop count and the update was originated by the recorded next-hop router,
the route will be marked as unreachable for a specified holddown period (explained in the next section). If at
the end of that time the same neighbor is still advertising the higher hop count, the new metric will be
accepted.

RIP Timers and Stability Features

Asynchronous Updates
The figure below shows a group of routers connected to an Ethernet backbone. The routers should not broadcast their
updates at the same time; if they do, the update packets will collide. Yet this situation is exactly what can
happen when several routers share a broadcast network. System delays related to the processing of updates
in the routers tend to cause the update timers to become synchronized. As a few routers become
synchronized, collisions will begin to occur, further contributing to system delays, and eventually all routers
sharing the broadcast network may become synchronized.



Figure 2

Asynchronous updates may be maintained by one of two methods:


Each router's update timer is independent of the routing process and is, therefore, not affected by processing
loads on the router.
A small random time, or timing jitter, is added to each update period as an offset.
If routers implement the method of rigid, system-independent timers, then all routers sharing a broadcast
network must be brought online in a random fashion. Rebooting the entire group of routers simultaneously
could result in all the timers attempting to update at the same time.
Adding randomness to the update period is effective if the variable is large enough in proportion to the
number of routers sharing the broadcast network. Sally Floyd and Van Jacobson have calculated that a too-
small randomization will be overcome by a large enough network of routers and that, to be effective, the
update timer should vary by as much as 50% of the median update period.
After startup, the router gratuitously sends a Response message out every RIP-enabled interface every 30
seconds, on average. The Response message, or update, contains the router's full routing table with the
exception of entries suppressed by the split horizon rule. The update timer initiating this periodic update
includes a random variable to prevent table synchronization. As a result, the time between individual updates
from a typical RIP process may be from 25 to 35 seconds. The specific random variable used by Cisco IOS,
RIP_JITTER, subtracts up to 15% (4.5 seconds) from the update time. Therefore, updates from Cisco
routers vary between 25.5 and 30 seconds. The destination address of the update is the all-hosts broadcast
255.255.255.255
(Figure: RIP packet format; the fields are described under "Packet format of RIP Ver1" later in this document.)



(Figure: routing table output in which a route to subnet 10.3.0.0 has not been updated for more than six update periods; it has been marked unreachable but has not yet been flushed from the routing table.)

Update timer     25.5 to 30.0 sec
Invalid timer    180 sec
Hold-down timer  180 sec
Flush timer      240 sec

The invalidation timer limits the amount of time a route can stay in a routing table without being updated;
distance vector protocols in general use such a timer. RIP calls it the expiration timer, or timeout, and Cisco's
IOS calls it the invalid timer. The expiration timer is initialized to 180 seconds whenever a new route is established and
is reset to the initial value whenever an update is heard for that route. If an update for a route is not heard
within that 180 seconds (six update periods), the hop count for the route is changed to 16, marking the route
as unreachable.
Another timer, the garbage collection or flush timer, is set to 240 seconds, 60 seconds longer than the
expiration time. The route will be advertised with the unreachable metric until the garbage collection timer
expires, at which time the route is removed from the routing table.
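
On Cisco routers these timers can be displayed and, if necessary, tuned; a minimal sketch (the values shown are simply the IOS defaults):

Router# show ip protocols                            ! displays the RIP update, invalid, holddown, and flush timers
Router(config)# router rip
Router(config-router)# timers basic 30 180 180 240   ! update, invalid, holddown, and flush timers, in seconds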

Loop Prevention methods

Max-Hop-Count
RIP prevents routing loops from continuing indefinitely by implementing a limit on the number of hops
allowed in a path from the source to a destination. The maximum number of hops in a path is 15. If a router
receives a routing update that contains a new or changed entry, and if increasing the metric value by 1
causes the metric to be infinity (that is, 16), the network destination is considered unreachable. The
downside of this stability feature is that it limits the maximum diameter of a RIP network to less than 16
hops.

Split Horizon
According to the distance vector algorithm as it has been described so far, at every update period each router
broadcasts its entire route table to every neighbor. But is this really necessary? Every network known by R1



in Figure 2, with a hop count higher than 0, has been learned from R2. Common sense suggests that for R1
to broadcast the networks it has learned from R2 back to R2 is a waste of resources. Obviously, R2 already
knows about those networks.

A route pointing back to the router from which packets were received is called a reverse route. Split
horizon is a technique for preventing reverse routes between two routers.
Besides not wasting resources, there is a more important reason for not sending reachability information
back to the router from which the information was learned. The most important function of a dynamic
routing protocol is to detect and compensate for topology changes—if the best path to a network becomes
unreachable, the protocol must look for a next-best path.
Look yet again at the converged internetwork of Figure 2 and suppose that network 4.0.0.0 goes down. R4
will detect the failure, flag the network as unreachable, and pass the information along to R3 at the next
update interval. However, before R4's update timer triggers an update, something unexpected happens. R3's
update arrives, claiming that it can reach 4.0.0.0, one hop away! R4 has no way of knowing that R3 is not
advertising a legitimate next-best path. It will increment the hop count and make an entry into its route table
indicating that 4.0.0.0 is reachable via R3's interface 10.0.3.1, just 2 hops away.
Now a packet with a destination address of 4.0.0.0 arrives at R3. R3 consults its route table and forwards the
packet to R4. R4 consults its route table and forwards the packet to R3, R3 forwards it back to R4, ad
infinitum. A routing loop has occurred.

Implementing split horizon prevents the possibility of such a routing loop. There are two categories of split
horizon: simple split horizon and split horizon with poisoned reverse. The rule for simple split horizon is,
when sending updates out a particular interface, do not include networks that were learned from updates
received on that interface.

Figure 2

The routers in Figure 2 implement simple split horizon. R3 sends an update to R4 for networks 10.0.0.0,
10.0.3.0, and 1.0.0.0; network 4.0.0.0 is not included because it was learned from R4. Likewise, updates
to R2 include 10.0.3.0, 10.0.2.0, and 4.0.0.0, with no mention of 10.0.0.0 or 1.0.0.0.
Simple split horizon works by suppressing information. Split horizon with poisoned reverse is a
modification that provides more positive information.
The rule for split horizon with poisoned reverse is, when sending updates out a particular interface,
designate any networks that were learned from updates received on that interface as unreachable.
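
On Cisco routers split horizon is enabled by default on most interface types and can be checked or toggled per interface; a hedged sketch assuming interface Serial0/0:

Router# show ip interface Serial0/0 | include Split    ! reports whether split horizon is enabled on the interface
Router(config)# interface Serial0/0
Router(config-if)# no ip split-horizon                 ! disable split horizon (occasionally needed on hub-and-spoke links)
Router(config-if)# ip split-horizon                    ! re-enable it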

Triggered Updates



Triggered updates, also known as flash updates, are very simple: If a metric changes for better or for worse,
a router will immediately send out an update without waiting for its update timer to expire. Reconvergence
will occur far more quickly than if every router had to wait for regularly scheduled updates, and the problem
of counting to infinity is greatly reduced, although not completely eliminated.
Regular updates may still occur along with triggered updates. Thus a router might receive bad information
about a route from a not-yet-reconverged router after having received correct information from a triggered
update. Such a situation shows that confusion and routing errors may still occur while an internetwork is
reconverging, but triggered updates will help to iron things out more quickly.
A further refinement is to include in the update only the networks that actually triggered it, rather than the
entire route table. This technique reduces the processing time and the impact on network bandwidth.

Holddown Timers
Triggered updates add responsiveness to a reconverging internetwork. Holddown timers introduce a certain
amount of skepticism to reduce the acceptance of bad routing information. If the distance to a destination
increases (for example, the hop count increases from 2 to 4), the router sets a holddown timer for that route.
Until the timer expires, the router will not accept any new updates for the route.
Obviously, a trade-off is involved here. The likelihood of bad routing information getting into a table is
reduced but at the expense of the reconvergence time. Like other timers, holddown timers must be set with
care. If the holddown period is too short, it will be ineffective, and if it is too long, normal routing will be
adversely affected.

Route Poisoning
Route poisoning is a method of preventing routing loops within a network topology. Distance vector routing
protocols use route poisoning to indicate to other routers that a route is no longer reachable and
should be removed from their routing tables. A variation of route poisoning is split horizon with poison
reverse, whereby a router sends updates with unreachable hop counts back to the sender for every route
received, to help prevent routing loops. In RIP, the router sends a metric of 16 hops to the neighboring
router, which the neighbor by default treats as unreachable.

Passive Interface
The passive-interface command is used in a routing protocol configuration to suppress updates on a
particular interface. It is applied to interfaces that connect to networks on which no other router is
expected. It is enabled under the routing protocol configuration as below: -
Router(config-router)# passive-interface Serial/FastEthernet x/y
Router(config-router)# passive-interface default is used to suppress updates (or hello packets) on all
interfaces.
When this command is used with RIP, all broadcast or multicast updates out of that interface are blocked;
if the router on the other side is still sending updates, RIP will receive them and add the information to
the routing table, but it will not send any updates out of that interface.
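
A minimal sketch of a RIP configuration using a passive interface (the interface and network numbers are assumptions for illustration):

Router(config)# router rip
Router(config-router)# version 2
Router(config-router)# network 10.0.0.0
Router(config-router)# passive-interface FastEthernet0/1   ! LAN with only end hosts: the network is still advertised, but no RIP updates are sent out of this interface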

Contiguous and Discontiguous Networks


Contiguous Network
Subnets of a major network are contiguous when they are connected to one another, across two or more
routers, by links that also belong to subnets of the same major network. For example: -



Figure 4
As shown in Figure 4, the networks on routers R1, R2, and R3 are all subnets of the network 10.0.0.0/8, and
the routers are connected to each other by links that are also subnets of 10.0.0.0/8.

Discontiguous Networks
Subnets of a major network are discontiguous when two groups of those subnets are separated by different
major networks across two or more routers. For example: -

Figure 5

In Figure 5 above, networks 10.0.3.0/24 and 10.0.4.0/24 are separated from 10.1.1.0/24 and 10.2.1.0/24
by the networks 1.1.1.0/30 and 2.2.2.0/30.

All routing protocols support contiguous networks, but not all routing protocols support discontiguous
networks; RIP version 1 does not support discontiguous networks.

Packet format of RIP Ver1

• Command—Indicates whether the packet is a request or a response. The request asks that a router
send all or part of its routing table. The response can be an unsolicited regular routing update or a
reply to a request. Responses contain routing table entries. Multiple RIP packets are used to convey
information from large routing tables.
• Version number—Specifies the RIP version used. This field can signal different potentially
incompatible versions.
• Zero—This field is not actually used by RFC 1058 RIP; it was added solely to provide backward
compatibility with pre-standard varieties of RIP. Its name comes from its defaulted value: zero.



• Address-family identifier (AFI)—Specifies the address family used. RIP is designed to carry
routing information for several different protocols. Each entry has an address-family identifier to
indicate the type of address being specified. The AFI for IP is 2.
• Address—Specifies the IP address for the entry.
• Metric—Indicates how many internetwork hops (routers) have been traversed in the trip to the
destination. This value is between 1 and 15 for a valid route, or 16 for an unreachable route.

Packet Format of RIP ver2

• Command—Indicates whether the packet is a request or a response. The request asks that a router
send all or a part of its routing table. The response can be an unsolicited regular routing update or a
reply to a request. Responses contain routing table entries. Multiple RIP packets are used to convey
information from large routing tables.
• Version—Specifies the RIP version used. In a RIP packet implementing any of the RIP 2 fields or
using authentication, this value is set to 2.
• Unused—Has a value set to zero.
• Address-family identifier (AFI)—Specifies the address family used. RIPv2’s AFI field functions
identically to RFC 1058 RIP’s AFI field, with one exception: If the AFI for the first entry in the
message is 0xFFFF, the remainder of the entry contains authentication information. Currently, the
only authentication type is simple password.
• Route tag—Provides a method for distinguishing between internal routes (learned by RIP) and
external routes (learned from other protocols).
• IP address—Specifies the IP address for the entry.
• Subnet mask—Contains the subnet mask for the entry. If this field is zero, no subnet mask has been
specified for the entry.
• Next hop—Indicates the IP address of the next hop to which packets for the entry should be
forwarded.
• Metric—Indicates how many internetwork hops (routers) have been traversed in the trip to the
destination. This value is between 1 and 15 for a valid route, or 16 for an unreachable route.

Classful Networks
When a routing protocol does not send subnet mask information with its update packets, the router
receiving the update assumes that the complete classful network lies behind the sending router and keeps a
classful entry for that route. With discontiguous networks, therefore, the receiving router simply selects the
update with the lower hop count as the best path to the whole classful network.
For example:
When R1 sends a RIP version 1 update, it tells R2 that it contains the networks 10.0.3.0 and 10.0.4.0 but
sends no subnet mask information, so R2 assumes that R1 contains the complete network 10.0.0.0/8.
Similarly, R2 receives an update from R3 about the networks 10.1.1.0 and 10.2.1.0 and makes the same
assumption. Because both updates advertise 10.0.0.0/8 at 1 hop, R2 load balances traffic for 10.0.0.0/8
between R1 and R3; neither router receives all the packets, and communication is not possible between the
two groups of 10.0.0.0/8 subnets on R1 and R3.



Classless Network
When a routing protocol sends subnet mask information with its update packets, the receiving router enters
complete information about all the subnets of a network along with their subnet masks. In this way
subnetted networks are fully supported by the router, and discontiguous networks can also be supported.

Differences and Similarities Between RIP Version 1 and Version 2

Similarities (both versions)
• Follow all loop prevention methods
• Metric is hop count
• Maximum hop count is 15
• Support contiguous networks

Differences
• RIP v1 is a classful routing protocol; RIP v2 is classless.
• RIP v1 does not send subnet mask information in its updates; RIP v2 does.
• RIP v1 does not support discontiguous networks; RIP v2 does.
• RIP v1 does not support VLSM and subnetting; RIP v2 does.
• RIP v1 does not support authentication; RIP v2 does.
• RIP v1 sends its updates to the broadcast address 255.255.255.255; RIP v2 sends its updates to the multicast address 224.0.0.9.
• RIP v1 does not send next-hop information; RIP v2 does.

Configuring RIP on Cisco Routers

RIP is enabled on the router by giving the "router rip" command in global configuration mode, followed by
network statements. The same network command identifies both the interfaces out of which RIP updates are
sent and the networks that are advertised.
Router(config)# router rip
Router(config-router)# network A.B.C.D
The commands above enable only RIP version 1; to enable RIP version 2, add the following
commands:
Router(config-router)# version 2
Router(config-router)# no auto-summary



Figure 3

Router 0
router rip
 passive-interface default
 no passive-interface Serial0/0
 no passive-interface Serial0/1
 network 3.0.0.0
 network 1.0.0.0
 network 30.0.0.0
 network 192.168.1.0
 network 192.168.3.0

Router R1
router rip
 passive-interface default
 no passive-interface Serial0/0
 no passive-interface Serial0/1
 network 192.168.1.0
 network 192.168.2.0
 network 2.0.0.0

Router R2
router rip
 passive-interface default
 no passive-interface Serial0/0
 no passive-interface Serial0/1
 network 1.0.0.0
 network 10.0.0.0
 network 20.0.0.0
 network 192.168.0.0
 network 192.168.2.0

Router R3
router rip
 passive-interface default
 no passive-interface Serial0/0
 no passive-interface Serial0/1
 network 192.168.0.0
 network 192.168.3.0

To see the RIP routes


Show ip route rip (Output of R0 )
R0#sh ip route rip
1.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
R 1.0.0.0/8 [120/1] via 192.168.3.2, 00:01:36, Serial0/1
R 20.0.0.0/8 [120/1] via 192.168.3.2, 00:00:16, Serial0/1
R 10.0.0.0/8 [120/1] via 192.168.3.2, 00:00:16, Serial0/1
R 192.168.0.0/24 [120/1] via 192.168.3.2, 00:00:16, Serial0/1
R 192.168.2.0/24 [120/1] via 192.168.3.2, 00:00:16, Serial0/1
As the output above shows, the routing table contains only classful networks even though all the
networks are subnetted.

R3#debug ip rip events


RIP event debugging is on
R3#
1 00:15:23.143: RIP: sending v1 update to 255.255.255.255 via Serial0/1 (192.168.3.2)
1 00:15:23.151: RIP: Update contains 4 routes
1 00:15:23.155: RIP: Update queued
1 00:15:23.155: RIP: Update sent via Serial0/1
1 00:15:27.415: RIP: received v1 update from 192.168.3.1 on Serial0/1
1 00:15:27.419: RIP: Update contains 4 routes
1 00:15:29.899: RIP: received v1 update from 192.168.0.2 on Serial0/0
1 00:15:29.907: RIP: Update contains 4 routes
1 00:15:30.607: RIP: sending v1 update to 255.255.255.255 via Serial0/0 (192.168.0.1)
1 00:15:30.615: RIP: Update contains 4 routes
1 00:15:30.615: RIP: Update queued
1 00:15:30.619: RIP: Update sent via Serial0/0

Enabling RIP Ver2

After enabling RIP version 2 on all routers, the outputs shown above change as follows.

When enabling RIP version 2, enable it on all the routers in the topology and make sure that every router in
the topology supports version 2. If some routers are not enabled for version 2, the routers still running RIP
version 1 will send only version 1 updates, and the routers down the line will receive only version 1 updates.

R0#sh ip route rip


1.0.0.0/8 is variably subnetted, 8 subnets, 2 masks
R 1.0.1.0/24 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
R 1.0.0.0/24 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
R 1.0.0.0/8 [120/1] via 192.168.3.2, 00:00:38, Serial0/1
R 1.0.3.0/24 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
R 1.0.2.0/24 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
20.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
R 20.0.0.0/24 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
R 20.0.0.0/8 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
10.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
R 10.0.0.0/24 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
R 10.0.0.0/8 [120/2] via 192.168.3.2, 00:00:11, Serial0/1
R 192.168.0.0/24 [120/1] via 192.168.3.2, 00:00:11, Serial0/1
R 192.168.2.0/24 [120/2] via 192.168.3.2, 00:00:11, Serial0/1

R3#debug ip rip events


RIP event debugging is on
R3#
1 00:24:42.479: RIP: sending v2 update to 224.0.0.9 via Serial0/1 (192.168.3.2)
1 00:24:42.487: RIP: Update contains 8 routes
1 00:24:42.487: RIP: Update queued
1 00:24:42.491: RIP: Update sent via Serial0/1
1 00:24:50.027: RIP: received v2 update from 192.168.0.2 on Serial0/0
1 00:24:50.035: RIP: Update contains 7 routes
1 00:24:52.319: RIP: sending v2 update to 224.0.0.9 via Serial0/0 (192.168.0.1)
1 00:24:52.327: RIP: Update contains 8 routes
1 00:24:52.327: RIP: Update queued



1 00:24:52.331: RIP: Update sent via Serial0/0
1 00:24:53.599: RIP: received v2 update from 192.168.3.1 on Serial0/1
1 00:24:53.603: RIP: Update contains 7 routes

R1#sh ip route rip


1.0.0.0/24 is subnetted, 4 subnets
R 1.0.1.0 [120/1] via 192.168.2.2, 00:00:41, Serial0/1
R 1.0.0.0 [120/1] via 192.168.2.2, 00:00:42, Serial0/1
R 1.0.3.0 [120/1] via 192.168.2.2, 00:00:42, Serial0/1
R 1.0.2.0 [120/1] via 192.168.2.2, 00:00:42, Serial0/1
3.0.0.0/24 is subnetted, 2 subnets
R 3.0.1.0 [120/1] via 192.168.1.1, 00:00:19, Serial0/0
R 3.0.0.0 [120/1] via 192.168.1.1, 00:00:19, Serial0/0
10.0.0.0/24 is subnetted, 1 subnets
R 10.0.0.0 [120/1] via 192.168.2.2, 00:00:42, Serial0/1
R 192.168.0.0/24 [120/1] via 192.168.2.2, 00:00:42, Serial0/1
R 192.168.3.0/24 [120/1] via 192.168.1.1, 00:00:19, Serial0/0
30.0.0.0/24 is subnetted, 1 subnets
R 30.0.0.0 [120/1] via 192.168.1.1, 00:00:19, Serial0/0

After the invalid timer expires, the routes received from the failed neighbor are shown with a possibly down status.

The output below shows the status after router R2 stops sending its updates to R1.


R1#sh ip route rip
1.0.0.0/24 is subnetted, 4 subnets
R 1.0.1.0/24 is possibly down,
routing via 192.168.2.2, Serial0/1
R 1.0.0.0/24 is possibly down,
routing via 192.168.2.2, Serial0/1
R 1.0.3.0/24 is possibly down,
routing via 192.168.2.2, Serial0/1
R 1.0.2.0/24 is possibly down,
routing via 192.168.2.2, Serial0/1
3.0.0.0/24 is subnetted, 2 subnets
R 3.0.1.0 [120/1] via 192.168.1.1, 00:00:19, Serial0/0
R 3.0.0.0 [120/1] via 192.168.1.1, 00:00:19, Serial0/0
10.0.0.0/24 is subnetted, 1 subnets
R 10.0.0.0/24 is possibly down,
routing via 192.168.2.2, Serial0/1
R 192.168.0.0/24 is possibly down, routing via 192.168.2.2, Serial0/1
R 192.168.3.0/24 [120/1] via 192.168.1.1, 00:00:19, Serial0/0
30.0.0.0/24 is subnetted, 1 subnets
R 30.0.0.0 [120/1] via 192.168.1.1, 00:00:19, Serial0/0

RIP Commands
• show ip route
• show ip route rip (displays RIP routes only)
• clear ip route * (clears the routing table entries; a new table is then built from fresh updates)
• debug ip rip events
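
Two further commands that are commonly useful when verifying RIP (listed here as a supplement, not part of the original list):

Router# show ip protocols       ! shows the RIP version, timers, passive interfaces, and the networks being advertised
Router# debug ip rip            ! shows the contents of each RIP update sent and received (more detailed than debug ip rip events)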

