
DHCP Tutorial

In an IP environment, before a computer can communicate with another one, it needs to have its own IP address.
There are two ways of configuring an IP address on a device:
+ Statically assign an IP address. This means we manually type an IP address for this computer
+ Use a protocol so that the computer can obtain its IP address automatically (dynamically). The most popular
protocol nowadays to do this task is called Dynamic Host Configuration Protocol (DHCP) and we will learn
about it in this tutorial.
A big advantage of using DHCP is the ability to join a network without knowing any details about it. For example, you go
to a coffee shop and, with DHCP enabled on your computer, you can go online without doing anything. The next day you go
online at your school and you don't have to configure anything either, even though the networks of the coffee shop
and your school are different (for example, the network of the coffee shop is 192.168.1.0/24 while that of your
school is 10.0.0.0/8). Really nice, right? Without DHCP, you would have to ask someone who knows about the networks
at your location and then manually choose an IP address in that range. In the worst case, your chosen IP may be the same
as that of someone else who is also using that network, and an address conflict occurs. So how can DHCP obtain a
suitable IP address for you automatically? Let's find out.

How DHCP works


1. When a client boots up for the first time (or tries to join a new network), it needs to obtain an IP address to
communicate. So it first transmits a DHCPDISCOVER message on its local subnet. Because the client has no way of
knowing the subnet to which it belongs, the DHCPDISCOVER is an all-subnets broadcast (destination IP address of
255.255.255.255, which is a layer 3 broadcast address) with a destination MAC address of FF-FF-FF-FF-FF-FF (which
is a layer 2 broadcast address). The client does not have a configured IP address yet, so the source IP address
0.0.0.0 is used. The purpose of the DHCPDISCOVER message is to find a DHCP Server (a server that can
assign IP addresses).

2. After receiving the discover message, the DHCP Server dynamically picks an unassigned IP address from
its IP pool and broadcasts a DHCPOFFER message to the client(*). The DHCPOFFER message can also contain other
information such as the subnet mask, default gateway, IP address lease time, and domain name server (DNS).

(*)Note:

In fact, the DHCPOFFER is a layer 3 broadcast message (the destination IP is 255.255.255.255) but a layer
2 unicast message (the destination MAC is the MAC of the DHCP Client, not FF-FF-FF-FF-FF-FF), so some books
describe it as a broadcast message and others as a unicast message.
3. If the client accepts the offer, it then broadcasts a DHCPREQUEST message saying it will take this IP address. It
is called a request message because the client might decline the offer by requesting another IP address. Notice that the
DHCPREQUEST message is still a broadcast message because the DHCP client has still not received an
acknowledged IP. Also, a DHCP Client can receive DHCPOFFER messages from other DHCP Servers, so sending a
broadcast DHCPREQUEST message is also a way to inform the other servers that their offers have been rejected.

4. When the DHCP Server receives the DHCPREQUEST message from the client, it accepts the
request by sending the client a unicast DHCPACKNOWLEDGEMENT message (DHCPACK).

In conclusion, four messages are sent between the DHCP Client and DHCP Server: DHCPDISCOVER,
DHCPOFFER, DHCPREQUEST and DHCPACKNOWLEDGEMENT. This process is often abbreviated as DORA (for
Discover, Offer, Request, Acknowledgement).
After receiving the DHCPACKNOWLEDGEMENT, the IP address is leased to the DHCP Client. A client will usually keep the
same address by periodically contacting the DHCP server to renew the lease before the lease expires.
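The DORA exchange can be sketched as a toy model in Python. This is a simplified illustration, not a real DHCP implementation: the DhcpServer class, the pool range and the client MAC are invented for this example, and a real server also handles lease timers, NAKs and relay fields.

```python
# Toy model of the DHCP DORA exchange (Discover, Offer, Request, Ack).
# All names and addresses here are illustrative assumptions.

class DhcpServer:
    def __init__(self, pool):
        self.pool = list(pool)   # unassigned addresses in the IP pool
        self.leases = {}         # client MAC -> leased IP

    def handle_discover(self, client_mac):
        """DHCPDISCOVER in -> DHCPOFFER out: offer the first free address."""
        return self.pool[0] if self.pool else None

    def handle_request(self, client_mac, requested_ip):
        """DHCPREQUEST in -> DHCPACK out: commit the lease if still free."""
        if requested_ip in self.pool:
            self.pool.remove(requested_ip)
            self.leases[client_mac] = requested_ip
            return ("DHCPACK", requested_ip)
        return ("DHCPNAK", None)

server = DhcpServer(["10.1.1.%d" % i for i in range(11, 14)])
offer = server.handle_discover("aa:bb:cc:dd:ee:01")        # Discover / Offer
reply = server.handle_request("aa:bb:cc:dd:ee:01", offer)  # Request / Ack
print(offer, reply)   # 10.1.1.11 ('DHCPACK', '10.1.1.11')
```

Once the ACK is returned, the address is marked as leased and no longer offered to other clients.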
If the DHCP Server is not on the same subnet as the DHCP Client, we need to configure the router on the DHCP
client side to act as a DHCP Relay Agent so that it can forward DHCP messages between the DHCP Client and DHCP
Server. To make a router a DHCP Relay Agent, simply put the ip helper-address <IP-address-of-DHCP-Server>
command under the interface that receives the DHCP messages from the DHCP Client.

As we know, routers do not forward broadcast packets (they drop them instead), so DHCP messages like the
DHCPDISCOVER message would be dropped. But with the ip helper-address command, the router accepts that
broadcast message, converts it into a unicast packet and forwards it to the DHCP Server. The destination IP address
of the unicast packet is taken from the ip helper-address command.
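As a rough sketch of what the relay agent does, the following Python function re-addresses a client broadcast toward the configured helper address. The HELPER_ADDRESS and giaddr values, and the packet dictionary itself, are illustrative assumptions rather than real packet handling (giaddr is the DHCP field that carries the relay agent's address).

```python
# Toy model of a DHCP Relay Agent: rewrite a client broadcast into a
# unicast toward the configured helper address. Values are illustrative.

HELPER_ADDRESS = "10.10.10.1"   # assumed "ip helper-address" value

def relay(packet):
    """Forward a broadcast DHCP packet as unicast to the DHCP Server."""
    if packet["dst_ip"] == "255.255.255.255":
        # Re-address the packet and record the relay's own interface
        # address in giaddr (assumed 10.1.1.1 here) so the server knows
        # which subnet to allocate from.
        return dict(packet, dst_ip=HELPER_ADDRESS, giaddr="10.1.1.1")
    return packet   # non-broadcast traffic is forwarded unchanged

pkt = {"type": "DHCPDISCOVER", "src_ip": "0.0.0.0",
       "dst_ip": "255.255.255.255"}
print(relay(pkt)["dst_ip"])   # 10.10.10.1
```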
When a DHCP address conflict occurs
During the IP assignment process, the DHCP Server uses ping to test the availability of an IP before issuing it to the
client. If no one replies, the DHCP Server believes that the IP has not been allocated and it can safely assign it
to a client. If someone answers the ping, the DHCP Server records a conflict; the address is then removed from the
DHCP pool and will not be assigned to a client until the administrator resolves the conflict manually.
Configure a DHCP Server on Cisco router
Instead of using a separate computer/server as a DHCP Server, we can save cost and configure a Cisco router
(or even a Layer 3 Cisco switch) to work as a DHCP Server. The following example configuration completes this
task:
Configuration and description:

Router(config)#ip dhcp pool CLIENTS
  Create a DHCP pool named CLIENTS

Router(dhcp-config)#network 10.1.1.0 /24
  Specifies the subnet and mask of the DHCP address pool

Router(dhcp-config)#default-router 10.1.1.1
  Set the default gateway of the DHCP Clients

Router(dhcp-config)#dns-server 10.1.1.1
  Configure a Domain Name Server (DNS)

Router(dhcp-config)#domain-name 9tut.com
  Configure a domain name

Router(dhcp-config)#lease 0 12
  Duration of the lease (the time during which a client computer can use an
  assigned IP address). The syntax is lease {days [hours] [minutes] | infinite}.
  In this case the lease is 12 hours. The default is a one-day lease.
  Before the lease expires, the client typically needs to renew its address
  lease assignment with the server.

Router(dhcp-config)#exit
Router(config)#ip dhcp excluded-address 10.1.1.1 10.1.1.10
  The IP range that the DHCP Server should not assign to DHCP Clients. Notice
  this command is configured under global configuration mode.
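A small sketch of the resulting assignable range, using Python's standard ipaddress module: all host addresses of 10.1.1.0/24 minus the excluded 10.1.1.1-10.1.1.10, mirroring the configuration above. This only models which addresses are eligible; a real IOS DHCP server also pings an address before leasing it.

```python
# Compute the addresses the pool above could actually lease:
# every host in 10.1.1.0/24 except the excluded 10.1.1.1 - 10.1.1.10.
import ipaddress

network = ipaddress.ip_network("10.1.1.0/24")
excluded = {ipaddress.ip_address("10.1.1.%d" % i) for i in range(1, 11)}

assignable = [ip for ip in network.hosts() if ip not in excluded]
print(assignable[0])    # 10.1.1.11 -- the first address the pool can lease
```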

Simple Network Management Protocol SNMP Tutorial


Building a working network is important, but monitoring its health is just as important. Luckily we have
tools to make administrators' lives easier, and SNMP is one of them. SNMP is present in most networks,
regardless of their size, so understanding how SNMP works is really important and that is what we will
learn in this tutorial.
Understand SNMP
SNMP consists of 3 items:
+ SNMP Manager (sometimes called a Network Management System, NMS): software that runs on the
network administrator's device (in most cases, a computer) to monitor the network.
+ SNMP Agent: software that runs on the network devices that we want to monitor (router, switch, server...)
+ Management Information Base (MIB): the collection of managed objects. This component makes sure that
the data exchange between the manager and the agent remains structured. In other words, the MIB contains a set of
questions that the SNMP Manager can ask the Agent (and that the Agent can understand). The MIB is commonly
shared between the Agent and Manager.

For example, in the topology above you want to monitor a router, a server and a Multilayer Switch. You can run an
SNMP Agent on all of them. Then on a PC you install SNMP Manager software to receive the monitoring information.
SNMP is the protocol running between the Manager and Agent, and SNMP communication between Manager and Agent
takes place in the form of messages. The monitoring process must be done via a MIB, which is a standardized database
containing parameters/objects that describe these networking devices (like IP addresses, interfaces, CPU
utilization, etc.). Therefore the monitoring process becomes a process of GETting and SETting information in the
MIB.
SNMP Versions
SNMP has multiple versions but there are three main versions:
+ SNMP version 1
+ SNMP version 2c
+ SNMP version 3
SNMPv1 is the original version and is now considered legacy, so it should not be used in our networks. SNMPv2c updated the
original protocol and offered some enhancements. One of the noticeable enhancements is the introduction of the INFORM
and GETBULK messages, which will be explained later in this tutorial.
Neither SNMPv1 nor SNMPv2c focused much on security; they provide security based on a community string only. A
community string is really just a clear-text password (without encryption). Any data sent in clear text over a
network is vulnerable to packet sniffing and interception. There are two types of community strings in SNMPv2c:

+ Read-only (RO): gives read-only access to the MIB objects, which is safer and preferred over the read-write method.
+ Read-write (RW): gives read and write access to the MIB objects. This method allows the SNMP Manager to change
the configuration of the managed router/switch, so be careful with this type.
The community string defined on the SNMP Manager must match one of the community strings on the Agents in
order for the Manager to access the Agents.
SNMPv3 provides significant enhancements to address the security weaknesses existing in the earlier versions. The
concept of community string does not exist in this version. SNMPv3 provides a far more secure communication
using entities, users and groups. This is achieved by implementing three new major features:
+ Message integrity: ensuring that a packet has not been modified in transit.
+ Authentication: by using password hashing (based on the HMAC-MD5 or HMAC-SHA algorithms) to ensure the
message is from a valid source on the network.
+ Privacy (Encryption): by using encryption (56-bit DES encryption, for example) to encrypt the contents of a
packet.
Note: Although SNMPv3 offers better security, SNMPv2c is still more common. Cisco has supported
SNMPv3 in their routers since IOS version 12.0.3T.
In the next part we will learn the SNMP messages used in each version.
SNMP Messages
SNMP Messages are used to communicate between the SNMP Manager and Agents. SNMPv1 supports five basic
SNMP messages:
+ SNMP GET
+ SNMP GET-NEXT
+ SNMP GET-RESPONSE
+ SNMP SET
+ SNMP TRAP
In general, GET messages are sent by the SNMP Manager to retrieve information from the SNMP Agents, while
SET messages are used by the SNMP Manager to modify or assign values on the SNMP Agents.
Note: GET-NEXT retrieves the value of the next object in the MIB.
The GET-RESPONSE message is used by the SNMP Agents to reply to GET and GET-NEXT messages.
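A toy model may make the GET/SET flow concrete: here the agent's MIB is just a Python dict mapping object names to values. The Agent class and the object names (sysName.0, ifInOctets.1) are illustrative assumptions, not a real SNMP stack or real OIDs on the wire.

```python
# Toy model of the GET / SET / GET-RESPONSE flow between a Manager and
# an Agent. The "MIB" is simply a dict of object name -> value.

class Agent:
    def __init__(self, mib):
        self.mib = mib

    def get(self, oid):
        """GET in -> GET-RESPONSE out: return the requested value."""
        return self.mib.get(oid)

    def set(self, oid, value):
        """SET in -> GET-RESPONSE out: write the value (needs RW access)."""
        self.mib[oid] = value
        return value

agent = Agent({"sysName.0": "Router1", "ifInOctets.1": 1024})
print(agent.get("sysName.0"))          # Router1
agent.set("sysName.0", "CoreRouter")   # the Manager rewrites a value
print(agent.get("sysName.0"))          # CoreRouter
```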
Unlike GET or SET messages, TRAP messages are initiated by the SNMP Agents to inform the SNMP Manager of
the occurrence of an event. For example, suppose you want to be alerted when the CPU usage of your server goes
above 80%. It would be very annoying if the administrator had to actively use GET messages to check the
CPU usage from time to time. In this case, the TRAP message is very suitable because the
administrator is informed by the agent only when that event occurs. The figure below shows the
direction of SNMP messages:

From SNMPv2c, two new messages were added: INFORM and GETBULK.

INFORM: A disadvantage of the TRAP message is that it is unreliable. SNMP communicates via UDP, so
when an SNMP Agent sends a TRAP message to the SNMP Manager it cannot know whether its message arrived at the SNMP
Manager. To address this problem, a new type of message, called INFORM, was introduced in SNMPv2. With the
INFORM message, the SNMP Manager acknowledges that the message has been received at its end with an
SNMP response protocol data unit (PDU). If the sender never receives a response, the INFORM can be sent again.
Thus, INFORMs are more likely to reach their intended destination.
GETBULK: The GETBULK operation efficiently retrieves large blocks of data, such as multiple rows in a table.
GETBULK fills a response message with as much of the requested data as will fit.
Note: There are no new message types in SNMPv3 compared to SNMPv2c.
SNMP Configuration
In the last part we will go through a simple SNMP configuration so that you can have a closer look at how SNMP
works. SNMPv2c is still more popular than SNMPv3 so we will configure SNMPv2c.
1. Configure a community string
Router(config)#snmp-server community 9tut ro
In this case our community string is named 9tut. The ro stands for read-only access.
2. Configure the IP address of a host receiver (SNMP Manager) for SNMPv2c TRAPs or INFORMs
Router(config)#snmp-server host 10.10.10.12 version 2c TRAPCOMM
TRAPCOMM is the community string for TRAP.
3. Enable the SNMP Traps
Router(config)#snmp-server enable traps
If we don't want to enable all trap messages we can specify which traps we want to be notified about. For example, if you
only want to receive traps about the link up/down notification type then use this command instead:
Router(config)#snmp-server enable traps link cisco
Of course we have to configure an SNMP Manager on a computer with these community strings so that they can
communicate.

Syslog Tutorial
As the administrator of a network, you have just completed all the configuration and everything is working nicely. Now
maybe the next thing you want to do is set up something that can alert you when something goes wrong or goes down
in your network. Syslog is an excellent tool for system monitoring and is almost always included in your distribution.
Places to store and display syslog messages
There are some places we can send syslog messages to:

Place to store syslog messages               Command to use
Internal buffer (inside a switch or router)  logging buffered [size]
Syslog server                                logging
Flash memory                                 logging file flash:filename
Nonconsole terminal (VTY connection)         terminal monitor
Console line                                 logging console

Note: If sent to a syslog server, messages are sent on UDP port 514.
By default, Cisco routers and switches send log messages to the console. We should use a syslog server to store
our logging messages, via the logging command. A syslog server is the most popular place to store logging
messages, and administrators can easily monitor the health of their networks based on the received information.
Syslog syntax
A syslog message has the following format:
seq no: timestamp: %FACILITY-SEVERITY-MNEMONIC: message text
Each portion of a syslog message has a specific meaning:
+ Seq no: a sequence number only if the service sequence-numbers global configuration command is configured
+ Timestamp: Date and time of the message or event. This information appears only if the service
timestamps global configuration command is configured.
+ FACILITY: This tells the protocol, module, or process that generated the message. Some examples are SYS for
the operating system and IF for an interface.
+ SEVERITY: A number from 0 to 7 designating the importance of the action reported. The levels are:

Level   Keyword         Description
0       emergencies     System is unusable
1       alerts          Immediate action is needed
2       critical        Critical conditions exist
3       errors          Error conditions exist
4       warnings        Warning conditions exist
5       notification    Normal, but significant, conditions exist
6       informational   Informational messages
7       debugging       Debugging messages

Note: You can remember the order above with the sentence: Eventually All Critical Errors
Will Not Involve Damage.
The highest level is level 0 (emergencies); the lowest level is level 7 (debugging). To change the minimum severity level that is
sent to syslog, use the logging trap level configuration command. If you specify a level, that level and all
more-severe levels (lower numbers) will be logged. For example, with the logging console warnings command, all messages
at the emergencies, alerts, critical, errors and warnings levels will be displayed. Levels 0 through 4 are for events that could
seriously impact the device, whereas levels 5 through 7 are for less-important events. By default, syslog servers
receive informational messages (level 6).
+ MNEMONIC: A code that identifies the action reported.
+ message text: A plain-text description of the event that triggered the syslog message.
Let's see an example of a syslog message:
39345: May 22 13:56:35.811: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/0/1, changed state to down
+ seq no: 39345
+ Timestamp: May 22 13:56:35.811
+ FACILITY: LINEPROTO
+ SEVERITY level: 5 (notification)
+ MNEMONIC: UPDOWN
+ message text: Line protocol on Interface Serial0/0/1, changed state to down
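As an illustration, such a message can be pulled apart with a short Python regular expression. The pattern follows the format described in this tutorial and is an assumption of mine; real IOS output can vary slightly (facilities may contain digits or underscores, timestamps can differ), so treat it as a sketch.

```python
# Sketch parser for the syslog format "seq: timestamp: %FAC-SEV-MNEMONIC: text".
import re

SYSLOG_RE = re.compile(
    r"(?:(?P<seq>\d+): )?"                   # optional sequence number
    r"(?P<timestamp>\w+ +\d+ [\d:.]+): "     # e.g. "May 22 13:56:35.811"
    r"%(?P<facility>[A-Z]+)-(?P<severity>\d)-(?P<mnemonic>[A-Z]+): "
    r"(?P<text>.*)")

msg = ("39345: May 22 13:56:35.811: %LINEPROTO-5-UPDOWN: "
       "Line protocol on Interface Serial0/0/1, changed state to down")
m = SYSLOG_RE.match(msg)
print(m.group("facility"), m.group("severity"), m.group("mnemonic"))
# LINEPROTO 5 UPDOWN
```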

Syslog Configuration
The following example tells the device to store syslog messages to a server on 10.10.10.150 and limit the messages
for levels 4 and higher (0 through 4):
Router(config)#logging 10.10.10.150
Router(config)#logging trap 4
Of course on the server 10.10.10.150 we have to use a syslog software to capture the syslog messages sent to this
server.
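The filtering rule behind logging trap can be sketched in a few lines of Python: a message is sent only if its severity number is less than or equal to the configured level (lower numbers are more severe). This is an illustrative model of the behavior, not IOS code.

```python
# Model of "logging trap 4": only severities 0..4 reach the syslog server.

SEVERITIES = ["emergencies", "alerts", "critical", "errors",
              "warnings", "notification", "informational", "debugging"]

def should_log(message_level, configured_level=4):
    """True if a message of this severity passes the trap filter."""
    return message_level <= configured_level

sent = [SEVERITIES[lvl] for lvl in range(8) if should_log(lvl, 4)]
print(sent)   # ['emergencies', 'alerts', 'critical', 'errors', 'warnings']
```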

Gateway Load Balancing Protocol GLBP Tutorial


The main disadvantage of HSRP and VRRP is that only one gateway is elected to be the active gateway and used to
forward traffic, while the rest are unused until the active one fails. Gateway Load Balancing Protocol (GLBP) is a
Cisco proprietary protocol that performs a similar function to HSRP and VRRP but supports load balancing among the
members of a GLBP group. In this tutorial, we will learn how GLBP works.
Note: Although we can partially achieve load balancing via HSRP or VRRP by using multiple groups, we have to
assign different default gateways on the hosts. If one group fails, we must reconfigure the default gateways on the
hosts, which results in extra administrative burden.
GLBP Election
When routers are configured in a GLBP group, they first elect one gateway to be the Active Virtual Gateway
(AVG) for that group. The election is based on the priority of each gateway (highest priority wins). If all of them
have the same priority, the gateway with the highest real IP address becomes the AVG. The AVG, in turn,
assigns a virtual MAC address to each member of the GLBP group. Each gateway that is assigned a virtual MAC
address is called an Active Virtual Forwarder (AVF). A GLBP group has a maximum of four AVFs. If there are more
than 4 gateways in a GLBP group, the rest become Standby Virtual Forwarders (SVFs), which will take the
place of an AVF in case of failure. The virtual MAC address in GLBP is 0007.b400.xxyy, where xx is the GLBP group
number and yy is a unique number for each gateway (01, 02, 03...).
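The 0007.b400.xxyy pattern can be expressed as a small helper. I am assuming here, following the worked example later in this tutorial, that the group and forwarder numbers are rendered as two hex digits each; the function name is invented for this sketch.

```python
# Build a GLBP virtual MAC of the form 0007.b400.xxyy, where xx is the
# group number and yy the forwarder number (two hex digits each).

def glbp_virtual_mac(group, forwarder):
    """Return the virtual MAC for a (group, forwarder) pair."""
    return "0007.b400.%02x%02x" % (group, forwarder)

print(glbp_virtual_mac(1, 1))   # 0007.b400.0101
print(glbp_virtual_mac(1, 4))   # 0007.b400.0104
```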
Note:
+ In this tutorial, the words gateway and router are used interchangeably. In fact, GLBP can run on both routers
and switches, so the word gateway, which can represent both, describes GLBP better.
+ For switches, GLBP is supported only on the Cisco 4500 and 6500 series.
The gateway with the highest priority among the remaining ones is elected the Standby AVG (SVG), which will take
over the role of the AVG in case the AVG goes down.

For example, in the topology above, suppose all of the gateways have the same priority and GLBP is turned on at the
same time on all gateways (or they are configured with the preempt feature). R4 will be elected AVG because of its
highest IP address, 10.10.10.4. R3 will be elected SVG because of its second-highest IP address (10.10.10.3). The
AVFs are elected based on weight, so the four highest weight values win the four AVF roles. In this case we
only have four gateways, so surely they are all elected AVFs. With GLBP, there is still one virtual IP address, which is
assigned by the administrator via the glbp ip command (for example glbp 1 ip 10.10.10.100).
How GLBP works

After the election ends, R4 is both the AVG and an AVF; R3 is the SVG and an AVF; R2 and R1 are pure AVFs. R4 assigns
the MAC addresses 0007.b400.0101, 0007.b400.0102, 0007.b400.0103 and 0007.b400.0104 to R1, R2, R3 and R4
respectively; we will abbreviate these MAC addresses as 01, 02, 03 and 04. Let's see how GLBP works!
The default gateways of PC1, PC2, PC3 and PC4 were set to 10.10.10.100, so if they want to send traffic outside they have
to send an ARP Request to their default gateway first. They broadcast an ARP Request to ask: "Hey, I need to know the
MAC address of the guy 10.10.10.100!". R4, which is the AVG, is responsible for answering the ARP Request. But
the trick here is that it does not always give the same answer to that question:
+ For PC1, R4 will answer: "The MAC address of the guy 10.10.10.100 is 01!"
+ For PC2, R4 will answer: "The MAC address of the guy 10.10.10.100 is 02!"
+ For PC3, R4 will answer: "The MAC address of the guy 10.10.10.100 is 03!"
+ For PC4, R4 will answer: "The MAC address of the guy 10.10.10.100 is 04!"

As a result, PC1 will send its traffic to R1, PC2 to R2, PC3 to R3 and PC4 to R4. And load balancing is achieved!
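The AVG's ARP behavior can be sketched as a toy Python class: each ARP request for the virtual IP is answered with the next forwarder's virtual MAC (abbreviated here as 01-04, as in the text). This models only the default round-robin mode; the class and method names are invented for the sketch.

```python
# Toy model of the AVG answering ARP requests for the virtual IP
# in round-robin fashion over the group's virtual MACs.
import itertools

class AvgArpResponder:
    def __init__(self, virtual_macs):
        self._cycle = itertools.cycle(virtual_macs)

    def answer_arp(self, asking_host):
        """Reply to one ARP request for the virtual IP.
        (In round-robin mode the asking host's identity does not matter.)"""
        return next(self._cycle)

avg = AvgArpResponder(["01", "02", "03", "04"])   # abbreviated vMACs
answers = [avg.answer_arp(pc) for pc in ("PC1", "PC2", "PC3", "PC4", "PC5")]
print(answers)   # ['01', '02', '03', '04', '01'] -- wraps around
```

A fifth host simply gets the first virtual MAC again, which is why round-robin suits any number of end hosts.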
When AVG fails
Everything is working smoothly when suddenly R4 (the AVG) goes down. What will happen now?
As we know, R3 was chosen as the SVG because of its second-highest priority, so when R4 goes down, R3 becomes the new
AVG and is also responsible for forwarding traffic sent to the virtual MAC address of R4. In other words, R3 is now
responsible for the traffic from PC3 and PC4, answering to the two MAC addresses 03 and 04. Communication continues
without disruption or any change on the host side.

Wait! Maybe you have a question to ask here. What about the switch? How can the switch forward the frames for
MAC 04 to its new owner on another port? Remember that the switch had learned MAC 04 on the port connecting to R4. Well, the
answer is that when the standby takes over, it sends a gratuitous ARP reply to flush the CAM tables of
the switches and the ARP caches of the hosts. So the switch will learn the new port for MAC 04.
Each AVF listens to the others; if one AVF can no longer forward traffic, the listening AVFs compete to take
responsibility for the failed AVF's virtual MAC along with their own (the AVF with the higher weighting wins).
To detect a gateway failure, GLBP members communicate with each other through hello messages sent every 3
seconds to the multicast address 224.0.0.102, User Datagram Protocol (UDP) port 3222.
GLBP supports up to 1024 virtual routers (GLBP groups) per physical interface of a router.
Load balancing algorithm
GLBP load sharing is done in one of three ways:
Round-robin load-balancing algorithm: Each router's MAC is used sequentially to respond to ARP requests. This is
the default load-balancing mode in GLBP and is suitable for any number of end hosts.
Weighted load-balancing algorithm: Traffic is balanced proportionally to a configured weight. Each GLBP router in
the group advertises its weighting and assignment, and the AVG acts based on that value. For example, if there
are two routers in a group and R1 has double the forwarding capacity of R2, the weighting value of R1
should be configured to be double that of R2.
Host-dependent load-balancing algorithm: A given host always uses the same router.
Interface Tracking
Like HSRP, GLBP can be configured to track interfaces. For example, if the WAN link on router R4 is lost, GLBP
detects the failure and decrements the gateway's weighting (when a tracked interface fails), and another router
takes over its forwarding role. This transition is transparent to the hosts.

GLBP Authentication
GLBP has three authentication types:
+ No authentication

+ MD5 authentication
+ Plain text authentication
MD5 is the most secure method so far. With this method, the same key is configured on both ends. One end sends a
hash of the key (computed with MD5) to the other. On the other side, the same key is hashed and
compared with the received hash. If the two hashes are the same, authentication succeeds. The
advantage of this method is that only the hash, never the key itself, is sent over the link. The key for the MD5
hash can either be given directly in the configuration using a key string or supplied indirectly through a key chain.

EtherChannel Tutorial
EtherChannel is a technology used to combine several physical links between switches or routers into one
logical connection and treat them as a single link. Let's take an example to see the benefits of this technology:
Suppose your company has two switches connecting with each other via a FastEthernet link (100Mbps):

Your company is growing and you need to transfer more than 100 Mbps between these switches. If you simply
connect additional links between the two switches it will not work, because Spanning Tree Protocol (STP) will block the
redundant links to prevent a loop:

To extend the capacity of the link you have two ways:


+ Buy two 1000Mbps (1Gbps) interfaces
+ Use EtherChannel technology to bundle the existing links into a bigger one
The first solution is expensive because new hardware must be installed on the two switches. By using EtherChannel you only
need some more unused ports on your switches:

EtherChannel bundles the physical links into one logical link with the combined bandwidth and it is awesome! STP
sees this link as a single link so STP will not block any links! EtherChannel also does load balancing among the links
in the channel automatically. If a link within the EtherChannel bundle fails, traffic previously carried over the failed
link is carried over the remaining links within the EtherChannel. If one of the links in the channel fails but at least
one of the links is up, the logical link (EtherChannel link) remains up.
EtherChannel also works well for router connections:

When an EtherChannel is created, a logical interface is created on the switches or routers representing that
EtherChannel. You can configure this logical interface the way you want; for example, assign access/trunk mode
on switches or assign an IP address to the logical interface on routers.
Note: A maximum of 8 Fast Ethernet or 8 Gigabit Ethernet ports can be grouped together to form an
EtherChannel.
There are three mechanisms you can choose to configure EtherChannel:
+ Port Aggregation Protocol (PAgP)
+ Link Aggregation Control Protocol (LACP)
+ Static (On)
LACP is the IEEE standard (IEEE 802.3ad) and is the most common dynamic EtherChannel protocol,
whereas PAgP is a Cisco proprietary protocol and works only between Cisco devices. All
ports in an EtherChannel must use the same protocol; you cannot run two different protocols on the two ends. In other words,
PAgP and LACP are not compatible, so both ends of a channel must use the same protocol.
The Static (or "on") mode bundles the links unconditionally and no negotiation protocol is used. In this
mode, neither PAgP nor LACP packets are sent or received.
(http://www.cisco.com/en/US/tech/tk389/tk213/technologies_tech_note09186a0080094714.shtml)
Next we will learn more about the three EtherChannel mechanisms above.
Port Aggregation Protocol (PAgP)
PAgP dynamically negotiates the formation of a channel. There are two PAgP modes:

Auto: Responds to PAgP messages but does not aggressively negotiate a PAgP EtherChannel. A channel is
formed only if the port on the other end is set to Desirable. This is the default mode.
Desirable: The port actively negotiates channeling status with the interface on the other end of the link. A channel is
formed if the other side is Auto or Desirable.

The table below lists whether an EtherChannel will be formed for PAgP:

PAgP        Desirable   Auto
Desirable   Yes         Yes
Auto        Yes         No

Link Aggregation Control Protocol (LACP)

LACP also dynamically negotiates the formation of a channel. There are two LACP modes:

Passive: Responds to LACP messages but does not aggressively negotiate a LACP EtherChannel. A channel is
formed only if the other end is set to Active.
Active: The port actively negotiates channeling with the interface on the other end of the link. A channel is formed if
the other side is Passive or Active.

The table below lists whether an EtherChannel will be formed for LACP:

LACP      Active   Passive
Active    Yes      Yes
Passive   Yes      No

In general, Auto mode in PAgP is the same as Passive mode in LACP, and Desirable mode is the same
as Active mode.
Auto = Passive
Desirable = Active
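Both compatibility tables collapse to the same rule, which can be captured in one small function: a channel forms unless both ends are in the passive-style mode (Auto for PAgP, Passive for LACP). The function assumes both ends already run the same protocol (remember PAgP and LACP cannot be mixed), and the lowercase mode names are just a convention for this sketch.

```python
# Channel-formation rule shared by PAgP (desirable/auto) and
# LACP (active/passive): it fails only when both ends are passive-style.

def channel_forms(mode_a, mode_b):
    """True if an EtherChannel is negotiated between the two modes."""
    passive_modes = {"auto", "passive"}
    return not (mode_a in passive_modes and mode_b in passive_modes)

print(channel_forms("desirable", "auto"))   # True
print(channel_forms("auto", "auto"))        # False
print(channel_forms("active", "passive"))   # True
```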
Static (On)
In this mode, no negotiation is needed. The interfaces become members of the EtherChannel immediately. When
using this mode, make sure the other end uses this mode too, because the switches will not check whether port parameters
match. Otherwise the EtherChannel will not come up and may cause trouble (like a loop).
Note: All interfaces in an EtherChannel must be configured identically to form an EtherChannel. Specific settings
that must be identical include:
+ Speed settings
+ Duplex settings
+ STP settings
+ VLAN membership (for access ports)
+ Native VLAN (for trunk ports)
+ Allowed VLANs (for trunk ports)
+ Trunking Encapsulation (ISL or 802.1Q, for trunk ports)

Note: EtherChannels will not form if either dynamic VLANs or port security are enabled on the participating
EtherChannel interfaces.
In the next part we will learn how to configure EtherChannel on switch/router interfaces.
EtherChannel Configuration
To assign a physical interface to an EtherChannel group, use the channel-group command in
interface configuration mode:
channel-group number mode { active | on | {auto [non-silent]} | {desirable [non-silent]} | passive}
For example we will create channel-group number 1:
Switch(config-if)#channel-group 1 mode ?
  active     Enable LACP unconditionally
  auto       Enable PAgP only if a PAgP device is detected
  desirable  Enable PAgP unconditionally
  on         Enable Etherchannel only
  passive    Enable LACP only if a LACP device is detected
If a port-channel interface has not been created before using this command, it will be created automatically and you
will see this line: Creating a port-channel interface Port-channel 1.
In this example, we will create an EtherChannel via LACP between SwA & SwB with the topology shown below:

SwA Configuration:

//Assign EtherChannel group 1 to fa0/0 and fa0/1 and set Active mode on them
SwA(config)#interface range fa0/0 - 1
SwA(config-if-range)#channel-group 1 mode active
Creating a port-channel interface Port-channel 1
//Next configure the representing port-channel interface as trunk
SwA(config)#interface port-channel 1
SwA(config-if)#switchport trunk encapsulation dot1q
SwA(config-if)#switchport mode trunk

SwB Configuration:

//Assign EtherChannel group 2 to fa0/5 and fa0/6 and set Passive mode on them
SwB(config)#interface range fa0/5 - 6
SwB(config-if-range)#channel-group 2 mode passive
Creating a port-channel interface Port-channel 2
//Next configure the representing port-channel interface as trunk
SwB(config)#interface port-channel 2
SwB(config-if)#switchport trunk encapsulation dot1q
SwB(config-if)#switchport mode trunk

That is all the configuration for the EtherChannel to work well on both switches. We can verify with the show
etherchannel <port-channel number> port-channel or show etherchannel summary command.
SwA# show etherchannel 1 port-channel
                Port-channels in the group:
                ---------------------------

Port-channel: Po1
------------

Age of the Port-channel   = 0d:00h:02m:37s
Logical slot/port   = 2/1          Number of ports = 2
HotStandBy port = null
GC                  = 0x00010001
Port state          = Port-channel Ag-Inuse
Protocol            =   LACP
Port security       = Disabled

Ports in the Port-channel:

Index   Load   Port     EC state        No of bits
------+------+------+------------------+-----------
  0     00     Fa0/0    Active             0
  0     00     Fa0/1    Active             0

Time since last port bundled:    0d:00h:02m:27s    Fa0/1

The show etherchannel number port-channel command can be used to display information about a specific
port channel (in this case port-channel 1). From the command above we can see Port-channel 1 consists of Fa0/0 &
Fa0/1 and they are in Active state.
SwA# show etherchannel summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator
        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port

Number of channel-groups in use: 1
Number of aggregators:           1

Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
1      Po1(SU)         LACP      Fa0/0(P)    Fa0/1(P)

The show etherchannel summary command can be used to display just one line of information per port-channel. In this
case we learn from the last line that Group 1 uses LACP. This is a Layer 2 EtherChannel (symbolized by SU, in
which S means Layer 2 and U means this port-channel is in use).
EtherChannel Load-Balancing
EtherChannel load-balances traffic among the port members of the same channel. Load balancing between member
interfaces can be based on:
+ Source MAC address
+ Destination MAC address
+ Source IP address
+ Destination IP address
+ Combinations of the above
Note: Some older switch/router platforms do not support all of the load-balancing methods above.

To configure the load-distribution method, use the port-channel load-balance command in global configuration
mode. For example, to load-balance based on destination MAC, use the command:
Router(config)#port-channel load-balance dst-mac
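To see why dst-mac load balancing keeps all frames for one destination on one member link, here is a minimal sketch in Python. The modulo hash over the last octet is an illustration only; real switches hash in hardware and the exact function is platform-specific:

```python
# Illustrative sketch of dst-mac load distribution over an EtherChannel.
# The hash here (low-order bits of the destination MAC, modulo the number of
# member links) is an assumption for demonstration, not Cisco's real algorithm.
def pick_member_link(dst_mac: str, members: list[str]) -> str:
    # Strip separators and take the last octet of the destination MAC.
    last_octet = int(dst_mac.replace(":", "").replace(".", "")[-2:], 16)
    # Same destination MAC always maps to the same member link.
    return members[last_octet % len(members)]

members = ["Fa0/0", "Fa0/1"]  # the two ports bundled into the port-channel
print(pick_member_link("0000.0c07.ac0a", members))  # Fa0/0
print(pick_member_link("0000.0c07.ac0b", members))  # Fa0/1
```

Note that this is also why a single large flow cannot exceed the bandwidth of one member link: every frame of that flow hashes to the same port.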

Hot Standby Router Protocol HSRP Tutorial


In this tutorial we will learn what HSRP is and why it is needed in a network.
Most companies in the world have a connection to the Internet. The picture below shows the simplest topology
of such a company:

To make above topology work we need to:


+ Configure IP addresses on two interfaces of the Router. Suppose the IP address of Fa0/0 interface (the interface
connecting to the switch) is 192.168.1.1.
+ Assign the IP addresses, default gateways and DNS servers on all PCs. In this case we have to set the default
gateways to Fa0/0 interface (with the IP address 192.168.1.1) of the router. This can be done manually or
automatically via DHCP.
After some time, your boss wants to implement a redundancy method so that even if the Router fails, all PCs can
still access the Internet without any manual reconfiguration at that time. So we need one more router connecting to
the Internet, as in the topology below:

But now we have a problem: there is only one default gateway on each host, so if Router1 is down and we want to
access the Internet via Router2, we have to change the default gateway (to 192.168.1.2). Also, when Router1
comes back we have to manually change the default gateway back to Router1's address. And no one can access the
Internet while the default gateway is being changed. HSRP can solve all these problems!
HSRP Operation
With HSRP, the two routers (Router1 and Router2 in this case) will be seen as only one router. HSRP uses a virtual MAC
and IP address for the two routers to present themselves to hosts as a single default gateway. For example, the virtual IP
address is 192.168.1.254 and the virtual MAC is 0000.0C07.AC0A. All the hosts will point their default gateway to
this IP address.

One router, through the election process, is designated as the active router while the other is designated
as the standby router. Both the active and standby routers listen, but only the active router forwards packets.
The standby router acts as a backup when the active router fails: it monitors the periodic hellos sent by the active
router (multicast to 224.0.0.2, UDP port 1985) to detect a failure of the active router.

When a failure of the active router is detected, the standby router assumes the role of the forwarding router.
Because the new forwarding router uses the same (virtual) IP and MAC addresses, the hosts see no disruption in
communication. A new standby router is also elected at that time (in case there are more than two routers in the
HSRP group).
Note: All routers in an HSRP group send hello packets. By default, the hello timer is set to 3 seconds and the dead
timer is set to 10 seconds. This means that a hello packet is sent between the HSRP standby group devices every 3
seconds, and the standby device becomes active when a hello packet has not been received for 10 seconds.

Note: The virtual MAC address of HSRP version 1 is 0000.0C07.ACxx, where xx is the HSRP group number in
hexadecimal based on the respective interface. For example, HSRP group 10 uses the HSRP virtual MAC address of
0000.0C07.AC0A. HSRP version 2 uses a virtual MAC address of 0000.0C9F.FXXX (XXX: HSRP group in
hexadecimal). But please notice that the virtual MAC address can be configured manually.
HSRP version 1 hello packets are sent to multicast address 224.0.0.2 while HSRP version 2 hello packets are sent to
multicast address 224.0.0.102. Currently HSRPv1 is the default version when running HSRP on Cisco devices.
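The group-number-to-virtual-MAC rule described above is mechanical, so it can be computed directly. A small sketch of that mapping (the function name is mine, not a real API):

```python
def hsrp_virtual_mac(group: int, version: int = 1) -> str:
    """Derive the default HSRP virtual MAC address from the group number.
    v1: 0000.0C07.ACxx (xx = group in hex), v2: 0000.0C9F.FXXX."""
    if version == 1:
        if not 0 <= group <= 255:
            raise ValueError("HSRPv1 group numbers are 0-255")
        return "0000.0c07.ac%02x" % group
    if not 0 <= group <= 4095:
        raise ValueError("HSRPv2 group numbers are 0-4095")
    return "0000.0c9f.f%03x" % group

print(hsrp_virtual_mac(10))     # 0000.0c07.ac0a  (the example above)
print(hsrp_virtual_mac(10, 2))  # 0000.0c9f.f00a
```

This also makes the range limits visible: one hex octet gives v1 its 256 groups, while three hex digits give v2 its 4096 groups.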
HSRP States
HSRP consists of 5 states:
Initial: This is the beginning state. It indicates HSRP is not running. It happens when the configuration changes
or the interface is first turned on.

Listen: The router knows both the IP and MAC address of the virtual router but it is not the active or standby router.
For example, if there are 3 routers in an HSRP group, the router which is not in the active or standby state will
remain in the listen state.

Speak: The router sends periodic HSRP hellos and participates in the election of the active or standby router.

Standby: In this state, the router monitors hellos from the active router and it will take the active state when the
current active router fails (no packets heard from the active router).

Active: The router forwards packets that are sent to the HSRP group. The router also sends periodic hello
messages.

Please notice that not all routers in an HSRP group go through all the states above. In an HSRP group, only one router
reaches the active state and one router reaches the standby state. The other routers stay in the listen state.
Now let's take an example of a router passing through these states. Suppose there are 2 routers, A and B, in the
network; router A is turned on first. It enters the initial state. Then it moves to the listen state, in which it tries to
hear if there are already active or standby routers for this group. After learning that no one has taken the active or
standby state, it decides to take part in the election by moving to the speak state. Now it starts sending hello messages
containing its priority. These messages are sent to the multicast address 224.0.0.2 (which can be heard by all
members in that group). When it does not hear a hello message with a higher priority, it assumes the role of active
router and moves to the active state. In this state, it continues sending out periodic hello messages.
Now router B is turned on. It also goes through the initial and listen states. In the listen state, it learns that router A
is already the active router and no other router is taking the standby role, so it enters the speak state to compete for
the standby role -> it promotes itself to standby router.
Suppose router A is in the active state while router B is in the standby state. If router B does not hear hello messages from
router A within the holdtime (10 seconds by default), router B goes into the speak state to announce its priority to all
HSRP members and compete for the active state. But if at some point it receives a message from the active router
that has a lower priority than its own (because the administrator changed the priority on either router), it can take
over the active role by sending out a hello packet with parameters indicating it wants to take over the active router.
This is called a coup hello message.
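The election outcome described above (highest priority wins the active role, the runner-up becomes standby, and the higher IP address breaks a priority tie) can be sketched as follows. This is a simplified model only; real HSRP also involves timers and preemption, and the router names and addresses here are made up:

```python
import ipaddress

def hsrp_election(routers):
    """routers: list of (name, priority, ip_address) tuples.
    Returns (active, standby) names. Highest priority wins;
    the numerically higher IP address breaks a tie."""
    ranked = sorted(routers,
                    key=lambda r: (r[1], int(ipaddress.ip_address(r[2]))),
                    reverse=True)
    return ranked[0][0], ranked[1][0]

routers = [("Router1", 110, "192.168.1.1"),
           ("Router2", 100, "192.168.1.2"),
           ("Router3", 100, "192.168.1.3")]
print(hsrp_election(routers))  # ('Router1', 'Router3')
```

Router1 wins the active role on priority alone; Router2 and Router3 tie at 100, so Router3's higher address makes it the standby, and Router2 stays in the listen state.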
Quick summarization:
+ HSRP is a Cisco-proprietary protocol which allows several routers or multilayer switches to appear as a single gateway IP
address.
+ HSRP has 5 states: Initial, listen, speak, standby and active.
+ HSRP allows multiple routers to share a virtual IP and MAC address so that the end-user hosts do not realize
when a failure occurs.
+ The active (or Master) router uses the virtual IP and MAC addresses.
+ Standby routers listen for Hellos from the Active router. A hello packet is sent every 3 seconds by default. The
hold time (dead interval) is 10 seconds.
+ Virtual MAC of 0000.0C07.ACxx , where xx is the hexadecimal number of HSRP group.
+ The group numbers of HSRP version 1 range from 0 to 255. HSRP does support group number 0 (we checked, and
in fact it is the default group number if you don't enter a group number in the configuration), so HSRP version 1
supports up to 256 groups. HSRP version 2 supports 4096 group numbers.

InterVLAN Routing Tutorial


In the previous VLAN tutorial we learned how to use VLAN to segment the network and create logical broadcast
domains. In this tutorial we will learn about InterVLAN Routing.
What is InterVLAN routing?
As we learned, devices within a VLAN can communicate with each other without the need for Layer 3 routing. But
devices in separate VLANs require a Layer 3 routing device to communicate with one another. For example, in the
topology below hosts A and B can communicate with each other without a router because they are in the same VLAN 10;
hosts C and D can communicate in the same VLAN 20. But host A can't communicate with host C or D because they are in
different VLANs.

To allow hosts in different VLANs to communicate with each other, we need a Layer 3 device (like a router) for routing:

Routing traffic from one VLAN to another VLAN is called InterVLAN routing.
Now host A can communicate with host C or D easily. Let's see how the traffic is sent from host A to host D.
First, traffic from host A is sent to the switch. The switch tags the frame as originating on VLAN 10 and checks the
destination. The switch knows the destination host is in a different VLAN, so it forwards that traffic to the router. In turn,
the router makes a routing decision from VLAN 10 to VLAN 20 and sends the traffic back to the switch, where it is
forwarded out to host D.

Notice that the routing decision to another VLAN is done by the router, not the switch. When frames leave the
router (step 3 in the picture above), they are tagged with VLAN 20.
Also notice that the receiving ends (hosts A & D in this case) are unaware of any VLAN information. The switch attaches
VLAN information when receiving frames from host A and removes it before forwarding to host D.
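The VLAN information the switch attaches on its trunk link is a 4-byte 802.1Q tag inserted after the source MAC address. As an illustration of what that tag looks like at the byte level, here is a sketch building one (the helper function is mine, not a real library API):

```python
import struct

def dot1q_tag(vlan_id: int, priority: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag inserted into a trunked Ethernet frame:
    TPID 0x8100, then PCP (3 bits), DEI (1 bit), VLAN ID (12 bits)."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("valid VLAN IDs are 1-4094")
    tci = (priority << 13) | vlan_id  # DEI bit left at 0
    return struct.pack("!HH", 0x8100, tci)

print(dot1q_tag(10).hex())  # 8100000a  (frame tagged for VLAN 10)
print(dot1q_tag(20).hex())  # 81000014  (frame tagged for VLAN 20)
```

The 12-bit VLAN ID field is also why a switch supports at most 4094 usable VLANs, and stripping these 4 bytes before delivery is exactly the "removes VLAN information" step above.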
But there is one disadvantage in the topology above: for each VLAN we need a physical connection from the router
to the switch, but in practice, the interfaces of a router are very limited. To overcome this problem, we can create
many logical interfaces on one physical interface. For example, from a physical interface fa0/0 we can create many
sub-interfaces like fa0/0.0, fa0/0.1... Now this router is often called a router on a stick (maybe because there is
only one physical link connecting to the router, so it looks like a router on a stick ^^)

The router treats each sub-interface as a separate physical interface in routing decisions -> data can be sent and
received on the same physical interface (but different sub-interfaces) without being dropped by the split-horizon rule,
in case you want to send routing updates through the router from one VLAN to another.

Configuring InterVLAN routing


Now you understand how InterVLAN routing works. To accomplish it, some configuration must be
implemented on both the router and the switch. Let's see what needs to be done to configure
InterVLAN routing in the router on a stick model using the above topology.
+ The switch port connected to the router interface must be configured as a trunk port.
+ The router sub-interfaces must be running a trunking protocol. The two popular trunking protocols in CCNA are
802.1Q (open standard) and InterSwitch Link (ISL, a Cisco-proprietary protocol).
+ Set IP address on each sub-interface.

To help you understand more clearly about InterVLAN, the main configuration of router & switch are shown below:
Configure trunk port on switch:
Switch(config)#interface f0/0
Switch(config-if)#no shutdown
Switch(config-if)#switchport mode trunk
Create sub-interfaces, set 802.1Q trunking protocol and ip address on each sub-interface

Router(config)#interface f0/0
Router(config-if)#no shutdown
(Note: The main interface f0/0 doesn't need an IP address but it must be turned on)
Router(config)#interface f0/0.0
Router(config-subif)#encapsulation dot1q 10
Router(config-subif)#ip address 192.168.1.1 255.255.255.0
Router(config-subif)#interface f0/0.1
Router(config-subif)#encapsulation dot1q 20
Router(config-subif)#ip address 192.168.2.1 255.255.255.0
(Note: In the encapsulation dot1q 10 command, 10 is the VLAN ID this sub-interface operates in)
I also list the full configuration of the above topology for your reference:
Configure VLAN
Switch(config)#vlan 10
Switch(config-vlan)#name SALES
Switch(config-vlan)#vlan 20
Switch(config-vlan)#name TECH
Set ports to access mode & assign ports to VLAN
Switch(config)#interface range fa0/1-2
Switch(config-if)#no shutdown
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)#interface range fa0/3-4
Switch(config-if)#no shutdown
Switch(config-if)#switchport mode access
Switch(config-if)# switchport access vlan 20
In practice, we often use a Layer 3 switch instead of a switch plus a router on a stick; this helps reduce the
complexity and cost of the topology.

Note: With this topology, we don't need to use a trunking protocol or the switchport mode trunk command. The
full configuration of the Layer 3 switch is listed below:
Switch configuration
ip routing
!
interface FastEthernet0/1
switchport access vlan 10
switchport mode access
!
interface FastEthernet0/2
switchport access vlan 20
switchport mode access
!
interface Vlan10
ip address 192.168.10.1 255.255.255.0
!
interface Vlan20
ip address 192.168.20.1 255.255.255.0
And on the hosts just assign IP addresses and default gateways (pointing to the corresponding VLAN interfaces) -> hosts in
different VLANs can communicate.
In summary, InterVLAN routing is used to permit devices on separate VLANs to communicate. In this tutorial you
need to remember these important terms:
+ Router-on-a-stick: a single physical interface routes traffic between multiple VLANs on a network.
+ Subinterfaces: multiple virtual interfaces associated with one physical interface. These subinterfaces are
configured in software on a router, and each is independently configured with an IP address and VLAN assignment.

Cisco Command Line Interface CLI


In the previous tutorial we learned about the boot sequence of a Cisco router/switch. After that, the router will allow
us to type commands, but in different modes only specific commands can be used. So in this tutorial we will learn
about the Command Line Interface (CLI) and the different modes on a Cisco router/switch.
Below lists popular modes in Cisco switch/router:
Router>                  User mode
Router#                  Privileged mode
Router(config)#          Configuration mode
Router(config-if)#       Interface level (within configuration mode)
Router(config-router)#   Routing engine level (within configuration mode)
Router(config-line)#     Line level (vty, tty, async) within configuration mode

Now let's discuss each mode in more detail.


User mode (Unprivileged mode)
In most cases this is the mode you will see on the screen after connecting to the router. This mode provides limited
access to the router. You are provided with a set of nondestructive commands that allow examination of certain router
configuration parameters (mostly to view statistics). You cannot, however, make any changes to the router
configuration.
Privileged mode
Also known as the Enabled mode, this mode allows greater examination of the router and provides a more robust
command set than the User mode. In Privileged mode, you have access to the configuration commands supplied in
the Global Configuration mode, meaning you can edit the configuration for the router.
Configuration mode
Also called the Global Configuration mode, this mode is entered from the Privileged mode and supplies the complete
command set for configuring the router. In this mode you can access interface level, routing engine level, line
level
Interface level
In some books, this level is also referred to as interface configuration mode or interface mode. In fact, it is a level
inside Configuration mode, so you can see the configuration part in its prompt (config-if). This level can be
accessed by typing a specific interface in Configuration mode. For example:
Router(config)#interface fa0/0
Router(config-if)#
But notice that the prompt doesn't tell you which interface is being configured, so be careful at this level while you
are configuring! This lack of information can easily make you configure the wrong interface!
Routing engine level
This is the level where we configure dynamic routing protocols (RIP, OSPF, EIGRP). You will learn about them later
in CCNA.
Line level
In this level we can configure Telnet, Console and AUX port parameters. Also notice that the prompt (config-line) is
used for all lines on the router, so you must be careful about which line you are configuring!
Note: The line here can be a physical Console port or a virtual connection like Telnet.

The image below shows how to access each mode and popular levels inside Configuration mode:

Learning about modes is not difficult and you will get familiar with them while configuring routers & switches. Just
pay a little attention to them each time you practice and surely you can grasp them easily.

Cisco Router Boot Sequence Tutorial


In this article we will learn about the main components of a Cisco router and how the boot process takes place.
Types of memory
Generally Cisco routers (and switches) contain four types of memory:
Read-Only Memory (ROM): ROM stores the router's bootstrap startup program, operating system software, and
power-on diagnostic test programs (POST).
Flash Memory: Generally referred to simply as flash, the IOS images are held here. Flash is erasable and
reprogrammable ROM. Flash memory content is retained by the router on reload.
Random-Access Memory (RAM): Stores operational information such as the routing tables and the running
configuration file. RAM contents are lost when the router is powered down or reloaded. Note that during boot the
router looks for an Internetwork Operating System (IOS) image in flash by default; the image is then loaded into
RAM to run.
Non-volatile RAM (NVRAM): NVRAM holds the router's startup configuration file. NVRAM contents are not lost
when the router is powered down or reloaded.
Some comparisons to help you remember easier:
+ RAM is a volatile memory so contents are lost on reload, where NVRAM and Flash contents are not.
+ NVRAM holds the startup configuration file, where RAM holds the running configuration file.
+ ROM contains a bootstrap program called ROM Monitor (or ROMmon). When a router is powered on, the bootstrap
runs a hardware diagnostic called POST (Power-On Self Test).
Router boot process
The following details the router boot process:
1. The router is powered on.
2. The router first runs Power-On Self Test (POST)
3. The bootstrap checks the Configuration Register value to decide where to load the IOS from. By default (the default
value of the Configuration Register is 0x2102), the router first looks for boot system commands in the
startup-config file. If it finds these commands, it runs them in the order they appear in startup-config to locate
the IOS. If not, the IOS image is loaded from flash. If the IOS is not found in flash, the bootstrap
can try to load the IOS from a TFTP server or from ROM (mini-IOS).
4. After the IOS is found, it is loaded into RAM.
5. The IOS attempts to load the configuration file (startup-config) from NVRAM to RAM. If the startup-config is not
found in NVRAM, the IOS attempts to load a configuration file from TFTP. If no TFTP server responds, the router
enters Setup Mode (Initial Configuration Mode).
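The lookup order in steps 3-5 can be summarized as a small sketch. The boolean flags and function names here are hypothetical inputs for illustration, not a real IOS API:

```python
def find_ios(boot_system_cmds, flash_has_ios, tftp_has_ios):
    """Mirror step 3: where the bootstrap looks for an IOS image,
    assuming the default configuration register (0x2102)."""
    if boot_system_cmds:          # boot system commands found in startup-config
        return boot_system_cmds[0]
    if flash_has_ios:             # otherwise, flash is tried first
        return "flash"
    if tftp_has_ios:              # then a TFTP server
        return "tftp"
    return "rom (mini-IOS)"       # last resort

def find_config(nvram_has_startup, tftp_has_config):
    """Mirror step 5: where the IOS looks for a configuration file."""
    if nvram_has_startup:
        return "nvram startup-config"
    if tftp_has_config:
        return "tftp config"
    return "setup mode"           # no config found anywhere

print(find_ios([], flash_has_ios=True, tftp_has_ios=False))          # flash
print(find_config(nvram_has_startup=False, tftp_has_config=False))   # setup mode
```

The key point the sketch captures is the strict fallback order: explicit boot system commands always win, and Setup Mode is reached only when every configuration source has failed.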

And this is the process we can see on our screen when the router is turned on:

In short, when powered on the router needs to do:


1. Run POST to check hardware
2. Search for a valid IOS (the Operating System of the router)
3. Search for a configuration file (all the configurations applied to this router)
Identifying how much RAM, NVRAM and flash a router has
Also, from the information shown above, we can learn some information about the router's model and its RAM,
flash and NVRAM memories, as shown below:

Note: The show version command also gives us this information.


All the above information is straightforward except the information about RAM. In some series of routers, the RAM
information is displayed by 2 parameters (in this case 60416K/5120K). The first parameter indicates how much RAM
is in the router while the second parameter (5120K) indicates how much DRAM is being used for Packet memory.
Packet memory is used for buffering packets.
So, from the output above we can learn:
Amount of RAM: 60416 + 5120 = 65536KB / 1024 = 64MB
Amount of NVRAM: 239KB
Amount of Flash: 62720KB
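The RAM arithmetic above is easy to verify in a couple of lines:

```python
# The two RAM parameters from show version sum to the installed memory.
main_kb, packet_kb = 60416, 5120
total_kb = main_kb + packet_kb
print(total_kb, "KB =", total_kb // 1024, "MB")  # 65536 KB = 64 MB
```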

OSI Model Tutorial


Welcome to the most basic tutorial for networkers! Understanding the OSI model is one of the most important
tools to help you grasp how networking devices like routers, switches and PCs work.
Let's take an example from our real life to demonstrate the OSI model. Maybe you have sent a letter to a
friend before, right? To do it, you have to follow these steps:
1. Write your letter
2. Insert it into an envelope
3. Write information about sender and receiver on that envelope
4. Stamp it
5. Go to the post office and drop it into a mailbox

From the example above, I want to imply that we have to go through some steps in a specific order to complete a task.
The same applies when two PCs communicate with each other. They have to use a predefined model, named OSI, to
complete each step. There are 7 steps in this model, as listed below:

This is the well-known table of the OSI model, so you must take time to learn it by heart. A popular way to
remember this table is to create a fun sentence with the first letters of each layer. For
example: All People Seem To Need Data Processing, or a funnier sentence sorted from layer 1 to layer
7: Please Do Not Throw Sausage Pizza Away.
There are two notices about this table:
1. First, the table is arranged from top to bottom (numbered from 7 to 1). Each step is called a layer, so we have
7 layers (we call them layers to make them sound more technical ^^).
When a device wants to send information to another one, its data must go from top to bottom layer. But when a
device receives this information, it must go from bottom to top to decapsulate it. In fact, the reverse action at the
other end is very natural in our life. It is very similar when two people communicate via mail. First, the writer must
write the letter, insert it into an envelope while the receiver must first open the envelope and then read the mail.
The picture below shows the whole process of sending and receiving information.

Note: The OSI model layers are often referred to by number rather than by name (for example, we say layer 3
instead of network layer), so you should learn the number of each layer as well.
2. When the information goes down through the layers (from top to bottom), a header is added to it. This is called
encapsulation because it is like wrapping an object in a capsule. Each header can be understood only by the
corresponding layer at the receiving side. Other layers only see that layer's header as a part of the data.

At the receiving side, the corresponding header is stripped off at the same layer where it was attached.
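Encapsulation and de-capsulation can be sketched as nested wrapping. The bracketed header strings below are stand-ins for real protocol headers, and only three layers are modeled to keep the sketch short:

```python
LAYERS = ["Transport", "Network", "DataLink"]  # headers added on the way down

def encapsulate(data: str) -> str:
    # Each layer prepends its own header; lower layers see everything
    # above them as opaque data.
    for layer in LAYERS:
        data = f"[{layer}]" + data
    return data

def decapsulate(pdu: str) -> str:
    # The receiver strips headers in the reverse order they were attached,
    # each layer reading only its own header.
    for layer in reversed(LAYERS):
        assert pdu.startswith(f"[{layer}]")
        pdu = pdu[len(layer) + 2:]
    return pdu

frame = encapsulate("hello")
print(frame)               # [DataLink][Network][Transport]hello
print(decapsulate(frame))  # hello
```

Notice that the outermost (last-added) header belongs to the lowest layer, which mirrors how a real frame carries the Data Link header first on the wire.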
Understand each layer
Layer 7 Application layer
This is the closest layer to the end user. It provides the interface between the applications we use and the
underlying layers. But notice that the programs you are using (like a web browser IE, Firefox or Opera) do not
belong to Application layer. Telnet, FTP, email client (SMTP), HyperText Transfer Protocol (HTTP) are examples of
Application layer.
Layer 6 Presentation layer
This layer ensures the presentation of data, that the communications passing through are in the appropriate form
for the recipient. In general, it acts as a translator of the network. For example, you want to send an email and the
Presentation will format your data into email format. Or you want to send photos to your friend, the Presentation
layer will format your data into GIF, JPG or PNG format.
Layer 5 Session layer
Layer 5 establishes, maintains and ends communication with the receiving device.
Layer 4 Transport layer
This layer maintains flow control of data and provides for error checking and recovery of data between the devices.
The most common example of Transport layer is Transmission Control Protocol (TCP) and User Datagram Protocol
(UDP).
Layer 3 Network layer
This layer provides logical addresses which routers will use to determine the path to the destination. In most cases,
the logic addresses here means the IP addresses (including source & destination IP addresses).
Layer 2 Data Link Layer
The Data Link layer formats the message into a data frame, and adds a header containing the hardware destination
and source address to it. This header is responsible for finding the next destination device on a local network.

Notice that layer 3 is responsible for finding the path to the final destination (network) but it doesn't care about who
will be the next receiver. It is Layer 2 that helps data reach the next destination.
This layer is subdivided into 2 sub-layers: Logical Link Control (LLC) and Media Access Control (MAC).
The LLC functions include:
+ Managing frames to upper and lower layers
+ Error Control
+ Flow control
The MAC sublayer carries the physical address of each device on the network. This address is more commonly called
a device's MAC address. A MAC address is a 48-bit address which is burned into the NIC card of the device by its
manufacturer.
Layer 1 Physical layer
The Physical Layer defines the physical characteristics of the network such as connections, voltage levels and
timing.
To help you remember the functions of each layer more easily, I created a fun story in which Henry (English) wants
to send a document to Charles (French) to demonstrate how the OSI model works.

Lastly, I summarize all the important functions of each layer in the table below (please remember them, they are
very important knowledge you need to know about OSI model):
For each layer: description, popular protocols, Protocol Data Unit (PDU), and devices operating in that layer.

Application
+ User interface
Popular protocols: HTTP, FTP, TFTP, Telnet, SNMP, DNS
PDU: Data

Presentation
+ Data representation, encryption & decryption
Popular formats: Video (WMV, AVI); Bitmap (JPG, BMP, PNG); Audio (WAV, MP3, WMA)
PDU: Data

Session
+ Set up, monitor & terminate the connection session
Popular protocols: SQL, RPC, NetBIOS names
PDU: Data

Transport
+ Flow control (Buffering, Windowing, Congestion Avoidance) helps prevent the loss of segments on the network
and the need for retransmission
Popular protocols: TCP (connection-oriented, reliable); UDP (connectionless, unreliable)
PDU: Segment

Network
+ Path determination
+ Source & destination logical addresses
Popular protocols: IP, IPX, AppleTalk
PDU: Packet/Datagram
Devices operating in this layer: Router

Data Link
+ Physical addresses
+ Includes 2 sub-layers: the upper Logical Link Control (LLC) and the lower Media Access Control (MAC)
Popular protocols: LAN; WAN (HDLC, PPP, Frame Relay)
PDU: Frame
Devices operating in this layer: Switch, Bridge

Physical
+ Encodes and transmits data bits (electric signals, radio signals)
Popular technologies: FDDI, Ethernet
PDU: Bit (0, 1)
Devices operating in this layer: Hub, Repeater

Note: In fact, OSI is just a theoretical model of networking. The practical model used in modern networks is the
TCP/IP model. You may think "Hmm, it's just theory and has no use in real life! I don't care!" but believe me, you
will use this model more often than the TCP/IP model, so take time to grasp it; you will not regret it, I promise :)

Subnetting Tutorial Subnetting Made Easy


In this article, we will learn how to subnet and make subnetting an easy task.
The table below summarizes the possible network numbers, the total number of each type, and the number of hosts
in each Class A, B, and C network.
Class A: default subnet mask 255.0.0.0 (/8); range 1.0.0.0 to 126.255.255.255
Class B: default subnet mask 255.255.0.0 (/16); range 128.0.0.0 to 191.255.255.255
Class C: default subnet mask 255.255.255.0 (/24); range 192.0.0.0 to 223.255.255.255

Table 1: Default subnet mask & range of each class


Class A addresses begin with a 0 bit. Therefore, all addresses from 1.0.0.0 to 126.255.255.255 belong to class A
(1=0000 0001; 126 = 0111 1110).
The 0.0.0.0 address is reserved for default routing and the 127.0.0.0 address is reserved for loopback testing, so
they don't belong to any class.
Class B addresses begin with a 1 bit followed by a 0 bit. Therefore, all addresses from 128.0.0.0 to 191.255.255.255
belong to class B (128 = 1000 0000; 191 = 1011 1111).
Class C addresses begin with two 1 bits and a 0 bit. Class C addresses range from 192.0.0.0 to 223.255.255.255
(192 = 1100 0000; 223 = 1101 1111).
Class D & E are used for multicast and research purposes, and we are not allowed to subnet them, so they are not
mentioned here.
Note: The number behind the slash notation (/) specifies how many bits are turned on (bit 1). For example:
+ /8 equals 1111 1111.0000 0000.0000 0000.0000 0000 -> 8 bits are turned on (bit 1)
+ /12 equals 1111 1111.1111 0000.0000 0000.0000 0000 -> 12 bits are turned on (bit 1)
+ /28 equals 1111 1111.1111 1111.1111 1111.1111 0000 -> 28 bits are turned on (bit 1)
+ /32 equals 1111 1111.1111 1111.1111 1111.1111 1111 -> 32 bits are turned on (bit 1) and this is also the
maximum value because all bits are turned on.
The slash notation (following with a number) is equivalent to a subnet mask. If you know the slash notation you can
figure out the subnet mask and vice versa. For example, /8 is equivalent to 255.0.0.0; /12 is equivalent to
255.240.0.0; /28 is equivalent to 255.255.255.240; /32 is equivalent to 255.255.255.255.
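The slash-to-mask conversion described above is purely mechanical, so it can be sketched in a few lines (the helper function is mine, for illustration):

```python
def prefix_to_mask(prefix: int) -> str:
    """Convert /prefix notation to a dotted-decimal subnet mask."""
    if not 0 <= prefix <= 32:
        raise ValueError("prefix must be between 0 and 32")
    # Set the top `prefix` bits of a 32-bit value to 1, the rest to 0.
    bits = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
    # Split the 32-bit value into four octets.
    return ".".join(str((bits >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(prefix_to_mask(8))   # 255.0.0.0
print(prefix_to_mask(12))  # 255.240.0.0
print(prefix_to_mask(28))  # 255.255.255.240
print(prefix_to_mask(32))  # 255.255.255.255
```

This also shows why the 1 bits must be contiguous from the left: the mask is nothing more than `prefix` ones followed by zeros.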

The Network & Host parts of each class by default

From the default subnet masks shown above, we can identify the network and host part of each class. Notice that
in the subnet mask, bit 1 represents the Network part while bit 0 represents the Host part (255 equals 1111 1111
and 0 equals 0000 0000 in binary form).
What is subnetting?
When changing a number in the Network part of an IP address we will be in a different network from the previous
address. For example, the IP address 11.0.0.1 belongs to class A and has a default subnet mask of 255.0.0.0; if we
change the number in the first octet (a block of 8 bits, the first octet is the leftmost 8 bits) we will create a different
network. For example, 12.0.0.1 is in a different network from 11.0.0.1. But if we change a number in the Host part,
we are still in the same Network. For example, 11.1.0.1 is in the same network of 11.0.0.1.
The problem here is: if we want to create 300 networks, how can we do that? In the above example, we can only
create different networks by changing the first octet, so we can create a maximum of 255 networks because the
first octet can only range from 1 to 255 (in fact far fewer, because class A only ranges from 1 to 126). Now
we have to use a technique called subnetting to achieve our purpose.
Subnetting means we borrow some bits from the Host part to add to the Network part. This allows us to
have more networks than using the default subnet mask. For example, we can borrow some bits in the next octet to
make the address 11.1.0.1 belong to a different network from 11.0.0.1.
How to subnet?
Do you remember that I said in the subnet mask, bit 1 represents the Network part while bit 0 represents the Host
part? Well, this also means that we can specify how many bits we want to borrow by changing that many 0 bits to
1 bits in the subnet mask.
Let's come back to our example with the IP 11.0.0.1. We will write all numbers in binary form to reveal what a
computer really sees in an IP address.

Now you can clearly see that the subnet mask will decide which is the Network part, which is the Host part. By
borrowing 8 bits, our subnet mask will be like this:

After changing the second octet of the subnet mask from all 0s to all 1s, the Network part is now extended. Now
we can create new networks by changing numbers in the first or second octet. This greatly increases the number of
networks we can create. With this new subnet mask, IP 11.1.0.1 is in a different network from IP 11.0.0.1 because
the "1" in the second octet now belongs to the Network part.
So, in conclusion, we subnet by borrowing bits 0 from the Host portion and converting them to bits 1. The number
of borrowed bits depends on how many networks we need.

Note: A rule of borrowing bits is that we can only borrow bits 0 from left to right without skipping any bit 0. For
example, you can borrow like this: 1111 1111.1100 0000.0000 0000.0000 0000 but not like this: 1111 1111.1010
0000.0000 0000.0000 0000. In general, just make sure all your bits 1 are contiguous on the left and all your bits
0 are contiguous on the right.
In the next part we will learn how to calculate the number of sub-networks and hosts-per-subnet
Calculate how many networks and hosts-per-subnet
In our example, you may raise a question: when we borrow 8 bits, how many sub-networks and how many hosts
per sub-network does it create?
Note: From now on, we will call sub-networks "subnets". This term is very popular so you should be familiar with it.
How many new subnets?
Because we can change any bit in the second octet to create a new subnet, and each bit can be 0 or 1, with this
subnet mask (255.255.0.0) we can create 2^8 subnets. From here we can deduce the formula to calculate the
newly created subnets. Suppose n is the number of bits we borrow:

The number of newly created subnets = 2^n

In our example, we borrow 8 bits so we will have 2^n = 2^8 = 256 subnets!
How many hosts per subnet?
The number of hosts per subnet depends on the Host part, which is indicated by the 0 bits of the subnet
mask. So suppose k is the number of bits 0 in the subnet mask. The formula to calculate the number of hosts is
2^k. But notice that in each subnet there are two addresses we can't assign to hosts because they are used for the
network address & broadcast address. Thus we must subtract 2 from the result. Therefore the formula should be:

The number of hosts per subnet = 2^k - 2

In our example, the number of bits 0 in the subnet mask 255.255.0.0 (in binary form) is 16 so we will have 2^k - 2
= 2^16 - 2 = 65534 hosts-per-subnet!
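The two formulas can be verified with a short Python sketch (the function name subnet_counts is just for illustration, not CCNA terminology):

```python
def subnet_counts(borrowed_bits, host_bits):
    """Return (number of subnets, usable hosts per subnet)."""
    subnets = 2 ** borrowed_bits       # 2^n newly created subnets
    hosts = 2 ** host_bits - 2         # 2^k minus network & broadcast addresses
    return subnets, hosts

# Borrowing 8 bits from a class A network leaves 16 host bits:
print(subnet_counts(8, 16))   # (256, 65534)
```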
Some other examples
Well, practice makes perfect so we should have some more exercises to be familiar with them. But remember that
this is only the beginning in your journey to become a subnetting guru :)
Exercise 1
Your company has just been assigned the network 4.0.0.0. How many subnets and hosts-per-subnet can you create
with a subnet mask of 255.255.255.0?
(Please try to solve by yourself before reading the solution ^^)
Solution
First of all you have to specify which class this network belongs to. According to Table 1, it belongs to class A
(simply, class A ranges from 1 to 126) and its default subnet mask is 255.0.0.0. Therefore if we use a subnet mask
of 255.255.255.0, it means we borrowed 16 bits (to convert from 0 to 1).
255.0.0.0 = 1111 1111.0000 0000.0000 0000.0000 0000
255.255.255.0 = 1111 1111.1111 1111.1111 1111.0000 0000
Now use our above formulas to find the answers:
The number of newly created subnets = 2^16 = 65536 (where 16 is the number of borrowed bits)
The number of hosts per subnet = 2^8 - 2 = 254 (where 8 is the number of bits 0 left in the 255.255.255.0 subnet mask)
Exercise 2
Your company has just been assigned the network 130.0.0.0. How many subnets and hosts-per-subnet can you
create with a subnet mask of 255.255.128.0?
(Please try to solve by yourself before reading the solution ^^)

Solution
130.0.0.0 belongs to class B with the default subnet mask of 255.255.0.0. But isn't the subnet mask of 255.255.128.0
strange? OK, let's write the subnet masks in binary:
255.255.128.0 = 1111 1111.1111 1111.1000 0000.0000 0000
This is a valid subnet mask because all bits 1 and 0 are contiguous. Compared to the default subnet mask, we
borrowed only 1 bit:
255.255.0.0 = 1111 1111.1111 1111.0000 0000.0000 0000
Therefore:
The number of newly created subnets = 2^1 = 2 (where 1 is the number of borrowed bits)
The number of hosts per subnet = 2^15 - 2 = 32766 (where 15 is the number of bits 0 left in the 255.255.128.0 subnet mask)
Exercise 3
Your company has just been assigned the network 198.23.16.0/28. How many subnets and hosts-per-subnet can
you create with a subnet mask of 255.255.255.252?
(Please try to solve by yourself before reading the solution ^^)
Solution
In this exercise, your company was given an already-subnetted network from the beginning, so it is not using the default
subnet mask. We will compare the two subnet masks:
/28 = 1111 1111.1111 1111.1111 1111.1111 0000 (=255.255.255.240)
255.255.255.252 = 1111 1111.1111 1111.1111 1111.1111 1100 (= /30)
In this case we borrowed 2 bits. Therefore:
The number of newly created subnets = 2^2 = 4 (where 2 is the number of borrowed bits)
The number of hosts per subnet = 2^2 - 2 = 2 (where 2 is the number of bits 0 left in the 255.255.255.252 subnet mask)
In this exercise I want to go a bit deeper into the subnets created. We learned there are 4 created subnets but what
are they? To find out, we should write all things in binary:

Because the two subnet masks (/28 & /30) differ only in the 4th octet, we don't care about the first three octets. In the
4th octet we are allowed to change 2 bits of the IP address to create a new subnet, so there are
4 values we can use: 00, 01, 10 & 11. After changing them, we convert back to decimal numbers. We get 4
subnets:
+ First subnet: 198.23.16.0/30 (the 4th octet is 00000000)
+ Second subnet: 198.23.16.4/30 (the 4th octet is 00000100)
+ Third subnet: 198.23.16.8/30 (the 4th octet is 00001000)
+ Fourth subnet: 198.23.16.12/30 (the 4th octet is 00001100)

So how about the hosts in each subnet? Notice that these 4 subnets are contiguous, so we can deduce the range
of each subnet:

+ First subnet: ranges from 198.23.16.0 to 198.23.16.3
+ Second subnet: ranges from 198.23.16.4 to 198.23.16.7
+ Third subnet: ranges from 198.23.16.8 to 198.23.16.11
+ Fourth subnet: ranges from 198.23.16.12 to 198.23.16.15

Let's analyze the first subnet, which ranges from 198.23.16.0 to 198.23.16.3. Notice that all networks (and subnets)
have a network address and a broadcast address. In this case, the network address is 198.23.16.0 and the
broadcast address is 198.23.16.3, and they are not assignable or usable for hosts. This is the reason why we have to
subtract 2 in the formula "the number of hosts per subnet = 2^k - 2". After eliminating these 2 addresses we have 2
addresses left (198.23.16.1 & 198.23.16.2), as calculated above.
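If you have Python handy, the standard ipaddress module can confirm these reserved addresses (a quick check of the result, not part of the exam method):

```python
import ipaddress

# The first /30 subnet from Exercise 3:
subnet = ipaddress.ip_network("198.23.16.0/30")

print(subnet.network_address)              # 198.23.16.0 (not assignable)
print(subnet.broadcast_address)            # 198.23.16.3 (not assignable)
print([str(h) for h in subnet.hosts()])    # ['198.23.16.1', '198.23.16.2']
```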
In the next part we will learn how to calculate subnets quickly. This is also a must for CCNA, so you
have to grasp it.
In the previous examples, we had to write all subnet masks and IP addresses in binary to find the
results. That is a boring and time-consuming task. In this part I will show you a shortcut to subnet without using a
calculator or rough paper!
Subnetting: the quick & easy way
One important thing we should notice is that a valid subnet mask must have all bits 1 and bits 0 contiguous, with
the bits 1 on the left and the bits 0 on the right. Therefore each octet has only 8 possible non-zero values:

Table 2 lists all valid subnet masks


This is a very important table for subnetting quickly! Please take some time to learn it by heart. Make sure you
remember the position of the right-most bit 1 (the least significant bit 1, shown in red in the table above) and its
equivalent decimal value.
In most cases, this table is used to quickly convert a number from decimal to binary value without any calculation.
For example, you can quickly convert the 4th octet of the subnet mask 255.255.255.248 to 11111000. Or if you are
given a subnet mask of /29 you will know it equals 255.255.255.248 (since /24 is the default subnet mask of
class C, /29 has its right-most bit 1 at the 5th position of the 4th octet).
Try to practice with these questions:
+ /28 in binary form?
+ 255.255.224.0 in binary form?
+ 255.192.0.0 in slash notation form?
+ /26 in binary form?
+ 255.128.0.0 in binary form?
+ 248.0.0.0 in slash notation form?

(Please try to solve by yourself before reading the solution)


Answers:

+ /28 -> 1111 1111.1111 1111.1111 1111.1111 0000
+ 255.255.224.0 -> 1111 1111.1111 1111.1110 0000.0000 0000
+ 255.192.0.0 -> /10
+ /26 -> 1111 1111.1111 1111.1111 1111.1100 0000
+ 255.128.0.0 -> 1111 1111.1000 0000.0000 0000.0000 0000
+ 248.0.0.0 -> /5
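As a way to check your answers, Python's ipaddress module can convert between the two notations (the helper names prefix_to_mask and mask_to_prefix are just for illustration):

```python
import ipaddress

def prefix_to_mask(prefix):
    """/n slash notation -> dotted-decimal subnet mask."""
    return str(ipaddress.ip_network(f"0.0.0.0/{prefix}").netmask)

def mask_to_prefix(mask):
    """Dotted-decimal subnet mask -> /n prefix length."""
    return ipaddress.ip_network(f"0.0.0.0/{mask}").prefixlen

print(prefix_to_mask(29))             # 255.255.255.248
print(mask_to_prefix("255.192.0.0"))  # 10
print(mask_to_prefix("248.0.0.0"))    # 5
```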

How to find out the increment number?


The increment is the heart of subnetting; if you can find out the increment, you can find all the information to solve
a subnetting question. So it is usually the first thing you must find out in a subnetting question.
The increment number specifies how big your subnets are. Let's take an example of the increment
number. Do you remember the subnets in Exercise 3 in the previous part? By changing bits in the Network part,
we found 4 subnets:
+ First subnet: 198.23.16.0/30 (the 4th octet is 00000000)
+ Second subnet: 198.23.16.4/30 (the 4th octet is 00000100)
+ Third subnet: 198.23.16.8/30 (the 4th octet is 00001000)
+ Fourth subnet: 198.23.16.12/30 (the 4th octet is 00001100)

In this case the increment is 4 (in the 4th octet) because the difference between two successive subnets is 4
(from 0 -> 4; from 4 -> 8; from 8 -> 12)
There are 2 popular ways to find out the increment number:
1) Use the formula:
Increment = 256 - x
in which x is the first octet (counting from the left) that is smaller than 255 in the subnet mask. For example:
+ In a subnet mask of 255.224.0.0 -> x = 224
+ In a subnet mask of /29 -> x = 248 (because /29 = 255.255.255.248)
+ In a subnet mask of 1111 1111.1111 1100.0000 0000.0000 0000 -> x = 252
In the case you see a subnet mask of 255.255.255.255 (which is very rare in CCNA), x = 255
Note: Also remember which octet x belongs to, because we have to add the increment to that octet.
Now let's solve Exercise 3 again by using this formula:
Exercise 3 once again (with the formula 256 - x):
Your company has just been assigned the network 198.23.16.0/28. How many subnets and hosts-per-subnet can
you create with a subnet mask of 255.255.255.252?
The subnet mask is 255.255.255.252 -> x = 252 (x belongs to the 4th octet)
Therefore the Increment = 256 - 252 = 4
The initial network 198.23.16.0/28 is also the first subnet, so:
+ The first subnet: 198.23.16.0/30
+ The second subnet: 198.23.16.4/30 (the increment is 4, so we add it to the previous network address to get the next one: 0 + 4 = 4)
+ The third subnet: 198.23.16.8/30 (4 + 4 = 8)
+ The fourth subnet: 198.23.16.12/30 (8 + 4 = 12)
Note: We know there are only 4 subnets because we borrow 2 bits.
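The 256 - x method is easy to check in a few lines of Python (a quick sketch of the arithmetic above, nothing more):

```python
# x is the first octet of the subnet mask that is smaller than 255
# (here 252, the 4th octet of 255.255.255.252)
x = 252
increment = 256 - x
print(increment)  # 4

# Enumerate the 4 subnets of 198.23.16.0/28 when subnetted to /30,
# adding the increment to the 4th octet each time:
subnets = [f"198.23.16.{i * increment}/30" for i in range(4)]
print(subnets)  # ['198.23.16.0/30', '198.23.16.4/30', '198.23.16.8/30', '198.23.16.12/30']
```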
2) Learn by heart the decimal value of the rightmost bit 1 in the subnet mask:
Another way to find the increment value is to write x in binary; for example, x = 240 = 11110000. The
decimal value of the rightmost bit 1 is the increment value. In this case it equals 16.
The table below summarizes the decimal values of bit 1 depending on its position. To use this method, you should
learn by heart this table:

Table 3 How to find out increment based on the least-significant (rightmost) bit 1
Now lets solve Exercise 3 again by using this method:
Exercise 3 once again (with the "decimal value of the rightmost bit 1" method):
Your company has just been assigned the network 198.23.16.0/28. How many subnets and hosts-per-subnet can
you create with a subnet mask of 255.255.255.252?
First use Table 2 to convert 252 to 1111 1100. The decimal value of the rightmost bit 1 is 4 (according to Table 3)
-> The Increment is 4.
After finding the increment we can deduce the 4 subnets it creates.
The initial network 198.23.16.0/28 is also the first subnet, so:
+ The first subnet: 198.23.16.0/30
+ The second subnet: 198.23.16.4/30 (the increment is 4, so we add it to the previous network address to get the next one: 0 + 4 = 4)
+ The third subnet: 198.23.16.8/30 (4 + 4 = 8)
+ The fourth subnet: 198.23.16.12/30 (8 + 4 = 12)
Note: We should only choose one method to use and try to practice, practice & practice more with it. Practice until
you can solve any subnetting questions within 20 seconds!
Maybe you will ask why 256 - x can help you find the increment. In fact, by using the formula Increment = 256 - x you
are trying to separate the rightmost bit 1 from the other bits:
256 - x = (255 - x) + 1
in which 255 - x converts all bits 0 to bits 1 and all bits 1 to bits 0, while the "+ 1" part makes our result
have only one bit 1 left. For example, if x = 240 then:

So in fact we can say the two methods above are the same!
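For the curious, the equivalence can also be checked with the classic two's-complement trick x & -x, which isolates the rightmost bit 1. Note that the equality with 256 - x holds only for valid mask octets (all bits 1 contiguous on the left); this is a side observation, not part of the quick method itself:

```python
def rightmost_bit_value(x):
    """Decimal value of the least-significant bit 1 of x (two's-complement trick)."""
    return x & -x

# For every valid mask octet, 256 - x isolates exactly the same bit:
for x in (128, 192, 224, 240, 248, 252, 254):
    assert 256 - x == rightmost_bit_value(x)

print(rightmost_bit_value(240))  # 16, the increment for a mask octet of 240
```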


Now you have learned everything necessary to become a subnetting guru. Please take some time to practice as much as
possible; only practice makes perfect! Below are some subnetting questions you can practice with:
+ http://www.9tut.com/ccna-subnetting
+ http://www.9tut.com/ccna-subnetting-questions-2
+ http://www.9tut.com/ccna-subnetting-questions-3
+ http://www.9tut.com/ccna-subnetting-questions-4
+ http://www.9tut.net/icnd1/subnetting
+ http://www.9tut.net/icnd2/icnd2-subnetting

(Please try solving them before reading the answers ^^)

Frame Relay Tutorial


Let's start this article with the question: why do we need Frame Relay?
Let's take a simple example. Suppose you are working in a big company and your company has just expanded to
two new locations. The main site is connected to two branch offices, named Branch 1 & Branch 2, and your boss
wants these two branches to communicate with the main site. The simplest solution is to connect them directly
(via leased lines) as shown below:

To connect to these two branches, the main site router, HeadQuarter, requires two serial interfaces which a router
can provide. But what happens when the company expands to 10 branches, 50 branches? For each point-to-point
line, HeadQuarter needs a separate physical serial interface (and maybe a separate CSU/DSU if it is not integrated
into the WAN card). As you can imagine, it will need many routers with many interfaces and lots of rack space for
the routers and CSU/DSUs. Maybe we should use another solution for this problem? Luckily, Frame Relay can do it!
By using Frame Relay we only need one serial interface at the HeadQuarter to connect to all branches. This is also
true when we expand to 10 or 50 branches. Moreover, the cost is much lower than using leased lines.

Frame Relay is a high-performance WAN protocol that operates at the physical and data link layers of the OSI
reference model. It offers lower-cost data transfer when compared to typical point-to-point applications, by using
virtual connections within the frame relay network and by combining those connections into a single physical
connection at each location. Frame relay providers use a frame relay switch to route the data on each virtual circuit
to the appropriate destination.
Maybe this Frame Relay terminology is difficult to understand at first, so we will explain it in more detail in this
article.
DCE & DTE
The first concept in Frame Relay you must grasp is about DTE & DCE:
+ Data terminal equipment (DTE), which is actually the user device and the logical Frame-relay end-system
+ Data communication equipment (DCE, also called data circuit-terminating equipment), which consists of modem
and packet switch
In general, the routers are considered DTE, and the Frame Relay switches are DCE. The purpose of DCE equipment
is to provide clocking and switching services in a network. In our example, HeadQuarter, Branch 1 & Branch 2 are
DTEs while Frame Relay switches are DCEs.
Virtual Circuits

The logical connection through the Frame Relay network between two DTEs is called a virtual circuit (VC). The term
virtual here means that the two DTEs are not connected directly but through a network. For example, the
HeadQuarter & Branch 1 (or Branch 2) can communicate with each other as if they were directly connected but in
fact they are connected through a Frame Relay network with many Frame Relay switches between them.

There are two types of VCs


+ switched virtual circuits (SVCs): are temporary connections that are only used when there is sporadic data
transfer between DTE devices across the Frame Relay network. SVC is set up dynamically when needed. SVC
connections require call setup and termination for each connection.
+ permanent virtual circuits (PVCs): A predefined VC. A PVC can be equated to a leased line in concept.
Nowadays most service providers offer only PVC service, which saves the additional costs of signaling and billing procedures.
In this part we will continue to discuss about other important Frame Relay parameters
DLCI
Although the above picture shows two VCs from HeadQuarter, do you remember that HeadQuarter
has only one serial interface? So how can it know which branch it should send a frame to?
Frame Relay uses data-link connection identifiers (DLCIs) to build up logical circuits. The identifiers have local
meaning only; that means their values are unique per router, but not necessarily on other routers. For
example, there is only one DLCI of 23 representing the connection from HeadQuarter to Branch 1 and only one
DLCI of 51 from HeadQuarter to Branch 2. Branch 1 can use the same DLCI of 23 to represent the connection from
it to HeadQuarter. Of course it can use other DLCIs as well, because DLCIs are only locally significant.

By including a DLCI number in the Frame Relay header, HeadQuarter can communicate with both Branch 1 and
Branch 2 over the same physical circuit.
DLCI values typically are assigned by the Frame Relay service provider (for example, the telephone company). In
Frame Relay, DLCI is a 10-bit field.
Before DLCI can be used to route traffic, it must be associated with the IP address of its remote router. For
example, suppose that:
+ HeadQuarter's IP address is 9.9.9.9
+ Branch 1's IP address is 1.1.1.1
+ Branch 2's IP address is 2.2.2.2

Then HeadQuarter will need to map Branch 1's IP address to DLCI 23 & Branch 2's IP address to DLCI 51.
After that it can encapsulate data inside a Frame Relay frame with the appropriate DLCI number and send it to the
destination. The mapping of DLCIs to Layer 3 addresses can be handled manually or dynamically.

* Manually (static): the administrators can statically assign a DLCI to the remote IP address by the following
statement:
Router(config-if)#frame-relay map protocol protocol-address dlci [broadcast]
For example HeadQuarter can assign DLCIs of 23 & 51 to Branch 1 & Branch 2 with these commands:
HeadQuarter(config-if)#frame-relay map ip 1.1.1.1 23 broadcast
HeadQuarter(config-if)#frame-relay map ip 2.2.2.2 51 broadcast
We should use the broadcast keyword here because by default split-horizon will prevent routing updates from
being sent back out the same interface on which they were received. For example, if Branch 1 sends an update to HeadQuarter then
HeadQuarter can't send that update to Branch 2 because they are received and sent on the same interface. By
using the broadcast keyword, we are telling HeadQuarter to send a copy of any broadcast or multicast packet
received on that interface to the virtual circuit specified by the DLCI value in the frame-relay map statement. In
fact the copied packet is sent via unicast (not broadcast), so sometimes it is called pseudo-broadcast.
Note: frame-relay interface-dlci command can be used to statically assign (bind) a DLCI number to a physical
interface.
Note: In fact, we need to run a routing protocol (like OSPF, EIGRP or RIP) to make different networks see each
other
* Dynamic: the router can send an Inverse ARP Request to the other end of the PVC to learn its Layer 3 address. In
short, Inverse ARP attempts to learn its neighboring devices' IP addresses and automatically builds a dynamic
map table. By default, physical interfaces have Inverse ARP enabled.
We will take an example of how Inverse ARP works with the topology above. At the beginning, no routers are
configured with static mappings and HeadQuarter has not learned the IP addresses of Branch 1 & 2 yet. It only has 2
DLCI values on its s0/0 interface (23 & 51). Now it needs to find out what is attached to these DLCIs, so it sends an
Inverse ARP Request on the s0/0 interface. Notice that the router sends an Inverse ARP Request out on every DLCI
associated with the interface.

In the Inverse ARP Request, HeadQuarter also includes its IP 9.9.9.9. When Branch 1 & 2 receive this request, they
send back an Inverse ARP Reply with their own IP addresses.

Now all the routers have a pair of DLCI & IP address of the router at the other end so data can be forwarded to the
right destination.
In this example you can see that each router has a DLCI first (Layer 2) and it needs to find out the IP address
(Layer 3). This process is opposite of the ARP process (ARP translates Layer 3 address to Layer 2 address) so it is
called Inverse ARP.
After the Inverse ARP process completes, we can use the show frame-relay map command to check. The word dynamic
indicates the mapping was learned through Inverse ARP (the output below is not related to the above topology):

By default, routers send Inverse ARP messages on all active DLCIs every 60 seconds.
Another thing you should notice is when you supply a static map (via frame-relay map command), Inverse ARP is
automatically disabled for the specified protocol on the specified DLCI.
In the last part we will mainly learn about LMI, which is the signaling protocol of Frame Relay
LMI
Local Management Interface (LMI) is a signaling standard protocol used between your router (DTE) and the first
Frame Relay switch. The LMI is responsible for managing the connection and maintaining the status of your PVC.

LMI includes:
+ A keepalive mechanism, which verifies that data is flowing
+ A multicast mechanism, which provides the network server (router) with its local DLCI
+ A status mechanism, which provides PVC statuses on the DLCIs known to the switch

In our example, when HeadQuarter is configured with Frame Relay, it sends an LMI Status Inquiry message to the
DCE. The response from the DCE might be a small Hello message or a full status report about the PVCs in use
containing details of all the VCs configured (DLCI 23 & 51). By default, LMI messages are sent out every 10
seconds.
The four possible PVC states are as follows:
+ Active state: Indicates that the connection is active and that routers can exchange data.
+ Inactive state: Indicates that the local connection to the Frame Relay switch is working, but the remote router
connection to the Frame Relay switch is not working.
+ Deleted state: Indicates that no LMI is being received from the Frame Relay switch, or that there is no service
between the customer router and Frame Relay switch.
+ Static state: the Local Management Interface (LMI) mechanism on the interface is disabled (by using the no
keepalive command). This status is rarely seen so it is ignored in some books.
We can use the show frame-relay lmi command to display LMI statistics of Frame Relay on enabled interfaces of the router.
The output shows the LMI type used by the Frame Relay interface and the counters for the LMI status exchange
sequence, including errors such as LMI timeouts.

Cisco routers support the following three LMI types:


* Cisco: LMI type defined jointly by Cisco, StrataCom, Northern Telecom (Nortel), and Digital Equipment
Corporation
* ANSI: ANSI T1.617 Annex D
* Q.933A: ITU-T Q.933 Annex A
Notice that the three LMI types are not compatible with each other, so the LMI type must match between the
provider Frame Relay switch and the customer DTE device.
From Cisco IOS Release 11.2, the router attempts to automatically detect the type of LMI used by the provider
switch.
Note: LMI is required for Inverse ARP to function because it needs to know that the PVC is up before sending out
Inverse ARP Request.
Now you have learned most of the Frame Relay topics mentioned in CCNA; some other Frame Relay characteristics you should know
are mentioned below.

Other Frame Relay characteristics


+ Frame Relay provides no error recovery mechanism. It only provides CRC error detection.
+ Unlike with LANs, you cannot send a data link layer broadcast over Frame Relay. Therefore, Frame Relay
networks are called nonbroadcast multiaccess (NBMA) networks.
+ Depending on the bandwidth needed for each virtual connection, the customer can order a circuit with a
guaranteed amount of bandwidth. This amount is the Committed Information Rate (CIR). CIR defines how much
bandwidth the customer is guaranteed during normal network operation. Any data transmitted above this
purchased rate (CIR) is eligible for discard by the network if the network doesn't have available bandwidth.
+ If the Frame relay switch begins to experience congestion, it sends the upstream site (to the source) a Backward
explicit congestion notification (BECN) and the downstream site (to the destination) a Forward explicit
congestion notification (FECN).

+ There are two Frame Relay encapsulation types: the Cisco encapsulation and the IETF Frame Relay
encapsulation, which is in conformance with RFC 1490 and RFC 2427. The former is often used to connect two
Cisco routers while the latter is used to connect a Cisco router to a non-Cisco router.
+ Frame Relay does not define the way the data is transmitted within the service provider's network once the traffic
reaches the provider's switch. So the providers can use Frame Relay, ATM or PPP inside their networks.
Layer 2 Encapsulation Protocols
Besides Frame Relay there are other Layer 2 Encapsulation Protocols that you can implement instead:
High-Level Data Link Control (HDLC): The default encapsulation type for Cisco routers on point-to-point
dedicated links and circuit-switched connections. HDLC is a Cisco proprietary protocol.
Point-to-Point Protocol (PPP): Provides connections between devices over several types of physical interfaces,
such as asynchronous serial, High-Speed Serial Interface (HSSI), ISDN, and synchronous serial. PPP works with many
network layer protocols, including IP and IPX. PPP can use either Password Authentication Protocol (PAP) or
Challenge Handshake Authentication Protocol (CHAP) for authentication.
X.25/Link Access Procedure, Balanced (LAPB): Defines connections between DTE and DCE for remote terminal
access. LAPB is a data link layer protocol specified by X.25.
Asynchronous Transfer Mode (ATM): International standard for cell relay using fixed-length (53-byte) cells for
multiple service types. Fixed-length cells allow hardware processing, which greatly reduces transit delays. ATM
takes advantage of high-speed transmission media such as E3, T3, and Synchronous Optical Network (SONET).
If you want to learn how to configure Frame Relay in GNS3, please read my Frame Relay Lab in GNS3 tutorial.

Wireless Tutorial
In this article we will discuss about Wireless technologies mentioned in CCNA.
Wireless LAN (WLAN) is very popular nowadays. Maybe you have already used some wireless applications on your
laptop or cellphone. Wireless LANs enable users to communicate without the need for cables. Below is an example of
a simple WLAN:

Each WLAN network needs a wireless Access Point (AP) to transmit and receive data from users. Unlike a wired
network, which operates at full-duplex (send and receive at the same time), a wireless network operates at half-duplex, so sometimes an AP is referred to as a Wireless Hub.
The major difference between wired LAN and WLAN is WLAN transmits data by radiating energy waves, called radio
waves, instead of transmitting electrical signals over a cable.
Also, WLAN uses CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) instead of CSMA/CD for media
access. WLAN can't use CSMA/CD because a sending device can't transmit and receive data at the same time. CSMA/CA
operates as follows:
+ Listen to ensure the media is free. If it is free, set a random time before sending data
+ When the random time has passed, listen again. If the media is free, send the data. If not, set another random
time again
+ Wait for an acknowledgment that data has been sent successfully
+ If no acknowledgment is received, resend the data
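The steps above can be sketched as a toy Python routine. This is only an illustration of the listen/backoff/retry logic; channel_is_free and the return strings are stand-ins for real radio operations, not an actual 802.11 implementation:

```python
import random

def csma_ca_send(channel_is_free, max_retries=4):
    """Toy sketch of CSMA/CA: wait a random time, listen, send if free, else retry."""
    for attempt in range(max_retries):
        backoff = random.uniform(0, 0.01)   # set a random time before sending
        # (a real station would actually wait `backoff` seconds here)
        if channel_is_free():               # listen again after the random wait
            return f"sent on attempt {attempt + 1}"  # then wait for an ACK
        # medium busy: loop and pick another random time
    return "gave up"

# With a channel that is always free, the frame goes out on the first try:
print(csma_ca_send(lambda: True))  # sent on attempt 1
```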
IEEE 802.11 standards:
Nowadays there are three organizations influencing WLAN standards. They are:
+ ITU-R: is responsible for allocation of the RF bands
+ IEEE: specifies how RF is modulated to transfer data
+ Wi-Fi Alliance: improves the interoperability of wireless products among vendors

But the most popular type of wireless LAN today is based on the IEEE 802.11 standard, which is known informally
as Wi-Fi.
* 802.11a: operates in the 5 GHz band. Maximum transmission speed is 54Mbps and approximate wireless
range is 25-75 feet indoors.
* 802.11b: operates in the 2.4 GHz ISM band. Maximum transmission speed is 11Mbps and approximate wireless
range is 100-200 feet indoors.
* 802.11g: operates in the 2.4 GHz ISM band. Maximum transmission speed is 54Mbps and approximate wireless
range is 100-200 feet indoors.
ISM Band: The ISM (Industrial, Scientific and Medical) band, which is controlled by the FCC in the US, generally
requires licensing for various spectrum uses. To accommodate wireless LANs, the FCC has set aside bandwidth for
unlicensed use, including the 2.4 GHz spectrum where many WLAN products operate.
Wi-Fi: stands for Wireless Fidelity and is used to define any of the IEEE 802.11 wireless standards. The term Wi-Fi
was created by the Wireless Ethernet Compatibility Alliance (WECA). Products certified as Wi-Fi compliant are
interoperable with each other even if they are made by different manufacturers.
Access points can support several or all of the three most popular IEEE WLAN standards including 802.11a, 802.11b
and 802.11g.
WLAN Modes:
WLAN has two basic modes of operation:
* Ad-hoc mode: In this mode devices send data directly to each other without an AP.

* Infrastructure mode: Connect to a wired LAN, supports two modes (service sets):
+ Basic Service Set (BSS): uses only a single AP to create a WLAN
+ Extended Service Set (ESS): uses more than one AP to create a WLAN and allows roaming in a larger area than a
single AP. Usually there is an overlapped area between two APs to support roaming. The overlapped area should be
more than 10% (from 10% to 15%) so users can move between the two APs without losing their connections (called
roaming). The two adjacent APs should use non-overlapping channels to avoid interference. The most popular non-overlapping channels are channels 1, 6 and 11 (explained later).

Roaming: The ability to use a wireless device and move from one access point's range to another without
losing the connection.
When configuring ESS, each of the APs should be configured with the same Service Set Identifier (SSID) to support
roaming function. SSID is the unique name shared among all devices on the same wireless network. In public
places, the SSID is set on the AP and broadcast to all wireless devices in range. SSIDs are case-sensitive text
strings and have a maximum length of 32 characters. An SSID is also the minimum requirement for a WLAN to
operate. In most Linksys APs (a product of Cisco), the default SSID is "linksys".
In the next part we will discuss about Wireless Encoding, popular Wireless Security Standard and some sources of
wireless interference.
Wireless Encoding
When a wireless device sends data, there are some ways to encode the radio signal including frequency, amplitude
& phase.
Frequency Hopping Spread Spectrum (FHSS): uses all frequencies in the band, hopping to different ones after
fixed time intervals. Of course the next frequency must be predetermined by the transmitter and receiver.

The main idea of this method is that signals sent on different frequencies will be received at different levels of quality. By
hopping across different frequencies, the chance that most of the signal gets through is greatly improved. For
example, suppose there is another device using the 150-250 kHz range. If our device transmits only in this range then
the signals will suffer significant interference. By hopping across different frequencies, there is only a small amount of interference
while transmitting, and it is acceptable.
Direct Sequence Spread Spectrum (DSSS): This method transmits the signal over a wider frequency band than
required by multiplying the original user data with a pseudo-random spreading code. The result is a wide-band
signal which is very resistant to noise. Even if some bits in this signal are damaged during transmission,
statistical techniques can recover the original data without the need for retransmission.
Note: Spread spectrum here means the bandwidth used to transfer data is much wider than the bandwidth needed to
transfer that data.
Traditional communication systems use a narrowband signal to transfer data because the required bandwidth is
minimal, but the signal must have high power to cope with noise. Spread spectrum does the opposite, transmitting
the signal at a much lower power level (it can even transmit below the noise level) but over a much wider
bandwidth. Even if noise affects some parts of the signal, the receiver can easily recover the original data with
some algorithms.

Now you understand the basic concept of DSSS. Let's discuss the use of DSSS in the 2.4 GHz unlicensed band.

The 2.4 GHz band has a bandwidth of 82 MHz, with a range from 2.402 GHz to 2.483 GHz. In the USA, this band
has 11 different overlapping DSSS channels, while in some other countries it can have up to 14 channels. Channels
1, 6 and 11 have the least interference with each other, so they are preferred over other channels.
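The non-overlapping channel claim can be checked with a little arithmetic. The sketch below assumes the standard 5 MHz channel spacing (channel 1 centered at 2412 MHz) and an approximate 22 MHz DSSS channel width; the function names are illustrative:

```python
def center_mhz(channel: int) -> int:
    # 2.4 GHz DSSS channels are spaced 5 MHz apart, starting at 2412 MHz
    return 2412 + 5 * (channel - 1)

def overlaps(ch_a: int, ch_b: int, width_mhz: int = 22) -> bool:
    # Two channels overlap when their centers are closer than one channel width
    return abs(center_mhz(ch_a) - center_mhz(ch_b)) < width_mhz

print(center_mhz(1), center_mhz(6), center_mhz(11))  # 2412 2437 2462
print(overlaps(1, 6), overlaps(1, 2))                # False True
```

Channels 1, 6 and 11 sit 25 MHz apart, which is more than the 22 MHz channel width, so they do not overlap.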

Orthogonal Frequency Division Multiplexing (OFDM): encodes a single transmission into multiple sub-carriers to save
bandwidth. OFDM selects channels that overlap but do not interfere with each other by choosing the subcarrier
frequencies so that, at each subcarrier frequency, all the other subcarriers contribute nothing to the overall
waveform. In the picture below, notice that only the peaks of each subcarrier carry data. At the peak of each
subcarrier, the other two subcarriers have zero amplitude.
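This orthogonality can be checked numerically: sampling cosine subcarriers over one symbol period, the inner product of two different subcarriers comes out zero. This is an illustrative sketch (the subcarrier indices and sample count are arbitrary), not an OFDM implementation:

```python
import math

def correlation(k: int, m: int, n_samples: int = 256) -> float:
    """Discrete inner product of two cosine subcarriers over one symbol period."""
    return sum(
        math.cos(2 * math.pi * k * n / n_samples)
        * math.cos(2 * math.pi * m * n / n_samples)
        for n in range(n_samples)
    ) / n_samples

# A subcarrier overlaps fully with itself (inner product near 0.5) ...
print(abs(correlation(3, 3) - 0.5) < 1e-9)  # True
# ... but is orthogonal to every other subcarrier (inner product near 0)
print(abs(correlation(3, 4)) < 1e-9)        # True
```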

Below is a summary of the encoding classes which are used popularly in WLAN.

Encoding | Used by
FHSS | The original 802.11 WLAN standards used FHSS, but the current standards (802.11a, 802.11b, and 802.11g) do not
DSSS | 802.11b
OFDM | 802.11a, 802.11g, 802.11n

WLAN Security Standards


Security is one of the biggest concerns for people deploying a WLAN, so we should grasp the main security standards.
Wired Equivalent Privacy (WEP)
WEP is the original security protocol, defined in the original 802.11 standard, and it is very weak compared to the
newer security protocols available nowadays.
WEP is based on the RC4 encryption algorithm, with a secret key of 40 bits or 104 bits combined with a 24-bit
Initialisation Vector (IV) to encrypt the data (so you will sometimes hear of 64-bit or 128-bit WEP keys). But RC4 as
used in WEP has been found to have weak keys and can be cracked within minutes, so it is not popular nowadays.

The weak points of WEP are that the IV is too small and the secret key is static (the same key is used for both
encryption and decryption for the whole communication and never expires).
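To see why the 24-bit IV is a problem, consider how quickly the IV space is exhausted on a busy network. The frame rate below is an assumed illustrative figure, not a measurement:

```python
iv_bits = 24
iv_space = 2 ** iv_bits                  # number of distinct IVs: 16,777,216
frames_per_second = 5000                 # assumed busy-network frame rate
seconds_to_exhaust = iv_space / frames_per_second

print(iv_space)                          # 16777216
print(round(seconds_to_exhaust / 60))    # 56 -> minutes until IVs must repeat
```

Once the IV space wraps, RC4 keystreams repeat, which is one of the properties attackers exploit against WEP.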
Wi-Fi Protected Access (WPA)
In 2003, the Wi-Fi Alliance developed WPA to address WEP's weaknesses. Perhaps one of the most important
improvements of WPA is the Temporal Key Integrity Protocol (TKIP) encryption, which changes the encryption key
dynamically for each data transmission. While still utilizing RC4 encryption, TKIP utilizes a temporal encryption key
that is regularly renewed, making it more difficult for a key to be stolen. In addition, data integrity was improved
through the use of a more robust hashing mechanism, the Michael Message Integrity Check (MMIC).
In general, WPA still uses RC4 encryption, which is considered an insecure algorithm, so many people viewed WPA
as a temporary solution until a new security standard was released (WPA2).
Wi-Fi Protected Access 2 (WPA2)
In 2004, the Wi-Fi Alliance updated the WPA specification by replacing the RC4 encryption algorithm with Advanced
Encryption Standard-Counter with CBC-MAC (AES-CCMP), calling the new standard WPA2. AES is much stronger
than the RC4 encryption but it requires modern hardware.
Standard | Key Distribution | Encryption
WEP | Static Pre-Shared | Weak
WPA | Dynamic | TKIP
WPA2 | Both (Static & Dynamic) | AES

Wireless Interference
The 2.4 GHz & 5 GHz spectrum bands are unlicensed, so many applications and devices operate in them, which
causes interference. Below is a quick view of the devices operating in these bands:
+ Cordless phones: operate on 3 frequencies: 900 MHz, 2.4 GHz, and 5 GHz. As you can see, 2.4 GHz and 5
GHz are the frequency bands of 802.11b/g and 802.11a wireless LANs.
Most cordless phones nowadays operate in the 2.4 GHz band and use frequency hopping spread spectrum
(FHSS) technology. As explained above, FHSS uses the entire 2.4 GHz spectrum, while 802.11b/g uses DSSS, which
operates in about 1/3 of the 2.4 GHz band (1 channel), so the use of cordless phones can cause significant
interference to your WLAN.

An example of cordless phone


+ Bluetooth: like cordless phones, Bluetooth devices also operate in the 2.4 GHz band with FHSS technology.
Fortunately, Bluetooth does not cause as much trouble as cordless phones because it usually transfers data for a
short time (for example, when you copy some files from your laptop to your cellphone via Bluetooth) within a short
range. Moreover, from version 1.2 Bluetooth defines the adaptive frequency hopping (AFH) algorithm. This algorithm
allows Bluetooth devices to periodically listen and mark channels as good, bad, or unknown, which helps reduce the
interference with our WLAN.

+ Microwaves (mostly from ovens): do not transmit data but emit high RF power and heating energy. The
magnetron tubes used in microwave ovens radiate a continuous-wave-like signal at frequencies close to 2.45 GHz
(the center burst frequency is around 2.45 to 2.46 GHz), so they can interfere with the WLAN.
+ Antennas: there are a number of 2.4 GHz antennas on the market today, so they can interfere with your wireless
network.
+ Metal materials or materials that conduct electricity deflect Wi-Fi signals and create blind spots in your
coverage. Some examples are metal siding and decorative metal plates.
+ Game controllers, digital video monitors, wireless video cameras and wireless USB devices may also operate
at 2.4 GHz and cause interference too.

Virtual Local Area Network VLAN Tutorial


VLAN Introduction
A virtual LAN (VLAN) is a group of networking devices in the same broadcast domain.
This is the definition of VLAN that most books use, but it doesn't help us understand the benefits of VLANs.
If you ask "What is a LAN?" you will receive the same answer: it is also a group of networking devices in the same
broadcast domain!
To make it clearer, I expanded the above statement into a slightly longer one :)
A virtual LAN (VLAN) is a group of networking devices in the same broadcast domain, logically
It means that the devices in the same VLAN may be widely separated in the network, both geographically and
physically. VLANs logically segment the network into different broadcast domains so that packets are only switched
between ports that are designated for the same VLAN.
Let's take an example to understand the benefits of VLANs. Suppose you are working in a big company with many
departments, among them the SALES and TECHNICAL departments. You are tasked with separating these
departments so that each of them can only access specific resources in the company.
"This task is really easy," you think. To complete it, you just need to use different networks for these
departments and use access lists to allow/deny each network access to a specific resource. For example, you assign
network 192.168.1.0/24 to SALES and 192.168.2.0/24 to TECH. At the company router you apply an access list to
filter traffic from these networks. Below is the topology of your network without VLANs:

Everything looks good and you implement this design in your company. But after one month you receive many
complaints from both your colleagues and leaders.
+ First, your department leaders need access to additional private resources that ordinary employees are not
allowed to use.
+ Second, the company has just recruited some new SALES employees, but now the SALES room is full so they have
to sit on the 1st floor (in the TECH area). They want to access SALES resources but they can only access the
TECH resources because they are connected to the TECH switch.
To solve the first problem, maybe you will create a new and more powerful network for your leaders. But notice that
each leader sits on a different floor, so you will need to link all of them to a switch -> what a mess!
The second problem is more difficult than the first one. Maybe you have to create another network in the TECH area
and apply the same policy as the SALES department for these hosts -> another mess in management!
Maybe you will be glad to know that VLANs can solve all these problems. VLANs help you group users together
according to their function rather than their physical location. This means you can use the same network for hosts
on different floors (and of course they can communicate with each other).

In this design:
+ you can logically create a new network with additional permissions for your leaders (LEADER network) by adding
another VLAN.
+ employees can sit anywhere to access the resources in their departments, provided that you allow them to do so.
+ computers in the same department can communicate with each other although they are at different floors.
If these departments expand in the future, you can still use the same network on any other floor. For example, if
SALES needs 40 more employees -> you can use the 4th floor for this expansion without changing the current
network.
But wait, maybe you recognize something strange in the above design? How can 2 computers connected to 2
different switches communicate? If one computer sends a broadcast packet, will it be flooded to other departments,
since a switch doesn't break up broadcast domains?
The answer is "Yes, they can communicate!", and that is the beauty of VLANs. Hosts in the same VLAN can
communicate normally even if they are connected to 2 or more different switches. This makes the management
much simpler.
Although Layer 2 switches can only break up collision domains, VLANs can be used to break up broadcast
domains. So if a computer in SALES broadcasts, only computers in SALES will receive that frame.
So we don't need a router, right? The answer is that we still need a router to enable different VLANs to communicate
with each other. Without a router, the computers within each VLAN can communicate with each other but not with
any computer in another VLAN. For example, we need a router to transfer a file from LEADER to TECH. This is
called interVLAN routing.
When using VLANs in networks that have multiple interconnected switches, you need to use VLAN trunking
between the switches. With VLAN trunking, the switches tag each frame sent between switches so that the
receiving switch knows which VLAN the frame belongs to. This tag is known as a VLAN ID. A VLAN ID is a number
which is used to identify a VLAN.

Notice that the tag is only added and removed by the switches when frames are sent out on the trunk links. Hosts
don't know about this tag because it is added on the first switch and removed on the last switch. The picture below
describes the process of a frame sent from PC A to PC B.

Note: A trunk link does not belong to a specific VLAN; rather, it is a conduit for VLANs between switches and routers.
To allow interVLAN routing you need to configure trunking on the link between the router and the switch.
Therefore, in our example we need to configure 3 links as trunks.

Cisco switches support two different trunking protocols, Inter-Switch Link (ISL) and IEEE 802.1q. Cisco created
ISL before the IEEE standardized a trunking protocol. Because ISL is Cisco proprietary, it can be used only between
two Cisco switches -> 802.1q is usually used in practice.
In 802.1q encapsulation, there is a concept called the native VLAN, which was created for backward compatibility
with old devices that don't support VLANs. The native VLAN works as follows:
+ Frames belonging to the native VLAN are not tagged when sent out on the trunk links.
+ Frames received untagged on the trunk link are assigned to the native VLAN.

So if an old switch doesn't support VLANs, it can still understand those frames and continue forwarding them
(without dropping them).
Every port belongs to at least one VLAN. If a switch receives untagged frames on a trunk port, they are assumed to
be part of the native VLAN. By default, VLAN 1 is the default and native VLAN, but this can be changed on a per-port
basis by configuration.
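On Cisco switches the native VLAN can be changed per trunk port with the switchport trunk native vlan command. A sketch, using VLAN 99 as an arbitrary example (the native VLAN must match on both ends of the trunk):
Main_Sw(config-if)#switchport trunk native vlan 99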
Now on to the configuration part ^^. In this part I use the building topology with two switches on the 1st & 3rd
floors and one Main Sw.
VLAN Configuration
Creating VLAN
1st_Floor_Switch#configure terminal
1st_Floor_Switch(config)#vlan 2
1st_Floor_Switch(config-vlan)#name SALES
1st_Floor_Switch(config-vlan)#vlan 3
1st_Floor_Switch(config-vlan)#name TECH
1st_Floor_Switch(config-vlan)#vlan 10
1st_Floor_Switch(config-vlan)#name LEADER
Notice that we don't need to exit out of VLAN mode to create another VLAN.
We also use the above configuration on 3rd_Floor_Switch & Main Sw.
Set VLAN Membership
Assign VLAN to each port:
1st_Floor_Switch(config)#interface f0/0
1st_Floor_Switch(config-if)#switchport access vlan 2
1st_Floor_Switch(config-if)#interface f0/1
1st_Floor_Switch(config-if)#switchport access vlan 3
Notice that a port connecting to a host must be configured as an access port.
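For example, a port facing a SALES host could be pinned to access mode explicitly (a sketch following the interface numbering used above):
1st_Floor_Switch(config)#interface f0/0
1st_Floor_Switch(config-if)#switchport mode access
1st_Floor_Switch(config-if)#switchport access vlan 2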
Create Trunk Ports:
+ On 2950 & 2960 switches: the 2950 & 2960 switches support only 802.1q encapsulation, so to turn trunking on
we simply use this command:
Main_Sw(config-if)#switchport mode trunk
+ On 3550 & 3560 switches: there are two encapsulation types on the 3550 & 3560 Cisco switches, 802.1q and ISL,
but there are 3 encapsulation methods: 802.1q, ISL and negotiate. The default encapsulation is negotiate. This
method signals between the trunk ports to choose an encapsulation method. ISL is preferred over 802.1q, so we
have to configure dot1q explicitly if we want to use this standard.
Main_Sw(config-if)#switchport trunk encapsulation dot1q
Main_Sw(config-if)#switchport mode trunk
In fact, if you use VLAN Trunking Protocol (VTP) then you only need to configure VLANs on the Main Sw: set the
Main Sw to Server mode and the 2 other switches to Client mode. To learn more about VTP, please read my VTP
tutorial.
VLAN Summary:
+ VLANs are used to create logical broadcast domains and Layer 3 segments in a given network.
+ A VLAN is considered a logical segment because the traffic it carries may traverse multiple physical network
segments.
+ Cisco switches support two different trunking protocols, Inter-Switch Link (ISL) and IEEE 802.1q. In 802.1q,
native VLAN frames are untagged.
The benefits of VLANs
1. Segment networks into multiple smaller broadcast domains without Layer 3 network devices such as routers.
VLANs make switched Ethernet networks more bandwidth-efficient through this segmentation of broadcast domains.
2. Group users together according to function rather than physical location. In a traditional network, users in a
given work area are on the same network segment regardless of their job description or department. Using VLANs,
however, you could have one salesperson in each work area of the building sitting next to engineers in their work
area, yet on a separate logical network segment.
3. The ability to reconfigure ports logically without the need to unplug wires and move them around. If a user takes
his or her computer to a new work area, no cables need to be swapped on the switch; just access the switch and
issue commands to change the VLAN assignments for the old and new ports. VLANs thus simplify the process of
adding, moving, and deleting users on the network. They also improve network security by avoiding cabling
mishaps that can arise when users are moved in traditional Ethernet networks.
If you want to learn about VTP, please read my VTP tutorial.
If you want to find out how different VLANs can communicate, please read my InterVLAN Routing tutorial.

VLAN Trunking Protocol VTP Tutorial


This topic describes the features that VLAN Trunking Protocol (VTP) offers to support VLANs. To help you
understand the basic concept, this is a summary of what VTP is:
VTP allows a network manager to configure a switch so that it will propagate VLAN configurations to
other switches in the network
VTP minimizes misconfigurations and configuration inconsistencies that can cause problems, such as duplicate VLAN
names or incorrect VLAN-type specifications. VTP helps you simplify management of the VLAN database across
multiple switches.
VTP is a Cisco-proprietary protocol and is available on most Cisco switches.
Why do we need VTP?
To answer this question, let's discuss a real and popular network topology.
Suppose you are working at a medium-sized company in a 5-floor office. You assigned a switch to each floor for easy
management, and of course hosts can be assigned to different VLANs. For example, your bosses can sit on any floor
and still access the Manage VLAN (VLAN 7). Your technical colleagues can sit anywhere on the floors and access the
Technical VLAN (VLAN 4). This is the best design because each person's permissions are not limited by their physical
location.

Now let's discuss VTP's role in this topology! Suppose VTP is not running on these switches. One day, your boss
decides to add a new department to your office, the Support Department, and you are tasked with adding a new
SUPPORT VLAN for this department. How will you do that? Well, without VTP you have to go to each switch to
add this new VLAN. Fortunately your office only has 5 floors, so you can finish this task in a few hours :)
But just imagine if your company were bigger, with a 100-floor office and some VLANs needing to be added every
month! It would surely become a daunting task to add a new VLAN like this. Luckily, Cisco thought big and created
a method for you to just sit at the Main Sw, add your new VLANs and, magically, have the other switches
automatically learn about them. Sweet, right? It is not a dream; it is what VTP does for you!

How VTP Works


To make switches exchange their VLAN information with each other, they need to be configured in the same VTP
domain. Only switches belonging to the same domain share their VLAN information. When a change is made to the
VLAN database, it is propagated to all switches via VTP advertisements.
To maintain domain consistency, only one switch should be allowed to create (or delete, or modify) VLANs. This
switch is like the master of the whole VTP domain and operates in Server mode. This is also the default
mode.
Other switches are only allowed to receive and forward updates from the server switch. They operate
in Client mode.

In some cases, the network manager doesn't want a switch to learn VTP information from other switches. He can
set it to Transparent mode. In this mode, a switch maintains its own VLAN database and never learns VTP
information from other switches (even the server). However, it still forwards VTP advertisements from the server to
other switches (but doesn't read those updates). A transparent switch can add, delete and modify the VLAN
database locally.
Now, returning to the example above: we can configure any switch as the server, but for convenience the Main
Sw should be assigned this function, and we should place it in a safe place.

As said above, VTP advertisements carry VLAN information to all the switches in a VTP domain. Each VTP
advertisement is sent with a Revision number. This number is used to determine whether a VTP advertisement is
more recent than the current version on a switch. Each time you make a VLAN change on a switch, the configuration
revision is incremented by one, so the higher the revision number, the newer the VTP advertisement.
For example, the first time the Main Sw sends a VTP advertisement, its Revision number is 1. When you add a new
VLAN to the Main Sw, it will send a VTP advertisement with a Revision number of 2. Client switches first receive
the VTP advertisement with the Revision number of 1, which is bigger than their current Revision number (0), so
they update their VLAN databases. Next they receive the VTP advertisement with the Revision number of 2 and
compare it with their current Revision number (1) -> they update their VLAN databases again.
One important thing you must know is that when a switch receives a better VTP advertisement, it deletes its whole
VTP information and copies the new information from the better VTP advertisement into its VLAN database. A switch
does not try to compare its own VLAN database with the information in received VTP advertisements to find and
update only the differences!
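The revision-comparison behaviour described above can be sketched as a toy model in Python (a simplified illustration, not the real VTP packet handling; the class and method names are invented):

```python
class VtpClient:
    """Simplified model of a VTP client's revision handling."""

    def __init__(self):
        self.revision = 0
        self.vlans = {}

    def receive_advertisement(self, revision: int, vlans: dict) -> bool:
        # A higher revision wins: the whole local database is replaced,
        # never merged or diffed against the advertisement.
        if revision > self.revision:
            self.revision = revision
            self.vlans = dict(vlans)
            return True
        return False

client = VtpClient()
client.receive_advertisement(1, {2: "SALES", 3: "TECH"})
client.receive_advertisement(2, {2: "SALES", 3: "TECH", 10: "LEADER"})
print(client.revision, sorted(client.vlans))  # 2 [2, 3, 10]
```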
Note: VTP advertisements are sent as multicast frames and all neighbors in that domain receive the frames.
The show vtp status command analysis
The most important command for viewing the status of VTP on Cisco switches, which every CCNA learner must
grasp, is the show vtp status command. Let's have a look at the output of this command:

+ VTP Version: displays the VTP version the switch is running. By default, the switch runs version 1 but can be set
to version 2. Within a domain, the two VTP versions are not interoperable so make sure to configure the same VTP
version on every switch in a domain.
+ Configuration Revision: current Revision number on this switch.
+ Maximum VLANs Supported Locally: maximum number of VLANs supported locally.
+ Number of Existing VLANs: Number of existing VLANs.
+ VTP Operating Mode: can be server, client, or transparent.
+ VTP Domain Name: name that identifies the administrative domain for the switch.
By default, a switch operates in VTP Server mode with a NULL (blank) domain name and no password configured
(the password field is not listed in the output).
+ VTP Pruning Mode: displays whether pruning is enabled or disabled. We will discuss VTP Pruning later.
+ VTP V2 Mode: displays if VTP version 2 mode is enabled. VTP version 2 is disabled by default.
+ VTP Traps Generation: displays whether VTP traps are sent to a network management station.
+ MD5 Digest: a 16-byte checksum of the VTP configuration.
+ Configuration Last Modified: date and time of the last configuration modification, along with the IP address of the
switch that caused the configuration change to the database.
VTP Pruning
To understand what VTP Pruning is, let's see an example:

When PC A sends a broadcast frame on VLAN 10, it travels across all trunk links in the VTP domain. The Server,
Sw2, and Sw3 switches all receive broadcast frames from PC A, but only Sw3 has users on VLAN 10, so forwarding
the frame to Sw2 is a waste of bandwidth. Moreover, that broadcast traffic also consumes processor time on Sw2.
The link between the Server and Sw2 switches does not carry any VLAN 10 traffic, so it can be pruned.

VTP Pruning makes more efficient use of trunk bandwidth by forwarding broadcast and unknown unicast frames on
a VLAN only if the switch on the receiving end of the trunk has ports in that VLAN. In the above example, the Server
switch doesn't send broadcast frames to Sw2 because Sw2 doesn't have ports in VLAN 10.
When a switch has a port associated with a VLAN, the switch sends an advertisement to its neighbors to inform
them that it has active ports on that VLAN. For example, Sw3 sends an advertisement to the Server switch to inform
it that it has an active port in VLAN 10. Sw2 has not advertised VLAN 10, so the Server switch will prune VLAN 10
on the trunk to Sw2.
You only need to enable pruning on one VTP server switch in the domain.
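Enabling pruning takes a single global-configuration command on the server switch (the prompt name follows the earlier examples):
Main_Sw(config)#vtp pruning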
VTP Configuration
On the Main Sw:
Main Sw(config)#vtp version 2
Main Sw(config)#vtp domain 9tut
Main Sw(config)#vtp mode server
Main Sw(config)#vtp password keepitsecret

On client switches:
Client(config)#vtp version 2
Client(config)#vtp domain 9tut
Client(config)#vtp password keepitsecret
Client(config)#vtp mode client

Notice: before configuring VTP, make sure the links between your switches are trunk links. A trunk link can be
formed automatically unless both of your switches are 2960 or 3560 models, because ports on the 2960 and 3560
switches are set to dynamic auto by default, and if both sides are set to dynamic auto, the link will remain in
access mode. To configure a trunk between these ports, use these commands:
Client(config)#interface fa0/1 (or the interface on the link you want to be trunk)
Client(config-if)#switchport mode trunk
These commands only need to be used on one of two switches to form the trunk.
Below is a summary of important notes about VTP:
+ Whenever a change occurs in the VLAN database, the VTP server increments its configuration revision number
and then advertises the new revision throughout the VTP domain via VTP advertisements.
+ VTP operates in one of three modes: server, transparent, or client.

VTP modes:
* Server: The default mode. When you make a change to the VLAN configuration on a VTP server, the change is
propagated to all switches in the VTP domain. VTP messages are transmitted out of all the trunk connections. In
Server mode we can create, modify, delete VLANs.
* Client: you cannot make changes to the VLAN configuration in this mode; however, a VTP client can send the
VLANs currently listed in its database to other VTP switches. A VTP client also forwards VTP advertisements (but
cannot create VTP advertisements).
* Transparent: when you make a change to the VLAN configuration in this mode, the change affects only the local
switch and does not propagate to other switches in the VTP domain. VTP transparent mode does forward the VTP
advertisements that it receives within the domain.
VTP Pruning makes more efficient use of trunk bandwidth by forwarding broadcast and unknown unicast frames on
a VLAN only if the switch on the receiving end of the trunk has ports in that VLAN.
For more information about VTP, I highly recommend visiting the official tutorial about VTP published by Cisco.
It is very comprehensive: http://www.cisco.com/warp/public/473/vtp_flash/

IPv6 Tutorial
The Internet has been growing extremely fast, so IPv4 addresses are quickly approaching complete depletion.
Although many organizations already use Network Address Translators (NATs) to map multiple private address
spaces to a single public IP address, they have to face other problems caused by NAT (overlapping private
addresses, security). Moreover, many devices other than PCs & laptops now require an IP address to reach the
Internet. To solve these problems in the long term, a new version of the IP protocol, version 6 (IPv6), was created
and developed.
IPv6 was created by the Internet Engineering Task Force (IETF), a standards body, as a replacement for IPv4 in
1998. So what happened to IPv5? IP version 5 was defined for experimental reasons and was never deployed.
While IPv4 uses 32 bits to address the IP (providing approximately 2^32 = 4,294,967,296 unique addresses,
although in fact only about 3.7 billion addresses are assignable, because the IPv4 addressing system separates the
addresses into classes and reserves addresses for multicasting, testing, and other specific uses), IPv6 uses up to
128 bits, which provides 2^128 or approximately 3.4 * 10^38 addresses. Well, maybe we should say it is extremely
extremely extremely huge :)
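The address-space figures quoted above are easy to verify with integer arithmetic:

```python
print(2 ** 32)           # 4294967296 -> the IPv4 address space
print(f"{2 ** 128:.1e}") # 3.4e+38 -> the IPv6 address space
```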
IPv6 Address Types

Address Type | Description
Unicast | One to One (Global, Link local, Site local). An address destined for a single interface.
Multicast | One to Many. An address for a set of interfaces, delivered to a group of interfaces identified by that address. Replaces IPv4 broadcast.
Anycast | One to Nearest (allocated from Unicast). Delivered to the closest interface as determined by the IGP.

A single interface may be assigned multiple IPv6 addresses of any type (unicast, anycast, multicast).
IPv6 address format
Format:
x:x:x:x:x:x:x:x where each x is a 16-bit hexadecimal field, represented by four hexadecimal digits.
An example of IPv6:
2001:0000:5723:0000:0000:D14E:DBCA:0764
There are:
+ 8 groups of 4 hexadecimal digits.
+ Each group represents 16 bits (4 hex digits * 4 bits).
+ The separator is ":".
+ Hex digits are not case sensitive, so DBCA is the same as dbca or DBca.
An IPv6 (128-bit) address contains two parts:
+ The first 64-bits is known as the prefix. The prefix includes the network and subnet address. Because addresses
are allocated based on physical location, the prefix also includes global routing information. The 64-bit prefix is
often referred to as the global routing prefix.
+ The last 64-bits is the interface ID. This is the unique address assigned to an interface.
Note: Addresses are assigned to interfaces (network connections), not to the host. Each interface can have more
than one IPv6 address.
Rules for abbreviating IPv6 Addresses:
+ Leading zeros in a field are optional:
2001:0DA8:E800:0000:0260:3EFF:FE47:0001 can be written as 2001:DA8:E800:0:260:3EFF:FE47:1
+ Successive fields of 0 are represented as ::, but only once in an address:
2001:0DA8:E800:0000:0000:0000:0000:0001 -> 2001:DA8:E800::1
Other examples:
FF02:0:0:0:0:0:0:1 => FF02::1
3FFE:0501:0008:0000:0260:97FF:FE40:EFAB = 3FFE:501:8:0:260:97FF:FE40:EFAB =
3FFE:501:8::260:97FF:FE40:EFAB
0:0:0:0:0:0:0:1 => ::1
0:0:0:0:0:0:0:0 => ::
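These abbreviation rules are exactly what Python's standard ipaddress module applies, so it can be used to check hand-compressed addresses (note that it normalizes hex digits to lowercase):

```python
import ipaddress

for full in [
    "2001:0DA8:E800:0000:0000:0000:0000:0001",
    "FF02:0:0:0:0:0:0:1",
    "0:0:0:0:0:0:0:1",
]:
    # .compressed drops leading zeros and collapses the longest zero run to ::
    print(ipaddress.IPv6Address(full).compressed)
# 2001:da8:e800::1
# ff02::1
# ::1
```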
IPv6 Addressing In Use
IPv6 uses the / notation to denote how many bits in the IPv6 address represent the subnet.
The full syntax of IPv6 is
ipv6-address/prefix-length
where
+ ipv6-address is the 128-bit IPv6 address
+ /prefix-length is a decimal value representing how many of the leftmost contiguous bits of the address
comprise the prefix.
Let's analyze an example:
2001:C:7:ABCD::1/64 is really
2001:000C:0007:ABCD:0000:0000:0000:0001/64
+ The first 64 bits, 2001:000C:0007:ABCD, are the address prefix
+ The last 64 bits, 0000:0000:0000:0001, are the interface ID
+ /64 is the prefix length (/64 is the most common prefix length)
In the next part, we will understand more about each prefix of an IPv6 address.
The Internet Corporation for Assigned Names and Numbers (ICANN) is responsible for the assignment of IPv6
addresses. ICANN assigns a range of IP addresses to Regional Internet Registry (RIR) organizations. The size of
address range assigned to the RIR may vary, but it has a minimum prefix of /12 and belongs to the following range:
2000::/12 to 200F:FFFF:FFFF:FFFF::/64.

Each ISP receives a /32 and provides a /48 for each site -> every ISP can provide 2^(48-32) = 65,536 site addresses
(note: each network organized by a single entity is often called a site).
Each site provides a /64 for each LAN -> each site can provide 2^(64-48) = 65,536 LAN addresses for use in its
private networks.
So each LAN can provide 2^64 interface addresses for hosts.
-> Global routing information is identified within the first 64-bit prefix.
Note: the number that represents the range of addresses is called a prefix
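The allocation arithmetic above can be checked directly (the variable names are just for illustration):

```python
sites_per_isp = 2 ** (48 - 32)  # /48 sites carved out of an ISP's /32
lans_per_site = 2 ** (64 - 48)  # /64 LANs carved out of a site's /48
hosts_per_lan = 2 ** 64         # interface IDs available in one /64

print(sites_per_isp, lans_per_site)  # 65536 65536
print(hosts_per_lan)                 # 18446744073709551616
```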

Now let's see an example of an IPv6 prefix: 2001:0A3C:5437:ABCD::/64:

In this example, the RIR has been assigned a 12-bit prefix, the ISP has been assigned a 32-bit prefix and the site is
assigned a 48-bit site ID. The next 16 bits form the subnet field, which allows 2^16, or 65,536 subnets. This number
is more than enough for even the largest corporations in the world!
The remaining 64 bits (not shown in the above example) are the Interface ID, or host part, and it is much bigger:
64 bits, or 2^64 hosts per subnet! For example, from the prefix 2001:0A3C:5437:ABCD::/64 an administrator can
assign an IPv6 address 2001:0A3C:5437:ABCD:218:34EF:AD34:98D to a host.
IPv6 Address Scopes
Address types have well-defined destination scopes:

Link-local address:
+ only used for communications within the local subnetwork (automatic address configuration, neighbor
discovery, router discovery, and by many routing protocols). It is only valid on the current subnet.
+ routers do not forward packets with link-local addresses.
+ allocated with the FE80::/64 prefix -> can be easily recognized by the prefix FE80. Some books give the
range of link-local addresses as FE80::/10, meaning only the first 10 bits are fixed, so an address could in
principle begin with anything from FE80 to FEBF; in practice the next 54 bits are all 0s, so you will only
see the prefix FE80 on a link-local address.
+ like 169.254.x.x in IPv4, it is assigned automatically when a DHCP server is unavailable and no static
address has been assigned.
+ usually created dynamically from the link-local prefix FE80::/10 and a 64-bit interface identifier (based
on the 48-bit MAC address).

Global unicast address:
+ used for unicast packets sent through the public Internet.
+ globally unique throughout the Internet.
+ starts with the 2000::/3 prefix (meaning any address beginning with 2 or 3), although in the future
global unicast addresses might not have this limitation.

Site-local address:
+ allows devices in the same organization, or site, to exchange data.
+ starts with the prefix FEC0::/10. They are analogous to IPv4's private address ranges.
+ Site-local addresses were deprecated by RFC 3879, so you may not see them in the future.

All nodes must have at least one link-local address, although each interface can have multiple addresses.
Site-local addresses were deprecated by RFC 3879: using them would also have meant that NAT was required, so
addresses would again not be end-to-end.
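Link-local addresses are typically derived from the interface's 48-bit MAC address using modified EUI-64: insert FF:FE in the middle of the MAC and flip the universal/local bit of the first octet. A minimal sketch (the MAC address is made up for illustration):

```python
def eui64_link_local(mac: str) -> str:
    """Build an FE80::/64 link-local address from a 48-bit MAC address
    using modified EUI-64 (insert FFFE, flip the universal/local bit)."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                      # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = ["%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(eui64_link_local("00:1A:2B:3C:4D:5E"))  # -> fe80::021a:2bff:fe3c:4d5e
```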
Special IPv6 Addresses
Reserved multicast addresses:
+ FF02::1 - all nodes on a link (link-local scope)
+ FF02::2 - all routers on a link
+ FF02::5 - OSPFv3 all SPF routers
+ FF02::6 - OSPFv3 all DR routers
+ FF02::9 - all Routing Information Protocol (RIP) routers on a link
+ FF02::A - all EIGRP routers
+ FF02::1:FFxx:xxxx - all solicited-node multicast addresses, used for host auto-configuration and neighbor
discovery (similar to ARP in IPv4). The xx:xxxx part is the far-right 24 bits of the corresponding unicast or
anycast address of the node.
+ FF05::101 - all Network Time Protocol (NTP) servers
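The FF02::1:FFxx:xxxx mapping can be sketched in Python: take the low-order 24 bits of the unicast address and append them to FF02::1:FF00:0. The unicast address below is the one from the earlier prefix example:

```python
import ipaddress

def solicited_node(addr: str) -> str:
    """Map a unicast/anycast IPv6 address to its solicited-node
    multicast address: FF02::1:FF + the low-order 24 bits."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))

print(solicited_node("2001:0A3C:5437:ABCD:218:34EF:AD34:98D"))
```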

Rapid Spanning Tree Protocol RSTP Tutorial


Note: Before reading this article you should understand how STP works, so if you are not sure about STP, please
read my Spanning Tree Protocol tutorial first.
Rapid Spanning Tree Protocol (RSTP)
One big disadvantage of STP is its slow convergence, which is very important in a switched network. To overcome this
problem, in 2001 the IEEE, with document 802.1w, introduced an evolution of the Spanning Tree Protocol: Rapid
Spanning Tree Protocol (RSTP), which significantly reduces the convergence time after a topology change occurs in
the network. While STP can take 30 to 50 seconds to transition from a blocking state to a forwarding state, RSTP is
typically able to respond in less than 10 seconds to a physical link failure.
RSTP works by adding an alternate port and a backup port compared to STP. These ports are allowed to
immediately enter the forwarding state rather than passively waiting for the network to converge.
RSTP bridge port roles:
* Root port: a forwarding port that is the closest to the root bridge in terms of path cost
* Designated port: a forwarding port for every LAN segment
* Alternate port: the best alternate path to the root bridge. This path is different from the one using the root port.
The alternate port moves to the forwarding state if there is a failure on the designated port for the segment.
* Backup port: a backup/redundant path to a segment where another bridge port already connects. The backup
port applies only when a single switch has two links to the same segment (collision domain). To have two links to
the same collision domain, the switch must be attached to a hub.
* Disabled port: not strictly part of STP; a network administrator can manually disable a port
Now let's see an example with the three switches below:

Suppose all the switches have the same bridge priority, so the switch with the lowest MAC address will become the root
bridge -> Sw1 is the root bridge and therefore all of its ports will be Designated ports (forwarding).
The two fa0/0 ports on Sw2 & Sw3 are closest to the root bridge (in terms of path cost), so they will become root ports.
On the segment between Sw2 and Sw3, because Sw2 has a lower MAC address than Sw3, it will advertise the better BPDU on
this segment -> fa0/1 of Sw2 will be the Designated port and fa0/1 of Sw3 will be the Alternate port.

Now for the two ports connecting to the hub: we know that there will be only one Designated port for each
segment (notice that the two ports fa0/2 & fa0/3 of Sw2 are on the same segment, as they are connected to a hub).
The other port will be a Backup port, according to the definition of Backup port above. But how does Sw2 select its
Designated and Backup port? The decision process involves the following parameters inside the BPDU:
* Lowest path cost to the Root
* Lowest Sender Bridge ID (BID)
* Lowest Port ID
Well, both fa0/2 & fa0/3 of Sw2 have the same path cost to the root and the same sender bridge ID, so the third parameter,
lowest port ID, will be used. Because the port ID of fa0/2 is lower than that of fa0/3, Sw2 will select fa0/2 as its Designated port.
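The three-step comparison above can be modeled as Python tuple ordering, where the lower value wins at the first differing element. The cost, bridge ID and port ID values below are illustrative, not taken from a real capture:

```python
# Each BPDU is compared as (root path cost, sender bridge ID, sender port ID).
# Python compares tuples element by element, lower wins -- the same order
# RSTP uses to pick the Designated port on a shared segment.
bpdu_fa0_2 = (19, "0000.1111.2222", 2)   # illustrative values
bpdu_fa0_3 = (19, "0000.1111.2222", 3)   # same cost & BID, higher port ID

designated = min(bpdu_fa0_2, bpdu_fa0_3)
print(designated)   # fa0/2 wins on the lowest port ID
```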

Note: Alternate Ports and Backup Ports are in the discarding state.


RSTP Port States:
There are only three port states left in RSTP, corresponding to the three possible operational states. The 802.1D
disabled, blocking, and listening states are merged into the 802.1w discarding state.
* Discarding: the port does not forward frames, process received frames, or learn MAC addresses, but it does
listen for BPDUs (like the STP blocking state)
* Learning: receives and transmits BPDUs and learns MAC addresses but does not yet forward frames (same as
STP)
* Forwarding: receives and sends data, normal operation, learns MAC addresses, receives and transmits BPDUs
(same as STP)
STP State (802.1d) -> RSTP State (802.1w)
Blocking -> Discarding
Listening -> Discarding
Learning -> Learning
Forwarding -> Forwarding
Disabled -> Discarding
Although the learning state is also used in RSTP, it only lasts for a short time compared to STP. RSTP
converges with all ports in either the forwarding state or the discarding state.
RSTP Quick Summary:
RSTP provides faster convergence than 802.1D STP when topology changes occur.
* RSTP defines three port states: discarding, learning, and forwarding.
* RSTP defines five port roles: root, designated, alternate, backup, and disabled.
Note: RSTP is backward compatible with legacy 802.1D STP. If an RSTP-enabled port receives a (legacy) 802.1D
BPDU, it will automatically configure itself to behave like a legacy port: it sends and receives 802.1D BPDUs only.

Network Address Translation NAT Tutorial


To go to the Internet we need a public IP address, which is unique all over the world. If each host in the
world required a unique public IP address, we would have run out of IP addresses years ago. But by using Network
Address Translation (NAT) we can save tons of IP addresses for later use. We can understand NAT like this:
NAT allows a host that does not have a valid registered IP address to communicate with other hosts through the
Internet.
For example, your computer is assigned a private IP address of 10.0.0.9, and of course this address cannot be
routed on the Internet, but you can still access the Internet. This is because your router (or modem) translates this
address into a public IP address, 123.12.23.1 for example, before routing your data onto the Internet.

Of course, when your router receives a reply packet destined for 123.12.23.1, it will convert it back to your private IP
10.0.0.9 before sending that packet to you.
Maybe you will ask: "Hey, I don't see how NAT saves tons of IP addresses, because you still need a public IP
address for each host to access the Internet, so it doesn't save you anything. Why do you need NAT?"
OK, you are right :) In the above example we don't see its usefulness, but you now understand the fundamentals of
NAT!
Let's take another example!
Suppose your company has 500 employees but your Internet Service Provider (ISP) only gives you 50 public IP
addresses. That means you can only allow 50 hosts to access the Internet at the same time. Here NAT comes to
save your life!
One thing you should notice is that in real life, not all of your employees use the Internet at the same time. Say,
maybe 50 of them use the Internet to read the newspaper in the morning; 50 others use the Internet at noon for
checking mail... By using NAT you can dynamically assign these 50 public IP addresses to those who really need them
at that time. This is called dynamic NAT.
But the above NAT solution does not solve our problem completely, because on some days there can be more than
50 people surfing the web in the morning. In this case, only the first 50 people can access the Internet; the others
must wait their turn.
Another problem is that, in fact, your ISP probably gives you far fewer than 50 IP addresses, because each
public IP is very precious now.
To solve the two problems above, another feature of NAT can be used: NAT Overload, sometimes called Port
Address Translation (PAT).
PAT permits multiple devices on a local area network (LAN) to be mapped to a single public IP address with different
port numbers. When using PAT, the router maintains unique source port numbers on the inside global IP address to
distinguish between translations. In the example below, each host is mapped to the same public IP address,
123.1.1.1, but with a different port number (from 1000 to 1002).
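A minimal sketch of the PAT idea, assuming the public IP 123.1.1.1 and starting port 1000 from the figure. A real router also keys the table on the destination address and protocol, which this toy table omits:

```python
# Toy PAT translation table:
# (inside local IP, source port) -> (inside global IP, allocated port)
public_ip = "123.1.1.1"
next_port = 1000
nat_table = {}

def translate(inside_ip: str, src_port: int):
    """Return the (global IP, global port) pair for an inside flow,
    allocating a new unique global port the first time it is seen."""
    global next_port
    key = (inside_ip, src_port)
    if key not in nat_table:
        nat_table[key] = (public_ip, next_port)
        next_port += 1
    return nat_table[key]

print(translate("10.0.0.9", 49152))   # -> ('123.1.1.1', 1000)
print(translate("10.0.0.10", 49152))  # same inside port, new global port
```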

Note: Cisco uses the term inside local for the private IP addresses and inside global for the public IP addresses
replaced by the router.
The outside host IP address can also be changed with NAT. The outside global address represents the outside host
with a public IP address that can be used for routing in the public Internet.
The last term, outside local address, is a private address of an external device as it is referred to by devices on its
local network. You can understand outside local address as the inside local address of the external device which lies
at the other end of the Internet.
Maybe you will ask how many ports we can use for each IP. Well, because the port number field has 16 bits, PAT
can support about 2^16 ports, which is more than 64,000 connections using one public IP address.
Now you have learned the most useful features of NAT, but we should summarize all the features of NAT:
There are two types of NAT translation: dynamic and static.
Static NAT: Designed to allow one-to-one mapping between local and global addresses. This flavor requires you to
have one real Internet IP address for every host on your network.
Dynamic NAT: Designed to map an unregistered IP address to a registered IP address from a pool of registered IP
addresses. You don't have to statically configure your router to map an inside address to an outside address as in static
NAT, but you do have to have enough real IP addresses for everyone who wants to send packets through the
Internet. With dynamic NAT, you can configure the NAT router with more IP addresses in the inside local address
list than in the inside global address pool. The router then allocates registered public IP addresses from the pool
until all are allocated. If all the public IP addresses are already allocated, the router discards the packet that
requires a public IP address.
PAT (NAT Overloading): is also a kind of dynamic NAT that maps multiple private IP addresses to a single public
IP address (many-to-one) by using different ports. Static NAT and Dynamic NAT both require a one-to-one mapping
from the inside local to the inside global address. By using PAT, you can have thousands of users connect to the
Internet using only one real global IP address. PAT is the technology that helps us not run out of public IP address
on the Internet. This is the most popular type of NAT.
Besides, NAT gives you the option to advertise only a single address for your entire network to the outside world.
Doing this effectively hides the internal network from the public world, giving you some additional security for
your network.
NAT terms:
* Inside local address: the IP address assigned to a host on the inside network. This address is usually not an IP
address assigned by the Internet Network Information Center (InterNIC) or a service provider. It is likely
to be an RFC 1918 private address.
* Inside global address: a legitimate IP address, assigned by the InterNIC or a service provider, that represents
one or more inside local IP addresses to the outside world.
* Outside local address: the IP address of an outside host as it is known to the hosts on the inside network.

* Outside global address: the IP address assigned to a host on the outside network. The owner of the host
assigns this address.

To learn how to configure NAT, please read my Configure NAT GNS3 Lab tutorial.

Access List Tutorial


In this tutorial we will learn about access lists.
Access control lists (ACLs) provide a means to filter packets by allowing a user to permit or deny IP packets from
crossing specified interfaces. Just imagine you come to a fair and see a guardian checking tickets. He only allows
people with suitable tickets to enter. Well, an access list's function is the same as that guardian's.
Access lists filter network traffic by controlling whether packets are forwarded or blocked at the router's interfaces,
based on the criteria you specify within the access list.
To use ACLs, the system administrator must first configure the ACLs and then apply them to specific interfaces. There
are three popular types of ACL: Standard, Extended and Named ACLs.
Standard IP Access List
Standard IP lists (1-99) only check source addresses of all IP packets.
Configuration Syntax
access-list access-list-number {permit | deny} source {source-mask}
Apply ACL to an interface
ip access-group access-list-number {in | out}
Example of Standard IP Access List

Configuration:
In this example we will define a standard access list that will only allow network 10.0.0.0/8 to access the server
(located on the Fa0/1 interface)
Define which source is allowed to pass:
Router(config)#access-list 1 permit 10.0.0.0 0.255.255.255
(there is always an implicit "deny all other traffic" at the end of each ACL, so we don't need to define the forbidden traffic)
Apply this ACL to an interface:
Router(config)#interface Fa0/1
Router(config-if)#ip access-group 1 out
ACL 1 is applied to permit only packets from 10.0.0.0/8 to go out of the Fa0/1 interface while denying all other traffic.
So can we apply this ACL to another interface, Fa0/2 for example? Well, we can, but we shouldn't, because users

could still access the server through another interface (the s0 interface, for example). This is why a standard access
list should be applied close to the destination.
Note: 0.255.255.255 is the wildcard mask of network 10.0.0.0. We will learn how to use wildcard masks
later.
Extended IP Access List
Extended IP lists (100-199) check both source and destination addresses, specific UDP/TCP/IP protocols, and
destination ports.
Configuration Syntax
access-list access-list-number {permit | deny} protocol source {source-mask} destination {destination-mask} [eq
destination-port]
Example of Extended IP Access List

In this example we will create an extended ACL that will deny FTP traffic from network 10.0.0.0/8 but allow other
traffic to go through.
Note: FTP uses TCP on port 20 & 21.
Define which protocol, source, destination and port are denied:
Router(config)#access-list 101 deny tcp 10.0.0.0 0.255.255.255 187.100.1.6 0.0.0.0 eq 21
Router(config)#access-list 101 deny tcp 10.0.0.0 0.255.255.255 187.100.1.6 0.0.0.0 eq 20
Router(config)#access-list 101 permit ip any any
Apply this ACL to an interface:
Router(config)#interface Fa0/1
Router(config-if)#ip access-group 101 out
Notice that we have to explicitly allow other traffic (access-list 101 permit ip any any), as there is a "deny all"
command at the end of each ACL.
As we can see, the destination of the above access list is "187.100.1.6 0.0.0.0", which specifies a single host. We can
use "host 187.100.1.6" instead. We will discuss wildcard masks later.
In summary, below are the number ranges of standard and extended access lists:
Standard: 1-99, 1300-1999
Extended: 100-199, 2000-2699
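Those number ranges can be captured in a small helper; a sketch:

```python
def acl_type(number: int) -> str:
    """Classify a numbered IOS ACL by the standard/extended ranges above."""
    if 1 <= number <= 99 or 1300 <= number <= 1999:
        return "standard"
    if 100 <= number <= 199 or 2000 <= number <= 2699:
        return "extended"
    return "unknown"

print(acl_type(1), acl_type(101), acl_type(1301))  # standard extended standard
```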

Named IP Access List


This allows standard and extended ACLs to be given names instead of numbers

Named IP Access List Configuration Syntax


ip access-list {standard | extended} {name | number}
Example of Named IP Access List
This is an example of the use of a named ACL in order to block all traffic except the Telnet connection from host
10.0.0.1 to host 187.100.1.6.

Define the ACL (named ACLs are configured in a subconfiguration mode):

Router(config)#ip access-list extended in_to_out
Router(config-ext-nacl)#permit tcp host 10.0.0.1 host 187.100.1.6 eq telnet
(notice that we can use "telnet" instead of port 23)
Apply this ACL to an interface:
Router(config)#interface Fa0/0
Router(config-if)#ip access-group in_to_out in
Where should access lists be placed?
A standard IP access list should be placed close to the destination.
Extended IP access lists should be placed close to the source.
How many access lists can be used?
You can have one access-list per protocol, per direction and per interface. For example, you can not have two
access lists on the inbound direction of Fa0/0 interface. However you can have one inbound and one outbound
access list applied on Fa0/0.
How to use the wildcard mask?
Wildcard masks are used with access lists to specify a host, network or part of a network.
The zeros and ones in a wildcard determine whether the corresponding bits in the IP address should be checked or
ignored for ACL purposes. For example, suppose we want to create a standard ACL which will only allow network
172.23.16.0/20 to pass through. We might try to write an ACL something like this:

access-list 1 permit 172.23.16.0 255.255.240.0


Of course we can't write a subnet mask in an ACL; we must convert it into a wildcard mask by converting all 0 bits to 1
and all 1 bits to 0:
255 = 1111 1111 -> converts to 0000 0000
240 = 1111 0000 -> converts to 0000 1111
0 = 0000 0000 -> converts to 1111 1111
Therefore 255.255.240.0 can be written as the wildcard mask 00000000.00000000.00001111.11111111 =
0.0.15.255.
Remember, in a wildcard mask, 1 bits mean "I don't care" and 0 bits mean "I care". Now let's analyze our wildcard
mask.
The first two octets are all 0s, meaning that we care about the network 172.23.x.x. The third octet, 15 (0000 1111 in
binary), means that we care about the first 4 bits but don't care about the last 4 bits, so we allow any third octet of
the form 0001xxxx (minimum: 00010000 = 16; maximum: 00011111 = 31).

The fourth octet is 255 (all 1 bits), which means "I don't care".
Therefore the network 172.23.16.0 0.0.15.255 ranges from 172.23.16.0 to 172.23.31.255.
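The conversion is simple enough to sketch in a few lines of Python: each octet of the wildcard mask is just 255 minus the corresponding subnet-mask octet (equivalent to flipping every bit):

```python
def to_wildcard(mask: str) -> str:
    """Invert a dotted-decimal subnet mask into its wildcard mask."""
    return ".".join(str(255 - int(octet)) for octet in mask.split("."))

print(to_wildcard("255.255.240.0"))    # 0.0.15.255
print(to_wildcard("255.255.255.248"))  # 0.0.0.7
```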
Some additional examples:
+ Block TCP packets on port 30 from any source to any destination:
Router(config)#access-list 101 deny tcp any any eq 30
+ Permit any IP packets in network 192.23.130.128 with subnet mask 255.255.255.248 to any network:
Router(config)#access-list 101 permit ip 192.23.130.128 0.0.0.7 any
Apply the access control list to an interface:
Router(config)#interface fastEthernet0/0
Router(config-if)#ip access-group 101 in

Point to Point Protocol (PPP) Tutorial


Point-to-Point Protocol (PPP) is an open standard protocol that is mostly used to provide
connections over point-to-point serial links. The main purpose of PPP is to transport Layer 3
packets over a Data Link layer point-to-point link. PPP can be configured on:
+ Asynchronous serial connections, like Plain Old Telephone Service (POTS) dial-up
+ Synchronous serial connections, like Integrated Services Digital Network (ISDN) or point-to-point leased lines
PPP consists of two sub-protocols:
+ Link Control Protocol (LCP): sets up and negotiates control options on the Data Link Layer
(OSI Layer 2). After finishing setting up the link, PPP uses NCP.
+ Network Control Protocol (NCP): negotiates optional configuration parameters and facilities
for the Network Layer (OSI Layer 3). In other words, it makes sure IP and other protocols can
operate correctly on a PPP link.

Establish a PPP session


Before a PPP connection is established, the link must go through three phases of session
establishment:
1. Link establishment phase: In this phase, each PPP device sends LCP packets to configure
and test the data link
2. Authentication phase (optional): If authentication is enabled, either PAP or CHAP will be
used. PAP and CHAP are two authentication protocols used in PPP
3. Network layer protocol phase: PPP sends NCP packets to choose and configure Network
Layer protocol (OSI Layer 3) to be encapsulated and sent over the PPP data link

Note: The default serial encapsulation on Cisco routers is HDLC so if you want to use PPP you
have to configure it. Unlike HDLC which is a Cisco proprietary protocol, PPP is an open
standard protocol so you should use it to connect a Cisco router to a non-Cisco router
PPP Authentication Methods

In this part we will learn more about two authentication methods used in Authentication Phase of
PPP.
PPP has two built-in security mechanisms which are Password Authentication Protocol (PAP)
and Challenge Handshake Authentication Protocol (CHAP).
Password Authentication Protocol (PAP) is a very simple authentication protocol. The client
that wants to access a server sends its username and password in clear text. The server checks
the validity of the username and password and either accepts or denies the connection. This is called
a two-way handshake. In the PAP two-way handshake process, the username and password are sent
in the first message.

PAP two-way handshake


For systems that require greater security, PAP is not enough, as a third party with access
to the link can easily pick up the password and access the system resources. In this case CHAP
can save our life!
Challenge Handshake Authentication Protocol (CHAP) is a PPP authentication protocol
which is far more secure than PAP. Let's see how the CHAP three-way handshake works:

With CHAP, the protocol begins with a random text (called a challenge) sent from the Server,
which asks the Client to authenticate.

After receiving the challenge, the Client uses its password and a one-way hash algorithm
(MD5) to hash the random text received from the server. The result is then sent back to the
Server. Therefore, even if someone captures the messages between client and server, he
cannot work out what the password is.

At the Server side, the same algorithm is used to generate its own result. If the two results
match, the passwords must match too.
The main difference between PAP and CHAP is that PAP sends the username and password in clear text
to the server while CHAP does not. Notice that in the CHAP authentication process, the password
itself is never sent across the link.
Another difference between these two authentication protocols is that PAP performs authentication at
the initial link establishment only, while CHAP performs authentication at the initial link
establishment and periodically after that. The challenge text is random and unique, so the
result is also unique from time to time. This prevents a replay attack (in which a hacker
copies the result text sent from the Client in order to reuse it).
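A minimal sketch of the CHAP computation as defined in RFC 1994: the response is the MD5 hash of the message identifier, the shared secret and the challenge concatenated, so the secret itself never crosses the link. The secret TUT is just an example value:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 CHAP: response = MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Server side: issue a random challenge with a message identifier
challenge = os.urandom(16)
identifier = 1

# Client side: hash the challenge with the shared secret
# (only this 16-byte digest is sent back, never the password)
response = chap_response(identifier, b"TUT", challenge)

# Server side: compute the same hash locally and compare
server_result = chap_response(identifier, b"TUT", challenge)
print(response == server_result)  # True -> authentication succeeds
```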
In the next part we will learn how to configure PAP and CHAP for PPP.
PAP and CHAP Configuration
Configuring PAP and CHAP is rather easy. First we need to enable PPP encapsulation, then specify
whether PAP or CHAP will be used with the ppp authentication pap or ppp authentication chap
command.
PAP Configuration
In many CCNA books you will see two routers authenticate each other and their configurations
are identical. But we want you to understand the difference between the configuration of the Client
and the Server. So in this example we only want the Server to authenticate the Client router, not vice
versa.

Client(config)#int s1/0
Client(config-if)#encapsulation ppp
Client(config-if)#ppp pap sent-username CLIENT1 password TUT
Client(config-if)#no shutdown

Server(config)#username CLIENT1 password TUT
Server(config)#int s1/1
Server(config-if)#encapsulation ppp
Server(config-if)#ppp authentication pap
Server(config-if)#no shutdown

Of course we have to enable PPP on both routers first with the encapsulation ppp command.
The Server router is the one that will authenticate when receiving the username & password from the Client,
so we need to use the ppp authentication pap command to tell the router to authenticate via
PAP.
On the Server router we also need to create a username and password entry, to match the
username & password sent from the Client, with the username CLIENT1 password TUT command.
Notice that in the Client configuration we can specify a username (CLIENT1) that is different from its
hostname (in this case Client) with the ppp pap sent-username command. The Client will use
CLIENT1 as its username to authenticate with the Server.
If your configuration is correct then you will see the up/up status on your serial interfaces.
Note: Please do not use the ppp authentication pap command on the Client router, as we don't
want the Client to authenticate the Server. If you use this command the PPP link will fail,
because the Server is not configured to send a username and password to the Client!
CHAP Configuration
The CHAP configuration is rather similar to the PAP configuration so we will not explain more.
Client(config)#interface Serial 1/0
Client(config-if)#encapsulation ppp
Client(config-if)#ppp chap hostname CLIENT1
Client(config-if)#ppp chap password TUT
Client(config-if)#no shutdown

Server(config)#username CLIENT1 password TUT
Server(config)#interface Serial 1/1
Server(config-if)#encapsulation ppp
Server(config-if)#ppp authentication chap
Server(config-if)#no shutdown

Note: Please do not use the ppp authentication chap command on the Client router, as we don't
want the Client to authenticate the Server. If you use this command the PPP link will fail,
because the Server is not configured to send a username and password to the Client!
Verifying the Serial Encapsulation Configuration
We can use the show interface <interface> command to see the configured encapsulation type
of a Serial interface, and the LCP and NCP states if PPP encapsulation is configured.
Client#show interface s1/0
Serial1/0 is up, line protocol is up

Hardware is M4T
MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation PPP, LCP Open
Open: CDPCP, crc 16, loopback not set
We can see that interface Serial1/0 is configured with PPP encapsulation. The LCP state is Open,
which means negotiation and session establishment went well. The "Open: CDPCP" line tells
us the NCP is listening for the Cisco Discovery Protocol (CDP).
A useful debug command to check PPP authentication is debug ppp authentication or
debug ppp negotiation.

WAN Tutorial
Unlike a LAN, which is used effectively in a relatively small geographic area, WAN services help
connect networks over broad geographic distances, from a few to thousands of kilometers. In
the network below, LANs are used inside buildings like the Home, Office and Internet Service
Provider (ISP), while WANs are used to connect between them. By the way, the Internet is the
largest WAN nowadays.

Because of the long distances involved, individuals usually do not own the WAN (unlike a LAN, which they
often do own). They do not have the right to bury a long cable between buildings either.
Therefore they hire available network service providers, such as ISPs, cable or telephone
companies, in their cities instead. This greatly reduces the connection cost.
Note: Although we often think about serial connections over copper cables when talking
about WANs, nowadays fiber optic cables play an important role in both LAN and WAN
connections. Great bandwidth, long distances, very little signal loss, high speed, security and
thinness are very big advantages in transmission, so they are becoming more and more popular
in networking.
WAN Devices & Terminologies
WAN includes many devices and terminologies so you should grasp them. Below are the most
popular ones:
+ Router: a device that provides internetworking and WAN access interfaces used to connect to the
provider network
+ Data Terminal Equipment (DTE): typically, the DTE is the router (at the customer side)
+ Data Communications Equipment (DCE): provides a clocking signal used to synchronize
data transmission between DCE and DTE devices
+ Customer Premises Equipment (CPE): devices located at the customer side. CPE is often
owned by the customer or hired from the WAN provider. In the picture below, the router, LAN
switch and two computers in the house are classified as CPE

+ Demarcation Point: the physical point where the public network ends and the private
network of a customer begins
+ Local loop: the cable that connects the CPE to the nearest exchange or Central Office (CO) of the
service provider. In other words, it is the physical link that runs from the demarcation point
to the edge of the service provider's network

+ CSU/DSU: short for Channel Service Unit/Data Service Unit, used on digital lines such as T1,
T3 or E1. A CSU/DSU provides a clocking signal to the customer equipment interface and terminates
the channelized transport media of a leased line; in other words, it converts one digital
format to another, so a CSU/DSU terminates a digital local loop. You will rarely see a
separate CSU/DSU nowadays, because most T1 or E1 interfaces on current routers
integrate CSU/DSU capabilities
+ Modem: short for Modulator/Demodulator, a Modem is a hardware device that allows a
computer to send and receive information over telephone lines by converting digital data into an
analog signal used on phone lines, and vice versa. Modem terminates an analog local loop

WAN Layer 2 Protocols


Two important WAN technologies common in enterprise networks today, which will be discussed in
our tutorial, are leased lines (point-to-point links) and packet switching.

Leased line
The two most popular WAN protocols used on leased lines are High-Level Data-Link Control
(HDLC) and Point-to-Point Protocol (PPP).
+ High-Level Data-Link Control (HDLC): a point-to-point protocol and the default WAN
protocol on Cisco routers. Although HDLC itself is an open standard, each vendor has a proprietary
field in its HDLC implementation, which makes each implementation effectively proprietary.
Therefore running HDLC between routers from different vendors is not going to work.
+ Point-to-Point Protocol (PPP): an open standard and a point-to-point protocol. This is
the most popular WAN protocol nowadays, used in dial-up, xDSL, ISDN and serial applications. PPP
supports both asynchronous circuits (like analog phone lines) and synchronous circuits (such as ISDN
or digital leased lines). PPP consists of two subprotocols:
* Link Control Protocol (LCP): set up the link and take care of authentication. After finishing
setting up the link, it uses NCP.
* Network Control Protocol (NCP): negotiate optional configuration parameters and facilities for
the network layer. In other words, it makes sure IP and other protocols can operate correctly on
PPP link
PPP has built-in security mechanisms which are Password Authentication Protocol (PAP) and
Challenge Handshake Authentication Protocol (CHAP). While PAP sends password in clear text,
CHAP uses encrypted text (called a hash of the password) with a three-way handshake for
authentication so CHAP is very secure.
Packet-Switching
A big advantage of packet switching over leased-line services is that we can connect many routers to
the packet-switching service using a single serial link on each router. Each router can then
communicate with all other routers. A popular type of packet-switching service that you need to
grasp in CCNA is Frame Relay. Asynchronous Transfer Mode (ATM) is another type of packet-switching service but it is out of CCNA scope and we will not discuss it in this tutorial.
+ Frame-Relay: a digital packet-switched service that can run only across synchronous digital
connections. Because digital connections have very few errors, it does not perform any error
correction or flow control. However, Frame Relay detects errors and drops bad frames. It is up to
a higher layer protocol, such as TCP, to resend the dropped information. For more information
about this protocol please read our Frame Relay tutorial.
All three protocols above operate at Layer 2 (Data Link Layer) of the OSI Model.

Spanning Tree Protocol STP Tutorial


To provide for fault tolerance, many networks implement redundant paths between devices
using multiple switches. However, providing redundant paths between segments causes packets
to be passed between the redundant paths endlessly. This condition is known as a bridging loop.
(Note: the terms bridge and switch are used interchangeably when discussing STP)
To prevent bridging loops, the IEEE 802.1d committee defined a standard called the spanning
tree algorithm (STA), or spanning tree protocol (STP). Spanning-Tree Protocol is a link
management protocol that provides path redundancy while preventing undesirable loops in the
network. For an Ethernet network to function properly, only one active path can exist between
two stations.
Let's see a situation when there is no loop-avoidance process in operation. Suppose you have
two switches connected with redundant links, one switch connected to PC A and the other switch
connected to PC B.
Now PC A wants to talk to PC B. It sends a broadcast, say an Address Resolution Protocol
(ARP) request, to find out the location of PC B; the green arrow shows a broadcast frame sent by
PC A.
When switch A receives a broadcast frame, it forwards that frame out of all ports except the port
on which it was received -> SwA forwards that ARP frame out of its fa0/0 and fa0/1 ports.

Suppose SwB receives the broadcast frame on fa0/0 first; it will then forward that frame to the
two other links (fa0/1 and fa0/5 of SwB).

The other broadcast frame from SwA comes to fa0/1 of SwB so SwB forwards it to fa0/0 and
fa0/5.

As you can see, SwA has sent 2 broadcast frames out of its fa0/0 and fa0/1, SwB receives each
of them, creates 2 copies and sends one of them back to SwA (the other is sent to PC B).
When SwA receives these broadcast frames it continues broadcasting them again out of its other
interfaces, and this will keep going on forever until you shut down the network. This phenomenon
is called a broadcast storm.
A broadcast storm consumes the entire bandwidth and denies bandwidth for normal network traffic.
It is a serious network problem and can shut down an entire network in seconds.
Other problems:
Multiple frame transmission: Multiple copies of unicast frames may be delivered to
destination stations. Many protocols expect to receive only a single copy of each transmission.
Multiple copies of the same frame can cause unrecoverable errors. In the above example, if the
first frame is not an ARP broadcast but a unicast, and SwA and SwB haven't learned about the
destination in that frame yet, then they flood the frame out of all ports except the originating port.
The same phenomenon occurs and PC B will receive more than one copy of that frame.
MAC Database Instability: MAC database instability results when multiple copies of a frame
arrive on different ports of a switch. We can see it in the above example too when the two ports
on SwB (fa0/0 and fa0/1) receive the same frame.
Now you have learned about the problems that arise when there is no loop-avoidance mechanism
running on the network. All of these problems can be solved with the Spanning Tree Protocol (STP).
STP prevents loops by blocking one of the switch's ports. For example, by blocking port fa0/0 of SwA,
no data traffic is sent on this link and the loop in the network is eliminated.

But how does STP decide which port should be blocked? The whole process is more complex than
what is shown above. We will learn it in the next part.
How Spanning Tree Protocol (STP) works
STP performs three steps to provide a loop-free network topology:
1. Elect one root bridge
2. Select one root port per nonroot bridge
3. Select one designated port on each network segment
Now let's have a closer look from the beginning, when the switches have just been turned on.
1. Elect one root bridge
A fun thing is that when turned on, each switch immediately claims itself to be the root bridge and
starts sending out multicast frames called Bridge Protocol Data Units (BPDUs), which are used to
exchange STP information between switches.

A BPDU contains many fields, but there are four fields most important for STP to operate correctly:
* The Bridge ID of the Root Bridge and the Bridge ID of the Transmitting Bridge:
In the initial stage, each switch claims itself as a root bridge so the bridge ID of the root bridge
and the bridge ID of the transmitting bridge are the same.
The Bridge ID is composed of the bridge priority value (0-65535, 2 bytes) and the bridge
MAC address (6 bytes).
Bridge ID = Bridge Priority + MAC Address
For example:
+ The bridge priority of SwA is 32768 and its MAC address is 0000.0000.9999 -> the bridge ID
of SwA is 32768:0000.0000.9999
+ The bridge priority of SwB is 32768 and its MAC address is 0000.0000.1111 -> the bridge ID
of SwB is 32768:0000.0000.1111
The root bridge is the bridge with the lowest bridge ID.
To compare two bridge IDs, the priority is compared first. If two bridges have equal priority,
then the MAC addresses are compared. In the above example, both SwA and SwB have the
same bridge priority (32768) so they will compare their MAC addresses. Because SwB has the
lower MAC address, it will become the root bridge.

On the root bridge, all ports are designated ports. Designated ports are in the forwarding state
and can send and receive traffic.
Note: The default bridge priority value is 32768. An administrator can decide which bridge will
become the root bridge by lowering its priority value (thus lowering its Bridge ID). For example,
we can lower SwA's bridge priority to 28672 (smaller than 32768) to make it the root bridge. But
notice that the bridge priority can only be set in steps of 4096.
In conclusion, STP decides which switch becomes the root bridge by comparing the Bridge IDs in
the BPDUs. The bridge priorities are compared first; if they are equal then the MAC addresses are
used. Because each switch has a unique MAC address, exactly one root bridge will be elected.
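The comparison rule just described (lowest priority wins, lowest MAC address breaks ties) maps naturally onto tuple comparison. A small sketch using the SwA/SwB values from the example above:

```python
def bridge_id(priority: int, mac: str):
    # Lower bridge ID wins; a Python tuple compares the priority first
    # and falls back to the MAC address on a tie.
    return (priority, mac.replace(".", ""))

switches = {
    "SwA": bridge_id(32768, "0000.0000.9999"),
    "SwB": bridge_id(32768, "0000.0000.1111"),
}

root = min(switches, key=switches.get)
print(root)  # SwB: equal priorities, so its lower MAC address decides
```

Lowering SwA's priority to 28672 would make `bridge_id(28672, "0000.0000.9999")` smaller than SwB's, so SwA would win the election regardless of MAC address.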
* The cost to reach the root from this bridge (Root Path Cost): This value is set to 0 at
the beginning of STP root bridge election process since all bridges claim to be the root. The cost
range is 0-65535.
Link Speed   Cost (Revised IEEE Specification)   Cost (Previous IEEE Specification)
10 Gbps      2                                   1
1 Gbps       4                                   1
100 Mbps     19                                  10
10 Mbps      100                                 100
The root path cost is used to elect the root port, which we will discuss in the next part.
* The Port ID: the transmitting switch's port ID; it will be discussed later.
2. Select one root port per nonroot bridge
The root port is the port that is closest to the root bridge, which means it is the port receiving
the lowest-cost BPDU from the root.
Every non-root bridge must have a root port. All root ports are placed in the forwarding state.
In the above example, if we suppose the upper link (between the two fa0/0 interfaces) is a
10 Mbps link and the lower link (between the two fa0/1 interfaces) is a 100 Mbps link, then fa0/1
of SwA will become the root port as it has a lower cost than fa0/0 (cost 19 < cost 100).
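The root-port selection just described is simply "lowest cumulative cost to the root wins". A tiny sketch using the SwA example, with costs 19 and 100 taken from the path-cost table earlier:

```python
# Cumulative cost to the root bridge per candidate interface on SwA
# (100 Mbps link = cost 19, 10 Mbps link = cost 100).
cost_to_root = {
    "fa0/0": 100,  # 10 Mbps upper link
    "fa0/1": 19,   # 100 Mbps lower link
}

# The interface with the lowest cumulative cost becomes the root port.
root_port = min(cost_to_root, key=cost_to_root.get)
print(root_port)  # fa0/1 becomes the root port on SwA
```

In real STP, ties on cost are broken by the neighbor's bridge ID and port ID, which this sketch leaves out for brevity.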

3. Select one designated port on each network segment

STP selects one designated port per segment to forward traffic. Other switch ports on the
segment typically become nondesignated ports and are blocked. Therefore interface fa0/0 of
SwA will become a nondesignated port (blocking state). In the blocking state a port cannot send
data traffic but can still receive BPDUs.

Now the network reaches a state called convergence. Convergence in STP occurs when all
ports on bridges and switches have transitioned to either the forwarding or blocking state. No data
is forwarded until convergence is complete, so the convergence time after a topology change is
very important. Fast convergence is very desirable in large networks. The normal convergence
time is 50 seconds for 802.1D STP (which is rather slow), but the timers can be adjusted.
STP switch port states
When STP is enabled, every switch port in the network goes through the blocking state and the
transitory states of listening and learning. The ports then stabilize in the forwarding or blocking
state.
* Blocking: no user data is sent or received, but the port may go into forwarding mode if the
other links in use fail and the spanning tree algorithm determines the port may transition to the
forwarding state. BPDUs are still received in the blocking state, but the port discards data frames
and does not learn MAC addresses.
* Listening: the switch processes BPDUs and awaits possible new information that would cause
it to return to the blocking state; the port still discards data frames and does not learn MAC
addresses.
* Learning: the port receives and transmits BPDUs and learns MAC addresses, but does not yet
forward frames.
* Forwarding: normal operation; the port receives and sends data, learns MAC addresses, and
receives and transmits BPDUs.
Below is a quick summary of STP states:

State        Can forward data?   Learn MAC?   Timer                    Transitory or Stable State?
Blocking     No                  No           Max Age (20 sec)         Stable
Listening    No                  No           Forward Delay (15 sec)   Transitory
Learning     No                  Yes          Forward Delay (15 sec)   Transitory
Forwarding   Yes                 Yes          -                        Stable

* Max Age: how long a bridge waits, after it stops hearing hellos, before trying to change the
STP topology. Usually this is a multiple of the hello time; the default is 20 seconds.
* Forward Delay: the delay that affects the time involved when an interface changes from the
blocking state to the forwarding state. A port stays in the listening state and then the learning
state for the number of seconds defined by the forward delay. This timer is covered in more
depth shortly.
The spanning tree algorithm provides the following benefits:
* Eliminates bridging loops
* Provides redundant paths between devices
* Enables dynamic role configuration
* Recovers automatically from a topology change or device failure
* Identifies the optimal path between any two network devices
Now let's take an example using the same network as above, but suppose that the bottom
100 Mbps connection is broken.

When the lower link is broken, SwA must wait Max Age seconds before it begins to transition its
fa0/0 interface from the blocking state to the listening state. In the listening state it must wait
Forward Delay seconds before moving to the learning state, and then it waits another Forward
Delay seconds in the learning state; if no better BPDU is received, the port is then placed in the
forwarding state. These three waiting periods of (by default) 20, 15, and 15 seconds create STP's
relatively slow convergence.
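These waiting periods can be tallied directly; a quick sketch of the default 802.1D transition delays and where the often-quoted 50-second figure comes from:

```python
# Default 802.1D timers (seconds).
MAX_AGE = 20        # wait before leaving blocking after hellos stop
FORWARD_DELAY = 15  # time spent in each of listening and learning

# The path a recovering port takes from blocking to forwarding.
transitions = [
    ("blocking -> listening", MAX_AGE),
    ("listening -> learning", FORWARD_DELAY),
    ("learning -> forwarding", FORWARD_DELAY),
]

total = sum(wait for _, wait in transitions)
for step, wait in transitions:
    print(f"{step}: {wait}s")
print(f"total convergence: {total}s")  # 50s with default timers
```

Tuning Max Age and Forward Delay shortens this, but only RSTP changes the mechanism enough to converge in seconds rather than tens of seconds.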
Now let's consider how BPDUs are sent when there are 3 switches in the network. Cisco has a
good Flash animation to demonstrate it, so please watch it
at http://www.cisco.com/image/gif/paws/10556/spanning_tree1.swf
How STP performs when a link fails

Suppose we have a topology with three switches as shown below:

In which SwA is elected the root bridge, the link between SwB and SwC is being blocked. When
STP is converged, the port roles are shown above.
Now suppose the link between SwA and SwB goes down, let us see what and how STP will
perform

1. First, P1 on SwB goes down immediately and SwB declares its link to SwA as down.
2. SwB considers its blocked link to SwC as an alternate path to the root. SwB starts to
transition P2 from the blocking state to the listening state -> learning state -> forwarding state.
Each of these stages lasts 15 seconds by default, so port P2 on SwB will be held back for
30 seconds before the network converges again. This downtime is rather long (although we can
tune the timers down to about 14 seconds of downtime) and users can feel it.
The noticeable downtime can be reduced significantly if we use the Rapid Spanning Tree Protocol
(RSTP). If you are interested in RSTP, please read my Rapid Spanning Tree Protocol Tutorial.

NetFlow Tutorial
One of the most important tasks of a network administrator is to monitor the health of our
networks: learn how our bandwidth is being used, what applications are consuming it, when it
needs an upgrade... Although monitoring protocols like SNMP and SPAN (port mirroring) can help
us answer some of these questions, they are not enough to give us an insightful view of our
networks. Luckily we have another amazing tool: NetFlow!
NetFlow is a network analysis protocol that gives us the ability to collect detailed information
about network traffic as it flows through a router interface. NetFlow helps network
administrators answer the questions of who (users), what (applications), when (time of day),
where (source and destination IP addresses) and how network traffic is flowing.
Let's take an example! In the topology below, when traffic from Networks 1, 2 and 3 passes
through the interfaces of a NetFlow-enabled device, relevant information is captured and stored
in the NetFlow cache. NetFlow collects IP traffic information as records and sends them to a
NetFlow collector for traffic flow analysis.

NetFlow components
+ NetFlow Monitor: a component that is applied to an interface and collects information about
flows. Flow monitors consist of a record and a cache. You add the record to the flow monitor after
the flow monitor is created. In the topology above, we can apply NetFlow Monitors to the s0/0,
Fa0/0 and Fa0/1 interfaces of the router to collect traffic information on these interfaces
+ NetFlow Exporter: aggregates packets into flows, stores IP flow information in its NetFlow
cache and exports it in the form of flow records to the NetFlow collector
+ NetFlow Collector: collects flow records sent from the NetFlow exporters, parsing and
storing the flows. Usually a collector is separate software running on a network server.
NetFlow records are exported to a NetFlow collector using the User Datagram Protocol (UDP)
+ NetFlow Sampler: used to reduce the number of packets that are selected for analysis. It is
applied to a NetFlow Monitor to reduce the overhead load, because the number of packets that
the flow monitor must analyze is reduced. But notice that the accuracy of the information stored
in the flow monitor's cache is also reduced correspondingly.
Note: The term "flow" here should be understood as a unidirectional stream of related packets.
The most important component of NetFlow is the NetFlow Exporter (and its NetFlow cache) so
we will discuss more about it.
How NetFlow Exporter works
When packets arrive at the NetFlow Exporter, each of them is inspected for one or more IP
packet attributes. These attributes are used to determine whether the packet is unique or similar
to other packets. If it is similar, it is classified as belonging to the same flow.

There are seven key IP packet attributes that can be used by NetFlow to classify packets into
separate flows:
+ IP source address
+ IP destination address
+ Source port
+ Destination port
+ Layer 3 protocol type
+ Class of Service (or Type of Service ToS) Byte
+ Input (Router or switch) interface
Other attributes, called non-key attributes, can also be used, such as timestamps, packet and
byte counters, and TCP flag information.
After inspecting these attributes, the NetFlow Exporter condenses them into flow records and
saves them in a database called the NetFlow cache. These flow records can also be exported to a
NetFlow Collector.
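The classification step described above, deciding whether a packet belongs to an existing flow by looking at its key attributes, can be sketched as a cache keyed on the seven-tuple. This is a simplified illustration, not the actual IOS data structure, and the field names are made up for the example:

```python
from collections import defaultdict

def flow_key(pkt: dict) -> tuple:
    # The seven key attributes: packets with an identical key belong
    # to the same flow.
    return (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"],
            pkt["protocol"], pkt["tos"], pkt["input_if"])

# The simplified "NetFlow cache": one record per distinct flow key.
cache = defaultdict(lambda: {"packets": 0, "bytes": 0})

def account(pkt: dict) -> None:
    rec = cache[flow_key(pkt)]
    rec["packets"] += 1            # non-key attribute: packet counter
    rec["bytes"] += pkt["length"]  # non-key attribute: byte counter

pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 40000,
       "dst_port": 80, "protocol": 6, "tos": 0, "input_if": "Fa0/1",
       "length": 1500}
account(pkt)
account(pkt)  # same key -> counted against the same flow record
print(len(cache), cache[flow_key(pkt)]["packets"])  # 1 2
```

A packet differing in any key field (say, the reverse direction of the same TCP conversation) would create a second record, which is why NetFlow flows are unidirectional.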
How to view NetFlow data

There are two main methods to view NetFlow data:


+ Command Line Interface (CLI): Because the NetFlow cache is part of the NetFlow Exporter,
we can view this cache directly via the Command-Line Interface (CLI) with the show ip cache
flow command, which is very useful for troubleshooting. An example output of this command is
shown below:

+ A NetFlow reporting tool: there are many tools that can collect NetFlow packets sent to the
NetFlow Collector and display a comprehensive view. Below is an example of what SolarWinds
NetFlow Traffic Analyzer can analyze:

NetFlow versions
Version 1: the original format, supported in the initial NetFlow releases.
Versions 2, 3 and 4 were not released.
Version 5: an enhancement that adds Border Gateway Protocol (BGP) autonomous system
information, flow sequence numbers and a few additional fields. This is the standard and most
common NetFlow version. It only supports IPv4.
Version 6: similar to version 7.
Version 7: a Cisco-specific version for Catalyst 5000 series switches; not compatible with Cisco
routers.
Version 8: adds a choice of aggregation schemes in order to reduce resource usage.
Version 9: supports a flexible flow-record format and is known as Flexible NetFlow technology.
NetFlow version 9 includes a template to describe what is being exported. It supports an
extensible export format to enable easier support. It also supports additional fields and
technologies such as MPLS, IPv6, IPsec, NBAR protocols, multicast and VLAN IDs.
In general, the two most important NetFlow versions are Version 5 and Version 9, and we will
learn how to configure them.
Note: NetFlow version 5 only supports monitoring inbound statistics using the ip flow
ingress command, while NetFlow v9 also allows monitoring traffic leaving each interface via the
ip flow egress command.
In the next part we will learn how to configure NetFlow version 5 & 9.

Configure NetFlow
NetFlow version 5 and version 9 are commonly used nowadays, so this part will show how to
configure NetFlow versions 5 and 9. We only show the minimum configuration needed to make
NetFlow work well.
Configure NetFlow version 5
The following configuration enables NetFlow version 5 on Fa0/1 interface and export to a
NetFlow collector at 10.1.1.1 on UDP port 2055 through Fa0/2 interface.

Router(config)#interface fa0/1
Router(config-if)#ip route-cache flow
Router(config-if)#exit
Router(config)#ip flow-export destination 10.1.1.1 2055
Router(config)#ip flow-export source fa0/2 //NetFlow will export through this interface
Router(config)#ip flow-export version 5
Router(config)#ip flow-cache timeout active 1 //export flow records every minute.
Note:
+ NetFlow version 5 can inspect inbound traffic only.
+ We can use either the command ip route-cache flow or ip flow ingress in this case. The
former enables flows on the physical interface and all sub-interfaces associated with it, while
the latter can be used on sub-interfaces and enables flows on those sub-interfaces only.
+ The last command, ip flow-cache timeout active 1, is necessary for NetFlow to work well. If
you leave it at the default of 30 minutes, your traffic reports will have spikes.
Configure NetFlow version 9
To configure NetFlow version 9 (Flexible NetFlow), we need to configure three components:
1. Flow Record
2. Flow Exporter
3. Flow Monitor
The following configuration enables NetFlow version 9 on Fa0/1 interface and export to a
NetFlow collector at 10.1.1.1 on UDP port 2055 through Fa0/2 interface.
1. Configure the Flow Record:
Router(config)# flow record TUT_Record
Router(config-flow-record)# match ipv4 destination address
Router(config-flow-record)# match ipv4 source address

2. Configure the Exporter:


Router(config)# flow exporter TUT_Exporter
Router(config-flow-exporter)# destination 10.1.1.1
3. Configure the Flow Monitor
Router(config)# flow monitor TUT_Monitor
Router(config-flow-monitor)# record TUT_Record //Must match the above Flow Record name
Router(config-flow-monitor)# exporter TUT_Exporter //Must match the above Exporter name
4. Apply to an interface
Router(config)#interface fa0/1
Router(config-if)#ip flow monitor TUT_Monitor input //Monitor the received traffic on this interface
Small note: CEF should be enabled on the NetFlow Exporter router when running NetFlow. CEF
decides through which interface traffic exits the router. A NetFlow Collector calculates the OUT
traffic for an interface based on the Destination Interface value present in the NetFlow packets
exported from the NetFlow Exporter. If CEF is disabled on the router, the exported NetFlow
packets will have a null Destination Interface, which leads the NetFlow Collector to show no OUT
traffic for the interfaces.
Verification
After finishing configuration, we may need some commands to verify and troubleshoot our
NetFlow configuration. Some popular commands used to check the NetFlow operation are listed
below:
+ show ip cache flow: displays a summary of the NetFlow accounting statistics. The output of
this command has been shown above
+ show ip flow export: displays the status and the statistics for NetFlow accounting data
export, including the main cache and all other enabled caches
Router# show ip flow export
Flow export v5 is enabled for main cache
Exporting flows to 10.1.1.1 (2055)
Exporting using source interface FastEthernet0/2
Version 5 flow records
39676332 flows exported in 1440719 udp datagrams
0 flows failed due to lack of export packet
153 export packets were sent up to process level
0 export packets were dropped due to no fib
0 export packets were dropped due to adjacency issues
0 export packets were dropped due to fragmentation failures
0 export packets were dropped due to encapsulation fixup failures
+ show ip flow interface: displays the NetFlow accounting configuration on interfaces
R2# show ip flow interface

FastEthernet0/0
ip route-cache flow
+ show ip flow top-talkers: shows which end devices on your network are taking up the most
bandwidth
Router# show ip flow top-talkers

SrcIf    SrcIPaddress    DstIf    DstIPaddress     Pr    SrcP    DstP    Bytes
Et0/1    191.168.1.1     Local    192.168.1.254    01    0000    0000    4800
Et0/2    191.168.1.2     Local    192.168.1.254    01    0000    0000    4800
Et0/3    191.168.1.3     Local    192.168.1.254    01    0000    0000    3400

EIGRP Tutorial
In this article we will learn about the EIGRP protocol.
In the past, Enhanced Interior Gateway Routing Protocol (EIGRP) was a Cisco-proprietary routing
protocol, but in March 2013 Cisco opened up EIGRP as an open standard in order to help
companies operate in a multi-vendor environment. EIGRP is a classless routing protocol,
meaning that it sends the subnet mask of its interfaces in routing updates. It uses a complex
metric based on bandwidth and delay.
EIGRP is referred to as a hybrid routing protocol because it has the characteristics of both
distance-vector and link-state protocols, but Cisco now refers to it as an advanced distance
vector protocol.
Notice: the term "hybrid" is misleading because EIGRP is not a hybrid between distance vector
and link-state routing protocols. It is a distance vector routing protocol with enhanced features.
EIGRP is a powerful routing protocol and it really stands out from its ancestor IGRP. The main
features are listed below:
+ Supports VLSM and discontiguous networks
+ Uses the Reliable Transport Protocol (RTP) for delivery and reception of EIGRP packets
+ Uses the best-path selection Diffusing Update Algorithm (DUAL), guaranteeing loop-free
paths and backup paths throughout the routing domain
+ Discovers neighboring devices using periodic Hello messages and monitors connection
status with its neighbors
+ Exchanges the full routing table at startup and sends partial* triggered updates thereafter
(not full periodic updates like distance-vector protocols), and the triggered updates are only sent
to routers that need the information. This behavior is different from link-state protocols, in which
an update is sent to all link-state routers within the area. For example, EIGRP will send updates
when a new link comes up or a link becomes unavailable
+ Supports multiple protocols: EIGRP can exchange routes for IPv4, IPv6, AppleTalk and
IPX/SPX networks
+ Load balancing: EIGRP supports unequal-metric load balancing, which allows administrators
to better distribute traffic flow in their networks.
* Notice: the term "partial" means that the update only includes information about the route
changes.
EIGRP uses a metric composed of bandwidth, delay, reliability, and load. By default, EIGRP uses
only bandwidth and delay.
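With the default K-values, the classic EIGRP formula reduces to metric = 256 * (10^7 / slowest-link-bandwidth-in-kbps + cumulative-delay-in-tens-of-microseconds). A sketch of that default-K calculation; the link values in the example are illustrative:

```python
def eigrp_metric(bandwidths_kbps, delays_usec):
    """Default-K EIGRP composite metric: only the slowest bandwidth
    along the path and the sum of the interface delays are used."""
    bw_term = 10**7 // min(bandwidths_kbps)  # scaled inverse of slowest link
    delay_term = sum(delays_usec) // 10      # delay in tens of microseconds
    return 256 * (bw_term + delay_term)

# A single FastEthernet hop: 100,000 kbps bandwidth, 100 usec delay.
print(eigrp_metric([100_000], [100]))  # 256 * (100 + 10) = 28160

# A two-hop path where a 10 Mbps link dominates the bandwidth term.
print(eigrp_metric([100_000, 10_000], [100, 1000]))
```

Note that only the minimum bandwidth matters while delays accumulate hop by hop, which is why a single slow link can dominate the metric of an otherwise fast path.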
EIGRP uses five types of packets to communicate:
+ Hello: used to identify neighbors; sent as periodic multicasts
+ Update: used to advertise routes; only sent as multicasts when something has changed
+ Ack: acknowledges receipt of an update. In fact, an Ack is a Hello packet without data. It is
always unicast.
+ Query: used to find alternate paths when all paths to a destination have failed
+ Reply: sent in response to Query packets to instruct the originator not to recompute the
route because feasible successors exist. Reply packets are always unicast to the originator of the
query.

EIGRP sends every Query and Reply message using RTP, so every message is acknowledged
using an EIGRP ACK message.
EIGRP Route Discovery
Suppose that our network has 2 routers and they are configured to use EIGRP. Let's see what
happens when they are turned on.
First, each router tries to establish a neighbor relationship by sending Hello packets to other
routers running EIGRP. The destination IP address is 224.0.0.10, which is the multicast address
of EIGRP. In this way, other routers running EIGRP will receive and process these multicast
packets. Note that EIGRP packets are carried directly inside IP (protocol number 88); EIGRP
uses neither TCP nor UDP.

After hearing Hello from R1, R2 will respond with another Hello packet.

R2 will also send its routing table to R1 by Update packets. Remember that R2 will send its
complete routing table for the first time.

R1 confirms it has received the Update packet by an ACK message.

R1 will also send to R2 all of its routing table for the first time

R2 sends a message saying it has received R1's routing table.

Now both R1 & R2 have learned all the paths of their neighbor and the network is converged. But
there are some things you should know:
+ After the network has converged, Hello messages are still sent to indicate that the routers are
still alive.
+ When something in the network changes, routers will only send partial updates to the routers
which need that information.
+ Hellos are sent as periodic multicasts and are not acknowledged directly.
+ The first Hellos are used to build a list of neighbors; thereafter, Hellos indicate that the
neighbor is still alive.
To become a neighbor, the following conditions must be met:
+ The router must hear a Hello packet from a neighbor.
+ The EIGRP autonomous system must be the same.
+ K-values must be the same.
EIGRP builds and maintains three tables:
+ Neighbor table: lists directly connected routers running EIGRP with which this router has an
adjacency
+ Topology table: lists all routes learned from each EIGRP neighbor
+ Routing table: lists all best routes from the EIGRP topology table and other routing processes
Configuring EIGRP
Router(config)#router eigrp 1
Turns on the EIGRP process. Syntax: router eigrp <AS number>. Here 1 is the Autonomous
System (AS) number; it can be from 1 to 65535. All routers in the same network must use the
same AS number.

Router(config-router)#network 192.168.1.0
The router will turn on the EIGRP 1 process on all the interfaces belonging to the 192.168.1.0/24
network.

In the next part we will learn about the Feasible Distance & Advertised Distance of EIGRP
Feasible Distance (FD) and Advertised Distance (AD)

In this part, we will define these terms and take an example to make them clear.
Advertised distance (AD): the cost from the neighbor to the destination.
Feasible distance (FD): the sum of the AD plus the cost between the local router and the
next-hop router.
Successor: the primary route used to reach a destination. The successor route is kept in the
routing table. Notice that the successor is the best route to that destination.
Feasible successor: the backup route. To be a feasible successor, the route must have an AD
less than the FD of the current successor route.
These terms may be a bit confusing, so below is an example to make them clear.

Suppose you are in NEVADA and want to go to IOWA. From NEVADA you need to specify the
best path (smallest cost) to IOWA.
In this topology, suppose routers A & B are exchanging their routing tables for the first time.
Router B says "Hey, the best metric (cost) from me to IOWA is 50 and the metric from you to
IOWA is 90" and advertises it to router A. Router A considers the first metric (50) as the
Advertised distance. The second metric (90), which is the cost from NEVADA to IOWA (through
IDAHO), is called the Feasible distance.
NEVADA also receives the cost path from NEVADA -> OKLAHOMA -> IOWA advertised by router
C with the Advertised distance of 70 and Feasible distance of 130.
All of these routes are placed in the topology table of router A:
Route                        Advertised distance    Feasible distance
NEVADA -> IDAHO -> IOWA      50                     90
NEVADA -> OKLAHOMA -> IOWA   70                     130

Router A will select the route to IOWA via IDAHO as it has the lowest Feasible distance and put it
into the routing table.
The last thing we need to consider is whether the route NEVADA -> OKLAHOMA -> IOWA will be
considered a feasible successor. To achieve this, it must satisfy the feasibility condition:
To qualify as a feasible successor, a route must have an AD less than the FD of the
current successor route

Maybe you will ask why we need this feasibility condition. Well, the answer is that it guarantees
a loop-free path to the destination; in other words, the path must not loop back through the
current successor.
If the route via the successor becomes invalid (because of a topology change) or if a neighbor
changes the metric, DUAL checks for feasible successors to the destination route. If one is
found, DUAL uses it, avoiding the need to recompute the route as the re-computation can be
processor-intensive. If no suitable feasible successor exists, a re-computation must occur to
determine the new successor.
EIGRP calls these alternative, immediately usable, loop-free routes feasible successor routes,
because they can feasibly be used as a new successor route when the current successor route
fails. The next-hop router of such a route is called the feasible successor.
In this case, the route NEVADA -> OKLAHOMA -> IOWA has an AD (70) less than the FD of the
successor route (90) so it becomes the feasible successor route.
Of course in some cases the feasibility condition will wrongly drop loop-free paths. For example,
if the metric between OKLAHOMA and IOWA were greater than 90, then the route NEVADA ->
OKLAHOMA -> IOWA would not be considered a feasible successor route although it is loop-free.
But this condition is necessary because it guarantees that the feasible successor routes are
loop-free.
Notice that feasible successors are placed in the topology table, not in the routing table.
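The selection rules from the NEVADA example can be written down directly: the successor is the path with the lowest feasible distance, and any other path qualifies as a feasible successor only when its advertised distance is lower than the successor's feasible distance:

```python
# Candidate paths to IOWA as seen from router A (NEVADA):
# (next hop, advertised distance, feasible distance).
paths = [
    ("IDAHO", 50, 90),
    ("OKLAHOMA", 70, 130),
]

# Successor: the path with the lowest feasible distance.
successor = min(paths, key=lambda p: p[2])

# Feasibility condition: the candidate's AD must be less than the
# successor's FD to guarantee a loop-free backup.
feasible_successors = [p for p in paths
                       if p is not successor and p[1] < successor[2]]

print(successor[0])                         # IDAHO
print([p[0] for p in feasible_successors])  # ['OKLAHOMA'], since 70 < 90
```

If OKLAHOMA advertised a distance of, say, 95 instead of 70, the list comprehension would reject it even though the path might still be loop-free, which is exactly the conservative behavior described above.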
Now router A has 3 complete tables as follows (we only consider route to IOWA network)

Now you have a basic concept of EIGRP. In the next part we will dig into the 3 tables of EIGRP
(the neighbor, topology & routing tables), as understanding them is a requirement for a CCNA
taker, and learn how to calculate the metric of EIGRP.
Calculate EIGRP metric
In this part we will continue to learn about the EIGRP Routing Protocol

I built the topology with Packet Tracer to illustrate what will be mentioned. You can download
the lab file here: http://www.9tut.com/download/EIGRP_CCNA_self_study.zip (please unzip &
use at least Packet Tracer v5.3 to open it)

Check the neighbor table of Router0 with the show ip eigrp neighbors command

Let's analyze these columns:


+ H: lists the neighbors in the order in which they were learned
+ Address: the IP address of the neighbors
+ Interface: the interface of the local router on which this Hello packet was received
+ Hold (sec): the amount of time left before the neighbor is considered down
+ Uptime: amount of time since the adjacency was established
+ SRTT (Smooth Round Trip Timer): the average time in milliseconds between the transmission
of a packet to a neighbor and the receipt of an acknowledgement.
+ RTO (Retransmission Timeout): if a multicast has failed, then a unicast is sent to that
particular router, the RTO is the time in milliseconds that the router waits for an
acknowledgement of that unicast.
+ Queue count (Q Cnt): shows the number of queued EIGRP packets. It is usually 0.
+ Sequence Number (Seq Num): the sequence number of the last update EIGRP packet
received. Each update message is given a sequence number, and the received ACK should have
the same sequence number. The next update message to that neighbor will use Seq Num + 1.
At CCNA level, we only care about 4 columns: Address, Interface, Hold & Uptime. The other columns will be discussed in CCNP, so you don't need to remember them now!

Notice that you can see the line "IP-EIGRP neighbors for process 100". Process 100 here means AS 100.
Next we will analyze the EIGRP topology with the show ip eigrp topology command. The output
of Router0 is shown below

The letter P at the left margin of each route entry stands for Passive. The Passive state indicates that the route is in quiescent mode, implying that the route is known to be good and that no activities are taking place with respect to it.
Each route shows the number of successors it has. For example, the networks 192.168.2.0, 192.168.1.0, 192.168.3.0 & 192.168.4.0 have only 1 successor (and no feasible successor). Only network 192.168.5.0 has 2 successors.
We notice that there are 2 numbers inside the brackets (30720/28160). The first one is the metric (Feasible Distance) from Router0 to the destination; the second is the AD of this route, as advertised by the neighbor. For example, the third route entry has:

Let's see how to calculate them!


First you should learn the formula to calculate the metric. It's a conditional formula and a bit complex, I think :)
metric = [K1 * bandwidth + (K2 * bandwidth)/(256 - load) + K3 * delay] * [K5/(reliability + K4)] if K5 > 0
metric = [K1 * bandwidth + (K2 * bandwidth)/(256 - load) + K3 * delay] if K5 = 0
Note: you can check these K values with the show ip protocols command. Below is an example
of this command on Router0.

To change these values, use the metric weights tos k1 k2 k3 k4 k5 command in EIGRP router configuration mode.
By default, K1 = 1, K2 = 0, K3 = 1, K4 = 0, K5 = 0 which means that the default values use
only bandwidth & delay parameters while others are ignored. The metric formula is now
reduced:
metric = bandwidth + delay
But the bandwidth here is defined as the slowest bandwidth in the route to the destination &
delay is the sum of the delays of each link. Here is how to calculate the EIGRP metric in detail:

EIGRP uses the slowest bandwidth of the outgoing interfaces of the route to calculate the
metric. In this case we need to find out the bandwidth of Fa0/0 of Router0 & Fa0/1 of Router1 as
the destination network is 192.168.3.0/24.

Find the bandwidth


We can find the bandwidth of each interface with the show interfaces command. Below is the output of show interfaces fa0/0 on Router0.

All the interfaces in this topology have the bandwidth of 100,000 Kbps so we will get the same
result on interface Fa0/1 of Router1 -> The slowest bandwidth here is 100,000 Kbps. Now we
can calculate the first portion of the formula:

Notice that if the result is not an integer, it is rounded down. For example, 10,000,000 divided by 1,544 (the bandwidth of a T1 link, in Kbps) equals 6476.68, which is rounded down to 6476.
Find the delay
EIGRP also uses the delay of the outgoing interfaces, which can also be found with the show interfaces command; the delay lies next to the bandwidth value (for example, DLY 100 usec). In this case, the delay value of both Fa0/0 of Router0 & Fa0/1 of Router1 is 100 usec (microseconds), so the sum of the delays is 100 + 100 = 200 usec. The second portion of the formula is:

Note: usec here means microsecond (which is 1/1000 of a millisecond). According to this link: http://www.cisco.com/en/US/tech/tk365/technologies_white_paper09186a0080094cb7.shtml#eigrpmetrics, the delay shown in the show ip eigrp topology or show interface commands is in microseconds. We have to divide by 10 to get the tens-of-microseconds unit used in the metric formula above.

Get the metric


Now just sum up the two portions of the formula and multiply by 256 to get the result:

The result is 30720 and it matches the value shown in the topology table of the route to
192.168.3.0/24

Using the formula above, we can easily calculate the AD of that route (with slowest bandwidth = 100,000 Kbps; sum of delay = 10):
metric = (100 + 10) * 256 = 28160
This metric matches the second parameter of the above route.
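The whole calculation can be reproduced in a few lines (a sketch assuming the default K values, where the metric reduces to 256 * (bandwidth portion + delay portion); the function name is ours):

```python
def eigrp_metric(slowest_bw_kbps: int, delays_usec: list[int]) -> int:
    """EIGRP metric with default K values (K1 = K3 = 1, K2 = K4 = K5 = 0).
    Bandwidth portion: 10^7 / slowest bandwidth in Kbps, rounded down.
    Delay portion: sum of interface delays, in tens of microseconds."""
    bandwidth = 10_000_000 // slowest_bw_kbps   # integer division = round down
    delay = sum(delays_usec) // 10              # usec -> tens of usec
    return (bandwidth + delay) * 256

# Metric from Router0 to 192.168.3.0/24 (two FastEthernet hops, 100 usec each):
print(eigrp_metric(100_000, [100, 100]))   # 30720
# AD advertised by Router1 (one outgoing interface, 100 usec of delay):
print(eigrp_metric(100_000, [100]))        # 28160
```

Both values match the (30720/28160) pair shown in the topology table above.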
Note: The output of the show ip eigrp topology command shows only successors and feasible successors, while the output of show ip eigrp topology all-links shows all neighbors, whether they are feasible successors or not. To learn more about show ip eigrp topology all-links please read http://www.digitaltut.com/route-eigrp-simlet. Although it belongs to the CCNP exam, CCNA candidates can read it too.
EIGRP Routing table
The last table we will discuss is the routing table. This is the table most often used to check the operation of EIGRP. Here is the output of the show ip route command on Router0:

The routing table shows two parameters [90/30720]; the first one is the administrative distance of EIGRP. EIGRP has a default administrative distance of 90 for internal routes, so among the common interior routing protocols it is often the most preferred, having the lowest administrative distance.
Administrative distance is the measure used by Cisco routers to select the best path when
there are two or more different routes to the same destination from two different routing
protocols.
Below are the administrative distances of the most popular routing protocols used nowadays. Notice that the smaller, the better.

So, if a network runs two routing protocols at the same time, for example EIGRP and OSPF, which routing protocol will the router choose? Well, the answer is EIGRP, as it has a lower Administrative Distance than OSPF (90 < 110).
The second parameter, as you can guess, is the metric of that route as we discussed above.
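The selection between routing sources can be sketched as picking the lowest AD (the values below are Cisco defaults; the helper function is ours, for illustration only):

```python
# Default administrative distances on Cisco routers
AD = {
    "connected": 0,
    "static": 1,
    "eBGP": 20,
    "EIGRP": 90,
    "OSPF": 110,
    "RIP": 120,
}

def preferred_source(candidates: list[str]) -> str:
    """Given routing sources offering the same destination prefix,
    return the one with the lowest administrative distance."""
    return min(candidates, key=AD.get)

print(preferred_source(["EIGRP", "OSPF"]))   # EIGRP (90 < 110)
print(preferred_source(["OSPF", "RIP"]))     # OSPF (110 < 120)
```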
no auto-summary with EIGRP
One of the features of EIGRP is support for VLSM and discontiguous networks. Discontiguous networks are networks that have subnets of a major network separated by a different major network. Below is an example of discontiguous networks where subnets 10.10.1.0/24 and 10.10.2.0/24 are separated by a 2.0.0.0/8 network.

Now let's see what happens when we turn on EIGRP on both routers. To turn on EIGRP you use these commands:
R1(config)#router eigrp 1
R1(config-router)#network 2.0.0.0
R1(config-router)#network 10.10.1.0 (or network 10.0.0.0)
R2(config)#router eigrp 1
R2(config-router)#network 2.0.0.0
R2(config-router)#network 10.10.2.0 (or network 10.0.0.0)
You can try to use the more specific network 10.10.1.0 command instead of network 10.0.0.0, hoping that EIGRP will understand it is a sub-network. But if we check the configuration with the show running-config command, we notice that EIGRP has auto-summarized our network.
R1#show running-config

-> Network 10.10.1.0 has been summarized to network 10.0.0.0 because EIGRP knows the 10.x.x.x network belongs to class A.
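Auto-summary is essentially a classful lookup on the first octet. A minimal sketch (the function is ours for illustration, not an IOS feature):

```python
def classful_summary(ip: str) -> str:
    """Return the classful (auto-summarized) network for an IPv4 address:
    class A -> /8, class B -> /16, class C -> /24."""
    octets = ip.split(".")
    first = int(octets[0])
    if first < 128:                                   # Class A
        return f"{octets[0]}.0.0.0/8"
    elif first < 192:                                 # Class B
        return f"{octets[0]}.{octets[1]}.0.0/16"
    else:                                             # Class C
        return f"{octets[0]}.{octets[1]}.{octets[2]}.0/24"

print(classful_summary("10.10.1.0"))     # 10.0.0.0/8 (what EIGRP advertised)
print(classful_summary("172.16.10.0"))   # 172.16.0.0/16
print(classful_summary("192.168.3.0"))   # 192.168.3.0/24
```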
The same thing happens for R2. Now we should check the routing table of R1 with the show ip
route command
R1#show ip route

From the output above we learn that R1 only knows about the directly connected 10.10.1.0/24 network but doesn't have any information about the far-away 10.10.2.0/24 network, so a ping to 10.10.2.1 cannot succeed (notice, though, that we can ping hosts on the directly connected network, 10.10.1.2 for example).
So we can conclude that if a router receives the same route as one it is advertising, it will not learn that route. In the above example, the collision occurs because both routers summarize to network 10.0.0.0/8 and advertise it to the other router. The neighboring router realizes that it is also advertising this network, so it drops this network information.
Now if we use the no auto-summary command on both routers, the problem will surely be solved, but first let's try using that command only on router R1.

R1(config)#router eigrp 1
R1(config-router)#no auto-summary
R1#show ip route

-> Nothing changes!


R2#show ip route

-> R2 has just learned about the new 10.10.1.0/24 network which is advertised from R1 so R2
can ping this network

In conclusion, when we enable no auto-summary on R1, R1 advertises its networks with their subnet masks, so R2 can learn them correctly.

OSPF Tutorial
In this article we will learn about the OSPF Routing Protocol
Open Shortest Path First (OSPF) is the most widely used interior gateway routing protocol in the world, because it is a public (non-proprietary) routing protocol, while its biggest rival, EIGRP, is a Cisco proprietary protocol, so other vendors can't use it (edit: EIGRP has become a public routing protocol since 2013). OSPF is a complex link-state routing protocol.
Link-state routing protocols generate routing updates only when a change occurs in the network
topology. When a link changes state, the device that detected the change creates a link-state
advertisement (LSA) concerning that link and sends it to all neighboring devices using a special
multicast address. Each routing device takes a copy of the LSA, updates its link-state database
(LSDB), and forwards the LSA to all neighboring devices.
Note:
+ OSPF routers use LSAs (Link State Advertisements) to describe their link state. The LSDB stores all LSAs.
+ A router uses a Router LSA to describe its interface IP addresses.
+ After OSPF is started on a router, it creates an LSDB that contains one entry: its own Router LSA.
There are five types of OSPF Link-State Packets (LSPs).

+ Hello: used to establish and maintain adjacencies with other OSPF routers. Hellos are also used to elect the Designated Router (DR) and Backup Designated Router (BDR) on multiaccess networks (like Ethernet or Frame Relay).
+ Database Description (DBD or DD): contains an abbreviated list of the sending router's link-state database and is used by receiving routers to check against their local link-state database

+ Link-State Request (LSR): used by receiving routers to request more information about any
entry in the DBD
+ Link-State Update (LSU): used to reply to LSRs as well as to announce new information.
LSUs contain seven different types of Link-State Advertisements (LSAs)
+ Link-State Acknowledgement (LSAck): sent to confirm receipt of an LSU message

Key points
+ Is a public (non-proprietary) routing protocol.
+ Is the only link-state routing protocol you learn in CCNA
+ Works by using the Dijkstra (Shortest Path First) algorithm
+ Information about its neighbors (local connectivity) is sent to the entire network using
multicasting
+ Routing information is shared through Link-state updates (LSAs)
+ HELLO messages are used to maintain adjacent neighbors. By default, OSPF routers send
Hello packets every 10 seconds on multiaccess and point-to-point segments and every 30
seconds on non-broadcast multiaccess (NBMA) segments (like Frame Relay, X.25, ATM).
+ Is a classless routing protocol because it does not assume the default subnet masks are used.
It sends the subnet mask in the routing update.
+ Supports VLSM and route summarization
+ Uses COST as a metric, which Cisco defines as the inverse of the bandwidth (by default, 10^8 divided by the interface bandwidth in bps)
+ Uses AREAs to subdivide large networks, providing a hierarchical structure and limiting LSA flooding to routers of the same area. Area 0 is called the backbone area, and all other areas connect directly to it. All OSPF networks must have a backbone area
+ Only supports IP, but that's not bad as we are all using IP, right? :)
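The cost metric mentioned in the key points can be sketched as follows (assuming Cisco's default reference bandwidth of 10^8 bps; the cost is rounded down and never less than 1; the helper name is ours):

```python
def ospf_cost(bandwidth_bps: int, reference_bps: int = 100_000_000) -> int:
    """Cisco OSPF interface cost = reference bandwidth / interface bandwidth,
    rounded down, with a floor of 1."""
    return max(1, reference_bps // bandwidth_bps)

print(ospf_cost(10_000_000))     # Ethernet (10 Mbps)      -> 10
print(ospf_cost(100_000_000))    # FastEthernet (100 Mbps) -> 1
print(ospf_cost(1_544_000))      # T1 (1.544 Mbps)         -> 64
```

Note the floor of 1: with the default reference bandwidth, every interface of 100 Mbps or faster gets the same cost, which is why the reference bandwidth is often raised on modern networks.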
Area Border Routers (ABR) are any routers that have one interface in one area and another
interface in another area
Lets see an example of OSPF
Suppose OSPF has just been enabled on R1 & R2. Both R1 and R2 are very eager to discover if
they have any neighbors nearby but before sending Hello messages they must first choose an
OSPF router identifier (router-id) to tell their neighbors who they are. The Router ID (RID) is an
IP address used to identify the router and is chosen using the following sequence:
+ A manually configured router ID (via the router-id command) is preferred above all.
+ Otherwise, the highest IP address assigned to a loopback (logical) interface is chosen.
+ If no loopback interface is defined, the highest IP address among the router's active physical interfaces is chosen.
In this example, suppose R1 has 2 loopback interfaces & 2 physical interfaces:

+ Loopback 0: 10.0.0.1
+ Loopback 1: 12.0.0.1
+ Fa0/0: 192.168.1.1
+ Fa0/1: 200.200.200.1
As said above, loopback interfaces are preferred to physical interfaces (because they never go down), so the highest IP address among the loopback interfaces is chosen as the router-id -> Loopback 1's IP address is chosen as the router-id.

Now suppose R1 doesn't have any loopback interfaces, but it has 2 physical interfaces:
+ Fa0/0: 210.0.0.1, but it is shut down
+ Fa0/1: 192.168.1.2 (active)
Although Fa0/0 has a higher IP address, it is shut down, so R1 will choose Fa0/1's IP address as its router-id.
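Both cases above can be sketched as a small selection helper (a hypothetical function; the interface data comes from the examples, and only active interfaces are passed in):

```python
def ospf_router_id(manual_id=None, loopbacks=None, physicals=None):
    """Pick the OSPF router-id: a manual setting first, then the highest
    loopback IP, then the highest IP among active physical interfaces."""
    def as_tuple(ip):
        # Compare IPs numerically, octet by octet (string compare would
        # wrongly rank "9.0.0.1" above "10.0.0.1").
        return tuple(int(o) for o in ip.split("."))
    if manual_id:
        return manual_id
    if loopbacks:
        return max(loopbacks, key=as_tuple)
    return max(physicals, key=as_tuple)

# R1 with two loopbacks and two physical interfaces:
print(ospf_router_id(loopbacks=["10.0.0.1", "12.0.0.1"],
                     physicals=["192.168.1.1", "200.200.200.1"]))  # 12.0.0.1
# No loopbacks; Fa0/0 (210.0.0.1) is shut down, so only Fa0/1 counts:
print(ospf_router_id(physicals=["192.168.1.2"]))                   # 192.168.1.2
```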

Now both the routers have the router-id so they will send Hello packets on all OSPF-enabled
interfaces to determine if there are any neighbors on those links. The information in the OSPF
Hello includes the OSPF Router ID of the router sending the Hello packet.

For example, when R1 wants to find out if it has any neighbors running OSPF, it sends a Hello message to the multicast address 224.0.0.5. This is the multicast address for all OSPF routers, and all routers running OSPF will process this message.

If an OSPF router receives an OSPF Hello packet that satisfies all its requirements, it will establish an adjacency with the router that sent the Hello packet. In this example, if R1 meets R2's requirements, meaning it has the same Hello interval, Dead interval and Area number, R2 will add R1 to its neighbor table.
+ Hello interval: indicates how often it sends Hello packets. By default, OSPF routers send
Hello packets every 10 seconds on multiaccess and point-to-point segments and every 30
seconds on non-broadcast multiaccess (NBMA) segments (like Frame Relay, X.25, ATM)
+ Dead interval: the number of seconds this router waits without receiving Hello packets from a neighbor before declaring the adjacency to that neighbor down
+ AREA number: the area it belongs to
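The neighbor check above can be sketched by comparing those three fields of a received Hello (a simplified model; real OSPF also checks things like authentication, the subnet mask, and stub-area flags):

```python
def can_become_neighbors(local: dict, hello: dict) -> bool:
    """Accept a received Hello only if its timers and area match ours."""
    return all(local[field] == hello[field]
               for field in ("hello_interval", "dead_interval", "area"))

r2 = {"hello_interval": 10, "dead_interval": 40, "area": 0}
r1_hello = {"hello_interval": 10, "dead_interval": 40, "area": 0}
print(can_become_neighbors(r2, r1_hello))                            # True
# A mismatched Hello interval (e.g. an NBMA default of 30s) blocks it:
print(can_become_neighbors(r2, {**r1_hello, "hello_interval": 30}))  # False
```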

Now R1 and R2 are neighbors, but they don't exchange LSAs immediately. Instead, they send Database Description (DD or DBD) packets, which contain an abbreviated list of the sending router's link-state database.
The neighbors also determine who will be the master and who will be the slave. The router with the higher router-id becomes the master and initiates the database exchange. The receiver acknowledges a received DD packet by sending an identical DD packet back to the sender. Each DD packet has a sequence number and only the master can increment sequence numbers.

R1 or R2 can send a Link-State Request to get missing LSAs from its neighbor

R2 sends back an LSAck packet to acknowledge the packet

There are 3 types of tables


+ Neighbor
+ Topology
+ Routing

Neighbor table
+ Contains information about the neighbors
+ A neighbor is a router that shares a link on the same network
+ Another, closer relationship is an adjacency; not all neighbors become adjacent
+ LSA updates are exchanged only once an adjacency is established
Topology table
+ Contains information about all networks and the paths to reach them
+ All LSAs are entered into the topology table
+ When the topology changes, new LSAs are generated and flooded
+ An algorithm is run on the topology table to compute the shortest paths; this algorithm is known as SPF, or the Dijkstra algorithm
Routing Table
+ Also known as the forwarding database
+ Generated when the SPF algorithm is run on the topology database
+ The routing table of each router is unique
DD: Exchange LSDB lists
Neighbors use DD (Database Description) packets to exchange their LSDB catalogs. In this scenario, R1 sends a DD to R2 first. It says: "I have a Router LSA from R1." R2 also sends a DD to R1: "I have a Router LSA from R2."
Note: A DD works like a table of contents. It lists what the LSDB has, but not the details. By reading a DD, the receiving router can determine what it is missing and then ask the sender to transmit the required LSAs.
R1 Request, R2 Update
R1 has learned that R2 has an R2 Router LSA that it does not have, so R1 sends an LS Request to R2. When R2 receives this request, it sends an Update to transmit this LSA to R1.
R2 Request, R1 Update
R2 also sends a Request to R1, and R1 replies with an Update. Upon receiving the Update, R2 adds R1's Router LSA to its LSDB, calculates its routes, and adds a new entry (192.168.1.0, S1/0) to its routing table.
Note: OSPF works in a distributed fashion. After routers have synchronized their LSDBs, they use the same data (the LSDB) to calculate the shortest paths and update their routing tables independently.
Ack Update: LSAs are received

In order to ensure reliable transmission, when a router receives an Update, it sends an Ack to the Update sender. If the sender does not receive an Ack within a specific period, it times out and retransmits the Update.
Note: OSPF uses Update-Ack to implement reliable transmission. It does not use TCP.
H1 ping H2: succeeded.
Each OSPF router creates a Router LSA to describe its interfaces' IP addresses and floods its Router LSA to its neighbors. After a few rounds of flooding, all OSPF routers have the same set of Router LSAs in their LSDBs. Now routers can use the same LSDB to calculate routes and update their routing tables.
From the LSDB, a router learns the entire topology: the number of routers connected, the router interfaces and their IP addresses, and the interface link costs (the OSPF metric). With such detailed information, routers are able to calculate routing paths to reach all destinations found in the LSDB.
For example, in the OSPF basic simulation (see External links), R1's LSDB contains two Router LSAs: a Router LSA from R1 (R1 has two links, with IP addresses 192.168.1.0/24 and 192.168.3.0/30) and a Router LSA from R2 (R2 has two links, with IP addresses 192.168.2.0/24 and 192.168.3.0/30). From these LSAs, R1 can calculate the routing path to reach the remote destination 192.168.2.2 and adds an entry (192.168.2.0/24, S1/0) to its routing table.

RIP Tutorial
In this tutorial we will learn about RIP routing protocol
Routing Information Protocol (RIP) is a distance-vector routing protocol. RIP sends the complete
routing table out to all active interfaces every 30 seconds. RIP only uses hop count (the number
of routers) to determine the best way to a remote network.
Note: RIP v1 is a classful routing protocol but RIP v2 is a classless routing protocol.
Classful routing protocols do not include the subnet mask with the network address in routing updates, which can cause problems with discontiguous subnets or networks that use Variable-Length Subnet Masking (VLSM). Fortunately, RIPv2 is a classless routing protocol, so subnet masks are included in the routing updates, making RIPv2 more compatible with modern routing environments.
Distance vector protocols advertise routing information by sending messages, called routing updates, out the interfaces on a router.
RIP Operation
A big problem with distance vector routing protocols is the routing loop. Let's take a look at how a routing loop occurs.
Here we have routers A, B and C. Notice that at the beginning (when a routing protocol is not
turned on) there are only directly connected networks in the routing tables of these routers. For
example, in the routing table of router A, network 1.0.0.0 has already been known because it
is directly connected through interface E0 and the metric (of a directly connected network)
is 0 (these 3 parameters are shown in the routing tables below).

Also B knows networks 2.0.0.0 & 3.0.0.0 with a metric of 0.


Also C knows networks 3.0.0.0 & 4.0.0.0 with a metric of 0.
Now we turn on RIP on these routers (we will discuss the configuration later; in the rest of this article, we will call network 1.0.0.0 "network 1", 2.0.0.0 "network 2" and so on).
RIP sends updates every 30 seconds, so after 30 seconds go by, A sends a copy of its routing table to B. B already knew about network 2, but now B learns about network 1 as well. Notice the metric we have here for directly connected networks: since we're using RIP, the metric is hop count. Remember, a hop count (or a hop) is the number of routers these packets have to go through to reach the destination. For example, from router A, networks 1 & 2 (which are directly connected) are 0 hops away; router B has now learned about network 1 from A via its E0 interface, so the metric is 1 hop.

Each router receives a routing table from its direct neighbor. For example, Router B receives
information from Router A about network 1 and 2. It then adds a distance vector metric (such as
the number of hops), increasing the distance vector of these routes by 1.
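This receive-and-add-one step can be sketched as follows (a toy model of processing a single RIP update; the table layout and function name are ours):

```python
def process_rip_update(routing_table: dict, update: dict,
                       neighbor: str, iface: str) -> None:
    """Merge a neighbor's advertised routes, adding 1 hop to each metric.
    Install a route only if it is new or better than what we have."""
    for network, hops in update.items():
        new_metric = hops + 1
        current = routing_table.get(network)
        if current is None or new_metric < current["metric"]:
            routing_table[network] = {"metric": new_metric,
                                      "via": neighbor, "iface": iface}

# Router B starts with its directly connected networks (metric 0)...
b_table = {"2.0.0.0": {"metric": 0, "via": None, "iface": "E0"},
           "3.0.0.0": {"metric": 0, "via": None, "iface": "E1"}}
# ...and receives A's update advertising networks 1 and 2:
process_rip_update(b_table, {"1.0.0.0": 0, "2.0.0.0": 0}, "A", "E0")
print(b_table["1.0.0.0"])   # {'metric': 1, 'via': 'A', 'iface': 'E0'}
print(b_table["2.0.0.0"])   # unchanged: directly connected (0) beats 1 hop
```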
B also exchanges its routing table with A about network 2 and 3.

B then passes the routing table to its other neighbor, Router C.

C also sends its update to B and B sends it to A.

Now the network is converged.


Now let's assume network 4 suddenly goes down.

When network 4 fails, Router C detects the failure and stops routing packets out its E1 interface. However, Routers A and B have not yet received notification of the failure. Router A still believes it can access 4.0.0.0 through Router B. The routing table of Router A still reflects a path to network 4.0.0.0 with a distance of 2, and router B has a path with a distance of 1.
There would be no problem if C sent its update earlier than B, informing B that the network is currently down. But if B sends its update first, C will see that B has a path to network 4 with a metric of 1, so it updates its routing table, thinking "if B can get to network 4 in 1 hop, then I can get to network 4 in 2 hops", but of course this is totally wrong.

The problem does not stop here. In turn, C sends an update to B informing that it can access network 4 in 2 hops. B learns this and thinks "if C can access network 4 in 2 hops, then I can access it in 3 hops".

This same process repeats as B and C keep sending updates to each other, and the metric increases toward infinity, so this phenomenon is called counting to infinity.
Below lists some methods to prevent this phenomenon:
SPLIT HORIZON:
A router never sends information about a route back in the direction from which the original information came; routers keep track of where the information about a route came from. This means that when router A sends an update to router B about a failed network, router B does not send an update for the same network back to router A.
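Split horizon can be sketched as a filter applied when building the update sent out each interface (a simplified model; the table layout and function name are ours):

```python
def build_update(routing_table: dict, out_iface: str) -> dict:
    """Split horizon: never advertise a route out of the interface
    on which it was learned (or to which it is directly attached)."""
    return {net: info["metric"]
            for net, info in routing_table.items()
            if info["iface"] != out_iface}

b_table = {"1.0.0.0": {"metric": 1, "iface": "E0"},   # learned from A on E0
           "2.0.0.0": {"metric": 0, "iface": "E0"},
           "3.0.0.0": {"metric": 0, "iface": "E1"}}
# The update sent back toward A (out E0) omits everything tied to E0:
print(build_update(b_table, "E0"))   # {'3.0.0.0': 0}
```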
ROUTE POISONING:
A router considers a route advertised with an infinite metric (metric = 16) to have failed, instead of just marking it down. For example, when network 4 goes down, router C starts route poisoning by advertising the metric (hop count) of this network as 16, which indicates an unreachable network. When router B receives this advertisement, it continues advertising this network with a metric of 16.
POISON REVERSE:
The poison reverse rule overrides the split horizon rule. For example, if router B receives a route poisoning of network 4 from router C, then router B will send an update back to router C (which breaks the split horizon rule) with the same poisoned hop count of 16. This ensures all the routers in the domain receive the poisoned route update.
Notice that every router performs poison reverse when learning about a downed network. In the
above example, router A also performs poison reverse when learning about the downed network
from B.
HOLD DOWN TIMERS:
After hearing a route poisoning, a router starts a hold-down timer for that route. If it gets an update with a better metric than the originally recorded metric within the hold-down period, the hold-down timer is removed and data can be sent to that network. Also, within the hold-down period, if an update is received from a different router than the one that performed the route poisoning, with an equal or poorer metric, that update is ignored. During the hold-down period, the downed route appears as "possibly down" in the routing table.

For example, in the scenario above, when B receives a route poisoning update from C, it marks network 4 as "possibly down" in its routing table and starts the hold-down timer for network 4. Within this period, if it receives an update from C informing it that network 4 has recovered, B will accept that information, remove the hold-down timer and allow data to go to that network. But if B receives an update from A claiming that it can reach network 4 in 1 (or more) hops, that update will be ignored and the hold-down timer keeps counting.
Note: The default hold-down timer value is 180 seconds.
TRIGGERED UPDATE:
When any route fails, the router does not wait for the next periodic update; instead it sends an immediate (triggered) update listing the poisoned route.
COUNTING TO INFINITY:
The maximum hop count is 15; a route with 16 hops is considered unreachable, which bounds how far the count can go.
RIP Timers
RIP uses several timers to regulate its operation. These timers are described below:
Update timer: how often the router sends updates. The default update timer is 30 seconds.
Invalid timer (also called the Expire timer): how much time must pass without seeing a valid update before a route becomes invalid and is placed into holddown. The default invalid timer is 180 seconds.
Holddown timer: if RIP receives an update with a hop count higher (poorer) than the hop count recorded in the routing table, RIP does not believe that update. The default holddown timer is 180 seconds.
Flush timer: how much time may pass since the last valid update before RIP deletes the route from its routing table. The default flush timer is 240 seconds.
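The interplay of these timers can be sketched as a simple aging check (a toy model; real RIP tracks these timers per route and resets them on each valid update):

```python
# Default RIP timer values, in seconds
UPDATE, INVALID, HOLDDOWN, FLUSH = 30, 180, 180, 240

def route_state(seconds_since_last_update: int) -> str:
    """Classify a route by how long ago its last valid update arrived."""
    t = seconds_since_last_update
    if t < INVALID:
        return "valid"
    elif t < FLUSH:
        return "possibly down (invalid, in holddown)"
    else:
        return "flushed (removed from routing table)"

print(route_state(25))    # valid
print(route_state(200))   # possibly down (invalid, in holddown)
print(route_state(250))   # flushed (removed from routing table)
```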

Configuring RIP
Router(config)#router rip
(Enters router RIP configuration mode)

Router(config-router)#network <address>
(Identifies networks that will participate in the routing protocol. Notice that you identify networks, not interfaces.)

NOTE: You need to advertise only the classful network number, not a subnet:
Router(config-router)#network 172.16.0.0
not
Router(config-router)#network 172.16.10.0
If you advertise a subnet, you will not receive an error message, because the router will
automatically convert the subnet to the classful network address.
To learn more about configuring RIP, please read my Configuring RIP GNS3 Lab tutorial
Key points:
+ RIP uses hop counts to calculate optimal routes (a hop is a router).
+ RIP routing is limited to 15 hops to any location (16 hops indicates the network is
unreachable).
+ RIP uses the split horizon with poison reverse method to prevent the count-to-infinity
problem.
+ RIPv1 uses only classful routing, so it uses full address classes, not subnets (RIPv2 is classless).
+ RIP broadcasts updates to the entire network.
+ RIP can maintain up to six multiple paths to each network, but only if the cost is the same.
+ RIP supports load balancing over same-cost paths.
+ The update interval default is 30, the invalid timer default is 180, the holddown timer default
is 180, and the flush timer default is 240.
