How to prepare for Cisco CCNA Data Center 640-916 DCICT
Posted on 25/01/2016

After passing the first exam required to get a CCNA DC certification, DCICN (640-911), I obviously also started studying for the second exam: DCICT (640-916). Although this exam brought fewer surprises in terms of content, it was still a lot of information to process and study, especially since I gathered info from a lot of different sources. To help me with studying, I decided to do the same thing as with the first exam. You can find the information which I gathered to pass the exam in this post. Hope it helps.

Exam preparation
To prepare, my first source of information was the 640-916 Official Certification Guide from Cisco Press. The book is very extensive in explaining details; sometimes I even found it a little overwhelming. For me personally, it would be better if they first explained a technology in simple words and only then focused on the details.

Besides the book, I used a lot of Googling, mainly to find explanations through the eyes of other people. I also already learned that it's a good idea to have a look at the configuration guides on the Cisco website, and it certainly pays off to do some simulation in a lab environment. Even a very basic setup usually reveals things which I didn't think about after reading a guide.

Unlike with the 640-911 exam, I found that this exam cleanly covers the exam objectives which are available from Cisco. The topics were quite evenly distributed and they were clear. Sometimes a small detail in the question really matters, so pay attention to that. Compared with 640-911, I found the exam easier but the content more difficult. Maybe that sounds confusing, but that's my experience.

As I also mentioned in my previous post, How to prepare for Cisco CCNA Data Center 640-911 DCICN: you need more than this post to pass the exam. It's important to understand the concepts and purpose of the technologies. All information below is what I used to help me study.

Network architecture
A classic datacenter network has a modular multi layer network design.
The most used model has three layers. Using a modular design is
scalable and each layer has it’s own specific task:

Core layer:

Default summary route
High speed switching
Layer 3 routing

Distribution/aggregation layer:

Network services (firewall/IPS/ACE/WAAS) available to all servers
Policy enforcement
Boundary between L2 & L3
Aggregates the access switches

Access layer:

Host connectivity
QoS marking
VLAN marking

More information:

http://www.w7cloud.com/cisco-3-layer-hierarchical-network-
model-core-distribution-access/

Variations on the traditional three layer model:

Collapsed core: Distribution and Core are one layer. Can be used
in smaller environments
Spine and leaf
Leaf = Access layer. More leaf-switches = more access
connectivity
Spine connects all leafs in a redundant way. More spine-
switches = more switching capacity
FabricPath
ACI (Application Centric Infrastructure)

OTV (Overlay Transport Virtualization)

OTV is a technology to extend L2 over geographically separated data centers.

Other examples are: EoMPLS (Ethernet over Multiprotocol Label Switching), VPLS (Virtual Private LAN Services), dark fiber.

Characteristics:

Creates a logical overlay network over the transport network
Designated forwarding device per VLAN: AED (Authoritative Edge Device)
OTV devices need to form an adjacency via multicast or unicast
Supported on Nexus 7000 and ASR 1000
Requires M or F3 line cards
MAC addresses are routed
BPDUs are not forwarded over the overlay interface
Uses the IS-IS routing protocol for the control plane
No fragmentation by the edge device: the DF bit is set (the transport MTU needs to be large enough)
ARP snooping (caches ARP replies and answers ARP requests locally if possible)
Multihoming is supported
Requires the LAN transport license
OTV encapsulation adds 42 bytes of overhead (the transport MTU needs to account for this)

Terminology:

OTV edge: device that participates in OTV
OTV internal interface: interface on the L2 side of the network
OTV join interface: interface on the L3 side of the network (can be a physical interface, subinterface or logical interface (port-channel))
OTV overlay interface: logical interface that is responsible for the OTV traffic over the join interface

Configuration:

In the example, I'll configure OTV for a multicast transport for VLAN 5 and use VLAN 10 as the OTV site VLAN. This configuration needs to be done on both OTV edge devices (in the example: S1=192.168.100.10, S2=192.168.100.20):

Configure the OTV join interface (L3 side):

S1# con
Enter configuration commands, one per line. End with CNTL/Z.
S1(config)# int eth2/1
S1(config-if)# ip address 192.168.100.10 255.255.255.0
S1(config-if)# ip igmp version 3
S1(config-if)# no shut

General config of OTV

S1(config)# feature otv
S1(config)# otv site-vlan 10
S1(config-site-vlan)# otv site-identifier 0x256
% Site Identifier mismatch will prevent overlays from forward

Configure overlay interface to use over multicast:

S1(config)# int overlay 1
S1(config-if-overlay)# otv control-group 239.1.1.1
S1(config-if-overlay)# otv data-group 232.1.1.0/28
S1(config-if-overlay)# otv join-interface eth 2/1
OTV needs join interfaces to be configured for IGMP version 3
S1(config-if-overlay)# otv extend-vlan 5
S1(config-if-overlay)# no shut
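
The config above assumes a multicast-capable transport. If the transport can only do unicast, OTV can use an adjacency server instead; a rough sketch, assuming S1 acts as the adjacency server and the same join interface and VLANs as above:

S1(config)# int overlay 1
S1(config-if-overlay)# otv adjacency-server unicast-only
S1(config-if-overlay)# otv join-interface eth 2/1
S1(config-if-overlay)# otv extend-vlan 5
S1(config-if-overlay)# no shut

On the other edge device, point to the adjacency server:

S2(config)# int overlay 1
S2(config-if-overlay)# otv use-adjacency-server 192.168.100.10 unicast-only
S2(config-if-overlay)# otv join-interface eth 2/1
S2(config-if-overlay)# otv extend-vlan 5
S2(config-if-overlay)# no shut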

Debug/check:

S1(config)# show otv overlay 1

OTV Overlay Information
Site Identifier 0000.0000.0256

Overlay interface Overlay1

VPN name            : Overlay1
VPN state           : UP
Extended vlans      : 5 (Total:1)
Control group       : 239.1.1.1
Data group range(s) : 232.1.1.0/28
Broadcast group     : 239.1.1.1
Join interface(s)   : Eth2/1 (192.168.100.10)
Site vlan           : 10 (up)
AED-Capable         : No (No extended vlan operationally up)
Capability          : Multicast-Reachable

S1(config)# show otv adjacency

Overlay Adjacency database

Overlay-Interface Overlay1 :
Hostname   System-ID        Dest Addr
S2         000c.29c6.b255   192.168.100.20

More information: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/nx-os/OTV/quick_start_guide/b-Cisco-Nexus-7000-Series-OTV-QSG.html

vPC (virtual Port Channel)

vPC presents two upstream switches as one virtual switch to hosts that connect using a port channel.

Characteristics:

Loop-free
Uses all bandwidth (without vPC, one of the links would be blocked by STP)
Max. two switches per vPC domain
Max. one vPC domain per switch (or VDC)
Available on M or F line cards
Best practice is to use dedicated rate mode
The vPC keepalive is required to establish the vPC but can be lost during usage
Traffic normally doesn't pass the vPC peer link (but it can)
There is no preempt after a failover

Terminology:

vPC peer: one of the two switches on the upstream side
vPC peer link: link between the peers (10GE port-channel)
vPC peer keepalive: separate link between the peers (mgmt0 is allowed)
vPC member port: port used to connect hosts to the virtual port channel
Orphan port: port that doesn't use vPC but does provide connectivity to a vPC VLAN
CFS: Cisco Fabric Services protocol: state/config sync over the vPC peer link

Configuration:

In the example, I'll configure vPC on one of the two vPC peers. Ethernet 2/1 and 2/2 are used for the vPC peer link, mgmt0 is used for the keepalive and Ethernet 2/3 is used to connect a host (member port).

Basic configuration and enabling vPC:

S1# show ip int brief vrf management
IP Interface Status for VRF "management"(2)
Interface IP Address Interface Status
mgmt0 192.168.55.10 protocol-up/link-up/admin
S1# show ip int brief
IP Interface Status for VRF "default"(1)
Interface IP Address Interface Status
Eth2/1 192.168.100.10 protocol-up/link-up/admin
S1# con
Enter configuration commands, one per line. End with CNTL/Z.
S1(config)# feature vpc

vPC configuration and keepalive link:

S1(config)# vpc domain 1
S1(config-vpc-domain)# peer-keepalive destination 192.168.55.2

vPC peer link:

S1(config)# int eth 2/1-2
S1(config-if-range)# switchport
S1(config-if-range)# channel-group 1 mode on
S1(config-if-range)# int port-channel 1
S1(config-if)# switchport mode trunk
S1(config-if)# vpc peer-link

vPC member port:

S1(config)# int eth 2/3
S1(config-if)# switchport
S1(config-if)# channel-group 2 mode active
S1(config-if)# int port-channel 2
S1(config-if)# switchport mode trunk
S1(config-if)# vpc 2

Debug/check:

S1(config)# show vpc
Legend:
 (*) - local vPC is down, forwarding via vPC pe

vPC domain id                     : 1
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : peer is alive
Configuration consistency status  : success
Configuration inconsistency reason: Consistency Check Not Perf
vPC role                          : secondary
Number of vPCs configured         : 1
Peer Gateway                      : Disabled
Dual-active excluded VLANs        : -

vPC Peer-link status
--------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ -------------------------------------------
2    Po2    up     1,3001-3500

vPC status
--------------------------------------------------------------
id   Port   Status Consistency Reason
---- ------ ------ ----------- ------------------------
1    Po1    up     success     success

More information:

http://www.cisco.com/c/en/us/products/collateral/switches/nexus-
5000-series-switches/configuration_guide_c07-543563.html
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/command/reference/vpc/n5k-
vpc-cr/n5k-vpc_cmds_show.html#wp1104923

FabricPath
FabricPath is used to get to a spine/leaf network design where a large layer 2 environment is scalable and gets the benefits of layer 3 routing, like multipathing, fast convergence and SPF, without creating multiple segments or STP disadvantages.

Characteristics:

Cisco proprietary
Does an address lookup for incoming traffic to find the outgoing destination (SPF)
ECMP (Equal Cost Multipathing) is possible up to 16 links
Uses a tree topology to determine routes for ARP, broadcast or multicast traffic. Trees are per VLAN
Uses IS-IS as routing protocol
Uses conversational MAC learning: only learn relevant MACs on a port
Not available for M1 interfaces, only on F1/F2/F3
Ethertype: 0x8903
STP BPDUs do not pass core ports
For classic switches, the FabricPath network appears as one STP bridge with a fixed BID (C84C.75FA.6000). The edge port always needs to be the root for FabricPath VLANs
vPC+ enables FabricPath to work with a vPC domain (FabricPath emulates a switch)

Terminology:

Core port: ports that form the FabricPath network. Uses the FabricPath header to encapsulate all traffic
Edge port: link to the classic Ethernet network (CE): uses standard Ethernet frames
DRAP (Dynamic Resource Allocation Protocol): assigns IDs to each switch
FTAG (FabricPath forwarding Tag): 10-bit traffic ID tag in the FabricPath header

Known unicast behavior: uses an existing mapping between unicast MAC and destination switch ID. Flow-based load balancing over equal cost paths can be used.

1. Receive a normal Ethernet frame on an edge port
2. Perform a lookup to identify the destination switch ID: success
3. Once the destination switch ID is known, look for the next-hop to reach the destination
4. Encapsulate the original Ethernet frame with a FabricPath header
5. The next-hop is a spine switch and it will forward the FabricPath-encapsulated frame to the destination leaf switch
6. On the destination leaf switch: check if this is really the end destination and determine the edge port
7. Remove the FabricPath header and forward the Ethernet frame to the destination edge port

Multicast behavior: uses multidestination trees calculated by IS-IS. Uses pruning to not forward to switches that do not have receivers.

Broadcast behavior: edge switches do not learn MACs but updates of known addresses are performed.

Unknown unicast behavior: conversational learning

1. The switch owning the edge port learns the source MAC of the connected host
2. Perform a lookup to identify the destination switch ID: fails

3. Encapsulate the original ethernet frame with a FabricPath header
4. Unicast root (elected earlier) will forward the request to all leafs
(they own the devices)
5. Leaf owning the device will answer via the root to the incoming
port
6. Learn the layer 2 route
7. Learn the destination MAC only on the switch owning the edge
port connected to the destination

Configuration:

Install and enable fabricpath:

S1# con
Enter configuration commands, one per line. End with CNTL/Z.
S1(config)# license grace-period
S1(config)# install feature-set fabricpath
S1(config)# feature-set fabricpath
S1(config)# show feature-set
Feature Set Name ID State
-------------------- -------- --------
fabricpath 2 enabled
fex 3 uninstalled
mpls 4 uninstalled

Optionally configure a fixed switch-id:

S1(config)# fabricpath switch-id 1

Configure FabricPath VLAN’s:

S1(config)# vlan 11-20
S1(config-vlan)# mode fabricpath

Configure interfaces that will be FabricPath core ports (will form an IS-IS adjacency):

S1(config)# int e2/1
S1(config-if)# switchport
S1(config-if)# switchport mode fabricpath

Debug/check:

S1(config)# show fabricpath switch-id
S1(config)# show fabricpath isis
S1(config)# show fabricpath route

More information:

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/fabricpath/513_n1_1/N5K_FabricPath_Configuration_
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus7000/sw/fabricpath/command/reference/fp_cmd_book/fp__cmd

FEX (Fabric EXtender)

FEX is a technology which allows you to extend the capacity of a parent switch with remote line cards (FEX) in a separate chassis (or card), connected over fabric links.

Characteristics:

The FEX device itself has no local configuration
Nexus 2000 series
Uses VN-tags (802.1BR) to identify FEX ports over the link between parent and FEX
ACLs, VLANs, QoS,… are handled on the parent for any FEX type
Can use static pinning: dedicated link for a group of ports between parent and FEX
Can use dynamic pinning: port-channel for all links between parent and FEX
Ports on the FEX are also called satellite ports

FEX types:

(ToR/Rack) FEX: Nexus 2000 connected to a parent Nexus 5000/7000; fabric to top of rack
Adapter FEX: use a NIC or CNA in a host as a FEX, creates vNICs for the OS: fabric to server
VM-FEX: extension of Adapter FEX to the level of a hypervisor (KVM, Hyper-V or VMware). Each guest on the hypervisor can have a vNIC: fabric to VM
Blade/chassis FEX: FEX adapter in a Cisco UCS blade enclosure (for example 2104XP): fabric to UCS blade

Terminology:

NIF: Network interface: port on the FEX towards the parent switch. Uses VN-tags
HIF: Host interface: port on the FEX towards the host (server)
VIF: Virtual interface: HIF including VLAN and other parameters
LIF: Logical interface: representation of the HIF on the parent

Configuration:

Configuration is only done on the parent switch; there is no console or similar on the FEX side.
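
As a rough sketch of what the parent-side configuration could look like on a Nexus 5000 (the FEX number 101, the interface numbers and the VLAN are just examples; this uses dynamic pinning via a port-channel):

S1(config)# feature fex
S1(config)# fex 101
S1(config-fex)# description rack1-fex
S1(config)# int eth 1/1-2
S1(config-if-range)# channel-group 101
S1(config-if-range)# int port-channel 101
S1(config-if)# switchport mode fex-fabric
S1(config-if)# fex associate 101
S1(config)# int eth 101/1/1
S1(config-if)# switchport access vlan 10

Once the FEX comes online, its satellite ports show up on the parent as eth101/1/x and are configured there like any other port.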

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus2000/sw/configuration/guide/rel_4_0_1a/NX2000CLIConfig/FEX
Config.html
http://packetlife.net/blog/2012/mar/29/nexus-2200-fex-
configuration/

Model overview:

More information:

http://nexp.com.ua/technologies/dc/choosing-between-dynamic-
and-static-fex-interfaces-pinning/
https://www.packetmischief.ca/2012/08/28/what-the-fex-is-a-fex-
anyways/
http://vjswami.com/2011/11/10/wtf-what-the-fex-are-you-talking-
about/

Storage architecture
Traditional SAN infrastructure types:

Point to point (DAS):
Direct connection between two FC-devices
Uses 1 bit addressing (one port is 0x000000, the other is 0x000001)
Arbitrated Loop (FC-AL):
All devices are connected in a loop and all devices can
access all storage
A hub is also considered as a loop-device
Uses 8 bit physical address (AL-PA)
126 addresses available for node (1 reserved for FL-port)
Switched Fabric
Single tier (one layer of switches)
Multitier (usually in a core-edge design): cost-effective for
larger designs
Uses 24 bit addressing: 8 bit for switch domain (unique for
each switch), 8 bit for the area (group of ports in a domain)
and 8 bit for the device
239 domains available (01-EF)

For a switched fabric it’s a good practice to have at least two fabrics
(A/B) which are either separated by physical switches or VSAN

Device connection process:

1. Port Initialization
2. Fabric login (FLOGI)
A. Switch assigns an FCID, based on the WWN
B. Switch reserves the necessary buffer-to-buffer credits (the larger the distance, the more B2B credits are needed)
3. Port login (PLOGI)
A. Between nodes that want to communicate

Terminology:

File based protocols: CIFS/SMB/NFS
Block based protocols: SCSI/iSCSI/FC/FCoE
NPV: allows multiple FLOGIs on one physical link. It creates multiple virtual F-ports. Since the number of domain IDs is limited to 239, NPV allows sharing of the same domain ID over multiple edge switches. You can see it as a kind of proxy between edge and core.
NPIV: N port ID virtualization: provides a means to assign multiple FC IDs to a single N port, which allows the server to assign unique FC IDs to different applications. Commonly used for virtualization.
WWN = hardcoded ~ MAC
WWN that starts with 20: = Cisco
WWN that has the second byte different from 0 is an extended WWN
WWNN = node WWN: device ID
WWPN = PWWN = port WWN: device port (different for example for a dual port HBA)
FWWN = fabric WWN
FSPF: Fabric Shortest Path First
Routing for FC
Uses domain ID (up to 75/fabric)
Loop free over E or TE ports
Uses a topology DB ~ comparable with OSPF
Directory/name server
Address: FFFFFC
An N-port can do a query on the name server
Contains addresses, WWNs, volume names

Principal/secondary (per VSAN) based on priority
FCIP: FC frames in IP packets over normal L3 link (uses
acceleration and compression techniques)
VSAN: like a VLAN for SAN: isolation between FC devices
Max 4094 VSANs
FCIDs can be re-used in different VSANs

Port types:

N = Node (server/storage device/tape)
F = Fabric (switch)
E = Expansion (link between switches, 1 VSAN; ISL)
TE = Trunking Expansion (trunk between 2 switches, multiple VSANs; EISL)
TF = Trunking Fabric: connects to a TN port: trunking to a node, sends tagged frames
NP = Node Proxy: NPV switch (blade server, connected to an F-port on the core switch)
VE = Virtual E (FCoE)
VF = Virtual F (FCoE)
VNP = Virtual NP (FCoE)
FL = Fabric Loop: connects to NL-ports and FL-ports
NL = Node Loop (connected to a hub)

Fibre Channel protocol stack:

FC-0: Physical interface: fiber characteristics like singlemode/multimode
FC-1: Encode/decode: clock/data stream/parity/link control
FC-2: Framing: flow control/CRC/service class/buffer credits
FC-3: Common services: addressing/login/name server
FC-4: Application: ULP (upper layer protocols)

Cisco MDS
Characteristics:

MDS = Multilayer Director Switch
9500/9700 = modular configuration
9100/9200 = fixed configuration
Initial setup asks SAN-specific items:
Default zoneset distribution
Default switchport mode (by default: F-port)
Nexus 7000 doesn't do inter-VSAN routing and doesn't have native FC
Can be managed/configured by DCNM (Data Center Network Manager)

NPV

In NPV mode, the edge switch relays all traffic from server-side ports to the core switch. The core switch provides F port functionality (such as login and port security) and all the Fibre Channel switching capabilities. This means that the edge looks like a host to the core, and commands such as show flogi database can't be executed anymore on the edge.

When enabling NPV mode (feature npv), the switch config is erased and the switch reboots. The default switch mode is fabric mode.

When enabling FCoE and NPV at the same time (feature fcoe-npv), the switch doesn't need to change between fabric mode and NPV mode, so there is no write erase and reboot.

Disruptive load balancing can be configured only in NPV mode.

When disruptive load balancing is enabled, NPV redistributes the server interfaces across all available NP uplinks when a new NP uplink becomes operational. To move a server interface from one NP uplink to another NP uplink, NPV forces reinitialization of the server interface so that the server performs a new login to the core switch. This action causes traffic disruption on the attached end devices.

To avoid disruption of server traffic, enable this feature only after adding a new NP uplink, and then disable it again after the server interfaces have been redistributed.
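
A minimal sketch of enabling NPV on an edge switch (the interface numbers are just examples; remember that feature npv erases the configuration and reboots the switch, and the upstream core switch needs feature npiv):

S1(config)# feature npv
(switch erases its config and reboots in NPV mode)
S1(config)# interface fc2/1
S1(config-if)# switchport mode NP
S1(config-if)# no shut
S1(config)# interface fc2/10
S1(config-if)# switchport mode F
S1(config-if)# no shut
S1(config)# npv auto-load-balance disruptive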

More information:

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli/CLIConfigurationGuide/npv.htm
http://blog.scottlowe.org/2009/11/27/understanding-npiv-and-npv/

Debug:

Port-type, VSAN, speed, port-channel:

show int brief

Check local fabric logins (includes VSAN):

show flogi database

Fabric wide:

show fcns database
show fcns database detail

Check who is the principal for nameserver:

show fcdomain domain-list

Unified ports

Module names usually have UP in their name
When changing a port from Ethernet to FC: a switch or module reboot is required
When changing a port from Ethernet to FC: the port is renamed, for example from eth2/10 to fc2/10
Lowest port numbers are reserved for Ethernet, higher ones for FC

Configuration:

Switch between port types:

S1# con
Enter configuration commands, one per line. End with CNTL/Z.
S1(config)# slot 2
S1(config-slot)# port 1-8 type fc
S1(config-slot)# port 9-16 type ethernet

Create a VSAN:

S1(config)# vsan database
S1(config-vsan-db)# vsan 100 name jensd
updated vsan 100
S1(config-vsan-db)# vsan 100 interface fc2/10
S1(config-vsan-db)# show vsan 100 membership

Configure ports:

S1(config)# interface fc2/10
S1(config-if)# switchport speed 1000/2000/4000/8000/auto
S1(config-if)# switchport mode E/F/SD/auto
S1(config-if)# switchport trunk mode auto/off/on
S1(config-if)# switchport trunk allowed (VSANS)
S1(config-if)# channel-group 10

Zoning & masking:

Isolation between initiator and target for security reasons


Zoning: done on the switch: protects SAN-devices from
communicating
LUN masking: done on a host: protects parts (LUN) of SAN-
devices from communicating
A device can be a member of multiple zones
One initiator to one target = recommended
One initiator to multiple targets = accepted
Multiple initiators to multiple targets = not recommended
In the default zone devices cannot communicate
Maximum 16.000 zones
Zone modes:
basic
enhanced (nobody can change anything until active
changes are committed)
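
A small sketch of enabling enhanced zone mode for a VSAN and committing the pending changes (VSAN 10 and the zone/alias names are just the examples used further below):

S1(config)# zone mode enhanced vsan 10
S1(config)# zone name test vsan 10
S1(config-zone)# member device-alias nicename
S1(config-zone)# exit
S1(config)# zone commit vsan 10
S1# show zone status vsan 10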

Zone configuration:

1. Add physical ports to VSAN
2. Configure zones per VSAN
3. Add zones to the zoneset (only one zoneset can be active per fabric)

List logged on devices:

S1# show flogi database
--------------------------------------------------------------
INTERFACE VSAN FCID PORT NAME NOD
--------------------------------------------------------------
sup-fc0 2 0xb30100 10:00:00:05:30:00:49:63 20:00:00:0
fc9/13 1 0xb200e2 21:00:00:04:cf:27:25:2c 20:00:00:0
fc9/13 1 0xb200e1 21:00:00:04:cf:4c:18:61 20:00:00:0

Create an alias for a device:

S1(config)# device-alias database
S1(config-device-alias-db)# device-alias name nicename pwwn 21
S1(config-device-alias-db)# exit
S1(config)# device-alias commit

Create a zone and add ports:

S1(config)# zone name test vsan 10
S1(config-zone)# member pwwn 21:00:00:04:cf:4c:18:61
S1(config-zone)# member device-alias nicename

Create a zoneset and add zone:

S1(config)# zoneset name testzs vsan 10
S1(config-zoneset)# member test

Activate a zoneset:

S1(config)# zoneset activate name testzs vsan 10
S1(config)# show zoneset active vsan 10

More information:

http://www.techworld.com/tutorial/storage/how-to-interpret-
worldwide-names-156
http://www.cisco.com/en/US/docs/storage/san_switches/mds9000/sw/san-
os/quick/guide/qcg_vin.html
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/troubleshooting/guide/N5K_Troubleshooting_Guide/n5

FCoE (Fibre Channel over Ethernet)

FCoE is a technology to converge Ethernet and Fibre Channel traffic over the same link. Hosts that want to use FCoE need a special CNA adapter which supports both FCoE and Ethernet.

Characteristics:

FCoE encapsulates an FC frame, which in turn encapsulates a SCSI frame
FCoE NPV: parent: NPV, child: NPIV -> FIP snooping
A VFC binds to a VSAN and a physical Ethernet interface. The port must connect to a trunk
FCoE is only allowed on 10G/40G interfaces
The FCoE VLAN that matches with the VFC VSAN must be allowed on the trunk
The FCoE VLAN can't be the native VLAN
VLAN 1 can't be the FCoE VLAN
Multihop FCoE crosses the access layer up to the core (for Nexus 7000: requires F1/F2e or F2 modules)
Best practice: once FCoE has passed the access layer: dedicated links for FC traffic and Ethernet traffic
FCoE on the Nexus 7000 is only possible with a Sup2/2e supervisor
FCoE on Cisco requires every switch in the path to be FCoE aware = FCoE dense model

Terminology:

DCB: Data Center Bridging: standard for a low latency, lossless connection suited for FCoE traffic
DCBX: DCB eXchange protocol: advertises capabilities over the link and checks both sides prior to data transfer
CNA: Converged Network Adapter: adapter for hosts that is a combination of a NIC and an FCoE interface (appears as two separate devices to the OS)
FIP: FCoE Initialization Protocol
FCF: FCoE Fibre Channel Forwarder: FCoE-aware switches in the FCoE path that keep traffic lossless
VFC: Virtual FC interface
802.1Qbb: Priority Flow Control (lossless Ethernet)
802.1Qaz: Enhanced Transmission Selection (bandwidth management)
802.1p: implements QoS at MAC level with a 3-bit field called the Priority Code Point (PCP) in the Ethernet frame header

FCoE connection process (FIP):

1. VLAN discovery (multicast to the FCF MAC over the native VLAN)
2. FCF discovery (multicast to the FCF MAC)
3. FLOGI

Configuration:

Enable FCoE:

S1(config)# feature fcoe

Create a VSAN for FC:

S1(config)# vsan database
S1(config-vsan-db)# vsan 400
updated vsan 400
S1(config-vsan-db)# exit

Create a VLAN for Ethernet. It’s easy to keep the VLAN and VSAN
number equal:

S1(config)# vlan 400
S1(config-vlan)# exit

Create VFC interface and map it to a physical port. It’s easy to keep the
same number for the VFC as the VSAN.

S1(config)# int vfc 400
S1(config-if)# bind interface e1/30
S1(config-if)# no shut
S1(config-if)# switchport mode F
S1(config-if)# exit

Bind the VFC to a VLAN and VSAN:

S1(config)# vlan 400
S1(config-vlan)# fcoe vsan 400
S1(config-vlan)# no shut
S1(config-vlan)# exit
S1(config)# vsan database
S1(config-vsan-db)# vsan 400 interface vfc 400

Configure the physical port. It must be a trunk and the allowed VLAN
must match:

S1(config)# int e1/30
S1(config-if)# switchport mode trunk
S1(config-if)# spanning-tree port type edge trunk
S1(config-if)# switchport trunk allowed vlan 999,400 (999=data
S1(config-if)# no shut

Show translation between VLAN ID and VSAN ID and debug:

S1# show vlan fcoe
S1# show vsan membership
S1# show int vfc 400

Configure a trunk over FCoE. This configuration must match on both sides of the trunk:

S1(config)# int e1/25
S1(config-if)# switchport mode trunk
S1(config-if)# switchport trunk allowed vlan 400

Create a VFC and bind it to a physical interface (the number is free to choose):

S1(config)# int vfc 25
S1(config-if)# bind int e1/25
S1(config-if)# switchport mode e

Map the VFC to the correct VSAN:

S1(config)# vsan database
S1(config-vsan-db)# vsan 400 interface vfc 25

Enable the trunk interface:

S1(config)# int vfc 25
S1(config-if)# no shut

Debug:

S1# show int vfc 25

Nexus general

Connection parameters: 9600/8/N/1 (baud/data bits/parity/stop bits)

Planes:

Planes are isolated (one faulty plane does not influence the other)
Management plane: config/policy/CLI/GUI/SNMP/API/CoPP…
EEM: Embedded Event Manager: Can take action based on
an event (syslog/CLI/GOLD/environment/hardware)
Control plane:
OSPF/EIGRP/STP/CDP/BFD/UDLD/LACP/ARP/FabricPath/VRRP/

CoPP: Control Plane Policing: protects the control plane when problems occur in the data plane (for example a broadcast storm or DoS); see the example after this list
Possible configuration parameters for CoPP:
CIR (Committed Information Rate)
PIR (Peak Information Rate)
EB (Extended Burst)
Predefined CoPP policies:
Strict/Moderate/Lenient/Dense/Skip (default: skip)
Ethanalyzer: Wireshark for the control plane
Data plane: packet forwarding
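
For example, checking the active CoPP policy and taking a quick control plane capture with Ethanalyzer could look like this (a sketch; the available options and the output vary per platform):

S1# show copp status
S1# show policy-map interface control-plane
S1# ethanalyzer local interface inband limit-captured-frames 20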

Ways of data forwarding in the data plane:

Store and forward: get the complete packet/frame in memory and then forward it
Advantages:
Error checking of the complete frame (FCS)
Possibility to buffer frames (more robust)
ACLs
Cut-through: forward a frame as soon as the destination is known
Advantages:
Lower latency
Flags a frame instead of dropping it in case the FCS is not correct

SOURCE  DESTINATION  MODE
10G     10G          Cut-through
10G     1G           Cut-through
1G      1G           Store and forward
1G      10G          Store and forward

VRF (Virtual Routing and Forwarding)

A VRF allows one switch to have multiple IP routing protocol stacks in the same environment.

Characteristics:

Virtualization of the IP routing protocol
Separate routing and forwarding decisions
IPv4 and IPv6 have separate tables
Membership on an interface determines which VRF will be used
mgmt0 can only be in the management VRF

VRF-aware commands:

S1(config)# show vrf
VRF-Name      VRF-ID  State  Reason
default       1       Up     --
management    2       Up     --
S1(config)# show ip int brief vrf management
IP Interface Status for VRF "management"(2)
Interface  IP Address      Interface Status
mgmt0      192.168.55.10   protocol-up/link-up/admin
S1(config)# ping 192.168.55.20 vrf management
PING 192.168.55.20 (192.168.55.20): 56 data bytes
64 bytes from 192.168.55.20: icmp_seq=0 ttl=254 time=1.443 ms
64 bytes from 192.168.55.20: icmp_seq=1 ttl=254 time=1.118 ms
64 bytes from 192.168.55.20: icmp_seq=2 ttl=254 time=1.45 ms
64 bytes from 192.168.55.20: icmp_seq=3 ttl=254 time=1.366 ms
64 bytes from 192.168.55.20: icmp_seq=4 ttl=254 time=1.194 ms
^C
--- 192.168.55.20 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 1.118/1.314/1.45 ms
S1(config)# show routing vrf management
IP Route Table for VRF "management"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

192.168.55.0/24, ubest/mbest: 1/0, attached
    *via 192.168.55.10, mgmt0, [0/0], 06:20:54, direct
192.168.55.10/32, ubest/mbest: 1/0, attached
    *via 192.168.55.10, mgmt0, [0/0], 06:20:54, local
S1(config)# show ip arp vrf management

Flags: * - Adjacencies learnt on non-active FHRP router
       + - Adjacencies synced via CFSoE
       # - Adjacencies Throttled for Glean
       D - Static Adjacencies attached to down interface

IP ARP Table for context management
Total number of entries: 2
Address        Age       MAC Address     Interface
192.168.55.1   00:01:34  0050.56c0.0001  mgmt0
192.168.55.20  00:00:26  000c.29c6.b23a  mgmt0

Another way is to set the routing-context to a certain VRF. All commands will then be executed in that VRF. Notice the changed prompt:

S1# routing-context vrf management
S1%management# ping 192.168.55.20
PING 192.168.55.20 (192.168.55.20): 56 data bytes
64 bytes from 192.168.55.20: icmp_seq=0 ttl=254 time=1.981 ms
...
S1%management# routing-context vrf default
S1#

Create a VRF and add an interface to it:

S1(config)# vrf context jensd
S1(config-vrf)# description test
S1(config-vrf)# exit
S1(config)# int e2/3
S1(config-if)# vrf member jensd

VDC (Virtual Device Context)

A VDC allows one physical switch to be split up into multiple virtual switches. Each VDC runs its own processes and configuration.

Characteristics:

Nexus 7000 with supervisor Sup1 or Sup2: 4+1 (admin VDC)
Nexus 7000 with supervisor Sup2E (7000/7700): 8+1 VDCs allowed
Extra VDCs require a license
The admin VDC can't own interfaces but has special privileges to manage the other VDCs and RBAC
Communication between VDCs requires a physical connection between member ports of those VDCs

VDC types:

Default VDC: manages the physical device, other VDCs, upgrades, captures,…
Nondefault VDC: user-created from the default or admin VDC
Admin VDC:
Can be enabled with system admin-vdc (migrate)
Replaces the default VDC
Features or feature-sets cannot be enabled from here
Can't have interfaces assigned to it
Doesn't require a special license
Storage VDC:
Nondefault
Requires the FCoE license (but no VDC license)
Can have only FCoE or shared interfaces

Configuration:

Show current used VDCs and license:

S1# show license usage (VDC_LICENSES)


S1# show vdc

Show members of VDC

S1# show vdc membership

Create a new VDC and allocate ports to it:

S1(config)# vdc jensd
S1(config-vdc)# allocate interface ethernet 4/47-48
S1(config-vdc)# limit-resource (for example a module-type)
S1(config-vdc)# copy run start vdc-all

Switch between VDC:

S1# switchto vdc jensd
S1-jensd# switchback
S1#

Nexus 1000v

The Nexus 1000v is a virtual switch (appliance), running on a hypervisor (VMware, Hyper-V,…). It extends the networking functionality of the hypervisor.

Characteristics:

Uses a control interface (in a separate VLAN) to exchange heartbeat and control messages
Uses a packet interface for CDP and IGMP messages
Up to 250 ESX hosts per VSM
Port profiles (VLAN, ACL, SPAN, ERSPAN, QoS,…) are mobile and follow the guest when doing a vMotion
Every host (hypervisor) needs to have a VEM
Comes in an Essential (free) and an Advanced edition
Communication between VSM and VEM can be either L2 or L3 (recommended)
In L3 mode, every host needs a vmkernel interface with an IP
The VSM connects to vCenter using SSL. Communication goes via the VMware API

Terminology:

VSM: Virtual Supervisor Module
VEM: Virtual Ethernet Module
vEth: Virtual Ethernet port
VSG: Virtual Security Gateway
Deployed on top of the 1000v, provides security between guests
vPath: the first packet of a flow is sent to the VSG for inspection

Installation procedure:

1. Create VLAN’s on the vSwitch on every host that will run a VSM
for VSM control and management traffic

2. Deploy the OVA for the primary VSM (select manual or automatic
setup)
3. During deployment, map the control and management VLAN’s as
created in step 1
4. Connect to the VM’s console and do basic configuration
A. Admin password
B. Role: primary/secondary
C. Domain ID
D. Basic configuration dialog (as on a normal Nexus switch):
SNMP/switch name/IP/ssh/http-server/SVS control mode:
L2/L3)
5. Deploy the OVA for the secondary VSM (also see 3) (select VSM
secondary)
6. Enter the domain ID and admin password of the primary
7. The secondary automatically gets the configuration of the primary
8. Create a connection to vCenter (svs connection <name>, protocol, remote ip,…); see the sketch after this list
A. A distributed virtual switch gets automatically created on the vCenter
9. Configure the rest of the network (VLAN’s, port profiles,…)
10. Create a vmkernel interface on each host running a VEM, for VEM connectivity
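
For step 8, a minimal sketch of the svs connection configuration on the VSM (the connection name, datacenter name and IP are just placeholders):

n1000v(config)# svs connection vcenter
n1000v(config-svs-conn)# protocol vmware-vim
n1000v(config-svs-conn)# remote ip address 192.168.1.30
n1000v(config-svs-conn)# vmware dvs datacenter-name DC1
n1000v(config-svs-conn)# connect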

Configure port profile:

n1000v# config t
n1000v(config)# port-profile vmware-dmz
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 100
n1000v(config-port-prof)# no shut
n1000v(config-port-prof)# state enabled

Only after the last command does the port profile get pushed to VMware.

Debug:

vCenter connection and config:

n1000v# show svs domain
n1000v# show svs connections

Specific port profile:

n1000v# show port-profile name zzz

Show connected VSM and VEM:

n1000v(config-port-prof)# show module
Mod Ports Module-Type Model
--- ----- -------------------------------- -----------------
1 0 Virtual Supervisor Module Nexus1000V
Mod Sw Hw
--- --------------- ------
1 4.0(4)SV1(1) 0.0
Mod MAC-Address(es) Serial-Num
--- -------------------------------------- ----------
1 00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8 NA
Mod Server-IP Server-UUID Se
--- --------------- ------------------------------------ --
1 172.23.10.188 NA NA

More information:

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_2_1_1/install_upgrade/vsm_vem/guide/b_
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus1000/sw/4_0/install/vem/guide/vem_install_n1000v.html
https://www.youtube.com/watch?v=gZJNcftZEcE

UCS (Unified Computing System)

UCS is the x86 server line from Cisco. It comes in two variations (B-series: blade and C-series: rack). UCS servers are managed by UCSM (UCS Manager), which runs on the Fabric Interconnects.

Fabric Interconnect:

UCS 6xxx-series Fabric Interconnect
Has 1-2 expansion modules (16 ports/module)
6248: 32 fixed ports, max. 48 ports
6296: 48 fixed ports, max. 96 ports
6100-series has no unified ports. FC is only available via a module
Up to two nodes for high availability
Data connectivity is active-active
Management (UCSM) connectivity is active-passive
Cluster ports on the Fabric Interconnect for interconnection (L1/L2)
Connection to blades:
Best practice: port channel connection (just add a member if extra capacity is required)
Every blade has two paths (2 FEX I/O modules) to the Fabric Interconnects
The SAN fabric should not be cross-connected (A/B separate)

Fabric Interconnect port types:

Unconfigured
Server: to a rack server or blade chassis
Ethernet uplink:
Ethernet uplink port (to the upstream network)
FCoE uplink
Appliance: to NFS storage
Fibre Channel uplink:
Fibre Channel uplink port to the upstream SAN fabric
Fibre Channel storage port for Direct Attached FC Storage

Installation procedure:

1. Setup primary FI
2. Setup secondary FI (requires the cluster IP and admin password
chosen in step 1)
3. Login to the cluster IP
4. Configure connectivity
A. Create VLAN’s
B. Choose ports for LAN uplink
C. Create VSAN’s (min 1 fabric A, 1 fabric B)
D. Choose ports for SAN uplink
5. Configure port channels
6. Start discovery

UCSM (UCS Manager)

Running on the Fabric Interconnect
GUI/CLI/XML API
Has a floating (virtual) IP for management
Primary and secondary (DB and state are replicated)
Can manage up to 160 servers
Stateless computing: apply a profile to a physical server to set FW, boot device, BIOS, MAC, VLAN, QoS,…
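
UCSM is mostly driven from the GUI, but the same objects can be created from the CLI as well. A tiny sketch of creating a service profile from the UCSM CLI (the org and profile name are just examples):

UCS-A# scope org /
UCS-A /org # create service-profile web01 instance
UCS-A /org/service-profile* # commit-buffer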

Most hardware is automatically discovered by UCSM. Before enabling server ports, make sure that the discovery policy is correctly configured.

Chassis discovery process:

1. Server port on the FI becomes linked with an I/O module in a blade chassis
2. Communicate with the management controller in the blade chassis
3. Check compatibility/serial/existing or new/…
4. Accept the chassis
5. Discover the rest of the hardware: model, firmware, power supplies,…

The chassis/FEX discovery policy (Equipment tab) allows you to set the minimum conditions that should be matched before a chassis is detected, for example the minimum number of links required to a chassis, the minimum number of PSUs, or requiring manual acknowledgement before adding a new chassis.

Server discovery process:

1. A slot in a blade chassis detects a new blade
2. Communicate with the CIMC on the blade in the slot
3. Discover basic information: model, serial, BIOS, CIMC, CPU, memory, HBA, NIC,…
4. Boot the UCS Utility OS
5. Discover further information: local disk info, specific HBA or NIC info

Blade chassis:

UCS 5xxx Blade server chassis
5108: 8 blades, 4 PSUs, 8 fan modules
2 I/O modules (FEX) on the backside, connectivity to the Fabric Interconnects
Can connect to the Fabric Interconnect with 2/4/8/16 10G links depending on the required capacity

B-series Blade:

UCS B200M3 Blade
Has a CIMC (Cisco Integrated Management Controller):
allows discovery of blades
allows KVM access to the server
IPMI

C-series Rackserver:

UCS C240M3
Can also be managed by UCSM
also has CIMC

UCS VIC: Virtual Interface Card

Mezzanine adapter in a server/blade
Can have up to 256 virtual network adapters presented to the OS or hypervisor
Can be configured as NIC, FCoE (vHBA), Adapter FEX or VM-FEX
Allows for fabric failover: failover in hardware without the OS being aware of NIC teaming

More information:

http://www.cisco.com/c/en/us/support/docs/servers-unified-
computing/ucs-manager/116188-configure-fcoe-00.html
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/gui/config/guide/2-
0/b_UCSM_GUI_Configuration_Guide_2_0/b_UCSM_GUI_Configuration_Guide_2_0_chapter_0101.html

Network services

Cisco ACE (Application Control Engine): Load balancer

Features:

Can be a separate device or a module in an IOS router
Uses intelligent NAT
Makes decisions from Layer 3 to Layer 7
Decisions are made in hardware
Can do SSL encryption/decryption (in hardware by using a daughter card), certificate management and SSL offloading
Does HTTP optimization and compression
Can be split up in contexts (comparable to VDC on Nexus)
Max 250 contexts
Admin context to manage
Every context is a sandbox
Failover is context-aware
No Layer 2 communication between contexts
Contexts can have dedicated resources (bandwidth,
#connections, compression,…)
Management:
TACACS/RADIUS for RBAC (CLI & GUI)
Telnet/SSH per context
Web GUI (HTTP) for documentation/MIBs/tools
XML API
ACE4710 has a built-in GUI
ACE30: hardware accelerated HTTP compression (GZIP/deflate)
& HTTP optimization
ACE10/20/30 have no built-in GUI but can use ANM (Application
Network Manager): A fat-client to manage multiple ACE devices
Predecessor was CSS/CSM

GSS (Global Site Selector): DNS load balancer

Features:

Rules determine which IP is returned for the DNS request
Uses metrics: proximity lookup to return a location-based IP

GSLB (Global Server Load Balancing): ACE + GSS

Features:

Uses KAL-AP (Keepalive Appliance Protocol) for communication between ACE & GSS
Load balancing decisions are made by both devices
Checks the load of a VIP
ACE: within one datacenter
GSS: determines which datacenter

WAAS: Wide Area Application Services: WAN optimizer

Features:

Has approved/licensed support for ICA and Exchange
The WAAS device sits between the switch and the WAN connection
Does load sharing and failover
Transparent to the network
WAAS Central Manager to manage (or CLI)
Can be a separate device or a module in an IOS router
WCCP: Web Cache Communication Protocol (see the sketch below)
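
A rough sketch of WCCP redirection towards a WAAS device on an IOS WAN router (the interface names are just examples; service groups 61 and 62 are the ones commonly used for WAAS):

Router(config)# ip wccp 61
Router(config)# ip wccp 62
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip wccp 61 redirect in
Router(config-if)# interface Serial0/0
Router(config-if)# ip wccp 62 redirect in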
