
UCS Technology Labs Nexus 1000v on UCS

QoS in Nexus 1000v


Last updated: April 12, 2013

Task
Set up one of the generic rack-mount servers in your rack with the IP address of 10.0.110.10/24
and place it in VLAN 110 on your N5K.
(Server 1 - 8 depending on rack number)
Generate a ping from VSM1 destined to 10.0.110.10 with the DSCP 46 (EF).
Provision N1Kv to intercept that packet and change the DSCP value to 40, and also to mark L2
with the CoS value of 5.
Ensure that this CoS value is being honored by UCSM by verifying it on the proper Fabric
Interconnect.

Pre-Verification
To test this QoS scenario, we need to generate packets with a particular DSCP value from a VM
running on an N1Kv VEM. Although we could do this from a Windows box, it is much easier and
faster to do it from a Cisco device. Luckily, the VSM itself runs on a VEM, so we can simply use an
extended ping from the VSM management interface, which is just another Veth interface on that
VEM. We will see that VSM-1 (Sup1) is currently active, so we know that our ping will be sourced
from the management port on VSM-1, which in our case happens to be Veth10.
We will ping a Windows box that is running Wireshark and filter for ICMP. Note that this box
happens to be on the same subnet but is a standalone rack-mount server and is connected
outside of the UCS on a N5K. This will give us the ability to check the hardware queues on the
UCS Fabric Interconnects later for L2 CoS-to-Queue mapping.
Let's first make sure that the active VSM is indeed VSM-1. This is critical to knowing which
vmnic in VMware, and therefore which vNIC in UCSM, will be transmitting the packets
northbound. And we can clearly see that VSM-1 is active.


N1Kv-01# show mod
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
3    248    Virtual Ethernet Module           NA                  ok
4    248    Virtual Ethernet Module           NA                  ok

We will begin by testing a simple ping marked with the DSCP PHB of EF (which corresponds to a
ToS byte of 184) and ensuring that it arrives at its destination marked accordingly. Next we'll
identify the path it should take, and at what port and queue we expect to see it, apply the
policy-map, and then test again to determine whether the DSCP is being changed and whether the
CoS bits are being marked and matched to the expected queue.
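As a quick sanity check on the value we are about to type at the Type of Service prompt: the ToS
byte carries the 6-bit DSCP in its high-order bits, so with the ECN bits at zero, ToS = DSCP x 4.

DSCP 46 (EF)  = binary 101110  ->  ToS byte 10111000 = decimal 184
DSCP 40 (CS5) = binary 101000  ->  ToS byte 10100000 = decimal 160

This is why entering 184 below produces DSCP 46 on the wire, and why the re-marked packets will
later show a ToS byte of 160.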

N1Kv-01# ping
Vrf context to use [default] :
No user input: using default context
Target IP address or Hostname: 10.0.110.10
Repeat count [5] :
Datagram size [56] :
Timeout in seconds [2] :
Sending interval in seconds [0] :
Extended commands [no] : y
Source address or interface :
Data pattern [0xabcd] :
Type of service [0] : 184
Set DF bit in IP header [no] :
Time to live [255] :
Loose, Strict, Record, Timestamp, Verbose [None] :
Sweep range of sizes [no] :
Sending 5, 56-bytes ICMP Echos to 10.0.110.10
Timeout is 2 seconds, data pattern is 0xABCD
64 bytes from 10.0.110.10: icmp_seq=0 ttl=126 time=1.94 ms
64 bytes from 10.0.110.10: icmp_seq=1 ttl=126 time=1.683 ms
64 bytes from 10.0.110.10: icmp_seq=2 ttl=126 time=1.656 ms
64 bytes from 10.0.110.10: icmp_seq=3 ttl=126 time=1.704 ms
64 bytes from 10.0.110.10: icmp_seq=4 ttl=126 time=1.652 ms
--- 10.0.110.10 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 1.652/1.726/1.94 ms
N1Kv-01#

On the external server running Wireshark, we can see the ping come in, and we can also clearly
see that it is marked with a ToS byte of 10111000, i.e. DSCP 46, or PHB EF.
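If the capture is busy, a Wireshark display filter along the lines of the following is a convenient
way to isolate just these echoes (ip.dsfield.dscp is the DSCP portion of the IP ToS/DS byte):

icmp && ip.dsfield.dscp == 46

and, after we apply the re-marking policy later, ip.dsfield.dscp == 40.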

Now, back in the N1Kv, we revisit the vemcmd show port output to see that the VSM-1 management
port, where all of our pings are sourced, is pinned north to vmnic4 (which corresponds to the
last vNIC in UCSM), and that it is served by VEM 3, which in UCSM is Service Profile ESXi1.

N1Kv-01# module vem 3 execute vemcmd show port
  LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port             Type
   17     Eth3/1     UP   UP   F/B*     305        vmnic0
   18     Eth3/2     UP   UP   F/B*     305        vmnic1
   19     Eth3/3     UP   UP   F/B*                vmnic2
   20     Eth3/4     UP   UP   FWD      306     3  vmnic3
   21     Eth3/5     UP   UP   FWD      306     4  vmnic4
   49      Veth7     UP   UP   FWD                 vmk0
   50      Veth8     UP   UP   FWD                 vmk1
   51      Veth9     UP   UP   FWD              4  N1Kv-01-VSM-1.eth0
   52     Veth10     UP   UP   FWD              4  N1Kv-01-VSM-1.eth1
   53     Veth11     UP   UP   FWD              3  N1Kv-01-VSM-1.eth2
   54     Veth12     UP   UP   FWD              3  Win2k8-www-2.eth0
   55     Veth13     UP   UP   FWD              3  Win2k8-www-3.eth0
   56     Veth14     UP   UP   FWD              3  vCenter.eth0
  305        Po1     UP   UP   F/B*
  306        Po3     UP   UP   FWD

* F/B: Port is BLOCKED on some of the vlans.


One or more vlans are either not created or
not in the list of allowed vlans for this port.
Please run "vemcmd show port vlans" to see the details.
N1Kv-01# sh run int veth10
!Command: show running-config interface Vethernet10
!Time: Thu Feb 21 20:02:00 2013
version 4.2(1)SV2(1.1)
interface Vethernet10
inherit port-profile N1Kv-Management
description N1Kv-01-VSM-1, Network Adapter 2
vmware dvport 353 dvswitch uuid "2a 80 3b 50 37 b7 1b 8e-18 98 be 68 05 cd ba 32"
vmware vm mac 0050.56BB.3D01
N1Kv-01#

Back in UCSM, click Service Profile ESXi1, expand vNICs, and click the last vNIC, which is
eth4-vm-fabB, to see that it is pinned to Fabric B (at least during normal, non-failover operation
- which we have currently).

Click the Service Profile ESXi1, and note that the associated server is blade 2.

If we click VIF Paths in the right pane, we can see that for Path B, the eth4-vm-fabB vNIC is
pinned to uplink 1/4 on FI-B and that the link is indeed active.

To confirm this in NX-OS, we can run a couple of quick commands to note that Veth693 is,
indeed, Chassis 1 Blade 2 (or server 1/2) and that it is VNIC eth4-vm-fabB; then, with a
show pinning server-interfaces command, we can confirm that it is pinned to outgoing
interface 1/4. Remember that we want to be on FI-B when checking this information.
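(If your SSH session happens to land on the UCSM CLI of the other Fabric Interconnect, connect
nxos b drops you into the FI-B NX-OS shell shown below; connect nxos a would do the same for
FI-A.)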

INE-UCS-01-B(nxos)# sh pinning server-interfaces

---------------+---------+-------------------------+----------------
SIF Interface    Sticky    Pinned Border Interface   Pinned Duration
---------------+---------+-------------------------+----------------
Po1282           No
Po1283           No
Eth1/1           No
Eth1/2           No
Veth688          No        Eth1/4                    1d 12:33:51
Veth690          No        Eth1/4                    1d 12:33:51
Veth692          No        Eth1/4                    1d 12:33:51
Veth693          No        Eth1/4                    1d 12:33:51
Veth698          No        Eth1/3                    23:39:15
Veth700          No        Eth1/3                    23:39:15
Veth702          No        Eth1/3                    23:39:15
Veth703          No        Eth1/3                    23:39:15
Veth8888         No
Veth8898         No

INE-UCS-01-B(nxos)#
INE-UCS-01-B(nxos)# sh run int veth693
!Command: show running-config interface Vethernet693
!Time: Thu Feb 21 12:10:37 2013
version 5.0(3)N2(2.03a)
interface Vethernet693
description server 1/2, VNIC eth4-vm-FabB
switchport mode trunk
hardware vethernet mac filtering per-vlan
no pinning server sticky
pinning server pinning-failure link-down
switchport trunk allowed vlan 1,110-114,118-122
bind interface port-channel1282 channel 693
service-policy type queuing input org-root/ep-qos-Host-Control-BE
no shutdown
INE-UCS-01-B(nxos)#

However, knowing its external pinned interface is not going to help us very much. Why?
Remember that because this is a Nexus NX-OS switch (in essence), it is based primarily on an
ingress queuing architecture. So what we really need to know is on what port the frame is
coming into the FI, rather than the Veth (which of course doesn't have any hardware queues), or
even the egress interface headed up to the N5Ks.
So to find that, let's go back to UCSM. In the left navigation pane, click the Equipment tab and
then click the root Equipment node; in the right pane, click Policies and select the Global
Policies tab to recall that we did not use port channeling to aggregate the links coming from the
IOM/FEXs up to the FIs.

This means that we will need to determine on which individual port the frame is leaving the IOM
and coming into the FI (we would still need to do this if it were a port channel, but the fact
that it is not makes it a bit easier on us).
In the left navigation pane, expand Chassis 1 and Server 2, and look at the last two DCE
interfaces, which are 10GBASE-KR backplane traces to the IOM inside the chassis. In the right
pane, notice that the IOM we are pinned to is chassis 1, slot 2 (that is, IOM 2, which connects
to FI-B), port 5.

Note:
You may see a port channel here. This is not a port channel from the blade up to
the FI, but rather just the two 10Gb DCE (10GBASE-KR) traces from the blade to the
IOM. We already saw that there is no port channel from the IOM up to the FI.

So back in FI-B, in NX-OS, let's look at the FEX detail. We can see that both of these backplane
DCE traces, Eth1/1/5 and Eth1/1/7, are pinned to the fabric uplink port Eth1/2. So that's where
we need to look next.

INE-UCS-01-B(nxos)# sh fex detail


FEX: 1 Description: FEX0001

state: Online

FEX version: 5.0(3)N2(2.03a) [Switch version: 5.0(3)N2(2.03a)]


FEX Interim version: 5.0(3)N2(2.03a)
Switch Interim version: 5.0(3)N2(2.03a)
Chassis Model: N20-C6508, Chassis Serial: FOX1630GZB9
Extender Model: UCS-IOM-2208XP, Extender Serial: FCH16297JG2
Part No: 73-13196-04
Card Id: 136, Mac Addr: 60:73:5c:50:c3:02, Num Macs: 42
Module Sw Gen: 12594 [Switch Sw Gen: 21]
post level: complete
pinning-mode: static

Max-links: 1

Fabric port for control traffic: Eth1/1


Fabric interface state:
Eth1/1 - Interface Up. State: Active
Eth1/2 - Interface Up. State: Active
  Fex Port        State   Fabric Port
  Eth1/1/1           Up        Eth1/1
  Eth1/1/2         Down          None
  Eth1/1/3           Up        Eth1/1
  Eth1/1/4         Down          None
  Eth1/1/5           Up        Eth1/2
  Eth1/1/6         Down          None
  Eth1/1/7           Up        Eth1/2
  Eth1/1/8         Down          None
  Eth1/1/9           Up        Eth1/1
  Eth1/1/10        Down          None

The ingress port to FI-B is Eth1/2, and that's the port we are interested in looking at. Let's
first look at the QoS configuration portion of FI-B in NX-OS. Remember that in just a little bit
(not yet) we are going to configure the N1Kv to mark DSCP EF traffic with CoS 5, so we first need
to see which qos-group CoS 5 maps to, and then, on the ingress port, which queue that qos-group
maps to.
Here we see that CoS 5 maps to class-platinum and that class-platinum maps to qos-group 2.

INE-UCS-01-B(nxos)# sh run | sec class-map|policy-map|system


class-map type qos class-fcoe
class-map type qos match-all class-gold
match cos 4
class-map type qos match-all class-silver
match cos 2
class-map type qos match-all class-platinum
match cos 5
class-map type queuing class-fcoe
match qos-group 1
class-map type queuing class-gold
match qos-group 3
class-map type queuing class-silver
match qos-group 4
class-map type queuing class-platinum
match qos-group 2
class-map type queuing class-all-flood
match qos-group 2
class-map type queuing class-ip-multicast
match qos-group 2
policy-map type qos system_qos_policy
class class-platinum
set qos-group 2
class class-silver
set qos-group 4
class class-gold
set qos-group 3
class class-fcoe
set qos-group 1
policy-map type queuing system_q_in_policy
class type queuing class-fcoe
bandwidth percent 30
class type queuing class-platinum
bandwidth percent 10
class type queuing class-gold
bandwidth percent 20
class type queuing class-silver
bandwidth percent 30
class type queuing class-default
bandwidth percent 10
policy-map type queuing system_q_out_policy
class type queuing class-fcoe
bandwidth percent 30
class type queuing class-platinum
bandwidth percent 10
class type queuing class-gold
bandwidth percent 20
class type queuing class-silver
bandwidth percent 30
class type queuing class-default
bandwidth percent 10
policy-map type queuing org-root/ep-qos-FC-for-vHBAs
class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-CoS-2_30per-BW
class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-CoS-4_20per-BW
class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-CoS-5_10per-BW
class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-Host-Control-BE
class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-Limit-to-1Gb-BW
class type queuing class-default
bandwidth percent 100
shape 1000000 kbps 10240
policy-map type queuing org-root/ep-qos-Limit-to-20Gb-BW
class type queuing class-default
bandwidth percent 100
shape 20000000 kbps 10240
class-map type network-qos class-fcoe
match qos-group 1
class-map type network-qos class-gold
match qos-group 3
class-map type network-qos class-silver
match qos-group 4
class-map type network-qos class-platinum
match qos-group 2
class-map type network-qos class-all-flood
match qos-group 2
class-map type network-qos class-ip-multicast
match qos-group 2
policy-map type network-qos system_nq_policy
class type network-qos class-platinum
class type network-qos class-silver
class type network-qos class-gold
mtu 9000
class type network-qos class-fcoe
pause no-drop
mtu 2158
class type network-qos class-default
system qos
service-policy type qos input system_qos_policy
service-policy type queuing input system_q_in_policy
service-policy type queuing output system_q_out_policy
service-policy type network-qos system_nq_policy
system default switchport shutdown
INE-UCS-01-B(nxos)#

Let's look at the queuing on the ingress interface Eth1/2. Notice that TX Queuing doesn't show
much beyond the WRR bandwidth percentages, but RX Queuing is clearly where the action is. Notice
also that the queue to which qos-group 2 is mapped has not yet reported any matched packets, so
this is where we will look for matches after we apply our configuration.

INE-UCS-01-B(nxos)# sh queuing interface e1/2

Ethernet1/2 queuing information:
  TX Queuing
    qos-group  sched-type  oper-bandwidth
        0       WRR             10
        1       WRR             30
        2       WRR             10
        3       WRR             20
        4       WRR             30

  RX Queuing
    qos-group 0
    q-size: 196480, HW MTU: 1500 (1500 configured)
    drop-type: drop, xon: 0, xoff: 1228
    Statistics:
        Pkts received over the port             : 1040393
        Ucast pkts sent to the cross-bar        : 944903
        Mcast pkts sent to the cross-bar        : 95490
        Ucast pkts received from the cross-bar  : 907526
        Pkts sent to the port                   : 1230897
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 1
    q-size: 79360, HW MTU: 2158 (2158 configured)
    drop-type: no-drop, xon: 128, xoff: 252
    Statistics:
        Pkts received over the port             : 352160
        Ucast pkts sent to the cross-bar        : 352160
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 435978
        Pkts sent to the port                   : 435978
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 2
    q-size: 22720, HW MTU: 1500 (1500 configured)
    drop-type: drop, xon: 0, xoff: 142
    Statistics:
        Pkts received over the port             : 0
        Ucast pkts sent to the cross-bar        : 0
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 3
    q-size: 29760, HW MTU: 9000 (9000 configured)
    drop-type: drop, xon: 0, xoff: 186
    Statistics:
        Pkts received over the port             : 0
        Ucast pkts sent to the cross-bar        : 0
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 4
    q-size: 22720, HW MTU: 1500 (1500 configured)
    drop-type: drop, xon: 0, xoff: 142
    Statistics:
        Pkts received over the port             : 0
        Ucast pkts sent to the cross-bar        : 0
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

  Total Multicast crossbar statistics:
    Mcast pkts received from the cross-bar      : 323371
INE-UCS-01-B(nxos)#

Now we need to apply the configuration to the N1Kv and port Veth10.
On N1Kv:

class-map type qos match-any DSCP-EF
  match dscp 46
policy-map type qos SET-COS
  class DSCP-EF
    set cos 5
    set dscp 40
interface Vethernet 10
  service-policy input SET-COS

And let's just verify the application.

N1Kv-01(config-if)# sh run int veth10


interface Vethernet10
inherit port-profile N1Kv-Management
service-policy type qos input SET-COS
description N1Kv-01-VSM-1, Network Adapter 2
vmware dvport 353 dvswitch uuid "2a 80 3b 50 37 b7 1b 8e-18 98 be 68 05 cd ba 32"
vmware vm mac 0050.56BB.3D01
N1Kv-01(config-if)#
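As an aside, nothing forces the policy to be applied per-Veth. If we wanted every interface that
inherits the N1Kv-Management port-profile to receive the same marking treatment, a minimal sketch
(assuming the existing port-profile from earlier tasks) would be:

port-profile type vethernet N1Kv-Management
  service-policy type qos input SET-COS

For this task, applying it directly to Veth10 keeps the scope limited to the VSM management port,
which is all we need.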

Note:
Before sending any pings, remember one critical thing: we are sending packets/frames into a UCSM
vNIC, and by default vNICs do not trust any L2 CoS values - they re-write everything to 0. In
earlier tasks, we had you set up the two updating vNIC templates that were used to instantiate
two vNICs for each service profile, and we had you apply a QoS policy to those templates; recall
that the two vNICs destined to carry VM traffic were the only ones for which we allowed "Host
Control" in the QoS policy. Also recall that the VSMs both use these VM vNICs (vNIC 3 and 4). It
is for this reason alone that our attempt to mark L2 CoS should be honored.
Now let's send another ping and check both our modified DSCP value in Wireshark and the
ingress Eth1/2 port on FI-B for the proper queue mapping.

N1Kv-01# ping
Vrf context to use [default] :
No user input: using default context
Target IP address or Hostname: 10.0.110.10
Repeat count [5] :
Datagram size [56] :
Timeout in seconds [2] :
Sending interval in seconds [0] :
Extended commands [no] : y
Source address or interface :
Data pattern [0xabcd] :
Type of service [0] : 184
Set DF bit in IP header [no] :
Time to live [255] :
Loose, Strict, Record, Timestamp, Verbose [None] :
Sweep range of sizes [no] :
Sending 5, 56-bytes ICMP Echos to 10.0.110.10
Timeout is 2 seconds, data pattern is 0xABCD
64 bytes from 10.0.110.10: icmp_seq=0 ttl=126 time=2.07 ms
64 bytes from 10.0.110.10: icmp_seq=1 ttl=126 time=1.139 ms
64 bytes from 10.0.110.10: icmp_seq=2 ttl=126 time=0.855 ms
64 bytes from 10.0.110.10: icmp_seq=3 ttl=126 time=0.802 ms
64 bytes from 10.0.110.10: icmp_seq=4 ttl=126 time=0.802 ms
--- 10.0.110.10 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 0.802/1.133/2.07 ms
N1Kv-01#

On the Wireshark capture, it appears that the DSCP was re-marked properly: the ToS byte now reads
10100000, i.e. DSCP 40. But what about the L2 CoS?

INE-UCS-01-B(nxos)# sh queuing interface e1/2

Ethernet1/2 queuing information:
  TX Queuing
    qos-group  sched-type  oper-bandwidth
        0       WRR             10
        1       WRR             30
        2       WRR             10
        3       WRR             20
        4       WRR             30

  RX Queuing
    qos-group 0
    q-size: 196480, HW MTU: 1500 (1500 configured)
    drop-type: drop, xon: 0, xoff: 1228
    Statistics:
        Pkts received over the port             : 1048988
        Ucast pkts sent to the cross-bar        : 952655
        Mcast pkts sent to the cross-bar        : 96333
        Ucast pkts received from the cross-bar  : 914006
        Pkts sent to the port                   : 1238764
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 1
    q-size: 79360, HW MTU: 2158 (2158 configured)
    drop-type: no-drop, xon: 128, xoff: 252
    Statistics:
        Pkts received over the port             : 352172
        Ucast pkts sent to the cross-bar        : 352172
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 435993
        Pkts sent to the port                   : 435993
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 2
    q-size: 22720, HW MTU: 1500 (1500 configured)
    drop-type: drop, xon: 0, xoff: 142
    Statistics:
        Pkts received over the port             : 5
        Ucast pkts sent to the cross-bar        : 5
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 3
    q-size: 29760, HW MTU: 9000 (9000 configured)
    drop-type: drop, xon: 0, xoff: 186
    Statistics:
        Pkts received over the port             : 0
        Ucast pkts sent to the cross-bar        : 0
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 4
    q-size: 22720, HW MTU: 1500 (1500 configured)
    drop-type: drop, xon: 0, xoff: 142
    Statistics:
        Pkts received over the port             : 0
        Ucast pkts sent to the cross-bar        : 0
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

  Total Multicast crossbar statistics:
    Mcast pkts received from the cross-bar      : 324758
INE-UCS-01-B(nxos)#

And there we have it! The five ICMP packets were counted against qos-group 2 on ingress, so the
CoS 5 marking was mapped to the proper queue as well, which means the UCS is honoring the L2 CoS
that the N1Kv set on this traffic.