
UCS Technology Labs - UCS B-Series

Components and Management


UCS B-Series Default Configuration
Task
YOU SHOULD NOT PERFORM THIS TASK!
This task has already been performed for you by the automated
scripting engine. This task is merely provided for you to ensure that
you have a complete understanding of the UCS system and how to
default the configuration, or in case you happen to have your own
UCS lab equipment and need to perform this from time to time. If you
are using INE's racks and you do happen to wipe the configuration
clean, it will then become necessary to load a new base configuration
onto your rack using the Rack Rental Control Panel. This will take
approximately 20 minutes to complete.

Configuration
Connect to the console (9600 baud default) of Fabric Interconnect A:
Welcome to INE's Training Cisco UCS Platform
INE-UCS-1-A login: admin
Password: Cciedc01
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2012, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
INE-UCS-1-A# connect local-mgmt

Cisco Nexus Operating System (NX-OS) Software


TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2012, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
INE-UCS-1-A(local-mgmt)# erase configuration
All UCS configurations will be erased and system will reboot. Are you sure? (yes/no): yes

Removing all the configuration. Please wait....


/bin/rm: cannot remove directory `/bootflash/sysdebug//tftpd_logs`: Device or resource busy
Configurations are cleaned up. Rebooting....
Shutdown Ports..
writing reset reason 9,

INIT: Sending processes the TERM signal


Dec 22 17:39:34 %STP-2-PROFILE SIGNAL Handler:: crossed (0 secs), elapsed (0 secs 2 usecs), usr (0 secs 0 usecs), sys (0 secs 0 usecs) :

syslog[5149]: ERROR, mts_send failed, ERROR = -1, ERRNO = 22

syslog[5149]: ERROR, mts_send failed, ERROR = -1, ERRNO = 22

syslog[5149]: ERROR, mts_send failed, ERROR = -1, ERRNO = 22

syslog[5149]: ERROR, mts_send failed, ERROR = -1, ERRNO = 22


.
.
.
syslog[5149]: ERROR, mts_send failed, ERROR = -1, ERRNO = 22

syslog[5149]: ERROR, mts_send failed, ERROR = -1, ERRNO = 22

syslog[5149]: ERROR, mts_send failed, ERROR = -1, ERRNO = 22

svcconfig init: /opt/db/sam.config NOT Found. Sleeping for a while


Unexporting directories for NFS kernel daemon...done.
Stopping NFS kernel daemon: rpc.mountd rpc.nfsddone.
Unexporting directories for NFS kernel daemon...
done.

Stopping portmap daemon: portmap.


Stopping kernel log daemon: klogd.
Sending all processes the TERM signal... done.
Sending all processes the KILL signal... done.
Unmounting remote filesystems... done.
Deactivating swap...done.
Unmounting local filesystems...done.
mount: you must specify the filesystem type
Starting reboot command: reboot
Rebooting...
Resetting board
Restarting system.

N5000 BIOS v.3.5.0, Thu 02/03/2011, 05:12 PM

.
.
.

B2
Version 2.00.1201. Copyright (C) 2009 American Megatrends, Inc.

.
.
.

Booting kickstart image: bootflash:/installables/ACE4710/ucs-6100-k9-kickstart.5.0.3.N2.2.03a.bin....
...............................................................................
......Image verification OK

Starting kernel...
Usage: init 0123456SsQqAaBbCcUu
INIT: version 2.85 booting
I2C - Mezz absent
sprom_drv_init_platform: nuova_i2c_register_get_card_index
blogger: /var/log/isan.log: No such file or directory (2).
Starting system POST.....
Executing Mod 1 1 SEEPROM Test:...done (0 seconds)
Executing Mod 1 1 GigE Port Test:....done (32 seconds)
Executing Mod 1 1 PCIE Test:.................done (0 seconds)
Mod 1 1 Post Completed Successfully
POST is completed
autoneg unmodified, ignoring
autoneg unmodified, ignoring
S10mount-ramfs.supnuovaca Mounting /isan 3000m

Mounted /isan
Creating /callhome..
Mounting /callhome..
Creating /callhome done.
Callhome spool file system init done.
Checking all filesystems.....rr done.
Checking NVRAM block device ... done
The startup-config won't be used until the next reboot.
.
Loading system software
Uncompressing system image: bootflash:/installables/ACE4710/ucs-6100-k9-system.5.0.3.N2.2.03a.bin

Loading plugin 0: core_plugin...


Loading plugin 1: eth_plugin...
Loading plugin 2: fc_plugin...

8+1 records in
8+1 records out
ethernet end-host mode on CA
FC end-host mode on CA
n_port virtualizer mode.
--------------------------------------------------------------ln: `/isan/etc/sysmgr.d//fcfwd.conf`: File exists
ethernet end-host mode
Extracting VEM images failed (/isan/bin/vem/vem-release.tar.gz)
INIT: Entering runlevel: 3
Exporting directories for NFS kernel daemon...done.
Starting NFS kernel daemon: rpc.nfsd rpc.mountd done.

Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config


Added VLAN with VID == 4042 to IF -:muxif:
cp: cannot stat `/isan/plugin_img/fex.bin`: No such file or directory

--------------------enabled fc feature
--------------------2012 Dec 22 17:42:25 %$ VDC-1 %$ %USER-2-SYSTEM_MSG: CLIS: loading cmd files begin - clis
2012 Dec 22 17:42:28 %$ VDC-1 %$ Dec 22 17:42:28 %KERN-2-SYSTEM_MSG: Starting kernel... - kernel
2012 Dec 22 17:42:28 %$ VDC-1 %$ Dec 22 17:42:28 %KERN-0-SYSTEM_MSG: I2C - Mezz absent - kernel
2012 Dec 22 17:42:28 %$ VDC-1 %$ Dec 22 17:42:28 %KERN-0-SYSTEM_MSG: sprom_drv_init_platform: nuova_i2c_register_get_card_index - kernel
2012 Dec 22 17:42:34 %$ VDC-1 %$ %USER-2-SYSTEM_MSG: CLIS: loading cmd files end - clis
2012 Dec 22 17:42:34 %$ VDC-1 %$ %USER-2-SYSTEM_MSG: CLIS: init begin - clis

System is coming up ... Please wait ...


System is coming up ... Please wait ...
System is coming up ... Please wait ...
System is coming up ... Please wait ...
System is coming up ... Please wait ...
System is coming up ... Please wait ...
System is coming up ... Please wait ...
2012 Dec 22 17:43:54

%$ VDC-1 %$ %VDC_MGR-2-VDC_ONLINE: vdc 1 has come online

System is coming up ... Please wait ...


nohup: appending output to `nohup.out`

---- Basic System Configuration Dialog ----

This setup utility will guide you through the basic configuration of
the system. Only minimal configuration including IP connectivity to
the Fabric interconnect and its clustering mode is performed through these steps.

Type Ctrl-C at any time to abort configuration and reboot system.


To back track or make modifications to already entered values,
complete input till end of section and answer no when prompted
to apply configuration.

Enter the configuration method. (console/gui) ?

Connect to the console of Fabric Interconnect B and perform the exact same task.
Output is omitted to avoid repetition.

UCS Technology Labs - UCS B-Series


Components and Management
UCS B-Series Initialization
Task
YOU SHOULD NOT PERFORM THIS TASK!
This task has already been performed for you by the automated
scripting engine. This task is merely provided for you to ensure that
you have a complete understanding of the UCS system and how to
initialize the configuration, or in case you happen to have your own
UCS lab equipment and need to perform this from time to time. If you
are using INE's racks and you do happen to wipe the configuration
clean, it will then become necessary to load a new base configuration
onto your rack using the Rack Rental Control Panel. This will take
approximately 20 minutes to complete.
Connect to the console of the UCS 6200 Series Fabric Interconnects (FIs) and provision
them.
Use the following information:
Cluster Name: INE-UCS-YY (where YY is your DC rack number; e.g., Rack
1 = INE-UCS-01)
Cluster IP Address: 192.168.0.100
FI-A IP Address: 192.168.0.101
FI-B IP Address: 192.168.0.102
Gateway IP Address: 192.168.0.254
Cluster Admin User: admin
Cluster Admin Password: cciedc01
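The values above can be collected as data for a console-automation harness (for example, one driving the setup dialog with pexpect). The prompt fragments and helper function below are our own illustration of that idea, not a Cisco API:

```python
# Answers for the FI-A setup dialog, keyed by a fragment of each prompt.
# Fragments are matched in order, so more specific ones come first.
SETUP_ANSWERS = [
    ("configuration method", "console"),
    ("setup newly or restore", "setup"),
    ("Continue? (y/n)", "y"),
    ("Enforce strong password", "n"),
    ('password for "admin"', "cciedc01"),
    ("part of a cluster", "yes"),
    ("switch fabric (A/B)", "A"),
    ("system name", "INE-UCS-01"),
    ("Mgmt0 IPv4 address", "192.168.0.101"),
    ("Mgmt0 IPv4 netmask", "255.255.255.0"),
    ("default gateway", "192.168.0.254"),
    ("Cluster IPv4 address", "192.168.0.100"),
    ("DNS Server IPv4 address", "n"),
    ("default domain name? (yes/no)", "yes"),
    ("Default domain name", "ine.com"),
    ("Apply and save", "yes"),
]

def answer_for(prompt: str) -> str:
    """Return the canned answer for the first matching prompt fragment."""
    for fragment, answer in SETUP_ANSWERS:
        if fragment in prompt:
            return answer
    raise KeyError("no canned answer for prompt: %r" % prompt)
```

A harness would read each console prompt and reply with `answer_for(prompt)`; the fragments mirror the dialog transcript shown in the Configuration section below.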

Configuration
Connect to the console of Fabric Interconnect A:
---- Basic System Configuration Dialog ----

This setup utility will guide you through the basic configuration of
the system. Only minimal configuration including IP connectivity to
the Fabric interconnect and its clustering mode is performed through these steps.

Type Ctrl-C at any time to abort configuration and reboot system.


To back track or make modifications to already entered values,
complete input till end of section and answer no when prompted
to apply configuration.

Enter the configuration method. (console/gui) ? console

Enter the setup mode; setup newly or restore from backup. (setup/restore) ? setup

You have chosen to setup a new Fabric interconnect. Continue? (y/n): y

Enforce strong password? (y/n) [y]: n

Enter the password for "admin": cciedc01


Confirm the password for "admin": cciedc01

Is this Fabric interconnect part of a cluster(select `no` for standalone)? (yes/no) [n]: yes

Enter the switch fabric (A/B) []: A

Enter the system name: INE-UCS-01

Physical Switch Mgmt0 IPv4 address : 192.168.0.101

Physical Switch Mgmt0 IPv4 netmask : 255.255.255.0

IPv4 address of the default gateway : 192.168.0.254

Cluster IPv4 address : 192.168.0.100

Configure the DNS Server IPv4 address? (yes/no) [n]: n

Configure the default domain name? (yes/no) [n]: yes

Default domain name : ine.com

Following configurations will be applied:

Switch Fabric=A
System Name=INE-UCS-01-A
Enforced Strong Password=no

Physical Switch Mgmt0 IP Address=192.168.0.101


Physical Switch Mgmt0 IP Netmask=255.255.255.0
Default Gateway=192.168.0.254
Domain Name=ine.com

Cluster Enabled=yes
Cluster IP Address=192.168.0.100
NOTE: Cluster IP will be configured only after both Fabric Interconnects are initialized

Apply and save the configuration (select `no` if you want to re-enter)? (yes/no): yes
Applying configuration. Please wait.

Configuration file - Ok

Cisco UCS 6200 Series Fabric Interconnect


INE-UCS-01-A login: 2012 Dec 22 18:10:23 INE-UCS-01-A %$ VDC-1 %$ %UCSM-2-MANAGEMENT_SERVICES_UNRESPONSIVE: [F0452][critical][management-services-unresponsive][sys/mgmt-entity-B] Fabric Interconnect B, management services are unresponsive

Cisco UCS 6200 Series Fabric Interconnect


INE-UCS-01-A login: admin
Password:
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2012, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php

INE-UCS-01-A#

UCS Technology Labs - UCS B-Series


Components and Management
UCS B-Series Cluster Setup
Task
YOU SHOULD NOT PERFORM THIS TASK!
This task has already been performed for you by the automated
scripting engine. This task is merely provided for you to ensure that
you have a complete understanding of the UCS system and how to
initialize the configuration, or in case you happen to have your own
UCS lab equipment and need to perform this from time to time. If you
are using INE's racks and you do happen to wipe the configuration
clean, it will then become necessary to load a new base configuration
onto your rack using the Rack Rental Control Panel. This will take
approximately 20 minutes to complete.
Ensure high availability by setting up FI B as a subordinate peer to FI A.

Configuration
Connect to the console of Fabric Interconnect B:
Enter the configuration method. (console/gui) [console] ? console

Installer has detected the presence of a peer Fabric interconnect. This Fabric interconnect will be added to the cluster. Continue (y/n) ? y

Enter the admin password of the peer Fabric interconnect:


Connecting to peer Fabric interconnect... done
Retrieving config from peer Fabric interconnect... done
Peer Fabric interconnect Mgmt0 IP Address: 192.168.0.101
Peer Fabric interconnect Mgmt0 IP Netmask: 255.255.255.0
Cluster IP address          : 192.168.0.100

Physical Switch Mgmt0 IPv4 address [0.0.0.0]: 192.168.0.102

Apply and save the configuration (select `no` if you want to re-enter)? (yes/no): yes
Applying configuration. Please wait.

Configuration file - Ok

Cisco UCS 6200 Series Fabric Interconnect


INE-UCS-01-B login:

Verification
Log in and verify high availability state, and that this FI is subordinate.
Cisco UCS 6200 Series Fabric Interconnect
INE-UCS-01-B login: admin
Password:
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2012, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php

INE-UCS-01-B# connect local-mgmt


Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2012, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php

INE-UCS-01-B(local-mgmt)# show cluster extended-state


Cluster Id: 0x8f8623104c6311e2-0xab50547feec57d44

Start time: Sat Dec 22 19:15:23 2012


Last election time: Sat Dec 22 19:15:24 2012

B: UP, SUBORDINATE
A: UP, PRIMARY

B: memb state UP, lead state SUBORDINATE, mgmt services state: UP


A: memb state UP, lead state PRIMARY, mgmt services state: UP

heartbeat state PRIMARY_OK

INTERNAL NETWORK INTERFACES:


eth1, UP
eth2, UP

HA NOT READY
No device connected to this Fabric Interconnect

INE-UCS-01-B(local-mgmt)#

We can see that this device is, in fact, subordinate, and that the heartbeat between FI B
and FI A is OK. However, HA is not yet ready because no chassis is connected. We must
configure chassis discovery so that the SEEPROM (Serial Electrically Erasable
Programmable Read-Only Memory) on the first chassis midplane can be used for dispute
resolution in the unlikely event of a Partition-in-Space (also called split-brain) or a
Partition-in-Time.
A Partition-in-Space occurs when heartbeats are, for any reason, lost across both the L1
and L2 links. Each FI then checks the SEEPROM in the first UCS B-Series chassis (or a
file at /mnt/jffs2 on a UCS C-Series Rack-Mount server, if one is managed by UCSM) to
see whether it has been updated by the other FI. Each FI attempts to claim as many
chassis as possible, and one chassis is always designated the "Quorum" chassis, as its
SEEPROM will indicate. In the case of a tie, with an equal number of chassis claimed by
each FI, whichever FI claims the quorum chassis emerges the victor, and the other fades
away and does not operate until L1/L2 heartbeat connectivity is restored.
A Partition-in-Time occurs when one of the FIs is about to start its UCSM process on an
outdated configuration. Before it does, it checks the chassis SEEPROM to determine the
revision of the config; if its own config is older, it contacts its peer FI to retrieve the
newer config.
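The Partition-in-Space election rule described above can be modeled roughly as follows. This is an illustrative sketch of the decision logic, not Cisco's actual firmware; the function and parameter names are our own:

```python
def elect_primary(claims_a: set, claims_b: set, quorum_chassis: int) -> str:
    """Decide which FI keeps operating after a Partition-in-Space.

    claims_a / claims_b: sets of chassis IDs each FI managed to claim
    by writing its ownership into those chassis SEEPROMs.
    quorum_chassis: the chassis designated as the "Quorum" chassis.
    """
    # Whichever FI claimed more chassis wins outright.
    if len(claims_a) != len(claims_b):
        return "A" if len(claims_a) > len(claims_b) else "B"
    # Tie-breaker: the FI holding the designated quorum chassis wins;
    # the loser stops operating until L1/L2 heartbeats are restored.
    return "A" if quorum_chassis in claims_a else "B"
```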

UCS Technology Labs - UCS B-Series


Components and Management
Connecting to Web/Java UCSM UI
Task
Connect to the UCS Manager GUI and log in.
UCS Virtual IP address: 192.168.0.100

Please Note
If you are unable to access the UCS GUI using https://192.168.0.100, the HTTP
services may not be enabled. To verify UCS services, Telnet into dc.racks.ine and
select 5 in the AS menu. Log in to the UCS FI Virtual IP (cisco/cisco) and issue the
following command:
INE-UCS-01-A# show configuration | in http
enable http
enable http-redirect
enable https
set http port 80
set https cipher-suite "ALL:!ADH:!EXPORT56:!LOW:RC4+RSA:+HIGH:+MEDIUM:+EXP:+eNULL"
set https cipher-suite-mode medium-strength
set https keyring default
set https port 443
INE-UCS-01-A#

If you do not see this output, you will need to enable HTTP services using the following
commands:
INE-UCS-01-A# scope system
INE-UCS-01-A /system # scope services
INE-UCS-01-A /system/services # enable http
INE-UCS-01-A /system/services # commit-buffer
INE-UCS-01-A /system/services # enable https
INE-UCS-01-A /system/services # commit-buffer

INE-UCS-01-A# exit

It may take up to ten minutes before you are able to access the UCS GUI.

Configuration
Open a web browser, browse to https://192.168.0.100, and accept the self-signed
SSL certificate.

Click Launch UCS Manager.

Choose to Keep the file. After it downloads, click to launch it.

Choose to Always trust content from this publisher so that you aren't asked
whether to trust it every time you launch this application.

Log in with the pre-assigned user name admin and password cciedc01.

UCS Technology Labs - UCS B-Series


Components and Management
UCS System Management Communications
Task
Configure the UCS Manager to allow management communications via HTTPS,
SSH, Telnet, SNMPv3, and Read-Only SMASH CLP.
Redirect all port 80 web requests to port 443. Limit to 6 web sessions per user, and a
maximum of 32 sessions for the entire system.
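The two session caps interact as sketched below. This is a toy model for clarity only, not a UCSM API; all names are our own:

```python
class WebSessionLimiter:
    """Model of the web-session limits set in this task:
    at most 6 sessions per user and 32 for the entire system."""

    def __init__(self, per_user: int = 6, total: int = 32):
        self.per_user = per_user
        self.total = total
        self.sessions = {}  # user name -> count of open sessions

    def open_session(self, user: str) -> bool:
        """Return True if a new session is allowed, False if a cap is hit."""
        if sum(self.sessions.values()) >= self.total:
            return False  # system-wide cap reached
        if self.sessions.get(user, 0) >= self.per_user:
            return False  # per-user cap reached
        self.sessions[user] = self.sessions.get(user, 0) + 1
        return True
```

Note that the global cap of 32 can be exhausted by as few as six users at the per-user limit, which is why both knobs exist.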
Create an SNMP user named SNMP-MONITOR.
Make any password for this user, for example C1sc0123.
Ensure that the user uses a 160-bit hashing algorithm and 128-bit encryption method
whenever authenticating and whenever sending any sort of data.
Utilize this user for all inbound SNMP queries sent to the UCS system and all
outbound trap/inform notifications.
Send all traps or informs to the IPv4 address of 192.168.0.199.
When creating an SNMP Trap or Inform, use the one that is more reliable. The system
contact for SNMP should be UCS.Admin@ine.com, and the location should be set to
"INE DC1, Reno, NV, US, Earth".

Configuration
Begin by limiting the scope of view to only what is needed at the current time, to
simplify configuration. In this case, filter to Communications Management. Then
click the Communications Services sub-group.

Next, limit the number of web sessions to those specified: 6 per user and 32 overall.
Notice that HTTP port 80 already redirects to HTTPS/SSL port 443, so nothing more
is needed there. Also note that the default Key Ring is used for a self-signed SSL
certificate. If a PKI/CA were desired, you would change the filter view to Key
Management, add a CA Trustpoint and a CA Signing Request, and then import the
resulting certificate. (If using a CA/PKI Trustpoint, this should be done before any
other configuration is performed on the system.) Also note in this graphic that we
have enabled Telnet communications, which is disabled by default.

SNMP Communications are now configured by first enabling them and specifying

the username that will be used in all outbound trap/inform notifications. The System
Contact and System Location fields have also been filled out. Note the red
highlighted plus symbols on the right side of each SNMP box below. These will be
used to create the necessary traps and users required by the task in the next two
graphics.

Click + to create the SNMP Trap. Enter the IP address and username. Use V3 to
ensure the ability to both authenticate and encrypt, and enforce both by choosing
the v3Privilege of Priv (Auth authenticates but doesn't encrypt, and Noauth does
neither, as you might expect). Choosing Informs as the Type is necessary because
SNMP traps carry no indication that the receiving side actually received anything,
whereas informs require, per RFC, that the receiver send an SNMP response
protocol data unit (PDU). This method will be used for outbound SNMP informs
from the UCS system.

For inbound queries, we need to set up a user to be authenticated. We also want to
make sure we select the 160-bit authentication hash (96 bits of which are used per
the HMAC-SHA-1-96 spec in RFC 2404) and the AES-128 encryption mechanism.
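The 160-bit/128-bit requirement maps to SNMPv3's HMAC-SHA-1 authentication and AES-128 privacy. As background (this is not a UCSM command), the standard RFC 3414 password-to-key derivation that such a user's SHA authentication key undergoes can be sketched as:

```python
import hashlib

def sha_key_from_password(password: bytes, engine_id: bytes) -> bytes:
    """RFC 3414 password-to-key (SHA-1) plus key localization.

    Expands the password to 1 MiB and hashes it to get Ku, then binds
    Ku to the authoritative SNMP engine: Kul = SHA1(Ku || engineID || Ku).
    """
    h = hashlib.sha1()
    reps, rem = divmod(1024 * 1024, len(password))
    h.update(password * reps + password[:rem])  # 1,048,576 bytes total
    ku = h.digest()  # 20 bytes = the 160-bit key
    return hashlib.sha1(ku + engine_id + ku).digest()
```

Localization is why the same password yields a different key on every engine; RFC 3414 Appendix A provides official test vectors for checking an implementation.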

In the screen shot below, we can see that both are enabled. SMASH CLP (read-only)
and SSH are both enabled by default and cannot be disabled. Remember to
click Save at the bottom of the screen before moving on.

Note that SNMP in UCSM is read-only: it is not used for writes or management, but
simply for monitoring. Management of the system is performed via SOAP calls to
the well-defined and well-documented XML-API.
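As background, a login exchange against that XML-API can be sketched as follows. The aaaLogin method, the /nuova endpoint, and the outCookie attribute are part of the documented API; the sample response string here is fabricated for illustration, and no network call is made:

```python
import xml.etree.ElementTree as ET

# UCSM XML-API requests are POSTed to https://<virtual-ip>/nuova.
def build_login_request(user: str, password: str) -> str:
    """Build the aaaLogin request body that opens an API session."""
    return '<aaaLogin inName="%s" inPassword="%s" />' % (user, password)

def parse_login_response(xml_text: str) -> str:
    """Extract outCookie, the session token passed on all later calls."""
    root = ET.fromstring(xml_text)
    return root.get("outCookie")

# Illustrative response shape only -- not captured from a real system.
sample_response = '<aaaLogin cookie="" response="yes" outCookie="1388709182/abcd" />'
```

Every subsequent query or configuration call carries the returned cookie, which is how the read-only SNMP view and the read-write XML-API differ in practice.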

UCS Technology Labs - UCS B-Series


Components and Management
Call Home
Task
Enable Call Home functionality for the Warning level and above.
Enable Throttling if too many messages are queued to be sent at once.
The contact should be the same as used earlier for SNMP.
Use the values in the table below.

UCS Field            Value
--------------------------------------------------
Contact Information
  Contact:           UCS.Admin@ine.com
  Phone:             +1-213-525-1212
  Email:             UCS.Admin@ine.com
  Address:           INE DC1, Reno, NV, US, Earth
IDs
  Customer ID:       4815162342
  Contract ID:       1234
  Site ID:           5551212
Email Addresses
  From:              UCS.Admin@ine.com
  Reply To:          UCS.Admin@ine.com
SMTP Server
  Host:              192.168.0.199
  Port:              25

Configuration
Fill in the fields as shown in this image, and then click Save.

UCS Technology Labs - UCS B-Series


Components and Management
Orgs and RBAC
Task
Create two organizations, one for each of the two customers sharing computing power
on the single INE UCS cluster.
Name them CUST1 and CUST2.
Create locales for the two organizations with the same names as before - CUST1
and CUST2.
Create a custom role named "qos" that allows a user assigned that role to change
only QoS parameters anywhere in the UCS cluster.
Three other existing roles will be used: "admin," LAN or "network," and SAN or
"storage."
Explore those roles.

Configuration
We could start on almost any tab in the left navigation pane. Specifically, we could
perform this operation on the Server, LAN, SAN, or Admin tab, but not on the VM or
Equipment tabs, because the first four all reflect the same org structure, which
begins with the "root" org. We will begin on the Server tab, and expand Servers >>
Service Profiles >> root.

Right-click and choose Create Organization.

Fill in the name as directed, and click OK.

Repeat to create the second org.

Notice that they have both been created. If you like, you can navigate to the other
tabs (LAN, SAN, and Admin) and see that both orgs are present there as well.

Click the Admin tab, filter to User Management, expand User Services, right-click
Locales, and choose Create Locale.

Enter the org name.

Drag the CUST1 org to the new CUST1 Locale.

Repeat for the CUST2 org.

Under User Services, expand Roles and note the existing roles.

Click the Storage role and note the fields that are pre-selected.

Click the Network role and note the fields that are pre-selected.

Click the Admin role and note the fields that are pre-selected.

Right-click Roles and choose Create Role.

Enter the name qos and find any and all privileges relating to QoS (scroll down to
find them all).

When all are selected, click OK.

Explore your new role.

UCS Technology Labs - UCS B-Series


Components and Management
AAA and LDAP
Task
Use the following table to see existing Active Directory LDAP users in existing groups
along with how they should be mapped to roles and locales that you created in a
previous task.
The Active Directory server is at the IP address of 192.168.0.10.
You have no direct access to this server, but AD is listening on port 389 (no SSL).
You may authenticate with any of these users because they all already exist in the
MS AD LDAP.
Every user's password is cisco.
To query the LDAP, use the bind user ucsbind with the password cisco.
All users in the LDAP belong in the base group "CN=users,DC=ine,DC=com".
If a UCS Locale is not populated below, then it should not be mapped to any Locale.

Full Name          LDAP User    LDAP Group        UCS Role     UCS Locale
Mark Snow          msnow        all-ucs-admin     admin
Don Draper         ddraper      all-ucs-qos       qos
Jax Teller         jteller      cust1-ucs-lan     network      CUST1
Carrie Mathison    cmathison    cust1-ucs-san     storage      CUST1
Dexter Morgan      dmorgan      cust1-ucs-server  all server   CUST1
Walter White       wwhite       cust2-ucs-lan     network      CUST2
Tyrion Lannister   tlannister   cust2-ucs-san     storage      CUST2
Rick Grimes        rgrimes      cust2-ucs-server  all server   CUST2

Configuration
On the Admin tab in the left navigation pane, expand User Management >> LDAP.
In the right pane, click General and fill in the Base DN and Filter as shown. The
filter for MS Active Directory should always be sAMAccountName=$userid. Also
note that UCSM doesn't care if there are spaces between the CN or DC entities in
the Base DN to search, but in other places in LDAP configuration, UCSM requires
that there be no spaces. We will leave the spaces here, but point them out when it
becomes necessary (mainly in the creation of LDAP Group Maps).
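The two mechanics in play here, substituting the user ID into the sAMAccountName filter and keeping DNs free of stray spaces where UCSM demands it, can be sketched as follows (the helper names are our own, not part of UCSM):

```python
from string import Template

def normalize_dn(dn: str) -> str:
    """Strip spaces around RDN separators.

    UCSM tolerates spaces in the Base DN but requires space-free DNs
    elsewhere (notably in LDAP Group Maps), so normalizing is the safe habit.
    """
    return ",".join(part.strip() for part in dn.split(","))

def ldap_filter(userid: str, template: str = "sAMAccountName=$userid") -> str:
    """Expand the MS AD filter the way UCSM substitutes $userid at login."""
    return Template(template).substitute(userid=userid)
```

For example, `normalize_dn("CN=users, DC=ine, DC=com")` yields the space-free form a Group Map needs, and `ldap_filter("msnow")` yields the search filter UCSM sends for that login.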

Right-click LDAP Providers and then click Create LDAP Provider.

Fill in the fields as shown and click Next. Note that you need the same filter here,
and also note that spaces are not important in this section (although it's always a
good practice to not include spaces so that you don't get in trouble if you don't
remember where they are allowed; the parser does not warn you).

If we want to pass the AD groups on to UCSM for mapping to UCSM Roles, we
must enable Group Authorization. Recursion allows us to dive down into lower
levels of OUs in the AD; although we don't need it here, you may in some
environments. Here we must also set the Target Attribute explicitly to "memberOf";
both this and the filter are case sensitive.
Here we see the provider created.

Right-click LDAP Provider Groups in the left navigation pane and click to create a
new group.

Enter a name for the group, and move the provider from the right to left pane, as
shown.

Notice the new group.

Now it is time to map MS AD groups to UCSM Roles and Locales. Right-click LDAP
Group Maps and click to create a new group map.

Here it is crucial not to include any spaces. The parser will not warn you if you do;
the mapping simply will not work, and the failure is difficult to troubleshoot. Enter
the well-formed Distinguished Name (DN) for the AD group, then click both the role
and the locale below, linking them together, and click OK. Here we were instructed
that user msnow in the all-ucs-admin group did not have a Locale entry. This is
because the Admin role can never be linked to a specific locale, simply because
you cannot administer only part of the UCSM.

Repeat this process for the all-ucs-qos AD group, this time choosing the qos role.

Repeat this process for the cust1-ucs-lan AD group, this time choosing the network
role and the CUST1 locale.

Repeat this process for the cust1-ucs-san AD group, this time choosing the storage

role and the CUST1 locale.

Repeat this process for the cust1-ucs-server AD group, this time choosing the three
server roles and the CUST1 locale.

Repeat this process for the cust2-ucs-lan AD group, this time choosing the network
role and the CUST2 locale.

Repeat this process for the cust2-ucs-san AD group, this time choosing the storage

role and the CUST2 locale.

Repeat this process for the cust2-ucs-server AD group, this time choosing the three
server roles and the CUST2 locale.

Note all of the groups you have created, checking carefully to ensure that there are
no spaces in your LDAP DNs.

Outside the scope of anything that could be tested in the lab, it can be quite useful
to look in your Microsoft AD server, under the ADSI Edit utility, just to be 100%
certain of the DN formatting for various groups. That image is included here for
perusal and comparison with the above LDAP Group Maps.

Now we need to create an Authentication Domain to use for invoking the AD
LDAP provider. In the left navigation pane, expand Authentication, right-click
Authentication Domains, and click to create a new domain.

Enter the name, choose Ldap as the Realm, and select the Provider Group from
the drop-down menu.

In the left navigation pane, click Native Authentication and note the options for
Default Authentication and Console Authentication. This is particularly useful
because we can always recover from the console if we happen to lock ourselves out
of GUI authentication. However, this only sets the default authentication method
presented when we open the login applet; we always have the ability to change the
authentication method from the drop-down in the login applet itself. Note that we
can also choose the Role Policy For Remote Users. Here we choose Assign
Default Role, although it should not be necessary after we test our AD LDAP
integration and know it to be working properly. A quick review of Authentication vs.
Authorization is important here: with this field set to Assign Default Role, if
Authentication succeeds but Authorization fails (because of improper
mapping/naming), the user is logged in and given the default role, which provides
read-only access to the whole system. This is useful when login fails, to distinguish
between a problem with Authentication and a problem with Authorization. A value
of No Login means that if Authorization fails, Authentication will fail as well.
After successfully testing your LDAP (or RADIUS or TACACS+)
configuration, this should be changed back to "No Login" to prevent
unauthorized access to UCSM.

Verification
Because you have already logged in to UCSM, it will be tempting to simply open
your .jnlp Java Web Start icon to launch the login applet; however, upon doing so,
you would notice that it does not include a drop-down allowing login from the
LDAP. To get that, you need a new .jnlp file. Delete your old file, browse to
http://192.168.0.100 again, and click Launch UCS Manager. Download the file and
launch it.

Log in with user wwhite and password cisco. Because he is assigned to the LDAP
group cust2-ucs-lan, he should have rights only to the CUST2 org, and more
specifically only to network control within that org.

A quick Wireshark capture filtered to tcp.port == 389 shows us the story: first the
ucsbind user logs in to the AD LDAP and queries to find the user wwhite and his
"memberOf" attribute, which contains the proper value of
"CN=cust2-ucs-lan,CN=Users,DC=ine,DC=com"; then authentication with his
credentials succeeds.

Back in UCSM, on the Admin tab in the left navigation pane under User Services
>> Remotely Authenticated Users, we see him logged in. Note in the left
navigation pane not only his name, but also the prefix: this can be used to log in
from the console if desired (we'll cover this later).

On the LAN tab in the left navigation pane, if you navigate to Policies and try to
right-click and create a Network Control Policy under the root org, you cannot.

The same is true for a Network Control Policy under the CUST1 org.

However, under the CUST2 org we can clearly see that we have rights to create a
new Network Control Policy.

But on the Servers tab, we don't have any rights to create any Service Profiles or
anything server-related, even under the CUST2 org.

Let's log in from PuTTY via SSH to the UCS virtual IP.

And we will try the wwhite userid, but specifically with the prefix we noted earlier to
force LDAP login (we don't have a local user named wwhite anyway).

Note now that we can see him logged in as a remote user via LDAP, not only via
web but also via pts (pseudo-terminal slave).

We also have the ability to simply test our LDAP from CLI without trying to log in as
that user. Note, however, that this only tests authentication and not authorization:
INE-UCS-01-A# connect nxos
INE-UCS-01-A(nxos)# test aaa group ldap wwhite cisco
user has been authenticated

INE-UCS-01-A(nxos)#

We can also perform a debug ldap all from the NX-OS CLI, resulting in the
following output:
2012 Dec 30 22:01:44.063464 ldap: mts_ldap_aaa_request_handler: session id 0, list handle is NULL
2012 Dec 30 22:01:44.063683 ldap: mts_ldap_aaa_request_handler: user :wwhite:, user_len 6, user_data_len 5
2012 Dec 30 22:01:44.063898 ldap: ldap_authenticate: user wwhite with server group LDAP_INE_Main
2012 Dec 30 22:01:44.064167 ldap: ldap_read_global_config:
2012 Dec 30 22:01:44.064407 ldap: ldap_global_config: entering ...
2012 Dec 30 22:01:44.064655 ldap: ldap_global_config: GET_REQ...

2012 Dec 30 22:01:44.064918 ldap: ldap_global_config: got back the return value of global configuration operation: s
2012 Dec 30 22:01:44.065139 ldap: ldap_global_config: REQ - num server 1 num group 3 timeout 30
2012 Dec 30 22:01:44.065354 ldap: ldap_global_config: returning retval 0
2012 Dec 30 22:01:44.065584 ldap: ldap_read_group_config:
2012 Dec 30 22:01:44.065806 ldap: ldap_servergroup_config: GET_REQ for LDAP servergroup index 0 name LDAP_INE_Main

2012 Dec 30 22:01:44.066041 ldap: ldap_servergroup_config: GET_REQ got protocol server group index 4 name LDAP_INE_M
2012 Dec 30 22:01:44.066260 ldap: ldap_servergroup_config: returning retval 0 for server group LDAP_INE_Main
2012 Dec 30 22:01:44.066477 ldap: ldap_server_config: entering for server , index 1
2012 Dec 30 22:01:44.066695 ldap: ldap_server_config: key size 532, value size 953
2012 Dec 30 22:01:44.066913 ldap: ldap_server_config: GET_REQ: server index: 1 addr:
2012 Dec 30 22:01:44.067135 ldap: ldap_server_config: Got for Protocol server index:1 addr:192.168.0.10
2012 Dec 30 22:01:44.068057 ldap: ldap_server_config:
got back the return value of Protocol server 192.168.0.10 operation: success, des success
2012 Dec 30 22:01:44.068287 ldap: initialize_ldap_server_from_conf:
192.168.0.10, CN=ucsbind, CN=Users, DC=ine, DC=com, , CN=Users, DC=ine, DC=com, memberOf, 1, 0

2012 Dec 30 22:01:44.068513 ldap: ldap_client_authenticate: entering...


2012 Dec 30 22:01:44.068729 ldap: ldap_client_auth_init: entering...

To conclude, let's test some credentials that should not work from CLI, just to see
their results:
INE-UCS-01-A(nxos)# test aaa group ldap wwhite ciscocisco
user has failed authentication Invalid credentials

INE-UCS-01-A(nxos)#

If we had wanted to use RADIUS or TACACS+ instead of LDAP, not only for
authentication but also for authorization, we would have had to make sure that we
passed back a custom attribute for the Role and/or Locale mapping.
For RADIUS, the CiscoAVPair attribute should be used, and it can be used in
several ways. The possible combinations, along with how they are formatted (note
the spaces in between), are as follows:

For Cisco RADIUS servers, including ACS and ISE:


1. A Role: shell:roles="network"
2. Multiple Roles: shell:roles="network storage"
3. A Locale: shell:locales="CUST1"
4. Multiple Locales: shell:locales="CUST1 CUST2"
5. A Role and a Locale: shell:roles="network" shell:locales="CUST1"
6. Multiple Roles and Locales: shell:roles="network storage" shell:locales="CUST1 CUST2"

If using MS RADIUS, be sure to select only Unencrypted Authentication
(PAP/SPAP).

Also, the VSA value looks a bit more like it does in TACACS, as shown below.

For TACACS+, a custom TACACS+ shell (exec) attribute should be used, and it
can be used in several ways. The possible combinations, along with how they are
formatted (note the spaces in between), are as follows:
1. A Role: shell:roles*"network"
2. Multiple Roles: shell:roles*"network storage"
3. A Locale: shell:locales*"CUST1"
4. Multiple Locales: shell:locales*"CUST1 CUST2"
5. A Role and a Locale: shell:roles*"network" shell:locales*"CUST1"
6. Multiple Roles and Locales: shell:roles*"network storage" shell:locales*"CUST1 CUST2"
Finally, this last method can also be used with LDAP after extending the AD
Schema; however, the newer method illustrated above is much preferred as of
UCSM v1.4.1, because no AD Schema modification is necessary.

UCS Technology Labs - UCS B-Series


Components and Management
Backup and Restore
Task
Create four backup jobs for the UCS system using a TFTP server already running at
192.168.0.10.
Put all backups in a directory called UCSBackups.
The first backup job should back up everything in the system, resulting in a binary
file; run this job.
The second backup job should back up everything related to the Server, LAN, and
SAN tabs, should preserve any values already assigned from pools to various
entities, and should result in an XML file when run; do not run this job.
The third backup job should back up everything related to the Admin tab only and
result in an XML file when run; do not run this job.
The fourth backup job should back up almost everything in the UCSM except for
cluster IP addresses, preserving any values already assigned from pools to various
entities, and result in an XML file when run; do not immediately run this job upon
creation, but rather after creation of the job has completed.
Import the last XML-based backup containing all system configuration, and merge it
with the current configuration.
Do not at any time attempt to restore the full binary file backup, because this
can only be done from CLI after erasing the entire system configuration.

Configuration
In the left navigation pane on the Admin tab, click the root object named All. Then,
in the right pane, click Backup Configuration.

Click Create Backup Operation.

Choose the options shown below for a Full State backup that is Enabled on a
Remote File System of type TFTP at the IP and path shown.

Verification
Note the FSM (Finite State Machine) showing the progress of the backup.

Configuration
At this UCSM rev of 2.0, you must delete any current backup job to create a new
job. You also cannot schedule backups in 2.0, but you can in rev 2.1. Delete the
current backup job. You will have to do this between each of the next few tasks as
well.

Create a new backup job of type Logical Configuration that has a state that is
Disabled. Select the Preserve Identities check box.

Create a new backup job of type System Configuration that has a state that is
Disabled

Create a new backup job of type All Configuration that has a state that is Disabled.
Select the Preserve Identities check box. After you have created this job by
clicking OK, activate it by clicking on it and changing the state to Enabled and
clicking Save. Watch to see that it completes properly using the FSM.

Click Import Configuration.

Click Create Import Operation.

Select Enabled for the state and Merge for the action.

Note the FSM progress showing the import operation.

It should complete successfully.
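For reference, the same backup and import operations can also be driven from the UCSM CLI. The sketch below is hedged - the file names are illustrative, not from the lab, and exact syntax should be verified with ? on your UCSM rev:

```
INE-UCS-01-A# scope system
INE-UCS-01-A /system # create backup tftp://192.168.0.10/UCSBackups/full-state.bin full-state enabled
INE-UCS-01-A /system* # commit-buffer
INE-UCS-01-A /system # create import-config tftp://192.168.0.10/UCSBackups/all-config.xml enabled merge
INE-UCS-01-A /system* # commit-buffer
```

As in the GUI, the backup or import job runs as soon as it is committed with state enabled.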

UCS Technology Labs - UCS B-Series


Components and Management
Chassis and Blade Discovery
Task
Discover the chassis with 2 links from each FI.
Do not aggregate links together from the FI to the chassis.
Explore all of the components of the chassis discovered.

Configuration
In the left navigation pane, on the Equipment tab, click Equipment, click the
Policies tab in the right pane, and then click the Global Policies tab in the tabset
below. In the Chassis Discovery Policy section, click the Action drop-down list
and choose 2 Link. Select None for the Link Grouping Preference.
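The same chassis discovery policy can also be set from the UCSM CLI; a hedged sketch (object and keyword names may vary slightly by rev):

```
INE-UCS-01-A# scope org /
INE-UCS-01-A /org # scope chassis-disc-policy
INE-UCS-01-A /org/chassis-disc-policy # set action 2-link
INE-UCS-01-A /org/chassis-disc-policy* # set link-aggregation-pref none
INE-UCS-01-A /org/chassis-disc-policy* # commit-buffer
```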

In the left navigation pane, navigate to Fabric Interconnects >> Fabric


Interconnect A (primary) and note the picture of the Fabric Interconnect with ports
that are populated with SFPs, displayed in light yellow to indicate Link Down.

Navigate to Fabric Interconnect A >> Fixed Module >> Unconfigured Ethernet


Ports.

Click Port 1, and in the right pane click Configure as Server Port; when prompted
if you are sure, click Yes.

Do the same for Port 2.

Note that these ports have been automatically moved under the Server Ports
section.

Note that we already begin seeing Chassis 1 appear along with some IO Modules

(FEX).

Navigate to Fabric Interconnect B >> Fixed Module >> Unconfigured Ethernet


Ports and do the same thing for Port 1.

And for Port 2.

They have moved to the Server Ports section now as well.
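For reference, the same server-port configuration can be pushed from the UCSM CLI; a hedged sketch (shown for fabric a, ports 1-2; repeat under fabric b for the second FI):

```
INE-UCS-01-A# scope eth-server
INE-UCS-01-A /eth-server # scope fabric a
INE-UCS-01-A /eth-server/fabric # create interface 1 1
INE-UCS-01-A /eth-server/fabric* # create interface 1 2
INE-UCS-01-A /eth-server/fabric* # commit-buffer
```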

Verification
We begin seeing all of the components that are installed in the one chassis present
in our system. (Note that, of course, more chassis are supported with one pair of
FIs - up to 20 - but we happen to have only one installed.)

Explore Server 1 to discover things such as amount and speed of memory, number
of CPUs and cores, and the number of DCE Interfaces. Here we have a Cisco
model B22 M3 blade with 48GB of RAM, 2 CPUs each with 6 cores, and 4 DCE
interfaces - this is because of the VIC 1240 card, which has 2x 10Gbps (10GBASE-KR)
backplane Ethernet traces that route to each IOM/FEX (because there are two IOMs
in the chassis, we have a total of 4x 10Gbps traces for an available aggregate
forwarding bandwidth of 40Gbps).

Note the VIF (Virtual InterFace) Paths, and that they have automatically formed PCs
(Port Channels) - one to each IOM - because this is the only way to use all 20Gbps of
bandwidth to each IOM.

If we click DCE Interfaces, we can see where each trace goes and the PC that it
belongs to in a summary view. Notice that the traces are numbered strangely - 1, 3,
5, 7. This is because there is the possibility that we could add a Port Extender (a
daughter card that can be added to the mezzanine adapter) that would bring the
total number of traces up to 8 (4 to each IOM), and the even trace numbers - 2, 4,
6, 8 - are reserved for the extender ports. Although we have DCE interfaces, we
have no HBAs or NICs. This is because everything in a "Palo" mezzanine card
(which includes the 1st-gen M81KR and 2nd-gen VIC 1240 and VIC 1280) is
completely virtual and must be instantiated using Adapter-FEX. We will do this
later by creating vNICs and vHBAs and assigning them to these blades by means
of an abstraction called a Service Profile.

The same information but at a detailed view for trace 1.

The same information but at a detailed view for trace 3.

The same information but at a detailed view for trace 5.

The same information but at a detailed view for trace 7.

Note the specifics of Server 2, which is exactly the same as Server 1 - B22 M3 with
48GB RAM, 2 CPUs with 6 cores each and a VIC 1240 mezzanine adapter.

Note the VIF Paths.

Note the specifics of the DCE Interfaces (pictured here for later reference when we
explore specific vNIC paths).

Note the specifics of Server 3, which is significantly different from Servers 1 and 2.

Here we have an older generation 2 Cisco B200 M2 with 1 CPU with 4 cores and
16GB RAM. There are also only 2 DCE interfaces, and there are pre-defined HBAs
and NICs. That is because this blade has an Emulex mezzanine adapter, which is a
CNA (Converged Network Adapter) with a 2-port Ethernet and 2-port FCoE HBA.

Here we see the specifics of the DCE interfaces, with one 10Gbps path leading up
to each IOM in the chassis and then up to each FI independently.

Note the specifics of the VIF Paths swinging up to each FI.

Note the specifics of both HBA interfaces.

Note the specifics of HBA Port 1, with its burned-in pWWN and nWWN addresses.

Note the specifics of both NIC interfaces.

Note the specifics of NIC Port 1, with its burned-in MAC address.

In the FIs, after we connect nxos, we can see the results of configuring our ports
as server ports.

FI-A:

Here is our FEX definition.

fex 1
pinning max-links 1
description "FEX0001"

Here are the two port-channels defined below, going to our two blade servers that
have vntag-capable VIC 1240 adapters.
interface port-channel1280
switchport mode vntag

switchport vntag

no pinning server sticky


speed 10000

interface port-channel1281
switchport mode vntag

switchport vntag

no pinning server sticky


speed 10000

These next two interfaces are on our FI and are the two FEX-facing interfaces.
interface Ethernet1/1
description S: Server
no pinning server sticky
switchport mode fex-fabric
fex associate 1 chassis-serial FOX1630GZB9 module-serial FCH16297JG2 module-slot right
no shutdown

interface Ethernet1/2
description S: Server
no pinning server sticky
switchport mode fex-fabric
fex associate 1 chassis-serial FOX1630GZB9 module-serial FCH16297JG2 module-slot right

no shutdown

The next set of interfaces begins our remote linecard or FEX (aka IOM), and we can
see that they match up to the DCE interfaces discovered earlier on the blades and
their mezzanine adapters. We can see that the first four populated interfaces (1, 3,
5, 7) are all VNTag interfaces - this is because they are running Adapter-FEX
technology. The next interface (9) is a normal trunk interface to a host - this being
the blade with the non-vntag-capable M72KR-E Emulex adapter.
interface Ethernet1/1/1
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/1

channel-group 1280

no shutdown

interface Ethernet1/1/2
no pinning server sticky

interface Ethernet1/1/3
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/1

channel-group 1280

no shutdown

interface Ethernet1/1/4
no pinning server sticky

interface Ethernet1/1/5
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/2

channel-group 1281

no shutdown

interface Ethernet1/1/6
no pinning server sticky

interface Ethernet1/1/7
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/2

channel-group 1281

no shutdown

interface Ethernet1/1/8
no pinning server sticky

interface Ethernet1/1/9
pinning server pinning-failure link-down
switchport mode trunk

fabric-interface Eth1/1

no shutdown

interface Ethernet1/1/10
no pinning server sticky

interface Ethernet1/1/11
no pinning server sticky

interface Ethernet1/1/12
no pinning server sticky

interface Ethernet1/1/13
no pinning server sticky

interface Ethernet1/1/14
no pinning server sticky

interface Ethernet1/1/15
no pinning server sticky

interface Ethernet1/1/16
no pinning server sticky

interface Ethernet1/1/17
no pinning server sticky

interface Ethernet1/1/18
no pinning server sticky

interface Ethernet1/1/19
no pinning server sticky

interface Ethernet1/1/20
no pinning server sticky

interface Ethernet1/1/21
no pinning server sticky

interface Ethernet1/1/22
no pinning server sticky

interface Ethernet1/1/23
no pinning server sticky

interface Ethernet1/1/24
no pinning server sticky

interface Ethernet1/1/25
no pinning server sticky

interface Ethernet1/1/26
no pinning server sticky

interface Ethernet1/1/27
no pinning server sticky

interface Ethernet1/1/28
no pinning server sticky

interface Ethernet1/1/29
no pinning server sticky

interface Ethernet1/1/30
no pinning server sticky

interface Ethernet1/1/31
no pinning server sticky

interface Ethernet1/1/32
no pinning server sticky

Even though the IOM is advertised as having 32 backplane ports, this extra
Ethernet interface is used for control of the IOM and Chassis and appears on both
FEXs. Technically, this interface controls the CMC (Chassis Management
Controller) via the CMS (Chassis Management Switch).
interface Ethernet1/1/33
no pinning server sticky
switchport mode trunk
switchport trunk native vlan 4044
switchport trunk allowed vlan 4044
no shutdown

FI-B:

fex 1
pinning max-links 1
description "FEX0001"

interface port-channel1282
switchport mode vntag
switchport vntag
no pinning server sticky
speed 10000

interface port-channel1283
switchport mode vntag
switchport vntag
no pinning server sticky
speed 10000

interface Ethernet1/1
description S: Server
no pinning server sticky
switchport mode fex-fabric
fex associate 1 chassis-serial FOX1630GZB9 module-serial FCH16297JG2 module-slot right
no shutdown

interface Ethernet1/2
description S: Server
no pinning server sticky
switchport mode fex-fabric
fex associate 1 chassis-serial FOX1630GZB9 module-serial FCH16297JG2 module-slot right
no shutdown

interface Ethernet1/1/1
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/1

channel-group 1283

no shutdown

interface Ethernet1/1/2
no pinning server sticky

interface Ethernet1/1/3
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/1

channel-group 1283

no shutdown

interface Ethernet1/1/4
no pinning server sticky

interface Ethernet1/1/5
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/2

channel-group 1282

no shutdown

interface Ethernet1/1/6
no pinning server sticky

interface Ethernet1/1/7
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/2

channel-group 1282

no shutdown

interface Ethernet1/1/8
no pinning server sticky

interface Ethernet1/1/9
no pinning server sticky
pinning server pinning-failure link-down
switchport mode trunk

fabric-interface Eth1/1

no shutdown

interface Ethernet1/1/10
no pinning server sticky

interface Ethernet1/1/11
no pinning server sticky

interface Ethernet1/1/12
no pinning server sticky

interface Ethernet1/1/13
no pinning server sticky

interface Ethernet1/1/14
no pinning server sticky

interface Ethernet1/1/15
no pinning server sticky

interface Ethernet1/1/16
no pinning server sticky

interface Ethernet1/1/17
no pinning server sticky

interface Ethernet1/1/18
no pinning server sticky

interface Ethernet1/1/19
no pinning server sticky

interface Ethernet1/1/20
no pinning server sticky

interface Ethernet1/1/21
no pinning server sticky

interface Ethernet1/1/22
no pinning server sticky

interface Ethernet1/1/23
no pinning server sticky

interface Ethernet1/1/24
no pinning server sticky

interface Ethernet1/1/25
no pinning server sticky

interface Ethernet1/1/26
no pinning server sticky

interface Ethernet1/1/27
no pinning server sticky

interface Ethernet1/1/28
no pinning server sticky

interface Ethernet1/1/29
no pinning server sticky

interface Ethernet1/1/30
no pinning server sticky

interface Ethernet1/1/31
no pinning server sticky

interface Ethernet1/1/32
no pinning server sticky

interface Ethernet1/1/33

no pinning server sticky


switchport mode trunk
switchport trunk native vlan 4044
switchport trunk allowed vlan 4044
no shutdown

UCS Technology Labs - UCS B-Series


Components and Management
Management IP Address Pools
Task
Create a pool of IP addresses to be used later for management of blades and service
profiles.
This pool must begin with 192.168.0.200/24 and include a total of 32 addresses.
The default gateway is to be set to 192.168.0.254.
NOTE: Going outside the range prescribed here will result in the inability to
connect to your blades via KVM later.

Configuration
In the left navigation pane, click the Admin tab, filter to Communication
Management, and click Management IP Pool (ext-mgmt). Then in the right pane,
click the General tab, and click Create Block of IP Addresses.

Fill in the information as shown.
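Note that a block of 32 addresses beginning at 192.168.0.200 ends at 192.168.0.231 (.200 + 31). The same block can be created from the UCSM CLI; a hedged sketch (the create block arguments are from-IP, to-IP, default gateway, and netmask):

```
INE-UCS-01-A# scope org /
INE-UCS-01-A /org # scope ip-pool ext-mgmt
INE-UCS-01-A /org/ip-pool # create block 192.168.0.200 192.168.0.231 192.168.0.254 255.255.255.0
INE-UCS-01-A /org/ip-pool/block* # commit-buffer
```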

Verification
In the right pane, click the IP Addresses tab and note that three management IP
addresses have already been consumed by the three blades. They are not
consumed in any particular order. This is the only behavior available in rev 2.0,
but changes in UCSM rev 2.1 allow a more orderly top-down consumption.

In the right pane, click the IP Blocks tab to note a more concise summary of the
block created.

UCS Technology Labs - UCS B-Series


Components and Management
DNS Server Management
Task
Provision the UCS system with a DNS server at the IP address 192.168.0.10.

Configuration
In the left navigation pane, click the Admin tab, filter to Communication
Management, and click DNS Management. In the right pane, click Specify DNS
Server.

Enter the IP address for the DNS server and click OK.
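The same DNS server can also be added from the UCSM CLI; a brief hedged sketch:

```
INE-UCS-01-A# scope system
INE-UCS-01-A /system # scope services
INE-UCS-01-A /system/services # create dns 192.168.0.10
INE-UCS-01-A /system/services* # commit-buffer
```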

Verification
Note the server properly provisioned.

UCS Technology Labs - UCS B-Series


Components and Management
NTP Server and Time Management
Task
Provision the UCS system with an NTP server at the IP address 192.168.0.254.
Configure the time zone for the system to America/Los_Angeles (Pacific Time).

Configuration
In the left navigation pane, click the Admin tab, and then click Timezone
Management. In the right pane, click Add NTP Server.

Enter the NTP Server value.

In the Timezone drop-down list, select America/Los_Angeles (Pacific Time).
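Both settings can also be made from the UCSM CLI; a hedged sketch (on most revs, set timezone walks you through an interactive continent/city menu rather than taking the zone name as an argument):

```
INE-UCS-01-A# scope system
INE-UCS-01-A /system # scope services
INE-UCS-01-A /system/services # create ntp-server 192.168.0.254
INE-UCS-01-A /system/services* # set timezone
  (follow the interactive prompts to choose America/Los_Angeles)
INE-UCS-01-A /system/services* # commit-buffer
```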

UCS Technology Labs - UCS B-Series LAN


Connectivity
Ethernet Switching Mode, End Host Mode, and Uplink
Configuration
Task
Provision the UCS system to run in End Host Mode (EHM), also called Network
Interface Virtualization (NIV) mode.
Provision the two ports for each FI according to the rack you have rented as Ethernet
uplink ports.
Provision the Nexus 5000 switches for your rack to allow the FI Ethernet links to
come up in trunk mode, and ensure that spanning tree allows traffic as soon as
possible.
It is crucially important to note which rack you are working on, and to
only enable those Ethernet uplink ports for your rack. This is partly to
prevent any other rack from interfering with your UCS operation, but
the primary and critical reason is to only allow a broadcast receiver
port to be chosen for the uplinks that you have configured in your
rack. Think about what would happen if you were on DC Rack 1, and
someone in Rack 3 enabled a port that connected to your rented UCS
FI, and that port was chosen as the broadcast receiver for a particular
VLAN - your FI would then drop any broadcasts it received on your
rack-connected Ethernet uplink ports. No broadcasts means no ARP,
which means IP doesn't work properly.

Configuration
In the left navigation pane, click the LAN tab, and then click the LAN root entity. At
the bottom of the right pane, click LAN Uplinks Manager.
Many tasks in UCSM can be done from multiple locations. Much of
what we are doing here could also be done from the Equipment tab
under the Fabric Interconnects. This guide explores many options for
creating and modifying objects, but it does not seek to exploit every

possible way to configure the same entity. Search them out on your
own and note all the different ways into the very same objects.

In the LAN Uplinks Manager navigator window, click the LAN Uplinks tab. We can
see that we are already running in End Host Mode, and this is where we could
change it if desired.

Expand Unconfigured Ethernet Ports, navigate to Fabric Interconnects >>


Fabric Interconnect A >> Fixed Module, select ports 3 and 4, right-click both, and

then choose Configure as Uplink Port.

When prompted, click Yes.

Notice that these have been removed from the Unconfigured Ethernet Ports and
added to the Uplink Interfaces section for Fabric A.

Pictured here, only for other racks' sake, we see configuration of the rest of the
Ethernet ports as Uplinks.
If you are working with INE's rented racks, you should only configure
the two ports noted in the topology specific to the rack you have
rented.

Perform the same actions for Fabric Interconnect B to configure only the two ports
noted in the topology specific to the rack you have rented.

On N5K1:
interface Ethernet1/12
switchport mode trunk
spanning-tree port type edge trunk
no shutdown

interface Ethernet1/13
switchport mode trunk
spanning-tree port type edge trunk
no shutdown

On N5K2:
interface Ethernet1/12
switchport mode trunk
spanning-tree port type edge trunk
no shutdown

interface Ethernet1/13
switchport mode trunk
spanning-tree port type edge trunk
no shutdown

Verification

In the FIs, after we connect nxos, we can see the results of configuring our ports
as uplinks.
FI-A:
interface Ethernet1/3
description U: Uplink
pinning border
switchport mode trunk
switchport trunk allowed vlan 1
no shutdown

interface Ethernet1/4
description U: Uplink
pinning border
switchport mode trunk
switchport trunk allowed vlan 1
no shutdown

FI-B:
interface Ethernet1/3
description U: Uplink
pinning border
switchport mode trunk
switchport trunk allowed vlan 1
no shutdown

interface Ethernet1/4
description U: Uplink
pinning border
switchport mode trunk
switchport trunk allowed vlan 1
no shutdown

On N5K1:
N5K1# sh int e1/12-13
Ethernet1/12 is up
Hardware: 1000/10000 Ethernet, address: 000d.ecda.80c8 (bia 000d.ecda.80c8)
MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA Port mode is trunk
full-duplex, 10 Gb/s, media type is 10G
Beacon is turned off
Input flow-control is off, output flow-control is off

Rate mode is dedicated


Switchport monitor is off
EtherType is 0x8100
Last link flapped 6d02h
Last clearing of "show interface" counters never
30 seconds input rate 2720 bits/sec, 0 packets/sec
30 seconds output rate 5912 bits/sec, 8 packets/sec
Load-Interval #2: 5 minute (300 seconds)
input rate 2.62 Kbps, 0 pps; output rate 5.27 Kbps, 8 pps
RX
22216826 unicast packets
22422010 input packets

190108 multicast packets


16535598506 bytes

8808104 jumbo packets


0 runts

0 giants

0 input error
0 watchdog

0 storm suppression packets

0 CRC

0 no buffer

0 short frame

0 bad etype drop

0 input with dribble

15076 broadcast packets

0 overrun

0 underrun

0 bad proto drop

0 ignored

0 if down drop

0 input discard

0 Rx pause
TX
2670331 unicast packets

15645283 multicast packets

19068999 output packets

2973609147 bytes

753385 broadcast packets

3036 jumbo packets


0 output errors

0 collision

0 lost carrier

0 no carrier

0 deferred

0 late collision

0 babble 0 output discard

0 Tx pause
4 interface resets
Ethernet1/13 is up
Hardware: 1000/10000 Ethernet, address: 000d.ecda.80c9 (bia 000d.ecda.80c9)
MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA Port mode is trunk

full-duplex, 10 Gb/s, media type is 10G


Beacon is turned off
Input flow-control is off, output flow-control is off
Rate mode is dedicated
Switchport monitor is off
EtherType is 0x8100
Last link flapped 6d02h
Last clearing of "show interface" counters never
30 seconds input rate 96 bits/sec, 0 packets/sec
30 seconds output rate 200 bits/sec, 0 packets/sec
Load-Interval #2: 5 minute (300 seconds)
input rate 184 bps, 0 pps; output rate 272 bps, 0 pps
RX

55712156 unicast packets

159371 multicast packets

56126494 input packets

79204358258 bytes

50502509 jumbo packets

0 storm suppression packets

0 runts

0 giants

0 input error
0 watchdog

0 CRC

0 no buffer

0 short frame

0 bad etype drop

0 input with dribble

254967 broadcast packets

0 overrun

0 underrun

0 bad proto drop

0 ignored

0 if down drop

0 input discard

0 Rx pause
TX
60508506 unicast packets
61077258 output packets

297969 multicast packets

270783 broadcast packets

40403112174 bytes

13050995 jumbo packets


0 output errors

0 collision

0 lost carrier

0 no carrier

0 deferred

0 late collision

0 babble 0 output discard

0 Tx pause
4 interface resets

N5K1#

On N5K2:
N5K2# sh int e1/12-13
Ethernet1/12 is up
Hardware: 1000/10000 Ethernet, address: 000d.ecda.80ca (bia 000d.ecda.80ca)
MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA Port mode is trunk
full-duplex, 10 Gb/s, media type is 10G
Beacon is turned off
Input flow-control is off, output flow-control is off
Rate mode is dedicated
Switchport monitor is off
EtherType is 0x8100
Last link flapped 5d07h
Last clearing of "show interface" counters never
30 seconds input rate 136 bits/sec, 0 packets/sec
30 seconds output rate 8632 bits/sec, 8 packets/sec
Load-Interval #2: 5 minute (300 seconds)
input rate 120 bps, 0 pps; output rate 8.38 Kbps, 8 pps
RX
839 unicast packets

93623 multicast packets

94878 input packets

25796328 bytes

139 jumbo packets

0 storm suppression packets

0 runts

0 CRC

416 broadcast packets

0 giants

0 input error

0 no buffer

0 short frame

0 overrun

0 underrun

0 ignored

0 watchdog

0 bad etype drop

0 input with dribble

0 bad proto drop

0 if down drop

0 input discard

0 Rx pause
TX
183327 unicast packets

15708436 multicast packets

1293425 broadcast packets

17185188 output packets

1361118088 bytes

0 jumbo packets
0 output errors

0 collision

0 lost carrier

0 no carrier

0 deferred

0 late collision

0 babble 0 output discard

0 Tx pause
7 interface resets
Ethernet1/13 is up
Hardware: 1000/10000 Ethernet, address: 000d.ecda.80cb (bia 000d.ecda.80cb)
MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA Port mode is trunk

full-duplex, 10 Gb/s, media type is 10G


Beacon is turned off
Input flow-control is off, output flow-control is off
Rate mode is dedicated
Switchport monitor is off
EtherType is 0x8100
Last link flapped 5d07h
Last clearing of "show interface" counters never
30 seconds input rate 416 bits/sec, 0 packets/sec
30 seconds output rate 8320 bits/sec, 8 packets/sec
Load-Interval #2: 5 minute (300 seconds)
input rate 216 bps, 0 pps; output rate 8.06 Kbps, 8 pps
RX
19598 unicast packets

108910 multicast packets

132343 input packets


14152 jumbo packets
0 runts

0 giants

0 input error
0 watchdog

3835 broadcast packets

53417205 bytes
0 storm suppression packets

0 CRC

0 no buffer

0 short frame

0 bad etype drop

0 input with dribble

0 overrun

0 underrun

0 bad proto drop

0 ignored

0 if down drop

0 input discard

0 Rx pause
TX
195675 unicast packets

15677605 multicast packets

1290005 broadcast packets

17163285 output packets

1360038560 bytes

0 jumbo packets
0 output errors
0 lost carrier

0 collision
0 no carrier

0 deferred

0 late collision

0 babble 0 output discard

0 Tx pause
7 interface resets

N5K2#

UCS Technology Labs - UCS B-Series LAN


Connectivity
MAC Address Aging
Task
Change the UCS system so that it ages out MAC addresses 100 seconds less than
the default time.

Configuration
In the left navigation pane, click the LAN tab and then click the LAN root entity. At
the bottom of the right pane, click LAN Uplinks Manager.

On the Global Policies tab, select Other for the Aging Time, enter the custom time
of 14,400 seconds, and click OK.
In Catalyst 6500 'Best Practices' guides for the Data Center, Cisco
has recommended extending the MAC address aging timer to 4 hours,
or 14,400 seconds. The UCS system has, by default, a MAC address
aging timer of 14,500 seconds - 100 seconds longer, to keep MAC
addresses populated - because the entire UCS system is purposed to
represent end hosts and is not really meant to look like just another
switch. What we are doing here is not a best practice; it is merely
meant to get you to think about why that timer is there.

Note the format has changed to 4 hours.
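The aging timer can also be set from the UCSM CLI; the sketch below assumes the set mac-aging arguments are days hours minutes seconds - verify the exact format with set mac-aging ? on your rev:

```
INE-UCS-01-A# scope eth-uplink
INE-UCS-01-A /eth-uplink # set mac-aging 0 4 0 0
INE-UCS-01-A /eth-uplink* # commit-buffer
```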

UCS Technology Labs - UCS B-Series LAN


Connectivity
VLANs
Task
Create the VLANs listed in the table below all at once.
Each VLAN should be named with the prefix "DC-". (Examples: DC-110, DC-111)
Change VLAN 124 and 125 so that they are used as PVLANs in the manner listed in
the Purpose column.
The Purpose column explains the use of each VLAN in the coming
tasks. With the exception of VLANs 124 and 125, this information is
for reference only - you won't be doing anything with this information
for this task.

VLAN   Subnet           Fabric   Purpose
1      10.0.1.0/24      Both     Default / vCenter
110    10.0.110.0/24    Both     VM-WIN2K8-WWW
111    10.0.111.0/24    Both     VM-WIN2K8-PVLANPrimary
112    10.0.112.0/24    Both     VM-WIN2K8-PVLANComm1
113    10.0.113.0/24    Both     VM-WIN2K8-PVLANComm2
114    10.0.114.0/24    Both     BM-WIN2K8 vNIC1
115    10.0.115.0/24    Both     VMKernel
116    10.0.116.0/24    Both     vMotion
117    10.0.117.0/24    Both     iSCSI
118    10.0.118.0/24    Both     ACE External Clients
119    10.0.119.0/24    Both     Unallocated
120    10.0.120.0/24    Both     N1Kv-Control / Packet
121    10.0.121.0/24    Both     N1Kv-Management
122    10.0.122.0/24    Both     Unallocated
123    10.0.123.0/24    Both     Disjointed L2 Example
124    10.0.124.0/24    Both     UCS Private - Isolated with VL125 as Primary
125    10.0.125.0/24    Both     UCS Private - Primary

Configuration
In the left navigation pane, click the LAN tab, and then click the LAN root entity. At
the bottom of the right pane, click LAN Uplinks Manager.

On the VLANs tab, click the Dual Mode tab, and then click the + sign on the right to
add new VLANs.

Enter DC- for the prefix, select Common/Global, enter 110-125 for the VLAN IDs
range, select None for the Sharing Type, and click OK.

Note the successful creation of all VLANs and the note that traffic from all VLANs
will flow on all uplinks northbound from both FIs.

On the previous screen, select VLAN DC-125 and click the Modify button on the
right.

Modify the Sharing Type for that VLAN to make it a Private VLAN type of Primary,
and then click OK.

On the previous screen, select VLAN DC-124 and click the Modify button on the
right.

Modify the Sharing Type for that VLAN to make it a Private VLAN type of Isolated,
change its Primary VLAN to DC-125, and click OK.

Verification
On the previous screen, select VLAN DC-125 and click the Modify button on the
right.

Notice that it now has a Secondary VLAN of DC-124 associated with it.

Review the summary of all VLANs created.

In the FIs, after we connect nxos, we can also see the results.

FI-A

vlan 110-123
vlan 124
private-vlan isolated
vlan 125
private-vlan primary

private-vlan association 124

interface Ethernet1/3
description U: Uplink
pinning border
switchport mode trunk
switchport trunk allowed vlan 1,110-125
no shutdown

interface Ethernet1/4
description U: Uplink
pinning border
switchport mode trunk
switchport trunk allowed vlan 1,110-125
no shutdown

FI-B

vlan 110-123
vlan 124
private-vlan isolated
vlan 125
private-vlan primary
private-vlan association 124

interface Ethernet1/3
description U: Uplink
pinning border
switchport mode trunk
switchport trunk allowed vlan 1,110-125
no shutdown

interface Ethernet1/4
description U: Uplink
pinning border
switchport mode trunk
switchport trunk allowed vlan 1,110-125

no shutdown

UCS Technology Labs - UCS B-Series LAN Connectivity
Uplink Port Channels
Task
On Fabric A only, combine the two Ethernet uplink ports associated with your rack
into a port channel, and make it display in NXOS as "interface port-channel1".
Ensure that communication works to your northbound Nexus 5K from FI1.

Configuration
In the left navigation pane, click the LAN tab, and then click the LAN root entity. At
the bottom of the right pane, click LAN Uplinks Manager.

On the LAN Uplinks tab, click Create Port Channel.

Select Fabric A.

To meet the requirement, enter 1 for the port channel ID and, optionally, give it a
name for easy reference later.

Select ports 3 and 4 (be careful not to select ports 1 and 2 because they are Server
ports), and click >> to move them to the port channel.

They should appear in the right column. Click Finish.

Right-click the newly created port channel and select Enable Port Channel.

Click Yes.

N5K1:

feature lacp

interface port-channel1
switchport mode trunk
spanning-tree port type edge trunk
speed 10000

interface Ethernet1/1
switchport mode trunk
spanning-tree port type edge trunk
channel-group 1 mode active

interface Ethernet1/2
switchport mode trunk
spanning-tree port type edge trunk
channel-group 1 mode active

Verification
Note the port status and port channel status in the LAN Uplinks Manager and click
OK.

Note the port status and port channel status in the main interface as well.

In the FIs, after we connect nxos, we can see the results of configuring our
ports as uplinks.

FI-A:
interface port-channel1
description U: Uplink
switchport mode trunk
pinning border
switchport trunk allowed vlan 1,110-125

speed 10000

interface Ethernet1/3
description U: Uplink
pinning border
switchport mode trunk
switchport trunk allowed vlan 1,110-125
channel-group 1 mode active
no shutdown

interface Ethernet1/4
description U: Uplink
pinning border
switchport mode trunk
switchport trunk allowed vlan 1,110-125
channel-group 1 mode active
no shutdown

INE-UCS-01-A(nxos)# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/3(P)    Eth1/4(P)
1280  Po1280(SU)  Eth      NONE      Eth1/1/1(P)  Eth1/1/3(P)
1281  Po1281(SD)  Eth      NONE      Eth1/1/5(D)  Eth1/1/7(D)

N5K1# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/1(P)    Eth1/2(P)

UCS Technology Labs - UCS B-Series LAN Connectivity
LAN Pin Groups
Task
Make the previously created port channel on Fabric A a dedicated Pin Group toward
which we can specifically direct traffic.

Configuration
In the left navigation pane, click the LAN tab, and then click on the LAN root entity.
At the bottom of the right pane, click LAN Uplinks Manager.

At the bottom, click Create Pin Group.

Provide a name that is intuitive. Then, under Targets, select Fabric A, click the
Interface drop-down list, expand Fabric A >> Port Channels, and select Port-Channel 1 (Fabric A). Click OK.

In many real-world designs, you would provide interfaces on both
Fabric A and Fabric B; however, because this is a lab, we will not, so
that we can continue to observe the effects of dynamic pinning on
Fabric B later. Also, in real-world designs, it might not make much
sense to make this port channel a LAN Pin Group, because it is the
only uplink interface on this Fabric; if we pointed a vNIC to Fabric A, it
would traverse this link northbound anyway. This task is merely for
illustration and practice creating Pin Groups.

Verification
You should see the LAN Pin Group appear in the right pane. Click OK.

You should see the LAN Pin Group appear in the left navigation pane under the
heading of the same name.

UCS Technology Labs - UCS B-Series LAN Connectivity
Disjointed L2 Networks
Task
Ensure that traffic destined to or coming from VLAN 123 only traverses one of the
uplinks for both Fabrics A and B that you are not using on your rack. (Example: For
Rack 1, choose interface Ethernet 1/9 on both Fabric A and B.)

Configuration
In the left navigation pane, click the LAN tab, and then click the LAN root entity. At
the bottom of the right pane, click LAN Uplinks Manager.

On the VLANs tab, click the VLAN Manager tab, and then click Fabric A. In the left
pane, expand Uplink Interfaces >> Fabric A, click an unused interface such as
Eth Interface 1/9, click VLAN DC-123 in the right pane, and then click Add to VLAN
at the bottom.

The message states that traffic for this VLAN will flow only on the selected uplinks
and no others. Click OK.

Do the same for Fabric B. Click Cancel, because this was simply a demonstration;
we will not be using disjointed L2 networks for the remainder of this lab. We will use
them in full, 8-hour mock labs.

The purpose of this task is to understand the way in which the FIs
working in NIV mode (also called End Host Mode) deal with VLANs,
and more specifically with Broadcasts and Multicasts, and how a
single interface is chosen for each VLAN to allow those packet types
into the FIs coming from the northbound switches. The single
interface chosen per VLAN is called the Broadcast Receiver port, and
it can be found in NXOS like this:

INE-UCS-01-B(nxos)# show platform software enm internal info vlandb id 110

vlan_id 110
-------------
Designated receiver: Eth1/3
Membership:
Eth1/3  Eth1/4

If you had manually isolated traffic for a certain VLAN to one
particular interface without specifying it here, and that interface was
not chosen as the broadcast receiver, you would never send or receive
broadcasts on that port. That would break ARP, which in turn would
break all further communication. And because the broadcast receiver
is dynamically chosen after every reboot, there would be no way to
guarantee that it would ever work properly.

UCS Technology Labs - UCS B-Series LAN Connectivity
Global LAN and FCoE QoS
Task
Enable system QoS classes for CoS 5, 4, 3, 2, and Best Effort.
Make CoS 3 the only non-drop class.
Utilize round robin servicing to allow both CoS 2 and 3 marked traffic to get 30%,
CoS 4 to get 20%, and both CoS 5 and Best Effort traffic to get 10% during times of
congestion.

Configuration
In the left navigation pane, click the LAN tab and navigate to LAN >> LAN Cloud
>> QoS System Class. In the right pane, click the General tab and note the
defaults. We see that only two QoS system classes are enabled by default (BE and
FC), and that as far as round robin servicing goes, they are weighted evenly at 5
(this is a ratio, and we see 50% to the right of these weights). Also all MTUs are set
to normal, or 1500 bytes. We also see that the FC class is a No-Drop class, and that
the Platinum class would also be No-Drop, if it were enabled.

Enable the Platinum, Gold, and Silver classes, and change their weights to Platinum
= 1, Gold = 2, Silver = 3, BE = 1, and Fibre Channel = 3. Also change the Platinum
class to a Packet Drop class.

Verification
Note that the weights have been updated to reflect their proper RR servicing
percentage. Also note that the weights that were entered as 1 now show best-effort,
but that the percentage still correctly shows 10%. This is because 1 = best-effort in
UCS; however, the actual weight percentage could be driven much lower if we
enabled other classes or simply allocated higher numbers to existing classes.
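The weight-to-percentage math UCSM applies here can be sketched as follows (the class keys are ours, purely illustrative):

```python
# UCS QoS weights are a ratio: each class's share of bandwidth during
# congestion is weight / sum(weights). Weights entered in this task
# (a weight of 1 displays as "best-effort" in UCSM):
weights = {"platinum": 1, "gold": 2, "silver": 3, "best-effort": 1, "fc": 3}

total = sum(weights.values())  # 10
percent = {cls: round(100 * w / total) for cls, w in weights.items()}
```

Enabling more classes (or raising existing weights) grows the denominator, which is why the weight-1 classes could be driven well below 10%.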

We can also verify what we've done in NXOS on either FI (both should reflect the
same values).

class-map type qos class-fcoe


class-map type qos match-all class-gold
match cos 4
class-map type qos match-all class-silver
match cos 2
class-map type qos match-all class-platinum
match cos 5

class-map type queuing class-fcoe


match qos-group 1
class-map type queuing class-gold
match qos-group 3
class-map type queuing class-silver
match qos-group 4
class-map type queuing class-platinum
match qos-group 2
class-map type queuing class-all-flood
match qos-group 2
class-map type queuing class-ip-multicast
match qos-group 2

policy-map type qos system_qos_policy


class class-platinum
set qos-group 2
class class-silver
set qos-group 4
class class-gold
set qos-group 3
class class-fcoe
set qos-group 1

policy-map type queuing system_q_in_policy


class type queuing class-fcoe
bandwidth percent 30
class type queuing class-platinum
bandwidth percent 10
class type queuing class-gold
bandwidth percent 20
class type queuing class-silver
bandwidth percent 30
class type queuing class-default
bandwidth percent 10

policy-map type queuing system_q_out_policy


class type queuing class-fcoe
bandwidth percent 30

class type queuing class-platinum


bandwidth percent 10
class type queuing class-gold
bandwidth percent 20
class type queuing class-silver
bandwidth percent 30
class type queuing class-default
bandwidth percent 10

class-map type network-qos class-fcoe


match qos-group 1
class-map type network-qos class-gold
match qos-group 3
class-map type network-qos class-silver
match qos-group 4
class-map type network-qos class-platinum
match qos-group 2
class-map type network-qos class-all-flood
match qos-group 2
class-map type network-qos class-ip-multicast
match qos-group 2

policy-map type network-qos system_nq_policy


class type network-qos class-platinum
class type network-qos class-silver
class type network-qos class-gold
class type network-qos class-fcoe
pause no-drop
mtu 2158
class type network-qos class-default

system qos
service-policy type qos input system_qos_policy
service-policy type queuing input system_q_in_policy
service-policy type queuing output system_q_out_policy
service-policy type network-qos system_nq_policy

UCS Technology Labs - UCS B-Series LAN Connectivity
LAN and FCoE QoS Policies
Task
Create a series of QoS policies that can later be applied to vNICs or vHBAs based
on the following table.

Name               CoS               Allow Host to Override CoS   Rate-Limit
Limit-to-1Gb-BW    Best Effort (1)   No                           1 Gbps
Limit-to-20Gb-BW   Best Effort (1)   No                           20 Gbps
CoS-5_10per-BW     5                 No                           No Limit
CoS-4_20per-BW     4                 No                           No Limit
CoS-2_30per-BW     2                 No                           No Limit
Host-Control-BE    Best Effort       YES                          No Limit
FC-for-vHBAs       3 (Fc)            No                           No Limit

Configuration

In the left navigation pane, click the LAN tab, navigate to LAN >> Policies >> root,
right-click QoS Policies, and then click Create QoS Policy.

Enter the information to create the policy as shown. Recall that Priority Best Effort is
CoS 1. Note that the Rate is displayed in Kbps, so we must enter 1,000,000 kbps.

Again, right-click QoS Policies and click Create QoS Policy.

Enter the information to create the policy as shown. Recall that Priority Best Effort is
CoS 1. Note that the Rate is displayed in Kbps, so we must enter 20,000,000 kbps.
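Because UCSM takes these rates in kbps, the Gbps-to-kbps scaling used for both rate-limit policies can be sketched as (the helper name is ours):

```python
# UCSM rate limits are entered in kbps, so Gbps values scale by 10**6.
def gbps_to_kbps(gbps):
    return gbps * 1_000_000

one_gb = gbps_to_kbps(1)      # value entered for Limit-to-1Gb-BW
twenty_gb = gbps_to_kbps(20)  # value entered for Limit-to-20Gb-BW
```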

Repeat to create the additional policies. Enter the information to create the policy as
shown. Recall that Priority for CoS 5 is named Platinum.

Enter the information to create the policy as shown. Recall that Priority for CoS 4 is
named Gold.

Enter the information to create the policy as shown. Recall that Priority for CoS 2 is
named Silver.

Enter the information to create the policy as shown. Host Control = Full means that

the FI will trust whatever comes from the host running on the blade.

Enter the information to create the policy as shown. Recall that Priority for CoS 3 is
named Fc (aka FCoE).

Verification
In the main interface, review the policies that have been created.

We can also view these policies in the form of Policy Maps in NXOS. We will see
these applied to Ethernet or Vethernet interfaces in future labs. Note that if this were
more than a lab, we would set the burst rate appropriately as well.

policy-map type queuing org-root/ep-qos-FC-for-vHBAs


class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-CoS-2_30per-BW
class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-CoS-4_20per-BW
class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-CoS-5_10per-BW
class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-Host-Control-BE
class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-Limit-to-1Gb-BW
class type queuing class-default
bandwidth percent 100
shape 1000000 kbps 10240
policy-map type queuing org-root/ep-qos-Limit-to-20Gb-BW
class type queuing class-default
bandwidth percent 100
shape 20000000 kbps 10240

UCS Technology Labs - UCS B-Series LAN Connectivity
MAC Address Pools
Task
Create MAC Address Pools for the CUST1 and CUST2 orgs as follows:
CUST1 org should have a primary pool, a backup pool, and an overflow for
MAC addresses.
The primary pool for CUST1 should be named CUST1-MAC-Pool1 and
should pull MAC addresses from a range of 00:25:B5:01:01:00 - 00:25:B5:01:01:1F.
The secondary pool for CUST1 should pull MAC addresses from a range of
00:25:B5:01:02:00 - 00:25:B5:01:02:1F.
CUST2 org should have a primary pool, a backup pool, and an overflow for
MAC addresses.
The primary pool for CUST2 should be named CUST2-MAC-Pool1 and
should pull MAC addresses from a range of 00:25:B5:02:01:00 - 00:25:B5:02:01:1F.
The secondary pool for CUST2 should pull MAC addresses from a range of
00:25:B5:02:02:00 - 00:25:B5:02:02:1F.
Both CUST1 and CUST2 should have the same overflow pool, and they
should both pull MAC addresses from a range of 00:25:B5:03:01:00 - 00:25:B5:03:01:1F.

Configuration
In the left navigation pane, click the LAN tab, navigate to LAN >> Pools >> root >>
MAC Pools >> Sub-Organizations >> CUST1, and note CUST2 as well.

Right-click MAC Pools under the CUST1 org and click Create MAC Pool.

Assign the name CUST1-MAC-Pool1 as shown and click Next.

Click Add.

Enter the base MAC address 00:25:B5:01:01:00 and a pool size of 32 and click OK.
(1F = 31, plus the base address of 00 brings us to 32)

Click Finish.
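The base-plus-size arithmetic for a pool block can be sketched as (a quick check, not a UCSM API):

```python
# A UCS MAC pool block is a base address plus a size; the last address
# in the block is base + size - 1 (hex 1F = decimal 31).
base = "00:25:B5:01:01:00"
size = 32

base_int = int(base.replace(":", ""), 16)
last_int = base_int + size - 1
last = ":".join(f"{last_int:012X}"[i:i + 2] for i in range(0, 12, 2))
```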

Under the root org, right-click MAC Pools and click Create MAC Pool.

Create a new pool with exactly the same name - CUST1-MAC-Pool1.


vNICs will attempt to pull MAC addresses from the primary pool that
you assign them to, but if they find that the primary pool is
exhausted, they will then search for a pool of the exact same name in
their parent org (in this case, root). If that pool is also empty, they
will then search the default pool in their parent org.
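The lookup order described above can be sketched as (the function and data layout are ours, purely illustrative):

```python
# Resolution order for a vNIC's MAC pool:
#   1. the pool assigned in the vNIC's own org,
#   2. a pool with the exact same name in the parent org,
#   3. the parent org's "default" pool.
def resolve_mac(pools, org, pool_name, parent="root"):
    """pools maps (org, name) -> list of free MACs; pops one if found."""
    for key in ((org, pool_name), (parent, pool_name), (parent, "default")):
        if pools.get(key):
            return pools[key].pop(0)
    return None  # every candidate pool is exhausted

pools = {
    ("CUST1", "CUST1-MAC-Pool1"): [],            # primary: exhausted
    ("root", "CUST1-MAC-Pool1"): [],             # backup: exhausted
    ("root", "default"): ["00:25:B5:03:01:00"],  # shared overflow
}
mac = resolve_mac(pools, "CUST1", "CUST1-MAC-Pool1")
```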

Enter the base MAC address 00:25:B5:01:02:00 and a pool size of 32 and click OK.

Click Finish.

Right-click MAC Pools under the CUST2 org and click Create MAC Pool.

Assign the name CUST2-MAC-Pool1 as shown and click Next.

Click Add.

Enter the base MAC address 00:25:B5:02:01:00 and a pool size of 32 and click OK.

Click Finish.

Under the root org, right-click MAC Pools and click Create MAC Pool.

Create a new pool with exactly the same name - CUST2-MAC-Pool1.

Click Add.

Enter the base MAC address 00:25:B5:02:02:00 and a pool size of 32 and click OK.

Click Finish.

Finally, right-click MAC-POOL default under the root org and click Create a Block
of MAC Addresses.

Add the overflow block that both orgs will share of 00:25:B5:03:01:00 and a pool
size of 32 and click OK.

Verification
Note all of the pools that have been created.

UCS Technology Labs - UCS B-Series LAN Connectivity
LAN Policies - Network Control
Task
Create two Network Control policies.
The first policy will be used for VMWare ESXi:
Allow CDP traffic.
Register MAC addresses for every VLAN where hosts or guests are
assigned.
Allow MAC addresses to be spoofed.
Don't warn, but actually show that the link has failed if the upstream pinned
link goes down.
The second policy will be used for a bare metal Win2K8:
Allow CDP traffic.
Register MAC addresses only for the native VLAN.
Do not allow MAC addresses to be spoofed.
Don't warn, but actually show that the link has failed if the upstream pinned
link goes down.

Configuration
In the left navigation pane, click the LAN tab, navigate to LAN >> Policies >> root,
right-click Network Control Policies, and click Create Network Control Policy.

Assign an intuitive name. Enable CDP. For MAC Register Mode, select All Host
Vlans. For action on Uplink Fail, select Link Down. For MAC Security - Forge,
select Allow. (This allows spoofed MAC addresses, which is exactly what VMware
must do because it represents many more guests behind its host hypervisor
engine.) Click OK.

Right-click Network Control Policies and click Create Network Control Policy.

Assign an intuitive name. Enable CDP. For MAC Register Mode, select Only Native
Vlan. For action on Uplink Fail, select Link Down. For MAC Security - Forge, select
to Deny. Click OK.

Verification

Note the newly created policies.

UCS Technology Labs - UCS B-Series LAN Connectivity
LAN Policies - Flow Control
Task
Create a policy that will allow a port to later utilize 802.1Qbb PFC for both Tx and Rx
traffic.

Configuration
In the left navigation pane, click the LAN tab, navigate to LAN >> Policies >> root,
right-click Flow Control Policies, and then click Create Flow Control Policy.

Assign an intuitive name and turn on all values for Priority, Receive, and Send.

UCS Technology Labs - UCS B-Series LAN Connectivity
Monitoring LAN Connections and Exposing Hot Spots
Task
Ensure that statistics are collected for all networking port types at half of their normal
rate.
Do not change statistics collection rates for any unimplemented or non-networking
collection policy.
Do change reporting intervals for all implemented statistics (networking or non-networking) to their lowest (fastest reporting) values.
Create a monitoring statistic that provides alerts when the southbound traffic from the
FIs (FI-to-IOM) goes above a rate of 25Gbps, and list it as a 'Critical' alarm.
Create a monitoring statistic that provides alerts when the northbound traffic to or
from the FIs (FI-to-LAN and LAN-to-FI) goes above a rate of 15Gbps, and list it as a
'Major' alarm.
Create a monitoring statistic called "vNIC-1Gb" (to be used later in Service Profiles)
that provides alerts when traffic from a vNIC goes above a rate of 1Gbps, and list it
as a 'Warning' alarm.

Configuration
The Fabric Interconnect collects data at every collection interval, but it
doesn't send the values to the UCSM until the reporting interval. For
example, if the collection interval is set to 30 seconds and the
reporting interval is set to 2 minutes, the fabric interconnect collects
four samples in that 2-minute reporting interval. Instead of sending
four statistics to Cisco UCS Manager, it sends only the most recent
recording along with the average, minimum, and maximum values for
the entire group.
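What a 2-minute reporting interval built from 30-second collections would carry can be sketched as (sample values are invented):

```python
# With a 30 s collection interval and a 2 min reporting interval, four
# samples accumulate; only the most recent value plus the avg/min/max
# of the group get sent to UCSM at each reporting interval.
samples = [120, 340, 290, 210]  # four 30-second collections (invented)

report = {
    "last": samples[-1],
    "avg": sum(samples) / len(samples),
    "min": min(samples),
    "max": max(samples),
}
```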
The Collection Policy for "Host" is currently unimplemented in v2.0 (it
is meant for VMs at a later date), so we should not change collection
policies for this one. Also, a "Chassis" does not have any networking

components (FEXs/IOMs do, but not the chassis itself), so we will not
change the collection rate here, but we will change the reporting
statistic. For the rest, we will change the collection statistics to half of
their default rate, which is 60 seconds (so we will change them to 30
seconds each), and the reporting interval to its lowest or quickest
value, which is 2 minutes.
It is very important to understand how we derive the values that we
will be inputting in our threshold policies. First, we are dealing with
thresholds, which implicitly means that they must be crossed. Second,
the way these are implemented in UCSM may seem a bit strange: We
have metrics such as "Tx/Rx Total Bytes," but those won't do us much
good because they are metrics that will be crossed once, and then
stay above that crossed value, forever increasing - or at least until the
entire system is restarted (which hopefully is never). So we will work
mostly with "Tx/Rx Total Bytes Delta." Next, these are in bytes, not
bits, so we must convert. "Delta" refers to the delta of the collection
period or sampling rate, which we will be changing to the minimum
value of 30 seconds; we must determine the rate in bits per second
that we want to measure against, divide that by 8 to get bytes per
second, and multiply that times 30 to get bytes per 30 seconds - that
will be our value.
Delta = bps / 8 (for bytes/sec) * 30 (sampling rate)
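The same delta formula, applied to each threshold in this task, can be sketched as (the helper name is ours):

```python
# Delta threshold = bps / 8 (bytes per second) * 30 (collection interval).
def delta_bytes(bps, interval_secs=30):
    return bps // 8 * interval_secs

fi_to_iom = delta_bytes(25_000_000_000)  # Critical at 25 Gbps southbound
fi_to_lan = delta_bytes(15_000_000_000)  # Major at 15 Gbps northbound
vnic_1gb = delta_bytes(1_000_000_000)    # Warning at 1 Gbps per vNIC
```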
In the left navigation pane, click the Admin tab and filter or navigate to Stats
Management.

Under Stats Management, click Collection Policy Adapter. In the right pane,
change the Collection Interval to 30 Seconds and the Reporting Interval to 2
Minutes (their lowest respective values) and click Save Changes.

Under Stats Management, click Collection Policy Chassis. In the right pane, do
not change the Collection Interval, but do change the Reporting Interval to 2 Minutes
because the chassis has no networking ports (FEX does, chassis does not), and
click Save Changes.

Under Stats Management, click Collection Policy FEX. In the right pane, change
the Collection Interval to 30 Seconds and the Reporting Interval to 2 Minutes, and
click Save Changes.

Under Stats Management, skip Collection Policy Host and click Collection Policy
Port. In the right pane, change the Collection Interval to 30 Seconds and the
Reporting Interval to 2 Minutes, and click Save Changes. A "port" is any port in the
actual FIs.

Under Stats Management, click Collection Policy Server. In the right pane, do not
change the Collection Interval, but do change the Reporting Interval to 2 Minutes
because the server has no networking ports (Adapter does, server itself does not),
and click Save Changes.

In the left pane, navigate to Fabric >> Internal LAN, right-click thr-policy-default,
and click Create Threshold Class. Internal LAN deals with southbound traffic to
and from the FI-to-IOM.
(Note that this task could also be performed from the LAN tab, which you see
outlined and highlighted.)

Choose Ether Tx Stats.

Click Add.

From the drop-down menu, choose Ether Tx Stats Total Bytes Delta, and ensure
that the delta is measured against a normal value of 0.0 bytes. Now for the
calculation: The task requires that FI southbound traffic to the IOM trigger a Critical
alarm when it passes 25Gbps, or 25,000,000,000. Divide that by 8 to get
3,125,000,000. Multiply that by the collection interval of 30 to get 93,750,000,000 -

the value we'll enter in the Critical Up column. Remember that thresholds must be
exceeded to be triggered, and we must fall back below them to stop triggering - so
choose some slightly lower value for the Down column.
(Remember that the reporting interval is just what gets reported (which is the last
collection interval plus avg, min, max, etc.), but we still measure our deltas against
the collection interval, not the reporting interval.)

Click Finish.

We see our class under the default threshold defined.

In the left pane, navigate to Fabric >> LAN Cloud, right-click thr-policy-default,
and click Create Threshold Class. LAN Cloud deals with northbound traffic to and
from the FI-to-LAN.

Choose Ether Tx Stats.

Click Add.

From the drop-down list, choose Ether Tx Stats Total Bytes Delta, and ensure that
the delta is measured against a normal value of 0.0 bytes. The task requires FI
northbound traffic to the LAN to trigger a Major alarm when it passes 15Gbps, so
(15,000,000,000 / 8) *30 = 56,250,000,000 is our value for the Up column. Choose
some slightly lower value for the Down column.
(Remember that if we had left our default collection interval set at 1 minute, we
would be multiplying our values by 60 rather than 30.)

Click Finish.

We are not done yet; the task requires that we measure both to and from the
northbound LAN. Right-click thr-policy-default under LAN Cloud, and click
Create Threshold Class.

Choose Ether Rx Stats.

Click Add.

From the drop-down list, choose Ether Rx Stats Total Bytes Delta, and ensure that
the delta is measured against a normal value of 0.0 bytes. Enter the same value as
before (56,250,000,000) for the Up column and a slightly lower value for the Down
column.

Click Finish.

Notice that both classes under the default threshold are now defined.

Finally, we'll create a policy for our vNICs. In the left pane, right-click the root org
and click Create Threshold Policy. vNIC policies are applied to vNICs that are in
Service Profiles, which must belong to orgs, so it makes sense that this is where we
will find and create these policies.

Give it a name and click Next.

Click Add.

For Stat Class, choose Vnic Stats.

Click Add.

From the drop-down list, choose Vnic Stats Bytes Tx Delta, and ensure that the
delta is measured against a normal value of 0.0 bytes. (1,000,000,000 / 8) * 30 =
3,750,000,000 is our value for the Up column, and a slightly lower value should be
entered for the Down column.

Click Finish.

Click Finish again.

Verification
We can see the values entered for this vNIC.

We can also see the values for any of the policies that we created. We will have to
generate a lot of traffic later to really test these.

UCS Technology Labs - UCS B-Series LAN Connectivity
vNIC Templates
Task
Create a series of vNIC templates, and configure them using information from the
following table:

Name              Fabric ID        Target    Template Type   VLANs                MTU    MAC Pool          QoS Policy
ESXi-VMKernel-A   A w/o Failover   Adapter   Initial         115 only, no native  1500   CUST1-MAC-Pool1   Limit-to-1Gb-BW
ESXi-VMKernel-B   B w/o Failover   Adapter   Initial         115 only, no native  1500   CUST1-MAC-Pool1   Limit-to-1Gb-BW
ESXi-vMotion-A    A w/ Failover    Adapter   Initial         116 only, no native  1500   CUST1-MAC-Pool1   CoS-2_30per-BW
ESXi-VM-FabA      A w/ Failover    Adapter   Updating        All, no native       1500   CUST1-MAC-Pool1   Host-Control-BE
ESXi-VM-FabB      B w/ Failover    Adapter   Updating        All, no native       1500   CUST1-MAC-Pool1   Host-Control-BE
BM-Win2K8-FabA    A w/ Failover    Adapter   Initial         All, no native       1500   CUST2-MAC-Pool1   None
ESXi-VMFEX-FabB   B w/ Failover    Adapter   Initial         All, no native       1500   CUST2-MAC-Pool1   None

Configuration
In the left navigation pane, on the LAN tab, expand Policies >> root, right-click
vNIC Templates, and click Create vNIC Template.

Assign all fields as prescribed in the task for the first template named ESXi-VMKernel-A and click OK.

In the left navigation pane, right-click vNIC Templates and click Create vNIC
Template. (We will omit this step from each of the following template creations, but
note that it must be performed before each new template can be created.)

Assign all fields as prescribed in the task for the template named ESXi-VMKernel-B
and click OK.

Assign all fields as prescribed in the task for the template named ESXi-vMotion-A
and click OK.

Assign all fields as prescribed in the task for the template named ESXi-VM-FabA
and click OK.

Assign all fields as prescribed in the task for the template named ESXi-VM-FabB
and click OK.

Assign all fields as prescribed in the task for the template named BM-Win2K8-FabA
and click OK.

Assign all fields as prescribed in the task for the template named ESXi-VMFEX-FabB and click OK.

UCS Technology Labs - UCS B-Series LAN Connectivity
Dynamic vNIC Provisioning
Task
Create a Dynamic vNIC Connection Policy that allows 20 dynamic vNICs to be
provisioned on demand as needed.

Configuration
In the left navigation pane, on the LAN tab, expand Policies >> root, right-click
Dynamic vNIC Connection Policies, and then click Create Dynamic vNIC
Connection Policy.

Assign an appropriate name, and choose 20 for the Number of Dynamic vNICs.
Choose VMWarePassThru for the Adapter Policy, and choose Protected for the
Protection (to allow the vNICs to dynamically alternate between fabrics A and B
as they are provisioned).

UCS Technology Labs - UCS B-Series SAN Connectivity
Fibre Channel Switching Mode, End Host Mode, and
Uplink Configuration
Task
Provision the UCS system to run in Fibre Channel End Host Mode (EHM), also called
N-Port Virtualization (NPV) mode.
Provision the two ports for each FI according to the rack you have rented as Fibre
Channel Uplink ports.

Configuration
In the left navigation pane, click the Equipment tab, expand Fabric Interconnects,
and click Fabric Interconnect B (which happens to be running as primary in this
screen shot). In the right pane, click Configure Unified Ports.

Note the important warning telling us that if we change anything on either module
(either the base slot with the first 32 ports or the Generic Expansion Module (GEM)
if one is installed), it will cause a disruption in traffic and it will IMMEDIATELY reboot

that Fabric Interconnect. For this reason, you would do this at the time of initial
provisioning; if you must do it live, you would perform this task on one FI at a time.
Click Yes.

Click and drag the slider below the ports to the left into the desired position. Note
the legend at the bottom: All green ports are Ethernet, and all purple ports are Fibre
Channel. Also note the letter over each that indicates its port type.

Now we see that ports 11 and higher have changed to Fibre Channel, and by default
all are labeled with a B to indicate Uplink ports. Note that you cannot change port
types on odd-numbered boundaries (for example, you cannot make ports 1-9
Ethernet; it must be 1-8, 1-10, etc.), because one ASIC controls each pair of ports.
Click Finish.
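The even-boundary rule can be sketched as a tiny check. This is a hypothetical helper of our own (nothing in UCSM exposes such a function); it merely encodes the constraint that the Ethernet/FC slider can only stop on even port counts because each ASIC drives a pair of adjacent ports.

```python
# Hypothetical helper (not part of UCSM) illustrating the slider rule:
# the Ethernet/FC boundary on the 32-port fixed module must fall on an
# even port count, because one ASIC drives each pair of adjacent ports.
def valid_unified_port_split(eth_ports, total_ports=32):
    """Return True if 'ports 1..eth_ports Ethernet, rest FC' is allowed."""
    if not 0 <= eth_ports <= total_ports:
        return False
    return eth_ports % 2 == 0  # the slider only stops on even boundaries

print(valid_unified_port_split(10))  # ports 1-10 Ethernet, 11-32 FC -> True
print(valid_unified_port_split(9))   # ports 1-9 Ethernet -> False
```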

If we console into FI-B, we can see that as soon as we clicked Finish, the FI began
rebooting.

When FI-B has finished rebooting, reconnect to UCSM (if logged out), go back to
the left navigation pane, click the Equipment tab, expand Fabric Interconnects,
and click Fabric Interconnect A. In the right pane, click Configure Unified Ports.

Note the important warning again and click Yes.

Click and drag the slider below the ports to the left into the desired position.

Again, we see that ports 11 and higher have changed to Fibre Channel, and by
default all are labeled with a B to indicate Uplink ports. Click Finish.

If this is our primary, notice that we will be logged out of the system.

Again, if we console into FI-A, we can see that as soon as we clicked Finish the FI
began rebooting.

In the left navigation pane, click the SAN tab and click the SAN root entity. At the
bottom of the right pane, click SAN Uplinks Manager.

We can now see that we are already in End Host Mode (NPV), but this is where we
could change it if we wanted to.

UCS Technology Labs - UCS B-Series SAN Connectivity
VSANs and FCoE VLANs
Task
Provision two VLANs to carry FCoE traffic and two VSANs to carry Fibre Channel
traffic as follows:

VSAN    FCoE VLAN    Fabric
101     101          A
102     102          B
Configuration
In the left navigation pane, click the SAN tab, navigate to SAN >> SAN Cloud, right-click VSANs, and click Create VSAN.

Enter an intuitive name and choose Fabric A, along with VSAN 101 and VLAN 101.
This VLAN will be dynamically created, and then the VSAN will also be created,
referencing the proper FCoE VLAN.

Default Zoning is something you would specify only if you needed to be in FC
Switching Mode as opposed to FC NPV mode, because FC zoning cannot be
provisioned by UCSM in versions <= 2.0 (which is why those versions require a
Cisco MDS or N5K as the upstream FC switch in FC Switching mode, to provide
enhanced zoning via CFS). In v2.1, native FC zoning became a configurable
option. We're in NPV mode, so we have no need for it.

The error below is not something you should expect to see, but it is included here to
show what could happen if you created a VLAN prior to creating this VSAN. In
older versions of UCS (1.3), you did have to create the VLAN used to carry FCoE
traffic before creating and referencing it in a VSAN. You cannot do that today; if
you do, you will receive this message.

You should see the message that the VSAN was successfully created.

Repeat to create the next VSAN.

Enter an appropriate name, select Disabled for Default Zoning, select Fabric B,
and enter VSAN and FCoE VLAN number 102.

Verification
We can see the VSANs and their respective FCoE VLANs properly created, along
with the Fabric they belong to.

UCS Technology Labs - UCS B-Series SAN Connectivity
Uplink Trunking Port Channels
Task
Create a SAN port channel out of the two FC interfaces for Fabric A only.
Enable FC VSAN Trunking on both SAN Fabrics.
Ensure that Fabric A trunks VSAN 101 northbound.
Ensure that Fabric B trunks VSAN 102 northbound.
Configure the northbound MDS switches 1 and 2 to match these settings coming
from the FIs.
Use the same Port-Channel ID on the MDS that you use in the UCS.

Configuration
If you are building your own racks, please note that Gen 1 MDS
switches (e.g., the 9216i) do not natively support F Port Channels, even
though they are configurable. To complete this task you must have a
Gen 2 line card (such as a DS-X9124 1/2/4-Gbps FC line card). This is
the line card INE uses in module 2 of all of our MDS 9216i switches,
and that module is where all native FC connections in our MDS
switches terminate. If building your own rack, you may of course also use
an MDS 9222i or a Nexus 5K as the NPIV switch upstream of the
UCS FIs.
In the left navigation pane, click the SAN tab and click the root entity SAN. At the
bottom of the right pane, click SAN Uplinks Manager.

Click the SAN Uplinks tab, and under FC Port Channels, right-click Fabric A and
click Create Port Channel.

Choose an ID and enter a meaningful name. Click Next.

Select the two ports for your rack and click >> to move them to the port channel.

Click Finish.

Notice the error. We see this because we used the same Port Channel ID (1) that
we used for our LAN Port Channel, and they cannot overlap.

Click < Prev.

Change the port channel ID to 2 (or something unused) and click Next.

Click Finish.

Notice that the port channel now has both ports, but both ports and the channel are
disabled.

We can attempt to enable the ports individually by right-clicking the FC Interface and
clicking Enable Interface.

However, we receive an error. This is unlike NX-OS, where we would no shut the
individual interfaces and then no shut the port channel interface as well.

Instead, we will enable the port channel by right-clicking the port channel and
clicking Enable Port Channel.

Notice that both the port channel and the individual interfaces have all been enabled.

Should you ever need to delete a member from a port channel, it can be a bit tricky
to figure out. The next few graphics are provided as a demonstration. We won't
finalize this, however, because we don't actually intend to delete any ports from this
port channel.
Right-click the port channel and click Show Navigator.

Click the Ports tab, highlight the port you want to remove, and click the Delete
(trash) icon on the right. We will simply click Cancel here, because we don't intend
to do this.

Go back to the SAN Uplinks Manager interface to review how we might delete an
entire port channel. This time, right-click Fabric A and click Show Navigator.

Click the Port Channels tab, select the port channel, and click the Delete (trash)
icon. We will click Cancel.

Go back to the SAN Uplinks Manager interface, where we want to enable VSAN
trunking for the entire FI. Right-click Fabric A and click Enable FC Uplink Trunking.

Note that if we do this, certain VSANs are disabled.

Do the same for Fabric B.

Now we need to configure both MDSs to be ready for the FIs in NPV mode, the port

channel to form on Fabric A, and trunking to work properly.


On MDS1:
feature npiv
feature fport-channel-trunk

interface port-channel 2
channel mode active
switchport mode F
switchport speed 2000
switchport rate-mode dedicated
switchport trunk allowed vsan add 101
no shutdown

vsan database
vsan 101
vsan 101 interface port-channel 2

interface fc1/1
switchport speed 2000
switchport rate-mode dedicated
switchport mode F
channel-group 2 force
no shutdown

interface fc1/2
switchport speed 2000
switchport rate-mode dedicated
switchport mode F
channel-group 2 force
no shutdown
MDS2:

feature npiv

vsan database
vsan 102
vsan 102 interface fc1/1
vsan 102 interface fc1/2

interface fc1/1
switchport mode F
switchport trunk allowed vsan add 102
no shutdown

interface fc1/2
switchport mode F
switchport trunk allowed vsan add 102
no shutdown

Now, back in the SAN Uplinks Manager, under Fabric A, right-click the port channel,
and click Show Navigator.

Make sure that the proper VSAN (101) is chosen and click OK.

In the SAN Uplinks Manager, under Fabric B, right-click FC Interface 1/11, and click
Show Navigator.

Make sure that the proper VSAN (102) is chosen and click OK.

Do the same for FC Interface 1/12 and click Show Navigator.

Make sure that the proper VSAN (102) is chosen and click OK.

We can also see what was created in the FI's NX-OS.


FI-A:
feature fcoe
feature npv
feature npiv

vsan database
vsan 101
vsan 101 interface san-port-channel 2

vlan 101
fcoe vsan 101
name fcoe-vsan-101

interface san-port-channel 2
channel mode active
switchport mode NP

interface fc1/11
channel-group 2 force
no shutdown

interface fc1/12
channel-group 2 force
no shutdown

interface fc1/11
switchport mode NP

interface fc1/12
switchport mode NP

FI-B:

feature fcoe
feature npv
feature npiv

vsan database
vsan 102
vsan 102 interface fc1/11
vsan 102 interface fc1/12

vlan 102
fcoe vsan 102
name fcoe-vsan-102

interface fc1/11
switchport trunk mode on
no shutdown

interface fc1/12
switchport trunk mode on
no shutdown

interface fc1/11

switchport mode NP

interface fc1/12
switchport mode NP

Verification
INE-UCS-01-A(nxos)# show interface fc1/11-12
fc1/11 is up
    Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
    Port WWN is 20:0b:54:7f:ee:c5:7d:40
    Admin port mode is NP, trunk mode is on
    snmp link state traps are enabled
    Port mode is NP
    Port vsan is 101
    Speed is 2 Gbps
    Transmit B2B Credit is 16
    Receive B2B Credit is 16
    Receive data field Size is 2112
    Beacon is turned off
    Belongs to san-port-channel 2
    1 minute input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
    1 minute output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
      330 frames input, 171612 bytes
        7 discards, 0 errors
        0 CRC, 0 unknown class
        0 too long, 0 too short
      327 frames output, 22708 bytes
        1 discards, 0 errors
      2 input OLS, 2 LRR, 0 NOS, 0 loop inits
      4 output OLS, 4 LRR, 3 NOS, 0 loop inits
    last clearing of "show interface" counters never
    16 receive B2B credit remaining
    16 transmit B2B credit remaining
    0 low priority transmit B2B credit remaining

fc1/12 is up
    Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
    Port WWN is 20:0c:54:7f:ee:c5:7d:40
    Admin port mode is NP, trunk mode is on
    snmp link state traps are enabled
    Port mode is NP
    Port vsan is 101
    Speed is 2 Gbps
    Transmit B2B Credit is 16
    Receive B2B Credit is 16
    Receive data field Size is 2112
    Beacon is turned off
    Belongs to san-port-channel 2
    1 minute input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
    1 minute output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
      146 frames input, 64524 bytes
        8 discards, 0 errors
        0 CRC, 0 unknown class
        0 too long, 0 too short
      139 frames output, 9496 bytes
        1 discards, 0 errors
      2 input OLS, 2 LRR, 0 NOS, 0 loop inits
      4 output OLS, 2 LRR, 3 NOS, 0 loop inits
    last clearing of "show interface" counters never
    16 receive B2B credit remaining
    16 transmit B2B credit remaining
    0 low priority transmit B2B credit remaining
INE-UCS-01-A(nxos)# show interface san-port-channel 2
san-port-channel 2 is trunking
    Hardware is Fibre Channel
    Port WWN is 24:02:54:7f:ee:c5:7d:40
    Admin port mode is NP, trunk mode is on
    snmp link state traps are enabled
    Port mode is TNP
    Port vsan is 101
    Speed is 4 Gbps
    Trunk vsans (admin allowed and active) (1,101)
    Trunk vsans (isolated)                 ()
    Trunk vsans (initializing)             (1)
    Trunk vsans (up)                       (101)
    1 minute input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
    1 minute output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
      663 frames input, 261700 bytes
        22 discards, 0 errors
        0 CRC, 0 unknown class
        0 too long, 0 too short
      650 frames output, 56856 bytes
        1 discards, 0 errors
      3 input OLS, 3 LRR, 0 NOS, 0 loop inits
      22 output OLS, 16 LRR, 12 NOS, 0 loop inits
    last clearing of "show interface" counters never
    Member[1] : fc1/11
    Member[2] : fc1/12
    Interface last changed at Mon Mar 18 12:07:47 2013

INE-UCS-01-A(nxos)# show san-port-channel summary
U-Up D-Down B-Hot-standby S-Suspended I-Individual link
summary header
--------------------------------------------------------------------------------
Group  Port-      Type     Protocol               Member Ports
       Channel
--------------------------------------------------------------------------------
2      San-po2    FC       PCP      (U)  FC       fc1/11(P)    fc1/12(P)

MDS1# show port-channel summary
------------------------------------------------------------------------------
Interface                 Total Ports        Oper Ports        First Oper Port
------------------------------------------------------------------------------
port-channel 2                 2                  2                 fc1/1

MDS1# show interface port-channel 2 brief

-------------------------------------------------------------------------------
Interface         Vsan   Admin  Status     Oper  Oper    IP
                         Trunk             Mode  Speed   Address
                         Mode                    (Gbps)
-------------------------------------------------------------------------------
port-channel 2    101    on     trunking   TF    4       --

MDS1# show interface port-channel 2 trunk vsan

port-channel 2 is trunking
    Vsan 1 is down (waiting for flogi)
    Vsan 101 is up (None)

MDS1# show interface brief

-------------------------------------------------------------------------------
Interface  Vsan   Admin  Admin   Status     SFP    Oper  Oper    Port
                  Mode   Trunk                     Mode  Speed   Channel
                         Mode                            (Gbps)
-------------------------------------------------------------------------------
fc1/1      101    F      on      trunking   swl    TF    2       2
fc1/2      101    F      on      trunking   swl    TF    2       2

MDS1# show flogi database

--------------------------------------------------------------------------------
INTERFACE        VSAN    FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------------
port-channel 2   101     0x280040  24:02:54:7f:ee:c5:7d:40  20:65:54:7f:ee:c5:7d:41

Total number of flogi = 1.

MDS1# show fcns database

VSAN 101:
--------------------------------------------------------------------------
FCID      TYPE  PWWN                     (VENDOR)   FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x280040  N     24:02:54:7f:ee:c5:7d:40  (Cisco)    npv

Total number of entries = 1


INE-UCS-01-B(nxos)# show interface fc1/11-12
fc1/11 is up
    Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
    Port WWN is 20:0b:54:7f:ee:c5:6a:80
    Admin port mode is NP, trunk mode is on
    snmp link state traps are enabled
    Port mode is NP
    Port vsan is 102
    Speed is 2 Gbps
    Receive data field Size is 2112
    Beacon is turned off
    1 minute input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
    1 minute output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
      13 frames input, 1076 bytes
        4 discards, 0 errors
        0 CRC, 0 unknown class
        0 too long, 0 too short
      8 frames output, 756 bytes
        0 discards, 0 errors
      0 input OLS, 0 LRR, 0 NOS, 0 loop inits
      1 output OLS, 1 LRR, 0 NOS, 0 loop inits
    last clearing of "show interface" counters never

fc1/12 is up
    Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
    Port WWN is 20:0c:54:7f:ee:c5:6a:80
    Admin port mode is NP, trunk mode is on
    snmp link state traps are enabled
    Port mode is NP
    Port vsan is 102
    Speed is 2 Gbps
    Receive data field Size is 2112
    Beacon is turned off
    1 minute input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
    1 minute output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
      13 frames input, 1076 bytes
        4 discards, 0 errors
        0 CRC, 0 unknown class
        0 too long, 0 too short
      8 frames output, 756 bytes
        0 discards, 0 errors
      0 input OLS, 0 LRR, 0 NOS, 0 loop inits
      1 output OLS, 1 LRR, 0 NOS, 0 loop inits
    last clearing of "show interface" counters never
MDS2# show interface fc1/1-2
fc1/1 is up
    Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
    Port WWN is 20:01:00:0d:ec:26:e9:c0
    Admin port mode is F, trunk mode is on
    snmp link state traps are enabled
    Port mode is F, FCID is 0x530000
    Port vsan is 102
    Speed is 2 Gbps
    Transmit B2B Credit is 16
    Receive B2B Credit is 16
    Receive data field Size is 2112
    Beacon is turned off
    5 minutes input rate 48 bits/sec, 6 bytes/sec, 0 frames/sec
    5 minutes output rate 104 bits/sec, 13 bytes/sec, 0 frames/sec
      41 frames input, 3848 bytes
        0 discards, 0 errors
        0 CRC, 0 unknown class
        0 too long, 0 too short
      49 frames output, 7244 bytes
        0 discards, 0 errors
      4 input OLS, 4 LRR, 8 NOS, 0 loop inits
      15 output OLS, 0 LRR, 4 NOS, 2 loop inits
    16 receive B2B credit remaining
    16 transmit B2B credit remaining
    16 low priority transmit B2B credit remaining
    Interface last changed at Mon Mar 18 20:33:58 2013

fc1/2 is up
    Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
    Port WWN is 20:02:00:0d:ec:26:e9:c0
    Admin port mode is F, trunk mode is on
    snmp link state traps are enabled
    Port mode is F, FCID is 0x530001
    Port vsan is 102
    Speed is 2 Gbps
    Transmit B2B Credit is 16
    Receive B2B Credit is 16
    Receive data field Size is 2112
    Beacon is turned off
    5 minutes input rate 48 bits/sec, 6 bytes/sec, 0 frames/sec
    5 minutes output rate 104 bits/sec, 13 bytes/sec, 0 frames/sec
      35 frames input, 3356 bytes
        0 discards, 0 errors
        0 CRC, 0 unknown class
        0 too long, 0 too short
      48 frames output, 7228 bytes
        0 discards, 0 errors
      4 input OLS, 4 LRR, 8 NOS, 0 loop inits
      15 output OLS, 0 LRR, 4 NOS, 2 loop inits
    16 receive B2B credit remaining
    16 transmit B2B credit remaining
    16 low priority transmit B2B credit remaining
    Interface last changed at Mon Mar 18 20:33:58 2013

MDS2# show interface brief

-------------------------------------------------------------------------------
Interface  Vsan   Admin  Admin   Status     SFP    Oper  Oper    Port
                  Mode   Trunk                     Mode  Speed   Channel
                         Mode                            (Gbps)
-------------------------------------------------------------------------------
fc1/1      102    F      on      up         swl    F     2       --
fc1/2      102    F      on      up         swl    F     2       --

MDS2# show flogi database

--------------------------------------------------------------------------------
INTERFACE   VSAN    FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------------
fc1/1       102     0x530000  20:0b:54:7f:ee:c5:6a:80  20:66:54:7f:ee:c5:6a:81
fc1/2       102     0x530001  20:0c:54:7f:ee:c5:6a:80  20:66:54:7f:ee:c5:6a:81

Total number of flogi = 2.


MDS2# show fcns database

VSAN 102:
--------------------------------------------------------------------------
FCID      TYPE  PWWN                     (VENDOR)   FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x530000  N     20:0b:54:7f:ee:c5:6a:80             229
0x530001  N     20:0c:54:7f:ee:c5:6a:80             229

Total number of entries = 2

UCS Technology Labs - UCS B-Series SAN Connectivity
SAN Pin Groups
Task
Create a SAN Pin Group to (later) direct traffic toward the specific interface FC 1/11
on Fabric B.

Configuration
In the left navigation pane, click the SAN tab and click the SAN root entity. At the
bottom of the right pane, click SAN Uplinks Manager.

In the SAN Uplinks Manager, at the bottom, click Create Pin Group.

Give it an appropriate name, select Fabric B, and use the drop-down list to select
FC Interface 1/11. Click OK.

Click OK.

You should see the new pin group in the right SAN Pin Groups pane.

Verification
When you exit the SAN Uplinks Manager, you should see the same new pin group.

UCS Technology Labs - UCS B-Series SAN Connectivity
WWNN/nWWN Pools
Task
Create an nWWN Pool in the root org with a range from 20:FF:00:25:B5:00:00:00 - 20:FF:00:25:B5:00:00:1F.

Configuration
In the left navigation pane, click the SAN tab and navigate to SAN >> Pools >> root
>> WWNN Pools >> WWNN Pool node-default. In the right pane, click
Create WWN Block.

Enter the first value as shown, and choose a size of 32 (which will take you from 00
to 1F).

Click OK.

Verification
Note the new block created in the default pool.
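The block math UCSM performs here can be sketched in a few lines. This is an illustrative helper of our own (the function name is not a UCSM API): it expands a starting WWN plus a block size into the individual addresses, showing why a size of 32 runs from :00 to :1F.

```python
def wwn_block(first_wwn, size):
    """Expand a UCS-style WWN block definition into individual addresses."""
    # Treat the colon-separated WWN as one 64-bit hex number, then
    # increment it 'size' times and reformat into byte pairs.
    base = int(first_wwn.replace(":", ""), 16)
    return [":".join(f"{base + i:016X}"[j:j + 2] for j in range(0, 16, 2))
            for i in range(size)]

block = wwn_block("20:FF:00:25:B5:00:00:00", 32)
print(block[0])   # 20:FF:00:25:B5:00:00:00
print(block[-1])  # 20:FF:00:25:B5:00:00:1F
```

The same arithmetic applies to the WWPN pools created in the next task; only the starting value changes.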

UCS Technology Labs - UCS B-Series SAN Connectivity
WWPN/pWWN Pools
Task
Create WWPN Pools in the CUST1 and CUST2 orgs as follows:
CUST1 org should be provisioned with two ranges for Fabric A and B.
Fabric A pool should be named CUST1-WWPN-FabA and should consist of
a range that is 20:AA:00:25:B5:01:00:00 - 20:AA:00:25:B5:01:00:1F.
Fabric B pool should be named CUST1-WWPN-FabB and should consist of
a range that is 20:BB:00:25:B5:01:00:00 - 20:BB:00:25:B5:01:00:1F.
CUST2 org should be provisioned with two ranges for Fabric A and B.
Fabric A pool should be named CUST2-WWPN-FabA and should consist of
a range that is 20:AA:00:25:B5:02:00:00 - 20:AA:00:25:B5:02:00:1F.
Fabric B pool should be named CUST2-WWPN-FabB and should consist of
a range that is 20:BB:00:25:B5:02:00:00 - 20:BB:00:25:B5:02:00:1F.

Configuration
In the left navigation pane, click the SAN tab and navigate to SAN >> Pools >>
Sub-Organizations >> CUST1. Right-click WWPN Pools and click Create WWPN
Pool.

Enter the pool name CUST1-WWPN-FabA and click Next.

Click Add.

Enter the first WWPN, taking care to be precise. Enter the size of 32 and click OK.

Click Finish.

Click OK.

Again, under SAN >> Pools >> Sub-Organizations >> CUST1, right-click WWPN
Pools and click Create WWPN Pool.

Enter the pool name CUST1-WWPN-FabB and click Next.

Click Add.

Enter the first WWPN, taking care to be precise. Enter the size of 32 and click OK.

Click Finish.

Click OK.

Under SAN >> Pools >> Sub-Organizations >> CUST2, right-click WWPN Pools
and click Create WWPN Pool.

Enter the pool name CUST2-WWPN-FabA and click Next.

Click Add.

Enter the first WWPN, taking care to be precise. Enter the size of 32 and click OK.

Click Finish.

Click OK.

Again, under SAN >> Pools >> Sub-Organizations >> CUST2, right-click WWPN
Pools and click Create WWPN Pool.

Enter the pool name CUST2-WWPN-FabB and click Next.

Click Add.

Enter the first WWPN, taking care to be precise. Enter the size of 32 and click OK.

Click Finish.

Click OK.

Verification
Back in the UCSM, you should see both orgs containing pools for each Fabric.

When we get to tasks involving boot-from-SAN, and so that you do


not have to log in to the actual SAN Array and change the pWWN
initiator every time (because of the dynamic nature of pWWN
allocation in UCS v2.0), we will not actually be using these pools,
although we will still use the nWWN pool. It was still important for you
to create these pools and see where they can be accessed when you
get to Service Profiles (for example, CUST1 pools from CUST1
Service Profiles, CUST2 pools from CUST2 Service Profiles) to
understand them.

UCS Technology Labs - UCS B-Series SAN Connectivity
iSCSI IPs and IQNs
Task
Using VLAN 117, create a block of IP addresses ranging from 10.0.117.10 - 10.0.117.20/24 with a default gateway of 10.0.117.1.
Create a range of iSCSI Initiators: iqn.2013-01.com.ine:ucs:esxi-vmfex:0 - iqn.2013-01.com.ine:ucs:esxi-vmfex:9
Ensure that the IQN Prefix is set to iqn.2013-01.com.ine.

Configuration
On the LAN tab, filter or navigate to Pools >> root org, right-click IP Pool (iscsi-initiator-pool), and click Create Block of IP Addresses.

Create your block beginning with 10.0.117.10, and enter a size of 10 (0-9). Enter a
subnet mask of 255.255.255.0 and a default gateway of 10.0.117.1 and click OK.
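The block just defined can be double-checked with the standard library. This sketch simply reproduces the pool math (starting address 10.0.117.10, size 10, i.e. offsets 0-9) and confirms every address falls inside 10.0.117.0/24 along with the gateway.

```python
import ipaddress

# The iSCSI initiator block defined above: start 10.0.117.10, size 10 (0-9).
start = ipaddress.IPv4Address("10.0.117.10")
block = [str(start + i) for i in range(10)]
net = ipaddress.ip_network("10.0.117.0/24")

print(block[0], "-", block[-1])  # 10.0.117.10 - 10.0.117.19
# Every pool address and the 10.0.117.1 gateway are on-link in the /24.
assert all(ipaddress.IPv4Address(a) in net for a in block)
assert ipaddress.IPv4Address("10.0.117.1") in net
```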

On the SAN tab, filter or navigate to Pools >> root org, right-click IQN Pools, and
click Create IQN Suffix Pool.

Define a name (we will use this later for VM-FEX, so we'll name it that now), and
enter the prefix of iqn.2013-01.com.ine.

Click Add.

Enter the suffix ucs:esxi-vmfex, and make the block begin at 0 and grow with a
total size of 10 (0-9). Between the prefix and the suffix, there will automatically be a
colon (:), resulting in an IQN of iqn.2013-01.com.ine:ucs:esxi-vmfex:0. Click OK.

Click Finish.

Click OK.

Verification
On the LAN tab, click your new IP pool to see all of the addresses awaiting
assignment.

On the SAN tab, click on your new IQN pool to see all of the names awaiting
assignment.
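The pool expansion UCSM performs is simple string assembly, sketched here for illustration (the variable names are ours): the prefix and suffix are joined with a colon and the block index is appended.

```python
prefix = "iqn.2013-01.com.ine"
suffix = "ucs:esxi-vmfex"
# UCSM joins prefix and suffix with a colon and appends the block index,
# yielding iqn.2013-01.com.ine:ucs:esxi-vmfex:0 through :9.
iqns = [f"{prefix}:{suffix}:{i}" for i in range(10)]
print(iqns[0])   # iqn.2013-01.com.ine:ucs:esxi-vmfex:0
print(iqns[-1])  # iqn.2013-01.com.ine:ucs:esxi-vmfex:9
```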

For more information on iSCSI IQN naming conventions, be sure to
check out RFC 3720 Section 3.2.6.3.1, which includes examples such
as:

                         Naming
     Type  Date          Auth        String defined by
                                     "example.com" naming authority
    +--++-----+ +---------+ +-----------------------------+
    |  ||     | |         | |                             |
    iqn.1992-01.com.example:storage:diskarrays-sn-a8675309
    iqn.1992-01.com.example
    iqn.1992-01.com.example:storage.tape1.sys1.xyz
    iqn.1992-01.com.example:storage.disk2.sys1.xyz

UCS Technology Labs - UCS B-Series SAN Connectivity
vHBA Templates
Task
Create two vHBA templates, one for Fabric A and one for Fabric B.
For Fabric A, ensure that:
Name of template is fc0.
VSAN 101 is used.
Template type is Initial.
WWN is from the default pool.
Proper QoS Policy is applied.
Stats Threshold Policy is set to vNIC-1Gbps.
For Fabric B, ensure that:
Name of template is fc1.
VSAN 102 is used.
Template type is Initial.
WWN is from the default pool.
Proper QoS Policy is applied.
SAN Pin Group of FabB-FC11 is used.
Stats Threshold Policy is set to vNIC-1Gbps.

Configuration
In the left navigation pane, click the SAN tab and filter or navigate to SAN >>
Policies >> root, right-click vHBA Templates, and click Create vHBA Template.

Assign all info for fc0 (Fabric A) exactly as the task describes and as shown below.

Click OK.

Again, right-click vHBA Templates and click Create vHBA Template.

Assign all info for fc1 (Fabric B) exactly as the task describes and as shown below.

Click OK.

Verification
You should see both vHBA templates created and awaiting usage.

UCS Technology Labs - UCS B-Series Server Pool Provisioning
UUID Pool
Task
Create a UUID Pool that has suffixes that range from 0000-000000000001 to 0000-000000000020.

Configuration
On the Servers tab, filter or navigate to Pools >> root, right-click UUID Suffix
Pools, and click Create UUID Suffix Pool.

Enter an appropriate name, leave the default prefix type of Derived, and click Next.
While not typically recommended, you are able to change the UUID
prefix, provided that you change it to any value other than
00000000-0000-0000. If you change it to all 0's, the system will let you input it,
but will completely ignore you and use the Derived value. Just remember
that if you change the UUID, it must be unique across all blades and
chassis. In UCSM v2.1+ this applies even to chassis across FIs, as
UCS Central will soon begin allowing cross-FI service profile
association.

Click Add.

Enter the first suffix, 0000-000000000001, and choose a size of 32 (remember
that UUIDs are in hex, so 32 = 0x20).

Click Finish.

Click OK.

Verification
You should now see UUID suffixes ranging from 01 to 20, awaiting assignment.
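The decimal-size-to-hex-suffix relationship is easy to verify. This illustrative helper (our own, not a UCSM API) expands a UCS UUID suffix block, showing that a starting suffix of ...0001 with size 32 ends at ...0020 because 32 decimal is 0x20.

```python
def uuid_suffix_block(first_suffix, size):
    """Expand a UCS UUID suffix block (XXXX-XXXXXXXXXXXX, hex digits)."""
    # Treat the suffix as one 64-bit hex number, increment, then
    # re-split into the 4-12 digit format UCSM displays.
    base = int(first_suffix.replace("-", ""), 16)
    return [f"{base + i:016X}"[:4] + "-" + f"{base + i:016X}"[4:]
            for i in range(size)]

block = uuid_suffix_block("0000-000000000001", 32)
print(block[0])   # 0000-000000000001
print(block[-1])  # 0000-000000000020 (32 decimal = 0x20)
```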

UCS Technology Labs - UCS B-Series Server Pool Provisioning
Server Pool Creation
Task
Create an empty server pool named Palo-Blades-w-gt-32GB.

Configuration
On the Servers tab, filter or navigate to Pools >> root, right-click Server Pools,
and click Create Server Pool.

Enter the name as shown and click Next.

Leave the Pooled Servers pane empty as shown and click Finish.

Click OK.

UCS Technology Labs - UCS B-Series Server Pool Provisioning
Server Pool Qualification
Task
Create a server pool qualification named Palo-Blades-32GB and ensure that it only
qualifies certain servers.
Must contain a Palo mezzanine adapter.
Must contain at least 32GB of RAM.

Configuration
On the Servers tab, filter or navigate to Policies >> root, right-click Server Pool
Policy Qualifications, and click Create Server Pool Policy Qualification.

Assign the name as directed. Then click the + on the right and click Create Adapter
Qualifications.

In the Type field, choose Virtualized Eth If and click OK.

Click the + on the right and click Create Memory Qualifications.

In the Min Cap (MB) field, click Select, enter the value of 32000, and click OK.

Click OK.

Click OK.

UCS Technology Labs - UCS B-Series Server Pool Provisioning
Server Pool Policies
Task
Use the previously created server pool Policy qualification (Palo-Blades-32GB) to
populate the previously created server pool (Palo-Blades-w-gt-32GB) using a server
pool policy named Palo-32GB-Pool.

Configuration
On the Servers tab, filter or navigate to Policies >> root, right-click Server Pool
Policies, and click Create Server Pool Policy.

Complete the Name, Target Pool, and Qualification fields as shown, and click OK.

Click OK.

Verification
Filter or navigate to Pools >> root >> Server Pools >> Server Pool
Palo-Blades-w-gt-32GB, and note that it has been populated with all blades that meet the
minimum qualifications.

UCS Technology Labs - UCS B-Series Service Profiles
Policies for Service Profiles
Task
Create a BIOS policy named LoudBoot-w-Coh that performs the following:
Bypasses the Cisco logo on bootup, and instead shows you important
storage-related information such as FC and iSCSI target disks.
Supports Intel Coherency.
Enables USB devices to access any blade.
Allows PCI devices to address memory above 4GB.
Create an IPMI profile that allows a user named admin with a password of cciedc01
to access and fully administer a blade.
Create a disk configuration policy that allows a blade to mirror two local disks
together, but ensure that if the policy is dissociated with the blade, the blade keeps
the local disks in a mirrored fashion.
Create a power cap policy that is better than the default, allowing a blade to stay
powered on even if other blades must be powered down.
Create a scrub policy that allows both disks and BIOS to be wiped clean in the event
of a service profile dissociation.

Configuration
On the Servers tab, filter or navigate to Policies >> root, right-click BIOS Policies,
and click Create BIOS Policy.

Assign the name as shown, and disable Quiet Boot (this allows the Cisco Logo to be
bypassed and instead shows us critical storage information at boot). Either click
Next or click Intel Directed IO on the left.

On the Intel Directed IO page, enable Coherency Support.


This is a feature that requires deep understanding of CPU cache,
memory, and IO bus architecture before enabling, and we are not
suggesting that you turn it on in a production environment before
reading about it. This step merely demonstrates where to find it and
other features of BIOS.

Click USB on the left. Disable USB Front Panel Access Lock (meaning disable the
lock or enable support for USB devices).

Click PCI Configuration on the left. Enable Memory Mapped IO Above 4Gb Config
and click Finish.

Click OK.

Right-click IPMI Access Policies and click Create IPMI Access Profile.

Assign it an intuitive name and click +.

Enter the name admin and password cciedc01, choose Admin for the Role, and
click OK.

Click OK.

Click OK.

Right-click Local Disk Config Policies and click Create Local Disk Configuration
Policy.

Assign it an intuitive name, select Raid 1 Mirrored for the Mode, and select the
Protect Configuration check box. Click OK.

Click OK.

Right-click Power Control Policies and click Create Power Control Policy.

Assign it an intuitive name and choose the option to cap the power with the Priority
of 1 (the default is 5, and lower is better). Click OK.

Click OK.

Note the default of 5.

Finally, right-click Scrub Policies and click Create Scrub Policy.

Assign it an intuitive name and select Yes for Disk Scrub and BIOS Settings Scrub.
Click OK.

Click OK.

UCS Technology Labs - UCS B-Series Service Profiles
Service Profile Updating Templates with Boot from
iSCSI
Task
Clone the previously created service profile template, change the name of the new
template to ESXi-VMFEX-iSCSI, and move it to the CUST2 org.
Modify the new template and delete all FC configurations.
Add a new vNIC that will be used to boot from iSCSI with the following parameters:
Obtains MAC address from the CUST2 MAC Pool
Uses Fabric A but fails to Fabric B
Transmits on VLAN 117 but does not expect the BIOS or OS to need to
send Dot1Q header or VLAN ID
Has an MTU of 9000 bytes and the system supports it
Is tagged with CoS 4 and guaranteed 20% of bandwidth in times of fabric
congestion
The iSCSI vNIC should boot from a target that is at IP address 10.0.117.25:3260, is
at the target name iqn.2013-01.com.ine:storage:esxi-vmfex:1, and uses LUN 1.
Apply the dynamic vNIC connection policy that was created earlier, which dynamically
instantiates up to 20 vNICs, and explicitly ensure that the BIOS supports VMWare
DirectPathIO on these dynamic vNICs.

Configuration
On the Servers tab in the left navigation pane, filter or navigate to Servers >>
Service Profile Templates >> root org >> Sub-Organizations >> CUST1, right-click the service profile template ESXi-Initial-Temp, and click Create a Clone.

Enter the clone name ESXi-VMFEX-iSCSI and change the org to CUST2. Click OK.
Many things that may seem like limitations in UCS are actually
designed with intention. For example, you cannot move service
profiles or their templates from one org to another. However, you can
create a clone, and that can be reassigned to a new org.

Click OK.

Select the new template in the left navigation pane. In the right pane, click the
Storage tab. Select the first vHBA and click the Delete icon at the bottom.

Click Yes.

Select the second vHBA and click the Delete icon at the bottom.

Click Yes.

Click Save Changes.

Click OK.

In the right pane, click the Network tab and click Add.

Give the new vNIC a name such as iSCSI, and be sure not to use a template to
create this (there have been known issues with booting from iSCSI in UCSM v2.0
when they were instantiated from a template). Choose the CUST2-MAC-Pool1.
Choose Fabric A and Enable Failover. Choose VLAN 117, and be sure to mark it
as a Native VLAN so that it doesn't require dot1q tagging. Change the MTU to 9000
and the QoS Policy to CoS-4_20per-BW. We will need to ensure that the system
supports the 9000-byte MTU for this CoS 4 value, and we'll do that as soon as we
finish doing a few more required things with this new vNIC. Change the Adapter
Policy to VMWare. Click OK.

Stay on the Network tab in the right pane, and click Modify vNIC/vHBA Placement.

Select Specify Placement Manually. Select all vNICs and click assign to move
them to vCon1.

Ensure that the vNIC iSCSI is the absolute last in order. This is important when
booting from iSCSI to deal with an issue in some versions of VMWare ESXi. Click
OK.

Click OK.

Note the changes.

Change tabs in the left navigation pane to LAN and navigate to LAN >> LAN Cloud
>> QoS System Class. In the right pane, note that CoS 4 is currently set to an
MTU of Normal, or 1500 bytes.

Change this MTU to 9000 and click Save Changes. Now the vNIC will actually
support jumbo frames for iSCSI transmission.

Click OK.

Now, although it may seem like we just did this, we need to create the iSCSI vNIC.
The last thing we created was just the overlay vNIC, and we'll reference it here.
On the Servers tab in the left navigation pane, filter or navigate to Servers >>
Service Profile Templates >> root org >> Sub-Organizations >> CUST2, and
click the new Service Profile Template ESXi-VMFEX-iSCSI. Then in the right pane,
click the iSCSI vNICs tab and click Add.

Give it a name (you may call it the same thing if you like), and select the Overlay
vNIC, which is the vNIC we just finished creating. Choose the default iSCSI Adapter
Policy, and the same VLAN, 117. It is very important that you do not select any MAC
address (the overlay vNIC already contains the mapped MAC address). Click OK.

In the right pane, click the Boot Order tab, select FC SAN Storage, and click the
Delete icon at the bottom.

Click Save Changes.

Click OK.

Expand the iSCSI vNICs, select the iSCSI vNIC iSCSI, and drag it under CD-ROM.

Click Set Boot Parameters.

Leave the Authentication Profile as not set. (It is possible and often likely to
encounter CHAP authentication when accessing an iSCSI target, because there is
no protection like FC zoning in most iSCSI implementations and anyone with an IP
stack can access the target.) For the Initiator Name Assignment, choose the ESXi-VMFEX
pool that we created earlier, and for the Initiator IP Address Policy, choose
the Pool pool that we also created earlier (note that the IP address will not populate
until we click OK and go back into the Boot Parameters). Click +.

Enter the iSCSI target name, IP address, port, and LUN numbers as shown. Do not
choose an authentication profile. Click OK.

Click OK.

Click OK.

If we go back into the Boot Parameters, we see the IP address, subnet mask, and
default gateway populated from the pool.

Now that we've finished with the iSCSI specifics for vNICs, we will go back to the
Network tab in the right pane and create new vNICs that can be used either for
VMWare Pass Through Switching (PTS) or VMWare DirectPathIO. Click
Change Dynamic vNIC Connection Policy.

Choose to Use a Dynamic vNIC Connection Policy and choose the policy
previously created, VMFEX-DirectPath. Click OK.

Click OK.

Notice the 20 Dynamic vNICs that are ready to be consumed using VM-FEX later.

We were told to ensure that BIOS supports VMWare DirectPathIO on these dynamic
vNICs, and although most of the blade platforms support the Virtualization
Technology (VT) bit that we need for DirectPathIO, we were told to explicitly ensure
that they had it enabled, so we need a new BIOS Policy.
Click the Policies tab in the right pane and click Create BIOS Policy.

Give it a name. We still want it to show us storage while bootstrapping in BIOS, so
we will disable Quiet Boot.

Click Processor on the left. On the right, enable Virtualization Technology (VT).
Click Finish.

Click OK.

Select that new policy and click Save Changes.

Click OK.

UCS Technology Labs - UCS B-Series Service Profiles
Service Profile Initial Templates
Task
Create a service profile template in the CUST1 org that meets all of the following
qualifications:
Name is ESXi-Initial-Temp.
Type is an Initial Template.
Pulls from the previously created UUID pool.
Uses Expert SAN configuration:
Pulls from the WWNN default pool previously populated.
Create two vHBAs (fc0 & fc1) that pull from the templates of the same name.
Adapter Policy should be for VMWare.
Uses Expert LAN configuration to create 5 vNICs all with Adapter Policy VMWare:
vNIC 1 should be named eth0-vmk1 and use the ESXi-VMKernel-A vNIC
template.
vNIC 2 should be named eth1-vmk2 and use the ESXi-VMKernel-B vNIC
template.
vNIC 3 should be named eth2-vmotion and use the ESXi-vMotion-A vNIC
template.
vNIC 4 should be named eth3-vm-fabA and use the ESXi-VM-FabA vNIC
template.
vNIC 5 should be named eth4-vm-fabB and use the ESXi-VM-FabB vNIC
template.
Boot policy:
Boot from CD-ROM if present.
Boot from SAN should be selected and should boot primarily from Fabric A
and then Fabric B.
Use a policy to ensure that if any changes are made to the service profile resulting
from this template, a user must acknowledge any change prior to the blade being
allowed to reboot.

Automatically associate any resulting service policies created from this template to
servers that have a Palo adapter and >= 32GB of RAM, using the previously created
server pool.
Ensure that service profiles are automatically in the Down state after creation.
Use the LoudBoot-w-Coh BIOS Policy.
Allow the 'admin' IPMI user to have access.
Ensure that a management IP address is pulled from the global pool.
Assign the Power Cap Policy for priority 1.
Ensure that any BIOS and/or local disks are wiped upon service profile dissociation.

Configuration
On the Servers tab in the left navigation pane, filter or navigate to Servers >>
Service Profile Templates >> root, right-click the CUST1 org, and click Create
Service Profile Template.

Assign an intuitive name, select Initial Template for the Type, and choose the globl-server-uuids pool for the UUID Assignment. Click Next.

There are no local disks that we care about, so leave the default. Choose Expert for
the SAN connectivity option, and choose the node-default pool for the WWNN
Assignment. Click Add.

Assign the name fc0 for the first vHBA, select the Use SAN Connectivity Template
check box, and choose the vHBA Template of fc0 and the Adapter Policy of
VMWare. Click OK.

Click Add again.

Assign the name fc1 for the second vHBA, select the Use SAN Connectivity
Template check box, and choose the vHBA Template of fc1 and the Adapter Policy
of VMWare. Click OK.

Click Next.

Dynamic vNICs are for VM-FEX; that is not the primary purpose of this service
profile template, so we will leave that field with the default of no policy. In the LAN
connectivity section, choose Expert and click Add.

Enter the name and details as shown for adapter eth0-vmk1. Click OK.

Again, click Add.

Enter the name and details as shown for adapter eth1-vmk2. Click OK.

Click Add.

Enter the name and details as shown for adapter eth2-vmotion. Click OK.

Click Add.

Enter the name and details as shown for adapter eth3-vm-fabA. Click OK.

Click Add.

Enter the name and details as shown for adapter eth4-vm-fabB. Click OK.

Click Next.

Click Next.

Choose to Create a Specific Boot Policy. This is often desired for templates
because the profiles spawned from them rarely share the same boot targets. In fact, we won't assign the boot
target here, only the vHBAs used to find the targets. We'll specify the boot targets
later when we instantiate service profiles from the templates.
Click Add CD-ROM. Click vHBA fc0. Click vHBA fc1. Click Next.

We were directed to create a policy to ensure that if any changes are made to the
service profile resulting from this template, a user must acknowledge the changes
prior to the blade being allowed to reboot; we don't yet have one created. However,
the UCSM allows us to create a policy here from within the Service Profile Template
that we can use here immediately but that also will result in a global policy for others
to use later. Click Create Maintenance Policy.

Name it intuitively, and select the User Ack option.

Click OK.

Choose the policy just created. Click Next.

Choose the server pool assignment of the previously created Palo-Blades-w-gt32GB. Specify that the power state be Down after the resulting service profile is
associated to a blade. Note that we can also use server pool qualifications here to
limit where this template/profile can be applied, and to restrict migration of the
resulting profile to blades that do not meet the qualification. This goes beyond
simply telling the service profile where it is initially assigned, and goes to the point of
preventing it from ever being allowed to be assigned later to anything that doesn't
meet this qualification - even if it gets disassociated from its original blade. Click Next.

Choose the BIOS policy we created for LoudBoot-w-Coh. Choose the IPMI Access
Policy we created for IPMI-Admin. For the Management IP Address Policy, select
Pooled to pull from the global pool. Choose the Power Control Policy we created for
PwrCap1. Finally, choose the Scrub Policy we created for Wipe-Disk-n-BIOS. Click
Finish.

Click OK.

UCS Technology Labs - UCS B-Series Service Profiles
Service Profile Instantiation - ESXi1
Task
Spawn two instances of service profiles from the Service Profile Template named
ESXi-Initial-Temp.
Ensure that they are named ESXi1 and ESXi2.
For the remainder of the task, modify only the new service profile instance named
ESXi1 with the following configuration:
Modify both vHBAs:
vHBA fc0 should use the custom pWWN of 20:AA:00:25:B5:C1:00:01.
vHBA fc1 should use the custom pWWN of 20:BB:00:25:B5:C1:00:01.
Modify the Boot Order to include a Boot Target for each vHBA according to the table
Boot Targets and LUNs.
Note the Fabric, LUN, and ESXi Service Profile number.
Associate Service Profile ESXi1 to a blade based on the pooling already assigned,
and mitigate any issues that arise in association.
Configure MDS1 and MDS2 to properly zone and allow communication between the
initiators and disk targets on each respective fabric.
Use the topology for your rack to determine location of FC ports from
MDS1/2 to FC SAN.
Boot the blade and ensure that you see the target disk.
If performing this task on the INE CCIE DC Rack Rentals, your target
pWWNs may differ based on which rack you are assigned to. To
determine the correct target pWWN, use the show flogi
database command on the MDS switch that connects to the SAN in
your configuration.
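For example, on MDS1 this might look like the following (VSAN 101 is assumed from this lab's configuration; the SAN array ports register with FC4 type scsi-fcp:target in the fcns output, which is how you can pick the targets out):

MDS1# show flogi database vsan 101
MDS1# show fcns database vsan 101

The pWWNs of the entries flagged scsi-fcp:target are the values that belong in the table below.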

Boot Targets and LUNs

Fabric   Target pWWN          LUN   Description
A        Learned from MDS 1   1     ESXi Shared Datastore
B        Learned from MDS 2   1     ESXi Shared Datastore
A        Learned from MDS 1   0     ESXi1 Boot Volume
B        Learned from MDS 2   0     ESXi1 Boot Volume

Configuration
On the Servers tab in the left navigation pane, filter or navigate to Servers >>
Service Profile Templates >> root org >> Sub-Organizations >> CUST1, right-click Service Template ESXi-Initial-Temp, and click Create Service Profiles
From Template.

Enter ESXi for the Naming Prefix, and 2 for the number of instances to be created.

Click OK.

Click OK.

Note the two new instances named appropriately.

Click the new service profile instance ESXi1 and note that there is a Config Failure.
We will be dealing with this later.

In the right pane, click the Storage tab, select vHBA fc0, and click Modify at the
bottom.

Clear the Use SAN Connectivity Template check box. In the WWPN Assignment
drop-down list, select 20:XX:XX:XX:XX:XX:XX:XX. In the WWPN field, enter the
value as shown for fc0 on Fabric A of 20:AA:00:25:B5:C1:00:01. Click OK.

Select vHBA fc1 and click Modify at the bottom.

Clear the Use SAN Connectivity Template check box. In the WWPN Assignment
drop-down list, select 20:XX:XX:XX:XX:XX:XX:XX. In the WWPN field, enter the
value as shown for fc1 on Fabric B of 20:BB:00:25:B5:C1:00:01. Click OK.

Click Save Changes.

Note that when we try to save the changes, we see that there will still be a
Configuration Failure (the inability to associate the service profile to a blade), but
here we get a bit more information that a "vnic can only have either an isolated or a
set of regular vlans." It doesn't look like PVLANs play very nicely with other VLANs
when they are all selected together; when you think about the design intention of
PVLANs, this makes a lot of sense! We will mitigate this later, but for now, let's finish
with our SAN configuration. Click Yes.

Click OK.

In the right pane, click the Boot Order tab, expand vHBAs, and click Add SAN
Boot Target.

A drop-down list appears. Click Add San Boot Target To SAN primary to add it to
the first vHBA - fc0.

From the table above, we know that the Boot Target LUN for Service Profile ESXi1
is 0. Enter the Boot Target WWPN learned from the show flogi database output on
MDS 1, and specify that this will be a Primary boot type. Click OK.
This is different from the "primary" you chose on the previous screen.
That was choosing the primary or secondary vHBA to add a target to,
whereas here we are stating that within the primary vHBA, choose
either a primary or secondary boot target to try to boot from within a
single/primary vHBA. Each vHBA can have a primary and a
secondary boot target. We will quickly review how that might look in
some designs. The primary boot target for a primary adapter would be
the SAN Array Storage Processor 1 over Fabric A. The secondary
boot target for a primary adapter would be the SAN Array Storage
Processor 2 over Fabric A. The primary boot target for a secondary
adapter would be the SAN Array Storage Processor 2 over Fabric B.
The secondary boot target for a secondary adapter would be the SAN
Array Storage Processor 1 over Fabric B. We don't have anything that
redundant in our design; we simply have a single controller for all
disks in our SAN array over both Fabrics, so we will simply use a
primary target over both primary and secondary vHBAs here.

Click OK.

Click Add SAN Boot Target.

This time when the drop-down list appears, click Add San Boot Target To SAN
secondary to add it to the second vHBA - fc1.

Note that the pWWN for Fabric B is different. Enter the Boot Target WWPN learned
from the show flogi database output on MDS 2. Enter the same LUN of 0, and the
boot type of Primary. Click OK.

Click OK.

You should see these both populated properly.

Back on the General tab in the right pane, expand Status Details and note the
error again. It's time to work on fixing that.

Remember from a previous task that the only vNICs that were created with almost
all VLANs were the two that will be used for guest VMs. Also recall that they are
Updating Template types of vNIC templates. This means that we can do one of two
things: A) unlink our vNICs from their updating templates, or B) simply update the
updating templates, and they will propagate those changes down. Let's do the latter.
In the left navigation pane, click the LAN tab, and filter or navigate to Policies >>
root org >> vNIC Templates >> vNIC Template ESXi-VM-FabA. In the right pane,
click Modify VLANs.

Deselect the two PVLANs, which are 124 and 125. While we are here, you can
deselect others that we won't be using these adapters for at any time, such as 115-117
and 123, which was our example disjoint L2 VLAN. Specifically, you need to
be sure to keep these VLANs selected: 1, 110-114, and 118-122. Click OK.

The error that we now see is really just a parser error, because we already
deselected the PVLANs that overlapped with regular VLANs. Click Yes.

Click OK.

Let's do the same thing to the vNIC template ESXi-VM-FabB. Click it in the left
navigation pane, and click Modify VLANs in the right pane.

Deselect the same VLANs that you did for ESXi-VM-FabA. Click OK.

Click Yes.

Click OK.

Verification
Back on the Servers tab, click our Service Profile under the CUST1 org for ESXi1,
and in the right pane click the General tab. Expand the Status Details area, and
notice that our service profile is now associating to whichever blade it chose
dynamically from the server pool (based on the Server Pool Policy and
Qualifications) that we assigned it earlier.

Notice that it chose blade 2 (service profiles tend to choose from the bottom of the
pool in UCSM v2.0), and that the service profile has finished association and now
has the blade in a Down state.
Association involves a process whereby UCSM actually boots the
blade on its own into a stripped-down Linux kernel and uses a
barebones operating system to actually program the blade in every
way needed. Things like BIOS settings are assigned, RAID controller
is set up, and the Palo mezzanine adapter card is programmed for the
proper vNICs and FCoE vHBAs (if the blade has a Palo adapter). This
can take up to 10-15 minutes at times, so be patient when associating
service profiles to blades. If you would like to see a high level of this
process occurring, you can go to the Equipment tab, click the proper
blade in the proper chassis, and click KVM Console to see it boot the
mini-OS.

Configuration (MDS)
MDS1:

vsan database
vsan 101
vsan 101 interface fc1/3
vsan 101 interface fc1/4

interface fc1/3
switchport mode F
no shutdown

interface fc1/4
switchport mode F
no shutdown

Although a device-alias database is not necessary, it makes for easier sight-resolution from pWWNs to meaningful names. We will fill in everything that we will
be using, not only in this task but in future tasks, and we can continue to reference it
later.

MDS1:

device-alias database
device-alias name ESXi1-fc0 pwwn 20:aa:00:25:b5:c1:00:01
device-alias name ESXi2-fc0 pwwn 20:aa:00:25:b5:c1:00:02
device-alias name ESXi-VMFEX-fc0 pwwn 20:aa:00:25:b5:c2:00:03
device-alias name C200-ESXi-fc0 pwwn 20:aa:d4:8c:b5:bd:46:0e
device-alias name SAN_ARRAY1_FabA_FC0-P0 pwwn 21:00:00:1b:32:04:37:dc
device-alias name SAN_ARRAY1_FabA_FC0-P1 pwwn 21:01:00:1b:32:24:37:dc

device-alias commit
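After the commit, it is worth a quick sanity check that the aliases are active. These are standard MDS verification commands, shown here as a suggestion rather than part of the task:

MDS1# show device-alias database
MDS1# show device-alias status

The status output confirms whether the commit succeeded and whether any changes are still pending.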

!Active Zone Database Section for vsan 101
zone name ESXi-12-boot-datastore vsan 101
  member pwwn 20:aa:00:25:b5:c1:00:01
  ! [ESXi1-fc0]
  member pwwn 21:00:00:1b:32:04:37:dc
  ! [SAN_ARRAY1_FabA_FC0-P0]

zoneset name VSAN101 vsan 101
  member ESXi-12-boot-datastore

zoneset activate name VSAN101 vsan 101

!Full Zone Database Section for vsan 101
zone name ESXi-12-boot-datastore vsan 101
  member pwwn 20:aa:00:25:b5:c1:00:01
  ! [ESXi1-fc0]
  member pwwn 21:00:00:1b:32:04:37:dc
  ! [SAN_ARRAY1_FabA_FC0-P0]

zoneset name VSAN101 vsan 101
  member ESXi-12-boot-datastore

zone commit vsan 101
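With the zoneset activated and committed, you can confirm that the intended members made it into the active zone; for example (output omitted here):

MDS1# show zoneset active vsan 101
MDS1# show zone status vsan 101

Members that have logged in appear with their FCIDs in the active zoneset, which is a quick way to catch a pWWN typo before attempting to boot.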

MDS2:

vsan database
vsan 102
vsan 102 interface fc1/3
vsan 102 interface fc1/4

interface fc1/3

switchport mode F
no shutdown

interface fc1/4
switchport mode F
no shutdown

device-alias database
device-alias name ESXi1-fc1 pwwn 20:bb:00:25:b5:c1:00:01
device-alias name ESXi2-fc1 pwwn 20:bb:00:25:b5:c1:00:02
device-alias name ESXi-VMFEX-fc1 pwwn 20:bb:00:25:b5:c2:00:03
device-alias name C200-ESXi-fc1 pwwn 20:bb:d4:8c:b5:bd:46:0f
device-alias name SAN_ARRAY1_FabB_FC0-P2 pwwn 21:02:00:1b:32:44:37:dc
device-alias name SAN_ARRAY1_FabB_FC0-P3 pwwn 21:03:00:1b:32:64:37:dc

device-alias commit

!Active Zone Database Section for vsan 102
zone name ESXi-12-boot-datastore vsan 102
  member pwwn 20:bb:00:25:b5:c1:00:01
  ! [ESXi1-fc1]
  member pwwn 21:02:00:1b:32:44:37:dc
  ! [SAN_ARRAY1_FabB_FC0-P2]

zoneset name VSAN102 vsan 102
  member ESXi-12-boot-datastore

zoneset activate name VSAN102 vsan 102

!Full Zone Database Section for vsan 102
zone name ESXi-12-boot-datastore vsan 102
  member pwwn 20:bb:00:25:b5:c1:00:01
  ! [ESXi1-fc1]
  member pwwn 21:02:00:1b:32:44:37:dc
  ! [SAN_ARRAY1_FabB_FC0-P2]

zoneset name VSAN102 vsan 102
  member ESXi-12-boot-datastore

zone commit vsan 102

Verification
Back in UCSM, click the ESXi1 service profile in the left navigation pane, and in the
right pane click KVM Console.

In the new window, click Boot Server.

Click OK.

Click OK.

On MDS1, we can see that both the FC SAN Targets and their Initiators have
FLOGI'd and been assigned FCIDs:
MDS1# show flogi database
--------------------------------------------------------------------------------
INTERFACE        VSAN    FCID       PORT NAME               NODE NAME
--------------------------------------------------------------------------------
fc1/3            101     0x280200   21:00:00:1b:32:04:37:dc 20:00:00:1b:32:04:37:dc
                           [SAN_ARRAY1_FabA_FC0-P0]
fc1/4            101     0x280500   21:01:00:1b:32:24:37:dc 20:01:00:1b:32:24:37:dc
                           [SAN_ARRAY1_FabA_FC0-P1]
port-channel 2   101     0x280040   24:02:54:7f:ee:c5:7d:40 20:65:54:7f:ee:c5:7d:41
port-channel 2   101     0x280041   20:aa:00:25:b5:c1:00:01 20:ff:00:25:b5:00:00:0f
                           [ESXi1-fc0]

Total number of flogi = 4.


MDS1# show fcns database

VSAN 101:
--------------------------------------------------------------------------
FCID        TYPE  PWWN                    (VENDOR)        FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x280040    N     24:02:54:7f:ee:c5:7d:40 (Cisco)         npv
0x280041    N     20:aa:00:25:b5:c1:00:01
                    [ESXi1-fc0]
0x280200    N     21:00:00:1b:32:04:37:dc (Qlogic)        scsi-fcp:target
                    [SAN_ARRAY1_FabA_FC0-P0]
0x280500    N     21:01:00:1b:32:24:37:dc (Qlogic)        scsi-fcp:target
                    [SAN_ARRAY1_FabA_FC0-P1]

Total number of entries = 4

On MDS2 we can see that both the FC SAN Targets and their Initiators have
FLOGI'd and been assigned FCIDs:
MDS2# show flogi database
--------------------------------------------------------------------------------
INTERFACE   VSAN    FCID       PORT NAME               NODE NAME
--------------------------------------------------------------------------------
fc1/1       102     0x530000   20:0b:54:7f:ee:c5:6a:80 20:66:54:7f:ee:c5:6a:81
fc1/1       102     0x530002   20:bb:00:25:b5:c1:00:01 20:ff:00:25:b5:00:00:0f
                      [ESXi1-fc1]
fc1/2       102     0x530001   20:0c:54:7f:ee:c5:6a:80 20:66:54:7f:ee:c5:6a:81
fc1/3       102     0x530500   21:02:00:1b:32:44:37:dc 20:02:00:1b:32:44:37:dc
                      [SAN_ARRAY1_FabB_FC0-P2]
fc1/4       102     0x530200   21:03:00:1b:32:64:37:dc 20:03:00:1b:32:64:37:dc
                      [SAN_ARRAY1_FabB_FC0-P3]

Total number of flogi = 5.


MDS2# show fcns database

VSAN 102:
--------------------------------------------------------------------------
FCID        TYPE  PWWN                    (VENDOR)        FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x530000    N     20:0b:54:7f:ee:c5:6a:80                 229
0x530001    N     20:0c:54:7f:ee:c5:6a:80                 229
0x530002    N     20:bb:00:25:b5:c1:00:01
                    [ESXi1-fc1]
0x530200    N     21:03:00:1b:32:64:37:dc                 scsi-fcp:init
                    [SAN_ARRAY1_FabB_FC0-P3]
0x530500    N     21:02:00:1b:32:44:37:dc                 scsi-fcp:target
                    [SAN_ARRAY1_FabB_FC0-P2]

Total number of entries = 5

Back on the UCSM KVM session, note the small Cisco logo (not large) and
basic BIOS information.

Here we see, as a result of disabling "Quiet Boot" in BIOS, the Cisco VIC FC card
and Boot Driver along with the FC Disk and its corresponding pWWN and LUN
number.

UCS Technology Labs - UCS B-Series Service Profiles
Service Profile Instantiation - ESXi2
Task
Modify the second new service profile instance named ESXi2 with the following
configuration:
Modify both vHBAs:
vHBA fc0 should use the custom pWWN of 20:AA:00:25:B5:C1:00:02.
vHBA fc1 should use the custom pWWN of 20:BB:00:25:B5:C1:00:02.
Modify the Boot Order to include a Boot Target for each vHBA according to the table
Boot Targets and LUNs.
Note the Fabric, LUN, and ESXi Service Profile number.
Associate Service Profile ESXi2 to a blade based on the pooling already assigned,
and mitigate any issues that arise in association.
Configure MDS1 and MDS2 to properly zone and allow communication between both
the ESXi1 and ESXi2 initiators and disk targets on each respective fabric.
Boot the blade and ensure that you see the target disk.
If performing this task on the INE CCIE DC Rack Rentals, your target
pWWNs may differ based on which rack you are assigned to. To
determine the correct target pWWN, use the show flogi
database command on the MDS switch that connects to the SAN in
your configuration.

Boot Targets and LUNs

Fabric   Target pWWN          LUN   Description
A        Learned from MDS 1   1     ESXi Shared Datastore
B        Learned from MDS 2   1     ESXi Shared Datastore
A        Learned from MDS 1   0     ESXi2 Boot Volume
B        Learned from MDS 2   0     ESXi2 Boot Volume

Configuration
On the Servers tab in the left navigation pane, filter or navigate to Servers >>
Service Profiles >> root org >> Sub-Organizations >> CUST1, and click the
service profile ESXi2. In the right pane, expand the Status Details section, and
note that we have a different issue that must be resolved (this was actually an issue
on the last service profile as well, but we fixed it before we saw the issue explicitly).
This is because, when we created our vHBAs in our Service Profile Template, we
chose for the pWWNs to be assigned from the default pool, which contains no
entries. An HBA without a pWWN is like a NIC without a MAC address - it can't talk
on the network and it can't be assigned a logical FCID. Let's remedy that.

In the right pane, click the Storage tab, select vHBA fc0, and click Modify at the
bottom.

Clear the Use SAN Connectivity Template check box. In the WWPN Assignment
drop-down list, select 20:XX:XX:XX:XX:XX:XX:XX. In the WWPN field, enter the
value as shown for fc0 on Fabric A of 20:AA:00:25:B5:C1:00:02. Click OK.

Select vHBA fc1 and click Modify at the bottom.

Clear the Use SAN Connectivity Template check box. In the WWPN Assignment
drop-down list, select 20:XX:XX:XX:XX:XX:XX:XX. In the WWPN field, enter the
value as shown for fc1 on Fabric B of 20:BB:00:25:B5:C1:00:02. Click OK.
While here, note that a vHBA must have a proper FC-specific QoS
Policy (one with a no-drop class assigned) for the service profile to
associate and boot. UCSM requires a no-drop class for your FCoE
traffic. This is a good thing to watch for as another
possible misconfiguration that would prevent association.

Click Save Changes.

Notice that this looks better than the last message we got after clicking Save at this
point. This is because we fixed the Updating vNIC Template, which has already
updated the vNICs here in this service profile. Click Yes.

Click OK.

In the right pane, click the Boot Order tab, expand vHBAs, and click Add SAN
Boot Target.

A drop-down menu appears. Click Add San Boot Target To SAN primary to add it
to the first vHBA - fc0.

From the table above, we know that the Boot Target LUN for Service Profile ESXi2
is 0. Enter the Boot Target WWPN learned from the show flogi database output on
MDS 1, and specify that this will be a Primary boot type. Click OK.

Click OK.

Click Add SAN Boot Target. This time when the drop-down menu appears, click
Add San Boot Target To SAN secondary to add it to the second vHBA - fc1.

Note that the pWWN for Fabric B is different. Enter the Boot Target WWPN learned
from the show flogi database output on MDS 2. Enter the same LUN of 0, and the
boot type of Primary. Click OK.

Click OK.

You should see these both populated properly.

Verification
Back on the General tab, when we expand the Status Details section we can see
that the service profile has associated to blade 1 properly and that the blade is
currently resting in a Down state.

Configuration (MDS)
On MDS1:

vsan database
vsan 101
vsan 101 interface fc1/3
vsan 101 interface fc1/4

interface fc1/3
switchport mode F
no shutdown

interface fc1/4
switchport mode F
no shutdown

Although a device-alias database is not necessary, it makes for easier sight-resolution from pWWNs to meaningful names. We will fill in everything that we will
be using not only in this task but in future tasks, and we can continue to reference it
later.
device-alias database
device-alias name ESXi1-fc0 pwwn 20:aa:00:25:b5:c1:00:01
device-alias name ESXi2-fc0 pwwn 20:aa:00:25:b5:c1:00:02
device-alias name ESXi-VMFEX-fc0 pwwn 20:aa:00:25:b5:c2:00:03
device-alias name C200-ESXi-fc0 pwwn 20:aa:d4:8c:b5:bd:46:0e
device-alias name SAN_ARRAY1_FabA_FC0-P0 pwwn 21:00:00:1b:32:04:37:dc
device-alias name SAN_ARRAY1_FabA_FC0-P1 pwwn 21:01:00:1b:32:24:37:dc

device-alias commit

Here in the zoning, we will simply add a member to the existing zone name,
because later we will want both ESXi hosts to have access to the same datastore to
allow for vMotion. If we zone by pwwn as we are here, then both ESXi hosts will be
able to see each other's boot LUNs as well; however, for the sake of this lab, we
don't care much, and each ESXi host will continue to boot from the LUN we
assigned it as a boot target back in UCSM.
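As an aside, if you did want to keep the hosts from seeing each other's boot LUNs, single-initiator zoning would be the usual approach. Here is a sketch using this lab's pWWNs (the zone names ESXi1-boot and ESXi2-boot are hypothetical and not part of this task):

zone name ESXi1-boot vsan 101
  member pwwn 20:aa:00:25:b5:c1:00:01
  member pwwn 21:00:00:1b:32:04:37:dc

zone name ESXi2-boot vsan 101
  member pwwn 20:aa:00:25:b5:c1:00:02
  member pwwn 21:00:00:1b:32:04:37:dc

zoneset name VSAN101 vsan 101
  member ESXi1-boot
  member ESXi2-boot

Note that because both zones still contain the same target port, true LUN-level separation would also require LUN masking on the array itself.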
!Active Zone Database Section for vsan 101
zone name ESXi-12-boot-datastore vsan 101
  member pwwn 20:aa:00:25:b5:c1:00:01
  ! [ESXi1-fc0]
  member pwwn 20:aa:00:25:b5:c1:00:02
  ! [ESXi2-fc0]
  member pwwn 21:00:00:1b:32:04:37:dc
  ! [SAN_ARRAY1_FabA_FC0-P0]

zoneset name VSAN101 vsan 101
  member ESXi-12-boot-datastore

zoneset activate name VSAN101 vsan 101

!Full Zone Database Section for vsan 101
zone name ESXi-12-boot-datastore vsan 101
  member pwwn 20:aa:00:25:b5:c1:00:01
  ! [ESXi1-fc0]
  member pwwn 20:aa:00:25:b5:c1:00:02
  ! [ESXi2-fc0]
  member pwwn 21:00:00:1b:32:04:37:dc
  ! [SAN_ARRAY1_FabA_FC0-P0]

zoneset name VSAN101 vsan 101
  member ESXi-12-boot-datastore

zone commit vsan 101

On MDS2:
vsan database
vsan 102
vsan 102 interface fc1/3
vsan 102 interface fc1/4

interface fc1/3
switchport mode F
no shutdown

interface fc1/4
switchport mode F
no shutdown

device-alias database
device-alias name ESXi1-fc1 pwwn 20:bb:00:25:b5:c1:00:01
device-alias name ESXi2-fc1 pwwn 20:bb:00:25:b5:c1:00:02
device-alias name ESXi-VMFEX-fc1 pwwn 20:bb:00:25:b5:c2:00:03
device-alias name C200-ESXi-fc1 pwwn 20:bb:d4:8c:b5:bd:46:0f
device-alias name SAN_ARRAY1_FabB_FC0-P2 pwwn 21:02:00:1b:32:44:37:dc
device-alias name SAN_ARRAY1_FabB_FC0-P3 pwwn 21:03:00:1b:32:64:37:dc

device-alias commit

!Active Zone Database Section for vsan 102
zone name ESXi-12-boot-datastore vsan 102
  member pwwn 20:bb:00:25:b5:c1:00:01
  ! [ESXi1-fc1]
  member pwwn 21:02:00:1b:32:44:37:dc
  ! [SAN_ARRAY1_FabB_FC0-P2]
  member pwwn 20:bb:00:25:b5:c1:00:02
  ! [ESXi2-fc1]

zoneset name VSAN102 vsan 102
  member ESXi-12-boot-datastore

zoneset activate name VSAN102 vsan 102

!Full Zone Database Section for vsan 102
zone name ESXi-12-boot-datastore vsan 102
  member pwwn 20:bb:00:25:b5:c1:00:01
  ! [ESXi1-fc1]
  member pwwn 21:02:00:1b:32:44:37:dc
  ! [SAN_ARRAY1_FabB_FC0-P2]
  member pwwn 20:bb:00:25:b5:c1:00:02
  ! [ESXi2-fc1]

zoneset name VSAN102 vsan 102
  member ESXi-12-boot-datastore

zone commit vsan 102

Verification
On MDS1, we can see that the new FC SAN vHBA Initiators have FLOGI'd and
been assigned FCIDs:
MDS1(config)# sh flog d
--------------------------------------------------------------------------------
INTERFACE        VSAN    FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------------
fc1/3            101     0x280200  21:00:00:1b:32:04:37:dc  20:00:00:1b:32:04:37:dc
                                   [SAN_ARRAY1_FabA_FC0-P0]
fc1/4            101     0x280500  21:01:00:1b:32:24:37:dc  20:01:00:1b:32:24:37:dc
                                   [SAN_ARRAY1_FabA_FC0-P1]
port-channel 2   101     0x280040  24:02:54:7f:ee:c5:7d:40  20:65:54:7f:ee:c5:7d:41
port-channel 2   101     0x280041  20:aa:00:25:b5:c1:00:01  20:ff:00:25:b5:00:00:0f
                                   [ESXi1-fc0]
port-channel 2   101     0x280042  20:aa:00:25:b5:c1:00:02  20:ff:00:25:b5:00:00:1f
                                   [ESXi2-fc0]

Total number of flogi = 5.
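When auditing larger fabrics it can help to scrape this output rather than read it by hand. A minimal sketch in Python (the SAMPLE string is an abbreviated, hypothetical copy of the output above; a real script would pull the text from the device):

```python
import re

# Sketch: pull (interface, vsan, fcid, port_wwn) tuples out of
# 'show flogi database' text.
SAMPLE = """\
fc1/3            101   0x280200  21:00:00:1b:32:04:37:dc 20:00:00:1b:32:04:37:dc
port-channel 2   101   0x280041  20:aa:00:25:b5:c1:00:01 20:ff:00:25:b5:00:00:0f
"""

ROW = re.compile(
    r"^(?P<intf>\S+(?: \d+)?)\s+(?P<vsan>\d+)\s+(?P<fcid>0x[0-9a-f]+)\s+"
    r"(?P<pwwn>(?:[0-9a-f]{2}:){7}[0-9a-f]{2})", re.M)

def parse_flogi(text):
    """Return one tuple per FLOGI entry found in the text."""
    return [(m["intf"], int(m["vsan"]), m["fcid"], m["pwwn"])
            for m in ROW.finditer(text)]

for row in parse_flogi(SAMPLE):
    print(row)
```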


MDS1(config)# sh fcns d

VSAN 101:
--------------------------------------------------------------------------
FCID        TYPE  PWWN                     (VENDOR)   FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x280040          24:02:54:7f:ee:c5:7d:40  (Cisco)    npv
0x280041          20:aa:00:25:b5:c1:00:01             scsi-fcp:init fc-gs
                  [ESXi1-fc0]
0x280042          20:aa:00:25:b5:c1:00:02             scsi-fcp:init fc-gs
                  [ESXi2-fc0]
0x280200          21:00:00:1b:32:04:37:dc  (Qlogic)   scsi-fcp:target
                  [SAN_ARRAY1_FabA_FC0-P0]
0x280500          21:01:00:1b:32:24:37:dc  (Qlogic)   scsi-fcp:target
                  [SAN_ARRAY1_FabA_FC0-P1]

Total number of entries = 5

MDS1(config)#

On MDS2, we can see that the new FC SAN vHBA initiators have FLOGI'd and
been assigned FCIDs:
MDS2# sh flog d
--------------------------------------------------------------------------------
INTERFACE   VSAN    FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------------
fc1/1       102     0x530000  20:0b:54:7f:ee:c5:6a:80  20:66:54:7f:ee:c5:6a:81
fc1/1       102     0x530002  20:bb:00:25:b5:c1:00:01  20:ff:00:25:b5:00:00:0f
                              [ESXi1-fc1]
fc1/1       102     0x530003  20:bb:00:25:b5:c1:00:02  20:ff:00:25:b5:00:00:1f
                              [ESXi2-fc1]
fc1/2       102     0x530001  20:0c:54:7f:ee:c5:6a:80  20:66:54:7f:ee:c5:6a:81
fc1/3       102     0x530500  21:02:00:1b:32:44:37:dc  20:02:00:1b:32:44:37:dc
                              [SAN_ARRAY1_FabB_FC0-P2]
fc1/4       102     0x530200  21:03:00:1b:32:64:37:dc  20:03:00:1b:32:64:37:dc
                              [SAN_ARRAY1_FabB_FC0-P3]

Total number of flogi = 6.


MDS2# sh fcns d

VSAN 102:
------------------------------------

UCS Technology Labs - UCS B-Series Service Profiles
Service Profile Instantiation - ESXi-VMFEX with Boot from iSCSI
Task
Spawn a single instance of the Service Profile Template ESXi-VMFEX-iSCSI and be
sure that the service profile is named ESXi-VMFEX-iSCSI-1.
As stated in a previous task, the iSCSI vNIC should boot from a target that is at IP
address 10.0.117.25:3260, with the target name iqn.2013-01.com.ine:storage:esxi-vmfex:1, using LUN 0.
Provision MDS1 (only) to be a gateway for iSCSI traffic with the above IP address
and target IQN, to connect on the back end to the actual FC target discovered on
MDS1's FC2/3 interface using the show flogi database command and LUN 0.
Configure the MDS1 to spoof an FC initiator with the pWWN of
20:aa:00:25:b5:c1:00:01.
Configure MDS1 to use FC zoning to allow communication strictly between the FC
initiator and disk target in this task.
Boot the blade and ensure that you see the target disk.

Configuration
On the Servers tab in the left navigation pane, filter or navigate to Servers >>
Service Profile Templates >> root org >> Sub-Organizations >> CUST2, right-click
Service Template ESXi-VMFEX-iSCSI, and click Create Service Profiles
From Template.

For the Naming Prefix, enter ESXi-VMFEX-iSCSI-, and enter 1 instance to be
created. Click OK.

Click OK.

On MDS1:
feature iscsi
iscsi enable module 1

vsan database

vsan 101 interface iscsi1/1

iscsi virtual-target name iqn.2013-01.com.ine:storage:esxi-vmfex:1


pWWN 21:01:00:1b:32:24:37:dc fc-lun 0x0000 iscsi-lun 0x0000
pWWN 21:01:00:1b:32:24:37:dc fc-lun 0x0001 iscsi-lun 0x0001
advertise interface gigabitethernet 1/1
all-initiator-permit

iscsi initiator name iqn.2013-01.com.ine:ucs:esxi-vmfex:9


static nWWN 20:01:00:05:9b:1e:5b:42
static pWWN 20:aa:00:05:05:00:00:01
vsan 101

interface GigabitEthernet1/1
ip address 10.0.117.25 255.255.255.0
no shutdown

interface iscsi1/1
no shutdown

zone mode enhanced vsan 101

zone name ESXi-VMFEX-Boot vsan 101


member pwwn 20:aa:00:05:05:00:00:01
member pwwn 21:01:00:1b:32:24:37:dc

zoneset name VSAN101 vsan 101


member ESXi-12-boot-datastore
member ESXi-VMFEX-Boot

zoneset activate name VSAN101 vsan 101

zone commit vsan 101
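Both the initiator and virtual-target names used here follow the IQN naming convention: iqn.<year>-<month>.<reversed domain>, optionally followed by :<identifier>. A rough format check can be sketched as follows; note the regex is a loose approximation for lab use, not the full RFC 3720 grammar:

```python
import re

# Sketch: loose IQN syntax check - iqn.<yyyy>-<mm>.<reversed-domain>
# optionally followed by ':<identifier>'. Approximation only.
IQN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:[^\s]+)?$")

def looks_like_iqn(name):
    return bool(IQN.match(name))

print(looks_like_iqn("iqn.2013-01.com.ine:storage:esxi-vmfex:1"))  # -> True
print(looks_like_iqn("eui.02004567A425678D"))                      # -> False
```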

Verification
The only way to test this is to temporarily disassociate one of the existing ESXi
service profiles from a blade and associate this ESXi-VMFEX-iSCSI service profile
to the blade.

Click ESXi1 in the left navigation pane and click Disassociate Service Profile in
the right pane.

Notice that when the ESXi1 service profile completes disassociation (which may
take a while), that the ESXi-VMFEX-iSCSI service profile begins association.

When that process completes, click Boot Server, and then click KVM Console in
the right pane.

Back on MDS1, as the Cisco VIC iSCSI boot driver finally gets to its stage, we begin
to see some action, such as the iSCSI interface FLOGI to the FC Fabric and an
iSCSI session setup.
MDS1# sh int br

-------------------------------------------------------------------------------
Interface            Status     Oper Mode    Oper Speed
                                             (Gbps)
-------------------------------------------------------------------------------
iscsi1/1             up         ISCSI

-------------------------------------------------------------------------------
Interface            Status   IP Address       Speed     MTU   Port
                                                               Channel
-------------------------------------------------------------------------------
GigabitEthernet1/1   up       10.0.117.25/24   1 Gbps    1500  --

MDS1(config)# sh iscsi initiator configured


iSCSI Node name is iqn.2013-01.com.ine:ucs:esxi-vmfex:9
Member of vsans: 101
Node WWN is 20:01:00:05:9b:1e:5b:42
No. of PWWN: 1
Port WWN is 20:aa:00:05:05:00:00:01
Configured node (iSCSI)
MDS1(config)# sh iscsi virtual-target
target: iqn.2013-01.com.ine:storage:esxi-vmfex:1
    Port WWN 21:01:00:1b:32:24:37:dc
    Configured node (iSCSI)
    No. of LU mapping: 2
      iSCSI LUN: 0x0000, FC LUN: 0x0000
      iSCSI LUN: 0x0001, FC LUN: 0x0001
    No. of advertised interface: 1
      GigabitEthernet 1/1
    All initiator permit is enabled
    Trespass support is disabled
    Revert to primary support is disabled

MDS1(config)# sh iscsi initiator iscsi-session


iSCSI Node name is iqn.2013-01.com.ine:ucs:esxi-vmfex:9
Initiator ip addr (s): 10.0.117.19
iSCSI alias name: INE-UCS-01
Configured node (iSCSI)
Node WWN is 20:01:00:05:9b:1e:5b:42 (configured)
Member of vsans: 101
Number of Virtual n_ports: 1

MDS1(config)# sh flog d
--------------------------------------------------------------------------------
INTERFACE        VSAN    FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------------
fc1/3            101     0x280200  21:00:00:1b:32:04:37:dc  20:00:00:1b:32:04:37:dc
                                   [SAN_ARRAY1_FabA_FC0-P0]
fc1/4            101     0x280500  21:01:00:1b:32:24:37:dc  20:01:00:1b:32:24:37:dc
                                   [SAN_ARRAY1_FabA_FC0-P1]
port-channel 1   101     0x280040  24:02:54:7f:ee:c5:7d:40  20:65:54:7f:ee:c5:7d:41
iscsi1/1         101     0x280080  20:aa:00:05:05:00:00:01  20:01:00:05:9b:1e:5b:42

Total number of flogi = 6.

MDS1(config)#

MDS1(config-iscsi-tgt)# sh iscsi session

Initiator iqn.2013-01.com.ine:ucs:esxi-vmfex:9
Initiator ip addr (s): 10.0.117.19
Session #1
Target iqn.2013-01.com.ine:storage:esxi-vmfex
VSAN 101, ISID 00023d010000, Status active, no reservation

MDS1# sh iscsi session detail

Initiator iqn.2013-01.com.ine:ucs:esxi-vmfex:9 (INE-UCS-01)


Initiator ip addr (s): 10.0.117.19
Session #1 (index 15)
Target iqn.2013-01.com.ine:storage:esxi-vmfex:1
VSAN 101, ISID 00023d010000, TSIH 12288, Status active, no reservation
Type Normal, ExpCmdSN 8, MaxCmdSN 135, Barrier 0
MaxBurstSize 262144, MaxConn 1, DataPDUInOrder Yes
DataSeqInOrder Yes, InitialR2T No, ImmediateData Yes
Registered LUN 0, Mapped LUN 2
Stats:
PDU: Command: 8, Response: 8
Bytes: TX: 200, RX: 0
Number of connection: 1
Connection #1 (index 1)
Local IP address: 10.0.117.25, Peer IP address: 10.0.117.19
CID 0, State: Full-Feature
StatSN 11, ExpStatSN 0
MaxRecvDSLength 262144, our_MaxRecvDSLength 262144
CSG 3, NSG 3, min_pdu_size 48 (w/ data 48)
AuthMethod none, HeaderDigest None (len 0), DataDigest None (len 0)
Version Min: 0, Max: 0
FC target: Up, Reorder PDU: No, Marker send: No (int 0)
Received MaxRecvDSLen key: Yes
Stats:
Bytes: TX: 200, RX: 0

MDS1#

And finally, back in our UCSM KVM session, we see the iSCSI disk and LUN.

UCS Technology Labs - UCS B-Series Service Profiles
Manual Service Profile Creation - Win2k8 Bare Metal
Task
Manually create a service profile named Win2K8-BM that contains all of the following
information:
UUIDs pulled from the global pool.
Local storage using RAID 1 (boot from that volume).
No vHBAs (no FC or iSCSI).
A single vNIC called eth0 that pulls from the vNIC Template BM-Win2K8-FabA and allows VLAN 114 with no Dot1Q tagging.
Allow changes to the service profile to reboot the server blade immediately
without any intervention.
Manually associate it to Chassis 1 Blade 3 and make sure the blade is down
by default.
Ensure the Cisco logo does not show on boot, and enable CPU cache
coherency.
Allow the service profile to be controlled via KVM using an IP from the global
pool.
Ensure that upon dissociation both the BIOS and disk are wiped clean.
Remedy any and all problems that may come from association to a blade.

Configuration
On the Servers tab in the left navigation pane, filter or navigate to Servers >>
Service Profile Templates >> root org >> Sub-Organizations, right-click the
CUST2 org, and click Create Service Profile (expert).

Enter the name Win2K8-BM and the proper pool for UUID Assignment. Click Next.

For Local Storage, choose the RAID1 policy already created. Under SAN
connectivity, select No vHBAs. Click Next.

Select no dynamic vNIC policy, and select Expert for LAN connectivity. Click Add.

Name the vNIC and choose the vNIC template as shown. Select Windows for the
Adapter Policy. Click OK.

Click Next.

Click Next.

On the left, expand Local Devices, click Add CD-ROM, and then click Add Local
Disk. Click Next.

Leave the Maintenance Policy as default to allow the immediate reboot upon any
change. Click Next.

For Server Assignment, choose to Select Existing Server and select blade 3
(probably the only available blade). Choose the power state of Down. Click Next.

For the BIOS Policy, select the previously created LoudBoot-w-Coh. For
Management IP Address Policy, select Pooled. For the Scrub Policy, select Wipe-Disk-n-BIOS to erase all settings and data on dissociation. Click Finish.

Click OK.

In the right pane on the General tab, expand Status Details and note that we have
the same problem as before (we used a different initial template on this service
profile).

For a different approach to the solution, in the left navigation pane, expand the
service profile Win2k8-BM, expand vNICs, and click vNIC eth0. In the right pane
click Modify VLANs.

This time, select only VLAN 114 and make it a Native VLAN. Click OK.

Click OK.

Now we see a different problem: this server's adapter does not support failover.
That's because in this particular blade, we have a Gen 2 Emulex adapter without the
Cisco Menlo ASIC, so no failover support.

Back in the vNIC eth0, we see that Fabric Failover is selected, as it was when we
created the vNIC template and instantiated this instance.

Clear the check box and click Save Changes.

Click Yes.

Click OK.

Verification
We see that the service profile is now associated properly and resting in the Down
state awaiting further instructions.

UCS Technology Labs - UCS B-Series OS Installation and Testing
OS Installation on CUST1 Service Profile ESXi1
Task
Install ESXi 5.0 on service profile ESXi1 in CUST1 org.
Assign the root password cciedc01.
Assign it the IP address 10.0.115.11/24 in VLAN 115 with a gateway of 10.0.115.254.

Configuration
In UCSM, in the left navigation pane on the Servers tab, navigate to Servers >>
Service Profiles >> root org >> Sub-Organizations >> CUST1 >> ESXi1. In
the right pane, click KVM Console.

Note that the server is powered off and we therefore have No Signal.

Click the Virtual Media tab, and then click Add Image.

Find the path for the VMWare 5.0 VMvisor Installer and select it. Select the Mapped

check box.

Click Boot Server.

Click OK.

Click OK.

Note the server beginning to boot and test memory.

Notice the small Cisco logo and basic information about CPU model, memory, and
speed.

Although we saw the small Cisco logo in the last screen, we do not see the large
logo that typically eclipses this entire screen. This is because quiet boot is disabled,
which is critical for ensuring that Boot-from-SAN (FC or iSCSI) can be properly
seen here. Here we see the SCSI disk that we need to boot to (SCST is the Linux
SCSI target subsystem; we are using OpenFiler here for our FC SAN array). This
is a critical step for ensuring that the next steps in trying to write to the disk will
succeed.

Press Enter.

VMware begins loading its drivers.

Press Enter.

Press F11.

VMware scans for disks to install to.

Select the proper 5GB LUN called ESXi1 to install to. Press Enter.

Press Enter.

Enter the password cciedc01 and press Enter.

Press F11.

ESXi hypervisor takes a few minutes to install.

Press Enter.

In the KVM, the DVD should automatically eject, or you can elect to manually eject it
by clearing the Mapped check box.

Click Yes.

ESXi reboots.

ESXi Loading VMKernel.

Press F2.

Enter the password cciedc01. Press Enter.

Arrow down to Configure Management Network and press Enter.

Arrow down to VLAN (optional) and press Enter.

Enter VLAN 115 (because we did not choose Native for that VLAN) and press Enter.

Note the VLAN applied.

Arrow down to IP Configuration and press Enter.

Arrow down to Set static IP and press the spacebar, then arrow down and fill in the
IP, Subnet Mask, and Gateway fields. Press OK.

Note the information applied. Press Esc.

Press Y to apply and restart the networking subsystem.

Press Esc.

We should now be able to manage this ESXi host from the vSphere VI client.

UCS Technology Labs - UCS B-Series OS Installation and Testing
OS Installation on CUST1 Service Profile ESXi2
Task
Install ESXi 5.0 on service profile ESX2 in CUST1 org.
Assign the root password cciedc01.
Assign it the IP address 10.0.115.12/24 in VLAN 115 with a gateway of 10.0.115.254.

Configuration
Perform the same tasks as before for the ESXi1 service profile, except this time
choose the second ESXi2 disk LUN to install to.

When you configure the management network, be sure to configure the proper IP

address of 10.0.115.12.

UCS Technology Labs - UCS B-Series OS Installation and Testing
OS Installation on CUST2 Service Profile Win2k8-BM
Task
Install Windows Server 2008 R2 on service profile Win2k8-BM in CUST2 org.
Assign the Administrator password Cc1edc01.
Assign it the IP address 10.0.114.21/24 in VLAN 114 with a gateway of 10.0.114.254.

Configuration
In UCSM, in the left navigation pane on the Servers tab, navigate to Servers >>
Service Profiles >> root org >> Sub-Organizations >> CUST2 >> Win2k8-BM. In
the right pane, click KVM Console.

Click the Virtual Media tab and click the Add Image button. Find the path for the
VMWare 5.0 VMvisor Installer and select it. Select the Mapped check box. At the
top, click Boot Server.

Click OK.

Click OK.

Notice the server finding the pair of 500GB HDDs in RAID 1 as a single logical
volume.

Notice it booting and seeing PXE drivers for the Emulex CNA - even though we aren't
PXE booting. If you aren't familiar with PXE (pronounced "pixie") booting, look into
it. It's part of what allows true cloud automation by use of very powerful, highly
customizable tools such as Puppet.

Press the spacebar to boot from (virtual) DVD.

Windows starts.

Click Next.

Click Install Now.

Choose an operating system and click Next.

Accept the license terms and click Next.

Select Custom.

Click the 500GB Disk 0 Partition 2.


If you were booting Windows from SAN, this would be the critical
place where you would need to click Load Driver and select the driver
for the CNA (Emulex M72KR in this case) that you previously
downloaded from Cisco.com with a proper CCO login. It would also
be critical that you had only a primary path and FC target, not a
secondary (multi)path or target. You can add those later, but not
during installation. It is also quite important that the partition already
be formatted with the proper GPT (GUID Partition Table) to allow
Win2k8R2 to see it properly and allow installation to occur.

Click OK.

Windows expands and installs necessary files.

Windows starts.

Log in.

Now we need to go back to the KVM, eject the installation DVD, and add the image
for the UCS-Bxxx-Drivers.2.x.x.iso so that we can install all the drivers necessary
into Win2k8R2.

Select the image and select the check box to map the DVD (basically, to connect it
to the blade).

Autoplay detects the DVD in Windows.

Browse in Explorer to the DVD and to the Windows directory.

Browse to the Network directory.

To find the manufacturer of our adapter, we can go back to UCSM, click the
Equipment tab, and navigate to Equipment >> Chassis >> Chassis 1 >> Servers >>
Server 3. Then, in the right pane, click the Inventory tab and then the NICs tab.
We see that the manufacturer is Emulex.

Double-click the Emulex directory.

Back in UCSM, we can find the model number by clicking Adapter1. We see that it's
an M72KR-E.

Double-click the Win2k8R2 directory.

Double-click x64.

From Server Manager, click Change System Properties.

Click the Hardware tab and click Device Manager.

In Device Manager, you will see Other devices already expanded; the obviously
unidentified Ethernet and Fibre Channel controllers underneath are our 2-logical-port 10GE NIC and 2-logical-port FCoE HBA in our Emulex CNA.

Right-click the Ethernet controller and click Update Driver Software.

Click to Browse my computer for driver software.

Click Browse.

Navigate to the proper folder and click OK.

Click Next.

Windows installs the drivers.

The proper CNA has been identified and drivers installed. Click Close.

Right-click the other Ethernet controller and click Scan for hardware changes.

You can do the same thing with the CNA here if you want or need to use FC
services. However, we are not using them in this lab because we are using local
disks for everything.

From the system tray, click the Network icon and click Open Network and Sharing
Center.

Click Change adapter settings.

Right-click Local Area Connection and click Properties.

Click Properties for IPv4.

Set the IP address, mask, and gateway as directed. Click OK.

Ping the gateway to test.

Back in the Network and Sharing Center, click Windows Firewall.

Click Turn Windows Firewall On or Off.

Because this is only a lab environment, disable both instances of firewall so we can
ping this device. Click OK.

From the 5K, we can ping this bare metal server and see that all is well.
N5K3# ping 10.0.114.21
PING 10.0.114.21 (10.0.114.21): 56 data bytes
64 bytes from 10.0.114.21: icmp_seq=0 ttl=127 time=1.831 ms
64 bytes from 10.0.114.21: icmp_seq=1 ttl=127 time=0.835 ms
64 bytes from 10.0.114.21: icmp_seq=2 ttl=127 time=0.771 ms
64 bytes from 10.0.114.21: icmp_seq=3 ttl=127 time=0.788 ms
64 bytes from 10.0.114.21: icmp_seq=4 ttl=127 time=0.751 ms

--- 10.0.114.21 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 0.751/0.995/1.831 ms
N5K3#
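The summary line can be recomputed from the five per-reply RTTs, which is a handy cross-check when scripting reachability tests. A sketch using the values printed above:

```python
# Sketch: recompute the ping summary from the per-reply RTTs above.
rtts_ms = [1.831, 0.835, 0.771, 0.788, 0.751]
sent = 5
received = len(rtts_ms)

loss_pct = 100.0 * (sent - received) / sent
print(f"{sent} packets transmitted, {received} packets received, "
      f"{loss_pct:.2f}% packet loss")
print(f"round-trip min/avg/max = "
      f"{min(rtts_ms)}/{sum(rtts_ms)/len(rtts_ms):.3f}/{max(rtts_ms)} ms")
# -> round-trip min/avg/max = 0.751/0.995/1.831 ms
```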

UCS Technology Labs - UCS B-Series OS Installation and Testing
Log In and Initialize Datastore
Task
Log in to ESXi1 host and initialize the 100GB datastore.

Configuration
Launch the vSphere VI client and log in to the ESXi1 host.

Select the check box to install the certificate and click Ignore. This appears because
it is a self-signed certificate and is not from a trusted entity such as VeriSign or
GoDaddy.

Click OK. We work with evaluation licenses because we erase and reset them fairly
often.

Click the link to create a datastore.

Select Disk/LUN and click Next.

Choose the 100GB LUN or whatever your home lab might have a LUN set to. This
must be reachable over a shared disk using FC, iSCSI, or NFSv3 if you want to use
it for vMotion, DRS, FT, and HA, although here we really only care about vMotion for
the CCIE labs. We are using FC connected LUNs reachable over our vHBAs, as
shown here. Click Next.

Select VMFS-5 and click Next.

Click Next.

Enter the datastore name datastore1. Click Next.

Select Maximum available space and click Next.

Click Finish.

Your datastore should mount, format, and initialize and be ready for use.

UCS Technology Labs - UCS B-Series OS Installation and Testing
Provision VMware vSwitch for Kernel Management
Task
Provision vNICs named eth0 and eth1 (vmnic0 and vmnic1, respectively) for
Active/Active load balancing and failover for VMKernel management.

Configuration
Navigate to ESXi1, click the Configuration tab, click Networking, select vSphere
Standard Switch, and click Properties.

Click the Network Adapters tab and click Add.

Select vmnic1 and click Next.

Ensure that both vmnic0 and vmnic1 are selected under Active Adapters and click
Next.

Click Finish. Repeat the same set of tasks for host ESXi2.

UCS Technology Labs - UCS B-Series OS Installation and Testing
Provision VMware vSwitch for Kernel vMotion
Task
Provision vNICs named eth2 (vmnic2) for VMKernel vMotion.
Assign the vMotion NIC the IP 10.0.116.11/24 for ESXi1 and the IP 10.0.116.12/24
for ESXi2.

Configuration
Navigate to ESXi1, click the Configuration tab, click Networking, select vSphere
Standard Switch, and click Add Networking.

Select VMKernel and click Next.

Select Create a vSphere Standard Switch and choose vmnic2. Click Next.

Name it VMKernel and explicitly enter the VLAN of 116 (because we didn't choose
this as a Native VLAN in UCSM). Select Use this port group for vMotion. Click
Next.
It is completely possible to simply choose Native VLAN in UCSM for
this vNIC; then, here in ESX, you would simply choose the VLAN ID
0 - which we know isn't a VLAN, but instead informs ESXi not to add a
Dot1Q header. We recommend not using Native VLAN, so that you

can clearly see the VLAN IDs here in ESXi and later in vCenter. It's
up to you, but be sure to be consistent on both sides, or you will not
be able to communicate (for example, ESXi sends dot1q header but
UCSM doesn't expect it, or ESXi doesn't send dot1q header but
UCSM does expect it).
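The note above comes down to whether a 4-byte 802.1Q tag is present on the wire. As an illustration, here is what that tag looks like for the vMotion VLAN (a bare tag built with struct packing, not a full frame builder):

```python
import struct

# Sketch: build the 4-byte 802.1Q tag the note is talking about.
# TPID 0x8100 identifies a tagged frame; the low 12 bits of the TCI
# carry the VLAN ID. PCP/DEI are left at 0 here.
def dot1q_tag(vlan_id, pcp=0, dei=0):
    tci = (pcp << 13) | (dei << 12) | (vlan_id & 0x0FFF)
    return struct.pack(">HH", 0x8100, tci)

print(dot1q_tag(116).hex())  # -> 81000074
```

VLAN ID 0 means no such tag is inserted at all, which is exactly why both sides must agree on tagged versus untagged.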

Enter the IP address and mask as shown and click Next.

Click Finish. Repeat the same set of tasks for host ESXi2.

UCS Technology Labs - UCS B-Series OS Installation and Testing
Provision VMware vSwitch for VMs
Task
Create a new vSwitch to be used for VM guests using vNICs eth3 and eth4 (vmnic3
and vmnic4) in Active/Active load balancing mode.
Create three port groups as follows:
VM-110 should explicitly use VLAN ID 110.
VM-111 should explicitly use VLAN ID 111.
vCenter should use VLAN ID 0 (no VLAN ID sent or Native VLAN).

Configuration
Navigate to ESXi1, click the Configuration tab, click Networking, select vSphere
Standard Switch, and click Add Networking.

Select Virtual Machine and click Next.

Select Create a vSphere Standard Switch and choose vmnic3 and vmnic4. Click
Next.

Enter a Network Label of VM-110, explicitly enter the VLAN of 110, and click Next.

Click Finish.

Click Properties on vSwitch2.

Click Add.

Select Virtual Machine and click Next.

Enter a Network Label of VM-111, explicitly enter the VLAN of 111, and click Next.

Click Finish.

Click Add.

Do the same to enter a Network Label of vCenter, leave the VLAN of None (0), and
click Next. Below is the result. Click Close.

This is the result after you perform the same set of tasks for host ESXi2.

UCS Technology Labs - UCS B-Series OS Installation and Testing
Deploy Guest VMs and vCenter from Inventory and Integrate ESXi Hosts
Task
Deploy four VM guest machines from the datastore whose information is as follows:
Win2k8-www-1 uses VLAN 110 and IP address 10.0.110.111.
Win2k8-www-2 uses VLAN 110 and IP address 10.0.110.112.
Win2k8-www-3 uses VLAN 110 and IP address 10.0.110.113.
vCenter uses VLAN 1 (Native or 0) and IP address 10.0.1.100.
Username/password for all three is Administrator/Cc1edc01.
When vCenter is fully deployed, add both ESXi1 and ESXi2 hosts to the cluster.

Configuration
Navigate to ESXi1, click the Configuration tab, click Storage, right-click datastore1
, and click Browse Datastore.

Find and click the folder for Win2k8-www-1, find the .vmx file in the right pane, right-click it, and click Add to Inventory.

Click Next.

Click Next.

Click Finish.

Repeat these tasks to add Win2k8-www-2 and Win2k8-www-3.

Navigate again to ESXi1, click the Configuration tab, click Storage, right-click
datastore1, and click Browse Datastore.

Find and click the folder for vCenter, find the .vmx file in the right pane, right-click it,
and click Add to Inventory.

When all hosts are successfully in ESXi1 inventory, right-click each one and click
Edit Settings.

For Network Adapter 1, for vCenter, choose the port group named vCenter. For the
other three 'www' guests, choose the port group named VM-110. Click OK when
finished with each.

When completed, if you navigate to the Configuration tab and click Networking,
you should see all of the VM guests appear in vSwitch2, as shown here.

Give vCenter a moment to fully boot and start all VMware-related services.

Open a new vSphere VI client and point it to the vCenter server at 10.0.1.100, and
log in using the appropriate user name/password.

Click Create a datacenter in the right pane.

Name it CCIE-DC. In the right pane, click Add a host.

Enter the IP address 10.0.115.11 for ESXi1, username root, and password
cciedc01. Click Next.

Click Yes for the self-signed cert.

Click Next.

Click Next.

Click Next.

Click Next.

Click Finish.

Give it a moment to activate the vCenter agent on ESXi1 and add it to the
datacenter.

After a short while, it should appear online and you should see all of the guests.

Right-click CCIE-DC in vCenter and click Add Host.

Enter the IP address 10.0.115.12 for ESXi2, the username root, and the password
cciedc01. Click Next and go through all of the same screens to add that host.

When the host is added, navigate to ESXi2, click the Configuration tab, click
Storage, select Datastores, and notice that datastore1 is not yet active, but is
coming online.

The shared datastore should fully come online and be listed as normal.

UCS Technology Labs - UCS B-Series OS Installation and Testing
Test vMotion, Ethernet Fabric Failover, and FC Multipathing
Task
Test vMotion Win2k8-www-1 from ESXi1 to ESXi2 from within vCenter.
Verify by starting a continuous ping to 10.0.110.111 before beginning the
vMotion operation.
Test Fabric Failover from A to B by disabling Ethernet uplink ports (for your rack) on
FI-A.
Verify by tracking MAC address tables in upstream switches.
Test FC multipathing from A to B by disabling FC uplink ports (for your rack) on FI-A.
Verify in ESXi that the backup path is functioning properly.

Configuration
Before we vMotion, begin a continuous ping to 10.0.110.111 from Server1/3/5/7,
depending on your rack.

In vCenter, right-click Win2k8-www-1 and click Migrate.

Leave Change host selected (not datastore - that is called Storage vMotion) and
click Next.

Select host ESXi2. Click Next.

Leave High priority selected and click Next.

Click Finish.

Note the progress at the bottom under Recent Tasks.

Notice that the guest has completed moving.

Stop the ping and notice that at most one ping (in this case, not a single ping) was
lost in the transition.

Next we will test failover, but before we do, let's grab the VMware-assigned MAC
address for the Win2k8-www-1 that is currently on ESXi2 so that we know what to
track.

Let's also start the ping again to that VM guest from Server1/3/5/7.

Let's also grab the FI port number by looking at CDP from within ESXi2. Here we
can see that vmnic3 is pinned to Vethernet 701 in FI-A.

On FI-A, look at the MAC address table to see if the MAC for that VM guest is
present, and then look at interface Vethernet 701 to determine what uplink port it is
pinned to. We can see that the MAC address for that VM guest is present on this FI
in VLAN 110, that it is mapped to Veth701, and that Veth701 is pinned to uplink
interface Po1.

INE-UCS-01-A(nxos)# sh mac address-table address 000c.298f.859d
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY    Ports
---------+-----------------+--------+---------+------+----+------------------
* 110      000c.298f.859d   dynamic   20                      Veth701

INE-UCS-01-A(nxos)#
INE-UCS-01-A(nxos)# sh pinning server-interfaces | in Veth701
Veth701                  No                  Po1                      25:19:45
INE-UCS-01-A(nxos)#
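The two show commands above are really just two chained lookups: MAC to Veth, then Veth to pinned uplink. A sketch with hypothetical tables mirroring that output:

```python
# Sketch: chain the two lookups done on FI-A above. The dictionaries are
# hypothetical stand-ins for 'show mac address-table' and
# 'show pinning server-interfaces'.
mac_table = {("110", "000c.298f.859d"): "Veth701"}
pinning = {"Veth701": "Po1"}

def uplink_for(vlan, mac):
    """Resolve a (VLAN, MAC) pair to its pinned border interface."""
    veth = mac_table.get((vlan, mac))
    return pinning.get(veth) if veth else None

print(uplink_for("110", "000c.298f.859d"))  # -> Po1
```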

Now let's check FI-B. It is clear that this same MAC address is not found (yet) on FI-B.
INE-UCS-01-B(nxos)# sh mac address-table address 000c.298f.859d
INE-UCS-01-B(nxos)#

Now let's check the upstream N5Ks (for this particular task we are working on DC
Rack 1, so N5K1 and N5K2). On N5K1, we can see that the MAC address for the
VM is learned via the port-channel 1 link down to FI-A.
N5K1# sh mac address-table address 000c.298f.859d
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY    Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 110      000c.298f.859d   dynamic   20                      Po1

N5K1#

On N5K2, we can see that the MAC address for the VM is learned via the Eth1/3
link over to N5K1.
N5K2# sh mac address-table address 000c.298f.859d
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY    Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 110      000c.298f.859d   dynamic   20                      Eth1/3

N5K2#

Now let's fail both Ethernet uplink ports on FI-A. Right-click each and click Disable.
Because we are working on DC Rack 1, this is ports 3 and 4 pictured here, but note
that they may be different uplink ports depending on your rack.

Let's check FI-A again. It has disappeared.


INE-UCS-01-A(nxos)# sh mac address-table address 000c.298f.859d
INE-UCS-01-A(nxos)#

And FI-B? Now we have learned it here on Veth702 (which we'll see in a moment is
still the same vmnic3 in ESXi, but failed over to FI-B). Notice that Veth702 is pinned
to uplink Eth1/3.
INE-UCS-01-B(nxos)# sh mac address-table address 000c.298f.859d
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY    Ports
---------+-----------------+--------+---------+------+----+------------------
* 110      000c.298f.859d   dynamic                           Veth702

INE-UCS-01-B(nxos)#
INE-UCS-01-B(nxos)# sh pinning server-interfaces | in Veth702
Veth702                  No                  Eth1/3                   25:43:20
INE-UCS-01-B(nxos)#

Let's look at the upstream N5Ks. Here we see in N5K1 that the MAC is now known
from across the Eth1/3 link over to N5K2.
N5K1# sh mac address-table address 000c.298f.859d
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY    Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 110      000c.298f.859d   dynamic                           Eth1/3

N5K1#

And in N5K2 we see that the MAC is now known from the southbound Eth1/12 link
to FI-B's Eth1/3.
N5K2# sh mac address-table address 000c.298f.859d
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY    Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
* 110      000c.298f.859d   dynamic                           Eth1/12

N5K2#

And a quick visual CDP confirmation back in vCenter shows that vmnic3 sees itself
via CDP connected to FI-B Veth702.

Re-enable those two ports in UCSM by right-clicking each and choosing Enable, and
verify that everything fails back to its original Veth port and state.

In FI-A, we see that the MAC address has re-appeared.


INE-UCS-01-A(nxos)# sh mac address-table address 000c.298f.859d
Legend:
* - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
age - seconds since last seen,+ - primary entry using vPC Peer-Link
VLAN     MAC Address       Type      age   Secure NTFY  Ports
---------+-----------------+--------+---------+------+----+-----------------
* 110    000c.298f.859d    dynamic                      Veth701
INE-UCS-01-A(nxos)#

We can also see that vmnic3 again shows connected to FI-A port Veth701.

And if we stop our ping, we see that we have lost a total of 4 packets in the failover
and failback operations. Not bad at all.

Next we test FC multipathing. Let's begin by looking at our functioning setup.


Navigate to ESXi1, click the Configuration tab, click Storage Adapters, click
vmhba1 (fc0 on Fabric A in UCSM), and click the Devices button at the bottom.
Notice all three LUNs mounted.

Click the Paths button at the bottom and notice that vmhba1 is currently not only
Active, but is used as the I/O path for all three LUNs.

Click vmhba2 (fc1 on Fabric B in UCSM) and click the Devices button at the
bottom. Notice again all three LUNs mounted.

Click the Paths button at the bottom and notice that vmhba2 currently has all three
LUNs listed as Active, but it is not being used as the I/O path for any of them.
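The pattern above — every live path reported as Active, but only one path carrying I/O per LUN until it fails — can be modeled roughly as follows. This is a toy sketch of a fixed-path-style policy, not VMware's actual multipathing plugin; the adapter names are the lab's.

```python
# Toy model of active/active multipathing with a single working I/O path per
# LUN: every live path shows as "active", but I/O uses the first live path
# until it dies, at which point the next live path takes over.

def io_path(paths):
    """Pick the first still-live path as the I/O path (fixed-style policy)."""
    return next(hba for hba, state in paths.items() if state == "active")

paths = {"vmhba1": "active", "vmhba2": "active"}
print(io_path(paths))     # -> vmhba1 (carries I/O; vmhba2 stays active/standby)

paths["vmhba1"] = "dead"  # e.g., FC uplinks on Fabric A disabled
print(io_path(paths))     # -> vmhba2 (I/O fails over to Fabric B)
```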

If we click the Storage Views tab at the top and then click Reports, we can see that
we have full redundancy.

If we click Maps, we see a nice graphical map view of our three LUNs (SCST disks)
along with the two front-end storage processors they are attached to and the
vmhbas they are accessible from.

Back in UCSM, on FI-A, right-click both port 11 and 12 and click Disable.

Click Yes for each port.

Ports are disabled. Click OK.

Back in vCenter, on the Storage Views tab, we see that the guests all show Partial
or No Redundancy. This would be the same whether we were on ESXi1 (blade 2) or
ESXi2 (blade 1) because they are both a part of the same chassis and pair of FIs.

On the Configuration tab, click Storage Adapters, select vmhba1, and notice that
no LUNs are visible under the Devices button.

Likewise on the Paths button, no paths to the LUNs are visible.

If we click vmhba2, we see that the LUNs are still present on the Devices button.

Clicking the Paths button reveals that the path is now not only Active, but it is also
being used as the I/O path.

Go back to UCSM and re-enable both FC uplink ports (re-establishing the san-port-channel up to MDS1). Then, back in vCenter on the Configuration tab, click
Storage Adapters, select vmhba1, and click Rescan All.

Click OK.

Click Yes.

On the Storage Views tab, we see that we now have been restored to full
redundancy.

UCS Technology Labs - Nexus 1000v on UCS


Deploy Nexus 1000v VSMs in L3 Mode and Install
VEMs in ESXi Hosts
Task
You will not actually perform this task. It has already been done for you.
This task is included only to demonstrate how the installation can be performed.
Install both Nexus 1000v VSMs so that they are on VMs distributed between both
ESXi1 and ESXi2 hosts.
Integrate Nexus 1000v as a DVS within the vCenter cluster that both ESXi1 and
ESXi2 are members of.
Install Nexus 1000v VEMs onto both ESXi1 and ESXi2 hosts.

Configuration
Although it is entirely possible to deploy a Nexus 1000v virtual switch
using only the Java-based Installer app, things become a bit more
difficult when you intend to run the VSMs on their own VEMs. With
that in mind, we will deploy just the VSMs here, and then come back
in a few tasks and migrate physical NICs (vNICs in our case) over, so
we do not disrupt any ESXi traffic before we have a chance to build
the necessary port profiles (which, of course, would be inside the
VSM that we haven't yet deployed).
Before we begin, we need to add a few more VLANs to our standard vSwitch2.
Navigate to ESXi1, click the Configuration tab, click Networking, select vSphere
Standard Switch, and then click Properties for vSwitch2.

Click Add.

Name the VLAN N1Kv-Control and enter VLAN ID 120. Click Next.

Click Finish.

Click Add and repeat the same process to add VLAN 121 for N1Kv-Management.

Note that both VLANs are now created and present in vSwitch2.

Perform the same set of tasks on host ESXi2.

Navigate to where you have downloaded the Nexus1000v switch installer files and
double-click the java installer .jar file.
You must have Java already installed and functioning properly to use
this installer.

Choose the Complete Installation and select the Custom option.

The prerequisites from the installer are included here for your reference. Read them
carefully. We will discuss most items in the following tasks. One of the things that
we have already covered deals with upstream switches: links connected to the N1Kv
should be filtering spanning-tree BPDUs, using spanning-tree port type edge trunk
together with BPDU filter and BPDU guard. The Nexus 1000v does not run any
instance of spanning tree, so it is important for you to design carefully.

Installer prerequisites continued...

On N5K1:
interface port-channel1
switchport mode trunk
spanning-tree port type edge trunk
spanning-tree bpdufilter enable
spanning-tree bpduguard enable
speed 10000

interface Ethernet1/1
switchport mode trunk
spanning-tree port type edge trunk
spanning-tree bpdufilter enable
spanning-tree bpduguard enable
channel-group 1 mode active

interface Ethernet1/2
switchport mode trunk
spanning-tree port type edge trunk
spanning-tree bpdufilter enable
spanning-tree bpduguard enable
channel-group 1 mode active

On N5K2:
interface Ethernet1/1
switchport mode trunk
spanning-tree port type edge trunk
spanning-tree bpdufilter enable
spanning-tree bpduguard enable

interface Ethernet1/2
switchport mode trunk
spanning-tree port type edge trunk
spanning-tree bpdufilter enable
spanning-tree bpduguard enable

N5K1# sh port-ch sum
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/1(P)    Eth1/2(P)
N5K1#

Enter the IP address of the vCenter server at 10.0.1.100, the user ID Administrator,
and the password Cc1edc01. Click Next.

Watch as the installer connects to the vCenter server and authenticates.

Enter the admin password Cc1edc01 for the new VSMs, enable Telnet and SSH access if
you want them, and choose Layer 3 as the SVS connectivity mode.

Review the configuration summary, and finally click Next to begin the deploy.

Watch the progress as the installer validates the configuration, deploys VSM1 to host
ESXi1 and VSM2 to host ESXi2, powers both virtual machines on, establishes an SSH
connection to the VSM, registers the N1Kv XML extension (plug-in key) with the
vCenter server, and creates the new DVS in vCenter.

When the big warning box comes up asking whether to migrate the hosts, their vmk
(vmkernel) interfaces, and their vmnic adapters over to the new DVS, decline any/all
of these choices. We are not going to migrate anything at this point, because we have
not yet built the required port profiles in the VSM, and moving adapters now would
disrupt our ESXi networking (although it would be the recommended choice in a
production environment where the port profiles already exist).

Note that this does not mean the VEMs have been installed into the ESXi hosts yet.
The installer reminds us that VUM (VMWare Update Manager) is required; VUM will
insert the VEM modules into both ESXi hosts later, when the hosts are added to the
DVS. Be sure that both hosts already belong to the data center that the new DVS was
created in.

When the installer completes, close it.

Verification
Navigate to Home > Inventory > Networking in vCenter and note the new N1Kv DVS
along with its port groups.

We can also telnet to the new VSM at its management IP address 10.0.121.1 and check
that both supervisors are up:
DC-3750#telnet 10.0.121.1
Trying 10.0.121.1 ... Open

Nexus 1000v Switch
login: admin
Password: Cc1edc01
N1Kv-01# sh mod
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby

Mod  Sw                  Hw
---  ------------------  ------------------------------------------------
1    4.2(1)SV2(1.1)      0.0
2    4.2(1)SV2(1.1)      0.0
N1Kv-01#

UCS Technology Labs - Nexus 1000v on UCS


Port Profile Provisioning
Task
Provision three port profiles as system uplink profiles with the following information
(in addition to the default uplink profile):
Name VM-Sys-Uplink, VLAN and System VLAN 115, Mode Trunk
Name vMotion-Uplink, VLAN 116, Mode Trunk
Name VM-Guests, VLAN 110-112, 120-121 and System VLANs 120-121,
Mode Trunk
Provision seven port profiles as VM profiles with the following information (in addition
to the default veth profile):
Name VMKernel, VLAN and System VLAN 115, Mode Access
Name vMotion, VLAN 116, Mode Access
Name VM-110, VLAN 110, Mode Access
Name VM-111, VLAN 111, Mode Access
Name VM-112, VLAN 112, Mode Access
Name N1Kv-Control, VLAN 120, Mode Access
Name N1Kv-Management, VLAN 121, Mode Access
In N1Kv, we call things port profiles; in VMware, we call these very
same things port groups. Throughout the rest of the lesson, we will
refer to them as port profiles to prevent confusion, but please be
aware that they are technically called port groups in VMWare. In fact,
in the configuration for each port profile, we will see a specific line that
tells the port profile to create a VMWare port group.

Configuration
On N1Kv VSM:
Uplink port profiles are defined with type ethernet. System VLAN is a special VLAN
that allows for instant communication prior to a VEM being inserted into the DVS.
This allows forwarding to occur on a VEM prior to a VSM programming its
forwarding tables into it.


port-profile type ethernet VM-Sys-Uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 115
no shutdown
system vlan 115
state enabled

port-profile type ethernet vMotion-Uplink


vmware port-group
switchport mode trunk
switchport trunk allowed vlan 116
no shutdown
state enabled

port-profile type ethernet VM-Guests


vmware port-group
switchport mode trunk
switchport trunk allowed vlan 110,111,112,120,121
no shutdown
system vlan 120,121
state enabled

VM guest port profiles are defined with type vethernet. It is very important to include
the capability l3control to whatever vethernet profile will be used for VMKernel
management. This allows the VEM to encapsulate its messages into UDP 4785 and
route them on to the VSM (the VEM already knows the VSM IP address because we
installed the VEM into the ESXi host via VUM).
port-profile type vethernet VMKernel
capability l3control
vmware port-group
switchport mode access
switchport access vlan 115
no shutdown
system vlan 115
state enabled
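To illustrate the point above — that with capability l3control the VEM-to-VSM control channel is nothing more exotic than ordinary routed UDP on port 4785 — here is a toy loopback exchange on that port. The payload is invented for illustration; the real control protocol's message format is not shown here.

```python
# Sketch: VEM-to-VSM Layer 3 control traffic is plain routed UDP on port 4785.
# A loopback listener stands in for the VSM and a sender for the VEM; the
# payload "vem-heartbeat" is a made-up placeholder, not the real protocol.
import socket

VSM_PORT = 4785  # UDP port used by N1Kv L3 control (capability l3control)

listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # "VSM"
listener.bind(("127.0.0.1", VSM_PORT))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)    # "VEM"
sender.sendto(b"vem-heartbeat", ("127.0.0.1", VSM_PORT))

data, addr = listener.recvfrom(1024)
print(data.decode())  # -> vem-heartbeat
sender.close()
listener.close()
```

Because the transport is routable UDP, the VEM and VSM no longer need to share a Layer 2 control VLAN, which is exactly why Layer 3 mode was chosen during the install.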

port-profile type vethernet vMotion


vmware port-group
switchport mode access
switchport access vlan 116

no shutdown
state enabled

port-profile type vethernet VM-110


vmware port-group
switchport mode access
switchport access vlan 110
no shutdown
state enabled

port-profile type vethernet VM-111


vmware port-group
switchport mode access
switchport access vlan 111
no shutdown
state enabled

port-profile type vethernet VM-112


vmware port-group
switchport mode access
switchport access vlan 112
no shutdown
state enabled

port-profile type vethernet N1Kv-Control


vmware port-group
switchport mode access
switchport access vlan 120
no shutdown
system vlan 120
state enabled

port-profile type vethernet N1Kv-Management


vmware port-group
switchport mode access
switchport access vlan 121
no shutdown
system vlan 121
state enabled

Verification
Navigate to ESXi1, click the Configuration tab, click Networking, select
vSphere Distributed Switch, and note that the new port profiles appear on either
side of the switch (uplink on the right and VM guest on the left).

Navigate to ESXi2, click the Configuration tab, click Networking, select
vSphere Distributed Switch, and note that the new port profiles appear on either
side of the switch.

UCS Technology Labs - Nexus 1000v on UCS


Migrating Physical Adapters and VM Guests to N1Kv
DVS
Task
Migrate one physical adapter to each uplink (Ethernet) port profile on the N1Kv DVS.
Migrate all VM guests, including vCenter and both N1Kv VSMs, over to run on the
N1Kv VEMs.
Create any needed port profiles if not previously created.
Before any VEMs join the N1Kv, ensure that VEM running on ESXi1 joins the N1Kv
as module 3 and VEM running on ESXi2 joins the N1Kv as module 4.

Configuration
To start, let's check the UUIDs of the blades in UCSM, because they serve as the
defining value of the VEM inside the N1Kv.
On ESXi1 Service Profile:

And on ESXi2 Service Profile:

On N1Kv:
vem 3
host vmware id 8f862310-4c63-11e2-0000-00000000000f
vem 4

host vmware id 8f862310-4c63-11e2-0000-00000000001f
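Pre-provisioning the slot numbers this way is easy to script. The sketch below is a hypothetical helper (the UUIDs are the lab's real values) that renders the same two stanzas, normalizing the UUIDs to the lowercase form shown in the CLI:

```python
# Render N1Kv "vem" stanzas that pin each blade's UUID to a fixed module slot.
# The UUID must match what ESXi reports to the VSM, i.e. the UUID that UCSM
# assigned to the blade's service profile.

def vem_slot_config(slots):
    """Build 'vem N / host vmware id <uuid>' lines from a slot->UUID map."""
    lines = []
    for slot in sorted(slots):
        lines.append("vem %d" % slot)
        lines.append("host vmware id %s" % slots[slot].lower())
    return "\n".join(lines)

slots = {
    3: "8F862310-4C63-11E2-0000-00000000000F",  # ESXi1 service profile UUID
    4: "8F862310-4C63-11E2-0000-00000000001F",  # ESXi2 service profile UUID
}
print(vem_slot_config(slots))
```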

We will intentionally start with ESXi2 to avoid disrupting vCenter or VSM1 traffic.
Navigate to ESXi2, click the Configuration tab, click Networking, select
vSphere Distributed Switch, and click Manage Physical Adapters.

Under the VM-Sys-Uplink port profile, click Click to Add NIC.

Choose vmnic0 and click OK.

We have a redundant vmnic for VMKernel traffic, so this shouldn't be too disruptive.
Click Yes.

Under the vMotion-Uplink port profile, click Click to Add NIC.

Choose vmnic2 and click OK.

We don't have a redundant vmnic for vMotion, but it isn't critical (in production it
would be, but we aren't in production). Click Yes.

Under the VM-Guests port profile, click Click to Add NIC.

Choose vmnic3 and click OK.

We also have a redundant vmnic for VM-Guest traffic, so again this shouldn't be too
disruptive. Click Yes.

Click OK.

Note the new adapters under their respective uplink port profiles; they should all
show green 8-pin modular connector icons. Click Manage Virtual Adapters to
begin moving VMkernel interfaces over to the new DVS.

Click Add.

Select Migrate existing virtual adapters. Click Next.

Select the Management Network adapter, double-click under Port Group to


display the drop-down menu, and select the VMKernel veth port profile from the
N1Kv DVS.

Do the same for the vMotion adapter, but select the vMotion veth port profile from
the N1Kv DVS. Click Next.

Click Finish.

Click Close.

Note that we see both VMK interfaces moved to the DVS.

Verification
In N1Kv, note that the VEM module comes online and inserts into the N1Kv DVS
properly as VEM 4.
By default, modules are inserted and dynamically assigned the first
available slot number; however, based on what we did in the first few
tasks, we ensured that it would be inserted as module 4. It is a very
good practice to keep VEM numbers synchronized with your ESXi
hosts in some way, if possible. They can be changed later, but it is
much preferred to set them up properly from the start.

N1Kv-01#
2013 Feb 20 19:16:32 N1Kv-01 %VEM_MGR-2-VEM_MGR_DETECTED: Host 10.0.115.12 detected as module 4
2013 Feb 20 19:16:32 N1Kv-01 %VEM_MGR-2-MOD_ONLINE: Module 4 is online
N1Kv-01# sh mod
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
4    248    Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw
---  ------------------  ------------------------------------------------
1    4.2(1)SV2(1.1)      0.0
2    4.2(1)SV2(1.1)      0.0
4    4.2(1)SV2(1.1)      VMware ESXi 5.0.0 Releasebuild-623860 (3.0)

Mod  MAC-Address(es)                          Serial-Num
---  ---------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA
4    02-00-0c-00-03-00 to 02-00-0c-00-03-80   NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    10.0.121.1       NA                                    NA
2    10.0.121.1       NA                                    NA
4    10.0.115.12      8f862310-4c63-11e2-0000-00000000001f  10.0.115.12

* this terminal session
N1Kv-01#

Take note of the new Ethernet (physical) interfaces in the N1Kv.


N1Kv-01# sh run | b "interface Ethernet"

interface Ethernet4/1
inherit port-profile VM-Sys-Uplink

interface Ethernet4/3
inherit port-profile vMotion-Uplink

interface Ethernet4/4
inherit port-profile VM-Guests

And also notice the new Vethernet (virtual VM) interfaces in the N1Kv.
N1Kv-01# sh run | b "interface Vethernet"

interface Vethernet1
inherit port-profile VMKernel
description VMware VMkernel, vmk0
vmware dvport 160 dvswitch uuid "2a 80 3b 50 37 b7 1b 8e-18 98 be 68 05 cd ba 32"
vmware vm mac 0025.B501.011B

interface Vethernet2
inherit port-profile vMotion
description VMware VMkernel, vmk1
vmware dvport 192 dvswitch uuid "2a 80 3b 50 37 b7 1b 8e-18 98 be 68 05 cd ba 32"
vmware vm mac 0050.5670.8A91

Configuration
Back in vCenter, right-click VSM2 in ESXi2 and click Edit Settings.

Change Network Adapter 1 to the new N1Kv-Control (N1Kv-01) port group in the
DVS.
You can distinguish DVS port profiles from standard vSwitch groups
by the fact that they always indicate which DVS they are a part of in
parentheses at the end of the name. (Note that VMWare can run
multiple different DVS at one time.)

Change Network Adapter 2 to the new N1Kv-Management (N1Kv-01) port group in


the DVS.

Change Network Adapter 3 to the new N1Kv-Control (N1Kv-01) port group in the
DVS. Click OK.

Note in the vDS page for ESXi2 how they have been assigned and show green
8P8C for connectivity.

Verification
We see that VSM2 has power-cycled but is still seen by the N1Kv.
N1Kv-01# sh mod
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module                             powered-up
4    248    Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw
---  ------------------  ------------------------------------------------
1    4.2(1)SV2(1.1)      0.0
4    4.2(1)SV2(1.1)      VMware ESXi 5.0.0 Releasebuild-623860 (3.0)

Mod  MAC-Address(es)                          Serial-Num
---  ---------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA
4    02-00-0c-00-03-00 to 02-00-0c-00-03-80   NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    10.0.121.1       NA                                    NA
4    10.0.115.12      8f862310-4c63-11e2-0000-00000000001f  10.0.115.12

* this terminal session
N1Kv-01#

Take note of the new Vethernet interfaces in the N1Kv.


It is important to understand something critical in the Nexus 1000v
switch: Although Vethernet interface numbering doesn't typically
change after it is assigned (even when vMotioning a VM to another
ESXi host), these are still virtual interfaces and they can be destroyed
(e.g., by deleting a VM altogether). The N1Kv refers to these virtual
interfaces, or Vethernet interfaces, for use in the forwarding tables with
a value known as Local Target Logic (LTL). We will begin to see
these LTL values more and more as we look at pinning in future labs.
But for now, remember one more critical point: a VSM will always
have three interfaces, they will always be ordered as Control,
Management, Packet, and they will always be assigned LTL values 10, 11,
and 12, respectively. This can greatly aid in troubleshooting.
vemcmd show port-old will show you the values for these and
all other eth and veth interfaces, and we'll discuss various vemcmd
options later.

N1Kv-01# sh run | b "interface Vethernet3"

interface Vethernet3
inherit port-profile N1Kv-Control
description N1Kv-01-VSM-2, Network Adapter 1
vmware dvport 320 dvswitch uuid "2a 80 3b 50 37 b7 1b 8e-18 98 be 68 05 cd ba 32"
vmware vm mac 0050.56BB.3A87

interface Vethernet4
inherit port-profile N1Kv-Management
description N1Kv-01-VSM-2, Network Adapter 2
vmware dvport 352 dvswitch uuid "2a 80 3b 50 37 b7 1b 8e-18 98 be 68 05 cd ba 32"
vmware vm mac 0050.56BB.6099

interface Vethernet5
inherit port-profile N1Kv-Control
description N1Kv-01-VSM-2, Network Adapter 3
vmware dvport 321 dvswitch uuid "2a 80 3b 50 37 b7 1b 8e-18 98 be 68 05 cd ba 32"
vmware vm mac 0050.56BB.14BB

Configuration
We will continue on and migrate the other host on ESXi2. Right-click Win2k8-www-1
and click Edit Settings.

Change Network adapter 1 to VM-110 (N1Kv-01). Click OK.

Note its appearance as connected (green) in the vDS.

Verification
From any switch, we should be able to ping it, and we should also be able to ping
the other two VMs still on the standard vSwitch on ESXi1.
DC-3750#ping 10.0.110.111

Type escape sequence to abort.


Sending 5, 100-byte ICMP Echos to 10.0.110.111, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/9 ms
DC-3750#ping 10.0.110.112

Type escape sequence to abort.


Sending 5, 100-byte ICMP Echos to 10.0.110.112, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/4/9 ms
DC-3750#ping 10.0.110.113

Type escape sequence to abort.


Sending 5, 100-byte ICMP Echos to 10.0.110.113, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/8 ms
DC-3750#

Take note of the new Vethernet (virtual VM) interfaces in the N1Kv.
N1Kv-01# sh run | b "interface Vethernet6"

interface Vethernet6
inherit port-profile VM-110
description Win2k8-www-1, Network Adapter 1
vmware dvport 224 dvswitch uuid "2a 80 3b 50 37 b7 1b 8e-18 98 be 68 05 cd ba 32"
vmware vm mac 000C.298F.859D

Configuration
Now we will move the physical adapters on ESXi1 over using the same method that
we used on ESXi2. Navigate to ESXi1, click the Configuration tab, click
Networking, select vSphere Distributed Switch, and click Manage Physical
Adapters. Click Click to Add NIC for VM-Sys-Uplink, vMotion-Uplink, and VM-Guests, and
choose adapters vmnic0, vmnic2, and vmnic3 for the three uplink port profiles,
respectively. Click OK.

Note their appearance as connected in the vDS. Click Manage Virtual Adapters.

Click Add.

Select Migrate existing virtual adapters. Click Next.

Select the Management Network adapter, double-click under Port Group to
display the drop-down menu, and choose the VMKernel veth port profile. For the
vMotion adapter, choose the vMotion veth port profile from the N1Kv DVS. Click Next.

Click Finish.

Click Close.

Verification
The VEM comes online as module 3.
N1Kv-01(config-vem-slot)# sh mod
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
3    248    Virtual Ethernet Module           NA                  ok
4    248    Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw
---  ------------------  ------------------------------------------------
1    4.2(1)SV2(1.1)      0.0
2    4.2(1)SV2(1.1)      0.0
3    4.2(1)SV2(1.1)      VMware ESXi 5.0.0 Releasebuild-623860 (3.0)
4    4.2(1)SV2(1.1)      VMware ESXi 5.0.0 Releasebuild-623860 (3.0)

Mod  MAC-Address(es)                          Serial-Num
---  ---------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA
3    02-00-0c-00-04-00 to 02-00-0c-00-04-80   NA
4    02-00-0c-00-03-00 to 02-00-0c-00-03-80   NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    10.0.121.1       NA                                    NA
2    10.0.121.1       NA                                    NA
3    10.0.115.11      8f862310-4c63-11e2-0000-00000000000f  10.0.115.11
4    10.0.115.12      8f862310-4c63-11e2-0000-00000000001f  10.0.115.12

* this terminal session

Note the new Ethernet interfaces, and notice that any interfaces belonging to
module 3 on ESXi1 will be numbered as Ethernet 3/x and any interfaces belonging
to module 4 on ESXi2 will be numbered as Ethernet 4/x.
N1Kv-01(config-vem-slot)# sh run | b "interface E"

interface Ethernet3/1
inherit port-profile VM-Sys-Uplink

interface Ethernet3/3
inherit port-profile vMotion-Uplink

interface Ethernet3/4
inherit port-profile VM-Guests

interface Ethernet4/1
inherit port-profile VM-Sys-Uplink

interface Ethernet4/3
inherit port-profile vMotion-Uplink

interface Ethernet4/4
inherit port-profile VM-Guests

Also note the new Vethernet interfaces for vmk management and vmotion.

N1Kv-01(config-vem-slot)# sh run | b "interface Vethernet7"

interface Vethernet7
inherit port-profile VMKernel
description VMware VMkernel, vmk0
vmware dvport 161 dvswitch uuid "2a 80 3b 50 37 b7 1b 8e-18 98 be 68 05 cd ba 32"
vmware vm mac 0025.B501.010D

interface Vethernet8
inherit port-profile vMotion
description VMware VMkernel, vmk1
vmware dvport 193 dvswitch uuid "2a 80 3b 50 37 b7 1b 8e-18 98 be 68 05 cd ba 32"
vmware vm mac 0050.567F.633F

Configuration
We haven't added a port profile for vCenter (VLAN 1) in N1Kv before now, so now is
a good time, before we begin migrating the remainder of the VM guests over. Before
we do this, we need to modify our uplink VM-Guests profile to include VLAN 1, and
because vCenter is rather critical, it's a good idea to make it a system VLAN as
well - that way, it can begin forwarding traffic on the VEM even before the VEM is
inserted into the N1Kv DVS.
port-profile type ethernet VM-Guests
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1,110,120-121
no shutdown
system vlan 1,120-121
state enabled

port-profile type vethernet vCenter
vmware port-group
switchport mode access
switchport access vlan 1
no shutdown
system vlan 1
state enabled

Note that syslog should inform you that the DVPG, or Distributed Virtual Port Group,
was created in vCenter.

2013 Feb 20 21:23:27 N1Kv-01 %VMS-5-DVPG_CREATE: created port-group 'vCenter' on the vCenter Server.

Back in vCenter, right-click the vCenter VM and click Edit Settings.

Migrate Network adapter 1 over to the vCenter (N1Kv-01) vDS port profile/group.
Click OK.

Verification
We see a Veth interface created for vCenter, so everything should still be responsive
in the vCenter client.

2013 Feb 20 21:24:30 N1Kv-01 %VIM-5-IF_ATTACHED: Interface Vethernet14 is attached to Network Adapter 1 of vCenter o
2013 Feb 20 21:24:30 N1Kv-01 %ETHPORT-5-IF_UP: Interface Vethernet14 is up in mode access

Configuration
Continue to migrate the rest of the VM guests' NICs on ESXi1. When finished, every
guest on both hosts should be fully migrated off of any local vSwitch and running
solely on the Nexus 1000v DVS platform.

Verification
In vCenter on the vDS for ESXi1, we should see all VMs running on the N1Kv.

In the N1Kv CLI, we should see all of the Eth and Veth interfaces populated; note
that the Veth interfaces populate the description automatically with the name of the
running VM occupying that respective interface.
N1Kv-01# sh int status

--------------------------------------------------------------------------------
Port           Name                 Status   Vlan      Duplex  Speed    Type
--------------------------------------------------------------------------------
mgmt0          --                   up       routed    full    1000     --
Eth3/1         --                   up       trunk     full    1000     --
Eth3/3         --                   up       trunk     full    unknown  --
Eth3/4         --                   up       trunk     full    unknown  --
Eth4/1         --                   up       trunk     full    1000     --
Eth4/3         --                   up       trunk     full    unknown  --
Eth4/4         --                   up       trunk     full    unknown  --
Veth1          VMware VMkernel, v   up       115       auto    auto     --
Veth2          VMware VMkernel, v   up       116       auto    auto     --
Veth3          N1Kv-01-VSM-2, Net   up       120       auto    auto     --
Veth4          N1Kv-01-VSM-2, Net   up       121       auto    auto     --
Veth5          N1Kv-01-VSM-2, Net   up       120       auto    auto     --
Veth6          Win2k8-www-1, Netw   up       110       auto    auto     --
Veth7          VMware VMkernel, v   up       115       auto    auto     --
Veth8          VMware VMkernel, v   up       116       auto    auto     --
Veth9          N1Kv-01-VSM-1, Net   up       120       auto    auto     --
Veth10         N1Kv-01-VSM-1, Net   up       121       auto    auto     --
Veth11         N1Kv-01-VSM-1, Net   up       120       auto    auto     --
Veth12         Win2k8-www-2, Netw   up       110       auto    auto     --
Veth13         Win2k8-www-3, Netw   up       110       auto    auto     --
Veth14         vCenter, Network A   up       1         auto    auto     --
control0       --                   up       routed    full    1000     --
N1Kv-01#
UCS Technology Labs - Nexus 1000v on UCS


Nexus 1000v Virtual Port Channel - Host Mode (vPC-HM)
Task
Provision vPC Host Mode (vPC-HM) on the N1Kv so that it returns adapters to the
same groups and redundant states they were in when they were running on the
standard vSwitches (vmnic0 and vmnic1 together for VM-Sys-Uplink, vmnic3 and
vmnic4 together for VM-Guests for each ESXi VEM).
Migrate the remaining physical adapters off of ESXi standard vSwitches and onto the
N1Kv vDS.

Configuration
On N1Kv, let's look at the existing hashing algorithm for load balancing traffic. We
can see that there are 17 options for hashing traffic up over the uplinks, ranging
from very broad to very granular.

N1Kv-01(config)# port-channel load-balance ethernet ?
  dest-ip-port              Destination IP address and L4 port
  dest-ip-port-vlan         Destination IP address, L4 port and VLAN
  destination-ip-vlan       Destination IP address and VLAN
  destination-mac           Destination MAC address
  destination-port          Destination L4 port
  source-dest-ip-port       Source & Destination IP address and L4 port
  source-dest-ip-port-vlan  Source & Destination IP address, L4 port and VLAN
  source-dest-ip-vlan       Source & Destination IP address and VLAN
  source-dest-mac           Source & Destination MAC address
  source-dest-port          Source & Destination L4 port
  source-ip-port            Source IP address and L4 port
  source-ip-port-vlan       Source IP address, L4 port and VLAN
  source-ip-vlan            Source IP address and VLAN
  source-mac                Source MAC address
  source-port               Source L4 port
  source-virtual-port-id    Source Virtual Port Id
  vlan-only                 VLAN only

But if we look at the default in the config, we see that it is a very basic method of
pinning traffic coming from one source MAC address and going to one of the uplink
port-groups.
N1Kv-01(config)# sh run | in "port-channel load-balance"
port-channel load-balance ethernet source-mac
N1Kv-01(config)#

Does this mean that any one VM can only send traffic up one link to the upstream
switch? In fact, it does. Let's recall our topology and take some time to consider this
very important topic in N1Kv.
When we remember that we are running Nexus 1000v on top of UCS, and that each
of our uplinks (in this scenario, at least) travels north from each blade up to a
different IOM and therefore to a different FI, and we remember that these FIs have
completely separate forwarding tables, MAC address learning, etc. and appear to
each upstream switch as independent hosts, you can see how there is no possible
way that we can form port channels from the two separate upstream switches down
to the two separate FIs.
If that is the case, how can we possibly do a port channel here? This is the magic
behind vPC-HostMode, or vPC-HM. (By the way, this really has nothing to do with
the vPC that runs on the N5K/7Ks at all - it's more of a marketing term.) The magic
is in the load-balancing hash.


Because we only ever pin traffic from one VM (based on source MAC) up to one of the two uplinks, which shows itself to only one of the two switches upstream from the corresponding FI, the upstream switching needs no knowledge that what is happening to the south is some (strange) form of a port channel. There is no LACP; the only real purpose of "mode on" here is to tell the N1Kv that there is a port channel, that it should be considered up, and that traffic can be forwarded across it - but in this case, only based on source MAC. Also note that if one link goes down, traffic fails over to the other link in the port channel: the MAC address simply disappears from one FI (and therefore its corresponding upstream switch) and appears on the other FI (where it is learned by the other upstream switch).
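The pinning behavior described above can be sketched in a few lines. This is an illustrative model only - the VEM's real mac-pinning hash is internal - but the MAC and uplink names are taken from this lab:

```python
import zlib

# Illustrative model only (the actual VEM hash is not documented here):
# one source MAC is deterministically pinned to one uplink sub-group,
# and re-pinned to a surviving uplink if its link fails.
def pin_uplink(src_mac: str, active_uplinks: list) -> str:
    if not active_uplinks:
        raise ValueError("no active uplinks")
    idx = zlib.crc32(src_mac.lower().encode()) % len(active_uplinks)
    return active_uplinks[idx]

uplinks = ["vmnic3", "vmnic4"]
vm_mac = "0050.56BB.3D01"                 # VSM-1's vNIC MAC from this lab

primary = pin_uplink(vm_mac, uplinks)     # every frame from this MAC uses this uplink
survivors = [u for u in uplinks if u != primary]
failover = pin_uplink(vm_mac, survivors)  # after a link failure, re-pin
print(primary, "->", failover)
```

The key property is determinism: the same MAC always hashes to the same uplink while the membership is stable, so each upstream switch only ever learns that MAC on one port.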
In that case, you might ask, why are all of the other load-balance hashes even here? Remember that although we are running N1Kv on a UCS B-Series architecture in this particular scenario, nothing prevents us from also running it on standard pizza-box rack-mount servers, where two NICs are, in fact, in the same chassis and connected to the same upstream switch (or even to a proper pair of VSS or vPC switches).
Let's go ahead and turn on the vPC-HM type of port channel. Note that all port
channels in N1Kv use the auto command to automatically assign PC numbers.
port-profile type ethernet VM-Sys-Uplink
channel-group auto mode on mac-pinning

Verification
Watch as the port channel comes up.
2013 Feb 20 21:37:02 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_CHANNEL_ID_ASSIGNED:
Assigning port channel number 1 for member ports Ethernet3/1

2013 Feb 20 21:37:02 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD: Interface Ethernet3/1 is added to port-c


2013 Feb 20 21:37:02 N1Kv-01 %ETH_PORT_CHANNEL-5-CREATED: port-channel1 created

2013 Feb 20 21:37:02 N1Kv-01 %ETHPORT-5-IF_DOWN_CHANNEL_MEMBERSHIP_UPDATE_IN_PROGRESS: Interface Ethernet3/1 is down

2013 Feb 20 21:37:03 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD: Interface Ethernet3/1 is added to port-c


2013 Feb 20 21:37:03 N1Kv-01 %ETHPORT-5-SPEED: Interface Ethernet3/1, operational speed changed to 1 Gbps
2013 Feb 20 21:37:03 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface Ethernet3/1, operational duplex mode changed to Full
2013 Feb 20 21:37:03 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet3/1, operational Receive Flow Control

2013 Feb 20 21:37:03 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet3/1, operational Transmit Flow Control
2013 Feb 20 21:37:03 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_CHANNEL_ID_ASSIGNED:
Assigning port channel number 2 for member ports Ethernet4/1

2013 Feb 20 21:37:03 N1Kv-01 %ETHPORT-5-SPEED: Interface port-channel1, operational speed changed to 1 Gbps
2013 Feb 20 21:37:03 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface port-channel1, operational duplex mode changed to Full

2013 Feb 20 21:37:03 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface port-channel1, operational Receive Flow Contro

2013 Feb 20 21:37:03 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface port-channel1, operational Transmit Flow Contr

2013 Feb 20 21:37:03 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD: Interface Ethernet4/1 is added to port-c


2013 Feb 20 21:37:03 N1Kv-01 %ETH_PORT_CHANNEL-5-CREATED: port-channel2 created

2013 Feb 20 21:37:03 N1Kv-01 %ETHPORT-5-IF_DOWN_CHANNEL_MEMBERSHIP_UPDATE_IN_PROGRESS: Interface Ethernet4/1 is down


2013 Feb 20 21:37:03 N1Kv-01 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel1: Ethernet3/1 is up

2013 Feb 20 21:37:04 N1Kv-01 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel1: first operational port changed from non
2013 Feb 20 21:37:04 N1Kv-01 %ETHPORT-5-IF_UP: Interface Ethernet3/1 is up in mode trunk
2013 Feb 20 21:37:04 N1Kv-01 %ETHPORT-5-IF_UP: Interface port-channel1 is up in mode trunk

2013 Feb 20 21:37:05 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD: Interface Ethernet4/1 is added to port-c


2013 Feb 20 21:37:05 N1Kv-01 %ETHPORT-5-SPEED: Interface Ethernet4/1, operational speed changed to 1 Gbps
2013 Feb 20 21:37:05 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface Ethernet4/1, operational duplex mode changed to Full
2013 Feb 20 21:37:05 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet4/1, operational Receive Flow Control

2013 Feb 20 21:37:05 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet4/1, operational Transmit Flow Control
2013 Feb 20 21:37:05 N1Kv-01 %ETHPORT-5-SPEED: Interface port-channel2, operational speed changed to 1 Gbps
2013 Feb 20 21:37:05 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface port-channel2, operational duplex mode changed to Full

2013 Feb 20 21:37:05 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface port-channel2, operational Receive Flow Contro

2013 Feb 20 21:37:05 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface port-channel2, operational Transmit Flow Contr
2013 Feb 20 21:37:05 N1Kv-01 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel2: Ethernet4/1 is up

2013 Feb 20 21:37:05 N1Kv-01 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel2: first operational port changed from non
2013 Feb 20 21:37:05 N1Kv-01 %ETHPORT-5-IF_UP: Interface Ethernet4/1 is up in mode trunk
2013 Feb 20 21:37:05 N1Kv-01 %ETHPORT-5-IF_UP: Interface port-channel2 is up in mode trunk
N1Kv-01(config-port-prof)#

So why did it create two port channels? Because we have two ESXi hosts, and
therefore two VEMs, and there could never be a port channel that spanned multiple
ESXi hosts - because if you think about it, there is only one physical/electrical way
out of each ESXi server, and that is with the physical NIC(s)/physical KR traces that
go from each blade to each IOM/FI.
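The per-VEM channel assignment just described can be modeled as follows. This is a sketch of the assumed grouping logic, not the actual channel-group auto implementation; the interface names match the log output above:

```python
from itertools import count

# Members are grouped by slot/module number (one VEM per ESXi host),
# and each group gets the next free channel number - matching the log,
# where Eth3/1 lands in Po1 and Eth4/1 in Po2.
def assign_channels(member_interfaces):
    next_id = count(1)
    channels = {}
    for intf in sorted(member_interfaces):
        slot = intf.removeprefix("Ethernet").split("/")[0]
        if slot not in channels:
            channels[slot] = next(next_id)
    return channels

print(assign_channels(["Ethernet3/1", "Ethernet4/1"]))   # {'3': 1, '4': 2}
```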
Let's do the same for the VM-Guests port profile (note that we do not need this for
vMotion, because it has only one physical NIC per ESXi host).
port-profile type ethernet VM-Guests
channel-group auto mode on mac-pinning

Verification
2013 Feb 20 21:44:25 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_CHANNEL_ID_ASSIGNED:
Assigning port channel number 3 for member ports Ethernet3/4

2013 Feb 20 21:44:25 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD: Interface Ethernet3/4 is added to port-c


2013 Feb 20 21:44:25 N1Kv-01 %ETH_PORT_CHANNEL-5-CREATED: port-channel3 created

2013 Feb 20 21:44:25 N1Kv-01 %ETHPORT-5-IF_DOWN_CHANNEL_MEMBERSHIP_UPDATE_IN_PROGRESS: Interface Ethernet3/4 is down

2013 Feb 20 21:44:26 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD: Interface Ethernet3/4 is added to port-c


2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-SPEED: Interface Ethernet3/4, operational speed changed to 20 Gbps
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface Ethernet3/4, operational duplex mode changed to Full
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet3/4, operational Receive Flow Control

2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet3/4, operational Transmit Flow Control
2013 Feb 20 21:44:26 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_CHANNEL_ID_ASSIGNED:
Assigning port channel number 4 for member ports Ethernet4/4

2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-SPEED: Interface port-channel3, operational speed changed to 20 Gbps
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface port-channel3, operational duplex mode changed to Full

2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface port-channel3, operational Receive Flow Contro

2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface port-channel3, operational Transmit Flow Contr

2013 Feb 20 21:44:26 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD: Interface Ethernet4/4 is added to port-c


2013 Feb 20 21:44:26 N1Kv-01 %ETH_PORT_CHANNEL-5-CREATED: port-channel4 created

2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_DOWN_CHANNEL_MEMBERSHIP_UPDATE_IN_PROGRESS: Interface Ethernet4/4 is down


2013 Feb 20 21:44:26 N1Kv-01 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel3: Ethernet3/4 is up

2013 Feb 20 21:44:26 N1Kv-01 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel3: first operational port changed from non
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_UP: Interface Ethernet3/4 is up in mode trunk
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_UP: Interface port-channel3 is up in mode trunk

2013 Feb 20 21:44:27 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD: Interface Ethernet4/4 is added to port-c


2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-SPEED: Interface Ethernet4/4, operational speed changed to 20 Gbps
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface Ethernet4/4, operational duplex mode changed to Full
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet4/4, operational Receive Flow Control

2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet4/4, operational Transmit Flow Control
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-SPEED: Interface port-channel4, operational speed changed to 20 Gbps
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface port-channel4, operational duplex mode changed to Full

2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface port-channel4, operational Receive Flow Contro

2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface port-channel4, operational Transmit Flow Contr
2013 Feb 20 21:44:27 N1Kv-01 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel4: Ethernet4/4 is up

2013 Feb 20 21:44:27 N1Kv-01 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel4: first operational port changed from non
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_UP: Interface Ethernet4/4 is up in mode trunk
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_UP: Interface port-channel4 is up in mode trunk
N1Kv-01(config-port-prof)#

Now that we have our port channels created, let's go back to vCenter and add a
second physical NIC to each Ethernet port profile in N1Kv.

Configuration
Navigate to ESXi1, click the Configuration tab, click Networking, select
vSphere Distributed Switch, and click Manage Physical Adapters.

Under the VM-Sys-Uplink port profile, click Click to Add NIC.

Choose vmnic1 and click OK.

Click Yes.

Under the VM-Guests port profile, click Click to Add NIC.

Choose vmnic4 and click OK.

Click Yes.

Click OK.

Verification
Note that in ESXi1's vDS view, we now see two uplink adapters for each port profile,
and all ports show green icons indicating they are connected.

Back in N1Kv, note the second Ethernet interface being added to each port channel.
N1Kv-01(config-port-prof)# 2013 Feb 20 21:48:14 N1Kv-01 %VIM-5-IF_ATTACHED:
Interface Ethernet3/2 is attached to vmnic1 on module 3
2013 Feb 20 21:48:14 N1Kv-01 %VIM-5-IF_ATTACHED: Interface Ethernet3/5 is attached to vmnic4 on module 3
2013 Feb 20 21:48:15 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD:
Interface Ethernet3/2 is added to port-channel1
2013 Feb 20 21:48:15 N1Kv-01 %ETHPORT-5-SPEED: Interface Ethernet3/2, operational speed changed to 1 Gbps
2013 Feb 20 21:48:15 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface Ethernet3/2, operational duplex mode changed to Full
2013 Feb 20 21:48:15 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet3/2, operational Receive Flow Control

2013 Feb 20 21:48:15 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet3/2, operational Transmit Flow Control
2013 Feb 20 21:48:15 N1Kv-01 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel1: Ethernet3/2 is up
2013 Feb 20 21:48:15 N1Kv-01 %ETHPORT-5-IF_UP: Interface Ethernet3/2 is up in mode trunk
2013 Feb 20 21:48:15 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD:
Interface Ethernet3/5 is added to port-channel3

2013 Feb 20 21:48:15 N1Kv-01 %ETHPORT-5-SPEED: Interface Ethernet3/5, operational speed changed to 20 Gbps
2013 Feb 20 21:48:15 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface Ethernet3/5, operational duplex mode changed to Full
2013 Feb 20 21:48:15 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet3/5, operational Receive Flow Control

2013 Feb 20 21:48:15 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet3/5, operational Transmit Flow Control
2013 Feb 20 21:48:15 N1Kv-01 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel3: Ethernet3/5 is up
2013 Feb 20 21:48:16 N1Kv-01 %ETHPORT-5-IF_UP: Interface Ethernet3/5 is up in mode trunk

Perform the same tasks back in vCenter for host ESXi2, and notice the output from
the switch.
2013 Feb 20 21:52:34 N1Kv-01 %VIM-5-IF_ATTACHED: Interface Ethernet4/2 is attached to vmnic1 on module 4
2013 Feb 20 21:52:34 N1Kv-01 %VIM-5-IF_ATTACHED: Interface Ethernet4/5 is attached to vmnic4 on module 4
2013 Feb 20 21:52:34 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD:
Interface Ethernet4/2 is added to port-channel2
2013 Feb 20 21:52:35 N1Kv-01 %ETHPORT-5-SPEED: Interface Ethernet4/2, operational speed changed to 1 Gbps
2013 Feb 20 21:52:35 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface Ethernet4/2, operational duplex mode changed to Full
2013 Feb 20 21:52:35 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet4/2, operational Receive Flow Control

2013 Feb 20 21:52:35 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet4/2, operational Transmit Flow Control
2013 Feb 20 21:52:35 N1Kv-01 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel2: Ethernet4/2 is up
2013 Feb 20 21:52:35 N1Kv-01 %ETHPORT-5-IF_UP: Interface Ethernet4/2 is up in mode trunk
2013 Feb 20 21:52:35 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD:
Interface Ethernet4/5 is added to port-channel4

2013 Feb 20 21:52:35 N1Kv-01 %ETHPORT-5-SPEED: Interface Ethernet4/5, operational speed changed to 20 Gbps
2013 Feb 20 21:52:35 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface Ethernet4/5, operational duplex mode changed to Full
2013 Feb 20 21:52:35 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet4/5, operational Receive Flow Control

2013 Feb 20 21:52:35 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet4/5, operational Transmit Flow Control
2013 Feb 20 21:52:35 N1Kv-01 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel4: Ethernet4/5 is up
2013 Feb 20 21:52:35 N1Kv-01 %ETHPORT-5-IF_UP: Interface Ethernet4/5 is up in mode trunk
2013 Feb 20 21:52:39 N1Kv-01 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel4: Ethernet4/5 is down
2013 Feb 20 21:52:39 N1Kv-01 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel2: Ethernet4/2 is down
2013 Feb 20 21:52:39 N1Kv-01 %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface Ethernet4/5 is down (Link failure)
2013 Feb 20 21:52:39 N1Kv-01 %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface Ethernet4/2 is down (Link failure)
2013 Feb 20 21:52:41 N1Kv-01 %ETHPORT-5-SPEED: Interface Ethernet4/5, operational speed changed to 20 Gbps
2013 Feb 20 21:52:41 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface Ethernet4/5, operational duplex mode changed to Full
2013 Feb 20 21:52:41 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet4/5, operational Receive Flow Control

2013 Feb 20 21:52:41 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet4/5, operational Transmit Flow Control
2013 Feb 20 21:52:41 N1Kv-01 %ETHPORT-5-SPEED: Interface Ethernet4/2, operational speed changed to 1 Gbps
2013 Feb 20 21:52:41 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface Ethernet4/2, operational duplex mode changed to Full
2013 Feb 20 21:52:41 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet4/2, operational Receive Flow Control

2013 Feb 20 21:52:41 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet4/2, operational Transmit Flow Control
2013 Feb 20 21:52:41 N1Kv-01 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel4: Ethernet4/5 is up
2013 Feb 20 21:52:41 N1Kv-01 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel2: Ethernet4/2 is up
2013 Feb 20 21:52:41 N1Kv-01 %ETHPORT-5-IF_UP: Interface Ethernet4/5 is up in mode trunk
2013 Feb 20 21:52:41 N1Kv-01 %ETHPORT-5-IF_UP: Interface Ethernet4/2 is up in mode trunk

We can also see the result of these added port channels in a simple show run:
interface port-channel1
inherit port-profile VM-Sys-Uplink
vem 3

interface port-channel2
inherit port-profile VM-Sys-Uplink
vem 4

interface port-channel3
inherit port-profile VM-Guests
vem 3

interface port-channel4
inherit port-profile VM-Guests
vem 4

At this point, both local ESXi vSwitches are completely barren of both guests and physical NIC adapters, and all networking has been transferred over to the N1Kv switch.
Now it's time to see how the N1Kv has chosen to pin the VMs up to the uplink Ethernet interfaces. We'll do this by looking at the output of two main commands in N1Kv, taking note of a few key fields in each output, and then bringing them together to give us the whole picture. The fields to look for are called LTL and SGID. LTL means Local Target Logic, and SGID is the Sub-Group ID, because each virtual port channel is divided further into sub-groups. The two commands are really just show port and show pinning; however, because this is a virtual switch and we are on the virtual supervisor, we need to tell it specifically on what remote linecard (VEM) we want to run the command, and to do that, we must prefix the show commands a bit, as we see here.
module vem 3 execute vemcmd show port

module vem 3 execute vemcmd show pinning

Note that we can run this directly from the CLI of an ESXi host after we SSH into it.
If we wanted to do that, we would simply omit the prefix, and run the commands as
such.
vemcmd show port

vemcmd show pinning

There are many other show commands that we can run from vemcmd within an ESXi
host or from the VSM if we prefix the module X execute command to it. There are
also some set commands that can be quite useful, but they should be applied with

care because they can quickly take a VEM offline from its VSM (but that's what a lab
is for, right?).
Before we look at their output, let's review the terminology:
LTL = Local Target Logic
PC-LTL = Port Channel Local Target Logic
SGID = Sub-Group ID
Eff_SGID = Effective Sub-Group ID
We are going to look at the PC-LTL in the show port , then look at the Eff_SGIDs in
the show pinning , and then look at them with respect to their PC-LTL values.
To clarify this, let's isolate two VM guests that share the same uplink for
comparison. We will contrast "Win2k8-www-2" with "N1Kv-01-VSM-1" (specifically,
its first adapter). We must also point out the physical NICs or vmnics, to see what
port channels and sub-groups they belong to.
Now let's look at their output.
First, we can see that Eth 3/4 and 3/5 (vmnic 3 and 4) are in a port channel, and that
their PC-LTL value is 306, and that their Sub Group ID (SGID) matches their vmnic
number (3 and 4), which makes things quite easy.
N1Kv-01# module vem 3 execute vemcmd show port
  LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port            Type
   17   Eth3/1     UP    UP   F/B*     305     0   vmnic0
   18   Eth3/2     UP    UP   F/B*     305     1   vmnic1
   19   Eth3/3     UP    UP   F/B*                 vmnic2
   20   Eth3/4     UP    UP   FWD      306     3   vmnic3
   21   Eth3/5     UP    UP   FWD      306     4   vmnic4
   49   Veth7      UP    UP   FWD                  vmk0
   50   Veth8      UP    UP   FWD                  vmk1
   51   Veth9      UP    UP   FWD                  N1Kv-01-VSM-1.eth0
   52   Veth10     UP    UP   FWD                  N1Kv-01-VSM-1.eth1
   53   Veth11     UP    UP   FWD                  N1Kv-01-VSM-1.eth2
   54   Veth12     UP    UP   FWD                  Win2k8-www-2.eth0
   55   Veth13     UP    UP   FWD                  Win2k8-www-3.eth0
   56   Veth14     UP    UP   FWD                  vCenter.eth0
  305   Po1        UP    UP   F/B*
  306   Po3        UP    UP   FWD

* F/B: Port is BLOCKED on some of the vlans.
  One or more vlans are either not created or
  not in the list of allowed vlans for this port.
  Please run "vemcmd show port vlans" to see the details.

Again, as we look at the show pinning output, we can see that PC-LTL 306 is broken down into SGIDs of 3 and 4, and that Win2k8-www-2.eth0 is pinned to SGID 3 (vmnic3), whereas N1Kv-01-VSM-1.eth0 is pinned to SGID 4 (vmnic4).
N1Kv-01# module vem 3 execute vemcmd show pinning
  LTL   IfIndex    PC-LTL  VSM_SGID  Eff_SGID  iSCSI_LTL*  Name
   10                 306        32
   12                 306        32
   49   1c000060      305        32                        vmk0
   51   1c000080      306        32         4              N1Kv-01-VSM-1.eth0
   52   1c000090      306        32                        N1Kv-01-VSM-1.eth1
   53   1c0000a0      306        32                        N1Kv-01-VSM-1.eth2
   54   1c0000b0      306        32         3              Win2k8-www-2.eth0
   55   1c0000c0      306        32                        Win2k8-www-3.eth0
   56   1c0000d0      306        32                        vCenter.eth0

  iSCSI_LTL* : iSCSI pinning overrides VPC-HM pinning
N1Kv-01#

Now let's look at the same commands, but on VEM module 4 - or ESXi 2.
N1Kv-01# module vem 4 execute vemcmd show port
  LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port            Type
   17   Eth4/1     UP    UP   F/B*     305     0   vmnic0
   18   Eth4/2     UP    UP   F/B*     305     1   vmnic1
   19   Eth4/3     UP    UP   F/B*                 vmnic2
   20   Eth4/4     UP    UP   FWD      306     3   vmnic3
   21   Eth4/5     UP    UP   FWD      306     4   vmnic4
   49   Veth1      UP    UP   FWD                  vmk0
   50   Veth2      UP    UP   FWD                  vmk1
   51   Veth3      UP    UP   FWD                  N1Kv-01-VSM-2.eth0
   52   Veth4      UP    UP   FWD                  N1Kv-01-VSM-2.eth1
   53   Veth5      UP    UP   FWD                  N1Kv-01-VSM-2.eth2
   54   Veth6      UP    UP   FWD                  Win2k8-www-1.eth0
  305   Po2        UP    UP   F/B*
  306   Po4        UP    UP   FWD

* F/B: Port is BLOCKED on some of the vlans.
  One or more vlans are either not created or
  not in the list of allowed vlans for this port.
  Please run "vemcmd show port vlans" to see the details.

N1Kv-01#
N1Kv-01# module vem 4 execute vemcmd show pinning
  LTL   IfIndex    PC-LTL  VSM_SGID  Eff_SGID  iSCSI_LTL*  Name
   10                 306        32
   12                 306        32
   49   1c000000      305        32                        vmk0
   51   1c000020      306        32                        N1Kv-01-VSM-2.eth0
   52   1c000030      306        32                        N1Kv-01-VSM-2.eth1
   53   1c000040      306        32                        N1Kv-01-VSM-2.eth2
   54   1c000050      306        32                        Win2k8-www-1.eth0

  iSCSI_LTL* : iSCSI pinning overrides VPC-HM pinning
N1Kv-01#

We see the exact same LTL and PC-LTL values! Remember, these are Local Target Logic values. The global logic values are the port channel numbers in the VSM, but the local target values are just that - local to each VEM. Therefore, all things being equal on both ESXi hosts (number of vNICs, VLANs, purpose, assignment), it makes sense that they would calculate similar values for these local target logic values.
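Pulling the two outputs together, the Veth-to-vmnic resolution we just did by eye is simply a join on (PC-LTL, SGID). The values below are the ones from the VEM 3 outputs above:

```python
# "vemcmd show port" gives vmnic -> (PC-LTL, SGID);
# "vemcmd show pinning" gives Veth -> (PC-LTL, Eff_SGID).
port_rows = {
    20: ("vmnic3", 306, 3),   # LTL -> (Vem Port, PC-LTL, SGID)
    21: ("vmnic4", 306, 4),
}
pinning_rows = {
    54: ("Win2k8-www-2.eth0", 306, 3),   # LTL -> (Name, PC-LTL, Eff_SGID)
    51: ("N1Kv-01-VSM-1.eth0", 306, 4),
}

# Join on (PC-LTL, SGID) to find each guest's uplink.
sgid_to_vmnic = {(pc, sgid): port for port, pc, sgid in port_rows.values()}
pinned = {name: sgid_to_vmnic[(pc, sgid)]
          for name, pc, sgid in pinning_rows.values()}
print(pinned)   # {'Win2k8-www-2.eth0': 'vmnic3', 'N1Kv-01-VSM-1.eth0': 'vmnic4'}
```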

UCS Technology Labs - Nexus 1000v on UCS


Test vMotion on Nexus 1000v
Task
Test vMotion running over N1Kv.

Configuration
We'll begin a ping to 10.0.110.112 and keep it going.

Drag Win2k8-www-2 from ESXi1 down to ESXi2 to start the vMotion process.

Click Next.

Click Finish.

Note it migrating.

In N1Kv, we can see the Veth port detach and re-attach to the VEM below.
N1Kv-01# 2013 Feb 20 22:37:16 N1Kv-01 %VIM-5-IF_DETACHED: Interface Vethernet12 is detached

2013 Feb 20 22:37:16 N1Kv-01 %ETHPORT-5-IF_DOWN_NON_PARTICIPATING: Interface Vethernet12 is down (Non participating)
2013 Feb 20 22:37:16 N1Kv-01 %VIM-5-IF_ATTACHED:
Interface Vethernet12 is attached to Network Adapter 1 of Win2k8-www-2 on port 7 of module 4 with dvport id 225

2013 Feb 20 22:37:16 N1Kv-01 %ETHPORT-5-IF_UP: Interface Vethernet12 is up in mode access

And back at our ping, we see that not a single packet was lost.

This all usually happens with 0% packet loss, or occasionally with a single packet
lost.
vMotion seems to work just fine with our N1Kv DVS!

UCS Technology Labs - Nexus 1000v on UCS


QoS in Nexus 1000v
Task
Set up one of the generic rack-mount servers in your rack with the IP address of
10.0.110.10/24 and place it in VLAN 110 on your N5K.
(Server 1 - 8 depending on rack number)
Generate a ping from VSM1 destined to 10.0.110.10 with the DSCP 46 (EF).
Provision N1Kv to intercept that packet and change the DSCP value to 40, and also
to mark L2 with the CoS value of 5.
Ensure that this CoS value is being honored by UCSM through verification on the proper Fabric Interconnect.

Pre-Verification
To test this QoS scenario, we need to generate some packets with a particular
DSCP value from a VM running on the N1Kv switch VEM. Although we may be able
to do this from a Windows box, it is much easier and faster (and therefore makes
much more sense) to do this from a Cisco device. Luckily, the VSM is running on its
own VEM, so we can simply test using an extended ping from the VSM
management interface, because it is simply another Veth interface on the VEM. We
will see that our VSM-1 (or Sup1) is currently active, so we know that our ping will
come from the management port on VSM-1, which in our case happens to be
Veth10.
We will ping a Windows box that is running Wireshark and filter for ICMP. Note that
this box happens to be on the same subnet but is a standalone rack-mount server
and is connected outside of the UCS on a N5K. This will give us the ability to check
the hardware queues on the UCS Fabric Interconnects later for L2 CoS-to-Queue
mapping.
Let's first make sure that the active VSM is indeed VSM1. This is critical to knowing
which vmnic in VMWare, and therefore which vNIC in UCSM, will be transmitting the
packets northbound. And we can clearly see that VSM1 is active.

N1Kv-01# show mod
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
3    248    Virtual Ethernet Module           NA                  ok
4    248    Virtual Ethernet Module           NA                  ok

We will begin by testing a simple ping marking DSCP to the PHB of EF (which is
ToS 184), and ensuring that it arrives at its destination marked accordingly. Next
we'll identify the path it should take and at what port and queue we expect it to be
seen, apply the policy-map, and later test it again to determine whether the DSCP is
being changed and the CoS bits are being marked and then matched to queue.
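As a quick sanity check on the numbers in this task, the DSCP/ToS/CoS arithmetic works out as follows:

```python
# The NX-OS ping prompts for the full ToS byte, not the DSCP value, so
# DSCP 46 (EF) has to be entered as 184: DSCP occupies the top 6 bits
# of the ToS byte.
dscp_ef = 46
tos = dscp_ef << 2
assert tos == 184 and format(tos, "08b") == "10111000"

# The task then rewrites DSCP to 40 (CS5); its top 3 bits - the part
# that lines up with IP precedence and the CoS 5 L2 marking - are 101.
assert 40 >> 3 == 5
print(tos)
```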

N1Kv-01# ping
Vrf context to use [default] :
No user input: using default context
Target IP address or Hostname: 10.0.110.10
Repeat count [5] :
Datagram size [56] :
Timeout in seconds [2] :
Sending interval in seconds [0] :
Extended commands [no] : y
Source address or interface :
Data pattern [0xabcd] :
Type of service [0] : 184
Set DF bit in IP header [no] :
Time to live [255] :
Loose, Strict, Record, Timestamp, Verbose [None] :
Sweep range of sizes [no] :
Sending 5, 56-bytes ICMP Echos to 10.0.110.10
Timeout is 2 seconds, data pattern is 0xABCD
64 bytes from 10.0.110.10: icmp_seq=0 ttl=126 time=1.94 ms
64 bytes from 10.0.110.10: icmp_seq=1 ttl=126 time=1.683 ms
64 bytes from 10.0.110.10: icmp_seq=2 ttl=126 time=1.656 ms
64 bytes from 10.0.110.10: icmp_seq=3 ttl=126 time=1.704 ms
64 bytes from 10.0.110.10: icmp_seq=4 ttl=126 time=1.652 ms

--- 10.0.110.10 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 1.652/1.726/1.94 ms
N1Kv-01#

On the external server running Wireshark, we can see the ping come in, and we can also clearly see that it is marked with a ToS byte of 10111000 - that is, DSCP 46, or PHB EF.
Now back in N1Kv, we revisit the show port command to see that the VSM1
Management port, where all pings would come from, is pinned north to vmnic4,
which also corresponds to our last vNIC in UCSM, and that it is being served on
VEM 3, which in UCSM is Service Profile ESXi1.
N1Kv-01# module vem 3 execute vemcmd show port
  LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port            Type
   17   Eth3/1     UP    UP   F/B*     305     0   vmnic0
   18   Eth3/2     UP    UP   F/B*     305     1   vmnic1
   19   Eth3/3     UP    UP   F/B*                 vmnic2
   20   Eth3/4     UP    UP   FWD      306     3   vmnic3
   21   Eth3/5     UP    UP   FWD      306     4   vmnic4
   49   Veth7      UP    UP   FWD                  vmk0
   50   Veth8      UP    UP   FWD                  vmk1
   51   Veth9      UP    UP   FWD                  N1Kv-01-VSM-1.eth0
   52   Veth10     UP    UP   FWD      306     4   N1Kv-01-VSM-1.eth1
   53   Veth11     UP    UP   FWD                  N1Kv-01-VSM-1.eth2
   54   Veth12     UP    UP   FWD                  Win2k8-www-2.eth0
   55   Veth13     UP    UP   FWD                  Win2k8-www-3.eth0
   56   Veth14     UP    UP   FWD                  vCenter.eth0
  305   Po1        UP    UP   F/B*
  306   Po3        UP    UP   FWD

* F/B: Port is BLOCKED on some of the vlans.
  One or more vlans are either not created or
  not in the list of allowed vlans for this port.
  Please run "vemcmd show port vlans" to see the details.
N1Kv-01# sh run int veth10

!Command: show running-config interface Vethernet10
!Time: Thu Feb 21 20:02:00 2013

version 4.2(1)SV2(1.1)

interface Vethernet10
  inherit port-profile N1Kv-Management
  description N1Kv-01-VSM-1, Network Adapter 2
  vmware dvport 353 dvswitch uuid "2a 80 3b 50 37 b7 1b 8e-18 98 be 68 05 cd ba 32"
  vmware vm mac 0050.56BB.3D01

N1Kv-01#

Back in UCSM, click Service Profile ESXi1, expand vNICs, and click the last vNIC,
which is eth4-vm-fabB, to see that it is pinned to Fabric B (at least during normal,
non-failover operation - which we have currently).

Click the Service Profile ESXi1, and note that the associated server is blade 2.

If we click the right pane on VIF Paths, we can see that for Path B, the eth4-vm-fabB
vNIC is pinned to the uplink FI-B 1/4 and the link is indeed active.

To confirm this in NX-OS, we can run a couple of quick commands to note that Veth693 is indeed Chassis 1, Blade 2 (or Server 1/2) and that it is vNIC eth4-vm-fabB; then, with a show pinning server-interfaces command, we can confirm that it is pinned to outgoing interface 1/4. Remember that we want to be on FI-B when checking this information.
INE-UCS-01-B(nxos)# sh pinning server-interfaces
---------------+-----------------+------------------------+-----------------
SIF Interface    Sticky            Pinned Border Interface  Pinned Duration
---------------+-----------------+------------------------+-----------------
Po1282           No
Po1283           No
Eth1/1           No
Eth1/2           No
Veth688          No                Eth1/4                   1d 12:33:51
Veth690          No                Eth1/4                   1d 12:33:51
Veth692          No                Eth1/4                   1d 12:33:51
Veth693          No                Eth1/4                   1d 12:33:51
Veth698          No                Eth1/3                   23:39:15
Veth700          No                Eth1/3                   23:39:15
Veth702          No                Eth1/3                   23:39:15
Veth703          No                Eth1/3                   23:39:15
Veth8888         No
Veth8898         No

INE-UCS-01-B(nxos)#
INE-UCS-01-B(nxos)# sh run int veth693

!Command: show running-config interface Vethernet693
!Time: Thu Feb 21 12:10:37 2013

version 5.0(3)N2(2.03a)

interface Vethernet693
  description server 1/2, VNIC eth4-vm-FabB
  switchport mode trunk
  hardware vethernet mac filtering per-vlan
  no pinning server sticky
  pinning server pinning-failure link-down
  switchport trunk allowed vlan 1,110-114,118-122
  bind interface port-channel1282 channel 693
  service-policy type queuing input org-root/ep-qos-Host-Control-BE
  no shutdown

INE-UCS-01-B(nxos)#

However, knowing its external pinned interface is not going to help us very much.
Why? Remember that because this is a Nexus NX-OS switch (in essence), it is
based primarily on an ingress queuing architecture. So what we really need to know
is on what port the frame is coming into the FI, rather than the Veth (which of course
doesn't have any hardware queues), or even the egress interface headed up to the
N5Ks.
So to find that, let's go back to the UCSM in the left navigation pane. Click the
Equipment tab, and then click the root Equipment; in the right pane, click Policies,
and select the Global Policies tab to recall that we did not use port channeling to
aggregate the links coming from the IOM/FEXs up to the FIs.

This means that we will need to determine on which individual port the frame is
leaving the IOM and coming into the FI (of course, we would still need to do this if it
were a port channel, but it makes it a bit easier on us since it is not).
In the left navigation pane, expand Chassis 1 and Server 2, and look at the last two DCE interfaces, which are 10GBASE-KR backplane traces to the IOM inside the chassis. In the right pane, notice that the IOM we are pinned to is chassis 1, slot 2 (or IOM 2, going to FI-B), and port 5.
You may see a port channel here. This is not a port channel from the
blade up to the FI, but rather just the two 10Gb DCE 10GBASE-KR traces
from the blade to the IOM. We already saw that there is no port
channel from the IOM up to the FI.

So back in FI-B, in NX-OS, let's look at the FEX detail, and we can see that both of
these two backplane DCE traces Eth1/1/5 and Eth1/1/7 are pinned to the uplink
Fabric port of Eth1/2. So that's where we need to look next.
INE-UCS-01-B(nxos)# sh fex detail
FEX: 1 Description: FEX0001   state: Online
  FEX version: 5.0(3)N2(2.03a) [Switch version: 5.0(3)N2(2.03a)]
  FEX Interim version: 5.0(3)N2(2.03a)
  Switch Interim version: 5.0(3)N2(2.03a)
  Chassis Model: N20-C6508, Chassis Serial: FOX1630GZB9
  Extender Model: UCS-IOM-2208XP, Extender Serial: FCH16297JG2
  Part No: 73-13196-04
  Card Id: 136, Mac Addr: 60:73:5c:50:c3:02, Num Macs: 42
  Module Sw Gen: 12594 [Switch Sw Gen: 21]
  post level: complete
  pinning-mode: static    Max-links: 1
  Fabric port for control traffic: Eth1/1
  Fabric interface state:
    Eth1/1 - Interface Up. State: Active
    Eth1/2 - Interface Up. State: Active
  Fex Port        State    Fabric Port
     Eth1/1/1        Up         Eth1/1
     Eth1/1/2      Down           None
     Eth1/1/3        Up         Eth1/1
     Eth1/1/4      Down           None
     Eth1/1/5        Up         Eth1/2
     Eth1/1/6      Down           None
     Eth1/1/7        Up         Eth1/2
     Eth1/1/8      Down           None
     Eth1/1/9        Up         Eth1/1
    Eth1/1/10      Down           None
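The HIF-to-fabric-port pinning in the table above is easy to extract programmatically when you are chasing traffic across many chassis. Here is a small Python sketch (my own parsing logic, not a Cisco tool) that pulls the pinning of each up host interface out of a captured `sh fex detail` table:

```python
import re

def parse_fex_pinning(output: str) -> dict:
    """Map each up FEX host interface (HIF) to its pinned fabric port."""
    pinning = {}
    # Rows look like: "Eth1/1/5   Up     Eth1/2" or "Eth1/1/2  Down  None"
    for m in re.finditer(r"(Eth\d+/\d+/\d+)\s+(Up|Down)\s+(Eth\d+/\d+|None)", output):
        hif, state, fabric = m.groups()
        if state == "Up":
            pinning[hif] = fabric
    return pinning

# Excerpt of the table above
sample = """
Eth1/1/1   Up     Eth1/1
Eth1/1/2   Down   None
Eth1/1/5   Up     Eth1/2
Eth1/1/7   Up     Eth1/2
"""
print(parse_fex_pinning(sample))
# {'Eth1/1/1': 'Eth1/1', 'Eth1/1/5': 'Eth1/2', 'Eth1/1/7': 'Eth1/2'}
```

With static pinning and no port channel, this lookup is all you need to know which fabric port a given blade trace rides on.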

Ingress port to FI-B is Eth1/2, and that's the port we are clearly interested in looking
at. Let's first look at the QoS configuration portion of FI-B in NX-OS. Remember that
in the N1Kv (in just a little bit - not yet), we are going to map DSCP EF to CoS 5,
so we first need to see which qos-group CoS 5 maps to, and then, on the ingress
port, which queue that qos-group maps to.
Here we see that CoS 5 maps to class-platinum and that class-platinum maps to
qos-group 2.
INE-UCS-01-B(nxos)# sh run | sec class-map|policy-map|system
class-map type qos class-fcoe
class-map type qos match-all class-gold
match cos 4
class-map type qos match-all class-silver
match cos 2
class-map type qos match-all class-platinum
match cos 5
class-map type queuing class-fcoe
match qos-group 1
class-map type queuing class-gold
match qos-group 3
class-map type queuing class-silver
match qos-group 4
class-map type queuing class-platinum
match qos-group 2
class-map type queuing class-all-flood
match qos-group 2
class-map type queuing class-ip-multicast
match qos-group 2
policy-map type qos system_qos_policy
class class-platinum
set qos-group 2
class class-silver
set qos-group 4
class class-gold
set qos-group 3
class class-fcoe
set qos-group 1
policy-map type queuing system_q_in_policy
class type queuing class-fcoe
bandwidth percent 30
class type queuing class-platinum
bandwidth percent 10
class type queuing class-gold
bandwidth percent 20
class type queuing class-silver
bandwidth percent 30
class type queuing class-default
bandwidth percent 10
policy-map type queuing system_q_out_policy
class type queuing class-fcoe
bandwidth percent 30
class type queuing class-platinum
bandwidth percent 10
class type queuing class-gold
bandwidth percent 20
class type queuing class-silver
bandwidth percent 30
class type queuing class-default
bandwidth percent 10
policy-map type queuing org-root/ep-qos-FC-for-vHBAs
class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-CoS-2_30per-BW
class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-CoS-4_20per-BW
class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-CoS-5_10per-BW
class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-Host-Control-BE
class type queuing class-default
bandwidth percent 100
shape 40000000 kbps 10240
policy-map type queuing org-root/ep-qos-Limit-to-1Gb-BW
class type queuing class-default
bandwidth percent 100
shape 1000000 kbps 10240
policy-map type queuing org-root/ep-qos-Limit-to-20Gb-BW
class type queuing class-default
bandwidth percent 100
shape 20000000 kbps 10240
class-map type network-qos class-fcoe
match qos-group 1
class-map type network-qos class-gold
match qos-group 3
class-map type network-qos class-silver
match qos-group 4
class-map type network-qos class-platinum
match qos-group 2

class-map type network-qos class-all-flood


match qos-group 2
class-map type network-qos class-ip-multicast
match qos-group 2
policy-map type network-qos system_nq_policy
class type network-qos class-platinum
class type network-qos class-silver
class type network-qos class-gold
mtu 9000
class type network-qos class-fcoe
pause no-drop
mtu 2158
class type network-qos class-default
system qos
service-policy type qos input system_qos_policy
service-policy type queuing input system_q_in_policy
service-policy type queuing output system_q_out_policy
service-policy type network-qos system_nq_policy
system default switchport shutdown
INE-UCS-01-B(nxos)#

Let's look at the queuing on the ingress interface eth1/2. Notice that Tx Queuing
doesn't have much value beyond WRR, but that Rx Queuing is clearly where the
action is. Notice also that the Q to which qos-group 2 is mapped has not yet
reported any matched packets. So this is where we will be looking for packets to
match after we apply our configuration.
INE-UCS-01-B(nxos)# sh queuing interface e1/2
Ethernet1/2 queuing information:
  TX Queuing
    qos-group  sched-type  oper-bandwidth
        0       WRR            10
        1       WRR            30
        2       WRR            10
        3       WRR            20
        4       WRR            30

  RX Queuing
    qos-group 0
    q-size: 196480, HW MTU: 1500 (1500 configured)
    drop-type: drop, xon: 0, xoff: 1228
    Statistics:
        Pkts received over the port             : 1040393
        Ucast pkts sent to the cross-bar        : 944903
        Mcast pkts sent to the cross-bar        : 95490
        Ucast pkts received from the cross-bar  : 907526
        Pkts sent to the port                   : 1230897
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 1
    q-size: 79360, HW MTU: 2158 (2158 configured)
    drop-type: no-drop, xon: 128, xoff: 252
    Statistics:
        Pkts received over the port             : 352160
        Ucast pkts sent to the cross-bar        : 352160
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 435978
        Pkts sent to the port                   : 435978
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 2
    q-size: 22720, HW MTU: 1500 (1500 configured)
    drop-type: drop, xon: 0, xoff: 142
    Statistics:
        Pkts received over the port             : 0
        Ucast pkts sent to the cross-bar        : 0
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 3
    q-size: 29760, HW MTU: 9000 (9000 configured)
    drop-type: drop, xon: 0, xoff: 186
    Statistics:
        Pkts received over the port             : 0
        Ucast pkts sent to the cross-bar        : 0
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 4
    q-size: 22720, HW MTU: 1500 (1500 configured)
    drop-type: drop, xon: 0, xoff: 142
    Statistics:
        Pkts received over the port             : 0
        Ucast pkts sent to the cross-bar        : 0
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

  Total Multicast crossbar statistics:
    Mcast pkts received from the cross-bar      : 323371

INE-UCS-01-B(nxos)#

Now we need to apply the configuration to the N1Kv and port Veth10.
On N1Kv:
class-map type qos match-any DSCP-EF
  match dscp 46

policy-map type qos SET-COS
  class DSCP-EF
    set cos 5
    set dscp 40

interface Vethernet 10
  service-policy input SET-COS
And let's just verify the application.


N1Kv-01(config-if)# sh run int veth10

interface Vethernet10
  inherit port-profile N1Kv-Management
  service-policy type qos input SET-COS
  description N1Kv-01-VSM-1, Network Adapter 2
  vmware dvport 353 dvswitch uuid "2a 80 3b 50 37 b7 1b 8e-18 98 be 68 05 cd ba 32"
  vmware vm mac 0050.56BB.3D01

N1Kv-01(config-if)#

Before sending any pings, remember one critical thing: We are
sending packets/frames into a UCSM vNIC, and by default vNICs do
not trust any L2 CoS values - by default they rewrite everything to
0. In earlier tasks, we had you set up the two updating vNIC templates
that were used to instantiate two vNICs for each service profile. When
we set up these updating vNIC templates, we had you apply a QoS
policy; recall that the two vNICs destined for VM traffic were the only
ones on which we allowed "Host Control" in the QoS policy.
Also recall that the VSMs are both using these VM vNICs (vNIC 3 and
4). It is for this reason alone that our attempt to mark L2 CoS should
be honored.
Now let's send another ping and check both our modified DSCP value in Wireshark
and the ingress Eth1/2 port on FI-B for the proper queue mapping.
N1Kv-01# ping
Vrf context to use [default] :
No user input: using default context
Target IP address or Hostname: 10.0.110.10
Repeat count [5] :
Datagram size [56] :
Timeout in seconds [2] :
Sending interval in seconds [0] :
Extended commands [no] : y
Source address or interface :
Data pattern [0xabcd] :
Type of service [0] : 184
Set DF bit in IP header [no] :
Time to live [255] :
Loose, Strict, Record, Timestamp, Verbose [None] :
Sweep range of sizes [no] :
Sending 5, 56-bytes ICMP Echos to 10.0.110.10
Timeout is 2 seconds, data pattern is 0xABCD

64 bytes from 10.0.110.10: icmp_seq=0 ttl=126 time=2.07 ms
64 bytes from 10.0.110.10: icmp_seq=1 ttl=126 time=1.139 ms
64 bytes from 10.0.110.10: icmp_seq=2 ttl=126 time=0.855 ms
64 bytes from 10.0.110.10: icmp_seq=3 ttl=126 time=0.802 ms
64 bytes from 10.0.110.10: icmp_seq=4 ttl=126 time=0.802 ms

--- 10.0.110.10 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 0.802/1.133/2.07 ms
N1Kv-01#

It appears that the DSCP was properly rewritten to 40 (binary 101000, which
appears as 10100000 in the full ToS byte), but what about L2 CoS?
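The bit arithmetic behind these values is worth spelling out: DSCP is the upper six bits of the ToS byte, so everything here is simple shifts. A quick Python check (illustrative only, not part of the lab):

```python
# The NX-OS ping prompt takes the whole ToS byte, so DSCP EF (46)
# must be entered as 46 << 2 = 184.
tos_entered = 184
dscp_matched = tos_entered >> 2          # recover DSCP from the ToS byte
print(dscp_matched)                      # 46 (EF) - hit by our DSCP-EF class

# The SET-COS policy then rewrites DSCP to 40 (CS5):
dscp_set = 40
print(format(dscp_set, '06b'))           # 101000  (6-bit DSCP)
print(format(dscp_set << 2, '08b'))      # 10100000 (as seen in the ToS byte)

# With a class-selector DSCP like CS5, the top three bits equal the
# CoS value we set (5), so L2 and L3 markings agree.
print(dscp_set >> 3)                     # 5
```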
INE-UCS-01-B(nxos)# sh queuing interface e1/2
Ethernet1/2 queuing information:
  TX Queuing
    qos-group  sched-type  oper-bandwidth
        0       WRR            10
        1       WRR            30
        2       WRR            10
        3       WRR            20
        4       WRR            30

  RX Queuing
    qos-group 0
    q-size: 196480, HW MTU: 1500 (1500 configured)
    drop-type: drop, xon: 0, xoff: 1228
    Statistics:
        Pkts received over the port             : 1048988
        Ucast pkts sent to the cross-bar        : 952655
        Mcast pkts sent to the cross-bar        : 96333
        Ucast pkts received from the cross-bar  : 914006
        Pkts sent to the port                   : 1238764
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 1
    q-size: 79360, HW MTU: 2158 (2158 configured)
    drop-type: no-drop, xon: 128, xoff: 252
    Statistics:
        Pkts received over the port             : 352172
        Ucast pkts sent to the cross-bar        : 352172
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 435993
        Pkts sent to the port                   : 435993
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 2
    q-size: 22720, HW MTU: 1500 (1500 configured)
    drop-type: drop, xon: 0, xoff: 142
    Statistics:
        Pkts received over the port             : 5
        Ucast pkts sent to the cross-bar        : 5
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 3
    q-size: 29760, HW MTU: 9000 (9000 configured)
    drop-type: drop, xon: 0, xoff: 186
    Statistics:
        Pkts received over the port             : 0
        Ucast pkts sent to the cross-bar        : 0
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 4
    q-size: 22720, HW MTU: 1500 (1500 configured)
    drop-type: drop, xon: 0, xoff: 142
    Statistics:
        Pkts received over the port             : 0
        Ucast pkts sent to the cross-bar        : 0
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

  Total Multicast crossbar statistics:
    Mcast pkts received from the cross-bar      : 324758

INE-UCS-01-B(nxos)#

And there we have it! The five pings mapped properly to L2 CoS as well (note the five
packets now counted against qos-group 2), which means that the UCS will now honor
the marking on traffic being passed.

UCS Technology Labs - Nexus 1000v on UCS


IGMP in Nexus 1000v
Task
Using JPerf, generate a simple multicast feed from Win2k8-www-1 to Win2k8-www-2.
Ensure that you see the IGMP group in N1Kv.

Configuration
We need to have PIM running to manage IGMP, and because we are on the same
subnet, and therefore not routing, PIM passive will work just fine. We will run this on
the 3750 for now, but this could just as easily be ip pim sparse-mode on one of the
N7Ks.
On 3750:
ip multicast-routing distributed

interface Vlan110
ip address 10.0.110.254 255.255.255.0
ip pim passive

On Win2k8-www-2, we need to run JPerf and set up the client as shown here.
Remember in multicast, server and client are backward.

On Win2k8-www-1, we need to run JPerf and set up the server as shown here.

Verification
Now let's look at our N1Kv IGMP snooping groups.
N1Kv-01# sh ip igmp snooping groups
Type: S - Static, D - Dynamic, R - Router port

Vlan  Group Address    Ver  Type  Port list
110   */*                   R     Po3 Po4
110   225.1.1.1        v2   D     Veth6

N1Kv-01# sh run int po3

interface port-channel3
  inherit port-profile VM-Guests
  vem 3

N1Kv-01# sh run int po4

interface port-channel4
  inherit port-profile VM-Guests
  vem 4

N1Kv-01# sh run int veth6

interface Vethernet6
  inherit port-profile VM-110
  description Win2k8-www-1, Network Adapter 1
  vmware dvport 224 dvswitch uuid "2a 80 3b 50 37 b7 1b 8e-18 98 be 68 05 cd ba 32"
  vmware vm mac 000C.298F.859D

N1Kv-01#
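As a side note on what snooping constrains at L2: a group such as 225.1.1.1 is forwarded based on its derived multicast MAC, which is the fixed 01:00:5e prefix plus the low 23 bits of the group address. A small Python illustration (not N1Kv code) of that standard mapping:

```python
import ipaddress

def mcast_mac(group: str) -> str:
    """IPv4 multicast group -> multicast MAC (01:00:5e + low 23 bits of the IP)."""
    addr = int(ipaddress.IPv4Address(group))
    low23 = addr & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

print(mcast_mac("225.1.1.1"))   # 01:00:5e:01:01:01
# Because only 23 of the 28 group bits survive, 32 groups share each MAC:
print(mcast_mac("239.1.1.1"))   # 01:00:5e:01:01:01 (same MAC as 225.1.1.1)
```

This 32:1 overlap is exactly why IGMP snooping tracks the L3 group address rather than just the MAC.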

UCS Technology Labs - Nexus 1000v on UCS


SPAN in Nexus 1000v
Task
Ensure that both Win2k8-www-1 and Win2k8-www-2 are running on the same ESXi
host (and therefore the same VEM).
Set up a span session that allows Win2k8-www-1 to see all packets Tx or Rx to/from
Win2k8-www-2.
Ping from Win2k8-www-2 to Win2k8-www-3 and capture the packets in Wireshark on
Win2k8-www-1.

Configuration
Ensure that both Win2k8-www-1 and Win2k8-www-2 are running on the same ESXi
host.

We need to know which Veth ports the source (Win2k8-www-2) and the destination
(Win2k8-www-1) are residing on.
N1Kv-01# sh int status

--------------------------------------------------------------------------------
Port      Name                 Status  Vlan    Duplex  Speed    Type
--------------------------------------------------------------------------------
mgmt0     --                   up      routed  full    1000     --
Eth3/1    --                   up      trunk   full    1000     --
Eth3/2    --                   up      trunk   full    1000     --
Eth3/3    --                   up      trunk   full    unknown  --
Eth3/4    --                   up      trunk   full    unknown  --
Eth3/5    --                   up      trunk   full    unknown  --
Eth4/1    --                   up      trunk   full    1000     --
Eth4/2    --                   up      trunk   full    1000     --
Eth4/3    --                   up      trunk   full    unknown  --
Eth4/4    --                   up      trunk   full    unknown  --
Eth4/5    --                   up      trunk   full    unknown  --
Po1       --                   up      trunk   full    1000     --
Po2       --                   up      trunk   full    1000     --
Po3       --                   up      trunk   full    unknown  --
Po4       --                   up      trunk   full    unknown  --
Veth1     VMware VMkernel, v   up      115     auto    auto     --
Veth2     VMware VMkernel, v   up      116     auto    auto     --
Veth3     N1Kv-01-VSM-2, Net   up      120     auto    auto     --
Veth4     N1Kv-01-VSM-2, Net   up      121     auto    auto     --
Veth5     N1Kv-01-VSM-2, Net   up      120     auto    auto     --
Veth6     Win2k8-www-1, Netw   up      110     auto    auto     --
Veth7     VMware VMkernel, v   up      115     auto    auto     --
Veth8     VMware VMkernel, v   up      116     auto    auto     --
Veth9     N1Kv-01-VSM-1, Net   up      120     auto    auto     --
Veth10    N1Kv-01-VSM-1, Net   up      121     auto    auto     --
Veth11    N1Kv-01-VSM-1, Net   up      120     auto    auto     --
Veth12    Win2k8-www-2, Netw   up      110     auto    auto     --
Veth13    Win2k8-www-3, Netw   up      110     auto    auto     --
Veth14    vCenter, Network A   up              auto    auto     --
control0  --                   up      routed  full    1000     --

N1Kv-01(config-monitor)#

Configure the span session on the N1Kv.


monitor session 1 type local
source interface Vethernet12 both
destination interface Vethernet6
no shutdown

Verification
N1Kv-01# sh monitor session 1
session 1
---------------
type                : local
state               : up
source intf         :
    rx              : Veth12
    tx              : Veth12
    both            : Veth12
source VLANs        :
    rx              :
    tx              :
    both            :
source port-profile :
    rx              :
    tx              :
    both            :
filter VLANs        : filter not specified
destination ports   : Veth6
destination port-profile :

N1Kv-01(config-monitor)#

We can also ask the VEMs themselves about the local SPAN configuration, if the
monitor session is not shut down. Notice that we will only get a DST response from
one of them (the VEM that the source is also on), and note the LTL, or Local Target
Logic, value. A vemcmd show port will be necessary to connect this to an actual
interface.
N1Kv-01# module vem 3 execute vemcmd show span
VEM SOURCE IP: 10.0.115.11
HW SSN ID  ERSPAN ID  HDR VER  DST LTL/IP
               local
               RX Sources : 55,
               TX Sources : 55,
               Source Filter RX : 110,
               Source Filter TX : 110,

N1Kv-01# module vem 4 execute vemcmd show span
VEM SOURCE IP: 10.0.115.12
HW SSN ID  ERSPAN ID  HDR VER  DST LTL/IP
               local                    54
               RX Sources : 55,
               TX Sources : 55,
               Source Filter RX : 110,
               Source Filter TX : 110,

N1Kv-01# module vem 4 execute vemcmd show port
  LTL   VSM Port  Admin  Link  State  PC-LTL  SGID  Vem Port
   17     Eth4/1     UP    UP   F/B*     305        vmnic0
   18     Eth4/2     UP    UP   F/B*     305        vmnic1
   19     Eth4/3     UP    UP   F/B*                vmnic2
   20     Eth4/4     UP    UP   FWD      306        vmnic3
   21     Eth4/5     UP    UP   FWD      306        vmnic4
   49      Veth1     UP    UP   FWD                 vmk0
   50      Veth2     UP    UP   FWD                 vmk1
   51      Veth3     UP    UP   FWD                 N1Kv-01-VSM-2.eth0
   52      Veth4     UP    UP   FWD                 N1Kv-01-VSM-2.eth1
   53      Veth5     UP    UP   FWD                 N1Kv-01-VSM-2.eth2
   54      Veth6     UP    UP   FWD                 Win2k8-www-1.eth0
   55     Veth12     UP    UP   FWD                 Win2k8-www-2.eth0
  305        Po2     UP    UP   F/B*
  306        Po4     UP    UP   FWD

* F/B: Port is BLOCKED on some of the vlans.
  One or more vlans are either not created or
  not in the list of allowed vlans for this port.
  Please run "vemcmd show port vlans" to see the details.
N1Kv-01#
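Tying the two outputs together is just a table lookup: the DST LTL and RX/TX Sources from `vemcmd show span` are resolved against the LTL column of `vemcmd show port`. A trivial Python sketch of that cross-reference (the 54/55 mapping below is taken from the VEM output above and is specific to this lab):

```python
# LTL -> port mapping as reported by "vemcmd show port" on this VEM
ltl_table = {
    49: "Veth1 (vmk0)",
    54: "Veth6 (Win2k8-www-1.eth0)",   # the SPAN destination (DST LTL)
    55: "Veth12 (Win2k8-www-2.eth0)",  # the SPAN source (RX/TX Sources)
}

def resolve_ltl(ltl: int) -> str:
    """Resolve an LTL number from vemcmd show span to a human-readable port."""
    return ltl_table.get(ltl, "unknown LTL")

print(resolve_ltl(54))   # Veth6 (Win2k8-www-1.eth0)
print(resolve_ltl(55))   # Veth12 (Win2k8-www-2.eth0)
```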

Ping from Win2k8-www-2 to Win2k8-www-3.

Verify on Win2k8-www-1 that we see the packets.

Notice what would happen if we changed the source to Veth13 (Win2k8-www-3) that
is running on a different VEM.
N1Kv-01(config-monitor)# sh monitor session 1
session 1
---------------
type                : local
state               : down (Src/Dst LC mismatch)
source intf         :
    rx              : Veth13
    tx              : Veth13
    both            : Veth13
source VLANs        :
    rx              :
    tx              :
    both            :
source port-profile :
    rx              :
    tx              :
    both            :
filter VLANs        : filter not specified
destination ports   : Veth6
destination port-profile :

N1Kv-01(config-monitor)#

This is because Veth13 and Veth6 are on different LCs (Linecards) or VEMs.

Also note that as long as you have at least one valid source (on same VEM), the
SPAN session will come up; it simply will not capture any traffic from any interfaces
not on the same VEM.
N1Kv-01(config-monitor)# sh monitor session 1
session 1
---------------
type                : local
state               : up
source intf         :
    rx              : Veth12  Veth13
    tx              : Veth12  Veth13
    both            : Veth12  Veth13
source VLANs        :
    rx              :
    tx              :
    both            :
source port-profile :
    rx              :
    tx              :
    both            :
filter VLANs        : filter not specified
destination ports   : Veth6
destination port-profile :

N1Kv-01(config-monitor)#

UCS Technology Labs - Nexus 1000v on UCS


ERSPAN in Nexus 1000v
Task
Set up a ERSPAN session that allows Server1 on N5K1 (or 3, 5, or 7, depending on
your rack) to see all packets Tx or Rx to/from Win2k8-www-3.
Ping from Win2k8-www-2 to Win2k8-www-3 and capture the packets in Wireshark on
Server1.

Configuration
We can see that Win2k8-www-3 is running on ESXi1 or VEM 3.

Let's find the Veth port number in N1Kv.


N1Kv-01# sh int status

--------------------------------------------------------------------------------
Port      Name                 Status  Vlan    Duplex  Speed    Type
--------------------------------------------------------------------------------
mgmt0     --                   up      routed  full    1000     --
Eth3/1    --                   up      trunk   full    1000     --
Eth3/2    --                   up      trunk   full    1000     --
Eth3/3    --                   up      trunk   full    unknown  --
Eth3/4    --                   up      trunk   full    unknown  --
Eth3/5    --                   up      trunk   full    unknown  --
Eth4/1    --                   up      trunk   full    1000     --
Eth4/2    --                   up      trunk   full    1000     --
Eth4/3    --                   up      trunk   full    unknown  --
Eth4/4    --                   up      trunk   full    unknown  --
Eth4/5    --                   up      trunk   full    unknown  --
Po1       --                   up      trunk   full    1000     --
Po2       --                   up      trunk   full    1000     --
Po3       --                   up      trunk   full    unknown  --
Po4       --                   up      trunk   full    unknown  --
Veth1     VMware VMkernel, v   up      115     auto    auto     --
Veth2     VMware VMkernel, v   up      116     auto    auto     --
Veth3     N1Kv-01-VSM-2, Net   up      120     auto    auto     --
Veth4     N1Kv-01-VSM-2, Net   up      121     auto    auto     --
Veth5     N1Kv-01-VSM-2, Net   up      120     auto    auto     --
Veth6     Win2k8-www-1, Netw   up      110     auto    auto     --
Veth7     VMware VMkernel, v   up      115     auto    auto     --
Veth8     VMware VMkernel, v   up      116     auto    auto     --
Veth9     N1Kv-01-VSM-1, Net   up      120     auto    auto     --
Veth10    N1Kv-01-VSM-1, Net   up      121     auto    auto     --
Veth11    N1Kv-01-VSM-1, Net   up      120     auto    auto     --
Veth12    Win2k8-www-2, Netw   up      110     auto    auto     --
Veth13    Win2k8-www-3, Netw   up      110     auto    auto     --
Veth14    vCenter, Network A   up              auto    auto     --
control0  --                   up      routed  full    1000     --

N1Kv-01#

To provide ERSPAN, the mirrored frames must be encapsulated in GRE/IP, and
therefore we need capability l3control on the VMKernel port profile from which the
ERSPAN packets will originate. This also provides the source address for the
ERSPAN. Set up the ERSPAN session to send to the destination of N5K1's SVI in
the same VLAN that the VMKernel is running on (this could easily be a different
VLAN if routing were properly configured).
port-profile type vethernet VMKernel
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 115
  no shutdown
  system vlan 115
  state enabled

monitor session 2 type erspan-source
  source interface Vethernet12 both
  destination ip 10.0.115.51
  erspan-id 2
  ip ttl 64
  mtu 1500
  header-type 2
  no shut

On N5K1:
interface vlan 115
  ip address 10.0.115.51/24
  no shut

interface e1/1
  switchport
  switchport monitor
  no shut

monitor session 2 type erspan-destination
  source ip 10.0.115.12
  destination interface e1/1
  erspan-id 2
  vrf default
  no shut
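With header-type 2, each mirrored frame travels to 10.0.115.51 inside GRE/IP, preceded by an 8-byte ERSPAN Type II header carrying the original VLAN, the CoS, and the session ID we configured (erspan-id 2). As a hypothetical illustration of the publicly documented Type II layout (field positions per the ERSPAN draft, not Cisco source code):

```python
import struct

def parse_erspan2(hdr: bytes) -> dict:
    """Decode the 8-byte ERSPAN Type II header that follows the GRE header."""
    w1, w2 = struct.unpack("!HH", hdr[:4])
    (last,) = struct.unpack("!I", hdr[4:8])
    return {
        "version": w1 >> 12,        # 1 on the wire for Type II
        "vlan": w1 & 0x0FFF,        # original VLAN of the mirrored frame
        "cos": w2 >> 13,            # CoS of the mirrored frame
        "session_id": w2 & 0x03FF,  # matches the configured erspan-id
        "index": last & 0xFFFFF,    # port index
    }

# Hand-built example header: version 1, VLAN 110, CoS 5, session-id 2, index 0
hdr = struct.pack("!HHI", (1 << 12) | 110, (5 << 13) | 2, 0)
print(parse_erspan2(hdr))
```

Wireshark performs this same decode for you when it flags the capture on Server1 as ERSPAN.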

Verification
On N1Kv:
N1Kv-01# sh monitor session 2
session 2
---------------
type                : erspan-source
state               : up
source intf         :
    rx              : Veth12
    tx              : Veth12
    both            : Veth12
source VLANs        :
    rx              :
    tx              :
    both            :
source port-profile :
    rx              :
    tx              :
    both            :
filter VLANs        : filter not specified
destination IP      : 10.0.115.51
ERSPAN ID           : 2
ERSPAN TTL          : 64
ERSPAN IP Prec.     : 0
ERSPAN DSCP         : 0
ERSPAN MTU          : 1500
ERSPAN Header Type  : 2

N1Kv-01#

And looking on the linecard directly.

N1Kv-01(config-erspan-src)# module vem 4 execute vemcmd show span
VEM SOURCE IP: 10.0.115.12
HW SSN ID  ERSPAN ID  HDR VER  DST LTL/IP
                   2        2  10.0.115.51
               RX Sources : 55,
               TX Sources : 55,
               Source Filter RX : 110,
               Source Filter TX : 110,
N1Kv-01#

Let's send our ping from Win2k8-www-2.

We should see it on Server1 off N5K1.


UCS Technology Labs - Nexus 1000v on UCS


Access Control Lists in Nexus 1000v
Task
Set up an ACL on N1Kv that prohibits standard web traffic from reaching Win2k8-www-3.
Permit all other traffic to that server.

Configuration
First, let's be sure of our Veth interface number.
N1Kv-01(config)# sh int status

--------------------------------------------------------------------------------
Port      Name                 Status  Vlan    Duplex  Speed    Type
--------------------------------------------------------------------------------
mgmt0     --                   up      routed  full    1000     --
Eth3/1    --                   up      trunk   full    1000     --
Eth3/2    --                   up      trunk   full    1000     --
Eth3/3    --                   up      trunk   full    unknown  --
Eth3/4    --                   up      trunk   full    unknown  --
Eth3/5    --                   up      trunk   full    unknown  --
Eth4/1    --                   up      trunk   full    1000     --
Eth4/2    --                   up      trunk   full    1000     --
Eth4/3    --                   up      trunk   full    unknown  --
Eth4/4    --                   up      trunk   full    unknown  --
Eth4/5    --                   up      trunk   full    unknown  --
Po1       --                   up      trunk   full    1000     --
Po2       --                   up      trunk   full    1000     --
Po3       --                   up      trunk   full    unknown  --
Po4       --                   up      trunk   full    unknown  --
Veth1     VMware VMkernel, v   up      115     auto    auto     --
Veth2     VMware VMkernel, v   up      116     auto    auto     --
Veth3     N1Kv-01-VSM-2, Net   up      120     auto    auto     --
Veth4     N1Kv-01-VSM-2, Net   up      121     auto    auto     --
Veth5     N1Kv-01-VSM-2, Net   up      120     auto    auto     --
Veth6     Win2k8-www-1, Netw   up      110     auto    auto     --
Veth7     VMware VMkernel, v   up      115     auto    auto     --
Veth8     VMware VMkernel, v   up      116     auto    auto     --
Veth9     N1Kv-01-VSM-1, Net   up      120     auto    auto     --
Veth10    N1Kv-01-VSM-1, Net   up      121     auto    auto     --
Veth11    N1Kv-01-VSM-1, Net   up      120     auto    auto     --
Veth12    Win2k8-www-2, Netw   up      110     auto    auto     --
Veth13    Win2k8-www-3, Netw   up      110     auto    auto     --
Veth14    vCenter, Network A   up              auto    auto     --
control0  --                   up      routed  full    1000     --

N1Kv-01(config)#

Next, browse to Win2k8-www-3 to make sure it's still alive. Let's also ping it infinitely.

Now apply an access list, blocking port 80 traffic from ever reaching it, therefore
preventing us from getting a reply when we browse to it.
ip access-list NoHTTP
10 deny tcp any any eq www
20 permit ip any any

interface Vethernet13
ip port access-group NoHTTP out
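ACL evaluation is strictly first-match: each packet is tested against rule 10, then rule 20, and the first hit decides its fate (with an implicit deny at the end). A toy Python model of the two rules above (just the semantics, not how the VEM implements it):

```python
def nohttp_permits(proto, dst_port=None):
    """First-match evaluation of the NoHTTP ACL."""
    rules = [
        # 10 deny tcp any any eq www
        ("deny",   lambda p, dp: p == "tcp" and dp == 80),
        # 20 permit ip any any
        ("permit", lambda p, dp: True),
    ]
    for action, match in rules:
        if match(proto, dst_port):
            return action == "permit"
    return False  # implicit deny ip any any

print(nohttp_permits("tcp", 80))    # False - web traffic blocked
print(nohttp_permits("tcp", 3389))  # True  - RDP still works
print(nohttp_permits("icmp"))       # True  - our infinite ping keeps running
```

This is why the order matters: swap rules 10 and 20 and everything, including HTTP, would be permitted.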

Verification
Check our ping and try to refresh the browser window.
Even if we vMotion this guest to another host, the ACL will still be in
effect. Guests don't change vethernet port numbers simply because
of vMotion, and they retain all of their settings. One thing to be
cautious of, however: if you edit the settings of the guest, change
the network adapter to a different port profile/group, and click
Apply, and then go back into the settings and move the adapter
back to the original port profile/group, the ACL will not remain. This is
true for any settings applied to the vethernet interface, such as QoS,
NetFlow, DHCP trust, and so on.

UCS Technology Labs - Nexus 1000v on UCS


Private VLANs in Nexus 1000v
Task
Create two community Private VLANs, 112 and 113, and one primary PVLAN, 111.
The subnet for all adapters in any PVLAN should be 10.0.111.0/24.
Add a second Ethernet adapter to all three guest VMs and use them for PVLANs.
Assign an IP address to each adapter in VLAN 111, but do not assign a
default gateway.
Configure PVLANs as necessary to ensure that Win2k8-www-1 and Win2k8-www-2
can talk to each other, but that they cannot talk to Win2k8-www-3 and it cannot talk
to them, no matter which ESXi host the guests are on at any time.
Borrow the last physical adapter (vmnic4) from the VM-Guests port profile, and use it
for the PVLAN promiscuous trunk.
Ensure that 3750 SVI for VLAN 111 can ping all three adapters at all times.

Configuration
One thing that is important to remember about the N1Kv virtual switch is that the
VEMs are not all physically in one box. So if we want to have something such as a
community PVLAN instance, and we want guests in PVLAN 112 on one VEM to
communicate with guests in the same PVLAN 112 that are on a separate VEM, we
need to have a physical adapter that trunks those community PVLANs northbound
(and not only the primary VLAN).
On N1Kv:
vlan 112
name VM-Win2k8-PVLAN-Comm1
private-vlan community

vlan 113
name VM-Win2k8-PVLAN-Comm2
private-vlan community

vlan 111

name VM-Win2k8-PVLAN-Primary
private-vlan primary
private-vlan association 112-113

port-profile type ethernet PVLAN-Prom-Trunk


vmware port-group
switchport mode private-vlan trunk promiscuous
switchport private-vlan trunk allowed vlan 111-113
switchport private-vlan mapping trunk 111 112-113
no shutdown
state enabled

port-profile type vethernet PVLAN-Comm1


vmware port-group
switchport mode private-vlan host
switchport private-vlan host-association 111 112
no shutdown
state enabled

port-profile type vethernet PVLAN-Comm2


vmware port-group
switchport mode private-vlan host
switchport private-vlan host-association 111 113
no shutdown
state enabled
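The connectivity matrix we are about to verify falls straight out of the PVLAN rules: promiscuous ports reach everyone, community ports reach their own community plus any promiscuous port, and different communities never reach each other. A small Python model of those rules (my own sketch, for reasoning about the expected ping results):

```python
def pvlan_can_talk(a, b):
    """a and b are ('promiscuous', None) or ('community', secondary_vlan)."""
    kind_a, comm_a = a
    kind_b, comm_b = b
    # A promiscuous port (our trunk toward the 3750 SVI) talks to everything.
    if kind_a == "promiscuous" or kind_b == "promiscuous":
        return True
    # Two community ports only talk within the same community VLAN.
    return comm_a == comm_b

svi_3750 = ("promiscuous", None)   # reached via the PVLAN-Prom-Trunk uplink
www1 = www2 = ("community", 112)   # PVLAN-Comm1
www3 = ("community", 113)          # PVLAN-Comm2

print(pvlan_can_talk(www1, www2))      # True  - same community
print(pvlan_can_talk(www1, www3))      # False - different communities
print(pvlan_can_talk(svi_3750, www3))  # True  - promiscuous reaches all
```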

In vCenter, navigate to ESXi2, click the Configuration tab, click Networking,


select vSphere Distributed Switch, and click Manage Physical Adapters.

Under VM-Guests, click Remove for vmnic4.

Click Yes.

Under the PVLAN-Prom-Trunk port profile, click Click to Add NIC.

Select the unclaimed vmnic4. Click OK.

Click OK.

Notice that vmnic4 appears connected under the PVLAN-Prom-Trunk port profile.

Do the same for ESXi1.

In N1Kv, note that this will temporarily move all connections to Sub-Group ID 3 while
we test PVLANs. Don't worry; we'll reverse everything when we've finished this task,
to regain normal operations for the remainder of the tasks.
N1Kv-01# module vem 3 execute vemcmd show port
  LTL   VSM Port  Admin  Link  State  PC-LTL  SGID  Vem Port
   17     Eth3/1     UP    UP   F/B*     305        vmnic0
   18     Eth3/2     UP    UP   F/B*     305        vmnic1
   19     Eth3/3     UP    UP   F/B*                vmnic2
   20     Eth3/4     UP    UP   FWD      306     3  vmnic3
   21     Eth3/5     UP    UP   F/B*     306     3  vmnic4
   49      Veth7     UP    UP   FWD        0     3  vmk0
   50      Veth8     UP    UP   FWD        0     3  vmk1
   51      Veth9     UP    UP   FWD        0     3  N1Kv-01-VSM-1.eth0
   52     Veth10     UP    UP   FWD        0     3  N1Kv-01-VSM-1.eth1
   53     Veth11     UP    UP   FWD        0     3  N1Kv-01-VSM-1.eth2
   54     Veth14     UP    UP   FWD        0     3  vCenter.eth0
  305        Po1     UP    UP   F/B*
  306        Po3     UP    UP   FWD

* F/B: Port is BLOCKED on some of the vlans.
  One or more vlans are either not created or
  not in the list of allowed vlans for this port.
  Please run "vemcmd show port vlans" to see the details.
N1Kv-01#

To begin testing in a simple, single-VEM fashion, before moving on to more complex
inter-VEM testing, vMotion all guests onto the ESXi1 host.

Ensure that the vMotion completes successfully and that all guests are on one VEM.

Right-click Win2k8-www-1 and click Edit Settings.

Click Add to add an Ethernet adapter.

Click Ethernet Adapter and click Next.

Choose the E1000 adapter type and select the PVLAN-Comm1 (N1Kv-01) port
profile/group. Click Next.

Click Finish.

Ensure that everything is correct and click OK.

RDP to Win2k8-www-1, click the icon for networks in the system tray, and click
Open Network and Sharing Center.

Click Change Adapter Settings.

Right-click Local Area Connection 2 and click Properties.

Click Properties for IPv4.

Enter the IP address 10.0.111.111/24 and click OK.

Click Close.

Edit settings for Win2k8-www-2, and add another adapter and place it in PVLAN-Comm1
(N1Kv-01). Also, RDP to this host and follow the same steps as before to
set the IP address to 10.0.111.112/24.

Edit settings for Win2k8-www-3, add another adapter and place it instead in PVLAN-Comm2
(N1Kv-01). Also, RDP to this host and follow the same steps as before to
set the IP address to 10.0.111.113/24.
Note that this one is in a different PVLAN port profile.

Verification
From Cmd in Win2k8-www-1, try to ping 10.0.111.112 and notice that it is
successful. Also try to ping 10.0.111.113 and notice that it is unsuccessful.

From Cmd in Win2k8-www-2, try to ping 10.0.111.111 and notice that it is


successful. Also try to ping 10.0.111.113 and notice that it is unsuccessful.

Finally, from Cmd in Win2k8-www-3, try to ping both 10.0.111.111 and 10.0.111.112
and notice that both are unsuccessful.

Ensure that from the 3750 VLAN 111 SVI we can ping all three adapters.
DC-3750#ping 10.0.111.111

Type escape sequence to abort.


Sending 5, 100-byte ICMP Echos to 10.0.111.111, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 1/2/8 ms
DC-3750#ping 10.0.111.112

Type escape sequence to abort.


Sending 5, 100-byte ICMP Echos to 10.0.111.112, timeout is 2 seconds:
.!!!!

Success rate is 80 percent (4/5), round-trip min/avg/max = 1/3/9 ms


DC-3750#ping 10.0.111.113

Type escape sequence to abort.


Sending 5, 100-byte ICMP Echos to 10.0.111.113, timeout is 2 seconds:
.!!!!
Success rate is 80 percent (4/5), round-trip min/avg/max = 1/3/9 ms
DC-3750#

Now we will try PVLANs across VEMs. vMotion Win2k8-www-2 over to ESXi2.

Ensure that it has moved successfully.

From Cmd in Win2k8-www-2, try to ping 10.0.111.111 and notice that it is


successful. Also try to ping 10.0.111.113 and notice that it is still unsuccessful.

From Cmd in Win2k8-www-1, try to ping 10.0.111.112 and notice that it is still
successful. Also try to ping 10.0.111.113 and notice that it is still unsuccessful.

And from the 3750 VLAN 111 SVI, we can still ping all three adapters.
DC-3750#ping 10.0.111.113

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.111.113, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/8 ms
DC-3750#ping 10.0.111.112

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.111.112, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/4/9 ms
DC-3750#ping 10.0.111.111

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.111.111, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/9 ms
DC-3750#

UCS Technology Labs - Nexus 1000v on UCS


DHCP Snooping in Nexus 1000v
Task
Configure a DHCP server running on 3750 in VLAN 111.
Monitor all DHCP requests and responses in N1Kv.

Pre-Verification
On the 3750, remember that before we enable a DHCP server we must have an L3
interface (SVI) to receive the DHCP requests; then we define the DHCP pool.
interface vlan111
ip address 10.0.111.254 255.255.255.0
no shutdown
!
ip dhcp pool VLAN-111
network 10.0.111.0 255.255.255.0
dns-server 8.8.8.8
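As an optional refinement (our addition, not required by this task), you can keep the SVI's own address from ever being leased out by excluding it from the pool:

ip dhcp excluded-address 10.0.111.254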

Turn on DHCP server debugging so that we can monitor what is happening on the server.
DC-3750# debug ip dhcp server packet
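If you are connected to the 3750 via Telnet or SSH rather than the console, remember that debug output only appears on your session after enabling terminal monitoring:

DC-3750# terminal monitor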

On a VM running on a VEM in N1Kv (such as Win2k8-www-1), right-click the
adapter in VLAN 111 and click Properties.

Click Properties for IPv4.

Enable DHCP and click OK.

Click Close.

On the 3750, monitor the successful DHCP request and response.


DC-3750#
*Mar 21 23:37:42.721: DHCPD: Reload workspace interface Vlan111 tableid 0.
*Mar 21 23:37:42.721: DHCPD: tableid for 10.0.111.254 on Vlan111 is 0
*Mar 21 23:37:42.721: DHCPD: client's VPN is .
*Mar 21 23:37:42.721: DHCPD: using received relay info.
*Mar 21 23:37:42.721: DHCPD: Sending notification of DISCOVER:
*Mar 21 23:37:42.721: DHCPD: htype 1 chaddr 0050.56bb.738c
*Mar 21 23:37:42.721: DHCPD: interface = Vlan111
*Mar 21 23:37:42.721: DHCPD: class id 4d53465420352e30
*Mar 21 23:37:42.721: DHCPD: out_vlan_id 0
*Mar 21 23:37:42.721: DHCPD: DHCPDISCOVER received from client 0100.5056.bb73.8c on interface Vlan111.
*Mar 21 23:37:42.721: DHCPD: using received relay info.
*Mar 21 23:37:42.721: DHCPD: Sending notification of DISCOVER:
*Mar 21 23:37:42.721: DHCPD: htype 1 chaddr 0050.56bb.738c
*Mar 21 23:37:42.721: DHCPD: interface = Vlan111
*Mar 21 23:37:42.721: DHCPD: class id 4d53465420352e30
*Mar 21 23:37:42.721: DHCPD: out_vlan_id 0
*Mar 21 23:37:44.734: DHCPD: Adding binding to radix tree (10.0.111.1)
*Mar 21 23:37:44.734: DHCPD: Adding binding to hash tree
*Mar 21 23:37:44.734: DHCPD: assigned IP address 10.0.111.1 to client 0100.5056.bb73.8c. (2179 0)
*Mar 21 23:37:44.734: DHCPD: DHCPOFFER notify setup address 10.0.111.1 mask 255.255.255.0
*Mar 21 23:37:44.734: DHCPD: Sending DHCPOFFER to client 0100.5056.bb73.8c (10.0.111.1).
*Mar 21 23:37:44.734: DHCPD: no option 125
*Mar 21 23:37:44.734: DHCPD: Check for IPe on Vlan111
*Mar 21 23:37:44.734: DHCPD: creating ARP entry (10.0.111.1, 0050.56bb.738c).
*Mar 21 23:37:44.734: DHCPD: unicasting BOOTREPLY to client 0050.56bb.738c (10.0.111.1).
*Mar 21 23:37:44.751: DHCPD: Reload workspace interface Vlan111 tableid 0.
*Mar 21 23:37:44.751: DHCPD: tableid for 10.0.111.254 on Vlan111 is 0
*Mar 21 23:37:44.751: DHCPD: client's VPN is .
*Mar 21 23:37:44.751: DHCPD: DHCPREQUEST received from client 0100.5056.bb73.8c.
*Mar 21 23:37:44.751: DHCPD: Sending notification of ASSIGNMENT:
*Mar 21 23:37:44.751: DHCPD: address 10.0.111.1 mask 255.255.255.0
*Mar 21 23:37:44.751: DHCPD: htype 1 chaddr 0050.56bb.738c
*Mar 21 23:37:44.751: DHCPD: lease time remaining (secs) = 86400
*Mar 21 23:37:44.751: DHCPD: interface = Vlan111
*Mar 21 23:37:44.751: DHCPD: out_vlan_id 0
*Mar 21 23:37:44.751: DHCPD: Sending DHCPACK to client 0100.5056.bb73.8c (10.0.111.1).
*Mar 21 23:37:44.751: DHCPD: no option 125
*Mar 21 23:37:44.751: DHCPD: Check for IPe on Vlan111
*Mar 21 23:37:44.751: DHCPD: creating ARP entry (10.0.111.1, 0050.56bb.738c).
*Mar 21 23:37:44.751: DHCPD: unicasting BOOTREPLY to client 0050.56bb.738c (10.0.111.1).
*Mar 21 23:37:47.821: DHCPD: Reload workspace interface Vlan111 tableid 0.
*Mar 21 23:37:47.821: DHCPD: tableid for 10.0.111.254 on Vlan111 is 0
*Mar 21 23:37:47.821: DHCPD: client's VPN is .
*Mar 21 23:37:47.821: DHCPD: DHCPINFORM received from client 0100.5056.bb73.8c (10.0.111.1).
*Mar 21 23:37:47.821: DHCPD: Sending DHCPACK to client 0100.5056.bb73.8c (10.0.111.1).
*Mar 21 23:37:47.821: DHCPD: no option 125
*Mar 21 23:37:47.821: DHCPD: unicasting BOOTREPLY to client 0050.56bb.738c (10.0.111.1).
DC-3750#

DC-3750# sh ip dhcp binding
Bindings from all pools not associated with VRF:
IP address      Client-ID/              Lease expiration        Type
                Hardware address/
                User name
10.0.111.1      0100.5056.bb73.8c       Mar 22 1993 11:37 PM    Automatic
DC-3750#

On Win2k8-www-1, note the successful IP address assignment.

Configuration
On N1Kv:
svs switch edition advanced
feature dhcp
ip dhcp snooping
ip dhcp snooping vlan 111
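Before testing, a quick sanity check on the VSM confirms that the feature is enabled globally and on VLAN 111 (output will vary with your environment):

N1Kv-01# show ip dhcp snooping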

Verification
Notice that the snooping binding table is empty.
N1Kv-01(config)# sh ip dhcp snooping binding
MacAddress         IpAddress        LeaseSec  Type        VLAN  Interface
-----------------  ---------------  --------  ----------  ----  -------------
N1Kv-01(config)#

On Win2k8-www-2, right-click the NIC that is configured in VLAN 111 and click
Properties.

Enable DHCP and click OK, and then click Close.

Open the shell and perform an ipconfig to see the adapter get an IP address.

On 3750, watch the server to see if an address is obtained.


*Mar 21 23:43:04.482: DHCPD: checking for expired leases.
*Mar 21 23:43:44.949: DHCPD: Reload workspace interface Vlan111 tableid 0.
*Mar 21 23:43:44.949: DHCPD: tableid for 10.0.111.254 on Vlan111 is 0
*Mar 21 23:43:44.949: DHCPD: client's VPN is .
*Mar 21 23:43:44.949: DHCPD: using received relay info.
*Mar 21 23:43:44.949: DHCPD: Sending notification of DISCOVER:
*Mar 21 23:43:44.949: DHCPD: htype 1 chaddr 0050.56bb.404c
*Mar 21 23:43:44.949: DHCPD: interface = Vlan111
*Mar 21 23:43:44.949: DHCPD: class id 4d53465420352e30
*Mar 21 23:43:44.949: DHCPD: out_vlan_id 0
*Mar 21 23:43:44.949: DHCPD: DHCPDISCOVER received from client 0100.5056.bb40.4c on interface Vlan111.
*Mar 21 23:43:44.949: DHCPD: using received relay info.
*Mar 21 23:43:44.949: DHCPD: Sending notification of DISCOVER:
*Mar 21 23:43:44.949: DHCPD: htype 1 chaddr 0050.56bb.404c
*Mar 21 23:43:44.949: DHCPD: interface = Vlan111
*Mar 21 23:43:44.949: DHCPD: class id 4d53465420352e30
*Mar 21 23:43:44.949: DHCPD: out_vlan_id 0
*Mar 21 23:43:46.962: DHCPD: Adding binding to radix tree (10.0.111.2)
*Mar 21 23:43:46.962: DHCPD: Adding binding to hash tree
*Mar 21 23:43:46.962: DHCPD: assigned IP address 10.0.111.2 to client 0100.5056.bb40.4c. (2179 0)
*Mar 21 23:43:46.962: DHCPD: DHCPOFFER notify setup address 10.0.111.2 mask 255.255.255.0
*Mar 21 23:43:46.962: DHCPD: Sending DHCPOFFER to client 0100.5056.bb40.4c (10.0.111.2).
*Mar 21 23:43:46.962: DHCPD: no option 125
*Mar 21 23:43:46.962: DHCPD: Check for IPe on Vlan111
*Mar 21 23:43:46.962: DHCPD: creating ARP entry (10.0.111.2, 0050.56bb.404c).
*Mar 21 23:43:46.962: DHCPD: unicasting BOOTREPLY to client 0050.56bb.404c (10.0.111.2).
*Mar 21 23:43:46.962: DHCPD: Reload workspace interface Vlan111 tableid 0.
*Mar 21 23:43:46.962: DHCPD: tableid for 10.0.111.254 on Vlan111 is 0
*Mar 21 23:43:46.962: DHCPD: client's VPN is .
*Mar 21 23:43:46.962: DHCPD: DHCPREQUEST received from client 0100.5056.bb40.4c.
*Mar 21 23:43:46.962: DHCPD: Sending notification of ASSIGNMENT:
*Mar 21 23:43:46.962: DHCPD: address 10.0.111.2 mask 255.255.255.0
*Mar 21 23:43:46.962: DHCPD: htype 1 chaddr 0050.56bb.404c
*Mar 21 23:43:46.962: DHCPD: lease time remaining (secs) = 86400
*Mar 21 23:43:46.962: DHCPD: interface = Vlan111
*Mar 21 23:43:46.962: DHCPD: out_vlan_id 0
*Mar 21 23:43:46.962: DHCPD: Sending DHCPACK to client 0100.5056.bb40.4c (10.0.111.2).
*Mar 21 23:43:46.962: DHCPD: no option 125
*Mar 21 23:43:46.962: DHCPD: Check for IPe on Vlan111
*Mar 21 23:43:46.962: DHCPD: creating ARP entry (10.0.111.2, 0050.56bb.404c).
*Mar 21 23:43:46.962: DHCPD: unicasting BOOTREPLY to client 0050.56bb.404c (10.0.111.2).
*Mar 21 23:43:50.402: DHCPD: Reload workspace interface Vlan111 tableid 0.
*Mar 21 23:43:50.402: DHCPD: tableid for 10.0.111.254 on Vlan111 is 0
*Mar 21 23:43:50.402: DHCPD: client's VPN is .
*Mar 21 23:43:50.402: DHCPD: DHCPINFORM received from client 0100.5056.bb40.4c (10.0.111.2).
*Mar 21 23:43:50.402: DHCPD: Sending DHCPACK to client 0100.5056.bb40.4c (10.0.111.2).
*Mar 21 23:43:50.402: DHCPD: no option 125
*Mar 21 23:43:50.402: DHCPD: unicasting BOOTREPLY to client 0050.56bb.404c (10.0.111.2).

DC-3750#sh ip dhcp binding
Bindings from all pools not associated with VRF:
IP address      Client-ID/              Lease expiration        Type
                Hardware address/
                User name
10.0.111.1      0100.5056.bb73.8c       Mar 22 1993 11:37 PM    Automatic
10.0.111.2      0100.5056.bb40.4c       Mar 22 1993 11:43 PM    Automatic
DC-3750#

On N1Kv, notice that the DHCP snooping binding table is now populated with a
mapping of the MAC address to IP address to interface and VLAN.

N1Kv-01(config)# sh ip dhcp snooping binding
MacAddress         IpAddress        LeaseSec  Type        VLAN  Interface
-----------------  ---------------  --------  ----------  ----  -------------
00:50:56:bb:40:4c  10.0.111.2       86308     dhcp-snoop  111   Vethernet16
N1Kv-01(config)#

Let's perform a DHCP renew on Win2k8-www-1's VLAN 111 adapter.

Back on N1Kv, we check the snooping table again and see the new mapping.
N1Kv-01# sh ip dhcp snooping binding
MacAddress         IpAddress        LeaseSec  Type        VLAN  Interface
-----------------  ---------------  --------  ----------  ----  -------------
00:50:56:bb:40:4c  10.0.111.2       86160     dhcp-snoop  111   Vethernet16
00:50:56:bb:73:8c  10.0.111.1       86359     dhcp-snoop  111   Vethernet15
N1Kv-01#

And that seems to match up with what we see here in relation to port numbers and
guest VM names.

N1Kv-01(config)# sh int status

--------------------------------------------------------------------------------
Port           Name               Status   Vlan      Duplex  Speed    Type
--------------------------------------------------------------------------------
mgmt0          --                 up       routed    full    1000     --
Eth3/1         --                 up       trunk     full    1000     --
Eth3/2         --                 up       trunk     full    1000     --
Eth3/3         --                 up       trunk     full    unknown  --
Eth3/4         --                 up       trunk     full    unknown  --
Eth3/5         --                 up       trunk     full    unknown  --
Eth4/1         --                 up       trunk     full    1000     --
Eth4/2         --                 up       trunk     full    1000     --
Eth4/3         --                 up       trunk     full    unknown  --
Eth4/4         --                 up       trunk     full    unknown  --
Eth4/5         --                 up       trunk     full    unknown  --
Po1            --                 up       trunk     full    1000     --
Po2            --                 up       trunk     full    1000     --
Po3            --                 up       trunk     full    unknown  --
Po4            --                 up       trunk     full    unknown  --
Veth1          VMware VMkernel, v up       115       auto    auto     --
Veth2          VMware VMkernel, v up       116       auto    auto     --
Veth3          N1Kv-01-VSM-2, Net up       120       auto    auto     --
Veth4          N1Kv-01-VSM-2, Net up       121       auto    auto     --
Veth5          N1Kv-01-VSM-2, Net up       120       auto    auto     --
Veth6          Win2k8-www-1, Netw up       110       auto    auto     --
Veth7          VMware VMkernel, v up       115       auto    auto     --
Veth8          VMware VMkernel, v up       116       auto    auto     --
Veth9          N1Kv-01-VSM-1, Net up       120       auto    auto     --
Veth10         N1Kv-01-VSM-1, Net up       121       auto    auto     --
Veth11         N1Kv-01-VSM-1, Net up       120       auto    auto     --
Veth12         Win2k8-www-2, Netw up       110       auto    auto     --
Veth13         Win2k8-www-3, Netw up       110       auto    auto     --
Veth14         vCenter, Network A up                 auto    auto     --
Veth15         Win2k8-www-1, Netw up       112       auto    auto     --
Veth16         Win2k8-www-2, Netw up       112       auto    auto     --
Veth17         Win2k8-www-3, Netw up       113       auto    auto     --
control0       --                 up       routed    full    1000     --
N1Kv-01(config)#

Finally, notice that we did not have to trust the interface that the upstream DHCP
server was connected to. This is because the N1Kv is slightly different from
most Catalyst IOS or NX-OS switches with respect to DHCP snooping: The N1Kv
inherently trusts DHCP offers coming from northbound physical Ethernet interfaces,
but inherently distrusts traffic that comes from east-west vEthernet interfaces.
Although we don't have a DHCP server on Veth17, we could add one; if we did,
then we would need to explicitly trust that port.
interface Vethernet17
inherit port-profile PVLAN-Comm2
description Win2k8-www-3, Network Adapter 2
vmware dvport 480 dvswitch uuid "2a 80 3b 50 37 b7 1b 8e-18 98 be 68 05 cd ba 32"
vmware vm mac 0050.56BB.66A5
ip dhcp snooping trust

UCS Technology Labs - Nexus 1000v on UCS


Dynamic ARP Inspection in Nexus 1000v
Task
Provision N1Kv to inspect all ARP traffic in VLAN 111, and drop it if there is no
corresponding entry binding the MAC address, IP address, and requesting interface
together in the table.

Configuration
IP ARP inspection is predicated on the DHCP Snooping Binding database to
validate the MAC-address-to-interface mapping. Turn on DHCP snooping and IP ARP
inspection for VLAN 111. We were asked to also inspect for matching IP addresses,
so we'll add that argument.
On N1Kv:
svs switch edition advanced
feature dhcp
ip dhcp snooping
ip dhcp snooping vlan 111
ip arp inspection vlan 111
ip arp inspection validate src-mac ip
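Note that if a host in VLAN 111 used a static IP address (so no lease would ever be snooped), DAI would drop its ARP traffic. A static entry can be added to the binding table to accommodate it; the IP address, MAC address, and Vethernet number below are hypothetical examples, not part of this lab:

ip source binding 10.0.111.50 0050.56bb.aaaa vlan 111 interface vethernet 20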

Verification
Let's first erase the ARP cache on Win2k8-www-1 and try to ping an IP on the
same subnet as one of our adapters to make sure that ARP works properly. It looks
like we can. This is because we have just set up DHCP snooping and the table was
already populated in the previous task.
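On the Windows guest, the ARP cache can be cleared and the test run from an elevated command prompt; the ping target here is the VLAN 111 SVI as an example:

C:\> arp -d *
C:\> ping 10.0.111.254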

On N1Kv, let's look at the IP ARP inspection statistics. We see matches for permits,
and nothing dropped so far.
N1Kv-01(config)# sh ip arp inspection vlan 111

Source Mac Validation      : Enabled
Destination Mac Validation : Disabled
IP Address Validation      : Enabled

Filter Mode (for static bindings): IP-MAC

Vlan : 111
-----------
Configuration        : Enabled
Operation State      : Active
DHCP logging options : Deny

N1Kv-01(config)# sh ip arp inspection statistics vlan 111

Vlan : 111
-----------
ARP Req Forwarded  = 24
ARP Res Forwarded  = 6
ARP Req Dropped    = 0
ARP Res Dropped    = 0
DHCP Drops         = 0
DHCP Permits       = 14
SMAC Fails-ARP Req = 0
SMAC Fails-ARP Res = 0
DMAC Fails-ARP Res = 0
IP Fails-ARP Req   = 0
IP Fails-ARP Res   = 0

N1Kv-01(config)#

What will happen if we clear the DHCP Snooping Binding database?


N1Kv-01(config)# clear ip dhcp snooping binding
N1Kv-01(config)# sh ip dhcp snooping binding
MacAddress         IpAddress        LeaseSec  Type        VLAN  Interface
-----------------  ---------------  --------  ----------  ----  -------------
N1Kv-01(config)#

Let's clear the ARP table on Win2k8-www-1 again, and try to ping. It is clear that we
have a problem; IP ARP inspection is doing its job and blocking our request
(because we don't have a corresponding entry in the DHCP snooping table).

On N1Kv, let's look again at the IP ARP inspection statistics. We now see ARP
Reqs dropped.
N1Kv-01(config)# sh ip arp inspection vlan 111

Source Mac Validation      : Enabled
Destination Mac Validation : Disabled
IP Address Validation      : Enabled

Filter Mode (for static bindings): IP-MAC

Vlan : 111
-----------
Configuration        : Enabled
Operation State      : Active
DHCP logging options : Deny

N1Kv-01(config)# sh ip arp inspection statistics vlan 111

Vlan : 111
-----------
ARP Req Forwarded  = 24
ARP Res Forwarded  = 6
ARP Req Dropped    = 4
ARP Res Dropped    = 0
DHCP Drops         = 0
DHCP Permits       = 14
SMAC Fails-ARP Req = 0
SMAC Fails-ARP Res = 0
DMAC Fails-ARP Res = 0
IP Fails-ARP Req   = 0
IP Fails-ARP Res   = 0

N1Kv-01(config)#

But if we ask the DHCP server for our lease again, we should see the binding
repopulated in the database and be able to ping again. Note here that you MUST do
a DHCP Release and then a DHCP Renew; otherwise, the VM guest already knows
its DHCP server and will attempt to unicast to it, which will fail because of ARP
inspection. We will need to release and renew on both VM guests, but screen shots
for just one are shown because they are identical in execution.
Release.

Renew.
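The same release and renew can be performed from the guest's command prompt instead of the adapter GUI:

C:\> ipconfig /release
C:\> ipconfig /renew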

We see the DHCP Snooping Binding database restored, and pings should work
again.
N1Kv-01(config)# sh ip dhcp snooping binding
MacAddress         IpAddress        LeaseSec  Type        VLAN  Interface
-----------------  ---------------  --------  ----------  ----  -------------
00:50:56:bb:40:4c  10.0.111.2       86392     dhcp-snoop  111   Vethernet16
00:50:56:bb:73:8c  10.0.111.1       86082     dhcp-snoop  111   Vethernet15
N1Kv-01(config)#

We try the ping again, and connectivity is restored!

UCS Technology Labs - Nexus 1000v on UCS


IP Source Guard in Nexus 1000v
Task
Ensure VM-Guests' IP addresses are monitored to make sure they match the MAC
addresses originally used to get a DHCP address.

Configuration
svs switch edition advanced
feature dhcp
ip dhcp snooping
ip dhcp snooping vlan 111

port-profile type ethernet VM-Guests
vmware port-group
switchport mode trunk
ip verify source dhcp-snooping-vlan
switchport trunk allowed vlan 1,110-112,120-121
channel-group auto mode on mac-pinning
no shutdown
system vlan 1,112,120-121
state enabled
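To verify that IP Source Guard is filtering, you can check which interfaces have active source filters and which bindings they permit; the output will depend on the current DHCP leases:

N1Kv-01# show ip verify source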

UCS Technology Labs - Nexus 1000v on UCS


Netflow in Nexus 1000v
Task
Configure N1Kv to collect NetFlow for all traffic Tx or Rx on Win2k8-www-2.

Configuration
Although we don't really have a way to test this (no collectors installed), we can still
benefit from learning the configuration syntax.
flow exporter Netflow-Export-1
destination 10.0.110.10
transport udp 2055
source mgmt0
version 9

flow record Netflow-Record-1
match ipv4 destination address
collect counter packets

flow monitor Netflow-Monitor-1
record Netflow-Record-1
exporter Netflow-Export-1
timeout active 1000
cache size 2048

interface Vethernet12
ip flow monitor Netflow-Monitor-1 input
ip flow monitor Netflow-Monitor-1 output
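Even without a collector, the exporter and monitor definitions, and the flow cache itself, can be inspected from the VSM; the module number below is a hypothetical example and should match the VEM hosting Win2k8-www-2:

N1Kv-01# show flow exporter Netflow-Export-1
N1Kv-01# show flow monitor Netflow-Monitor-1 cache module 3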

UCS Technology Labs - VM-FEX on UCS


VM-FEX Preparation
Task
Associate the ESXi-VMFEX-iSCSI service profile to a blade and install VMware ESXi
using boot-from-iSCSI.
Give VM-FEX ESXi Kernel the IP address 10.0.115.21/24 with a default
gateway of 10.0.115.254.
The root user should have the password cciedc01.
Use a separate vCenter instance that is running at IP address 10.0.1.100/24 with
default gateway 10.0.1.254.
Make sure that you power off the other vCenter instance running at the
same IP address first!
Assign it to a datacenter named VMFEX-DC1, and add the 10.0.115.21 host
to the cluster.
Add to the vCenter inventory, from the datastore, one instance for each guest of
Win2k8-www-1 and Win2k8-www-2, and assign them the IP addresses 10.0.110.211
and 10.0.110.212, respectively, via their consoles.
This task has already been performed for you, but installation and
setup are included here as a guide for those who have never
performed this task and for anyone who happens to be setting up his
own lab. Note that we do not cover the entire installation here,
because we have covered this in previous labs.

Configuration
Be sure to properly shut down (if previously active) VM guests and ESXi hosts
running on both CUST1 service profiles, freeing up a blade with a Palo card for use
with VM-FEX. Boot the server and launch the KVM.
A blade with a Palo card (M81KR, VIC1240, VIC1280) must be used
for VM-FEX.

Note that this is booting from an iSCSI disk that was set up via a gateway to an FC
target on MDS1 in a previous lab.

Choose the proper ESXi-VMFEX-Boot LUN.

Enter the VLAN ID 115, because we did not choose this as a native VLAN earlier in
UCSM.

Enter the IP address, subnet mask, and default gateway as shown. Click OK.

On Server1, load VMware vSphere Client and enter the proper IP address, user
name, and password, and click Login.

Click the Configuration tab, click Network Adapters, and note that vmnic5 is
allocated to the iSCSIBootSwitch.

Click the Configuration tab, click Networking, select vSphere Standard Switch,
and note the same. Click Add Networking.

Go through all the necessary steps (covered in previous labs) to add physical
adapters and standard vSwitches to allow networking to appear as shown here.

Click the Configuration tab, click Storage Adapters, note the vmhba33 allocated
to iSCSI, and note the devices seen.

Also note the paths. Click Properties.

The iSCSI Initiator Properties dialog box appears.

Click the Network Configuration tab, and then click Add.

Choose iSCSIBootPG on vmk1/vmnic5 and click OK.

Click Close.

Click Yes to rescan the bus.

Click the Configuration tab, click Storage, and click to create a datastore.

Select Disk/LUN. Click Next.

Choose the larger 48GB LUN and click Next.

Select VMFS-5 and click Next.

Note the iSCSI disk. Click Next.

Assign the datastore a name that is different from previous datastore names. Here
we have chosen vmfex-datastore. Click Next.

Select max space. Click Next.

Click Finish.

Notice the datastore come online.

On the File menu, click Deploy OVF Template.

Find the source for your vCenter template.


Notice that we are choosing to use a separate vCenter instance for
VM-FEX so that we do not cause any confusion in the learning
process with multiple DVSs running on the same cluster. Of course,
vCenter does support multiple DVSs running on the same cluster.

We'll name it differently to prevent any confusion with previous labs.

Selecting Thin Provisioned is a good way to save space in a lab (not supported in
production for vCenter, of course).

Edit the settings of the vCenter guest.

Change the network adapter to the vCenter vSwitch.

Assign it the IP address, subnet mask, and default gateway as shown.

Log in to vCenter.

Create a new datacenter named VMFEX-DC1.

Add the ESXi host at 10.0.115.21.

Enter the IP address, the root user, and the password cciedc01.

Note that everything is running properly in vCenter.

Click the Configuration tab, click Storage, right-click vmfex-datastore, and
browse it.

In the left pane, click Win2k8-www-1. In the right pane, right-click the .vmx file and
click Add to Inventory. Click through the next few screens to finish adding it to the
inventory.

Note that it was added properly.

Do the same, this time for Win2k8-www-2.

Note that it was added properly.

UCS Technology Labs - VM-FEX on UCS


VM-FEX vCenter Plugin, VEM Installation, and Port
Profiles
Task
Integrate vCenter with the UCSM for use with VM-FEX.
Install VEM on ESXi host at 10.0.115.21.
Create a DVS in vCenter with the name VM-FEX-DVS1.
Create a VMKernel port profile named VMK_PoProf with the VLAN 115, allow CDP,
and assign it to CoS 5 in UCSM.
Create a VM guest port profile named VM-110_PoProf with the native VLAN 110 and
allow CDP.

Configuration
In the UCSM, in the left navigation pane, click the VM tab, click VMware, and in the
right pane, click Configure VMware Integration.

Click Export.

Choose a location such as the Desktop and click OK to export the XML key.

Click OK.

In vCenter, click Plug-ins and then click Manage Plug-ins.

Right-click anywhere in the free space and click New Plug-in.

Browse to the location to which you exported the XML extension key. Note that this
XML file is called cisco_nexus_1000v_extension.xml. This is because the same
VEM is used for VM-FEX as is used in the N1Kv product. In VM-FEX, the UCSM FIs
act as VSM1 and VSM2 for this DVS. Click Register Plug-in.

Click Ignore.

Click OK.

Note the successful installation of the key.

Back in the wizard, click Next.

Enter the vCenter name and IP address, the proper datacenter name in vCenter,
create a folder name and a DVS name, and enable the DVS. Click Next.
Note that a password is not required for vCenter; the key registration
eliminated the need for a password.

Enter the details as shown for the VLAN 110 port profile, and note at the bottom that
we must create something called a Profile Client. The port profile is the same concept
that we used previously in the N1Kv labs. The profile client here is basically a vehicle
for SVS that allows the port profile to be pushed to a DVS. This is because we might
have UCSM integrated into different vCenter datacenters, and we may even want to
use the same port profiles but create them in multiple different DVSs in different
vCenter datacenters.

Click Finish.

Click OK.

Navigate to Home >> Inventory >> Networking.

In the left navigation pane, click VM-FEX-DVS1. In the right pane, click the
Configuration tab and notice the DVS with the new VM-110_PoProf on the VM
guest side of the switch. Also take note of what has been occurring in Recent Tasks
at the bottom. This is all being pushed from the UCSM FIs.

Back in UCSM, on the VM tab in the left navigation pane, click the new port profile.
In the right pane, click the General tab and note that this is where the wizard
created this port profile.

In the right pane, click the Profile Clients tab and note the profile client that was
created by the wizard.

In the left navigation pane, right-click Port Profile and click Create Port Profile.

Enter the details as shown for the VMKernel VLAN 115 port profile and make sure
that the VLAN is not marked as native. Click OK.

Click OK.

Note the successful creation of the port profile in UCSM.

However, in vCenter's VM-FEX DVS, notice that the port group has not appeared
yet. This is because we still need to create a profile client in UCSM to push it over.

Back in UCSM, click the new Port Profile in the left navigation pane, and in the right
pane click the Profile Clients tab and click +.

Define a meaningful name, and fill in the fields to push it to the proper (only, in our
case) DVS. Click OK.

Click OK.

Note the successful creation of the profile client in UCSM.

Also note the successful creation of the port group now in the vCenter DVS.

Now it is time to try to install the VEM on the ESXi host, because all we have so far
is supervisor modules (the UCSM FIs) and no real linecards to forward any traffic.
In vCenter, right-click the DVS and click Add Host.

Click the only ESXi instance we have installed here, and select vmnic4 to migrate.
Click Next.

Tell the wizard to try to migrate the ESXi host's management vmk0 interface to the
new VMK_PoProf group. Click Next.

Configure the two Win2k8 guests to migrate to the VM-110_PoProf port groups also.
Click Next.

Click Finish.

Note the progress of VUM trying to install the VEM VDS module on the ESXi host at
the bottom in Recent Tasks.

Notice that the operation here seems to have failed. There could be a variety of
reasons for this, not the least of which is that we chose a vmnic for uplink purposes
that carries the VLANs for the guests but does not carry the VLAN 115 needed for
management. However, we will discuss an alternative method so that you know how
to manually install a VEM module in an ESXi host, should the need ever arise.

Navigate back to Home >> Inventory >> Hosts and Clusters.

Navigate to the ESXi host, click the Configuration tab, click Security Profile, and
click Properties.

Select ESXi Shell and click Options.

Notice that it is stopped. Select the option to start automatically and then click Start.

Notice that it is now running. Click OK.

Select SSH and click Options.

Select the option to start automatically and then click Start. Click OK.

Notice on the Summary tab that the ESXi Shell and SSH have been enabled.
VMware doesn't typically like this, so it is reported as a "Configuration Issue." Also
note the hazard triangle with an exclamation mark in the left pane on the ESXi host.

Now we must get the VEM file needed to install to the ESXi host directly. Browse to
the UCSM FI virtual IP address and ignore the SSL warning.

Click the option at the bottom to obtain VM-FEX software.

Right-click to download the highlighted .zip file that is for ESXi 5.0 or later.

Select Save Target As.

Choose a location such as the Desktop and save it there.

Back in vCenter, click the ESXi host, click the Configuration tab, click Storage,
select Datastores, right-click vmfex-datastore, and browse to it.

Click the Upload File icon.

Choose the file from where you stored it (the Desktop was recommended) and
upload it to the datastore (so that we can access it later from SSH CLI). Close the
datastore window when finished.

Click the Summary tab, and click Enter Maintenance Mode; make sure that all
guests are powered off first. This will require exiting the vSphere client that is logged
in to the vCenter, relaunching the client, and logging directly in to the ESXi host.

Click Yes.

From the 3750, we SSH over to the ESXi host. We could do this from Server 1 just as
well with PuTTY, but it is best not to use an NX-OS box as an SSH client: because
NX-OS is Linux at its core, it keeps a .known_hosts file, and if you have SSH'd to the
same IP address before but the RSA key has since changed (perhaps because you
reinstalled ESXi on another occasion with the same IP address), you won't be
allowed in.
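If you do run into the stale-key problem on an NX-OS box, the cached host keys can be flushed rather than avoided; the prompt below is a generic example:

switch# clear ssh hosts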
We will SSH in and run the VMware VIB installer.
DC-3750# ssh -l root 10.0.115.21

Password:
The time and date of this login have been sent to the system logs.

VMware offers supported, powerful system administration tools.  Please
see www.vmware.com/go/sysadmintools for details.

The ESXi Shell can be disabled by an administrative user. See the
vSphere Security documentation for more information.
~ # ls -lh /vmfs/volumes/vmfex-datastore/
-rw-------    1 root     root        3.1M Mar  7 19:27 VEM500-20110825132140-BG-release.zip
drwxr-xr-x    1 root     root        1.5k Mar  7 00:28 VMFEX-vCenter
drwxr-xr-x    1 root     root        2.2k Mar  7 19:06 Win2k8-www-1
drwxr-xr-x    1 root     root        1.8k Mar  7 17:56 Win2k8-www-2
~ # esxcli software vib install --depot=/vmfs/volumes/vmfex-datastore/VEM500-20110825132140-BG-release.zip
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: Cisco_bootbank_cisco-vem-v132-esx_4.2.1.1.4.1.0-3.0.4
   VIBs Removed:
   VIBs Skipped:
~ #
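Before leaving the shell, the newly installed module can be checked directly on the host; both commands are provided by the Cisco VEM VIB:

~ # vem status -v
~ # vemcmd show port

vem status reports whether the VEM agent is loaded and running, and the port listing will populate once the host is attached to the DVS.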

Back in the vSphere Client, take the ESXi host out of Maintenance Mode.

Power on all of the guests. Wait a while for the vCenter guest to come back online
and fully restart all services.

Exit the vSphere client that is logged directly in to the ESXi host, and log back in to
the vCenter. Navigate to Home >> Inventory >> Networking, right-click
VM-FEX-DVS1, and click Add Host.

Choose the ESXi host and same vmnic4 adapter as before and click Next.

Choose not to migrate any of the VMkernel interfaces over at this time. Click Next.

Choose not to migrate any of the VMs over at this time, either. Click Next.

Click Finish.

Notice that this time the host adds properly to the DVS.

Navigate back to Home >> Inventory >> Hosts and Clusters.

From the ESXi host, click the Configuration tab, click Networking, and select
vSphere Distributed Switch. Note the newly installed DVS and the single
connected uplink adapter and port group.

UCS Technology Labs - VM-FEX on UCS


VM-FEX with PassThru Switching (PTS)
Task
Migrate VM guest machine Win2k8-www-2 to the new VM-FEX DVS and configure it
for PassThru Switching (PTS).
Ensure that Server1 connected to N5K1 can ping the guests.

Configuration
In vCenter, in the left pane, right-click Win2k8-www-2 and click Edit Settings.

Click Network Adapter 1 and click the drop-down on the right to change the port
group to VM-110_PoProf (VM-FEX-DVS1).

If all we want to use is PassThru Switching, an E1000 adapter type is suitable. Note
that DirectPath I/O is not supported on this adapter type. Click OK.

In the right pane, click the Configuration tab, select vSphere Distributed Switch,
and notice that the VM is now a part of the DVS.

Verification
Back in UCSM, in the left navigation pane, click the VM tab, expand Virtual
Machines, Host Blade 1/1, Virtual Machines, Virtual Machine Win2k8-www-2,
and click vNIC0. In the right pane, click the VIFs tab and notice the two VIFs, one to
each fabric. Only one is active for forwarding, although both appear as
operationally active. This is because UCSM pre-provisions all fabric-failover-enabled
vNICs on both FIs, but only forwards traffic out of the designated FI. The other FI is
fully ready if any network failure occurs on the primary FI.

In NX-OS on UCS-FI-A:
INE-UCS-01-A(nxos)# sh int br | in Veth

Vethernet      VLAN   Type Mode   Status  Reason                   Speed
Veth709               eth  trunk  up      none                     auto
Veth711               eth  trunk  up      none                     auto
Veth713               eth  trunk  up      none                     auto
Veth716               eth  trunk  up      none                     auto
Veth717        117    eth  trunk  up      none                     auto
Veth32769      110    eth  trunk  up      none                     auto
INE-UCS-01-A(nxos)#

In NX-OS on UCS-FI-B:
INE-UCS-01-B(nxos)# sh int br | in Veth

Vethernet      VLAN   Type Mode   Status  Reason                   Speed
Veth710               eth  trunk  up      none                     auto
Veth712               eth  trunk  up      none                     auto
Veth714               eth  trunk  up      none                     auto
Veth715               eth  trunk  up      none                     auto
Veth718        117    eth  trunk  up      none                     auto
Veth32769      110    eth  trunk  up      none                     auto
INE-UCS-01-B(nxos)#

From Server1 connected to N5K1, try to ping the guest at the IP address
10.0.110.212. We can see that traffic flows through our FIs and down to the VEM
running on our VM-FEX DVS.

Back in NX-OS on FI-A, we can see that we have packets matched from the ping.
INE-UCS-01-A(nxos)# sh int Veth 32769
Vethernet32769 is up
    Bound Interface is port-channel1280
    Hardware: Virtual, address: 547f.eec5.7d40 (bia 547f.eec5.7d40)
    Encapsulation ARPA
    Port mode is trunk
    EtherType is 0x8100
    Rx
    22 input packets  6 unicast packets  15 multicast packets
    1 broadcast packets  2245 bytes
    0 input packet drops
    Tx
    70 output packets  8 unicast packets  11 multicast packets  51 broadcast packets
    6419 bytes
    0 flood packets
    0 output packet drops
INE-UCS-01-A(nxos)#
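As a quick sanity check, the per-type packet counters in that output should sum to the totals NX-OS reports. A small Python sketch using the numbers from the transcript:

```python
# Counter values taken from the "sh int Veth 32769" output above.
rx = {"unicast": 6, "multicast": 15, "broadcast": 1, "total": 22}
tx = {"unicast": 8, "multicast": 11, "broadcast": 51, "total": 70}

for name, c in (("Rx", rx), ("Tx", tx)):
    # Per-type counts should add up to the total packets reported.
    parts = c["unicast"] + c["multicast"] + c["broadcast"]
    print(f"{name}: {parts} == {c['total']} -> {parts == c['total']}")
```

Both directions check out, confirming the ping traffic was counted consistently on the Veth.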

UCS Technology Labs - VM-FEX on UCS


VM-FEX with DirectPathIO (High-Performance VMFEX)
Task
Create a new high-performance port profile named VM-110-HiPerf in native VLAN
110 that allows VM-FEX guests to have all networking traffic bypass the ESXi
hypervisor.
Make any changes necessary to support VMDirectPathIO in either UCSM or
vCenter, ESXi or on the guest VM itself.
Migrate VM guest machine Win2k8-www-1 to the new VM-FEX DVS and configure
for DirectPathIO High Performance switching.

Configuration
Before we begin, read the following particularly helpful lines from the
Cisco VM-FEX Using VMware ESX Environment Troubleshooting
Guide. They go a bit beyond the scope of what we cover here (we
won't discuss editing conf files inside the ESXi host), but they are of
particular importance if you have a non-default ESXi installation with
customized Tx, Rx, or Completion Queues.
Interrupts

In VMDirectPath mode, the interrupt mode defaults to MSI-X, which consumes 4
interrupt vectors from the ESX host's interrupt pool (maximum limit of 128). If the
ESX host reports that the interrupt pool is full and cannot allocate interrupt vectors,
any attempt to change a virtual machine from emulated mode to VMDirectPath
mode will fail, and vCenter reports an inactive status in the virtual machine's
network properties pane.
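The arithmetic in that note can be sketched as a quick capacity check. The pool size of 128 and the 4 vectors per vNIC come from the note above; the function name is illustrative, not an ESXi API:

```python
# From the troubleshooting-guide note: each VMDirectPath vNIC in MSI-X mode
# consumes 4 interrupt vectors from the host pool of 128.
POOL_SIZE = 128
VECTORS_PER_VNIC = 4

def can_allocate(passthru_vnics, already_used=0):
    """True if the host interrupt pool can absorb this many
    VMDirectPath vNICs on top of vectors already consumed."""
    return already_used + passthru_vnics * VECTORS_PER_VNIC <= POOL_SIZE

print(can_allocate(10))                    # plenty of headroom
print(can_allocate(10, already_used=120))  # pool exhausted: mode change would fail
```

A host that fails this check will leave the VM stuck in emulated mode, which matches the "inactive" status the guide describes.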
Transmit/Receive/Completion Queues

To enable VMDirectPath mode with RSS/multi-queue, you need to ensure that the
adapter policy is configured accordingly. The values can be set in Eth Adapter Policy
VMWare PassThru and in VMXNET3 driver. The values for these queues must meet
the following criteria:
The values set for Transmit Queues (TQs) in VMXNET3 must not exceed
the values set for TQs in Eth Adapter Policy VMWare PassThru
(largest VMXNET3.NumTQs on host <= dynamicPolicy.NumTQs).
The values set for Receive Queues (RQs) in VMXNET3 must not exceed
the values set for RQs in Eth Adapter Policy VMWare PassThru
(largest VMXNET3.NumRQs on host <= dynamicPolicy.NumRQs).
The values set for Completion Queues (CQs) in VMXNET3 must not exceed
the values set for CQs in Eth Adapter Policy VMWare PassThru
(largest VMXNET3.NumCQs on host <= dynamicPolicy.NumCQs).
The values set for Interrupts (INTRs) in VMXNET3 must not exceed
the values set for INTRs in Eth Adapter Policy VMWare PassThru (largest
VMXNET3.NumINTRs on host <= dynamicPolicy.NumINTRs)
Source: Cisco VM-FEX Using VMware ESX Environment Troubleshooting Guide
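Those four criteria reduce to one rule: every VMXNET3 value on the host must be less than or equal to the corresponding adapter-policy value. The sketch below expresses that check with illustrative dictionaries; these are not actual UCSM or VMXNET3 APIs:

```python
# Illustrative check of the queue-sizing criteria quoted above: for each of
# TQ, RQ, CQ, and INTR, largest VMXNET3 value on host <= adapter-policy value.
QUEUE_KEYS = ("TQ", "RQ", "CQ", "INTR")

def passthru_compatible(vmxnet3, policy):
    """Both arguments are dicts keyed by QUEUE_KEYS."""
    return all(vmxnet3[k] <= policy[k] for k in QUEUE_KEYS)

policy  = {"TQ": 1, "RQ": 4, "CQ": 5, "INTR": 6}  # example policy values
vmxnet3 = {"TQ": 1, "RQ": 4, "CQ": 5, "INTR": 6}
print(passthru_compatible(vmxnet3, policy))        # True: equal is allowed

vmxnet3["RQ"] = 8                                  # host driver asks for more RQs
print(passthru_compatible(vmxnet3, policy))        # False: exceeds the policy
```

If any value on the host side exceeds the policy, the vNIC cannot transition to VMDirectPath mode, which is exactly the failure mode the guide warns about.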

In UCSM, in the left navigation pane, click the Servers tab and navigate to Servers >>
Policies >> Adapter Policies >> Eth Adapter Policy VMWarePassThru. In the
right pane, note the Tx, Rx, and Completion Queues; modify them only if, after
reading the note above, you know your ESXi host has been customized (for this lab
you may leave them as is). Change the Interrupt Mode to Msi X and click Save
Changes.

Click OK.

Now in the left navigation pane, click the Servers tab and navigate to Servers >>
Service Profiles >> root org >> Sub-Organizations >> CUST2 >>
ESXi-VM-FEX-iSCSI-1. In the right pane, click the Policies tab and expand BIOS
Policy. Note the BIOS policy in use. We may not need to modify that policy, but we
do need to check it to ensure proper configuration for usage with VM-FEX and
DirectPathIO.

Navigate to Servers >> Policies >> BIOS Policies >> Loud-w-VT. In the right
pane, click the Advanced tab, and then click the Processor tab. Notice that
Virtualization Technology is enabled. This is actually the default for a B22-M3 blade,
but we had to be absolutely certain that it was enabled to support DirectPathIO in

VM-FEX.

In the left navigation pane, click the VM tab, right-click Port Profiles, and click
Create Port Profile.

Enter the name as shown, choose the native VLAN 110, and select the Host
Network IO Performance mode of High Performance. Click OK.

Click OK.

Click the new port profile in the left navigation pane, and in the right pane click
Profile Clients.

Create a new profile client with essentially the same name that pushes the port
profile to the VM-FEX DVS.

Click OK.

Back in vCenter, right-click the Win2k8-www-1 guest and click Edit Settings.

On the Hardware tab, ensure that the Adapter Type is VMXNET 3. Change the port
group to our new VM-110-HiPerf (VM-FEX-DVS1). Note that in the DirectPath I/O
section it tells us that it is Inactive because (assuming we have the proper adapter
type) all memory must be reserved (it cannot be shared with any other guests).
If the current network adapter is not of type VMXNET 3, take time
now to delete the current adapter, add a new network adapter with
this specific adapter type, console into the guest, install VM Tools
(which contain the driver), and reboot the guest VM. Failure to use the
proper network driver type will cause an inability to utilize
VMDirectPathIO.

Click the Resources tab. Click Memory. Select the Reserve all guest memory (All
locked) check box. Click OK.

Right-click the Win2k8-www-1 guest again and click Edit Settings.

Notice now that if you followed all steps carefully, VM DirectPath I/O is now Active.
You have successfully bypassed the hypervisor for all network switching. vMotion is
even supported with this mode enabled, but only with Cisco UCS.

Back in UCSM, on the VM tab, notice that the new Virtual Machine appears in the
DVS, and that it has two vNICs. This is expected if you had migrated this guest to
the VM-FEX DVS PassThru Switching mode prior to migrating it to DirectPathIO
mode. Only the latter vNIC is actually active now, and the other is a mere remnant.

From Server1 connected to N5K1, try to ping the Win2k8-www-1 guest at the IP of
10.0.110.211. We should see that it is successful.

In NX-OS on UCS-FI-A, notice that we have two vNICs for this guest, but that the
former (PTS vNIC) is nonParticipating.
INE-UCS-01-A(nxos)# sh int br | in Veth
Vethernet      VLAN  Type  Mode   Status  Reason            Speed
Veth709              eth   trunk  up      none              auto
Veth711              eth   trunk  up      none              auto
Veth713              eth   trunk  up      none              auto
Veth716              eth   trunk  up      none              auto
Veth717        117   eth   trunk  up      none              auto
Veth32769      110   eth   trunk  up      none              auto
Veth32770      110   eth   trunk  down    nonParticipating  auto
Veth32771      110   eth   trunk  up      none              auto
INE-UCS-01-A(nxos)#

Note the same thing in NX-OS on UCS-FI-B.


INE-UCS-01-B(nxos)# sh int br | in Veth
Vethernet      VLAN  Type  Mode   Status  Reason            Speed
Veth710              eth   trunk  up      none              auto
Veth712              eth   trunk  up      none              auto
Veth714              eth   trunk  up      none              auto
Veth715              eth   trunk  up      none              auto
Veth718        117   eth   trunk  up      none              auto
Veth32769      110   eth   trunk  up      none              auto
Veth32770      110   eth   trunk  down    nonParticipating  auto
Veth32771      110   eth   trunk  up      none              auto
INE-UCS-01-B(nxos)#

Nexus Technology Labs - Data Center


Technology Lab Resources
Data Center Rack Rental Access Guide
This guide describes how to use the INE CCIE Data Center Rack Rentals that
complement the CCIE Data Center Technology Lab Online Workbook.

Important Notes
To rent Data Center racks, you must have purchased the Data Center Online
Workbook. You can read about, preview, and purchase the workbook here.
You may rent a Data Center rack for up to 85 hours per month. Data Center rack add-ons are also limited to 85 hours per month.

Common Problems
Scheduling a Rack Session
Passwords
Scheduling Data Center Rack Add-ons
Loading and Saving Configurations
Canceling a Rack Session
Connecting to NX-OS Devices CLI
Using the Rack Control Panel
Connecting to Windows Virtual Machines
Connecting to UCS Blade ESXi Instances

Common Problems
Telnet Connection Warning

Some Telnet clients will close the Telnet window if the connection cannot be
established. This behavior prevents you from seeing error messages that indicate
the line is in use, so it's a good practice to disable this behavior in your Telnet client.
If you do not see a command prompt when you establish a Telnet connection, you
may need to press Enter to wake up the device.
Important Configuration Changes
When you load a saved configuration into the UCS, you must change the interfaces
in use to match the Data Center rack you are using; our automation cannot make
this modification for you. You will find the necessary configuration changes in the
reminder email you receive before your rack session.
Currently, ONLY UCS can be loaded. N7K and N5K cannot be loaded
at this time. Thank you for your patience as we work to remove this
limitation.
Port Speed Information
If you receive an error message stating "SFP validation failed," you must manually
set the port speed to 1G (1000), because Nexus does not support auto-negotiation
from 10G to 1G.
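Assuming the affected port is Ethernet1/1 (substitute the interface that actually reported the error on your rack), the manual speed setting looks like this:

```shell
# On the Nexus switch, pin the port to 1G manually.
# "ethernet 1/1" is an example interface name only.
N5K1# configure terminal
N5K1(config)# interface ethernet 1/1
N5K1(config-if)# speed 1000
N5K1(config-if)# end
```

Once the speed is fixed at 1000, the SFP validation error should clear and the link can come up.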

Scheduling a Rack Session


You schedule your Data Center rack session through your Members account.
1. Sign in to the Members site, and then click Rack Rentals on the left side of the page.
2. Find the rack rental type you want, and click Schedule to see the rack session
booking page.
3. On this page, select a preferred range of start and end dates and start and end times
for your rack rental session.
4. Click Search to find and select available sessions. Click the Book button to reserve
your time.
Data Center rack sessions start at designated, set intervals every
three hours. You cannot choose a custom start time for Data Center
rack rentals at this time. The last half hour of your session is for
intersession (the rack is reset for the next customer). For example, if
you reserve a 3-hour session, your session will last 2.5 hours; if you

reserve a 6-hour session, your session will last 5.5 hours. You are not
charged for the intersession portion of your rack rental.
FULLY BOOKED means that the session is not available to rent.
Book means that you may reserve this session. It also displays the number of
tokens required to reserve this session.
Rent Now means that the session is available for immediate rental.
Before booking or reserving a session, select any add-ons you want
by using the check boxes above the calendar tool.
In the Reserved Sessions area, the Active tab displays any sessions that have
already started. These sessions cannot be cancelled. The Upcoming tab displays
your reserved sessions, which can be cancelled at any time before the start time of
the session.
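The intersession arithmetic in the note above can be sketched as a tiny helper (the constant and function names are illustrative):

```python
# Per the note above: the last half hour of every reserved block is used to
# reset the rack for the next customer and is not usable (or charged).
INTERSESSION_HOURS = 0.5

def usable_hours(reserved_hours):
    """Hours of hands-on lab time in a reserved session."""
    return reserved_hours - INTERSESSION_HOURS

print(usable_hours(3))  # 2.5
print(usable_hours(6))  # 5.5
```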

Passwords

Device        IP Address     Username / password (case-sensitive)
Apache1       10.0.110.1     root / Cciedc01
Apache2       10.0.110.2     root / Cciedc01
Apache3       10.0.110.3     root / Cciedc01
ESXi1         10.0.115.11    root / cciedc01
ESXi2         10.0.115.12    root / cciedc01
ESXi-VMFEX    10.0.115.21    root / cciedc01
Win2k8-BM     10.0.114.21    Administrator / Cciedc01
ACE           -              cisco / ciscocisco or admin / Cciedc01
N7K and N5K   -              cisco / cisco
MDS           -              cisco / cisco (role = cisco)

Scheduling Data Center Rack Add-ons


Data Center Rack Rentals are modular.
By default, each 'Base 5K/7K' rental includes access to 2 x Nexus 7000 VDCs, 2 x
Nexus 5000s, and 2 x Windows Server Virtual Machines.
By selecting the UCS/SAN Add-On, you also get access to the UCS Blade Server
chassis, UCS 6200 Fabric Interconnects, MDS 9200 SAN Switches, and the Fibre
Channel SAN.
By selecting the N2K/SAN Add-On, you also get access to the Nexus 2000 Fabric
Extenders with a 10GigE attached server, MDS 9200 SAN Switches, and the Fibre
Channel SAN.

Note that not all labs require each Add-On. You can determine which labs need
which Add-Ons by comparing the Data Center Rack Rental Access Guide CCIE DC
Physical Diagram, located in the Resources section in the upper-right corner of this
page, with the diagram for the individual workbook section that you are working on.
Use the following general guidelines to help determine which add-ons you need:
The Nexus labs require only the Base N5K/N7K rental, unless you are working on
the Fabric Extender or SAN labs, in which case they require the N2K/SAN Add-On.
The UCS labs require the Base N5K/N7K rental plus the UCS Add-On.

Loading and Saving Configurations


When you load a saved configuration into the UCS, you must change the interfaces
in use to match the Data Center rack you are using; our automation cannot make
this modification for you. You will find the necessary configuration changes in the
reminder email you receive before your rack session.
Before saving any configuration, make sure that all of your devices are at the
command prompt or a login prompt.

Canceling a Rack Session


You can cancel a rack session in your Members account.
1. Click Rack Rentals on the left side of the page, and then click the Upcoming
Rentals tab at the top.
2. To cancel a session, click the red Cancel button in the Upcoming Rentals window.
3. You will be asked to confirm your cancellation. Another message states the number
of tokens that have been returned to your account. Click OK. Your session will be
cancelled and your tokens refunded.

Connecting to NX-OS Devices CLI


When your session begins, find your login information on the Rack Rentals page as
shown above. Open your terminal software, such as PuTTY, SecureCRT, etc., and
telnet to dc.ine.com. Log in with the provided username and password.

When you are logged in, a menu displays the devices that you have access to. The
example shown below indicates that this session is assigned the Base 5K/7K rental,
with both the UCS/SAN and N2K/SAN add-ons.

Choose the device that you want to connect to, such as N7K1, and log in with the
username cisco and password cisco. Note that the default ACE
username/password is cisco/ciscocisco.

Certain commands, such as erasing and reloading the device or


editing the management IP address, are purposely restricted to allow
the automation system to function properly.

Using the Rack Control Panel


The Rack Control Panel allows you to reset devices to their default configurations
and access the remote desktops of the virtual machines. To access the control
panel during your active rack session, click Rack Rentals on the Members site, find
your session by clicking the Upcoming Rentals tab, and then click the Control
Panel button.
Save config and load config functionality will be added to the DC
control panel shortly.

Connecting to Virtual Machines


Each rack session is assigned access to certain virtual machines, based on your
rack number assignment and the add-ons that you chose. To access the virtual
machines:
1. Go to the Rack Control Panel on the Members site.
2. Click the Remote Desktop Access tab to see a list of available virtual machines
below. If this page asks you for a username and password, use your rack username
and password, e.g. username dcrack1 password abcdef.
3. Click the icon for the VM that you want to access, and the remote desktop will
automatically open in another window. The remote desktop's resolution automatically
resizes to your browser window size, so if you simply refresh this page the resolution
will update to the browser size.

If for some reason you lock yourself out of a Virtual Machine, you can reset it
from the control panel. All changes you made during your rack session will be
reverted when you use this option.

Connecting to UCS Blade ESXi Instances


When it becomes necessary to connect to the UCS blade ESXi (or Windows
Bare-Metal) instances, follow the directions in the task to connect to them, using the
VMWare Infrastructure Client provided (preinstalled on any of the Windows VM
desktops). The IP addresses for the UCS blade ESXi and Windows Win2k8R2 bare-metal
instances are:
ESXi1: 10.0.115.11
ESXi2: 10.0.115.12
ESXi-VMFEX: 10.0.115.21
Win2k8-BM: 10.0.114.21
The username and password for all ESXi instances are root and cciedc01.
The username and password for the Win2k8R2 instance is Administrator and
Cciedc01.

UCS Technology Labs - Release Notes


UCS Technology Labs: January 7, 2014
Changes made January 7, 2014
This workbook was updated with new rack number associations, and redundant
MDS links were removed.
