

Cisco Data Center


Unified Computing
Design
Volume 2
Version 3.0

Student Guide

Text Part Number: 97-2838-02


Table of Contents
Volume 2
Understanding Existing Computing Solutions 7-1
Overview 7-1
Objectives 7-1
Understanding Historical Performance Characteristics 7-3
Overview 7-3
Objectives 7-3
Identifying Infrastructure Historical Performance Characteristics 7-4
Identifying Server Historical Performance Characteristics 7-6
Summary 7-12
Identifying Data Center Reconnaissance and Analysis Tools 7-13
Overview 7-13
Objectives 7-13
Reconnaissance and Analysis Tools Overview 7-14
Identify the Reconnaissance and Analysis Tools 7-16
Summary 7-28
References 7-28
Understanding a Migration Plan 7-29
Overview 7-29
Objectives 7-29
Migration Plan Overview 7-30
Migration Plan Aspects 7-38
Overnight Migration 7-39
Gradual Migration 7-39
Evaluating a Migration Plan 7-44
Summary 7-47
Module Summary 7-49
Module Self-Check 7-50
Module Self-Check Answer Key 7-51

Positioning the Cisco UCS 8-1


Overview 8-1
Objectives 8-1
Identifying Cisco Unified Computing System Deployments 8-3
Overview 8-3
Objectives 8-3
Server Deployment Options 8-4
Server Virtualization Environments 8-10
Summary 8-15
Describing Cisco Data Center Unified Computing Solution Advantages 8-17
Overview 8-17
Objectives 8-17
Cisco Unified Computing Business Advantages 8-18
Summary 8-23
Module Summary 8-25
Module Self-Check 8-26
Module Self-Check Answer Key 8-27
Understanding Server Virtualization Networking 9-1
Overview 9-1
Objectives 9-1
Identifying Server Virtualization 9-3
Overview 9-3
Objectives 9-3
Server Virtualization Overview 9-4
VMware Server Virtualization Solution Overview 9-14
Summary 9-26
Cisco Server Virtualization Networking Solution 9-27
Overview 9-27
Objectives 9-27
VMware Server Virtualization Networking 9-28
Identifying the Cisco Server Virtualization Solution 9-35
Cisco Nexus 1000V Deployment Design 9-49
Summary 9-52
Sizing a Virtual Machine 9-53
Overview 9-53
Objectives 9-53
Identify Virtual Machine Sizing Parameters 9-54
Sizing Virtual Machine CPU 9-57
Sizing Virtual Machine Memory 9-61
Sizing Virtual Machine Network 9-65
Sizing Virtual Machine Storage 9-68
Using the Virtual Machine Sizing Criteria 9-73
Summary 9-77
References 9-78
Module Summary 9-79
Module Self-Check 9-80
Module Self-Check Answer Key 9-82

Evaluating Cisco Unified Computing Solutions 10-1


Overview 10-1
Objectives 10-1
Understanding Design Success Criteria 10-3
Overview 10-3
Objectives 10-3
Design Success Criteria Overview 10-4
Using the Design Success Criteria 10-6
Using the Design Success Criteria 10-12
Summary 10-16
Determining the Design ROI 10-17
Overview 10-17
Objectives 10-17
Design ROI Overview 10-18
Evaluating Design ROI 10-23
Summary 10-29
Module Summary 10-31
Module Self-Check 10-32
Module Self-Check Answer Key 10-33

Appendix 1: Describing Cisco Unified Computing Solution Services A1-1
Overview A1-1
Objectives A1-1
Cisco Unified Computing Services Overview A1-2
Cisco Unified Computing Workshops A1-5
Cisco Unified Computing Planning, Design, Implementation Service A1-8
Cisco Unified Computing Support and Warranty Service A1-14
Cisco Unified Computing Remote Management Service A1-17
Cisco Data Center Optimization Service A1-19
Cisco Security Services A1-21
Cisco Data Center Services A1-22
Summary A1-23

Appendix 2: Understanding Policy Retention A2-1


Overview A2-1
Objectives A2-1
Identifying Policy Retention A2-2
Describing Cisco Data Center Unified Computing Solution Policy Retention A2-5
Summary A2-9

Appendix 3: Addressing Environmental Aspects A3-1


Overview A3-1
Objectives A3-1
Environmental Aspects Overview A3-2
Temperature A3-3
Humidity A3-3
Altitude A3-3
Grounding A3-4
Power A3-4
Air Flow A3-4
Weight and Floor Loading A3-4
Site Requirements Specification A3-5
Site Survey Form A3-5
Equipment Environmental Properties A3-6
Summary A3-14

Module 7

Understanding Existing
Computing Solutions
Overview
This module describes existing computing solutions and their historical performance
characteristics, introduces reconnaissance and analysis tools, and describes the aspects of a migration plan.

Objectives
Upon completing this module, you will be able to identify existing computing solutions and
historical performance characteristics and describe migration plan aspects. This includes the
ability to meet these objectives:
• Understand the historical performance characteristics.
• Identify the reconnaissance and analysis tools.
• Determine the aspects affecting the migration plan.
Lesson 1

Understanding Historical
Performance Characteristics
Overview
This lesson identifies, lists, and describes the important historical performance characteristics
(CPU load, memory usage, I/O usage for network and storage connectivity and interfaces,
application performance and storage space requirements, and so on) that need to be examined
to gather and assess relevant data.

Objectives
Upon completing this lesson, you will be able to identify and describe the historical
performance characteristics to be examined before migration. This ability includes being able to meet
these objectives:
• Identify the computing solution infrastructure historical performance characteristics.
• Identify the server historical performance characteristics.
Identifying Infrastructure Historical Performance
Characteristics
This topic identifies and describes the infrastructure historical performance characteristics.

Network Performance Characteristics


• Traffic is sent via devices that comprise the network
• Network device performance
  - CPU load
  - Memory usage
  - Amount of packets per second processed
• Connections throughput
  - Amount of traffic sent via links
• Performance might be affected by:
  - Insufficient packet manipulation capacity
  - Unreliable network

Network performance characteristics are defined by network device performance and


connections throughput.
The device performance is defined with:
• CPU power and load
• Memory usage
• Number of packets that the device can handle per second
Network performance may especially be affected when packet manipulation is done.
The connection performance is defined with the throughput - i.e., bits per second.
Network performance can also be affected by an unstable environment - i.e., flapping or
erroneous links.

Storage Performance Characteristics
• Storage device performance
  - Device connection throughput
  - Read/write operation speed
  - LUN space expansion
  - Time to rebuild lost disk
  - Cache size and access time
• SAN performance
  - CPU load
  - Memory usage
  - Amount of packets per second processed
  - Amount of traffic sent via links

Storage performance characteristics are defined by SAN and storage device performance.


The storage device performance is limited by the following:
• Number of interfaces and speed of the interfaces that connect the device to the SAN
• Disk read/write operation speed
• The load presented by the rebuild operation in case of disk failure
• Size and speed of the controller cache

SAN performance is affected by the following:


• CPU power and load
• Memory usage
• Number of packets the device can handle per second
• Amount of traffic sent via Fibre Channel link (or Fibre Channel over Ethernet [FCoE],
Fibre Channel over IP [FCIP], or any other type of link)
The connection performance is defined with the throughput - i.e., bits per second.
SAN performance is strongly affected by an unstable environment - i.e., flapping or erroneous
links.



Identifying Server Historical Performance
Characteristics
This topic identifies and describes infrastructure historical performance characteristics.

Server CPU
• CPU characteristics
- Speed
- Number of cores
- Number of concurrent threads

• Affected by the operating system use


- An operating system incapable of using the full CPU features
limits the server CPU performance.


Server CPU characteristics are defined by CPU speed, number of cores, and concurrent threads.
Keep in mind that even if the CPU offers advanced functionality, the operating system might
not be capable of fully using this functionality, thus performance is the least common
denominator between CPU characteristics and operating system capabilities.

Server Memory
• Key memory characteristics:
  - Size
  - Access speed
• Memory performance affected by:
  - Maximum addressable memory by operating system
  - Internal server bus architecture


Server memory characteristics are limited by the maximum amount of memory that can be
installed and the memory access speed.
As with the CPU, the capability of the operating system might be the limiting factor for
memory usage.



Server I/O
• Server I/O is defined by LAN and SAN connectivity
  - Separate adapters for SAN and LAN connectivity
  - Different speeds supported:
    • LAN = 1/10 Gb/s
    • SAN = 1/2/4/8 Gb/s
• Multiple adapters can be used:
  - To achieve redundancy
  - To achieve higher throughput
• Maximum throughput is affected by:
  - Internal bus architecture
  - Services handled by CPU when no TOE present

Server I/O characteristics are defined with the LAN and SAN connectivity.
Separate adapters can be used for LAN and SAN connectivity with different raw speeds.
The I/O subsystem throughput may be affected by the CPU performance in cases where the
CPU must manipulate packets received or sent - for example, Internet Small Computer
Systems Interface (iSCSI) deployment without TCP Offload Engine (TOE), or Fibre Channel over
Ethernet (FCoE) deployment with software-based FCoE functionality.
Multiple adapters are typically used to scale performance and to achieve redundancy.

Server Storage
• Located on
- Local disk
- Remote storage device (disk array) - more frequently used
" Space is limited by the storage device.
II Performance is affected by disk subsystem
- Read/write operation speed
- SAN connection throughput
- Time required to expand volume
- Time required to rebuild lost disk
- Cache size and acce ss time

¢ ."!'~: : .• ~ ,;. '~ i <;~ ::- . : '( . ,~t- . ;;,:.; ~:.,... ,H .

The server storage performance is affected by the storage subsystem.


If the volume uses a local disk, the disk performance defines the server storage performance.
More often, the server storage is decoupled from the server hardware to a storage array. The
performance of that array, combined with the SAN performance, defines the server storage
performance characteristics.



Application Performance
• Application performance
depends on:
- Server performance
- Application code used
- Application protocol
• Chatty protocols incur additional processing time, thus limiting the
application's responsiveness.

Application performance characteristics are affected by all the server components plus the
application performance itself. Applications can be written in a nonoptimal way, resulting in a
slow response - even if the underlying server performance is sufficient.

Server Performance Characteristics
• Raw server performance is affected by server components.
• Actual server performance depends on:
- Average and maximum CPU load
- Average and peak memory usage
- LAN throughput
- SAN 1/0 throughput
- Application characteristics
- Storage space requirements


Actual server performance characteristics depend on:


• Average and peak CPU load
• Average and peak memory usage
• Average and peak LAN and SAN throughput
• Required storage space
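The following minimal sketch (in Python, with invented sample values; not part of the course material) shows how average and peak figures like these can be derived from a series of monitoring samples:

```python
from statistics import mean

# Hypothetical per-server monitoring samples taken at regular intervals.
cpu_load_pct = [12, 35, 78, 91, 40, 22, 65, 88]
memory_used_gb = [4.1, 4.3, 6.8, 7.9, 5.0, 4.4, 6.1, 7.5]

def summarize(name, samples, unit):
    """Report the average and peak of one performance characteristic."""
    print(f"{name}: average {mean(samples):.1f}{unit}, peak {max(samples):.1f}{unit}")

summarize("CPU load", cpu_load_pct, "%")
summarize("Memory usage", memory_used_gb, " GB")
```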



Summary
This topic summarizes the key points that were discussed in this lesson.

Summary
• Infrastructure performance is affected by throughput and reliability.
• Key server performance characteristics that influence application
performance are CPU load, memory usage, I/O throughput, and
storage space used.

Lesson 2

Identifying Data Center


Reconnaissance and Analysis
Tools
Overview
This lesson identifies, lists, and describes the most appropriate tools used to perform
reconnaissance and analysis for existing computing solutions.

Objectives
Upon completing this lesson, you will be able to identify and describe available reconnaissance
and analysis tools. This ability includes being able to meet these objectives:
• Describe the general functions of reconnaissance and analysis tools.
• Identify the reconnaissance and analysis tools used to gather and analyze information for
existing solutions.
Reconnaissance and Analysis Tools Overview
This topic introduces the functions needed by the analysis and reconnaissance tools.

Analysis Tools Overview


• Used to gather and analyze data center information
  - Network information gathering tools
  - Storage information gathering tools
  - Server information gathering tools
• The server is the centerpiece of reconnaissance analysis
  - Affects the server selection and sizing for virtualization solution


The reconnaissance and analysis tools are used to gather information about data center resource
utilization for network, storage, and server components.
When planning for virtualization, the server historical performance is of utmost importance since
it governs the physical server dimensions as well as how the physical server resources are
divided between the virtual machines (VMs).

Analysis Tool Functions
• Purpose
  - Assess current server utilization and workload
  - Plan capacity optimization
  - Aid in the decision for an optimal solution
• Gather resource utilization information across monitored servers
  - CPU and memory
  - Network
  - Storage
• Analyze gathered data and present the report
  - Performance graphs
  - Minimum, maximum, average load/utilization information
• Provide benchmarking based on references
• Perform "what-if" analysis

.. "~ •. j'. ;. ,, 'X<"~·:

The analysis and reconnaissance tools are used for the following purposes:
• Assess the current workload capacity of the IT infrastructure through comprehensive
discovery and inventory of IT assets
• Measure system workloads and capacity utilization across various elements of the IT
infrastructure - including by function, location, and environment
• Plan for capacity optimization through detailed utilization analysis, benchmarking,
trending, and identification of capacity optimization alternatives
• Identify resources and establish a plan for virtualization, hardware purchase, or resource
deployment
• Decide on the optimal solution by evaluating various alternatives through scenario
modeling and "what-if" analysis
• Determine which alternative best meets the predefined criteria
• Monitor resource utilization through anomaly detection and alerts based on benchmarked
thresholds
• Help generate recommendations to ensure ongoing capacity optimization



Identify the Reconnaissance and Analysis Tools
This topic introduces the tools to be used for analysis and reconnaissance.

Windows Embedded Tools


• Computer management
- Disk management
- Volume properties
• Administrative tools
- System performance
• Task manager

The Microsoft Windows operating system offers a selection of embedded tools that can be used to
gather the historical performance characteristics for a single server.
The figure introduces the following Microsoft Windows embedded tools that can be used to
perform server analysis:
• Computer management
• Administrative tools
• Task manager

Linux Embedded Tools

Linuxconf

Webmin


The Linux operating system is available with many applications and utilities, thus the
embedded tools that can be used to gather historical performance characteristics vary per Linux
distribution.
Linuxconf comes with Mandrake Linux and Red Hat Linux, but is also available for most
modern Linux distributions. Multiple interfaces for Linuxconf are available: GUI, Web,
command-line, and curses.
Webmin is purely a web-based modular application. It offers a set of core modules that handle
the usual system administration functionality, and there are also third-party modules available
for administering a variety of packages and services. To download and learn more about
Webmin, point your web browser to www.webmin.com/webmin. This package is available in a
number of formats specific to different distributions.

Note Whereas any user can install Linuxconf, Webmin must be installed by root. After that, you
can access this tool from any user account as long as you know the root password.
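Beyond these GUI tools, the same per-server characteristics can be gathered programmatically. The sketch below assumes the third-party psutil Python library (an assumption; it is not one of the tools named in this lesson) is installed:

```python
import psutil  # third-party library: pip install psutil

cpu = psutil.cpu_percent(interval=1)   # CPU load sampled over one second
mem = psutil.virtual_memory()          # memory size and current usage
net = psutil.net_io_counters()         # cumulative network I/O counters
disk = psutil.disk_usage("/")          # storage space on the root volume

print(f"CPU load:     {cpu:.1f}%")
print(f"Memory usage: {mem.percent:.1f}% of {mem.total // 2**20} MB")
print(f"Network I/O:  {net.bytes_sent} B sent, {net.bytes_recv} B received")
print(f"Disk usage:   {disk.percent:.1f}% of {disk.total // 2**30} GB")
```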



VMware Capacity Planner
• Pre-virtualization tool used in physical environments
• Capacity planning tool
• Collects server resource utilization information

VMware Capacity Planner is a pre-virtualization tool used in physical environments to provide
a server virtualization and consolidation-stacking plan.
VMware Capacity Planner is used to plan for capacity optimization and to design an optimal
solution to achieve maximum performance. It assists in IT resource utilization and in the
development of a virtualization roadmap for server containment and consolidation.
VMware Capacity Planner is a capacity planning tool that collects comprehensive resource
utilization data in heterogeneous environments and then compares it to industry standard
reference data to provide analysis and decision support modeling. Its detailed capacity analysis
includes CPU, memory, network, and disk utilization across every server monitored.

Agent-free Implementation
The VMware Capacity Planner Data Collector is installed on-site at the data center that is being
assessed. This component collects detailed hardware and software metrics required for capacity
utilization analysis across a broad range of platforms, without the use of software agents.

Web-based Hosted Application Service


The Capacity Planner Dashboard is a Web-based application that delivers rich
analysis, modeling, and decision support capabilities based on the data collected from your data
center. Service providers can use this interface to access pre-built report templates and create
custom reports, depending on the type of assessment being delivered.

Reference Benchmarking
Analysis provided by the VMware Capacity Planner Dashboard is based on comparisons to
reference data collected across the industry. This unique capability helps in guiding decisions
around server consolidation and capacity optimization for your data center.
More information can be found on the Cisco Partner Resource Center at
http://www.ciscoprc.com.

VMware vCenter CapacityIQ
• Management infrastructure vService
  - Analyze, forecast, and plan virtualized data center or desktop capacity
  - Post-virtualization tool
• Explore existing capacity
  - How many VMs can be deployed?
  - What is the historical capacity utilization?
  - Can the existing capacity be used more efficiently?
• Predict future needs
  - When will the capacity limit be hit?
  - What happens if capacity is added, removed, or reconfigured?


VMware vCenter provides a set of management vServices that greatly simplifies application
and infrastructure management.
With VMware vCenter CapacityIQ, continuous capacity management can be achieved.
VMware vCenter CapacityIQ continuously analyzes and plans capacity to ensure
optimal sizing of virtual machines, clusters, and entire data centers.
Key features of VMware vCenter CapacityIQ are:
• Performs "what-if' impact analysis to model effect of capacity changes
• Identifies and reclaims unused capacity
• Forecasts timing of capacity shortfalls and needs
Benefits of the VMware vCenter CapacityIQ are:
• Delivers the right capacity at the right time
• Makes informed planning, purchasing, and provisioning decisions
• Enables capacity to be utilized most efficiently and cost-effectively
VMware vCenter CapacityIQ is a post-virtualization product used for ongoing management of
virtualized environments.



VMware vCenter CapacityIQ Use Case Example
• How can VMware vCenter CapacityIQ be used to resolve the following questions?
• To understand current capacity usage
  - How much capacity is currently used?
• To forecast future capacity needs
  - How many more VMs can be added?
• To predict impact of capacity changes
  - What happens to capacity if more VMs are added?
• To maximize utilization of existing capacity
  - How much capacity can be reclaimed?

VMware vCenter CapacityIQ can assist in answering the following questions concerning day-to-day operations:
• How much capacity is being used right now?
• How many more virtual machines (VMs) can be added? (In other words, when will capacity run out?)
• What happens to capacity if more VMs are added?
• How much capacity can be reclaimed?

CapacityIQ - Examining Current Resources

First, the Capacity Dashboard for the selected data center should be reviewed. This shows the
data center-level view of the current and future state of capacity in terms of
VMs/Hosts/Clusters and CPU/Memory/Disk.



CapacitylQ - "What... if" Analysis

Second a "what-if' analysis can be started. For example, what happens if 10 more new VMs
are added?
• Start new "what-if' scenario and enter para eters per the wizard
• Select the type of capacity change - hardware or virtual machine
• Select the kind of virtual machine capacity change in this scenario
• Define the proposed VM configurations for this scenario
• Review summary of existing VM configurations as a reference for sizing VMs

CapacitylQ - "Vlhat. . if" Analysis (Cont.)


Select how to view the results, review selections, and select "Finish" to complete the scenario.

CapacitylQ - "What-if " Analysis (Cont.)


The "what-if' scenario result for the deployed vs. total VM capacity (graphical view) shows
that at the current provisioning rate, capacity will run out in November (red line).
If 10 new VMs were deployed today, capacity would run out in 23 days.
Select alternative views to examine additional information.



CapacitylQ - "What-if" Analysis (Cont.)

All Views: Cluster Capacity


This shows cluster capacity before and after the "what-if' scenario of adding 10 new VMs.

Idle Virtual Machines


This shows that idle VMs are VMs that consistently have low resource utilization over a long
period of time. Here you can see that CapacityIQ has identified four idle VMs (out of 97
total VMs in this cluster) - e.g., low resource utilization for >95% of their profiled lifetime.

Over-allocated Virtual Machines


This shows that overallocated VMs are VMs that have been allocated more capacity than they
actually demand. These are candidates for reconfiguration to actual capacity needs. This
enables the reclaiming of resource capacity.

CapacitylQ - "What-if" Analysis Results
.. Understand current capacity usage
- How much capacity is currently used?
• Currently 97 VMs on a cluster which is at 87% resources
• Forecast future capacitY 1eeds
- How many more VMs can be added?
.. 16 more VMs can be added .
.. If tre nd continues , 'l ew capacity must be added in 70 days.
• Predict impact of capacity changes
- What happens to capacity if more VMs are added?
• Adding 10 more VMs depletes cluster capacity in 23 days.
• Maximize utilization of eXisting capacity
- How much capacity can be reclaimed?
• There are 4 idle ard 4 over-allocated VMs. 2GB of memory
can be reclaimed .


With the dashboard and "what-if' analyses, you have answered the following:
• How much capacity is being used right now? This cluster currently has 97 VMs and is 87 %
full.
• How many more VMs can be added? 16 more VMs can be added. More cluster capacity
must be added within 70 days.
• What happens to capacity if more VMs are added? If lO more VMs are added, cluster
capacity will run out in 23 days.
• How much capacity can be safely reclaimed? There are four idle VMs and four over-
allocated VMs, so 2GB of memory can be reclaimed.
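A simple linear-trend model, sketched below in Python with the figures from this example, illustrates the kind of forecast behind such a "what-if" scenario. It is an assumption-laden approximation: CapacityIQ profiles actual utilization, so its 23-day answer differs from this straight-line estimate.

```python
def days_until_full(deployed_vms, capacity_vms, vms_per_day):
    """Days until the cluster reaches its VM capacity at the current rate."""
    return (capacity_vms - deployed_vms) / vms_per_day

CLUSTER_CAPACITY = 97 + 16   # current 97 VMs plus the 16 that still fit
TREND = 16 / 70              # trend: remaining headroom consumed over 70 days

print(f"Current trend: full in {days_until_full(97, CLUSTER_CAPACITY, TREND):.0f} days")
# Adding 10 VMs today leaves headroom for only 6 more; the tool's profiled
# forecast (23 days) is shorter than this straight-line estimate (~26 days).
print(f"After adding 10 VMs: full in {days_until_full(107, CLUSTER_CAPACITY, TREND):.0f} days")
```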



Akorri BalancePoint
• Agentless management software
- Advanced analytics to help fix problems, optimize utilization,
and improve performance in the virtualized data center

Akorri BalancePoint is another agentless management software product with advanced analytics to help
fix problems, optimize utilization, and improve performance in the virtualized data center.
Akorri BalancePoint helps companies make sure that data center server and storage virtual and
physical systems deliver the best application performance possible.
BalancePoint can help reduce operations and infrastructure costs by using servers and storage
more efficiently and reducing the time and resources spent on management through automation.
BalancePoint supports a wide range of applications, servers, and storage systems. It can assist
with the following tasks:
• Manage performance across applications, servers, and storage with cross-domain
visualization and performance-based service alerting. Automatically find deeply buried
contention, hotspots, and bottlenecks. In addition, when problems are found, it provides
direct automatic troubleshooting analysis.
• Optimize application performance and resource utilization. Manage IT as a business with
performance indicators that indicate the optimal balance between application requirements
and resource capabilities.
BalancePoint's main capabilities derive from three key components:
• ScanPoint agentless discovery and data collection provides performance and utilization
data from the operating system and VM ware, databases, servers, and storage infrastructure
resources. This data is stored in a self-managing internal performance database mined for
historical patterns, performance trends, and modeling baselines.
• Viewpoint unique performance topology visualizations show the performance impact on
data center applications running through all the physical and logical layers of IT
infrastructure, including server and storage virtualization. These topological views are
automatically built from ScanPoint data, and are color coded with BalancePoint's
assessment of performance problems.
• GuidePoint actionable recommendations and analyses are based on Akorri's Cross-Domain
Analytics™ technology. These mathematically advanced algorithms provide intelligent
alerting and proactive service management tools to help with remediation, optimization,
and planning.

PlateSpin Recon
• Agentless management software
- Advanced analytics to help fix problems, optimize utilization,
and improve performance in the virtualized data center


PlateSpin offers Recon tools that can be used for tasks similar to those performed by VMware
Capacity Planner and Akorri BalancePoint.
These tools can be used for workload profiling. PlateSpin Recon tracks actual CPU, disk,
memory, and network utilization over time, on both physical and virtual hosts. Every server
workload and virtual host has utilization peaks and valleys, and PlateSpin Recon can build
consolidation scenarios based on interlocking these peaks and valleys.
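The idea of interlocking peaks and valleys can be sketched in a few lines of Python (invented utilization series, not PlateSpin output): two workloads whose peaks occur at different times may fit on one host even though the sum of their individual peaks would not.

```python
# Hypothetical hourly CPU samples (%): one daytime and one nighttime workload.
web_cpu = [80, 75, 70, 20, 15, 10]
batch_cpu = [10, 10, 15, 70, 75, 60]

combined = [a + b for a, b in zip(web_cpu, batch_cpu)]
host_capacity = 100                    # one physical host's CPU budget (%)

print(f"Sum of individual peaks: {max(web_cpu) + max(batch_cpu)}%")  # 155%
print(f"Combined per-interval peak: {max(combined)}%")               # 90%
print("Fits on one host" if max(combined) <= host_capacity else "Needs two hosts")
```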



Summary
This topic summarizes the key points that were discussed in this lesson.

Summary
• Reconnaissance and analysis tools are used to gather server
resource utilization and workload data.
• Various tools are available to gather information; embedded tools
are operating system dependent and offer per-server information,
whereas specialized tools such as VMware Capacity Planner can
gather and present information for multiple servers.

References
For additional information, refer to these resources:
• http://www.vmware.com/products/capacity-planner/

Lesson 3

Understanding a Migration
Plan
Overview
This lesson identifies and explains how to build and evaluate a migration plan.

Objectives
Upon completing this lesson, you will be able to define the requirements for migration, list the
necessary contents of a migration plan, and provide guidelines for evaluating migration plans.
This ability includes being able to meet these objectives:
• Identify the migration plan.
• Identify the aspects of the migration plan.
• Identify the methods for migration plan evaluation.
Migration Plan Overview
This topic introduces and explains migration plans.

Migration Scale
• Full migration to new site
  - Build new physical data center (replace old equipment with Cisco UCS)
  - Migrate operating systems, applications, and data
• Full migration within existing site
  - Build new logical data center within an existing physical data center space (replace old equipment with Cisco UCS)
  - Migrate applications and data
• Partial migration or redesign
  - Redesign an existing data center (add Cisco UCS to the existing deployment)
  - Migrate operating systems, applications, and data

Depending on the type of migration, which can range from simply upgrading the existing data
center to include virtualization to building a physically new data center with new equipment,
you can determine what migration steps are required to have the least painful migration.

Migration Plan
• Deploy Cisco UCS cluster(s)
• Provide connectivity between existing computing solution and
Cisco UCS equipment (L2 and L3)
• Define migration actions for migrating operating systems,
applications, and data to Cisco UCS
• Define verification steps for each migration action
• Define rollback procedures in case of unforeseen migration issues
• Decommission old equipment or integrate it into the new data
center as part of migration


In general, a migration plan should include a list of migration actions, which can be divided
into migration phases. Each migration action should have three main components:
• Detailed task list assigned to the appropriate resource
• Verification steps to confirm that the migration was successful
• Rollback procedure to revert to the original setup in case an unforeseen problem was
detected which cannot be immediately mitigated
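As an illustration of this structure (a hypothetical sketch in Python, not a Cisco-provided tool), each migration action can be modeled with its task list, verification, and rollback bound together:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MigrationAction:
    name: str
    tasks: List[Callable[[], None]]   # detailed task list per resource
    verify: Callable[[], bool]        # confirms the migration step succeeded
    rollback: Callable[[], None]      # reverts to the original setup

def execute(action: MigrationAction) -> bool:
    """Run one migration action; roll back if verification fails."""
    for task in action.tasks:
        task()
    if action.verify():
        return True
    action.rollback()                 # unforeseen problem: revert immediately
    return False

# Hypothetical example action whose verification always succeeds.
move_web = MigrationAction(
    name="migrate web-01",
    tasks=[lambda: print("converting web-01 ...")],
    verify=lambda: True,
    rollback=lambda: print("reverting web-01 to the original server"),
)
print("migrated" if execute(move_web) else "rolled back")
```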




Migrating Servers
• Select server migration method
• Physical to Physical (P2P)
  - Define whether server personality (MAC, WWN, UUID, ...) needs to be migrated
  - Define if complete operating system and application reinstall is required
• Physical to Virtual (P2V)
  - Select the P2V conversion tool - VMware vConvert
  - Define Cisco UCS and VMware as migration prerequisites
• Virtual to Virtual (V2V)
  - Define server infrastructure prerequisites - Cisco UCS and VMware
  - Add new ESX hosts to virtual infrastructure
  - Existing VMware infrastructure - migrate VMs using VMotion
  - New VMware infrastructure - export/import VMs

For the server migration, first the migration method should be selected:
• Physical-to-Physical (P2P): The existing servers are migrated to Cisco UCS server blades
in one-to-one fashion. The migration plan must determine whether it is necessary to
migrate personality identifiers like MAC, world wide name (WWN), and Universally Unique
ID (UUID) addresses. Next, it has to evaluate whether a complete operating system and
application reinstall is needed or some cloning tool can be used to migrate the server
operating system, application(s), and data (if it resides on the local disks).
• Physical-to-Virtual (P2V): The existing physical servers are converted to virtual machines (VMs).
The migration plan must define the prerequisites - installed Cisco UCS cluster(s)
connected to LAN and SAN, implemented VMware infrastructure with proper
management. Next, the plan must define which tools will be used for P2V migration (for
example, VMware vConvert).
• Virtual-to-Virtual (V2V): The existing VMs are migrated to a new VMware virtual
infrastructure. The migration plan must define the prerequisites - the installed Cisco UCS
cluster(s) connected to LAN and SAN and new ESX hosts added to the virtual
infrastructure. Second, the migration plan should also consider possibilities in regards to
the virtual infrastructure:
  - Existing virtual infrastructure will be used, with new ESX hosts added to the
infrastructure. The Cisco UCS cluster(s) should be properly connected to the LAN to
permit communication with the management infrastructure (VMware vCenter)
and to the SAN (connected to the same shared storage as other ESX hosts
in a cluster) to allow proper operation of VMware services like VMotion, High
Availability (HA), Fault Tolerance (FT), Distributed Resource Scheduler (DRS), etc.
  - New virtual infrastructure will be used, thus new management services must be
deployed and ESX hosts added to that infrastructure. Next, the migration plan
should define how the VMs from the existing virtual infrastructure would be
migrated to the new one - using export/import of VMs or using cloning tools.

Migrating Operating Systems,
Applications, and Data
• Option 1 - Can be done at the application layer if supported by an application
  - Add fresh virtual or physical server to application server cluster
  - Remove old physical servers from the cluster
• Option 2 - Can be done at the operating system layer
  - Install new virtual or physical application server
  - Copy data to new virtual or physical server or provide access to existing external database server
  - Switch over from old physical server to new server


In general, when migrating from a traditional setup to a virtualized one, you need to migrate a
service from a physical server to a virtual server.
This can be done in several ways:
Option 1: Using application clusters
• Install a fresh virtual server and an application server. Put the new server into an
application cluster with the existing physical server. Repeat the step for as many servers as
required. Once there are enough virtual servers in the cluster, start removing the physical
servers until only virtual servers remain. Decommission the physical servers.
Option 2: Reinstalling servers
• Install an operating system and an application server into a virtual server. Migrate data and
configuration from the physical server to the virtual server. Switch over from physical to
virtual server.



Migrating Operating Systems,
Applications, and Data (Cont.)
• Option 3 - Can be done at the virtualization layer
  - Use dedicated tools to virtualize physical servers
  - Switch over from physical to new virtual server
• Option 4 - Can be done at the server hardware layer
  - All data located on SAN attached storage
  - Switch over from existing to Cisco UCS server blades

Option 3: Converting physical servers to virtual servers
• Use dedicated tools that can do a hot or cold conversion of a physical server to a virtual
server. Switch over from physical to virtual server.
Option 4: Converting physical to physical servers
• Use boot from SAN and store all data on SAN attached storage. Switch over to Cisco UCS
by booting blade server.

Migration Example

(Figure: the old and new data centers are interconnected and a conversion tool migrates each server. Steps shown: 3. Switch over to virtual server; 4. Verify operation of virtual server; 5. Optional: switch back in case of failed migration.)

The figure illustrates a simplified set of migration steps for a single server using the third
migration option-using conversion tools. The old and new data center infrastructures are
temporarily connected on Layer 2 and a hot conversion can be used to create a new virtual
server with the same operating system and applications as the original physical server (only the
underlying hardware is different).



Verification Steps
• Define what to check after each migration step
• Evaluate possible issues when migrating to Cisco UCS
• Physical deployment
  - Not enough power when servers under full load
  - Not enough power to add new components
• Connectivity
  - I/O Module to Fabric Interconnect, pin groups
  - VLAN, VSAN misconfiguration
  - Undersized uplinks
• Server parameters
  - MAC, WWN, UUID misconfiguration - overlapping addresses
• Operating system patches not applied

Verification steps are an important part of the migration plan since they are used to verify
proper operation of the migrated server infrastructure and applications.
To write proper verification steps, the plan should evaluate the possible problems that could
take place during migration.

Physical Deployment Related Issues


• It could happen that during or after the migration, all or part of the equipment could be shut
down due to insufficient power or cooling capacity. The issue could arise from the amount
of load placed on the servers, which under full load consume more power and produce
more heat than when idle or under light load.
• It could happen that adding a new component - e.g., a Cisco UCS server blade - could not
be completed due to insufficient power. This could be due either to an insufficient power
supply installed in the Cisco UCS chassis or due to insufficient power at rack level.

Connectivity Related Issues


• Misconfigured I/O Module (IOM) to Fabric Interconnect links
  - The IOM to Fabric Interconnect cabling is not following Fabric Interconnect A and B design
requirements, but the uplinks are cross-connected to Fabric Interconnect A and B in
an ad hoc manner - left and right IOM connected to the same Fabric Interconnect.
  - Wrong configuration of pin groups, which results in no connectivity. Pin group(s)
might specify the wrong uplink for a certain server.
• VLAN errors - server network interface card (NIC) adapters might be placed in a wrong
VLAN; upstream switch trunk interface might block required VLANs (due to allowed
VLAN list not properly configured)
• VSAN errors - server host bus adapters (HBAs) might be configured with wrong VSAN, or
Fibre Channel over Ethernet (FCoE) VLAN ID might be overlapping with existing VLANs
• Undersized uplinks resulting in poor or no connectivity due to insufficient bandwidth

Server Issues
Overlapping MAC, WWN, or UUID parameters might result in no connectivity.
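A pre-migration check for such overlaps is easy to script. The following is a minimal sketch in Python (hypothetical profile data, not the Cisco UCS Manager API):

```python
from collections import Counter

# Hypothetical service-profile identifiers; app-01 mistakenly reuses a MAC.
profiles = {
    "web-01": {"mac": "00:25:b5:33:21:11", "wwn": "20:11:00:11:22:33:44:55"},
    "web-02": {"mac": "00:25:b5:33:21:12", "wwn": "20:11:00:11:22:33:44:56"},
    "app-01": {"mac": "00:25:b5:33:21:11", "wwn": "20:11:00:11:22:33:44:57"},
}

for key in ("mac", "wwn"):
    counts = Counter(p[key] for p in profiles.values())
    for value, n in counts.items():
        if n > 1:
            print(f"Overlapping {key.upper()} {value}: used by {n} profiles")
```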

Operating System Issues


An operating system without proper patches might prevent proper server operation.



Migration Plan Aspects
This topic describes the migration considerations that need to be taken into account when
migrating data centers.

Migration Plan Aspects


Consider the implications of migration:
• Overnight migration vs. gradual migration
• Downtime for individual services
• Potential data loss
• Potential degradation of service
• Required migration staff (in-house vs. external)
• Required temporary migration resources
• Selected server migration aspects

Migration of a large and complex data center can be a very sensitive operation. It is the
migration that will reveal any outstanding flaws in the data center design and migration plan.
The list above covers the most common considerations which, when properly addressed, help
you build a migration plan that can reduce the overall risk.

Migration Timing
• Overnight migration
  - Requires significant staff for speedy migration (external help)
  - Multiple issues can arise.
  - One issue can prevent the whole migration from completing.
  - Significant downtime (not appropriate for round-the-clock enterprises)
• Gradual migration
  - Less personnel required
  - Individual issues can be investigated and migration steps postponed and repeated at a later time.
  - Less or no downtime
  - Requires many maintenance windows for one or several concurrently performed migration steps
• Recommendation: Design migration plans for gradual migration


Overnight Migration
Migration of data centers can be performed in one go (the so-called overnight migration)
wherein the migration plan is designed to migrate all services at once. This is typically feasible
only for small data centers that are used only during business hours and not around the clock.
This can offer a migration window between two business days (literally over night) or during
the weekend (two full days). Larger data centers or data centers that are used around the clock
typically cannot be migrated in such short times.

Gradual Migration
An alternative to overnight migrations is to devise a migration plan wherein services are being
migrated one by one over several maintenance windows (i.e., nights or weekends) and should
be designed in a way to cause minimum downtime per service (e.g., minutes or hours).



Downtime
• Cold conversion of servers may require significant downtime (i.e., hours).
• Hot conversion minimizes required downtime but may not always be available or reliable.
  - Cannot be used for physical to physical server migration
• Application-layer clustering has zero downtime, but:
  - Is possible only with applications that have clustering functionality
  - Requires additional planning for adding new servers to the cluster and decommissioning old servers
• Recommendation: Use application clustering or hot cloning if possible

When determining how to migrate old servers to new servers, it is recommended to select the
migration option with minimum downtime, especially for services that require maximum
uptime.

Data Loss
• Hot server conversion or data copying during busy hours could result in data loss during conversion and switchover.
• Replication between existing and new centralized storage is an option, but switchover requires downtime (stop old server, start new server)
• Recommendation: Prefer downtime over hot switchover for sensitive applications

Hot conversion means that a conversion is being done while the server is active. This can
potentially lead to problems where data can be lost within moments when a conversion and
switchover are being performed. Avoid using this method for services dealing with sensitive
data where data loss is not acceptable (e.g., e-banking transaction server). The preference is a
longer downtime vs. data loss.

Degradation of Service
• Migrated servers should provide the same functionality with equal or better performance.
• Consider individual migrated components:
  - CPU: guaranteed CPU resources in virtual environment equal to or greater than existing CPU power
  - Memory: equal to or greater than existing memory for virtual machine
  - Disk: create virtual disk on the central storage that is equal to or greater than existing disk capacity
  - Network: ensure equal or greater bandwidth to virtual machine (on server and network equipment)
• Recommendation: Audit the existing environment to get peak and average resource utilization and provide guarantees for the identified peak periods (do not stretch the virtual-to-physical ratio to the limit)


When dimensioning the virtualized data center, it is normal to assign multiple virtual servers per
one high-performance server (i.e., several CPUs and/or cores and large amounts of memory).
Care should be taken not to assign too many, which could result in performance degradation
after migration. Longer monitoring (e.g., over several weeks) should be performed and the data
analyzed to determine each physical server's average and peak resource utilization. The peak
utilization should be used to dimension the new data center and the physical-to-virtual ratio.



Migration Staff
• In-house staff may not have any experience with migration.
• Use external expertise when lacking in-house expertise for migration.
• Recommendation: Consult external experts for creating a migration plan; use in-house staff to carry out a gradual migration.

A gradual migration plan can be prepared with the help of external experts with experience in
data center design and migration. The migration itself, when done gradually, can be performed
using in-house staff with only optional external oversight or help.

Temporary Resources
• Some resources may be required for the migration only:
  - Layer-1/2 connectivity between old and new data center (e.g., dark fiber or L2 VPN)
  - Cisco UCS server blades for migration tools
• The migration plan should list and detail the temporary resources to ensure a smooth migration.

A migration plan should also list any temporary resources that are needed during the migration
process. This would typically include Layer-2 connectivity between the old and the new data
center and some servers and network devices to help in the migration process.

Server Migration Aspects
• P2P migration
  - Evaluate if different server hardware affects operation
• P2V migration
  - Follow P2V guidelines for applications
• V2V migration
  - Evaluate whether ESX hosts can be integrated into existing VMware solution
  - Evaluate how VMs can be migrated to new VMware infrastructure
• Can server personality parameters be migrated?
• Integration with existing server management tools for operating system, applications, virtualization - VMware vCenter
  - Maximum infrastructure size
  - Effect of relocating management servers and services (downtime)
(~.":"!-,",:;. .;:; ':"~:""'; :" <~';J'''' ~:'};~''(-';'

The server migration aspects largely depend on the selected migration method.
With physical-to-physical (P2P) migration, the plan should evaluate if using different server hardware
compared to the existing one affects server operation after migration.
With physical-to-virtual (P2V) migration, the plan should evaluate how the application(s) will
behave when running in VMs.
With virtual-to-virtual (V2V) migration, the plan should evaluate whether the ESX hosts can be
integrated into the existing virtual infrastructure or - in the case of a new infrastructure -
evaluate how the VMs can be migrated.
The migration plan should also evaluate:
• Whether the existing server parameters (MAC, WWN, UUID) can be used and need to be
migrated to new servers
• How a new Cisco UCS server infrastructure merges with the existing management tools
• Can the existing management tools for operating system, applications, and virtual
infrastructure be used?
• Is any additional configuration like Serial over LAN (SoL) or Intelligent Platform
Management Interface (IPMI) required for server management?
• Would adding new servers require a new management application or license since the
infrastructure would go beyond the existing management limitations?
• What would happen if the management servers and services were relocated?



Evaluating a Migration Plan
This topic describes the evaluation procedures for a migration plan.

Evaluating a Migration Plan


• Test implementation and migration steps in advance if possible:
• Connectivity
  - Interconnect old and new infrastructure (L2)
  - Create dummy virtual or physical servers with available IP addresses in existing server VLANs on Cisco UCS
  - Test connectivity from Cisco UCS dummy servers to real servers
• Conversion to virtual servers
  - If possible, hot-convert a real server
  - Start the virtual server in an isolated environment to verify its stand-alone operation
• Migration of physical servers
  - Use cloning tool to transfer data if possible
• Management
  - Verify proper operation of existing management tools for operating system and applications

A "dry-run" of the migration can be performed--where all actions that do not cause downtime
can be performed to ensure that any issues in the migration plan are identified in advance
before the actual migration is started.
This evaluation requires that a new data center e built and configured, connectivity provided
between the old and the new, and that some services are tested in an isolated environment.

Evaluating a Migration Plan (Cont.)


The networking and storage infrastructure of a new data center can be built and tested in
advance. The operation of real servers can be tested only once they are migrated. To maximize
the reliability of the migration process it is recommended to create dummy virtual servers and
test their connectivity to the existing physical servers (this connectivity is required during the
migration period) and their connectivity to clients (this connectivity will be required for normal
server functionality after migration).



Evaluating a Migration Plan (Cont.)

Additionally, it is recommended to hot-convert some or all of the existing physical servers and
start them in the new virtual environment in an isolated mode, simply to test that the servers
were successfully converted and are operational.

Summary
This topic summarizes the key points that were discussed in this lesson.

Summary
• Identify the servers and services to be migrated
• Detail the technical aspects of the servers, services, and the network
• Create a migration plan to allow for the gradual migration of servers
• Create a detailed set of verification steps for each server or service
• Create rollback procedures in case verification fails and cannot be immediately mitigated




Module Summary
This topic summarizes the key points that were discussed in this module.

Module Summary
• Server performance characteristics are affected by applications and hardware resources.
• VMware Capacity Planner is a pre-migration tool used to assess the existing server infrastructure to aid in physical-to-virtual conversion.
• VMware CapacityIQ is a post-virtualization tool used to analyze, forecast, and plan virtual infrastructure.
• A migration plan should address server migration aspects for the selected migration method.




Module Self-Check
Use the questions here to review what you learned in this module. The correct answers and
solutions are found in the Module Self-Check Answer Key.
Q 1) Which of the following affects actual server performance? (Source: Understanding
Historical Performance Characteristics)
A) application chattiness
B) operating system management
C) CMOS
D) BIOS
Q2) Which tool can be used to evaluate requirements for converting physical Windows
2003 servers to virtual machines? (Source: Identifying Data Center Reconnaissance and
Analysis Tools)
A) VMware CapacityIQ
B) VMware Capacity Planner
C) Webmin
D) CLI
Q3) What should be defined in the migration plan when migrating virtual machines to a
new virtual infrastructure? (Source: Understanding a Migration Plan)
A) conversion tool
B) application reinstall requirements
C) power requirements
D) virtual machine migration method

Module Self-Check Answer Key
Q1) A
Q2) B
Q3) D



Module 8

Positioning the Cisco UCS


Overview
This module describes how to position the Cisco Unified Computing System (UCS).

Objectives
Upon completing this module, you will be able to identify Cisco UCS deployments and describe
the advantages of the Cisco UCS solution. This includes the ability to meet these objectives:
• Identify Cisco UCS deployments.
• Describe Cisco UCS solution advantages.
Lesson 1

Identifying Cisco Unified


Computing System
Deployments
Overview
This lesson identifies and describes Cisco Unified Computing System (UCS) server
deployment options and usability, especially in a server virtualization environment.

Objectives
Upon completing this lesson, you will be able to identify and describe Cisco UCS deployment
options. This ability includes being able to meet these objectives:
• Describe the Cisco UCS deployment options.
• Identify the Cisco UCS deployment for a server virtualization environment.
Server Deployment Options
This topic describes the purposes for which the Cisco Unified Computing System can be
utilized.

Flexible Server Deployment

Network Fabric

., Blum,...

A single Cisco Unified Computing System can be deployed with a mixture of bare-metal
operating system installations and virtualized server solutions.
From another perspective, a Cisco UCS can be used for server deployments - which include
new servers, upgrading, and repurposing - and for achieving business continuity, which
includes high availability and disaster recovery.
Neither server deployment nor business continuity are functions unique to blade server markets.
Enterprises with larger server deployments face many of these issues.
Even smaller environments (tens to hundreds of servers) that face these deployment and
business continuity challenges can still take advantage of the features provided by Cisco UCS.
However, the true advantage of the Cisco UCS IS more easily seen on larger server
deployments (thousands of servers).
The life cycle for servers in any organization is 0etween 18 months to 3 years, which is
governed by the advent of new technologies. T ese technology shifts can happen with
frequencies between 3-5 years. The changes in server technologies often follow the lifecycle of
solutions changes and not so much the changes f the technology.


Simplified Server Deployment


• Server, storage, and network
administrators
- Pre-configure multiple servers up-
front using service profiles
- Firmware bundle
- BIOS and adapter settings
- Boot order and parameters
- LAN connectivity
- SAN connectivity
• Deploy physical servers as needed
over time
- Plan and pre-configure once
- "Pay-as-you-grow" non-disruptive
incremental deployment
- Simple server replacement:
replacing the physical blade

The Cisco UCS platform makes server deployments easy with service profiles. The service
profile is created by a server administrator and may use resources created by the
storage and network administrators, like MAC address and worldwide name (WWN) pools. Service
profiles allow for the following capabilities:
• Plan and pre-configure once: By using resource pools, service profiles, and service
templates, you can plan and configure for groups of servers all at one time and then, using
the profiles, provision new servers at any time.
• Incremental deployments: After service profiles are created, adding additional servers is
as simple as using a service template to create a new server.
• Server replacement: This task is as simple as starting a profile on a new blade after it has
been physically replaced.



Dynamic Server Provisioning
• Apply the appropriate profile to provision a specific server type
• Same hardware can be dynamically deployed as different server types
• No need to purchase custom-configured servers for specific applications
• Maximize server hardware

(Figure: Profiles for Web Servers | Profiles for App Servers)

One of the challenges faced by companies that have large numbers of servers concerns
provisioning and server utilization. Typically, when designing a solution, a customer will need
to design it for different levels of utilization. This could mean that, in certain solutions,
customers may need to purchase and have available additional server hardware and software,
for burst capacity, for example. This hardware, while being underutilized, cannot be
dynamically repurposed; if it is not needed, it must simply occupy space, power, and cooling
resources.
Because servers are virtual and configured using profiles, repurposing resources (server
hardware) is as simple as disassociating a profile and associating a new one. Less-used
servers can be shut down and repurposed for more heavily used servers.
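
The following is a minimal Python sketch (not Cisco UCS Manager code; all names and
values are illustrative assumptions) of the idea described above: the blade itself carries
no identity, so repurposing is just disassociating one profile and associating another.

    # Hypothetical model of stateless service profiles (illustration only).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ServiceProfile:
        name: str                     # e.g., "web-01" (assumed name)
        uuid: str
        mac: str
        wwnn: str
        boot_order: tuple = ("SAN", "LAN")

    @dataclass
    class Blade:
        slot: int
        profile: Optional[ServiceProfile] = None   # blade holds no identity of its own

    def associate(blade: Blade, profile: ServiceProfile) -> None:
        blade.profile = profile        # blade now boots with this identity

    def disassociate(blade: Blade) -> None:
        blade.profile = None           # blade returns to the spare pool

    # Repurpose a blade from web duty to application duty without touching hardware:
    blade = Blade(slot=1)
    associate(blade, ServiceProfile("web-01", "5600...", "00:25:b5:33:21:11", "20:11:..."))
    disassociate(blade)
    associate(blade, ServiceProfile("app-01", "5600...", "00:25:b5:33:21:12", "20:12:..."))
    print(blade.profile.name)          # -> app-01

In a real Cisco UCS, the identity values would come from the UUID, MAC, and WWN pools
mentioned earlier, and association also applies the firmware and BIOS settings carried in
the profile.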

Server Upgrade Within Cisco UCS
• Disassociate the service profile from the existing server
• Associate the service profile to the new server
• Existing server can be retired or repurposed (create or associate
with an appropriate profile)

Existing Server New Server

(Figure: the service profile, with UUID 560004 05..., MAC 00:25:b5:33:21:11,
WWNN 20:11:00:11:22..., boot order SAN, LAN, and firmware xx.yy.zz, is moved
from the existing server to the new server)

The Cisco UCS enhances server upgrades with service profiles. Simply put, the blade in the
Cisco UCS is agnostic and has its hardware identity defined by the service profile.
A server in a traditional deployment is built specifically for a solution, with particular
adapters, firmware, and boot definitions. If you wish to upgrade such a server, you must again
go through the process of reinspecting the server for new hardware and coordinate that with
other administrators. Then, you have to schedule extended time to make the move.
With the Cisco UCS platform, there is a choice of two options:
• Change the service profile to boot the server on another, upgraded blade.
• Copy the service profile and assign it to the other blade. Then, you can test the new server
before pointing users to it.



Server Disaster Recovery
• Server incompatibility across sites can cause application launch failures
at the Disaster Recovery site.
• Disaster Recovery across sites involves the full infrastructure stack
- Application failover and dependency matching
- Operating system configuration compatibility
- Data replication
- LAN/SAN connectivity matching
- Server compatibility
• Server compatibility across sites requires:
- Configuration matching
(NICs, HBAs, etc.)
- Firmware version matching
- Parameter settings matching
- LAN/SAN connectivity matching

(Figure: the infrastructure stack)

While no complete solution for disaster recovery is provided, the features of Cisco UCS do aid
in a disaster recovery solution. When designing a disaster recovery solution, you must consider
the whole infrastructure stack. This includes:
• Application failover and dependency matching
• Operating system configuration and compatibility
• Data replication and integrity
• Connectivity matching (LAN and SAN)
• Server compatibility
Of these, the area in which the Cisco UCS can aid significantly is server compatibility. Server
compatibility requirements include:
• Configuration matching (adapters)
• Firmware matching
• Server parameters
• LAN or SAN specifics (VLANs and VSANs)
All of these items are configurable in the service profile for a given server, such that when
recovering, the Cisco UCS can quickly ensure server compatibility on the recovery system.
Without this ability, server compatibility can completely derail a disaster recovery solution.
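
As a rough illustration of the configuration-matching step, the Python sketch below
compares the attributes that a service profile pins down. The attribute names are
assumptions for illustration, not a Cisco UCS data model:

    # Compare primary-site and DR-site server configurations (illustration only).
    def compatibility_mismatches(primary: dict, dr: dict) -> list:
        """Return the attributes that differ between the two server configurations."""
        keys = ("adapters", "firmware", "parameters", "vlans", "vsans")
        return [k for k in keys if primary.get(k) != dr.get(k)]

    primary = {"adapters": ["vNIC0", "vHBA0"], "firmware": "xx.yy.zz",
               "parameters": {"boot": ("SAN", "LAN")}, "vlans": [10, 20], "vsans": [100]}
    dr_site = dict(primary, firmware="xx.yy.ww")   # firmware drift at the DR site

    print(compatibility_mismatches(primary, dr_site))   # -> ['firmware']

Because all of these attributes travel with the service profile, exporting the profile to
the DR site removes this class of mismatch by construction.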

Server Disaster Recovery with Cisco UCS
• Export and import service profiles on a periodic or ad hoc basis
• Periodic Disaster Recovery tests at the remote site
• Hardware at the DR site can be repurposed during normal operations:
- Servers can be deployed for test/dev/QA at the DR site
- During an outage, disassociate existing profiles and associate
imported production profiles

Much of ensuring server compatibility is a manual process that leaves a lot of potential for
incorrect configuration. Cisco UCS, with its stateless servers and the ability to transparently
use profiles across redundant systems, helps with server compatibility issues.

Database Environment and Cisco UCS


• Database environment
- Large memory footprint
- CPU intensive
- Multiple interfaces required



Server Virtualization Environments
This topic describes how the Cisco UCS product is used in server virtualization environments.

VMware ESX and Cisco UCS


• More VMs per ESX host
- Reduced hardware costs
- Reduced license costs
• VM level visibility
- Better VM security
- VM performance tuning
• Fewer components
- Adapters, cables, and switches
- Reduced power and cooling
• Pay-as-you-grow deployment
- Enhanced HA & DR
- Risk-free upgrades/replacements

In a general ESX deployment, one or several blades will be running the VMware ESX hypervisor
and will host multiple VMs. These value propositions make the Cisco UCS ideal for
ESX deployments.
• Expanded Memory Blade Servers (Cisco UCS B250-M1): The more memory that is
available, the more VMs you can host per blade, allowing your blade to have a very cost-
effective approach on a per-VM basis (a cost-per-VM sketch follows this list). However,
there are other advantages as well:
Reduced HW costs
Reduced licensing (based on the number of cores)
Larger memory handles VMotion better
• Virtualization Integration: The Cisco UCS VIC M81KR adapter provides not only
visibility into the VM networking, but also provides:
Better security
Performance: virtual adapters are moved out to be handled at the hardware level,
rather than in the hypervisor
Improved troubleshooting
• Unified Fabric: Provides a single network for connectivity, simplifying your configuration
while reducing power and cooling requirements. Virtual NICs are configured and managed
like all other resources through Cisco UCS Manager.

• Service Profiles: Server statelessness allows for the rapid replication and deployment of
servers, which is good for a pay-as-you-grow environment. In addition, server statelessness
also provides for:
Enhanced disaster recovery: eliminates server compatibility worries in DR
Easy upgrades and replacement: server profiles combined with VMotion
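
To make the per-VM economics concrete, here is some illustrative arithmetic in Python.
All prices and sizes are assumptions made up for this example, not Cisco or VMware figures:

    # More memory per blade -> more memory-bound VMs per blade -> lower cost per VM.
    def per_vm_cost(blade_price, license_price, blade_mem_gb, vm_mem_gb):
        vms = blade_mem_gb // vm_mem_gb            # memory-bound VM count
        return vms, (blade_price + license_price) / vms

    n1, c1 = per_vm_cost(blade_price=8000, license_price=6000,
                         blade_mem_gb=96, vm_mem_gb=4)
    n2, c2 = per_vm_cost(blade_price=12000, license_price=6000,
                         blade_mem_gb=384, vm_mem_gb=4)
    print(n1, round(c1))   # 24 VMs, ~583 per VM
    print(n2, round(c2))   # 96 VMs, ~188 per VM

The point is the shape of the result, not the exact numbers: quadrupling memory for a
modest price increase sharply lowers the hardware and license cost borne by each VM.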



Hypervisor and Service Profiles
• Server virtualization and service profiles are independent of each
other.
• Hypervisor is unaware of underlying hardware state abstraction.

VMware and Cisco UCS are complementary products and technologies.


While VMware can be used to replace many of the physical servers within an organization,
VMware is still an application and operating system that needs a server platform on which to
run. With integration and visibility into network virtualization, the Cisco UCS is uniquely
positioned as a preferred platform on which to run VMware ESX.
VMware provides a software-based solution that enables server virtualization at a software
level. VMware provides a hypervisor that allows virtual machines to think they are accessing a
common set of expected hardware.
What Cisco UCS does is to ensure server hardware compatibility across the entire Cisco UCS.
This means that the hypervisor will run on any blade in the system, and the software ensures
that the hardware is identically configured, regardless of the blade on which the VMware
ESX server is chosen to run.

VMware Advanced Functionality
• Flexible number and type (Fibre Channel and Ethernet) of
adapters for best-practices networking
• Quick and simple provisioning of new ESX hosts
- Minimize window of exposure for overcommitted clusters

(Figure: VMware VMotion; VMware HA and FT; VMware DRS and DPM; a failed
server, a standby host, and a power-optimized server)
VMware provides a suite of software to enable High Availability (HA) and Disaster Recovery
(DR) for the virtual machines that are being hosted. This migration can occur by the following
means:
• Live migration across ESX hosts: A manual intervention that can be used for upgrading
servers. With Cisco UCS, you can easily use service profiles to create identical ESX servers,
as needed, to move VMs between. Network and SAN access would be identical, saving
time.
• Policy-based migrations: Moving a VM from one host to another, based on some type of
policy, such as server utilization statistics. All access can be replicated simply through
virtual NICs in service profiles.
• VM restarts due to failed hosts: Hosts can be quickly replicated with service profiles,
thus relieving failover hosts. Larger memory footprints reduce the impact of increasing the
number of VMs during failures.



Virtual Desktop Environment
• ESX servers for VDI require large memory.
• Application virtualization and boot image management apps benefit from
caching.
• Certain streaming workloads can cause ESX hosts to be CPU bound.
• Advanced architecture best practices require multiple segregated
networks.
• "VDI in a box" for VDI pod architectures.

A Virtual Desktop environment or Virtual Desktop Infrastructure (VDI) has the following
characteristics, which can exploit the benefits of Cisco UCS:
• VDIs are very CPU- and memory-intensive. The Cisco UCS B250-M1 blade offering is ideal
in that the memory is scalable to very large amounts, and the Nehalem CPU will be
expanding the numbers of cores.
• Some customers may not want to depend on too many VMs at one time. The ability to
rapidly deploy servers using service profiles provides better HA than traditional servers.
• Application servers and boot servers can benefit from the greater memory available for
caching.
• Streaming workloads can cause VMs to be CPU bound. Cisco UCS virtualized adapters
and adapter profiles provide ease of migration.
• Some architectures require specific network segregation. The Cisco UCS virtualized
adapter and VLANs can easily provide this segregation.
Lastly, the Cisco UCS is ideal because, even in an advanced VDI deployment, it can provide
all of the server platforms that are needed, allowing a VDI deployment in a box or "pod".

Summary
This topic summarizes the key points that were discussed in this lesson.

Summary
• Cisco UCS enables flexible server deployment and dynamic
server provisioning, and it simplifies server upgrades within
systems.
• VMware server virtualization benefits from the Cisco UCS large
memory support and adapter virtualization.



Lesson 2

Describing Cisco Data Center Unified Computing Solution Advantages
Overview
This lesson identifies and explains the benefits and advantages of the Cisco Data Center
Unified Computing solutions.

Objectives
Upon completing this lesson, you will be able to meet the following objective:
• Describe the Cisco Data Center Unified Computing solution advantages.
Cisco Unified Computing Business Advantages
This topic describes the business advantages of the Cisco Unified Computing solution.

Reduced Expenses
• Reduced total cost of ownership (TCO)
• Better return on investment (ROI)
• Reduced physical infrastructure cost (CAPEX):
- Facilities: Less space used
- Lower cost per computing unit
• Reduced operational costs (OPEX):
- Less space used
- Lower power consumption costs
- More efficient cooling

The most obvious advantage is reduced Total Cost of Ownership (TCO). In general, less
equipment is required to perform selected tasks, and this applies to both servers and network
equipment.
Better Return on Investment (ROI) is achieved by higher utilization of equipment, so that over
the long term there is a significant gain in investment.
Lower capital costs are achieved by utilizing less physical space, and by the relatively lower
cost per processing unit. A virtualized infrastructure allows for sharing of commodity
equipment, such as network interface cards, power supplies, cabling, and so forth.
Lower operating expenses are obtained through more efficient use of resources (higher
utilization of less space). Functions such as cooling can be optimized, as they can be
concentrated within a smaller space footprint.

Operational Advantages
• Reduced complexity
• Increased productivity
• Simplified management
• Increased resource utilization:
- Better total vs. utilized resources ratio
• End-to-end solution
• Extended data center life span
• Increased data center facilities capacity

(Figure legend: Utilized Capacity vs. Total Capacity)

The main goal is to better utilize the total available capacity of your facilities. As an example, a
highly utilized virtualized server can have an average CPU utilization at around 35%, compared
to a 5% average on a standalone server box.
Reduced complexity in terms of cabling and supporting infrastructure is an additional key
factor, resulting in simplified physical management and increased productivity.
Combining several technologies, including servers, networks, and data storage, a virtualized
infrastructure brings an end-to-end solution. As this equipment is usually modular, it extends
the data center life span by providing higher upgradeability (for example, a Cisco Catalyst 6500
chassis can be used for the previous, the current, and the next generation of supervisor engines,
and so on).
By condensing equipment, the data center floor footprint is smaller, meaning a net increase in
facility capacity.


Operational Advantages (Cont.)


• Investment protection:
- Single or multisite deployment
- Scalable architecture that accommodates future growth
• Increased service flexibility and agility:
- Better responsiveness
- Shorter design-implement-deploy-manage lifecycle
• Reduced infrastructure and operational risks

Cost-Effectiveness
A dynamically provisionable application infrastructure must also be designed to reduce
operational costs. Pooling resources helps to increase overall resource utilization and leads to
more standardized operating environments. The result is facilitated multisite deployment.

Agility and Service Flexibility


Agility is increased to make the rollout of new applications and scaling of existing applications
faster and easier.
One example of this is the use of virtual machines (VMs) that can be loaded onto any server on
demand, as opposed to applications that permanently reside on a specific server.
Ultimately, the goal is to logically partition computing, network, and storage resources into
services that can be dynamically provisioned on an on-demand basis.
Complete cycles from planning to implementation and rollout can be shortened from a few
weeks into a span of only days.

Resilience and Reduced Operation Risks


The two key aspects to achieving resiliency are security and disaster recovery. A strong
business continuity strategy needs to account for both aspects. Storage resources must be tiered
according to the level of service they provide, and applications must be provisioned with the
appropriate storage and intersite transports according to each application's service level
agreement (SLA), minimizing downtime from days into hours.

Green Data Center Advantages
• Designed for energy efficiency
• Optimize energy consumption:
- Lower power consu mption
- Lower cooling energy consumption
• Achieve environmental compliance


Fifty percent of today's data centers will have insufficient power and cooling capacity to meet
the demands of high-density equipment in the near future. Through 2009, energy costs will
emerge as the second-highest operating cost (behind labor) in 70 percent of data center
facilities. Power demand for high-density equipment will level off or decline. In-rack and in-
row cooling will be the predominant cooling strategy for high-density equipment. In-chassis
cooling technologies will be adopted in 15 percent of servers.
Environmental compliance is an interesting point when competing in tenders*, giving a
higher chance of winning when energy efficiency is counted.

Note *A tender is a request for quote (RFQ) from a public- or government-owned company. Multiple
bidders must propose their solutions, and the best one is selected in the tender.

A few measures you can take to increase energy efficiency (a rough savings sketch follows the list):


• Continue server virtualization (est. >50 percent power/cooling avoidance)
• Implement blades (20-30 percent power/cooling savings)
• Totally separate hot and cool air in all data centers
• Investigate and deploy fresh air cooling in geographies where it makes sense (>30 percent
power savings)
• Raise data center ambient temperature from 68 to 72-75 degrees Fahrenheit
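
As a back-of-the-envelope sketch, the Python below chains the percentages from the list
above. Treating them as independent multiplicative factors, and the 100 kW baseline, are
simplifying assumptions for illustration only:

    # Rough compounding of the savings measures listed above (illustration only).
    baseline_kw = 100.0                      # assumed data center power draw
    factors = [
        ("server virtualization", 0.50),     # >50% power/cooling avoidance
        ("blade servers", 0.25),             # 20-30% savings; midpoint used
        ("fresh air cooling", 0.30),         # >30% power savings where feasible
    ]
    remaining = baseline_kw
    for measure, saving in factors:
        remaining *= (1.0 - saving)
        print(f"after {measure}: {remaining:.1f} kW")
    # after server virtualization: 50.0 kW
    # after blade servers: 37.5 kW
    # after fresh air cooling: 26.2 kW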

Data Center Lifespan Prolonged

(Figure: data center capacity over time; virtualization and Unified Fabric
deployment keep utilized capacity below total capacity. Legend: Total Capacity,
Utilized Capacity)

In the figure, you can see that within three consecutive years after adding equipment to the data
center, the total thermal capacity of the room is exceeded. The solution might be to upgrade the
cooling system. However, the cooling system has to cool down the installed equipment, which
mitigates the efficiency of using the equipment itself. In many cases, equipment upgrades
become unnecessary and could be avoided if the currently installed equipment were utilized
more efficiently. This also makes the data center more environmentally friendly and lowers
operating costs.
Virtualization is the key to maximizing the potential of the existing data center equipment. By
utilizing resources more efficiently through virtualization, you avoid the need for constant
upgrades. This prolongs the lifecycle of the equipment because the load is more stable and its
functionality is optimized over the long term.
Reclaimed storage reduces the expense of adding shelves of disks to the storage system, and the
money saved can be invested in increasing business capacity instead of just keeping up with the
demand on the available storage. The effect of network and storage virtualization and unified
fabric deployment results in less power used per computing unit, making room for future
growth.
In summary, the total effect of the Cisco Data Center Unified Computing solution is that it
creates space in which a business can grow; it prolongs the lifespan of the data center overall,
and it reduces the need to upgrade the facility.

Summary
This topic summarizes the key points that were discussed in this lesson.

Summary
• The business advantages of the Cisco Data Center Unified
Computing solution include reduced complexity, improved
productivity, scalability, and better resource utilization, along with
reduced costs.

Module Summary
This topic summarizes the key points that were discussed in this module.

Module Summary
• Cisco UCS is a flexible, scalable solution which can be used for
simple server deployment, dynamic provisioning, and to address
disaster recovery needs.
• The Cisco Unified Computing solution brings business and technical
benefits.

Module Self-Check
Use the questions here to review what you learned in this module. The correct answers and
solutions are found in the Module Self-Check Answer Key.
Ql) Which Cisco UCS characteristic offers the greatest benefit in the VMware
virtualization environment? (Source: Identifying Cisco Unified Computing System
Deployments)
A) eight-slot chassis
B) advanced power management
C) expanded memory blade
D) support for iSCSI
Q2) Which of the following is a benefit of the Cisco Data Center Unified Computing
solution? (Source: Describing Cisco Data Center Unified Computing Solution
Advantages)
A) higher CAPEX
B) longer ROI
C) reduced OPEX
D) shorter data center lifecycle

Module Self-Check Answer Key
Ql) C
Q2) C

Module 9

Understanding Server
Virtualization Networking
Overview
This module introduces the VMware server virtualization solution and server virtualization
networking, describes the Cisco solution for server virtualization networking, and explains
virtual machine sizing.

Objectives
Upon completing this module, you will be able to identify server virtualization and networking,
understand Cisco Nexus 1000V, and understand virtual machine sizing. This includes the
ability to meet these objectives:
• Identify server virtualization.
• Identify the Cisco server virtualization networking solution.
• Describe how to size a virtual machine.
Lesson 1

Identifying Server Virtualization


Overview
This lesson provides an overview of server virtualization and describes the VMware server
virtualization solution, tools, and concepts.

Objectives
Upon completing this lesson, you will be able to meet the following objectives:
• Identify server virtualization.
• Describe VMware server virtualization concepts.
Server Virtualization Overview
This topic explains server virtualization.

Server Virtualization
• Abstracts the operating system and application from physical
hardware
• Offers hardware independence and flexibility

Physical Server Virtualized Server

CPU | Memory | Storage | Network

Historically, when physical servers are used, they are deployed with one application and one
operating system within a single set of hardware.
A single operating system is isolated to one machine; for example, running either the Windows
or Linux operating system. This means that the physical server is tightly coupled with the
underlying hardware, which makes migration or replacement a process that requires time and
skill.
If additional applications are put on a physical server, these multiple applications start
competing for resources, which typically causes problems related to performance or insufficient
resources, challenges that are difficult to address and manage. Thus, a single application might
run on a single server, resulting in server resource underutilization, with average utilization
ranging from 5% to 10%.
When a new application must be deployed, e.g., a Web service, a physical server must be
deployed, racked, stacked, connected to external resources, and configured, all of which
requires a substantial amount of time.
Since numerous applications are used, some of them demanding high availability as well, a data
center ends up with numerous server deployments. In many cases, this causes various problems,
ranging from insufficient space to excessive power requirements.

Server Virtualization
Server virtualization decouples the server from the physical hardware; this makes the server
independent of the underlying physical machine. The hardware is literally abstracted, or
separated, from the operating system.

The operating system and the application(s) are contained in a container-a virtual machine. A
single physical server with server virtualization software deployed (e.g., VMware ESX) can run
multiple virtual machines. Keep in mind that virtual machines do share the physical resources
of the underlying physical server. However, with virtualization, tools exist to control how the
resources are allocated to individual virtual machines.
The virtual machines on a physical server are isolated from each other and do not interact. In
other words, they are running deployed applications without affecting each other. Virtual
machines can be brought online without the need for installing new server hardware, which
allows rapid expansion of computing resources to support greater workloads.



Server Virtualization Benefits
• Hardware resource consolidation
• Physical resource sharing
• Utilization optimization

(Figure: servers with an average load of about 20% consolidated onto one server
with an average load of about 70%)

With physical server deployment, a single operating system is used by each server. The
software and hardware are tightly coupled, which makes the solution inflexible. Since multiple
applications are typically not running on a single machine, due to potential conflicts, the
resources are underutilized and the computing infrastructure cost is high.

Benefits
When server virtualization is used, the operating system(s) and applications become
independent of the underlying hardware, allowing a virtual machine to be provisioned to any
physical server. Since the operating system and application are encapsulated in a virtual
machine, multiple virtual machines can be run on the same physical server. Thus, server
virtualization offers significant benefits as compared to physical server deployment:
• Physical hardware can be consolidated
• Resources of a physical machine can be shared among virtual machines
• Resource utilization is improved, so fewer resources are wasted
The example shows application deployment in a physical server environment compared to a
virtualized server environment. A physical server configuration uses three (3) servers that have
a low average load ranging from 10% to a maximum of 40%. A virtualized solution uses just one (1)
server with three (3) virtual machines deployed, so the total physical server average load is the
sum of the virtual machine average loads, which is around 70%, significantly better than with the
former configuration. (The arithmetic is written out after the Note that follows.)

Note In the virtualized solution, a small portion of resources is also used for the virtualization
abstraction layer, the hypervisor. Compared to virtual machine (VM) resource usage, the
hypervisor utilization should be low.
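
The consolidation arithmetic from the example can be written out as a short Python
sketch; the hypervisor overhead value is an assumption chosen to be small, per the Note:

    # Sum of the three physical servers' average loads, plus hypervisor overhead.
    physical_loads = [0.10, 0.20, 0.40]      # average CPU load of each server
    hypervisor_overhead = 0.02               # assumed small, per the Note above

    consolidated = sum(physical_loads) + hypervisor_overhead
    print(f"ESX host average load: {consolidated:.0%}")   # -> 72%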

Hypervisor - Abstraction Layer
• Hypervisor or Virtual Machine Monitor (VMM)
- Thin operating system between hardware and virtual machine
- Controls and manages hardware resources
- Manages virtual machines (create, destroy, etc.)
• Virtualizes hardware resources
- CPU process time-sharing
- Memory span from physical memory
- Network
- Storage

(Figure: virtualized server)

A hypervisor or Virtual Machine Monitor (VMM) is server virtualization software that allows
multiple operating systems to run on a host computer concurrently.
The hypervisor provides abstraction of the physical server hardware for the virtual machine. A
thin operating system performs the following basic tasks:
• Control and management of physical resources by assigning them to virtual machines and
monitoring resource access and usage
• Control and management of virtual machines: the hypervisor creates and maintains virtual
machines and, if requested, destroys the virtual machine (if the VMM is alive)
Ideally, a hypervisor would abstract all physical server components: CPU, memory, network,
and storage. CPU abstraction is achieved with CPU time-sharing between virtual machines, and
memory abstraction by assigning a memory span from the physical memory.
A virtual server is used to enable a particular service or application; from the server
perspective, typically CPU, memory, I/O, and storage resources are of concern.
Note that when multiple virtual machines are deployed, they can oversubscribe resources; thus
the hypervisor must employ an intelligent mechanism to allow oversubscription without
incurring performance penalties.
The virtualization methods differ per server virtualization solution: VMware ESX uses a
different virtualization approach from Microsoft Hyper-V, so performance would also differ.
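
A simple way to quantify oversubscription is the ratio of configured VM memory to
physical memory; the sketch below uses hypothetical VM sizes:

    # Memory oversubscription ratio (VM sizes are assumptions for illustration).
    vm_memory_gb = [4, 4, 8, 8, 16, 16]      # configured memory per VM
    physical_memory_gb = 48                  # memory installed in the host

    ratio = sum(vm_memory_gb) / physical_memory_gb
    print(f"oversubscription ratio: {ratio:.2f}")   # -> 1.17

Mechanisms such as transparent page sharing and ballooning (discussed with ESX later in
this lesson) are what let a hypervisor sustain ratios above 1.0 without a performance
penalty.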



Virtual Machine (VM)
• VM contains operating system and
application Virtual Machine
- Operating System = Guest OS
- Guest OS does not have full control
over hardwa re
- Applications are isolated from one
another
• Per VM
- vMAC address
- viP address
- Memory, CPU, storage space

A virtualized server is called a Virtual Machine (VM). A virtual machine is a container holding
the operating system and the application(s). The operating system in a VM is called the guest
operating system.
A VM is defined as a representation of a physical machine by software that has its own set of
virtual hardware on which an operating system and applications can be loaded. With
virtualization, each virtual machine is provided with consistent virtual hardware regardless of
the underlying physical hardware that the host server runs on. A virtualized server has the same
"hardware" characteristics as a physical machine:
• CPU(s)
• Memory
• Network adapter(s)
• Disk(s)
All the virtual server resources are virtualized. Apart from the resources, each VM also
possesses its own set of parameters, e.g., a virtual MAC address and virtual IP address, to
allow it to communicate with the external world. Therefore, a single physical server will
typically have multiple MAC addresses and IP addresses: those defined and used by the VMs
it is serving.
Since a VM uses virtualized resources, the guest operating system is no longer in control of
the hardware; this is the privilege of the hypervisor. Underlying physical machine resources are
shared between different virtual machines, each running its own operating system instance.
The VM resources are defined by the server administrator, who creates the VM and defines its
characteristics: the CPU speed, amount of memory, storage space, network connectivity, etc.

Virtual Machine Benefits
(Figure: the four key properties of a virtual machine: partitioning, isolation,
encapsulation, and hardware abstraction)

Using a VM provides significant benefits with four (4) key characteristics:

• Hardware partitioning: Multiple virtual machines run on the same physical
server at the same time.
• VM isolation: A VM running on the same physical server cannot impact the stability of
another VM.
• VM encapsulation: A VM is kept in a small set of files, which eases VM mobility.
• Hardware abstraction: The VM is not tied to a physical machine and can be literally
moved on business or administrative demand. The load can be dynamically balanced
among the physical machines.



Virtual Machine Partitioning
• ESX abstracts the hardware from the guest OS and application.
• Runs multiple operating systems on a single physical machine
• Divides server system resources between VMs

(Figure: screenshot of a resource pool with multiple powered-on VMs sharing
CPU and memory)

Partitioning means that a physical server (an ESX host, as shown in the graphic) is running two
or more operating systems with different applications installed. The VM operating system is
called the guest operating system.
The guest operating systems might be different: VMware supports a plethora of them, ranging
from Windows, Linux, or Solaris to NetWare or any other vendor-specific system. None of the
guest operating systems have any knowledge of the others running on the same ESX server.
Still, they share the physical resources of the physical server.
The control and abstraction of the hardware and physical resources is done by the VMware
ESX hypervisor, a thin operating system that provides the hardware abstraction.

Virtual Machine Isolation
• Hardware level based fault and security isolation
- VMs are not aware of other VMs' presence
- VM failure does not affect other VMs on the same ESX
• Advanced server resource control
- To preserve and control performance

(Figure: resource allocation screenshot showing per-VM CPU and memory
reservations and shares)

A second key VM characteristic is isolation. Isolation means that VMs do not know about other
VMs that might be running on the same ESX server. They have no knowledge of any other
VM.
The implication of isolation is of course security. Not knowing of each other, the VMs do not
interfere with each other's data. Isolation also prevents any specific VM failure from affecting
any other VM's operation.

Note VMs on the same or different physical servers can communicate if network configuration
permits it.

To ensure proper performance for a VM, the hypervisor allows advanced resource control,
where certain resources can effectively be reserved per VM. For example, ESX can allocate
and dedicate memory.



Virtual Machine Encapsulation
• Each VM has a state that can be saved to known files.
• VMs can be moved or copied (cloned)
- Simple file move/copy operation

A third key VM characteristic is encapsulation: a VM is a collection of files on a host
operating system (the ESX storage space), which saves the VM state that is composed of the
following information:
• The guest operating system and applications installed
• The VM parameters, like memory size, number of CPUs, etc.
An encapsulated VM can easily be moved or copied for backup or cloning purposes; this is
just a simple move/copy operation on the host ESX system. Since a VM is independent of the
underlying physical server, it can be moved to and started on a different ESX server.

Virtual Machine Hardware Abstraction
• Any VM can be provisioned or migrated to any other ESX server
with similar physical characteristics.
• Support for multiple operating systems
- Windows, Linux, Solaris, NetWare

(Figure: VMs migrating between ESX hosts in a resource pool)

The fourth key characteristic is hardware abstraction. As already mentioned, this is performed
by the ESX hypervisor to provide VM hardware independence.
Being hardware independent, the VM can be migrated to another ESX server to utilize the
physical resources of that server. Mobility also provides scalable, on-demand server
provisioning, server resource pool growth, and failed server replacement.
With advanced VMware mechanisms like the Dynamic Resource Scheduler (DRS), the VM can
be moved to a less utilized physical server; thus, dynamic load balancing is provided.



VMware Server Virtualization Solution Overview
This topic describes the server virtualization solution and tools.

Server Virtualization Types

(Figure: native (full) virtualization, host-based virtualization, and
para-virtualization, each shown running VM1 and VM2)

Server virtualization solutions employ approaches that differ in how the guest operating system
is isolated from the underlying hardware and in where the hypervisor or virtual machine
manager (VMM) resides. The most often used approaches are:
• Native or full virtualization
• Host-based virtualization
• Para-virtualization
Abstraction of the operating system and application from the underlying physical server
hardware is an important benefit of virtualization. The abstraction can be employed in disaster-
recovery scenarios since it intelligently address s the traditional requirement of physical server-
based disaster recovery-the need to provide identical hardware at the backup data center.
With complete abstraction, any VM can be brought online on any supported physical server
without having to consider hardware or softwar compatibility.
The ability to run multiple virtual machines on a single server also reduces the costs of a
backup data center solution by consolidating applications on fewer physical servers than would
normally be required-at a backup data center a minimal hardware set can be used while fast
recovery in a disaster situation is accomplished.

Native (Full) Virtualization
Native virtualization has the following characteristics:
• The hypervisor runs on bare metal-i.e., directly on the physical server hardware without
the need for a host operating system.
• The hypervisor completely virtualizes hardware from the guest operating system(s).
Drivers used to access the hardware exist in the hypervisor.
• The guest operating system deployed in a VM is unmodified.
Such an approach enables almost any guest operating system deployment and allows the best
scalability. The most widely used example of native virtualization is VMware ESX hypervisor.

Host-based Virtualization
Host-based virtualization has the following characteristics:
• The VMM runs in a host operating system, not directly on the physical server hardware.
• Drivers used to access physical hardware are host operating system kernel based, whereas
the hardware is still emulated by the VMM.
• The guest operating system deployed in a VM is unmodified but must be supported by the
VMM and host operating system.
Examples of host-based virtualization are Microsoft Virtual Server and VMware Server
solutions. Such solutions typically have a larger footprint due to host operating system usage
and the additional I/O that is used for the host operating system communication.
Microsoft Virtual Server can be deployed with Windows XP, Windows Vista, or Windows
2003 host operating systems and can host Windows NT, Windows 2000, Windows 2003, and
Linux as a guest operating system. The current version is Microsoft Virtual Server 2005 R2
SP1.

Hybrid Virtualization
Microsoft Hyper-V is a hybrid native-host virtualization solution, where a hypervisor resides on
a bare metal server but requires a parent VM or partition running Windows 2008. The parent
partition creates child partitions hosting guest operating systems. The virtualization stack runs
in the parent partition, which has direct access to the hardware devices and provides physical
hardware access to the child partitions. The guest operating systems range from Windows 2000,
Windows 2003, Windows 2008, SUSE Linux Enterprise Server 10 SP1/SP2, and Windows Vista
SP1 to Windows XP Professional SP2/SP3/x64.

Para-virtualization
Para-virtualization has the following characteristics:
• The hypervisor runs on bare metal, i.e., directly on the physical server hardware without
the need for a host operating system.
• The guest operating system deployed must be modified to make calls to or receive events
from the hypervisor. This typically requires a guest operating system code change.
• The application binary interface used by the application software remains intact, thus the
applications do not have to be changed to run inside a VM.
Unmodified guest operating systems can be supported if virtualization is supported in the
hardware architecture; e.g., Intel VT-x or AMD Pacifica.
An example of a para-virtualization solution is Xen, which supports guest operating systems
like Linux, NetBSD, FreeBSD, Solaris 10, and certain Windows versions.

Note The rest of the module will focus on a VMware solution.


VMware Server Virtualization


• VMware vSphere - cloud operating system:
- Infrastructure and application services
- Orchestration of tools and applications for a complete server
virtualization solution
- Management and automation
- Virtual infrastructure
- Virtualization platforms

VMware vSphere
VMware vSphere is the latest VMware server virtualization solution. VMware vSphere is a
cloud operating system designed to manage large collections of infrastructure (CPUs, storage,
networking) as a seamless, flexible, and dynamic operating environment.
VMware vSphere comprises:
• Management and automation with infrastructure optimization, business continuity, desktop
management, and software lifecycle tools
• Virtual infrastructure with resource management, availability, mobility, and security tools
• ESX, ESXi, Virtual Symmetric Multiprocessing (SMP), and VMFS virtualization platforms
Among the key vSphere solution tools and applications are:
• ESX and ESXi hypervisors
• Dynamic Resource Scheduler (DRS)
• Virtual Machine File System (VMFS) native ESX cluster file system
• Distributed switch, which can be the native VMware switch or the Cisco Nexus 1000V
• VMotion, Storage VMotion, High Availability (HA), Fault Tolerance (FT), and Data
Recovery availability tools
• Virtual SMP, enabling virtual machines to use multiple physical processors; i.e., upon
creation, the administrator can assign multiple virtual CPUs to a virtual machine


VMware ESX
VMware ESX 4 is the next generation hypervisor, which extends ESX 3.5 performance with the
following characteristics:
• Support for VMs with up to 8 virtual CPUs
• Up to 255 GB of memory assigned per VM
• Sustained 40 Gb/s of network I/O
• Up to 200,000 I/O operations per second
The VMware hypervisor, ESX, uses intelligent mechanisms to address virtualization, such as
a purpose-built scheduler, use of hardware-assisted technologies for optimized CPU access,
transparent page sharing/ballooning for effective memory usage, and hardware-assisted
memory management for low overhead.
The VMware ESX 4 host scales to support 64 cores and up to 512 GB of physical memory
(double the previous limit) to be able to support very large servers and run them efficiently.
Virtual SMP (Symmetric Multiprocessing) enables the virtual machines to use multiple
physical processors. This means that upon creation the administrator can assign multiple virtual
CPUs to a virtual machine.
From a management perspective, the VMware ESX host can be configured using a specific
client (e.g., SSH to access a Linux-like CLI). The CLI and the commands available can be used
for virtual switch configuration, NIC teaming, and so forth. Typically, the VMware ESX host is
configured via VMware vCenter Server (formerly VMware VirtualCenter).
This management workstation communicates with the deployed VMware ESX hosts and has a
management database that holds the information about the ESX hosts and the virtual machines.
Within the VMware solution, data center management constructs exist that comprise a number
of machines that can move within the defined data center.
This also defines the VMware VMotion (discussed later) network requirements and scope.

VMware Scalability
• Scalability is defined by the ESX host and VM characteristics
- Number of CPUs
- Amount of memory
- Number of VMs
• Expands with the application needs

(Figure: host scalability and VM scalability)

The ESX host and VM characteristics govern the VMware solution scalability.
The ESX host scalability characteristics define the performance and consolidation rates
available per single physical server, whereas the VM scalability characteristics define the
amount of workload a VM can handle.
Note that ESX version 4 now supports on-the-fly (hot) additions of resources like
memory, CPU, network connectivity, and storage; with this, virtual machines no longer
need to be shut down in order to make configuration changes.
In the current VMware ESX version 4, the following applies (a capacity-check sketch
follows the Note below):
• The ESX host scalability is limited to:
512 GB of memory
256 virtual CPUs per host
256 VMs
• The VM scalability is limited to:
8 virtual CPUs with Virtual SMP
255 GB memory

Note To be able to utilize large memory capacities the ESX uses the 64-bit VMkernel.
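
The host and VM limits above lend themselves to a simple capacity check. The Python
sketch below validates a planned VM mix against the ESX 4 limits listed in this topic; it
conservatively counts configured memory with no oversubscription:

    # Check a planned VM mix against the ESX 4 host and VM limits listed above.
    HOST_LIMITS = {"memory_gb": 512, "vcpus": 256, "vms": 256}
    VM_LIMITS = {"memory_gb": 255, "vcpus": 8}

    def fits_on_host(vms):
        """vms is a list of (vcpus, memory_gb) tuples for the planned VMs."""
        if len(vms) > HOST_LIMITS["vms"]:
            return False
        if any(v > VM_LIMITS["vcpus"] or m > VM_LIMITS["memory_gb"] for v, m in vms):
            return False
        total_vcpus = sum(v for v, _ in vms)
        total_mem = sum(m for _, m in vms)
        return total_vcpus <= HOST_LIMITS["vcpus"] and total_mem <= HOST_LIMITS["memory_gb"]

    print(fits_on_host([(4, 16)] * 30))   # 120 vCPUs, 480 GB -> True
    print(fits_on_host([(4, 16)] * 40))   # 640 GB exceeds 512 GB -> False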



VMware Management
• vCenter server
- Manages ESX hosts and VMs
- Required for advanced services like VMotion
- Deployed on physical server or VM

(Figure: vCenter Server managing a resource pool of CPU, memory, and network)

When the size of the virtual infrastructure grows-that is, more and more ESX hosts are added
to the virtual infrastructure-it becomes hard to keep up with management. VMware vCenter
server is the solution for virtual infrastructure management, having the following
characteristics:
• Windows based application
• Resource management for ESX and ESXi hosts and VMs
• VM deployment and template management
• Task scheduling, statistics, logging, alarms and events monitoring.
Although VMware vCenter is not required for ESX host and VM deployment, it is required for
some advanced services-such as VMotion, HA, Fault Tolerance, and DRS.
The VMware vCenter server can be installed on a physical server or VM. In either case the
requirements are:
• Processor = 2GHz or faster
• Memory = 2GB or more
• Disk space = minimum 1GB
• Network adapter = recommended 1GE
• Operating system (32- or 64-bit) = Windows 2003 Server with SP1 or SP2, Windows 2003
Server R2, Windows 2008 Server
• Database server = Oracle 10g, Oracle 11g, Microsoft SQL Server 2005 Express Edition
(comes with vCenter Server), Microsoft SQL Server 2005 with SP2, Microsoft SQL Server
2008
Since vCenter is an important part of the VMware solution, it is vital to plan proper availability.

Virtual Machine Mobility
• VMware VMotion
- Moves virtual machines across physical servers without
interruption
- Changes hardware resources dynamically
- Eliminates downtime and provides continuous service
- Balances workloads for computing resource optimization

(Figure: VMotion moving a VM between ESX hosts in a resource pool)

VM mobility is achieved with VMware VMotion, which allows the moving of virtual machines
across physical ESX hosts with no virtual machine interruption. During such a migration, the
transactional integrity is preserved, and the virtual machine's resource requirements are
dynamically shifted to the new ESX Server.
VMotion can be used to eliminate downtime normally associated with hardware maintenance.
It can also be employed to optimi ze server utilization by balancing virtual machine workloads
across available ESX Server resources. VMotion enables server administrators to transparently
move running VMs from one physical server to another physical server across the Layer 2
network.
For example, suppose a UCS blade needs additional memory. VMotion could be used to migrate all
running virtual machines off the blade, allowing the blade to be removed so that memory could
be added without impact to virtual machine applications.



Virtual Machine High Availability
• VMware High Availability (HA)
- Automatic restart of VM upon ESX host failure
• VMware Fault Tolerance (FT)
- Primary and secondary VM in lockstep
- Instant switchover in case of ESX failure

(Figure: primary and secondary VMs kept in vLockstep; the secondary takes over
when a server fails)

The VMware solution incorporates two different mechanisms to achieve VM high availability:
• VMware High Availability (HA)
• VMware Fault Tolerance (FT)

VMware High Availability (HA)


VMware HA enables automatic failover of VMs upon ESX host failure. The HA automatic
failover restarts the VM that was running on a failed ESX host on another ESX host that is part
of the HA cluster.
Since the VM is restarted and thus the operating system has to boot, HA does not provide
automatic service or application fail over in the sense of maintaining client sessions. Upon
failure, a short period of downtime occurs. The exact amount of downtime depends on the time
needed to boot the VM(s). Since the failover is achieved with a VM restart, there is potential that
some data might be lost due to the ESX host failure.
VMware HA requires the following:
• Dedicated network segment with assigned IP subnet
• ESX or ESXi hosts configured in an HA cluster
• Operational DNS server
• ESX or ESXi hosts must have access to the same shared storage
• ESX or ESXi hosts must have identical networking configuration (either by configuring
the standard vSwitch the same way on all ESX hosts or with a distributed vSwitch)
When designing VMware HA, it is important to observe whether the remaining ESX hosts in an
HA cluster will be overcommitted upon a member failure.
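
The overcommitment check described above can be sketched as a quick N+1 calculation in
Python; the host counts and memory figures are assumptions for illustration:

    # Can the surviving HA cluster members carry all VM load after one host fails?
    def survives_one_failure(hosts, host_capacity_gb, total_vm_memory_gb):
        surviving_capacity = (hosts - 1) * host_capacity_gb
        return total_vm_memory_gb <= surviving_capacity

    print(survives_one_failure(hosts=4, host_capacity_gb=96,
                               total_vm_memory_gb=256))   # True: 256 <= 288
    print(survives_one_failure(hosts=4, host_capacity_gb=96,
                               total_vm_memory_gb=320))   # False: overcommitted on failure

The same check can be repeated for CPU, and extended to tolerate multiple concurrent
host failures where the business requires it.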

VMware Fault Tolerance (FT)
VMware FT advances the HA functionality by enabling a true zero-downtime switchover. This
is achieved by running primary and secondary VMs, where the secondary is an exact copy
of the primary one. The VMs run in lockstep using VMware vLockstep, with which the
secondary VM ends up in the same state as the primary one. The difference between the two
VMs is that the primary one owns the network connectivity.
Upon failure, the switchover to the secondary VM preserves the live client session, since the
VM is not restarted. FT is enabled per VM.
To be able to use FT, the following requirements, among others, have to be met:
• DRS for the VM is disabled.
• Thin disk provisioning is converted to zeroed thick disk provisioning.
• Hosts used for FT must be in the same cluster.


Dynamic Virtual Machine Placement
• VMware Dynamic Resource Scheduler (DRS)
- Dynamic balancing of VM workloads across resource pools
- Intelligent resource allocation based on predefined rules
- Computing resources aligned with business demands

(Figure: load distributed across a resource pool)

Another useful and interesting application is the VMware Dynamic Resource Scheduler (DRS),
which allows dynamic and intelligent allocation of hardware resources to ensure optimal
alignment between business demands and computing resources.
DRS, when deployed, is used to dynamically balance VMs across computing resource pools.
The resource allocation decisions are made based on predefined rules, which may be defined by
the administrator.
Using DRS increases administrator productivity by automatically maintaining optimal
workload balancing and avoiding situations where an individual ESX Server could become
overcommitted.

Server Power Management
• VMware Dynamic Power Management (DPM)
- Consolidates VM workloads to reduce power consumption
- Intelligent and automatic physical server power management
- Support for multiple wake-up protocols

(Figure: standby host; server power optimized)

Dynamic Power Management can be used to reduce the power and cooling expenses related to
physical servers.
DPM consolidates the virtual machines on a minimum number of physical servers (the
VMware ESX hosts) by constantly monitoring resource requirements and power consumption
across ESX hosts in a cluster.
When fewer resources are required, the virtual machines are consolidated on a few ESX
servers, and those that are unused are put in a standby mode.
If the resource utilization increases and workload requirements increase, DPM brings the
standby host servers back online, and then redistributes the VMs across the newly available
resources.
DPM requires a supported power management protocol on the ESX host. Intelligent Platform
Management Interface (IPMI), Integrated Lights-Out (iLO), and Wake-on-LAN (WOL) are
supported protocols.
Note that for each of the wake-up protocols, you have to perform a specific configuration on
each host before enabling DPM.



Summary
This topic summarizes the key points that were discussed in this lesson.

Summary
• Server virtualization provides virtual machine partitioning,
isolation, encapsulation, and hardware abstraction.
• The VMware vSphere server virtualization solution provides
advanced tools and mechanisms that scale virtual machine
deployment.
• VMware management is achieved with the vCenter server.
• VMotion enables VM mobility across ESX hosts without VM
interruption.

Lesson 2

Cisco Server Virtualization Networking Solution
Overview
This lesson identifies VMware server virtualization networking and describes server virtualization networking with the Cisco Nexus 1000V distributed virtual switch.
Objectives
Upon completing this lesson, you will be able to meet the following objectives:
• Describe VMware server virtualization networking.
• Identify the Cisco solution for server virtualization networking.
• Describe Cisco Nexus 1000V deployment design.
VMware Server Virtualization Networking
This topic introduces VMware server virtualization networking.

Virtual Network Components
• Virtual machines
• Virtual NICs
• Virtual switch
• Physical NICs

The VMware server virtualization solution extends the access layer into the ESX server with the VM networking layer. The following components are used to implement server virtualization networking:
• Physical network(s): Physical devices connecting VMware ESX hosts for resource sharing. Physical Ethernet switches are used to manage traffic between ESX hosts, the same as in a regular LAN environment.
• Virtual network(s): Virtual devices running on the same system for resource sharing.
• Virtual Ethernet switch (vSwitch): Similar to a physical switch. Maintains a table of connected devices, which is used for frame forwarding. Can be connected via uplink to a physical switch via a physical network interface card (NIC). Does not provide the advanced features of a physical switch.
• Port group: Subset of ports on a vSwitch for VM connectivity.
• Physical NIC (vmnic): Physical network interface card used to uplink the ESX host to the external network.
Virtual Access Layer

Virtual access layer
Physical access layer
VMware networking is deployed on each ESX server and extends the access layer into configured physical servers-a virtual access layer. The virtual access layer does not have the same functionality as the physical access layer, typically lacking ACL and QoS configuration options.
vNetwork Standard Switch Overview
• Associates physical (vmnic) and virtual NICs
• Connectivity for
- VM communication within and between ESX hosts
- Service console for ESX management
- VMkernel for VMotion, iSCSI, FT logging
• Service Console port - assigned to a VLAN
• VMkernel port(s) - assigned to a VLAN
• VM port group(s)
- Assigned to a VLAN
- VMs assigned to port group
• Uplink(s)
- External connectivity
- vmnic associated with single vSwitch only

vNetwork Standard Switch (vSwitch) is a software-based switch that resides in the VMkernel. The VMkernel is the VMware operating system that controls and virtualizes server physical resources. It is installed on a local disk, on SAN storage, or embedded in hardware with ESXi.
vSwitch provides:
• Connectivity between VMs within a single ESX host and on different ESX hosts
• Management connectivity for ESX hosts
• Connectivity for advanced VMware services like VMotion and FT, and for iSCSI storage
This connectivity associates physical NICs (vmnic) with virtual NICs. A physical NIC in VMware is represented by an interface called a vmnic. The vmnic number is allocated during VMware installation. A VMware ESX server can be configured with one or more vSwitches, which are managed independently on different ESX hosts.
All ESX hosts in a cluster providing VMotion for VM mobility must have identically configured vSwitches. Such a configuration process is manual and is performed from VMware vCenter.
Communication and connectivity is provided with port groups, configured on vSwitch(es), that segregate traffic based on type or VLAN. A vSwitch allows three types of connection:
• Service Console port: For ESX management access
• VMkernel port: Used by the VMkernel TCP/IP stack for functions such as VMotion and access to network-attached storage (NAS)
• VM port group: Used for VM connectivity to the network
To connect the vSwitch to the external network, an uplink is used-a physical NIC (vmnic) associated with the vSwitch. An individual vmnic can be associated with one vSwitch only.
vNetwork Standard Switch Operation
• Multiple switches on a single ESX host
- No internal communication between vSwitches
• Operates as physical Layer 2 switch
- Forwards frames per MAC addresses
- Maintains MAC address table
- Internal switching for VMs
• Supports
- Trunk ports with 802.1q for VLANs
- PortChannel - NIC teaming
- Cisco Discovery Protocol

An individual ESX host can be configured with multiple vSwitches, which typically have no
internal connectivity and provide no communication among each other. A vSwitch operates at
OSI Layer 2 like a normal LAN switch-it forwards the Ethernet frames based on MAC
addresses, maintains a MAC address table, and provides switching for VMs connected to it.
vSwitches support 802.1q VLAN trunking, PortChanneling with NIC teaming, and Cisco Discovery Protocol.
vNetwork Standard Switch Operation (Cont.)
• No participation in STP, DTP, PAgP
• Internal vSwitch(es)
- Testing and traffic isolation
- VMs accessible via VMware vCenter
• Virtual guest tagging
- VLAN 4095
- Tagged traffic passed up to guest operating system
• NIC teaming
- Connects multiple vmnics to a single vSwitch
- Outbound load balancing only
• vSwitch port based
• Source MAC
• IP hash

There are differences between physical LAN switches and vSwitches-vSwitches do not participate in Spanning Tree Protocol (STP), and do not support the Dynamic Trunking Protocol (DTP) or the PAgP PortChannel protocol. Instead, vSwitches never learn addresses on uplink ports and forward all unknown unicast messages using one of the available uplink NICs.
A vSwitch can be used for internal communication only if it is not associated with any vmnic-no uplink port is defined. Such a configuration can be used for testing and traffic isolation purposes.
vSwitches are capable of supporting VLAN IDs in one of three methods:
• External Switch Tagging (EST): This method leaves frames untagged and allows the external switch to handle all tagging operations.
• Virtual Guest Tagging (VGT): This method allows software within the guest operating system to tag the frame, and maintains that tag within the virtual network. Tagged Ethernet frames are forwarded up to the guest operating system. For this purpose, VLAN 4095 is used with the ESX/ESXi host vSwitch.
• Virtual Switch Tagging (VST): This is the preferred and default method. In this mode, the vSwitch will tag each frame based on the assigned port group VLAN.
To scale the available bandwidth and provide better redundancy, NIC teaming can be used. Multiple NICs can be teamed and associated as an uplink to a vSwitch. VMware uses built-in drivers for NIC teaming, which allows both active/active and active/passive configurations. The teaming supports various load balancing schemes (all are applied to outbound traffic only):
• vSwitch port based: A default scheme that ties each virtual switch port to a specific uplink associated with the vSwitch. The algorithm tries to maintain equal port-to-uplink assignments to achieve load balancing.
• Source MAC based: Load balancing is based on the source MAC addresses. To achieve proper load balancing, the number of virtual network adapters should be greater than the number of uplinks.
• IP hash based: This method uses source and destination IP addresses to determine the physical network adapter.
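As an illustration of VST configuration, the following CLI sketch (hypothetical vSwitch, port group, and VLAN names) shows how a vSwitch, an uplink, and a VLAN-tagged port group could be created from the ESX service console; exact options may vary by ESX release:

esxcfg-vswitch -a vSwitch1                      # create a new vSwitch
esxcfg-vswitch -L vmnic1 vSwitch1               # associate physical NIC vmnic1 as an uplink
esxcfg-vswitch -A Production vSwitch1           # add a VM port group named Production
esxcfg-vswitch -v 100 -p Production vSwitch1    # tag the Production port group with VLAN 100 (VST)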
vNetwork Standard Switch Configuration
• Configured and managed in software by ESX administrator
• Created in the ESX host
• Managed as separate virtual networks
• Up to 1016 usable ports per vSwitch
• Up to 32 vmnics per vSwitch

A VMware vSwitch is created and managed from the VMware vCenter by the ESX administrator. It resides in the ESX host. An individual vSwitch is managed as a separate virtual network, isolating traffic from other vSwitches.
A vSwitch can have up to 1016 usable virtual ports, of which up to 32 can be used for uplinks that are associated with the physical adapters.
vNetwork Distributed Switch Overview
• Spans multiple ESX hosts in cluster
• Managed as a single switch
• Requires vCenter Server

VMware vSphere 4 introduces the vNetwork Distributed Switch-a distributed virtual switch (DVS). With a DVS, multiple vSwitches within an ESX cluster can be configured from a central point. The DVS automatically applies changes to the individual vSwitches on each ESX host.
The feature is licensed and relies on VMware vCenter Server. It cannot be used for individually managed hosts.
The VMware DVS adds additional functionality and simplified management to the VMware network. DVS adds the ability to utilize PVLANs, perform inbound rate limiting, and track VM port state with migrations. Additionally, the DVS is a single point of network management for VMware networks. The VMware DVS is a requirement for the Cisco Nexus 1000V.
The VMware DVS and vSwitch are not mutually exclusive. Both devices can be run in tandem on the same VMware ESX host. An example of when this type of configuration is necessary would be when using the Cisco Nexus 1000V VSM running on a host that it is controlling. In this scenario, the VSM runs on a vSwitch configured for VSM connectivity while it controls a DVS running a Virtual Ethernet Module (VEM) on the same host.
Identifying the Cisco Server Virtualization
Solution
This topic describes the Cisco Nexus 1000V networking solution for virtualized server
environments.

Cisco Server Virtualization Networking Solutions
• Policy-based VM connectivity
• Mobility of network and security properties
• Nondisruptive operational model

Deployment options: Cisco Nexus 1000V (software-based), Cisco UCS 6100XP with VN-Link (hardware-based)

The Cisco server virtualization solution uses technology jointly developed by Cisco and VMware. The network access layer is moved into the virtual environment to provide enhanced network functionality at the VM level.
This can be deployed as a hardware- or software-based solution, depending on the data center design and demands. Both deployment scenarios offer VM visibility, policy-based VM connectivity, policy mobility, and a nondisruptive operational model.

Cisco Nexus 1000V


The Cisco Nexus 1000V is a software-based solution providing VM-level network configurability and management. The Cisco Nexus 1000V works with any upstream switching system to provide standard networking controls to the virtual environment.

VN-Link
VN-Link technology was jointly developed by Cisco and VMware and has been proposed to
the IEEE for standardization. The technology is designed to move the network access layer into
the virtual environment in order to provide enhanced network functionality at the VM level.

Cisco UCS 6100XP
With the Cisco UCS 6100, VN-Link can be deployed as a hardware-based solution offering VM visibility, policy-based VM connectivity, policy mobility, and a nondisruptive operational model.

Cisco Nexus 1000V Distributed Virtual Switch
Policy-based VM connectivity

Defined policies: WEB Apps, HR, DB
• Defined in network
• Applied in vCenter
• Linked to VM UUID

The Cisco Nexus 1000V bypasses the VMware vSwitch with a Cisco software switch. This model provides a single point of configuration for the networking environment of multiple ESX hosts. Additional functionality includes policy-based connectivity for the VMs, network security mobility, and a nondisruptive software model.
VM connection policies are defined in the network and applied to individual VMs from within VMware vCenter. These policies are linked to the Universally Unique ID (UUID) of the VM and are not based on physical or virtual ports.

Cisco Nexus 1000V Distributed Virtual Switch (Cont.)
Mobility of network and security properties

Defined policies: WEB Apps, HR, DB
• Maintained connection state
• Ensured VM security and compliance

Through the VMware vCenter APIs, the Cisco Nexus 1000V monitors VM migration and ensures policy enforcement as machines transition between physical ports. Security policies are applied and enforced as VMs migrate through automatic or manual processes.

Cisco Nexus 1000V Distributed Virtual Switch (Cont.)
• Nondisruptive operational model
• Server benefits
- Existing VM management preserved
- Reduced deployment time and operational workload
- Improved scalability
- VM-level visibility
• Network benefits
- Unified network management and operations
- Improved operational security
- Enhanced VM network features
- Policy persistence
- VM-level visibility

When using the Cisco Nexus 1000V, the management model for VMs stays the same and is handled by the VM administrator. Network administrators create security profiles, and these policies are applied to individual VMs by the VM admin. Deployment time is reduced through preconfigured repeatable processes, reducing the operational workload. The benefits include unified network management and operations, enhanced network features at the VM level, and VM-level visibility.

Cisco Nexus 1000V Features
• Layer 2
- VLAN, PVLAN, 802.1q
- LACP
- vPC host mode
• QoS classification and marking
• Security
- Layer 2, 3, 4 access lists
- Port security
• SPAN and ERSPAN
• Compatibility with VMware
- VMotion, Storage VMotion
- DRS, HA, FT

Cisco Nexus 1000V supports the same features as physical Cisco Catalyst or Nexus switches while maintaining compatibility with VMware advanced services like VMotion, DRS, FT, and HA.

vPC Host Mode


Virtual port channel host mode (vPC-HM) allows member ports in a port channel to connect to
two different upstream switches. With vPC-HM, ports are grouped into two subgroups for
traffic separation. If Cisco Discovery Protocol is enabled on the upstream switch, then the
subgroups are automatically created using Cisco Discovery Protocol information. If Cisco
Discovery Protocol is not enabled on the upstream switch, then the subgroup on the interface
must be manually configured.
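The following fragment is a minimal sketch (hypothetical profile name and VLAN range; syntax varies by Cisco Nexus 1000V release) of how vPC-HM with CDP-derived subgroups could be enabled inside an uplink port profile:

port-profile vm-uplink
  switchport mode trunk
  switchport trunk allowed vlan 100-110
  channel-group auto mode on sub-group cdp    ! build subgroups from Cisco Discovery Protocol
  no shutdown
  state enabled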

Cisco Nexus 1000V Architecture
• License - per server CPU
• VSM - Virtual Supervisor Module
- Management, monitoring, and configuration
- Integrates with VMware vCenter
• VEM - Virtual Ethernet Module
- Enables advanced networking on ESX hypervisor
- Provides each VM with dedicated port
- Multiple VEMs = vNetwork Distributed Switch (vDS)

Cisco Nexus 1000V is licensed per server CPU regardless of the number of cores. It comprises the following:
• Virtual Supervisor Module (VSM): Performs management, monitoring, and configuration tasks for the Cisco Nexus 1000V and is tightly integrated with the VMware vCenter-the connectivity definitions are pushed from the Cisco Nexus 1000V to the vCenter.
• Virtual Ethernet Module (VEM): Enables advanced networking capability on the ESX hypervisor and provides each VM with a virtual dedicated switch port.
A Cisco Nexus 1000V deployment consists of a VSM (one or two for redundancy) and multiple VEMs installed in the ESX hosts-a vNetwork Distributed Switch.
A VSM is a supervisor module much like in regular physical modular switches, whereas VEMs are remote Ethernet line cards to the VSM.
In Cisco Nexus 1000V deployments, VMware provides the vNIC and drivers while the Cisco Nexus 1000V provides the switching and the management of switching.

Cisco Nexus 1000V Operation
• Centralized control plane - VSM
- Manages multiple data planes
- Software appliance on physical server or VM
- Two VSMs for high availability
• ESX cluster = single Cisco Nexus switch
- One data plane (VEM) per ESX host
- Up to 64 ESX hosts per VSM
- 512 active VLANs
- 32 physical NICs per ESX host
- 2048 vEth ports per vDS
- 216 vEth ports per ESX host

VSM VM: guest OS = Other Linux (64-bit), vCPU = 1, memory = 2GB, network = 3x e1000, disk = 4GB

The Cisco Nexus 1000V uses a distributed architecture. This architecture separates control and data plane functionality. The control plane functionality is represented by the VSM, which manages multiple distributed data planes (a VEM in each ESX host). Thus, a VSM acts as a supervisor module for remote VEMs.
All configuration and supervisor functions are handled by the VSM. Using the console, Telnet, or SSH, an administrator makes all configuration changes on the VSM. When a change is made on the VSM, the configuration is passed to vCenter and the changes are made on the DVS. DVS changes are passed down to the corresponding VEM.
Each VEM acts as a module on the VSM, and the VSM/VEM(s) appear as a single switch to Cisco Discovery Protocol neighbors. The VSM does not reside in the data path and therefore cannot directly receive or respond to Cisco Discovery Protocol messages. Cisco Discovery Protocol and other network management packets are transferred between the VEM and the VSM on one of three required VLANs, known as the Packet VLAN.
The VSM can be deployed as a software appliance on a physical server (Control Plane Physical Appliance - CPPA) or on a VM (Control Plane Virtual Appliance - CPVA). A redundant Cisco Nexus 1000V deployment would incorporate two VSM appliances.
The VEM is a software replacement for the VMware vSwitch on a VMware ESX host. All traffic-forwarding decisions are made by the VEM.
An individual Cisco Nexus 1000V supports:
• One data plane per ESX host
• Up to 64 ESX hosts
• 512 active VLANs
• 32 physical NICs per ESX host
• 2048 virtual Ethernet ports
• 216 virtual Ethernet ports per ESX host

The VSM has the following minimum system requirements when run as a VM:
• 2GB memory
• Single vCPU
• Three e1000 network adapters named Management, Control, and Packet, in the right sequence
• 4GB disk
With the 64 VEMs and the redundant supervisors, the Cisco Nexus 1000V can be viewed as a 66-slot modular switch.

Cisco Nexus 1000V Operation (Cont.)
• Distributed data plane - VEMs
- Each operates independently
- No address learning/synchronization across VEMs
- No backplane between VEMs
- No concept of forwarding from ingress line card to egress line card
- No EtherChannel across VEMs
• Internal switching - access to VSM not required
- VSM failure does not fail data path
- Traffic continues to flow by VEM
• No Spanning Tree Protocol (STP)

Each VEM acts as a separate switching line card with no concept of a fabric between VEMs. This means there are no PortChannels or connections between VEMs, and they do not rely on one another for operation. Switching decisions and frame forwarding all happen on the VEM and are not reliant on the VSM.
Only the uplinks in a host can be bundled in a PortChannel for load balancing and high availability. The Cisco Nexus 1000V does not support EtherChannels across different VEMs.
The Cisco Nexus 1000V does not run STP because it would deactivate all but one uplink to an upstream switch, preventing full utilization of uplink bandwidth. Instead, each VEM is designed to prevent loops in the network topology.
Cisco Nexus 1000V Connectivity
• Management
• Control
• Packet
• Data

The VSM, VEM, vCenter, and VM connectivity uses dedicated VLANs-management, control, packet, and one or more data networks.

Cisco Nexus 1000V Connectivity (Cont.)
• Management VLAN
- Out-of-band management for VSM (mgmt0 port)
- Should be the same as vCenter and ESX management VLAN
• Domain ID
- Single Nexus 1000V instance - dual redundant VSMs and VEMs
• Control VLAN
- Exchange control messages between VSM and VEM
• Packet VLAN
- Used for protocols like Cisco Discovery Protocol, LACP, IGMP
• Data VLANs
- One or more VLANs for VM connectivity
• Recommendation - maintain separate VLANs
• Layer 2 connectivity required between VSM and VEMs

The Cisco Nexus 1000V VLANs consist of a management, control, packet, and one or more data networks.
Management VLAN
Each VMware ESX host, the VSM, and the vCenter Server must all reside in the same management network and be part of the same Layer 2 domain.

Domain ID
A single Cisco Nexus 1000V instance, including dual redundant VSMs and managed VEMs, forms a switch domain. Each Cisco Nexus 1000V domain within a VMware vCenter Server needs to be distinguished by a unique integer called the Domain Identifier.

Control and Packet VLANs
Control and Packet VLANs from the VSM must be accessible by uplink profiles on each VEM. The Control VLAN and the Packet VLAN are used for communication between the VSM and the VEMs within a switch domain.
The Packet VLAN is used by protocols such as Cisco Discovery Protocol, LACP, and IGMP. The Control VLAN is used for the following:
• VSM configuration commands to each VEM, and their responses
• VEM notifications to the VSM; for example, a VEM notifies the VSM of the attachment or detachment of ports to the DVS
• VEM NetFlow exports, which are sent to the VSM and then forwarded to a NetFlow Collector

Data VLANs
The data networks carry VM packet traffic (server data). One or more data VLANs are defined
for this purpose. Data traffic from the VM is not sent to the VSM and the VSM does not require
access to the data VLANs. All VSM management is out of band and switching decisions do not
rely on the VSM.

Note It is recommended that the Control VLAN and Packet VLAN be separate VLANs, and that they be on separate VLANs from those that carry data.
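As an illustration, the following VSM configuration sketch (hypothetical domain ID and VLAN numbers) shows how the switch domain and the control and packet VLANs could be defined on the Cisco Nexus 1000V:

svs-domain
  domain id 100          ! unique Domain Identifier per Nexus 1000V instance
  control vlan 260       ! VSM-to-VEM control messages
  packet vlan 261        ! Cisco Discovery Protocol, LACP, IGMP packets
  svs mode L2            ! Layer 2 connectivity between VSM and VEMs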

Cisco Nexus 1000V Port Profiles
• Port profiles
- Used to configure multiple similar ports
- Define VLAN, ACL, QoS, port security, etc.
- Port groups in VMware - created for each port profile
• Uplink port profiles
- Provide outbound connectivity from VEM
- Used for VEM to VSM and VM data traffic
• VM port profiles
- Provide configuration for VM ports
• Recommendation - use for all port configuration on Cisco Nexus 1000V

In the Cisco Nexus 1000V, port profiles are used to configure interfaces. A port profile can be assigned to multiple interfaces, giving them all the same configuration. Changes to the port profile can be propagated automatically to the configuration of any interface assigned to it.
In the VMware vCenter Server, a port profile is represented as a port group. The vEthernet or Ethernet interfaces are assigned in vCenter Server to a port profile for:
• Defining port configuration by policy
• Applying a single policy across a large number of ports
• Supporting both vEthernet and Ethernet ports

Port Profile Configuration


A port profile is a set of interface configuration commands that can be dynamically applied to
either the physical (uplink) or virtual interfaces. A port profile can define a set of attributes
including the following:

• VLAN
• Port channels
• Private VLAN (PVLAN)
• ACL
• Port security
• NetFlow
• Rate limiting
• QoS marking

The network administrator defines port profiles on the VSM. When the VSM connects to vCenter Server, it creates a distributed virtual switch (DVS) and each port profile is published as a port group on the DVS. The server administrator can then apply those port groups to specific uplinks, VM vNICs, or management ports, such as virtual switch interfaces or VM kernel NICs.
A change to a VSM port profile is propagated to all ports associated with the port profile. The network administrator uses the Cisco NX-OS CLI to change a specific interface configuration from the port profile configuration applied to it. For example, a specific uplink can be shut down or a specific virtual port can have ERSPAN applied to it, without affecting other interfaces using the same port profile.

Note Although the configuration can be applied to the individual virtual port on the Cisco Nexus 1000V, it is recommended to apply the entire configuration via port profiles.

By using port profiles, hundreds or thousands of VMs can be provisioned rapidly with detailed network configurations such as port state, quality of service (QoS) tagging, and access control list (ACL) controls. Additionally, port profiles reduce the risk of misconfiguration on groups of similar ports by maintaining the configuration in a central location. While individual port configuration is still possible using the Cisco Nexus 1000V, it is recommended to use port profiles rather than configuring ports directly. This method reduces administration time and simplifies network troubleshooting.
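A minimal port profile sketch is shown below (hypothetical profile name and VLAN; the type keyword depends on the Cisco Nexus 1000V release); once enabled, it appears in vCenter as a port group of the same name:

port-profile type vethernet WebServers
  vmware port-group                 ! publish to vCenter as a port group
  switchport mode access
  switchport access vlan 100        ! hypothetical data VLAN
  no shutdown
  state enabled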

Uplink Port Profiles


The server administrator can assign port profiles that are configured as uplinks to physical
NICs.

VM Port Profiles
Port profiles that are not configured as uplinks can be assigned to a VM virtual port.

Uplink Port Profiles
• Assigned to physical NIC on ESX host (vmnic)
• System uplink port profile
- Carries control and packet VLAN between VEM and VSM
• VM uplink port profile
- Carries VM data traffic
• Single uplink port profile can be system and VM uplink at the same time

System Uplink Port Profiles
When a server administrator adds a host to the DVS, the VEM in that host needs to be able to communicate with the VSM. Since the ports and VLANs for this communication are not yet in place, system port profiles and system VLANs are configured to meet this need. The VSM sends a minimal early configuration to the vCenter Server, which then propagates it to the VEM when the host is added to the DVS.
A system port profile is designed to establish and protect vCenter Server connectivity. It can
carry the following VLANs:
• System VLANs or vNICs used when bringing up the ports before communication is
established between the VSM and VEM
• The uplink that carries the control VLAN
• Management uplink(s) used for VMware vCenter Server connectivity or SSH or Telnet
connections. There can be more than one management port or VLAN; for example, one
dedicated for vCenter Server connectivity, one for SSH, one for SNMP, a switch interface,
and so forth.
• VMware kernel NIC for accessing VMFS storage over iSCSI or NFS.
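The system uplink could be expressed as in the following sketch (hypothetical VLAN numbers; syntax varies by release), where the system vlan statement keeps the control and packet VLANs forwarding before the VEM has received its full configuration:

port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 260-261
  system vlan 260-261               ! control and packet VLANs brought up early
  no shutdown
  state enabled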

VM Uplink Port Profiles


VM uplink port profiles are used to define VM uplink characteristics that are separate from the
control and packet VLANs.

VM Port Profiles
• Provide configuration for VM ports
• Assigned to VM vNIC
• External network access
- Requires uplink port
• Internal network access only
- No uplink port

VM profiles are used to provide configuration for VMs and typically require an uplink profile to access the physical network.
When a VM profile is configured without a corresponding uplink profile, it creates internal VM networks. If a VM profile is created accessing a VLAN that is not trunked to the physical network, then the assigned VMs will be able to communicate only with other VMs assigned to the profile in the same host. This configuration is similar to creating internal-only vSwitches or port groups within a standard VMware networking environment.

Cisco Nexus 1000V Deployment Design
This topic describes considerations that need to be addressed when preparing the Cisco Nexus 1000V deployment design.

Cisco Nexus 1000V Deployment Design
• VSM considerations
- Select ESX host(s) for VSM placement
- Management parameters - IP address, login credentials
- Define Domain ID
- Disable DRS and FT for VSM
- VMotion prohibited
• VLAN scheme - management, control, packet VLANs
• Define system uplink port profile
• Acquire licensing
• High availability - standalone vs. primary-secondary VSM

Planning the Cisco Nexus 1000V deployment must take into consideration Cisco Nexus 1000V, VMware, and uplink switch aspects.
From the Cisco Nexus 1000V perspective, the following must be addressed:
• Licensing-the Cisco Nexus 1000V is licensed per server CPU. The designer must thus know how many ESX hosts will be initially used and how many will be used in the future to plan and select the proper licensing pack.
• VLAN scheme-for the Cisco Nexus 1000V deployment, a minimum of three VLANs are required-management, control, and packet. The designer must reserve and assign these VLAN IDs from the free VLAN ID pool. Although the VLAN IDs could be any of those already used, it is recommended to use separate VLAN IDs.
• Design VSM deployment.

VSM Deployment Design
VSM deployment design must address the following Cisco Nexus 1000V aspects:
• Selecting the ESX host(s) where the VSM appliance will be running-the host should have sufficient resources for the VSM VM.
• Defining the management parameters like management IP address and login credentials.
• Defining the Domain ID parameters for multiple Cisco Nexus 1000V deployments. A unique Domain ID per Cisco Nexus 1000V domain is recommended.
• If you must use the same Control and Packet VLAN pair for multiple domains, you must ensure that their domain identifiers are different.

• Define VSM restrictions-prohibited VMotion, disabled DRS and FT for the VSM appliance.
• Define the system uplink port profile for control and packet VLANs.
• Define the redundancy scheme for the VSM-standalone or active-standby (primary-secondary). Select the ESX host for the secondary VSM appliance. The selected host should be different from the one selected for the primary VSM.

Nexus 1000V Deployment Design (Cont.)
• VMware vCenter considerations
- Select vmnic(s) for uplink port profile(s) for VSM to VEM connectivity
• Upstream switch considerations
- ESX attached interface - trunk with allowed VLANs
On the VMware side, the physical NICs have to be selected for the system uplink port profiles to enable VSM to VEM communication.
On the uplink switch side, the interface where the ESX host is attached has to be configured for 802.1q trunking and must allow the expected VLANs-control, packet, and management.
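On a Cisco IOS upstream switch, that interface configuration could look like the following sketch (hypothetical interface and VLAN numbers):

interface GigabitEthernet1/0/10
 description Uplink to ESX host vmnic0
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 1,260,261    ! management, control, packet VLANs
 spanning-tree portfast trunk               ! host edge port; the vSwitch/VEM does not run STP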

VM Deployment Considerations
• Cisco Nexus 1000V considerations
- Define data VLANs
- Define VM uplink port profile(s) - trunk data VLANs
- Define VM port profile(s) - for VM connectivity
- Per VM policies - security, QoS, etc.
• VMware vCenter considerations
- Select vmnic(s) for VM uplink port profile(s) for VM connectivity
• Upstream switch considerations
- ESX attached interface for trunk with allowed VLANs

Once the Cisco Nexus 1000V deployment is designed, the VM deployment can be planned. This includes Cisco Nexus 1000V, VMware, and upstream switch settings.

Cisco Nexus 1000V Design for VM Deployment
The design should define:
• VLAN scheme for data traffic from different VMs. The scheme would typically define multiple VLANs for VM connectivity.
• VM port profiles for VM connectivity. A common VM port profile for applications with the same connectivity requirements should be defined.
• Per VM port profile policy, which includes QoS, security, and other settings (see the sketch below).
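For example, a VM port profile carrying a per-profile security policy might be sketched as follows (hypothetical profile, ACL, and VLAN names; the ACL itself must be defined separately on the VSM):

port-profile type vethernet db-servers
  vmware port-group
  switchport mode access
  switchport access vlan 200          ! hypothetical data VLAN for database VMs
  ip port access-group db-acl in      ! hypothetical ACL applied to inbound VM traffic
  no shutdown
  state enabled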

VMware Design for VM Deployment


The design should specify the port groups derived from port profiles for different VMs. The
port group description should specify the connectivity policy, which is derived from the port
profiles.

Upstream Switch Design


The design should specify the interface settings of the upstream switch where the ESX server is
connected for the vmnic carrying data VLANs.

Summary
This topic summarizes the key points that were discussed in this lesson.

Summary
• Cisco Nexus 1000V architecture provides a virtual switch chassis with a supervisor module (VSM) and switching line cards (VEM).
• Cisco Nexus 1000V deployment requires management, control, packet, and data VLANs.
• Cisco UCS systems can be combined with Cisco Nexus 1000V technology to provide physical network controls at the virtual network level.
• Cisco Nexus 1000V design defines VLANs, system and VM uplink port profiles, and VM port profiles.

Lesson 3

Sizing a Virtual Machine

Overview
This lesson identifies and explains how to properly size a virtual machine using gathered historical performance characteristics (CPU load, memory usage, network and storage I/O, storage space usage, and so on).

Objectives
Upon completing this lesson, you will be able to describe and perform proper virtual machine sizing. This ability includes being able to meet these objectives:
• Describe the parameters that influence the VM sizing.
• Describe the VM CPU sizing.
• Describe the VM memory sizing.
• Describe the VM network sizing.
• Describe the VM storage sizing.
• Describe how the VM is sized.
Identify Virtual Machine Sizing Parameters
This topic introduces and explains the parameters used in sizing the virtual machine.

Virtual Machine Parame ers


• Virtual infrastructure
- Pools of CPU, memory, storage,
network resources
• VM parameters
- Memory size
Memory S10 rage NellMJrk
- vCPU(s)
&1.#&
- NIC(s)
- Drive controller(s) CPU Pool
- Virtual disk(s) - capacity specified
Memory Pool
- Peripheral device(s) - CD/DVD
• Reservations can be made to ensure Storage Pool
memory and/or CPU capacity

Network Pool

ESX servers provide hardware resources like CPU, memory, disk space, and network connectivity to VMs, thus making the resources part of a certain pool.
A virtual machine, when created, specifies resource parameters which define the amount of resources that will be used-whether this is CPU, memory, disk, or network. Virtual machines provide server deployment on business demand.

Virtual Machine Parameter Limits
• VMware ESX 3.5 per VM maximums
- Disk size = 2TB
- Number of vCPUs = 4
- Memory size = 64GB
- Number of NICs = 4
• VMware ESX 4 per VM maximums
- Disk size = 2TB
- Number of vCPUs = 8
- Memory size = 255GB
- Number of NICs = 10

The following parameter list describes VMware ESX 3.5 per virtual machine maximums:
• Size of SCSI disk = 2TB
• Number of vCPUs = 4
• Memory size = 64GB
• Number of NICs = 4
The following parameter list describes VMware ESX 4 per virtual machine maximums:
• Size of SCSI disk = 2TB
• Number of vCPUs = 8
• Memory size = 255GB
• Number of NICs = 10

Application Requirements Example


As mentioned, the VM sizing should take into account application requirements that vary by application type and vendor. The following list gathers typical memory, storage, and network use cases for certain applications:
• High-end database applications like Oracle RAC, MS-SQL HA Cluster, and Sybase HA Cluster require a large amount of memory, multiple NICs, should be connected to SAN, and require high availability.
• Low to midrange database applications like Linux/Microsoft server-based MySQL and Postgres databases require low to midsize memory, dual NICs, and can use NAS or SAN for storage.
• Web hosting applications like Linux Apache or Microsoft IIS require a relatively small memory amount, local disk or NAS storage, and dual NICs.
• General computing applications like Microsoft SharePoint, file servers, print servers, etc. require a low memory amount, local disk or NAS storage, and a single NIC.
• CRM/ERP front-end application servers like SAP and PeopleSoft typically require a midsize memory amount, dual NICs, and local disk or NAS/SAN storage.
• Microsoft Exchange (depending on the use case) might require midsize memory, dual NICs, and local disk or NAS/SAN storage.
• High guest count Virtual Desktop Infrastructure like VMware with 128 to 300 guests per server requires typically large memory, multiple NICs, SAN or NAS storage, and high availability.
• Market data applications/algorithmic trading require large memory, multiple NIC connectivity, SAN or NAS storage, and high availability.
• Linux grid-based environments like DataSynapse require midsize memory, dual NICs, and local disk or NAS storage.

Sizing Virtual Machine CPU
This topic describes the virtual machine CPU sizing aspects.

CPU Sizing Input Data
• Physical server CPU capacity and utilization
- Measure with VMware Capacity Planner
• Guest operating system and application
- Recommended CPU capacity
- Architecture - single- or multithreaded
Gathering VM CPU Sizing Information
A VM CPU should be sized according to the utilization of the physical machine that is to be converted to a virtual one. In addition to that, the minimum resource requirements of the deployed application should also be taken into account.
Physical machine resources as well as the applications running inside must be analyzed before converting it into a virtual one. Prior to conducting the physical machine profiling, any unnecessary services should be stopped.
The following information should be gathered for the VM CPU sizing:
• Number of vCPUs required, which is governed by the application architecture (single- vs. multithreaded)
• Measured average CPU load with a profiling tool like VMware Capacity Planner
• Application minimum resource requirements specified by the vendor
A CPU load profiling can be done using the following tools:
• VMware Capacity Planner to gather the historical physical machine CPU workload-at least a one-month period of profiling should be used to gather relevant data
• For Microsoft Windows-based servers, the Perfmon and Task Manager tools can be used
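As a worked example (hypothetical numbers): a physical server with two 3.0-GHz cores that averages 12 percent utilization consumes roughly 0.12 x 2 x 3000 MHz = 720 MHz, so a single-vCPU VM with a CPU reservation in the 700 to 800 MHz range would cover the measured average, with additional headroom planned for the observed peaks.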

Sizing vCPU
• Number of vCPUs
• Use single vCPU for single-threaded applications

VM CPU sizing is done when a VM is created. The administrator needs to specify how many vCPUs the VM should have.

VM CPU Workload
Like host CPU saturation, VM CPU saturation can also occur. Depending on the application type, adding to the vCPU count might help the VM's performance-i.e., if the application is multithreaded it could utilize additional threads. The VM profile can be updated by increasing the CPU reservation for a particular VM that has been identified as lacking CPU power.

Host CPU Workload
CPU load on a physical server is generated by the guest operating system and the application(s) installed, as well as the VMware ESX hypervisor, since it provides a virtual interface to the physical hardware. Normally the great majority of the load should be the result of processing due to the application(s) in a VM, while the processing performed by the host should result in just a small, incremental increase in load.
A CPU workload depends on the VM and application deployed as well as the ESX hypervisor configuration. Host saturation indicates that too much load has been put on a particular host. Though such CPU overutilization is not desired, it can always be amended by moving the VM causing the overutilization to another ESX server.
Other actions that can be taken to address host CPU overutilization caused by a VM are:
• Examine and decrease disk and/or network activity caused by the VM applications that cache data. Increase the amount of memory assigned to the VM to lower I/O and reduce the ESX hypervisor hardware virtualization burden.
• Reduce the number of vCPUs for the VM to the number required to execute the application processing. If an application is single-threaded, there is no benefit in assigning four vCPUs to the VM. On the other hand, there is an extra burden placed on the ESX hypervisor since maintaining three idle vCPUs takes CPU cycles.

Allocating Host CPU
• Configure host CPU capacity sharing per VM
- CPU contention sharing (Low, Normal, High, Custom)
- Capacity reservation in MHz
- Maximum CPU capacity in MHz or unlimited

Apart from specifying the number of virtual CPUs available to the VM, the administrator can also allocate and ensure CPU capacity for the times when CPU sharing contention happens.
The following parameters can be applied in a VM profile:
• Number of vCPUs: A VM deployed with a single-threaded application should be given only one vCPU. For multithreaded applications, the number of vCPUs depends on the application itself but is also limited by the processor type and the ESX limitation. For ESX 3.5 up to 4 vCPUs can be assigned, whereas with ESX 4.0 up to 8 vCPUs can be assigned.
• Shares: Allow the administrator to define the processor usage of each VM in relation to the other VMs hosted by the system.

Note If you expect frequent changes to the total available resources, use Shares, not Reservation, to allocate resources fairly across virtual machines.

• CPU capacity reservation: Sets the minimum CPU capacity to prevent VM CPU starvation. Use when you need to guarantee minimum CPU capacity.
• Maximum CPU capacity: The value specifies the maximum CPU capacity that can be used by a VM and can be used to prevent a VM from consuming too many CPU cycles. If unlimited is selected, a VM may use all CPU capacity if needed.
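To illustrate how shares behave under contention (hypothetical numbers): if VM A is assigned 2000 CPU shares and VM B 1000 shares on the same host, then while both contend for a fully busy CPU, VM A is entitled to 2000/3000 = two-thirds of the contended capacity and VM B to one-third; when there is no contention, either VM may use the idle capacity regardless of shares.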

Note Be aware that VMware ESX hypervisor adds CPU overhead-thus do not overcommit the
physical CPU.

Since VMware ESX can be deployed using different CPUs-i.e., Intel or AMD-the processor type presented to a VM depends on the host processor type.
In general, the total CPU resources needed by the virtual machines running on the physical server should not exceed the CPU capacity of that host.

Note Host CPU capacity allocation is more art than science. Proper allocation requires complete understanding of the hosted VMs' performance requirements and behavior. Unless complete performance understanding is possessed, a default utilization scheme should be used.

Sizing Virtual Machine Memory
This topic describes aspects of virtual machine memory sizing.

Memory Sizing Input Data
• Physical server memory capacity and utilization
- Acquired via VMware Capacity Planner
• Guest operating system and application recommended memory capacity

Gathering VM Memory Sizing Information
A VM's memory should be sized according to the memory utilization of the physical machine that is being converted. The value should also take into account the minimum resource requirements of the application deployed.
Similar to CPU profiling, the memory profiling of a physical machine's resources as well as the applications running inside must be analyzed before converting it into a virtual one. Prior to conducting the physical machine profiling, any unnecessary services should be stopped.
The following information should be gathered for the VM memory sizing:
• Amount of memory required by the guest operating system and application
• Measured average memory utilization with a profiling tool like VMware Capacity Planner, which can gather historical physical machine memory usage. A one-month period of profiling is recommended to gather relevant data.
• Application and guest operating system minimum resource requirements specified by the vendor

Sizing Memory
• Memory size for VM in MB
• VMware vCenter client recommendation for selected guest operating system

Memory is presented to a VM in the form of slots that can be populated with memory-the speed and type, but not the size, of the memory is that of the host server. The option is not configurable and occurs automatically. For example, a VM with 4 GB of memory will typically see two slots configured with 2048 MB DIMMs.
VM memory sizing is done upon VM creation, or even after the VM has been created. This includes specifying the amount of memory assigned to a VM.
Be aware that you can assign a VM more memory than a host physically has-the maximum being 64GB for ESX 3.5 and 255GB for ESX 4.0. Thus even a host with 16GB of physical memory can have a VM with 32GB configured.
In general, the sum of memory of all running VMs and the VMware ESX hypervisor should typically not greatly exceed the amount of physical memory. It is recommended to load VMware ESX memory to 80% or 85%. Using this approach allows for some spare memory in case the VMs start to use more physical memory. Remember also that using more than 80% to 85% of memory capacity in an ESX cluster deployment can impact VMware High Availability failover and diminish functionality.
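As a worked example (hypothetical numbers): on a host with 64GB of physical memory, the 80% to 85% guideline leaves roughly 51GB to 54GB for the running VMs and hypervisor overhead, so a plan of sixteen VMs averaging 3GB of active memory each (48GB total) fits the guideline, while twenty such VMs (60GB) would not.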

VMware ESX VM Memory Allocation


VMware ESX employs several mechanisms to address VM memory allocation.

Transparent Page Sharing
VMware ESX can save a lot of physical memory using transparent page sharing, which is particularly useful in environments where multiple similar guest operating systems are used. The VMware hypervisor checks each block of memory that a VM wants to write to physical memory-if the block is identical to a block of memory already saved in physical memory, there is no need to use extra physical memory. Instead, only a pointer is set that records that the block is used by other VMs. If the VMs only read such a block and never change it, the block is saved just once. ESX uses such a block until a VM wants to write to the block, at which time an additional memory block copy is created.

Ballooning
If VMware ESX runs out of guest VM memory space, it starts a process inside the guest operating system that claims memory. The guest operating system checks whether there is memory not being used. If such memory exists, it is given to the process. Then VMware Tools claim such memory and report to ESX exactly which memory blocks can be reused for other VMs.
Using this approach, unused memory is taken out of the other VMs to give it to the VM that needs it more.

Swapping
If the ballooning is not sufficient, ESX uses a last resort-swapping out VM memory to a disk. This incurs performance degradation since disks are always much slower than physical memory.

Allocating Host Memory
• Configure host memory sharing per VM
- Memory contention sharing (Low, Normal, High, Custom)
- Capacity reservation in MB
- Maximum memory capacity in MB or unlimited

Apart from specifying the size of the allocated memory available to the VM, the administrator can also ensure memory capacity for the times when memory sharing contention happens. The following parameters can be applied in a VM profile to address VM memory requirements:
• Shares allocation is used to identify preferential treatment when memory resources are under constraint on an ESX host. Resource shares are based on a proportional allocation system, where a VM's "due share" is a ratio based on its shares compared to the total shares allocated to all objects in the respective group. If two VMs have the same amount of memory provisioned to them and are in the same resource pool, the VM with the greater number of shares will enjoy preferential access to physical memory on a proportional basis equal to the differences in the share allocations.
• Memory reservation guarantees an amount of physical memory that will always be available to the VM.
• Memory limit defines the absolute maximum physical memory a VM can consume on the host.
The options can be configured independently of each other-certain VM profiles might employ a reservation without any limit.

Note If you expect frequent changes to the total available resources, use Shares, not Reservation, to allocate resources fairly across virtual machines.

Sizing Virtual Machine Network
This topic describes aspects of virtual machine network sizing.

Network Sizing Input Data
• Physical server adapter speed and utilization
- Acquired via VMware Capacity Planner
• Guest operating system and application recommended connectivity

Gathering VM Network Sizing Information
A VM network should be sized according to the physical server network connectivity-the number of adapters used, for example. A VM can have multiple network adapters if required.
The VM network performance depends on the underlying physical server network connectivity-the number of physical network adapters, their speed, the number of VMs sharing individual adapters, and the amount of traffic produced by such VMs.
VM network profiling should be used to analyze a physical machine's network usage before converting it into a virtual one. Prior to conducting the physical machine profiling, any unnecessary services should be stopped. The VM network analysis should state the number of network adapters required, the network connection speed, and the actual network utilization.
The information can be gathered with the VMware Capacity Planner tool. To assess the actual network load, a one-month period is recommended to gather relevant data.
The input information should also specify the application and guest operating system network requirements as specified by the vendor.

Sizing Network
• Number of network adapters
• Individual NIC parameters
- Network label - VM port group
- Status - "Connect at power on"
- Adapter type
- MAC address

VM Network Overview
Packets that are sent by a guest application pass through multiple layers before they go on to the wire. The message from the application is first processed by TCP/IP in the guest operating system. After the required headers have been added to the packet, it is sent to the device driver in the virtual machine.

Applying VM I/O Parameters
VM I/O sizing is done upon VM creation. The following parameters can be applied in a VM profile to address VM I/O requirements:
• Number of network adapters defines the number of NICs that will be available to a VM.
• MAC address specifies an administratively assigned MAC address. If not specified, a MAC address is dynamically assigned by the ESX hypervisor.
• Adapter type defines the type of adapter a VM will use. Different types can be used, offering different performance. The use of a VMXNET or E1000 adapter is recommended for optimum performance. Remember that the vNIC type must be supported by the guest operating system; i.e., there has to be a device driver installed in the guest operating system.

Available Network Adapters
A VM can use the following network adapters depending on the guest operating system:
• Vlance or PCNet32: Common physical network adapter that most 32-bit guest operating systems, except Windows Vista, support; thus the network can be used immediately.
• VMXNET: This virtual network adapter is used to address Vlance network performance in a VM. VMXNET is highly optimized for performance in a VM. The network card has no physical counterpart, thus operating system vendors don't provide built-in drivers for this card, and VMware Tools must be installed to have the driver available.

• Flexible: This virtual adapter identifies itself as a Vlance adapter when the VM boots, but initializes and functions as either a Vlance or a VMXNET adapter, depending on which driver initializes it. Recent VMware Tools are required to support VMXNET representation and operation.
• E1000: This adapter is a virtual implementation of a physical network adapter that is supported by newer operating systems like 32- and 64-bit Windows Vista. The performance is intermediate between Vlance and VMXNET.
• VMXNET2 (Enhanced): This adapter is based on the VMXNET adapter but provides high-performance features like jumbo frames. The guest operating system support depends on driver availability-i.e., support is available for 32-bit Microsoft Windows XP, 32/64-bit Red Hat Enterprise Linux 5.0, 32/64-bit SUSE Linux Enterprise Server 10, and 64-bit Red Hat Enterprise Linux 4.0.
• VMXNET3: This adapter is a paravirtualized NIC designed for performance. It offers all the features available in VMXNET2, and adds several new features like multiqueue support and MSI/MSI-X interrupt delivery. The guest operating system support depends on driver availability-i.e., support is available for 32-bit Microsoft Windows XP, 32/64-bit Red Hat Enterprise Linux 5.0, 32/64-bit SUSE Linux Enterprise Server 10, and 32/64-bit Sun Solaris 10 U4 and later.

Note The availability of network adapter types depends on the VMware ESX version. In VMware
ESX 4 the options are Flexible, E1000, VMXNET2, VMXNET3.



Sizing Virtual Machine Storage
This topic describes aspects of virtual machine storage sizing.

Storage Sizing Input Data


• Physical server disk capacity and utilization
- Acquired via VMware Capacity Planner
• Guest operating system and application recommended disk size

VM storage should be sized with regard to:
• Physical server disk space requirements - the number and the size of disks available
• Application and guest operating system minimum resource requirements specified by the
vendor
The actual disk space consumption can be measured with a profiling tool like VMware
Capacity Planner.

Sizing Storage
• SCSI drive controller type (guest operating system)
• Number and type of disks
- New
- Existing
- Raw Device Mapping (RDM)
(Screenshots: vCenter "Select a Disk" and "SCSI Controller" wizard pages)

VM storage sizing is done upon or after VM creation. The following parameters can be applied
in a VM profile to address VM storage requirements:
• SCSI drive controller type: Based on the selected guest operating system; the vCenter
client suggests the type
• Number of disks: Assigned to a VM by creating multiple VM disks
When assigning a disk to a VM, the administrator can either create a new disk or assign an
existing one.
The administrator must also select the VM volume type. VMware ESX supports the VMFS and
RDM volume types.

Virtual Machine File System (VMFS)


VMFS is a cluster file system that leverages shared storage to allow multiple instances of ESX
Server to read and write to the same storage concurrently. VMFS provides on-disk locking to
ensure that a virtual machine is not powered on by multiple installations of ESX Server at the
same time. Should a server fail, the on-disk lock for each virtual machine is released to allow
the virtual machine to be restarted on other physical servers.
VMFS efficiently stores the entire virtual machine state in a central location and can be created
in advance, enabling instant provisioning of virtual machines without relying on a storage
administrator.

Raw Device Mapping (RDM)


RDM allows management and access of raw SCSI disks or LUNs as VMFS files. An RDM is a
special file on a VMFS volume that acts as a proxy for a raw device. The RDM file contains
metadata used to manage and redirect disk accesses to the physical device. Virtual disks are
recommended for most virtual disk storage; raw disks may be needed in some cases. Due to
the nature of RDM, performance is better by default, but this comes at a price: VM creation
requires LUN assignment by the storage administrator.



RDM can be configured in two different modes: virtual compatibility mode or physical
compatibility mode. Virtual compatibility mode virtualizes the mapped device and is generally
transparent to the guest operating system. This mode also provides some VMFS volume
advantages, such as the ability to create snapshots. Physical compatibility mode provides
minimal SCSI virtualization of the mapped device, and the VMkernel passes almost all SCSI
commands directly to the device, which enables closer VM and LUN integration.

Allocating LUN
Storage resources in a virtual environment can be either isolated or consolidated, depending on
the nature of the I/O access patterns of a VM. Isolation means that a LUN is used to provision
space for a single VM, whereas consolidation uses a single LUN to provision storage to
multiple VMs.
The decision whether to isolate or consolidate depends on the VM requirements; i.e., the
application requirements. An isolation approach should be used if a heavy I/O-generating
application is deployed within a VM, to ensure proper storage access. A single LUN to a single
VM allocation can be done using either a VMFS or an RDM volume - the latter, when
deployed, offers only an isolation approach.
The isolation approach has a scalability downside - the ESX LUN number limit can be quickly
reached. Also, when VM storage capacity needs to be increased, an additional disk/LUN needs
to be provisioned, a task serviced by the storage administration team.

Sizing Storage - New Virtual Disk
• Disk size
• Location - with VM or separate data store
• Provisioning
- Thick
- Thin
• Virtual device node (SCSI or IDE)
(Screenshot: vCenter "Create a Disk" wizard page)

The administrator needs to define the following parameters when creating a new disk:
• Disk capacity: For an individual disk, which defines the maximum disk size.
• Disk provisioning type:
- Thin - the disk space on the storage device is allocated on a per-demand basis; that
is, only the space actually used is allocated.
- Thick (no option selected) - the VMDK appears fully sized on the datastore, but the
space is not zeroized. In this case, if storage-device-level thin provisioning is used,
only the space actually used is allocated.
- Support for FT clustering - the configured disk space is allocated on the storage
device.
• Location: Specifies whether the disk files are stored in the same location as the VM files or
in a separate datastore
• Virtual device node: Specifies to which SCSI or IDE controller a virtual disk is connected
• Disk mode: Allows the disk to be configured as independent. If the selection is not
changed, the disk remains in the default state, which allows VM snapshots to be created.

Virtual Machine Disk Mode


VMware ESX supports three disk modes:
• Independent persistent: Changes are immediately and permanently written to the disk,
thus offering high performance.



• Independent nonpersistent: Changes to the disk are discarded when you power off or
revert to the snapshot. In this mode, disk writes are appended to a redo log. When a virtual
machine reads from a disk, it first checks the redo log (by looking at a directory of disk
blocks contained in the redo log) and, if the redo log is present, reads that information.
Otherwise, the read goes to the base disk for the virtual machine. These redo logs, which
track the changes in a virtual machine's file system and allow you to commit changes or
revert to a prior point in time, can incur a performance penalty.
• Snapshot: Captures the entire state of the virtual machine at the time you take the
snapshot. This includes the memory and disk states as well as the virtual machine settings.
When you revert to a snapshot, you return all these items to the state they were in at the
time you took that snapshot.

Using the Virtual Machine Sizing Criteria
This topic describes the criteria and process of virtual machine sizing.

Creating VM
• Options for creating VM
- Manual in vCenter
- P2V conversion with vConvert
- From template
- Import of third-party VM with vConvert
• After creating VM
- Add/remove resources (disk, network, etc.)
- Fine-tune resource sharing
- Install guest operating system
- Install VMware Tools

VMs can be created in the following ways:


• Manually in vCenter
• Using a P2V conversion tool like VMware vConvert
• From a VM template
• By importing a third-party VM with VMware vConvert
The administrator can fine-tune the VM profile after the VM is created:
• Add or remove certain resources (memory, disk, network adapter)
• Allocate host CPU and memory to the VM - configure reservations and contention sharing
The guest operating system is installed after the VM is created. To allow better VM
performance, the VMware Tools should be installed afterwards.



Creating VM - Setting VM Parameters
• Name
• Datastore - VM file storage
• VM version - 4 or 7
• Guest operating system
• vCPUs
• Memory
• Network
• SCSI controller
• Disk
(Screenshot: vCenter "Datastore" wizard page - select a datastore in which to store the
virtual machine files)
The sizing information for the CPU, memory, I/O, and storage is applied when the VM is
created, or changed once the VM has already been created. Be aware that changing any of the
parameters once the VM is deployed may present issues with VM operation (depending on the
guest operating system and application used).
The input data for VM sizing are the characteristics and resource capacities of the physical
server that is to be converted to a virtual machine. This information can be gathered with
VMware Capacity Planner, which helps to determine the capacity and utilization for the CPU,
memory, storage, and network.

Recommended VM Configuration
• Critical VM
- Host CPU and memory reservation to guarantee resources
- Share values for periods of increased contention
• Separate drive for operating system and data
- Separate SCSI controller for high I/O requirements
• Use VMXNET3 for improved network performance
• Install VMware Tools

~ .":'!o- ,',:;...:: : •• ,;.» ',',; ::: ~.:; •• .;).,< ~:- .:.:. : ,,:. ,:.

Allocating resources to a VM is important, especially when dealing with critical VMs. The
following recommendations can be taken into account for critical VMs:
• Use host CPU and memory reservations to guarantee resources.
• Set contention sharing for periods of increased contention.
• Use separate drives and SCSI controllers for operating system and data for high I/O
requirements.
• Use VMXNET2 or VMXNET3 network adapter type to improve network performance.
• Install VMware Tools.



Creating VM with P2V Conversion
• VMware vConvert
• Tuning P2V conversion
- Reduce drive size to prevent excess storage consumption
- Reduce memory size to prevent overallocation
- Reduce number of vCPUs to one
- Remove physical device drivers

A physical machine can be converted to a VM by using the free VMware vCenter Converter
utility - either local or remote machines can be converted. The conversion process does not
incur any downtime or disruption.
The administrator should tune the VM configuration when performing Physical-to-Virtual
(P2V) conversion.

Example: Virtualizing Microsoft SQL Server


Microsoft SQL Server deployed on a physical server has been profiled, and the following
information has been acquired:
• Runs using two cores with a total of 4GHz
• Average CPU utilization is below 6%
• Physical machine has 4GB of memory installed with peak usage at 60%
• Average I/O is 100 IOPS
• Average network traffic is 3.2 Mbps
• Storage space used is around 1GB

The following sizing parameters are applied when the VM is created:


• CPU - two vCPUs are assigned, each with 2GHz capacity. No other reservations are
currently made.
• Memory - 3GB of memory is applied to the VM, with no maximum or minimum
reservations.
• Two network adapters are deployed to achieve redundant paths in order to provide nonstop
connectivity.
• 10GB of storage space is allocated to the VM, since the space actually used (around 1GB)
is expected to grow.
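The mapping from profiled data to VM parameters can be expressed as a simple calculation.
The following minimal sketch is illustrative only (the helper name and headroom factors are
assumptions, not a documented VMware or Cisco formula); it reproduces the sizing applied
above:

```python
# Illustrative sketch: derive a first-cut VM profile from profiled data.
# The helper name and the headroom factors are assumptions.

def size_vm(cores, total_ghz, mem_gb, mem_peak_util, disk_used_gb):
    vcpus = cores                                  # keep the physical core count
    vcpu_ghz = total_ghz / cores                   # per-vCPU capacity (2 GHz here)
    vm_mem_gb = round(mem_gb * mem_peak_util) + 1  # peak usage plus headroom
    vm_disk_gb = max(10 * disk_used_gb, 10)        # room for expected growth
    return vcpus, vcpu_ghz, vm_mem_gb, vm_disk_gb

print(size_vm(cores=2, total_ghz=4, mem_gb=4,
              mem_peak_util=0.60, disk_used_gb=1))
# -> (2, 2.0, 3, 10), matching the sizing applied above
```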

Summary
This topic summarizes the key points that were discussed in this lesson.

Summary
• Virtual machines have the same characteristics as physical
machines: CPU, memory size, storage space, and network
connectivity.
• VM CPU sizing defines the number of virtual CPUs along with
optional capacity reservations.
• VM memory sizing defines the memory allocated, along with
optional capacity reservations.
• VM network sizing defines network adapters and types for
network connectivity.

Summary (Cont.)
• VM storage sizing defines the number, type, and size of disks
allocated.
• The sizing criteria should take into account measured physical
server capacities and utilization, along with guest operating
system and application minimum requirements.



References
For additional information, refer to these resources:
• http://www.vmware.com/products/vmmark/results.html

Module Summary
This topic summarizes the key points that were discussed in this module.

Module Summary
• VMware advanced services like VMotion, HA, FT, and DRS
require vCenter Server.
• Standard virtual switches have to be configured with the same
configuration on each individual ESX host to use VM mobility.
• The Cisco Nexus 1000V distributed virtual switch enables mobility
for network configuration.
• VSM is the Cisco Nexus 1000V supervisor module integrated with
VMware vCenter.
• Cisco Nexus 1000V deployment requires management, control,
and packet VLANs for VSM to VEM communication.



Module Self-Check
Use the questions here to review what you learned in this module. The correct answers and
solutions are found in the Module Self-Check Answer Key.
Q1) Which VMware advanced service provides instant failover for the VM? (Source:
Identifying Server Virtualization)
A) HA
B) DRS
C) FT
D) DPM
Q2) What is the prerequisite for using a vNetwork Distributed Switch in VMware? (Source:
The Cisco Server Virtualization Networking Solution)
A) vCenter
B) High-availability
C) VMotion
D) VMkernel port-group
Q3) Which two VLANs must be defined for proper VSM-to-VEM communication?
(Choose two.) (Source: The Cisco Server Virtualization Networking Solution)
A) service console
B) VMotion
C) packet
D) FT-Iogging
E) control
Q4) How many VEMs can be managed from a single VSM? (Source: The Cisco Server
Virtualization Networking Solution)
A) 256
B) 128
C) 64
D) 16
Q5) What happens upon VSM failure? (Source: The Cisco Server Virtualization
Networking Solution)
A) All VEMs cease to forward the traffic.
B) All VEMs revert to standard vSwitch operational mode.
C) VM traffic continues to be switched by VEMs.
D) The ESX host reverts to isolated mode.
Q6) Which is the proper way of ensuring VSM high availability? (Source: The Cisco Server
Virtualization Networking Solution)
A) Enable VMware HA for the VSM VM.
B) Enable VMware FT for the VSM VM.
C) Run VSM on the physical server.
D) Deploy a secondary VSM on a different ESX host.

Q7) Which practice ensures proper operation of multiple Cisco Nexus 1000V domains on
the same ESX servers that are sharing control and packet VLANs? (Source: The Cisco
Server Virtualization Networking Solution)
A) Use different domain IDs for individual domains.
B) Configure port groups with different names.
C) Place VSMs on different management VLANs.
D) Put ESX servers in the same cluster.
Q8) How many vCPUs should a VM running a single-threaded application get? (Source:
Sizing a Virtual Machine)
A) one
B) two
C) four
D) eight
Q9) Which disk allocation scheme uses the least amount of space on a storage device?
(Source: Sizing a Virtual Machine)
A) thick
B) thin
C) zero
D) on demand



Module Self-Check Answer Key
Q1) A
Q2) A
Q3) C,E
Q4) D
Q5) C
Q6) D
Q7) A
Q8) A
Q9) B

Module 10

Evaluating Cisco Unified


Computing Solutions
Overview
This module describes the design success criteria and ROI for the Cisco Unified Computing
solution.

Objectives
Upon completing this module, you will be able to evaluate design success criteria and ROI for
the Cisco Unified Computing solution. This includes the ability to meet these objectives:
• Explain design success criteria.
• Determine the design ROI.
Lesson 1

Understanding Design
Success Criteria
Overview
This lesson identifies and explains the need for design success criteria and what kind of criteria
are used to determine the successful design of a Cisco Unified Computing solution. It also
describes the design success criteria using an example.

Objectives
Upon completing this lesson, you will be able to describe and use design success criteria. This
ability includes being able to meet these objectives:
• Identify why design success criteria are required.
• Identify and evaluate the design success criteria.
• Evaluate existing deployment migration design success criteria.
Design Success Criteria Overview
This topic explains why design success criteria are needed.

Design Criteria Overview


• Virtualization solutions are deployed to address business, technical, and environmental
data center aspects.
• Success is the goal.
• Why design success criteria?
- To evaluate the solution design against the requirements
- To evaluate the solution benefits
- To determine whether the project is successful
- To verify that the design meets key business
and technical goals

The Cisco Data Center Unified Computing solution is used to address the business, technical,
and environmental aspects.
Success is an important goal to strive for. A Cisco Data Center Unified Computing project is no
exception. Due to the different perceptions that can exist about what constitutes success, it may
sometimes be difficult to tell whether a project is successful. Time, cost, and quality are often
used as criteria for evaluating the success of a project.
To confirm that solution expectations have been met, a project design should be evaluated
against certain success criteria. The design success criteria are used to answer the following
basic questions:
• Does the solution design fulfill the requirements that have been established for the project?
• Does the solution create benefits according to a business, technical, and/or environmental
perspective?
• Were the key business and technical goals that were identified at the start of the project
fulfilled?

What Should Be Evaluated?
• Compliance with business goals
• Compliance with technical goals
• Compliance with environmental goals
• Increased resource utilization
• Business continuity
• Power optimization
• Cost savings
• High availability
• Application reliability
• Error isolation
• Portability
• Quick application deployment

What are the key aspects of the solution? Have the stated project goals been achieved? Three
key questions to ask when evaluating a project are:
• How has the solution met the business goals that were set before deploying the solution?
• How has the solution met the technical goals that were set before deployment?
• How has the solution met the environmental goals that were set before deployment?
The goals that the solution should fulfill must create certain benefits, which, in the case of
Cisco Data Center Unified Computing, are as follows:
• More efficient resource usage: The Cisco Unified Computing Solution provides the
capability for IT organizations to ensure that resources will be available and accessible for
applications.
• Error isolation: The Cisco Unified Computing solution serves as a safeguard to provide
security to the system against different possible disruptions or faults that can be a
consequence of unavoidable circumstances such as operating system failure.
• Increased overall security: Although it may seem otherwise, by designating users and
applications into various VMs the Cisco Unified Computing solution actually increases
security levels among interrelated and diverse segments; that is, security is improved even
when VMs share the same physical hardware.
• Quick provisioning: Rather than using troublesome procedures for storage installation and
management, the Cisco Unified Computing solution offers the capability to create new
VMs instantly without requiring physical servers. If properly designed, this also shortens
the time needed to set up storage and data management systems. What used to take weeks
can now be accomplished in several minutes.
• Portability and mobility: The use of intangible equipment and virtual resources allows
effortless movement of VMs from one physical server to another.

Using the Design Success Criteria
This topic shows how to evaluate the design success criteria using the following example.

Existing Computing Solution

These numbers apply to the existing Data Center:


• 1000 servers are deployed.
• 40 storage devices are deployed.
• 3000 network ports and cables are used.
• 200 cabinet racks are used.

For the LAN and SAN network, the following numbers apply:
• Servers are equipped with 2 LAN adapters and 2 HBAs, totaling 2000 Ethernet and 2000
FC adapters.
• 65 LAN switches are used for Ethernet connectivity:
63 switches are used in the access layer; 2 are used as core switches.
• 6 FC switches are used for SAN connectivity:
2 SAN islands are used, with 3 switches in each.
SAN switches are interconnected with 4 ISLs.
• 40 storage devices are connected to the SAN with 4 FC links each.
• 3130 cables are used as the Ethernet cabling infrastructure.
• 2172 cables are used as the Fibre Channel cabling infrastructure.

From an operational perspective the following numbers apply:
• Planned downtime is 3 hours per month.
• Unplanned downtime for the past year was 18 hours.
• The customer needs 20 hours to recover in case of a failure (e.g., server failure).
• It takes 40 hours to deploy a new application on a new server.
• Average server CPU utilization is less than 10 percent.
• Average server memory utilization is around 20 percent.
• Average storage utilization is around 25 percent.

Deployed Solution
• Network characteristics
- Server connectivity - Cisco UCS 6100
- Existing SAN - Cisco MDS
- LAN connectivity - Cisco Nexus 7000
- Unified fabric with FCoE
• Compute characteristics
- Cisco UCS 5100 with blade servers
- VMware for server virtualization
• Storage characteristics
- 2 disk arrays

You have redesigned a customer's data center by deploying Cisco UCS and have decided to use
the following:
• For the network:
Cisco UCS 6140XP Fabric Interconnect cluster for server connectivity
Cisco MDS switches in the existing SAN
Cisco Nexus 7000 for LAN connectivity to provide high throughput using 10GE
Unified fabric with FCoE to simplify server connectivity
• For the server:
Cisco UCS 5108 chassis with blade servers
VMware to virtualize and consolidate servers
• For the storage component:
2 disk arrays

Evaluating Business Goals
(Chart: CAPEX and OPEX before and after deployment, scale $0 to $500,000)
* VMware HA automatically handles physical server failure.

Evaluating the business goals involves comparing the CAPEX and OPEX before and after the
Cisco Unified Computing solution is used. The main reduction is in OPEX, since much less
power is used and much less cooling is required.
Apart from that, you are also comparing:
• How much time it takes to deploy a new server
• How much time it takes to implement failover in case of a server failure
• What the average resource utilization levels are
The time required to deploy a server is minimized from almost 2 days to 3 hours. Recovery is
instantaneous since VMware HA is used. In case of server failure, VMware HA detects that and
starts the VMs from the failed server on another ESX server.
The server deployment time has been reduced due to a combination of different mechanisms.
First, the Cisco UCS solution speeds up several previously lengthy administrative tasks:
• Templates are used for new physical server deployments.
• Server LAN and SAN connectivity is deployed via Cisco UCS Manager without the need
to disturb the network and storage teams.
Second, since VMware vSphere is used for server virtualization, a new server can be rapidly
deployed on available physical computing resources by using a server template in the form of
a "gold image".
In viewing the average utilization levels, the numbers reveal that the Cisco Unified Computing
solution helped customers to raise average CPU utilization from less than 10 percent to 82
percent, average memory utilization from 20 percent to 85 percent, and average storage
utilization from 25 percent to 75 percent.

Evaluating Technical Goals
(Table: planned downtime, unplanned downtime, and recovery time, before and after
deployment)
* VMware VMotion is used to move the VM.
** VMware HA automatically handles physical server failure.

Next you will evaluate the design against the stated technical goals to see whether it meets the
technical design criteria. First you compare the device count before and after the Cisco Unified
Computing solution is implemented:
• The physical server count is reduced from 1000 servers to just 80 servers, yielding an
average server consolidation ratio of 12.5 to 1.
• The storage devices have also been consolidated from 40 to just 2 larger disk arrays.
• The cable and port count has been reduced by a factor of 62, from 3000 to 48.
• The facility now has more spare room, since 196 racks have been freed up for use, which
prolongs the data center lifespan by making extra space available for growth.
From the perspective of technical responsiveness, the Cisco Unified Computing solution in a
virtualized data center creates a reduction in the time required for planned, unplanned, and
recovery tasks.
The planned downtime is minimized to almost zero, since VMware VMotion is used to move
the virtualized server - the VM - away from the physical server that has been selected for
maintenance. Apart from that, the maintenance time related to server firmware (adapter,
enclosure, BIOS, etc.) is also minimized with the use of central server management - the Cisco
UCS Manager.
Unplanned downtime has been reduced to a minimum since multiple HA mechanisms have
been employed in combination: in the network - GLBP and redundant links in EtherChannel
and vPC; in the virtualized server environment - VMware HA, which is used to tackle server
failure; and application-based clusters, which also handle application-related failures.
Recovery time has been minimized as a consequence of using a combination of HA
mechanisms.

Evaluating Cabling Benefits


Finally, in viewing the benefits from a cabling perspective, the passive infrastructure has been
significantly reduced.
• Adapters have been reduced by a factor of 25, from 4000 to only 80. This was made
possible with the use of FCoE, thus deploying CNAs. Each of the 80 servers has one CNA
with redundancy support.
• The number of switches has been reduced to six:
Two Cisco UCS 6100 Series Fabric Interconnects used to connect the 10 blade
enclosures
Two Cisco Nexus 7000 Series Switches used as core switches
Two Cisco MDS switches used to connect the Fibre Channel-based disk arrays
Cabling count has been significantly reduced - from 5302 to only 48 cables.
• Each of the 10 blade enclosures is connected using 2 cables to each of the two Cisco UCS
6100 Fabric Interconnects in a cluster, totaling 40 cables for server-to-fabric-interconnect
connectivity.
• Each UCS 6100 Fabric Interconnect is connected to the Cisco Nexus 7000 Series Switches
using two cables. In total 4 cables are used.
• The two disk arrays are connected to the MDS switches with 16 cables - 8 per disk array to
connect each disk array to both SAN islands.
• The MDS Fibre Channel switches are connected to the Cisco UCS 6100 Series Fabric
Interconnects with two cables each, totaling four cables.
The management connectivity for the servers is embedded in the Cisco UCS 5100 Blade
Enclosure to Cisco UCS 6100 cabling - no special cabling is required to manage servers.

Using the Design Success Criteria
This topic compares the existing solution with the Cisco Unified Computing solution without
changing the number of physical servers.

Existing Deployment
• Physical servers - total 857
- 525 using Windows operating
system
- 300 using Linux operating
system
- 32 using VMware ESX
• Server types
- DAS - dual-attached server
- QAS - quad-attached server
- Blade chassis
• Migration to Cisco Unified
Computing Solution
- No additional server
virtualization

We have created a Cisco UCS design for the existing deployment described in the slide above.
The deployment has 857 physical servers.
The migration to the Cisco UCS is done in a way that preserves the number of physical servers.

Existing Deployment (Cont.)
(Table: existing switch inventory)
26 26 208x 1GE
3 2 2x 10GE
24 24 48x 10GE
2 2 4x 10GE
4 2 16x 1GE
• Average 120W per switch

The tables above summarize information about the existing solution:

• Number of servers = 857

• Number of racks = 59

• Number of adapters = 2247

• Power consumption = 355,040 W

• Heat dissipation = 1,210,686 BTU/h

Cisco Unified Computing Solution

The Cisco Unified Computing Solution has already been designed for this existing deployment
migration. The tables above summarize the information about this solution.

Comparing Solutions
Better results even without additional server virtualization

When comparing the two solutions, you can see solely from an infrastructure quantity
perspective that the Cisco Unified Computing solution design brings substantial savings in
terms of:
• Infrastructure quantity
• Required rack space
• Required power
• Heat produced
But the infrastructure quantity gains are not the only benefits of the solution. Now the customer
requires far less time to provision the servers.
Bear in mind that the solution did not introduce any extra server virtualization, since the
customer requested that it not be introduced. Thus, even greater savings and better results
would be achieved if additional servers were virtualized.

Summary
This topic summarizes the key points that were discussed in this lesson.

Summary
• The design success criteria are used to evaluate the virtualization
benefits.
• The design success criteria evaluate business, technical, and
environmental goals.
• To evaluate design success, a measured data set before and after
solution deployment must be gathered.
• The design success criteria show that the virtualization solution
provides major benefits.

Lesson 2

Determining the Design ROI


Overview
This lesson identifies and explains how to evaluate various data center solutions based on their
cost structure and return on investment (ROI).

Objectives
Upon completing this lesson, you will be able to describe and perform ROI calculations for
data center solutions. This ability includes being able to meet these objectives:
• Identify the ROI criteria.
• Explain how to evaluate the deployed virtualization solution ROI.
Design ROI Overview
This topic introduces and explains the approach to calculating ROI and comparing various data
center solutions.

ROI Calculation
• Identify business and technical requirements for the data center
• Identify expected or maximum future growth for the data center
• Identify technological differences between traditional and
consolidated and virtualized data centers
• Quantify CAPEX and OPEX for both
• Calculate ROI based on the cost comparison

Business requirements are the driving factor in building a data center. Business requirements
must first be translated into technical requirements, which can then be used to start planning a
data center.
There are many factors that can guide you to the right solution, but cost is one of the most
significant decision factors.
Quantifying CAPEX and OPEX can help determine the right decisions in data center planning.

ROI Calculation
Technical Requirements
• Initial and expected future maximum growth
- Number of application servers:
• CPU power
• Memory
• Storage capacity
- Connectivity requirements (core, VPN, Internet)
• High availability requirements
• Security requirements

The business and technical characteristics desired for the project should allow us to estimate the
size of the data center and determine the types and quantity of data center equipment needed.
Such estimates should also include initial technical requirements as well as predicted future
growth.

ROI Calculation
Solution Options
• Traditional data center design
- Standalone servers with built-in storage to accommodate
application servers
- Layered LAN solution
- Layered WAN solution
• Cisco Unified Computing solution
- Fewer high-performance blade servers, with virtualization used
to accommodate application servers
- Centralized storage
- Consolidated LAN and SAN (with VLAN and VSAN
capabilities)
• Mixed data center design
- Consolidation and virtualization of some parts of the data
center

In general, possible data center solutions range from the oldest (i.e., most traditional design)
approach to the newest - a consolidated and virtualized data center.
A traditional data center is where each application server is hosted on its own physical server
using its own internal storage (disks).
The Cisco Unified Computing solution is where resources are consolidated and virtualization is
used to keep servers logically separated (isolated).
Other solutions (mixed) are typically partially consolidated and virtualized solutions where
some parts of the data center are built according to a more traditional approach.

ROI Calculation
Comparing Solutions

(Figure: CAPEX and OPEX of each candidate design feed the comparison - select the
solution with the fastest ROI)

The figure illustrates how to create multiple calculations based on defining multiple data center
solutions, ranging from traditional to a full Cisco Unified Computing solution.
When evaluating solutions, you would initially evaluate the costs for each of the options. In our
case, these are traditional DC design, mixed DC design, and the Cisco Unified Computing solution.
The ROI describes how quickly the investment is returned. Thus, when comparing options, the
solution that has the fastest ROI is the best. When comparing the three options, not only the
infrastructure costs, quantities, and power consumption are evaluated but also the management
overhead and time needed to provision a new server.

ROI Calculation
Comparing Solutions
• Cost comparison scenario 1
- CAPEX - Cisco Unified Computing solution < Traditional DC
• The initial cost of the solution can be smaller
- OPEX - Cisco Unified Computing solution << Traditional DC
• The operational expenses will be smaller
- ROI << 3 years
• Cost comparison scenario 2
- CAPEX - Cisco Unified Computing solution > Traditional DC
• The initial cost of the solution can be larger
- OPEX - Cisco Unified Computing solution << Traditional DC
• The operational expenses will be smaller
- ROI < 3 years

This ROI calculation reflects the difference between a traditional DC and the Cisco Unified
Computing solution. Two scenarios are generally possible:
• The CAPEX and OPEX of the Cisco Unified Computing solution are smaller. Typical ROI
will be less than three years.
• The CAPEX of the Cisco Unified Computing solution is higher, but the OPEX is smaller.
The Cisco Unified Computing solution is still a better choice since the CAPEX is the
initial cost, whereas the OPEX is the recurring expense. A greater CAPEX accounts for
additional equipment, technology, and new knowledge required, while being offset by a
lower OPEX due to energy savings, flexibility, and manageability.

Evaluating Design ROI
This topic describes the approach to calculating ROI by comparing various data center
solutions with the consolidated and unified computing data center design.

ROI Calculation Input


CAPEX
• Cisco Unified Computing solution impact on CAPEX:
- Less equipment is needed.
- Less space is required.
- Smaller power and air conditioning capacity is required.
- Less networking equipment and cabling are needed.
- Expensive high-performance equipment is used.
- Additional licensing, support, and training is needed.

The Cisco Unified Computing solution has both a positive and negative impact on CAPEX:
• Less equipment is needed because of consolidation and virtualization (i.e., resource usage
is optimized [e.g., servers, disks, memory, network connectivity]).
• Less space is required.
• Smaller power and air conditioning capacity is required due to lower power consumption.
• Less networking equipment and cabling is needed.
• Expensive high-performance equipment is used (e.g., more and faster CPUs, more memory,
centralized storage).
• Additional licensing, support, and training for virtualization software is required.

ROI Calculation Input
OPEX
• Cisco Unified Computing solution impact on OPEX:
- Less space is required.
- Less direct and indirect power consumption is needed.
- Less equipment must be maintained.
- Less personnel is required.
- Additional licensing and support for virtualization is required
(optional).

Cisco Unified Computing solution has a mostly positive impact on OPEX:


• Less space is required.
• Less direct and indirect power (cooling) consumption is created.
• There is less equipment to maintain (e.g., replacements, upgrades).
• Less personnel is required due to greatly simplified management, if virtualization is used
on all segments.
• Additional licensing and support for virtualization software is needed.

Example: Direct Power Consumption
Savings on hardware pay for virtualization licensing and training
=> ROI = 0 years, reduced CAPEX
• Traditional deployment
- 90 physical servers required
- Average CPU utilization is 22%
- Power consumption is 50 kW
• Cisco UCS solution
- 30 physical servers
- Average CPU utilization is 70%
- Power consumption is 17 kW
• Price per kWh: 10¢
• Option 1: $43,800 annually
• Option 2: $14,892 annually
• Annual savings: $28,908
(Figure: 90 x physical servers consolidated to 30 x physical servers with 9 x virtual servers;
70% CPU utilization, power 17 kW)

The example above is a conservative example:


• The same type of servers is used in both scenarios (consolidated servers need more
memory).
• Real consolidation ratios can go beyond 1:10, but more powerful servers are used.
• Power distribution should be designed and implemented for maximum future expected
requirements.
• Automated server power-off during off-peak hours can be used to further reduce power
consumption.
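The annual cost figures above follow from simple arithmetic. The sketch below is illustrative
only; it assumes 24x7 operation (8760 hours per year) at the stated price of 10¢ per kWh:

```python
# Illustrative recomputation of the slide's annual power-cost figures.
# Assumes 24x7 operation (8760 h per year) at 10 cents per kWh.
PRICE_PER_KWH = 0.10
HOURS_PER_YEAR = 24 * 365     # 8760

def annual_cost(kw):
    return kw * HOURS_PER_YEAR * PRICE_PER_KWH

option1 = annual_cost(50)     # traditional deployment, 50 kW -> $43,800
option2 = annual_cost(17)     # Cisco UCS solution, 17 kW -> $14,892
print(option1, option2, option1 - option2)   # savings: $28,908
```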

Example: Air Conditioning
Savings on power consumption pay for virtualization licensing
and support => reduced OPEX
• Typical CoP = 2.5
• Cooling power consumption
- Traditional deployment = 125kW
- Cisco UCS solution = 42.5kW
• Price per kWh: 10¢
• Option 1: $109,500 annually
• Option 2: $37,230 annually
• Annual savings: $72,270

When building a new data center, you could use the same ROI and TCO principles when
deciding on the air conditioning (CoP range is typically anywhere from 2 to 3).
Air conditioning support infrastructure should be planned for maximum future requirements.
Definition: CoP - Coefficient of Performance - defines the ratio between the cooling power
and the power consumed.
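Note that the slide arrives at its cooling figures by multiplying the server power by the CoP
(50 kW x 2.5 = 125 kW, and 17 kW x 2.5 = 42.5 kW). The following minimal sketch reproduces
the annual cooling costs under the same 24x7, 10¢-per-kWh assumptions:

```python
# Illustrative cooling-cost arithmetic following the slide's convention
# (cooling power consumption = server power x CoP), assuming 8760 h per
# year at 10 cents per kWh.
COP = 2.5

def cooling_cost(server_kw, price=0.10, hours=8760):
    return server_kw * COP * hours * price

print(cooling_cost(50))   # traditional: 125 kW  -> $109,500 annually
print(cooling_cost(17))   # Cisco UCS:   42.5 kW -> $37,230 annually
# annual savings: $72,270
```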


Example: Space
• Required support space is identical for both solutions
(benefits are visible in large data centers).
• Power and cooling restrictions
- 10kW per rack
• Traditional deployment
- 90 RUs space
- Maximum 14 servers per rack
- Approximately 8kW per rack
- Seven racks required
• Cisco UCS solution
- 26 RUs
• Four chassis = 24 RU
• Two UCS 6120XP = 2 RU
- Two racks required
- Approximately 8.5kW per rack
(Pie chart: electrical systems space, cabinet space, air conditioning systems space, fire
suppression systems space, administrative space, storage space, other)

Adequate space requirements should take into account all data center support functions:
• Electrical systems and cabling (power distribution, UPS, generators, and fuel)
• Air conditioning systems and piping
• Administrative, storage, and empty space
• Fire suppression systems
Space requirements should be designed and implemented for maximum future expected growth.
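The rack counts on the space example slide follow directly from the stated constraints, as this
illustrative check shows (the 6-RU-per-chassis and 1-RU-per-server figures are implied by the
24-RU and 90-RU totals quoted above):

```python
import math

# Illustrative check of the rack figures on the space example slide.
traditional_racks = math.ceil(90 / 14)  # 90 x 1RU servers, max 14 per rack
ucs_rus = 4 * 6 + 2 * 1                 # four chassis (6 RU each) + two 1RU UCS 6120XP
print(traditional_racks, ucs_rus)       # -> 7 racks, 26 RUs
```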

Evaluating Existing Deployment Migration
• Typical CoP = 2.5
• Existing deployment
- 857 servers
- 59 racks
- Power consumption = 355 kW
- Cooling power = 887.5 kW
• Cisco UCS solution
- 857 servers
- 36 racks
- Power consumption = 162 kW
- Cooling power = 405 kW
(Table, at a price per kWh of 10¢: Existing Deployment $1,088,430; Cisco UCS Solution
$496,692; Savings $591,738)

You can compare the power and cooling operational expenses for the existing deployment
migration. You know the original power consumption and the power consumption of the
proposed Cisco Unified Computing solution.
Using those numbers and assuming that the price per kWh is 10¢, you can quickly conclude
that the annual savings would be $591,738: the existing deployment draws 355 + 887.5 =
1242.5 kW, costing $1,088,430 per year over 8760 hours, while the Cisco UCS solution draws
162 + 405 = 567 kW, costing $496,692 per year.

Summary
This topic summarizes the key points that were discussed in this lesson.

Summary
• To determine the ROI, both initial and operating costs have to be
examined.
• The reduced operational costs with the Cisco Unified Computing
solution help reach ROI faster.
• Power-related costs include the air conditioning.
• The Cisco Unified Computing solution uses less space since
more equipment can be placed in the racks.

Module Summary
This topic summarizes the key points that were discussed in this module.

Module Summary
• Design success criteria evaluate the solution according to the
requirements.
• Technical goals are evaluated by comparing the number of
servers, adapters, switches, ports, and power drops for the
existing and the new solutions.
• To evaluate the ROI of both the traditional and the Cisco Unified
Computing solution, compare the power costs (for equipment and
cooling).

Module Self-Check
Use the questions here to review what you learned in this module. The correct answers and
solutions are found in the Module Self-Check Answer Key.
Q1) Why are design success criteria used? (Source: Understanding Design Success Criteria)
A) to evaluate the solution against requirements
B) to calculate CAPEX
C) to calculate OPEX
D) to determine ROI
Q2) Which factor is typically used to determine the amount of power consumed to cool the
equipment in a data center? (Source: Determining the Design ROI)
A) BTU
B) CoP
C) ROI
D) RU

Module Self-Check Answer Key
Q1) A
Q2) B

Appendix 1

Describing Cisco Unified


Computing Solution Services
Overview
This lesson identifies and describes the Cisco services available for Cisco Data Center Unified
Computing solution design, implementation, and maintenance.

Objectives
Upon completing this lesson, you will be able to meet the following objectives:

• Identify the Cisco Unified Computing services.

• Identify the Cisco Unified Computing workshops.

• Identify the Cisco Unified Computing Planning, Design, Implementation service.

• Identify the Cisco Unified Computing Support and Warranty service.

• Identify the Cisco Unified Computing Remote Management service.

• Identify the Cisco Data Center Optimization service.

• Identify the Cisco Security services.

• Identify the Cisco Data Center services.


Cisco Unified Computing Services Overview
This topic provides the Cisco Unified Computing Services overview.

Cisco Unified Computing Services



Many enterprise data centers remain complex and siloed environments in which servers and
storage equipment are vastly underutilized, cycle times for provisioning new applications are
long, and operating costs, especially power and cooling costs, consume an ever-greater
percentage of the IT budget. More and more organizations are turning to virtualization to
address these issues. Unfortunately, many virtualization efforts do not deliver the expected
results because they focus only on servers, failing to account for storage, network, and other
critical resources.
Cisco Unified Computing System was designed to unify network, compute, and virtualization
resources into a single, preintegrated platform. The system provides the foundation for a broad
spectrum of virtualization and performance optimization efforts. Whether you're seeking to
create a stateless computing environment, enable just-in-time provisioning of resources,
simplify the movement of virtual workloads, or just reduce equipment and operating costs, the
UCS provides a powerful solution. But to realize the full benefit of these innovations, you need
to implement them quickly and correctly, in accordance with proven methodologies and
industry best practices.
Cisco Unified Computing Services help accelerate the transition to a unified computing
architecture and sustain and optimize the performance of that architecture after it is deployed.
Providing a unique network-based perspective and a unified view of all data center resources
and interdependencies, Cisco can extend the benefits of virtualization projects beyond the
domain of servers alone and help tune the entire data center environment to meet financial and
technical objectives.

Cisco UC Services include the following:
• Use case and architecture workshops
• Planning, design, and implementation service
• Support and warranty service
• Remote management services

• Optimization services
• Security services

Cisco Unified Computing Service
Benefits
• Accelerate customer transition to unified computing architecture
• Benefits:
- Help achieve business and technical goals
- Mitigate virtualization risks
- Enhance the solution agility
- Reduce complexity
- Simplify management
- Optimize Cisco UCS uptime, performance, efficiency
- Improve application performance

Cisco Unified Computing Services combine Cisco's broad data center expertise with that of
Cisco's industry-leading partners to accelerate customer transition to a unified computing
architecture. These services help customers:
• Achieve business goals by aligning the data center strategy with unique business and
technical objectives
• Speed deployment and mitigate risks of virtualization and other data center projects by
applying best practices and methodologies
• Enhance agility by improving the mobility and manageability of traditional and virtual
workloads and enabling just-in-time provisioning of resources
• Lower operational costs by identifying opportunities to reduce data center complexity and
simplify management
• Optimize the uptime, performance, and efficiency of unified computing systems to help
maximize the value of their investment
• Provision new applications more quickly
• Strengthen in-house expertise through knowledge transfer and mentoring
• Improve application performance and availability to meet service-level agreements

Cisco Unified Computing Workshops
This topic describes the Cisco Unified Computing workshops.

Workshops Overview
• Understand unique project environment
• Isolate business objectives
• Map the technical objectives
• Gather input across organization
• Help understand project challenges
• Two workshops:
- Use Case Workshop
- Architecture Workshop

The first step in applying the benefits of unified computing to your business is isolating your
objectives. The Cisco Unified Computing System provides a powerful platform for a broad
range of projects, from stateless computing to virtualizing resources to improving the mobility
and manageability of virtual machines. But to fully realize your goals, you need a clear
understanding of how the solution will support your project in your unique environment.
Unified computing workshops help you to definitively map out the business and technical
objectives for your unified computing project. These workshops gather input from stakeholders
across your organization to understand your strengths and challenges and the issues that need to
be addressed to make your project goals a reality.

Use Case Workshop
• One-day interactive session:
- Define project objectives
- Align project with best practices
• Gain understanding of Cisco UCS usage
• Deliverables:
- Cisco UCS implementation requirements
- Next steps recommendation

This one-day interactive session is held at the customer's place of business and is designed to
clearly define project objectives and, when possible, align the customer's project goals with a
set of industry best practices for achieving them. Through this process, the customer gains a
clear conceptual understanding of how the Cisco Unified Computing System will be used to
meet stated goals. At the end of the workshop, the customer receives documentation of the
requirements for implementing unified computing in their environment as well as a set of
recommendations detailing next steps.

Architecture Workshop
• Four-day interactive session
- Define project objectives
- Map objectives with best practices
- Architecture considerations across
current and planned Data Center
resources
- Identify project challenges, success
factors, risks, requirements
• Deliverables
- Project requirements
- Next steps recommendation
- Recommended architectural design


This four-day workshop also begins by isolating the customer's objectives and mapping them
to a set of best practices for achieving them. It then continues with an in-depth documentation
of architecture considerations across the customer's existing and planned data center assets,
including blade servers, computer systems, applications, operating systems, virtualization,
input/output (I/O), and system management. The workshop documents the challenges, critical
success factors, and risks involved in the customer's project and the requirements the customer
will need to meet based on projected environment and user demands. At the end of the
engagement, the customer receives documented requirements and a set of recommendations for
next steps, as well as an architectural design that provides a blueprint of the recommended
architecture for the implementation.

Cisco Unified Computing Planning, Design,
Implementation Service
This topic describes the plan, design, and implementation service.

Plan, Design, Implement Service


Overview
• Identify planned project benefits
• Identify and plan steps to achieve business
and technical objectives
• Reduce implementation risks
• Service options:
- Preproduction pilot
- Server virtualization mobility and
management
- Accelerated deployment
- Migration plan and delivery
- Installation

The Cisco Unified Computing solution can deliver significant advantages for the customer's
business, but it can also introduce new challenges.
To realize the full benefits of the customer's planned project, the customer needs to account for
the entire data center environment, including servers, storage, network, security, and
applications.
The Cisco Unified Computing Planning, Design, and Implementation Service helps the
customer take the right steps to achieve business and technical objectives and helps to reduce
risk in the unified computing implementation.
Five service options are available:
• Preproduction pilot
• Server virtualization mobility and management
• Accelerated deployment
• Migration plan and delivery
• Installation

Preproduction Pilot
• Four-week pilot:
- Conducted in customer's environment
- Validates that the project delivers business and
technical objectives
- Gain expertise using Cisco UCS
- Develop deployment blueprint
• Deliverables:
- Validated business case
- Architectural design
- Detailed runbook


If the customer is still evaluating the Cisco Unified Computing System, a preproduction pilot
is the best place to start.
This four-week engagement conducts a pilot in the customer's environment to validate that the
project will deliver the business and technical objectives expected.
Through this pilot, the customer gains valuable expertise using the Cisco Unified Computing
System before committing to a purchase and develops a production-ready blueprint for
deploying in their environment.
At the end of the engagement, the customer receives documentation of their validated business
case, an architectural design, and a detailed runbook that the customer can use to bring the
solution into production.

Server Virtualization Mobility and
Management
• In-depth planning and design
- End-to-end data center virtualization
- Enhanced provisioning and management
of virtual machines
• Evaluation of current environment and
processes
• Identify virtualization objective
requirements
• Deliverables
- Implementation information
- Requirements
- Archttectural design

This service option provides in-depth planning and design for an end-to-end data center
virtualization strategy and for enhanced provisioning and management of virtual machines.
The service provides an exhaustive evaluation of the customer's current environment and
processes, and documents requirements across the existing and planned assets to achieve the
virtualization objectives.
At the end of the engagement, the customer receives all of the information needed to begin
implementing the virtualized data center, including detailed documentation of requirements and
an architectural design.
If the customer is implementing an end-to-end data center virtualization strategy and seeking to
provision and manage virtual machines more effectively, they can also extend the engagement
to include deployment.
This extension helps bring the virtualized environment online, including setup and implementation
of hypervisor technology. The service also provides a detailed runbook that the customer can then
use to scale the implementation throughout the production environment.

Accelerated Deployment
• For committed Cisco UCS implementations
• Four-week activity:
- Planning, design, implementation
- Bring project into production
- One data center segment
• Deliverables:
- Recommended next steps
- Architectural design
- Implementation scale runbook
• Transfer of knowledge

If the customer has already committed to implementing the Cisco Unified Computing System,
the accelerated deployment service option provides planning, design, and implementation
expertise to bring the project into production within four weeks in one segment of the data
center. At the end of the engagement, the customer receives documentation of the
recommended next steps, an architectural design, and a runbook to scale the implementation
throughout the environment. The customer also receives knowledge transfer through mentoring
to help scale the implementation.

Migration Plan and Delivery
• Transition from existing to a Cisco UCS-based architecture
- Expert support for transition
- Support for x86 and legacy platforms
• Deliverables
- Migration plan

This service option provides expert support to smooth the transition from existing server
platforms to a Cisco Unified Computing System architecture.
Migration support is available for both x86-based and legacy (non-x86) server platforms. Using
industry best practices, Cisco helps create and deliver a migration plan to speed migration and
mitigate risks.

Installation
• Help deploy Cisco Unified Computing solutions with minimal
disruption
• Expert installation of:
- Cisco UCS, network, storage networking devices, cabling
- Network and management software


This service option provides expert installation of all interconnected hardware in the customer's
data center and helps deploy unified computing in the customer's environment with minimal
stress or disruption to operations.
The service encompasses compute platforms, network and storage networking devices,
interconnects and cabling, and installation and configuration of related network and
management software.

Cisco Unified Computing Support and Warranty Service
This topic describes the Cisco support and warranty service.

Support and Warranty Service Overview


• Support from dedicated specialist:
- In-depth expertise in:
• Virtualized data centers
• Server hardware and software
• Cisco Unified Computing technology
• Help increase uptime
• Quick issue resolution
• Service options:
- Cisco Unified Computing Warranty Plus
- Cisco Unified Computing Support Service
- Cisco Unified Computing Mission-Critical Support Service

The more benefits the customer realizes from the Cisco Unified Computing System, the more
important the technology becomes to the business.
If an issue arises, the customer wants support from dedicated specialists with in-depth expertise
in virtualized data center environments, server hardware and software, and unified computing
technology.
Customers can be confident they are covered with Cisco Unified Computing Support and
Warranty Services.
Augmenting the Cisco Unified Computing System warranty, Cisco's award-winning support
services help increase uptime, quickly resolve issues, and get the most from the unified
computing investment.
Cisco Unified Computing Support and Warranty Services include:
• Cisco Unified Computing Warranty Plus
• Cisco Unified Computing Support Service
• Cisco Unified Computing Mission-Critical Support Service


Support and Warranty Service Options


• Cisco Unified Computing Warranty Plus:
- Faster than standard parts replacement
- Several levels of advanced parts replacement
- Remote access to Cisco support professionals
- Download software updates for Cisco UCS Manager
• Cisco Unified Computing Support Service:
- Expert support for Cisco UCS
- Proactive diagnostics, alerts
- Around-the-clock TAC access
- Flexible hardware replacement options
• Cisco Unified Computing Mission-Critical Support Service:
- Add-on to the Cisco Unified Computing Support Service
- Direct access to Cisco engineers
- Assigned single point of contact technical account manager

Cisco Unified Computing Warranty Plus


For faster parts replacement than is provided with the standard Cisco Unified Computing
System warranty, the customer can purchase the Cisco Unified Computing Warranty Plus. The
option offers several levels of advanced parts replacement coverage, including onsite parts
replacement in as little as two hours. The customer also gains remote access anytime to Cisco
support professionals who can determine if a return materials authorization (RMA) is required.
Plus, they can download software updates and upgrades for the Cisco UCS Manager (UCSM).

Cisco Unified Computing Support Service


This service provides expert support for the customer's entire Cisco Unified Computing
System, providing sustainable performance and availability in data center operations. The
customer gets proactive diagnostics, real-time alerts, and around-the-clock access to Cisco's
award-winning technical assistance center (TAC) from anywhere in the world. Cisco support
engineers have a wide range of industry certifications, including VMware, Red Hat, Novell,
and Microsoft. As a result, Cisco engineers help resolve identifiable and reproducible product
problems using established escalation management procedures to enlist specialized expertise
from Cisco and selected third parties where necessary.
The service also includes flexible hardware replacement options, ongoing updates of Cisco
software, and access to online technical resources to help maintain optimal efficiency and
uptime of the Cisco Unified Computing environment. If the customer purchases a server
operating system or virtualization software from Cisco, Cisco will provide 24-hour support for
this third-party software as well.

Cisco Unified Computing Mission-Critical Support Service
If the availability of the customer's unified computing environment is vital to the operation of
the business, the customer can choose the Cisco Unified Computing Mission-Critical Support
Service option. It includes everything in the Unified Computing Support Service plus direct
access to Cisco engineers who understand the environment and an assigned technical account
manager to provide a single point of contact for all support issues. The customer also has the
option of bringing a field engineer onsite to help proactively assure that the system operates
efficiently and to address situations that could impact system availability.

Cisco Unified Computing Remote Management
Service
This topic describes the remote management service.

Remote Management Service Overview


• Addresses data center challenges
• Complete monitoring and management:
- Industry proven practices
- Networking expertise
- Innovative tools
- Physical and logical monitoring and management
• Service options:
- Cisco Unified Computing Remote Management Services:
  • Standard RMS service
  • Elective Service
- Cisco Advanced Performance Monitoring Service

The benefits of data center consolidation, virtualization, and automation are substantial, but
transforming a data center environment also creates new challenges for IT organizations.
Planning and managing virtual environments, including staffing, application performance,
remote office support, and security requires new capabilities and expertise.
The Cisco Remote Management Services (RMS) and Cisco Advanced Performance Monitoring
Service enable the customer's organization to realize immediate benefits from the Cisco
Unified Computing investment by providing complete monitoring and management using
Cisco's industry-proven practices, networking expertise, and innovative tools.
Cisco Unified Computing Remote Management Services provide physical and logical
monitoring and management of all unified computing hardware and software elements. The
services are composed of flexible standard and elective elements that may be combined to
deliver a tailored solution to meet customer needs.

Remote Management Service Options
• Standard RMS service for Cisco UCS:
- Remote monitoring
- Incident management
- Service-level management
• Elective Service:
- Utilize Cisco experts for unified computing-related activities and changes
- Support change, release, configuration, patch management
• Cisco Advanced Performance Monitoring Service:
- Addition to management services
- Baseline and monitor business-critical application performance:
  • SLA application response time monitoring
  • Fault isolation
  • Reporting

Standard RMS service


Provides remote monitoring, incident management, problem management, and service-level
management for the Cisco Unified Computing System.
The service manages instances of operating systems (Microsoft Windows, Linux) as well as
virtual machine (VM) environments.

Elective Service
Provides the customer with access to Cisco engineers to support change, release, configuration,
and patch management.
Delivered as a usage-based block of monthly hours, this service enables the customer to utilize
Cisco expertise for customer-requested activities and changes to their unified computing
environment.

Cisco Advanced Performance Monitoring Service


Augments the remote management services with a means to baseline and monitor performance
of business-critical applications across the customer's network.
This service provides service-level agreement monitoring of application response time, fault
isolation, and reporting, and gives critical visibility into application packets as they flow
through the network to and from the data center.

Cisco Data Center Optimization Service
This topic describes the data center optimization service.

Cisco Data Center Optimization Service


Overview
• Address the adaptability of the solution
• Recurring, subscription-based service that improves:
- Operational efficiency
- Resource and application performance
• Data center aspects:
- Architecture and virtualized environment
- Application deployment, delivery, network performance
- Cisco UCS and server virtualization
- Unified fabric

Business is dynamic and continually evolving. To meet users' ever-changing demands for
responsiveness, application performance, and efficiency, the data center and unified computing
architecture must be able to adapt and evolve as well.
The Cisco Data Center Optimization Service provides a suite of recurring, subscription-based
service options to help the customer continually optimize operational efficiency, achieve peak
performance of data center resources and applications, and apply industry best practices to the
operation of the environment.
This service covers all aspects of the data center, including:
• End-to-end architecture and virtualized environment
• Application deployment and delivery
• Application network performance
• Unified computing systems and server virtualization
• Storage area networking
• Unified fabric

Cisco Unified Computing Optimization
Service Options
• Architecture review:
- Ensure that Cisco UCS meets requirements
- Analysis of primary performance metrics
• Configuration audit:
- Comprehensive data capture
- Review current Cisco UCS configuration parameters
- Provide best practices and recommendations to improve efficiency
• Capacity and performance audit:
- Comprehensive capacity and performance review
- Based on primary performance data
• Security audit:
- Comprehensive systems health assessment
- Full system security audit

Drawing on Cisco's broad experience optimizing virtualized environments, both internally and
for Cisco's largest and most successful customers, this service helps the customer to
continually improve the performance and availability of the data center resources, reduce risk,
and achieve operational excellence.
To help the customer maintain optimal performance of the unified computing systems, this
service includes four unified computing options:
• Architecture review: Helps ensure that the Cisco Unified Computing Systems are
continuing to meet the business requirements. The review includes an analysis of primary
performance metrics.
• Configuration audit: Provides a comprehensive data capture and review of the current
configuration parameters of the Cisco Unified Computing Systems. This audit provides an
IT organization with best practices and recommendations to improve operational efficiency
and resource utilization.
• Capacity and performance audit: Provides a comprehensive capacity and performance
review of the customer system by examining primary performance data.
• Security audit: Comprehensive assessment of the health of the customer's system by
examining security parameters. It includes a full audit of system security management
reports and documents any identified exception cases, alarm patterns, and policy violations
so an IT organization can maximize the security of the server and computing infrastructure.

Cisco Security Services
This topic describes Cisco security services.

Security Service Overview


• Address the security aspect of data center virtualization:
- Potential attack vectors
- Misconfigurations
- Errors
• Help protect operations
• Based on industry security best practices for virtualized environments

Virtualization introduces profound changes in servers and data centers. Although these changes
can be hugely beneficial, they might also present new potential attack vectors for criminals with
malicious intent, as well as new opportunities for misconfiguration or error.
Cisco Security Services protect customers' businesses by helping assure them that the
virtualization strategy is based on industry best practices for security, as well as years of
experience from Cisco and its trusted data center partners in critical data center environments.

Cisco Data Center Services
This topic describes the data center services.

Data Center Services Overview


Strategic IT and Architecture:
• End-to-end architecture
• Consolidation and virtualization
• IT operations process
• Business process to infrastructure dependencies and mapping
• Business case/metrics (ROI)

Management and Operations:
• Rapid problem resolution
• Proactive monitoring
• High availability and operations
• Provisioning

IT Planning and Deployment:
• Consolidation and virtualization
• Security integration
• Business continuance planning
• Data center technologies
• Migration to unified fabric, unified computing, virtualized architecture

Optimization:
• Evolve data center
• Maintain optimal performance
• Increase expertise

Efficiency and Facilities:
• Greener IT strategy
• Meet energy requirements
• Benchmark and increase power and cooling efficiency
• Facilities design and build out

Cisco Data Center Services helps the customer consolidate, virtualize, and automate the data
center to meet business goals, increase efficiency, and lower operating expenses.
Even if unified computing is central to the customer's virtualization strategy, the data center
encompasses a broad range of other critical infrastructure and processes beyond the Cisco
Unified Computing System.
Whether the customer wants services for the entire data center or for specific data center
technologies, Cisco offers a broad array of data center services to meet the customer's needs, including:
• Strategic IT and architecture services to help prepare a strategy for the data center initiative
• IT planning, design, and implementation services to help the customer execute the strategy
• Technical support and operations management services to help maintain the health of your
data center
• Optimization services to help maintain a high level of performance as the data center
evolves
• Efficiency and facilities services to help adopt a greener IT strategy and design and build
out efficient data center facilities
Cisco's unique architectural approach provides a unified view of data center resources,
empowering the customer to tune the data center environment for optimal application
performance and availability.

Summary
This topic summarizes the key points that were discussed in this lesson.

Summary
• Workshops help understanding of the project environment in regard to business and
technical objectives.
• Plan, design, implementation service has multiple options that address the complete
business lifecycle.
• Support and warranty services help customers to increase uptime and speed issue
resolution.
• Remote management services provide complete solution monitoring and management.

Summary (Cont.)
• Data Center Optimization Service continuously addresses the need to adapt the solution.
• Security service addresses the security aspects of the solution.
• Data Center services encompass complete data center consolidation, virtualization, and
automation.

Appendix 2

Understanding Policy
Retention
Overview
This lesson identifies the policy retention concept and describes how the Cisco Data Center
Unified Computing solution addresses this.

Objectives
Upon completing this lesson, you will be able to meet the following objectives:
• Identify and describe the benefits of policy retention.
• Identify and describe policy retention within the Cisco Unified Computing solution.
Identifying Policy Retention
This topic describes the policy retention concepts.

What Is Policy Retention?


• Physical server - ability to apply and preserve server personality:
- MAC, WWN, UUID addresses
- Firmware
• Virtual machine - ability to apply and preserve policy per VM:
- Network level - security, QoS, port settings, etc.
- Storage level - LUN access, zone membership, etc.
- Server level - accountability, management, profile, etc.

Server Name: web-server-01
UUID: 56 4d cd 31 59 5b 61 ...
MAC: 02:00:69:02:01:FC
WWN: 20B0020000075740
Boot Order: SAN, LAN
Firmware: xx.yy.zz

Typically the network and storage policies are applied to the Ethernet or Fibre Channel switch
port. This is not very effective since they are applied to all the VMs that reside on the
physical server; there is no differentiation between the VMs.
The policy retention capabilities are important for the Cisco Unified Computing solution since
they reflect the required management overhead and govern the granularity of policy that can be
applied per server. The capabilities can be observed from two perspectives: either the virtual
machine or the bare-metal server deployment.
From the virtual machine perspective, policy retention is the ability to preserve the network
policy (which encompasses security, QoS, and port setting configuration), the storage policy (which
encompasses zone membership, LUN access, and WWN and FCID assignment), and the server
level management, accountability, etc. per individual virtual machine.
From the bare-metal server perspective, policy retention means that the server personality that is
defined with server identifiers like WWN, MAC address(es), firmware, BIOS, UUID, etc. is
preserved for the server.

Policy Retention Challenges
• Physical server replacement/migration - reconfiguration and
installation required
• VM challenges imposed by:
- Virtualization - policies typically enforced per physical
server/port (Ethernet or Fibre Channel)
- Mobility - impossible to enforce the policies upon virtual
machine motion
- Transparency - difficult to correlate network and storage
resources to virtual machines
- VMware vMotion moves VMs across ESX hosts

There are multiple challenges that affect the policies and characteristics of virtual machines and
bare-metal server deployments.
From the virtual machine perspective, the challenges are imposed by the nature of server
virtualization itself: the virtual machine is decoupled from the underlying physical hardware and
is able to move between the ESX hosts, which incurs the following challenges:
• Policies are typically enforced per physical server/port (either Ethernet or Fibre Channel) of
the underlying physical server; virtualization breaks the normal server-port relationship.
• When the virtual machine is in motion, it is hard to enforce any policies deployed.
• Since the virtual machines are decoupled from the underlying physical infrastructure, it is
hard to correlate network and storage resources to a virtual machine.
For example, the attempt to specify a policy bound to the VM source MAC address is not
effective since the MAC addresses are assigned by the ESX hypervisor and can change over
time; that is, even if the VM does not move to another ESX host.
If the VM moves to another ESX host, the policy applied to a port is lost. A solution might be to
apply the same policy for that specific VM MAC address to all the ports where ESX hosts are
attached. It is obvious that such a solution is not scalable and could present management
difficulties.
From the bare-metal server deployment perspective, the main challenge comes from the
requirement to replace a faulty server or to migrate to a new server. Normally such a migration
requires the same amount of configuration and installation as with the former server.
Of course the computing solution can exist with no policy retention capabilities, which in
essence means that more administration and management work is required for solution
operation and maintenance.



Policy Retention Implications
• Global scalability:
- Depends on transparency maintenance
- Must provide operational consistency
• Server should be:
- Independent (decoupled) from physical hardware
- Able to preserve personality upon migration/replacement
- Able to use remote boot for installation transparency
• Virtual machine should be:
- Aware of network services
- Aware of storage services
- Decoupled from the physical server infrastructure

The key benefit of using policy retention capabilities is that one can more easily scale a
computing solution while still preserving transparent solution maintenance and ensuring
operational consistency.
For the virtual machine, policy retention means that the configuration is applied and bound to
the virtual machine and not to the underlying physical server. The virtual machine gets its own
identifiers just as the physical server does. If the VM moves, the policy follows without any
extra configuration necessary.
For bare-metal server deployment, the preservation of server personality means that when the
underlying physical server is replaced or changed, the new one is assigned the saved server
personality; thus no installation and configuration is required. Be aware that in order to really
benefit from policy retention, the bare-metal server deployment should utilize SAN or LAN
boot, with which the operating system and application installation is decoupled from the local
server storage (the disk drive).

Describing Cisco Data Center Unified Computing
Solution Policy Retention
This topic describes how policy retention is handled by the Cisco Data Center Unified
Computing solution.

Cisco Policy Retention Solutions


• Help preserve policies at different aspects
• Products:
- Cisco Unified Computing System (UCS)
- Cisco Nexus 1000V
- Cisco MDS 9000 series Fibre Channel switches
• Technologies:
- Cisco UCS service profiles
- Port profiles
- NPIV functionality

The Cisco Unified Computing Solution can preserve the policy at different levels and aspects
for different policies, whether a network, storage, or even bare-metal server personality.
To deploy and utilize policy retention for the solution, the Cisco UCS, Cisco Nexus 1000V,
and Cisco MDS 9000 family switches are used with mechanisms like UCS service
profiles, port profiles, and NPIV functionality.



Virtual Machine Network Connectivity
• VM level access layer connectivity granularity
• Cisco Nexus 1000V - distributed software switch for VMware vSphere
• Ethernet network connectivity to the VM:
- Extends Ethernet network to VM
- Provides mobility of network and security properties

(Figure: the network visibility boundary extends down to the individual VM)

When virtual machines are deployed on an ESX host, by default a native virtual switch is used
to provide connectivity to the outside world. The configuration and feature options on the
native virtual switch are limited. By default it does not enable the policy retention capability.
From VMware version 4.0 onwards, the ESX hypervisor supports a distributed switch
architecture with which a common switching infrastructure is deployed. Cisco Nexus 1000V
with VN-Link functionality is currently the only distributed switch option that completely
addresses policy retention with per-VM granular policy and management.
With the Cisco Nexus 1000V, the administrator gets a single point from which the policies are
configured, and the applied policies follow the VM upon migration to another ESX host
without the need to pre-provision any network parameters on each ESX host where the VM
could move. Now the process of adding a new ESX host to a cluster and enabling it for VM
vMotion is much easier.
Furthermore, the policies follow the VM upon migration, and the port statistics also follow the
virtual machine, which makes network monitoring more precise and debugging or
troubleshooting easier and more VM-aware.
From the perspective of the Cisco Unified Computing Solution, the virtual distributed
switch, the Cisco Nexus 1000V, becomes the virtual access layer of the network.
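As an illustrative sketch, a port profile on the Cisco Nexus 1000V is defined once on the
Virtual Supervisor Module; the profile name and VLAN number below are hypothetical, not
taken from this course:

    port-profile type vethernet WebServers
      ! Layer 2 policy that follows every VM attached to this profile
      switchport mode access
      switchport access vlan 100
      no shutdown
      ! Publish the profile so vCenter can consume it as a port group
      state enabled

Once enabled, the profile appears in VMware vCenter as a port group; when a VM attached to
it migrates to another ESX host, the policy and the port statistics move with it.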

Virtual Machine SAN Connectivity
NPIV for virtual machine level SAN segmentation:
• WWN assignment per VM
• Per-VM administration
• Per-VM access control and LUN masking
• Per-VM traffic management
• Each VM has its own virtual HBA N_port
• Single physical Fibre Channel link

(Figure: two VMs, each with its own virtual HBA N_port, share a single physical Fibre Channel
link to an NPIV-enabled switch: Cisco MDS 9124, MDS 9134, MDS 9222i, MDS 9500, or
Nexus 5000)

Virtual machines typically exist in a couple of files, with the VMDK being the one that holds
the actual virtual machine image (you could also call it a disk). Such a file resides on a disk
array LUN that is deployed with the VMFS file system used to address concurrent VM image
access by different ESX hosts. Sometimes the virtual machine requires access to an individual
LUN; this might be necessary when a clustering solution is deployed. In such cases, per-VM
granularity for the storage policy is required. The per-VM storage granularity enables the
assignment of a WWN and FCID to an individual VM, which in fact enables deployment of
access control (zoning), LUN masking, and traffic management.
The granularity is achieved by enabling the NPIV functionality; with this turned on, the F_port
on the MDS switch (where the server is connected) accepts multiple WWNs.
The functionality can be deployed on Cisco MDS 9000 family switches and Cisco Nexus 5000
(the Fibre Channel functionality of the switch).
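As a minimal sketch, NPIV is switched on globally on the Fibre Channel switch; the command
below applies to recent NX-OS releases (older SAN-OS releases use a slightly different form),
and the output line is illustrative:

    switch# configure terminal
    switch(config)# feature npiv
    switch(config)# end
    switch# show npiv status
    NPIV is enabled

With NPIV active, each virtual HBA logs in with its own WWN, so zoning and LUN masking
can be defined per VM rather than per physical server.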



Logical Server with Cisco UCS
• Stateless server based on a service profile:
- Decouples server identity from the physical blade
- Remote boot required for total hardware independency
• Service profile:
- Stores server identity
- Associated with one blade in Cisco UCS

Cisco UCS

For server deployment, the Cisco UCS brings the benefit of retaining the server identity; i.e., the
server becomes stateless by storing identity information such as MAC address(es), NIC
firmware and settings, WWN address(es), HBA firmware and settings, UUID, BIOS firmware
and settings, boot order, drive controller firmware, and disk drive firmware in a Cisco UCS
service profile.
Combining the service profile with the remote boot options (SAN or LAN) really makes the
server stateless. In other words, it decouples the server personality from the underlying physical
hardware.
With the Cisco Unified Computing solution, repurposing, replacing, or migrating a server
becomes a quick and easy task: no configuration is required; no operating system or application
installation is required.
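Conceptually, a service profile is a named container of identity values; the sketch below reuses
the illustrative identifiers from the earlier figure (the actual representation in Cisco UCS
Manager differs):

    service-profile web-server-01
      uuid:       56 4d cd 31 59 5b 61 ...
      mac:        02:00:69:02:01:FC
      wwn:        20B0020000075740
      boot-order: SAN, LAN
      firmware:   xx.yy.zz

Associating this profile with a different blade moves the identity, boot order, and firmware
policy with it, so a replacement blade boots the same SAN image without reinstallation.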

Summary
This topic summarizes the key points that were discussed in this lesson.

Summary
• Policy retention enables IT professionals to apply and preserve
network, storage, and server level policies per virtual machine.
• Policy retention enables IT professionals to preserve server
personality upon replacement/migration.
• Virtualization, mobility, and transparency influence policy retention
operation.
• Cisco Nexus 1000V is used to achieve VM network connectivity
granularity.
• Cisco UCS service profiles enable server personality
preservation.

Summary (Cont.)
• Per-VM SAN connectivity is deployed using the NPIV feature.
• Cisco UCS uses service profiles to abstract the server personality
from the physical blade.



Appendix 3

Addressing Environmental Aspects
Overview
This lesson identifies general environmental aspects that are observed for the Cisco Data Center
Unified Computing solution.

Objectives
Upon completing this lesson, you will be able to identify and describe the general
environmental aspects observed for the Cisco DCUC solution. This ability includes being able
to meet these objectives:
• Identify general environmental aspects for the Cisco Unified Computing solution.
• Review the equipment environmental properties for the Cisco Unified Computing solution.
Environmental Aspects Overview
This topic identifies general environmental aspects of the Cisco Data Center Unified Computing solution.

Addressing Environmental Aspects


• Evaluate the environmental requirements of the design:
- Determine the operating parameters for the equipment
- Determine the environmental conditions of the facility
• Determine installation requirements
• Define site preparation and documentation:
- Site requirements specification
- Site survey forms

Environmental aspects need to be observed in the design to properly plan the facilities,
including space, floor loading, power, cooling, racking, cabling, delivery, and storage for the
installation of the Cisco Unified Computing solution components.
Addressing environmental aspects consists of two steps:
• First, evaluate the environmental requirements of the design. This means that based on the
solution design you need to determine the operating parameters of the equipment that will
be used for the solution implementation.
• Second, when the Cisco Unified Computing solution is deployed in an existing facility, the
environmental parameters of that facility need to be assessed.
Once that information is gathered, the site preparation documentation can be created. This
documentation includes:
• A Site Requirements Specification (SRS) document where the equipment operating
parameters are recorded
• A Site Survey Form (SSF), which is used upon facility environmental conditions inspection

Evaluate Design Environmental
Requirements
• Determine the operating parameters for the equipment:
- Temperature
- Humidity
- Altitude
- Grounding requirements
- Dimensions and required space
- Required power, power cables, and voltage
- Airflow and heat dissipation
- Weight
- Cabling and interface requirements
• Determine the existing environmental conditions of the facility:
- Cabinet and rack specification and availability
- Maximum floor loading and weight distribution
- Temperature and humidity levels
- Available power, power distribution, UPS availability

Environmental factors can adversely affect the performance and life span of equipment.
Sensitive equipment typically requires a dry, clean, well-ventilated, and air-conditioned
environment. To ensure normal operation, an ambient airflow must be maintained. If the
airflow is blocked or restricted, or if the intake air is too warm, an overtemperature condition
can occur which may result in a failure.

Temperature
Temperature extremes can cause the equipment to operate at reduced efficiency and can cause a
variety of problems, including early degradation, failure of chips, and failure of equipment. To
control the equipment temperature, you must make sure that the equipment has adequate
airflow.

Humidity
High humidity can cause moisture to seep into the equipment. Moisture can cause corrosion of
internal components and degradation of electrical resistance, thermal conductivity, and physical
strength. Buildings in which the climate is controlled by air conditioning in the warmer months
and by heat during the colder months usually maintain an acceptable level of humidity for the
equipment.

Altitude
If you operate equipment at a high altitude (low pressure), the efficiency of forced convection
cooling is reduced and can result in electrical problems. This condition can also cause sealed
components with internal pressure, such as electrolytic capacitors, to fail or to perform at a
reduced efficiency.



Grounding
Equipment is sensitive to variations in voltage supplied by the AC-power source. Overvoltage,
undervoltage, and transients (or spikes) can erase data from the memory or cause components
to fail. To protect against these types of problems, you should always make sure that the racks
that hold the equipment are grounded.

Power
You should use dedicated power circuits (rather than sharing circuits with other heavy electrical
equipment). For input-source redundancy, it is recommended that you use two dedicated AC-power
sources, each of which powers half of the power supply units in the devices.

Air Flow
If your site has hot and cold aisles, align the rack air intake to a cold aisle and exhaust to a hot
aisle. Also, make sure that you do not install the equipment so that it takes in exhaust air flow
from other equipment.

Weight and Floor Loading


First, for stability and safety, it is always best to place heavier equipment below lighter
equipment in the racks. Second, you should verify how much weight the cabinet racks can carry
and how much weight the floor can support.

Define Site Preparation
• Based on:
- Design required parameters
- Environmental conditions


Site Requirements Specification


The SRS document defines environmental conditions and parameters, which must be met by
the facility before the installation. The document typically consists of:

• Site details

• Environmental considerations

• Electrical considerations

• Cooling considerations

• Cabling

• Equipment requirements

• Cabinet and rack requirements

Site Survey Form


The SSF document is used to do the site survey. The site survey gathers the information about
the facility and validates (or invalidates) the facility. The document typically consists of:

• Contact and site information

• Environmental parameters

• Power

• LAN connectivity

• SAN connectivity

• Site details

• Environmental considerations

• Electrical considerations



Equipment Environmental Properties
This topic reviews the Cisco Unified Computing solution equipment environmental properties.

Cisco Unified Computing System Solution Properties
• Cisco Unified Computing System environmental properties:
- Cisco UCS 6100XP environmental properties
- Cisco UCS SFP+ options
- Cisco UCS 5108 environmental properties:
  • Includes the chassis, I/O module, and blade server properties
• LAN equipment environmental properties:
- Cisco Nexus 7000 environmental properties
- Cisco Nexus 7000 SFP+ options
- Cisco Catalyst 6500 environmental properties
- Cisco Catalyst 6500 SFP options
• SAN equipment environmental properties:
- Cisco MDS 9000 environmental properties
- Cisco MDS 9000 SFP options

Cisco Data Center Unified Computing solutions encompass network, storage, and computing
equipment. Thus, when assessing the operational requirements and evaluating the environmental
conditions, these parameters must be checked for all equipment used in the solution.

Cisco UCS 6100XP Environmental Properties

                         Cisco UCS 6120XP          Cisco UCS 6140XP
Typical operating power  350W                      480W
Maximum power            550W                      750W
Input voltage            100 to 240 VAC            100 to 240 VAC
PSU efficiency           88 to 90%                 82 to 88%
Heat dissipation         1536 BTU/h                2561 BTU/h
Size (h x w x d)         4.4 x 43.9 x 76.2 cm      8.8 x 43.9 x 76.2 cm
Operating temperature    0 to 40 °C                0 to 40 °C
Humidity                 5 to 95% noncondensing    5 to 95% noncondensing
Altitude                 0 to 3000 m               0 to 3000 m
Weight                   15.88 kg*                 22.68 kg**

* Chassis with two power supplies, one expansion module, two fan modules
** Chassis with two power supplies, two expansion modules, five fan modules

The table above summarizes the environmental properties for the Cisco UCS Fabric
Interconnects. The difference between the Cisco UCS 6100 series switches is in weight, size,
and cooling requirements.
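Heat dissipation can be sanity-checked against electrical power with the standard conversion
1 W = 3.412 BTU/h; for example, 750 W x 3.412 gives approximately 2559 BTU/h, which lines
up with the 2561 BTU/h listed for the larger fabric interconnect at its maximum power draw.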



Cisco UCS SFP+ Options
• Hot swappable
• Optical interoperability with 10GBASE XENPAK, X2, and XFP interfaces on the same link


One of the parameters that needs to be observed is cabling. Cabling for the Cisco Unified
Computing System is governed by the type of SFP+ that is put into the fabric interconnects and
I/O modules.
For the Cisco UCS I/O module to fabric interconnect connectivity, typically copper-based
SFP+ modules are used, whereas for the uplink connectivity a short-range SFP+ is chosen.

Cisco UCS 5108 Environmental Properties

Maximum power                  Up to 4 x 2500W power supplies
PSU efficiency                 92%
Size (h x w x d)               26.7 x 44.5 x 81.2 cm
Operating temperature          10 to 35 °C
Humidity                       5 to 93% noncondensing
Altitude                       0 to 3000 m
Backplane                      1.2 Tbps aggregate throughput
Chassis heat dissipation       1364 BTU/h
Server blade heat dissipation  1347 BTU/h

Component quantities and weights (where recoverable): B200-M1 server blade, 1 to 8 per
chassis; I/O module, 1 or 2 per chassis, 1.1 kg; fan module, 0.8 kg; hard disk drive, 2 per
server, 0.4 kg; power supply, 1 to 4 per chassis; power distribution unit.

Since the Cisco UCS consists of chassis and server blades, the equipment has its own
environmental requirements. The tables above summarize the environmental parameters for
that part of the equipment.
If the parameters for different equipment do not overlap, use the least common denominator to
stay within the boundaries that are appropriate for all equipment.
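As a worked example of taking the least common denominator, combining the Cisco UCS
6100XP fabric interconnect (0 to 40 °C, 5 to 95% humidity) with the Cisco UCS 5108 chassis
(10 to 35 °C, 5 to 93% humidity) gives a common operating window of 10 to 35 °C and 5 to
93% noncondensing relative humidity for the installation as a whole.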



Cisco Nexus 7000 Environmental Properties

                  10-slot chassis                  18-slot chassis
Input voltage     110 or 220V                      110 or 220V
PSU               Depends on the power scheme      Depends on the power scheme
Heat dissipation  Max. 33780 BTU/h per chassis     Max. 56300 BTU/h per chassis
Rack units        21RU (two chassis in 42RU rack)  25RU

The network part of the Cisco Unified Computing solution would typically comprise Cisco
Nexus 7000 or Cisco Catalyst 6500 switches, depending on the required functionality and number of
10 Gigabit Ethernet interfaces.
The table above summarizes the environmental parameters for the 10-slot and 18-slot Cisco Nexus
7000 chassis. The weight and required power parameters depend on the exact hardware setup,
i.e., the number of modules that are put into the chassis.

Cisco Nexus 7000 Power Requirements
• Power redundancy modes:
- Combined - no redundancy
- N+1 redundancy - guard against one PSU failure
- Grid redundancy - guard against one input circuit failure
- Full redundancy - guard against power supply or grid failure

Component              Maximum power (W)  Typical power (W)
Supervisor module      210                190
48-port 10/100/1000    400                358
32-port 10GE           750                611
48-port GE module      400                358
10-slot fabric module  60                 55
18-slot fabric module  100                90
10-slot fan tray       1680               300
18-slot fan tray       1273               569

The power scheme of the Cisco Nexus 7000 depends on the setup, and it can protect against a
single power supply failure or against input grid failure. The best option is to protect from both.
The amount of power required by the Cisco Nexus 7000 can be calculated by summing the
individual component requirements. An even better option is to use the power calculator
available on the Cisco Connection Online website at http://tools.cisco.com/cpc/.
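As a worked example using the maximum figures from the table above, a hypothetical 10-slot
configuration with two supervisor modules, three 32-port 10GE modules, five fabric modules,
and one fan tray would be budgeted as follows:

    2 x supervisor module     2 x 210 W  =  420 W
    3 x 32-port 10GE module   3 x 750 W  = 2250 W
    5 x fabric module         5 x  60 W  =  300 W
    1 x fan tray              1 x 1680 W = 1680 W
    Total (maximum)                        4650 W

The power supplies, and the redundancy mode chosen from the previous figure, should then be
sized against this total.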



Cisco Catalyst 6500 Environmental Properties

                  6-slot chassis          9-slot chassis
PSU               Depends on the power scheme and PSU used (up to 8750W per PSU)
Rack units        12RU                    15RU
Size (h x w x d)  48.8 x 44.5 x 46.0 cm   62.2 x 44.5 x 46.0 cm

The table above summarizes the environmental requirements for the 6-slot and 9-slot Cisco Catalyst
6500 chassis. There are also 3-, 4-, 9- (with vertical alignment), and 13-slot chassis with their own
environmental requirements. The differences are weight, size, and required power.

Cisco MDS 9000 Environmental Properties

                       Cisco MDS 9222i             Cisco MDS 9509
Input voltage          110 to 240 VAC              110 to 240 VAC
PSU                    845W                        1400W @ 110V, 3000W @ 220V
Rack units             3RU                         14RU
Size (h x w x d)       13.34 x 43.99 x 57.56 cm    62.3 x 44.1 x 46.8 cm
Operating temperature  0 to 40 °C                  0 to 40 °C
Humidity               10 to 90% noncondensing     10 to 90% noncondensing
Altitude               -60 to 2000 m               -60 to 2000 m
Weight                 Chassis only: 25 kg         Fully configured: 78 kg
                       Fully configured: 28.2 kg
Airflow                Side to side                Side to side

A complete Cisco Unified Computing solution requires storage network connectivity. To
connect the Cisco UCS to storage devices, Cisco MDS Fibre Channel switches can be used.
Cisco MDS switches exist in various types, from Cisco MDS 9500 director-class switches with
6-, 9-, or 13-slot chassis, to enterprise fabric Cisco MDS 9222i switches, to small Cisco MDS
9124 and Cisco MDS 9134 fabric switches. The table above summarizes the environmental
parameters for the Cisco MDS 9222i and Cisco MDS 9509 Fibre Channel switches.



Summary
This topic summarizes the key points that were discussed in this lesson.


Summary
• The Cisco UCS sizing process includes server blade, blade chassis, and fabric
interconnect design.
• A Cisco UCS Bill of Materials is created with the NetformX DesignXpert tool.
• Sizing Cisco UCS for a new implementation is mainly governed by the server
requirements.
• Migrating an existing solution to Cisco UCS requires detailed analysis of the
current solution.
• Designing a service provider multitenant solution requires defining the building
blocks (blades, chassis, Cisco UCS system) to ease maintenance and scaling.