Student Guide
Cisco Data Center Unified Computing Design (DCUCD) v3.0 © 2009 Cisco Systems, Inc.
Appendix 1: Describing Cisco Unified Computing Solution Services A1-1
Overview A1-1
Objectives A1-1
Cisco Unified Computing Services Overview A1-2
Cisco Unified Computing Workshops A1-5
Cisco Unified Computing Planning, Design, Implementation Service A1-8
Cisco Unified Computing Support and Warranty Service A1-14
Cisco Unified Computing Remote Management Service A1-17
Cisco Data Center Optimization Service A1-19
Cisco Security Services A1-21
Cisco Data Center Services A1-22
Summary A1-23
Module 7
Understanding Existing
Computing Solutions
Overview
This module describes existing computing solutions and their historical performance characteristics,
introduces reconnaissance and analysis tools, and describes the aspects of a migration plan.
Objectives
Upon completing this module, you will be able to identify existing computing solutions and
historical performance characteristics and describe migration plan aspects. This includes the
ability to meet these objectives:
• Understand the historical performance characteristics.
• Identify the reconnaissance and analysis tools.
• Determine the aspects affecting the migration plan.
Lesson 1
Understanding Historical
Performance Characteristics
Overview
This lesson identifies, lists, and describes the important historical performance characteristics
(CPU load, memory usage, I/O usage for network and storage connectivity and interfaces,
application performance, storage space requirements, and so on) that need to be examined
to gather and assess relevant data.
Objectives
Upon completing this lesson, you will be able to identify and describe the historical
performance characteristics to be examined for migration. This ability includes being able to
meet these objectives:
• Identify the computing solution infrastructure historical performance characteristics.
• Identify the server historical performance characteristics.
Identifying Infrastructure Historical Performance
Characteristics
This topic identifies and describes the infrastructure historical performance characteristics.
Storage Performance Characteristics
• Storage device performance
- Device connection throughput
- Read/write operation speed
- LUN space expansion
- Time to rebuild a lost disk
- Cache size and access time
• SAN performance
- CPU load
- Memory usage
- Number of packets per second processed
- Amount of traffic sent via links
Server CPU
• CPU characteristics
- Speed
- Number of cores
- Number of concurrent threads
Server CPU characteristics are defined by CPU speed, number of cores, and concurrent threads.
Keep in mind that even if the CPU offers advanced functionality, the operating system might
not be capable of fully using this functionality; thus, performance is the least common
denominator between CPU characteristics and operating system capabilities.
Server Memory
• Key memory characteristics:
- Size
- Access speed
• Memory performance affected by:
- Maximum addressable memory by operating system
- Internal server bus architecture
Server memory characteristics are limited by the maximum amount of memory that can be
installed and the memory access speed.
As with the CPU, the capability of the operating system might be the limiting factor for
memory usage.
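The least-common-denominator point made for CPU and memory can be sketched as a trivial calculation; the 32-bit figure below is a hypothetical illustration, not a statement about any specific operating system:

```python
def effective_memory_gb(installed_gb, os_addressable_gb):
    """Usable memory is the lesser of what is installed in the server
    and what the operating system can actually address."""
    return min(installed_gb, os_addressable_gb)

# Example: 8 GB installed, but a hypothetical OS limited to 4 GB
usable = effective_memory_gb(8, 4)
```

The same min() logic applies to CPU features: actual performance is bounded by whichever of the hardware and the operating system is the more limited.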
Server I/O
Server I/O characteristics are defined by the LAN and SAN connectivity.
Separate adapters can be used for LAN and SAN connectivity with different raw speeds.
The I/O subsystem throughput may be affected by the CPU performance in cases where the
CPU must manipulate packets received or sent - for example, an Internet Small Computer
Systems Interface (iSCSI) deployment without a TCP Offload Engine (TOE), or a Fibre Channel over
Ethernet (FCoE) deployment with software-based FCoE functionality.
Multiple adapters are typically used to scale performance and to achieve redundancy .
Server Storage
• Located on:
- Local disk
- Remote storage device (disk array) - more frequently used
• Space is limited by the storage device.
• Performance is affected by the disk subsystem:
- Read/write operation speed
- SAN connection throughput
- Time required to expand a volume
- Time required to rebuild a lost disk
- Cache size and access time
Application performance characteristics are affected by all the server components plus the
application performance itself. Applications can be written in a nonoptimal way, resulting in a
slow response - even if the underlying server performance is sufficient.
Server Performance Characteristics
• Raw server performance is affected by server components.
• Actual server performance depends on:
- Average and maximum CPU load
- Average and peak memory usage
- LAN throughput
- SAN I/O throughput
- Application characteristics
- Storage space requirements
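The average and peak figures listed above are typically derived from sampled utilization data. A minimal sketch, assuming a list of percentage samples collected over time:

```python
def summarize_utilization(samples):
    """Return min, max, and average utilization from a list of % samples."""
    if not samples:
        raise ValueError("no samples collected")
    return {
        "min": min(samples),
        "max": max(samples),  # the peak value drives capacity planning
        "avg": sum(samples) / len(samples),
    }

# Hypothetical hourly CPU-load samples (percent) for one server
cpu_samples = [12.0, 35.5, 80.0, 64.5, 20.0]
stats = summarize_utilization(cpu_samples)
```

The same summary can be produced for memory, LAN, and SAN I/O samples; the peak rather than the average is what must fit on the target hardware.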
Summary
• Infrastructure performance is affected by throughput and
reliability.
Lesson 2
Understanding Reconnaissance
and Analysis Tools
Objectives
Upon completing this lesson, you will be able to identify and describe available reconnaissance
and analysis tools. This ability includes being able to meet these objectives:
• Describe the general functions of reconnaissance and analysis tools.
• Identify the reconnaissance and analysis tools used to gather and analyze information for
existing solutions.
Reconnaissance and Analysis Tools Overview
This topic introduces the functions needed by the analysis and reconnaissance tools.
The reconnaissance and analysis tools are used to gather information about data center resource
utilization for network, storage, and server components.
When planning for virtualization, the server historical performance is of utmost importance since
it governs the physical server dimensions as well as how the physical server resources are
divided between the virtual machines (VMs).
Analysis Tool Functions
• Purpose
- Assess current server utilization and workload
- Plan capacity optimization
- Aid in the decision for an optimal solution
• Gather resource utilization information across monitored servers
- CPU and memory
- Network
- Storage
• Analyze gathered data and present the report
- Performance graphs
- Minimum, maximum, average load/utilization information
• Provide benchmarking based on references
• Perform "what-if" analysis
The analysis and reconnaissance tools are used for the following purposes:
• Assess the current workload capacity of the IT infrastructure through comprehensive
discovery and inventory of IT assets
• Measure system workloads and capacity utilization across various elements of the IT
infrastructure - including by function, location, and environment
• Plan for capacity optimization through detailed utilization analysis, benchmarking,
trending, and identification of capacity optimization alternatives
• Identify resources and establish a plan for virtualization, hardware purchase, or resource
deployment
• Decide on the optimal solution by evaluating various alternatives through scenario
modeling and "what-if" analysis
• Determine which alternative best meets the predefined criteria
• Monitor resource utilization through anomaly detection and alerts based on benchmarked
thresholds
• Help generate recommendations to ensure ongoing capacity optimization
Microsoft Windows Embedded Tools
The Microsoft Windows operating system offers a selection of embedded tools that can be used to
gather the historical performance characteristics for a single server.
The figure introduces the following Microsoft Windows embedded tools that can be used to
perform server analysis:
• Computer Management
• Administrative Tools
• Task Manager
Linux Embedded Tools
Linuxconf
Webmin
The Linux operating system is available with many applications and utilities, thus the
embedded tools that can be used to gather historical performance characteristics vary per Linux
distribution.
Linuxconf comes with Mandrake Linux and Red Hat Linux, but is also available for most
modern Linux distributions. Multiple interfaces for Linuxconf are available: GUI, web,
command-line, and curses.
Webmin is purely a web-based modular application. It offers a set of core modules that handle
the usual system administration functionality, and there are also third-party modules available
for administering a variety of packages and services. To download and learn more about
Webmin, point your web browser to www.webmin.com/webmin. This package is available in a
number of formats specific to different distributions.
Note Whereas any user can install Linuxconf, Webmin must be installed by root. After that, you
can access this tool from any user account as long as you know the root password.
Agent-free Implementation
The VMware Capacity Planner Data Collector is installed on-site at the data center that is being
assessed. This component collects detailed hardware and software metrics required for capacity
utilization analysis across a broad range of platforms, without the use of software agents.
Reference Benchmarking
Analysis provided by the VMware Capacity Planner Dashboard is based on comparisons to
reference data collected across the industry. This unique capability helps in guiding decisions
around server consolidation and capacity optimization for your data center.
More information can be found on the Cisco Partner Resource Center at
http://www.ciscoprc.com.
VMware vCenter CapacityIQ
• Management infrastructure vService
- Analyze, forecast, and plan virtualized data center or desktop
capacity
- Post-virtualization tool
• Explore existing capacity
- How many VMs can be deployed?
- What is the historical capacity utilization?
- Can the existing capacity be used more efficiently?
• Predict future needs
- When will the capacity limit be hit?
- What happens if capacity is added, removed, or reconfigured?
VMware vCenter provides a set of management vServices that greatly simplifies application
and infrastructure management.
With VMware vCenter CapacityIQ, continuous capacity management can be
achieved. VMware vCenter CapacityIQ continuously analyzes and plans capacity to ensure
optimal sizing of virtual machines, clusters, and entire data centers.
Key features of VMware vCenter CapacityIQ are:
• Performs "what-if" impact analysis to model the effect of capacity changes
• Identifies and reclaims unused capacity
• Forecasts timing of capacity shortfalls and needs
Benefits of the VMware vCenter CapacityIQ are:
• Delivers the right capacity at the right time
• Makes informed planning, purchasing, and provisioning decisions
• Enables capacity to be utilized most efficiently and cost-effectively
VMware vCenter CapacityIQ is a post-virtualization product used for ongoing management of
virtualized environments.
VMware vCenter CapacityIQ can assist in answering the following questions concerning day-
to-day operations:
• How much capacity is being used right now?
• How many more virtual machines (VMs) can be added? (In other words, when will capacity
run out?)
• What happens to capacity if more VMs are added?
• How much capacity can be reclaimed?
CapacityIQ - Examining Current Resources
Second, a "what-if" analysis can be started. For example, what happens if 10 more new VMs
are added?
• Start a new "what-if" scenario and enter parameters per the wizard
• Select the type of capacity change - hardware or virtual machine
• Select the kind of virtual machine capacity change in this scenario
• Define the proposed VM configurations for this scenario
• Review summary of existing VM configurations as a reference for sizing VMs
CapacityIQ - "What-if" Analysis (Cont.)
Select how to view the results, review selections, and select "Finish" to complete the scenario.
The "what-if" scenario result for the deployed vs. total VM capacity (graphical view) shows
that at the current provisioning rate, capacity will run out in November (red line).
If 10 new VMs were deployed today, capacity would run out in 23 days.
Select alternative views to examine additional information.
CapacityIQ - "What-if" Analysis Results
• Understand current capacity usage
- How much capacity is currently used?
• Currently 97 VMs on a cluster which is at 87% of resources
• Forecast future capacity needs
- How many more VMs can be added?
• 16 more VMs can be added.
• If the trend continues, new capacity must be added in 70 days.
• Predict impact of capacity changes
- What happens to capacity if more VMs are added?
• Adding 10 more VMs depletes cluster capacity in 23 days.
• Maximize utilization of existing capacity
- How much capacity can be reclaimed?
• There are 4 idle and 4 over-allocated VMs. 2 GB of memory
can be reclaimed.
With the dashboard and "what-if" analyses, you have answered the following:
• How much capacity is being used right now? This cluster currently has 97 VMs and is 87%
full.
• How many more VMs can be added? 16 more VMs can be added. More cluster capacity
must be added within 70 days.
• What happens to capacity if more VMs are added? If 10 more VMs are added, cluster
capacity will run out in 23 days.
• How much capacity can be safely reclaimed? There are four idle VMs and four over-
allocated VMs, so 2 GB of memory can be reclaimed.
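The day-count forecasts above follow from simple rate arithmetic. The sketch below illustrates that arithmetic only; it is not CapacityIQ's internal model, and the capacity and growth-rate figures are assumptions:

```python
def days_until_full(capacity_vms, deployed_vms, vms_per_day, extra_vms_today=0):
    """Estimate days until VM capacity is exhausted at a steady provisioning rate."""
    remaining = capacity_vms - deployed_vms - extra_vms_today
    if remaining <= 0:
        return 0
    return remaining / vms_per_day

# Illustrative numbers: a 113-VM cluster with 97 VMs deployed,
# growing at roughly 0.23 VMs per day
baseline = days_until_full(113, 97, 0.23)
what_if = days_until_full(113, 97, 0.23, extra_vms_today=10)
```

A real tool works from measured utilization trends per resource (CPU, memory, disk) rather than a single VM count, but the structure of the question - remaining headroom divided by growth rate - is the same.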
Akorri BalancePoint is another agentless management software product with advanced analytics to help
fix problems, optimize utilization, and improve performance in the virtualized data center.
Akorri BalancePoint helps companies make sure that data center server and storage virtual and
physical systems deliver the best application performance possible.
BalancePoint can help reduce operations and infrastructure costs by using servers and storage
more efficiently and reducing the time and resources spent on management through automation.
BalancePoint supports a wide range of applications, servers, and storage systems. It can assist
with the following tasks:
• Manage performance across applications, servers, and storage with cross-domain
visualization and performance-based service alerting. Automatically find deeply buried
contention, hotspots, and bottlenecks. In addition, when problems are found, it provides
direct automatic troubleshooting analysis.
• Optimize application performance and resource utilization. Manage IT as a business with
performance indicators that indicate the optimal balance between application requirements
and resource capabilities.
BalancePoint's main capabilities derive from three key components:
• ScanPoint agentless discovery and data collection provides performance and utilization
data from the operating system and VMware, databases, servers, and storage infrastructure
resources. This data is stored in a self-managing internal performance database mined for
historical patterns, performance trends, and modeling baselines.
• Viewpoint unique performance topology visualizations show the performance impact on
data center applications running through all the physical and logical layers of IT
infrastructure, including server and storage virtualization. These topological views are
automatically built from ScanPoint data, and are color coded with BalancePoint's
assessment of performance problems.
• GuidePoint actionable recommendations and analyses are based on Akorri's Cross-Domain
Analytics™ technology. These mathematically advanced algorithms provide intelligent
alerting and proactive service management tools to help with remediation, optimization,
and planning.
PlateSpin Recon
• Agentless management software
- Advanced analytics to help fix problems, optimize utilization,
and improve performance in the virtualized data center
PlateSpin offers Recon tools that can be used for tasks similar to those performed by VMware
Capacity Planner and Akorri BalancePoint.
These tools can be used for workload profiling. PlateSpin Recon tracks actual CPU, disk,
memory, and network utilization over time, on both physical and virtual hosts. Every server
workload and virtual host has utilization peaks and valleys, and PlateSpin Recon can build
consolidation scenarios based on interlocking these peaks and valleys.
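Building a consolidation scenario from measured peaks can be approximated with a bin-packing pass. This is a simplified stand-in, not PlateSpin's proprietary algorithm; real tools also interlock peaks that occur at different times of day, which this sketch ignores:

```python
def consolidate(workload_peaks, host_capacity):
    """First-fit-decreasing packing of workload peak utilizations onto hosts.

    workload_peaks: list of (name, peak_demand) tuples
    host_capacity: capacity of each target host, same units as peak_demand
    """
    hosts = []  # each host is a list of (name, peak) whose peaks sum <= capacity
    for name, peak in sorted(workload_peaks, key=lambda w: w[1], reverse=True):
        for host in hosts:
            if sum(p for _, p in host) + peak <= host_capacity:
                host.append((name, peak))
                break
        else:
            hosts.append([(name, peak)])  # no existing host fits; open a new one
    return hosts

# Hypothetical workloads with peak CPU demand in GHz, packed onto 3.0 GHz hosts
plan = consolidate([("web", 1.2), ("db", 2.0), ("mail", 0.8), ("dns", 0.4)], 3.0)
```

Packing on peaks rather than averages is conservative: it guarantees capacity even if every workload peaks simultaneously.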
Summary
• Reconnaissance and analysis tools are used to gather server
resource utilization and workload data.
• Various tools are available to gather information; embedded tools
are operating system dependent and offer per-server information,
whereas specialized tools such as VMware Capacity Planner can
gather and present information for multiple servers.
References
For additional information, refer to these resources:
• http://www.vmware.com/products/capacity-planner/
Lesson 3
Understanding a Migration
Plan
Overview
This lesson identifies and explains how to build and evaluate a migration plan.
Objectives
Upon completing this lesson, you will be able to define the requirements for migration, list the
necessary contents of a migration plan, and provide guidelines for evaluating migration plans.
This ability includes being able to meet these objectives:
• Identify the migration plan.
• Identify the aspects of the migration plan.
• Identify the methods for migration plan evaluation.
Migration Plan Overview
This topic introduces and explains migration plans.
Migration Scale
• Full migration to a new site
- Build a new physical data center (replace
old equipment with Cisco UCS)
- Migrate operating systems,
applications, and data
• Full migration within the existing site
- Build a new logical data center within an
existing physical data center space
(replace old equipment with Cisco
UCS)
- Migrate applications and data
• Partial migration or redesign
- Redesign an existing data center (add
Cisco UCS to the existing deployment)
- Migrate operating systems,
applications, and data
Depending on the type of migration, which can range from simply upgrading the existing data
center to include virtualization to building a physically new data center with new equipment,
you can determine what migration steps are required to have the least painful migration.
Migration Plan
• Deploy Cisco UCS cluster(s)
• Provide connectivity between the existing computing solution and
Cisco UCS equipment (L2 and L3)
• Define migration actions for migrating operating systems,
applications, and data to Cisco UCS
• Define verification steps for each migration action
• Define rollback procedures in case of unforeseen migration issues
• Decommission old equipment or integrate it into the new data
center as part of migration
In general, a migration plan should include a list of migration actions, which can be divided
into migration phases. Each migration action should have three main components:
• A detailed task list assigned to the appropriate resource
• Verification steps to confirm that the migration was successful
• A rollback procedure to revert to the original setup in case an unforeseen problem is
detected that cannot be immediately mitigated
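The three components of a migration action can be modeled directly in a migration runner. This is a generic sketch, not a Cisco tool; the dict-of-callables structure is an assumption made for illustration:

```python
def run_migration(actions):
    """Execute migration actions in order; verify each, roll back on failure.

    Each action is a dict with 'task', 'verify', and 'rollback' callables,
    mirroring the three components a migration plan should define.
    """
    completed = []
    for action in actions:
        action["task"]()
        if action["verify"]():
            completed.append(action)
        else:
            # Unforeseen problem: revert this and all earlier actions,
            # newest first, to return to the original setup
            for done in reversed(completed + [action]):
                done["rollback"]()
            return False
    return True

# Hypothetical single-action plan, with migration state tracked in a dict
state = {"migrated": False}
ok = run_migration([{
    "task":     lambda: state.update(migrated=True),
    "verify":   lambda: state["migrated"],
    "rollback": lambda: state.update(migrated=False),
}])
```

Grouping such actions into phases, each with its own maintenance window, gives the phased structure described above.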
Migrating Servers
• Select the server migration method
• Physical to Physical (P2P)
- Define whether the server personality (MAC, WWN, UUID, ...)
needs to be migrated
- Define if a complete operating system and application reinstall is
required
• Physical to Virtual (P2V)
- Select the P2V conversion tool - VMware vConvert
- Define Cisco UCS and VMware as migration prerequisites
• Virtual to Virtual (V2V)
- Define server infrastructure prerequisites - Cisco UCS and
VMware
- Add new ESX hosts to the virtual infrastructure
- Existing VMware infrastructure - migrate VMs using VMotion
- New VMware infrastructure - export/import VMs
For the server migration, first the migration method should be selected:
• Physical-to-Physical (P2P): The existing servers are migrated to Cisco UCS server blades
in a one-to-one fashion. The migration plan must determine whether it is necessary to
migrate personality identifiers like the MAC address, world wide name (WWN), and
Universally Unique ID (UUID). Next, it has to evaluate whether a complete operating
system and application reinstall is needed or some cloning tool can be used to migrate the
server operating system, application(s), and data (if it resides on the local disks).
• Physical to Virtual (P2V): The existing physical servers are converted to virtual machines
(VMs). The migration plan must define the prerequisites - installed Cisco UCS cluster(s)
connected to the LAN and SAN, and an implemented VMware infrastructure with proper
management. Next, the plan must define which tools will be used for P2V migration (for
example, VMware vConvert).
• Virtual to Virtual (V2V): The existing VMs are migrated to a new VMware virtual
infrastructure. The migration plan must define the prerequisites - the installed Cisco UCS
cluster(s) connected to the LAN and SAN and new ESX hosts added to the virtual
infrastructure. Second, the migration plan should also consider possibilities in regard to
the virtual infrastructure:
- The existing virtual infrastructure will be used, with new ESX hosts added to the
infrastructure. The Cisco UCS cluster(s) should be properly connected to the LAN to
permit communication with the management infrastructure - VMware vCenter -
and to the SAN (connected to the same shared storage as the other ESX hosts
in a cluster) to allow proper operation of VMware services like VMotion, High
Availability (HA), Fault Tolerance (FT), Distributed Resource Scheduler (DRS), etc.
- A new virtual infrastructure will be used; thus, new management services must be
deployed and ESX hosts added to that infrastructure. Next, the migration plan
should define how the VMs from the existing virtual infrastructure would be
migrated to the new one - using export/import of VMs or using cloning tools.
Migrating Operating Systems,
Applications, and Data
• Option 1 - Can be done at the application layer if supported by an
application
- Add a fresh virtual or physical server to the application server cluster
- Remove old physical servers from the cluster
• Option 2 - Can be done at the operating system layer
- Install a new virtual or physical application server
- Copy data to the new virtual or physical server or provide access
to an existing external database server
- Switch over from the old physical server to the new server
In general, when migrating from a traditional setup to a virtualized one, you need to migrate a
service from a physical server to a virtual server.
This can be done in three ways:
Option 1: Using application clusters
• Install a fresh virtual server and an application server. Put the new server into an
application cluster with the existing physical server. Repeat the step for as many servers as
required. Once there are enough virtual servers in the cluster, start removing the physical
servers until only virtual servers remain. Decommission the physical servers.
Option 2: Reinstalling servers
• Install an operating system and an application server into a virtual server. Migrate data and
configuration from the physical server to the virtual server. Switch over from the physical
to the virtual server.
Option 3: Using conversion tools
• Use a conversion tool to create a new virtual server with the same operating system,
applications, and data as the original physical server, and then switch over.
Migration Example
The figure illustrates a simplified set of migration steps for a single server using the third
migration option-using conversion tools. The old and new data center infrastructures are
temporarily connected on Layer 2 and a hot conversion can be used to create a new virtual
server with the same operating system and applications as the original physical server (only the
underlying hardware is different).
Verification steps are an important part of the migration plan since they are used to verify
proper operation of the migrated server infrastructure and applications.
To write proper verification steps, the plan should evaluate the possible problems that could
take place during migration.
Server Issues
Overlapping MAC, WWN, or UUID parameters might result in no connectivity.
Migration of a large and complex data center can be a very sensitive operation. It is the
migration that will reveal any outstanding flaws in the data center design and migration plan.
The list above covers the most common considerations which, when properly addressed, help
you build a migration plan that can reduce the overall risk.
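An overlapping-identifier audit of the kind implied above can be run before cutover. The inventory format and the identifier values below are hypothetical, chosen only to show the check:

```python
from collections import Counter

def find_overlaps(servers):
    """Report MAC/WWN/UUID values claimed by more than one server."""
    overlaps = {}
    for field in ("mac", "wwn", "uuid"):
        counts = Counter(s[field] for s in servers)
        dupes = [value for value, n in counts.items() if n > 1]
        if dupes:
            overlaps[field] = dupes
    return overlaps

# Hypothetical inventory: an old server and its migrated replacement
# accidentally sharing a MAC address
inventory = [
    {"name": "srv-old", "mac": "00:25:b5:00:00:01",
     "wwn": "20:00:00:25:b5:01", "uuid": "a1"},
    {"name": "srv-new", "mac": "00:25:b5:00:00:01",
     "wwn": "20:00:00:25:b5:02", "uuid": "a2"},
]
conflicts = find_overlaps(inventory)
```

Running such a check against the combined old-plus-new inventory, while both environments are connected during the migration window, catches the no-connectivity failure mode before it happens.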
Migration Timing
• Overnight migration
- Requires significant staff for a speedy migration (external help)
- Multiple issues can arise.
- One issue can prevent the whole migration from completing.
- Significant downtime (not appropriate for round-the-clock
enterprises)
• Gradual migration
- Less personnel required
- Individual issues can be investigated and migration steps
postponed and repeated at a later time.
- Less or no downtime
- Requires many maintenance windows for one or several
concurrently performed migration steps
• Recommendation: Design migration plans for gradual migration
Overnight Migration
Migration of data centers can be performed in one go (the so-called overnight migration)
wherein the migration plan is designed to migrate all services at once. This is typically feasible
only for small data centers that are used only during business hours and not around the clock.
This can offer a migration window between two business days (literally over night) or during
the weekend (two full days). Larger data centers or data centers that are used around the clock
typically cannot be migrated in such short times.
Gradual Migration
An alternative to overnight migration is to devise a migration plan wherein services are
migrated one by one over several maintenance windows (i.e., nights or weekends) and should
be designed in a way that causes minimum downtime per service (e.g., minutes or hours).
When determining how to migrate old servers to new servers, it is recommended to select the
migration option with minimum downtime, especially for services that require maximum
uptime.
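The choice between the two timing approaches can be checked with rough arithmetic; all figures below are hypothetical:

```python
import math

def fits_overnight(num_services, hours_per_service, window_hours):
    """Rough feasibility check for a one-go (overnight) migration."""
    return num_services * hours_per_service <= window_hours

def windows_needed(num_services, services_per_window):
    """Maintenance windows required when migrating services gradually."""
    return math.ceil(num_services / services_per_window)

# Hypothetical data center: 40 services, ~1.5 h each, 12 h overnight window
overnight_ok = fits_overnight(40, 1.5, 12)  # 60 h of work will not fit
windows = windows_needed(40, 5)             # 5 services per window
```

In practice the per-service estimate should also budget for the verification steps and leave room to execute a rollback inside the same window.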
Data Loss
• Hot server conversion or data copying during busy hours could
result in data loss during conversion and switchover.
• Replication between the existing and new centralized storage is an
option, but switchover requires downtime (stop the old server, start
the new server).
• Recommendation: Prefer downtime ove r hot switchover for
sensitive applications
Hot conversion means that a conversion is being done while the server is active. This can
potentially lead to problems where data can be lost within moments when a conversion and
switchover are being performed. Avoid using this method for services dealing with sensitive
data where data loss is not acceptable (e.g., an e-banking transaction server). The preference is
a longer downtime vs. data loss.
Degradation of Service
• Migrated servers should provide the same functionality with equal or
better performance.
• Consider individual migrated components:
- CPU: guaranteed CPU resources in the virtual environment
equal to or greater than the existing CPU power
- Memory: equal to or greater than the existing memory for the
virtual machine
- Disk: create a virtual disk on the central storage that is equal to
or greater than the existing disk capacity
- Network: ensure equal or greater bandwidth to the virtual
machine (on server and network equipment)
• Recommendation: Audit the existing environment to get peak and
average resource utilization and provide guarantees for the identified
peak periods (do not stretch the virtual-to-physical ratio to the limit)
When dimensioning the virtualized data center, it is normal to assign multiple virtual servers
per one high-performance server (i.e., several CPUs and/or cores and large amounts of
memory). Care should be taken not to assign too many, which could result in performance
degradation after migration. Longer monitoring (e.g., over several weeks) should be performed
and the data analyzed to determine each physical server's average and peak resource
utilization. The peak utilization should be used to dimension the new data center and the
physical-to-virtual ratio.
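Peak-based dimensioning can be sketched as follows; the headroom factor and load figures are assumptions for illustration, not Cisco sizing guidance:

```python
import math

def required_hosts(peak_loads, host_capacity, headroom=0.2):
    """Hosts needed so that the summed peak load fits with a safety headroom.

    Uses peak (not average) utilization, per the guidance above; the
    headroom keeps the virtual-to-physical ratio off the limit.
    """
    usable = host_capacity * (1 - headroom)
    return math.ceil(sum(peak_loads) / usable)

# Hypothetical peak CPU demand (GHz) of five physical servers, measured
# over several weeks, to be consolidated onto 8.0 GHz hosts
hosts = required_hosts([2.5, 1.8, 3.2, 2.1, 1.4], 8.0)
```

Sizing on the sum of peaks is conservative; tools that interlock non-coincident peaks (as noted for PlateSpin Recon earlier) can justify a denser ratio.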
A gradual migration plan can be prepared with the help of external experts with experience in
data center design and migration. The migration itself, when done gradually, can be performed
using in-house staff with only optional external oversight or help.
Temporary Resources
• Some resources may be required for the migration only:
- Layer 1/2 connectivity between the old and new data center (e.g.,
dark fiber or L2 VPN)
- Cisco UCS server blades for migration tools
• The migration plan should list and detail the temporary resources
to ensure a smooth migration .
A migration plan should also list any temporary resources that are needed during the migration
process. This would typically include Layer-2 connectivity between the old and the new data
center and some servers and network devices to help in the migration process.
Server Migration Aspects
• P2P migration
- Evaluate if different server hardware affects operation
• P2V migration
- Follow P2V guidelines for applications
• V2V migration
- Evaluate whether ESX hosts can be integrated into the existing
VMware solution
- Evaluate how VMs can be migrated to the new VMware
infrastructure
• Can server personality parameters be migrated?
• Integration with existing server management tools for operating
system, applications, virtualization - VMware vCenter
- Maximum infrastructure size
- Effect of relocating management servers and services
(downtime)
The server migration aspects largely depend on the selected migration method.
With physical-to-physical (P2P) migration, the plan should evaluate whether using different
server hardware compared to the existing one affects server operation after migration.
With physical to virtual (P2V) migration, the plan should evaluate how the application(s) will
behave when running in VMs.
With virtual to virtual (V2V ) migration, the plan should evaluate whether the ESX hosts can be
integrated into the existing virtual infrastructure or-in the case of a new infrastructure-
evaluate how the VMs can be migrated.
The migration plan should also evaluate:
• Whether the existing server parameters (MAC, WWN, UUID) can be used and need to be
migrated to new servers
• How a new Cisco UCS server infrastructure merges with the existing management tools
• Can the existing management tools for operating system, applications, and virtual
infrastructure be used?
• Is any additional configuration like Serial over LAN (SoL) or Intelligent Platform
Management Interface (IPMI) required for server management?
• Would adding new servers require a new management application or license, since the
infrastructure would go beyond the existing management limitations?
• What would happen if the management servers and services were relocated?
A "dry run" of the migration can be performed, in which all actions that do not cause downtime
are carried out to ensure that any issues in the migration plan are identified in advance,
before the actual migration is started.
This evaluation requires that a new data center be built and configured, that connectivity be
provided between the old and the new data center, and that some services be tested in an
isolated environment.
Evaluating a Migration Plan (Cont.)
The networking and storage infrastructure of a new data center can be built and tested in
advance. The operation of real servers can be tested only once they are migrated. To maximize
the reliability of the migration process it is recommended to create dummy virtual servers and
test their connectivity to the existing physical servers (this connectivity is required during the
migration period) and their connectivity to clients (this connectivity will be required for normal
server functionality after migration).
Additionally, it is recommended to hot-convert some or all of the existing physical servers and
start them in the new virtual environment in an isolated mode, simply to test that the servers
were successfully converted and are operational.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
• Identify the servers and services to be migrated
• Detail the technical aspects of the servers, services, and the
network
• Create a migration plan to allow for the gradual migration of
servers
• Create a detailed set of verification steps for each server or
service
• Create rollback procedures in case verification fails and cannot be
immediately mitigated
Module Summary
• Server performance characteristics are affected by applications
and hardware resources.
• VMware Capacity Planner is a premigration tool used to assess
the existing server infrastructure to aid in physical-to-virtual
conversion.
• VMware CapacityIQ is a postvirtualization tool used to analyze,
forecast, and plan virtual infrastructure.
• A migration plan should address server migration aspects for the
selected migration method.
Module Self-Check Answer Key
Q1) A
Q2) B
Q3) D
Objectives
Upon completing this module, you will be able to identify and design Cisco Unified Computing
storage. This includes the ability to meet these objectives:
• Identify Cisco UCS deployments.
• Describe Cisco UCS solution advantages.
Lesson 1
Objectives
Upon completing this lesson, you will be able to identify and describe Cisco UCS deployment
options. This ability includes being able to meet these objectives:
• Describe the Cisco UCS deployment options.
• Identify the Cisco UCS deployment for server virtualization environment.
Server Deployment Options
This topic describes the purposes for which the Cisco Unified Computing System can be
utilized.
A single Cisco Unified Computing System can be deployed with a mixture of bare-metal
operating system installations and virtualized server solutions.
From another perspective, a Cisco UCS can be used for server deployments-which include
new servers, upgrading, and repurposing-and for achieving business continuity, which
includes high availability and disaster recovery.
Neither server deployment nor business continuity is a function unique to blade server markets.
Enterprises with larger server deployments face many of these issues.
Even in smaller environments (tens to hundreds of servers), the challenges faced in deployment
and business continuity can still take advantage of the features provided by Cisco UCS.
However, the true advantage of the Cisco UCS is more easily seen in larger server
deployments (thousands of servers).
The life cycle for servers in any organization is between 18 months and 3 years, which is
governed by the advent of new technologies. These technology shifts can happen with
frequencies of 3-5 years. The changes in server technologies often follow the lifecycle of
solution changes rather than the changes of the technology itself.
The Cisco UCS platform makes server deployments easy with service profiles. A service
profile is created by a server administrator, and may use resources created by the
storage and network administrators, such as MAC and worldwide name (WWN) pools. Service
profiles allow for the following capabilities:
• Plan and pre-configure once: By using resource pools, service profiles, and service
templates, you can plan and configure for groups of servers all at one time and then, using
the profiles, provision new servers at any time.
• Incremental deployments: After service profiles are created, adding additional servers is
as simple as using a service template to create a new server.
• Server replacement: This task is as simple as starting a profile on a new blade after it has
been physically replaced.
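The stateless-server idea behind these capabilities can be pictured as a plain data structure whose identity moves between blades. This is a minimal sketch; the class and function names are illustrative only and do not mirror the real Cisco UCS Manager object model or its API.

```python
# Minimal sketch of the stateless-server idea behind service profiles.
# Names and fields are hypothetical; a real deployment would use
# Cisco UCS Manager, not this code.
from dataclasses import dataclass, replace
from typing import Optional, Tuple

@dataclass
class ServiceProfile:
    name: str
    uuid: str                     # drawn from a UUID pool
    mac: str                      # drawn from a MAC pool
    wwnn: str                     # drawn from a WWN pool
    boot_order: Tuple[str, ...]   # e.g., ("SAN", "LAN")
    blade: Optional[str] = None   # physical blade currently associated

def associate(profile: ServiceProfile, blade: str) -> ServiceProfile:
    """Bind the server identity to a physical blade."""
    return replace(profile, blade=blade)

def disassociate(profile: ServiceProfile) -> ServiceProfile:
    """Free the blade; the identity survives and can move elsewhere."""
    return replace(profile, blade=None)

web = ServiceProfile("web01", "uuid-01", "00:25:b5:00:00:01",
                     "wwnn-01", ("SAN", "LAN"))
web = associate(web, "chassis1/blade3")
# Server replacement: the same identity simply moves to a new blade;
# UUID, MAC, and WWNN are unchanged.
web = associate(disassociate(web), "chassis1/blade7")
print(web.blade, web.uuid)
```

The point of the sketch is that the identity (UUID, MAC, WWNN, boot order) lives in the profile, not in the hardware, so replacement and incremental deployment become pure bookkeeping.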
One of the challenges faced by companies with large numbers of servers concerns
provisioning and server utilization. Typically, when designing a solution, a customer will need
to design it for different levels of utilization. This could mean that, in certain solutions,
customers may need to purchase and keep available additional server hardware and software-
for burst capacity, for example. This hardware, while underutilized, cannot be
dynamically repurposed; if it is not needed, it must simply occupy space, power, and cooling
resources.
Because servers are virtual and configured using profiles, repurposing resources (server
hardware) is as simple as disassociating a profile and reassociating a new one. Less-used
servers can be shut down and repurposed for more heavily used servers.
Server Upgrade Within Cisco UCS
• Disassociate service profile from existing server
• Associate service profile to new server
• Existing server can be retired or repurposed (create or associate
with appropriate profile)
Example service profile parameters:
- UUID: 560004 05...
- MAC: 00:25:b5:33:21:11
- WWNN: 20:11:00:11:22...
- Boot order: SAN, LAN
- Firmware: xx.yy.zz
The Cisco UCS enhances server upgrades with service profiles. Simply put, a blade in the
Cisco UCS is agnostic and can have its hardware identity defined by the service profile.
A server in a traditional deployment will be built specifically for a solution, with particular
adapters, firmware, and boot definitions. If you wish to upgrade such a server, you must again
go through the process of reinspecting the server for new hardware and coordinate that with
other administrators. Then, you have to schedule extended time to make the move.
With the Cisco UCS platform, there is a choice of two options:
• Change the service profile to boot the server on another, upgraded blade.
• Copy the service profile and assign it to the other blade. Then, you can test the new server
before pointing users to it.
While no complete solution for disaster recovery is provided, the features of Cisco UCS do aid
in a disaster recovery solution. When designing a disaster recovery solution, you must consider
the whole infrastructure stack. This includes:
• Application failover and dependency matching
• Operating system configuration and compatibility
• Data replication and integrity
• Connectivity matching (LAN and SAN)
• Server compatibility
Of these, the area in which the Cisco UCS can aid significantly is server compatibility. Server
compatibility requirements include:
• Configuration matching (adapters)
• Firmware matching
• Server parameters
• LAN or SAN specifics (VLANs and VSANs)
All of these items are configurable in the service profile for a given server, such that when
recovering, the Cisco UCS can quickly ensure server compatibility on the recovery system.
Without this ability, server compatibility can completely derail a disaster recovery solution.
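The compatibility checklist above can be pictured as a simple comparison between a production profile and the profile presented at the recovery site. This is an illustrative sketch only (field names are hypothetical; a real check would query the management system, not this code):

```python
# Illustrative sketch: verify that a recovery server presents the same
# configuration items as the failed production server. Field names are
# hypothetical, not Cisco UCS Manager attributes.
def compatibility_gaps(prod: dict, recovery: dict) -> list:
    """Return the list of mismatched compatibility items."""
    checks = ["adapters", "firmware", "server_params", "vlans", "vsans"]
    return [c for c in checks if prod.get(c) != recovery.get(c)]

prod = {"adapters": 2, "firmware": "1.4", "server_params": "uuid-a",
        "vlans": {10, 20}, "vsans": {100}}
dr = dict(prod, firmware="1.3")  # DR blade lags one firmware release

print(compatibility_gaps(prod, dr))
```

Because a service profile carries exactly these items, associating the exported production profile at the DR site makes the gap list empty by construction; that is the sense in which Cisco UCS "quickly ensures server compatibility."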
Server Disaster Recovery with Cisco UCS
• Export and import service profiles on a periodic or ad hoc basis
• Periodic disaster recovery tests at remote site
• Hardware at DR site can be repurposed during normal operations:
- Servers can be deployed for test/dev/QA at DR site
- During outage, disassociate existing profiles and associate
imported production profiles
Much of server compatibility is a manual process that leaves a lot of potential for incorrect
configuration. Cisco UCS, with its concept of statelessness and the ability to transparently use
profiles across redundant systems, helps with server compatibility issues.
In a general ESX deployment, one or several blades will be running the VMware ESX hypervisor
and will host multiple VMs. The following value propositions make the Cisco UCS ideal for
ESX deployments:
• Expanded Memory Blade Servers (Cisco UCS B250-M1): The more memory that is
available, the more VMs you can host per blade, making your blade very cost-
effective on a per-VM basis. There are other advantages as well:
- Reduced hardware costs
- Reduced licensing (based on the number of cores)
- Larger memory handles VMotion better
• Virtualization Integration: The Cisco UCS VIC M81KR adapter not only provides
visibility into VM networking, but also provides:
- Better security
- Performance: virtual adapters are handled at the hardware level
rather than in the hypervisor
- Improved troubleshooting
• Unified Fabric: Provides a single network for connectivity, simplifying your configuration
while reducing power and cooling requirements. Virtual NICs are configured and managed
like all other resources through Cisco UCS Manager.
• Service Profiles: Server statelessness allows for the rapid replication and deployment of
servers, which is good for a pay-as-you-grow environment. In addition, server statelessness
also provides for:
- Enhanced disaster recovery: eliminates server compatibility concerns in DR
- Easy upgrades and replacement: service profiles combined with VMotion
VMware Advanced Functionality
• Flexible number and type (Fibre Channel and Ethernet) of
adapters for best-practices networking
• Quick and simple provisioning of new ESX hosts
- Minimize window of exposure for overcommitted clusters
(Figure: VMware VMotion; VMware HA and FT; VMware DRS and DPM)
VMware provides a suite of software to enable high availability (HA) and disaster recovery
(DR) for the virtual machines that are being hosted. This migration can occur by the following
means:
• Live migration across ESX hosts: A manual intervention that can be used for upgrading
servers. With California (the Cisco UCS), we can easily use service profiles to create
identical ESX servers, as needed, to move VMs between. Network and SAN access would
be identical, saving time.
• Policy-based migrations: Moving a VM from one host to another, based on some type of
policy, such as server utilization statistics. All access can be replicated simply through
virtual NICs in service profiles.
• VM restarts due to failed hosts: Hosts can be quickly replicated with service profiles,
thus relieving failover hosts. Larger memory footprints reduce the impact of increasing the
number of VMs during failure.
A virtual desktop environment, or Virtual Desktop Infrastructure (VDI), has the following
characteristics, which can exploit the benefits of Cisco UCS:
• VDIs are very CPU- and memory-intensive. The Cisco UCS B250-M1 blade offering is ideal
in that the memory is scalable to very large amounts, and the Nehalem CPU will be
expanding the number of cores.
• Some customers may not want to depend on too many VMs at one time. The ability to
rapidly deploy servers using service profiles provides better HA than traditional servers.
• Application servers and boot servers can benefit from the greater memory available for
caching.
• Streaming workloads can cause VMs to be server-bound. Cisco UCS virtualized adapters
and adapter profiles provide ease of migration.
• Some architectures require specific network segregation. The Cisco UCS virtualized
adapter and VLANs can easily provide this segregation.
Lastly, the Cisco UCS is ideal because, even in an advanced VDI deployment, it can provide
for all of the server platforms that are needed, allowing a VDI deployment in a box or "pod".
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
• Cisco UCS enables flexible server deployment and dynamic
server provisioning, and it simplifies server upgrades within
systems.
• VMware server virtualization benefits from the Cisco UCS large
memory support and adapter virtualization.
Objectives
Upon completing this lesson, you will be able to meet the following objective:
• Describe the Cisco Data Center Unified Computing solution advantages.
Cisco Unified Computing Business Advantages
This topic describes the business advantages of the Cisco Unified Computing solution.
Reduced Expenses
• Reduced total cost of ownership (TCO)
• Better return on investment (ROI)
• Reduced physical infrastructure cost (CAPEX):
- Facilities: Less space used
- Lower cost per computing unit
• Reduced operational costs (OPEX):
- Less space used
- Lower power consumption costs
- More efficient cooling
The most obvious advantage is reduced Total Cost of Ownership (TCO). In general, less
equipment is required to perform selected tasks, and this applies to both servers and network
equipment.
Better Return on Investment (ROI) is achieved by higher utilization of equipment, so that over
the long term there is a significant gain in investment.
Lower capital costs are achieved by utilizing less physical space, and by the relatively lower
cost of a processing unit. A virtualized infrastructure allows for sharing of commodity
equipment, such as network interface cards, power supplies, cabling, and so forth.
Lower operating expenses are obtained through more efficient use of resources (higher
utilization of less space). Functions such as cooling can be optimized, as they can be
concentrated within a smaller space footprint.
Operational Advantages
• Reduced complexity
• Increased productivity
• Simplified management
• Increased resource utilization:
- Better total vs. utilized resources ratio
• End-to-end solution
• Extended data center life span
• Increased data center facilities capacity
(utilized capacity vs. total capacity)
The main goal is to better utilize the total available capacity of your facilities. As an example, a
highly utilized virtualized server can have an average CPU utilization at around 35%, compared
to a 5% average on a standalone server box.
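The utilization figures above imply a rough consolidation ratio. The following is illustrative arithmetic only, not a sizing methodology: it ignores hypervisor overhead, memory limits, and peak behavior.

```python
# Illustrative arithmetic only: how many lightly loaded standalone
# servers fit on one well-used virtualized host, using the average
# utilization figures quoted above.
standalone_avg = 0.05   # ~5% average CPU on a standalone box
virtualized_avg = 0.35  # ~35% average CPU on a virtualized host

# Each virtualized host can absorb roughly this many standalone
# workloads (CPU-wise, averages only).
ratio = virtualized_avg / standalone_avg
print(f"~{ratio:.0f} standalone servers per virtualized host")
```

In practice the achievable ratio is bounded by memory and I/O long before CPU averages, which is why assessment tools examine all resource dimensions.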
Reduced complexity in terms of cabling and supporting infrastructure is an additional key
factor, resulting in simplified physical management and increased productivity.
By combining several technologies, including servers, networks, and data storage, virtualized
infrastructure brings an end-to-end solution. As this equipment is usually modular, it extends
the data center life span by providing higher upgradeability (for example, a Cisco Catalyst 6500
chassis can be used for the previous, the current, and the next generation of supervisor engines,
and so on).
By condensing equipment, the data center floor footprint is smaller, meaning a net increase of
facility capacity.
Cost-Effectiveness
A dynamically provisionable application infrastructure must also be designed to reduce
operational costs. Pooling resources helps to increase overall resource utilization and leads to
more standardized operating environments. The result is facilitated multisite deployment.
Green Data Center Advantages
• Designed for energy efficiency
• Optimize energy consumption:
- Lower power consu mption
- Lower cooling energy consumption
• Achieve environmental compliance
Fifty percent of today's data centers will have insufficient power and cooling capacity to meet
the demands of high-density equipment in the near future. Through 2009, energy costs will
emerge as the second-highest operating cost (behind labor) in 70 percent of data center
facilities. Power demand for high-density equipment will level off or decline. In-rack and in-
row cooling will be the predominant cooling strategy for high-density equipment. In-chassis
cooling technologies will be adopted in 15 percent of servers.
Environmental compliance is an interesting point when competing for tenders*, improving the
chances of winning in cases where energy efficiency is counted.
Note *A tender is a request for quote (RFQ) from a public- or government-owned company.
Multiple bidders must propose their solutions, and the best one is selected in the tender.
Data Center Lifespan Prolonged
(Figure: data center capacity over time, before and after virtualization
and unified fabric deployment; total capacity vs. utilized capacity)
In the figure, you can see that within three consecutive years after adding equipment to the data
center, the total thermal capacity of the room is exceeded. The solution might be to upgrade the
cooling system. However, the cooling system has to cool down the installed equipment, which
mitigates the efficiency of using the equipment itself. In many cases, equipment upgrades
become unnecessary and could be avoided if the currently installed equipment were utilized
more efficiently. This also makes the data center more environmentally friendly and lowers
operating costs.
Virtualization is the key to maximizing the potential of the existing data center equipment. By
utilizing resources more efficiently through virtualization, you avoid the need for constant
upgrades. This prolongs the lifecycle of the equipment because the load is more stable and its
functionality is optimized over the long term.
Reclaimed storage reduces the expense of adding shelves of disks to the storage system, and the
money saved can be invested in increasing business capacity instead of just keeping up with the
demand on the available storage. The effect of network and storage virtualization and unified
fabric deployment results in less power used per computing unit, making room for future
growth.
In summary, the total effect of the Cisco Data Center Unified Computing solution is that it
creates space in which a business can grow; it prolongs the lifespan of the data center overall,
and it reduces the need to upgrade the facility.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
• The business advantages of the Cisco Data Center Unified
Computing solution include reduced complexity, improved
productivity, scalability, and better resource utilization, along with
reduced costs.
Module Summary
This topic summarizes the key points that were discussed in this module.
Module Summary
• Cisco UCS is a flexible, scalable solution which can be used for
simple server deployment, dynamic provisioning, and to address
disaster recovery needs.
• The Cisco Unified Computing solution brings business and technical
benefits.
Module Self-Check
Use the questions here to review what you learned in this module. The correct answers and
solutions are found in the Module Self-Check Answer Key.
Ql) Which Cisco UCS characteristic offers the greatest benefit in the VMware
virtualization environment? (Source: Identifying Cisco Unified Computing System
Deployments)
A) eight-slot chassis
B) advanced power management
C) expanded memory blade
D) support for iSCSI
Q2) Which of the following is a benefit of the Cisco Data Center Unified Computing
solution? (Source: Describing Cisco Data Center Unified Computing Solution
Advantages)
A) higher CAPEX
B) longer ROI
C) reduced OPEX
D) shorter data center lifecycle
Module Self-Check Answer Key
Ql) C
Q2) C
Module 9
Understanding Server
Virtualization Networking
Overview
This module introduces the VMware server virtualization solution and server virtualization
networking, describes the Cisco solution for server virtualization networking, and explains
virtual machine sizing.
Objectives
Upon completing this module, you will be able to identify server virtualization and networking,
understand Cisco Nexus 1000V, and understand virtual machine sizing. This includes the
ability to meet these objectives:
• Identify server virtualization.
• Identify the Cisco server virtualization networking solution.
• Describe how to size a virtual machine.
Lesson 1
Objectives
Upon completing this lesson, you will be able to meet the following objectives:
• Identify server virtualization.
• Describe VM ware server virtualization concepts.
Server Virtualization Overview
This topic explains server virtualization.
Server Virtualization
• Abstracts operating system and application from physical
hardware
• Offers hardware independence and flexibility
Historically, when physical servers are used they are deployed with one application and one
operating system within a single set of hardware.
A single operating system is isolated to one machine; for example, running either the Windows
or Linux operating system. This means that the physical server is tightly coupled with the
underlying hardware, which makes migration or replacement a process that requires time and
skill.
If additional applications are put on a physical server, these multiple applications start
competing for resources, which typically causes problems related to performance or insufficient
resources-challenges that are difficult to address and manage. Thus, a single application might
run on a single server, resulting in server resource underutilization-with average utilization
ranging from 5% to 10%.
When a new application must be deployed-e.g., a web service-a physical server must be
deployed, racked, stacked, connected to external resources, and configured-all of which
requires a substantial amount of time.
Since numerous applications are used, some of them demanding high availability as well, a data
center ends up with numerous server deployments. In many cases, this causes various problems
ranging from insufficient space to excessive power requirements.
Server Virtualization
Server virtualization decouples the server from the physical hardware-this makes the server
independent of the underlying physical server. The hardware is literally abstracted or separated
from the operating system.
The operating system and the application(s) are contained in a container-a virtual machine. A
single physical server with server virtualization software deployed (e.g., VMware ESX) can run
multiple virtual machines. Keep in mind that virtual machines do share the physical resources
of the underlying physical server. However, with virtualization, tools exist to control how the
resources are allocated to individual virtual machines.
The virtual machines on a physical server are isolated from each other and do not interact. In
other words, they are running deployed applications without affecting each other. Virtual
machines can be brought online without the need for installing new server hardware, which
allows rapid expansion of computing resources to support greater workloads.
With physical server deployment, a single operating system is used by each server. The
software and hardware are tightly coupled, which makes the solution inflexible. Since multiple
applications are typically not running on a single machine, due to potential conflicts, the
resources are underutilized and the computing infrastructure cost is high.
Benefits
When server virtualization is used, the operating system(s) and applications become
independent of the underlying hardware, allowing a virtual machine to be provisioned to any
physical server. Since the operating system and application are encapsulated in a virtual
machine, multiple virtual machines can be run on the same physical server. Thus, server
virtualization offers significant benefits as compared to physical server deployment:
• Physical hardware can be consolidated
• Resources of a physical machine can be shared among virtual machines
• Resource utilization is improved-so fewer resources are wasted
The example shows application deployment in a physical server environment compared to a
virtualized server environment. A physical server configuration uses three (3) servers that have
a low average load ranging from 10% to at most 40%. A virtualized solution uses just one (1)
server with three (3) virtual machines deployed, so the total physical server average load is the
sum of the virtual machine average loads, which is around 70%-significantly better than with
the former configuration.
Note In the virtualized solution, a small portion of resources is also used for the virtualization
abstraction layer-the hypervisor. Compared to virtual machine (VM) resource usage, the
hypervisor utilization should be low.
Hypervisor - Abstraction Layer
• Hypervisor or Virtual Machine Monitor (VMM)
- Thin operating system between hardware and virtual machine
- Controls and manages hardware resources
- Manages virtual machines (create, destroy, etc.)
• Virtualizes hardware resources
- CPU process time-sharing
- Memory span from physical memory
- Network
- Storage
A hypervisor or Virtual Machine Monitor (VMM) is server virtualization software that allows
multiple operating systems to run on a host computer concurrently.
The hypervisor provides abstraction of the physical server hardware for the virtual machine. A
thin operating system performs the following basic tasks:
• Control and management of physical resources by assigning them to virtual machines and
monitoring resource access and usage
• Control and management of virtual machines-the hypervisor creates and maintains virtual
machines and, if requested, destroys the virtual machine (if the VMM is alive)
Ideally, a hypervisor would abstract all physical server components-CPU, memory, network,
and storage. CPU abstraction is achieved with CPU time-sharing between virtual machines, and
memory abstraction by assigning a memory span from physical memory.
A virtual server is used to enable a particular service or application, and from the server
perspective, typically CPU, memory, I/O, and storage resources are of concern.
Note that when multiple virtual machines are deployed, they can oversubscribe resources; thus
the hypervisor must employ an intelligent mechanism to allow oversubscription without
incurring performance penalties.
The virtualization methods differ per server virtualization solution-VMware ESX uses a
different virtualization approach from Microsoft Hyper-V, thus performance would also differ.
A virtualized server is called a Virtual Machine (VM). A virtual machine is a container holding
the operating system and the application(s). The operating system in a VM is called the guest
operating system.
A VM is defined as a representation of a physical machine by software that has its own set of
virtual hardware on which an operating system and applications can be loaded. With
virtualization, each virtual machine is provided with consistent virtual hardware regardless of
the underlying physical hardware that the host server runs on. A virtualized server has the same
"hardware" characteristics as a physical machine:
• CPU(s)
• Memory
• Network adapter(s)
• Disk(s)
All the virtual server resources are virtualized. Apart from the resources, each VM also
possesses its own set of parameters-e.g., a virtual MAC address and virtual IP address-to
allow it to communicate with the external world. Therefore, a single physical server will
typically have multiple MAC addresses and IP addresses-those defined and used by the VMs
it is serving.
Since a VM uses virtualized resources, the guest operating system is no longer in control of
hardware-this is the privilege of the hypervisor. Underlying physical machine resources are
shared between different virtual machines, each running its own operating system instance.
The VM resources are defined by the server administrator, who creates a VM and defines the
characteristics of the VM-the CPU speed, amount of memory, storage space, network
connectivity, etc.
Virtual Machine Benefits
Virtual machines have four key properties: partitioning, isolation,
encapsulation, and hardware abstraction.
Partitioning means that a physical server (an ESX host in this example) runs two or more
operating systems with different applications installed. The VM operating system is called the
guest operating system.
The guest operating systems might be different: VMware supports a plethora of them, ranging
from Windows, Linux, or Solaris to NetWare or other vendor-specific systems. None of the
guest operating systems have any knowledge of others running on the same ESX server. Still,
they share the physical resources of the physical server.
The control and abstraction of the hardware and physical resources is done by the VMware
ESX hypervisor, a thin operating system that provides the hardware abstraction.
Virtual Machine Isolation
• Hardware-level fault and security isolation
  - VMs are not aware of other VMs' presence
  - VM failure does not affect other VMs on the same ESX host
• Advanced server resource control
  - To preserve and control performance
A second key VM characteristic is isolation. Isolation means that VMs do not know about other
VMs that might be running on the same ESX server. They have no knowledge of any other
VM.
One implication of isolation is security: not knowing of each other, the VMs do not
interfere with each other's data. Isolation also prevents any specific VM failure from affecting
any other VM's operation.
Note VMs on the same or different physical servers can communicate if network configuration
permits it.
To ensure proper performance for a VM, the hypervisor allows advanced resource control,
where certain resources can effectively be reserved per VM. For example, ESX can allocate
and dedicate memory to a VM.
Virtual Machine Hardware Abstraction
• Any VM can be provisioned on or migrated to any other ESX server with similar physical
  characteristics
• Support for multiple operating systems
  - Windows, Linux, Solaris, NetWare
The fourth key characteristic is hardware abstraction. As already mentioned, this is performed
by the ESX hypervisor to provide VM hardware independence.
Being hardware independent, the VM can be migrated to another ESX server to utilize the
physical resources of that server. Mobility also provides scalable, on-demand server
provisioning, server resource pool growth, and failed server replacement.
With advanced VMware mechanisms like the Distributed Resource Scheduler (DRS), the VM
can be moved to a less utilized physical server, thus providing dynamic load balancing.
Server virtualization solutions employ approaches that differ in how the guest operating system
is isolated from the underlying hardware and in where the hypervisor or virtual machine
manager (VMM) resides. The most often used approaches are:
• Native or full virtualization
• Host-based virtualization
• Para-virtualization
Abstraction of the operating system and application from the underlying physical server
hardware is an important benefit of virtualization. The abstraction can be employed in disaster-
recovery scenarios since it addresses the traditional requirement of physical server-based
disaster recovery: the need to provide identical hardware at the backup data center.
With complete abstraction, any VM can be brought online on any supported physical server
without having to consider hardware or software compatibility.
The ability to run multiple virtual machines on a single server also reduces the costs of a
backup data center solution by consolidating applications on fewer physical servers than would
normally be required: a minimal hardware set can be used at the backup data center while fast
recovery in a disaster situation is still accomplished.
Native (Full) Virtualization
Native virtualization has the following characteristics:
• The hypervisor runs on bare metal; i.e., directly on the physical server hardware without
the need for a host operating system.
• The hypervisor completely virtualizes hardware from the guest operating system(s).
Drivers used to access the hardware exist in the hypervisor.
• The guest operating system deployed in a VM is unmodified.
Such an approach enables almost any guest operating system deployment and allows the best
scalability. The most widely used example of native virtualization is the VMware ESX hypervisor.
Host-based Virtualization
Host-based virtualization has the following characteristics:
• The VMM runs in a host operating system not directly on the physical server hardware.
• Drivers used to access physical hardware are host operating system kernel based, whereas
the hardware is still emulated by the VMM.
• The guest operating system deployed in a VM is unmodified but must be supported by the
VMM and host operating system.
Examples of host-based virtualization are Microsoft Virtual Server and VMware Server
solutions. Such solutions typically have a larger footprint due to host operating system usage
and the additional I/O that is used for the host operating system communication.
Microsoft Virtual Server can be deployed with Windows XP, Windows Vista, or Windows
2003 host operating systems and can host Windows NT, Windows 2000, Windows 2003, and
Linux as a guest operating system. The current version is Microsoft Virtual Server 2005 R2
SP1.
Hybrid Virtualization
Microsoft Hyper-V is a hybrid native-host virtualization solution, where a hypervisor resides on
a bare-metal server but requires a parent VM or partition running Windows 2008. The parent
partition creates child partitions hosting guest operating systems. The virtualization stack runs
in the parent partition, which has direct access to the hardware devices and provides physical
hardware access to child partitions. The supported guest operating systems range from Windows
2000, Windows 2003, Windows 2008, and SUSE Linux Enterprise Server 10 SP1/SP2 to
Windows Vista SP1 and Windows XP Professional SP2/SP3/x64.
© 2009 Cisco Systems, Inc. Understanding Server Virtualization Networking 9-15
Para-virtualization
Para-virtualization has the following characteristics:
• The hypervisor runs on bare metal; i.e., directly on the physical server hardware without
the need for a host operating system.
• The guest operating system deployed must be modified to make calls to or receive events
from the hypervisor. This typically requires a guest operating system code change.
• The application binary interface used by the application software remains intact, thus the
applications do not have to be changed to run inside a VM.
Unmodified guest operating systems can be supported if virtualization is supported in the
hardware architecture; e.g., Intel VT-x or AMD Pacifica.
An example of a para-virtualization solution is Xen, which supports guest operating systems
like Linux, NetBSD, FreeBSD, Solaris 10, and certain Windows versions.
VMware vSphere
VMware vSphere is the latest VMware server virtualization solution. It is a cloud operating
system designed to manage large collections of infrastructure (CPUs, storage, networking) as a
seamless, flexible, and dynamic operating environment.
VMware vSphere comprises:
• Management and automation with infrastructure optimization, business continuity, desktop
management, and software lifecycle tools
• Virtual infrastructure with resource management, availability, mobility, and security tools
• ESX, ESXi, Virtual Symmetric Multiprocessing (SMP), and VMFS virtualization platforms
Among the key vSphere solution tools and applications are:
• ESX and ESXi hypervisors
• Distributed Resource Scheduler (DRS)
• Virtual Machine File System (VMFS) native ESX cluster file system
• Distributed switch, which can be the native VMware switch or the Cisco Nexus 1000V
• VMotion, Storage VMotion, High Availability (HA), Fault Tolerance (FT), and Data
Recovery availability tools
• Virtual SMP, enabling virtual machines to use multiple physical processors; i.e., the
administrator can assign multiple virtual CPUs to a virtual machine upon creation
VMware Scalability
• Scalability is defined by the ESX host and VM characteristics
  - Number of CPUs
  - Amount of memory
  - Number of VMs
• Expands with the application needs
The ESX host and VM characteristics govern the VMware solution scalability.
The ESX host scalability characteristics define the performance and consolidation rates
available per single physical server, whereas the VM scalability characteristics define the
amount of workload a VM can handle.
Note that ESX version 4 now supports on-the-fly (hot) addition of resources like
memory, CPU, network connectivity, and storage; with this, virtual machines no longer
need to be shut down in order to make configuration changes.
In the current VMware ESX version 4, the following applies:
• The ESX host scalability is limited to:
  - 512 GB of memory
  - 256 virtual CPUs per host
  - 256 VMs
• The VM scalability is limited to:
  - 8 virtual CPUs with Virtual SMP
  - 255 GB of memory
Note To be able to utilize large memory capacities, ESX uses the 64-bit VMkernel.
vCenter Server
When the size of the virtual infrastructure grows (more and more ESX hosts are added to the
virtual infrastructure), it becomes hard to keep up with management. VMware vCenter Server
is the solution for virtual infrastructure management, having the following characteristics:
• Windows-based application
• Resource management for ESX and ESXi hosts and VMs
• VM deployment and template management
• Task scheduling, statistics, logging, alarms, and events monitoring
Although VMware vCenter is not required for ESX host and VM deployment, it is required for
some advanced services, such as VMotion, HA, Fault Tolerance, and DRS.
The VMware vCenter Server can be installed on a physical server or a VM. In either case the
requirements are:
• Processor = 2 GHz or faster
• Memory = 2 GB or more
• Disk space = minimum 1 GB
• Network adapter = 1 Gigabit Ethernet recommended
• Operating system (32- or 64-bit) = Windows 2003 Server with SP1 or SP2, Windows 2003
Server R2, Windows 2008 Server
• Database server = Oracle 10g, Oracle 11g, Microsoft SQL Server 2005 Express Edition
(comes with vCenter Server), Microsoft SQL Server 2005 with SP2, Microsoft SQL Server
2008
Since vCenter is an important part of the VMware solution, it is vital to plan for its proper availability.
Virtual Machine Mobility
• VMware VMotion
  - Moves virtual machines across physical servers without interruption
  - Changes hardware resources dynamically
  - Eliminates downtime and provides continuous service
  - Balances workloads for computing resource optimization
VM mobility is achieved with VMware VMotion, which allows the moving of virtual machines
across physical ESX hosts with no virtual machine interruption. During such a migration, the
transactional integrity is preserved, and the virtual machine's resource requirements are
dynamically shifted to the new ESX server.
VMotion can be used to eliminate the downtime normally associated with hardware maintenance.
It can also be employed to optimize server utilization by balancing virtual machine workloads
across available ESX server resources. VMotion enables server administrators to transparently
move running VMs from one physical server to another across the Layer 2 network.
For example, suppose a UCS blade needs additional memory. VMotion could be used to migrate all
running virtual machines off the blade, allowing the blade to be removed so that memory could
be added without impact to virtual machine applications.
The VMware solution incorporates two different mechanisms to achieve VM high availability:
• VMware High Availability (HA)
• VMware Fault Tolerance (FT)
VMware Fault Tolerance (FT)
VMware FT advances the HA functionality by enabling true zero-downtime switchover.
This is achieved by running primary and secondary VMs, where the secondary is an exact copy
of the primary one. The VMs run in lockstep using VMware vLockstep, with which the
secondary VM ends up in the same state as the primary one. The difference between the two
VMs is that the primary one owns the network connectivity.
Upon failure, the switchover to the secondary VM preserves the live client session since the
VM is not restarted. FT is enabled per VM.
To be able to use FT, the following requirements, among others, have to be met:
• DRS for the VM is disabled.
• Thin disk provisioning is converted to zeroized thick disk provisioning.
• Hosts used for FT must be in the same cluster.
Distribute Load
Another useful application is the VMware Distributed Resource Scheduler (DRS),
which allows dynamic and intelligent allocation of hardware resources to ensure optimal
alignment between business demands and computing resources.
DRS, when deployed, is used to dynamically balance VMs across computing resource pools.
The resource allocation decisions are made based on predefined rules, which may be defined by
the administrator.
Using DRS increases administrator productivity by automatically maintaining optimal
workload balancing and avoiding situations where an individual ESX server could become
overcommitted.
Server Power Management
• VMware Distributed Power Management (DPM)
  - Consolidates VM workloads to reduce power consumption
  - Intelligent and automatic physical server power management
  - Support for multiple wake-up protocols
Distributed Power Management (DPM) can be used to reduce the power and cooling expenses
related to physical servers.
DPM consolidates the virtual machines on a minimum number of physical servers (the
VMware ESX hosts) by constantly monitoring resource requirements and power consumption
across ESX hosts in a cluster.
When fewer resources are required, the virtual machines are consolidated onto a few ESX
servers, and those that are unused are put into standby mode.
If resource utilization and workload requirements increase, DPM brings the standby host
servers back online and then redistributes the VMs across the newly available resources.
DPM requires a supported power management protocol on the ESX host. Intelligent Platform
Management Interface (IPMI), Integrated Lights-Out (iLO), and Wake-on-LAN (WOL) are
supported protocols.
Note that for each of the wake-up protocols, you have to perform a specific configuration on
each host before enabling DPM.
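Of these protocols, WOL is simple enough to sketch: a standby host is woken by a "magic packet" of six 0xFF bytes followed by the target NIC's MAC address repeated 16 times, typically sent as a UDP broadcast. A minimal illustration in Python (not VMware's implementation; the MAC address shown is an example):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Wake-on-LAN magic packet: 6 bytes of 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local subnet (UDP port 9 by convention)."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))
```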
Summary
• Server virtualization provides virtual machine partitioning, isolation, encapsulation, and
hardware abstraction.
• The VMware vSphere server virtualization solution provides advanced tools and
mechanisms that scale virtual machine deployment.
• VMware management is achieved with the vCenter Server.
• VMotion enables VM mobility across ESX hosts without VM interruption.
Lesson 2
Objectives
Upon completing this lesson, you will be able to meet the following objectives:
• Describe VMware server virtualization networking.
• Identify the Cisco solution for server virtualization networking.
• Describe the Cisco Nexus 1000V deployment design.
VMware Server Virtualization Networking
This topic introduces VMware server virtualization networking.
Virtual Machines
Virtual NICs
Virtual Switch
Physical NICs
The VMware server virtualization solution extends the access layer into the ESX server with
the VM networking layer. The following components are used to implement server
virtualization networking:
• Physical network(s): Physical devices connecting VMware ESX hosts for resource
sharing. Physical Ethernet switches are used to manage traffic between ESX hosts, the
same as in a regular LAN environment.
• Virtual network(s): Virtual devices running on the same system for resource sharing.
• Virtual Ethernet switch (vSwitch): Similar to a physical switch. Maintains a table of
connected devices, which is used for frame forwarding. Can be connected via an uplink to a
physical switch via a physical network interface card (NIC). Does not provide the advanced
features of a physical switch.
• Port group: Subset of ports on a vSwitch for VM connectivity.
• Physical NIC (vmnic): Physical network interface card used to uplink the ESX host to the
external network.
Virtual Access Layer
VMware networking is deployed on each ESX server and extends the access layer into the
configured physical servers: a virtual access layer. The virtual access layer does not have the
same functionality as the physical access layer, typically lacking ACL and QoS configuration
options.
• Service Console port - assigned to a VLAN
• VMkernel port(s) - assigned to a VLAN
• VM port group(s)
  - Assigned to a VLAN
  - VMs assigned to port group
• Uplink(s)
  - External connectivity
  - vmnic associated with a single vSwitch only
vNetwork Standard Switch Operation
• Multiple switches on a single ESX host
  - No internal communication between vSwitches
• Operates as a physical Layer 2 switch
  - Forwards frames per MAC address
  - Maintains MAC address table
  - Internal switching for VMs
• Supports
  - Trunk ports with 802.1q for VLANs
  - PortChannel - NIC teaming
  - Cisco Discovery Protocol
An individual ESX host can be configured with multiple vSwitches, which typically have no
internal connectivity and provide no communication among each other. A vSwitch operates at
OSI Layer 2 like a normal LAN switch: it forwards Ethernet frames based on MAC
addresses, maintains a MAC address table, and provides switching for the VMs connected to it.
vSwitches support 802.1q VLAN trunking, PortChanneling with NIC teaming, and Cisco
Discovery Protocol.
There are differences between physical LAN switches and vSwitches: vSwitches do not
participate in Spanning Tree Protocol (STP) and do not support the Dynamic Trunking
Protocol or the PAgP PortChannel protocol. Instead, vSwitches never learn addresses on uplink
ports and forward all unknown unicast frames using one of the available uplink NICs.
A vSwitch can be used for internal communication only if it is not associated with any vmnic,
i.e., no uplink port is defined. Such a configuration can be used for testing and traffic isolation
purposes.
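The forwarding rules described above can be sketched in a few lines. This is a simplified toy model of the described behavior (learn MAC addresses only on VM-facing ports, send unknown unicast out an uplink), not VMware's code; all names are illustrative:

```python
class SimpleVSwitch:
    """Toy model of a vSwitch: MAC learning on VM-facing ports only,
    unknown unicast forwarded out an uplink NIC."""

    def __init__(self, uplinks):
        self.uplinks = uplinks          # e.g. ["vmnic0", "vmnic1"]
        self.mac_table = {}             # learned MAC -> VM-facing port

    def receive(self, src_mac, dst_mac, in_port):
        # Learn only on VM-facing ports, never on uplinks.
        if in_port not in self.uplinks:
            self.mac_table[src_mac] = in_port
        # Known destination: switch internally to the VM's port.
        if dst_mac in self.mac_table:
            return self.mac_table[dst_mac]
        # Unknown unicast: forward out one of the available uplinks.
        return self.uplinks[0]

vs = SimpleVSwitch(uplinks=["vmnic0"])
vs.receive("00:50:56:aa:00:01", "00:50:56:aa:00:02", in_port="vm1")  # learns vm1
out = vs.receive("00:50:56:aa:00:02", "00:50:56:aa:00:01", in_port="vm2")
# A frame for an already-learned VM MAC is switched internally to "vm1".
```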
vSwitches are capable of supporting VLAN IDs in one of three methods:
• External Switch Tagging (EST): This method leaves frames untagged and allows the
external switch to handle all tagging operations.
• Virtual Guest Tagging (VGT): This method allows software within the guest operating
system to tag the frame and maintains that tag within the virtual network. Tagged Ethernet
frames are forwarded up to the guest operating system. For this purpose, VLAN 4095 is
used with the ESX/ESXi host vSwitch.
• Virtual Switch Tagging (VST): This is the preferred and default method. In this mode, the
vSwitch tags each frame based on the assigned port group VLAN.
To scale the available bandwidth and provide better redundancy, NIC teaming can be used.
Multiple NICs can be teamed and associated as an uplink to a vSwitch. VMware uses built-in
drivers for NIC teaming, which allow both active/active and active/passive configurations. The
teaming supports various load balancing schemes (all applied to outbound traffic only):
• vSwitch port based: The default scheme, which ties each virtual switch port to a specific
uplink associated with the vSwitch. The algorithm tries to maintain equal port-to-uplink
assignments to achieve load balancing.
• Source MAC based: Load balancing is based on the source MAC addresses. To achieve
proper load balancing, the number of virtual network adapters should be greater than the
number of uplinks.
• IP hash based: This method uses the source and destination IP addresses to determine the
physical network adapter.
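The IP-hash scheme can be illustrated with a small sketch. The actual ESX hashing details differ; this only shows the idea of deterministically mapping a flow's IP pair to one physical NIC (all addresses and NIC names are illustrative):

```python
import ipaddress

def select_uplink(src_ip: str, dst_ip: str, uplinks: list) -> str:
    # Combine the two addresses and take the result modulo the uplink count,
    # so every frame of a given IP pair always uses the same physical NIC.
    key = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return uplinks[key % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
# The same conversation always maps to the same uplink...
a = select_uplink("10.0.0.11", "10.0.1.5", uplinks)
b = select_uplink("10.0.0.11", "10.0.1.5", uplinks)
# ...while different IP pairs may spread across the available NICs.
```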
A VMware vSwitch is created and managed from VMware vCenter by the ESX
administrator. It resides in the ESX host. An individual vSwitch is managed as a separate
virtual network, isolating its traffic from other vSwitches.
A vSwitch can have up to 1016 usable virtual ports, of which up to 32 can be used for uplinks
that are associated with the physical adapters.
VMware vCenter Server
Identifying the Cisco Server Virtualization
Solution
This topic describes the Cisco Nexus 1000V networking solution for virtualized server
environments.
The Cisco server virtualization solution uses technology jointly developed by Cisco and
VMware. The network access layer is moved into the virtual environment to provide enhanced
network functionality at the VM level.
This can be deployed as a hardware- or software-based solution, depending on the data center
design and demands. Both deployment scenarios offer VM visibility, policy-based VM
connectivity, policy mobility, and a nondisruptive operational model.
VN-Link
VN-Link technology was jointly developed by Cisco and VMware and has been proposed to
the IEEE for standardization. The technology is designed to move the network access layer into
the virtual environment in order to provide enhanced network functionality at the VM level.
The Cisco Nexus 1000V replaces the VMware vSwitch with a Cisco software switch. This
model provides a single point of configuration for the networking environment of multiple ESX
hosts. Additional functionality includes policy-based connectivity for the VMs, network
security mobility, and a nondisruptive software model.
VM connection policies are defined in the network and applied to individual VMs from within
VMware vCenter. These policies are linked to the Universally Unique ID (UUID) of the VM
and are not based on physical or virtual ports.
Cisco Nexus 1000V Distributed Virtual Switch (Cont.)
• Mobility of network and security properties
• Maintained connection state
• Ensured VM security
Through the VMware vCenter APIs, the Cisco Nexus 1000V monitors VM migration and
ensures policy enforcement as machines transition between physical ports. Security policies are
applied and enforced as VMs migrate through automatic or manual processes.
When using the Cisco Nexus 1000V, the management model for VMs stays the same and is
handled by the VM administrator. Network administrators create security profiles, and these
policies are applied to individual VMs by the VM administrator. Deployment time is reduced
through preconfigured repeatable processes, reducing the operational workload. The benefits
include unified network management and operations, enhanced network features at the VM
level, and VM-level visibility.
Cisco Nexus 1000V Features
• Layer 2
  - VLAN, PVLAN, 802.1q
  - LACP
  - vPC host mode
• QoS classification and marking
• Security
  - Layer 2, 3, 4 access lists
  - Port security
• SPAN and ERSPAN
• Compatibility with VMware
  - VMotion, Storage VMotion
  - DRS, HA, FT
The Cisco Nexus 1000V supports the same features as physical Cisco Catalyst or Nexus
switches while maintaining compatibility with VMware advanced services like VMotion, DRS,
FT, and HA.
The Cisco Nexus 1000V is licensed per server CPU regardless of the number of cores. It
comprises the following:
• Virtual Supervisor Module (VSM): Performs management, monitoring, and
configuration tasks for the Cisco Nexus 1000V and is tightly integrated with VMware
vCenter; the connectivity definitions are pushed from the Cisco Nexus 1000V to vCenter.
• Virtual Ethernet Module (VEM): Enables advanced networking capability on the ESX
hypervisor and provides each VM with a dedicated virtual switch port.
A Cisco Nexus 1000V deployment consists of a VSM (one or two for redundancy) and multiple
VEMs installed in the ESX hosts: a vNetwork Distributed Switch.
A VSM is a supervisor module much like in regular physical modular switches, whereas VEMs
are remote Ethernet line cards to the VSM.
In Cisco Nexus 1000V deployments, VMware provides the vNIC and drivers while the Cisco
Nexus 1000V provides the switching and the management of switching.
Cisco Nexus 1000V Operation
• Centralized control plane - VSM
  - Manages multiple data planes
  - Software appliance on physical server or VM
  - Two VSMs for high availability
• ESX cluster = single Cisco Nexus switch
  - One data plane (VEM) per ESX host
  - Up to 64 ESX hosts per VSM
  - 512 active VLANs
The Cisco Nexus 1000V uses a distributed architecture that separates control and data plane
functionality. The control plane functionality is represented by the VSM, which manages
multiple distributed data planes (a VEM in each ESX host). Thus, a VSM acts as a supervisor
module for remote VEMs.
All configuration and supervisor functions are handled by the VSM. Using the console, Telnet,
or SSH, an administrator makes all configuration changes on the VSM. When a change is made
on the VSM, the configuration is passed to vCenter and the changes are made on the DVS. DVS
changes are passed down to the corresponding VEM.
Each VEM acts as a module on the VSM, and the VSM and VEM(s) appear as a single switch
to Cisco Discovery Protocol neighbors. The VSM does not reside in the data path and therefore
cannot directly receive or respond to Cisco Discovery Protocol messages. Cisco Discovery
Protocol and other network management packets are transferred between the VEM and the VSM
on one of three required VLANs, known as the Packet VLAN.
The VSM can be deployed as a software appliance on a physical server (Control Plane Physical
Appliance, CPPA) or on a VM (Control Plane Virtual Appliance, CPVA). A redundant Cisco
Nexus 1000V deployment would incorporate two VSM appliances.
The VEM is a software replacement for the VMware vSwitch on a VMware ESX Host. All
traffic-forwarding decisions are made by the VEM.
An individual Cisco Nexus 1000V supports up to 64 ESX hosts.
Each VEM acts as a separate switching line card with no concept of a fabric between VEMs.
This means there are no PortChannels or connections between VEMs, and they do not rely on
one another for operation. Switching decisions and frame forwarding all happen on the VEM
and are not reliant on the VSM.
Only the uplinks in a host can be bundled in a PortChannel for load balancing and high
availability. The Cisco Nexus 1000V does not support EtherChannels across different VEMs.
The Cisco Nexus 1000V does not run STP because STP would deactivate all but one uplink to
an upstream switch, preventing full utilization of uplink bandwidth. Instead, each VEM is
designed to prevent loops in the network topology.
Cisco Nexus 1000V Connectivity
The Cisco Nexus 1000V VLANs consist of management, control, packet, and one or more
data networks.
Domain ID
A single Cisco Nexus 1000V instance, including dual redundant VSMs and managed VEMs,
forms a switch domain. Each Cisco Nexus 1000V domain within a VMware vCenter Server
needs to be distinguished by a unique integer called the domain identifier.
Data VLANs
The data networks carry VM packet traffic (server data). One or more data VLANs are defined
for this purpose. Data traffic from the VM is not sent to the VSM and the VSM does not require
access to the data VLANs. All VSM management is out of band and switching decisions do not
rely on the VSM.
Note It is recommended that the Control VLAN and Packet VLAN be separate VLANs, and that
they be on separate VLANs from those that carry data.
Cisco Nexus 1000V Port Profiles
• Port profiles
  - Used to configure multiple similar ports
  - Define VLAN, ACL, QoS, port security, etc.
  - Port groups in VMware - created for each port profile
• Uplink port profiles
  - Provide outbound connectivity from VEM
  - Used for VEM-to-VSM and VM data traffic
• VM port profiles
  - Provide configuration for VM ports
• Recommendation - use for all port configuration on Cisco Nexus 1000V
In Cisco Nexus 1000V, port profiles are used to configure interfaces. A port profile can be
assigned to multiple interfaces, giving them all the same configuration. Changes to the port
profile can be propagated automatically to the configuration of any interface assigned to it.
In the VMware vCenter Server, a port profile is represented as a port group. The vEthernet or
Ethernet interfaces are assigned in vCenter Server to a port profile for:
• Defining port configuration by policy
• Applying a single policy across a large number of ports
• Supporting both vEthernet and Ethernet ports
• VLAN
• Port channels
• ACL
• Port security
• NetFlow
• Rate limiting
• QoS marking
Note Although the configuration can be applied to an individual virtual port on the Cisco Nexus
1000V, it is recommended to apply the entire configuration via port profiles.
By using port profiles, hundreds or thousands of VMs can be provisioned rapidly with detailed
network configurations such as port state, quality of service (QoS) tagging, and access control
list (ACL) controls. Additionally, port profiles reduce the risk of misconfiguration on groups of
similar ports by maintaining the configuration in a central location. While individual port
configuration is still possible using the Cisco Nexus 1000V, it is recommended to use port
profiles rather than configuring ports directly. This method reduces administration time and
simplifies network troubleshooting.
VM Port Profiles
Port profiles that are not configured as uplinks can be assigned to a VM virtual port.
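A VM port profile on the Cisco Nexus 1000V might look like the following sketch. The profile name and VLAN ID are illustrative; consult the Cisco Nexus 1000V configuration guide for the exact syntax of your software release:

```
port-profile type vethernet WebServers
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```

Once the profile is enabled, it appears in vCenter as a port group of the same name, and the VM administrator can assign VM virtual ports to it.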
Uplink Port Profiles
• Assigned to physical NIC on ESX host (vmnic)
• System uplink port profile
  - Carries control and packet VLANs between VEM and VSM
• VM uplink port profile
  - Carries VM data traffic
• A single uplink port profile can be system and VM uplink at the same time
VM profiles are used to provide configuration for VMs and typically require an uplink profile
to access the physical network.
When a VM profile is configured without a corresponding uplink profile, it creates internal VM
networks. If a VM profile is created accessing a VLAN that is not trunked to the physical
network, then the assigned VMs will be able to communicate only with other VMs assigned to
the profile on the same host. This configuration is similar to creating internal-only vSwitches or
port groups within a standard VMware networking environment.
Cisco Nexus 1000V Deployment Design
This topic describes considerations that need to be addressed when preparing the Cisco Nexus
1000V deployment design.
Planning the Cisco Nexus 1000V deployment must take into consideration Cisco Nexus 1000V,
VMware, and uplink switch aspects.
From the Cisco Nexus 1000V perspective the following must be addressed:
• Licensing-Cisco Nexus 1000V is licensed per server CPU. The designer must thus know
how many ESX hosts will be initially used and how many will be used in the future to plan
and select the proper licensing package.
• VLAN scheme-for the Cisco Nexus 1000V deployment a minimum of three VLANs is
required-management, control, and packet. The designer must reserve and assign these
VLAN IDs from the free VLAN ID pool. Although the VLAN IDs could be any of those
already used, it is recommended to use dedicated VLAN IDs.
• Design VSM deployment.
On the VMware side, the physical NICs have to be selected for the system uplink port profiles
to enable VSM to VEM communication.
On the uplink switch side, the interface where ESX is attached has to be configured for 802.1Q
trunking and must allow the expected VLANs-control, packet, and management.
VM Deployment Considerations
• Cisco Nexus 1000V considerations
- Define data VLANs
- Define VM uplink port profile(s) as trunks
- Define VM port profile(s) for VM connectivity
• Upstream switch considerations
- Configure the ESX-attached interface as a trunk with
allowed VLANs
Once the Cisco Nexus 1000V deployment is designed, the VM deployment can be planned.
This includes Cisco Nexus 1000V, VMware, and upstream switch settings.
Summary
• Cisco Nexus 1000V architecture provides a virtual switch chassis
with a supervisor module (VSM) and switching line cards (VEM).
• Cisco Nexus 1000V deployment requires management, control,
packet, and data VLANs.
• Cisco UCS systems can be combined with Cisco Nexus 1000V
technology to provide physical network controls at the virtual
network level.
• Cisco Nexus 1000V design defines VLANs, system and VM uplink
port profiles, and VM port profiles.
Lesson 3
Sizing a Virtual Machine
Objectives
Upon completing this lesson, you will be able to describe and perform proper virtual machine
sizing. This ability includes being able to meet these objectives:
• Describe the parameters that influence the VM sizing.
• Describe the VM CPU sizing.
• Describe the VM memory sizing.
• Describe the VM network sizing.
• Describe the VM storage sizing.
• Describe how the VM is sized.
Identify Virtual Machine Sizing Parameters
This topic introduces and explains the parameters used in sizing the virtual machine.
Network Pool
ESX servers provide hardware resources like CPU, memory, disk space, and network
connectivity to VMs, thus making the resources part of a certain pool.
A virtual machine, when created, specifies resource parameters which define the amount of
resources that will be used-whether this is CPU, memory, disk, or network. Virtual machines
provide server deployment on business demand.
Virtual Machine Parameter Limits
• VMware ESX 3.5 per VM maximums
- Disk size = 2TB
- Number of vCPUs = 4
- Memory size = 64GB
- Number of NICs = 4
• VMware ESX 4 per VM maximums
- Disk size = 2TB
- Number of vCPUs = 8
- Memory size = 255GB
- Number of NICs = 10
The following parameter list describes VMware ESX 3.5 per virtual machine maximums:
• Size of SCSI disk = 2TB
• Number of vCPU = 4
• Memory size = 64GB
• Number of NICs =4
The following parameter list describes VMware ESX 4 per virtual machine maximums:
• Size of SCSI disk = 2TB
• Number of vCPU = 8
• Memory size = 255GB
• Number of NICs = 10
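The per-VM maximums above lend themselves to a simple configuration check. A minimal Python sketch; the function and table names are illustrative, not part of any VMware tooling.

```python
# Per-VM maximums as quoted in the text above (ESX 3.5 vs. ESX 4).
VM_LIMITS = {
    "ESX 3.5": {"disk_tb": 2, "vcpus": 4, "memory_gb": 64,  "nics": 4},
    "ESX 4":   {"disk_tb": 2, "vcpus": 8, "memory_gb": 255, "nics": 10},
}

def validate_vm(version, disk_tb, vcpus, memory_gb, nics):
    """Return the list of parameters that exceed the per-VM maximums."""
    limits = VM_LIMITS[version]
    requested = {"disk_tb": disk_tb, "vcpus": vcpus,
                 "memory_gb": memory_gb, "nics": nics}
    return [k for k, v in requested.items() if v > limits[k]]

# A VM that is legal on ESX 4 but exceeds two limits on ESX 3.5:
assert validate_vm("ESX 4",   2, 8, 128, 4) == []
assert validate_vm("ESX 3.5", 2, 8, 128, 4) == ["vcpus", "memory_gb"]
```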
As mentioned, the VM sizing should take into account application requirements that vary by
application type and vendor. The following list gathers typical memory, storage, and network
use cases for certain applications:
• High-end database applications like Oracle RAC, MS-SQL HA Cluster, and Sybase HA
Cluster require a large amount of memory and multiple NICs, should be connected to a SAN,
and require high availability.
• Low to midrange database applications like Linux/Microsoft Server based MySQL and
Postgres databases require low to midsize memory and dual NICs, and can use NAS or SAN
for storage.
• Web hosting applications like Linux Apache or Microsoft IIS require a relatively small
memory amount, local disk or NAS storage, and dual NICs.
• General computing applications like Microsoft SharePoint, file servers, print servers, etc.
require a low memory amount, local disk or NAS storage, and single NIC.
• CRM/ERP front-end application servers like SAP and PeopleSoft typically require a mid-
size memory amount, dual NICs, and local disk or NAS/SAN storage.
• Microsoft Exchange (depending on the use case) might require mid-size memory, dual
NIC, and local disk or NAS/SAN storage.
• High guest count Virtual Desktop Infrastructure like VMware with 128 to 300 guests
per server typically requires a large memory, multiple NICs, SAN or NAS storage, and
high availability.
• Market data applications/algorithmic trading requires a large memory, with multiple
NIC connectivity, SAN or NAS storage, and high availability.
• Linux grid-based environments like DataSynapse require a midsize memory, dual NICs,
and local disk or NAS storage.
Sizing Virtual Machine CPU
This topic describes the virtual machine CPU sizing aspects.
(Figure: vCenter Server dialog for selecting the number of virtual processors in the virtual machine.)
VM CPU sizing is done when a VM is created. The administrator needs to specify how many
vCPUs the VM should have.
VM CPU Workload
Like host CPU saturation, VM CPU saturation can also occur. Depending on the application
type, adding vCPU count might help the VM's performance; i.e., if the application is
multithreaded it could utilize additional threads. The VM profile can be updated by increasing
the CPU reservation for a particular VM that has been identified as lacking CPU power.
Allocating Host CPU
• Configure host CPU capacity sharing per VM
- CPU contention sharing (Low, Normal,
High, Custom)
- Capacity reservation in MHz
- Maximum CPU capacity in MHz or unlimited
Apart from specifying the number of virtual CPUs available to the VM, the administrator can
also allocate and ensure CPU capacity for the times when CPU sharing contention happens.
The following parameters can be applied in a VM profile:
• Number of vCPUs: A VM deployed with a single-threaded application should be given only
one vCPU. For multithreaded applications the number of vCPUs depends on the
application itself but is also limited by the processor type and ESX limitations. For ESX 3.5
up to 4 vCPUs can be assigned, whereas with ESX 4.0 up to 8 vCPUs can be assigned.
• Shares: Allow the administrator to define the processor usage of each VM in relation to the
other VMs hosted by the system.
Note If you expect frequent changes to the total available resources, use Shares, not
Reservation, to allocate resources fairly across virtual machines.
• CPU capacity reservation: Sets the minimum CPU capacity to prevent VM CPU
starvation. Use when you need to guarantee minimum CPU capacity.
• Maximum CPU capacity: The value specifies the maximum CPU capacity that can be
used by a VM and can be used to prevent a VM from consuming too many CPU cycles. If
unlimited is selected, a VM may use all CPU capacity if needed.
Note Be aware that VMware ESX hypervisor adds CPU overhead-thus do not overcommit the
physical CPU.
Note Host CPU capacity allocation is more art than science. Proper allocation requires complete
understanding of the hosted VM's performance requirements and behavior. Unless complete
performance understanding is possessed, a default utilization scheme should be used.
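The interplay of shares and reservations described above can be illustrated with a toy allocator. This is a deliberately simplified Python sketch of proportional-share allocation, not the actual ESX CPU scheduler.

```python
# Proportional allocation under contention: each VM first receives its
# reservation, then the remaining host capacity is split by share ratio.
# A simplified sketch; the real ESX scheduler is considerably more involved.

def allocate_mhz(host_mhz, vms):
    """vms: dict name -> {"shares": int, "reservation_mhz": int}."""
    # Honor reservations first.
    alloc = {name: vm["reservation_mhz"] for name, vm in vms.items()}
    remaining = host_mhz - sum(alloc.values())
    total_shares = sum(vm["shares"] for vm in vms.values())
    # Distribute what is left in proportion to shares.
    for name, vm in vms.items():
        alloc[name] += remaining * vm["shares"] / total_shares
    return alloc

vms = {
    "db":  {"shares": 2000, "reservation_mhz": 1000},
    "web": {"shares": 1000, "reservation_mhz": 0},
}
alloc = allocate_mhz(host_mhz=4000, vms=vms)
assert alloc["db"] == 3000.0   # 1000 reserved + 2/3 of the remaining 3000
assert alloc["web"] == 1000.0  # 1/3 of the remaining 3000
```

Note how shares only matter for the capacity left after reservations, which is why the guideline above prefers shares when total resources change often.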
Sizing Virtual Machine Memory
This topic describes aspects of virtual machine memory sizing.
Memory is presented to a VM in the form of slots that can be populated with memory-the
speed and type, but not the size, of the memory is that of the host server. This behavior is not
configurable and occurs automatically. For example, a VM with 4 GB of memory will typically
see two slots configured with 2048 MB DIMMs.
VM memory sizing is done upon VM creation, or even after the VM has been created. This
includes specifying the amount of memory assigned to a VM.
Be aware that you can assign a VM more memory than a host physically has-the maximum
being 64GB for ESX 3.5 and 255GB for ESX 4.0. Thus even a host with 16GB of physical
memory can have a VM with 32GB configured.
In general, the sum of the memory of all running VMs and the VMware ESX hypervisor should
typically not greatly exceed the amount of physical memory. It is recommended to load
VMware ESX memory to 80% or 85%. Using this approach allows for some spare memory in
case the VMs start to use more physical memory. Remember also that using more than 80% to
85% of memory capacity in an ESX cluster deployment can impact VMware High Availability
failover and diminish functionality.
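The 80-85% guideline above can be turned into a quick sanity check. A Python sketch under the assumption of a fixed hypervisor overhead; the 2 GB overhead figure is an illustrative placeholder, not a VMware-documented value.

```python
# Sketch of the 80-85% rule: the sum of configured VM memory plus an
# assumed hypervisor overhead should stay below ~80% of physical memory,
# leaving headroom (and protecting HA failover capacity in a cluster).

def memory_check(physical_gb, vm_memory_gb, hypervisor_gb=2.0, target=0.80):
    committed = sum(vm_memory_gb) + hypervisor_gb
    return {
        "committed_gb": committed,
        "utilization": committed / physical_gb,
        "within_target": committed <= target * physical_gb,
    }

# Host with 64 GB physical memory running five 8 GB VMs:
report = memory_check(64, [8, 8, 8, 8, 8])
assert report["committed_gb"] == 42.0
assert report["within_target"]          # 42 GB <= 0.80 * 64 GB = 51.2 GB

# Adding two more 8 GB VMs pushes past the target:
assert not memory_check(64, [8] * 7)["within_target"]  # 58 > 51.2
```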
Transparent Page Sharing
VMware ESX can save a lot of physical memory using transparent page sharing, which is
particularly useful in environments where multiple similar guest operating systems are used.
The VMware hypervisor checks each block of memory that a VM wants to write to physical
memory-the block of memory being equal to a block of memory already saved in physical
memory means that there is no need to use extra physical memory. Instead only a pointer is set
that remembers that the block is used by other VMs. If the VMs only read such a block and
never change it, the block is saved just once. ESX uses such a block until a VM wants to write
to the block, at which time an additional memory block copy is created.
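The share-on-read, copy-on-write behavior just described can be modeled with a reference-counted page store. This is a toy Python sketch for intuition only, not representative of real hypervisor memory-management internals.

```python
# Toy model of transparent page sharing: identical page contents are stored
# once and reference-counted; a modified page becomes a new private page.

class PageStore:
    def __init__(self):
        self.pages = {}      # page content -> reference count

    def write_page(self, content):
        self.pages[content] = self.pages.get(content, 0) + 1

    def physical_pages(self):
        return len(self.pages)

store = PageStore()
# Three VMs boot the same guest OS: four identical code pages each.
for _vm in range(3):
    for page in ["kernel", "libc", "drivers", "init"]:
        store.write_page(page)

assert store.physical_pages() == 4       # 12 logical pages, only 4 physical

# One VM modifies a page: the new content becomes an extra private copy,
# while the shared original stays in place for the other two VMs.
store.write_page("libc-patched")
assert store.physical_pages() == 5
```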
Ballooning
If VMware ESX runs out of guest VM memory space, it starts a process inside the guest
operating system that claims memory. The guest operating system checks whether there is
memory not being used. If such memory exists, it is given to the process. Then VMware Tools
claim such memory and report to ESX exactly which memory blocks can be reused for other
VMs.
Using this approach, unused memory is taken from other VMs and given to the VM that
needs it more.
Swapping
If ballooning is not sufficient, ESX uses a last resort-swapping out VM memory to disk.
This incurs performance degradation since disks are always much slower than physical
memory.
Apart from specifying the size of the allocated memory available to the VM, the administrator
can also ensure memory capacity for the times when memory sharing contention happens. The
following parameters can be applied in a VM profile to address VM memory requirements:
• Shares allocation is used to identify preferential treatment when memory resources are
under constraint on an ESX host. Resource shares are based on a proportional allocation
system, where a VM's "due share" is a ratio based on its shares compared to the total
shares allocated to all objects in the respective group. If two VMs have the same amount of
memory provisioned to them and are in the same resource pool, the VM with the greater
number of shares will enjoy preferential access to physical memory on a proportional basis
equal to the differences in the share allocations.
• Memory reservation guarantees an amount of physical memory that will always be
available to the VM.
• Memory limit defines the absolute maximum physical memory a VM can consume on the
host.
The options can be configured independently of each other-certain VM profiles might employ
a reservation without any limit.
Note If you expect frequent changes to the total available resources, use Shares, not
Reservation, to allocate resources fairly across virtual machines.
Sizing Virtual Machine Network
This topic describes aspects of virtual machine network sizing.
VM Network Overview
Packets that are sent by a guest application pass through multiple layers before they go on to
the wire. The message from the application is first processed by TCP/IP in the guest operating
system. After the required headers have been added to the packet, it is sent to the device driver
in the virtual machine.
Applying VM I/O Parameters
VM I/O sizing is done upon VM creation. The following parameters can be applied in a VM
profile to address VM I/O requirements:
• Number of network adapters defines the number of NICs that will be available to a VM.
• MAC address specifies an administratively assigned MAC address. If not specified, a MAC
address is dynamically assigned by the ESX hypervisor.
• Adapter type defines the type of adapter a VM will use. Different types can be used, offering
different levels of performance. The use of a VMXNET or E1000 adapter is recommended for
optimum performance. Remember that the vNIC type must be supported by the guest
operating system; i.e., there has to be a device driver installed in the guest operating
system.
• Flexible: This virtual adapter identifies itself as a Vlance adapter when the VM boots, but
initializes and functions as either a Vlance or a VMXNET adapter, depending on which driver
initializes it. Recent VMware Tools are required to support VMXNET representation and operation.
• E1000: This adapter is a virtual implementation of a physical network adapter that is
supported by newer operating systems like 32- and 64-bit Windows Vista. The performance
is intermediate between Vlance and VMXNET.
• VMXNET2 (Enhanced): This adapter is based on the VMXNET adapter but provides
high-performance features like jumbo frames. The guest operating system support depends
on driver availability; i.e., support is available for
32-bit Microsoft Windows XP, 32/64-bit Red Hat Enterprise Linux 5.0, 32/64-bit SUSE
Linux Enterprise Server 10, and 64-bit Red Hat Enterprise Linux 4.0.
• VMXNET3: This adapter is a paravirtualized NIC designed for performance. It offers all
the features available in VMXNET2, and adds several new features like multiqueue
support and MSI/MSI-X interrupt delivery. The guest operating system support depends on
driver availability; i.e., support is available for 32-bit
Microsoft Windows XP, 32/64-bit Red Hat Enterprise Linux 5.0, 32/64-bit SUSE Linux
Enterprise Server 10, and 32/64-bit Sun Solaris 10 U4 and later.
Note The availability of network adapter types depends on the VMware ESX version. In VMware
ESX 4 the options are Flexible, E1000, VMXNET2, VMXNET3.
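One way to use the adapter lists above is to pick the highest-performance type that a given guest supports. A Python sketch; the support table below is an illustrative condensation of the lists in the text, not an authoritative compatibility matrix.

```python
# Pick the best adapter a guest OS supports, ordered from highest to
# lowest expected performance. Support sets are illustrative only.

ADAPTER_RANKING = ["VMXNET3", "VMXNET2", "E1000", "Flexible"]

GUEST_SUPPORT = {
    "Windows XP 32-bit": {"VMXNET3", "VMXNET2", "E1000", "Flexible"},
    "RHEL 5 64-bit":     {"VMXNET3", "VMXNET2", "E1000", "Flexible"},
    "Solaris 10 U4":     {"VMXNET3", "E1000"},   # VMXNET2 list omits Solaris
}

def best_adapter(guest):
    for adapter in ADAPTER_RANKING:
        if adapter in GUEST_SUPPORT[guest]:
            return adapter
    raise ValueError(f"no supported adapter for {guest}")

assert best_adapter("Solaris 10 U4") == "VMXNET3"
assert best_adapter("Windows XP 32-bit") == "VMXNET3"
```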
Sizing Storage
• SCSI drive controller type (guest operating system)
• Number and type of disk(s)
- New
- Existing
- Raw Device Mapping (RDM)
VM storage sizing is done upon or after VM creation. The following parameters can be applied
in a VM profile to address VM storage requirements:
• SCSI drive controller type: Based on the selected guest operating system; the vCenter
client suggests the type
• Number of disks: Assigned to a VM by creating multiple VM disks
When assigning a disk to a VM, the administrator can either create a new disk or assign an
existing one.
The administrator must also select the VM volume type. VMware ESX supports VMFS and
RDM volume types.
Allocating LUN
Storage resources in a virtual environment can be either isolated or consolidated depending on
the nature of the I/O access patterns of a VM. Isolation means that a LUN is used to provision
space for a single VM, whereas consolidation uses a single LUN to provision storage to
multiple VMs.
The decision whether to isolate or consolidate depends on the VM requirements; i.e.,
application requirements. An isolation approach should be used if a heavy I/O generating
application is deployed within a VM, to assure proper storage access. A single LUN to a single
VM allocation can be done using either a VMFS or RDM volume-the latter, when deployed,
offering only an isolation approach.
The isolation approach has a scalability downside-the ESX LUN number limit can be quickly
reached. Also, when VM storage capacity needs to be increased, an additional disk/LUN needs
to be provisioned, a task serviced by the storage administration team.
Sizing Storage - New Virtual Disk
• Disk size
• Location - with VM or separate data store
• Provisioning
- Thick
- Thin
The administrator needs to define the following parameters when creating a new disk:
• Disk capacity: Defines the maximum size of the individual disk.
• Disk provisioning type:
Thin-the disk space on the storage device is allocated on a per demand basis; that
is, only the space actually used is allocated.
Thick (no option selected)-the VMDK appears fully sized on the datastore, but the
space is not zeroized. In this case if the storage device level thin provisioning is used
only the space actually used is allocated.
Support for FT clustering-the configured disk space is allocated on the storage
device.
• Location: Specifies whether the disk files are stored in the same location as VM files or in
a separate datastore
• Virtual device node: Specifies to which SCSI or IDE controller a virtual disk is connected
• Disk mode: Allows the disk to be configured as independent. If the selection is not changed,
the disk remains in a default state which allows VM snapshots to be created.
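The difference between the two provisioning types can be shown with a small accounting sketch. Python, with function and variable names invented for illustration.

```python
# Thin vs. thick provisioning: thick reserves the full configured size on
# the datastore up front; thin allocates only the space actually written.

def datastore_usage(disks):
    """disks: list of (configured_gb, used_gb, provisioning) tuples."""
    allocated = 0
    for configured_gb, used_gb, provisioning in disks:
        allocated += configured_gb if provisioning == "thick" else used_gb
    return allocated

disks = [
    (100, 20, "thick"),   # the full 100 GB is taken on the datastore
    (100, 20, "thin"),    # only the 20 GB actually written is taken
]
assert datastore_usage(disks) == 120
```

This is why a thin scheme uses the least space on the storage device, at the cost of having to watch for datastore overcommitment as thin disks grow.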
Using the Virtual Machine Sizing Criteria
This topic describes the criteria and process of virtual machine sizing.
Creating VM
• Options for creating a VM
- Manual in vCenter
- P2V conversion with vCenter Converter
- From template
- Import of third-party VM with
vCenter Converter
• After creating a VM
- Add/remove resources (disk,
network, etc.)
- Fine-tune resource sharing
- Install guest operating system
- Install VMware Tools
The sizing information for the CPU, memory, I/O, and storage is applied when the VM is
created or changed once the VM has already been created. Be aware that changing any of the
parameters once the VM is deployed may present issues with VM operation (depending on the
guest operating system and application used).
The input data for VM sizing are the characteristics and resource capacities of the physical
server that is to be converted to a virtual machine. This information can be gathered with
VMware Capacity Planner, which helps to determine the capacity and utilization for the CPU,
memory, storage, and network.
Recommended VM Configuration
• Critical VM
- Host CPU and memory reservation to guarantee resources
- Share values for periods of increased contention
• Separate drive for operating system and data
- Separate SCSI controller for high I/O requirements
• Use VMXNET3 for improved network performance
• Install VMware Tools
Allocating resources to a VM is important, especially when dealing with critical VMs. The
following recommendations can be taken into account for critical VMs:
• Use host CPU and memory reservations to guarantee resources.
• Set contention sharing for periods of increased contention.
• Use separate drives and SCSI controllers for operating system and data for high I/O
requirements.
• Use VMXNET2 or VMXNET3 network adapter type to improve network performance.
• Install VMware Tools.
A physical machine can be converted to a VM by using the free VMware vCenter Converter
utility-either local or remote machines can be converted. The conversion process does not
incur any downtime or disruption.
The administrator should tune the VM configuration when performing Physical-to-Virtual
(P2V) conversion.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
• Virtual machines have the same characteristics as physical
machines-CPU, memory size, storage space, and network
connectivity.
• VM CPU sizing defines the number of virtual CPUs along with
optional capacity reservations.
• VM memory sizing defines the memory allocated, along with
optional capacity reservations.
• VM network sizing defines network adapters and types for
network connectivity.
Summary (Cont.)
• VM storage sizing defines the number, type, and size of disks
allocated.
• The sizing criteria should take into account measured physical
server capacities and utilization, along with guest operating
system and application minimum requirements.
Module Summary
This topic summarizes the key points that were discussed in this module.
Module Summary
" VMware advanced services like VMotion, HA, FT, and DRS
require vCenter Server.
• Standard virtual switches have to be configured with the same
configuration on each individual ESX host to use VM mobility.
II The Cisco Nexus 1 OOOV distributed virtual switch enables mobility
for network configuration.
• VSM is the Cisco Nexus 1000V supervisor module integrated with
VMware vCenter.
• Cisco Nexus 1000V deployment requires management, control,
and packet VLANs for VS M to VEM communication.
,:;. :(·(r. "-, 0/.: -; ",., ... . ,.... "_ ~:'': ' ''' ... , ..0,,';
Q7) Which practice ensures proper operation of multiple Cisco Nexus 1000V domains on
the same ESX servers that are sharing control and packet VLANs? (Source: The Cisco
Server Virtualization Networking Solution)
A) Use different domain IDs for individual domains.
B) Configure port groups with different names.
C) Place VSMs on different management VLANs.
D) Put ESX servers in the same cluster.
Q8) How many vCPUs should a VM running a single-threaded application get? (Source:
Sizing a Virtual Machine)
A) one
B) two
C) four
D) eight
Q9) Which disk allocation scheme uses the least amount of space on a storage device?
(Source: Sizing a Virtual Machine)
A) thick
B) thin
C) zero
D) on demand
Module 10
Evaluating Cisco Unified Computing Solutions
Objectives
Upon completing this module, you will be able to evaluate design success criteria and ROI for
the Cisco Unified Computing solution. This includes the ability to meet these objectives:
• Explain design success criteria.
• Determine the design ROI.
Lesson 1
Understanding Design
Success Criteria
Overview
This lesson identifies and explains the need for design success criteria and what kind of criteria
are used to determine the successful design of a Cisco Unified Computing solution. It also
describes the design success criteria using an example.
Objectives
Upon completing this lesson, you will be able to describe and use design success criteria. This
ability includes being able to meet these objectives:
• Identify why design success criteria are required.
• Identify and evaluate the design success criteria.
• Evaluate existing deployment migration design success criteria.
Design Success Criteria Overview
This topic explains why design success criteria are needed.
The Cisco Data Center Unified Computing solution is used to address business, technical,
and environmental aspects.
Success is an important goal to strive for. A Cisco Data Center Unified Computing project is no
exception. Due to the different perceptions that can exist about what constitutes success, it may
sometimes be difficult to tell whether a project is successful. Time, cost, and quality are often
used as criteria for evaluating the success of a project.
To confirm that solution expectations have been met, a project design should be evaluated
against certain success criteria. The design success criteria are used to answer the following
basic questions:
• Does the solution design fulfill the requirements that have been established for the project?
• Does the solution create benefits from a business, technical, and/or environmental
perspective?
• Were the key business and technical goals that were identified at the start of the project
fulfilled?
What Should Be Evaluated?
• Compliance with business goals
• Compliance with technical goals
• Compliance with environmental goals
• Increased resource utilization
• Business continuity
• Power optimization
• Cost savings
• High availability
• Application reliability
• Error isolation
• Portability
• Quick application deployment
What are the key aspects of the solution? Have the stated project goals been achieved? Three
key questions to ask when evaluating a project are:
• How has the solution met the business goals that were set before deploying the solution?
• How has the solution met the technical goals that were set before deployment?
• How has the solution met the environmental goals that were set before deployment?
The goals that the solution should fulfill must create certain benefits, which, in the case of
Cisco Data Center Unified Computing, are as follows:
• More efficient resource usage: The Cisco Unified Computing solution provides the
capability for IT organizations to ensure that resources will be available and accessible for
applications.
• Error isolation: The Cisco Unified Computing solution serves as a safeguard to provide
security to the system against different possible disruptions or faults that can be a
consequence of unavoidable circumstances such as operating system failure.
• Increased overall security: Although it may seem otherwise, by separating users and
applications into various VMs the solution actually increases security levels among
interrelated and diverse segments. That is, a Cisco Unified Computing solution improves
security even when VMs share the same physical hardware.
• Quick provisioning: Rather than using troublesome procedures for storage installation and
management, the Cisco Unified Computing solution offers the capability to create new
VMs instantly without requiring physical servers. If properly designed, this also shortens
the time needed to set up storage and data management systems. What used to take weeks
can now be accomplished in several minutes.
• Portability and mobility: The use of intangible equipment and virtual resources allows
effortless movement of VMs from one physical server to another.
© 2009 Cisco Systems, Inc. Evaluating Cisco Unified Computing Solutions 10-5
Using the Design Success Criteria
This topic shows how to evaluate the design success criteria using the following example.
For the LAN and SAN network the following numbers apply:
• Servers are equipped with 2 LAN adapters and 2 HBAs, totaling 2000 Ethernet and 2000
FC adapters.
• 65 LAN switches are used for Ethernet connectivity:
63 switches are used in the access layer; 2 are used as core switches.
• 6 FC switches are used for SAN connectivity:
2 SAN islands are used, with 3 switches in each.
SAN switches are interconnected with 4 ISLs.
• 40 storage devices are connected to the SAN with 4 FC links each.
• 3130 cables are used as the Ethernet cabling infrastructure.
• 2172 cables are used as the Fibre Channel cabling infrastructure.
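The FC cabling figure above can be cross-checked with simple arithmetic. A Python sketch; attributing the 12-cable remainder to inter-switch links is an inference from the stated totals, not something the text spells out.

```python
# Back-of-the-envelope tally of the FC cabling above. 2000 HBAs implies
# 1000 dual-attached servers; 40 storage devices x 4 links = 160 cables.
# The remainder of the stated 2172 FC cables is inter-switch cabling.

servers = 2000 // 2                 # 2 HBAs per server -> 1000 servers
server_fc_cables = servers * 2      # one cable per HBA
storage_fc_cables = 40 * 4          # 4 FC links per storage device

stated_fc_cables = 2172
isl_and_other = stated_fc_cables - server_fc_cables - storage_fc_cables

assert server_fc_cables == 2000
assert storage_fc_cables == 160
assert isl_and_other == 12          # presumably the SAN island ISL cabling
```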
From an operational perspective the following numbers apply:
• Planned downtime is 3 hours per month.
• Unplanned downtime for the past year was 18 hours.
• The customer needs 20 hours to recover in case of a failure (e.g., server failure).
• It takes 40 hours to deploy a new application on a new server.
• Average server CPU utilization is less than 10 percent.
• Average server memory utilization is around 20 percent.
• Average storage utilization is around 25 percent.
Deployed Solution
• Network characteristics
- Server connectivity - Cisco UCS 6100
- Existing SAN - Cisco MDS
- LAN connectivity - Cisco Nexus 7000
- Unified fabric with FCoE
• Compute characteristics
- Cisco UCS 5100 with blade servers
- VMware for server virtualization
• Storage characteristics
- 2 disk arrays
You have redesigned a customer's data center by deploying Cisco UCS and have decided to use
the following:
• For the network:
Cisco UCS 6140XP Fabric Interconnect cluster for server connectivity
Cisco MDS switches in the existing SAN
Cisco Nexus 7000 for LAN connectivity to provide high throughput using 10GE
Unified fabric with FCoE to simplify server connectivity
• For the server:
Cisco UCS 5108 chassis with blade servers
VMware to virtualize and consolidate servers
• For the storage component:
2 disk arrays
Evaluating Business Goals
(Figure: bar chart of CAPEX and OPEX before and after deployment, on a $0 to $500,000 scale.)
Evaluating the business goals involves comparing the CAPEX and OPEX before and after the
Cisco Unified Computing solution is used. The main reduction is in OPEX, since much
less power is used and much less cooling is required.
Apart from that you are also comparing:
• How much time it takes to deploy a new server
• How much time it takes to implement failover in case of a server failure
• What the average resource utilization levels are
The time required to deploy a server is minimized from almost 2 days to 3 hours. Recovery is
instantaneous since VMware HA is used. In case of a server failure, VMware HA detects it and
starts the VMs from the failed server on another ESX server.
The server deployment time has been reduced due to a combination of different mechanisms.
First, the Cisco UCS Solution speeds up several administrators' previously lengthy tasks:
• Templates are used for new physical server deployments.
• Server LAN and SAN connectivity is deployed via Cisco UCS Manager without the need
to disturb the network and storage team.
Second, because VMware vSphere is used for server virtualization, a new virtual server can be
rapidly deployed (provided physical computing resources are available) by using a server
template in the form of a "gold image."
In viewing the average utilization levels, the numbers reveal that the Cisco Unified Computing
solution helped the customer raise average CPU utilization from less than 10 percent to 82
percent, average memory utilization from 20 percent to 85 percent, and average storage
utilization from 25 percent to 75 percent.
Evaluating Technical Goals
• Planned downtime
• Unplanned downtime
• Recovery
Next you will evaluate the design against the stated technical goals to see whether it meets the
technical design criteria. First you compare the device count before and after the Cisco Unified
Computing solution is implemented:
• The physical server count is reduced from 1000 servers to just 80 servers, yielding an
average server consolidation ratio of 12.5 to 1.
• The storage devices have also been consolidated from 40 to just 2 larger disk arrays.
• The cable and port count has been reduced by a factor of 62, from 3000 to 48.
• The facility now has more spare room, since 196 racks have been freed up for use, which
prolongs the data center lifespan by making extra space available for growth.
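The reduction ratios above follow from simple division; here is an illustrative sketch, assuming before counts of 1000 servers and 3000 cables, consistent with the quoted 12.5-to-1 consolidation ratio and the factor-of-62 cable reduction:

```python
# Before/after counts, as assumed above (illustrative, not an official figure).
servers_before, servers_after = 1000, 80
cables_before, cables_after = 3000, 48

consolidation_ratio = servers_before / servers_after  # 12.5, i.e., 12.5:1
cable_reduction = cables_before / cables_after        # 62.5, quoted as a factor of 62

print(f"{consolidation_ratio}:1 server consolidation")
print(f"{cable_reduction:.0f}x cable/port reduction")
```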
From the perspective of technical responsiveness, the Cisco Unified Computing solution in a
virtualized data center creates a reduction in the time required for planned, unplanned, and
recovery tasks.
The planned downtime is minimized to almost zero, since VMware VMotion is used to move the
virtualized server - the VM - away from the physical server that has been selected for
maintenance. Apart from that, the maintenance time related to server firmware (adapter,
enclosure, BIOS, etc.) is also minimized with the use of central server management - the Cisco
UCS Manager.
Unplanned downtime has been reduced to a minimum since multiple HA mechanisms have
been employed in combination: in the network, GLBP and redundant links in EtherChannel and
vPC; in the virtualized server environment, VMware HA to tackle server failure; and
application-based clusters, which also handle application-related failures.
Recovery time has been minimized as a consequence of using a combination of HA
mechanisms.
Evaluating Cabling Benefits
Finally, viewing the benefits from a cabling perspective, the passive infrastructure has been
significantly reduced.
• Adapters have been reduced by a factor of 25, from 4000 to only 80. This was made
possible with the use of FCoE and the deployment of CNAs. Each of the 80 servers has one
CNA with redundancy support.
• The number of switches has been reduced to six:
Two Cisco UCS 6100 Series Fabric Interconnects used to connect the 10 blade
enclosures
Two Cisco Nexus 7000 Series Switches used for core switches
Two Cisco MDS switches used to connect the Fibre Channel-based disk arrays
Cabling count has been significantly reduced - from 5302 to only 48 cables.
• Each of the 10 blade enclosures is connected using 2 cables to each of the two Cisco UCS
6100 Fabric Interconnects in a cluster, totaling 40 cables for server-to-fabric interconnect
connectivity.
• Each UCS 6100 Fabric Interconnect is connected to the Cisco Nexus 7000 Series Switches
using two cables. In total, 4 cables are used.
• The two disk arrays are connected to MDS switches with 16 cables - 8 per disk array to
connect each disk array to both SAN islands.
• The MDS Fibre Channel switches are connected to the Cisco UCS 6100 Series Fabric
Interconnects with two cables each, totaling four cables.
The management connectivity for the servers is embedded within the Cisco UCS 5100 Blade
Enclosure to Cisco UCS 6100 cabling - no special cabling is required to manage the servers.
Using the Design Success Criteria
This topic compares the existing solution with the Cisco Unified Computing Solution without
changing the number of ports.
Existing Deployment
• Physical servers - total 857
  - 525 using the Windows operating system
  - 300 using the Linux operating system
  - 32 using VMware ESX
• Server types
  - DAS - dual-attached server
  - QAS - quad-attached server
  - Blade chassis
• Migration to Cisco Unified Computing Solution
  - No additional server virtualization
We have created a Cisco UCS design for the existing deployment described in the slide above.
The deployment has 857 physical servers.
The migration to the Cisco UCS is done in a way that preserves the number of physical servers.
Existing Deployment (Cont.)
• Number of racks = 59
Cisco Unified Computing Solution
The Cisco Unified Computing Solution has already been designed for this existing deployment
migration. The tables above summarize the information about this solution.
Comparing Solutions
Better results even without additional server virtualization
When comparing the two solutions, you can see solely from an infrastructure quantity
perspective that the designed Cisco Unified Computing solution brings substantial savings in
terms of:
• Infrastructure quantity
• Required rack space
• Required power
• Heat produced
But the infrastructure quantity gains are not the only benefits of the solution. Now the customer
requires far less time to provision the servers.
Bear in mind that the solution did not introduce any extra server virtualization, since the
customer requested that none be introduced. Thus, even greater savings and better results would
be achieved if additional servers were virtualized.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
• The design success criteria are used to evaluate the virtualization benefits.
• The design success criteria evaluate business, technical, and environmental goals.
• To evaluate design success, a measured data set before and after solution deployment
must be gathered.
• The design success criteria show that the virtualization solution provides major benefits.
Lesson 2
Determining the Design ROI
Objectives
Upon completing this lesson, you will be able to describe and perform ROI calculations for
data center solutions. This ability includes being able to meet these objectives:
• Identify the ROI criteria.
• Explain how to evaluate the deployed virtualization solution ROI.
Design ROI Overview
This topic introduces and explains the approach to calculating ROI and comparing various data
center solutions.
ROI Calculation
• Identify business and technical requirements for the data center
• Identify expected or maximum future growth for the data center
• Identify technological differences between traditional and
consolidated and virtualized data centers
• Quantify CAPEX and OPEX for both
• Calculate ROI based on the cost comparison
Business requirements are the driving factor in building a data center. Business requirements
must first be translated into technical requirements, which can then be used to start planning a
data center.
There are many factors that can guide you to the right solution, but cost is one of the most
significant decision factors.
Quantifying CAPEX and OPEX can help determine the right decisions in data center planning.
ROI Calculation
Technical Requirements
• Initial and expected future maximum growth
  - Number of application servers:
    - CPU power
    - Memory
    - Storage capacity
  - Connectivity requirements (core, VPN, Internet)
• High availability requirements
• Security requirements
The business and technical characteristics desired for the project should allow us to estimate the
size of the data center and determine the types and quantity of data center equipment needed.
Such estimates should also include initial technical requirements as well as predicted future
growth.
ROI Calculation
Solution Options
• Traditional data center design
  - Standalone servers with built-in storage to accommodate
application servers
  - Layered LAN solution
  - Layered WAN solution
• Cisco Unified Computing solution
  - Fewer high-performance blade servers, with virtualization used
to accommodate application servers
  - Centralized storage
  - Consolidated LAN and SAN (with VLAN and VSAN capabilities)
• Mixed data center design
  - Consolidation and virtualization of some parts of the data center
In general, possible data center solutions range from the oldest (i.e., the most traditional
design) approach to the newest - a consolidated and virtualized data center.
A traditional data center is where each application server is hosted on its own physical server
using its own internal storage (disks).
The Cisco Unified Computing solution is where resources are consolidated and virtualization is
used to keep servers logically separated (isolated).
Other solutions (mixed) are typically partially consolidated and virtualized solutions where
some parts of the data center are built according to a more traditional approach.
ROI Calculation
Comparing Solutions
(Figure: CAPEX and OPEX evaluated for each option; select the solution with the fastest ROI)
The figure illustrates how to create multiple calculations based on defining multiple data center
solutions ranging from traditional to a full Cisco Unified Computing solution.
When evaluating solutions, you would initially evaluate the costs for each of the options. In our
case, these are the traditional DC design, the mixed DC design, and the Cisco Unified
Computing solution.
The ROI describes how quickly the investment is returned. Thus, when comparing options, the
solution that has the fastest ROI is the best. When comparing the three options, not only the
infrastructure costs, quantities, and power consumption are evaluated but also the management
overhead and time needed to provision a new server.
ROI Calculation
Comparing Solutions
• Cost comparison scenario 1
  - CAPEX - Cisco Unified Computing solution < Traditional DC
    • The initial cost of the solution can be smaller
  - OPEX - Cisco Unified Computing solution << Traditional DC
    • The operational expenses will be smaller
  - ROI << 3 years
• Cost comparison scenario 2
  - CAPEX - Cisco Unified Computing solution > Traditional DC
    • The initial cost of the solution can be larger
  - OPEX - Cisco Unified Computing solution << Traditional DC
    • The operational expenses will be smaller
  - ROI < 3 years
This ROI calculation reflects the difference between a traditional DC and the Cisco Unified
Computing solution. Two scenarios are generally possible:
• The CAPEX and OPEX of the Cisco Unified Computing solution are smaller. The typical
ROI will be less than three years.
• The CAPEX of the Cisco Unified Computing solution is higher, but the OPEX is smaller.
The Cisco Unified Computing solution is still a better choice since the CAPEX is the initial
cost, whereas the OPEX is the recurring expense. A greater CAPEX accounts for additional
equipment, technology, and new knowledge required while being offset by a lower OPEX
due to energy savings, flexibility, and manageability.
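The second scenario amounts to a simple payback-period calculation. As an illustrative sketch, with invented cost figures (not taken from the course material):

```python
# Hypothetical scenario-2 numbers: the UCS CAPEX is higher than the
# traditional design's, but the recurring annual OPEX is much lower.
def payback_years(extra_capex, annual_opex_savings):
    """Years until accumulated OPEX savings repay the added initial cost."""
    if annual_opex_savings <= 0:
        return float("inf")  # the investment is never repaid
    return extra_capex / annual_opex_savings

# E.g., $300,000 more CAPEX offset by $150,000 per year lower OPEX:
print(payback_years(300_000, 150_000))  # 2.0 -> ROI in under 3 years
```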
Evaluating Design ROI
This topic describes the approach to calculating ROI by comparing various data center
solutions with the consolidated and unified computing data center design.
The Cisco Unified Computing solution has both a positive and negative impact on CAPEX:
• Less equipment is needed because of consolidation and virtualization (i.e., resource usage
is optimized [e.g., servers, disks, memory, network connectivity]).
• Less space is required.
• Smaller power and air conditioning capacity is required due to lower power consumption.
• Less networking equipment and cabling is needed.
• Expensive high-performance equipment is used (e.g., more and faster CPUs, more memory,
centralized storage).
• Additional licensing, support, and training for virtualization software is required.
ROI Calculation Input
OPEX
• Cisco Unified Computing solution impact on OPEX:
- Less space is required.
- Less direct and indirect power consumption is needed.
- Less equipment must be maintained.
- Fewer personnel are required.
- Additional licensing and support for virtualization is required
(optional).
Example: Direct Power Consumption
Savings on hardware pay for virtualization licensing and training
=> ROI = 0 years, reduced CAPEX
• Traditional deployment
  - 90 physical servers required
Example: Air Conditioning
Savings on power consumption pay for virtualization licensing and support
=> Reduced OPEX
• Typical CoP = 2.5
• Cooling power consumption
  - Traditional deployment = 125 kW
  - Cisco UCS solution = 42.5 kW
• Price per kWh: 10¢
• Option 1: $109,500 annually
• Option 2: $37,230 annually
• Annual savings: $72,270
When building a new data center, you could use the same ROI and TCO principles when
deciding on the air conditioning (CoP range is typically anywhere from 2 to 3).
Air conditioning support infrastructure should be planned for maximum future requirements.
Definition: CoP (coefficient of performance) defines the ratio between the cooling power
and the power consumed.
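A minimal sketch of the cost arithmetic behind the slide figures (8760 hours per year, 10¢ per kWh; the 125 kW and 42.5 kW cooling power draws are taken from the slide):

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost(power_kw, price_per_kwh=0.10):
    """Annual electricity cost for a constant power draw."""
    return power_kw * HOURS_PER_YEAR * price_per_kwh

traditional = annual_cost(125.0)  # option 1: $109,500
ucs = annual_cost(42.5)           # option 2: $37,230
print(round(traditional), round(ucs), round(traditional - ucs))  # savings: $72,270
```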
Example: Space
• Required support space is identical for both solutions
(benefits are visible in large data centers).
• Power and cooling restrictions
- 10 kW per rack
• Traditional deployment
Electrical systems
- 90 RUs space
Adequate space requirements should take into account all data center support functions:
• Electrical systems and cabling (power distribution, UPS, generators, and fuel)
• Air conditioning systems and piping
• Administrative, storage, and empty space
• Fire suppression systems
Space requirements should be designed and implemented for maximum future expected growth.
Evaluating Existing Deployment Migration
• Typical CoP = 2.5
• Existing deployment
  - 857 servers
  - 59 racks
  - Power consumption = 355 kW
  - Cooling power = 887.5 kW
• Cisco UCS solution
  - 857 servers
  - 36 racks
  - Power consumption = 162 kW
  - Cooling power = 405 kW
• Annual cost at 10¢ per kWh:
  - Existing deployment: $1,088,430
  - Cisco UCS solution: $496,692
  - Savings: $591,738
You can compare the power and cooling operational expenses for the existing deployment
migration. You know the original power consumption and the power consumption of the
proposed Cisco Unified Computing solution.
Using those numbers and assuming that the price per kWh is 10¢, you quickly conclude that the
annual savings would be $591,738.
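The same arithmetic reproduces the quoted savings from the slide's power figures (equipment plus cooling, 8760 hours per year, 10¢ per kWh):

```python
HOURS, PRICE = 8760, 0.10  # hours per year, $ per kWh

existing_kw = 355 + 887.5  # equipment + cooling power, existing deployment
ucs_kw = 162 + 405         # equipment + cooling power, Cisco UCS solution

existing_cost = existing_kw * HOURS * PRICE  # $1,088,430
ucs_cost = ucs_kw * HOURS * PRICE            # $496,692
print(round(existing_cost - ucs_cost))       # 591738
```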
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
• To determine the ROI, both initial and operating costs have to be
examined.
• The reduced operational costs with the Cisco Unified Computing
solution help reach ROI faster.
• Power-related costs include the air conditioning.
• The Cisco Unified Computing solution uses less space since more equipment can be
placed in the racks.
Module Summary
This topic summarizes the key points that were discussed in this module.
Module Summary
• Design success criteria evaluate the solution according to the requirements.
• Technical goals are evaluated by comparing the number of servers, adapters,
switches, ports, and power drops for the existing and the new solutions.
• To evaluate the ROI of both the traditional and the Cisco Unified Computing
solution, compare the power costs (for equipment and cooling).
Module Self-Check
Use the questions here to review what you learned in this module. The correct answers and
solutions are found in the Module Self-Check Answer Key.
Ql) Why are design success criteria used? (Source: Understanding Design Success Criteria)
A) to evaluate the solution against requirements
B) to calculate CAPEX
C) to calculate OPEX
D) to determine ROI
Q2) Which factor is typically used to determine the amount of power consumed to cool the
equipment in a data center? (Source: Determining the Design ROI)
A) BTU
B) CoP
C) ROI
D) RU
Module Self-Check Answer Key
Q1) A
Q2) B
Appendix 1
Describing Cisco Unified Computing Solution Services
Objectives
Upon completing this lesson, you will be able to meet the following objectives:
Many enterprise data centers remain complex and siloed environments in which servers and
storage equipment are vastly underutilized, cycle times for provisioning new applications are
long, and operating costs--especially power and cooling costs--consume an ever-greater
percentage of the IT budget. More and more organizations are turning to virtualization to
address these issues. Unfortunately, many virtualization efforts do not deliver the expected
results because they focus only on servers, failing to account for storage, network, and other
critical resources.
Cisco Unified Computing System was designed to unify network, compute, and virtualization
resources into a single, preintegrated platform. The system provides the foundation for a broad
spectrum of virtualization and performance optimization efforts. Whether you're seeking to
create a stateless computing environment, enable just-in-time provisioning of resources,
simplify the movement of virtual workloads, or just reduce equipment and operating costs, the
UCS provides a powerful solution. But to realize the full benefit of these innovations, you need
to implement them quickly and correctly, in accordance with proven methodologies and
industry best practices.
Cisco Unified Computing Services help accelerate the transition to a unified computing
architecture and sustain and optimize the performance of that architecture after it is deployed.
Providing a unique network-based perspective and a unified view of all data center resources
and interdependencies, Cisco can extend the benefits of virtualization projects beyond the
domain of servers alone and help tune the entire data center environment to meet financial and
technical objectives.
Cisco Unified Computing Services include the following:
• Use case and architecture workshops
• Planning, design, and implementation service
• Support and warranty service
• Remote management service
• Optimization services
• Security services
© 2009 Cisco Systems, Inc. Describing Cisco Unified Computing Solution Services A1-3
Cisco Unified Computing Services
Benefits
• Accelerate customer transition to a unified computing architecture
• Benefits:
  - Help achieve business and technical goals
  - Mitigate virtualization risks
  - Enhance the solution agility
  - Reduce complexity
  - Simplify management
  - Optimize Cisco UCS uptime, performance, and efficiency
  - Improve application performance
Cisco Unified Computing Services combine Cisco's broad data center expertise with that of
Cisco's industry-leading partners to accelerate customer transition to a unified computing
architecture. These services help customers:
• Achieve business goals by aligning the data center strategy with unique business and
technical objectives
• Speed deployment and mitigate risks of virtualization and other data center projects by
applying best practices and methodologies
• Enhance agility by improving the mobility and manageability of traditional and virtual
workloads and enabling just-in-time provisioning of resources
• Lower operational costs by identifying opportunities to reduce data center complexity and
simplify management
• Optimize the uptime, performance, and efficiency of unified computing systems to help
maximize the value of their investment
• Provision new applications more quickly
• Strengthen in-house expertise through knowledge transfer and mentoring
• Improve application performance and availability to meet service-level agreements
Cisco Unified Computing Workshops
This topic describes the Cisco Unified Computing workshops.
Workshops Overview
• Understand the unique project environment
• Isolate business objectives
• Map the technical objectives
• Gather input across the organization
• Help understand project challenges
• Two workshops:
  - Use Case Workshop
  - Architecture Workshop
The first step in applying the benefits of unified computing to your business is isolating your
objectives. The Cisco Unified Computing System provides a powerful platform for a broad
range of projects, from stateless computing to virtualizing resources to improving the mobility
and manageability of virtual machines. But to fully realize your goals, you need a clear
understanding of how the solution will support your project in your unique environment.
Unified computing workshops help you to definitively map out the business and technical
objectives for your unified computing project. These workshops gather input from stakeholders
across your organization to understand your strengths and challenges and the issues that need to
be addressed to make your project goals a reality.
Use Case Workshop
• One-day interactive session:
- Define project objectives
- Align project with best practices
• Gain understanding of Cisco UCS usage
• Deliverables:
- Cisco UCS implementation requirements
- Next steps recommendation
This one-day interactive session is held at the customer's place of business and is designed to
clearly define project objectives and, when possible, align the customer's project goals with a
set of industry best practices for achieving them. Through this process, the customer gains a
clear conceptual understanding of how the Cisco Unified Computing System will be used to
meet stated goals. At the end of the workshop, the customer receives documentation of the
requirements for implementing unified computing in their environment as well as a set of
recommendations detailing next steps.
Architecture Workshop
• Four-day interactive session
- Define project objectives
- Map objectives with best practices
- Architecture considerations across
current and planned Data Center
resources
- Identify project challenges, success
factors, risks, requirements
• Deliverables
- Project requirements
- Next steps recommendation
- Recommended architectural design
This four-day workshop also begins by isolating the customer's objectives and mapping them
to a set of best practices for achieving them. It then continues with in-depth documentation
of architecture considerations across the customer's existing and planned data center assets,
including blade servers, computer systems, applications, operating systems, virtualization,
input/output (I/O), and system management. The workshop documents the challenges, critical
success factors, and risks involved in the customer's project and the requirements the customer
will need to meet based on the projected environment and user demands. At the end of the
engagement, the customer receives documented requirements and a set of recommendations for
next steps, as well as an architectural design that provides a blueprint of the recommended
architecture for the implementation.
Cisco Unified Computing Planning, Design, and Implementation Service
This topic describes the plan, design, and implementation service.
The Cisco Unified Computing solution can deliver significant advantages for the customer's
business, but it can also introduce new challenges.
To realize the full benefits of the customer's planned project, the customer needs to account for
the entire data center environment, including servers, storage, network, security, and
applications.
The Cisco Unified Computing Planning, Design, and Implementation Service helps the
customer take the right steps to achieve business and technical objectives and helps to reduce
risk in the unified computing implementation.
Five service options are available:
• Preproduction pilot
• Server virtualization mobility and management
• Accelerated deployment
• Migration plan and delivery
• Installation
Preproduction Pilot
• Four-week pilot:
  - Conducted in the customer's environment
  - Validates that the project delivers the business and technical objectives
  - Gain expertise using Cisco UCS
  - Develop a deployment blueprint
• Deliverables:
  - Validated business case
  - Architectural design
  - Detailed runbook
If the customer is still evaluating the Cisco Unified Computing System, a pre-production pilot
is the best place to start.
This four-week engagement conducts a pilot in the customer's environment to validate that the
project will deliver the business and technical objectives expected.
Through this pilot, the customer gains valuable expertise using the Cisco Unified Computing
System before committing to a purchase and develops a production-ready blueprint for
deploying in their environment.
At the end of the engagement, the customer receives documentation of their validated business
case, an architectural design, and a detailed runbook that the customer can use to bring the
solution into production.
Server Virtualization Mobility and Management
• In-depth planning and design
  - End-to-end data center virtualization
  - Enhanced provisioning and management of virtual machines
• Evaluation of the current environment and processes
• Identify virtualization objective requirements
• Deliverables
  - Implementation information
  - Requirements
  - Architectural design
This service option provides in-depth planning and design for an end-to-end data center
virtualization strategy and for enhanced provisioning and management of virtual machines.
The service provides an exhaustive evaluation of the customer's current environment and
processes, and documents requirements across the existing and planned assets to achieve the
virtualization objectives.
At the end of the engagement, the customer receives all of the information needed to begin
implementing the virtualized data center, including detailed documentation of requirements and
an architectural design.
If the customer is implementing an end-to-end data center virtualization strategy and seeking to
provision and manage virtual machines more effectively, they can also extend the engagement
to include deployment.
This helps bring the virtualized environment online, including setup and implementation of
hypervisor technology. The service also provides a detailed runbook that you can then use to
scale the implementation throughout your production environment.
Accelerated Deployment
• For committed Cisco UCS implementations
• Four-week activity
  - Planning, design, implementation
  - Bring the project into production
  - One data center segment
• Deliverables
  - Recommended next steps
  - Architectural design
  - Implementation scale runbook
• Transfer of knowledge
If the customer has already committed to implementing the Cisco Unified Computing System,
the accelerated deployment service option provides planning, design, and implementation
expertise to bring the project into production within four weeks in one segment of the data
center. At the end of the engagement, the customer receives documentation of the
recommended next steps, an architectural design, and a runbook to scale the implementation
throughout the environment. The customer also receives knowledge transfer through mentoring
to help scale the implementation.
Migration Plan and Delivery
• Transition from existing to a Cisco UCS-based architecture
- Expert support for transition
- Support for x86 and legacy platforms
• Deliverables
- Migration plan
This service option provides expert support to smooth the transition from existing server
platforms to a Cisco Unified Computing System architecture.
Migration support is available for both x86-based and legacy (non-x86) server platforms. Using
industry best practices, Cisco helps create and deliver a migration plan to speed migration and
mitigate risks.
Installation
• Help deploy Cisco Unified Computing solutions with minimal
disruption
• Expert installation of:
- Cisco UCS, network, storage networking devices, cabling
- Network and management software
This service option provides expert installation of all interconnected hardware in the customer's
data center and helps deploy unified computing in the customer's environment with minimal
stress or disruption to operations.
The service encompasses compute platforms, network and storage networking devices,
interconnects and cabling, and installation and configuration of related network and
management software.
Cisco Unified Computing Support and Warranty
Service
This topic describes the Cisco support and warranty service.
The more benefits the customer realizes from the Cisco Unified Computing System, the more
important the technology becomes to the business.
If an issue arises, the customer wants support from dedicated specialists with in-depth expertise
in virtualized data center environments, server hardware and software, and unified computing
technology.
Customers can be confident they are covered with Cisco Unified Computing Support and
Warranty Services.
Augmenting the Cisco Unified Computing System warranty, Cisco's award-winning support
services help increase uptime, quickly resolve issues, and get the most from the unified
computing investment.
Cisco Unified Computing Support and Warranty Services include:
• Cisco Unified Computing Warranty Plus
• Cisco Unified Computing Support Service
• Cisco Unified Computing Mission-Critical Support Service
Cisco Unified Computing Mission-Critical Support Service
If the availability of the customer's unified computing environment is vital to the operation of
the business, the customer can choose the Cisco Unified Computing Mission-Critical Support
Service option. It includes everything in the Unified Computing Support Service plus direct
access to Cisco engineers who understand the environment and an assigned technical account
manager to provide a single point of contact for all support issues. The customer also has the
option of bringing a field engineer onsite to help proactively assure that the system operates
efficiently and to address situations that could impact system availability.
Cisco Unified Computing Remote Management
Service
This topic describes the remote management service.
The benefits of data center consolidation, virtualization, and automation are substantial, but
transforming a data center environment also creates new challenges for IT organizations.
Planning and managing virtual environments, including staffing, application performance,
remote office support, and security, requires new capabilities and expertise.
The Cisco Remote Management Services (RMS) and Cisco Advanced Performance Monitoring
Service enable the customer's organization to realize immediate benefits from the Cisco
Unified Computing investment by providing complete monitoring and management using
Cisco's industry-proven practices, networking expertise, and innovative tools.
Cisco Unified Computing Remote Management Services provide physical and logical
monitoring and management of all unified computing hardware and software elements. The
services are composed of flexible standard and elective elements that may be combined to
deliver a tailored solution to meet customer needs.
Remote Management Service Options
• Standard RMS service for Cisco UCS:
- Remote monitoring
- Incident management
- Service-level management
• Elective Service:
- Utilize Cisco experts for unified computing-related activities and
changes
- Support change, release, configuration, and patch management
• Cisco Advanced Performance Monitoring Service:
- Addition to management services
- Baseline and monitor business-critical application performance:
• SLA application response time monitoring
• Fault isolation
• Reporting
Elective Service
Provides the customer with access to Cisco engineers to support change, release, configuration,
and patch management.
Delivered as a usage-based block of monthly hours, this service enables the customer to utilize
Cisco expertise for customer-requested activities and changes to their unified computing
environment.
Cisco Data Center Optimization Service
This topic describes the Data Center optimization service.
Business is dynamic and continually evolving. To meet users' ever-changing demands for
responsiveness, application performance, and efficiency, the data center and unified computing
architecture must be able to adapt and evolve as well.
The Cisco Data Center Optimization Service provides a suite of recurring, subscription-based
service options to help the customer continually optimize operational efficiency, achieve peak
performance of data center resources and applications, and apply industry best practices to the
operation of the environment.
This service covers all aspects of the data center, including:
• End-to-end architecture and virtualized environment
• Application deployment and delivery
• Application network performance
• Unified computing systems and server virtualization
• Storage area networking
• Unified fabric
Cisco Unified Computing Optimization
Service Options
• Architecture review:
- Ensure that Cisco UCS meets requirements
- Analysis of primary performance metrics
• Configuration audit:
- Comprehensive data capture
- Review current Cisco UCS configuration
parameters
- Provide best practices and recommendations
to improve efficiency
• Capacity and performance audit:
- Comprehensive capacity and performance
review
- Based on primary performance data
• Security audit:
- Comprehensive systems health assessment
- Full system security audit
Drawing on Cisco's broad experience optimizing virtualized environments, both internally and
for Cisco's largest and most successful customers, this service helps the customer to
continually improve the performance and availability of the data center resources, reduce risk,
and achieve operational excellence.
To help the customer maintain optimal performance of the unified computing systems, this
service includes four unified computing options:
• Architecture review: Helps ensure that the Cisco Unified Computing Systems are
continuing to meet the business requirements. The review includes an analysis of primary
performance metrics.
• Configuration audit: Provides a comprehensive data capture and review of the current
configuration parameters of the Cisco Unified Computing Systems. This audit provides an
IT organization with best practices and recommendations to improve operational efficiency
and resource utilization.
• Capacity and performance audit: Provides a comprehensive capacity and performance
review of the customer system by examining primary performance data.
• Security audit: Comprehensive assessment of the health of the customer's system by
examining security parameters. It includes a full audit of system security management
reports and documents any identified exception cases, alarm patterns, and policy violations
so an IT organization can maximize the security of the server and computing infrastructure.
Cisco Security Services
This topic describes Cisco security services.
Virtualization introduces profound changes in servers and data centers. Although these changes
can be hugely beneficial, they might also present new potential attack vectors for criminals with
malicious intent, as well as new opportunities for misconfiguration or error.
Cisco Security Services protect customers' businesses by helping assure them that the
virtualization strategy is based on industry best practices for security, as well as years of
experience from Cisco and its trusted data center partners in critical data center environments.
Cisco Data Center Services
This topic describes the data center services.
Cisco Data Center Services helps the customer consolidate, virtualize, and automate the data
center to meet business goals, increase efficiency, and lower operating expenses.
Even if unified computing is central to the customer's virtualization strategy, the data center
encompasses a broad range of other critical infrastructure and processes beyond the Cisco
Unified Computing System.
Whether the customer wants services for the entire data center or for specific data center
technologies, Cisco offers a broad array of data center services to meet those needs, including:
• Strategic IT and architecture services to help prepare a strategy for the data center initiative
• IT planning, design, and implementation services to help the customer execute the strategy
• Technical support and operations management services to help maintain the health of the
data center
• Optimization services to help maintain a high level of performance as the data center
evolves
• Efficiency and Facilities Services to help adopt a greener IT strategy and design and build
out efficient data center facilities
Cisco's unique architectural approach provides a unified view of data center resources,
empowering the customer to tune the data center environment for optimal application
performance and availability.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
• Workshops help understanding of the project environment in regard
to business and technical objectives.
• The plan, design, and implementation service has multiple options that
address the complete business lifecycle.
• Support and warranty services help customers to increase uptime
and speed issue resolution.
• Remote management services provide complete solution
monitoring and management.
Summary (Cont.)
• Data Center optimization service continuously addresses the
need to adapt the solution.
• Security service addresses the security aspects of the solution.
• Data Center services encompass complete data center
consolidation, virtualization, and automation.
Appendix 2
Understanding Policy
Retention
Overview
This lesson identifies the policy retention concept and describes how the Cisco Data Center
Unified Computing solution addresses this.
Objectives
Upon completing this lesson, you will be able to meet the following objectives:
• Identify and describe the benefits of policy retention.
• Identify and describe policy retention within the Cisco Unified Computing solution.
Identifying Policy Retention
This topic describes the policy retention concepts.
Typically the network and storage policies are applied to the Ethernet or Fibre Channel switch
port. This is not very effective since these are applied for all the VMs that reside on the
physical server; thus there is no differentiation between the VMs.
The policy retention capabilities are important for the Cisco Unified Computing solution since
they reflect the required management overhead and govern the granularity of policy that can be
applied per server. The capabilities can be observed from two perspectives: either the virtual
machine or bare-metal server deployment.
From the virtual machine perspective, policy retention is the ability to preserve the network
policy (which encompasses security, QoS, and port setting configuration), the storage policy
(which encompasses zone membership, LUN access, and WWN and FCID assignment), and the
server-level management, accountability, and so on, per individual virtual machine.
From the bare-metal server perspective, policy retention means that the server personality that
is defined with server identifiers like WWN, MAC address(es), firmware, BIOS, UUID, and so
on is preserved for the server.
Policy Retention Challenges
• Physical server replacement/migration - reconfiguration and
installation required
• VM challenges imposed by:
- Virtualization - policies typically enforced per physical
server/port (Ethernet or Fibre Channel)
- Mobility - impossible to enforce the policies upon virtual
machine motion
- Transparency - difficult to correlate network and storage
resources to virtual machines
- VMware vMotion moves VMs across ESX hosts
There are multiple challenges that affect the policies and characteristics of virtual machines and
bare-metal server deployments.
From the virtual machine perspective, the challenges are imposed by the server virtualization
nature itself: the virtual machine is decoupled from the underlying physical hardware and is
able to move between the ESX hosts, which incurs the following challenges:
• Policies are typically enforced per physical server/port (either Ethernet or Fibre Channel) of
the underlying physical server; virtualization breaks the normal server-port relationship.
• When the virtual machine is in motion it is hard to enforce any policies deployed.
• Since the virtual machines are decoupled from the underlying physical infrastructure it is
hard to correlate network and storage resources to a virtual machine.
For example, the attempt to specify a policy bound to VM source MAC addresses is not
effective since the MAC addresses are assigned by the ESX hypervisor and can change over
time, even if the VM does not move to another ESX host.
If the VM moves to another ESX host the policy applied to a port is lost. A solution might be to
apply the same policy for that specific VM MAC address to all the ports where ESX hosts are
attached. It is obvious that such a solution is not scalable and could present management
difficulties.
From the bare-metal server deployment perspective, the main challenge comes from the
requirement to replace a faulty server or to migrate to a new server. Normally such a migration
requires the same amount of configuration and installation as with the former server.
Of course the computing solution can exist with no policy retention capabilities, which in
essence means that more administration and management work is required for solution
operation and maintenance.
The key benefit of using policy retention capabilities is that one can more easily scale a
computing solution while still preserving transparent solution maintenance and ensuring
operational consistency.
For the virtual machine, policy retention means that the configuration is applied and bound to
the virtual machine and not to the underlying physical server. The virtual machine gets its own
identifiers just as the physical server does. If the VM moves, the policy follows without any
extra configuration necessary.
For bare-metal server deployment, the preservation of server personality means that when the
underlying physical server is replaced or changed, the new one is assigned the saved server
personality; thus no installation and configuration is required. Be aware that in order to really
benefit from the policy retention, the bare-metal server deployment should utilize SAN or LAN
boot with which the operating system and application installation is decoupled from the local
server storage (the disk drive).
Describing Cisco Data Center Unified Computing
Solution Policy Retention
This topic describes how policy retention is handled by the Cisco Data Center Unified
Computing solution.
The Cisco Unified Computing Solution can preserve the policy at different levels and aspects
for different policies whether a network, storage, or even bare-metal server personality.
To deploy and utilize the policy retention for the solution, the Cisco UCS, Cisco Nexus 1000V,
and Cisco MDS 9000 family switch products are used with mechanisms like UCS service
profiles, port profiles, and NPIV functionality.
When virtual machines are deployed on an ESX host, by default a native virtual switch is used
to provide connectivity to the outside world. The configuration and feature options on the
native virtual switch are limited. By default it does not enable the policy retention capability.
From VMware version 4.0 onward, the ESX hypervisor supports a distributed switch
architecture with which a common switching infrastructure is deployed. The Cisco Nexus
1000V with VN-Link functionality is currently the only distributed switch option that
completely addresses policy retention with per-VM policy granularity and management.
With the Cisco Nexus 1000V, the administrator gets a single point from which the policies are
configured, and the applied policies follow the VM upon migration to another ESX host
without the need to pre-provision any network parameters on each ESX host where the VM
could move. The process of adding a new ESX host to a cluster and enabling it for VM
vMotion thus becomes much easier.
Furthermore, the policies follow the VM upon migration, and the port statistics also follow the
virtual machine, which makes network monitoring more precise and debugging or
troubleshooting easier and more VM-aware.
From the perspective of the Cisco Unified Computing Solution, the virtual distributed
switch (the Cisco Nexus 1000V) becomes the virtual access layer of the network.
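The policy-follows-the-VM behavior described above can be illustrated abstractly: the policy is looked up by VM identity rather than by physical switch port, so the result is the same no matter which host the VM currently runs on. The following Python sketch is purely illustrative (the profile names, VLAN numbers, and host names are invented; this is not Cisco Nexus 1000V code):

```python
# Illustrative sketch (not Nexus 1000V code): policies keyed by VM identity,
# not by physical switch port, so they survive vMotion between hosts.

port_profiles = {
    "web-vm-profile": {"vlan": 100, "qos": "gold", "acl": "permit-web"},
}

# Policy binding is per VM, independent of the host/port it runs on.
vm_bindings = {"vm-web-01": "web-vm-profile"}

def effective_policy(vm_name, current_host):
    """Return the VM's policy; the host is irrelevant to the lookup."""
    profile = port_profiles[vm_bindings[vm_name]]
    return {"host": current_host, **profile}

# The same policy applies before and after a migration:
before = effective_policy("vm-web-01", "esx-host-a")
after = effective_policy("vm-web-01", "esx-host-b")
assert all(before[k] == after[k] for k in ("vlan", "qos", "acl"))
```

Keying the policy on the VM rather than on the port is what lets the same configuration apply before and after a vMotion event, with no per-port pre-provisioning.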
Virtual Machine SAN Connectivity
NPIV for virtual machine-level SAN segmentation:
• WWN assignment per VM
• Per-VM administration
• Per-VM access control and LUN masking
• Per-VM traffic management
• Each VM has its own virtual HBA
(Figure: VM1 and VM2 sharing a single physical Fibre Channel link through an N_Port to Cisco MDS 9124, MDS 9134, and MDS 9500 switches)
Virtual machines typically exist as a couple of files, with the VMDK being the one that holds
the actual virtual machine image (you could also call it a disk). Such a file resides on a disk
array LUN that is deployed with the VMFS file system, which is used to address concurrent VM
image access by different ESX hosts. Sometimes the virtual machine requires access to an
individual LUN; this might be necessary when a clustering solution is deployed. In such cases,
per-VM granularity for the storage policy is required. The per-VM storage granularity enables
the assignment of a WWN and FCID to an individual VM, which in fact enables deployment of
access control (zoning), LUN masking, and traffic management.
The granularity is achieved by enabling the NPIV functionality; with this turned on, the F_port
on the MDS switch (where the server is connected) accepts multiple WWNs.
The functionality can be deployed on Cisco MDS 9000 family switches and Cisco Nexus 5000
(the Fibre Channel functionality of the switch).
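The per-VM storage granularity that NPIV enables can be sketched in the same illustrative style: each VM logs in with its own WWPN over the shared physical HBA, and LUN masking then keys off that WWPN instead of the physical server's. The WWPN values and LUN names below are invented; this is not MDS switch configuration:

```python
# Illustrative sketch: with NPIV, each VM presents its own WWPN over the
# same physical HBA, so access control can be applied per VM.

physical_hba_wwpn = "20:00:00:25:b5:aa:00:01"  # shared by all VMs on the host

# Each VM gets its own virtual WWPN (values are made up).
vm_wwpns = {
    "vm1": "20:01:00:25:b5:aa:00:11",
    "vm2": "20:01:00:25:b5:aa:00:12",
}

# LUN masking per VM WWPN rather than per physical server.
lun_masks = {
    "20:01:00:25:b5:aa:00:11": ["lun-10"],
    "20:01:00:25:b5:aa:00:12": ["lun-20"],
}

def visible_luns(vm):
    """LUNs a VM can reach, determined by its own WWPN."""
    return lun_masks[vm_wwpns[vm]]

# vm1 and vm2 share one physical link but see different LUNs:
assert visible_luns("vm1") == ["lun-10"]
assert visible_luns("vm2") == ["lun-20"]
```

Without NPIV, both VMs would log in with the single physical WWPN, and zoning and masking could only be applied to the host as a whole.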
Cisco UCS
For server deployment, the Cisco UCS brings the benefit of retaining the server identity; i.e.,
the server becomes stateless by storing identity information such as MAC address(es), NIC
firmware and settings, WWN address(es), HBA firmware and settings, UUID, BIOS firmware
and settings, boot order, drive controller firmware, and disk drive firmware in a Cisco UCS
service profile.
Combining the service profile with the remote boot options (SAN or LAN) really makes the
server stateless. In other words, it decouples the server personality from the underlying physical
hardware.
With the Cisco Unified Computing solution, the repurposing of a server, or replacing or
migrating a server, becomes a quick and easy task; no configuration is required, and no
operating system or application installation is required.
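The stateless-server idea amounts to separating identity from hardware: the service profile carries the identifiers, and associating it with any blade gives that blade the saved personality. A hypothetical Python sketch (the field names and values are invented and do not reflect the actual Cisco UCS Manager object model):

```python
# Illustrative sketch: a service profile holds the server identity, so a
# replacement blade inherits the same personality with no reconfiguration.
# All identifiers below are made-up example values.

service_profile = {
    "uuid": "01234567-89ab-cdef-0123-456789abcdef",
    "macs": ["00:25:b5:00:00:01"],
    "wwns": ["20:00:00:25:b5:00:00:01"],
    "boot_order": ["san", "lan"],
}

def associate(profile, blade_serial):
    """Apply the profile's identity to a physical blade."""
    return {"serial": blade_serial, **profile}

old_server = associate(service_profile, "FCH1234AAAA")
new_server = associate(service_profile, "FCH5678BBBB")  # replacement blade

# Identity (UUID, MACs, WWNs, boot order) is preserved across the swap:
assert old_server["uuid"] == new_server["uuid"]
assert old_server["macs"] == new_server["macs"]
```

Because the identity lives in the profile and the operating system boots from SAN or LAN, swapping the underlying blade changes only the hardware serial, not the server's personality.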
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
• Policy retention enables IT professionals to apply and preserve
network, storage, and server-level policies per virtual machine.
• Policy retention enables IT professionals to preserve the server
personality upon replacement/migration.
• Virtualization, mobility, and transparency influence policy retention
operation.
• The Cisco Nexus 1000V is used to achieve VM network connectivity
granularity.
• Cisco UCS service profiles enable server personality
preservation.
Summary (Cont.)
• Per-VM SAN connectivity is deployed using the NPIV feature.
• Cisco UCS uses service profiles to abstract the server personality
from the physical blade.
Objectives
Upon completing this lesson, you will be able to identify and describe the general
environmental aspects observed for the Cisco DCUC solution. This ability includes being able
to meet these objectives:
• Identify general environmental aspects for the Cisco Unified Computing solution.
• Review the equipment environmental properties for the Cisco Unified Computing solution.
Environmental Aspects Overview
This topic identifies general environmental aspects of the Cisco Data Center Unified Computing solution.
Environmental aspects need to be observed in the design to properly plan the facilities,
including space, floor weighting, power, cooling, racking, cabling, delivery, and storage for the
installation of the Cisco Unified Computing solution components.
Addressing environmental aspects consists of two steps:
• First, evaluate the environmental requirements of the design. This means that based on the
solution design you need to determine the operating parameters of the equipment that will
be used for the solution implementation.
• Second, when the Cisco Unified Computing solution is deployed in an existing facility, the
environmental parameters of that facility need to be assessed.
Once that information is gathered, the site preparation documentation can be created. This
documentation includes:
• A Site Requirements Specification (SRS) document where the equipment operating
parameters are recorded
• A Site Survey Form (SSF), which is used for the inspection of the facility's environmental conditions
Evaluate Design Environmental
Requirements
• Determine the operating parameters for the equipment:
- Temperature
- Humidity
- Altitude
- Grounding requirements
- Dimensions and required space
- Required power, power cables, and voltage
- Airflow and heat dissipation
- Weight
- Cabling and interface requirements
• Determine the existing environmental conditions of the facility:
- Cabinet and rack specification and availability
- Maximum floor loading and weight distribution
- Temperature and humidity levels
- Available power, power distribution, UPS availability
Environmental factors can adversely affect the performance and life span of equipment.
Sensitive equipment typically requires a dry, clean, well-ventilated, and air-conditioned
environment. To ensure normal operation, an ambient airflow must be maintained. If the
airflow is blocked or restricted, or if the intake air is too warm, an overtemperature condition
can occur which may result in a failure.
Temperature
Temperature extremes can cause the equipment to operate at reduced efficiency and can cause a
variety of problems, including early degradation, failure of chips, and failure of equipment. To
control the equipment temperature, you must make sure that the equipment has adequate
airflow.
Humidity
High humidity can cause moisture to seep into the equipment. Moisture can cause corrosion of
internal components and degradation of electrical resistance, thermal conductivity, and physical
strength. Buildings in which the climate is controlled by air conditioning in the warmer months
and by heat during the colder months usually maintain an acceptable level of humidity for the
equipment.
Altitude
If you operate equipment at a high altitude (low pressure), the efficiency of forced convection
cooling is reduced and can result in electrical problems. This condition can also cause sealed
components with internal pressure, such as electrolytic capacitors, to fail or to perform at a
reduced efficiency.
Power
You should use dedicated power circuits (rather than sharing circuits with other heavy electrical
equipment). For input-source redundancy, it is recommended that you use two dedicated AC
power sources, each of which powers half of the power supply units in the devices.
Air Flow
If your site has hot and cold aisles, align the rack air intake to a cold aisle and exhaust to a hot
aisle. Also, make sure that you do not install the equipment so that it takes in exhaust air flow
from other equipment.
Define Site Preparation
• Based on:
- Design required parameters
- Environmental conditions
Site Survey Form (SSF):
• Site details
• Environmental considerations
• Electrical considerations
• Cooling considerations
• Cabling
Site Requirements Specification (SRS):
• Equipment requirements
• Environmental parameters
• Power
• LAN connectivity
• SAN connectivity
Cisco Data Center Unified Computing solutions encompass network, storage, and computing
equipment. Thus when assessing the operational requirements and evaluating the environmental
conditions these parameters must be checked for all equipment used in the solution.
Cisco UCS 6100 Environmental
Properties
(Table: environmental properties of the Cisco UCS 6100 series Fabric Interconnects)
The table above summarizes the environmental properties for the Cisco UCS Fabric
Interconnects. The difference between the Cisco UCS 6100 series switches is in weight, size,
and cooling requirements.
One of the parameters that needs to be observed is cabling. Cabling for the Cisco Unified
Computing System is governed by the type of SFP+ transceiver that is put into the fabric
interconnects and I/O modules.
For the Cisco UCS I/O module to fabric interconnect connectivity, copper-based SFP+
transceivers are typically used, whereas for the uplink connectivity a short-range SFP+ is chosen.
Cisco UCS 5108 Environmental Properties
(Table: Cisco UCS 5108 blade server chassis environmental properties)
Since the Cisco UCS consists of chassis and server blades, the equipment has its own
environmental requirements. The tables above summarize the environmental parameters for
that part of the equipment.
If the parameters for different equipment do not overlap, use the least common denominator to
stay within the boundaries that are appropriate for all equipment.
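Computing this least common denominator amounts to intersecting the individual operating ranges: the combined minimum is the highest of the minimums, and the combined maximum is the lowest of the maximums. A small Python sketch with made-up temperature ranges:

```python
# Illustrative sketch: the common operating envelope across all equipment is
# the intersection of the individual ranges (all values are made up).

operating_ranges_c = {          # (min, max) operating temperature in deg C
    "fabric interconnect": (0, 40),
    "blade chassis": (10, 35),
    "storage switch": (0, 45),
}

# Highest minimum and lowest maximum give the tightest common envelope.
common_min = max(lo for lo, hi in operating_ranges_c.values())
common_max = min(hi for lo, hi in operating_ranges_c.values())

assert (common_min, common_max) == (10, 35)
```

The same intersection logic applies to humidity, altitude, and any other ranged parameter recorded in the Site Requirements Specification.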
The network part of the Cisco Unified Computing solution would typically comprise the Cisco
Nexus 7000 or Cisco Catalyst 6500, depending on the required functionality and number of
10GE interfaces.
The table above summarizes the environmental parameters for a 10- and 18-slot Cisco Nexus
7000 chassis. The weight and required power parameters depend on the exact hardware setup,
i.e., the number of modules that are put into the chassis.
Cisco Nexus 7000 Power Requirements
• Power redundancy modes:
- Combined - no redundancy
- N+1 redundancy - guard against one PSU failure
- Grid redundancy - guard against one input circuit failure
- Full redundancy - guard against power supply or grid failure
(Table: Cisco Nexus 7000 power supply typical and maximal power)
The power scheme of the Cisco Nexus 7000 depends on the setup, and it can protect against
single power supply failure or against input grid failure. The best option is to protect from both.
The amount of power required by the Cisco Nexus 7000 can be calculated by summing the
individual component requirements. An even better option is to use the power calculator
available at the Cisco Connection Online web site at http://tools.cisco.com/cpcl.
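Summing component requirements and then sizing the power supplies against the chosen redundancy mode can be sketched as follows (the component wattages and PSU capacity are invented for illustration and are not actual Cisco Nexus 7000 figures):

```python
import math

# Illustrative sketch: chassis power budget as the sum of component draws,
# then PSU count sized against the redundancy scheme (wattages are made up).

component_draw_w = {
    "supervisor": 200,
    "fabric modules": 600,
    "linecards": 2400,
    "fans": 300,
}

total_w = sum(component_draw_w.values())           # 3500 W in this example
psu_capacity_w = 3000

psus_combined = math.ceil(total_w / psu_capacity_w)  # just carry the load
psus_n_plus_1 = psus_combined + 1                    # survive one PSU failure
psus_grid = 2 * psus_combined                        # each grid carries full load

assert total_w == 3500
assert (psus_combined, psus_n_plus_1, psus_grid) == (2, 3, 4)
```

Full redundancy would combine both schemes, so it needs at least the grid count with N+1 headroom on each grid; the vendor power calculator remains the authoritative sizing tool.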
(Table: Cisco Catalyst 6500 environmental requirements, 6-slot / 9-slot chassis)
Power: depends on the power scheme and PSU used (up to 8750 W per PSU)
Height: 12 RU / 15 RU
Dimensions: 48.8 x 44.5 x 46.0 cm / 62.2 x 44.5 x 46.0 cm
The table above summarizes the environmental requirements for a 6- and 9-slot Cisco Catalyst
6500 chassis. There are also 3-, 4-, 9- (with vertical alignment), and 13-slot chassis with their
own environmental requirements. The differences are weight, size, and required power.
Cisco MDS 9000 Environmental Properties
Summary
• The Cisco UCS sizing process includes server blade, blade
chassis, and fabric interconnect design.
• A Cisco UCS Bill of Materials is created with the NetformX
DesignXpert tool.
• Sizing Cisco UCS for a new implementation is mainly governed
by the server requirements.
• Migrating an existing solution to Cisco UCS requires detailed
analysis of the current solution.
• Designing a service provider multitenant solution requires defining
the building blocks (blades, chassis, Cisco UCS system) to ease
maintenance and scaling.