
Reference Architecture:
Lenovo Client Virtualization
with VMware Horizon

Last update: 04 May 2015
Version 1.1

Reference Architecture for VMware Horizon (with View)

Describes variety of storage models including SAN storage and hyper-converged systems

Contains performance data and sizing recommendations for Lenovo clients, servers, storage, and networking hardware used in LCV solutions

Contains detailed bill of materials for servers, storage, networking, and racks

Mike Perks
Kenny Bain
Chandrakandh Mouleeswaran

Table of Contents

1 Introduction .............................................................................................. 1

2 Architectural overview ............................................................................ 2

3 Component model ................................................................................... 3
   3.1 VMware Horizon provisioning ........................................................................... 5
   3.2 Storage model ................................................................................................... 5
   3.3 Atlantis Computing ............................................................................................ 6
      3.3.1 Atlantis Hyper-converged Volume ............................................................. 6
      3.3.2 Atlantis Simple Hybrid Volume (ILIO Persistent VDI) ................................ 7
      3.3.3 Atlantis Simple In-Memory Volume (ILIO Diskless VDI) ............................ 7
   3.4 VMware Virtual SAN .......................................................................................... 7
      3.4.1 Virtual SAN Storage Policies ..................................................................... 8

4 Operational model .................................................................................. 10
   4.1 Operational model scenarios .......................................................................... 10
      4.1.1 Enterprise operational model ................................................................... 11
      4.1.2 SMB operational model ........................................................................... 11
      4.1.3 Hyper-converged operational model ....................................................... 11
   4.2 Compute servers for virtual desktops ............................................................. 12
      4.2.1 Intel Xeon E5-2600 v3 processor family servers ..................................... 12
      4.2.2 Intel Xeon E5-2600 v2 processor family servers with Atlantis ILIO ......... 15
   4.3 Compute servers for hosted desktops ............................................................ 19
      4.3.1 Intel Xeon E5-2600 v3 processor family servers ..................................... 19
      4.3.2 Intel Xeon E5-2600 v2 processor family servers with Atlantis ILIO ......... 20
   4.4 Compute servers for hyper-converged systems ............................................. 21
      4.4.1 Intel Xeon E5-2600 v3 processor family servers with VMware VSAN ..... 21
      4.4.2 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX ........ 27
   4.5 Graphics Acceleration ..................................................................................... 28
   4.6 Management servers ...................................................................................... 30
   4.7 Shared storage ............................................................................................... 31
      4.7.1 IBM Storwize V7000 and IBM Storwize V3700 storage ........................... 33
      4.7.2 IBM FlashSystem 840 with Atlantis ILIO storage acceleration ................ 35
   4.8 Networking ...................................................................................................... 36
      4.8.1 10 GbE networking .................................................................................. 36
      4.8.2 10 GbE FCoE networking ........................................................................ 36
      4.8.3 Fibre Channel networking ....................................................................... 37
      4.8.4 1 GbE administration networking ............................................................ 37
   4.9 Racks .............................................................................................................. 38
   4.10 Proxy server .................................................................................................. 38
   4.11 Deployment models ...................................................................................... 39
      4.11.1 Deployment example 1: Flex Solution with single Flex System chassis ..... 39
      4.11.2 Deployment example 2: Flex System with 4500 stateless users ............... 40
      4.11.3 Deployment example 3: System x server with Storwize V7000 and FCoE ... 43

5 Appendix: Bill of materials .................................................................... 45
   5.1 BOM for enterprise and SMB compute servers .............................................. 45
   5.2 BOM for hosted desktops ............................................................................... 50
   5.3 BOM for hyper-converged compute servers .................................................. 54
   5.4 BOM for enterprise and SMB management servers ...................................... 57
   5.5 BOM for shared storage ................................................................................. 61
   5.6 BOM for networking ....................................................................................... 63
   5.7 BOM for racks ................................................................................................ 64
   5.8 BOM for Flex System chassis ........................................................................ 64
   5.9 BOM for OEM storage hardware .................................................................... 65
   5.10 BOM for OEM networking hardware ............................................................ 65

Resources ..................................................................................................... 66
Document history ......................................................................................... 67


1 Introduction
The intended audience for this document is technical IT architects, system administrators, and managers who
are interested in server-based desktop virtualization and server-based computing (terminal services or
application virtualization) that uses VMware Horizon (with View). In this document, the term client
virtualization is used to refer to all of these variations. Compare this term to server virtualization, which
refers to the virtualization of server-based business logic and databases.
This document describes the reference architecture for VMware Horizon 6.x; the content also applies to the
previous VMware Horizon 5.x versions. This document should be read with the Lenovo Client Virtualization (LCV)
base reference architecture document that is available at this website: lenovopress.com/tips1275.
The business problem, business value, requirements, and hardware details are described in the LCV base
reference architecture document and are not repeated here for brevity.
This document gives an architecture overview and logical component model of VMware Horizon. The
document also provides the operational model of VMware Horizon by combining Lenovo hardware platforms
such as Flex System, System x, NeXtScale System, and RackSwitch networking with OEM hardware
and software such as IBM Storwize and FlashSystem storage, VMware Virtual SAN, and Atlantis Computing
software. The operational model presents performance benchmark measurements and discussion, sizing
guidance, and some example deployment models. The last section contains detailed bill of material
configurations for each piece of hardware.


2 Architectural overview
Figure 1 shows all of the main features of the Lenovo Client Virtualization reference architecture with VMware
Horizon on VMware ESXi hypervisor. It also shows remote access, authorization, and traffic monitoring. This
reference architecture does not address the general issues of multi-site deployment and network management.

[Figure 1 is a diagram: internal clients and Internet clients connect through a firewall and proxy server to the View Connection Server, which brokers hosted desktops and apps, dedicated virtual desktops, and stateless virtual desktops running on ESXi hypervisors; supporting components include Active Directory/DNS, a SQL database server, vCenter Server with vCenter pools, and shared storage.]

Figure 1: LCV reference architecture with VMware Horizon

3 Component model
Figure 2 is a layered view of the LCV solution that is mapped to the VMware Horizon virtualization
infrastructure.
[Figure 2 is a diagram: View client devices (each with a client agent) connect over HTTP/HTTPS, RDP, and PCoIP to the View Connection Server; management services include vCenter Server and View Composer; support services include vCenter Operations for View, the View Event database, directory, DNS, DHCP, OS licensing, and Lenovo Thin Client Manager; hypervisors host dedicated virtual desktops, stateless virtual desktops (with local SSD storage), hosted desktops and applications, and optional accelerator VMs; shared storage holds the VM repository, VM linked clones, user profiles, and user data files, accessed over NFS and CIFS.]

Figure 2: Component model with VMware Horizon


VMware Horizon with the VMware ESXi hypervisor features the following main components:
View Horizon Administrator: By using this web-based application, administrators can configure View
Connection Server, deploy and manage View desktops, control user authentication, and troubleshoot user
issues. It is installed during the installation of View Connection Server instances and is not required to be
installed on local (administrator) devices.

vCenter Operations for View: This tool provides end-to-end visibility into the health, performance, and
efficiency of the virtual desktop infrastructure (VDI) configuration. It enables administrators to proactively
ensure the best possible user experience, avert incidents, and eliminate bottlenecks before they become
larger issues.

View Connection Server: The VMware Horizon Connection Server is the point of contact for client devices
that are requesting virtual desktops. It authenticates users and directs the virtual desktop request to the
appropriate virtual machine (VM) or desktop, which ensures that only valid users are allowed access. After the
authentication is complete, users are directed to their assigned VM or desktop. If a virtual desktop is
unavailable, the View Connection Server works with the management and provisioning layers to have the VM
ready and available.

View Composer: View Composer is installed in a VMware vCenter Server instance. It is required when linked
clones are created from a parent VM.

vCenter Server: By using a single console, vCenter Server provides centralized management of the virtual
machines (VMs) for the VMware ESXi hypervisor. VMware vCenter can be used to perform live migration
(called VMware vMotion), which allows a running VM to be moved from one physical server to another without
downtime. Redundancy for vCenter Server is achieved through VMware high availability (HA). The vCenter
Server also contains a licensing server for VMware ESXi.

vCenter SQL Server: vCenter Server for the VMware ESXi hypervisor requires an SQL database. The
vCenter SQL server might be Microsoft Data Engine (MSDE), Oracle, or SQL Server. Because the vCenter
SQL server is a critical component, redundant servers must be available to provide fault tolerance. Customer
SQL databases (including respective redundancy) can be used.

View Event database: VMware Horizon can be configured to record events and their details into a Microsoft
SQL Server or Oracle database. Business intelligence (BI) reporting engines can be used to analyze this
database.

Clients: VMware Horizon supports a broad set of devices and all major device operating platforms, including
Apple iOS, Google Android, and Google ChromeOS. Each client device has a VMware View Client, which acts
as the agent to communicate with the virtual desktop.

RDP, PCoIP: The virtual desktop image is streamed to the user access device by using a display protocol.
Depending on the solution, the available protocols are Remote Desktop Protocol (RDP) and PC over IP
(PCoIP).

Hypervisor (ESXi): ESXi is a bare-metal hypervisor for the compute servers. ESXi also contains support for
VSAN storage. For more information, see VMware Virtual SAN on page 6.

Accelerator VM: The optional accelerator VM in this case is Atlantis Computing. For more information, see
Atlantis Computing on page 6.

Shared storage: Shared storage is used to store user profiles and user data files. Depending on the
provisioning model that is used, different data is stored for VM images. For more information, see Storage
model.


For more information, see the Lenovo Client Virtualization base reference architecture document that is
available at this website: lenovopress.com/tips1275.

3.1 VMware Horizon provisioning


VMware Horizon supports stateless and dedicated models. Provisioning for VMware Horizon is a function of
vCenter server and View Composer for linked clones.
vCenter Server allows for manually created pools and automatic pools. It allows for provisioning full clones and
linked clones of a parent image for dedicated and stateless virtual desktops.
Because dedicated virtual desktops use large amounts of storage, linked clones can be used to reduce the
storage requirements. Linked clones are created from a snapshot (replica) that is taken from a golden master
image. The golden master image and replica should be on shared storage area network (SAN) storage. One
pool can contain up to 1000 linked clones.
This document describes the use of automated pools (with linked clones) for dedicated and stateless virtual
desktops. The deployment requirements for full clones are beyond the scope of this document.
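Because one automated pool can contain at most 1000 linked clones, the minimum number of pools for a deployment follows from a ceiling division. The following sketch is illustrative only and is not part of the original reference architecture; the function name is arbitrary:

```python
import math

def linked_clone_pools(desktops: int, max_per_pool: int = 1000) -> int:
    """Minimum number of automated pools, given the 1000 linked-clone pool limit."""
    return math.ceil(desktops / max_per_pool)

# 4500 desktops need at least 5 pools of up to 1000 linked clones each
print(linked_clone_pools(4500))  # → 5
```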

3.2 Storage model


This section describes the different types of shared data stored for stateless and dedicated desktops.
Stateless and dedicated virtual desktops should have the following common shared storage items:

- The paging file (or vSwap) is transient data that can be redirected to Network File System (NFS) storage. In
general, it is recommended to disable swapping, which reduces storage use (shared or local). The desktop
memory size should be chosen to match the user workload rather than depending on a smaller image and
swapping, which reduces overall desktop performance.

- User profiles (from Microsoft Roaming Profiles) are stored by using Common Internet File System (CIFS).

- User data files are stored by using CIFS.

Dedicated virtual desktops or stateless virtual desktops that need mobility require the following items to be on
NFS or block I/O shared storage:

- NFS or block I/O is used to store all virtual desktop-associated data, such as the master image, replicas,
and linked clones.

- NFS is used to store View Composer persistent disks when View Persona Management is used for user
profile and user data files. This feature is not recommended.

- NFS is used to store all virtual images for linked clones. The replicas and linked clones can be stored on
local solid-state drive (SSD) storage. These items are discarded when the VM is shut down.

For more information, see the following VMware knowledge base article about creating linked clones on NFS storage:

http://kb.vmware.com/kb/2046165


3.3 Atlantis Computing


Atlantis Computing provides a software-defined storage solution, which can deliver better performance than a
physical PC and reduce storage requirements by up to 95% in virtual desktop environments of all types. The
key is Atlantis HyperDup content-aware data services, which fundamentally changes the way VMs use storage.
This change reduces the storage footprints by up to 95% while minimizing (and in some cases, entirely
eliminating) I/O to external storage. The net effect is a reduced CAPEX and a marked increase in performance
to start, log in, start applications, search, and use virtual desktops or hosted desktops and applications. Atlantis
software uses random access memory (RAM) for write-back caching of data blocks, real-time inline
de-duplication of data, coalescing of blocks, and compression, which significantly reduces the data that is
cached and persistently stored in addition to greatly reducing network traffic.
Atlantis software works with any type of heterogeneous storage, including server RAM, direct-attached storage
(DAS), SAN, or network-attached storage (NAS). It is provided as a VMware ESXi compatible VM that presents
the virtualized storage to the hypervisor as a native data store, which makes deployment and integration
straightforward. Atlantis Computing also provides other utilities for managing VMs and backing up and
recovering data stores.
Atlantis provides a number of volume types suitable for virtual desktops and shared desktops. Different volume
types support different application requirements and deployment models. Table 1 compares the Atlantis
volume types.
Table 1: Atlantis volume types

Volume type       Min number    Performance tier       Capacity tier       HA                  Comments
                  of servers
Hyper-converged   Cluster of 3  Memory or local flash  DAS                 USX HA              Good balance between performance and capacity
Simple Hybrid     1             Memory or flash        Shared storage      Hypervisor HA       Functionally equivalent to Atlantis ILIO Persistent VDI
Simple All-Flash  1             Local flash            Shared flash        Hypervisor HA       Good performance, but lower capacity
Simple In-Memory  1             Memory or flash        Memory or flash     N/A (daily backup)  Functionally equivalent to Atlantis ILIO Diskless VDI

3.3.1 Atlantis Hyper-converged Volume


Atlantis hyper-converged volumes are a hybrid between memory or local flash for accelerating performance
and direct-attached storage (DAS) for capacity, and they provide a good balance between the performance
and capacity needed for virtual desktops.
As shown in Figure 3, hyper-converged volumes are clustered across three or more servers and have built-in
resiliency in which the volume can be migrated to other servers in the cluster if a server fails or enters
maintenance mode. Hyper-converged volumes are supported for ESXi.


Figure 3: Atlantis USX Hyper-converged Cluster

3.3.2 Atlantis Simple Hybrid Volume (ILIO Persistent VDI)


Atlantis simple hybrid volumes are targeted at dedicated virtual desktop environments. This volume type
provides the optimal solution for desktop virtualization customers that are using traditional or existing storage
technologies that are optimized by Atlantis software with server RAM. In this scenario, Atlantis employs
memory as a tier and uses a small amount of server RAM for all I/O processing while using the existing SAN,
NAS, or all-flash arrays storage as the primary storage. Atlantis storage optimizations increase the number of
desktops that the storage can support by up to 20 times while improving performance. Disk-backed
configurations can use various storage types, including host-based flash memory cards, external all-flash
arrays, and conventional spinning disk arrays.
A variation of the simple hybrid volume type is the simple all-flash volume that uses fast, low-latency shared
flash storage whereby very little RAM is used and all I/O requests are sent to the flash storage after the inline
de-duplication and compression are performed.
This reference architecture concentrates on the simple hybrid volume type for dedicated desktops, stateless
desktops that use local SSDs, and host-shared desktops and applications. To cover the widest variety of
shared storage, the simple all-flash volume type is not considered.

3.3.3 Atlantis Simple In-Memory Volume (ILIO Diskless VDI)


Atlantis simple in-memory volumes eliminate storage from stateless VDI deployments by using local server
RAM and the ILIO in-memory storage optimization technology. Server RAM is used as the primary storage for
stateless virtual desktops, which ensures that read and write I/O occurs at memory speeds and eliminates
network traffic. An option allows for in-line compression and decompression to reduce the RAM usage. The
ILIO SnapClone technology is used to persist the ILIO data store in case of ILIO VM reboots, power outages,
or other failures.

3.4 VMware Virtual SAN


VMware Virtual SAN (VSAN) is a Software Defined Storage (SDS) solution embedded in the ESXi hypervisor.
Virtual SAN pools flash caching devices and magnetic disks across three or more 10 GbE connected servers
into a single shared datastore that is resilient and simple to manage.
Virtual SAN can be scaled to 64 servers, with each server supporting up to 5 disk groups and each disk group
consisting of a single flash caching device (SSD) and up to 7 HDDs. Performance and capacity can easily be
increased by adding more components: disks, flash devices, or servers.

The flash cache is used to accelerate both reads and writes. Frequently read data is kept in read cache; writes
are coalesced in cache and destaged to disk efficiently, greatly improving application performance.
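The scaling limits above (64 servers, 5 disk groups per server, 7 HDDs per disk group) imply a raw-capacity envelope that can be sketched with a quick calculation. This is an illustration only; the 1.2 TB HDD size is an assumption, not a value from this document:

```python
def vsan_max_raw_tb(servers: int = 64, disk_groups: int = 5,
                    hdds_per_group: int = 7, hdd_tb: float = 1.2) -> float:
    """Raw HDD capacity (TB) of a maximally configured Virtual SAN cluster."""
    return servers * disk_groups * hdds_per_group * hdd_tb

# 64 servers x 5 disk groups x 7 HDDs x 1.2 TB (assumed drive size)
print(vsan_max_raw_tb())  # → 2688.0
```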
VSAN manages data in the form of flexible data containers that are called objects. The following types of
objects for VMs are available:

- VM Home
- VM swap (.vswp)
- VMDK (.vmdk)
- Snapshots (.vmsn)

Internally, VM objects are split into multiple components that are based on performance and availability
requirements that are defined in the VM storage profile. These components are distributed across multiple
hosts in a cluster to tolerate simultaneous failures and meet performance requirements. VSAN uses a
distributed RAID architecture to distribute data across the cluster. Components are distributed with the use of
the following main techniques:

- Striping (RAID 0): Number of stripes per object
- Mirroring (RAID 1): Number of failures to tolerate

For more information about VMware Horizon virtual desktop types, objects, and components, see VMware
Virtual SAN Design and Sizing Guide for Horizon Virtual Desktop Infrastructures, which is available at this
website: vmware.com/files/pdf/products/vsan/VMW-TMD-Virt-SAN-Dsn-Szing-Guid-Horizon-View.pdf

3.4.1 Virtual SAN Storage Policies


Virtual SAN uses Storage Policy-based Management (SPBM) function in vSphere to enable policy driven
virtual machine provisioning, and uses vSphere APIs for Storage Awareness (VASA) to expose VSAN's
storage capabilities to vCenter.
This approach means that storage resources are dynamically provisioned based on requested policy, and not
pre-allocated as with many traditional storage solutions. Storage services are precisely aligned to VM
boundaries; change the policy, and VSAN will implement the changes for the selected VMs.
VMware Horizon has predefined storage policies and default values for linked clones and full clones. Table 2
lists the VMware Horizon default storage policies for linked clones.


Table 2: VMware Horizon default storage policy values for linked clones

The default storage policies define values for each storage object in a stateless (floating) pool (OS disk,
replica, and VM_HOME) and in a persistent (dedicated) pool (OS disk, persistent disk, replica, and VM_HOME)
for the following policy settings:

Number of disk stripes per object: Defines the number of HDDs across which each replica of a storage
object is distributed.

Flash-memory read cache reservation: Defines the flash memory capacity reserved as the read cache for
the storage object.

Number of failures to tolerate (FTT): Defines the number of server, disk, and network failures that a storage
object can tolerate. For n failures tolerated, n + 1 copies of the object are created, and 2n + 1 hosts of
contributing storage are required.

Force provisioning: Determines whether the object is provisioned, even when the available resources do not
meet the VM storage policy requirements.

Object-space reservation: Defines the percentage of the logical size of the storage object that must be
reserved (thick provisioned) upon VM provisioning; the remainder of the storage object is thin provisioned.
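The FTT rule (for n failures tolerated, n + 1 copies of the object and 2n + 1 contributing hosts) can be expressed as a small calculator. This is an illustrative sketch, not part of the reference architecture; the function name is arbitrary:

```python
def vsan_ftt_requirements(failures_to_tolerate: int, vmdk_gb: float):
    """Return (copies, min_hosts, raw_gb) for a mirrored (RAID 1) VSAN object."""
    n = failures_to_tolerate
    copies = n + 1          # n + 1 replicas of the object
    min_hosts = 2 * n + 1   # hosts of contributing storage required
    raw_gb = copies * vmdk_gb
    return copies, min_hosts, raw_gb

# FTT=1: a 40 GB VMDK needs 2 copies on at least 3 hosts, consuming 80 GB raw
print(vsan_ftt_requirements(1, 40.0))  # → (2, 3, 80.0)
```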


4 Operational model
This section describes the options for mapping the logical components of a client virtualization solution onto
hardware and software. The Operational model scenarios section gives an overview of the available
mappings and has pointers into the other sections for the related hardware. Each subsection contains
performance data, has recommendations on how to size for that particular hardware, and a pointer to the BOM
configurations that are described in section 5 on page 45. The last part of this section contains some
deployment models for example customer scenarios.

4.1 Operational model scenarios


Figure 4 shows the following operational models (solutions) in Lenovo Client Virtualization: enterprise,
small-medium business (SMB), and hyper-converged.

[Figure 4 is a matrix: the rows are SMB (<600 users) and Enterprise (>600 users); the columns are Traditional
(x3550, x3650, nx360 compute/management servers, ESXi hypervisor, NVIDIA GRID K1 or K2 graphics
acceleration, IBM Storwize V7000 or V3700 or IBM FlashSystem 840 shared storage, 10 GbE, 10 GbE FCoE,
or 8 or 16 Gb FC networking), Converged (Flex System chassis with Flex System x240 compute/management
servers, ESXi, shared storage, and the same networking options; marked "Not Recommended" for SMB), and
Hyper-converged (x3650 servers, ESXi, NVIDIA GRID K1 or K2, no shared storage, 10 GbE networking),
spanning both rows.]

Figure 4: Operational model scenarios


The vertical axis is split into two halves: greater than 600 users is termed Enterprise and less than 600 is
termed SMB. The 600 user split is not exact and provides rough guidance between Enterprise and SMB. The
last column in Figure 4 (labelled hyper-converged) spans both halves because a hyper-converged solution
can be deployed in a linear fashion from a small number of users (100) up to a large number of users (>4000).
The horizontal axis is split into three columns. The left-most column represents traditional rack-based systems
with top-of-rack (TOR) switches and shared storage. The middle column represents converged systems where
the compute, networking, and sometimes storage are converged into a chassis, such as the Flex System. The
right-most column represents hyper-converged systems and the software that is used in these systems. For
the purposes of this reference architecture, the traditional and converged columns are merged for enterprise
solutions; the only significant differences are the networking, form factor, and capabilities of the compute
servers.
Converged systems are not generally recommended for the SMB space because the converged hardware
chassis adds overhead when only a few compute nodes are needed. However, other compute nodes in the
converged chassis can be used for other workloads to make this hardware architecture more cost-effective.
The VMware ESXi 6.0 hypervisor is recommended for all operational models. Similar performance results were
also achieved with the ESXi 5.5 U2 hypervisor. The ESXi hypervisor is convenient because it can boot from a
USB flash drive or boot from SAN and does not require any extra local storage.

4.1.1 Enterprise operational model


For the enterprise operational model, see the following sections for more information about each component,
its performance, and sizing guidance:

4.2 Compute servers for virtual desktops

4.3 Compute servers for hosted desktops

4.5 Graphics Acceleration

4.6 Management servers

4.7 Shared storage

4.8 Networking

4.9 Racks

4.10 Proxy server

To show the enterprise operational model for different sized customer environments, four different sizing
models are provided for supporting 600, 1500, 4500, and 10000 users.

4.1.2 SMB operational model


Currently, the SMB model is the same as the Enterprise model for traditional systems.

4.1.3 Hyper-converged operational model


For the hyper-converged operational model, see the following sections for more information about each
component, its performance, and sizing guidance:

4.4 Compute servers for hyper-converged

4.5 Graphics Acceleration

4.6 Management servers

4.8.1 10 GbE networking

4.9 Racks

4.10 Proxy server

To show the hyper-converged operational model for different sized customer environments, four different sizing
models are provided for supporting 300, 600, 1500, and 3000 users. The management server VMs for a
hyper-converged cluster can either be in a separate hyper-converged cluster or on traditional shared storage.


4.2 Compute servers for virtual desktops


This section describes stateless and dedicated virtual desktop models. Stateless desktops that allow live
migration of a VM from one physical server to another are considered the same as dedicated desktops
because both require shared storage. In some customer environments, both stateless and dedicated desktop
models might be needed, which requires a hybrid implementation.
Compute servers are servers that run a hypervisor and host virtual desktops. There are several considerations
for the performance of the compute server, including the processor family and clock speed, the number of
processors, the speed and size of main memory, and local storage options.
The use of the Aero theme in Microsoft Windows 7 or other intensive workloads has an effect on the
maximum number of virtual desktops that can be supported on each compute server. Windows 8 also requires
more processor resources than Windows 7, whereas little difference was observed between 32-bit and 64-bit
Windows 7. Although a slower processor can be used and still not exhaust the processor power, it is a good
policy to have excess capacity.
Another important consideration for compute servers is system memory. For stateless users, the typical range
of memory that is required for each desktop is 2 GB - 4 GB. For dedicated users, the range of memory for each
desktop is 2 GB - 6 GB. Designers and engineers that require graphics acceleration might need 8 GB - 16 GB
of RAM per desktop. In general, power users that require larger memory sizes also require more virtual
processors. This reference architecture standardizes on 2 GB per desktop as the minimum requirement of a
Windows 7 desktop. The virtual desktop memory should be large enough so that swapping is not needed and
vSwap can be disabled.
For more information, see BOM for enterprise and SMB compute servers section on page 45.

4.2.1 Intel Xeon E5-2600 v3 processor family servers


Table 3 lists the Login VSI performance of Intel E5-2600 v3 processors with the Login VSI 4.1 office
worker workload and ESXi 6.0. Similar performance results were also achieved with ESXi 5.5 U2.
Table 3: Performance with office worker workload

| Processor with office worker workload | Stateless | Dedicated |
|---|---|---|
| Two E5-2650 v3 2.30 GHz, 10C 105W | 188 users | 197 users |
| Two E5-2670 v3 2.30 GHz, 12C 120W | 232 users | 234 users |
| Two E5-2680 v3 2.50 GHz, 12C 120W | 239 users | 244 users |
| Two E5-2690 v3 2.60 GHz, 12C 135W | 243 users | 246 users |

Table 4 lists the results for the Login VSI 4.1 knowledge worker workload.
Table 4: Performance with knowledge worker workload

| Processor with knowledge worker workload | Stateless | Dedicated |
|---|---|---|
| Two E5-2680 v3 2.50 GHz, 12C 120W | 189 users | 190 users |
| Two E5-2690 v3 2.60 GHz, 12C 135W | 191 users | 200 users |


These results indicate the comparative processor performance. The following conclusions can be drawn:

- The performance for stateless and dedicated virtual desktops is similar.
- The Xeon E5-2650v3 processor has performance that is similar to the previously recommended Xeon E5-2690v2 (Ivy Bridge) processor, but uses less power and is less expensive.
- The Xeon E5-2690v3 processor does not have significantly better performance than the Xeon E5-2680v3 processor; therefore, the E5-2680v3 is preferred because of its lower cost.

Between the Xeon E5-2650v3 (2.30 GHz, 10C 105W) and the Xeon E5-2680v3 (2.50 GHz, 12C 120W) series
processors are the Xeon E5-2660v3 (2.6 GHz 10C 105W) and the Xeon E5-2670v3 (2.3GHz 12C 120W)
series processors. The cost per user increases with each processor but with a corresponding increase in user
density. The Xeon E5-2680v3 processor has good user density, but the significant increase in cost might
outweigh this advantage. Also, many configurations are bound by memory; therefore, a faster processor might
not provide any added value. Some users require the fastest processor and for those users, the Xeon
E5-2680v3 processor is the best choice. However, the Xeon E5-2650v3 processor is recommended for an
average configuration.
Previous Reference Architectures used Login VSI 3.7 medium and heavy workloads. Table 5 gives a
comparison with the newer Login VSI 4.1 office worker and knowledge worker workloads. The table shows that
Login VSI 3.7 is on average 20% to 30% higher than Login VSI 4.1.
Table 5: Comparison of Login VSI 3.7 and 4.1 Workloads

| Processor | Workload | Stateless | Dedicated |
|---|---|---|---|
| Two E5-2650 v3 2.30 GHz, 10C 105W | 4.1 Office worker | 188 users | 197 users |
| Two E5-2650 v3 2.30 GHz, 10C 105W | 3.7 Medium | 254 users | 260 users |
| Two E5-2690 v3 2.60 GHz, 12C 135W | 4.1 Office worker | 243 users | 246 users |
| Two E5-2690 v3 2.60 GHz, 12C 135W | 3.7 Medium | 316 users | 313 users |
| Two E5-2690 v3 2.60 GHz, 12C 135W | 4.1 Knowledge worker | 191 users | 200 users |
| Two E5-2690 v3 2.60 GHz, 12C 135W | 3.7 Heavy | 275 users | 277 users |

Table 6 compares the E5-2600 v3 processors with the previous generation E5-2600 v2 processors by using
the Login VSI 3.7 workloads to show the relative performance improvement. On average, the E5-2600 v3
processors are 25% - 30% faster than the previous generation with the equivalent processor names.


Table 6: Comparison of E5-2600 v2 and E5-2600 v3 processors

| Processor | Workload | Stateless | Dedicated |
|---|---|---|---|
| Two E5-2650 v2 2.60 GHz, 8C 85W | 3.7 Medium | 202 users | 205 users |
| Two E5-2650 v3 2.30 GHz, 10C 105W | 3.7 Medium | 254 users | 260 users |
| Two E5-2690 v2 3.0 GHz, 10C 130W | 3.7 Medium | 240 users | 260 users |
| Two E5-2690 v3 2.60 GHz, 12C 135W | 3.7 Medium | 316 users | 313 users |
| Two E5-2690 v2 3.0 GHz, 10C 130W | 3.7 Heavy | 208 users | 220 users |
| Two E5-2690 v3 2.60 GHz, 12C 135W | 3.7 Heavy | 275 users | 277 users |

The default recommendation for this processor family is the Xeon E5-2650v3 processor and 512 GB of system
memory because this configuration provides the best coverage for a range of users. For users who need VMs
that are larger than 3 GB, Lenovo recommends the use of 768 GB and the Xeon E5-2680v3 processor.
Lenovo testing shows that 150 users per server is a good baseline and has an average of 76% usage of the
processors in the server. If a server goes down, users on that server must be transferred to the remaining
servers. For this degraded failover case, Lenovo testing shows that 180 users per server have an average of
89% usage of the processor. It is important to keep this 25% headroom on servers to cope with possible
failover scenarios. Lenovo recommends a general failover ratio of 5:1.
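
The 25% headroom rule above can be checked with simple arithmetic. The following sketch is an illustration of the sizing logic only (not part of the reference architecture); it computes the per-server load before and after a single server failure.

```python
def failover_load(total_users: int, servers: int) -> tuple[float, float]:
    """Users per server in normal mode and after one server fails."""
    normal = total_users / servers
    degraded = total_users / (servers - 1)  # survivors absorb the failed server's users
    return normal, degraded

# 900 users on 6 servers (a 5:1 failover ratio): 150 users per server normally,
# rising to 180 per server when one server fails -- the densities used above.
normal, degraded = failover_load(900, 6)
print(normal, degraded)                 # 150.0 180.0
print(round(degraded / normal - 1, 2))  # 0.2 -> survivors take on 20% extra load
```

Keeping roughly 25% processor headroom in normal mode ensures that the surviving servers can absorb this extra load.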
Table 7 lists the processor usage with ESXi for the recommended user counts for normal mode and failover
mode.
Table 7: Processor usage

| Processor | Workload | Users per Server | Stateless Utilization | Dedicated Utilization |
|---|---|---|---|---|
| Two E5-2650 v3 | Office worker | 150 (normal mode) | 79% | 78% |
| Two E5-2650 v3 | Office worker | 180 (failover mode) | 86% | 86% |
| Two E5-2680 v3 | Knowledge worker | 150 (normal mode) | 76% | 74% |
| Two E5-2680 v3 | Knowledge worker | 180 (failover mode) | 92% | 90% |

Table 8 lists the recommended number of virtual desktops per server for different VM memory sizes. The
number of users is reduced in some cases to fit within the available memory and still maintain a reasonably
balanced system of compute and memory.
Table 8: Recommended number of virtual desktops per server

| Processor | E5-2650v3 | E5-2650v3 | E5-2680v3 |
|---|---|---|---|
| VM memory size | 2 GB (default) | 3 GB | 4 GB |
| System memory | 384 GB | 512 GB | 768 GB |
| Desktops per server (normal mode) | 150 | 140 | 150 |
| Desktops per server (failover mode) | 180 | 168 | 180 |


Table 9 lists the approximate number of compute servers that are needed for different numbers of users and
VM sizes.
Table 9: Compute servers needed for different numbers of users and VM sizes

| Desktop memory size (2 GB or 4 GB) | 600 users | 1500 users | 4500 users | 10000 users |
|---|---|---|---|---|
| Compute servers @150 users (normal) | | 10 | 30 | 68 |
| Compute servers @180 users (failover) | | | 25 | 56 |
| Failover ratio | 4:1 | 4:1 | 5:1 | 5:1 |

| Desktop memory size (3 GB) | 600 users | 1500 users | 4500 users | 10000 users |
|---|---|---|---|---|
| Compute servers @140 users (normal) | | 11 | 33 | 72 |
| Compute servers @168 users (failover) | | | 27 | 60 |
| Failover ratio | 4:1 | 4.5:1 | 4.5:1 | 5:1 |
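
The server counts in tables such as Table 9 follow from dividing the user count by the per-server density and rounding up; a minimal sketch of that arithmetic (the published tables also round for balance, so a few entries differ slightly):

```python
import math

def servers_needed(users: int, users_per_server: int) -> int:
    """Servers required at a given desktop density, rounded up."""
    return math.ceil(users / users_per_server)

# 4500 users at the densities used in this section:
print(servers_needed(4500, 150))  # 30 servers provisioned (150 users each, normal mode)
print(servers_needed(4500, 180))  # 25 servers must survive a failure (180 users each)
```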

For stateless desktops, local SSDs can be used to store the VMware replicas and linked clones for improved
performance. Two replicas must be stored for each master image. Each stateless virtual desktop requires a
linked clone, which tends to grow over time until it is refreshed at log out. Two enterprise high speed 200 GB
SSDs in a RAID 0 configuration should be sufficient for most user scenarios; however, 400 GB or even 800 GB
SSDs might be needed. Because of the stateless nature of the architecture, there is little added value in
configuring reliable SSDs in more redundant configurations.

4.2.2 Intel Xeon E5-2600 v2 processor family servers with Atlantis ILIO
Atlantis ILIO provides storage optimization by using a 100% software solution. There is a cost in processor
and memory usage in exchange for decreased storage usage and increased input/output operations per second
(IOPS). This section contains performance measurements for processor and memory utilization of ILIO
technology and gives an indication of the storage usage and performance. Dedicated and stateless virtual
desktops have different performance measurements and recommendations.
VMs under ILIO are deployed on a per server basis. It is also recommended to use a separate storage logical
unit number (LUN) for each ILIO VM to support failover. Therefore, the performance measurements and
recommendations in this section are on a per server basis. Note that these measurements are currently for the
E5-2600 v2 processor using Login VSI 3.7.

Dedicated virtual desktops


For environments that are not using Atlantis ILIO, it is recommended to use linked clones to conserve shared
storage space. However, with Atlantis ILIO, it is recommended to use full clones for persistent desktops
because they de-duplicate more efficiently than the linked clones and can support more desktops per server.
ILIO Persistent VDI with disk-backed mode (USX simple hybrid volume) is used for dedicated virtual desktops.
The memory that is required for in-memory mode is high and is not examined further in this version of the
reference architecture. Table 10 shows the Login VSI performance with and without the ILIO Persistent VDI
disk-backed solution on ESXi 5.5.


Table 10: Performance of persistent desktops with ILIO Persistent VDI

| Processor | Workload | Dedicated | Dedicated with ILIO Persistent VDI |
|---|---|---|---|
| Two E5-2650v2 8C 2.7 GHz | Medium | 205 users | 189 users |
| Two E5-2690v2 10C 3.0 GHz | Medium | 260 users | 232 users |
| Two E5-2690v2 10C 3.0 GHz | Heavy | 220 users | 198 users |

On average, there is a difference of 20% - 30%, which can be attributed to the work that is done by the two
vCPUs of the Atlantis ILIO VM. It is recommended that higher-end processors (such as the E5-2690v2) be
used to maximize density.
The ILIO Persistent VDI VM uses 5 GB of RAM. In addition, the ILIO RAM cache requires more RAM and
Atlantis Computing provides a calculator for this RAM. Lenovo testing found that 275 VMs used 35 GB out of
the 50 GB RAM. In practice, most servers host fewer VMs, but each VM is much larger. Proof of concept (POC)
testing can help determine the amount of RAM, but for most situations 50 GB of RAM should be sufficient.
Assuming 4 GB for the hypervisor, 59 GB (50 + 5 + 4) of system memory should be reserved. It is
recommended that at least 384 GB of server memory is used for ILIO Persistent VDI deployments.
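
The reserved-memory arithmetic above can be expressed directly; the 50 GB cache size is the working assumption from Lenovo's testing (Atlantis Computing's calculator gives a precise figure for a specific environment).

```python
ILIO_VM_GB = 5      # memory used by the ILIO Persistent VDI VM
ILIO_CACHE_GB = 50  # ILIO RAM cache (assumption; size with the Atlantis calculator)
HYPERVISOR_GB = 4   # memory assumed for the hypervisor

reserved = ILIO_CACHE_GB + ILIO_VM_GB + HYPERVISOR_GB
print(reserved)  # 59 GB of system memory to reserve

# Memory left for desktop VMs on a 384 GB server and the 2 GB VMs it could hold;
# in practice the recommendation is 125 desktops because the processor is the limit.
available = 384 - reserved
print(available, available // 2)  # 325 162
```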
Table 11 lists the recommended number of virtual desktops per server for different VM memory sizes for a
medium workload. This configuration can be a more cost-effective, higher-density route for larger VMs that
balance RAM and processor utilization.
Table 11: Recommended number of virtual desktops per server with ILIO Persistent VDI

| Processor | E5-2690v2 | E5-2690v2 | E5-2690v2 |
|---|---|---|---|
| VM memory size | 2 GB (default) | 3 GB | 4 GB |
| Total system memory | 384 GB | 512 GB | 768 GB |
| Reserved system memory | 59 GB | 59 GB | 59 GB |
| System memory for desktop VMs | 325 GB | 452 GB | 709 GB |
| Desktops per server (normal mode) | 125 | 125 | 125 |
| Desktops per server (failover mode) | 150 | 150 | 150 |

Table 12 lists the number of compute servers that are needed for different numbers of users and VM sizes. A
server with 384 GB system memory is used for 2 GB VMs, 512 GB system memory is used for 3 GB VMs, and
768 GB system memory is used for 4 GB VMs.
Table 12: Compute servers needed for different numbers of users with ILIO Persistent VDI

| | 600 users | 1500 users | 4500 users | 10000 users |
|---|---|---|---|---|
| Compute servers for 125 users (normal) | | 12 | 36 | 80 |
| Compute servers for 150 users (failover) | | 10 | 30 | 67 |
| Failover ratio | 4:1 | 5:1 | 5:1 | 4:1 |

The amount of disk storage that is used depends on several factors, including the size of the original image,
the amount of user unique storage, and the de-duplication and compression ratios that can be achieved.


Here is a best case example: A Windows 7 image uses 21 GB out of an allocated 30 GB. For 160 VMs that are
using full clones, the actual storage space that is needed is 3360 GB. For ILIO, the storage space that is used
is 60 GB out of an allocated datastore of 250 GB. This configuration is a saving of 98% and is best case, even
if you add the 50 GB of disk space that is needed by the ILIO VM.
It is still a best practice to separate the user folder and any other shared folders onto separate storage. All
of the other changes that might occur in a full clone must be stored in the ILIO data store.
This configuration is highly dependent on the environment. Testing by Atlantis Computing suggests that 3.5 GB
of unique data per persistent VM is sufficient. Comparing against the 4800 GB that is needed for 160 full clone
VMs, this configuration still represents a saving of 88%. It is recommended to reserve 10% - 20% of the total
storage that is required for the ILIO data store.
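
The savings percentages above follow from comparing ILIO's data store usage with the raw full-clone footprint; a small sketch of both calculations (the 3.5 GB unique-data figure is Atlantis Computing's estimate):

```python
def storage_saving(full_clone_gb: float, ilio_gb: float) -> float:
    """Percentage of storage saved compared with full clones."""
    return 100 * (1 - ilio_gb / full_clone_gb)

vms = 160
# Best case: 160 full clones at 21 GB used each vs. a 60 GB ILIO data store
best_case = storage_saving(vms * 21, 60)
# Practical case: 160 clones at 30 GB allocated each vs. 3.5 GB unique data per VM
practical = storage_saving(vms * 30, vms * 3.5)
print(round(best_case), round(practical))  # 98 88
```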
As a result of the use of ILIO Persistent VDI, the only read operations are to fill the cache for the first time. For
all practical purposes, the remaining reads are few and at most 1 IOPS per VM. Writes to persistent storage
are still needed for starting, logging in, remaining in steady state, and logging off, but the overall IOPS count is
substantially reduced.
Assuming the use of a fast, low-latency shared storage device, such as the IBM FlashSystem 840 system, a
single VM boot can take 20 - 25 seconds to get past the display of the logon window and have all of the other
services fully loaded. Boot operations are mainly read operations, and the actual boot time can vary
depending on the VM.
Login time for a single desktop varies, depending on the VM image but can be extremely quick. In some cases,
the login will take less than 6 seconds. Scale-out testing across a cluster of servers shows that one new login
every 6 seconds can be supported over a long period. Therefore, at any one instant, there can be multiple
logins underway and the main bottleneck is the processor.

Stateless virtual desktops


Two different options were tested for stateless virtual desktops: one is ILIO Persistent VDI with disk-backed
mode to local SSDs, and the other is ILIO Diskless VDI (USX simple in-memory volume) to server memory
without compression. For ILIO Persistent VDI, the difference data is stored on the local SSDs as before. For
ILIO Diskless VDI, it is important to issue a SnapClone to a backing store so that the diskless VMs do not need
to be re-created each time the ILIO Diskless VM is started. Table 13 lists the Login VSI performance with and
without the ILIO VM on ESXi 5.5.
Table 13: Performance of stateless desktops

| Processor | Workload | Stateless | Stateless with ILIO Persistent VDI with local SSD | Stateless with ILIO Diskless VDI |
|---|---|---|---|---|
| Two E5-2650v2 8C 2.7 GHz | Medium | 202 users | 181 users | 159 users |
| Two E5-2690v2 10C 3.0 GHz | Medium | 240 users | 227 users | 224 users |
| Two E5-2690v2 10C 3.0 GHz | Heavy | 208 users | 196 users | 196 users |

On average, there is a difference of 20% - 35%, which can be attributed to the work that is done by the two
vCPUs of the Atlantis ILIO VM. It is recommended that higher-end processors (such as the E5-2690v2) be used
to maximize density. The
maximum number of users that is supported is slightly higher for ILIO Diskless VDI, but the RAM requirement
is also much higher.
For the ILIO Persistent VDI that uses local SSDs, the memory calculation is similar to that for persistent virtual
desktops. It is recommended that at least 384 GB of server memory is used for ILIO Persistent VDI
deployments. For more information about recommendations for ILIO Persistent VDI that use local SSDs for
stateless virtual desktops, see Table 11 and Table 12. The same configuration can also be used for stateless
desktops with shared storage; however, the performance of the write operations likely becomes much worse.
The ILIO Diskless VDI VM uses 5 GB of RAM. In addition, the ILIO RAM cache and RAM data store requires
extra RAM. Atlantis Computing provides a calculator for this RAM. Lenovo testing found that 230 VMs used 69
GB of RAM. In practice, most servers host fewer VMs and each VM has more differences. POC testing can help
determine the amount of RAM, but 128 GB should be sufficient for most situations. Assuming 4 GB for the
hypervisor, 137 GB (128 + 5 + 4) of system memory should be reserved. In general, it is recommended that a
minimum of 512 GB of server memory is used for ILIO Diskless VDI deployments.
Table 14 lists the recommended number of stateless virtual desktops per server for different VM memory sizes
for a medium workload.
Table 14: Recommended number of virtual desktops per server with ILIO Diskless VDI

| Processor | E5-2690v2 | E5-2690v2 | E5-2690v2 |
|---|---|---|---|
| VM memory size | 2 GB (default) | 3 GB | 4 GB |
| Total system memory | 512 GB | 512 GB | 768 GB |
| Reserved system memory | 137 GB | 137 GB | 137 GB |
| System memory for desktop VMs | 375 GB | 375 GB | 631 GB |
| Desktops per server (normal mode) | 125 | 100 | 125 |
| Desktops per server (failover mode) | 150 | 125 | 150 |

Table 15 shows the number of compute servers that are needed for different numbers of users and VM sizes. A
server with 512 GB system memory is used for 2 GB and 3 GB VMs, and 768 GB system memory is used for 4
GB VMs.
Table 15: Compute servers needed for different numbers of users with ILIO Diskless VDI

| Desktop memory size (2 GB or 4 GB) | 600 users | 1500 users | 4500 users | 10000 users |
|---|---|---|---|---|
| Compute servers for 125 users (normal) | | 11 | 30 | 67 |
| Compute servers for 150 users (failover) | | | 25 | 56 |
| Failover ratio | 4:1 | 4.5:1 | 5:1 | 5:1 |

| Desktop memory size (3 GB) | 600 users | 1500 users | 4500 users | 10000 users |
|---|---|---|---|---|
| Compute servers for 100 users (normal) | | 12 | 36 | 80 |
| Compute servers for 125 users (failover) | | 10 | 30 | 67 |
| Failover ratio | 4:1 | 5:1 | 5:1 | 4:1 |


Disk storage is needed for the master images and each SnapClone data store for ILIO Diskless VDI VMs. This
storage does not need to be fast because it is used only to initially load the master image or to recover an ILIO
Diskless VDI VM that was rebooted.
As with persistent virtual desktops, the addition of the ILIO technology reduces the IOPS that is needed for
boot, login, remaining in steady state, and logoff. This reduces the time to bring a VM online and reduces user
response time.

4.3 Compute servers for hosted desktops


This section describes compute servers for hosted desktops, which are a new feature in VMware Horizon 6.x.
Hosted desktops are more suited to task workers that require little desktop customization.
As the name implies, multiple hosted desktops share a single VM; because of this sharing, the compute
resources often are exhausted before memory. Lenovo testing showed that 128 GB of memory is sufficient for
servers with two processors.
Other testing showed that the performance difference between four, six, and eight VMs is minimal; therefore,
four VMs are recommended to reduce the license costs for Windows Server 2012 R2.
For more information, see BOM for hosted desktops section on page 50.

4.3.1 Intel Xeon E5-2600 v3 processor family servers


Table 16 lists the processor performance results for different size workloads that use four Windows Server
2012 R2 VMs with the Xeon E5-2600v3 series processors and ESXi 6.0 hypervisor.
Table 16: Performance results for hosted desktops using the E5-2600 v3 processors

| Processor | Workload | Hosted Desktops |
|---|---|---|
| Two E5-2650 v3 2.30 GHz, 10C 105W | Office Worker | 222 users |
| Two E5-2690 v3 2.60 GHz, 12C 135W | Office Worker | 298 users |
| Two E5-2690 v3 2.60 GHz, 12C 135W | Knowledge Worker | 244 users |

Lenovo testing shows that 170 hosted desktops per server is a good baseline. If a server goes down, users on
that server must be transferred to the remaining servers. For this degraded failover case, Lenovo recommends
204 hosted desktops per server. It is important to keep a 25% headroom on servers to cope with possible
failover scenarios. Lenovo recommends a general failover ratio of 5:1.
Table 17 lists the processor usage for the recommended number of users.
Table 17: Processor usage

| Processor | Workload | Users per Server | Utilization |
|---|---|---|---|
| Two E5-2650 v3 | Office worker | 170 (normal mode) | 73% |
| Two E5-2650 v3 | Office worker | 204 (failover mode) | 81% |
| Two E5-2690 v3 | Knowledge worker | 170 (normal mode) | 64% |
| Two E5-2690 v3 | Knowledge worker | 204 (failover mode) | 79% |


Table 18 lists the number of compute servers that are needed for different numbers of users. Each compute
server has 128 GB of system memory for the four VMs.
Table 18: Compute servers needed for different numbers of users and VM sizes

| | 600 users | 1500 users | 4500 users | 10000 users |
|---|---|---|---|---|
| Compute servers for 170 users (normal) | | 10 | 27 | 59 |
| Compute servers for 204 users (failover) | | | 22 | 49 |
| Failover ratio | 3:1 | 4:1 | 4.5:1 | 5:1 |

4.3.2 Intel Xeon E5-2600 v2 processor family servers with Atlantis ILIO
Atlantis ILIO provides in-memory storage optimization by using a 100% software solution. There is an effect on
processor and memory usage while offering decreased storage usage and increased IOPS. This section
contains performance measurements for processor and memory utilization of ILIO technology and describes
the storage usage and performance.
VMs under ILIO are deployed on a per server basis. It is recommended to use a separate storage LUN for
each ILIO VM to support failover. The performance measurements and recommendations in this section are on
a per server basis. Note that these measurements are currently for the E5-2600 v2 processor using Login VSI
3.7.
The performance measurements and recommendations are for the use of ILIO Persistent VDI with hosted
desktops. Table 19 lists the processor performance results for the Xeon E5-2600 v2 series of processors.
Table 19: Performance results for hosted desktops using the E5-2600 v2 processors

| Workload | Processor | RDP desktops | RDP desktops with ILIO Persistent VDI | PCoIP desktops | PCoIP desktops with ILIO Persistent VDI |
|---|---|---|---|---|---|
| Medium | Two E5-2690v2 | 266 users | 243 users | 220 users | 205 users |
| Heavy | Two E5-2690v2 | 213 users | 197 users | 173 users | 163 users |

On average, there is a difference of 20% - 30% that can be attributed to work that is done by the two vCPUs of
the Atlantis ILIO VM. It is recommended that higher-end processors, such as E5-2690v2, are used to maximize
density.
The ILIO Persistent VDI VM uses 5 GB of RAM. In addition, the ILIO RAM cache requires more RAM. Atlantis
Computing provides a calculator for this RAM. Lenovo testing found that the four VMs used 32 GB. In practice,
most servers host fewer VMs and each VM is much larger. POC testing can help determine the amount of RAM.
However, for most circumstances, 60 GB should be enough. It is recommended that at least 192 GB of server
memory is used for ILIO Persistent VDI deployments of hosted desktops.
Table 20 shows the recommended number of shared hosted desktops per compute server that uses two Xeon
E5-2690v2 series processors, which allows for some processor headroom for the hypervisor and a 5:1 failover
ratio in the compute servers.


Table 20: Recommended number of hosted desktops per server

| Workload | Normal case | Normal utilization | Failover case | Failover utilization |
|---|---|---|---|---|
| Medium | 150 | 73% | 180 | 88% |
| Heavy | 160 | 73% | 192 | 87% |

Table 21 shows the number of compute servers that is needed for different numbers of users. Each compute
server has 256 GB of system memory for the four VMs and the ILIO Persistent VDI VM.
Table 21: Compute servers needed for different numbers of users and VM sizes

| | 600 users | 1500 users | 4500 users | 10000 users |
|---|---|---|---|---|
| Compute servers for 150 users (normal) | | 10 | 30 | 67 |
| Compute servers for 180 users (failover) | | | 25 | 56 |
| Failover ratio | 4:1 | 4:1 | 5:1 | 5:1 |

The amount of disk storage that is used depends on several factors, including the size of the original Windows
Server image, the amount of unique storage, and the de-duplication and compression ratios that can be
achieved. A Windows 2008 R2 image uses 19 GB. For four VMs, the actual storage space that is needed is 76
GB. For ILIO, the storage space that is used is 25 GB, which is a saving of 67%.
As a result of the use of ILIO Persistent VDI, the only read I/O operations that are needed are those to fill the
cache for the first time. For all practical purposes, the remaining reads are few and at most 1 IOPS per VM.
Writes to persistent storage are still needed for booting, logging in, remaining in steady state, and logging off,
but the overall IOPS count is substantially reduced.

4.4 Compute servers for hyper-converged systems


This section presents the compute servers for different hyper-converged systems, including VMware VSAN and
Atlantis USX. Additional processing power and memory are often required to support storing data locally, as
are additional SSDs or HDDs. Typically, HDDs are used for data capacity, and SSDs and memory are used to
provide performance. As the price per GB for flash memory continues to fall, there is a trend toward using all
SSDs to provide the overall best performance for hyper-converged systems.
For more information, see BOM for hyper-converged compute servers on page 54.

4.4.1 Intel Xeon E5-2600 v3 processor family servers with VMware VSAN
VMware VSAN is tested by using the office worker and knowledge worker workloads of Login VSI 4.1. Four
Lenovo x3650 M5 servers with E5-2680v3 processors were networked together by using a 10 GbE TOR switch
with two 10 GbE connections per server.

Server Performance and Sizing Recommendations


Each server was configured with two disk groups because a single disk group does not provide the necessary
resiliency. Two disk group sizes were tested: each disk group had one 400 GB SSD and either three or six
HDDs. Both disk group configurations provided more than enough capacity for the linked clone VMs.


Table 22 lists the Login VSI results for stateless desktops by using linked clones and the VMware default
storage policy of number of failures to tolerate (FTT) of 0 and stripe of 1.
Table 22: Performance results for stateless desktops

| Workload | Storage Policy | Stateless desktops tested | OS Disk Used Capacity | VSI max (2 disk groups): 1 SSD and 6 HDDs | VSI max (2 disk groups): 1 SSD and 3 HDDs |
|---|---|---|---|---|---|
| Office worker | FTT = 0 and Stripe = 1 | 1000 | 5.26 TB | 888 | 886 |
| Knowledge worker | FTT = 0 and Stripe = 1 | 800 | 4.45 TB | 696 | 674 |

Table 23 lists the Login VSI results for persistent desktops by using linked clones and the VMware default
storage policy of fault tolerance of 1 and 1 stripe.
Table 23: Performance results for dedicated desktops

| Test Scenario | OS Disk Storage Policy | Persistent Disk Storage Policy | Dedicated desktops tested | Used Capacity | VSI max (2 disk groups): 1 SSD and 6 HDDs | VSI max (2 disk groups): 1 SSD and 3 HDDs |
|---|---|---|---|---|---|---|
| Office worker | FTT = 1 and Stripe = 1 | FTT = 1 and Stripe = 1 | 1000 | 10.86 TB | 905 | 895 |
| Knowledge worker | FTT = 1 and Stripe = 1 | FTT = 1 and Stripe = 1 | 800 | 9.28 TB | 700 | 668 |

These results show that there is no significant performance difference between disk groups with three and six
HDDs and that there are enough IOPS for the disk writes. Persistent desktops might need more hard drive
capacity or larger SSDs to improve performance for full clones or to provide space for growth of linked clones.
The Lenovo M5210 RAID controller for the x3650 M5 can be used in two modes: integrated MegaRAID (iMR)
mode without the flash cache module, or MegaRAID (MR) mode with the flash cache module and at least 1 GB
of battery-backed flash memory. In both modes, RAID 0 virtual drives were configured for use by the disk
groups. For more information, see this website: lenovopress.com/tips1069.html.
Table 24 lists the measured queue depth for the M5210 RAID controller in iMR and MR modes. Lenovo
recommends using the M5210 RAID controller with the flash cache module because it has a much greater
queue depth and better IOPS performance.
Table 24: RAID Controller Queue Depth

| RAID Controller | iMR mode queue depth | MR mode queue depth | Drive queue depth |
|---|---|---|---|
| M5210 | 234 | 895 | 128 |

Lenovo testing shows that 125 users per server is a good baseline and has an average of 77% usage of the
processors in the server. If a server goes down, users on that server must be transferred to the remaining
servers. For this degraded failover case, Lenovo testing shows that 150 users per server have an average of
89% usage rate. It is important to keep 25% headroom on servers to cope with possible failover scenarios.
Lenovo recommends a general failover ratio of 5:1.

Table 25 lists the processor usage for the recommended number of users.
Table 25: Processor usage

| Processor | Workload | Users per Server | Stateless Utilization | Dedicated Utilization |
|---|---|---|---|---|
| Two E5-2680 v3 | Office worker | 125 (normal mode) | 51% | 50% |
| Two E5-2680 v3 | Office worker | 150 (failover mode) | 62% | 59% |
| Two E5-2680 v3 | Knowledge worker | 125 (normal mode) | 62% | 61% |
| Two E5-2680 v3 | Knowledge worker | 150 (failover mode) | 81% | 78% |

Table 26 lists the recommended number of virtual desktops per server for different VM memory sizes. The
number of users is reduced in some cases to fit within the available memory and still maintain a reasonably
balanced system of compute and memory.
Table 26: Recommended number of virtual desktops per server for VSAN

| Processor | E5-2680 v3 | E5-2680 v3 | E5-2680 v3 |
|---|---|---|---|
| VM memory size | 2 GB (default) | 3 GB | 4 GB |
| System memory | 384 GB | 512 GB | 768 GB |
| Desktops per server (normal mode) | 125 | 125 | 125 |
| Desktops per server (failover mode) | 150 | 150 | 150 |

Table 27 shows the number of servers that are needed for different numbers of users. By using the target of 125
users per server, the maximum number of users is 4000. The minimum number of servers that is required for
VSAN is three; this requirement is reflected in the extra capacity for the 300-user case because the configuration
can actually support up to 450 users.
Table 27: Compute servers needed for different numbers of users for VSAN

| | 300 users | 600 users | 1500 users | 3000 users |
|---|---|---|---|---|
| Compute servers for 125 users (normal) | | | 12 | 24 |
| Compute servers for 150 users (failover) | | | 10 | 20 |
| Failover ratio | 3:1 | 4:1 | 5:1 | 5:1 |
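
The VSAN cluster sizing above uses the same ceiling arithmetic as the other compute-server tables, with a three-server floor for the VSAN minimum; a sketch under that assumption:

```python
import math

def vsan_servers(users: int, users_per_server: int, minimum: int = 3) -> int:
    """Servers at a given density, never fewer than the three-node VSAN minimum."""
    return max(minimum, math.ceil(users / users_per_server))

print(vsan_servers(1500, 125), vsan_servers(1500, 150))  # 12 10 (as in Table 27)
print(vsan_servers(3000, 125), vsan_servers(3000, 150))  # 24 20
print(vsan_servers(300, 125))   # 3 -> the three-server minimum dominates
```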

The processor and I/O usage graphs from esxtop are helpful to understand the performance characteristics of
VSAN. The graphs are for 150 users per server and three servers to show the worst-case load in a failover
scenario.
Figure 5 shows the processor usage with three curves (one for each server). The Y axis is percentage usage
0% - 100%. The curves have the classic Login VSI shape with a gradual increase of processor usage and then
flat during the steady state period. The curves then go close to 0 as the logoffs are completed.


Figure 5: VSAN processor usage for 450 virtual desktops (stateless 450 users on 3 servers; dedicated 450
users on 3 servers)


Figure 6 shows the SSD reads and writes with six curves, one for each SSD. The Y axis is 0 - 10,000 IOPS for the reads and 0 - 3,000 IOPS for the writes. The read curves generally show a gradual increase of reads until the steady state and then drop off again for the logoff phase. This pattern is better defined for the second set of curves for the SSD writes.
(Two panels: Stateless 450 users on 3 servers; Dedicated 450 users on 3 servers)
Figure 6: VSAN SSD reads and writes for 450 virtual desktops
Figure 7 shows the HDD reads and writes with 36 curves, one for each HDD. The Y axis is 0 - 2,000 IOPS for
the reads and 0 - 1,000 IOPS for the writes. The number of read IOPS has an average peak of 200 IOPS and
many of the drives are idle much of the time. The number of write IOPS has a peak of 500 IOPS; however, as can be seen, the writes occur in batches as data is destaged from the SSD cache onto the greater capacity HDDs. The first group of write peaks occurs during the logon period and the last group corresponds to the logoff period.
(Two panels: Stateless 450 users on 3 servers; Dedicated 450 users on 3 servers)
Figure 7: VSAN HDD reads and writes for 450 virtual desktops

VSAN Resiliency Tests


An important part of a hyper-converged system is the resiliency to failures when a compute server is
unavailable. System performance was measured for the following use cases that featured 450 users:
• Enter maintenance mode by using the VSAN migration mode of ensure accessibility (VSAN default)
• Enter maintenance mode by using the VSAN migration mode of no data migration
• Server power off by using the VSAN migration mode of ensure accessibility (VSAN default)

For each use case, Login VSI was run and then the compute server was removed. This process was done during the login phase, as new virtual desktops were logging in, and during the steady state phase. For the steady state phase, 114 - 120 VMs must be migrated from the failed server to the other three servers, with each server gaining 38 - 40 VMs.
Table 28 lists the completion time or downtime for the VSAN system with the three different use cases.


Table 28: VSAN Resiliency Testing and Recovery Time

Use Case         | vSAN Migration Mode  | Login Phase completion time (seconds) | Login Phase downtime (seconds) | Steady State completion time (seconds) | Steady State downtime (seconds)
Maintenance mode | Ensure accessibility | 605                                   | none                           | 598                                    | none
Maintenance mode | No data migration    | 408                                   | none                           | 430                                    | none
Server power off | Ensure accessibility | N/A                                   | 226                            | N/A                                    | 316

For the two maintenance mode cases, all of the VMs migrated smoothly to the other servers and there was no
significant interruption in the Login VSI test.
For the power off use case, there is a significant period for the system to readjust. During the login phase for a Login VSI test, the following process was observed:
1. All logged in users were logged out from the failed node.
2. Login failed for all new users logging in to the desktops running on the failed node.
3. Desktop status changed to "Agent Unreachable" for all desktops on the failed node.
4. All desktops were migrated to other nodes.
5. Desktop status changed to "Available".
6. Login continued successfully for all new users.
In a production system, users with persistent desktops that are running on the failed server must login again
after their VM was successfully migrated to another server. Stateless users can continue working almost
immediately, assuming that the system is not at full capacity and other stateless VMs are ready to be used.
Figure 8 shows the processor usage for the four servers during the login phase and steady state phase by
using Login VSI with the knowledge worker workload when one of the servers is powered off. The processor
spike for the three remaining servers is apparent.

(Two panels: Login Phase; Steady State Phase)
Figure 8: VSAN processor utilization: Server power off


There is an impact on performance and time lag if a hyper-converged server suffers a catastrophic failure yet
VSAN can recover quite quickly. However, this situation is best avoided and it is important to build in
redundancy at multiple levels for all mission critical systems.

4.4.2 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX
Atlantis USX was tested by using the knowledge worker workload of Login VSI 4.1. Four Lenovo x3650 M5 servers with E5-2680 v3 processors were networked together by using a 10 GbE TOR switch. Atlantis USX was installed and four 400 GB SSDs per server were used to create an all-flash hyper-converged volume across the four servers, which were running ESXi 5.5 U2.
This configuration was tested with 500 dedicated virtual desktops on four servers and then three servers to see
the difference if one server is unavailable. Table 29 lists the processor usage for the recommended number of
users.
Table 29: Processor usage for Atlantis USX

Processor      | Workload         | Servers | Users per Server    | Utilization
Two E5-2680 v3 | Knowledge worker | 4       | 125 (normal mode)   | 66%
Two E5-2680 v3 | Knowledge worker | 3       | 167 (failover mode) | 89%

From these measurements, Lenovo recommends 125 users per server in normal mode and 150 users per server in failover mode. Lenovo recommends a general failover ratio of 5:1.
Table 30 lists the recommended number of virtual desktops per server for different VM memory sizes.
Table 30: Recommended number of virtual desktops per server for Atlantis USX

Processor                           | E5-2680 v3     | E5-2680 v3 | E5-2680 v3
VM memory size                      | 2 GB (default) | 3 GB       | 4 GB
System memory                       | 384 GB         | 512 GB     | 768 GB
Memory for ESXi and Atlantis USX    | 63 GB          | 63 GB      | 63 GB
Memory for virtual machines         | 321 GB         | 449 GB     | 705 GB
Desktops per server (normal mode)   | 125            | 125        | 125
Desktops per server (failover mode) | 150            | 150        | 150
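The memory rows of Table 30 bound the density as follows. This is a sketch of the arithmetic only, not Atlantis sizing guidance; the 63 GB overhead and 125-user target are taken from the table and text above.

```python
def desktops_per_server(system_gb, vm_gb, overhead_gb=63, target=125):
    """Sketch of the Table 30 arithmetic: memory left after the ESXi and
    Atlantis USX overhead caps the density, bounded by the recommended
    target of users per server."""
    available = system_gb - overhead_gb   # memory left for desktop VMs
    return min(target, available // vm_gb)
```

With the default 2 GB VMs, 321 GB of VM memory would fit 160 desktops, so the processor-driven target of 125 (or 150 in failover mode) is the binding limit in every column.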

Table 31 lists the approximate number of compute servers that are needed for different numbers of users and
VM sizes.
Table 31: Compute servers needed for different numbers of users for Atlantis USX

                                         | 300 users | 600 users | 1500 users | 3000 users
Compute servers for 125 users (normal)   | 4         | 5         | 12         | 24
Compute servers for 150 users (failover) | 3         | 4         | 10         | 20
Failover ratio                           | 3:1       | 4:1       | 5:1        | 5:1


An important part of a hyper-converged system is the resiliency to failures when a compute server is unavailable. Login VSI was run and then the compute server was powered off. This process was done during the steady state phase, in which 114 - 120 VMs were migrated from the failed server to the other three servers, with each server gaining 38 - 40 VMs.
Figure 9 shows the processor usage for the four servers during the steady state phase and when one of the
servers is powered off. The processor spike for the three remaining servers is noticeable.

Figure 9: Atlantis USX processor usage: server power off


There is an impact on performance and a time lag if a hyper-converged server suffers a catastrophic failure, yet Atlantis USX can recover quite quickly. However, this situation is best avoided and it is important to build in redundancy at multiple levels for all mission critical systems.

4.5 Graphics Acceleration


The VMware ESXi 6.0 hypervisor supports the following options for graphics acceleration:
• Dedicated GPU with one GPU per user, which is called virtual dedicated graphics acceleration (vDGA) mode.
• Shared GPU with users sharing a GPU, which is called virtual shared graphics acceleration (vSGA) mode and is not recommended because of user contention for shared use of the GPU.
• GPU hardware virtualization (vGPU) that partitions each GPU for 1 - 8 users. This option requires Horizon 6.1 and is not considered in this release of the Reference Architecture.

VMware also provides software emulation of a GPU, which can be processor-intensive and disruptive to other users, who experience choppiness because of reduced processor performance. Software emulation is not recommended for any user who requires graphics acceleration.
The performance of graphics acceleration was tested on the NVIDIA GRID K1 and GRID K2 adapters by using
the Lenovo System x3650 M5 server and the Lenovo NeXtScale nx360 M5 server. Each of these servers
supports up to two GRID adapters. No significant performance differences were found between these two
servers when used for graphics acceleration and the results apply to both.


Because the vDGA option offers a low user density (8 for GRID K1 and 4 for GRID K2), it is recommended that
this configuration is used only for power users, designers, engineers, or scientists that require powerful
graphics acceleration. Horizon 6.1 is needed to support higher user densities of up to 64 users per server with
two GRID K1 adapters by using the hardware virtualization of the GPU (vGPU mode).
Lenovo recommends that a high powered CPU, such as the E5-2680v3, is used for vDGA and vGPU because
accelerated graphics tends to put an extra load on the processor. For the vDGA option, with only four or eight
users per server, 128 GB of server memory should be sufficient even for the high end GRID K2 users who
might need 16 GB or even 24 GB per VM.
The Heaven benchmark is used to measure the per user frame rate for different GPUs, resolutions, and image
quality. This benchmark is graphics-heavy and is fairly realistic for designers and engineers. Power users or
knowledge workers usually have less intense graphics workloads and can achieve higher frame rates.
Table 32 lists the results of the Heaven benchmark as frames per second (FPS) that are available to each user
with the GRID K1 adapter by using vDGA mode with DirectX 11.
Table 32: Performance of GRID K1 vDGA mode by using DirectX 11

Quality | Tessellation | Anti-Aliasing | Resolution | FPS
High    | Normal       |               | 1024x768   | 15.8
High    | Normal       |               | 1280x768   | 13.1
High    | Normal       |               | 1280x1024  | 11.1

Table 33 lists the results of the Heaven benchmark as FPS that is available to each user with the GRID K2
adapter by using vDGA mode with DirectX 11.
Table 33: Performance of GRID K2 vDGA mode by using DirectX 11

Quality | Tessellation | Anti-Aliasing | Resolution | FPS
Ultra   | Extreme      |               | 1680x1050  | 28.4
Ultra   | Extreme      |               | 1920x1080  | 24.9
Ultra   | Extreme      |               | 1920x1200  | 22.9
Ultra   | Extreme      |               | 2560x1600  | 13.8

The GRID K2 GPU has more than twice the performance of the GRID K1 GPU, even with the higher quality, tessellation, and anti-aliasing settings. This result is expected because of the relative performance characteristics of the GRID K1 and GRID K2 GPUs. The frame rate decreases as the display resolution increases.
Because there are many variables when graphics acceleration is used, Lenovo recommends that testing is
done in the customer environment to verify the performance for the required user workloads.
For more information about the bill of materials (BOM) for GRID K1 and K2 GPUs for Lenovo System x3650 M5 and NeXtScale nx360 M5 servers, see the following corresponding BOMs:
• BOM for enterprise and SMB compute servers on page 45
• BOM for hyper-converged compute servers on page 54

4.6 Management servers


Management servers should have the same hardware specification as compute servers so that they can be
used interchangeably in a worst-case scenario. The VMware Horizon management servers also use the same
ESXi hypervisor, but have management VMs instead of user desktops.
Table 34 lists the VM requirements and performance characteristics of each management service.
Table 34: Characteristics of VMware Horizon management services

Management service VM  | Virtual processors | System memory | Storage | Windows OS | HA needed | Performance characteristic
vCenter Server         |                    | 4 GB          | 15 GB   | 2008 R2    | Yes       | Up to 2000 VMs
vCenter SQL Server     |                    | 4 GB          | 15 GB   | 2008 R2    | Yes       | Double the virtual processors and memory for more than 2500 users
View Connection Server | 4                  | 10 GB         | 40 GB   | 2008 R2    | Yes       | Up to 2000 connections

Table 35 lists the number of management VMs for each size of users following the high-availability and
performance characteristics. The number of vCenter servers is half of the number of vCenter clusters because
each vCenter server can handle two clusters of up to 1000 desktops.
Table 35: Management VMs needed

Horizon management service VM | 600 users | 1500 users | 4500 users | 10000 users
vCenter servers               | 1         | 1          | 3          | 5
vCenter SQL servers           | 2 (1+1)   | 2 (1+1)    | 2 (1+1)    | 2 (1+1)
View Connection Server        | 2 (1+1)   | 2 (1+1)    | 4 (3+1)    | 7 (5+2)
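The vCenter row follows from the rule stated above (each vCenter server handles two clusters of up to 1000 desktops). A minimal sketch of that rule, for illustration only:

```python
import math

def vcenter_servers(users, desktops_per_cluster=1000, clusters_per_vcenter=2):
    """Sketch of the vCenter sizing rule: each vCenter server manages
    two clusters of up to 1000 desktops each."""
    clusters = math.ceil(users / desktops_per_cluster)
    return math.ceil(clusters / clusters_per_vcenter)
```

For example, 4500 users form five clusters, which need three vCenter servers, matching Table 36 below.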

Each management VM requires a certain amount of virtual processors, memory, and disk. There is enough
capacity in the management servers for all of these VMs. Table 36 lists an example mapping of the
management VMs to the four physical management servers for 4500 users.
Table 36: Management server VM mapping (4500 stateless users)

Management service for 4500 stateless users | Management server 1 | Management server 2 | Management server 3 | Management server 4
vCenter servers (3)                         |                     |                     |                     |
vCenter database (2)                        |                     |                     |                     |
View Connection Server (4)                  |                     |                     |                     |

It is assumed that common services, such as Microsoft Active Directory, Dynamic Host Configuration Protocol (DHCP), domain name server (DNS), and Microsoft licensing servers exist in the customer environment.
For shared storage systems that support block data transfers only, it is also necessary to provide some file I/O servers that support CIFS or NFS shares and translate file requests to the block storage system. For high availability, two or more Windows storage servers are clustered.
Based on the number and type of desktops, Table 37 lists the recommended number of physical management servers. In all cases, there is redundancy in the physical management servers and the management VMs.
Table 37: Management servers needed

Management servers          | 600 users | 1500 users | 4500 users | 10000 users
Stateless desktop model     |           |            |            |
Dedicated desktop model     |           |            |            |
Windows Storage Server 2012 |           |            |            |
For more information, see BOM for enterprise and SMB management servers on page 57.

4.7 Shared storage


VDI workloads, such as virtual desktop provisioning, VM loading across the network, and access to user
profiles and data files place huge demands on network shared storage.
Experimentation with VDI infrastructures shows that the input/output operation per second (IOPS) performance
takes precedence over storage capacity. This precedence means that more slower speed drives are needed to
match the performance of fewer higher speed drives. Even with the fastest HDDs available today (15k rpm),
there can still be excess capacity in the storage system because extra spindles are needed to provide the
IOPS performance. From experience, this extra storage is more than sufficient for the other types of data such
as SQL databases and transaction logs.
The large rate of IOPS, and therefore, large number of drives needed for dedicated virtual desktops can be
ameliorated to some extent by caching data in flash memory or SSD drives. The storage configurations are
based on the peak performance requirement, which usually occurs during the so-called logon storm. This is
when all workers at a company arrive in the morning and try to start their virtual desktops, all at the same time.
It is always recommended that user data files (shared folders) and user profile data are stored separately from
the user image. By default, this has to be done for stateless virtual desktops and should also be done for
dedicated virtual desktops. It is assumed that 100% of the users at peak load times require concurrent access
to user data and profiles.
In View 5.1, VMware introduced the View Storage Accelerator (VSA) feature that is based on the ESXi
Content-Based Read Cache (CBRC). VSA provides a per-host RAM-based solution for VMs, which
considerably reduces the read I/O requests that are issued to the shared storage. Performance measurements
by Lenovo show that VSA has a negligible effect on the number of virtual desktops that can be used on a
compute server while it reduces the read requests to storage by one-fifth.
Stateless virtual desktops can use SSDs for the linked clones and the replicas. Table 38 lists the peak IOPS
and disk space requirements for stateless virtual desktops on a per-user basis.


Table 38: Stateless virtual desktop shared storage performance requirements

Stateless virtual desktops         | Protocol     | Size   | IOPS | Write %
vSwap (recommended to be disabled) | NFS or Block |        |      |
User data files                    | CIFS/NFS     | 5 GB   |      | 75%
User profile (through MSRP)        | CIFS         | 100 MB | 0.8  | 75%

Table 39 summarizes the peak IOPS and disk space requirements for dedicated or shared stateless virtual
desktops on a per-user basis. Persistent virtual desktops require a high number of IOPS and a large amount of
disk space for the VMware linked clones. Note that the linked clones also can grow in size over time. Stateless
users that require mobility and have no local SSDs also fall into this category. The last three rows of Table 39
are the same as in Table 38 for stateless desktops.
Table 39: Dedicated or shared stateless virtual desktop shared storage performance requirements

Dedicated virtual desktops         | Protocol     | Size   | IOPS | Write %
Replica                            | Block/NFS    | 30 GB  |      |
Linked clones                      | Block/NFS    | 10 GB  | 18   | 85%
User AppData folder                |              |        |      |
vSwap (recommended to be disabled) | NFS or Block |        |      |
User data files                    | CIFS/NFS     | 5 GB   |      | 75%
User profile (through MSRP)        | CIFS         | 100 MB | 0.8  | 75%

The sizes and IOPS for user data files and user profiles that are listed in Table 38 and Table 39 can vary
depending on the customer environment. For example, power users might require 10 GB and five IOPS for
user files because of the applications they use. It is assumed that 100% of the users at peak load times require
concurrent access to user data files and profiles.
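As a rough aggregation of the per-user figures in Table 39, using only the rows whose IOPS survive in the table (linked clones at 18 IOPS with 85% writes, user profile at 0.8 IOPS with 75% writes), the peak load that a storage controller must sustain can be estimated as follows. This is a sketch under those assumptions, not a complete sizing.

```python
def peak_iops(users, per_user_rows=((18, 0.85), (0.8, 0.75))):
    """Estimate aggregate peak storage load for dedicated desktops.

    per_user_rows: (IOPS, write fraction) pairs per user, defaulting to
    the linked-clone and user-profile rows of Table 39.
    Returns (total, write, read) IOPS.
    """
    total = sum(iops for iops, _ in per_user_rows) * users
    write = sum(iops * w for iops, w in per_user_rows) * users
    return total, write, total - write
```

For 600 dedicated users this yields roughly 11,280 total IOPS, of which about 9,540 are writes, which illustrates why write-heavy linked-clone traffic dominates the drive-count calculations that follow.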
Many customers need a hybrid environment of stateless and dedicated desktops for their users. The IOPS for
dedicated users outweigh those for stateless users; therefore, it is best to bias towards dedicated users in any
storage controller configuration.
The storage configurations that are presented in this section include conservative assumptions about the VM
size, changes to the VM, and user data sizes to ensure that the configurations can cope with the most
demanding user scenarios.
This reference architecture describes the following different shared storage solutions:
• Block I/O to IBM Storwize V7000 / Storwize V3700 storage using Fibre Channel (FC)
• Block I/O to IBM Storwize V7000 / Storwize V3700 storage using FC over Ethernet (FCoE)
• Block I/O to IBM Storwize V7000 / Storwize V3700 storage using Internet Small Computer System Interface (iSCSI)
• Block I/O to IBM FlashSystem 840 with Atlantis ILIO storage acceleration

4.7.1 IBM Storwize V7000 and IBM Storwize V3700 storage


The IBM Storwize V7000 generation 2 storage system supports up to 504 drives by using up to 20 expansion
enclosures. Up to four controller enclosures can be clustered for a maximum of 1056 drives (44 expansion
enclosures). The Storwize V7000 generation 2 storage system also has a 64 GB cache, which is expandable to
128 GB.
The IBM Storwize V3700 storage system is somewhat similar to the Storwize V7000 storage, but is restricted
to a maximum of five expansion enclosures for a total of 120 drives. The maximum size of the cache for the
Storwize V3700 is 8 GB.
The Storwize cache acts as a read cache and a write-through cache and is useful to cache commonly used
data for VDI workloads. The read and write cache are managed separately. The write cache is divided up
across the storage pools that are defined for the Storwize storage system.
In addition, Storwize storage offers the IBM Easy Tier function, which allows commonly used data blocks to
be transparently stored on SSDs. There is a noticeable improvement in performance when Easy Tier is used,
which tends to tail off as SSDs are added. It is recommended that approximately 10% of the storage space is
used for SSDs to give the best balance between price and performance.
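The 10% guidance can be sketched as follows. Rounding up to an even drive count for mirrored pairs is an assumption for illustration, not a stated Storwize rule.

```python
import math

def easytier_ssds(spinning_drives, fraction=0.10, raid_group=2):
    """Sketch of the ~10% Easy Tier guidance: SSD count is about 10% of
    the spinning-drive count, rounded up to a whole RAID group."""
    n = math.ceil(spinning_drives * fraction)
    return math.ceil(n / raid_group) * raid_group
```

For example, a pool of 100 HDDs would get about 10 Easy Tier SSDs under this guidance.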
The tiered storage support of Storwize storage also allows a mixture of different disk drives. Slower drives can
be used for shared folders and profiles; faster drives and SSDs can be used for persistent virtual desktops and
desktop images.
To support file I/O (CIFS and NFS) into Storwize storage, Windows storage servers must be added, as
described in Management servers on page 30.
The fastest HDDs that are available for Storwize storage are 15k rpm drives in a RAID 10 array. Storage
performance can be significantly improved with the use of Easy Tier. If this performance is insufficient, SSDs or
alternatives (such as a flash storage system) are required.
For this reference architecture, it is assumed that each user has 5 GB for shared folders and profile data and
uses an average of 2 IOPS to access those files. Investigation into the performance shows that 600 GB
10k rpm drives in a RAID 10 array give the best ratio of input/output operation performance to disk space. If
users need more than 5 GB for shared folders and profile data then 900 GB (or even 1.2 TB), 10k rpm drives
can be used instead of 600 GB. If less capacity is needed, the 300 GB 15k rpm drives can be used for shared
folders and profile data.
Persistent virtual desktops require both a high number of IOPS and a large amount of disk space for the linked clones. The linked clones also can grow in size over time. For persistent desktops, 300 GB 15k rpm drives configured as RAID 10 were not sufficient and extra drives were required to achieve the necessary performance. Therefore, it is recommended to use a mixture of both speeds of drives for persistent desktops and shared folders and profile data.
Depending on the number of master images, one or more RAID 1 arrays of SSDs can be used to store the VM master images. This configuration helps with the performance of provisioning virtual desktops; that is, a boot storm. Each master image requires at least double its space. The actual number of SSDs in the array depends on the number and size of images. In general, more users require more images.


Table 40: VM images and SSDs

                              | 600 users  | 1500 users | 4500 users            | 10000 users
Image size                    | 30 GB      | 30 GB      | 30 GB                 | 30 GB
Number of master images       | 2          | 4          | 8                     | 16
Required disk space (doubled) | 120 GB     | 240 GB     | 480 GB                | 960 GB
400 GB SSD configuration      | RAID 1 (2) | RAID 1 (2) | Two RAID 1 arrays (4) | Four RAID 1 arrays (8)
Table 41 lists the Storwize storage configuration that is needed for each of the stateless user counts. Only one
Storwize control enclosure is needed for a range of user counts. Based on the assumptions in Table 41, the
IBM Storwize V3700 storage system can support up to 7000 users only.
Table 41: Storwize storage configuration for stateless users

Stateless storage                       | 600 users | 1500 users | 4500 users | 10000 users
400 GB SSDs in RAID 1 for master images | 2         | 2          | 4          | 8
Hot spare SSDs                          |           |            |            |
600 GB 10k rpm in RAID 10 for users     | 12        | 28         | 80         | 168
Hot spare 600 GB drives                 |           |            |            | 12
Storwize control enclosures             | 1         | 1          | 1          | 1
Storwize expansion enclosures           |           |            |            |

Table 42 lists the Storwize storage configuration that is needed for each of the dedicated or shared stateless
user counts. The top four rows of Table 42 are the same as for stateless desktops. Lenovo recommends
clustering the IBM Storwize V7000 storage system and the use of a separate control enclosure for every 2500
or so dedicated virtual desktops. For the 4500 and 10000 user solutions, the drives are divided equally across
all of the controllers. Based on the assumptions in Table 42, the IBM Storwize V3700 storage system can
support up to 1200 users.


Table 42: Storwize storage configuration for dedicated or shared stateless users

Dedicated or shared stateless storage             | 600 users | 1500 users | 4500 users | 10000 users
400 GB SSDs in RAID 1 for master images           | 2         | 2          | 4          | 8
Hot spare SSDs                                    |           |            |            |
600 GB 10k rpm in RAID 10 for users               | 12        | 28         | 80         | 168
Hot spare 600 GB 10k rpm drives                   |           |            |            | 12
300 GB 15k rpm in RAID 10 for persistent desktops | 40        | 104        | 304        | 672
Hot spare 300 GB 15k rpm drives                   |           |            |            | 12
400 GB SSDs for Easy Tier                         |           | 12         | 32         | 64
Storwize control enclosures                       | 1         | 1          | 2          | 4
Storwize expansion enclosures                     |           |            | 16 (2 x 8) | 36 (4 x 9)

Refer to the BOM for shared storage on page 61 for more details.

4.7.2 IBM FlashSystem 840 with Atlantis ILIO storage acceleration


The IBM FlashSystem 840 storage has low latencies and supports high IOPS. It can be used for VDI solutions; however, on its own it is not cost-effective. If the Atlantis ILIO VM is used to provide storage optimization (capacity reduction and IOPS reduction), it becomes a much more cost-effective solution.
Each FlashSystem 840 storage device supports up to 20 TB or 40 TB of storage, depending on the size of the flash modules. To maintain the integrity and redundancy of the storage, it is recommended to use RAID 5. It is not recommended to use this device for small user counts because it is not cost-efficient.
Persistent virtual desktops require the most storage space and are the best candidate for this storage device. The device also can be used for user folders, snap clones, and image management, although these items can be placed on other slower shared storage.
The amount of required storage for persistent virtual desktops varies and depends on the environment. Table 43 is provided for guidance purposes only.
Table 43: FlashSystem 840 storage configuration for dedicated users with Atlantis ILIO VM

Dedicated storage           | 1000 users | 3000 users | 5000 users | 10000 users
IBM FlashSystem 840 storage | 1          | 1          | 1          | 1
2 TB flash module           | 4          | 8          | 12         |
4 TB flash module           |            |            |            | 12
Capacity                    | 4 TB       | 12 TB      | 20 TB      | 40 TB

Refer to the BOM for OEM storage hardware on page 65 for more details.


4.8 Networking
The main driver for the type of networking that is needed for VDI is the connection to shared storage. If the
shared storage is block-based (such as the IBM Storwize V7000), it is likely that a SAN that is based on 8 or
16 Gbps FC, 10 GbE FCoE, or 10 GbE iSCSI connection is needed. Other types of storage can be network
attached by using 1 Gb or 10 Gb Ethernet.
Also, there is user and management virtual local area networks (VLANs) available that require 1 Gb or 10 Gb
Ethernet as described in the Lenovo Client Virtualization reference architecture, which is available at this
website: lenovopress.com/tips1275.
Automated failover and redundancy of the entire network infrastructure and shared storage is important. This
failover and redundancy is achieved by having at least two of everything and ensuring that there are dual paths
between the compute servers, management servers, and shared storage.
If only a single Flex System Enterprise Chassis is used, the chassis switches are sufficient and no other TOR switch is needed. For rack servers or more than one Flex System Enterprise Chassis, TOR switches are required.
For more information, see BOM for networking on page 63.

4.8.1 10 GbE networking


For 10 GbE networking with CIFS, NFS, or iSCSI, the Lenovo RackSwitch G8124E and G8264R TOR switches are recommended because they support VLANs by using Virtual Fabric. Redundancy and automated failover are available by using link aggregation, such as Link Aggregation Control Protocol (LACP), and two of everything. For the Flex System chassis, pairs of the EN4093R switch should be used and connected to a G8124 or G8264 TOR switch. The TOR 10 GbE switches are needed for multiple Flex System chassis or external connectivity. iSCSI also requires converged network adapters (CNAs) that have the LOM extension.
Table 44 lists the TOR 10 GbE network switches for each user size.
Table 44: TOR 10 GbE network switches needed

10 GbE TOR network switch | 600 users | 1500 users | 4500 users | 10000 users
G8124E 24-port switch     |           |            |            |
G8264R 64-port switch     |           |            |            |

4.8.2 10 GbE FCoE networking


FCoE on a 10 GbE network requires converged networking switches such as pairs of the CN4093 switch or the
Flex System chassis and the G8264CS TOR converged switch. The TOR converged switches are needed for
multiple Flex System chassis or for clustering of multiple IBM Storwize V7000 storage systems. FCoE also
requires CNAs that have the LOM extension. Table 45 summarizes the TOR converged network switches for
each user size.


Table 45: TOR 10 GbE converged network switches needed

10 GbE TOR network switch                                       | 600 users | 1500 users | 4500 users | 10000 users
G8264CS 64-port switch (including up to 12 Fibre Channel ports) |           |            |            |

4.8.3 Fibre Channel networking


Fibre Channel (FC) networking for shared storage requires switches. Pairs of the FC3171 8 Gbps FC or
FC5022 16 Gbps FC SAN switch are needed for the Flex System chassis. For top of rack, the Lenovo 3873 16
Gbps FC switches should be used. The TOR SAN switches are needed for multiple Flex System chassis or for
clustering of multiple IBM V7000 Storwize storage systems. Table 46 lists the TOR FC SAN network switches
for each user size.
Table 46: TOR FC network switches needed

Fibre Channel TOR network switch | 600 users | 1500 users | 4500 users | 10000 users
Lenovo 3873 AR2 24-port switch   |           |            |            |
Lenovo 3873 BR1 48-port switch   |           |            |            |

4.8.4 1 GbE administration networking


A 1 GbE network should be used to administer all of the other devices in the system. Separate 1 GbE switches
are used for the IT administration network. Lenovo recommends that redundancy is also built into this network
at the switch level. At minimum, a second switch should be available in case a switch goes down. Table 47 lists
the number of 1 GbE switches for each user size.
Table 47: TOR 1 GbE network switches needed

1 GbE TOR network switch | 600 users | 1500 users | 4500 users | 10000 users
G8052 48-port switch     |           |            |            |

Table 48 shows the number of 1 GbE connections that are needed for the administration network and switches for each type of device. The total number of connections is the sum over each device type of the device count multiplied by the number of connections per device.
Table 48: 1 GbE connections needed

Device                                  | Number of 1 GbE connections for administration
System x rack server                    |
Flex System Enterprise Chassis CMM      |
Flex System Enterprise Chassis switches | 1 per switch (optional)
IBM Storwize V7000 storage controller   |
TOR switches                            |
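The sum-of-products rule described above can be sketched as follows; the device counts in the usage example are hypothetical, not taken from the document.

```python
def admin_connections(inventory):
    """Total 1 GbE administration connections, per the Table 48 rule.

    inventory: iterable of (device_count, connections_per_device) pairs.
    """
    return sum(count * per_device for count, per_device in inventory)
```

For example, four rack servers at one connection each, two chassis CMMs at two connections each, and six chassis switches at one optional connection each would need 14 administration ports.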


4.9 Racks
The number of racks and chassis for Flex System compute nodes depends upon the precise configuration that
is supported and the total height of all of the component parts: servers, storage, networking switches, and Flex
System Enterprise Chassis (if applicable). The number of racks for System x servers is also dependent on the
total height of all of the components. For more information, see the BOM for racks section on page 64.

4.10 Proxy server


As shown in Figure 1 on page 2, there is a proxy server behind the firewall. This proxy server performs several important tasks, including user authorization, secure access, traffic management, and providing high availability to the VMware connection servers. An example is the BIG-IP system from F5.
The F5 BIG-IP Access Policy Manager (APM) provides user authorization and secure access. Other options,
including more advanced traffic management, a single namespace, and username persistence, are available
when BIG-IP Local Traffic Manager (LTM) is added to APM. APM and LTM also provide various logging and
reporting facilities for the system administrator and a web-based configuration utility called iApp.
Figure 10 shows the BIG-IP APM in the demilitarized zone (DMZ) to protect access to the rest of the VDI
infrastructure, including the active directory servers. An Internet user presents security credentials by using a
secure HTTP connection (TCP 443), which is verified by APM that uses Active Directory.

[Figure content: external clients connect to the APM in the DMZ over secure HTTP (TCP 443) and PCoIP
(UDP 4172); the APM provides SSL decryption, authentication, high availability, and PCoIP proxying, and
forwards traffic (TCP 80, TCP/UDP 4172) to the View Connection Servers, which front the hypervisors that
host stateless virtual desktops, hosted desktops and apps, and dedicated virtual desktops; vCenter pools and
Active Directory sit on the internal network, where internal clients connect directly (TCP 80).]
Figure 10: Traffic Flow for BIG-IP Access Policy Manager


The PCoIP connection (UDP 4172) is then natively proxied by APM in a reliable and secure manner, passing it
internally to any available VMware connection server within the View pod, which then interprets the connection
as a normal internal PCoIP session. This process provides the scalability benefits of a BIG-IP appliance and
gives APM and LTM visibility into the PCoIP traffic, which enables more advanced access management
decisions. This process also removes the need for VMware secure connection servers. Untrusted internal
users can also be secured by directing all traffic through APM. Alternatively, trusted internal users can directly
use VMware connection servers.
Various deployment models are described in the F5 BIG-IP deployment guide. For more information, see
Deploying F5 with VMware View and Horizon, which is available from this website:
f5.com/pdf/deployment-guides/vmware-view5-iapp-dg.pdf
For this reference architecture, the BIG-IP APM was tested to determine if it introduced any performance
degradation because of the added functionality of authentication, high availability, and proxy serving. The
BIG-IP APM also includes facilities to improve the performance of the PCoIP protocol.
External clients often are connected over a relatively slow wide area network (WAN). To reduce the effects of a
slow network connection, the external clients were connected by using a 10 GbE local area network (LAN).
Table 49 shows the results with and without the F5 BIG-IP APM by using Login VSI against a single compute
server. The results show that APM can slightly increase the throughput. Testing was not done to determine the
performance with many thousands of simultaneous users because this scenario is highly dependent on a
customer's environment and network configuration.
Table 49: Performance comparison of using F5 BIG-IP APM

                                              Without F5 BIG-IP    With F5 BIG-IP
Processor with medium workload (Dedicated)    208 users            218 users

4.11 Deployment models


This section describes the following examples of different deployment models:

Flex Solution with single Flex System chassis

Flex System with 4500 stateless users

System x server with Storwize V7000 and FCoE

4.11.1 Deployment example 1: Flex Solution with single Flex System chassis
As shown in Table 50, this example is for 1250 stateless users on a single Flex System chassis. There are
10 compute nodes, each supporting 125 users in normal mode and 156 users in the failover case, in which up
to two nodes are unavailable. The IBM Storwize V7000 storage is connected directly to the Flex System
chassis by using FC.
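The user-density arithmetic above can be checked with a short sketch. The document rounds the failover density down to a whole number of users per node.

```python
# User density for deployment example 1: 1250 stateless users on 10 x240
# compute nodes, tolerating up to 2 failed nodes. Failover density is
# rounded down to a whole number of users per node, as in the document.
users, nodes, max_failed_nodes = 1250, 10, 2

normal_density = users // nodes                          # 125 users per node
failover_density = users // (nodes - max_failed_nodes)   # 156 users per node
print(normal_density, failover_density)
```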


Table 50: Deployment configuration for 1250 stateless users with Flex System x240 Compute Nodes

Stateless virtual desktop                  1250 users
x240 compute servers                       10
x240 management servers                    2
x240 Windows storage servers (WSS)         2
Total 300 GB 15k rpm drives                40
Hot spare 300 GB 15k drives
Total 400 GB SSDs
IBM Storwize V7000 controller enclosure
IBM Storwize V7000 expansion enclosure
Total height                               14U
Number of Flex System racks

[Rack diagram: one Flex System Enterprise Chassis holding 10 compute nodes, 2 management nodes, and
2 Windows storage server (WSS) nodes, with Flex System EN4093R and FC5022 switches; the Storwize
V7000 controller enclosure and expansion enclosures are mounted alongside.]

4.11.2 Deployment example 2: Flex System with 4500 stateless users


As shown in Table 51, this example is for 4500 stateless users on Flex System chassis, with each of the
36 compute nodes supporting 125 users in normal mode and 150 users in the failover case.
Table 51: Deployment configuration for 4500 stateless users

Stateless virtual desktop         4500 users
Compute servers                   36
Management servers                4
V7000 Storwize controller         1
V7000 Storwize expansion          3
Flex System EN4093R switches      6
Flex System FC3171 switches       6
Flex System Enterprise Chassis    3
10 GbE network switches           2 x G8264R
1 GbE network switches            2 x G8052
SAN network switches              2 x SAN24B-5
Total height                      44U
Number of racks                   2

Figure 11 shows the deployment diagram for this configuration. The first rack contains the compute and
management servers and the second rack contains the shared storage.


[Figure content: TOR switches (2 x G8052, 2 x SAN24B-5, 2 x G8124) at the top of the rack; management
VMs M1-M4 each run a vCenter Server and a Connection Server, with M2 and M3 also running a vCenter
SQL Server; each compute server (Cxx) hosts 125 user VMs.]
Figure 11: Deployment diagram for 4500 stateless users using Storwize V7000 shared storage
Figure 12 shows the 10 GbE and Fibre Channel networking that is required to connect the three Flex System
Enterprise Chassis to the Storwize V7000 shared storage. The detail is shown for one chassis in the middle
and abbreviated for the other two chassis. The 1 GbE management infrastructure network is not shown for
clarity.
Redundant 10 GbE networking is provided at the chassis level with two EN4093R switches and at the rack
level by using two G8264R TOR switches. Redundant SAN networking is also used with two FC3171 switches
and two top of rack SAN24B-5 switches. The two controllers in the Storwize V7000 are redundantly connected
to each of the SAN24B-5 switches.


Figure 12: Network diagram for 4500 stateless users using Storwize V7000 shared storage


4.11.3 Deployment example 3: System x server with Storwize V7000 and FCoE
This deployment example is derived from an actual customer deployment with 3000 users, 90% of which are
stateless and need a 2 GB VM. The remaining 10% (300 users) need a dedicated VM of 3 GB. Therefore, the
average VM size is 2.1 GB.
Assuming 125 users per server in the normal case and 150 users per server in the failover case, 3000 users
need 24 compute servers. A maximum of four compute servers can be down, which gives a 5:1 failover ratio.
Each compute server needs at least 315 GB of RAM (150 x 2.1), not including the hypervisor. This figure is
rounded up to 384 GB, which should be more than enough and can cope with up to 125 users, all with 3 GB VMs.
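The sizing arithmetic in this example can be reproduced with a short sketch; all values come from the text, and the rounding to 384 GB (24 x 16 GB DIMMs) is the document's own choice.

```python
import math

# Sizing arithmetic for deployment example 3: 3000 users, 90% stateless
# (2 GB VMs) and 10% dedicated (3 GB VMs); 125 users per server in normal
# mode and 150 per server in the failover case.
users_total = 3000
avg_vm_gb = 0.90 * 2.0 + 0.10 * 3.0      # 2.1 GB average VM size

servers = math.ceil(users_total / 125)    # 24 compute servers
ram_per_server_gb = 150 * avg_vm_gb       # 315 GB, excluding the hypervisor
# The document rounds this up to 24 x 16 GB DIMMs = 384 GB per server.
print(avg_vm_gb, servers, ram_per_server_gb)
```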
Each compute server is a System x3550 server with two Xeon E5-2650v2 series processors, 24 x 16 GB of
1866 MHz RAM, an embedded dual-port 10 GbE virtual fabric adapter (A4MC), and a license for FCoE/iSCSI
(A2TE). For interchangeability between the servers, all of them have a RAID controller with 1 GB flash upgrade
and two S3700 400 GB MLC enterprise SSDs that are configured as RAID 0 for the stateless VMs.
In addition, there are three management servers. For interchangeability in case of server failure, these extra
servers are configured in the same way as the compute servers. All the servers have a USB key with ESXi 5.5.
There also are two Windows storage servers that are configured differently, with HDDs in a RAID 1 array for
the operating system. Some spare, preloaded drives are kept so that a replacement Windows storage server
can be deployed quickly if one fails; the replacement server can be one of the compute servers. Although there
is a low likelihood of both servers failing, keeping spares reduces the window of failure for the two critical
Windows storage servers.
All of the servers communicate with the Storwize V7000 shared storage by using FCoE through two TOR
RackSwitch G8264CS 10GbE converged switches. All 10 GbE and FC connections are configured to be fully
redundant. As an alternative, iSCSI with G8264 10GbE switches can be used.
For 300 persistent users and 2700 stateless users, a mixture of disk configurations is needed. All of the users
require space for user folders and profile data. Stateless users need space for master images and persistent
users need space for the virtual clones. Stateless users have local SSDs to cache everything else, which
substantially decreases the amount of shared storage. Because the stateless VMs run from local SSDs, these
servers cannot be evacuated with vMotion; maintenance can be performed only after all of the users are
logged off and the server is taken offline. If a server crashes, this issue is immaterial.
It is estimated that this configuration requires the following IBM Storwize V7000 drives:

• Two 400 GB SSDs in RAID 1 for master images
• Thirty 300 GB 15K drives in RAID 10 for persistent images
• Four 400 GB SSDs for Easy Tier with the persistent images
• Sixty 600 GB 10K drives in RAID 10 for user folders
This configuration requires 96 drives, which fit into one Storwize V7000 control enclosure and three expansion
enclosures.
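A quick check of the drive total against enclosure capacity confirms the enclosure count; this assumes 24 x 2.5-inch drive bays per Storwize V7000 enclosure, which is standard for the 2.5-inch form factor.

```python
import math

# Drive-count check for deployment example 3. Assumes 24 x 2.5" drive bays
# per Storwize V7000 enclosure (control or expansion).
drives = {
    "400 GB SSD, RAID 1, master images": 2,
    "300 GB 15K, RAID 10, persistent images": 30,
    "400 GB SSD, Easy Tier, persistent images": 4,
    "600 GB 10K, RAID 10, user folders": 60,
}
bays_per_enclosure = 24

total_drives = sum(drives.values())                                # 96 drives
enclosures_needed = math.ceil(total_drives / bays_per_enclosure)   # 4 total
print(total_drives, enclosures_needed)  # 1 control + 3 expansion enclosures
```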
Figure 13 shows the deployment configuration for this example in a single rack. Because the rack contains
36 items, it requires six power distribution units (PDUs) for 1+1 power redundancy, where each PDU has
12 C13 sockets.
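The PDU arithmetic is easy to verify: 36 items with two power feeds each (for 1+1 redundancy) against 12 C13 sockets per PDU.

```python
# 1+1 power-redundancy check for the single-rack layout: every item draws
# two feeds, and each PDU provides 12 C13 sockets.
rack_items = 36
feeds_per_item = 2        # 1+1 redundancy
sockets_per_pdu = 12

total_feeds = rack_items * feeds_per_item              # 72 feeds
pdus_needed = -(-total_feeds // sockets_per_pdu)       # ceiling: 6 PDUs
print(pdus_needed)
```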

Figure 13: Deployment configuration for 3000 stateless users with System x servers


5 Appendix: Bill of materials


This appendix contains the bill of materials (BOMs) for different configurations of hardware for VMware Horizon
deployments. There are sections for user servers, management servers, storage, networking switches, chassis,
and racks that are orderable from Lenovo. The last section is for hardware orderable from an OEM.
The BOM lists in this appendix are not meant to be exhaustive and must always be double-checked with the
configuration tools. Any discussion of pricing, support, and maintenance options is outside the scope of this
document.
For connections between TOR switches and devices (servers, storage, and chassis), the connector cables are
configured with the device. The TOR switch configuration includes only transceivers or other cabling that is
needed for failover or redundancy.

5.1 BOM for enterprise and SMB compute servers


This section contains the bill of materials for enterprise and SMB compute servers.

Flex System x240


Code      Description                                                           Quantity
9532AC1   Flex System node x240 M5 Base Model                                   1
A5SX      Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W          1
A5TE      Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W          1
A5RM      Flex System x240 M5 Compute Node                                      1
A5S0      System Documentation and Software-US English                          1
A5SG      Flex System x240 M5 2.5" HDD Backplane                                1
A5RP      Flex System CN4052 2-port 10Gb Virtual Fabric Adapter                 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5RV      Flex System CN4052 Virtual Fabric Adapter SW Upgrade (FoD)            1
A1BM      Flex System FC3172 2-port 8Gb FC Adapter                              1
A1BP      Flex System FC5022 2-port 16Gb FC Adapter                             1
Select SSD storage for stateless virtual desktops
5978      Select Storage devices - Lenovo-configured RAID                       1
7860      Integrated Solid State Striping                                       1
A4U4      S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x            2
Select amount of system memory
A5B7      16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM      24
A5B9      32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM     16
A5B9      32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM     24
Select flash memory for ESXi hypervisor
A5TJ      Lenovo SD Media Adapter for System x                                  1
ASCH      RAID Adapter for SD Media w/ VMware ESXi 5.5 U2 (1 SD Media)          1

System x3550 M5
Code

Description

5463AC1
A5BL
A5C0
A58X
A59V
A5AG
A5AX

System x3550 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3550 M5 8x 2.5" Base Chassis
System x3550 M5 Planar
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x Advanced LCD Light path Kit

6311
A1ML
A5AB
A5AK
A59F
A59W
A3YZ
A3Z2
A4TR
A5UT
A5AF
A5AG

System x3550 M5 Slide Kit G4


System Documentation and Software-US English
System x3550 M5 4x 2.5" HS HDD Kit
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade

300GB 15K 6Gbps SAS 2.5" G3HS HDD


Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x
System x3550 M5 PCIe Riser 2, 1-2 CPU (LP x16 CPU1 + LP x16 CPU0)
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)
A5UV
Brocade 8Gb FC Dual-port HBA for System x
3591
7595
2U Bracket for Brocade 8GB FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)
Select SSD storage for stateless virtual desktops
5978
Select Storage devices - Lenovo-configured RAID
2302
RAID configuration
A2K6
Primary Array - RAID 0 (2 drives required)
2499
Enable selection of Solid State Drives for Primary Array
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
Select amount of system memory
A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
Select flash memory for ESXi hypervisor
A5R7
32GB Enterprise Value USB Memory Key


Quantity
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
2
1
1
1
1
1
1
2
1
2
1
1
1
1
2
24
16
24
1

System x3650 M5
Code

Description

5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5

System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System Documentation and Software-US English
x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade

A5AX
6311
A1ML
A5FY
A5G3
A4VH
A5EY
A5G6
A3YZ
A3Z2
A4TR
300GB 15K 6Gbps SAS 2.5" G3HS HDD
A5UT
Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x
9297
2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)
A5UV
Brocade 8Gb FC Dual-port HBA for System x
3591
7595
2U Bracket for Brocade 8GB FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)
Select SSD storage for stateless virtual desktops
5978
Select Storage devices - Lenovo-configured RAID
2302
RAID configuration
A2K6
Primary Array - RAID 0 (2 drives required)
2499
Enable selection of Solid State Drives for Primary Array
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
Select amount of system memory
A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
Select flash memory for ESXi hypervisor
A5R7
32GB Enterprise Value USB Memory Key


Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
2
1
1
1
1
1
2
1
2
1
1
1
1
2
24
16
24
1

NeXtScale nx360 M5
Code

Description

5465AC1
A5HH
A5J0
A5JU
A5JX
A1MK
A1ML

nx360 M5 Compute Node Base Model


Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
nx360 M5 Compute Node
nx360 M5 IMM Management Interposer
Lenovo Integrated Management Module Standard Upgrade
Lenovo Integrated Management Module Advanced Upgrade
A5KD
nx360 M5 PCI bracket KIT
A5JF
System Documentation and Software-US English
A5JZ
nx360 M5 RAID Riser
A5V2
nx360 M5 2.5" Rear Drive Cage
A5K3
nx360 M5 1x2, 2.5" 12G HDD short cable, HW RAID (stack-up)
A40Q
Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for System x
A5UX
nx360 M5 ML2 Bracket for Emulex VFA5 ML2 Dual Port 10GbE SFP+
A5JV
nx360 M5 ML2 Riser
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A4NZ
Emulex VFA5 ML2 FCoE/iSCSI License for System x (FoD)
3591
Brocade 8Gb FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
A2XV
Brocade 16Gb FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Select SSD storage for stateless virtual desktops
400GB 12G SAS 2.5" MLC G3HS Enterprise SSD
AS7E
Select amount of system memory
A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
Select flash memory for ESXi hypervisor
A5TJ
Lenovo SD Media Adapter for System x

Quantity
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
2
24
16
1

NeXtScale n1200 chassis


Code      Description                                                Quantity
5456HC1   n1200 Enclosure Chassis Base Model                         1
A41D      n1200 Enclosure Chassis                                    1
A4MM      CFF 1300W Power Supply                                     6
6201      1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable    6
A42S      System Documentation and Software - US English             1
A4AK      KVM Dongle Cable                                           1

NeXtScale nx360 M5 with graphics acceleration


Code

Description

5465AC1
A5HH
A5J0
A5JU
A4MB
A5JX
A1MK

nx360 M5 Compute Node Base Model


Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
nx360 M5 Compute Node
NeXtScale PCIe Native Expansion Tray
nx360 M5 IMM Management Interposer
Lenovo Integrated Management Module Standard Upgrade
A1ML
Lenovo Integrated Management Module Advanced Upgrade
A5KD
nx360 M5 PCI bracket KIT
A5JF
System Documentation and Software-US English
A5JZ
nx360 M5 RAID Riser
A5V2
nx360 M5 2.5" Rear Drive Cage
A5K3
nx360 M5 1x2, 2.5" 12G HDD short cable, HW RAID (stack-up)
A40Q
Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for System x
A5UX
nx360 M5 ML2 Bracket for Emulex VFA5 ML2 Dual Port 10GbE SFP+
A5JV
nx360 M5 ML2 Riser
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A4NZ
Emulex VFA5 ML2 FCoE/iSCSI License for System x (FoD)
3591
Brocade 8Gb FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)
Select SSD storage for stateless virtual desktops
400GB 12G SAS 2.5" MLC G3HS Enterprise SSD
AS7E
Select amount of system memory
A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
Select GRID K1 or GRID K2 for graphics acceleration
A3GM
NVIDIA GRID K1
A3GN
NVIDIA GRID K2
Select flash memory for ESXi hypervisor
A5TJ
Lenovo SD Media Adapter for System x


Quantity
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
2
24
16
2
2
1

5.2 BOM for hosted desktops


Table 18 on page 20 lists the number of management servers that are needed for the different numbers of
users and this section contains the corresponding bill of materials.

Flex System x240


Code      Description                                                           Quantity
9532AC1   Flex System node x240 M5 Base Model                                   1
A5SX      Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W          1
A5TE      Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W          1
A5RM      Flex System x240 M5 Compute Node                                      1
A5S0      System Documentation and Software-US English                          1
A5SG      Flex System x240 M5 2.5" HDD Backplane                                1
A5RP      Flex System CN4052 2-port 10Gb Virtual Fabric Adapter                 1
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5RV      Flex System CN4052 Virtual Fabric Adapter SW Upgrade (FoD)            1
A1BM      Flex System FC3172 2-port 8Gb FC Adapter                              1
A1BP      Flex System FC5022 2-port 16Gb FC Adapter                             1
Select amount of system memory
A5B7      16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM      8
A5B7      16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM      16
Select flash memory for ESXi hypervisor
A5TJ      Lenovo SD Media Adapter for System x                                  1
ASCH      RAID Adapter for SD Media w/ VMware ESXi 5.5 U2 (1 SD Media)          1

System x3550 M5
Code

Description

5463AC1
A5BL
A5C0
A58X
A59V
A5AG
A5AX

System x3550 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3550 M5 8x 2.5" Base Chassis
System x3550 M5 Planar
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x Advanced LCD Light path Kit

6311
A1ML
A5AB
A5AK
A59F
A59W
A3YZ
A3Z2
A5UT
A5AF
A5AG

System x3550 M5 Slide Kit G4


System Documentation and Software-US English
System x3550 M5 4x 2.5" HS HDD Kit
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade

Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x


System x3550 M5 PCIe Riser 2, 1-2 CPU (LP x16 CPU1 + LP x16 CPU0)
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)
A5UV
Brocade 8Gb FC Dual-port HBA for System x
3591
7595
2U Bracket for Brocade 8GB FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)
Select amount of system memory
A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
Select flash memory for ESXi hypervisor
A5R7
32GB Enterprise Value USB Memory Key


Quantity
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
8
16
1

System x3650 M5
Code

Description

5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5

System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System Documentation and Software-US English
x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade

A5AX
6311
A1ML
A5FY
A5G3
A4VH
A5EY
A5G6
A3YZ
A3Z2
A5UT
9297

Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x


2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)
A5UV
Brocade 8Gb FC Dual-port HBA for System x
3591
7595
2U Bracket for Brocade 8GB FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)
Select amount of system memory
A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
Select flash memory for ESXi hypervisor
A5R7
32GB Enterprise Value USB Memory Key


Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
8
16
1

NeXtScale nx360 M5
Code

Description

5465AC1
A5HH
A5J0
A5JU
A5JX
A1MK
A1ML

nx360 M5 Compute Node Base Model


Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
nx360 M5 Compute Node
nx360 M5 IMM Management Interposer
Lenovo Integrated Management Module Standard Upgrade
Lenovo Integrated Management Module Advanced Upgrade
A5KD
nx360 M5 PCI bracket KIT
A5JF
System Documentation and Software-US English
A5JZ
nx360 M5 RAID Riser
A5V2
nx360 M5 2.5" Rear Drive Cage
A5K3
nx360 M5 1x2, 2.5" 12G HDD short cable, HW RAID (stack-up)
A40Q
Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for System x
A5UX
nx360 M5 ML2 Bracket for Emulex VFA5 ML2 Dual Port 10GbE SFP+
A5JV
nx360 M5 ML2 Riser
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A4NZ
Emulex VFA5 ML2 FCoE/iSCSI License for System x (FoD)
3591
Brocade 8Gb FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)
Select amount of system memory
A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
Select flash memory for ESXi hypervisor
A5TJ
Lenovo SD Media Adapter for System x

Quantity
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
1
8
16
1

NeXtScale n1200 chassis


Code      Description                                                Quantity
5456HC1   n1200 Enclosure Chassis Base Model                         1
A41D      n1200 Enclosure Chassis                                    1
A4MM      CFF 1300W Power Supply                                     6
6201      1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable    6
A42S      System Documentation and Software - US English             1
A4AK      KVM Dongle Cable                                           1

5.3 BOM for hyper-converged compute servers


This section contains the bill of materials for hyper-converged compute servers.

System x3550 M5
Code

Description

5463AC1
A5BL
A5C0
A58X
A59V
A5AG
A5AX
6311
A1ML
A5AB
A5AK
A59F
A59W
A59X
A3YZ
A3Z2
5977
A5UT
A5AF
A5AG

System x3550 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3550 M5 8x 2.5" Base Chassis
System x3550 M5 Planar
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x Advanced LCD Light path Kit
System x3550 M5 Slide Kit G4
System Documentation and Software-US English
System x3550 M5 4x 2.5" HS HDD Kit
System x3550 M5 4x 2.5" HS HDD Kit PLUS
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade

Select Storage devices - no Lenovo-configured RAID


Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x
System x3550 M5 PCIe Riser 2, 1-2 CPU (LP x16 CPU1 + LP x16 CPU0)
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
Select amount of system memory
A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
Select drive configuration for hyper-converged system (all flash or SSD/HDD combination)
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
A4TP
1.2TB 10K 6Gbps SAS 2.5" G3HS HDD
2498
Install largest capacity, faster drives starting in Array 1
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7
32GB Enterprise Value USB Memory Key


Quantity
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
1
1
24
16
24
4
2
6
1
1

System x3650 M5
Code

Description

5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5

System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 900W High Efficiency Platinum AC Power Supply
2.8m, 13A/125-10A/250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System x Enterprise Slides Kit
System Documentation and Software-US English
x3650 M5 16x 2.5" HS HDD Assembly Kit (Dual RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade

A5EW
6400
A1ML
A5FY
A5G3
A4VH
A5FV
A5EY
A5GG
A3YZ
A3Z2
5977
A5UT
9297

Select Storage devices - no Lenovo-configured RAID


Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x
2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x

Select amount of system memory


A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
Select drive configuration for hyper-converged system (all flash or SSD/HDD combination)
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
A4TP
1.2TB 10K 6Gbps SAS 2.5" G3HS HDD
2498
Install largest capacity, faster drives starting in Array 1
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7
32GB Enterprise Value USB Memory Key


Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
2
2
1
1
1
24
16
24
4
2
14
1
1

System x3650 M5 with graphics acceleration


Code

Description

5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5

System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 900W High Efficiency Platinum AC Power Supply
2.8m, 13A/125-10A/250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System x Enterprise Slides Kit
System Documentation and Software-US English
x3650 M5 16x 2.5" HS HDD Assembly Kit (Dual RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade

A5EW
6400
A1ML
A5FY
A5G3
A4VH
A5FV
A5EY
A5GG
A3YZ
A3Z2
5977

Select Storage devices - no Lenovo-configured RAID


Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x
2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x

A5UT
9297
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
Select drive configuration for hyper-converged system (all flash or SSD/HDD combination)
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
A4TP
1.2TB 10K 6Gbps SAS 2.5" G3HS HDD
2498
Install largest capacity, faster drives starting in Array 1
Select GRID K1 or GRID K2 for graphics acceleration
AS3G
NVIDIA Grid K1 (Actively Cooled)
A470
NVIDIA Grid K2 (Actively Cooled)
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7
32GB Enterprise Value USB Memory Key


Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
2
2
1
1
1
12
4
2
14
1
2
2
1

5.4 BOM for enterprise and SMB management servers


Table 37 on page 31 lists the number of management servers that are needed for different numbers of users. For redundancy, the bill of materials for the management servers must be the same as that for the compute servers. For more information, see BOM for enterprise and SMB compute servers on page 45.
Because the Windows storage servers use a bare-metal operating system (OS) installation, they require much less memory and can use the reduced configurations that are listed below.
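Because the management-server BOM is meant to track the compute-server BOM except for deliberately reduced items (memory and local storage), the comparison can be sketched programmatically. This is a hypothetical illustration only: the part codes and quantities in the example dictionaries are placeholders, not figures taken from the tables in this document.

```python
# Hedged sketch: diff two BOMs (keyed by part code) to confirm that a
# management-server configuration differs from the compute-server
# configuration only in the intended lines (memory and drives).

def bom_delta(compute_bom: dict, mgmt_bom: dict) -> dict:
    """Return {code: (compute_qty, mgmt_qty)} for every line that differs."""
    codes = set(compute_bom) | set(mgmt_bom)
    return {c: (compute_bom.get(c, 0), mgmt_bom.get(c, 0))
            for c in codes
            if compute_bom.get(c, 0) != mgmt_bom.get(c, 0)}

# Illustrative placeholder configurations (not from this document):
compute = {"MEM16": 24, "HDD12": 14, "NIC10": 1}   # 24 DIMMs, 14 HDDs, one NIC
mgmt    = {"MEM08": 8,  "HDD03": 2,  "NIC10": 1}   # 8 DIMMs, 2 HDDs (RAID 1)

delta = bom_delta(compute, mgmt)
# Shared items (here, the NIC) should not appear in the delta.
assert "NIC10" not in delta
```

A check like this is only a convenience for keeping derived configurations in sync; the authoritative quantities remain those in the BOM tables.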

Flex System x240


Code

Description

9532AC1
A5SX
A5TE
A5RM
A5S0
A5SG
5978
8039
A4TR
A5B8
A5RP

Flex System node x240 M5 Base Model


Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Flex System x240 M5 Compute Node
System Documentation and Software-US English
Flex System x240 M5 2.5" HDD Backplane
Select Storage devices - Lenovo-configured RAID
Integrated SAS Mirroring - 2 identical HDDs required
300GB 15K 6Gbps SAS 2.5" G3HS HDD
8GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
Flex System CN4052 2-port 10Gb Virtual Fabric Adapter
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A5RV
Flex System CN4052 Virtual Fabric Adapter SW Upgrade (FoD)
A1BM
Flex System FC3172 2-port 8Gb FC Adapter
Flex System FC5022 2-port 16Gb FC Adapter
A1BP


Quantity
1
1
1
1
1
1
1
1
1
8
1
1
1
1

System x3550 M5
Code

Description

5463AC1
A5BL
A5C0
A58X
A59V
A5AG
A5AX

System x3550 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3550 M5 8x 2.5" Base Chassis
System x3550 M5 Planar
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x Advanced LCD Light path Kit

6311
A1ML
A5AB
A5AK
A59F
A59W
A3YZ
A3Z2
5978
A2K7
A4TR
A5B8

System x3550 M5 Slide Kit G4


System Documentation and Software-US English
System x3550 M5 4x 2.5" HS HDD Kit
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
Select Storage devices - Lenovo-configured RAID
Primary Array - RAID 1 (2 drives required)
300GB 15K 6Gbps SAS 2.5" G3HS HDD
8GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x
System x3550 M5 PCIe Riser 2, 1-2 CPU (LP x16 CPU1 + LP x16 CPU0)

A5UT
A5AF
A5AG
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)
A5UV
Brocade 8Gb FC Dual-port HBA for System x
3591
7595
2U Bracket for Brocade 8GB FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)


Quantity
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
8
1
1
1
1
1
1
2
1
2

System x3650 M5
Code

Description

5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5

System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System Documentation and Software-US English
x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade

A5AX
6311
A1ML
A5FY
A5G3
A4VH
A5EY
A5G6
A3YZ
A3Z2
5978
A2K7

Select Storage devices - Lenovo-configured RAID


Primary Array - RAID 1 (2 drives required)
300GB 15K 6Gbps SAS 2.5" G3HS HDD
8GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM

A4TR
A5B8
A5UT
Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x
9297
2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)
A5UV
Brocade 8Gb FC Dual-port HBA for System x
3591
7595
2U Bracket for Brocade 8GB FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)


Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
1
8
1
1
1
1
1
2
1
2

NeXtScale nx360 M5
Code

Description

5465AC1
A5HH
A5J0
A5JU
A5JX
A1MK
A1ML

nx360 M5 Compute Node Base Model


Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
nx360 M5 Compute Node
nx360 M5 IMM Management Interposer
Lenovo Integrated Management Module Standard Upgrade
Lenovo Integrated Management Module Advanced Upgrade
nx360 M5 PCI bracket KIT
System Documentation and Software-US English
Select Storage devices - Lenovo-configured RAID
Primary Array - RAID 1 (2 drives required)
300GB 15K 12Gbps SAS 2.5" 512e HDD for NeXtScale System

A5KD
A5JF
5978
A2K7
ASBZ
A5JZ
nx360 M5 RAID Riser
A5V2
nx360 M5 2.5" Rear Drive Cage
A5K3
nx360 M5 1x2, 2.5" 12G HDD short cable, HW RAID (stack-up)
A5B8
8GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A40Q
Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for System x
A5UX
nx360 M5 ML2 Bracket for Emulex VFA5 ML2 Dual Port 10GbE SFP+
A5JV
nx360 M5 ML2 Riser
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A4NZ
Emulex VFA5 ML2 FCoE/iSCSI License for System x (FoD)
Brocade 8Gb FC Dual-port HBA for System x
3591
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)

Quantity
1
1
1
1
1
1
1
1
1
1
1
2
1
1
1
8
1
1
1
1
1
2
1
2

NeXtScale n1200 chassis

Code     Description                                               Quantity
5456HC1  n1200 Enclosure Chassis Base Model                        1
A41D     n1200 Enclosure Chassis                                   1
A4MM     CFF 1300W Power Supply                                    6
6201     1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable   6
A42S     System Documentation and Software - US English            1
A4AK     KVM Dongle Cable                                          1

5.5 BOM for shared storage


This section contains the bill of materials for shared storage.

IBM Storwize V7000 storage


Table 41 and Table 42 on page 34 list the number of Storwize V7000 storage controllers and expansion units
that are needed for different user counts.

IBM Storwize V7000 expansion enclosure

Code     Description                                   Quantity
619524F  IBM Storwize V7000 Disk Expansion Enclosure   1
AHE1     300 GB 2.5-inch 15K RPM SAS HDD               20
AHE2     600 GB 2.5-inch 15K RPM SAS HDD               0
AHF9     600 GB 10K 2.5-inch HDD                       0
AHH2     400 GB 2.5-inch SSD (E-MLC)                   4
AS26     Power Cord - PDU connection                   1

IBM Storwize V7000 control enclosure

Code     Description                                   Quantity
6195524  IBM Storwize V7000 Disk Control Enclosure     1
AHE1     300 GB 2.5-inch 15K RPM SAS HDD               20
AHE2     600 GB 2.5-inch 15K RPM SAS HDD               0
AHF9     600 GB 10K 2.5-inch HDD                       0
AHH2     400 GB 2.5-inch SSD (E-MLC)                   4
AHCB     64 GB to 128 GB Cache Upgrade                 2
AS26     Power Cord - PDU connection                   1
Select network connectivity of 10 GbE iSCSI or 8 Gb FC
AHB5     10Gb Ethernet 4 port Adapter Cards (Pair)     1
AHB1     8Gb 4 port FC Adapter Cards (Pair)            1

IBM Storwize V3700 storage


Table 41 and Table 42 on page 35 show the number of Storwize V3700 storage controllers and expansion units
that are needed for different user counts.

IBM Storwize V3700 control enclosure

Code     Description                                    Quantity
609924C  IBM Storwize V3700 Disk Control Enclosure      1
ACLB     300 GB 2.5-inch 15K RPM SAS HDD                20
ACLC     600 GB 2.5-inch 15K RPM SAS HDD                0
ACLK     600 GB 10K 2.5-inch HDD                        0
ACME     400 GB 2.5-inch SSD (E-MLC)                    4
ACHB     Cache 8 GB                                     2
ACFA     Turbo Performance                              1
ACFN     Easy Tier                                      1
Select network connectivity of 10 GbE iSCSI or 8 Gb FC
ACHM     10Gb iSCSI - FCoE 2 Port Host Interface Card   2
ACHK     8Gb FC 4 Port Host Interface Card              2
ACHS     8Gb FC SW SFP Transceivers (Pair)              2

IBM Storwize V3700 expansion enclosure

Code     Description                                   Quantity
609924E  IBM Storwize V3700 Disk Expansion Enclosure   1
ACLB     300 GB 2.5-inch 15K RPM SAS HDD               20
ACLC     600 GB 2.5-inch 15K RPM SAS HDD               0
ACLK     600 GB 10K 2.5-inch HDD                       0
ACME     400 GB 2.5-inch SSD (E-MLC)                   4

5.6 BOM for networking


For more information about the number and type of TOR network switches that are needed for different user
counts, see the Networking section on page 36.

RackSwitch G8052

Code     Description                                               Quantity
7309HC1  IBM System Networking RackSwitch G8052 (Rear to Front)    1
6201     1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable   2
3802     1.5m Blue Cat5e Cable                                     3
A3KP     IBM System Networking Adjustable 19" 4 Post Rail Kit      1

RackSwitch G8124E

Code     Description                                               Quantity
7309HC6  IBM System Networking RackSwitch G8124E (Rear to Front)   1
6201     1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable   2
3802     1.5m Blue Cat5e Cable                                     1
A1DK     IBM 19" Flexible 4 Post Rail Kit                          1

RackSwitch G8264

Code     Description                                               Quantity
7309HC3  IBM System Networking RackSwitch G8264 (Rear to Front)    1
6201     1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable   2
A3KP     IBM System Networking Adjustable 19" 4 Post Rail Kit      1
5053     IBM SFP+ SR Transceiver                                   2
A1DP     1m IBM QSFP+-to-QSFP+ cable                               1
A1DM     3m IBM QSFP+ DAC Break Out Cable                          0

RackSwitch G8264CS

Code     Description                                                Quantity
7309HCK  IBM System Networking RackSwitch G8264CS (Rear to Front)   1
6201     1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable    2
A2ME     Hot-Swappable, Rear-to-Front Fan Assembly Spare            2
A1DK     IBM 19" Flexible 4 Post Rail Kit                           1
A1DP     1m IBM QSFP+-to-QSFP+ cable                                1
A1DM     3m IBM QSFP+ DAC Break Out Cable                           0
5075     IBM 8Gb SFP+ SW Optical Transceiver                        12

5.7 BOM for racks


The rack count depends on the deployment model. The number of PDUs assumes a fully-populated rack.
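For rough planning, the rack and PDU counts can be sketched as a ceiling calculation. This is a hedged illustration only: the six-PDUs-per-rack figure mirrors the quantities in the rack BOMs below, while the server height, server count, and reserved rack units are illustrative assumptions rather than values from this document.

```python
import math

# Hedged sketch: estimate rack and PDU counts for fully populated 42U racks.

def racks_needed(servers: int, u_per_server: int = 2, rack_u: int = 42,
                 reserved_u: int = 2) -> int:
    """Racks required, reserving a few U per rack for switches and cabling."""
    usable = rack_u - reserved_u
    return math.ceil(servers * u_per_server / usable)

def pdus_needed(racks: int, pdus_per_rack: int = 6) -> int:
    """PDU count, assuming each rack is fully populated with PDUs."""
    return racks * pdus_per_rack

r = racks_needed(servers=30)   # 30 x 2U servers -> 60U over 40 usable U -> 2 racks
assert r == 2 and pdus_needed(r) == 12
```

Actual rack counts depend on the deployment model, so a calculation like this is only a starting point for the detailed BOMs that follow.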

Flex System rack

Code     Description                                                 Quantity
9363RC4  IBM Flex System 42U Rack                                    1
5897     IBM 1U 9 C19/3 C13 Switched and Monitored 60A 3 Phase PDU   6

System x rack

Code     Description                                         Quantity
9363RC4  IBM 42U 1100mm Enterprise V2 Dynamic Rack           1
6012     DPI Single-phase 30A/208V C13 Enterprise PDU (US)   6

5.8 BOM for Flex System chassis


The number of Flex System chassis that are needed for different numbers of users depends on the deployment
model.
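Because the IBM Flex System Enterprise Chassis holds up to 14 half-wide compute nodes, the chassis count for a given deployment model is a simple ceiling calculation. The node count in this sketch is an illustrative input, not a figure from this document.

```python
import math

# Hedged sketch: number of Flex System Enterprise Chassis needed for a
# given number of half-wide compute nodes (14 node bays per chassis).

def chassis_needed(compute_nodes: int, nodes_per_chassis: int = 14) -> int:
    return math.ceil(compute_nodes / nodes_per_chassis)

assert chassis_needed(14) == 1   # exactly one full chassis
assert chassis_needed(15) == 2   # one extra node forces a second chassis
```

The per-chassis quantities in the BOM below (power modules, fans, switches) then scale with the chassis count.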
Code     Description                                                        Quantity
8721HC1  IBM Flex System Enterprise Chassis Base Model                      1
A0TA     IBM Flex System Enterprise Chassis                                 1
A0UC     IBM Flex System Enterprise Chassis 2500W Power Module Standard     2
6252     2.5 m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable           2
A1PH     IBM Flex System Enterprise Chassis 2500W Power Module              4
3803     2.5 m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable           4
A0UA     IBM Flex System Enterprise Chassis 80mm Fan Module                 4
A0UE     IBM Flex System Chassis Management Module                          1
3803     3 m Blue Cat5e Cable                                               2
5053     IBM SFP+ SR Transceiver                                            2
A1DP     1 m IBM QSFP+ to QSFP+ Cable                                       2
A1PJ     3 m IBM Passive DAC SFP+ Cable                                     4
A1NF     IBM Flex System Console Breakout Cable                             1
5075     BladeCenter Chassis Configuration                                  4
6756ND0  Rack Installation >1U Component                                    1
675686H  IBM Fabric Manager Manufacturing Instruction                       1
Select network connectivity for 10 GbE, 10 GbE FCoE or iSCSI, 8 Gb FC, or 16 Gb FC
A3J6     IBM Flex System Fabric EN4093R 10Gb Scalable Switch                2
A3HH     IBM Flex System Fabric CN4093 10Gb Scalable Switch                 2
5075     IBM 8 Gb SFP+ SW Optical Transceiver                               4
5605     Fiber Cable, 5 meter multimode LC-LC                               4
A0UD     IBM Flex System FC3171 8Gb SAN Switch                              2
5075     IBM 8 Gb SFP+ SW Optical Transceiver                               4
5605     Fiber Cable, 5 meter multimode LC-LC                               4
A3DP     IBM Flex System FC5022 16Gb SAN Scalable Switch                    2
A22R     Brocade 16Gb SFP+ SW Optical Transceiver                           4
5605     Fiber Cable, 5 meter multimode LC-LC                               4

5.9 BOM for OEM storage hardware


This section contains the bill of materials for OEM shared storage.

IBM FlashSystem 840 storage


Table 43 on page 35 shows how much FlashSystem storage is needed for different user counts.
Code      Description                                      Quantity
9840-AE1  IBM FlashSystem 840                              1
AF11      4TB eMLC Flash Module                            12
AF10      2TB eMLC Flash Module                            0
AF1B      1 TB eMLC Flash Module                           0
AF14      Encryption Enablement Pack                       1
Select network connectivity for 10 GbE iSCSI, 10 GbE FCoE, 8 Gb FC, or 16 Gb FC
AF17      iSCSI Host Interface Card                        2
AF1D      10 Gb iSCSI 8 Port Host Optics                   2
AF15      FC/FCoE Host Interface Card                      2
AF1D      10 Gb iSCSI 8 Port Host Optics                   2
AF15      FC/FCoE Host Interface Card                      2
AF18      8 Gb FC 8 Port Host Optics                       2
3701      5 m Fiber Cable (LC-LC)                          8
AF15      FC/FCoE Host Interface Card                      2
AF19      16 Gb FC 4 Port Host Optics                      2
3701      5 m Fiber Cable (LC-LC)                          4

5.10 BOM for OEM networking hardware


For more information about the number and type of TOR network switches that are needed for different user
counts, see the Networking section on page 36.

Lenovo 3873 AR2

Code     Description                                   Quantity
3873HC2  Brocade 6505 FC SAN Switch                    1
00MT457  Brocade 6505 12 Port Software License Pack    1
Select network connectivity for 8 Gb FC or 16 Gb FC
88Y6416  Brocade 8Gb SFP+ Optical Transceiver          24
88Y6393  Brocade 16Gb SFP+ Optical Transceiver         24

Lenovo 3873 BR1

Code     Description                                   Quantity
3873HC3  Brocade 6510 FC SAN Switch                    1
00MT459  Brocade 6510 12 Port Software License Pack    2
Select network connectivity for 8 Gb FC or 16 Gb FC
88Y6416  Brocade 8Gb SFP+ Optical Transceiver          48
88Y6393  Brocade 16Gb SFP+ Optical Transceiver         48

Resources
For more information, see the following resources:

Lenovo Client Virtualization reference architecture:
lenovopress.com/tips1275

VMware vSphere:
vmware.com/products/datacenter-virtualization/vsphere

VMware Horizon (with View):
vmware.com/products/horizon-view

VMware VSAN:
vmware.com/products/virtual-san

Atlantis Computing ILIO:
atlantiscomputing.com/products

Flex System Interoperability Guide:
ibm.com/redbooks/abstracts/redpfsig.html

IBM System Storage Interoperation Center (SSIC):
ibm.com/systems/support/storage/ssic/interoperability.wss

View Architecture Planning VMware Horizon 6.0:
pubs.vmware.com/horizon-view-60/topic/com.vmware.ICbase/PDF/horizon-view-60-architecture-planning.pdf

F5 BIG-IP:
f5.com/products/big-ip

Deploying F5 with VMware View and Horizon:
f5.com/pdf/deployment-guides/vmware-view5-iapp-dg.pdf

Document history
Version 1.0  30 Jan 2015
  Conversion to Lenovo format

Version 1.1  5 May 2015
  Added Lenovo thin clients.
  Added VMware VSAN for storage.
  Added performance measurements and recommendations for the Intel E5-2600 v3 processor family with ESXi 6.0 and added BOMs for Lenovo M5 series servers.
  Added more performance measurements and resiliency testing for the VMware VSAN hyper-converged solution.
  Added the Atlantis USX hyper-converged solution.
  Added graphics acceleration performance measurements for VMware ESXi vDGA mode.

Trademarks and special notices


Copyright Lenovo 2015.
References in this document to Lenovo products or services do not imply that Lenovo intends to make them
available in every country.
Lenovo, the Lenovo logo, ThinkCentre, ThinkVision, ThinkVantage, ThinkPlus and Rescue and Recovery are
trademarks of Lenovo.
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United
States, other countries, or both.
Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other
countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Information is provided "AS IS" without warranty of any kind.
All customer examples described are presented as illustrations of how those customers have used Lenovo
products and the results they may have achieved. Actual environmental costs and performance characteristics
may vary by customer.
Information concerning non-Lenovo products was obtained from a supplier of these products, published
announcement material, or other publicly available sources and does not constitute an endorsement of such
products by Lenovo. Sources for non-Lenovo list prices and performance numbers are taken from publicly
available information, including vendor announcements and vendor worldwide homepages. Lenovo has not
tested these products and cannot confirm the accuracy of performance, capability, or any other claims related
to non-Lenovo products. Questions on the capability of non-Lenovo products should be addressed to the
supplier of those products.
All statements regarding Lenovo future direction and intent are subject to change or withdrawal without notice,
and represent goals and objectives only. Contact your local Lenovo office or Lenovo authorized reseller for the
full text of the specific Statement of Direction.
Some information addresses anticipated future capabilities. Such information is not intended as a definitive
statement of a commitment to specific levels of performance, function or delivery schedules with respect to any
future products. Such commitments are only made in Lenovo product announcements. The information is
presented here to communicate Lenovo's current investment and development activities as a good faith effort
to help with our customers' future planning.
Performance is based on measurements and projections using standard Lenovo benchmarks in a controlled
environment. The actual throughput or performance that any user will experience will vary depending upon
considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the
storage configuration, and the workload processed. Therefore, no assurance can be given that an individual
user will achieve throughput or performance improvements equivalent to the ratios stated here.
Photographs shown are of engineering prototypes. Changes may be incorporated in production models.
Any references in this information to non-Lenovo websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this Lenovo product and use of those websites is at your own risk.
