
Maximizing SQL Server Virtualization Performance

By Michael Otey
Without a doubt, performance is the database professional's number-one concern when it comes to virtualizing Microsoft SQL Server. While virtualizing SQL Server is nothing new, even today there are some people who still think that SQL Server is too resource-intensive to virtualize. That's definitely not the case. However, there are several tips and best practices that you need to follow to achieve optimum performance and availability for your virtual SQL Server instances. In this whitepaper, you'll learn about the best practices, techniques, and server platform for virtualizing SQL Server to obtain the maximum virtualized database performance.

Sponsored by

In the first part of this whitepaper, you'll learn about some of the best practices for configuring your virtualization host's central processing unit (CPU), memory, and storage. Next, you'll learn about the best practices for configuring a guest virtual machine (VM) to run SQL Server. You'll see best practices for configuring virtual CPUs and using dynamic memory. You'll also learn about using virtual hard disks (VHDs), configuring SQL Server VM storage, and using solid state disks (SSDs) with your SQL Server VMs. Then you'll see how you can maximize SQL Server 2014 online transaction processing (OLTP) application performance by taking advantage of the new In-Memory OLTP feature.
The second part of this whitepaper will cover some of the practical implementation details required to get the best performance from your SQL Server VMs. Although the specific configuration steps are vital, it's equally important to select the right virtualization platform to provide the scalability and reliability that your organization needs to meet its service level agreements (SLAs). In this section, you'll learn about using the NEC Express5800/A2000 Series Server as a virtualization platform. Here you'll see how its Capacity OPTimization (COPT) feature and high random-access memory (RAM) capacity enable it to support dense virtualization workloads. Then you'll see how NEC's ProgrammableFlow Networking Suite and PF1000 virtual switch integrate with Microsoft Hyper-V and Microsoft System Center Virtual Machine Manager (SCVMM) to provide predictable network bandwidth for your business-critical applications.

Maximizing Host CPU and Memory
Making sure the host is correctly configured is one of the most fundamental aspects of optimizing your virtualization environment. If your host lacks the processing power, RAM, or network bandwidth to run your VMs, you'll never achieve the performance that you need for your tier 1 applications. First, the host has to be sized adequately to run the workloads of all of the VMs that will be simultaneously active. To plan for the proper host capacity, you should use Performance Monitor to create a performance baseline for the workload you intend to virtualize by measuring the peak and average CPU and memory utilization. This workload can be running on a VM, or it can be a physical installation that you plan to migrate to a VM. Aggregating these values for all the different servers that you want to run on your virtualization host will tell you the base processing power and RAM that's needed.
As a general rule for the best performance in your tier 1 VMs, you should plan for a 1:1 ratio of virtual CPUs and physical cores in the system. While nothing prevents you from overcommitting the CPUs for either Hyper-V or VMware vSphere, matching your physical cores to your virtual CPUs will ensure that you always have computing power for that workload. When you're planning the number of virtual CPUs to use in the guest, be sure to remember that the maximum number of virtual CPUs supported can vary depending on the guest OS. Both Windows Server 2012 R2 Hyper-V and vSphere 5.5 provide support for hosts with up to 320 cores and VMs with up to 64 virtual CPUs.
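The sizing arithmetic above can be sketched in a few lines. This is a rough planning aid with hypothetical workload numbers, not a substitute for real Performance Monitor baselines:

```python
# Rough host-sizing sketch: aggregate the measured peak utilization of each
# workload you plan to consolidate (numbers below are hypothetical examples
# of values you'd collect with Performance Monitor).

workloads = [
    # (name, peak CPU cores used, peak memory in GB)
    ("sql-erp",    8, 64),
    ("sql-crm",    4, 32),
    ("sharepoint", 4, 24),
]

# Tier 1 rule of thumb: plan a 1:1 ratio of virtual CPUs to physical cores.
cores_needed = sum(cores for _, cores, _ in workloads)

# Reserve about 1GB of RAM for the host itself to manage the running VMs.
host_reserve_gb = 1
ram_needed_gb = sum(ram for _, _, ram in workloads) + host_reserve_gb

print(f"Physical cores needed: {cores_needed}")   # 16
print(f"Host RAM needed: {ram_needed_gb} GB")     # 121
```

The totals give you a floor, not a ceiling; leave headroom for growth and for workloads whose peaks coincide.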
Next, while you're planning your host's computing resources, you should make sure that the host supports Second Level Address Translation (SLAT) and Non-Uniform Memory Access (NUMA). Most modern servers from tier 1 vendors provide these features, but they might not be present if you're considering using an older hardware platform for virtualization. Both are very important for VM scalability. SLAT has different names, depending on the CPU manufacturer. Intel's version is
called Extended Page Tables (EPT), and AMD calls it Rapid Virtualization Indexing (RVI). SLAT allows the processor to directly handle the translation of guest virtual addresses to host physical addresses without the need for the hypervisor to keep track of a shadow page table, thereby reducing the load on the hypervisor for every guest VM. NUMA support allows NUMA-aware applications like SQL Server to keep their threads working in the high-speed memory that's owned by a local processor. The latest versions of Windows Server 2012 R2 Hyper-V and vSphere 5.5 both provide NUMA support for guest VMs.
The host memory is the next most important consideration after the host's CPU support. First, make sure that you don't allocate all the available host physical RAM to the VMs. Plan to keep about 1GB of memory reserved for the host to manage the running VMs. To prepare for future scalability requirements, it's a best practice to select a host system that supports hot-add RAM. RAM is typically the limiting factor in how many VMs you can run simultaneously, and hot-add RAM enables you to upgrade the host without incurring any downtime. Windows Server 2012/R2 supports hot-add RAM, but you should be aware that hot-add RAM is not supported on every server hardware platform. You should be sure to look for this capability when evaluating virtualization server platforms.
Making sure that there's adequate network bandwidth for your production workloads is the next critical step in the virtualization host's configuration. Trying to funnel all the network traffic for your VMs through too few host network interface cards (NICs) is a common virtualization configuration mistake. You can use Performance Monitor to get an idea of your aggregated network bandwidth requirements, just like you did to estimate the host's CPU and memory requirements. In addition, you should plan for one dedicated NIC for management purposes as well as one dedicated NIC for live migration or vMotion. This will help to separate the network traffic required by these management tasks from your production workloads.
Finally, you should plan for the host's OS to be installed on a separate storage location from the guest VHDs or virtual machine disks (VMDKs). More details about guest VM storage are presented in the following section. In addition, if you're running anti-virus (AV) software on the host, be sure to exclude the VMs from AV scanning. AV scans will impact the performance of the VM, which is something that you want to avoid for your tier 1 applications. Any AV scanning should occur within the VM guest.

Guest VM Configuration Guidelines
One of the most important guest configuration guidelines is to be sure to provide enough memory for the guest. This is especially true if the guest is running a database application like SQL Server or Microsoft SharePoint. As a general rule of thumb, the more memory you can give SQL Server VMs the better, up to a point. The actual requirements depend on the application and workload. One best practice is to take advantage of the hypervisor's ability to support dynamic memory. Both Hyper-V and vSphere can take advantage of dynamic memory. Microsoft fully supports running SQL Server with dynamic memory to increase server consolidation ratios and increase database performance. One best performance practice with dynamic memory is to avoid setting a maximum ceiling and to let the VM expand its memory if the VM experiences memory pressure.

When you create a guest VM, you have three basic choices for VHD types. Microsoft and VMware each have slightly different names for these VHD formats, but they're essentially the same: fixed virtual disks, dynamic disks, and differencing disks. Fixed virtual disks provide the best performance, but they also require the most disk storage. Fixed virtual disks provide almost the same performance as native Direct Attached Storage (DAS).

Dynamic disks are slightly slower and require much less storage than fixed virtual disks. However, the hypervisor must expand dynamic disks when they need more storage, and the execution of the VM is paused during this process. You would typically use fixed virtual disks to avoid this situation for business-critical SQL Server instances.

Differencing disks are the slowest type of VHD, but they also require the least disk space. Differencing disks are best suited for lab and help desk scenarios, not for running production applications.
Next, when you're configuring the VM itself for SQL Server, one of the most important best practices is to create multiple VHDs and use them to split out the SQL Server production database and log files as well as tempdb. If you don't change the defaults, the SQL Server installation puts everything on the drive with the SQL Server binaries. In the case of a VM, this means that the guest OS, the database data files, the database log files, tempdb, and the other system databases would all be on the same VHD. That configuration can work for some small installations, but it certainly won't give you the best database performance. Putting the data and log files on separate VHDs that use different drives will definitely provide far better performance. In addition, just like in a physical installation, you should place the VHD containing the log files on fast-writing drives that use RAID 1 or RAID 10. Another best storage configuration practice is to put tempdb on its own drive using a VHD that's separate from the data and log files. Tempdb can be a very active database with lots of write activity, so like the log files, a best practice is to use RAID 1 or RAID 10, if possible, for the drives on which the tempdb database is placed.
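As a quick sanity check, a planned layout like the one described above can be validated programmatically. The drive letters below are purely hypothetical placeholders for separate VHDs:

```python
# Sketch: check that a planned SQL Server file layout keeps the data files,
# log files, and tempdb on separate VHDs/drives (hypothetical drive letters).

layout = {
    "os_and_binaries": "C:",  # guest OS and SQL Server binaries
    "data_files":      "D:",  # database data files on their own VHD
    "log_files":       "E:",  # log files on fast-writing RAID 1/10 storage
    "tempdb":          "F:",  # tempdb isolated from both data and log
}

separated = ["data_files", "log_files", "tempdb"]
drives = [layout[name] for name in separated]

# All three should land on distinct drives for the best performance.
assert len(set(drives)) == len(drives), "data, log, and tempdb share a drive"
print("Layout OK:", ", ".join(f"{k}={v}" for k, v in layout.items()))
```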

Another important factor for performance that's easy to overlook is the installation of the Hyper-V Integration Services or VMware Tools on the guest. These VM add-ins provide optimized device drivers for the VM. For instance, when you install the Integration Services on a Hyper-V VM, you get the high-performance synthetic network device driver. If you don't install the Integration Services, your Hyper-V VMs will use the legacy network adapter. The legacy network adapter is an emulated device, and its activity is handled by a worker thread in the Hyper-V host's parent partition. This will result in slower network performance for that VM as well as all of the other VMs on the host.

Using SSDs
The continued advancements in computing power and large memory support have resulted in the input/output (I/O) subsystem becoming a bottleneck for some VM installations. Traditional hard disk drives (HDDs) have gotten larger, but they really haven't gotten faster. SSDs use high-performance flash memory for storage, and they can provide significantly higher throughput than standard rotational HDDs. A Serial Attached SCSI (SAS) HDD spinning at 15,000 revolutions per minute (rpm) can deliver about 150MB to 200MB of sequential throughput per second. In contrast, an SSD on a 6Gbps controller can provide about 550MB of sequential throughput per second.
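To put those throughput figures in perspective, here is a back-of-envelope calculation of how long a full sequential scan of a database would take at each rate (the 500GB database size is a hypothetical example):

```python
# Back-of-envelope scan times using the sequential throughput figures above:
# ~150-200MB/s for a 15,000rpm SAS HDD versus ~550MB/s for an SSD.

db_gb = 500                      # hypothetical database size
hdd_mb_per_s = 175               # midpoint of the 150-200MB/s HDD range
ssd_mb_per_s = 550

hdd_minutes = db_gb * 1024 / hdd_mb_per_s / 60
ssd_minutes = db_gb * 1024 / ssd_mb_per_s / 60

print(f"HDD full scan: ~{hdd_minutes:.0f} minutes")
print(f"SSD full scan: ~{ssd_minutes:.0f} minutes")
```

At these rates the SSD finishes the same scan roughly three times faster, which is why SSD placement decisions (covered next) matter so much for I/O-bound workloads.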


When you're considering using SSDs with SQL Server VMs, you have several different implementation options:

- Moving data files onto SSDs. Data files typically experience more reads than writes and can be a good choice for SSDs if the SSDs are large enough to contain the data files.
- Moving indexes onto SSDs. Most index access is read-heavy, making indexes ideal candidates for SSDs, which excel at random read access.
- Moving log files onto SSDs. Log files experience a high degree of writes and therefore might not be as good a candidate as data files or indexes for moving onto SSDs. If you do move the log files onto SSDs, plan on using drive mirroring and RAID to protect against drive failure.
- Moving tempdb onto SSDs. Tempdb typically experiences a high volume of write activity. Moving tempdb onto SSDs can provide improved performance, but you need to be sure to monitor the drive status and have a replacement strategy. As with log files, if you move tempdb onto SSDs, plan on using drive mirroring and RAID to protect against drive failure.

Although SSDs provide better performance than HDDs, there are a couple of caveats to using SSDs. First, it's important to realize that they aren't a silver bullet for your performance issues. SSDs won't fix a lack of memory or processing power. Likewise, they won't fix poorly written queries. Next, the SSD lifecycle is significantly shorter than that of a rotational HDD. The more write operations an SSD has, the shorter its life expectancy will be. Furthermore, the write performance of an SSD will degrade over time. High-I/O implementations like SQL Server will also shorten the lifecycle of an SSD. In addition, the fuller the SSD is, the faster it will degrade. This essentially means that if you plan to use SSDs for your SQL Server VMs, you need to plan to keep about 50 percent of the drive's space unallocated, and you should plan on a two-to-three-year replacement cycle.
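The 50-percent-unallocated guideline has a direct impact on capacity planning. Here is a small sketch (the drive and dataset sizes are hypothetical):

```python
import math

# Sketch: effective capacity when keeping ~50 percent of each SSD unallocated,
# per the guideline above (drive and dataset sizes are hypothetical).

ssd_size_gb = 800
unallocated_fraction = 0.50
usable_per_drive_gb = ssd_size_gb * (1 - unallocated_fraction)

dataset_gb = 1200
drives_needed = math.ceil(dataset_gb / usable_per_drive_gb)

print(f"Usable capacity per drive: {usable_per_drive_gb:.0f} GB")
print(f"Drives needed for {dataset_gb} GB: {drives_needed}")
```

In other words, budget roughly twice the raw SSD capacity of the data you intend to place on flash.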


The life expectancy of SSDs also varies greatly according to the type of SSD. There are two basic types: single-level cell (SLC) and multi-level cell (MLC). SLCs are enterprise grade. Although they're more costly, they deliver better performance and a longer lifespan than MLCs. MLCs are typically found in consumer-grade devices and have lower performance and shorter lifespans than SLCs.

Finally, if you implement SSDs, don't attempt to defragment them. They don't store or retrieve data like HDDs do. Defragmentation will only increase the wear on the drive.

Revving Up VMs with the SQL Server 2014 In-Memory OLTP Engine
The upcoming SQL Server 2014 release will provide the all-new In-Memory OLTP engine, which promises to significantly boost application performance. Microsoft has shown application performance improvements ranging from 7x to 20x using the new In-Memory OLTP engine. Equally significant is the fact that this new In-Memory OLTP engine can work just as well in a VM as it can in a physical system. SQL Server 2014's In-Memory OLTP support works by moving select tables and stored procedures into memory. Plus, the new In-Memory OLTP engine provides an all-new lock-free, optimistic concurrency design that maximizes the throughput of the engine.

Memory access speeds are much faster than disk access speeds. However, to really take
advantage of the In-Memory OLTP engine, you need to be running on a platform that can
support the large memory capacities required to move the selected tables and stored procedures into RAM. The latest versions of Windows Server 2012 R2 Hyper-V and vSphere 5.5
both support hosts with up to 320 cores and 4TB of RAM. In addition, both offer support for
VMs with up to 1TB of virtual memory. These large memory sizes, coupled with a physical
host that supports this much RAM, enable SQL Server VMs to take full advantage of the new
In-Memory OLTP performance feature.
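When sizing a VM for In-Memory OLTP, a commonly cited rule of thumb is to budget roughly twice the size of the durable memory-optimized tables, because the multi-versioned engine keeps extra row versions in memory. The sketch below applies that rule to hypothetical numbers; validate the multiplier against your own workload before committing to a configuration:

```python
# Rough VM memory sizing for In-Memory OLTP (all numbers are hypothetical).
# Assumed rule of thumb: budget ~2x the size of durable memory-optimized tables.

hot_tables_gb = 200            # tables planned to become memory-optimized
inmem_budget_gb = hot_tables_gb * 2

buffer_pool_gb = 128           # remaining disk-based workload
os_overhead_gb = 16            # guest OS and other processes

vm_ram_gb = inmem_budget_gb + buffer_pool_gb + os_overhead_gb
print(f"Suggested VM memory: {vm_ram_gb} GB")
```

A result like this fits comfortably within the 1TB-per-VM limits noted above, but it shows why large-memory hosts are a prerequisite for the feature.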


Virtualization on the NEC Express5800


Selecting the proper hardware platform is essential for providing maximum performance and scalability to your SQL Server VMs. NEC's new Express5800/A2000 Series Server (or CX) brings mainframe-class performance and reliability to your enterprise virtualization implementations. The CX series is NEC's highest-performing line of systems and its sixth generation of Intel-based enterprise server systems. The CX series uses the latest high-performance Intel Xeon processor E7 v2 product family. The new Intel Xeon E7 v2 processors can be configured with up to 15 cores per processor, and they support twice the amount of memory compared to the previous generation of CPUs. In its maximum configuration, the CX supports up to four processors, where each CPU has 15 cores, for a total of 60 cores. The high number of cores enables the CX to dedicate physical CPU resources to each vCPU running in the SQL Server VMs, thereby maximizing performance. The CX is also ideal for memory-intensive applications, offering support for up to 4TB of RAM. You can see the NEC Express5800 in Figure 1.

Figure 1: NEC Express5800/A2000 Series Server (CX)


Beyond pure scalability, the NEC CX supports a unique core optimization capability called COPT (Capacity OPTimization). COPT is essentially a dynamic CPU core activation control similar to UNIX's capacity-on-demand capability. COPT provides improved reliability and scalability by enabling you to dynamically add available unused CPU cores. COPT allows you to pay as you grow by dynamically adding cores for increased scalability using a core activation key. The NEC Express5800/A2040b COPT model can seamlessly scale up from 1 to 60 cores. This core optimization capability is completely independent of the operating system. It works with Linux and vSphere in addition to Windows Server 2012 and Windows Server 2008 R2 SP1. In the case of Linux and Windows, the cores can be added without requiring a server reboot. You can see an overview of NEC's COPT feature in Figure 2.


Figure 2: An Overview of NEC's COPT Feature


In Figure 2, you can see how COPT can be used to dynamically scale performance. On the left side of the graph, the system configuration starts off with two CPUs, each with two cores enabled. As demands on the system increase, cores can be added by simply enabling more cores via a software activation key. In the middle section, you can see that two additional license keys have been used to add one additional core per CPU. On the far right, you can see where you can subsequently add more cores to accommodate future growth. COPT allows you to dynamically add cores up to the system's maximum of 60 cores across 4 CPUs. COPT is a powerful and unique feature. It can provide protection from CPU failures, and it enables increased scalability without requiring any physical hardware maintenance or intervention.
In addition to the unique COPT feature, the CX supports a number of advanced Reliability, Availability, and Serviceability (RAS) features. These features are critical when selecting a server platform, as they help avoid any single point of failure. This is particularly important when virtualizing a tier 1 application like SQL Server, where availability is critical. Memory modules and I/O cards can be added on the fly without shutting the system down. Memory is constantly monitored for errors, and support for Double Device Data Correction (DDDC) allows DRAMs with memory errors to be dynamically removed from the system's memory map. Enhanced MCA recovery enables uncorrectable errors to be detected and recovered from while only the affected application is shut down. You can see an overview of the CX's main RAS features in Figure 3.

| Feature | Next Gen A2040b COPT | Next Gen sub-models (A202b) |
| --- | --- | --- |
| CPU (core) capacity on demand [COPT] | Yes (W/L); reboot not required | No |
| Memory module addition on the fly | Yes (W/L) | Yes (W/L) |
| I/O card hot plug | Yes (W/L) | Yes (W/L) |
| Dynamic core de-allocation and sparing | Yes (L) | Yes (L), but sparing not supported |
| Dynamic memory page de-allocation/PFA (Predictive Failure Analysis) for ECC | Yes (W/L/V) | Yes (W/L/V) |
| Memory chip data correction | DDDC | DDDC |
| Recovery for CPU/memory failure [MCA Recovery] | Yes (W/L/V) | Yes (W/L/V) |
| Failure log correction and report | Yes | Yes |
| HW resource (core I/O/service processor/clock) sparing | Yes | Yes |

Support for the above features sometimes depends on OS readiness.
W = Windows, L = standard Linux (RHEL or Oracle UEK) + NEC's RAS driver, V = VMware

Figure 3: NEC Express5800/A2000 Series RAS Features
Predictable Network Performance with ProgrammableFlow
Providing the raw processing power to support your virtual workloads is the first step toward achieving enterprise-level virtualization performance. However, you still need to be able to deliver that power to your end users. Your network infrastructure is the vital conduit connecting your virtualized applications to the end users who need them. It's important to realize that the network can be a bottleneck, especially in highly virtualized environments. Software-defined networking (SDN) technologies like NEC's ProgrammableFlow Networking Suite enable you to more quickly deploy applications as well as control the utilization of your network resources. The end result is an improved ability to meet your SLAs and deliver predictable application performance to your end users.
Designed to support high-density virtualization platforms like the Express5800/A2000 Series Server, NEC's ProgrammableFlow SDN technology ensures that all of your VMs can meet their SLAs by enabling you to create a logical, or virtual, network that's abstracted from the underlying physical network infrastructure. You can associate your virtual networks with specific applications, eliminating the need to manually create Virtual Local Area Networks (VLANs) when you deploy your applications. These associations also enable you to manage the network bandwidth for your applications using defined policies. NEC's ProgrammableFlow is completely integrated with Microsoft System Center and Windows Server 2012 R2 Hyper-V network virtualization, enabling you to manage your VMs and your virtual networks using SCVMM 2012. When you create virtual networks using SCVMM, NEC's ProgrammableFlow SDN capabilities will handle all the required underlying network configuration. NEC's ProgrammableFlow Networking Suite uses the OpenFlow protocol to automatically provision and manage both the physical switches and Hyper-V's Extensible Switch (also known as the Virtual Switch).

Summary
The days of considering SQL Server to be a workload that can't be virtualized are definitely in the past. Today's high-performance computing platforms like the NEC Express5800/A2000 Series Server provide a level of performance and scalability that's ideal for running virtually all production SQL Server workloads. In addition, the latest generation of hypervisors like Windows Server 2012 R2 Hyper-V and vSphere 5.5 enable you to take full advantage of the host's compute and memory capabilities, allowing you to run the most resource-intensive enterprise workloads. To ensure maximum performance and scalability, you need to start with a hardware platform that provides the essential computing power plus the high memory capacity required to support multiple concurrent workloads. With support for up to 60 cores and 4TB of RAM, the NEC Express5800/A2000 series delivers the performance and scalability required to run the most resource-intensive workloads. Beyond pure scalability, its RAS and COPT features provide mainframe-class reliability for your SQL Server VMs. By following the essential virtualization host and VM guest configuration practices, selecting the right server platform like the NEC Express5800/A2000 Series Server, and taking advantage of SDN, you can ensure that you'll get the maximum performance for your SQL Server VMs.
