Maximizing Virtualization Performance
By Michael Otey
Without a doubt, performance is the database professional's number one concern when it comes to virtualizing Microsoft SQL Server. While virtualizing SQL Server is nothing new, even today some people still think that SQL Server is too resource-intensive to virtualize. That's definitely not the case. However, there are several tips and best practices that you need to follow to achieve optimum performance and availability for your virtual SQL Server instances. In this whitepaper, you'll learn about the best practices, techniques, and server platform for virtualizing SQL Server to obtain the maximum virtualized database performance.
The first part of this whitepaper covers the host and guest configuration best practices for your virtualization environment, including planning your storage layout and using solid state disks (SSDs) with your SQL Server VMs. Then you'll see how you can maximize SQL Server 2014 online transaction processing (OLTP) application performance by taking advantage of the new In-Memory OLTP feature.
The second part of this whitepaper will cover some of the practical implementation details required to get the best performance for your SQL Server VMs. Although the specific configuration steps are vital, it's equally important to select the right virtualization platform to provide the scalability and reliability that your organization needs to meet its service level agreements (SLAs). In this section, you'll learn about using the NEC Express5800/A2000 Series Server as a virtualization platform. Here you'll see how its Capacity OPTimization (COPT) feature and high random-access memory (RAM) capacity enable it to support dense virtualization workloads. Then you'll see how NEC's ProgrammableFlow Network Suite and PF1000 virtual switch integrate with Microsoft Hyper-V and Microsoft System Center Virtual Machine Manager (SCVMM) to provide predictable network bandwidth for your business-critical applications.
For the best performance, you should plan for a 1:1 ratio of virtual CPUs to physical cores in the system. While nothing prevents you from overcommitting the CPUs for either Hyper-V or VMware vSphere, matching your physical cores to your virtual CPUs will ensure that you always have computing power for that workload. When you're planning the number of virtual CPUs to use in the guest, be sure to remember that the maximum number of virtual CPUs supported can vary depending on the guest OS. Both Windows Server 2012 R2 Hyper-V and vSphere 5.5 provide support for hosts with up to 320 logical processors and VMs with up to 64 virtual CPUs.
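To make that planning concrete, here's a minimal Python sketch that checks a planned set of VMs against a host's physical core count to flag CPU overcommitment. The VM names and core counts are hypothetical examples, not values from this whitepaper:

```python
# Check planned vCPU allocations against a host's physical core count.
# All names and numbers below are hypothetical examples.

physical_cores = 32  # cores available on the virtualization host

planned_vms = {
    "sqlvm01": 8,   # vCPUs for the primary SQL Server VM
    "sqlvm02": 8,
    "appvm01": 4,
    "appvm02": 4,
}

total_vcpus = sum(planned_vms.values())
ratio = total_vcpus / physical_cores

print(f"Total vCPUs planned: {total_vcpus} on {physical_cores} physical cores")
print(f"vCPU-to-core ratio: {ratio:.2f}:1")

if ratio > 1.0:
    print("Warning: CPUs are overcommitted; SQL Server VMs may contend for compute.")
else:
    print("OK: 1:1 or better; each vCPU can be backed by a physical core.")
```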
Next, while you're planning your host's computing resources, you should make sure that the host supports Second Level Address Translation (SLAT) and Non-Uniform Memory Access (NUMA). Most modern servers from tier 1 vendors provide these features, but they might not be present if you're considering using an older hardware platform for virtualization. Both are very important for VM scalability. SLAT has different names, depending on the CPU manufacturer. Intel's version is called Extended Page Tables (EPT), while AMD refers to its implementation as Rapid Virtualization Indexing (RVI).
The amount of RAM in the host is one of the most important limits to how many VMs you can run simultaneously, and hot-add RAM enables you to upgrade the host without incurring any downtime. Windows Server 2012/R2 supports hot-add RAM, but you should be aware that hot-add RAM is not supported on every server hardware platform. You should be sure to look for this capability when evaluating virtualization server platforms.
Making sure that there's adequate network bandwidth for your production workloads is the next critical step in the virtualization host's configuration. Trying to funnel all the network traffic for your VMs through too few host network interface cards (NICs) is a common virtualization configuration mistake. You can use Performance Monitor to get an idea of your aggregated network bandwidth requirements, just like you did to estimate the host's CPU and memory requirements. In addition, you should plan for one dedicated NIC for management purposes as well as one dedicated NIC for live migration or vMotion. This will help to separate the network traffic required by these management tasks from your production workloads.
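As a rough illustration of that measurement step, the following Python sketch shells out to the Windows typeperf utility to sample the aggregate NIC throughput counter. The sample interval and count are arbitrary choices for this example; run it during a representative workload window:

```python
import subprocess

# Sample the aggregate network throughput counter every 5 seconds, 12 times
# (about a minute of data), to estimate the bandwidth your VMs will need.
counter = r"\Network Interface(*)\Bytes Total/sec"

result = subprocess.run(
    ["typeperf", counter, "-si", "5", "-sc", "12"],
    capture_output=True,
    text=True,
)

# typeperf emits CSV: a header row of NIC instance names, then one row per sample.
print(result.stdout)
```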
Finally, you should plan for the host's OS to be installed on a separate storage location from the guest VHDs or virtual machine disks (VMDKs). More details about guest VM storage are presented in the following section. In addition, if you're running anti-virus (AV) software on the host, be sure to exclude the VMs from AV scanning. AV scans will impact the performance of the VMs, which is something that you want to avoid for your tier 1 applications. Any AV scanning should occur within the VM guest.
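For example, if the host happens to run Windows Defender, a sketch like the following could add the VM storage folder to the exclusion list via the Add-MpPreference PowerShell cmdlet. The storage path is a hypothetical placeholder, and other AV products have their own equivalent mechanisms:

```python
import subprocess

# Hypothetical path where the guest VHDs are stored on the host.
vm_storage_path = r"D:\Hyper-V\Virtual Hard Disks"

# Add-MpPreference is the Windows Defender cmdlet for managing exclusions.
subprocess.run(
    [
        "powershell.exe",
        "-Command",
        f"Add-MpPreference -ExclusionPath '{vm_storage_path}'",
    ],
    check=True,
)
```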
In a default installation, the SQL Server data files, log files, and tempdb can all be on the same VHD. That configuration can work for some small installations, but it certainly won't give you the best database performance. Putting the data and log files on separate VHDs that use different drives will definitely provide far better performance. In addition, as in a physical installation, you should place the VHD containing the log files on fast-writing drives that use RAID 1 or RAID 10. Another storage configuration best practice is to put tempdb on its own drive, using a VHD that's separate from the data and log files. Tempdb can be a very active database with lots of write activity, so like the log files, a best practice is to use RAID 1 or RAID 10 if possible for the drives on which the tempdb database is placed.
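As an illustration of relocating tempdb, the following Python sketch uses pyodbc to run the standard ALTER DATABASE commands that point tempdb's files at a dedicated drive. The connection string and the T: drive path are hypothetical, and the new location takes effect only after the SQL Server service is restarted:

```python
import pyodbc

# Hypothetical connection string; adjust server name and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlvm01;"
    "Trusted_Connection=yes;",
    autocommit=True,
)
cursor = conn.cursor()

# tempdev and templog are the default logical names of tempdb's files.
# SQL Server applies the new paths the next time the service starts.
cursor.execute(
    "ALTER DATABASE tempdb MODIFY FILE "
    "(NAME = tempdev, FILENAME = 'T:\\TempDB\\tempdb.mdf');"
)
cursor.execute(
    "ALTER DATABASE tempdb MODIFY FILE "
    "(NAME = templog, FILENAME = 'T:\\TempDB\\templog.ldf');"
)
conn.close()
```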
Using SSDs
The continued advancements in computing power and large memory support have resulted in the input/output (I/O) subsystem becoming a bottleneck for some VM installations. Traditional hard disk drives (HDDs) have gotten larger, but they really haven't gotten faster. SSDs use high-performance flash memory for storage, and they can provide significantly higher throughput than standard rotational HDDs. An HDD Serial Attached SCSI (SAS) drive spinning at 15,000 revolutions per minute (rpm) can deliver about 150MB to 200MB of sequential throughput per second. In contrast, an SSD on a 6Gbps controller can provide about 550MB of sequential throughput per second.
When you're considering using SSDs with SQL Server VMs, you have several different implementation options:

• Moving data files onto SSDs. Data files typically experience more reads than writes and can be a good choice for SSDs if the SSDs are large enough to contain the data files.

• Moving indexes onto SSDs. Most index access is read-heavy, making indexes ideal candidates for SSDs, which excel at random read access.

• Moving log files onto SSDs. Log files experience a high degree of writes and therefore might not be as good a candidate as data files or indexes for moving onto SSDs. If you do move the log files onto SSDs, plan on using drive mirroring and RAID to protect against drive failure.

• Moving tempdb onto SSDs. Tempdb typically experiences a high volume of write activity. Moving tempdb onto SSDs can provide improved performance, but you need to be sure to monitor the drive status and have a replacement strategy. As with log files, if you move tempdb onto SSDs, plan on using drive mirroring and RAID to protect against drive failure.
The life expectancy of SSDs also varies greatly according to the type of SSD drive. There are two basic types: single-level cell (SLC) and multi-level cell (MLC). SLCs are enterprise grade. Although they're more costly, they deliver better performance and a longer lifespan than MLCs. MLCs are typically found in consumer-grade devices and have lower performance and shorter lifespans than SLCs.

Finally, if you implement SSDs, don't attempt to defragment them. They don't store or retrieve data like HDDs do. Defragmentation will only increase the wear on the drive.
Revving Up VMs with the SQL Server 2014 In-Memory OLTP Engine
The upcoming SQL Server 2014 release will provide the all-new In-Memory OLTP engine, which promises to significantly boost application performance. Microsoft has shown application performance improvements ranging from 7x to 20x using the new In-Memory OLTP engine. Equally significant is the fact that this new engine can work just as well in a VM as it can in a physical system. SQL Server 2014's In-Memory OLTP support works by moving selected tables and stored procedures into memory. Plus, the new In-Memory OLTP engine provides an all-new lock-free, optimistic concurrency design that maximizes the throughput of the engine.
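As a minimal sketch of how a table is moved into memory, the following Python example uses pyodbc to run the SQL Server 2014 T-SQL that adds a memory-optimized filegroup and creates a memory-optimized table. The server name, database, file path, and table definition are all hypothetical:

```python
import pyodbc

# Hypothetical connection string, database, and paths; adjust as needed.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlvm01;"
    "DATABASE=SalesDB;Trusted_Connection=yes;",
    autocommit=True,
)
cursor = conn.cursor()

# A memory-optimized filegroup must exist before in-memory tables can be created.
cursor.execute(
    "ALTER DATABASE SalesDB "
    "ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;"
)
cursor.execute(
    "ALTER DATABASE SalesDB ADD FILE "
    "(NAME = 'imoltp_data', FILENAME = 'D:\\Data\\imoltp_data') "
    "TO FILEGROUP imoltp_fg;"
)

# A hypothetical orders table held entirely in memory. The NONCLUSTERED HASH
# index and the MEMORY_OPTIMIZED/DURABILITY options are the key additions.
cursor.execute(
    "CREATE TABLE dbo.Orders ("
    "  OrderID INT NOT NULL "
    "    PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),"
    "  CustomerID INT NOT NULL,"
    "  OrderDate DATETIME2 NOT NULL"
    ") WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);"
)
conn.close()
```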
The Express5800/A2000 Series Server also provides advanced reliability, availability, and serviceability (RAS) features designed for tier-1 applications like SQL Server where availability is critical. Memory modules and I/O cards can be added on the fly without shutting the system down. Memory is constantly monitored for errors, and support for Double Device Data Correction (DDDC) allows DRAMs with memory errors to be dynamically removed from the system's memory map. Enhanced MCA recovery enables uncorrectable errors to be detected and recovered while only the affected application is shut down. You can see an overview of the main RAS features in Figure 3.
[Figure 3: Overview of the main RAS features — including dynamic core de-allocation and sparing, dynamic memory page de-allocation/PFA (Predictive Failure Analysis) for ECC, and DDDC — with availability, single-node, and flexibility support indicators per OS (Windows/Linux/VMware).]
Software-defined networking (SDN) enables more efficient utilization of your network resources. The end result is an improved ability to meet your SLAs and deliver predictable application performance to your end users.
Designed to support high-density virtualization platforms like the Express5800/A2000 Series Server, NEC's ProgrammableFlow SDN technology ensures that all of your VMs can meet their SLAs by enabling you to create a logical or virtual network that's abstracted from the underlying physical network infrastructure. You can associate your virtual networks with specific applications, eliminating the need to manually create Virtual Local Area Networks (VLANs) when you deploy your applications. These associations also enable you to manage the network bandwidth for your applications using defined policies. NEC's ProgrammableFlow is completely integrated with Microsoft System Center and Windows Server 2012 R2 Hyper-V network virtualization, enabling you to manage your VMs and your virtual networks using SCVMM 2012. When you create virtual networks using SCVMM, NEC's ProgrammableFlow SDN capabilities will handle all the required underlying network configurations. NEC's ProgrammableFlow Networking Suite uses the OpenFlow protocol to automatically provision and manage both the physical switches and Hyper-V's Extensible Switch (also known as the Virtual Switch).
Summary
The days of considering SQL Server to be a workload that can't be virtualized are definitely in the past. Today's high-performance computing platforms like the NEC Express5800/A2000 Series Server provide a level of performance and scalability that's ideal for running virtually all production SQL Server workloads. In addition, the latest generation of hypervisors, like Windows Server 2012 R2 Hyper-V and vSphere 5.5, enable you to take full advantage of the host's compute and memory capabilities, allowing you to run the most resource-intensive enterprise workloads. To ensure maximum performance and scalability, you need to start with a hardware platform that provides the essential computing power plus the high memory capacity required to support multiple concurrent workloads. With support for up to 60 cores and 4 TB of RAM, the NEC Express5800/A2000 series delivers the performance and scalability required to run the most resource-intensive workloads. Beyond pure scalability, its RAS and COPT features provide mainframe-class reliability for your SQL Server VMs. By following the essential virtualization host and VM guest configuration practices, selecting the right server platform like the NEC Express5800/A2000 Series Server, and taking advantage of SDN, you can ensure that you'll get the maximum performance for your SQL Server VMs.