Session 5
Learning Objectives
To explain strategies for managing and monitoring the Database Server
Chapter 11
Application Optimization
The goal of performance tuning SQL Server 2014 is to minimize the
response time for each SQL statement and increase system throughput
1. Defining a Workload
In most cases, it is not possible to test SQL Server at scale against the actual
demands of the application while it is in production
We must set up a test environment that matches the production system as closely
as possible, and then use a load generator such as Quest Benchmark Factory, Idera
SQLscaler, or the built-in Distributed Replay utility to simulate the database
workload
The Silent Killer: I/O Problems
1. SQL Server I/O Process Model
Windows Server I/O Manager handles all I/O operations and fulfills all I/O
(read or write) requests by means of scatter-gather or asynchronous
methods
The SQL Server storage engine manages when disk I/O operations are
performed, how they are performed, and the number of operations that
are performed.
The job of the database storage engine is to manage or mitigate as much of
the cost of these I/O operations as possible
In-memory OLTP represents a major advance for SQL Server in the effort to
optimize servers, as well as the code they run, and alleviate I/O contention
2. Database File Placement
SQL Server stores its databases in operating system files on physical
disks or Logical Unit Numbers (LUNs) surfaced from a disk array
Using a fast and dedicated I/O subsystem for database files enables it to
perform most efficiently
To maximize the performance gain, make sure you place the individual data
files and the log files all on separate physical LUNs
We can place reference-archived data or data that is rarely updated in a
read-only filegroup
3. tempdb Considerations
We must perform the following actions when configuring tempdb:
Pre-allocate space for tempdb files based on the results of your testing,
but leave autogrow enabled so that tempdb can still expand, to prevent SQL
Server from stopping if tempdb runs out of space.
Per SQL Server instance, as a rule of thumb, create one tempdb data file
per CPU or processor core, all equal in size up to eight data files.
Ensure that tempdb is in simple recovery mode, which enables space
recovery.
Place tempdb on a fast and dedicated I/O subsystem.
Use instant database file initialization. See Chapter 10 for more
information on setting up instant database file initialization
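The pre-allocation and file-per-core guidance above can be sketched as follows; the file names, sizes, and the T:\ path are illustrative assumptions, not values from the original:

```sql
-- tempdev is the default name of the primary tempdb data file; the sizes,
-- growth increments, and path below are illustrative only.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

-- Add a second data file, equal in size, up to one per core (max eight).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\tempdb\tempdev2.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);

-- tempdb should use the simple recovery model to enable space recovery.
ALTER DATABASE tempdb SET RECOVERY SIMPLE;
```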
The three most pertinent allocation bitmaps when dealing with databases
and tempdb are:
1. Page Free Space (PFS)—This tracks the allocated pages and the page status
2. Global Allocation Map (GAM)—This tracks dedicated extents with 1 bit per
extent and Shared Global Allocation Map (SGAM)—This tracks mixed extents
with 1 bit per extent
3. Index Allocation Map (IAM)—This page tracks all other pages that are
allocated to one particular object
Two algorithms govern how space is allocated within all SQL Server data
files:
1. Proportional fill determines how much data is written to each of the files
in a multi-file filegroup based on the proportion of free space within each
file
2. Round robin is the pattern in which the next file is selected in a multi-
file filegroup once the current file has met its proportional fill limit,
before a growth operation is required
It is important to remember that the files maintain an even size in order to
keep an even distribution of data to each file in a multi-file filegroup
Table and Index Partitioning
Partitioning is the breaking up of a large object (such as a table) into
smaller, manageable pieces. A row is the unit on which partitioning is
based.
2. Creating a Partition Function
CREATE PARTITION FUNCTION
PFL_Years (datetime)
AS RANGE RIGHT FOR VALUES
( '20050101 00:00:00.000', '20070101 00:00:00.000',
'20090101 00:00:00.000', '20110101 00:00:00.000',
'20120101 00:00:00.000')
3. Creating Filegroups
We should create filegroups to support the strategy set by the partition
function
User objects should be created and mapped to a filegroup outside of the
primary filegroup, leaving the primary filegroup for system objects
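A partition scheme maps the ranges defined by PFL_Years to those filegroups. A minimal sketch, assuming six hypothetical filegroups (FGPre2005 through FG2012) have already been created:

```sql
-- RANGE RIGHT with five boundary values yields six partitions, so six
-- filegroups are mapped; the filegroup names are illustrative assumptions.
CREATE PARTITION SCHEME PSL_Years
AS PARTITION PFL_Years
TO (FGPre2005, FG2005, FG2007, FG2009, FG2011, FG2012);
```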
1. Row Compression
Row compression affects data at the row level and completely changes the
internal structure of the data record.
(Figure: the structure of an uncompressed data record compared with the
structure of a compressed data record.)
2. Page Compression
Page compression includes row compression and then implements two other
compression operations:
Prefix compression—For each page and each column, a prefix value is
identified that can be used to reduce the storage requirements; matching
values are replaced by a reference to the prefix stored in the compression
information (CI) structure.
Dictionary compression—This searches for repeated values anywhere in the
page and replaces them with a reference to the CI structure.
To create a new compressed table with page compression, use the following
commands:
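A minimal sketch of such a statement; the table and column names are illustrative assumptions, and the key element is the DATA_COMPRESSION option:

```sql
-- Create a new table whose pages are stored page-compressed from the start.
CREATE TABLE dbo.SalesArchive
(
    SalesID  int      NOT NULL,
    SaleDate datetime NOT NULL,
    Amount   money    NOT NULL
)
WITH (DATA_COMPRESSION = PAGE);
```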
To change the compression setting of an existing table, use the ALTER TABLE
command. Compression can also be applied to a partitioned table and its
indexes on a per-partition basis.
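As a sketch, assuming a hypothetical table dbo.Sales partitioned into six partitions on the PSL_Years scheme, with an index IX_Sales_SaleDate:

```sql
-- Rebuild the table with page compression on the oldest partition and row
-- compression on the rest.
ALTER TABLE dbo.Sales
REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE ON PARTITIONS (1),
      DATA_COMPRESSION = ROW ON PARTITIONS (2 TO 6));

-- Apply page compression to a single partition of the index.
ALTER INDEX IX_Sales_SaleDate ON dbo.Sales
REBUILD PARTITION = 1
WITH (DATA_COMPRESSION = PAGE);
```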
Table partition operations on a compressed partitioned table have specific
behaviors; for example, a partition created by splitting an existing
partition inherits the data compression property of the original partition.
For monitoring data compression at the SQL Server 2014 instance level, two
counters are available in the SQL Server:Access Methods object that is found
in Windows Performance Monitor:
• Page compression attempts/sec counts the number of page compression
attempts per second.
• Pages compressed/sec counts the number of pages compressed per
second
In a server consolidation or multiple-instance environment, for more
predictable performance, SQL Server may be configured to bind CPUs to
specific instances, reducing the chance of cross-instance contention
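One way to bind an instance to specific CPUs is the ALTER SERVER CONFIGURATION command; the CPU range shown is an illustrative assumption:

```sql
-- Bind this instance's schedulers to CPUs 0 through 3; a second instance
-- on the same host would be bound to a different CPU range.
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 3;
```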
4. Parallelism
Max Degree of Parallelism (MAXDOP)
By default, the MAXDOP value is set to 0, which enables SQL Server to
consider all processors when creating an execution plan.
In most systems, a MAXDOP setting equivalent to the number of cores in
one NUMA node is recommended
When SQL Server evaluates the overall cost of a serial query plan,
depending on the level of optimization, the optimizer can also generate a
parallel plan and compare the two to determine which is cheaper, and
therefore faster, to execute
The results of this query (not reproduced here) show that two stored
procedures each have a cost of over 5, but under 15, which is useful when
setting the Cost Threshold for Parallelism option.
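Both options can be set with sp_configure; the values below (an 8-core NUMA node, a threshold of 15) are illustrative assumptions based on the discussion above:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Limit parallel plans to the cores of one NUMA node (assumed here to be 8).
EXEC sp_configure 'max degree of parallelism', 8;

-- Raise the threshold so plans costing under 15 remain serial.
EXEC sp_configure 'cost threshold for parallelism', 15;
RECONFIGURE;
```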
Memory Considerations and
Enhancements
There are a few memory considerations and enhancements in SQL Server
2014
1. Resource Pools
A resource pool represents a share of the physical resources of the SQL
Server instance. During SQL Server installation, two pools are created:
internal and default
We can create user-defined pools using the CREATE RESOURCE POOL DDL
statement, or by using SQL Server Management Studio
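A minimal sketch of a user-defined pool; the pool name and the percentage limits are illustrative assumptions:

```sql
-- Cap sessions classified into this pool at 40% CPU and 30% memory
-- during contention.
CREATE RESOURCE POOL ReportPool
WITH (MAX_CPU_PERCENT = 40,
      MAX_MEMORY_PERCENT = 30);

ALTER RESOURCE GOVERNOR RECONFIGURE;
```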
2. Workload Groups
A workload group is a container for similar sessions according to the defined
classification rules, and applies the policy to each session of the group
We can create the user-defined workload group by using the CREATE
WORKLOAD GROUP command, modify it by using the ALTER WORKLOAD
GROUP command, and drop it by using the DROP WORKLOAD GROUP
command.
You can apply several configuration settings to a workload group:
Maximum memory allocation per request
Maximum CPU time per request
Maximum IOPS per request/per second
Minimum IOPS per request/per second
Resource timeout per request
Relative importance setting per request
Workgroup limit per number of requests
Maximum degree of parallelism
Specific resource pool
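Several of the settings listed above appear as options of the CREATE WORKLOAD GROUP command. A sketch, where the group name, pool name, and option values are illustrative assumptions:

```sql
-- Cap each request at 120 seconds of CPU time and MAXDOP 4, with low
-- relative importance, routed to a hypothetical ReportPool resource pool.
CREATE WORKLOAD GROUP ReportGroup
WITH (REQUEST_MAX_CPU_TIME_SEC = 120,
      MAX_DOP = 4,
      IMPORTANCE = LOW)
USING ReportPool;

ALTER RESOURCE GOVERNOR RECONFIGURE;
```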
3. Classification
Only one user-defined classification function can be designated as the
classifier; after it is registered, it takes effect when an ALTER
RESOURCE GOVERNOR RECONFIGURE command is executed.
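A sketch of a classifier function and its registration; the function name, login name, and ReportGroup workload group are illustrative assumptions:

```sql
-- The classifier must be created in master and be schema-bound; it returns
-- the name of the workload group for the connecting session.
CREATE FUNCTION dbo.fnClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF SUSER_SNAME() = N'report_user'   -- hypothetical reporting login
        SET @grp = N'ReportGroup';
    RETURN @grp;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```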