SAP R/3 Performance Tuning Guide for Microsoft SQL Server 7.0
Henry Lau
Microsoft Corporation
November 1998
Applies to:
Microsoft SQL Server 7.0 Only
Summary: Shows SAP R/3 database administrators and others who work on very large databases how to tune Microsoft SQL Server version 7.0 for the workload conditions of the SAP R/3 environment. 22 printed pages.
Covers:
Configuration options to consider for Microsoft Windows 2000 Server
Configuration options for SQL Server in the SAP R/3 environment
SQL Server index design as it pertains to SAP R/3
Note "Microsoft SQL Server 7.0 Performance Tuning Guide" is companion reading to the index analysis
section. Index analysis tends to be an involved process that needs to be performed on an ongoing basis
for best database performance.
Optimal use of SQL Server files and file groups in the R/3 database environment
Contents
Windows 2000 Configurations
SQL Server Configurations
Index Design and Maintenance
File and Filegroup Design
Finding More Information
2. Type regedt32.
To find the appropriate key in the Registry Editor
1. On the Window menu, select HKEY_LOCAL_MACHINE.
2. In the left pane of Registry Editor, double-click System.
3. Double-click CurrentControlSet, double-click Services, double-click NDIS, and then double-click Parameters.
To enter zero for ProcessorAffinityMask
1. In the right pane of Registry Editor, double-click ProcessorAffinityMask.
2. Type 0 (zero), and then click OK.
3. On the Registry menu, click Exit.
[Table: settings by R/3 instance type (Update Instance, Central Instance), with Minimum value, Maximum value, and Default columns; the table values were not recovered in this extraction.]
https://msdn.microsoft.com/en-us/library/aa226172(v=sql.70).aspx
4. When prompted to restart SQL Server, click Yes, and then click OK.
See "Microsoft SQL Server 7.0 Performance Tuning Guide" for more details about clustered and nonclustered index selection.
Sample data
The following script creates a table called saptest1 and loads 100,000 records into it. The first column, named col1, has no
selectivity: every row has the same value for col1 ('000'). This is designed to simulate the very common MANDT column in SAP
R/3, which is usually not very selective. The second column, named col2, is designed to have some selectivity because a value of
'a' is inserted every one hundredth row; the SQL Server modulo ("%") operator is used to detect every one hundredth row insert.
The final column, named col3, has very high selectivity: every row has a unique value for col3.
To create the sample data (Query Analyzer)
1. Type the following commands in the Query window:
create table saptest1 (
    col1 char(4) not null default '000',
    col2 char(4) not null default 'zzzz',
    col3 int not null,
    filler char(300) default 'abc')

declare @counter int
set nocount on
set @counter = 1
while (@counter <= 100000)
begin
    if (@counter % 1000 = 0)
        PRINT 'loaded ' + CONVERT(VARCHAR(10), @counter)
            + ' of 100000 records'
    if (@counter % 100 = 0)
    begin
        insert saptest1 (col2, col3) values ('a', @counter)
    end
    else
        insert saptest1 (col3) values (@counter)
    set @counter = @counter + 1
end
2. Press CTRL+E to execute the commands.
Sample indexes
The SAP R/3 default configuration for SQL Server primary keys is to make the primary key a clustered primary key. This provides
excellent performance in most situations, but there may be some isolated tables that would benefit greatly from placing the
clustered index on a column other than the columns that comprise the primary key for the table.
The clustered primary key defined for saptest1 is typical of the R/3 database environment because it places the
completely nonselective column, col1 (modeled after MANDT in typical R/3 environments), at the beginning of the
index.
The nonclustered index nkey2 is modeled after typical R/3 indexes in that it is a multiple column index.
To create the sample indexes (Query Analyzer)
1. Type the following commands in the Query window:
alter table saptest1 add constraint sapt_c1
    PRIMARY KEY clustered (col1, col2, col3)

create index nkey2 on saptest1 (col2, col3)
2. Press CTRL+E to execute the commands.
Sample queries
select * from saptest1 where col3 = 5000
Query 1 fetches a single row from the test table based on a matching value for col3.
select * from saptest1 where col2 = 'a'
Query 2 is a range scan that fetches 1,000 rows from the table based on a matching value for col2.
|--Index Scan(OBJECT:([pubs].[dbo].[saptest1].[nkey2]),
        WHERE:([saptest1].[col3]=5000))
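Text showplan output like the fragment above can be produced in Query Analyzer with SET SHOWPLAN_TEXT, which displays the estimated plan without executing the query. A minimal sketch, assuming the saptest1 sample table created earlier:

```sql
-- Display the query plan as text instead of executing the statement.
set showplan_text on
go
select * from saptest1 where col3 = 5000
go
-- Turn text showplan back off so subsequent batches execute normally.
set showplan_text off
go
```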
The index scan means that SQL Server needed to read all or part of the leaf level of sapt_c1's B-tree structure, which consists of the actual
rows of the table, in order to find the key value 'a'. This operation required 4,500 8-KB pages to be read from the buffer cache.
The read-ahead reads value of 4,010 indicates that SQL Server read in 4,010 8-KB pages in 64-KB chunks using the Read-Ahead
Manager. Read-ahead reads are more efficient than physical reads. The physical reads value of 1 indicates that SQL Server needed to
read one 8-KB page as a single 8-KB page from disk. Because they are both physical disk reads, read-ahead reads and physical
reads are much slower than logical reads, which are reads from buffer cache. That is why your primary performance
tuning goal should be to limit physical disk reads and to satisfy as many database page reads as possible from buffer cache.
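The logical read, physical read, and read-ahead read counts discussed above can be observed per statement with SET STATISTICS IO. A minimal sketch, assuming Query Analyzer and the saptest1 sample table:

```sql
-- Report logical reads, physical reads, and read-ahead reads
-- for each statement in the batch.
set statistics io on
go
select * from saptest1 where col2 = 'a'
go
set statistics io off
go
```

Running the query a second time typically shows the physical and read-ahead reads dropping to zero, because the pages are then satisfied from buffer cache.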
2. Select each query separately, and then press CTRL+L to display the graphical Showplan.
3. With the second query, a bookmark lookup was not required because the nonclustered index implicitly contains the clustering key
and, hence, covers the query.
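To illustrate covering with the sample objects created earlier: because nkey2 is defined on (col2, col3) and, in SQL Server 7.0, a nonclustered index on a clustered table carries the clustering key (col1, col2, col3) as its row locator, a query that references only those columns can be answered entirely from the index leaf level, with no bookmark lookup. A sketch:

```sql
-- Covered query: every referenced column (col1, col2, col3) is present
-- in nkey2's leaf rows via the key columns plus the clustering key.
select col1, col2, col3
from saptest1
where col2 = 'a'
```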
sp_recompile
When troubleshooting long-running R/3 processes, one potential action item to keep in mind is the use of sp_recompile to
quickly mark stored procedures for recompilation. The sp_recompile command takes very little time to execute and can be very
helpful: it marks the stored procedure so that a new query plan is generated for the stored
procedure, one that reflects the most current state of the table's data, indexes, and statistics.
Note Under most normal R/3 operating conditions, there is no need to run sp_recompile because SQL Server
recompiles stored procedures automatically when it is advantageous to do so. But there have been circumstances
in R/3 environments where SAP and Microsoft have observed very positive benefits from running sp_recompile
on tables that have long-running and poorly performing update and batch processes running on them.
One of the most convenient ways to use sp_recompile is to submit the table name as the parameter for the command. This will
mark for recompilation all stored procedures associated with the table name. For example, if CCMS reveals that update processes
operating on the table VBRP are taking an unusually long time to execute, it is worthwhile to run sp_recompile on the table.
Example of executing sp_recompile (Query Analyzer)
1. Type exec sp_recompile 'VBRP'.
2. Press CTRL+E to execute the command.
Update Statistics
SQL Server 7.0 provides automatic generation and maintenance of column and index statistics. Statistics assist the query
processor in determining optimal query plans. By default, statistics are created for all indexes, and SQL Server automatically creates
single-column statistics when compiling queries for columns where column statistics would be useful and the
optimizer would otherwise have to guess.
To avoid long-term maintenance of unused statistics, SQL Server ages the automatically created statistics (only those that are not
a by-product of index creation). After several automatic updates, the column statistics are dropped rather than updated. If
they are needed in the future, they can be created again; there is no substantial cost difference between updating statistics and
creating them. This aging does not affect user-created statistics.
It is recommended that automatic statistics be used for best performance. Automatic statistics creation and update are the
default configuration for SQL Server 7.0. The only exceptions to this recommendation are the tables VBHDR, VBMOD, and
VBDATA. For these tables, it is recommended that automatic statistics be turned off. VBHDR, VBMOD, and VBDATA are very
dynamic in nature, which means they may change from being empty to becoming very large, and then drop to empty again on a
frequent basis. R/3 access to these tables is done only with the primary keys. Additional statistics on these tables will not be
helpful because the same query plan using the primary key is used for every access. It is for these reasons that it is advantageous
to turn off automatic statistics on these tables.
The following set of commands will prevent any future generation of statistics on the tables VBHDR, VBMOD, and VBDATA.
To turn off automatic statistics for VBHDR, VBMOD, and VBDATA (Query Analyzer)
1. Type the following commands in the Query window:
exec sp_autostats VBHDR, 'OFF'
exec sp_autostats VBMOD, 'OFF'
exec sp_autostats VBDATA, 'OFF'
2. Press CTRL+E to execute the commands.
Existing statistics on the VBHDR, VBMOD, and VBDATA tables can be deleted from the database with the following
commands.
To drop existing statistics (Query Analyzer)
1. Use the sp_helpindex command to determine the names of the statistics to drop. For example, to display the names of any
existing statistics on the VBMOD table, type the following command in the Query window:
exec sp_helpindex VBMOD
2. Press CTRL+E to execute the command. The column index_name in the results pane of Query Analyzer will display the
names of all indexes and statistics.
3. Use the name of the statistics in the DROP STATISTICS command. For example, to drop the statistic named
_WA_Sys_VBELN_0AEA10A3, type the following command in the Query window:
drop statistics VBRP._WA_Sys_VBELN_0AEA10A3
4. Press CTRL+E to execute the commands.
5. Repeat Steps 1 through 4 for all statistics on the tables VBMOD, VBHDR, and VBDATA.
DBCC SHOWCONTIG
The DBCC SHOWCONTIG command is used to evaluate the level of physical fragmentation, if any, occurring on a table.
Example of running DBCC SHOWCONTIG (Query Analyzer)
1. Type the following commands in the Query window:
declare @id int
select @id = object_id('saptest1')
dbcc showcontig (@id)
2. Press CTRL+E to execute the commands.
3. The following output should result:
DBCC SHOWCONTIG scanning 'saptest1' table...
Table: 'saptest1' (933578364); index ID: 1, database ID: 5
TABLE level scan performed.
Pages Scanned................................: 4167
Extents Scanned..............................: 521
Extent Switches..............................: 520
Avg. Pages per Extent........................: 8.0
Scan Density [Best Count:Actual Count].......: 100.00% [521:521]
Logical Scan Fragmentation...................: 11.21%
Extent Scan Fragmentation....................: 0.96%
Avg. Bytes Free per Page.....................: 198.6
Avg. Page Density (full).....................: 97.55%
DBCC execution completed. If DBCC printed error messages,
contact your system administrator.
Scan Density and Extent Scan Fragmentation help assess how well organized a table is on disk. One hundred percent Scan
Density is the best possible value because it indicates that the optimal number of extents is in use (that is, each extent is fully
utilized with eight pages per extent). Extent Scan Fragmentation provides additional information on page splitting by
indicating whether the extents associated with the table have moved physically out of sequence on disk. Extent Scan Fragmentation is
usable information only when there is a clustered index defined on the table.
Avg. Page Density (full) indicates the average amount of data on each SQL Server data page as a percentage. Sometimes this
percentage is also referred to as the fullness of the data page. A high percentage means that more data is brought into the SQL
Server buffer cache with each 8-KB read. Overall, a high percentage means that the cache will contain more usable information.
As an example, suppose DBCC SHOWCONTIG indicated that several tables in your database had an
average page density of 50 percent. If these tables held a majority of the data being retrieved, the SQL Server buffer cache would
contain mostly data pages that are only 50 percent useful data. This would mean that a 1-GB buffer cache would contain
only 500 MB of SQL Server data. If the average page density across the tables being read into buffer cache were to improve to near
100 percent, a 1-GB buffer cache would contain close to 1 GB of SQL Server data, a much better situation.
If response times for queries accessing a table grow to unacceptably high levels, run the DBCC SHOWCONTIG command on
that table. If Avg. Pages per Extent is significantly less than 8.0, Extent Scan Fragmentation is greater than 10 to 20 percent, or
Avg. Page Density (full) is significantly lower than 100 percent, it is worthwhile to consider rebuilding the clustered index on
the table in order to physically resequence the data in the table onto physically contiguous extents. Rebuilding the clustered
index also provides the option of choosing a fuller page fill in order to compact more data into each 8-KB page.
Having well-compressed and contiguous data on disk helps I/O performance because SQL Server can make use of sequential I/O,
which is much faster than nonsequential disk I/O, when fetching from the table, and brings the maximum amount of usable SQL
Server data into buffer cache with each read.
FillFactor
FillFactor is an option available with the CREATE INDEX statement that allows control of the fullness of the leaf level of
indexes. The leaf level of a table's clustered index consists of the data pages of the table, so the FillFactor option allows
control of the fullness of data pages on tables that have a clustered index.
The default value for FillFactor is zero. This default value enforces 100 percent fill in all of the data pages of a table. Microsoft's
IT organization has been using the default value for FillFactor for a majority of the SQL Server tables in its SAP R/3 environment
with excellent performance results. It is recommended that the default value for FillFactor be used as a starting point for R/3
database server testing.
The key relationship to keep in mind with FillFactor is that the I/O performance benefit of having the maximum amount of data
packed into each data and index page should be balanced against the performance benefit of avoiding page splits. Page splits
occur when data needs to be inserted into a page but the page is full: a new page has to be allocated, and data is reorganized across
the old and new pages. The enhanced storage structures of SQL Server 7.0 make page split operations much more efficient than in
SQL Server 6.5, so there is not as much of a performance penalty from page splits. That is the reason the default FillFactor setting is
a good place to start. If the DBCC SHOWCONTIG command reports that significant physical fragmentation is occurring on
a table and response times for the table are poor, the clustered index on that table should be rebuilt in order to keep the index
B-tree structures in optimal form.
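When a looser page fill is wanted during such a rebuild, FillFactor can be specified explicitly in the CREATE INDEX statement. A sketch, assuming the saptest1 clustered primary key created earlier; the value 90 is purely illustrative:

```sql
-- Rebuild the clustered index, leaving each leaf page about 90 percent
-- full so that inserts have headroom and page splits are reduced.
create unique clustered index sapt_c1 on saptest1 (col1, col2, col3)
    with fillfactor = 90, drop_existing
```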
The DROP_EXISTING option of the CREATE INDEX command is required in order to rebuild primary keys. It also provides
enhanced performance for any index rebuild. The examples to follow assume that the indexes from the original saptest1 table
described earlier are being rebuilt.
Example of rebuilding a clustered primary key (Query Analyzer)
1. Type the following command in the Query window:
create unique clustered index sapt_c1 on saptest2 (col1, col2, col3)
    with drop_existing
2. Press CTRL+E to execute the command.
Example of rebuilding a nonclustered primary key (Query Analyzer)
1. Type the following command in the Query window:
create unique index sapt_c1 on saptest2 (col1, col2, col3)
    with drop_existing
2. Press CTRL+E to execute the command.
tempdb Sizing
It is recommended that tempdb be sized at a minimum of 250 MB. Autogrow should be enabled, but if it is known through
testing and previous experience that a larger tempdb size will be required, it is recommended that tempdb be set to that size
up front, rather than letting autogrow expand tempdb from the initial 250 MB to the required larger size.
To limit autogrowth of tempdb to 4 GB (Query Analyzer)
1. Type the following command in the Query window:
exec sp_helpdb tempdb
2. Press CTRL+E to execute the command.
3. The first column of the second section of the result set returned by the command lists the logical file names of
the data and log files associated with tempdb. The logical file name of the tempdb data file will be used in the ALTER
DATABASE command; in this case, it is tempdev.
4. Type the following command in the Query window:
alter database tempdb modify file (name = tempdev, maxsize = 4000)
5. Press CTRL+E to execute the command.