
Tips for Performance Tuning SQL Server Hardware
Tips for Using Performance Monitor
Password cracking tools for SQL Server
Moving Database Files Detach/Attach or ALTER DATABASE?

Tips for Performance Tuning SQL Server Hardware


When selecting your CPU for your server, select one with a large L2 cache. This
is especially important if you have multiple-processor servers. Select at least a 1MB L2
cache if you have one or two CPUs. If you have 4 or more CPUs, get at a least 2MB L2
cache in each CPU. The greater the L2 cache, the greater the server's CPU performance
because it reduces the amount of wait time experienced by the CPU when reading and
writing data to main memory. [6.5, 7.0, 2000] More info from The PCGuide
Microsoft recommends that the minimum amount of RAM your server should have if it
is running SQL Server 2000 and NT Server 4.0 is 128MB, and if it is running SQL Server
2000 and Windows 2000, the minimum recommendation is 256MB. And as you know,
Microsoft always underestimates the amount of RAM you need to run its software.
Because of this, you may want to consider doubling these figures as a starting point to
determining how much RAM your SQL Server should have. [2000] Added 9-21-2000
Don't use NT Server's software-based RAID; instead, use hardware-based RAID.
Software-based RAID is much slower because it can't offload the work to a separate
processor, as hardware-based RAID solutions do. [6.5, 7.0, 2000]
Select the best I/O controller you can get. Top-notch controllers offload much of the
I/O work onto their own local CPUs, freeing up CPU time on the server for other tasks. For
the ultimate in I/O controllers, consider a Fibre Channel connection instead of a SCSI
connection. [6.5, 7.0, 2000] More info from The PCGuide

Don't put DAT, DLT, CD-ROM, scanners or other non-hard disk devices on the
same I/O controllers that connect to your hard disk arrays. In addition, don't put
hard disks on the same I/O controller if they have different speeds. Putting different
devices on the same I/O controller slows the faster devices. Always put slower devices on
their own I/O controller. [6.5, 7.0, 2000] Added 8-29-2000
NTFS-formatted partitions should not exceed 80% of their capacity. For example, if
you have a 20GB drive, it should never be fuller than 16GB. Why? NTFS needs room to
work, and when you exceed 80% capacity, NTFS becomes less efficient and I/O can suffer
for it. You may want to create a SQL Server alert to notify you when your arrays exceed
80% of their capacity so you can take immediate action to correct the problem. [6.5, 7.0,
2000]
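As a rough sketch of how such a check might be automated from Transact-SQL: the undocumented xp_fixeddrives procedure reports free megabytes per drive letter, and the 20GB capacity below is an assumption you would replace with each array's real size.

```sql
-- Sketch only: xp_fixeddrives is undocumented, and the 20GB
-- (20,480MB) capacity is an assumed value for illustration.
CREATE TABLE #drives (drive char(1), mb_free int)
INSERT #drives EXEC master..xp_fixeddrives

-- Flag drives with less than 20% free space remaining
SELECT drive, mb_free
FROM #drives
WHERE mb_free < 20480 * 0.20

DROP TABLE #drives
```

Run on a schedule, such a query gives you the raw value a notification could be built on.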
Avoid locating read-intensive and write-intensive activity on the same drive or
array. For example, don't locate an OLTP and an OLAP database on the same physical
device. The same goes for avoiding putting both heavily random and sequential activity
on the same device. Whenever a drive or array has to change back and forth between
activities, efficiency is lost. [6.5, 7.0, 2000] Added 8-14-2000
At the very minimum, your server should have at least one 100Mbps network card,
and perhaps two. Two cards can be used to increase network throughput and to offer
redundancy. In addition, the network card(s) should be connected to full-duplex switched
ports for best performance. [6.5, 7.0, 2000]
Be sure that the network card(s) in your server are set to the same duplex
level (half or full) as the switched ports they are connected to (assuming they are
connected to a switch, and not a hub). If there is a mismatch, the server may still be
able to connect to the network, but network performance can be significantly impaired.
[6.5, 7.0, 2000]

Assuming SQL Server is running on a dedicated server (for best performance, it should
be), limit the number of network protocols installed on the server. NT Server
allows multiple network protocols (NetBEUI, NWLink, TCP/IP, DLC, AppleTalk, etc.) to be
installed on a server. Unnecessary network protocols increase overhead on the server and
send out unnecessary network traffic. For the best overall performance, only install
TCP/IP on the server, and use the SQL Server TCP/IP Network Library to communicate with
clients. [6.5, 7.0, 2000]

Tips for Using Performance Monitor


General Performance Monitor Tips
CPU Performance Counters
I/O Performance Counters
Memory Performance Counters
Network Performance Counters
SQL Server Performance Counters

General Performance Monitor Tips


The "Performance Monitor" under the "Microsoft SQL Server" entry on your Start
Menu is the same program as the "Performance Monitor" under the "Administrative
Tools" entry. The difference is that when you bring up Performance Monitor from the
"Microsoft SQL Server" entry, it comes up already running several pre-configured SQL
Server performance counters.
The Performance Monitor under the "Administrative Tools" entry does not come with any
pre-configured counters loaded. Personally, I dislike the "Microsoft SQL Server" option
and always choose the Performance Monitor option under "Administrative Tools". This
way, I always get to choose the SQL Server Performance Monitor counters I prefer to
use. [6.5, 7.0] More info from Microsoft
*****
Once you have identified the Performance Monitor counters you like to use, you
can save them in a file and then later reload them when you want to see them again. Use
Performance Monitor's "File" menu option to save and load your counter files. You must
save different files for each of the four different Performance Monitor modes, such as
"Chart" and "Log". [6.5, 7.0, 2000]
*****
When monitoring a SQL Server using Performance Monitor, don't run Performance
Monitor on the same server you are monitoring. Instead, run it on a different server
or workstation and remotely monitor the SQL Server. Running Performance Monitor on
the same server you are monitoring will skew the results.
For the most reliable results when using Performance Monitor to track SQL Server and
other related server counters, use Performance Monitor's Log Mode. This allows you to
collect counter data over a period of time. If you are monitoring for periods of less than
eight hours, then begin by collecting information every 15 to 60 seconds.
If you want to collect data for longer periods, you may want to collect information
every 300 to 600 seconds instead. If you sample the data too frequently,
especially for long periods, the amount of data collected will be huge. On the other hand,
if you collect information too infrequently, then you may miss important detail. You may
have to experiment with various collection intervals until you come up with one that is
best for your circumstances. [6.5, 7.0, 2000]
*****
To collect counter information from the I/O subsystem, you must turn on the disk
counters by entering this command at the NT Server command prompt, "diskperf -y", and
then rebooting the server. Turning these counters on can use 2 to 3 percent of the CPU
utilization of the server. If your server is not CPU bound, then you should not be worried
by this, as it will not affect SQL Server's performance.
If you are CPU bound, but you do not want to turn the counters off, Microsoft has a
suggested registry change you can make that will alleviate most of the CPU hit. To find
out more about this registry change, search www.microsoft.com for this article, "MS
Windows NT Server 4.0 Enterprise File Server Scalability and Performance". This article
covers how to make the necessary registry change by adding the
"LargeIrpStackLocations" registry key. [6.5, 7.0, 2000]
*****
Don't use the "diskperf -ye" command for hardware-based RAID. The "-ye" is
designed for NT Server's software-based RAID. [6.5, 7.0, 2000]
*****
If your server is performance bound and you have done everything you can
think of to boost performance, and you still are having performance problems, you can
gain a "little" more performance by starting SQL Server by using the -x startup
parameter. What this option does is to turn off CPU time and cache-hit ratio statistics,
reducing overhead just a little. This option, of course, will not let you use Performance
Monitor to its full potential. [6.5, 7.0, 2000] More info From Microsoft
*****
In virtually all cases, use the Physical Disk counters, not the Logical Disk counters,
when monitoring I/O activity. The Logical Disk counters will not always provide accurate
data, especially when you are using RAID arrays. [6.5, 7.0, 2000]
*****
Periodically, collect performance data on your SQL Servers, and then save this
data in a spreadsheet or database so that you can determine trends in your server's
performance.
For example, you may consider collecting data on your servers on a daily basis, collecting
data every 600 or 900 seconds, then each day dumping this data into a database for
trend analysis. While this can be a lot of work, it will provide solid data you can use to
plan future hardware expansions. [6.5, 7.0, 2000]
*****
Take advantage of SQL Server's ability to create SQL Server Performance Condition
Alerts. You can create alerts that are fired when performance monitor conditions, that
you set, are reached. For example, if you want to know if the number of SQL Server user
connections exceeds 500, you can create the alert, and when it is fired, SQL Server can
e-mail you with the alert message. Create these alerts using SQL Server Enterprise
Manager.
In many ways, these alerts are similar to the alerts that you can create with the
Performance Monitor. But you will find that the ones you create using Enterprise Manager
are easier to set up and are more robust. [7.0, 2000]
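The user-connections example above can also be sketched in Transact-SQL (you would normally do this through Enterprise Manager's Alerts node; the operator name 'DBA' is an assumption):

```sql
-- Sketch: fire an alert when user connections exceed 500.
-- The operator 'DBA' is assumed to already exist in msdb.
EXEC msdb.dbo.sp_add_alert
    @name = 'User connections over 500',
    @performance_condition = 'SQLServer:General Statistics|User Connections||>|500'

EXEC msdb.dbo.sp_add_notification
    @alert_name = 'User connections over 500',
    @operator_name = 'DBA',
    @notification_method = 1   -- 1 = e-mail
```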
*****
While most DBAs are familiar with the Performance Monitor tool included with Windows
NT Server, many are not aware of the "service" version of this tool, called "monitor.exe",
which is available with the Windows NT 4.0 Resource Kit. This version of the
Performance Monitor runs as a service, not as a foreground application. Because it is
a service, it can be started and stopped at a command line. This provides the additional
benefit of allowing the service to be automatically started and stopped by using the NT
Server AT service, or similar scheduling tool. This way, you can schedule performance
data to be automatically collected when you want. In addition, this service version of
Performance Monitor uses fewer server resources than the foreground version. [6.5, 7.0]
*****
When performance tuning some SQL Server-based applications, it would be handy to be
able to measure such things as the number of invoices entered per hour, or the number
of checks written per hour, or perhaps the number of times a particular stored procedure
has run. You can monitor these, and almost any other type of performance indicator,
using Performance Monitor. SQL Server includes a Performance Monitor Object called the
User Settable Object, which is a set of ten performance monitor counters that you can
customize for your own purposes. In other words, you can create up to ten of your own
SQL Server Performance Monitor counters.
Creating your own counters is not hard if you know how to program stored procedures
using Transact-SQL. SQL Server includes ten system stored procedures (named
sp_user_counter1 through sp_user_counter10), and you can assign any integer value
you want to any of the ten available counters. Generally, what you will do is encapsulate
one of these special stored procedures in a stored procedure you write that is used to
calculate the value you want displayed for the Performance Monitor counter.
One important thing to keep in mind when creating your own SQL Server Performance
Monitor counters is that you don't want the stored procedure you create to track your
performance monitor counter to itself be a burden on SQL Server's performance. Keep
your Transact-SQL code in your stored procedure as simple as possible. [6.5, 7.0, 2000]
More info from Microsoft Added 8-28-2000
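For example, a minimal sketch of such a wrapper procedure (the Invoices table and its EnteredDate column are hypothetical) that publishes a count through User counter 1:

```sql
-- Sketch: the Invoices table and EnteredDate column are hypothetical.
-- Keep the query cheap so the counter itself doesn't burden the server.
CREATE PROCEDURE usp_publish_invoice_counter
AS
DECLARE @invoices_today int
SELECT @invoices_today = COUNT(*)
FROM Invoices
WHERE EnteredDate >= CONVERT(char(8), GETDATE(), 112)  -- midnight today
EXEC sp_user_counter1 @invoices_today
GO
```

Run the wrapper on a schedule (a SQL Server Agent job, for instance), and the value appears in Performance Monitor under the User Settable object as User counter 1.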
*****
As you probably know, each SQL Server connection is associated with a SPID (SQL
Server Process ID). Current connection information (including the SPID) can be seen
using Enterprise Manager's Current Activity window. On rare occasions, a user
connection can virtually tie up an entire server's CPUs if something is wrong
with the connection. In other cases, a long running query may cause a performance
problem. Unfortunately, the Current Activity window does not always show that this is
happening with individual user connections, although you know there is something wrong
with the server because your CPU cycles are being used up, as can be seen using
Performance Monitor and monitoring the System Object: % Total Processor Time counter.
So if you know that a user connection is causing problems, but you can't identify it using
the Current Activity window, here's a method you can use to identify which user
connection is causing the problem.
Your first step is to verify that the SQL Server process (sqlservr), and not some other
process, is responsible for the excessive CPU use. Remember, the "sqlservr" process is
made up of many different threads, each one generally (but not always) representing a
specific user connection.
The way to do this is to use the Performance Monitor and go to the Process Object: %
Processor Time counter using the Chart View. When you do this, there will be many
instances of running processes on your server listed in the Instance list box. What you
need to do is to select all of the instances from the Instance list box except the _Total
instance (which just sums the total of all the other instances). Once the instances are
displayed in the Chart View, you need to change the format of the Chart from Graph to
Histogram. This is done from the Options|Chart from the Performance Monitor's main
menu. Now, you should be able to easily see which process is causing the problem.
(Note: if "Idle" is the highest rated process, then not much is going on with your
server.) If SQL Server is the problem, then the instance with the highest % Processor
Time should be "sqlservr". If this is the case, then proceed with the rest of this tip,
otherwise, you need to find out what other process on your server is causing you
problems.
Assuming the "sqlservr" process is the guilty party, the next step is to identify which user
connection (SPID) is causing the problem. To do this, use Performance Monitor and go to
the Thread Object: % Processor Time counter using the Chart View. When you do this,
there will be many instances of running threads on your server listed in the Instance list
box. What you need to do is to select all of the instances that begin with "sqlservr" from
the list box. Once the instances are displayed in the Chart View, you need to change the
format of the Chart from Graph to Histogram. This is done from the Options|Chart from
the Performance Monitor's main menu. Now, you should be able to easily see which
thread is eating up your CPU resources. While we have now identified which thread (and
its Performance Monitor Instance Number) is causing the problem, we still haven't figured
out which user connection is causing the problem. Although we have identified a specific
Performance Monitor Instance Number as being the culprit, there is no correlation
between a Performance Monitor Instance Number and a SPID in SQL Server. If there was,
our job would be much easier. Unfortunately, it will take us two more steps to make this
correlation. So here goes.

Our next step is to correlate the Performance Monitor Instance Number we just identified
and match it with something called a KPID (Kernel Process ID) or an ID Thread. KPID and
ID Thread are exactly the same thing. A KPID or ID Thread is a system-wide identifier
that uniquely identifies a thread in NT Server. To do this, use Performance Monitor and go
to the Thread Object: ID Thread counter using the Report View. When you do this, there
will be many instances of running threads on your server listed in the Instance list box.
What you need to do is to select all of the instances that begin with "sqlservr" from the
list box, similar to what you did in the previous step. The reason we are using the Report
View is because what you need to do now is to find out which Thread ID (KPID) matches
the Performance Monitor Instance Number (this is the number identified in the column
headings in the report) you identified in the previous step. Now that we know this, we
are only one step away from finding our culprit.
Our last step is to correlate the Thread ID (KPID) identified in the last step to the SPID.
To do this, run the following query in Query Analyzer:
USE master
SELECT spid, kpid, hostname, dbid, cmd FROM master..sysprocesses

This query will display a report listing both SPIDs and KPIDs. And this is where you make
the last correlation. Just match up the Thread ID (KPID) found in the last step, and then
match it to the corresponding SPID, and now you know which SQL Server user
connection is causing the problem.
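If you prefer, you can also filter directly on the Thread ID (KPID) you found, a small variation on the query above (1234 is a placeholder for your actual KPID):

```sql
-- 1234 is a placeholder for the KPID found in Performance Monitor
SELECT spid, kpid, hostname, dbid, cmd
FROM master..sysprocesses
WHERE kpid = 1234
```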
This has been a lot of work, but doing this has helped me to resolve more than one
performance-related problem. [6.5, 7.0] Added 9-30-2000

CPU Performance Counters


Measuring the CPU activity of your SQL Server is a key way to identify potential CPU
bottlenecks. The Process Object: % Processor Time counter, is available for each CPU
(instance), and measures the utilization of each individual CPU.
The System Object: % Total Processor Time counter measures the average of all the
CPUs in your server. This is the key counter to watch for CPU utilization. If the % Total
Processor Time counter exceeds 80% for continuous periods (over 10 minutes or so),
then you may have a CPU bottleneck. The solutions are to reduce the server load, get
faster CPUs, or get more CPUs. [6.5, 7.0, 2000]
*****
Another valuable indicator of CPU performance is the System Object: Processor Queue
Length. If the Processor Queue Length exceeds 2 per CPU for continuous periods (over
10 minutes or so), then you probably have a CPU bottleneck. For example, if you have 4
CPUs in your server, the Processor Queue Length should not exceed a total of 8 for the
entire server.
If the Processor Queue Length regularly exceeds the recommended maximum, but the
CPU utilization is not correspondingly as high (which is typical), then consider reducing
the SQL Server "max worker threads" configuration setting. It is possible the reason that
the Processor Queue Length is high is because there are an excess number of worker
threads waiting to take their turn. By reducing the number of "maximum worker
threads", what you are doing is forcing thread pooling to kick in (if it hasn't already), or
to take greater advantage of thread pooling.
Use both the Processor Queue Length and the % Total Processor Time counters together to
determine if you have a CPU bottleneck. If both indicators are exceeding their
recommended amounts during the same continuous time periods, you can be assured
there is a CPU bottleneck. [6.5, 7.0, 2000]
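As a sketch, the "max worker threads" setting can be changed with sp_configure (128 is only an illustrative value; test any change carefully against your own workload):

```sql
-- Sketch: lower "max worker threads" from its default of 255.
-- 128 is illustrative; the right value depends on your workload.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max worker threads', 128
RECONFIGURE
```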
*****
If the System Object: % Total Processor Time counter in your multiple CPU server
regularly runs over 80% or so, then you may want to start monitoring the System:
Context Switches/Sec counter. This counter measures how often NT Server switches
between threads. Heavy context switching hurts performance and should be minimized.
If the System: Context Switches/Sec counter nears 8000, then you should consider using
Windows NT fibers. Fibers are subcomponents of threads that perform similarly to
threads. The advantage of using them is that they incur less overhead when being
switched between CPUs in a multiple CPU server.
This benefit does not show up unless the server's CPUs are running at near maximum or
the Performance Monitor System: Context Switching/Sec counter nears 8000 switches
per second for continual periods (over 10 minutes), so don't select this option unless
your server's CPUs are nearly maxed out. You will also want to carefully test (before and
after) the effect of this setting on your server's performance. This option is available
under the SQL Server's "Properties", "Processor" tab. [7.0, 2000] More info from
Microsoft Added 9-22-2000

I/O Performance Counters


If your I/O subsystem is working efficiently, then each time SQL Server wants to write or
read data, it can without waiting. But if the load on the server is too great, then reads
and writes will have to wait, each taking their turn. This can significantly reduce SQL
Server's performance.
The best way to detect an I/O bottleneck is to use the PhysicalDisk Object: Avg. Disk Queue Length to
monitor each disk array in your server. If the Avg. Disk Queue Length exceeds 2 for
continuous periods (over 10 minutes or so) for each disk drive in an array, then you
probably have an I/O bottleneck for that array. You will need to calculate this figure
because Performance Monitor does not know how many physical drives are in arrays.
For example, if you have an array of 6 physical disks, and the Avg. Disk Queue Length is
10 for a particular array, then the actual Avg. Disk Queue Length for each drive is 1.67
(10/6=1.67), which is well within the recommended 2 per physical disk. The solutions to
this include: adding drives to an array (if you can), getting faster drives, adding cache
memory to the controller card (if you can), using a different version of RAID, getting a
faster controller, or reducing the load on the server.
Before using this counter, be sure to manually turn it on by going to the NT Command
Prompt and entering the following: "diskperf -y", and then rebooting your server. This is
required to turn the disk counters on for the first time. [6.5, 7.0, 2000]
*****
The Physical Disk Object: % Disk Time counter is a handy tool for several reasons.
This counter measures how busy a physical array is (not a logical partition or individual
disks in an array). It provides a good relative measure of how busy your arrays are, and
over a period of time, can be used to determine if the I/O needs of your server are
increasing, indicating a potential need for more I/O capacity in the near future.
As a rule of thumb, the % Disk Time counter should run less than 90%. If this counter
exceeds 90% for continuous periods (over 10 minutes or so), then your SQL Server may
be experiencing an I/O bottleneck. If you suspect a physical disk bottleneck, you may
also want to monitor the % Disk Read Time counter and the % Disk Write Time counter
in order to help determine if the I/O bottleneck is being mostly caused by reads or
writes.
Also, this counter is a good indicator of how busy each array on your server is. By
monitoring each array, you can tell how well balanced your I/O is over each array. Ideally,
you want to distribute the I/O load of SQL Server as evenly as possible over your arrays,
and this counter will tell you how successful you have been at doing this.
Before using this counter, be sure to manually turn it on by going to the NT Command
Prompt and entering the following: "diskperf -y", and then rebooting your server. This is
required to turn the disk counters on for the first time. [6.5, 7.0, 2000] Updated 8-28-2000
*****
If you are not sure what to make the fillfactor for your indexes, your first step is to
determine the ratio of disk writes to reads. The way to do this is to use these two
counters: Physical Disk Object: % Disk Read Time and Physical Disk Object: % Disk
Write Time. When you run both counters on an array, you should get a good feel for what
percentage of your I/O is reads and writes. You will want to run this over a period of time
representative of your typical server load.

Once you know the ratio of disk writes to reads, you now have the information you need
to help you determine an optimum fillfactor for your indexes.
Before using this counter, be sure to manually turn it on by going to the NT Command
Prompt and entering the following: "diskperf -y", and then rebooting your server. This is
required to turn the disk counters on for the first time. [6.5, 7.0, 2000]
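As a sketch of applying the result (the table, column, and 90% figure below are all hypothetical), a read-heavy index can be packed tightly, while a write-heavy one benefits from a lower fillfactor:

```sql
-- Sketch: table, column, and fillfactor value are hypothetical.
-- ~90% reads: pack pages nearly full; for write-heavy indexes,
-- leave more free space per page with a lower fillfactor.
CREATE INDEX IX_Orders_CustomerID
ON Orders (CustomerID)
WITH FILLFACTOR = 90
```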

Memory Performance Counters


This discussion assumes that your server is dedicated to SQL Server, and perhaps some
related server utilities. One of the key counters you should be regularly watching is the
Memory Object: Pages/Sec. This measures the number of pages per second that are
paged out of memory to disk, or paged into memory from disk. Assuming that SQL
Server is the only major application running on your server, then this figure should
average nearly zero, except for occasional spikes, which are normal.
Over continuous periods of time (10 minutes or so) the Pages/Sec should ideally be near
zero. If this is not the case, this means that NT Server is having to page data, which it
should not be doing. This is because SQL Server does not use NT Server's page file, and
since this should be the only application on your server, there should be little or no
paging going on.
If there is regular paging going on, this means that you are running other applications on
your server, which is causing NT Server to page, or you have set the SQL Server Max
Server Memory configuration setting to something other than "Dynamically
configure SQL Server memory". Determine which is the problem and fix it, as this paging
is slowing down SQL Server's performance. Ideally, remove the NT applications causing
the paging.
If you have changed the SQL Server Max Server Memory configuration to something
other than "Dynamically configure SQL Server memory", then change it back to
this setting. SQL Server should be allowed to take as much RAM as it wants for its own
use without having to compete for RAM with other applications. [6.5, 7.0, 2000]
*****
Another way to check to see if your SQL Server has enough physical RAM is to check the
Memory Object: Available Bytes counter. This counter can be viewed from
Performance Monitor or from the NT Server or Windows 2000 Task Manager (see the
Performance tab). This value should be greater than 5MB. If not, then your SQL Server
needs more physical RAM. On a server dedicated to SQL Server, SQL Server attempts to
maintain from 4-10MB of free physical memory. The remaining physical RAM is used by
the operating system and SQL Server. When the amount of available bytes is less than
4MB, most likely SQL Server is also paging (which it shouldn't) and is experiencing a
performance hit. When this happens, you either need to increase the amount of physical
RAM in the server, reduce the load on the server, or change your SQL Server's memory
configuration settings appropriately. [7.0, 2000] Updated 9-1-2000

Network Performance Counters


Before you can use the network performance counters, the Network Monitor Agent
service must be installed on your server. After installing it, you will have to reboot. Also,
don't forget to rerun the NT service pack to update the files added during the installation
process. [6.5, 7.0, 2000]
*****
One of the best ways to monitor if you have a network bottleneck is to watch the
Network Interface Object: Bytes Total/Sec counter. This counter measures the
number of bytes that are being sent back and forth between your server and the
network. This includes both SQL Server and non-SQL Server network traffic. Assuming
your server is a dedicated SQL Server, then the vast majority of the traffic measured by
this counter should be from SQL Server.

There is no hard and fast "correct" number for this counter as it measures the actual
traffic. To help you decide if your server has a network bottleneck, one way to use this
number is to compare it with the maximum traffic supported by the network connection
your server is using. Also, this is another important counter to watch over time. It is
important to know if your network traffic is increasing regularly. If it is, then you can use
this information to help you plan for future hardware needs. [6.5, 7.0, 2000] Added 9-5-2000
*****
If you think that you have a network bottleneck, it is easy to check using the Network
Segment Object: % Network Utilization counter. This counter provides you with what
percentage of the bandwidth is being used by the network connection your server is
using. This is not the amount of bandwidth being sent to and from your server, but the
total bandwidth being used on the connection the network card is attached to.
This connection could be of many different types, including a shared hub or a switched
port running at half-duplex or full-duplex. The connection might be 10Mbps, 100Mbps, or
even 1Gbps. Given this, the results you receive from the counter must be interpreted in
the light of which type of connection you have. Ideally, you will want a network
connection to its own dedicated switch port for maximum performance. [6.5, 7.0, 2000]
*****
If you want to find out how much data is being sent back and forth from your
server to the network, use the Server Object: Bytes Received/sec and the Server
Object: Bytes Transmitted/sec. These counters will help you find out how busy your
actual server is over the network, and are good counters to watch over time. [6.5, 7.0,
2000]

SQL Server Performance Counters


One cause of excess I/O on a SQL Server is page splitting. Page splitting occurs when an
index or data page becomes full, and then is split between the current page and a newly
allocated page. While occasional page splitting is normal, excess page splitting can cause
performance issues.
If you want to find out if your SQL Server is experiencing a large number of page
splits, monitor the SQL Server Access Methods object: Page Splits/sec. If you find out
that the number of page splits is high, consider increasing the fillfactor of your indexes.
[6.5, 7.0, 2000] Updated 9-7-2000
*****
Another key counter to watch is the SQL Server Buffer Manager Object: Buffer Cache
Hit Ratio. This indicates how often SQL Server goes to the buffer, not the hard disk, to
get data. In OLTP applications, this ratio should exceed 95%. If it doesn't, then you need
to add more RAM to your server to increase performance.
In OLAP applications, the ratio could be much less because of the nature of how OLAP
works. In any case, more RAM should increase the performance of SQL Server. [6.5, 7.0,
2000] Updated 9-1-2000
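The same ratio can also be sampled from Transact-SQL via the sysperfinfo system table; a sketch (note that the raw counter must be divided by its base counter to get a percentage):

```sql
-- Sketch: sample the buffer cache hit ratio from sysperfinfo.
SELECT 100.0 * r.cntr_value / b.cntr_value AS buffer_cache_hit_ratio
FROM master..sysperfinfo r, master..sysperfinfo b
WHERE r.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'
```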
*****
Since the number of users using SQL Server affects its performance, you may want to
keep an eye on the SQL Server General Statistics Object: User Connections. This shows
the number of user connections, not the number of users, that currently are connected to
SQL Server.
When interpreting this number, keep in mind that a single user can have multiple
connections open, and also that multiple people can share a single user connection. Don't
make the assumption that this number represents actual users. Instead, use it as a
relative measure of how "used" the server is. Watch the number over time to get a feel
for whether your server's usage is rising or falling. [6.5, 7.0, 2000]
*****
If you want to see how much physical RAM is devoted to SQL Server's data cache,
monitor the SQL Server Buffer Manager Object: Cache Size (pages). This number is
presented in pages, so multiply it by 8 to determine the amount of RAM in KB (each
page is 8K, or 8,192 bytes).
Generally, assuming the server is dedicated to SQL Server, this number should come
close to the total amount of RAM in the server, less the RAM used by NT, SQL Server
itself, and any utilities you have running on the server.
If the amount of RAM devoted to the data cache is much smaller than you would expect,
then you need to do some investigating to find out why. Perhaps you aren't allowing SQL
Server to dynamically allocate RAM. Whatever the cause, you need to find a solution, as
the amount of data cache available to SQL Server can significantly affect SQL Server's
performance. [6.5, 7.0, 2000]
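The page-to-RAM arithmetic above can be sketched as follows (the 8KB page size is a documented SQL Server constant; the helper names are our own):

```python
SQL_PAGE_KB = 8  # SQL Server stores data in 8KB (8,192-byte) pages

def cache_size_kb(pages: int) -> int:
    # Convert a Cache Size (pages) reading into kilobytes.
    return pages * SQL_PAGE_KB

def cache_size_mb(pages: int) -> float:
    # Same reading expressed in megabytes.
    return pages * SQL_PAGE_KB / 1024
```

For example, a Cache Size (pages) reading of 65,536 works out to 524,288KB, i.e. 512MB of data cache.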
*****
If your databases are suffering from deadlocks, you can track them by using the SQL
Server Locks Object: Number of Deadlocks/sec. But unless this number is relatively
high, you won't see much here, because the measure is per second and it takes quite a
few deadlocks per second to be noticeable.
But still, it is worth checking out if you are having a deadlock problem. Better yet, use
the Profiler's ability to track deadlocks. It will provide you with more detailed information.
What you might consider doing is to use the Number of Deadlocks/sec counter on a
regular basis to get the "big" picture, and if you discover deadlock problems, then use
the Profiler to "drill" down on the problem for a more detailed analysis. [6.5, 7.0, 2000]
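To see why a per-second counter can hide a real problem, it helps to scale the rate up to an hour; a minimal sketch:

```python
def deadlocks_per_hour(deadlocks_per_sec: float) -> float:
    # A rate that rounds to zero on the Performance Monitor chart
    # can still add up to a meaningful number over an hour.
    return deadlocks_per_sec * 3600
```

Even 0.01 deadlocks/sec, nearly invisible on the chart, is 36 deadlocks an hour.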
*****
If your users are complaining that they have to wait for their transactions to
complete, you may want to find out if object locking on the server is contributing to this
problem. To do this, use the SQL Server Locks Object: Average Wait Time (ms). You can
use this counter to measure the average wait time of a variety of locks, including
database, extent, key, page, RID, and table locks.
If you can identify one or more types of locks causing transaction delays, then you will
want to investigate further to see if you can identify what specific transactions are
causing the locking. The Profiler is the best tool for this detailed analysis. [6.5, 7.0,
2000]
*****
While table scans are a fact of life, and sometimes faster than index seeks, generally it
is better to have fewer table scans than more. To find out how many table scans
your server is performing, use the SQL Server Access Methods Object: Full Scans/sec.
Note that this counter is for the entire server, not just a single database. One thing you
will notice with this counter is that there often appears to be a pattern of scans occurring
periodically. In many cases, these are table scans SQL Server is performing on a regular
basis for internal use.
What you want to look for are the random table scans that represent your application. If
you see what you consider to be an inordinate number of table scans, then break out the
Profiler and Index Tuning Wizard to help you determine exactly what is causing them,
and if adding any indexes can help reduce the table scans. Of course, SQL may just be
doing its job well, and performing table scans instead of using indexes because it is just
plain more efficient. [6.5, 7.0, 2000]
*****
If you suspect that your backup or restore operations are running at suboptimal speeds,
you can help verify this by using the SQL Server Backup Device Object:
Device Throughput Bytes/sec. This counter will give you a good feel for how fast your
backups are performing. You will also want to use the Physical Disk Object: Avg. Disk
Queue Length counter to help corroborate your suspicions. Most likely, if you are having
backup or restore performance issues, it is because of an I/O bottleneck. [6.5, 7.0, 2000]
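If you know your database size and the observed device throughput, you can also estimate roughly how long a backup should take; a sketch under the assumption that throughput stays constant for the whole run (the helper name is our own):

```python
def backup_eta_minutes(database_bytes: int, throughput_bytes_per_sec: float) -> float:
    # Rough estimate only: real backups fluctuate with I/O contention,
    # so treat this as a sanity check, not a promise.
    return database_bytes / throughput_bytes_per_sec / 60
```

A 6GB database streaming at 10MB/sec should finish in roughly ten minutes; if the counter shows far less throughput than the device is rated for, suspect an I/O bottleneck.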
*****
If you are using transactional replication, you may want to monitor the latency of the
Log Reader as it moves transactions from a database's transaction log into the
distribution database, and also the latency of the Distribution Agent as it moves
transactions from the distribution database to the Subscriber database.
The total of these two figures is the amount of time it takes a transaction to get from the
publication database to the subscriber database.
The counters for these two processes are SQL Server Replication LogReader:
Delivery Latency and SQL Server Replication Dist.: Delivery Latency.
If you see a significant increase in the latency for either of these processes, this should
be a signal to find out what has changed to cause the increased latency. [6.5, 7.0, 2000]
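A simple way to act on these two counters is to sum them into an end-to-end latency and compare it against a baseline you have measured for your own topology; a hypothetical sketch (the threshold factor is our own choice):

```python
def replication_latency_check(log_reader_ms: float, distributor_ms: float,
                              baseline_ms: float, factor: float = 2.0):
    # End-to-end latency is the sum of the two agent latencies; flag it
    # when it exceeds the measured baseline by the given factor.
    total = log_reader_ms + distributor_ms
    return total, total > baseline_ms * factor
```

With a 300ms baseline, a reading of 400ms in the Log Reader plus 600ms in the Distribution Agent (1,000ms total) would be flagged, while 100ms plus 150ms would not.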

Windows 2000
Since Windows 2000 is so new, you will need to ensure that all the hardware you run
it on, and their related drivers, have been tested for use with Windows 2000. Using
an outdated or buggy driver can wreak havoc with performance. [7.0, 2000]
*****
If you want to upgrade a current server running NT Server 4.0, you will be best off if
you install Windows 2000 from scratch, instead of using the upgrade procedure
included with Windows 2000. Ideally, reformat all of the drives and start completely
fresh. Upgrading can introduce hard-to-identify performance problems, such as bad or
outdated drivers not being upgraded, fragmented drives not being defragmented, and so on.
[7.0, 2000]
*****
Install Windows 2000 as a stand-alone server, not as a domain controller. Domain
controllers have extra overhead and perform functions not required by SQL Server. Along
the same lines, don't install any unnecessary server components, such as DNS, DHCP,
etc, on your SQL Server. The goal is to dedicate all of the server's power to SQL Server.
[7.0, 2000]
*****
Windows 2000 supports larger MTU (maximum transmission unit) sizes
than Windows NT 4.0, ranging from 1.5KB to 9KB. The larger the MTU,
the fewer packets have to be sent over the network, reducing both server and
network overhead. To take advantage of this Windows 2000 feature, you will have to use a
network card that supports the larger 9KB MTU, and configure this
setting at the network card. [7.0, 2000]
*****
Defragment the drives or arrays regularly using the built-in Disk Defragmenter
(part of the Computer Management Console), or using a third-party tool designed for
Windows 2000. This fixes disk fragmentation and boosts disk I/O. [7.0, 2000]
*****
Set the "Application Response" setting to "Optimize Performance for
Background Services". This ensures that all applications running on Windows 2000
(foreground and background) get an equal shot at the CPU. Set this option by going to
the "System" icon in the "Control Panel", then select the "Advanced" tab, and then select
the "Performance Options" button. [7.0, 2000]
*****
Format all the disk arrays on your server using NTFS 5.0, the new NTFS file system
format included (but not required) for Windows 2000. The new format includes some new
performance enhancements. [7.0, 2000] Added 8-9-2000
*****
Avoid using NTFS data file encryption on SQL Server database and log files.
While the performance hit is minimal on small, lightly used databases, it is noticeable on
larger, busy databases. [7.0, 2000] Added 8-9-2000
*****
When running SQL Server 7.0 or SQL 2000 under Windows 2000, the ideal cluster size
when formatting NTFS partitions is 64K. If your hard disk is larger than 32MB, this is
the default choice selected by Windows 2000 when formatting drives. [7.0, 2000] Added
9-21-2000

Windows 98/ME
If you are running SQL Server under Windows 98/ME, consider configuring the swap
file to a constant swap file size, instead of the dynamic swap file used by default in
Windows. This reduces overhead because Windows no longer has to resize the swap file.
In addition, it helps to reduce hard disk fragmentation.
If you decide to do this, you will want to defrag your hard disk first to ensure contiguous
hard disk space for the swap file. I would choose a fixed swap file size at least twice as
large as the amount of RAM in your computer. If SQL Server is your only application on
this computer, then this size should be adequate. To change the swap file size, go to
Control Panel | System | Performance | Virtual Memory. If you have more than one hard
drive, locate the swap file on the fastest drive.
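The sizing rule above, a fixed swap file of at least twice the amount of RAM, is simple arithmetic; as a sketch:

```python
def fixed_swap_size_mb(ram_mb: int, multiplier: int = 2) -> int:
    # "At least twice as large as the amount of RAM" from the tip above.
    return ram_mb * multiplier
```

A Windows 98 box with 64MB of RAM would therefore get a 128MB fixed swap file.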
*****
If you are running SQL Server under Windows 98/ME, and the computer has at least
16MB of RAM (and it should have at least 64MB if you want decent performance),
consider changing the computer's role from Desktop Computer to Network
Server. This setting allows more data to be cached in RAM, boosting performance. To
change this option, go to Control Panel | System | Performance | File System.

Windows NT Server 4.0


Check to be sure the "Application Performance" for your server is set to "None". This
ensures that any foreground applications you run on your server will not get a higher
priority than SQL Server. To find this setting, go to the "Control Panel", click on the
"System" icon, then click on the "Performance" tab. [6.5, 7.0, 2000]
*****
Check to be sure the "Optimization" for your server is set to "Maximize Throughput for
Network Applications." This will ensure that NT Server allocates more RAM to SQL
Server than to its file cache. To find this setting, go to the "Control Panel", click on the
"Network" icon, then click on the "Services" tab, then click on "Server", and then click on
"Properties". [6.5, 7.0, 2000]
*****
Assuming that SQL Server is located on a dedicated server, the location of the
PAGEFILE.SYS is not critical. This is because SQL Server does not normally do any
paging on its own. If you do notice that your SQL Server is paging regularly, then it
needs to be tuned appropriately so that paging is virtually stopped. Generally, leave the
PAGEFILE.SYS file on the same drive as the operating system.
If your server is paging on a dedicated SQL Server, the most likely cause of this is that
you are not allowing SQL Server to dynamically allocate RAM on its own. Check how you
have configured the "Memory" tab under the SQL Server "Properties" of your server. It
should ideally be set to "Dynamically configure SQL Server memory". [7.0, 2000]
*****
Since the PAGEFILE.SYS is not used by SQL Server, and only barely used by NT (on a
dedicated SQL Server), you don't have to have a large PAGEFILE.SYS file. Generally,
the default size is overkill for most servers and disk space can be reclaimed by making
the PAGEFILE.SYS file smaller.
The best way to size the PAGEFILE.SYS is to monitor how much of it is used during
production using the Performance Monitor Page File Object: % Usage counter, and then
resize the PAGEFILE.SYS with a minimum size just slightly larger than the amount that is
actually being used (based on the Performance Monitor counter), and with a maximum
size 50MB larger than the minimum size.
The PAGEFILE.SYS settings can be viewed and changed by going to the "Control Panel",
clicking on the "System" icon, selecting the "Performance" tab, and then clicking on the
"Virtual Memory" button. If you change the virtual memory settings, you will have to
reboot your server for the new settings to take effect. [6.5, 7.0, 2000]
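The sizing procedure described above reduces to a small calculation: observed peak usage plus a little headroom for the minimum, then 50MB more for the maximum. A sketch under those assumptions (the headroom figure is our own choice, not from the tip):

```python
def pagefile_settings_mb(peak_usage_mb: float, headroom_mb: float = 10,
                         max_extra_mb: float = 50):
    # Minimum: slightly above the observed peak from the % Usage counter.
    # Maximum: 50MB above the minimum, per the tip above.
    minimum = peak_usage_mb + headroom_mb
    return minimum, minimum + max_extra_mb
```

A measured peak of 90MB would suggest a 100MB minimum and a 150MB maximum.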
*****
NTFS-formatted partitions should not exceed 80% of their capacity. For example, if
you have a 20GB drive, it should never hold more than 16GB. Why? NTFS needs room to
work, and when you exceed 80% capacity, NTFS becomes less efficient and I/O can suffer
for it. You may want to create a SQL Server alert to notify you when your arrays exceed
80% of their capacity so you can take immediate action to correct the problem. [6.5, 7.0,
2000]
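The 80% rule translates directly into an alert condition; a minimal sketch:

```python
def ntfs_capacity_alert(total_gb: float, used_gb: float,
                        threshold: float = 0.80) -> bool:
    # True once the partition is more than 80% full.
    return used_gb / total_gb > threshold
```

For the 20GB example above, 17GB in use fires the alert while 15GB does not.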
*****
Remove all unessential services and network protocols from your SQL Server.
These can include, but are not limited to: the web server service, FTP server service,
Gopher, SMTP, WINS, DHCP, Alerter, Clipboard Server, Messenger, Network DDE,
Directory Replicator, Schedule, Spooler. It also includes unused network protocols, such
as DLC, AppleTalk, NWLink, and NetBEUI. Each one you remove frees up RAM and CPU
cycles, making them available for SQL Server. Of course, if you really need one or more
of these services or protocols, then don't disable or unload them. The ones listed above
aren't required for a dedicated SQL Server. [6.5, 7.0, 2000]
*****
Configure NT Server 4.0 to be a member server, not a Primary Domain Controller
(PDC) or a Backup Domain Controller (BDC). The task of being a Domain Controller
drains away resources from SQL Server. [6.5, 7.0, 2000]
*****
Don't put SQL Server program, database, or log files on compressed NTFS partitions.
The performance is terrible. In fact, make it a rule not to use NTFS compression for any
files other than rarely accessed archive data. [6.5, 7.0, 2000]

Password cracking tools for SQL Server


Kevin Beaver, CISSP
Rating: 5.00 (out of 5)

05.09.2006

If you're performing a penetration test or higher-level security audit of your
SQL Server systems, there's one test you must not miss. It seems obvious,
but many people overlook it: SQL Server password testing. Given the
inherent weaknesses of SQL Server authentication compared with the more
secure Windows authentication, you should especially test for password
flaws if you're running in mixed mode. Password testing will help you
determine how easily others can break into your database and help you
ensure SQL Server users are being responsible with their accounts.
To get things rolling, you need to determine which systems are available to
test. You may know your environment like the back of your hand, but it
doesn't hurt to ferret out servers you may have forgotten or those
someone else connected to the network. You should at least run SQLPing2,
but I highly recommend SQLRecon to find SQL instances you might not
otherwise be able to discover. Both tools are downloadable at Chip
Andrews' site.
In the figure below, you'll see how SQLPing2 discovered various SQL Server
systems and determined that one of the systems has a blank sa password.
This is SQL Server password cracking at the most basic level.

SQLPing2: Discovering a blank sa password

SQLPing2 can also run dictionary attacks against SQL Server. This is as
simple as loading your own user account and password lists, as shown in
the following figure.

SQLPing2: Running dictionary attacks


Another free tool, Cain and Abel, allows you to dump and crack SQL Server
hashes, as shown in the following figure:

Cain and Abel: Dumping and cracking SQL Server hashes
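To see what these tools are doing under the hood, here is a hedged sketch of a dictionary attack against a SQL Server 2000-style password hash. It assumes the commonly documented pwdencrypt layout (a 0x0100 header, a 4-byte salt, then SHA-1 of the UTF-16LE password plus salt, then SHA-1 of the uppercased password plus salt); verify that layout against your own version before relying on it, and the function names are our own:

```python
import hashlib
import os

def make_sql2000_hash(password: str, salt: bytes = None) -> bytes:
    # Assumed pwdencrypt layout: 0x0100 header, 4-byte salt,
    # SHA-1(UTF-16LE password + salt), SHA-1(UTF-16LE UPPERCASE password + salt).
    salt = salt if salt is not None else os.urandom(4)
    case_hash = hashlib.sha1(password.encode("utf-16-le") + salt).digest()
    upper_hash = hashlib.sha1(password.upper().encode("utf-16-le") + salt).digest()
    return b"\x01\x00" + salt + case_hash + upper_hash

def dictionary_attack(stored_hash: bytes, wordlist):
    # Attack the case-insensitive half first -- it shrinks the effective
    # keyspace, which is exactly why that second hash weakens the scheme.
    salt = stored_hash[2:6]
    upper_target = stored_hash[26:46]
    for word in wordlist:
        candidate = hashlib.sha1(word.upper().encode("utf-16-le") + salt).digest()
        if candidate == upper_target:
            return word  # case-insensitive match; exact case can be brute-forced next
    return None
```

This is also why dictionary quality matters: the attack only ever finds words that are in the list you feed it.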


On the commercial side, NGSSoftware's NGSSQLCrack product is a good
tool for performing both dictionary and brute-force password cracking.
There's also Application Security's AppDetective, which comes with built-in
password cracking functionality as shown in the following figure:

AppDetective: Built-in password cracking


It's important to remember that SQL Server password cracking shouldn't
be taken lightly. Treat this as a formal penetration test or audit: get the
approval of management and carefully plan things out. You don't want to
create trouble. Speaking of that, there are a few downsides to password
cracking to keep in mind:

Password cracking can eat up valuable system resources, including CPU time, memory and network
bandwidth -- literally to the point of creating a denial-of-service attack on the system.
Dictionary and brute-force attacks can take a lot of time -- something you may not have, especially if you
can only test your systems during a certain window of time.
Dictionary attacks are only as good as the dictionary you're using, so make sure you've got reliable
dictionaries at your disposal. I have found the following to be good resources:

o http://packetstormsecurity.nl/Crackers/wordlists
o ftp://ftp.ox.ac.uk/pub/wordlists
o ftp://ftp.cerias.purdue.edu/pub/dict
o http://www.outpost9.com/files/WordLists.html
o http://www.elcomsoft.com/prs.html#dict

Finally -- and perhaps most importantly -- make sure you follow up on your
findings. That may mean sharing your findings with upper management,
tweaking your password policy and making others aware that they need to
be more security conscious.
About the author: Kevin Beaver is an independent information security
consultant, author and speaker with Atlanta-based Principle Logic, LLC. He
has more than 18 years of experience in IT and specializes in performing
information security assessments. Kevin has written five books including
"Hacking For Dummies" (Wiley), "Hacking Wireless Networks For
Dummies," and "The Practical Guide to HIPAA Privacy and Security
Compliance" (Auerbach). He can be reached at
kbeaver@principlelogic.com.
More information from SearchSQLServer.com

Tip: Ten hacker tricks to exploit SQL Server systems
Tip: Tool to configure and lock down SQL Server services
Tip: Using Metasploit for real-world security tests


Moving Database Files Detach/Attach or ALTER DATABASE?


By Jonathan Kehayias, 2009/05/27
At times it can be necessary to move the data and/or log files from one location to another on the same
SQL Server. There are two ways to go about this task: detaching the database from the SQL Server
instance, moving the files to the new location in the operating system, and then reattaching the database to
the SQL Server instance; or using ALTER DATABASE with the MODIFY FILE option to move the files
through a metadata switch, taking the database offline, moving the files in the operating system, and then
bringing the database back online. Both accomplish the same task, but there are a number of reasons why
the ALTER DATABASE method can make more sense for doing this kind of task.
First, let's look at the syntax of both operations. Using the AdventureWorks database as an example, to
move the database files from their current location to a new one by detaching the database, issue the
following TSQL statement:

EXEC sp_detach_db N'AdventureWorks'


After the database is detached, the data files can be moved to their new location and the database can then
be attached to the SQL instance with the following TSQL statement:

EXEC sp_attach_db N'AdventureWorks',


'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\AdventureWorks_Data.mdf',
'c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\AdventureWorks_Log.LDF'
This probably isn't news to many people, since this is how moving databases has been performed in SQL
Server for a long time. In fact, there are Microsoft Knowledge Base articles, covering versions through SQL
Server 2005, that show this as an appropriate method to move database files.
However, there are a number of problems that can be introduced by using this legacy, soon-to-be-deprecated
method in SQL Server 2005 and SQL Server 2008. The sp_attach_db topic in the
Books Online carries the following common warning for features that will be removed in the future:
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new
development work, and plan to modify applications that currently use this feature. We recommend that you
use CREATE DATABASE database_name FOR ATTACH instead. For more information, see CREATE
DATABASE (Transact-SQL).
To move the data files for AdventureWorks using the more current ALTER DATABASE
method, you first need to identify the logical filenames associated with the database:

select name, physical_name


from sys.master_files
where database_id = db_id('AdventureWorks')
Once the filename and physical_name have been determined, the database can be moved using ALTER
DATABASE with the MODIFY FILE command as follows:

ALTER DATABASE AdventureWorks


MODIFY FILE (NAME = AdventureWorks_Data, FILENAME =
'D:\SQLData\AdventureWorks_Data.mdf');
ALTER DATABASE AdventureWorks
MODIFY FILE (NAME = AdventureWorks_Log, FILENAME =
'D:\SQLData\AdventureWorks_Log.ldf');
Once you have run the above statements, to complete the move, set the database offline:

ALTER DATABASE AdventureWorks SET OFFLINE


and then move the data files to the new location, then bring the database back online:

ALTER DATABASE AdventureWorks SET ONLINE


So why exactly is this important, and what difference does it really make? Well, there are a number of things
that can be affected by the use of attach/detach that are not affected when using ALTER DATABASE. For
example, if your database uses Service Broker, detach/attach disables Service Broker on the
database, whereas with ALTER DATABASE MODIFY FILE, Service Broker remains enabled. Re-enabling
Service Broker for the database requires exclusive access, which means that you will have to kick any
active connections out of the database to run ALTER DATABASE with SET ENABLE_BROKER once you
realize that there is a problem. In addition, if you have enabled TRUSTWORTHY for the database,
for SQLCLR or cross-database ownership chaining, this is disabled by attach/detach where it is not by
ALTER DATABASE MODIFY FILE. The reason for this is security. When you attach a database, it may not
be from a trusted source, and for this reason TRUSTWORTHY is always disabled upon attaching the
database, making it necessary for a DBA to reset this flag to mark the database as trusted.
While it is possible to still move a database to a different file system location using detach/attach, there are
potential unplanned consequences to doing so. For expedience and stability of your application/database,
ALTER DATABASE should be the preferred method of moving the database inside of the same SQL
Instance.
