Tips for Using Performance Monitor
Password cracking tools for SQL Server
Moving Database Files: Detach/Attach or ALTER DATABASE?
Don't put DAT or DLT tape drives, CD-ROM drives, scanners, or other non-hard disk
devices on the same I/O controllers that connect to your hard disk arrays. In addition,
don't put hard disks of different speeds on the same I/O controller. Mixing different
devices on the same I/O controller slows the faster devices. Always put slower devices on
their own I/O controller. [6.5, 7.0, 2000] Added 8-29-2000
NTFS-formatted partitions should not exceed 80% of their capacity. For example, if
you have a 20GB drive, it should never hold more than 16GB of data. Why? NTFS needs room
to work, and when you exceed 80% capacity, NTFS becomes less efficient and I/O can suffer
for it. You may want to create a SQL Server alert to notify you when your arrays exceed
80% of their capacity so you can take immediate action to correct the problem. [6.5, 7.0,
2000]
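One way to feed such an alert is to check free space from a scheduled job. A minimal sketch, assuming the undocumented xp_fixeddrives procedure (present in SQL Server 7.0 and 2000) and a hypothetical 4,096MB floor, which is the 20% headroom on a 20GB drive:

```sql
-- Capture free space (in MB) per drive into a temp table.
-- xp_fixeddrives is undocumented but available in 7.0/2000.
CREATE TABLE #drive_space (drive char(1), free_mb int)

INSERT #drive_space (drive, free_mb)
EXEC master.dbo.xp_fixeddrives

-- Raise a logged error (which a SQL Server Agent alert can trap)
-- if any drive drops below the hypothetical 4,096MB floor.
IF EXISTS (SELECT * FROM #drive_space WHERE free_mb < 4096)
    RAISERROR ('A drive has crossed the 80%% capacity threshold.', 16, 1) WITH LOG

DROP TABLE #drive_space
```

Scheduled as a SQL Server Agent job, this turns the 80% rule into an automatic notification rather than something you have to remember to check.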
Avoid locating read-intensive and write-intensive activity on the same drive or
array. For example, don't locate an OLTP and an OLAP database on the same physical
device. The same goes for mixing heavily random and heavily sequential activity
on the same device. Whenever a drive or array has to switch back and forth between
activities, efficiency is lost. [6.5, 7.0, 2000] Added 8-14-2000
At the very minimum, your server should have at least one 100Mbps network card,
and perhaps two. Two cards can be used to increase network throughput and to offer
redundancy. In addition, the network card(s) should be connected to full-duplex switched
ports for best performance. [6.5, 7.0, 2000]
Be sure that the network card(s) in your server are set to the same duplex
level (half or full) as the switched ports they are connected to (assuming they are
connected to a switch, not a hub). If there is a mismatch, the server may still be
able to connect to the network, but network performance can be significantly impaired.
[6.5, 7.0, 2000]
Assuming SQL Server is running on a dedicated server (for best performance, it should
be), limit the number of network protocols installed on the server. NT Server
allows multiple network protocols (NetBEUI, NWLink, TCP/IP, DLC, AppleTalk, etc.) to be
installed on a server. Unnecessary network protocols increase overhead on the server and
send out unnecessary network traffic. For the best overall performance, only install
TCP/IP on the server, and use the SQL Server TCP/IP Network Library to communicate to
clients. [6.5, 7.0, 2000]
utilization of the server. If your server is not CPU bound, then you should not be worried
by this as it will not affect SQL Server's performance.
If you are CPU bound but do not want to turn the counters off, Microsoft has a
suggested registry change you can make that will alleviate most of the CPU hit. To find
out more about this registry change, search www.microsoft.com for the article "MS
Windows NT Server 4.0 Enterprise File Server Scalability and Performance". This article
covers how to make the necessary registry change by adding the
"LargeIrpStackLocations" registry key. [6.5, 7.0, 2000]
*****
Don't use the "diskperf -ye" command for hardware-based RAID. The "-ye" option is
designed for NT Server's software-based RAID. [6.5, 7.0, 2000]
*****
If your server is performance bound, you have done everything you can
think of to boost performance, and you are still having performance problems, you can
gain a little more performance by starting SQL Server with the -x startup
parameter. This option turns off CPU time and cache-hit ratio statistics,
reducing overhead just a little. It will, of course, not let you use Performance
Monitor to its full potential. [6.5, 7.0, 2000] More info from Microsoft
*****
In virtually all cases, use the Physical Disk counters, not the Logical Disk counters,
when monitoring I/O activity. The Logical Disk counters will not always provide accurate
data, especially when you are using RAID arrays. [6.5, 7.0, 2000]
*****
Periodically collect performance data on your SQL Servers, and then save this
data in a spreadsheet or database so that you can determine trends in your servers'
performance.
For example, you might collect data on your servers on a daily basis, sampling
every 600 or 900 seconds, then each day dump this data into a database for
trend analysis. While this can be a lot of work, it will provide solid data you can use to
plan future hardware expansions. [6.5, 7.0, 2000]
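A minimal sketch of such a trend repository -- the table and column names here are hypothetical, and the rows could be loaded from Performance Monitor log exports or from a scheduled query against master.dbo.sysperfinfo (SQL Server 7.0/2000):

```sql
-- Hypothetical repository for periodic counter samples.
CREATE TABLE PerfTrend (
    sample_time   datetime     NOT NULL,  -- when the sample was taken
    server_name   sysname      NOT NULL,  -- which SQL Server it came from
    object_name   varchar(128) NOT NULL,  -- e.g. 'SQLServer:Buffer Manager'
    counter_name  varchar(128) NOT NULL,  -- e.g. 'Page reads/sec'
    cntr_value    int          NOT NULL   -- raw counter value
)

-- Example trend query: average value per counter per day.
SELECT object_name, counter_name,
       CONVERT(char(10), sample_time, 120) AS sample_day,
       AVG(cntr_value) AS avg_value
FROM PerfTrend
GROUP BY object_name, counter_name, CONVERT(char(10), sample_time, 120)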
*****
Take advantage of SQL Server's ability to create SQL Server Performance Condition
Alerts. You can create alerts that fire when performance monitor conditions you
set are reached. For example, if you want to know when the number of SQL Server user
connections exceeds 500, you can create such an alert, and when it fires, SQL Server can
e-mail you the alert message. Create these alerts using SQL Server Enterprise
Manager.
In many ways, these alerts are similar to the alerts you can create with
Performance Monitor. But you will find that the ones you create using Enterprise Manager
are easier to set up and are more robust. [7.0, 2000]
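The same alert can also be scripted rather than clicked through Enterprise Manager. A sketch using msdb.dbo.sp_add_alert -- the alert and operator names here are made up, and the performance-condition string follows the 'Object|Counter|Instance|Comparator|Value' format:

```sql
-- Fire when user connections exceed 500.
EXEC msdb.dbo.sp_add_alert
    @name = N'User connections over 500',   -- hypothetical alert name
    @performance_condition = N'SQLServer:General Statistics|User Connections||>|500'

-- Attach an e-mail notification; 'DBA' is a hypothetical operator
-- that must already exist in msdb.
EXEC msdb.dbo.sp_add_notification
    @alert_name = N'User connections over 500',
    @operator_name = N'DBA',
    @notification_method = 1    -- 1 = e-mail
```

Scripting the alert makes it easy to roll the same threshold out to several servers.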
*****
While most DBAs are familiar with the Performance Monitor tool included with Windows
NT Server, many are not aware of the "service" version of this tool, called "monitor.exe",
which is available with the Windows NT 4.0 Resource Kit. This version of the
Performance Monitor runs as a service, not as a foreground application. Because it is
a service, it can be started and stopped from a command line. This provides the additional
benefit of allowing the service to be automatically started and stopped by using the NT
Server AT scheduler, or a similar scheduling tool. This way, you can schedule performance
data to be collected automatically when you want. In addition, this service version of
Performance Monitor uses fewer server resources than the foreground version. [6.5, 7.0]
*****
When performance tuning some SQL Server-based applications, it is handy to be
able to measure such things as the number of invoices entered per hour, the number
of checks written per hour, or perhaps the number of times a particular stored procedure
has run. You can monitor these, and almost any other type of performance indicator,
using Performance Monitor. SQL Server includes a Performance Monitor object called the
User Settable object, which is a set of ten performance monitor counters that you can
customize for your own purposes. In other words, you can create up to ten of your own
SQL Server Performance Monitor counters.
Creating your own counters is not hard if you know how to program stored procedures
using Transact-SQL. SQL Server includes ten system stored procedures (named
sp_user_counter1 through sp_user_counter10), and you can assign any integer value
you want to any of the ten available counters. Generally, you will call
one of these special stored procedures from a stored procedure you write that
calculates the value you want displayed for the Performance Monitor counter.
One important thing to keep in mind when creating your own SQL Server Performance
Monitor counters is that you don't want the stored procedure that feeds your
counter to itself be a burden on SQL Server's performance. Keep
the Transact-SQL code in your stored procedure as simple as possible. [6.5, 7.0, 2000]
More info from Microsoft Added 8-28-2000
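A minimal sketch of such a wrapper -- the OpenOrders table and the procedure name are hypothetical; only sp_user_counter1 is a real SQL Server system procedure:

```sql
-- Hypothetical wrapper: push the current count of open orders into
-- User Settable counter #1. Schedule it to run periodically, for
-- example as a SQL Server Agent job.
CREATE PROC usp_refresh_open_order_counter
AS
BEGIN
    DECLARE @open_orders int

    -- Keep the query cheap: the counter procedure itself must not
    -- become a performance burden.
    SELECT @open_orders = COUNT(*)
    FROM OpenOrders              -- hypothetical application table
    WHERE status = 'OPEN'

    -- The value appears under the User Settable object,
    -- "User counter 1" instance, in Performance Monitor.
    EXEC sp_user_counter1 @open_orders
END
```

Once the job is running, the counter can be charted or alerted on just like any built-in SQL Server counter.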
*****
As you probably know, each SQL Server connection is associated with a SPID (SQL
Server Process ID). Current connection information (including the SPID) can be seen
using Enterprise Manager's Current Activity window. On rare occasions, a user
connection can virtually tie up an entire server's CPUs if something is wrong
with the connection. In other cases, a long running query may cause a performance
problem. Unfortunately, the Current Activity Window does not always show this is
happening with individual user connections, although you know there is something wrong
with the server because your CPU cycles are being used up, as can be seen using
Performance Monitor and monitoring the System Object: % Total Processor Time counter.
So if you know that a user connection is causing problems, but you can't identify it using
the Current Activity window, here's a method you can use to identify which user
connection is causing the problem.
Your first step is to verify that the SQL Server process (sqlservr), and not some other
process, is responsible for the excessive CPU use. Remember, the "sqlservr" process is
made up of many different threads, each one generally (but not always) representing a
specific user connection.
The way to do this is to use the Performance Monitor and go to the Process Object: %
Processor Time counter using the Chart View. When you do this, there will be many
instances of running processes on your server listed in the Instance list box. What you
need to do is to select all of the instances from the Instance list box except the _Total
instance (which just sums the total of all the other instances). Once the instances are
displayed in the Chart View, you need to change the format of the chart from Graph to
Histogram. This is done from Options|Chart on the Performance Monitor's main
menu. Now, you should be able to easily see which process is causing the problem.
(Note: if "Idle" is the highest rated process, then not much is going on with your
server.) If SQL Server is the problem, then the instance with the highest % Processor
Time should be "sqlservr". If this is the case, proceed with the rest of this tip;
otherwise, you need to find out what other process on your server is causing you
problems.
Assuming the "sqlservr" process is the guilty party, the next step is to identify which user
connection (SPID) is causing the problem. To do this, use Performance Monitor and go to
the Thread Object: % Processor Time counter using the Chart View. When you do this,
there will be many instances of running threads on your server listed in the Instance list
box. What you need to do is to select all of the instances that begin with "sqlservr" from
the list box. Once the instances are displayed in the Chart View, you need to change the
format of the chart from Graph to Histogram. This is done from Options|Chart on
the Performance Monitor's main menu. Now, you should be able to easily see which
thread is eating up your CPU resources. While we have now identified which thread (and
its Performance Monitor Instance Number) is causing the problem, we still haven't figured
out which user connection is causing the problem. Although we have identified a specific
Performance Monitor Instance Number as being the culprit, there is no correlation
between a Performance Monitor Instance Number and a SPID in SQL Server. If there was,
our job would be much easier. Unfortunately, it will take us two more steps to make this
correlation. So here goes.
Our next step is to take the Performance Monitor Instance Number we just identified
and match it with something called a KPID (Kernel Process ID), also known as an ID
Thread. KPID and ID Thread are exactly the same thing: a system-wide identifier
that uniquely identifies a thread in NT Server. To do this, use Performance Monitor and go
to the Thread Object: ID Thread counter using the Report View. When you do this, there
will be many instances of running threads on your server listed in the Instance list box.
What you need to do is to select all of the instances that begin with "sqlservr" from the
list box, similar to what you did in the previous step. The reason for using the Report
View is that you now need to find which ID Thread (KPID) matches the Performance
Monitor Instance Number you identified in the previous step (the Instance Numbers
appear in the column headings of the report). Now that we know this, we
are only one step away from finding our culprit.
Our last step is to correlate the Thread ID (KPID) identified in the last step to the SPID.
To do this, run the following query in Query Analyzer:
USE master
SELECT spid, kpid, hostname, dbid, cmd FROM master..sysprocesses
This query will display a report listing both SPIDs and KPIDs, and this is where you make
the last correlation. Match the Thread ID (KPID) found in the last step to the
corresponding SPID, and you now know which SQL Server user
connection is causing the problem.
This has been a lot of work, but doing this has helped me to resolve more than one
performance-related problem. [6.5, 7.0] Added 9-30-2000
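Once you have the offending SPID, you can go one step further and look at the last statement that connection submitted. A sketch of both final steps -- the KPID (1234) and SPID (57) used here are made-up placeholders for the values you identified above:

```sql
-- Filter sysprocesses down to the thread identified in
-- Performance Monitor; 1234 is a hypothetical KPID.
SELECT spid, kpid, hostname, program_name, cmd
FROM master..sysprocesses
WHERE kpid = 1234

-- Show the last batch sent by that connection;
-- 57 is the hypothetical SPID returned above.
DBCC INPUTBUFFER (57)
```

Seeing the actual batch usually tells you immediately whether you are looking at a runaway query or a misbehaving application connection.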
threads. The advantage of using them is that they have less overhead when being
switched between CPUs in a multiple CPU server.
This benefit does not show up unless the server's CPUs are running at near maximum, or
the Performance Monitor System: Context Switches/sec counter nears 8,000 switches
per second for continual periods (over 10 minutes), so don't select this option unless
your server's CPUs are nearly maxed out. You will also want to carefully test (before and
after) the effect of this setting on your server's performance. This option is available
under SQL Server's "Properties", "Processor" tab. [7.0, 2000] More info from
Microsoft Added 9-22-2000
Once you know the ratio of disk write to reads, you now have the information you need
to help you determine an optimum fillfactor for your indexes.
Before using this counter, be sure to manually turn it on by going to the NT command
prompt, entering "diskperf -y", and then rebooting your server. This is
required to turn on the disk counters for the first time. [6.5, 7.0, 2000]
There is no hard and fast "correct" number for this counter as it measures the actual
traffic. To help you decide if your server has a network bottleneck, one way to use this
number is to compare it with the maximum traffic supported by the network connection
your server is using. Also, this is another important counter to watch over time. It is
important to know if your network traffic is increasing regularly. If it is, then you can use
this information to help you plan for future hardware needs. [6.5, 7.0, 2000] Added 9-5-2000
*****
If you think that you have a network bottleneck, it is easy to check using the Network
Segment Object: % Network Utilization counter. This counter provides you with what
percentage of the bandwidth is being used by the network connection your server is
using. This is not the amount of bandwidth being sent to and from your server, but the
total bandwidth being used on the connection the network card is attached to.
This connection could be of many different types, including a shared hub or a switched
port running at half-duplex or full-duplex. The connection might be 10Mbps, 100Mbps, or
even 1Gbps. Given this, the results you receive from the counter must be interpreted in
light of the type of connection you have. Ideally, you will want your server's network
connection on its own dedicated switch port for maximum performance. [6.5, 7.0, 2000]
*****
If you want to find out how much data is being sent back and forth from your
server to the network, use the Server Object: Bytes Received/sec and the Server
Object: Bytes Transmitted/sec counters. These counters will help you find out how busy
your server actually is over the network, and are good counters to watch over time. [6.5, 7.0,
2000]
presented in pages, so you will have to take this number and multiply it by 8K (8,192) to
determine the amount of RAM in K that is being used.
Generally, this number should come close to the total amount of RAM in your
computer, assuming you are devoting the server to SQL Server. More precisely, it should
be close to the total amount of RAM in the server, less the RAM used by NT, SQL Server
itself, and any utilities you have running on the server.
If the amount of RAM devoted to the data cache is much smaller than you would expect,
then you need to do some investigating to find out why. Perhaps you aren't allowing SQL
Server to dynamically allocate RAM. Whatever the cause, you need to find a solution, as
the amount of data cache available to SQL Server can significantly affect SQL Server's
performance. [6.5, 7.0, 2000]
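The pages-to-RAM arithmetic can be done in a query. A sketch against master.dbo.sysperfinfo (available in SQL Server 7.0 and 2000) -- the exact counter name varies between builds, so treat 'Database pages' as an assumption to verify on your server:

```sql
-- Convert the data cache page count into megabytes.
-- Each SQL Server 7.0/2000 page is 8KB, so pages * 8 / 1024 = MB.
SELECT object_name, counter_name,
       cntr_value            AS pages,
       cntr_value * 8 / 1024 AS cache_mb
FROM master.dbo.sysperfinfo
WHERE counter_name = 'Database pages'   -- assumed counter name
```

Comparing cache_mb against the server's physical RAM (less the OS and SQL Server overhead described above) tells you whether the data cache is as large as you expect.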
*****
If your databases are suffering from deadlocks, you can track them by using the SQL
Server Locks Object: Number of Deadlocks/sec counter. But unless this number is relatively
high, you won't see much here, because the measure is per second, and it takes quite a
few deadlocks per second to be noticeable.
But still, it is worth checking out if you are having a deadlock problem. Better yet, use
the Profiler's ability to track deadlocks. It will provide you with more detailed information.
What you might consider doing is to use the Number of Deadlocks/sec counter on a
regular basis to get the "big" picture, and if you discover deadlock problems, then use
the Profiler to "drill" down on the problem for a more detailed analysis. [6.5, 7.0, 2000]
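If you do spot deadlocks, you can also ask SQL Server to write the participants to the error log. A sketch using trace flag 1204 (deadlock detail); the -1 argument, available in 7.0 and 2000, applies the flag to all connections, while under 6.5 you would set 1204 as a startup trace flag instead:

```sql
-- Turn on deadlock reporting for all connections. Details of each
-- deadlock (victim, resources, statements) are written to the
-- SQL Server error log.
DBCC TRACEON (1204, -1)

-- Verify which trace flags are currently active.
DBCC TRACESTATUS (-1)
```

This gives you the "who and what" of each deadlock without leaving a Profiler trace running continuously.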
*****
If your users are complaining that they have to wait for their transactions to
complete, you may want to find out if object locking on the server is contributing to this
problem. To do this, use the SQL Server Locks Object: Average Wait Time (ms) counter. You
can use this counter to measure the average wait time for a variety of lock types, including
database, extent, key, page, RID, and table.
If you can identify one or more types of locks causing transaction delays, then you will
want to investigate further to see if you can identify what specific transactions are
causing the locking. The Profiler is the best tool for this detailed analysis. [6.5, 7.0,
2000]
*****
While table scans are a fact of life, and sometimes faster than index seeks, generally it
is better to have fewer table scans than more. To find out how many table scans
your server is performing, use the SQL Server Access Methods Object: Full Scans/sec.
Note that this counter is for an entire server, not just a single database. One thing you
will notice with this counter is that there often appears to be a pattern of scans occurring
periodically. In many cases, these are table scans SQL Server performs on a regular
basis for internal use.
What you want to look for are the random table scans that represent your application. If
you see what you consider to be an inordinate number of table scans, then break out the
Profiler and Index Tuning Wizard to help you determine exactly what is causing them,
and whether adding any indexes can help reduce them. Of course, SQL Server may just be
doing its job well, performing table scans instead of using indexes because it is simply
more efficient. [6.5, 7.0, 2000]
*****
If you suspect that your backup or restore operations are running at suboptimal
speeds, you can help verify this by using the SQL Server Backup Device Object:
Device Throughput Bytes/sec counter. This counter will give you a good feel for how fast
your backups are performing. You will also want to use the Physical Disk Object: Avg. Disk
Queue Length counter to help corroborate your suspicions. Most likely, if you are having
backup or restore performance issues, it is because of an I/O bottleneck. [6.5, 7.0, 2000]
*****
If you are using transactional replication, you may want to monitor the latency of the
Log Reader as it moves transactions from a database's transaction log into the
distribution database, and also the latency of the Distribution Agent as it moves
transactions from the distribution database to the Subscriber database.
The total of these two figures is the amount of time it takes a transaction to get from the
publication database to the subscriber database.
The counters for these two processes are the: SQL Server Replication LogReader:
Delivery Latency counter and the SQL Server Replication Dist.: Delivery Latency counter.
If you see a significant increase in the latency for either of these processes, this should
be a signal to you to find out what new or different has happened to cause the increased
latency. [6.5, 7.0, 2000]
Windows 2000
Since Windows 2000 is so new, you will need to ensure that all the hardware you run
it on, and the related drivers, have been tested for use with Windows 2000. Using
an outdated or buggy driver can wreak havoc with performance. [7.0, 2000]
*****
If you want to upgrade a current server running NT Server 4.0, you will be best off if
you install Windows 2000 from scratch, instead of using the upgrade procedure
included with Windows 2000. Ideally, reformat all of the drives and start completely
fresh. Upgrading can introduce hard-to-identify performance problems, such as bad or
outdated drivers not being upgraded, fragmented drives not being defragmented, and so on.
[7.0, 2000]
*****
Install Windows 2000 as a stand-alone server, not as a domain controller. Domain
controllers have extra overhead and perform functions not required by SQL Server. Along
the same lines, don't install any unnecessary server components, such as DNS, DHCP,
etc, on your SQL Server. The goal is to dedicate all of the server's power to SQL Server.
[7.0, 2000]
*****
Windows 2000 supports a larger MTU (maximum transmission unit) size
than Windows NT 4.0, ranging from 1.5KB to 9KB. The larger the MTU,
the fewer packets have to be sent over the network, reducing both server and
network overhead. To take advantage of this new Windows 2000 feature, you will have to
use a network card that supports the larger 9KB MTU, and configure this
setting at the network card. [7.0, 2000]
*****
Defragment the drives or arrays regularly using the built-in Disk Defragmenter
(part of the Computer Management Console), or using a third-party tool designed for
Windows 2000. This fixes disk fragmentation and boosts disk I/O. [7.0, 2000]
*****
Set the "Application Response" setting to "Optimize Performance for
Background Services". This ensures that all applications running on Windows 2000
(foreground and background) get an equal shot at the CPU. Set this option by going to
the "System" icon in the "Control Panel", then select the "Advanced" tab, and then select
the "Performance Options" button. [7.0, 2000]
*****
Format all the disk arrays on your server using NTFS 5.0, the new NTFS file system
format included with (but not required by) Windows 2000. The new format includes some
performance enhancements. [7.0, 2000] Added 8-9-2000
*****
Avoid using NTFS data file encryption on SQL Server database and log files.
While the performance hit is minimal on small, lightly used databases, it is noticeable on
larger, busy databases. [7.0, 2000] Added 8-9-2000
*****
When running SQL Server 7.0 or SQL 2000 under Windows 2000, the ideal cluster size
when formatting NTFS partitions is 64K. If your hard disk is larger than 32MB, this is
the default choice selected by Windows 2000 when formatting drives. [7.0, 2000] Added
9-21-2000
Windows 98/ME
If you are running SQL Server under Windows 98/ME, consider configuring the swap
file to a constant swap file size, instead of the dynamic swap file used by default in
Windows. This reduces overhead because Windows no longer has to resize the swap file.
In addition, it helps to reduce hard disk fragmentation.
If you decide to do this, you will want to defrag your hard disk first to ensure contiguous
hard disk space for the swap file. Choose a fixed swap file size at least twice as
large as the amount of RAM in your computer. If SQL Server is the only application on
this computer, this size should be adequate. To change the swap file size, go to
Control Panel | System | Performance | Virtual Memory. If you have more than one hard
drive, locate the swap file on the fastest drive.
*****
If you are running SQL Server under Windows 98/ME, and the computer has at least
16MB of RAM (and it should have at least 64MB if you want decent performance),
consider changing the computer's role from Desktop Computer to Network
Server. This setting allows more data to be cached in RAM, boosting performance. To
change this option, go to Control Panel | System | Performance | File System.
actually being used (based on the Performance Monitor counter), and with a maximum
size of 50MB larger than the minimum size.
The PAGEFILE.SYS settings can be viewed and changed by going to the "Control Panel",
selecting the "Performance" tab, and then clicking the "Virtual Memory" button. If you
change the virtual memory settings, you will have to reboot your server for the new
settings to take effect. [6.5, 7.0, 2000]
*****
Remove all nonessential services and network protocols from your SQL Server.
These can include, but are not limited to: the Web server service, FTP server service,
Gopher, SMTP, WINS, DHCP, Alerter, ClipBook Server, Messenger, Network DDE,
Directory Replicator, Schedule, and Spooler. They also include unused network protocols, such
as DLC, AppleTalk, NWLink, and NetBEUI. Each one you remove frees up RAM and CPU
cycles, making them available to SQL Server. Of course, if you really need one or more
of these services or protocols, then don't disable or unload them. The ones listed above
aren't required for a dedicated SQL Server. [6.5, 7.0, 2000]
*****
Configure NT Server 4.0 to be a member server, not a Primary Domain Controller
(PDC) or a Backup Domain Controller (BDC). The task of being a Domain Controller
drains away resources from SQL Server. [6.5, 7.0, 2000]
*****
Don't put SQL Server program, database, or log files on compressed NTFS partitions.
The performance is terrible. In fact, make it a rule not to use NTFS compression for any
files other than rarely accessed archive data. [6.5, 7.0, 2000]
05.09.2006
SQLPing2 can also run dictionary attacks against SQL Server. This is as
simple as loading your own user account and password lists, as shown in
the following figure.
Password cracking can eat up valuable system resources including CPU time, memory and network
bandwidth literally to the point of creating a denial-of-service attack on the system.
Dictionary and brute-force attacks can take a lot of time -- something you may not have, especially if you
can only test your systems during a certain window of time.
Dictionary attacks are only as good as the dictionary you're using, so make sure you've got reliable
dictionaries at your disposal. I have found the following to be good resources:
o http://packetstormsecurity.nl/Crackers/wordlists
o ftp://ftp.ox.ac.uk/pub/wordlists
o ftp://ftp.cerias.purdue.edu/pub/dict
o http://www.outpost9.com/files/WordLists.html
o http://www.elcomsoft.com/prs.html#dict
Finally -- and perhaps most importantly -- make sure you follow up on your
findings. That may mean sharing your findings with upper management,
tweaking your password policy and making others aware that they need to
be more security conscious.
About the author: Kevin Beaver is an independent information security
consultant, author and speaker with Atlanta-based Principle Logic, LLC. He
has more than 18 years of experience in IT and specializes in performing
information security assessments. Kevin has written five books including
"Hacking For Dummies" (Wiley), "Hacking Wireless Networks For
Dummies," and "The Practical Guide to HIPAA Privacy and Security
Compliance" (Auerbach). He can be reached at
kbeaver@principlelogic.com.