
What are the Differences Between SQL Server 2000 and SQL Server 2005? Part I
I've been asked this question every time there's a new version, and yet I've never been able to
give what I think is a nice, concise, logical answer that satisfies the asker. Probably it's a lack of my
ability to easily form words in my mouth and get them out in the proper order, so I decided it might
make some sense to do this on paper (metaphorically speaking) and help others out.
Like many of you, I usually get this question from someone outside of SQL Server: a Windows admin,
a network guy, etc., someone who has little contact with SQL Server. Or maybe it's someone who's
been stuck administering a SQL Server instance.
In any case, I wanted to try to explain this concisely for the non-DBAs. As I began this project,
however, I soon realized that it's not easy to give a good general answer. As with everything
else in SQL Server, it seems that "it depends" is the best general answer, so I broke this up into a
few areas. This part will look at the administrative differences and the next will cover more of the
development differences.

The Administrative Differences


Administering a SQL Server instance, to me, means making sure the server service runs efficiently
and is stable and allows clients to access the data. The instance should keep data intact and
function according to the rules of the implemented code while being well maintained.
Or, for the non-DBAs, it means that you are the sysadmin and it just works.
The overall differences are few. Sure, we use Management Studio instead of Enterprise Manager, but
that's not really a big deal. Many of the changes, like being able to change connections for a
query, are superficial improvements that don't represent a substantial change. If you think they
do, you might be in the wrong job.
Security is one area that has seen a very nice improvement. The separation of the schema from the owner
makes administrative changes easier, and that is a big deal because it greatly reduces the chance
that you'll keep an old account active just because it's a pain to change owners on objects. There's also
more granularity and ease of administration in using the schema as another level for assigning
permissions.
Another big security change is the ability to secure your web services using certificates instead of
requiring authentication with a name and password. Add to that the capability to encrypt data and
manage the keys, and you can make a big difference in the overall security of your data. You still have to
carefully ensure your application and access are properly secured, but just the marketing value of
encryption when you hold credit card, financial, or medical data is huge. SQL Server 2000 had no
real security features for data, allowing an administrator to see all data. You could purchase a third
party add-on, but it was expensive and required staff training. Not that you don't need to learn
how SQL Server 2005 handles this, but it should be a skill that most DBAs will learn and be able to bring to your
organization over time.
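
As a rough sketch of what the built-in key management looks like, the statements below create a key hierarchy and encrypt a column; the certificate, key, and table names (SalesCert, CardKey, dbo.Payments) are made up for illustration, and a real design needs careful thought about key backup and permissions.

-- A minimal sketch of SQL Server 2005 column encryption; all object names are hypothetical.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase'
CREATE CERTIFICATE SalesCert WITH SUBJECT = 'Card data protection'
CREATE SYMMETRIC KEY CardKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE SalesCert

-- Encrypt on the way in...
OPEN SYMMETRIC KEY CardKey DECRYPTION BY CERTIFICATE SalesCert
UPDATE dbo.Payments
   SET CardNumberEnc = EncryptByKey(Key_GUID('CardKey'), CardNumber)

-- ...and decrypt on the way out.
SELECT CONVERT(varchar(25), DecryptByKey(CardNumberEnc)) AS CardNumber
  FROM dbo.Payments
CLOSE SYMMETRIC KEY CardKey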
High availability is becoming more and more important to businesses of all sizes. In the past,
clustering or log shipping were your main choices, but both were expensive and required the
Enterprise Edition. This put these features out of the reach of many companies, or at least out of
many DBAs' budgets. With SQL Server 2005, you can now implement clustering, log shipping, or the
new Database Mirroring with the Standard Edition. With the ability of Database Mirroring to use
commodity hardware, even disparate hardware between the principal and mirror servers, this is a
very reasonably priced solution for almost any enterprise.
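
At a high level, setting up a mirror takes only a handful of T-SQL statements on each partner. The sketch below assumes made-up server names (PRINCIPAL01, MIRROR01) and a database called Sales that has already been restored on the mirror WITH NORECOVERY; a production setup also needs endpoint security sorted out and, optionally, a witness for automatic failover.

-- On each partner, create a mirroring endpoint.
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER)

-- On the mirror server (MIRROR01), point at the principal first...
ALTER DATABASE Sales SET PARTNER = 'TCP://PRINCIPAL01:5022'

-- ...then on the principal server (PRINCIPAL01), point at the mirror.
ALTER DATABASE Sales SET PARTNER = 'TCP://MIRROR01:5022'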
There are also online index operations, online restores, and fast recovery in the Enterprise Edition that can
help ensure that you take less downtime. Fast recovery especially can be an important feature,
allowing the database to be accessed as soon as the undo operations start. With a lot of open transactions
when a database is restarted, this can really add up to significant amounts of time. In SQL Server
2000, you had to have a completely recovered, intact database before anyone could access it. With redo/undo
operations sometimes taking a significant amount of time, this could delay the time from Windows
startup to database availability by minutes.
Data sizes always grow, and for most companies performance is always an issue on some server.
With SQL Server 2000, you were limited to 2GB of RAM and 4 CPUs in the Standard Edition. The
number of CPUs hasn't changed, but you can now use as much RAM as the OS allows. There is
also no limit to the database size, not that the 1,048,516 TB limit in SQL Server 2000 was much of a
constraint. Since RAM is usually a limiting factor in the performance of many databases, upgrading
to SQL Server 2005 could be something you can take advantage of. SQL Server 2005 also has more
options and capabilities on the 64-bit platform than SQL Server 2000.

Why Upgrade?
This is an interesting question and one I've been asked quite a bit over the last 18 months since SQL
Server 2005 was released. The short answer is that if SQL Server 2000 meets your needs, then
there's no reason to upgrade. SQL Server 2000 is a strong, stable platform that has worked well for
millions of installations. If it meets your needs, you are not running up against the limits of the
platform, and you are happy with your system, then don't upgrade.
However, there is a caveat to this. First, the support timeline for SQL Server 2000 shows mainstream
support ending next year, in April 2008. I can't imagine that Microsoft wouldn't extend that given
the large number of installations of SQL Server 2000, but with the next version of SQL Server likely
to come out next year, I can see this being the point at which you can no longer call for regular support.
The extended support timeline continues through 2013, but that's an expensive option.
The other consideration is that with a new version coming out next year, you might want to just
start making plans to upgrade to that version even if you're happy with SQL Server 2000. If the plan
is to release a new version every 2-3 years, you'll need to upgrade at least every 5-6 years to
maintain support options.
In any case, be sure that the application you are upgrading, if it's a third party product, is
supported on SQL Server 2005.
Lastly, if you have multiple servers and are considering new hardware for more than one of them, it
might make sense to look at buying one large 64-bit server and performing some
consolidation. I might recommend that you wait for the next version of SQL Server if you are
worried about conflicts, as I have heard rumors of switches to help govern resource usage in
Katmai (SQL Server 2008).
A quick summary of the differences:

Security
  SQL Server 2000: Owner = Schema, hard to remove old users at times.
  SQL Server 2005: Schema is separate. Better granularity in easily controlling security. Logins can be authenticated by certificates.

Encryption
  SQL Server 2000: No options built in; expensive third party options with proprietary skills required to implement properly.
  SQL Server 2005: Encryption and key management built in.

High Availability
  SQL Server 2000: Clustering or Log Shipping require Enterprise Edition. Expensive hardware.
  SQL Server 2005: Clustering, Database Mirroring or Log Shipping available in Standard Edition. Database Mirroring can use cheap hardware.

Scalability
  SQL Server 2000: Limited to 2GB of RAM, 4 CPUs in Standard Edition. Limited 64-bit support.
  SQL Server 2005: 4 CPUs, no RAM limit in Standard Edition. More 64-bit options offer chances for consolidation.

Conclusion
These seem to be the major highlights from my perspective as an administrator. While there are
other improvements, such as the schema changes flowing through replication, I'm not sure that
they represent compelling changes for the non-DBA.
In the next article, I'll examine some of the changes from a developer perspective and see if any of
those give you a reason to upgrade.
And I welcome your comments and thoughts on this as well. Perhaps there are some features I've
missed in my short summary.
What are the Differences Between SQL Server 2000 and SQL Server 2005? Part II
In Part I of this series I looked at the administrative differences, and in this part I'll cover some of the
development differences between the versions. I'm looking to make a concise, short list of things
you can tell a developer who is interested in, but not necessarily knowledgeable about, SQL Server, to
help them decide which version might be best suited to meet their needs.
And hopefully help you decide if an upgrade is worth your time and effort.
One short note here. As I was working on this, it seemed that there are a great many features that I
might put in the BI or security space instead of administration or development. This may not be
comprehensive, but I'm trying to show things from the main database developer's perspective.

The Development Differences


Developing against SQL Server 2005 is in many ways similar to developing against SQL Server 2000. Almost all of the T-SQL
that you've built against SQL Server 2000 will work in SQL Server 2005; it just doesn't take
advantage of the newer features. And there are a great many new extensions to T-SQL that make
many tasks easier, as well as changes in other areas.
One of the biggest changes is the addition of programming with .NET languages, taking
advantage of the CLR being embedded in the database engine. This means that you can write
complex regular expressions, string manipulation, and most anything you can think of that can be
done in C#, VB.NET, or whatever your language of choice may be. There's still some debate over
how much this should be used and to what extent it impacts the performance of your database
engine, but there's no denying this is an extremely powerful capability.
The closest thing to this in SQL Server 2000 was the ability to write extended stored procedures and
install them on the server. However, that meant using C++, with all the dangers of programming in a
low-level language.
However, there are many new extensions to T-SQL that might mean you never need to build a CLR
stored procedure, trigger, or other structure. The main extension for database developers, in my
mind, is the addition of the TRY/CATCH construct and better error information. Error handling has
been one of the weakest parts of T-SQL for years, and this alone allows developers to build much more
robust applications.
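
A minimal sketch of the construct is below; the dbo.Orders table is hypothetical, and the ERROR_xx functions shown are among the new ones that expose details about what went wrong.

BEGIN TRY
    BEGIN TRAN
    DELETE FROM dbo.Orders WHERE OrderDate < '19990101'
    COMMIT TRAN
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN
    SELECT ERROR_NUMBER()  AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage,
           ERROR_LINE()    AS ErrorLine
END CATCH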
There are also many other T-SQL additions: PIVOT, APPLY, and the new ranking and windowing
functions. You might not use these very often, but they come in handy. The same applies to
Common Table Expressions (CTEs), which make some particular problems very easy to solve. The
classic recursion of working through employees and their managers, or menu systems, has been
complex in the past, but with CTEs it is very easy to return in a single query.
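
Here is a minimal sketch of that classic employee/manager recursion; the dbo.Employees table, with EmpID, ManagerID, and EmpName columns, is hypothetical.

WITH OrgChart (EmpID, EmpName, Level)
AS
(
    SELECT EmpID, EmpName, 0
    FROM dbo.Employees
    WHERE ManagerID IS NULL            -- anchor: start at the top of the tree

    UNION ALL

    SELECT e.EmpID, e.EmpName, o.Level + 1
    FROM dbo.Employees e
    INNER JOIN OrgChart o ON e.ManagerID = o.EmpID   -- recursive member
)
SELECT EmpID, EmpName, Level
FROM OrgChart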
One of the other big T-SQL additions is the OUTPUT clause. This allows you to return values from an
INSERT, UPDATE, or DELETE (DML) statement to the calling code. In the OUTPUT clause,
just like in a trigger in SQL Server 2000, you can access the data in the inserted and deleted tables.
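
For example, a minimal sketch against a hypothetical dbo.Prices table returns the old and new values of every row touched by an UPDATE:

UPDATE dbo.Prices
   SET Price = Price * 1.10
OUTPUT inserted.ProductID,
       deleted.Price  AS OldPrice,
       inserted.Price AS NewPrice
 WHERE Category = 'Books'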
One of the programming structures that many developers have gotten more and more exposure to
over the last decade is XML. More and more applications make use of it; it's used in web services,
data transfers, etc. XML is something I see developers excited about, and with SQL Server 2005
there is now a native XML data type, support for schemas, XPath and XQuery, and many other XML
functions. For database developers, there is no longer the need to decompose and rebuild XML
documents to get them in and out of SQL Server. Whether you should is another story, but the
capabilities are there.
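
A minimal sketch of the native type and one of its XQuery methods; the document shape here is invented for illustration.

DECLARE @order xml
SET @order = '<order id="42"><item sku="A100" qty="3"/></order>'

SELECT @order.value('(/order/@id)[1]', 'int')       AS OrderID,
       @order.value('(/order/item/@qty)[1]', 'int') AS Quantity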
There are a couple of other enhancements that developers will appreciate. The new large datatypes,
like varchar(max), allow you to store large amounts of data in a column without jumping through the
hoops of working with the TEXT datatype.
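
A minimal sketch, with a hypothetical dbo.Notes table; the point is that the column behaves like an ordinary varchar, just bigger.

CREATE TABLE dbo.Notes
(
    NoteID int IDENTITY PRIMARY KEY,
    Body   varchar(max)                -- up to 2GB, no TEXT pointer handling
)

UPDATE dbo.Notes
   SET Body = Body + ' (amended)'      -- normal string functions work
 WHERE NoteID = 1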
Auditing is much easier with DDL triggers and event notifications. Event notifications in particular,
allowing you to respond to almost anything that can happen in SQL Server 2005, let you
build some amazing new applications.
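
A minimal DDL trigger sketch that logs table drops; the dbo.AuditLog table (with an xml column for the event data) is hypothetical.

CREATE TRIGGER trg_AuditDropTable
ON DATABASE
FOR DROP_TABLE
AS
    INSERT INTO dbo.AuditLog (EventTime, EventData)
    VALUES (GETDATE(), EVENTDATA())    -- EVENTDATA() returns the event details as xml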
The last enhancement in T-SQL that I think developers will greatly appreciate is ROW_NUMBER(). I
can't tell you how many times I've seen forum posts asking how to get the row number in a result
set, and this function finally provides a simple answer.
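
A minimal sketch against a hypothetical dbo.Orders table:

SELECT ROW_NUMBER() OVER (ORDER BY OrderDate DESC) AS RowNum,
       OrderID,
       OrderDate
FROM dbo.Orders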
There are a number of other areas that developers will find useful. Service Broker, which provides an
asynchronous messaging system, can make SOA applications much easier to develop. Until now,
messaging has been the kind of system that appears easy to build but offers unlimited opportunities for mistakes. Native
web services are also a welcome addition, allowing you to extend your data to a variety of
applications without requiring complex security infrastructures.
Reporting Services has grown tremendously, allowing more flexibility in how you deploy reports to
end users. Integration Services is probably the feature that most requires development skills, as this
ETL tool is now really more of a developer tool than a DBA tool. However, along with the added complexity,
it has grown into an extremely rich and tremendously capable tool.
There are other changes as well: ADO.NET has been enhanced, and Visual Studio has been
tightly integrated with its extensions for various features as well as its influence on the Business
Intelligence Development Studio and the Team System for DB Pros. The Full-Text Search capabilities have
been expanded and work better, allowing integration with third party word-breakers and
stemmers as well as better handling of noise words.

Why Upgrade?
This is an interesting question. As with Part I of this series, I'm not completely sure how to
answer it. As an administrator, if your server is running well, there's no reason to upgrade. As
a developer, however, it's a bit more complicated.
Developers, almost by definition, are looking to change things on a regular basis: they are fixing
things, enhancing them, or rebuilding them. In the first or even the second case, it may
not make much sense to upgrade if your application is working well. In the last case, I'd really
think hard about upgrading, because a rebuild, or re-architecture, takes a lot of time and resources.
If you're investing in a new application, or a new version of an application, then it might make
sense to upgrade and take advantage of the features of SQL Server 2005.
I'm guessing that many of these features will be around through at least the next two versions of
SQL Server. While I can see there being a radical rewrite after Katmai (SQL Server 2008), I can't
imagine that many things won't still be around in the version after that. They may get deprecated
after that, but they should be there for that version, which should see support through 2018 or
2019.
If you are struggling with ETL, trying to implement messaging, or web services, then it also might
make sense to upgrade your database server to SQL Server 2005.
A quick summary of the differences:

Server Programming Extensions
  SQL Server 2000: Limited to extended stored procedures, which are difficult to write and can impact the server stability.
  SQL Server 2005: The incorporation of the CLR into the relational engine allows managed code written in .NET languages to run. Different levels of security can protect the server from poorly written code.

T-SQL Error Handling
  SQL Server 2000: Limited to checking @@ERROR; not much flexibility.
  SQL Server 2005: Addition of TRY/CATCH allows more mature error handling. More ERROR_xx functions can gather additional information about errors.

T-SQL Language
  SQL Server 2000: SQL language enhanced from previous versions, providing strong data manipulation capabilities.
  SQL Server 2005: All the power of SQL Server 2000 with the addition of CTEs for complex, recursive problems, enhanced TOP capabilities, PIVOT/APPLY/ranking functions, and ROW_NUMBER.

Auditing
  SQL Server 2000: Limited support, using triggers to audit changes.
  SQL Server 2005: Robust event handling with EVENT NOTIFICATIONS, the OUTPUT clause, and DDL triggers.

Large Data Types
  SQL Server 2000: Limited to 8K for normal data without moving to the TEXT datatypes. TEXT is hard to work with in programming environments.
  SQL Server 2005: Includes the new varchar(max) types that can store up to 2GB of data in a single column/row.

XML
  SQL Server 2000: Limited to transforming relational data into XML with SELECT statements, and some simple query work with transformed documents.
  SQL Server 2005: Native XML datatype, support for schemas and full XPATH/XQUERY querying of data.

ADO.NET
  SQL Server 2000: v1.1 of ADO.NET included enhancements for client development.
  SQL Server 2005: v2 has more features, including automatic failover for database mirroring, support for multiple active result sets (MARS), tracing of calls, statistics, new isolation levels and more.

Messaging
  SQL Server 2000: No messaging built into SQL Server.
  SQL Server 2005: Includes Service Broker, a full-featured asynchronous messaging system that has evolved from Microsoft Message Queue (MSMQ), which is integrated into Windows.

Reporting Services
  SQL Server 2000: An extremely powerful reporting environment, but a 1.0 product.
  SQL Server 2005: Numerous enhancements, run-time sorting, direct printing, viewer controls and an enhanced developer experience.

ETL
  SQL Server 2000: DTS is a very easy to use and intuitive tool. Limited capabilities for sources and transformations. Some constructs, such as loops, were very difficult to implement.
  SQL Server 2005: Integration Services is a true programming environment allowing almost any source of data to be used and many more types of transformations to occur. Very complex environment that is difficult for non-DBAs to use. Requires programming skills.

Full-Text Search
  SQL Server 2000: Workable solution, but limited in its capabilities. Cumbersome to work with in many situations.
  SQL Server 2005: More open architecture, allowing integration and plug-ins of third party extensions. Much more flexible in search capabilities.

Conclusion
These are the highlights that I see as a developer and that are of interest. There are other features
in the security area, scalability, etc. that might be of interest, but I think these are the main ones.
I welcome your comments and thoughts on this as well. Perhaps there are some features I've
missed in my short summary that you might point out, and let me know if you think it makes sense
to discuss some of the security changes. As for the BI stuff, hopefully one of you will send me the
differences in an article of your own.

SQL Server Best Practices

• Write comments in your stored procedures, triggers and SQL batches generously,
whenever something is not obvious. This helps other programmers understand
your code clearly. Don't worry about the length of the comments, as it won't impact
performance, unlike in interpreted languages such as ASP 2.0.

• Do not use SELECT * in your queries. Always write the required column names after the
SELECT statement, like:

SELECT CustomerID, CustomerFirstName, City

This technique results in reduced disk I/O and better performance.

• Try to avoid server side cursors as much as possible. Always stick to a 'set-based
approach' instead of a 'procedural approach' for accessing and manipulating data.
Cursors can often be avoided by using SELECT statements instead.
If a cursor is unavoidable, use a WHILE loop instead. I have personally tested and
concluded that a WHILE loop is always faster than a cursor. But for a WHILE loop to
replace a cursor you need a column (a primary key or unique key) to identify each row
uniquely. I personally believe every table must have a primary or unique key. A minimal
sketch of the WHILE-loop pattern follows this bullet.
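
The sketch below assumes a hypothetical dbo.Orders table with an OrderID primary key.

-- Walk the table one row at a time without a cursor.
DECLARE @OrderID int

SELECT @OrderID = MIN(OrderID) FROM dbo.Orders

WHILE @OrderID IS NOT NULL
BEGIN
    -- process one row here, e.g.:
    PRINT 'Processing order ' + CAST(@OrderID AS varchar(10))

    SELECT @OrderID = MIN(OrderID)
    FROM dbo.Orders
    WHERE OrderID > @OrderID           -- move to the next key; NULL ends the loop
END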

• Avoid the creation of temporary tables while processing data as much as possible, as
creating a temporary table means more disk I/O. Consider using advanced SQL, views,
SQL Server 2000 table variables, or derived tables instead of temporary tables.
(See also: the differences between #temp tables, ##global temp tables and @table variables.)

• Try to avoid wildcard characters at the beginning of a search string when using the
LIKE keyword, as that results in an index scan, which defeats the purpose of an index.
The first of the following statements results in an index scan, while the second results in
an index seek:

SELECT LocationID FROM Locations WHERE Specialities LIKE '%pples'


SELECT LocationID FROM Locations WHERE Specialities LIKE 'A%s'

Also avoid searching using the not-equals operators (<> and NOT), as they result in table
and index scans.

• Use 'Derived tables' wherever possible, as they perform better. Consider the following
query to find the second highest salary from the Employees table:

SELECT MIN(Salary)
FROM Employees
WHERE EmpID IN
(
SELECT TOP 2 EmpID
FROM Employees
ORDER BY Salary Desc
)

The same query can be re-written using a derived table, as shown below, and it
performs twice as fast as the above query:

SELECT MIN(Salary)
FROM
(
SELECT TOP 2 Salary
FROM Employees
ORDER BY Salary DESC
) AS A

This is just an example, and your results might differ in different scenarios depending
on the database design, indexes, volume of data, etc. So, test all the possible ways a
query could be written and go with the most efficient one.

• Prefix the table names with the owner's name, as this improves readability and avoids
any unnecessary confusion. Microsoft SQL Server Books Online even states that
qualifying table names with owner names helps in execution plan reuse, further
boosting performance.

• Use SET NOCOUNT ON at the beginning of your SQL batches, stored procedures and
triggers in production environments, as this suppresses messages like '(1 row(s)
affected)' after executing INSERT, UPDATE, DELETE and SELECT statements. This
improves the performance of stored procedures by reducing network traffic. A minimal
example follows.
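
The procedure name and table below are hypothetical; the point is simply where SET NOCOUNT ON goes.

CREATE PROC dbo.usp_ArchiveOrders
AS
BEGIN
    SET NOCOUNT ON                      -- suppress DONE_IN_PROC messages
    DELETE FROM dbo.Orders WHERE OrderDate < '20000101'
END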

• Use the more readable ANSI-standard join clauses instead of the old style joins. With
ANSI joins, the WHERE clause is used only for filtering data, whereas with the older style
joins, the WHERE clause handles both the join condition and the filtering. The first of
the following two queries shows the old style join, while the second one shows the new
ANSI join syntax:

SELECT a.au_id, t.title
FROM titles t, authors a, titleauthor ta
WHERE a.au_id = ta.au_id AND
      ta.title_id = t.title_id AND
      t.title LIKE '%Computer%'

SELECT a.au_id, t.title
FROM authors a
INNER JOIN titleauthor ta ON a.au_id = ta.au_id
INNER JOIN titles t ON ta.title_id = t.title_id
WHERE t.title LIKE '%Computer%'

• Do not prefix your stored procedure names with "sp_". The sp_ prefix is reserved for
system stored procedures that ship with SQL Server. Whenever SQL Server encounters a
procedure name starting with sp_, it first tries to locate the procedure in the master
database, then it looks for any qualifiers (database, owner) provided, then it tries dbo as
the owner. So you can save the time spent locating the stored procedure by avoiding the
"sp_" prefix.

• Views are generally used to show specific data to specific users based on their interest.
Views are also used to restrict access to the base tables by granting permission only on
views. Yet another significant use of views is that they simplify your queries. Incorporate
your frequently required, complicated joins and calculations into a view so that you don't
have to repeat those joins/calculations in all your queries. Instead, just select from the
view.

• Do not let your front-end applications query or manipulate the data directly using SELECT or
INSERT/UPDATE/DELETE statements. Instead, create stored procedures and let your
applications access those stored procedures. This keeps data access clean and consistent
across all the modules of your application, while at the same time centralizing the business logic
within the database.

• If you have a choice, do not store binary or image files (Binary Large Objects or BLOBs)
inside the database. Instead, store the path to the binary or image file in the database and use
that as a pointer to the actual binary file stored elsewhere on a server. Retrieving and
manipulating these large binary files is better performed outside the database, and after all, a
database is not meant for storing files.

• Avoid dynamic SQL statements as much as possible. Dynamic SQL tends to be slower than
static SQL, as SQL Server must generate an execution plan at runtime every time. IF and CASE
statements come in handy to avoid dynamic SQL. Another major disadvantage of using
dynamic SQL is that it requires users to have direct access permissions on all the accessed objects,
like tables and views. Generally, users are given access to the stored procedures which
reference the tables, but not directly to the tables. In that case, dynamic SQL will not work.
Consider the following scenario, where a user named 'dSQLuser' is added to the pubs database
and is granted access to a procedure named 'dSQLproc', but not to any of the tables in the
pubs database. The procedure dSQLproc executes a direct SELECT on the titles table, and that
works. The second statement runs the same SELECT on the titles table using dynamic SQL, and it
fails with the following error:

Server: Msg 229, Level 14, State 5, Line 1
SELECT permission denied on object 'titles', database 'pubs', owner 'dbo'.

To reproduce the above problem, use the following commands:

sp_addlogin 'dSQLuser'
GO
sp_defaultdb 'dSQLuser', 'pubs'
USE pubs
GO
sp_adduser 'dSQLuser', 'dSQLuser'
GO
CREATE PROC dSQLProc
AS
BEGIN
SELECT * FROM titles WHERE title_id = 'BU1032' --This works
DECLARE @str CHAR(100)
SET @str = 'SELECT * FROM titles WHERE title_id = ''BU1032'''
EXEC (@str) --This fails
END
GO
GRANT EXEC ON dSQLProc TO dSQLuser
GO

Now log in with the login dSQLuser, switch to the pubs database, and execute the procedure dSQLproc
to see the problem.

• Consider the following drawbacks before using the IDENTITY property for generating
primary keys. IDENTITY is very much SQL Server specific, and you will have problems porting
your database application to some other RDBMS. IDENTITY columns have other inherent
problems. For example, IDENTITY columns can run out of numbers at some point, depending on
the data type selected; numbers can't be reused automatically, after deleting rows; and
replication and IDENTITY columns don't always get along well.

So, come up with an algorithm to generate a primary key in the front-end or from within the
inserting stored procedure. There still could be issues with generating your own primary keys
too, like concurrency while generating the key, or running out of values. So, consider both
options and go with the one that suits you best.

• Minimize the use of NULLs, as they often confuse front-end applications, unless
the applications are coded intelligently to eliminate NULLs or convert them
into some other form. Any expression that deals with NULL results in a NULL
output. The ISNULL and COALESCE functions are helpful in dealing with NULL values.
Here's an example that illustrates the problem.
Consider the following Customers table, which stores customer names; the middle
name can be NULL:

CREATE TABLE Customers
(
FirstName varchar(20),
MiddleName varchar(20),
LastName varchar(20)
)

Now insert a customer into the table whose name is Tony Blair, without a middle
name:

INSERT INTO Customers
(FirstName, MiddleName, LastName)
VALUES ('Tony', NULL, 'Blair')

The following SELECT statement returns NULL, instead of the customer name:

SELECT FirstName + ' ' + MiddleName + ' ' + LastName FROM Customers

To avoid this problem, use ISNULL as shown below:

SELECT FirstName + ' ' + ISNULL(MiddleName + ' ', '') + LastName FROM Customers

• Use Unicode datatypes, like NCHAR, NVARCHAR, or NTEXT, if your database is
going to store not just plain English characters, but a variety of characters used all
over the world. Use these datatypes only when they are absolutely needed, as they
use twice as much space as non-Unicode datatypes.

• Always use a column list in your INSERT statements. This helps in avoiding problems when
the table structure changes (like adding or dropping a column). Here's an example which
shows the problem.

Consider the following table:

CREATE TABLE EuropeanCountries
(
CountryID int PRIMARY KEY,
CountryName varchar(25)
)

Here's an INSERT statement without a column list that works perfectly:

INSERT INTO EuropeanCountries
VALUES (1, 'Ireland')

Now, let's add a new column to this table:

ALTER TABLE EuropeanCountries
ADD EuroSupport bit

Now run the above INSERT statement. You get the following error from SQL Server:

Server: Msg 213, Level 16, State 4, Line 1
Insert Error: Column name or number of supplied values does not match table definition.

This problem can be avoided by writing an INSERT statement with a column list as shown
below:

INSERT INTO EuropeanCountries
(CountryID, CountryName)
VALUES (2, 'England')

• Perform all your referential integrity checks and data validations using constraints (foreign
key and check constraints) instead of triggers, as constraints are faster. Limit the use of triggers to
auditing, custom tasks and validations that cannot be performed using constraints. Constraints
save you time as well, as you don't have to write code for these validations; the RDBMS does
all the work for you.

• Always access tables in the same order in all your stored procedures and triggers.
This helps avoid deadlocks. Other things to keep in mind to avoid deadlocks are: keep your
transactions as short as possible; touch as little data as possible during a transaction; never
wait for user input in the middle of a transaction; do not use higher-level locking hints or
restrictive isolation levels unless they are absolutely needed; make your front-end applications
deadlock-intelligent, that is, able to resubmit the transaction in case the previous attempt fails
with error 1205; and, in your applications, process all the results returned by SQL Server
immediately so that the locks on the processed rows are released and no blocking occurs.

• Offload tasks, like string manipulations, concatenations, row numbering, case conversions,
type conversions, etc., to the front-end applications if these operations are going to consume
more CPU cycles on the database server. Also try to do basic validations in the front-end itself
during data entry. This saves unnecessary network roundtrips.

• Do not call functions repeatedly within your stored procedures, triggers, functions and
batches. For example, you might need the length of a string variable in many places in your
procedure; don't call the LEN function every time it's needed. Instead, call LEN
once, store the result in a variable, and reuse it, as in the sketch below.
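
The variable names here are made up; the pattern is simply "compute once, reuse many times".

DECLARE @Title varchar(100), @TitleLen int
SET @Title = 'SQL Server 2005 Upgrade Notes'
SET @TitleLen = LEN(@Title)             -- computed once

IF @TitleLen > 25
    PRINT 'Long title (' + CAST(@TitleLen AS varchar(10)) + ' characters)'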

• Make sure your stored procedures always return a value indicating their status. Standardize
on the return values of stored procedures for success and failure. The RETURN statement is
meant for returning the execution status only, not data. If you need to return data, use
OUTPUT parameters.

• If your stored procedure always returns a single-row resultset, consider returning the
resultset using OUTPUT parameters instead of a SELECT statement, as ADO handles output
parameters faster than resultsets returned by SELECT statements. A sketch combining a
RETURN status with OUTPUT parameters follows.
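
The procedure, table, and 0/-1 status convention below are hypothetical; they only illustrate the two previous points together.

CREATE PROC dbo.usp_GetCustomerName
    @CustomerID int,
    @FirstName  varchar(20) OUTPUT,
    @LastName   varchar(20) OUTPUT
AS
BEGIN
    SET NOCOUNT ON

    SELECT @FirstName = FirstName,
           @LastName  = LastName
    FROM dbo.Customers
    WHERE CustomerID = @CustomerID

    IF @@ROWCOUNT = 0
        RETURN -1        -- status only: customer not found
    RETURN 0             -- success
END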

• Always check the global variable @@ERROR immediately after executing a data
manipulation statement (like INSERT/UPDATE/DELETE), so that you can roll back the transaction
in case of an error (@@ERROR will be greater than 0 if an error occurred). This is important
because, by default, SQL Server will not roll back all the previous changes within a transaction if
a particular statement fails. This behavior can be changed by executing SET XACT_ABORT ON.
The @@ROWCOUNT variable also plays an important role in determining how many rows were
affected by a previous data manipulation (or retrieval) statement, and based on that you
could choose to commit or roll back a particular transaction. In SQL Server 2005 the new
TRY/CATCH construct, shown below, handles this more cleanly; a SQL Server 2000 style
@@ERROR check follows it.

BEGIN TRY
    BEGIN TRAN
    -- Your code here
    COMMIT TRAN
END TRY
BEGIN CATCH
    ROLLBACK TRAN
END CATCH
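
On SQL Server 2000, where TRY/CATCH is not available, the same protection comes from checking @@ERROR after every statement; the dbo.Accounts table and the two UPDATEs below are hypothetical.

BEGIN TRAN
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN       -- undo the whole unit of work on the first failure
    RETURN
END

UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN
    RETURN
END
COMMIT TRAN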

• To make SQL statements more readable, start each clause on a new line and indent when
needed. Following is an example:

SELECT title_id, title, bookname
FROM titles
INNER JOIN books ON books.book_id = titles.title_id
WHERE title LIKE 'Computer%' AND
      bookname LIKE 'Computer%'

SELECT title_id, title
FROM titles
WHERE title LIKE 'Computer%' AND
      title LIKE 'Cook%'
• Always be consistent with the usage of case in your code. On a case-insensitive server your
code might work fine, but it will fail on a case-sensitive SQL Server if your code is not
consistent in case. For example, if you create a table in a SQL Server instance or database that has a
case-sensitive or binary sort order, all references to the table must use the same case that was
specified in the CREATE TABLE statement. If you name the table 'MyTable' in the CREATE
TABLE statement and use 'mytable' in a SELECT statement, you get an 'object not found'
error.

• Do not use column numbers in the ORDER BY clause. Consider the following example, in
which the second query is more readable than the first one:

SELECT OrderID, OrderDate
FROM Orders
ORDER BY 2

SELECT OrderID, OrderDate
FROM Orders
ORDER BY OrderDate

Well, this is all for now, folks. I welcome your feedback on this.

Happy database programming!
