
SQL Server 2000 Cluster Steps, Part 1

By Siva Ram Mosalakanti

Detailed Description of the Servers

The following hardware and software were used for the solution.

Hardware:
- 2 Dell PowerEdge 6450 servers (the two cluster nodes)
- 4 x 700 MHz Pentium III Xeon processors per node
- 6 GB RAM per node
- 3 PERC 2/DC controllers in each node (a total of 6 controllers); one PERC 2/DC was used for the internal disks, and the other two were used for the external cluster disks
- 2 PowerVault 210S storage units (SCSI storage)
- 20 disk drives (10 in each PowerVault 210S)
- 2 Ethernet NICs (one per node, in addition to the embedded NICs, for a total of four NICs); the added NIC is the Intel Pro/100+ Server Adapter, and the embedded NIC is the Intel 8255x-based PCI Ethernet Adapter (10/100)
- A Domain Name Server connected to the public network (Castle)
- A network switch connecting the nodes with the domain name server

Software:
- Windows 2000 Advanced Server
- Windows 2000 Service Pack 3
- Windows 2000 Security Patch
- SQL Server 2000 Enterprise Edition
- SQL Server 2000 Service Pack 3

The primary node is SomeName with IP address xx.xx.x.xxx, and the secondary node is SomeOtherName with IP address xx.xx.x.xxx. The private network IP address is xx.xx.x.xxx.

This document assumes that you have already installed the Cluster Service. If you have not done that part, refer to www.microsoft.com; it takes barely ten steps and is very easy. The following are the steps to complete the SQL Server 2000 cluster environment.

1. Verifying the Cluster Setup

To verify that the cluster is functioning correctly, open Cluster Administrator on either node: go to Start -> Programs -> Administrative Tools -> Cluster Administrator. Both nodes should appear when you click on the Groups folder.
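Besides the GUI check, the cluster state can also be verified from a command prompt with the cluster.exe utility that ships with Windows 2000 Advanced Server. A minimal sketch, run on either cluster node:

```bat
REM List the nodes in the cluster and their current state
cluster node /status

REM List all resource groups and which node currently owns them
cluster group /status

REM List every resource, its state, and its owning group
cluster resource /status
```

Both nodes should report as Up, and all groups should show an Online state before you continue.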

2. Managing Disks in Cluster Administrator

For the purpose of the SQL Server 2000 installation, it is necessary to group the data and log disks together so that they can be selected as one group for SQL Server failover. At Some Company Name, we had one quorum disk and only one data disk, so the two groups in the Groups folder of Cluster Administrator are:

- Cluster Group
- Disk 1, later renamed SQL_Server

3. Configuring MS DTC for the Cluster

MS DTC (Microsoft Distributed Transaction Coordinator) is required in a cluster for distributed queries and two-phase commit transactions. The comclust.exe program must be run on each node to configure MS DTC to run in clustered mode. In the Cluster Administrator window, click on the Cluster Group in the left pane. If MSDTC does not show up as a resource in the right pane, configure MS DTC for clustering by running the following command at a command prompt, first on Node A (Voyager), then on Node B (Columbia):

> comclust.exe
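Once comclust.exe has finished on both nodes, you can confirm from a command prompt that the MS DTC resource exists and is online. A sketch, assuming the resource is named "MSDTC" (check the actual name shown in Cluster Administrator, as it can differ):

```bat
REM Show the state of the MS DTC cluster resource
REM (the resource name "MSDTC" is an assumption; verify it in Cluster Administrator)
cluster resource "MSDTC" /status
```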

Make sure to do this on both nodes, one at a time. MS DTC should now appear as a resource in the Cluster Group. With Windows 2000 Service Pack, the security patch, and MS DTC in place, SQL Server 2000 can now be installed, on the primary node only.

4. Adding SQL Server 2000 to the Cluster

After installing MSCS (Microsoft Cluster Server), running the Cluster Configuration Wizard, and testing cluster functionality, it is time to run the SQL Server 2000 Setup program on the primary node. It is not necessary to install SQL Server manually on the other node; this is done automatically when Setup is run on the primary node, as long as the other node is powered on and available in the cluster. So have both cluster nodes turned on during the SQL Server Setup program. The Setup program detects that the system is a cluster and asks several questions concerning cluster and virtual server information (we named the virtual server SHUTTLESQL) at the beginning of the install process. Then the normal SQL Server installation process continues. Once all the information is entered, the Setup program automatically installs a new, separate instance of the SQL Server binaries on the local disk of each server in the cluster. The binaries are installed in exactly the same path on each cluster node, so it is important to ensure that each node has a local drive letter in common with all the other nodes in the cluster, such as C:\SQL2000. The Setup program also installs the system databases on the specified cluster (shared) disk. System databases must be on a clustered disk so that they can be shared between the nodes (and failed over when necessary), because these databases contain specific user login and database object information that must be the same for each node. The virtual server name allows users to access whichever node is online. A few setup options were chosen per the Siebel 7 requirements: Mixed Mode authentication; the sort order Dictionary order, case sensitive, for use with the 1252 character set; and a processor license for 4 processors.

------------- Part 2 -------------

5. Installing the SQL Server 2000 Service Pack on a SQL Server 2000 Cluster

Because of the nature of SQL Server 2000 clustering, you want to ensure that it is as reliable as possible. One way to help ensure that is to install the latest SQL Server 2000 Service Pack on the cluster before you put it into production. The installation requires a reboot of both nodes, so adding the service pack now, rather than later, can reduce potential downtime down the road. Service Pack 3 was downloaded from the Microsoft site and stored under C:\Tools\SQLSP3. This file was executed on the primary node of the cluster (Voyager).

Recommended step after installing the Service Pack

After SQL Server 2000 has been successfully installed, bring up Cluster Administrator; you will note that the disk group containing the disk resource you assigned when installing SQL Server 2000 now includes all of the SQL Server clustering resources. What I like to do is rename this resource group to a more easily understood name, such as SQL_Server. In the SQL_Server resource group, you should find these resources:
- Disk G: This is a logical drive from the shared disk array that was assigned as a SQL Server cluster resource during the installation.
- SQL IP Address1: This is the virtual IP address assigned to the SQL Server cluster during installation.
- SQL Network Name: This is the virtual cluster name assigned to the SQL Server cluster during installation.
- SQL Server: This is the MSSQLSERVER service that was installed when the SQL Server cluster was installed.
- SQL Server Agent: This is the SQLSERVERAGENT service that was installed when the SQL Server cluster was installed.
- SQL Server Fulltext: This is the SQL Server Full-Text Search service that was installed when the SQL Server cluster was installed.
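The same resources, their states, and their dependencies can be listed from a command prompt. A sketch, assuming the group has already been renamed SQL_Server:

```bat
REM Show every resource in the SQL_Server group and its current state
cluster group "SQL_Server" /status

REM Show which resources the SQL Server resource depends on
cluster resource "SQL Server" /listdependencies
```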

All of these cluster resources need to be in a single group because they all depend on each other; should a failover occur, all of these resources must fail over as a group.

6. Testing the SQL Server 2000 Cluster

After completing the installation of the Service Pack on the SQL Server 2000 primary node, the SQL Server 2000 cluster is most likely ready to be used. But before I trust any production systems to a new cluster, it is a good idea to test it to see if it is really working as it should.

Test Number 1: Moving Groups

The first test I performed is very simple: move the current resources (including the Cluster Group and the SQL Server resource group) from the primary cluster node to the secondary cluster node, and then back again. The steps to see if SQL Server clustering is functioning properly are as follows:

1. Start Cluster Administrator.
2. In the Explorer pane at the left side of Cluster Administrator, open the "Groups" folder. Inside it you should see the Cluster Group and the SQL Server resource group (the name we gave it is SQL_Server).
3. Click on "Cluster Group" to highlight it. In the right pane of the screen, you will see the cluster resources that make up this group. Note the "Owner" of the resources, which is Voyager.
4. Each of the groups must be moved to the other node, one at a time. First, right-click on "Cluster Group," then select "Move Group." As soon as you do this, you will see the "State" change from "Online" to "Offline pending" to "Offline" to "Online pending" to "Online." This happens very quickly. Also note that the "Owner" changes from the primary node to the secondary node, that is, from Voyager to Columbia.
5. Now do the same for the SQL_Server resource group.
6. Assuming there are no problems, both groups will have moved to the secondary node, which, in effect, has now become the primary node. Once both groups have been moved, look in the Event Viewer to see if any error messages were generated. If everything worked correctly, there should be no error messages.
7. Now move both groups back to the original node by repeating steps three through six above.
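The same group moves can also be performed from a command prompt with cluster.exe, which is handy if you want to script repeated move tests. A sketch using the node names from this document:

```bat
REM Move the Cluster Group to the secondary node
cluster group "Cluster Group" /moveto:Columbia

REM Move the SQL Server resource group as well
cluster group "SQL_Server" /moveto:Columbia

REM Move both groups back to the primary node
cluster group "Cluster Group" /moveto:Voyager
cluster group "SQL_Server" /moveto:Voyager
```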

This is a very basic test, but it helps to determine if the cluster is working as it should. The following tests are slightly more comprehensive, helping you to root out other potential problems.

Test Number 2: Initiate Failure

This test is very similar to the one above, except that we pretend a resource has failed; in effect, we manually simulate a failover. Here is how I performed this test:

1. Start Cluster Administrator.
2. In the Explorer pane at the left side of Cluster Administrator, open the "Groups" folder. Inside it you should see the Cluster Group and the SQL Server resource group.
3. Click on "Cluster Group" to highlight it. In the right pane of the screen, you will see the cluster resources that make up this group. Note the "Owner" of the resources, which is Voyager.
4. Now right-click on the "Cluster IP Address" resource in the right pane of the window, then select "Initiate Failure." This tells the Cluster Service that the virtual IP address has failed.
5. After you select this option, you will notice some activity under "State," but the resource fairly quickly returns to an "Online" status, and the "Owner" has not changed. It appears as if no failover has occurred, and that is correct: no failover has occurred. This is normal and to be expected, because the Cluster Service will try to restart a failed resource up to three times before it actually fails over (this number can be changed). So to actually initiate a failover, you must redo step four above for a total of four times. When the failover occurs, you will also notice that all of the resources in the "Cluster Group" fail over with it.
6. If you now click on the SQL Server resource group, you will notice that the SQL Server resources did not fail over. This is also normal. A failover only forces dependent resources to fail over as a group, and the "Cluster Group" we failed over earlier does not depend on the SQL Server resource group, so the latter did not fail over. To fail over the SQL Server resource group, right-click on the disk resource that contains the system database files in the right pane of the window, and select "Initiate Failure." You will have to do this a total of four times in order to fail the disk resource over to the other node.
7. Now that you are done, reverse your steps and fail the "Cluster Group" and the SQL Server resource group back over to the primary node.
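The three-restarts-before-failover behavior described in step 5 is governed by per-resource restart properties, which can be inspected and, if needed, changed with cluster.exe. A sketch, assuming the default resource names from the installation:

```bat
REM Show the properties of the virtual IP resource; look for
REM RestartAction, RestartThreshold, and RestartPeriod
cluster resource "Cluster IP Address" /prop

REM Example: allow five restart attempts before failing over
cluster resource "Cluster IP Address" /prop RestartThreshold=5
```

Leave the defaults in place for production unless you have a specific reason to change them; the values above are only for experimenting with the test.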

As with the previous test, check the Event Viewer logs to see if any error messages occurred. If everything worked as expected, you are ready for the next test.

Test Number 3: Turn Off Each Node

While the first two tests were performed from Cluster Administrator, the next two tests are closer to the real world. In this test, first ensure that all of the default groups are located on the primary node. Then physically turn off (flip the switch on) the primary node. If you are watching the cluster groups from Cluster Administrator on the secondary node after turning off the primary node, you should see a failover occur, with all the resources automatically failed over to the secondary node. Check the Event Log for any potential error messages after this occurs. Once you have checked for any potential problems, turn the primary node back on and wait until it fully boots. You will note that turning on the primary node does not cause the cluster to fail back; the cluster resources remain on the secondary node until you force them to return to the primary node. Now turn off the secondary node, repeating what you did earlier with the primary node. As before, you can use Cluster Administrator from the primary node to watch the groups fail over to the primary node. Check the Event Log for any potential error messages. Once the groups have failed back to the primary node, turn the secondary node back on and wait until it boots up fully. This is a very good test of whether failover will work in the real world. If no problems arose from this test, you are ready for the next one.

Test Number 4: Break Network Connectivity

This test is similar in concept to the one above. Again we want to force a failover, but instead of simulating a computer failure, we will simulate a network-related error.

From the primary node, remove the network cable from the public network card. This simulates a failure of the primary node and should initiate a failover to the secondary node. If you are watching the cluster groups from Cluster Administrator on the secondary node, you should see a failover occur, with the resources automatically failed over. Check the Event Log for any potential error messages. Once you have checked for any potential problems, plug the network cable back into the primary node, and then remove the network cable from the public network card on the secondary node. As before, you can use Cluster Administrator to watch the groups fail over to the primary node. Check the Event Log for any potential error messages. Once you are done, plug the network cable back into the public network card on the secondary node. If no problems arose from this test, you can be confident that the SQL Server 2000 cluster is working smoothly.

7. Log File Locations

The SQL Server cluster log file location is C:\WinNT\sqlclstr.log. The Cluster Service log file location is C:\WinNT\Cluster\Cluster.log.

8. Setting Up the Siebel Database

After successful testing and a review of the log files, it is time to create the Siebel database according to the requirements. The database was created with the name SiebelDB and a size of 5 GB, of which 4 GB was reserved for the data files and 1 GB for the log file, with 10% file growth and Auto-shrink unchecked. A backup account for sa was created, because using the sa (system administrator) account for day-to-day SQL Server administration is not a best practice. A sysadmin login for SiebelDB was also created for Some User so that she can administer the Siebel database. All the necessary database parameters, such as memory, services, database settings, and security, were configured. A full backup of SiebelDB was scheduled for 12 pm every day, along with an hourly transaction log backup.

9. Final Thoughts

Refer to www.microsoft.com for more information on SQL Server cluster steps.
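Before handing the cluster over, one last end-to-end check is worthwhile: connect through the virtual server name rather than a physical node, so you exercise the same path clients will use. A sketch using the osql utility that ships with SQL Server 2000 (SHUTTLESQL is the virtual server name chosen during setup; -E uses Windows authentication):

```bat
REM Confirm the instance answers on the virtual name and reports itself clustered
osql -E -S SHUTTLESQL -Q "SELECT @@SERVERNAME, SERVERPROPERTY('IsClustered')"

REM List the physical nodes this virtual server can run on
osql -E -S SHUTTLESQL -Q "SELECT * FROM ::fn_virtualservernodes()"
```

If IsClustered returns 1 and both Voyager and Columbia appear in the node list, clients connecting to SHUTTLESQL will follow the instance through a failover.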
