3.5.4 Insert ACI on Cluster.......................................................................................................... 75
3.5.4.1 Change ACI Services Properties on Cluster.................................................................75
3.5.5 All Resources in Cluster Administrator............................................................................77
3.5.6 Set ACI environment variables.......................................................................................... 78
3.5.7 Changes in the Registry.................................................................................................... 78
3.5.8 Shared Data........................................................................................................................ 78
3.5.9 Verify of Installation and Configuration............................................................................78
3.6 Hard/Software Requirements and Customer Supply......................................80
3.6.1 Requirements Section (for Windows)...............................................................................80
3.6.2 Customer Supply Section.................................................................................................. 85
3.6.3 Order Packages.................................................................................................................. 85
0 List of Figures and Tables
Figure 1 2-Node MSCS Cluster...................................................................................................8
Figure 2 ACI Configuration in a Cluster.......................................................................................9
Figure 3 The Select An Account page........................................................................................15
Figure 4 The Add Or Remove Managed Disks page..................................................................15
Figure 5 The Cluster File Storage page.....................................................................................16
Figure 6 The Configure Cluster Networks page.........................................................................16
Figure 7 The Network Connections page...................................................................................17
Figure 8 The Internal Cluster Communication page...................................................................18
Figure 9 The Cluster IP Address page.......................................................................................18
Figure 10 Cluster Administrator.................................................................................................20
Figure 11 Cluster Administrator..................................................................................................20
Figure 12 Cluster Administrator's Open Connection To Cluster dialog box................................21
Figure 13 The Node Offline icon................................................................................................22
Figure 14 The Failback tab of the Cluster Group Properties window.........................................30
Figure 15 The Advanced tab of the Cluster IP Address Properties window................................32
Figure 16 Versant in a Cluster with shared RAID System..........................................................38
Figure 17 Versantd properties, General, (local Computer).........................................................39
Figure 18 Versantd properties, LogOn, (local Computer).........................................................40
Figure 19 Versantd Properties, General....................................................................................42
Figure 20 Versantd Properties, Dependencies..........................................................................42
Figure 21 Versantd Properties, Parameters...............................................................................43
Figure 22 Cluster Administrator, Database Group.....................................................................43
Figure 23 Corba in a Cluster......................................................................................................46
Figure 24 Orbix Environment with Centralized Configuration....................................................48
Figure 25 Corba Services configuration.....................................................................................57
Figure 26 IT activator default-domain Properties.......................................................................58
Figure 27 IT config_rep cfr-ACI_Network Properties.................................................................59
Figure 28 IT activator default-domain Properties.......................................................................60
Figure 29 IT activator default-domain Properties.......................................................................61
Figure 30 IT activator default-domain Properties.......................................................................62
Figure 31 IT activator default-domain Properties.......................................................................63
Figure 32 IT activator default-domain General properties..........................................................65
Figure 33 IT activator default-domain Dependencies properties................................................66
Figure 34 IT activator default-domain Advanced properties.......................................................66
Figure 35 IT activator default-domain Parameter properties......................................................67
Figure 36 IT activator default-domain Registry properties.........................................................67
Figure 37 Cluster Administrator, Group including corba services...............................................68
Figure 38 ACI in a Cluster..........................................................................................................70
Figure 39 ACI Service configuration..........................................................................................71
Figure 40 Domain manager Properties......................................................................................72
Figure 41 General properties.....................................................................................................75
Figure 42 Dependencies properties...........................................................................................76
Figure 43 Advanced properties..................................................................................................76
Figure 44 Parameter properties.................................................................................................77
Figure 45 Registry properties....................................................................................................77
Figure 46 Cluster Administrator shows Group including online services....................................78
1 LMA F 150 Introduction
In certain industries, down-time has always been unacceptable (e.g., communications/telephony,
finance and banking, reservation systems). Today, given the realities of global competition,
difficulties in product differentiation, low operating margins and the like, many industries must
be up and running for their customers whenever the customer sees fit to call.
Achieving this level of availability starts with an audit of your system. When you audit network risk, you identify the possible failures that can
interrupt access to network resources. A single point of failure is any component in your environment that would block
data or applications if it failed. Single points of failure can be hardware, software, or external dependencies, such as a
power supply. The following are some of the possible failure points.
Network hub
Power Supply
Server connection
Disk
Other server hardware, such as CPU or memory
Server software, such as the operating system or specific applications
WAN links, such as routers and dedicated lines
Solutions exist at varying levels, from using a UPS for the power supply to using a RAID (Redundant Array of Inexpensive Disks) for disk replication.
In general, you provide maximum reliability when you:
Minimise the number of single points of failure in your environment.
Provide mechanisms that maintain service when a failure occurs.
The impact of downtime becomes clearer when you consider small slippages in uptime. A slippage of one-tenth of one percent in
uptime can cause minutes, if not hours, of server outages during the course of a year.
For example, an availability level of 99.999% on a round-the-clock basis would mean that an organisation would
experience at least five minutes of unscheduled downtime during a year. A level of 99.99% would mean 52 minutes of
downtime. A level of 99.9% would translate to 8.7 hours of downtime. A level of 99% would equal 3.7 days of downtime
throughout the course of a year.
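These figures follow directly from the relationship downtime = (1 - availability) x period, applied to a full year:
(1 - 0.99999) x 525,600 minutes ≈ 5.3 minutes per year
(1 - 0.9999) x 525,600 minutes ≈ 52.6 minutes per year
(1 - 0.999) x 8,760 hours ≈ 8.76 hours per year
(1 - 0.99) x 365 days ≈ 3.7 days per year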
Users of business applications may be able to cope with a few seconds, or even minutes, of downtime during a business
day. But many minutes of downtime would cause productivity and business losses that most companies would find
unacceptable.
By taking advantage of high availability solutions, such as clustering, organisations can improve availability, reduce
downtime and reduce user disruption for unplanned outages and planned maintenance. For example, without clustering, a
disk drive failure might require the user to sign on to the backup system, restore data from backup files, restart the
application and re-enter one or more transactions. Clustering support will make possible a planned switchover for
scheduled backups with only a slight delay at the user's workstation.
A system is generally declared highly available when it achieves an uptime of 99.9% to 99.999%, which corresponds to a
downtime of at most about 8.7 hours and as little as about 5 minutes per year.
The scope of this LMA is limited to ensuring a high level of availability of:
the server connections, both to the ACI clients and the ACI servers.
server hardware (CPU, memory) and
server software such as the operating system or the ACI server applications.
If the host on which it runs suffers a power failure, or a malfunction of any OS resource makes the host unavailable,
the process monitor becomes totally inadequate and cannot do anything in such a situation.
2. Limit learning of the network by the ACI server processes to the first-time start (cold start) only.
Currently ACI learns parts of the network even on warm starts (i.e. subsequent starts), which by itself can take hours.
The solution to requirement 1 should remove this behaviour. In other words, ACI should be able to assume
that it is always connected to the network and that the database always reflects the true state of the network. If the
connection to the network is lost for a short period, say 10-30 seconds, it should still be possible to assume a
synchronised state between the MIB and the network. A forced resynchronisation can nevertheless be carried out
through a consistency check.
3. Clients and ACI server processes should not shut down when they detect a failure of the connection to
their corresponding server processes.
Currently within ACI, clients shut down when they sense a loss of connection to the server processes,
and additionally the server processes themselves need to be restarted when they lose
connection to the server they have been connected to. For example, if a DCN server
crashes, the Classic server or any EMS connected to it would need to be restarted. This
behaviour would need to be changed. When a client loses a connection, it should
inform the operator that the connection has been lost and that the client is trying to re-
establish it. All objects related to the current operation should be cleaned up. A server
should behave in the same way, i.e. simply wait for its corresponding server to become
available again.
The requirements listed above are necessary for the current ACI versions. Additional requirements, which may be
necessary, are:
4. Provide High Availability of ACI as an add-on option for customers who want it, which implies selling it as a
separate feature.
This implies the following.
The existing installation program would not need to be modified to make any changes that may be needed for high
availability (such as registry entries or copies of software on different drives). Instead, a separate installation
program would need to be written that makes the necessary changes on the ACI server machines or nodes.
The behaviour defined in requirement 2 would not be satisfied in this case (i.e. with no high availability); hence
ACI should be able to detect that it is not running in a high availability environment and should
learn/force synchronisation with the network.
5. Provide a way to maintain the ACI server machines without interrupting ACI services.
Currently, if the machine on which the ACI server processes are running needs to be upgraded to a new service
pack, or the customer would like to add new utility software that requires a reboot, or even upgrade the system
with a new hard disk or memory, the ACI server processes are disrupted and need to be restarted. This causes
a loss of ACI service. The customer should be able to do the above without such a loss of service.
2 Product Architecture
Server clusters are configurations of two or more machines that share some disks. A failure in one machine causes the data
services to automatically relocate to the backup machine. If the shared disk is itself a RAID, then the configuration admits
no single point of failure.
The following figure illustrates components of a two-node server cluster that may be composed of servers running either
Windows 2000 Advanced Server or Windows NT Server 4.0, Enterprise Edition with shared storage device connections
using SCSI or SCSI over Fiber Channel.
There are a number of configurations for server clusters. Here the two nodes share a disk array. The inactive node
monitors the primary to determine whether it has failed. When it detects a failure, it immediately starts a script that restarts all
the failed services on the secondary node and takes over the groups defined on the failed node together with their resources,
such as the shared disk, the global IP address, the global hostname, Versant, Corba and ACI.
Figure 1 2-Node MSCS Cluster
Figure 2 ACI Configuration in a Cluster (DM Client with GUI, Application Logic and API; ACI Group with switch-over; shared Database Files and ACI Logs)
3 Installation Procedure
All processes are to be installed on both nodes on the boot disk. It is not recommended to install the executables on the shared
disk, for the following reasons (LMA 150 and the HA definitions):
Maximum reliability
- Minimize the number of single points of failure in your environment.
Comment: An installation on the shared disk is a single point of failure. If the ACI executable files are
overwritten by mistake, a restart from the other node is no longer possible and high availability is lost.
(This does not even consider a hardware crash of the disk.) Such an overwrite is always possible, whether by
another process or by ACI itself.
- Provide mechanisms that maintain service when a failure occurs.
Comment: In general, a version upgrade is not possible on the shared disk without stopping high
availability, because the files are locked. The high availability definition allows a maximum of 8 hours per
year out of service; that budget is not enough for maintenance and upgrades.
From LMA 150:
Provide a way to maintain the ACI server machines without interrupting ACI services.
- Currently, if the machine on which the ACI server processes are running needs to be upgraded to a
new service pack, or the customer would like to add new utility software that requires a reboot, or even
upgrade the system with a new hard disk or memory, the ACI server processes are disrupted
and need to be restarted. This causes a loss of ACI service. The customer should be able to do the
above without such a loss of service.
Databases, logs, os_backup, corba variables are to be installed and configured on the shared disk.
3.1 Installing and Configuring Cluster Service
Installation Options
Three options are available for installing Cluster Service:
Installation with a fresh installation of Windows 2000 Advanced Server
Installation on an existing installation of Windows 2000 Advanced Server
Unattended Cluster Service installation
Regardless of the installation type you select, you will use the Cluscfg.exe application. Cluscfg.exe runs as a standard
Windows 2000 wizard unless you automate the installation of Cluster Service. When you run Cluscfg.exe from the
command line, it supports the command line options listed in Table 3.1.
Table 3.1 Cluscfg.exe Command-Line Options
Parameter - Description
ACC[OUNT] <accountname> - Specifies the domain service account used for Cluster Service.
ACT[ION] {FORM | JOIN} - Specifies whether to form a new cluster or join an existing cluster.
D[OMAIN] <domainname> - Specifies the domain used by Cluster Service.
EXC[LUDEDRIVE] <drive list> - Specifies which drives should not be used by Cluster Service as shared disks.
I[PADDR] <xxx.xxx.xxx.xxx> - Specifies the IP address for the cluster.
L[OCALQUORUM] - Specifies a disk on a nonshared SCSI bus that should be used as the quorum device.
NA[ME] <clustername> - Specifies the name of the cluster.
NE[TWORK] <connectionname> {INTERNAL | CLIENT | ALL} [priority] - Specifies how the specified network connection should be used by Cluster Service.
P[ASSWORD] <password> - Specifies the password for the domain service account used for Cluster Service.
Q[UORUM] <x:> - Specifies the drive letter to use for the quorum device.
S[UBNET] <xxx.xxx.xxx.xxx> - Specifies the subnet to use for the private network.
U[NATTEND] [<path to answer file>] - Suppresses the user interface in order to perform an unattended installation. Also specifies an optional external answer file.
Installation with Windows 2000 Advanced Server
The most common method for installing Cluster Service is to install it as an option during the operating system setup
process. When you install Windows 2000 Advanced Server, you can select Cluster Service in the Windows Components
Setup dialog box. This causes the Cluster Configuration Wizard to start at the end of the Windows setup process. When
you install Cluster Service as part of a new setup, you must manually enter the appropriate information for Cluster Service.
An alternative to this is the unattended installation option described later in this section. You must have administrative
rights in order to install Cluster Service.
Unattended Installation
If you are installing and configuring a number of clusters, you can elect to automate the setup process for Cluster Service
as part of a new Windows 2000 Advanced Server installation or as an installation on an existing server. In either case,
you will use the Cluscfg.exe application with an associated answer file. When you use Cluscfg.exe to automate an
installation, the answers it requires can come from the answer file used by Sysprep or from an external answer file that you
create.
NOTE
Sysprep is used to install only new instances of Windows 2000 Advanced Server. To automate the installation process of
Cluster Service on an existing Windows 2000 Advanced Server, you must supply an external answer file.
In order for Cluscfg.exe to run in an unattended manner, the setup process must first complete the installation of Windows
2000, reboot the server, and have you log in with administrative rights.
Account
Account = <account name>
This key specifies the name of the account under which Cluster Service runs. This key is required only if Action = Form.
(See below.)
Example:
Account = adminname
Action
Action = <Form | Join>
This key specifies whether a cluster is to be formed or joined.
Form specifies that the cluster is to be created. If this is the first node in a cluster, you are creating a new cluster. When you
specify Form, you must specify the Account and Domain keys.
Join specifies that your machine is to join an existing cluster. If at least one other node already exists, you are joining a
cluster. When you specify Join, you should not specify the Account and Domain keys.
Example:
Action = Form
Domain
Domain = <domain name>
This key specifies the domain to which the cluster belongs. It is required only if Action = Form.
Example:
Domain = domainname
ExcludeDrive
ExcludeDrive = <drive letter>[, <drive letter> [, . . . ]]
This optional key specifies a drive to be excluded from the list of possible quorum devices.
Example:
ExcludeDrive = q, r
IPAddr
IPAddr = <IP address>
This key specifies the IP address of the cluster.
Example:
IPAddr = 193.1.1.95
LocalQuorum
LocalQuorum = Yes | No
This optional key specifies that a system drive should be used as the quorum device. (Normally, only a disk that is on a
shared SCSI bus not used by the system disk can be selected as the quorum device.)
Example:
LocalQuorum = Yes
NOTE
This parameter should be used only for demo, testing, and development purposes. The local quorum resource cannot fail
over.
Name
Name = <cluster name>
This key specifies the name of the cluster. The value can contain a maximum of 15 characters.
Example:
Name = MyCluster
Network
Network = <connection name string>, <role>[, <priority>]
This key specifies the connection name associated with a network adapter and the role that adapter is to fulfill in the
cluster. The first two parameters, <connection name string> and <role>, are required. The third parameter, <priority>,
should be supplied only for network connections configured for internal communications.
The <role> parameter specifies the type of cluster communication for the network connection. Valid parameters are All,
Internal, and Client. To use the network connections for communication with clients and between the nodes, specify All.
To use the network connections only for internal communication between the nodes, specify Internal. To use the network
connections only for communication with clients, specify Client.
The <priority> parameter specifies the order in which the network connections are used for internal communication.
Example:
Network="Local Area Connection 2", INTERNAL, 1
Password
Password = <password>
This key specifies the password of the account under which Cluster Service runs.
Example:
Password = MyPassword
NOTE
Some security risks are associated with using the Password key because the password is stored as plain text within the
answer file. However, the Password key is deleted after the upgrade.
Quorum
Quorum = <drive letter>
This key specifies the drive to be used as the quorum device.
Example:
Quorum = Q:
Subnet
Subnet = <IP subnet mask>
This key specifies the IP subnet mask of the cluster.
Example:
Subnet = 255.255.0.0
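Putting these keys together, an external answer file for forming a new cluster might look like the following sketch. All values (account, domain, addresses, connection names) are placeholders, one Network line is assumed per adapter, and the exact file layout should be verified against the unattended-setup documentation:
Account = clusteradmin
Action = Form
Domain = domainname
IPAddr = 193.1.1.95
Name = MyCluster
Network = "Local Area Connection 2", INTERNAL, 1
Network = "Local Area Connection", ALL
Password = MyPassword
Quorum = Q:
Subnet = 255.255.0.0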
Hardware Configuration
Although Cluster Service can be implemented on a variety of hardware configurations, Microsoft supports only Cluster
Service installations performed on configurations listed on the Cluster Service Hardware Compatibility List (HCL). The
wizard's Hardware Configuration page, shown in Figure 3.1, reviews this policy. For more information about these
configurations, see Chapter 2, Lesson 1.
Figure 3.1 The Cluster Service Configuration Wizard's Hardware Configuration page
To continue with the installation of Cluster Service you must confirm that you understand Microsoft's support policy by
clicking I Understand.
Select An Account
Before running the wizard, you must first create a domain user account for the cluster. This account must be a Domain
Administrator or have local administrative rights on each node, plus the following permissions:
Lock pages in memory
Log on as a service
Act as part of the operating system
Back up files and directories
Increase quotas
Increase scheduling priority
Load and unload device drivers
Restore files and directories
The wizard requires you to enter this account information on the Select An Account page, shown in Figure 3.3.
After you enter the information for the Cluster Service account, the wizard validates the user account and password. If the
node on which you are installing Cluster Service is a member server, you will be prompted to add this account to the local
Administrators group.
Figure 5 The Cluster File Storage page
Network Connections
The Network Connections page, shown in Figure 3.7, allows you to configure the cluster to allow it to communicate
properly.
Figure 7 The Network Connections page
This page contains the following properties:
Network Name In the Network Name text box, enter the name of the connection. This should match the name
used for the private network connection. By naming your connections appropriately based on their use, it will be
easier to manage and maintain the cluster. This is especially important when multiple administrators are managing
the cluster.
Device The Device text box is populated automatically with the name of the network adapter currently being
configured. Your server should have more than one adapter, so you should make sure to apply the private network
settings and public network settings on the appropriate adapter.
IP Address In the IP Address text box, enter the IP address that the cluster will use to communicate with the other
nodes in the cluster.
Enable This Network For Cluster Use If you select this option, Cluster Service will use this network adapter by
default.
Client Access Only (Public Network) When this option is selected, the public network adapter will be used by
the cluster only for communication with clients. No node-to-node communication will occur on this adapter. You
should select this option only if you have another adapter that can act as a backup if the primary private adapter
becomes unavailable.
Internal Cluster Communications Only (Private Network) If you select this option, Cluster Service will not
use this adapter for any client communication. This adapter will be used only for internal node-to-node
communication within the cluster. You should configure the second adapter in each node to act as a backup for
this adapter in the event of a failure.
All Communications (Mixed Network) By default, this option is selected for the adapter card you are
configuring. It specifies that Cluster Service can use this card for client communication as well as private, node-
to-node communication. You should select this option if you have only two adapters and the other adapter is being
used exclusively for node-to-node communication. If the other adapter fails, this adapter will assume
responsibility for all cluster communication.
Figure 8 The Internal Cluster Communication page
Cluster IP Address
The Cluster IP Address page, shown in Figure 3.9, requires you to enter the public IP address assigned to the cluster.
In this practice, you will install Cluster Service on an existing Windows 2000 Advanced Server. You should already have
Windows 2000 Advanced Server installed and configured on both nodes (and the shared device). (For information on how
to do this, see the practices in Chapter 2.)
To see a demonstration of this practice, run the Cluster Install demonstration located in the Media folder on the companion
CD.
1. Click Next.
2. In the Password text box, type a password.
3. In the Confirm Password text box, type the same password.
4. Select the User Cannot Change Password and Password Never Expires check boxes.
5. Click Next.
6. Click Finish.
7. In the right pane of the Active Directory Users And Computers snap-in, right-click the Cluster Service Admin Account and click Add Members To Group.
8. Click Administrators, and then click OK.
9. Close the Active Directory Users And Computers window.
1. On the first server, from the Windows 2000 Start menu, point to Programs, point to Administrative Tools, and click Cluster Administrator.
2. Cluster Administrator should look like Figure 3.10. If you cannot open Cluster Administrator or if you see errors in it, the Cluster Service installation did not complete successfully or your node is incorrectly configured.
Figure 11 Cluster Administrator
In addition to using Cluster Administrator from any node in order to manage the cluster, you can install it on nonclustered
computers for remote administration. The cluster domain must be able to authenticate the account you use on the remote
system. To install it on a computer that is not part of the cluster, you can use the Adminpak.msi installer file included with
Windows 2000.
Cluster Administrator supports the following operating systems:
Windows NT 4 Server Enterprise Edition, Service Pack 3
Windows 2000 Server
Windows 2000 Advanced Server
Windows 2000 Datacenter Server
To install Cluster Administrator manually on a computer that isn't a node of a cluster, follow these steps:
1. From the Windows 2000 Start menu, click Run.
2. Type Adminpak.msi and press Enter.
3. Follow the instructions on the screen.
After the installation is complete, Cluster Administrator will be listed on the Windows 2000 Administrative Tools menu.
Renaming resources and resource groups You can use Cluster Administrator to rename groups and edit their
properties.
Removing resources and resource groups Using Cluster Administrator, you can delete resources and groups.
When a group is deleted, all the resources that were members of the group are also deleted. A resource cannot be
deleted until all resources that depend on it are deleted.
Viewing default groups Every new cluster includes two default groups: the Cluster Group and the Disk Group.
These groups contain default cluster settings and general information about failover policies for the cluster.
The default Cluster Group includes the IP Address and Network Name resources for the cluster. The resource information
presented in this group was entered when you configured the new cluster using the Cluster Service Configuration Wizard.
This group is required for administration of the cluster and should not be renamed.
The Disk Group is also created when you initially install Cluster Service. Each disk on the shared storage device will
receive its own disk group that includes a Physical Disk resource.
When you create new groups, you should implement them as modifications to the disk groups. You can then rename each
group to something meaningful. For example, you might add resources to Disk Group 1 that will be used by your Web
server. Once the resources have been added, you can rename Disk Group 1 to Web Group. The Cluster Group name does
not change.
Modifying the state of groups and resources You can use Cluster Administrator to bring resources and groups
online or take them offline. If you change the state of a group, all the resources within that group will be updated
automatically. These resources have their state changed in the order of their dependencies.
Changing ownership Using Cluster Administrator, you can specify the ownership of a resource or an entire
group. Resources are owned by groups, and groups, in turn, are owned by a node. You can transfer resources
between groups to satisfy dependencies and application requirements. You can also transfer group ownership,
using the Move Group command, to assign groups to other nodes in the cluster.
You typically transfer group ownership when you need to bring down a node for maintenance or upgrades. When a group's
ownership is moved to another node, all resources in that group are taken offline, the group is then transferred, and the
resources are brought back online. As a result, you must carefully plan when to move groups because clients might be
affected temporarily as the resources are shut down and then restarted.
Once the resource ownership has changed, the resource will be automatically brought online. However, when a resource is
moved between groups on the same node, it will not be taken offline.
Changing the maximum Quorum log size By default, the Quorum log file size is set to 64 KB. Depending on
the number of shares supported on the cluster and the number of transactions managed by the cluster, this might
be too small. In this case, you will receive a notification in the Event Viewer. When the Quorum log reaches the
specified size, Cluster Service will save the database and reset the log file. If you change the Quorum size on one
node, it will automatically take effect on the other.
Initiating a failure In order to help you test your failover policies, Cluster Administrator can initiate a failure.
This feature also allows you to test the restart settings on individual resources.
Identifying failovers In addition to configuring and managing a cluster, Cluster Administrator can also quickly
provide information on the health of the cluster. This is accomplished with indicators such as the Node Offline
icon shown in Figure 4.3.
Information - Description
IP address - If the application will run on a new virtual server, you need a unique IP address. You do not need an IP address if your application will run on an existing virtual server.
Virtual server - Even though the application resides on the cluster, clients access the application using a standard computer name. If you do not want to run the application on an existing virtual server, the wizard will prompt you for the new virtual server's name and a unique IP address. The wizard will then create the appropriate resource group and implement the virtual server.
Resource type for your application - The wizard lets you create a resource to manage your application. You must select the appropriate resource type for your needs.
Application resource name - If you use the wizard to create a resource for your application, you must name the resource.
Application resource dependencies - If the application requires other resources in order to run, you can create resource dependencies using the wizard.
Changing the Quorum resource location You can use Cluster Administrator to configure the location of the
Quorum resource after you install Cluster Service. The cluster's Property page includes a Quorum tab with a
number of settings, including the location of the Quorum resource. You can edit and reconfigure the Quorum
resource settings as needed.
Using Cluster.exe
In addition to managing your cluster using the GUI-based Cluster Administrator, you can also execute administrative tasks
from the command line. For example, you might need to configure a property on more than one cluster. Using Cluster.exe,
you can set properties through a single command execution. You can also execute command line tasks from within a script
to automate the configuration of many clusters, nodes, resources, and resource groups.
Cluster.exe is automatically installed with Cluster Service on each node. You can also run Cluster.exe in Windows NT 4
Server Enterprise Edition with Service Pack 3 or later.
NOTE
Unlike Cluster Administrator, Cluster.exe does not automatically restore previous connections when you use it to
administer a cluster.
Table 4.2 describes the primary arguments supported by Cluster.exe. For a complete listing of the properties and options
supported by each command, see Appendix B.
All but the first two options listed in Table 4.2 apply to the /CLUSTER options. If these options are used alone, Cluster.exe
will attempt to connect to the cluster on the node that is running Cluster.exe and apply the command-line option to this
cluster.
Table 4.2 Cluster.exe Command-Line Arguments
Argument - Description
/LIST[:domain-name] - Displays a list of clusters in the specified domain. If no domain is specified, the domain that the computer belongs to is used. Do not use the cluster name with this option.
[[/CLUSTER:]cluster-name] <options> - If you do not specify the cluster name, Cluster.exe will attempt to connect to the cluster running on the node that is running Cluster.exe. If the name of your cluster is also a cluster command or its abbreviation, such as cluster or c, use /cluster: to explicitly specify the cluster name.
/PROP[ERTIES] [<prop-list>] - Displays or sets the cluster's common properties. See Appendix B for more information on common properties.
/PRIV[PROPERTIES] [<prop-list>] - Displays or sets the cluster's private properties. See Appendix B for more information on private properties.
/REN[AME]:cluster-name - Renames the cluster to the specified name.
/VER[SION] - Displays the Cluster Service version number.
/QUORUM[RESOURCE][:resource-name] [/PATH:path] [/MAXLOGSIZE:max-size-kbytes] - Changes the name or location of the Quorum resource or the size of the Quorum log.
/REG[ADMIN]EXT:admin-extension-dll[,admin-extension-dll...] - Registers a Cluster Administrator extension DLL with the cluster.
/UNREG[ADMIN]EXT:admin-extension-dll[,admin-extension-dll...] - Unregisters a Cluster Administrator extension DLL from the cluster.
NODE [node-name] node-command - A node-specific cluster command. See Appendix B for a list of available commands.
GROUP [group-name] group-command - A group-specific cluster command. See Appendix B for a list of available commands.
RES[OURCE] [resource-name] resource-command - A resource-specific cluster command. See Appendix B for a list of available commands.
{RESOURCETYPE|RESTYPE} [resourcetype-name] resourcetype-command - A resource type-specific cluster command. See Appendix B for a list of available commands.
NET[WORK] [network-name] network-command - A network-specific cluster command. See Appendix B for a list of available commands.
NETINT[ERFACE] [interface-name] interface-command - A network interface-specific cluster command. See Appendix B for a list of available commands.
/? or /help - Displays cluster command line options and syntax.
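As an illustration of the syntax above, and assuming the cluster name MYCLUSTER and node NODEA used in the practices in this guide (NODEB is a hypothetical second node), typical invocations look like this:
cluster /LIST
cluster MYCLUSTER /PROP
cluster MYCLUSTER /QUORUM /MAXLOGSIZE:128
cluster MYCLUSTER NODE NODEA /PAUSE
cluster MYCLUSTER GROUP "Cluster Group" /MOVETO:NODEB
The NODE, GROUP, and RESOURCE commands accept many more options; see Appendix B for the complete list.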
Practice: Administering a Cluster Using Cluster Administrator
1. Type 128 in the Reset Quorum Log At field and click OK. This will return the maximum Quorum log size to its
original value.
Pausing and Resuming a Node
1. Switch to the Windows 2000 command prompt window, type cluster NODE NODEA /PAUSE at the command prompt, and press Enter. A message will indicate that NodeA has been paused.
2. Switch to Cluster Administrator. You should see an icon with an exclamation point in a yellow triangle on NodeA, indicating that it has been paused.
3. Switch to the Windows 2000 command prompt window, type cluster NODE NODEA /RESUME at the command prompt, and press Enter. A message will indicate that NodeA has resumed.
4. Switch to Cluster Administrator.
5. The icon indicating that NodeA was paused should no longer be present.
3.1.3 Configuring Resources, Resource Groups, and Virtual Servers
Network Name This resource type is used to assign a name to a resource on the cluster. This is typically
associated with an IP Address resource type in order to create a virtual server. Many applications and services that
you might want to cluster require a virtual server.
Generic Application When you implement an application that is not cluster-aware, you can use this resource type
to provide basic clustering capabilities. If the application qualifies to be clustered and can support being
terminated and restarted as the result of a failover, the Generic Application type might be all that is required to
increase the application's availability. If the application is not compatible with the generic type, you might
need to implement a cluster resource DLL in order for the application to support Cluster Service.
When you implement a cluster-unaware application using the Generic Application resource type, you must verify that the
application can run from both nodes in the cluster. This includes installing copies of the application on each node. If you
want the application to support failing over, you must configure it to use a shared disk for data storage. In this way, if the
application fails over to the other node, it can still access the required data.
An alternative to installing the application on each node is to install the application to a shared disk. While this
implementation offers the benefit of using less drive space (because it does not have to be installed twice), it does not
support rolling upgrades. If you intend to perform rolling upgrades to the application, the application will need to be
installed locally on each node.
Generic Service This is similar to the Generic Application resource type. If you intend to support a cluster-
unaware service, you can use the Generic Service resource type for basic cluster functionality. This resource type
will provide only the most fundamental level of clustering services. If your service requires advanced clustering
support, you must develop and use a custom resource DLL for the service.
Resource - Properties
DHCP Service - DHCP database file path; DHCP database files backup path; Audit log file location
Distributed Transaction Coordinator - None
File Share - Access permissions; Simultaneous user limit; Share name and comment; Path
Generic Application - Command line; Current directory; Use network name for computer name; Whether the application can interact with the desktop
Generic Service - Service name; Startup parameters; Use network name for computer name
IIS Server Instance - Service for this instance (FTP or WWW); Alias used by the virtual root
IP Address - IP address; Subnet mask; Network parameters; NetBIOS option
MSMQ Server - None
Network Name - Computer name
Physical Disk - Drive to be managed (cannot change once the resource has been configured)
Print Spooler - Path for the print spooler folder; Job completion time-out
WINS Service - Path to WINS database; Path to WINS backup database
Cluster Resource Groups
Resources must be organized into groups, called resource groups, which are managed by Cluster Service. In addition to the
general properties such as name, description, and preferred owner, groups also have failover and failback properties.
Together, these properties control how the resource group and the associated application or service responds when a node
is taken offline.
Failover Policy
The failover policy for a group is set using the Failover tab of the group's property sheet. You can set the Failover
Threshold and Failover Period properties based on your needs. The Failover Threshold specifies the number of times the
group can fail within the number of hours specified by the Failover Period property. If the group fails more than the
threshold value, Cluster Service will leave the affected resource within the group offline. For example, if a group Failover
Threshold is set to 3 and its Failover Period is set to 8, Cluster Service will fail over the group up to three times within an
eight-hour period. The fourth time a resource in the group fails, Cluster Service will leave the resource in the offline state
instead of failing over the group. All other resources in the group will be unaffected.
Failback Policy
By default, resource groups are not configured to fail back to the original node. Instead, after a failover, the group remains
on the second node until you manually move the group to the appropriate node. If you want a group to run on a preferred
node and return to that node after a failover, you must implement a failback policy for the group. You can specify whether
the group should fail back immediately after the original node comes back online or at a specified time during the day. For
example, you might want to fail back a group only during non-business hours to minimize the impact on clients. In order
for a group to fail back to a specific node, you must set the Preferred Owners property of the group.
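If you prefer the command line, the same policies can also be set as group common properties with Cluster.exe. This is a sketch only; it assumes the standard common property names FailoverThreshold, FailoverPeriod, and AutoFailbackType (0 prevents failback, 1 allows it), which are not listed in this chapter:
cluster MYCLUSTER GROUP "Cluster Group" /PROP FailoverThreshold=3
cluster MYCLUSTER GROUP "Cluster Group" /PROP FailoverPeriod=8
cluster MYCLUSTER GROUP "Cluster Group" /PROP AutoFailbackType=1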
In this practice, you will configure a File Share resource and associated group and then manually bring the group online.
Creating a Group
5. Open Cluster Administrator. The Open Connection To Cluster dialog box appears.
6. Type the name of the cluster (in this case, MYCLUSTER) and click Open.
7. Right-click Groups, point to New, and click Group. The New Group Wizard will start.
8. Type Cluster Printer in the Name box.
9. Type Group For Printer Resources in the Description box.
10. Click Next.
11. In the Preferred Owners dialog box, add both nodes to the Preferred Owners list.
12. Click Finish. A message box will appear stating that the group was created successfully.
13. Click OK.
Transferring a Resource
4. Open the Resources folder.
5. Right-click Disk W:, and from the popup menu that appears, click Change Group. A listing with all of the
available groups in the cluster will appear.
6. Click the Cluster Printer group. A message box will appear asking if you are sure you want to change the group.
7. Click Yes. The Disk W: resource will be displayed as part of the Cluster Printer group. Having the disk resource
as part of the group will allow you to add resources to the Cluster Printer group that have a dependency on a disk
resource.
Creating a Group
1. Open Cluster Administrator.
2. In the Open Connection To Cluster dialog box, type the name of the cluster (in this case, MYCLUSTER) and click Open.
3. Right-click Groups, point to New, and click Group.
4. Type Virtual Server in the Name box.
5. Type Group for virtual server in the Description box.
6. Click Next.
7. In the Preferred Owners dialog box, add NodeA to the Preferred Owners list.
8. Click Finish. A message box will appear stating that the group was created successfully.
9. Click OK.
Several properties for resources and groups determine the actions that occur during a failover or failback. To set these
properties, you can use Cluster Administrator or the Cluster.exe command-line utility. This practice will introduce several
of the properties that are important in configuring and monitoring the failover and failback processes.
Figure 14 The Failback tab of the Cluster Group Properties window.
1. Click the option to prevent failback and then click OK to close the Cluster Group Properties dialog box.
RestartAction This property specifies the action to perform if a resource fails. You can use one of the following
settings:
ClusterResourceDontRestart (0) Do not restart after a failure.
ClusterResourceRestartNoNotify (1) Attempt to restart the resource after a failure. If the restart threshold is
exceeded by the resource within its restart period, Cluster Service will not attempt to fail over the group to
another node in the cluster.
ClusterResourceRestartNotify (2) Attempt to restart the resource after a failure. If the restart threshold is
exceeded by the resource within its restart period, Cluster Service will attempt to fail over the group to another
node in the cluster. This is the default setting.
Unless the RestartAction property is set to ClusterResourceDontRestart, Cluster Service will attempt to restart a failed
resource.
RestartThreshold This property specifies the number of restart attempts that will be made on a resource before
Cluster Service initiates the action specified by the RestartAction property. These restart attempts must also be
made within the time interval specified by the RestartPeriod property. Both the RestartPeriod and the
RestartThreshold properties are used to limit restart attempts.
RestartPeriod This property specifies the amount of time, in milliseconds, during which restart attempts will be
made on a resource. The number of attempts allowed within a RestartPeriod is determined by the
RestartThreshold setting. Both the RestartPeriod and the RestartThreshold properties are used to limit restart
attempts. The RestartPeriod property is reset to 0 once the interval setting is exceeded. If no value is specified for
RestartPeriod, the default value of 90000 is used.
PendingTimeout This property specifies the amount of time, in seconds, that a resource in a Pending Online or
Pending Offline state must resolve its status before Cluster Service fails the resource or puts it offline. The default
value is three minutes.
PendingTimeout has the following relationship with RestartPeriod and RestartThreshold:
RestartPeriod >= RestartThreshold x PendingTimeout
RetryPeriodOnFailure This property specifies the amount of time, in milliseconds, that a resource will remain in
a failed state before Cluster Service attempts to restart it. Until an attempt is made to locate and restart a failed
resource, the resource will remain in a failed state by default. Setting the RetryPeriodOnFailure property allows a
resource to automatically recover from a failure.
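Because these are common resource properties, they can also be displayed or changed with Cluster.exe rather than through the Properties dialog box. The following is a sketch using the Cluster IP Address resource from the practice that follows; the /PROP usage mirrors the /PRIV examples shown later for disk resources:
cluster MYCLUSTER RESOURCE "Cluster IP Address" /PROP
cluster MYCLUSTER RESOURCE "Cluster IP Address" /PROP RestartAction=2
cluster MYCLUSTER RESOURCE "Cluster IP Address" /PROP RestartThreshold=3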
Figure 15 The Advanced tab of the Cluster IP Address Properties window.
1. Make sure that the resource is set to restart after a failure, and set the Restart Threshold back to its original value of 3.
2. Set the Restart Period back to its original value of 900 seconds.
3. Set the Pending Timeout field back to its original value of 180 seconds.
4. Close the Cluster IP Address Properties dialog box to save the new settings.
3.2 Creating an ACI Group and basic resources
Inserting the ACI components and the necessary applications, such as Corba and Versant, requires two basic steps:
creating an ACI group
creating the basic resources in the cluster
Note
Physical Disk This resource type is for managing shared drives on your cluster. Because data corruption can occur if
more than one node has control of the drive, the Physical Disk type allows you to configure which node has control of
the resource at a given time.
IP Address A number of cluster implementations require IP addresses. The IP Address resource type is used for this
purpose. Typically, the IP Address resource is used with a Network Name resource in order to create a virtual server.
Network Name This resource type is used to assign a name to a resource on the cluster. This is typically associated
with an IP Address resource type in order to create a virtual server. Many applications and services that you might
want to cluster require a virtual server.
Property - Description
Name - Required. Specifies the name of the resource.
Description - Optional. Describes the resource.
Possible Owners - Required. Specifies which nodes own the resource. If the Quorum resource resides on the disk, all nodes must be owners.
Required Dependencies - None.
Disk - Required. Specifies the drive letter or letters for the Physical Disk resource.
Once created, the Physical Disk resource can be brought online, used as a dependency, or otherwise controlled using the
Resource Management functions. It will appear as a Cluster resource in Cluster Administrator and in Cluster.exe. You can
view and set the Physical Disk resource properties by using the Properties page, shown in Figure 5.1.
Drive
The Drive property specifies the drive letter for the Physical Disk resource. If you're using the Drive property and multiple
drive letters are associated with the disk, you must set the Drive property to include all of the drive letters. You must also
make sure that the assigned drive letter does not conflict with existing drive letters anywhere in the cluster, including each
node's local drives.
Signature
The Signature property specifies an identifier for the disk. It is a DWORD value with a range from 0 to 0xFFFFFFFF.
When you create a new disk resource using Cluster.exe, you set the Drive or Signature private property to the drive or
signature of the disk. You must set one of these two properties, but you cannot set both. Neither property can be changed
once the assignment is made and the resource is created. When you create a new disk resource using Cluster Administrator,
you're not required to provide one of these properties. Instead, a list of available disks is displayed for you to choose from.
SkipChkdsk
The SkipChkdsk property determines whether the operating system runs chkdsk on a physical disk before attempting to
mount the disk. A TRUE setting causes the operating system to mount the disk without running chkdsk. A FALSE setting
causes the operating system to run chkdsk first and, if errors are found, take action based on the ConditionalMount
property. However, if both the SkipChkdsk and ConditionalMount values are 0 (FALSE), chkdsk will not run and the disk
will be left offline. Table 5.3 summarizes the interaction between SkipChkdsk and ConditionalMount.
Table 5.3 SkipChkdsk and ConditionalMount Interaction
SkipChkdsk Setting - ConditionalMount Setting - Chkdsk Runs? - Disk Mounted?
FALSE - TRUE - Yes - If chkdsk reports errors, no. Otherwise, yes.
FALSE - FALSE - No - No
TRUE - TRUE - No - Yes
TRUE - FALSE - No - Yes
Because forcing a disk to mount when chkdsk reports errors can result in data loss, you should exercise caution when
changing these properties.
ConditionalMount
The ConditionalMount property determines whether a physical disk is mounted, depending on the results of chkdsk. A
TRUE setting prevents the operating system from mounting the disk if chkdsk reports errors. A FALSE setting causes the
operating system to attempt to mount the disk regardless of chkdsk failures. The default is TRUE. Note that if chkdsk has
not run, it will not produce errors, so the operating system will attempt to mount the disk regardless of the
ConditionalMount setting.
MountVolumeInfo
The MountVolumeInfo property stores information used by the Windows 2000 Disk Manager. Cluster Service updates the
property data stored in MountVolumeInfo whenever a disk resource is brought online. Cluster Service also updates
MountVolumeInfo when the drive letter of a disk resource is changed using Disk Manager.
MountVolumeInfo data consists of a byte array organized as follows:
A 16-byte "header" consisting of the disk signature (first 8 bytes) and the number of volumes (second 8 bytes).
One or more 48-byte descriptive entries. (See Table 5.4.)
Table 5.4 MountVolumeInfo Data
Position Data
First 16 bytes Starting offset
Second 16 bytes Partition length
Next 8 bytes Volume number
Next 2 bytes Disk type
Next 2 bytes Drive letter
Last 4 bytes Padding
Displaying Private Properties
You can display the Physical Disk resource private properties by using Cluster.exe. These properties can help
administrators determine when to run chkdsk against a cluster disk. You can use the following command to display disk
resource private properties:
cluster <clustername> resource "Disk Q:" /priv
Here's an example of the output for a disk resource named Disk Q:
Listing private properties for `Disk Q:':
T Resource Name Value
D Disk Q: Signature 1415371731 (0x545cdbd3)
D Disk Q: SkipChkdsk 0 (0x0)
D Disk Q: ConditionalMount 1 (0x1)
B Disk Q: DiskInfo 03 00 00 00 ... (264 bytes)
B Disk Q: MountVolumeInfo D3 DB 5C 54 ... (104 bytes)
The values assigned to SkipChkdsk and ConditionalMount determine the behavior of chkdsk. If the MSCS folder on the
Quorum drive is inaccessible or if the disk is found to be corrupt (via checking of the dirty bit), chkdsk will behave as
follows:
If SkipChkdsk = 1 (which means TRUE), Cluster Service will not run chkdsk against the dirty drive and will
mount the disk for immediate use. (Note that SkipChkdsk = 1 overrides the ConditionalMount setting and that
Cluster Service performs the same no matter what the ConditionalMount property is set to.)
If SkipChkdsk = 0 (which means FALSE) and ConditionalMount = 0, Cluster Service fails the disk resource and
leaves it offline.
If SkipChkdsk = 0 and ConditionalMount = 1, Cluster Service runs chkdsk /f against the volume found to be dirty
and then mounts it. This is the current default behavior for Windows 2000 clusters and is the only behavior for
Windows NT 4 clusters.
You can use the following commands to modify these resource private properties:
cluster clustername res "Disk Q:" /priv Skipchkdsk=0[1]
cluster clustername res "Disk Q:" /priv ConditionalMount=0[1]
You can track disk management changes using the fixed-length values returned by the MountVolumeInfo property.
MountVolumeInfo replaces DiskInfo in Windows 2000.
Here's a sample MountVolumeInfo entry:
D3DB5C540400000000020000000000000000400600000000010000000746000000024
0060000000000FE3F060000000002000000074B00000000800C000000000000400600
00000003000000074C00000040C0120000000000C03F06000000000400000007490000
The signature is D3DB5C54, and the number of volumes is 04000000. The table below describes how to interpret the rest
of the information.
Offset - Partition Length - Volume Number - Disk Type - Drive Letter - Padding
00020000.00000000 - 00004006.00000000 - 01000000 - 07 - 46 - 0000
00024006.00000000 - 00FE3F06.00000000 - 02000000 - 07 - 4B - 0000
0000800C.00000000 - 00004006.00000000 - 03000000 - 07 - 4C - 0000
0040C012.00000000 - 00C03F06.00000000 - 04000000 - 07 - 49 - 0000
For compatibility in a mixed-node cluster where one node is running Windows 2000 and the other is running Windows NT
4, DiskInfo is retained in the properties of the disk resource.
Whenever a disk resource is brought online, Cluster Service checks the physical disk configuration and updates the
information in MountVolumeInfo and DiskInfo. Corrections are made to the physical disk configuration registry entries as
needed. When changes are made using Disk Manager, any values related to drive letters are updated dynamically.
3.2.4 Creating a Network Name Resource
2. Click the Groups folder.
3. Right-click Virtual Server, point to New, and click Resource.
4. In the New Resource dialog box, enter the information below:
Name Network Name
Description Network name for virtual server
Resource Type Choose Network Name
Group Choose Virtual Server
Do not select the Run This Resource In A Separate Resource Monitor check box.
5. Click Next.
6. NodeA should appear in the list of Possible Owners. If it does not, add it to the list.
7. Click Next.
8. In the Dependencies dialog box, add the server IP Address resource to the Resource Dependencies list and click
Next.
9. In the Network Name Parameters dialog box, type CLUSTERSVR.
10. Click Finish. A message box will appear stating that the Network Name resource was created successfully.
11. Click OK.
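The same resource can also be created with Cluster.exe instead of the wizard; the following is only a sketch, and the resource name "Network Name" and the dependency name "IP Address" are assumptions that must match the names actually used in your group:
rem resource and dependency names below are examples only
cluster <clustername> res "Network Name" /create /group:"Virtual Server" /type:"Network Name"
cluster <clustername> res "Network Name" /adddep:"IP Address"
cluster <clustername> res "Network Name" /priv Name=CLUSTERSVR
cluster <clustername> res "Network Name" /online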
3.3 Installing Versant on Microsoft Cluster Server
3.3.1 Generic
Overview
Server clusters are configurations of two or more machines that share some disks. A failure in one machine causes the data
services to automatically relocate to the backup machine. If the shared disk is itself a RAID, then the configuration admits
no single point of failure.
Configuration
There are a number of configurations for server clusters. Here the two nodes share a disk array. The secondary node
monitors the primary to determine if it has failed. When it detects a failure, it immediately starts a script that restarts all
the failed services on the secondary node and changes the IP address of the backup node to that of the failed node.
Figure: Versant in a cluster with a shared RAID system (GUI and application logic on the clients use the Versant API; the Versant server process runs on the active node with a redundant node for switch over; the database files reside on the shared RAID system).
After installing Versant and before rebooting the machines, the Startup Type for the Versantd service needs to be changed from Automatic to Manual. To change this setting, open the Services dialog box from the Control Panel, double-click Versantd, and change the setting as shown below.
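Alternatively, the startup type can be changed from the command line with sc.exe (available on Windows 2000); this is only a sketch and assumes the service is registered under the name Versantd (the space after start= is required by sc.exe):
rem Versantd is the service name used above; sc.exe must be available on the node
sc config Versantd start= demand
sc query Versantd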
Figure 18 Versantd properties, LogOn, (local Computer)
The service runs under the Local System account, and MSCS must be allowed to interact with it. The new Generic Service resource of MSCS will then start and stop this local Versantd service.
Do not delete or rename the Cluster Group. Instead, model your new groups on the Cluster Group by modifying the Disk
Groups or creating a new group as we will see in the following sections.
Note
Each cluster needs only one Time Service resource. You do not need, and should not create, a Time Service resource in
each group.
When you add a new group, the New Group wizard guides you through the two-step process. Before running the New
Group wizard, make sure you have all the information you need to complete the wizard. Use the following table to prepare
to run the wizard.
For a Versant database group, the following resources need to be part of this new group:
Figure 19 Versantd Properties, General
The Generic Service depends on some other resources; here these are the Physical Disk and the Network Name.
Then, the startup parameters for the Versantd service should be as follows:
VERSANT_HOST_NAME=GlobalHostName VERSANT_IP=10.233.3.33 HOMEDRIVE=R: HOMEPATH=\db\ VERSANT_DB=R:\DB VERSANT_DBID=R:\DB VERSANT_DBID_NODE=GlobalHostName
Important Note: Windows NT 4.0 Cluster Server has a bug in the way the startup parameters are passed to the service. As a workaround, add an extra dummy parameter at the start and at the end of the parameter list, for example junk=123. With Windows 2000 Advanced Server Cluster Server this bug is fixed.
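On Windows NT 4.0 the complete parameter string with this workaround would therefore look, for example, as follows (junk is an arbitrary dummy parameter name; the other values are the ones from the example above):
junk=123 VERSANT_HOST_NAME=GlobalHostName VERSANT_IP=10.233.3.33 HOMEDRIVE=R: HOMEPATH=\db\ VERSANT_DB=R:\DB VERSANT_DBID=R:\DB VERSANT_DBID_NODE=GlobalHostName junk=123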
Figure 22 Cluster Administrator, Database Group
Hosts file changes
Add a line to the hosts file (C:\WINNT\system32\drivers\etc\hosts) on both nodes as shown below.
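A minimal sketch of such an entry, assuming the global IP address 10.233.3.33 from the startup parameters above and the shared network name GlobalHostName (replace both with your own values):
10.233.3.33    GlobalHostName    # assumed global cluster address and name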
3.3.9 Verification of Installation and Configuration
A simple way to check that Versant works on the cluster is the following:
Precondition
Versant is installed and configured on the cluster
One node is active
A third computer (not a cluster node) belongs to the same domain and has the same version of Versant installed
From the command prompt of the third computer (logged in as domain user x), run the commands that create a test database:
makedb -g dbname@globalhostname
createdb -i dbname@globalhostname
Normal flow
From the command prompt of the third computer (logged in as domain user x), run the command
db2tty -d dbname@globalhostname
dbname@globalhostname is a placeholder for your extended database name (with the cluster hostname)
If the command produces no error output, the installation on this node is OK. Then move the Versant group to the other node and repeat the test.
Sample
C:\>makedb -g test1@lion2
VERSANT Utility MAKEDB Version 6.0.0.2.0
Copyright (c) 1989-2000 VERSANT Corporation
C:\>createdb -i test1@lion2
VERSANT Utility CREATEDB Version 6.0.0.2.0
Copyright (c) 1989-2000 VERSANT Corporation
C:\>db2tty -d test1@lion2
C:\>
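Moving the Versant group to the other node can be done in Cluster Administrator (right-click the group, then Move Group) or, as a sketch, from the command line; the group name "Versant Group" and the node name Node2 are assumptions and must match your own configuration:
rem group and node names below are examples only
cluster <clustername> group "Versant Group" /moveto:Node2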
3.4 Installing Orbix/Corba on Microsoft Cluster Server
3.4.1 Generic
Server clusters are configurations of two or more machines that share some disks. A failure in one machine causes the data
services to automatically relocate to the backup machine. If the shared disk is itself a RAID, then the configuration admits
no single point of failure.
There are a number of configurations for server clusters. Here the two nodes share a disk array. The inactive node monitors the primary to determine whether it has failed. When it detects a failure, it immediately starts a script that restarts all the failed services on the secondary node and takes over the shared disk, the global IP address, and the global hostname of the failed node.
Figure: Corba in a cluster with a shared RAID system (GUI and application logic on the clients use the API; the Corba server process runs on the active node with a redundant node for switch over; the database files reside on the shared RAID system).
be performed. Namely, changes to the configuration domain can be carried out from any host in the configuration domain and are visible to all the machines that are configured to use it.
Both approaches allow a configuration domain to be shared between a number of hosts. However, the file-based type requires creating configuration files and then making them available to the other machines, either by copying them from the host where the domain was initially created or via a shared network file system (e.g. Windows Networking). The second type is more flexible and therefore preferred: the configuration information is created on one highly reliable and always accessible host, and the Orbix environments of the other hosts are linked to this configuration domain. The configuration repository approach provides a centralized store of configuration information (such as loaded plug-ins and initial object references) for all machines running ACI servers. This model is depicted in the figure below. The configuration repository itself is an NT service that runs on a dedicated host. This host would also run domain-wide services such as the Naming Service. It can be either a stand-alone machine or one of the machines running ACI servers. A particularly good candidate is the machine running the Network Manager (NM) server, because this server most frequently contacts the Naming Service, where the Domain Manager (TDM) servers are registered.
Figure: Configuration repository model (the ACI server hosts are connected to the host running the Configuration Repository via CORBA transport).
Preparation of the Orbix 2000 environment for ACI usage involves a three-step process. The first step is executed on every machine destined to run any ACI server, as well as on a dedicated host that is to run the configuration repository and other Orbix services. The second step is performed only on the host destined to run the configuration repository and other Orbix services. The last step applies to all machines except the one hosting the configuration repository and Orbix services. These steps include:
1. Stop any running Orbix services whose names start with IT (Control Panel | Services).
2. Uninstall Orbix 2000.
3. Manually delete the previous Orbix 2000 directory (after uninstallation).
4. Use Regedit.exe to remove all branches whose names start with IT under
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services]
...
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet003\Services]
3.4.2.1.3 Step 2 - Creation of a configuration domain
Run the configure utility from %Orbix-installed-dir%\orbix_art\1.2\bin\:
%Orbix-installed-dir%\orbix_art\1.2\bin\configure
Configuration Domain
Locator Daemon
Activator Daemon
Orbix 2000 Services
Configuration Scripts
You can enter input interactively or from a pre-configured file.
machine. Do you want to:
Enter [1]:
Enter [1]:
well-known location. You may also want to specify the hostname used
to advertise the Configuration Repository. If you do not specify a
hostname, the Configuration Repository will use the system's current
hostname.
Where do you want to place databases and logfiles for this domain [E:\Program
Files\IONA\var\ACI_Network]:
Deploy Locator
-----------------------------------------------
The Orbix 2000 Locator service manages persistent servers. You must
deploy this service if you intend to develop servers using PERSISTENT
POAs, or if you intend to use other Orbix 2000 services such as the
Naming Service or Interface Repository.
To ensure interoperability among services of the same type across domains, a unique name must be provided for the locator.
Use the default local hostname.
The Orbix 2000 Locator must listen on a fixed port so that applications
can access it at a well-known location. You may also want to specify
the hostname used to advertise the Locator. If you do not specify a
hostname, the Locator will use the system's current hostname.
What hostname should the Locator advertise (leave blank for default):
Deploy Activator
-----------------------------------------------
The Orbix 2000 Activator service starts applications on a
particular machine. You must deploy an Activator service on every
machine on which you intend to use Orbix 2000's automatic server
activation feature.
Services [4]:
configure other machines with access to this domain.
Would you like to generate client and service preparation files [yes]:
Instead of using the above batch files (they would have to be run every time the system is restarted), open the Control Panel | System Properties dialog and set the following system variables (adjust the path to the Orbix installation directory if necessary):
IT_PRODUCT_DIR=E:\Program Files\IONA (this should already be set by the Orbix installation; please check only)
IT_CONFIG_DOMAINS_DIR=E:\Program Files\IONA\etc\domains
IT_DOMAIN_NAME=ACI_Network
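For reference, a minimal sketch of what such a batch file would contain if the batch approach were used anyway (the paths assume the default installation directory shown above; note that plain set commands affect only the current console session):
rem set_orbix_env.bat - hypothetical helper for the ACI_Network domain
set IT_PRODUCT_DIR=E:\Program Files\IONA
set IT_CONFIG_DOMAINS_DIR=E:\Program Files\IONA\etc\domains
set IT_DOMAIN_NAME=ACI_Network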
The creation of the Orbix configuration domain ACI_Network is now complete. The computer must be restarted for the changes to take effect. If all the ACI servers are intended to run on this machine (which also hosts the configuration repository), this is the only step needed; otherwise continue with step 3 on the other machines.
Step 3 - Linking an installed Orbix environment to an existing configuration domain (created in Step 2)
This step has to be repeated on every machine where ACI servers are to run. If the given ACI server is intended to run on the same host where the configuration domain was created (in the previous step), omit this step.
Precondition: The machine hosting the configuration repository must be running while this step is executed.
Run the configure utility from %Orbix-installed-dir%\orbix_art\1.2\bin\:
%Orbix-installed-dir%\orbix_art\1.2\bin\configure
Configuration Domain
Locator Daemon
Activator Daemon
Orbix 2000 Services
Configuration Scripts
You can enter input interactively or from a pre-configured file.
This will show a selection menu :
Configuration Activities
-----------------------------------------------
Orbix 2000 organizes configuration information into domains that can be
shared across multiple machines. This utility allows you to create a
new domain or build a link to an existing domain running on another
machine. Do you want to:
Enter [1]:
Enter the hostname where the configuration domain has been created (the host used in step 2).
Where do you want to place databases and logfiles for this domain [E:\Program
Files\IONA\var\ACI_Network]:
Use as Default Domain
-----------------------------------------------
Orbix 2000 can designate one domain as being the default domain for
this machine. This domain will be used for applications that are
started without an explicit domain.
Deploy Locator
-----------------------------------------------
The Orbix 2000 Locator service manages persistent servers. You must
deploy this service if you intend to develop servers using PERSISTENT
POAs, or if you intend to use other Orbix 2000 services such as the
Naming Service or Interface Repository.
Enter no; a Locator needs to run only on the host where the configuration domain has been created.
Deploy Activator
-----------------------------------------------
The Orbix 2000 Activator service starts applications on a
particular machine. You must deploy an Activator service on every
machine on which you intend to use Orbix 2000's automatic server
activation feature.
Enter no; an Activator needs to run only on the host where the configuration domain has been created.
to setup your java environment for this domain
Instead of using the above batch files (they would have to be run every time the system is restarted), open the Control Panel | System Properties dialog and set the following system variables (adjust the path to the Orbix installation directory if necessary):
IT_PRODUCT_DIR=E:\Program Files\IONA (this should already be set by the Orbix installation; please check only)
IT_CONFIG_DOMAINS_DIR=E:\Program Files\IONA\etc\domains
IT_DOMAIN_NAME=ACI_Network
The configuration of this machine to use the ACI_Network configuration domain is now complete. The computer must be restarted for the changes to take effect.
Next Step:
For the cluster software you have to change the properties of all local Corba services on both nodes with the Computer Management tool:
The services run under the Local System account, and MSCS must be allowed to interact with them. The new Generic Service resources of MSCS will then start and stop these local Corba services.
Figure 26 IT activator default-domain Properties.
Figure 27 IT config_rep cfr-ACI_Network Properties.
Figure 28 IT activator default-domain Properties.
Figure 29 IT activator default-domain Properties.
Figure 30 IT activator default-domain Properties.
Figure 31 IT activator default-domain Properties.
Note: If the first installation is faulty, a manual deletion of the registry entries for the IT services is recommended.
3.4.3 Using Cluster Administrator
3.4.3.1 Introduction
Cluster Administrator shows you information about the groups and resources on all of your clusters and specific
information about the clusters themselves. A copy of Cluster Administrator is automatically installed on both cluster nodes
when you install MSCS. For remote administration, you can install separate copies of Cluster Administrator on other
computers on your network. The remote and local copies of Cluster Administrator are identical.
Note
Disk Group
One Disk Group is created for each disk resource on the shared SCSI bus. A Physical Disk resource is
included in each group.
Do not delete or rename the Cluster Group. Instead, model your new groups on the Cluster Group by modifying the Disk
Groups or creating a new group, as we will see in the following sections.
Each cluster needs only one Time Service resource. You do not need, and should not create, a Time Service resource in
each group.
When you add a new group, the New Group wizard guides you through the two-step process. Before running the New
Group wizard, make sure you have all the information you need to complete the wizard. Use the following table to prepare
to run the wizard.
For a Corba database group, the following resources need to be part of this new group:
The boot order of the Corba services under a cluster configuration follows the real internal dependencies of these services.
The Corba services must be started in the following order:
1. IT config_rep cfr-ACI_Network
2. IT locator default-domain
3. IT naming default-domain
4. IT activator default-domain
5. IT ifr default-domain
6. IT event default-domain
Use an existing Cluster Group or create a new one with the following resources: shared Physical Disk, Network Name, IP Address.
Add a Generic Service resource for each Corba service.
In this document the two nodes are called Node1 and Node2. The shared network name is GlobalHostName and the shared drive is called R.
Each Generic Service depends on some other resources; here these are the Physical Disk and the Network Name.
The following pictures show the properties of the IT activator default-domain service. This example can be used as a sample for the other Corba services.
On the following pages, observe the start order resulting from the dependencies; a command-line sketch of these dependencies follows the list below.
The Corba services must be started in the following order:
1. IT config_rep cfr-ACI_Network
2. IT locator default-domain
3. IT naming default-domain
4. IT activator default-domain
5. IT ifr default-domain
6. IT event default-domain
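If the Generic Service resources are named after these services, the start order can be enforced with resource dependencies; the following Cluster.exe sketch assumes exactly those resource names and must be adapted to the names actually given to the Generic Service resources:
rem resource names below are assumed to match the service names listed above
cluster <clustername> res "IT locator default-domain" /adddep:"IT config_rep cfr-ACI_Network"
cluster <clustername> res "IT naming default-domain" /adddep:"IT locator default-domain"
cluster <clustername> res "IT activator default-domain" /adddep:"IT naming default-domain"
cluster <clustername> res "IT ifr default-domain" /adddep:"IT activator default-domain"
cluster <clustername> res "IT event default-domain" /adddep:"IT ifr default-domain"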
Figure 33 IT activator default-domain Dependencies properties
Figure 35 IT activator default-domain Parameter properties
Figure 37 Cluster Administrator, Group including corba services
3.4.9 Verification of Installation and Configuration
A simple way to check that Corba works on the cluster is the following:
Precondition
Corba is installed and configured on the cluster
One node is active
A third computer (not a cluster node) belongs to the same domain, has the same version of Corba installed, and is linked to the same Corba domain
Normal flow
From the command prompt of the third computer (logged in as domain user x), start the command
itadmin
then enter
locator show
If the output uses the GlobalHostName, the installation and configuration on this node are OK.
Then move the Corba group to the other node and repeat the test.
Sample
C:\>itadmin
% locator show
Locator Name: GlobalHostName /it_locator
Domain Name: GlobalHostName
Host Name: GlobalHostName
%
3.5 Installing ACI Process on Microsoft Cluster Server
3.5.1 Generic
Server clusters are configurations of two or more machines that share some disks. A failure in one machine causes the data
services to automatically relocate to the backup machine. If the shared disk is itself a RAID, then the configuration admits
no single point of failure.
There are a number of configurations for server clusters. Here the two nodes share a disk array. The inactive node monitors the primary to determine whether it has failed. When it detects a failure, it immediately starts a script that restarts all the failed services on the secondary node and takes over the shared disk, the global IP address, and the global hostname of the failed node.
Figure: ACI in a cluster (GUI and application logic on the clients use the ACI API; the ACI server process runs on the active node with a redundant node for failover).
ACI is a placeholder for NM, DM or EM servers.
Reliability:
This configuration provides the same level of reliability as a RAID configuration, because all needed processes are already configured for MSCS. There is no single point of failure.
Availability
The duration of an application outage will depend primarily on the time required to restart all resources of its group and the clients (server, client, snmpserver, brass).
Failback and Failover
Failback time is expected to be symmetric with failover.
3.5.2 Configuration instructions
Install ACI software (NM, DM or EM) on both nodes on a local disk (C:). Let the installation pick the default directories.
Each node shall provide its own ACI service for each needed process (NM server, DM server, EM server, SNMPserver, QD2server, Brass). This part of the ACI installation and configuration is independent of the cluster configuration, but the node resources (disk, IP address, hostname, Corba and Versant) must be activated first, because they are needed for the installation. During the installation (NM, DM or EM), the shared data must be placed on the shared disk. Shared data are all databases, all logs, and BACKGROUND. The installation of the database is the most important point here.
After the installation it is recommended to review the hosts and services files under C:\WINNT\system32\drivers\etc.
127.0.0.1 localhost
218.1.17.83 VERSANDHOST
218.1.17.80 Node1
218.1.17.81 Node2
Next Step:
For the cluster software you have to change the properties of the local ACI services on both nodes with the Computer Management tool:
3.5.2.2 Change ACI Services Properties on local nodes
After installing the ACI application and before rebooting the machines, the Startup Type for the ACI services needs to be changed from Automatic to Manual. To change this setting, open the Services dialog box from the Control Panel, double-click each ACI service, and change the setting as shown below.
The services run under the Local System account, and MSCS must be allowed to interact with them. The new Generic Service resources of MSCS will then start and stop these local ACI services.
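As with the Versantd service, the startup type can also be switched from the command line; this is only a sketch, and <ACI service name> is a placeholder for the actual service names (NM server, DM server, EM server, SNMPserver, QD2server, Brass) as they appear in the Services dialog:
rem repeat for every installed ACI service; take the exact names from the Services dialog
sc config <ACI service name> start= demand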
3.5.3 Using Cluster Administrator
3.5.3.1 Introduction
Cluster Administrator shows you information about the groups and resources on all of your clusters and specific
information about the clusters themselves. A copy of Cluster Administrator is automatically installed on both cluster nodes
when you install MSCS. For remote administration, you can install separate copies of Cluster Administrator on other
computers on your network. The remote and local copies of Cluster Administrator are identical.
Note
The default Cluster Group contains an IP Address resource, a Cluster Name resource, and a Time Service
resource. (This group is essential for connectivity to the cluster).
Disk Group
One Disk Group is created for each disk resource on the shared SCSI bus. A Physical Disk resource is
included in each group.
Do not delete or rename the Cluster Group. Instead, model your new groups on the Cluster Group by modifying the Disk
Groups or creating a new group, as we will see in the following sections.
Each cluster needs only one Time Service resource. You do not need, and should not create, a Time Service resource in
each group.
When you add a new group, the New Group wizard guides you through the two-step process. Before running the New
Group wizard, make sure you have all the information you need to complete the wizard. Use the following table to prepare
to run the wizard.
For a database group, the following resources need to be part of this new group:
The boot order of the ACI services under a cluster configuration follows the real internal dependencies of these services.
The ACI service should be started after the following resources:
Disk
IP address
Hostname
Corba
Versant
Important: The start order of these resources is flexible to some extent. For example, Corba can be started after Versant or the other way round, because the two resources are independent of each other.
Use an existing Cluster Group or create a new one with the following resources: shared Physical Disk, Network Name, IP Address.
Add a Generic Service resource for the ACI service.
In this document the two nodes are called Node1 and Node2. The shared network name is GlobalHostName and the shared drive is called R.
The Generic Service depends on some other resources; here these are the Physical Disk, the Network Name, and the IP Address.
The following pictures show the properties of the DM service as a sample for this step.
DM is started after the last Corba service.
Figure 44 Parameter properties
Figure 46 Cluster Administrator shows Group including online services
The second step is to configure, in Cluster Administrator, the registry key properties of the resource for this process.
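Registry key replication for a Generic Service resource can be set up on the Registry Replication tab of the resource properties or, as a sketch, with Cluster.exe; both the resource name "DM Server" and the registry key below are placeholders and must be replaced by the resource and the HKEY_LOCAL_MACHINE subkey that the respective ACI process actually uses:
rem resource name and registry key are placeholders only
cluster <clustername> res "DM Server" /addcheckpoints:"SOFTWARE\<vendor>\<ACI process key>"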
Normal flow
From the third computer (logged in as domain user x), start the clients with the command parameter SERVER=GlobalHostName.
If the connection to the server works, the installation on this node is OK. Then move the group to the other node and repeat the test.
3.6 Hard/Software Requirements and Customer Supply
3.6.1 Requirements Section (for Windows)
Please note: all values in this section are minimum requirements for sufficient operating performance. The recommended values are as defined in the Customer Supply Section.
Operation with less powerful servers is possible, but will decrease operating performance. It is advised to meet at least the memory requirement.
Hardware Requirements
Component Type Remark
General Tower Interfaces (recommendation): 2 x RS232C, 1 x Centronics, 1 x Keyboard (PS/2), 1x Mouse (PS/2)
CPU Pentium III 700 MHz
RAM 512 MB >10,000 subscribers: additional 512 MB recommended
Hard Disk 6 GB (Ultra-SCSI) max. DB size = 500 MB
9 GB (Ultra-SCSI) max. DB size = 2 GB
Floppy 1.44 MB
CD-ROM 32x (SCSI)
DAT-Streamer 12 GB (SCSI) optional, for backup
LAN-Adapter 10/100 Mbit/s
Disk-Controller SCSI
Keyboard PS/2 (recommended)
Mouse PS/2 (recommended)
Graphics Adapter Card SVGA, 4 MB VRAM
Audio Controller any WinNT compatible typically needed only for Single User configuration
Monitor Server Color, 17
Monitor Single User Color, 21
Software Requirements
Component Type Remark
Operating System Windows NT 4.0 Server Installation of Service Pack (valid version stored on ACI-CD) is mandatory
Windows 2000 Server Will be supported from ACI Version 8.2 onwards
ACI-Client
General
An ACI-Client shall be planned to handle 6,000 up to 10,000 subscribers (depending on the customer's operation and maintenance organization).
Hardware Requirements
Component Type/Value Remark
General Mini-Tower/Desktop Interfaces (recommendation): 2 x RS232C, 1 x Centronics, 1 x Keyboard (PS/2), 1x Mouse (PS/2)
CPU Pentium III 600 MHz
Cache 512 kB second-level cache
RAM 128 MB
Hard Disk 3,2 GB (Fast-IDE)
Floppy 1.44 MB
CD-ROM 32x IDE
Disk Controller FAST-IDE
LAN-Adapter 10/100 M bit/s
Keyboard PS/2 (recommended)
Mouse PS/2 (recommended)
Graphics Adapter Card SVGA, 4 MB VRAM
Audio Controller Any WinNT/2000 compatible
Monitor Color, 21
Software Requirements
Component Type Remark
Operating System Windows NT 4.0 WS Installation of Service Pack (valid version stored on ACI-CD) is mandatory
Windows 2000 Professional Will be supported from ACI Version 8.2 onwards
Windows NT 4.0 Server Installation of Service Pack (valid version stored on ACI-CD) is mandatory
Windows 2000 Server Will be supported from ACI Version 8.2 onwards
ACI/-TRIAL/-MINI
General
This PC can be used as an ACI-Trial (small configuration, max. 120 subscribers) or as an ACI-Mini, a configuration defined to provide a low-cost system for small networks with up to 1,000 subscribers. It is used as a single-user system.
Hardware Requirements
Component Type/Value Remark
General Mini-Tower/Desktop Interfaces (recommendation): 2 x RS232C, 1 x Centronics, 1 x Keyboard (PS/2), 1x Mouse (PS/2)
CPU Pentium III 600 MHz
Cache 512 kB second-level cache
RAM 128 MB
Hard Disk 6 GB (Fast-IDE) max. DB size=500MB
Floppy 1.44 MB
DAT-Streamer 4/8 GB (SCSI) optional, for backup (only for ACI-Mini)
CD-ROM 32x IDE
Disk Controller FAST-IDE
LAN-Adapter 10/100 M bit/s
Keyboard PS/2 (recommended)
Mouse PS/2 (recommended)
Graphics Adapter Card SVGA, 4 MB VRAM
Audio Controller Any WinNT/2000 compatible
Monitor Color, 17
Software Requirements
Component Type Remark
Operating System Windows NT 4.0 Server Installation of Service Pack (valid version stored on ACI-CD) is mandatory
Windows 2000 Server Will be supported from ACI Version 8.2 onwards
ACI-High-Availability-Server-Cluster
General
The High-Availability-Server-Cluster solutions will only be offered together with SNI server clusters with 99.99% HW availability, as mentioned in chapter 2.
LCT
Hardware Requirements
Component Type/Value Remark
General Notebook/Laptop with optional accumulator power supply
Interfaces (needed): 1 x RS232C
additional Interfaces (recommendation): 1 x VGA, 1 x Centronics, 1 x Keyboard (PS/2), 1x Mouse (PS/2)
Software Requirements
Component Type Remark
Operating System Windows NT 4.0 Installation of Service Pack (valid version stored on ACI-CD) is mandatory
Windows 2000 Will be supported from ACI Version 8.2 onwards
3.6.2 Customer Supply Section
This section shows the hardware and software currently shipped to the customer. It can also be seen as the definition of the recommended hardware and software.
S42022-D1801-V102 ACI - English UK Only on request
S42022-D1801-V103 ACI - France Only on request
S42022-D1801-V104 ACI - German Only on request
2 Fujitsu-SIEMENS Speaker
4 Fujitsu-SIEMENS S26361-F2560-L1 Audio Controller Creativlabs CT5808
3 Fujitsu-SIEMENS Monitor 21
ACI-Server
General Comment
The ACI-Server can be a MIDDLE or HIGH system. The MIDDLE variant is equipped with a second processor in addition to the configuration of the BASIC system (ACI Single User).
The HIGH variant also includes two CPUs and, in addition, a RAID controller and a redundant power supply. Furthermore it is equipped with three hard disks; the two additional hard disks should be used to store and mirror the ACI database.
Configuration MIDDLE
Component Type/Value Order Number Remark Package #
General PRIMERGY F200 GE FS S26361-K643-V102 Floorstand 1
CPU Pentium III 1.13 GHz/512 kB Slots: 6 x PCI (2 x 33MHz/32-bit; 4 x 66MHz/64-bit)
Floppy 3.5 /1.44 MB power-supply1 (+1) x 400 W hot-plug / redundant (optional)
SCSI-Controller 1xU160,int/ex, SE S2636-F2399-E1
LAN-Adapter Intel 10/100 on-board
Graphics Adapter Card PCI-Graphic ATI 8MB on board
Flexy Bay Option FD S26361-F2575-E1
2nd Processor Pentium III 1.13 GHz/512 kB S26361-F2599-E113
Power-Supply 400W Upgrade (hot-plug) S26113-F453-E1
Power-Supply-Modul 400W (hot-plug) S26113-F453-E10
Fan-Unit Upgrade-Kit hot-plug redundant S26361-F2544-E1
RAM 1 GB SDRAM PC133 ECC S26361-F2306-E524
CD-ROM ATAPI / IDE SNP:SY-F2240E1-A
Hard Disk 18GB, 10k, U160, Hot Plug, 1 SNP:SY-F2336E118-P used for NT, ACI and DB
DAT-Streamer DDS-3, 12 GB intern S26361-F1730-E2
Keyboard KBPC S2 S26381-K297-V122 German (D)/(INT) 2
Country-Kit T26139-Y1740-E10 German (D) 3
2x Power-Cable grey 1,8m T26139-Y1744-L10 English (UK)/Ireland(IR)
Operating System Windows NT Server (US) V4.0+10CL S26361-F2565-E305 4
Audio Controller Creativlabs CT5808 S26361-F2560-L1 optional
Speaker optional
Monitor 17 MCM 17P3 S26361-K707-V150
Order Packages MIDDLE
Package # Supplier Order Number Content Remark
1 Fujitsu-SIEMENS S42022-D1801-V201 ACI-MIDDLE - US/International
S42022-D1801-V202 ACI-MIDDLE - English UK Only on request
S42022-D1801-V203 ACI-MIDDLE France Only on request
S42022-D1801-V204 ACI-MIDDLE German Only on request
4 Fujitsu-SIEMENS S26361-F2560-L1 Audio Controller Creativlabs CT5808
2 Fujitsu-SIEMENS Speaker
3 Fujitsu-SIEMENS S26361-K707-V150 Monitor 17
Configuration HIGH
Order Packages HIGH
ACI/-CLIENT
Configuration
Order Packages
ACI- High-Availability-Cluster-Server
General
The High-Availability-Server-Cluster solutions will only be offered together with SNI server clusters with 99.99% HW availability.
Configuration general (with storage)
Component Type/Value Order Number Remark
General 1x DataCenter Rack 38U SNP:SY-K614V101-P
2x blindplate 3U SNP:SY-F1609E3-P
3x blindplate 5U SNP:SY-F1609E5-P
1x flexirailpair for Rack-Components SNP:SY-F1331E51-P
2x Keyb.-mon.-mouse SNP:SY-F2293E500-P
Servercableset customized
1x Consolswitch 4x (ES4+) 1U + built-in, UPS SNP:SY-F2293E40-P
1x Rack Console for TFT monitor + keyboard, UPS SNP:SY-F1806E12-P
1x UPS APC grey COM signal cable for built-in + NT S26113-F231-E1
1x UPS APC COM-Port additional + built-in S26113-F81-E1
1x UPS APC 3000 VA, 3U SNP:PS-E421E1-P
3x Rack-built-in SNP:SY-F1647E301-P
1x carrier-angle 2U SNP:SY-F2262E15-P
2x cable FC Cu HSSDC-HSSDC 3m + built-in SNP:SY-F1828E3-P
Storage 1x PRIMERGY S60 RH storage, UPS saved S26231-K714-V210
Subsystem S60 FC RAID Ctrl 64MB BBU S26361-F2436-E1
Fibre Channel GBIC Cu SNP:SY_F1832E1-P
3x hard disk 36GB, 10k, Hot plug, 1" S26361-F2435-E136
1x built-in-kit 19" DC-Rack S30/S60 SNP:SY-F2261E8-P
Configuration Server 1
Component Type/Value Order Number Remark
General 1x PRIMERGY F200 GE RS PIII 1.26 GHz/512 kB, UPS-saved S26361-K643-V303
CPU/Cache PIII 1.26 GHz 512 kB S26361-F2599-E126
RAM 1GB SDRAM PC133 ECC S26361-F2306-E524
Flexy Bay Option FD S26361-F2575-E1
DAT_STREAMER DDS4 20GB, 3MB/s, intern S26361-F2233-E3
CD-ROM DVD-ROM, ATAPI SNP:SY-F2234E1-A
Hard Disk 2x 18GB,10k,U160, hotplug,1" SNP:SY-F2336E118-P
Raid Controller U160 int/ext,16MB, Mylex S26361-F2406-E16
FC Controller 66MHz, Cu Interface SNP:SY-F2244E1-A
LAN-Adapter Fast Ether-Express-Pro/100+ Server SNP:SY-F2071E1-A PCI-Card
Power-Supply 400W Upgrade (hot-plug) S26113-F453-E1
Power-Supply-Modul 400W (hot-plug) S26113-F453-E10
Fan-Unit Upgrade-Kit hot-plug redundant S26361-F2544-E1
Built-In-Kit 19" DC-Rack P6xx/Hxxx/F2xx SNP:SY-F2261E31-A
Software Windows 2000 Adv. SRV + 25 CL, 1-8 Proc., US S26361-F2565-E706
DAT_STREAMER DDS4 20GB, 3MB/s, intern S26361-F2233-E3
CD-ROM DVD-ROM, ATAPI SNP:SY-F2234E1-A
Hard Disk 2x 18GB,10k,U160, hotplug,1" SNP:SY-F2336E118-P
Raid Controller U160 int/ext,16MB, Mylex S26361-F2406-E16
FC Controller 66MHz, Cu Interface SNP:SY-F2244E1-A
LAN-Adapter Fast Ether-Express-Pro/100+ Server SNP:SY-F2071E1-A PCI-Card
Power-Supply 400W Upgrade (hot-plug) S26113-F453-E1
Power-Supply-Modul 400W (hot-plug) S26113-F453-E10
Fan-Unit Upgrade-Kit hot-plug redundant S26361-F2544-E1
Built-In-Kit 19" DC-Rack P6xx/Hxxx/F2xx SNP:SY-F2261E31-A
Order Packages
LCT
Configuration
Component Type/Value Order Number Remark Package #
General LIFEBOOK E-6646 PIII 1066 MHz S26391-K114-V170 LCD TFT 14.1" SXGA+ 1400x1050, ATI Mobility-M6 (for E-6646), 1x serial, 1x parallel, 1x VGA, 1x PS/2, Li-Ion battery, CardBus connectors 1
Floppy 3.5"/1.44 MB
Disk-Controller onboard
Graphics Adapter 16MB Video RAM
RAM SDRAM 128 MB 133 MHz S26391-F2424-E200
Order Packages