
Edition November 2007

Microsoft Cluster Service (MSCS)


Version 3
Windows Server 2003
FibreCAT SX60, SX80 and SX88
Pages 17

MICROSOFT CLUSTER VARIANTS .......................................................................................... 3


NLB (Network Load Balancing) .............................................................................................................................................3

CLB (Component Load Balancing) .......................................................................................................................................3

MSCS (Microsoft Cluster Service).........................................................................................................................................4

MSCS DETAILS .......................................................................................................................... 4


Failover Capability ....................................................................................................................................................................5

Failback........................................................................................................................................................................................5

Shared Storage ..........................................................................................................................................................................5

Software Components of the MSCS .....................................................................................................................................5

SAMPLE CONFIGURATIONS .................................................................................................... 6

MSCS Examples – Entry-Level Solution..............................................................................................................................6

MSCS Examples – Controller Redundancy ........................................................................................................................6

MSCS Examples – Controller Redundancy and Extensibility (Optimal Configuration) ..........................................7

HIGH AVAILABILITY CLUSTERS .............................................................................................. 7

INCREASING MSCS AVAILABILITY BY MEANS OF ADD-ON SOFTWARE ........................... 9


MSCS Examples – Controller Redundancy and Mirroring..............................................................................................9

CLUSTER RELEASE & CERTIFICATION ................................................................................ 10


Cluster Release by Fujitsu Siemens Computers.............................................................................................................10

Cluster Certification................................................................................................................................................................10

SUMMARY ................................................................................................................................ 13
Cluster EQP → Windows Server Catalog ......................................................................................................................13
White Paper "Windows 2003 Server with FibreCAT SX Systems", Edition: May 2007, Page 2 / 17

Clusters – Detail Information................................................................................................................................................14

Clusters – Direct Cabling ......................................................................................................................................................14

SAMPLE CONFIGURATION – MSCS CLUSTER OPTIMAL.................................................... 15



Microsoft Cluster Variants

The failure of key systems can have serious consequences for organizations and may even
endanger their continued survival. PRIMERGY servers in cluster operation mode deliver a high
level of failsafe performance. The Microsoft Windows Server 2003 operating system features
extensive software support for cluster operation of PRIMERGY servers used in conjunction with
storage subsystems such as FibreCAT SX. On Microsoft servers the following three distinct
technologies are available to support clustering.

• NLB (Network Load Balancing)


• CLB (Component Load Balancing)
• MSCS (Microsoft Cluster Service)

Fibre Channel storage systems such as the FibreCAT SX family from Fujitsu Siemens
Computers GmbH are frequently used with MSCS (Microsoft Cluster Service).

NLB (Network Load Balancing)

In an NLB cluster (network load balancing cluster), several PRIMERGY servers are deployed
simultaneously to balance traffic load and ensure high availability of a resource. The software
needed to do this is included in all Windows Server 2003 operating systems and supports up to
32 nodes. It is suitable for Web servers, firewalls and Web services, but not for application,
e-mail and database programs. Microsoft regards NLB simply as a system feature, which
therefore requires NO separate certification. NLB clusters need no shared storage subsystem
(FibreCAT SX).
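Although NLB needs no shared storage, its core idea is easy to sketch: every node applies the same deterministic decision function to the incoming client address, so all nodes agree on an owner without central coordination. The following Python sketch uses hypothetical helper names and SHA-256 purely for illustration; real NLB uses its own filtering algorithm.

```python
# Sketch of decentralized load distribution in the NLB style: every node runs
# the same deterministic hash over the client address, so all nodes compute
# the same owner and exactly one of them answers the request.
import hashlib

def owning_node(client_ip: str, active_nodes: list[str]) -> str:
    """Map a client deterministically onto one of the active nodes."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(active_nodes)
    return sorted(active_nodes)[index]

nodes = ["node1", "node2", "node3"]
owner = owning_node("192.0.2.17", nodes)
assert owner in nodes
assert owner == owning_node("192.0.2.17", nodes)  # deterministic on every node
```

If a node leaves the active set, the same function redistributes its clients across the remaining nodes, which is the mechanism behind NLB's high availability.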

CLB (Component Load Balancing)

Component Load Balancing distributes workload over several servers running the appropriate
software solution. CLB achieves dynamic balancing of the COM(+) components across a group
of up to eight identical PRIMERGY servers. With CLB, the COM(+) components on the
PRIMERGY servers are located in a separate COM(+) cluster. Calls to activate COM(+)
components are distributed uniformly to the various PRIMERGY server nodes within the
COM(+) cluster. CLB complements the two other clustering options (NLB and MSCS); CLB and
MSCS (Microsoft Cluster Service) can run on the same group of computers. CLB provides
cluster functionality at the application level, for example "Oracle RAC", "Microsoft COM(+)
Load Balancing Cluster" etc., and is NOT tested and released separately by Fujitsu Siemens
Computers GmbH in the context of cluster certification. Likewise, the CLB variant is NOT
certified separately as this is not required by Microsoft. CLB clusters need no shared
FibreCAT SX storage system.

MSCS (Microsoft Cluster Service)

The Microsoft Cluster Service1 (MSCS) is designed to provide high availability and supports up
to 8 nodes. To operate meaningfully, MSCS requires at least one shared FibreCAT SX storage
system. MSCS is an integral part of the Windows Server 2003 operating systems2. The MS
Cluster Service acts as a back-end cluster; it provides high availability for applications such as
databases, messaging and print and file services. It attempts to minimize the effect of a failure
in the system if any node (a server in the cluster) fails or is taken offline. A special feature of
MSCS is that the cluster service is always certified and sold as a bundle in a predefined
configuration. In this context, certification means very careful planning and thorough testing.
This is one of the prime prerequisites for successful operation in a production environment. In
addition, the Microsoft Cluster Service is one of the most affordable entry-level solutions into
the world of high availability clusters.

MSCS Details

The cluster functionality is provided by a specific part of the Microsoft Windows Server 2003
operating system known as the Microsoft Cluster Service (MSCS). MSCS attempts to
minimize the effect of a system failure if nodes fail or are taken offline. MS Cluster Service
provides several easy-to-use tools to simplify the configuration and management of clusters.
MSCS was designed for use with Intel-based hardware such as PRIMERGY servers. The
solution does not target "true fault tolerance"3. However, combining multiple redundant
hardware components into a single, high-availability computer resource does mean that, in
total, less redundant hardware is required. And this is one of the benefits of the MSCS cluster.
The Microsoft Cluster Service is based on the following clustering model.

o Shared nothing

This means that at any one time (for example, at 11:54 on 23 November 2007) only one
privileged node may have access to a resource (such as FibreCAT SX), although technically all
nodes could equally well access the FibreCAT SX subsystem. The privileged PRIMERGY
server node alone manages the resource. Only if the privileged node fails does another node
"stand in" and assume its role. In this situation, the resource may be either a physical or a
logical component (for example, individual volumes or LUNs on the FibreCAT SX system). The
resource is managed within the cluster and hosted by a node; if necessary (if the node fails),
the resource is moved to a different node. The MS Cluster Service acts as a back-end cluster.
This means high availability for applications such as database, messaging and print and
file services.

1
Microsoft Cluster Service = Microsoft Cluster Server => abbreviated to MSCS
2
Windows Server 2003, Enterprise Edition and Datacenter Edition;
up to 4 nodes with the Windows 2000 Server variants
3
The term (100%) "error tolerant" is generally used to describe a technology that offers an even higher level of failsafe operation and recovery. As a rule, truly
error-tolerant servers deploy much more complex hardware and software to ensure almost immediate recovery from hardware or software failures. Such systems
are more expensive than cluster solutions because they require large capital investment in redundant hardware that is used only when it is necessary to recover
from a failure.

Failover Capability

Each node has the following components.

(1) Memory
(2) System disk (HDD)
(3) Operating system
(4) Subset of cluster resources (infrastructure and data)

If a node fails, another node steps in to do its work: in a sense, it assumes the failed node's
ownership rights to the resources (this process is known as "failover"). The Microsoft Cluster
Service registers the network address (IP) of the resource (FibreCAT SX) on the new node so
that client traffic is routed to the new node without the users of the clients noticing any effect
on their work.

Failback

If the failed node is only out of action temporarily, MSCS can automatically return the
resource (FibreCAT SX) to it once it comes back online. This process is known as failback.
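The shared-nothing ownership model with failover and failback can be sketched in a few lines. This is a minimal illustration with assumed names, not MSCS code: exactly one node owns the resource at any time, ownership moves when the owner fails, and it returns to the preferred node on recovery.

```python
# Minimal sketch of shared-nothing resource ownership with failover/failback.
class Cluster:
    def __init__(self, nodes):
        self.online = set(nodes)
        self.preferred = nodes[0]   # preferred owner of the shared resource
        self.owner = nodes[0]       # shared nothing: exactly one owner at a time

    def fail_node(self, node):
        self.online.discard(node)
        if self.owner == node:      # failover: another node "stands in"
            self.owner = sorted(self.online)[0]

    def restore_node(self, node):
        self.online.add(node)
        if node == self.preferred:  # failback: resource returns automatically
            self.owner = node

cluster = Cluster(["node1", "node2"])
cluster.fail_node("node1")
assert cluster.owner == "node2"     # failover occurred
cluster.restore_node("node1")
assert cluster.owner == "node1"     # failback occurred
```

In real MSCS the same transitions additionally involve re-registering the resource's network address on the new owner, as described above.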

Shared Storage

All applications running when failover occurs can be resumed if all nodes have access to shared
storage. Parameters and application status can be held and exchanged on shared storage.

Software Components of the MSCS

MSCS consists primarily of the following software components.

(1) Cluster Service


(2) Resource Monitor
(3) Resource DLL
(4) Cluster Administrator

The Cluster Administrator enables software developers to write what are known as extension
DLLs for management tasks.

Sample Configurations

MSCS Examples – Entry-Level Solution

Figure 1 Example 1: Two PRIMERGY servers (connected via Fibre Channel) as nodes in the
MSCS cluster, and a FibreCAT SX60 with one controller as a shared storage system.

MSCS Examples – Controller Redundancy

Figure 2 Example 2: Two PRIMERGY servers as nodes in MSCS cluster operation mode with
multipathing and activated communication between the Fibre Channel controllers
(Interconnect = ON), and a FibreCAT SX60 as a shared storage system.

MSCS Examples – Controller Redundancy and Extensibility (Optimal Configuration)

Figure 3 Example 3: Two PRIMERGY servers as nodes in MSCS cluster operation mode with
two switches with multipathing and deactivated communication between the Fibre
Channel controllers (Interconnect = OFF), and a FibreCAT SX88 (SX60 / SX80) as a
shared storage system.

For further information on the Interconnect setting options between the two FibreCAT SX
storage controllers, refer to the Getting Started guide (www.FibreCAT.net → Documents).

High Availability Clusters

High availability of servers and data is crucial to users of the Fujitsu Siemens Computers
PRIMERGY server family and associated Fujitsu Siemens Computers STORAGE products.
The IEEE4 defines availability as the degree to which a device runs and is accessible as
needed by an authorized user. Availability is calculated as a percentage using the following
formula:

Availability (%) = (Operating time - Downtime) / Operating time x 100

4
IEEE: Institute of Electrical and Electronics Engineers: A worldwide professional association of engineers in the areas of electrical and electronics engineering and
computer science; well-known for its high professional quality. IEEE is one of the largest technical, professional associations in the world.
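As a worked example of the availability formula above, consider a system scheduled to operate 8760 hours per year (the figures are illustrative) with 9 hours of downtime:

```python
# Worked example of the IEEE-style availability formula:
# Availability (%) = (Operating time - Downtime) / Operating time x 100
def availability_percent(operating_time: float, downtime: float) -> float:
    return (operating_time - downtime) / operating_time * 100

# 8760 hours of scheduled operation per year, 9 hours of downtime:
print(round(availability_percent(8760, 9), 3))  # 99.897
```

An availability of roughly 99.9% ("three nines") already corresponds to almost nine hours of outage per year, which is why clustering is used to push downtime further towards zero.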

The more an enterprise relies on the availability of its computer systems to conduct its daily
business, the greater the importance attached to high availability. Downtimes are
synonymous with standstill and result in unnecessary delays in business operations. It is
essential to prevent downtimes, as they lead to loss of revenue, and not only in worst-case
scenarios. High availability systems should be tailor-made, which makes it necessary to
identify the exact requirements of each customer. Further key factors in preventing system
failures are

o an examination of organizational processes

o staff training to generate an awareness for high availability

o a requirements-driven IT service

o suitable hardware and software (Silkworm switches, PRIMERGY servers, FibreCAT SX)

Figure 4 Classification of availability and high availability

In other words, high availability is ensured by a variety of established measures such as
redundancy5. Servers in particular must be redundant. Here, a central role is played by
clustering, which involves combining two or more servers into a server cluster. Clustering
facilitates high availability and load balancing. Mutual monitoring of the PRIMERGY
server nodes and major storage systems such as FibreCAT SX ensures that another
PRIMERGY server node takes over automatically if a disaster, the worst of all scenarios,
occurs. Redundancy also provides valuable opportunities to carry out maintenance work (on
software and hardware). As a rule, there is no need for administrator intervention, and users are
not aware of any interruption. Depending on the specific situation, this is achieved by
temporarily moving running applications, access rights and storage systems such as FibreCAT
SX to other PRIMERGY server nodes.

5
Duplication of identical hardware and software for servers and peripherals.

Increasing MSCS Availability by means of Add-On Software

Fujitsu Siemens Computers' own PRIMERGY DuplexDataManager® (DDM) can be deployed
to boost the availability of a Microsoft Cluster Service. This ensures that the data on a
FibreCAT SX system is copied synchronously from the attached server to a second FibreCAT
SX (mirroring).

MSCS Examples – Controller Redundancy and Mirroring

Figure 5 Example 4: Microsoft Cluster with PRIMERGY DuplexDataManager® (DDM) for path
redundancy and mirroring of cluster resources with two premium FibreCAT SX88.

For more information on this topic, refer to the Multipathing White Paper at www.FibreCAT.net:
Downloads → Documents, or go to www.fujitsu-siemens.com/manuals.

Cluster Release & Certification

Cluster Release by Fujitsu Siemens Computers


To meet high quality standards, the PRIMERGY servers, all their hardware and software
components, and peripherals are subjected to extensive FUJITSU SIEMENS COMPUTERS
component and full system tests before they are released for sale.
These are followed by further tests for all products that are also provided for cluster release,
particularly those with cluster-relevant components; these are the planned server systems, the
disk subsystems, and also the I/O controllers. The latter require very specific and careful detail
testing of their drivers, firmware and BIOS levels. The correct functioning of each individual
component is a prerequisite for the trouble-free functioning of the entire server cluster.

Cluster Certification
The above FUJITSU SIEMENS COMPUTERS release tests are followed by Microsoft cluster
certification. The following facts are important in this context.
For certification purposes, FUJITSU SIEMENS COMPUTERS uses the current version of the
specific Microsoft software suite known as the Cluster Test Kit. This kit is used to perform the
"HW Compatibility Test" (HCT) for each different cluster configuration.
A "cluster configuration" is determined by
- the Windows operating system used and therefore the Cluster Service version used (Windows
2003 x86 or Windows 2003 x64 (64-bit))
- the PRIMERGY server model
- the host bus adapter (HBA) for connecting the disk subsystem, including driver, firmware and
BIOS in the applicable version:
  • for FC: the FC controller
  • for SCSI: the RAID controller
  • for iSCSI: the LAN controller + SW initiator / iSCSI HBA
- in the case of Fibre Channel, also the SAN switch and the firmware and BIOS version used
- the multipath software version (if used)
- the model of the disk storage subsystem

Successfully completed certifications are published on the Microsoft and Fujitsu Siemens
Computers Internet sites (see next section).
The above list gives an indication of the large number of potential combinations, and therefore
cluster configurations, that may need to be certified. Each different variant of the above cluster
components represents a separate configuration and therefore requires separate certification.
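The combinatorial growth of configurations can be made concrete with a small sketch (the field names and sample values are hypothetical): every certification-relevant component is part of a configuration's identity, so changing any single one yields a distinct configuration that needs its own certification.

```python
# Sketch: a cluster configuration as an identity over its certification-relevant
# components; two configurations are equal only if ALL components match.
from dataclasses import dataclass

@dataclass(frozen=True)
class ClusterConfiguration:
    os_variant: str      # e.g. "Windows 2003 x64" (determines Cluster Service version)
    server_model: str    # PRIMERGY server model
    hba: str             # HBA incl. driver, firmware and BIOS version
    san_switch: str      # FC only: SAN switch incl. firmware/BIOS version
    multipath_sw: str    # multipath software version, "" if none
    storage_model: str   # disk storage subsystem model

a = ClusterConfiguration("Windows 2003 x64", "TX200 S3", "LPe1150 fw 2.72",
                         "Brocade 200E", "DDM", "FibreCAT SX80")
b = ClusterConfiguration("Windows 2003 x86", "TX200 S3", "LPe1150 fw 2.72",
                         "Brocade 200E", "DDM", "FibreCAT SX80")
# A change in any single field yields a separate configuration:
assert a != b
```

This is why representative configuration profiles are certified proactively rather than every possible combination.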


To keep certification effort within limits, a series of representative configuration profiles is
certified proactively as standard. If specific configurations required for your customer projects
are not certified and are not included in PRIMERGY roadmap planning, use the PRIMERGY
Sales Hotline to discuss your needs with PRIMERGY Product Marketing/Product Management.

Cluster Logo Entry in the MS Windows Server Catalog


As a reference, each certified cluster configuration has an entry in the MS Windows Server
Catalog. Sales, customers or prospects can view this catalog on the Internet at any time
(www.windowsservercatalog.com).

"Full" and "Supported": Two Different Windows 2003 Cluster Logos


Each PRIMERGY cluster entry in the MS Windows Catalog is assigned to one of two cluster
logo types which indicate the level of compatibility of the PRIMERGY servers certified in a
cluster configuration.

Designed for

The "Full Logo" indicates that the server hardware is fully compatible with the
current Hardware Design Guide for Windows 2003.

MS Supported (without Logo)

The "Supported Logo" indicates that the server nodes do not fully comply with the Hardware
Design Guide (they originate from the previous generation of the current PRIMERGY hardware
line), but that Windows Server 2003 runs correctly on the hardware.

Important: Fujitsu Siemens Computers and Microsoft provide full support for the
"Supported" category if problems are encountered with the PRIMERGY cluster server hardware
or with the Windows operating system.

Overview of certified FUJITSU SIEMENS COMPUTERS cluster types


Fujitsu Siemens Computers certifies the following cluster types (with the specified number of
servers) classified according to operating system and connection topology.

Cluster Type / OS            Max. No. of Servers   Comments
W2003 Enterprise Edition     2 with SCSI
                             4 with FC             Technically, a maximum of 8 is possible with FC
                             4 with iSCSI
W2003 Datacenter Server      4                     FC only; technically, a maximum of 8 is possible
W2003 Geo Cluster            2                     Cluster dispersed geographically over 2 locations

Enterprise Qualification Program (EQP)


Server clustering is an important building block in enterprise computing concepts and imposes
far-reaching demands on design, implementation engineering, quality assurance, and field
support by server hardware vendors. To emphasize this fact, Microsoft – in conjunction with
other important hardware manufacturers, and therefore with the active participation of Fujitsu
Siemens Computers – has instigated the Enterprise Qualification Program (EQP). The EQP
requires compliance with the manufacturer capabilities listed above and, in a sense, certifies the
manufacturer for its enterprise activities in relation to Microsoft Cluster and Windows
Datacenter certification and marketing. Fujitsu Siemens Computers, with its PRIMERGY
servers, is a major EQP participant.

Summary

The Microsoft Cluster Service (MSCS) supports high availability through the use of affordable
standard hardware and software. Computer resources are utilized to the full. The Microsoft
Cluster Service in Windows Server 2003 provides powerful tools so that all applications are able
to communicate via high availability interfaces.
The Microsoft Cluster Service (MSCS) is available in the following current operating systems,
in the x86 and x64 (64-bit) variants:

• Windows Server 2003 Enterprise Edition
• Windows Server 2003 Datacenter Edition

FibreCAT SX is the ideal system for your Microsoft cluster solution for the following reasons.
• It offers an outstanding price/performance ratio
• It features intuitive operation and installation
• It is certified for Microsoft Cluster
• It provides a simple entry into the Storage Area Network (SAN) world
• It supports redundant controller configuration in the storage system and redundant Fibre
Channel connection (Multipathing; see www.FibreCAT.net → Documents →
Whitepaper Multipathing)
• FUJITSU SIEMENS COMPUTERS' own DDM add-on software (see "Increasing MSCS
Availability by means of Add-On Software") also permits the implementation of
disaster-tolerant configurations

The FibreCAT SX systems have, of course, the "Full Logo" = "Designed for Windows 2003"

Cluster EQP → Windows Server Catalog


As there is always a delay between FSC EQP certification and publication of the relevant
information on http://www.windowsservercatalog.com, the EQP Excel table should always be
consulted to obtain the latest detailed information (at http://www.fujitsu-siemens.com/EQP or via
the direct link http://extranet.fujitsu-siemens.com/vil/pc/vil/primergy/operating_systems/windows-
2003/eqp.zip).

Clusters – Detail Information


The EQP list also includes information on the number of cluster nodes, controller firmware,
Fibre Channel switches used, etc.

Clusters – Direct Cabling


For cluster configurations with direct cabling (in other words, without Fibre Channel switches),
there is a separate configuration description at www.FibreCAT.net → Documents → Getting
Started guide (at the end of the manual).

Sample Configuration – MSCS Cluster Optimal

You are also referred to the White Paper entitled "My very first SAN" at www.FibreCAT.net.
"My very first SAN" includes a special price and service advantage that simplifies entry into
the SAN world. The relevant data sheets indicate which FibreCAT SX is best suited for your
application. This sample configuration (see also "MSCS Examples – Controller Redundancy and
Extensibility (Optimal Configuration)") is based on a FibreCAT SX80 system with SAS6 disks.

6
SAS=Serial attached SCSI

Product level: 15.02.2007

System: PY_PRIMECENTER Rack 24U, 1000 deep

Product no. Name Number

S26361-K826-V102 PY_PRIMECENTER Rack 24U, 1000 deep 1

19" rack (24U), stackable, system enclosure as per protection type IP00 with side panels, lockable
front and rear door, including tilting protection, horizontal self-ventilation, mounting frame 2x 2U
vertical, circumferential cable management at rear, cable outlet to top, bottom, left and right.
HxWxD = 1220x700x1000 mm; incl. installation preparation, packing and accessory pack (various
installation material, manual).

S26361-F2735-E10 Adapter angle PC/DC rack, up to 50Kg 2


S26361-F2262-E31 Socket strip, 3-phase 3x 8 sockets 1
SNP:SY-F1609E1-P Dummy panel 1U 1
SNP:SY-F1609E2-P Dummy panel 2U 1
SNP:SY-F1609E3-P Dummy panel 3U 2
SNP:SY-F1609E5-P Dummy panel 5U 1
S26361-F2293-E801 Console switch KVM S2-0801, 1U + installation 1

FC-SX 80 Base Unit

D:FCSX80-BASE FC-SX 80 Base Unit 1

FibreCAT SX80 Base Unit, incl. 1x SX80 RAID controller,


2x redundant power supplies with 2.5m power cords, 12 SATA / SAS HDD bays, 1 bay for second
RAID controller, 8 snapshots (4x per RAID controller), RAIL Kit for rack mounting.

D:FCSX-SAS73 SX-HDD SAS 73GB 15k 4


D:FCSX-RD80 RAID controller SX80 1
D:FCSX-INPSR Rack mounting SX60/SX80 ex factory 1
D:FCSX-ANWIN SX60/SX80 with Windows 2
D:FCSX-ANPS SX60/SX80 Primergy connection 1
CPS:ST-INT-00002 General storage solution service (1 MD) 1
CPS:ST-ONL-12001 Design of FibreCAT solution (from 1 hour) 1

RC23 17" TFT German \ US English

S26361-K1023-V100 RC23 17" TFT German \ US English 1

Rack console RC23 with 17" TFT monitor with rack slide-in module, incl. German/US English
keyboard with touchpad (88 keys) and installation kit for DC/PC/3rd P racks

PY TX200S3ri/X 5130_Cluster_Node1

S26361-K981-V313 PY TX200S3ri/X 5130 1

19" rack mountable server (4U) incl. 1 power supply module and universal rack mounting kit, 2
fans; dual system board with 1x Xeon DP Dual Core processor, DDR RAM PC2-4200F/5300F
ECC; iRMC onboard server management and graphics controller, 1xGbit Ethernet LAN onboard ,
Fast-IDE controller (for 1xDVD), SAS controller with 8 ports (IME included = RAID 1); SATA
controller with 6 ports; Software: ServerStart package incl. ServerBooks CD, ServerSupport CD
and ServerView CD.

S26361-F3313-B521 1GB 2x512MB Base FBD533 PC2-4200F ECC 1


SNP:SY-F2234E1-A DVD-ROM ATAPI 1
S26361-F3204-E573 HD SAS 3Gb/s 73GB 15k hot plug 3.5" 2
S26361-F3085-E202 RAID Ctrl 0-Channel SAS 128MB LP LSI 1
S26361-F3306-E1 FC Ctrl 4GBit/s LPe1150 MMF LC 2

S26361-F3142-E1 Eth Ctrl 1x1Gbit PCI32 PRO/1000GT Cu 1


S26113-F453-E4 Power supply 600W upgrade (hot plug) 1
S26113-F453-E30 Power supply module 600W (hot plug) 1
SNP:SY-F1647E301-P Rack mounting ex factory 1
S26361-F2544-E4 Redundant hot-plug fan kit 1
CPS:PR-OSY-21001F Base installation Windows 2003/2000/NT4 1

PY TX200S3ri/X 5130_Cluster_Node2

S26361-K981-V313 PY TX200S3ri/X 5130 1

19" rack mountable server (4U) incl. 1 power supply module and universal rack mounting kit, 2
fans; dual system board with 1x Xeon DP Dual Core processor, DDR RAM PC2-4200F/5300F
ECC; iRMC onboard server management and graphics controller, 1xGbit Ethernet LAN onboard ,
Fast-IDE controller (for 1xDVD), SAS controller with 8 ports (IME included = RAID 1); SATA
controller with 6 ports; Software: ServerStart package incl. ServerBooks CD, ServerSupport CD
and ServerView CD.

S26361-F3313-B521 1GB 2x512MB Base FBD533 PC2-4200F ECC 1


SNP:SY-F2234E1-A DVD-ROM ATAPI 1
S26361-F3204-E573 HD SAS 3Gb/s 73GB 15k hot plug 3.5" 2
S26361-F3085-E202 RAID Ctrl 0-Channel SAS 128MB LP LSI 1
S26361-F3306-E1 FC Ctrl 4GBit/s LPe1150 MMF LC 2
S26361-F3142-E1 Eth Ctrl 1x1Gbit PCI32 PRO/1000GT Cu 1
S26113-F453-E4 Power supply 600W upgrade (hot plug) 1
S26113-F453-E30 Power supply module 600W (hot plug) 1
SNP:SY-F1647E301-P Rack mounting ex factory 1
S26361-F2544-E4 Redundant hot-plug fan kit 1
CPS:PR-OSY-21001F Base installation Windows 2003/2000/NT4 1

FC Switch BRCD200E, 8 Port, ZO

D:FCSWR-08P200EL FC Switch BRCD200E, 8 ports, ZO 1

Switch 4Gb/s, 8 ports WT, ZO NOTE: The licenses for port activation and for software are
included separately as a PaperPack with the switch

D:FCSW-RK2G-1HE FC Switch Rackmount Kit SW-16P2G/SW-8P2G 1


D:FCSFP-MM4G 4GB SFP MULTI MODE 4

FC Switch BRCD200E, 8 Port, ZO_1

D:FCSWR-08P200EL FC Switch BRCD200E, 8 ports, ZO 1

Switch 4Gb/s, 8 ports WT, ZO NOTE: The licenses for port activation and for software are
included separately as a PaperPack with the switch

D:FCSW-RK2G-1HE FC Switch Rackmount Kit SW-16P2G/SW-8P2G 1


D:FCSFP-MM4G 4GB SFP MULTI MODE 4

Further information is available at


www.FibreCAT.net
Delivery subject to availability; specifications subject to change without notice; correction of
errors and omissions excepted. All conditions quoted (TCs) are recommended cost prices in
EURO excl. VAT (unless stated otherwise in the text). All hardware and software names used
are brand names and/or trademarks of their respective holders.

Published by:
Fujitsu Siemens Computers
Enterprise Products Storage Business
Product Marketing
Artukovic-Tuboly, Antal
storage-pm@fujitsu-siemens.com
http://www.fujitsu-siemens.de/

Copyright © Fujitsu Siemens Computers, 2007
