Proprietary Information
This document contains proprietary information that is protected by copyright.
Copyright
Copyright © 2012 Koninklijke Philips Electronics N.V. All Rights Reserved.
Manufacturer
Philips Medical Systems
3000 Minuteman Road
Andover, MA 01810-1099
USA
(978) 687-1501
This document was printed in the United States of America.
Trademark Acknowledgements
Symbol is a trademark of Symbol Technologies, Inc.
HP is a registered trademark of Hewlett-Packard Company.
Cisco is a registered trademark of Cisco Systems.
MS-SQL is a registered trademark of Microsoft Corporation.
Nortel is a registered trademark of Nortel Networks Limited.
3COM is a registered trademark of 3COM Corporation.
Extreme is a registered trademark of Extreme Networks.
All other trademarks, trade names and company names referenced herein are used for identification
purposes only and are the property of their respective owners.
Warranty
The information contained in this document is subject to change without notice. Philips Medical Systems
makes no warranty of any kind with regard to this material, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose. Philips Medical Systems shall not be
liable for errors contained herein or for incidental or consequential damages in connection with the
furnishing, performance, or use of this material.
Printing History
New editions of this document will incorporate all material updated since the previous edition. The
documentation printing date and part number indicate its current edition. The printing date and edition
number change when a new edition is printed. The document part number changes when extensive
technical changes are incorporated.
This document replaces M3185-91931. If you require a previous version of this document, please refer to
the M3185-91931 part number.
First Edition October 2012
MOCN213
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-1
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Enterprise Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
PIIC Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
PIIC iX Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Smart-Hopping Network Deployment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
PIIC Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
PIIC iX Interoperability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
PSCN to HLAN Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
Direct Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
Layer 2 Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
Layer 3 Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-15
Gateway Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-15
Implementation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
Non-Routed Star Network Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
Non-Routed Implementation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
Cisco Implementation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
HP Implementation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Routed Star Network Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Connection Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Router/Core Switch Connection Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Distribution Switch Connection Details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-5
Access Switch Connection Details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-5
Switch Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
Router/Core Switch Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
Distribution Switch Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
Access Switch Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
Routed Implementation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-7
Single Distribution Switch Pair Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-7
Maximum Distribution Switch Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-8
Using 100 Megabit Distribution Switches in a PIIC or PIIC iX Deployment . . . . . . . . . . . 4-8
Using Access Switches in a PIIC or PIIC iX Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
Management VLAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
Requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
Router Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-10
Subnet Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-10
Router Load Balancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-12
Spanning Tree Protocol Considerations for PIIC and PIIC iX . . . . . . . . . . . . . . . . . . . . . . 4-13
Direct Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Connecting from a Layer 2 PSCN Directly to the HLAN . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Basic Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Information You Will Need to Request from IT . . . . . . . . . . . . . . . . . . . . . . . 4-14
Existing Topologies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Ring Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
Layer 2 Only Star Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
Physical Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
VLAN Numbering. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
HLAN configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Connecting from a Routed PSCN Directly to the HLAN . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Basic Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Information You Will Need to Request from IT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Existing Topologies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-17
Preface
• Audience
• Notational Conventions
• Related Documentation
• Terminology
Audience
This guide is written for Philips-trained service providers who will design the IntelliVue
Clinical Network topologies including Database Domains and the IntelliVue Telemetry
System.
Notational Conventions
This guide uses the following conventions:
Note
Notes call attention to important points or supplementary information.
Caution
Cautionary statements call attention to a condition that could result in loss of data or damage
to equipment.
Warning
Warnings call attention to a condition that could result in physical injury.
About This Guide
Related Documentation
Please refer to these other documents for additional installation and service information about the
IntelliVue Telemetry System and IntelliVue Clinical Network:
The PIIC iX Network Installation and Service Manual provides detailed information about
PIIC iX deployment, and the IntelliVue Telemetry System Infrastructure Installation and
Service Guide provides complete information on the 1.4 GHz/2.4 GHz IntelliVue Telemetry
System wireless network rules and topologies.
Terminology
• Advanced Network Design and Implementation (ANDI) - ANDI is a network design
and implementation delivery channel for Philips Clinical Systems which utilizes the
highest level of field network expertise to provide the greatest design flexibility to meet
specific customer networking needs. This channel has access to some hardware and
configurations that are not available in the normal, FSE channels.
• Database Domain (DBSD) - This term is used only for PIIC deployments (Release L, M,
and N) and describes a network that contains the Standalone IntelliVue Information
Center, or the IntelliVue Database Server and its connected Information Centers, Clients,
bedsides, and infrastructure. This term applies to both routed and non-routed topologies.
• Deployment - Refers to the overall PIIC and PIIC iX monitoring solution products.
• IntelliVue Clinical Network (ICN) - This term refers to the entire Philips network. In a
routed topology, the ICN includes the routers and all inter-connected network devices and
the IntelliVue Telemetry System Wireless Subnet.
• Philips IntelliVue Information Center (PIIC) - This term refers to the Philips
IntelliVue Information Center.
• Philips IntelliVue Information Center (PIIC) iX - This term refers to the Philips
IntelliVue Information Center iX.
Introduction
Overview
A Philips IntelliVue Information Center (PIIC and PIIC iX) monitoring system
captures complete waveforms, trends, alarms, and numerics from networked Philips
patient monitors and telemetry systems. A PIIC monitoring system stores monitor
data and also exports it to a hospital information system to be stored in a patient
electronic medical record. This comprehensive monitoring system is also referred to
as an IntelliVue Clinical Network (ICN).
In PIIC, the two ICN components are tightly coupled. A deployment of clinical
devices requires a specific network topology. For example, a large PIIC Database
server system deployment is always confined to a single VLAN with a prescribed IP
address scheme.
With the introduction of the PIIC iX, the two ICN components are no longer
coupled. PIIC iX clinical devices, such as Servers, Surveillance PIIC iX systems,
Overview PIIC iX systems, and Monitors can be deployed on varied network
topologies, multiple VLANs, and use a variety of IP address schemes.
The design approach is to address the customer need with the most appropriate PIIC iX deployment, and
then accommodate this deployment with the most appropriate network topology.
For PIIC iX, the term ICN is synonymous with only the PIIC iX deployment and does not in
any way imply a specific network layer. Therefore, throughout this document, the
following terms are used in very specific ways:
• Deployment --Refers to the overall PIIC and PIIC iX monitoring solution products.
• Topology--Refers to the network.
• Enterprise--Refers to the use of the PIIC or PIIC iX monitoring solution within a single
network infrastructure or topology.
Enterprise Overview
A Philips enterprise monitoring solution can consist of one or more PIIC or PIIC iX monitoring
deployments. Each deployment can reside on an independent network infrastructure, or all
deployments can share one common network infrastructure. Figure 1-3 provides an example of an
enterprise monitoring solution.
The example includes multiple PIIC and PIIC iX deployments and the Smart-hopping deployment. This
complete solution can be supported by a single network infrastructure or topology.
PIIC Deployment
A PIIC deployment consists of up to eight IntelliVue Information Centers (IICs) and 12
IntelliVue Information Center Clients connected to a network. Additionally, you can use a
PIIC deployment to connect up to 128 IntelliVue Patient Monitors (both wired and wireless).
PIIC iX Deployment
A PIIC iX Deployment consists of Surveillance PIIC iX systems and Overview PIIC iX
systems that can be connected to a Primary Server iX. Additionally, you can use a PIIC iX
Deployment to connect to IntelliVue Patient Monitors (both wired and wireless).
Smart-Hopping Network Deployment
A Smart-hopping network deployment can be either of the following:
• Unshared Deployments -- Deployments with fewer than 48 APs that reside in the same
subnet as the PIIC or PIIC iX deployment using the Smart-hopping infrastructure.
• Shared Deployments -- Deployments that reside in their own subnet and are connected
via a routed connection to one or more PIIC and/or PIIC iX deployments in their
respective subnets. A shared deployment can consist of up to nine APCs and 320 APs.
Star Topology
Overview
The PIIC iX Network supports use of the Star network topology. In the Star,
multiple network switches are all connected back to a central switch tier, using a star
configuration. Due to its scalability and ease of configuration, as of December 31,
2010, the Star Topology is the required network topology for all new Philips
clinical network installations.
Note
The design that you create is based on site-specific variables and requires that you
consider existing cabling infrastructure and specific customer requirements that are
unique to your network. This chapter provides the information to assist you in
designing and configuring the layout of your clinical network.
The Star topology is a hierarchical network design. Star topologies offer the best
levels of link and device redundancy and are in line with industry-standard
networking best practices.
A Star topology can be implemented in either of two forms:
• Routed
• Non-Routed
A routed Star topology is built from up to three switch tiers:
• Router/Core
• Distribution
• Access
Router/Core Switch
In the simplest form of the routed star topology, the Access Switches are directly connected to
the Routers, which act as both a Core Switch and Router. The Router/Core Switch is above
the Distribution and Access Switches in the cabling hierarchy. For PIIC, the Router/Core
switch is used primarily to allow wireless clients on the Smart-hopping deployment to
transmit data via Layer 3 to a PIIC deployment. PIIC iX devices may use the Router/Core
switch to communicate. This Layer 3 communication may be between a Primary Server iX,
Surveillance PIIC iX, Overview PIIC iX, Bedside Monitors, and Smart-hopping devices.
Distribution Switch
To support a greater number of Access Switches and end devices, and to extend the distance
between core and access layers, you may use a second type of routed star network topology
which uses Distribution switches. For this method, Access Switches are not directly
connected to the Routers/Core Switches but are instead connected to Distribution Switches.
The Distribution Switch may also be a node point in multi-building sites.
Location in Hierarchy
• The Distribution Switch is below Router/Core and above the Access Switches in the
cabling hierarchy.
• The Distribution Switches are not the root bridge in a spanning tree instance; however,
their root bridge priority is higher than that of switches at the Access Layer.
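On Cisco switches, for example, bridge priority is configured per VLAN. A numerically lower priority value means a higher preference to become root, so a Distribution Switch carries a value between the root bridge's value and the Access Switch default (the VLAN number and values below are illustrative, not a prescribed Philips setting):

```text
! Distribution switch: preferred over access switches (default 32768),
! but not over the root bridge (for example, 4096)
spanning-tree vlan 101 priority 8192
```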
Access Switch
An Access Switch can connect directly to the Router/Core Switch, or it can connect to the
Router/Core switch via a Distribution Switch. VLAN access is configured on each Access
Switch port as required for PIIC, PIIC iX and/or Smart-hopping deployments. The Access
Switch is below the Router/Core and Distribution switches in the cabling hierarchy.
Switch Interconnection
Star Topology switch interconnection can be 100 Megabit or 1000 Megabit (Gigabit). The
need for a specific connection between two switches is dictated by the PIIC or PIIC iX
deployment, the particular interconnection media type used, and/or future site expansion
plans. For details on switch interconnection, see Chapter 3, Network Design and Chapter 4,
Implementation.
Network Design
Overview
This chapter provides network design guidance for Star topology for PIIC, PIIC iX
and IntelliVue Smart-hopping deployments. It also provides design considerations for
hospital resource interoperability, deployment co-existence, and deployment
migration.
PIIC Design
A PIIC Deployment requires specific network design considerations. This chapter
outlines the infrastructure design guidelines that are required by a PIIC deployment.
Each PIIC deployment must reside in a single subnet. Each single subnet must be
a /21 and a specific IP schema must be followed. For PIIC implementation details,
see Chapter 4 and Chapter 5.
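As an illustration only (the addresses below are hypothetical and are not the prescribed Philips IP schema), a /21 subnet corresponds to a 255.255.248.0 mask and could be defined on a routed VLAN interface as follows:

```text
! Hypothetical /21 PIIC subnet on a router/core switch
interface Vlan101
 ip address 172.20.0.1 255.255.248.0
! Hosts 172.20.0.2 through 172.20.7.254 fall within this /21
```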
PIIC iX Design
As compared to PIIC deployments, PIIC iX deployments have fewer restrictions on
the network infrastructure design. Therefore, the PIIC iX design process is less
prescriptive and more suggestive. However, one requirement of a PIIC iX network
infrastructure topology is that you use a single connection (touch-point) between the
PIIC iX monitoring topology and the hospital infrastructure. This requirement has a
significant impact on the topology design. For PIIC iX implementation details, see
Chapter 4 and Chapter 6.
Smart-hopping Design
An IntelliVue Smart-hopping infrastructure can be deployed in one of the following ways:
• A Non-Routed topology
• A Routed topology
Non-Routed Smart-Hopping
When a Smart-hopping infrastructure is deployed in the same subnet with a PIIC deployment,
specific design guidelines must be followed.
Routed Smart-Hopping
The Smart-hopping infrastructure is deployed in a separate subnet to which multiple PIIC and/
or PIIC iX deployments have access via routers. When designing a shared Smart-hopping
infrastructure, you must adhere to the following guidelines:
• A deployment can have up to the specified upper limit of APs and Remote Antennas
installed.
• A Smart-hopping deployment can be any combination of Standard and Core APs, as long
as the total number does not exceed the specified upper limit of APs for each
deployment.
Refer to the ITS Infrastructure Installation and Service Guide for IntelliVue Smart-hopping
implementation details.
Co-Existence
Co-existence is defined as the sharing of network infrastructure between PIIC and PIIC iX
deployments. Co-existence applies to customers with an existing PIIC deployment who want
to deploy PIIC iX on the same infrastructure. Co-existence may involve the sharing of the
routed IntelliVue Smart-hopping infrastructure. It may require multicast address space
planning and external interface planning. These topics are only a few of the possible
considerations when planning a PIIC and PIIC iX co-existence scenario. The
following sections address the possible co-existence concerns.
Table 3-1 lists the existing PSCN infrastructure topologies containing a PIIC deployment and
possible deployment of a PIIC iX, resulting in PIIC and PIIC iX co-existence.
Layer 2 Considerations
The issues to consider are IGMP settings at the Layer 2 switches; port speed and duplex
settings for PIIC iX versus PIIC systems and bedsides; and spanning tree priorities for any
new VLANs that will be created on a PSCN or CSCN. The new VLANs may need to be
explicitly included in the trunk links connecting two switches together.
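For example, on a Cisco IOS switch (the interface and VLAN number are illustrative), a new VLAN can be added to an existing trunk without disturbing the VLANs already allowed:

```text
! Explicitly include a new VLAN in an existing trunk link
interface GigabitEthernet0/1
 switchport trunk allowed vlan add 120
```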
Layer 3 Considerations
PIIC iX systems are single-homed, which means that all traffic to and from the HLAN must
go through a routed interface. Several connectivity options are available that document
physical and logical connectivity between the PSCN and HLAN. In general, the issues
involve choosing an IP routing protocol and advertising new subnets to the HLAN.
The wireless Patient Worn Monitor (PWM)/Patient Worn Device (PWD) statistics will be
stored in the correct locations, because the storage location depends on the PIIC/PIIC iX
sector to which the devices are attached.
Depending on the location of the new PIIC iX unit relative to the PIIC units, it may be
possible to add a new smart-hopping group and assign APs serving PIIC iX wireless clients to
that group.
Multicast communication is used to:
• Enable Philips monitoring devices to connect and associate with Surveillance PIIC iX or
PIIC systems
• Manage Clinical Care Units
• Support Philips bedside monitor alarm reflectors
• Manage Philips bedside-to-bedside overview behavior
• Propagate time to Philips devices
Note
In PIIC and PIIC iX co-existence situations, IGMP snooping may be turned on for each
VLAN containing PIIC iX, but IGMP snooping must be turned off for each VLAN
containing PIIC versions less than L.0.
The specific start address is changeable, and defaults to the 239.255.0.0 administratively
scoped (RFC 2365) multicast address space. See the PIIC iX Installation and Configuration Guide for detailed information
on assigning the start address for a block of addresses via the Multicast Address Range dialog
box.
• Multicast IGMP snooping must be turned off for VLANs containing PIIC versions prior to
L.0. IGMP snooping may be turned on for VLANs containing PIIC L.0 and later.
• A single multicast address is required for enabling Philips patient monitoring devices to
connect and associate with a PIIC. By default, this multicast address is 224.0.23.63.
• A contiguous block of 17 unique multicast addresses is needed for each PIIC Database
Domain requiring routed Philips patient monitoring device association.
ip igmp snooping
If the infrastructure has VLAN(s) containing PIIC deployment(s), then IGMP snooping needs
to be turned off on these VLAN(s). Example 2 shows how to turn IGMP snooping on or off
on a per-VLAN basis.
Example 2
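A representative per-VLAN configuration on a Cisco IOS switch (the VLAN numbers are illustrative) is:

```text
! Turn IGMP snooping off for a VLAN containing a PIIC deployment
no ip igmp snooping vlan 101
! Turn IGMP snooping on for a VLAN containing a PIIC iX deployment
ip igmp snooping vlan 120
```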
Server farm; Router to Distribution Switch: Fiber (using approved 1000FX fiber SFP), or
UTP copper via 2960-S, 2960 TC, or 2960 TT ports.
Server farm; Distribution Switch to Distribution Switch: Fiber (using approved 1000FX
fiber SFP), or UTP copper via a 2960-S or 2960TC dual-personality port.
Note that the Gigabit link speed itself may not mandate the use of fiber; rather, the distance
between the devices requires fiber media, and the supported SFP modules for that media are
only available at Gigabit speed. See “Full Gigabit Topology” on page 3-9 for more
information.
• Access ports may be configured for either autonegotiate or fixed speed and duplex as
required by the attached client device.
• There are no restrictions on the number of VLANs that can be assigned to ports on the
distribution and access layers in a full Gigabit network.
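On a Cisco IOS access switch, for example (the interfaces and values are illustrative), a port can either autonegotiate or be fixed as the attached client device requires:

```text
! Port left to autonegotiate speed and duplex (the default)
interface FastEthernet0/3
 speed auto
 duplex auto
! Port fixed at 100 Megabit full duplex for a client that requires it
interface FastEthernet0/4
 speed 100
 duplex full
```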
No more than 18 access switches can connect to a distribution pair in a full Gigabit Topology.
Migration
Migration is the movement from an existing Philips-supplied infrastructure to a new or
different Philips-supplied infrastructure. Many factors can lead to the need for an
infrastructure migration.
The first step in any migration is to have a clear understanding of the existing network
topology. An existing network topology will be some form of a ring or star topology. See
Figure 3-2 for examples.
The first step in the migration path is to add a new star distribution layer to the existing router.
Note that the same migration path is recommended for both existing star and ring topologies.
Interoperability
Overview
PIIC and PIIC iX deployments may require connection to hospital resources and services.
These interoperability connections require consideration when designing a network
infrastructure to support the PIIC and PIIC iX deployments.
PIIC Interoperability
PIIC supports connections to hospital resources and services. All external interoperability
connections are made through the PIIC Database server second network interface card. It is
important to consider PIIC deployment interoperability. However, because all connections to
the hospital are through the second network interface card, there is no impact to the Philips
provided network infrastructure design or implementation.
PIIC iX Interoperability
PIIC iX supports a large number of connections for the purpose of interoperability with a
variety of external systems. When designing and implementing a network infrastructure for the purpose
of supporting a PIIC iX deployment, each individual connection must be considered.
Table 3-3 lists each application interface, communication session initiated direction, and the
possible PIIC iX device initiating or receiving a connection.
Interoperability Connection     Session Direction   PIIC iX Connection
Philips Service Agent (PSA)     Outbound            Each Local Database PIIC iX,
                                                    Surveillance PIIC iX, Overview
                                                    PIIC iX, Physiological Server iX,
                                                    Primary Server iX, and APM
                                                    requiring PSA access.
Direct Connection
Direct connection between a PSCN and the HLAN has PSCN design implications. This
section presents:
• The design details that are associated with connecting a PSCN network directly to the
customer network.
• Use of a Philips-supplied gateway appliance for the purpose of interfacing a PIIC iX
system to devices on the Hospital network.
• Layer 2 and Layer 3 PSCN network connection designs
In a non-routed PSCN, the Philips distribution layer or access layer switches can be connected
to the HLAN. Philips switch interfaces should be configured as routed interfaces to avoid
spanning tree issues between the Philips and customer switch networks.
Layer 3 Connectivity
Gateway Connection
Figure 3-6 illustrates a Layer 2 PSCN connecting to the HLAN via the Juniper SRX100
Gateway device. Figure 3-7 illustrates a Layer 3 PSCN connecting to the HLAN via the
Juniper SRX100 Gateway device. For more details about using the SRX100 Gateway in your
network, see the Juniper SRX100 Gateway Installation and Service Manual for the PIIC iX.
Implementation
Overview
This chapter describes the details that are specific to the implementation of a Star
Topology for PIIC and PIIC iX.
Non-Routed Star Network Topology
Cisco Implementation Examples
Cisco Small Non-Routed Star Topologies can be designed using all supported
Access switches. While these systems will only use a single VLAN, they may have
multiple VLANs defined in their configurations as a result of the routed star
topology templates used to configure them (see below). Figure 4-1 represents the two
ways a Cisco non-routed system can be connected.
[Figure 4-1: The two supported Cisco non-routed connection options, shown with Catalyst 2960 Series switches]
Note
Inter-switch connections should be made using Gigabit connections. Client devices can be
connected to any port 1-24 on any of the switches.
HP Implementation Examples
HP Small Non-Routed Star Topologies can be designed using all supported HP Access
switches. These systems will only use a single VLAN (VLAN101).
Non-Routed 2 and 3 switch systems (using the HP2510) are connected using
redundant gigabit trunk links. 100Mb interswitch links are not supported.
Figure 4-2 represents the two ways an HP non-routed system can be connected. No other
variations of 2- or 3-switch topologies are supported.
[Figure 4-2: The two supported HP non-routed connection options, shown with HP2510 switches]
Note
Fixed Mode Monitoring is not supported for PIIC iX A.00, but is supported for PIIC and
PIIC iX A.01 and later.
Note
Inter-switch connections should be made using Gigabit connections. Client devices can be
connected to any port 1-24 on any of the switches. Inter-switch links should be set to
auto-negotiate.
Routed Star Network Topology
Overview
The following sections provide the examples and specific details you need to implement a
Routed Star Topology network for use with a PIIC or PIIC iX deployment. It includes the
following topics:
Note
You may use Cisco or HP switches in a star topology. However, the use of HP routers in a
star topology is restricted to, and supported by, the Advanced Network Design and
Implementation (ANDI) network delivery model only. See the Advanced Networking Design
and Implementation Guidelines for Star Topologies document for details and requirements.
Note
If you are using Cisco switches in your network, use the star topology configuration files
supplied on the Network Infrastructure Tools Suite to properly configure each Cisco switch
for its function (Router/Core, Distribution, or Access). Star topology configuration files are
available only for Cisco switches and for the HP 2510 switch at the time of this printing.
Connection Details
Core switches on Star networks are always connected by a direct link. The configuration of
this link can vary if a server farm is present, but the direct link is always one of the following:
• An EtherChannel
• A single-cable trunked connection
Host devices (such as a PIIC DBS, Physiological Server iX, or Smart-hopping APCs) may
not be directly connected to the Core Switch. Only a PIIC iX Primary Server can connect
directly to the core switch, and only if the core switch has an appropriate
(100 Megabit or Gigabit) access interface available for connectivity. In standard Star Topology
configurations, Gigabit ports on a Core Switch are used to inter-connect the Core Switch to
another Core Switch as shown in Figure 4-3. These connections utilize the 1 ft. (0.3 m) cables
with the SFP connectors that ship with each router.
Router A Router B
Core Switch Core Switch
Ports 19-24 are typically used as the uplink ports. These ports are configured as trunks and
have spanning tree enabled. (This does not apply to the Cisco 3750 12-port switch.)
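A representative uplink port configuration (the port number is illustrative) is:

```text
! Uplink port configured as a trunk; spanning tree remains enabled
! (the IOS default) and is not disabled on uplinks
interface GigabitEthernet0/19
 switchport mode trunk
```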
Depending on the switch interconnect link speeds used in the network, there may be a limit to
the number of VLANs on the distribution switch that can be assigned to access ports on the
distribution switch. (The trunk ports on the distribution switch may pass an unlimited number
of VLANs.)
Gigabit ports 0/1 and 0/2 on the Access Switch are always used as the uplink ports to either
the Core Switches (Routers) or Distribution Switches. These ports may be run at 100
Megabits if necessary, based on the port type available on the distribution or core switch
above it. The remaining 24 ports can be configured for device connections.
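As an illustration only (the interface number and VLAN list below are assumptions, not values from the Philips templates), an Access Switch uplink trunk might look like:

```
interface GigabitEthernet0/1
 description Uplink to Core/Distribution Switch
 switchport mode trunk
 switchport trunk allowed vlan 1,101-124
 no shutdown
```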
[Figure: Access Switch, with device Ports 1 to 24 and uplink ports to the Core or Distribution Switches]
Switch Limits
• Because the currently used Core Switches have a maximum of 24 ports, this star topology
design is limited to a total of 24 Access switches.
• The total number of end devices (both DBSD and ITS) that can be connected to a router/
core switch network is 576.
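The 576-device figure follows directly from the port counts above:

```
24 Access Switches x 24 device ports per switch = 576 end devices
```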
The Distribution Switch enables the support of large networks requiring more Access
switches than the Core can support.
• The Distribution Switches must be used in pairs and are directly connected to the Routers
and Core Switches in the topology.
• There is a maximum of 24 Distribution Switch pairs for each star network.
• You may attach a maximum of 24 Access Switches directly to the Router/Core Switch.
• Multiple PIIC and/or PIIC iX deployments should not be mixed on an Access Switch.
• For PIIC deployments, the Access Switch is limited to a maximum of two VLANs per
switch, one VLAN for a single PIIC deployment and one VLAN for the Smart-hopping
deployment.
• When attaching Access Switches to the Router/Core Switch via a Distribution Tier, you
can connect a maximum of six Access Switches per Distribution Switch pair.
• With up to 24 Distribution Pairs supported, this brings the maximum total number of
Access Switches to 144.
Note
The following examples are based on switch to switch uplinks using 100 Megabit links. If
Gigabit links are used, the noted limitations do not apply. See “Switch Interconnection”
on page 2-3 for more information on 100 Megabit and Gigabit limitations.
Figure 4-6 shows a star topology that uses a distribution layer with a single pair of
Distribution Switches.
[Figure 4-6: Router A/Core 1 and Router B/Core 2 connected to Distribution Switches 1A and 1B, which serve up to six Access Switches (Access 1 through Access 6) on the DBSD/ITS network]
Up to 24 Distribution Switch pairs may be used per network. With six Access Switches
allowed per Distribution Switch pair, a maximum Distribution Switch deployment would
allow 144 Access Switches to be connected to the network. Each Distribution Switch may
also use up to 18 ports for end devices, bringing the total number of end devices that can be
connected to a Star Topology with a fully populated Distribution Layer to 4320.
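The 4320-device total can be checked from the counts above:

```
144 Access Switches       x 24 device ports = 3456
 48 Distribution Switches x 18 device ports =  864
                                      Total = 4320
```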
Figure 4-7 represents a routed star network topology with a maximum Distribution Switch
build out. Note that for the sake of brevity, Figure 4-7 does not show all Distribution or
Access Switches.
[Figure 4-7: Router A/Core 1 and Router B/Core 2 with Access Switches 1 through 144 (not all Distribution or Access Switches shown)]
When designing your network, you should factor in the total number of end devices that must
be supported and the location of these devices within the installation site.
Note
If you are using Gigabit all the way to the core layer, there are no limitations. Therefore, the
following rules are not restrictions for Gigabit switches.
1. Host devices may be directly connected to a Distribution Switch. The Distribution Switch
is limited to:
• A maximum of two (2) PIIC or PIIC iX Deployments per switch, and one ITS. (This
rule does not apply to the Cisco 3750 12-port switch.)
2. The remaining ports (ports 1-18) can be configured for connection to devices as follows:
• Three VLANs are assigned to ports on a Distribution Switch. The VLANs can be
implemented only in the following configurations:
Management VLAN
By default, VLAN 1 (the Management VLAN) is used to manage PIIC and PIIC iX network
switches and is not used for data traffic. VLAN 1 can be used to connect to switches for
management of the network infrastructure. In addition, all switch management interfaces are
in the same subnet, no matter which Application Deployment they are connected to. It is also
possible to connect VLAN 1 to the hospital management network. This requires a connection
from the hospital management network to one or both of the Core switches in order to reach
the hospital management console. A static route must be added to the Core switch(es) in
order to access the management console.
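As a sketch (the management-console subnet and next-hop address shown here are assumptions for illustration only), the static route on a Core switch might look like:

```
! Reach the hospital management console network from the Management VLAN
ip route 10.200.1.0 255.255.255.0 172.31.200.254
```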
Requirements
The following requirements must be met before the customer is allowed to access the
Management VLAN.
• An Access Control List (ACL) must be applied on the Management VLAN interface. This
ACL will only allow customer management access from specific management subnets.
This is to restrict access to devices from only the Network Operation Center (NOC) and
from no other point on the hospital LAN.
• Only SNMP Read Only (RO) access is allowed. The SNMP RO password may not be the
default 'public'.
If you use the Ping command to access devices on the subnets, the Internet Control
Message Protocol (ICMP) ping access guidelines must be followed.
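A minimal sketch of these requirements on a Cisco Core switch, assuming a hypothetical NOC management subnet of 10.200.1.0/24 and a site-chosen community string:

```
! Permit management access from the NOC subnet only
access-list 10 permit 10.200.1.0 0.0.0.255
!
! Apply the ACL to the Management VLAN interface
interface Vlan1
 ip access-group 10 in
!
! SNMP read-only, non-default community string, limited by the ACL
snmp-server community Site-RO-String ro 10
```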
Router Configurations
A Cisco Layer 2/Layer 3 Switch will be used as a router on routed network topologies.
With the exception of the 3750 12-port, all Cisco routers will be pre-configured at the factory
for use in Star topologies and require little to no configuration in the field. The ports and
subnet designations are pre-set.
This section describes the router configurations for routed topologies and includes:
• Subnet Configuration
• Router Load Balancing
• Spanning Tree Protocol Considerations for PIIC and PIIC iX
Subnet Configuration
Table 4-1 lists the subnet configuration of the routers when used with star topologies. This
configuration will allow legacy and new topologies to co-exist using one standard router
configuration. You will be required to change the router’s default, factory configuration to
support star topologies. The router configurations required to support star topologies are
provided on the Network Tools Suite.
When configured to support star topologies, the router interfaces will function as trunks. A
trunk is a router or switch interface that is configured to carry multiple VLANs.
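For example, you can verify which VLANs a given trunk interface is carrying with the following command (output varies by site):

```
Switch#show interfaces trunk
```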
Table 4-1: Subnet Configuration for Star Topologies for PIIC and PIIC iX
Physical Router Interface/Port # | Function/Name | Subnet
Note
VLANs 101 to 124 are configured on the routers and enabled by default.
VLANs 200 through 222 are included in the router templates, but commented out by default.
VLANs 201 through 222 are for use on PIIC iX only with a /24 subnet mask.
Depending on the topology type, a different STP algorithm may be used. Cisco's Rapid
Per-VLAN Spanning Tree algorithm (Rapid PVST+) is used. Table 4-3 lists the root bridge priority level
for each of the switches in the hierarchy.
Table 4-3: STP Root Bridge Priority Level for Network Switches (Star Topologies)
Note: In Star Topologies, the root bridge priority of the Core 1, Core 2, Distribution 1, and
Distribution 2 switches will alternate by VLAN.
The STP types and values will be included in the standard configuration files for each switch
type on the Network Tools Suite.
Another STP parameter that must be configured on each switch is the portfast
configuration parameter. For HP switches this is referred to as edgeport. This parameter is
intended for use on ports that have servers, bedsides, APs, APCs and devices other than
switches or routers attached to them.
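For example (the interface number is illustrative), on a Cisco device-facing access port:

```
interface FastEthernet0/5
 description Bedside monitor port
 spanning-tree portfast
```

On HP switches the comparable setting is the edge-port parameter (for example, `spanning-tree 5 admin-edge-port`); the exact syntax varies by HP model.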
All other STP parameters should be allowed to default. This will yield the correct spanning
tree configuration and topology.
Direct Connection
Note
The Direct Connect methods shown below are optional approaches to connecting the PSCN to
the HLAN. The Philips standard approach uses the Juniper SRX100 Firewall to connect the
PSCN to the HLAN. See the Juniper Installation and Service Manual for the PIIC iX for
complete details.
Basic Steps
Existing Topologies
The pre-existing Philips network will look like one of the topology types below. Please note
that specific numbers and topologies of switches will vary from site to site. Some smaller
networks will be as simple as one or two switches.
Ring Network
Physical Connectivity
In order to connect the Philips network to the HLAN you will first need to configure a
physical connection between the two networks. For most switches this will be a 100 Mbps
copper connection. Depending on the switch type, there may be a dual personality gigabit
connection available. In that case you can use a gigabit copper connection or an SFP to
connect to the HLAN.
For Ring networks, you must connect to the HLAN from the ICN Core switch. For very small
star networks you may connect from an Access switch. For L2 Star networks with
Distribution Switches, connect to the HLAN from the Distribution A switch.
The interface that connects from the PSCN to the HLAN must be an access port. Make sure to
disable spanning-tree portfast (edge port on HP switches) on this interface.
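A sketch of the PSCN-side interface (the interface and VLAN numbers are assumptions for illustration):

```
interface FastEthernet0/24
 description Link to HLAN
 switchport mode access
 switchport access vlan 120
 spanning-tree portfast disable
```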
VLAN Numbering
It is recommended but not necessary to use a VLAN number on the PSCN switches that
matches the VLAN number on the HLAN. Using the same VLAN number will avoid any
VLAN mismatch between switches. However, this will most likely require re-numbering the
existing VLAN in use on the PSCN switches.
For Ring networks, all ports are in VLAN 1. Renumbering will require all ports, including
inter-switch link ports to be configured as access ports in the new VLAN.
For Star networks, only the existing access ports need to be reconfigured in the new VLAN.
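For example, moving existing access ports into a new VLAN might look like this (the VLAN number and port range are illustrative only):

```
vlan 520
 name Clinical
exit
interface range FastEthernet0/1 - 18
 switchport access vlan 520
```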
HLAN configuration
The gateway for the ICN is configured on the HLAN switches or routers. The HLAN switch
port may be configured as an access port or a routed port (no switchport). If configured as an
access port, the interface should be in the same VLAN as the SVI interface. The SVI (Layer 3
VLAN) interface may be configured on the edge switch or elsewhere in the Hospital network.
The preferred method of configuring the HLAN interface is as a routed interface. This is to
create a spanning tree boundary between the two networks. The Hospital IT department must
also enable multicast routing and PIM on their routed or VLAN interface, and L2 interface.
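A sketch of the hospital-side routed interface (addresses follow the example later in this section; the PIM mode and interface number are assumptions to be confirmed with Hospital IT):

```
ip multicast-routing distributed
!
! Routed port facing the PSCN, with multicast enabled
interface GigabitEthernet1/0/1
 no switchport
 ip address 10.20.20.1 255.255.255.252
 ip pim sparse-dense-mode
```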
Verify connectivity to the HLAN by pinging the gateway address from devices attached to the
PSCN switches. You should also be able to ping devices on other subnets, such as the HL7
server.
Basic Steps
5. For EIGRP
a. AS number for EIGRP.
6. For OSPF
a. Additional Loopback Interface IP address for router ID.
Existing Topologies
See Figure 4-5 for an example of the pre-existing Philips network. Note that this is a
simplified view.
Physical Connectivity
In order to connect the Philips network to the HLAN you will first need to configure a
physical connection between the two networks. Networks that use the Cisco 3560V2-24TS
switch require 100 Mbps copper connections.
For a 3750-24 FS, or 3750v2-24 FS you may either use 100 Mbps multimode fiber, or one of
the Gigabit SFP connections. If the Gigabit SFP connections are already in use for the Core A
to Core B links or for a Server Farm, you may make a new Core A to Core B link using two
of the 100 Mbps MT-RJ Multi-mode ports and use the Gigabit link to connect to the HLAN.
For a 3750G-12S you may use any of the unused SFP ports with the desired SFP modules
(MM Fiber, SM Fiber or Copper).
See Table 4-6 for a summary of physical connection types for each switch.
For any Star network, you must always have a link between the two PSCN Core switches. It
is recommended to use an ether-channel link.
For existing networks that have all Core switch ports already in use you may use an ANDI
design to stack an additional 3750 switch with the existing 3750. See the ANDI Guidelines for
Star Topologies guide for more details.
You may use one or two links to connect the PSCN to the HLAN, however two links are
recommended. If using only one link, connect to the HLAN from the PSCN Core A switch.
The customer's edge switch interfaces should also be configured either as routed interfaces
or as Access ports in the VLAN that homes back to the Distribution or Core.
Example
In this example two subnets were created. 10.20.20.0/30 is used to connect Philips Core A to
HLAN Switch A, and 10.30.30.0/30 was used to connect Philips Core B to HLAN Switch B.
Fa1/0/1 on each Philips Core Switch was used to connect between the PSCN and HLAN
Switches. Any unused interface can be used. The interfaces were configured as routed
interfaces using the no switchport command.
On Philips Core A:
interface FastEthernet1/0/1
description Link to HLAN A Switch
no switchport
ip address 10.20.20.2 255.255.255.252
duplex full
On Philips Core B:
interface FastEthernet1/0/1
description Link to HLAN B Switch
no switchport
ip address 10.30.30.2 255.255.255.252
duplex full
Once the connection is made and the link lights are on, L2 and L3 connectivity can be tested
between the PSCN and HLAN devices. Pings should be repeated for all IPs.
The following example illustrates the commands used for Next Hop Connectivity testing.
These commands are shown for reference and do not show the complete output.
RouterA_3750_Star#ping 10.20.20.1
A simple method of sharing routes between the PSCN and HLAN is to use static routing. A
default route is put into each of the PSCN Core switches. The customer IT department points
to the Philips Clinical subnets with static routes. To maintain connectivity in the event of a
switch failure, the HLAN routers should also be configured to advertise the static routes into
their routing protocol. This can be done by redistributing the static route into the routing
protocol on each HLAN switch.
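On each HLAN switch this could be sketched as follows (the EIGRP AS number and metric values are assumptions; the hospital's actual routing protocol and metrics will vary):

```
! Static route to the Philips clinical subnet
ip route 172.31.152.0 255.255.248.0 10.20.20.2
!
! Advertise the static route into the hospital routing protocol
router eigrp 100
 redistribute static metric 100000 100 255 1 1500
```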
HSRP interface tracking must be enabled on the PSCN Core switches. This is critical in
order to protect against black holing traffic in the event that one of the HLAN switches or links
fails.
In the following example, the default route is pointed to 10.20.20.1 from Core A and
10.30.30.1 from Core B. The clinical VLAN is VLAN 120. Interface tracking is used to track
the connection to the HLAN (FastEthernet 1/0/1) on each Core Switch. The HSRP priority is
set to decrement by 50 in the event of an interface failure. Be sure to enable standby preempt
on both Core switches, as this is required with interface tracking.
On Philips Core A:
ip route 0.0.0.0 0.0.0.0 10.20.20.1
interface Vlan120
description ICN #20 - Wired Subnet
ip address 172.31.152.2 255.255.248.0
no ip redirects
ip pim sparse-dense-mode
standby 0 ip 172.31.152.1
standby 0 priority 110
standby 0 preempt
standby 0 track FastEthernet1/0/1 50
On Philips Core B:
ip route 0.0.0.0 0.0.0.0 10.30.30.1
interface Vlan120
description ICN #20 - Wired Subnet
ip address 172.31.152.3 255.255.248.0
no ip redirects
ip pim sparse-dense-mode
standby 0 ip 172.31.152.1
standby 0 priority 90
standby 0 preempt
standby 0 track FastEthernet1/0/1 50
The Clinical subnets do not need to be advertised to the HLAN routers because the HLAN
routers will be configured with static routes to point to them.
On HLAN Router A:
ip route 172.31.152.0 255.255.248.0 10.20.20.2
The default route is listed as a static route on each of the PSCN Core switches. You will need
to ping from end-to-end to confirm the customer IT department has correctly configured their
routers.
RouterA_3750_Star#sh ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS
level-2
ia - IS-IS inter area, * - candidate default, U - per-user
static route
o - ODR, P - periodic downloaded static route
EIGRP is already in use on the PSCN Cores, so a new EIGRP Autonomous System (AS)
number should be used. You will need to make sure that EIGRP AS 1 is not already in use on
the HLAN. If EIGRP AS 1 is already in use on the HLAN you will need to use a different AS
number on the PSCN.
First you need to enable the routing process and advertise the network that is linking the
PSCN network to the HLAN. In this example EIGRP 999 was used with the two subnet
methods discussed above. If using the one subnet method, the wildcard mask will change in
the EIGRP network statement.
Examples
On Philips Core A:
router eigrp 999
network 10.20.20.0 0.0.0.3
On Philips Core B:
router eigrp 999
network 10.30.30.0 0.0.0.3
Next you need to determine which ICN subnet you want to advertise to the HLAN. In this
case, the VLAN 120 subnet of 172.31.152.0 was used for the PIIC iX devices and advertised.
If a different subnet or masking was used you would need to update the configuration
accordingly. If a /24 subnet mask is used, the wildcard mask would be 0.0.0.255.
On Philips Core A:
router eigrp 999
network 10.20.20.0 0.0.0.3
network 172.31.152.0 0.0.7.255
On Philips Core B:
router eigrp 999
network 10.30.30.0 0.0.0.3
network 172.31.152.0 0.0.7.255
You should see the EIGRP neighbors display on the console. You can also use the following
troubleshooting commands:
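For example (output not shown; results vary by site):

```
RouterA_3750_Star#show ip eigrp neighbors
RouterA_3750_Star#show ip eigrp interfaces
```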
Additionally, use the show ip route command to look at all of the EIGRP routes that are local
and that have been learned from the HLAN. EIGRP routes are shown with a code of D. This
example output is from the one subnet method and is abbreviated (not all connected routes
are shown). Additional test HLAN networks are shown as being learned from EIGRP. HLAN
subnet numbers will vary for each site.
RouterA_3750_Star#show ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS
level-2
ia - IS-IS inter area, * - candidate default, U - per-user static
route
o - ODR, P - periodic downloaded static route
It is very unlikely that the Philips switches will be allowed to connect into the HLAN OSPF
area 0. They will most likely connect to a stub area or an independent OSPF process that will
be redistributed into the Hospital’s main OSPF process.
Examples
On Philips Core A:
interface Loopback1
ip address 3.3.3.3 255.255.255.255
router ospf 1
router-id 3.3.3.3
network 10.20.20.2 0.0.0.0 area 0
network 172.31.152.0 0.0.7.255 area 0
On Philips Core B:
interface Loopback1
ip address 4.4.4.4 255.255.255.255
router ospf 1
router-id 4.4.4.4
network 10.30.30.2 0.0.0.0 area 0
network 172.31.152.0 0.0.7.255 area 0
You should see the OSPF neighbors come up on the console. You can also use the following
troubleshooting commands:
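For example (output not shown; results vary by site):

```
RouterA_3750_Star#show ip ospf neighbor
RouterA_3750_Star#show ip ospf interface brief
```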
You can also use the show ip route command to look at all of the OSPF routes that are local
and that have been learned from the HLAN. OSPF routes are shown with a code of O. This
example output is from the one subnet method and is abbreviated (not all connected routes
are shown). Additional test HLAN networks are shown as being learned from OSPF. HLAN
subnet numbers will vary for each site.
RouterA_3750_Star#sh ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS
level-2
ia - IS-IS inter area, * - candidate default, U - per-user
static route
o - ODR, P - periodic downloaded static route
Network Time Protocol
All Layer 2 Access and Distribution switch templates contain configuration elements that
point to the Layer 3 Core switch management VLAN virtual IP address as the time source.
When enabled, the Layer 3 Core switch acts as an NTP client to the external time source. In
turn, it acts as an NTP server to all Layer 2 Access and Distribution switches, as well as
firewall devices. To enable an External NTP time source at the Core level, both Core A and B
switch templates must be modified and loaded in their respective switches. External NTP time
source configuration sections are already in the core switch templates, but commented out.
To implement an external NTP time source on the core level, two changes must be made to
the Core A and B template file.
1. An external NTP IP address must be added to a template configuration line and the line
must be uncommented.
2. The appropriate time zone configuration lines must be uncommented.
For example, if the NTP Server IP is 10.0.10.50 and the system is deployed in the Central
time zone, the template would be edited to appear as follows:
*********************************************************************
!
! Replace "xxx" below with the NTP Server IP Address if an external
! NTP server is going to be used (eg.: "10.0.4.100"). Then uncomment
! the line by removing the "!" in the front of it.
!
! If no external NTP Server is used, no edits are needed
!
!
*********************************************************************
!
ntp server 10.0.10.50
!
!
*********************************************************************
!
! If using NTP, uncomment the appropriate TWO lines below for the
! time zone you are installing in. Mainland US time zones are shown.
! For other time zones edit the time zone name (text string) and the
! UTC offset value.
!
! If no external NTP Server is used, no edits are needed
!
!
*********************************************************************
!
!clock timezone EST -5
!clock summer-time EDT recurring
!
clock timezone CST -6
clock summer-time CDT recurring
!
!clock timezone MST -7
!clock summer-time MDT recurring
!
!clock timezone PST -8
!clock summer-time PDT recurring
!
Overview
This chapter describes the details that are specific to the implementation of a Star
Topology for PIIC Deployments only and includes the following topics:
Be aware of the following points when creating a Cisco Non-Routed Star Topology
network:
• In order to pass system validation, switch IP addresses and subnet masks must
reside in the address range of the specific PIIC Deployment number that is being
used.
• The default gateway must point to the address of the DBS or Standalone IIC.
See Table 5-1 for details.
n represents the network number and starts at 0 for a single PIIC Deployment.
This variable increments by 8 from there for additional PIIC Deployments. For
example, for PIIC Deployment 2, n would equal 8, for PIIC Deployment 3, n
would equal 16, and so on. See Table 5-2.
When designing your network, you should factor in the total number of end devices that must
be supported and the location of these devices within the installation site.
1. For the star topology, the IP addresses of all of the Core, Distribution and Access Switches
reside in a separate subnet, VLAN 1, the Management VLAN.
3. The Core Switches (routers) are assigned the following IP addresses in the Management
subnet (VLAN 1), but the PIIC system does not currently use these addresses:
• Router A: 172.31.200.2
• Router B: 172.31.200.3
5. The Access Switches get addresses within the range of 172.31.200.100 to 172.31.200.244
on the Management VLAN, VLAN 1.
6. The Core Switches (routers) are also assigned addresses in each of the other subnets. In
the Configuration Wizard, we continue to use the addresses assigned in the applicable
DBSD subnet:
• Router A: 172.31.n.2
• Router B: 172.31.n.3
• Virtual address: 172.31.n.1
7. Because the PIIC system does not use the IP addresses from the Management VLAN
(VLAN 1), the Scan Device function does not work for Network Switches used on the star
topology.
Table 5-3: Star and ITS Wireless Subnet Device IP Addresses for PIIC

Device Type (with Routed Subnet) | DBSD Subnet IPs (Mask: 255.255.248.0) | DBSD Default Gateway | ITS Wireless Subnet IPs (Mask: 255.255.240.0) | ITS Default Gateway
Router A <used for ITS Wireless Subnet Router> | 172.31.n.2 | | 172.31.240.2 | 172.31.240.1
Router B <used for ITS Wireless Subnet Router> | 172.31.n.3 | | 172.31.240.3 | 172.31.240.1
Network Switches and Remote Client Infrastructure | 172.31.n.10 - 102 | 172.31.n.1 | 172.31.240.10 - 20 | 172.31.240.1
1. The reuse of address space may lead to address conflict issues on the PIIC database server.
2. This strategy could position these PIIC customers for a much easier PIIC to PIIC iX
migration in the future.
This section outlines the PSCN IP address changes needed to accommodate an alternate IP
address scheme request.
Note
All proposed PSCN IP addressing changes should be reviewed by Hospital IT to ensure all
needs are being accommodated.
The default IP addresses used for Philips devices are within the 172.31.x.x range. This
alternate IP scheme enables the first two octets to be changed as long as the required subnet
masks are used.
Implementation details and examples are provided in “Layer 3 Routers (Core A and B)
Configuration Changes” on page 5-6.
Note
Changing an existing ITS IP address (first two octets) causes a major clinical disruption.
Planning for down time with clinical staff is required. A back-out plan must be provided to
the customer.
Examples of each of these configuration changes are provided in the sections that follow.
interface Vlan1
ip address x.x.200.10x 255.255.255.0
no ip route-cache
no shutdown
ip default-gateway x.x.200.1
Overview
A concentration of like-servers within a data center is often called a server farm. You can
support connection to a PIIC Deployment Database Server Farm by using a star topology as
shown in the example in Figure 5-1. Customers with large and multiple PIIC Deployments
may wish to co-locate PIIC Deployment Database Servers in a data center. The data center
provides the proper security, ambient environment, and primary and backup electrical
supplies for the valuable PIIC Deployment Database Servers.
Distribution Distribution
Switch 1A GBit Switch 1B
Links
Router A Router B
Core Switch 1 Core Switch 2
Requirements
Note the following requirements when supporting connection to a PIIC Deployment Database
Server Farm using a star topology:
• The PIIC Deployment Database Servers in the Server Farm connect directly to a pair of
Distribution Switches.
• The pair of Distribution Switches connected to the PIIC Deployment Database Server
Farm must be linked together using a Gigabit trunk.
• Each Distribution Switch connected to the DBSs must connect to a Router using a Gigabit
trunk.
• The Core Switches (routers) must be interconnected using a GBit trunk.
• The following Configuration files are provided on the Network Tools Suite to configure
the Cisco Model 2960-TC and 2960-S Switches for use as Distribution Switches in a
Server Farm:
• DISTRIBUTIONA_SERVER_FARM_2960TC_STAR.TXT
• DISTRIBUTIONB_SERVER_FARM_2960TC_STAR.TXT
• DISTRIBUTIONA_SERVER_FARM_2960S-TS_STAR_TEMPLATE.TXT
• DISTRIBUTIONB_SERVER_FARM_2960S-TS_STAR_TEMPLATE.TXT
Note
Up to four (4) PIIC Deployment Database Servers may be connected to each Distribution
Switch in the pair of Distribution Switches. If you need to support more than eight (8) PIIC
Deployment Database Servers, then you must add another Distribution Switch pair. Because
a Server Farm requires gigabit connectivity, a second Server Farm Distribution Switch pair
requires an ANDI design. Refer to the ANDI Reference Guide for guidelines on gigabit
connectivity, VLANs, and connecting Database Servers.
Table 5-4: Switches and Routers Supported for Use in a Server Farm
SFP Module | All Cisco Gigabit SFP Modules supported by Philips for the Cisco 3750, 3560, and 2960 switches | Gigabit fiber and copper only; 100FX not supported
Configuration Element | Setting | Notes
 | extend system-id |
Access interfaces 1-24 | Ports 1-4, set as needed; all ports should be access ports, with a single VLAN assigned to each as dictated by design | Set as needed for all defined VLANs on which database servers will reside
Uplink interfaces | Gbit; media type SFP or rj45; full duplex; mode trunk; no shutdown | Connections to routers may be SFP only; no copper SFP is supported in routers
line con 0
exec-timeout 2 0
line vty 0 4
password m3150
login
line vty 5 15
password m3150
login
Note
Extra blank "!" lines are needed after the "media-type" command to create some delay before
the next (duplex) command is executed. It has been observed on the Cisco 2960TC that the
media-type command needs a delay before the switch can accept the next command;
otherwise the command following the media-type command will fail. As shown, this
command line string works with a line delay setting of 100 ms.
! ************************************************************
! Configuration File for Server Farm Distribution Switch "A"
! Cisco 2960TC
! DISTRIBUTIONA_SERVER_FARM_2960TC_STAR.TXT
! Ver. B.2 25-May-2011
!
! Vlans 1,101-124 are defined
! Sets VLANs 101, 103, 105, 107 on ports 1-4
! Shuts down ports 5-24
! Gig port 1 is configured for 1000Fx SFP (to router)
! Gig port 2 is configured for embedded copper (to dist sw B)
! Mgmnt VLAN IP is 172.31.200.20 255.255.255.0
! ************************************************************
!
config t
!
no service pad
service timestamps debug uptime
service timestamps log uptime
service password-encryption
!
hostname Server_Farm_Distribution_Switch_A
!
enable secret m3150e
enable password m3150
!
ip subnet-zero
!
no ip igmp snooping
no ip igmp filter
vtp domain philips
vtp mode transparent
!
spanning-tree mode rapid-pvst
no spanning-tree optimize bpdu transmission
spanning-tree extend system-id
!
interface GigabitEthernet0/2
description UTP Uplink Port to Distribution Switch B
switchport trunk allowed vlan 1,101-108
switchport mode trunk
media-type rj45
speed 1000
duplex full
exit
! Uncomment the following section if you will be using a fiber link
! between the two distribution switches
!
! interface GigabitEthernet0/2
! media-type sfp
!
! Edit Vlan1 IP address if values other than those shown are to be used
!
interface Vlan1
ip address 172.31.200.20 255.255.255.0
no ip route-cache
no shutdown
!
exit
!
ip default-gateway 172.31.200.1
ip http server
!
snmp-server community public ro
!
banner motd #
Warning: Access to this system is restricted to authorized personnel
only. Unauthorized access will be prosecuted.
#
!
line con 0
exec-timeout 2 0
line vty 0 4
password m3150
login
line vty 5 15
password m3150
login
!
exit
!
exit
!
wri mem
!
Note the following about the settings in the sample configuration file:
In the corresponding server farm switch “B,” VLANs 102, 104, 106, 108 are configured for
ports 1-4 respectively. The presumption is that DBSs will be added alternately to each switch
as the PIIC Deployment VLANs increment. This will provide some load balancing across the
distribution switches. As such, the difference in the number of DBSs on each of the
Distribution Switches should not be greater than one.
The maximum number of DBSs on the server farm (total across both switches) is eight.
The following configuration attributes may be edited in the configuration file as needed:
• The mapping of VLANs to interfaces. Only one VLAN should be assigned to each port to
which a DBS is connected.
• The management VLAN IP and subnet mask
• The media type for gig port 2. If fiber SFPs are used to interconnect the distribution
switches, the port media type must be changed. The speed must be kept at 1000.
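For example, a sketch of the media-type edit for gig port 2 when fiber SFPs interconnect the distribution switches (interface and values taken from the sample file above; verify against your installed configuration file before applying):

!
interface GigabitEthernet0/2
 media-type sfp
 speed 1000
 duplex full
exit
!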
Table 5-6 lists the settings required to configure a Router to support a server farm.
Configuration Element     Setting                    Notes
spanning-tree             extend system-id
Uplink interfaces (Gbit)  Media type SFP             With the exception of the 3750-12,
                          Full duplex                no copper SFPs are supported in routers;
                          Encapsulation dot1q        fiber SFPs must be used.
                          channel-group 1 mode on
                          No shutdown
Line settings             line con 0
                          exec-timeout 2 0
                          line vty 0 4
                          password m3150
                          login
                          line vty 5 15
                          password m3150
                          login
The following files are pre-loaded into the router flash memory and provided on the Network
Tools Suite to configure a router to support a server farm.
You may use the following star topology Core Switch/Router configuration files:
• ROUTERA_3560_STAR.TXT
• ROUTERB_3560_STAR.TXT
• ROUTERA_3750_STAR.TXT
• ROUTERB_3750_STAR.TXT
After you have configured the 3560 Router or 3750 Router to support a server farm by
loading the appropriate configuration file, you must take the additional step of disabling port
channeling on the router’s gigabit uplink ports. Use one of the following files to do this (these
files are also available in the Network Tools Suite).
• ROUTER_3560_STAR_SERVER_FARM_PREP.TXT
• ROUTER_3750_STAR_SERVER_FARM_PREP.TXT
Each of these files disables port channeling on the router’s gigabit uplink ports. Note that you
must disable port channeling on both the A and B Core Switches within the server farm
topology.
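The contents of the PREP files are not reproduced here. As a sketch only, removing a gigabit uplink from its port channel in Cisco IOS typically takes the following form (the interface name is illustrative, not taken from the PREP files):

!
interface GigabitEthernet0/1
 no channel-group
exit
!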
Note
When you deploy Cisco 3750s as Layer 2 switches in a routed topology, you must also use
Cisco 3750s as the routers to which the L2 switches connect. Configuring the Cisco 3750 as
a Layer 2 Switch provides the additional benefit of eliminating the use of media converters
to connect multiple Cisco 3750 Layer 2 Switches to Cisco 3750 Routers since both device
types have 24 100FX MRTJ fiber ports and two Gigabit Ethernet SFP ports. Note that the
Cisco 3750 will not provide routing functions when configured as a Layer 2 switch.
The Cisco WS-3750-24FS front-panel ports 1/0/1 through 1/0/24 support multi-mode fiber
connections only. If other fiber modalities are required, the Cisco WS-C3750V2-24FS with
24 SFP ports may be used. In this case, the purchase of additional non-multimode SFPs may
be required.
[Figure: ROUTER-A (Cisco 3750, 172.31.200.2) and ROUTER-B (Cisco 3750, 172.31.200.3) connect via trunked Fa 1/0/1 - Fa 1/0/3 ports to trunked Gi 0/1 and Gi 0/2 uplinks on Access Switches STAR-ACCESS1A (Cisco 2960, 172.31.200.21-.23) and STAR-ACCESS2A (Cisco 2960, 172.31.200.31-.33)]
For each device listed in Table 5-7, the required port configuration on the host switch is
given. These rules must be met and supersede any other guidance in this document.
Additionally, supported topologies may be further limited by a low-density switch that may
have only auto-negotiate ports. Operational connection speed is the speed and duplex at which
the link operates under the specified NIC and switch port settings.
Table 5-7: Network Switch Port Requirements for End Devices in PIIC Networks
(Star Topology)
Star Topology Rules
Figure 5-4: Non-Routed Star Uplink Port and IP Address / Subnet Mask
Assignments
Note
Extension switches are needed only for Cisco Star topologies. No Extension switches are
needed with HP implementations.
Note
Access Switches
• Use ACCESS_2960(TC/TC)_TEMPLATE.TXT
• All host ports 1-22 are configured as access ports for PIIC Deployment VLAN (101 in this
example)
• Uplink ports 22 and 23 (connected to Extension Switches) are configured as access ports
for PIIC Deployment VLAN (101).
Figure 5-5: Fixed Mode Monitoring Routed Star Uplink Port and IP
Address / Subnet Mask Assignments
Note
Extension switches are needed only for Cisco Star topologies. No extension switches are
needed for HP implementations.
• The IP address, Subnet Mask, and Default Gateway must reside in the address range of the
specific ICN # (extension switches shown in Figure 5-5) to pass system validation. All
other switches (Access, Distribution, and Core) follow the VLAN 1 Management IP and SM
scheme.
• A maximum of 2 Extension switches can be uplinked to an Access Switch
• Switch IP Address and SM must reside in the address range of the PIIC Deployment
(e.g. PIIC Deployment 1(vlan 101): 172.31.0.10 255.255.248.0)
• Default Gateway must point to DBS or Standalone IIC for that PIIC Deployment
(e.g. PIIC Deployment 1 DBS-172.31.3.0)
• Use Access_2960_Extension_Switch.TXT template
• All host ports 1-24 are configured as access ports for VLAN1
• Uplink port (G0/1) is configured as an access port
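Putting the extension switch addressing rules above together for PIIC Deployment 1, the management addressing portion of the template would look something like the following sketch (addresses are the example values given in the bullets above):

!
interface Vlan1
 ip address 172.31.0.10 255.255.248.0
 no shutdown
exit
!
ip default-gateway 172.31.3.0
!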
Access Switches
• Use ACCESS_2960(TC/TC)_TEMPLATE.TXT
• All host ports 1-22 are configured as access ports for PIIC Deployment VLAN (PIIC
Deployment1 VLAN 101 in this example)
• All host ports 1-22 can be configured as access ports for ITS VLAN # (124)
• Host Ports can be divided as access ports for PIIC Deployment and ITS
• Uplink ports 22 and 23 (connected to Extension Switches) are configured as access ports
for PIIC Deployment VLAN (e.g. 101)
• Uplink port G0/1 (connected to the Distribution Switch) is configured as a Trunk port for
VLANs 1, 101-124
Distribution Switches
• Use DISTRIBUTION(A&B)_CISCO_2960(TC/TT)_TEMPLATE.TXT
• All host ports 1-18 are configured as access ports for 2 PIIC Deployment VLANs # (e.g.
101, 102)
• All host ports 1-18 are configured as access ports for ITS VLAN # (124)
• All host ports 1-18 divided as access ports for 2 PIIC Deployment VLAN # (e.g. 101, 102)
and ITS # (124)
• Uplink ports 19-24 connected to Access Switches are configured as Trunk ports for
VLANS 1,101-124
• Uplink port G0/2, connecting Distribution Switches A and B together, is configured as a
Trunk port for VLANs 1, 101-124
• Uplink port G0/1, connecting to the appropriate Router (A/B), is configured as a Trunk port
for VLANs 1, 101-124
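As a sketch, the trunk settings described in the uplink bullets above typically take this form in the distribution templates (G0/1 shown; the same allowed-VLAN list applies to G0/2 and ports 19-24):

!
interface GigabitEthernet0/1
 switchport trunk allowed vlan 1,101-124
 switchport mode trunk
exit
!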
Core Routers
You may use Cisco or HP switches in a star topology. If a Cisco Switch is used at the Core,
only Cisco switches may be used for Distribution or Access layers. If an HP Switch is used at
the Core, HP must be used at the Distribution layer, but HP or Cisco may be used at the
Access layer.
Note
Use the star topology configuration file templates supplied on the Network Infrastructure
Tools Suite to properly configure each network switch on your network for its function
(Router/Core, Distribution, or Access). Refer to the Advanced Networking Design and
Implementation Guidelines for Star Topologies for details on HP network implementations.
The Access Switch to which the Fixed Mode Bedside Switch is connected must have a port
provisioned for that connection. Access Switches typically have all their ports configured for
end devices, not other switches. For that reason, the following settings must be configured for
the Access Switch port to which the Fixed Mode Bedside Switch will be connected. The
initial configuration state is presumed to be that of a star topology Access Switch with no
“accesssetting.txt” file executed.
The port must be assigned to the VLAN that the Fixed Mode Bedside Switch beds are to be
part of:
!
interface FastEthernet0/24
description uplink port to Fixed Mode Bedside SW
switchport access vlan 101
switchport mode access
speed 100
duplex full
spanning-tree portfast disable
!
The following settings must be configured for the Fixed Mode Bedside Switch:
Overview
This chapter describes the details that are specific to the implementation of a Star
Topology for PIIC iX deployments and includes the following topics:
For turnkey PIIC iX deployments, you will need to map the IP addresses/subnets to
VLANs, and do so in a way that can be replicated from one customer to another. The
IP address subnet size and scheme will generally depend on whether the system is a
routed or non-routed solution, the size of the system, the existence of telemetry, and
the network infrastructure requirements needed to support it.
Because they are standalone, Ring Topologies do not have the concept of VLANs (i.e. all
ports in a non-routed Ring are usually on the same default VLAN 1). For a non-routed system
to communicate with external applications outside of its subnet, interfacing to the HLAN is
required. This can be achieved by either using a Layer 2 solution or via a Gateway device.
Careful IP address planning and collaboration with the HLAN IT department is still necessary
when choosing which IP Scheme to use to avoid complex NAT rules.
• The PIIC addressing schemes and VLAN scheme can be used for PIIC iX deployments if
the customer will not exceed a 128-bed system and plans to reuse existing switches.
Systems larger than 128 beds will not support Non-routed configurations.
Table 6-1 provides the PIIC iX values for a Non-routed (Layer 2) network with
Non-routed ITS (the ITS infrastructure is included in the subnet).
See Appendix B for additional IP addressing rules regarding non-routed PIIC iX with a
non-routed smart-hopping network.
Note
PIIC iX VLANs 201-222 are commented out in the routed template. To add these VLANs,
uncomment the required VLANs. There is a Cisco limit of 32 HSRP groups. The default
router configuration uses VLANs 1, 101-124 for a total of 25 HSRP groups. This leaves 7
HSRP groups that can be added (uncommented) to the routed configuration. If more than 7
VLANs are needed, then any unused VLANs will need to be removed (commented out)
from the router configuration.
! **********************************************************************
! Interface Configuration for VLANs 201-222 is left commented out due
! to the Cisco HSRP group limit
! **********************************************************************
!interface Vlan201
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
!!
!interface Vlan202
! ip pim sparse-dense-mode
! ip mroute-cache distributed
!!
!interface Vlan203
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
The following example illustrates how to add VLANs 201-203. Note that VLANs 117-122
are not in use and are commented out.
!interface Vlan117
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
! standby ip 172.31.128.1
! standby timers 1 3
! standby preempt
!!
! interface Vlan118
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
! standby ip 172.31.136.1
! standby timers 1 3
! standby preempt
!!
! interface Vlan119
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
! standby ip 172.31.144.1
! standby timers 1 3
! standby preempt
! interface Vlan120
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
! standby ip 172.31.152.1
! standby timers 1 3
! standby preempt
!!
! interface Vlan121
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
! standby ip 172.31.160.1
! standby timers 1 3
! standby preempt
!!
! interface Vlan122
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
! standby ip 172.31.168.1
! standby timers 1 3
! standby preempt
!!
interface Vlan201
no ip redirects
ip pim sparse-dense-mode
ip mroute-cache distributed
interface Vlan202
ip pim sparse-dense-mode
ip mroute-cache distributed
interface Vlan203
no ip redirects
ip pim sparse-dense-mode
ip mroute-cache distributed
Care must be taken with the new PIIC iX VLAN scheme because the Clinical domain can span
multiple subnets and/or VLANs. This means that better Layer 3 interface descriptions, as well
as VLAN descriptions, are necessary in order to correctly identify all subnets or VLANs
belonging to the same Deployment.
If you make a decision to re-use an existing VLAN/IP address scheme for a PIIC iX, then you
are bound by the same system size rules that apply to PIIC.
Philips bedside devices require DHCP to obtain an IP address. DHCP must be provided by a
PIIC iX DHCP server or customer-provided DHCP server. Name resolution is required by
PIIC iX PCs and Servers. For PIIC iX PCs and Servers residing in the same subnet, names
can be resolved via NetBIOS or Domain Name System (DNS). For PIIC iX PCs and Servers
residing in different subnets, DNS name resolution must be provided. PIIC iX PCs and
Servers may also need to resolve hospital information system server names.
One approach to limiting the need for DNS server name resolution would be to put the PIIC
iX servers, Surveillance, and Overview PCs in the same subnet. In this case, NetBIOS could
be used for name resolution, eliminating the need for DNS name resolution. However, this
approach does not address the possible name resolution needs of external interfaces, such as
HL7, IEM, etc.
Note
An IP Helper must be used for DHCP server solutions that require a DHCP server to provide
an address to one or more VLANs other than the VLAN containing the DHCP server. In
these cases, an IP helper must be configured on each VLAN requiring DHCP-provided
addresses, but not containing a DHCP server. The IP helper identifies the IP address of the
DHCP server expected to reply to DHCP IP address requests.
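On Cisco devices, the IP helper is configured with the ip helper-address command on each VLAN interface that needs relayed DHCP. A sketch, with an illustrative VLAN and DHCP server address (not values taken from this document):

!
interface Vlan102
 ip helper-address 172.31.3.10
exit
!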
Note
IP addresses for Smart-hopping Patient Worn Devices (PWDs) and Patient Worn Monitors
(PWMs) are provided by the Philips Access Point Controller (APC) BOOTP/DHCP server.
Since PIIC iX has routing capabilities, the default network will be a /24 for customers looking
to deploy a system size greater than 128 bedsides. Customers can also choose this subnet size
for smaller deployments (less than 128 bed systems) that require a flexible IP address scheme
due to HLAN interfacing requirements or to avoid using NAT.
The standard PIIC IP addressing and VLAN scheme can be used if the customer will not
exceed a 128-bed system and plans to reuse the routers and switches currently in place.
Note that if external interfacing (HLAN -> PSCN) is needed, the IP addresses will either
have to be unique or NAT will be necessary.
• The total system size for each Distribution/Access layer is 446 devices. If you need
more than 446 devices but are below 512 bedsides, you will need to add a second
Distribution layer with several Access layer switches. The system cannot exceed 512
bedsides.
• For systems larger than 512 bedsides, you must use Gigabit uplinks for all uplinks. To
meet this requirement, you may need to add new routers that support Gigabit
interfaces.
• Use the Cisco Catalyst 2960-S switch for all Distribution switches. This switch supports
all Gigabit interfaces (24 copper and 2 SFP). See the Cisco Catalyst 2960-S Installation
and Service manual (453564327661) for details.
• Connection from the 2960-S switch to the Routers is done on the SFP port using
1000 FX GBIC
• All Primary Server iX and Physiological Server iX servers are located in the Distribution
switches
• Surveillance PIIC iX systems and Clients are located in the Access layer
• PIIC iX systems for a large installation are located in the same Distribution/Access layer
• Remote bedside devices can be located in a different Distribution/Access layer than the
PIIC iX systems.
• Access layer switches must have Gigabit uplinks to the Distribution layer switches
• Up to 18 Access switches are allowed for each Distribution pair
• Up to 10 Surveillance PIIC iX systems, or Clients, or Physiological Server iX systems are
allowed for each Access layer switch.
• All uplink ports must be trunked.
• All trunks must pass all PIIC iX VLANs as well as the ITS and Management VLANs
(200-220, 124)
• IP addresses on the router for VLANs 201-220 default to /24s and should be unique
within the customer network if possible, to avoid NAT. However, before using NAT, be
sure to consider the potential impact on current and future Hospital Information System
interfacing.
• All PIIC iX systems are located on the first /24 subnet. Bedside devices are allocated to
/24 subnets as needed.
Note
Access and distribution level switch ports can be used for Fixed-Mode monitoring, therefore,
both are listed in Table 6-3.
865339 / 453564197311  Cisco Systems  Cisco Catalyst 3750G-12S, 12 Gigabit Ethernet SFP ports (distribution)  12.2(55)SE4  YES
866051 / 453564261781  Cisco Systems  Cisco 3750V2-24, Ethernet 100FX SFP ports and 2 SFP Gigabit Ethernet ports (distribution)  12.2(58)SE1  YES
865054 / 453564061211  Cisco Systems  Cisco Catalyst 2960, 24 10/100 + 2 10/100/1000  12.2(55)SE  YES
865055 / 453564099371  Cisco Systems  Cisco Catalyst 2960, 24 10/100 and 2 dual-purpose uplinks  12.2(55)SE  YES
Note
Currently only Cisco access switches are supported for LLDP-MED and the dependent
Fixed Mode Monitoring feature.
The LLDP-MED feature must be enabled on each switch access port used for PIIC iX Fixed
Mode Monitoring in order for the feature to function. Vendor-specific switch configuration
must be done on each fixed mode monitoring access switch. For Cisco switches, LLDP-MED
can be enabled globally or on each individual switch port. Unlike the PIIC Fixed Mode
Monitoring implementation, an Extension switch is NOT required. Any switch port can be
used as a Fixed Mode Monitoring port. Access level switch ports are the likely connection
location for Fixed Mode Monitoring ports and devices. However, a Fixed Mode Monitoring
access port can be on either a Distribution or an Access layer switch.
On Cisco access switches, LLDP is enabled by default on all supported interfaces to send and
receive LLDP information. LLDP can be disabled globally using the following commands:
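As a sketch, the standard Cisco IOS commands are shown below; LLDP can also be disabled on individual interfaces (the interface name is illustrative):

!
! Disable LLDP globally
no lldp run
!
! Re-enable LLDP globally
lldp run
!
! Disable LLDP on a single interface
interface FastEthernet0/24
 no lldp transmit
 no lldp receive
exit
!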
Overview
The PIIC iX Network supports use of the Star network topology. As of December
31, 2010, the Star Topology is the required network topology for all new (i.e.,
System Release J and later) Philips clinical network installations. See Chapter 3
for full details about Star Topology concepts and implementation. Additionally,
Table 7-1 summarizes the specific topology support requirements determined by the
network plan.
Philips Support P/N  Part/Option Number  Description
Currently installed ring network requesting an expansion: Allowed, but you must convert all
non-redundant core switch rings to a redundant ring, and you should consider a possible
conversion to a star topology network.1
This chapter describes the conversion process for moving from a Ring Topology to a
Star Topology and includes the following topics:
• Supported network hardware for a Star Topology and the technical differences
between ring and star topologies
• High level conversion process
• Ring to Star conversion scenarios
Table 7-3 lists the devices that are supported for a Star topology.
• Planning
• Preparation
• Conversion
• Verification
Step 1 - Planning
The first step is the planning phase and requires that you prepare a detailed written plan that
outlines the desired Star network. The plan should identify all required switches, cable runs,
and specific port usage. Additionally, you must define and create all of the new configurations
for each switch in the star network.
Step 2 - Preparation
Use the following preparation recommendation steps:
1. Obtain a complete and correct network plan for the existing network design,
implementation, installation, and configuration.
2. Collect all existing ring configurations from the switches.
3. Develop a back out plan to return the network to its original state so that you are prepared
if the conversion effort must be aborted.
4. Pre-configure and label any devices from the Ring network that will not be used in the
Star network.
5. Prepare for the removal and reallocation of the devices from the Ring network that will be
reused in the Star network. To do this:
• Free the cabling from trays and cable management devices
• Remove as many rack screws as possible
6. For larger systems, you must prepare a conversion script. The script should
chronologically identify the conversion steps and resources required for script execution.
7. Meet with the hospital clinical staff to determine appropriate network down times and best
approach to minimizing the impact of the down time. Confirm that the clinical staff is
ready for the necessary conversion down time of the network before bringing any devices
down.
Step 3 - Conversion
Use the following conversion steps:
1. Meet with the hospital clinical staff to manage and minimize the impact of down time.
2. Follow the original network design that you created in the planning phase. Do not make
any changes or additions to the planned design.
3. Use all of the basic network viability checks for physical connections, link lights, ping and
application connection and communication.
4. Use a Phased Go-Live conversion implementation by bringing up access layers one at a
time and adding devices to them prior to attaching the next access device.
Step 4 - Verification
Use the following test and integration steps:
1. Verify at every step of the conversion. Hold design reviews for the results of the network
planning and preparation phases. This should be done as part of the design and configuration
file development phases (Steps 1 and 2).
2. Follow all of the standard test and integration installation steps as part of the installation
and implementation verification phases. (Steps 3 and 4.)
3. Confirm application functionality from the IIC to every device (each bedside monitor, the
DBS, etc.)
Conversion Scenarios
This chapter describes the conversion scenarios for moving from a Ring Topology to a Star
Topology and provides a summary of each of the following types of conversions:
Table 7-4 provides a summary of the various ring to star conversion scenarios along with
details about the type of driver(s) and strategies.
Note: In this table, bold text indicates the addition of new hardware.
1. When converting switches and routers to a Star topology, you must first issue the following commands: del vlan.dat, write
erase, and reload (Cisco factory default)
Be aware of the following characteristics and requirements for a non-redundant ring to
non-routed star conversion:
• You cannot mix HP and Cisco equipment in a non-routed star topology. This topology
requires a single-vendor solution.
• A full system shutdown is usually required.
• A small number of devices and possible reuse of ports and cabling should minimize the
down time.
• Conversion of a network with three non-redundant switches could be a straightforward
procedure that can be accomplished in the following two steps:
• Connect the lower-tier switches
• Load the new configurations into the three non-redundant switches.
Be aware of the following characteristics and requirements for this type of conversion:
• HP Routed Star topologies are supported through the Advanced Network Design and
Implementation (ANDI) process only.
• Significant new hardware is required for this type of conversion.
• For this type of conversion, the router layer can be built first. Existing switches are then
configured as access switches and attached to the router.
• Make sure that you plan for downtime while the access switches are being reconfigured.
• Possible reuse of existing cabling and ports can be done with this type of conversion.
• Distribution layers may or may not be required.
Be aware of the following characteristics and requirements for this type of conversion:
• Possible reuse of existing cabling and ports can be done with this type of conversion.
• It is possible to have 100% switch reuse with only configuration changes.
Be aware of the following characteristics and requirements for this type of conversion:
Be aware of the following characteristics and requirements for this type of conversion:
Hardware Overview
Note
Third-party personnel who install the cable plant used for the Philips IntelliVue
Clinical Network must be certified to install Category 5 (or greater) Unshielded
Twisted Pair and/or fiber cabling. Upon completion of the cable plant installation,
the cable installation personnel must provide Philips (and the customer, i.e. the
hospital IT staff) with documented segment-by-segment test results that
demonstrate the quality and reliability of the cable plant installation.
UTP Cables
Unshielded Twisted Pair (UTP) cabling (in-house and patch) must be compliant to
Electronic Industries Association (EIA)/Telecommunication Industries Association
(TIA) 568-B (copper) or International Organization for Standardization (ISO)/
International Electrotechnical Commission (IEC) 11801 (copper and fiber)
specifications.
Per the EIA/TIA 568, cabling must meet Category 5 (or greater) specifications. Per
ISO/IEC 11801 cabling must meet Class D (or greater) specifications. Patch panel
and patch cable terminations are RJ-45.
Note
Direct connect patch cables and in-wall wiring use the 568A version on both ends. Crossover
cables use the 568A version on one end and the 568B version on the other; therefore, they
invert the transmission and reception wires. When purchased from Philips, crossover cables
have black boots on the cable ends for identification.
Note
Single, continuous-length fiber optic cables are limited to 1000 meters (3281 ft.). You must
use 1000 Megabit, full-duplex connections on fiber optic runs. 100/half connections are not
supported over fiber.
Note
Fiber optic cables use four different types of connectors, SC, ST, MT-RJ, or LC as shown in
Figure A-1.
SC connectors have a square cross section and are used with a Network Switch.
ST connectors have a round cross section and are used with the 10 Mbps Media
Translator.
MT-RJ connectors are small form-factor fiber optic connectors that resemble the
RJ-45 connectors used in Ethernet networks.
LC connectors look just like SC connectors, but they are half the size with a 1.25mm
ferrule instead of 2.5mm.
* - A mode-conditioning patch cord, as specified by the IEEE standard, is required
regardless of the span length.
Wall Boxes
RJ-45 wall boxes for UTP cable connectors are available from Philips for connecting
Patient Monitors, PIIC and PIIC iX Surveillance systems, PIIC and PIIC iX Overview
systems, PIIC and PIIC iX Physiologic Data servers, and the PIIC or PIIC iX Primary
Server to the Network. Quad-port (Option M3199AI #A12) wall boxes are available
for US installations.
Surface mount kits for mounting quad-port wall boxes (Option M3199AI #A13) are
also available. Single port wall boxes and surface mount kits are also available for
certain countries. See your Philips Representative for specific part numbers.
A dual-port RJ-45 wall box is shown in Figure A-2. Each wall box includes places for
labeling UTP cables connecting to each port. A typical label would include the patch
panel number and port number the cable connects to. For example, a label 2-14 would
indicate that the connecting UTP cable came from patch panel 2 - port 14.
UTP wire connections for each pin of port jacks are the 568A Version shown in
Figure A-2. Wiring of UTP cables to internal connectors of wall boxes must be
performed by a certified CAT5 cable plant installer. They should be wired as shown in
Figure A-2.
Figure A-2: RJ-45 Wall Box and Wire Connections for UTP Cable
Patch Panels
The IntelliVue Clinical Network contains many interconnecting cables and wires. To
assure a robust, reliable, and accessible network, each wire connection must be secure,
and identification of wires and cables must be clear. To assist this process, 24-Port
Patch Panels are available from Philips (Option M3199AI #A01). For large systems
with many wires, it is recommended that patch panels be mounted in a floor standing
rack designed for that purpose. However, Philips also provides a Patch Panel Wall
Mount Kit (M3199AI #A05) for mounting the patch panel on a vertical wall.
The 24-port patch panel from Philips is shown in Figure A-3. The Front Panel has 24
RJ-45 ports for connecting 24 UTP CAT5 RJ-45 connectors. Each front panel port
should be labeled for cable identification. Places for port labeling are provided. Snap-
in Philips labels are also included for each port.
The Rear Panel has 24 sections for connecting the 8 individual wires from 24 different
UTP CAT5 cables. Wiring of UTP cables to the rear of the Patch Panel must be
performed by a certified CAT5 cable plant installer. They should be wired as shown in
Figure A-3.
[Figure A-3: 24-port patch panel. Front panel: RJ-45 ports 1-24 (AMP Category 5). Rear panel: per-port terminations for wire positions 1-8, labeled by primary color/stripe color (WHITE/blue, BLUE, WHITE/green, ORANGE, GREEN, BROWN, WHITE/brown)]
Copper Connections
UTP Copper connections are supported for interconnecting network devices where
port availability and EIA/TIA Telecommunication compliances for Category 5 (or
greater) are met.
Fiber Connections
Fiber optic cabling may be used for network device interconnection in cases where
longer point-to-point distances must be spanned. Port requirements for fiber may
dictate specific network device types.
Fiber links may be either 100 or 1000Mbit (Gigabit).
Server Farm Connections between the Routers and the Server Farm
Distribution Switches must be Gigabit speed.
Overview
This appendix is a reference to be used with Chapter 6 and includes the following
PIIC iX IP Addressing scheme tables:
Table B-1: IP Address/VLAN Mappings for PIIC iX Systems
System configurations covered by each addressing scheme:
- Subnet Mask /24 (255.255.255.0), 254 devices per VLAN: Standalone (Non-routed, no Smart-hopping), 32 Beds, 8 Systems; Small Server (Non-routed, no Smart-hopping), 64 Beds, 8 Systems
- Subnet Mask /23 (255.255.254.0), 510 devices per VLAN: Large Server (Non-routed, no Smart-hopping), 128 Beds, up to 160 Systems
- Subnet Mask /22 (255.255.252.0), 1022 devices per VLAN: Large Server (Non-routed with Smart-hopping), 128 Beds, up to 160 Systems; Large Server (Routed), 128 - 1024 Beds, 160 Systems

IP Address    | VLAN  Mask /24       # Dev | VLAN  Mask /23       # Dev | VLAN  Mask /22       # Dev
172.21.200.0  | 200   255.255.255.0  254   | 200   255.255.254.0  510   | 200   255.255.252.0  1022
172.21.201.0  | 201   255.255.255.0  254   | --MGMT-- SKIPPED           | --MGMT-- SKIPPED
172.21.202.0  | 202   255.255.255.0  254   | 202   255.255.254.0  510   |
172.21.203.0  | 203   255.255.255.0  254   |                            |
172.21.204.0  | 204   255.255.255.0  254   | 204   255.255.254.0  510   | 204   255.255.252.0  1022
172.21.205.0  | 205   255.255.255.0  254   |                            |
172.21.206.0  | 206   255.255.255.0  254   | 206   255.255.254.0  510   |
172.21.207.0  | 207   255.255.255.0  254   |                            |
172.21.208.0  | 208   255.255.255.0  254   | 208   255.255.254.0  510   | 208   255.255.252.0  1022
172.21.209.0  | 209   255.255.255.0  254   |                            |
172.21.210.0  | 210   255.255.255.0  254   | 210   255.255.254.0  510   |
172.21.211.0  | 211   255.255.255.0  254   |                            |
172.21.212.0  | 212   255.255.255.0  254   | 212   255.255.254.0  510   | 212   255.255.252.0  1022
172.21.213.0  | 213   255.255.255.0  254   |                            |
172.21.214.0  | 214   255.255.255.0  254   | 214   255.255.254.0  510   |
172.21.215.0  | 215   255.255.255.0  254   |                            |
172.21.216.0  | 216   255.255.255.0  254   | 216   255.255.254.0  510   | 216   255.255.252.0  1022
172.21.217.0  | 217   255.255.255.0  254   |                            |
172.21.218.0  | 218   255.255.255.0  254   | 218   255.255.254.0  510   |
172.21.219.0  | 219   255.255.255.0  254   |                            |
172.21.220.0  | 220   255.255.255.0  254   | 220   255.255.254.0  510   | 220   255.255.252.0  1022
172.21.221.0  | 221   255.255.255.0  254   |                            |
172.21.222.0  | 222   255.255.255.0  254   | 222   255.255.254.0  510   |
172.21.223.0  | 223   255.255.255.0  254   |                            |
172.21.224.0  | 224   255.255.255.0  254   | 224   255.255.254.0  510   | 224   255.255.252.0  1022
172.21.225.0  | 225   255.255.255.0  254   |                            |
172.21.226.0  | 226   255.255.255.0  254   | 226   255.255.254.0  510   |
172.21.227.0  | 227   255.255.255.0  254   |                            |
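The "# Dev" figures (254, 510, 1022) are not arbitrary: each subnet's total address count, minus the network and broadcast addresses, gives the usable host count. A minimal sketch using Python's standard ipaddress module (the function name is illustrative):

```python
import ipaddress

def usable_hosts(network: str) -> int:
    """Assignable host addresses in a subnet: total addresses
    minus the network address and the broadcast address."""
    return ipaddress.ip_network(network).num_addresses - 2

# The per-VLAN device counts in Table B-1 follow directly:
print(usable_hosts("172.21.200.0/24"))  # 254 (Standalone / Small Server)
print(usable_hosts("172.21.200.0/23"))  # 510 (Large Server, non-routed)
print(usable_hosts("172.21.200.0/22"))  # 1022 (Large Server, routed)
```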
PIIC iX IP Address Tables
Table B-2: Flexible IP Addressing Scheme: Non-routed PIIC iX without IntelliVue Smart-hopping Network
Clinical Domain Size | Local Database PIIC iX | Small PIIC iX Server | Large PIIC iX Server
Table B-3: Flexible IP Addressing Scheme: Non-routed PIIC iX with IntelliVue Smart-hopping Network
Clinical Domain Size | Local Database iX (Non-routed with Smart-hopping) | Small Server iX (Non-routed with Smart-hopping) | Large Server iX (Non-routed with Smart-hopping)
Clinical Domain Devices # IPs IP Address # IPs IP Address # IPs IP Address
Subnet Mask = 255.255.255.0 Subnet Mask = 255.255.254.0 Subnet Mask = 255.255.252.0
Subnet Address 1 172.20.204.0 1 172.20.204.0 1 172.20.204.0
Router VIP 1 172.20.204.1 1 172.20.204.1 1 172.20.204.1
Router A 1 172.20.204.2 1 172.20.204.2 1 172.20.204.2
Router B 1 172.20.204.3 1 172.20.204.3 1 172.20.204.3
Reserved (Network Devices) 8 172.20.204.4 to 172.20.204.11 8 172.20.204.4 to 172.20.204.11 8 172.20.204.4 to 172.20.204.11
Service PC/APM 2 172.20.204.12 to 172.20.204.13 2 172.20.204.12 to 172.20.204.13 2 172.20.204.12 to 172.20.204.13
DBS 1 172.20.204.14 1 172.20.204.14 1 172.20.204.14
Gateway 1 172.20.204.15 1 172.20.204.15 1 172.20.204.15
Physio DB 1 172.20.204.16 1 172.20.204.16 1 172.20.204.16
PIIC iX Hosts (PIICs/Clients) 8 172.20.204.17 to 172.20.204.24 8 172.20.204.17 to 172.20.204.24 60 172.20.204.17 to 172.20.204.76
Wired Bedsides - 1st Device 32 172.20.204.25 to 172.20.204.56 64 172.20.204.25 to 172.20.204.88 128 172.20.204.77 to 172.20.204.204
Wired Bedsides - 2nd Device 32 172.20.204.57 to 172.20.204.88 64 172.20.204.89 to 172.20.204.152 128 172.20.204.205 to 172.20.205.76
Wired Bedsides - 3rd Device 24 172.20.204.89 to 172.20.204.112 48 172.20.204.153 to 172.20.204.200 96 172.20.205.77 to 172.20.205.172
XDS Devices 16 172.20.204.113 to 172.20.204.128 32 172.20.204.201 to 172.20.204.232 64 172.20.205.173 to 172.20.205.236
Printers (Static or DHCP) 10 172.20.204.129 to 172.20.204.138 15 172.20.204.233 to 172.20.204.247 19 172.20.205.237 to 172.20.205.255
ITS/Access Point Controllers 8 172.20.204.139 to 172.20.204.146 8 172.20.204.248 to 172.20.204.255 18 172.20.206.0 to 172.20.206.17
ITS/Access Points 40 172.20.204.147 to 172.20.204.186 113 172.20.205.0 to 172.20.205.112 128 172.20.206.18 to 172.20.206.145
Transceivers (APC DHCP) 64 172.20.204.187 to 172.20.204.250 128 172.20.205.113 to 172.20.205.240 256 172.20.206.146 to 172.20.207.145
Unused Space 4 172.20.204.251 to 172.20.204.254 14 172.20.205.241 to 172.20.205.254 109 172.20.207.146 to 172.20.207.254
Broadcast Address 1 172.20.204.255 1 172.20.205.255 1 172.20.207.255
Total IP addresses 256 512 1024
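Rows in these tables can be sanity-checked mechanically: a range's inclusive size should equal its "# IPs" value, and the broadcast address follows from the subnet mask. A minimal sketch using Python's standard ipaddress module, with values taken from the Large Server column above (the helper name is illustrative):

```python
import ipaddress

# Large Server column of Table B-3: 172.20.204.0 with mask 255.255.252.0 (/22).
net = ipaddress.ip_network("172.20.204.0/22")
print(net.broadcast_address)  # 172.20.207.255
print(net.num_addresses)      # 1024

def range_size(first: str, last: str) -> int:
    """Inclusive count of addresses between the two endpoints of a table row."""
    return int(ipaddress.ip_address(last)) - int(ipaddress.ip_address(first)) + 1

# e.g. the Transceivers (APC DHCP) row: 172.20.206.146 to 172.20.207.145
print(range_size("172.20.206.146", "172.20.207.145"))  # 256
```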
Table B-4: Flexible IP Addressing Scheme: Routed IntelliVue Smart-hopping Network
Clinical Domain Size | Small Smart-hopping Network Size: 256 Smart-hopping Devices + 9 APCs + 128 APs | Medium Smart-hopping Network Size: 512 Smart-hopping Devices + 9 APCs + 320 APs | Large Smart-hopping Network Size (Default): 1024 Smart-hopping Devices + 9 APCs + 320 APs
Clinical Domain Devices # IPs IP Address # IPs IP Address # IPs IP Address
Subnet Mask = 255.255.252.0 Subnet Mask = 255.255.248.0 Subnet Mask = 255.255.240.0
Subnet Address 1 172.31.240.0 1 172.31.240.0 1 172.31.240.0
Router VIP 1 172.31.240.1 1 172.31.240.1 1 172.31.240.1
Router A 1 172.31.240.2 1 172.31.240.2 1 172.31.240.2
Router B 1 172.31.240.3 1 172.31.240.3 1 172.31.240.3
Reserved (Service PC) 6 172.31.240.4 to 172.31.240.9 6 172.31.240.4 to 172.31.240.9 6 172.31.240.4 to 172.31.240.9
Switches/ Client 11 172.31.240.10 to 172.31.240.20 11 172.31.240.10 to 172.31.240.20 11 172.31.240.10 to 172.31.240.20
Reserved 44 172.31.240.21 to 172.31.240.64 34 172.31.240.21 to 172.31.240.54 235 172.31.240.21 to 172.31.240.255
ITS APC 9 172.31.240.65 to 172.31.240.73 9 172.31.240.55 to 172.31.240.63 9 172.31.241.0 to 172.31.241.8
ITS AP Static Range 182 172.31.240.74 to 172.31.240.255 320 172.31.240.64 to 172.31.241.127 759 172.31.241.9 to 172.31.243.255
ITS AP DHCP (APC) 256 172.31.241.0 to 172.31.241.255 640 172.31.241.128 to 172.31.243.255 1024 172.31.244.0 to 172.31.247.255
ITS Transceivers DHCP 511 172.31.242.0 to 172.31.243.254 1023 172.31.244.0 to 172.31.247.254 2047 172.31.248.0 to 172.31.255.254
Broadcast Address 1 172.31.243.255 1 172.31.247.255 1 172.31.255.255
Total IP addresses 1024 2048 4096
Note: The Smart-hopping infrastructure Access Point Controllers (APCs) provide DHCP IP addresses to Smart-hopping Access Points (APs) and Smart-hopping monitoring devices. The APC DHCP server lease times are infinite; therefore, as devices are replaced or upgraded, additional spare addresses must be planned for when allocating address space. The recommended address-planning logic is as follows: allocate approximately three times as much AP address space as the number of APs needed.
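Since the note's sizing rule is simple arithmetic, it can be folded into a small capacity helper. A hedged sketch in Python: only the 3x factor comes from the note above; the function names and the prefix helper are illustrative assumptions about how one might size the block:

```python
import math

def ap_address_allocation(aps_needed: int, factor: int = 3) -> int:
    """Address space to reserve for Smart-hopping APs. Because APC DHCP
    leases never expire, replaced or upgraded APs keep consuming addresses,
    so the note recommends roughly 3x the number of APs actually deployed."""
    return aps_needed * factor

def smallest_prefix(host_count: int) -> int:
    """Smallest IPv4 prefix length whose block holds host_count hosts
    plus the network and broadcast addresses."""
    return 32 - math.ceil(math.log2(host_count + 2))

print(ap_address_allocation(128))  # 384 addresses for a 128-AP deployment
print(smallest_prefix(384))        # 23, i.e. a /23 block is the smallest fit
```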