
MOCN213

Installation Guidelines and Topologies

4535 643 88161


Printed in the U.S.A.
October, 2012
First Edition

Proprietary Information
This document contains proprietary information that is protected by copyright.

Copyright
Copyright © 2012 Koninklijke Philips Electronics N.V. All Rights Reserved.

Manufacturer
Philips Medical Systems
3000 Minuteman Road
Andover, MA 01810-1099
USA
(978) 687-1501
This document was printed in the United States of America.

Trademark Acknowledgements
Symbol is a trademark of Symbol Technologies, Inc.
HP is a registered trademark of Hewlett-Packard Company
Cisco is a registered trademark of Cisco Systems
MS-SQL is a registered trademark of Microsoft Corporation
Nortel is a registered trademark of Nortel Networks Limited
3COM is a registered trademark of 3COM Corporation
Extreme is a registered trademark of Extreme Networks
All other trademarks, trade names and company names referenced herein are used for identification
purposes only and are the property of their respective owners.

Warranty
The information contained in this document is subject to change without notice. Philips Medical Systems
makes no warranty of any kind with regard to this material, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose. Philips Medical Systems shall not be
liable for errors contained herein or for incidental or consequential damages in connection with the
furnishing, performance, or use of this material.

Printing History
New editions of this document will incorporate all material updated since the previous edition. The
documentation printing date and part number indicate its current edition. The printing date and edition
number change when a new edition is printed. The document part number changes when extensive
technical changes are incorporated.
This document replaces M3185-91931. If you require a previous version of this document, please refer to
the M3185-91931 part number.
First Edition October 2012

About This Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii


Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Notational Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Related Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Terminology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-1
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Enterprise Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
PIIC Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
PIIC iX Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Smart-Hopping Network Deployment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3

Star Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-1


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1
Non-Routed Star Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1
Routed Star Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Router/Core Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Distribution Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Location in Hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Access Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Switch Interconnection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3

Network Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-1


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
PIIC Design. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
PIIC iX Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
PIIC iX Design Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Smart-hopping Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
Non-Routed Smart-Hopping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
Routed Smart-Hopping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
Co-Existence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
Network Infrastructure (Routers/Switches) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
Layer 2 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
Fixed-mode Monitoring for PIIC and PIIC iX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
PIIC Fixed-Mode Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
PIIC iX Fixed-Mode Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
Layer 3 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-6
Routed IntelliVue Smart-hopping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-6
Multicast Considerations for PIIC and PIIC iX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-6
Enabling Multicast on the PIIC iX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-6
Additional PIIC iX Multicast Considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
Enabling Multicast on the PIIC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
Associating PIIC with Philips Monitoring Devices. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
IGMP Snooping Command Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
100 Megabit Topology vs. Gigabit Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9
Full Gigabit Topology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9
Design/Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10
Migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10
Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
PIIC Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
PIIC iX Interoperability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
PSCN to HLAN Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
Direct Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
Layer Two Connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14
Layer 3 Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-15
Gateway Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-15

Implementation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
Non-Routed Star Network Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
Non-Routed Implementation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
Cisco Implementation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
HP Implementation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-2
Routed Star Network Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Connection Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Router/Core Switch Connection Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
Distribution Switch Connection Details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-5
Access Switch Connection Details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-5
Switch Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
Router/Core Switch Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
Distribution Switch Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
Access Switch Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
Routed Implementation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-7
Single Distribution Switch Pair Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-7
Maximum Distribution Switch Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-8
Using 100 Megabit Distribution Switches in a PIIC or PIIC iX Deployment . . . . . . . . . . . 4-8
Using Access Switches in a PIIC or PIIC iX Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
Management VLAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
Requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
Router Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-10
Subnet Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-10
Router Load Balancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-12
Spanning Tree Protocol Considerations for PIIC and PIIC iX . . . . . . . . . . . . . . . . . . . . . . 4-13
Direct Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Connecting from a Layer 2 PSCN Directly to the HLAN . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Basic Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Information You Will Need to Request from IT: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Existing Topologies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-14
Ring Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
Layer 2 Only Star Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
Physical Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-15
VLAN Numbering. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
HLAN configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Connecting from a Routed PSCN Directly to the HLAN . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Basic Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Information You Will Need to Request from IT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-16
Existing Topologies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-17


Physical Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-17


Layer Two Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-18
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-19
Example L2 and L3 Next Hop Connectivity Testing: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-20
Routing between the PSCN and the HLAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-20
Static routing to HLAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-20
Dynamic routing to HLAN with EIGRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-22
Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-22
Dynamic routing to HLAN with OSPF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-24
Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-24
Network Time Protocol. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-26

PIIC Implementation Specifics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-1


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1
IP Address Assignments for PIIC Non-Routed Devices. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1
IP Address Assignments for Routed PIIC Devices. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Alternate IP Address Scheme. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
Layer 3 Routers (Core A and B) Configuration Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-6
Required Configuration Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-6
Distribution and Access Layer Switch Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
PIIC Deployment of a DBS Server Farm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
Supported Network Infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-9
Interconnecting Switches and Routers to Support a Server Farm . . . . . . . . . . . . . . . . . . . . . 5-9
Configuring Distribution Switches to Support a Server Farm. . . . . . . . . . . . . . . . . . . . . . . 5-11
Sample Distribution Switch Configuration File: Server Farm Using Cisco . . . . . . . . . . . . 5-13
Settings in Sample Configuration File: Server Farm Using Cisco . . . . . . . . . . . . . . . 5-16
Example of Router Configuration for Use in a Server Farm . . . . . . . . . . . . . . . . . . . . . . . . 5-16
Required Settings for Switch/Router Configuration in Support of a Server Farm. . . 5-17
Configuring the Cisco 3750 as a Layer 2 Switch in a Routed Topology . . . . . . . . . . . . . . 5-19
Using a Pair of Cisco 3750 24-Port Switches as Distribution Switches in a Star Topology 5-20
Network Switch Port Requirements for End Devices in PIIC Networks . . . . . . . . . . . . . . . . . . 5-21
Extension Switches in Cisco Systems to Support Fixed Mode Monitoring in PIIC Networks . 5-23
Extension Switches in a Non-Routed Star Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-23
Extension Switches in a Routed Star Topology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24
Extension Switches (Fixed Mode Monitoring Switches) . . . . . . . . . . . . . . . . . . . . . . 5-25
Access Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-25
Distribution Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-25
Core Routers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-26
Access Switch Settings for PIIC Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-26
Example: Uplink Port Configuration Settings on Access Switch . . . . . . . . . . . . . . 5-26
Fixed Mode Bedside Switch Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-27

PIIC iX Implementation Specifics . . . . . . . . . . . . . . . . . . . . . . . . . . .6-1


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1
Flexible IP Addressing in PIIC iX Deployments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1
IP Address Assignments for PIIC iX Non-routed Deployments . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
PIIC iX VLAN Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
DHCP and DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8


IP Address Assignments for Routed PIIC iX Deployments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-9


Routing Information Exchange . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-9
Routed PIIC iX with Routed ITS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-9
PIIC iX Deployment for Routed Star Topology Overview and Example . . . . . . . . . . . . . . . . . 6-11
PIIC iX Fixed Mode Monitoring Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13
Fixed Mode Monitoring Access Port Locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Commands Used to Enable and Disable LLDP-MED . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
Cisco LLDP-MED Configuration Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-16

Ring to Star Conversion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1
Technical Differences and Supported Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2
High Level Conversion Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Step 1 - Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Step 2 - Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Step 3 - Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-4
Step 4 - Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-4
Conversion Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-4
Conversion Scenario 1: Non-Redundant Ring to Non-Routed Star . . . . . . . . . . . . . . . . . . . 7-6
Conversion Scenario 2: Non-Redundant Ring to Routed Star . . . . . . . . . . . . . . . . . . . . . . . 7-7
Conversion Scenario 3: Non-Routed Ring to Non-Routed Star . . . . . . . . . . . . . . . . . . . . . . 7-8
Conversion Scenario 4: Non-Routed Ring to Routed Star . . . . . . . . . . . . . . . . . . . . . . . . . . 7-9
Conversion Scenario 5: Routed Ring to Routed Star . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-10

Hardware Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1


Network Cables and Connectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
UTP Cables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
Fiber Optic Cables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-2
Wall Boxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-3
Patch Panels. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-4
Device/Switch Interconnection Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-6
Copper Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-6
Fiber Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-6

PIIC iX IP Address Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-1


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B-1

Preface

About This Guide


This guide identifies and describes the supported network topologies for PIIC iX Releases
A.0 (and later) and PIIC Releases J.0 (and later) of the IntelliVue Information Center. This
section describes the document and includes:

• Audience
• Notational Conventions
• Related Documentation
• Terminology

Audience
This guide is written for Philips-trained service providers who will design the IntelliVue
Clinical Network topologies including Database Domains and the IntelliVue Telemetry
System.

Notational Conventions
This guide uses the following conventions:

Note

Notes call attention to important information.

Caution
Cautionary statements call attention to a condition that could result in loss of data or damage
to equipment.

Warning
Warnings call attention to a condition that could result in physical injury.


Related Documentation
Please refer to these other documents for additional installation and service information about the
IntelliVue Telemetry System and IntelliVue Clinical Network:

• PIIC iX Network Installation and Service Guide


• HP 2510 Switch Installation and Service Manual
• Cisco 2960 Switch Installation and Service Manual
• Cisco 2960-S Switch Installation and Service Manual
• Cisco 3560 Router Installation and Service Manual
• Cisco 3750 L3 Router/L2 Switch Installation and Service Manual
• Advanced Networking Design and Implementation Guidelines for Star
Topologies
• MX40 Release B.0 Installation and Service Guide
• ITS Infrastructure Installation and Service Guide
• ITS Access Point Controller Installation Guide
• ITS 1.4 GHz Access Point Installation Guide
• ITS 2.4 GHz Access Point Installation Guide
• ITS Sync Unit Installation Guide

The PIIC iX Network Installation and Service Guide provides detailed information about
PIIC iX deployment, and the IntelliVue Telemetry System Infrastructure Installation and
Service Guide provides complete information on the 1.4 GHz/2.4 GHz IntelliVue Telemetry
System wireless network rules and topologies.

Terminology
• Advanced Network Design and Implementation (ANDI) - ANDI is a network design
and implementation delivery channel for Philips Clinical Systems which utilizes the
highest level of field network expertise to provide the greatest design flexibility to meet
specific customer networking needs. This channel has access to some hardware and
configurations that are not available in the normal FSE channels.

• Database Domain (DBSD) - This term is used only for PIIC deployments (Releases L, M,
and N) and describes a network that contains the Standalone IntelliVue Information
Center, or the IntelliVue Database Server and its connected Information Centers, Clients,
bedsides, and infrastructure. This term applies to both routed and non-routed topologies.

• Deployment - Refers to the overall PIIC and PIIC iX monitoring solution products.

• IntelliVue Clinical Network (ICN) - This term refers to the entire Philips network. In a
routed topology, the ICN includes the routers and all inter-connected network devices and
the IntelliVue Telemetry System Wireless Subnet.

• Philips IntelliVue Information Center (PIIC) - This term refers to the Philips
IntelliVue Information Center.

• Philips IntelliVue Information Center (PIIC) iX - This term refers to the Philips
IntelliVue Information Center iX.


Introduction

Overview
A Philips IntelliVue Information Center (PIIC and PIIC iX) monitoring system
captures complete waveforms, trends, alarms, and numerics from networked Philips
patient monitors and telemetry systems. A PIIC monitoring system stores monitor
data and also exports it to a hospital information system to be stored in a patient
electronic medical record. This comprehensive monitoring system is also referred to
as an IntelliVue Clinical Network (ICN).

An ICN is made up of two components: clinical devices, comprised of a combination
of hardware and software, and a network topology that provides connectivity for
those clinical devices.

In PIIC, the two ICN components are tightly coupled. A deployment of clinical
devices requires a specific network topology. For example, a large PIIC Database
server system deployment is always confined to a single VLAN with a prescribed IP
address scheme.

Application Deployment Layer


Network Topology Layer

Figure 1-1: Tightly Coupled ICN Components in PIIC

With the introduction of the PIIC iX, the two ICN components are no longer
coupled. PIIC iX clinical devices, such as Servers, Surveillance PIIC iX systems,
Overview PIIC iX systems, and Monitors can be deployed on varied network
topologies, multiple VLANs, and use a variety of IP address schemes.

Application Deployment Layer

Network Topology Layer


Figure 1-2: Loosely Coupled ICN Components in PIIC iX
Therefore, when designing a PIIC iX solution, it is important to consider the PIIC
iX deployment and the network topology components separately. We must
address the customer need with the most appropriate PIIC iX deployment, and then
accommodate this deployment with the most appropriate network topology.

For PIIC iX, the term ICN is synonymous with only the PIIC iX deployment and does not in
any way imply a specific network layer. Therefore, throughout this document, the
following terms are used in very specific ways:

• Deployment -- Refers to the overall PIIC and PIIC iX monitoring solution products.
• Topology -- Refers to the network.
• Enterprise -- Refers to the use of the PIIC or PIIC iX monitoring solution within a single
network infrastructure or topology.

Enterprise Overview
A Philips enterprise monitoring solution can comprise one or more monitoring deployments.
Each deployment can reside on an independent network infrastructure, or all deployments can
share one common network infrastructure. Each deployment can be either a PIIC or a PIIC iX.
Figure 1-3 provides an example of an enterprise monitoring solution.

Figure 1-3: Enterprise Monitoring Solution Example

Figure 1-3 shows a PIIC iX deployment, a PIIC deployment, and a Smart-Hopping
infrastructure deployment. The customer enterprise solution is the sum total of the two PIIC
deployments and the Smart-Hopping deployment. This complete solution can be supported by
a single network infrastructure or topology.

PIIC Deployment
A PIIC deployment consists of up to eight IntelliVue Information Centers (IICs) and 12
IntelliVue Information Center Clients connected to a network. Additionally, you can use a
PIIC deployment to connect up to 128 IntelliVue Patient Monitors (both wired and wireless).

PIIC iX Deployment
A PIIC iX Deployment consists of Surveillance PIIC iX systems and Overview PIIC iX
systems that can be connected to a Primary Server iX. Additionally, you can use a PIIC iX
Deployment to connect to IntelliVue Patient Monitors (both wired and wireless).

Smart-Hopping Network Deployment


A Smart-hopping network deployment consists of a set of Smart-hopping Access Point
Controllers (APCs) and Access Points (APs). Together these APCs and APs provide a Smart-
hopping wireless infrastructure. There are two types of Smart-hopping deployments:

• Unshared Deployments -- Deployments with fewer than 48 APs that reside in the same
subnet as the PIIC or PIIC iX deployment using the Smart-hopping infrastructure.

• Shared Deployments -- Deployments that reside in their own subnet and are connected
via a routed connection to one or more PIIC and/or PIIC iX deployments in their
respective subnets. A shared deployment can consist of up to nine APCs and 320 APs.


Star Topology

Overview
The PIIC iX Network supports use of the Star network topology. In the Star,
multiple network switches are all connected back to a central switch tier in a star
configuration. Due to its scalability and ease of configuration, the Star Topology
has been the required network topology for all new Philips clinical network
installations since December 31, 2010.

Note

The design that you create is based on site-specific variables and requires that you
consider the existing cabling infrastructure and specific customer requirements that are
unique to your network. This chapter provides information to assist you in designing
and configuring the layout of your clinical network.

The Star topology is a hierarchical network design. Star topologies offer the best
levels of link and device redundancy and are in line with industry-standard networking
best practices. In a star network topology, all network switches are connected back to a
central switch layer in a star configuration.

There are two basic types of Star Topology:

• Routed
• Non-Routed

The following sections describe each of these types.

Non-Routed Star Topology


A Non-routed Star Topology is generally for smaller PIIC and PIIC iX deployments.
If the site requirements dictate the need for small, multi-switch non-routed
topologies, you should use a Non-Routed Star Topology. The smallest of these is the
single switch system. Beyond that, two and three switch systems are supported with
redundant links. See Figure 2-1.


Figure 2-1: Simple Three-Switch Non-Routed Star Topology

Routed Star Topology


The Star Topology is a hierarchical network design. The hierarchy structure has three levels:

• Router/Core
• Distribution
• Access

Figure 2-2: Star Topology


Router/Core Switch
In the simplest form of the routed star topology, the Access Switches are directly connected to
the Routers, which act as both a Core Switch and Router. The Router/Core Switch is above
the Distribution and Access Switches in the cabling hierarchy. For PIIC, the Router/Core
switch is used primarily to allow wireless clients on the Smart-hopping deployment to
transmit data via Layer 3 to a PIIC deployment. PIIC iX devices may use the Router/Core
switch to communicate. This Layer 3 communication may be between a Primary Server iX,
Surveillance PIIC iX, Overview PIIC iX, Bedside Monitors, and Smart-hopping devices.

Distribution Switch
To support a greater number of Access Switches and end devices, and to extend the distance
between core and access layers, you may use a second type of routed star network topology
which uses Distribution switches. For this method, Access Switches are not directly
connected to the Routers/Core Switches but are instead connected to Distribution Switches.
The Distribution Switch may also be a node point in multi-building sites.

Location in Hierarchy

• The Distribution Switch is below Router/Core and above the Access Switches in the
cabling hierarchy.
• The Distribution Switches are not the root bridge in a spanning tree instance; however,
their root bridge priority is higher than that of switches at the Access Layer.

Access Switch
An Access Switch can connect directly to the Router/Core Switch, or it can connect to the
Router/Core switch via a Distribution Switch. VLAN access is configured on each Access
Switch port as required for PIIC, PIIC iX and/or Smart-hopping deployments. The Access
Switch is below the Router/Core and Distribution switches in the cabling hierarchy.

Switch Interconnection

Star Topology switch interconnection can be 100 Megabit or 1000 Megabit (Gigabit). The
need for a specific connection between two switches is dictated by the PIIC or PIIC iX
deployment, the particular interconnection media type used, and/or future site expansion
plans. For details on switch interconnection, see Chapter 3, Network Design, and Chapter 4,
Implementation.


Network Design

Overview
This chapter provides network design guidance for the Star topology for PIIC, PIIC iX,
and IntelliVue Smart-hopping deployments. It also provides hospital resource
interoperability, deployment co-existence, and deployment migration design
considerations.

PIIC Design
A PIIC Deployment requires specific network design considerations. This chapter
outlines the infrastructure design guidelines that are required by a PIIC deployment.
Each PIIC deployment must reside in a single subnet. Each single subnet must be
a /21 and a specific IP schema must be followed. For PIIC implementation details,
see Chapter 4 and Chapter 5.
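To make the /21 requirement above concrete, a quick check with Python's ipaddress module shows the address capacity of such a subnet. The network address used here is illustrative only, not a value prescribed by this guide.

```python
import ipaddress

# Each PIIC deployment subnet must be a /21 (per this section).
# The network address below is an illustrative example.
net = ipaddress.ip_network("172.31.0.0/21")
print(net.num_addresses)       # 2048 total addresses
print(net.num_addresses - 2)   # 2046 usable host addresses
```

A /21 comfortably covers the device counts described earlier (up to eight IICs, 12 clients, and 128 patient monitors per PIIC deployment).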

PIIC iX Design
As compared to PIIC deployments, PIIC iX deployments have fewer restrictions on
the network infrastructure design. Therefore, the PIIC iX design process is less
prescriptive and more suggestive. However, one requirement of a PIIC iX network
infrastructure topology is that you use a single connection (touch-point) between the
PIIC iX monitoring topology and the hospital infrastructure. This requirement has a
significant impact on the topology design. For PIIC iX implementation details, see
Chapter 4 and Chapter 6.


PIIC iX Design Principles


You should consider the following design principles:

1. Keep the design simple.


2. Always consider PIIC iX interoperability concerns when designing the topology. External
interfacing/interoperability, such as HL7, could have a significant impact on the
infrastructure design. Consider each possible PIIC iX interoperability connection with the
hospital systems.
3. When possible, use address space that is unique within the hospital address space.
4. Use static address assignment for Philips servers and systems.
5. If you must use a DHCP server, prioritize the servers in your network as follows:
• Make the Primary server DHCP server your first choice
• The hospital DHCP server should be the second choice
• A Philips-supplied server is your last choice.
• For devices that require dynamic addressing, IP Helper is needed for the VLANs
that reside on subnets that are separate from the DHCP server.
6. Always ensure that you are using a unique multicast address space. If your infrastructure
routes multicast between the hospital infrastructure and your infrastructure, make sure that
the multicast address space does not overlap any other hospital multicast address space.
7. Whenever possible, route traffic between the clinical network infrastructure and the
hospital network.
8. If you must isolate the PIIC iX address space from the hospital address space, use
Network Address Translation (NAT) to do so. However, before using NAT, be sure to
consider the potential impact on current and future Hospital Information System
interfacing.
9. Design your PIIC iX deployment in such a way that name resolution between Philips
devices does not require DNS. For example, (if possible) put the Philips Primary Server
iX, Physiological Server iX, Surveillance PIIC iX systems, and Overview PIIC iX systems
in the same VLAN. If you must use DNS for name resolution, consider using customer/
hospital supplied DNS services.
10. It is recommended that all Primary Server iX and Physiological Server iX devices be
connected to the Distribution Layer switches. Surveillance PIIC iX systems and
Overview PIIC iX systems should be connected to the Access Layer switches.
11. When possible, large PIIC iX installations should be deployed within a single
Distribution/Access star leg. However, remote bedsides can be located in a distribution/
access star leg that is different from the PIIC iX systems location.
12. Review and discuss the hospital infrastructure connection method. PIIC iX deployments
can be connected directly or via a Juniper SRX100 Gateway device (provided by Philips).
Each connection method has its own set of advantages and disadvantages that must be
carefully reviewed in order to select the best fit for each specific customer situation.
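The IP Helper requirement in step 5 can be sketched as a Cisco IOS fragment. The VLAN number and all addresses below are hypothetical examples, not values from this guide; exact commands vary by platform and IOS release.

```
! Hypothetical sketch: relay DHCP broadcasts from a client VLAN to a
! DHCP server that resides in a different subnet (addresses are examples).
interface Vlan102
 description PIIC iX client VLAN (example)
 ip address 172.20.102.1 255.255.255.0
 ip helper-address 172.20.1.10
```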


Smart-hopping Design
An IntelliVue Smart-hopping infrastructure can be deployed in one of the following ways:

• A Non-Routed topology
• A Routed topology

Non-Routed Smart-Hopping
When a Smart-hopping infrastructure is deployed in the same subnet with a PIIC deployment,
the following design guidelines must be followed:

• All Smart-hopping wireless devices (transceivers/wireless bedside monitors) must be
configured only on the PIIC deployment residing on the same subnet.
• Up to a specified upper limit of Smart-hopping Standard or Core Access Points may be
installed.
• Only IntelliVue Smart-hopping infrastructure can be installed.
• Multiple Smart-hopping deployments at a single hospital are supported only if the
topology configuration and Sync Network requirements are met.

Refer to the ITS Infrastructure Installation for IntelliVue Smart-hopping implementation
details.

Routed Smart-Hopping
The Smart-hopping infrastructure is deployed in a separate subnet to which multiple PIIC and/
or PIIC iX deployments have access via routers. When designing a shared Smart-hopping
infrastructure, you must adhere to the following guidelines:

• Each deployment can have up to the specified upper limit of APs and Remote Antennas
installed.
• A Smart-hopping deployment can be any combination of Standard and Core APs, as long
as the total does not exceed the specified upper limit of APs for each deployment.
Refer to the ITS Infrastructure Installation for IntelliVue Smart-hopping implementation
details.


Co-Existence
Co-existence is defined as the sharing of network infrastructure between PIIC and PIIC iX
deployments. Co-existence applies to customers with an existing PIIC deployment who want
to deploy PIIC iX on the same infrastructure. Co-existence may involve the sharing of the
routed IntelliVue Smart-hopping infrastructure. It may require multicast address space
planning and external interface planning. These topics are only a few of the possible conflicts
that need to be discussed when planning a PIIC and PIIC iX co-existence scenario. The
following sections address the possible co-existence concerns.

Table 3-1 lists the existing PSCN infrastructure topologies containing a PIIC deployment
and whether a PIIC iX can also be deployed, resulting in PIIC and PIIC iX co-existence.

Table 3-1: PIIC and PIIC iX Co-Existence Within Supported Topologies

Network Type: PSCN (with an existing PIIC install)

Topology                      Co-Existence Supported   Comments
Star Topology (Routed)        Yes                      PIIC iX deployments must be deployed in a
                                                       VLAN/subnet other than the one containing
                                                       an existing PIIC deployment.
Star Topology (Non-routed)    Yes                      PIIC iX deployments must be deployed in a
                                                       VLAN/subnet other than the one containing
                                                       an existing PIIC deployment.
Ring Topology (Routed)        Yes                      PIIC iX deployments must be deployed in a
                                                       VLAN/subnet other than the one containing
                                                       an existing PIIC deployment.
Ring Topology (Non-routed)    No                       Not supported. PIIC iX cannot be deployed
                                                       in the same VLAN as the existing
                                                       non-routed PIIC deployment.


Network Infrastructure (Routers/Switches)


Depending on the revision of code running on the existing switches, it may be necessary to
upgrade the network routers and switches in order to support new features and protocols,
such as per-VLAN IGMP snooping. In cases of an existing non-routed topology, additional
network gear, such as a firewall/router, may be necessary to meet the connectivity
requirements of PIIC iX.

Layer 2 Considerations
The issues to consider are IGMP settings at the Layer 2 switches, port speed and duplex
settings for PIIC iX vs. PIIC systems and bedsides, and spanning tree priorities for any new
VLANs that will be created on a PSCN or CSCN. The new VLANs may need to be explicitly
included in the trunk links connecting two switches together.

Fixed-mode Monitoring for PIIC and PIIC iX


Fixed mode monitoring is a special monitoring mode where the access switch port is mapped
to a specific sector of the PIIC or PIIC iX monitoring station. Both the PIIC and PIIC iX
support this feature using different network infrastructure mechanisms.

PIIC Fixed-Mode Monitoring


Fixed-mode monitoring is only allowed on switches physically connected to the VLAN as
access switches. These switches must have no VLANs (other than management VLAN 1)
defined. Fixed-mode monitoring is not allowed on a switch defined with multiple VLANs;
such a switch is considered an extension switch. In the case of an extension switch,
fixed-mode monitoring is only allowed on switches having 24 ports. See Chapter 5, “PIIC
Implementation Specifics” for detailed implementation instructions.

PIIC iX Fixed-Mode Monitoring


PIIC iX fixed-mode monitoring is accomplished by using the Link Layer Discovery Protocol
(LLDP). LLDP is a vendor-neutral link-layer protocol used by network devices for
advertising their identity, capabilities, and neighbors on an IEEE 802.3 local area network.
Layer 2 and Layer 3 switches must support LLDP and LLDP Media Endpoint Discovery
(LLDP-MED). This functionality must be supported and enabled on each switch port to which
a fixed-mode monitoring device is to be connected. See “PIIC iX Fixed Mode Monitoring
Implementation” on page 6-13 for detailed implementation instructions.


Layer 3 Considerations
PIIC iX systems are single-homed, which means that all traffic to and from the HLAN must
go through a routed interface. There are several connectivity options available that document
physical and logical connectivity between the PSCN and HLAN. In general, the issues are
choosing an IP routing protocol and advertising new subnets to the HLAN.

Routed IntelliVue Smart-hopping


If there is an existing Routed IntelliVue Smart-hopping deployment, it will have to be shared
between PIIC iX and PIIC deployments. Since Routed IntelliVue Smart-hopping already has
a router in the mix, connectivity is not a major concern. The co-existence of PIIC and
PIIC iX deployments and the sharing of IntelliVue Smart-hopping require additional
considerations. For instance, the current Philips master APC can send alerts (Local Sync Loss/
Remote Sync Loss) to only a single destination/host. This means alarms can be processed by
either a PIIC or a PIIC iX deployment, but not both.

The wireless Patient Worn Monitor (PWM)/Patient Worn Device (PWD) statistics will be
stored in the correct locations, since the location depends on the PIIC/PIIC iX sector that they
are attached to.

Depending on the location of the new PIIC iX unit relative to the PIIC units, it may be
possible to add a new smart-hopping group and assign APs serving PIIC iX wireless clients to
that group.

Multicast Considerations for PIIC and PIIC iX


PIIC and PIIC iX use multicast messaging to:

• Enable Philips monitoring devices to connect and associate with Surveillance PIIC iX or
PIIC systems
• Manage Clinical Care Units
• Support Philips bedside monitor alarm reflectors
• Manage Philips bedside-to-bedside overview behavior
• Propagate time to Philips devices

Enabling Multicast on the PIIC iX


To Enable Multicast Features on the PIIC iX

To enable these features, configure the following on the network infrastructure:

• Enable multicast on the router layer.
• Globally enable IGMP snooping.
• Enable Protocol Independent Multicast (PIM) in Sparse or Sparse-Dense Mode for
each PIIC iX-related VLAN requiring a Bedside to PIIC iX association.
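A minimal Cisco IOS sketch of these three steps follows. The VLAN number is a hypothetical example, and exact command forms vary by platform and IOS release.

```
! Hypothetical Cisco IOS sketch of the three steps above.
! VLAN number is an example only.
ip multicast-routing
ip igmp snooping
interface Vlan110
 ip pim sparse-dense-mode
```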


Additional PIIC iX Multicast Considerations

The following items must be configured:

• Multicast IGMP snooping may be turned on for Philips iX networks.
• A single multicast address is required for enabling Philips devices to connect and
associate with a PIIC iX Surveillance Station. This multicast address can be configured to
224.0.23.63 (default) or 224.0.23.173.

Note

In PIIC and PIIC iX co-existence situations, IGMP snooping may be turned on for each
VLAN containing PIIC iX, but IGMP snooping must be turned off for each VLAN
containing PIIC versions less than L.0.

• PIIC iX requires a block of contiguous multicast addresses to be reserved. The number of
multicast addresses is based on the number of PIIC iX Clinical Care Units. The multicast
address space size can be calculated using one of the following formulas:

For PIIC iX A.00:

number of Clinical Care Units x 10 + 1

For PIIC iX A.01:

number of Clinical Care Units x 3 + 1

The specific start address is changeable and defaults to 239.255.0.0 in the administratively
scoped (RFC 2365) multicast address space. See the PIIC iX Installation and Configuration
Guide for detailed information on assigning the start address for a block of addresses via the
Multicast Address Range dialog box.
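The two formulas above can be expressed as a quick sizing helper. The function name is illustrative; only the formulas themselves come from this guide.

```python
# Sketch of the multicast block-size formulas above.
# A.00: units x 10 + 1    A.01: units x 3 + 1

def multicast_block_size(care_units: int, version: str) -> int:
    """Return the number of contiguous multicast addresses to reserve."""
    per_unit = {"A.00": 10, "A.01": 3}[version]
    return care_units * per_unit + 1

print(multicast_block_size(4, "A.00"))  # 41
print(multicast_block_size(4, "A.01"))  # 13
```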

Enabling Multicast on the PIIC


To Enable Multicast Features on PIIC

To enable these features, configure the following on the network infrastructure:

• Enable multicast on the router layer.
• Turn off IGMP snooping for each VLAN containing a PIIC version less than L.0.
• Enable Protocol Independent Multicast (PIM) in Sparse or Sparse-Dense Mode for
each PIIC-related VLAN requiring a Bedside to PIIC association.


Associating PIIC with Philips Monitoring Devices


The following items must be configured to enable PIIC association with Philips patient
monitoring devices:

• Multicast IGMP snooping must be turned off for VLANs containing PIIC versions prior to
L.0. IGMP snooping may be turned on for VLANs containing PIIC L.0 and later.
• A single multicast address is required for enabling Philips patient monitoring devices to
connect and associate with a PIIC. This multicast address must be configured to
224.0.23.63 (the default).
• A contiguous block of 17 unique multicast addresses is needed for each PIIC Database
Domain requiring routed Philips patient monitoring device association.
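As a sketch of the last requirement, the helper below enumerates the contiguous block for a given number of Database Domains. The start address and function name are illustrative assumptions; only the block size of 17 addresses per domain comes from this section.

```python
import ipaddress

# Sketch: each PIIC Database Domain needs a contiguous block of 17
# unique multicast addresses (per this section). The start address
# below is an arbitrary example, not a value from this guide.

def piic_multicast_block(start, domains):
    """Enumerate the contiguous multicast block for `domains` Database Domains."""
    base = int(ipaddress.ip_address(start))
    return [str(ipaddress.ip_address(base + i)) for i in range(17 * domains)]

block = piic_multicast_block("239.255.1.0", 2)
print(len(block))  # 34 addresses for two Database Domains
print(block[0], block[-1])
```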

IGMP Snooping Command Examples


Example 1

Example 1 globally enables IGMP snooping:

ip igmp snooping

If the infrastructure has VLAN(s) containing PIIC deployment(s), then IGMP snooping needs
to be turned off on these VLAN(s). Example 2 shows how to turn IGMP snooping on or off
on a per-VLAN basis.

Example 2

To turn off IGMP snooping on a specific VLAN:

no ip igmp snooping vlan vlan-id

To turn on IGMP snooping globally or on a specific VLAN:

ip igmp snooping [vlan vlan-id]


100Megabit Topology vs. Gigabit Topology


With some hardware platforms, both 100 Megabit and 1000 Megabit (Gigabit) connections
may be possible. The need for a particular media type may dictate the type of connection
you use. Gigabit connections are required for the following device interconnections:

Table 3-2: Required Gigabit Connections

Device Interconnection                                    Connection Type(s)
Router to Router                                          SFP interconnect cables
                                                          Fiber (using approved 1000FX fiber SFP)
Server farm; Router to Distribution Switch                Fiber (using approved 1000FX fiber SFP)
                                                          UTP copper via 2960-S, 2960TC, or
                                                          2960TT ports
Server farm; Distribution Switch to Distribution Switch   Fiber (using approved 1000FX fiber SFP)
                                                          UTP copper via 2960-S or 2960TC
                                                          dual-personality port

Note that it may not be the Gigabit link speed itself that mandates the use of fiber; rather,
the distance between the devices may require fiber media, for which the supported SFP
modules are only available at Gigabit speed. See “Full Gigabit Topology” on page 3-9 for
more information.

See Appendix A for a complete list of SFP connection types.

Full Gigabit Topology


A full Gigabit network is defined as a network that provides Gigabit Ethernet (abbreviated as
1 Gbps or 1000 Mbps) ports at the access and distribution layers while also providing Gigabit
uplinks (trunks) to the core layer. Be aware of the following limits and guidelines in a full
Gigabit topology:

• Access ports may be configured for either autonegotiate or fixed speed and duplex, as
required by the attached client device.
• There are no restrictions on the number of VLANs that can be assigned to ports on the
distribution and access layers in a full Gigabit network.
• No more than 18 access switches can connect to a distribution pair in a full Gigabit
topology.


Design/Implementation

Figure 3-1: PIIC iX and PIIC Co-existence Logical Views

Migration
Migration is the movement from an existing Philips-supplied infrastructure to a new or
different Philips-supplied infrastructure. Many factors can lead to the need for an
infrastructure migration. For example:

• A PIIC deployment change/expansion


• Replacing a PIIC deployment with a PIIC iX deployment
• Replacement of obsolete network infrastructure

A successful migration is dependent on a well-designed and executed migration path.


The first step in any migration is to have a clear understanding of the existing network
topology. An existing network topology will be some form of a ring or star topology. See
Figure 3-2 for examples.

Figure 3-2: Existing Topology Examples

The next step in the migration path is to add a new star distribution layer to the existing
router. Note that the same migration path is recommended for both existing star and ring
topologies.

Figure 3-3: Topology Migration


Once the new distribution layer is in place and the new PIIC iX deployment installation is
completed, the infrastructure can stay in this state, or if the intent of the migration is to replace
an old PIIC deployment with the new PIIC iX deployment, the old PIIC deployment can be
taken out of service and all unneeded equipment can be retired.


Interoperability

Overview
PIIC and PIIC iX deployments may require connection to hospital resources and services.
These interoperability connections require consideration when designing a network
infrastructure to support the PIIC and PIIC iX deployments.

PIIC Interoperability
PIIC supports connections to hospital resources and services. All external interoperability
connections are made through the second network interface card of the PIIC Database
Server. It is important to consider PIIC deployment interoperability. However, because all
connections to the hospital are through the second network interface card, there is no impact
on the Philips-provided network infrastructure design or implementation.

PIIC iX Interoperability
PIIC iX supports a large number of connections for interoperability with a variety of
external systems. When designing and implementing a network infrastructure for the purpose
of supporting a PIIC iX deployment, each individual connection must be considered.

Table 3-3 lists each application interface, the direction in which the communication session
is initiated, and the PIIC iX device initiating or receiving the connection.

Table 3-3: PIIC iX Interoperability

Interoperability Connection     Session Direction   PIIC iX Connection
IntelliSpace Event              Outbound            Each alert notification source
Management (IEM)                                    (Local Database PIIC iX or
                                                    Primary Server iX)
HL7 Systems Unsolicited         Outbound            Each Local Database PIIC iX
Patient Data
HL7 Systems Solicited           Inbound             Each Local Database PIIC iX
Patient Data
HL7 Systems - Patient           Inbound             Each Surveillance PIIC iX system
Demographics                                        participating in HIS ADT
HL7 Systems Lab                 Inbound             Each Surveillance PIIC iX system
                                                    consuming labs
12-Lead Management              Outbound            Each Surveillance PIIC iX
Systems                                             exporting 12-Lead data
External Time                   Outbound            Each Local Database PIIC iX or
Synchronization                                     Primary Server iX synchronizing
                                                    with an external time source
PDF Reports                     Outbound            Each PIIC iX Surveillance and
                                                    PIIC iX Overview exporting
                                                    reports
Hospital Printers               Outbound            Each PIIC iX Surveillance and
                                                    PIIC iX Overview expected to
                                                    print
SNMP Performance                Inbound             Each Application Performance
Monitoring                                          Monitoring
Application Performance         Outbound            Each Local Database PIIC iX or
Monitoring (APM)                                    Primary Server iX and
                                                    Surveillance system pushing data
                                                    to the APM
Web Access                      Inbound             Each Local Database PIIC iX and
                                                    Primary Server iX providing Web
                                                    access
DNS Usage                       Outbound            Each Surveillance PIIC iX,
                                                    Overview PIIC iX, Physiological
                                                    Server iX, Primary Server iX, and
                                                    APM requiring DNS
Philips Service Agent (PSA)     Outbound            Each Local Database PIIC iX,
                                                    Surveillance/Overview PIIC iX,
                                                    Physiological Server iX, Primary
                                                    Server iX, and APM requiring
                                                    PSA access


PSCN to HLAN Connectivity

Direct Connection
Direct connection between a PSCN and the HLAN has PSCN design implications. This
section presents:

• The design details that are associated with connecting a PSCN network directly to the
customer network.
• Use of a Philips-supplied gateway appliance for the purpose of interfacing a PIIC iX
system to devices on the Hospital network.
• Layer 2 and Layer 3 PSCN network connection designs

Layer Two Connectivity

Non-routed PSCN: Philips distribution layer or access layer switches can be connected to the
HLAN. Philips switch interfaces should be configured as routed interfaces to avoid spanning
tree issues between the Philips and customer switch networks.

Figure 3-4: Layer 2 Connectivity
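One way to realize a routed interface of this kind on a Cisco switch is sketched below. The interface number and addresses are hypothetical examples, not values from this guide.

```
! Hypothetical sketch: configure the HLAN-facing uplink as a routed
! port so spanning tree does not extend into the customer network.
! Interface and addresses are examples only.
interface GigabitEthernet0/25
 no switchport
 ip address 192.0.2.2 255.255.255.252
```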


Layer 3 Connectivity

Routed PSCN: Philips routers can be directly connected to the HLAN.

Figure 3-5: Layer 3 Connectivity

Gateway Connection
Figure 3-6 illustrates a Layer 2 PSCN connecting to the HLAN via the Juniper SRX100
Gateway device. Figure 3-7 illustrates a Layer 3 PSCN connecting to the HLAN via the
Juniper SRX100 Gateway device. For more details about using the SRX100 Gateway in your
network, See the Juniper SRX100 Gateway Installation and Service Manual for the PIIC iX.

Figure 3-6: Layer 2 PSCN Connecting to HLAN Via Gateway

Figure 3-7: Layer 3 PSCN Connecting to HLAN Via Gateway


Implementation

Overview
This chapter describes the details that are specific to the implementation of a Star
Topology for PIIC and PIIC iX and includes the following topics:

• Non-Routed Star Network Topology


• Routed Star Network Topology
• Management VLAN
• Router Configurations

Non-Routed Star Network Topology


The following sections provide the examples and specific details you need to
implement a Non-Routed Star Topology network for use with a PIIC or PIIC iX
deployment. It includes the following topics:

• Cisco Implementation Examples


• HP Implementation Examples

Non-Routed Implementation Examples

Cisco Implementation Examples

Cisco Small Non-Routed Star Topologies can be designed using all supported
Access switches. While these systems will only use a single VLAN, they may have
multiple VLANs defined in their configurations as a result of the routed star
topology templates used to configure them (see below). Figure 4-1 represents the two
ways a Cisco non-routed system can be connected.


Figure 4-1: Cisco Small Non-Routed Connections


The configuration details are as follows:

Config template:  ACCESS_2960TC_TEMPLATE.TXT
                  ACCESS_2960TT_TEMPLATE.TXT
Location in NWTS: Hardwired Networks\Current Network
                  Hardware\Configuration Files\Star Topologies\ACCESS\CISCO 2960
Hostname:         <text string, no spaces, e.g., "Access_Switch_1_2960TC">
Interfaces 1-24:  Set speed and duplex as required for client devices.
                  Assign VLAN 101 to all client ports.
                  Leave unused ports in "shutdown".
IP Addresses:     Use 172.31.0.10, 172.31.0.11, 172.31.0.12

Note

Inter-switch connections should be made using Gigabit connections. Client devices can be
connected to any port 1-24 on any of the switches.
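Per the details above, the client-port portion of such a template might look like the following Cisco IOS sketch. The port numbers and the 100/full setting are examples only, and the subnet mask is an assumption, not a value from this guide.

```
! Hypothetical sketch of the per-port settings described above.
! Speed/duplex values are examples; set them per the client device.
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 101
 speed 100
 duplex full
!
! Unused ports stay administratively down.
interface FastEthernet0/24
 shutdown
!
! Management address (subnet mask is an assumption).
interface Vlan1
 ip address 172.31.0.10 255.255.0.0
```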

HP Implementation Examples

HP Small Non-Routed Star Topologies can be designed using all supported HP Access
switches. These systems will only use a single VLAN (VLAN 101). Non-Routed two- and
three-switch systems (using the HP2510) are connected using redundant Gigabit trunk links;
100 Mb inter-switch links are not supported.


Figure 4-2 represents the two ways an HP non-routed system can be connected. No other
variations of two- or three-switch topologies are supported.

Figure 4-2: HP Small Non-Routed Connections


The template file for the HP2510 when used in the small non-routed topologies can be found
in HARDWIRED NETWORKS\CURRENT NETWORK HARDWARE\CONFIGURATION
FILES\STAR TOPOLOGIES\NON-ROUTED SYSTEMS\HP2510.

Note

Fixed Mode Monitoring is not supported for PIIC iX A.00, but is supported for PIIC and
PIIC iX A.01 and later.

The configuration details are as follows:

Config template:  ACCESS_HP2510_TEMPLATE.TXT
Location in NWTS: Hardwired Networks\Current Network
                  Hardware\Configuration Files\Star Topologies\NON-ROUTED
                  SYSTEMS\HP2510\ACCESS_HP2510
Hostname:         <text string, no spaces, e.g., "Access_Switch_1_HP2510">
Interfaces 1-24:  Set speed and duplex as required for client devices.
                  Do not change the VLAN assignment in the template file (VLAN 101).
                  Leave unused ports in "shutdown".
IP Addresses:     Use 172.31.0.10, 172.31.0.11, 172.31.0.12

Note

If Fixed Mode Monitoring is used on an HP Non-Routed system, an extension switch is not
required.

Inter-switch connections should be made using Gigabit connections. Client devices can be
connected to any port 1-24 on any of the switches. Inter-switch links should be set to
auto-negotiate.
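A hedged HP 2510 (ProCurve CLI) sketch of the settings above follows. The fixed-speed port number is an example only, and the placement of the management address and its subnet mask are assumptions, not values from this guide.

```
; Hypothetical HP 2510 (ProCurve CLI) sketch of the settings above.
hostname "Access_Switch_1_HP2510"
; Client ports belong to VLAN 101 per the template; the management
; address placement and subnet mask are assumptions.
vlan 101
   untagged 1-24
   ip address 172.31.0.10 255.255.0.0
   exit
; Example of a fixed-speed client port; leave others at auto-negotiate.
interface 5
   speed-duplex 100-full
   exit
```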


Routed Star Network Topology

Overview
The following sections provide the examples and specific details you need to implement a
Routed Star Topology network for use with a PIIC or PIIC iX deployment.

Note

You may use Cisco or HP switches in a star topology. However, the use of HP routers in a
star topology is restricted to, and supported by, the Advanced Network Design and
Implementation (ANDI) network delivery model only. See the Advanced Networking Design
and Implementation Guidelines for Star Topologies document for details and requirements.

Note

If you are using Cisco switches in your network, use the star topology configuration files
supplied on the Network Infrastructure Tools Suite to properly configure each Cisco switch
for its function (Router/Core, Distribution, or Access). Star topology configuration files are
available only for Cisco switches and for the HP 2510 switch at the time of this printing.

Connection Details

Router/Core Switch Connection Details

Core switches on Star networks are always connected by a direct link. The configuration of
this link can vary if a server farm is present, but the direct link is always one of the following:

• An EtherChannel
• A single-cable trunked connection


Host devices (such as a PIIC DBS, a Physiological Server iX, or a Smart-hopping APC) may
not be directly connected to the Core Switch. Only a PIIC iX Primary Server can connect
directly to the Core Switch, and only if the Core Switch has an available appropriate
(100 Megabit or Gigabit) access interface for connectivity. In standard Star Topology
configurations, Gigabit ports on a Core Switch are used to interconnect the Core Switch to
another Core Switch, as shown in Figure 4-3. These connections utilize the 1 ft. (0.3 m)
cables with the SFP connectors that ship with each router.
Figure 4-3: Inter-connecting Core Switches on a Star Topology

Distribution Switch Connection Details

If desired, host devices may be directly connected to a Distribution Switch.

Ports 19-24 are typically used as the uplink ports. These ports are configured as trunks and
have spanning tree enabled. (This does not apply to the Cisco 3750 12-port switch.)

Depending on the switch interconnect link speeds used in the network, there may be a limit to
the number of VLANs on the distribution switch that can be assigned to access ports on the
distribution switch. (The trunk ports on the distribution switch may pass an unlimited number
of VLANs.)

Figure 4-4: Distribution Switch Connections
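A Distribution Switch uplink port (one of ports 19 to 24) is configured as an 802.1Q trunk with spanning tree left enabled. The following is a sketch only; the interface numbering is assumed, and the authoritative settings are in the configuration files supplied on the Network Infrastructure Tools Suite:

interface FastEthernet0/19
 description Uplink to Core Switch
 switchport trunk encapsulation dot1q
 switchport mode trunk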

Access Switch Connection Details

Gigabit ports 0/1 and 0/2 on the Access Switch are always used as the uplink ports to either
the Core Switches (Routers) or Distribution Switches. These ports may be run at 100
Megabits if necessary, based on the port type available on the distribution or core software
above it. The remaining 24 ports can be configured for device connections.


Access Switches are never the root bridge in a spanning tree instance.

Figure 4-5: Access Switch Connections
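An Access Switch uplink could be sketched as follows; the interface names vary by switch model, and the authoritative settings are in the configuration files supplied on the Network Infrastructure Tools Suite:

interface GigabitEthernet0/1
 description Uplink to Core or Distribution Switch
 switchport trunk encapsulation dot1q
 switchport mode trunk

Note that no spanning-tree vlan priority command is applied on an Access Switch; it is left at the default priority of 32768 so that it does not win the root bridge election against the Core and Distribution switches.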

Switch Limits

Router/Core Switch Limits

• Because the currently used Core Switches have a maximum of 24 ports, this star topology
design is limited to a total of 24 Access switches.
• The total number of end devices (both DBSD and ITS) that can be connected to a router/
core switch network is 576.

Distribution Switch Limits

The Distribution Switch enables the support of large networks requiring more Access
switches than the Core can support.

• The Distribution Switches must be used in pairs and are directly connected to the Routers
and Core Switches in the topology.
• There is a maximum of 24 Distribution Switch pairs for each star network.

Access Switch Limits

• You may attach a maximum of 24 Access Switches directly to the Router/Core Switch.
• Multiple PIIC and/or PIIC iX deployments should not be mixed on an Access Switch.
• For PIIC deployments, the Access Switch is limited to a maximum of two VLANs per
switch, one VLAN for a single PIIC deployment and one VLAN for the Smart-hopping
deployment.
• When attaching Access Switches to the Router/Core Switch via a Distribution Tier, you
can connect a maximum of six Access Switches per Distribution Switch pair.
• With up to 24 Distribution Pairs supported, this brings the maximum total number of
Access Switches to 144.


Routed Implementation Examples

Note

The following examples are based on switch to switch uplinks using 100 Megabit links. If
Gigabit links are used, the noted limitations do not apply. See “Switch Interconnection”
on page 2-3 for more information on 100 Megabit and Gigabit limitations.

Single Distribution Switch Pair Example

Figure 4-6 shows a star topology that uses a distribution layer with a single pair of
Distribution Switches.

Figure 4-6: Star Topology with Distribution Layer

Up to 24 Distribution Switch pairs may be used per network. With six Access Switches
allowed per Distribution Switch pair, a maximum Distribution Switch deployment would
allow 144 Access Switches to be connected to the network. Each Distribution Switch may
also use up to 18 ports for end devices, bringing the total number of end devices that can be
connected to a Star Topology with a fully populated Distribution Layer to 4320
(144 Access Switches × 24 ports = 3456, plus 24 pairs × 2 Distribution Switches × 18 ports = 864).


Maximum Distribution Switch Example

Figure 4-7 represents a routed star network topology with a maximum Distribution Switch
build out. Note that for the sake of brevity, Figure 4-7 does not show all Distribution or
Access Switches.

Figure 4-7: Star Topology with Maximized Distribution Layer

When designing your network, you should factor in the total number of end devices that must
be supported and the location of these devices within the installation site.

Using 100 Megabit Distribution Switches in a PIIC or PIIC iX Deployment

Note

If you are using Gigabit all the way to the core layer, there are no limitations. Therefore, the
following rules are not restrictions for Gigabit switches.

1. Host devices may be directly connected to a Distribution Switch. The Distribution Switch
is limited to:

• A maximum of two (2) PIIC or PIIC iX Deployments per switch, and one ITS. (This
rule does not apply to the Cisco 3750 12-port switch.)

2. The remaining ports (ports 1-18) can be configured for connection to devices as follows:
• Three VLANs are assigned to ports on a Distribution Switch. The VLANs can be
implemented only in the following configurations:

• 2 PIIC or PIIC iX Deployments and 1 ITS
• 1 PIIC or PIIC iX Deployment and 1 ITS
• 2 PIIC or PIIC iX Deployments
• 1 ITS


Using Access Switches in a PIIC or PIIC iX Deployment


As noted in “Routed Implementation Examples” on page 4-7, Gigabit ports 0/1 and 0/2 on the
Access Switch are used as the uplink ports to either the Core Switches (Routers) or
Distribution Switches. The remaining 24 ports can be configured for device connections as
follows:

• All ports (1 to 24) assigned to one single PIIC or PIIC iX deployment
• All ports (1 to 24) assigned to the ITS Wireless Subnet
• Ports 1 to 12 assigned to one single PIIC or PIIC iX deployment
• Ports 13 to 24 assigned to the ITS Wireless Subnet
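As a sketch, the split configuration in the last two bullets could be applied as follows, assuming the PIIC or PIIC iX deployment uses VLAN 101 and the ITS Wireless Subnet uses VLAN 124 (per Table 4-1):

interface range FastEthernet0/1 - 12
 switchport mode access
 switchport access vlan 101

interface range FastEthernet0/13 - 24
 switchport mode access
 switchport access vlan 124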

Management VLAN
By default, VLAN 1 (the Management VLAN) is used to manage PIIC and PIIC iX network
switches and is not used for data traffic. VLAN 1 can be used to connect to switches for
management of the network infrastructure. In addition, all switch management interfaces are
in the same subnet, no matter which Application Deployment they are connected to. It is also
possible to connect VLAN 1 to the hospital management network. This requires a connection
to one or both of the Core switches in order to be able to reach the hospital management
console. A static route must be added to the Core switch(es) in order to access the
management console.
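For example (a sketch only; the management subnet and next-hop address shown are placeholders, not site values):

ip route 10.10.10.0 255.255.255.0 10.20.20.1

where 10.10.10.0/24 is assumed to be the subnet containing the hospital management console and 10.20.20.1 is assumed to be the HLAN next hop reachable from the Core switch.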

Requirements
The following requirements must be met before the customer is allowed to access the
Management VLAN.

• An Access Control List (ACL) must be applied on the Management VLAN interface. This
ACL will only allow customer management access from specific management subnets.
This is to restrict access to devices from only the Network Operation Center (NOC) and
from no other point on the hospital LAN.

• Only SNMP Read Only (RO) access is allowed. The SNMP RO community string may not be the
default "public".

If you use the Ping command to access devices on the subnets, the Internet Control
Message Protocol (ICMP) ping access guidelines must be followed.
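These requirements could be sketched as follows on a Cisco Core switch; the NOC subnet, ACL number, and community string shown here are placeholders for illustration:

access-list 50 permit 10.10.10.0 0.0.0.255
access-list 50 deny any

interface Vlan1
 ip access-group 50 in

snmp-server community <non-default-string> RO 50

The same ACL number is referenced on the SNMP community so that SNMP reads are also restricted to the NOC subnet.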


Router Configurations
A Cisco Layer 2/Layer 3 Switch will be used as a router on routed network topologies.

With the exception of the 3750 12-port, all Cisco routers will be pre-configured at the factory
for use in Star topologies and require little to no configuration in the field. The ports and
subnet designations are pre-set.

This section describes the router configurations for routed topologies and includes:

• Subnet Configuration
• Router Load Balancing
• Spanning Tree Protocol Considerations for PIIC and PIIC iX

Subnet Configuration
Table 4-1 lists the subnet configuration of the routers when used with star topologies. This
configuration will allow legacy and new topologies to co-exist using one standard router
configuration. You will be required to change the router’s default, factory configuration to
support star topologies. The router configurations required to support star topologies are
provided on the Network Tools Suite.

When configured to support star topologies, the router interfaces will function as trunks. A
trunk is a router or switch interface that is configured to carry multiple VLANs.
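On a Cisco switch, for example, a router/core port is made a trunk carrying VLANs 101 to 124 and VLAN 1 roughly as follows. This is a sketch only; the star topology configuration files supplied on the Network Tools Suite already contain the equivalent statements:

interface FastEthernet1/0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 1,101-124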

Table 4-1: Subnet Configuration for Star Topologies for PIIC and PIIC iX
Physical Router Interface/Port #    Function/Name    Subnet

1 Trunk All VLAN Subnets (101 to 124, & 1)


2 Trunk All VLAN Subnets (101 to 124, & 1)
3 Trunk All VLAN Subnets (101 to 124, & 1)
4 Trunk All VLAN Subnets (101 to 124, & 1)
5 Trunk All VLAN Subnets (101 to 124, & 1)
6 Trunk All VLAN Subnets (101 to 124, & 1)
7 Trunk All VLAN Subnets (101 to 124, & 1)
8 Trunk All VLAN Subnets (101 to 124, & 1)
9 Trunk All VLAN Subnets (101 to 124, & 1)
10 Trunk All VLAN Subnets (101 to 124, & 1)
11 Trunk All VLAN Subnets (101 to 124, & 1)
12 Trunk All VLAN Subnets (101 to 124, & 1)
13 Trunk All VLAN Subnets (101 to 124, & 1)
14 Trunk All VLAN Subnets (101 to 124, & 1)

15 Trunk All VLAN Subnets (101 to 124, & 1)
16 Trunk All VLAN Subnets (101 to 124, & 1)
17 Trunk All VLAN Subnets (101 to 124, & 1)
18 Trunk All VLAN Subnets (101 to 124, & 1)
19 Trunk All VLAN Subnets (101 to 124, & 1)
20 Trunk All VLAN Subnets (101 to 124, & 1)
21 Trunk All VLAN Subnets (101 to 124, & 1)
22 Trunk All VLAN Subnets (101 to 124, & 1)
23 Trunk All VLAN Subnets (101 to 124, & 1)
24 Trunk All VLAN Subnets (101 to 124, & 1)
VLAN 101 Network #1 172.31.0.0
VLAN 102 Network #2 172.31.8.0
VLAN 103 Network #3 172.31.16.0
VLAN 104 Network #4 172.31.24.0
VLAN 105 Network #5 172.31.32.0
VLAN 106 Network #6 172.31.40.0
VLAN 107 Network #7 172.31.48.0
VLAN 108 Network #8 172.31.56.0
VLAN 109 Network #9 172.31.64.0
VLAN 110 Network #10 172.31.72.0
VLAN 111 Network #11 172.31.80.0
VLAN 112 Network #12 172.31.88.0
VLAN 113 Network #13 172.31.96.0
VLAN 114 Network #14 172.31.104.0
VLAN 115 Network #15 172.31.112.0
VLAN 116 Network #16 172.31.120.0
VLAN 117 Network #17 172.31.128.0
VLAN 118 Network #18 172.31.136.0
VLAN 119 Network #19 172.31.144.0
VLAN 120 Network #20 172.31.152.0
VLAN 121 Network #21 172.31.160.0
VLAN 122 Network #22 172.31.168.0
VLAN 124 ITS #1 172.31.240.0


Router Load Balancing


In the redundant router topology with multiple DBS Domains, a Router Load Balancing
configuration is used to share the data-forwarding load between the redundant router pair.
Table 4-2 shows which router is the primary and which is the secondary router for each of the
networks in a star topology.

Note

VLANs 101 to 124 are configured on the routers and enabled by default.

VLANs 200 through 222 are included in the router templates, but commented out by default.

VLANs 201 through 222 are for use on PIIC iX only, with a /24 subnet mask.

A maximum of 32 VLANs can be active in a router pair.

Table 4-2: Load Balancing Router Configuration for Star Topologies

Network Name    Network Address    Primary Router    Backup Router    Port    Trunk Ports

Network #1 172.31.0.0 Router A Router B VLAN 101 1-24


Network #2 172.31.8.0 Router A Router B VLAN 102 1-24
Network #3 172.31.16.0 Router A Router B VLAN 103 1-24
Network #4 172.31.24.0 Router A Router B VLAN 104 1-24
Network #5 172.31.32.0 Router A Router B VLAN 105 1-24
Network #6 172.31.40.0 Router A Router B VLAN 106 1-24
Network #7 172.31.48.0 Router A Router B VLAN 107 1-24
Network #8 172.31.56.0 Router A Router B VLAN 108 1-24
Network #9 172.31.64.0 Router A Router B VLAN 109 1-24
Network #10 172.31.72.0 Router A Router B VLAN 110 1-24
Network #11 172.31.80.0 Router A Router B VLAN 111 1-24
Network #12 172.31.88.0 Router A Router B VLAN 112 1-24
Network #13 172.31.96.0 Router A Router B VLAN 113 1-24
Network #14 172.31.104.0 Router A Router B VLAN 114 1-24
Network #15 172.31.112.0 Router A Router B VLAN 115 1-24
Network #16 172.31.120.0 Router A Router B VLAN 116 1-24
Network #17 172.31.128.0 Router A Router B VLAN 117 1-24
Network #18 172.31.136.0 Router A Router B VLAN 118 1-24

Network #19 172.31.144.0 Router A Router B VLAN 119 1-24
Network #20 172.31.152.0 Router A Router B VLAN 120 1-24
Network #21 172.31.160.0 Router A Router B VLAN 121 1-24
Network #22 172.31.168.0 Router A Router B VLAN 122 1-24
ITS Network 172.31.240.0 Router B Router A VLAN 124 1-24

Spanning Tree Protocol Considerations for PIIC and PIIC iX


Spanning Tree Protocol (STP) is a Layer 2 protocol used to determine the best loop-free path
through a redundant network. Spanning tree is configured on a switch-by-switch basis;
however, some switches may use default STP values while others need a specific priority
level based on their hierarchical level within a given topology.

Depending on the topology type, a different STP algorithm may be used; for Star Topologies,
Cisco's Rapid Per-VLAN Spanning Tree algorithm (Rapid PVST+) is used. Table 4-3 lists the
root bridge priority level for each of the switches in the hierarchy.

Table 4-3: STP Root Bridge Priority Level for Network Switches (Star Topologies)

                       Root Bridge Priority
Switch Type            VLAN 101-120       VLAN 123-124
Core 1 (Router A)      4096¹              8192¹
Core 2 (Router B)      8192¹              4096¹
Distribution 1         16384¹             20480¹
Distribution 2         20480¹             16384¹
Access                 32768 (default)    32768 (default)

¹ In Star Topologies, the root bridge priority of the Core 1, Core 2,
Distribution 1 and Distribution 2 switches will alternate by VLAN.
The STP types and values will be included in the standard configuration files for each switch
type on the Network Tools Suite.

Another necessary STP parameter that must be configured on each switch is the portfast
configuration parameter. For HP switches this is referred to as edgeport. This parameter is
intended for use on ports that have servers, bedsides, APs, APCs and devices other than
switches or routers attached to them.
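For example, on a Cisco switch portfast is enabled per interface; the interface number here is illustrative:

interface FastEthernet0/1
 description Bedside / AP / APC / server port
 switchport mode access
 spanning-tree portfast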


All other STP parameters should be allowed to default. This will yield the correct spanning
tree configuration and topology.

Rapid Spanning Tree Protocol on Star Topologies


For Star Topologies, Rapid Per VLAN Spanning Tree+ Protocol (Rapid PVST+) will
be configured throughout the network.
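On a Cisco switch, Rapid PVST+ is enabled globally, and the per-VLAN root bridge priorities from Table 4-3 (multiples of 4096) are set on the Core and Distribution switches. A sketch for Core 1 (Router A); the authoritative values are in the standard configuration files on the Network Tools Suite:

spanning-tree mode rapid-pvst
spanning-tree vlan 101-120 priority 4096
spanning-tree vlan 123-124 priority 8192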

Direct Connection

Note

The Direct Connect methods shown below are optional approaches to connecting the PSCN to
the HLAN. The Philips standard approach uses the Juniper SRX100 Firewall to connect the
PSCN to the HLAN. See the Juniper Installation and Service Manual for the PIIC iX for
complete details.

Connecting from a Layer 2 PSCN Directly to the HLAN


This solution may be used with a PSCN that has Layer 2 switches when you need to connect
to the Hospital LAN (HLAN).

Basic Steps

1. Establish physical connection.


2. Determine the VLAN number to use on the PSCN.
3. Determine the IP address space to use on the PSCN.
4. Test connectivity to HLAN.

Information You Will Need to Request from IT:

1. Physical media details for the connections.


2. VLAN number to use on the PSCN switches.
3. IP addresses and subnet masks of the Clinical subnet (start with 172.31.n.0/21).

Existing Topologies
The pre-existing Philips network will look like one of the topology types below. Please note
that specific numbers and topologies of switches will vary from site to site. Some smaller
networks will be as simple as one or two switches.


Ring Network

Figure 4-8: Ring Network

Layer 2 Only Star Network

Figure 4-4: Layer 2 Only Star Network

Physical Connectivity
In order to connect the Philips network to the HLAN you will first need to configure a
physical connection between the two networks. For most switches this will be a 100 Mbps
copper connection. Depending on the switch type, there may be a dual personality gigabit
connection available. In that case you can use a gigabit copper connection or an SFP to
connect to the HLAN.

For Ring networks, you must connect to the HLAN from the ICN Core switch. For very small
star networks you may connect from an Access switch. For L2 Star networks with
Distribution switches, connect to the HLAN from the Distribution A switch.

The interface that connects from the PSCN to the HLAN must be an access port. Make sure to
disable spanning-tree portfast (edge port on HP switches) on this interface.
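For example, on a Cisco PSCN switch the HLAN-facing interface could be sketched as follows; the interface and VLAN numbers are placeholders, not site values:

interface FastEthernet0/24
 description Link to HLAN
 switchport mode access
 switchport access vlan 101
 no spanning-tree portfast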


VLAN Numbering
It is recommended, but not necessary, to use a VLAN number on the PSCN switches that
matches the VLAN number on the HLAN. Using the same VLAN number will avoid any
VLAN mismatch between switches. However, this will most likely require re-numbering the
existing VLAN in use on the PSCN switches.

For Ring networks, all ports are in VLAN 1. Renumbering will require all ports, including
inter-switch link ports to be configured as access ports in the new VLAN.

For Star networks, only the existing access ports need to be reconfigured in the new VLAN.

HLAN configuration
The gateway for the ICN is configured on the HLAN switches or routers. The HLAN switch
port may be configured as an access port or a routed port (no switchport). If configured as an
access port, the interface should be in the same VLAN as the SVI interface. The SVI (Layer 3
VLAN) interface may be configured on the edge switch or elsewhere in the Hospital network.

The preferred method of configuring the HLAN interface is as a routed interface. This creates
a spanning tree boundary between the two networks. The Hospital IT department must
also enable multicast routing and PIM on their routed or VLAN interface and on the L2 interface.

Verify connectivity to the HLAN by pinging the gateway address from devices attached to the
PSCN switches. You should also be able to ping devices on other subnets, such as the HL7
server.

Connecting from a Routed PSCN Directly to the HLAN


This solution may be used with a PSCN that has Layer 3 switches when you need to connect
to the Hospital LAN (HLAN).

Basic Steps

1. Establish the physical connections.


2. Establish Layer two connectivity.
3. Establish IP connectivity.
4. Determine what subnets need to be advertised to the HLAN.
5. Determine what routing type to use. Options are: static, OSPF, or EIGRP.

Information You Will Need to Request from IT

1. Physical media details for connections


2. IP addresses and subnet masks of connecting interfaces
3. IP addresses and subnet masks of Clinical subnets (start with 172.31.n.0/21)
4. For static routing
a. Next-hop addresses (should be the same as the connected interfaces).

b. Static routes added to HLAN for Clinical Subnets.


5. For EIGRP
a. AS number for EIGRP.

6. For OSPF
a. Additional Loopback Interface IP address for router ID.

b. OSPF Area Number.

Existing Topologies
See Figure 4-5 for an example of the pre-existing Philips network. Note that this is a
simplified view.

Figure 4-5: Pre-Existing Philips Network Example

Physical Connectivity
In order to connect the Philips network to the HLAN you will first need to configure a
physical connection between the two networks. Networks that use the Cisco 3560V2-24TS
switch require 100 Mbps copper connections.

For a 3750-24 FS, or 3750v2-24 FS you may either use 100 Mbps multimode fiber, or one of
the Gigabit SFP connections. If the Gigabit SFP connections are already in use for the Core A
to Core B links or for a Server Farm, you may make a new Core A to Core B link using two
of the 100 Mbps MT-RJ Multi-mode ports and use the Gigabit link to connect to the HLAN.

For a 3750G-12S you may use any of the unused SFP ports with the desired SFP modules
(MM Fiber, SM Fiber or Copper).

See Table 4-6 for a summary of physical connection types for each switch.


Table 4-6: Summary of Physical Connection Types per Switch

Switch          Main Interface       Connector Type                 Uplink Interface   Connector Type
3560V2-24TS     100 Mbps Copper      RJ45                           Gigabit SFP        LC (MM or SM Fiber) or RJ45
3750-24 FS      100 Mbps MM Fiber    MT-RJ                          Gigabit SFP        LC (MM or SM Fiber)
3750v2-24 FS    100 Mbps MM Fiber    LC                             Gigabit SFP        LC (MM or SM Fiber) or RJ45
3750G-12S       Gigabit SFP          LC (MM or SM Fiber) or RJ45    N/A                N/A

For any Star network, you must always have a link between the two PSCN Core switches. It
is recommended to use an ether-channel link.

For existing networks that have all Core switch ports already in use you may use an ANDI
design to stack an additional 3750 switch with the existing 3750. See the ANDI Guidelines for
Star Topologies guide for more details.

You may use one or two links to connect the PSCN to the HLAN, however two links are
recommended. If using only one link, connect to the HLAN from the PSCN Core A switch.

Figure 4-7: Connecting to HLAN from PSCN Core A Switch

Layer Two Connectivity


Philips Core switches will connect to the HLAN switches using routed interfaces (no
switchport). On the HLAN side, the customer may configure Layer 3 interfaces at their
network edge (on the Access Layer), or use VLANs to home back to Layer 3 interfaces at the
Distribution or Core Layer. In any of these cases the Philips switch interfaces should be
configured as routed interfaces to avoid spanning tree issues between the Philips and the
customer switch networks.


The customer’s edge switch interfaces should either also be configured as routed interfaces, or
should be Access ports in the VLAN that they are homing back to the Distribution or Core.

Example

In this example two subnets were created. 10.20.20.0/30 is used to connect Philips Core A to
HLAN Switch A, and 10.30.30.0/30 was used to connect Philips Core B to HLAN Switch B.

Figure 4-8: Layer 2 Connectivity Example

Fa1/0/1 on each Philips Core Switch was used to connect between the PSCN and HLAN
Switches. Any unused interface can be used. The interfaces were configured as routed
interfaces using the no switchport command.

On Philips Core A:
interface FastEthernet1/0/1
description Link to HLAN A Switch
no switchport
ip address 10.20.20.2 255.255.255.252
duplex full

On Philips Core B:

interface FastEthernet1/0/1
description Link to HLAN B Switch
no switchport
ip address 10.30.30.2 255.255.255.252
duplex full

Once the connection is made and the link lights are on, L2 and L3 connectivity can be tested
between the PSCN and HLAN devices. Pings should be repeated for all IPs.

The following example illustrates the commands used for Next Hop Connectivity testing.
These commands are shown for reference and do not show the complete output.


Example L2 and L3 Next Hop Connectivity Testing:

RouterA_3750_Star#show ip interface brief

Interface            IP-Address    OK? Method Status   Protocol
FastEthernet1/0/1    10.20.20.2    YES manual up       up

RouterA_3750_Star#ping 10.20.20.1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.20.20.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/4/9 ms
RouterA_3750_Star#

Routing between the PSCN and the HLAN

Static routing to HLAN

A simple method of sharing routes between the PSCN and HLAN is to use static routing. A
default route is put into each of the PSCN Core switches. The customer IT department points
to the Philips Clinical subnets with static routes. To maintain connectivity in the event of a
switch failure, the HLAN routers should also be configured to advertise the static routes into
their routing protocol. This can be done by redistributing the static route into the routing
protocol on each HLAN switch.

HSRP interface tracking must be enabled on the PSCN Core switches. This is critical in
order to protect against black-holing traffic in the event that one of the HLAN switches or links
fails.

In the following example, the default route is pointed to 10.20.20.1 from Core A and
10.30.30.1 from Core B. The clinical VLAN is VLAN 120. Interface tracking is used to track
the connection to the HLAN (FastEthernet 1/0/1) on each Core Switch. The HSRP priority is
set to decrement by 50 in the event of an interface failure. Be sure to enable standby preempt
on both Core switches, as this is required with interface tracking.

On Philips Core A:
ip route 0.0.0.0 0.0.0.0 10.20.20.1

interface Vlan120
description ICN #20 - Wired Subnet
ip address 172.31.152.2 255.255.248.0
no ip redirects
ip pim sparse-dense-mode
standby 0 ip 172.31.152.1
standby 0 priority 110
standby 0 preempt
standby 0 track FastEthernet1/0/1 50


On Philips Core B:
ip route 0.0.0.0 0.0.0.0 10.30.30.1

interface Vlan120
description ICN #20 - Wired Subnet
ip address 172.31.152.3 255.255.248.0
no ip redirects
ip pim sparse-dense-mode
standby 0 ip 172.31.152.1
standby 0 priority 90
standby 0 preempt
standby 0 track FastEthernet1/0/1 50

The Clinical subnets do not need to be advertised to the HLAN routers because the HLAN
routers will be configured with static routes to point to them.

On HLAN Router A:
ip route 172.31.152.0 255.255.248.0 10.20.20.2

It is also necessary to redistribute the static route into the dynamic routing protocol:

router ospf XYZ
 redistribute static subnets route-map STATIC

access-list 10 permit 172.31.152.0

route-map STATIC permit 10
 match ip address 10

The default route is listed as a static route on each of the PSCN Core switches. You will need
to ping from end-to-end to confirm the customer IT department has correctly configured their
routers.

RouterA_3750_Star#sh ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is 10.20.20.1 to network 0.0.0.0

     10.0.0.0/30 is subnetted, 1 subnets
C       10.20.20.0 is directly connected, FastEthernet1/0/1
S*   0.0.0.0/0 [1/0] via 10.20.20.1
RouterA_3750_Star#


Dynamic routing to HLAN with EIGRP


In this example, one of the ICN subnets is used for a PIIC iX system, and the remaining ICN
subnets are left as is for legacy PIIC systems. In practice, single or multiple Clinical subnets
may be advertised to the HLAN for PIIC iX operation.

EIGRP is already in use on the PSCN Cores, so a new EIGRP Autonomous System (AS)
number should be used. You will need to make sure that the chosen AS number is not already
in use on the HLAN; if it is, use a different AS number on the PSCN.

First you need to enable the routing process and advertise the network that is linking the
PSCN network to the HLAN. In this example EIGRP 999 was used with the two subnet
methods discussed above. If using the one subnet method, the wildcard mask will change in
the EIGRP network statement.

Figure 4-9: HLAN with EIGRP

Examples

On Philips Core A:
router eigrp 999
network 10.20.20.0 0.0.0.3

On Philips Core B:

router eigrp 999


network 10.30.30.0 0.0.0.3


Next you need to determine which ICN subnet you want to advertise to the HLAN. In this
case, the VLAN 120 subnet of 172.31.152.0 was used for the PIIC iX devices and advertised.
If a different subnet or masking was used you would need to update the configuration
accordingly. If a /24 subnet mask is used, the wildcard mask would be 0.0.0.255.

On Philips Core A:
router eigrp 999
network 10.20.20.0 0.0.0.3
network 172.31.152.0 0.0.7.255

On Philips Core B:
router eigrp 999
network 10.30.30.0 0.0.0.3
network 172.31.152.0 0.0.7.255

You should see the EIGRP neighbors display on the console. You can also use the following
troubleshooting commands:

RouterA_3750_Star#show ip eigrp 999 neighbors

EIGRP-IPv4 Neighbors for AS(999)
H   Address         Interface   Hold  Uptime    SRTT   RTO  Q Cnt  Seq Num
                                (sec)           (ms)
1   172.31.152.3    Vl120       11    02:08:41  1      200  0      66
0   10.20.20.1      Fa1/0/1     12    02:10:37  5      200  0      107
RouterA_3750_Star#

Additionally, use the show ip route command to look at all of the EIGRP routes that are local
and that have been learned from the HLAN. EIGRP routes are shown with a code of D. This
example output is from the one subnet method, and is abbreviated (note that all connected
routes are not shown). Additional test HLAN networks are shown as being learned from
EIGRP. HLAN subnet numbers will vary for each site.

RouterA_3750_Star#show ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route


Gateway of last resort is not set.

10.0.0.0/8 is variably subnetted, 13 subnets, 2 masks


D 10.50.56.0/24 [90/130816] via 10.20.20.3, 17:57:56, Vlan201
D 10.50.57.0/24 [90/130816] via 10.20.20.3, 17:57:56, Vlan201
D 10.50.58.0/24 [90/130816] via 10.20.20.3, 17:57:56, Vlan201
D 10.50.59.0/24 [90/130816] via 10.20.20.3, 17:57:56, Vlan201
D 10.70.70.0/24 [90/3072] via 10.20.20.3, 17:57:56, Vlan201
D 10.50.50.0/24 [90/130816] via 10.20.20.2, 17:58:43, Vlan201
D 10.40.40.0/24 [90/3072] via 10.20.20.3, 17:57:56, Vlan201
[90/3072] via 10.20.20.2, 17:57:56, Vlan201
C 10.20.20.0/29 is directly connected, Vlan201
D 10.50.51.0/24 [90/130816] via 10.20.20.2, 17:58:52, Vlan201
D 10.50.52.0/24 [90/130816] via 10.20.20.2, 17:58:52, Vlan201
D 10.50.53.0/24 [90/130816] via 10.20.20.2, 17:58:52, Vlan201
D 10.50.54.0/24 [90/130816] via 10.20.20.2, 17:58:52, Vlan201
D 10.50.55.0/24 [90/130816] via 10.20.20.3, 17:58:05, Vlan201
RouterA_3750_Star#

Dynamic routing to HLAN with OSPF


This example is similar to the one above, except that OSPF is used instead of EIGRP to
advertise routes to the HLAN. All of the subnetting is the same as in the previous example.
Please note that this sample shows a very simplified OSPF configuration and that router IDs,
specific IP addresses, and OSPF areas will vary from site to site.

It is very unlikely that the Philips switches will be allowed to connect into the HLAN OSPF
area 0. They will most likely connect to a stub area or an independent OSPF process that will
be redistributed into the Hospital’s main OSPF process.

Examples

On Philips Core A:
interface Loopback1
ip address 3.3.3.3 255.255.255.255

router ospf 1
router-id 3.3.3.3
network 10.20.20.2 0.0.0.0 area 0
network 172.31.152.0 0.0.7.255 area 0


On Philips Core B:
interface Loopback1
ip address 4.4.4.4 255.255.255.255

router ospf 1
router-id 4.4.4.4
network 10.30.30.2 0.0.0.0 area 0
network 172.31.152.0 0.0.7.255 area 0

You should see the OSPF neighbors come up on the console. You can also run the following
troubleshooting commands:

RouterA_3750_Star#show ip ospf neighbor

Neighbor ID     Pri   State      Dead Time   Address         Interface
4.4.4.4           1   FULL/BDR   00:00:34    172.31.152.3    Vlan120
1.1.1.1           1   FULL/BDR   00:00:36    10.20.20.1      FastEthernet1/0/1

You can also use the show ip route command to look at all of the OSPF routes that are local
and that have been learned from the HLAN. OSPF routes are shown with a code of O. This
example output is from the one subnet method, and is abbreviated (all connected routes are
not shown). Additional test HLAN networks are shown as being learned from OSPF. HLAN
subnet numbers will vary for each site.

RouterA_3750_Star#sh ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route


Gateway of last resort is not set.

3.0.0.0/32 is subnetted, 1 subnets


C 3.3.3.3 is directly connected, Loopback1

10.0.0.0/8 is variably subnetted, 7 subnets, 3 masks


O 10.70.70.0/24 [110/3] via 172.31.152.3, 00:05:08, Vlan120
                [110/3] via 10.20.20.1, 00:05:08, FastEthernet1/0/1
O 10.50.51.1/32 [110/2] via 10.20.20.1, 00:05:08, FastEthernet1/0/1
O 10.40.40.0/30 [110/2] via 10.20.20.1, 00:05:08, FastEthernet1/0/1
O 10.30.30.0/30 [110/2] via 172.31.152.3, 00:05:08, Vlan120
C 10.20.20.0/30 is directly connected, FastEthernet1/0/1
O 10.50.50.1/32 [110/2] via 10.20.20.1, 00:05:08, FastEthernet1/0/1
O 10.50.55.1/32 [110/3] via 172.31.152.3, 00:05:12, Vlan120
                [110/3] via 10.20.20.1, 00:05:12, FastEthernet1/0/1

Network Time Protocol


An external Network Time Protocol (NTP) source can be used to provide a single time and
date source for a PSCN. All Layer 2 switches, Layer 3 switches and firewall devices supplied
by Philips support NTP.

All Layer 2 Access and Distribution switch templates contain configuration elements that
point to the Layer 3 Core switch management VLAN virtual IP address as the time source.

ntp server 172.31.200.1

When enabled, the Layer 3 Core switch acts as an NTP client to the external time source. In
turn, it acts as an NTP server to all Layer 2 Access and Distribution switches, as well as
firewall devices. To enable an External NTP time source at the Core level, both Core A and B
switch templates must be modified and loaded in their respective switches. External NTP time
source configuration sections are already in the core switch templates, but commented out.

To implement an external NTP time source on the core level, two changes must be made to
the Core A and B template file.

1. An external NTP IP address must be added to a template configuration line and the line
must be uncommented.
2. The appropriate time zone configuration lines must be uncommented.


For example, if the NTP Server IP is 10.0.10.50 and the system is deployed in the Central
time zone, the template would be edited to appear as follows:

*********************************************************************
!
! Replace "xxx" below with the NTP Server IP Address if an external
! NTP server is going to be used (eg.: "10.0.4.100"). Then uncomment
! the line by removing the "!" in the front of it.
!
! If no external NTP Server is used, no edits are needed
!
!
*********************************************************************
!
ntp server 10.0.10.50
!
!
*********************************************************************
!
! If using NTP, uncomment the appropriate TWO lines below for the
! time zone you are installing in. Mainland US time zones are shown.
! For other time zones edit the time zone name (text string) and the
! UTC offset value.
!
! If no external NTP Server is used, no edits are needed
!
!
*********************************************************************
!
!clock timezone EST -5
!clock summer-time EDT recurring
!
clock timezone CST -6
clock summer-time CDT recurring
!
!clock timezone MST -7
!clock summer-time MDT recurring
!
!clock timezone PST -8
!clock summer-time PDT recurring
!
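After the edited templates are loaded into both Core switches, NTP synchronization can be checked from the Core switch console with the standard IOS show commands below. The prompt is illustrative; once the external server is reachable, the status output should eventually report that the clock is synchronized.

```
RouterA_3750_Star# show ntp status
RouterA_3750_Star# show ntp associations
```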


PIIC Implementation Specifics

Overview
This chapter describes the details that are specific to the implementation of a Star
Topology for PIIC Deployments only and includes the following topics:

• IP Address Assignments for PIIC Non-Routed Devices


• IP Address Assignments for Routed PIIC Devices
• PIIC Deployment of a DBS Server Farm
• Network Switch Port Requirements for End Devices in PIIC Networks
• Extension Switches in Cisco Systems to Support Fixed Mode Monitoring in PIIC
Networks

IP Address Assignments for PIIC Non-Routed Devices


See Table 5-1 for a list of the device IP address assignments used in a non-routed
configuration where the ITS infrastructure (if used) is installed on a PIIC deployment
subnet.

Be aware of the following points when creating a Cisco Non-Routed Star Topology
network:

• In order to pass system validation, switch IP addresses and subnet masks must
reside in the address range of the specific PIIC Deployment number that is being
used.

• The default gateway must point to the address of the DBS or Standalone IIC.
See Table 5-1 for details.

• Note the following regarding Table 5-1 and the value of n.

n represents the network number and starts at 0 for a single PIIC Deployment.
This variable increments by 8 from there for additional PIIC Deployments. For
example, for PIIC Deployment 2, n would equal 8, for PIIC Deployment 3, n
would equal 16, and so on. See Table 5-2.


Table 5-1: IP Address Assignment: Non-Routed PIIC Devices

Device Types                                         IP Addresses                Default Gateway
(with Non-Routed Subnet)                             Subnet Mask: 255.255.248.0

Network Subnet Address                               172.31.n.0
Reserved for Routed Solution                         172.31.n.1 - 3
Reserved for Service PC                              172.31.n.4 - 9              IP Address of DBS or M3150 Information Center
Network Switches and Remote Client Infrastructure    172.31.n.10 - 102           IP Address of DBS or M3150 Information Center
  (i.e., Remote Client Router)
Reserved for Future Use                              172.31.n.103 - 255
1.4/2.4 GHz ITS APCs and IntelliVue 802.11 Devices   172.31.(n+1).0 - 63         IP Address of DBS or M3150 Information Center
IntelliVue 802.11 Devices and legacy Proxim          172.31.(n+1).64 - 127       IP Address of DBS or M3150 Information Center
  2.4 GHz (ISM) RangeLAN2/Harmony APs
Reserved for Future Use                              172.31.(n+1).128 - 255
1.4/2.4 GHz ITS AP Static Range                      172.31.(n+2).0 - 127        IP Address of DBS or M3150 Information Center
Bootp/DHCP Server 2 Range for 1.4/2.4 GHz ITS APs    172.31.(n+2).128 - 255      IP Address of DBS or M3150 Information Center
  (configured in APC)
Database Server (NIC 1)                              172.31.(n+3).0
Application Server (NIC 1)                           172.31.(n+3).16 - 31        IP Address of DBS or M3150 Information Center
Information Centers (NIC 1)                          172.31.(n+3).32 - 63        IP Address of DBS or M3150 Information Center
Information Center Clients                           172.31.(n+3).64 - 95        IP Address of DBS or M3150 Information Center
Reserved                                             172.31.(n+3).128 - 255
Bedside Monitors/Devices (Wired & ISM 2.4 GHz        172.31.(n+4).0 - 255        IP Address of DBS or M3150 Information Center
  Wireless)
Reserved for Future Use                              172.31.(n+5).0 - 255
Bootp/DHCP Server Range 1 for 1.4/2.4 GHz ITS        172.31.(n+6).0 - 255        IP Address of DBS or M3150 Information Center
  Transceivers/Bedsides (configured in APC)
IntelliVue XDS PC                                    172.31.(n+7).1 - 254        IP Address of DBS or M3150 Information Center
Network Broadcast Address                            172.31.(n+7).255


Table 5-2: PIIC Numbering


Database Domain Number        n
Generic                       n
1 0
2 8
3 16
4 24
5 32
6 40
7 48
8 56
9 64
10 72
11 80
12 88
13 96
14 104
15 112
16 120
17 128
18 136
19 144
20 152
21 160
22 168

When designing your network, you should factor in the total number of end devices that must
be supported and the location of these devices within the installation site.

IP Address Assignments for Routed PIIC Devices


Be aware of the following IP Address Assignment details for PIIC devices:

1. For the star topology, the IP addresses of all of the Core, Distribution and Access Switches
reside in a separate subnet, VLAN 1, the Management VLAN.

2. The IP address range of the Management VLAN is 172.31.200.0-172.31.200.255.

3. The Core Switches (routers), are assigned the following IP addresses in the Management
subnet (VLAN 1), but the PIIC system does not currently use these addresses:
• Router A: 172.31.200.2
• Router B: 172.31.200.3

4. The Distribution Switches have an address range of 172.31.200.10 to 172.31.200.58 on


the Management VLAN, VLAN 1.


5. The Access Switches get addresses within the range of 172.31.200.100 to 172.31.200.244
on the Management VLAN, VLAN 1.

6. The Core Switches (routers) are also assigned addresses in each of the other subnets. In
the Configuration Wizard, we continue to use the addresses assigned in the applicable
DBSD subnet:
• Router A: 172.31.n.2
• Router B: 172.31.n.3
• Virtual address: 172.31.n.1

7. Because the PIIC system does not use the IP addresses from the Management VLAN
(VLAN 1), the Scan Device function does not work for Network Switches used on the star
topology.

Table 5-3: Star and ITS Wireless Subnet Device IP Addresses for PIIC

Device Types                               DBSD Subnet IPs           DBSD Default    ITS Wireless Subnet IPs    ITS Default
(with Routed Subnet)                       Mask: 255.255.248.0       Gateway         Mask: 255.255.240.0        Gateway

Network Subnet Address                     172.31.n.0                                172.31.240.0
  (Used in Config Wizard for Router)
Gateway Address                            172.31.n.1                                172.31.240.1
Router A <used for ITS Wireless            172.31.n.2                                172.31.240.2               172.31.240.1
  Subnet Router>
Router B <used for ITS Wireless            172.31.n.3                                172.31.240.3               172.31.240.1
  Subnet Router>
Reserved for Service PC                    172.31.n.4 - 9            172.31.n.1      172.31.240.4 - 9           172.31.240.1
Network Switches and Remote Client         172.31.n.10 - 102         172.31.n.1      172.31.240.10 - 20         172.31.240.1
  Infrastructure
Reserved for Future Use                    172.31.n.103 - 255                        172.31.240.21 -
                                                                                       172.31.240.255
ITS APCs                                                                             172.31.241.0 - 127         172.31.240.1
IntelliVue 802.11 Devices                  172.31.(n+1).0 - 63       172.31.n.1
IntelliVue 802.11 Devices and legacy       172.31.(n+1).64 - 127     172.31.n.1
  Proxim (RangeLAN2/Harmony) APs
  Note: Proxim devices are not supported
  on IIC Release J (or higher)
Reserved for Future Use                    172.31.(n+1).128 - 255                    172.31.241.128 - 255
ITS AP Static Range (1.4/2.4 GHz)                                                    172.31.242.0 -
                                                                                       172.31.244.127
ITS APC Bootp/DHCP Server Range 2          						     172.31.244.128 -           172.31.240.1
  for 1.4/2.4 GHz APs                                                                  172.31.246.255
Database Server (NIC 1)                    172.31.(n+3).0 - 15       Default blank
Application Server (NIC 1)                 172.31.(n+3).16 - 31      172.31.n.1
Information Centers (NIC 1)                172.31.(n+3).32 - 63      172.31.n.1
Information Center Clients                 172.31.(n+3).64 - 95      172.31.n.1
Printers (Set by BootP)                    172.31.(n+3).96 - 127
Reserved                                   172.31.(n+3).128 - 255                    172.31.247.0 - 255
Bedside Monitors/Devices                   172.31.(n+4).0 - 255
  (Wired & ISM 2.4 GHz) (Set by BootP)
Reserved for Future Use                    172.31.(n+5).0 - 255
ITS APC BootP/DHCP Server Range 1          						     172.31.248.0 -             172.31.240.1
  for Transceivers/Bedsides                                                            172.31.253.255
IntelliVue XDS PC                          172.31.(n+7).1 - 254
Network Broadcast Address                  172.31.(n+7).255          172.31.n.1      172.31.255.255

Alternate IP Address Scheme


The default IP address scheme used for PSCN PIIC deployments is within the 172.31.x.x
range. For example, if PIIC is deployed on a PSCN and the 172.31.x.x address space is
already allocated for another hospital use, the customer may request the use of an alternate IP
address scheme. It is important that Philips accommodate this request for two reasons:

1. The reuse of address space may lead to address conflict issues on the PIIC database server.
2. This strategy could position these PIIC customers for a much easier PIIC to PIIC iX
migration in the future.

This section outlines the PSCN IP address changes needed to accommodate an alternate IP
address scheme request.

Note

All proposed PSCN IP addressing changes should be reviewed by Hospital IT to ensure all
needs are being accommodated.

The default IP addresses used for Philips devices are within the 172.31.x.x range. This
alternate IP scheme enables the first two octets to be changed as long as the required subnet
masks are used.

• Database domains require a subnet mask of 255.255.248.0 (/21)


• Routed ITS subnets require a subnet mask of 255.255.240.0 (/20)

Implementation details and examples are provided in “Layer 3 Routers (Core A and B)
Configuration Changes” on page 5-6.


Layer 3 Routers (Core A and B) Configuration Changes

Note

Changing an existing ITS IP address (first two octets) causes a major clinical disruption.
Planning for down time with clinical staff is required. A back-out plan must be provided to
the customer.

Required Configuration Changes

• Edit specific VLAN interfaces as well as HSRP addresses.


• Management VLAN interface and HSRP may also require changes if they are currently in
use by the hospital.
• Replace x.x with the approved first two octets of the IP address range.

Examples of each of these configuration changes are provided in the sections that follow.

Router A -- VLAN 101 IP Example


!
interface Vlan101
description ICN #1 - Wired Subnet
ip address x.x.0.2 255.255.248.0
no ip redirects
ip pim sparse-dense-mode
ip mroute-cache distributed
standby ip x.x.0.1
standby timers 1 3
standby priority 110
standby preempt
!

Router B -- VLAN 101 IP Example


!
interface Vlan101
description ICN #1 - Wired Subnet
ip address x.x.0.3 255.255.248.0
no ip redirects
ip pim sparse-dense-mode
ip mroute-cache distributed
standby ip x.x.0.1
standby timers 1 3
!
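After both routers are updated, the HSRP pair can be verified from either router console with a standard IOS show command (the prompt is illustrative). Given the configuration above, Router A should report the Active role for VLAN 101 (priority 110 with preempt) and Router B the Standby role.

```
RouterA# show standby brief
```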

Editing EIGRP Network Statements


Edit EIGRP Network statements as shown in the following example:

network x.x.0.0 0.0.7.255 <<<< ICN VLAN 101


network x.x.240.0 0.0.15.255 <<<< ITS VLAN 124


Distribution and Access Layer Switch Changes

Edit Management VLAN Interfaces Example

interface Vlan1
ip address x.x.200.10x 255.255.255.0
no ip route-cache
no shutdown

Edit IP Default Gateway Example

ip default-gateway x.x.200.1

PIIC Deployment of a DBS Server Farm

Overview
A concentration of like servers within a data center is often called a server farm. You can
support connection to a PIIC Deployment Database Server Farm by using a star topology as
shown in the example in Figure 5-1. Customers with large and multiple PIIC Deployments
may wish to co-locate PIIC Deployment Database Servers in a data center. The data center
provides the proper security, ambient environment, and primary and backup electrical
supplies for the valuable PIIC Deployment Database Servers.


[Figure: ICN Database Server Farm - eight Database Servers (DBS1-DBS8) connect to
Distribution Switches 1A and 1B; the two Distribution Switches are joined by GBit links
and uplink to Router A (Core Switch 1) and Router B (Core Switch 2), which also carry
links to the other Distribution and Access Switches.]

Figure 5-1: Example - Supporting a PIIC Deployment DBS Server Farm with a Star Topology

Requirements
Note the following requirements when supporting connection to a PIIC Deployment Database
Server Farm using a star topology:

• The PIIC Deployment Database Servers in the Server Farm connect directly to a pair of
Distribution Switches.
• The pair of Distribution Switches connected to the PIIC Deployment Database Server
Farm must be linked together using a Gigabit trunk.
• Each Distribution Switch connected to the DBSs must connect to a Router using a Gigabit
trunk.
• The Core Switches (routers) must be interconnected using a GBit trunk.
• The following Configuration files are provided on the Network Tools Suite to configure
the Cisco Model 2960-TC and 2960-S Switches for use as Distribution Switches in a
Server Farm:

• DISTRIBUTIONA_SERVER_FARM_2960TC_STAR.TXT,
• DISTRIBUTIONB_SERVER_FARM_2960TC_STAR.TXT,
• DISTRIBUTIONA_SERVER_FARM_2960S-TS_STAR_TEMPLATE.TXT
• DISTRIBUTIONB_SERVER_FARM_2960S-TS_STAR_TEMPLATE.TXT


Note

Up to four (4) PIIC Deployment Database Servers may be connected to each Distribution
Switch in the pair of Distribution Switches. If you need to support more than eight (8) PIIC
Deployment Database Servers, then you must add another Distribution Switch pair. Because
a Server Farm requires gigabit connectivity, a second Server Farm Distribution Switch pair
requires an ANDI diagram. Refer to the ANDI Reference Guide for guidelines on gigabit
connectivity, VLANs, and connecting Database Servers.

Supported Network Infrastructure


The network switches and routers you may use to connect to a PIIC Deployment
Server Farm within a star topology are listed in Table 5-4.

Table 5-4: Switches and Routers Supported for Use in a Server Farm

Device                 Supported Models                 Notes

Router/Core Switch     Cisco 3750V2-24                  Routers are implemented in pairs
                       Cisco 3750G-12                   and must be the same model.
                       Cisco 3560

Server Farm            Cisco 2960TC                     Distribution Switches are
Distribution Switch    Cisco 2960-S                     implemented in pairs and must be
                       Cisco 3750-12                    the same model.

SFP Module             All Cisco Gigabit SFP Modules    Gigabit fiber and copper only;
                       supported by Philips for the     100FX not supported.
                       Cisco 3750, 3560, and 2960
                       switches.

Interconnecting Switches and Routers to Support a Server Farm


The methods available to interconnect the server farm subsystems are determined by the
gigabit ports available on the particular model of supported network switches and routers you
use, and on the SFP modules you choose to use. Some possible interconnection methods are
shown in Figure 5-2.


Figure 5-2: Interconnecting Switches/Routers to Support a Server Farm


Configuring Distribution Switches to Support a Server Farm


Table 5-5 lists the settings required to configure a Distribution Switch to support a server
farm.

Table 5-5: Distribution Switch Configuration to Support a Server Farm

Configuration Element     Setting                                   Notes

Spanning Tree             mode = rapid-pvst
                          no spanning-tree optimize bpdu
                            transmission
                          extend system-id
                          vlan 1,101-122 priority 12288             VLAN balancing; alternate STP
                          vlan 123-124 priority 16384               priority values for second
                          no spanning-tree portfast default         Distribution Switch.

Device name                                                         Specify as Server Farm
                                                                    Distribution Switch A/B.

Access interfaces 1-24    Ports 1-4, set as needed; all ports       Set as needed for all defined
                          should be access ports, with a single     VLANs on which database servers
                          VLAN assigned to each as dictated         will reside.
                          by design.
                          Fix speed and duplex to 100F.
                          spanning-tree portfast enabled
                          Disable ports 5-24 as a possible
                          security measure.

Uplink interfaces Gbit    Media type SFP or rj45                    Connections to routers may only
                          Full duplex                               be SFP; no copper SFP supported
                          Mode trunk                                in routers.
                          Allow all PIIC Deployment VLANs,          Use of the integrated (dual
                          including those assigned to access        personality) copper gig port is
                          ports on the second Distribution          permissible between distribution
                          Switch.                                   switches.

Management VLAN1          Assign IP address and mask
                          No shutdown
                          IP route cache enable

Other Settings            no service pad
                          service timestamps debug uptime
                          service timestamps log uptime
                          service password-encryption
                          hostname DistributionA_2960TC_Star
                          enable secret m3150e
                          enable password m3150
                          ip subnet-zero
                          no ip igmp snooping
                          no ip igmp filter
                          vtp domain philips
                          vtp mode transparent
                          Banner: "Warning: Access to this
                          system is restricted to authorized
                          personnel only. Unauthorized access
                          will be prosecuted."
                          line con 0
                          exec-timeout 2 0
                          line vty 0 4
                          password m3150
                          login
                          line vty 5 15
                          password m3150
                          login


Sample Distribution Switch Configuration File: Server Farm Using Cisco


In the following example, the file is for the first distribution switch A, and defines VLANs 1
and 101-108. Although in most cases only VLANs 101-104 will use ports on the switch, data
from all VLANs (including management VLAN 1) needs to be defined on the switch in the
event of a link failure and a spanning tree re-route. This is also true for VLAN definitions on
the gigabit ports for each Distribution Switch. Also, this configuration assumes that the
gigabit connections to the routers use fiber and that the connections between the distribution
switches use copper. Note that this file is included for example purposes only and may not
match any supplied configuration files or files that you are using.

Note

Extra blank “!” lines are needed after the “media-type” to create some delay before the next
(duplex) command is executed. It has been observed on the Cisco 2960TC that the media
type command needs a delay before the switch can accept the next command, otherwise the
command following the media type command will fail. As shown, this command line string
works with a line delay setting of 100mS.

! ************************************************************
! Configuration File for Server Farm Distribution Switch "A"
! Cisco 2960TC
! DISTRIBUTIONA_SERVER_FARM_2960TC_STAR.TXT
! Ver. B.2 25-May-2011
!
! Vlans 1,101-124 are defined
! Sets VLANs 101, 103, 105, 107 on ports 1-4
! Shuts down ports 5-24
! Gig port 1 is configured for 1000Fx SFP (to router)
! Gig port 2 is configured for embedded copper (to dist sw B)
! Mgmnt VLAN IP is 172.31.200.20 255.255.255.0
! ************************************************************
!
config t
!
no service pad
service timestamps debug uptime
service timestamps log uptime
service password-encryption
!
hostname Server_Farm_Distribution_Switch_A
!
enable secret m3150e
enable password m3150
!
ip subnet-zero
!
no ip igmp snooping
no ip igmp filter
vtp domain philips
vtp mode transparent
!
spanning-tree mode rapid-pvst
no spanning-tree optimize bpdu transmission
spanning-tree extend system-id


spanning-tree vlan 1,101-122 priority 12288


spanning-tree vlan 123-124 priority 16384
no spanning-tree portfast default
!
vlan 1,101-124
!
! Edit VLAN numbers in the interface settings if values other than those
! shown are to be used
!
interface FastEthernet0/1
description Database Server Port
switchport mode access
speed 100
duplex full
switchport access vlan 101
spanning-tree portfast
exit
!
interface FastEthernet0/2
description Database Server Port
switchport mode access
speed 100
duplex full
switchport access vlan 103
spanning-tree portfast
exit
!
interface FastEthernet0/3
description Database Server Port
switchport mode access
speed 100
duplex full
switchport access vlan 105
spanning-tree portfast
exit
!
interface FastEthernet0/4
description Database Server Port
switchport mode access
speed 100
duplex full
switchport access vlan 107
spanning-tree portfast
exit
!
int range fa 0/5-24
shut
exit
!
interface GigabitEthernet0/1
description Fiber Uplink Port to RouterA
switchport trunk allowed vlan 1,101-108
switchport mode trunk
media-type sfp
! [DO NOT REMOVE THIS LINE]
! [DO NOT REMOVE THIS LINE]
! [DO NOT REMOVE THIS LINE]
! [DO NOT REMOVE THIS LINE]
! [DO NOT REMOVE THIS LINE]
! [DO NOT REMOVE THIS LINE]
exit


!
interface GigabitEthernet0/2
description UTP Uplink Port to Distribution Switch B
switchport trunk allowed vlan 1,101-108
switchport mode trunk
media-type rj45
speed 1000
duplex full
exit
! Uncomment the following section if you will be using a fiber link
! between the two distribution switches
!
! interface GigabitEthernet0/2
! media-type sfp
!
! Edit Vlan1 IP address if values other than those shown are to be used
!
interface Vlan1
ip address 172.31.200.20 255.255.255.0
no ip route-cache
no shutdown
!
exit
!
ip default-gateway 172.31.200.1
ip http server
!
snmp-server community public ro
!
banner motd #
Warning: Access to this system is restricted to authorized personnel
only. Unauthorized access will be prosecuted.
#
!
line con 0
exec-timeout 2 0
line vty 0 4
password m3150
login
line vty 5 15
password m3150
login
!
exit
!
exit
!
wri mem
!


Settings in Sample Configuration File: Server Farm Using Cisco

Note the following about the settings in the sample configuration file:

• VLANs 1,101-124 are defined


• VLANs 101, 103, 105, 107 are configured for ports 1-4 respectively
• Ports 5-24 are shut down
• Gig port 1 is configured for 1000Fx SFP (to router)
• Gig port 2 is configured for embedded copper (to dist sw B)
• The management VLAN IP is 172.31.200.20 255.255.255.0
• Although in normal cases, only VLANs 101-108 will use ports on the distribution
switches, data from all VLANs (including mgmnt VLAN 1) needs to be defined on the
switch in the event of a link failure and a spanning tree re-route. This is also true for
VLAN definitions on the gig ports on each dist switch.
• Extra blank “!” lines are needed after the “media-type” to create some delay before the
next (duplex) command is executed. It has been observed on the Cisco 2960TC that the
media type command needs a delay before the switch can accept the next command,
otherwise the command following the media type command will fail. As shown, this
command line string works with a line delay setting of 100mS in Hyperterminal.

In the corresponding server farm switch “B,” VLANs 102, 104, 106, 108 are configured for
ports 1-4 respectively. The presumption is that DBSs will be added alternately to each switch
as the PIIC Deployment VLANs increment. This provides some load balancing across the
distribution switches. As such, the difference in the number of DBSs on each of the
Distribution Switches should not be greater than one.
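For example, port 1 in the switch "B" configuration file would differ from the switch "A" file shown above only in its VLAN assignment. This is a sketch based on the switch A listing; the supplied DISTRIBUTIONB file may differ in other details.

```
interface FastEthernet0/1
 description Database Server Port
 switchport mode access
 speed 100
 duplex full
 switchport access vlan 102
 spanning-tree portfast
```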

The maximum number of DBSs on the server farm (total across both switches) is eight.

The following configuration attributes may be edited in the configuration file as needed:

• The mapping of VLANs to interfaces. Only one VLAN should be assigned to each port to
which a DBS is connected.
• The management VLAN IP and subnet mask
• The media type for gig port 2. If fiber SFPs are used to interconnect the distribution
switches, the port media type must be changed. The speed must be kept at 1000.

Example of Router Configuration for Use in a Server Farm


The following example indicates the router configuration changes that you must make to the
basic Star topology configuration in order to support a server farm. (Note that this is presented
for example purposes only and the specific interface value will vary.)
interface GigabitEthernet0/2
description SFP to serverfarm dist B
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1,101-124
switchport mode trunk
no channel-group 1
speed nonegotiate
exit
no int po1
exit
wri mem


Required Settings for Switch/Router Configuration in Support of a Server Farm

Table 5-6 lists the settings required to configure a Router to support a server farm.

Table 5-6: Core Switch/Router Configuration to Support a Server Farm

Configuration Element     Setting                                   Notes

Spanning Tree             mode = rapid-pvst
                          no spanning-tree optimize bpdu
                            transmission
                          extend system-id
                          vlan 1,101-122 priority 4096              VLAN balancing; alternate STP
                          vlan 123-124 priority 8192                priority values for second Router.
                          no spanning-tree portfast default

Device name                                                         Specify as Router A/B.

Uplink interfaces                                                   As required for connected devices.

Uplink interfaces Gbit    Media type SFP                            With the exception of the 3750-12,
                          Full duplex                               no copper SFP supported in
                          Mode trunk                                routers; fiber SFP must be used.
                          Allow all VLANs, including those          Consider only allowing VLANs
                          assigned to access ports on the           101-108 on these ports (or the ICN
                          second Distribution Switch.               VLAN numbers used by DBSs on
                          Encapsulation dot1q                       the farm).
                          channel-group 1 mode on

Management VLAN1          Assign IP address and mask
                          No shutdown
                          IP route cache enable

Deployment VLANs                                                    As required for connected devices.

Other Settings            no service pad
                          service timestamps debug uptime
                          service timestamps log uptime
                          service password-encryption
                          hostname RouterA_3750_Star
                          enable secret m3150e
                          enable password m3150
                          ip subnet-zero
                          no ip igmp snooping
                          no ip igmp filter
                          vtp domain philips
                          vtp mode transparent
                          Banner: "Warning: Access to this
                          system is restricted to authorized
                          personnel only. Unauthorized access
                          will be prosecuted."
                          line con 0
                          exec-timeout 2 0
                          line vty 0 4
                          password m3150
                          login
                          line vty 5 15
                          password m3150
                          login

The following files are pre-loaded into the router flash memory and provided on the Network
Tools Suite to configure a router to support a server farm.

You may use the following star topology Core Switch/Router configuration files:

• ROUTERA_3560_STAR.TXT
• ROUTERB_3560_STAR.TXT
• ROUTERA_3750_STAR.TXT
• ROUTERB_3750_STAR.TXT

After you have configured the 3560 Router or 3750 Router to support a server farm by
loading the appropriate configuration file, you must take the additional step of disabling port
channeling on the router’s gigabit uplink ports. Use one of the following files to do this (these
files are also available in the Network Tools Suite).

• ROUTER_3560_STAR_SERVER_FARM_PREP.TXT
• ROUTER_3750_STAR_SERVER_FARM_PREP.TXT


Each of these files disables port channeling on the router’s gigabit uplink ports. Note that you
must disable port channeling on both the A and B Core Switches within the server farm
topology.

Configuring the Cisco 3750 as a Layer 2 Switch in a Routed Topology


In some network installations where the system components are particularly dispersed, more
than two fiber ports may be needed to interconnect network switches and routers. For such
installations, you may configure the Cisco 3750 as a Layer 2 switch for use on your IntelliVue
Deployment Network or IntelliVue Telemetry System wireless subnet using the configuration
files supplied on the Network Tools Suite.

Note

When you deploy Cisco 3750s as Layer 2 switches in a routed topology, you must also use
Cisco 3750s as the routers to which the L2 switches connect. Configuring the Cisco 3750 as
a Layer 2 Switch provides the additional benefit of eliminating the use of media converters
to connect multiple Cisco 3750 Layer 2 Switches to Cisco 3750 Routers since both device
types have 24 100FX MRTJ fiber ports and two Gigabit Ethernet SFP ports. Note that the
Cisco 3750 will not provide routing functions when configured as a Layer 2 switch.

The Cisco WS-3750-24FS front-panel ports 1/0/1 through 1/0/24 support multi-mode fiber
connections only. If other fiber modalities are required, the Cisco WS-C3750V2-24FS with
24 SFP ports may be used. In this case, the purchase of additional non-multimode SFPs may
be required.
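
The supplied configuration files perform the Layer 2 conversion for you. Conceptually, the key steps they apply are of the following form (a sketch only, with assumed interface names; always load the Philips-supplied file rather than typing this by hand):

! Sketch only - disables routing and places ports in Layer 2 access mode
no ip routing
!
interface range FastEthernet1/0/1 - 24
 switchport
 switchport mode access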


Using a Pair of Cisco 3750 24-Port Switches as Distribution Switches in a Star Topology
The main benefit of using a pair of Cisco 3750 24-port switches as Distribution Switches in a
star topology is that it enables customers who have large systems to expand their network
footprint by supporting long distance fiber connections to Access Switches.

Figure 5-3: Using a Pair of Cisco 3750 24-port Switches as L2 Distribution Switches in a Star Topology

[Figure not reproduced: it shows ROUTER-A and ROUTER-B (Cisco 3750, 172.31.200.2/.3) with trunked uplinks to two pairs of Cisco 3750 Distribution Switches (STAR-DIST1A/1B at 172.31.200.10/.11 and STAR-DIST2A/2B at 172.31.200.12/.13), which in turn provide trunked Gigabit uplinks to Cisco 2960 Access Switches at 172.31.200.21–.23 and 172.31.200.31–.33.]

Network Switch Port Requirements for End Devices in PIIC Networks


Table 5-7 lists and characterizes like-class end devices that may be deployed on the Network.
The devices are categorized in terms of their required connection speed and duplex. Port
speed and duplex matching within the Network must be carefully managed. While the
network is largely flexible in terms of possible topologies and implementations, extreme
caution must be taken to ensure there are no unsupported speeds or duplex mismatches.

For each device listed in Table 5-7, the required port configuration on the host switch is
given. These rules must be met and supersede any and all other inferences in this document.
Additionally, supported topologies may be further limited by a low density switch that may
have only auto negotiate ports. Operational connection speed is the speed and duplex at which
the link operates under the specified NIC and switch port settings.

Note that in some instances, connection of a hard-configured device to an auto-negotiation
port is supported. In these cases, the devices have been tested with auto-negotiation switch
ports and the resulting negotiation (usually a fall back to the lowest speed and duplex) is
predictable and supported. As a general rule however, it is best to avoid connection of a
device with fixed duplex and speed settings to a switch port configured for auto-negotiate if
another alternative is available.
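
For a device that Table 5-7 lists as 100/Full, the matching hard-configured switch port would look like the following sketch (the interface name and description are assumptions for illustration):

interface FastEthernet0/1
 description 100/Full end device per Table 5-7
 speed 100
 duplex full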


Table 5-7: Network Switch Port Requirements for End Devices in PIIC Networks
(Star Topology)

Star Topology Rules

Device                      NIC Setting      Port Setting     Allowed Switch          Comments
                            (Speed/Duplex)   (Speed/Duplex)   Level Connection
Database Server (DBS)       100/Full         100/Full         Distribution or Access
Database Server (DBS)       Autonegotiate    Autonegotiate    Distribution or Access  The auto-negotiate NIC and port
w/G5 Interface Connected                                                              settings apply only in the case
to Cisco 2950 Switch                                                                  where a DBS w/G5 interface is
                                                                                      connected to a Cisco 2950 Switch.
Small DBS                   100/Full         100/Full         Distribution or Access
Application Server (APS)    100/Full         100/Full         Distribution or Access
ITS APC*                    100/Full         100/Full         Distribution or Access
Information Center (IIC)    100/Full         100/Full         Distribution or Access
ITS AP**                    100/Full         100/Full         Distribution or Access
Network Printer             10/Half          10/Half or Auto  Distribution or Access
Bedside Monitor             10/Half          10/Half or Auto  Distribution or Access
802.11 Symbol               Auto-Negotiate   Auto-Negotiate   Distribution or Access  Should negotiate to 100/Full
Wireless Switch***
802.11 Symbol AP            Auto-Negotiate   Auto-Negotiate   Connects directly to WS Should negotiate to 100/Full
IntelliVue XDS PC           Auto-Negotiate   Auto-Negotiate   Distribution or Access  Should negotiate to 100/Full


Extension Switches in Cisco Systems to Support Fixed Mode Monitoring in PIIC Networks

Extension Switches in a Non-Routed Star Topology


Figure 5-4 represents a non-routed star network topology using Cisco switches to support
fixed mode monitoring.

Figure 5-4: Non-Routed Star Uplink Port and IP Address / Subnet Mask
Assignments
Note

Extension switches are needed only for Cisco Star topologies. No Extension switches are
needed with HP implementations.

Default gateway example: 172.31.3.0 (address of DBS in PIIC Deployment 1)

Extension switches (Fixed mode monitoring switches)


• Use Access_2960_Extension_Switch.TXT template
• All host ports 1-24 are configured as access ports for VLAN1
• Uplink port (G0/1) is configured as an access port for VLAN1

Note

A maximum of two Extension switches are allowed in a small non-routed three-Access-switch
Star topology. If additional Extension switches are needed, a Distribution layer is required.


Access Switches
• Use ACCESS_2960(TC/TC)_TEMPLATE.TXT
• All host ports 1-22 are configured as access ports for PIIC Deployment VLAN (101 in this
example)
• Uplink ports 22 and 23 (connected to Extension Switches) are configured as access ports
for PIIC Deployment VLAN (101).
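
Based on the bullets above, an Access Switch port uplinking to an Extension Switch might be configured as in the following sketch (the port number shown is an assumption for illustration):

interface FastEthernet0/23
 description uplink to Extension Switch
 switchport access vlan 101
 switchport mode access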

Extension Switches in a Routed Star Topology

Figure 5-5: Fixed Mode Monitoring Routed Star Uplink Port and IP
Address / Subnet Mask Assignments
Note

Extension switches are needed only for Cisco Star topologies. No extension switches are
needed for HP implementations.


Extension Switches (Fixed Mode Monitoring Switches)

• The IP address, Subnet Mask, and Default Gateway must reside in the address range of the
specific ICN # (extension switches are shown in Figure 5-5) to pass system validation. All
other switches (Access, Distribution, and Core) follow the VLAN 1 Management IP and SM
scheme.
• A maximum of 2 Extension switches can be uplinked to an Access Switch
• Switch IP Address and SM must reside in the address range of the PIIC Deployment
(e.g. PIIC Deployment 1(vlan 101): 172.31.0.10 255.255.248.0)
• Default Gateway must point to DBS or Standalone IIC for that PIIC Deployment
(e.g. PIIC Deployment 1 DBS-172.31.3.0)
• Use Access_2960_Extension_Switch.TXT template
• All host ports 1-24 are configured as access ports for VLAN1
• Uplink port (G0/1) is configured as an access port for VLAN1
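
Using the example addresses given above, the management-addressing portion of an Extension Switch configuration might look like the following sketch (the values are the PIIC Deployment 1 examples from this section):

interface Vlan1
 ip address 172.31.0.10 255.255.248.0
!
ip default-gateway 172.31.3.0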

Access Switches

• Use ACCESS_2960(TC/TC)_TEMPLATE.TXT
• All host ports 1-22 are configured as access ports for PIIC Deployment VLAN (PIIC
Deployment1 VLAN 101 in this example)
• All host ports 1-22 can be configured as access ports for ITS VLAN # (124)
• Host Ports can be divided as access ports for PIIC Deployment and ITS
• Uplink ports 22 and 23 (connected to Extension Switches) are configured as access ports
for PIIC Deployment VLAN (e.g. 101)
• Uplink ports G0/1 (connected to Distribution Switches) are configured as Trunk ports for
VLANS 1, 101-124

Distribution Switches

• Use DISTRIBUTION(A&B)_CISCO_2960(TC/TT)_TEMPLATE.TXT
• All host ports 1-18 are configured as access ports for 2 PIIC Deployment VLANs # (e.g.
101, 102)
• All host ports 1-18 are configured as access ports for ITS VLAN # (124)
• All host ports 1-18 divided as access ports for 2 PIIC Deployment VLAN # (e.g. 101, 102)
and ITS # (124)
• Uplink ports 19-24 connected to Access Switches are configured as Trunk ports for
VLANS 1,101-124
• Uplink port G0/2, connecting Distribution Switches A and B together, is configured as a
Trunk port for VLANS 1,101-124
• Uplink port G0/1 connecting to appropriate Router (A/B) configured as Trunk ports for
VLANS 1, 101-124
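
A Distribution Switch uplink trunk described above might be configured as in the following sketch (the interface name and description are assumptions for illustration):

interface GigabitEthernet0/1
 description uplink to Router A
 switchport mode trunk
 switchport trunk allowed vlan 1,101-124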


Core Routers

• Routers are preconfigured at the factory (Router A default)


• All ports 1-24 are trunk ports (VLAN 1,101-124) for link to Distribution and Access
Switches
• Port G0/1 and G0/2 are trunk ports (VLAN 1,101-124) redundant link between Routers
A&B (ether-channel)

You may use Cisco or HP switches in a star topology. If a Cisco Switch is used at the Core,
only Cisco switches may be used for Distribution or Access layers. If an HP Switch is used at
the Core, HP must be used at the Distribution layer, but HP or Cisco may be used at the
Access layer.

Note

Use the star topology configuration file templates supplied on the Network Infrastructure
Tools Suite to properly configure each network switch on your network for its function
(Router/Core, Distribution, or Access). Refer to the Advanced Networking Design and
Implementation Guidelines for Star Topologies for details on HP network implementations.

Access Switch Settings for PIIC Networks

The Access Switch to which the Fixed Mode Bedside Switch is connected must have a port
provisioned for that connection. Access Switches typically have all their ports configured for
end devices, not other switches. For that reason, the following settings must be configured for
the Access Switch port to which the Fixed Mode Bedside Switch will be connected. The
initial configuration state is presumed to be that of a star topology Access Switch with no
“accesssetting.txt” file executed.

The port must be assigned to the VLAN that the Fixed Mode Bedside Switch beds are to be
part of:

>switchport access vlan <number>


Portfast must be disabled on the port:

>interface <port number>
>spanning-tree portfast disable


A description should be configured for the port such as “connection to extension switch”

Example: Uplink Port Configuration Settings on Access Switch

!
interface FastEthernet0/24
description uplink port to Fixed Mode Bedside SW
switchport access vlan 101
switchport mode access
speed 100
duplex full
spanning-tree portfast disable
!


Fixed Mode Bedside Switch Settings

The following settings must be configured for the Fixed Mode Bedside Switch:

• All ports must be assigned to VLAN 1.


• The switch should be given an IP address in the valid range.
• The Default Gateway should be set.
• Gig port 1 should be used for connection to the Access Switch, and it should be configured
for 100F and given an appropriate description.
• All ports 1-24 should be set for 10H, and given an appropriate description (for example,
fixed monitoring bedside port).
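
Taken together, these settings might look like the following sketch on a Cisco-style bedside switch (interface names and descriptions are assumptions for illustration):

interface GigabitEthernet0/1
 description uplink to Access Switch
 speed 100
 duplex full
!
interface range FastEthernet0/1 - 24
 description fixed monitoring bedside port
 speed 10
 duplex half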


PIIC iX Implementation Specifics

Overview
This chapter describes the details that are specific to the implementation of a Star
Topology for PIIC iX deployments and includes the following topics:

• Flexible IP Addressing in PIIC iX Deployments


• IP Address Assignments for PIIC iX Non-routed Deployments
• IP Address Assignments for Routed PIIC iX Deployments

Flexible IP Addressing in PIIC iX Deployments


One of the requirements for PIIC iX deployments is to allow for a flexible IP
addressing scheme. This means that the person setting up the network, the customer
(biomed), and hospital IT must collaborate in developing an addressing scheme that
takes the following factors into account:

• Number of systems being deployed


• Physical location of the devices
• Future growth in IP address requirements in each location
• Uniqueness of IP addressing chosen in the entire hospital if the site is
one that will directly connect to a customer’s HLAN.

For turnkey PIIC iX deployments, you will need to map the IP address/subnets to
VLANs and do so in a way that can be replicated from one customer to another. The
IP address subnet size and scheme will generally depend on whether the system is a
routed or non-routed solution, the size of the system, the existence of telemetry, and
the supporting network infrastructure requirements.


IP Address Assignments for PIIC iX Non-routed Deployments


A Non-routed PIIC iX deployment implies that the system is being deployed with a network
infrastructure that does not support routing between subnets. For a non-routed solution, these
systems are supported in Star or Ring Topologies that do not include any routing capabilities.

Because they are standalone, Ring Topologies do not have the concept of VLANs (i.e. all
ports in a non-routed Ring are usually on the same default VLAN 1). For a non-routed system
to communicate with external applications outside of its subnet, interfacing to the HLAN is
required. This can be achieved by either using a Layer 2 solution or via a Gateway device.
Careful IP address planning and collaboration with the HLAN IT department is still necessary
when choosing which IP Scheme to use to avoid complex NAT rules.

• The PIIC addressing schemes and VLAN scheme can be used for PIIC iX deployments if
the customer will not exceed a 128-bed system and plans to reuse existing switches.
Systems larger than 128 beds will not support Non-routed configurations.

Table 6-1 provides the PIIC iX values for a Non-routed (Layer 2 network) deployment with
Non-routed ITS (the ITS infrastructure is included in the subnet).

Table 6-1: Non-routed PIIC iX with Non-routed Smart-hopping Network

                                           Old Scheme         New Scheme
PIIC iX Device            Beds  Systems    /21   101-122      /24   /23   /22   201-228
Surveillance /
Local Database PIIC iX    32    8          Yes   Yes          Yes   No    No    Yes
Small Server iX           64    8          Yes   Yes          No    Yes   No    Yes
Large Server iX           128   60         Yes   Yes          No    No    Yes   Yes
128 - 1024 IPM            1024  160        N/A   N/A          N/A   N/A   N/A   N/A

See Appendix B for additional IP addressing rules regarding non-routed PIIC iX with a non-
routed smart-hopping network.


PIIC iX VLAN Schemes


New PIIC iX VLANs will range from 201 – 222. VLAN 200 is reserved for future use.
As described in Chapter 5, VLANs 101 – 125 will continue to be used for PIIC deployments.
This also ensures that PIIC and PIIC iX deployments can coexist, because there is a clear
difference in IP addressing and VLANs and no additional rework is necessary.

Note

PIIC iX VLANs 201 - 222 are commented out in the routed template. To add these VLANs,
uncomment the required VLANs. There is a Cisco limit of 32 HSRP groups. The default
router configuration uses VLANs 1, 101-124 for a total of 25 HSRP groups. This leaves 7
HSRP groups that can be added (uncommented) to the routed configuration. If more than 7
VLANs are needed, then any unused VLANs will need to be removed (commented out)
from the router configuration.

The following provides a partial example of the default router configuration.

! **********************************************************************
! Interface Configuration for VLANs 201-222 are left commented out due
! to the restriction of 32 HSRP groups. Please uncomment as needed but
! do not exceed 32 HSRP groups.
! **********************************************************************
!interface Vlan201
! description iX DBSD #1 - iX Wired Subnet
! ip address 172.21.201.2 255.255.255.0
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
! standby 201 ip 172.21.201.1
! standby 201 timers 1 3
! standby 201 priority 110
! standby 201 preempt


!!
!interface Vlan202
! description iX DBSD #2 - iX Wired Subnet
! ip address 172.21.202.2 255.255.255.0
! ip pim sparse-dense-mode
! ip mroute-cache distributed
! standby 202 ip 172.21.202.1
! standby 202 timers 1 3
! standby 202 priority 110
! standby 202 preempt
!!
!interface Vlan203
! description iX DBSD #3 - iX Wired Subnet
! ip address 172.21.203.2 255.255.255.0
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
! standby 203 ip 172.21.203.1
! standby 203 timers 1 3
! standby 203 priority 110
! standby 203 preempt

The following example illustrates how to add VLANs 201-203. Note that VLANs 117-122
are not in use and are commented out.

!interface Vlan117
! description ICN #17 - Wired Subnet
! ip address 172.31.128.2 255.255.248.0
! no ip redirects
! ip pim sparse-dense-mode


! ip mroute-cache distributed
! standby ip 172.31.128.1
! standby timers 1 3
! standby priority 110
! standby preempt

!!
! interface Vlan118
! description ICN #18 - Wired Subnet
! ip address 172.31.136.2 255.255.248.0
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
! standby ip 172.31.136.1
! standby timers 1 3
! standby priority 110
! standby preempt
!!
! interface Vlan119
! description ICN #19 - Wired Subnet
! ip address 172.31.144.2 255.255.248.0
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
! standby ip 172.31.144.1
! standby timers 1 3
! standby priority 110
! standby preempt

! interface Vlan120


! description ICN #20 - Wired Subnet
! ip address 172.31.152.2 255.255.248.0
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
! standby ip 172.31.152.1
! standby timers 1 3
! standby priority 110
! standby preempt

!!
! interface Vlan121
! description ICN #21 - Wired Subnet
! ip address 172.31.160.2 255.255.248.0
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
! standby ip 172.31.160.1
! standby timers 1 3
! standby priority 110
! standby preempt
!!
! interface Vlan122
! description ICN #22 - Wired Subnet
! ip address 172.31.168.2 255.255.248.0
! no ip redirects
! ip pim sparse-dense-mode
! ip mroute-cache distributed
! standby ip 172.31.168.1
! standby timers 1 3


! standby priority 110
! standby preempt
!!

interface Vlan201
 description iX DBSD #1 - iX Wired Subnet
 ip address 172.21.201.2 255.255.255.0
 no ip redirects
 ip pim sparse-dense-mode
 ip mroute-cache distributed
 standby 201 ip 172.21.201.1
 standby 201 timers 1 3
 standby 201 priority 110
 standby 201 preempt
interface Vlan202
 description iX DBSD #2 - iX Wired Subnet
 ip address 172.21.202.2 255.255.255.0
 ip pim sparse-dense-mode
 ip mroute-cache distributed
 standby 202 ip 172.21.202.1
 standby 202 timers 1 3
 standby 202 priority 110
 standby 202 preempt
interface Vlan203
 description iX DBSD #3 - iX Wired Subnet
 ip address 172.21.203.2 255.255.255.0
 no ip redirects
 ip pim sparse-dense-mode


 ip mroute-cache distributed
 standby 203 ip 172.21.203.1
 standby 203 timers 1 3
 standby 203 priority 110
 standby 203 preempt

Care must be taken in the new PIIC iX VLAN scheme because the Clinical domain can span
multiple subnets and/or VLANs. This means that better Layer 3 interface descriptions as well
as VLAN descriptions are necessary in order to correctly identify all subnets or VLANs
belonging to the same Deployment.

If you make a decision to re-use an existing VLAN/IP address scheme for a PIIC iX, then you
are bound by the same system size rules that apply to PIIC.

DHCP and DNS


Surveillance PIIC iX, Overview PIIC iX, Primary Server iX, and Physiological Server iX can
be assigned static IP addresses or use a Dynamic Host Configuration Protocol (DHCP) server
to obtain an IP address. If the DHCP method is used, only the customer-provided DHCP
server can be used.

Philips bedside devices require DHCP to obtain an IP address. DHCP must be provided by a
PIIC iX DHCP server or customer-provided DHCP server. Name resolution is required by
PIIC iX PCs and Servers. For PIIC iX PCs and Servers residing in the same subnet, names
can be resolved via NetBIOS or Domain Name System (DNS). For PIIC iX PCs and Servers
residing in different subnets, DNS name resolution must be provided. PIIC iX PCs and
Servers may also need to resolve hospital information system server names.

One approach to limiting the need for DNS server name resolution would be to put the PIIC
iX servers, Surveillance, and Overview PCs in the same subnet. In this case, NetBIOS could
be used for name resolution, eliminating the need for DNS name resolution. However, this
approach does not address the possible name resolution needs of external interfaces, such as
HL7, IEM, etc.

Note

An IP Helper must be used for DHCP server solutions that require a DHCP server to provide
an address to one or more VLANs other than the VLAN containing the DHCP server. In
these cases, an IP helper must be configured on each VLAN requiring DHCP-provided
addresses, but not containing a DHCP server. The IP helper identifies the IP address of the
DHCP server expected to reply to DHCP IP address requests.


VLAN IP Helper Configuration Example

interface Vlan110
 description PIIC iX Domain 1 - Subnet 2
 ip address 172.31.72.1 255.255.248.0
 ip helper-address 172.31.67.0
 ip pim sparse-dense-mode
!

Note

IP addresses for Smart-hopping Patient Worn Devices (PWDs) and Patient Worn Monitors
(PWMs) are provided by the Philips Access Point Controller (APC) BOOTP/DHCP server.

IP Address Assignments for Routed PIIC iX Deployments

Routing Information Exchange


Since PIIC iX systems are not dual-homed and do not have a defined subnet/interface in the
HLAN, all traffic from the clinical unit is routed to the HLAN using the monitoring LAN IP
address. This means that the IP address chosen should be unique in the Hospital if you do not
want to NAT all outbound traffic.

You should carefully choose an IP address block that:

• Is unique in the customer’s environment
• Allows for enough IP addresses
• Can be summarized into a larger Classless Inter-Domain Routing (CIDR) block for routing
protocol distribution purposes.

Routed PIIC iX with Routed ITS


Routed PIIC iX implies that the system is being deployed with a network infrastructure that
supports routing between subnets. For a routed PSCN solution, there are two routers in either
a Star or Ring Topology.

Since PIIC iX has routing capabilities, the default network will be a /24 for customers looking
to deploy a system size greater than 128 bedsides. Customers can also choose this subnet size
for smaller deployments (less than 128 bed systems) that require a flexible IP address scheme
due to HLAN interfacing requirements or to avoid using NAT.

The standard PIIC IP addressing and VLAN scheme can be used if the customer will not
exceed a 128-bed system and is planning on re-using the routers and switches currently in
place. Please note that if external interfacing is needed (HLAN -> PSCN), then the IP
addresses will either have to be unique or NAT will be necessary.


Table 6-2: Routed PIIC iX with Routed Smart-Hopping

                                                 Old Scheme        New Flex Scheme
PIIC iX System          Beds  Systems  Total   /21   101 - 122   /24   /23   /22   201 - 228
Local Database PIIC iX  32    8        139     Yes   Yes         Yes   No    No    Yes
Small Server iX         64    8        243     Yes   Yes         Yes   No    No    Yes
Large Server iX         128   60       510     Yes   Yes         Yes   No    No    Yes
128 - 1024 Bed          1024  160      4096    No    No          Yes   No    No    Yes
See Appendix B for additional IP addressing tables for routed PIIC iX with routed smart-
hopping.


PIIC iX Deployment for Routed Star Topology Overview and Example


An example of a Routed Star Topology for a PIIC iX deployment with >128 bedside devices
is presented on the following page. Be aware of the following points when designing a Routed
Star topology for a PIIC iX deployment:

• The total system size for each Distribution/Access layer is 446 devices. If you need
more than 446 devices but fewer than 512 bedsides, you will need to add a second
Distribution layer with several Access layer switches. The system cannot exceed 512
bedsides.
• For systems larger than 512 bedsides, you must use Gigabit uplinks for all uplinks. In
order to meet this requirement, you may need to add new routers that support Gigabit
interfaces.
• Use the Cisco Catalyst 2960-S switch for all Distribution switches. This switch supports
all Gigabit interfaces (24 copper and 2 SFP). See the Cisco Catalyst 2960-S Installation
and Service manual (453564327661) for details.
• Connection from the 2960-S switch to the Routers is done on the SFP port using
1000 FX GBIC
• All Primary Server iX and Physiological Server iX servers are located in the Distribution
switches
• Surveillance PIIC iX systems and Clients are located in the Access layer
• PIIC iX systems for a large installation are located in the same Distribution/Access layer
• Remote bedside devices can be located in a different Distribution/Access layer than the
PIIC iX systems.
• Access layer switches must have Gigabit uplinks to the Distribution layer switches
• Up to 18 Access switches are allowed for each Distribution pair
• Up to 10 Surveillance PIIC iX systems, or Clients, or Physiological Server iX systems are
allowed for each Access layer switch.
• All uplink ports must be trunked.
• All trunks must pass all PIIC iX VLANs as well as the ITS and Management VLAN (200-
220, 124)
• IP addresses on the router for VLANs 201-220 will default to /24s and are unique to the
Customer network if possible to avoid NAT. However, before using NAT be sure to
consider the potential impact to current and future Hospital Information System
interfacing.
• All PIIC iX systems are located on the first /24 subnet. Bedside devices are allocated to
/24 subnets as needed.
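
For example, an Access-to-Distribution uplink trunk meeting the VLAN rules above might be configured as in the following sketch (the interface name is an assumption for illustration):

interface GigabitEthernet1/0/1
 description uplink to Distribution Switch
 switchport mode trunk
 switchport trunk allowed vlan 124,200-220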


PIIC iX Fixed Mode Monitoring Implementation


Fixed Mode Monitoring is dependent on the LLDP-MED switch feature. For Fixed Mode
Monitoring to function properly, the switch used to connect the IntelliVue Patient Monitor
must support LLDP-MED. The following table lists the Philips switch products and their
LLDP-MED support.

Note

Access and distribution level switch ports can be used for Fixed Mode monitoring;
therefore, both are listed in Table 6-3.

Table 6-3: PIIC and PIIC iX Co-Existence Within Supported Topologies

Philips Product Number /
Support Part Number      Manufacturer      Description                              Firmware Released   LLDP-MED Supported

862247 / 453564239911    Cisco Systems     Cisco Catalyst WS-C3560V2-24TS-E:        12.2(55)SE          YES
                                           24 Ethernet 10/100 ports and 2
                                           SFP-based Gigabit ports (distribution)

865339 / 453564197311    Cisco Systems     Cisco Catalyst 3750G-12S: 12 Gigabit     12.2(55)SE4         YES
                                           Ethernet SFP ports (distribution)

866051 / 453564261781    Cisco Systems     Cisco 3750V2: 24 Ethernet 100FX          12.2(58)SE1         YES
                                           SFP ports and 2 SFP Gigabit
                                           Ethernet ports (distribution)

865054 / 453564061211    Cisco Systems     Cisco Catalyst 2960: 24 10/100 +         12.2(55)SE          YES
                                           2 10/100/1000

865055 / 453564099371    Cisco Systems     Cisco Catalyst 2960: 24 10/100           12.2(55)SE          YES
                                           and 2 dual-purpose uplinks

866116 / 453564299601    Cisco Systems     Cisco Catalyst 2960-S: 24                12.2(58)SE2         YES
                                           10/100/1000 + 4 One Gigabit
                                           Ethernet SFP ports

862084 / 453564206991    Hewlett-Packard   HP E2510-24: 24 10/100 ports             Q.11.17             NO
                         Company           and 2 Gigabit dual-personality                               (No firmware
                                           ports for 10/100/1000 or mini-                               support for LLDP-
                                           GBIC connectivity                                            MED)

865330 / 453564194151    Hewlett-Packard   HP 24-port ProCurve Switch               K.15.02.0007        NO
                         Company           2810-24G with 20 10/100/1000                                 (No firmware
                                           ports and 4 dual-personality ports                           support for LLDP-
                                           for RJ-45 10/100/1000 or mini-                               MED)
                                           GBIC fiber Gigabit connectivity


Note

Currently only Cisco access switches are supported for LLDP-MED and the dependent
Fixed Mode Monitoring feature.

The LLDP-MED feature must be enabled on each switch access port used for PIIC iX Fixed
Mode Monitoring in order for the feature to function. Vendor-specific switch configuration
must be done on each fixed mode monitoring access switch. For Cisco switches, LLDP-MED
can be enabled globally or on each individual switch port. Unlike the PIIC Fixed Mode
Monitoring implementation, an Extension switch is NOT required. Any switch port can be
used as a Fixed Mode Monitoring port. Access level switch ports are the likely connection
location for Fixed Mode Monitoring ports and devices; however, a Fixed Mode Monitoring
access port can also be on a Distribution or Access layer switch.


Fixed Mode Monitoring Access Port Locations


See Figure 6-1 for an example of possible fixed mode monitoring access port locations.

Figure 6-1: Possible Fixed Mode Monitoring Access Port Locations

Commands Used to Enable and Disable LLDP-MED


The following sections describe the configuration commands used to enable and disable the
LLDP-MED feature on either a Cisco or HP switch.


Cisco LLDP-MED Configuration Commands

LLDP and LLDP-MED cannot operate simultaneously in a network. By default, a network
device sends only LLDP packets until it receives LLDP-MED packets from an endpoint
device. The network device then sends out LLDP-MED packets until it receives LLDP-only
packets.

On Cisco access switches, LLDP is enabled by default on all supported interfaces to send and
to receive LLDP information. LLDP can be disabled globally using the following command:

Switch(config)# no lldp run

LLDP can be enabled globally using the following command:

Switch(config)# lldp run

LLDP can be disabled on an interface using the following commands:

Switch(config)# interface GigabitEthernet1/0/1
Switch(config-if)# no lldp transmit
Switch(config-if)# no lldp receive

LLDP can be enabled on an interface using the following commands:

Switch(config)# interface GigabitEthernet1/0/1
Switch(config-if)# lldp transmit
Switch(config-if)# lldp receive
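
To verify the resulting LLDP state, the standard Cisco show commands can be used, for example:

Switch# show lldp
Switch# show lldp interface GigabitEthernet1/0/1
Switch# show lldp neighbors detail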


Ring to Star Conversion

Overview
The PIIC iX Network supports use of the Star network topology. As of December
31, 2010, the Star Topology is the required network topology for all new (i.e.,
System Release J and later) Philips clinical network installations. See Chapter 3
for full details about Star Topology concepts and implementation. Additionally,
Table 7-1 summarizes the specific topology support requirements determined by the
network plan.

Table 7-1: Topology Support Requirements

Scenario: New network quoted after December 31, 2010.
Support: Ring is not allowed. Star topology is the only option.

Scenario: Currently installed ring network requesting an expansion.
Support: Allowed, but you must convert all non-redundant core switch rings to a redundant ring and you should consider a possible conversion to star topology.1

Scenario: Currently installed ring networks with no modifications planned.
Support: No change required. Ring topology is supported.

1 Expansion of a Ring topology to support a new PIIC system is limited to a Routed Ring network.

This chapter describes the conversion process for moving from a Ring Topology to a
Star Topology and includes the following topics:

• Supported network hardware for a Star Topology and the technical differences
between ring and star topologies
• High level conversion process
• Ring to Star conversion scenarios


Technical Differences and Supported Hardware


Table 7-2 summarizes the network device differences between the ring and star topologies.

Table 7-2: Network Device Differences Between Ring and Star

Cable plant, including media types and run paths
  • Ring: Devices connected in a ring that often require multiple vertical cable runs. Copper and fiber can both be used.
  • Star: Single core high-level structure; all lower-level devices run back to the core. A single vertical cable riser path can be used.

Network device hardware
  • Ring: Mixed network of Cisco and HP switches. Core layer functions more like a distribution layer.
  • Star: Cisco (Routed and Non-Routed Star); HP (Non-Routed Star only). A mix of vendor network hardware (Cisco and HP) is not allowed on a Star topology.

Network device configurations
  • Ring: Single VLAN per device (Tier Switches).
  • Star: Multiple VLANs for each access switch.

Table 7-3 lists the devices that are supported for a Star topology.

Table 7-3: Star Topology Supported Network Devices

Access layer:
• 2960TT
• 2960TC
• 2960TS
• 2960-S (New)
• HP2510 (non-routed systems only)

Distribution layer:
• 2960TT
• 2960TC
• 2960TS
• 2960-S (New)
• 3750-12 (New)

Core (router) layer:
• 3560
• 3750-24
• 3750-12 (New)


High Level Conversion Process


The Ring to Star conversion process includes the following steps:

• Planning
• Preparation
• Conversion
• Verification

Step 1 - Planning
The first step is the planning phase and requires that you prepare a detailed written plan that
outlines the desired Star network. The plan should identify all required switches, cable runs,
and specific port usage. Additionally, you must define and create all of the new configurations
for each switch in the star network.

Step 2 - Preparation
Use the following recommended preparation steps:

1. Obtain a complete and correct network plan for the existing network design,
implementation, installation, and configuration.
2. Collect all existing ring configurations from the switches.
3. Develop a back out plan to return the network to its original state so that you are prepared
if the conversion effort must be aborted.
4. Pre-configure and label any devices from the Ring network that will not be used in the
Star network.
5. Prepare for the removal and reallocation of the devices from the Ring network that will be
reused in the Star network. To do this:
• Free the cabling from trays and cable management devices
• Remove as many rack screws as possible
6. For larger systems, you must prepare a conversion script. The script should
chronologically identify the conversion steps and resources required for script execution.
7. Meet with the hospital clinical staff to determine appropriate network down times and best
approach to minimizing the impact of the down time. Confirm that the clinical staff is
ready for the necessary conversion down time of the network before bringing any devices
down.
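
The existing ring configurations called for in step 2 can be collected with standard commands on each switch; for example, on Cisco IOS (a sketch — the exact syntax and the TFTP server address depend on the site):

Switch# show running-config
Switch# copy running-config tftp:

On HP ProCurve switches, show running-config similarly displays the active configuration for capture.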


Step 3 - Conversion
Use the following conversion steps:

1. Meet with the hospital clinical staff to manage and minimize the impact of down time.
2. Follow the original network design that you created in the planning phase. Do not make
any changes or additions to the planned design.
3. Perform all of the basic network viability checks: physical connections, link lights, ping,
and application connection and communication.
4. Use a Phased Go-Live conversion implementation by bringing up access layers one at a
time and adding devices to them prior to attaching the next access device.

Step 4 - Verification
Use the following test and integration steps:

1. Verify at every step of the conversion. Hold design reviews for the results of the network
planning and preparation phases. This should be done as part of the design and configuration
file development phases (Steps 1 and 2).

2. Follow all of the standard test and integration installation steps as part of the installation
and implementation verification phases. (Steps 3 and 4.)

3. Confirm application functionality from the IIC to every device (each bedside monitor, the
DBS, etc.)

4. Document every step of the conversion.
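
The reachability confirmation in step 3 can be scripted. The sketch below pings each clinical device and reports the result; the inventory names and addresses are illustrative (borrowed from the example scheme in Table B-2), and the pinger is injectable so the checklist can be dry-run before the go-live window.

```python
# Hypothetical reachability checklist for the Verification phase.
import subprocess

def check_reachability(inventory, pinger=None):
    """Return {device name: True/False} for each device in the inventory.

    `inventory` maps device names to IP addresses. `pinger` may be
    replaced (e.g. for dry runs); by default it shells out to ping.
    """
    if pinger is None:
        def pinger(ip):
            # One echo request with a short timeout (Linux-style flags).
            return subprocess.run(
                ["ping", "-c", "1", "-W", "2", ip],
                stdout=subprocess.DEVNULL,
            ).returncode == 0
    return {name: pinger(ip) for name, ip in inventory.items()}

# Dry run with a fake pinger that only "reaches" the DBS address:
inventory = {"DBS": "172.20.204.14", "Gateway": "172.20.204.15"}
result = check_reachability(inventory, pinger=lambda ip: ip.endswith(".14"))
```

In a real conversion the inventory would be generated from the written network plan, so that no bedside monitor or server is skipped during verification.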

Conversion Scenarios
This chapter describes the conversion scenarios for moving from a Ring Topology to a Star
Topology and provides a summary of each of the following types of conversions:

• Conversion that requires reuse of equipment
• Conversion that may allow reuse of equipment
• Conversion that will require new switches to be added to the network

Table 7-4 provides a summary of the various ring to star conversion scenarios along with
details about the type of driver(s) and strategies.


Table 7-4: Ring to Star Conversion Scenarios

From: Non-redundant Ring    To: Non-routed Star
Driver(s): Expansion; requirement/desire for redundancy or Star compliance
Strategies:
• Full system shutdown

From: Non-redundant Ring    To: Routed Star
Driver(s): Expansion for routed telemetry
Strategies:
• Full system shutdown
• Establish router layer first
• Convert existing switches onto the routing layer as access switches following reconfiguration1

From: Non-routed Ring    To: Non-routed Star
Driver(s): Expansion of simple NR Ring; requirement/desire for redundancy or Star compliance
Strategies:
• Full system shutdown
• Establish distribution layer pair; establish Access switch

From: Non-routed Ring    To: Routed Star
Driver(s): Expansion; addition of routed telemetry
Strategies:
• Establish router layer first

From: Routed Ring    To: Routed Star
Driver(s): Expansion; requirement/desire for redundancy or Star compliance
Strategies:
• Full system shutdown
• Disconnect one Ring router, reconfigure as Star router, add additional switch as access layer
• Convert Ring to Star access switch1
• As ring switches are removed, reconfigure them as Star access switches and populate them on the Star side
• When all ring switches are converted, reconfigure the other Ring router as a Star router and connect it into the Star system

Note: In this table, bold text indicates the addition of new hardware.

1 When converting switches and routers to a Star topology you must first issue the following commands: del vlan.dat, write erase, and reload (Cisco factory default).
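
The factory-default sequence referenced in the footnote corresponds to the following Cisco IOS commands (a sketch; the exact file system name, such as flash:, varies by platform):

Switch# delete flash:vlan.dat
Switch# write erase
Switch# reload

Removing vlan.dat discards the VLAN database, write erase clears the startup configuration, and reload restarts the switch in its factory-default state, ready for the new Star configuration to be loaded.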


Conversion Scenario 1: Non-Redundant Ring to Non-Routed Star


Figure 7-1 illustrates a conversion scenario for a non-redundant ring topology to a non-routed
star.

Figure 7-1: Non-Redundant Ring to Non-Routed Star

Be aware of the following characteristics and requirements for a non-redundant ring to
non-routed star conversion:

• You cannot mix HP and Cisco equipment in a non-routed star topology. This topology
requires a single-vendor solution.
• A full system shutdown is usually required.
• A small number of devices and possible reuse of ports and cabling should minimize the
down time.
• Conversion of a network with three non-redundant switches could be a straightforward
procedure that can be accomplished in the following two steps:
• Connect the lower-tier switches
• Load the new configurations into the three non-redundant switches.


Conversion Scenario 2: Non-Redundant Ring to Routed Star


Figure 7-2 illustrates a conversion scenario for a non-redundant ring topology to a routed star.

Figure 7-2: Non-Redundant Ring to Routed Star

Be aware of the following characteristics and requirements for this type of conversion:

• HP Routed Star topologies are supported through the Advanced Network Design and
Implementation (ANDI) process only.
• Significant new hardware is required for this type of conversion.
• For this type of conversion, the router layer can be built first. Existing switches are then
configured as access switches and attached to the router.
• Make sure that you plan for downtime while the access switches are being reconfigured.
• Possible reuse of existing cabling and ports can be done with this type of conversion.
• Distribution layers may or may not be required.


Conversion Scenario 3: Non-Routed Ring to Non-Routed Star


Figure 7-3 illustrates a conversion scenario for a non-routed ring topology to a non-routed
star.

Figure 7-3: Non-Routed Ring to Non-Routed Star

Be aware of the following characteristics and requirements for this type of conversion:

• Possible reuse of existing cabling and ports can be done with this type of conversion.
• It is possible to have 100% switch reuse with only configuration changes.


Conversion Scenario 4: Non-Routed Ring to Routed Star


Figure 7-4 illustrates a conversion scenario for a non-routed ring topology to a routed star.

Figure 7-4: Non-Routed Ring to Routed Star

Be aware of the following characteristics and requirements for this type of conversion:

• This is a complicated conversion which must be carefully managed. Conversion scripts
are recommended.
• Prepare for down time. The larger the network, the larger the down time impact.
• The potential exists for a high reuse of switches.
• Distribution layers may or may not be required.
• Significant new hardware is required for this type of conversion.
• For this type of conversion, the router layer can be built first. Existing switches are then
configured as access switches and attached to the router.
• Possible reuse of existing cabling and ports can be done with this type of conversion.
• Use of a mixed vendor hardware network on a routed star topology is not recommended.
HP Routed Star topologies are supported through the Advanced Network Design and
Implementation (ANDI) process only.


Conversion Scenario 5: Routed Ring to Routed Star


Figure 7-5 illustrates a conversion scenario for a routed ring topology to a routed star.

Figure 7-5: Routed Ring to Routed Star

Be aware of the following characteristics and requirements for this type of conversion:

• This is a complicated conversion which must be carefully managed. Conversion scripts
are recommended.
• Prepare for down time. The larger the network, the larger the down time impact.
• The potential exists for a high reuse of switches.
• Distribution layers may or may not be required.
• Use of a mixed vendor hardware network on a routed star topology is not recommended.
HP Routed Star topologies are supported through the Advanced Network Design and
Implementation (ANDI) process only.


Hardware Overview

Network Cables and Connectors


Passive standard Ethernet hardware components for the PIIC or PIIC iX
Network include:
• Unshielded Twisted Pair (UTP) Cables
• Fiber Optic Cables
• RJ-45 Wall Boxes
• 24-port Patch Panels
These components are described in detail in the sections that follow.

Note

Third-party personnel who install the cable plant used for the Philips IntelliVue
Clinical Network must be certified to install Category 5 (or greater) Unshielded
Twisted Pair and/or fiber cabling. Upon completion of the cable plant installation,
the cable installation personnel must provide Philips (and the customer, i.e. the
hospital IT staff) with documented segment-by-segment test results that
demonstrate the quality and reliability of the cable plant installation.

UTP Cables
Unshielded Twisted Pair (UTP) cabling (in-house and patch) must be compliant to
Electronic Industries Association (EIA)/Telecommunication Industries Association
(TIA) 568-B (copper) or International Organization for Standardization (ISO)/
International Electrotechnical Commission (IEC) 11801 (copper and fiber)
specifications.

Per the EIA/TIA 568, cabling must meet Category 5 (or greater) specifications. Per
ISO/IEC 11801 cabling must meet Class D (or greater) specifications. Patch panel
and patch cable terminations are RJ-45.

A-1 Hardware Overview A-1


MOCN213
Network Cables and Connectors

Note

Direct connect patch cables and in-wall wiring use the 568A version on both ends. Crossover
cables use the 568A version on one end and the 568B version on the other; therefore, they
swap the transmit and receive wire pairs. When purchased from Philips, crossover cables
have black boots on the cable ends for identification.
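
The pair swap described in the Note can be made concrete with the standard T568A/T568B pin assignments. The pinouts below are a sketch taken from the published TIA-568 convention (not from the Philips figure); a cable wired 568A on one end and 568B on the other crosses the 10/100BASE-T transmit and receive pairs (pins 1 and 3, and pins 2 and 6).

```python
# Standard T568A/T568B pin assignments (pin -> wire color).
T568A = {1: "white/green", 2: "green", 3: "white/orange", 4: "blue",
         5: "white/blue", 6: "orange", 7: "white/brown", 8: "brown"}
T568B = {1: "white/orange", 2: "orange", 3: "white/green", 4: "blue",
         5: "white/blue", 6: "green", 7: "white/brown", 8: "brown"}

def crossover_map(end_a, end_b):
    """For each pin on end A, find the pin on end B carrying the same wire."""
    return {pa: next(pb for pb, cb in end_b.items() if cb == ca)
            for pa, ca in end_a.items()}

# In an A-to-B (crossover) cable, pin 1 lands on pin 3 and pin 2 on pin 6;
# the blue and brown pairs (pins 4, 5, 7, 8) are unchanged.
```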

Fiber Optic Cables


For noise immunity or extended distance cabling, single- or multi-mode fiber optic
cable is used on the IntelliVue Clinical Network.

Note

Single, continuous-length fiber optic cables are limited to 1000 meters (3281 ft.). You must
use 1000 Megabit, full-duplex connections on fiber optic runs. 100/half connections are not
supported over fiber.
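
A minimal check encoding the constraints in the Note above can be useful when reviewing a cable plan. The function name and form are illustrative only:

```python
def fiber_run_ok(length_m, speed_mbps, full_duplex):
    """Check a planned fiber run against the limits stated in the Note:
    at most 1000 m of continuous fiber, 1000 Mbit, full duplex."""
    return length_m <= 1000 and speed_mbps == 1000 and full_duplex
```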

Note

Fiber optic cables use four different types of connectors: SC, ST, MT-RJ, or LC, as shown in
Figure A-1.

Figure A-1: Fiber Optic Cable Connectors (SC, ST, MT-RJ, and LC)

SC connectors have a square cross section and are used with a Network Switch.
ST connectors have a round cross section and are used with the 10 Mbps Media
Translator.

A-2 Hardware Overview


MOCN213
Network Cables and Connectors

MT-RJ connectors are small form-factor fiber optic connectors that resemble the
RJ-45 connectors used in Ethernet networks.
LC connectors look just like SC connectors, but they are half the size with a 1.25mm
ferrule instead of 2.5mm.
* - A mode-conditioning patch cord, as specified by the IEEE standard, is required
regardless of the span length.

Wall Boxes
RJ-45 wall boxes for UTP cable connectors are available from Philips for connecting
Patient Monitors, PIIC and PIIC iX Surveillance systems, PIIC and PIIC iX Overview
systems, PIIC and PIIC iX Physiologic Data servers, and the PIIC or PIIC iX Primary
Server to the Network. Quad-port (Option M3199AI #A12) wall boxes are available
for US installations.
Surface mount kits for mounting quad-port wall boxes (Option M3199AI #A13) are
also available. Single port wall boxes and surface mount kits are also available for
certain countries. See your Philips Representative for specific part numbers.
A dual-port RJ-45 wall box is shown in Figure A-2. Each wall box includes places for
labeling UTP cables connecting to each port. A typical label would include the patch
panel number and port number the cable connects to. For example, a label 2-14 would
indicate that the connecting UTP cable came from patch panel 2 - port 14.
UTP wire connections for each pin of port jacks are the 568A Version shown in
Figure A-2. Wiring of UTP cables to internal connectors of wall boxes must be
performed by a certified CAT5 cable plant installer. They should be wired as shown in
Figure A-2.
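
The wall-box labeling convention above lends itself to simple tooling when building a cable inventory; a hypothetical parser:

```python
def parse_port_label(label):
    """Split a wall-box label such as '2-14' into (patch panel, port)."""
    panel, port = label.split("-")
    return int(panel), int(port)

# "2-14" -> patch panel 2, port 14, matching the labeling example above.
```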


Figure A-2: RJ-45 Wall Box and Wire Connections for UTP Cable. (The figure shows a
dual-port wall box with label areas for Port A and Port B, and the RJ-45 wiring connections
listing the primary color/stripe color for each of the eight pins per the 568A convention.)

Patch Panels
The IntelliVue Clinical Network contains many interconnecting cables and wires. To
assure a robust, reliable, and accessible network, each wire connection must be secure,
and identification of wires and cables must be clear. To assist this process, 24-Port
Patch Panels are available from Philips (Option M3199AI #A01). For large systems
with many wires, it is recommended that patch panels be mounted in a floor standing
rack designed for that purpose. However, Philips also provides a Patch Panel Wall
Mount Kit (M3199AI #A05) for mounting the patch panel on a vertical wall.
The 24-port patch panel from Philips is shown in Figure A-3. The Front Panel has 24 RJ-45
ports for connecting 24 UTP CAT5 RJ-45 connectors. Each front panel port should be labeled
for cable identification. Places for port labeling are provided. Snap-in Philips labels are also
included for each port.


The Rear Panel has 24 sections for connecting the 8 individual wires from 24 different
UTP CAT5 cables. Wiring of UTP cables to the rear of the Patch Panel must be
performed by a certified CAT5 cable plant installer. They should be wired as shown in
Figure A-3.

Figure A-3: 24-Port Patch Panel and Wiring Connections. (The figure shows the front panel
with 24 labeled RJ-45 ports, the rear panel with 24 punch-down sections numbered 1-24, and
the rear panel wiring connections with the primary color/stripe color marks on the connector
block. The cable jacket is stripped back no more than 5 cm (2 in), and wires are untwisted no
more than 1.3 cm (0.5 in.).)


Device/Switch Interconnection Options

Copper Connections

UTP copper connections are supported for interconnecting network devices where ports are
available and EIA/TIA Telecommunication compliance for Category 5 (or greater) is met.

Fiber Connections

Fiber optic cabling may be used for network device interconnection in cases where
longer point-to-point distances must be spanned. Port requirements for fiber may
dictate specific network device types.
Fiber links may be either 100 or 1000Mbit (Gigabit).

Gigabit Fiber Connections


Gigabit fiber SFPs are available for cases where Gigabit fiber links are needed. The
table below identifies the supported uses for Gigabit fiber.

Table A-1: Use Cases for Gigabit Fiber

Star Topologies:
• Server Farm - Connections between the Routers and the Server Farm Distribution Switches must be Gigabit speed.
• Router-to-Router - The router-to-router connections must be Gigabit speed.


Table A-2: Philips Products that Support Full Gigabit Ethernet Capabilities

Cisco Core Layer
• 865339 - WS-C3750G-12S-E (Cisco Systems): 12-port SFP-based Gigabit Ethernet managed stackable, rack-mountable switch with EMI installed.

Cisco Distribution/Access Layer
• 866116 - WS-C2960S-24TS-L (Cisco Systems): Cisco Catalyst 2960-S, 24 10/100/1000 ports + 4 One Gigabit Ethernet SFP ports

Cisco SFP
• 865223 A03 - GLC-SX-NM (Cisco Systems): NI Transceiver Cisco SX 850NM OPT A03
• 865223 A04 - GLC-LH-SM (Cisco Systems): NI Transceiver Cisco LX/LH OPT A04
• 865223 A09 - GLC-T (Cisco Systems): NI Transceiver Cisco Copper OPT A09

HP Core Layer
• 865328 - J9642A (v2) (HP): HP 5406zl-24G-PoE SW w PLic/L4Svc (PMed) (this product number represents a bundle of HP products)
• 865328 A01 - J9642A (v2) / J8702A (v1) (HP): HP 24 port 10/100/1000 copper module (v2)
• 865328 A02 - J9537A (v2) / J8706A (v1) (HP): HP 24 port fiber ready SFP module (v2)
• 865328 A04 - J9637A (v2 only) (HP): HP 12-port 10/100/1000 copper, 12-port SFP module (v2)

HP Distribution/Access Layer
• 865330 - J9021A (HP): HP 24-port ProCurve Switch 2810-24G with 20 10/100/1000 ports and 4 dual-personality ports for RJ-45 10/100/1000 or mini-GBIC fiber Gigabit connectivity.
• 865223 A02 - J9054B (HP): NI HP 100FX Fiber SFP option A02
• 865223 A05 - J4858C (HP): NI Transceiver HP Gigabit SX option A05
• 865223 A06 - J4859C (HP): NI Transceiver HP Gigabit LX Opt A06
• 865223 A08 - J8177C (HP): NI 1000 Tx Copper SFP Option A08


PIIC iX IP Address Tables

Overview
This appendix is a reference to be used with Chapter 6 and includes the following
PIIC iX IP Addressing scheme tables:

• Table B-1, “IP Address/VLAN Mappings for PIIC iX Systems,”


• Table B-2, “Flexible IP Addressing Scheme: Non-routed PIIC iX without
IntelliVue Smart-hopping Network,”
• Table B-3, “Flexible IP Addressing Scheme: Non-routed PIIC iX with IntelliVue
Smart-hopping Network,”
• Table B-4, “Flexible IP Addressing Scheme: Routed IntelliVue Smart-hopping
Network,”
Note that these tables are provided as examples of one possible IP addressing
scheme for each deployment scenario; actual addressing schemes may differ.
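
The host counts used throughout these tables follow directly from the subnet masks; Python's standard ipaddress module reproduces the "# Devices" columns of Table B-1:

```python
import ipaddress

def usable_hosts(cidr):
    # Total addresses in the block minus the network and broadcast addresses.
    return ipaddress.ip_network(cidr).num_addresses - 2

print(usable_hosts("172.21.200.0/24"))  # 254
print(usable_hosts("172.21.200.0/23"))  # 510
print(usable_hosts("172.21.200.0/22"))  # 1022
```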

Table B-1: IP Address/VLAN Mappings for PIIC iX Systems

Deployment scenarios covered by this table:
• Standalone (Non-routed, no Smart-hopping): 32 Beds, 8 Systems
• Small Server (Non-routed, no Smart-hopping): 64 Beds, 8 Systems
• Large Server (Non-routed, no Smart-hopping): 128 Beds, up to 160 Systems
• Large Server (Non-routed with Smart-hopping): 128 Beds, up to 160 Systems
• Large Server (Routed): 128 - 1024 Beds, 160 Systems

Each row below gives the base IP address (1st, 2nd, 3rd, and 4th octets), followed by three column groups of VLAN / Subnet Mask / # Devices for /24, /23, and /22 addressing:

IP Address (1st 2nd 3rd 4th) | VLAN, Subnet Mask - /24, # Devices | VLAN, Subnet Mask - /23, # Devices | VLAN, Subnet Mask - /22, # Devices
172 21 200 0 200 255.255.255.0 254 200 255.255.254.0 510 200 255.255.252.0 1022
172 21 201 0 201 255.255.255.0 254 --MGMT-- ---- SKIPPED ---- --MGMT-- ---- SKIPPED ----
172 21 202 0 202 255.255.255.0 254 202 255.255.254.0 510
172 21 203 0 203 255.255.255.0 254
172 21 204 0 204 255.255.255.0 254 204 255.255.254.0 510 204 255.255.252.0 1022
172 21 205 0 205 255.255.255.0 254
172 21 206 0 206 255.255.255.0 254 206 255.255.254.0 510
172 21 207 0 207 255.255.255.0 254
172 21 208 0 208 255.255.255.0 254 208 255.255.254.0 510 208 255.255.252.0 1022
172 21 209 0 209 255.255.255.0 254
172 21 210 0 210 255.255.255.0 254 210 255.255.254.0 510
172 21 211 0 211 255.255.255.0 254
172 21 212 0 212 255.255.255.0 254 212 255.255.254.0 510 212 255.255.252.0 1022
172 21 213 0 213 255.255.255.0 254
172 21 214 0 214 255.255.255.0 254 214 255.255.254.0 510
172 21 215 0 215 255.255.255.0 254
172 21 216 0 216 255.255.255.0 254 216 255.255.254.0 510 216 255.255.252.0 1022
172 21 217 0 217 255.255.255.0 254
172 21 218 0 218 255.255.255.0 254 218 255.255.254.0 510
172 21 219 0 219 255.255.255.0 254
172 21 220 0 220 255.255.255.0 254 220 255.255.254.0 510 220 255.255.252.0 1022
172 21 221 0 221 255.255.255.0 254
172 21 222 0 222 255.255.255.0 254 222 255.255.254.0 510
172 21 223 0 223 255.255.255.0 254
172 21 224 0 224 255.255.255.0 254 224 255.255.254.0 510 224 255.255.252.0 1022
172 21 225 0 225 255.255.255.0 254
172 21 226 0 226 255.255.255.0 254 226 255.255.254.0 510
172 21 227 0 227 255.255.255.0 254

Table B-2: Flexible IP Addressing Scheme: Non-routed PIIC iX without IntelliVue Smart-hopping Network
Clinical Domain Size Local Database PIIC iX Small PIIC iX Server Large PIIC iX Server

32 Beds + 8 systems 64 Beds + 8 Systems 128 Beds + up to 60 Systems


Clinical Domain Devices # IPs IP Address # IPs IP Address # IPs IP Address
Subnet Mask = 255.255.255.0 Subnet Mask = 255.255.255.0 Subnet Mask = 255.255.254.0
Subnet Address 1 172.20.204.0 1 172.20.204.0 1 172.20.204.0
Router VIP 1 172.20.204.1 1 172.20.204.1 1 172.20.204.1
Router A 1 172.20.204.2 1 172.20.204.2 1 172.20.204.2
Router B 1 172.20.204.3 1 172.20.204.3 1 172.20.204.3
Reserved (Network Devices) 8 172.20.204.4 to 172.20.204.11 8 172.20.204.4 to 172.20.204.11 8 172.20.204.4 to 172.20.204.11
Service PC/APM 2 172.20.204.12 to 172.20.204.13 2 172.20.204.12 to 172.20.204.13 2 172.20.204.12 to 172.20.204.13
DBS 1 172.20.204.14 1 172.20.204.14 1 172.20.204.14
Gateway 1 172.20.204.15 1 172.20.204.15 1 172.20.204.15
Physio DB 1 172.20.204.16 1 172.20.204.16 1 172.20.204.16
PIIC iXs Hosts (PIICs/ 8 172.20.204.17 to 172.20.204.24 8 172.20.204.17 to 172.20.204.24 60 172.20.204.17 to 172.20.204.76
Clients)
Wired Bedsides - 1st Device 32 172.20.204.25 to 172.20.204.56 64 172.20.204.25 to 172.20.204.88 128 172.20.204.77 to 172.20.204.204
Wired Bedsides - 2nd Device 32 172.20.204.57 to 172.20.204.88 64 172.20.204.89 to 172.20.204.152 128 172.20.204.205 to 172.20.205.76
Wired Bedsides - 3rd Device 24 172.20.204.89 to 172.20.204.112 48 172.20.204.153 to 172.20.204.200 96 172.20.205.77 to 172.20.205.172
XDS Devices 16 172.20.204.113 to 172.20.204.128 32 172.20.204.201 to 172.20.204.232 64 172.20.205.173 to 172.20.205.236
Printers (Static or DHCP) 10 172.20.204.129 to 172.20.204.138 10 172.20.204.233 to 172.20.204.242 17 172.20.205.237 to 172.20.205.253
Unused Space 117 172.20.204.139 to 172.20.204.254 12 172.20.204.243 to 172.20.204.254 1 172.20.205.254
Broadcast Address 1 172.20.204.255 1 172.20.204.255 1 172.20.205.255
Total IP addresses 256 256 512

Table B-3: Flexible IP Addressing Scheme: Non-routed PIIC iX with IntelliVue Smart-hopping Network
Clinical Domain Size Local Database iX Non-routed with Smart- Small Server iX Non-routed with Smart- Large Server iX Non-routed with Smart-
hopping hopping hopping
Clinical Domain Devices # IPs IP Address # IPs IP Address # IPs IP Address
Subnet Mask = 255.255.255.0 Subnet Mask = 255.255.254.0 Subnet Mask = 255.255.252.0
Subnet Address 1 172.20.204.0 1 172.20.204.0 1 172.20.204.0
Router VIP 1 172.20.204.1 1 172.20.204.1 1 172.20.204.1
Router A 1 172.20.204.2 1 172.20.204.2 1 172.20.204.2
Router B 1 172.20.204.3 1 172.20.204.3 1 172.20.204.3
Reserved (Network Devices) 8 172.20.204.4 to 172.20.204.11 8 172.20.204.4 to 172.20.204.11 8 172.20.204.4 to 172.20.204.11
Service PC/APM 2 172.20.204.12 to 172.20.204.13 2 172.20.204.12 to 172.20.204.13 2 172.20.204.12 to 172.20.204.13
DBS 1 172.20.204.14 1 172.20.204.14 1 172.20.204.14
Gateway 1 172.20.204.15 1 172.20.204.15 1 172.20.204.15
Physio DB 1 172.20.204.16 1 172.20.204.16 1 172.20.204.16
PIIC iXs Hosts (PIICs/ 8 172.20.204.17 to 172.20.204.24 8 172.20.204.17 to 172.20.204.24 60 172.20.204.17 to 172.20.204.76
Clients)
Wired Bedsides - 1st Device 32 172.20.204.25 to 172.20.204.56 64 172.20.204.25 to 172.20.204.88 128 172.20.204.77 to 172.20.204.204
Wired Bedsides - 2nd Device 32 172.20.204.57 to 172.20.204.88 64 172.20.204.89 to 172.20.204.152 128 172.20.204.205 to 172.20.205.76
Wired Bedsides - 3rd Device 24 172.20.204.89 to 172.20.204.112 48 172.20.204.153 to 172.20.204.200 96 172.20.205.77 to 172.20.205.172
XDS Devices 16 172.20.204.113 to 172.20.204.128 32 172.20.204.201 to 172.20.204.232 64 172.20.205.173 to 172.20.205.236
Printers (Static or DHCP) 10 172.20.204.129 to 172.20.204.138 15 172.20.204.233 to 172.20.204.247 19 172.20.205.237 to 172.20.205.255
ITS/Access Point Controllers 8 172.20.204.139 to 172.20.204.146 8 172.20.204.248 to 172.20.204.255 18 172.20.206.0 to 172.20.206.17
ITS/Access Points 40 172.20.204.147 to 172.20.204.186 113 172.20.205.0 to 172.20.205.112 128 172.20.206.18 to 172.20.206.145
Transceivers - (APC) - 64 172.20.204.187 to 172.20.204.250 128 172.20.205.113 to 172.20.205.240 256 172.20.206.146 to 172.20.207.145
DHCP
Unused Space 4 172.20.204.251 to 172.20.204.254 14 172.20.205.241 to 172.20.205.254 109 172.20.207.146 to 172.20.207.254
Broadcast Address 1 172.20.204.255 1 172.20.205.255 1 172.20.207.255
Total IP addresses 256 512 1024

Table B-4: Flexible IP Addressing Scheme: Routed IntelliVue Smart-hopping Network
Clinical Domain Size Small Smart-hopping Network Size Medium Smart-hopping Network Size Large Smart-hopping Network Size (Default)

256 Smart-hopping Devices + 9 APC + 128 AP 512 Smart-hopping Devices + 9 APC + 320 AP 1024 ITS Devices + 9 APCs + 320 AP
Clinical Domain Devices # IPs IP Address # IPs IP Address # IPs IP Address
Subnet Mask = 255.255.252.0 Subnet Mask = 255.255.248.0 Subnet Mask = 255.255.240.0
Subnet Address 1 172.31.240.0 1 172.31.240.0 1 172.31.240.0
Router VIP 1 172.31.240.1 1 172.31.240.1 1 172.31.240.1
Router A 1 172.31.240.2 1 172.31.240.2 1 172.31.240.2
Router B 1 172.31.240.3 1 172.31.240.3 1 172.31.240.3
Reserved (Service PC) 6 172.31.240.4 to 172.31.240.9 6 172.31.240.4 to 172.31.240.9 6 172.31.240.4 to 172.31.240.9
Switches/ Client 11 172.31.240.10 to 172.31.240.20 11 172.31.240.10 to 172.31.240.20 11 172.31.240.10 to 172.31.240.20
Reserved 44 172.31.240.21 to 172.31.240.64 34 172.31.240.21 to 172.31.240.54 235 172.31.240.21 to 172.31.240.255
ITS APC 9 172.31.240.65 to 172.31.240.73 9 172.31.240.55 to 172.31.240.63 9 172.31.241.0 to 172.31.241.8
ITS AP Static Range 182 172.31.240.74 to 172.31.240.255 320 172.31.240.64 to 172.31.241.127 759 172.31.241.9 to 172.31.243.255
ITS AP DHCP (APC) 256 172.31.241.0 to 172.31.241.255 640 172.31.241.128 to 172.31.243.255 1024 172.31.244.0 to 172.31.247.255
ITS Transceivers DHCP 511 172.31.242.0 to 172.31.243.254 1023 172.31.244.0 to 172.31.247.254 2047 172.31.248.0 to 172.31.255.254
Broadcast Address 1 172.31.243.255 1 172.31.247.255 1 172.31.255.255
Total IP addresses 1024 2048 4096

Note

The Smart-hopping infrastructure Access Point Controllers (APCs) provide DHCP IP addresses to Smart-hopping Access Points (APs) and Smart-hopping monitoring devices. The APC DHCP server lease times are infinite. Therefore, as devices are replaced or upgraded, additional spare addresses must be planned for when allocating address space. The recommended address planning logic is as follows:

Allocation space for APs must be approximately 3 times the number of APs needed:

1. One static address for each AP needed
2. One DHCP address for each AP needed
3. One spare DHCP address for each AP needed (for AP repair/upgrade)

Allocation space for monitoring devices must be 2 times the number of devices needed:

1. One DHCP address for each monitoring device
2. One spare DHCP address for each monitoring device (for device repair/upgrade)
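
The planning arithmetic in the Note reduces to a one-line calculation (a sketch; the function name is illustrative):

```python
def smart_hopping_address_space(num_aps, num_devices):
    """Minimum pool size per the Note above: 3 addresses per AP
    (static + DHCP + spare) and 2 per monitoring device (DHCP + spare)."""
    return 3 * num_aps + 2 * num_devices

# For example, 320 APs and 1024 monitoring devices require at least
# 3*320 + 2*1024 = 3008 addresses, before adding routers, APCs, and
# reserved/management ranges.
```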

