MOLINA HEALTHCARE
Data Center Networking High Level Design Document V1.0 (Draft)
Corporate Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
Cisco Services.
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS. THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY. The following information is for FCC compliance of Class A devices: This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio-frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference, in which case users will be required to correct the interference at their own expense. The following information is for FCC compliance of Class B devices: The equipment described in this manual generates and may radiate radio-frequency energy. If it is not installed in accordance with Ciscos installation instructions, it may cause interference with radio and television reception. This equipment has been tested and found to comply with the limits for a Class B digital device in accordance with the specifications in part 15 of the FCC rules. These specifications are designed to provide reasonable protection against such interference in a residential installation. 
However, there is no guarantee that interference will not occur in a particular installation. You can determine whether your equipment is causing interference by turning it off. If the interference stops, it was probably caused by the Cisco equipment or one of its peripheral devices. If the equipment causes interference to radio or television reception, try to correct the interference by using one or more of the following measures: Turn the television or radio antenna until the interference stops. Move the equipment to one side or the other of the television or radio. Move the equipment farther away from the television or radio. Plug the equipment into an outlet that is on a different circuit from the television or radio. (That is, make certain the equipment and the television or radio are on circuits controlled by different circuit breakers or fuses.) Modifications to this product not authorized by Cisco Systems, Inc. could void the FCC approval and negate your authority to operate the product. The following third-party software may be included with your product and will be subject to the software license agreement: CiscoWorks software and documentation are based in part on HP OpenView under license from the Hewlett-Packard Company. HP OpenView is a trademark of the Hewlett-Packard Company. Copyright 1992, 1993 Hewlett-Packard Company. The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCBs public domain version of the UNIX operating system. All rights reserved. Copyright 1981, Regents of the University of California. Network Time Protocol (NTP). Copyright 1992, David L. Mills. The University of Delaware makes no representations about the suitability of this software for any purpose. Point-to-Point Protocol. Copyright 1989, Carnegie-Mellon University. All rights reserved. 
The name of the University may not be used to endorse or promote products derived from this software without specific prior written permission. The Cisco implementation of TN3270 is an adaptation of the TN3270, curses, and termcap programs developed by the University of California, Berkeley (UCB) as part of the UCBs public domain version of the UNIX operating system. All rights reserved. Copyright 1981-1988, Regents of the University of California.
Contents

Contents
Figures
Tables
Document Information
    Review and Distribution
    Modification History
Introduction
    Preface
    Audience
    Scope
    Assumptions
    Related Documents
    References
Project Overview
    Customer Description
    Project Overview
    Project Scope
    Project Timeline
    Phase 1
    Project Team
    Project Sites
1.3.1 SAN Core
1.3.2 SAN Edge
1.3.3 SAN Connectivity
    1.3.3.1 Inter-Switch Link (ISL) Connectivity
    1.3.3.2 Host Connectivity
    1.3.3.3 Storage Connectivity
    1.3.3.4 Tape Backup Architecture
3.4.1 Logical Redundancy
    3.4.1.1 HSRP (Hot Standby Router Protocol)
3.4.2 UDLD (Uni-Directional Link Detection)
3.4.3 NSF/SSO (Non-Stop Forwarding / Stateful Switchover)
3.4.4 GOLD (Generic Online Diagnostics)
3.4.5 uRPF (Unicast Reverse Path Forwarding)
3.4.6 Trunking
3.4.7 VTP (VLAN Trunking Protocol)
3.4.8 VLAN Hopping
3.4.9 Unused Ports
3.4.10 ISSU
3.5 Control Plane and Management Plane Policing
    3.5.1 Developing a CoPP Policy
    3.5.2 CoPP on NX-OS
    3.5.3 CoPP Risk Assessment
4. Security Technologies
4.1 Firewall Technologies
    4.1.1 Transparent Mode
        Overview
        4.1.1.1 Traffic Passing through a Transparent Firewall
        4.1.1.2 Transparent Firewall in a Network
        4.1.1.3 Transparent Firewall Guidelines
        4.1.1.4 Unsupported Features in Transparent Firewall
4.2 Routed Mode
        Overview
    4.2.1 Routed Firewall in a Network
4.3 ASA Virtual Context
        Overview
    4.3.1 Understanding Multiple Contexts
    4.3.2 System Execution Space
    4.3.3 Admin Context
    4.3.4 User or Customer Contexts
4.4 Packet Flow, Shared Interfaces and Classification in Multimode
4.5 Failover Functionality Overview on the ASA
    4.5.1 Stateful Failover
    4.5.2 Failover and State Links
    4.5.3 Intrusion Detection & Prevention
7.3.2 VMWare with Standalone Server
7.3.3 VMWare with Blade Server
8. Management
8.1 Network Management
    8.1.1 SNMP
    8.1.2 SSH/Telnet
    8.1.3 Logging
    8.1.4 NTP
    8.1.5 RBAC/AAA/TACACS+
8.2.1 Cut-Through Proxy (Management Firewall)
8.2.2 DCNM
8.2.3 ANM
8.2.4 CSM
8.2.5 Fabric Manager
9. Rack Design
9.1 Data Center Sizing - No. of Servers & Server NICs
9.2 Rack & POD Design
    9.2.1 Rack Space Division
    9.2.2 POD Assignments
    9.2.3 POD Design

(Release) v1.3
Figures

Figure 1.1 BLOCK LEVEL DESIGN
Cisco incorporates Fastmac and TrueView software and the RingRunner chip in some Token Ring products. Fastmac software is licensed to Cisco by Madge Networks Limited, and the RingRunner chip is licensed to Cisco by Madge NV. Fastmac, RingRunner, and TrueView are trademarks and in some jurisdictions registered trademarks of Madge Networks Limited. Copyright 1995, Madge Networks Limited. All rights reserved. Xremote is a trademark of Network Computing Devices, Inc. Copyright 1989, Network Computing Devices, Inc., Mountain View, California. NCD makes no representations about the suitability of this software for any purpose. The X Window System is a trademark of the X Consortium, Cambridge, Massachusetts. All rights reserved. NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED AS IS WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PRACTICAL PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 
AccessPath, AtmDirector, Browse with Me, CCDE, CCIP, CCSI, CD-PAC, CiscoLink, the Cisco NetWorks logo, the Cisco Powered Network logo, Cisco Systems Networking Academy, Fast Step, Follow Me Browsing, FormShare, FrameShare, GigaStack, IGX, Internet Quotient, IP/VC, iQ Breakthrough, iQ Expertise, iQ FastTrack, the iQ logo, iQ Net Readiness Scorecard, MGX, the Networkers logo, Packet, RateMUX, ScriptBuilder, ScriptShare, SlideCast, SMARTnet, TransPath, Unity, Voice LAN, Wavelength Router, and WebViewer are trademarks of Cisco Systems, Inc.; Changing the Way We Work, Live, Play, and Learn, Discover All Thats Possible, and Empowering the Internet Generation, are service marks of Cisco Systems, Inc.; and Aironet, ASIST, BPX, Catalyst, CCDA, CCDP, CCIE, CCNA, CCNP, Cisco, the Cisco Certified Internetwork Expert Logo, Cisco IOS, the Cisco IOS logo, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Enterprise/Solver, EtherChannel, EtherSwitch, FastHub, FastSwitch, IOS, IP/TV, LightStream, MICA, Network Registrar, PIX, Post-Routing, Pre-Routing, Registrar, StrataView Plus, Stratm, SwitchProbe, TeleRouter, and VCO are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and certain other countries. All other trademarks mentioned in this document or Web site are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0105R) INTELLECTUAL PROPERTY RIGHTS: THIS DOCUMENT CONTAINS VALUABLE TRADE SECRETS AND CONFIDENTIAL INFORMATION OF CISCO SYSTEMS, INC. AND ITS SUPPLIERS, AND SHALL NOT BE DISCLOSED TO ANY PERSON, ORGANIZATION, OR ENTITY UNLESS SUCH DISCLOSURE IS SUBJECT TO THE PROVISIONS OF A WRITTEN NON-DISCLOSURE AND PROPRIETARY RIGHTS AGREEMENT OR INTELLECTUAL PROPERTY LICENSE AGREEMENT APPROVED BY CISCO SYSTEMS, INC. 
THE DISTRIBUTION OF THIS DOCUMENT DOES NOT GRANT ANY LICENSE IN OR RIGHTS, IN WHOLE OR IN PART, TO THE CONTENT, THE PRODUCT(S), TECHNOLOGY OF INTELLECTUAL PROPERTY DESCRIBED HEREIN.
Cisco Advanced Services,DCN Copyright 2007, Cisco Systems, Inc. All rights reserved.
COMMERCIAL IN CONFIDENCE.
Figure 1.2 WAN EDGE DESIGN
Figure 1.3 INET EDGE DESIGN
Figure 1.4 DCN DESIGN
Figure 1.5 DESIGN FOR 1G SERVERS
Figure 1.6 DESIGN FOR 10G SERVERS
Figure 1.7 SAN CORE EDGE DESIGN
Figure 1.8 MNET DESIGN
Figure 1.9 TAPE SAN DESIGN
Figure 2.1 INTERNET EDGE FIREWALL
Figure 2.2 DMZ FIREWALL
Figure 2.3 PARTNER FIREWALL
Figure 2.4 WAN EDGE FIREWALLS
Figure 2.5 VIRTUAL FIREWALL
Figure 2.6 DMX + TIMEFINDER
Figure 2.7 DMX + RECOVERPOINT
Figure 3.1 CONTROL PLANE POLICING
Figure 3.2 LOGICAL PLANES OF ROUTER
Figure 4.1 TRANSPARENT FIREWALL NETWORK
Figure 4.2 ROUTED FIREWALL NETWORK
Figure 4.3 IDS PLACEMENT
Figure 4.4 WEB TO DATABASE SERVER TRAFFIC FLOW
Figure 6.1 FCoE TOPOLOGY
Figure 7.1 VMWARE WITH STANDALONE SERVER ARCHITECTURE
Figure 7.2 VMWARE WITH BLADE SERVER ARCHITECTURE
Figure 7.3 TYPICAL SERVER CONNECTIVITY
Figure 7.4 BLADE SERVER ARCHITECTURE WITH FLIPPED-U DESIGN AND VSS
Figure 7.5 BLADE SERVER ARCHITECTURE IN PASS-THROUGH MODE
Figure 9.1 POD DESIGN TYPE-1 1 GIG RACK GROUP
Figure 9.2 POD DESIGN TYPE-2 10 GIG RACK GROUP
Tables

Table 1 Project Team Contact Information
Table 2 Current Project Site List
Table 3 Project Contact Information
Document Information
Author: Talha Hashmi
Change Authority: Cisco Advanced Services
Change Forecast: High
Template Version: 4.1
Modification History
Rev    Date         Originator    Status    Comment
0.1    25-May-09                  Draft
0.2    10-July-09                 Draft
Introduction
Preface
This document, known as the High Level Design (HLD), addresses the architecture and technology recommendations for building the new Data Center in Albuquerque, New Mexico for Molina Healthcare. The information in this document reflects the technical requirements gathered in the CRD and the final BOM. Specific implementation details of each technology will be covered in the Low Level Design document.
Audience
This document is intended for use by the Cisco AS and Molina Healthcare engineering teams. The technologies and recommendations decided here will dictate the implementation details in the Low Level Design document.
Scope
The scope of this document covers the New Mexico Data Center design architecture and technology integration, in reference to the requirements gathered in the CRD and the hardware represented in the Bill of Materials. The HLD includes the features and functions that will satisfy the stated technical objectives of the project.
Assumptions
This document is focused on the design specific to the New Mexico Data Center (NM DC-2 solution). Any Cisco hardware and/or software information in this document is based on current performance estimates and feature capabilities.
Related Documents
[1] SharePoint (All Network Related Documents) (Molina)
[2] Molina CRD DOC_Final_Network.doc (Cisco / Molina)
[4] 702773_2938557_Molina_PDI_SOW_revRM20090220v1.doc (Cisco)
[5] Molina CRD_V1.4.doc (Cisco)
[6] BOM (Cisco)
References
None.
Project Overview
Customer Description
Molina Healthcare, Inc. is among the most experienced managed healthcare companies serving patients who have traditionally faced barriers to quality healthcare, including individuals covered under Medicaid, the Healthy Families Program, the State Children's Health Insurance Program (SCHIP) and other government-sponsored health insurance programs. Molina has health plans in California, Michigan, New Mexico, Ohio, Texas, Utah, Washington, Missouri and Nevada, as well as 19 primary care clinics located in Northern and Southern California. The company's corporate headquarters are in Long Beach, California. Molina's success is based on its primary focus on the Medicaid and low-income population, and on its commitment to case management, member outreach and low-literacy programs. More than 25 years ago, the late C. David Molina, MD, founded the company to address the special needs of Medicaid patients. Today, Molina carries out his mission of emphasizing individualized care that places the physician in the pivotal role of managing healthcare.
Project Overview
The primary business requirement for building the new Data Center is to consolidate and migrate the existing Molina Healthcare network infrastructure from
a high-risk earthquake zone, while also increasing network capacity, high availability and resiliency.
Project Scope
The scope of this project covers the planning, design, testing and implementation of the NM DC as described in the SOW. Cisco Advanced Services has been engaged with the Molina HC engineering team in collecting the requirements, which were compiled and delivered as the CRD. The second deliverable under Phase 1 of this project is this High Level Design document, which will be followed by a Low Level Design document in the second phase.
Project Team
The following Molina and Cisco resources are members of the project team.
Table 1 Project Team Contact Information
Name            Title
Dale Singh      Project Manager
Talha Hashmi    Lead Network Consulting Engineer
Steve Hall      L4-L7 Consulting Engineer
Damon Li        SAN Network Consulting Engineer
Eric Stiles     Security Network Consulting Engineer
Project Sites
The following sites are currently in scope for the DCN project.
Table 2 Current Project Site List
State
CA
NM
CA
NM
Figure 1.1 BLOCK LEVEL DESIGN
(Block diagram: ISP 1 and ISP 2 links, WAN Edge with MPLS, Core L3, Aggregation L3-L2, Access L2, PNET access, firewall, load balancer, IPS, SAN, and in-band/out-of-band management blocks, including the Hughes Way DR DC.)
1. WAN EDGE
2. INET EDGE (Internet Edge)
3. PNET (Partner Network)
4. DCN (Data Center Network)
5. MNET (Management Network)
Each block-level design is further translated into a high-level design and the technologies that match the required functions.
Figure 1.2 WAN EDGE DESIGN
(WAN Edge block diagram: VZ, WAN 1.1, WAN 1.2 and WAN 2 links terminating at Layer 3, with connections to MNET and INET.)
Recommendation: In alignment with Molina's vision to have an offshore NOC in addition to the remote NOC at Hughes Way, it will be preferable from a security design perspective to provision offshore NOC access through the WAN Edge rather than through the INET Edge.
Note:
1. Service provisioning in this block is explained in detail in Chapter 2 of this document.
2. WAAS and the 6509 will be repurposed from the existing DC environment.
Figure 1.3 INET EDGE DESIGN
(Internet Edge block diagram: ISP-1 and ISP-2 links, GSS, VPN and INET firewalls, DMZ Core with IDS, DMZ Edge and DMZ servers, 1 Gig and 10 Gig links, with connectivity to PNET.)
The INTERNET EDGE block will interface: 1. the two ISP connections, 2. WAN traffic through the DCN Core to the WAN Edge, 3. secure access to the Partner network, and 4. the DCN Core. This block will also host the DMZ servers. Services hosted in this block are: 1. Global Site Selector, 2. Firewall, 3. Server Load Balancing, and 4. Intrusion Prevention. The routers terminating the ISP links will be 7206VXRs, which will aggregate into the CAT 6509 DMZ Core switches. The DMZ Core will provision security services for Partner and Internet access through dedicated ASAs, and access to the DMZ servers through CAT 3750 switches.
Note: 1. Service provisioning in the INET block is explained in detail under chapter 2 of this document. 2. The DMZ access switches will be repurposed from the existing DC environment. (CAT 3750s are not DC-class switches.)
1.1.3 PNET
Partner access will be provisioned through the INET EDGE block. Only point-to-point Layer 3 connectivity will be provisioned to the partner devices, which will be managed by the partner.
Caution: Since the PNET devices are not owned by Molina and only L3 connections are provisioned, proper capacity planning is essential to provision a scalable design.
If the predicted growth for such partner connections is high, it is recommended to evolve the PNET design to be similar to the WAN EDGE and provide dedicated services.
[Figure: DCN design — Layer 3 core connecting to the INET EDGE, 10 Gig links to the Layer 3/Layer 2 aggregation with service chassis and IDS, and access PODs with 10 Gig, 4 Gig and 1 Gig server connectivity including FCoE]
1.1.4 DCN
DCN (Data Center Network) is a three-tiered architecture which is based on Cisco best practices and also aligns with Molina's requirements. The DCN block will interface with the INET EDGE via 10 Gig connectivity, which will provide access to the WAN, PNET and INET. The DCN architecture is an N+1 design with 10 Gig connectivity in each layer, i.e. Core, Aggregation and Access. The Core and Aggregation layers will use the Nexus 7010 switch, which can be virtualized to provide traffic segmentation for Production and Dev-Test environments as well as scalability for future growth. The selection of Nexus technology is based on Data Center class features, recommended and discussed in detail under the technology section. Additionally, a Cat 6509 will be used as a service chassis in the aggregation layer to provision services like SLB (Server Load Balancing) and security. The service chassis design allows for a modular approach to grow and scale services as required in the future. For functional details of each DCN layer please refer to section 1.2, DCN Design Principles. For feature recommendations please refer to chapter 3. The DCN access layer is designed to reduce structured cabling by consolidating the server connections within the rack using ToR technologies. The DCN access layer design is of two types: Type 1: This design is to accommodate the existing servers that have 1 Gig NICs.
[Figure: Type 1 access rack (POD) design — 1 Gig server connectivity to ToR, uplinks to the aggregation layer; oversubscription (OS) 1:1.2, OS TBD]
This design uses the Nexus 2148 as ToR (Top of Rack) to provision 48 ports of 1 Gig connectivity in Active/Active or Active/Standby NIC configurations. The Nexus 5010 is used as MoR (Middle of Row) to aggregate the traffic from the Nexus 2148s. Each Nexus 2148 can provide up to 40 Gig of bandwidth to the rack. The rack also consolidates the HBA connectivity through ToR MDS 9124 switches, which provide 32 Gig of bandwidth. The design represented above is termed a POD. Each POD constitutes 5 access racks.
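The ToR/MoR relationship above can be sketched as a fabric extender association on the Nexus 5010. This is a minimal illustrative sketch, not the project configuration; interface and FEX numbers are assumptions:

```
! Hypothetical Nexus 5010 (MoR) configuration attaching a Nexus 2148 (ToR) as FEX 100
feature fex
interface ethernet 1/17
  switchport mode fex-fabric      ! uplink toward the 2148
  fex associate 100
! Once associated, the 2148 host ports appear as eth100/1/x on the 5010
interface ethernet 100/1/1
  switchport access vlan 10      ! example server VLAN
```

With this model the 2148 behaves as a remote line card of the 5010, which is why it homes to a single parent switch.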
Note: Exception for SAN ToR: based on server requirements, some servers may connect directly to the SAN core. For the SAN ToR (edge switch) design, please refer to the SAN section 1.1.5.
The Nexus 5010 in the MoR is used for Nexus 2148 ToR access aggregation and provides 10 Gig uplink connectivity to the DCN aggregation layer. Based on the desired oversubscription ratio per POD and the availability of 10 Gig ports, the middle rack can also accommodate a limited number of 10 Gig servers. The management design is likewise segmented to support 5 racks, with the Nexus 2148 as ToR edge and the 5010 as MoR spine.
Caution: The Nexus 2148 does not allow port channeling to the host with the NX-OS available today. Therefore, if there are any server blade center chassis requiring a port channel to the access port, the POD can provision limited connectivity by leveraging the first 8 ports of the Nexus 5010. The Nexus 5010 provisions L2 connectivity only, which is in line with Molina's requirements.
Type 2: This design is to accommodate the new 10 Gig servers with FCoE capability.
[Figure: Type 2 access rack design — 10 Gig server connectivity to ToR; oversubscription (OS) TBD]
This design uses the Nexus 5010 as ToR to provision 10 Gig Ethernet with FCoE connectivity to the servers. Each rack has two dedicated Nexus 5010s with Ethernet uplinks to the Nexus 7000s in the DCN aggregation layer and Fibre Channel uplinks to the MDS 9513. The POD architecture does not necessarily apply to this type as compared to Type 1, but from the management perspective the architecture remains the same, i.e. one spine per 5 racks.
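The FCoE attachment described above can be sketched on a Nexus 5010 as an FCoE VLAN mapped to a VSAN, with a virtual Fibre Channel interface bound to the server-facing port. VLAN, VSAN and interface numbers below are illustrative assumptions:

```
! Hypothetical Nexus 5010 FCoE sketch: VLAN 200 carries VSAN 2 traffic
feature fcoe
vsan database
  vsan 2
vlan 200
  fcoe vsan 2
! Bind a virtual FC interface to the CNA-facing 10 Gig port
interface vfc 1
  bind interface ethernet 1/1
  no shutdown
vsan database
  vsan 2 interface vfc 1
```

The server's CNA then carries LAN traffic on its Ethernet uplink and SAN traffic on vfc 1, which the 5010 forwards toward the MDS 9513 over its native Fibre Channel uplinks.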
Note: For functional details of each DCN layer please refer to section 1.2, DCN Design Principles. For feature recommendations please refer to chapter 3.
1.1.5 SAN
[Figure: SAN core-edge topology — redundant Fabrics A and B, each with Core 1 and Core 2 directors and edge switches Edge 1 through Edge 40]
The Storage Area Network (SAN) is a two-tier core-edge topology. It is based on Molina's requirements and Cisco's best practices.
The SAN core-edge topology consists of two redundant fabrics. Each fabric has two core directors and multiple edge switches. In the core-edge architecture the core directors support all the storage (target) ports in each fabric as well as ISL connectivity to the edge switches. The core directors act as the central insertion point for FCIP SAN extension for replication between sites and for SANTap for network-based traffic splitting. This topology provides consolidation of storage ports at the core.
The hosts connect to edge switches. The edge switches are connected to the core via ISL trunks. Since storage is consolidated at the core switches in this topology, the design can support advanced SAN features like IVR, SME, DMM, SANTap, FCIP and virtualization on the core switches. This topology also provides a deterministic host-to-storage oversubscription ratio. This is a future-proof architecture for FCoE: when Molina is ready to deploy FCoE, the edge switches can be swapped out for Cisco Nexus 5000 FCoE switches without additional cables and re-wiring.
[Figure: MNET design — management firewall (MNET-FW) toward the WAN Edge, ToR edge switches on racks R1–R5 with 1G uplinks and 10/100/1000 server ports]
1.1.6 MNET
MNET (Management Network) is designed to provide true out-of-band management to the network devices and servers in the DC. The management network will host the management technologies. Access to the management network will be secured through dedicated firewalls between the management network and the other network segments. The management network was designed based on inputs from the Molina engineering team; it uses the Nexus 2148 for 1 Gig connectivity and the CAT 2960 for 10/100/1000 (for server connectivity) as ToR on each rack. These are aggregated at the Nexus 5010 (spine switch). The aggregation design is segmented to 5 racks as preferred by Molina (except for network racks, which will be in groups of 8 racks). Further, all spine switches will aggregate to the management core.
Caution: There is limited redundancy in the management network, as the Nexus 2148, which acts as a remote line card, can only home to one 5010 with the technology available today. Therefore the network does possess a single point of failure in two cases: 1. If the spine switch fails, connectivity to all the ToRs homing to that spine will be lost. 2. If a ToR (2148 / 2960) fails, only connectivity to that rack will be lost. However, there is N+1 redundancy at the management core. The single point of failure can be eliminated by dual-homing the ToR into 2 spines, which is supported in a release scheduled for July. If that route is selected, additional cabling will be required to support this redundancy.
Note: To keep the design standard, all access and network racks will be provisioned with a management switch as ToR. The ToR switch may contain more physical ports than required, especially in the network racks, as the number of devices will be less compared to the access racks. However, this decision was reached in consideration of Molina's requirement to minimize inter-rack cabling.
1.2.1.1 DC Core
This layer is purely L3 and provides high-speed routing between the different aggregation layer switches. In addition, the DC Core layer also provides connectivity to the INET EDGE. The core of the network will be based on the Nexus 7000 platform. The Core will be virtualized using VDC technology to provide path isolation between Production and Test/Dev environments.
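The VDC-based path isolation can be sketched as follows on a Nexus 7010. VDC names and port ranges are illustrative assumptions, not the project allocation:

```
! Hypothetical VDC layout: separate contexts for production and dev-test,
! each owning its own front-panel ports
vdc PROD
  allocate interface ethernet 1/1-8
vdc DEVTEST
  allocate interface ethernet 1/9-16
! Move the session into a VDC to configure it independently
switchto vdc PROD
```

Each VDC runs its own control plane and configuration, so a routing or STP issue in the dev-test context cannot leak into production.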
The core directors have 18+4 cards installed for SANTap and FCIP. It is not yet determined which direction Molina will take for data replication across sites; the 18+4 cards will support either SANTap or FCIP at the time of deployment.
ISL ports should be placed into VSAN 1. This is a best practice, as VSAN 1 can never be deleted. However, to meet this requirement, VSAN 1 should never be placed into a suspended state. ISLs require dedicated bandwidth; between switches, this is typically 4 Gb. ISLs to blade centers should follow the same requirements. As a starting point, two 4 Gb ISLs between a blade center and its core switch have proven to be sufficient. With both an A and a B fabric connected to a blade center, this provides 16 Gb of bandwidth, based on two 4 Gb links per fabric.
Caution: Cisco does not provide embedded Fibre Channel switches for blade chassis. There are four HP c-Class blade chassis in Molina that have Brocade switches. Molina will have to procure the Cisco 9124e embedded blade switches from the server vendor. Fabric Manager Server's Performance Manager should be monitored to determine whether ISLs are under- or over-sized. ISLs should be increased if the trending and prediction queries in FMS show it is required. The description field on an ISL port should contain the source switch name and port and the remote switch name and port. ISL [source switch name-physical port to destination switch name-physical port], e.g. switch1-fc1/1 to switch2-fc1
Recommendation: The descriptions for the server/host HBA-connected ports on the switch should include the server name, the HBA vendor and model, and the HBA instance on the server. Host [hostname-HBAvendor-hba instance], e.g. server1-hba0
The storage ports and the tape libraries will be connected to the 24-port line cards. The ports on this card can operate at 1/2/4 Gb/sec; at all speeds, the ports operate at full rate. Media servers should connect to the same director as tape devices to minimize traffic traversing the ISLs between the directors. For ease of troubleshooting and management, it is recommended to leave all unused ports in a shutdown state and to turn on ports as and when they are required. It is also recommended that detailed descriptions for each port be defined as they are enabled.
Recommendation: The descriptions for the storage target ports connected to the SAN switch should include the array model, the array serial number and the port identifier. Storage [storage array serial number and port id], e.g. 8300-3c
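The three description conventions above can be applied on an MDS switch as shown below. Interface numbers and names are illustrative examples only, not actual devices from this design:

```
! Illustrative MDS port descriptions following the naming conventions
interface fc1/1
  switchport description switch1-fc1/1 to switch2-fc1/2   ! ISL
interface fc2/1
  switchport description server1-hba0                     ! host HBA
interface fc3/1
  switchport description 8300-3c                          ! storage array serial-port
```

Consistent descriptions like these make `show interface description` output self-documenting during troubleshooting.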
Storage ports should be added to line cards in a round-robin fashion, as storage ports have the most potential to run at line rate.
[Figure: Backup SAN topology — tape libraries and media servers connected to the Core 1 and Core 2 directors of Fabrics A and B, with edge switches Edge 1 through Edge 40]
As part of the unified SAN architecture, tape backup traffic will be integrated into the production SAN MDS switches. Separate backup VSANs will be created to segregate tape backup traffic. The core-edge SAN design accommodates VTLs and tape devices connecting directly to the core switches. The current design allows up to 60 VTLs and/or tape devices at a line-rate throughput of 4 Gb/s per fabric. Tape devices are not dual-homed, but can connect to either fabric. The backup servers can connect to multiple directors to maximize port utilization on both fabrics. Tape device and media server pairs should be connected to the same SAN director to reduce ISL traffic and achieve the highest throughput.
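Segregating backup traffic in its own VSAN can be sketched as below on a core director. The VSAN number, name and interface are illustrative assumptions:

```
! Hypothetical backup VSAN on an MDS core director: VSAN 30 segregates
! tape traffic, with a tape-facing port moved into it
vsan database
  vsan 30 name BACKUP_A
  vsan 30 interface fc1/10
interface fc1/10
  no shutdown
```

Because VSANs provide separate fabric services and zoning, a burst of backup traffic or a tape-side fabric event stays contained within VSAN 30 rather than affecting production.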
Redundant SAN fabrics are utilized with this design, providing dual-path connectivity to the host and storage devices. The SAN core is composed of two MDS 9513 directors on each fabric. The high availability features of the MDS 9513 include non-disruptive code upgrades, hot insertion and removal of blades, dual supervisor modules with stateful failover, and dual power supplies. New edge switches can be added to the topology non-disruptively and without major changes to the architecture.
Note: Specific technology recommendations and best practices are covered under chapters 3, 4, 5 & 6.
1.4.2 Resiliency
The biggest risk to a data center network is a spanning-tree loop. A spanning-tree loop in any part of the network can cause an outage for the entire network. L2 access layer switches are required to support stateful devices such as load balancers and server NIC teaming. Use of the recommended technologies such as UDLD, CoPP, etc. can further reduce these risks. The infrastructure and security design provide redundancy at multiple levels for a robust, secure, and resilient environment. The security infrastructure achieves this by providing hardware failover and maintaining session state during the failover. In addition to failover, the segmentation design provides each zone protection against cross-zone failure. The SAN network is a fully redundant architecture consisting of two fabrics. Core-to-core director and core-to-edge switch connectivity is bundled in port channels; link disruptions within a port channel will not affect data flow. Each host is connected separately to the redundant fabrics, so storage traffic continues to flow even in the event that one of the links is unavailable.
Note: Specific technology recommendations and best practices are covered under chapters 3, 4, 5 & 6.
Figure 2.1: INTERNET EDGE FIREWALL — Internet, outside and inside networks, Internet firewalls, and DMZ core, with primary and secondary paths to the data center.

The DMZ services zone focuses on providing Molina services to external customers. These services include web, e-portal, FTP, and others. They represent business-critical functions and increased security concerns. The design of this infrastructure is to provide a dedicated pair of hardware, manage inbound and outbound communications, and isolate services to decrease exposure to security breaches. The secure infrastructure will be managed by firewall and intrusion detection devices.
[Figure: DMZ zones — Internet, Internet Edge routers, DMZ firewalls, and DMZ Zones A, B and C]
Partner services provide a direct link to Molina business partners. These partners participate and assist Molina in day-to-day operations and represent business-critical connections/communications. The policy requirements for the partner zone differ greatly from all other zones, so in this design their traffic is segmented to a dedicated firewall to manage the security enforcement policy. The firewall infrastructure will segregate partners by interface and provide intrusion detection systems.
[Figure: Partner firewall design — Partners 1 through 4 connecting through dedicated partner firewalls to the DMZ core]
[Figure: WAN firewall design — remote offices via WAN ISP 1 (primary and secondary) and WAN ISP 2, plus Internet, through WAN firewalls to the data center]
2.2.3 Optimization
Molina currently has the Cisco WAAS solution for WAN optimization. The existing strategy for WAN optimization will be preserved in the new data center. The WAAS core devices are placed near the servers being optimized; when these servers migrate to the New Mexico data center, the core WAAS devices will migrate as well.
Molina HC requires stateful inspection of communications within the data center core and security segmentation of critical application services. The placement of firewalls and intrusion detection systems provides the ability for Molina HC to build a services chain to protect user-to-server or server-to-server communications. This, combined with the segmentation of critical assets based on internal security policy, external governance, and best practices, provides secure services.
[Figure: Data center core security — physical firewall and IDS in front of DB servers, application servers and backend servers]
The ACE server load balancers will have virtual partitions in place to offer resources for the development network. This will allow a development environment that mimics the production network.
[Figure: Redundant SAN fabrics A and B, each with Core 1 and Core 2 directors]
In this design, two EMC RecoverPoint appliances with the SANTap module in the MDS 9513 form a reliable cluster environment. Data replication can be load-balanced between the two RPAs. This solution provides data replication capabilities between different EMC disk array frames, with support for both Windows and VMware servers. For a fully redundant replication solution, the following components will be needed: two 18+4 linecards with SANTap licenses on each fabric per data center (including the modules in HW DR, a total of at least six 18+4 linecards); four EMC RecoverPoint appliances; and two redundant links between DC2 and HW.
[Figure: Replication topology — NM DC2 and HW (DR) connected via MPLS links; Fabrics A and B each with Core 1 and Core 2 directors]
Direct DMX-to-DMX replication with SRDF can also be deployed. In this scenario, the DMX with GE ports connects directly to the Cisco Cat 6500. The DMX will handle the encryption and compression of the replication link between the NM and HW data centers. The MDS directors are able to transport FCIP traffic as well: instead of connecting the Cat 6500 to the DMX, this connection can be made to the MDS director to transport FC packets over the IP network. For a redundant replication infrastructure, two MPLS links between the sites are recommended.
Note: Depending on the application bandwidth requirements, each access POD's uplinks can be manipulated to provide the desired bandwidth and oversubscription ratios.
DCN distribution switches connect to the core routers via 10 Gig links in a full-mesh topology. The full-mesh design provides for link redundancy and failover. LACP EtherChannels to the core will be used.
3.2 L2 Design
The access switches link to the aggregation switches. The V-shaped topology is used to account for both an aggregation switch failure and a port channel (multiple links) failure. The links will be Layer 2 to allow for dot1q tagging and multiple VLANs being carried between the aggregation and access layers. The Rapid PVST+ spanning tree protocol will be used for sub-second convergence in case of STP topology changes. HSRP will be used on the aggregation switches to provide gateway functionality for the servers and to provide predictable server egress traffic. The VLAN numbering scheme will be preserved from the existing DC to assist in migration.
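The spanning tree portion of this design can be sketched on an aggregation switch as follows; the VLAN range and priority value are illustrative assumptions:

```
! Hypothetical aggregation switch snippet: Rapid PVST+ enabled, with the
! STP root for the server VLANs placed on the HSRP-active switch
spanning-tree mode rapid-pvst
spanning-tree vlan 10-20 priority 4096   ! lowest priority wins root election
```

Aligning the STP root with the HSRP active router per VLAN keeps the forwarding path and the default gateway on the same aggregation switch.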
3.3 L3 Design
EIGRP is the protocol selected by Molina HC to be used as the IGP to the core. IP connectivity will be provided by configuring addresses on SVIs (Switched Virtual Interfaces). The use of SVIs, as opposed to configuring IP addresses on the physical links, provides the capability to increase link capacity without any disruption to the traffic flow: a new physical link can be added to an existing link without bringing the connections down. This provides operational flexibility for future growth. EIGRP will be used as the dynamic routing protocol, and the networks on the links will be advertised in the EIGRP AS. There will be ECMP (Equal Cost Multi Path) from the distribution to the core block, which will ensure equal-cost load balancing. Load balancing allows a router to distribute traffic over all the router network ports that are the same distance from the destination address. Load balancing increases the utilization of network segments, which increases the effective network bandwidth.
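The SVI-based L3 design above can be sketched in NX-OS as follows. The AS number, VLAN and address are illustrative assumptions, not the project addressing plan:

```
! Hypothetical NX-OS L3 snippet: the address lives on an SVI, advertised into EIGRP
feature interface-vlan
feature eigrp
router eigrp 100
interface vlan 100
  ip address 10.10.100.2/24
  ip router eigrp 100
  no shutdown
```

Because the IP address sits on the SVI rather than a physical port, member links can be added to the underlying port channel without touching the L3 configuration.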
Cisco NX-OS supports the Equal Cost Multi Path (ECMP) feature with up to 16 equal-cost paths in the EIGRP route table and the unicast RIB. EIGRP can be configured to load balance traffic across some or all of those paths. Cisco NX-OS supports nonstop forwarding, graceful restart and MD5 authentication for EIGRP. As a best practice, Cisco recommends enabling authentication and graceful restart (high availability). Configuration guidelines for the core: based on discussion with Molina architects, Cisco AS recommends the following:
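The ECMP, authentication and graceful restart recommendations can be sketched as below; the key chain name, key string and path count are illustrative assumptions:

```
! Hypothetical EIGRP tuning per the recommendations above
key chain EIGRP-KEYS
  key 1
    key-string example-secret
router eigrp 100
  maximum-paths 8       ! widen ECMP beyond the default
  graceful-restart      ! NSF-aware restart (typically on by default)
interface vlan 100
  ip authentication mode eigrp 100 md5
  ip authentication key-chain eigrp 100 EIGRP-KEYS
```

MD5 authentication prevents an unauthorized device from forming an adjacency, while graceful restart lets the peer keep forwarding through a supervisor switchover.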
3.4 HA Technologies
3.4.1 Logical Redundancy
3.4.1.1 HSRP (Hot Standby Router Protocol)
The design includes the proposal to use HSRP on the aggregation switches for those VLANs that are trunked to the aggregation or access switches. The aggregation switches provide Layer 3 terminations (interface VLAN) for the access switch VLANs; HSRP configured on each switch pair will provide first-hop redundancy for hosts connected to these VLANs. Cisco recommends HSRP hello=250 msec and hold=750 msec on the Nexus 7000 for applications which need faster recovery. For the rest of the applications, HSRP second-based timers can be used (1 sec and 3 sec). The HSRP protocol is a relatively mature and well understood protocol; consequently there are no major recommendations to be made in terms of the HSRP configuration within the core infrastructure. Cisco AS agrees with the recommendation to synchronize the configuration of the HSRP active router and the STP root switch per VLAN; even though the access switch design is an STP non-looped design, this is deemed a best practice. It is recommended to use unique HSRP groups per VLAN. Alternatively, if the number of VLANs exceeds the number of HSRP groups allowed on the supervisor, it is still possible to re-use the same HSRP group in multiple VLANs, as the Nexus 7000 maintains a separate MAC table per VLAN; however, this design may not interoperate with other switch vendors' hardware. Additionally, Cisco AS recommends that the following HSRP features be used:
HSRP authentication between peers (not critical in the data center, however a leading practice)
HSRP timers (250 msec hello, 750 msec hold time)
HSRP preempt and delay (for STP convergence)
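The HSRP recommendations above can be sketched on an aggregation-pair SVI as follows; the group number, key string, priority and addresses are illustrative assumptions:

```
! Hypothetical HSRP instance using the recommended fast timers,
! preemption with delay, and authentication
feature hsrp
interface vlan 100
  hsrp 100
    authentication md5 key-string example-secret
    preempt delay minimum 180      ! wait for STP convergence before preempting
    priority 110                   ! higher priority on the intended active switch
    timers msec 250 msec 750       ! 250 ms hello / 750 ms hold
    ip 10.10.100.1                 ! virtual gateway address for the servers
```

The preempt delay keeps a recovering switch from taking over as active before its uplinks and spanning tree have fully converged.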
during switchover. During normal NSF operation, CEF on the active supervisor engine synchronizes its current FIB and adjacency databases with the FIB and adjacency databases on the redundant supervisor engine. Upon switchover of the active supervisor engine, the redundant supervisor engine initially has FIB and adjacency databases that are mirror images of those that were current on the active supervisor engine. Each protocol depends on CEF to continue forwarding packets during switchover while the routing protocols rebuild the Routing Information Base (RIB) tables. After the routing protocols have converged, CEF updates the FIB table and removes stale route entries. CEF then updates the line cards with the new FIB information.
Online diagnostics are categorized as bootup, on-demand, scheduled or health monitoring. Bootup diagnostics run during bootup; on-demand diagnostics run from the CLI; scheduled diagnostics run at user-designated intervals or specified times while the switch is connected to a live network; and health monitoring runs in the background. Cisco AS recommends online diagnostics as a very useful feature in determining the health of Nexus 7000 hardware. Cisco AS recommends at a minimum that the bootup diagnostics be set to level complete. In addition, it is recommended to use the on-demand diagnostics to verify the correct operation of the new Catalyst 6500 hardware as it is installed at all the sites. It is proposed to use specific online diagnostics for pre-deployment testing and verification of new hardware before progressing to more advanced testing. Please refer to the GOLD whitepaper at the URL below for additional information, including those diagnostic tests that are recommended to be run as part of the network pre-deployment testing. http://www.cisco.com/en/US/docs/switches/datacenter/sw/4_0/nxos/system_management/configuration/guide/sm_gold.html
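The bootup-level and on-demand recommendations above can be sketched as follows; the module number is an illustrative assumption:

```
! Hypothetical GOLD settings: complete bootup diagnostics, plus an
! on-demand run and result check for a newly installed module
diagnostic bootup level complete
diagnostic start module 3 test all
show diagnostic result module 3
```

Running the on-demand tests before a module carries production traffic surfaces ASIC or memory faults while the hardware can still be swapped painlessly.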
3.4.6 Trunking
Cisco recommends that all trunk ports be configured to use 802.1Q encapsulation (the standard); additionally, the trunk mode should be set to nonegotiate, i.e. always on. By disabling negotiation, i.e. DTP (Dynamic Trunking Protocol), faster convergence can be achieved by avoiding the latency associated with the protocol, which can prevent a port from being added to STP for up to 5 seconds. As VLAN 1 is enabled on all switch ports and trunks by default, it is good practice to restrict the size of VLAN 1 as much as possible; therefore it is recommended to clear VLAN 1 from trunk ports. Note that even though VLAN 1 is cleared from the trunk port, this VLAN is still used by the processor to pass control frames such as VTP and CDP. Furthermore, only the specific user VLANs and the management VLAN should be enabled on a trunk port; all other VLANs should be cleared from trunk ports. An 802.1Q trunk has the concept of a native VLAN, which is defined as the VLAN a port will return to when not trunking and is the untagged VLAN. By default this is VLAN 1; however, as VLAN 1 will be cleared from trunks, it is advisable to configure another VLAN (user or management) as the native VLAN. Cisco understands that VLANs will be reused at each access switch and that all access switches might host any server VLANs in the network.
The approach for trunking VLANs is to explicitly add them to the trunk as needed, as opposed to trunking all the VLANs and then pruning them.
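The trunking guidance above can be sketched on an NX-OS access uplink as follows; the interface, native VLAN and allowed-VLAN list are illustrative assumptions (on the Nexus platform DTP does not exist, so trunks are effectively always nonegotiate):

```
! Hypothetical trunk port: 802.1Q, explicit VLAN list with VLAN 1 cleared,
! and a non-default native VLAN
interface ethernet 1/1
  switchport
  switchport mode trunk
  switchport trunk native vlan 999       ! unused VLAN as native
  switchport trunk allowed vlan 10,20,30 ! only the required VLANs
```

New server VLANs are then added with `switchport trunk allowed vlan add`, keeping the trunk's VLAN footprint explicit.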
3.4.10 ISSU
In a Nexus 7000 series chassis with dual supervisors, the in-service software upgrade (ISSU) feature can be used to upgrade the system software while the system continues to forward traffic. ISSU uses the existing features of nonstop forwarding (NSF) with stateful switchover (SSO) to perform the software upgrade with no system downtime.
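An ISSU is driven from the CLI roughly as follows; the image filenames are illustrative placeholders, not actual release images:

```
! Hypothetical ISSU flow on a dual-supervisor Nexus 7000:
! preview the impact first, then perform the non-disruptive upgrade
show install all impact kickstart bootflash:n7000-s1-kickstart.bin system bootflash:n7000-s1-dk9.bin
install all kickstart bootflash:n7000-s1-kickstart.bin system bootflash:n7000-s1-dk9.bin
```

The impact check reports whether the upgrade can proceed non-disruptively before any supervisor switchover takes place.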
A router can be logically divided into three functional components or planes: 1. Data Plane 2. Management Plane 3. Control Plane
The vast majority of traffic generally travels through the router via the data plane; however, the route processor must handle certain packets, such as routing updates, keepalives, and network management traffic. This traffic is often referred to as control and management plane traffic. The route processor is critical to network operation; any service disruption to the route processor, and hence the control and management planes, can lead to business-impacting network outages. A Denial of Service (DoS) attack targeting the route processor, which can be perpetrated either inadvertently or maliciously, typically involves high rates of traffic destined to the route processor itself that result in excessive CPU utilization. Such an attack can be devastating to network stability and availability and may include the following symptoms:
High route processor CPU utilization (near 100%)
Loss of line protocol keepalives and routing protocol updates, leading to route flaps and major network transitions
Interactive sessions via the Command Line Interface (CLI) become slow or completely unresponsive due to high CPU utilization
Route processor resource exhaustion: resources such as memory and buffers become unavailable for legitimate IP data packets
Packet queues back up, leading to indiscriminate drops (or drops due to lack of buffer resources) of other incoming packets. CoPP addresses the need to protect the control and management planes, ultimately ensuring routing stability, reachability, and packet delivery. It uses a dedicated control-plane configuration via the IOS Modular Quality of Service CLI (MQC) to provide filtering and rate-limiting capabilities for control plane packets.
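The MQC-style CoPP model can be sketched as below in NX-OS syntax. The ACL, class and policy names and the police rates are illustrative assumptions, not recommended production values:

```
! Hypothetical NX-OS CoPP sketch: rate-limit routing-protocol traffic
! destined to the supervisor
ip access-list copp-acl-routing
  permit eigrp any any
class-map type control-plane match-any copp-class-routing
  match access-group name copp-acl-routing
policy-map type control-plane copp-policy
  class copp-class-routing
    police cir 1024 kbps bc 64000 bytes conform transmit violate drop
control-plane
  service-policy input copp-policy
```

Conforming routing traffic reaches the supervisor while a flood exceeding the committed rate is dropped in hardware before it can exhaust the CPU.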
needed, the other CoPP policy entries can be updated to account for this traffic.
The supervisor module hosts both the management plane and the control plane and is critical to the operation of the network. Any disruption of, or attack on, the supervisor module will result in serious network outages. For example, excessive traffic to the supervisor module could overload it and slow down the performance of the entire NX-OS device. Attacks on the supervisor module can be of various types, such as denial-of-service (DoS) attacks that generate IP traffic streams to the control plane at a very high rate. These attacks force the control plane to spend a large amount of time handling these packets and prevent the control plane from processing genuine traffic. These attacks can impact device performance and have the following negative effects:
High supervisor CPU utilization
Loss of line protocol keep-alive messages and routing protocol updates, which lead to route flaps and major network outages
Interactive sessions using the CLI become slow or completely unresponsive due to high CPU utilization
Resources, such as memory and buffers, might be unavailable for legitimate IP data packets
Packet queues fill up, which can cause indiscriminate packet drops
4. Security Technologies
To achieve the design presented in this document, Cisco will utilize multiple security technologies to fulfill the design requirements. These technologies include firewall stateful failover, virtualized firewalls, intrusion detection systems, and virtualized intrusion detection sensors.
Firewalls mentioned in the above section will be configured in the routed mode of operation. However, the global data center service firewall will be configured in the transparent mode of operation to ease insertion of new services into the network without making any changes to the IP address space. In multiple context mode, the firewall mode cannot be set separately for each context; only one firewall mode can be set for the entire security appliance.
Cisco firewalls include many advanced features, such as multiple security contexts (virtualized firewalls) and transparent (Layer 2) or routed (Layer 3) firewall operation. A transparent firewall is a Layer 2 firewall that acts like a "bump in the wire" and is not seen as a router hop by connected devices. The firewall connects the same network on its inside and outside interfaces, but each interface must be on a different VLAN.
Note: The transparent mode security appliance does not pass CDP packets or IPv6 packets, or any packets that do not have a valid EtherType greater than or equal to 0x600. For example, IS-IS packets are not allowed to pass. An exception is made for BPDUs, which are supported. It is possible to establish routing protocol adjacencies through a transparent firewall; one can allow OSPF, RIP, EIGRP, or BGP traffic through based on an extended access list. Likewise, protocols like HSRP or VRRP can pass through the security appliance. Non-IP traffic (for example AppleTalk, IPX, BPDUs, and MPLS) can be configured to go through using an EtherType access list. When the security appliance runs in transparent mode without NAT, the outgoing interface of a packet is determined by performing a MAC address lookup instead of a route lookup. Route statements can still be configured, but they apply only to security appliance-originated traffic. For example, if a syslog server is located on a remote network, you must use a static route so the security appliance can reach that subnet. If you use NAT, then the security appliance uses a route lookup instead of a MAC address lookup.
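The EtherType access-list mechanism described above can be sketched as follows on an ASA; the ACL name and interface names are illustrative assumptions:

```
! Hypothetical ASA transparent-mode fragment: permit BPDUs and MPLS
! (non-IP EtherTypes) through the bridged interfaces
firewall transparent
access-list ETHERTYPE-IN ethertype permit bpdu
access-list ETHERTYPE-IN ethertype permit mpls-unicast
access-group ETHERTYPE-IN in interface inside
access-group ETHERTYPE-IN in interface outside
```

The EtherType ACL is applied inbound on both bridged interfaces so the non-IP frames are permitted in each direction.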
Table 3: Transparent firewall feature support

DHCP — The transparent firewall can act as a DHCP server, but it does not support the DHCP relay commands. DHCP relay is not required because DHCP traffic can be allowed to pass through using two extended access lists: one that allows DHCP requests from the inside interface to the outside, and one that allows the replies from the server in the other direction.
Routing — Static routes can be added for traffic originating on the security appliance. Dynamic routing protocols can be allowed through the security appliance using an extended access list.
IPv6 — IPv6 traffic cannot be allowed using an EtherType access list.
Multicast — Multicast traffic can be allowed through the security appliance by allowing it in an extended access list.
VPN — The transparent firewall supports site-to-site VPN tunnels for management connections only. It does not terminate VPN connections for traffic through the security appliance. VPN traffic can be passed through the security appliance using an extended access list, but non-management connections are not terminated. WebVPN is also not supported.
Note The system execution space configuration resides in Non-Volatile RAM (NVRAM), whereas the actual configurations for security contexts are stored in local Flash memory, on a network server, or both. Context configurations residing on a network server can be accessed via the TFTP, FTP, HTTPS, or HTTP protocols from the system execution space. Access from the system execution space across the network is provided via the designated admin context.
Note The admin context must be created before defining other contexts. Additionally, it must reside on the local disk. Using the admin context as a regular context for through traffic is not recommended, even though it is possible.
Recommendation To allow contexts to share interfaces, it is recommended that you assign unique MAC addresses to each context interface. The MAC address is used to classify packets within a context. If you share an interface but do not have unique MAC addresses for the interface in each context, then the destination IP address is used to classify packets. The destination address is matched against the NAT configuration; however, this method has some limitations compared to the unique MAC address method. For this reason, the mac-address auto command will appear in the system configuration.
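The recommendation above amounts to a single command entered in the system execution space:

```
! System execution space: auto-generate unique MAC addresses
! for shared context interfaces
mac-address auto
```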
During normal operation, the active module continually passes per-connection stateful information to the standby module every 10 seconds. After a failover occurs, the same connection information is available at the new active module. Supported end-user applications are not required to reconnect to keep the same communication session. The state information passed to the standby module includes the following data:
- NAT translation table
- TCP connection states
- UDP connection states (for connections lasting at least 15 seconds)
- HTTP connection states (optional)
- H.323, SIP, and MGCP UDP media connections
- ARP table
State Link - To use stateful failover, a stateful failover link needs to be configured to pass all state information. This link can be the same as the failover link, but we recommend using a separate link: the state traffic can be large, and performance is improved with separate links. The IP address and MAC address for the failover and state links do not change at failover. For multiple context mode, the failover and state links reside in the system configuration. These interfaces are the only configurable interfaces in the system configuration.
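A sketch of failover and dedicated state-link configuration on the primary unit; the interface names and addresses are placeholders, not values from this design:

```
failover lan unit primary
failover lan interface FOLINK GigabitEthernet0/2
failover interface ip FOLINK 192.168.254.1 255.255.255.252 standby 192.168.254.2
! Dedicated stateful failover link, separate from the failover link
failover link STATELINK GigabitEthernet0/3
failover interface ip STATELINK 192.168.253.1 255.255.255.252 standby 192.168.253.2
failover
```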
Traffic distribution to multiple IDS sensors can be achieved by using mirroring technologies (RSPAN and VACL) for multi-gigabit traffic analysis. The logical topology shows the IDS placement at the presentation tier and at the database tier. When a web/application server has been compromised and the hacker attacks the database, the second sensor reports the attack. In a consolidated data center environment, servers for the different tiers may be connected to the same physical infrastructure, and multiple IDS sensors can provide the same function as in the logical topology of Figure 4.3.
In Figure 4.3, IDS1 monitors client-to-web server traffic and IDS2 monitors web/application server-to-database traffic. When a hacker compromises the web/application tier, IDS1 reports an alarm; when a compromised web/application server attacks the database, IDS2 reports an alarm. HTTPS traffic can be inspected if the IDS sensors are combined with a device that provides SSL offload.
The following sequence takes place:
1. The Multilayer Switch Feature Card (MSFC) receives client-to-server traffic from the data center core.
2. The ACE diverts traffic directed to the VIP address.
3. The ACE sends HTTPS client-to-server traffic to the SSLSM for decryption.
4. The SSLSM decrypts the traffic and sends it in clear text on an internal VLAN to the CSM.
5. The IDS sensor monitors traffic on this VLAN.
6. The ACE performs the load balancing decision and sends the traffic back to the SSLSM for re-encryption.
The solution being provided to Molina Healthcare includes some routed configuration on the ACE server load balancing module as well as some one-armed configurations. The ACE server load balancer can also operate a number of virtual contexts, which allow multiple virtual devices to run concurrently. Each context has its own dedicated hardware resources as well as a distinct configuration.
Caution The specific load balancing rules are still being investigated at this time. Each load balanced application will be examined to determine the proper load balanced topology and virtual context.
5.1
5.2 WAN Optimization
The Cisco Wide Area Application Services (WAAS) solution is currently deployed at Molina Healthcare. The existing WAAS deployment strategy will remain unchanged in the new data center. When the core servers move to the new data center, the core WAAS appliances will migrate with them.
6. SAN Technologies
The feature recommendations are based on the following state-of-the-art technologies:
- MDS 9513 director with Gen-2 chassis and line cards
- SANTap with RecoverPoint as an integrated data replication solution for VMware and Windows
- NPV and FlexAttach features
- FCIP for a redundant link between the data centers
Scope of Use
- VSANs: Provide logical isolation between open-systems disk and backup tape access.
- Device-alias: Provides the ability to associate human-readable names with the pWWNs of end devices, and to manipulate them easily if a pWWN changes due to device replacement.
- Enhanced zoning: Ensures zones and zonesets are safely deployed with minimal chance of human error.
- Leverages the port density of the deployed hardware without sacrificing performance.
- Role Based Access Control (RBAC): Allows Molina to control the access and management of the various physical and logical entities in the SAN infrastructure being deployed.
- CFS: Cisco Fabric Services is used to replicate information between connected switches. Applications supported by CFS include device-alias, NTP, AAA, and roles, to name a few.
- Callhome: Provides real-time e-mail notification of the events and incidents happening on the switch. Can be configured to send automatic notification of critical events to Cisco to generate a case and automatically open a support call to resolve the incident.
- Port-channels: Provide larger aggregate bandwidth and enhance high availability.
- NPIV: Allows multiple FCID logins on one N port. Must be enabled for NPV connections.
- NPV: Reduces domain ID usage; the switch functions as a line-card extension to an NPIV-enabled switch.
- FCIP: Extends the SAN over IP, connecting fabrics (switches) over IP using the Fibre Channel protocol.
- SANTap: Offloads host traffic, allowing network-based traffic splitting to redirect data to RecoverPoint appliances.
6.1.1 VSANs
The proposed SAN design includes the following VSAN(s) based on Molina's requirements and Cisco's best practices. The design allows multiple storage environments within the same fabric, and VSANs will be deployed to support these technologies. This logical isolation enables environments to be isolated from each other: changes to the development environment zoning will not affect the production environment. As per Molina's requirement for supporting shared storage, backup, and dedicated storage environments within a single fabric, VSANs will be deployed to support these environments. This logical isolation, for example, enables environments that do not require daily zoning, such as tape, to be isolated from those that require daily zoning activities, such as the shared disk and dedicated storage environments.

Limit the granularity of VSANs; typically production, backup, and dev/test:
- Production VSAN: for all business-critical devices, with the highest change management requirements. Do not create separate production VSANs for different applications; this unnecessarily complicates the design and maintenance of the SAN.
- Backup VSAN: backup devices typically require high maintenance (media server reloads, tape drive replacements). The high-activity window of backup devices is normally opposite the production high-activity window, so maintenance can be done during the daytime, and change management windows are normally not as stringent as in production environments. Having a dedicated backup VSAN allows changes to be made without affecting business-critical functions.
- Dev/Test VSAN: a physically dedicated test environment would be ideal, providing the ability to test different versions of switch software and features without any impact to the production environment. However, separate equipment is quite costly; using some ports in a separate dev/test VSAN is the next best thing. Host HBAs and drivers can be tested in a SAN environment. The only requirement is the availability of dedicated array ports for the dev/test environment.

Making VSANs more granular by dedicating a Windows VSAN, AIX VSAN, SAP VSAN, HR VSAN, or Domino VSAN does reduce the potential for inadvertent operator error causing widespread damage. However, it requires a large number of available array ports, as each VSAN requires a minimum of two dedicated array ports to satisfy the A and B fabrics. In general, we do not recommend this approach: some VSANs may grossly underutilize an array port's bandwidth, while other VSANs may experience bottlenecks due to the inability to assign additional array ports to that VSAN. What we have proposed is common industry practice. Having a smaller number of large VSANs does require some housekeeping to ease management.
For each Fabric, the numbering scheme consists of using an offset of 500 between the A and B fabrics. The VSAN ranges for the two Data Centers are shown in Table 5.
Table 5 VSAN Ranges for Various Data Centers
Data Centers: NM, HW
For each Fabric, the numbering scheme will consist of using an offset of 1 between the A and B fabrics: Fabric A will start at VSAN 101, while Fabric B will start at VSAN 102. VSAN 1 will not be used on either fabric for any Fibre Channel fabric. VSANs in the different Data Centers are not assumed to share resources, except in the case of the replication VSAN; only the replication VSAN shares a common VSAN number. By giving the non-replication VSANs unique VSAN numbers, there is no possibility of an unintentional VSAN merge. The domain IDs will make use of an odd/even scheme to further differentiate which fabric (A or B) they are on; the Domain ID numbering scheme is specified in the Low Level Design document. The number of VSANs will be determined in the Low Level Design document as well; it will consist of at least two: a production host/disk SAN and a tape backup SAN. All VSANs will have the following features enabled:
- Static port-based VSANs
- Load balancing scheme: src/dst/oxid
- Static Domain IDs
- VSANs defined on all the directors and switches
- VSAN 1 utilized only for administrative use and not for host-to-storage access
- FCID persistency configured for all VSANs
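As a sketch of the VSAN features listed above on an MDS switch; the VSAN number, name, domain ID, and interface are illustrative placeholders:

```
vsan database
  vsan 101 name PROD-A
  vsan 101 loadbalancing src-dst-ox-id
  vsan 101 interface fc1/1
!
! Static domain ID and persistent FCIDs for VSAN 101
fcdomain domain 11 static vsan 101
fcdomain fcid persistent vsan 101
```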
6.1.4 Security
This High Level SAN design describes the necessary security features based on Molina's requirements and Cisco's best practices. The following security features are listed by security type of access: Management, Fabric, or Data. The features selected may include multiple items from each security type, or from just one or two, based again on Molina's requirements and Cisco's best practices.

Management Access (Security Type)
- Password-protected console
- Secure Shell (SSH)
- Secure File Transfer Protocol (SFTP)
- Role Based Access Control (RBAC): each user has a unique username and password, with roles providing the minimal privileges or rules required to perform their job
- SNMPv3 (FM Server) / SNMPv1 (for SRM support)
- TACACS+ to be used for authentication to the MDS infrastructure
Data Access (Security Type)
- VSANs
- Enhanced Zoning
Only Fibre Channel attached devices (HBAs, storage, tape, analyzers) can access the Fibre Channel network. In order for end devices to communicate, administratively controlled access must be granted. End device access security is controlled by allowing specific parameters of those devices (Port World Wide Names (pWWN), or specific ports). End devices must be in the correct VSAN, zone, and zoneset; all require administrator authorization. The MDS has a port security feature to defeat port spoofing that can be enabled to prevent a duplicate pWWN from logging on to the fabric; in general this is not deployed, as data center physical access is tightly controlled. The current tape backup environment has encryption built into the tape devices. As noted, Cisco also has the capability to encrypt tape; this is a licensed feature. It can make use of the 18/4 line cards currently deployed in the Molina network; actual quantities of 18/4s would require an analysis of the backup environment. IPsec can be enabled on FCIP links to provide data encryption over a wide area network; this is normally deployed when running over public networks.
RBAC Scope of Responsibility
- For those members of the SAN team that require complete write access to the MDS.
- For those personnel that do not require write access to the MDS.
- For those personnel that perform day-to-day operations such as port enabling and zoning.
6.1.6 Logging
This High Level SAN design describes the external logging to be implemented based on Molina's requirements and Cisco's best practices. Logging is to be done to the FM Server's syslog service or to a standalone syslog server.
6.1.7 Monitoring
Monitoring of the fabric is to be configured using Cisco Fabric Manager Server. Additional flows need to be configured to monitor the ISLs and critical servers to enable Molina to do performance trending and capacity planning. Custom reports can be configured using the FMS web interface to monitor fabric health as well as inventory. Fabric Manager will allow you to manage multiple fabrics (FMS license required). Tabs within FM allow you to quickly toggle between physical fabrics. If the switches do not have an FMS license, only one physical fabric may be opened at a time.
6.1.9 Port-Channel
When there are multiple ISLs between any two MDS switches, it is recommended practice to configure them as a port-channel. It is also recommended to turn on the port-channel protocol when configuring the port-channel. The load-balancing algorithm on the port-channel should preferably be set to exchange-ID based load balancing.
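A minimal sketch of the recommendation above; the port-channel number and member ports are illustrative. `channel mode active` enables the port-channel protocol, while the exchange (OX_ID) based load-balancing scheme is configured per VSAN:

```
interface port-channel 10
  channel mode active
!
interface fc1/1, fc1/2
  channel-group 10 force
  no shutdown
!
! Exchange-based (OX_ID) load balancing is a per-VSAN attribute
vsan database
  vsan 101 loadbalancing src-dst-ox-id
```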
Enabling NPIV on the MDS switch is the only configuration requirement for the switch. This is a global setting and applies to all VSANs. NPV-enabled end devices are required to connect to an NPIV-enabled switch. NPIV is enabled for all core directors.
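The corresponding NX-OS commands are brief; note that enabling NPV mode on an edge switch is disruptive:

```
! Core director: enable NPIV globally (applies to all VSANs)
feature npiv
!
! Edge switch operating as an NPV device (disruptive: the switch
! erases its configuration and reboots when NPV is enabled)
feature npv
```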
6.1.12 Licenses
The proposed High Level Design includes various features that will require the following product licenses to be installed:
Table 7 Feature License
The proposed Molina SAN will seamlessly integrate with the Nexus 5000 switches. In the Molina architecture, the new Nexus switches are a drop-in replacement for the 9124 switches, resulting in an FCoE-capable network. No additional cabling or re-wiring is needed for FCoE.
7.1.2 AGGREGATION
The Aggregation layer switches interconnect the Access switches. In order to provide high-speed computing capabilities, 10 Gigabit EtherChannel will be used from the Aggregation to the Access layer. The servers will connect to the Access switches using copper 1 Gigabit links. The Nexus 7000 is the recommended platform for the Aggregation layer, as it will provide Molina with future bandwidth and port density scalability. This choice of platform will also enable Molina to move toward Cisco Virtual Port Channel (vPC) in the future should there be a need for it. The number of line cards (payload cards) being used to accommodate the existing environment is 4, where the chassis can support up to 8.
7.1.3 ACCESS
Most of the access architecture is designed to accommodate the 1 Gigabit physical servers Molina has today. The Nexus 5010 architecture can be repurposed when Molina upgrades the servers from 1 Gigabit to 10 Gigabit. The Access switches link to the Aggregation switches. The V-shaped design is used to account for both an Aggregation switch failure and a port channel (multiple links) failure. The links will be Layer 2 to allow for dot1q tagging and multiple VLANs carried between the Aggregation and Access layers. vPC (virtual Port Channel) technology can be used based on bandwidth requirements, and Rapid PVST+ Spanning Tree protocol will be used for sub-second convergence in case of STP topology changes. HSRP will be used
on the Aggregation switches to provide gateway functionalities for the servers and to provide predictable server egress traffic.
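A sketch of the Rapid PVST+ and HSRP gateway configuration on an NX-OS aggregation switch; the VLAN, addresses, and priority are placeholders, not values from this design:

```
spanning-tree mode rapid-pvst
feature hsrp
feature interface-vlan
!
interface Vlan100
  ip address 10.10.100.2/24
  ! HSRP group 100: virtual gateway address for the server VLAN;
  ! higher priority plus preempt on the primary aggregation switch
  hsrp 100
    ip 10.10.100.1
    priority 110
    preempt
  no shutdown
```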
7.1.4 Services
The services chassis in the Aggregation layer is about 60% populated with service modules and can be scaled further, as service modules like the ACE can be virtualized for efficient resource utilization.
Figure 7.1 Links in the vPC
A virtual port channel (vPC) allows links that are physically connected to two different Cisco Nexus 7000 Series devices to appear as a single port channel to a third device (see Figure 7.1). The third device can be a switch, server, or any other networking device that supports port channels. A vPC can provide Layer 2 multipathing, which allows you to create redundancy and increase bisectional bandwidth by enabling multiple parallel paths between nodes and load balancing traffic across them.
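A minimal vPC sketch on one Nexus 7000 peer; the domain ID, keepalive addresses, and port-channel numbers are placeholders:

```
feature vpc
feature lacp
!
vpc domain 1
  peer-keepalive destination 172.16.1.2 source 172.16.1.1 vrf management
!
! vPC peer-link between the two Nexus 7000 peers
interface port-channel 1
  switchport
  switchport mode trunk
  vpc peer-link
!
! Member port channel toward the access switch or server
interface port-channel 10
  switchport
  switchport mode trunk
  vpc 10
```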
...
[Figure: VDC partitioning of the Nexus 7000 — VDC-1 (Production), VDC-3, and VDC-4]
When a VDC is created, the NX-OS software takes several of the control plane processes and replicates them for the VDC. This replication of processes allows VDC administrators to use virtual routing and forwarding instance (VRF) names and VLAN IDs independent of those used in other VDCs. Each VDC administrator essentially interacts with a separate set of processes, VRFs, and VLANs. Molina can leverage this technology to provide path isolation between their production and test/dev environments.
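Creating a VDC and allocating interfaces to it is a short exercise in NX-OS; the VDC name, interfaces, and resource limits below are illustrative:

```
! Create a VDC for the test/dev environment and assign ports to it
vdc TESTDEV
  allocate interface Ethernet2/1-8
  limit-resource vlan minimum 16 maximum 256
!
! Switch the session into the new VDC to configure it
switchto vdc TESTDEV
```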
- In Blade deployments, an inverted-U design should be used to maximize throughput.
- Configure broadcast storm control for VMotion VLANs to avoid a broadcast storm from an active host affecting non-active hosts.
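The broadcast storm control recommendation above maps to a per-interface setting; the interface and threshold here are placeholders:

```
! Limit broadcast traffic on the VMotion-facing port to 1% of link bandwidth
interface Ethernet1/10
  storm-control broadcast level 1.00
```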
However, with Blade servers, all servers in the chassis share the same uplinks. These uplinks consist of 8 ports, with 4 ports coming out of each Blade switch. With this in mind, it is best to consider a design that allows both Blade switches' uplinks to be in the forwarding state to maximize throughput. Figure 7.4 shows the proposed architecture for Molina's Blade server environment.
Figure 7.4 BLADE SERVER ARCHITECTURE WITH FLIPPED-U DESIGN AND vPC
The flipped-U topology design maximizes the throughput of the Blade servers while also providing the required redundancy. For this architecture to work seamlessly, the server chassis should support VLANs, dot1q trunking, RPVST (Spanning Tree Protocol), and trunk tracking.
The servers on the blades should be deployed in a NIC teaming fashion, with the primary NICs on the blades split across the Blade switches. The Blade switches should support uplink/downlink tracking such that if the uplinks from an integrated switch go down, the server primary links can be shut down so that the server can start using the alternate path without impacting user traffic. In Molina's new Data Center, Blade chassis will be used in pass-through mode, where the Blades connect to the Nexus 5010 Access switches directly. Since the Blades directly connect to the Nexus 5010, vPC can be leveraged and cross-chassis PortChannels (LACP, Link Aggregation Control Protocol) can be deployed for higher bandwidth gains.
8. Management
The primary goal of the management module is to facilitate the secure management of all devices and hosts within the enterprise network architecture. Because the management network has administrative access to nearly every area of the network, it can be a very attractive target to hackers. The management module is key for any network security management and reporting strategy. It provides the servers, services, and connectivity needed for the following:
- Device access and configuration
- Event collection for monitoring, analysis, and correlation
- Device and user authentication, authorization, and accounting
- Device time synchronization
- Configuration and image repository
- Network access control manager and profilers
The management firewall provides granular access control for traffic flows between the management hosts and the managed devices for in-band management. The firewall also provides secure VPN access to the management module for administrators located at the campus, branches, and other places in the network. The following are some of the expected threat vectors affecting the management module:
- Unauthorized access
- Denial-of-Service (DoS)
- Distributed DoS (DDoS)
- Man-in-the-Middle (MITM) attacks
- Privilege escalation
- Intrusions
- Network reconnaissance
- Password attacks
- IP spoofing
The OOB network segment hosts console servers, network management stations, AAA servers, analysis and correlation tools, NTP, FTP, syslog servers, network compliance management, and any other management and control services. A single OOB management network may serve all the enterprise network modules located at the headquarters. An OOB management network should be deployed using the following best practices:
- Provide network isolation
- Enforce access control
- Prevent data traffic from transiting the management network
The OOB management network is implemented at the headquarters using dedicated switches that are independent and physically disparate from the data network. The OOB management may also be logically implemented with isolated and segregated VLANs. Routers, switches, firewalls, IPS, and other network devices connect to the OOB network through dedicated management interfaces. The management subnet should operate
under an address space that is completely separate from the rest of the production data network. This facilitates the enforcement of controls, such as making sure the management network is not advertised by any routing protocols. This also enables the production network devices to block any traffic from the management subnets that appears on the production network links. Devices being managed by the OOB management network at the headquarters connect to the management network using a dedicated management interface or a spare Ethernet interface configured as a management interface. The interface connecting to the management network should be a routing protocol passive-interface, and the IP address assigned to the interface should not be advertised in the internal routing protocol used for the data network. Access lists using inbound and outbound access-groups are applied to the management interface to only allow access to the management network from the IP address assigned to the management interface and, conversely, to only allow access from the management network to that management interface address. In addition, only protocols that are needed for the management of these devices are permitted. These protocols could include SSH, NTP, FTP, SNMP, TACACS+, etc. Data traffic should never transit the devices using the connection to the management network.
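An IOS-style sketch of the controls described above; the addresses, interface, ACL name, and routing process are placeholders:

```
! Permit only management protocols between this device and the OOB subnet
ip access-list extended MGMT-IN
 permit tcp 10.100.0.0 0.0.0.255 host 10.100.1.5 eq 22
 permit udp 10.100.0.0 0.0.0.255 host 10.100.1.5 eq snmp
 permit udp 10.100.0.0 0.0.0.255 host 10.100.1.5 eq ntp
 deny   ip any any log
!
interface GigabitEthernet0/1
 description OOB management interface
 ip address 10.100.1.5 255.255.255.0
 ip access-group MGMT-IN in
!
! Keep the management interface out of the data network's routing protocol
router ospf 1
 passive-interface GigabitEthernet0/1
```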
8.1.2 SSH/Telnet
SSH should be the preferred way of remote access to all switches in the data center. Cisco switches support SSH, and any open-source client can be used to access the switches via SSH. SSH keys are generated locally on the switches. SSH users can be authenticated locally or via AAA services using the TACACS+ or RADIUS protocol. User access should be limited using an IP access list so that access to the switches is allowed from the network administration subnets only. All remote access configuration should be done conforming to Molina's remote access policy.
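On NX-OS, the corresponding configuration is small; the admin subnet and key size below are placeholders, and this is a sketch only:

```
! Generate a local RSA key and restrict vty access to the admin subnet
feature ssh
ssh key rsa 2048
!
ip access-list MGMT-ACCESS
  permit tcp 10.100.0.0/24 any eq 22
  deny ip any any
!
line vty
  access-class MGMT-ACCESS in
```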
8.1.3 Logging
Any device logging configuration should be done according to Molina's existing network logging standards. Cisco switches support both local and remote logging to hosts; a configuration template will be developed as part of the LLD (low level design) for the actual deployment of switches.
8.1.4 NTP
All switches in the new core network will be configured with an NTP source for clock synchronization. An NTP best practices white paper is available at the following URL: http://www.cisco.com/en/US/tech/tk869/tk769/technologies_white_paper09186a0080117070.shtml
If there are any existing NTP configuration best practices adopted at Molina, those should be taken into consideration for deployment in the new core infrastructure.
8.1.5 RBAC/AAA/TACACS+
In NX-OS, RBAC, AAA, and TACACS+ (as well as RADIUS) are integrated with each other. The AAA feature allows the administrator to verify the identity of, grant access to, and track the actions of users managing an NX-OS device. Cisco NX-OS devices support the Remote Authentication Dial-In User Service (RADIUS) and Terminal Access Controller Access Control System Plus (TACACS+) protocols. Cisco NX-OS devices perform local authentication or authorization using the local database, or remote authentication or authorization using one or more AAA servers. A preshared secret key provides security for communication between the NX-OS device and AAA servers. The administrator can configure a common secret key for all AAA servers or for only a specific AAA server.
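A hedged NX-OS sketch of TACACS+-based AAA; the server address, shared key, and group name are placeholders:

```
feature tacacs+
!
! Preshared secret protecting communication with the AAA server
tacacs-server host 10.100.0.20 key 0 ExampleSharedKey
!
aaa group server tacacs+ TAC-ADMIN
  server 10.100.0.20
  use-vrf management
!
! Authenticate logins against the TACACS+ server group
aaa authentication login default group TAC-ADMIN
```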
8.2.2 DCNM
DCNM is a management solution that maximizes overall data center infrastructure uptime and reliability, which improves business continuity. Focused on the management requirements of the data center network, DCNM provides a robust framework and rich feature set that fulfils the switching needs of present and future data centers. In particular, DCNM automates the provisioning process. DCNM is a solution designed for Cisco NX-OS-enabled hardware platforms. Cisco NX-OS provides the foundation for the Cisco Nexus product family, including the Cisco Nexus 7000 Series.
8.2.3 ANM
Cisco Application Networking Manager (ANM) software enables centralized provisioning, operations, and basic monitoring of Cisco data center networking equipment and services. It focuses on providing provisioning capability for Cisco Application Control Engine (ACE) devices, including ACE modules and appliances. It also supports operations management and monitoring for ACE devices. Cisco ANM simplifies Cisco ACE provisioning through forms-based configuration management of Layer 4-7 virtualized network devices and services. With Cisco ANM, network managers can create, modify, and delete all virtual contexts of Cisco ACE, as well as control the allocation of resources among the virtual contexts.
8.2.4 CSM
Cisco Security Manager is a powerful but easy-to-use solution for configuring firewall, VPN, and intrusion prevention system (IPS) policies on Cisco security firewalls, routers, and appliances. To deal with the complexity of different security devices, operating systems, and configuration interfaces, Cisco Security Manager has been designed to act as a layer of abstraction. The result is an application with usability enhancements that deliver a superior look and feel to simplify the process of scalable policy definition and deployment. For example, if a network or security administrator wants to implement a policy of limited instant-messaging traffic during business hours, they can do so in a series of simple clicks. The user experience is the same regardless of the actual security device type that is enforcing the rule, whether it is a Cisco PIX firewall, a Cisco IOS Software-based integrated services router, a Cisco ASA adaptive security appliance, or a Cisco Catalyst switch services module. Cisco Security Manager helps administrators reduce user error and maintain consistent policies across the secure network infrastructure.
9. Rack Design
9.1 Data Center Sizing: Number of Servers & Server NICs
A key component of the Data Center architectural development methodology is to understand the number of servers and the NICs per server in each server Aggregation Module (a.k.a. POD) within the Data Center. The following table defines the number of NICs for Windows servers from 1GS and HW. 1GS SERVER DETAILS:
Total Number of Physical Servers: 291
Total Number of Virtual Servers: 205

Server Description                  Qty of Phy. Srv   Total NICs
Physical servers with single NIC    154               154
Physical servers with dual NIC      104               208
Physical servers with 3 NICs        8                 24
Physical servers with 4 NICs        2                 8
Physical servers with 5 NICs        1                 5
Physical servers with 9 NICs        7                 63
Physical servers with 11 NICs       3                 33
Servers with unknown NIC count      12                24

Backup NIC and HBA counts: Not available.

Summary rows: Total Number of Servers; Total Number of Production NICs; Total Number of Management NICs of servers; Total Number of Appliance NICs; Total NIC count.
HW SERVER DETAILS:

Make     Qty of Phy. Srv
IBM      37
DELL     11
HP       8
Unisys   3

Total Number of Physical Servers: 59
Total Number of Virtual Servers: 29
Total Number of Servers: 88
Total Rack Space: 152U
Caution The above statistics have been captured based on the 1GS Hardware Inventory v1.3 (10-06-2009). The 1GS Hardware Inventory v1.3 has been sent to the customer for validation and is yet to be confirmed by Molina. These counts may change under the following conditions: 1. Servers from the non-migration list may move to the migration list. 2. New appliances may need to be hosted in the new Data Center. 3. New servers may be hosted in the new Data Center. 4. The above count includes the appliance list as per DCOPS_IP_Address_List 6-02-2009.xlsx, and the NIC count of these appliances needs to be explored. 5. Any change in the assumptions.
REQUIRED RACK SPACE
1. Server based: 1GS = 291 (Phy. Srv) / 16 (Srv/Rack) = 18.18 racks, ~4 PODs; or rack-unit based: 1GS = 428U / (32U/Rack) = 13.3 racks, ~3 PODs.
2. Server based: HWS = 59 (Phy. Srv) / 16 (Srv/Rack) = 3.68 racks, ~1 POD; or rack-unit based: HWS = 152U / (32U/Rack) = 4.75 racks, ~1 POD.
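The POD arithmetic above can be sketched in a few lines; the racks-per-POD figure of 5 is an assumption inferred from the quoted totals (it reproduces the 4/3/1/1 POD counts), not a value stated in this design:

```python
import math

SRV_PER_RACK = 16      # servers per rack, from the sizing rules above
U_PER_RACK = 32        # usable rack units per rack
RACKS_PER_POD = 5      # assumption: consistent with the quoted POD counts

def pods_by_servers(phys_servers: int) -> int:
    """PODs needed when sizing by physical server count."""
    racks = phys_servers / SRV_PER_RACK
    return math.ceil(racks / RACKS_PER_POD)

def pods_by_rack_units(total_u: int) -> int:
    """PODs needed when sizing by total rack units."""
    racks = total_u / U_PER_RACK
    return math.ceil(racks / RACKS_PER_POD)

print(pods_by_servers(291), pods_by_rack_units(428))  # 1GS: 4 3
print(pods_by_servers(59), pods_by_rack_units(152))   # HWS: 1 1
```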
Caution VOICE POD: No information available; assuming POD Type 1 without SAN switches and Layer 2 connectivity only.
[Figure: POD rack elevations — four 46U server racks, each with Cisco Nexus 2148T 1GE Fabric Extenders at top of rack and 2U servers below, and a MOR rack (Rack 3) holding Cisco MDS 9134 Multilayer Fabric Switches (DS-C9134-K9).]
BW available per rack = 40 Gig. CONNECTIVITY (based on 16 servers/rack, provisioning 20 Gig): Racks 1, 2, 4, and 5 share a similar TOR design, which interfaces with connections to the MOR (Rack 3) and connections to the SAN core. Total network space required per rack = 5 RU. Cables per rack for Racks 1, 2, 4, 5: (2K x 3 = 6) + (MDS x 2 = 8) = 14 cables.
Rack 3 is the MOR for the POD, which interfaces with: connections from the intra-POD racks,
connections to the DCN aggregation, and connections to the SAN core. Total network space required per rack = 8 RU. Cables per rack for Rack 3: intra-POD (2K x 3 = 6 cables x 4 racks) = 24 cables; MDS x 2 = 8 cables; access to aggregation = (5K x 2) = 8 ~ 16 cables; to MNET core = 2 cables.
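The MOR (Rack 3) cable budget above can be totaled as a quick check; taking the upper bound of 16 aggregation cables is an assumption for worst-case cable-tray planning, since the design quotes a range of 8 to 16.

```python
# Sketch of the Rack 3 (MOR) cable budget using the per-link counts
# quoted in this section; only the totals are computed here.

intra_pod   = 3 * 2 * 4   # 3 uplinks x 2 Nexus 2000 FEX x 4 server racks = 24
san_core    = 8           # MDS connections
aggregation = 16          # access-to-aggregation uplinks (upper bound of 8~16)
mnet_core   = 2           # links to the MNET core

total = intra_pod + san_core + aggregation + mnet_core
print(f"MOR cables (upper bound): {total}")
```

Landing roughly 50 cables in a single MOR rack is the figure that drives the 8 RU of network space reserved there, versus 5 RU in the server racks.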
[Figure: 10 Gig rack group elevations — 46U racks with Cisco Nexus 2148T 1GE Fabric Extenders and Cisco MDS 9134 Multilayer Fabric Switches (DS-C9134-K9) at top of rack, with 2U servers below.]
10 GIG RACK GROUP: Total rack space required by network equipment = 24 RU (with MDS as TOR).
BW available per rack = 10 Gig ports at line rate. Cables per rack, access to aggregation = TBD (max 16).
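As a rough comparison of the two rack designs above, the provisioned bandwidth per server can be derived from the per-rack figures. The 16-servers-per-rack density for the 10 Gig group is an assumption carried over from the 1G sizing; this section does not state its server count.

```python
# Per-server provisioned bandwidth sketch for the two rack designs.
def per_server_gbps(provisioned_gbps, servers_per_rack):
    """Provisioned bandwidth per server, in Gbps."""
    return provisioned_gbps / servers_per_rack

# 1G rack group: 20 Gig provisioned for 16 servers/rack
print(per_server_gbps(20, 16))   # 1.25 Gbps per server

# 10 Gig rack group: 10 Gig line rate; density assumed at 16 servers/rack
print(per_server_gbps(10, 16))   # 0.625 Gbps per server
```

If the 10 Gig group hosts fewer, denser servers, the per-server figure changes accordingly; the calculation is only meant to show how the per-rack bandwidth budgets translate to server-level provisioning.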
10 Document Acceptance
Name: ______________  Title: ______________  Company: ______________  Signature: ______________  Date: ______________
Name: ______________  Title: ______________  Company: ______________  Signature: ______________  Date: ______________
Appendix A
Cisco incorporates Fastmac and TrueView software and the RingRunner chip in some Token Ring products. Fastmac software is licensed to Cisco by Madge Networks Limited, and the RingRunner chip is licensed to Cisco by Madge NV. Fastmac, RingRunner, and TrueView are trademarks and in some jurisdictions registered trademarks of Madge Networks Limited. Copyright 1995, Madge Networks Limited. All rights reserved. Xremote is a trademark of Network Computing Devices, Inc. Copyright 1989, Network Computing Devices, Inc., Mountain View, California. NCD makes no representations about the suitability of this software for any purpose. The X Window System is a trademark of the X Consortium, Cambridge, Massachusetts. All rights reserved. NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED AS IS WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
AccessPath, AtmDirector, Browse with Me, CCDE, CCIP, CCSI, CD-PAC, CiscoLink, the Cisco NetWorks logo, the Cisco Powered Network logo, Cisco Systems Networking Academy, Fast Step, Follow Me Browsing, FormShare, FrameShare, GigaStack, IGX, Internet Quotient, IP/VC, iQ Breakthrough, iQ Expertise, iQ FastTrack, the iQ logo, iQ Net Readiness Scorecard, MGX, the Networkers logo, Packet, RateMUX, ScriptBuilder, ScriptShare, SlideCast, SMARTnet, TransPath, Unity, Voice LAN, Wavelength Router, and WebViewer are trademarks of Cisco Systems, Inc.; Changing the Way We Work, Live, Play, and Learn, Discover All That's Possible, and Empowering the Internet Generation, are service marks of Cisco Systems, Inc.; and Aironet, ASIST, BPX, Catalyst, CCDA, CCDP, CCIE, CCNA, CCNP, Cisco, the Cisco Certified Internetwork Expert Logo, Cisco IOS, the Cisco IOS logo, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Enterprise/Solver, EtherChannel, EtherSwitch, FastHub, FastSwitch, IOS, IP/TV, LightStream, MICA, Network Registrar, PIX, Post-Routing, Pre-Routing, Registrar, StrataView Plus, Stratm, SwitchProbe, TeleRouter, and VCO are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and certain other countries. All other trademarks mentioned in this document or Web site are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0105R) INTELLECTUAL PROPERTY RIGHTS: THIS DOCUMENT CONTAINS VALUABLE TRADE SECRETS AND CONFIDENTIAL INFORMATION OF CISCO SYSTEMS, INC. AND ITS SUPPLIERS, AND SHALL NOT BE DISCLOSED TO ANY PERSON, ORGANIZATION, OR ENTITY UNLESS SUCH DISCLOSURE IS SUBJECT TO THE PROVISIONS OF A WRITTEN NON-DISCLOSURE AND PROPRIETARY RIGHTS AGREEMENT OR INTELLECTUAL PROPERTY LICENSE AGREEMENT APPROVED BY CISCO SYSTEMS, INC.
THE DISTRIBUTION OF THIS DOCUMENT DOES NOT GRANT ANY LICENSE IN OR RIGHTS, IN WHOLE OR IN PART, TO THE CONTENT, THE PRODUCT(S), TECHNOLOGY, OR INTELLECTUAL PROPERTY DESCRIBED HEREIN.
Cisco Advanced Services, DCN. Copyright 2007, Cisco Systems, Inc. All rights reserved.
COMMERCIAL IN CONFIDENCE.
Corporate Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 526-4100
European Headquarters Cisco Systems Europe 11 Rue Camille Desmoulins 92782 Issy-Les-Moulineaux Cedex 9 France www-europe.cisco.com Tel: 33 1 58 04 60 00 Fax: 33 1 58 04 61 00
Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA www.cisco.com Tel: 408 526-7660 Fax: 408 527-0883
Asia Pacific Headquarters Cisco Systems Australia, Pty., Ltd Level 9, 80 Pacific Highway P.O. Box 469 North Sydney NSW 2060 Australia www.cisco.com Tel: +61 2 8448 7100 Fax: +61 2 9957 4350
Cisco Systems has more than 200 offices in the following countries and regions. Addresses, phone numbers, and fax numbers are listed on the Cisco Web site at www.cisco.com/go/offices.
Argentina Australia Austria Belgium Brazil Bulgaria Canada Chile China Colombia Costa Rica Croatia Czech Republic Denmark Dubai, UAE Finland France Germany Greece Hong Kong SAR Hungary India Indonesia Ireland Israel Italy Japan Korea Luxembourg Malaysia Mexico The Netherlands New Zealand Norway Peru Philippines Poland Portugal Puerto Rico Romania Russia Saudi Arabia Singapore Slovakia Slovenia South Africa Spain Sweden Switzerland Taiwan Thailand Turkey Ukraine United Kingdom United States Venezuela Vietnam Zimbabwe