
Cisco Advanced Services

MOLINA HEALTHCARE
Data Center Networking High Level Design Document V1.0 (Draft)

Corporate Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706 USA
http://www.cisco.com
Tel: 408 526-4000

Cisco Services.

THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS. THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY. The following information is for FCC compliance of Class A devices: This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio-frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference, in which case users will be required to correct the interference at their own expense. The following information is for FCC compliance of Class B devices: The equipment described in this manual generates and may radiate radio-frequency energy. If it is not installed in accordance with Ciscos installation instructions, it may cause interference with radio and television reception. This equipment has been tested and found to comply with the limits for a Class B digital device in accordance with the specifications in part 15 of the FCC rules. These specifications are designed to provide reasonable protection against such interference in a residential installation. However, there is no guarantee that interference will not occur in a particular installation. You can determine whether your equipment is causing interference by turning it off. If the interference stops, it was probably caused by the Cisco equipment or one of its peripheral devices. If the equipment causes interference to radio or television reception, try to correct the interference by using one or more of the following measures: Turn the television or radio antenna until the interference stops. Move the equipment to one side or the other of the television or radio. Move the equipment farther away from the television or radio. Plug the equipment into an outlet that is on a different circuit from the television or radio. (That is, make certain the equipment and the television or radio are on circuits controlled by different circuit breakers or fuses.) Modifications to this product not authorized by Cisco Systems, Inc. could void the FCC approval and negate your authority to operate the product. The following third-party software may be included with your product and will be subject to the software license agreement: CiscoWorks software and documentation are based in part on HP OpenView under license from the Hewlett-Packard Company. HP OpenView is a trademark of the Hewlett-Packard Company. Copyright 1992, 1993 Hewlett-Packard Company. The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCBs public domain version of the UNIX operating system. All rights reserved. 
Copyright 1981, Regents of the University of California. Network Time Protocol (NTP). Copyright 1992, David L. Mills. The University of Delaware makes no representations about the suitability of this software for any purpose. Point-to-Point Protocol. Copyright 1989, Carnegie-Mellon University. All rights reserved. The name of the University may not be used to endorse or promote products derived from this software without specific prior written permission. The Cisco implementation of TN3270 is an adaptation of the TN3270, curses, and termcap programs developed by the University of California, Berkeley (UCB) as part of the UCBs public domain version of the UNIX operating system. All rights reserved. Copyright 1981-1988, Regents of the University of California.


Contents

Contents
Figures
Tables
Document Information
    Review and Distribution
    Modification History
Introduction
    Preface
    Audience
    Scope
    Assumptions
    Related Documents
    References
Project Overview
    Customer Description
    Project Overview
    Project Scope
    Project Timeline Phase 1
    Project Team
    Project Sites
1. High Level Data Center Network Design
    1.1 DCN Functional Blocks
        1.1.1 WAN EDGE
        1.1.2 INET EDGE
        1.1.3 PNET
        1.1.4 DCN
        1.1.5 SAN
        1.1.6 MNET
            1.1.6.1 Out of Band Management
    1.2 DCN Design Principles
        1.2.1 Multi-layer Tiers
            1.2.1.1 DC Core
            1.2.1.2 Service & Distribution
            1.2.1.3 Access Layer
    1.3 SAN Design Principles
        1.3.1 SAN Core
        1.3.2 SAN Edge
        1.3.3 SAN Connectivity
            1.3.3.1 Inter-Switch Link (ISL) Connectivity
            1.3.3.2 Host Connectivity
            1.3.3.3 Storage Connectivity
            1.3.3.4 Tape Backup Architecture
            1.3.3.5 Data Encryption
    1.4 High Availability & Resiliency
        1.4.1 High Availability
        1.4.2 Resiliency
2. Data Center Services
    DMZ Services
    2.1 INET Services
        2.1.1 Security
        2.1.2 Load Balancing
    2.2 WAN Services
        2.2.1 Security
        2.2.2 Load Balancing
        2.2.3 Optimization
    2.3 DCN Aggregation Services
        2.3.1 Production Network
            2.3.1.1 Security
            2.3.1.2 Load Balancing
        2.3.2 Development Network
            2.3.2.1 Security
            2.3.2.2 Load Balancing
    2.4 SAN Services
        2.4.1 Data Replication
            2.4.1.1 Local Replication
            2.4.1.2 Remote Replication
3. Layer 1, 2, 3 Design & HA Technologies
    3.1 L1 Design
    3.2 L2 Design
    3.3 L3 Design
    3.4 HA Technologies
        3.4.1 Logical Redundancy
            3.4.1.1 HSRP (Hot Standby Router Protocol)
        3.4.2 UDLD (Uni-Directional Link Detection)
        3.4.3 NSF/SSO (Non Stop Forwarding / Stateful Switchover)
        3.4.4 GOLD (Generic Online Diagnostics)
        3.4.5 uRPF (Unicast Reverse Path Forwarding)
        3.4.6 Trunking
        3.4.7 VTP (VLAN Trunking Protocol)
        3.4.8 VLAN Hopping
        3.4.9 Unused Ports
        3.4.10 ISSU
    3.5 Control Plane and Management Plane Policing
        3.5.1 Developing a CoPP Policy
        3.5.2 CoPP on NX-OS
        3.5.3 CoPP Risk Assessment
4. Security Technologies
    4.1 Firewall Technologies
        4.1.1 Transparent Mode
            Overview
            4.1.1.1 Traffic Passing through Transparent Firewall
            4.1.1.2 Transparent Firewall in a Network
            4.1.1.3 Transparent Firewall Guidelines
            4.1.1.4 Unsupported Features in Transparent Firewall
    4.2 Routed Mode
        Overview
        4.2.1 Routed Firewall in a Network
    4.3 ASA Virtual Context
        Overview
        4.3.1 Understanding Multiple Contexts
        4.3.2 System Execution Space
        4.3.3 Admin Context
        4.3.4 User or Customer Contexts
    4.4 Packet Flow, Shared Interfaces and Classification in Multimode
    4.5 Failover Functionality Overview on the ASA
        4.5.1 Stateful Failover
        4.5.2 Failover and State Links
        4.5.3 Intrusion Detection & Prevention
5. Load Balancing & Technologies
    5.1 Server Load Balancing
        5.1.1 Routed Mode
        5.1.2 One-Armed Mode
    5.2 Global Server Load Balancing
    5.3 WAN Optimization
    6.1 Feature Recommendations
        6.1.1 VSANs
        6.1.2 Device Aliases
        6.1.3 Zoning and Zonesets
        6.1.4 Security
        6.1.5 Role Based Access Control (RBAC)
        6.1.6 Logging
        6.1.7 Monitoring
        6.1.8 Call Home
        6.1.9 Port-Channel
        6.1.10 N Port Identifier Virtualization
        6.1.11 N Port Virtualizer
        6.1.12 Licences
    6.2 Future SAN
        6.2.1 Consolidated IO (Fibre Channel over Ethernet FCoE)
7. Scalability & Virtualization
    7.1 DCN Scalability
        7.1.1 CORE
        7.1.2 AGGREGATION
        7.1.3 ACCESS
        7.1.4 Services
        7.1.5 SAN Scalability
    7.2 Network Virtualization Technologies
        7.2.1 Virtual Port Channel (vPC)
        7.2.2 Virtual Device Context (VDC)
    7.3 Server Virtualization
        7.3.1 Deploying VMWare
        7.3.2 VMWare with Standalone Server
        7.3.3 VMWare with Blade Server
    7.4 Access Layer Architecture for Blade Server
8. Management
    8.1 Network Management
        8.1.1 SNMP
        8.1.2 SSH/Telnet
        8.1.3 Logging
        8.1.4 NTP
        8.1.5 RBAC/AAA/TACACS+
    8.2 Management Technologies
        8.2.1 Cut-Through Proxy (Management Firewall)
        8.2.2 DCNM
        8.2.3 ANM
        8.2.4 CSM
        8.2.5 Fabric Manager
9. Rack Design
    9.1 Data Center Sizing - No. of Servers & Server NICs
    9.2 RACK & POD Design
        9.2.1 Rack Space Division
        9.2.2 POD Assignments
        9.2.3 POD Design
10. Document Acceptance


Figures

Figure 1    BLOCK LEVEL DESIGN

Figure 2    WAN EDGE DESIGN
Figure 3    INET EDGE DESIGN
Figure 4    DCN DESIGN
Figure 1.5  DESIGN FOR 1G SERVERS
Figure 1.6  DESIGN FOR 10G SERVERS
Figure 1.7  SAN CORE EDGE DESIGN
Figure 1.8  MNET DESIGN
Figure 1.9  TAPE SAN DESIGN
Figure 2.1  INTERNET EDGE FIREWALL
Figure 2.2  DMZ FIREWALL
Figure 2.3  PARTNER FIREWALL
Figure 2.4  WAN EDGE FIREWALLS
Figure 2.5  VIRTUAL FIREWALL
Figure 2.6  DMX + TIME FINDER
Figure 2.7  DMX + RECOVERPOINT
Figure 3.1  CONTROL PLANE POLICING
Figure 3.2  LOGICAL PLANES OF ROUTER
Figure 4.1  TRANSPARENT FIREWALL NETWORK
Figure 4.2  ROUTED FIREWALL NETWORK
Figure 4.3  IDS PLACEMENT
Figure 4.4  WEB TO DATABASE SERVER TRAFFIC FLOW
Figure 6.1  FCoE TOPOLOGY
Figure 7.1  VMWARE WITH STANDALONE SERVER ARCHITECTURE
Figure 7.2  VMWARE WITH BLADE SERVER ARCHITECTURE
Figure 7.3  TYPICAL SERVER CONNECTIVITY
Figure 7.4  BLADE SERVER ARCHITECTURE WITH FLIPPED-U DESIGN AND VSS
Figure 7.5  BLADE SERVER ARCHITECTURE IN PASS-THROUGH MODE
Figure 9.1  POD DESIGN TYPE-1 1GIG RACK GROUP
Figure 9.2  POD DESIGN TYPE-2 10 GIG RACK GROUP


Tables

Table 1    Project Team Contact Information
Table 2    Current Project Site List
Table 3    Project Contact Information


Document Information
Author:             Talha Hashmi
Change Authority:   Cisco Advanced Services
Change Forecast:    High
Template Version:   4.1

Review and Distribution


Organization              Name                Title
Molina Healthcare         Amir Desai          CIO, IT NOC & Operations
Molina Healthcare         Sri Bharadwaj       Director, IT Infrastructure
Molina Healthcare         Shawn Shahzad       Director, IT Transaction Services
Molina Healthcare         Larry Santucci      Director, IT Infrastructure
Molina Healthcare         Joel Pastrana       Manager, IT Infrastructure
Molina Healthcare         Sudhakar Gummadi    Manager, IT Networking & Telecom
Molina Healthcare         Rajeev Siddappa
Cisco Advanced Services   Dale Singh          Project Manager
Cisco Advanced Services   Talha Hashmi        Lead Network Consulting Engineer
Cisco Advanced Services   Steve Hall          L4-L7 Network Consulting Engineer
Cisco Advanced Services   Eric Stiles         Security Network Consulting Engineer
Cisco Advanced Services   Damon Li            SAN Network Consulting Engineer
Cisco Advanced Services   Umar Saeed          Delivery Manager

Modification History

Rev   Date         Originator      Status   Comment
0.1   25-May-09    Talha Hashmi    Draft    Initial draft for internal Cisco review
0.2   10-July-09   Talha Hashmi    Draft    Released


Introduction
Preface
This document, known as the High Level Design (HLD), addresses the architecture and technology recommendations for building the new data center in Albuquerque, New Mexico, for Molina Healthcare. The information in this document reflects the technical requirements gathered in the Customer Requirements Document (CRD) and the final Bill of Materials (BOM). Specific implementation details of each technology will be covered in the Low Level Design (LLD) document.

Audience
This document is intended for use by the Cisco Advanced Services and Molina Healthcare engineering teams. The technologies and recommendations decided here will dictate the implementation details in the Low Level Design document.

Scope
The scope of this document covers the New Mexico data center design architecture and technology integration, with reference to the requirements gathered in the CRD and the hardware represented in the Bill of Materials. The HLD includes the features and functions that satisfy the stated technical objectives of the project.


Assumptions
This document is focused on the design specific to the New Mexico Data Center (NM DC-2 solution). Any Cisco hardware and/or software information in this document is based on current performance estimates and feature capabilities.

Related Documents
[1] SharePoint (All Network Related Documents) (Molina)
[2] Molina CRD DOC_Final_Network.doc (Cisco / Molina)
[4] 702773_2938557_Molina_PDI_SOW_revRM20090220v1.doc (Cisco)
[5] Molina CRD_V1.4.doc (Cisco)
[6] BOM (Cisco)

References
None.


Project Overview
Customer Description
Molina Healthcare, Inc. is among the most experienced managed healthcare companies serving patients who have traditionally faced barriers to quality healthcare, including individuals covered under Medicaid, the Healthy Families Program, the State Children's Health Insurance Program (SCHIP) and other government-sponsored health insurance programs. Molina has health plans in California, Michigan, New Mexico, Ohio, Texas, Utah, Washington, Missouri and Nevada, as well as 19 primary care clinics located in Northern and Southern California. The company's corporate headquarters are in Long Beach, California. Molina's success is based on the fact that it has focused primarily on the Medicaid and low-income population, and is committed to case management, member outreach and low-literacy programs. More than 25 years ago, the late C. David Molina, MD, founded the company to address the special needs of Medicaid patients. Today, Molina carries out his mission of emphasizing individualized care that places the physician in the pivotal role of managing healthcare.

Project Overview
The primary business requirement for building the new data center is to consolidate and migrate the existing Molina Healthcare network infrastructure out of a high-risk earthquake zone, while also increasing network capacity, high availability, and resiliency.

Project Scope
The scope of this project covers the planning, design, testing, and implementation of the NM DC as described in the SOW. Cisco Advanced Services has been engaged with the Molina Healthcare engineering team in collecting the requirements, which were compiled and delivered as the CRD. The second deliverable under Phase 1 of this project is this High Level Design document, which will be followed by a Low Level Design document in the second phase.

Project Timeline Phase 1


CRD Workshop:     April 20th, 2009
CRD Delivery:     May 11th, 2009
CRD Acceptance:   May 15th, 2009
BOM (Draft):      May 9th, 2009
BOM (Final):      May 15th, 2009
HLD (Draft):      June 10th, 2009
HLD (Final):      June 15th, 2009


Project Team
The following Molina and Cisco resources are members of the project team.
Table 1 Project Team Contact Information

Name           Title                                   Organization   Email
Dale Singh     Project Manager                         Cisco          dalising@cisco.com
Talha Hashmi   Lead Network Consulting Engineer        Cisco          tahashmi@cisco.com
Steve Hall     L4-L7 Network Consulting Engineer       Cisco          sthall@cisco.com
Damon Li       SAN Network Consulting Engineer         Cisco          damonli@cisco.com
Eric Stiles    Security Network Consulting Engineer    Cisco          erstiles@cisco.com

Project Sites
The following sites are currently in scope for the DCN project.
Table 2 Current Project Site List

Address              City          State   Postal Code
One Golden Shore     Long Beach    CA      90802 (1GS)
5610 Turning Dr SE   Albuquerque   NM      87106 (ALB)
1500 Hughes Way      Long Beach    CA      90810 (HWS)
-                    Albuquerque   NM      (NM-COLO)


1. High Level Data Center Network Design


1.1 DCN Functional Blocks
Based on the requirements gathered in the CRD from the Molina Healthcare engineering team, a block-level representation of the NM-DC-2 design is shown below. The diagram presents the overall data center design segmented into the functional blocks explained in detail below.


[Block-level diagram showing the WAN Edge, INET Edge, PNET, DCN, SAN, and MNET functional blocks]

Figure 1 BLOCK LEVEL DESIGN

1. WAN EDGE
2. INET EDGE (Internet Edge)
3. PNET (Partner Network)
4. DCN (Data Center Network)
5. MNET (Management Network)

Each functional block is further translated into a high level design and the technologies that match its required functions.


Figure 1.2 WAN EDGE DESIGN

1.1.1 WAN EDGE


The WAN EDGE block terminates remote branch access over WAN MPLS circuits and aggregates WAN traffic. This block hosts security services (firewall, IPS, proxy services) and Wide Area Application Services (WAAS) for optimizing and securing WAN traffic. Traffic passing through this block is destined for either: 1. the DC Core (DCN) for access to data and voice services; 2. the Internet (INET Edge) for Internet access; 3. the Management Network (MNET) for the Network Operations Center; or 4. the INET Edge for the DMZ servers. At the edge, 7206VXR routers aggregate into Catalyst 6509 switches. The WAN traffic has a dedicated ASA for securing access to the DCN and a WAAS appliance for optimizing the WAN traffic.

Recommendation: In alignment with Molina's vision to have an offshore NOC in addition to the remote NOC at Hughes Way, from a security design perspective it is preferable to provision offshore NOC access through the WAN Edge rather than through the INET Edge.

Note: 1. Service provisioning in this block is explained in detail in chapter 2 of this document. 2. The WAAS appliances and 6509 switches will be repurposed from the existing DC environment.


Figure 1.3 INET EDGE DESIGN

1.1.2 INET EDGE


The INTERNET EDGE block interfaces: 1. the two ISP connections; 2. WAN traffic via the DCN Core to the WAN Edge; 3. secure access to the partner network; and 4. the DCN Core. This block also hosts the DMZ servers. Services hosted in this block are: 1. Global Site Selector (GSS); 2. firewall; 3. server load balancing; and 4. intrusion prevention. The routers terminating the ISP circuits are 7206VXR, which aggregate into the Catalyst 6509 DMZ Core switches. The DMZ Core provisions security services for partner and Internet access through dedicated ASA firewalls, and connects to the DMZ servers through Catalyst 3750 switches.

Note: 1. Service provisioning in the INET block is explained in detail in chapter 2 of this document. 2. The DMZ access switches will be repurposed from the existing DC environment. (The Catalyst 3750s are not DC-class switches.)

1.1.3 PNET
PARTNER access will be provisioned through the INET EDGE block. Only point-to-point Layer 3 connectivity will be provisioned to the partner devices, which will be managed by the partner.

Caution: Since the PNET devices are not owned by Molina and only L3 connections are provisioned, proper capacity planning is essential to ensure a scalable design.


If high growth is predicted for such partner connections, it is recommended to evolve the PNET design toward one similar to the WAN EDGE and provide dedicated services.

Figure 4 DCN DESIGN


1.1.4 DCN
The DCN (Data Center Network) is a three-tier architecture based on Cisco best practices that also aligns with Molina's requirements. The DCN block interfaces with the INET EDGE over 10 Gig connectivity, which provides access to the WAN, PNET, and INET. The DCN is an N+1 design with 10 Gig connectivity at each layer, i.e. Core, Aggregation, and Access. The Core and Aggregation layers use the Nexus 7010 switch, which can be virtualized to provide traffic segmentation between the Production and Dev/Test environments as well as scalability for future growth. The selection of the Nexus platform is based on data-center-class features recommended and discussed in detail in the technology sections. Additionally, Catalyst 6509 switches will be used as service chassis in the aggregation layer to provision services such as SLB (Server Load Balancing) and security; the service chassis design allows a modular approach to grow and scale services as required in the future. For functional details of each DCN layer, refer to section 1.2, DCN Design Principles; for feature recommendations, refer to chapter 3. The DCN access layer is designed to reduce structured cabling by consolidating server connections within the rack using ToR (Top of Rack) technologies. The DCN access layer design is of two types. Type 1: this design accommodates the existing servers that have 1 Gig NICs.

Figure 1.5 DESIGN FOR 1G SERVERS


This design uses the Nexus 2148 as the ToR (Top of Rack) switch to provision 48 ports of 1 Gig connectivity in an Active/Active or Active/Standby NIC configuration. The Nexus 5010 is used as the MoR (Middle of Row) switch to aggregate the traffic from the Nexus 2148s. Each Nexus 2148 can provide up to 40 Gig of bandwidth to the rack. The rack also consolidates HBA connectivity through ToR MDS 9124 switches, which provide 32 Gig of bandwidth. The design represented above is termed a POD; each POD consists of five access racks.
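The oversubscription implied by these figures can be checked with simple arithmetic. The sketch below (Python, illustrative only; the POD-level uplink count is an assumption, not taken from this document) works through the per-rack and POD-level ratios.

    # Illustrative oversubscription arithmetic for a Type 1 (1 Gig) access rack,
    # using the figures quoted above: 48 x 1G server ports per Nexus 2148, up to
    # 40G of uplink bandwidth per 2148, and a POD of five access racks.

    def oversubscription(server_gbps: float, uplink_gbps: float) -> float:
        """Return the server-facing to uplink bandwidth ratio (N:1)."""
        return server_gbps / uplink_gbps

    rack_server_bw = 48 * 1      # 48 server ports at 1 Gig each
    rack_uplink_bw = 40          # up to 40 Gig from the Nexus 2148 to the MoR 5010
    print(f"Per-rack LAN oversubscription: "
          f"{oversubscription(rack_server_bw, rack_uplink_bw):.1f}:1")

    # The POD-level ratio depends on how many 10G uplinks the MoR 5010 pair has
    # to the aggregation layer; 4 x 10G is assumed here purely for illustration.
    pod_server_bw = 5 * rack_server_bw
    pod_uplink_bw = 4 * 10
    print(f"POD-level oversubscription (assuming 4 x 10G uplinks): "
          f"{oversubscription(pod_server_bw, pod_uplink_bw):.1f}:1")

Actual uplink counts per POD will be finalized in the LLD.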

Note: Exception for SAN ToR: based on server requirements, some servers may connect directly to the SAN core. For the SAN ToR (edge switch) design, please refer to the SAN section 1.1.5.

The Nexus 5010 in the MoR position is used for Nexus 2148 ToR access aggregation and provides 10 Gig uplink connectivity to the DCN aggregation layer. Based on the desired oversubscription ratio per POD and the availability of 10 Gig ports, the middle rack can also accommodate a limited number of 10 Gig servers. The management design is likewise segmented to support five racks, with the Nexus 2148 as the ToR edge and the 5010 as the MoR spine.

Caution: The Nexus 2148 does not allow port channeling to the host with the NX-OS available today. Therefore, if any blade server chassis require a port channel to the access port, the POD can provision limited connectivity by leveraging the first 8 ports of the Nexus 5010. The Nexus 5010 provisions L2 connectivity only, which is in line with Molina's requirements.

Type 2: This design accommodates the new 10 Gig servers with FCoE capability.

Figure 1.6 DESIGN FOR 10G SERVERS

The design uses the Nexus 5010 as the ToR switch to provision 10 Gig Ethernet with FCoE connectivity to the servers. Each rack has two dedicated Nexus 5010s with Ethernet uplinks to the Nexus 7000s in the DCN aggregation layer and Fibre Channel uplinks to the MDS 9513. The POD architecture does not necessarily apply to this type as it does to Type 1, but from a management perspective the architecture remains the same, i.e. one spine switch per five racks.
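As a rough per-rack sizing sketch (assumptions: the 20 fixed 10 Gig ports of a Nexus 5010 and a hypothetical four Ethernet uplinks per switch; neither figure is fixed by this HLD), the usable server ports and Ethernet oversubscription of a Type 2 rack can be estimated as follows.

    # Rough sizing for a Type 2 (10 Gig / FCoE) access rack.
    # Assumptions for illustration only: 4 x 10G Ethernet uplinks per Nexus 5010,
    # with all remaining fixed ports facing servers.
    FIXED_10G_PORTS = 20         # fixed ports on a Nexus 5010
    uplinks_per_switch = 4       # assumption, to be decided in the LLD
    switches_per_rack = 2        # dual Nexus 5010 ToR per rack

    server_ports = switches_per_rack * (FIXED_10G_PORTS - uplinks_per_switch)
    server_bw = server_ports * 10
    uplink_bw = switches_per_rack * uplinks_per_switch * 10

    print(f"10G server ports per rack: {server_ports}")
    print(f"Ethernet oversubscription: {server_bw / uplink_bw:.1f}:1")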

Note: For functional details of each DCN layer, please refer to section 1.2, DCN Design Principles. For feature recommendations, please refer to chapter 3.

1.1.5 SAN

Figure 1.7 SAN CORE EDGE DESIGN

The Storage Area Network (SAN) is a two-tier core-edge topology. It is based on Molina's requirements and Cisco's best practices.

The SAN core-edge topology consists of two redundant fabrics. Each fabric has two core directors and multiple edge switches. In the core-edge architecture the core directors support all the storage (target) ports in each fabric as well as ISL connectivity to the edge switches. The core directors act as the central insertion point for FCIP SAN extension for replication between sites and for SANTap for network-based traffic splitting. This topology consolidates storage ports at the core.


The hosts connect to the edge switches, and the edge switches are connected to the core via ISL trunks. Since storage is consolidated at the core switches in this topology, the design supports advanced SAN features such as IVR, SME, DMM, SANTap, FCIP, and virtualization on the core switches. This topology also provides a deterministic host-to-storage oversubscription ratio, and it is a future-proof architecture for FCoE: when Molina is ready to deploy FCoE, the edge switches can be swapped out for Cisco Nexus 5000 FCoE switches without additional cables or re-wiring.

Figure 1.8 MNET DESIGN

1.1.6 MNET
The MNET (Management Network) is designed to provide true out-of-band management for the network devices and servers in the DC. The management network will host the management technologies, and access to it will be secured through a dedicated firewall between the management network and the other network segments. The management network was designed based on input from the Molina engineering team: it uses the Nexus 2148 for 1 Gig connectivity and the Catalyst 2960 for 10/100/1000 server connectivity as the ToR switch on each rack. These aggregate into the Nexus 5010 (spine switch). The aggregation design is segmented into groups of five racks as preferred by Molina (except for the network racks, which will be grouped in eights). All spine switches in turn aggregate into the management core.

Caution: There is limited redundancy in the management network, as the Nexus 2148, which acts as a remote line card, can only home to one 5010 with the technology available today. The network therefore has a single point of failure in two cases: 1. if the spine switch fails, connectivity to all of the ToR switches homed to that spine will be lost; 2. if the ToR switch (2148 / 2960) fails, only connectivity to that rack will be lost. There is, however, N+1 redundancy at the management core. The single point of failure can be eliminated by dual-homing the ToR into two spines, which is expected to be supported in a release scheduled for July. If that route is selected, additional cabling will be required to support this redundancy.

1.1.6.1 Out of Band Management


Out-of-band management refers to having dedicated physical facilities connecting the NMS and the managed equipment (network devices, managed servers, etc.). This ensures that NMS traffic does not interfere with application traffic, and vice versa. Indeed, the primary value is that NMS traffic is immune to any problems on the application links and can be used to repair application link problems, whether due to congestion, broken devices or facilities, or configuration error. From the NMS product perspective, out-of-band provisioning is handled simply by the IP addressing.
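Because out-of-band provisioning is driven purely by IP addressing, one practical check is that every device management address sits inside the dedicated MNET prefix rather than a production prefix. The sketch below is illustrative only; the prefixes and addresses are hypothetical placeholders, not Molina's addressing plan.

    # Illustrative check that management addresses fall inside the out-of-band
    # (MNET) prefix rather than production address space. All prefixes and
    # addresses below are hypothetical placeholders.
    import ipaddress

    MNET_PREFIX = ipaddress.ip_network("10.255.0.0/16")      # hypothetical OOB block
    PROD_PREFIXES = [ipaddress.ip_network("10.10.0.0/16")]   # hypothetical production block

    def is_out_of_band(mgmt_ip: str) -> bool:
        addr = ipaddress.ip_address(mgmt_ip)
        return addr in MNET_PREFIX and not any(addr in p for p in PROD_PREFIXES)

    for device, ip in {"agg-sw-1": "10.255.1.10", "acc-sw-7": "10.10.4.22"}.items():
        status = "OOB" if is_out_of_band(ip) else "NOT out of band - review addressing"
        print(device, status)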

Note: To keep the design standard, all access and network racks will be provisioned with a management switch as ToR. The ToR switch may contain more physical ports than required, especially in the network racks, where the number of devices is lower than in the access racks. However, this decision was reached in consideration of Molina's requirement to minimize inter-rack cabling.

1.2 DCN Design Principles


This section describes the overall design components and design rules. The design proposed for the Molina Healthcare data center is based on Cisco's multi-layer hierarchical architecture and best practices.

1.2.1 Multi-layer Tiers


The hierarchical three-tier network design is the preferred architecture for most Local Area Networks (LANs), and the hierarchical design principle applies equally to the enterprise data center network. In a data center network, the three-tier architecture comprises the Access layer, the Aggregation layer, and the Core layer. Each layer has different responsibilities, which permits some degree of flexibility for aggregating servers, configuring network redundancy, and supplying services to the network. The multi-layer architecture provides a secure, scalable, and resilient infrastructure. The Molina Data Center Network design is based on this hierarchical three-tier architecture.

1.2.1.1 DC Core
This layer is purely Layer 3 and provides high-speed routing between the different aggregation layer switches. In addition, the DC Core layer provides connectivity to the INET EDGE. The core of the network will be based on the Nexus 7000 platform and will be virtualized using VDC technology to provide path isolation between the Production and Test/Dev environments.
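Purely as an illustration of the path isolation described above (the VDC and device names below are placeholders, not the final design), the Production and Test/Dev split on a virtualized core can be pictured as a simple data structure with a basic redundancy check.

    # Illustrative model of VDC-based path isolation on the DC core.
    # VDC and device names are placeholders for illustration only.
    core = {
        "core-1": {"vdcs": ["prod-core", "devtest-core"]},
        "core-2": {"vdcs": ["prod-core", "devtest-core"]},
    }

    # Each environment should exist as a VDC on both physical core switches so
    # that the loss of a single chassis does not isolate either environment.
    environments = {"prod-core", "devtest-core"}
    for env in environments:
        hosts = [sw for sw, attrs in core.items() if env in attrs["vdcs"]]
        assert len(hosts) >= 2, f"{env} is not redundant across the core pair"
    print("Both environments are carried redundantly across the core pair")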

1.2.1.2 Service & Distribution


The Distribution/Aggregation layer comprises Layer 3 switches providing the aggregation point and Layer 2 connectivity for the access layer switches, as well as Layer 3 connectivity to the DC core. The service chassis are attached to the aggregation switches and are designed to host load balancing and security services. The distribution layer will be built on the Nexus 7000 platform with a pair of Catalyst 6500s as the service chassis. The distribution switches will be virtualized using VDC technology to provide path isolation between the Production and Test/Dev environments.

1.2.1.3 Access Layer


The Access layer will provide 10 Gig and 1 Gig connectivity, as well as FCoE, using Nexus 5000 and 2000 technologies. A segmented POD architecture is proposed to provision deterministic oversubscription and bandwidth allocation. vPC (virtual Port Channel) technology can be leveraged to reduce oversubscription.
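The effect of vPC on oversubscription can be shown with a short calculation: in a classic spanning-tree topology only one of a pair of uplinks forwards, while with vPC both forward, roughly halving the effective ratio. The port counts below are illustrative examples, not Molina's final numbers.

    # Illustrative effect of vPC on access-layer oversubscription.
    # Example figures only: 40 x 10G server ports and 2 x 10G uplinks per switch.
    server_bw = 40 * 10
    uplink_bw_per_link = 10

    scenarios = {
        "STP (one uplink blocked)": 1,   # spanning tree blocks the redundant uplink
        "vPC (both uplinks active)": 2,  # vPC keeps both uplinks forwarding
    }
    for label, active_links in scenarios.items():
        ratio = server_bw / (active_links * uplink_bw_per_link)
        print(f"{label}: {ratio:.0f}:1 oversubscription")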

Note: For virtualization technologies, please refer to chapter 7.

1.3 SAN Design Principles


This section details the overall SAN design components and design rules. The design proposed for the Molina SAN is based on Cisco's SAN core-edge architecture and best practices.

1.3.1 SAN Core


There are four MDS 9513 core directors, two in each fabric, providing high-density, high-performance core switching. Servers and storage that require high bandwidth and high performance connect directly to the MDS 9513 core switches via home-run links; blade servers with 9124e switches also connect directly to the core. Edge switches connect to the core switches with multiple ISLs to maximize performance from the host to the storage device. The current configuration recommends 4 x 4G ISLs for edge-to-core links, giving a total of 16G from the edge to the core per fabric; each rack has a total of 32G of uplink capacity across both fabrics. The core directors are enabled with NPIV to allow multiple logins per N port in support of NPV edge switches. The edge switches run in NPV mode. NPV reduces domain ID usage and eliminates most of the configuration on the edge switches: after the initial switch setup, an NPV-enabled switch is treated like an HBA, and the edge switches do not run any fabric services such as fabric login or name servers. Zoning and other operational activities are performed on the core directors, resulting in minimal management overhead.
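The domain ID saving from running the edge in NPV mode can be quantified from the switch counts shown in the core-edge figure (two core directors and up to forty edge switches per fabric); the short sketch below works through that arithmetic.

    # Domain ID usage per fabric with and without NPV on the edge switches.
    # Fibre Channel allows at most 239 domain IDs per fabric, and keeping the
    # count low simplifies fabric management.
    core_directors_per_fabric = 2
    edge_switches_per_fabric = 40      # as drawn in the core-edge figure

    without_npv = core_directors_per_fabric + edge_switches_per_fabric
    with_npv = core_directors_per_fabric   # NPV edges log in like HBAs, no domain ID

    print(f"Domain IDs per fabric without NPV: {without_npv}")
    print(f"Domain IDs per fabric with NPV edges: {with_npv}")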


The core directors have 18+4 line cards installed for SANTap and FCIP. It has not yet been determined which direction Molina will take for data replication across sites; the 18+4 cards will support either SANTap or FCIP at the time of deployment.

Note: Refer to the replication section below for more information.

1.3.2 SAN Edge


Edge switches connect to the core switches with multiple ISLs to maximize performance from the host to the storage device. The current configuration recommends 4 x 4G ISLs for edge-to-core links, giving a total of 16G from the edge to the core per fabric; each rack has a total of 32G of uplink capacity across both fabrics. Dual 9124 switches located in each rack function as edge switches to provide cost-effective connectivity to servers; most production servers should connect to the edge switches. Fabric Manager will be deployed for SAN management. Fabric Manager and Device Manager offer fabric-level and device-level management of the SAN fabric.

1.3.3 SAN Connectivity


1.3.3.1 Inter-Switch Link (ISL) Connectivity
The 24-port line card offers dedicated, full 4 Gb/s per port. There are 8 ISLs between the core directors in each fabric, and 4 ISLs between each core director and its edge switches, providing 16 Gb/s of uplink to the director. NPIV is enabled on all core directors and NPV is enabled on all edge switches. A high-availability ISL design requires the following:

More than one ISL between switches. This ensures switch-to-switch connectivity is maintained in the event one ISL fails due to a cable, line card, or port group issue.

ISLs between switches should be on different line cards. This ensures that switch-to-switch connectivity is maintained in the event of a line card failure.

Multiple ISLs on the same line card should be placed into separate port groups of the line card. This allows an even distribution of bandwidth on the line card; in addition, a port group failure will not affect all ISLs.

ISLs should be bundled into a port-channel to allow for bandwidth management. Port-channels allow non-disruptive changes to the number of ISLs and provide load balancing.


ISL ports should be placed into VSAN 1. This is a best practice, as VSAN 1 can never be deleted; however, to meet this requirement, VSAN 1 should never be placed into a suspended state.

ISLs require dedicated bandwidth; between switches this is typically 4 Gb. ISLs to a bladecenter should follow the same requirements. As a starting point, two 4 Gb ISLs between the bladecenter and its core switch have proven to be sufficient; with both an A and a B fabric connecting to the bladecenter, this provides 16 Gb of bandwidth, based on two 4 Gb links per fabric.
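The structural ISL rules above lend themselves to a quick automated check. The sketch below is illustrative only; the data model and the example ISL records are hypothetical, not an inventory of the Molina fabrics.

    # Illustrative check of two ISL high-availability rules listed above:
    #  - more than one ISL between any pair of switches
    #  - the ISLs for a pair should not all sit on the same line card
    # The ISL records are hypothetical examples.
    from collections import defaultdict

    isls = [
        {"pair": ("core-1", "edge-1"), "line_card": 1, "port_group": 1},
        {"pair": ("core-1", "edge-1"), "line_card": 2, "port_group": 3},
        {"pair": ("core-1", "edge-2"), "line_card": 1, "port_group": 2},
    ]

    by_pair = defaultdict(list)
    for isl in isls:
        by_pair[isl["pair"]].append(isl)

    for pair, links in by_pair.items():
        if len(links) < 2:
            print(f"{pair}: WARNING - single ISL, a link failure isolates the switch")
        elif len({link["line_card"] for link in links}) < 2:
            print(f"{pair}: WARNING - all ISLs on one line card")
        else:
            print(f"{pair}: OK")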

Caution: Cisco does not provide embedded Fibre Channel switches for blade chassis. There are four HP c-Class blade chassis in Molina that have Brocade switches; Molina will have to procure the Cisco 9124e embedded blade switches from the server vendor.

Fabric Manager Server's Performance Manager should be monitored to determine whether ISLs are under- or over-sized; ISLs should be increased if the trending and prediction queries in FMS show this is required. The description field on an ISL port should contain the source switch name and port and the remote switch name and port. ISL [source switch name-physical port to destination switch name-physical port], e.g. switch1-fc1/1 to switch2-fc1

1.3.3.2 Host Connectivity


The SAN directors for Molina are configured with a combination of 24-port and 18+4 line cards to provide FC ports and FCIP and SANTap support as required. Each rack will have two 9124 switches with uplinks to the core directors. The 9124 ports are 4 Gb/s line rate, with a maximum of 5:1 oversubscription on the uplink. High-performance servers and tape devices should connect directly to the core directors for the full 4 Gb/s of bandwidth. For ease of troubleshooting and management, it is recommended to leave all unused ports in a shutdown state and enable ports as and when they are required. It is also recommended to apply detailed descriptions to each port as it is enabled.
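The 5:1 figure follows directly from the edge switch port counts; the worked example below uses the numbers quoted in this section (24-port 9124 edge switches, 4 Gb/s host ports, and 4 x 4G ISL uplinks per fabric).

    # Host-to-uplink oversubscription for a fully populated 9124 edge switch,
    # per fabric, using the figures quoted in this section.
    total_ports = 24
    isl_ports = 4                    # 4 x 4G ISLs to the core director
    host_ports = total_ports - isl_ports

    host_bw = host_ports * 4         # 4 Gb/s line-rate host ports
    isl_bw = isl_ports * 4
    print(f"Edge uplink oversubscription: {host_bw / isl_bw:.0f}:1")   # 5:1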

Recommendation: The descriptions for server/host HBA-connected ports on the switch should include the server name, HBA vendor and model, and HBA instance on the server. Host [hostname-HBAvendor-hba instance], e.g. server1-hba0

1.3.3.3 Storage Connectivity


The storage ports and the tape libraries will be connected to the 24-port line cards. The ports on these cards can operate at 1/2/4 Gb/s and run at full rate at all speeds. Media servers should connect to the same director as the tape devices to minimize traffic traversing the ISLs between the directors. For ease of troubleshooting and management, it is recommended to leave all unused ports in a shutdown state and enable ports as and when they are required. It is also recommended that detailed descriptions be defined for each port as it is enabled.

Recommendation: The descriptions for storage target ports connected to the SAN switch should include the array model, array serial number, and port identifier. Storage [storage array serial number and port id], e.g. 8300-3c

Storage ports should be added to line cards in a round-robin fashion so that they have the best potential to run at line rate.
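The port description conventions recommended in this chapter (ISL, host, and storage ports) can be generated consistently with small helpers; the sketch below is illustrative, and the example values are placeholders rather than actual Molina devices.

    # Helpers that build port descriptions following the conventions recommended
    # in this chapter. Example values are illustrative placeholders.

    def isl_description(src_switch, src_port, dst_switch, dst_port):
        return f"ISL [{src_switch}-{src_port} to {dst_switch}-{dst_port}]"

    def host_description(hostname, hba_instance):
        return f"Host [{hostname}-hba{hba_instance}]"

    def storage_description(array_serial, port_id):
        return f"Storage [{array_serial}-{port_id}]"

    print(isl_description("switch1", "fc1/1", "switch2", "fc1/1"))
    print(host_description("server1", 0))
    print(storage_description("8300", "3c"))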

1.3.3.4 Tape Backup Architecture

Figure 1.9 TAPE SAN DESIGN

As part of the unified SAN architecture, tape backup traffic will be integrated into the production SAN MDS switches. Separate backup VSANs will be created to segregate tape backup traffic. The core-edge SAN design accommodates VTLs and tape devices connecting directly to the core switches; the current design allows up to 60 VTLs and/or tape devices at a line-rate throughput of 4 Gb/s per fabric. Tape devices are not dual-homed but can connect to either fabric. The backup servers can connect to multiple directors to maximize port utilization on both fabrics. Tape device and media server pairs should be connected to the same SAN director to reduce ISL traffic and achieve the highest throughput.
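As a back-of-the-envelope check on what the 4 Gb/s per-device line rate means for a backup window (the data volume and device count below are hypothetical examples, not Molina backup estimates):

    # Back-of-the-envelope backup window at 4 Gb/s line rate per tape/VTL port.
    # The device count and backup volume are hypothetical examples.
    line_rate_gbps = 4
    devices = 8                      # concurrent drives / VTL ports in use
    backup_tb = 50                   # nightly backup volume, decimal terabytes

    aggregate_gbps = line_rate_gbps * devices
    seconds = (backup_tb * 8 * 1000) / aggregate_gbps
    print(f"Approximate streaming time: {seconds / 3600:.1f} hours")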


1.3.3.5 Data encryption


Data encryption is performed on the tape devices within the Quantum i500 tape library, and the Quantum Encryption Key Management (Q-EKM) system is used for key management. Molina currently does not have data-at-rest encryption requirements. Symantec PureDisk provides 256-bit AES encryption for data at rest and data in transit, and Cisco also offers Storage Media Encryption (SME) on the SAN network. The current backup environment is not expected to change with the MDS SAN design.

1.4 High Availability & Resiliency


1.4.1 High Availability
The objective of designing a highly available network is to reduce recovery time in case of any failure and to maximize resource usage. A highly available network should have predictable behaviour under both normal and failure conditions. To design a highly available network, redundancy should be used where appropriate, together with a multi-tier architecture. A crucial part of the design is to fine-tune network parameters and use advanced features that ensure availability.

Each element in the Molina data center network is built with high availability in mind. Redundant supervisors with fast failover protocols such as SSO/NSF are used in the Cisco Catalyst 6500, and all interconnects are configured as port channels so that a single link failure does not cause an outage or a reconvergence event (whether Layer 2 or Layer 3). V-shaped interconnect design principles are used so that multiple link or device failures do not cause an outage, in accordance with Molina HC's network availability requirements.

The design proposes the use of HSRP on the aggregation switches for those VLANs that are trunked to the aggregation or access switches. The aggregation switches provide Layer 3 terminations (interface VLANs) for the access switch VLANs, and HSRP configured on each switch pair will provide first-hop redundancy for hosts connected to these VLANs.

The security infrastructure has been designed with a focus on high availability and the avoidance of single points of failure. This resilience is achieved by designing the infrastructure around segmentation of functionality and physical hardware separation. As you continue to review this document, you will notice that the infrastructure is divided into multiple zones based on function, and within each zone high availability is achieved by providing active/standby pairs for the security enforcement points. This provides sub-second failover and maintains TCP sessions to allow seamless session redirection to the standby device. The server load balancing solution utilizes stateful connection tracking so that existing user traffic is unaffected in the event of a device failover. The fault-tolerance mechanism of the server load balancers can also interact with the switching infrastructure (by tracking interfaces and HSRP groups) to make optimal decisions on which device is active.


Redundant SAN fabrics are utilized in this design, providing dual-path connectivity to the host and storage devices. The SAN core is composed of two MDS 9513 directors on each fabric. The high availability features of the MDS 9513 include non-disruptive code upgrades, hot insertion and removal of line cards, dual supervisor modules with stateful failover, and dual power supplies. New edge switches can be added to the topology non-disruptively and without major changes to the architecture.

Note: Specific technology recommendations and best practices are covered in Chapters 3, 4, 5, and 6.

1.4.2 Resiliency
The biggest risk to a data center network is a Spanning Tree loop; a loop in any part of the network can cause an outage for the entire network. Layer 2 access switches are required to support stateful devices such as load balancers and server NIC teaming. Use of recommended technologies such as UDLD and CoPP can further reduce these risks. The infrastructure and security design provide redundancy at multiple levels for a robust, secure, and resilient environment. The security infrastructure achieves this by providing hardware failover and maintaining session state during the failover. In addition to failover, the segmentation design protects each zone against cross-zone failures. The SAN network is a fully redundant architecture consisting of two fabrics. Core-to-core and core-to-edge connectivity is bundled into port channels, so link disruptions within a port channel will not affect data flow. Each host is connected separately to the redundant fabrics, and storage traffic continues to flow even if one of the links is unavailable.

Note: Specific technology recommendations and best practices are covered in Chapters 3, 4, 5, and 6.


2. Data Center Services


DMZ Services
The DMZ provides four separate functions for Molina's infrastructure:
- Internet Services - Internet outbound traffic management
- Partner Services - Partner communications
- DMZ Services - Internet inbound traffic management and the DMZ edge environment
- WAN Services - Remote office communication and remote office Internet traffic management

2.1 INET Services


2.1.1 Security
The Internet Services function is designed to manage communications from the data center to unknown destinations on the Internet. The focus of this security enforcement point is to validate outbound connections against internal security policy. Given the various governance requirements that apply to Molina's infrastructure, all outbound communications must be managed. This enforcement point will not permit any inbound-originating traffic.

Figure 2.1: INTERNET EDGE FIREWALL

The DMZ services zone focuses on providing Molina services to external customers. These services include web, e-portal, FTP, and others. They represent business-critical functions and increased security concerns. The design of this infrastructure provides a dedicated pair of hardware devices, manages inbound and outbound communications, and isolates services to decrease exposure to security breaches. The secure infrastructure will be managed by firewall and intrusion detection devices.


Figure 2.2: DMZ FIREWALL

Partner services provide a direct link to Molina's business partners. These partners participate in and assist Molina with day-to-day operations and represent business-critical connections and communications. The policy requirements for the partner zone differ greatly from all other zones, so in this design partner traffic is segmented onto a dedicated firewall that enforces the security policy. The firewall infrastructure will segregate partners by interface and provide intrusion detection systems.


Figure 2.3: PARTNER FIREWALL

2.1.2 Load Balancing


ACE server load balancers will be utilized in both the DMZ and the inside network aggregation. These are service modules that utilize the fast backplane and advanced routing of the Catalyst 6500 switching platform. The ACE server load balancers can be virtualized into many virtual contexts, allowing separation of hardware resources and configuration. This allows the same device to perform different functions.
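As an illustration of this virtualization, the sketch below creates a dedicated context from the ACE Admin context; the context name, VLAN, and resource limits are hypothetical and would be set according to Molina's allocation policy:

    ! ACE Admin context - hypothetical resource class and user context
    resource-class DMZ-RC
      limit-resource all minimum 10.00 maximum unlimited
    context DMZ-WEB
      allocate-interface vlan 100
      member DMZ-RC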

2.2 WAN Services


2.2.1 Security
WAN Services provides remote offices and agencies direct access to data center resources and an internet security enforcement point for proxy services. The WAN security infrastructure will include firewalls and intrusion detection devices. The infrastructure will provide a direct internet link for the remote offices and segment this traffic from traversing the data center routing core.



Figure 2.4: WAN EDGE FIREWALLS

2.2.2 Load Balancing


The Cisco ACE Global Site Selector (GSS) will be staged in advance to prepare for future global server load balancing (GSLB). The GSS uses the DNS infrastructure to direct client requests to the preferred site. By integrating the GSS with Molina's existing DNS infrastructure, a robust, resilient, and scalable GSLB solution can be achieved. The Cisco GSS integrates with the Cisco ACE load balancer, providing site-selection decisions based not only on server availability but also on load metrics.

2.2.3 Optimization
Molina currently uses the Cisco WAAS solution for WAN optimization, and the existing WAN optimization strategy will be preserved in the new data center. The WAAS core devices are placed near the servers being optimized; when these servers migrate to the New Mexico data center, the core WAAS devices will migrate as well.

2.3 DCN Aggregation Services


2.3.1 Production Network
2.3.1.1 Security


Molina HC requires stateful inspection of communications within the data center core and security segmentation of critical application services. The placement of firewalls and intrusion detection systems gives Molina HC the ability to build a services chain that protects user-to-server and server-to-server communications. This, combined with the segmentation of critical assets based on internal security policy, external governance, and best practices, provides secure services.


Figure 2.5: VIRTUAL FIREWALL

2.3.1.2 Load balancing


As in the DMZ, ACE server load balancers will be utilized in the inside network aggregation. These service modules utilize the fast backplane and advanced routing of the Catalyst 6500 switching platform and can be virtualized into many virtual contexts, allowing separation of hardware resources and configuration so that the same device can perform different functions.

2.3.2 Development Network


2.3.2.1 Security
Molina's development environment is where applications are tested, services are developed, and new business products are produced. This environment utilizes real data in the testing procedures and requires an equal amount of segmentation and security enforcement. The environment will be treated the same as a services chain and will be protected with a virtualized firewall and intrusion detection systems.

2.3.2.2 Load balancing


The ACE server load balancers will have virtual partitions in place to offer resources for the development network. This will allow a development environment that mimics the production network.

2.4 SAN Services


2.4.1 Data Replication
For local data replication, Molina requested a point-in-time backend data refresh solution. For remote data replication, Molina requested a single solution capable of replicating data between heterogeneous disk arrays to which both VMware and Windows hosts are attached.

2.4.1.1 Local replication


EMC provides a software solution for point-in-time copies in the DMX disk array, called TimeFinder. TimeFinder mirrors and copies data from a primary source data volume to a backup data volume. TimeFinder can also take point-in-time (PIT) snapshots of the data volumes.


Figure 2.6: DMX + TIMEFINDER

2.4.1.2 Remote replication


Remote data replication between the production and disaster recovery sites is critical. SANTap, deployed on the Cisco MDS 9513 and combined with EMC RecoverPoint in asynchronous mode, is recommended to replicate data between DC2 and HW.


Figure 2.7: DMX + RECOVERPOINT

In this design, two EMC RecoverPoint appliances (RPAs) combined with the SANTap module in the MDS 9513 form a reliable cluster environment. Data replication can be load-balanced between the two RPAs. This solution provides data replication between different EMC disk array frames and supports both Windows and VMware servers. For a fully redundant replication solution, the following components will be needed:
- Two 18+4 line cards with a SANTap license on each fabric per data center; including the modules at the HW DR site, at least six 18+4 line cards are required in total.
- Four EMC RecoverPoint appliances.
- Two redundant links between DC2 and HW.


Figure 2.8 SAN REPLICATION

Direct DMX-to-DMX replication with SRDF can also be deployed. In this scenario, the DMX GE ports connect directly to the Cisco Catalyst 6500, and the DMX handles the encryption and compression of the replication link between the NM and HW data centers. The MDS directors are also able to transport FCIP traffic; instead of connecting the Catalyst 6500 to the DMX, this connection can be made to the MDS director to transport FC packets over the IP network. For a redundant replication infrastructure, two MPLS links between the sites are recommended.


3. Layer 1, 2, 3 Design & HA Technologies


3.1 L1 Design
The access layer in each service block connects to the distribution switches of that block. Triangle topologies are chosen over U-type connectivity. All links to the distribution will be 10 Gigabit in the DCN and 1 Gigabit in the other service blocks, which ensures sufficient capacity.

Note: Depending on application bandwidth requirements, the uplinks of each access POD can be adjusted to provide the desired bandwidth and oversubscription ratios.

DCN distribution switches connect to the core routers via 10 Gigabit links in a full-mesh topology, which provides link redundancy and failover. LACP EtherChannels to the core will be used.

3.2 L2 Design
The access switches link to the aggregation switches. A V-shaped topology is used to protect against both an aggregation switch failure and a port channel (multiple-link) failure. The links will be Layer 2 to allow dot1q tagging and multiple VLANs to be carried between the aggregation and access layers. Rapid PVST+ will be used to provide sub-second convergence in case of STP topology changes. HSRP will be used on the aggregation switches to provide gateway functionality for the servers and predictable server egress traffic.

Note: The VLAN numbering scheme will be preserved from the existing data center to assist in migration.
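A brief configuration sketch, assuming hypothetical VLAN ranges and priority values, of enabling Rapid PVST+ and aligning the STP root with the HSRP active aggregation switch:

    ! aggregation switch 1 (intended STP root and HSRP active)
    spanning-tree mode rapid-pvst
    spanning-tree vlan 100-199 priority 4096
    ! aggregation switch 2 (secondary root and HSRP standby)
    spanning-tree mode rapid-pvst
    spanning-tree vlan 100-199 priority 8192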


3.3 L3 Design
EIGRP is the protocol selected by Molina HC to be used as the IGP to the core. IP connectivity will be provided by configuring addresses on SVIs (Switch Virtual Interfaces). The use of SVIs, as opposed to configuring IP addresses on the physical links, provides the ability to increase link capacity without any disruption to traffic flow: a new physical link can be added to an existing port channel without bringing the connection down, which provides operational flexibility for future growth. EIGRP will be used as the dynamic routing protocol, and the networks on the links will be advertised in the EIGRP AS. There will be ECMP (Equal Cost Multi-Path) from the distribution to the core block, which will ensure equal-cost load balancing. Load balancing allows a router to distribute traffic over all of its network ports that are the same distance from the destination address; this increases the utilization of network segments and therefore the effective network bandwidth.


Cisco NX-OS supports the Equal Cost Multi-Path (ECMP) feature with up to 16 equal-cost paths in the EIGRP route table and the unicast RIB, and EIGRP can be configured to load balance traffic across some or all of those paths. Cisco NX-OS also supports nonstop forwarding, graceful restart, and MD5 authentication for EIGRP. As a best practice, Cisco recommends enabling authentication and graceful restart (high availability).

Configuration guidelines for the core. Based on discussions with Molina architects, Cisco AS recommends the following (a configuration sketch follows the list):

1. No static routes should be configured on the Nexus switches.

2. EIGRP should be used to learn routes dynamically.

3. Authentication (using MD5) should be enabled for EIGRP.

4. For graceful restart, neighbouring devices participating in the graceful restart must be NSF-aware or NSF-capable.
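A hedged NX-OS sketch of these guidelines; the instance tag/AS number, key chain, key string, and VLAN interface are hypothetical:

    feature eigrp
    feature interface-vlan
    key chain EIGRP-KEYS
      key 1
        key-string EigrpS3cret        ! hypothetical shared secret
    router eigrp 100
      graceful-restart                ! enabled by default; shown for clarity
    interface Vlan100
      ip router eigrp 100
      ip authentication mode eigrp 100 md5
      ip authentication key-chain eigrp 100 EIGRP-KEYS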

3.4 HA Technologies
3.4.1 Logical Redundancy
3.4.1.1 HSRP (Hot Standby Router Protocol)
The design proposes the use of HSRP on the aggregation switches for those VLANs that are trunked to the aggregation or access switches. The aggregation switches provide Layer 3 terminations (interface VLANs) for the access switch VLANs, and HSRP configured on each switch pair will provide first-hop redundancy for hosts connected to these VLANs. Cisco recommends HSRP hello=250 msec and hold=750 msec timers on the Nexus 7000 for applications that need faster recovery; for the remaining applications, second-based HSRP timers can be used (1 sec hello and 3 sec hold).

The HSRP protocol is relatively mature and well understood; consequently there are no major recommendations to be made in terms of the HSRP configuration within the core infrastructure. Cisco AS agrees with the recommendation to synchronize the HSRP active router and the STP root switch per VLAN; even though the access switch design is a non-looped STP design, this is deemed a best practice. It is recommended to use a unique HSRP group per VLAN. Alternatively, if the number of VLANs exceeds the number of HSRP groups allowed on the supervisor, it is still possible to re-use the same HSRP group in multiple VLANs because the Nexus 7000 maintains a separate MAC table per VLAN; however, this design may not interoperate with other switch vendors' hardware. Additionally, Cisco AS recommends that the following HSRP features be used:

42

(Release) v1.3

High level DCN Design

- HSRP authentication between peers (not critical in the data center, but a leading practice)
- HSRP timers (250 msec hello, 750 msec hold time)
- HSRP preempt and delay (to allow for STP convergence)
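A minimal NX-OS sketch of these HSRP recommendations; the VLAN, addressing, priority, and key string are hypothetical:

    feature hsrp
    feature interface-vlan
    interface Vlan100
      ip address 10.1.100.2/24
      hsrp 100
        authentication md5 key-string HsrpS3cret   ! hypothetical key
        preempt delay minimum 180                  ! allow STP to converge first
        priority 110
        timers msec 250 msec 750                   ! 250 ms hello, 750 ms hold
        ip 10.1.100.1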

3.4.2 UDLD (Uni-Directional Link Detection)


The UDLD protocol allows devices connected through fiber-optic or copper (for example, Category 5) Ethernet cables to monitor the physical configuration of the cables and detect when a unidirectional link exists. When a unidirectional link is detected, UDLD shuts down the affected port and alerts the user. Unidirectional links can cause a variety of problems, including spanning tree topology loops. We recommend enabling UDLD on all fiber interfaces. Because UDLD is a Cisco proprietary feature, it can be configured at the UNI level only if the customer device is UDLD-enabled. Aggressive mode is the preferred UDLD operational mode: besides detecting unidirectional links due to misconnected fiber interfaces, aggressive mode detects unidirectional links due to one-way traffic on fiber-optic interfaces.
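A short sketch, assuming NX-OS, of enabling UDLD aggressive mode as recommended:

    feature udld
    udld aggressive      ! enables aggressive mode globally on fiber interfaces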

3.4.3 NSF/SSO (Non Stop Forwarding/ Stateful Switchover)


The data center design includes the use of redundant supervisors in each Nexus 7000. Redundant supervisors have been proposed to provide higher availability within the Nexus 7000 chassis and because of the presence of many business-critical services. It is recommended that SSO (Stateful Switchover) in conjunction with NSF (Non-Stop Forwarding) be enabled on the redundant supervisors to provide high availability and synchronization of Layer 2 and Layer 3 protocol configuration and state on the switch.

When enabled, SSO establishes one of the supervisors as active while the other is designated as standby, and then synchronizes information between them; this includes configuration, Layer 2 protocol, and hardware forwarding (PFC) information. A switchover from the active to the redundant supervisor engine occurs when the active supervisor engine fails, is removed from the switch, or is manually shut down for maintenance. This type of switchover ensures that Layer 2 traffic is not interrupted. During switchover, system control and routing protocol execution are transferred from the active supervisor engine to the redundant supervisor. The switch requires between 0 and 3 seconds to switch over from the active to the redundant supervisor.

NSF in conjunction with SSO (NSF always runs with SSO) provides redundancy for Layer 3 traffic. The main purpose of NSF is to continue forwarding IP packets during a supervisor switchover. A key element of NSF is packet forwarding, which is provided by CEF (Cisco Express Forwarding) on the supervisor. CEF maintains the FIB (Forwarding Information Base) and uses the FIB information that was current at the time of the switchover to continue forwarding packets during the switchover, which reduces traffic interruption. During normal NSF operation, CEF on the active supervisor engine synchronizes its current FIB and adjacency databases with the FIB and adjacency databases on the redundant supervisor engine. Upon switchover, the redundant supervisor engine initially has FIB and adjacency databases that are mirror images of those that were current on the active supervisor engine. Each protocol depends on CEF to continue forwarding packets during switchover while the routing protocols rebuild the Routing Information Base (RIB) tables. After the routing protocols have converged, CEF updates the FIB table, removes stale route entries, and then updates the line cards with the new FIB information.
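On NX-OS, SSO and graceful restart are enabled by default when dual supervisors are present. For the Catalyst 6500 platforms in the design, a minimal IOS sketch of enabling SSO with NSF for EIGRP is shown below (the EIGRP AS number is hypothetical):

    redundancy
     mode sso
    router eigrp 100
     nsf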

3.4.4 GOLD (Generic Online Diagnostics)


The design proposes the use of Generic Online Diagnostics (GOLD) on the Nexus 7000 devices deployed as part of the new design. It is recommended that GOLD be used to perform initial bootup diagnostics as well as background (prescheduled) diagnostics to verify correct operation and functioning of hardware and internal systems. Online diagnostics allow the user to test and verify the hardware functionality of the Nexus 7000 switch while the switch is connected and live in the network. The online diagnostics contain packet switching tests that check different hardware components and verify the data path and control signals. Online diagnostics detect problems in the following areas:
- Hardware components
- Interfaces (GBICs, Ethernet ports, etc.)
- Connectors (loose connectors, bent pins, etc.)
- Solder joints
- Memory (failure over time)

Online diagnostics are categorized as bootup, on-demand, scheduled, or health monitoring. Bootup diagnostics run during bootup, on-demand diagnostics are run from the CLI, scheduled diagnostics run at user-designated intervals or at a specified time while the switch is connected to a live network, and health-monitoring diagnostics run in the background. Cisco AS considers online diagnostics a very useful feature for determining the health of Nexus 7000 hardware and recommends, at a minimum, that the bootup diagnostics be set to level complete. In addition, it is recommended to use on-demand diagnostics to verify the correct operation of the new Catalyst 6500 hardware as it is installed at each site. It is proposed to use specific online diagnostics for pre-deployment testing and verification of new hardware before progressing to more advanced testing. Please refer to Link: GOLD Recommendations for the diagnostic tests that are recommended to be run as part of network pre-deployment testing, and to the GOLD whitepaper at the following URL for additional information: http://www.cisco.com/en/US/docs/switches/datacenter/sw/4_0/nxos/system_management/configuration/guide/sm_gold.html
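A brief sketch of the bootup-level recommendation and an on-demand check; the module number is hypothetical, and exact syntax varies slightly between NX-OS and IOS releases:

    diagnostic bootup level complete
    ! on-demand verification of a newly installed module
    show diagnostic result module 1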


3.4.5 uRPF (unicast reverse path forwarding)


uRPF (unicast reverse path forwarding) is a security feature that helps to mitigate problems caused by the introduction of malformed or spoofed IP source addresses into a network by discarding IP packets that lack a verifiable IP source address. For example, a number of common DoS attacks, including Smurf and Tribal Flood Network (TFN), take advantage of forged or rapidly changing source IP addresses to thwart efforts to locate or filter the attack. Unicast RPF checks whether a packet received on a router interface arrived on the best return path (return route) to the source of the packet by performing a reverse lookup in the CEF table. If the packet was received on one of the best reverse-path routes, it is forwarded as normal. If there is no reverse-path route on the interface on which the packet was received, the source address may have been modified; in that case the packet is dropped or forwarded, depending on whether an access control list (ACL) is specified in the ip verify unicast reverse-path interface configuration command. The Molina HC data centers are deemed to be internal networks, so the potential threat from external DoS attacks is seen to be minimal. However, as a configuration best practice, Cisco AS recommends that Molina consider configuring uRPF in the core of the network and potentially on the distribution switches. Initial configuration should be done using loose-mode uRPF.
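A one-line sketch of loose-mode uRPF on a hypothetical routed interface (NX-OS syntax shown; IOS is essentially the same):

    interface Ethernet1/1
      ip verify unicast source reachable-via any   ! loose mode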

3.4.6 Trunking
Cisco recommends that all trunk ports be configured to use 802.1Q encapsulation (the standard) and that the trunk mode be set to nonegotiate, i.e. always on. By disabling negotiation via DTP (Dynamic Trunking Protocol), faster convergence can be achieved by avoiding the latency associated with the protocol, which can prevent a port from being added to STP for up to 5 seconds.

As VLAN 1 is enabled on all switch ports and trunks by default, it is good practice to restrict the size of VLAN 1 as much as possible; therefore it is recommended to clear VLAN 1 from trunk ports. Note that even though VLAN 1 is cleared from the trunk port, this VLAN is still used by the processor to pass control frames such as VTP and CDP. Furthermore, only the specific user VLANs and the management VLAN should be enabled on a trunk port; all other VLANs should be cleared from trunk ports. An 802.1Q trunk has the concept of a native VLAN, which is defined as the VLAN to which a port returns when not trunking and which is carried untagged; by default this is VLAN 1. However, as VLAN 1 will be cleared from trunks, it is advisable to configure another VLAN (user or management) as the native VLAN. Cisco understands that VLANs will be reused at each access switch and that any access switch might host any server VLAN in the network.


The approach for trunking VLANs is to explicitly add them to the trunk as needed, as opposed to trunking all VLANs and then pruning them.
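An illustrative Catalyst 6500 IOS trunk configuration reflecting these practices; the interface, allowed VLANs, and native VLAN are hypothetical:

    interface TenGigabitEthernet1/1
     switchport
     switchport trunk encapsulation dot1q
     switchport trunk native vlan 999          ! unused native VLAN, not VLAN 1
     switchport trunk allowed vlan 10,20,30    ! explicitly added as needed
     switchport mode trunk
     switchport nonegotiate                    ! disable DTP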

3.4.7 VTP (VLAN Trunking Protocol)


On NX-OS, the VLAN Trunking Protocol (VTP) mode is OFF. VTP PDUs are dropped on all device interfaces, which partitions VTP domains if other devices have VTP turned on. Therefore, Cisco recommends that VTP transparent mode be used on non-NX-OS devices. The benefits of implementing transparent mode include:
- It encourages good change control practice, as the requirement to modify a VLAN on a switch or trunk port has to be considered one switch at a time.
- It limits the risk of an administrator error, such as deleting a VLAN accidentally and thus impacting the entire domain.
- There is no risk of a new switch being introduced into the network with a higher VTP revision number and overwriting the entire domain's VLAN configuration.
- It encourages VLANs to be pruned from trunks running to switches that do not have ports in that VLAN, making frame flooding more bandwidth-efficient. Manual pruning also has the benefit of reducing the spanning tree diameter.

3.4.8 VLAN Hopping


VLAN hopping is a vulnerability that results from the IEEE 802.1Q trunk specification, which specifies that the native VLAN on a trunk be carried untagged. When a switch receives an 802.1Q-tagged packet on an access port whose VLAN is also the native VLAN of a trunk, the switch forwards the frame on the trunk; the receiving switch then forwards the frame to the VLAN specified in the tag that the attacker prepended to the packet. To avoid this issue, it is recommended to configure the following global command on all switches configured for VLAN trunking:
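The source text does not show the command itself; a commonly used global mitigation on Catalyst IOS switches, shown here as an assumption, is to tag the native VLAN on all 802.1Q trunks:

    vlan dot1q tag native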

3.4.9 Unused ports


On Cisco switches, all ports are in VLAN 1 by default, and VLAN 1 is automatically included on trunk ports; therefore, assigning unused ports to a unique rogue VLAN will prevent potential STP issues from impacting the entire network if a host or switch is attached to these ports. This rogue VLAN should be local to the switch and not trunked to any other switches. Cisco AS recommends that Molina implement a policy of assigning unused ports on all switches to a dedicated rogue VLAN local to the switch and, if possible, keeping the ports in a shutdown state until required for use.
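A short IOS sketch of this policy; the rogue VLAN number and port range are hypothetical:

    vlan 998
     name ROGUE-UNUSED
    interface range GigabitEthernet2/1 - 48
     switchport
     switchport mode access
     switchport access vlan 998
     shutdown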


3.4.10 ISSU
In a Nexus 7000 series chassis with dual supervisors, the in-service software upgrade (ISSU) feature can be used to upgrade the system software while the system continues to forward traffic. ISSU uses the existing nonstop forwarding (NSF) and stateful switchover (SSO) features to perform the software upgrade with no system downtime.
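A sketch of the NX-OS commands used to assess and perform an ISSU; the image file names are hypothetical placeholders for the target release:

    show install all impact kickstart bootflash:n7000-s1-kickstart.4.2.3.bin system bootflash:n7000-s1-dk9.4.2.3.bin
    install all kickstart bootflash:n7000-s1-kickstart.4.2.3.bin system bootflash:n7000-s1-dk9.4.2.3.bin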

3.5 Control Plane and Management Plane Policing


CoPP is disabled by default on the Catalyst 6500 platform and enabled by default on the Nexus 7000 platform. The figure below depicts a Cisco 6500, its Multilayer Switching Feature Card (MSFC) containing the L2 Switch Processor (SP) and L3 Route Processor (RP), and the mechanisms available on the Cisco 6500 to protect the Switch Processor and Route Processor against DoS attacks. Specifically, the Cisco 6500 supports a two-level defense with control plane policing (CoPP) and special-case CPU hardware rate limiters (HWRL). The CoPP feature applies to traffic going to the Route Processor control plane interface; CoPP is applied in hardware on a per-forwarding-engine basis at the Policy Feature Card (PFC) and Distributed Forwarding Card (DFC). The special-case CPU hardware rate limiters are platform-dependent rate limiters applied in hardware to traffic going to the Switch Processor or Route Processor.


Figure 3.1: CONTROL PLANE POLICING

A router can be logically divided into three functional components or planes:
1. Data Plane
2. Management Plane
3. Control Plane

Figure 3.2: LOGICAL PLANES OF A ROUTER

The vast majority of traffic generally travels through the router via the data plane; however, the Route Processor must handle certain packets, such as routing updates, keepalives, and network management. This traffic is often referred to as control and management plane traffic. The Route Processor is critical to network operation, and any service disruption to the Route Processor, and hence the control and management planes, can lead to business-impacting network outages. A Denial of Service (DoS) attack targeting the Route Processor, which can be perpetrated either inadvertently or maliciously, typically involves high rates of traffic destined to the Route Processor itself that result in excessive CPU utilization. Such an attack can be devastating to network stability and availability and may include the following symptoms:
- High Route Processor CPU utilization (near 100%)
- Loss of line protocol keepalives and routing protocol updates, leading to route flaps and major network transitions
- Interactive sessions via the Command Line Interface (CLI) that are slow or completely unresponsive due to high CPU utilization
- Route Processor resource exhaustion: resources such as memory and buffers are unavailable for legitimate IP data packets
- Packet queues backing up, leading to indiscriminate drops (or drops due to lack of buffer resources) of other incoming packets

CoPP addresses the need to protect the control and management planes, ultimately ensuring routing stability, reachability, and packet delivery. It uses a dedicated control-plane configuration via the IOS Modular Quality of Service CLI (MQC) to provide filtering and rate-limiting capabilities for control plane packets.

3.5.1 Developing a CoPP Policy


Prior to developing the actual CoPP policy, required traffic must be identified and separated into different classes. One recommended methodology involves categorizing traffic into distinct groups based on relative importance. In the example discussed in this document, traffic is grouped into five different classes. The actual number of classes needed might differ and should be selected based on local requirements and security policies. Note that these traffic classes are defined with regard to the CPU/control plane.

1. Critical - Traffic that is crucial to the operation of the router and the network. Examples: routing protocols such as OSPF. Note that some sites might choose to classify other traffic as critical when appropriate.

2. Important - Necessary, frequently used traffic that is required during day-to-day operations. Examples: traffic used for remote network access and management, such as Telnet, Secure Shell (SSH), Network Time Protocol (NTP), and Simple Network Management Protocol (SNMP).

3. Normal - Traffic that is expected but not essential to network operation. Normal traffic used to be particularly hard to address when designing control-plane protection schemes, as it should be permitted but should never pose a risk to the router. With CoPP, this traffic can be permitted but limited to a low rate. Examples: ICMP echo request (ping).

4. Undesirable - Explicitly identifies bad or malicious traffic that should be dropped and denied access to the Route Processor. This class is particularly useful when known traffic destined to the router should always be denied rather than placed into a default category. Explicitly denying traffic allows the end user to collect rough statistics on this traffic via show commands and therefore offers some insight into the rate of denied traffic.

5. Default - All remaining traffic destined to the Route Processor that has not been identified. With a default classification in place, statistics can be monitored to determine the rate of otherwise unidentified traffic destined to the control plane. Once this traffic is identified, further analysis can be performed to classify it and, if needed, the other CoPP policy entries can be updated to account for this traffic.
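A condensed IOS MQC sketch of such a policy for the Catalyst 6500; the class names, ACLs, subnets, and police rates are hypothetical and must be tuned to Molina's traffic baseline:

    ip access-list extended COPP-CRITICAL
     permit ospf any any
    ip access-list extended COPP-IMPORTANT
     permit tcp 10.10.10.0 0.0.0.255 any eq 22
     permit udp 10.10.10.0 0.0.0.255 any eq snmp
    ip access-list extended COPP-NORMAL
     permit icmp any any echo
    !
    class-map match-all COPP-CRITICAL
     match access-group name COPP-CRITICAL
    class-map match-all COPP-IMPORTANT
     match access-group name COPP-IMPORTANT
    class-map match-all COPP-NORMAL
     match access-group name COPP-NORMAL
    !
    policy-map COPP-POLICY
     class COPP-CRITICAL
      police 1000000 31250 conform-action transmit exceed-action transmit
     class COPP-IMPORTANT
      police 500000 15625 conform-action transmit exceed-action drop
     class COPP-NORMAL
      police 100000 3125 conform-action transmit exceed-action drop
     class class-default
      police 100000 3125 conform-action transmit exceed-action drop
    !
    control-plane
     service-policy input COPP-POLICY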

3.5.2 COPP on NX-OS


The NX-OS device provides control plane policing to prevent denial-of-service (DoS) attacks from impacting performance. CoPP classifies packets into different classes and provides a rate-limiting mechanism to individually control the rate at which the supervisor module receives them. CoPP is configured in the default VDC but applies to all VDCs on the device. The supervisor module divides the traffic that it manages into three functional components or planes:
- Data plane - Handles all the data traffic. The basic functionality of an NX-OS device is to forward packets from one interface to another; packets that are not destined for the switch itself are called transit packets and are handled by the data plane.
- Control plane - Handles all routing protocol control traffic. Protocols such as the Border Gateway Protocol (BGP) and the Open Shortest Path First (OSPF) protocol send control packets between devices. These packets are destined to router addresses and are called control plane packets.
- Management plane - Runs the components intended for NX-OS device management purposes, such as the command-line interface (CLI) and the Simple Network Management Protocol (SNMP).

The supervisor module hosts both the management plane and the control plane and is critical to the operation of the network. Any disruption of or attack on the supervisor module will result in serious network outages. For example, excessive traffic to the supervisor module could overload it and slow down the performance of the entire NX-OS device. Attacks on the supervisor module can be of various types, such as denial-of-service (DoS) attacks that generate IP traffic streams to the control plane at a very high rate. These attacks force the control plane to spend a large amount of time handling these packets and prevent it from processing genuine traffic. Such attacks can impact device performance and have the following negative effects:
- High supervisor CPU utilization.
- Loss of line protocol keepalive messages and routing protocol updates, which leads to route flaps and major network outages.
- Interactive sessions using the CLI become slow or completely unresponsive due to high CPU utilization.
- Resources such as memory and buffers might be unavailable for legitimate IP data packets.
- Packet queues fill up, which can cause indiscriminate packet drops.


3.5.3 CoPP Risk Assessment


Care must be taken to ensure that the CoPP policy does not filter critical traffic such as routing protocols or interactive access to the routers. Filtering this traffic could prevent remote access to the router, thus requiring a console connection


4. Security Technologies
To achieve the design presented in this document, Cisco will utilize multiple security technologies to fulfil the design requirements. These technologies include stateful firewall failover, virtualized firewalls, intrusion detection systems, and virtualized intrusion detection sensors.

4.1 Firewall Technologies


The ASA firewall can run in one of the following modes:
- Routed - The ASA firewall is considered to be a router hop in the network and acts as a default gateway for hosts that connect to one of its screened subnets. It can perform NAT between connected networks. In single-context mode, the ASA supports the OSPF and passive RIP dynamic routing protocols; multiple context mode supports static routes only.
- Transparent - The same subnet exists on both sides of the firewall. The ASA firewall acts like a "bump in the wire" and is not a router hop. The ASA connects the same network on its inside and outside interfaces, but each interface must be on a different VLAN. Using the ASA in transparent mode, there is no need for dynamic routing or NAT, as only secure bridging is performed.

The firewalls mentioned in the sections above will be configured in routed mode. However, the Global Data Center Service firewall will be configured in transparent mode to ease insertion of new services into the network without making any changes to the IP address space.

Note: In multiple context mode, the firewall mode cannot be set separately for each context; only one firewall mode can be set for the entire security appliance.

4.1.1 Transparent Mode


Overview
Firewalls protect inside networks from unauthorized access by users on an outside network. The firewall can also protect inside networks from each other, for example by keeping a human resources network separate from a user network. If network resources need to be available to an outside user, such as a web or FTP server, these resources can be placed on a separate network behind the firewall, called a demilitarized zone (DMZ). The firewall allows limited access to the DMZ, but because the DMZ only includes the public servers, an attack there only affects those servers and does not affect the other inside networks. Firewalls can also be used to control inside users' access to outside networks (for example, access to the Internet) by allowing only certain addresses out, by requiring authentication or authorization, or by coordinating with an external URL filtering server.


The Cisco firewall includes many advanced features, such as multiple security contexts (virtualized firewalls) and transparent (Layer 2) or routed (Layer 3) firewall operation. A transparent firewall is a Layer 2 firewall that acts like a "bump in the wire" and is not seen as a router hop by connected devices. The firewall connects the same network on its inside and outside interfaces, but each interface must be on a different VLAN.

4.1.1.1 Traffic Passing through Transparent firewall


IPv4 traffic is allowed through the transparent firewall automatically from a higher-security interface to a lower-security interface, without an access list. ARPs are allowed through the transparent firewall in both directions without an access list; ARP traffic can be controlled by ARP inspection. For Layer 3 traffic travelling from a low- to a high-security interface, an extended access list is required on the low-security interface. The transparent firewall can allow almost any traffic through using either an extended access list (for IP traffic) or an EtherType access list (for non-IP traffic).

Note: The transparent mode security appliance does not pass CDP packets, IPv6 packets, or any packets that do not have a valid EtherType greater than or equal to 0x600. For example, IS-IS packets are not allowed to pass. An exception is made for BPDUs, which are supported. It is possible, for example, to establish routing protocol adjacencies through a transparent firewall; OSPF, RIP, EIGRP, or BGP traffic can be allowed through based on an extended access list. Likewise, protocols like HSRP or VRRP can pass through the security appliance. Non-IP traffic (for example AppleTalk, IPX, BPDUs, and MPLS) can be configured to go through using an EtherType access list. When the security appliance runs in transparent mode without NAT, the outgoing interface of a packet is determined by performing a MAC address lookup instead of a route lookup. Route statements can still be configured, but they apply only to security appliance-originated traffic. For example, if a syslog server is located on a remote network, a static route must be used so the security appliance can reach that subnet. If NAT is used, the security appliance uses a route lookup instead of a MAC address lookup.

4.1.1.2 Transparent Firewall in a Network


The figure below shows a typical transparent firewall network where the outside devices are on the same subnet as the inside devices. The inside router and hosts appear to be directly connected to the outside router.


Figure 4.1 TRANSPARENT FIREWALL NETWORK

4.1.1.3 Transparent Firewall Guidelines


- A management IP address is required; for multiple context mode, an IP address is required for each context. Unlike routed mode, which requires an IP address per interface, a transparent firewall has a single IP address assigned to the entire device. The security appliance uses this IP address as the source address for packets originating on the security appliance, such as system messages or AAA communications. The management IP address must be on the same subnet as the connected network and cannot be a host subnet (255.255.255.255).
- It is possible to configure an IP address for the Management 0/0 management-only interface. This IP address can be on a separate subnet from the main management IP address.
- The transparent security appliance uses an inside interface and an outside interface only. If the firewall includes a dedicated management interface, that interface or a subinterface can be used for management traffic only.
- In single mode, only two data interfaces can be used (plus the dedicated management interface, if available), even if the security appliance includes more than two interfaces.
- Each directly connected network must be on the same subnet.
- Do not specify the security appliance management IP address as the default gateway for connected devices; devices should specify the router on the other side of the security appliance as the default gateway.
- For multiple context mode, each context must use different interfaces; an interface cannot be shared across contexts.
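A minimal single-context sketch of transparent mode reflecting these guidelines; the interfaces and the management address on the bridged subnet are hypothetical:

    firewall transparent
    interface GigabitEthernet0/0
     nameif outside
     security-level 0
     no shutdown
    interface GigabitEthernet0/1
     nameif inside
     security-level 100
     no shutdown
    ! single management IP for the whole device, on the connected subnet
    ip address 10.20.30.5 255.255.255.0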


4.1.1.4 Unsupported Features in Transparent Firewall

Table 3: UNSUPPORTED FEATURES IN TRANSPARENT FIREWALL

Dynamic DNS - Not supported.
DHCP Relay - The transparent firewall can act as a DHCP server, but it does not support the DHCP relay commands. DHCP relay is not required because DHCP traffic can be allowed to pass through using two extended access lists: one that allows DHCP requests from the inside interface to the outside, and one that allows the replies from the server in the other direction.
Dynamic Routing Protocols - Not supported; however, static routes can be added for traffic originating on the security appliance. Dynamic routing protocols can be allowed through the security appliance using an extended access list.
IPv6 - Not supported; IPv6 traffic cannot be allowed through, even using an EtherType access list.
Multicast - Multicast traffic can be allowed through the security appliance by permitting it in an extended access list.
QoS - Not supported.
VPN termination for through traffic - The transparent firewall supports site-to-site VPN tunnels for management connections only. It does not terminate VPN connections for traffic through the security appliance. VPN traffic can be passed through the security appliance using an extended access list, but non-management connections are not terminated. WebVPN is also not supported.


4.2 Routed Mode


Overview
In routed mode, the security appliance is considered to be a router hop in the network. It can use OSPF or RIP (in single context mode). Routed mode supports many interfaces, each on a different subnet, and interfaces can be shared between contexts. Multiple context mode supports static routes only.

4.2.1 Routed Firewall in a Network


The figure shown below illustrates a typical routed firewall in a network where all the interfaces of the firewall are on a different subnet:

Figure 4.2 ROUTED FIREWALL NETWORK

4.3 ASA Virtual Context


Overview
A virtual firewall (also known as a security context) is a logical firewall with its own policies, rules, access lists, IP addresses, interfaces, and resources, running concurrently with other logical firewalls on the same physical resource. In multiple context mode (multimode), up to 50 separate security contexts can be created on the ASA 5550 (depending on the software license). Multiple contexts are similar to having multiple standalone firewalls, conveniently contained within a single appliance. A firewall can therefore operate in multiple transparent mode or in multiple routed mode; the ASA does not currently support running some contexts in routed mode and others in transparent mode at the same time. With the default software license, up to two security contexts can be run in addition to an admin context; for more contexts, an additional license must be purchased.


4.3.1 Understanding Multiple Contexts


In multiple security context mode (a.k.a. multimode) the ASA security appliance can be divided into three execution spaces: System execution space Admin context One or more user or customer contexts

4.3.2 System execution space


The system execution space does not have its own functional interfaces and is used to define the attributes of other security contexts such as the admin and user contexts. Three key attributes include the context name, location of the context start-up configuration file and interface allocation.

Note: The system execution space configuration resides in Non-Volatile RAM (NVRAM), whereas the actual configurations for security contexts are stored in local Flash memory, on a network server, or both. Context configurations residing on a network server can be accessed via the TFTP, FTP, HTTPS, or HTTP protocols from the system execution space. Access from the system execution space across the network is provided via the designated admin context.

4.3.3 Admin context


The admin context provides access to network resources. The IP addresses on its allocated interfaces can be used for remote management purposes via protocols such as SSH or Telnet. The ASA also uses these IP addresses to retrieve configurations for other contexts if they are located on a network server. A system administrator with access to the admin context can switch into the other contexts to manage them. The ASA also uses the admin context to send system-related syslog messages to an external syslog server.

Note: The admin context must be created before other contexts are defined. Additionally, it must reside on the local disk. Using the admin context as a regular context for through traffic is not recommended, even though it is possible.

4.3.4 User or customer contexts


User contexts are the virtual firewalls used for customer firewalling purposes in multiple security context mode (multimode). Virtual firewalls have their own specific configurations and are similar to standalone physical firewalls; however, they reside virtually within a single ASA security appliance and therefore share the overall resources of the physical appliance. Organisations leverage virtual firewalls for many reasons, including segregation along organisational and departmental boundaries, reduced rule complexity, or to offer managed firewall services.
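A sketch of the system execution space configuration for multimode; the context names, interface allocations, and config-url locations are hypothetical:

    mode multiple
    admin-context admin
    context admin
     allocate-interface GigabitEthernet0/0
     config-url disk0:/admin.cfg
    context PARTNER-A
     allocate-interface GigabitEthernet0/1
     allocate-interface GigabitEthernet0/2
     config-url disk0:/partner-a.cfg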


4.4 Packet Flow, Shared Interfaces and Classification in Multimode


In multiple security context mode (multimode) the ASA must classify packets to determine which context they should be forwarded to. If all contexts use unique interfaces this is straightforward, as the ASA classifies packets based on the source interface. The ASA also allows interfaces to be shared across multiple security contexts; in this scenario the destination IP addresses on the security contexts must be unique (not overlapping), since the ASA uses them to further identify the target context.

Recommendation: To allow contexts to share interfaces, it is recommended that unique MAC addresses be assigned to each context interface. The MAC address is then used to classify packets to a context. If an interface is shared but unique MAC addresses are not assigned to that interface in each context, the destination IP address is used to classify packets; the destination address is matched against the NAT configuration, and this method has some limitations compared to the unique MAC address method. For this reason, the mac-address auto command will appear in the system configuration.

4.5 Failover Functionality Overview on the ASA


The ASA failover feature allows a standby ASA to take over the functionality of a failed active ASA. When the active unit fails, it changes to the standby state while the standby unit changes to the active state. The two ASAs must have the same major and minor software version, license, and operating mode. The failover configuration requires two identical security appliances connected to each other through a dedicated failover link and, optionally, a stateful failover link. The health of the active interfaces and units is monitored to determine whether specific failover conditions are met; if those conditions are met, failover occurs. For the Molina network, Active/Standby failover will be used as opposed to Active/Active failover. With Active/Standby failover, only one unit passes traffic while the other unit waits in a standby state. Each firewall interface on the primary and secondary firewall is connected to its peer using common VLANs, i.e. the inside interface of the primary unit is connected to the inside interface of the failover unit; this is done by putting the primary and secondary interfaces in the same VLAN on the connected switches. The unit that becomes active assumes the IP addresses and MAC addresses of the failed unit and begins passing traffic, while the unit that is now in standby state takes over the standby IP addresses and MAC addresses. Because network devices see no change in the MAC-to-IP address pairing, no ARP entries change or time out on the network.

4.5.1 Stateful failover


During normal operation, the active module continually passes per-connection stateful information to the standby module every 10 seconds. After a failover occurs, the same connection information is available at the new active module, and supported end-user applications are not required to reconnect to keep the same communication session. The state information passed to the standby module includes the following:
- NAT translation table
- TCP connection states
- UDP connection states (for connections lasting at least 15 seconds)
- HTTP connection states (optional)
- H.323, SIP, and MGCP UDP media connections
- ARP table

4.5.2 Failover and State Links


Two links between the ASAs are required for Active/Standby and stateful failover.
Failover Link - The two units constantly communicate over a failover link to determine the operating status of each unit. Communications over the failover link include the following data:
- The unit state (active or standby)
- Hello messages (also sent on all other interfaces)
- Configuration synchronization between the two units

State Link - To use stateful failover, a stateful failover link must be configured to pass all state information. This link can be the same as the failover link, but a separate link is recommended: the state traffic can be large, and performance is improved with separate links. The IP address and MAC address of the failover and state links do not change at failover. For multiple context mode, the failover and state links reside in the system configuration; these are the only configurable interfaces in the system configuration.
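The following is an illustrative sketch of the Active/Standby failover and state link configuration on the primary unit. The interface names, link names, and addresses are placeholders to be finalized in the low level design.

! Primary unit, system execution space (illustrative sketch only)
failover lan unit primary
failover lan interface FOLINK GigabitEthernet0/3
failover interface ip FOLINK 192.168.254.1 255.255.255.252 standby 192.168.254.2
! Dedicated stateful failover link, separate from the failover link
failover link STATELINK GigabitEthernet0/4
failover interface ip STATELINK 192.168.253.1 255.255.255.252 standby 192.168.253.2
failover

The secondary unit carries the same link and IP commands but is configured with failover lan unit secondary; the remainder of the configuration is synchronized from the active unit.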

4.5.3 Intrusion Detection & Prevention


Intrusion detection products such as the Cisco Intrusion Detection System (IDS) appliance and the Cisco Catalyst 6500 IDS module, and intrusion prevention products such as the Cisco Security Agent (CSA) protect the server farm from attacks that exploit OS and application vulnerabilities. These technologies are complemented by the use of mirroring technologies such as VACLs and RSPAN that allow differentiating traffic on multiple sensors. The Cisco Catalyst 6500 Series Switch combined with the Cisco IDS 4200 Series sensors can provide multi-gigabit IDS analysis. IDS sensors can detect malicious activity in a server farm based on protocol or traffic anomalies, or based on the stateful matching of events described by signatures. An IDS sensor can detect an attack from its very beginning by identifying the probing activity, or it can identify the exploitation of well-known vulnerabilities.


Traffic distribution to multiple IDS sensors can be achieved by using mirroring technologies (RSPAN and VACL) for multi-gigabit traffic analysis. The logical topology shows the IDS placement at the presentation tier and at the database tier. When a web/application server has been compromised and the hacker attacks the database, the second sensor reports the attack. In a consolidated data center environment, servers for the different tiers may be connected to the same physical infrastructure, and multiple IDS sensors can provide the same function as in the logical topology of Figure 4.3.

Figure 4.3 IDS PLACEMENT

In Figure 4.3, IDS1 monitors client-to-web server traffic and IDS2 monitors web/application server-to-database traffic. When a hacker compromises the web/application tier, IDS1 reports an alarm; when a compromised web/application server attacks the database, IDS2 reports an alarm. HTTPS traffic can be inspected if the IDS sensors are combined with a device that provides SSL offload.


Figure 4.4 WEB TO DATABASE SERVER TRAFFIC FLOW

The following sequence takes place:
1. The Multilayer Switch Feature Card (MSFC) receives client-to-server traffic from the data center core.
2. The ACE diverts traffic directed to the VIP address.
3. The ACE sends HTTPS client-to-server traffic to the SSLSM for decryption.
4. The SSLSM decrypts the traffic and sends it in clear text on an internal VLAN back to the ACE.
5. The IDS sensor monitors traffic on this VLAN.
6. The ACE performs the load balancing decision and sends the traffic back to the SSLSM for re-encryption.


5. Load Balancing & Technologies


To achieve the design presented in this document Cisco will utilize multiple load balancing technologies. These technologies include server load balancing (SLB) in routed and one-armed mode, global server load balancing (GSLB) and WAN optimization.

5.1 Server Load Balancing


The ACE server load balancing module can run in one of the following modes:
Routed - The ACE module acts as a router in the network and serves as the default gateway for servers and other hosts that connect to one of its server-facing VLANs. The Cisco Catalyst supervisor module performs any dynamic routing.
Bridged - The same subnet exists on both sides of the ACE module. In this mode the ACE bridges all traffic (load balanced and non-load balanced). The upstream routed interface on the Cisco Catalyst supervisor module acts as the default gateway for the ACE as well as for the servers and other hosts bridging through the ACE.
One-Armed - The ACE module can also operate using a single VLAN for data; this mode is referred to as one-armed. In this mode, all non-load balanced traffic bypasses the load balancer completely. Load balanced traffic must traverse the load balancer on ingress as well as egress, so special consideration must be taken to ensure this occurs. Typically the source IP address of a load balanced packet is modified (source NAT) so that the return path traverses the load balancer.

The solution being provided to Molina Healthcare includes some routed configurations on the ACE server load balancing module as well as some one-armed configurations. The ACE server load balancer can also operate a number of virtual contexts, which allow multiple modes to run concurrently. Each context has its own dedicated hardware resources as well as a distinct configuration.
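Since the specific load balancing rules are still being finalized (see the Caution below), the following is only a generic, illustrative sketch of a one-armed ACE context with client source NAT; the VLAN, VIP, real-server addresses, and object names are placeholders.

! One-armed ACE context (illustrative sketch only)
rserver host WEB1
  ip address 10.10.100.11
  inservice
serverfarm host SF-WEB
  rserver WEB1
  inservice
class-map match-all VIP-HTTP
  2 match virtual-address 10.10.100.100 tcp eq www
policy-map type loadbalance first-match LB-HTTP
  class class-default
    serverfarm SF-WEB
policy-map multi-match CLIENT-VIPS
  class VIP-HTTP
    loadbalance vip inservice
    loadbalance policy LB-HTTP
    ! Source NAT forces return traffic back through the ACE
    nat dynamic 1 vlan 100
interface vlan 100
  ip address 10.10.100.5 255.255.255.0
  nat-pool 1 10.10.100.6 10.10.100.6 netmask 255.255.255.255 pat
  service-policy input CLIENT-VIPS
  no shutdown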

Caution: The specific load balancing rules are still being investigated at this time. Each load balanced application will be reviewed to determine the proper load balancing topology and virtual context.

5.1.1 Routed Mode


The routed mode applications and VLANs are still being investigated at this time.


5.1.2 One-Armed Mode


The one-armed mode applications and VLANs are still being investigated at this time.

5.2 Global Server Load Balancing


The Cisco GSS will be positioned to provide global server load balancing (GSLB). This will provide disaster recovery (DR) in the event the New Mexico data center is unavailable for any reason. Once the DR site is operational, the GSS implementation will be straightforward to deploy.

5.3 WAN Optimization
Cisco Wide Area Application Services (WAAS) is currently deployed at Molina Healthcare. The existing WAAS deployment strategy will remain unchanged in the new data center. When the core servers move to the new data center, the core WAAS appliances will migrate with them.


6. SAN Technologies
The feature recommendations are based on the following state-of-the-art technologies:
- MDS 9513 director with Gen-2 chassis and line cards
- SANTap with RecoverPoint as the integrated data replication solution for VMware and Windows
- NPV and FlexAttach features
- FCIP for a redundant link between data centers

6.1 Feature Recommendations


Based on Molinas requirements and Ciscos best practices the following features have been recommended for deployment at the production site where MDS directors and switches are planned for deployment.
Table 4 Feature Recommendations

VSANs: Provide logical isolation between open systems disk and backup tape access.
Enhanced Device-alias: Provides the ability to associate human readable names with the pWWNs of end devices and to manipulate them easily if a pWWN changes due to device replacement.
Enhanced Zoning: Ensures zones and zonesets are deployed safely with minimal chance of human error.
Port Bandwidth Reservation: Leverages the port density of the deployed hardware without sacrificing performance.
RBAC: Role Based Access Control allows Molina to control the access and management of the various physical and logical entities in the SAN infrastructure being deployed.
CFS: Cisco Fabric Services is used to replicate information between connected switches. Applications supported by CFS include device-alias, NTP, AAA, and roles, to name a few.
Callhome: Provides notification through email of events and incidents happening on the switch in real time. Can be configured to send automatic notification of critical events to Cisco to generate a case and automatically open a support call to resolve the incident.
Port Channels: Provide larger aggregate bandwidth and enhance high availability.
N Port Identifier Virtualization: Allows multiple FCID logins on one N port. Must be enabled for NPV connections.
N Port Virtualizer: Reduces domain ID usage; functions as a linecard extension to an NPIV-enabled switch.
FCIP: Extends the SAN over IP; connects fabrics (switches) over IP using the Fibre Channel protocol.
SANTap: Offloads host traffic; allows network-based traffic splitting to redirect data to RecoverPoint appliances.

6.1.1 VSANs
The proposed SAN design includes the following VSANs based on Molina's requirements and Cisco's best practices. The design allows multiple storage environments to exist within the same fabric; VSANs provide the logical isolation that keeps these environments separate, so changes to the development environment zoning will not affect the production environment. As per Molina's requirement to support a shared storage environment, a backup environment, and a dedicated storage environment within a single fabric, VSANs will be deployed to support these environments. This logical isolation, for example, enables environments that do not require daily zoning, such as tape, to be isolated from those that require daily zoning activities, such as the shared disk and dedicated storage environments. The granularity of VSANs should be limited, typically to production, backup, and dev/test:
- Production VSAN: for all business critical devices, with the highest change management requirements. Do not create separate production VSANs for different applications; this unnecessarily complicates the design and maintenance of the SAN.
- Backup VSAN: backup devices typically require high maintenance (media server reloads, tape drive replacements). High activity for backup devices normally falls opposite the production high activity windows, so maintenance can be done during the day and change management windows are normally not as stringent as for production. A dedicated backup VSAN allows changes to be made without affecting business critical functions.
- Dev/Test VSAN: a physically dedicated test environment would be ideal, providing the ability to test different versions of switch software and features without any impact to the production environment. However, separate equipment is quite costly; using some ports in a separate dev/test VSAN is the next best thing. Host HBAs and drivers can be tested in a SAN environment; the only requirement is the availability of dedicated array ports for the dev/test environment.

Making VSANs more granular by dedicating a Windows VSAN, AIX VSAN, SAP VSAN, HR VSAN, or Domino VSAN does reduce the potential for inadvertent operator error causing widespread damage. However, it requires a large number of available array ports, as each VSAN requires a minimum of two dedicated array ports to satisfy the A and B fabrics. In general, we do not recommend this approach: some VSANs may grossly underutilize an array port's bandwidth, while other VSANs may experience bottlenecks due to the inability to assign additional array ports to that VSAN. What we have proposed is common industry practice. Having a smaller number of large VSANs does require some housekeeping to ease management.


For each fabric, the VSAN numbering scheme uses an offset of 500 between the A and B fabrics. The VSAN ranges for the two data centers are shown in Table 5.
Table 5 VSAN Ranges for the Data Centers

Data Center    Fabric A VSAN range      Fabric B VSAN range
NM             100-200 (sample)????     500-600
HW             400-500                  700-800

Within each fabric pair the numbering scheme will use an offset of 1 between the A and B fabrics: Fabric A will start at VSAN 101 while Fabric B will start at VSAN 102. VSAN 1 will not be used on either fabric for any Fibre Channel fabric. VSANs in the different data centers are not assumed to share resources, except in the case of the replication VSAN; only the replication VSAN shares a common VSAN number. By giving the non-replication VSANs unique VSAN numbers, there is no possibility of an unintentional VSAN merge. The domain IDs will use an odd/even scheme to further differentiate which fabric (A or B) they belong to; the domain ID numbering scheme will be specified in the Low Level Design document. The number of VSANs will also be determined in the Low Level Design document; it will consist of at least two, a production host/disk SAN and a tape backup SAN. All VSANs will have the following characteristics:
- Static, port-based VSAN membership
- Load balancing scheme: src/dst/ox-id
- Static domain IDs
- VSANs defined on all directors and switches
- VSAN 1 used only for administrative purposes and not for host-to-storage access
- FCID persistency configured for all VSANs
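A minimal sketch of the corresponding MDS VSAN configuration is shown below; the VSAN number, name, interface, and domain ID are placeholders pending the Low Level Design.

! Illustrative sketch only
vsan database
  vsan 101 name PROD_A
  vsan 101 loadbalancing src-dst-ox-id
  vsan 101 interface fc1/1
! Static domain ID and persistent FCIDs for the VSAN
fcdomain domain 11 static vsan 101
fcdomain fcid persistent vsan 101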

6.1.2 Devices Aliases


Aliasing of pWWN information is recommended for human readability; the actual device connectivity is always controlled by the underlying pWWN information. Device alias databases are maintained on the switches and synchronized to all switches via CFS. Fabric Manager Server provides a GUI to update or change device alias information on the switches; FMS does not itself maintain device alias information. Enhanced device aliases will be deployed instead of fcaliases, as they are independent of the zoning database and can provide name resolution to applications beyond the zone server. An enhanced device alias also provides an easy fix if an end device is replaced: when the new pWWN is updated in the enhanced device alias, all applications are automatically updated with the new pWWN.
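An illustrative sketch of the enhanced device-alias configuration follows; the alias name and pWWN value are placeholders.

! Illustrative sketch only
device-alias mode enhanced
device-alias database
  device-alias name SERVER1-HBA0 pwwn 21:00:00:24:ff:00:00:01
! Distribute the change to the fabric via CFS
device-alias commit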


6.1.3 Zoning and Zonesets


The proposed SAN design includes the following zoning characteristics based on Molina's requirements and Cisco's best practices. Enhanced Zoning is to be deployed for the VSANs; it requires a minimal learning curve for Molina's SAN team, provides automatic full zoneset distribution and synchronization, and prevents multiple administrators from modifying a VSAN's zoneset at the same time. Each zone is to be configured as a single initiator zone, i.e. one initiator and all of its associated targets in a single zone. The naming convention is as follows:
Zonesets: ZS_PROD_A_100, ZS_REPLICATION_A_200, ZS_PROD_B_101, ZS_REPLICATION_B_201
Zones: Z_SERVERNAME-HBA0X_ARRAY-PORT
Device-aliases: SERVERNAME-HBA0X, ARRAY-PORT
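A minimal sketch of a single initiator zone under enhanced zoning, using the naming convention above, is shown below; the device-alias names and VSAN number are placeholders.

! Illustrative sketch only
zone mode enhanced vsan 101
zone name Z_SERVER1-HBA0_ARRAY-PORT vsan 101
  member device-alias SERVER1-HBA0
  member device-alias ARRAY-PORT
zoneset name ZS_PROD_A_100 vsan 101
  member Z_SERVER1-HBA0_ARRAY-PORT
zoneset activate name ZS_PROD_A_100 vsan 101
! Enhanced zoning: commit distributes the change and releases the fabric-wide lock
zone commit vsan 101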

6.1.4 Security
This high level SAN design describes the necessary security features based on Molina's requirements and Cisco's best practices. The security features are listed by the type of access they protect: management, fabric, or data. The features selected may include multiple items from each security type, or from only one or two security types, again based on Molina's requirements and Cisco's best practices.
Management access (security type):
- Password protected console
- Secure Shell (SSH)
- Secure File Transfer Protocol (SFTP)
- Role Based Access Control (RBAC): each user has a unique username and password, with roles providing the minimal privileges or rules required to perform their job
- SNMPv3 (FM Server) / SNMPv1 (for SRM support)
- TACACS+ for authentication to the MDS infrastructure


Data access (security type):
- VSANs
- Enhanced Zoning
Only Fibre Channel attached devices (HBAs, storage, tape, analyzers) can access the Fibre Channel network. In order for end devices to communicate, administratively controlled access must be granted. End device access security is controlled by allowing specific parameters of those devices (port World Wide Names (pWWN), or specific ports). End devices must be in the correct VSAN, zone, and zoneset; all require administrator authorization. The MDS has a port security feature to defeat port spoofing, which can be enabled to prevent a duplicate pWWN from logging on to the fabric; in general this is not deployed, as data center physical access is tightly controlled. The current tape backup environment has encryption built in on the tape devices. As noted, Cisco also has the capability to encrypt tape; this is a licensed feature and can make use of the 18/4 line cards currently deployed in the Molina network (actual quantities of 18/4s would require an analysis of the backup environment). IPsec can be enabled on FCIP links to provide data encryption over a wide area network; this is normally deployed when running over public networks.

6.1.5 Role Based Access Control (RBAC)


The proposed SAN design includes Role Based Access Control based on Molinas requirements and Ciscos best practices. The following Roles are recommended for the Molinas SAN deployment in the four data centers.
Table 6 Role Based Access Control

Network-admin: For those members of the SAN team that require complete write access to the MDS.
Network-operator: For those personnel that do not require write access to the MDS.
Operations: For those personnel that perform day-to-day operations such as port enabling and zoning.

6.1.6 Logging
This High Level SAN design describes the external logging to be implemented based on Molinas requirements and Ciscos best practices. Logging to be done to the FM Servers syslog service or to a standalone syslog server.


6.1.7 Monitoring
Monitoring of the fabric is to be configured using Cisco Fabric Manager Server. Additional flows need to be configured to monitor the ISLs and critical servers so that Molina can perform performance trending and capacity planning. Custom reports can be configured using the FMS web interface to monitor fabric health as well as inventory. Fabric Manager allows management of multiple fabrics (FMS license required); tabs within FM allow you to quickly toggle between physical fabrics. If the switches do not have an FMS license, only one physical fabric may be opened at a time.

6.1.8 Call Home


The proposed SAN design includes the Call Home feature based on Molina's requirements and Cisco's best practices. The MDS infrastructure will be configured to call home, i.e. send email notification to Cisco TAC and to a predefined email alias that includes members of Molina's staff. Call Home is a mechanism to alert the storage administrators about potential issues on the switches as and when they occur. It needs to be properly configured so that the switch is able to send the required logs and incident report to both Cisco and EMC to help open a trouble ticket and resolve the problem quickly. Call Home is also used to send information about a problem that has occurred to the local storage/SAN administrators through email. To configure Call Home a valid SMTP server is required. It is also helpful to provide contact information and support contract details in the configuration so that, based on the severity of the issue, EMC/Cisco can contact the appropriate personnel for remediation. The Call Home details need to be filled out in the low level design data spreadsheet; the data provided will be used to configure the switch.

6.1.9 Port-Channel
When there are multiple ISLs between any two MDS switches, it is recommended practice to configure them as a port-channel. It is also recommended to turn on the port-channel protocol when configuring the port-channel, and to set the load balancing algorithm for the port-channel to exchange ID based load balancing.
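An illustrative sketch of an ISL port-channel with the port-channel protocol enabled is shown below; the interface and port-channel numbers are placeholders, and the exchange-based (src-dst-ox-id) load balancing is set per VSAN as shown in the sketch in section 6.1.1.

! Illustrative sketch only
interface port-channel 10
  channel mode active
  switchport mode E
interface fc1/1
  channel-group 10 force
  no shutdown
interface fc1/2
  channel-group 10 force
  no shutdown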

6.1.10 N Port Identifier Virtualization


N port identifier virtualization (NPIV) provides a means to assign multiple FC IDs to a single N port. This feature allows multiple applications on the N port to use different identifiers and allows access control, zoning, and port security to be implemented at the application level.


Enabling NPIV on the MDS switch is the only configuration requirement on the switch side; this is a global setting and applies to all VSANs. NPV-enabled end devices are required to connect to an NPIV-enabled switch. NPIV is enabled on all core directors.

6.1.11 N Port Virtualizer


N port virtualization (NPV) reduces domain ID usage. In a large scale deployment with 40 edge switches, the number of domain IDs would otherwise be more than 40. NPV addresses this challenge by having each Fibre Channel edge switch behave like an HBA toward the core, so it does not consume a domain ID. In this design, with NPV, a single fabric uses only two domain IDs.
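A minimal sketch of the NPIV/NPV feature enablement is shown below; note that switching an edge switch into NPV mode is disruptive (the switch configuration is reset), so this is illustrative only and would be scheduled in the implementation plan.

! On the core directors (illustrative sketch only)
feature npiv

! On the edge switches that will operate in NPV mode (disruptive change)
feature npv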

6.1.12 Licences
The proposed High Level Design includes various features which will require the following Product License to be installed:
Table 7 Feature License

Switch: Core Switches
Feature: FM Server, Enterprise, FCIP, SANTap
Product License Required: FM_SERVER_PKG, Enterprise_PKG, SAN_EXTN_OVER_IP, STORAGE_SERVICES_184

6.2 Future SAN


6.2.1 Consolidated IO (Fibre Channel over Ethernet FCoE)
The introduction of blade servers and high performance 1U and 2U servers within the data centre has increased the cable density within the rack footprint, with each host having multiple NICs and HBAs to provide redundancy and service I/O demands. Consolidating the IP and FC traffic onto a single interface, the Converged Network Adapter (CNA), will cut the number of adapters and cables per device in half. Additionally, these consolidated adapters operate at 10Gb, well above the current requirements of today's servers. The CNA adapters pass FC over Ethernet (FCoE) to the Cisco Nexus 5000 switch, which has both 4Gb FC and 10GigE ports for IP connectivity. The Cisco Nexus 5000 products are a top-of-rack design providing the connectivity between the CNA and the IP and FC switches within the data centre. FCoE is a lossless, low latency Layer 2 technology; it is intended for the local data centre and not for wide area networks.


Figure 6.1 FCoE TOPOLOGY

The proposed Molina SAN will seamlessly integrate with the Nexus 5000 switches. In the Molina architecture, the new Nexus switches are a drop-in replacement for the 9124 switches, resulting in an FCoE-capable network. No additional cabling or re-wiring is needed for FCoE.
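For reference, a minimal Nexus 5000 FCoE configuration sketch is shown below; the VLAN/VSAN numbers and interfaces are placeholders and assume a CNA-attached server on Ethernet1/5.

! Illustrative sketch only
feature fcoe
vlan 200
  fcoe vsan 200
interface Ethernet1/5
  switchport mode trunk
  switchport trunk allowed vlan 1,200
  spanning-tree port type edge trunk
! Virtual Fibre Channel interface bound to the CNA-facing Ethernet port
interface vfc5
  bind interface Ethernet1/5
  no shutdown
vsan database
  vsan 200 interface vfc5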


7. Scalability & Virtualization


7.1 DCN Scalability
7.1.1 CORE
The main goal of the Core layer is to switch traffic at very high speed between the various networking modules. The configuration of the Core devices should be kept to a minimum, meaning pure routing and switching. In the interest of scalability, it is recommended that the data center backbone (Core layer) consist entirely of Layer 3 switches. All Aggregation Modules will connect physically to the Core layer using 20G of fiber uplinks (2 x 10G links, one to each Core switch). Layer 3 switched backbones have several advantages over their Layer 2 counterparts:
- Reduced router peering
- Flexible topology with no logical Layer 2 loops
- Better multicast and broadcast control in the backbone
- Scalability to arbitrarily large size

7.1.2 AGGREGATION
The Aggregation layer switches interconnect the Access switches. In order to provide high-speed computing capabilities, 10 Gigabit EtherChannel will be used from the Aggregation to the Access layer. The servers will connect to the Access switches using copper 1 Gig links. The Nexus 7000 is the recommended platform for the Aggregation layer, as it will provide Molina with future bandwidth and port density scalability. This choice of platform will also enable Molina to move toward Cisco Virtual Port Channel (vPC) in the future should the need arise. Four line cards (payload cards) are being used to accommodate the existing environment, while the chassis can support up to eight.

7.1.3 ACCESS
Most of the access architecture is designed to accommodate the 1 Gig physical servers Molina has today. The Nexus 5010 architecture can be repurposed when Molina upgrades the servers from 1 Gig to 10 Gig. The Access switches link to the Aggregation switches in a V-shaped topology, which accounts for both an Aggregation switch failure and a port channel (multiple links) failure. The links will be Layer 2 to allow dot1q tagging and multiple VLANs to be carried between the Aggregation and Access layers. vPC (virtual Port Channel) technology can be used based on bandwidth requirements, and Rapid PVST+ spanning tree will be used for sub-second convergence in case of STP topology changes. HSRP will be used on the Aggregation switches to provide gateway functionality for the servers and to provide predictable server egress traffic.
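A minimal NX-OS sketch of the Layer 2 and gateway settings described above is shown below; the VLAN, addresses, and HSRP group values are placeholders.

! Aggregation switch (illustrative sketch only)
spanning-tree mode rapid-pvst
feature interface-vlan
feature hsrp
interface Vlan100
  no shutdown
  ip address 10.100.0.2/24
  hsrp 100
    priority 110
    preempt
    ip 10.100.0.1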

7.1.4 Services
The services chassis in the Aggregation layer is about 60% populated with service modules and can be scaled further, since service modules such as the ACE can be virtualized for efficient resource utilization.

7.1.5 SAN Scalability


The proposed SAN design includes scalability considerations based on Molina's requirements and Cisco's best practices. The SAN core-edge topology is designed to scale to future needs. Each fabric core hosts 40 edge switches in the current configuration; edge switches can be added to accommodate additional hosts and racks, and storage and tape ports can be expanded by adding more line cards to the core switches. During discussions and interviews with Molina's personnel, an additional 20 to 40 percent increase in standalone and blade servers in the infrastructure is expected over the next few years. The current design accommodates a server density of up to 20 hosts per rack and additional home run core connections on any rack. Edge switches can be added to existing racks for higher connection density, and additional line cards can be added to the core switches for increased high performance connections. The initial design provides eight 4 Gb ISLs between the core directors in the same fabric, for a total 32 Gb port-channel. The core to edge connections are four 4 Gb ISLs per edge switch. Additional links can be added to both core-to-core and core-to-edge connections for increased bandwidth; the additional links can be added non-disruptively to the existing fabric.

7.2 Network Virtualization Technologies


Resource virtualization is a key design principle used throughout the architecture. Virtualization is used to segment production traffic, provide high availability, and increase resource utilization in the network. In the data center Production Module, Layer 3 and Layer 2 port channels are used to virtualize links to achieve high availability and higher bandwidth. Virtualization of service modules such as the Application Control Engine (ACE) and of firewall appliances such as the ASA is used to securely leverage the same physical device for different application tiers that require policy separation; ACE and ASA provide both data and control plane separation, which ensures that the traffic and traffic policies of different contexts do not mix. Some of the newer virtualization technologies recommended for use in Molina's DCN are described below.


7.2.1 Virtual Port Channel ( vPC )

Figure 7.1 vPC ARCHITECTURE (figure labels: 16 links in the vPC; 8 links per hardware port channel; 16 links per hardware port channel on the Nexus 5000)

A virtual port channel (vPC) allows links that are physically connected to two different Cisco Nexus 7000 Series devices to appear as a single port channel to a third device (see Figure 7.1). The third device can be a switch, server, or any other networking device that supports port channels. A vPC can provide Layer 2 multipathing, which allows you to create redundancy and increase bisectional bandwidth by enabling multiple parallel paths between nodes and load balancing traffic across them.
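A minimal NX-OS vPC sketch on one of the Nexus 7000 peers is shown below; the domain, keepalive addresses, and port-channel numbers are placeholders.

! Illustrative sketch only (one vPC peer)
feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management
interface port-channel 1
  switchport
  switchport mode trunk
  vpc peer-link
interface port-channel 20
  switchport
  switchport mode trunk
  vpc 20
interface Ethernet1/1
  switchport
  switchport mode trunk
  channel-group 20 mode active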

...

7.2.2 Virtual Device Context ( VDC )


The Cisco NX-OS software supports VDCs, which partition a single physical device into multiple logical devices that provide fault isolation, management isolation, address allocation isolation, service differentiation domains, and adaptive resource management. A VDC instance can be managed within a physical device independently. Each VDC appears as a unique device to the connected users. A VDC runs as a separate logical entity within the physical device, maintains its own unique set of running software processes, has its own configuration, and can be managed by a separate administrator. VDCs also virtualize the control plane, which includes all those software functions that are processed by the CPU on the active supervisor module. The control plane supports the software processes for the services on the physical device, such as the routing information base (RIB) and the routing protocols.


Figure 7.2 VDC ON NEXUS 7K (VDC-1 Production, VDC-2 Test/Dev, VDC-3, VDC-4)

When a VDC is created, the NX-OS software takes several of the control plane processes and replicates them for the VDC. This replication of processes allows VDC administrators to use virtual routing and forwarding instance (VRF) names and VLAN IDs independent of those used in other VDCs. Each VDC administrator essentially interacts with a separate set of processes, VRFs, and VLANs. Molina can leverage this technology to provide path isolation between the production and test/dev environments.
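An illustrative sketch of creating a test/dev VDC on a Nexus 7000 is shown below; the VDC name and interface range are placeholders.

! Default (admin) VDC - illustrative sketch only
vdc TESTDEV id 2
  allocate interface Ethernet2/1-8
! Move into the new VDC to configure it independently
switchto vdc TESTDEV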

7.3 Server Virtualization


Molina currently has a server environment which uses virtualization technologies to maximize its server infrastructure. Server virtualization, similar to network virtualization, can help reduce total cost of ownership and operational cost. Deploying VMware requires a well thought out network architecture design. Some of the key areas that need to be considered when deploying VMware:
- Virtualization ratio
- VMware VMotion
- VMotion network
- Console network
- VMware host placement

7.3.1 Deploying VMWare


The following are best practice recommendations when deploying VMware:
- Distribute same-application Virtual Machines (VMs) across multiple physical hosts
- Distribute hosts across cabinets and Access switches
- Separate Console and VMotion traffic from production traffic where possible
- VMs participating in VMotion should be on hosts within the same L2 domain
- VMs within a host should be in the same security domain
- RPVST+ should be used on Access switches
- In blades, an inverted-U shaped design should be used to maximize throughput
- Configure broadcast storm control for VMotion VLANs to avoid a broadcast storm from an active host affecting a non-active host (a configuration sketch follows below)
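A minimal sketch of broadcast storm control on an access-layer server port is shown below; the interface and threshold are placeholders to be set per Molina's policy.

! Nexus 5000 access port carrying the VMotion VLAN (illustrative sketch only)
interface Ethernet1/10
  storm-control broadcast level 5.00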

7.3.2 VMWare with Standalone Server


When deploying VMWare, VMotion network and Console network requirements will play a critical role in determining how many uplinks are needed for the server. VMWare recommendation is to have VMotion and Console networks in their own dedicated VLANs.

7.3.3 VMWare with Blade Server


When deploying VMware with blade servers, the VMotion network and Console network requirements will play a critical role in determining how the uplinks from the blade switches are assigned. VMware's recommendation is to have the VMotion and Console networks in their own dedicated VLANs. The topology above shows VMware integration with a blade chassis having integrated switches. In Molina's environment the blade chassis will be deployed in pass-through mode, such that all server ports are carried over to the Nexus 5k Access switch; all Layer 2 features and functionality of the Nexus 5k will be used to support the server blades directly. Deploying blade servers in a data center poses different challenges than standalone servers. Some of the key challenges are:
- Power: power consumption per rack increases tremendously as more servers are packed into a single rack
- Cooling: when more power is consumed per rack, more heat is generated, and data center cooling must be designed to accommodate the extra heat generated by the blade servers
- Cabling: blade servers introduce different cabling requirements than standard standalone servers; since multiple servers are consolidated into a single blade chassis and connected to blade switches, a new cabling and network design approach needs to be considered
- Network design: a new Access layer design is needed to optimize the throughput of a blade center while still providing redundancy


7.4 Access Layer Architecture for Blade Server


Typically, a server is connected in a V-shaped topology to the Access switches to provide redundancy, as shown in Figure 7.3. This topology is fine for a standalone server, as the backup link will be in non-forwarding mode and the server still has a dedicated single uplink to the switch.

Figure 7.3 TYPICAL SERVER CONNECTIVITY

However, with blade servers, all servers in the chassis share the same uplinks. These uplinks consist of 8 ports, with 4 ports coming out of each blade switch. With this in mind, it is best to consider a design that allows the uplinks of both blade switches to be in the forwarding state to maximize throughput. Figure 7.4 shows the proposed architecture for Molina's blade server environment.

Figure 7.4 BLADE SERVER ARCHITECTURE WITH FLIPPED-U DESIGN AND vPC

The flipped-U topology design will maximize the throughput of the blade servers and at the same time provide the required redundancy. For this architecture to work seamlessly, the server chassis should support VLANs, dot1q trunking, RPVST+ (spanning tree protocol), and trunk tracking.


The servers on the blades should be deployed with NIC teaming, with the primary NICs on the blades split across the blade switches. The blade switches should support uplink/downlink tracking, such that if the uplinks from an integrated switch go down, the server primary links can be shut down so that the server starts using the alternate path without impacting user traffic. In Molina's new data center, the blade chassis will be used in pass-through mode, where the blades connect to the Nexus 5010 Access switches directly. Since the blades connect directly to the Nexus 5010, vPC can be leveraged and cross-chassis port channels (LACP, Link Aggregation Control Protocol) can be deployed for higher bandwidth gains.


8. Management
The primary goal of the management module is to facilitate the secure management of all devices and hosts within the enterprise network architecture. Because the management network has administrative access to nearly every area of the network, it can be a very attractive target to hackers. The management module is key for any network security management and reporting strategy. It provides the servers, services, and connectivity needed for the following:
- Device access and configuration
- Event collection for monitoring, analysis, and correlation
- Device and user authentication, authorization, and accounting
- Device time synchronization
- Configuration and image repository
- Network access control manager and profilers
The management firewall provides granular access control for traffic flows between the management hosts and the managed devices for in-band management. The firewall also provides secure VPN access to the management module for administrators located at the campus, branches, and other places in the network. The following are some of the expected threat vectors affecting the management module:
- Unauthorized access
- Denial-of-Service (DoS) and Distributed DoS (DDoS)
- Man-in-the-Middle (MITM) attacks
- Privilege escalation
- Intrusions
- Network reconnaissance
- Password attacks
- IP spoofing

The OOB network segment hosts console servers, network management stations, AAA servers, analysis and correlation tools, NTP, FTP, syslog servers, network compliance management, and any other management and control services. A single OOB management network may serve all the enterprise network modules located at the headquarters. An OOB management network should be deployed using the following best practices:
- Provide network isolation
- Enforce access control
- Prevent data traffic from transiting the management network

The OOB management network is implemented at the headquarters using dedicated switches that are independent and physically disparate from the data network. The OOB management may also be logically implemented with isolated and segregated VLANs. Routers, switches, firewalls, IPS, and other network devices connect to the OOB network through dedicated management interfaces. The management subnet should operate under an address space that is completely separate from the rest of the production data network. This facilitates the enforcement of controls, such as making sure the management network is not advertised by any routing protocols, and enables the production network devices to block any traffic from the management subnets that appears on the production network links. Devices managed by the OOB management network at the headquarters connect to the management network using a dedicated management interface or a spare Ethernet interface configured as a management interface. The interface connecting to the management network should be a routing protocol passive-interface, and the IP address assigned to the interface should not be advertised in the internal routing protocol used for the data network. Access lists using inbound and outbound access-groups are applied to the management interface to allow access to the management network only from the IP address assigned to the management interface and, conversely, to allow access from the management network only to that management interface address. In addition, only the protocols needed for the management of these devices are permitted; these could include SSH, NTP, FTP, SNMP, TACACS+, etc. Data traffic should never transit the devices using the connection to the management network.

8.1 Network Management


8.1.1 SNMP
All Cisco switches in this design are capable of running SNMPv3, which provides authentication and privacy; it is therefore recommended that all devices be configured using SNMPv3 parameters. SNMP configuration should follow Molina's network management policies.
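A minimal NX-OS SNMPv3 user sketch is shown below; the username, group, and pass phrases are placeholders.

! Illustrative sketch only
snmp-server user dcnm-monitor network-operator auth sha Auth-Pass-Phrase priv aes-128 Priv-Pass-Phrase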

8.1.2 SSH/Telnet
SSH should be the preferred method of remote access to all switches in the data center. The switches support SSHv2, and any open source client can be used to access them via SSH. SSH keys are generated locally on the switches and are time based. SSH users can be authenticated locally or via AAA services using the TACACS+ or RADIUS protocol. User access should be limited using IP access lists so that access to the switches is allowed only from the network administration subnets. All remote access configuration should conform to Molina's remote access policy.

8.1.3 Logging
Any device logging configuration should be done according to Molina's existing network logging standards. Cisco switches support both local logging and remote logging to syslog hosts; a configuration template will be developed as part of the LLD (low level design) for the actual deployment of the switches.
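An illustrative NX-OS remote logging sketch is shown below; the syslog server address and severity level are placeholders pending Molina's logging standards.

! Illustrative sketch only
logging server 10.20.30.40 6
logging timestamp milliseconds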

8.1.4 NTP
All switches in the new core network will be configured with NTP source for clock synchronization. NTP best practices white paper is available at the following URL: http://www.cisco.com/en/US/tech/tk869/tk769/technologies_white_paper09186a0080117070.shtml


If there are any existing NTP configuration best practices adopted at Molina, those should be taken into consideration for deployment in the new core infrastructure.
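A minimal NTP client sketch is shown below; the server addresses are placeholders for Molina's existing time sources.

! Illustrative sketch only
ntp server 10.20.30.50 prefer
ntp server 10.20.30.51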

8.1.5 RBAC/AAA/TACACS+
In NX-OS, RBAC, AAA, and TACACS+ (as well as RADIUS) are integrated with each other. The AAA feature allows administrators to verify the identity of, grant access to, and track the actions of users managing an NX-OS device. Cisco NX-OS devices support the Remote Authentication Dial-In User Service (RADIUS) and Terminal Access Controller Access Control System Plus (TACACS+) protocols. Cisco NX-OS devices perform local authentication or authorization using the local database, or remote authentication or authorization using one or more AAA servers. A preshared secret key provides security for communication between the NX-OS device and the AAA servers; a common secret key can be configured for all AAA servers or for a specific AAA server only.
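An illustrative NX-OS TACACS+ AAA sketch follows; the server address, shared key, and group name are placeholders.

! Illustrative sketch only
feature tacacs+
tacacs-server host 10.20.30.60 key TacacsSharedSecret
aaa group server tacacs+ TACACS-SERVERS
  server 10.20.30.60
aaa authentication login default group TACACS-SERVERS
aaa accounting default group TACACS-SERVERS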

8.2 Management Technologies


8.2.1 Cut-Through Proxy (Management Firewall)
The Cut-through proxy feature significantly improves performance compared to a traditional proxy server. The performance of a traditional proxy server suffers because it analyses every packet at the application layer of the OSI model. The security appliance cut-through proxy challenges a user initially at the application layer and then authenticates against standard AAA servers or the local database. After the security appliance authenticates the user, it shifts the session flow, and all traffic flows directly and quickly between the source and destination while maintaining session state information.

8.2.2 DCNM
DCNM is a management solution that maximizes overall data center infrastructure uptime and reliability, which improves business continuity. Focused on the management requirements of the data center network, DCNM provides a robust framework and rich feature set that fulfils the switching needs of present and future data centers. In particular, DCNM automates the provisioning process. DCNM is a solution designed for Cisco NX-OS-enabled hardware platforms. Cisco NXOS provides the foundation for the Cisco Nexus product family, including the Cisco Nexus 7000 Series.


8.2.3 ANM
Cisco Application Networking Manager (ANM) software enables centralized provisioning, operations, and basic monitoring of Cisco data center networking equipment and services. It focuses on providing provisioning capability for Cisco Application Control Engine (ACE) devices, including ACE modules and appliances, and also supports operations management and monitoring for ACE devices. Cisco ANM simplifies Cisco ACE provisioning through forms-based configuration management of Layer 4-7 virtualized network devices and services. With Cisco ANM, network managers can create, modify, and delete all virtual contexts of Cisco ACE, as well as control the allocation of resources among the virtual contexts.

8.2.4 CSM
Cisco Security Manager is a powerful but easy-to-use solution for configuring firewall, VPN, and intrusion prevention system (IPS) policies on Cisco security firewalls, routers, and appliances. To deal with the complexity of different security devices, operating systems, and configuration interfaces, Cisco Security Manager has been designed to act as a layer of abstraction. The result is an application with usability enhancements that deliver a superior look and feel to simplify the process of scalable policy definition and deployment. For example, if a network or security administrator wants to implement a policy of limited instant-messaging traffic during business hours, they can do so in a series of simple clicks. The user experience is the same regardless of the actual security device type that is enforcing the rule-whether it is a Cisco PIX firewall, a Cisco IOS Software-based integrated services router, a Cisco ASA adaptive security appliance, or a Cisco Catalyst switch services module. Cisco Security Manager helps administrators reduce user error and maintain consistent policies across the secure network infrastructure.

8.2.5 Fabric Manager


Fabric Manager Server simplifies centralized SAN fabric management. It provides fabric configuration, device configuration, network performance monitoring, hot spot traffic analysis, and historical trending, and can manage multiple fabrics at the same time. Flows can be configured to monitor the ISLs and critical servers so that Molina can perform performance trending and capacity planning. Custom reports can be configured using the FMS web interface to monitor fabric health as well as inventory. SAN directors and switches have a console port and an out-of-band management network port. User authentication can be integrated into the existing TACACS+ or RADIUS security infrastructure.


9. Rack Design
9.1 Data Center Sizing No of Servers & Server NICs
A key component of the data center architectural development methodology is to understand the number of servers and the number of NICs per server in each server Aggregation Module (a.k.a. POD) within the data center. The following tables define the server and NIC counts for the 1GS and HW (Hughes Way) sites. 1GS SERVER DETAILS:


(Backup NIC, management NIC, and HBA details are to be provided by Molina.)
- Total number of physical servers: 291
- Total number of virtual servers: 205
- Physical servers with a single NIC: 154 (154 production NICs)
- Physical servers with dual NICs: 104 (208 production NICs)
- Physical servers with 3 NICs: 8 (24 production NICs)
- Physical servers with 4 NICs: 2 (8 production NICs)
- Physical servers with 5 NICs: 1 (5 production NICs)
- Physical servers with 9 NICs: 7 (63 production NICs)
- Physical servers with 11 NICs: 3 (33 production NICs)
- Servers with unknown NIC count: 12 (24 production NICs, assuming 2 NICs per server)

Totals:
- Total number of servers: 291
- Total number of production NICs: 519
- Total number of management NICs (servers): 198
- Total number of appliances: 76
- Total number of appliance NICs: 152 (assuming each appliance has two NICs)
- Total NIC count: 869

HUGHES WAY SERVER DETAILS:


(Production, backup, and management NIC and HBA details are to be provided by Molina.)
- IBM: 37 physical servers, 77U rack space
- DELL: 11 physical servers, 41U rack space
- HP: 8 physical servers, 26U rack space
- Unisys: 3 physical servers, 8U rack space
- Total number of physical servers: 59 (152U rack space)
- Total number of virtual servers: 29
- Total number of servers: 88

Table 8- 1GS & HWS SERVER DETAILS

Caution: These statistics were captured from the 1GS Hardware Inventory v1.3 (10-06-2009). The 1GS Hardware Inventory v1.3 has been sent to the customer for validation and is yet to be confirmed by Molina. These counts may change under the following conditions: 1. Servers on the non-migration list may move to the migration list. 2. New appliances may need to be hosted in the new data center. 3. New servers may be hosted in the new data center. 4. The above count includes the appliance list as per DCOPS_IP_Address_List 6-02-2009.xlsx; the NIC count of these appliances needs to be explored. 5. Any of the assumptions change.

REQUIRED RACK SPACE
1. 1GS
   Server based: 291 physical servers / 16 servers per rack = 18.18 racks ~= 4 PODs
   Rack unit based: 428 U / 32 U per rack = 13.3 racks ~= 3 PODs
2. HWS
   Server based: 59 physical servers / 16 servers per rack = 3.68 racks ~= 1 POD
   Rack unit based: 152 U / 32 U per rack = 4.75 racks ~= 1 POD


9.2 RACK & POD Design


9.2.1 Rack Space Division
Rack space available = 48; total rack space available = 50 racks (45 RU each)
Access racks = 40 racks / 5 racks per POD = 8 PODs
Network racks = 8

9.2.2 POD Assignments


Total number of PODs = 8
Number of Access PODs = 6
  TYPE 1: 5 PODs for 1 Gig server connectivity
  TYPE 2: 1 POD for 10 Gig server connectivity
TYPE 1 POD ASSIGNMENTS
  Production servers = 4 PODs
  Test/Dev = 1 POD
  Voice = 1 POD
  Empty = 1 POD

TYPE 2 POD ASSIGNMENTS
  10 Gig future servers = 1 POD

Caution: VOICE POD: no information is available; assuming POD TYPE 1 without SAN switches and Layer 2 connectivity only.


9.2.3 POD Design


POD TYPE 1 :
Figure 9.1 RACK DESIGN (POD TYPE 1, 1 Gig rack group with SAN: five 46 U racks with 2U server slots, built from Cisco Nexus 2148T 1GE fabric extenders and MDS 9134 multilayer fabric switches; Rack 3 is the middle-of-row rack. Per-rack cable callouts: Racks 1/2/4/5 - 2K x 3 = 6, MDS x 2 = 8; Rack 3 - MDS x 2 = 8, 5K x 2 = 8, 5K x 1 = 4~8.)

POD TYPE 1 (1 Gig rack group with SAN):
- Total rack space per POD = 45 RU x 5 racks = 225 RU
- Total rack space required by network equipment = 28 RU per POD
- Bandwidth available per rack = 40 Gig (connectivity based on 16 servers per rack provisioned at 20 Gig)
- Racks 1, 2, 4 and 5 have a similar TOR design, which interfaces with connections to the MOR (Rack 3) and connections to the SAN core
  - Total network space required per rack = 5 RU
  - Cables per rack for Racks 1, 2, 4, 5: (2K x 3 = 6) + (MDS x 2 = 8) = 14 cables
- Rack 3 is the MOR for the POD, which interfaces with connections from the intra-POD racks, connections to the DCN Aggregation, and connections to the SAN core
  - Total network space required per rack = 8 RU
  - Cables for Rack 3: intra-POD (2K x 3 = 6 cables x 4 racks) = 24 cables; MDS x 2 = 8 cables; Access to Aggregation (5K x 2) = 8 to 16 cables; to MNET core = 2 cables

Rack/POD TYPE 2: each rack connects individually to the aggregation switches.

Figure 9.2 10G RACK DESIGN (POD TYPE 2, 10 Gig rack group: five 46 U racks with 2U server slots and MDS 9134 multilayer fabric switches as TOR; each rack uplinks individually to the aggregation switches.)

10 GIG RACK GROUP

Total rack space = 46 RU x 5 racks = 230 RU
Total rack space required by network equipment = 16 RU (if doing FCoE)
Total rack space required by network equipment = 24 RU (with MDS as ToR)
Bandwidth available per rack = 10 Gig ports at line rate
Cables per rack:
- Access to aggregation = TBD (max 16)


- Access to SAN (FCoE) = TBD
- Access to SAN (MDS x 2) = 8 cables
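For comparison, the same arithmetic for the 10 GbE rack group is sketched below. Only the values stated above are used; the counts still marked TBD in the design are carried through as bounds rather than invented, and the constant names are illustrative assumptions.

# Illustrative sketch: 10 GbE rack-group space and the cable counts that are
# already fixed above; aggregation and FCoE cabling remain TBD in the design.

RACKS_PER_POD = 5
RU_PER_RACK = 46

NET_RU_FCOE_OPTION = 16          # network RU if SAN access converges on FCoE
NET_RU_MDS_TOR_OPTION = 24       # network RU with MDS 9134 as a separate SAN ToR

print("Rack space per POD:", RACKS_PER_POD * RU_PER_RACK, "RU")   # 230 RU
print("Network RU (FCoE option):", NET_RU_FCOE_OPTION, "RU")
print("Network RU (MDS-as-ToR option):", NET_RU_MDS_TOR_OPTION, "RU")
print("SAN cables per rack (MDS x 2):", 8)
print("Access-to-aggregation cables per rack: TBD (max 16)")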


10 Document Acceptance
Name    Title    Company    Signature    Date
Name    Title    Company    Signature    Date
Name    Title    Company    Signature    Date
Name    Title    Company    Signature    Date
Name    Title    Company    Signature    Date
Name    Title    Company    Signature    Date


Appendix A

Cisco incorporates Fastmac and TrueView software and the RingRunner chip in some Token Ring products. Fastmac software is licensed to Cisco by Madge Networks Limited, and the RingRunner chip is licensed to Cisco by Madge NV. Fastmac, RingRunner, and TrueView are trademarks and in some jurisdictions registered trademarks of Madge Networks Limited. Copyright 1995, Madge Networks Limited. All rights reserved. Xremote is a trademark of Network Computing Devices, Inc. Copyright 1989, Network Computing Devices, Inc., Mountain View, California. NCD makes no representations about the suitability of this software for any purpose. The X Window System is a trademark of the X Consortium, Cambridge, Massachusetts. All rights reserved.

NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED AS IS WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

AccessPath, AtmDirector, Browse with Me, CCDE, CCIP, CCSI, CD-PAC, CiscoLink, the Cisco NetWorks logo, the Cisco Powered Network logo, Cisco Systems Networking Academy, Fast Step, Follow Me Browsing, FormShare, FrameShare, GigaStack, IGX, Internet Quotient, IP/VC, iQ Breakthrough, iQ Expertise, iQ FastTrack, the iQ logo, iQ Net Readiness Scorecard, MGX, the Networkers logo, Packet, RateMUX, ScriptBuilder, ScriptShare, SlideCast, SMARTnet, TransPath, Unity, Voice LAN, Wavelength Router, and WebViewer are trademarks of Cisco Systems, Inc.; Changing the Way We Work, Live, Play, and Learn, Discover All That's Possible, and Empowering the Internet Generation, are service marks of Cisco Systems, Inc.; and Aironet, ASIST, BPX, Catalyst, CCDA, CCDP, CCIE, CCNA, CCNP, Cisco, the Cisco Certified Internetwork Expert Logo, Cisco IOS, the Cisco IOS logo, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Enterprise/Solver, EtherChannel, EtherSwitch, FastHub, FastSwitch, IOS, IP/TV, LightStream, MICA, Network Registrar, PIX, Post-Routing, Pre-Routing, Registrar, StrataView Plus, Stratm, SwitchProbe, TeleRouter, and VCO are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and certain other countries. All other trademarks mentioned in this document or Web site are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0105R)

INTELLECTUAL PROPERTY RIGHTS: THIS DOCUMENT CONTAINS VALUABLE TRADE SECRETS AND CONFIDENTIAL INFORMATION OF CISCO SYSTEMS, INC. AND ITS SUPPLIERS, AND SHALL NOT BE DISCLOSED TO ANY PERSON, ORGANIZATION, OR ENTITY UNLESS SUCH DISCLOSURE IS SUBJECT TO THE PROVISIONS OF A WRITTEN NON-DISCLOSURE AND PROPRIETARY RIGHTS AGREEMENT OR INTELLECTUAL PROPERTY LICENSE AGREEMENT APPROVED BY CISCO SYSTEMS, INC. THE DISTRIBUTION OF THIS DOCUMENT DOES NOT GRANT ANY LICENSE IN OR RIGHTS, IN WHOLE OR IN PART, TO THE CONTENT, THE PRODUCT(S), TECHNOLOGY OR INTELLECTUAL PROPERTY DESCRIBED HEREIN.

Cisco Advanced Services, DCN. Copyright 2007, Cisco Systems, Inc. All rights reserved.

High Level Design Document (HLD) v1.0

COMMERCIAL IN CONFIDENCE.

Corporate Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 526-4100

European Headquarters Cisco Systems Europe 11 Rue Camille Desmoulins 92782 Issy-Les-Moulineaux Cedex 9 France www-europe.cisco.com Tel: 33 1 58 04 60 00 Fax: 33 1 58 04 61 00

Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA www.cisco.com Tel: 408 526-7660 Fax: 408 527-0883

Asia Pacific Headquarters Cisco Systems Australia, Pty., Ltd Level 9, 80 Pacific Highway P.O. Box 469 North Sydney NSW 2060 Australia www.cisco.com Tel: +61 2 8448 7100 Fax: +61 2 9957 4350

Cisco Systems has more than 200 offices in the following countries and regions. Addresses, phone numbers, and fax numbers are listed on the Cisco Web site at www.cisco.com/go/offices.
Argentina Australia Austria Belgium Brazil Bulgaria Canada Chile China Colombia Costa Rica Croatia Czech Republic Denmark Dubai, UAE Finland France Germany Greece Hong Kong SAR Hungary India Indonesia Ireland Israel Italy Japan Korea Luxembourg Malaysia Mexico The Netherlands New Zealand Norway Peru Philippines Poland Portugal Puerto Rico Romania Russia Saudi Arabia Singapore Slovakia Slovenia South Africa Spain Sweden Switzerland Taiwan Thailand Turkey Ukraine United Kingdom United States Venezuela Vietnam Zimbabwe
