
Network Integration

ISR4k with UCS-E SM-X Module


[Figure: ISR4k architecture with a UCS-E SM-X module. The route/forwarding processor runs the IOS-XE control and data planes and connects the NIM/SM modules and WAN Gigabit interfaces over a multi-gigabit fabric; an internal 10 GE / 2x1 GE backplane links it to the UCS-E module, whose x86 processor hosts a Linux hypervisor, vSwitch, and App VNFs, with a BMC (CIMC, 1 GE) and external GE ports on the E-Series server]
Cisco UCS E-Series server
Network options to steer VM traffic

• Routing traffic to and from VMs via backplane or external ports


• Best practice depends on the application:
 • Network functions (e.g. WAAS, FireSight) typically sit in the path of the router WAN/LAN interfaces: use the backplane
 • User applications (POS, AD/DNS, print/file) only need access to the LAN: use the external ports

ISR4000 backplane config examples:


Using IP unnumbered (basic config):
interface g0/0/0
 ip address 10.0.0.1 255.0.0.0
!
ucse subslot 1/0
 imc access-port shared-lom console
 imc ip address 10.0.0.2 255.0.0.0 default-gateway 10.0.0.1
!
interface ucse 1/0/0
 ip unnumbered g0/0/0
!
ip route 10.0.0.2 255.255.255.255 ucse 1/0/0

Using an SVI:
interface vlan 1
 ip address 10.0.0.1 255.0.0.0
!
ucse subslot 1/0
 imc access-port shared-lom console
 imc ip address 10.0.0.2 255.0.0.0 default-gateway 10.0.0.1
!
platform switchport 0 svi
! Enabling/disabling the SVI on the UCSE needs an OIR or router reload
!
interface ucse 1/0/0
 switchport mode trunk
!
! Best-practice spanning-tree enablement and config:
spanning-tree mode rapid-pvst
spanning-tree vlan 1 priority 24576

Using a dedicated subnet:
ucse subslot 1/0
 imc access-port shared-lom console
 imc ip address 10.0.0.2 255.255.255.0 default-gateway 10.0.0.1
!
interface ucse 1/0/0
 ip address 10.0.0.1 255.255.255.0
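
A few standard IOS verification commands for the examples above (the addresses follow the dedicated-subnet example):

show interfaces ucse1/0/0   ! backplane interface state
show ip route 10.0.0.2      ! route toward the CIMC/server address
ping 10.0.0.2               ! confirms reachability across the backplane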
Cisco UCS E-Series server
How server interfaces map to virtual NICs
[Figure: the same server interfaces as they appear in the VMware ESXi networking settings, in Cisco IOS/IOS-XE, and in the CIMC CLI/GUI]

Note: A double-wide UCS E-Series server will have a fourth interface labeled ge3. This is an external-facing interface that maps to vmnic3 on the virtual network side.
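
One way to confirm this mapping from the ESXi side is a standard esxcli query; the MAC addresses listed should line up with those shown in CIMC:

esxcli network nic list    # lists vmnic0..vmnicN with driver, MAC address, and link state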

• The backplane network interfaces can be monitored via the installed OS and router monitoring features
• The backplane interfaces support BDI, sub-interfaces, VLANs, SVIs, and other IOS/IOS-XE features (a minimal sketch follows this list)
• The external front-facing network interfaces are only accessible by the server and can only be monitored by the installed OS
• On a double-wide server you can configure NIC teaming on the two front-facing interfaces to create redundancy or increase bandwidth
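
As a minimal sketch of the sub-interface support mentioned above (the VLAN ID and address here are illustrative assumptions):

interface ucse1/0/0.50
 encapsulation dot1Q 50
 ip address 192.0.2.1 255.255.255.0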
Service Chaining Applications
[Figure: service chaining path through the Cisco ISR chassis and the UCS-E server module. On the motherboard, GE 0/0/0 (WAN, WCCP IN) and the backplane interfaces UCSE1/0/0 (BDI 10) and UCSE1/0/1 (BDI 20) connect to GE 0 and GE 1 on the UCS-E server module; on the ESX host, vmnic0 and vmnic1 feed vSwitch0/vSwitch1/vSwitch2, which host vWAAS, vASA (outside/inside vNICs), vWSA, and vWLC, while vmnic2 serves the external GE 2 port toward the LAN switch]

1. Ingress WAN traffic from the ISR WAN port is redirected to vWAAS running on the UCS-E.
2. vWAAS redirects traffic back to the ISR router.
3. Standard routing is used to route traffic from vWAAS over BDI/VLAN 20 to the UCS-E blade.
4. Traffic is routed to the vASA outside interface, which sits on its own internal vSwitch.
5. Traffic is filtered, and only authorized traffic is allowed out to the vASA inside network.
6. vWSA and miscellaneous LAN apps are installed behind the firewall so they are accessible to LAN devices.
7. All LAN traffic accesses the LAN apps through the physical external GE 2 port on the UCS-E module.
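
A minimal IOS-XE sketch of steps 1-3 above (the WCCP service IDs and the VLAN numbers follow the figure; the IP addresses are illustrative assumptions, and vWAAS must separately register as a WCCP client):

ip wccp 61
ip wccp 62
!
interface GigabitEthernet0/0/0
 description WAN interface
 ip wccp 61 redirect in                 ! step 1: redirect ingress WAN traffic toward vWAAS
!
interface ucse1/0/0.10
 description backplane VLAN toward vWAAS (BDI/VLAN 10 in the figure)
 encapsulation dot1Q 10
 ip address 10.10.10.1 255.255.255.0    ! illustrative address
!
interface ucse1/0/1.20
 description backplane VLAN 20 toward the vASA outside interface (steps 3-4)
 encapsulation dot1Q 20
 ip address 10.10.20.1 255.255.255.0    ! illustrative address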
Configuration Example: vWAAS + vNGIPS
[Figure: vWAAS + vNGIPS topology. WAN traffic enters intfc GE 0/0/0 (description "WAN intfc") in the ISR global route table. Intfc ucse 1/0/0.10 (description "wccp WAAS") connects to vWAAS. Intfc ucse 1/0/0.100 (description "GRT LAN", 192.168.24.1/30) passes traffic through the SourceFire IPS to the ISR VRF "inside" side at 192.168.24.2/30, and intfc ucse 1/0/1.200 (description "VRF inside LAN", 192.168.25.1/24) serves the LAN access switch (10.0.1.0/16)]
Using IP SLA and an EEM script to provide fail-open backup if the IPS service fails

• IP SLA continuously monitors the connection across FirePower
• If connectivity fails, the EEM script moves the LAN-facing router GE interface into the "global route table" and sends a "fail" email notification
• During an IPS failure, LAN devices can still reach the outside but have no IPS/IDS protection
• Once the IPS is back online, the IP SLA ping succeeds and activates a second EEM script
• The second EEM script reconfigures the LAN-facing router GE interface back into the "vrf inside" to force traffic across FirePower
Cisco IP SLA ping and EEM script reference

IP SLA ping config:
track 1 ip sla 1
 delay down 3
!
ip sla 1
 icmp-echo 192.168.24.1 source-ip 192.168.24.2
 vrf inside
 threshold 500
 timeout 1000
 frequency 2
ip sla schedule 1 life forever start-time now
!
end

IPS down EEM script config:
event manager environment _email_to your-to-mail@domain.com
event manager environment _email_server your.mail.server
event manager environment _email_from your-from-mail@domain.com
!
event manager applet ipsla_ping-down
 event syslog pattern "1 ip sla 1 state Up -> Down"
 action 1.0 cli command "enable"
 action 1.5 cli command "config term"
 action 2.0 cli command "interface g0/0/2"
 action 2.5 cli command "no ip vrf forwarding"
 action 2.6 cli command "ip address 192.168.25.1 255.255.255.0"
 action 2.7 cli command "ip nat inside"
 action 2.8 cli command "ip wccp 61 redirect in"
 action 3.0 cli command "end"
 action 3.1 cli command "wr mem"
 action 4.0 mail server "$_email_server" to "$_email_to" from "$_email_from" subject "$_event_pub_time: IPS down!" body "$_syslog_msg"
 action 4.1 syslog priority notifications msg "state Up -> Down - Mail Sent"

IPS up EEM script config:
event manager applet ipsla_ping-up
 event syslog pattern "1 ip sla 1 state Down -> Up"
 action 1.0 cli command "enable"
 action 1.5 cli command "config term"
 action 2.0 cli command "interface g0/0/2"
 action 2.5 cli command "ip vrf forwarding inside"
 action 2.6 cli command "ip address 192.168.25.1 255.255.255.0"
 action 2.7 cli command "no ip nat inside"
 action 2.8 cli command "no ip wccp 61 redirect in"
 action 3.0 cli command "end"
 action 3.1 cli command "wr mem"
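
A few standard show commands to verify the pieces above (the object, SLA, and WCCP numbers match the configs):

show ip sla statistics 1              ! ping results across the IPS path
show track 1                          ! track state that drives the syslog pattern
show event manager policy registered  ! confirms both applets are loaded
show ip wccp 61                       ! WCCP redirect status after fail-open/fail-close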
Server Performance
vWAAS and FirePower Tests
UCSE 140S-M2              LAN (Mbps)   WAN (Mbps)   Router DP CPU%   UCS-E CPU%
No Service                1200         1180         12               n/a
vWAAS                     617          308          29               97
FirePower (IDS)           648          600          98               96
vWAAS + FirePower (IDS)   365          190          89               100

[Figure: example vWAAS test topology: an ISR4451 with a 2 Gbit/s LAN and a WAN link, annotated with 182/128 Mbit/s and 435/180 Mbit/s throughput figures]

• UCS-E M2 with 16 GB memory and 2 * SAS900 drives (no RAID)
• Throughput numbers are aggregate (in + out)
• Traffic profile is the SFR traffic profile (mixed traffic), about 1/3 upload, 2/3 download
• WAN conditions: 1 G bandwidth, 20 ms RTT delay, 0.01% loss
• In the "No Service" test case, interface speed limited the throughput
• FirePower in IDS mode means the router replicates traffic, which is quite CPU-intensive
High Availability
Cisco UCS E-Series and Cisco ISRs
Power & Cooling:
• The E-Series are "Power Sucking Aliens"
• A soft reload of the ISR router does not affect the UCS E-Series (SM-Xs, EHWICs & NIMs!)
• A hard reset or power-down of the router will cause the E-Series to power down
• ISR router dual power supplies:
 − 3900 and 4400 Series can have a built-in dual power supply
 − 2900 Series ISRs can have an external RPS 2300 power supply
 − 1900 and 4300 Series ISRs have no power supply redundancy

Online Insertion and Removal (OIR):
• Supported on 3900 and all 4000 Series ISR platforms
• Hard drives on the UCS E-Series can be removed and installed without powering down the blade or the router (Note: RAID disks would have to be rebuilt)
Disaster Recovery

• To recover after a disaster you also need to back up your storage to the datacenter
 − Use technologies like VMware vSphere Replication to set up automatic backup of data between E-Series Servers and the data center
 − Backup is asynchronous, often done nightly after hours
Redundant Storage with StorMagic
One-box, two-server solution

• Requires a 2-server cluster; a centralized vCenter can act as a tiebreaker between two out-of-sync servers
• Uses direct-attached HDDs/SSDs to create a shared-storage iSCSI target
• Virtual machine files (.vmdk, .vmx, .nvram, etc.) are mirrored across servers
• Leverages the UCS E-Series backplane interfaces to carry management, mirroring, and iSCSI network traffic
• If one server fails, the VMs survive, running on the available server
• When the failed server is recovered, SvSAN communicates with the neutral storage host to determine which host contains the most up-to-date data and begins to re-synchronize
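
A hedged esxcli sketch of pointing the ESXi software iSCSI initiator at the SvSAN-presented target (the adapter name vmhba65 and the target address 10.0.0.10:3260 are illustrative assumptions):

esxcli iscsi software set --enabled=true
esxcli iscsi adapter list                             # note the software iSCSI adapter name
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=10.0.0.10:3260
esxcli storage core adapter rescan --adapter=vmhba65  # discover the mirrored iSCSI LUN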
Cisco UCS E-series application survivability
Box-to-Box Redundancy

• Should be used if:
 − 2 routers are available for redundancy
 − Double-wide servers are used
 − Switch-Module SM-Xs are used
• Each Cisco ISR can host a Cisco UCS E-Series server
 − Network connectivity between UCS E-Series Servers uses the front-panel GE interfaces for data replication (mirror) traffic
 − Each E-Series Server runs the SvSAN VM with data mirroring
 − Both network and application survivability are delivered
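
A hedged esxcli sketch of dedicating a front-panel uplink to the mirror traffic described above (vmnic2 and the vSwitch/port-group names are illustrative assumptions):

esxcli network vswitch standard add --vswitch-name=vSwitch-mirror
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch-mirror
esxcli network vswitch standard portgroup add --portgroup-name=SvSAN-Mirror --vswitch-name=vSwitch-mirror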
How StorMagic works from a high level
[Figure: two UCS-E modules, each running ESXi with a StorMagic VSA and local VMs; mirror and mgmt links connect the two hosts, and the VMware StorMagic plugin resolves dual-active scenarios]
SSD caching
Current version is a "write-back" cache:
• Improves overall I/O write performance significantly
• Delivers low-latency access, improving application response times
• Reduces the number of I/Os going directly to disk

[Figure: write-back cache flow from VM to SSD. 1. Data is written directly to cache. 2. Write acknowledged; data in cache is "dirty". 3. Data is flushed from cache to persistent storage. 4. Flush complete; data in cache marked as "clean"]

All new data is written to SSD:
• Efficient and flexible: data is written as variable-sized extents
• Extents are merged and coalesced in the background
• Data in cache is flushed to hard disk regularly in small bursts
• Data is retained in cache until space is required, enabling data previously written to be read from cache and improving read performance

In ROBO environments the amount of data written per day is relatively low:
• Ranging from a few tens of gigabytes to hundreds of gigabytes
• A 250 GB SSD could cache a day's worth of data
