
Dell EMC Open Networking

Options Insights
Kyiv, January 2020
George Popescu | Networking Systems Engineer (George.Popescu@dell.com)
INDUSTRY'S #1

Open Networking – Dell EMC, the Market Leader

IT Brand Pulse, a trusted source for research, data and analysis about data center
infrastructure, announced the 2019 Networking Brand Leaders, as voted by IT
professionals from around the world.

Open Networking Switch (Bare Metal)

Dell EMC led all six leader categories in the Open Networking (Bare Metal) Switch
category for the fifth time: Market, Price, Performance, Reliability, Service &
Support, and Innovation.
CLOUD ISN’T
A LOCATION.
IT’S A SET OF
DESIGN PRINCIPLES.
BIG CLOUD FABRIC - Product Overview
20 YEARS OF MANAGEMENT PLANE EVOLUTION

1996: Terminal Protocol – Telnet
2016: Terminal Protocol – SSH
CLOUD ISN’T A LOCATION,
IT’S A SERIES OF DESIGN PRINCIPLES

Speed

Experience

Reliability

Security

WHAT ARE THE OPTIONS?
Two different next steps, two different visions for the future

TODAY: silo-ed hybrid clouds → TOMORROW: seamless hybrid clouds?

LEGACY-FIRST PATH: "vHARDWARE"
Bring on-prem networking everywhere

CLOUD-FIRST PATH: "VPCs-ON-PREM"
Bring in-cloud networking everywhere
BIG CLOUD FABRIC – Cloud Native Networking

• Leaf-spine fabric for private & hybrid clouds; the SDN controller provides the abstraction
• Single programmatic interface: REST API for integration with the private or hybrid cloud –
  OpenStack, VMware, and container orchestrators (see the sketch below)
• Optional vSwitch for containers
• BIG CLOUD FABRIC SDN CONTROLLER: full automation for provisioning and management
  (CLI, GUI or API), HA/resiliency, and a single point of visibility, control and automation
• SWITCH LIGHT OS: Open Network Linux (ONL) based OS for Dell-ON or white box switches,
  with a hierarchical control plane
• POD size: up to 64 leaf switches, depending on the scale of the spine switches
• 10G/40G/100G L2 + L3 Clos fabric managed by the SDN controller; 1G/10G/25G/40G server
  connectivity to physical server and storage racks and to virtual workloads
• Operational complexity never more than a single switch; a single BGP session per fabric
• SWITCH LIGHT VX: optional user-space agent for Linux hosts
• The scale of a network with the operational experience of a single switch
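Because the controller exposes the whole fabric through one REST API, provisioning can be scripted. Below is a minimal sketch in Python; the controller address, authentication cookie, resource paths and payload fields are illustrative assumptions for this deck, not the documented BCF API schema.

# Illustrative sketch only: create a tenant and a segment through the BCF
# controller's single programmatic interface. Host, auth and paths are assumed.
import requests

CONTROLLER = "https://bcf-controller.example.local:8443"   # assumed address
HEADERS = {"Cookie": "session_cookie=EXAMPLE-TOKEN"}        # assumed auth mechanism

def create_tenant(name: str) -> None:
    # Hypothetical resource path; consult the BCF API reference for the real one.
    url = f'{CONTROLLER}/api/v1/data/controller/applications/bcf/tenant[name="{name}"]'
    requests.put(url, json={"name": name}, headers=HEADERS, verify=False).raise_for_status()

def create_segment(tenant: str, segment: str, vlan: int) -> None:
    url = (f'{CONTROLLER}/api/v1/data/controller/applications/bcf/'
           f'tenant[name="{tenant}"]/segment[name="{segment}"]')
    requests.put(url, json={"name": segment, "vlan": vlan},
                 headers=HEADERS, verify=False).raise_for_status()

if __name__ == "__main__":
    create_tenant("BLUE")
    create_segment("BLUE", "web", vlan=100)

The same calls could equally be driven from an orchestrator such as OpenStack or VMware, which is the integration point the slide refers to.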
RESILIENCY @ SCALE
Chaos Monkey testing shows BCF delivers best-in-class HA at scale

Chaos Monkey testing: 42k simulated end-points/VMs of background load and 640+
forced component failures during the "under stress" test runs
▪ 32 leaf / 6 spine / 16 rack pod
▪ Controller fail-over every 30 seconds
▪ Switch fail-over every 8 seconds
▪ Link fail-over every 4 seconds

Conclusion: 640 component failures in 30 minutes with no impact on application
performance
APP DEPLOYMENT IN LEGACY NETWORKS
The network is the bottleneck to application agility

Application requirements delivered by traditional networking:

• Logical topology (Tenant Blue): external core router, logical router, FW, LB,
  and Web / App / DB segments spread across multiple L2 segments
• Many consoles, complex provisioning: VLAN ranges (e.g. vlan 225-318, vlan 480-490),
  per-device ACLs (allow tcp 80; allow tcp 8080,443; allow tcp 22 from 10.4.3.22/28
  only; allow any any through the FW; deny all) and per-port VLAN bindings,
  configured box by box

Result: Slow, Brittle, Complex, Expensive
BIG SWITCH APPROACH
From North-South to East-West architectures

Traditional N-tier data center design (Core / Aggregation / Edge with ingress/egress
racks) plus overlay solutions vs. a fabric solution: a Clos / fat-tree architecture of
racks managed by the Big Cloud Fabric Controller.

Big Cloud Fabric:
• Bare metal HW (economical)
• SDN architecture (simple)
• Automation-centric (agile)
• Scalable & resilient (like a traditional network)

Releases:
• P-Clos: leaf & spine (traditional approach)
• Unified P+V Clos: vSwitch, leaf & spine
BIG CLOUD FABRIC
MODERN FABRIC ARCHITECTURE FOR PHYSICAL & VIRTUAL WORKLOADS

The industry's first bare metal SDN data center switching fabric:
• Big Cloud Fabric Controller (clustered) with a hierarchical control plane
• Spine switches (32x40G) and leaf switches (48x10G + 6x40G) interconnected by 10G/40G links
• High performance: dense 10G/40G
• Scalable: max scale of Trident II
• Resilient: headless mode operations
• L4-7 service insertion & chaining
• Hypervisors: ESX, Hyper-V, KVM, Xen
• Orchestration: OpenStack, CloudStack
• Compute workload, services & connectivity racks for physical & virtual workloads:
  private/public clouds, big data analytics, VDI, …

Single "logical" switch (zero-touch fabric, dramatic TCO reduction)
POD-LEVEL DEPLOYMENT
Inter-operates with existing PODs in the data center

The Internet/WAN and data center core routers connect, at the L2/L3 boundary over 40G
links, to multiple Big Cloud Fabric PODs; each POD has its own Big Cloud Fabric
Controller, 10G server-facing links, and an ingress/egress rack plus server racks
(Rack 1 … Rack N).

Example BCF PODs:
• Private cloud: Dev/Test
• Analytics (Hadoop)
• VDI
BIG CLOUD FABRIC: SIMPLE, SIMPLE, SIMPLE
Application Centricity (logical networking, provisioning templates) | Simplicity &
Automation (REST APIs, GUI, CLI) | Zero Touch (auto config, auto scaling, auto upgrade)

Comparison with a traditional design (external core router, FW/LB, Tenant Blue logical
router, Web/App/DB segments over multiple L2 segments, built box by box):

Feature                  | Traditional | Big Cloud Fabric (16 racks, 40 devices)
Switch OS install        | Box by box  | Automatic
Link aggregation         | Box by box  | Automatic
Fabric formation         | Box by box  | Automatic (w/ policy)
Troubleshooting          | Box by box  | Fabric-wide
L4-7 service chaining    | Box by box  | Declarative (per tenant)
Add/remove/update fabric | Box by box  | Automatic, minutes
Hitless fabric upgrade   | Box by box  | Automatic
Fabric visibility        | Box by box  | Controller or API
ZERO TOUCH FABRIC
Operational Simplicity – Install a Fabric

Traditional design (Core / Aggregation / Edge) vs. Big Cloud Fabric (controller plus
spine/leaf racks serving physical & virtual workloads)

Traditional – steps to install a fabric:
1. Mount physical switches and cable them
2. Load the right image on the switch and boot up
3. Configure individual links on the switch
4. Check all links are up
5. Switch 2 – repeat steps 2-4
6. Switch 3 – repeat steps 2-4
7. …
Redundant, manual work (box by box)

BCF – steps to install a fabric:
1. Mount physical switches and cable them
2. Install the controller(s) and add each switch's MAC and role on the controller
   (scripted example below)
3. Power up the switches
   • Switches download the right image and configuration from the controller
   • Monitor link status on the controller (single point for the entire fabric)
One-time configuration, no redundant manual work
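Because the fabric is built from the controller side, the "add switch MAC and role" step can itself be scripted. A minimal sketch, assuming a hypothetical switch-registration resource on the BCF controller (names, paths and payload keys are illustrative, not the documented API):

# Illustrative sketch only: pre-register leaf/spine switches so they can
# zero-touch boot from the controller. Endpoint, fields and auth are assumptions.
import requests

CONTROLLER = "https://bcf-controller.example.local:8443"
HEADERS = {"Cookie": "session_cookie=EXAMPLE-TOKEN"}

SWITCHES = [
    {"name": "leaf-1a", "mac": "00:11:22:33:44:01", "fabric-role": "leaf"},
    {"name": "leaf-1b", "mac": "00:11:22:33:44:02", "fabric-role": "leaf"},
    {"name": "spine-1", "mac": "00:11:22:33:44:10", "fabric-role": "spine"},
]

for sw in SWITCHES:
    # Hypothetical resource path; once a switch with this MAC powers up, the
    # controller pushes Switch Light OS and its configuration to it.
    url = f'{CONTROLLER}/api/v1/data/controller/core/switch-config[name="{sw["name"]}"]'
    requests.put(url, json=sw, headers=HEADERS, verify=False).raise_for_status()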

TENANT AWARE FABRIC
Operational Simplicity – Add / Remove a Tenant

Traditional design – per-switch configuration, repeated on every switch:

  vlan 100
  interface Ethernet 1/1
   switchport
   switchport mode trunk
   switchport vlan 100
  vrf BLUE
   rd X:Y
  interface vlan 100
   ip address 10.1.1.254/24
   vrf forwarding BLUE

Big Cloud Fabric – one tenant object on the controller:

  tenant BLUE
   logical-router
    interface segment web
     ip address 10.1.1.254/24
   segment web
    member switch any interface any vlan 100

Traditional – steps to add a tenant:
1. A tenant is associated with a VLAN or a set of VLANs; identify an available VLAN on all switches for the new tenant
2. Configure the respective VLAN (and any necessary L3) on a switch
3. Repeat this configuration on all switches

Traditional – steps to remove a tenant:
1. Find the tenant and VLAN association
2. Remove the respective VLAN (and any necessary L3) on a switch
3. Repeat this configuration on all switches

BCF – steps to add a tenant:
1. Configure the tenant-specific configuration on the controller

BCF – steps to remove a tenant:
1. Remove the tenant from the controller

The controller takes care of configuring all the switches; a new switch inherits its
configuration from the controller.

FABRIC CHANGE MANAGEMENT
Operational Simplicity – Add or Replace a Switch

Traditional design (Core / Aggregation / Edge) vs. Big Cloud Fabric (controller plus
spine/leaf racks serving physical & virtual workloads)

Traditional – steps to change a switch:
1. Back up the configuration (to recover in case of a problem)
2. Replace the switch & re-cable
3. Install the right image
4. Download the right configuration
5. Check L2 protocols converge correctly
6. Check L3 protocols converge correctly

BCF – steps to change a switch:
1. Replace the switch & re-cable; update the MAC address in the controller config
The controller takes care of the rest of the steps.

HEADLESS MODE
High Availability and Resiliency

Headless mode: a switch is unable to connect to the controller.

The network continues to run even when:
• Both controllers are down
• All controller-to-management-switch links fail
• Both management switches fail
• A switch management link fails (that individual switch goes into headless mode)

Once the failure is fixed, the switch syncs up with the controller and updates its
forwarding table.

UPGRADE
Operational Simplicity

Traditional design (Core / Aggregation / Edge) vs. Big Cloud Fabric (controller plus
spine/leaf racks serving physical & virtual workloads)

Traditional – steps to upgrade:
1. Back up the configuration (to recover in case of a problem)
2. Copy the new image and upgrade the switch
3. Check L2 protocols converge correctly
4. Check L3 protocols converge correctly
5. Switch 2 – repeat steps 2-4
6. Switch 3 – repeat steps 2-4
7. …
A sequential process whose duration is directly proportional to the number of
switches in the fabric.

BCF – steps to upgrade:
1. Back up the configuration (to recover in case of a problem)
2. Copy the new image and upgrade the standby controller
3. The active controller hands off half of the fabric switches to the newly upgraded
   controller; those switches boot up with the new image
4. The newly upgraded controller brings up the fabric and takes control of the rest
   of the switches; those switches boot up with the new image
An automated process that takes much less time, independent of the number of switches
in the fabric.

SCALABLE ARCHITECTURE

Traditional design – the L2/L3 boundary is the choke point:
• ARP responses for external communication must be generated by the L2/L3 boundary
  router → its CPU becomes the choke point
• L2 and L3 protocol scale must be handled by the L2/L3 boundary router → its CPU
  becomes the choke point
• HSRP/VRRP limit the L2/L3 boundary to two devices → bandwidth becomes the choke point

Big Cloud Fabric – distributed L2/L3 boundary:
• The (distributed) logical router handles ARP responses for external communication
• No complex protocols; all forwarding logic is implemented in the controller
• No HSRP/VRRP; multiple leaf switches can be used for external connectivity

Need more bandwidth → add more spine switches.
Need more server ports → add more leaf switches (worked example below).
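As a back-of-the-envelope illustration of the "add spine for bandwidth, add leaf for ports" rule, the small sketch below computes usable server ports and the resulting oversubscription ratio. The port counts follow the example leaf (48x10G + 6x40G) and spine (32x40G) switches mentioned earlier in this deck and are assumptions for illustration, not a sizing tool.

# Illustrative arithmetic only: capacity of a two-tier Clos fabric.
def clos_capacity(num_leaf: int, num_spine: int,
                  leaf_server_ports: int = 48, leaf_uplinks: int = 6,
                  server_port_gbps: int = 10, uplink_gbps: int = 40):
    uplinks_used = min(leaf_uplinks, num_spine)          # one uplink per spine, per leaf
    downlink_bw = leaf_server_ports * server_port_gbps   # Gbps toward servers, per leaf
    uplink_bw = uplinks_used * uplink_gbps                # Gbps toward spines, per leaf
    return {
        "server_ports": num_leaf * leaf_server_ports,
        "oversubscription": round(downlink_bw / uplink_bw, 2),
    }

# More spines -> more uplink bandwidth per leaf (lower oversubscription);
# more leaves -> more server ports at the same ratio.
print(clos_capacity(num_leaf=16, num_spine=4))   # {'server_ports': 768, 'oversubscription': 3.0}
print(clos_capacity(num_leaf=16, num_spine=6))   # {'server_ports': 768, 'oversubscription': 2.0}
print(clos_capacity(num_leaf=32, num_spine=6))   # {'server_ports': 1536, 'oversubscription': 2.0}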

BCF FACTS
Key Innovations

1. Zero Touch Fabric
2. Fabric LAG – ECMP-like L2 multipath (see the sketch after this list)
3. Distributed Logical Routing
4. Fabric visibility using Test Path
5. BCF management GUI
6. OpenStack / CloudStack integration
7. Simple, simple, simple…
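To illustrate the idea behind Fabric LAG / ECMP-like multipath (item 2 above), here is a generic, textbook sketch of hash-based path selection across equal-cost uplinks. It is not BCF's actual hashing algorithm; the uplink names are made up.

# Generic ECMP illustration: hash the 5-tuple so packets of one flow stay on one
# path while different flows spread across all equal-cost uplinks.
import hashlib

UPLINKS = ["spine-1", "spine-2", "spine-3", "spine-4"]   # assumed uplink names

def pick_uplink(src_ip, dst_ip, proto, src_port, dst_port):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return UPLINKS[digest % len(UPLINKS)]

# The same flow always maps to the same spine; different flows balance out.
print(pick_uplink("10.1.1.10", "10.2.2.20", "tcp", 33012, 443))
print(pick_uplink("10.1.1.11", "10.2.2.20", "tcp", 33013, 443))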
DELL EMC & BIG SWITCH – CUSTOMER SUCCESS STORIES
• Driven by their transformation requirements

High Tech & SaaS | Financial Services | Gov't / Fed & Smart Cities |
Telco, Web Services, Cloud | Higher Ed | Healthcare | Emerging Verticals
Pluribus Product Portfolio
Adaptive Cloud Fabric | Netvisor ONE | UNUM

• UNUM (management, automation): Fabric Manager and Insight Analytics – a monitoring
  and analytics platform with a 2+ billion flow database
• Telemetry: integrated TCP flow visibility – every port, every end point, every flow,
  without separate probes and without packet brokers
• Adaptive Cloud Fabric: controllerless, distributed SDN fabric for unified control and
  provisioning of fabric services across multiple sites – multi-site overlay, network
  slicing, virtual wires
• NOS – Netvisor ONE Network OS: feature-rich Layer 2/3/VXLAN switching; an L2/L3 NOS
  based on open, standard protocols for easy insertion in brownfield networks and into
  any existing topology (incl. rings)


Adaptive Cloud Fabric – Basic Concepts

• Runs on top of the existing network (the underlay), connecting hosts and switches
• Unified management of the fabric switches; 1:N provisioning simplifications
• Fabric-wide objects: VRFs, subnets, Anycast Gateway, VLANs, bridge domains,
  traffic policies
• Virtual network overlay (auto-tunnels)
Adaptive Cloud Fabric – Why Is It Distributed?

Centralized SDN:
• Deploy 3 redundant servers for the controllers; centralized processing for failover and re-convergence
• Single management interface → single point of failure
• Separate out-of-band (OOB) management network to set up, not meant to be extended across locations; the OOB network is a single point of attack → single point of failure
• Closed, bookended leaf-spine architecture
1. The cost of 3 extra servers just to operate the fabric is a significant burden for small fabrics
2. Complex to extend across geo-distributed sites
3. Bookended, closed leaf-spine architecture
4. Exposed to single points of failure

Distributed SDN:
• Fabric control runs in-band, with no single point of failure
• Open, interoperable with third-party devices over the core IP network
• Every switch can control the fabric
• Distributed control plane for rapid failover and re-convergence
• Seamlessly extends across locations
1. Cost effective – no cost for 3 controllers; sweet spot for small to mid-size deployments
2. Simple to extend across geo-distributed sites
3. Adapts to brownfield, adapts to any topology
4. No single point of failure
Where Can I Position The Adaptive Cloud Fabric With Clear Differentiation?

Small-Medium Environment (<= 8 racks):
▪ MOST COST-EFFECTIVE leaf-spine SDN fabric – no controllers, no probes/taps monitoring overhead
▪ MOST BROWNFIELD-INTEROPERABLE SDN fabric – designed to interoperate with 3rd-party switches and routers
▪ MOST COST-EFFECTIVE, INTEGRATED VISIBILITY SOLUTION – no external probes, taps or packet brokers; GUI-based provisioning
▪ MOST ADAPTABLE SDN fabric – designed to work with any topology

Managed SP | Cloud SP:
▪ SIMPLEST SDN FABRIC TO SCALE ACROSS MULTIPLE SITES – single pane of glass for all the switches across multiple locations
▪ MOST AGILE DEPLOYMENT OF MULTI-TENANCY SERVICES – a single operation to deploy policies across multiple sites

Diagram: one single fabric of HW VTEPs over the existing core network (single IP),
with VXLAN auto-tunnels carrying p2p pseudowires, L2 and L3 multipoint VPNs and L3
multicast VPN, .1Q / Q-in-Q edge encapsulation, deep fabric slicing, and
interoperability with third-party OS devices.
Success Story: Econocom Italia – naboocloud

• "360-degree digital transformation" services
• €3B revenue – 18 countries – 40 years of experience
• Any cloud, any service: Econocom naboocloud + public clouds
• Comprehensive multi-tenant slicing: data – control – management
• VMware, OpenStack; 4 DCs expanding to 6

"The programmability, performance and economics of our software-defined
infrastructure is critical to our success and the foundation of our customer
experience."
Paolo Bombonati, Chief Operating Officer


Adaptive Cloud Fabric Summary

• Multi-site, multi-tenant fabric services: a single "pane of glass" to automate the
  delivery of services across multiple edge sites and reduce human errors
• Built for brownfield insertion: no rip & replace, interoperates with legacy
  protocols, adapts to any topology
• Integrated network visibility: visibility built into the fabric, with no external
  probes, taps or packet brokers

One single fabric, administered fabric-wide (Fabric Admin) and per tenant (e.g.
Yellow Admin, Red Admin), hosting VMs across sites.

An SDN fabric with the best economics for mid-size deployments.
Adaptive Cloud Fabric – SDN-driven VXLAN Overlay
▪ The fabric enables provisioning of services with one operation across the entire fabric
▪ The fabric can be "sliced" into multiple tenant fabrics, down to the physical switch
  port, with a per-tenant isolated data plane and management plane (Fabric Admin plus
  per-tenant admins, e.g. Yellow Admin, Red Admin)
▪ The fabric automates overlay tunnels and manages optimal forwarding (see the sketch below)
▪ The fabric seamlessly extends tenant services across multiple geo-distributed sites
  (one single fabric of HA leaf clusters, a VTEP per switch, VMs attached per tenant)
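To make "automates overlay tunnels" concrete, the sketch below simply enumerates the full mesh of VXLAN tunnels a set of VTEPs implies – the work the fabric does on the operator's behalf. The VTEP names and loopback addresses are made up for this example.

# Illustration only: point-to-point VXLAN tunnels grow as a full mesh of VTEPs
# (n*(n-1)/2 pairs), which the fabric auto-provisions instead of the operator.
from itertools import combinations

VTEPS = {
    "site1-leaf-cluster": "10.255.0.1",
    "site2-leaf-cluster": "10.255.0.2",
    "site3-leaf-cluster": "10.255.0.3",
    "site4-leaf-cluster": "10.255.0.4",
}

tunnels = list(combinations(sorted(VTEPS), 2))
for a, b in tunnels:
    print(f"auto-tunnel {VTEPS[a]} <-> {VTEPS[b]}  ({a} - {b})")
print(f"{len(VTEPS)} VTEPs -> {len(tunnels)} tunnels managed by the fabric")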


Geographically Distributed L2VPN Services
▪ Fabric bridging: VLAN/bridge domain managed as a single object across the entire fabric
▪ Large-scale MP-to-MP L2VPN across multiple sites: configure port membership;
  everything else is auto-provisioned
▪ Flexible programmability of Ethernet encapsulation at any point of the fabric
  (e.g. 802.1Q & Q-in-Q)
▪ Multiple bridge domains and encapsulations per port

Inter-site auto VXLAN tunnels carry the fabric-wide VLAN/BD across sites, with any mix
of .1Q (e.g. VLAN 10), Q-in-Q and untagged edge ports, over a single fabric of HA leaf
clusters with a VTEP per switch.


P-to-P Transparent L2VPN Service – Virtual Link Extension
▪ Virtual Link Extension (vLE) provides transparent point-to-point services emulating
  direct fiber/cable connectivity ("virtual fiber" across the single fabric)
▪ Port link-state synchronization between vLE end ports
▪ Carries any protocol, any VLAN
▪ Supports client-side LAG


Geographically Distributed L3VPN Services
▪ Fabric routing: VRFs and subnets managed as a single object across the entire fabric
▪ Large-scale (up to max HW capacity) L3 macro-segmentation with distributed VRF/subnet
  objects (dual IPv4/IPv6) across multiple sites
▪ Optimized conversational routing with Anycast Gateway (no hair-pinning)

Inter-site auto VXLAN tunnels extend the fabric VRF across sites, with the same subnets
(e.g. 10.0.10.0/23, 10.1.20.0/23) anycast-gatewayed at every site.


Geographically Distributed Multicast Routing
▪ Allows multicast traffic to be routed at each ToR without reaching a centralized RP
▪ Relies on IGMP join/leave information; fully automated – PIM-free – self-learns
  location changes
▪ Fabric-VRF aware – works with Anycast Gateway
▪ Assumption: sources and receivers are always adjacent to the fabric

A single VXLAN packet (VNI = VRF) crosses the inter-site auto VXLAN tunnels; multicast
receivers in subnets A, B, C are served from a source in subnet C.
Fabric Overlay Services At-a-Glance

Bridging – MultiPoint-to-MultiPoint L2VPN:
  • Single 802.1Q VLAN
  • Multiple 802.1Q VLANs (1)
  • Q-in-Q (1)
  • L2 multicast
Bridging – Point-to-Point transparent L2VPN:
  • Virtual Link Extension (virtual wire)
Routing – MultiPoint-to-MultiPoint L3VPN:
  • Distributed unicast routing IPv4 (AGW)
  • Distributed unicast routing IPv6 (AGW)
  • Distributed multicast routing IPv4 (AGW) (1)
  • Distributed multicast routing IPv6 (AGW) (1)
  • Multi-tenant DC Gateway

(1) 5.1.x release
UNUM Unified Management and Analytics Platform
Visibility without external packet brokers or external probes

• Device & fabric manager, zero-touch provisioning
• Health dashboard, alerts & reports
• Application flow analytics (30-day time machine)
• Device analytics (traffic, syslog, SNMP)
• End-point analytics
• Flow dependency mapping (application security analysis): source → destination,
  drawn proportionally to tier-2 connection volume


UNUM – Advantages

Fabric Management (UNUM-LIC)
• Interactive topology – simplifies network operations
• Multiple fabrics, multiple locations – one point of management
• Manages NetVisor devices in homogeneous and heterogeneous fabrics
• Multi-vendor mapping (LLDP speakers)
• Ingests syslog and SNMP telemetry

Automation
• Workflow automation – accelerates deployment times
• Switch software provisioning – NetVisor OS
• Fabric creation – Adaptive Cloud Fabric
• Underlay build – supports L2/L3 leaf-spine topologies

Insight Analytics (IA-MOD-LIC)
• Easy-to-use, always-on analytics – no packet brokers, taps or mirrors
• Leverages embedded switch flow telemetry (see the sketch after this list)
• VM-aware – UNUM collects metadata from vCenter
• Real-time and historical troubleshooting for applications and services
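To make the flow-analytics idea concrete, here is a generic sketch of the kind of aggregation a flow dependency map performs. It works on an in-memory list of sample flow records and does not use the UNUM API; the addresses and ports are made up.

# Generic illustration: group observed flows by source, destination and service
# port and sum their bytes, yielding weighted application-dependency edges.
from collections import defaultdict

flows = [  # sample records as a flow collector might export them
    {"src": "10.0.1.10", "dst": "10.0.2.20", "dport": 443, "bytes": 120_000},
    {"src": "10.0.1.11", "dst": "10.0.2.20", "dport": 443, "bytes": 80_000},
    {"src": "10.0.2.20", "dst": "10.0.3.30", "dport": 3306, "bytes": 450_000},
]

edges = defaultdict(int)
for f in flows:
    edges[(f["src"], f["dst"], f["dport"])] += f["bytes"]

# Each edge is a dependency (e.g. web tier -> DB tier on 3306), weighted by volume.
for (src, dst, dport), total in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{src} -> {dst}:{dport}  {total} bytes")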


Integrated Telemetry & Insight Analytics

Insight Analytics combines end-point (vPort) based telemetry and flow-based telemetry
from every switch in the single fabric.


Challenges Of Traditional Labs In Large Enterprises and Service Providers
(Development, Test, Integration and Regression teams across Lab A, Lab B, Lab C)

• Frequent "re-wiring" of lab resources and network topologies is error-prone and
  time consuming
• Limited sharing of expensive test equipment (e.g. traffic generators) across teams
• Captive lab resources, often located in lab silos around the company, are
  simultaneously under-utilized and inaccessible to many
• Limited or no automation of test environments


Lab Network Automation – Business Drivers
(Development, Test, Integration and Regression teams; Lab-as-a-Service software in
front of 1G/10G/100G test tools and Test Beds #1-#3)

• Wire once, re-wire in software! Eliminate manual, error-prone, slow topology changes
• Maximize utilization and sharing of expensive test equipment across teams and projects
• Enable sharing of lab resources across multiple geographically dispersed labs
• Enable extensive automation of test environments


Lab Network Automation – Use Case Examples
Large enterprises and service providers have deployed, or need, L1 cross-connect
technologies (Lab-as-a-Service software in front of 1G/10G/100G test tools and
Test Beds #1-#3).

• Engineering teams: accelerate service onboarding, testing of updates and patches,
  and developing and testing new services
• Support teams: accelerate the time to reproduce customer-found defects
• Technical sales teams: reduce the time to set up topologies for customer proofs
  of concept
• All: share expensive traffic-generator ports across teams, projects and even
  different labs


Pluribus – Dell EMC L1 Solution For Lab Automation
▪ Functions as a distributed, software-programmable patch panel with automated L1
  path building
▪ Supports Layer 1 packet transparency across the fabric
▪ Pay-as-you-grow, scale-out VirtualWire fabric architecture based on Dell Open
  Networking 1U switches
▪ "Virtual chassis" architecture with a single point of management
▪ Any speed, any media-type conversion
▪ Extensive packet filtering with 1:1 or 1-to-many virtual wires
▪ Integrated with LaaS software (Quali and Tokolabs); 1G/10G/100G test tools connect
  to Test Beds #1-#3
▪ RESTful API, CLI and integration (example below)
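Since the solution exposes a RESTful API for building virtual wires, a scripted cross-connect might look roughly like the sketch below. The endpoint, payload fields, credentials and port names are purely illustrative assumptions, not the documented Netvisor/UNUM API.

# Illustrative sketch: programmatically "re-wire" a lab by creating a 1:1 virtual
# wire between a traffic-generator port and a device-under-test port.
import requests

FABRIC_API = "https://virtualwire-fabric.example.local/api"   # assumed address
AUTH = ("admin", "EXAMPLE-PASSWORD")                           # assumed credentials

def create_virtual_wire(name, a_switch, a_port, b_switch, b_port):
    payload = {
        "name": name,
        "endpoint-a": {"switch": a_switch, "port": a_port},
        "endpoint-b": {"switch": b_switch, "port": b_port},
        "link-state-sync": True,   # mirror link state between the two end ports
    }
    resp = requests.post(f"{FABRIC_API}/virtual-wires", json=payload,
                         auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

# Patch the 100G test tool into Test Bed #2 without touching any physical cabling.
create_virtual_wire("tb2-100g-gen", "vw-leaf-1", "eth1/1", "vw-leaf-4", "eth1/9")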
Pluribus Multi-Tenant DC Gateway
• Pluribus Freedom Series whitebox multi-tenant gateway router (Intel Xeon-based,
  32 GB RAM)
• Requested by and sold to Econocom (via Dell/Pluribus)
• Per-tenant vRouters in isolated containers, HW accelerated
• Optimized for:
  • Edge locations where placing a traditional ASR or MX is cost-prohibitive
  • Public cloud interconnect where the full Internet table is not required –
    ideal for MSPs, CSPs and hosting providers


Multi-Tenancy Isolation For North-South and East-West Traffic

Tenants (ASN-1, ASN-2, … ASN-x) reach ISP 1 over Nx10/100GE through per-tenant
vRouters on the DC gateways; third-party leaf and spine switches or a Pluribus ACF
carry the corresponding per-tenant VRFs inside the hosting DC.

Multi-tenancy everywhere:
• Fully isolated data, control & management plane per customer for N-S traffic
• With a Pluribus spine/leaf, a corresponding slice runs throughout the network
  for E-W traffic

Multi-tenancy business value:
• Cost-effective shared infrastructure
• More control, better security
