
A Network in a Laptop:
Rapid Prototyping for Software-Defined Networks

Bob Lantz, Brandon Heller, Nick McKeown
Stanford University
HotNets 2010, 10/20/10
Wouldn’t it be amazing…
if systems papers were runnable.
Wouldn’t it be amazing…
if systems papers made
replicating their results,
modifying the described system,
and sharing it with others…

… as easy as downloading a file.
Wouldn’t it be amazing…
if network systems papers were more
than runnable.

idea → prototype → deploy on hardware
                 → share with others

with no code changes!?!
Mininet: a platform for rapid network prototyping.

•  scales to usefully large nets
•  runs unmodified applications
•  provides path to hardware
•  facilitates sharing
openflow.org/mininet
140+ users
45+ on mailing list
20+ institutions
open source (BSD license)

[don’t download now! save the WiFi!]
Demo
Demo Topology: Fat Tree

[Figure: three-level fat tree with core, aggregation, and edge switch layers above four pods of hosts]

described in “A Scalable, Commodity Data Center Network Architecture”, Al-Fares et al., SIGCOMM 2008
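The size of a k-ary fat tree follows directly from the construction in that paper; a quick sketch (plain Python, no Mininet required) for sanity-checking demo topologies:

```python
def fat_tree_size(k):
    """Element counts for a k-ary fat tree (Al-Fares et al., SIGCOMM 2008).

    k pods, each with k/2 edge and k/2 aggregation switches;
    (k/2)^2 core switches; each edge switch serves k/2 hosts.
    """
    core = (k // 2) ** 2
    aggregation = k * (k // 2)
    edge = k * (k // 2)
    hosts = k * (k // 2) * (k // 2)   # == k**3 / 4
    return hosts, core + aggregation + edge

# fat_tree_size(4) -> (16, 20), matching the FatTree(4) row
# in the benchmark table later in this talk
```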
(1) shareable
(2) runs on hardware
Date:
Nov 2009: deadline in 3 months

[based on a true story]
Resources:
a laptop
Goal:
build/eval/demo a realistic new networked system
Why not a real system?

[diagram: physical switch s1 connecting host h2 and host h3]

+ as real as it gets
-  a pain to reconfigure
Why not networked virtual machines?

[diagram: VM1 running switch s1; VM2 and VM3 running hosts h2 and h3]

+ easier topology changes
-  scalability
Why not a simulator?

[diagram: simulated processes holding state for switch s1, host h2, and host h3]

+ good visibility
-  no path to hardware
Problem 1:
Want scale with unmodified applications.

→ Use lightweight, OS-level virtualization.
OS-level Virtualization
Same system, different view. Almost zero overhead.
ex. IMUNES, Emulab

[diagram: two user environments, each with its own filesystem, hostname, user IDs, and network view, running on one OS kernel]
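On Linux, a process’s network-namespace membership is visible as a symlink under /proc, which makes the “same system, different view” idea easy to demonstrate: processes that share a namespace see the same inode. A small sketch:

```python
import os

def net_namespace_id(pid='self'):
    """Identity of a process's network namespace, e.g. 'net:[4026531992]'.

    Two processes with the same value share one network stack.
    """
    return os.readlink('/proc/%s/ns/net' % pid)

# Every ordinary process inherits the root namespace; a Mininet host,
# after unshare(CLONE_NEWNET), would report a different inode here.
```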
Problem 2:
Want a smooth path to hardware deployment.

→ Use Software-Defined Networking.
[Figure: the status quo — each network device vertically integrates its own features, OS, and hardware]
Software-Defined Network

[Figure: features run on top of a Network OS, which programs simple packet-forwarding elements through OpenFlow]
Mininet Walkthrough
[target: a controller managing switch s1, which connects host h2 and host h3]
$> mn --topo minimal \
      --switch ovsk \
      --controller ref

run Mininet launcher
[the mn process starts in the root network namespace]
Hosts: create bash processes

[mn, in the root network namespace, spawns /bin/bash for host h2 and host h3, controlled over pipes]
Hosts: unshare(CLONE_NEWNET)

[each bash process detaches into its own network namespace: h2 namespace and h3 namespace]
Links: ip link add

[two veth pairs are created in the root namespace: veth0/veth1 and veth2/veth3]
Links: ip link set name

[the pair ends are renamed: s1-eth1/h2-eth0 and s1-eth2/h3-eth0]
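The renaming step encodes ownership in the interface name: <node>-eth<port>. A trivial helper capturing that convention, useful when scripting against the walkthrough above (the helper itself is illustrative, not part of Mininet):

```python
def intf_name(node, port):
    """Interface name in the <node>-eth<port> style shown above."""
    return '%s-eth%d' % (node, port)

def link_names(node1, port1, node2, port2):
    """Names given to the two ends of one veth pair."""
    return intf_name(node1, port1), intf_name(node2, port2)
```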
Links: ip link set netns

[h2-eth0 and h3-eth0 move into the h2 and h3 namespaces; s1-eth1 and s1-eth2 remain in the root namespace]
Switch: create OpenFlow switch

[ofdatapath and ofprotocol start in the root namespace; ofdatapath attaches to s1-eth1 and s1-eth2 via raw sockets and talks to ofprotocol over a unix socket]
Controller: create controller

[the controller process starts in the root namespace and accepts the connection from ofprotocol]
Virtual Machine

[the entire setup (mn, controller, switch, and host namespaces) runs inside a single VM]
Mininet example commands

Create a network using the mn launcher:

mn --switch ovsk --controller nox \
   --topo tree,depth=2,fanout=8 --test pingAll

Interact with a network using the CLI:

mininet> h2 ping h3
mininet> h2 py dir(locals())

Customize a network with the API:

from mininet.net import Mininet
from mininet.topolib import TreeTopo
tree4 = TreeTopo(depth=2, fanout=2)
net = Mininet(topo=tree4)
net.start()
h1, h4 = net.hosts[0], net.hosts[3]
print h1.cmd('ping -c1 %s' % h4.IP())
net.stop()
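For a quick sanity check of tree topologies like those above, the expected element counts can be computed without Mininet installed (a sketch; hosts sit at the leaves, switches at every interior level):

```python
def tree_size(depth, fanout):
    """(hosts, switches) for a complete tree of the given depth and fanout."""
    hosts = fanout ** depth
    switches = sum(fanout ** level for level in range(depth))  # 1 + fanout + ...
    return hosts, switches

# tree,depth=2,fanout=8 from the mn example: 64 hosts behind 9 switches
```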
Apps made with the Mininet API
Evaluation
Startup/Shutdown/Memory

Topology       H     S   Setup(s)  Stop(s)  Mem(MB)
Minimal        2     1      1.0      0.5       6
Linear(100)  100   100     70.7     70.0     112
VL2(4, 4)     80    10     31.7     14.9      73
FatTree(4)    16    20     17.2     22.3      66
FatTree(6)    54    45     54.3     56.3     102
Mesh(10, 10)  40   100     82.3     92.9     152
Tree(4^4)    256    85    168.4     83.9     233
Tree(16^2)   256    17    139.8     39.3     212
Tree(32^2)  1024    33    817.8    163.6     492

Table 2: Mininet topology benchmarks: setup time, stop time, and
memory usage for networks of H hosts and S Open vSwitch kernel
switches, tested in a Debian 5/Linux 2.6.33.1 VM on VMware
Fusion 3.0 on a MacBook Pro (2.4 GHz Intel Core 2 Duo / 6 GB).
Even in the largest configurations, hosts and switches start up in
less than one second each.

lots of switches & hosts
w/reasonable amounts of memory
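The “less than one second each” claim in the caption can be checked directly from Table 2; a small sketch dividing setup time by element count (rows copied from the table above):

```python
# (hosts, switches, setup seconds) for selected rows of Table 2
BENCHMARKS = {
    'FatTree(4)': (16, 20, 17.2),
    'Tree(16^2)': (256, 17, 139.8),
    'Tree(32^2)': (1024, 33, 817.8),
}

def setup_per_element(hosts, switches, setup_s):
    """Average setup time, in seconds, per host or switch."""
    return setup_s / float(hosts + switches)

# Even Tree(32^2), the largest run, averages well under a second per element.
```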

Microbenchmarks

Operation                                      Time (ms)
Create a node (host/switch/controller)             10
Run command on a host ('echo hello')              0.3
Add link between two nodes                        260
Delete link between two nodes                     416
Start user-space switch (OpenFlow reference)       29
Stop user-space switch (OpenFlow reference)       290
Start kernel switch (Open vSwitch)                332
Stop kernel switch (Open vSwitch)                 540

Table 3: Time for basic Mininet operations. Mininet's startup
and shutdown performance is dominated by management of virtual
Ethernet interfaces in the Linux (2.6.33.1) kernel and ip link
utility and Open vSwitch startup/shutdown time.

link management is slow

Bandwidth

S (Switches)   User (Mbps)   Kernel (Mbps)   Ratio
     1            445            2120         ~5x
    10             49.9           940
    20             25.7           573
    40             12.6           315
    60              6.2           267
    80              4.15          217
   100              2.96          167        ~50x

Table 1: Mininet end-to-end bandwidth, measured with iperf
through linear chains of user-space (OpenFlow reference) and
kernel (Open vSwitch) switches.

usable amount of bandwidth
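The ratio column can be reproduced from the measurements; a sketch, with numbers copied from Table 1:

```python
# chain length in switches -> (user-space Mbps, kernel Mbps), from Table 1
BANDWIDTH = {1: (445.0, 2120.0), 10: (49.9, 940.0), 100: (2.96, 167.0)}

def kernel_speedup(switches):
    """How many times faster the kernel datapath is at a given chain length."""
    user, kernel = BANDWIDTH[switches]
    return kernel / user

# The gap widens with chain length: roughly 5x at one switch,
# over 50x at one hundred switches.
```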
Case Studies
Research Examples
•  Ripcord: modular data center
•  Asterix: wide-area load balancing
•  SCAFFOLD: new internet architecture
•  Distributed snapshot demo
Unexpected Uses
•  Tutorials
•  Whole-network regression suites
•  Bug replication
Limitations
Inherent Limitations
•  OS-level virtualization → one kernel only
•  Linux containers → Linux programs only
•  Cannot match the introspection of an event-driven simulation
Performance Fidelity

[diagram: p switch, host, and link processes multiplexed onto c CPU cores]

p >> c ? time multiplexing

Issues: performance predictability, isolation
Wouldn’t it be amazing…
if systems papers were runnable.

Wouldn’t it be amazing…
if systems papers made
replicating their results,
modifying the described system,
and sharing it with others…
… as easy as downloading a file.

Wouldn’t it be amazing…
if network systems papers were more
than runnable.

idea → prototype → deploy on hardware
                 → share with others

with no code changes!?!
“A Network in a Laptop…”
is a runnable paper

…which itself describes how to make other runnable papers.
Mininet
•  Rapid prototyping
•  Scalable
•  Shareable
•  Functionally correct
•  Path to hardware

enables “runnable papers” for a subset of networking

openflow.org/mininet
The SDN Approach
Separate control from the datapath
–  i.e. separate policy from mechanism

Datapath: define a minimal network instruction set
–  A set of “plumbing primitives”
–  A narrow interface: e.g. OpenFlow

Control: define a network-wide OS
–  An API that others can develop on
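The “minimal instruction set” idea reduces each datapath element to match → action lookups. A toy sketch of that reduction (not real OpenFlow; the MAC addresses and ports are made up for illustration):

```python
FLOOD = 'flood'  # table-miss behavior (punt to controller / flood), simplified

# exact-match flow table: destination MAC -> output port
flow_table = {
    '00:00:00:00:00:02': 1,  # toward h2
    '00:00:00:00:00:03': 2,  # toward h3
}

def forward(dst_mac):
    """Datapath decision: an installed entry wins, otherwise the miss action."""
    return flow_table.get(dst_mac, FLOOD)
```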
How to get performance fidelity?
•  Careful process-to-core allocation
•  Bandwidth limits
•  Scheduling priorities
•  Real-time scheduling
•  Scheduling groups w/resource isolation
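The first item, careful process-to-core allocation, is directly scriptable on Linux through the scheduler-affinity API (a sketch; which emulated processes to pin, and to which cores, is a policy decision left open here):

```python
import os

def pin_to_core(pid, core):
    """Restrict one process to a single CPU core (Linux-only).

    pid 0 means the calling process itself.
    """
    os.sched_setaffinity(pid, {core})
    return os.sched_getaffinity(pid)

# e.g. pin each emulated switch process to its own core before measuring
```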
Why not processes?

[diagram: plain processes acting as switch s1 and hosts h2, h3]

+ scales better
-  breaks applications
