
2013 Second European Workshop on Software Defined Networks

Performance Evaluation of a Scalable Software-Defined Networking Deployment

Siamak Azodolmolky, Philipp Wieder and Ramin Yahyapour


Gesellschaft für Wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG), Göttingen 37075, Germany,
E-mail: {Siamak.Azodolmolky, Philipp.Wieder, Ramin.Yahyapour}@gwdg.de

Abstract—Since the introduction of software-defined networking (SDN), scalability has been a major concern. There are different approaches to address this issue, and most of them do so without losing the benefits of SDN. SDN provides a level of flexibility that can accommodate network programming and management at scale. In this work we present the recent approaches that have been proposed to address the scalability issue of SDN deployments. We particularly select a hierarchical approach for our performance evaluation study. A mathematical framework based on network calculus is presented, and the performance of the selected scalable SDN deployment in terms of the upper bound of event processing delay and the buffer sizing of the root SDN controller is reported.

Keywords—Scalable SDN, OpenFlow, Hierarchical controller, Network Calculus, Delay Bound, Buffer Sizing

I. INTRODUCTION

Decoupling the network control out of the forwarding devices is the common denominator of Software-Defined Networking (SDN) proposals in the research community (to mention a few: [1], [2], [3], [4], and [5]). This separation paves the way for more flexible, programmable, vendor-agnostic, and innovative networking. While the SDN concept and OpenFlow [5] find their way into commercial deployments, performance aspects of the SDN concept, such as its scalability, delay bounds, buffer sizing, and similar performance metrics, are not sufficiently investigated in recent research.

It seems that control plane scalability challenges in SDN are not inherently different from similar concerns in traditional network design. In fact, SDN encourages us to apply common software and distributed systems development practices to simplify development, verification, and debugging. In SDN, control applications need not address basic but challenging concerns like topology discovery, state distribution, and resilience; they can rely on the control platform to provide these basic functions, such as maintaining a cohesive view of the network in a distributed and scalable fashion [1].

Currently, the first OpenFlow implementations from hardware vendors are being deployed in networks, and a growing number of experiments over SDN-enabled networks are expected. This will create new challenges, as questions of SDN performance and scalability have not yet been properly investigated. Understanding the performance and limitations of the SDN concept is a requirement for its usage in practical applications. There are very few performance evaluation studies of OpenFlow and the SDN architecture. Besides, an initial estimate of the performance and requirements of an SDN deployment is essential for network architects and designers. Although simulation studies and experimentation are among the widely used performance evaluation techniques, analytical modeling has its own benefits. A closed-form description of a networking architecture enables network designers to obtain a quick (and approximate) estimate of the performance of their design, without the need to spend considerable time on simulation studies or an expensive experimental setup.

In this work we utilize network calculus as a mathematical framework to analytically model the behavior of a scalable SDN deployment. To the best of our knowledge, this is the first time that a network calculus-based analytical model is investigated and presented to model a scalable SDN architecture. After this introduction, related studies addressing the scalability issue of SDN are compiled in Section II. The network calculus framework and a detailed description of our analytical models are presented in Section III. This section includes an overview of network calculus, definitions, the system model, and the analysis of SDN local and root controllers. The mathematical description of the queue length and delay bound of SDN controllers, along with the buffer requirements in a scalable deployment, is presented as well. Using the results of some recent evaluations, we present the boundary performance of packet delay and buffer sizing of SDN controllers in Section IV. Finally, we draw our conclusions and outline future research in Section V.

II. A SCALABLE SDN

The common perception that control in SDN is (logically) centralized leads to concerns about the scalability of SDN and its overall performance in a production network. Regardless of the controller capability, a (logically) central controller does not scale as the network grows (in terms of number of SDN switches, number of flows and their rate, bandwidth, etc.) and will fail to serve all the incoming requests within an acceptable level of service guarantees. Given the logically (rather than physically) centralized nature of SDN deployments, one can argue that there is no inherent bottleneck to SDN scalability. A typical data center network has tens of thousands of switching elements and can grow at a fast pace. The total number of control events generated in any network at that scale is enough to overload any centralized controller. There are applications and events that stress the control plane by over-consuming the control plane resources. For instance, a big-flow detection application continuously queries the switches to detect big flows (statistics queries and replies). Upon detection of big flows, the application re-routes them. By their design, these applications have a high chance to over-consume the control channel.

Similar to any distributed system, one can design a scalable SDN control plane [1]. Scalability limitations are not unique to SDN; traditional control protocol design faces the same challenges.

An early benchmark of NOX (as an OpenFlow controller) reveals that it can serve 30,000 requests per second [6]. Although this performance may sound sufficient for an enterprise network, it could be questionable for data-center deployments with a high rate of flow initiation [7]. One approach to address this issue is to utilize the parallel processing features of multi-core systems to improve the overall performance of the SDN controller. It is also possible to decrease the number of requests forwarded to the SDN controller. For instance, DIFANE [8] proactively pushes all state to the data path. Utilizing an ASIC, DevoFlow [9] handles short-lived flows in the data path, and only larger flows are forwarded to the controller. This effectively decreases the load on the controller and alleviates the scalability issue. Based on the constraints and setting of the underlying network infrastructure, DevoFlow trades fine-grained flow-level visibility in the control plane for scalability. Based on measurements of switching times, the authors in [10] derived a basic model for the forwarding speed and blocking probability of an OpenFlow switch combined with an OpenFlow controller and validated it.

It is possible to maintain a network-wide view using a physically distributed control plane (e.g., Onix [1]). These approaches provide control applications with a set of general APIs to facilitate access to network state. On the other hand, network state can be synchronized among multiple controller instances, providing the control and management applications with an illusion of overall network control. This is the approach utilized in HyperFlow [11]. Alternatively, one can distribute the state and/or computation of the control functionality over multiple controllers. In fact, a physically centralized controller is not an intrinsic feature of SDN. The main requirement is to have a unified network-wide view to keep the benefits of SDN. However, as in any distributed system, providing a strictly consistent centralized view puts constraints on response time and throughput as the network scales. Maintaining availability and strong consistency of network-wide state is not always feasible in distributed systems [12].

Figure 1. Two levels of controllers. Local controllers handle frequent events, while a logically centralized root controller handles less frequent (rare) events.

There is a third approach, depicted in Fig. 1 [13], which we evaluate in this work. It classifies the events and handles most of them locally in the switch or as close as possible to the switch. This means that we can handle most of the events inside the switch or using a local controller, and forward only rare events to the root controller. This framework [13] defines a scope of operations to enable applications with different requirements to coexist. Locally scoped applications (i.e., applications that can operate using the local state of a switch) are deployed close to the data path in order to process frequent requests and shield other parts of the control plane from the load. A master root controller takes care of applications that require network-wide state and also acts as a mediator for any coordination required between local controllers. Evidently, we need to move the control functionality as close as possible to the data plane, but without modifying the switches. The basic idea is to classify events in a clever way and to process frequent events in applications that do not use (or require) the network-wide state, just like in traditional networks. These applications do not need state synchronization and are called local network functions. These functions are easy to replicate as they do not share state (consider their analogy with Mappers in MapReduce). For instance, a learning switch or LLDP are good examples of local functions. There are also local modules inside the non-local network functions. For example, big-flow detection is a local function, while re-routing big flows is a non-local function. Therefore, in order to propose a scalable SDN-based architecture, a two-level hierarchy can be considered as follows (see Fig. 1). In the first level (close to the data plane), the local controllers serve local functions. These controllers run only local functions and are therefore very easy to implement and replicate (e.g., learning switch, LLDP, etc.). In the second level, a root controller serves the non-local requests/events [13]. This framework is the base architecture for our analytical model for performance evaluation.
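To make the two-level split concrete, the following minimal Python sketch illustrates the dispatch logic described above: frequent, locally scoped events are served by a local controller, while rare events that need network-wide state are escalated to the root controller. The event names and handlers are illustrative assumptions, not part of the framework in [13].

```python
# Minimal sketch of two-level (hierarchical) event dispatch: frequent,
# locally scoped events are served by a local controller; rare events
# that need network-wide state are escalated to the root controller.
# Event names and handlers are illustrative, not from the paper.

LOCAL_EVENTS = {"packet_in_known", "lldp", "mac_learning", "flow_stats"}

def handle_locally(event):
    # e.g., install a flow entry using only switch-local state
    return f"local controller handled: {event}"

def escalate_to_root(event):
    # e.g., forward the event to the root controller over the control channel
    return f"root controller handled: {event}"

def dispatch(event):
    """Classify an event and route it to the proper control layer."""
    if event in LOCAL_EVENTS:
        return handle_locally(event)    # frequent, no shared state needed
    return escalate_to_root(event)      # rare, needs network-wide state

if __name__ == "__main__":
    for ev in ["mac_learning", "lldp", "elephant_flow_reroute"]:
        print(dispatch(ev))
```

Because local handlers share no state, replicating them per switch is trivial; only the (rare) escalated events consume root controller resources.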
III. ANALYTICAL MODEL

There are two important approaches towards the analytical performance evaluation of computer networks: queuing theory [14] and network calculus [15]. Classical queuing theory is generally concerned with average quantities in equilibrium at steady state, and accuracy is the target of the analysis. Therefore, obtaining a rich set of tractable results comes at the cost of having to restrict the traffic to Markovian (memory-less) models, which is not necessarily a realistic assumption for Ethernet traffic [16]. In contrast to queuing theory, network calculus is concerned with worst-case (upper bound) instead of average (equilibrium) behaviour and therefore does not deal with arrival and departure processes themselves but with bounding processes called arrival and service curves.

A comprehensive overview and outlook of stochastic network calculus can be found in [17]. By focusing on bounds, network calculus complements classical queuing theory. To the best of our knowledge, this is the first time that a network calculus-based analytical study is presented to model the behaviour of a scalable SDN.

A. Definitions

Network calculus is a tool to analyse flow control problems in networks, mainly from a worst-case (i.e., upper bound) perspective. In other words, it is a framework to derive deterministic guarantees on delay, queue lengths, throughput, and similar performance metrics. It is mathematically based on min-plus algebra and can also be interpreted as a system theory for deterministic and stochastic queuing systems. The use of alternative algebras such as min-plus and max-plus algebra to transform complex network systems into analytically tractable models is central to network calculus theory. Furthermore, arrival and service processes are characterized by bounds in order to simplify the analysis, and the performance evaluation is carried out on these bounds. Most network flows can be described using arrival curves, represented by leaky-bucket traffic envelopes, and most network elements provide some service to the flows, described by a rate and a slack term [18], [19]. Network calculus uses this traffic specification to model the arrival and peak characteristics of a flow (packets, events, etc.).

A cumulative arrival process A is a non-decreasing, integer-valued function on the non-negative integers Z+ such that A(0) = 0. A(t) denotes the total number of arrivals (i.e., events) in time slots 1, 2, ..., t. The burstiness and the average sustainable rate of arrivals are represented by σ and ρ respectively. The number of event arrivals at time t is denoted by a(t), where a(t) = A(t) − A(t − 1). The cumulative arrival process A is said to be (σ, ρ)-upper constrained, which we denote as A ∼ (σ, ρ), if (see Fig. 2):

$$A(t) - A(s) \le \sigma + \rho(t - s), \quad 0 \le s \le t. \qquad (1)$$

Figure 2. Graphical representation of a cumulative arrival process A(t) with average sustainable request arrival rate ρ and burstiness σ, along with a stopped sequence A^τ(t).

According to the multiplexing rule [15], if constrained flows are merged, the output process is also constrained, or:

$$A_i \sim (\sigma_i, \rho_i) \;\rightarrow\; \sum_i A_i \sim \Big(\sum_i \sigma_i,\; \sum_i \rho_i\Big). \qquad (2)$$

For any increasing sequence A (i.e., a cumulative arrival process), we define its "stopped sequence" at time τ (see Fig. 2), denoted A^τ, by:

$$A^{\tau}(t) = \begin{cases} A(t) & \text{if } t \le \tau, \\ A(\tau) & \text{otherwise.} \end{cases} \qquad (3)$$

According to (3), if A is an arrival process, then the stopped sequence A^τ has no further packet (or event) arrivals after time τ. We can simply show that a stopped sequence A^τ is (σ(τ), ρ)-upper constrained, where

$$\sigma(\tau) = \max_{0 \le t \le \tau} \; \max_{0 \le s \le t} \; \big[A(t) - A(s) - \rho(t - s)\big]. \qquad (4)$$

Note that in (4), since the sequence A^τ is stopped at time τ, σ(τ) is the maximum number of packets in the queue of a work-conserving link with capacity ρ and input A^τ.

B. Local SDN Controller

A local SDN controller model is depicted in Fig. 3. This controller has one input A, one control input C (from the root SDN controller), and one output F such that F = C(A(t)), where A(t) is the cumulative number of arrivals by time t, C(n) is the number of events that are flow controlled (e.g., using the "Flow Mod" operation) among the first n arrivals, and F(t) is the cumulative number of departures by time t. In other words, the cumulative number of events output by time t is the cumulative number of event arrivals to the local controller that are "flow controlled" by the root SDN controller by time t. In the case of OpenFlow, when the OpenFlow controller caches an operation inside the flow table of the switch, consecutive packets that match the flow table entry will not be forwarded to the OpenFlow controller. The operation of the OpenFlow protocol should be considered in the definition of the C(n) function. For an ideal SDN switch, if A ∼ (σ, ρ) is upper constrained and C ∼ (δ, γ) is also upper constrained, then F ∼ (γσ + δ, γρ) is also upper constrained. This lemma can be easily proved as follows:

Proof:
$$F(t) - F(s) = C(A(t)) - C(A(s)) \le \delta + \gamma\big(A(t) - A(s)\big) \le \delta + \gamma\big(\sigma + \rho(t - s)\big) = \delta + \gamma\sigma + \gamma\rho(t - s).$$

We assume that the local SDN controller is in fact a constant server under arrival process A ∼ (σ, ρ), with a constant service rate of μ (a positive integer). We define a busy period for an SDN local controller as a period that starts at instant s and ends at instant t such that the queue length of the local controller at instants (s − 1) and t is empty, while the queue length of the controller during the said period is not empty. In other words, q(s − 1) = 0, a(s) > 0, q(r) > 0 for s ≤ r < t, and q(t) = 0, where q(t) is the length of the queue inside the controller at time slot t. According to this definition, the duration B of the busy period is B = t − s time units.
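As a concrete illustration of definitions (1)-(4), the short Python sketch below computes the tightest burst term σ(τ) of a finite arrival trace for a given rate ρ, verifies the (σ, ρ) constraint of (1), and checks the multiplexing rule (2) on two merged flows. The synthetic traces and the chosen rates are assumptions for illustration only.

```python
import random

def sigma_tau(A, tau, rho):
    """Tightest burst term per eq. (4): max over 0<=s<=t<=tau of
    A(t) - A(s) - rho*(t - s), where A is cumulative with A[0] = 0."""
    return max(A[t] - A[s] - rho * (t - s)
               for t in range(tau + 1) for s in range(t + 1))

def upper_constrained(A, sigma, rho):
    """Check eq. (1), i.e., A ~ (sigma, rho), on a finite trace."""
    return all(A[t] - A[s] <= sigma + rho * (t - s)
               for t in range(len(A)) for s in range(t + 1))

def cumulate(per_slot):
    """Turn per-slot arrivals a(t) into the cumulative process A(t)."""
    A = [0]
    for x in per_slot:
        A.append(A[-1] + x)
    return A

if __name__ == "__main__":
    random.seed(1)
    A = cumulate(random.randint(0, 4) for _ in range(100))
    B = cumulate(random.randint(0, 2) for _ in range(100))
    sA, sB = sigma_tau(A, 100, 3), sigma_tau(B, 100, 1)
    print("A ~ (%d, 3):" % sA, upper_constrained(A, sA, 3))
    # Multiplexing rule, eq. (2): the merged flow keeps the summed envelope.
    M = [a + b for a, b in zip(A, B)]
    print("A+B ~ (%d, 4):" % (sA + sB), upper_constrained(M, sA + sB, 4))
```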

Figure 3. A model of a local SDN controller with interface to a root SDN controller (i.e., C(n)).

C. Analysis of the local SDN controller

In the first part of our analysis we compute and present the queue length and the delay bound of the local SDN controller. Consider the local SDN controller model (Fig. 3) as a single-server model with a constant service rate μ. Let A be the cumulative arrival process for this controller. As defined before, q(t) is the length of the event queue at time slot t. Therefore, according to the definition of a(t) and the event forwarding rate of the SDN switch (i.e., μ), we have:

$$q(t + 1) = \big(q(t) + a(t + 1) - \mu\big)^{+} \quad \text{with } q(0) = 0. \qquad (5)$$

The notation (exp)+ means that the expression is evaluated when it is positive and is zero otherwise. By induction on t we can prove that

$$q(t) = \max_{0 \le s \le t} \big\{A(t) - A(s) - \mu(t - s)\big\}. \qquad (6)$$

If we suppose that the event arrival process A(t) is (σ, ρ)-upper constrained, and if μ ≥ ρ, then equation (6) implies that q(t) ≤ σ for all t ≥ 0. This is an upper bound independent of the service order. Using (6), we can compute the upper bound of the queue length inside the SDN controller. The cumulative output process of the SDN controller is:

$$F(t) = A(t) - q(t) = \min_{0 \le s \le t} \big\{A(s) + \mu(t - s)\big\} \quad \forall t \ge 0. \qquad (7)$$

Based on the definition of the busy period, the local SDN controller has μ departures at each of the B times {s, ..., t − 1}. Also, by definition, we must have at least one event in the queue at time t − 1. Furthermore, within the time frame {s, ..., t − 1} at least μB + 1 events must arrive to sustain the busy period of the SDN controller. Since A ∼ (σ, ρ), we have at most σ + ρB arrival events during the busy period B. This lets us derive a closed formula for the busy period of the local SDN controller from the constraints of the arrival process and its service rate. In fact we have μB + 1 ≤ σ + ρB, which can simply be re-written as (note that we present B in integer format):

$$B \le \left\lfloor \frac{\sigma - 1}{\mu - \rho} \right\rfloor. \qquad (8)$$

Having found the upper bound of the busy period, we can directly express the delay bound of the events residing inside the queue. We define the delay of an event as the time the event departs the local SDN controller minus the time it arrived (and was potentially forwarded to the root SDN controller). The delay of any event is less than or equal to the length of the busy period. Therefore, the upper bound of the event delay, independent of the service discipline, is equal to B, which is derived in (8).
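A small simulation can sanity-check the bounds q(t) ≤ σ and (8) on a synthetic trace. The sketch below iterates the recursion (5) directly; the arrival process and all parameter values are illustrative assumptions.

```python
import random

def simulate_queue(arrivals, mu):
    """Iterate the queue recursion of eq. (5): q(t+1) = (q(t) + a(t+1) - mu)+."""
    q, trace = 0, []
    for a in arrivals:
        q = max(q + a - mu, 0)
        trace.append(q)
    return trace

if __name__ == "__main__":
    random.seed(2)
    mu, rho, T = 5, 3, 100_000
    a = [random.randint(0, 2 * rho) for _ in range(T)]   # mean rate ~ rho
    # Tightest sigma for this trace via eq. (4), computed in O(T):
    # sigma = max_t [(A(t) - rho*t) - min_{s<=t} (A(s) - rho*s)].
    acc = run_min = sigma = 0
    for x in a:
        acc += x - rho
        sigma = max(sigma, acc - run_min)
        run_min = min(run_min, acc)
    q = simulate_queue(a, mu)
    # Longest busy period = longest run of slots with a non-empty queue.
    longest = busy = 0
    for qt in q:
        busy = busy + 1 if qt > 0 else 0
        longest = max(longest, busy)
    print(f"max queue   {max(q)} <= sigma = {sigma}")               # q(t) <= sigma
    print(f"busy period {longest} <= {(sigma - 1) // (mu - rho)}")  # eq. (8)
```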
Figure 4. A model of a scalable SDN controller.

D. Analysis of a scalable SDN deployment

The complete ecosystem of a scalable SDN deployment includes a number of local SDN controllers, which are centrally controlled by a root SDN controller. Here we present an analytical model that provides a closed form for the upper bound of the queue length inside the local and root SDN controllers. The interaction of a root SDN controller and local SDN controllers is depicted in Fig. 4. Note that, due to the multiplexing rule, we can easily account for the input events of the other local SDN controllers in addition to the depicted (see Fig. 3) local SDN controller. The cumulative arrival process A2 represents the input events from the other local SDN controllers, which are controlled by the root SDN controller. Similarly, part of the flow control commands that leave the root SDN controller is forwarded to the local SDN switch, and the rest is forwarded to the other local SDN controllers. These local controllers are controlled by the root SDN controller. The output stream of the local SDN controller (i.e., FS(t)) is divided into two parts. One is forwarded to the root SDN controller (i.e., S12(FS(t))) and represents the events for which the local SDN controller is not able to make a decision. The other represents the events for which an existing action is available in the local SDN controller. We assume that both cumulative arrival processes (i.e., A1, A2) are upper constrained and that the service rates of the queues in the local and root SDN controllers are C1 and C2 respectively. The flow controlling functions S12 ∼ (δ12, γ12) and S21 ∼ (δ21, γ21) are upper constrained.

We are interested in finding a closed form for the queue lengths of the local and the root SDN controllers. Let Ã1 and Ã2 be the overall arrival processes of the controllers, and let FS(t) and FC(t) be the respective output processes. We have

$$\tilde{A}_1(t) = A_1(t) + S_{21}(F_C(t)), \qquad (9)$$
$$\tilde{A}_2(t) = A_2(t) + S_{12}(F_S(t)). \qquad (10)$$

Furthermore, let FC^τ and FS^τ be the stopped sequences of FC and FS at time τ. It follows that for "any" α1, FS^τ ∼ (σ1(τ), α1), where

$$\sigma_1(\tau) = \max_{0 \le t \le \tau} \; \max_{0 \le s \le t} \; \big[F_S(t) - F_S(s) - \alpha_1(t - s)\big]. \qquad (11)$$

Similarly, for "any" α2, FC^τ ∼ (σ2(τ), α2), where

$$\sigma_2(\tau) = \max_{0 \le t \le \tau} \; \max_{0 \le s \le t} \; \big[F_C(t) - F_C(s) - \alpha_2(t - s)\big]. \qquad (12)$$

Assuming that γ12 γ21 < 1, we have:

$$\alpha_1 = \frac{\rho_1 + \gamma_{21}\rho_2}{1 - \gamma_{12}\gamma_{21}}, \qquad \alpha_2 = \frac{\rho_2 + \gamma_{12}\rho_1}{1 - \gamma_{12}\gamma_{21}}.$$

Considering the local SDN controller model and the multiplexing rule (see equation (2) above), we have:

$$F_S^{\tau} \sim \big(\sigma_1 + \gamma_{21}\sigma_2(\tau) + \delta_{21},\; \rho_1 + \gamma_{21}\alpha_2\big). \qquad (13)$$

Since the values of α1 and α2 are known, we can claim that FS^τ ∼ (σ1 + γ21 σ2(τ) + δ21, α1) is upper constrained. It follows that

$$\sigma_1(\tau) \le \sigma_1 + \gamma_{21}\sigma_2(\tau) + \delta_{21}. \qquad (14)$$

Using a similar argument, we can characterize FC^τ as follows:

$$\sigma_2(\tau) \le \sigma_2 + \gamma_{12}\sigma_1(\tau) + \delta_{12}. \qquad (15)$$

Solving the above system of inequalities results in σ1(τ) ≤ σ̃1 and σ2(τ) ≤ σ̃2, where

$$\tilde{\sigma}_1 = \frac{\sigma_1 + \gamma_{21}\sigma_2 + \gamma_{21}\delta_{12} + \delta_{21}}{1 - \gamma_{12}\gamma_{21}}, \qquad \tilde{\sigma}_2 = \frac{\sigma_2 + \gamma_{12}\sigma_1 + \gamma_{12}\delta_{21} + \delta_{12}}{1 - \gamma_{12}\gamma_{21}}.$$

Note that we have just shown that these bounds are independent of τ. Therefore, FC and FS are both upper constrained (i.e., FS ∼ (σ̃1, α1) and FC ∼ (σ̃2, α2)). This in turn implies that Ã1 and Ã2 are also upper constrained (i.e., Ã1 ∼ (σ1 + γ21 σ̃2 + δ21, α1) and Ã2 ∼ (σ2 + γ12 σ̃1 + δ12, α2)). If α1 ≤ C1, then the queue length of the buffer inside the local SDN controller (i.e., QS) is bounded by σ1 + γ21 σ̃2 + δ21. Similarly, if α2 ≤ C2, then the queue length of the buffer inside the root SDN controller (i.e., QC) is bounded by σ2 + γ12 σ̃1 + δ12. To summarize:

$$Q_S \le \sigma_1 + \gamma_{21}\tilde{\sigma}_2 + \delta_{21}, \qquad (16)$$
$$Q_C \le \sigma_2 + \gamma_{12}\tilde{\sigma}_1 + \delta_{12}. \qquad (17)$$

It is worth mentioning that our final results, the upper bounds of the queue lengths in the local and root controllers, are independent of τ and mainly depend on the upper bounds of the arrival processes and on those of the local SDN controller and the root SDN controller.
on upper bounds of the arrival processes and those of Fig. 5 and Fig. 6 depict the upper bound of the
the local SDN controller and the root SDN controller. event processing delay of two variations of the local

Assuming that the local SDN controller can be integrated inside an SDN switch, we borrow the specification of these two local SDN controllers from [20]. Thus, the first local SDN controller has an average event processing performance of 0.286 million events per second, and the second one, which is a software implementation, has an average processing performance of 0.03 million events per second [20]. As shown in these results, the average sustainable arrival rate of the input events to the second controller should be kept one order of magnitude lower than the same parameter of the other (e.g., hardware) implementation (e.g., Switch 1 in [20]) in order to achieve similar performance in terms of event processing delay.
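The kind of curves shown in Fig. 5 and Fig. 6 can be reproduced by evaluating the bound of eq. (8) directly for the two controller variants. In the sketch below, the service rates are the averages quoted above from [20], while the (σ, ρ) grid is an illustrative assumption rather than the exact points plotted in the figures.

```python
def delay_bound(sigma, rho, mu):
    """Event delay bound of eq. (8): (sigma - 1) / (mu - rho), in seconds
    when sigma is in events and mu, rho are in events per second."""
    if rho >= mu:
        raise ValueError("the bound requires rho < mu")
    return (sigma - 1) / (mu - rho)

# Average service rates of the two local controller variants [20].
MU_HARDWARE = 0.286e6   # events per second
MU_SOFTWARE = 0.030e6   # events per second

if __name__ == "__main__":
    for name, mu in (("hardware", MU_HARDWARE), ("software", MU_SOFTWARE)):
        for sigma in (1e3, 1e4, 5e4):          # illustrative burst sizes
            rho = 0.5 * mu                     # illustrative load: 50% of mu
            print(f"{name}: sigma={sigma:8.0f} ev, rho={rho:9.0f} ev/s -> "
                  f"delay <= {1e3 * delay_bound(sigma, rho, mu):7.2f} ms")
```

For equal (σ, ρ/μ) operating points, the software controller's smaller μ − ρ gap directly inflates the worst-case delay, which is why its sustainable arrival rate must stay roughly an order of magnitude lower.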
Figure 7. Upper bound of the root SDN controller buffer for a given feed forward from local controllers (i.e., S12(FS(t))) and different feedback parameters (i.e., S21(FC(t))).

Fig. 7 shows one of the potential upper bounds, which yields the required buffer space in the root SDN controller. Based on recent reports [21], we selected the Beacon OpenFlow controller, with an average performance of 1.75 million flow operations (e.g., Flow Mod) per second [21]. Assuming an average arrival rate of 0.3 million packets per second (mpps) from the local SDN controller, and 0.6 mpps as the aggregated arrival from the other local controllers, the buffer requirement of this root controller is shown in Fig. 7. We assumed that 1/100 of the arrived events are forwarded to the root controller (i.e., γ12 = 0.01), with a burstiness parameter (i.e., δ12) of 0.2 million events. Given these parameters, the required buffer size of the root SDN controller amounts to 0.83 million events in the worst case. This result helps designers provision the required buffer space (i.e., buffer sizing) based on the operating regime of the controllers in terms of the average sustainable arrival rate, the burstiness of the input traffic, and the traffic specification of the feedback and feed forward paths.
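Such buffer figures follow directly from eqs. (16) and (17). The sketch below codes the closed forms; ρ1, ρ2, γ12, and δ12 are the values quoted in the text, while σ1, σ2 and the feedback curve (δ21, γ21) are illustrative assumptions (Fig. 7 sweeps the feedback parameters, which are not listed individually in the text).

```python
def buffer_bounds(s1, r1, s2, r2, d12, g12, d21, g21):
    """Closed-form queue bounds of eqs. (16)-(17) for the local (Q_S) and
    root (Q_C) SDN controllers; requires the loop gain g12 * g21 < 1."""
    assert g12 * g21 < 1, "feedback loop gain must satisfy g12*g21 < 1"
    denom = 1 - g12 * g21
    a1 = (r1 + g21 * r2) / denom                        # effective local rate
    a2 = (r2 + g12 * r1) / denom                        # effective root rate
    s1t = (s1 + g21 * s2 + g21 * d12 + d21) / denom     # sigma~_1
    s2t = (s2 + g12 * s1 + g12 * d21 + d12) / denom     # sigma~_2
    qs = s1 + g21 * s2t + d21                           # eq. (16)
    qc = s2 + g12 * s1t + d12                           # eq. (17)
    return a1, a2, qs, qc

if __name__ == "__main__":
    # Quoted in the text: r1 = 0.3 Mev/s, r2 = 0.6 Mev/s, g12 = 1/100,
    # d12 = 0.2 Mev. The burst terms s1, s2 and the feedback curve
    # (d21, g21) below are assumptions for illustration.
    a1, a2, qs, qc = buffer_bounds(s1=0.05e6, r1=0.3e6, s2=0.1e6, r2=0.6e6,
                                   d12=0.2e6, g12=0.01, d21=0.1e6, g21=0.05)
    C2 = 1.75e6   # Beacon root controller service rate, flow ops/s [21]
    print(f"alpha2 = {a2:.3g} ev/s (bound (17) requires alpha2 <= C2 = {C2:.3g})")
    print(f"Q_S <= {qs:.3g} events, Q_C <= {qc:.3g} events")
```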
V. CONCLUSIONS AND FUTURE RESEARCH

In spite of benchmark tools and some limited simulation models, there are very few research activities that analytically evaluate the performance of an SDN deployment. In this paper we exploited the capabilities of the network calculus framework to model the behaviour of a scalable SDN deployment. By focusing on bounds and the worst-case scenario, network calculus complements classical queuing theory: the latter concerns itself with average quantities in equilibrium, while network calculus focuses on boundary conditions. Our scalable SDN deployment model (consisting of local and root SDN controllers) captured the closed form of the event delay and buffer length inside the local SDN controller. Furthermore, an analytical model of the interaction between local and root SDN controllers was analysed. Given the parameters of the cumulative arrival processes and the flow control functionality of the SDN controller, a network architect or designer is able to compute an upper-bound estimate of the delay and buffer requirements of SDN controllers. We presented the event delay of two variants of local SDN controllers along with the buffer requirement of the root SDN controller. In addition to deterministic network calculus, stochastic network calculus is another interesting branch of network calculus, which can also be utilized for the analytical modelling of other aspects of SDN deployments. Comparing the performance of this approach with simulation or experimental setups is among the future work of this study.

REFERENCES

[1] T. Koponen, M. Casado, N. Gude, J. Stribling, L. Poutievski, M. Zhu, R. Ramanathan, Y. Iwata, H. Inoue, T. Hama et al., "Onix: a distributed control platform for large-scale production networks," in Proceedings of the 9th USENIX conference on Operating systems design and implementation, 2010, pp. 1–6.

[2] A. Greenberg, G. Hjalmtysson, D. A. Maltz, A. Myers, J. Rexford, G. Xie, H. Yan, J. Zhan, and H. Zhang, "A clean slate 4D approach to network control and management," SIGCOMM Comput. Commun. Rev., vol. 35, no. 5, pp. 41–54, Oct. 2005.

[3] M. Caesar, D. Caldwell, N. Feamster, J. Rexford, A. Shaikh, and J. van der Merwe, "Design and implementation of a routing control platform," in Proceedings of the 2nd conference on Symposium on Networked Systems Design & Implementation - Volume 2, ser. NSDI '05, Berkeley, CA, USA, 2005, pp. 15–28.

[4] M. Casado, M. J. Freedman, J. Pettit, J. Luo, N. McKeown, and S. Shenker, "Ethane: taking control of the enterprise," in Proceedings of the 2007 conference on Applications, technologies, architectures, and protocols for computer communications, ser. SIGCOMM '07, New York, NY, USA, 2007, pp. 1–12.

[5] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: enabling innovation in campus networks," SIGCOMM Comput. Commun. Rev., vol. 38, no. 2, pp. 69–74, Mar. 2008.

[6] A. Tavakoli, M. Casado, T. Koponen, and S. Shenker, "Applying NOX to the datacenter," in Proc. of workshop on Hot Topics in Networks (HotNets-VIII), 2009.
[7] T. Benson, A. Akella, and D. A. Maltz, "Network traffic characteristics of data centers in the wild," in Proceedings of the 10th ACM SIGCOMM conference on Internet measurement, ser. IMC '10. New York, NY, USA: ACM, 2010, pp. 267–280.

[8] M. Yu, J. Rexford, M. J. Freedman, and J. Wang, "Scalable flow-based networking with DIFANE," in Proceedings of the ACM SIGCOMM 2010 conference, ser. SIGCOMM '10, New York, NY, USA, 2010, pp. 351–362.

[9] A. R. Curtis, J. C. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, and S. Banerjee, "DevoFlow: scaling flow management for high-performance networks," in Proceedings of the ACM SIGCOMM 2011 conference, ser. SIGCOMM '11, 2011, pp. 254–265.

[10] M. Jarschel, S. Oechsner, D. Schlosser, R. Pries, S. Goll, and P. Tran-Gia, "Modeling and performance evaluation of an OpenFlow architecture," in Teletraffic Congress (ITC), 2011, Sept., pp. 1–7.

[11] A. Tootoonchian and Y. Ganjali, "HyperFlow: a distributed control plane for OpenFlow," in Proceedings of the 2010 internet network management conference on Research on enterprise networking, ser. INM/WREN '10. Berkeley, CA, USA: USENIX Association, 2010.

[12] S. Gilbert and N. Lynch, "Brewer's conjecture and the feasibility of consistent, available, partition-tolerant web services," in ACM SIGACT News, 2002.

[13] S. Hassas Yeganeh and Y. Ganjali, "Kandoo: a framework for efficient and scalable offloading of control applications," in Proceedings of the first workshop on Hot topics in software defined networks, ser. HotSDN '12, 2012, pp. 19–24.

[14] L. Kleinrock, Queueing Systems. Wiley Interscience, 1975, vol. I: Theory.

[15] J.-Y. Le Boudec and P. Thiran, Network Calculus: A Theory of Deterministic Queuing Systems for the Internet. Springer-Verlag, 2001.

[16] W. E. Leland, M. S. Taqqu, W. Willinger, and D. V. Wilson, "On the self-similar nature of Ethernet traffic (extended version)," IEEE/ACM Trans. Netw., vol. 2, no. 1, pp. 1–15, Feb. 1994.

[17] Y. Jiang, "Stochastic network calculus for performance analysis of internet networks - an overview and outlook," in Computing, Networking and Communications (ICNC), 2012 International Conference on, 2012, pp. 638–644.

[18] R. Cruz, "A calculus for network delay. I. Network elements in isolation," Information Theory, IEEE Transactions on, vol. 37, no. 1, pp. 114–131, Jan. 1991.

[19] ——, "A calculus for network delay. II. Network analysis," Information Theory, IEEE Transactions on, vol. 37, no. 1, pp. 132–141, Jan. 1991.

[20] C. Rotsos, N. Sarrar, S. Uhlig, R. Sherwood, and A. Moore, "OFLOPS: An open framework for OpenFlow switch evaluation," in Passive and Active Measurement, ser. Lecture Notes in Computer Science, N. Taft and F. Ricciato, Eds. Springer, 2012, vol. 7192, pp. 85–95.

[21] OpenFlow controller performance comparison. [Online]. Available: http://www.openflow.org/wk/index.php/Controller_Performance_Comparisons (last access 24 September 2013).

