Internetworking: Architectures, Protocols and Applications
Course Code: ETN676
Instructor: Dr. Muhammad Awais Javed
Network layer
• transport segment from sending to receiving host
• on sending side encapsulates segments into datagrams
• on receiving side, delivers segments to transport layer
• network layer protocols in every host, router
• router examines header fields in all IP datagrams passing through it

[figure: protocol stacks along an end-to-end path: application, transport, network, data link, physical at the end hosts; network, data link, physical at each router in between]
Two key network-layer functions
• forwarding: move packets from router’s input to appropriate router output
• routing: determine route taken by packets from source to dest.
  – routing algorithms

analogy:
• routing: process of planning trip from source to dest
• forwarding: process of getting through a single interchange
Interplay between routing and forwarding
• routing algorithm determines end-end path through network
• local forwarding table determines local forwarding at this router

local forwarding table
header value   output link
0100           3
0101           2
0111           2
1001           1

example: a packet arrives with value 0111 in its header; the table maps 0111 to output link 2, so the router forwards it on link 2.
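To make the split concrete, here is a minimal Python sketch (not from the slides; the table values are the ones in the figure above): routing fills the table, forwarding is just a lookup keyed on the arriving packet’s header value.

    # forwarding table as computed and pushed down by the routing algorithm
    forwarding_table = {
        "0100": 3,
        "0101": 2,
        "0111": 2,
        "1001": 1,
    }

    def forward(header_value: str) -> int:
        """Forwarding: map the arriving packet's header value to an output link."""
        return forwarding_table[header_value]

    print(forward("0111"))   # -> 2, matching the example above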
Connection setup
• 3rd important function in some network architectures:
  – ATM, frame relay, X.25
• before datagrams flow, two end hosts and intervening routers establish virtual connection
  – routers get involved
• network vs transport layer connection service:
  – network: between two hosts (may also involve intervening routers in case of VCs)
  – transport: between two processes
Network service model
Q: What service model for “channel” transporting datagrams from sender to receiver?

example services for individual datagrams:
• guaranteed delivery
• guaranteed delivery with less than 40 msec delay

example services for a flow of datagrams:
• in-order datagram delivery
• guaranteed minimum bandwidth to flow
• restrictions on changes in inter-packet spacing
Network layer service models

Network        Service       Guarantees?                            Congestion
Architecture   Model         Bandwidth    Loss   Order   Timing     feedback
Internet       best effort   none         no     no      no         no (inferred via loss)
ATM            CBR           constant     yes    yes     yes        no congestion
                             rate
ATM            VBR           guaranteed   yes    yes     yes        no congestion
                             rate
ATM            ABR           guaranteed   no     yes     no         yes
                             minimum
ATM            UBR           none         no     yes     no         no
Connection, connection-less service
• datagram network provides network-layer connectionless service
• virtual-circuit network provides network-layer connection service
• analogous to TCP/UDP connection-oriented / connectionless transport-layer services, but:
  – service: host-to-host
  – no choice: network provides one or the other
  – implementation: in network core
Datagram networks
• no call setup at network layer
• routers: no state about end-to-end connections
  – no network-level concept of “connection”
• packets forwarded using destination host address

[figure: sending host (1. send datagrams) and receiving host (2. receive datagrams), each with application, transport, network, data link, physical layers]
Datagram forwarding table
• 4 billion IP addresses, so rather than list individual destination addresses, list ranges of addresses (aggregate table entries)

local forwarding table
dest address range   output link
address-range 1      3
address-range 2      2
address-range 3      2
address-range 4      1

the IP destination address in the arriving packet’s header selects the matching range, and with it the output link.
Datagram forwarding table

Destination Address Range                  Link Interface

11001000 00010111 00010000 00000000
  through                                        0
11001000 00010111 00010111 11111111

11001000 00010111 00011000 00000000
  through                                        1
11001000 00010111 00011000 11111111

11001000 00010111 00011001 00000000
  through                                        2
11001000 00010111 00011111 11111111

otherwise                                        3

Q: but what happens if ranges don’t divide up so nicely?


Longest prefix matching
when looking for forwarding table entry for given destination address, use longest address prefix that matches destination address.

Destination Address Range                  Link interface
11001000 00010111 00010*** *********            0
11001000 00010111 00011000 *********            1
11001000 00010111 00011*** *********            2
otherwise                                        3

examples:
DA: 11001000 00010111 00010110 10100001   which interface?
DA: 11001000 00010111 00011000 10101010   which interface?
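A small Python sketch (not from the slides) of longest prefix matching over the table above; the prefixes are written as plain bit strings with the “*” wildcard positions dropped. Running it also answers the two example lookups.

    # forwarding table: (prefix as a bit string, link interface)
    # the "otherwise" entry is the empty prefix, which matches every address
    table = [
        ("110010000001011100010", 0),      # 11001000 00010111 00010***
        ("110010000001011100011000", 1),   # 11001000 00010111 00011000
        ("110010000001011100011", 2),      # 11001000 00010111 00011***
        ("", 3),                           # otherwise
    ]

    def lookup(dest_addr: str) -> int:
        """Return the link interface of the longest prefix matching dest_addr."""
        best = max((p for p, _ in table if dest_addr.startswith(p)), key=len)
        return next(link for p, link in table if p == best)

    print(lookup("11001000000101110001011010100001"))  # -> 0 (only the first prefix matches)
    print(lookup("11001000000101110001100010101010"))  # -> 1 (prefixes 1 and 2 both match; 1 is longer)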
Datagram or VC network: why?

Internet (datagram)
• data exchange among computers
  – “elastic” service, no strict timing req.
• many link types
  – different characteristics
  – uniform service difficult
• “smart” end systems (computers)
  – can adapt, perform control, error recovery
  – simple inside network, complexity at “edge”

ATM (VC)
• evolved from telephony
• human conversation:
  – strict timing, reliability requirements
  – need for guaranteed service
• “dumb” end systems
  – telephones
  – complexity inside network
Router architecture overview
two key router functions:
• run routing algorithms/protocols (RIP, OSPF, BGP)
• forwarding datagrams from incoming to outgoing link

[figure: the routing processor (routing, management: control plane, in software) computes forwarding tables and pushes them to the input ports; the input ports feed a high-speed switching fabric connected to the router output ports (forwarding: data plane, in hardware)]
Input port functions

[figure: line termination, then link layer protocol (receive), then lookup, forwarding and queueing, then the switch fabric]

• physical layer: bit-level reception
• data link layer: e.g., Ethernet (see chapter 5)
• decentralized switching:
  – given datagram dest., lookup output port using forwarding table in input port memory (“match plus action”)
  – goal: complete input port processing at ‘line speed’
  – queuing: if datagrams arrive faster than forwarding rate into switch fabric
Switching fabrics
• transfer packet from input buffer to appropriate output buffer
• switching rate: rate at which packets can be transferred from inputs to outputs
  – often measured as multiple of input/output line rate
  – N inputs: switching rate N times line rate desirable
• three types of switching fabrics: memory, bus, crossbar
Switching via memory
first generation routers:
• traditional computers with switching under direct control of CPU
• packet copied to system’s memory
• speed limited by memory bandwidth (2 bus crossings per datagram): if memory bandwidth is B packets per second (i.e., B packets can be read from or written to memory per second), then forwarding throughput is less than B/2
• two packets cannot be forwarded at the same time even if they have different destination ports

[figure: input port (e.g., Ethernet), memory, and output port (e.g., Ethernet) connected over the system bus]
Switching via a bus
• datagram moves from input port memory to output port memory via a shared bus
• packet is received by all ports, but only the port with the matching header keeps it
• issue: if packets are waiting at more than one input port, they all have to wait; only one can cross the shared bus at a time
• bus contention: switching speed limited by bus bandwidth
• 32 Gbps bus, Cisco 5600: sufficient speed for access and enterprise routers
Switching via interconnection network
• overcome bus bandwidth limitations
• banyan networks, crossbars, other interconnection nets initially developed to connect processors in multiprocessors
• 2N buses connect N input ports with N output ports
• intersection points are controlled by switches
• unlike the previous switching schemes, a crossbar network is capable of forwarding multiple packets in parallel
• however, if two packets from different input ports are destined to the same output port, one packet has to wait
• advanced design: fragment datagram into fixed-length cells, switch cells through the fabric
• Cisco 12000: switches 60 Gbps through the interconnection network
Output ports
This slide is HUGELY important!

[figure: datagrams leave the switch fabric into a datagram buffer (queueing), then pass through the link layer protocol (send) and line termination]

• buffering required when datagrams arrive from fabric faster than the transmission rate
• scheduling discipline chooses among queued datagrams for transmission
• datagrams (packets) can be lost due to congestion, lack of buffers
• priority scheduling: who gets best performance, network neutrality
Output port queueing
• buffering when arrival rate via switch exceeds output line speed
• queueing (delay) and loss due to output port buffer overflow!

[figure: at time t, several packets move from the inputs through the switch fabric toward the same output; one packet time later, the extra packets are still queued at that output port]
How much buffering?
• RFC 3439 rule of thumb: average buffering equal to “typical” RTT (say 250 msec) times link capacity C
  – e.g., C = 10 Gbps link: 2.5 Gbit buffer
• recent recommendation: with N flows, buffering equal to RTT · C / √N
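As a quick sanity check of the two rules, a small Python calculation (assuming RTT = 250 ms, C = 10 Gbps and, for the second rule, N = 10,000 flows; N is an assumed value for illustration):

    RTT = 0.250      # seconds ("typical" RTT)
    C = 10e9         # link capacity in bits per second (10 Gbps)
    N = 10_000       # number of flows sharing the link (assumed)

    classic = RTT * C             # RFC 3439 rule of thumb
    recent = RTT * C / N**0.5     # buffering ~ RTT * C / sqrt(N)

    print(f"rule of thumb: {classic / 1e9:.2f} Gbit")   # 2.50 Gbit
    print(f"with N flows:  {recent / 1e6:.0f} Mbit")    # 25 Mbit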

Input port queuing
• fabric slower than input ports combined -> queueing may occur at input queues
  – queueing delay and loss due to input buffer overflow!
• Head-of-the-Line (HOL) blocking: queued datagram at front of queue prevents others in queue from moving forward

[figure: output port contention at time t: only one red datagram can be transferred, the lower red packet is blocked; one packet time later, the green packet behind it experiences HOL blocking]
IP datagram format

[figure: IPv4 header layout, 32 bits wide: ver | head. len | type of service | length; 16-bit identifier | flgs | fragment offset; time to live | upper layer | header checksum; 32 bit source IP address; 32 bit destination IP address; options (if any); data (variable length, typically a TCP or UDP segment)]

field notes:
• ver: IP protocol version number
• head. len: header length (bytes)
• type of service: “type” of data
• length: total datagram length (bytes)
• identifier, flgs, fragment offset: for fragmentation/reassembly
• time to live: max number of remaining hops (decremented at each router)
• upper layer: upper layer protocol to deliver payload to
• header checksum
• options (if any): e.g. timestamp, record route taken, specify list of routers to visit

how much overhead?
• 20 bytes of TCP
• 20 bytes of IP
• = 40 bytes + app layer overhead
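As an illustration of the fixed part of this header, a minimal Python sketch (not from the slides) that unpacks the first 20 bytes of an IPv4 header into the fields named above:

    import struct

    def parse_ipv4_header(raw: bytes) -> dict:
        """Unpack the fixed 20-byte IPv4 header (options, if any, follow it)."""
        (ver_ihl, tos, total_len, ident, flags_frag,
         ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
        return {
            "version": ver_ihl >> 4,
            "header_len_bytes": (ver_ihl & 0x0F) * 4,
            "total_length": total_len,
            "identifier": ident,
            "flags": flags_frag >> 13,
            "fragment_offset": flags_frag & 0x1FFF,   # in units of 8 bytes
            "time_to_live": ttl,
            "upper_layer_protocol": proto,            # 6 = TCP, 17 = UDP
            "src": ".".join(str(b) for b in src),
            "dst": ".".join(str(b) for b in dst),
        }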

IP fragmentation, reassembly
• network links have MTU (max. transfer size): largest possible link-level frame
  – different link types, different MTUs
• large IP datagram divided (“fragmented”) within net
  – one datagram becomes several datagrams
  – “reassembled” only at final destination
  – IP header bits used to identify, order related fragments

[figure: fragmentation: in: one large datagram, out: 3 smaller datagrams; reassembly at the destination reverses this]
IP fragmentation, reassembly
example:
• 4000 byte datagram
• MTU = 1500 bytes
• 1480 bytes in data field
• offset = 1480/8

one large datagram:
length = 4000, ID = x, fragflag = 0, offset = 0

becomes several smaller datagrams:
length = 1500, ID = x, fragflag = 1, offset = 0
length = 1500, ID = x, fragflag = 1, offset = 185
length = 1040, ID = x, fragflag = 0, offset = 370
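A short Python sketch (not from the slides) that reproduces the numbers above, assuming a 20-byte IP header and per-fragment data sizes rounded down to a multiple of 8 bytes:

    def fragment(total_length: int, mtu: int, header_len: int = 20):
        """Return (length, fragflag, offset) for each fragment of one datagram."""
        payload = total_length - header_len         # 4000 - 20 = 3980 bytes of data
        max_data = (mtu - header_len) // 8 * 8      # data per fragment, multiple of 8 -> 1480
        fragments, offset = [], 0
        while payload > 0:
            data = min(max_data, payload)
            payload -= data
            fragments.append((data + header_len, 1 if payload > 0 else 0, offset // 8))
            offset += data
        return fragments

    print(fragment(4000, 1500))
    # [(1500, 1, 0), (1500, 1, 185), (1040, 0, 370)] -- matches the table above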
