
Arhitecturi pentru retele si servicii (ARS) [Network and Service Architectures]
Managementul serviciilor si retelelor (MSR) [Service and Network Management]

Multicast Communications

Data delivered to a group of receivers. Typical patterns:
- One-to-many (1:N), one-way
- One-to-many (1:N), two-way
- Many-to-many (N:M)

[Figure: delivery patterns; legend: M = Multicast, U = Unicast, S = Sender, R = Receiver]

Chapter outline

- What applications use multicast?
- What are the requirements and design challenges of multicast communications?
- What multicast support does IP provide (network layer)?

After an overview of multicast applications, we'll focus on IP multicast: service model, addressing, group management, and routing protocols.
Octavian Catrina 2

Multicast applications (examples)

One-to-many
- Real-time audio-video distribution: lectures, presentations, meetings, movies, Internet TV, etc. Time sensitive. High bandwidth.
- Push media: news headlines, stock quotes, weather updates, sports scores. Low bandwidth. Limited delay.
- File distribution: Web content replication (mirror sites, content networks), software distribution. Bulk transfer. Reliable delivery.
- Announcements: alarms, service advertisements, network time, etc. Low bandwidth. Short delay.

Many-to-many
- Multimedia conferencing: multiple synchronized video/audio streams, whiteboard, etc. Time sensitive. High bandwidth, but typically only one sender active at a time.
- Distance learning: presentation from lecturer to students, questions from students to everybody.
- Synchronized replicated resources, e.g., distributed databases. Time sensitive.
- Multi-player games: possibly high bandwidth, all senders active in parallel. Time sensitive.
- Collaboration, distributed simulations, etc.

Multicast requirements (1)

Efficient and scalable delivery
- Multi-unicast repeats each data item once per receiver. Wastes sender and network resources. Cannot scale up for many receivers and/or large amounts of data.

Timely and synchronized delivery
- Multi-unicast uses sequential transmission, resulting in long, variable delay for large groups and/or large amounts of data. A critical issue, in particular, for real-time communications (e.g., videoconferencing).

We need a different delivery paradigm.

Multi-unicast vs. Multicast tree

Multi-unicast delivery
- 1:N transmission handled as N unicast transmissions.
- Inefficient and slow for N >> 1: multiple packet copies per link (up to N).

Multicast tree delivery
- Transmission follows the edges of a tree, rooted at the sender, reaching all the receivers.
- A single packet copy per link.

[Figure: multi-unicast vs. multicast tree delivery from one sender to receivers 1-5]
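The efficiency gap can be made concrete by counting packet copies per link. A minimal sketch (the four-node topology and all link names are hypothetical):

```python
from collections import Counter

# Hypothetical topology: sender s reaches three receivers via routers r1, r2.
# Unicast path (as a list of links) from s to each receiver:
paths = {
    "rcv1": [("s", "r1"), ("r1", "rcv1")],
    "rcv2": [("s", "r1"), ("r1", "r2"), ("r2", "rcv2")],
    "rcv3": [("s", "r1"), ("r1", "r2"), ("r2", "rcv3")],
}

# Multi-unicast: one copy per receiver on every link of its path.
multi_unicast = Counter(link for p in paths.values() for link in p)

# Multicast tree: the union of the paths, one copy per tree link.
multicast = Counter({link for p in paths.values() for link in p})

print(multi_unicast[("s", "r1")])  # 3 copies on the first link
print(multicast[("s", "r1")])      # 1 copy
print(sum(multi_unicast.values()), sum(multicast.values()))  # 8 5
```

With only three receivers the first link already carries three copies under multi-unicast; the gap grows linearly with the group size.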

Multicast requirements (2)

Multicast group identification
- Applications need special identifiers for multicast groups. (Could they use lists of host IP addresses or DNS names?)
- Groups have a limited lifetime. We need mechanisms for dynamic allocation of unique multicast group identifiers (addresses).

Group management
- Group membership changes dynamically. We need join and leave mechanisms (latency may be critical).
- For many applications, a sender must be able to send without knowing the group members or having to join (e.g., for scalability).
- A receiver might need to select the senders it receives from.

Multicast requirements (3)

Session management
- Receivers must learn when a multicast session starts and what the group id is (so that they can "tune in"). We need session description & announcement mechanisms.

Reliable delivery
- Applications need a certain level of reliable data delivery. Some tolerate limited data loss; others do not tolerate any loss (e.g., all data to all group members: a hard problem). We need mechanisms that can provide the desired reliability.

Heterogeneous receivers
- Receivers within a group may have very different capabilities and network connectivity: processing and memory resources, network bandwidth and delay, etc. We need special delivery mechanisms.

Requirements: Some conclusions

Multi-unicast delivery is not suitable
- Multi-unicast does not scale up for large groups and/or large amounts of data: it becomes either very inefficient, or does not fulfill the application requirements.

Specific functional requirements
- Specific multicast functions, not needed for unicast: group management, heterogeneous receivers.
- General functions, also needed for unicast but much more complex for multicast: addressing, routing, reliable delivery, flow & congestion control.

We need new mechanisms and protocols, specially designed for multicast.

Which layers should handle multicast?

Data link layer
- Efficient delivery within a multi-access network. Multicast extensions for LAN and WAN protocols.

Network layer
- Multicast routing for efficient & timely delivery. IP multicast extensions; multicast routing protocols.

Transport layer
- End-to-end error control, flow control, and congestion control over unreliable IP multicast. Multicast transport protocols.

Application layer multicast
- Overlay network created at the application layer using existing unicast transport protocols. Easier deployment, less efficient. Still an open research topic.

IP multicast model (1)

"Transmission of an IP datagram to a group of hosts"
- Extension of the IP unicast datagram service. IP multicast model specification: RFC 1112, 1989.

Multicast address
- Unique (destination) address for a group of hosts, with distinct datagram delivery semantics.
- A distinct range of addresses is reserved in the IP address space.

Who sends? Any host can send to any group.
- Multicast senders need not be members of the groups they send to.

Who receives? Explicit receiver join.
- IP delivers datagrams with destination address G only to applications that have explicitly notified IP that they are members of group G (i.e., requested to join group G).

IP multicast model (2)

No restrictions on group size and member location
- Groups can be of any size. Group members can be located anywhere in an internetwork.

Dynamic group membership
- Receivers can join and leave a group at will. The IP network must adapt the multicast tree accordingly.

Anonymous groups
- Senders need not know the identity of the receivers. Receivers need not know each other.
- Analogy: a multicast address is like a radio frequency, on which anyone can transmit and anyone can tune in.

Best-effort datagram delivery
- No guarantees that (1) all datagrams are delivered, (2) to all group members, (3) in the order they have been transmitted.

IP multicast model: brief analysis

Applications viewpoint
- Simple, convenient service interface: the same send/receive operations as for unicast, plus join/leave operations.
- Anybody can send/listen to a group: security, billing?
- Extension to a reliable multicast service? A difficult problem.

IP network viewpoint
- Scales up well with the group size: a single destination address, no need to monitor membership.
- Does not scale up with the number of groups; conflicts with the original IP model (per-session state in routers): routers must discover the existence/location of receivers and senders, and maintain dynamic multicast tree state per group, and even per source & group.
- Dynamic multicast address allocation: how to avoid allocation conflicts (globally)? A very difficult problem.

IPv4 multicast addresses

- Address range (class D): 224.0.0.0 to 239.255.255.255.
- Format: the leading 4 bits are 1110 (bits 31-28), followed by a 28-bit multicast group address (2^28 addresses).

IP multicast in LANs
- Relies on the MAC layer's native multicast.
- Mapping of IPv4 multicast addresses to Ethernet multicast addresses: the Ethernet address is the fixed prefix 01-00-5E, a 0 bit, and then the low-order 23 bits of the 28-bit IP group address. Since the top 5 bits of the group address are dropped, 32 IPv4 multicast addresses map to the same Ethernet address.
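The address mapping above fits in a few lines; a minimal sketch (the function name is illustrative):

```python
def ipv4_to_ethernet_multicast(ip: str) -> str:
    """Map an IPv4 multicast address to its Ethernet multicast address.

    The fixed prefix 01:00:5e plus a 0 bit is followed by the low-order
    23 bits of the IP address; the top 5 bits of the 28-bit group field
    are dropped, so 32 IP addresses share each MAC address.
    """
    o = [int(x) for x in ip.split(".")]
    if not 224 <= o[0] <= 239:
        raise ValueError(f"{ip} is not an IPv4 multicast address")
    return "01:00:5e:%02x:%02x:%02x" % (o[1] & 0x7F, o[2], o[3])

print(ipv4_to_ethernet_multicast("224.1.2.3"))    # 01:00:5e:01:02:03
print(ipv4_to_ethernet_multicast("239.129.2.3"))  # 01:00:5e:01:02:03 (same!)
```

The second call shows the 32-to-1 ambiguity: receivers must still filter by the full IP destination address.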

Multicast scope

Multicast scope
- Limited network region where multicast packets are forwarded.
- Application-specific reasons and/or better efficiency.
- TTL-based or administrative scopes (RFC 2365).

Administrative scopes
- Delimited by configuring boundary routers: do not forward certain ranges of multicast addresses on certain interfaces.
- Regions are connected and convex; they can be nested and/or overlapping.

IPv4 administrative scopes:
- Link-local scope: 224.0.0.0/24.
- Local scope: 239.255.0.0/16.
- Organization-local scope: 239.192.0.0/14.
- Global scope: no boundary; all remaining multicast addresses.

[Figure: nested scopes, from link-local and local (A, B) through organization-local to the global Internet]
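Classifying an address by administrative scope is a straightforward longest-match over the ranges above; a minimal sketch using the standard library:

```python
import ipaddress

# Administrative scope ranges from the slide (RFC 2365); most specific first.
SCOPES = [
    ("link-local", ipaddress.ip_network("224.0.0.0/24")),
    ("local", ipaddress.ip_network("239.255.0.0/16")),
    ("organization-local", ipaddress.ip_network("239.192.0.0/14")),
]

def multicast_scope(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    if not ip.is_multicast:
        raise ValueError(f"{addr} is not a multicast address")
    for name, net in SCOPES:
        if ip in net:
            return name
    return "global"   # no boundary: all remaining multicast addresses

print(multicast_scope("224.0.0.5"))    # link-local
print(multicast_scope("239.255.1.2"))  # local
print(multicast_scope("224.1.2.3"))    # global
```

A boundary router would apply the same test per interface to decide whether a packet may cross the scope border.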

Group management: local

Multicast service requirements
- Multicast routers have to discover the location of the members of any multicast group, and maintain a multicast tree reaching them all.
- Dynamic group membership.

Local (link) level
- Multicast applications must notify IP when they join or leave a multicast group (an API is available).
- The Internet Group Management Protocol (IGMP) allows multicast routers to learn which groups have members at each interface: a dialog between the hosts and a (link-)local multicast router.

[Figure: sender, multicast tree, and receivers running IGMP with their local routers]
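On the host side, the join/leave API mentioned above is exposed through socket options; setting them is what triggers the IGMP messages. A minimal sketch (the group and port are hypothetical, and the join may fail on hosts without a multicast-capable interface):

```python
import socket
import struct

GROUP, PORT = "224.1.2.3", 5000   # hypothetical group and port

def membership_request(group: str, iface: str = "0.0.0.0") -> bytes:
    # struct ip_mreq: 4-byte group address + 4-byte local interface address
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
mreq = membership_request(GROUP)
try:
    # The kernel sends an IGMP Report (join) for GROUP ...
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    # ... datagrams sent to GROUP:PORT can now be received ...
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
    # ... and an IGMP Leave (IGMPv2+) follows when the last member drops.
except OSError:
    pass  # no multicast-capable interface in this environment
sock.close()
```

Note that the application never talks IGMP directly: it only declares membership, and the kernel runs the protocol with the local router.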

Group management: internetwork

Global (internetwork) level
- Multicast routing protocols propagate information about group membership, and allow routers to build the multicast tree.

Implicit vs. explicit join
- Implicit: the multicast tree is obtained by pruning a default broadcast tree; nodes must ask to be removed.
- Explicit: nodes must ask to join.

Data-driven vs. control-driven multicast tree setup
- Data-driven: tree built/maintained when/while data is sent.
- Control-driven: tree set up & maintained by control messages (join/leave), independently of the senders' activity.

Group Management: IGMP (1)

Internet Group Management Protocol
- Enables a multicast router to learn, for each of its directly attached networks, which multicast addresses are of interest to the systems attached to these networks.
- IGMPv1: join + refresh + implicit leave (timeout).
- IGMPv2: adds explicit leave (fast).
- IGMPv3 (2002): adds source selection.
- IGMPv3 is presented in the following; IGMPv1/v2 in the annex.

Periodic General Queries: refresh/update the group list
- (1) The router sends an IGMPv3 General Query ("anybody interested in any group?") in an IP packet to 224.0.0.1 (all systems on this subnet).
- (2) Members answer with an IGMPv3 Current State Report (e.g., "member of group 224.1.2.3") in an IP packet to 224.0.0.22 (all IGMPv3 routers); the router records the local groups per interface (e.g., 224.1.2.3 at interface i1).
- Reports are randomly delayed to avoid bursts. (Duplicate reports are completely suppressed in IGMPv1 & v2.)
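The randomized report delay can be illustrated with a tiny simulation (host names are hypothetical): in IGMPv1/v2 a host cancels its pending report when it hears another member's report for the same group, while in IGMPv3 every member reports.

```python
import random

def reports_sent(hosts, max_resp=10.0, suppression=True, seed=1):
    # Each member schedules its report after a random delay in [0, max_resp).
    rng = random.Random(seed)
    schedule = sorted((rng.uniform(0, max_resp), h) for h in hosts)
    sent = []
    for _delay, host in schedule:
        if suppression and sent:
            break   # heard an earlier report for this group: cancel ours
        sent.append(host)
    return sent

members = ["h1", "h2", "h3"]
print(len(reports_sent(members)))                     # 1  (IGMPv1/v2)
print(len(reports_sent(members, suppression=False)))  # 3  (IGMPv3)
```

With suppression, one report per group and query round suffices for the router; IGMPv3 gave up suppression because per-member state (source filters) is needed.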

IGMP (2)

Host joins a group
- The host sends an IGMPv3 State Change Report ("joined group 224.1.2.3") in an IP packet to 224.0.0.22 (all IGMPv3 routers).
- The router adds 224.1.2.3 to the local groups at the receiving interface (i1).

Host leaves a group
- (1) The host sends an IGMPv3 State Change Report ("not member of 224.1.2.3") to 224.0.0.22.
- (2) The router must check whether there are other members of that group: it sends an IGMPv3 Group-Specific Query ("anybody interested in 224.1.2.3?") in an IP packet to 224.0.0.1 (all systems on this subnet).
- (3) Any remaining member answers with an IGMPv3 Current State Report ("member of 224.1.2.3"), and the group state at i1 is maintained; otherwise it is removed.

Multicast trees

What kind of multicast tree?
- Minimize tree diameter (path length, delivery delay) or tree cost (network resources)?
- The minimum-cost tree reaching the group is a special, difficult case of the minimum-cost spanning tree (Steiner tree). No good distributed algorithm!

Practical solution
- Take advantage of existing unicast routing: a shortest-path tree based on routing info from the unicast routing protocol.
- Either a multicast extension of a unicast routing protocol, or a separate multicast routing protocol.

[Figure: shortest-path tree (e.g., from unicast routing) vs. minimum-cost tree]

Source-based vs. shared trees

Source-based trees
- One tree per sender, rooted at the sender. Typically a shortest-path tree.

Shared trees
- One tree for all senders. Examples: minimum-diameter tree, minimum-cost tree, etc.

Source-based trees (1)


Source-based tree
- Tree rooted at a sender, spanning all the receivers. Typically a shortest-path tree. Example: a source network 172.16.5.0/24 sending to group 224.1.1.1.

In general: M >= 1 senders per group
- M sources transmit to a group. Session participants may be senders, receivers, or both.
- A separate source-based tree has to be set up for each sender (e.g., a second source in 172.20.2.0/24).

Router 2: Multicast forwarding table
  Source prefix    Multicast group   In IF   Out IF
  172.16.5.0/24    224.1.1.1         N       S, E, SE
  172.20.2.0/24    224.1.1.1         E       N

Interface notation: N = North (up); S = South (down); W = West (left); E = East (right); NW = North-West (up-left); etc.

Source-based trees (2)

Pros
- Per-source tree optimization: shortest network path & transfer delay.
- Tree created/maintained only when/while a source is active.

Cons
- Does not scale for multicast sessions with M >> 1 sources: the network must create and maintain M separate trees, with per-source & group state in routers, higher control traffic, and processing overhead.

Examples
- PIM-DM, DVMRP, MOSPF: data-driven tree setup.
- Mixed solution: PIM-SM: explicit join, control-driven tree setup.

Shared trees (1)

Core-based shared tree
- The multicast session uses a single distribution tree, with the root at a "core" node, spanning all the receivers ("core-based" tree).
- Each sender transmits its packets to the core node, which delivers them to the group of receivers.
- Typically a shortest-path tree, rooted at the central core node.

Router 5: Multicast forwarding table
  Source         Multicast group   In IF   Out IF
  (any sender)   224.1.1.1         W       N, E

Interface notation: N = North (up); S = South (down); W = West (left); E = East (right).

Shared trees (2)

Pros
- More efficient for multicast sessions with M >> 1 sources: the network creates a single delivery tree shared by all senders; only per-group state in routers, less control overhead.
- Tree (core to receivers) created and maintained independently of the presence and activity of the senders.

Cons
- Less optimal/efficient trees: possibly long paths and delays, depending on the relative location of the source, core, and receiver nodes. Issue: (optimal) core selection.
- Traffic concentrates near the core node. Danger of congestion.

Examples
- PIM-SM, CBT. Explicit join, control-driven (soft state, implicit leave/prune).

DVMRP

DVMRP: Distance Vector Multicast Routing Protocol

First IP multicast routing protocol (RFC 1075, 1988).

DVMRP at a glance
- Source-based multicast trees, data-driven tree setup.
- Distance vector unicast routing (DVR); reverse path multicast (RPM).
- Support for multicast overlays: tunnels between multicast-enabled routers through networks not supporting multicast. Used to create the Internet MBone (Multicast Backbone).

Routing info base
- DVMRP incorporates its own unicast DVR protocol, derived from RIP and adapted for RPM.
- Separate routing for the unicast and multicast services. E.g., routers learn the downstream neighbors on the multicast tree for any source address prefix.

Reverse Path Broadcast

Broadcast tree for source s
- The unicast route matching s indicates a router's parent in the broadcast tree for source s (child-to-parent pointer).

Reverse Path Forwarding (RPF)
- Broadcast/multicast packet from source s received on interface i:
  - If i is the interface used to forward a unicast packet to s, then forward the packet on all interfaces except i.
  - Otherwise, discard the packet.
- RPF still allows unnecessary packet copies.

Reverse Path Broadcast (RPB)
- Add parent-to-child pointers: a router learns which neighbors use it as next hop for each route, and forwards a packet only to these neighbors.

[Figure: broadcast from sender s (172.16.5.0/24); route entries matching the broadcast sender's address; unnecessary packet copies sent by RPF]
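The RPF check above is a one-line test against the unicast routing table; a minimal sketch (interface names are illustrative):

```python
def rpf_forward(src_prefix, in_iface, unicast_next_iface, all_ifaces):
    """Return the interfaces to forward on, or [] if the RPF check fails.

    unicast_next_iface maps a source prefix to the interface this router
    would use to send TO it, i.e., the child-to-parent pointer in the
    broadcast tree for that source.
    """
    if in_iface != unicast_next_iface[src_prefix]:
        return []   # not received from the parent: discard (duplicate/loop)
    return [i for i in all_ifaces if i != in_iface]

routes = {"172.16.5.0/24": "N"}   # unicast route: reach the source via N
print(rpf_forward("172.16.5.0/24", "N", routes, ["N", "S", "E", "W"]))  # ['S', 'E', 'W']
print(rpf_forward("172.16.5.0/24", "E", routes, ["N", "S", "E", "W"]))  # []
```

RPB would additionally restrict the returned list to neighbors whose own RPF parent is this router.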

Reverse Path Multicast (1)

Truncated RPB
- Uses IGMP to avoid unnecessary broadcast in leaf multi-access networks.

Reverse Path Multicast (RPM)
- Creates a multicast tree by pruning unnecessary branches from the (truncated) RPB broadcast tree.

Prune mechanism
- A router sends a Prune message to its upstream (parent) router if:
  - its connected networks do not contain group members, and
  - its neighbor routers either are not downstream (child) routers, or have sent Prune messages.
- Both routers maintain prune state.
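The prune condition can be written directly from the two bullets above (argument names are illustrative):

```python
def should_send_prune(has_local_members, downstream_state):
    """Decide whether to send a Prune upstream for a (source, group).

    downstream_state holds, for each neighbor, True if it is a downstream
    (child) router that has NOT pruned; a router prunes only when nothing
    below it still needs the data.
    """
    return not has_local_members and not any(downstream_state)

# Leaf router: no members (per IGMP), no unpruned children -> prune.
print(should_send_prune(False, []))       # True
# One child still on the tree -> keep receiving, do not prune.
print(should_send_prune(False, [True]))   # False
# Local members present -> never prune.
print(should_send_prune(True, []))        # False
```

The same predicate is re-evaluated when IGMP state changes or a child's Prune arrives, which is what lets prunes propagate upward hop by hop.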

Reverse Path Multicast (2)

Adapting the multicast tree to group membership changes
- Pruning can remove branches when members leave. A mechanism is necessary to add branches when members join.

Periodic broadcast & prune
- The multicast tree can be updated by periodically repeating the broadcast & prune process (a parent removes the prune state after some time).

Graft mechanism
- Faster tree extension: a router sends a Graft message, which cancels a previously sent Prune message.

DVMRP operation

- Data-driven: the multicast tree is set up when the source starts sending to the group (e.g., a sender in 172.16.5.0/24 sending to 224.1.1.1 and 224.5.6.7).
- Initially, RPB: all routers receive the packets, learn about the session (source, group), and record state for it.
- Next, RPM: unnecessary branches are pruned from the data paths (but the routers still maintain state).
- Tree update by periodic broadcast & prune, and by graft.

Router 2: Multicast forwarding cache
  Source prefix   Multicast groups       In IF      Out IF
  172.16.5.0/24   224.1.1.1, 224.5.6.7   N          E, SE, S(Prune)

Router 4: Multicast forwarding cache
  Source prefix   Multicast groups       In IF      Out IF
  172.16.5.0/24   224.1.1.1, 224.5.6.7   N(Prune)   S(Prune)

DVMRP conclusions

DVMRP & RPM shortcomings
- Several design solutions limit DVMRP scalability & efficiency:
- Tree setup and maintenance by periodic broadcast & prune (a consequence of source-based trees, and needed to enable fast grafts) can waste a lot of bandwidth, especially for a sparse group spread over a large internetwork (OK for dense groups).
- Per-group & per-source state in all routers, both on-tree & off-tree.
- Controversial feature: embedded DVR protocol.

New-generation RPM-based protocol: PIM-DM
- Protocol Independent Multicast: uses the existing unicast routing table, from any routing protocol. No embedded unicast routing.
- Dense Mode: intended for "dense" groups, concentrated in a network region (rather than thinly spread in a large network).
- Uses RPM as described on the previous slides (similar to DVMRP), but with no parent-to-child pointers, hence redundant transmissions in the broadcast phase.

PIM-SM

PIM: Protocol Independent Multicast
- Uses the existing unicast routing table, from any routing protocol. No embedded unicast routing.
- No single solution matches all application contexts well; hence two protocols, with different algorithms.

PIM-DM: Dense Mode
- Efficient multicast for "dense" (concentrated) groups.
- RPM, source-based trees, implicit join, data-driven setup.
- Similar to DVMRP, except that it relies on existing unicast routing; hence it does not avoid redundant transmissions in the broadcast phase.

PIM-SM: Sparse Mode
- Efficient multicast for sparsely distributed groups.
- Shared trees, explicit join, control-driven setup.
- After initially using the group's shared tree, members can set up source-based trees. Improved efficiency and scalability.

Rendezvous Points

Rendezvous Point (RP) router
- Core of the multicast shared tree: meeting point for the group's receivers & senders.
- At any moment, any router must be able to uniquely map a multicast address to an RP.
- Resilience & load balancing: a set of RPs.

RP discovery and mapping
- Several routers are configured as RP-candidate routers for a PIM-SM domain. They elect a Bootstrap Router (BSR).
- The BSR monitors the RP candidates and distributes a list of RP routers (RP-Set) to all the other routers in the domain.
- A hash function allows any router to uniquely map a multicast address to an RP-Set router.
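The group-to-RP mapping must be a pure function of the group address and the RP-Set, so that every router independently picks the same RP. PIM-SM specifies a particular hash (RFC 7761); the sketch below substitutes SHA-256 just to show the idea:

```python
import hashlib

def select_rp(group, rp_set):
    # Deterministic: same group + same RP-Set -> same RP on every router.
    def score(rp):
        return hashlib.sha256(f"{group}|{rp}".encode()).hexdigest()
    return max(rp_set, key=score)

rp_set = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical RP-Set from the BSR
rp = select_rp("224.1.1.1", rp_set)
print(rp in rp_set)                                            # True
print(rp == select_rp("224.1.1.1", list(reversed(rp_set))))    # order-independent: True
```

Hashing also spreads different groups across the RP-Set, giving the load balancing mentioned above without any coordination.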

Shared tree (RP-tree) setup

Designated Router (DR)
- Unique PIM-SM router responsible for multicast routing in a subnet.

Receiver join
- To join a group G, a receiver informs the local DR using IGMP.

DR join
- The DR adds (*,G) multicast tree state (group G, any source).
- The DR determines the group's RP, and sends a PIM-SM Join(*,G) packet towards the RP.
- At each router on the path, if (*,G) state does not exist, it is created, and the Join(*,G) is forwarded.
- Multicast tree state is soft state: refreshed by periodic Join messages.

Sending on the shared tree

Register encapsulation
- The sender's local DR encapsulates each multicast data packet in a PIM-SM Register packet, and unicasts it to the RP.
- The RP decapsulates the data packet and forwards it onto the RP-tree.
- Allows the RP to discover a source, but data delivery is inefficient.

Register-Stop
- The RP reacts to a Register packet by issuing a Join(S,G) towards S. At each router on the path, if (S,G) state does not exist, it is created, and the Join(S,G) is forwarded.
- When the (S,G) path is complete, the RP stops the encapsulation by sending (unicast) a PIM-SM Register-Stop packet to the sender's DR.

Source-specific trees

Shared vs. source-specific tree
- Routers may continue to receive data on the shared RP-tree. This is often inefficient: e.g., a long detour from the sender to receiver 1.
- PIM-SM allows routers to create a source-specific shortest-path tree.

Transfer to source-specific tree
- A receiver's DR sends a Join(S,G) towards S, which creates (S,G) multicast tree state at each hop.
- After receiving data on the (S,G) path, the DR sends a Prune(S,G) towards the RP, which removes S from G's shared tree at each hop.

[Figure: transfer to the source-specific shortest-path tree for receiver 1]

PIM-SM conclusions

Advantages
- Independence of the unicast routing protocol.
- Better scalability, especially for sparsely distributed groups:
  - Explicit join, control-driven tree setup: no data broadcast, no flooding of group membership information. Per-session state maintained only by on-tree routers.
  - Shared trees: routers maintain per-group state, instead of per-source & group state.
- Flexibility and performance: optional, selective transfer to source-specific trees (e.g., triggered by data rate).

Weaknesses
- Much more complex than PIM-DM.
- Control traffic overhead (periodic Joins) to maintain the soft multicast tree state.

MOSPF

MOSPF
- Natural multicast extension of the OSPF (Open Shortest Path First) link-state unicast routing protocol.

[Figure: OSPF hierarchical network structure: backbone area and areas 1-3 connected via ABRs]

MOSPF at a glance
- Source-based shortest-path multicast trees, data-driven setup.
- Multicast extensions for both intra-area and inter-area routing.
- Extends the OSPF topology database (per area) with info about the location of the groups' members.
- Extends the OSPF shortest-path computation (Dijkstra) to determine multicast forwarding: for each pair (source, destination group), each router computes the same shortest-path tree rooted at the source, finds its own position in the tree, and determines if and where to forward a multicast datagram.

OSPF review (single area)

Link state advertisements
- Each router maintains a link state table describing its links (attached networks & routers). It sends Link State Advertisements (LSAs) to all other routers (hop-by-hop flooding).
- From the LSAs, all routers build the same network topology database (a directed graph labeled with link costs).

Routing table computation
- Each router independently runs the same algorithm (Dijkstra) on the topology, to compute a shortest-path tree rooted at itself, to all destinations.
- A destination-based unicast routing table is derived from the tree.

[Figure: OSPF topology (link state) database for one OSPF area (routers R1-R7, networks N1-N7); shortest-path tree computed by router R1 using the Dijkstra algorithm]
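The shortest-path computation every router runs is plain Dijkstra over the topology database; a minimal sketch (the toy graph and its link costs are illustrative):

```python
import heapq

def dijkstra(graph, root):
    """Shortest-path tree rooted at `root`: returns (distance, parent) maps."""
    dist, parent = {root: 0}, {root: None}
    pq = [(0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue          # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, parent

# Toy topology database: adjacency lists of (neighbor, link cost).
graph = {
    "R1": [("R2", 1), ("R4", 4)],
    "R2": [("R1", 1), ("R4", 2), ("R5", 5)],
    "R4": [("R1", 4), ("R2", 2), ("R5", 1)],
    "R5": [("R2", 5), ("R4", 1)],
}
dist, parent = dijkstra(graph, "R1")
print(dist["R5"], parent["R5"])   # 4 R4
```

Because all routers hold the same database and run the same algorithm, they all derive consistent trees; for unicast each router roots the tree at itself, while MOSPF roots it at the multicast source.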

MOSPF: topology database

Local group database
- Records group membership in a router's directly attached networks. Created using IGMP.

Group-membership LSA
- Sent by a router to communicate its local group members to all other routers (local transit vertices that should remain on a group's tree). E.g., routers R3, R5, and R7 flood G-M LSAs for group m1.

Topology database extension for multicast
- A router or a transit network is labeled with the multicast groups announced in Group-membership LSAs.

MOSPF: multicast tree (intra-area)

Source-based multicast tree
- Shortest-path tree from the source to the group members (receivers).
- Example: source 172.16.5.1 (in 172.16.5.0/24, attached to N1) sends to m1 = 224.1.1.1.

Data-driven tree setup
- A router computes the tree and the multicast forwarding state when it receives the first multicast datagram (i.e., learns about the new session).

Multicast tree & state
- Routers independently determine the same shortest-path tree rooted at the source, using Dijkstra.
- The tree is pruned according to the group membership labels.
- Each router finds its position in the pruned tree, and derives its forwarding cache entry.

Router 2: Multicast forwarding cache
  Source       Multicast group   In IF   Out IF
  172.16.5.1   224.1.1.1         N       E, SE
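Pruning the common Dijkstra tree down to the group amounts to keeping only edges on a root-to-member path; a minimal sketch (the parent pointers and member set are illustrative):

```python
def prune_to_members(parent, members):
    """Keep only tree edges that lie on a path from the root to a member."""
    keep = set()
    for node in members:
        while parent.get(node) is not None:
            keep.add((parent[node], node))
            node = parent[node]
    return keep

# Parent pointers of a shortest-path tree rooted at R1 (toy example).
parent = {"R1": None, "R2": "R1", "R4": "R2", "R5": "R2", "N5": "R5"}
print(sorted(prune_to_members(parent, {"N5"})))
# [('R1', 'R2'), ('R2', 'R5'), ('R5', 'N5')]
```

A router's forwarding cache entry is then just the subset of kept edges incident to itself: the upstream edge gives the In IF, the downstream edges the Out IFs.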

MOSPF conclusions

Advantages
- OSPF is the interior routing protocol recommended by the IETF; MOSPF is the natural choice of multicast routing protocol in networks using OSPF.
- More efficient than DVMRP/RPM: no data broadcast.

Weaknesses
- Various features limit scalability and efficiency:
  - Dynamic (!) group membership advertised by flooding.
  - Multicast state per group & source, maintained in on-tree as well as off-tree routers.
  - Relatively complex computations to determine multicast forwarding: for each new multicast transmission (source, group), repeated when the group membership or topology changes.
- Few implementations?

Annex

IGMP v1/v2 - Group Management

IGMP v2 - Group Management

IGMPv2 enhancements:
- Election of a querier router (lowest IP address).
- Explicit leave (reduces leave latency).
