
Medium Access Control Sublayer (chapter 4)

The Medium Access Control (MAC) sublayer is the layer responsible for deciding who may use the
communication channel. It is part of the data link layer. It makes sure that stations are not
talking over each other.

In any broadcast network, the key issue is how to determine who gets to use the channel when
there is competition for it. When only a single channel is available (as is the case in a LAN), it is
hard to determine whose turn it is to send.

The channel allocation problem


The central theme of this chapter is how to allocate a single broadcast channel among
competing users. The channel may be anything that connects each user to all other users and
any user who makes full use of the channel interferes with other users who also wish to use the
channel.

Static channel allocation


The traditional way of allocating a single channel among N users is to divide the capacity into N
portions. One way to do this is to split the frequency spectrum of the channel so that each user
has its own frequency band. With only a few users, each with a steady data stream, this division
is simple and efficient; FM radio stations are an example of where this is done.
However, when the number of users is large and/or the traffic is bursty, this way of allocating falls
short: every user is permanently assigned a part of the bandwidth that it may not use, and that
part may also be too small to be useful.

Dynamic allocation
Some assumptions:
1. Independent traffic: traffic from node A is unrelated to that from node B
2. Single channel: there is only one channel/link over which communication takes place
3. Observable collisions: if two frames collide it can be detected
4. Continuous or slotted time
5. Carrier sense or no carrier sense: nodes can detect if the channel is in use

Multiple Access Protocols


A comparison of the various Multiple Access Protocols can be found under Appendix A

ALOHA Protocol
In pure ALOHA, users transmit frames whenever they have data; if a collision occurs, users retry
after a random delay.

Collisions in ALOHA
The vulnerable period is twice the duration of a transmission. A single transmission can overlap
with, and destroy, more than one other frame. The main limitation of ALOHA is that it does not
scale: the more devices, the more collisions.

Slotted ALOHA
Packets can only be transmitted in certain time slots so multiple collisions can no longer occur.
This method is twice as efficient as normal ALOHA. The best utilization that can be achieved is
1
e
.

In the accompanying figure, the messages marked in red have to be re-transmitted.
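To make these utilization figures concrete, here is a minimal sketch (not part of the original notes) that evaluates the standard throughput formulas S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA at a few offered loads G:

```python
import math

def pure_aloha_throughput(G):
    """Expected successful transmissions per frame time at offered load G."""
    return G * math.exp(-2 * G)   # vulnerable period = 2 frame times

def slotted_aloha_throughput(G):
    return G * math.exp(-G)       # vulnerable period = 1 slot

# The maxima match the figures above: about 0.18 for pure ALOHA (at G = 0.5)
# and about 0.37 = 1/e for slotted ALOHA (at G = 1), i.e. twice as efficient.
for G in (0.25, 0.5, 1.0, 2.0):
    print(f"G={G:4}: pure={pure_aloha_throughput(G):.3f} "
          f"slotted={slotted_aloha_throughput(G):.3f}")
```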

Carrier Sense Multiple Access Protocols (CSMA)


Senders detect (sense) whether the channel is in use before transmitting.

Protocols that apply CSMA:


1. 1-persistent: Wait until the channel is idle, then send immediately. This often ends in
collisions, because several stations start sending as soon as the channel becomes available
2. Non-persistent: If the channel is busy, wait a random amount of time and try again. This
results in larger delays before sending, but higher channel utilization
3. p-persistent: Keep waiting until the channel is free, then send with probability p; otherwise
wait for the next slot and repeat

We see that being greedy gives good performance under low load. Being generous gives
good performance under high load.
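As a rough illustration of the three persistence strategies, the sketch below (a hypothetical helper, not course material) shows what a single station decides to do in one slot under each variant:

```python
import random

def csma_decision(channel_busy, scheme, p=0.5):
    """Return what a station does in the current slot under each CSMA variant.
    Illustrative sketch only, not a full simulator."""
    if scheme == "1-persistent":
        # Keep sensing; transmit the instant the channel goes idle.
        return "wait" if channel_busy else "transmit"
    if scheme == "nonpersistent":
        # If busy, back off for a random time instead of watching the channel.
        return "back off (random delay)" if channel_busy else "transmit"
    if scheme == "p-persistent":
        if channel_busy:
            return "wait"
        # Channel idle: transmit with probability p, else defer to the next slot.
        return "transmit" if random.random() < p else "defer to next slot"
    raise ValueError(scheme)

for scheme in ("1-persistent", "nonpersistent", "p-persistent"):
    print(scheme, "->", csma_decision(channel_busy=False, scheme=scheme))
```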

CSMA with collision detection (CSMA/CD)


When a collision is detected, we abort the transmission (similar to slotted ALOHA). We then have
a contention period to determine when it is safe to send new data; we try to keep this period as
short as possible to increase throughput. After that there is an idle period during which all
stations are quiet (because they have nothing to send). Illustrated here:
Collision free protocols
The basic bit-map protocol
The contention period is split into N slots (where N is the number of nodes). Each node sends a 1
bit during its own slot of the contention period if it wants to send. After that, the nodes that
reserved a slot transmit their frames in numerical order, and then the whole cycle repeats.
This gives an overhead of one reservation bit per device on the network per cycle.
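A minimal sketch of one reservation cycle of the bit-map protocol might look like this (station numbering and the helper name are made up for illustration):

```python
def bitmap_round(wants_to_send):
    """One round of the basic bit-map protocol.
    wants_to_send: list of booleans, one per station (station i owns slot i)."""
    # Contention period: station i announces itself in reservation slot i.
    reservation_bits = [1 if w else 0 for w in wants_to_send]
    # Data period: stations that set their bit transmit in numerical order.
    transmission_order = [i for i, bit in enumerate(reservation_bits) if bit]
    return reservation_bits, transmission_order

bits, order = bitmap_round([True, False, False, True, True, False, False, True])
print("reservation slots:", bits)            # [1, 0, 0, 1, 1, 0, 0, 1]
print("stations transmit in order:", order)  # [0, 3, 4, 7]
```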

Token ring
Stations are connected in a ring, which does not need to be physical; the token can also be
passed around over a shared wire. When a station receives the token, it may send a packet and
then passes the token on to the next station. Performance is similar to the bit-map protocol.

Binary countdown
Binary countdown protocol is used to overcome the overhead 1 bit per binary station. In binary
countdown, binary station addresses are used. A station wanting to use the channel broadcast
its address as binary bit string starting with the high order bit. All addresses are assumed of the
same length. Here, we will see the example to illustrate the working of the binary countdown.
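Below is a small sketch of one arbitration round, assuming 4-bit station addresses and a channel that ORs together whatever the remaining contenders transmit:

```python
def binary_countdown(addresses, width=4):
    """Arbitrate one round of binary countdown between competing stations.
    Stations broadcast their addresses bit by bit, high-order bit first; the
    channel ORs the bits together. A station that sent a 0 but observes a 1
    gives up. The station with the highest address wins the channel."""
    contenders = set(addresses)
    for bit_pos in range(width - 1, -1, -1):
        bits = {addr: (addr >> bit_pos) & 1 for addr in contenders}
        channel = max(bits.values())        # wired-OR of all transmitted bits
        contenders = {a for a in contenders if bits[a] == channel}
    (winner,) = contenders
    return winner

# Hypothetical competitors 2 (0010), 4 (0100), 9 (1001) and 10 (1010):
# 10 wins because it has the highest address.
print(binary_countdown([2, 4, 9, 10]))  # -> 10
```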
Wireless LAN
We will look at 802.11 (WiFi) specifically. Some notable properties:
• Some stations might not be able to talk to others (hidden terminals), because of the
limited range.
• Nodes cannot detect collisions while sending (can't talk and listen at the same time).

Hidden terminal problem: a station may be unable to detect a potential competitor for the
channel because that competitor is out of its range.
Exposed terminal problem: two transmitters are within range of each other, but their intended
receivers are not. Each transmitter then concludes that it would interfere with the other's
transmission and waits, even though in reality the two transmissions would not interfere.

Multiple Access with Collision Avoidance (MACA)


MACA addresses both the hidden terminal problem and the exposed terminal problem. The basic
idea is for the sender to stimulate the receiver into outputting a short frame, so stations near the
receiver can detect this transmission and avoid transmitting for the duration of the upcoming
(large) data frame. This technique is used instead of carrier sensing.

It works the following way:


• The sender (A) first sends a short Request To Send (RTS) frame. This is a 30-byte frame
containing the length of the data frame that will follow
• B replies with a Clear To Send (CTS) frame. If A receives the CTS, it begins transmission.

Some properties:
• Any station that hears RTS should remain silent to avoid conflict with CTS.
• Any station that hears CTS, must remain silent until the entire transmission is done.
• Frames contain frame lengths, so that anyone listening knows how long it will take
It can still go wrong, for example if two stations send an RTS at the same time. In that case, each
waits a random amount of time and tries again.

We'll use this image as illustration.
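The two rules above can be summarised in a tiny decision sketch; the stations C and D and the helper function are hypothetical, chosen to match the usual textbook figure (C hears only A's RTS, D hears only B's CTS):

```python
def maca_behaviour(heard_rts, heard_cts):
    """What a bystander station should do during a MACA exchange."""
    if heard_cts:
        # In range of the receiver: stay silent for the whole data frame
        # (its length was announced), or we would cause a collision at B.
        return "silent until the data frame is finished"
    if heard_rts:
        # In range of the sender only: stay quiet long enough for the CTS to
        # come back, after that we may transmit (exposed terminal case).
        return "silent until the CTS slot has passed, then free to send"
    return "free to send"

print("C:", maca_behaviour(heard_rts=True,  heard_cts=False))
print("D:", maca_behaviour(heard_rts=False, heard_cts=True))
```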

Ethernet
Classic Ethernet

Ethernet frame format


Preamble
The preamble contains the bit pattern 10101010 seven times and ends with the pattern
10101011. That last pattern is called the start of the frame (for 802.3) and it tells the receiver
when to expect the frame.

Destination address / source address


The first bit of the destination address is 0 for a single device and 1 for a group of devices. An
address of all 1s is the broadcast address. Addresses are unique, 48 bits long, and assigned by
the IEEE: the first 3 bytes identify the manufacturer (the OUI, Organizationally Unique Identifier)
and the remaining 3 bytes are chosen by that manufacturer.

Type/length field
Next comes the type or length field, depending on whether this is Ethernet or IEEE 802.3.

Ethernet uses the type field to tell the receiver which protocol is contained in the data. Multiple
network layer protocols can be in use on the same machine at the same time, so the receiver
needs to know which protocol should be handed the frame; that is what the type field specifies.

IEEE 802.3 provides the length of the data instead, because looking inside the frame to determine
its length was considered a layering violation. The protocol carried in the data is then identified
by an additional Logical Link Control (LLC) header. The current rule is that any value less than
0x600 (1536) is interpreted as a length, while any value of 0x600 or greater is interpreted as a type.
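A one-line check captures this rule; the 0x0800 (IPv4) example value is just an illustration:

```python
def interpret_type_length(value):
    """Distinguish an Ethernet type from an IEEE 802.3 length field.
    Values below 0x600 (1536) are lengths; 0x600 and above are types."""
    if value < 0x600:
        return ("length", value)     # IEEE 802.3: payload length in bytes
    return ("type", hex(value))      # Ethernet: e.g. 0x0800 = IPv4

print(interpret_type_length(0x002E))   # ('length', 46)
print(interpret_type_length(0x0800))   # ('type', '0x800')
```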

Data
The data field has a maximum length of 1500 bytes. This limit was chosen because the receiver
has to hold the entire frame in memory, and memory was quite expensive at the time.

Padding
Ethernet requires a minimum frame length of 64 bytes, and the padding field is used to pad the
frame out whenever the data is too short to reach that minimum.

This is done to distinguish valid frames from garbage (garbage can appear on the cable
whenever collisions occur). More importantly, it prevents a station from completing the
transmission of a short frame before its first bit has reached the far end of the cable, where it
may collide with another frame. Otherwise the sender could miss the collision and think the
data was sent successfully when in fact it wasn't. We'll illustrate the problem with the figure below

• At time 0, station A, at one end of the network, sends off a frame. Let us call the
propagation time for this frame to reach the other end τ
• Just before the frame gets to the other end (i.e., at time τ − ϵ), the most distant station, B,
starts transmitting
• When B detects that it is receiving more power than it is putting out, it knows that a
collision has occurred, so it aborts its transmission and generates a 48-bit noise burst to
warn all other stations. In other words, it jams the ether to make sure the sender does not
miss the collision.
• At about time 2τ , the sender sees the noise burst and aborts its transmission, too. It then
waits a random time before trying again.

If a station tries to transmit a very short frame, it is conceivable that a collision will occur, but the
transmission will have completed before the noise burst gets back to the station at 2τ . The
sender will then incorrectly conclude that the frame was successfully sent. To prevent this
situation from occurring, all frames must take more than 2τ to send so that the transmission is
still taking place when the noise burst gets back to the sender.
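As a worked example, plugging in the classic 10 Mbps Ethernet parameters gives the 64-byte minimum mentioned above; the round-trip figure below is the usual textbook value, not something defined in these notes:

```python
def min_frame_bits(bit_rate_bps, round_trip_seconds):
    """A frame must still be in transmission when a collision report returns,
    so it must be at least bit_rate * (2 * tau) bits long."""
    return bit_rate_bps * round_trip_seconds

# Classic 10 Mbps Ethernet: the worst-case round trip over 2500 m of cable
# plus four repeaters is taken to be roughly 50 us, rounded up to 51.2 us.
bits = round(min_frame_bits(10_000_000, 51.2e-6))
print(bits, "bits =", bits // 8, "bytes")   # 512 bits = 64 bytes
```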

Checksum
The final field is a 32-bit CRC checksum, used only for error detection: frames with an incorrect
checksum are dropped.

CSMA/CD with BEB


After i collisions, a station waits a random number of slots between 0 and 2^i − 1 before retrying.
This is called binary exponential back-off (BEB).
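A sketch of the back-off computation; the slot time and the caps at 10 and 16 collisions follow classic Ethernet, but treat the exact constants as assumptions for illustration:

```python
import random

def beb_backoff_slots(collision_count, slot_time=51.2e-6):
    """Binary exponential back-off (sketch):
    after i collisions, wait a random number of slots in [0, 2**i - 1]."""
    # Classic Ethernet freezes the interval at 1023 slots after 10 collisions
    # (and gives up entirely after 16); the cap is included here for realism.
    i = min(collision_count, 10)
    slots = random.randint(0, 2**i - 1)
    return slots, slots * slot_time

for i in (1, 2, 3, 10):
    slots, delay = beb_backoff_slots(i)
    print(f"after {i} collision(s): wait {slots} slots ({delay * 1e6:.1f} us)")
```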

Channel utilization: being greedy is good with low load and being generous gives good
performance under high load

Switched Ethernet
Hub
Ethernet slowly moved away from a single-cable architecture towards one where each computer
has its own dedicated cable running to a central hub. A hub simply connects all the cables
together electrically, as if they were soldered together. Because twisted copper pairs were
already the norm for telephone wiring, the same type of cable was used for Ethernet.

A disadvantage of the hub is that, because it is electrically equivalent to one big Ethernet cable,
the maximum capacity does not increase. As technology advanced, higher capacity was needed.

Switched
The solution to this increased load was switched Ethernet. The heart of this system is a switch,
which contains a high-speed backplane connecting all the ports. From the outside a switch looks
similar to a hub and shares the advantage of being easy to use.

Inside, however, a switch differs greatly from a hub. Switches only output frames to the ports
for which those frames are destined. When a switch port receives an Ethernet frame from a
station, the switch checks for which address the frame is destined and sends it out of the
corresponding port only.

Advantages of switches:
• They increase the speed of the network
• Every connection is its own collision domain, so collisions no longer occur
• If the cable is full duplex, a station and the switch can send a frame at the same time without
problems; if it is half duplex, a regular CSMA protocol is used
• A switch can forward multiple frames simultaneously. However, since two frames might be
destined for the same output port at the same time, the switch has to buffer frames
• Traffic is isolated, which makes it harder to spy on the network (though encryption is still
the preferred protection)

802.11 (WiFi)
Architecture

Can be used in 2 different ways:


• Infrastructure mode, where the client uses an access point to send its packets. Several
access points may be connected together, typically by a wired network called a
distribution system
• Ad-hoc mode, a collection of computers that are associated and send frames to each
other, this is barely used

Protocols
The physical layer corresponds fairly well to the OSI physical layer, but the data link layer in all
the 802 protocols is split into two or more sublayers. In 802.11 the MAC sublayer determines
how the channel is allocated. Above it sits the Logical Link Control (LLC) sublayer, whose job is
to hide (abstract away) the differences between the various 802 variants. These days LLC is a
glue layer that identifies the protocol carried within an 802.11 frame.

MAC sub-layer
Radios are nearly always half duplex: the received signal can easily be a million times weaker
than the transmitted signal, so a radio cannot send and listen at the same time. Stations
therefore cannot detect collisions while sending, so 802.11 relies on acknowledgements to
determine whether a frame arrived; if no ACK is received, the whole frame is assumed lost.
This mode of operation is called DCF (Distributed Coordination Function). See this example:
CSMA/CA
We use CSMA/CA (which is similar to Ethernet's CSMA/CD), with an exponential back-off and
collision avoidance.

Physical channel sensing


• Sense if the channel is in use
• If so wait for idle

Virtual channel Sensing


• Frames carry a Network Allocation Vector (NAV) which says how long the sequence that this
frame is part of will take to complete. The NAV mechanism keeps other stations quiet only
until the next acknowledgement
• Wait for the end of the transmission
• Optionally, RTS/CTS is used to prevent hidden terminals from sending frames at the same
time.
• See the following image (D is out of range of the RTS, Request To Send):

CSMA/CA with physical and virtual sensing is at the core of 802.11 however, there are several
other mechanisms that have been developed to go with it. Each of these mechanisms was
driven by the needs of real operation, so we will look at them briefly.

Unreliability
In contrast to wired networks, wireless networks are unreliable. The use of acknowledgements
and retransmissions is of little help if the probability of getting a frame through is small in the
first place.

The main strategy used to increase the chance of a successful transmission is to adapt the
transmission rate. If too many frames are lost, a station can lower its transmission rate; on the
other hand, if frames are getting through with little to no loss, the transmission rate can be
increased.

Another strategy to improve the chance of a frame getting through undamaged is to send
shorter frames. The probability of a short frame being damaged is much lower than that of a
big one, so it is sometimes helpful to send more small frames rather than fewer large ones.
This can be implemented by reducing the maximum size of the message that is accepted from
the network layer.

Alternatively, 802.11 allows frames to be split up into smaller pieces called fragments, each with its
own checksum. The fragment size is not fixed and can be set by the AP depending on
conditions. The fragments are individually numbered and acknowledged using a stop-and-wait
protocol (i.e., the sender may not transmit fragment k + 1 until it has received the
acknowledgement for fragment k). Once the channel has been acquired, multiple fragments
are sent as a burst. They go one after the other with an acknowledgement (and possibly
retransmissions) in between, until either the whole frame has been successfully sent or the
transmission time reaches the maximum allowed.

Power
Battery life is always an issue with mobile wireless devices. The basic mechanism for saving
power builds upon beacon frames. These are periodic (every ~100 ms) broadcasts by the AP.
The frames advertise the presence of the AP to clients and carry some system parameters.

Clients can set a power-management bit in frames that they send to the AP to tell it they are
entering power-save mode. In this mode, the client can doze and the AP will buffer traffic
intended for it. To check for incoming traffic, the client wakes up for every beacon, and checks
a traffic map that is sent as part of the beacon. This map tells the client if there is buffered
traffic. If so, the client sends a poll message to the AP, which then sends the buffered traffic.
The client can then go back to sleep until the next beacon is sent.

Wifi frames
802.11 uses 3 different classes of frames: data, control and management. Each of these has a
header with a variety of fields used within the MAC sublayer.
Data frame:
• Frame control
• Version: Version number of the protocol, currently 00
• Type: Data, control or management. For regular data frames 10
• Subtype: E.g. RTS or CTS. For regular data frames 0000
• To DS/From DS: Whether it is going to or from the networks connected to the AP
• More frag: More fragments will follow
• Retry: This is a retry from a previous try
• Pwr mgt: Indicates that sender is going into power management mode
• More data: More frames will follow
• Protected: Data is encrypted
• Order: Tells the receiver that the higher layer expects the frames strictly in order
• Duration: How long the frame and its acknowledgement will occupy the channel (in microseconds)
• Addresses: in IEEE 802 format (MAC address)
• Address 1: The direct recipient (the AP, often)
• Address 2: The sender
• Address 3: The final recipient
• Sequence: Used to detect duplicates: 4 bits for the fragment number and a 12-bit sequence
number that is incremented for each new frame
• Data: The next protocol’s data. In the LLC format to detect the protocol.
• Check sequence: 32-bit CRC
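As an illustration of how compact the Frame Control field is, here is a sketch that packs the sub-fields listed above into its 16 bits (the bit ordering follows the commonly described 802.11 layout, and the helper name is made up):

```python
def build_frame_control(ftype, subtype, to_ds=0, from_ds=0, more_frag=0,
                        retry=0, pwr_mgt=0, more_data=0, protected=0, order=0):
    """Pack the 16-bit 802.11 Frame Control field (sketch):
    2 bits version | 2 bits type | 4 bits subtype | 8 one-bit flags."""
    version = 0                     # currently always 00
    return (version
            | (ftype     << 2)      # 10 = data, 01 = control, 00 = management
            | (subtype   << 4)
            | (to_ds     << 8)
            | (from_ds   << 9)
            | (more_frag << 10)
            | (retry     << 11)
            | (pwr_mgt   << 12)
            | (more_data << 13)
            | (protected << 14)
            | (order     << 15))

# A regular data frame (type 10, subtype 0000) sent towards the AP.
print(f"{build_frame_control(ftype=0b10, subtype=0b0000, to_ds=1):016b}")
```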

Management frames are similar to data, but contain extra data that depends on the subtype.
Control frames are short and only have 1 address and no data. They do have frame control,
duration and check sequence fields.

Services
The 802.11 standard defines the services that the clients, the access points, and the network
connecting them must provide in order to be a conformant wireless LAN. These services cluster
into several groups:
• Association service: Used by stations to connect to an AP
• Reassociation service: Lets a station change its preferred AP
• Disassociation service: Station and AP can end the association
• Authentication service: Proof that a station is allowed to connect, using WPA2

• Distribution service: Determines how to route packets that arrive at the AP


• Integration service: Handles any translation to send a packet over a different protocol
• Transmit power control service: Gives station information that it needs to meet
regulatory requirements on the power limit
• Dynamic frequency selection: Makes stations avoid frequencies in the 5 GHz band that are
already in use (for example by radar)

Data Link Layer Switching


Ethernet switches bridge multiple networks together.

Learning bridges
A bridge accepts every frame that it receives and must decide onto which port(s) the frame
should be forwarded.

It uses backward learning:


• Build a hash table mapping station (MAC) addresses to ports
• Listen on each port: by looking at the source addresses of incoming frames, the bridge learns
which stations are reachable via which port
• Store the time of last activity with each entry, and purge entries that have been inactive for a
few minutes
• If a destination address is not known, the frame is sent everywhere

The logic:
• If the destination port is the same as the port the frame arrived on, discard the frame
• If the destination port is different from the port the frame arrived on, forward the frame there
• If the destination is unknown, send the frame out on every other port (flooding)

We can start forwarding as soon as the destination address has come in; we don't need to wait
for the rest of the frame. This is called cut-through switching.
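A toy version of backward learning with aging and flooding could look like this (the class name, port numbering and the 300-second age-out value are assumptions for illustration):

```python
import time

class LearningBridge:
    """Backward-learning bridge: maps source MAC addresses to the port they
    were last seen on, and ages out stale entries (minimal sketch)."""
    def __init__(self, num_ports, max_age=300):
        self.num_ports = num_ports
        self.max_age = max_age          # seconds before an entry is purged
        self.table = {}                 # MAC address -> (port, last_seen)

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: the source is reachable via the port the frame arrived on.
        self.table[src_mac] = (in_port, time.time())
        entry = self.table.get(dst_mac)
        if entry and time.time() - entry[1] < self.max_age:
            out_port, _ = entry
            # Same port as the source: discard; otherwise forward it there.
            return [] if out_port == in_port else [out_port]
        # Unknown destination: flood on every port except the incoming one.
        return [p for p in range(self.num_ports) if p != in_port]

bridge = LearningBridge(num_ports=4)
print(bridge.handle_frame("aa:aa", "bb:bb", in_port=0))  # unknown -> flood [1, 2, 3]
print(bridge.handle_frame("bb:bb", "aa:aa", in_port=2))  # learned -> [0]
```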

Spanning tree bridges


We can create loops in our network to increase redundancy, but this will cause problems with
the previous protocol.

We will construct a spanning tree through the network.


• First, we need a root for the tree. The bridge with the lowest MAC address becomes the root
(this works because MAC addresses are unique)
• Next, a shortest path from each node to the root is constructed. Each node finds out its
distance to the root and chooses the connection that gives the smallest distance
• The algorithm keeps running while the network is up, so that it can adjust to topology
changes
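A much-simplified sketch of the idea (root election plus shortest paths via breadth-first search); real spanning-tree bridges exchange configuration messages rather than seeing the whole topology, and the example topology below is made up:

```python
from collections import deque

def spanning_tree(links, bridge_ids):
    """Simplified sketch: the bridge with the lowest ID becomes the root, and
    every other bridge keeps the neighbour that gives it the shortest path to
    the root (ties broken by lowest neighbour ID)."""
    root = min(bridge_ids)
    dist, parent = {root: 0}, {root: None}
    queue = deque([root])
    while queue:                        # breadth-first search from the root
        node = queue.popleft()
        for neighbour in sorted(links[node]):
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                parent[neighbour] = node
                queue.append(neighbour)
    return root, parent

# Hypothetical topology with a loop: B1-B2, B2-B3, B3-B1, B3-B4.
links = {"B1": ["B2", "B3"], "B2": ["B1", "B3"],
         "B3": ["B1", "B2", "B4"], "B4": ["B3"]}
root, tree = spanning_tree(links, links.keys())
print(root, tree)   # B1 is the root; the B2-B3 link is left out of the tree
```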

Repeaters, hubs, bridges, switches, routers, and gateways


Repeaters: Get signal from one side, boost and clean it up, then re-send it on the other side.
Used to extend maximum cable length.
Hubs: Repeaters with multiple inputs and outputs.

Bridge: Connects two or more LANs, creating separate collision domains


Switches: Modern bridges. Pretty much the same thing.
Routers: The frame header is stripped off and the packet inside is passed to the routing software.
A router does not look at MAC addresses.

Transport gateways: Used to connect two computers that use different connection-oriented
protocols.
Application gateways: Understands the pure data, and can change it into a different format. For
example, SMS to email.

Virtual LANs
Why does it matter who is on a LAN?
• Security, because some packets may only be intended for certain computers. Letting
some servers not be accessible from outside a LAN is a good thing.
• Load, some networks are more loaded than others, in which case it might be useful to
separate them.
• Broadcast traffic, as the network size increases, packets that ask questions like “Who
owns IP address x?” increase.

Creating a physical LAN for every logical structure is not always possible. Solution: Virtual LANs.
• Configure the switches so that they know which computer belongs to which VLAN
• Only allow communication within the same "color" (VLANs are often named after colors).
• Label each port with the colors that are reachable via that port; broadcast traffic is sent only
to the ports labeled with the matching color.

Packets need to indicate which VLAN they belong to, so we keep track of that in the Ethernet
header. A few fields are added before the type/length field:
• VLAN protocol id: always the same value (0x8100), and is interpreted as a type (2 bytes)
• Pri: priority; has nothing to do with VLANs, but since the header is being changed anyway it
was added (3 bits)
• CFI: a somewhat controversial bit that is no longer really used (1 bit)
• VLAN identifier: specifies to which VLAN the frame belongs; 0x000 means no VLAN (12 bits)
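A small sketch that packs these fields into the 4-byte tag (field widths as listed above; the example VLAN id and priority are arbitrary):

```python
import struct

def build_vlan_tag(vlan_id, priority=0, cfi=0):
    """Build the 4-byte 802.1Q tag inserted into the Ethernet header (sketch;
    vlan_id must fit in 12 bits)."""
    tpid = 0x8100                                    # VLAN protocol id, read as a type
    tci = (priority << 13) | (cfi << 12) | (vlan_id & 0x0FFF)
    return struct.pack("!HH", tpid, tci)             # network byte order

tag = build_vlan_tag(vlan_id=42, priority=3)
print(tag.hex())   # 8100602a
```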

Bridges need to be aware of VLANs to support them, though legacy bridges are still supported.
If the bridges are not VLAN aware, we just always send the packets there:

Appendix A
