
Jumbo frame

From Wikipedia, the free encyclopedia

In computer networking, jumbo frames are Ethernet frames with more than 1500 bytes of payload. Conventionally, jumbo frames can carry up to 9000 bytes of payload, but variations exist and some care must be taken when using the term. Many Gigabit Ethernet switches and Gigabit Ethernet network interface cards support jumbo frames, but all Fast Ethernet switches and Fast Ethernet network interface cards support only standard-sized frames. Most national research and education networks (such as Internet2/NLR, ESnet, GÉANT, and AARNet) support jumbo frames, but most commercial Internet service providers do not.[citation needed]

Contents

1 Inception
2 Adoption
3 Super jumbo frames
4 See also
5 References
6 External links

Inception
The original 1500-byte payload size for Ethernet frames was chosen because of the high error rates and low speeds of early communication links: if a corrupted packet is received, only 1500 bytes (plus 18 bytes for the frame header and other overhead) have to be re-sent to correct the error. However, each frame must be processed by the network hardware and software, so if the frame size is increased, the same amount of data can be transferred with less effort. This reduces CPU utilization (mostly due to interrupt reduction) and increases throughput by allowing the system to concentrate on the data in the frames instead of the frames around the data.

At the sender, a similar reduction in CPU utilization can be achieved by using TCP segmentation offloading, although this does not reduce the receiver's CPU load. Interrupt-coalescing Ethernet chipsets, however, provide most of the same gain for the receiver, work without special consideration, and do not require all stations to support jumbo frames. Zero-copy NICs and device drivers, when combined with interrupt coalescing, can provide effectively all the gains of jumbo frames without the re-send costs and without requiring any changes to other stations on the network.
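As a rough illustration of the per-frame savings, the sketch below compares how many frames (and hence, absent coalescing, roughly how many interrupts) are needed to move the same amount of data at the standard and jumbo payload sizes; the 1 GB transfer size is an illustrative assumption, and the 18-byte figure is the per-frame header and trailer overhead mentioned above.

```python
# Illustrative comparison of per-frame overhead for standard vs. jumbo
# payloads. The 1 GB transfer size is an assumption for the example; the
# 18 bytes are the Ethernet header/FCS overhead cited in the text.
import math

TRANSFER_BYTES = 1_000_000_000   # assumed bulk transfer of 1 GB
FRAME_OVERHEAD = 18              # Ethernet header + FCS per frame

for payload in (1500, 9000):
    frames = math.ceil(TRANSFER_BYTES / payload)
    overhead = frames * FRAME_OVERHEAD
    print(f"payload {payload:>5} B: {frames:>7} frames, "
          f"{overhead / 1e6:.1f} MB of framing overhead")
```

With these assumed numbers, the 9000-byte payload needs roughly one sixth as many frames as the 1500-byte payload, which is where the interrupt and header-processing reduction comes from.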

Jumbo frames gained initial prominence when Alteon WebSystems introduced them in their ACEnic Gigabit Ethernet adapters. Many other vendors also adopted the size; however, jumbo frames never became part of the official IEEE 802.3 Ethernet standard.[1]

Adoption
The IEEE 802 standards committee does not recognize jumbo frames, as doing so would break interoperability with existing Ethernet equipment and with other 802 protocols, including 802.5 Token Ring and 802.11 wireless LAN. The presence of jumbo frames may also have an adverse effect on network latency, especially on low-bandwidth links.

The use of 9000 bytes as the preferred size for jumbo frames arose from discussions within the Joint Engineering Team of Internet2 and the U.S. federal government networks. Their recommendation has been adopted by all other national research and education networks, and in order to meet this mandatory purchasing criterion, manufacturers have in turn adopted 9000 bytes as the conventional jumbo frame size.

Internet Protocol subnetworks require that all hosts in a subnet have an identical MTU. As a result, interfaces using the standard frame size and interfaces using the jumbo frame size should not be in the same subnet. To reduce interoperability issues, network interface cards capable of jumbo frames require explicit configuration to use them.

IETF solutions for adopting jumbo frames avoid the reduction in data integrity by using the Castagnoli CRC polynomial, as implemented in the SCTP transport (RFC 4960) and in iSCSI (RFC 3720). Selection of this polynomial was based on work documented in the paper "32-Bit Cyclic Redundancy Codes for Internet Applications"[2]. The Castagnoli polynomial 0x11EDC6F41 achieves a Hamming distance of HD=6 beyond one Ethernet MTU (to a data word length of 16,360 bits) and HD=4 to 114,663 bits, more than nine times the length of an Ethernet MTU. This gives two additional bits of error-detection ability at MTU-sized data words compared with the standard Ethernet CRC polynomial, while not sacrificing HD=4 capability for data word sizes up to and beyond 72 kbits.

By using a CRC rather than the simple additive checksums of the UDP and TCP transports, errors generated inside NICs can also be detected. Both TCP and UDP have proven ineffective at detecting bus-specific bit errors, since with simple summation such errors tend to be self-cancelling. Testing that led to the adoption of RFC 3309 compiled evidence, based on simulated error injection against real data, showing that as much as 2% of these errors were going undetected.

One of the major impediments to the adoption of jumbo frames has been the inability to upgrade the existing Ethernet infrastructure that would be needed to avoid a reduction in error-detection ability. CRC calculations done in software have always been slower than the simple additive checksums used by TCP and UDP. To overcome this performance penalty, Intel now offers 1 Gbit/s (82576) and 10 Gbit/s (X520) NICs that offload SCTP checksum calculations, and Core i7 processors support a CRC32 instruction as part of the SSE4.2 instruction set.
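To make the "self-cancelling" weakness concrete, the following sketch (an illustrative assumption, not code from any of the cited RFCs or NICs) shows a pair of byte errors that the one's-complement sum used by TCP and UDP cannot see, while a bitwise CRC-32C using the Castagnoli polynomial (reflected form 0x82F63B78) detects them:

```python
# Minimal sketch: a pair of self-cancelling byte errors is invisible to the
# 16-bit one's-complement Internet checksum but caught by CRC-32C.
# The payload contents are arbitrary test data chosen for the example.

def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum over 16-bit words, as used by TCP/UDP."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else crc >> 1 ^ crc >> 1)
            # equivalent to: shift right, XOR polynomial if low bit was set
    return crc ^ 0xFFFFFFFF

good = bytearray(b"\x10" * 64)
good[3] = 0x11                 # so a single bit flip here lowers the word by 1
bad = bytearray(good)
bad[3] ^= 0x01                 # word sum drops by 1...
bad[17] ^= 0x01                # ...word sum rises by 1: the errors cancel

print(internet_checksum(bytes(good)) == internet_checksum(bytes(bad)))  # True: missed
print(crc32c(bytes(good)) == crc32c(bytes(bad)))                        # False: caught
```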

Support for the Castagnoli CRC polynomial in a general-purpose transport designed to handle data chunks (SCTP), and in a TCP-based transport designed to carry SCSI data (iSCSI), provides improved error detection despite the use of jumbo frames, where the increase in the Ethernet MTU would otherwise have resulted in a significant reduction in error detection.

Super jumbo frames


Super jumbo frames (SJFs) are generally considered to be frames with a payload in excess of the tacitly accepted jumbo frame size of 9000 bytes. The relative scalability of network data throughput as a function of packet transfer rate is related in a complex manner[3] to the payload size per packet. Generally, as the line bit rate increases, the packet payload size should increase in direct proportion to maintain equivalent timing parameters. This, however, implies the covariant scaling of the numerous intermediating logic circuits along the network path to accommodate the required maximum transmission unit (MTU).

As it has been a relatively difficult and somewhat lengthy process to increase the path MTU of high-performance national research and education networks from 1518 bytes to around 9000 bytes, a subsequent increase, possibly to 64,000 bytes for example, may take some time. The main factor involved in an increase of the maximum segment size (MSS) is an increase in the available memory buffer size in all of the intervening persistence mechanisms along the path. The main benefit is the reduction of the packet rate, both at end nodes and at intermediate transit nodes. As the nodes generally use reciprocating logic to handle the packets, the number of machine cycles spent parsing packet headers decreases as the average MSS per packet increases. This relationship becomes increasingly important as average network line bit rates rise to 10 gigabits per second and above.
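A back-of-the-envelope sketch of the packet-rate reduction follows; the 10 Gbit/s line rate is an assumption for the example, and framing overhead is ignored to keep the arithmetic simple.

```python
# Illustrative packet-rate comparison at an assumed 10 Gbit/s line rate,
# ignoring per-frame framing overhead.
LINE_RATE_BPS = 10_000_000_000   # assumed 10 Gbit/s link

for payload in (1500, 9000, 64000):
    packets_per_sec = LINE_RATE_BPS / (payload * 8)
    print(f"{payload:>6} B payload: {packets_per_sec:,.0f} packets/s")
```

Under these assumptions a node saturating the link must parse roughly 830,000 headers per second at 1500-byte payloads, about 140,000 at 9000 bytes, and under 20,000 at a hypothetical 64,000-byte payload, which is the machine-cycle saving the paragraph above describes.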
