
Ethernet:

Definition: Ethernet is a physical and data link layer technology for local
area networks (LANs). Ethernet was invented by engineer Robert Metcalfe.


When first widely deployed in the 1980s, Ethernet supported a maximum
theoretical data rate of 10 megabits per second (Mbps). Later, so-called "Fast
Ethernet" standards increased this maximum data rate to 100 Mbps. Gigabit
Ethernet technology further extends peak performance up to 1000 Mbps,
and 10 Gigabit Ethernet technologies also exist.

Higher level network protocols like Internet Protocol (IP) use Ethernet as
their transmission medium. Data travels over Ethernet inside protocol units
called frames.

The run length of individual Ethernet cables is limited to roughly 100 meters,
but Ethernet networks can be easily extended to link entire schools or office
buildings using network bridge devices.

Ethernet was originally based on the idea of computers communicating over
a shared coaxial cable acting as a broadcast transmission medium. The
methods used were similar to those used in radio systems, with the common
cable providing the communication channel likened to the luminiferous
aether in 19th-century physics, and it was from this reference that the name
"Ethernet" was derived.

Original Ethernet's shared coaxial cable (the shared medium) traversed a
building or campus to every attached machine. A scheme known as carrier
sense multiple access with collision detection (CSMA/CD) governed the way
the computers shared the channel. This scheme was simpler than the
competing token ring or token bus technologies. Computers were connected
to an Attachment Unit Interface (AUI) transceiver, which was in turn
connected to the cable (later with thin Ethernet the transceiver was
integrated into the network adapter). While a simple passive wire was highly
reliable for small networks, it was not reliable for large extended networks,
where damage to the wire in a single place, or a single bad connector, could
make the whole Ethernet segment unusable.
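
The CSMA/CD behaviour described above can be pictured with a small simulation. The following Python sketch is only an illustration of the carrier-sense and binary-exponential-backoff ideas under invented assumptions (a slotted channel, stations that always have a frame queued, no jam signal, no 16-attempt frame drop); it is not the IEEE 802.3 procedure.

    import random

    def backoff(collisions):
        """Binary exponential backoff: after the n-th consecutive collision,
        wait a random number of slot times drawn from 0 .. 2^min(n, 10) - 1."""
        return random.randint(0, 2 ** min(collisions, 10) - 1)

    def simulate(stations=3, slots=30):
        """Toy slotted model: every station always has a frame to send; a slot
        with exactly one sender succeeds, two or more senders collide."""
        wait = {s: 0 for s in range(stations)}        # slots left before next attempt
        collisions = {s: 0 for s in range(stations)}  # consecutive collisions per station
        for t in range(slots):
            senders = [s for s in range(stations) if wait[s] == 0]
            if len(senders) == 1:
                s = senders[0]
                print(f"slot {t}: station {s} transmits successfully")
                collisions[s] = 0
                wait[s] = 1                           # ready again next slot
            elif len(senders) > 1:
                print(f"slot {t}: collision between stations {senders}")
                for s in senders:                     # collision detected: back off
                    collisions[s] += 1
                    wait[s] = 1 + backoff(collisions[s])
            for s in range(stations):
                wait[s] = max(0, wait[s] - 1)

    random.seed(1)
    simulate()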

ATM:

Definition: ATM (Asynchronous Transfer Mode) is a high-speed networking
standard designed to support both voice and data communications. ATM is
normally utilized by Internet service providers on their private long-distance
networks. ATM operates at the data link layer (Layer 2 in the OSI model) over
either fiber or twisted-pair cable.
ATM differs from more common data link technologies like Ethernet in
several ways. For example, ATM utilizes no routing. Hardware devices known
as ATM switches establish point-to-point connections between endpoints and
data flows directly from source to destination. Additionally, instead of using
variable-length packets as Ethernet does, ATM utilizes fixed-sized cells. ATM
cells are 53 bytes in length, consisting of 48 bytes of data and 5 bytes of
header information.
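
As a rough illustration of the fixed cell size, the sketch below splits a payload into 53-byte cells (48 bytes of data plus a 5-byte header). The header layout here is only a placeholder carrying a VPI/VCI pair; the real ATM header packs VPI, VCI, payload type, CLP and HEC into specific bit positions.

    CELL_PAYLOAD = 48   # bytes of data per ATM cell
    CELL_HEADER = 5     # bytes of header per cell (53 bytes total)

    def segment(data: bytes, vpi: int, vci: int):
        """Split a byte string into 53-byte cells with a schematic 5-byte header."""
        cells = []
        for i in range(0, len(data), CELL_PAYLOAD):
            chunk = data[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")  # pad the last cell
            header = vpi.to_bytes(2, "big") + vci.to_bytes(2, "big") + b"\x00"
            cells.append(header + chunk)
        return cells

    cells = segment(b"x" * 100, vpi=1, vci=32)
    print(len(cells), len(cells[0]))   # 3 cells, each 53 bytes long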

The performance of ATM is often expressed in the form of OC (Optical
Carrier) levels, written as "OC-xxx." Performance levels as high as
10 Gbps (OC-192) are technically feasible with ATM. More common
performance levels for ATM are 155 Mbps (OC-3) and 622 Mbps (OC-12).
ATM technology is designed to improve utilization and quality of service
(QoS) on high-traffic networks. Without routing and with fixed-size cells,
networks can much more easily manage bandwidth under ATM than under
Ethernet, for example. The high cost of ATM relative to Ethernet is one factor
that has limited its adoption to backbone and other high-performance,
specialized networks.

Cells in practice
ATM supports different types of services via AALs. Standardized AALs include
AAL1, AAL2, and AAL5, and the rarely used AAL3 and AAL4. AAL1 is used for
constant bit rate (CBR) services and circuit emulation. Synchronization is
also maintained at AAL1. AAL2 through AAL4 are used for variable bit rate
(VBR) services, and AAL5 for data. Which AAL is in use for a given cell is not
encoded in the cell. Instead, it is negotiated by or configured at the
endpoints on a per-virtual-connection basis.
Following the initial design of ATM, networks have become much faster. A
1500-byte (12,000-bit) full-size Ethernet frame takes only 1.2 microseconds (μs)
to transmit on a 10 Gbit/s network, reducing the need for small cells to reduce
jitter due to contention. Some consider that this makes a case for replacing ATM
with Ethernet in the network backbone. However, the increased link speeds by
themselves do not alleviate jitter due to queuing. Additionally, the hardware
for implementing the service adaptation for IP packets is expensive at very high
speeds. Specifically, at speeds of OC-3 and above, the cost of segmentation and
reassembly (SAR) hardware makes ATM less competitive for IP than Packet over
SONET (POS). Because of its fixed 48-byte cell payload, ATM is not suitable as
a data link layer directly underlying IP (without the need for SAR at the data
link level), since the OSI layer on which IP operates must provide a maximum
transmission unit (MTU) of at least 576 bytes. SAR performance limits mean that
the fastest IP router ATM interfaces are STM-16 to STM-64, whereas as of 2004
POS could operate at OC-192 (STM-64), with higher speeds expected in the future.
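
The frame-transmission figure quoted above is easy to check; the following one-liner computes the serialization delay only, ignoring preamble, inter-frame gap and queuing delay.

    frame_bits = 1500 * 8          # 12,000 bits in a full-size Ethernet frame
    link_rate = 10e9               # 10 Gbit/s
    print(frame_bits / link_rate)  # 1.2e-06 s, i.e. 1.2 microseconds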
On slower or congested links (622 Mbit/s and below), ATM does make sense,
and for this reason most asymmetric digital subscriber line (ADSL) systems
use ATM as an intermediate layer between the physical link layer and a
Layer 2 protocol like PPP or Ethernet.
At these lower speeds, ATM provides a useful ability to carry multiple logical
circuits on a single physical or virtual medium, although other techniques
exist, such as Multi-link PPP and Ethernet VLANs, which are optional
in VDSL implementations. DSL can be used as an access method for an ATM
network, allowing a DSL termination point in a telephone central office to
connect to many internet service providers across a wide-area ATM network.
In the United States, at least, this has allowed DSL providers to provide DSL
access to the customers of many internet service providers. Since one DSL
termination point can support multiple ISPs, the economic feasibility of DSL
is substantially improved.
Why virtual circuits?
ATM operates as a channel-based transport layer, using virtual
circuits (VCs). This is encompassed in the concept of the Virtual Paths (VP)
and Virtual Channels. Every ATM cell has an 8- or 12-bit Virtual Path
Identifier (VPI) and 16-bit Virtual Channel Identifier (VCI) pair defined
in its header. The VCI, together with the VPI, is used to identify the next
destination of a cell as it passes through a series of ATM switches on its way
to its destination. The length of the VPI varies according to whether the cell
is sent on the user-network interface (on the edge of the network), or if it is
sent on the network-network interface (inside the network).
As these cells traverse an ATM network, switching takes place by changing
the VPI/VCI values (label swapping). Although the VPI/VCI values are not
necessarily consistent from one end of the connection to the other, the
concept of a circuit is consistent (unlike IP, where any given packet could
get to its destination by a different route than the others).[8] ATM switches
use the VPI/VCI fields to identify the Virtual Channel Link (VCL) of the next
network that a cell needs to transit on its way to its final destination. The
function of the VCI is similar to that of the data link connection
identifier (DLCI) in frame relay and the Logical Channel Number & Logical
Channel Group Number in X.25.
Another advantage of the use of virtual circuits comes with the ability to use
them as a multiplexing layer, allowing different services (such as
voice, Frame Relay, n×64 channels, and IP) to be multiplexed over the same
path. The VPI is useful for reducing the switching table of some virtual
circuits which have common paths.
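
A minimal sketch of the label-swapping idea: each switch keeps a table keyed by incoming port and VPI/VCI that gives the outgoing port and the new VPI/VCI to write into the cell header. The table contents below are invented for illustration.

    # Per-switch forwarding table: (in_port, vpi, vci) -> (out_port, new_vpi, new_vci)
    switch_table = {
        (1, 0, 32): (3, 5, 101),
        (1, 0, 33): (2, 5, 102),
    }

    def forward(in_port, vpi, vci):
        """Look up the virtual channel link and rewrite the cell's labels."""
        out_port, new_vpi, new_vci = switch_table[(in_port, vpi, vci)]
        return out_port, new_vpi, new_vci

    print(forward(1, 0, 32))   # (3, 5, 101): same circuit, new labels on the next hop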
Using cells and virtual circuits for traffic engineering
Another key ATM concept involves the traffic contract. When an ATM circuit
is set up, each switch on the circuit is informed of the traffic class of the
connection.
ATM traffic contracts form part of the mechanism by which "quality of
service" (QoS) is ensured. There are four basic types (and several variants),
each of which has a set of parameters describing the connection, as sketched
below.

1. CBR - Constant bit rate: a Peak Cell Rate (PCR) is specified, which is
constant.
2. VBR - Variable bit rate: an average or Sustainable Cell Rate (SCR) is
specified, which can peak at a certain level, a PCR, for a maximum
interval before being problematic.
3. ABR - Available bit rate: a minimum guaranteed rate is specified.
4. UBR - Unspecified bit rate: traffic is allocated to all remaining
transmission capacity.
VBR has real-time and non-real-time variants, and serves for "bursty" traffic.
Non-real-time is sometimes abbreviated to vbr-nrt.
Most traffic classes also introduce the concept of Cell Delay Variation
Tolerance (CDVT), which defines the "clumping" of cells in time.
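
The four service categories and their main parameters can be summarised in a small data structure. This is only an illustrative sketch using the parameter names from the text above (PCR, SCR, MBS, CDVT, plus a minimum rate for ABR); it is not an ATM Forum signalling structure, and the example values are invented.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TrafficContract:
        service: str                  # "CBR", "rt-VBR", "nrt-VBR", "ABR" or "UBR"
        pcr: Optional[float] = None   # Peak Cell Rate (cells/s)
        scr: Optional[float] = None   # Sustainable Cell Rate (VBR)
        mbs: Optional[int] = None     # Maximum Burst Size in cells (VBR)
        mcr: Optional[float] = None   # minimum guaranteed rate (ABR)
        cdvt: Optional[float] = None  # Cell Delay Variation Tolerance (s)

    voice = TrafficContract("CBR", pcr=4000, cdvt=250e-6)
    bursty_data = TrafficContract("nrt-VBR", pcr=10000, scr=2000, mbs=100)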
Traffic policing
To maintain network performance, networks may apply traffic policing to
virtual circuits to limit them to their traffic contracts at the entry points to
the network, i.e. the user-network interfaces (UNIs) and network-to-network
interfaces (NNIs): Usage/Network Parameter Control (UPC and NPC). The
reference model given by the ITU-T and ATM Forum for UPC and NPC is the
generic cell rate algorithm (GCRA), which is a version of the leaky bucket
algorithm. CBR traffic will normally be policed to a PCR and CDVT alone,
whereas VBR traffic will normally be policed using a dual leaky bucket
controller to a PCR and CDVT and an SCR and Maximum Burst Size (MBS).
The MBS will normally be the packet (SAR-SDU) size for the VBR VC,
expressed in cells.
If the traffic on a virtual circuit is exceeding its traffic contract, as
determined by the GCRA, the network can either drop the cells or mark
the Cell Loss Priority (CLP) bit (to identify a cell as potentially redundant).
Basic policing works on a cell by cell basis, but this is sub-optimal for
encapsulated packet traffic (as discarding a single cell will invalidate the
whole packet). As a result, schemes such as Partial Packet Discard (PPD)
and Early Packet Discard (EPD) have been created that will discard a whole
series of cells until the next packet starts. This reduces the number of
useless cells in the network, saving bandwidth for full packets. EPD and PPD
work with AAL5 connections as they use the end of packet marker: the ATM
User-to-ATM User (AUU) Indication bit in the Payload Type field of the
header, which is set in the last cell of a SAR-SDU.
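
A sketch of single-bucket policing in the spirit of the GCRA (virtual-scheduling form) is shown below. The parameter values are invented, and a real UPC/NPC function would also apply the second bucket for SCR and MBS described above, and would mark or drop non-conforming cells rather than just report them.

    class GCRA:
        """Generic cell rate algorithm, virtual-scheduling form (single bucket).

        T   = emission interval = 1 / PCR (seconds per cell)
        tau = tolerance (CDVT): how early a cell may arrive and still conform
        """
        def __init__(self, T, tau):
            self.T, self.tau, self.tat = T, tau, 0.0   # tat = theoretical arrival time

        def conforms(self, arrival_time):
            if arrival_time < self.tat - self.tau:
                return False                   # cell arrived too early: non-conforming
            self.tat = max(arrival_time, self.tat) + self.T
            return True

    # Police a PCR of 10,000 cells/s with 50 microseconds of tolerance.
    upc = GCRA(T=1e-4, tau=50e-6)
    for t in [0.0, 0.5e-4, 1.0e-4, 1.2e-4, 3.0e-4]:
        print(t, upc.conforms(t))   # the back-to-back early cells fail the check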
Traffic shaping
Traffic shaping usually takes place in the network interface card (NIC) in
user equipment, and attempts to ensure that the cell flow on a VC will meet
its traffic contract, i.e. cells will not be dropped or reduced in priority at the
UNI. Since the reference model given for traffic policing in the network is the
GCRA, this algorithm is normally used for shaping as well, and single and
dual leaky bucket implementations may be used as appropriate.
Types of virtual circuits and paths
ATM can build virtual circuits and virtual paths either statically or
dynamically. Static circuits (permanent virtual circuits or PVCs) or paths
(permanent virtual paths or PVPs) require that the circuit is composed of a
series of segments, one for each pair of interfaces through which it passes.
PVPs and PVCs, though conceptually simple, require significant effort in large
networks. They also do not support the re-routing of service in the event of
a failure. Dynamically built PVPs (soft PVPs or SPVPs) and PVCs (soft PVCs or
SPVCs), in contrast, are built by specifying the characteristics of the circuit
(the service "contract") and the two end points.
Finally, ATM networks create and remove switched virtual circuits (SVCs) on
demand when requested by an end piece of equipment. One application for
SVCs is to carry individual telephone calls when a network of telephone
switches is interconnected using ATM. SVCs were also used in attempts to
replace local area networks with ATM.
Virtual circuit routing
Most ATM networks supporting SPVPs, SPVCs, and SVCs use the Private
Network Node Interface or the Private Network-to-Network Interface (PNNI)
protocol. PNNI uses the same shortest-path-first algorithm used
by OSPF and IS-IS for routing IP packets in order to share topology
information between switches and select a route through the network. PNNI
also includes a very powerful summarization mechanism that allows the
construction of very large networks, as well as a call admission control (CAC)
algorithm which determines whether sufficient bandwidth is available on a
proposed route through the network to satisfy the service requirements of a
VC or VP.
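
The route selection described above can be pictured as an ordinary shortest-path search over links that also advertise spare bandwidth, with the CAC step pruning links that cannot carry the requested rate. The topology, costs and bandwidth figures below are invented; PNNI itself is considerably more elaborate (hierarchical summarisation, crankback, and so on).

    import heapq

    # links: neighbour, administrative cost, available bandwidth in Mbit/s
    topology = {
        "A": [("B", 1, 155), ("C", 5, 622)],
        "B": [("A", 1, 155), ("C", 1, 100)],
        "C": [("A", 5, 622), ("B", 1, 100)],
    }

    def route(src, dst, required_bw):
        """Dijkstra over links with enough spare bandwidth (a crude CAC check)."""
        queue, seen = [(0, src, [src])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, link_cost, bw in topology[node]:
                if bw >= required_bw and nxt not in seen:
                    heapq.heappush(queue, (cost + link_cost, nxt, path + [nxt]))
        return None

    print(route("A", "C", required_bw=50))    # (2, ['A', 'B', 'C'])  cheapest path fits
    print(route("A", "C", required_bw=200))   # (5, ['A', 'C'])       the B-C link is too small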

Network Standards

Network standards are ground rules set by standards bodies so that hardware
is compatible among similar computers and interoperability is assured. This is
done to ensure backwards compatibility and compatibility from vendor to
vendor. Standards are necessary because if each company had its own protocol
standards and did not allow them to talk with other protocols, there would be
a lack of communication between different machines; one company might be
hugely successful while another goes out of business simply because its
machines cannot communicate with others. This is why network standards and
protocols are necessary: they are what allow different computers from
different companies running different software to communicate with each
other, making networking possible.

File Management
File management is where the network controls where your files are stored
within the network so that they can be retrieved later by the user; for
example, saving a Word document in a certain place so that only you can
access it and nobody else.

Shared Storage

Shared storage is where the network allocates storage for files that can be
viewed and accessed by other people within the network; for example, a
document that many people need to view and edit would be stored in the
shared storage area of the network.

Account Management

Account management is where the network provides user accounts for logging in
to the network. Different people need different levels of access; for example,
an employee will log in to an employee account where rights are limited,
whereas an admin will log in to an admin account with full access rights.

Web Services

Web services are managed by the network in order to restrict end users on
computers from accessing potentially harmful websites; these can include
pornography and sites deemed unsafe for the network. This is done by
blocking access to such websites.

Printing

Printing services are where the network allocates printers to certain
machines, for example one printer per floor or room in an office, in order to
save the time of going to different places to collect a printout.

METRIC UNITS

The metric system is an internationally agreed decimal system of
measurement that was originally based on the mètre des Archives and
the kilogramme des Archives introduced by France in 1799. Over the years,
the definitions of the metre and kilogram have been refined and the metric
system has been extended to incorporate many more units. Although a
number of variants of the metric system emerged in the late nineteenth and
early twentieth centuries, the term is now often used as a synonym for
"SI" or the "International System of Units", the official system of
measurement in almost every country in the world.
The metric system has been officially sanctioned for use in the United States
since 1866, but the United States remains the only industrialised country
that has not adopted the metric system as its official system of measurement.
Many sources also cite Liberia and Burma as the only other countries not to
have done so. Although the United Kingdom uses the metric system for most
official purposes, the use of the imperial system of measure, particularly
among the public, is widespread, and is legally mandated in various cases.
Although the originators intended to devise a system that was equally
accessible to all, it proved necessary to use prototype units under the
custody of government or other approved authorities as standards. Control
of the prototype units of measure was maintained by the French government
until 1875, when it passed to an inter-governmental organisation,
the General Conference on Weights and Measures (CGPM).[1] It is now hoped
that the last of these prototypes can be retired by 2014.
From its beginning, the main features of the metric system were the
standard set of inter-related base units and a standard set of prefixes in
powers of ten. These base units are used to derive larger and smaller units
that could replace a huge number of other units of measure in existence.
Although the system was first developed for commercial use, the
development of coherent units of measure made it particularly suitable for
science and engineering.
The uncoordinated use of the metric system by different scientific and
engineering disciplines, particularly in the late 19th century, resulted in
different choices of fundamental units, even though all were based on the
same definitions of the metre and the kilogram. During the 20th century,
efforts were made to rationalise these units, and in 1960 the CGPM published
the International System of Units which, since then, has been
the internationally recognised standard metric system.
Binary/metric prefixes for computer size, speed, etc.

Computers are often described in terms such as megahertz, gigabytes,
teraflops, etc. This section has a brief summary of the meaning of these
terms.
Prefixes

Numbers associated with computers are often very large or small and so
standard scientific prefixes are used to denote powers of 10. E.g. a kilobyte
is 1000 bytes and a megabyte is a million bytes. These prefixes are listed
below, where 1e3 for example means 10^3:

kilo = 1e3
mega = 1e6
giga = 1e9
tera = 1e12
peta = 1e15
exa = 1e18

Note, however, that in some computer contexts (e.g. size of main memory)
these prefixes refer to nearby numbers that are exactly powers of 2:

kilo = 2^10 = 1024
mega = 2^20 = 1048576
etc.

This is falling out of use, however. For a more detailed discussion of this
(and additional prefixes) see [wikipedia].

For numbers that are much smaller than 1 a different set of prefixes is
used, e.g. a millisecond is 1/1000 = 1e-3 second:

milli = 1e-3
micro = 1e-6
nano = 1e-9
pico = 1e-12
femto = 1e-15
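
These prefixes are easy to keep in a small lookup table; the sketch below covers only the prefixes listed above.

    PREFIX = {
        "kilo": 1e3, "mega": 1e6, "giga": 1e9, "tera": 1e12, "peta": 1e15, "exa": 1e18,
        "milli": 1e-3, "micro": 1e-6, "nano": 1e-9, "pico": 1e-12, "femto": 1e-15,
    }

    def to_base_units(value, prefix):
        """Convert e.g. 2.5 'giga'hertz to plain hertz."""
        return value * PREFIX[prefix]

    print(to_base_units(2.5, "giga"))   # 2500000000.0 (2.5 GHz in Hz)
    print(to_base_units(3, "milli"))    # 0.003 (3 ms in seconds)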

Units of memory, storage

The amount of memory or disk space on a computer is normally measured in
bytes (1 byte = 8 bits) since almost everything we want to store on a
computer requires some integer number of bytes (e.g. an ASCII character
can be stored in 1 byte, a standard integer in 4 bytes, see storage).

Memory on a computer is generally split up into different types of memory
implemented with different technologies. There is generally a large quantity
of fairly slow memory (slow to move into the processor to operate on it) and
a much smaller quantity of faster memory (used for the data and programs
that are actively being processed). Fast memory is much more expensive
than slow memory.

A computer also has a large amount of storage on the hard disk, typically
hundreds of gigabytes (hundreds of billions of bytes). The hard disk is used
to store data for long periods of time and is generally slow to access (i.e.
to move into the core memory for processing). The main memory or core memory
might only be 1 GB or a few GB.

Units of speed

The speed of a processor is often measured in Hertz (cycles per second) or
some multiple such as Gigahertz (billions of cycles per second). This tells
how many clock cycles are executed each second. Each instruction that a
computer can execute takes some integer number of clock cycles. Different
instructions may take different numbers of clock cycles. An instruction like
"add the contents of registers 1 and 2 and store the result in register 3"
will typically take only 2 clock cycles. On the other hand the instruction
"load x into register 1" can take a varying number of clock cycles depending
on where x is stored. If x is in cache because it has been recently used, this
instruction may take only a few cycles. If it is not in cache and it must be
loaded from main memory, it might take 100 cycles. If the data set used by
the program is so huge that it doesn't all fit in memory and x must be
retrieved from disk, it might take ?? cycles.

So knowing how fast your computer's processor is in Hertz does not
necessarily directly tell you how quickly a given program will execute. It
depends on what the program is doing and also on other factors such as how
fast the memory accesses are.
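
A back-of-the-envelope version of this point: the same instruction count can take very different wall-clock times depending on the average number of cycles each instruction costs. The clock rate, instruction count and cycle figures below are made up for illustration.

    clock_hz = 3e9        # a 3 GHz processor
    instructions = 1e9    # a program that executes a billion instructions

    def run_time(cycles_per_instruction):
        return instructions * cycles_per_instruction / clock_hz

    print(run_time(1))    # ~0.33 s if nearly everything is in cache (about 1 cycle each)
    print(run_time(20))   # ~6.7 s if many loads miss the cache (about 20 cycles each)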

In scientific computing we frequently write programs that perform
many floating point operations such as multiplication or addition of two
floating point numbers (see ??). A floating point operation is often called
a flop. For many algorithms it is relatively easy to estimate how many flops
are required. For example, multiplying an n by n matrix by a vector of length
n requires roughly n^2 flops. So it is convenient to know roughly how many
floating point operations the computer can perform in one second. In this
context, flops often stands for floating point operations per second. For
example, a computer with a peak speed of 100 Mflops can perform up to
100 million floating point operations per second. As in the discussion of
clock speed above, the actual performance on a particular problem typically
depends very much on factors other than the peak speed, which is generally
measured assuming that all the data needed for the floating point operations
is already in cache so there is no time wasted fetching data. On a real
problem there may be many more clock cycles used on memory accesses
than on the actual floating point operations.
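
For example, the matrix-vector product mentioned above gives a quick runtime estimate. This is only a sketch: it uses the rough n^2 flop count from the text and assumes the machine actually ran at its peak rate, which, as noted, it rarely does.

    n = 10_000
    flops_needed = n ** 2       # roughly n^2 floating point operations
    peak_rate = 100e6           # a machine rated at 100 Mflops peak

    print(flops_needed / peak_rate)   # ~1.0 second at peak speed; longer in practice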

Old Standard                                IEC Standard
bit                 b   0 or 1              bit                 bit    0 or 1
byte                B   8 bits              byte                B      8 bits
                                            kibibit             Kibit  1024 bits
kilobit             kb  1000 bits           kilobit             kbit   1000 bits
Kilobyte (binary)   KB  1024 bytes          kibibyte (binary)   KiB    1024 bytes
Kilobyte (decimal)  KB  1000 bytes          kilobyte (decimal)  kB     1000 bytes
Megabit             Mb  1000 kilobits       megabit             Mbit   1000 kilobits
Megabyte (binary)   MB  1024 Kilobytes      mebibyte (binary)   MiB    1024 kibibytes
Megabyte (decimal)  MB  1000 Kilobytes      megabyte (decimal)  MB     1000 kilobytes
Gigabit             Gb  1000 Megabits       gigabit             Gbit   1000 megabits
Gigabyte (binary)   GB  1024 Megabytes      gibibyte (binary)   GiB    1024 mebibytes
Gigabyte (decimal)  GB  1000 Megabytes      gigabyte (decimal)  GB     1000 megabytes
Terabit             Tb  1000 Gigabits       terabit             Tbit   1000 gigabits
Terabyte (binary)   TB  1024 Gigabytes      tebibyte (binary)   TiB    1024 gibibytes
Terabyte (decimal)  TB  1000 Gigabytes      terabyte (decimal)  TB     1000 gigabytes
Petabit             Pb  1000 Terabits       petabit             Pbit   1000 terabits
Petabyte (binary)   PB  1024 Terabytes      pebibyte (binary)   PiB    1024 tebibytes
Petabyte (decimal)  PB  1000 Terabytes      petabyte (decimal)  PB     1000 terabytes
Exabit              Eb  1000 Petabits       exabit              Ebit   1000 petabits
Exabyte (binary)    EB  1024 Petabytes      exbibyte (binary)   EiB    1024 pebibytes
Exabyte (decimal)   EB  1000 Petabytes      exabyte (decimal)   EB     1000 petabytes
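
The difference between the decimal and binary columns shows up whenever a byte count is formatted. The helper below is a small sketch that scales a byte count by either 1000 or 1024 and attaches the matching unit names.

    def fmt(n_bytes, base, units):
        """Scale n_bytes down by `base` until it fits, returning value + unit."""
        value, i = float(n_bytes), 0
        while value >= base and i < len(units) - 1:
            value /= base
            i += 1
        return f"{value:.2f} {units[i]}"

    n = 500_000_000_000    # a "500 GB" disk as advertised (decimal)
    print(fmt(n, 1000, ["B", "kB", "MB", "GB", "TB"]))       # 500.00 GB
    print(fmt(n, 1024, ["B", "KiB", "MiB", "GiB", "TiB"]))   # 465.66 GiB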
Decimal System

Although data storage capacity is generally expressed in binary
code, many hard drive manufacturers (and some newer BIOSs) use
a decimal system to express capacity:

Old Standard
1 bit (b)
1 byte (B) = 8 bits
1 Kilobyte (K / KB) = 10^3 bytes = 1,000 bytes
1 Megabyte (M / MB) = 10^6 bytes = 1,000,000 bytes
1 Gigabyte (G / GB) = 10^9 bytes = 1,000,000,000 bytes
1 Terabyte (T / TB) = 10^12 bytes = 1,000,000,000,000 bytes
1 Petabyte (P / PB) = 10^15 bytes = 1,000,000,000,000,000 bytes
1 Exabyte (E / EB) = 10^18 bytes = 1,000,000,000,000,000,000
bytes
1 Zettabyte (Z / ZB) = 10^21 bytes =
1,000,000,000,000,000,000,000 bytes
1 Yottabyte (Y / YB) = 10^24 bytes =
1,000,000,000,000,000,000,000,000 bytes

Note: A third definition of Megabyte is that used in formatting floppy
disks: 1 Megabyte = 1,024,000 bytes.
IEC Standard
1 bit (bit)
1 byte (B) = 8 bits
1 kilobyte (kB) = 10^3 bytes = 1,000 bytes
1 megabyte (MB) = 10^6 bytes = 1,000,000 bytes
1 gigabyte (GB) = 10^9 bytes = 1,000,000,000 bytes
1 terabyte (TB) = 10^12 bytes = 1,000,000,000,000 bytes
1 petabyte (PB) = 10^15 bytes = 1,000,000,000,000,000 bytes
1 exabyte (EB) = 10^18 bytes = 1,000,000,000,000,000,000 bytes
1 zettabyte (ZB) = 10^21 bytes = 1,000,000,000,000,000,000,000
bytes
1 yottabyte (YB) = 10^24 bytes =
1,000,000,000,000,000,000,000,000 bytes

Note the use of a lowercase "k" in the abbreviation for kilobyte, in keeping
with metric usage.
