Basic network concepts and the OSI model in simple terms

In this chapter of our journey to learn computer networking technology, we explain the OSI model in simple terms and expand on the different layers of the OSI model.

The OSI model defines the basic building blocks of computer networking, and is an essential part of a complete understanding of modern TCP/IP networks. An understanding of the concepts of the OSI model is absolutely necessary for someone learning the role of the Network Administrator or the System Administrator. If you are looking for something less technical that focuses more on using a computer network, rather than understanding the core concepts of how it works, please visit our companion website Smart Technology. At Smart Technology we discuss managing technology from the perspective of a business owner or department manager. Check out the section on Managing Technology and specifically the article on The System Administrator and the Power User.

The role of the Network Administrator or the System Administrator

On a small to mid-size network there may be little, if any, distinction between a Systems Administrator and a Network Administrator, and the tasks may all be the responsibility of a single post. As the size of the network grows, the distinction between the areas will become more well defined.

In larger organizations, administrator-level technology personnel are typically not the first line of support working with end users; rather, they work only on break/fix issues that could not be resolved at the lower levels.

Network administrators are responsible for making sure computer hardware and the network infrastructure itself are maintained properly. The term network monitoring describes the use of a system that constantly monitors a computer network for slow or failing components and that notifies the network administrator in case of outages via email, pager, or other alarms. The typical Systems Administrator, or sysadmin, leans towards the software and NOS (Network Operating System) side of things. Systems Administrators install software releases, upgrades, and patches, resolve software-related problems, and perform system backups and recovery.

Basic computer networking terms defined

Whether you are a business manager learning the language of technology to better communicate with IT staff, or just beginning your IT career, don't overlook a basic understanding of computer networking.

The simplest definition of a computer network is a group of computers that are able to communicate with one another and share a resource. A computer network is a collection of hardware and software that enables a group of devices to communicate and provides users with access to shared resources such as data, files, programs, and operations.

Common networking terms

Each device on a network is called a node. In order for communications to take place, you need the software, the network operating system (NOS), and the means of communication between network computers, known as the media.

In computer networking the term media refers to the actual path over which an electrical signal travels as it moves from one component to another. The media can be physical, such as a specialized cable, or various forms of wireless media, such as infrared transmission or radio signals.

A local area network (LAN) is a collection of computers cabled together to form a network in a small geographic area (usually within one building).

A wide area network (WAN) connects computers over long distances using a public highway such as the Internet.

A network interface card (NIC) enables a computer to send and receive data over the network media.

A gateway is software or hardware, or a combination of the two, that interconnects different types of networks, translating as necessary between the two.

What is a protocol?

A network protocol is a specialized electronic language that enables networked computers to communicate. Different types of computers, using different operating systems, can communicate with each other and share information as long as they follow the same network protocols.

A protocol suite is a set of related protocols that come from a single developer or source.

A protocol stack is a set of two or more protocols that work together, with each protocol covering a different aspect of data communications.

What is the client server network model?

In the most common network model, client-server, at least one centralized server manages shared resources and security for the other network users and computers. A network connection is made only when information needs to be accessed by a user; this lack of a continuous network connection improves network efficiency. The client requests services or information from the server computer. The server responds to the client's request by sending the results back to the client computer. Security and permissions can be managed centrally by administrators, which cuts down on security and rights issues when dealing with a large number of workstations. This model allows for convenient backup services, reduces network traffic, and provides a host of other services that come with the network operating system.

What are Peer-to-Peer Networks?

In a peer-to-peer network, computers simply share resources with one another, as on a typical home network; every computer acts as both a client and a server. Any computer can share resources with another, and any computer can use the resources of another, given proper access rights.

This is a good solution when there are 10 or fewer users in close proximity to each other, but it is difficult to maintain security as the network grows. This model can be a security nightmare, because permissions for shared resources must be set and maintained separately at each workstation, and there is no centralized management. This model is only recommended in situations where security is not an issue.

Other Models

Before microcomputers became cost effective, dumb terminals were used to access very large mainframe computers in remote locations. The local terminal was dumb in the sense that it was nothing more than a way for a keyboard and monitor to access another computer remotely, with all the processing occurring on the remote computer. This model, sometimes referred to as a centralized model, is not very common today.

OSI model explained in simple terms

As you begin your quest to learn computer networking one of the first tasks you have before you is a basic understanding of the OSI model. For many folks understanding the OSI model is like trying to understand some mystical formula that controls the way computer networks operate.

As we help you to begin your journey to understanding computer networking, we will tackle the complex subject of the OSI model, explaining it in simple terms in hopes that you will gain an understanding of the reasons behind the definitions. You can find a lot of resources that define the components of the OSI model, but an understanding of the reasons behind the definitions will go a long way toward fully understanding this complex technology model.

The acronym and the organization behind it can get confusing. The formal name for the OSI model is the Open Systems Interconnection model. Open Systems refers to a cooperative effort among many vendors to develop hardware and software that could be used together. The model is a product of the International Organization for Standardization, which is often abbreviated ISO.

The logic behind ISO

Before we delve into the OSI model, let us take a moment to understand the organization behind it. You may have seen the term ISO certified in various technology areas. ISO, the International Organization for Standardization, is the world's largest developer and publisher of International Standards. ISO helps to manage and create international standards in many technical areas to ensure the same quality of a product or process regardless of location or company.

The OSI (Open Systems Interconnection) model provides a set of general design guidelines for data communications systems and gives a standard way to describe how the various layers of data communication systems interact. Applying the logic of the ISO standards to computer networking, a computer component or piece of computer software needs to comply with a set of standards so that the product or process will work no matter where in the world we are, and no matter who in the world is producing it.

Putting the OSI model into perspective

Strive for a good understanding of the intent of the model and a few of the core principles; that will go a long way toward an overall understanding of computer networking. Do not focus on the intricate details of the OSI model at first, as the more you read the more confused you may get. The model was created in the late 1970s and the technology is ever changing. Many textbooks will contradict each other on some aspects of the upper layers. Some of the reasoning behind the upper layers is for processes that are not nearly as useful today as they were many years ago, and for that reason many other network models blend the upper three layers into a single layer.

Basic definitions of the OSI Model

The seven layers of the OSI Model can be remembered by using the following memory aide: All People Seem To Need Data Processing. As you say the phrase, write down the first letter of each word, and that will help you to remember the seven layers in order from highest to lowest:

Application, Presentation, Session, Transport, Network, Data Link, and Physical. We will briefly discuss the lower four layers from the bottom up.
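The memory aide above can be checked with a trivial Python sketch that lists the layers by number and reads off their initials from highest to lowest:

```python
# The seven OSI layers, numbered from lowest (1) to highest (7).
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",
    3: "Network",
    4: "Transport",
    5: "Session",
    6: "Presentation",
    7: "Application",
}

# Reading from highest to lowest gives the memory aide's initials:
# "All People Seem To Need Data Processing" -> A P S T N D P
initials = [OSI_LAYERS[n][0] for n in range(7, 0, -1)]
print("".join(initials))  # APSTNDP
```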

Layer one, the Physical layer provides the path through which data moves among devices on the network.

Layer two, the Data Link layer provides a system through which network devices can share the communication channel.

Layer three, the Network layer's main purpose is to decide which physical path the information should follow from its source to its destination.

Layer four, the Transport layer provides the upper layers with a communication channel to the network.

An analogy to understand the model

Some of the reasons behind the OSI model are to break network communication into smaller, simpler parts that are easier to develop, and to facilitate standardization of network components to allow multiple-vendor development and support.

Let's take the reasons behind the OSI model and apply them to something totally different to illustrate how they are used. If we wanted to start a railroad and build a new type of train from scratch, and we wanted this train to be able to use existing train tracks, and existing train stations so our new system could get up and running quickly, we would need to understand what existing standards are currently in place.

Even if we never had to build a set of train tracks, we would need to understand the standards by which train tracks were built and designed, so we could ensure our train could operate on them, and how the track is shared. Likewise, in order for components to operate, manufacturers must understand the track, layer one, and how the track is shared, layer two.

If we are building trains, not train stations, we need to know the size and shape of other vehicles using the tracks so our trains could use the same track as all the other trains. Layer one of the OSI model gives us the path, or the track we use for communication. Layer one, referred to as the media, is the wire, or anything that takes the place of the wire, such as fiber optic, infrared, or radio spectrum technology.

Once you have more than one train on the track, you need to find a way to share the track. Layer two provides a system through which network devices can share the communication channel, or in the case of our analogy, share the track. One of the functions of layer two is called media access control (MAC). If you think about the term media access control you can break it down into the two parts it represents, the media or the track, and access control, or the sharing of the track.

In the OSI model layers one and two represent the media, or the physical components. Layers three through seven represent the logical, or software, components.

In layer three of the OSI model, the Network layer, the logical decision is made as to which physical path the information should follow from its source to its destination.

In order to continue our analogy to understand this complex set of rules, think of the track system that has already been built as layers one and two. Once this track system is in place we need a system to control the routing of the train system that runs on the tracks. Think of layers three through seven as processes which affect the train itself, which would represent the actual package of information being transported along the tracks. The main purpose of layer three is switching and routing.

Layer four of the OSI model, the transport layer ensures the reliability of data delivery by detecting and attempting to correct problems that occurred. In terms of our analogy, think of this as a set of standards and procedures that allows our train to arrive safely at its destination in a timely manner.

Learning and understanding the OSI model can be confusing

This article was not meant to define the layers of the OSI model from a purely technical perspective, but to offer an analogy to help you understand why the model is needed and how it is used to establish standards for data communications. In our next article we will go over the basic definitions of all the layers of the OSI model.

OSI defined: the details of the OSI model.

In our previous article, OSI model explained in simple terms, we kept it simple; the goal of that article was not a technical definition of each layer. We will continue on from there to explain some of the details of the OSI model. We will do our best to break it down into bite-sized chunks to help you understand the concepts.

The OSI (Open Systems Interconnection) reference model was developed in the late 1970s by the International Organization for Standardization (ISO). It provides a set of general design guidelines for data-communications systems and also gives a standard way to describe how various portions (layers) of data-communication systems interact.

The hierarchical layering of protocols on a computer that forms the OSI model is known as a stack. A given layer in a stack sends commands to layers below it and services commands from layers above it.
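The layered stack described above can be illustrated with a toy sketch in Python. This is not a real protocol implementation; each "layer" simply wraps the payload it receives from the layer above with a made-up text header on the way down, and strips its own header on the way up:

```python
# A toy protocol stack (simplified to four layers for brevity).
STACK = ["Application", "Transport", "Network", "Data Link"]

def send(payload):
    """Pass data down the stack; each layer adds its header (encapsulation)."""
    for layer in STACK:
        payload = f"[{layer}]{payload}"
    return payload

def receive(frame):
    """Pass data up the stack; each layer removes its own header."""
    for layer in reversed(STACK):
        header = f"[{layer}]"
        assert frame.startswith(header), f"malformed frame at {layer}"
        frame = frame[len(header):]
    return frame

frame = send("hello")
print(frame)           # [Data Link][Network][Transport][Application]hello
print(receive(frame))  # hello
```

The outermost header belongs to the lowest layer, which is exactly the order in which real frames are built and consumed.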

The Purpose of the OSI Model:

breaks network communication into smaller, simpler parts that are easier to develop.

facilitates standardization of network components to allow multiple-vendor development and support.

allows different types of network hardware and software to communicate with each other.

prevents changes in one layer from affecting the other layers so that they can develop more quickly.

breaks network communication into smaller parts to make it easier to learn and understand.

The seven layers, in order from highest to lowest, are Application, Presentation, Session, Transport, Network, Data Link, and Physical. They can be remembered by using the following memory aide: All People Seem To Need Data Processing.

The Application layer includes network software that directly serves the user, providing such things as the user interface and application features. The Application layer is usually made available by using an Application Programming Interface (API), or hooks, which are made available by the networking vendor.

The Presentation layer translates data to ensure that it is presented properly for the end user. It also handles related issues such as data encryption and compression, and how data is structured, as in a database.

The Session layer comes into play primarily at the beginning and end of a transmission. At the beginning of the transmission, it makes known its intent to transmit. At the end of the transmission, the Session layer determines if the transmission was successful. This layer also manages errors that occur in the upper layers, such as a shortage of memory or disk space necessary to complete an operation, or printer errors.

The Transport layer provides the upper layers with a communication channel to the network. The Transport layer collects and reassembles any packets, organizing the segments for delivery and ensuring the reliability of data delivery by detecting and attempting to correct problems that occurred.

The Network layer's main purpose is to decide which physical path the information should follow from its source to its destination.

The Data Link layer provides a system through which network devices can share the communication channel. This function is called media-access control (MAC).

The Physical layer provides the electro-mechanical interface through which data moves among devices on the network.

In the articles that follow we will break down each layer in more detail, covering topics you will need to know as a networking professional.

The Physical Layer of the OSI model

The Physical Layer is the lowest layer in the seven layer OSI model of computer networking.

The Physical Layer consists of the basic hardware transmission technologies of a network, sometimes referred to as the physical media. Physical media provides the electro-mechanical interface through which data moves among devices on the network.

Initially, physical media is thought of as some sort of wire. As technology progresses, the types of media grow. Bounded media transmits signals by sending electricity or light over a cable. Unbounded media transmits data without the benefit of a conduit; it might transmit data through open air, water, or even a vacuum. Simply put, media is the wire, or anything that takes the place of the wire, such as fiber optic, infrared, or radio spectrum technology.

Definitions from the wired world of data transmission:

Public Switched Telephone Network (PSTN), also referred to as Plain Old Telephone Service (POTS), connections run over the standard copper phone lines found in most homes.

Integrated Services Digital Network (ISDN) uses a single wire or fiber optic line to carry voice, data, and video signals.

Basic Rate Interface (BRI) is most commonly used in residential ISDN connections. It's composed of two bearer (B) channels at 64 Kbps each for a total of 128 Kbps (used for voice and data) and one delta (D) channel at 16 Kbps (used for controlling the B channels and signal transmission). The total bandwidth is up to 144 Kbps.

Primary Rate Interface (PRI) is most commonly used between a PBX (Private Branch Exchange) at the customer's site and the central office of the phone company. It is composed of 23 B channels at 64 Kbps and one D channel at 64 Kbps. The total bandwidth is up to 1,536 Kbps.
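The BRI and PRI bandwidth figures above are just channel arithmetic, which a short Python sketch makes explicit (the function name is our own, for illustration only):

```python
# ISDN total bandwidth = (number of B channels x B rate) + D channel rate.
def isdn_bandwidth_kbps(b_channels, b_kbps, d_kbps):
    """Total ISDN bandwidth in Kbps."""
    return b_channels * b_kbps + d_kbps

bri = isdn_bandwidth_kbps(b_channels=2, b_kbps=64, d_kbps=16)   # Basic Rate
pri = isdn_bandwidth_kbps(b_channels=23, b_kbps=64, d_kbps=64)  # Primary Rate
print(bri, pri)  # 144 1536
```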

Digital Subscriber Line (DSL) technologies use existing, regular copper phone lines to transmit data. DSL hardware can transmit data using three channels over the same wire. In a typical setup, a user connected through a DSL hookup can send data at 640 Kbps, receive data at 1.5 Mbps, and still carry on a standard phone conversation over one line.

T-Carrier Technology is a digital transmission service used to create point-to-point private networks and to establish direct connections to Internet Service Providers. It uses four wires, one pair to transmit and another to receive.

T-1 lines support data transfer at rates of 1.544 megabits per second. Each T-1 line contains 24 channels. The E1 line is the European counterpart that transmits data at 2.048 Mbps.

T-3 has 672 (64 Kbps) channels, for a total data rate of 44.736 Mbps. The E3 line is the European counterpart that transmits data at 34.368 Mbps.
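The T-1 figure can also be verified by channel arithmetic. One detail not stated above: the 24 payload channels account for 1,536 Kbps, and the quoted 1.544 Mbps line rate includes an additional 8 Kbps of framing overhead.

```python
# T-1 payload: 24 channels at 64 Kbps each.
payload_kbps = 24 * 64
print(payload_kbps)      # 1536
# The quoted 1.544 Mbps line rate adds 8 Kbps of framing overhead.
print(payload_kbps + 8)  # 1544
```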

Cable connections provide access to the Internet through the same coaxial cable that brings cable TV into your home. A signal splitter installed by the cable company isolates the Internet signals from the TV signals. The two-way cable connection is always available and can be very fast. Speeds up to 30 Mbps are claimed to be possible, although speeds in the 1 to 2 Mbps range are more typical.

Unbounded media examples of data transmission:

Narrow-band radio, laser, and microwave transmission cannot occur through steel or load-bearing walls.

Satellite has a transmission delay of 240 to 300 milliseconds.

Terrestrial microwave is commonly used for long-distance voice and video transmissions, and for short-distance high-speed links between buildings.

Laser is resistant to eavesdropping and capable of high transmission rates, but is susceptible to attenuation and interference.

Spread spectrum radio frequencies are divided into channels, or hops.

The Physical Layer: data communications definitions

While in your world many of the older data communications technologies may have been replaced with modern ones, there are many reasons why you may need to know about them. You may get a better understanding of how things are done on your current network if you understand the evolution of the network. If you ever work in consulting you may be surprised to find out how much of what you call obsolete is still in use. You will also find questions on older technologies on various certification tests.

Data communications definitions:

In the early days of connecting your computer to the internet most folks had Public Switched Telephone Network (PSTN), also referred to as Plain Old Telephone Service (POTS), and all connections were run over the standard copper phone lines. In order for the digital world of computers to talk over analog phone lines you needed to use a MODEM.

The term MODEM comes from the words modulator and demodulator. It is a device that modulates a carrier signal to encode digital information, and also demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data.

Modem standards, or V dot modem standards, are defined by the ITU (International Telecommunications Union). The FCC has limited the speed of analog transmissions to 53 Kbps.

Twisted pair cabling is a common form of wiring in which two conductors are wound around each other for the purposes of canceling out electromagnetic interference which can cause crosstalk. The number of twists per meter makes up part of the specification for a given type of cable.


The two major types of twisted-pair cabling are unshielded twisted-pair (UTP) and shielded twisted-pair (STP).

UTP - Unshielded Twisted Pair; uses RJ-45, RJ-11, RS-232, and RS-449 connectors; maximum length is 100 meters; speed is up to 100 Mbps. Cheap and easy to install, though length becomes a problem. Can be CAT 2, 3, 4, or 5 quality grades.

In shielded twisted-pair (STP) the inner wires are encased in a sheath of foil or braided wire mesh. Shielded twisted pair uses RJ-45, RJ-11, RS-232, and RS-449 connectors; maximum length is 100 meters; speed is up to 500 Mbps. Not as inexpensive as UTP, but easy to install, though length becomes a problem. Can be CAT 2, 3, 4, or 5 quality grades.

Category 1 Traditional UTP telephone cable can transmit voice signals but not data. Most telephone cable installed prior to 1983 is Category 1.

Category 2 UTP cable is made up of four twisted-pair wires, certified for transmitting data up to 4 Mbps (megabits per second).

Category 3 UTP cable is made up of four twisted-pair wires, each twisted three times per foot. Category 3 is certified to transmit data up to 10 Mbps.

Category 4 UTP cable is made up of four twisted-pair wires, certified to transmit data up to 16 Mbps.

Category 5 UTP cable is made up of four twisted-pair wires, certified to transmit data up to 100 Mbps.
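The category ratings above form a simple lookup table. As a sketch (the helper function is our own, not from any standard), here is how you might pick the lowest cable grade certified for a required data rate:

```python
# Maximum certified data rates (Mbps) for the UTP categories listed above.
UTP_CATEGORIES = {
    2: 4,     # Category 2: up to 4 Mbps
    3: 10,    # Category 3: up to 10 Mbps
    4: 16,    # Category 4: up to 16 Mbps
    5: 100,   # Category 5: up to 100 Mbps
}

def lowest_category_for(required_mbps):
    """Lowest UTP category certified for the required data rate, or None."""
    for cat in sorted(UTP_CATEGORIES):
        if UTP_CATEGORIES[cat] >= required_mbps:
            return cat
    return None

print(lowest_category_for(10))   # 3
print(lowest_category_for(100))  # 5
```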

Twisted-pair Ethernet cable has the following specifications:

a maximum of 1,024 attached workstations

a maximum of 4 repeaters between communicating workstations

a maximum segment length of 328 feet (100 meters).

100BASE-TX specification uses two pairs of Category 5 UTP or Category 1 STP cabling at a 100 Mbps data transmission speed. Each segment can be up to 100 meters long.

100BASE-T4 specification uses four pairs of Category 3, 4, or 5 UTP cabling at a 100 Mbps data transmission speed with standard RJ-45 connectors. Each segment can be up to 100 meters long.

Fiber optic cable (IEEE 802.8) has a center core surrounded by a glass cladding composed of varying layers of reflective glass, which refracts light back into the core. Maximum length is 25 kilometers and speed is up to 2 Gbps, but it is very expensive. Best used for a backbone due to cost.

100BASE-FX specification uses two-strand 62.5/125 micron multi- or single-mode fiber media.

Half-duplex, multi-mode fiber media has a maximum segment length of 412 meters. Full-duplex, single-mode fiber media has a maximum segment length of 10,000 meters.

The Physical Layer: Fast Ethernet Specifications

Continuing next with data communications definitions let us cover some Fast Ethernet specifications and definitions.

Coaxial cable (coax) was commonly used for thick Ethernet, thin Ethernet, cable TV, and ARCnet. Coaxial cabling uses BNC connectors; its heavy shielding protects data, but it is expensive and its connectors are hard to make. Through the first half of the 1980s, Ethernet's 10BASE5 implementation used a coaxial cable 0.375 inches (9.5 mm) in diameter, later called "thick Ethernet" or "thicknet". Its successor, 10BASE2, called "thin Ethernet" or "thinnet", used a cable similar to the cable television cable of the era.

10BASE5, also called Thicknet or Thick Ethernet, uses thick coaxial cable. Thick coax cable (RG-8) requires the following:

a 50-ohm terminator on each end of the cable;

a maximum of 3 segments with attached devices (populated segments);

a network board using the external transceiver;

a maximum of 100 devices on a segment, including repeaters;

a maximum length of 1,640 feet (500 meters) per segment;

a maximum of 4,920 feet (1500 meters) per segment trunk;

one ground per segment;

a maximum of 16 feet (5 meters) between a tap and its device; and

a minimum of 8 feet (2.5 meters) between taps.

10BASE2 uses thin Ethernet cable. Thin coax cable, or Thin Ethernet, implemented with T-connectors and terminators, such as RG-58 A/U or RG-58 C/U, has the following specifications:

a 50-ohm terminator on each end of the cable;

a maximum length of 1,000 feet (185 meters) per segment;

a maximum of 30 devices per segment;

a network board using the internal transceiver;

a maximum of 3 segments with attached devices (populated segments);

one ground per segment;

a minimum of 1.5 feet (.5 meters) between T-connectors;

a maximum of 1,818 feet (555 meters) per trunk segment; and

a maximum of 30 connections per segment.

A British Naval Connector (BNC), also known as the Bayonet Nut Connector or Bayonet Neill-Concelman (after the inventors of the connector), is usually used for thinnet coaxial cable. A terminator is a resistor attached to the end of the cable. Its purpose is to prevent signal reflections, effectively making the cable "look" infinitely long to the signals being sent across it.

Physical Layer Topology

A network topology refers to the layout of the transmission medium and devices on a network.

As a networking professional for many years, I can honestly say that about the only time network topology has come up is for certification testing. Here are some basic definitions.

Physical Topology:

Bus: Uses a single main bus cable, sometimes called a backbone, to transmit data. Workstations and other network devices tap directly into the backbone by using drop cables that are connected to the backbone.

This topology is an old one and essentially has each of the computers on the network daisy-chained to each other. This type of network is usually peer to peer and uses Thinnet (10BASE2) cabling. It is configured by connecting a "T-connector" to the network adapter and then connecting cables to the T-connectors on the computers to the right and left. At both ends of the chain the network must be terminated with a 50-ohm terminator.

Advantages: cheap, simple to set up. Disadvantages: excess network traffic, a failure may affect many users, and problems are difficult to troubleshoot.

Star: Branches out via drop cables from a central hub (also called a multiport repeater or concentrator) to each workstation. A signal is transmitted from a workstation up the drop cable to the hub. The hub then transmits the signal to other networked workstations.

The star is probably the most commonly used topology today. It uses twisted-pair cabling such as 10BASE-T or 100BASE-T and requires that all devices are connected to a hub.

Advantages: centralized monitoring; failures do not affect others unless it is the hub that fails; easy to modify. Disadvantages: if the hub fails, everything connected to it is down.

Ring: Connects workstations in a continuous loop. Workstations relay signals around the loop in round-robin fashion.

The ring topology looks the same as the star, except that it uses special hubs and ethernet adapters. The Ring topology is used with Token Ring networks, (a proprietary IBM System).

Advantages: equal access. Disadvantages: difficult to troubleshoot; network changes affect many users; a failure affects many users.

Mesh: Provides each device with a point-to-point connection to every other device in the network.

Hybrid topologies are combinations of the above and are common on very large networks. For example, a star-bus network has hubs connected in a row (like a bus network) and has computers connected to each hub.

Cellular: Refers to a geographic area, divided into cells, combining a wireless structure with point-to-point and multipoint design for device attachment.

Logical Topology:

Ring: Generates and sends the signal on a one-way path, usually counterclockwise.

Bus: Generates and sends the signal to all network devices.

Physical topology defines the cable's actual physical configuration (star, bus, mesh, ring, cellular, hybrid). Logical topology defines the network path that a signal follows (ring or bus).
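One practical difference between the physical topologies is how many cable links they need. A quick Python sketch (the function names are ours, for illustration) compares a star, which needs one drop cable per workstation, with a full mesh, which needs a link between every pair of devices:

```python
# Link counts for two physical topologies with n devices.
def star_links(n):
    """A star needs one drop cable per workstation to the central hub."""
    return n

def mesh_links(n):
    """A full mesh needs a point-to-point link between every pair: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (5, 10):
    print(f"{n} devices: star {star_links(n)} links, mesh {mesh_links(n)} links")
```

The quadratic growth of the mesh is one reason full meshes are rare outside of backbone links, despite their redundancy.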

The Data Link Layer of the OSI model

The Data Link Layer is Layer 2 of the seven-layer OSI model of computer networking. The Data Link layer deals with issues on a single segment of the network.

The IEEE 802 LAN/MAN Standards Committee develops Local Area Network standards and Metropolitan Area Network standards. In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started Project 802 to standardize local area networks (LANs). IEEE 802 splits the OSI Data Link Layer into two sub-layers, named Logical Link Control (LLC) and Media Access Control (MAC).

The lower sub-layer of the Data Link layer, the Media Access Control (MAC), performs Data Link layer functions related to the Physical layer, such as controlling access and encoding data into a valid signaling format.

The upper sub-layer of the Data Link layer, the Logical Link Control (LLC), performs Data Link layer functions related to the Network layer, such as providing and maintaining the link to the network.

The MAC and LLC sub-layers work in tandem to create a complete frame. The portion of the frame for which LLC is responsible is called a Protocol Data Unit (LLC PDU or PDU).

IEEE 802.2 defines the Logical Link Control (LLC) standard that performs functions in the upper portion of the Data Link layer, such as flow control and management of connection errors.

LLC supports the following three types of connections for transmitting data:

Unacknowledged connectionless service: does not perform reliability checks or maintain a connection; very fast and the most commonly used.

Connection-oriented service: once the connection is established, blocks of data can be transferred between nodes until one of the nodes terminates the connection.

Acknowledged connectionless service: provides a mechanism through which individual frames can be acknowledged.

IEEE 802.3 is an extension of the original Ethernet and includes modifications to the classic Ethernet data packet structure.

The Media Access Control (MAC) sub-layer contains methods that logical topologies can use to regulate the timing of data signals and eliminate collisions.

The MAC address concerns a device's actual physical address, which is usually assigned by the hardware manufacturer. Every device on the network must have a unique MAC address to ensure proper transmission and reception of data. The MAC sub-layer communicates directly with the network adapter card.

Carrier Sense Multiple Access / Collision Detection (CSMA/CD) is a set of rules determining how network devices respond when two devices attempt to use a data channel simultaneously (called a collision). Standard Ethernet networks use CSMA/CD. This standard enables devices to detect a collision.

After detecting a collision, a device waits a random delay time and then attempts to re-transmit the message. If the device detects a collision again, it waits twice as long before trying to re-transmit the message. This is known as exponential backoff.
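The doubling delay described above can be sketched in a few lines of Python. This is an illustrative simplification, not the exact IEEE 802.3 timing rules; the slot_time and max_exponent parameters are assumptions chosen for the example.

```python
import random

def backoff_delay(attempt, slot_time=1.0, max_exponent=10):
    """Pick a random delay for the given retransmission attempt.

    After each collision the window of possible delays doubles
    (truncated at 2**max_exponent slots), so repeated collisions
    spread the competing devices' retries further apart.
    """
    slots = random.randint(0, 2 ** min(attempt, max_exponent) - 1)
    return slots * slot_time

# The upper bound doubles with each failed attempt:
# attempt 1 -> 0..1 slots, attempt 2 -> 0..3 slots, attempt 3 -> 0..7 slots
```

Because each device picks its delay at random from the window, two colliding devices are unlikely to pick the same delay twice in a row, which is what eventually resolves the collision.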

IEEE 802.5 uses token passing to control access to the medium. IBM Token Ring is essentially a subset of IEEE 802.5.

The IEEE 802.11 specifications are wireless standards that specify an "over-the-air" interface between a wireless client and a base station or access point, as well as among wireless clients. The 802.11 standards can be compared to the IEEE 802.3 standard for Ethernet for wired LANs. The IEEE 802.11 specifications address both the Physical (PHY) and Media Access Control (MAC) layers and are tailored to resolve compatibility issues between manufacturers of Wireless LAN equipment.

The IEEE 802.15 Working Group provides, in the IEEE 802 family, standards for low-complexity and low-power consumption wireless connectivity.

IEEE 802.16 specifications support the development of fixed broadband wireless access systems to enable rapid worldwide deployment of innovative, cost-effective and interoperable multi-vendor broadband wireless access products.

The Network Layer of the OSI model

The Network Layer is Layer 3 of the seven-layer OSI model of computer networking. The key elements of the Network Layer are addressing and routing.

The Network Layer defines how information moves to the correct network address, how messages are addressed, and how logical addresses and names are translated into physical addresses. It also enables the option of specifying a service address, known as a socket or port, to point the data to the correct program on the destination computer.

Addressing

Each computer on a TCP/IP network has to have a unique, numeric IP address. The IP address is like a mailing address: some of the bits represent the network segment that the computer is on, like the street name of a mailing address, while the other bits represent the particular host on the segment, like the house number.

IP addresses have 4 bytes, each of which is referred to as an octet. Since each byte in the address has 8 bits, an IP address is 32 bits long. IP addresses are usually displayed in decimal format where the value of each byte is converted from binary to decimal. This makes them easier to remember. For example, an IP address of 74.52.151.178 is much easier to remember than its binary equivalent of: 01001010.00110100.10010111.10110010
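The decimal-to-binary relationship described above is easy to verify in code. Here is a small Python sketch that renders each octet as its 8-bit binary form:

```python
def ip_to_binary(ip):
    """Render a dotted-decimal IPv4 address as its 32-bit binary form,
    one 8-bit group per octet."""
    return ".".join(f"{int(octet):08b}" for octet in ip.split("."))

print(ip_to_binary("74.52.151.178"))
# 01001010.00110100.10010111.10110010
```

Running it on the example address from the text reproduces the binary string shown above, which is exactly why the dotted-decimal notation was adopted: the 32 bits are the real address, the decimal form is just for humans.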

If an IP address represents a mailing address, think of the service address as a specific room in the house. The service address is a number appended to the IP address, such as 74.52.151.178:25, where 74.52.151.178 is the IP address and 25 is the service address. In the early days of computer networking the term socket number was used. A well-known range of port numbers is reserved by convention to identify specific service types on a host computer.
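Splitting such an "IP:port" string into its two parts is a one-liner; the helper name below is invented for the example:

```python
def split_service_address(address):
    """Split an "IP:port" string into the host part and the numeric port."""
    host, _, port = address.rpartition(":")
    return host, int(port)

host, port = split_service_address("74.52.151.178:25")
# host = "74.52.151.178", port = 25 (by convention, the well-known SMTP port)
```

The receiving computer uses the host part to accept the data and the port number to hand it to the correct program, such as the mail server listening on port 25.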

On most IP networks, computers have not only IP addresses but also descriptive names that are easier for people to remember and use. This name is called the host name. It's a friendly name assigned to a computer that people can use instead of the numeric IP address.

Routing

Routing is the process of selecting which physical path the information should follow from its source to its destination. The Network Layer manages data traffic and congestion involved in packet switching and routing.

Routers are devices that play a significant role in directing the flow of data between two or more networks. Routers make sure that information makes it to the intended destination as well as ensure that information does not go where it is not needed. This is crucial for keeping large volumes of data from clogging connections.

One of the tools a router uses to decide where a packet should go is a configuration table. A configuration table identifies which connections lead to particular groups of addresses, sets priorities for the connections to be used, and establishes rules for handling both routine and special cases of traffic.

A configuration table can be as simple as a half-dozen lines in the smallest routers, but can grow to massive size and complexity in the very large routers that handle the bulk of Internet messages.
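A toy version of such a table can be sketched in Python. The prefixes and interface names below are invented for illustration; real routers use far more elaborate structures, but the core idea of matching a destination against groups of addresses, preferring the most specific match, is the same:

```python
import ipaddress

# A toy configuration table: which connection (interface) leads to
# which group of addresses. All entries here are invented examples.
config_table = [
    (ipaddress.ip_network("192.168.1.0/24"), "eth0"),
    (ipaddress.ip_network("10.0.0.0/8"), "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"), "ppp0"),   # default route
]

def route(destination):
    """Pick the most specific (longest-prefix) entry matching the destination."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, iface) for net, iface in config_table if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(route("10.1.2.3"))   # eth1
print(route("8.8.8.8"))    # ppp0 (falls through to the default route)
```

The default 0.0.0.0/0 entry is what keeps information from going where it is not needed: anything that matches a more specific local entry stays local, and only the rest is handed upstream.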

Internet Protocol (IP) envelopes and addresses the data, enables the network to read the envelope and forward the data to its destination, and defines how much data can fit in a single packet.

Internet Protocol (IP) is a connectionless protocol, which means that a session is not created before sending data. IP is responsible for addressing and routing of packets between computers. It does not guarantee delivery and does not give acknowledgement of packets that are lost or sent out of order, as this is the responsibility of higher layer protocols such as Transmission Control Protocol (TCP).

Time To Live (TTL) is a concept in IP that prevents packets from endlessly looping around the Internet. When a packet leaves a computer, the TTL is set to a starting value (the 8-bit TTL field allows a maximum of 255). Each router decreases the TTL by one or more. If the TTL reaches zero, the router sends the source computer an ICMP Time Exceeded message and discards the packet.

Packet Switching

Throughout the standard for Internet Protocol you will see the description of packet switching: "fragment and reassemble internet datagrams when necessary for transmission through small packet networks." A message is divided into smaller parts known as packets before it is sent. Each packet is transmitted individually and can even follow a different route to its destination. Once all the packets forming a message arrive at the destination, they are recompiled into the original message.
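The TTL mechanism can be illustrated with a small simulation. This sketch simply decrements a counter per hop; the dictionary-based "packet" is an assumption made for the example, not a real IP header:

```python
def hop(packet):
    """Simulate one router handling a packet: decrement the TTL and
    discard the packet (reporting back to the source) when it hits zero."""
    packet["ttl"] -= 1          # each router decreases TTL by at least one
    if packet["ttl"] <= 0:
        return "discarded: ICMP Time Exceeded sent to source"
    return "forwarded"

packet = {"ttl": 3}
print(hop(packet))  # forwarded (ttl now 2)
print(hop(packet))  # forwarded (ttl now 1)
print(hop(packet))  # discarded: ICMP Time Exceeded sent to source
```

Even if a routing loop sends the packet around in circles, the TTL guarantees it dies after a bounded number of hops instead of circulating forever.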

Packet switching explained in simple terms

As you begin your quest to learn computer networking, one of the important concepts you will need to understand is packet switching. One of the key differences between communications before the internet and the way information flows under the new standards known as Internet Protocol is the concept of packet switching.

Internet data, whether in the form of a Web page, a downloaded file or an e-mail message, travels over a system known as a packet-switching network. The data is divided into small packages, and each of these packages gets a wrapper that includes information on the sender's address, the receiver's address, the package's place in the entire message, and how the receiving computer can be sure that the package arrived intact.

There are two huge advantages to packet switching. The network can balance the load across various pieces of equipment on a millisecond-by-millisecond basis, and if there is a problem with one piece of equipment while a message is being transferred, packets can be routed around the problem, ensuring the delivery of the entire message.
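The wrapper-and-reassembly idea can be shown concretely. This sketch divides a message into addressed, numbered packets and rebuilds it even when the packets arrive out of order; the field names in the wrapper are invented for the example:

```python
import random

def packetize(message, size, sender, receiver):
    """Divide a message into packets, each wrapped with addressing
    and sequencing information (the "wrapper" described above)."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [
        {"from": sender, "to": receiver,
         "seq": n, "total": len(chunks), "data": chunk}
        for n, chunk in enumerate(chunks)
    ]

def reassemble(packets):
    """Rebuild the original message even if packets arrived out of order."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("HELLO WORLD", 4, "A", "B")
random.shuffle(packets)        # packets may take different routes and arrive out of order
print(reassemble(packets))     # HELLO WORLD
```

The seq and total fields are what let the receiver confirm that the entire message arrived, exactly like the "one of four, two of four" labels in the classroom demonstration below.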

Packet switching, an integral part of internet technology and internet history explained in simple terms

In teaching the concept of packet switching in the classroom, I would take a piece of paper with a message written on it, and from the front of the classroom, ask the person in the front seat simply to turn around and pass the paper to the person behind him, and in turn continue the process until the paper made it to the person in the back row.

In the next phase of the illustration, I would take the same piece of paper that had the message written on it, and tear it into four pieces. On each individual piece of paper I would address it as if sending a letter through the postal service, by writing my name as the sender, and also the name of the person in the back of the room as the recipient. I would also label each individual piece of paper as one of four, two of four, three of four, and four of four.

This time I would take the four individual pieces of paper and walk across the front row, and as I handed one piece of paper to four different students, I would explain to them who was to receive the paper, and asked them to pass it to the person marked as the recipient by using the people behind them. When all four pieces of paper arrived at the destination, I would ask the recipient to read the label I had put on each piece of paper, and confirm they had received the entire message.

My original passing of the paper represented Circuit switching, the telecommunications technology which used circuits to create the virtual path, a dedicated channel between two points, and then delivered the entire message.

My second passing of the "packets" or scraps of paper illustrated packet switching, and each individual in the room acted as a router. The key difference between the two methods was the additional routes that the pieces of the message took. A very primitive, but effective demonstration of packet switching and the way in which a message would be transmitted across the internet.

Once the concept of packet switching was developed the next stage in the evolution was to create a language that would be understood by all computer systems. This new standard set of rules would enable different types of computers, with different hardware and software platforms, to communicate in spite of their differences.

The transport layer is layer four of the OSI model.

In computing and telecommunications, the transport layer is layer four of the seven layer OSI model. It responds to service requests from the session layer and issues service requests to the network layer.

The Transport Layer is responsible for packet handling: it ensures error-free delivery, repackages messages, divides messages into smaller packets, and performs error handling.

The purpose of the Transport layer is to provide transparent transfer of data between end users, thus relieving the upper layers from any concern with providing reliable and cost-effective data transfer.

On the Internet there are a variety of Transport services, but the two most common are Transmission Control Protocol (TCP) and User Datagram Protocol (UDP).

TCP is the more complicated of the two, providing a connection-oriented byte stream which is almost error free, with flow control, multiple ports, and same-order delivery. UDP is a very simple datagram service, which provides limited error reduction and multiple ports.

Transmission Control Protocol (TCP) breaks data up into packets that the network can handle efficiently, verifies that all the packets arrive at their destination, and reassembles the data.

Transmission Control Protocol (TCP) is connection oriented, meaning an acknowledgement (ACK) verifies that the host has received each segment of the message, providing a reliable delivery service. Acknowledgements are sent by the receiving computer, and unacknowledged packets are resent. Sequence numbers are used with acknowledgements to track successful packet transfer.

If the ACK is not received after a given time period, the data is resent. If segments are not delivered to the destination device correctly, the Transport layer can initiate retransmission or inform the upper layers. The Transport layer uses segmentation, flow control, and error checking to ensure packet delivery. It also serves the purpose of name resolution, translating a name either to an IP/IPX address or a network protocol address; name resolution helps upper layer services communicate segment destinations with lower layer services.
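The acknowledge-and-retransmit loop can be illustrated with a small simulation. This is a bare sketch of the idea, not real TCP (no timers, windows, or byte streams); the deliver callback stands in for the network and is an assumption of the example:

```python
def transmit(segments, deliver):
    """Send numbered segments; resend any whose ACK did not come back.

    `deliver` simulates the network: it returns True when a segment
    reaches the receiver (and an ACK is returned), False when it is lost.
    """
    received = {}
    pending = dict(enumerate(segments))    # sequence number -> data
    while pending:                         # keep resending until all are ACKed
        for seq, data in list(pending.items()):
            if deliver(seq, data):
                received[seq] = data       # ACK came back
                del pending[seq]           # acknowledged: stop resending
    return [received[s] for s in sorted(received)]

# Simulate a network that loses the first copy of segment 1.
lost_once = {1}
def flaky(seq, data):
    if seq in lost_once:
        lost_once.discard(seq)
        return False                       # no ACK: sender will retransmit
    return True

print(transmit(["seg-a", "seg-b", "seg-c"], flaky))
# ['seg-a', 'seg-b', 'seg-c']
```

Even though one segment was lost in transit, the missing ACK triggers a retransmission and the sequence numbers let the receiver assemble the segments in the correct order.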

User Datagram Protocol (UDP) provides a transport service similar to TCP's but is connectionless and unacknowledged. UDP lets applications send datagrams without the overhead involved in acknowledging packets and maintaining a virtual circuit. UDP is therefore used to broadcast messages across an internetwork, because acknowledgment is unnecessary and overhead is undesirable.

The Internet family of protocols the TCP/IP protocol suite

TCP/IP is not a single protocol, but rather an entire family of protocols. The network concept of protocols establishes a set of rules so that each system can speak the other's language, enabling them to communicate.

Protocols describe both the format that a message must take as well as the way in which messages are exchanged between computers.

Protocol stack describes a layered set of protocols working together to provide a set of network functions. Each protocol/layer services the layer above by using the layer below.
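This layering, where each protocol wraps the data handed down from the layer above with its own header, can be sketched as simple string nesting. The bracket notation and layer names here are purely illustrative, not real header formats:

```python
def encapsulate(data, layers):
    """Each layer wraps the data from the layer above with its own header."""
    for layer in layers:
        data = f"[{layer}|{data}]"
    return data

def decapsulate(data, layers):
    """The receiving stack strips the headers off in the reverse order."""
    for layer in reversed(layers):
        prefix = f"[{layer}|"
        assert data.startswith(prefix) and data.endswith("]")
        data = data[len(prefix):-1]
    return data

stack = ["TCP", "IP", "Ethernet"]
frame = encapsulate("GET /index.html", stack)
print(frame)                      # [Ethernet|[IP|[TCP|GET /index.html]]]
print(decapsulate(frame, stack))  # GET /index.html
```

Each layer only ever looks at its own wrapper, which is exactly how a protocol can service the layer above by using the layer below without either one knowing the other's details.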

Transmission Control Protocol (TCP) and the Internet Protocol (IP) were the first two members of the family to be defined; consider them the parents of the family.

Internet Protocol (IP) envelopes and addresses the data, enables the network to read the envelope and forward the data to its destination and defines how much data can fit in a single packet. IP is responsible for routing of packets between computers.

Internet Protocol (IP) is a connectionless protocol, which means that a session is not created before sending data. It does not guarantee delivery and does not give acknowledgement of packets that are lost or sent out of order as this is the responsibility of higher layer protocols such as TCP.

Transmission Control Protocol (TCP) breaks data up into packets that the network can handle efficiently, verifies that all the packets arrive at their destination, and reassembles the data.

Transmission Control Protocol (TCP) is connection oriented, meaning an acknowledgement (ACK) verifies that the host has received each segment of the message, providing a reliable delivery service. Acknowledgements are sent by the receiving computer, and unacknowledged packets are resent. Sequence numbers are used with acknowledgements to track successful packet transfer.

Once the basic concept of the TCP/IP family was developed, many more members of the family were added. Some of the more common protocols are listed here.

Simple Mail Transfer Protocol (SMTP) is used for transferring email across the internet.

File Transfer Protocol (FTP) is used to upload and download files.

Hyper Text Transfer Protocol (HTTP) is the protocol used to transport web pages.

Address Resolution Protocol (ARP) translates a host's software address to a hardware (or MAC) address (the node address that is set on the network interface card).

Reverse Address Resolution Protocol (RARP) is adapted from the ARP protocol and provides the reverse functionality: it determines a software address from a hardware (or MAC) address. A diskless workstation uses this protocol during bootup to determine its IP address.

BOOTP is used by diskless workstations. It enables these types of workstations to discover their IP addresses, the address of a server host, and the name of the file that should be loaded into memory and run at bootup.

Dynamic Host Configuration Protocol (DHCP) is used to centrally administer the assignment of IP addresses, as well as other configuration information such as subnet masks and the address of the default gateway. When you use DHCP on a TCP/IP network, IP addresses are assigned to clients dynamically instead of manually.
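The central bookkeeping idea behind dynamic assignment can be sketched as a small address pool. This is a drastic simplification of DHCP (no leases, no DISCOVER/OFFER exchange); the class and addresses below are invented for the example:

```python
class DhcpPool:
    """Hand out addresses from a pool, reusing released ones.

    Illustrates only the central administration idea of DHCP:
    clients are identified by MAC address and receive an IP
    address from the pool instead of being configured manually.
    """
    def __init__(self, prefix, first, last):
        self.free = [f"{prefix}.{n}" for n in range(first, last + 1)]
        self.assigned = {}

    def request(self, mac):
        if mac not in self.assigned:       # returning client keeps its address
            self.assigned[mac] = self.free.pop(0)
        return self.assigned[mac]

    def release(self, mac):
        self.free.append(self.assigned.pop(mac))

pool = DhcpPool("192.168.1", 100, 102)
print(pool.request("aa:bb:cc:00:00:01"))   # 192.168.1.100
print(pool.request("aa:bb:cc:00:00:02"))   # 192.168.1.101
```

A real DHCP server would also hand each client its subnet mask and default gateway along with the address, which is what makes centralized administration so much easier than configuring every client by hand.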

Internet Control Message Protocol (ICMP) enables systems on a TCP/IP network to share status and error information such as with the use of PING and TRACERT utilities.

Simple Network Management Protocol (SNMP) was designed to enable the analysis and troubleshooting of network hardware. For example, SNMP enables you to monitor workstations, servers, minicomputers, and mainframes, as well as connectivity devices such as bridges, routers, gateways, and wiring concentrators.

The evolution of the Internet and the birth of TCP/IP

The creation of the protocol suite TCP/IP as the basic set of rules for computers to communicate was one of the last major phases in the development of this global network we now call the Internet. The internet was not something born of a single idea, but rather a gradual evolution, and the work of many people over many years.

The idea started with a vision to create a decentralized computer network, whereby every computer was connected to the others, but if one part of the system was hit, the rest would remain unaffected.

From the initial idea of a decentralized computer network came the concept of packet switching. During the 1960s Paul Baran developed the concept of packet switching networks while conducting research at the historic RAND organization.

What is a Protocol?

Once the concept of packet switching was developed the next stage in the evolution was to create a language that would be understood by all computer systems.

The network concept of protocols would establish a standard set of rules that would enable different types of computers, with different hardware and software platforms, to communicate in spite of their differences. Protocols describe both the format that a message must take as well as the way in which messages are exchanged between computers.

During the 1970s Bob Kahn and Vinton Cerf would collaborate as key members of a team to create TCP/IP, Transmission Control Protocol (TCP) and Internet Protocol (IP), the building blocks of the modern internet.

What is an RFC?

The concept of Request for Comments (RFC) documents was started by Steve Crocker in 1969 to help record unofficial notes on the development of ARPANET. RFCs have since become official documents of Internet specifications.

In computer network engineering, a Request for Comments (RFC) is a formal document published by the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), and the global community of computer network researchers, to establish Internet standards.

TCP/IP RFC History

The creation of TCP/IP as the basic set of rules for computers to communicate was one of the last major phases in the development of this global network we now call the Internet. Many additional members of the TCP/IP family of protocols continue to be developed, expanding on the basic principles established by Bob Kahn and Vinton Cerf back in the 1970s.

In 1981 the TCP/IP standards were published as RFCs 791, 792 and 793 and adopted for use. On January 1, 1983, TCP/IP protocols became the only approved protocol on the ARPANET, the predecessor to today's internet.