Connection equipment
The primary hardware used to set up local area networks includes repeaters, hubs, bridges, switches, application gateways, routers and proxy servers.
What is a hub?
A hub is a piece of hardware that centralises network traffic coming from multiple
hosts and propagates the signal. The hub has a certain number of ports (enough to link
machines to one another, usually 4, 8, 16 or 32). Its only role is to take the binary data
coming into one port and repeat it out of all the other ports. Like a repeater, a hub
operates on layer 1 of the OSI model, which is why it is sometimes called a multiport
repeater.
The hub connects several machines together, often in a star arrangement, which gives it
its name, as all communication coming from the machines on the network passes through it.
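The broadcast behaviour described above can be sketched in a few lines of Python (the port numbers and frame format are illustrative, not any real hub interface):

```python
class Hub:
    """Layer-1 behaviour only: bits in on one port go out on all the others."""

    def __init__(self, num_ports=8):
        # Each port's "wire" records the frames repeated onto it.
        self.ports = {n: [] for n in range(num_ports)}

    def receive(self, in_port, frame):
        # No addresses are read; the frame is blindly repeated everywhere else.
        for port, wire in self.ports.items():
            if port != in_port:
                wire.append(frame)

hub = Hub(num_ports=4)
hub.receive(0, b"hello")
assert hub.ports[0] == []                                 # never echoed back to the sender
assert all(hub.ports[p] == [b"hello"] for p in (1, 2, 3)) # every other port saw the frame
```

Because every port repeats every frame, all hosts on a hub share the same collision domain, which is what the later sections on bridges and switches improve on.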
Types of hubs
There are several categories of hubs:
"Active" hubs: They are connected to an electrical power source and are used to
refresh the signal being sent to the ports.
"Passive" ports: They simply send the signal to all the connected hosts, without
amplifying it.
Repeaters
On a transmission line, the signal suffers from distortion and becomes weaker as the
distance between the two active elements increases. Two local area network
nodes are usually no more than a few hundred metres apart, which is why additional
equipment is needed to place nodes any further from one another.
A repeater is a simple device for refreshing a signal between two network nodes, in
order to extend the range of a network. The repeater works only on the physical layer
(layer 1 of the OSI model), meaning that it only acts on the binary information travelling
on the transmission line and cannot interpret the packets.
What's more, a repeater can be used as an interface between physical media of two
different types, meaning that it can, for example, link a length of twisted-pair wire to a
fibre-optic line.
Bridges
A bridge is a hardware device for linking two networks that work with the same protocol.
Unlike a repeater, which works at the physical level, a bridge works at the logical level
(on layer 2 in the OSI model), which means that it can filter frames so that it only lets
past data whose destination address corresponds to a machine located on the other side of
the bridge.
The bridge is used to segment a network, holding back the frames intended for the local
area network while transmitting those meant for other networks. This reduces traffic (and
especially collisions) on all networks, and increases the level of privacy, as information
intended for one network cannot be listened to on the other end.
On the other hand, the filtering carried out by the bridge can cause a slight delay when
going from one network to another, and this is why bridges must be carefully placed
within a network.
A bridge's normal role is to send packets between two networks of the same type.
Concept
A bridge has two connections to two distinct networks. When the bridge receives a frame
on one of its interfaces, it analyses the MAC address of both the sender and recipient. If a
bridge doesn't recognise the sender, it stores its address in a table in order to "remember"
which side of the network the sender was on. This way, the bridge can find out if the
sender and receiver are found on the same side or opposite sides of the bridge. If it's the
former, the bridge ignores the message; if it's the latter, the bridge sends the frame along
to the other network.
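The learning logic just described can be sketched as follows (the frame fields and side labels are hypothetical; a real bridge parses Ethernet headers):

```python
class Bridge:
    def __init__(self):
        self.table = {}  # MAC address -> side of the bridge ("A" or "B")

    def receive(self, side, src_mac, dst_mac):
        # Remember which side the sender was seen on.
        self.table[src_mac] = side
        if self.table.get(dst_mac) == side:
            return None                      # same side: ignore the frame
        return "B" if side == "A" else "A"   # unknown or opposite side: forward

bridge = Bridge()
assert bridge.receive("A", "aa:aa", "bb:bb") == "B"   # bb:bb unknown, so forwarded
assert bridge.receive("B", "bb:bb", "aa:aa") == "A"   # aa:aa was learned on side A
assert bridge.receive("A", "cc:cc", "aa:aa") is None  # both on side A: filtered out
```

Note that an unknown destination is still forwarded; the bridge only filters once it has learned that sender and receiver share a side.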
Switches
A switch is a multi-port bridge, meaning that it is an active element working on layer 2 of
the OSI model.
The switch analyses the frames coming in on its entry ports and filters the data in order to
forward it solely to the right ports (this is called switching and is used in switched
networks). As a result, the switch acts as a bridge when filtering and as a hub when
handling connections.
Switching
The switch uses a filtering/switching mechanism that redirects data flow to the most
suitable machines, based on certain elements found in the data packets.
The switch inspects the source and destination MAC addresses of the frames, and builds a
table that lets it know which machine is connected to which port on the switch (in
general this process happens automatically, but the switch's administrator can configure
it differently if needed).
Once it knows the destination port, the switch only sends the message to the right port,
and the other ports are then free for other transmissions which may be taking place at the
same time. Consequently, each data exchange can run at the nominal transfer rate (no more
bandwidth sharing), without collisions, with the end result being a very significant
increase in the network's bandwidth (at an equal nominal speed).
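As a sketch of the table-building step above (with made-up MAC addresses and port numbers):

```python
class Switch:
    def __init__(self):
        self.mac_to_port = {}  # learned: MAC address -> switch port

    def forward(self, in_port, src_mac, dst_mac):
        self.mac_to_port[src_mac] = in_port          # learn the sender's port
        # Unknown destinations are flooded to every port, like a hub would do.
        return self.mac_to_port.get(dst_mac, "flood")

sw = Switch()
assert sw.forward(1, "aa", "bb") == "flood"  # bb not yet in the table
assert sw.forward(2, "bb", "aa") == 1        # aa was learned on port 1
assert sw.forward(1, "aa", "bb") == 2        # now only port 2 gets the frame
```

The difference from the two-port bridge is only the number of ports: each port is its own segment, so traffic between two ports leaves the others free.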
The most advanced switches, called layer 7 switches (corresponding to the application
layer of the OSI model) can redirect data based on advanced application data contained in
the data packets, such as cookies for HTTP, the type of the file being sent for FTP, etc.
For this reason, a layer 7 switch can be used for load balancing, by routing the incoming
data flow to the most appropriate servers, which have a lower load or are responding
more quickly.
Application gateways
An application gateway is a hardware/software system for connecting two networks
together, in order to serve as an interface between different network protocols.
When a remote user contacts the gateway, it examines his/her request; if that request
corresponds to the rules that the network administrator has set, the gateway creates a link
between the two networks. The information, therefore, is not directly transmitted; rather,
it is translated in order to ensure continuity between the two protocols.
Besides an interface between two different kinds of networks, this system offers
additional security, as all information is carefully inspected (which may cause a delay)
and is sometimes recorded in an event log.
The major drawback of this system is that there must be an application of this kind
available for each service (FTP, HTTP, Telnet, etc.).
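The rule check and event logging described above might be sketched like this (the rule set, user names and request format are hypothetical):

```python
# Requests are relayed only if they match administrator-defined rules,
# and every decision is recorded in an event log.
ALLOWED_SERVICES = {"HTTP", "FTP"}   # rules set by the network administrator
event_log = []

def handle_request(user, service):
    allowed = service in ALLOWED_SERVICES
    event_log.append((user, service, "relayed" if allowed else "refused"))
    return allowed

assert handle_request("alice", "HTTP") is True
assert handle_request("bob", "Telnet") is False
assert event_log[-1] == ("bob", "Telnet", "refused")
```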
Router
A router is a device that connects computer networks to one another, handling the routing
of packets between two networks and determining the path that a data packet takes.
When a user enters a URL, the Web client (the browser) queries the domain name server,
which shows it the IP address of the desired machine.
The workstation sends the request to the nearest router, i.e. to the default gateway on the
network it is located on. This router determines the next machine to which the data will
be forwarded, in such a way as to choose the best pathway possible. To do so, the routers
keep up-to-date routing tables, which are like maps showing the paths that can be taken to
get to the destination address. There are numerous protocols designed to handle this
process.
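A routing-table lookup of the kind described above can be sketched with Python's standard ipaddress module; the prefixes and next-hop addresses below are made up, and routers generally prefer the most specific (longest) matching prefix:

```python
import ipaddress

ROUTING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"),  "192.168.1.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "192.168.1.2"),
    (ipaddress.ip_network("0.0.0.0/0"),   "192.168.1.254"),  # default route
]

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTING_TABLE if addr in net]
    # Prefer the most specific (longest) matching prefix.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

assert next_hop("10.1.2.3") == "192.168.1.2"   # /16 beats /8
assert next_hop("10.9.9.9") == "192.168.1.1"
assert next_hop("8.8.8.8") == "192.168.1.254"  # falls to the default route
```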
In addition to their routing function, routers are also used to manipulate data travelling in
the form of datagrams so that they can go from one kind of network to another. As not all
networks are able to handle the same size of data packets, routers are tasked with
fragmenting packets so they can travel freely.
A router has several network interfaces, with each one connected to a different network.
Therefore, it has one IP address for every network it is connected to.
Wireless router
A wireless router works on the same principle as a traditional router, the difference being
that it also lets wireless devices (such as WiFi stations) connect to the networks to which
the router is wired (usually over Ethernet).
Routing protocols
There are two major types of routing protocols:
Distance vector routers generate a routing table that calculates the "cost" (in
terms of the number of hops) of each route, then sends that table to nearby
routers. Each time a connection request is made, the router chooses the least
costly route.
Link state routers listen to the network continuously, in order to identify the
various elements surrounding it. With this information, each router calculates the
shortest pathway (in terms of time) to each neighbouring router, and sends this
information in the form of update packets. Finally, each router builds its own
routing table by calculating the shortest pathways to all other routers (using the
Dijkstra algorithm).
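The link-state computation above can be sketched with Dijkstra's algorithm over a made-up topology of four routers with per-link costs:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path cost from source to every reachable router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbour, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

# Hypothetical network: link costs between routers A-D.
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
assert dijkstra(graph, "A") == {"A": 0, "B": 1, "C": 3, "D": 4}
```

Here A reaches C more cheaply via B (cost 1 + 2) than over the direct link (cost 4), which is exactly the kind of choice a link-state router encodes in its routing table.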
Introduction to bridge/routers
A bridge/router is a hybrid element that combines the features of a router and those of a
bridge. This kind of hardware is used to transfer non-routable protocols from one network
to another and to route the rest. More precisely, the bridge/router acts first and foremost
as a bridge when it can, and routes the packets when that isn't possible.
A bridge/router can, in some architectures, save more money and space than having both
a router and a bridge.
Proxy servers
Caching
Most proxies have a cache, the ability to keep pages commonly visited by users in
memory (or "in cache"), so they can provide them as quickly as possible. Indeed, the term
"cache" is used often in computer science to refer to a temporary data storage space (also
sometimes called a "buffer.")
A proxy server with the ability to cache information is generally called a "proxy-cache
server".
The feature, implemented on some proxy servers, is used both to reduce Internet
bandwidth use and to reduce document loading time for users.
Nevertheless, to achieve this, the proxy must compare the data it stores in cached
memory with the remote data on a regular basis, in order to ensure that the cached data is
still valid.
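The cache-then-validate behaviour above might look like this in outline (the fetch function and the 60-second freshness window are stand-ins, not a real HTTP client or cache policy):

```python
import time

CACHE = {}          # url -> (stored_at, body)
MAX_AGE = 60.0      # assume cached copies stay valid for 60 seconds

def fetch_remote(url):
    return f"<contents of {url}>"   # placeholder for a real network fetch

def get(url, now=None):
    now = time.time() if now is None else now
    entry = CACHE.get(url)
    if entry and now - entry[0] < MAX_AGE:
        return entry[1], "from cache"      # fresh copy: no Internet traffic
    body = fetch_remote(url)               # stale or missing: re-fetch
    CACHE[url] = (now, body)
    return body, "from origin"

assert get("http://example.com/", now=0.0)[1] == "from origin"
assert get("http://example.com/", now=30.0)[1] == "from cache"
assert get("http://example.com/", now=120.0)[1] == "from origin"  # expired
```

Real proxy-caches validate with the origin server (for example via conditional requests) rather than relying on a fixed window, but the trade-off is the same: cache hits save bandwidth and loading time.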
Filtering
What's more, by using a proxy server, connections can be tracked by creating logs that
systematically record user queries when they request connections to the Internet.
Because of this, Internet connections can be filtered, by analysing both client requests
and server replies. When filtering is done by comparing a client's request to a list of
authorised requests, this is called whitelisting, and when it's done with a list of forbidden
sites, it's called blacklisting. Finally, analysing server replies that comply with a list of
criteria (such as keywords) is called content filtering.
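The three filtering modes can be sketched as simple membership tests (the host names and keyword list are illustrative):

```python
WHITELIST = {"intranet.example.com"}     # only these hosts are allowed
BLACKLIST = {"badsite.example.net"}      # everything except these is allowed
BANNED_KEYWORDS = {"malware"}            # criteria applied to server replies

def allow_by_whitelist(host):
    return host in WHITELIST

def allow_by_blacklist(host):
    return host not in BLACKLIST

def allow_by_content(reply_text):
    return not any(word in reply_text.lower() for word in BANNED_KEYWORDS)

assert allow_by_whitelist("intranet.example.com")
assert not allow_by_blacklist("badsite.example.net")
assert not allow_by_content("This page contains MALWARE samples")
```

Whitelisting checks client requests against allowed entries, blacklisting against forbidden ones, and content filtering inspects the server's reply itself.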
Authentication
As a proxy is an indispensable intermediary tool for internal network users who want to
access external resources, it can sometimes be used to authenticate users, meaning to ask
them to identify themselves, such as with a username and password. It is also easy to
grant access to external resources only to individuals authorised to do so, and to record
each use of external resources in log files.
This type of mechanism, when implemented, obviously raises many issues related to
individual liberties and personal rights.
Reverse-proxy servers
A reverse-proxy is a "backwards" proxy-cache server; it's a proxy server that, rather than
allowing internal users to access the Internet, lets Internet users indirectly access certain
internal servers.
The reverse-proxy server is used as an intermediary by Internet users who want to access
an internal website, by sending it requests indirectly. With a reverse-proxy, the web server
is protected from direct outside attacks, which increases the internal network's strength.
What's more, a reverse-proxy's cache function can lower the workload of the server it is
assigned to, which is why it is sometimes called a server accelerator.
Finally, with perfected algorithms, the reverse-proxy can distribute the workload by
redirecting requests to other, similar servers; this process is called load balancing.
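The load-balancing idea can be sketched as a reverse proxy choosing the least-loaded of several identical back-end servers (the server names and load counts are made up):

```python
servers = {"web1": 0, "web2": 0, "web3": 0}  # active requests per back end

def pick_server():
    # Route the request to whichever back end currently has the lowest load.
    name = min(servers, key=servers.get)
    servers[name] += 1
    return name

first, second, third = pick_server(), pick_server(), pick_server()
# Three requests against three idle servers spread across all of them.
assert {first, second, third} == {"web1", "web2", "web3"}
```

Production load balancers use more refined signals (response time, health checks, weights), but the principle is the same redirection of requests to the most appropriate server.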