Local-Area Networks (LANs), in which all of the computers comprising the network are in close
geographic proximity to each other, and Wide-Area Networks (WANs), in which the computers are
spread across larger geographic distances.
An example of a local-area network would be the Limerick network at Calvin, which connects
computers in the student computer labs, enabling access to shared disk drives
and printers.
One of the main factors that serves to distinguish LANs from WANs is the category of network hardware
involved. LANs use standard network hardware, and there are limits to the distances that such
technologies can span before transmissions are no longer functional. Over the distances involved in a
WAN, by contrast, technologies beyond conventional network hardware are required: e.g., high-speed
data lines leased from a telecommunications company, or satellite technology.
At "rock bottom" is the physical layer. This layer consists of hardware that simply provides the means
for raw digital signals to pass from one computer to another.
In 1973, a young researcher at the Xerox Palo Alto Research Center (PARC)
named Bob Metcalfe developed network hardware for building smaller-scale
networks he called ethernet, which is still the most widely used category of network
hardware today. Ethernet is not a brand name; rather, it is a scheme for how
network hardware parts can be built to operate together—in the same way that the
term "phillips" describes the way screwdrivers and screws must be built in order to
work together. Thus, a great variety of companies produce and sell ethernet cards
and cables. When you hear the term ethernet, simply understand it as a synonym
for "network."
Cables
One way of creating a physical connection between computers is with cabling. The most conventional
way to transmit information is by sending an electrical signal down a copper wire. Because plain copper
wire is highly susceptible to electromagnetic interference (which is not good for communication between
computers), it is almost never used by itself in a computer network. Instead, one of two kinds of wire is
used: twisted pair or coaxial cable. In both twisted pair and coaxial cable, the electrical signal
propagates through the copper at a substantial fraction of the speed of light, which provides the
potential for reasonably high bandwidth.
Twisted pair cable consists of two wires twisted together, only one of which carries
the information. This simple, inexpensive technique is the approach used in
telephone lines. It produces two benefits:
1. The second wire absorbs some of the electromagnetic energy emitted by the
wire carrying the signal, reducing the interference this pair might cause to
neighbouring pairs;
2. The second wire also absorbs any electromagnetic interference coming from outside the pair,
reducing the interference received by the wire carrying the information.
The most common grade of twisted-pair network cable is known as "category five," or CAT 5.
Twisted pair cabling uses a snap-in modular connector similar to that used
to connect a telephone cord into a phone jack, although it is a bit larger.
This type of cabling scheme is often referred to with the BaseT suffix: e.g.,
"10BaseT." Again, when you hear this term, simply substitute the word
network.
Coaxial cable consists of a single copper wire surrounded by insulation that is itself
surrounded by a flexible metal cylindrical sheath. This is the approach used by
cable-TV companies. Because the metal sheath completely surrounds the wire
carrying the signal, coaxial cable is far more expensive than twisted pair. For the
same reason, it does a far better job of shielding: the wire is well protected from
external electromagnetic interference. This kind of cabling is often denoted with the
suffix Base2—e.g., "10Base2."
Unfortunately, the terms "10BaseT" and "10Base2" are all too easily confused. Try to remember: in
10BaseT, T stands for "twisted."
Pulses of light move through a glass fiber at roughly two-thirds the speed of light in a
vacuum. Other factors aside, fiber optic networks offer the highest bandwidth of any
communication media, because light can be modulated at far higher frequencies than
electrical signals.
Why, then, is fiber not used everywhere? The main answer is that fiber optic cable is very expensive to
install; in fact, most of the cost is in the installation!
One key difficulty is that connecting two glass fibers is more difficult than connecting two copper wires. To
connect two copper wires, you simply twist them together. By contrast, you can't just twist two glass fibers
together. Instead, the fibers must be connected using a special connector that (hopefully) bonds the fibers
together seamlessly.
More precisely, light reflects and scatters when it encounters a change of medium, and unless the two
glass fibers being connected are bonded without a seam, the seam represents such a change. Light
crossing a seamed connection may therefore be partially reflected or scattered, attenuating and
distorting the light pulse. Since the light pulse represents information, such distortion can corrupt the
information being transmitted across the connection. The chief disadvantage of glass fiber is thus the
difficulty of connecting two fibers in such a way that the light passes through the connection unaltered.
Finally, fiber optic cable is also more fragile than copper cable and difficult to bend. Thus, it can
sometimes be difficult to find a path for the cable.
Wireless
The first years of the new millennium have seen fast growth in wireless networking.
802.11b, also known as Wi-Fi (short for "wireless fidelity") and "Wireless Ethernet," provides a standard
physical layer for wireless networking. This increasingly affordable technology has become popular for
wireless home networking, and it is quickly becoming the choice for commercial applications as well. An
802.11b network is often called a WLAN ("wireless local-area network").
One of the most interesting social phenomena that Wi-Fi has produced is that of the hot spot. Here, the
idea is to create and advertise spaces in which 802.11b wireless networks have been created, such that
anyone with a laptop or handheld computer with Wi-Fi capability can access the Internet. A growing
number of airports, for instance, have gates with Wi-Fi hotspots; thus, travellers can connect to the
Internet simply by walking into the invisible hot spot. No cable or cellular connection is necessary. Many
campus libraries have designated an entire floor (or the entire library) as a hot spot so that students have
Internet access anywhere they sit.
Perhaps the most popular sites for the creation of WLANs are coffee
shops, an astonishing number of which are becoming wireless
networking hot spots.
There are several other popular technologies for wireless networking over
shorter distances. Bluetooth technology, for instance, is a radio frequency
technology that enables users to create wireless connections between
handheld computers and cellular phones.
However, to utilize an ISP's service, the customer needs a link from their site to one of the
points at which the ISP's network can be accessed. Such ISP connection points are
called points of presence, or POPs.
How is this connection to the POP created? There are a variety of means.
Modems
The term modem is a contraction of "modulator-demodulator."
In the context of information technology, to modulate means to turn a computer's output of binary digits
(bits, zeroes and ones) into some sort of an analog wave that can travel across a medium. In the case of
copper wire, this takes the form of a wave of electrical current. In the case of fibre optic cable, this takes
the form of a wave of light. To demodulate is to convert this wave back into digital bits that a computer
can input.
In the case of traditional modems, it may be helpful to simply think of this device as converting zeroes and
ones into sound waves, which then travel over the phone line to another modem, where the sounds are
converted back into zeroes and ones.
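The modulate/demodulate round trip can be sketched in Python. This is a toy frequency-shift-keying scheme of our own devising, not the encoding any real modem standard uses; the frequencies, sample rate, and threshold are arbitrary illustrative choices:

```python
import math

# Toy frequency-shift keying (FSK): a 0 bit is sent as a low tone,
# a 1 bit as a high tone. All constants are illustrative.
FREQ_0, FREQ_1 = 1000.0, 2000.0   # Hz
SAMPLE_RATE = 8000                # samples per second
SAMPLES_PER_BIT = 40

def modulate(bits):
    """Turn a sequence of bits into a list of audio samples (a 'sound wave')."""
    samples = []
    for bit in bits:
        freq = FREQ_1 if bit else FREQ_0
        for n in range(SAMPLES_PER_BIT):
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

def demodulate(samples):
    """Recover the bits by counting zero crossings in each bit-sized chunk:
    the high tone crosses zero more often than the low tone."""
    bits = []
    for i in range(0, len(samples), SAMPLES_PER_BIT):
        chunk = samples[i:i + SAMPLES_PER_BIT]
        crossings = sum(
            1 for a, b in zip(chunk, chunk[1:]) if (a < 0) != (b < 0)
        )
        # 1000 Hz gives roughly 10 crossings per chunk; 2000 Hz roughly 20.
        bits.append(1 if crossings > 15 else 0)
    return bits

message = [0, 1, 1, 0, 1, 0, 0, 1]
assert demodulate(modulate(message)) == message
```

Real modems use far more sophisticated encodings, but the principle is the same: bits become a wave on one end and become bits again on the other.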
Today, the fastest modems for analog telephone lines (sometimes called POTS, "plain old telephone
service") are rated at 56,000 bits (56 kilobits) per second. However, FCC regulations are such that no
modem can operate faster than around 52,000 bps. Moreover, anytime there is any degree of noise or
distortion on the telephone line, this causes the modem to drop down to a slower speed. Thus, even with
a very good phone line, the typical home computer user is only likely to reach speeds of around 45,000
bps (45 Kbps).
Although such modems approach what was the cutting-edge speed of the ARPANET in 1969, this form of
connection is fairly low bandwidth by today's standards. Moreover, users of traditional modems over
telephone lines are subject to sudden disconnections for any of a variety of reasons.
For these reasons, and because other technologies have become more affordable, many consumers are
turning to what are called broadband connections to their ISPs.
(Question: why are 56 Kbps modems sometimes described as being capable of 57,600 bps rather than
56,000 bps? One common answer is that in the binary computer context one K equals 1024, not 1000 as
in the decimal metric system; note, however, that 56 × 1024 is 57,344, not 57,600. The figure 57,600 bps
is also a standard serial-port speed, the rate at which the computer communicates with the modem.)
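The arithmetic behind the two readings of "56 K" can be checked directly (a trivial sketch; the figures come from the discussion above):

```python
# Decimal kilo, as in the metric system:
assert 56 * 1000 == 56_000

# Binary kilo (1 K = 1024), common in computing contexts:
assert 56 * 1024 == 57_344

# Note: 57,344 is close to, but not identical to, the oft-quoted 57,600,
# which is a standard serial-port speed.
print(56 * 1024)
```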
Broadband: DSL
A digital subscriber line (DSL) is a digital telephone line. That is, both sound and data travel over the
wires as digital bits. Voices get encoded into binary digits, travel to the destination, and are then decoded
back into sounds, but there is no need to encode and decode digital data into sounds and then back
again.
DSL is sometimes described as asymmetric (ADSL), because it is based on the observation that almost
all network interactions are as follows:
1. A user sends a (small) network-related command (e.g., a request for a remote webpage);
2. The network sends a (big) response to that command (e.g., the page containing graphics,
applets, music, etc.).
That is, it takes very little outgoing bandwidth to send a user's command (e.g. mouse click on a URL in a
browser).
By contrast, a great deal of bandwidth is needed to retrieve the response to that command (e.g.,
webpage), because it is much larger by comparison to the user command.
DSL divides up the digital phone line's bandwidth accordingly, providing an effective bandwidth of 576
Kbps for outgoing (or upstream) packets and a maximum bandwidth of 6.144 Mbps for incoming (or
downstream) packets.
A DSL modem accomplishes this by modulating the higher frequencies on a phone line that are unused
by the line's normal voice-carrier wave. More precisely, it divides those frequencies into 288 different
channels: 255 for "downstream" traffic, 31 for "upstream" traffic, and 2 for control signals. This is like
having a traditional modem that, instead of having a single modulator and a single demodulator, has 31
modulators and 255 demodulators, all working in parallel.
Even more remarkable is the fact that ADSL keeps all of this "data" traffic on frequencies distinct from
those used to carry the line's "voice" traffic, so that using a DSL modem, a person can talk on the phone
and access the Internet at the same time, using the same telephone line!
In most markets, the cost of DSL is slightly higher than the cost of a second phone line, but most users
report that the difference in speed more than justifies the price difference. A strong argument can be
made that DSL represents the most cost-effective home-access solution available today.
Broadband: Cable Modems
The main drawback to cable modem technology is that most cable companies employ a bus topology, in
which all of the subscribers in a given region/neighborhood share the same bandwidth. The result is poor
scalability, since all of the users must compete for the same (limited) bandwidth.
To illustrate, if you are the only person in your region/neighborhood with a cable modem, you can expect
to get great performance (e.g., 10 Mbps). However, if just 1000 other people in your region are using the
same service, your performance will drop to 10 Mbps / 1000 = 10 Kbps, and every additional person will
cause further degradation.
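The scaling arithmetic above can be captured in a one-line model (a deliberate simplification that assumes a strict even split of the shared bandwidth; real cable traffic is burstier):

```python
def effective_bandwidth_bps(total_bps, subscribers):
    """On a shared bus topology, every subscriber in the neighborhood
    divides the same total bandwidth (simplified even-split model)."""
    return total_bps / subscribers

# Alone on the segment: the full 10 Mbps.
assert effective_bandwidth_bps(10_000_000, 1) == 10_000_000

# Sharing with 999 neighbors: 10 Mbps / 1000 = 10 Kbps each.
assert effective_bandwidth_bps(10_000_000, 1000) == 10_000
```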
For this reason, most people today believe that DSL represents a more cost-effective way to access the
Internet than a cable modem. (Although if everyone else acts on this belief and avoids cable modems,
you may be the only person with a cable modem in your region and get great performance! But if your
cable company continues to support a program with such low demand . . .)
Broadband: DSS
It is also possible to connect to an Internet Service Provider through such digital
satellite system (DSS) services as DirecTV and Dish Network. Such connections
achieve speeds comparable to DSL and cable modems and provide availability in
areas that are too remote for DSL and cable modems.
Again, the role of an ISP is to take a customer from the ISP's point of presence to a network access point
(NAP), which is a kind of "ramp" onto the backbone.
Calvin College's Internet Service Provider (ISP) is Merit, a long-standing network company.
Merit provides MichNet, a network joining educational institutions and non-profit
organizations throughout the state.
When we surf the Web with our Web browsers, all we are shown is a little animation in the upper-right
corner of the browser window to represent the operations of the Internet behind the scenes.
But an examination of the Internet infrastructure reveals, once again, that the paradigm of graphical user
interfaces—i.e., "hide the dirty details from the end user, who doesn't need to know"—also hides some of
the social and geopolitical relationships inherent in information systems. Clearly, the Web is not yet a
"worldwide" one; nor is it a system of "equal opportunity." We must not be lulled into believing that this
global infrastructure is simple or neutral.
Network Software
Networks are often described as a system of layers. At the bottom level is the physical layer, which
consists of the network hardware we have been describing. On top of the physical layer are 6 layers of
software.
Client-Server
On many networks, software operates according to a client-server pairing.
It is important to note that the fundamental distinction between a "client" computer and "server" computer
is one of software. What makes a particular computer a client is that it is running client software, software
that requests a service. Likewise, what makes another computer a server is simply the fact that it is
running server software, software that offers a service to clients.
Try this: in a Web browser, enter www.calvin.edu. As a result, you see the Calvin homepage,
because the client software (Web browser) requested the homepage from the Web server
running on the computer with the host name www.calvin.edu.
Now, in your Web browser, enter 153.105.4.23. You see the same webpage, because the Web
browser (client) sent the request to the exact same computer as before; in this case, it was just
unnecessary to ask an intermediary DNS server to convert a host name into an IP address.
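To see that "client" and "server" are roles played by software, here is a minimal sketch in Python that runs both roles on one machine over the loopback interface. The service offered (upper-casing text) and all names are invented for illustration:

```python
import socket
import threading

# A toy server: what makes this machine a "server" is only that it runs
# this software, which offers a service (upper-casing text) to clients.
def run_server(sock):
    conn, _addr = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=run_server, args=(server_sock,), daemon=True).start()

# A toy client: it becomes a "client" simply by requesting the service.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, server")
    reply = client.recv(1024)

assert reply == b"HELLO, SERVER"
server_sock.close()
```

The same machine could just as easily run the client, the server, or both; the hardware does not change, only the software does.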
Ports
As it turns out, the addresses used by clients are even more specific than just
a host name or IP address. The client not only addresses a request to a
specific server; it actually addresses the request to a specific port on the
server.
For instance, a Web server usually listens for Web browser (client) requests
on port 80.
A port number can be placed at the end of a URL after a colon. Try this: in your browser, enter:
www-stu.calvin.edu:80
You will see the student organizations webpage.
So why did it work the first time, when we didn't specify a port number at all? This is because
Web browsers are programmed to send all requests to port 80 automatically unless you specify
otherwise.
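The browser's defaulting rule can be sketched as a small helper function (the function is our own; the scheme-to-port table reflects the standard defaults):

```python
from urllib.parse import urlsplit

def effective_port(url):
    """Return the port a browser will actually use: the one written after
    the colon if present, else the protocol's well-known default."""
    parts = urlsplit(url)
    if parts.port is not None:
        return parts.port
    return {"http": 80, "https": 443, "ftp": 21}[parts.scheme]

assert effective_port("http://www.calvin.edu/") == 80         # default
assert effective_port("http://www-stu.calvin.edu:80/") == 80  # explicit
assert effective_port("http://example.com:8080/") == 8080     # overridden
```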
Ports are obviously very important inroads into your computer system. For this reason, one of the
techniques crackers use is called "port scanning"—i.e., they scan all of the ports to see if there's an insecure
one through which the cracker can break into the computer.
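The idea of a port scan can be illustrated safely on your own machine: open one listening socket, then probe a small range of ports to see which accept a connection. This is a toy sketch, not a real scanning tool:

```python
import socket

# Open one listening socket so the "scan" has something to find.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))      # port 0: OS picks a free port
listener.listen(1)
open_port = listener.getsockname()[1]

def scan(host, ports, timeout=0.2):
    """Try to connect to each port; connect_ex returns 0 when a service
    is listening there. This is the essence of a port scan."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# Scan a small range around our known-open port.
results = scan("127.0.0.1", range(open_port - 2, open_port + 3))
assert open_port in results
listener.close()
```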
This is one of the roles of a firewall, which is hardware or software that protects
individual computers or LANs from improper access via the Internet.
Your home computer has ports too: be careful, especially if you leave your
computer connected to the Internet for long periods of time. (An excellent and free
firewall software program called ZoneAlarm is produced by Zone Labs.)
Protocols
Perhaps you've heard this term used in the context of diplomacy. For instance,
observing the "proper protocol" when meeting a queen might include "Do not speak to
the queen unless she speaks to you first."
Similarly, for each network service, there is an agreed-upon set of rules according to
which the client software and the server software will communicate.
For instance, Web browsers and Web servers communicate according to the Hypertext
Transfer Protocol, or HTTP. Have you ever wondered why some Web addresses begin with the prefix
http://? It is because this prefix specifies the protocol according to which the browser and the server will
communicate. You don't need to specify it every time, because the browser will assume that you meant to
include it and add it for you.
Similarly, you can transfer files between Internet computers using the File Transfer Protocol, or FTP. In
this case, you simply execute an FTP client, and it will communicate the appropriate requests to an FTP
server. Interestingly, your Web browser has an FTP client built-in. Thus, you may sometimes notice URLs
that begin with the prefix ftp://. This indicates that your browser will send the request to an FTP server,
rather than an HTTP server, and that the browser will communicate according to the FTP protocol rather
than the HTTP protocol.
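The "agreed-upon rules" of a protocol are concrete text formats. Here, for instance, is a sketch of the text an HTTP client sends when requesting a page (a minimal HTTP/1.0 request; real browsers send many additional headers):

```python
def http_get_request(host, path="/"):
    """Build the text an HTTP client sends: a request line naming the
    method and path, headers, then a blank line ending the request."""
    return (
        f"GET {path} HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        f"\r\n"
    )

request = http_get_request("www.calvin.edu")
assert request.startswith("GET / HTTP/1.0\r\n")
assert request.endswith("\r\n\r\n")   # blank line terminates the request
```

The server's reply follows the same protocol: a status line, headers, a blank line, and then the page itself.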
1. A mail server takes care of sending outgoing e-mail to other mail servers and receiving
incoming e-mail from other mail server computers. This exchange of e-mail between e-mail
servers happens according to the SMTP protocol.
2. A mail server handles requests from mail client software seeking to send mail. This
communication between server and a client wishing to send mail also happens according to
SMTP.
A weakness in the SMTP protocol can enable the "spoofing" of e-mail
addresses when sending mail—i.e., sending e-mail as someone other than
yourself.
Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail (e-mail) transmission.
First defined by RFC 821 in 1982, it was last updated in 2008 with the Extended SMTP additions by RFC
5321, which is the protocol in widespread use today. SMTP by default uses TCP port 25.
3. A mail server processes requests from clients wishing to read mail received from other mail
servers. This communication between server and a client wishing to receive mail happens
according to one of two protocols: the Post Office Protocol (POP) or the Internet Message
Access Protocol (IMAP).
Both POP and IMAP are very commonly used. However, POP clients send a user's password to
the mail server in "plain text," unencrypted. Thus, this poses a security risk. IMAP, on the other
hand, allows a user to specify that the mail client should send the password in encrypted form.
IMAP also offers a number of extended features, including the ability to view portions of incoming
e-mail messages while they are still on the server. Thus, a user can first check if there is any
urgent e-mail on the server before deciding whether or not to take the time to download
messages from the server to the client.
Remember: POP and IMAP are only used to receive messages from the mail server. A mail
client is typically still required to use SMTP commands in order to send e-mail.
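The SMTP exchange described above can be sketched as the sequence of commands a sending client issues (server reply codes omitted; the host and addresses are made-up examples). Note that the MAIL FROM address is simply text supplied by the client, which is the root of the spoofing weakness mentioned earlier:

```python
def smtp_session(mail_from, rcpt_to, body):
    """The sequence of SMTP commands a client sends when handing a
    message to a mail server (server replies omitted)."""
    return [
        "HELO client.example.org",
        f"MAIL FROM:<{mail_from}>",  # the server typically does not verify this
        f"RCPT TO:<{rcpt_to}>",
        "DATA",
        body,
        ".",                         # a line with a lone dot ends the message
        "QUIT",
    ]

# Nothing in the protocol itself stops a sender from claiming any address:
commands = smtp_session("anyone@example.com", "you@example.org", "Hello!")
assert commands[1] == "MAIL FROM:<anyone@example.com>"
```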
There are a great variety of e-mail clients available today. One of the most common is Microsoft's Outlook
Express. However, this e-mail client has repeatedly proven not to be consistently secure in its
operations, and it is very prone to e-mail viruses.
Peer-to-Peer
Especially since the advent of the World Wide Web in the last half of the
1990s, the Internet has evolved toward a more centralized organization.
Consider: the scheme of the World Wide Web is such that each
computer (node) in the network is either a client or a server.
This kind of centralized organization, of course, runs counter to the original design of the Internet.
However, client-server is not the only scheme for network services. In a peer-to-peer scheme, each
computer acts as both client and server, requesting files from and serving files to the other computers
(peers) in the network.
What makes such peer-to-peer file-sharing schemes so difficult to control is precisely what made the
ARPANET such a resilient network: distributed computing. The ARPANET was able to keep operating in
the face of a catastrophe involving a portion of the network; similarly, if you shut down one portion of a
peer-to-peer file-sharing network, the rest of the network can still continue to operate.
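The resilience of such a distributed scheme can be illustrated with a toy model in Python; the peer and file names are invented:

```python
# A toy peer-to-peer network: every node both offers files (server role)
# and requests them (client role). All names are made up.
peers = {
    "alice": {"song.mp3"},
    "bob": {"song.mp3", "paper.pdf"},
    "carol": {"photo.jpg"},
}

def find_file(network, filename):
    """Ask every reachable peer for the file; any peer holding it can serve it."""
    return sorted(p for p, files in network.items() if filename in files)

# The file is available from two independent peers...
assert find_file(peers, "song.mp3") == ["alice", "bob"]

# ...so even if one peer is shut down, the network still serves the file.
del peers["alice"]
assert find_file(peers, "song.mp3") == ["bob"]
```

There is no central server whose removal takes the whole service down, which is exactly what makes such networks hard to shut down.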
On the next page, we turn to some of the other issues surrounding the Internet.