
Taylor, L.W. et al. "Communications and Information Systems."
Mechanical Engineering Handbook, Ed. Frank Kreith.
Boca Raton: CRC Press LLC, 1999.

© 1999 by CRC Press LLC
Communications and
Information Systems

Lloyd W. Taylor
DIGEX, Inc.

Daniel F. DiFonzo
Planar Communications Corp.

A. Brinton Cooper III, PhD
Bel Air, Maryland

Dhammika Kurumbalapitiya
Harvey Mudd College

S. Ratnajeevan H. Hoole
Harvey Mudd College

18.1 Introduction ............................................................18-1
18.2 Network Components and Systems ...............................18-2
     Electrical and Optical Communications • Wireless Networks •
     Satellite Communications • Computer Communications
18.3 Communications and Information Theory ....................18-23
     Communication Theory • Information Theory
18.4 Applications ..............................................................18-41
     Accessing the Internet • Data Acquisition

18.1 Introduction
This chapter provides a broad introduction to and reference resource for the field of communications
and information systems.
The first section covers the areas of computer networks and their underlying technologies. These
technologies include electrical, optical, wireless, and satellite communications channels, as well as the
protocols (such as TCP/IP) used to transfer information over these channels. The purpose of this section
is to provide a high-level understanding of the infrastructure which underlies all modern electronic
communications.
The second section introduces Communications and Information Theory. This section provides the
mathematical background necessary to better understand the technologies used in electronic communi-
cations. Issues such as noise, compression, and error correction are explained. Of necessity, this section
is somewhat more theoretical than the others.
The third section concludes the chapter with information on two applications of computers and
networking. The first, Accessing the Internet, introduces the Internet, provides a summary of the software
tools used to access information on the Internet, and discusses methods of finding information using a
World Wide Web browser. The second application, Data Acquisition, describes the components of a data
acquisition system and shows how they follow the model of a computer network.


18.2 Network Components and Systems


There are a variety of network architectures, standards, and transmission methods used to interconnect
computers and communications systems. This section provides the information necessary to untangle
the internetworking web. It begins by surveying the electrical and optical communications standards and
architectures commonly used in modern networks, moves on to discussions of optical fibers and satellite
communications, and concludes with a discussion of computer communications standards.

Electrical and Optical Communications


Lloyd W. Taylor
The field of networking is generally broken into two domains: the local area network and the wide
area network. The distinction between the two tends to be rather fuzzy. In general, local area refers to
networks within the same building, and wide area refers to networks that interconnect buildings, cities,
or countries.
Local area networks are generally faster than wide area networks. A common rule of thumb used by
network engineers is that speed and distance are inversely related; that is, the longer the distance the
network must travel, the lower the speed that will be used. This is not an electrical or physical limit,
but rather a financial one. Longer links and higher speeds simply cost more.
Cabling Architectures
All networks are based on underlying cabling architectures. These architectures define how the nodes
are interconnected and may also limit the choices of network electrical standards. For example, ethernet
is logically a bus architecture but can be implemented over a star cabling plant with the proper network
electrical equipment. Ethernet cannot be implemented over a ring architecture.
In a bus architecture (Figure 18.2.1), all members of the network share a common cable for their
network signaling. The bus is a shared resource. All traffic can be read by every node.

FIGURE 18.2.1 Bus architecture.

For a bus-oriented network (such as ethernet) to work, there must be a mechanism for sharing access
to the bus. In the ethernet world, the standard for this mechanism is carrier-sense multiple access/collision
detection (CSMA/CD). When a computer wishes to transmit on the bus, it first listens to see if the bus
is busy. If not, the computer begins to transmit while simultaneously listening to its own message. If
the message is heard back as sent, then the transmission has been successful. If there are errors, it is
likely that another computer decided to transmit at the same time, resulting in a collision. When a
collision occurs, all currently transmitting computers stop transmitting for a random amount of time and
then begin the process again by listening to see if the bus is busy.
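The collision-recovery step can be illustrated with a short sketch. Classic ethernet chooses the random wait with truncated binary exponential backoff; the 51.2-µs slot time below is the 10-Mbps ethernet value, and the function is an illustrative sketch, not an excerpt from the standard.

```python
import random

def backoff_delay(collisions, slot_time=51.2e-6, max_exponent=10):
    """Truncated binary exponential backoff, as used by classic ethernet.

    After the n-th collision in a row, a station waits a random number
    of slot times drawn uniformly from [0, 2**min(n, 10) - 1] before
    listening to the bus and trying again.
    """
    k = min(collisions, max_exponent)
    return random.randint(0, 2 ** k - 1) * slot_time
```

The first retry waits at most one slot time; repeated collisions widen the range, spreading the stations' retries apart so they are unlikely to collide again.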
Bus architectures are useful where no one node requires a large portion of the shared bandwidth. If
one or two nodes are constantly using the majority of the available bandwidth, performance for all
connected nodes can be seriously degraded.


Bus architectures also are relatively insecure. It is quite simple to run a program to monitor all traffic
on a bus and capture sensitive information such as user IDs and passwords. Because all systems must
have access to all transmissions (to determine whether or not the bus is busy), there is no simple way
around this limitation.
In a ring architecture (Figure 18.2.2), each node has two connections, an inbound connection and an
outbound connection. Network traffic passes through each node until it reaches its destination.

FIGURE 18.2.2 Ring architecture.

For a ring architecture to work, there must be some method of determining which node has the right
to transmit at any given time. A common method to do this is by passing a token around the ring. When
a node wishes to transmit, it captures the token and sends its message in the token's place. When the
message reaches the intended destination, the recipient node marks it read and sends it on around the
ring. When the originating node receives back its own message, it removes it from the ring and replaces
the token on the ring. Note that the node that has just transmitted will not have another chance to transmit
until all other nodes on the ring have had a chance to transmit. In other words, until the token makes it
all the way around the ring, the first node may not transmit again. This has the effect of making a ring
architecture fairer than a bus architecture, as no one node can co-opt the entire available bandwidth.
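The fairness property described above can be sketched as follows. The node names and frame queues are hypothetical; the sketch models only the one-frame-per-token-visit rule, not the electrical operation of the ring.

```python
def token_rotation(queues):
    """One full rotation of the token around the ring.

    `queues` maps node name -> list of frames waiting to be sent, given
    in ring order.  Each node may send at most one frame per token
    visit, so no single node can monopolize the ring's bandwidth.
    Returns the (node, frame) pairs in the order they were sent.
    """
    sent = []
    for node, queue in queues.items():   # the token visits each node in turn
        if queue:                        # capture the token, send one frame
            sent.append((node, queue.pop(0)))
    return sent
```

Even if one node has many frames queued, each rotation lets every other node transmit before that node gets a second turn.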
Ring architectures suffer from the same security problems as bus networks: all traffic flows through
all nodes. In addition, ring architectures are more difficult to install and maintain. If any one node
becomes disconnected from the ring, the entire ring can fail, as there will be no path for the data. Some
rings are implemented redundantly, with two separate traffic paths rotating in opposite directions. Should
any one node fail, the nodes on either side of the failure will wrap the ring, sending data back around
the second ring to reach the computer on the other side of the failed node. In this way, no single-point
failure can disrupt the ring.
In a star architecture (Figure 18.2.3), each node has a separate connection to a central point. Each
of these connections is to an active hub, which is a networking device designed to manage and switch
network traffic.

FIGURE 18.2.3 Star architecture.


FIGURE 18.2.4 Star implementation of other architectures.

A star architecture can be used to implement both bus and ring architectures. For a ring architecture
(Figure 18.2.4a), the central hub simply sends data out one wire to the computer and receives it back
on another wire. For a bus architecture (Figure 18.2.4b), the hub electrically or logically interconnects
all wires together.
For implementations of both architectures, the hub can monitor the bandwidth being used by each
attached node, and in some implementations can actually limit the total bandwidth used by each. Many
hubs can also censor the data going to each node, sending unchanged data only to the destination
system. All other nodes get scrambled data, making it impossible for them to capture sensitive informa-
tion.
Most larger networks are now implemented as stars, because of the increased flexibility and manage-
ability of a star architecture. The same cabling system can be used to implement bus and ring architec-
tures, as well as point-to-point connections such as telephone lines. In addition, the costs of operation
are lower, as all moves and changes can be made in a central location, rather than requiring that new
cable be pulled for each new computer or terminal.
Complex Systems. Larger networks generally use a combination of architectures. For campus-
wide networks, a common architecture is the ring of stars (Figure 18.2.5a). In this architecture, a star
architecture is implemented in each building, and the buildings are interconnected in a ring, usually
using fiber optic cables. The key advantage of this configuration is that no single-point failure will disrupt
the network. If the ring architecture has been implemented as a redundant ring, it will self-heal and route
around the damaged section.

FIGURE 18.2.5 Complex architectures.

Another common architecture is the star of stars (Figure 18.2.5b). This architecture is implemented
identically in each building, but each building is interconnected to a single central location in a star
configuration. The key advantage of this architecture is that it can support very high bandwidth networks.
The key disadvantage is that the loss of the central hub will disrupt the entire network.
Local Area Networks
Copper Standards. Local networks are still most commonly implemented using copper wire, rather than
fiber optics, because of the significantly higher cost of fiber. As the demand for higher bandwidth


connections to the desktop increases and the cost of fiber decreases, we can expect to see more fiber to
the desktop.
There are three common categories of copper cable used in networking: unshielded twisted pair (UTP),
shielded twisted pair (STP), and coaxial (Coax). Each of these has its unique advantages and disadvan-
tages.
All copper cabling is available in two general grades of insulation. Normal (nonplenum) insulation
is usually made from PVC (polyvinyl chloride) and may be used anywhere other than in air handling
spaces. The more expensive plenum cable (Teflon-based insulation) is required in air handling spaces
because of its fire-resistant characteristics.
Unshielded twisted pair cable is the most commonly used network cable today. It comes in a variety
of qualities, called categories. These categories (Table 18.2.1) define the electrical characteristics of the
cable and specify the maximum data rate that they support.

TABLE 18.2.1 UTP Cable Categories


Category    # Pairs    Max. Bandwidth    Uses
1           Any        1 Mbps            Telephone
2           25         4 Mbps            Token Ring 4 Mbps
3           16         10 Mbps           Ethernet
4           16         20 Mbps           Token Ring 16 Mbps
5           16         100 Mbps          100BaseT, CDDI

UTP cable is always installed in a star architecture. It requires active electronics in the wiring closets
to operate the connections to each computer.
As the category number gets higher, the manufacturing standards for such things as the number of
twists per meter, variations in impedance, insulation consistency, etc. become stricter. This is because
the cables must be of uniformly high quality to handle the higher frequencies required for higher data
rates.
For higher category cabling, installers must be specially trained. The rules for installation of high-
bandwidth cable limit the number of twists that can be unwrapped for termination, the minimum bend
radius, and the maximum force that can be applied while pulling the cable through walls or conduits.
Once the cable is installed and terminated, it must be tested and certified to the appropriate level.
Improperly handled or installed Category 5 cabling may perform at only a Category 3 level, through no
fault of the cable itself. Testing includes time-domain reflectometer measurements, noise and crosstalk
measurements, and impedance measurements.
UTP cable is specified for use in two common network electrical standards, 10BaseT ethernet and
100BaseT ethernet. IBM Token Ring can be run over UTP cable if an appropriate impedance matching
transformer (usually called a balun) is used at both ends of the UTP run. In an increasing number of
systems, these matching transformers are being integrated into the network interface cards and network
hubs themselves.
Shielded twisted pair (STP) cable is identical to UTP cable with the addition of a shield around each
twisted pair and usually another shield around the entire cable bundle. This shielding reduces suscepti-
bility to electrical noise (such as that caused by heavy electrical machinery), provides a more predictable
impedance, but also reduces the efficiency of the cable. A typical connection with STP cable can only
run one third the distance of an equivalent UTP cable. This is because the capacitance of the STP is
much higher than that of the UTP, resulting in three times the attenuation at a given frequency.
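The one-third-distance rule follows directly from the tripled attenuation: for a fixed loss budget at the receiver, the reachable distance scales inversely with the loss per unit length. The 12-dB budget and per-meter losses below are illustrative assumptions, not values from the text.

```python
def max_run_m(loss_budget_db, loss_db_per_m):
    """Longest run that keeps total cable loss within the receiver's budget."""
    return loss_budget_db / loss_db_per_m

# Hypothetical figures: if UTP loses 0.12 dB/m at the operating frequency,
# STP with three times the attenuation loses 0.36 dB/m.
utp_reach = max_run_m(12.0, 0.12)   # 100 m
stp_reach = max_run_m(12.0, 0.36)   # about 33 m, one third the UTP distance
```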
STP cable is commonly used in IBM Token Ring network installations. IBM-Standard STP cable is
available in two versions: type 1 cable contains two pairs of 22-gauge wire, each with an individual
shield. It is a heavy, stiff cable that is difficult to handle and pull. Type 9 cable is a lighter, more flexible
version that is made with 26-gauge wire. The lighter gauge wires result in a higher impedance, limiting
the length of a cable run to two thirds that of a type 1 cable.


IBM has defined a number of other cabling standards that include fiber optics, UTP, and mixed cable
types within a single sheath.
Coaxial cable was the original cable used for ethernet. It has largely been superseded by UTP, but is
still in use in older installations.
Coaxial cable can be used either in a bus or a star architecture. In a bus architecture, a heavy coaxial
cable (often called thickwire) is run through the hallway, above the drop ceiling. Each end is terminated
with a 50-Ω resistor to minimize electrical reflections. The coax is drilled wherever a connection to a
computer is to be made, and an active tap is installed (Figure 18.2.6). The connection between the tap
and the computer is via a 15-pin drop cable that carries signal and power to the active tap.

FIGURE 18.2.6 Ethernet thickwire coax network.

In a star architecture, a much lighter coaxial cable (often called thinwire) is used. A standard BNC
connector is installed on each end, and the cable is plugged directly into the network interface card in
the computer and into the network hub in the wiring closet. Thinwire networks can be daisy-chained,
allowing several computers to share a single run of coaxial cable.
Fiber Standards. Fiber optics are playing an increasingly important role in computer networks. The most
common use at present is in backbone and wide area networks. As costs decrease, we can expect to see
fiber to the desktop become the norm.
How Fiber Optics Work. Fiber-optic cables are essentially light pipes. They are made up of two
coaxial layers of glass, the inner called the core and the outer called the cladding (Figure 18.2.7). Each
of the components has a different index of refraction, resulting in a reflective surface at their interface.

FIGURE 18.2.7 Optical fiber construction.

A modulated electrical signal drives a light source such as a laser diode, which injects a collimated
beam of light into one end of the optical fiber. The light beam is reflected back and forth along the inner
fiber and is coupled to an optical detector at the other end (Figure 18.2.8). The optical detector converts
the light beam back to an electrical signal.
While optical fibers can carry signals much greater distances than can electrical conductors operating
at the same frequency, they do suffer from attenuation just as do electrical conductors. The mirror surface
at the interface between the core and the cladding is not perfect, and photons can escape through this
interface and be lost in the cladding. Also, not all light frequencies propagate at the same rate
through the fiber, resulting in degradation of the signal over long distances.
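The reflective interface arises from total internal reflection: light striking the core/cladding boundary at more than the critical angle (measured from the normal) is reflected back into the core. A quick calculation, using typical silica-fiber refractive indices that are assumptions rather than values from the text:

```python
import math

def critical_angle_deg(n_core, n_cladding):
    """Critical angle at the core/cladding interface, in degrees from
    the normal.  Rays striking the interface at larger angles are
    totally internally reflected and stay in the core."""
    if n_cladding >= n_core:
        raise ValueError("need n_core > n_cladding for total internal reflection")
    return math.degrees(math.asin(n_cladding / n_core))

# Typical silica fiber (illustrative): core index ~1.48, cladding ~1.46.
theta_c = critical_angle_deg(1.48, 1.46)   # roughly 80 degrees
```

Because the indices differ only slightly, the critical angle is large: only rays travelling nearly parallel to the fiber axis are guided.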


FIGURE 18.2.8 Optical transmission system.

FIGURE 18.2.9 Multimode fiber.

FIGURE 18.2.10 Single-mode fiber.

Types of Fiber-Optic Cables. There are two kinds of fiber-optic cables in common use. The first,
called multimode fiber, is less expensive to make and use, but has a more limited bandwidth. The second,
called single-mode fiber, is more expensive to make and use, but can support very high bandwidths.
Multimode fiber is used for distances of up to 2 km. It typically has a core diameter of 62.5 μm and
a cladding diameter of 125 μm. Because of the large diameter of the core, there are many paths, or
modes, between the two ends of the fiber. As demonstrated in Figure 18.2.9, a given photon may travel
down the center of the core (mode 1), may reflect a few times at a shallow angle to the interface (mode
2), or may strike the interface at a sharp angle (mode 3) and reflect many times as it travels to the far
end of the fiber.
These many paths result in a high attenuation of the light beam as it travels down the fiber. This
attenuation is caused by the large number of photons lost through the interface (for a given probability
of reflection, the more reflections a photon must make, the higher the chance that the photon will pass
through the interface rather than being reflected by it), as well as other attenuation mechanisms such as
phase distortion.
Single-mode fiber is designed for use in higher bandwidth and longer distance applications. It is made
up of a much smaller core (typically 10 μm) in a somewhat smaller cladding (typically 100 μm) than
is used in multimode fiber, resulting in a greatly reduced number of modes (Figure 18.2.10). As a result,
single-mode fiber has a much lower attenuation than an equivalent multimode fiber.
Single-mode fiber is much more difficult to splice and terminate than is multimode fiber, as there is
very little margin for error in aligning the core. A 2-μm alignment offset in a multimode fiber is only
about 3% of the total diameter, whereas it is 20% of the diameter of a single-mode fiber. Thus, a minor
misalignment in a single-mode fiber splice will result in a major increase in attenuation.
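The percentages above are simple ratios of the alignment offset to the core diameters given earlier:

```python
def offset_fraction(offset_um, core_diameter_um):
    """Splice misalignment expressed as a fraction of the core diameter."""
    return offset_um / core_diameter_um

multimode  = offset_fraction(2, 62.5)  # 0.032 -> about 3% of a 62.5-um core
singlemode = offset_fraction(2, 10)    # 0.20  -> 20% of a 10-um core
```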
Typical Fiber-Optic-Based Networks. One ubiquitous use of fiber in corporate networks is for fiber
distributed data interface (FDDI) backbone networks. FDDI networks transfer data at 100 Mbps using
a token-ring architecture. Rings of up to 200 km are supported, with a maximum distance between
adjacent nodes of 2 km.


Wide Area Networks


Once out of the local network area, the characteristics of networks change. In general, connections
between widely distributed locations are not handled by the installation of cable by the company itself.
The company will generally turn to a telecom provider, such as the local telephone company, to provide
the necessary connections.
The telephone company does not generally sell cable connections. They sell bandwidth. Thus, the
standards for wide area networks are based on fixed bandwidth allocations over telephone company
facilities. The telephone company may use copper or fiber, at their discretion, to provide the requested
service. The telephone company's networks are usually implemented as rings or stars, as are local
networks.
There are three broad classes of digital service available. The first, traditionally copper based, is the
DS-n service. The second, also copper based, is basic rate Integrated Services Digital Network (ISDN).
The third, optical fiber based, is the OC-n service.
DS-n. The DS series of services provides low- to medium-speed data connections. Table 18.2.2 summa-
rizes the commonly available services and their customary uses.

TABLE 18.2.2 DS-n Class Services


Service Name    Data Rate      Common Use
DS-0            56/64 Kbps     Voice, low-speed data
DS-1            1.544 Mbps     Multiple voice lines, data
DS-2            6.312 Mbps     Data
DS-3            44.736 Mbps    Data

Pricing for these services is heavily dependent on the total mileage of the link. The links are usually
leased for a minimum period (typically 1, 2, or 5 years). These leased data lines can support all network
protocols.
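The DS-1 rate in Table 18.2.2 is not arbitrary: it is 24 DS-0 channels of 64 Kbps each plus 8 Kbps of framing overhead. A quick check:

```python
def ds1_rate_bps(channels=24, channel_rate_bps=64_000, framing_bps=8_000):
    """A DS-1 multiplexes 24 DS-0 channels plus framing overhead."""
    return channels * channel_rate_bps + framing_bps

assert ds1_rate_bps() == 1_544_000   # the 1.544 Mbps of Table 18.2.2
```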
Leased lines are point-to-point. That is, they run from one location to another. A single line cannot
be configured as a ring, a star, or a bus. To implement these architectures, several leased lines must be
ordered, and connected together by the customer into the desired configuration.
Basic Rate ISDN. An increasingly important wide area networking technology is the Integrated Services
Digital Network (ISDN). The most commonly implemented version of this service is the basic rate
service, which provides three data channels over a single pair of telephone cables. Two channels, known
as the bearer (or B) channels, each provide a 64-Kbps dialup digital connection. The third channel,
known as the data (or D) channel, is used for control information between the telephone switch and
the ISDN terminal.
In practice, an ISDN line is installed at a business or home location. The line is connected to a piece
of equipment known as an ISDN terminal adapter. This device allows a variety of other devices to make
use of the various channels provided by the ISDN line. One typical device has a connection for a voice
telephone and an ethernet, and can be used to place normal telephone calls simultaneously with a digital
dialup connection to a company's ethernet.
A higher-speed version of ISDN, known as primary rate ISDN, supports 24 B channels over a DS-1
leased line. One of these channels is typically used for control information, leaving 23 channels available
for data transmission.
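The usable data capacity of the two service levels follows from counting B channels, since the D channel carries only control information:

```python
def isdn_data_capacity_bps(b_channels, b_rate_bps=64_000):
    """Usable ISDN data capacity: the B (bearer) channels only."""
    return b_channels * b_rate_bps

basic_rate   = isdn_data_capacity_bps(2)    # 2B+D -> 128 Kbps of user data
primary_rate = isdn_data_capacity_bps(23)   # 23B+D over a DS-1 -> 1.472 Mbps
```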
OC-n. As demand for higher-speed connections continued to increase, standards for optical networks
were established to meet these demands. The OC-n (Optical Carrier) standards (Table 18.2.3) provide
data rates starting at about 52 Mbps and going up to multiple gigabits per second. OC-based networks use the
SONET (Synchronous Optical Network) signaling standards for transferring data.


TABLE 18.2.3 OC-n Data Rates


OC-n Data Rate (Mbps)

OC-1 51.84
OC-3 155.52
OC-12 622.08
OC-48 2488.32
OC-192 9953.28
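Every rate in Table 18.2.3 is an integer multiple of the OC-1 base rate of 51.84 Mbps, which is a convenient way to verify or extend the table:

```python
OC1_MBPS = 51.84   # OC-1 base rate

def oc_rate_mbps(n):
    """OC-n runs at n times the OC-1 base rate."""
    return n * OC1_MBPS
```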

OC/SONET networks are always implemented as rings. They can be configured to be fault-tolerant,
so that no single break in the cable (or failed network node) will cause connections to fail.
Logical connections are made across a SONET ring by the use of virtual circuits. These virtual circuits
can either be permanent (PVC) or switched (SVC). A PVC is configured manually into the control
electronics of the originating and terminating SONET control equipment. It will persist indefinitely until
it is manually terminated. An SVC is established dynamically upon a request by a computer connected
to the control equipment. The SVC persists until the requesting computer informs the control equipment
that the circuit is no longer required. Conceptually, PVCs are like leased lines (permanent), and SVCs
are like dial-up telephone lines (temporary).
Asynchronous transfer mode (ATM) is an increasingly important protocol that is commonly imple-
mented over SONET networks. ATM makes use of fixed-sized packets, called cells, to transfer data at
very high speeds. Each of these cells is 53 bytes in size, with 5 bytes for addressing and cell management
and 48 bytes for data. Because the cells are fixed in size and format, it is possible to build network
switches that are very fast, as the processing and routing of the cell can be entirely done in hardware.
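Because the 53-byte cell is fixed, the per-cell overhead is constant and a message always occupies a whole number of cells, with the last one padded. A sketch of the arithmetic:

```python
CELL_BYTES = 53                             # total cell size on the wire
HEADER_BYTES = 5                            # addressing and cell management
PAYLOAD_BYTES = CELL_BYTES - HEADER_BYTES   # 48 bytes of user data

def cells_needed(message_bytes):
    """Whole cells needed to carry a message; the last cell is padded."""
    return -(-message_bytes // PAYLOAD_BYTES)   # ceiling division

def wire_bytes(message_bytes):
    """Bytes actually transmitted, including headers and padding."""
    return cells_needed(message_bytes) * CELL_BYTES
```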
To set up a virtual circuit between nodes, the originating node sends a request to its local ATM switch.
In this request, the originating node specifies the bandwidth required for the link, the length of time the
link is needed, and the quality of service required. The switch checks its available network
resources to see if it can grant the service requested. If it can, it passes the request on to the next
switch in the network, which repeats the check for available resources. This process continues until
the destination node is reached and consents to the link request. An acknowledgment is relayed back to
the originating node confirming the availability of the requested service. As this acknowledgment is
returned through each switch, the switch commits the required resources to the virtual circuit.
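The hop-by-hop admission check can be sketched as follows. The switch names and capacity bookkeeping are hypothetical; the sketch shows only the accept-or-refuse logic, with resources committed at every switch once the whole path has consented.

```python
def setup_svc(spare_bps, path, requested_bps):
    """Hop-by-hop admission control for a switched virtual circuit.

    `spare_bps` maps switch name -> uncommitted bandwidth.  The request
    succeeds only if every switch on the path can supply the requested
    bandwidth; on success each switch commits it to the circuit.
    """
    if any(spare_bps[s] < requested_bps for s in path):
        return False                  # some switch refuses; circuit not built
    for s in path:                    # acknowledgment path: commit resources
        spare_bps[s] -= requested_bps
    return True
```

A real switch would also weigh the requested duration and quality of service from the request; this sketch checks bandwidth only.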
A key strength of ATM is that it can carry any type of information over the same link, rather than
requiring separate links for voice, data, and video as is common today. This ability will likely reduce
the overall cost of networking as separate voice, data, and video networks will not be required in the
future.

Defining Terms
10BaseT: The standard for running 10 Mb/sec ethernet over unshielded twisted pair network cabling.
100BaseT: The standard for running 100 Mb/sec ethernet over unshielded twisted pair network cabling.
Active hub: A device that interconnects multiple network links.
Coax: Coaxial cable.
Local area network: A network that is within the local area, generally within an office area or building.
STP: Shielded twisted pair cable.
Token: A data packet that circulates around a token-ring network indicating that the network is available
for data transmission.
UTP: Unshielded twisted pair cable.
Wide area network: A network that is outside the local area, generally between buildings, between cities,
or between countries.


References
Acampora, A.S. 1994. An Introduction to Broadband Networks. Plenum Press, New York.
Bates, R.J. 1992. Introduction to T1/T3 Networking. Artech House.
Clark, M.P. 1991. Networks and Telecommunications: Design and Operation. John Wiley & Sons, New
York.
Conard, J.W. (Ed.) 1991. Handbook of Communications Systems Management. Auerbach, Boca Raton,
FL.
Davidson, R.P. 1994. Broadband Networking ABCs for Managers. John Wiley & Sons, New York.
Kosiur, D.R. 1995. How Local Area Networks Work. Prentice-Hall, Englewood Cliffs, NJ.
McElroy, M.W. 1993. The Corporate Cabling Guide. Artech House.
McNamara, J.E. 1988. Technical Aspects of Data Communication. Digital Press.
Minoli, D. 1993. Enterprise Networking. Artech House.

Wireless Networks
Introduction
With increasing mobility of the workforce comes the need to provide network access to computers
located in automobiles, in briefcases, on shipboard, and even in aircraft. Wireless networks provide the
required connectivity.
This section covers the basic radio frequency (RF) link types used in wireless networking, compares
wireless networks to traditional wired networks, and discusses key trends in the field. The reference
section provides pointers to sources of additional information.
RF Technologies
All mobile wireless networks make use of a radio frequency carrier*. This carrier may be fixed frequency
(like an AM radio station signal) or spread spectrum (where the signal is spread over a wide frequency
band). Section 18.4 provides additional information on radio frequency communications, including
modulation methods.
Point to Point. A point-to-point communications system relays the signal from the mobile user to a base
station (Figure 18.2.11). The radio link is a single hop, that is, the signal is transferred directly from the
mobile user to the base station without going through intermediate reception and retransmission steps.
A point-to-point system generally requires a line of sight between the transmitter and the receiver.
Because the curvature of the earth limits the line of sight, this type of system has a typical maximum
range of tens of miles.

FIGURE 18.2.11 Point-to-point link.

Several point-to-point links can be put in series to relay a signal. Microwave relay stations are typically
placed approximately every 30 mi to relay a signal from point to point over long distances. Each relay
point receives the signal and retransmits it to the next station in line.

* Some local area wireless networks use infrared light or laser carriers. These are not used in mobile
applications because of their limited range.


Cellular. To overcome the limitations of point-to-point systems for mobile telephony, a cellular radio
system can be used. This system requires a large number of antennas, each at the center of a cell, as
shown in Figure 18.2.12. As a ground-based vehicle approaches the edge of the coverage range for a
particular antenna, the cellular control system communicates with the mobile data unit or telephone and
assigns it to the next antenna in the direction in which the vehicle is heading. At a coordinated time,
the in-progress conversation is transparently (to the user) switched to the newly assigned antenna without
interruption.

FIGURE 18.2.12 Coverage pattern for multiple cellular antennas.

In congested downtown areas, cell antennas may be only a few hundred meters apart and use lower-
powered transmitters to reduce the size of the cell. This provides more channels for use by more
simultaneous conversations, but requires more handoffs as the mobile unit moves through the city. In
suburban and rural areas, the cell antennas may be several thousand meters apart, with higher-powered
transmitters to provide a large cell. This minimizes the number of cell antennas that must be installed,
reducing the cost of the system.
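The handoff decision amounts to tracking which antenna hears the mobile unit best. A minimal sketch follows, with a hypothetical hysteresis margin added so a call does not ping-pong between two antennas at a cell edge; the margin and signal values are assumptions, not figures from the text.

```python
def serving_antenna(signal_dbm, current, hysteresis_db=3.0):
    """Choose the antenna that should serve a mobile unit.

    `signal_dbm` maps antenna name -> received signal strength.  Hand
    off only when a neighbor beats the current antenna by at least the
    hysteresis margin, so small fluctuations do not trigger handoffs.
    """
    best = max(signal_dbm, key=signal_dbm.get)
    if best != current and signal_dbm[best] >= signal_dbm[current] + hysteresis_db:
        return best
    return current
```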
A personal communications system (PCS) is a version of cellular technology that uses smaller
microcells with lower power transmitters. This second-generation cellular system requires many more
cell antennas (every few hundred meters), making it most effective in urban areas. PCS telephones are
typically smaller and lighter than first-generation cellular telephones, because of their lower power and
higher frequency operation.
Satellite Low Earth Orbit. Cellular and PCS systems require a very large number of antennas installed
in a grid over a large area to provide coverage for mobile units. This limits the rate at which a system
with complete coverage for a given area can be installed. Each cell antenna installation requires permis-
sion from the cell-site property owner, building permits, power, telecommunications connections, and
regulatory approval before it can be built and made operational. In addition, cellular systems differ from
country to country, requiring different mobile units for access in each country.
One approach to addressing these problems is to stand the problem on its head. Rather than having
a large set of fixed antennas which hand off the mobile unit as the mobile unit moves, use a number of
satellite transceivers in low earth orbit (LEO). Establish the orbits of these transceivers so that there is
at least one satellite in view of any place on the face of the earth at a given time. Then, as a given
satellite begins to move out of range of the mobile unit, it hands off the call to another satellite that is
just moving into range.
This type of system can easily provide data and voice connectivity to any location on (or slightly
above) the face of the Earth using a common mobile transceiver. For more information, see the subsection
Satellite Applications.
Comparing Wireless to Wired Networks
It is clearly possible to provide data and voice access to mobile users with any of the above technologies.
Yet there are significant limitations to these systems when compared to a hardwired network connection.
As always, there are trade-offs that must be made.

1999 by CRC Press LLC



Bandwidth. A directly wired network connection can easily provide 100 Mbps. The systems discussed
in the previous section are typically limited to tens of kbps, three to four orders of magnitude below what
is commonly available to directly wired users.
This bandwidth limitation has profound implications for mobile users. For example, complex graphics
and images require several minutes to download, rather than the several seconds that are required for a
directly wired user. Any function that requires the transfer of large amounts of data will be greatly slowed
by a wireless network link.
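The scale of this gap can be illustrated with a short calculation. The 5-MB file size and the two link rates below are illustrative assumptions, not figures from the text:

```python
# Rough transfer-time comparison for a wired LAN link vs. a wireless link.
# The file size and the two link rates are assumed for illustration.

def transfer_time_seconds(size_bytes, rate_bps):
    """Ideal transfer time, ignoring protocol overhead and retransmissions."""
    return size_bytes * 8 / rate_bps

FILE_SIZE = 5 * 1024 * 1024          # a 5-MB image
WIRED_RATE = 100e6                   # 100 Mbps LAN
WIRELESS_RATE = 28.8e3               # a tens-of-kbps wireless link

wired = transfer_time_seconds(FILE_SIZE, WIRED_RATE)
wireless = transfer_time_seconds(FILE_SIZE, WIRELESS_RATE)
print(f"wired: {wired:.2f} s, wireless: {wireless / 60:.1f} min")
```

The same file that arrives in under half a second on the wired link takes roughly 25 minutes over the wireless one.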
This limitation is not likely ever to go away. There is a limited amount of RF spectrum available for
mobile computing, while there is an essentially unlimited amount of capacity available for directly
wired users (just add more fibers or wires!). While improvements in wireless bandwidth availability
will come in time, it will simply never compare with the bandwidth available to a desktop computer
user.
Security. Wireless transmissions can be easily intercepted. The signal is broadcast through the atmosphere
and can be received by anyone within range who has a properly tuned receiver. Thus, it is necessary to
use encryption to protect the transmitted data.
Encryption adds complexity and cost to any system. Securely exchanging a cryptographic key for a
call is hard to do in a way that resists compromise. Every system that needs to interoperate
must agree on which cryptographic algorithm will be used and how it will be keyed.
Costs. As can be inferred, the costs of establishing a mobile wireless network are very high. These costs
must be recovered from the users of the system within its projected useful life. As an example, compare
the cost of a typical local wired telephone call with the cost of a cellular telephone call. In many areas,
the local wired call is offered at a fixed rate for an unlimited call length, while the cellular call is charged
by the minute.
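This pricing difference implies a break-even point in usage. The flat rate and per-minute charge below are hypothetical figures chosen for illustration:

```python
# Break-even usage for flat-rate wired service vs. per-minute cellular billing.
# The $20/month flat rate and $0.35/minute airtime charge are assumptions.

FLAT_RATE = 20.00        # wired: unlimited local calling per month
PER_MINUTE = 0.35        # cellular airtime charge

break_even_minutes = FLAT_RATE / PER_MINUTE
print(f"cellular costs more after about {break_even_minutes:.0f} minutes per month")
```

Under these assumed rates, a user talking more than about an hour per month pays more on the cellular system.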
Conclusions
Wireless networking has clear usefulness where it is necessary to have access to data while mobile.
Applications of wireless networking in such areas as police work or emergency services have already
proven their value.
The costs and limitations of wireless networking must be carefully considered before embarking on
a major initiative. With the technology in this area developing rapidly, in a wide variety of incompatible
directions, caution must be exercised.

References
Bates, R.J. 1994. Wireless Networked Communication. McGraw-Hill, New York.
Breed, G. 1994. Wireless Communications Handbook. Cardiff.
Calhoun, G. 1992. Wireless Access and the Local Telephone Network. Artech House.
Davis, P.T. and McGuffin, C.R. 1995. Wireless Local Area Networks. McGraw-Hill, New York.
Lee, W.C.Y. 1995. Mobile Cellular Telecommunications. McGraw-Hill, New York.
Nemzow, M.A.W. 1995. Implementing Wireless Networks. McGraw-Hill, New York.

Satellite Communications
Daniel F. DiFonzo
Introduction
The impact of satellites on world communications since commercial operations began in the mid-1960s
is such that we now take for granted many services that were not available a few decades ago: worldwide
TV, reliable communications with ships and aircraft, wide area data networks, communications to remote
areas, direct TV broadcast to homes, position determination, and Earth observation (weather and
mapping). Future satellite-based global personal communications to hand-held portable telephones may usher
in yet another new era.
Satellites function as line-of-sight microwave relays in orbits high above the Earth, from which they can see
large areas of the Earth's surface. This unique feature ensures the continued growth of satellites even as
fiber-optic cables capture a larger market share of high-density point-to-point traffic. Satellites provide
cost-effective access for areas with low (thin-route) communications traffic, because Earth terminals can
be installed in locations where the high investment cost of terrestrial facilities might not be warranted.
Satellites are particularly well suited to wide area coverage for broadcasting, mobile communications,
and point-to-multipoint communications.
Satellite Applications
Figure 18.2.13 depicts several kinds of satellite links and orbits. The geostationary Earth orbit (GEO)
is in the equatorial plane at an altitude of 36,000 km with a period of one sidereal day (23h 56m 4.09s).
GEO satellites appear to be almost stationary from the ground (subject to small perturbations) and the
Earth antennas pointing to these satellites may need only limited or no tracking capability. The orbits
for which the highest altitude (apogee) is at or greater than GEO are sometimes referred to as high Earth
orbits (HEO). Low Earth orbits (LEO) typically range from a few hundred kilometers to about 1000
km, and medium Earth orbits (MEO) are at intermediate altitudes.
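The altitudes and periods quoted above are tied together by Kepler's third law for a circular orbit, T = 2π√(a³/μ). The short check below verifies that the ~36,000-km GEO altitude yields a period of one sidereal day; the 780-km LEO altitude is an assumed example value:

```python
import math

# Circular-orbit period from Kepler's third law, T = 2*pi*sqrt(a^3 / mu).

MU_EARTH = 398600.4418   # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6378.137       # km, Earth's equatorial radius

def orbital_period_s(altitude_km):
    a = R_EARTH + altitude_km          # semimajor axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH)

geo = orbital_period_s(35786)          # GEO altitude
leo = orbital_period_s(780)            # an assumed typical LEO altitude
print(f"GEO period: {geo:.0f} s (sidereal day = 86164 s)")
print(f"LEO period: {leo / 60:.0f} min")
```

The GEO result matches the sidereal day to within a few seconds, while a LEO satellite circles the Earth in roughly 100 minutes, which is why LEO systems need handoffs between satellites.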

FIGURE 18.2.13 Satellite links and orbits.

Initially, satellites were used primarily for point-to-point traffic in the GEO fixed satellite service
(FSS), e.g., for telephony across the oceans and for point-to-multipoint TV distribution to cable head
end stations. Large Earth station antennas with high-gain narrow beams and high uplink powers were
needed to compensate for limited satellite power. This type of system, exemplified by the early global
network of the International Telecommunications Satellite Consortium (Intelsat), used "Standard A"
Earth antennas with 30-m diameters. Since the start of Intelsat, many other satellite organizations and
consortia have been formed around the world to provide international, regional, and domestic services
(Rees, 1990).
As satellites have grown in power and sophistication, the average size of the Earth terminals has been
reduced. High-gain satellite antennas and relatively high-power satellite transmitters have led to very
small aperture Earth terminals (VSAT) with diameters of less than 2 m and modest powers of less than
10 W (Gagliardi, 1991). As depicted in Figure 18.2.13, VSAT terminals may be placed atop urban office
buildings, permitting private networks of hundreds or thousands of terminals which bypass terrestrial
lines. VSATs are usually incorporated into star networks where the small terminals communicate through
the satellite with a larger Hub terminal. The Hub retransmits through the satellite to another small
terminal. Therefore, VSAT-to-VSAT links require two hops with attendant time delays. With high-gain
satellite antennas and relatively narrowband digital signals (e.g., compressed voice at 8 kbps), direct
single-hop mesh interconnections of VSATs may be used.
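The penalty of the two-hop star topology is propagation delay, which can be estimated directly from the GEO altitude. Straight up-and-down slant ranges are assumed here; real paths to off-nadir terminals are somewhat longer:

```python
# One-way propagation delay through a GEO satellite, and the doubled delay
# of a VSAT-to-VSAT link that must relay through a hub ("two hops").

C = 299_792.458          # km/s, speed of light
GEO_ALTITUDE = 35_786    # km

single_hop = 2 * GEO_ALTITUDE / C          # Earth -> satellite -> Earth
double_hop = 2 * single_hop                # VSAT -> hub, then hub -> VSAT
print(f"single hop: {single_hop * 1000:.0f} ms, "
      f"double hop: {double_hop * 1000:.0f} ms")
```

A single GEO hop already adds about a quarter second; the double hop of a hub-relayed VSAT call approaches half a second, which is quite noticeable in conversational voice.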


Satellite Functions
The traditional function of a satellite is that of a "bent pipe" quasilinear repeater in space. As shown in
Figure 18.2.13, uplink signals from Earth terminals directed at the satellite are received by the satellite's
antennas, amplified, translated to a different downlink frequency band, channelized into transponder
channels, further amplified to relatively high power, and retransmitted toward the Earth. Transponder
channels are generally rather broad (e.g., bandwidth of 36 MHz) and each may contain many individual
or user channels.
Multiple access techniques, to be discussed later, allow many users to share a satellite's resources of
bandwidth and power and to avoid interfering with each other and with other satellite or terrestrial
systems. Multiple access systems segregate users by frequency, space, time, polarization, and signaling
code orthogonality.
Analog and digital modulations are both in widespread use. While frequency modulation (FM) has
been prevalent, recent advances in digital voice and video compression will lead to the widespread use
of digital modulation methods such as quaternary phase shift keying (QPSK) and quadrature amplitude
modulation (QAM). Figure 18.2.14 depicts the functional diagram appropriate to a satellite using
frequency division multiple access (FDMA) and reusing available frequencies by means of multiple
antenna beams. Interference can result if the sidelobes of one beam receive or transmit substantial energy
in the direction of the other beam.

FIGURE 18.2.14 Satellite system block diagram.

Newer satellite architectures, such as the NASA Advanced Communications Technology Satellite (ACTS) and
Motorola's Iridium system, may use regenerative repeaters, which process the uplink signals by
demodulating them to baseband. These baseband signals, which may be for individual users or may represent
frequency division multiplexed (FDM) or time division multiplexed (TDM) signals from many users, are
routed to downlink channels, modulated onto one or more radio frequency (RF) carriers, and transmitted
to Earth.
High-power direct broadcast satellites (DBS) operating at Ku-band (around 12 GHz) deliver TV
directly to home receivers having antennas less than 1 m in size. Such systems using analog FM are
operational in Japan and Europe. In the United States, DBS with digital modulation and compressed
video will provide more than four NTSC TV channels per 24-MHz transponder channel. For the United
States, where each DBS orbital location is allocated 32 transponder channels of 24 MHz each, more
than 128 conventional TV channels can be provided from a single DBS orbital location. DBS is seen
as an attractive medium for delivery of high-definition TV (HDTV) to a large number of homes.
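The channel-count figure quoted for the United States follows directly from the allocation arithmetic in the text:

```python
# Channel-count arithmetic behind the DBS figures: 32 transponder channels
# of 24 MHz each per orbital location, with digital compression fitting at
# least four NTSC programs per transponder.

TRANSPONDERS_PER_SLOT = 32
CHANNELS_PER_TRANSPONDER = 4           # "more than four" with compression

total = TRANSPONDERS_PER_SLOT * CHANNELS_PER_TRANSPONDER
print(f"at least {total} TV channels per DBS orbital location")
```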
Mobile satellite services (MSS) operating at L-band around 1.6 GHz have revolutionized communi-
cations with ships and, more recently, with aircraft which would normally be out of reliable
communications range of terrestrial radio signals. The International Maritime Satellite Consortium (Inmarsat)
operates the dominant system of this type.
Links between LEO satellites (or the NASA shuttle) and GEO satellites are used for data relay, e.g.,
via the NASA Tracking and Data Relay Satellite System (TDRSS). Some systems will use intersatellite
links (ISL) to improve the interconnectivity of a wide-area network. ISL systems would typically operate
at frequencies above 20 GHz or even use optical links.
An exciting new development is the prospective use of L-band frequencies with a large number (12
to 66) of LEO satellites for personal communications systems (PCS) directly with small hand-held
portable telephones anywhere in the world.
Access and Modulation
Satellites act as central relay nodes which are visible to a large number of users who must efficiently
use the limited power and bandwidth resources. A brief summary of issues specific to satellite systems
is given below.
Frequency division multiple access (FDMA) has been the most prevalent access for satellite systems
until recently. Individual users assigned a particular frequency band may communicate at any time.
Satellite filters subdivide a broad frequency band into a number of transponder channels, e.g., the 500-
MHz uplink FSS band from 5.925 to 6.425 GHz may be divided into 12 transponder channels of 36
MHz bandwidth plus guard bands. This limits the interference among adjacent channels in the corre-
sponding downlink band of 3.7 to 4.2 GHz.
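The guard-band budget implied by this channelization example can be checked with simple arithmetic:

```python
# Guard-band arithmetic for the C-band FSS example in the text: a 500-MHz
# uplink band carved into 12 transponder channels of 36 MHz each.

BAND_MHZ = 500
CHANNELS = 12
CHANNEL_BW_MHZ = 36

used = CHANNELS * CHANNEL_BW_MHZ
guard_total = BAND_MHZ - used
print(f"occupied: {used} MHz, left for guard bands: {guard_total} MHz")
print(f"~{guard_total / CHANNELS:.1f} MHz of guard spectrum per channel")
```

The 12 channels occupy 432 MHz, leaving 68 MHz of the band for guard spacing between and around the transponder channels.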
FDMA implies that several individual carriers coexist in the transmit amplifiers. In order to operate
the amplifiers in a quasilinear region relative to their saturated output power to limit intermodulation
products, the amplifiers must be operated in a backed-off condition. For example, in order to limit third-
order intermodulation power for two carriers in a conventional TWT (traveling wave tube) amplifier to
-20 dBc, its input power must be reduced (input backoff) by about 10 dB relative to the power that
would drive it to saturation. The output power of the carriers is reduced by about 4 to 5 dB (output
backoff). Amplifiers with fixed bias levels will consume power even if no carrier is present. Therefore,
dc-to-RF efficiency degrades as the operating point is backed off. For amplifiers with many carriers, the
intermodulation products have a noise-like spectrum and the noise power ratio is a better measure of
multicarrier performance.
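The decibel bookkeeping of backoff is easy to mishandle, so the quoted figures are worth restating as linear power ratios (a 4.5-dB output backoff is taken as representative of the "4 to 5 dB" range in the text):

```python
# dB bookkeeping for amplifier backoff: a 10-dB input backoff and a
# 4.5-dB output backoff, expressed as linear power ratios.

def db_to_ratio(db):
    """Convert a decibel value to a linear power ratio."""
    return 10 ** (db / 10)

input_backoff_db = 10
output_backoff_db = 4.5

print(f"input drive:  {1 / db_to_ratio(input_backoff_db):.2f} "
      f"of saturating drive power")
print(f"output power: {1 / db_to_ratio(output_backoff_db):.2f} "
      f"of saturated output")
```

A 10-dB input backoff means driving the amplifier at one tenth of its saturating input power, yet the output falls only to about 35% of saturation, illustrating the amplifier's compressive (nonlinear) characteristic near saturation.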
Time division multiple access (TDMA) users share a common frequency band and are each assigned
a unique time slot for their digital transmissions. At any instant there is only one carrier in the transmit
amplifier, requiring little or no backoff from saturation. The dc-RF efficiency is high. A drawback is the
system complexity required to synchronize widely dispersed users in order to avoid intersymbol inter-
ference caused by more than one user signal appearing in a given time slot. Also, the total transmission
rate in a TDMA satellite channel must be essentially the sum of the users' rates, including overhead
bits for framing, synchronization and clock recovery, and source coding. At the present state of
the art, Earth terminal hardware costs may be higher than for FDMA. Nevertheless, TDMA systems are
gaining acceptance for some applications.
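The aggregate-rate burden of TDMA can be made concrete. The user count, per-user rate, and 10% overhead figure below are illustrative assumptions, not values from the text:

```python
# TDMA channel-rate requirement: the channel must carry the sum of the
# user rates plus framing/synchronization overhead. All figures assumed.

USERS = 100
USER_RATE_BPS = 64_000       # e.g., one PCM voice channel per user
OVERHEAD = 0.10              # framing, sync, clock recovery, coding

aggregate = USERS * USER_RATE_BPS * (1 + OVERHEAD)
print(f"required TDMA channel rate: {aggregate / 1e6:.2f} Mbps")
```

Each Earth terminal must therefore burst at the full multi-Mbps channel rate during its slot, even though its own sustained traffic is only 64 kbps, which is one reason TDMA terminal hardware has been more expensive.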
Code division multiple access (CDMA) modulates each carrier with a unique pseudorandom code,
usually by means of either a direct sequence or frequency hopping spread spectrum modulation. As the
CDMA users occupy the same frequency band at the same time, the aggregate signal in the satellite
amplifier is noise-like. Individual signals are extracted at the receiver by correlation processes. CDMA
tolerates noise-like interference but does not tolerate large deviations from average loading conditions.
One or more very strong carriers could violate the noise-like interference condition and generate strong
intermodulation signals.
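The spreading-and-correlation idea behind direct-sequence CDMA can be shown with a toy example. The 4-chip orthogonal (Walsh) codes are a textbook choice introduced for illustration, not taken from this section:

```python
# Toy direct-sequence CDMA: two users spread their data bits with
# orthogonal +/-1 chip codes, the signals add in the channel, and each
# receiver recovers its own bit by correlating with its code.

CODE_A = [+1, +1, +1, +1]
CODE_B = [+1, -1, +1, -1]          # orthogonal to CODE_A

def spread(bit, code):
    return [bit * c for c in code]

def despread(signal, code):
    """Correlate and decide: positive sum -> +1, negative -> -1."""
    corr = sum(s * c for s, c in zip(signal, code))
    return 1 if corr > 0 else -1

bit_a, bit_b = +1, -1
channel = [a + b for a, b in zip(spread(bit_a, CODE_A), spread(bit_b, CODE_B))]

print("user A recovers:", despread(channel, CODE_A))
print("user B recovers:", despread(channel, CODE_B))
```

Because the codes are orthogonal, each correlator cancels the other user's contribution exactly; with pseudorandom rather than orthogonal codes, the other users appear as residual noise-like interference, as described above.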
User access is via assignments of a frequency, time slot, or code. Fixed assigned channels allow a
user unlimited access. However, this may result in poor utilization efficiency for the satellite resources
and may imply higher user costs (analogous to a leased terrestrial line). Other assignment schemes
include demand assigned multiple access (DAMA) and random access (e.g., for the Aloha concept).
DAMA systems require the user to first send a channel request over a common control channel. The
network controller (at another Earth station) seeks an empty channel and instructs the sending and
receiving units to tune to it (either in frequency or time slot). A link is maintained for the call duration
and then released to the system for other users to request. Random access is economical for lightly used
burst traffic such as data. It relies on random times of arrival of data packets, and protocols are in place
for repeat requests in the event of collisions (Gagliardi, 1991).
In practice, combinations of multiplexing and access techniques may be used. A broad band may be
channelized or frequency division multiplexed (FDM) and FDMA may be used in each subband, e.g.,
FDM/FDMA.
The traditional satellite modulation format has been FM. However, recent trends indicate that digital
modulations such as M-ary PSK and QAM will become more prevalent for nearly all applications
including voice, data, and TV. The efficiencies afforded by digital modulations arise partly because they
allow signal processing for bandwidth compression. Compressed digital TV transmission allows a
significant improvement in capacity compared with FM.
Trends
Satellite communications have approached a mature stage of development, and their competitiveness for
point-to-point voice traffic, compared with fiber, has been questioned. However, as mentioned, satellites
will continue to exploit their unique wide view of the Earth for such applications as broadcast and
personal communications. The satellite industry's maturity also presents another challenge. To date,
satellite construction has resembled a craft industry with extensive custom design, long lead times, long
test programs, and high cost. Satellites will benefit from modern lean production and design-to-cost
concepts that could lead to systems having lower cost per unit of capacity and higher reliability.
Technology advances that are being pursued include development of lightweight lightsats for econom-
ical provision of services at low cost, more sophisticated on-board processing to improve interconnec-
tivity, intersatellite links, improved components such as batteries, and even such speculative concepts
as providing satellite power from the ground via high-power laser beams using adaptive optics (Landis
and Westerlund, 1992).

Defining Terms
Attitude: The angular orientation of a satellite in its orbit, characterized by roll (R), pitch (P), and yaw
(Y). The roll axis points in the direction of flight, the yaw axis points toward the Earth's center,
and the pitch axis is perpendicular to the orbit plane such that R × P = Y.
Backoff: Amplifiers are not linear devices when operated near saturation. To reduce intermodulation
products for multiple carriers, the drive signal is reduced or backed off. Input backoff is the dB
difference between the input power required for saturation and that employed. Output backoff
refers to the reduction in output power relative to saturation.
Bus: The satellite bus is the ensemble of all the subsystems that support the antennas and payload
electronics. It includes subsystems for electrical power, attitude control, thermal control, TT&C,
and structures.
Frequency reuse: A way to increase the effective bandwidth of a satellite system when available spectrum
is limited. Dual polarizations and multiple beams pointing to different Earth regions may utilize
the same frequencies as long as, for example, the gain of one beam or polarization in the directions
of the other beams or polarization (and vice versa) is low enough. Isolations of 27 to 35 dB are
typical for reuse systems.
Polarization isolation: Frequency reuse allocates the same bands to several independent satellite tran-
sponder channels. The only way these signals can be kept separate is to isolate the antenna response
for one reuse channel in the direction or polarization of another. The beam isolation is the coupling
factor for each interfering path (ideally 0, or -∞ dB).

References
Gagliardi, R.M. 1991. Satellite Communications. Van Nostrand Reinhold, New York.
Griffin, M.D. and French, J.R. 1991. Space Vehicle Design. American Institute of Aeronautics and
Astronautics, Washington, D.C.
Long, M. 1990. The 1990 World Satellite Annual. MLE, Winter Beach, FL.
Morgan, W.L. and Gordon, G.D. 1989. Communications Satellite Handbook. John Wiley & Sons, New
York.
Rees, D. 1990. Satellite Communications: The First Quarter Century of Service. John Wiley & Sons,
New York.
Richaria, M. 1995. Satellite Communications Systems: Design Principles. McGraw-Hill, New York.
Roddy, D. 1995. Satellite Communications. Prentice-Hall, Englewood Cliffs, NJ.
Wertz, J.R. and Larson, W.J., eds. 1991. Space Mission Analysis and Design. Kluwer Academic Pub-
lishers, Dordrecht, The Netherlands.

Further Information
For a brief history of satellite communications see Satellite Communications: The First Quarter Century
of Service, by D. Rees (Wiley, 1990). Propagation issues are summarized in Propagation Effects Hand-
book for Satellite Systems Design, 1983, NASA Reference Publication 1082(03), November 1990;
descriptions of the proposed LEO personal communications systems are in the FCC filings for Iridium
(Motorola), Globalstar (SS/Loral), Odyssey (TRW), Ellipso (Ellipsat), and Aries (Constellation Com-
munications), 1991 and 1992.

Computer Communications
Lloyd W. Taylor
The previous three sections have discussed various communications links used in digital interconnections.
This section discusses common protocols used to handle network addressing and delivery over these
communication links.
The OSI Network Model
The International Standards Organization (ISO) developed a model of network communications called
the Open Systems Interconnection (OSI) model. This is an abstract model that does not necessarily imply
any particular implementation. In fact, it is most commonly used as a reference model into which various
network protocols are mapped.
The OSI model (Table 18.2.4) defines seven layers. The topmost layer defines the interface between
the user-interface software and the network, while the bottommost layer defines the electrical charac-
teristics of the network itself. Not all protocols implement all seven layers, as will be seen later in this
section.

TABLE 18.2.4 The OSI Network Model

Layer Name      Function
Application     Interface between user applications and network
Presentation    Interface between network applications and network
Session         Destination definition
Transport       Internet addressing
Network         Switching, routing, management
Data link       Network access control, local addressing
Physical        Electrical or optical cable, signaling, and transmission standards

The seven layers have the following functions:


The application layer is the layer with which the user or the user's application interfaces. This layer
provides such capabilities as file transfer, electronic mail transfer, and video teleconferencing.
The presentation layer defines the format of the information to be transferred. It handles the mapping
between the network representation of data and the local representation of data. This layer is
typically where network encryption and compression will occur.
The session layer is responsible for setting up connections between processes on separate computers.
It connects the processes, manages the connection, and terminates the connection when the data
transfer is complete.
The transport layer guarantees error-free communication for the connection. It handles error checking
and retransmission of lost packets, and manages the traffic flow between the end points of the
connection.
The network layer manages the creation, maintenance, and termination of connections between
computers. It handles the routing of data across the intervening networks, interfacing the logical
abstractions of the higher layers with the technical specifics of the lower layers.
The data link layer handles data at the packet level. It enables the receiving computer to check that
these blocks of data have been received reliably.
The physical layer implements the standards for sending bits across a particular type of network. It
includes standards for the wire or fiber itself, standards for the signal characteristics, and standards
for the connection of the network to the computer.
These layers work together to ensure reliable communications between processes on different
computer systems. Note that each computer must implement the same set of protocols to be able to
interoperate. Even if both systems use protocols that are OSI standard, there is no guarantee that
they will talk to each other.
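The layer-for-layer cooperation described above amounts to encapsulation: on the way down the stack each layer wraps the payload in its own header, and the receiving stack peels them off in reverse order. The sketch below uses the layer names of Table 18.2.4; the header contents are purely illustrative:

```python
# Sketch of layered encapsulation and decapsulation. Header contents
# are invented for illustration; only the layering principle is real.

LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link"]

def encapsulate(payload):
    # Each layer wraps the unit handed down from the layer above.
    for layer in LAYERS:
        payload = f"[{layer}-hdr]{payload}"
    return payload

def decapsulate(frame):
    # The receiving stack strips headers outermost-first.
    for layer in reversed(LAYERS):
        prefix = f"[{layer}-hdr]"
        assert frame.startswith(prefix), "peer stacks must match layer-for-layer"
        frame = frame[len(prefix):]
    return frame

frame = encapsulate("hello")
print(frame)
print(decapsulate(frame))
```

If the two stacks do not implement matching layers, the receiver's strip loop fails, which is the toy analogue of two "OSI standard" systems that nevertheless cannot talk to each other.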
Common Network Protocols
A typical corporate network has many different protocols in use at any one time. This is a legacy of the
days when every vendor implemented its own network standards, which resulted in serious
interoperability problems.
In the recent past, there has been a significant move toward the use of TCP/IP as a standard protocol
for all network communications. Most vendors today offer TCP/IP as an option, and many have adopted
it as a standard. If all systems on a corporate network speak the same language, it is much easier to
get them to talk to each other. It is also much simpler to configure and operate the computers on that
network, as they need speak only one language, rather than several, to interconnect with all other
computers.
The three protocols selected for discussion in this section are among the most common. They are
provided here as a set of examples of how various protocols are implemented, to provide an introduction
to the complexities of internetworking.
TCP/IP. The Transmission Control Protocol/Internet Protocol (TCP/IP) was developed by Vinton Cerf
and Robert Kahn in 1974. They built on earlier packet-network research, including the Harvard doctoral
work of Robert Metcalfe, who went on to develop Ethernet. By 1982, these ideas had been further
developed by others and were published as standards by the U.S. Department of Defense.
TCP/IP is a very simple set of protocols. It implements only those pieces that are necessary for reliable
and efficient network communications and leaves the more complex tasks (such as network routing) to
other applications. As a result, TCP/IP is relatively simple to implement and therefore has been imple-
mented on every computer operating system. It has become the lingua franca of networking.

The mapping of TCP/IP into the OSI model is shown in Table 18.2.5. Note that TCP/IP does not
implement all layers of the OSI model explicitly, but combines several layers into a single implementation
piece.

TABLE 18.2.5 Mapping of TCP/IP into the OSI Model

OSI Layer Name                         TCP/IP Implementation
Application, Presentation, Session     File transfer protocol (FTP), simple mail transport
                                       protocol (SMTP), virtual terminal protocol (Telnet)
Transport                              Transmission control protocol (TCP), user datagram
                                       protocol (UDP), internet control message protocol (ICMP)
Network                                Internet protocol (IP), address resolution protocol (ARP)
Data link, Physical                    Ethernet (coax, twisted pair), token ring, FDDI, ATM,
                                       X.25, SONET, LocalTalk, ...

The protocols listed perform the following functions:


File transfer protocol implements the messaging necessary to transfer files from one computer to
another. It handles such things as authentication, directory listings, file selection, and datastream
format. Most systems implement an FTP command that allows the user to interact with the FTP
system through the use of either text commands or a graphical user interface.
Simple mail transport protocol implements a standard for the transfer of electronic mail messages
between computer systems. It handles such things as system identification, addressing, the sep-
aration of the body of the message from the headers of the message, and the termination of the
transfer. A typical computer will implement a mail user interface that completely isolates the user
from this protocol.
Virtual terminal protocol implements a standard for the negotiation and connection of a terminal
emulator to a remote computer. Telnet negotiates such things as data representation standards
(ASCII, EBCDIC, Latin1), datastream format (8-bit or 7-bit), and control characters to be used
for interrupt and flow control. Typical computers implement a user interface that provides inter-
pretation of terminal control codes sent by the remote computer (e.g., clear screen), as well as
controls for establishing and terminating a terminal emulation session.
Transmission control protocol underlies the above protocols and provides a reliable, full-duplex,
datastream service between two computers. It establishes a virtual circuit (or connection) between
the two systems, then sends and receives data across that connection until the communicating
processes request that the circuit be disestablished. TCP depends on IP for sending and receiving
the packets that are transferred across the virtual circuit.
User datagram protocol is used by processes that do not require guaranteed delivery of data. No
virtual circuit is established. The sending process simply addresses the packet to a particular
process on the remote computer and sends it across the network. It is up to the processes at either
end to ensure that the packet is received properly, as the UDP protocol makes no guarantees.
Internet control message protocol is used to send control messages concerning network functions to
TCP/IP clients. Instructions such as "reduce your transmission rate" and "your requested destination
is unreachable" are transferred at this level.
Internet protocol defines the standard datagram that transports information across the network. It does
not guarantee delivery, but expects higher-level protocols (such as TCP) to handle that function.
It handles packet-level flow control and error checking.

Address resolution protocol handles the mapping of a logical IP address to a physical network address.
It enables the local computer to determine where the packet must be sent for the next step of its
journey across the network.
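The datagram model described for UDP can be demonstrated with a minimal exchange over the loopback interface. No connection is established, and the protocol itself gives no delivery guarantee (loopback happens to be reliable, which is why this small demonstration works):

```python
import socket

# Minimal UDP exchange over loopback: bind a receiver, send one datagram,
# receive it. There is no handshake and no acknowledgment at this layer.

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))                 # let the OS pick a free port
receiver.settimeout(2)                          # don't block forever
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"status report", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print("received:", data.decode())

sender.close()
receiver.close()
```

Had the datagram been lost, nothing in UDP would have retransmitted it; as noted above, that responsibility falls to the communicating processes.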
IPX. The internetwork packet exchange (IPX) protocol is commonly used in Novell NetWare filesharing
systems. It is based on the XNS suite of protocols designed by Xerox. IPX is a lightweight protocol,
designed to be very efficient (easy to process) and to provide maximum performance on a local
network. The downside of lightweight protocols is that they tend not to work as well in large
internetworks: because they are less complex, they lack the components needed to work well in
complex network environments.
The differences between IPX and TCP/IP can be seen by comparing Tables 18.2.6 and 18.2.5. Notice
the significantly smaller number of layers in the IPX protocol suite. This simplification is the source of
the efficiency of IPX.

TABLE 18.2.6 Mapping of IPX into the OSI Model

OSI Layer Name                         IPX Implementation
Application, Presentation, Session     NetWare core protocol (NCP)
Transport                              Sequenced packet exchange (SPX)
Network                                Internetwork packet exchange (IPX)
Data link, Physical                    Ethernet (coax, twisted pair), token ring, FDDI, ATM,
                                       X.25, SONET, LocalTalk, ...

The protocols listed perform the following functions:


NetWare core protocol implements directory services, connection services, security, and similar functions.
It is essentially the operating system for NetWare servers and, as such, implements far more
functionality than the TCP/IP protocols, where the top of the OSI stack sits underneath the operating
system. Core services can be added to NetWare servers through NetWare loadable modules
(NLMs), which can add such things as database engines and gateways to TCP/IP
networks.
Sequenced packet exchange builds on the services offered by IPX. SPX adds packet acknowledgment
to the functions in IPX, so that the sending computer can know for certain that the packet was
received at the far end. Note that the use of SPX is not required in NetWare networks.
Internetwork packet exchange implements an efficient packet transfer service. The packet contains
only address information, data, and an error-checking code, resulting in a packet with little
overhead.
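The "little overhead" idea can be sketched with a toy fixed-layout packet. The layout below is invented for illustration; it is not the real IPX header (which carries 12-byte network.node.socket addresses and, in practice, leaves its checksum field unused):

```python
import struct

# Toy fixed-layout packet in the spirit of a lightweight protocol: a
# 4-byte destination address, a 4-byte source address, a 16-bit checksum,
# and the payload. Layout is hypothetical, for illustration only.

def checksum16(data):
    """Sum of all bytes, folded into 16 bits (a deliberately simple check)."""
    return sum(data) & 0xFFFF

def build_packet(dst, src, payload):
    header = struct.pack("!II", dst, src)       # network byte order
    return header + struct.pack("!H", checksum16(header + payload)) + payload

def parse_packet(packet):
    dst, src = struct.unpack("!II", packet[:8])
    (cksum,) = struct.unpack("!H", packet[8:10])
    payload = packet[10:]
    ok = cksum == checksum16(packet[:8] + payload)
    return dst, src, payload, ok

pkt = build_packet(0x0A000001, 0x0A000002, b"print job")
dst, src, payload, ok = parse_packet(pkt)
print(hex(dst), hex(src), payload, ok)
```

With only ten bytes of header and checksum in front of the data, almost the entire packet is payload, which is the performance argument made for lightweight protocols above.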
It should be noted that current NetWare networks can be implemented using TCP/IP protocols. This
is appropriate where the network is large (or expected to grow), or where there is already a requirement
for TCP/IP, such as in networks connected to the Internet.
AppleTalk. AppleTalk was designed by Apple Computer to provide a simple-to-install, simple-to-
manage network. It has an extensive set of protocols to provide such services as automatic address
selection (for workstations and servers), printer sharing, routing, and security. Anyone who has installed
an AppleTalk network has benefited from its simple "plug and play" characteristics.
This simplicity for the user comes at the cost of a significant complexity in the protocol. Since
AppleTalk must discover on its own what most other protocols require the network administrator to
manually configure, it follows that there must be many additional functions, resulting in a necessarily
more complex protocol suite.

1999 by CRC Press LLC


Communications and Information Systems 18-21

The major protocols used in AppleTalk are listed in Table 18.2.7. Notice the large number of protocols
at the session and transport layers. These additional protocols are used to self-configure the AppleTalk
network.

TABLE 18.2.7 Mapping of AppleTalk into the OSI Model

OSI Layer Name        AppleTalk Implementation

Application           AppleShare
Presentation          AppleTalk filing protocol (AFP), PostScript
Session               AppleTalk session protocol (ASP), AppleTalk data stream protocol (ADSP),
                      zone information protocol (ZIP), printer access protocol (PAP)
Transport             AppleTalk transaction protocol (ATP), name binding protocol (NBP),
                      AppleTalk echo protocol (AEP), routing table maintenance protocol (RTMP)
Network               Datagram delivery protocol (DDP)
Data link/Physical    Ethernet (coax, twisted pair), token ring, FDDI, ATM, X.25, SONET, LocalTalk

The protocols listed perform the following functions:


AppleShare provides file sharing services.
AppleTalk filing protocol handles the communications between the user's computer and the AppleShare
file server. It allows users to share files and applications, handle security, and ensure that each
user has a current view of the shared file service.
PostScript defines the format of documents printed from Macintosh applications. It is a page descrip-
tion language, developed by Adobe.
AppleTalk session protocol manages the creation, operation, and destruction of sessions between
computers. It organizes the sequencing of function requests (who gets access to what service in
what order).
AppleTalk data stream protocol is a connection-oriented protocol (like TCP) that reliably transfers a
stream of bytes between two computers.
Zone information protocol handles the discovery and mapping of the entire AppleTalk network. It
finds all connected networks and builds a table that is used by AppleTalk to find other computers,
and to route packets appropriately.
Printer access protocol sets up and manages connections between the user's computer and printers
or print servers.
AppleTalk transaction protocol is used to pass requests and responses reliably between computers. It
ensures that the receiving computer accurately gets the request and also ensures that the answer
returns reliably to the requester.
Name binding protocol translates between the name of a computer and its address.
AppleTalk echo protocol responds to a request by a remote computer by simply acknowledging that
an echo request was received. This function is used to be sure that a remote computer is active.
Routing table maintenance protocol is used by AppleTalk routers to discover and keep track of the
configuration of the AppleTalk internet. It is used to maintain routing tables or internal network
maps.
Datagram delivery protocol is the core of the AppleTalk protocol suite. It moves packets of data from
one computer to another, but does not guarantee reliable delivery.
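DDP's best-effort datagram service is similar in spirit to UDP in the TCP/IP suite. As an illustration of what "moves packets but does not guarantee reliable delivery" means in practice, the following Python sketch (UDP, not AppleTalk, since DDP is not exposed by modern socket APIs) sends a single datagram for which no acknowledgment is ever returned:

```python
import socket

# Send one datagram over the loopback interface.  Like DDP, UDP provides
# addressed, best-effort delivery with no acknowledgment; the payload text
# is an arbitrary illustrative choice.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))            # let the OS pick a free port
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"one datagram, best effort", ("127.0.0.1", port))

data, addr = recv.recvfrom(1024)       # no acknowledgment is sent back
print(data.decode())
recv.close()
send.close()
```

A reliable exchange, as with ATP or SPX, would require the receiver to send an explicit response; here the sender simply never learns whether the datagram arrived.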


Defining Terms
AppleTalk: A set of protocols developed by Apple Computer to interconnect Macintosh computers and
peripherals.
FTP: File transfer protocol.
IP: Internet protocol.
IPX: Internetwork packet exchange.
ISO: International Standards Organization.
Layer: One part of a protocol stack.
OSI model: The model of network interfaces as a seven-layer entity.
Protocol: A standard way of doing things.
SPX: Sequenced packet exchange.
TCP: Transmission control protocol.
Telnet: Virtual terminal protocol.

References
Comer, D. 1988. Internetworking with TCP/IP. Prentice-Hall, Englewood Cliffs, NJ.
Cypser, R.J. 1991. Communications for Cooperating Systems: OSI, SNA, and TCP/IP. Addison-Wesley,
Reading, MA.
Sheldon, T. 1993. Novell NetWare 4. McGraw-Hill, New York.
Sidhu, G.S. et al. 1990. Inside AppleTalk. Apple Computer.
Tittle, E. and Robbins, M. 1994. Network Design Essentials. Academic Press, New York.
Wilder, F. 1993. A Guide to the TCP/IP Protocol Suite. Amacom, New York.


18.3 Communications and Information Theory


A. Brinton Cooper, III

Communications Theory
Communications science and technology have been experiencing unprecedented growth and impact in
the last decade of the 20th century. The reasons are twofold. Advances in making electronic devices
smaller and in reducing their requirement for electric power permit the use of complex communication
processing, coding, and decoding functions that until recently languished in the research literature. On
the other hand, spurred by the demands of society for more and better communication functions, scholars
and their students continue to bring forth more efficient algorithms for using communication channels
and to advance networking from the traditional techniques used by the telephone company to a plethora
of methods for networks of portable and mobile terminals.
Yet, these advances could not have occurred without a firm theoretical basis. Reliance on intuitive
and untested ideas in communications often leads down the wrong path. Shannon's idea of error-free
communications at nonzero rates through noisy channels and Nyquist's notion that all the information
in a continuous band-limited signal is contained in a string of samples that are taken at a uniform, finite
rate ran counter to the collective intuition of practicing communications engineers of 1948 and 1928,
respectively.
In what follows we introduce communications theory and information theory, believing that an
introduction to the principles upon which a discipline is based provides a firm foundation for exploring
the technology which it underlies.
Structure and Functions of a Communication System
The purpose of a communications system is to convey information. In order that the information be
transmitted efficiently and economically and be received reliably and with fidelity, several important
signal processing operations are performed prior to sending the information over the (usually noisy)
channel. Complementary functions are performed at the receiver.
The center of focus of a communications system (Figure 18.3.1) is the channel. A channel is a medium
of communication by which information from a source is conveyed to a destination (or sink). The source
can be a microphone, a sensor, a video camera, a compact disc player, or anything else that produces an
electrical signal which represents information. The destination is the recipient of the source signal. It
can be a piece of magnetic tape, a computer file, a meter, or a loudspeaker. For the moment, we consider
information to have its intuitive meaning. Information is placed on the channel (or sent over the
channel) by a transmitter and obtained from the channel by a receiver. The physical nature of most
communication channels renders them unsuitable for conveying information in the form produced by
the source. That is, the signal produced by a channel transmitter is quite different from that produced
by a source. The modulator bridges this gap by modifying one or more parameters of the transmitter
output signal in accordance with the output of the source. The corresponding demodulator in the receiver
recovers the information signal from the received channel waveform.

FIGURE 18.3.1 A simple communication system.


Consider the example shown in Figure 18.3.2. The source is an optical sensor that produces a
continuous electrical waveform representing a picture or image of something. This waveform is processed
in order to make it more suitable for transmission over a binary communication channel. A source
encoder converts the analog waveform into a string of binary digits (analog to digital conversion) and
removes redundant elements (compression) that can be reconstructed at the destination. If the informa-
tion is sensitive, there may be a stage of encryption in order to prevent eavesdroppers from viewing the
image. A channel encoder adds to this digit string some extra bits which are used by a channel decoder
in the receiver to correct patterns of errors that can be caused by noise, interference, or distortion in
the channel. A transmitter converts the digital signal to one that can be carried over the channel. At the
receiving end, each of these functions is reversed, in sequence, to reproduce the original optical picture
or image.
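The channel encoding step in this example can be made concrete with the simplest possible error-correcting code, a 3-fold repetition code. The sketch below is purely illustrative; practical systems use far stronger codes, which are discussed under Information Theory:

```python
# Channel encoder: repeat each bit three times so the decoder can outvote
# a single bit error introduced by the channel.
def encode(bits):
    return [b for b in bits for _ in range(3)]

# Channel decoder: majority vote over each group of three received bits.
def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)
    return out

msg = [1, 0, 1, 1]
coded = encode(msg)
coded[1] ^= 1                          # the channel flips one bit
print(decode(coded) == msg)  # -> True : the single error is corrected
```

The cost of this protection is threefold expansion of the bit string; real channel codes achieve far better trade-offs between redundancy and error-correcting power.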

FIGURE 18.3.2 Example transmission system.

Signals
The contents of a communication are carried by a signal, typically a time-varying voltage, current, or
electromagnetic wave. Information is actually conveyed in the time variations. Important distinctions
among types of signals permit the development and use of methods and tools for analyzing and processing
signals.
Characteristics of Signals. A signal s(t) is said to be periodic if and only if s(t + T) = s(t), −∞ < t < ∞,
where T is said to be the period of s(t). Any signal which is not periodic is said to be aperiodic. Familiar
periodic signals include the sinusoid sin(2πft + φ), which has period T = 1/f. This is an important signal
model in communication theory.
A signal which is a completely specified function of time at every instant of time is said to be
deterministic. By contrast, the precise values of a random signal cannot be predicted in advance. Noise
in a radio communication system often is modeled as a random signal. In many cases of interest, the
output of a noisy communication channel can be modeled as the sum of a deterministic and a random
part, r(t) = s(t) + n(t), where r(t) is the received waveform, s(t) is the transmitted signal, and n(t) is the
noise induced by the channel.
A signal can be a continuous function of a continuous time variable, such as the sinusoid
used above. On the other hand, in many systems, signals are sensed and measured only at discrete values
of the time variable. This practice is the foundation of digital communications. Further, the signal s(t)
may not be a continuous function of time, but rather it may assume values from a finite set only. Such
a signal is said to be a quantized or a discrete-valued signal.
Signal Representations. Analyzing the effects on communication signals of processing circuits, propa-
gation paths, and noise and interference requires accessible mathematical representations of those signals.
What follows introduces widely used and powerful representations for deterministic signals, both periodic
and aperiodic. More complete treatments can be found in many excellent texts. Several are mentioned
at the end of this section.
Suppose the signal s(t) represents the waveform produced by someone singing. A microphone can be
placed to capture the sound for display on an oscilloscope, thus permitting the viewing of s(t) as a
function of time. Now we know, for example, that men and women sound fundamentally different when
they sing. Women are said to have voices that are higher than those of men. Yet, examining on the
oscilloscope the waveforms produced by male and female singers may fail to reveal these differences
to all but a trained observer. However, they would be captured easily by a graphical display of the
frequency spectrum, or simply the spectrum, of the singer's voice.
The usual spectral representation is given by the Fourier transform S(f) of the signal:

    S(f) = ∫_{−∞}^{∞} s(t) exp(−j2πft) dt    (18.3.1)

We also say that S( f ) is the frequency domain representation of signal s(t) and that it specifies the
spectral composition of the signal. Such a tool makes it possible to determine easily the effects on a
signal of filters, noisy and bandwidth-limited communication channels, antennas, and other devices
through which it may pass.
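The spectral view can be illustrated numerically. The following sketch (Python with NumPy; the tone frequency and sampling rate are illustrative choices, not from the text) estimates the spectrum of a pure tone with the discrete Fourier transform, a sampled-data counterpart of Equation (18.3.1):

```python
import numpy as np

# Sample a 440-Hz tone at 8 kHz for one second, then locate its spectral peak.
fs = 8000                          # sampling rate, Hz
t = np.arange(fs) / fs             # one second of time samples
s = np.sin(2 * np.pi * 440 * t)    # the "singer": a single pure tone

S = np.fft.rfft(s)                 # spectrum at non-negative frequencies
freqs = np.fft.rfftfreq(len(s), d=1 / fs)

peak_hz = freqs[np.argmax(np.abs(S))]
print(round(peak_hz))              # -> 440 : the peak sits at the tone frequency
```

A higher-pitched voice would simply shift this peak (and its harmonics) upward in frequency, which is exactly the difference the time-domain oscilloscope view tends to hide.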
Bandwidth. The breadth of spectral occupancy of a signal is called its bandwidth. This term makes
specific a notion of the width of the signal. As an important parameter of the communication channel,
bandwidth measures the range of frequencies over which the channel passes energy with relatively little
attenuation. (In this regard, a channel behaves as does any filter, a device which permits one or more
bands of frequencies to pass relatively unattenuated while deeply attenuating all other frequencies.)
A signal whose spectrum is limited to a range of frequencies near the origin is said to be a low pass
signal (Figure 18.3.3), while one whose spectrum is centered about some frequency away from the origin
is called a band pass signal (Figure 18.3.4). Channels (and filters) have similar designations.

FIGURE 18.3.3 Low-pass filter.

FIGURE 18.3.4 Band-pass filter.

It is not possible for physical channels and real signals to assume perfectly rectangular functional
forms since neither mathematical discontinuities nor infinite slopes can occur in physical signals.
Therefore, the definition of bandwidth must specify the level of attenuation at which spectral width is
measured. Two definitions are commonly used. The half-power or three-decibel (dB) bandwidth is
defined by the maximum and minimum frequencies in the signal where its amplitude has dropped to
1/√2 of its peak value. The definition comes from the decibel, the logarithmic expression of the ratio
of power loss or gain:

    Power ratio (decibels) = 10 log10(Pout/Pin)

Notice that 3 dB corresponds to a power ratio of one half. A major virtue of the 3-dB bandwidth is
the ease with which it can be measured in the laboratory or in the field.
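The decibel arithmetic can be checked directly. A short Python sketch (illustrative values only):

```python
import math

def power_ratio_db(p_out, p_in):
    """Power gain (or loss, if negative) in decibels: 10 log10(Pout/Pin)."""
    return 10 * math.log10(p_out / p_in)

# Halving the power is very nearly a 3-dB loss, which is why the half-power
# bandwidth is also called the 3-dB bandwidth.
half_power_db = power_ratio_db(0.5, 1.0)
print(round(half_power_db, 2))    # -> -3.01

# At the half-power points the amplitude has fallen to 1/sqrt(2) of its peak,
# since power is proportional to the square of amplitude.
amplitude_ratio = math.sqrt(10 ** (half_power_db / 10))
print(round(amplitude_ratio, 4))  # -> 0.7071
```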
Noise in Communication Systems. Whenever a complex communication signal is transmitted through a
channel, the channel output is not likely to be an exact replica of what was transmitted. If the channel
is linear and time-invariant*, two types of signal modification can occur: (1) distortion, caused by a
nonuniform attenuation of the signal spectrum or by a nonconstant time delay of all parts of the signal
spectrum, and (2) noise. Sources of noise include thermal vibrations in the receiving circuits, nearby
rotating machinery or automotive ignition systems, and natural processes in the propagation path. Most
commonly experienced in communication systems is white Gaussian noise, so called because its spectrum
is constant over all frequencies and because its amplitude follows a Gaussian probability law. Noise is,
of course, a random phenomenon. Otherwise, the communications receiver could simply subtract the
(known) noise waveform from the channel output, leaving a noise-free signal that is proportional to what
was transmitted.
Some Communication Functions
Modulation. The spectra of signals such as speech, video, or computer data are concentrated at the low
frequencies and are examples of baseband signals. The transmission of such signals by wire or cable
for short distances, perhaps on the order of the dimensions of a room or office, is not difficult. An
example is the wire or cable connection of loudspeakers to the amplifier of a home entertainment system.
However, speech or music transmitted over 10, 50, or 100 mi of cable would experience so much
attenuation that whatever is received would be quite useless. By contrast, bandpass signals, having
spectra concentrated at much higher frequencies, are useful for such applications because they can be
transmitted by the propagation of electromagnetic energy from an antenna (e.g., by radio). Radio
transmission requires an antenna having physical dimensions which are approximately the wavelength
of the signals to be transmitted. Typical wavelengths for mobile radio signals are approximately 1.0 cm
to 1.0 m, while the wavelength of a 3000-Hz audio signal is 10⁵ m, quite an impractical size for a
transmitting antenna. Either baseband or bandpass signals can be transmitted via radio if suitable
modulation of a sinusoidal carrier is used.
Modulation is the process of combining an information-bearing baseband signal s(t) with a bandpass
signal so that the combination is suitable for transmission over a specified communication medium. For
analog modulation processes, the bandpass signal is usually a simple sinusoid, A sin(2πft + θ), the
frequency of which is known as the carrier frequency f, the amplitude A, and the phase angle θ. A
modulator modifies one or more of these parameters in accordance with the time variation of s(t) as
discussed below. Pulse or digital modulation requires first that s(t) be digitized. The resulting string of
binary digits modulates a parameter of the carrier in accordance with the sequence of its values.
Demodulation/Detection. The communications receiver must recover s(t) from the received modulated
signal for presentation to the destination. This demodulation process is more complicated than modulation
because the signal has most likely been corrupted by noise in the transmission. In an analog receiver,
the demodulator performs an estimation process by which it tries to determine the time-varying value
of some parameter of the received carrier, e.g., its amplitude or frequency. The detector is concerned
with whether or not a signal is present in noise. In the detection problem, the receiver is trying to decide
which of a finite number (in this case, two) of signals was actually sent. Again, this is not a trivial
problem, as the received signal is accompanied by channel noise.
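One standard way to make this decision is a correlation receiver, which compares the received waveform against each candidate signal and picks the better match. The sketch below (Python with NumPy; signal shapes, noise level, and random seed are all illustrative choices, not from the text) shows the idea for a binary decision in Gaussian noise:

```python
import numpy as np

# Decide which of two known, equal-energy, orthogonal signals was sent
# through additive Gaussian noise.
rng = np.random.default_rng(1)
n = 256
s0 = np.cos(2 * np.pi * 4 * np.arange(n) / n)    # candidate signal for "0"
s1 = np.cos(2 * np.pi * 9 * np.arange(n) / n)    # candidate signal for "1"

sent = s1                                        # the transmitter chose "1"
received = sent + 0.5 * rng.standard_normal(n)   # channel adds noise

# Correlation receiver: inner product of the noisy waveform with each
# candidate; decide in favor of the larger correlation.
decision = 1 if received @ s1 > received @ s0 else 0
print(decision)  # -> 1 (correct at this comfortable signal-to-noise ratio)
```

At a lower signal-to-noise ratio the two correlations can cross, producing exactly the decision errors that channel coding is designed to repair.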
Processing of the Information. Certain operations can be performed on s(t) to prepare it for more efficient
transmission. These operations include compression, a form of source coding which can reduce the
bandwidth occupied by the signal and make the modulation, transmission, and demodulation processes
more efficient. In a digital communication system, the binary representation of s(t) can be further

* A linear, time-invariant channel transforms a signal s(t) into As(t − T) + n(t), where A and the delay T are constant with time.


encoded in order to protect against channel errors. These operations are described in some detail under
the subsection on Information Theory.
Communications Techniques
Analog Modulation
Amplitude Modulation and Its Variants. Amplitude modulation (AM) continues to be widely used in
everyday communications including standard broadcast radio, the Citizens Radio Service, international
shortwave broadcasting, and certain segments of the Amateur Radio Service. Its popularity stems from
the relatively simple transmitters and receivers required.
Let s(t) be the baseband signal to be transmitted. Then the amplitude-modulated carrier is v(t) =
A[1 + Ks(t)]cos(2πfc t), where the carrier amplitude is A, the baseband signal amplitude is K, and the
carrier frequency is fc. We have omitted the random phase angle θ since it is of no significance in
AM. Now, consider the modulating signal s(t) = cos(2πfm t).

    v(t) = A(1 + K cos 2πfm t) cos(2πfc t)

Trigonometric expansion gives

    v(t) = A cos(2πfc t) + (KA/2) cos(2π(fc + fm)t) + (KA/2) cos(2π(fc − fm)t)    (18.3.2)

The Fourier transform of v(t) is

    V(f) = (A/2)[δ(f − fc) + δ(f + fc)]
         + (KA/4)[δ(f − fc − fm) + δ(f + fc + fm)]    (18.3.3)
         + (KA/4)[δ(f − fc + fm) + δ(f + fc − fm)]

where δ(t), the Dirac delta function, is defined by

    δ(t) = 0 for t ≠ 0, and ∫_{−∞}^{∞} δ(t) dt = 1

This function provides a convenient way to represent sampling and time-shifting operations.
Equation (18.3.2) shows that the transmitted waveform is the sum of three terms: a carrier term and
two sidebands, with each of the latter carrying the information-bearing modulation. For K = 1, the power
in the carrier term is proportional to A², while the power in each sideband is proportional to A²/4. Hence,
two thirds of the power in the transmitted signal carries no useful information. It is merely along for
the ride. Further, each sideband carries the same information, so the signal occupies twice the bandwidth
that is necessary to represent the modulation faithfully. Finally, on noisy channels, it is important to
measure the ratio of signal power to noise power at the receiver output, (SNR)O. Let (SNR)I be the
signal-to-noise ratio at the receiver input. The quotient of these two ratios, a figure of merit for
modulation schemes,

    (SNR)O/(SNR)I = K²/(2 + K²)

is never greater than one third for AM. Compare this with other methods given below.
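The spectral structure of Equation (18.3.2) and the AM figure of merit can be verified numerically. In the sketch below (Python with NumPy), the sampling rate, carrier, and modulating frequencies are illustrative values chosen to fall on exact FFT bins:

```python
import numpy as np

# Tone-modulated AM with 100% modulation (K = 1).
fs, fc, fm, A, K = 8000, 1000, 100, 1.0, 1.0
t = np.arange(fs) / fs                       # one second of samples
v = A * (1 + K * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

mag = np.abs(np.fft.rfft(v)) / len(t)        # scaled one-sided spectrum
# With this scaling a cosine of amplitude a appears as a line of height a/2:
assert abs(mag[fc] - A / 2) < 1e-9           # carrier line
assert abs(mag[fc - fm] - K * A / 4) < 1e-9  # lower sideband
assert abs(mag[fc + fm] - K * A / 4) < 1e-9  # upper sideband

# Figure of merit (SNR)O/(SNR)I = K^2/(2 + K^2), at most 1/3, reached at K = 1.
merit = K**2 / (2 + K**2)
print(merit)  # -> 0.3333333333333333
```

The three spectral lines account for all the transmitted power, with the carrier line alone holding two thirds of it, as stated above.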
Each of the following three variants on AM is designed to provide savings in power and bandwidth.


Double sideband-suppressed carrier (DSB-SC) suppresses the transmission of the carrier itself, thus
obtaining more efficient use of the power transmitted. However, it requires the same channel bandwidth
as ordinary AM. The modulation process is represented as follows:

    v(t) = A s(t) cos(2πfc t)    (18.3.4)

The Fourier transform of the DSB-SC signal is

    V(f) = (A/2)[S(f − fc) + S(f + fc)]    (18.3.5)

Now notice that all components of the signal carry the information being conveyed. No part of the
signal is along for a free ride. In the receiver, the demodulator first multiplies the received signal by
A′cos(2πfc t):

    v0(t) = AA′ s(t) cos(2πfc t + θ) cos(2πfc t)
          = (AA′/2) s(t) cos(4πfc t + θ) + (AA′/2) s(t) cos θ    (18.3.6)

Notice that the first term can be removed by a simple lowpass filter.* This leaves a term which is
proportional to the information s(t). However, as the value of θ (the random phase difference between
the carrier and the locally generated copy of the carrier) approaches π/2, the amplitude of the coefficient
of s(t) will be small, and the output signal may be too weak to be useful. Even worse, θ may vary
randomly with time, thus distorting the information s(t). In practice, a phase-tracking system is commonly
used to keep the local oscillator in phase with the received signal. When such tracking is used, the
receiver is said to perform coherent detection. The figure of merit for DSB-SC is (SNR)O/(SNR)I =
1. This improvement over AM is a consequence of not having a carrier-only component in the spectrum.
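Coherent detection of DSB-SC, in the zero-phase-error case of Equation (18.3.6), can be sketched numerically. In the Python example below the moving-average "filter" and all frequency values are illustrative choices, not from the text:

```python
import numpy as np

# DSB-SC modulation followed by coherent detection with a synchronized
# local carrier (theta = 0, A = A' = 1).
fs, fc, fm = 80_000, 5_000, 100              # sample rate, carrier, message freq
t = np.arange(fs) / fs
s = np.cos(2 * np.pi * fm * t)               # baseband message s(t)
v = s * np.cos(2 * np.pi * fc * t)           # DSB-SC signal

mixed = v * np.cos(2 * np.pi * fc * t)       # = s/2 + (s/2) cos(4*pi*fc*t)

# Crude low-pass filter: average over one carrier period, which suppresses
# the component at 2*fc while leaving the slow message nearly untouched.
win = fs // fc
recovered = 2 * np.convolve(mixed, np.ones(win) / win, mode="same")

# Away from the edges the recovered waveform closely tracks the message.
err = np.max(np.abs(recovered[win:-win] - s[win:-win]))
print(err < 0.05)
```

Repeating the experiment with a phase offset near π/2 in the local carrier shrinks the cos θ coefficient and makes the recovered signal collapse, which is exactly why phase tracking is needed.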
Vestigial sideband (VSB) modulation is used in commercial television to convey video information.
One entire sideband and a very small portion of the other are transmitted. Although carrier may or may
not be sent, depending upon the application, the low cost of envelope detectors has dictated that
commercial TV use VSB with a small carrier component. This is helpful because television video signals
have large bandwidths and carry significant amounts of information in the low frequencies.
One can think of generating single sideband (SSB) modulation by generating an ordinary AM signal,
then filtering out the carrier and one of the sidebands, so that the transmitted signal is either the upper
sideband (USB modulation) or the lower sideband (LSB modulation) only. The Fourier transform shows
that this is equivalent to a linear translation of the modulating signal s(t) from baseband to frequencies
near fc. Since the transmitted signal is essentially a replica of the information-bearing signal s(t), SSB
is a very efficient modulation technique, placing all the transmitted power into transmitting the infor-
mation and doing so with a minimal use of bandwidth. As with DSB-SC, the SSB receiver does not
degrade the signal-to-noise ratio of the received signal in noise.
Frequency Modulation. The use of frequency modulation (FM) in standard radio broadcast commu-
nications has surpassed that of AM due in part to the higher fidelity afforded by the larger channel
bandwidths and in part to the inherent resistance of FM receivers to propagation disturbances, electrical
storms, and human-induced interference as well as the lack of many atmospheric propagation distur-
bances in the FM broadcast band.** FM is also used in a variety of mobile and public safety applications,
in certain segments of the Amateur Radio Service and in military communications.

* A low-pass filter attenuates all frequencies above its cut-off frequency while permitting frequencies at or
below that frequency to pass unhindered. In the example of DSB-SC demodulation, the cut-off frequency should be
greater than the highest frequency in s(t) but less than 2fc.


Let s(t) be the information-bearing baseband signal. The frequency-modulated signal is

    v(t) = A cos[2πfc t + 2πK ∫₀ᵗ s(λ) dλ]    (18.3.7)

Observe that the envelope of v(t) is constant, independent of the message signal, s(t).
As we did for AM, let us exhibit important properties of FM by studying a carrier that is frequency
modulated with a single sinusoid. Let s(t) = Am cos(2πfm t). Substituting into Equation (18.3.7), differenti-
ating, and dividing by 2π give the frequency at any instant of time as

    fi(t) = fc + KAm cos(2πfm t)
          = fc + Δf cos(2πfm t)

where Δf = KAm.
The quantity Δf is known as the frequency deviation and indicates the largest difference between the
actual, instantaneous frequency of the FM signal and the carrier frequency fc. Note that, while Δf is
proportional to the amplitude of s(t), it is independent of the frequency of s(t). Integrating the
instantaneous frequency (and multiplying by 2π) gives the phase angle of the FM signal as a function of time:

    θi(t) = 2πfc t + (Δf/fm) sin(2πfm t)

The quantity β = Δf/fm is called the modulation index of the FM signal. It is the maximum difference
between the instantaneous value of the time-varying phase of the signal and the phase of the unmodulated
carrier fc.
The modulation index will be used to distinguish between two types of FM systems. Write the FM
signal as a function of time:

    v(t) = A cos(2πfc t + β sin 2πfm t)    (18.3.8)

Expanding Equation (18.3.8) using the trigonometric formula for the cosine of the sum of two variables
gives:

    v(t) = A cos(2πfc t) cos(β sin 2πfm t) − A sin(2πfc t) sin(β sin 2πfm t)    (18.3.9)

When β is much smaller than one, this quickly simplifies to

    v(t) = A cos(2πfc t) − Aβ sin(2πfc t) sin(2πfm t)    (18.3.10)

** The resistance of FM broadcast signals to nighttime fading and other propagation anomalies is actually due
to the use by FM of carrier frequencies between 88 and 108 MHz (in the VHF band) where signals travel through
the atmosphere in direct line of sight from transmitter to receiver. Contrast this with the signals between 0.540
and 1.600 MHz (used by standard AM broadcast radio) which travel through the ionosphere where they are reflected
back to earth, often hundreds or thousands of miles from the transmitter. Even so, FM is far less vulnerable to
additive noise and interference (such as lightning) because it carries information in the argument of a sinusoid.
Further, most FM receivers enhance this effect by passing the received signal through a limiter circuit which prevents
the amplitude of the received signal from varying above a set value.


which represents the sum of two signals at frequency fc, one of which has constant amplitude and the
other of which has an amplitude which is proportional to the modulation. Thus, the narrowband FM
signal seems to exhibit some amplitude modulation and will not have a constant amplitude. After
trigonometric expansion, Equation (18.3.10) shows

    v(t) = A cos(2πfc t) + 0.5Aβ cos(2π(fc + fm)t) − 0.5Aβ cos(2π(fc − fm)t)    (18.3.11)

Equation (18.3.11), which represents a narrowband FM signal, has the appearance of the AM signal
shown in Equation (18.3.2) except for a change of sign in the last term. Thus, one might conclude that
a narrowband FM signal has the same bandwidth as an AM signal. This approximation can be shown
to hold so long as β is less than one.
On the other hand, when β is larger than one, the small-angle approximations used in narrowband
FM do not offer a valid representation of the signal. Careful analysis of the mathematics shows that the
wideband FM signal with sinusoidal modulation has the following characteristics:
Its spectrum consists of a carrier at frequency fc and sidebands at all integer multiples of fm above
and below fc.
The amplitude of the carrier component of the spectrum is a function of the modulation index β.
This occurs because the envelope of v(t) is constant for an FM signal, so any power in the sidebands
must be taken from power in the unmodulated carrier. Although the infinite number of sidebands
suggests that the transmission bandwidth of the wideband FM signal must also be infinite, it is
found that, at frequencies sufficiently far from the carrier, the sideband amplitudes are too small
in magnitude to cause noticeable distortion in the demodulated signal. It has been found
that the effective transmission bandwidth of an FM signal with sinusoidal modulation is WT =
2Δf + 2fm = 2Δf(1 + 1/β).
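This bandwidth rule is easy to evaluate. The sketch below uses broadcast-FM-like numbers (75-kHz peak deviation and a 15-kHz modulating tone) purely as an illustration:

```python
# FM transmission bandwidth from the rule W_T = 2*(delta f) + 2*f_m
# = 2*(delta f)*(1 + 1/beta).
delta_f = 75e3            # peak frequency deviation, Hz
f_m = 15e3                # modulating (sinusoid) frequency, Hz
beta = delta_f / f_m      # modulation index

w_t = 2 * delta_f + 2 * f_m
# The two forms of the rule agree.
assert abs(w_t - 2 * delta_f * (1 + 1 / beta)) < 1e-6

print(beta)       # -> 5.0
print(w_t / 1e3)  # -> 180.0  (kHz)
```

With β = 5 this is clearly a wideband FM system, and the 180-kHz estimate is far larger than the 2fm = 30 kHz that an AM signal with the same modulating tone would occupy.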
A more accurate estimate of the required transmission bandwidth can be made by choosing to retain
all sidebands whose amplitudes exceed some specified fraction of the carrier amplitude. Determination
of the transmission bandwidth required to support this criterion then becomes a numerical exercise.
Digital Communications. Most modern communications advances are occurring in the rapidly expanding
field of digital communications. In digital communication systems, all information is represented as
strings of symbols that take values from a set of finite size. The most common is the binary representation,
in which all information is represented as strings of binary (0,1) symbols. More generally, however,
digital can imply a finite set of any size. In a quaternary system, for example, information is represented
as strings of symbols that take values from an alphabet of four symbols. Industry is working on digital
television, implementing what is essentially an analog function using digital techniques and circuits.
Why is this happening? In communication channels, digital signals are fundamentally more robust
and resistant to corruption than are analog signals. Digital signals do not experience intermodulation;
they do not exhibit crosstalk; if the amplitude is abruptly cut off by a faulty amplifier (clipping), no
signal distortion results. When errors occur in a digital bitstream, error control coding techniques can
be used to seek out and reverse the errors. Typically, there is no corresponding way to reverse the
distortion of an analog signal. To communicate over very long distances, it is quite easy to regenerate
digital pulses at sufficiently frequent intervals to avoid error or loss. The analog counterpart is a repeater
that must exhibit a linear relationship between output and input signals over a wide range of input signal
amplitudes. In manufacturing, the cost of digital hardware is lower than that of analog circuitry, and the
reliability is higher. Digital bitstreams from a wide variety of sources can be transmitted over the same
channels. They can even be intermingled and stored easily on magnetic media for later transmission,
thus giving impetus to multimedia communications. Digital communications are easily encrypted to
protect them from eavesdropping; this is quite difficult for analog signals and usually results in their
being digitized when such protection is required. Finally, with the advent of modern, computer-controlled


telephone switching systems, the exclusive use of digital communications signals affords great ease in
switching and multiplexing operations.
On the other hand, digital communications poses certain challenges. If analog information is to be
transmitted via a digital channel, it must first be converted into a digital format. This requires two basic
steps:
Sampling: Values of the analog signal are measured (sampled) at equally spaced time intervals.
Nothing that occurs between the sample values will affect the transmitted signal. In fact, according
to Nyquist's sampling theorem, all the information contained in the original, continuous-time
signal is contained in the samples, so long as they are taken at least 2B times per second, where
B is the highest frequency component in the signal.
Quantization: By itself, sampling is not adequate to prepare an analog signal for digital transmission.
Within the limits of resolution of the sampling circuitry, each sample is a value of a continuous
variable. In order to transmit the sample values in finite time, they must be represented by a finite
string of symbols. Therefore, each sample is rounded or truncated to its nearest neighbor in the
finite set; the process is known as quantization. For example, if the amplitude of the continuous
signal remains within the interval (−Amax, Amax), the quantizer divides this interval into L levels
and the quantized value is taken as the midpoint of the interval in which the continuous value
falls. Each sample value, therefore, could be in error by as much as half the width of a quantization
interval. These sample values are expressed as binary numbers which are transmitted over the
channel. In the receiver, they are recovered and a stepwise approximation to the original signal
is built. If this stepwise approximation is completely correct, it still differs from the actual
continuous signal because of the loss of amplitude information caused by the quantization process.
While smoothing circuits can remove the discontinuities, they cannot assure recovery of the correct
value of the analog signal. Thus, there is a residual quantization error that can be minimized in
the design but never eliminated. For example, assume the use of uniform quantization (all
quantization levels are the same size). If the average power in the original signal is S, and if L =
2^R, where R is the number of bits used to encode each sample, then the signal-to-noise ratio at
the quantizer output is

SNR_Q = (3S / Amax²) · 2^(2R)
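This relationship is easy to check numerically. The sketch below (plain Python; the sine-wave test signal
and the 8-bit resolution are illustrative choices, not taken from the text) quantizes samples with a
uniform R-bit quantizer and compares the measured signal-to-noise ratio against the prediction above.

```python
import math

def quantize(x, amax, r):
    """Uniform r-bit quantizer over (-amax, amax): map x to the midpoint
    of the quantization interval it falls in."""
    levels = 2 ** r
    step = 2 * amax / levels
    idx = min(levels - 1, max(0, int((x + amax) / step)))  # clamp to a valid interval
    return -amax + (idx + 0.5) * step

amax, r, n = 1.0, 8, 10000
samples = [math.sin(2 * math.pi * 5 * k / n) for k in range(n)]    # sampled sine wave
s = sum(x * x for x in samples) / n                                # signal power S
noise = sum((x - quantize(x, amax, r)) ** 2 for x in samples) / n  # quantization noise power
measured_db = 10 * math.log10(s / noise)
predicted_db = 10 * math.log10(3 * s / amax ** 2 * 2 ** (2 * r))
print(round(measured_db, 1), round(predicted_db, 1))
```

The two figures agree to within a fraction of a decibel, and each additional bit of resolution buys
roughly 6 dB of quantization SNR.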

Additional functions such as compression and error control may be required prior to transmission
over the channel. These are discussed in the subsection on Information Theory.
Baseband Digital Modulation. Baseband digital modulation schemes are suitable for transmission of
digital information over wire or cable, for example. A typical baseband system is shown in Figure 18.3.5.

FIGURE 18.3.5 Typical digital baseband transmitter.

For each symbol to be transmitted, a waveform is chosen as its representation on the channel. Examples
of waveform selection schemes include the following.
Nonreturn-to-zero (NRZ) schemes represent a binary ONE as +V volts and ZERO as −V volts
for some design value of V.
In return-to-zero (RZ) systems, ONE is represented by +V volts and ZERO by a voltage of value
zero.


In phase-coded representations, a voltage transition occurs during each symbol interval, whether
or not the symbol has changed value from the previous interval. Such a property is useful in
magnetic recording systems and optical communications where arbitrarily long, constant voltages
(which approximate direct current) cannot be physically transmitted. For example, in the popular
Manchester code, ONE is represented by a positive pulse for one half the symbol period and a
negative pulse for the other half, while ZERO uses the same scheme with the polarities reversed.
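These three line codes are simple enough to prototype directly. In the sketch below (Python; each symbol
period is modeled as two half-period voltage levels, and V = 1 is an illustrative design value), the
Manchester encoder exhibits the guaranteed mid-symbol transition described above.

```python
V = 1.0  # design voltage (illustrative choice)

def nrz(bits):
    """NRZ: ONE -> +V, ZERO -> -V, held for the whole symbol period."""
    return [(V, V) if b else (-V, -V) for b in bits]

def rz(bits):
    """RZ: ONE -> +V for the first half period, then return to zero; ZERO -> zero volts."""
    return [(V, 0.0) if b else (0.0, 0.0) for b in bits]

def manchester(bits):
    """Manchester: ONE -> +V then -V; ZERO -> -V then +V (a transition every symbol)."""
    return [(V, -V) if b else (-V, V) for b in bits]

print(manchester([1, 0, 1]))  # [(1.0, -1.0), (-1.0, 1.0), (1.0, -1.0)]
```

Note that a long run of ONEs produces a constant level in NRZ but keeps toggling in Manchester, which
is why the latter suits channels that cannot pass direct current.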
Bandpass Digital Modulation Schemes. The most general form of bandpass-modulated signal is
v(t) = A(t)cos(θ(t)). Note that the amplitude A(t) or phase θ(t) of the sinusoid, or both, can vary with
time and, hence, carry information via digital modulation. More specific to our uses, we can also write

θ(t) = 2πf0t + φ(t)

We now consider a few important examples of digital modulation. We assume that the phase of the
carrier is used in the demodulation process, so that the receivers are coherent. In general, coherent
modulation gives a lower error probability for a given signal-to-noise ratio than does noncoherent
modulation. We consider channels in which additive, zero-mean, stationary, white Gaussian noise is
added to the signal. Thus, the binary data recovered at the receiver are not guaranteed to be free of errors.
If a binary communication signal is received with power P (watts) at a speed of B bits per second,
each binary symbol (or bit) contains Eb = P/B joules of energy. We call Eb the signal energy per bit. A
characteristic of white Gaussian noise is a uniform spectrum having constant noise power in every unit
of bandwidth. If this power is N0 watts per Hz of bandwidth, then a receiver of bandwidth W will intercept
N0W watts of channel noise power. In such cases, it is useful to consider the ratio Eb /N0, a dimensionless
signal-to-noise ratio often called the ratio of signal energy per bit-to-noise spectral density. Digital
modulation schemes are compared by plotting the probability of error per bit, Pb, as a function of Eb /N0.
Phase-shift-keying (PSK) carries digital information in discrete shifts of the carrier phase. The mod-
ulated waveform is

v(t) = √(2E/T) cos(ω0t + 2πi/M),   i = 1, …, M,   0 ≤ t ≤ T

where E is the signal energy and T is the duration of the transmission of a symbol. When M = 2, this
is called binary PSK (or simply PSK). The general case (M > 2) is often denoted MPSK. The average
probability of error in the binary case is


Pe = (1/√π) ∫_{√(Eb/N0)}^{∞} exp(−x²) dx

Of all the modulation techniques considered, PSK offers the minimum probability of error for fixed
Eb /N0.
Binary frequency-shift-keying (FSK) is one of the oldest digital bandpass modulation forms in exist-
ence. It was used extensively in early dial-up computer modems at speeds up to 1200 bits/sec, beyond
which more sophisticated modulation methods are needed in order to signal faster in the same bandwidth.
Each discrete source symbol is represented by one of two frequencies, fi:

v(t) = √(2Eb/Tb) cos(2πfi t + φ),   i = 1, 2,   0 ≤ t ≤ Tb

where Tb = the length of a transmitted bit.


The arbitrary, constant phase term is represented by φ, and the general (nonbinary) case is usually
denoted MFSK. The case of interest here is coherent FSK and its performance is given by


Pe = (1/√π) ∫_{√(Eb/2N0)}^{∞} exp(−x²) dx

It is interesting to note that the signal-to-noise ratio for coherent binary FSK must be twice that for
coherent binary PSK to give the same probability of error.
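Both error integrals are scaled complementary error functions, so the comparison can be checked
numerically with Python's math.erfc; the 9.6 dB operating point below is an illustrative choice, not a
value from the text.

```python
import math

def pe_psk(ebno):
    """Coherent binary PSK: (1/sqrt(pi)) * integral of exp(-x^2) from sqrt(Eb/N0)."""
    return 0.5 * math.erfc(math.sqrt(ebno))

def pe_fsk(ebno):
    """Coherent binary FSK: same integral, but the lower limit is sqrt(Eb/2N0)."""
    return 0.5 * math.erfc(math.sqrt(ebno / 2))

ebno = 10 ** (9.6 / 10)   # Eb/N0 of 9.6 dB as a power ratio
print(pe_psk(ebno))       # PSK error probability at this operating point
print(pe_fsk(2 * ebno))   # doubling Eb/N0 for FSK recovers exactly the PSK error rate
```

The second print confirms the factor-of-two statement: pe_fsk(2·Eb/N0) equals pe_psk(Eb/N0) identically.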
Amplitude-shift-keying (ASK) carries the information in the discrete-valued amplitude of the signal.

v(t) = √(2Ei/T) cos(ω0t + φ),   i = 1, 2

When one of the amplitude values is 0, this gives rise to on-off keying, surely the oldest form of
digital modulation. It was commonly used in radiotelegraphy from the earliest days of radio and continues
in use by amateur radio operators. In mid-1995, the U.S. Navy retired its remaining radiotelegraph
operators because of the very slow rate of information exchange afforded by ASK.

Information Theory
Information is the commodity in which communication systems deal. Information is conveyed to the
commuter listening to a news broadcast on the car radio during morning rush hour; it is the response to
a customer's inquiry about his or her bank account; it is what the patient learns following a physical
examination. In each case, the content of the information cannot be completely predicted. The information
is news; it is something of a surprise.
Information theory provides mathematical bounds on the performance of communication systems.
Specifically, it affords:
A lower bound on the number of discrete symbols (or on the bandwidth of a continuous signal)
necessary to represent a source without loss of information
An upper bound on the rate at which information can be reliably transmitted over a noisy channel
It relies heavily on concepts of uncertainty, by which the nature of information is represented.
Therefore, probabilistic concepts are needed.
Information theory provides a formal definition of information but does not provide a subjective
meaning. Information is provided by a source and is transmitted over a channel where it is accepted by
a destination or a sink. Sources (as well as channels and sinks) can be discrete or continuous depending
upon the representation of the information.
Sources and Source Coding
A common model of an information source is something that periodically emits one symbol X from a
known alphabet, A = {x0, x1, …, xK−1}, according to the probabilities:

pj = P(X = xj),   j = 0, 1, …, K − 1

If successive source symbols are statistically independent, this is said to be a discrete memoryless
source. From the event xj, an observer gains an amount of information defined by i(xj) = log2(1/pj), from
which we conclude:


1. A certain event produces no information: i(xj) = 0 when pj = 1.
2. The occurrence of an event cannot take away information that the observer already has:
   0 ≤ pj ≤ 1 implies i(xj) ≥ 0; that is, information is never negative.
3. Unlikely events produce more information than do likely events: pj < pi implies i(xj) > i(xi).
4. The contributions of successive symbols to information are additive: i(xi, xj, xk) = i(xi) + i(xj) + i(xk).
Because the logarithm is taken to the base 2, we call the unit of information the bit, a contraction of
binary digit.
Example
Consider a fair coin (one for which the chances of a head and a tail are equal). We say that one bit of
information is conveyed each time the coin is tossed.
Example
A modem which communicates to another modem across telephone lines by sending one of four audio
tones every second is sending information at the rate of two bits per second.
The average value of the information per source symbol is an important quantity. It is given by

H(A) = E[i(xj)] = Σ_{j=0}^{K−1} pj i(xj) = Σ_{j=0}^{K−1} pj log2(1/pj)

and is called the entropy* of the discrete memoryless source with source alphabet A. From this definition,
it can be shown that 0 ≤ H(A) ≤ log2K, where K is the size of alphabet A. In addition, H(A) = 0 if and
only if pj = 1 for some j and all other probabilities are zero. The upper bound on entropy is attained,
H(A) = log2K, if and only if all symbols in the alphabet are equiprobable, that is, pj = 1/K for all j.
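These definitions are easy to verify numerically. The short Python sketch below reproduces the
fair-coin and four-tone modem examples and shows that a skewed distribution falls below the log2K bound
(the skewed probabilities are an illustrative choice).

```python
import math

def entropy(probs):
    """H(A) = sum of p_j * log2(1/p_j), in bits per symbol (terms with p = 0 contribute 0)."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))            # fair coin: 1.0 bit per toss
print(entropy([0.25] * 4))            # four equiprobable modem tones: 2.0 bits per tone
print(entropy([0.7, 0.1, 0.1, 0.1]))  # skewed four-symbol source: below log2(4) = 2
```

The last source carries less than two bits per symbol on average, which is exactly the inefficiency
that source coding exploits.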
The output of the typical information source is rarely a sequence of equiprobable symbols, so its
entropy is rarely maximum, and its transmission is not as efficient (in terms of bits of information per
symbol) as is possible. In order to improve this efficiency, source coding is employed. Typically, a source
code assigns short binary sequences (or code words) to more probable source symbols and long binary
sequences to less probable symbols**. Such a variable length code should also be uniquely decodable
so that the original source sequence can be recovered without ambiguity from the encoded sequence.
We confine our attention to binary sequences. Figure 18.3.6 shows a source coding scheme in which
the output symbol sj of the discrete memoryless source is encoded into a binary string bj = (b0, b1, …, blj).

FIGURE 18.3.6 Source encoding.

If the string bj has length lj, then the average codeword length L̄ is simply L̄ = Σ_{j=0}^{K−1} pj lj, and we
say that the source code produces L̄ (average) bits per source symbol. Now, let L̄min be the smallest
possible value of L̄. Then the coding efficiency of the source code is defined to be η = L̄min / L̄. The

* Mechanical engineers are likely to be familiar with the entropy S = k ln Ω from statistical mechanics. Here, k is
Boltzmann's constant, Ω is the number of states in the system, and ln is the natural logarithm. This entropy measures
quantitatively the randomness of the system and, therefore, is highly suggestive of information-theoretic entropy.
The relationship between the entropies has been the object of study by scientists and philosophers for years.
** The English language performs a heuristic form of source coding. Frequently used words such as "a", "is",
"he", "it", "we", "I", "me", etc. are composed of a few letters, while less-used words such as "subcutaneous" are
assigned much longer strings.


fundamental bound on source codeword length is given by Shannon's source coding theorem: The
average codeword length L̄ for a discrete memoryless source of entropy H(A) is lower-bounded by that
entropy, L̄ ≥ H(A).
Lossless Compression. The discrete memoryless source provides a valid model for many physical sources,
including sampled speech and the outputs of various environmental sensors. Typically, much of the
information from such sources is redundant; that is, the same underlying information may be represented
in the values of several output symbols. It is prudent to remove this redundancy in order to use less
bandwidth and/or time for transmission. Source coding algorithms which remove such redundancy in
such a manner that the original source data can be reconstructed exactly are said to perform lossless
data compression. They work by assigning short sequences to the most probable source outputs and
longer sequences to less probable source outputs.
A (variable length) source code in which no codeword is a prefix of another codeword is said to be
a prefix code. Prefix codes are always uniquely decodable, and their average lengths obey the inequalities:
H(A) ≤ L̄ < H(A) + 1. Asymptotically, the efficiency of a prefix code approaches 1. Because of their
large decoding complexities, however, we look for other classes of lossless compression algorithms. The
Huffman code tries to assign to each source symbol a binary sequence, the length of which is roughly
equal to the amount of information carried by that symbol. The algorithm is straightforward and
transparently easy to decode. The average codeword length approaches the source entropy H(A). How-
ever, the Huffman code requires knowledge of the probability distribution of source symbols, and this
often is not available.
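The Huffman construction is easy to sketch in Python using a priority queue: repeatedly merge the two
least probable subtrees until one tree remains. The four-symbol dyadic distribution below is an
illustrative example (not one from the text) chosen so that the average codeword length equals the
entropy exactly.

```python
import heapq

def huffman(probs):
    """Build a Huffman code for symbols 0..K-1 with the given probabilities.
    Returns a dict mapping each symbol to its binary codeword string."""
    # Heap entries: (probability, unique tie-breaker, {symbol: partial codeword}).
    heap = [(p, j, {j: ""}) for j, p in enumerate(probs)]
    heapq.heapify(heap)
    tie = len(probs)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)  # the two least probable subtrees
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}   # left branch gets a 0
        merged.update({s: "1" + w for s, w in c1.items()})  # right branch gets a 1
        heapq.heappush(heap, (p0 + p1, tie, merged))
        tie += 1
    return heap[0][2]

probs = [0.5, 0.25, 0.125, 0.125]
code = huffman(probs)
avg_len = sum(p * len(code[j]) for j, p in enumerate(probs))
print(code, avg_len)  # codeword lengths 1, 2, 3, 3; average length 1.75 bits = H(A)
```

Each symbol receives a codeword whose length matches its information content log2(1/pj), so the code
meets the source coding bound with equality for this dyadic source.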
The Lempel-Ziv algorithm, by contrast, adapts to the statistics of the source as they are revealed in
the source text. A binary source sequence is read left to right and parsed into segments, each of which
is the shortest subsequence not encountered previously. Each of these sequences is encoded into a fixed-
length binary code sequence. Thus, in contrast to Huffman codes and others, a variable number of source
symbols is encoded into a fixed number of code symbols. Lempel-Ziv achieves, on average, a reduction
of 55% in the number of symbols representing a source file; Huffman coding typically achieves 43%.
This accounts for the enormous popularity of the former.
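The parsing step is compact enough to sketch directly. In the LZ78-style sketch below, each phrase would
be transmitted as a fixed-length pair (index of its longest previously seen prefix, new bit); the 13-bit
input string is an illustrative example, and a trailing incomplete phrase is simply dropped.

```python
def lz_parse(bits):
    """Parse a binary string left to right into the shortest phrases not seen before.
    Returns a list of (dictionary index of the phrase's prefix, new bit) pairs."""
    seen = {"": 0}          # phrase -> dictionary index; the empty phrase is index 0
    phrases, current = [], ""
    for b in bits:
        current += b
        if current not in seen:           # shortest subsequence not encountered before
            seen[current] = len(seen)
            phrases.append((seen[current[:-1]], current[-1]))
            current = ""                  # start the next phrase
    return phrases

print(lz_parse("1011010100010"))
# parses as 1, 0, 11, 01, 010, 00, 10
# -> [(0, '1'), (0, '0'), (1, '1'), (2, '1'), (4, '0'), (2, '0'), (1, '0')]
```

Notice that no probability model is needed: the dictionary of phrases adapts to whatever statistics the
source text reveals.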
Rate-Distortion Theory. If the source output is a continuous variable x, the number of bits required for
an exact representation of a sample value is infinite. If a finite number is used, the representation is not
exact, but is often useful in many applications. The degree to which the representation x̂ is inexact is
called distortion and can be thought of as the cost associated with representing an exact source by an
approximation. We write distortion as D(x, x̂). A typical distortion function is the familiar squared-error
criterion:

D(x, x̂) = (x − x̂)²

In a typical application, we think of a source as presenting a number n of samples per second. If each
sample is, on average, represented by R bits, then we say that the source produces information at the
rate of Rn bits per second. It is common, however, to normalize the source rate to the sample rate, in
which case we say that we have a source of rate R (bits per sample). In principle, as R increases, we
should be able to make distortion decrease. The mapping of source symbols (or sample values) to strings
of bits is called a source code. Naturally, it is the aim of source code designers to minimize the distortion
for a given rate. Let D be the expected distortion between two sequences of source values. We call (R,D)
a rate distortion pair and say that a given (R,D) is achievable if there exists a source code having rate
R and expected distortion D. The minimum rate R for which a source code exists having expected
distortion D is called the rate distortion function R(D). The average uncertainty that remains about a
random variable X after observing another random variable Y is called the conditional entropy
H(X|Y). Thus, if the true source entropy is H(X), then the average uncertainty remaining after source coding


can be written H(X | X̂). The average amount of information provided by one random variable X̂ about
another random variable X is called the average mutual information between the two and can be written:

I(X; X̂) = H(X) − H(X | X̂)

Then R(D) = min I(X; X̂), minimized over all representations X̂ whose expected distortion does not
exceed D. Shannon showed that it is possible to construct source codes that can achieve distortion D
for any rate R > R(D) and that it is impossible to construct source codes that can achieve distortion D
for any rate below the rate distortion bound.
There is great interest in studying rate distortion bounds for various coding schemes. One of the oldest
applications is in a speech compression application where 8000 samples per second are taken and
quantized as eight-bit numbers to produce a 64-kb/sec uncompressed signal. Simply using the correlation
between adjacent samples reduces the number of bits (hence the required channel bandwidth) by factors
of two to four with little additional perceived distortion.
Of greater modern interest, however, are the efforts to compress images and video. While lossless
compression methods can compress an image by a factor of three, methods employing (guaranteed loss)
quantization can compress an image by a factor of 50 with what is claimed subjectively to be little loss
in picture quality. A major outstanding problem is that the mean square error as a measure of distortion
correlates very poorly with picture quality as judged subjectively by human viewers. A better, quantitative
method of evaluating distortion in images and video is sorely needed. Alas, this is a difficult problem,
and solutions do not seem immediately forthcoming.
Channels, Capacity, and Coding
On many channels, satisfactory communication is limited by noise. Whether traveling by coaxial cable
or radio, as a signal gets farther from its source it grows weaker. In the cable, this attenuation is caused
by the cables nonzero resistance, which dissipates energy in the signal. In radio propagation, a variety
of phenomena cause an apparent weakening of the signal, but in every case, radio waves spread out in
space spherically just as circular waves spread out when a stone is dropped into a still pond. A receiving
antenna can be thought of as a window of constant size which intercepts a fraction of the power in the
signal. At distance R from the source, all of the signal power P (watts) passes through a sphere of area
4πR². An antenna of effective area A will, therefore, intercept (P/4πR²)A watts. Thus, radio signals
attenuate as the square of the distance over which they travel. In either case, when the signal has been
sufficiently attenuated, the noise resident in receiving equipment and the propagation medium can distort
the signal or, in fact, cause bit errors.
Prior to 1948, communications engineers assumed that the constraints of noise would forever limit
not only communications distance, or range, but signal quality as well. In that year, however, Claude
Shannon published a remarkable theorem which continues to shape communications research and
technology.
Shannon's Noisy Channel Coding Theorem. For a large and interesting class of communications channels,
noise limits the rate at which information can be transmitted. In 1948, Shannon showed how to determine
the channel capacity, the maximum rate at which information can be sent through the channel at an
arbitrarily small error probability. He explained how noise induces uncertainty into the channel output.
So, for example, if information from a source is fed to the channel at rate H(X), an amount H(X|Y)
remains uncertain: the information about source X that the receiver still lacks after observing Y. Clearly,
H(X) ≥ H(X|Y). So, the average amount of information passed through the channel is I(X, Y) = H(X) −
H(X|Y). The channel capacity then is defined as the maximum over all input distributions p(x) of
H(X) − H(X|Y); that is,

C = max_{p(x)} I(X, Y)


Shannon's Channel Coding Theorem. There exist channel codes that permit communication at any
rate less than capacity with as small an error probability as desired. It is not possible to achieve arbitrarily
small error probability with any code at a rate greater than capacity. For example, for a channel of
bandwidth W perturbed by additive white Gaussian noise having average power N, when the average
received signal power is S, the capacity C of the channel is given by:

C = W log2(1 + S/N)

This example suggests that information can be encoded so that its transmission over a channel of
finite bandwidth with finite signal-to-noise ratio can be received with a probability of error that is
arbitrarily close to zero. Prior to this discovery, communication engineers believed that the only way to
combat channel noise was to use very narrow bandwidths and send information at very low rates. They
felt that errors would be inevitable in the received signal as long as the channel noise had finite power.
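As a numerical illustration of the capacity formula (the 3000 Hz bandwidth and 30 dB signal-to-noise
ratio below are nominal voice-grade telephone figures assumed for the example, not values from the text):

```python
import math

def capacity(w_hz, snr):
    """Shannon capacity C = W log2(1 + S/N) in bits per second; snr is a power ratio."""
    return w_hz * math.log2(1 + snr)

snr = 10 ** (30 / 10)              # 30 dB corresponds to a power ratio of 1000
print(round(capacity(3000, snr)))  # roughly 30,000 bits per second
```

Capacity grows only logarithmically with signal-to-noise ratio but linearly with bandwidth, which is why
wideband channels are so valuable.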
Although Shannon's noisy channel theorem is an impressive result, it does not show how to encode
the transmitted signals in order to achieve capacity. This fact has provided employment opportunities
for many coding theorists and designers for nearly a half century. Further, while most well-designed
and correctly used error control codes can provide significant reductions in error rate, most do not provide
an arbitrarily small error probability while sending information at capacity.
How Coding Improves Communications. While error control coding can provide a reduced error rate at
the receiver output, its contribution to system performance is far more profound. Coding works by
introducing redundancy into a stream of symbols being sent over the noisy channel. In order to transmit
k binary symbols using an n-bit binary block code, n code symbols must be sent during the same amount
of time that the uncoded system would send k information symbols. To support this increased transmission
rate requires a larger channel bandwidth which admits proportionally more noise power into the receiver
(assuming white noise) and which causes, therefore, a greater error rate in the received bit stream. Thus,
the code not only has to correct errors at the original uncoded error rate, but also must actually correct
errors at a higher error rate caused by the increased noise.
For example, suppose a source is transmitting noncoherent binary frequency-shift keying (FSK) at a
rate of B bits per second using a transmitted power of P (watts) on a channel corrupted by white Gaussian
noise. The energy per bit Eb is given by P/B, and the power spectral density of the noise is N0 (watts/Hz).
For noncoherent binary FSK, the error probability is given by Pe = (1/2)exp(−Eb/2N0). If a code of rate R
(bits/symbol) is used, then the energy per transmitted code symbol is reduced to R·Eb, and the error
probability is actually increased to Pe′ = (1/2)exp(−R·Eb/2N0). Appropriate choice of error control
code, however, will reduce the error probability not
merely back to Pe, but rather to a significantly smaller number, Pd. Of course, it is always possible to
achieve the same reduction in error probability merely by increasing the transmitter power from P to
some larger value, PH. The logarithmic ratio of these two values, 10 log10(PH/P) decibels, is called the coding gain.
Thus, for a given modulation format and channel noise, an important feature afforded by coding is the
use of less transmitter power than if coding is not used.
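The rate penalty described above is simple to tabulate. The sketch below (Python; the 12 dB operating
point and the rate-4/7 code are illustrative assumptions) uses the noncoherent FSK formula
Pe = (1/2)exp(−Eb/2N0) to show how much worse the raw channel error rate becomes before decoding.

```python
import math

def pe_ncfsk(ebno):
    """Noncoherent binary FSK symbol error probability: (1/2) exp(-Eb / 2N0)."""
    return 0.5 * math.exp(-ebno / 2)

ebno = 10 ** (12 / 10)              # 12 dB signal energy per bit to noise density
uncoded = pe_ncfsk(ebno)            # channel error rate with no coding
raw_coded = pe_ncfsk(4 / 7 * ebno)  # raw channel error rate with a rate-4/7 code
print(uncoded, raw_coded)           # the raw coded rate is substantially higher
```

The decoder must therefore not merely undo this increase but deliver a final error probability well
below the uncoded figure; the code's redundancy is what makes that possible.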
Error Control Codes. Perhaps the most widely known scheme for the control of channel errors is the
single parity check that is appended to every fixed-length block of binary digits so that the number of
ONEs in an augmented block is, for example, always even (or odd; the choice is arbitrary). Suppose,
for example, that data are to be transmitted in four-bit blocks (b0, b1, b2, b3). A single parity check
position p4 is set to 0 if the number of ONEs in the 4-tuple is even and to 1 if odd. (This is known as
even parity. A similar rule for odd parity can be used as well.) A single error in a block of even parity
will cause the number of ONEs to be odd. This odd parity condition can be detected, triggering an
automatic retransmission of that portion of the data containing the block with the error condition. If the
channel error rate is fairly low, this method is convenient and efficient.
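A minimal sketch of the even-parity scheme in Python (the four-bit blocks below are illustrative
examples):

```python
def add_parity(bits):
    """Append an even-parity bit so the augmented block has an even number of ONEs."""
    return bits + [sum(bits) % 2]

def check(block):
    """True if the block still has even parity (no error, or an even number of errors)."""
    return sum(block) % 2 == 0

block = add_parity([1, 0, 1, 1])  # -> [1, 0, 1, 1, 1]
print(check(block))               # parity holds: block accepted
block[3] ^= 1                     # a single channel error
print(check(block))               # parity fails: retransmission would be requested
```

Note that two errors in the same block restore even parity and slip through undetected, which is why
this scheme suits only channels with fairly low error rates.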
However, if the channel error rate is not quite so low, more powerful error control is needed. Consider
computing parity checks on subsets of the bi. For example, using even parity, compute parity bits p0 on
data bits b1, b2, and b3; p1 on b0, b2, and b3; and p2 on b0, b1, and b2. Instead of transmitting 4 bits of
information in five binary symbols (as above), we now transmit 4 bits of information using seven binary
symbols. This is somewhat more inefficient, but if any single binary digit in the channel is corrupted,
the location of that error in the codeword is uniquely determined by noting which parity checks fail.
One way to decode such a code is to recompute the parity checks p0, p1, and p2. An error in b0 (only)
will cause p1 and p2 to fail. No other single error will cause exactly p1 and p2, and no other parity
check, to fail. Similarly, a single error in b1 will cause p0 and p2 to fail, etc.
This code is called a single error-correcting code because it can correct any single bit error in a block.
Comparing this code with the single parity-check code that detects any odd number of errors, we see
that the latter adds one redundant bit for every four data bits, while the single error-correcting code adds
three redundant bits for every four data bits. We say that the single parity-check code has a rate R = 4/5
= 0.8 bits/symbol, while the single error-correcting code has R = 4/7 ≈ 0.57 bits/symbol. Therefore, the
price for error correction is, to a degree, decreased transmission rate.
The preceding single error-correcting code is a particular example (the Hamming code) of a linear
block code (LBC). The parity check computations used in encoding are succinctly expressed in its
generator matrix G:

G = [ 1 0 0 0 0 1 1
      0 1 0 0 1 0 1
      0 0 1 0 1 1 1
      0 0 0 1 1 1 0 ]

Encoding of a 4-bit information vector b = (b0, b1, b2, b3) is performed by matrix multiplication:

v = bG = (b0, b1, b2, b3, p0, p1, p2)

where the {pj} are as given above and v is the codeword produced by the information vector.
Every LBC has a G-matrix having k rows and n columns and is said to be an (n,k) code; k is called
the dimension of the code and n is its blocklength. For the foregoing example, (n,k) = (7,4) and the code
rate is

R = k/n = 4/7 ≈ 0.57 bits/symbol

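A minimal sketch of encoding and single-error correction for this (7,4) code, using the generator
matrix above (the brute-force search over error positions is an illustrative decoding method, chosen
for clarity rather than efficiency):

```python
# Hamming (7,4) sketch: codeword v = (b0, b1, b2, b3, p0, p1, p2), arithmetic over GF(2).
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1, 0],
]

def encode(b):
    """v = bG over GF(2)."""
    return [sum(b[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def correct(r):
    """Recompute the three parity checks; a nonzero failure pattern flags an error,
    which a brute-force search over the 7 positions then locates and flips back."""
    def fails(word):
        return tuple(sum(word[i] * G[i][j + 4] for i in range(4)) % 2 ^ word[j + 4]
                     for j in range(3))
    if fails(r) == (0, 0, 0):
        return r                       # all parity checks pass: accept as is
    for pos in range(7):               # try flipping each bit in turn
        trial = r[:]
        trial[pos] ^= 1
        if fails(trial) == (0, 0, 0):
            return trial
    return r                           # more than one error: beyond this code's power

v = encode([1, 0, 1, 1])
r = v[:]
r[2] ^= 1                              # channel flips one bit
print(correct(r) == v)                 # the single error is corrected
```

Because the code's minimum distance is 3, exactly one single-bit flip of any corrupted word lands back
on a codeword, so the search always recovers the transmitted block when at most one error occurred.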
The Hamming distance d between two words of an LBC is the number of positions in which they
differ. The minimum Hamming distance dmin of an LBC is the smallest Hamming distance for all pairs
of codewords. An LBC is guaranteed to correct t = ⌊(dmin − 1)/2⌋ or fewer errors in any word received
from a noisy channel. Generally, codes with large values of dmin have small values of code rate R and
vice versa, so the communication engineer faces a serious trade-off between attempted transmission
speed (high rate) and error correction required to achieve a required fidelity. For example, the Varshamov-
Gilbert bound tells us that codes with rates greater than some number exist (if only we can find them):

k/n ≥ 1 + (dmin/n) log2(dmin/n) + (1 − dmin/n) log2(1 − dmin/n) = 1 − H(dmin/n)

where H(·) is the binary entropy function.

In the receiver, the error correction process is performed by a decoder, a digital (usually) circuit that
implements an appropriate decoding algorithm. Of course, any LBC can be decoded by comparing the
received word with every code word, choosing the codeword most likely to have been transmitted. In
most cases, this maximum likelihood decoding procedure can be done only by looking up all the
codewords in a table, a process which is too complex and time-consuming for most practical commu-
nication situations.
Fortunately, mathematical decoding algorithms exist for most popular families of LBCs and can
decode with much less complexity (but with a higher error rate) than maximum likelihood decoding.
Applications of Coding. Coding is found everywhere that digital communication is found. It has been
said that if a digital communications system does not use coding, it is probably overdesigned. We find
trellis codes in popular computer dial-up modems; Reed-Solomon codes in military communications
equipment and in the compact disc player; convolutional codes in outer space. Error control codes will
play an important part in the emerging digital cellular telephone systems and in any digital communication
application where transmitter (or battery) power is at a premium.

Defining Terms
Bandpass: The characterization of a signal or a channel, the spectrum of which is centered around a
frequency far from zero. Contrast with baseband.
Bandwidth: The nominal width of the spectrum of a signal or of the bandpass characteristic of an
electronic filter or a communication channel.
Baseband: The characterization of a signal or a channel, the spectrum of which is concentrated at low
frequencies, typically in the audio and video ranges.
Carrier: The nominal frequency at which a bandpass signal exists. For some formats, such as amplitude
modulation, the carrier is the center frequency of the spectrum. It is the frequency remaining in
the spectrum as the level of modulation is gradually reduced to zero.
Channel: A medium of transmission of a communication system. Typically, it refers to the physical
medium and includes fiber-optic channels, voice-grade telephone channels, mobile wireless chan-
nels, and satellite communication channels.
Channel code: A code (q.v.) which is designed to control channel noise-induced errors in a transmission.
Channel codes use redundant symbols to detect the presence of and to locate and correct errors.
Coherent demodulation: Any demodulation process in which the phase of the arriving signal is known,
measured, or estimated with sufficient accuracy to improve the signal-to-noise ratio at the receiver
output.
Code: A mapping from a set of symbols or messages into strings of symbols. Codes often, but not
always, map into strings of binary symbols.
Compression: The technique of removing redundancy from a signal so that it can be transmitted in less
time or use less bandwidth or so that it occupies a smaller space on storage media.
Crosstalk: Leakage of a signal being carried on one communication channel to another. It is an unde-
sirable phenomenon that is often caused when the output of a channel is inadvertently coupled
into another.
Decibel: A logarithmic expression of power ratio computed by multiplying the common logarithm of
the ratio by 10.
Demodulate: In a radio receiver, to remove the information-bearing signal from the received signal.
Digital communications: The field of communications in which all information is represented as strings
of symbols drawn from a set of finite size. Most commonly, the term implies binary communica-
tions but, in fact, refers to nonbinary communications as well.
Distortion: A measure of the difference between two signals. The various forms of distortion usually
arise from nonlinear phenomena.
Entropy: A measure of the average uncertainty of a random variable. For a random variable with
probability distribution p(x), the entropy H(X) is defined as −Σx p(x) log2 p(x).
Huffman coding: A procedure that constructs the source code of minimum average length for a random
variable.


Intermodulation: The unintentional modulation of one communication signal by another. It is an unde-
sirable effect of having elements of two communication systems in too close proximity to one
another.
Lempel-Ziv coding: A dictionary-based procedure for source coding that does not use the probability
distribution of the source and is nonetheless asymptotically optimal.
Linear block code: A channel code with dimension k, blocklength n, and k by n generator matrix G
which produces a code word when multiplied by an information block of length k.
Modulate: To combine source information with a bandpass signal at some carrier frequency.
Noise: A signal n(t), the value of which at any time t is a random variable having a probability distribution
that governs its values.
Quantization: A process by which the output of a continuous source is represented by one of a set of
discrete values.
Rate-distortion function: The minimum rate at which a source can be described to within a given value
of average distortion.
Sample: The value of a continuously varying function of time measured at a single instant.
Signal: a time-varying voltage, current, or electromagnetic wave that conveys or represents information.
Source: Anything that produces an electrical signal that represents information.
Source code: A code (q.v.) that compresses a source signal, reducing its data rate and attempting to
maximize its entropy.
Spectrum: The representation of a time-varying signal in the frequency domain, usually obtained by
taking the Fourier transform of the signal.

References
Cover, T.M. and Thomas, J.A. 1991. Elements of Information Theory. John Wiley & Sons, New York.
Gallager, R. 1968. Information Theory and Reliable Communication. John Wiley & Sons, New York.
Haykin, S. 1994. Communication Systems, 3rd ed., John Wiley & Sons, New York.
Michelson, A.M. and Levesque, A.H. 1985. Error Control Techniques for Digital Communication. John
Wiley & Sons, New York.
Proakis, J.G. 1995. Digital Communications, 3rd ed. McGraw-Hill, New York.
Sklar, B. 1988. Digital Communications: Fundamentals and Applications. Prentice-Hall, Englewood
Cliffs, NJ.
Sloane, N.J.A. and Wyner, A.D., Eds. 1993. Claude Elwood Shannon: Collected Papers. IEEE Press,
New York.
Wicker, S.B. 1995. Error Control Systems for Digital Communication and Storage. Prentice-Hall,
Englewood Cliffs, NJ.

Further Information
Information theory as presented here treats only the case where one user transmits one message over
one channel at a time. During the past 15 years, a multiuser information theory has been emerging. This
theory provides bounds on the information rates of more than one user transmitting over the same channel
simultaneously. The beginnings of the theory are documented in the reference by Cover and Thomas.
New results appear frequently in the IEEE Transactions on Information Theory.
Source and channel coding remain fertile areas of research and development. In addition to the
information theory transactions, important results and some applications can be found in the IEEE
Transactions on Communications. Special areas are given emphasis in the IEEE Journal on Selected
Areas in Communications, nearly all the issues of which are special issues.
Another accessible text on digital communications is Digital Communications, by E. A. Lee and
D. G. Messerschmitt (Kluwer, 1988).
18.4 Applications

Accessing the Internet


Lloyd W. Taylor
The Internet can provide access to a wide variety of information. Understanding what the Internet is,
how it works, and what tools you can use will help in finding that fact or that person that is needed.
What Is the Internet?
The Internet is a voluntary, worldwide association of networks. There is no one in charge of the
Internet, and anyone can belong.
There have been many attempts to define the Internet, all of which fall short in one way or another.
Probably the most accurate definition is that the Internet is any interconnected network that uses the
TCP/IP protocol suite.

A Brief History of the Internet


The Internet got its start as a project of the Defense Advanced Research Projects Agency (DARPA) in
1962. The plan was to create a military network that could survive nuclear attack by automatically
rerouting traffic around network nodes that had been destroyed. The first nodes were installed in 1969.
By 1971 there were 15 nodes spread across the United States.
In 1983, responsibility for the ARPANet (as it was then called) was split. The military portion of the
Internet was transferred to the Department of Defense and merged with the Defense Data Network
(DDN). The civilian portion continued to be operated as ARPANet. In 1986, the NSFNet was established,
which superseded the ARPANet by 1990. By this time there were over 313,000 connected computers.
Use of the Internet continued to grow. In 1993, the NSF decided to get out of the business of running
the Internet, and the entire system was privatized. In the meantime, a number of private Internet access
providers (IAPs) had come to exist, largely eliminating the need for any government subsidy or operation
of the Internet. By the beginning of 1994, there were over 2.2 million connected computers.
Today, there are dozens of IAPs and thousands of Internet service providers (ISPs) around the United
States, and many more internationally. The deregulation of telephone companies in the United States
has resulted in explosive growth, as the costs of long distance leased lines have dropped precipitously.
Anyone with a few thousand dollars can set up an ISP by simply purchasing a Unix system and several
modems and signing up for Internet access with an IAP.
The Internet works by routing TCP/IP packets from one router to another until the packet reaches
its destination (Figure 18.4.1). Each router determines the best path to the next router by the use of
routing tables it maintains, based on conversations with adjacent routers.

FIGURE 18.4.1 Router network.
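The table-driven forwarding decision described above can be sketched with Python's standard ipaddress module. The prefixes and next-hop names below are invented for illustration; real routers hold far larger tables learned from routing protocols.

```python
import ipaddress

# Toy routing table: destination prefix -> next hop (names are invented).
ROUTES = {
    "192.0.2.0/24": "router-a",
    "192.0.2.128/25": "router-b",
    "0.0.0.0/0": "default-gateway",
}

def next_hop(destination: str) -> str:
    """Choose the most specific (longest-prefix) matching route."""
    dest = ipaddress.ip_address(destination)
    best = max(
        (ipaddress.ip_network(p) for p in ROUTES if dest in ipaddress.ip_network(p)),
        key=lambda net: net.prefixlen,
    )
    return ROUTES[str(best)]

print(next_hop("192.0.2.200"))  # falls inside the more specific /25
print(next_hop("203.0.113.9"))  # only the default route matches
```

Each router repeats this lookup independently, which is how traffic is rerouted around failed nodes: when a link goes down, the routing protocol withdraws the affected prefixes and the lookup falls through to another entry.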
Typically, a company will contract with an Internet access provider (IAP) for a fixed bandwidth
connection to that IAP's nearest point of presence (POP). The IAP will usually interconnect its routers
via high-speed leased data lines. The IAP will install a leased line between its POP and a router located
at the company's site, and then connect the router to the company's local network. Once the router is
programmed and enabled, your company's network is part of the Internet.
Individuals can also gain access to the Internet for a monthly fee by contracting with an Internet
service provider (ISP). The ISP will typically offer either a shell account, which provides you with
dialup access to a Unix system via a terminal emulator, or a SLIP/PPP account, which allows you to
connect your personal computer directly to the Internet via a dialup modem.
Internet Tools
There are several basic tools that can be used to access information across the Internet. Each has its
particular uses and shortcomings. This section provides a brief introduction to each.
Telnet. Telnet is a tool that allows a computer to emulate a simple terminal. It provides a mechanism
by which the user can connect to another computer as if that computer were physically attached to the
users workstation.
A typical telnet application will emulate one or more terminal types. The most commonly emulated
terminal type is the Digital Equipment Corporation's VT100 terminal, with the IBM 3270 terminal a
close second. The VT100 terminal is commonly used to communicate with Unix and VMS systems,
while the IBM 3270 is used to communicate with IBM mainframe systems based on the VM or MVS
operating systems.
Telnet provides a text-only interface and is incapable of displaying graphics. It is a good choice when
a simple, efficient connection to a remote computer is desired, or when a text-only interface is sufficient.
If a graphical interface is required for an interactive session, the X-Window System is a better choice.
X-Window System. The X-Window System (X) was developed as part of MIT's Project Athena. It is
capable of full color graphics display and requires a mouse on the workstation for window and command
selection.
The X-Window server runs on the user's workstation. It receives drawing commands from the remote
computer and creates the requested text and graphics. Mouse clicks and keyboard input are sent to the
remote computer, which responds by sending commands back to the workstation to fulfill the user's
request.
In general, a window manager must be running either on the remote computer or on the user's
workstation, but not both. The window manager is responsible for controlling the placement and
movement of windows, for creating and destroying windows, and for starting applications on the remote
computer at the request of the user.
X requires significant processing power on the local workstation and can demand significant network
bandwidth for proper operation. In general, it should be run only on high-end workstations that are
directly connected to an ethernet. Attempting to run X-Windows over a dialup line will result in very
slow performance.
FTP. The file transfer protocol (FTP) is used to move files from one computer to another. It is capable
of moving all types of files (text or binary; images, sounds, software, etc.).
FTP provides a collection of commands that allow the user to connect to remote systems, to manipulate
files and directories on the remote system, and to move files back and forth between local and remote
systems. These commands vary somewhat between implementations of FTP.
Many systems on the Internet offer anonymous FTP. This de facto standard allows remote users to
connect to a public portion of the computer and to retrieve (and sometimes place) files. The standard
way to use anonymous FTP is to connect to the remote computer, log in as "anonymous," and use your
email address as the password. You will generally be granted read-only access to a set of directories and
files which you can then download and use.
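The anonymous FTP convention just described can be sketched with Python's standard ftplib. The host and directory names below are placeholders; no connection is attempted until the function is actually called.

```python
from ftplib import FTP

def fetch_anonymous_listing(host: str, directory: str = "/") -> list[str]:
    """Log in to an anonymous FTP server and return a directory listing."""
    listing: list[str] = []
    with FTP(host) as ftp:  # connects on construction
        # By convention, anonymous FTP uses the literal user name
        # "anonymous" and your email address as the password.
        ftp.login(user="anonymous", passwd="user@example.com")
        ftp.cwd(directory)
        ftp.retrlines("LIST", listing.append)
    return listing

# Hypothetical usage:
#   for line in fetch_anonymous_listing("ftp.example.org", "/pub"):
#       print(line)
```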
Email. Electronic mail (email) is traditionally the most commonly used Internet application. Commu-
nicating with colleagues and friends around the world essentially instantaneously is a powerful capability.
There are two parts to email. The user agent (UA) is the part that the user directly interacts with. This
is the program on the workstation that provides the necessary commands and capabilities to create and
send messages. The UA may include features such as spell checking, attachments, and address books.
There are many different implementations of user agents for every kind of operating system. All of them
use the same transport mechanism.
The second part of email is the transport mechanism, or message transfer agent (MTA). It is the
responsibility of the MTA to receive the message from the UA, to parse the mailing instructions (e.g.,
"To:", "Cc:"), and to send the message on to the intended recipients. On the Internet, all MTAs use the
simple mail transfer protocol (SMTP) to carry messages between systems.
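This division of labor can be sketched with Python's standard email and smtplib modules. All addresses and the mail host below are placeholders, and the SMTP handoff itself is shown but left commented out.

```python
from email.message import EmailMessage
import smtplib  # MTAs speak SMTP, as described above

# The user agent's side: compose the message and its mailing instructions.
msg = EmailMessage()
msg["From"] = "alice@example.com"   # placeholder addresses
msg["To"] = "bob@example.org"
msg["Cc"] = "carol@example.org"
msg["Subject"] = "Meeting notes"
msg.set_content("Minutes from today's meeting follow.")

# The MTA's side: parse the To:/Cc: headers and relay the message.
# Left commented out because the mail host is a placeholder:
# with smtplib.SMTP("mail.example.com") as mta:
#     mta.send_message(msg)

print(msg["To"])  # the header the MTA would act on
```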
At this writing, the ability to send attachments and binary files between systems remains problematic.
There are several different standards for encapsulating attachments and graphics within SMTP messages.
Unfortunately, these standards are not interoperable. The most commonly used encapsulation standards
are MIME (multipurpose Internet mail extensions) and X.400. To successfully transfer attachments and
graphics via email, you must ensure that the recipients of your message use a UA that uses a compatible
encapsulation standard.
Worldwide Web. The Worldwide Web (WWW or Web) first appeared on the Internet in 1992. It was
developed as a way of sharing scientific information among researchers, but quickly was adopted by
millions of people as the way to access information on the Internet. By 1995, half of all the traffic on
the Internet was Web traffic.
A typical Web browser runs on the local workstation and passes requests for information to the remote
server. The browser may use a variety of transport mechanisms to obtain the information (e.g., ftp, or
http, the hypertext transfer protocol) as directed by the server.
The unifying concept behind the power of the Web is the uniform resource locator (URL). A URL
is a unique address for a specific piece of information on the Web. It is made up of four parts: a protocol
identifier, a host name, a path name, and an item name. Figure 18.4.2 shows a typical URL, where the
protocol identifier is http, the host name is www.asmenet.org, the path name is /techaff/, and the item
name is techprog.html.

FIGURE 18.4.2 A typical URL.
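The four parts of the URL in Figure 18.4.2 can be recovered with Python's standard urllib.parse:

```python
from urllib.parse import urlparse
from posixpath import split as split_path

url = "http://www.asmenet.org/techaff/techprog.html"
parts = urlparse(url)

protocol = parts.scheme              # "http"
host = parts.netloc                  # "www.asmenet.org"
path, item = split_path(parts.path)  # "/techaff" and "techprog.html"

print(protocol, host, path, item)
```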

To retrieve this item, the Web browser will use the http protocol to connect to the www.asmenet.org
server. Then it will change to the techaff directory and retrieve the techprog.html item. Because the item
ends in html, the browser knows that this is a hypertext document and will parse and directly display
the item. If it were a different type of item (Table 18.4.1), the browser would either directly display the
item or call an external program to process it. In the case where there is no external program identified
for the particular extension, the browser will typically offer the user the option of saving the item to
disk or of canceling the download.

TABLE 18.4.1 Typical Item Extensions

Extension   File Type          Typical External Program
.gif        Compressed image   None required
.au         Audio file         SoundPlayer
.mpeg       Digital movie      MPEGPlay
.html       Hypertext          None required
Web browsers are capable of transferring information using all major transport mechanisms (e.g., ftp,
gopher, http). Because of this, they largely eliminate the need to use these other tools. For those just
getting started on the Internet, a good Web browser should be the first tool learned.
Finding Things on the Internet
The Internet is largely an anarchy. There is no central authority that provides an organizing structure to
the information available. Anyone with an Internet connection can become an information provider by
simply setting up a Webserver or an FTP server.
An inescapable consequence of this is that it can be very difficult to find specific information on the
Internet. There is no rhyme or reason to where information is placed, and similar types of information
may be located thousands of miles apart.
A number of organizations have taken it upon themselves to try to provide some structure. They
typically take one of two approaches: either a content-based approach or an automated indexing approach.
Content-based tools make use of discipline specialists to discover and index information manually.
These discipline specialists comb the Internet for relevant information. When they find such information,
they add a link to it from their content-based page. Thus, if you wish to find information of a specific
type, you may start from a general definition of information type, and then further refine your query by
clicking on the appropriate keywords. Table 18.4.2* lists a few content-based indexes current as of the
date of publication of this book. A directory of content-based sites is maintained at

http://home.netscape.com/home/internet-directory.html

TABLE 18.4.2 Content-Based Information Indexes

Name                         URL                        Discipline
Yahoo Directory              http://www.yahoo.com/      All
McKinley Internet Directory  http://www.mckinley.com/   All
Lycos                        http://www.lycos.com/      All
Virtual Tourist              http://www.vtourist.com/   Webservers sorted by geographic region

Content-based searching is most effective when you know the area in which you are interested and
want to find a resource that is relevant to that area. It has a couple of drawbacks: first, if the content
specialist has not yet indexed an information resource, there will be no way for you to find it; second,
links across disciplines may not exist, limiting the breadth of the information you may be able to locate.
Automated indexing tools are based on software agents that visit every Web site on a periodic basis
and index the content of those sites by keyword. A variety of indexing strategies are used, resulting in
a wide range of usefulness for a particular purpose. A selection of indexing tool sites are listed in Table
18.4.3. A directory of these sites is available at
http://home.netscape.com/home/internet-search.html

TABLE 18.4.3 Index-Based Information Sites

Name                     URL
InfoSeek Search          http://www.infoseek.com/
WebCrawler Search        http://webcrawler.com/
W3 Search Engine         http://cuiwww.unige.ch/meta-index.html
Altavista Search Engine  http://altavista.digital.com

*URLs and search engines come and go with surprising frequency. These specific addresses may well cease to
operate during the lifetime of this book. Similar services are likely to be available - check with a local Internet
specialist or reference librarian for assistance.
Automated indexing tools are useful when you are looking for information by keyword or keyphrase.
Their major disadvantage is the volume of information that will be returned on simple queries. For
example, a simple keyword search on "Mechanical Engineering" returned over 12,000 pointers to
information.
It is therefore important to make your keywords as specific as possible. It is also important to read
the guidelines for keywords on each of the servers you use. They each have their own specific search
rules and syntax.
Internet Security Issues
The Internet is an inherently insecure environment. Anything that you send over the Internet can be read
by someone with sufficient motivation. Because of this, it is critical to think twice about what you send.
In general, the best rule of thumb is to ask yourself the question "Would I be upset if this message were
printed on the front page of The New York Times?" If the answer is yes, you should seriously consider
using encryption to protect your message. One popular freeware package that implements email
encryption, and is available for all major operating systems, is PGP. Check with your system administrator to
see if it is available on your system.
The Internet also provides a path for people outside your organization to access computers and
information within. If your network staff has not installed an Internet firewall, your system is directly
accessible to anyone. You are the only one who can ensure that your system is secure.
If you are using a multiuser workstation (like a Unix or VMS system), your system administrator is
responsible for ensuring that known security holes have been closed. Your responsibility is to pick strong
passwords (ones that do not appear in the dictionary and are hard to guess) so that someone else will
not be able to compromise your account. You should change your password periodically (every 3 months
or so) to help protect yourself.
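A toy version of the dictionary check implied by this advice (the word list here is a tiny stand-in for a real dictionary file):

```python
def looks_weak(password: str, dictionary: set[str]) -> bool:
    """Flag passwords that are short or appear in a word list."""
    return len(password) < 8 or password.lower() in dictionary

WORDS = {"password", "secret", "dragon"}  # stand-in for a full word list

print(looks_weak("dragon", WORDS))    # True: a dictionary word
print(looks_weak("k7#Qm9!z", WORDS))  # False: long enough, not in the list
```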

Defining Terms
IAP: Internet access provider; a company that sells access to the Internet.
ISP: Internet service provider; a company that sells Internet services.
PPP: Point-to-point protocol; a standard for encapsulating multiple protocols (including TCP/IP) over
dialup telephone lines.
Routing: Moving data packets from one router to the next until they reach their destination.
SLIP: Serial line internet protocol; a standard for encapsulating TCP/IP and transmitting it over dialup
lines.
TCP/IP: Transmission control protocol/Internet protocol.

References
Crumlish, C. 1994. A Guided Tour of the Internet, Sybex, Inc.
Eddings, J. and Wattenmaker, P. 1994. How the Internet Works. Ziff-Davis.
Gaffin, A. 1994. Everybody's Guide to the Internet. Baker.
Kehoe, B.P. 1994. Zen and the Art of the Internet. West.
Otte, P. 1994. The Information Superhighway: Beyond the Internet. QUE.
Randall, N. 1994. Teach Yourself the Internet: Around the World in 21 Days. SAMS.
Salus, P.H. 1995. Casting the Net: From Arpanet to Internet and Beyond.
Tittle, E. and Robbins, M. 1994. Internet Access Essentials. Academic Press, New York.
Data Acquisition
Dhammika Kurumbalapitiya and S. Ratnajeevan H. Hoole
Data acquisition includes everything from gathering data, to transporting it, to storing it. The term data
acquisition is described as "the phase of data handling that begins with sensing of variables and ends
with a magnetic recording of raw data; may include a complete telemetering link" (McGraw-Hill
Dictionary of Scientific and Technical Terms, 2nd ed., 1978). Here the term variables refers to those
physical quantities that are associated with a natural or artificial process. A data acquisition phase involves
a real-time computing environment where the computer must be keyed to the time scale of the process.
Figure 18.4.3 gives a simplified block diagram of a data acquisition system current in the early 1990s.

FIGURE 18.4.3 Block diagram of a data acquisition system.

The path the data travel through the system is called the data acquisition channel. Data are first
captured and subsequently translated into usable signals using transducers. In this discussion, usable
signals are assumed to be electrical voltages, either unipolar (that is, single ended, with a common
ground so that we need just one lead wire to carry the signal) or bipolar (that is, common mode, with
the signal carried by a wire pair, so that the reference of the rest of the system is not part of the output).
These voltages can be either analog or digital, depending on the nature of the measurand (the quantity
being captured). When there is more than one analog input, the inputs are sent to an analog
multiplexer (MUX). Both the analog and the digital signals are then conditioned using signal
conditioners. There are two additional steps for the conditioned analog signals: first they must be
sampled, and then converted to digital data. This conversion is done by analog-to-digital converters (ADCs).
Once the analog-to-digital conversion is done, the rest of the steps have to deal with digital data only.
The calendar/clock block shown in Figure 18.4.3 is used to add time-of-day information, an important
parameter of a real-time processing environment, to the half-processed data. The digital processor
performs the overall system control tasks using a software program, which is usually called system
software. These control tasks also include display, printer, data recorder, and communication interface
management. A well-regulated power supply unit (PSU) and a stable clock are essential components
in many data acquisition systems. Systems that produce massive numbers of data points
within a very short period of time are equipped with on-board memory so that a considerable
number of data points can be stored locally. Data are transmitted to the host computer once the local
storage has reached its full capacity. Historically, data acquisition evolved in modular form, until
monolithic silicon came along and reduced the size of the modules.
The analysis and design of data acquisition systems are a discipline that has roots in the following
subject areas: signal theory, transducers, analog signal processing, noise, sampling theory, quantizing
and encoding theory, analog-to-digital conversion theory, analog and digital electronics, data communi-
cation, and systems engineering. Cost, accuracy, bit resolution, speed of operation, on-board memory,
power consumption, stability of operation under various operating conditions, number of input channels
and their ranges, on-board space, supply voltage requirements, compatibility with existing bus interfaces,
and the types of data recording instruments involved are some of the prime factors that must be considered
when designing or buying a data acquisition system. Data acquisition systems are involved in a wide
range of applications, such as machine control, robot control, medical and analytical instrumentation,
vibration analysis, spectral analysis, correlation analysis, transient analysis, digital audio and video,
seismic analysis, test equipment, machine monitoring, and environmental monitoring.
The Analog and Digital Signal Interface
The data acquisition system must be designed to match the process being measured as well as the end-
user requirements. The nature of the process is mainly characterized by its speed and number of measuring
points, whereas the end-user requirement is mainly flexibility in control. Certain processes, in which
computers are used for control, require data acquisition with no interruption. On the other hand, there
are cases where the acquisition starts at a certain instant and continues for a definite period. In this
case the acquisition cycle is repeated in a periodic manner, and it can be controlled manually or by
software. Controllers access the process via the analog and digital interface submodules, which are
sometimes called analog and digital front ends.
Many applications require information capturing from more than one channel. The use of the analog
MUX in Figure 18.4.3 is to cater to multiple analog inputs. A detailed diagram of this input circuitry is
shown in Figure 18.4.4 and the functional description is as follows. When the MUX is addressed to
select an input, say xi(t), the same address will be decoded by the decoding logic to generate another
address, which is used in addressing the programmable register. The programmable register contains
further information regarding how to handle xi(t). The outcome of the register is then used in subsequent
tuning of the signal conditioner. Complex programmable control tasks might include automatic gain
selection for each channel, and hence the contents of this register are known as the channel gain list.
The MUX address generator could be programmed in many ways, and one simple way is to scan the
input channels in a cyclic fashion where the address can be generated by means of a binary counter.
Microprocessors are also used in addressing MUXs in applications where complex channel selection
tasks are involved. Multiplexers are available in integrated circuit form, though relay MUXs are widely
used because they minimize errors due to crosstalk and bias currents. Relay MUX modules are usually
designed as plugged-in units and can be connected according to the requirements.
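The cyclic scan generated by a binary counter can be modeled directly; the channel count below is chosen arbitrarily:

```python
from itertools import count

NUM_CHANNELS = 8  # arbitrary number of analog inputs

def mux_addresses():
    """Yield MUX channel addresses cyclically, as a free-running
    binary counter taken modulo the channel count would."""
    for n in count():
        yield n % NUM_CHANNELS

gen = mux_addresses()
first_cycle = [next(gen) for _ in range(NUM_CHANNELS + 2)]
print(first_cycle)  # [0, 1, 2, 3, 4, 5, 6, 7, 0, 1]
```

In hardware the modulo operation comes for free: an N-bit counter addressing 2^N channels simply wraps around on overflow.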
There are applications where the data acquisition cycle is triggered by the process itself. In this case
an analog or digital trigger signal is sent to the unit by the process, and a separate external trigger
interface circuit is supplied. The internal controller assumes its duties once it has been triggered. It
takes a finite time for the signal xi(t) to settle through the MUX up to the signal conditioner once it is
addressed. Therefore, for greater speed it is possible to process xi-1(t) during the selection time of xi(t).
FIGURE 18.4.4 Input circuitry.

This function is known as pipelining and will be illustrated in the subsection on Analog Signal
Conditioning.
In some data acquisition applications the data acquisition module is a plugged-in card in a computer,
which is installed far away from the process. In such cases, transducers (the process sensing elements)
are connected to the data acquisition module using transmission lines or a radio link. In the latter
case a complete demodulating unit is required at the input. When transmission lines are used in the
interconnection, care must be taken to minimize electromagnetic interference since transmission lines
pick up noise easily. In the case of a single-ended transducer output configuration, a single wire is
adequate for the signal transmission, but a common ground must be established between the two ends
as given in Figure 18.4.5(a). For the transducers that have common mode outputs, a shielded twisted
pair of wires will carry the signal. In this case, the shield, the transducer's encasing chassis, and the
data acquisition module's reference may be connected to the same ground as shown in Figure 18.4.5(c).
In high-speed applications the transmission line impedance should be matched with the output impedance
of the transducer in order to prevent reflected traveling waves. If the transducer output is not strong
enough to transmit for a long distance, it is best to amplify it before transmission.

FIGURE 18.4.5 Sensor/acquisition module interconnections.

Transducers that produce digital outputs may be first connected to Schmitt trigger circuits for pulse
shaping purposes, and this can be considered a form of digital signal conditioning. This becomes an
essential requirement when such inputs are connected through long transmission lines where the line
capacitance significantly affects the rising and falling edges of the incoming wave. Opto-isolators are
sometimes used in coupling when the voltage levels of the two sides of the transducer and the input
circuit of the data acquisition unit do not match each other. Special kinds of connectors are designed
and widely used in interconnecting transmission lines and data acquisition equipment in order to screen
the signals from noise. Analog and digital signal grounds should be kept separate where possible to
prevent digital signals from flowing in the analog ground circuit and inducing spurious analog signal
noise.
Analog Signal Conditioning
The objective of an analog signal conditioner is to increase the quality of the transducer output to a
desired level before analog-to-digital conversion. A signal conditioner mainly consists of a preamplifier,
which is either an instrumentation amplifier or an operational amplifier, and/or a filter. Coupling more
and more circuits to the data acquisition channel has to be done, taking great care that these signal
conditioning circuits do not add more noise or unstable behavior to the data acquisition channel. General-
purpose signal conditioner modules are commercially available for applications. Some details were given
in the previous section about programmable signal conditioners and the discussion is continued here.
Figure 18.4.6 shows an instrumentation amplifier with programmable gain where the programs are
stored in the channel-gain list. The reason for having such sophistication is to match transducer outputs
with the maximum allowable input range of the ADC. This is very important in improving accuracy in
cases where transducer output voltage ranges are much smaller than the full-scale input range of an
ADC, as is usually the case. Indeed, this is equally true for signals that are larger than the full-scale
range, and in such cases the amplifier functions as an attenuator. Furthermore, the instrumentation
amplifier converts a bipolar voltage signal into a unipolar voltage with respect to the system ground.
This action will reduce a major control task as far as the ADC is concerned; that is, the ADC is always
sent unipolar voltages, and hence it is possible to maintain unchanged the mode control input which
toggles the ADC between the unipolar and bipolar modes of an ADC.

FIGURE 18.4.6 Programmable gain instrumentation amplifier.
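The gain-matching task handled through the channel-gain list can be illustrated numerically. The transducer and ADC full-scale values below are invented for illustration:

```python
def required_gain(transducer_fullscale_v: float, adc_fullscale_v: float) -> float:
    """Gain that stretches the transducer's full-scale output to the
    ADC's full-scale input. A value below 1.0 means the amplifier
    must act as an attenuator instead."""
    return adc_fullscale_v / transducer_fullscale_v

# A 50 mV full-scale sensor into a 10 V full-scale ADC needs a gain of 200.
print(required_gain(0.050, 10.0))

# A 25 V signal into the same ADC must be attenuated (gain 0.4).
print(required_gain(25.0, 10.0))
```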

Values of the signal-to-noise ratio (SNR),

    SNR = (RMS signal / RMS noise)^2    (18.4.1)

at the input and the output of the instrumentation amplifier are related to its common-mode rejection
ratio (CMRR) given by

    CMRR = SNRoutput / SNRinput    (18.4.2)
Hence, higher values of SNRoutput indicate low noise power. Therefore, instrumentation amplifiers are
designed to have very high CMRR figures. The existence of noise will result in an error in the ADC
output. The allowable error is normally expressed as a fraction of the least significant bit (LSB) of the
code such as (1/X)LSB. The amount of error voltage (Verror) corresponding to this figure can be found
considering the bit resolution (N) and the ADC's maximum analog input voltage (Vmax) as given in

V 1
Verror = Nmax volt (18.4.3)
2 1 X
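A quick worked example of Equation (18.4.3), using an assumed 12-bit converter with a 10 V full-scale input and a 1/2 LSB error budget:

```python
def v_error(v_max, n_bits, x):
    """Allowable error voltage for a (1/X) LSB error budget, Eq. (18.4.3)."""
    return v_max / (2 ** n_bits) * (1 / x)

# 12-bit ADC, 10 V full scale: 1 LSB = 10/4096 ≈ 2.44 mV,
# so a 1/2 LSB budget allows ≈ 1.22 mV of noise-induced error.
print(round(v_error(10.0, 12, 2) * 1000, 2))  # -> 1.22 (mV)
```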

Other amplifier specifications include the temperature dependence of the input offset voltage (Voffset,
µV/°C) and of the input offset current (Ioffset, pA/°C) associated with the operational amplifiers in use.
High slew rate (V/µs) amplifiers are recommended in high-speed applications. Generally, the higher the
bandwidth, the better the performance.
Cascading a filter with the preamplifier will result in better performance by eliminating noise. Active
filters are commonly used because of their compact design, but passive filters are still in use. The cutoff
frequency, fc, is one of the important performance indices of a filter and must be designed to match
the channel's requirements. The value of fc is a function of the preamplifier bandwidth, its output SNR,
and the output SNR of the filter.
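As a minimal illustration of cutoff-frequency design, the sketch below uses a first-order RC low-pass section with illustrative component values; practical antialiasing filters for data acquisition are usually higher-order active designs:

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """Cutoff (-3 dB) frequency of a first-order RC low-pass filter."""
    return 1 / (2 * math.pi * r_ohms * c_farads)

def rc_gain(f_hz, fc_hz):
    """Magnitude response of the same filter at frequency f."""
    return 1 / math.sqrt(1 + (f_hz / fc_hz) ** 2)

# 1.59 kΩ with 0.1 µF places fc near 1 kHz; at fc the gain is 1/sqrt(2).
fc = rc_cutoff_hz(1.59e3, 0.1e-6)
print(round(fc))                  # ≈ 1001 Hz
print(round(rc_gain(fc, fc), 3))  # -> 0.707 (the -3 dB point)
```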
Sample-and-Hold and A/D Techniques in Data Acquisition
Sample-and-hold systems are primarily used to maintain a constant magnitude representing the input
across the input of the ADC throughout a precisely known period of time. Such systems are called
sample-and-hold amplifiers (SHA), and their characteristics are crucial to the overall system accuracy
and the reliability of the digital data. An SHA is not essential in applications where the analog input
does not vary by more than (1/2) LSB of voltage. As the name indicates, an SHA operates in two different
modes, which are digitally controlled. In the sampling mode it acts as an input voltage follower; once
it is triggered into its hold mode, it should ideally retain the signal voltage level present at the instant
of the trigger. When it is brought back into the sampling mode, it instantly assumes the voltage level at the input.
Figure 18.4.7 shows the simplified circuit diagram of a monolithic sampling-and-hold circuit and the
associated switching waveforms. The differential amplifiers function as input and output buffers, and
the capacitor acts as the storage mechanism. When the mode control switch is at its on position, the two
buffers are connected in series and the capacitor follows the input with minimum time delay, provided
the capacitance is small. Now, if the mode control is switched off, the feedback loop is interrupted, and the capacitor ideally
retains its terminal voltage until the next sampling signal occurs. Leakage and bias currents usually
cause the capacitor to discharge and/or charge in the hold mode and the fluctuation of the hold voltage
is called droop, which could be minimized by having a large capacitor. Therefore, the capacitance has
to be selected such that the circuit performs well in both modes. Several time intervals are defined
relative to the switching waveform of SHAs. The acquisition time (ta) is the time taken by the device
to reach its final value after the sample command has been given. The settling time (ts) is the time taken
for the output to settle. The aperture uncertainty, or aperture jitter (tus), is the range of variation of the
aperture time. It is important to note that these sampling techniques have a well-formulated theoretical
background (the sampling theorem).
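The droop trade-off described above can be put in numbers: the hold-mode droop rate is simply the net leakage current divided by the hold capacitance. The figures below are illustrative, not taken from any particular SHA:

```python
def droop_rate_v_per_s(leakage_current_a, hold_cap_f):
    """Hold-mode droop rate dV/dt = I / C of a sample-and-hold capacitor."""
    return leakage_current_a / hold_cap_f

# 1 nA of leakage into a 1 nF hold capacitor droops at 1 V/s; over a 10 µs
# hold interval that is 10 µV of error. A 10 nF capacitor cuts the droop
# tenfold, at the cost of slower acquisition in the sampling mode.
print(droop_rate_v_per_s(1e-9, 1e-9))   # -> 1.0 (V/s)
print(droop_rate_v_per_s(1e-9, 10e-9))  # -> 0.1 (V/s)
```

This is exactly why the capacitance "has to be selected such that the circuit performs well in both modes": a larger capacitor reduces droop but lengthens the acquisition time.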
ADCs perform a key function in the data acquisition process. The application of various ADC
technologies in a data acquisition system depends mainly on the cost, bit resolution, and speed. Successive
approximation types are more common at high resolution at moderate speeds (<1 MHz). This kind of
ADC offers the best trade-offs among bit resolution, accuracy, speed, and cost. Flash converters, on the
other hand, are best suited for high-speed applications. Integrating-type converters are suitable for
high-resolution, high-accuracy applications.
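The successive-approximation principle can be sketched in a few lines: the converter tests one bit per clock, most significant first, keeping each trial bit only if the comparator indicates the running estimate is still at or below the input. The `sar_adc` function below is an idealized model, not a description of any specific device:

```python
def sar_adc(v_in, v_ref, n_bits):
    """Idealized successive-approximation conversion: one comparator
    decision per bit, MSB first, so an N-bit result takes N steps."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)
        # Comparator: keep the trial bit if the DAC output stays <= v_in.
        if trial * v_ref / (2 ** n_bits) <= v_in:
            code = trial
    return code

# An 8-bit converter with a 10 V reference resolves 10/256 ≈ 39 mV per LSB.
print(sar_adc(5.0, 10.0, 8))   # -> 128 (mid-scale)
print(sar_adc(9.99, 10.0, 8))  # -> 255 (near full scale)
```

The fixed N-decision structure is what gives successive approximation its predictable, moderate conversion time, in contrast to a flash converter's single-step (but hardware-expensive) conversion.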
Many techniques have been developed in coupling sample-hold circuits and ADCs in data acquisition
systems because no single ADC or sampling technology is able to satisfy the ever-increasing requirements
of data acquisition applications. Figure 18.4.8 illustrates the various sampling and ADC configurations


FIGURE 18.4.7 Monolithic sample-and-hold circuit.

FIGURE 18.4.8 ADC and sampling configurations.

used in practice. It can be seen that higher sampling frequencies are achieved through pipelining,
parallelism, or concurrent architectures. The increase in sampling frequency improves the bandwidth,
which in turn improves the SNR in the channel.
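One standard way to quantify this SNR benefit is the processing gain of oversampling: if the quantization noise is white, spreading it over a wider band and filtering to the signal band yields 3 dB (half a bit of effective resolution) per doubling of the sampling rate. A minimal sketch:

```python
import math

def oversampling_snr_gain_db(oversampling_ratio):
    """Processing gain from oversampling white quantization noise:
    10*log10(OSR), i.e., 3 dB per doubling of the sampling rate."""
    return 10 * math.log10(oversampling_ratio)

print(round(oversampling_snr_gain_db(4), 2))   # ≈ 6.02 dB (one extra bit)
print(round(oversampling_snr_gain_db(16), 2))  # ≈ 12.04 dB (two extra bits)
```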
The Communication Interface of a Data Acquisition System
The communication interface is the module through which the acquired data are sent and through which
other control tasks are coordinated between the data acquisition module and the host computer (Figure 18.4.3).
There are basically two different ways of establishing a data link between the two. One way is to use


interrupts and the other is through direct memory access (DMA). In the case of an interrupt-driven
mode, an interrupt-request signal is sent to the computer. Upon receiving it, the computer will first finish
the execution of the current instruction, suspend the next, and then send an interrupt-acknowledge signal
asking the module to send data. The operation is asynchronous since the sender sends data when it wants
to do so. Getting the computer ready to receive data is known as handshaking. In the case of a DMA
transfer, the DMA controller is given the starting address of the memory location where the data have
to be written. The DMA controller asks the computer to suspend its bus activity until it has finished
writing the data directly into memory. Since the processor need not intervene in each transfer, the operation is fast.
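The speed advantage of DMA can be sketched with a toy timing model: an interrupt-driven transfer pays a per-word service overhead (interrupt request, acknowledge, handshake), while DMA pays a one-time controller setup and then transfers words back to back. All figures below are illustrative assumptions:

```python
def interrupt_transfer_time(words, word_time_us, isr_overhead_us):
    """Interrupt-driven transfer: every word pays the interrupt/handshake cost."""
    return words * (word_time_us + isr_overhead_us)

def dma_transfer_time(words, word_time_us, setup_us):
    """DMA transfer: one controller setup, then back-to-back word transfers."""
    return setup_us + words * word_time_us

# Moving 1000 words at 1 µs/word, with an assumed 10 µs of interrupt
# overhead per word versus a single 50 µs DMA controller setup:
print(interrupt_transfer_time(1000, 1.0, 10.0))  # -> 11000.0 µs
print(dma_transfer_time(1000, 1.0, 50.0))        # -> 1050.0 µs
```

The crossover depends on block size: for a handful of words the DMA setup cost may dominate, which is why interrupt-driven transfer remains appropriate for low-rate channels.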
Data acquisition systems are usually designed to couple with existing computer systems, and many
computer systems provide standard bus architecture, allowing users to connect various peripherals that
are compatible with its bus. Data acquisition systems are computer peripherals that follow the above
description. Since ADCs produce parallel data, many data acquisition systems provide outputs compatible
with parallel instrument buses such as the IEEE-488 (HP-IB or GPIB) or the VMEbus. Personal computer-
based data acquisition boards must have communication interfaces compatible with the computer bus
in order to share resources. The RS-232 standard communication interfaces are widely used in serial
data transfer. Communication interfaces for data acquisition systems are normally designed to satisfy
the electrical, mechanical, and protocol standards of the interface bus. Electrical standards include power
supply requirements, methods of supply, the data transfer rate (baud rate), the width of the address, and
the line terminating impedance. Mechanical requirements are the type, size, and the pin assignments of
the connectors. The data transfer protocol determines the procedure of data transfer between the two
systems. A definition of the timing and input/output philosophy (whether the transfer is in synchronous,
asynchronous, or quasi-synchronous mode, and how errors are detected and handled) is an important
factor to be considered.
Data Recording
It is important to provide storage media to cater to large streams of data being produced. Data acquisition
systems use graph paper, paper tapes, magnetic tapes, magnetic floppy disks, hard disks, or any combi-
nation of these as their data recorders. Paper and magnetic tape storage schemes are known as sequential
access storage, whereas disk storage is called direct access storage. Tapes are cost-effective media
compared to disk drives and are still in wide use. In many laboratory situations it will be much more
cost effective to network a number of systems to a single, high-capacity hard drive, which acts as a file
server. This adoption of digital recording provides the ultimate in signal-to-noise ratio, accuracy of signal
waveform, and freedom from tape transport flutter. Data storage capacity, access time, transfer rate, and
error rate are some of the performance indices that are associated with these devices.
Software Aspects
So far the discussion has been mainly on the hardware side of the data acquisition system. The other
most important part is the software system associated with a data acquisition system, which can generally
be divided into two parts: the system software and the user-interface program. Both must be designed
properly in order to achieve the maximum use of the system. The system software is mainly written in
assembly language with many lines of code, whereas the user interface is built using a high-level software
development tool. One main part of the system software is written to handle the input/output (I/O) operations.
The use of assembly language results in fast execution of I/O commands. The I/O software has to
deal with how basic input/output programming tasks, such as interrupt and DMA handling, are done.
The other aspects of the system software perform internal control tasks such as providing trigger
pulses for the ADC and SHA, addressing the input multiplexer, accessing and editing the channel-gain
list, transferring data into the on-board memory, and adding clock/calendar information to the data.
Multitasking software programs are best suited for many data acquisition systems because it
may be necessary to read data from the data acquisition module and display and print it at the same
time. Menu-driven user interfaces are common and have a variety of functions built into them.


Defining Terms
Analog-to-digital converter (ADC): A device that converts analog input voltage signals into digital form.
Common-mode rejection ratio (CMRR): A measure of the quality of an amplifier with differential inputs;
the ratio of the differential gain to the common-mode gain.
Direct memory access (DMA): The process of sending data from an external device into the computer
memory with no involvement of the computer's central processing unit.
Least significant bit (LSB): The 2^0 (lowest-weight) bit in a digital word.
Multiplexer (MUX): A combinational logic device with many input channels and usually just one output.
The function performed by the device is connecting one and only one input channel at a time to
the output. The required input channel is selected by sending the channel address to the MUX.
Power supply unit (PSU): The unit that generates the necessary voltage levels required by a system.
Sample-and-hold amplifier (SHA): A unity gain amplifier with a mode control switch where the input
of the amplifier is connected to a time-varying voltage signal. A trigger pulse at the mode control
switch causes it to read the input at the instance of the trigger and maintain that value until the
next trigger pulse.
Signal-to-noise ratio (SNR): The ratio between the signal power and the noise power at a point in the
signal traveling path.

References
Feucht, D.L. 1990. Handbook of Analog Circuit Design. Academic Press, San Diego.
Fink, D.G. and Christiansen, D., Eds. 1989. Electronic Engineers' Handbook, 3rd ed. McGraw-Hill,
New York.
Frizell, K.W. et al. 1993. Guidelines for PC-Based Data Acquisition Systems for Hydraulic Engineering.
American Society of Civil Engineers.
Holloway, P. 1990. Technology focus interview. Electronic Eng., December.
Rigby, W.H. and Dalby, T. 1994. Computer Interfacing: A Practical Approach to Data Acquisition and
Control. Prentice-Hall, Englewood Cliffs, NJ.
Tatkow, M. and Turner, J. 1990. New techniques for high-speed data acquisition. Electronic Eng.,
September.
