
Module I

Introduction to embedded systems


Whirlwind, a computer designed at MIT in the late 1940s and early 1950s, was also the
first computer designed to support real-time operation and was originally conceived as a
mechanism for controlling an aircraft simulator. Even though it was extremely large physically
compared to today's computers (e.g., it contained over 4,000 vacuum tubes), its complete design
from components to system was attuned to the needs of real-time embedded computing.
A microprocessor is a single-chip CPU. Very large scale integration (VLSI) technology has allowed us to put a complete CPU on a single chip since the 1970s, but those CPUs were very simple. The first microprocessor, the Intel 4004, was designed for an embedded application, namely, a calculator. The calculator was not a general-purpose computer; it merely provided basic arithmetic functions. However, Ted Hoff of Intel realized that a general-purpose computer programmed properly could implement the required function, and that the computer-on-a-chip could then be reprogrammed for use in other products as well.
Eg: automobiles, microwave ovens etc.
A programmable CPU was used rather than a hardwired unit for two reasons: First, it
made the system easier to design and debug; and second, it allowed the possibility of upgrades
and using the CPU for other purposes.
System
A system is a way of working, organizing or doing one or many tasks according to a
fixed plan, program or set of rules. A system is also an arrangement in which all its units
assemble and work together according to the plan or program.
Eg: watch, washing machine.
Embedded system
Definition
An embedded system is a system that has embedded software and computer hardware, which make it a system dedicated to an application or a specific part of an application or product, or a part of a larger system.
A computer is a system that has the following components:
1. A microprocessor
2. A large memory of the following kinds:
   Primary memory (RAM, Random Access Memory, and ROM, Read Only Memory)
   Secondary memory, using which different user programs can be loaded into the primary memory and run.
3. I/O units such as touch screen, modem, fax cum modem etc.
4. Input units such as keyboard, mouse, digitizer, scanner etc.
5. Output units such as an LCD screen, video monitor, printer etc.
6. Networking units such as Ethernet card, front-end processor based server, bus driver etc.
7. An operating system (OS) that has general purpose user and application software in the secondary memory.

An embedded system has three main components:


1. It embeds hardware similar to a computer.
2. It embeds main application software. The application software may concurrently perform
a series of tasks or processes or threads.
3. It embeds a real-time operating system (RTOS) that supervises the application software
running on hardware and organizes access to a resource according to the priorities of
tasks in the system.
An embedded system is designed keeping in view three constraints:
1. Available system memory
2. Available processor speed
3. The need to limit power dissipation when running the system continuously in cycles
of wait for events, run, stop, wake-up, sleep.
Figure: Block diagram of an embedded system, showing the processor with program and data memory, power supply, reset and oscillator circuit, timers, serial communication ports, interrupt controller, parallel ports, input device interfacing/driver circuits, output interfacing/driver circuits, and system application-specific circuits.
Requirements of Embedded Systems


Performance: The speed of the system is often a major consideration both for the
usability of the system and for its ultimate cost. As we have noted, performance may be a
combination of soft performance metrics such as approximate time to perform a user-level
function and hard deadlines by which a particular operation must be completed.
Cost: The target cost or purchase price for the system is almost always a consideration.
Cost typically has two major components: manufacturing cost includes the cost of components
and assembly; nonrecurring engineering (NRE) costs include the personnel and other costs of
designing the system.
Physical size and weight: The physical aspects of the final system can vary greatly
depending upon the application. An industrial control system for an assembly line may be
designed to fit into a standard-size rack with no strict limitations on weight. A handheld device
typically has tight requirements on both size and weight that can ripple through the entire system
design.
Power consumption: Power, of course, is important in battery-powered systems and is
often important in other applications as well. Power can be specified in the requirements stage in
terms of battery life; the customer is unlikely to be able to describe the allowable wattage.
Classification
Three types
1. Small scale embedded system:
Designed with a single 8 or 16 bit microcontroller
Have little hardware and software complexities and board level design.
Battery operated.
When developing embedded software for these, an editor, an assembler and a cross-assembler, and an integrated development environment (IDE) tool specific to the microcontroller or processor used are the main programming tools.
Using C language, programs are compiled into the assembly and executable
codes are appropriately located in the system memory.
2. Medium scale embedded system:
Designed with a single or a few 16 or 32-bit microcontrollers, DSPs or RISCs.
Have both hardware and software complexities
For complex software design, the following programming tools are available,
C/C++/VC++/Java, RTOS, source code engineering tool, simulator, debugger
and an integrated development environment.
3. Sophisticated embedded systems:
Enormous hardware and software complexities; may need several IPs (a standard source solution for synthesizing a higher-level component by configuring an FPGA core or a core of a VLSI circuit may be available as Intellectual Property, called an IP), ASIPs (application-specific instruction set processors), scalable processors or configurable processors and programmable logic arrays.
Constrained by the processing speeds available in their hardware units.
A compiler or retargetable compiler might have to be developed for these (a retargetable compiler is one that configures itself according to the given target configuration in a system).

Applications of Embedded Systems in Consumer Electronics


Consumer electronics devices provide several types of services in different
combinations:
Multimedia: The media may be audio, still images, or video (which includes both motion
pictures and audio). These multimedia objects are generally stored in compressed form and must
be uncompressed to be played (audio playback, video viewing, etc.). A large and growing number
of standards have been developed for multimedia compression: MP3, Dolby Digital(TM), etc. for
audio; JPEG for still images; MPEG-2, MPEG-4, H.264, etc. for video.
Data storage and management: Because people want to select what multimedia objects
they save or play, data storage goes hand-in-hand with multimedia capture and display. Many
devices provide PC-compatible file systems so that data can be shared more easily.
Communications: Communications may be relatively simple, such as a USB interface to
a host computer. The communications link may also be more sophisticated, such as an Ethernet
port or a cellular telephone link.
Functional architecture of a generic consumer electronics device

CELL PHONES
A cell phone performs several very different functions:
It transmits and receives digital data over a radio and may provide analog
voice service as well.
It executes a protocol that manages its relationship to the cellular network.
It provides a basic user interface to the cell phone.
It performs some functions of a PC, such as contact management, multimedia
capture and playback, etc.
Early cell phones transmitted voice using analog methods. Today, analog voice is used
only in low-cost cell phones, primarily in the developing world; the voice signal in most systems
is transmitted digitally. A wireless data link must perform two basic functions: it must modulate
or demodulate the data during transmission or reception; and it must correct errors using error
correcting codes.
A processor in the cell phone sets various radio parameters, such as power level and
frequency. However, the processor does not process the radio frequency signal itself. As low
power, high performance processors become available, we will see more cell phones perform at
least some of the radio frequency processing in programmable processors. This technique is
often called software radio or software defined radio (SDR).
Error correction algorithms detect and correct errors in the raw data stream. Radio
channels are sufficiently noisy that powerful error correction algorithms are necessary to provide
reasonable service. Error correction algorithms, such as Viterbi coding or turbo coding, require
huge amounts of computation. Many handset platforms provide specialized hardware to
implement error correction.
Many cell phone standards transmit compressed audio. The audio compression
algorithms have been optimized to provide adequate speech quality. The handset must compress
the audio stream before sending it to the radio and must decompress the audio stream during
reception.
The network protocol that manages the communication between the cell phone and the
network performs several tasks: it sets up and tears down calls; it manages the hand-off when a
handset moves from one base station to another; it manages the power at which the cell phone
transmits, etc.
The basic user interface for a cell phone is straightforward: a few buttons and a simple
display. Early cell phones used microcontrollers to implement their user interface.

The radio frequency processing is performed in analog circuits. The baseband processing
is handled by a combination of a RISC-style CPU and a DSP. The CPU runs the host operating
system and handles the user interface, controlling the radio, and a variety of other control
functions. The DSP performs signal processing: audio compression and decompression,
multimedia operations, etc. The DSP can perform the signal processing functions at lower power
consumption levels than can the RISC processor. The CPU acts as the master, sending requests
to the DSP.
COMPACT DISCs AND DVDs
Compact discs use optical storage: the data is read off the disc using a laser. Data is
stored in pits on the bottom of a compact disc. A laser beam is reflected or not reflected by the
absence or presence of a pit. The pits are very closely spaced: pits range from 0.8 to 3 µm long
and 0.5 µm wide. The pits are arranged in tracks with 1.6 µm between adjacent tracks. Unlike
magnetic disks, which arrange data in concentric circles, CD data is stored in a spiral.
Data stored on a compact disc

A compact disc mechanism

A sled moves radially across the CD to be positioned at different points in the spiral data.
The sled carries a laser, optics, and a photo detector. The laser illuminates the CD through the
optics. The same optics capture the reflected light and pass it onto the photo detector.
The optics can be focused using some simple electric coils. Laser focus adjusts for
variations in the distance to the CD.
Laser focusing in a CD

An in-focus beam produces a circular spot, while an out-of-focus beam produces an elliptical spot with the beam's major axis indicating the direction of focus. The focus can change relatively quickly depending on how the CD is seated on the spindle, so the focus needs to be continuously adjusted.
The laser pickup is divided into six regions, named A, B, C, D, E and F. The four basic
regions A, B, C and D are used to determine whether the laser is focused. The focus error
signal is (A + C) - (B + D); the magnitude of the signal gives the amount of focus error and the
sign determines the orientation of the elliptical spot's major axis. The sum of the four basic
regions, A + B + C + D, gives the laser level to determine whether a pit is being illuminated. Two
additional detectors, E and F, are used to determine when the laser has gone far off the track.
Tracking error is given by E - F.
CD laser pickup regions.
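The detector arithmetic above is simple enough to show directly. The sketch below, with hypothetical detector-reading variables and made-up values, computes the focus error, laser level, and tracking error exactly as defined in the text; it is an illustration, not any player's actual firmware.

```c
#include <stdio.h>

/* Hypothetical readings from the six pickup regions (A-F). */
struct pickup {
    double a, b, c, d;   /* four central regions used for focus */
    double e, f;         /* outer regions used for tracking     */
};

/* Focus error: (A + C) - (B + D). Sign gives the orientation of the
 * elliptical spot, magnitude gives the amount of defocus.           */
double focus_error(const struct pickup *p)
{
    return (p->a + p->c) - (p->b + p->d);
}

/* Laser level: A + B + C + D, used to decide whether a pit is lit. */
double laser_level(const struct pickup *p)
{
    return p->a + p->b + p->c + p->d;
}

/* Tracking error: E - F, large when the beam drifts off the track. */
double tracking_error(const struct pickup *p)
{
    return p->e - p->f;
}

int main(void)
{
    struct pickup p = { 0.9, 0.7, 0.8, 0.6, 0.55, 0.50 };
    printf("focus error    = %+.2f\n", focus_error(&p));
    printf("laser level    = %.2f\n", laser_level(&p));
    printf("tracking error = %+.2f\n", tracking_error(&p));
    return 0;
}
```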

Control algorithms monitor the level and error signals and determine how to adjust focus,
tracking, and sled signals. These control algorithms are very sophisticated. Each control may
require digital filters with 30 or more coefficients. Several control modes must be programmed,
such as seeking vs. playback.
The sled, focus system, and detector form a servo system. The servo control algorithms
are generally performed on a programmable DSP. The bits on the CD are not encoded directly.
To help with tracking, the data stream must be organized to produce 0-1 transitions at some
minimum interval. An eight-to-fourteen modulation (EFM) encoding is used to ensure a minimum
transition rate. For example, the 8 bits of user data 00000011 are mapped to the 14-bit code
00100100000000. The data are reconstructed from the EFM code using tables.
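The table-driven nature of EFM can be sketched as below. Only the single 8-bit pattern quoted in the text is filled in; a real encoder would populate all 256 entries from the EFM standard, and packing the 14-bit code into a 16-bit integer is just a convenience for this illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Partial EFM lookup table: index = 8-bit user byte, value = 14-bit code
 * stored in the low 14 bits of a uint16_t. Only the example from the
 * text is filled in; the other 255 entries would come from the EFM
 * standard tables. */
static const uint16_t efm_table[256] = {
    [0x03] = 0x0900,   /* 00000011 -> 00100100000000 (binary) */
};

static uint16_t efm_encode(uint8_t byte)
{
    return efm_table[byte];
}

int main(void)
{
    uint16_t code = efm_encode(0x03);

    /* Print the 14-bit channel code, most significant bit first. */
    for (int bit = 13; bit >= 0; bit--)
        putchar((code >> bit) & 1 ? '1' : '0');
    putchar('\n');
    return 0;
}
```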
CD players are very vulnerable to shaking. Early players could be disrupted by
walking on the floor near the player. A jog memory is used to buffer data to maintain playing
during a jog to the drive. The player reads ahead and puts data into the jog memory. During a
jog, the audio output system reads data stored in the jog memory while the drive tries to find the
proper point on the CD to continue reading.
Jog control memories also help reduce power consumption. The drive can read ahead, put
a large block of data into the jog memory, then turn the drive off and play from jog memory.
Because the drive motors consume a considerable amount of power, this strategy saves battery
life. When reading compressed music from data discs, a large part of a song can be put into jog
memory.
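A jog memory behaves like an ordinary ring buffer between the drive (producer) and the audio output path (consumer). The sketch below is a generic illustration of that buffering idea; the buffer size and function names are assumptions, not the data path of any particular player.

```c
#include <stddef.h>
#include <stdint.h>

#define JOG_SIZE 4096          /* assumed buffer size in bytes */

/* Simple single-producer / single-consumer ring buffer. */
struct jog_memory {
    uint8_t data[JOG_SIZE];
    size_t  head;              /* next position the drive writes      */
    size_t  tail;              /* next position the audio path reads  */
    size_t  count;             /* bytes currently buffered            */
};

/* Drive side: read-ahead data is pushed in while the motor runs. */
size_t jog_write(struct jog_memory *j, const uint8_t *src, size_t n)
{
    size_t written = 0;
    while (written < n && j->count < JOG_SIZE) {
        j->data[j->head] = src[written++];
        j->head = (j->head + 1) % JOG_SIZE;
        j->count++;
    }
    return written;            /* may be < n if the buffer is full */
}

/* Audio side: playback keeps draining even while the sled re-seeks. */
size_t jog_read(struct jog_memory *j, uint8_t *dst, size_t n)
{
    size_t nread = 0;
    while (nread < n && j->count > 0) {
        dst[nread++] = j->data[j->tail];
        j->tail = (j->tail + 1) % JOG_SIZE;
        j->count--;
    }
    return nread;              /* may be < n if the buffer runs dry */
}
```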

Hardware architecture of a CD player

AUDIO PLAYERS
Audio players are often called MP3 players after the popular audio data format. The
earliest portable MP3 players were based on compact disc mechanisms. Modern MP3 players use
either flash memory or disk drives to store music. An MP3 player performs three basic functions:
audio storage, audio decompression, and user interface.
Architecture of audio processor for CD/MP3 players.

The audio controller includes two processors. The 32-bit RISC processor is used to
perform system control and audio decoding. The 16-bit DSP is used to perform audio effects
such as equalization. The memory controller can be interfaced to several different types of
memory: flash memory can be used for data or code storage; DRAM can be used as a buffer to
handle temporary disruptions of the CD data stream. The audio interface unit puts out audio in
formats that can be used by D/A converters. General-purpose I/O pins can be used to decode
buttons, run displays, etc.

DIGITAL STILL CAMERAS


The digital still camera bears some resemblance to the film camera but is fundamentally
different in many respects. The digital still camera not only captures images, it also performs a
substantial amount of image processing that formerly was done by photofinishers.
A digital still camera must perform many functions:
It must determine the proper exposure for the photo.
It must display a preview of the picture for framing.
It must capture the image from the image sensor.
It must transform the image into usable form.
It must convert the image into a usable format, such as JPEG, and store the
image in a file system.
A typical hardware architecture for a digital still camera is shown in Figure. Most
cameras use two processors. The controller sequences operations on the camera and performs
operations like file system management. The DSP concentrates on image processing. The DSP
may be either a programmable processor or a set of hardwired accelerators. Accelerators are
often used to minimize power consumption.
Architecture of a digital still camera

The picture taking process can be divided into three main phases: composition, capture,
and storage. We can better understand the variety of functions that must be performed by the
camera through a sequence diagram. Figure below shows a sequence diagram for taking a picture
using a point-and-shoot digital still camera.
When the camera is turned on, it must start to display the image on the camera's screen.
That imagery comes from the camera's image sensor. To provide a reasonable image, it must
adjust the image exposure. The camera mechanism provides two basic exposure controls: shutter
speed and aperture. The camera also displays what is seen through the lens on the camera's
display. In general, the display has fewer pixels than does the image sensor; the image processor
must generate a smaller version of the image.
When the user depresses the shutter button, a number of steps occur. Before the image is
captured, the final exposure must be determined. Exposure is computed by analyzing the image
characteristics; histograms of the distribution of pixel brightness are often used to determine
exposure. The camera must also determine white balance. Different sources of light, such as
sunlight and incandescent lamps, provide light of different colors. The eye naturally compensates
for the color of incident light; the camera must perform comparable processing to avoid giving
the picture a color cast. White balance algorithms generally use color histograms to determine
the range of colors and re-weigh colors to reduce casts.
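Camera vendors use proprietary white-balance algorithms; a common textbook baseline is the "gray world" assumption, which re-weighs channels so that their averages match. The sketch below illustrates that idea on an interleaved 8-bit RGB buffer; it is only a stand-in for the histogram-based methods mentioned above.

```c
#include <stddef.h>
#include <stdint.h>

/* Gray-world white balance on an interleaved 8-bit RGB image.
 * Red and blue are scaled so that their means match the green mean.
 * This is a simple illustrative baseline, not a vendor algorithm.   */
void gray_world_balance(uint8_t *rgb, size_t pixels)
{
    double sum[3] = { 0.0, 0.0, 0.0 };

    for (size_t i = 0; i < pixels; i++)
        for (int c = 0; c < 3; c++)
            sum[c] += rgb[3 * i + c];

    if (sum[0] == 0.0 || sum[2] == 0.0)
        return;                         /* avoid division by zero */

    double gain_r = sum[1] / sum[0];    /* green mean / red mean  */
    double gain_b = sum[1] / sum[2];    /* green mean / blue mean */

    for (size_t i = 0; i < pixels; i++) {
        double r = rgb[3 * i + 0] * gain_r;
        double b = rgb[3 * i + 2] * gain_b;
        rgb[3 * i + 0] = (uint8_t)(r > 255.0 ? 255.0 : r);
        rgb[3 * i + 2] = (uint8_t)(b > 255.0 ? 255.0 : b);
    }
}
```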
The image captured from the image sensor is not directly usable, even after exposure and
white balance. Virtually all still cameras use a single image sensor to capture a color image.
Color is captured using microscopic color filters, each the size of a pixel, over the image sensor.
Since each pixel can capture only one color, the color filters must be arranged in a pattern across
the image sensor. A commonly used pattern is the Bayer pattern shown in Figure below. This
pattern uses two greens for every red and blue pixel since the human eye is most sensitive to
green. The camera must interpolate colors so that every pixel has red, green, and blue values.
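Demosaicing algorithms in production cameras are elaborate, but the core idea of interpolation can be shown with a simple bilinear sketch: a pixel's missing color channels are estimated by averaging the neighbors that did capture them. The helper below fills in one missing channel at an interior pixel; the layout assumed (green/red on even rows, blue/green on odd rows) is just one common arrangement of the Bayer mosaic.

```c
#include <stddef.h>
#include <stdint.h>

/* One common Bayer layout (assumed here):
 *   even row: G R G R ...
 *   odd  row: B G B G ...
 * bayer_color() tells which single color a raw sensor pixel captured. */
enum { RED, GREEN, BLUE };

static int bayer_color(size_t row, size_t col)
{
    if (row % 2 == 0)
        return (col % 2 == 0) ? GREEN : RED;
    else
        return (col % 2 == 0) ? BLUE : GREEN;
}

/* Bilinear estimate of a missing channel at an interior pixel:
 * average all raw samples of the wanted color in the 3x3 neighborhood. */
static uint8_t interpolate(const uint8_t *raw, size_t width,
                           size_t row, size_t col, int want)
{
    unsigned sum = 0, n = 0;

    for (int dr = -1; dr <= 1; dr++)
        for (int dc = -1; dc <= 1; dc++) {
            size_t r = row + dr, c = col + dc;
            if (bayer_color(r, c) == want) {
                sum += raw[r * width + c];
                n++;
            }
        }
    return n ? (uint8_t)(sum / n) : 0;
}
```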
After this image processing is complete, the image must be compressed and saved.
Images are often compressed in JPEG format, but other formats, such as GIF, may also be used.
The EXIF standard defines a file format for data interchange. Standard compressed image
formats such as JPEG are components of an EXIF image file; the EXIF file may also contain a
thumbnail image for preview, metadata about the picture such as when it was taken, etc.

Sequence diagram

The Bayer pattern for color image pixels.

Image compression need not be performed strictly in real time. However, many cameras
allow users to take a burst of images, in which case the images must be compressed quickly to
make room in the image processing pipeline for the next image.
Buffering is very important in digital still cameras. Image processing often takes longer
than capturing an image. Users often want to take a burst of several pictures, for example during
sports events. A buffer memory is used to capture the image from the sensor and store it until it
can be processed by the DSP.
The display is often connected to the DSP rather than the system bus. Because the display
is of lower resolution than the image sensor, the images from the image sensor must be reduced
in resolution. Many still cameras use displays originally designed for camcorders, so the DSP
may also need to clip the image to accommodate the differing aspect ratios of the display and
image sensor.

Smart Card
A smart card is one of the most widely used embedded systems today. It is used as a credit-debit bank card, ATM card, e-purse or e-wallet card, identification card, medical card etc. The security aspect is of paramount importance when the card is used for financial and banking related transactions. It is an embedded system on a card.
Embedded hardware
Components are,
Microcontroller or ASIP
RAM for temporary variables and stack
One time programmable ROM for application codes and RTOS codes for
scheduling the tasks.
Flash for storing user data, user address, user identification codes, card
number and expiry date.
Timer and interrupt controller

A 16 MHz carrier-frequency generating circuit and an Amplitude Shift Keying (ASK) modulator.
Interfacing circuit for IOs
Charge pump for delivering power to the antenna for transmission and for
system circuits. The charge pump stores charge from received RF at the card
antenna in its vicinity.

Embedded hardware components in a contactless smart card

The details of the basic hardware units are as follows,


1. Microcontroller: Most cards use an 8-bit CPU. A recent introduction in cards is the 32-bit RISC CPU. A smart card CPU should have special features, e.g., a security lock. The lock is for certain sections of the memory. A protection bit at the microcontroller may protect 1 kB or more of data from modification and access by an external source or by instructions outside the memory. Once the protection bit is placed at the maskable ROM in the microcontroller, the instructions or data within that part of the memory are accessible only from instructions in that part (internally) and not accessible from external instructions or instructions outside that part.
The CPU may disable access by blocking the write-cycle placement of the data bits on the buses, for instruction and data protection at the physical memory, after certain phases of card initialization and before issuing the card to the user. Another way of protecting is as follows: the CPU may access memory using physical addresses, which are different from the logical addresses used in the program.
2. ROM: size is 8 or 64 kB for usual or advanced cryptographic features in the card. The
ROM stores the following:
Fabrication key, which is a unique key for each card.
Personalization key, which is inserted after the chip is tested on a printed circuit board.
RTOS codes
Application codes
A utilization lock to prevent modification of the two PINs and to prevent access to the OS and application instructions.

3. EEPROM or Flash is scalable. This means that only that part of the memory required for
a particular operation will unlock for use. It stores the following:
PIN (Personal Identification Number), the allotment and writing of which is by the authorizer (e.g., a bank), and its use is possible by the latter only by using the personalization and fabrication keys. It is for identifying the card user in future transactions.
An unblocking PIN for use by the authorizer. Through this key, the card circuit identifies the authorizer before unblocking.
Access conditions for various hierarchically arranged data files.
Card user data.
Data that the application generates after issue, e.g., for an e-purse, the details of the previous transactions and the current balance.
It also stores the application's non-volatile data.
Invalidation lock sent by the host after the expiry period, card misuse or a user account closing request. It locks the data file of the master or elementary individual file or both.
4. RAM stores the temporary variables and stack during card operations by running the OS
and the application.
5. Chip power supply voltage is extracted by a charge pump circuit. The pump stores charge from the signal received from the host, analogous to what a mouse does in a computer, and delivers a regulated voltage to the card chip, memory and I/O system.
6. The I/O system of the chip and the host interact through an asynchronous serial UART at 9.6 k, 106 k or 115.2 k baud.
7. Wireless communication for I/O interaction is by radiation through the antenna coils for contactless interactions. The card and the host interact through a card modem and a host modem. The application protocol data unit (APDU) is the standard for communication between the card and the host computer; a sketch of its structure is shown after this list. Modulation with a 10% modulation-index amplitude-shift-keyed (ASK) carrier at 13.56 MHz is used for contactless communication at data rates of ~1 Mbps.
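Card-host exchanges use the APDU format defined in ISO/IEC 7816-4. The structures below sketch a short-form command and response APDU; the field names follow the standard, while the data-buffer size chosen here is an assumption for illustration only.

```c
#include <stdint.h>

#define APDU_MAX_DATA 255          /* short-form APDU data limit */

/* Command APDU sent by the host (ISO/IEC 7816-4, short form). */
struct command_apdu {
    uint8_t cla;                   /* instruction class                 */
    uint8_t ins;                   /* instruction code                  */
    uint8_t p1, p2;                /* instruction parameters            */
    uint8_t lc;                    /* number of data bytes that follow  */
    uint8_t data[APDU_MAX_DATA];   /* command data                      */
    uint8_t le;                    /* expected length of response data  */
};

/* Response APDU returned by the card. */
struct response_apdu {
    uint8_t data[APDU_MAX_DATA];   /* response data                     */
    uint8_t len;                   /* number of valid data bytes        */
    uint8_t sw1, sw2;              /* status words, e.g. 0x90 0x00 = OK */
};
```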
Embedded Software
A smart card embeds the following software components:
1. Boot-up, initialisation and OS programs
2. Smart card secure file system
3. Connection establishment and termination
4. Communication with host
5. Cryptography algorithm
6. Host authentication
7. Card authentication
8. Saving additional parameters or recent new data sent by the host.

The special features needed are as follows:


1. Protection environment. It means software should be stored in the protected part of the
ROM.
2. Restricted run-time environment.
3. The OS, and every method, class and run-time library, should be scalable.
4. Code-size generated should be optimum. The system needs should not exceed 64kB
memory.
5. Limited use of data types: multidimensional arrays, long 64-bit integers and floating point; and very limited use of error handlers, exceptions, signals, serialization, debugging and profiling. Serialization is the process of converting an object into a data stream for transferring it over a network or from one process to another.
6. A three-layered file system for the data, as sketched in the structures after this list. The first layer is the master file, which stores all file headers. A header holds the file status, access conditions and the file lock. The second layer is a dedicated file that holds a file grouping and the headers of the immediate successor elementary files of the group. The third layer is the elementary file, which holds the file header and its file data.
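The three-layer arrangement can be pictured with the data structures below. The field names and sizes are illustrative assumptions; actual card operating systems define their own layouts.

```c
#include <stdint.h>

/* Every file begins with a header: status, access conditions and lock. */
struct file_header {
    uint16_t file_id;        /* identifier of this file               */
    uint8_t  status;         /* e.g. activated / deactivated          */
    uint8_t  access_cond;    /* access conditions for read/update     */
    uint8_t  locked;         /* file lock flag                        */
};

/* Third layer: elementary file = header + its own data.               */
struct elementary_file {
    struct file_header hdr;
    uint16_t           length;
    uint8_t            data[128];          /* assumed maximum size     */
};

/* Second layer: dedicated file groups related elementary files and
 * holds the headers of its immediate successors.                      */
struct dedicated_file {
    struct file_header hdr;
    uint8_t            num_children;
    struct file_header child_headers[8];   /* assumed group size       */
};

/* First layer: master file stores the headers of all files on card.   */
struct master_file {
    struct file_header hdr;
    uint8_t            num_files;
    struct file_header headers[16];        /* assumed capacity         */
};
```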
Handheld computers
Figure: Block diagram of a handheld computer, showing the processor with DRAM and flash memory, LCD touch screen, keyboard, USB, IrDA, RS232 and Ethernet interfaces, audio/video codecs, microphone, speaker and video camera.

The original Personal Digital Assistants (PDAs) were used mainly to store data. With
low-cost 32-bit processors, the computing power on the handheld has increased multifold.
A handheld device runs a powerful OS such as Windows CE, Palm OS, Symbian OS etc. and
provides numerous services, including e-mail, a calendar and an address book.
In addition to computing power, it offers a lot of communication power as well. Using
interfaces such as USB, IrDA (the Infrared Data Association's standards-based communication
using infrared links), the RS232 serial communication interface, or Ethernet, it can connect to a
desktop PC or communicate through a built-in modem directly with an internet service provider.
The handheld computer can receive input via a full-fledged keyboard or via a form of
handwriting recognition such as that used on Palm OS devices.
The output can be an LCD display with a touch screen. The memory capacities (dynamic RAM
and flash memory) are also very high.
To provide audio and video capability to the handheld computer, peripherals such as a
microphone, a speaker and a video camera are provided. The signals from the microphone and
the video camera are converted into digital format and vice versa using the codec, which is a
combination of coder and decoder. The audio codec converts the audio signal into digital format
to store it into the computer and also converts the stored digital speech data into audio format to
be played through the speaker. Similarly, the video codec does coding and decoding of the video
signal.
Handheld computers also provide a software utility for data synchronization. Using this
utility, we can download a database onto the handheld, collect information from the field, and
upload the updated database into the corporate database.
Biomedical System
Hospitals are full of embedded systems, including X-ray control units, EEG and ECG units,
and other equipment used for diagnostic testing such as colonoscopy and endoscopy. PC-based
ECG and EEG equipment belongs to a different type of embedded system. These systems
use PC add-on cards, which take the ECG signals and process them. The PC monitor is used for
display and the PC's secondary storage is used to store the ECG records. The PC add-on cards
consist of a processor and the associated circuitry for processing the signals. The card sits in a
slot provided on the motherboard of the PC. The slot may be based on the PCI architecture.
Biometric systems for fingerprint and face recognition are gaining wide use in the security
field. These are complex systems with high memory requirements. The input fingerprint must be
processed and compared with the available database using pattern recognition algorithms, which
require intensive processing.
Biometric systems use a Digital Signal Processor (DSP) for signal processing, such as
filtering and edge enhancement of the image, and a general purpose processor for implementing
pattern matching algorithms.
Control System
A control system seeks to make a physical system's output track a desired reference
input, by setting the physical system's inputs.
Eg: an automobile cruise controller, which seeks to make a car's speed track a desired speed
by setting the car's throttle and brake inputs. Another eg: a thermostat controller, which seeks to
force a building's temperature to a desired temperature by turning on the heater or air conditioner
and adjusting the fan speed.
Tracking in a control system is shown below

The first graph shows a good tracking system; the second graph shows a system that does not track well.
Open-Loop and Closed-Loop Systems
Control systems can be classified into two types: open-loop control systems and closed-loop control systems.
A control system minimally consists of several parts:
1. The plant, also known as the process, is the physical system to be controlled. Eg:
automobile
2. The output is the particular physical system aspect that we are interested in
controlling. Eg: speed of an automobile.
3. The reference input is the desired value that we want to see for the output. Eg: The
desired speed set by an automobile's driver.
4. The actuator is the device that we use to control the input to the plant. Eg: a stepper
motor controlling a car's throttle position.
5. The controller is the system that we use to compute the input to the plant such that we
achieve the desired output from the plant.
6. A disturbance is an additional undesirable input to the plant imposed by the
environment that may cause the plant output to differ from what we would have
expected based on the plant input. Eg: wind and road grade.
A control system with only these components is referred to as an open-loop system or feedforward control
system. The controller reads the reference input and then computes a setting for the actuator.
The actuator modifies the input to the plant, which, along with the disturbances, results some time later
in a change in the plant output. In an open-loop system the controller does not measure how well
the plant output matches the reference input.

Many control systems possess some additional parts,


1. A sensor measures the plant output.
2. An error detector determines the difference between the plant output and the
reference input.
A control system with these parts is known as a closed-loop system or feedback control
system.
It monitors the error between the plant output and the reference input.
The controller adjusts the plant input in response to this error.
The goal is to minimize the tracking error given the physical constraints of the
system; a minimal sketch of such a feedback loop is given below.
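The sketch below shows the closed loop in code: a proportional controller reads the sensor, computes the error against the reference input, and sets the actuator. The plant model, gain and time step are illustrative assumptions; real controllers usually add integral and derivative terms and run under an RTOS timer.

```c
#include <stdio.h>

/* Tiny first-order plant model standing in for the real physical
 * system; purely for illustration.                                  */
static double plant_output = 0.0;

static double read_sensor(void) { return plant_output; }

static void set_actuator(double u, double dt)
{
    /* Plant responds slowly to its input, plus a constant disturbance. */
    const double disturbance = -0.5;
    plant_output += dt * (u + disturbance - plant_output);
}

int main(void)
{
    const double reference = 10.0;   /* desired output (e.g. speed)  */
    const double kp        = 2.0;    /* proportional gain (assumed)  */
    const double dt        = 0.1;    /* control period in seconds    */

    for (int step = 0; step < 50; step++) {
        double output = read_sensor();             /* sensor            */
        double error  = reference - output;        /* error detector    */
        double u      = kp * error;                /* controller        */
        set_actuator(u, dt);                       /* actuator -> plant */
        printf("t=%4.1f  output=%6.3f  error=%6.3f\n",
               step * dt, output, error);
    }
    return 0;
}
```

Running this shows the output rising toward the reference but settling with a small steady-state error, which is the usual behavior of a proportional-only controller.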
Control Objectives

The objective of control system design is to make a physical system behave in a useful
fashion by causing its output to track a desired reference input even in the presence of
measurement noise, model error and disturbances.
1. Stability: the main idea of stability is that all variables in the control system remain
bounded. Preferably, the error variables, such as desired output minus plant output,
would converge to zero. Stability is of primary importance, since without stability, all
of the other objectives are immaterial.
2. Performance: Assuming stability, performance describes how well the output tracks a
change in the reference input.
a) Rise time, Tr, is the time required for the response to change from 10% to
90% of the distance from the initial value to the final value, for the first time
b) Peak time, Tp, is the time required to reach the first peak of the response.
c) Overshoot, Mp, is the percentage amount by which the peak of the response
exceeds the final value.
d) Settling time, Ts, is the time required for the system to settle down to within
1% of its final value. (A sketch computing these metrics from a sampled step response follows this list.)
3. Disturbance rejection: Disturbances are undesired effects on the system behavior
caused by the environment. A designer cannot eliminate disturbances but can reduce
their impact on system behavior.
4. Robustness: the plant model is a simplification of a physical system, and is never
perfect. Robustness requires that the stability and performance of the controlled
system should not be significantly affected by the presence of model errors.
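Given a sampled step response, the performance metrics above can be computed directly. The sketch below assumes the response starts at zero, the samples are uniformly spaced, and the final value is estimated from the last sample.

```c
#include <stddef.h>
#include <stdio.h>

/* Step-response metrics from uniformly sampled data y[0..n-1] taken
 * every dt seconds. Assumes the response starts at 0 and y[n-1] is a
 * reasonable estimate of the final value.                            */
void step_metrics(const double *y, size_t n, double dt)
{
    double yf = y[n - 1];                 /* final value              */
    double t10 = -1, t90 = -1, tp = 0, peak = y[0], ts = 0;

    for (size_t i = 0; i < n; i++) {
        if (t10 < 0 && y[i] >= 0.1 * yf) t10 = i * dt;   /* 10% point */
        if (t90 < 0 && y[i] >= 0.9 * yf) t90 = i * dt;   /* 90% point */
        if (y[i] > peak) { peak = y[i]; tp = i * dt; }   /* first peak */
        /* last time the response is outside the +/-1% band of yf */
        if (y[i] > 1.01 * yf || y[i] < 0.99 * yf) ts = (i + 1) * dt;
    }

    printf("rise time     Tr = %.3f s\n", t90 - t10);
    printf("peak time     Tp = %.3f s\n", tp);
    printf("overshoot     Mp = %.1f %%\n", 100.0 * (peak - yf) / yf);
    printf("settling time Ts = %.3f s\n", ts);
}
```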
Communication Devices
Modems
The dial-up modems normally used to access the Internet are embedded systems with a
DSP inside. Using the DSP and associated software, the modem establishes the connection using
standard protocols. As line conditions vary, a modem can go into fallback mode: the speed of
communication decreases as the line conditions get worse. As the digital signal is modulated, a lot
of signal processing is involved; therefore DSPs are used.
Internet access speeds have increased multifold with advances in embedded systems. The
xDSL (Digital Subscriber Line) family provides access speeds ranging from 64 Kbps to 8 Mbps.
ADSL (Asymmetric Digital Subscriber Line) modems are capable of download speeds up to
8 Mbps and upload speeds of up to 1 Mbps.
Multimedia over IP networks
We have different networks with different services like the telephone network (Public
Switched Telephone Network or PSTN) for making voice calls and sending fax messages; the
internet for data services such as e-mail, file transfer and web services; and cellular mobile
networks for making voice calls while on the move. But now, the internet is used for voice, fax
and video communication as well.
An IP(Internet protocol) based wide area network (WAN) can be the backbone network
supporting data, voice and video communication services.

The IP phone is the terminal that will be at the subscriber's premises. An IP phone is a
powerful embedded system. It has an IP address of its own. It receives voice in the form of
packets from the IP network and reassembles the packets to convert them into voice. It converts
the voice into packet format and sends the packets over the IP network.

Unlike the normal telephone, which transmits analog voice signals, the IP phone sends the
voice after performing voice compression; therefore it needs a voice compression module to
reduce the size of the data to be transmitted.
To take care of all the processing involved in carrying out these operations, the IP phone will
have a 16-bit or 32-bit processor, a real-time operating system, a TCP/IP protocol stack and
other special protocols.
The IP-PSTN gateway performs the conversion of protocols between the IP cloud and the
PSTN cloud. The important difference is that PSTN uses circuit switching, i.e., a circuit is
established between the calling party and the called party and this circuit is disconnected only
when the subscriber hangs up. This results in wastage of bandwidth. In an IP network, the data is
transmitted in packet format, and the same circuit can carry packets of voice from different
subscribers. The gateway's functions are to translate the call processing protocol and to code the
speech in the required format.
Protocol Converter
The gateway or the routers on the internet are protocol converters, which convert the
LAN protocols to WAN protocols and vice versa. The embedded system sends the data in
serial/parallel format, and this system should be monitored/ controlled from any node on a LAN.
The data received from the embedded system has a standard physical interface (such as RS232 or
RS422), but uses a proprietary protocol for data transfer.

The commands to be issued from the LAN node are specific to the application; therefore
you need to define your own protocols. In such a case a protocol converter is required, which
performs the translation between the serial packets and Ethernet packets. The packets received
from the embedded system are converted into Ethernet packets and broadcast over the LAN.

Similar protocol converters are required in most telecommunication equipment. Suppose
we would like to send data from a LAN node over a satellite communication network. The
satellite may accept only serial communication data. In such a case we need to use a protocol
converter to convert the Ethernet packets into serial communication packets.
The diagram below shows the protocol encapsulation; a code sketch of the same layering follows this list.
The application data (such as an e-mail or a file to be transferred) is divided
into packets, and each packet is sent to the TCP (Transmission Control Protocol) layer.
The TCP layer adds a header to the application data packet and a TCP
segment is formed. The TCP segment is sent to the IP layer software.
The IP layer adds its own header to the TCP segment and sends it to the LLC
(Logical Link Control) layer of the LAN.
The LLC layer adds the LLC header and the LLC protocol data unit (PDU) is
sent to the MAC (Media Access Control) layer.
The MAC layer adds its own header and sends the MAC frame over the LAN.
When the frame is received by another node, the software for each layer strips
off the header and sends the rest of the packet to the upper layer. Finally, the
application data reaches the application running on the receiving node.
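The encapsulation sequence can be visualized as nested headers, as sketched below. The header fields and values shown are simplified stand-ins (real TCP, IP, LLC and MAC headers carry many more fields); the point is only that each layer prepends its own header to the payload it receives.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simplified headers -- real protocol headers have many more fields. */
struct tcp_hdr { uint16_t src_port, dst_port; uint32_t seq; };
struct ip_hdr  { uint32_t src_addr, dst_addr; uint8_t  protocol; };
struct llc_hdr { uint8_t  dsap, ssap, control; };
struct mac_hdr { uint8_t  dst[6], src[6]; uint16_t type; };

/* Prepend one header to a payload, building the lower-layer PDU. */
static size_t encapsulate(uint8_t *out, const void *hdr, size_t hlen,
                          const uint8_t *payload, size_t plen)
{
    memcpy(out, hdr, hlen);
    memcpy(out + hlen, payload, plen);
    return hlen + plen;
}

/* Assumes the application payload is small enough to fit in 1500 bytes
 * together with the added headers.                                     */
size_t build_frame(uint8_t *frame, const uint8_t *app_data, size_t len)
{
    uint8_t seg[1500], pkt[1500], pdu[1500];
    struct tcp_hdr tcp = { 5000, 80, 1 };          /* placeholder values */
    struct ip_hdr  ip  = { 0x0A000001, 0x0A000002, 6 };
    struct llc_hdr llc = { 0xAA, 0xAA, 0x03 };
    struct mac_hdr mac = { {0}, {0}, 0x0800 };

    size_t n = encapsulate(seg, &tcp, sizeof tcp, app_data, len); /* TCP segment */
    n = encapsulate(pkt, &ip,  sizeof ip,  seg, n);               /* IP packet   */
    n = encapsulate(pdu, &llc, sizeof llc, pkt, n);               /* LLC PDU     */
    n = encapsulate(frame, &mac, sizeof mac, pdu, n);             /* MAC frame   */
    return n;
}
```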

Audio codec
When voice is transmitted over the telephone network, the voice is coded at the rate of
64 Kbps using a technique known as Pulse Code Modulation (PCM): the speech is sampled at
8 kHz with 8 bits per sample, giving 8,000 x 8 = 64 Kbps. In radio systems (such as
mobile communication systems, satellite communication systems, etc.) speech is compressed to
save bandwidth.
At the transmitting end, audio signals are compressed to achieve data rates in the range of
2.4-32 Kbps; at the receiving end, the audio signals are expanded to retrieve the original
signal. These codecs (coder + decoder) use DSPs extensively and are embedded into cell phones
and the infrastructure equipment of mobile and fixed communication systems. Eg: MP3 player.
An important application of audio coding techniques is the computer generation of speech.

Figure: Speech synthesis model. A pulse generator (for voiced sounds) or a noise generator (for unvoiced sounds) excites a DSP-based vocal tract filter, followed by a low-pass filter, to produce the speech output.
In the above fig. the vocal tract is modeled as a filter and the filter is excited by a pulse
generator or a noise generator. When the filter is excited by the pulse generator, voiced sounds
such as vowels are produced. When the filter is excited by noise generator, unvoiced sounds are
produced. By varying the filter characteristics, various speech sounds can be produced, and
combining these sounds results in speech. Eg: talking toys and talking cameras use a small
embedded system or an Application Specific Integrated Circuit (ASIC) to perform this function.
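A toy version of this source-filter model is sketched below: a pulse train or white noise excites a single resonator (a two-pole IIR filter standing in for the vocal tract). Real vocoders use higher-order, time-varying filters; the sample rate, pitch and coefficients here are arbitrary illustrative values.

```c
#include <stdio.h>
#include <stdlib.h>

#define SAMPLE_RATE 8000
#define N_SAMPLES   8000            /* one second of output */

int main(void)
{
    /* Two-pole resonator standing in for the vocal tract filter:
     * y[n] = x[n] + a1*y[n-1] + a2*y[n-2]; coefficients are arbitrary
     * but chosen so the filter is stable.                            */
    const double a1 = 1.6, a2 = -0.81;
    double y1 = 0.0, y2 = 0.0;

    int voiced = 1;                 /* 1: pulse excitation, 0: noise  */
    int pitch_period = 80;          /* 8000/80 = 100 Hz pitch         */

    for (int n = 0; n < N_SAMPLES; n++) {
        double x;
        if (voiced)
            x = (n % pitch_period == 0) ? 1.0 : 0.0;     /* pulse train */
        else
            x = (double)rand() / RAND_MAX - 0.5;         /* white noise */

        double y = x + a1 * y1 + a2 * y2;                /* vocal tract */
        y2 = y1;
        y1 = y;

        printf("%f\n", y);          /* stream of speech-like samples */
    }
    return 0;
}
```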
Interactive Voice Response (IVR) systems
In developing countries where computer penetration is low, telephone access to
computerized information would be of great value. IVR systems provide this function. Eg: in
many countries we can use any telephone to retrieve our bank account balance from the IVR
system installed in our bank. A database in the bank stores this information in a computer,
which can be accessed through a conventional telephone.
An IVR system is an embedded system connected to the computer holding the bank
database. The IVR system also has a telephone interface and is connected in parallel to a
telephone line. After the bank assigns a specific number to the IVR system, any subscriber can
call this number to get information about his or her bank account.

The block diagram of the IVR system is given above. We can implement it as a stand-alone
system connected to the computer through a parallel port or USB port, or it can be implemented
in a PC with add-on cards. The various blocks of the IVR system are the PSTN interface, the
ADC and DAC, S-to-P and P-to-S serial converters, FIFOs, DTMF decoders and interface
circuitry for a microphone and speaker.
The PSTN interface receives the telephone calls and answers them. Filters
limit the audio signal to the desired frequency band (up to 4 kHz).

The ADC converts the input signal to digital format to send it for further
processing.
The DAC takes the speech files stored in the IVR system and converts them
into analog signals for transmission over the telephone line.
The ADC outputs the digitized voice data in serial format, which is converted
into parallel format using the S-to-P (serial-to-parallel) converter.
Similarly, data in parallel format can be converted into serial format using the
P-to-S converter.
FIFOs are buffers that temporarily hold the speech data.
The digits entered by the subscriber are in the form of DTMF (Dual Tone Multi
Frequency) signals; a sketch of DTMF tone detection follows this list.
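DTMF digits are detected by measuring the energy at the eight DTMF row and column frequencies; the Goertzel algorithm is the usual choice because it evaluates a single frequency bin cheaply. The sketch below computes the energy of one tone in a block of samples; a full decoder would run it for all eight frequencies and pick the strongest row and column.

```c
#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Goertzel algorithm: energy of one target frequency in a sample block.
 * samples: PCM block, n: block length, fs: sampling rate in Hz,
 * f: target frequency in Hz (697, 770, 852, 941 Hz for DTMF rows;
 *    1209, 1336, 1477, 1633 Hz for DTMF columns).                      */
double goertzel_energy(const double *samples, size_t n, double fs, double f)
{
    double coeff = 2.0 * cos(2.0 * M_PI * f / fs);
    double s_prev = 0.0, s_prev2 = 0.0;

    for (size_t i = 0; i < n; i++) {
        double s = samples[i] + coeff * s_prev - s_prev2;
        s_prev2 = s_prev;
        s_prev  = s;
    }
    /* squared magnitude of the target frequency bin */
    return s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2;
}
```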

Global Positioning System (GPS) receivers


The GPS system uses a set of 24 NAVSTAR satellites; the DOD (Department of
Defense) provides GPS services for any moving or fixed object anywhere on the earth, free
of cost. Anyone with a GPS receiver can receive the satellite signals and process them to find the
position parameters of the GPS receiver's location.
A GPS receiver is a powerful embedded system that uses a DSP to process the satellite
signals. From the data received from the satellites, the GPS receiver computes its longitude,
latitude, altitude, velocity and time.
A GPS receiver can simultaneously receive signals from up to six satellites. These signals
are processed to obtain the position parameters. A GPS receiver has an RS232 serial
communication interface or a USB interface from which the position data is available. This data
is processed by a processor-based system; and using mapping software, the location of the GPS
receiver can be mapped onto a digital map.
Due to their low cost, GPS receivers have been integrated into a number of devices
including wristwatches, pagers, mobile phones, PDAs etc. GPS receivers are used extensively
by emergency services.
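Many GPS receivers report position over the serial port as NMEA 0183 sentences; whether a given module does so should be checked against its datasheet. Assuming a $GPGGA sentence, a host-side sketch of extracting latitude, longitude and altitude might look like this (the parsing is simplified and skips empty fields):

```c
#include <math.h>
#include <stdlib.h>
#include <string.h>

/* Convert NMEA ddmm.mmmm (or dddmm.mmmm) into decimal degrees. */
static double nmea_to_degrees(double raw)
{
    double degrees = floor(raw / 100.0);
    double minutes = raw - degrees * 100.0;
    return degrees + minutes / 60.0;
}

/* Extract latitude, longitude and altitude from a $GPGGA sentence.
 * Simplification: strtok() skips empty fields, so this sketch assumes
 * a complete fix with all fields present. Returns 0 on success.       */
int parse_gpgga(const char *sentence, double *lat, double *lon, double *alt)
{
    char buf[128];
    char *fields[16];
    int nf = 0;

    if (strncmp(sentence, "$GPGGA", 6) != 0)
        return -1;

    strncpy(buf, sentence, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';

    for (char *p = strtok(buf, ","); p != NULL && nf < 16; p = strtok(NULL, ","))
        fields[nf++] = p;
    if (nf < 10)
        return -1;

    /* Field order: 1=time, 2=lat, 3=N/S, 4=lon, 5=E/W, ..., 9=altitude. */
    *lat = nmea_to_degrees(atof(fields[2]));
    if (fields[3][0] == 'S') *lat = -*lat;
    *lon = nmea_to_degrees(atof(fields[4]));
    if (fields[5][0] == 'W') *lon = -*lon;
    *alt = atof(fields[9]);
    return 0;
}
```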

Figure: Block diagram of a GPS receiver, showing the pre-amplifier, multi-channel satellite receiver, processor running the application (mapping) software, and LCD display.

Embedded SYSTEM-ON-CHIP(SoC)
SoC is a system on a VLSI chip that has all the necessary analog and digital circuits,
processors and software. SoC may be embedded with the following components:
1. Embedded processor GPP or ASIP core,
2. Single purpose processing cores or multiple processors,
3. A network bus protocol core,
4. An encryption function unit,
5. Discrete cosine transforms for signal processing application,
6. Memories,
7. Multiple standard source solutions, called IP (Intellectual Property) cores,
8. Programmable logic device and FPGA (Field Programmable Gate Array) cores,
9. Other logic and analog units.
An exemplary application of such an embedded SoC is the mobile phone. Single purpose
processors, ASIPs and IPs on an SoC are configured to process encoding and deciphering,
dialing, modulating, demodulating, interfacing the keypad and multiple line LCD matrix displays
or touch screen, storing data input and recalling data from memory.

1. Application Specific IC (ASIC)


ASICs are designed using VLSI design tools, with the processor (GPP or ASIP) and
analog circuits embedded into the design. The designing is done using an Electronic Design
Automation (EDA) tool. For the design of an ASIC, a hardware description language (HDL) is used.
2. IP core
On a VLSI chip, there may be integration of high-level components. These
components possess gate-level sophistication in circuits above that of the counter, register,
multiplier, floating point operations unit and ALU. A standard source solution for
synthesizing a higher-level component by configuring an FPGA core or a core of VLSI
circuit may be available as Intellectual Property (IP).
An IP may provide hardwired implementable design of a transform, an
encryption algorithm or a deciphering algorithm
An IP may provide a design for adaptive filtering of a signal.
An IP may provide a design for implementing Hyper Text Transfer Protocol
(HTTP) or File Transfer Protocol (FTP) or Bluetooth protocol to transmit a
web page or a file on the internet.
An IP may be designed for a USB or PCI bus controller.
3. FPGA core with single or Multiple Processors
It consists of a large number of programmable gates on a VLSI chip. There is a set of
gates in each FPGA cell, called a macro cell. Each cell has several inputs and outputs. All cells
interconnect like an array (matrix). Each interconnection is programmable through the
associated RAM in an FPGA programming tool. An FPGA core can be used with a single or
multiple processors.
