
INTRODUCTION TO ELECTRONICS

Voltage - 1) the electric potential difference between two points in a circuit that will cause
an electric current to flow between these two points; 2) the electromotive pressure that
forces electric current to flow through a complete circuit; voltage is measured in volts
Current - 1) the rate of flow of electric charge through a conductor; 2) the net transfer of
electric charge per unit time; current is measured in amperes (1 ampere = 1 coulomb per
second)
Power - 1) the rate at which work is done in a circuit; 2) the amount of energy (in joules)
that a device delivers or consumes divided by the time (in seconds) that the device is
operating; 3) the product of the voltage across an electrical path and the current through it;
power is measured in watts
Circuit - 1) the complete path or closed loop through which electric current flows from the
power source to the load(s) (a load is a component or device that consumes power) and
then back to the source; 2) an electrical interconnection of power sources and electrical
loads that performs a certain electrical or electronic function
Resistance (R) - 1) the ability of an electronic component or an electrical path to resist the
flow of current; 2) the ratio of the voltage (V) across a conductor to the current (I)
flowing through it, i.e., R = V/I
Capacitance (C) - the ratio of the charge (Q) stored in a capacitor to the voltage (V) across
the capacitor, i.e., C = Q/V
Inductance - the amount of electromotive force (e.m.f.) or voltage induced within a circuit
by a change in current flowing through the circuit
Reactance - the ability of a capacitor or inductor to resist the flow of alternating current
Impedance - the ability of a circuit to resist the flow of alternating current
Parallel Connection (refer to Figure 1a) - a circuit arrangement wherein the electronic
components are connected to the power source independently of each other; in a parallel
connection, each component has the same voltage across it (but each may have a different
amount of current flowing through it)
Series Connection (refer to Figure 1b) - a circuit arrangement wherein the electronic
components are connected to each other along a single electrical path; in a series
connection, there is only one possible current path, so each component experiences the
same amount of current (but each may have a different voltage across it)

Figure 1a. Resistors in Parallel

Figure 1b. Resistors in Series

Open Circuit - a circuit that has a cut, break, or interruption, preventing current from
flowing through it; an open circuit may be due to a broken or disconnected wire, or a
damaged component that blocks current flow
Short Circuit - an unwanted or abnormal low-resistance path that causes current to bypass
a component or load; the low resistance of a short circuit can cause large amounts of
current to flow, which can result in greater damage (such as an open circuit due to
conductor burn-out)
Ground - 1) the common point of current return in a circuit; 2) the point(s) in a circuit that
are at zero volts; 3) the connection of an electrical circuit to earth; in a circuit that has a
single power supply, the ground is generally the point to which the negative terminal is
connected
Passive Components are electronic components that may be characterized as follows:
1) they do not require a power supply in order to operate;
2) they do not amplify or switch signals;
3) their behavior or basic electrical characteristics do not change with the application of
signal; and
4) they are often used for simple and basic applications such as limiting current, storing
electrical charge, filtering, suppressing current surges, timing, and tuning.
Examples of passive components are resistors, capacitors, and inductors.
Components that are not passive are referred to as 'active components'.
Resistor Value Color Coding
The color bands on a resistor indicate the resistance value of a resistor.

As shown in Figure 1, a common resistor usually has 4 color bands, the first two of which
indicate the first and second digits of its resistance value, while the third band indicates the
number of zeros that follow the first two digits. The fourth band indicates the error or
tolerance of the resistor. For example, the top resistor in Figure 1 has a resistance value of
1 kilo-ohm (10 ohms x 100) +/- 5%. Some resistors have 5 color bands to indicate their
resistance values more precisely, an example of which is the bottom resistor of Figure 1.

Figure 1. The color bands of a resistor indicate its resistance value (see Table 1)
Table 1. Resistor Color Codes (Refer to Figure 1)
Color      1st Digit   2nd Digit   3rd Digit   Multiplier   Tolerance
Black          0           0           0           1            -
Brown          1           1           1           10           -
Red            2           2           2           100          -
Orange         3           3           3           1K           -
Yellow         4           4           4           10K          -
Green          5           5           5           100K         -
Blue           6           6           6           1M           -
Violet         7           7           7           10M          -
Gray           8           8           8           100M         -
White          9           9           9           1G           -
Gold           -           -           -           0.1          5%
Silver         -           -           -           0.01         10%
No Color       -           -           -           -            20%
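To make the 4-band decoding procedure concrete, the short sketch below (Python) maps the standard color codes of Table 1 to a resistance value and tolerance. The function name and the example bands are illustrative, not part of the original text.

# Decode a 4-band resistor color code (digit, digit, multiplier, tolerance),
# following the standard values listed in Table 1.
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "gray": 8, "white": 9}
MULTIPLIERS = {**{color: 10 ** d for color, d in DIGITS.items()},
               "gold": 0.1, "silver": 0.01}
TOLERANCES = {"gold": 0.05, "silver": 0.10, "none": 0.20}

def decode_4_band(band1, band2, band3, band4="none"):
    """Return (resistance in ohms, tolerance as a fraction)."""
    value = (10 * DIGITS[band1] + DIGITS[band2]) * MULTIPLIERS[band3]
    return value, TOLERANCES[band4]

# Example from the text: brown-black-red-gold -> 10 x 100 = 1000 ohms, +/- 5%
print(decode_4_band("brown", "black", "red", "gold"))   # (1000, 0.05)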

Crystal Oscillators
An oscillator is a device that produces a recurring waveform or oscillating signal, such as a
voltage that continuously fluctuates (oscillates) between two levels. Oscillators are
generally used as timing or clocking devices in electronic circuits.
A crystal oscillator is simply an oscillator that's made of a piezoelectric crystal, a material
that generates a voltage when subjected to mechanical stress, or generates a mechanical
force when voltage is applied to it.
If electrodes are plated on opposite faces of a piezo crystal such as quartz and a potential is
applied across these electrodes, forces will be exerted on the bound charges within the
crystal, making the crystal bend. Removal of the applied voltage will make it return to its
original shape, which may generate a voltage in the process. Assuming that the device is
properly mounted, such conditions will allow the crystal to behave as an electromechanical
system that vibrates as long as it is subjected to proper electrical excitation.

Figure 1. Photo of crystal oscillators (left) and the symbol for a crystal oscillator (right)
A crystal oscillator behaves like a circuit composed of an inductor, capacitor, and resistor,
resonating at a precise frequency, i.e., its electrical oscillation attains maximum amplitude
at this resonant frequency. The resonant frequency and the quality factor Q of a crystal
oscillator depend on the crystal dimensions, the orientation of the surfaces with respect to
its axes, and how the device is mounted.
The quality factor Q of a resonant circuit is a measure of: 1) how fast the response of the
circuit falls off as the excitation frequency moves away from the resonant frequency; and
2) how large the amplitude of the response is at the resonant frequency. A high Q means
that the circuit resonates at a higher amplitude and its response falls off more quickly as
the oscillation frequency moves away from resonance.
Crystal oscillators can generate frequencies ranging from a few kHz to a few MHz, and Q's
ranging from several thousand to several hundred thousand. These exceptionally high Q
values, together with the fact that quartz is extremely stable with respect to time and
temperature, are the reason for the frequency stability of crystal oscillators.
Crystal oscillators are usually labelled as 'XTAL' on schematic diagrams.
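As a rough numerical illustration of the resonance and Q described above, the sketch below models the crystal as a simple series RLC circuit. The motional component values are purely illustrative assumptions, not data for any particular crystal.

import math

# Series-RLC model of a crystal: resonant frequency f0 = 1/(2*pi*sqrt(L*C))
# and quality factor Q = (1/R)*sqrt(L/C). The values below are only an example.
L = 0.25        # motional inductance, henries (assumed)
C = 1.0e-14     # motional capacitance, farads (assumed)
R = 50.0        # series resistance, ohms (assumed)

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
Q = math.sqrt(L / C) / R

print(f"resonant frequency ~ {f0 / 1e6:.3f} MHz, Q ~ {Q:,.0f}")
# prints roughly 3.18 MHz with Q around 100,000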
A capacitor is a component used for storing electrical charge, and usually consists of two
plates or sheets of conductor placed very close to (but not touching) each other. As the
capacitor charges up, one of these conductors becomes positively charged while the other
one becomes negatively charged.
Capacitance (C) is defined as the ratio of the charge (Q) stored in a capacitor to the voltage
(V) across the capacitor. Mathematically, therefore, C = Q/V. The unit of measurement for
capacitance is the 'farad', F, which is defined as coulomb/volt (coulomb and volt are the
units of measurement for charge and voltage, respectively). The higher the capacitance of a
capacitor, the greater is the charge it can store for a given voltage across it.

Figure 1. Photo of
Capacitors
Capacitors may be connected to each other to form new values of capacitance. They may
be connected in series or in parallel, as shown in Figure 2.

Figure 2. Capacitors in parallel (left) and in series (right)


When N capacitances are connected in series and a voltage V is applied across them, V =
V1 + V2 + ... + VN = Q1/C1 + Q2/C2 + ... + QN/CN, where Qi and Vi are the
corresponding charge in and voltage across every individual capacitance Ci, respectively.
However, Q1 = Q2 = ... = QN, since each capacitor in the series experiences the same
current or flow of charge. From earlier equations, Q/Ceff = Q/C1 + Q/C2 + ... + Q/CN.
This equation may be simplified as follows: 1/Ceff = 1/C1 + 1/C2 + ... + 1/CN.
Thus, the reciprocal of the effective capacitance of N capacitors connected in series is
equal to the sum of the reciprocals of their individual capacitances, i.e., 1/Ceff = 1/C1 +
1/C2 + ... + 1/CN.
When two or more capacitors are connected in parallel, the voltages across each of them
are equal. However, the corresponding charge accumulated in each of them differs in
accordance with the equation Q = CV. Thus, for a given circuit consisting of N capacitors
connected in parallel and excited by a voltage V, V = Q1/C1 = Q2/C2 = ... = QN/CN,
where Qi and Vi are the corresponding charge in and voltage across every individual
capacitance Ci, respectively.
The total amount of charge Q accumulated by all the capacitors is equal to the sum of the
individual charges accumulated by the individual capacitances, or Q = Q1 + Q2 + ... + QN,
which may be rewritten as CeffV = C1V + C2V + ... + CNV, since the voltage across all
the capacitors is V. This equation may be simplified as follows: Ceff = C1 + C2 + ... + CN.
Thus, the effective capacitance of N capacitors connected in parallel is just the sum of their
individual capacitances, i.e., Ceff = C1 + C2 + ... + CN.
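Both results are easy to verify numerically. The short sketch below (Python) computes the effective capacitance of a series string and of a parallel bank; the capacitor values are illustrative assumptions, not values from the text.

# Effective capacitance of capacitors in parallel and in series,
# using Ceff = C1 + C2 + ... and 1/Ceff = 1/C1 + 1/C2 + ...
def parallel_capacitance(caps):
    return sum(caps)

def series_capacitance(caps):
    return 1.0 / sum(1.0 / c for c in caps)

caps = [10e-6, 22e-6, 47e-6]        # example values in farads (10 uF, 22 uF, 47 uF)
print(parallel_capacitance(caps))   # 7.9e-05 F, i.e. 79 uF
print(series_capacitance(caps))     # about 6.0e-06 F, i.e. 6.0 uF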
Capacitance Equations
Table 1. Capacitance Equations
Equivalent Capacitance of Capacitors in Parallel - CT = C1 + C2 + ... + CN, where CT is the
total capacitance and C1, C2, ..., CN are the N capacitors in parallel
Equivalent Capacitance of Capacitors in Series - 1/CT = 1/C1 + 1/C2 + ... + 1/CN, where CT is
the total capacitance and C1, C2, ..., CN are the N capacitors in series
Equivalent Capacitance of Two Series Capacitors - CT = C1C2 / (C1 + C2), where CT is the
equivalent of C1 and C2 in series
Charge Storage in a Capacitor - Q = CV; C = Q/V; V = Q/C, where Q = charge (coulomb),
C = capacitance (F), V = voltage (V)
Energy Storage in a Capacitor - E = CV^2 / 2; C = 2E / V^2; V = sqrt(2E / C), where E = energy (J),
C = capacitance (F), V = voltage (V)
Constant Charging/Constant Discharging - V = It / C; I = CV / t; C = It / V; t = CV / I, where
C = capacitance (F), V = voltage (V), I = current (A), t = time (s)
Instantaneous Charging/Instantaneous Discharging - i = C dv/dt; v = (1/C) ∫i dt, where
C = capacitance (F), v = voltage (V), i = current (A), dv = change in voltage (V),
dt = time interval (s)
Capacitive Reactance - Xc = 1 / (2πfC), where Xc = capacitive reactance, C = capacitance,
f = frequency
Parallel Plate Capacitance - C = 8.855(N - 1)kA / d, where C = capacitance (pF), N = number of
plates, k = relative dielectric constant, A = plate area (sq. m), d = dielectric thickness (m)


Resistance (R) is defined as the ratio of the voltage (V) across a conductor to the current
(I) flowing through it. Mathematically, therefore, R = V/I, which is also known as Ohm's
Law. The unit of measurement for resistance is the 'ohm', Ω, which is defined as
volt/ampere (volt and ampere are the units of measurement for voltage and current,
respectively). A component fabricated to exhibit nothing but a certain resistance (ideally)
is known as a resistor.
The higher the resistance, the greater is the voltage required to attain a given amount of
current flow. Resistance is therefore, as its name implies, a measure of the ability of a
conductor to resist the flow of current.

Figure 1. Photo of
Resistors
Resistance is an extrinsic property, i.e., its value is affected by characteristics that are not
inherent to the conductor, such as the conductor's dimensions. The inherent characteristic
of a material that defines its ability to resist the flow of current is known as its 'resistivity',
ρ. The resistance R is related to the resistivity by the equation R = ρL/A, where L is
the length of the conductor and A is its cross-sectional area.
Resistors may be connected to each other to form new values of resistance. They may be
connected in series or in parallel, as shown in Figure 2.

Figure 2. Resistors in parallel (left) and in series (right)


When two or more resistances are connected in series, the currents through each of them
are equal. However, the corresponding voltage developed across each of them differs in
accordance with Ohm's Law. Thus, for a given circuit consisting of N resistors connected
in series and excited by a voltage V, I = V1/R1 = V2/R2 = ... = VN/RN, where I is the
current flowing through the circuit and Vi is the corresponding voltage developed across
every individual resistance Ri, such that V = V1 + V2 +... + VN.
The effective resistance of such a circuit is Reff = V/I = (V1 + V2 + ... + VN) / I = (IR1 +
IR2 + ... + IRN)/I. This equation may be simplified as follows: Reff = R1 + R2 + ... +
RN.
Thus, the effective resistance of N resistances connected in series is just the sum of the
individual resistances, i.e., Reff = R1 + R2 + ... + RN.
When two or more resistances are connected in parallel, the voltages across each of them
are equal. However, the corresponding current flowing through each of them differs in
accordance with Ohm's Law. Thus, for a given circuit consisting of N resistors connected
in parallel and excited by a voltage V, V = I1R1 = I2R2 = ... = INRN, where Ii is the
corresponding current flowing through every individual resistance Ri.
In such a circuit, the current I flowing through the entire circuit is the sum of the individual
currents flowing through each corresponding resistor, or I = I1 + I2 + ... + IN, which may
be rewritten as V/Reff = V/R1 + V/R2 + ... + V/RN. This equation may be simplified as
follows: 1/Reff = 1/R1 + 1/R2 + ... + 1/RN.
Thus, the reciprocal of the effective resistance of N resistors connected in parallel is equal
to the sum of the reciprocals of the individual resistances. i.e., 1/Reff = 1/R1 + 1/R2 + ... +
1/RN.
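These two rules can be checked with a short numerical sketch (Python), together with Ohm's Law for the resulting current. The supply voltage and resistor values below are illustrative assumptions, not values from the text.

# Effective resistance: Reff = R1 + R2 + ... (series),
# 1/Reff = 1/R1 + 1/R2 + ... (parallel); then I = V/R (Ohm's Law).
def series_resistance(resistors):
    return sum(resistors)

def parallel_resistance(resistors):
    return 1.0 / sum(1.0 / r for r in resistors)

V = 12.0                        # supply voltage in volts (assumed)
rs = [1000.0, 2200.0, 4700.0]   # example resistor values in ohms

print(series_resistance(rs))        # 7900.0 ohms
print(parallel_resistance(rs))      # about 600 ohms
print(V / series_resistance(rs))    # series-string current, about 1.52 mA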
Self-Inductance (L) is defined as the amount of electromotive force (e.m.f.) or voltage
induced within a circuit by a change in current flowing through the circuit. A component
fabricated to exhibit a certain inductance is known as an inductor, which usually consists of
a coil of wire through which current is made to flow.
When the current flows through an inductor, it produces a magnetic flux that passes
through the loops of the coil. Flux linkage (NΦ) is the product of this magnetic flux (Φ)
and the number of loops (N) in the coil. The flux linkage changes as the current through the
coil changes, and it is the change in flux linkage (NΦ) that causes an e.m.f. (V) to be induced
in an inductor according to the following equation: V = -N dΦ/dt. Self-inductance (L) is
also defined as the flux linkage NΦ per unit current I, so N dΦ/dt = L dI/dt. Thus, V = -L dI/dt.

Figure 1. Photo of
Inductors
The unit of measurement for inductance is the 'henry', H, which is defined as volt-second
per ampere, which is also equal to ohm-second (volt and ampere are the units of
measurement for voltage and current, respectively, while ohm is the unit of measure for
resistance). The higher the inductance, the greater is the voltage induced for a given
change in current flow. Inductance is a measure of the ability of a circuit to resist a change
in the flow of current through it.
Inductors may be connected to each other to form new values of inductance. They may be
connected in series or in parallel, as shown in Figure 2.

Figure 2. Inductors in parallel (left) and in series (right)
When two or more inductors are connected in series, the currents through each of them are
equal. However, the corresponding e.m.f. developed across each of them when the current
changes differs in accordance with the equation V = -L dI/dt. The total e.m.f. (Vtotal)
produced by a series of inductors experiencing a change in current is equal to the sum of
the individual e.m.f.'s produced by each inductor.
Thus, for a given circuit consisting of X inductors connected in series and through which
the current changes at a rate of dI/dt, Vtotal = V1 + V2 + ... + VX = -L1 dI/dt + -L2 dI/dt
+ ... + -LX dI/dt, where dI/dt is the rate at which the current through the circuit changes
and Vi is the corresponding e.m.f. developed across every individual inductance Li. If
Leff is the effective inductance of the circuit, then -Leff dI/dt = -L1 dI/dt + -L2 dI/dt + ... +
-LX dI/dt, or Leff = L1 + L2 + ... + LX.
Thus, the effective self-inductance Leff of X inductors connected in series is just the sum
of their individual self-inductances, i.e., Leff = L1 + L2 + ... + LX.
When two or more inductors are connected in parallel, the voltages across each of them are
equal. However, the current through each of them differs according to the equation I =
(1/L)∫v dt. The total current Itotal flowing through the circuit is equal to the sum of the
individual currents flowing through each inductor. Thus, for a given circuit consisting of X
inductors connected in parallel, Itotal = (1/Leff)∫v dt = (1/L1)∫v dt + (1/L2)∫v dt + ... +
(1/LX)∫v dt, where Leff is the effective inductance of the entire circuit. This equation may be
simplified as follows: 1/Leff = 1/L1 + 1/L2 + ... + 1/LX.
Thus, the reciprocal of the effective self-inductance Leff of X inductors connected in
parallel is equal to the sum of the reciprocals of their individual self-inductances. i.e.,
1/Leff = 1/L1 + 1/L2 + ... + 1/LX
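The same kind of numerical check works for inductors. The sketch below assumes no mutual coupling between the coils; the inductor values and the current ramp rate are illustrative assumptions.

# Effective inductance (ignoring mutual coupling between the coils):
# Leff = L1 + L2 + ... in series, 1/Leff = 1/L1 + 1/L2 + ... in parallel.
def series_inductance(inductors):
    return sum(inductors)

def parallel_inductance(inductors):
    return 1.0 / sum(1.0 / x for x in inductors)

ls = [1e-3, 2.2e-3]             # example inductors: 1 mH and 2.2 mH
print(series_inductance(ls))    # 3.2e-03 H
print(parallel_inductance(ls))  # about 6.9e-04 H

# Induced e.m.f. for a given rate of current change, V = -L dI/dt
L, dI_dt = 1e-3, 500.0          # 1 mH coil, current ramping at 500 A/s (assumed)
print(-L * dI_dt)               # -0.5 V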
Inductance Equations
Table 1. Inductance Equations
Equivalent Inductance of Inductors in Series - LT = L1 + L2 + ... + LN, where LT is the total
inductance and L1, L2, ..., LN are the N inductors in series
Equivalent Inductance of Inductors in Parallel - 1/LT = 1/L1 + 1/L2 + ... + 1/LN, where LT is
the total inductance and L1, L2, ..., LN are the N inductors in parallel
Equivalent Inductance of Two Parallel Inductors - LT = L1L2 / (L1 + L2), where LT is the
equivalent of L1 and L2 in parallel
Energy Storage in an Inductor - E = LI^2 / 2; L = 2E / I^2; I = sqrt(2E / L), where E = energy (J),
L = inductance (H), I = current (A)
Constant Charging/Constant Discharging - V = LI / t; I = Vt / L; L = Vt / I; t = LI / V, where
L = inductance (H), V = voltage (V), I = current (A), t = time (s)
Instantaneous Charging/Instantaneous Discharging - v = L di/dt; i = (1/L) ∫v dt, where
L = inductance (H), v = voltage (V), i = current (A), di = change in current (A),
dt = time interval (s)
Inductive Reactance - XL = 2πfL, where XL = inductive reactance, L = inductance, f = frequency
Mutual Inductance - LT = L1 + L2 + 2M (aiding fields); LT = L1 + L2 - 2M (opposing fields),
where LT = total inductance (H), L1, L2 = individual component self-inductances (H),
M = mutual inductance (H)
Coupling Coefficient - k = M / sqrt(L1L2), where k = coupling coefficient, L1, L2 = component
inductances (H), M = mutual inductance (H)
Flux Linkage in a Coil - L = NΦ / i, where L = inductance (H), N = number of turns in the coil,
Φ = magnetic flux (Wb), i = instantaneous current (A)

A thermistor is a device that has a resistance that changes when the temperature changes.
The term 'thermistor' is a combination of the words 'thermal' and 'resistor'.
There are two types of thermistor: 1) the positive temperature coefficient (PTC) thermistor
(also known as posistor) and 2) the negative temperature coefficient (NTC) thermistor.
The basic linear relationship between the resistance and the temperature of a thermistor is
governed by the following equation: ΔR = kΔT, where ΔR is the change in resistance, ΔT is
the change in temperature, and k is the first-order temperature coefficient of resistance.

A PTC thermistor has a positive k while an NTC thermistor has a negative k. Thus, the
resistance of a PTC thermistor increases whenever the temperature increases, while the
resistance of an NTC thermistor decreases as the temperature increases.
Not all thermistors exhibit a linear relationship between resistance and temperature.
Some thermistors are non-linear, i.e., they exhibit a different amount of change in
resistance for each degree of change in temperature.
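For the first-order (linear) model above, a short numerical sketch may help; the coefficients and temperature change used below are illustrative assumptions, not data for any real device.

# First-order thermistor model: delta_R = k * delta_T.
# A PTC device has k > 0, an NTC device has k < 0.
def resistance_change(k, delta_T):
    return k * delta_T

k_ptc = +12.0    # ohms per degree C (assumed PTC coefficient)
k_ntc = -45.0    # ohms per degree C (assumed NTC coefficient)
delta_T = 10.0   # temperature rise in degrees C

print(resistance_change(k_ptc, delta_T))   # +120.0 ohms (resistance rises)
print(resistance_change(k_ntc, delta_T))   # -450.0 ohms (resistance falls)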

Figure 1. Photo of thermistors (left) and the symbol for a thermistor (right)
PTC thermistors are usually manufactured by doping a polycrystalline ceramic with
semiconductor materials. PTC thermistors can also be made by embedding a piece of
plastic with carbon grains. NTC thermistors, on the other hand, are usually fabricated from
oxides of manganese, nickel, cobalt, and copper. NTC thermistors may also be formed by
crystallizing semiconductors such as silicon and germanium.
Thermistors have been employed in a variety of applications that include current limiters,
temperature sensors, fault protection systems, and heat regulators.

Active Components are electronic components that can change their basic electrical
characteristics in a powered electrical circuit, i.e., a change in their input signal can result
in a change in their electrical behavior. Most active components today are in the form of
semiconductors.
Active components have the ability to rectify, switch, or amplify signals. Aside from the
input signal, most active components require a power supply in order to perform their
assigned functions. They are often used in dynamic applications such as rectification,
switching, amplification, modulation, etc.
Examples of active components are diodes, bipolar transistors, field effect transistors, and
thyristors.
Components that are not active are referred to as 'passive components'.
Block Diagram of the 555 Timer IC

Figure 1. Internal Block Diagram of the 555 Timer IC


Figure 1 shows the block diagram of the internal circuit of the 555 timer IC. This diagram
shows how a very versatile IC like the 555 can also be very simple internally. The
excellent design of the 555 timer has made it one of the most widely-used and long-lasting
IC's ever.
The 555 circuit consists of just a handful of main components: two comparators, a flip-flop, a discharge path, an output stage, and a resistor network. The timing functions of the
555 depend on how the inputs of the comparators are configured and how the discharge
path is used. To get a better understanding of how this block diagram works, please refer
to our description of how a 555 IC operates internally.
The p-n junction is the most basic building block of semiconductor electronics. It consists
of a p-type material in perfect contact with an n-type material. The area within the vicinity
of the junction is known as the depletion region, because it is depleted of mobile carriers
(electrons and holes). This is because the electrons from the n-type material have crossed
the junction and diffused into the other side (p-type), recombining with holes in that side.
On the other hand, the holes from the p-type material have diffused to the n-type material,
recombining with electrons.
Because of this diffusion process, holes not covered by electrons are left in the n-type
material, while electrons not covered by holes are left in the p-type material. Known as
uncovered charges, these result in an over-all negative charge in the p-type material and an
over-all positive charge in the n-type material.
This separation of charges develops a potential across the depletion region, preventing
further diffusion of carriers across the junction. This potential, known as the potential
barrier, is about 0.6 to 0.7 V in a typical silicon p-n junction. A voltage greater than this
potential barrier has to be applied across the p-n junction in order to make current flow
through the junction. This characteristic of the p-n junction is the basis for the operation
of a device known as the junction diode.

Figure 1. The p-n junction. Note the depletion region at the junction where only immobile uncovered charges (ions) exist
A diode is a two-terminal electronic device consisting of a single p-n junction. This p-n
junction is usually created on a single block of silicon by doping the block with donor and
acceptor dopants at opposite ends. A diode is a rectifier, allowing current to pass in one
direction but not in the opposite direction.
When the anode (p-type side) of the diode is connected to the positive terminal of a
battery, the diode is said to be in forward bias, allowing current to pass through it. The
diode is said to be in reverse bias if its cathode (n-type side) is the one connected to the
positive terminal of the battery. A diode doesn't conduct current in reverse bias.
Figure 1 shows a photo of a diode (left) and the circuit symbol for a diode (right). The
cathode of a diode is usually indicated by a white band, as shown in the diode photo. The
diode symbol consists of a triangle that indicates the direction of the flow of current when
the diode is forward-biased, and a line perpendicular to this direction. This line indicates
the cathode in the diode symbol.
A diode only becomes forward-biased when the potential at the anode is greater than the
potential of the cathode by 0.7 V, the potential barrier. Under this condition, the potential
barrier is effectively 'overcome' by the applied voltage, allowing the carriers of the diode
to move across the junction. This means that the electrons from the n-type side can now
go to the p-type side in the same way that the holes in the p-type side can now go to the n-type side.
Figure 1. Photo of an ordinary diode (left) and the symbol for a diode (right)
The current through the diode increases exponentially as the forward-bias voltage across
the diode is increased. Thus, the increase in the current flowing through a diode is very
abrupt once the diode starts to conduct. In physical terms, increasing the forward-bias
voltage injects more electrons into the n-type side of the diode. These electrons
immediately cross the junction in the absence of a potential barrier. Once these reach the
p-type material, they are pulled back to the positive terminal of the battery again. The
holes in the p-type side also move in the same manner under forward bias condition,
although in the opposite direction as the electrons. This continuous flow of charges
through the diode will go on as long as the diode is in forward bias.
When a diode is put under reverse bias, the holes of the p-type side are pulled toward the
negative terminal of the battery while the electrons in the n-type side are pulled toward the
positive terminal of the battery. In effect, the mobile charges are pulled away from the
junction in opposite directions, inhibiting the flow of charges through the diode. This is
also essentially widening the potential barrier of the diode, making it more difficult for the
carriers to move across the junction.
In reality, however, a very small amount of current still flows through a reverse-biased
diode. This current, known as reverse saturation current, is due to thermal generation of
holes and electrons near the junction of the diode. This is therefore dependent only on
temperature and not on the potential barrier of the diode.
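The exponential forward characteristic and the reverse saturation current described above are commonly captured by the Shockley diode equation, I = Is(exp(V/(n*Vt)) - 1). The sketch below is a minimal illustration of that standard model; the saturation current, ideality factor, and thermal voltage values are assumptions, not figures from this text.

import math

def diode_current(v, i_s=1e-12, n=1.0, v_t=0.02585):
    """Shockley diode equation: I = Is * (exp(V / (n*Vt)) - 1).
    i_s = reverse saturation current (A), n = ideality factor,
    v_t = thermal voltage (about 25.85 mV at room temperature)."""
    return i_s * (math.exp(v / (n * v_t)) - 1.0)

for v in (0.3, 0.5, 0.6, 0.7):
    print(f"V = {v:.1f} V -> I = {diode_current(v):.3e} A")
# The forward current grows by roughly a decade for every ~60 mV of extra bias,
# which is why conduction appears to "switch on" abruptly near 0.6-0.7 V.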
A diode is a two-terminal electronic device consisting of a single p-n junction that
conducts current in one direction and blocks current in the other direction. The p-side of a
diode is known as its anode while its n-side is known as its cathode.
An ideal diode doesn't conduct any current at all if its cathode is more positive than its
anode (or, by convention, the voltage across it is negative). On the other hand, an ideal
diode conducts with no resistance at all if the anode is more positive than its cathode (or
the voltage across the diode is positive). A diode whose anode is more positive than its
cathode is also said to be 'forward-biased', while one whose cathode is more positive than
its anode is 'reverse-biased'. The voltage-current curve of an ideal diode would therefore
look like what's shown in Figure 1.

Figure 1. The Voltage-Current Curve of an Ideal Diode


Of course, in the real world, there is no such thing as an ideal diode. A real diode would
only start to conduct when the voltage across it exceeds a value referred to as its cut-in
voltage (also known as 'breakpoint', 'offset', 'threshold', or 'forward' voltage). When the
diode voltage is below the cut-in voltage, there is almost no current flowing through a
diode. Once the diode voltage exceeds the cut-in voltage, the current flowing through the
diode increases sharply with the voltage. The typical cut-in voltage for a silicon diode is
about 0.6 V while a germanium diode has a typical cut-in voltage of about 0.2 V.
A real diode also exhibits some resistance when it is conducting. In fact, the 'on' resistance
of a diode is not a fixed value - it varies roughly inversely with the current. A real diode
also exhibits a certain amount of current through it when it is reverse-biased. This current
is known as its reverse-bias current, or simply reverse current. It is also referred to as
'leakage' current.
If the reverse-bias voltage applied to a diode becomes large enough, the diode junction
begins to break down, allowing the diode to conduct even in reverse-bias. This
phenomenon occurs at what is known as the reverse breakdown voltage, which is very
much larger than the forward cut-in voltage of the diode. For a silicon diode, for instance,
the reverse breakdown voltage would typically be around 50 V. Thus, the voltage-current
curve of a real diode is as shown in Figure 2.

Figure 2. The Voltage-Current Curve of a Real Diode (not drawn to scale)


A light-emitting diode (LED) is a special type of diode that emits light when it is
conducting. Just like an ordinary diode, a light-emitting diode also has a p-n junction that
allows it to conduct current much more easily in one direction than the other. As in
ordinary diodes, it conducts current only when it is forward-biased, i.e., its p-side is more
positive than its n-side by about 0.7 V. The p-side of an LED is known as an anode while
its n-side is known as a cathode.
In a semiconductor device, the charge carriers that comprise the flow of current are
electrons and holes. When an electron meets a hole during device operation, they
annihilate each other, since a hole is basically just the absence of an electron. The electron
then goes into a lower-energy state, releasing its 'excess' energy first before doing so. In
ordinary diodes, the energy released is not visible. In LED's, however, the energy released
is in the form of visible optical emissions. This is why an LED emits light during
conduction.
LED's are often used in digital indicators and signs, in decorative applications, or even for
illumination purposes. Figure 1 shows a photo of an LED (left), a photo of a car's tail light
consisting of numerous LED's (center), and the circuit symbol for an LED (right). The
LED's circuit symbol is just the symbol for a regular diode combined with arrows
signifying the emission of photons.

Figure 1. Photo of a light-emitting diode (left); a car tail light consisting of LED's (center); and the circuit symbol for an LED (right)
The wavelength of the light emitted by an LED, and therefore its color, are determined by
the band gap energy of the semiconductor material used in forming the p-n junction. The
materials used for LED's are chosen to have band-gap energies that correspond to near-infrared, visible, or near-ultraviolet light, to make them emit light during operation. Table
1 shows some semiconductor materials used for LED's and their corresponding emission
color.
Table 1. Some Semiconductors Used for LED's and their Colors
Semiconductor                                  Symbol    Color
aluminum gallium arsenide                      AlGaAs    Red, Infrared
aluminum gallium phosphide                     AlGaP     Green
aluminum gallium indium phosphide              AlGaInP   Orange, Yellow, Green
gallium arsenide phosphide                     GaAsP     Red, Orange, Yellow
gallium phosphide                              GaP       Red, Yellow, Green
gallium nitride                                GaN       Green, Blue
gallium nitride w/ AlGaN quantum barrier       GaN       White
indium gallium nitride                         InGaN     near-UV, Blue
silicon carbide as substrate                   SiC       Blue
sapphire as substrate                          Al2O3     Blue
zinc selenide                                  ZnSe      Blue
diamond                                        C         UV
aluminum nitride                               AlN       UV
aluminum gallium nitride                       AlGaN     UV

LED's are often fabricated on an n-type substrate, although p-type substrates are also used
for the same purpose. Substrates that are transparent to the emitted light and backed by a
reflective layer increase the efficiency of the LED. The microchip of an LED is
encapsulated in a tough, solid plastic lens. The refractive index of the LED package must
be compatible with the semiconductor used; otherwise, the light emitted gets reflected
back into the semiconductor where it is absorbed and dissipated as heat.
Being a diode, an LED requires correct polarity (it must be forward-biased) in order to
emit light. LED's must not be subjected to large currents, since they are easily destroyed
by electrical overstress. A resistor is often connected in series with an LED to limit the
current flowing through the latter.
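A common way to size that current-limiting resistor is R = (Vsupply - Vf) / If, i.e., the resistor drops whatever supply voltage is left over at the desired LED current. The sketch below uses assumed values for the supply voltage, LED forward voltage, and target current.

# Series resistor for an LED: drop the excess supply voltage across R
# so that the LED carries roughly the desired forward current.
def led_series_resistor(v_supply, v_forward, i_forward):
    return (v_supply - v_forward) / i_forward

v_supply = 5.0      # supply voltage in volts (assumed)
v_forward = 2.0     # LED forward voltage in volts (assumed; depends on color)
i_forward = 0.010   # target LED current, 10 mA (assumed)

print(led_series_resistor(v_supply, v_forward, i_forward))  # 300.0 ohms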
A zener diode is a special type of diode that is designed to conduct large currents in reverse
breakdown mode, mainly for voltage regulation purposes. A zener diode behaves like an
ordinary diode, i.e., it conducts current in only one direction and blocks current in the
other direction. Just like a regular diode, a zener diode conducts when it is forward-biased,
or when its anode is more positive than its cathode by a certain voltage. It is said to be in
reverse bias if its cathode is more positive than its anode, blocking the flow of current in
that state.
If an excessive reverse-bias voltage is applied across an ordinary diode, it goes into a
phenomenon known as 'avalanche breakdown'. Under this state, the diode starts
conducting large amounts of current even if it is in reverse bias. This phenomenon can
cause an ordinary diode to get permanently damaged. A zener diode, on the other hand, is
designed to operate in reverse-bias mode and can handle large currents when it is
conducting under reverse bias.
A zener diode is fabricated to exhibit a specified reverse bias voltage breakdown that is
much lower than that of an ordinary diode. This reverse breakdown voltage of a zener
diode is also known as its 'zener knee voltage' or simply its 'zener voltage'.

Figure 1. Photo of a zener diode (left) and the circuit symbol for a zener diode (right)
A zener diode has a heavily doped p-n junction that allows electrons to tunnel from the
valence band of the p-type material to the conduction band of the n-type material. The
zener voltage of a zener diode may be set through a precisely controlled doping process,
which can achieve tolerances as tight as 0.05%. Most zener diodes, however, have
tolerances of 5% to 10%.
Since a zener diode is meant to operate in reverse bias, its usual application would have it
connected to the circuit in such a way that its cathode is more positive than its anode.
Under this connection, the zener diode will not conduct unless the voltage at the cathode
exceeds the anode voltage by more than its zener voltage. Thus, a zener diode starts
conducting as soon as it is reverse-biased by a voltage equal to its zener voltage. Once it
conducts, the zener diode tends to pull down the voltage applied across it. As such, the
voltage seen across a conducting zener diode is very close to its zener voltage.
Since a conducting zener diode maintains the voltage across it at a value around its zener
voltage, its main purpose is to serve as a voltage regulator. A very simple circuit that
demonstrates this is shown in Figure 2. This is a shunt voltage regulator circuit, since the
zener diode is connected in shunt (parallel) with the load. In this circuit, Vout will be
maintained by the zener diode at the zener voltage level even if Vin changes, as long as
Vin exceeds the zener voltage.
If Vin increases, the current flowing through the zener diode ZD1 increases as well,
causing the voltage across the resistor R to increase while allowing the voltage across the
zener diode to remain at the zener voltage level. Of course, if Vin falls below the zener
voltage of ZD1, ZD1 stops conducting, and Vout starts falling with Vin. Note that this
circuit is also a very inefficient way to regulate a voltage, since regulation is achieved by
shunting current to ground, which is like simply throwing the excess energy away.

Figure 2. A simple Shunt Voltage Regulator using a zener diode
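For the shunt regulator of Figure 2, the series resistor R is commonly sized so that the zener ZD1 still carries some minimum current at the lowest expected input voltage and the highest load current. The sketch below is a rough sizing aid under that assumption; the names R and ZD1 follow Figure 2, while all numeric values are illustrative assumptions.

# Rough sizing of R in the shunt regulator of Figure 2:
# R must pass the load current plus a minimum zener current
# while dropping (Vin_min - Vz) across itself.
def shunt_regulator_resistor(v_in_min, v_zener, i_load_max, i_zener_min):
    return (v_in_min - v_zener) / (i_load_max + i_zener_min)

v_in_min = 9.0       # lowest expected input voltage (assumed)
v_zener = 5.1        # zener voltage of ZD1 (assumed)
i_load_max = 0.020   # maximum load current, 20 mA (assumed)
i_zener_min = 0.005  # minimum zener current to stay in breakdown (assumed)

r = shunt_regulator_resistor(v_in_min, v_zener, i_load_max, i_zener_min)
print(f"R = {r:.0f} ohms")   # 156 ohms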


A tunnel diode, also known as an Esaki diode, is a special type of diode that can be
operated at very high frequencies, i.e., well into the microwave frequency range. It was
invented by Leo Esaki of Tokyo Tsushin Kogyo in 1957.
A tunnel diode has this capability because it operates on the principle of electron tunneling
effect. Making use of very heavily doped p-type and n-type materials, a tunnel diode
achieves very high concentrations of carriers that can interact readily without having to
pass over the junction's potential barrier. Instead, the carriers simply pass through the
potential hill. This is the electron tunneling effect mentioned earlier.
The very heavy doping of the p-n junction of a tunnel diode results in the conduction
band electron states on the n-side of the junction becoming aligned with the valence band
hole states on the p-side.
Tunnel diodes are usually fabricated using germanium, although gallium arsenide and
silicon tunnel diodes also exist. They are used in applications such as oscillators,
amplifiers, frequency converters, and detectors.
When the forward bias voltage is initially applied in a tunnel diode, the current increases
as the filled electron states in the n-side conduction band align with the empty valence
band hole states in the p-side. As the voltage is further increased, these states begin to
misalign, causing the current to decrease. The current further drops as the voltage is
increased, which means that the tunnel diode is exhibiting a negative resistance under
these conditions. Increasing the voltage further eventually prevents the occurrence of
electron tunneling, causing the tunnel diode to just operate like a normal diode.

Figure 1. Photo of a tunnel diode (left) and the two circuit symbols commonly used for the tunnel diode (right)
Transistor Common-Terminal Configurations
Transistor circuits may be classified into three configurations based on which terminal is
common to both the input and the output of the circuit. These configurations are: 1) the
common-emitter configuration; 2) the common-base configuration; and 3) the common-collector configuration.
The common-emitter (CE) transistor configuration is shown in Figure 1. In this
configuration, the transistor terminal common to both the input and the output of the

circuit is the emitter. The common-emitter configuration, which is also known as the
'grounded-emitter' configuration, is the most widely used among the three configurations.

Figure 1. Common-Emitter Transistor Configuration


The input current and output voltage of the common-emitter configuration, which are the
base current Ib and the collector-emitter voltage Vce, respectively, are often considered as
the independent variables in this circuit. Its dependent variables, on the other hand, are the
base-emitter voltage Vbe (which is the input voltage) and the collector current Ic (which is
the output current). A plot of the output current Ic against the collector-emitter voltage
Vce for different values of Ib may be drawn for easier analysis of a transistor's
input/output characteristics, as shown in this Diagram of Vce-Ic Curves.
The common-base (CB) transistor configuration, which is also known as the 'grounded
base' configuration, is shown in Figure 2. In this configuration, the terminal common to
both the input and the output of the circuit is the base.

Figure 2. Common-Base Transistor Configuration


The input current and output voltage of the common-base configuration, which are the
emitter current Ie and the collector-base voltage Vcb, respectively, are often considered as
the independent variables in this circuit. Its dependent variables, on the other hand, are the
emitter-base voltage Veb (which is the input voltage) and the collector current Ic (which is
the output current). A plot of the output current Ic against the collector-base voltage Vcb
for different values of Ie may be drawn for easier analysis of a transistor's input/output
characteristics, as shown in this Diagram of Vcb-Ic Curves.
The common-collector (CC) transistor configuration is shown in Figure 3. In this
configuration, the collector is common to both the input and the output of the circuit. This
is basically the same as the common-emitter configuration, except that the load is in the
emitter instead of the collector. Just like in the common-emitter circuit, the current
flowing through the load when the transistor is reverse-biased is zero, with the collector
current being very small and equal to the base current. As the base current is increased,
the transistor slowly gets out of cut-off, goes into the active region, and eventually
becomes saturated. Once saturated, the voltage across the load becomes maximum, while
the voltage Vce across the collector and emitter of the transistor goes down to a very low
value, i.e., as low as a few tens of millivolts for germanium and 0.2 V for silicon
transistors.

Figure 3. Common-Collector Transistor Configuration


One way of understanding the basic operation of a bipolar transistor is by looking at its
Vce-Ic curves, i.e., the plots of its collector current Ic versus the voltage Vce across its
collector and emitter for different values of base current Ib. These curves are derived from
a transistor in common-emitter configuration, and basically describe the transistor's
common-emitter output characteristics.
A transistor circuit is said to be in common-emitter configuration if the emitter is the
terminal common to both the input and the output. In the analysis of transistor circuits, the
input current and output voltage are usually considered the independent variables. Thus,
for the common-emitter configuration, the independent variables are the input current Ib
and the output voltage Vce, while the dependent variables are the input voltage Vbe and
the output current Ic. The family of input characteristic curves may therefore be described
by the function f1 wherein Vbe = f1(Vce, Ib), while the family of output curves may be
described by the function f2, wherein Ic = f2(Vce, Ib).
The output curves corresponding to f2 are drawn with the collector-to-emitter voltage Vce
as the abscissa, and the collector current Ic as the ordinate. Different output curves are
generated for different values of base current Ib, which are all drawn on the same plot.
Figure 1 shows an example of a common-emitter transistor circuit's Vce-Ic curves for
different values of Ib.

Figure 1. The Vce-Ic Curves of an NPN transistor for different values of Ib (Common-Emitter Collector Characteristics)
These common-emitter Vce-Ic curves are useful in choosing the optimum operating point
of a transistor used as an amplifier. A transistor used as an amplifier must be operated in
the active region, wherein the change in the collector current (output) is proportional to the
change in the base current (input). This linear region is ideal for amplification use because
it allows the output waveform to be an enlarged 'faithful' copy of the input waveform. The
active or linear region in the Vce-Ic curve is its flat portion on the right side. Note that the
change in collector current Ic is most sensitive to the change in base current Ib in this
region.
The slope of the curve in this operating region represents the reciprocal of the transistor's
collector or output resistance Rc, i.e., slope = ΔIc/ΔVce = 1/Rc. This means that in this region
of the curve, the output resistance is relatively high (between 10 kΩ and 50 kΩ). An average Rc
value of 30 kΩ in this region may be assumed.
Also note that in this flat portion of the curve, the collector current Ic doesn't change much
with the collector-emitter voltage Vce for any given base current Ib.
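Since Rc is the reciprocal of the slope of the flat portion of the curve, it can be estimated from two (Vce, Ic) points read off a single curve at constant Ib. The points used below are assumed for illustration, not measured data.

# Estimate the collector (output) resistance from two points on a
# Vce-Ic curve taken at the same base current Ib:
# Rc = delta_Vce / delta_Ic (the reciprocal of the slope).
def output_resistance(point1, point2):
    (vce1, ic1), (vce2, ic2) = point1, point2
    return (vce2 - vce1) / (ic2 - ic1)

# Two assumed points in the flat (active) region of one curve:
p1 = (5.0, 2.00e-3)    # Vce = 5 V,  Ic = 2.00 mA
p2 = (10.0, 2.15e-3)   # Vce = 10 V, Ic = 2.15 mA

print(output_resistance(p1, p2))   # about 33,000 ohms (33 kΩ)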
One way of understanding the basic operation of a bipolar transistor is by looking at its
Vcb-Ic curves, i.e., the plots of its collector current Ic versus the voltage Vcb across its
collector and base for different values of emitter current Ie. These curves are derived from
a transistor in common-base configuration, and basically describe the transistor's common-base output characteristics.
A transistor circuit is said to be in common-base configuration if the base is the terminal
common to both the input and the output. In the analysis of transistor circuits, the input
current and output voltage are usually considered the independent variables. Thus, for the
common-base configuration, the independent variables are the input current Ie and the
output voltage Vcb, while the dependent variables are the input voltage Veb and the output
current Ic. The family of input characteristic curves may therefore be described by the
function f1 wherein Veb = f1(Vcb, Ie), while the family of output curves may be described
by the function f2, wherein Ic = f2(Vcb, Ie).
The output curves corresponding to f2 are drawn with the collector-to-base voltage Vcb as
the abscissa, and the collector current Ic as the ordinate. Different output curves are
generated for different values of emitter current Ie, which are all drawn on the same plot.
Figure 1 shows an example of a common-base transistor circuit's Vcb-Ic curves for
different values of Ie.

Figure 1. The Vcb-Ic Curves of an NPN transistor for different values of Emitter Current (Common-Base Output Characteristics)
These common-base Vcb-Ic curves are useful in distinguishing between the different
regions of operation of a bipolar transistor, namely, the active, saturation, and cut-off
regions. The active or linear region, wherein the collector junction is reverse-biased while
the emitter junction is forward-biased, is characterized by the flat or horizontal portions of
the curves. The saturation region, in which both the collector and emitter junctions are
forward-biased, lies in the portions of the curves wherein Vcb is just very slightly above 0
V and Ie > 0. Lastly, the cut-off region, wherein both the collector and emitter junctions
are reverse-biased, is the region of the plot at or below the curve for Ie = 0.
The Junction Field Effect Transistor (JFET) is a type of field effect transistor whose basic
structure consists of a semiconductor bar with ohmic contacts at the end and heavily doped
regions on its opposite sides. If the semiconductor bar is made of n-type material, then it
is an n-channel JFET. The JFET is p-channel if the bar is made of p-type material. The
terminals at the ends of the bar correspond to the source and drain of the JFET. The
heavily doped regions on the sides of the bar are connected to serve as the gate of the
JFET. Needless to say, the gate regions are doped to be of opposite type with respect to the
channel, so that a p-n junction is formed between the channel and the gate regions.

By applying a voltage across the source and the drain of a JFET, current consisting of
majority carriers (electrons for an n-channel and holes for a p-channel) is caused to flow
through the channel. The current flowing through the channel is controlled by applying a
gate voltage Vgs that reverse biases the p-n junction formed by the gate with respect to the
source. The higher the Vgs is, the more the p-n junction is reverse-biased, and the wider
the depletion region across the channel becomes. The wider depletion region results in a
narrower channel, consequently constricting the flow of current through the channel.
Varying Vgs therefore varies the current through the channel for any given voltage across
the source and the drain.
The JFET structure described above is no longer practical to use because of the difficulty
with having to diffuse dopants from two opposite sides of a bar. Most JFETs built onto
IC's nowadays involve single-ended geometries that require doping for the gate from only
one side of the channel, i.e., the surface of the wafer. This is achieved by building the
JFET on an epitaxially grown channel over a doped substrate that acts as the second gate.
The current through the channel of a MOSFET or JFET consists of only the majority
carriers, which is why FETs are referred to also as unipolar transistors.

Figure 1. Structure of a single-ended-geometry junction FET


The Metal-Oxide Semiconductor Field Effect Transistor (MOSFET) or MOS transistor is a
type of transistor that consists of a metal layer, an oxide layer, and a semiconductor layer.
The semiconductor layer is usually in the form of single-crystal silicon substrate doped
precisely to perform transistor action. The oxide is usually in the form of a silicon dioxide
layer that insulates the semiconductor layer from the metal layer. The metal layer is used
as contact for providing voltage inputs to the MOS transistor.
The MOS transistor consists of three terminals: a gate, a source, and a drain. These are
equivalent to the base, emitter, and collector of a bipolar transistor. The metal layer of the
MOS transistor serves as the gate, while the source and drain are fabricated on the silicon
substrate.

Like a bipolar transistor, the current flowing through a MOS transistor is controlled by the
input at its gate. However, unlike a bipolar transistor which is controlled by the amount of
current into its base, a MOS transistor is controlled by the voltage level at its gate.
The source and drain of a MOS transistor are created on the silicon substrate in such a way
that they are 'sandwiching' the gate. The source and drain are doped to be of the same
material type, which should be different from the doping received by the substrate. A
MOS transistor is referred to as a P-channel MOSFET, or PMOS, if the source and drain
are p-type, and the substrate is n-type. It is an N-channel MOSFET, or NMOS, if the
source and drain are n-type, and the substrate is p-type.
The area under the gate is known as the channel. The conductivity of the channel may be
controlled through the voltage level applied to the gate. For instance, in an NMOS, the
major carrier is the electron, so the channel becomes more conductive by applying a
positive voltage at the gate, which tends to attract more electrons from the substrate into
the channel. The layer formed by these attracted electrons is known as the 'inversion
layer', since electrons are the minority carriers of the p-substrate.
If the source of the NMOS is more negative than the drain while a sufficiently positive
voltage is applied to the gate, current would pass through the transistor. Removing the
positive voltage at the gate would significantly decrease the conductivity of the channel,
constricting the flow of electrons. A MOS transistor operating in this manner is known as
an enhancement-mode MOS transistor, because it is normally open and conducts only
when the channel is 'enhanced.' On the other hand, a normally conducting transistor is
known as a depletion-mode transistor, since its conduction is controlled by 'depleting' the
normally-present channel.

Figure 1. Structure of an Enhancement MOSFET
The term 'diac', which stands for 'diode for alternating current', refers to a three-layer two-terminal device that can conduct current in two directions. However, a diac only starts to
conduct current when the voltage across it momentarily exceeds a certain threshold known
as its 'breakdown voltage'.
Once triggered by a momentary voltage higher than its breakdown voltage, the resistance
of the diac decreases abruptly. This results in a sharp increase in current flowing through
the diac and a corresponding decrease in the voltage across it. This conducting state
remains as long as the current flowing through the diac is higher than a current threshold
known as the diac's 'holding current.'
Once the current through a conducting diac falls below the holding current, the diac
switches back to its high-resistance or non-conducting state.
After it has been triggered into conduction, a diac exhibits negative resistance. This means
that the voltage across a conducting diac decreases as the current flowing through it
increases.

Figure 1. Photo of a diac (left) and the symbol for a diac (right)
Most diacs exhibit a breakdown voltage of around 30 V. Unlike other thyristors such as
the SCR or the triac, the diac has no gate electrode with which it can be triggered. A diac's
primary application is for triggering another device.
Diacs are also known as symmetrical trigger diodes because of the symmetry exhibited by
their V-I characteristic curves. Because of this symmetry, the two terminals of the diac are
not called 'anode' and 'cathode', and are instead referred to as MT1 and MT2.
A Silicon-Controlled Rectifier (SCR) is a four-layer (p-n-p-n) semiconductor device that
doesn't allow current to flow until it is triggered and, once triggered, will only allow the
flow of current in one direction. It has three terminals: 1) an input control terminal referred
to as a 'gate'; 2) an output terminal known as the 'anode'; and 3) a terminal known as a
'cathode', which is common to both the gate and the anode.
SCR's are generally used for switching and power control purposes in AC and high-power
circuits. The SCR is a device that falls under a group of devices known as 'thyristors',
which refer to devices that have a 4-layer or p-n-p-n structure. The term 'silicon-controlled
rectifier' is a trade name used by General Electric in 1957 to refer to this type of thyristor.

Figure 1. Photo of various SCR's (left) and the circuit symbol for an SCR (right)
An SCR may be thought of as a rectifier whose ability to conduct current can be controlled
using a third terminal known as a 'gate'. While untriggered, an SCR will prevent any
current from flowing through it, except for a very small leakage current caused by non-ideal
conditions. The SCR is triggered to turn on if the voltage across its gate and its cathode
exceeds a certain threshold level.
Once an SCR has been triggered, it will remain 'on' even if the triggering gate voltage is
removed, until the current flowing through it falls below a level known as its 'holding
current'. Thus, a conducting SCR will continue to conduct as long as the current flowing
through it is greater than the holding current. In normal AC applications, an SCR is turned
off automatically during the half-cycle wherein the voltage and current are below zero.
The p-n-p-n structure of an SCR may be modeled in terms of a PNP and an NPN transistor,
as shown in Figure 2. It can easily be seen from this diagram why an SCR remains 'on'
once triggered, even if the triggering gate voltage is removed. Applying sufficient
triggering voltage at the gate drives the NPN transistor to conduct. This, in turn, pulls
down the PNP's base voltage, causing the PNP to conduct. The conducting PNP then
supplies the base current to the NPN transistor to keep it conducting. Unless the supply of
current to the base of the NPN is cut off, the circuit will continue conducting under this 'on'
condition.

Figure 2. The Equivalent Circuit (left) and Structure (right) of an SCR


SCR's, which can have voltage ratings of up to 2,500 volts and current ratings of up to
3,000 amperes, are encountered in many AC and high-power applications. Examples of
applications for SCR's include: 1) power switching; 2) phase control; 3) battery charging;
4) power inverters; 5) motor switching and control; 6) high-voltage DC conversion; etc.
A Triac is a three-terminal electronic component that functions as a gate-controlled
bidirectional switch, primarily for AC circuits. Its structure is basically equivalent to two
oppositely-facing silicon-controlled rectifiers (SCR's) connected in parallel, with their
gates connected together. Whereas an SCR is capable of conducting current in only one
direction, a triac can conduct current in both directions because of its dual-SCR
configuration.

The main terminals of a triac are often designated as 'MT1' and 'MT2', while its input
control terminal is referred to as a 'gate', as mentioned earlier. Whenever a sufficient
positive or negative voltage is applied at the gate of a triac, one of its two SCR's turns on,
causing current to flow through the triac. Which SCR is conducting at any one time
depends on the polarity of the voltage across the triac.

Figure 1. Photo of a triac (left) and the circuit symbol for a triac (right)
Just like an SCR, a triac will continue to conduct once it is turned on, even if the triggering
gate voltage is removed. However, the current flowing through the triac must remain
above a certain level, known as the 'holding current', in order to keep the triac conducting.
If the current through the triac falls below the holding current, the triac switches off, and
needs to be triggered again in order to conduct.
A triac is a good switching device for AC loads, such as incandescent bulbs and AC
motors. In normal AC applications, a triac turns off when the sinusoidal current crosses the
zero level. Note that when a triac is used to drive inductive loads, the current is more
difficult to drive to zero (since inductors oppose instantaneous changes in current), and this
might present some issues during switch-off. Thus, this phenomenon must be considered
when designing an application wherein a triac is used to power an inductive load.
The point within the cycle of the sine wave at which a triac is triggered may also be
precisely timed, such that the percentage of power delivered to the load may also be
controlled with a triac.
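
As a rough numerical illustration of this phase-control idea (not taken from the original text), the short Python sketch below computes the fraction of full power delivered to a purely resistive load for a given triac firing angle, assuming a sinusoidal supply and symmetric triggering on both half-cycles:

import math

def power_fraction(firing_angle_deg):
    """Fraction of full power delivered to a resistive load when a triac is
    triggered at the given firing angle (0 deg = full power, 180 deg = none).
    Obtained by integrating sin^2 over the conducting part of each half-cycle."""
    a = math.radians(firing_angle_deg)
    return 1 - a / math.pi + math.sin(2 * a) / (2 * math.pi)

for angle in (0, 45, 90, 135, 180):
    print(f"firing angle {angle:3d} deg -> {power_fraction(angle) * 100:5.1f} % of full power")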
Examples of applications for triacs include: 1) switching and dimming for AC
incandescent bulbs; 2) speed controls for appliances with electric motors, e.g., electric
fans; and 3) interfacing of AC appliances to digital computer systems.
The following parameters must be considered when selecting a triac: 1) forward and
reverse breakover voltage; 2) maximum load current; 3) minimum holding current; 4) gate
voltage and current trigger specifications; 5) switching speeds; and 6) maximum dV/dt.
Reactance (X) is defined as the ratio of the AC voltage to the AC current in an AC circuit,
as caused by the presence of a reactive component in the circuit, i.e., a capacitor or an
inductor. The reactance of a capacitor or inductor is therefore like the resistance of a
resistor, except that the reactance of a capacitor or inductor is not constant - it varies with
the frequency of the signal across the capacitor or through the inductor. Reactance is also
expressed in ohms.

The presence of a capacitor or an inductor in a circuit impedes changes in voltage or
current within the circuit. Specifically, the voltage across a capacitor can not change
instantaneously, in the same way that current flowing through an inductor can not change
instantaneously. As such, the presence of a capacitor or inductor in an AC circuit
introduces a phase shift between the voltage and current signals in the circuit.
In an AC circuit where both resistance and reactance exist, the effective ratio of voltage to
current is known as the impedance Z of the circuit, which is given by the equation Z = R +
jX, or |Z| = √(R² + X²). Impedance is therefore the over-all ability of a circuit to
'impede' the flow of current for a given voltage, and it consists of two components:
resistance and reactance.
Assume that a sinusoidal voltage with frequency f (in Hz) is applied across a capacitor
with capacitance C. The reactance XC of the capacitor is then given by the equation XC
= 1 / (2πfC) = 1 / (ωC), where ω = 2πf is the angular frequency in radians per second. Thus,
the reactance XC of a capacitor decreases as the frequency f increases, and increases as f
decreases. This is why a capacitor is used to block the DC component of an AC signal.
Furthermore, the current sine wave through a capacitor is 90 degrees ahead of the voltage
sine wave across it.
On the other hand, the reactance XL of an inductor is given by the equation XL = 2πfL =
ωL, which means that the reactance of an inductor increases as the frequency of the signal
through it increases. Furthermore, the current sine wave through an inductor lags the
voltage sine wave across it by 90 degrees.
By convention, the reactance of a capacitor is taken as negative (its impedance is -jXC,
with the current leading the voltage), while the reactance of an inductor is taken as positive.
Thus, a reactance that's less than 0 means that it is capacitive, a reactance that's greater
than 0 means that it is inductive, and the reactance equals zero in a purely resistive circuit.
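
A short numerical sketch of these two reactance formulas, using arbitrarily chosen component values purely for illustration:

import math

def capacitive_reactance(f_hz, c_farads):
    """X_C = 1 / (2*pi*f*C); grows without bound as f approaches 0 (DC)."""
    return 1.0 / (2 * math.pi * f_hz * c_farads)

def inductive_reactance(f_hz, l_henries):
    """X_L = 2*pi*f*L; grows linearly with frequency."""
    return 2 * math.pi * f_hz * l_henries

# Example: a 1 uF capacitor and a 10 mH inductor at 50 Hz and at 5 kHz
for f in (50, 5000):
    print(f"f = {f:5d} Hz: X_C = {capacitive_reactance(f, 1e-6):9.1f} ohm, "
          f"X_L = {inductive_reactance(f, 10e-3):7.1f} ohm")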
Susceptance (B) is the reciprocal of reactance, i.e., B = 1 / X.
Impedance (Z) is the over-all measure of the ability of a circuit to resist the flow of
alternating current under a given a.c. voltage excitation. It may be expressed as Z = V / I
where V and I are the a.c. voltage applied to and current flowing through the circuit. Its
unit of measurement is the ohm (volt/ampere).
Impedance consists of two components - resistance and reactance. A good understanding
of what impedance is and how it is related to resistance and reactance is important in the
analysis of voltage-current relationships in an AC circuit that consists of resistors as well
as reactive components (capacitors and inductors). For more details about resistance and
reactance, please see these separate articles: Resistance; Reactance.


In an AC circuit that consists of a resistance R in series with a reactance X, the following
equations apply:
|Z| = √(R² + X²)
φ = tan⁻¹(X / R)
p.f. = cos φ = R / |Z|
where:
Z is the impedance of the circuit;
R is the resistance of the circuit;
X is the reactance of the circuit;
φ is the phase angle between the voltage and current signals;
and p.f. is the power factor of the circuit, which is the ratio of the true power to the apparent
power of the circuit.
In an AC circuit that consists of a resistance R in parallel with a reactance X, the following
equations apply:
|Z| = RX / √(R² + X²)
φ = tan⁻¹(R / X)
p.f. = cos φ = |Z| / R
where:
Z is the impedance of the circuit;
R is the resistance of the circuit;
X is the reactance of the circuit;
φ is the phase angle between the voltage and current signals;
and p.f. is the power factor of the circuit, which is the ratio of the true power to the apparent
power of the circuit.
As shown in the equations above, the presence of either a capacitor or an inductor (or
both) in a circuit produces a phase shift between the voltage and current ac signals. This
phase shift between voltage and current results in reactive power which can not do real
work. In such a case, the true power of the circuit is just a fraction (denoted by the power
factor) of its apparent power.
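
The series and parallel relationships above can be checked numerically. The sketch below (illustrative values only) evaluates |Z|, the phase angle, and the power factor for a resistance in series and in parallel with a reactance:

import math

def series_impedance(r, x):
    """|Z|, phase angle (deg), and power factor for R in series with X."""
    z = math.sqrt(r**2 + x**2)
    phi = math.degrees(math.atan2(x, r))
    return z, phi, r / z

def parallel_impedance(r, x):
    """|Z|, phase angle (deg), and power factor for R in parallel with X."""
    z = r * x / math.sqrt(r**2 + x**2)
    phi = math.degrees(math.atan2(r, x))
    return z, phi, z / r

r, x = 100.0, 75.0          # ohms, arbitrary example values
for name, fn in (("series", series_impedance), ("parallel", parallel_impedance)):
    z, phi, pf = fn(r, x)
    print(f"{name:8s}: |Z| = {z:6.1f} ohm, phase = {phi:5.1f} deg, p.f. = {pf:.3f}")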
Admittance (Y) is the reciprocal of impedance, i.e., Y = 1 / Z. The unit of measurement for
admittance is the siemens (S).
Digital Electronics
Digital Electronics refers to the field of electronics that deals with digital signals, i.e.,
signals that exist at discrete or quantized levels only. In fact, digital electronics as known
today deals with signals that are represented by only two discrete levels or 'binary states': 1
and 0. A level '0', or 'low' state, may be represented by zero volt (0 V), while a level '1', or
'high' state, may be represented by a higher voltage, say 5 V.
Digital electronics involves the storage, processing, receiving, and transmission of
information in the form of digital signals, or a train of 'low' (0 V) and 'high' (5 V) pulses
that correspond to 0's and 1's, respectively.

The opposite of a digital signal is an analog signal, which is defined as a varying and
continuous signal.
Digital signals may be converted into analog signals and vice versa by devices known as
digital-to-analog converters (DAC's) and analog-to-digital converters (ADC's) ,
respectively.

Resistor-Transistor Logic (RTL)


Resistor-Transistor Logic, or RTL, refers to the obsolete technology for designing and
fabricating digital circuits that employ logic gates consisting of nothing but transistors and
resistors. RTL gates are now seldom used, if at all, in modern digital electronics design
because the technology has several drawbacks, such as bulkiness, low speed, limited fan-out, and poor
noise margin. A basic understanding of what RTL is, however, would be helpful to any
engineer who wishes to get familiarized with TTL, which for the past many years has
become widely used in digital devices such as logic gates, latches, buffers, counters, and
the like.

Figure 1 shows an example of an N-input RTL NOR gate. It consists of N transistors,
whose collectors are all tied to Vcc through a common resistor, and whose emitters are
all grounded. Their bases individually act as inputs for input voltages Vi (i = 1,2,...,N),
which represent input logic levels. The output Vo is taken across the collector- resistor
node and ground. Vo is only 'high' if the inputs to the bases of all the transistors are 'low'.


Figure 1. A simple N-input RTL NOR Gate

One of the earliest gates used in integrated circuits is a special type of RTL gate known as
the direct-coupled transistor logic (DCTL) gate. A DCTL gate is one wherein the bases of
the transistors are connected directly to inputs without any base resistors. Thus, the RTL
NOR gate shown in Figure 1 becomes a DCTL NOR gate if all the base resistors (Rb's) are
eliminated. Without the base resistors, DCTL gates are more economical and simpler to
fabricate onto integrated circuits than RTL gates with base resistors.
The main drawback of DCTL gates is that they suffer from a phenomenon known as
current hogging. Ideally, several transistors that are connected in parallel will share the
load current equally among themselves when they are all brought into saturation. In the
real world, however, the saturation points of different transistors are attained with different
levels of input voltages to the base (Vbe). As such, transistors that are in parallel and share
the same input voltage (which are commonly encountered in DCTL circuits) do not share
the load current evenly among themselves.

In fact, once the transistor with the lowest Vbesat saturates, the other transistors are
prevented from saturating themselves. This causes the saturated transistor to 'hog' the load
current, i.e., it carries the bulk of the load current whereas those transistors that were
prevented from saturating carry only a minimal portion of it. Current hogging, which
prevented DCTL from becoming widely used, is largely avoided in RTL circuits simply by
retaining the base resistors.
RTL gates also exhibit limited 'fan-outs'. The fan-out of a gate is the ability of its output to
drive several other gates. The more gates it can drive, the higher is its fan-out. The fan-out
of a gate is limited by the current that its output can supply to the gate inputs connected to
it when the output is at logic '1', since at this state it must be able to drive the connected
input transistors into saturation.


Another weakness of an RTL gate is its poor noise margin. The noise margin of a logic
gate for logic level '0', Δ0, is defined as the difference between the maximum input voltage
that it will recognize as a '0' (Vil) and the maximum voltage that may be applied to it as a
'0' (Vol of the driving gate connected to it). For logic level '1', the noise margin Δ1 is the
difference between the minimum input voltage that may be applied to it as a '1' (Voh of the
driving gate connected to it) and the minimum input voltage that it will recognize as a '1'
(Vih). Mathematically, Δ0 = Vil − Vol and Δ1 = Voh − Vih. Any noise that causes a noise
margin to be overcome will result in a '0' being erroneously read as a '1' or vice versa. In
other words, noise margin is a measure of the immunity of a gate from reading an input
logic level incorrectly.
In an RTL circuit, the collector output of the driving transistor is directly connected to the
base resistor of the driven transistor. Circuit analysis would easily show that in such an
arrangement, the differences between Vil and Vol, and between Voh and Vih, are not that
large. This is why RTL gates are known to have poor noise margins in comparison to DTL
and TTL gates.

Diode-Transistor Logic (DTL)


Diode-Transistor Logic, or DTL, refers to the technology for designing and fabricating
digital circuits wherein logic gates employ both diodes and transistors. DTL offers better
noise margins and greater fan-outs than RTL, but suffers from low speed, especially in
comparison to TTL.
RTL allows the construction of NOR gates easily, but NAND gates are relatively more
difficult to get from RTL. DTL, however, allows the construction of simple NAND gates
from a single transistor, with the help of several diodes and resistors.
Figure 1 shows an example of a 3-input DTL NAND gate. It consists of a single
transistor Q configured as an inverter, which is driven by a current that depends on the
inputs to the three input diodes D1-D3.


Figure 1. A simple 3-input DTL NAND Gate

In the NAND gate in Figure 1, the current through diodes DA and DB will only be large
enough to drive the transistor into saturation and bring the output voltage Vo to logic '0' if
all the input diodes D1-D3 are 'off', which is true when the inputs to all of them are logic
'1'. This is because when D1-D3 are not conducting, all the current from Vcc through R
will go through DA and DB and into the base of the transistor, turning it on and pulling Vo
to near ground.
However, if any of the diodes D1-D3 gets an input voltage of logic '0', it gets forward-biased and starts conducting. This conducting diode 'shunts' almost all the current away
from the reverse-biased DA and DB, limiting the transistor base current. This forces the
transistor to turn off, bringing up the output voltage Vo to logic '1'.
One advantage of DTL over RTL is its better noise margin. The noise margin of a logic
gate for logic level '0', Δ0, is defined as the difference between the maximum input voltage
that it will recognize as a '0' (Vil) and the maximum voltage that may be applied to it as a
'0' (Vol of the driving gate connected to it). For logic level '1', the noise margin Δ1 is the
difference between the minimum input voltage that may be applied to it as a '1' (Voh of the
driving gate connected to it) and the minimum input voltage that it will recognize as a '1'
(Vih). Mathematically, Δ0 = Vil − Vol and Δ1 = Voh − Vih. Any noise that causes a noise
margin to be overcome will result in a '0' being erroneously read as a '1' or vice versa. In
other words, noise margin is a measure of the immunity of a gate from reading an input
logic level incorrectly.
In a DTL circuit, the collector output of the driving transistor is separated from the base
resistor of the driven transistor by several diodes. Circuit analysis would easily show that
in such an arrangement, the differences between Vil and Vol, and between Voh and Vih, are
much larger than those exhibited by RTL gates, wherein the collector of the driving
transistor is directly connected to the base resistor of the driven transistor. This is why
DTL gates are known to have better noise margins than RTL gates.

One problem that DTL doesn't solve is its low speed, especially when the transistor is
being turned off. Turning off a saturated transistor in a DTL gate requires it to first pass
through the active region before going into cut-off. Cut-off, however, will not be reached
until the stored charge in its base has been removed. The dissipation of the base charge
takes time if there is no available path from the base to ground. This is why some DTL
circuits have a base resistor that's tied to ground, but even this requires some trade-offs.
Another problem with turning off the DTL output transistor is the fact that the effective
capacitance of the output needs to charge up through Rc before the output voltage rises to
the final logic '1' level, which also consumes a relatively large amount of time. TTL,
however, solves the speed problem of DTL elegantly.

Transistor-Transistor Logic (TTL)


Transistor-Transistor Logic, or TTL, refers to the technology for designing and fabricating
digital integrated circuits that employ logic gates consisting primarily of bipolar
transistors. It overcomes the main problem associated with DTL, i.e., lack of speed.
The input to a TTL circuit is always through the emitter(s) of the input transistor, which
exhibits a low input resistance. The base of the input transistor, on the other hand, is
connected to the Vcc line, which causes the input transistor to pass a current of about 1.6
mA when the input voltage to the emitter(s) is logic '0', i.e., near ground. Letting a TTL
input 'float' (left unconnected) will usually make it go to logic '1', but such a state is
vulnerable to stray signals, which is why it is good practice to connect TTL inputs to Vcc
using 1 kohm pull-up resistors.
The most basic TTL circuit has a single output transistor configured as an inverter with its
emitter grounded and its collector tied to Vcc with a pull-up resistor, and with the output
taken from its collector. Most TTL circuits, however, use a totem pole output circuit,
which replaces the pull-up resistor with a Vcc-side transistor sitting on top of the GND-side output transistor. The emitter of the Vcc-side transistor (whose collector is tied to
Vcc) is connected to the collector of the GND-side transistor (whose emitter is grounded)
by a diode. The output is taken from the collector of the GND-side transistor. Figure 1
shows a basic 2-input TTL NAND gate with a totem-pole output.


Figure 1. A 2-input TTL NAND Gate with a Totem Pole Output Stage
In the TTL NAND gate of Figure 1, applying a logic '1' input voltage to both emitter
inputs of T1 reverse-biases both base-emitter junctions, causing current to flow through R1
into the base of T2, which is driven into saturation. When T2 starts conducting, the stored
base charge of T3 dissipates through the T2 collector, driving T3 into cut-off. On the other
hand, current flows into the base of T4, causing it to saturate and pull down the output
voltage Vo to logic '0', or near ground. Also, since T3 is in cut-off, no current will flow
from Vcc to the output, keeping it at logic '0'. Note that T2 always provides
complementary inputs to the bases of T3 and T4, such that T3 and T4 always operate in
opposite regions, except during momentary transition between regions.
On the other hand, applying a logic '0' input voltage to at least one emitter input of T1 will
forward-bias the corresponding base-emitter junction, causing current to flow out of that
emitter. This causes the stored base charge of T2 to discharge through T1, driving T2 into cut-off. Now that T2 is in cut-off, current from Vcc will be diverted to the base of T3
through R3, causing T3 to saturate. On the other hand, the base of T4 will be deprived of
current, causing T4 to go into cut-off. With T4 in cut-off and T3 in saturation, the output Vo
is pulled up to logic '1', or closer to Vcc.
Outputs of different TTL gates that employ the totem-pole configuration must not be
connected together since differences in their output logic will cause large currents to flow
from the logic '1' output to the logic '0' output, destroying both output stages. The output of
a typical TTL gate under normal operation can sink currents of up to 16 mA.
The noise margin of a logic gate for logic level '0', Δ0, is defined as the difference between
the maximum input voltage that it will recognize as a '0' (Vil) and the maximum voltage
that may be applied to it as a '0' (Vol of the gate driving it). For logic level '1', the noise
margin Δ1 is the difference between the minimum input voltage that may be applied to it
as a '1' (Voh of the gate driving it) and the minimum input voltage that it will recognize as
a '1' (Vih). Mathematically, Δ0 = Vil − Vol and Δ1 = Voh − Vih. Any noise that causes a noise
margin to be overcome will result in a '0' being erroneously read as a '1' or vice versa. In
other words, noise margin is a measure of the immunity of a gate from reading an input
logic level incorrectly. For TTL, Vil = 0.8 V and Vol = 0.4 V, so Δ0 = 0.4 V; and Voh = 2.4 V
and Vih = 2.0 V, so Δ1 = 0.4 V. These noise margins are not as good as the noise margins
exhibited by DTL.
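
The noise-margin arithmetic is simple enough to restate directly in code; the snippet below just evaluates Δ0 = Vil − Vol and Δ1 = Voh − Vih with the standard TTL levels quoted above:

def noise_margins(vil, vol, voh, vih):
    """Return the logic-'0' and logic-'1' noise margins of a gate family."""
    return vil - vol, voh - vih

# Standard TTL levels quoted in the text
delta0, delta1 = noise_margins(vil=0.8, vol=0.4, voh=2.4, vih=2.0)
print(f"TTL noise margins: delta0 = {delta0:.1f} V, delta1 = {delta1:.1f} V")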
As mentioned earlier, TTL has a much higher speed than DTL. This is due to the fact that
when the output transistor (T4 in Figure 1) is turned off, there is a path for the stored
charge in its base to dissipate through, allowing it to reach cut-off faster than a DTL output
transistor. At the same time, the equivalent capacitance of the output is charged from Vcc
through T3 and the output diode, allowing the output voltage to rise more quickly to logic
'1' than in a DTL output wherein the output capacitance is charged through a resistor.
The commercial names of digital IC's that employ TTL start with '74', e.g., 7400, 74244,
etc. Most TTL devices nowadays, however, are named '74LSXXX', with the 'LS' standing
for 'low-power Schottky'. Low-power Schottky TTL devices employ a Schottky diode,
which is used to limit the voltage between the collector and the base of a transistor, making
it possible to design TTL gates that use significantly less power to operate while allowing
higher switching speeds.

Boolean Algebra
Boolean Algebra, also known as the 'algebra of logic', is a branch of mathematics that is
similar in form to algebra, but dealing with logical instead of numerical relationships. It
was invented by George Boole, after whom this system was named. Thus, instead of
variables that represent numerical quantities as in conventional algebra, Boolean algebra
handles variables that represent two types of logic propositions: 'true' and 'false'.

Boolean algebra has become the main cornerstone of digital electronics, since the latter
also operates with two logic states, '1' and '0', represented by two distinct voltage levels.
Boolean algebra's formal interpretation of logical operators AND, OR, and NOT has
allowed the systematic development of complex digital systems from simple logic gates,
that now not only include circuits that perform mathematical operations, but intricate data
processing as well. Tables 1 to 4 summarize the definitions of logical operators and their
basic mathematical properties as represented in Boolean algebra.
Table 1. Elementary Logic Gate Actions

OR
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 1

AND
0 · 0 = 0
0 · 1 = 0
1 · 0 = 0
1 · 1 = 1

NOT
NOT(0) = 1
NOT(1) = 0

NOR
NOT(0 + 0) = 1
NOT(0 + 1) = 0
NOT(1 + 0) = 0
NOT(1 + 1) = 0

NAND
NOT(0 · 0) = 1
NOT(0 · 1) = 1
NOT(1 · 0) = 1
NOT(1 · 1) = 0

Table 2. Single-Variable Logic Gate Actions

OR
A + 0 = A
A + 1 = 1
A + A = A
A + NOT(A) = 1

AND
A · 0 = 0
A · 1 = A
A · A = A
A · NOT(A) = 0

NOT
NOT(A) is the complement of A
NOT(NOT(A)) = A

NOR
NOT(A + 0) = NOT(A)
NOT(A + 1) = 0
NOT(A + A) = NOT(A)
NOT(A + NOT(A)) = 0

NAND
NOT(A · 0) = 1
NOT(A · 1) = NOT(A)
NOT(A · A) = NOT(A)
NOT(A · NOT(A)) = 1

Table 3. Multi-Variable Boolean Equalities

A + B = B + A
(A + B) + C = A + (B + C)
A · B = B · A
(A · B) · C = A · (B · C)
A · (B + C) = A · B + A · C

De Morgan's Theorem
One important concept in digital electronics design is known as De Morgan's Theorem.
This theorem basically states that: 1) the complement of the product of a given set of
variables is equal to the sum of the complements of the individual variables; and 2) the
complement of the sum of a given set of variables is equal to the product of the
complements of the individual variables. De Morgan's Theorem applies to any arbitrary
number of variables.
The important implication of De Morgan's Theorem is that any logic gate or circuit can be
replaced by an equivalent circuit composed of other gates, as long as the NOT function can
be provided by at least one of the substitute gates. This is useful in digital electronics
design wherein there is a need to minimize the variety and number of logic gate IC's used.
Table 1 shows the mathematical forms of De Morgan's Theorem.
Table 1. De Morgan's Theorem

NOT(A · B) = NOT(A) + NOT(B)
NOT(A + B) = NOT(A) · NOT(B)
NOT(A · B · C · ...) = NOT(A) + NOT(B) + NOT(C) + ...
NOT(A + B + C + ...) = NOT(A) · NOT(B) · NOT(C) · ...
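
Because Boolean variables can only take the values 0 and 1, both forms of De Morgan's Theorem can be verified exhaustively. The short Python check below (an illustration, not part of the original text) enumerates every combination of three variables:

from itertools import product

for a, b, c in product((0, 1), repeat=3):
    # NOT(A*B*C) must equal NOT(A) + NOT(B) + NOT(C)
    assert (not (a and b and c)) == ((not a) or (not b) or (not c))
    # NOT(A+B+C) must equal NOT(A) * NOT(B) * NOT(C)
    assert (not (a or b or c)) == ((not a) and (not b) and (not c))

print("De Morgan's Theorem holds for all 3-variable input combinations")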

Digital Logic Gates


Digital logic gates, which are also known as combinational logic gates or simply 'logic
gates', are electronic devices whose outputs at any time are determined by the states of their
inputs at that time. Since logic gates are digital IC's, their input and output signals can
only be in one of two possible digital states, i.e., logic '0' or logic '1'. Thus, the logic state
in which the output of a logic gate will be put in depends on the logic states of each of its
individual inputs.
The primary application of logic gates is to implement 'logic' in the flow of digital signals
in a digital circuit. Logic in its ordinary sense is defined as a branch of philosophy that
deals with what is true and false, based on what other things are true and false. This
essentially is the function of logic gates in digital circuits - to determine which outputs will
be true or false, given a set of inputs that can either be true (logic '1') or false (logic '0').

The response output (usually denoted by Q) of a logic gate to any combination of inputs
may be tabulated into what is known as a truth table. A truth table shows each possible
combination of inputs to a logic gate and the combination's corresponding output. The
Table of Logic Gates and Their Properties, which describes the various types of logic
gates, provides a truth table for each of them as well.


Interestingly, the operation of logic gates in relation to one another may be represented and
analyzed using a branch of mathematics called Boolean Algebra which, like the common
algebra, deals with manipulation of expressions to solve or simplify equations.
Expressions used in Boolean Algebra are called, well, Boolean expressions.
There are several kinds of logic gates, each one of which performs a specific function.
These are the: 1) AND gate; 2) OR gate; 3) NOT gate; 4) NAND gate; 5) NOR gate; and
6) EXOR gate. See the Table of Logic Gates and Their Properties.
Logic gates may be thought of as a combination of switches. For instance, the AND gate,
whose output can only be '1' if all its inputs are '1', may be represented by switches
connected in series, with each switch representing an input. All the switches need to be
activated and conducting (equivalent to all the inputs of the AND gate being at logic '1'),
for current to flow through the circuit load (equivalent to the output of the AND gate being
at logic '1').
An OR gate, on the other hand, may be represented by switches connected in parallel,
since only one of these parallel switches needs to turn on in order to energize the circuit
load.
In Boolean Algebra, the AND operation is represented by multiplication, since the only
way that the result of multiplication of a combination of 1's and 0's will be equal to '1' is if
all its inputs are equal to '1'. A single '0' among the multipliers will result in a product
that's equal to '0'. The Boolean expression for 'A AND B' is similar to the expression
commonly used for multiplication, i.e., AB.
The OR operation, on the other hand, is represented by addition in Boolean Algebra. This
is because the only way to make the result of the addition operation equal to '0' is to make
all the inputs equal to '0', which basically describes an 'OR' operation. The Boolean
expression for 'A OR B' is therefore A+B.
The NOT operation is usually denoted by a line above the symbol or expression that is
being negated, i.e., Ā = NOT(A). The NAND operation is simply an AND operation
followed by a NOT operation. The NOR operation is simply an OR operation followed by
a NOT operation. The symbols used for logic gates in electronic circuit diagrams are
shown in Figure 1.

Figure 1. Logic Gate Symbols


One of the most useful theorems used in Boolean Algebra is De Morgan's Theorem, which
states how an AND operation can be converted into an OR operation, as long as a NOT
operation is available. De Morgan's Theorem is usually expressed in two equations as
follows:

NOT(A · B) = NOT(A) + NOT(B); and
NOT(A + B) = NOT(A) · NOT(B).
De Morgan's Theorem has a practical implication in digital electronics - a designer may
eliminate the need to add more IC's to the design unnecessarily, simply by substituting
gates with the equivalent combination of other gates whenever possible. Since NAND and
NOR gates can be used as NOT gates, de Morgan's Theorem basically implies that any
Boolean operation may be simulated with nothing but NAND or NOR gates. This is why
NAND and NOR gates are also called universal gates.
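
To illustrate why NAND (and, by a similar argument, NOR) is called a universal gate, the sketch below builds NOT, AND, and OR out of a single NAND function and checks them against Python's own bitwise operators; this is only an illustration of the idea, not a circuit from the text:

def nand(a, b):
    """The only primitive used below."""
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    # De Morgan: A + B = NOT(NOT(A) * NOT(B))
    return nand(not_(a), not_(b))

for a in (0, 1):
    assert not_(a) == (0 if a else 1)
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)

print("NOT, AND, and OR successfully built from NAND alone")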

Table of Digital Logic Gates and Their Properties

AND Gate - The AND gate is a logic gate that gives an output of '1' only when all of its
inputs are '1'. Thus, its output is '0' whenever at least one of its inputs is '0'.
Mathematically, Q = A · B.

OR Gate - The OR gate is a logic gate that gives an output of '0' only when all of its
inputs are '0'. Thus, its output is '1' whenever at least one of its inputs is '1'.
Mathematically, Q = A + B.

NOT Gate - The NOT gate is a logic gate that gives an output that is opposite the state of
its input. Mathematically, Q = NOT(A).

NAND Gate - The NAND gate is an AND gate with a NOT gate at its end. Thus, for the
same combination of inputs, the output of a NAND gate will be opposite that of an AND
gate. Mathematically, Q = NOT(A · B).

NOR Gate - The NOR gate is an OR gate with a NOT gate at its end. Thus, for the same
combination of inputs, the output of a NOR gate will be opposite that of an OR gate.
Mathematically, Q = NOT(A + B).

EXOR Gate - The EXOR gate (for 'EXclusive OR' gate) is a logic gate that gives an
output of '1' when only one of its inputs is '1'.

Tri-State Output Circuit

Figure 1. Diagram for a Tri-State Output Circuit


Digital circuits often use circuit paths and nodes that are shared by different IC's connected
to them. For example, memory chips share the same data buses to send and receive data
from other chips. There is, therefore, a need to let these chips share data systematically so
that no data clashes ever occur on any bus, i.e., no conflicting data are put on the same bus
at the same time. This is achieved by employing what is known as the tri-state output.
A tri-state output is basically an output that can assume three states: logic '1', logic '0', and
a high-impedance state. The '1' and '0' states are of course used to represent real output
data on a shared bus. An output that is in a high-impedance state, as its name implies, is
simply one that is electrically isolated from the bus.
The circuit in Figure 1 shows how a digital tri-state output circuit may be implemented. If
the 'enable' pin is at logic '0', the base-emitter and base-collector junctions of both Q1 and
Q2 are reverse-biased regardless of what the input's logic state is, which prevents base
currents from flowing in both Q3 and Q4. This effectively turns off Q3 and Q4, putting the
output pin in high-impedance mode.
If the 'enable' pin is at logic '1' and the input is at logic '0', Q2 conducts and pulls Q4's base
voltage to '0', causing Q4 to conduct. At the same time, Q1 cuts off and prevents Q3 from
turning 'on'. With Q3 'off' and Q4 conducting, the output is pulled to ground. In short, the
logic '0' at the input appears at the output.
If the 'enable' pin is at logic '1' and the input is at logic '1', Q2 turns off which also turns off
Q4. At the same time, Q1 conducts and causes current to flow into Q3's base, turning it
'on'. With Q3 'on' and Q4 'off', the output is pulled to +5V. In short, the logic '1' at the
input appears at the output.

Flip-Flops and Latches


A flip-flop is a semiconductor device that has a digital output which can be toggled
between two stable states by providing it with the appropriate digital input signals. Once
the output is put in one state, it remains there until a change in the inputs causes it to toggle
again. This toggling between two logic states is also referred to as 'flip-flopping.'

There are several types of flip-flops, the common ones of which are described in the
following paragraphs.
The Set-Reset (S-R) Flip-flop
The Set-Reset (SR) flip-flop refers to a flip-flop that obeys the truth table shown in Table
1. It has two inputs, namely, a Set input, or S, and a Reset input, or R. It also has two
outputs, the main output Q and its complement Q'.

S   R   QN+1      QN+1'
0   0   QN        QN'
0   1   0         1
1   0   1         0
1   1   Not Used

Table 1. The S-R Flip-flop Truth Table


A simple representation of an S-R flip-flop is a pair of cross-coupled NOR gates, i.e., the
output of one gate is tied to one of the two inputs of the other gate and vice versa. The free
input of one NOR gate is used as R while the free input of the other gate is used as S.
The output of the gate with the 'R' input is used as the Q output while the output of the gate
with the 'S' input is used as the Q' output. Thus, resetting an S-R flip-flop's output Q to '0'
requires R=1 and S=0, while setting Q to '1' requires S=1 and R=0.
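
A behavioral sketch of the cross-coupled NOR arrangement described above may help; this is a software model (not a timing-accurate circuit simulation) in which the two NOR outputs are re-evaluated until they settle, mirroring how the latch holds its state when S = R = 0:

def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(s, r, q=0, qbar=1):
    """Iterate the cross-coupled NOR gates until the outputs settle.
    Returns (Q, Q') for the given S and R and the previous state."""
    for _ in range(4):                      # a few passes are enough to settle
        q_new    = nor(r, qbar)             # gate with the R input drives Q
        qbar_new = nor(s, q_new)            # gate with the S input drives Q'
        if (q_new, qbar_new) == (q, qbar):
            break
        q, qbar = q_new, qbar_new
    return q, qbar

q, qb = sr_latch(s=1, r=0)                  # set
print("after S=1, R=0:", q, qb)             # -> 1 0
q, qb = sr_latch(s=0, r=0, q=q, qbar=qb)    # hold
print("after S=0, R=0:", q, qb)             # -> 1 0 (state retained)
q, qb = sr_latch(s=0, r=1, q=q, qbar=qb)    # reset
print("after S=0, R=1:", q, qb)             # -> 0 1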
In real-world applications, flip-flops are 'clocked' so that one can control the exact moment
at which the output changes its state in response to changes in inputs. The clock digital
input of clocked flip-flops is usually denoted as C.
The JK Flip-flop


The JK flip-flop is a flip-flop that obeys the truth table in Table 2. The J-K flip-flop differs
from the S-R flip-flop in the sense that its next output is determined by its present output
state as well, aside from the states of its inputs. Note that in the J-K flip-flop, the S input
is now called the J input and the R input is now called the K input. Thus, in a JK flip-flop,
the output will not change if both J and K are '0', but will toggle to its complement if both
inputs are '1'.

J   K   QN+1
0   0   QN
0   1   0
1   0   1
1   1   QN'

Table 2. The J-K Flip-flop Truth Table
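
The next-state rule in Table 2 can be captured in a few lines; the sketch below is simply a behavioral restatement of that truth table:

def jk_next(q, j, k):
    """Next output of a clocked JK flip-flop given its present output Q."""
    if j == 0 and k == 0:
        return q          # hold
    if j == 0 and k == 1:
        return 0          # reset
    if j == 1 and k == 0:
        return 1          # set
    return 1 - q          # J = K = 1: toggle

q = 0
for j, k in [(1, 0), (0, 0), (1, 1), (1, 1), (0, 1)]:
    q = jk_next(q, j, k)
    print(f"J={j} K={k} -> Q={q}")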


The D-Type Flip-flop
The D-type flip-flop is just a clocked flip-flop with a single digital input D. Every time a
D-type flip-flop is clocked, its output follows whatever the state of D is.
A flip-flop may be used to store or 'lock' one bit of information. This locking of
information is also known as 'latching', so a flip-flop may be referred to as a single-bit
latch.
There now exist many digital IC's consisting of a set of several flip-flops, whose main
function is to latch several bits of data. These IC's are known as 'latches', and are used to
capture data from the data bus of a digital system at precise moments in time. In fact,
simple computer-controlled circuits use latches as I/O devices. The flip-flop is also the
basic building block of SRAM's.

Shift Registers
A register is a semiconductor device that is used for storing several bits of digital data. It
basically consists of a set of flip-flops, with each flip-flop representing one bit of the
register. Thus, an n-bit register has n flip-flops. A basic register is also known as a 'latch.'

A special type of register, known as the shift register, is used to pass or transfer bits of data
from one flip-flop to another. This process of transferring data bits from one flip-flop to
the next is known as 'shifting'. Shift registers are useful for transferring data in a serial
manner while allowing parallel access to the data.


A shift register is simply a set of flip-flops interconnected in such a way that the input to a
flip-flop is the output of the one before it. Clocking all the flip-flops at the same time will
cause the bits of data to shift or move in one direction (i.e., toward the last flip-flop). Figure 1 shows a simple implementation of a 4-bit shift register using D-type flip-flops.

Figure 1. A Simple Shift Register Consisting of D-type Flip-flops


Under its basic operation, the data bit of the last flip-flop is lost once it is clocked out. In
some applications there is a need to bring this back to the first flip-flop, in which case the
data will just be circulated within the shift register. A shift register connected this way is
known as an end-around-carry shift register, or simply 'ring counter'.
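
A minimal behavioral model of a 4-bit shift register is sketched below, with an optional end-around carry that turns it into the ring counter just described. This is an illustration only; the list index 0 is taken as the input end:

def shift(register, serial_in=0, ring=False):
    """Shift every bit one position toward the last flip-flop.
    If ring=True, the bit falling off the end re-enters at the input."""
    incoming = register[-1] if ring else serial_in
    return [incoming] + register[:-1]

reg = [1, 0, 0, 0]
print("ring counter:")
for _ in range(4):
    print(reg)
    reg = shift(reg, ring=True)
print("back to start:", reg)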
A more complicated version of a shift register is one that allows shifting in both directions,
left or right. It is aptly and quite descriptively referred to as the Shift-Right Shift-Left
Register. To accomplish this, a 'Mode' control line is added to the circuit. The state of this
'Mode' input determines whether the shift direction would be right or left.
Digital Counters
A digital counter, or simply counter, is a semiconductor device that is used for counting the
number of times that a digital event has occurred. The counter's output is incremented by
one least significant bit (LSB) every time the counter is clocked.

A simple implementation of a 4-bit counter is shown in Figure 1, which consists of 4
stages of cascaded J-K flip-flops. This is a binary counter, since the output is in binary
format, i.e., only two digits are used to represent the count, '1' and '0'. With
only 4 bits, it can only count up to '1111', or decimal number 15.
As one can see from Figure 1, the J and K inputs of all the flip-flops are tied to '1', so that
they will toggle between states every time they are clocked. Also, the output of each
flip-flop in the counter is used to clock the next flip-flop. As a result, each succeeding flip-flop
toggles between '1' and '0' at only half the frequency of the flip-flop before it.


Figure 1. A Simple Ripple Counter Consisting of J-K Flip-flops


Thus, in Figure 1's 4-bit example, the last flip-flop will only toggle after the first flip-flop
has already toggled 8 times. This type of binary counter is known as a 'serial', 'ripple', or
'asynchronous' counter. The name 'asynchronous' comes from the fact that this counter's
flip-flops are not being clocked at the same time.
A 4-bit counter, which has 16 unique states that it can count through, is also called a
modulo-16 counter, or mod-16 counter. By definition, a modulo-k or base-k counter is one
that returns to its initial state after k cycles of the input waveform. A counter that has N
flip-flops is a modulo-2^N counter.
An asynchronous counter has a serious drawback - its speed is limited by the cumulative
propagation times of the cascaded flip-flops. A counter that has N flip-flops, each of
which has a propagation time t, must therefore wait for a duration equal to N x t before it
can undergo another clocked transition.
A better counter, therefore, is one whose flip-flops are clocked at the same time. Such a
counter is known as a synchronous counter. A simple 4-bit synchronous counter is shown
in Figure 2.
Not all counters with N flip-flops are designed to go through all of their 2^N possible states of
count. In fact, digital counters can be made to count in decimal by using logic gates
to force them to reset when the count reaches decimal 10. Counters used in this
manner are said to be binary-coded decimal (BCD) counters.
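
A behavioral sketch of this counting action is shown below: a mod-16 binary count and the decimal (BCD-style) variant that is forced back to zero when the count reaches ten. It models the counting sequence only, not the flip-flop wiring:

def count(modulus, clock_pulses):
    """Return the sequence of counter states for the given number of clocks."""
    states, value = [], 0
    for _ in range(clock_pulses):
        states.append(value)
        value = (value + 1) % modulus     # reset logic: wrap at the modulus
    return states

print("mod-16 counter:", count(16, 18))        # wraps back to 0 after 15
print("BCD (mod-10) counter:", count(10, 12))  # forced back to 0 at ten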

Figure 2. A Simple Synchronous Counter Consisting of J-K Flip-flops and AND gates
Digital Encoders/Decoders


Multiplexing is defined as the process of feeding several independent signals to a common
load, one at a time. The device or switching circuitry used to select and connect one of
these several signals to the load at any one time is known as a multiplexer.

The reverse function of multiplexing, known as demultiplexing, pertains to the process of
feeding several independent loads with signals coming from a common signal source, one
at a time. A device used for demultiplexing is known as a demultiplexer.
Multiplexing and demultiplexing, therefore, allow the efficient use of common circuits to
feed a common load with signals from several signal sources, and to feed several loads
from a single, common signal source, respectively.
In digital circuits, the term 'multiplexing' is also sometimes used to refer to the process of
encoding, which is basically the generation of a digital code to indicate which of several
input lines is active. An encoder or multiplexer is therefore a digital IC that outputs a
digital code based on which of its several digital inputs is enabled.
On the other hand, the term 'demultiplexing' in digital electronics is also used to refer to
'decoding', which is the process of activating one of several mutually-exclusive output
lines, based on the digital code present at the binary-weighted inputs of the decoding
circuit, or decoder. A decoder or demultiplexer is therefore a digital IC that accepts a
digital code consisting of two or more bits at its inputs, and activates or enables one of its
several digital output lines depending on the value of the code.
Multiplexing and demultiplexing are used in digital electronics to allow several chips to
share common signal buses. In demultiplexers, for instance, the output lines may be used
to enable memory chips that share a common data bus, ensuring that only one memory
chip is enabled at a time in order to prevent data clashes between the chips.
If a demultiplexer or decoder has 2^N output lines, then it has N input lines. A common
example of a decoder/demultiplexer IC is the 74LS138, which is a Low-Power Schottky
TTL device that has 3 input lines and 8 output lines. Of course, a decoder IC such as the
74LS138 also has chip control lines that need to be 'enabled' for the decoding function to
take place.
In the case of the 74LS138, these control lines consist of one active high control line (G1,
pin 6) and two active-low control lines (G2A, pin 4 and G2B, pin 5). Thus, the 74LS138
will only be in its 'decoding' mode if G1 is at logic '1' and G2A and G2B are at logic '0'.
The 74LS138, whose generic product name is '3-to-8 Line Decoder/Multiplexer', obeys the
truth table shown in Table 1. The outputs of the 74LS138 are 'active-low', i.e., the enabled
output goes to logic '0' while all the other outputs are at logic '1'.
Table 1. Truth Table for the 74LS138, a 3-to-8 Line Decoder
G1  G2A  G2B    A2  A1  A0    Y0  Y1  Y2  Y3  Y4  Y5  Y6  Y7
0   X    X      X   X   X     1   1   1   1   1   1   1   1
X   1    1      X   X   X     1   1   1   1   1   1   1   1
1   0    0      0   0   0     0   1   1   1   1   1   1   1
1   0    0      0   0   1     1   0   1   1   1   1   1   1
1   0    0      0   1   0     1   1   0   1   1   1   1   1
1   0    0      0   1   1     1   1   1   0   1   1   1   1
1   0    0      1   0   0     1   1   1   1   0   1   1   1
1   0    0      1   0   1     1   1   1   1   1   0   1   1
1   0    0      1   1   0     1   1   1   1   1   1   0   1
1   0    0      1   1   1     1   1   1   1   1   1   1   0
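
The decoding behavior summarized in Table 1 can be sketched in a few lines of Python. The function below mimics a 74LS138-style 3-to-8 decoder with active-low outputs and the G1/G2A/G2B enable conditions described above; it is a behavioral model, not a pin-accurate one:

def decode_3to8(a2, a1, a0, g1=1, g2a=0, g2b=0):
    """Return the eight active-low outputs Y0..Y7 of a 3-to-8 decoder."""
    outputs = [1] * 8                      # all outputs inactive (logic '1')
    if g1 == 1 and g2a == 0 and g2b == 0:  # decoding only when enabled
        outputs[(a2 << 2) | (a1 << 1) | a0] = 0
    return outputs

print(decode_3to8(1, 0, 1))        # -> Y5 low: [1, 1, 1, 1, 1, 0, 1, 1]
print(decode_3to8(1, 0, 1, g1=0))  # disabled: all outputs high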

If a multiplexer or encoder has N output lines, then it has 2^N input lines. A common
example of an encoder/multiplexer IC is the 74LS148, which is a Low-Power Schottky
TTL device that has 8 input lines and 3 output lines. The 74LS148 is a priority encoder,
which means that if more than one of its inputs are active, then the active input line with
the highest binary weight will be given priority, and the output of the encoder will depend
on this prioritized input. Table 2 shows the truth table for the 74LS148. Note that E0 and
GS are output pins while E1 is a control pin (input).
Table 2. Truth Table for the 74LS148, an 8-to-3 Line Priority Encoder
E1 D7 D6 D5 D4 D3 D2 D1 D0 A2 A1 A0 E0 GS
1   X  X  X  X  X  X  X  X    1  1  1    1  1
0   1  1  1  1  1  1  1  1    1  1  1    0  1
0   0  X  X  X  X  X  X  X    0  0  0    1  0
0   1  0  X  X  X  X  X  X    0  0  1    1  0
0   1  1  0  X  X  X  X  X    0  1  0    1  0
0   1  1  1  0  X  X  X  X    0  1  1    1  0
0   1  1  1  1  0  X  X  X    1  0  0    1  0
0   1  1  1  1  1  0  X  X    1  0  1    1  0
0   1  1  1  1  1  1  0  X    1  1  0    1  0
0   1  1  1  1  1  1  1  0    1  1  1    1  0
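
The priority behavior in Table 2 can be modeled in the same spirit: scan the active-low inputs from D7 (highest priority) downward and output the inverted binary code of the first active one. This is a behavioral sketch of a 74LS148-style encoder that, for brevity, ignores the E0 and GS cascade outputs:

def priority_encode(d, e1=0):
    """d is a list [D7, D6, ..., D0] of active-low inputs.
    Returns (A2, A1, A0), also active-low, or (1, 1, 1) if disabled or idle."""
    if e1 == 1:                                   # encoder disabled
        return (1, 1, 1)
    for index, level in zip(range(7, -1, -1), d): # D7 first: highest priority
        if level == 0:                            # active-low input asserted
            code = 7 - index                      # invert: D7 -> 000, D0 -> 111
            return ((code >> 2) & 1, (code >> 1) & 1, code & 1)
    return (1, 1, 1)                              # no input active

print(priority_encode([1, 1, 0, 1, 1, 0, 1, 1]))  # D5 and D2 low -> D5 wins: (0, 1, 0)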

SRAM's
Random Access Memory (RAM) refers to a read/write memory device that can read data
from or write data to any of its memory addresses, regardless of what memory address was
last accessed for reading or writing. RAM comes in two major classifications: Static
RAM, or SRAM, and Dynamic RAM, or DRAM.
SRAMs store data in flip-flops, which retain data as long as the SRAM is powered up.
DRAMs store data in cells that depend on capacitors, which need to be 'refreshed'
continuously since they are not able to retain data indefinitely even if the device is
continuously powered up.

A typical SRAM IC has address lines, data lines, and control lines. The address lines are
used to identify the location of the memory storage element(s) or cell(s) to be read from or
written to. The data lines contain the value of the data read or being written into the
memory cells accessed. The control lines are used to direct the sequence of steps needed
for the read or write operations of the SRAM.
The memory elements of an SRAM are arranged in an array of rows and columns. Each
row of memory cells shares a common 'Word Enable' line, while each column of cells shares
a common 'bit' line. The number of columns of such a memory array is known as the bit
width of each word.
The basic storage element of an SRAM is a circuit that consists of 4 to 6 transistors. This
multi-transistor circuit usually forms cross-coupled inverters that can hold a '1' or '0' state
as long as the circuit is powered up. A pair of cross-coupled inverters have the output of
one inverter going into the input of the other and vice versa, such that the output (and
input) of one inverter is the complement of that of the other.
This circuit doesn't need periodic refreshing or clocking in order to hold its data, making
the SRAM faster than a DRAM (which needs data refreshing). All it needs is a constant
supply of power. However, since the memory cell of an SRAM is more complex than that
of a DRAM, it eats up more space on the chip (which means that you get less memory per
given area), making SRAM's more expensive than DRAM's.
Data is written into an SRAM's storage element by setting the 'bit' line (usually referred to
as Dataj) to the data value to be written and then enabling the element's corresponding
word line. Asserting the 'Word Enable' line while driving the data bit and its complement
into the cross-coupled inverters of the storage element causes the data bit to overwrite the
previous state of the element. If a word consists of several bits, then the whole word may
be written with new data in one step if the new values are provided to all the bit lines at the
same time before the 'Word Enable' line is asserted.
Reading the content of an SRAM's storage element also requires the 'Word Enable' line to
be asserted. This time, however, the SRAM uses sense amplifiers to detect the voltage
difference between the voltage at Dataj and that of Dataj's complement at the outputs of
the cross-coupled inverters. If the former is greater than the latter, then the cell contains a
logic '1'. Otherwise, the cell contains a logic '0'.

DRAM's
Random Access Memory (RAM) refers to a read/write memory device that can read data
from or write data to any of its memory addresses, regardless of what memory address was
last accessed for reading or writing. RAM comes in two major classifications: Dynamic
RAM, or DRAM, and Static RAM, or SRAM.


DRAMs store data in cells that depend on capacitors, which need to be 'refreshed'
continuously since they are not able to retain data indefinitely even if the device is
continuously powered up. SRAMs, on the other hand, store data in flip-flops, which retain
data without refreshing as long as the SRAM is powered up.
DRAMs provide more memory per unit chip area compared to SRAMs, mainly because of
the much simpler structure of its storage element. Whereas an SRAM memory cell consists
of 4 to 6 transistors, a DRAM memory cell consists of only a single transistor that is paired
with a capacitor. The presence or absence of charge in the capacitor determines whether
the cell contains a '1' or a '0'. This single-transistor configuration is commonly referred to
as a 1-T memory cell.
A typical DRAM IC has address lines, data lines, and control lines. The address lines are
used to identify the location of the memory storage element(s) or cell(s) to be read from or
written to. The data lines contain the value of the data read or being written into the
memory cells accessed. The control lines are used to direct the sequence of steps needed
for the read and write operations of the DRAM.
The memory elements of a DRAM are arranged in an array of rows and columns. Each
row of memory cells shares a common 'word' line, while each column of cells shares a
common 'bit' line. Thus, the location of a memory cell in the array is the intersection of its
'word' and 'bit' lines. The number of columns of such a memory array is known as the bit
width of each word.
Just like an SRAM memory cell, a DRAM memory cell uses these 'word' and 'bit' lines for
its read and write operations. During a 'write' operation, the data to be written ('1' or '0') is
provided at the 'bit' line while the 'word line' is asserted. This turns on the access transistor
and allows the capacitor to charge up or discharge, depending on the state of the bit line.
During a 'read' operation, the 'word' line is also asserted, which turns on the access
transistor. The enabled transistor allows the voltage on the capacitor to be read by a
sensitive amplifier circuit through the 'bit' line. This sense circuit is able to determine
whether a '1' or '0' is stored in the memory cell by comparing the sensed capacitor voltage
against a threshold, i.e., 50% of the full-charge voltage. Thus, it is a '1' (charged capacitor)
if the charge is still more than 50% and a '0' (discharged capacitor) if it's less than that.
For DRAMs, the simple operation of reading the data of a memory cell is destructive to
the stored data. This is because the cell capacitor undergoes discharging every time it is
sensed through the 'bit' line. In fact, the stored charge in a DRAM cell decays over time
even if it doesn't undergo a 'read' operation. Thus, in order to preserve the data in a DRAM
cell, it has to undergo what is known as a 'refresh' operation.
A refresh operation is simply the process of reading a memory cell's content before it
disappears and then writing it back into the memory cell. Typically it is done every few
milliseconds per word. However, the refresh cycle itself is very short (in the order of
nanoseconds), since a DRAM IC contains thousands of words that need to be refreshed
regularly at that interval. The need for regular refreshing gave DRAMs the name
'dynamic'.
Aside from its memory array, a DRAM device also needs to have the following support
circuitries to accomplish its functions: 1) a decoding circuit for row address and column
address selection; 2) a counter for tracking the refresh operation sequence; 3) a sense
amplifier for reading and restoring the charge of each cell; and 4) a write enable circuit to
put the cell in 'write' mode, i.e., make it ready to accept a charge.
DRAMs are mainly used as a computer system's main memory, since they are denser
and less costly than SRAM's. However, they are not suited for speed-sensitive
applications such as cache memories since the dynamic refreshing required by them slows
down system operation. SRAM's are a better choice if speed is a major concern.

EPROM's
The term "EPROM" is the acronym for "Erasable Programmable Read-only Memory". As
its name implies, it is a semiconductor memory device that can be programmed with data
which can only be read, but not altered, by the application circuit. As such, programming
an EPROM generally takes place prior to its attachment to the application circuit. One of
the most common applications for an EPROM is as a BIOS chip of a personal computer,
which stores information about the computer's basic input/output system.

An EPROM is a non-volatile memory device, i.e., it can retain its stored data even if it is
powered off. Reprogramming an EPROM with new data is possible, but it has to undergo
a special data erasure process that employs ultraviolet (uv) light before it can be done.
There are some EPROMs though, known as one-time programmable (OTP) EPROMs, that
are designed to be non-reprogrammable as a cheaper alternative for storing specific bug-free data that never require any change.
An EPROM, just like a Flash memory IC, is a "floating gate" device, i.e., it is a device that
employs a 'floating gate' in each of its memory cells to store data. A "floating gate" is a
structure embedded within the dielectric layer that isolates the silicon channel and the
external gate of each memory cell transistor. It is designed to store charge, the amount of
which is used to represent whether the bit of data stored in the memory cell is a '1' or '0'.
The memory cells of a completely erased EPROM all contain a '1'. Programming a cell,
which entails charging up the cell's floating gate to a certain level, gives it a new value of
'0'. A memory cell sensor determines the amount of charge stored in the floating gate and
compares it with a given threshold. If the charge stored exceeds the threshold, then the
stored data is considered a '0', otherwise it is a '1'.


Programming an EPROM requires higher voltages than just reading it, since injection of
charge into the floating gate is needed. Modern EPROMs just need a 5-V supply (Vcc) to
be read, but require a second power supply (set to a higher voltage in the range of 12 to 25
volts during programming) to be programmed. This second supply voltage used during
programming is often referred to as the "Vpp." In some EPROMS, programming also
requires a higher voltage at Vcc aside from the Vpp voltage.
Just like other memory devices, a typical EPROM has data pins, address pins, and control
pins (CE and OE). The data pins are bi-directional, acting as inputs during programming
and as outputs during reading. The CE or "chip enable" pin is used for activating or
deactivating the entire EPROM chip itself, while the OE or "output enable" pin is used for
activating or deactivating the EPROM's output pins only.
Programming an EPROM basically requires: 1) setting the Vcc and Vpp to their 'program
mode' levels; 2) applying the addresses and input data at the address and data pins; and 3)
applying the required programming pulses in accordance with a programming 'algorithm'.
Erasure of an EPROM requires the dissipation of the stored charge inside the floating gate,
which is accomplished by exposing the cells to ultraviolet light with a wavelength of 2537
angstroms. This is why EPROMs have a glass window directly over the chip area - to
allow ultraviolet radiation to reach the memory cells. This glass window must be covered
after the chip has been programmed to protect the stored data from ambient light, which
can indeed erase EPROM data if given enough time. A wide range of EPROM uv erasers
are available in the market for erasing EPROMs conveniently.
Flash Memory
Flash Memory is a semiconductor memory device that is electrically erasable and
programmable in sections of memory called 'blocks'. In a flash memory, a whole block of
memory cells can be erased in a single action, or in a 'flash,' which is how this device got
its name. Flash memory is non-volatile, i.e., it can retain its memory contents even if it is
powered off.
A basic flash memory cell consists of a MOSFET that was modified to include an isolated
inner gate between its external gate and the silicon (see Figure 1). This inner gate is
known as a 'floating gate', which is the data-storing element of the memory cell. Flash
memory is not the first memory device to use a floating gate to store information. The uv-erasable EPROM, which preceded the Flash memory, is also a 'floating gate' memory
device.


Figure 1. A Typical Flash Memory Cell

Data is stored in a flash memory cell in the form of electrical charge accumulated inside
the floating gate. The amount of charge stored in the floating gate depends on the voltage
applied to the external gate of the memory cell that controls the flow of charge into or out
of the floating gate. The data contained in the cell depends on whether the voltage of the
stored charge exceeds a specified threshold voltage Vth or not.
Intel has developed flash memory technology wherein memory cells can hold two or more
bits of data instead of just one each. The trick is to take advantage of the analog nature of
the charge stored in the memory cell and allow it to charge to several different voltage
levels. Each voltage range to which the floating gate can charge can then be assigned its
own digital code. Thus, a 2-bit cell can distinguish 4 distinct voltage ranges, while a 3-bit
one can distinguish 8 of them. Intel calls this technology 'Multi-Level Cell (MLC)'
technology.
A typical MLC consists of a single transistor with direct electrical connections to its gate,
source, and drain that allow very precise control of the charging of the cell's floating gate.
For a multi-level cell to work, it must be able to deposit charge with precision, sense
charge with precision, and store charge over time. High-precision charging and charge
sensing are the key to a MLC's ability to distinguish several charge levels. Table 1
illustrates how a 2-bit multi-level cell assigns digital codes to 4 different charge voltage
levels.
Table 1. 2-Bit Intel MLC Digital Code Assignment

Charge Level    Digital Code
Level 3         00
Level 2         01
Level 1         10
Level 0         11
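
The level-to-code mapping in Table 1 amounts to comparing the sensed value against a ladder of reference levels. The sketch below illustrates the idea with made-up threshold voltages; the real reference levels are device-specific:

def mlc_read(cell_voltage, thresholds=(1.0, 2.0, 3.0)):
    """Classify a sensed cell voltage into one of four charge levels and
    return the corresponding 2-bit code (level 0 -> '11' ... level 3 -> '00').
    The threshold values used here are illustrative only."""
    level = sum(cell_voltage > t for t in thresholds)   # 0..3
    codes = {0: "11", 1: "10", 2: "01", 3: "00"}
    return level, codes[level]

for v in (0.4, 1.5, 2.6, 3.7):
    level, code = mlc_read(v)
    print(f"sensed {v:.1f} V -> charge level {level}, stored bits {code}")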

MLC programming is accomplished by charging the floating gate through a precise
process of Channel Hot-Electron (CHE) injection. During programming, the source of the
MLC transistor is usually grounded. Column decoding of the MLC provides direct bitline
connection to the drain which is pulsed at a constant voltage. Row decoding of the MLC,
on the other hand, provides direct wordline connection that causes the MLC transistor gate
to be connected to an internally generated supply voltage. This direct and precise control
of the drain and gate is critical to the correct charging of the floating gate and, hence,
correct storage of information.
Reading the contents of multi-level cells involves highly precise sensing of the amount of
charge in the floating gate, measured in terms of cell currents that have an inverse
relationship with the Vth. The sensed currents are compared to reference currents, with the
comparison results inputted to a logic circuit that encodes them into the corresponding
digital data.
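
To make the read operation concrete, the short Python sketch below maps a sensed floating-gate voltage to a 2-bit code by comparing it against a set of reference levels. The threshold values used here are arbitrary illustrations, not Intel's actual specifications; only the level-to-code mapping follows Table 1.

# Hypothetical reference levels (in volts) separating the four charge states;
# the values are assumed for illustration only.
REFERENCES = [1.0, 2.0, 3.0]
CODES = {3: "00", 2: "01", 1: "10", 0: "11"}   # level-to-code mapping from Table 1

def read_mlc(sensed_voltage):
    # Count how many reference levels the sensed voltage exceeds to get the charge level.
    level = sum(1 for ref in REFERENCES if sensed_voltage >= ref)
    return CODES[level]

print(read_mlc(0.4))   # below all references -> Level 0 -> '11'
print(read_mlc(2.5))   # above two references -> Level 2 -> '01'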
Flash memory erasure is achieved by 'discharging' the floating gate through a phenomenon
known as Fowler-Nordheim tunneling, wherein electrons from the floating gate pass
through the thin dielectric layer and get dissipated at the source of the memory cell
transistor.
Flash memory is used in a variety of applications such as: personal and notebook
computers, digital cell phones, digital cameras, portable memory devices, LAN switches,
embedded controllers, etc.
Microprocessors
A microprocessor is a programmable integrated circuit that is used for executing
instructions to process digital data or exercise digital control over other devices. It is
employed primarily as the central processing unit (CPU) of a computer system. The
complexity of present-day microprocessors makes even a modest description of how they
work beyond the scope of this page. Thus, what is presented below is the architecture of a
typical microprocessor from a couple of decades ago. The following discussion, simple as
it is, nonetheless gives a reasonable understanding of how microprocessors in general
work.
As mentioned, a microprocessor is used to execute a series of steps or instructions, which
collectively constitutes a 'program'. Every microprocessor has a unique set of instructions
that it can execute. This set of instructions is known as its, well, instruction set. Every
instruction in the instruction set does something unique, and has different requirements in
terms of which part(s) of the microprocessor to utilize or what data to work on.
A basic microprocessor circuit has the following parts: 1) an Arithmetic and Logic Unit
(ALU), which is where the arithmetic and logic operations of the microprocessor take
place; 2) a data bus system where data that need to be processed are transported; 3) an
address bus system that provides the address of the memory location being accessed; 4) a
control unit for orchestrating the program execution of the microprocessor; 5) an
instruction register/decoder where instructions are loaded one at a time and 'interpreted'; 6)
a program counter that indicates the memory address where the next instruction will come
from; and 7) various registers, flags, and pointers.
A microprocessor executes a program stored in memory by fetching the instructions of the
program (and whatever data they require) one at a time and performing these instructions.
Memory in this context basically refers to external memory devices that complement the
microprocessor and the input/output devices of the computer system. The manner in which
the next instruction will be executed depends on the results of the last operation. Thus, the
output of the microprocessor depends on the instructions and the input data provided to it.
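
The fetch-decode-execute cycle just described can be sketched in a few lines of Python using a toy processor with an accumulator, a program counter, and a tiny invented instruction set. The instruction names and the memory layout are made up purely for illustration and do not correspond to any real microprocessor.

# A toy fetch-decode-execute loop; addresses 0-3 hold the program, 10-12 hold data.
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", None),
          10: 5, 11: 7, 12: 0}
acc = 0   # accumulator
pc = 0    # program counter

while True:
    opcode, addr = memory[pc]   # fetch the instruction pointed to by the program counter
    pc += 1                     # the program counter advances after every fetch
    if opcode == "LOAD":        # decode and execute
        acc = memory[addr]
    elif opcode == "ADD":
        acc += memory[addr]
    elif opcode == "STORE":
        memory[addr] = acc
    elif opcode == "HALT":
        break

print(memory[12])   # prints 12, i.e., 5 + 7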
Microprocessors with different ALU designs have different arithmetic and logic
capabilities. For instance, some ALU's can handle all the basic arithmetic functions
directly, while the simplest ones only perform addition and shift operations, which are also
the steps used to emulate all other arithmetic functions such as multiplication and division.
The logic capability of the ALU also varies from one microprocessor to another, but
almost all ALU's can perform the AND, OR, and EXOR operations.
The instructions being followed by a microprocessor come in the form of instruction
codes. Instruction execution can not occur haphazardly, and must be controlled precisely
as it happens. The control unit of the microprocessor is the one responsible for controlling
the sequencing of events needed for the execution of an instruction, as well as the timing
of this sequence of events. The control unit is complemented by a clock or timing
generator that helps it trigger the occurrence of each event at the correct point in time.
The program counter of a microprocessor indicates where the next instruction bytes are
located in memory. It is incremented by 1 by the control unit every time an instruction code is
transferred from memory to the microprocessor.
A microprocessor uses the instruction register to store the instruction code last fetched
from memory. The first byte of an instruction code is fed by the instruction register to the
instruction decoder, which 'decodes' it to determine which operation must be carried out,
how many bytes of data will be processed, and where to get these data. After instruction
decoding, the execution of the instruction proceeds.
Registers are elements composed of a set of flip-flops where data are stored temporarily
for subsequent processing or transfer, as the microprocessor goes about its task of
executing its instructions one at a time. The accumulator is a special register used by the
microprocessor for holding operands, or data to be manipulated by the ALU. Aside from
the accumulator, several general-purpose registers are also available to the microprocessor
for holding data that need to be operated on.
Microprocessors also have Status Flags, which are really just special registers for storing
the state of a condition that results from a previous operation. Examples of status flags
include: 1) the Carry Status Flag, which indicates if there's a need to do a 'carry' after
addition or a 'borrow' after subtraction; 2) the Zero Status Flag, which indicates if a given
operation in the ALU results in a 'zero'; 3) the Sign Status Flag, which indicates whether
the result of an ALU operation is negative or positive; 4) the Overflow Status Flag, which
indicates if an operation produces a result that can't fit into the specified word length; and
5) the Parity Status Flag, a flag (used in error detection) that is set if the result of an
operation contains an even number of 1's.
The microprocessor has been around for more than two decades already. It now comes in
many forms, sizes and levels of sophistication, powering all kinds of applications that rely
on 'computer control'. Although it is the central processing unit of a computer system, it
also needs to interact with other semiconductor devices in order to perform its functions.
These 'other' devices include the memory and input/output devices that constitute the rest
of the computer system.
Digital-to-Analog Converters (DAC's)
A digital-to-analog converter, or simply DAC, is a circuit that is used to convert a digital
code into an analog signal. Digital-to-analog conversion is the primary means by which
digital equipment such as computer-based systems are able to translate digital data into
real-world signals that are more understandable to or useable by humans, such as music,
speech, pictures, video, and the like. It also allows digital control of machines, equipment,
household appliances, and the like.

A typical digital-to-analog converter outputs an analog signal, which is usually voltage or current, that is proportional to the value of the digital code provided to its inputs. Most DAC's have several digital input pins to receive all the bits of the input digital code in
parallel (at the same time). Some DAC's, however, are designed to receive the input
digital data in serial form (one bit at a time), so these only have a single digital input pin.
A simple DAC may be implemented using an op-amp circuit known as a summer, so
named because its output voltage is the sum of its input voltages. Each of its inputs uses a
resistor of different binary weight, such that if R0 = R, then R1 = R/2, R2 = R/4, R3 = R/8, ..., R(N-1) = R/2^(N-1). The output of a summer circuit with N bits is:
Vo = -VR (Rf / R) (S(N-1)·2^(N-1) + S(N-2)·2^(N-2) + ... + S0·2^0)
where VR is the voltage to which a bit is connected when the digital input is '1'. A digital input is '0' if the bit is connected to 0V (ground). A 4-bit summer circuit is shown in Figure 1.

Figure 1. An Op Amp Summer Circuit Used as a DAC; where R0 = 2R1 = 4R2 = 8R3
One problem with this circuit is the wide range of resistor values needed to build a DAC
with a high number of digital inputs. Putting thin-film resistors that come in a wide range
of values (e.g., from a few kΩ to several MΩ) on a single semiconductor chip can be
very difficult, especially if high accuracy and stability are required.
A better-designed and more commonly-used circuit for digital-to-analog conversion is
known as the R-2R ladder DAC, a 4-bit version of which is shown in Fig. 2. It consists of
a network of resistors with only two values, R and 2R. The input SN to bit N is '1' if it is
connected to a voltage VR and '0' if it is grounded. Thevenin's Theorem may be applied to
prove that the output Vo of an R-2R ladder DAC with N bits is:
Vo = (VR / 2^N) (S(N-1)·2^(N-1) + S(N-2)·2^(N-2) + ... + S0·2^0).
Thus, the output of the R-2R ladder in Figure 2 is Vo = (VR / 2^4) (S3·2^3 + S2·2^2 + S1·2^1 + S0·2^0), or Vo = VR (S3/2 + S2/4 + S1/8 + S0/16). In effect, the contribution of each bit to the analog output is proportional to its binary weight.
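
As a quick numeric check of the formula above, the following Python sketch computes the output of an ideal N-bit R-2R ladder DAC; the reference voltage VR = 5 V is an arbitrary choice made only for the example.

def r2r_dac_output(bits, vr=5.0):
    # bits is a list [S(N-1), ..., S1, S0] with the MSB first; vr is the reference voltage.
    n = len(bits)
    code = sum(s * 2 ** (n - 1 - k) for k, s in enumerate(bits))
    return vr * code / 2 ** n   # Vo = (VR / 2^N) * (digital code)

print(r2r_dac_output([1, 0, 1, 0]))   # code 1010 -> 5 V * 10/16 = 3.125 V
print(r2r_dac_output([1, 1, 1, 1]))   # full scale -> 5 V * 15/16 = 4.6875 V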

Figure 2. A 4-bit R-2R Ladder DAC


Analog-to-Digital Converters (ADC's)
An analog-to-digital converter, or simply ADC, is a circuit that is used to convert an
analog signal into a digital code. In the real world, most of the signals sensed and
processed by humans are analog signals. Analog-to-digital conversion is the primary
means by which analog signals are converted into digital data that can be processed by
computers for various purposes.
An analog signal is a signal that may assume any value within a continuous range.
Examples of analog signals commonly encountered every day are sound, light,
temperature, and pressure, all of which may be represented electrically by an analog
voltage or current. A device that is used to convert an analog signal into an analog voltage
or current is known as a transducer. An analog-to-digital converter is used to further
translate this analog voltage or current into digital codes that consist of 1's and 0's.
A typical ADC, therefore, has an analog input and a digital output, which may either be
'serial' (consisting of just one output pin that delivers the output code one bit at a time) or
'parallel' (consisting of several output pins that deliver all the bits of the output code at the
same time).

Analog-to-digital converters come in many forms. One example is the parallel comparator-type ADC, which basically consists of: 1) a set of comparators that compare the input analog voltage to different values of fixed voltages; 2) a corresponding set of D-type flip-flops that hold the digital outputs of the comparators; and 3) an encoder that converts the outputs of the D-type flip-flops into the final output digital code.
Another implementation of the ADC is known as the successive-approximation ADC. This
circuit consists of: 1) a sample and hold circuit to accept the analog input Va; 2) a
successive approximation register (SAR) consisting of clocked flip-flops and gates
designed to systematically and progressively approximate the digital code corresponding
to the analog input Va; 3) an internal reference DAC that gets its digital inputs from the
SAR; and 4) a voltage comparator that compares the analog output of the internal DAC to
the analog input Va.
In a successive approximation ADC, the SAR generates a series of digital codes as it is
clocked, which are fed into the reference DAC one at a time. The digital codes are
generated in binary search fashion, i.e., the bits are toggled to logic '1' one at a time
starting with the MSB. If the bit toggled to '1' causes the DAC to output an analog voltage
that exceeds Va, then it is returned to '0', otherwise it is kept at logic '1'.
Eventually all the bits would have been exercised, and the resulting digital code is the one
that causes the DAC to produce an analog voltage that is as close to Va as possible without
exceeding it. Thus, this will be the same digital code released by the ADC to its outputs,
since it was basically the code that produced a voltage closest to Va using the internal
reference DAC.
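
The binary-search behavior of the SAR can be illustrated with a short Python sketch. The internal DAC is modeled as an ideal converter (the reference voltage multiplied by the code over 2^N), and the 8-bit width and 5 V reference are assumptions made only for this example.

def sar_adc(va, vref=5.0, nbits=8):
    # Toggle the bits to '1' one at a time starting with the MSB; keep a bit only if
    # the ideal internal DAC output for the trial code does not exceed the input va.
    code = 0
    for bit in range(nbits - 1, -1, -1):
        trial = code | (1 << bit)
        dac_out = vref * trial / 2 ** nbits   # ideal internal reference DAC
        if dac_out <= va:
            code = trial
    return code

print(sar_adc(3.3))   # 168, since 5 V * 168/256 = 3.28125 V is the closest step not exceeding 3.3 V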
Another ADC design that operates similarly to the successive approximation ADC is the
counting ADC. It also employs an internal reference DAC, except that in this case it is fed
with digital data that are generated by a counter. As the counter is clocked, the digital
code fed to the DAC increases which causes the DAC to increase its analog output
proportionately. Eventually the DAC output exceeds the analog input Va and the counter
is stopped. The digital code fed to the DAC at this point becomes the output of the
counting ADC itself.
The ADC's discussed earlier all employ what is referred to as Pulse Code Modulation
(PCM), wherein an N-bit digital code is assigned to each sample taken from the analog
signal. Another major class of ADC's employs a process known as Delta Modulation
(DM) instead of PCM to digitize analog signals.
A basic linear DM ADC has an internal processor that generates an analog signal that
approximates the analog signal being digitized. It also has a comparator for comparing the
processor's analog output to the actual input analog voltage. If the comparator determines
that the analog input is greater than the processor output, then the processor increases its
output by a step S0; otherwise the processor output is decreased by S0.
One strength of linear DM is the ease by which the analog signal can be reconstructed
from the digitized signal. The drawback of linear DM is that its output can only change in
steps of just one size, S0. This limits the slope of the digitized signal, which becomes a
problem when the input analog signal is changing rapidly.
Adaptive Delta Modulation (ADM) addresses the limitation of linear DM ADC's by
allowing variations in the step sizes at which the digitized signal changes. Under ADM,
the step size by which the digital output of the ADC changes increases whenever the
analog signal being digitized is changing rapidly.
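
A minimal Python sketch of linear delta modulation is shown below; the fixed step size and the sample values are arbitrary, and the rapidly rising input is chosen to show the slope-overload problem that ADM is meant to address.

def linear_dm(samples, step=0.1):
    # One comparator decision per sample: raise or lower the internal approximation
    # by a fixed step depending on whether it is below or above the input.
    approx, bits, track = 0.0, [], []
    for x in samples:
        bit = 1 if x > approx else 0
        approx += step if bit else -step
        bits.append(bit)
        track.append(approx)
    return bits, track

signal = [0.1, 0.5, 1.0, 1.5, 2.0]   # a rapidly rising input
print(linear_dm(signal)[1])          # roughly [0.1, 0.2, 0.3, 0.4, 0.5] - the staircase falls behind (slope overload)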

Digital Signal Processors (DSP's)


A Digital Signal Processor, or DSP, is a circuit used for processing signals digitally. A
signal, in this context, traditionally refers to an analog signal (such as analog voltage) that
has been converted into a digital one so that it can be processed mathematically.
Nowadays, however, almost every piece of information has been digitized, so a digital
signal may be any stream of digital data - digital audio/video data, betting odds, or even
the weight of clothes in a washing machine. Analysis of such digital signals for a variety
of purposes can be easily accomplished by a DSP.

Signal processing encompasses a large variety of actions performed on signals - filtering, encoding/decoding, compression/decompression, amplification, modulation, level
detection, pattern matching, mathematical/logical operations, and much more. These
processes are performed on a signal for a number of reasons: to enhance it; reduce its
component noise; make its transmission and reception more effective, efficient, and faster;
transform it; make it interact with other signals in special ways; facilitate its use in digital
analysis, monitoring, or control; etc. A DSP has built-in capabilities to perform these
signal processing functions easily.
A DSP is very similar to a microprocessor. In fact, it is regarded by many as a special
microprocessor created particularly to process signals. Both a microprocessor and a DSP
can execute instructions, accept input digital data, perform operations on them, and output
digital data. The fundamental difference between a DSP and a microprocessor is what their
built-in processing capabilities were designed for.
A DSP is a highly-specialized device that's equipped with a multitude of mathematical
functions specifically intended for processing a digital signal, whereas a microprocessor is
designed to be a general-purpose device. A microprocessor would be able to handle many
different applications, such as word processing, spreadsheets, databases, and, well, even
digital signal processing. However, it can not be as good as a DSP when it comes to
serious DSP applications.
Current trends in technology seem to indicate the possibility though that the distinction
between a DSP and a microprocessor will soon be gone. Microprocessors are becoming
more and more sophisticated, to the point that some of them are now equipped with true DSP
capabilities. It will just be a matter of time before high-end microprocessors will have the
capability to perform high-end signal processing, or any high-end task for that matter.
A DSP is also very similar to a microprocessor as far as architecture is concerned, i.e., it
has many parts that are also seen in a microprocessor, such as data and address buses, an
Arithmetic-Logic Unit (ALU), a program control unit, assorted flags and registers, etc. It
also has its own native instruction set, which defines what it can be programmed to do.
Programming DSP's is no longer complicated either, with the existence of various
development kits in the market that support DSP software development using high-level
programming languages such as C.
Many DSP applications deal with real-world analog signals (such as sound, light, analog
voltage, analog current, temperature, pressure). Since a DSP can only process digital
signals, there is a need to convert analog signals first into digital data before they can be
processed by a DSP. After processing, there is again a need for the DSP to convert these
digital data back into the original real-world analog signal format. In such applications, the
DSP must be supported by an analog-to-digital converter (ADC) and a digital-to-analog
converter (DAC), which will perform the required analog-digital and digital-analog
conversions, respectively.
Applications where DSP's are commonly used include: 1) digital sound and image
processing; 2) digital communications; 3) consumer electronics (e.g., mobile phones,
faxes, computer peripherals such as modems and sound cards, and digital entertainment
systems such as DVD players and digital TV); 4) medical electronics; and 5) industrial and
automation electronics.

CIRCUIT ANALYSIS PRINCIPLES


Ohm's Law
Ohm's Law states that for a conductor maintained at a constant temperature, the voltage V
across it is proportional to the current I flowing through it. The constant of proportionality
is known as the resistance R of the conductor. Thus, V = R x I. Ohm's Law is more
commonly written as V = IR.


Figure 1. Ohm's Law


Consequently, the current I flowing through a resistor when a voltage V is applied across it
may be calculated using Ohm's Law: I = V/R where R is the resistance of the resistor. If a
current I is forced to flow through a resistor with resistance R, the voltage drop V across
the resistor would be equal to V = I x R. If a voltage V is applied across a resistor of
unknown resistance and this applied voltage causes a current I to flow through it, then the
resistance R of the resistor is given by: R = V/I.
Ohm's Law applies not just to a single resistor (a component exhibiting resistance), but to
entire circuits consisting of many resistors connected in various ways. A complex circuit
of resistors connected in any manner and across which a voltage V is applied may be
mathematically analyzed as a single equivalent resistor Reff, through which a current I is
made to flow by the applied voltage V. Ohm's Law may be applied to this equivalent
resistor, i.e., V = IReff, where V is the voltage applied across the circuit and I is the current
consumed by the circuit.
Kirchhoff's Current Law (KCL)
Kirchhoff's Current Law (KCL), which is also known as Kirchhoff's First Law, states that
the total electric current flowing into a node or junction of a circuit through all possible
routes leading to the junction is zero. Worded in another way, KCL states that the sum of
all the currents flowing towards a node or junction of a circuit is equal to the sum of all the
currents leaving it.

Figure 1. Kirchhoff's Current Law


The implication of Kirchhoff's Current Law is that charge can not accumulate at a node or
junction of a circuit, and that charge is a conserved quantity. Kirchhoff's Current Law
complements Kirchhoff's Voltage Law in formalizing the algebra for circuit analysis.
Kirchhoff's Voltage Law
Kirchhoff's Voltage Law (KVL), which is also known as Kirchhoff's Second Law, states
that the sum of all voltage sources in a circuit loop equals the sum of all voltage drops in
the same loop. This law is also sometimes stated as follows: the net electromotive force
(e.m.f.) within a circuit loop equals the sum of all potential differences around the loop. In
this statement, e.m.f. refers to a source of electrical energy while potential differences are
drains of electrical energy.

Figure 1. Kirchhoff's Voltage Law


Kirchhoff's Voltage Law is basically a variant of the Law of Conservation of Energy as
applied to electrical circuits. Kirchhoff's Voltage Law complements Kirchhoff's Current
Law in formalizing the algebra for circuit analysis.
Superposition Principle
The Superposition Principle states that in any linear network composed of resistors and
voltage sources, the voltage across a resistor is simply the sum of all the voltages
individually applied to it by each voltage source in the circuit. The superposition principle
is a useful method for finding the voltage across any resistor in a network where more than
one voltage source is present. A voltage source in this context may refer to any source of
voltage or voltage signal, e.g., power supplies, batteries, oscillators, signal generators, etc.
To apply the superposition principle, one has to calculate the individual 'voltage
contribution' of each voltage source to the resistor of interest. To determine the voltage
contribution of a source, simply 'zero out' all the other sources by replacing them with a
short circuit, or with their internal resistances if these are known. Kirchhoff's Voltage Law
and Ohm's Law are then applied to calculate the voltage across the resistor with this single
source present. Once all the voltage contributions have been determined, the voltage
across the resistor is simply the sum of all the individual voltages contributed by the
voltage sources.

Figure 1. A circuit with two voltage sources; find the voltage across the 3K resistor

Figure 2. Contribution of the 5V source to the voltage across the 3K resistor

Figure 3. Contribution of the 4V source to the voltage across the 3K resistor
To illustrate the Superposition Principle, consider the circuit in Figure 1. The voltage
across the 3K resistor may be calculated by getting the individual voltages applied by the
5V and 4V sources across it. Figure 2 shows the equivalent circuit wherein the 4V source
is 'zeroed out', i.e., replaced by a short circuit. This circuit gives the voltage V1 applied by
the 5V source across the 3K resistor, wherein V1 = 5V (0.75/2.75) = 1.364 V. Figure 3
shows the equivalent circuit wherein the 5V source is replaced by a short circuit. This
circuit gives the voltage V2 applied by the 4V source across the 3K resistor, wherein V2 =
4V (1.2/2.2) = 2.182 V. Thus, from the Superposition Principle, the voltage across the 3K
resistor in Figure 1 is 1.364V + 2.182V = 3.546V.
Similarly, the voltage across the 2K resistor is -5V(2/2.75) + 4V(1.2/2.2) = -3.636 + 2.182 = -1.454V, and the voltage across the 1K resistor is 5V(0.75/2.75) - 4V(1/2.2) = 1.364 - 1.818 = -0.454V.
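
The superposition result can be cross-checked with a single node equation. The topology assumed here, inferred from the arithmetic above since the figure itself is not reproduced, is the 5V source feeding the node through the 2K resistor, the 4V source feeding it through the 1K resistor, and the 3K resistor from the node to ground.

# KCL at the node across the 3K resistor (resistances in kilo-ohms, currents in mA):
# (Va - 5)/2 + Va/3 + (Va - 4)/1 = 0
va = (5 / 2 + 4 / 1) / (1 / 2 + 1 / 3 + 1 / 1)
print(round(va, 3))   # 3.545 V, agreeing with the superposition result above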
Thevenin's Theorem
Thevenin's Theorem states that for any two linear networks A and B that are connected by
two conductors but not magnetically coupled, network A may be replaced by a simpler
equivalent network for the purpose of simpler circuit analysis computations with respect to
network B.

The equivalent network is known as the Thevenin equivalent network, and it consists of a
voltage source V(s) in series with an impedance Z(s). The voltage source V(s) is the
transform of the voltage at the two open terminals of network A, while the impedance
Z(s) is the transform impedance at the two terminals of A with all independent sources
reduced to zero. Related to Thevenin's Theorem is Norton's Theorem, which is just the
current representation of Thevenin's Theorem.
One must know the characteristics of networks to which Thevenin's Theorem is
applicable. Let A and B be the two connected but non-magnetically coupled linear
networks, with current i(t) or I(s) flowing from A to B, as shown in Figure 1. Network A
must have the following characteristics: 1) it only has linear passive elements; 2) it may
contain independent voltage and current sources as well as dependent (or controlled)
sources; 3) it may have initial conditions present (e.g., voltages in capacitors or currents in
inductors); 4) it has no magnetic coupling to B. On the other hand, network B may be
characterized as follows: 1) it has linear passive elements only; 2) it has no sources; 3) it
has no initial conditions; and 4) it has no magnetic coupling to A.


Figure 1. Thevenin Equivalent Circuit


Given the networks A and B characterized above, Thevenin's Theorem is applied by
finding a network C that's equivalent to A, wherein C consists only of a voltage source
V(s) in series with an impedance Z(s), and replacing A with C.
To find the Thevenin equivalent circuit for network A, the following steps need to be done:
1) zero out all independent sources in network A by opening all current sources and
shorting all voltage sources (dependent sources are not changed); 2) determine Z(s) by
calculating the equivalent passive impedance of network A as seen from the open terminals
of A (bear in mind that all the independent sources are zeroed out at this point); and 3)
apply a voltage source V(s) in series with Z(s), with V(s) chosen such that the current
from C to B (once they're connected) will still be I(s). In step 2, it is assumed that A has no
dependent sources, since if A has dependent sources, then Z(s) must be computed based
on the fact that A is an active network.
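
As a minimal worked example of these steps (using an assumed source-and-divider network, not one taken from the figures), consider a network A made up of a DC source Vs in series with R1, with R2 from the output terminal to ground. Zeroing the source leaves R1 in parallel with R2 at the terminals, and the open-terminal voltage is the divider voltage:

def thevenin_divider(vs, r1, r2):
    # Network A: source vs in series with r1, with r2 from the output terminal to ground.
    vth = vs * r2 / (r1 + r2)   # open-circuit voltage at the terminals
    zth = r1 * r2 / (r1 + r2)   # with the source shorted, r1 appears in parallel with r2
    return vth, zth

print(thevenin_divider(10.0, 1000.0, 3000.0))   # (7.5, 750.0): 7.5 V behind 750 ohms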
Thevenin's theorem is useful in simplifying the analysis when only a part of a circuit is of
interest, such as the load to an electronic circuit. As an example, Thevenin's theorem can
be used to simplify the analysis on how to maximize power delivery to the load (B) of a
network (A).
Norton's Theorem
Norton's Theorem states that for any two linear networks A and B that are connected by
two conductors but not magnetically coupled, network A may be replaced by a simpler
equivalent network for the purpose of simpler circuit analysis computations with respect to
network B.
The equivalent network is known as the Norton equivalent network, and it consists of a
current source I(s) in parallel with an impedance Z(s). The current source I(s) is the
transform of the current at the two terminals of network A when these are shorted, while
the impedance Z(s) is the transform impedance at the two terminals of A with all
independent sources reduced to zero. Related to Norton's Theorem is Thevenin's Theorem,
which is just the voltage representation of Norton's Theorem.

One must know the characteristics of networks to which Norton's Theorem is applicable.
Let A and B be the two connected but non-magnetically coupled linear networks, with
current i(t) or I(s) flowing from A to B, as shown in Figure 1. Network A must have the
following characteristics: 1) it only has linear passive elements; 2) it may contain
independent voltage and current sources as well as dependent (or controlled) sources; 3) it
may have initial conditions present (e.g., voltages in capacitors or currents in inductors); 4)
it has no magnetic coupling to B. On the other hand, network B may be characterized as
follows: 1) it has linear passive elements only; 2) it has no sources; 3) it has no initial
conditions; and 4) it has no magnetic coupling to A.

Figure 1. Norton Equivalent Circuit


Given the networks A and B characterized above, Norton's Theorem is applied by finding a
network C that's equivalent to A, wherein C consists only of a current source I(s) in
parallel with an impedance Z(s), and replacing A with C.
To find the Norton equivalent circuit for network A, the following steps need to be done:
1) zero out all independent sources in network A by opening all current sources and
shorting all voltage sources (dependent sources are not changed); 2) determine Z(s) by
calculating the equivalent passive impedance of network A as seen from the open terminals
of A (bear in mind that all the independent sources are zeroed out at this point); and 3)
apply a current source I(s) in parallel with Z(s), with I(s) chosen such that the current
from C to B (once they're connected) will still be I(s). In step 2, it is assumed that A has no
dependent sources, since if A has dependent sources, then Z(s) must be computed based
on the fact that A is an active network.
Norton's theorem is useful in simplifying the analysis when only a part of a circuit is of
interest, such as the load to an electronic circuit. As an example, Norton's theorem can be
used to simplify the analysis on how to maximize power delivery to the load (B) of a
network (A).
Millman's Theorem
Millman's Theorem states that for a circuit that can be redrawn as a network of parallel
branches, with each branch consisting of a resistor or a resistor in series with a voltage
source (or current source), the voltage Vm across all the parallel branches may be
computed as follows: Vm = (V1/R1 + V2/R2 + V3/R3 + ... + Vn/Rn) / (1/R1 + 1/R2 +
1/R3 + ... + 1/Rn). For computation purposes, the value of the voltage source is equal to
zero in a branch that consists of nothing but a resistor.
It then follows that the entire network of parallel branches may be simplified into a circuit
consisting of just a voltage source Vm in series with a single resistor Rm, where Rm = 1/
(1/R1 + 1/R2 + 1/R3 + ... + 1/Rn). The current Im through this simple circuit is given by:
Im = Vm / Rm, or Im = V1/R1 + V2/R2 + V3/R3 + ... + Vn/Rn.
Millman's Theorem, therefore, simplifies the computation of the voltage across the parallel
branches of a circuit. Note that Millman's Theorem is only applicable to circuits that can
be redrawn as a network of parallel branches, with each branch consisting of a resistor or a
resistor in series with a voltage source or current source.
Millman's Theorem may also be written in terms of the impedance Z or admittance Y (Z =
1/Y) through each branch. Thus,
Vm = (V1Y1 + V2Y2 + V3Y3 + ... + VnYn) / (Y1 + Y2 + Y3 + ... + Yn);
Zm = 1 / (Y1 + Y2 + Y3 + ... + Yn); and
Im = V1Y1 + V2Y2 + V3Y3 + ... + VnYn.
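
A direct transcription of Millman's formula into Python is shown below; as stated above, a branch that is just a resistor is handled by giving it a source voltage of zero. The branch values used are arbitrary example numbers.

def millman(branches):
    # branches is a list of (V, R) pairs, one per parallel branch; use V = 0 for a
    # branch that is just a resistor.
    num = sum(v / r for v, r in branches)
    den = sum(1 / r for v, r in branches)
    return num / den, 1 / den   # (Vm, Rm)

# Example: 12 V through 2 ohms, 6 V through 3 ohms, and a plain 6-ohm resistor.
vm, rm = millman([(12, 2), (6, 3), (0, 6)])
print(round(vm, 3), round(rm, 3))   # Vm = 8.0 V, Rm = 1.0 ohm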

Figure 1. Illustration of Millman's Theorem


In summary, Millman's Theorem is just a tool used for the quick computation of the
voltage across and the current through a circuit that consists of nothing but parallel
branches of a resistor in series with a voltage source.

Analog Electronics
Analog Electronics refers to the field of electronics that deals with analog or 'real-world'
signals. An analog signal is a signal that is both variable and continuous, and therefore
allowed to assume any value between its applicable lower and upper limits. This is in
contrast with a digital signal, which can only assume discrete or quantized values. Analog
signals represent waveforms that are commonly encountered by humans in the real world.

In telephony, for example, one's voice is translated into an analog electrical signal that can
be transmitted continuously over wires, received on the other end, and made to vibrate a
speaker that translates the analog electrical signal back into sound waves again.
Although our society is becoming more and more digital every day, the output of a digital
system almost always needs to be converted back into analog form in one way or another
for our day-to-day use in the physical world.
Analog signals may be converted into digital signals and vice versa by devices known as
analog-to-digital converters (ADC's) and digital-to-analog converters (DAC's),
respectively.

Electronic Amplifiers
An electronic amplifier is a device that magnifies or increases the voltage, current, or
power of a signal. An amplifier accomplishes this by taking additional power from a
power supply, and producing an output signal that is an exact copy of the input signal, but
of a higher amplitude. The ratio of the output signal to the input signal is referred to as the
'gain' (G) of the amplifier. Thus, an amplifier that outputs a voltage signal Vout that is a
magnified copy of the input voltage signal Vin has a gain of G, wherein G = Vout / Vin.

An amplifier can be designed to magnify the voltage of a signal (voltage amp), the current
of a signal (buffer amp), or both the voltage and current of a signal (power amp).
Electronic amplifiers can operate using either a single-sided power supply (a voltage rail
or bus that's either positive or negative) or a double-sided or balanced power supply,
which has both a positive and a negative supply rail aside from the ground.
If the output waveform of an amplifier is not a perfect copy of the input signal, then the
amplifier is said to exhibit distortion. One type of distortion is linear distortion, which
causes an output signal to have a shape that's different from that of the input signal. Good
design of the amplifier circuit, proper selection of circuit components, and correct biasing
(which is discussed in the next paragraph) will minimize distortion.
An electronic amplifier needs what is known as an electrical 'bias' in order to function. The
bias of an amplifier is the method by which its active devices (usually transistors) are
powered up and excited in order to attain the desired amount of gain with minimum
distortion. This usually entails setting the DC component of the output signal midway
between the maximum voltages available from the power supply.
It is common to see amplifiers that consist of multiple stages connected in series to attain
higher gains. Each stage of the amplifier may be a different type of amplifier to meet the
requirements of each stage. For instance, the first stage might be a Class A stage, the
output of which is fed into a class AB push-pull second stage, which then drives a class G
final output stage. This design takes advantage of the strengths of each amplifier class at
each stage while minimizing weaknesses. Refer to the definitions of the different classes
of amplifiers.
An amplifier's output signal may be of a different phase or polarity from the input signal. A
non-inverting amplifier maintains equal phase relationship or polarity between the input
and output waveforms. An emitter follower is one example of such an amplifier, its name indicating that the
signal at the emitter of a transistor follows the phase of the input signal. An inverting
amplifier produces an output that is of opposite polarity or 180 degrees out-of-phase with
the input signal.
There are many different ways to classify amplifiers. Amplifier classifications include the
following: 1) by function; 2) by frequency range; 3) by common terminal; 4) by type of
load; 5) by coupling, etc.
Types or Classifications of Electronic Amplifiers
Electronic amplifiers can be classified in many different ways. Some common
classifications for amplifiers are presented below.
Classification by Signal Type
Amplifiers may be classified based on the type of signal that they amplify. Thus, an
amplifier that amplifies voltage signals is a voltage amplifier, while a buffer amplifier is
one that amplifies current signals. An amplifier that amplifies both the voltage and current
is classified as a power amplifier.

Classification by Common Terminal Connection


Amplifiers consist of active devices (such as bipolar and field-effect transistors) that can
be connected such that there is a common terminal between the input and the output. One
common way of classifying amplifiers is in terms of their common terminal connection.
For instance, a common-emitter amplifier means that the active device is a bipolar
transistor whose emitter terminal is common to the input and the output side.
Classification by Frequency Range
Amplifiers may also be classified according to the frequency range of the signals they can
amplify. Categories under such a classification include: 1) DC amplifiers; 2) Audio Frequency (AF) amplifiers (20 Hz to 20 kHz); 3) Video amplifiers (several MHz); and 4) Ultra High Frequency (UHF) amplifiers (up to a few GHz).
Classification by Function


Amplifiers may be classified according to their basic function or output characteristics. Some of these functional classifications are as follows:
- servo amp : an amp with an integrated feedback loop to actively control the output at the
desired level
- linear amp : an amp with a precise amplification factor over a wide range of frequencies,
often used to boost signals for relay in communications systems
- non-linear amp : an amp that amplifies only a specific narrow or tuned frequency, to the
exclusion of all other frequencies
- RF amp : an amp designed for use in the radio frequency range of the electromagnetic
spectrum, often used to increase the sensitivity of a receiver or the output power of a
transmitter
- audio amp : an amp designed for use in reproducing audio frequencies, with special
considerations made for driving speakers
- operational amp : a low power amp that can perform mathematical operations
Classification by Interstage Coupling Method
Audio amplifiers are sometimes classified by the method used in the coupling of the signal
at the input, output, or between stages. Different types of coupling methods include: the
R-C coupled amplifier; the L-C coupled amplifier; the transformer-coupled amplifier; and
the direct-coupled amplifier.
Classification by Type of Load
Another way of classifying amplifiers is by the type of load that they drive: 1) untuned amps - amplify audio and video with no tuning required; 2) tuned amps (RF amps) - amplify a single radio frequency or band of frequencies.
Classification by Angle Flow or Conduction Angle
A letter system for classifying amplifiers also exists, wherein amplifiers fall under class A,
class B, class C, and so on. This classification system is based on the amount of time that
the amplifier's active components are conducting electricity, with the duration measured in
terms of the number of degrees of the sine wave test signal. See also: amplifier classes.

Amplifier Classes
Amplifier circuits are classified under different classes, which include the following:
classes A, B, AB and C for analog designs, and classes D and E/F for switching designs.
Below are brief descriptions of these amplifier classes.
Class A - 100% of the input signal is used (conduction angle α = 360° or 2π)
Class A amplifiers amplify over the entire input cycle such that the output signal is an
exact magnified copy of the input. They are not efficient (no more than 50% efficiency is
attainable), since the amplifying device is always conducting whether or not an input
signal is applied.

Class B - 50% of the input signal is used (α = 180° or π)


A Class B amplifier is one whose operating point is at an extreme end of its characteristic,
so that either the quiescent current or the quiescent voltage is almost zero. If a sinusoidal
input voltage is used, the amplification of a Class B amplifier takes place only for 50% of
the cycle, i.e., the amplifying device is switched off half of the time. A Class B amplifier
can attain an efficiency of up to 78.5%. However, a Class B amp exhibits a higher
distortion than an equivalent Class A amp.
Class AB - more than 50% but less than 100% of the input signal is used (181° to 359°; π < α < 2π)
A Class AB amplifier is an amplifier that operates between the two extremes defined for
Class A and B amplifiers.
Class C - less than 50% of the input signal is used (0° to 179°; α < π)
Class C amplifiers conduct for less than 50% of the input cycle, allowing them to reach 90%
efficiency but resulting in high distortion at the output. Thus, in a Class C amp, the output
current (or voltage) is zero for more than 50% of the input waveform cycle. Some
applications, such as megaphones, can tolerate the high distortion of Class C amps. Class
C amps can also be used in tuned RF applications, since the distortion can be significantly
reduced by the tuned loads.
Class D
A class D amplifier is a power amplifier whose power devices are operated in on/off mode.
The input signal is converted into a sequence of pulses whose average is directly
proportional to the amplitude of the input signal. The frequency of the pulses is typically
ten or more times the highest frequency of interest in the input signal. The output of the
amplifier goes through a passive filter to remove unwanted spectral components, resulting
in an amplified replica of the input. Power efficiency is the main advantage of a class D
amplifier. Class D amplifiers were widely used to control motors, but they are also used as
audio amplifiers.
Class E/F
Class E/F amplifiers are highly efficient switching power amplifiers used at radio
frequencies. Class E/F amplifiers consist of two basic parts: 1) a 'perfect' switching device
and 2) an impedance network consisting of resistive and reactive components. The
switching device is 'on' during the zero-voltage crossing, and off during the zero-current
crossing, such that it can not have both current flowing through it and a non-zero voltage
across it at the same time, thereby minimizing its power dissipation. The impedance
network, on the other hand, is set up such that the 'imaginary part' of the impedance is
eliminated through proper matching of complex conjugates to attain resonance, leaving
behind only its 'real part.' Thus, a Class E/F amp is very efficient because power loss only
occurs in the real part (resistive component) of the impedance network. Classes E and F
are distinguished from each other by their resonance topology.
There are several other classes of amplifiers not discussed in this article.

Negative Feedback
Negative feedback is used in amplifiers for a variety of reasons. The term 'feedback'
means using a fraction of the output voltage of the amplifier as its input or as part of its input.
When the signals at input and output are of opposite phase (i.e., they are mirror-imaged),
then the feedback signal is said to be negative.
Negative feedback signals are subtracted from the amplifier's input signal(s). In effect,
they reduce the overall gain of the amplifier. If G is the gain of the amplifier with no
feedback (also known as the 'open-loop gain'), and n is the feedback factor such that a fraction Vout/n of the output is fed back to the input of the amplifier, then the gain of the
amplifier when negative feedback is applied (closed-loop gain) is as follows:
Closed-Loop Gain = G / (1 + G/n).
For example, if the open-loop gain G = 100 and n = 10 (so that 1/10 of the output voltage
is fed back), then the closed-loop gain is 100/(1+100/10) = 100/11 = 9.09.
Note that if the open-loop gain G is very much larger than the feedback factor n, then the closed-loop gain becomes approximately G/(G/n), or simply equal to n.
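
Both the worked example and this limiting behavior are easy to verify numerically with a few lines of Python:

def closed_loop_gain(g, n):
    # Gain of the amplifier when Vout/n is fed back to its input.
    return g / (1 + g / n)

print(closed_loop_gain(100, 10))      # 9.09..., the worked example above
print(closed_loop_gain(100000, 10))   # 9.999..., essentially equal to n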
Negative feedback, aside from reducing gain, also reduces noise signals generated by the
components of the amplifier. Distortion that does not result in loss of open-loop gain will
also be reduced by negative feedback.
The input resistance of an amplifier may also be affected by feedback. If the feedback
signal is in shunt with the input signal (i.e., they are applied to the same terminal, as shown
in Figure 1a), then the input resistance of the amplifier decreases. Using a feedback signal
that is in series with the input signal (as shown in Figure 1b), on the other hand, will
increase the amplifier's input resistance.


Figure 1a. Feedback in Shunt with the Input Signal

Figure 1b. Feedback in Series with Input (through the emitter of the first transistor)

The output resistance may also be affected by a feedback network, although to a lesser
extent than the input resistance. Connecting the feedback circuit in series with the output
load increases the amplifier's output resistance while connecting it in parallel with the
output load will decrease the amplifier's output resistance.
Low-Pass Filters

A Low-Pass Filter is a circuit that only allows low-frequency signals to pass, and
attenuates or reduces signals whose frequencies exceed its cut-off frequency. It is also
referred to as a 'high-cut filter' or, when used in audio applications, as a 'treble-cut filter'.
One common application of low-pass filters is for driving subwoofers (speakers designed
for bass sounds) and other loudspeakers that don't efficiently broadcast sounds of high
pitches. The low-pass filter is the opposite of the high-pass filter.

An ideal low-pass filter is one that completely blocks all frequencies above a given
frequency, while allowing all those with lower frequencies to pass unchanged. Of course,
an ideal low-pass filter doesn't exist in the real world, so ways to quantify or describe the
effectiveness and efficiency of a low-pass filter have been devised.


Figure 1. Low-Pass Filters


The quality of a low-pass filter may be expressed in terms of its order n. An nth-order filter reduces the signal strength by 6n dB for every octave increase in frequency, i.e., every
time the frequency doubles.
Thus, a first-order low-pass filter (n=1) will reduce the signal strength by 6 dB every time
the frequency doubles. Mathematically, -6 dB = 20 log (V2/V1), which yields V2/V1 = 0.501.
This means that a first-order filter reduces the strength of the signal by about 50% every
time the frequency doubles.
As further illustration, a second-order low-pass filter (n=2) will reduce the signal strength
by 12 dB every time the frequency of the signal doubles (-12 dB per octave). Thus, a
second-order low-pass filter will reduce a signal to just 1/4 its original level every time its
frequency increases by an octave.
Note that in each of the low-pass filters shown above, the inductors are in series with the
input while the capacitors are in shunt with the input. This is because the reactance XL of
an inductor increases with the signal frequency, i.e., XL = 2πfL, while the reactance XC of a capacitor decreases with the signal frequency, i.e., XC = 1 / (2πfC). Thus in these low-pass filters, the inductors resist the passing of an ac signal as the frequency increases,
while the capacitors shunt them towards the ground as the frequency increases. Either
way, the effect is to attenuate the signal as frequency increases.
The following equations apply to the low-pass filters in Figure 1 above:
1) L = Zo / (πf)
2) C = 1 / (πf Zo)
3) Zo = sqrt(L/C)
4) f = 1 / (π sqrt(LC))
where Zo is the line impedance and f is the cut-off frequency of the filter.
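
Taking the design relations above to be the standard constant-k prototype equations, a short Python sketch computes L and C for an assumed 600-ohm line and a 5 kHz cut-off frequency:

import math

def constant_k_lowpass(zo, fc):
    # Component values for the constant-k low-pass prototype.
    l = zo / (math.pi * fc)       # inductance in henries
    c = 1 / (math.pi * fc * zo)   # capacitance in farads
    return l, c

l, c = constant_k_lowpass(600.0, 5000.0)   # assumed 600-ohm line, 5 kHz cut-off
print(l * 1e3, c * 1e9)                    # about 38.2 mH and 106.1 nF
print(math.sqrt(l / c))                    # recovers Zo = 600 ohms
print(1 / (math.pi * math.sqrt(l * c)))    # recovers f = 5000 Hz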


High-Pass Filters
A High-Pass Filter is a circuit that only allows high-frequency signals to pass, and
attenuates or reduces signals whose frequencies are below its cut-off frequency. It is also
referred to as a 'low-cut filter' or, when used in audio applications, as a 'bass-cut filter' or
'rumble filter'. One common application of high-pass filters is for driving tweeters
(speakers designed for high-pitch sounds), so as to block low-frequency signals that can
interfere with or even damage the tweeter. The high-pass filter is the opposite of the low-pass filter.

An ideal high-pass filter is one that completely blocks all frequencies below a given
frequency, while allowing all those with higher frequencies to pass unchanged. Of course,
an ideal high-pass filter doesn't exist, so in the real world, the effectiveness and efficiency
of a high-pass filter is described in terms of the level of attenuation of signals with
frequencies below a cut-off frequency. The cut-off frequency of a high-pass filter is the
frequency at which the output voltage equals 70.7% of the input voltage.
Figure 1 shows some common implementations of high-pass filters. Note that in each of
the high-pass filters shown in Figure 1, the inductors are in shunt with the input while the
capacitors are in series with the input. This is because the reactance XL of an inductor
increases with the signal frequency, i.e., XL = 2πfL, while the reactance XC of a capacitor decreases with the signal frequency, i.e., XC = 1 / (2πfC). Thus in these high-pass filters,
the capacitors resist the passing of an ac signal as the frequency decreases, while the
inductors shunt them towards the ground as the frequency decreases. Either way, the
effect is to attenuate the signal as frequency decreases.


Figure 1. High-Pass Filters


The following equations apply to the high-pass filters in Figure 1 above:
1) L = Zo / (4πf)
2) C = 1 / (4πf Zo)
3) Zo = sqrt(L/C)
4) f = 1 / (4π sqrt(LC))
where Zo is the line impedance and f is the cut-off frequency of the filter.

The Bipolar Transistor as a Switch


A switch is a device that is used to 'open' or 'close' a circuit. Opening a circuit means
creating a break in the circuit, preventing current flow and thus, turning it 'off'. Closing a
circuit, on the other hand, means completing the circuit path, thereby allowing current to flow
around it and thus, turning it 'on'.

The bipolar transistor, whether NPN or PNP, may be used as a switch. Recall that the
bipolar transistor has three regions of operation: the cut-off region, the linear or active
region, and the saturation region. When used as a switch, the bipolar transistor is operated
in the cut-off region (the region wherein the transistor is not conducting, and therefore
makes the circuit 'open') and saturation region (the region wherein the transistor is in full
conduction, thereby closing the circuit).
The bipolar transistor is a good switch because of its large transconductance Gm, with Gm
= Ic/Vbe where Ic is the collector (output) current and Vbe is the base-emitter
(input) voltage. Its high Gm allows large collector-to-emitter currents to be easily
achieved if sufficient excitation is applied at the base.


To illustrate this, the simplest way to use an NPN bipolar transistor as a switch is to insert
the load between the positive supply and its collector, with the emitter terminal grounded
(as shown in Figure 1). Applying no voltage at the base of the transistor will put it in the
cut-off region, preventing current from flowing through it and through the load, which is a
resistor in this example. In this state, the load is 'off'.

Figure 1. A Simple Switch Using an NPN Transistor
Applying enough voltage at the base of the transistor will cause it to saturate and become
fully conductive, effectively pulling the collector of the transistor to near ground. This
causes a collector-to-emitter current to flow through the load that's limited only by the
impedance of the load. In this state, the load is 'on'.
One limitation of this simple design is that the switch-off time of the transistor is slower
than its switch-on time if the load is a resistor. This is because of the stray capacitance
between the collector of the transistor and ground, which needs to charge through the load
resistor during switch-off. On the other hand, this stray capacitance is easily discharged to
ground by the large collector current flow when the transistor is switched on. There are, of
course, other better designs for using the bipolar transistor as a switch.
Switching Circuits Using Bipolar Transistors
As discussed in another article, the bipolar transistor is a good switching device because of
its large transconductance Gm. The same article showed the simple circuit in Figure 1
below, wherein a single NPN transistor is used as a switch for energizing or powering off
the load resistor connected between the collector and the positive supply.

One problem with the simple switch circuit in Figure 1 is the fact that a stray capacitance
exists between the transistor's collector and its grounded emitter, such that the switch-off
time of the transistor is slower than its switch-on time. This is because during switch-off,
this stray capacitance has to charge first through the load resistor before the load current
stops. During switch-on, on the other hand, this stray capacitance needs to discharge to
ground, which is easily accomplished by the conducting transistor. The slower charging up
of the stray capacitance compared to its quick discharging is the reason why the switch-off
of Figure 1's circuit is slower than its switch-on.
The circuit in Figure 2 addresses the limitation of the circuit in Figure 1. Two output
transistors are used in this circuit, driven by a single input transistor. The output of this
circuit is taken from the collector of the lower transistor. Just like the circuit in Figure 1,
this circuit is an inverting circuit, i.e., the output signal has a phase that's opposite that of
the input signal. Thus, the output is low if the input is high and the output is high if the
input is low.

Figure 1. A simple switch using an NPN transistor

Figure 2. A circuit with two output transistors, allowing equally rapid switch-off and switch-on

If the input is high, the upper output transistor goes into cut-off because its base voltage is
pulled down by the conducting input transistor. Meanwhile, the lower output transistor
saturates because the conducting input transistor is supplying its base with a higher
current. Such conditions immediately pull down the collector of the lower output
transistor to almost ground level, i.e., the output goes 'low'.
On the other hand, if the input is low, the input transistor stops conducting, causing the
voltage at the base of the upper output transistor to be pulled up by the positive supply,
thereby turning it on. Meanwhile, the non-conducting input transistor prevents the base of
the lower output transistor from receiving any current, driving it into cut-off. With the
lower output transistor in cut-off and the upper output transistor conducting, the output of
the circuit is pulled up towards the positive supply, i.e., the output goes 'high.'

The circuit in Figure 2 allows the output to switch off as fast as it switches on, since the conducting lower output transistor immediately pulls the output to ground during switch-off.
The Darlington Pair
The Darlington pair is basically a combination of two bipolar transistors connected as
shown in Figure 1 below. This circuit is used for amplifying currents, i.e., the amplified
current from the first transistor is further amplified by the second transistor. Needless to
say, this transistor combination exhibits a much higher current gain than if only one
transistor were used.

The overall current gain of the Darlington pair is just equal to the product of the two
individual current gains of the transistors. Thus, if hFE1 and hFE2 are the current gains of
transistors 1 and 2, respectively, the overall current gain hFE if they are formed as a
Darlington pair will be hFE = hFE1 x hFE2. The very high current gain (e.g., 10000) of a
Darlington pair means that only a tiny amount of base current is needed to make the pair
switch on.
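
A quick numeric illustration of these two relations, using typical textbook values assumed only for this example:

hfe1, hfe2 = 100, 100   # assumed current gains of the two transistors
vbe1, vbe2 = 0.7, 0.7   # assumed base-emitter turn-on voltages, in volts

hfe_pair = hfe1 * hfe2   # overall current gain of the pair
vbe_pair = vbe1 + vbe2   # base-emitter voltage needed to turn the pair on

load_current = 1.0                      # load current in amperes, carried by the second transistor
print(hfe_pair, vbe_pair)               # 10000 and 1.4 V
print(load_current / hfe_pair * 1000)   # base current needed: 0.1 mA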

Figure 1. The Darlington Pair Circuit


The Darlington circuit is named after its inventor, Bell Labs engineer Sidney Darlington.
The concept of placing two or three transistors on a single chip was patented by him, but
putting an arbitrary number of transistors on a single chip, which would encompass all
modern IC's, was not covered by his patent.
A Darlington pair behaves just like a single transistor with a very high current gain, i.e., it
can also be modeled as having the three terminals of a typical transistor - the base, the
emitter, and the collector. A single transistor requires just 0.7 V across its base-emitter
terminal to turn on. A Darlington pair, on the other hand, requires double that amount, or
1.4 V, since its input basically has two base-emitter junctions in series (refer to Figure 1).
If Vbe1 and Vbe2 are the base-emitter voltages needed to turn on the first and second
transistors of the Darlington pair, respectively, then the Vbe required to turn on the pair is
Vbe = Vbe1 + Vbe2.

Darlington pairs packaged as a single transistor are already very common in the
marketplace. However, this has not diminished the practice of forming Darlington pairs
from two discrete transistors, since this offers more flexibility. The load current will be
carried by the second transistor, so it has to have a higher power rating than the first
transistor. The maximum load current of the Darlington pair is the maximum current that
the second transistor can carry.

The Long-Tailed Pair (LTP) Circuit


The Long-Tailed Pair (LTP) is a circuit that consists of two bipolar transistors whose
emitters are connected together, as shown in Figure 1. Furthermore, the input of the circuit
is applied across the bases of the two transistors, while the output is usually taken from
across their collectors. The long-tailed pair circuit is a very widely used circuit in linear
applications, mainly as a difference amplifier. Field-Effect Transistors (FET's) and
vacuum tubes may also be used in implementing long-tailed pairs, wherein the FET
sources and vacuum tube cathodes are the terminals connected together, respectively.
The term 'long-tail' came from the fact that the large resistor connecting the emitters of the
transistors to ground or a power supply resembles a long tail. In Figure 1, wherein NPN
transistors are used to form the pair, this resistor is tied to ground. The long tail actually functions as a current source, and may in fact be replaced by a more elaborate 'active' current source circuit.

Figure 1. The Long-Tailed Pair (LTP) Circuit

The long-tailed pair circuit is often used for amplifying any difference between the signals
applied at the base of each transistor. If the two transistors used are identical and
balanced, a common-mode signal (i.e., a signal applied in the same phase to both inputs)
applied to the bases of the transistors will not cause any significant differences between the
voltages at the collectors of the two transistors. In fact, any difference would only be due
to a lack of balance between the transistors. Thus, the output of this circuit (which is taken
across the collectors) for a common-mode signal would ideally be zero.
On the other hand, even a minute difference between the base signals will be amplified
considerably by the transistors, and will be reflected at the output as an amplified version
of the difference between the signals. Thus, the differential gain of this circuit is very
high, whereas its common-mode gain is very low. This is why this circuit is extensively used in the input circuitry of operational amplifiers.
Operational Amplifiers (Op-Amps)
An Operational Amplifier, or Op Amp, is a dual-input, single-output linear amplifier that exhibits a high open-loop gain, high input resistances, and a low output resistance. One of the inputs of an operational amplifier is non-inverting while the other is inverting. The output Vout of an operational amplifier without feedback (also known as open-loop) is given by the formula: Vout = A(Vp - Vn), where A is the open-loop gain of the op amp, Vp is the voltage at the non-inverting input, and Vn is the voltage at the inverting input. The open-loop gain of a typical op amp is in the range of 10^5 to 10^6.
The operational amplifier got its name from the fact that it can be configured to perform
many different mathematical operations. Depending on its feedback circuit and biasing, an
op amp can be made to add, subtract, multiply, divide, negate, and, interestingly, even
perform calculus operations such as differentiation and integration. Of course, aside from
these operations, op amps are also found in a very large number of applications. In fact,
many consider the op amp as the foundation of many analog semiconductor products
today.
Because of the very high resistance exhibited by the inputs of an op amp, the currents
flowing through them are very small. The current flowing in or out of an op amp's input
pin, known as input bias current, is basically just leakage current at the base or gate of the
input transistor of that input, which is why it is very small. When solving voltage/current equations for op amp circuits, the input currents are usually assumed to be zero. For most of the commonly-used op-amp circuits, this means that the signal current flowing toward the inverting input passes entirely through the feedback circuit between the output and the inverting input (the feedback is usually connected to the inverting input for stability).


As the main path for this signal current, the feedback circuit used with an op amp largely determines how the op amp will function. There are many ways to operate an op
amp, but one commonly-used basic configuration is to: 1) provide it with balanced supply
voltages (say, +/-15V, although single-supply operation is also commonly used); 2)
connect the non-inverting input to ground (either directly or with a passive element such as
a resistor); 3) connect a feedback circuit between the output and the inverting input; and 4)
connect a resistor between the inverting input and the input signal source. Figure 1 shows
some op amp circuits using this basic configuration.

Figure 1. Some Common Operational Amplifier Circuits


Another special characteristic of a closed-loop op amp with negative feedback is the zero voltage drop across its inputs. Thus, in the circuits above, the voltage at the inverting input is zero, in effect putting the inverting input at a 'virtual ground.' Table 1 shows the voltage/current equations governing the circuits in Figure 1, based on the assumptions that the currents flowing through the op amp inputs and the voltage across them are zero.

Table 1. Voltage/Current Equations for the Op Amp Circuits in Figure 1

Inverting Amplifier: Vo = -If(Rf); Vi = If(Ri); thus Vo = -(Rf/Ri)Vi
Summer: Vo = -If(Rf); If = V1/R1 + V2/R2; thus Vo = -Rf(V1/R1 + V2/R2)
Differentiator: Vo = -If(Rf); If = C(dVi/dt); thus Vo = -RfC(dVi/dt)
Integrator: Vo = -(1/C)∫If dt; If = Vi/R; thus Vo = -(1/(RC))∫Vi dt
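To make these relations concrete, the short sketch below evaluates two of them with assumed resistor values; the component values are illustrative only and are not taken from Figure 1.

```python
# Ideal op-amp relations from Table 1, with assumed component values (ohms, volts)
Rf, Ri = 100e3, 10e3

def inverting_amplifier(Vi):
    """Vo = -(Rf/Ri) * Vi, assuming zero input current and a virtual ground."""
    return -(Rf / Ri) * Vi

def summer(V1, V2, R1=10e3, R2=20e3):
    """Vo = -Rf * (V1/R1 + V2/R2) for the two-input summing amplifier."""
    return -Rf * (V1 / R1 + V2 / R2)

print(inverting_amplifier(0.1))       # -1.0 V for a 0.1 V input
print(summer(0.05, 0.02))             # about -0.6 V: -100k * (5 uA + 1 uA)
```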

A typical op-amp is constructed with the following parts: 1) a differential input stage,
which consists of a matched pair of bipolar transistors or field effect transistors (FET's)
that produce an output that's proportional to the difference between the input signals; 2) an
intermediate-gain stage that amplifies the output of the differential input stage; and 3) a push-pull output stage that is capable of delivering a large current to the load, hence the low output impedance.


Instrumentation Amplifiers (In-Amps)


An Instrumentation Amplifier, or In-Amp, is a closed-loop, differential-input amplifier with an output that is single-ended with respect to a reference terminal. It has closely-matched input resistances that are very high in value, typically greater than 10^9 ohms. Like an operational amplifier, an instrumentation amplifier must have very low input bias currents (currents flowing in or out of the input terminals, typically on the order of a few nA for in-amps), and a very low output impedance (nominally a few milliohms at low frequencies).
The main difference between an instrumentation amplifier and an operational amplifier is
the fact that an op amp is an open-loop device, whereas an in-amp comes with a preset
internal feedback resistor network that is isolated from its input terminals.

Because it is an open-loop device, an op amp's function and gain are set by providing it with external components that generally constitute a feedback circuit between its output and its inverting input. On the other hand, the gain of an in-amp, which is used primarily as an amplifier of low-level signals in noisy environments, is either preset by the manufacturer or may be set by the user with an external gain resistor or by manipulating internal resistors via some of the in-amp's pins.
Of course, an operational amplifier may be utilized as an instrumentation amplifier. This is done by configuring it as a differential amplifier or subtractor, as shown in Figure 1a, the circuit of which gives an output voltage Vo that is proportional to the difference between the input voltages, or Vo = (Rf/Ri)(V2 - V1). This equation may be derived as follows:
1) Vo = -If Rf + Vi where Vi is the voltage at either input of the op amp;
2) but If = (V1-Vo)/(Ri+Rf) and Vi = V2(Rf/(Rf+Ri));
3) thus, Vo = (Vo-V1)(Rf/(Rf+Ri)) + V2 (Rf/(Rf+Ri));
4) or VoRf + VoRi = (V2-V1)Rf + VoRf;
5) simplifying, Vo = Rf/Ri (V2-V1).
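As a numerical sanity check on this derivation, the sketch below solves the same ideal-op-amp node equations for arbitrary assumed component values and compares the result with Vo = (Rf/Ri)(V2 - V1); the resistor and voltage values are made up for the example.

```python
# Numerical check of the subtractor result, assuming an ideal op amp
Rf, Ri = 47e3, 4.7e3          # assumed resistor values (ohms)
V1, V2 = 1.20, 1.35           # assumed input voltages (volts)

Vp = V2 * Rf / (Rf + Ri)      # non-inverting input set by the voltage divider
Vn = Vp                       # ideal op amp: zero voltage across the inputs
# Zero input bias current: the current through Ri equals the current through Rf
Vo = Vn - Rf * (V1 - Vn) / Ri

print(Vo, (Rf / Ri) * (V2 - V1))   # both are ~1.5 V, confirming the formula
```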
A single op amp configured as a subtractor is capable of serving as an in amp for modest
applications, i.e., it can amplify the difference between very small signals and reject those
that are common-mode (or common to both inputs). This circuit, however, suffers from
certain limitations. Commercially available instrumentation amplifier IC's employ multiple
internal op amps to overcome these limitations and provide a much superior differential
amplification performance.
Figure 1b shows an example of a simple in-amp circuit consisting of multiple op amps
(three op amps, to be exact), two of which are used for input buffering with the third one
acting as the subtractor itself. Input buffering eliminates problems associated with
relatively low input resistances or resistance mismatch between inputs.

Figure 1. a) an operational amplifier configured as a differential amplifier to serve as a simple in-amp (left); b) an in-amp consisting of three (3) op amps (right)
Properties that define a high quality instrumentation amplifier are: 1) high common mode
rejection ratio; 2) low offset voltage and offset voltage drift; 3) low input bias and input
offset currents; 4) well-matched and high-value input impedances; 5) low noise; 6) low
non-linearity; 7) simple gain selection; and 8) adequate bandwidth.
Examples of applications where in-amps may be used include: 1) data acquisition from
low output transducers; 2) medical instrumentation; 3) current/voltage monitoring; 4)
audio applications involving weak audio signals or noisy environments; 5) high-speed
signal conditioning for video data acquisition and imaging; and 6) high frequency signal
amplification in cable RF systems.
Voltage Regulators
Voltage Regulators, also known as voltage stabilizers, are circuits that output a constant and stable DC voltage at a specified level, despite fluctuations in their input voltage or variations in their load. Voltage regulator IC's have become available in so many forms and with so many characteristics that they have virtually eliminated the need to build voltage regulating circuits from discrete components.
Factors that spurred the growth of the voltage regulator IC business include: 1) ease with
which zener diodes and balanced amplifiers can be built into IC's; 2) improved IC heat
dissipation capabilities; 3) advances in overload protection techniques; and of course, 4) a
high demand for voltage regulators in almost all fields of the electronics industry,
especially in power supply applications.
Important considerations when selecting a voltage regulator include: 1) the desired output
voltage level and its regulation capability; 2) the output current capacity; 3) the applicable
input voltages; 4) conversion efficiency (Pout/Pin); 5) the transient response time; 6) ease
of use; and, if applicable, 7) the ability to step down or step up output voltages. In switch-mode regulators, the switching frequency is also a consideration.

There are several types of voltage regulators, which may be classified in terms of how they
operate or what type of regulation they offer. The most common regulator IC is the
standard linear regulator. A typical linear voltage regulator operates by forcing a fixed
voltage at the output through a voltage-controlled current source. It has a feedback
mechanism that continuously adjusts the current source output based on the level of the
output voltage. A drop in voltage would excite the current source into delivering more
current to the load to maintain the output voltage. Thus, the capacity of this current source
is generally the limiting factor for the maximum load current that the linear regulator can
deliver while maintaining the required output level. The amount of time needed for the
output to adjust to a change in the input or load is the transient response time of the
regulator.
The feedback loop used by linear regulators needs some form of compensation for stability. In most linear regulator IC's, the required feedback loop compensation is already built into the circuit, so no external components are needed for this purpose. However, some regulator IC's, like the low-dropout ones, do require that a capacitor be connected between the output and ground to ensure stability. The main disadvantage of linear regulators is their low efficiency, since they are constantly conducting.
The switching voltage regulator is another type of regulator IC. It differs from the linear regulator in the sense that it employs pulse width modulation (PWM) to regulate its output. The output is controlled by current that is switched at a fixed frequency, typically from tens of kHz to a few MHz, but with a varying duty cycle. The duty cycle of the pulses increases if the regulator needs to supply more load current to maintain the output voltage and decreases if the output needs to be reduced. Switching regulators are more efficient than linear regulators because they only supply power when necessary. Complexity, output ripple, and limited current capacity are the disadvantages of switching regulators.
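As an illustration of duty-cycle control, the sketch below uses the ideal step-down (buck) relation Vout ≈ D x Vin; the buck topology and all of the numbers are assumptions for the example, since the text does not name a specific switching topology.

```python
# Ideal buck (step-down) relation, losses ignored: Vout ~= D * Vin
Vin = 12.0            # assumed input voltage (V)
Vout_target = 5.0     # assumed regulated output (V)
f_sw = 500e3          # assumed switching frequency (Hz)

D = Vout_target / Vin          # duty cycle the controller must settle at
t_on = D / f_sw                # on-time within each switching period

print(f"duty cycle ~ {D:.2%}, on-time ~ {t_on*1e6:.2f} us per {1e6/f_sw:.1f} us period")
```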
There is also a group of regulator IC's known as Low Drop-out (LDO) regulators. The drop-out voltage is the minimum voltage across the regulator that is required to maintain the output voltage at the correct level. The lower the drop-out voltage, the less power is dissipated internally within the regulator and the higher the regulation efficiency. In LDO regulators, the drop-out voltage is typically only about 0.6 V, rising to just about 0.7-0.8 V even at maximum current.
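The efficiency argument can be made concrete with the usual best-case approximation for a linear regulator, efficiency ≈ Vout/Vin (quiescent current ignored); the voltages and load current below are assumptions chosen only to compare a standard regulator with an LDO operating just above its drop-out voltage.

```python
def linear_regulator(Vin, Vout, Iload):
    """Best-case figures for a linear regulator (quiescent current ignored)."""
    efficiency = Vout / Vin                 # Pout/Pin, since Iin is roughly Iout
    dissipation = (Vin - Vout) * Iload      # power burned across the pass element
    return efficiency, dissipation

print(linear_regulator(5.0, 3.3, 0.5))        # standard case: ~66% efficient, ~0.85 W lost
print(linear_regulator(3.3 + 0.6, 3.3, 0.5))  # LDO near drop-out: ~85% efficient, ~0.3 W lost
```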
Examples of applications of regulator IC's include the following: 1) regulated power
supplies; 2) data conversion (ADC/DAC) circuits; 3) sensor and triggering systems; 4)
DC-to-DC voltage converters; 5) measurement and instrumentation systems; 6) motor
control; and 7) battery charging.
Analog Switches


An analog switch is a solid-state circuit that has one or more channels that can transmit
analog signals when they're in the 'on' state or block them when they're in the 'off' state.
The turning 'on' and 'off' of an analog switch is controlled by a digital gating signal applied
to its control gate. Applications of analog switches include data acquisition, process
control, instrumentation, video systems, and communication systems.
An ideal analog switch has zero resistance when 'on' (or closed), and infinite resistance
when 'off' (or open). It also has a perfectly linear volt-ampere characteristic when
transmitting an analog signal. Of course, analog switches of the real world are not 'ideal'.
Being solid-state semiconductor devices, real analog switches exhibit non-zero 'on'
resistance, a finite 'off' resistance, and a non-linear volt-ampere characteristic.
Just like mechanical switches, analog switches come in a variety of forms, depending on the number of poles and throws they offer. Thus, terms such as 'SPST' and 'SPDT' (single-pole single-throw and single-pole double-throw, respectively), which are commonly used to describe mechanical switches, are also applicable to analog switches. A single IC package can also contain multiple switches, each of which corresponds to an analog channel.
There are many circuit configurations that can be used as gates for analog switches, some
of which are very simple, e.g., consisting of just a single diode and several resistors. Most
commercially available analog switches though employ well-engineered bipolar
transistors, field-effect transistors (FET's), or a combination of both in their channels for
the transmission or blocking of analog signals. FET's are widely used in analog switches
because of their high 'off' resistance and low 'on' resistance. Figure 1 shows a simplified
CMOS analog switch circuit.

Figure 1. A simple CMOS analog switch


The circuit in Figure 1 employs complementary MOSFETs (CMOS) consisting of an n-channel MOSFET and a p-channel MOSFET, both of which are connected such that their source terminals are on opposite sides of the circuit (i.e., one is on the input side and the other is on the output side). It then follows that their drain terminals are also on opposite sides of the circuit. Also, note that the control voltages at the gates of the transistors are digital (in this case, '1' means +5V and '0' means -5V) and complementary.


The effect of this configuration is that one value of Vc will turn both transistors 'off' and the other value of Vc will turn at least one transistor 'on'. In the latter case, which transistor conducts depends on the instantaneous value of the analog input Vin. The analog switch is 'off' if both transistors are 'off', and it is 'on' if at least one of the transistors is 'on.'

Power Management IC's


A power management IC is an integrated circuit designed to serve a specific role in
enhancing the way power is utilized by an electronic or electrical system. Power
management IC's therefore include devices that are used in the management of one or
more of the following: 1) power generation; 2) power delivery; 3) power consumption; 4)
power replenishment/storage; 5) power conservation; and 6) power monitoring.
The need for power management IC's became more pronounced as consumers embraced a more mobile lifestyle. Suddenly, they wanted their laptops, cell phones, PDA's, and other mobile devices to last longer before requiring a battery recharge. This demand resulted in the vast array of power management IC's being offered in the market today, examples of which are presented below. The descriptions of the IC's provided were taken from their respective manufacturers' websites, and are included here only to serve as an overview of what power management IC's are available to designers.
The MAX1702B, manufactured by Maxim, is a triple-output power management IC for microprocessor-based applications that require substantial computing and multimedia capability at low power, such as PDAs, third-generation smart cellular phones, internet appliances, automotive in-dash telematics systems, etc. The MAX1702B integrates three ultra-high-performance power supplies with associated supervisory and management functions.
Its power management functions include automatic power-up sequencing, power-on-reset
and manual reset with timer, and two levels of low-battery detection. The built-in DC-DC
converters use fast 1MHz PWM switching, allowing the use of small external components.
They automatically switch from PWM mode under heavy loads to skip mode under light
loads to reduce quiescent current and maximize battery life. The input voltage range is
from 2.6V to 5.5V, allowing the use of three NiMH cells, a single Li+ cell, or a regulated
5V input. The MAX1702B is available in a tiny 6mm x 6mm, 36-pin QFN package and operates over the -40°C to +85°C temperature range.
The AT73C202, manufactured by Atmel, is an ultra low-power Battery and Power
Management IC designed for state-of-the-art cellular phones. Other power management
devices manufactured by ATMEL include the AT73C203, an ultra low-power Battery and
Power Management IC designed for portable and hand-held applications built around
microprocessors requiring smart power management functions; and the AT73C204, an integrated power management solution for the add-on features in new-generation mobile phones, e.g., camera modules, sound systems, memory modules, Bluetooth modules, etc.
Analog Devices (ADI) is another major company that produces a large selection of power
management IC's. For instance, the ADP3806 is a stand-alone Li-Ion battery-charging IC
that combines high output voltage accuracy with precise current control to improve the
performance and reduce the design complexity of Constant-Current, Constant-Voltage
(CCCV) chargers. Other examples of battery chargers from ADI are: the ADP2291
Compact, 1.5 A Linear Charger for Single-Cell Li+ Battery; the ADP3804 High Frequency
Switch Mode Li-Ion Battery Charger; and the ADP3820, a 1% Precision, Single-Cell Li-Ion Battery Charger.
ADI also manufactures GSM Power Controllers, which provide all of the power
management functions required to properly power ADI's industry-leading GSM/GPRS
chipsets. Examples of GSM power controllers from ADI are the ADP3404, the ADP3405,
and the ADP3522.
Another group of power management IC's from Analog Devices are known as
Microprocessor Supervisors and Reset Generators. These circuits monitor power supply
voltage levels and code execution integrity in microprocessor-based systems. Aside from providing power-on reset signals, an on-chip watchdog timer can also reset the microprocessor if the processor fails to strobe the timer within a preset timeout period. A reset signal can also be
asserted by means of an external push-button, through a manual reset input. Examples of
microprocessor supervisory IC's and reset generators are the ADM6316; the ADM823; and
the ADM8617.
Aside from the above products, other power management IC's offered by ADI include: 1)
temperature sensors; 2) charge pumps for generating higher voltages from low voltage
inputs, using capacitors as storage elements; 3) hot swap controllers for providing accurate
inrush current control and protection against over-current events and voltage faults; 4) low
dropout linear regulators; 5) dual MOSFET drivers for use in non-isolated synchronous
buck power converters; 6) voltage sequencing and voltage tracking ICs for sequencing
multiple power supplies; 7) switching regulators that operate in step up, step down, and
inverting modes, and capable of generating a fixed or adjustable output voltage; and 8)
hardware system monitoring IC's.

Opto-Couplers
Optocouplers, also known as opto-isolators, are devices that provide optical coupling between two circuits that remain physically and electrically isolated from each other. Optocouplers, which can be assembled using traditional semiconductor packages, contain both a light-emitting diode (LED) and a photosensitive semiconductor device in the same housing.
The LED and the photosensitive device of an optocoupler are assembled in close
proximity with each other within the package, arranged in such a way that the light emitted
by the LED would strike the photosensitive device and trigger it into conduction. The
photosensitive device is usually a transistor, SCR, or triac in normally non-conducting
state. In such an arrangement, therefore, the photo-emitting device is the transmitter and
the photo-sensing device is the receiver.

Figure 1. Block Diagram of an Optocoupler


Optocouplers are excellent isolating devices because their coupling medium is light,
allowing very large isolation voltages (several kV) between circuits. The coupling light
doesn't have to be visible light - many commercially available optocouplers use infrared
light or even laser beams as transmission medium. The emission travels through a
transparent gap until it gets picked up by the photosensitive device. The output waveform
is identical to the input waveform, although their amplitudes usually differ.
Opto-isolation is important in applications where 'fragile' digital circuits are at risk of
being damaged by large transient voltages or spikes. Even if damage is not imminent,
such spikes can make a circuit malfunction. For instance, digital circuits that are used to
activate relays that drive large motors can experience inductive voltage kicks during
switching that can produce 'false' triggering pulses, causing the motors to randomly turn on
or off.
Another common application of optocouplers is in modems, allowing a computer to be
connected to the telephone line without risk of damage from line transients. Other
applications of optocouplers include: 1) isolated line receivers; 2) computer peripheral
interfacing; 3) digital isolation between ADC's and DAC's; 4) switching power supplies;
5) instrument input-output isolation; and 6) ground loop elimination.
Waveform Generators
A waveform generator is a device that produces repetitive waveforms at the desired
frequency. It can therefore generate sine, square, triangle, sawtooth, and other output
waveforms. There are now many off-the-shelf waveform generator IC's that can be
conveniently used and incorporated into a circuit that requires an auto-generated periodic
waveform. Some examples of these waveform generator IC's are presented below, with
brief descriptions provided by their respective manufacturers on their websites.

The ICL8038 waveform generator (manufactured by Intersil) is a monolithic integrated


circuit capable of producing high accuracy sine, square, triangular, sawtooth and pulse
waveforms with a minimum of external components. The frequency (or repetition rate) can
be selected externally from 0.001Hz to more than 300kHz using either resistors or
capacitors, and frequency modulation and sweeping can be accomplished with an external
voltage.
The ICL8038 is fabricated with advanced monolithic technology, using Schottky barrier
diodes and thin film resistors, and the output is stable over a wide range of temperature
and supply variations. These devices may be interfaced with phase locked loop circuitry to
reduce temperature drift to less than 250ppm/°C. Key features include: low frequency drift with temperature (250ppm/°C); low distortion of 1% (sine wave output); high linearity of 0.1% (triangle wave output); a wide frequency range of 0.001Hz to 300kHz; a variable duty cycle of 2% to 98%; high-level outputs (TTL to 28V); and simultaneous sine, square, and triangle wave outputs.
The MAX038 (manufactured by Maxim) is a precision, high-frequency function generator
that produces accurate sine, square, triangle, sawtooth, and pulse waveforms with a
minimum of external components. The internal 2.5V reference (plus an external capacitor
and potentiometer) lets the signal frequency be varied from 0.1Hz to 20MHz. An applied ±2.3V control signal varies the duty cycle between 10% and 90%, enabling the generation of sawtooth waveforms and pulse-width modulation.
A second frequency-control input, used primarily as a VCO input in phase-locked-loop applications, provides ±70% of fine control. This capability also enables the generation of
frequency sweeps and frequency modulation. The frequency and duty-cycle controls have
minimal interaction with each other. All output amplitudes are 2Vp-p, symmetrical about
ground. The low-impedance output terminal delivers as much as 20mA, and a two-bit
code applied to the TTL-compatible A0 and A1 inputs selects the sine, square, or triangle
output waveform.
The XR8038A (manufactured by Exar) is a precision waveform generator IC capable of
producing sine, square, triangular, sawtooth, and pulse waveforms, with a minimum
number of external components and adjustments. The XR8038A allows the elimination of
the external distortion adjusting resistor which greatly improves the temperature drift of
distortion, as well as lowering external parts count. Its operating frequency can be selected
over eight decades of frequency, from 0.001Hz to 200kHz, by the choice of external R-C
components.
The frequency of oscillation is highly stable over a wide range of temperature and supply
voltage changes. Both full frequency sweeping as well as smaller frequency variations
(FM) can be accomplished with an external control voltage. Each of the three basic waveform outputs (i.e., sine, triangle, and square) is simultaneously available from an independent output terminal. The XR8038A monolithic waveform generator uses advanced processing technology and Schottky barrier diodes to enhance its frequency performance.
Accelerometers
An accelerometer is a device that measures acceleration, or the rate of change of velocity
with respect to time. Accelerometers come in various forms and sizes, but cutting edge
micromachining technology advancements in recent years have allowed them to be built in
microchip form. Today, there are a multitude of semiconductor companies that
manufacture accelerometer IC's that not only measure linear acceleration, but other
parameters as well such as angular speed, vibrations, shock, and even tilt positions.
One of the pioneers in fabricating accelerometers in integrated circuit form is Analog
Devices, which produces the ADXL50 accelerometer. The ADXL50 provides an output voltage that varies proportionally with the amount of acceleration experienced along its sensitive axis. It has an input range of -50g to +50g, with a sensitivity of approximately 1 V per 50 g. Thus, a 50-g acceleration would either decrease or increase the 0-g output by 1V, depending on the direction of the acceleration. Since the ADXL50 is calibrated to output 1.8V when there is no acceleration, the output would be either 0.8 V or 2.8 V at 50 g, again depending on the acceleration's direction.
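This conversion amounts to a simple linear mapping; the sketch below applies the 1.8 V zero-g level and the roughly 1 V per 50 g (about 20 mV/g) sensitivity quoted above, treating both as approximate figures rather than datasheet-exact values.

```python
ZERO_G_OUTPUT = 1.8          # volts at 0 g, as stated above
SENSITIVITY = 1.0 / 50.0     # ~20 mV per g (approximately 1 V per 50 g)

def adxl50_output(accel_g):
    """Approximate ADXL50 output voltage for an acceleration given in g."""
    return ZERO_G_OUTPUT + SENSITIVITY * accel_g

for a in (-50, 0, 50):
    print(a, "g ->", round(adxl50_output(a), 2), "V")   # 0.8 V, 1.8 V, 2.8 V
```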
The ADXL50 is an example of a capacitive accelerometer, i.e., it measures capacitances in
order to measure the acceleration. This accelerometer applies two basic principles of
physics in its operation. The first one is Hooke's Law, which states that a spring, when stretched, will exert a restoring force F that's proportional to its increase in length x, i.e., F = kx. The second one is Newton's Second Law, which states that the force F acting on a body is equal to its mass m multiplied by its acceleration A, i.e., F = mA.
Combining these two equations, A = kx/m, which means that a body with mass m will
stretch a spring (whose elongation property is characterized by k) by a distance of x if its
acceleration is A. The ADXL50 has a mass-spring system consisting of a bar of silicon
(which is the mass) that is held by four tethers (one at each corner), as shown in Figure 1.
The four tethers, the feet of which are anchored, compose the spring system. When the
mass is subjected to an acceleration, it moves with respect to the anchored feet of the
tethers, causing the tethers to 'stretch' like a spring. The greater the acceleration
experienced, the larger is the displacement. This system therefore translates the
acceleration into a displacement, allowing the acceleration to be measured by measuring
the displacement.


Figure 1. A Differential Capacitive Accelerometer Mass-Spring System at rest (left) and when subjected to acceleration (right)
The displacement of the bar is measured in terms of the difference between two
capacitances formed by the accelerometer's structure in Figure 1. The two fixed capacitor
plates form a capacitor each with the inner capacitor plate that's attached to the moving
mass, i.e., they both share a common capacitor plate (the one that moves with the mass).
The value of the capacitance of each capacitor changes with the movement of the inner plate. Since the change in capacitance of one capacitor is opposite to that of the other capacitor, even the direction of the acceleration can be determined from the changes.
The amounts and rates of change of these two capacitances are then translated by on-chip
signal conditioning circuits into an output voltage that indicates the strength and direction
of the acceleration. The on-chip signal conditioning circuitry may consist of amplifiers,
filters, oscillators, demodulators, and even self-test circuitry.
Note that velocity is simply the integral of acceleration, and displacement is simply the
integral of velocity. As such, information about the velocity and displacement of the body
may also be known by performing the necessary integration steps on the acceleration
information obtained from the accelerometer.
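A minimal numeric version of that integration step is sketched below, accumulating velocity and displacement from evenly spaced acceleration samples; the sample data and time step are made up purely for illustration.

```python
# Simple rectangular integration of acceleration samples (illustrative data only)
dt = 0.01                                  # sample interval in seconds
accel = [0.0, 0.5, 1.0, 1.0, 0.5, 0.0]     # acceleration samples in m/s^2

velocity, displacement = 0.0, 0.0
for a in accel:
    velocity += a * dt                     # v is the integral of a with respect to time
    displacement += velocity * dt          # x is the integral of v with respect to time

print(f"final velocity = {velocity:.3f} m/s, displacement = {displacement*1000:.2f} mm")
```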
Performance parameters for accelerometers include: 1) the Zero g Offset, or the voltage
output at 0 g; 2) the Sensitivity, or the output voltage per g; 3) the Noise, which determines
the minimum resolution of the sensor; 4) the Temperature Range; 5) the Bias Drift with
Temperature, or how the 0 g output changes with temperature; 6) the Sensitivity Drift with Temperature, or how the sensitivity changes with temperature; 7) the Bandwidth; and 8)
the Power Consumption.
DC Motor Controllers
A DC motor controller is a device that provides or facilitates accurate control of a DC
motor. There are two major types of DC motor - the common DC motor with brushes, and
the brushless DC motor. Both types have a non-moving source of magnetic fields, known as
a stator, and a rotating source of magnetic fields, known as the rotor. The interaction of the
magnetic fields from the stator and the rotor is what makes the motor shaft turn.


A brushed DC motor, which is operated simply by applying a DC voltage across its


terminals, has a 'permanent magnet' stator and an 'electromagnet' rotor. It uses its brushes
to deliver commutating current from the motor's external terminals to the moving rotor
coils inside. A brushless DC motor, on the other hand, has a 'permanent magnet' rotor and
an 'electromagnet' stator. It requires a more complex form of energization to operate, namely the proper sequencing of the delivery of commutating currents to its stator coils. Brushless
DC motors are not subject to the arcing caused by brushes, and therefore have a longer
life.
Brushed DC motors are not widely used in precision motion control applications because they are difficult to control precisely. Brushless DC motors are more widely used, which is why many motor controller IC's for DC motors cater to the brushless type.
A typical brushless DC motor controller IC is equipped with a control circuit and a driver
circuit. The control circuit, which is the 'brain' of the controller IC, generates a control
output based on some form of input. For instance, it may receive feedback about the state
of the motor, usually in the form of input data from Hall-effect sensors, which it decodes. The control circuit then applies built-in commutation logic that interprets the decoded feedback
information and outputs the appropriate commutation commands to the driver circuit.
The driver circuit, which translates the logic circuit commands into motor-useable
currents, typically consists of a set of integrated power drivers that supply the correct
sequence of currents to the brushless DC motor. It may also have a built-in pulse width
modulation (PWM) circuit that varies the amounts of DC currents delivered to the motor to
control its torque and speed.
DC motor controller IC's can control DC motors over a wide range of motor supply
voltages (up to 50 V) and motor winding currents (several amperes). They may also
provide specialized outputs such as tachometer readings for use in speed control loops.

Two-Transistor DC Motor Driver


Figure 1. Circuit Diagram for a DC Motor Driver Using Transistors


This is a circuit for controlling an ordinary DC motor using a pair of transistors (1 NPN
and 1 PNP). Note the dual supply of this circuit (+6V and -6V).
A DC motor runs in one direction if the required voltage is applied across its winding and
runs in the opposite direction if the polarity of the applied voltage is reversed. This
function can easily be achieved by the circuit above.
In this circuit, a positive voltage at the control input turns on Q1 (an NPN transistor) but
turns off Q2 (a PNP transistor). This causes current from the +6V supply to flow through
the motor from node A to B (ground), making it turn in the forward direction. On the other
hand, a negative voltage turns off Q1 and turns on Q2, causing current to flow from node
B to node A of the motor, then to the negative supply through Q2, making it turn in the
opposite direction. An input of 0 V stops the motor.
Note that the values of the base resistors of the transistors (or even the transistors
themselves) required by the circuit may be different from those shown in Figure 1,
depending on the motor being driven. Experimentation may therefore be required on the
part of the hobbyist to make this circuit work.


Transistor-based DC Motor Controller (Single Supply)

Figure 1. Circuit Diagram for a DC Motor Controller Using Bipolar Transistors


This is a circuit for controlling an ordinary DC motor using two pairs of transistors (1 NPN
and 1 PNP for each pair).
A DC motor runs in one direction if the required voltage is applied across its winding and
runs in the opposite direction if the polarity of the applied voltage is reversed. This
function can easily be achieved by the circuit above.
In this circuit, a 'logic 1' voltage at Control 1 and a 'logic 0' voltage at Control 2 will turn
on Q1 and Q4 and turn off Q2 and Q3, causing the motor to turn in one direction.
Reversing the voltage levels at Control 1 and Control 2 will reverse the pairs of transistors
that are 'on' and 'off', causing the motor to turn in the opposite direction. Putting the same
logic input at Control 1 and Control 2 (both '1' or both '0') will cause the motor to stop
turning.
The values of the base resistors of the transistors (or even the transistors themselves)
required by the circuit may be different from those shown in Figure 1, depending on the
motor being driven. Experimentation may therefore be required on the part of the hobbyist
to make this circuit work.

Stepper and Servo Motor Controllers


Stepper motor and servo motor controllers are devices that provide or facilitate accurate
control of stepper motors and servo motors, respectively.
Stepper Motor Controllers
A stepper motor is a motor that converts digital pulses into mechanical shaft rotation.
These pulses control the rotation of the shaft in small angular 'steps', hence the name
'stepper motor'. Stepper motors come in a wide range of angular resolutions, from 90 degrees per step down to as low as 0.72 degrees per step. The speed and torque of a stepper
motor are determined by the amount of current through its windings. Stepper motors are
simple, low-cost, yet highly reliable motors that can operate in almost any environment.
A stepper motor rotates in discrete 'steps' because its movement is achieved by aligning
certain 'teeth' of the rotor with certain poles of the stator (depending on which windings are
energized and which are not) at any given time. As such, there are only specific
equilibrium points at which the rotor can 'rest.' Every time a new set of pulses is delivered,
the rotor rotates to the next 'equilibrium point', and its angular position with respect to the
stator is locked in place until a new set of pulses arrives.
There are three stepping modes in which a stepper motor can be operated, namely, full
stepping, half stepping, and micro-stepping. Every step taken under full stepping mode
results in a rotation equal to 100% of the angular displacement specified for a single step.
Half-stepping results in only 50% of this, so it would take twice as many steps under the
half-stepping mode as what it would take under the full-stepping mode to cover the same
angular displacement. Micro-stepping further reduces the angular displacement equivalent
to a single step of the motor.
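The arithmetic behind these modes is straightforward; the sketch below counts the pulses needed to cover a given rotation for an assumed 1.8-degree-per-step motor under full-, half-, and micro-stepping, with the motor and step angle being assumptions for the example.

```python
FULL_STEP_ANGLE = 1.8        # degrees per full step (an assumed motor)

def steps_needed(rotation_deg, subdivision=1):
    """Pulses needed for a rotation; subdivision=2 is half-stepping, 16 is 1/16 micro-stepping."""
    return round(rotation_deg / (FULL_STEP_ANGLE / subdivision))

print(steps_needed(360))          # 200 full steps per revolution
print(steps_needed(360, 2))       # 400 half steps per revolution
print(steps_needed(360, 16))      # 3200 micro-steps per revolution at 1/16 stepping
```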
A stepper motor controller must be able to handle the generation and conditioning of
pulses needed to operate the stepper motor easily even in complex applications. To
accomplish this, a typical stepper motor controller IC consists of three basic elements: 1)
an indexer, which generates low level signals that correspond to step pulses and direction
signals (collectively referred to as 'indexer commands') needed to control the stepper
motor; 2) a motor driver circuit, which translates the indexer commands into power that
energizes the appropriate windings of the stepper motor; and 3) an interface that would
allow it to be controlled more conveniently by a computer, PLC, or microcontroller.

Characteristics or features that designers take into consideration when choosing a stepper
motor controller IC include the following: 1) the ability to put a stepper motor in
continuous 'run' mode at various speed profiles, or in 'step' mode with precision control; 2)
directional control; 3) available stepping modes for resolution control; 4) programmability;
5) specialized I/O controls; 6) feedback mechanisms about the state of the stepper motor;
7) effective and efficient energization of the stepper motor; 8) industry-standard user
interfaces; and 9) ease of use.
Servo Motor Controllers
A servo motor is a motor whose angular displacement at any one time is determined by a
coded signal, which is usually the width of the pulse applied to its control terminal. It is
operated in a closed loop, i.e., it requires some form of analog feedback (usually provided
by a potentiometer) to let it know the current rotor position. Thus, the repeatability of a
servo motor's positioning depends greatly on the stability of the potentiometer and other
components used in the feedback circuit. Since a stepper motor operates without feedback,
a servo motor is a better choice than a stepper motor if monitoring of the rotor position at
any given time is important.
Since a servo motor operates on the widths of the control pulses it receives, a servo motor
controller must be capable of pulse width modulation (PWM). The servo motor expects to
see a pulse regularly, say, every 20 milliseconds. The pulse width encodes the commanded angular displacement, i.e., within the servo's specified range, the longer the pulse, the larger the rotation will be.
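A common (though not universal) convention maps a 1 ms to 2 ms pulse, repeated every 20 ms, onto the servo's full travel; the sketch below assumes that convention and a 180-degree servo purely for illustration.

```python
def servo_pulse_ms(angle_deg, min_ms=1.0, max_ms=2.0, travel_deg=180.0):
    """Pulse width commanding a shaft angle, assuming a 1-2 ms / 180-degree servo."""
    angle_deg = max(0.0, min(travel_deg, angle_deg))      # clamp to the servo's range
    return min_ms + (max_ms - min_ms) * angle_deg / travel_deg

for angle in (0, 90, 180):
    print(angle, "deg ->", servo_pulse_ms(angle), "ms pulse, repeated every 20 ms")
```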
Examples of features offered by servo motor controller IC's in the market include: 1)
ability to support multiple motors; 2) velocity and trapezoidal profiling; 3) directional
control; 4) programmability; 5) specialized I/O controls; 6) stable feedback mechanisms;
7) overcurrent and power failure protection; 8) industry-standard user interfaces; and 9)
ease of use.
4-Phase Stepper Motor Transistor Driver


Figure 1. Circuit Diagram for a 4-Phase Stepper Motor Driver Using NPN Transistors
This is a circuit for driving a 4-phase stepper motor using 4 digital signals from a source of
digital control signals such as a computer or a stepper motor controller IC. A stepper motor
is actuated by energizing its internal windings one at a time, i.e., the motor shaft turns a
fraction of a revolution every time a winding is energized. The shaft of a stepper motor
'locks' into place after the incremental turn is made, as long as the power to the winding is sustained. Thus, to turn the motor shaft continuously, the windings must be energized
sequentially in continuous cycles.
The basic pattern for energizing the windings in sequence is conveniently achieved by
digital means, such as from the output port of a computer or from the digital outputs of a
stepper motor controller IC. These digital signals, however, are not strong enough to drive
the windings of a stepper motor directly, so there's a need to 'amplify' the current capacity
of these signals. This is achieved by using these digital signals to drive the base of a power
transistor, which in turn drives the windings of the stepper motor, as shown in Figure 1
above.
In this circuit, a logic '1' is fed into the base of a transistor to energize the winding that's
connected to the transistor. This logic '1' input turns on the transistor, allowing current to
pass through the winding from the 12V supply to ground, energizing the winding in the
process. Inputting a logic '0' to the base of the transistor turns it off, cutting off the current
flow through the winding.
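The 'logic 1, one winding at a time' pattern described above is just a rotating bit; the sketch below generates that full-step (wave drive) sequence, where each tuple gives the logic levels applied to the four transistor bases. It is a behavioral sketch only and is not tied to any particular controller IC.

```python
from itertools import cycle

# One-winding-at-a-time (wave drive) sequence for a 4-phase stepper motor:
# each tuple lists the logic levels fed to the four transistor bases.
SEQUENCE = [(1, 0, 0, 0),
            (0, 1, 0, 0),
            (0, 0, 1, 0),
            (0, 0, 0, 1)]

def run_steps(n_steps, reverse=False):
    """Print the drive pattern for n_steps, cycling through the windings in order."""
    seq = list(reversed(SEQUENCE)) if reverse else SEQUENCE
    for step, pattern in zip(range(n_steps), cycle(seq)):
        print(f"step {step}: drive bases with {pattern}")

run_steps(8)          # two full cycles through the four windings
```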
Phase-Locked Loop IC's


A Phase-Locked Loop (PLL) device is a closed-loop electronic circuit that controls an


oscillator so that it provides an output signal that maintains a constant phase angle with
respect to a reference signal, which can range from a fraction of a Hz to many GHz. It is
one of the most widely used linear IC's for communications applications today, having the
capability to do one or more of the following: 1) compare signal frequencies; 2) synthesize
an output signal that has a frequency that's equal to that of a reference signal; 3) keep
another signal equal in frequency with the reference signal.
A basic PLL circuit generally consists of a phase frequency detector, a charge pump, a loop
filter, a voltage-controlled oscillator (VCO), and some form of output. The oscillator
generates the periodic output signal that needs to be in phase with the reference signal. If
the frequency of this oscillator-generated signal lags behind that of the reference signal,
the phase detector causes the charge pump to drive current into the loop filter which
changes the oscillator's control voltage in such a way that the oscillator frequency is
increased. By the same token, the phase detector causes the charge pump to draw current
from the loop filter system to change the control voltage and slow down the oscillator if its
output signal starts leading the reference signal. The loop filter also removes jitter from the charge pump output to smooth the control voltage.
The VCO output stabilizes when it has already attained the same frequency and phase as
the reference signal. In effect, this system ensures that the oscillator frequency gets 'locked'
into the reference signal frequency. Depending on the application, the useful output
derived from the PLL system would either be the output signal of the voltage-controlled
oscillator or the control voltage to the oscillator.
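As an illustration of this tracking behavior, the sketch below is a crude, purely behavioral software model of the loop (an idealized phase detector, a proportional-integral loop filter standing in for the charge pump and filter, and a frequency-controlled oscillator); all numbers are made up and are not tied to any particular PLL IC.

```python
import math

fs = 10_000.0        # simulation sample rate (Hz)
f_ref = 100.0        # reference signal frequency (Hz)
f0 = 90.0            # free-running oscillator frequency (Hz)
kp, ki = 0.2, 0.01   # loop-filter gains, chosen only for illustration
k_vco = 50.0         # oscillator gain: Hz of shift per unit of control voltage

ref_phase = vco_phase = integrator = 0.0
freq = f0

for _ in range(5000):
    ref_phase = (ref_phase + 2 * math.pi * f_ref / fs) % (2 * math.pi)
    vco_phase = (vco_phase + 2 * math.pi * freq / fs) % (2 * math.pi)
    error = math.sin(ref_phase - vco_phase)   # idealized phase-detector output
    integrator += ki * error                  # integral ('charge pump') term
    control = kp * error + integrator         # smoothed control voltage
    freq = f0 + k_vco * control               # 'VCO': control voltage sets the frequency

print(f"oscillator settles near {freq:.1f} Hz")   # approaches the 100 Hz reference
```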

Figure 1. Simple PLL Block Diagram


PLL devices are used heavily in communications applications, primarily for keeping a
communications signal locked on a given frequency, or for generating a signal of a given
frequency. For instance, almost all transceivers utilize PLL devices to synthesize the
stable, high-frequency oscillations needed for radio and wireless communications. PLL's
are also used in the demodulation of both AM and FM signals. In space communications,
PLL devices are employed for coherent carrier tracking and threshold extension, bit
synchronization, and symbol synchronization.
PLL devices are also used in the recovery of small signals that would otherwise be lost.
Clock timing information from a data stream (such as from a disk drive) may also be
recovered by PLL devices. Other PLL applications include microprocessor clock
multipliers, modems, and various decoding circuits.


Charge-Coupled Devices
A Charge-Coupled Device, or CCD, is basically an array of closely-spaced metal-oxide-semiconductor (MOS) diodes that can store and transfer information using packets of
electric charge, or charge packets. Applying the proper sequence of voltage pulses (clock
signals) to a CCD biases the array of MOS diodes into the deep depletion region where the
charge packets may be moved in a controlled manner across the semiconductor substrate.
Some people also refer to a CCD's MOS diodes as 'MOS capacitors.'
There are two basic types of CCD, namely, the surface channel CCD (SCCD) and the
buried channel CCD (BCCD). Charge is stored and transferred at the semiconductor
surface in the SCCD, while in the BCCD, the charge packets are stored and transferred in
the bulk semiconductor below the surface.

The semiconductor substrate of a CCD may be n- or p-type. Over this semiconductor


substrate, silicon dioxide is grown as a dielectric or insulating layer. An array of very
closely-spaced metal electrodes is then formed over this dielectric layer. This is why the
CCD is considered a MOS device, i.e., its top to bottom layers are metal, oxide, and
semiconductor.
In an n-type CCD, grounding the substrate and applying a negative voltage -V1 to all the
closely-spaced metal electrodes will create a depletion region in the substrate right beneath
the oxide layer. This depletion region is devoid of majority carriers (electrons), since these
have been repelled by the negative voltage applied at the electrodes. On the other hand,
some of the minority carriers (holes) present in the substrate will be attracted towards this
depletion region.
Applying a significantly more negative voltage -V2 at one of the electrodes while
maintaining the other electrodes at -V1 will cause the depletion region beneath the more
negative electrode (let's call it 'Ek') to extend more deeply (see Fig. 1). This deeper
depletion region beneath Ek creates a 'potential well' that extends from the edge of the
electrode to its left to the edge of the electrode to its right. This 'potential well' is the only
region within the depletion region wherein a positive charge may move about. Thus,
placing a positive charge into this potential well traps it there, since it can not move
outside the well. This, in essence, is how a CCD stores a charge, and how it can be used as
a memory device.


Figure 1. A simplified cross-section of a CCD


The stored charge under the more negative electrode Ek (and the one which has an
extended depletion region) may actually be laterally transferred to the next adjacent
electrode, Ek+1. This is done by also putting Ek+1 at the more negative voltage -V2,
while allowing Ek to adjust back to -V1. While this is happening, all other electrodes (esp.
Ek-1 and Ek+2) must be at -V1. As Ek ramps up to -V1, its potential well becomes
shallower, and all the trapped charges in it get transferred to the deeper potential well of Ek+1. Eventually, the charges previously stored under Ek will be stored in the potential well under Ek+1.
Moving the charge packets from one electrode to the other in this manner requires at least
three electrodes to move one bit of information. In any transfer cycle, one electrode is at
-V2, another electrode is ramping up from -V2 to -V1, and the third electrode is at -V1.
The lateral movement of charge packets in a CCD from one electrode to the next is very
similar to how digital data move in a shift register.
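The shift-register analogy can be captured in a few lines; the sketch below is a purely behavioral model in which each list element holds the charge packet stored under one pixel, and one complete three-phase clock cycle moves every packet one position toward the output register. The specific charge values are made up.

```python
# Behavioral sketch only: 'wells' holds the charge packet stored under each pixel
wells = [5, 0, 12, 0, 7, 0, 0, 3, 0]

def clock_cycle(wells):
    """One full three-phase clock cycle: every packet shifts one pixel toward the output."""
    shifted_out = wells[-1]              # packet that reaches the output register
    return [0] + wells[:-1], shifted_out

readout = []
for _ in range(len(wells)):
    wells, packet = clock_cycle(wells)
    readout.append(packet)

print(readout)   # packets emerge last pixel first: [0, 3, 0, 0, 7, 0, 12, 0, 5]
```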
Although the CCD was invented as a memory device, its extreme sensitivity to light soon
made it a popular choice as an image sensor. In an image-sensing CCD chip, each MOS
diode or 'capacitor' represents one pixel. The charge packets are generated when light
excites electrons in the valence band into the conduction band. The light-generated charge
packets that carry the image information are stored and transferred from one potential well
to another until they are eventually shifted out of an output register. Most video cameras
today use a CCD for their image sensing requirements.
ASIC's
The term 'ASIC' stands for 'application-specific integrated circuit'. An ASIC is basically
an integrated circuit designed specifically for a special purpose or application. Strictly
speaking, this also implies that an ASIC is built for one and only one customer. An
example of an ASIC is an IC designed for a specific line of cellular phones of a company,
whereby no other products can use it except the cell phones belonging to that product line.
The opposite of an ASIC is a standard product or general purpose IC, such as a logic gate
or a general purpose microcontroller, both of which can be used in any electronic
application by anybody.


Aside from the nature of its application, an ASIC differs from a standard product in the
nature of its availability. The intellectual property, design database, and deployment of an
ASIC is usually controlled by just a single entity or company, which is generally the end-user of the ASIC too. Thus, an ASIC is proprietary by nature and not available to the
general public. A standard product, on the other hand, is produced by the manufacturer for
sale to the general public. Standard products are therefore readily available for use by
anybody for a wider range of applications.
The first ASIC's, known as uncommitted logic arrays or ULA's, utilized gate array technology. Having up to a few thousand gates, they were customized by varying the mask for the metal interconnections. Thus, the functionality of such a device could be varied by modifying which nodes in the circuit are connected and which are not. Later versions became more generalized, customization of which involved variations in both the metal and polysilicon layers.
ASIC's are usually classified into one of three categories: full-custom, semi-custom, and
structured.
Full-custom ASIC's are those that are entirely tailor-fitted to a particular application from
the very start. Since its ultimate design and functionality are pre-specified by the user, it is manufactured with all the photolithographic layers of the device already fully defined, just like most off-the-shelf general purpose IC's. The use of predefined masks for manufacturing leaves no option for circuit modification during fabrication, except perhaps for some minor fine-tuning or calibration. This means that a full-custom ASIC cannot be
modified to suit different applications, and is generally produced as a single, specific
product for a particular application only.
Semi-custom ASIC's, on the other hand, can be partly customized to serve different
functions within their general area of application. Unlike full-custom ASIC's, semi-custom
ASIC's are designed to allow a certain degree of modification during the manufacturing
process. A semi-custom ASIC is manufactured with the masks for the diffused layers
already fully defined, so the transistors and other active components of the circuit are
already fixed for that semi-custom ASIC design. The customization of the final ASIC
product to the intended application is done by varying the masks of the interconnection
layers, e.g., the metallization layers.
Structured or Platform ASIC's, which belong to a relatively new ASIC classification, are
those which have been designed and produced from a tightly defined set of: 1) design
methodologies; 2) intellectual properties (IP's); and 3) well-characterized silicon, aimed at
shortening the design cycle and minimizing the development costs of the ASIC. A
platform ASIC is built from a group of 'platform slices', with a 'platform slice' being
defined as a pre-manufactured device, system, or logic for that platform. Each slice used
by the ASIC may be customized by varying its metal layers. The 're-use' of pre-manufactured and pre-characterized platform slices simply means that platform ASIC's are
not built from scratch, thereby minimizing design cycle time and costs.


Examples of ASIC's include: 1) an IC that encodes and decodes digital data using a
proprietary encoding/decoding algorithm; 2) a medical IC designed to monitor a specific
human biometric parameter; 3) an IC designed to serve a special function within a factory
automation system; 4) an amplifier IC designed to meet certain specifications not
available in standard amplifier products; 5) a proprietary system-on-a-chip (SOC); and 6)
an IC that's custom-made for a particular automated test equipment.
RLC Measurement
The circuits on this page are types of bridge circuits, which are very useful in measuring
unknown values of R, L, and C based on known values of the other components in the
same bridge circuit. The potentiometers and other variable components in the bridge are
adjusted until the reading of the meter at the center of the bridge becomes zero, in which
case the bridge is said to be balanced.

Figure 1. Circuit Diagrams for a Resistance Bridge (left) and a Capacitance Bridge (right)
In Figure 1, the circuit on the left is a simple Wheatstone bridge. When this bridge is
balanced, R1/R2=R3/R4. The circuit on the right is a capacitance bridge which is
balanced when: C2 = C1(R1/R2) and R4 = R3(R2/R1).
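With the Wheatstone bridge balanced, an unknown arm follows directly from the balance condition; the sketch below solves R4 from R1/R2 = R3/R4 using made-up resistor values.

```python
def unknown_arm(R1, R2, R3):
    """Solve R4 from the Wheatstone balance condition R1/R2 = R3/R4."""
    return R3 * R2 / R1

# Made-up example: known arms of 1 k, 2 k, and 1.5 k imply R4 = 3 k
print(unknown_arm(1_000, 2_000, 1_500))   # 3000.0 ohms
```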


Figure 2. Diagram for an Inductance Bridge Circuit


The circuit in Figure 2 is used for measuring unknown inductances. When this bridge is
balanced, the following equations apply:
L1 = R1R4C1 / [1 + (2πfR3C1)²]; and
R2 = (2πfC1)²R1R3R4 / [1 + (2πfR3C1)²]
where f is the frequency of the applied input voltage.
Resonance and Quality Factor Q
The term "resonance" literally translates to "to vibrate with". In physics, resonance is the
phenomenon wherein two systems are vibrating within the same frequency range, creating
order. In electronics, resonance is a state wherein a tuned circuit's capacitive reactance is
equal to its inductive reactance.

A series LC circuit that is in resonance, i.e., excited by a signal at its resonant frequency,
exhibits zero reactance. On the other hand, a parallel LC circuit exhibits infinite reactance
at its resonant frequency. Series and parallel LC circuits may therefore be combined to
form either a band-pass filter or a band-stop filter.
The frequency at which resonance in a tuned LC circuit occurs is given by the following
formula:
fr = 1 / [2π sqrt(LC)], where
fr = resonant frequency (Hz);
L = the inductance (H); and
C = the capacitance (F).
Using the equation above, one can calculate either the value of the inductance L or
capacitance C that will result in resonance at a given frequency fr: L = 1 / [4π²fr²C] or C = 1 / [4π²fr²L].
The ratio of the reactance of the tuned circuit to its resistance is called the "quality factor", or Q factor, or simply Q. Equivalently, Q is 2π times the ratio of the energy stored to the energy dissipated in the circuit per cycle.
The reactance of an inductance L is equal to 2πfL while that of a capacitance C is equal to
1/(2πfC). Thus, for a series RL circuit, the quality factor Q is given by the equation: Q =
2πfrL / R. On the other hand, the quality factor Q for a series RC circuit is given by the
equation: Q = 1 / (2πfrCR).
The quality factor Q of a tuned circuit is given by the equation: Q = fr / B where B is the
bandwidth of the circuit in Hz. The bandwidth of a circuit is the frequency interval
between its half-power points f2 and f1, or B = f2 - f1. Thus,
Q = fr / (f2 - f1).

Since Q is the ratio of the resonant or center frequency fr to the bandwidth B, Q is a
measure of the 'sharpness' of the response of the tuned circuit to the resonant frequency.
Thus, a circuit with a high Q will exhibit a higher amplitude at the resonant frequency, but
its response will fall off more rapidly as the frequency moves away from the resonant
frequency.
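As a quick numerical check of the relationships above, the short Python sketch below evaluates fr, Q, and B for a series-tuned circuit; the component values are illustrative assumptions, not taken from the text.

    import math

    L = 100e-6   # assumed inductance: 100 uH
    C = 250e-12  # assumed capacitance: 250 pF
    R = 10.0     # assumed series resistance: 10 ohms

    fr = 1 / (2 * math.pi * math.sqrt(L * C))  # resonant frequency
    Q = 2 * math.pi * fr * L / R               # Q of the series-tuned circuit
    B = fr / Q                                 # bandwidth between the half-power points

    print(f"fr = {fr/1e3:.1f} kHz, Q = {Q:.1f}, B = {B/1e3:.1f} kHz")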
Band-Pass Filters
A Band-Pass Filter is a circuit that only allows a certain range or band of frequencies to
pass, while attenuating or rejecting signals whose frequencies are either below a lower cut-off frequency or above an upper cut-off frequency. A simple band-pass filter may be
obtained by combining a low-pass filter and a high-pass filter. The range of frequencies
that a band-pass filter allows to pass is referred to as the 'passband'. The band-pass filter is
the opposite of the band-stop filter.
An ideal band-pass filter is one whose passband doesn't undergo any change (i.e., no gain
nor attenuation), but completely rejects all frequencies outside the passband. In an ideal
band-pass filter, the transition of the response from outside the passband to within the
passband and vice versa is instantaneous.
Of course, an ideal band-pass filter doesn't exist in the real world, i.e., some attenuation
still occurs within the passband while complete attenuation is not achieved outside the
passband. The amount of attenuation outside the passband may be described in terms of
the 'roll-off' of the filter, which is the attenuation in dB per octave of frequency.

LC band-pass filters, or filters containing resonant circuits composed of inductors and
capacitors, have a resonant frequency between the lower cut-off frequency f1 and the upper
cut-off frequency f2. At this resonant frequency, the gain of the band pass filter is at its
maximum.
The over-all impedance of a resonant series LC circuit consisting of an inductor and a
capacitor in series with each other will drop to zero at the resonant frequency because the
reactances of the inductor and the capacitor cancel each other out under resonance. On the
other hand, the over-all impedance of a resonant parallel LC circuit consisting of an
inductor and a capacitor in parallel with each other will increase to infinity at the resonant
frequency, i.e., the reactances of the inductor and the capacitor result in zero current flow
under resonance.
Resonant series and parallel LC circuits may thus be combined to form a band-pass filter
as shown in Figure 1. In this circuit, the resonant series LC circuits are used to allow only
the desired frequency range to pass while the resonant parallel LC circuit is used to
attenuate frequencies outside the passband by shunting them towards the ground.


Figure 1. A Band-Pass Filter


The following equations apply to the band-pass filter in Figure 1 above:
1) L = Zo / [π(f2-f1)]; Lf = (f2-f1)Zo / (4πf2f1)
2) C = (f2-f1) / (4πf2f1Zo); Cf = 1 / [π(f2-f1)Zo]
3) fo = sqrt(f1f2) = 1 / [2π(sqrt(LC))] = 1 / [2π(sqrt(LfCf))]
4) Zo = sqrt(L/Cf) = sqrt(Lf/C)
where Zo = line impedance; f1 = lower cut-off frequency; f2 = upper cut-off frequency; fo
= resonant frequency.
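As a rough numerical illustration of these design equations, the Python sketch below computes the four component values for an assumed line impedance and assumed cut-off frequencies (all values hypothetical), and checks the consistency relations in equations 3 and 4.

    import math

    Zo, f1, f2 = 600.0, 10e3, 20e3  # assumed line impedance (ohms) and cut-off frequencies (Hz)

    L  = Zo / (math.pi * (f2 - f1))                # series-arm inductance
    C  = (f2 - f1) / (4 * math.pi * f1 * f2 * Zo)  # series-arm capacitance
    Lf = (f2 - f1) * Zo / (4 * math.pi * f1 * f2)  # shunt-arm inductance
    Cf = 1 / (math.pi * (f2 - f1) * Zo)            # shunt-arm capacitance

    fo = math.sqrt(f1 * f2)
    assert abs(fo - 1 / (2 * math.pi * math.sqrt(L * C))) < 1e-6   # equation 3
    assert abs(Zo - math.sqrt(L / Cf)) < 1e-6                      # equation 4
    print(f"L = {L*1e3:.2f} mH, C = {C*1e9:.2f} nF, Lf = {Lf*1e3:.2f} mH, Cf = {Cf*1e9:.2f} nF")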
Band-Stop Filters

A Band-Stop Filter is a circuit that allows most frequencies to pass, but blocks or
attenuates a certain range or band of frequencies. It is also known as a 'band-elimination'
filter or a 'band-rejection filter'. The band-stop filter is the opposite of the band-pass filter.
The range of frequencies that a band-stop filter blocks is known as the 'stopband', which is
bound by a lower cut-off frequency f1 and a higher cut-off frequency f2. A special type of
band-stop filter, known as the 'notch filter', is one whose stopband is very narrow, thus
creating a 'notch' in the frequencies allowed to pass. The notch filter is therefore a band-stop filter that has a high Q factor. Combining several notch filters together forms a 'comb
filter', which is a filter that has multiple stopbands.

An ideal band-stop filter is one that completely rejects all frequencies within its stopband,
while allowing all other frequencies to pass unchanged (no gain nor attenuation). In an ideal
band-stop filter, the transition of the response from outside the stopband to within the
stopband and vice versa is instantaneous. Of course, an ideal band-stop filter doesn't exist in
the real world, i.e., complete attenuation within the stopband cannot be achieved, while
frequencies outside the stopband undergo some level of attenuation.
The over-all impedance of a resonant series LC circuit consisting of an inductor and a
capacitor in series with each other will drop to zero at the resonant frequency because the
reactances of the inductor and the capacitor cancel each other out under resonance. On the
other hand, the over-all impedance of a resonant parallel LC circuit consisting of an
inductor and a capacitor in parallel with each other will increase to infinity at the resonant
frequency, i.e., the reactances of the inductor and the capacitor result in zero current flow
under resonance.
Resonant series and parallel LC circuits may thus be combined to form a notch filter as
shown in Figure 1. In this circuit, the resonant parallel LC circuits are used to block
frequencies within the stopband, while the resonant series LC circuit is used to attenuate
frequencies within the stopband by shunting them towards the ground.

Figure 1. A Notch Filter Using Resonant LC


Circuits
The following equations apply to the notch filter in Figure 1 above:
1) L = Zo(f2-f1) / (πf1f2); Lf = Zo / [4π(f2-f1)]
2) C = 1 / [4π(f2-f1)Zo]; Cf = (f2-f1) / (πf2f1Zo)
3) fo = sqrt(f1f2) = 1 / [2π(sqrt(LC))] = 1 / [2π(sqrt(LfCf))]
4) Zo = sqrt(L/Cf) = sqrt(Lf/C)
where Zo = line impedance; f1 = lower cut-off frequency; f2 = upper cut-off frequency; fo
= resonant frequency.

The Schmitt Trigger


A Schmitt Trigger (or Schmidt Trigger) is a comparator circuit or device that incorporates
a feedback system such that it responds to two comparator thresholds instead of one. Its
dual-threshold property as a comparator allows it to be more resistant to input noise and
achieve a more stable output.
Just like other multivibrator circuits, its output can have two possible states - 'low' and
'high'. When the input exceeds the higher threshold, the output goes to 'high'. On the other
hand, the output goes to 'low' when the input goes below the lower threshold. The output
retains its current level if the input is in between the two thresholds.
The circuit got its name from its inventor, US scientist Otto Schmitt. It is called a trigger
because the output doesn't change until the change in input is large enough to 'trigger' a
reversal in the level of the output.
The fact that the Schmitt trigger responds to two input thresholds and exhibits an output
that depends on the 'history' of the input implies that the circuit has some memory. This
phenomenon is also known as 'hysteresis', which is defined as the dependence of an output
signal upon the history of prior inputs and the direction of the current traversal of the
input.
As shown in Figure 1, the electronic symbol for a Schmitt trigger is a triangle with a
curved image inside. This curved image is actually the symbol for hysteresis.

Figure 1. Symbol for a Schmitt Trigger


An ordinary comparator has only one input threshold, i.e., its output is high if the input is
greater than this threshold and low if the input is lower than this threshold. If the input is
hovering close to this threshold, a small amount of input noise can make the input oscillate
between the two sides of the threshold, causing the output to oscillate between 'high' and
'low' as well. A Schmitt trigger is immune to this problem, since its dual-threshold property
requires a larger input swing to switch the state of the output.
The Schmitt trigger is widely used in cleaning up or conditioning a signal for digital use,
or in improving digital transitions from low to high and vice versa. For example, a noisy
periodic analog signal can be turned into a clean pulse train by feeding it into a Schmitt
trigger.
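The dual-threshold behaviour can also be modelled in software. The minimal Python sketch below (threshold voltages are arbitrary assumptions) shows how hysteresis keeps a noisy input that hovers between the two thresholds from toggling the output.

    def schmitt(samples, v_low=1.0, v_high=2.0):
        """Return a 0/1 output state for each input voltage in 'samples'."""
        state = 0                             # assume the output starts 'low'
        out = []
        for v in samples:
            if state == 0 and v > v_high:     # rising input must exceed the upper threshold
                state = 1
            elif state == 1 and v < v_low:    # falling input must drop below the lower threshold
                state = 0
            out.append(state)                 # between thresholds, the previous state is held
        return out

    # a noisy input hovering near 1.5 V never toggles the output
    print(schmitt([1.4, 1.6, 1.45, 1.55, 2.1, 1.8, 1.2, 0.9]))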
Bridge Circuits
A bridge circuit is a special type of electrical circuit wherein the current from a voltage
source splits into two parallel paths. These parallel paths contain components (such as
resistors, capacitors, and inductors), the types and arrangement of which depend on what
the purpose of the bridge circuit is. The parallel paths recombine again to let the current
return to the source in a single conductor, thereby closing the circuit.

The parallel paths are 'bridged' together by another electrical path that usually contains a
load or a measuring device (such as a galvanometer), hence the name 'bridge circuit.'
Bridge circuits are primarily used in measurement applications and power supplies.


The best known bridge circuit is the Wheatstone Bridge, which is shown in Figure 1. Here,
one can see that the circuit splits into two paths; the left path contains R2 and R1 while the
right path contains R3 and Runknown. The two parallel paths are bridged together by an
ammeter or galvanometer connected between nodes A and B.
The Wheatstone Bridge is used for accurately measuring the value of Runknown, provided
that the values of the other resistors are known and may be adjusted. To know more about
how a Wheatstone Bridge works, please see this separate article on the Wheatstone Bridge.

Figure 1. The Wheatstone Bridge


Aside from the Wheatstone Bridge, there are many other bridge circuits, the more widely
known of which are as follows:
1)
2)
3)
4)
5)
6)

Wien Bridge
Schering Bridge
Hay Bridge
Owen Bridge
Maxwell Bridge
Resonance Bridge
The Wheatstone Bridge

A Wheatstone Bridge is a bridge circuit used for measuring an unknown electrical


resistance by balancing two legs of its circuit, one of which contains the unknown
resistance. It was invented by Samuel Hunter Christie in 1833, but it was Sir Charles
Wheatstone who improved and popularized it in 1843.

Figure 1 below shows a diagram of the Wheatstone Bridge.



Figure 1. The Wheatstone Bridge


In the Wheatstone Bridge shown in Figure 1, the resistance values of resistors R2 and R3
are known, while the resistance value of variable resistor R1 may be adjusted.
The resistance value of R1 is adjusted until the current reading of the ammeter connected
between points A and B of the circuit becomes zero. When this happens, the bridge is said
to be 'balanced', i.e., the voltages at points A and B are already equal, so the value of the
unknown resistance may easily be calculated using voltage ratios: Runknown / R3 = R1 /
R2.
Thus, the following equation applies when the ammeter reading becomes zero:
Runknown = R1R3 / R2.
The equivalent resistance Rb of the circuit when it is balanced is just the resistance of the
left leg (R1+R2) in parallel with the resistance of the right leg (R3+Runknown).
Mathematically,
Rb = [(R1+R2)(R3+Runknown)] / [R1 + R2 + R3 + Runknown].
Alternatively, if the resistance values of R1, R2, and R3 are known but R1 can not be
adjusted, then the value of Runknown can still be calculated using Kirchhoff's Voltage
Law. This set-up is often seen in strain gauges and resistance temperature detection
circuits, since it is quicker to read a voltmeter than to manually adjust a resistor to balance
the circuit.
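As a numerical illustration (resistor values assumed, not from the text), the balance and equivalent-resistance equations above can be evaluated directly in Python:

    R1, R2, R3 = 250.0, 100.0, 40.0  # assumed values; R1 has been adjusted until the ammeter reads zero

    R_unknown = R1 * R3 / R2         # Runknown = R1R3/R2 at balance
    Rb = ((R1 + R2) * (R3 + R_unknown)) / (R1 + R2 + R3 + R_unknown)

    print(f"Runknown = {R_unknown:.1f} ohms, balanced bridge resistance Rb = {Rb:.1f} ohms")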

The Wien Bridge

A Wien Bridge is a bridge circuit used for measuring an unknown capacitance by


balancing the loads of its four arms, one of which contains the unknown capacitance.

Figure 1 below shows a diagram of the Wien Bridge.

Figure 1. The Wien Bridge


As shown in Figure 1, one arm of a Wien bridge consists of a capacitor in series with a
resistor (C1 and R3) and another arm consists of a capacitor in parallel to a resistor (C2
and R4). The other two arms simply contain a resistor each (R1 and R2). The values of
R1 and R2 are known, and R4 and C2 are both adjustable. The unknown values are those of
C1 and R3.
Like other bridge circuits, the measuring ability of a Wien Bridge depends on 'balancing'
the circuit. Balancing the circuit in Figure 1 means adjusting R4 and C2 until the current
through the ammeter between points A and B becomes zero. This happens when the
voltages at points A and B are equal. When the Wien Bridge is balanced, it follows that
R2/R1 = Z1/Z2 where Z1 is the impedance of the arm containing C1 and Z2 is the
impedance of the arm containing C2.
Mathematically, when the bridge is balanced,
R2/R1 = (1/(ωC1) + R3) / (R4/[ωC2(R4 + 1/(ωC2))]) wherein ω = 2πf; or
R2/R1 = (1/(ωC1) + R3) / (R4/[ωC2R4 + 1]); or
R2/R1 = (1/(ωC1) + R3)(ωC2 + 1/R4); or
R2/R1 = C2/C1 + ωC2R3 + 1/(ωC1R4) + R3/R4.
When the bridge is balanced, the capacitive reactances cancel each other out, so
R2/R1 = C2/C1 + R3/R4. Thus, C2/C1 = R2/R1 - R3/R4.
Note that the balancing of a Wien Bridge is frequency-dependent. The frequency f at
which the Wien Bridge in Figure 1 becomes balanced is the frequency at which ωC2R3 =
1/(ωC1R4), or 2πfC2R3 = 1/(2πfC1R4). Thus, the frequency f is given by the following
equation: f = (1 / 2π) x (sqrt(1/[R3R4C1C2])).
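A quick numerical check of the balance-frequency equation, with assumed (illustrative) component values:

    import math

    R3, R4 = 10e3, 10e3    # assumed resistances (ohms)
    C1, C2 = 10e-9, 10e-9  # assumed capacitances (farads)

    f_balance = (1 / (2 * math.pi)) * math.sqrt(1 / (R3 * R4 * C1 * C2))
    print(f"balance frequency = {f_balance:.1f} Hz")  # about 1591.5 Hz for these values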


The Schering Bridge


A Schering Bridge is a bridge circuit used for measuring an unknown electrical
capacitance and its dissipation factor. The dissipation factor of a capacitor is the ratio
of its resistance to its capacitive reactance. The Schering Bridge is basically a four-arm
alternating-current (AC) bridge circuit whose measurement depends on balancing the loads
on its arms. Figure 1 below shows a diagram of the Schering Bridge.

Figure 1. The Schering Bridge


In the Schering Bridge above, the resistance values of resistors R1 and R2 are known,
while the resistance value of resistor R3 is unknown. The capacitance values of C1 and C2
are also known, while the capacitance of C3 is the value being measured. To measure R3
and C3, the values of C2 and R2 are fixed, while the values of R1 and C1 are adjusted
until the current through the ammeter between points A and B becomes zero. This happens
when the voltages at points A and B are equal, in which case the bridge is said to be
'balanced'.
When the bridge is balanced, Z1/(1/(2πfC2)) = R2/Z3, where Z1 is the impedance of R1 in
parallel with C1, 1/(2πfC2) is the reactance of C2, and Z3 is the impedance of R3 in series
with C3. In an AC circuit that has a capacitor, the capacitor contributes a capacitive
reactance to the impedance. The capacitive reactance of a capacitor C is 1/(2πfC).
As such, Z1 = R1/[2πfC1((1/(2πfC1)) + R1)] = R1/(1 + 2πfC1R1) while Z3 = 1/(2πfC3) +
R3. Thus, when the bridge is balanced:
2πfC2R1/(1 + 2πfC1R1) = R2/(1/(2πfC3) + R3); or
2πfC2(1/(2πfC3) + R3) = (R2/R1)(1 + 2πfC1R1); or
C2/C3 + 2πfC2R3 = R2/R1 + 2πfC1R2.
When the bridge is balanced, the negative and positive reactive components are equal and
cancel out, so
2πfC2R3 = 2πfC1R2 or
R3 = C1R2 / C2.
Similarly, when the bridge is balanced, the purely resistive components are equal, so
C2/C3 = R2/R1 or
C3 = R1C2 / R2.
Note that the balancing of a Schering Bridge is independent of frequency.
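As a numerical illustration of the balance equations (component values and test frequency are assumed, not from the text):

    import math

    R1, R2 = 1.2e3, 4.7e3   # assumed resistances (ohms)
    C1, C2 = 33e-9, 100e-9  # assumed capacitances (farads)
    f = 1e3                 # assumed test frequency (Hz), needed only for the dissipation factor

    R3 = C1 * R2 / C2              # unknown resistance at balance
    C3 = R1 * C2 / R2              # unknown capacitance at balance
    D = 2 * math.pi * f * R3 * C3  # dissipation factor = R3 divided by 1/(2*pi*f*C3)

    print(f"R3 = {R3:.1f} ohms, C3 = {C3*1e9:.2f} nF, D = {D:.4f}")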
The Hay Bridge
A Hay Bridge is an AC bridge circuit used for measuring an unknown inductance by
balancing the loads of its four arms, one of which contains the unknown inductance. One
of the arms of a Hay Bridge has a capacitor of known characteristics, which is the
principal component used for determining the unknown inductance value. Figure 1 below
shows a diagram of the Hay Bridge.

Figure 1. The Hay Bridge


As shown in Figure 1, one arm of the Hay bridge consists of a capacitor in series with a
resistor (C1 and R2) and another arm consists of an inductor L1 in series with a resistor
(L1 and R4). The other two arms simply contain a resistor each (R1 and R3). The values
of R1 and R3 are known, and R2 and C1 are both adjustable. The unknown values are those
of L1 and R4.
Like other bridge circuits, the measuring ability of a Hay Bridge depends on 'balancing' the
circuit. Balancing the circuit in Figure 1 means adjusting R2 and C1 until the current
through the ammeter between points A and B becomes zero. This happens when the
voltages at points A and B are equal. When the Hay Bridge is balanced, it follows that
Z1/R1 = R3/Z2 wherein Z1 is the impedance of the arm containing C1 and R2 while Z2 is
the impedance of the arm containing L1 and R4. Thus, Z1 = R2 + 1/(2πfC1) while Z2 = R4
+ 2πfL1.
Mathematically, when the bridge is balanced,


[R2 + 1/(2πfC1)] / R1 = R3 / [R4 + 2πfL1]; or
[R4 + 2πfL1] = R3R1 / [R2 + 1/(2πfC1)]; or
R3R1 = R2R4 + 2πfL1R2 + R4/(2πfC1) + L1/C1.
When the bridge is balanced, the reactive components are equal, so
2πfL1R2 = R4/(2πfC1), or R4 = (2πf)²L1R2C1.
Substituting R4, one comes up with the following equation:
R3R1 = (R2 + 1/(2πfC1))((2πf)²L1R2C1) + 2πfL1R2 + L1/C1; or
L1 = R3R1C1 / [(2πf)²R2²C1² + 4πfC1R2 + 1]; or
L1 = R3R1C1 / [1 + (2πfR2C1)²] after dropping the reactive components of the equation
since the bridge is balanced.
Thus, the equations for L1 and R4 for the Hay Bridge in Figure 1 when it is balanced are:
L1 = R3R1C1 / [1 + (2πfR2C1)²]; and
R4 = (2πfC1)²R2R3R1 / [1 + (2πfR2C1)²]
Note that the balancing of a Hay Bridge is frequency-dependent.
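A short numerical example of the balance equations (component values and source frequency are illustrative assumptions):

    import math

    R1, R3 = 1.0e3, 1.0e3   # assumed fixed-arm resistances (ohms)
    R2, C1 = 470.0, 100e-9  # assumed values of the adjusted arm at balance
    f = 1e3                 # assumed source frequency (Hz)

    w = 2 * math.pi * f
    L1 = R3 * R1 * C1 / (1 + (w * R2 * C1) ** 2)
    R4 = (w * C1) ** 2 * R2 * R3 * R1 / (1 + (w * R2 * C1) ** 2)

    print(f"L1 = {L1*1e3:.1f} mH, R4 = {R4:.1f} ohms")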
The Owen Bridge
An Owen Bridge is an AC bridge circuit used for measuring an unknown inductance by
balancing the loads of its four arms, one of which contains the unknown inductance.
Figure 1 below shows a diagram of the Owen Bridge.

Figure 1. The Owen Bridge


As shown in Figure 1, one arm of the Owen bridge consists of a capacitor in series with a
resistor (C1 and R1) and another arm consists of an inductor L1 in series with a resistor
(L1 and R4). One arm contains just a capacitor (C2) while the fourth arm just contains a
resistor (R3). The values of C2 and R3 are known, and R1 and C1 are both adjustable. The
unknown values are those of L1 and R4.

Like other bridge circuits, the measuring ability of an Owen Bridge depends on 'balancing'
the circuit. Balancing the circuit in Figure 1 means adjusting R1 and C1 until the current
through the bridge between points A and B becomes zero. This happens when the voltages
at points A and B are equal. When the Owen Bridge is balanced, it follows that Z2/Z1 =
R3/Z4 wherein Z2 is the impedance of C2, Z1 is the impedance of the arm containing C1
and R1, and Z4 is the impedance of the arm containing L1 and R4. Mathematically, Z2 =
1/(2πfC2); Z1 = R1 + 1/(2πfC1) while Z4 = R4 + 2πfL1.
Thus, when the bridge is balanced,
[1/(2πfC2)] / [R1 + 1/(2πfC1)] = R3 / [R4 + 2πfL1]; or
[R4 + 2πfL1] = (2πfC2R3)[R1 + 1/(2πfC1)]; or
R4 + 2πfL1 = 2πfC2R3R1 + C2R3/C1
When the bridge is balanced, the negative and positive reactive components are equal and
cancel out, so
2πfL1 = 2πfC2R3R1 or
L1 = C2R3R1.
Similarly, when the bridge is balanced, the purely resistive components are equal, so
R4 = C2R3/C1.
Note that the balancing of an Owen Bridge is independent of frequency.
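A short numerical example of these balance equations (all component values assumed):

    R1, R3 = 820.0, 1.0e3   # assumed: R1 is the adjusted resistor at balance, R3 is fixed (ohms)
    C1, C2 = 47e-9, 100e-9  # assumed: C1 is the adjusted capacitor at balance, C2 is fixed (farads)

    L1 = C2 * R3 * R1  # unknown inductance
    R4 = C2 * R3 / C1  # unknown series resistance

    print(f"L1 = {L1*1e3:.1f} mH, R4 = {R4:.1f} ohms")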

The Maxwell Bridge


A Maxwell Bridge, also known as the Maxwell-Wien Bridge, is an AC bridge circuit used
for measuring an unknown inductance by balancing the loads of its four arms, one of
which contains the unknown inductance. Figure 1 below shows a diagram of the Maxwell
Bridge.


Figure 1. The Maxwell Bridge


As shown in Figure 1, one arm of the Maxwell bridge consists of a capacitor in parallel
with a resistor (C1 and R2) and another arm consists of an inductor L1 in series with a
resistor (L1 and R4). The other two arms just consist of a resistor each (R1 and R3). The
values of R1 and R3 are known, and R2 and C1 are both adjustable. The unknown values
are those of L1 and R4.
Like other bridge circuits, the measuring ability of a Maxwell Bridge depends on
'balancing' the circuit. Balancing the circuit in Figure 1 means adjusting C1 and R2 until
the current through the bridge between points A and B becomes zero. This happens when
the voltages at points A and B are equal. When the Maxwell Bridge is balanced, it follows
that Z1/R1 = R3/Z2 wherein Z1 is the impedance of C1 in parallel with R2, and Z2 is the
impedance of L1 in series with R4. Mathematically, Z1 = R2/(1 + 2πfC1R2), while Z2 = R4
+ 2πfL1.
Thus, when the bridge is balanced,
[R2/(1 + 2πfC1R2)] / R1 = R3 / [R4 + 2πfL1]; or
R4 + 2πfL1 = (R1R3/R2)(1 + 2πfC1R2) = R1R3/R2 + 2πfC1R1R3.
When the bridge is balanced, equating the purely resistive components gives
R1R3 = R2R4, or R4 = R1R3/R2, while equating the reactive components gives
L1 = R1R3C1.
Note that the balancing of a Maxwell Bridge is independent of the source frequency.
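A short numerical example of the balance results above (component values assumed):

    R1, R3 = 1.0e3, 1.5e3   # assumed fixed-arm resistances (ohms)
    R2, C1 = 2.2e3, 100e-9  # assumed values of the adjusted parallel arm at balance

    L1 = R1 * R3 * C1  # unknown inductance; note that no frequency term appears
    R4 = R1 * R3 / R2  # unknown series resistance

    print(f"L1 = {L1*1e3:.1f} mH, R4 = {R4:.1f} ohms")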

The Resonance Bridge


A Resonance Bridge is an AC bridge circuit used for measuring an unknown inductance,
an unknown capacitance, or an unknown frequency, by balancing the loads of its four
arms. Figure 1 below shows a diagram of the Resonance Bridge.


Figure 1. The Resonance Bridge


As shown in Figure 1, three arms of the resonance bridge have a resistor each (R1, R2, and
R3), while the fourth arm has a series RLC circuit (R4, C1, L1). The values of R1, R2, R3,
and R4 are all known. If L1 is the unknown variable, then C1 must be adjustable. If C1 is
the unknown variable, then L1 must be adjustable.
Like other bridge circuits, the measuring ability of a resonance bridge depends on
'balancing' its circuit. Balancing the circuit in Figure 1 means adjusting C1 (if L1 is the
unknown) or L1 (if C1 is the unknown) until the current through the bridge between points
A and B becomes zero. This happens when the voltages at points A and B are equal.
When the resonance bridge is balanced, it follows that R2/R1 = R3/Z wherein Z is the total
impedance of the RLC circuit of the fourth arm. Thus, Z = R4 + 2πfL1 - 1/(2πfC1).
The resonance bridge got its name from the fact that it becomes balanced when L1 and C1
are in resonance with each other. A series LC circuit that is in resonance, i.e., excited by a
signal at its resonant frequency, exhibits zero reactance. The frequency at which resonance
in a tuned LC circuit occurs is given by the following formula:
fr = 1 / [2π(sqrt(LC))] where
fr = resonant frequency (Hz);
L = the inductance (H); and
C = the capacitance (F).
Thus, when a resonance bridge is balanced, the combined reactance of L1 and C1 becomes
zero, and Z simply becomes equal to R4. The equation for a balanced resonance bridge
therefore simplifies to R2/R1 = R3/R4, or R4 = R3R1/R2. The frequency f at which the
resonance bridge becomes balanced is given by: f = 1 / [2π(sqrt(L1C1))]. The source
frequency must therefore be known in order to measure L1 (or C1) in terms of C1 (or L1).
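A short numerical example (the resistor values, L1, and C1 are assumed for illustration):

    import math

    R1, R2, R3 = 1.0e3, 1.0e3, 2.2e3  # assumed known resistances (ohms)
    L1, C1 = 10e-3, 100e-9            # assumed inductance and capacitance in the series RLC arm

    R4 = R3 * R1 / R2                                   # series-arm resistance at balance
    f_balance = 1 / (2 * math.pi * math.sqrt(L1 * C1))  # frequency at which the bridge balances

    print(f"R4 = {R4:.1f} ohms, balance frequency = {f_balance:.1f} Hz")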

Basic Internal Circuit of a Simple Operational Amplifier (Op-Amp)


Figure 1. Basic Circuit Diagram for a Simple Operational Amplifier (Op-Amp)


Figure 1 shows the basic circuit diagram of a very simple operational amplifier.
An op amp basically has 4 circuit stages: 1) an input stage; 2) an intermediate stage; 3) a
level-shifting stage; and 4) an output stage. The input stage of an op-amp is usually a pair
of matched transistors configured as a dual-input differential amplifier. The output of this
input stage is taken from across the outputs (collectors in the above example) of the paired
transistors. This balanced output is fed into another dual-input differential amplifier that
serves as the intermediate stage. The output of this intermediate stage is taken from just
one of the transistors, i.e., it is single-ended and therefore not balanced.
The dc level at the output of the intermediate stage is high with respect to ground, so a
level-shifting circuit such as an emitter follower is used to shift it down closer to ground.
The output stage of an op amp usually consists of a push/pull pair of complementary
transistors which increases the swing of the output voltage and enhances the load current
capacity of the op amp. The gains of the input and intermediate stages of an op amp are
high, while those of the emitter follower and output stage are generally close to 1.

Dual Op-Amp Differential Amplification

Figure 1. A Dual Operational Amplifier IC Configured for Differential Amplification


Figure 1 shows how a dual op-amp IC such as the LM358 or 1458 may be configured to
serve as a differential amplifier, which is a circuit that only amplifies the difference
between two potentials. Note that both the input and output of this amplifier circuit are
'differential' in nature. All common-mode signals are rejected by this circuit, which makes
this circuit very useful in amplifying signals in noisy environments or in applications that
require high sensitivity.
In this circuit, a differential input voltage is applied across the non-inverting inputs of the
two op-amps (pins 3 and 5). The output is taken from across the two outputs of the two
op-amps (pins 1 and 7).
The gain G of this circuit is given by: G = 1 + (2Rf / Rin). Thus, Vout = Vin (1 + (2Rf /
Rin)).
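As a quick numerical example of the gain equation (resistor values and input level assumed):

    Rf, Rin = 47e3, 10e3  # assumed feedback and gain-setting resistors (ohms)
    Vin = 0.020           # assumed 20 mV differential input (volts)

    G = 1 + (2 * Rf / Rin)  # gain of the two-op-amp differential amplifier
    Vout = G * Vin

    print(f"G = {G:.1f}, differential output = {Vout*1e3:.1f} mV")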


TTL-to-CMOS Interfacing Techniques


There are instances wherein the output of a TTL logic gate needs to be used for driving the
input of a CMOS gate. Since the voltage-current characteristics and requirements of a TTL
gate differ from those of a CMOS gate, it is good practice to use proper interfacial
components between them when connecting them to each other. Below are some common
techniques used in connecting a TTL gate to a CMOS gate.

Figure 1. Interfacing any TTL gate to any CMOS gate using the same power supply (5V)
When the CMOS gate that the TTL gate will drive also uses the same 5-V supply used by
the TTL gate, the simple interfacing technique shown in Figure 1 may be employed. Here,
a pull-up resistor is just placed between the TTL output and the 5-V supply.


Figure 2. Interfacing an Open-Collector TTL gate to any CMOS gate using different
power supplies
When the CMOS gate that the TTL gate will drive has a supply voltage that's different
from the 5-V supply used by the TTL gate and if the TTL gate has an open collector, the
simple interfacing technique shown in Figure 2 may be employed. Here, a 10-kΩ pull-up
resistor is just placed between the TTL output and the CMOS gate's supply.

Figure 3. Interfacing any TTL gate to any CMOS gate using different power supplies
When the CMOS gate that the TTL gate will drive has a supply voltage that's different
from the 5-V supply used by the TTL gate and if the TTL gate does not have an open
collector, it would be good to use an NPN transistor to translate the TTL output voltage
level to a correct CMOS input voltage level as shown in Figure 3 so as not to overstress
the TTL gate.
CMOS-to-TTL Interfacing Techniques
There are instances wherein the output of a CMOS logic gate needs to be used for driving
the input of a TTL gate. Since the voltage-current characteristics and requirements of a
CMOS gate differ from those of a TTL gate, it is good practice to use proper interfacial
components between them when connecting them to each other. Below are some common
techniques used in connecting a CMOS gate to a TTL gate.


Figure 1. Interfacing any CMOS gate to any TTL gate using the same power supply (5V)
When the CMOS gate that will drive the TTL gate also uses the same 5-V supply used by
the TTL gate, the simple interfacing technique shown in Figure 1 may be employed. Here,
a pull-down resistor is just placed between the CMOS gate output and ground.

Figure 2. Interfacing an Open-Collector TTL gate to any CMOS gate using different
power supplies
When the CMOS gate that will drive the TTL gate has a supply voltage that's different
from the 5-V supply used by the TTL gate, it would be good to use an NPN transistor to
translate the CMOS output voltage level to a correct TTL input voltage level as shown in
Figure 2. Note that the transistor uses the 5-V TTL supply for its Vcc.


Figure 3. Interfacing any TTL gate to any CMOS gate using different power supplies
As an alternative to the technique shown in Figure 2, the technique shown in Figure 3 may
be employed to connect a CMOS gate to a TTL gate. Instead of a transistor, a CMOS
buffer (inverting or non-inverting) may be used as long as it is supplied from the 5-V TTL
supply. The example in Figure 3 is an inverting buffer, so the input to the TTL gate is an
inverted logic of the CMOS output.

Techniques for Interfacing Op Amps to CMOS and TTL


There are instances wherein the output of an operational amplifier or a voltage comparator
needs to be used for driving the input of a CMOS or TTL gate. Since the voltage-current
characteristics and requirements of an op-amp or comparator differ from those of a CMOS
or TTL gate, it is good practice to use proper interfacial components between them when
connecting them to each other. Below are some common techniques used in connecting an
op amp or comparator to a CMOS or TTL gate.


Figure 1. Interfacing an Op Amp or Comparator to any CMOS gate using the same power
supply
When the op amp or comparator uses the same power supply as the CMOS gate that it is
driving, the simple interfacing technique shown in Figure 1 may be employed. Here, a
current-limiting resistor is just placed between the op amp/comparator output and the
CMOS gate input.

Figure 2. Interfacing an Op Amp or Comparator to any CMOS gate using different
power supplies
When the op amp or comparator uses a power supply that's different from the supply used
by the CMOS gate that it is driving, the simple interfacing technique shown in Figure 2
may be employed. Here, aside from a current-limiting resistor between the op amp output
and the CMOS gate input, input protection diodes are placed between the gate input and
the positive supply and between the gate input and ground.


Figure 3. Interfacing an Op Amp or Comparator to any TTL gate using the same power
supply
When the op amp or comparator uses the same 5-V power supply as the TTL gate that it is
driving, the simple interfacing technique shown in Figure 3 may be employed. Here, a
current-limiting resistor is just placed between the op amp/comparator output and the TTL
gate input. A shunt resistor is also placed across the gate input and ground.
Level-Shifting Opto-Isolator Circuits
Opto-isolator devices may be used for voltage level-shifting purposes. Below are
examples of level-shifting circuits that utilize opto-isolators (one non-inverting and one
inverting).

Figure 1. Non-inverting Voltage Level Shifter using an Opto-Isolator


Figure 1 shows an opto-isolator that's configured as a non-inverting voltage level shifter
(5V to 12V). When the input is high (close to 5V), the light-emitting diode does not
conduct. With no light shining on the phototransistor, it is also off, causing the collector-tapped
output to be very near 12V. When the input is low (close to 0V), the LED conducts,
shining light on the phototransistor. The transistor turns on and pulls the output to ground.

Figure 2. Inverting Voltage Level Shifter using an Opto-Isolator


Figure 2 shows an opto-isolator that's configured as an inverting voltage level shifter (5V
to 0V and 0V to 12V). When the input is high (close to 5V), the light-emitting diode does
not conduct. With no light shining on the phototransistor, it is also off, causing the emitter-tapped output to also be 'low'. When the input is low (close to 0V), the LED conducts,
shining light on the phototransistor. The transistor turns on and pulls up the output to very
near 12 V.
Opto-Isolation / Opto-coupling Between CMOS and TTL Gates
Optocouplers (also known as opto-isolators) may be used for interfacing CMOS and TTL
gates without physically connecting the output of the driving gate to the input of the driven
gate. Below are examples of optocoupler circuits for CMOS-CMOS, TTL-TTL, and TTL-CMOS interfacing.


Figure 1. CMOS-CMOS Interfacing Using an Optocoupler


Figure 1 shows how an optocoupler may be used to interface a CMOS gate to another
CMOS gate while isolating the output of the former to the input of the latter. When the
input is low, the output of the input NAND gate is high, and the light-emitting diode does
not conduct. With no light shining on the phototransistor, it is also off, causing the
collector-tapped output (and the output NAND gate's input) to be high. This causes the
circuit's output to be 'low'. When the input is high, the LED conducts, shining light on the
phototransistor. The transistor turns on and pulls the output NAND's input to ground,
causing the output to go high. Note that R1 is chosen so as not to overload the output of
the input NAND gate when the LED is conducting.

Figure 2. TTL-TTL Interfacing Using an Optocoupler


Figure 2 shows how an optocoupler may be used to interface a TTL gate to another TTL
gate while isolating the output of the former to the input of the latter. The operation of this
circuit is the same as that of the first example. Thus, when the input is low, the output of
the input inverter is high, and the light-emitting diode does not conduct. With no light
shining on the phototransistor, it is also off, causing the collector-tapped output (and the
output inverter's input) to be high. This causes the circuit's output to be low. When the
input is high, the LED conducts, shining light on the phototransistor. The transistor turns
on and pulls the output inverter's input to ground, causing the output to go high. Note that
R1 is chosen so as not to overload the output of the input inverter when the LED is
conducting.

Figure 3. TTL-CMOS Interfacing Using an Optocoupler


Figure 3 shows how an optocoupler may be used to interface a TTL gate to a CMOS gate
while isolating the output of the former to the input of the latter. The operation of this
circuit is the same as the first two examples, except that the voltage levels on the CMOS
(output) side are higher than the TTL (input) side. This basically shows how an
optocoupler may also be used in level-shifting.
Driving Relays and LED's with TTL and CMOS Outputs
Digital devices need to interface with mechanical relays and LED's from time to time.
Below are examples of how TTL and CMOS digital outputs are usually connected to
relays and LED's.


Figure 1. A Mechanical Relay Driven by a TTL or CMOS Output


Figure 1 shows how a TTL or CMOS output may be used to control a mechanical relay.
Actually, the logic gate is just driving the base of the transistor, not the relay. It is the
transistor that energizes the relay. A gate output of 'high' turns on the transistor, energizing
the coil of the relay. A gate output of 'low' turns off the transistor, causing the relay to
deactivate. Since the relay coil is an inductor, it will generate a large voltage across the
transistor if the latter turns off while current is flowing through the coil (an inductor
opposes an instantaneous change in current). This inductive kick can destroy the
transistor. To protect it from damage, a diode is placed at its collector as shown in Figure
1, so that any large voltage appearing at the transistor's collector will cause the diode to
conduct, shunting the energy towards the positive supply.

Figure 2. LED Turned 'On' by a TTL or CMOS Output of '1'


Figure 2 shows how a light-emitting diode (LED) may be controlled by a TTL or CMOS
output. In this configuration, the LED turns on when the output of the gate is 'high'. A
current-limiting resistor is placed between the LED and the gate output to protect both the
LED and the output. The lower the resistance, the brighter the LED when it is lit. If the
output of the gate is driving just the LED, the resistor may be as low as 330 ohms for a 5V
supply. However, if the gate output will also be used to drive another gate's input, the
current through the LED must be reduced significantly (by using a much higher resistor) to
ensure that the gate's high output voltage will still meet the minimum level required to be
recognized as a '1'.

Figure 3. LED Turned 'On' by a TTL or CMOS Output of '0'


Figure 3 shows another way of connecting a light-emitting diode (LED) to a TTL or
CMOS output. In this configuration, the LED turns on when the output of the gate is 'low'.


Switch Debouncing Using NAND Gates


Logic Gates may be used to remove oscillations or unwanted pulses (bouncing) produced
by mechanical switches and relays during the mechanical switching process. Below is an
example of a switch debouncing circuit that utilizes NAND gates.

Figure 1. Bounce-free Mechanical Switching with NAND Gates


Figure 1 shows a circuit that uses two NAND gates to stabilize the output of a mechanical
switch or relay. The output of the circuit as shown is '0'. Thus, at this point, X1 is '0'.
When the switch is thrown from '0' to '1', the 1.2 k-ohm resistor pulls up Y1 to '1', but this
does not cause the output of NAND A to change, i.e., it remains at '1'. Thus X2 is still
'1'. However, the switching action causes Y2 to be pulled to ground, causing the output to
switch from '0' to '1'. This causes X1 to also switch to '1'.
Since at this point, Y1 is still '1', the output of NAND A switches to '0', which keeps the
output at logic '1'. In effect, the switching action of the mechanical switch was
transformed by the two NAND gates into a clean digital transition from '0' to '1' at the
circuit's output.
Throwing the mechanical switch back to '0' causes Y1 to go to '0' and Y2 to go to '1'. This
causes the output of NAND A to go to '1' and the output of the circuit to go to '0' since both
X2 and Y2 are now '1'. Again, the mechanical switching action was transformed by the
circuit into a clean digital transition from '1' to '0'.

Switch Debouncing Using NOR Gates


Logic Gates may be used to remove oscillations or unwanted pulses (bouncing) produced
by mechanical switches and relays during the mechanical switching process. Below is an
example of a switch debouncing circuit that utilizes NOR gates.

Figure 1. Bounce-free Mechanical Switching with NOR Gates


Figure 1 shows a circuit that uses two NOR gates to stabilize the output of a mechanical
switch or relay. The output of the circuit as shown is '0'. Thus, at this point, X1 is '0'.
When the switch is thrown from '0' to '1', the 3.3 k-ohm resistor pulls up Y1 to '1', causing
the output of NOR A to switch from '1' to '0'. This also causes X2 to go to '0'. However, the
output of the circuit (which is the output of NOR B) remains at '0' until the instant the
mechanical switch pulls Y2 to ground. Once this happens, both X2 and Y2 will be '0',
causing the output of NOR B to go to '1'.
At this point, X1 also goes to '1', keeping the output of NOR A at '0' and the output of
NOR B at '1'. In effect, the switching action of the mechanical switch was transformed by
the two NOR gates into a clean digital transition from '0' to '1' at the circuit's output.
Throwing the mechanical switch back to '0' causes Y1 to go to '0' and Y2 to go to '1'. This
causes the output of the circuit (output of NOR B) to go to '0'. This causes X1 to go to '0',
which forces the output of NOR A to go to '1' since Y1 is also '0' at this point. The change
of NOR A's output to '1' stabilizes the output of the circuit at '0'. Again, the mechanical
switching action was transformed by the circuit into a clean digital transition from '1' to '0'.


TCP/IP - The Internet Protocol Suite


The internet protocol suite, or TCP/IP protocol suite, refers to the set of communications
protocols that form the basis for transmitting and routing packets of data on the internet. Its
name comes from two of its most important protocols - the Transmission Control Protocol
(TCP) and the Internet Protocol (IP). These protocols were developed by DARPA to allow
communication between various types of computers and computer networks. The TCP/IP
is one thing that all internet sites have in common.
The entire TCP/IP protocol suite may be viewed as consisting of 5 layers: 1) the Physical
Layer, which includes specifications and protocols for physical devices such as modems,
USB, RS-232, Wi-Fi, ISDN, Bluetooth, and the Ethernet; 2) the Data Link Layer, which
includes protocols for linking devices belonging to a local network, i.e., protocols for local
area networking; 3) the Network Layer, which includes protocols for interconnected
networks, or 'internetworks', (the IP is in the Network Layer); 4) the Transport Layer,
which covers the end-to-end, host-to-host transport protocols that mediate between the Network Layer and the
Application Layer (the TCP is in the Transport Layer); and 5) the Application Layer,
which includes protocols for actual internet applications such as file transfer, email, web
browsing, and terminal emulation applications - the FTP, HTTP, POP3, SMTP, DNS,
TELNET, etc.
Each layer in the protocol suite addresses a specific set of problems for successful data
transmission, but may also provide a designated service to an upper layer protocol. In
general, the higher the layer, the closer its service is to the end-user, and the more abstract
the data that it handles.


As mentioned, the IP and the TCP are the most important components of the internet
protocol. The IP is a connectionless protocol while TCP is connection-oriented, i.e., it
establishes a connection and maintains it until the required data exchanges have been
completed.
The TCP
The connection-oriented transport protocol TCP enables two hosts to establish a reliable
connection and exchange data, guaranteeing not only delivery of the data, but their
delivery in the same order as they were sent as well. It transmits data as an unstructured
stream of bytes.
The TCP employs sequence numbers and acknowledgement messages to provide the
sending node with information about the delivery of the data packets transmitted to the
destination node. If data is lost during transmission, the TCP can retransmit the data until
they are successfully delivered, or until a time-out condition is reached. The TCP can also
slow down the rate of flow of data transfer if the sending computer is too fast for the
receiving computer. Delivery information can also be relayed by the TCP to upper-layer
protocols and other applications.
The TCP, which is in the Transport Layer, runs on top of the internet protocol (IP), which
is in the Network Layer.
The IP
The IP is considered to be the 'heart' of the internet protocol suite. It is in the third layer of
the TCP/IP layer structure. Its primary function is routing of data between interconnected
networks, but it also provides error reporting as well as fragmentation and reassembly of
data units for inter-network transmission. IP networks all over the world communicate
with each other through globally unique IP addresses, each of which consists of 3 parts.
The first part of the IP address is the network address, the second part is the subnet
address, and the third part is the host address.
COMMUNICATION ELECTRONICS

The Simplest Transmitter


The simplest transmitter is an oscillator, an example of which is shown in Figure 1. An
oscillator is a circuit that generates a periodic signal of a defined frequency. This signal
can be used as a carrier of information. In this circuit, the carrier frequency is set by the
crystal, which vibrates at its characteristic frequency when voltage is applied across it.


Figure 1. Circuit Diagram of a Simple Transmitter


The simplicity of this circuit lies in the fact that information is transmitted manually by the
key operator using a special code of 'dots' and 'dashes' to represent alphanumeric
characters. The signal generated by the oscillator is fed into the base of the transistor.
Whenever the key is pressed by the operator, this transistor drives the RF output
transformer of the antenna at the crystal frequency.
Thus, a burst of RF energy is emitted by the circuit every time the key is pressed by the
operator. A 'dot' is a short burst of RF energy and a 'dash' is a long one. This method of
transmission is known as continuous wave (CW) transmission.
The CW transmitter in Figure 1 is not a powerful one, typically being less than 1 watt.
Nonetheless, with proper selection of frequency and a good antenna, it can send
information halfway around the world.
It is not difficult to improve this simple transmitter significantly. It can be enhanced by
incorporating a power amplifier that boosts the power of the RF transmission. Such a setup is depicted in Figure 2 below.


Figure 2. An Enhanced CW Transmitter


Crystal Set - the Simplest Receiver

Figure 1. Crystal Set Circuit Diagram


This is the diagram for the simplest receiver circuit, which is also commonly known as a
'crystal set' receiver. This circuit consists of an antenna, a tuned circuit, a diode or 'crystal'
detector, and headphones (or earphones).
In this circuit, the antenna picks up the signal, causing a flow of current in the primary
winding of T1, which is just a coupling transformer. This primary current flow induces a
voltage in T1's secondary winding, which develops a charge across capacitor C1. Note
that both the secondary winding of T1 and capacitor C1 form a series resonant circuit.
At the resonant frequency, the voltage developed across C1 is significantly higher than at
other frequencies. This voltage is known as the resonant rise voltage or resonant step up
voltage. Thus, when the frequency received is 'tuned' to the resonant circuit, the received
signal is readily passed on to the rest of the circuit.
The diode or crystal rectifies the signal from the tuned circuit. Capacitor C2, on the other
hand, 'filters out' the carrier signal, thereby allowing only the information signal to pass
through to the attached listening device. What this means is that the simplest receiver
must, at the minimum, be a demodulator.
Impedance-Matching Between Driving and Driven Stages
In a typical transmitter system, the basic carrier signal generated by the transmitter
oscillator needs to be amplified several times before it is radiated out by the antenna. This
amplification process may require the passing of the signal from one stage to another. The
goal is to maximize the transfer of power from one amplification stage to the next.
A driving stage may be modeled as an RF generator with an internal impedance Zin, while
the driven stage may be modeled as an impedance Zload, as shown in Figure 1.

Figure 1. Models for Driving and Driven Stages: Zin=Zload under ideal conditions
Recall that maximum transfer of power occurs when Zin = Zload. Thus, efficient transfer
of power from one stage to the next can be achieved by using special circuits that match
the impedance Zin of the driving stage to that of the driven stage, Zload. These circuits are
commonly referred to as impedance-matching circuits or impedance-matching networks,
as shown in Figure 2.


Figure 2. Matching the driving stage impedance to the load impedance


Impedance-matching is a necessity because Zin and Zload are often very different from
each other. Note that an impedance consists not only of a resistive component, but of
reactive components as well. This is why a basic impedance-matching circuit also
contains reactive components. Here are some examples of impedance-matching networks.
Impedance-Matching Networks
Impedance-matching is important in achieving maximum power transfer from one stage to
another in a transmitter system. Below are some examples of impedance-matching
networks commonly used to match the driving stage impedance Zin to the driven stage
impedance Zload. These circuits fall under the category of L-Type impedance-matching
networks, because of the L-shaped configurations present within them.

Figure 1. L-Type Low-Pass Impedance-Matching Network for ZLoad < Zin


The low-pass LC impedance-matching network in Figure 1 makes Zload 'appear' larger to
match Zin. The values of L and C are chosen to resonate at the transmitter frequency. At
resonance, the inductive and capacitive reactances are equal, and the impedance exhibited
by the parallel resonant LC network to Zin is very high.


Figure 2. L-Type Low-Pass Impedance-Matching Network for ZLoad > Zin


The low-pass LC impedance-matching network in Figure 2 reduces the load impedance
seen by Zin, so it is used when ZLoad>Zin. Here, C is in parallel with ZLoad, and the LC
network forms a series resonant circuit with ZLoad. At resonance, the impedance of the
tuned network is very low, which is how this impedance-matching network decreases the
load impedance seen by Zin.

Figure 3. L-Type High-Pass Impedance-Matching Network for ZLoad < Zin


The high-pass LC impedance-matching network in Figure 3 increases the load impedance
seen by Zin, so it is used when ZLoad<Zin. As in Figure 1, the LC network in Figure 3 is
basically a parallel resonant circuit, and therefore exhibits a very high impedance at
resonance.


Figure 4. L-Type High-Pass Impedance-Matching Network for ZLoad > Zin


The high-pass LC impedance-matching network in Figure 4 reduces the load impedance
seen by Zin, so it is used when ZLoad>Zin. As in Figure 2, the LC network in Figure 4 is
basically a series resonant circuit, and therefore exhibits a very low impedance at
resonance.
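The figures above are qualitative, and the text does not give design equations for these networks. As a rough sketch only, the standard design procedure for a simple low-pass L-network that matches a purely resistive load (here called R_low) up to a purely resistive driving impedance (R_high) is evaluated below; the operating frequency and resistances are assumed values.

    import math

    f = 10e6                     # assumed operating frequency: 10 MHz
    R_high, R_low = 600.0, 50.0  # assumed resistive source and load impedances (ohms)

    w = 2 * math.pi * f
    Q = math.sqrt(R_high / R_low - 1)  # loaded Q of the L-network
    Xs = Q * R_low                     # series reactance (inductor on the low-impedance side)
    Xp = R_high / Q                    # shunt reactance (capacitor across the high-impedance side)

    L = Xs / w
    C = 1 / (w * Xp)
    print(f"Q = {Q:.2f}, L = {L*1e6:.2f} uH, C = {C*1e12:.1f} pF")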
What is Modulation?
Modulation is the process of incorporating information inside a periodic waveform by
varying a certain characteristic or property of the periodic waveform. The periodic
waveform is called the 'carrier signal', since it carries the information, while the
information comes from another waveform of lower frequency known as the 'information
signal'. The purpose of modulation is to be able to use the carrier signal in the efficient
transmission of the information.
The carrier signal used for transmissions is usually a high-frequency sinusoidal waveform.
A sine wave is often defined in terms of three major parameters: its amplitude, its
frequency, and its phase. Any of these three parameters may be varied or modulated to
carry information.
Using a waveform of high frequency to carry the signal offers the following advantages: 1)
lower loss and dispersion during propagation; 2) minimal interference from other signals;
3) smaller antenna requirements; 4) possibility of multiplexing for simultaneous
transmission of multiple signals. The typical carrier frequency ranges used by various
systems today are as follows: 550-1600 kHz for AM Radio, 88-108 MHz for FM Radio,
52-216 MHz for VHF TV, and 470-900 MHz for UHF TV.
During modulation of a given parameter of a carrier signal, the parameter is made to vary
in accordance with how the information signal varies. For example, in amplitude
modulation (AM), the amplitude of the carrier signal is made to change so that it follows
changes in the shape of the information signal. In frequency modulation (FM), it is the
frequency of the carrier signal that is continuously varied according to the information
waveform. In phase modulation (PM), the phase of the carrier waveform is shifted to
match variations of the information signal in time. A device that modulates a carrier
frequency is known as a modulator, while a demodulator is a device that retrieves the
information from a modulated carrier signal.
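As a concrete illustration of varying different carrier parameters, the minimal Python sketch below generates a few samples of an AM waveform and an FM waveform from the same sinusoidal information signal. The carrier frequency, information frequency, modulation index, and deviation are arbitrary illustrative values.

import math

fc = 10_000.0   # carrier frequency, Hz (illustrative)
fm = 500.0      # information (modulating) frequency, Hz (illustrative)
m = 0.8         # AM modulation index
dev = 2_000.0   # FM peak frequency deviation, Hz (illustrative)
fs = 100_000.0  # sampling rate, Hz

def am_sample(t):
    # Amplitude modulation: the carrier amplitude follows the information signal.
    info = math.sin(2 * math.pi * fm * t)
    return (1 + m * info) * math.cos(2 * math.pi * fc * t)

def fm_sample(t):
    # Frequency modulation: the instantaneous frequency follows the information
    # signal; the phase is the integral of the instantaneous frequency, which has
    # a closed form for a sinusoidal information signal.
    beta = dev / fm  # FM modulation index
    return math.cos(2 * math.pi * fc * t + beta * math.sin(2 * math.pi * fm * t))

for n in range(5):
    t = n / fs
    print(f"t = {t*1e3:.2f} ms  AM = {am_sample(t):+.3f}  FM = {fm_sample(t):+.3f}")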
In the above examples of modulation, the information comes from an analog signal, i.e., a
signal that is continuous in time. When a carrier signal is modulated continuously in time
by an analog signal, it is referred to as 'analog modulation'. There is another type of
modulation, wherein the carrier signal is modulated by a stream of digital bits. This
modulation, known as digital modulation, is used in the transmission of digital data over
an analog channel. Thus, digital modulation may also be considered a form of
digital-to-analog conversion, while the recovery of the digital bits by demodulation after
transmission over the analog channel may be considered a form of analog-to-digital conversion.
Sending digital data over a telephone line via a modem is an example of this.
Many techniques exist for both analog and digital modulation. In fact, amplitude
modulation alone can be implemented in many ways, including Double-Sideband
Modulation (DSB), Single-Sideband Modulation (SSB), and Vestigial Sideband Modulation
(VSB), each of which in turn has several variants. The analog
modulation techniques of FM and PM (which were defined earlier in this article) are kinds
of angle modulation.
Techniques used in digital modulation include:
1) Phase-Shift Keying (PSK), wherein data are represented by modulating the phase of the
carrier signal, i.e., a finite number of phases are used, each one corresponding to a unique
pattern of binary bits;
2) Frequency-Shift Keying (FSK), wherein data are represented by discrete changes in the
frequency of the carrier signal, e.g., a certain frequency is used to represent '1' while
another is used for '0'; and
3) Amplitude-Shift Keying (ASK), wherein data are represented by discrete levels of the
amplitude of the carrier signal.
Another modulation technique, known as 'Quadrature Amplitude Modulation' or QAM,
modulates the amplitudes of two carrier waves that are out of phase with each other by 90
degrees (hence the term 'quadrature'). QAM can be employed both in digital and analog
modulation by ASK or AM, respectively.
Note that each of these major analog and digital techniques also comes in many different
and special forms, each providing a solution to the vast requirements of the enormous
telecommunications industry.
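To make the difference between these keying schemes concrete, the minimal Python sketch below maps a short bit stream onto the carrier parameter that each scheme varies: amplitude for ASK, frequency for FSK, phase for PSK, and an I/Q amplitude pair for 4-QAM. The specific amplitudes, frequencies, and phases are arbitrary illustrative choices, not values from any particular standard.

# Map each bit (or bit pair) to the carrier parameter the keying scheme varies.
# All numeric values below are illustrative only.
bits = [1, 0, 1, 1, 0]

ask = [1.0 if b else 0.2 for b in bits]    # ASK: two amplitude levels
fsk = [1200 if b else 2200 for b in bits]  # FSK: two frequencies, Hz
bpsk = [0 if b else 180 for b in bits]     # binary PSK: two phases, degrees

for b, a, f, p in zip(bits, ask, fsk, bpsk):
    print(f"bit={b}  ASK amplitude={a}  FSK frequency={f} Hz  PSK phase={p} deg")

# 4-QAM: each pair of bits sets the amplitudes of two carriers that are
# 90 degrees out of phase (the in-phase and quadrature components).
qam_map = {(0, 0): (+1, +1), (0, 1): (+1, -1), (1, 0): (-1, +1), (1, 1): (-1, -1)}
print("bit pair (1, 0) maps to (I, Q) =", qam_map[(1, 0)])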

Simple Amplitude Modulator


Figure 1. A very simple amplitude modulator


Figure 1 shows an example of a very simple amplitude modulator. This circuit basically
consists of a resistive mixer, a rectifier, and a tuned LC circuit.
In this circuit, the carrier signal and the modulating signal are linearly mixed or
algebraically added through Rc, Rm, and Rsum, with the mixed signal emerging at their
common node. This mixed signal is then fed into the diode D, which rectifies the mixed
signal, i.e., only the forward-going current of the mixed signal is allowed to flow through
the circuit. Note that the rectified signal already varies in amplitude according to the
modulating signal.
The LC circuit is a band-pass filter that's tuned to the carrier frequency, so it only allows
the AM output (the carrier signal and its sidebands) to pass through, and shunts all other
signals to ground. This is because a parallel resonant LC circuit exhibits the highest
impedance at the resonant frequency.
At the carrier (resonant) frequency, L and C repeatedly exchange energy with each other,
resulting in an oscillation that produces a negative half-cycle pulse for every positive pulse
coming out of the diode. The amplitudes of these negative pulses follow those of the
positive cycles so, in effect, the AM waveform at the output of the modulator is complete
with both positive and negative cycles.
Operational Amplifier-Based Amplitude Modulator


Figure 1. A simple amplitude modulator using an op-amp


Figure 1 shows a simple amplitude modulator that uses an operational amplifier (op-amp).
The op-amp is configured to amplify the carrier signal that's applied at its non-inverting
input. The gain of the amplification is given by: Gain = 1 + (Rf / Ri), wherein Ri is the
resistance exhibited by the FET. The FET acts as a variable resistor whose source-to-drain
resistance depends on the input signal applied at its gate through capacitor C. Note the
negative bias (-Vbias) applied at the gate of the FET which is used to keep the gate-source
junction reverse-biased.
The input signal to the FET's gate is the modulating signal. An increase in the input signal
will cause a decrease in the FET's resistance, causing the gain of the op-amp to increase.
This results in a corresponding increase in the output voltage. On the other hand, a
decrease in the input signal will cause an increase in the FET's resistance, causing the gain
of the op-amp to decrease. This results in a corresponding decrease in the output voltage.
The FET in this circuit must be properly biased so that its resistance varies linearly
over a wide range of input signal amplitudes.
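To see how the stated gain expression produces amplitude modulation, the short Python sketch below evaluates Gain = 1 + (Rf / Ri) for a few assumed values of the FET's channel resistance; the 10 kilohm feedback resistor, the carrier amplitude, and the Ri values are illustrative assumptions, not component values from the figure.

Rf = 10_000.0            # feedback resistor, ohms (illustrative)
carrier_amplitude = 0.1  # peak carrier voltage at the non-inverting input, V (illustrative)

# Assumed FET channel resistances as the modulating signal swings (illustrative).
for Ri in (2_000.0, 4_000.0, 8_000.0):
    gain = 1 + Rf / Ri                     # non-inverting amplifier gain
    v_out_peak = gain * carrier_amplitude  # the output envelope follows the gain
    print(f"Ri = {Ri:>7.0f} ohm   gain = {gain:5.2f}   output peak = {v_out_peak:.3f} V")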
Simple Diode and Bipolar Transistor Mixers
In amplitude modulation (AM) systems, the process of converting a modulated signal to a higher or
lower frequency is known as frequency translation or frequency conversion. A circuit that
performs frequency conversion is known as a mixer. Below are two examples of simple
mixers.


Figure 1. A Diode Mixer


Figure 1 shows a simple frequency converter or mixer whose main component is a diode.
An input signal with frequency fs is applied to the diode mixer through transformer T1. A
signal from a local oscillator of frequency fo is also applied to the mixer through capacitor
C1. Because the diode is a nonlinear device, its output contains not only fs and fo but also
their sum and difference frequencies. A tuned LC circuit at the output of the diode allows
either the difference frequency (fo - fs) or the sum frequency (fo + fs) to pass to the mixer's output.

Figure 2. A Bipolar Transistor Mixer


Figure 2 shows a simple frequency converter or mixer that employs a bipolar transistor
biased in such a way that the collector current does not vary linearly with the base current.
An input signal with frequency fs is applied to the base of the transistor through a
transformer. A signal from a local oscillator of frequency fo is also applied to the
transistor's base through a capacitor. Because the transistor is biased non-linearly, its
collector current contains not only fs and fo but also their sum and difference frequencies.
A tuned LC circuit at the collector output allows only the
difference frequency (fo - fs) to pass to the mixer's output.
The mixer in Figure 2 is commonly used in radio receivers for translating a signal to a
lower frequency, where it is easier to achieve high gain and good selectivity.
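As a quick numerical illustration of frequency translation, the sketch below computes the sum and difference products for a received signal and a local oscillator, the way a superheterodyne receiver translates an incoming station down to a fixed intermediate frequency. The 1000 kHz station and 1455 kHz oscillator are illustrative values chosen to give the common 455 kHz IF; they are not taken from the figures above.

def mixer_products(fs_hz, fo_hz):
    """Return the sum and difference frequencies produced by a nonlinear mixer."""
    return fo_hz + fs_hz, abs(fo_hz - fs_hz)

# Example: AM broadcast station at 1000 kHz, local oscillator at 1455 kHz.
f_signal = 1_000e3
f_osc = 1_455e3
f_sum, f_diff = mixer_products(f_signal, f_osc)
print(f"sum = {f_sum/1e3:.0f} kHz, difference = {f_diff/1e3:.0f} kHz")
# A tuned LC circuit at the mixer output would select the 455 kHz difference
# frequency as the intermediate frequency (IF).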
Doppler Effect and Doppler Frequency
Doppler Effect refers to the shift in the observed or perceived frequency of an
electromagnetic or sound wave due to the motion of the source of the wave relative to the
observer. The following equations apply.
Sound Waves
fo = fs [(v + w + vo) / (v + w - vs)]
where fo = observed or perceived sound frequency (Hz)
vo = velocity of observer (m/s)
vs = velocity of source (m/s)
v = velocity of sound in the medium (m/s)
w = velocity of the wind in the direction of sound propagation (m/s)
fs = frequency of the source (Hz)
Electromagnetic Waves
fo = fs (sqrt[(c + vr) / (c - vr)])
where fo = observed or perceived electromagnetic wave frequency (Hz)
fs = frequency of the source (Hz)
vr = velocity of source relative to the observer (m/s)
c = speed of light in vacuum (3 x 10^8 m/s)
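The minimal Python sketch below simply evaluates the two Doppler equations above; the 1 kHz sound source, 30 m/s source velocity, 10 GHz electromagnetic source, and 50 m/s relative velocity are arbitrary illustrative numbers.

import math

def doppler_sound(fs, v, vo, vs, w=0.0):
    """Observed frequency of a sound wave. fs: source frequency (Hz);
    v: speed of sound in the medium; vo: observer velocity; vs: source
    velocity; w: wind velocity along the propagation direction (all m/s,
    with signs following the convention of the equation above)."""
    return fs * (v + w + vo) / (v + w - vs)

def doppler_em(fs, vr, c=3e8):
    """Observed frequency of an electromagnetic wave; vr is the velocity of
    the source relative to the observer (m/s), positive when approaching."""
    return fs * math.sqrt((c + vr) / (c - vr))

# Sound: 1 kHz source approaching a stationary observer at 30 m/s, no wind.
print(f"sound: {doppler_sound(fs=1000.0, v=343.0, vo=0.0, vs=30.0):.1f} Hz")

# Electromagnetic: 10 GHz source approaching at 50 m/s.
print(f"EM: {doppler_em(fs=10e9, vr=50.0):,.1f} Hz")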
Antenna Formulas
Below are some commonly used formulas for designing an antenna.
1) Length of Ideal Hertz Antenna
L = λ/2
where L = length of the Hertz Antenna
λ = wavelength
2) Length of Ideal Marconi Antenna

L = λ/4
where L = length of the Marconi Antenna
λ = wavelength
3) Power Received by a Hertz Antenna
P = (Pt Gt Gr λ^2) / (16 π^2 d^2)
where P = received power (W)
Pt = transmitted power (W)
Gt = gain ratio of transmitting antenna relative to an isotropic radiator
Gr = gain ratio of receiving antenna relative to an isotropic radiator
λ = wavelength (m)
d = distance between antennas (m)
4) Effective Radiated Power
ERP = G x Pi
where G = gain of transmitting antenna relative to an isotropic radiator
Pi = input power (W)
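The minimal Python sketch below evaluates all four antenna formulas for an illustrative 100 MHz system; the transmit power, antenna gains, and distance are arbitrary example numbers (1.64 is the gain of an ideal half-wave dipole relative to an isotropic radiator).

import math

C = 3e8                # speed of light in vacuum, m/s
freq = 100e6           # operating frequency, Hz (illustrative)
wavelength = C / freq  # wavelength, m

hertz_length = wavelength / 2    # ideal half-wave (Hertz) antenna
marconi_length = wavelength / 4  # ideal quarter-wave (Marconi) antenna

# Power received by a Hertz antenna (free-space relation above).
Pt, Gt, Gr, d = 100.0, 1.64, 1.64, 10_000.0  # W, ratio, ratio, m (illustrative)
Pr = (Pt * Gt * Gr * wavelength**2) / (16 * math.pi**2 * d**2)

# Effective radiated power.
ERP = Gt * Pt

print(f"wavelength      = {wavelength:.2f} m")
print(f"Hertz antenna   = {hertz_length:.2f} m")
print(f"Marconi antenna = {marconi_length:.2f} m")
print(f"received power  = {Pr*1e6:.3f} uW")
print(f"ERP             = {ERP:.0f} W")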
Amplitude Modulation (AM) Equations
Amplitude Modulation (AM) is a method of modulation wherein the carrier amplitude
changes with the amplitude of the input signal. Simply put, amplitude modulation
generates a signal with power concentrated at its carrier frequency and its two sidebands.
These sidebands are bands of frequencies just above and just below the carrier frequency;
they have equal bandwidths and are mirror images of each other.
AM is not a power-efficient way to modulate - much of the transmitted power is wasted. At least 2/3 of the
total power is carried by the carrier signal, which conveys no information, while the remaining
power is split equally between the two sidebands.
Percent Modulation
M = [(Ec - Et) / (2Ea)] x 100%
or
M = [(Ec - Et) / (Ec+Et)] x 100%
where
M = % Modulation
Ec = crest amplitude of the modulated carrier
Et = trough amplitude of the modulated carrier
Ea = average amplitude of the modulated carrier


Sideband Power
Ps = M^2 Pc / 2
where
Ps = combined power of the two sidebands of an AM carrier, W
M = modulation expressed as a decimal fraction (e.g., 0.5 for 50% modulation)
Pc = carrier power, W
Total Radiated Power
Pt = Ps + Pc
where
Pt = total radiated power, W
Ps = sideband power, W
Pc = carrier power, W
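The minimal Python sketch below applies the percent-modulation and power equations above to an illustrative AM transmitter; the envelope amplitudes and carrier power are arbitrary example values.

# Illustrative envelope measurements of an AM signal (volts).
Ec = 150.0  # crest (maximum) amplitude of the modulated carrier
Et = 50.0   # trough (minimum) amplitude of the modulated carrier

M = (Ec - Et) / (Ec + Et)  # modulation as a fraction (0.5 here, i.e. 50%)
print(f"percent modulation = {M*100:.0f}%")

Pc = 1_000.0          # carrier power, W (illustrative)
Ps = (M**2) * Pc / 2  # combined power of the two sidebands, W
Pt = Pc + Ps          # total radiated power, W
print(f"sideband power = {Ps:.0f} W, total radiated power = {Pt:.0f} W")
print(f"fraction of power in the carrier = {Pc/Pt:.2f}")  # 0.89 here, i.e. more than 2/3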
Frequency Modulation (FM) Equations
Frequency Modulation (FM) is a method of modulation wherein information is transmitted
as variations in the instantaneous frequency of the carrier signal. In analog FM, the carrier
frequency is changed in direct proportion to variations in the input signal's amplitude.
The modulation percentage for FM is defined simply as the ratio of the actual frequency
deviation to the maximum allowed frequency deviation, expressed in %. Thus, 100% modulation
means that the carrier's frequency variation covers the entire allowable amount, i.e., its
maximum frequency deviation. This maximum frequency deviation is chosen according to the
application of the FM transmitter (broadcast FM, for example, allows a maximum deviation of 75 kHz).
For example, if the carrier is allowed to deviate by as much as 80 kHz above and below its
resting frequency, then 50% modulation means that the carrier's frequency is
being deviated by only 40 kHz above and below the resting frequency.
The following equations apply to FM.
Percent Modulation
M = [Δf / D] x 100%
where
M = % Modulation
Δf = actual deviation (change) in the carrier frequency
D = maximum frequency deviation, i.e., frequency deviation for 100% modulation
Modulation Index
Mi = fd / fa

where
Mi = modulation index
fd = deviation frequency, kHz
fa = modulating audio frequency, kHz
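The minimal Python sketch below works through the percent-modulation example given earlier (80 kHz maximum deviation, 40 kHz actual deviation) and computes the modulation index for an assumed 5 kHz modulating audio tone; the audio frequency is an illustrative choice.

# FM percent modulation, using the example from the text above:
# maximum allowed deviation D = 80 kHz, actual deviation = 40 kHz.
D = 80.0        # kHz
delta_f = 40.0  # kHz
M = (delta_f / D) * 100.0
print(f"percent modulation = {M:.0f}%")  # 50%

# FM modulation index for an assumed 5 kHz modulating audio tone.
fd = 40.0  # deviation frequency, kHz
fa = 5.0   # modulating audio frequency, kHz
Mi = fd / fa
print(f"modulation index = {Mi:.1f}")    # 8.0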

Squelch Circuit

Figure 1. Diagram for a Squelch Circuit


Figure 1 shows a diagram of a squelch circuit. A squelch circuit is used to turn off
the audio of a receiver amplifier when no RF signal is being received. Without a squelch
circuit, the absence of a received signal would be heard on the receiver as annoying
background noise.
In Figure 1, the automatic gain control (AGC) circuit, which is used to adjust the gain of
the receiver based on the strength of the received signal, outputs a DC voltage that is
proportional to the received signal's amplitude. Thus, in the absence of a received signal,
the output of the AGC's DC amplifier is a very low DC voltage that's fed into the base of
Q1. This low base voltage causes Q1 to turn off, resulting in the base of Q2 being pulled
up 'high' through R1. This turns on Q2. A conducting Q2 shunts the audio signal to
ground away from the audio power amp. This, in effect, silences the speaker of the audio
power amp in the absence of a received signal.

When there's a received signal, Q1 gets a high base voltage from the AGC amplifier,
turning it on. Q2's base is pulled 'low' by the conducting Q1, causing Q2 to turn off. With
Q2 'off', the audio signal from Q3's collector is readily passed on to the audio power
amplifier.
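The circuit's behavior boils down to a threshold decision: mute the audio whenever the AGC voltage indicates that no signal is being received. The minimal Python sketch below models only that decision logic, not the transistor-level circuit; the threshold voltage is an arbitrary illustrative number.

SQUELCH_THRESHOLD_V = 0.7  # illustrative AGC voltage below which the audio is muted

def squelch(agc_voltage_v, audio_sample):
    """Pass the audio sample through when the AGC voltage indicates a received
    signal; otherwise mute it (the role played by Q1 and Q2 above)."""
    if agc_voltage_v < SQUELCH_THRESHOLD_V:
        return 0.0       # Q2 conducts and shunts the audio to ground
    return audio_sample  # Q2 is off and the audio passes to the power amplifier

print(squelch(agc_voltage_v=0.1, audio_sample=0.5))  # no signal -> 0.0 (muted)
print(squelch(agc_voltage_v=2.0, audio_sample=0.5))  # signal present -> 0.5 (passed)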

