
Reva Institute of Technology,Bangalore

Department of E & C Engg.

BASIC ELECTRONICS NOTES
I/II semester

Submitted By:

YOGESH H. KUKADE
Conduction in Semiconductors

Semiconductors are materials whose conductivity lies between that of conductors (generally
metals) and that of non-conductors or insulators (such as most ceramics). Semiconductors can
be pure elements, such as silicon or germanium, or compounds such as gallium arsenide
or cadmium selenide. In a process called doping, small amounts of impurities are added
to pure semiconductors causing large changes in the conductivity of the material.

Due to their role in the fabrication of electronic devices, semiconductors are an


important part of our lives. Imagine life without electronic devices. There would be no
radios, no TV's, no computers, no video, and poor medical diagnostic equipment.
Although many electronic devices could be made using vacuum tube technology, the
developments in semiconductor technology during the past 50 years have made
electronic devices smaller, faster, and more reliable. Think for a minute of all the
encounters you have had with electronic devices in the last twenty-four hours. Each has
important components that have been manufactured with electronic materials.

Band theory of solids


Quantum physics describes the states of electrons in an atom according to the four-fold
scheme of quantum numbers. The quantum number system describes the allowable states
electrons may assume in an atom. To use the analogy of an amphitheater, quantum
numbers describe how many rows and seats there are. Individual electrons may be
described by the combination of quantum numbers they possess, like a spectator in an
amphitheater assigned to a particular row and seat.

Like spectators in an amphitheater moving between seats and/or rows, electrons may
change their status, given the presence of available spaces for them to fit and available
energy. Since shell level is closely related to the amount of energy that an electron
possesses, "leaps" between shell (and even subshell) levels require transfers of energy. If
an electron is to move into a higher-order shell, it requires that additional energy be given
to the electron from an external source. Using the amphitheater analogy, it takes an
increase in energy for a person to move into a higher row of seats, because that person
must climb to a greater height against the force of gravity. Conversely, an electron "leaping"
into a lower shell gives up some of its energy, like a person jumping down into a lower row
of seats, the expended energy manifesting as heat and sound released upon impact.

Not all "leaps" are equal. Leaps between different shells require a substantial exchange of
energy, while leaps between subshells or between orbitals require lesser exchanges.

When atoms combine to form substances, the outermost shells, subshells, and orbitals
merge, providing a greater number of available energy levels for electrons to assume. When
large numbers of atoms exist in close proximity to each other, these available energy levels
form a nearly continuous band wherein electrons may transition.

It is the width of these bands and their proximity to existing electrons that determines how
mobile those electrons will be when exposed to an electric field. In metallic substances,
empty bands overlap with bands containing electrons, meaning that electrons may move to
what would normally be (in the case of a single atom) a higher-level state with little or no
additional energy imparted. Thus, the outer electrons are said to be "free," and ready to
move at the beckoning of an electric field.

Band overlap will not occur in all substances, no matter how many atoms are in close
proximity to each other. In some substances, a substantial gap remains between the highest
band containing electrons (the so-called valence band) and the next band, which is empty
(the so-called conduction band). As a result, valence electrons are "bound" to their
constituent atoms and cannot become mobile within the substance without a significant
amount of imparted energy. These substances are electrical insulators:
Materials that fall within the category of semiconductors have a narrow gap between the
valence and conduction bands. Thus, the amount of energy required to motivate a valence
electron into the conduction band where it becomes mobile is quite modest:

At low temperatures, there is little thermal energy available to push valence electrons across
this gap, and the semiconducting material acts as an insulator. At higher temperatures,
though, the ambient thermal energy becomes sufficient to force electrons across the gap,
and the material will conduct electricity.

It is difficult to predict the conductive properties of a substance by examining the electron


configurations of its constituent atoms. While it is true that the best metallic conductors of
electricity (silver, copper, and gold) all have outer s subshells with a single electron, the
relationship between conductivity and valence electron count is not necessarily consistent:
Likewise, the electron band configurations produced by compounds of different elements
defy easy association with the electron configurations of their constituent elements.

===========================================================
Semiconductor -Diode Characteristics
History
The p-n junction was discovered by Ohl in 1940 when he observed the photovoltaic
effect when light was flashed onto a silicon rod [1, 2]. Since crystals were not as pure at
the time, different parts of the same crystal had different impurities, and a natural p-n
junction was formed unintentionally. Ohl also noticed that when a metal whisker was
pressed against different parts of the crystal, opposite behaviors were observed. He
called the material p-type when a "positive" bias had to be put on the crystal relative to the
whisker to produce a large current, and conversely, n-type when a "negative" bias was
needed to conduct a similar current. This research group at Bell Laboratories later made
the connection between p-type material and acceptor impurities, and between n-type
material and donor impurities. Shockley developed the theory for the p-n junction diode
in 1949 [3], and it was instrumental for the invention of the bipolar junction transistor.
The theory was subsequently refined by Sah et al. [4] and Moll [5]. More recent review articles on the device
may be found in Refs. 6-9. The p-n junction has been the most common rectifier used in
the electronics industry. It also serves as a very important fundamental building block
for many other devices.

Theory of PN junction

1.1 STRUCTURE
The early version of the structure was made by pressing a metal wire onto the surface of a
semiconductor. A junction was then formed by passing a pulse of current through the wire.

1.3 CHARACTERISTICS
A p-n junction can be viewed as isolated p- and n-type materials brought into intimate
contact (Fig. 1.2). Being abundant in the n-type material, electrons diffuse to the p-type
material. The same process happens for holes from the p-type material. This flow of
charges sets up an electric field that starts to hinder further diffusion until equilibrium is
struck. The energy-band diagram under equilibrium is shown in Fig. 1.2(b). (Notice
that when NA ≠ ND, the point where Ei crosses EF does not coincide with the
metallurgical junction.) Since the overall charge has to be conserved, it follows that for
an abrupt (step) junction,

WdpNA = WdnND (1.1)

as shown in Fig. 1.2(c). An important parameter is the built-in potential ψbi.
According to Fig. 1.2(b), it is the sum of ψBn and ψBp, given by

ψbi = ψBn + ψBp = (kT/q) ln(NA ND / ni²) (1.2)

which is the total band bending at equilibrium by definition. Under bias, the depletion-layer
widths can be obtained using the Poisson equation with appropriate boundary conditions.
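To get a feel for the magnitude of the built-in potential in Eq. (1.2), here is a minimal Python sketch. The doping levels and the intrinsic carrier concentration are illustrative assumed values for a silicon step junction at room temperature, not figures taken from the text.

import math

k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # electron charge, C
T = 300.0             # temperature, K
ni = 1.0e10           # intrinsic carrier concentration of Si, cm^-3 (approximate)

# Assumed doping levels for an example step junction (cm^-3)
NA = 1.0e17
ND = 1.0e15

# Built-in potential, Eq. (1.2): psi_bi = (kT/q) * ln(NA*ND / ni^2)
Vt = k * T / q
psi_bi = Vt * math.log(NA * ND / ni**2)
print(f"Thermal voltage kT/q = {Vt*1e3:.1f} mV")
print(f"Built-in potential  = {psi_bi:.3f} V")   # roughly 0.7 V for these values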

Diodes Basics
Diodes used in power electronics applications are generally required to have special
characteristics, these are:

o High breakdown voltage and high current carrying capability;


o Small switching time delays and small current rise and fall times;
o Negligible reverse recovery (ie. charge removal at turn OFF is negligible);
o Low voltage drop when conducting.

Unfortunately, it is not possible to meet all of these criteria with one single style
of diode, and thus a number of different types of power diode are available for
various applications. It is up to the circuit designer to judge which component is
best suited for a particular application. This will often result in a conflict between
what is required and what is available, and it is here that circuit design can be very
important.

The most common diodes used in rectifier circuits, switching and inverter and
converter circuits are:

Table 1: Diode ratings

Type                          | Maximum Breakdown Voltage      | Maximum Current Rating | Forward Voltage Drop | Switching Speed | Applications
High Voltage Rectifier Diodes | 30 kV                          | ~500 mA                | ~10 V                | ~100 ns         | HV circuits
General Purpose Diodes        | ~5 kV                          | ~10 kA                 | 0.7 - 2.5 V          | ~25 µs          | 50 Hz Rectifiers
Fast Recovery Diodes          | ~3 kV                          | ~2 kA                  | 0.7 - 1.5 V          | <5 µs           | SMPS, Inverters, Resonant ckts.
Schottky Diodes               | ~100 V                         | ~300 A                 | 0.2 - 0.9 V          | ~30 ns          | LV HF Rectification
Power Zener Diodes            | Operates in breakdown, ~300 V  | ~75 W                  | -                    | -               | References, Voltage Clamps
From Table 1, general trends can be seen. If high voltage and high current ratings
are needed then general-purpose diodes can be used, as long as switching
speeds are not too important. The faster switching diodes have restricted voltage
and current ratings, and if they are used in high stress applications they must be
placed in parallel and series to avoid damage.

2. Why are the diodes different?

The physical construction of a diode with a diffusion junction is shown in figure 1.


When a diode is reverse biased, i.e. a positive voltage is applied to the cathode with
respect to the anode, an electric field is formed between the cathode and anode,
specifically across the depletion region. The diode is 'reverse biased' and cannot
conduct except for small leakage currents. However, if the electric field becomes
too strong, 'avalanche breakdown' occurs and the diode will become a short circuit
and often be damaged. To counteract this, the physical distance between the anode and
cathode is increased by increasing the size of the bulk region and changing the
impurity atom doping levels.

Figure 1: Diffusion junction diode

Construction process: an N-type silicon substrate is heated to ~1000 °C in the presence of
a vapor containing positively charged impurity atoms, and a P region is diffused into the N region.
The resultant effect is to cause more charge carriers to be present within the diode
when it is conducting. For the diode to switch OFF, the charge carriers must either
recombine (minority) or be removed, the latter mechanism appearing as a reverse
current (reverse recovery) flowing in the diode as it turns OFF. Put simply, diodes
with higher voltage ratings have larger bulk regions, require more time to remove
internal charges at turn OFF and are thus slower switching.

To achieve very fast switching, Schottky diodes (Fig. 2) can be used, although their
current and voltage ratings are restricted. The rectifying action depends solely on
majority carriers, so there is no minority carrier recombination. Recovery depends only
on the capacitance of the metal-silicon junction. Construction: a polished, pre-doped N+
epitaxial substrate with a thin N layer and a barrier metal deposit; the interface between
the metal and the N layer creates a barrier potential.

Figure 2: Schottky diode construction

Forward Biased PIN Diodes

When a PIN diode is forward biased, holes and electrons are injected from
the P and N regions into the I-region. These charges do not recombine
immediately. Instead, a finite quantity of charge always remains stored and
results in a lowering of the resistivity of the I-region. The quantity of stored
charge, Q, depends on the recombination time, τ (the carrier lifetime), and
the forward bias current, IF, as follows (Equation 1):

Q = IF τ  [Coulombs]

The resistance of the I-region under forward bias, RS, is inversely proportional to Q
and may be expressed as (Equation 2):

RS = W² / ((µn + µp) Q)  [Ohms]

where: W = I-region width, µn = electron mobility, µp = hole mobility.
Combining equations 1 and 2, the expression for RS as an inverse function of
current is (Equation 3):

RS = W² / ((µn + µp) IF τ)  [Ohms]

This equation is independent of area. In the real world RS is slightly dependent upon area because the
effective lifetime varies with area and thickness due to edge recombination
effects. Typically, PIN diodes display a resistance characteristic consistent with this inverse dependence on forward current.
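A minimal Python sketch of Equations 1 to 3 follows. All of the component values (I-region width, mobilities, lifetime, bias current) are illustrative assumptions rather than figures from the text.

# Sketch of Equations 1-3 for a PIN diode, using illustrative (assumed) values.
W    = 100e-6         # I-region width, m (assumed)
mu_n = 0.14           # electron mobility in Si, m^2/(V*s) (approximate)
mu_p = 0.045          # hole mobility in Si, m^2/(V*s) (approximate)
tau  = 1e-6           # carrier lifetime, s (assumed)
I_F  = 0.01           # forward bias current, A (assumed)

Q   = I_F * tau                      # Equation 1: stored charge, C
R_S = W**2 / ((mu_n + mu_p) * Q)     # Equation 2: I-region resistance, ohms
print(f"Stored charge Q = {Q:.2e} C")
print(f"RS = {R_S:.2f} ohms")        # Equation 3 gives the same value directly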

Full Wave Rectifiers


Figure 4-5B.—Full-wave rectifier (negative alternation).

Now that you have a basic understanding of how a full-wave rectifier works, let's cover in detail a
practical full-wave rectifier and its waveforms.

A Practical Full-Wave Rectifier

A practical full-wave rectifier circuit is shown in view A of figure 4-6. It uses two diodes
(D1 and D2) and a center-tapped transformer (T1). When the center tap is grounded,
the voltages at the opposite ends of the secondary windings are 180 degrees out of
phase with each other. Thus, when the voltage at point A is positive with respect to
ground, the voltage at point B is negative with respect to ground. Let's examine the
operation of the circuit during one complete cycle.

Figure 4-6.—Practical full-wave rectifier.

During the first half cycle (indicated by the solid arrows), the anode of D1 is
positive with respect to ground and the anode of D2 is negative. As shown, current
flows from ground (center tap), up through the load resistor (RL), through diode D1 to
point A. In the transformer, current flows from point A, through

3. Reverse Recovery

Figures 3a and 3b show typical styles of reverse recovery. The area within the
negative portion of each curve is the total reverse recovery charge Qrr; it
represents the charge removed from the junction and the bulk regions of the
diode and is effectively independent of the forward current in the diode. The
recovery time t2 - t1 is dependent on the size of the bulk region, so high di/dt
rates can be handled when using fast diodes. If the di/dt of the snap recovery
is too high and stray inductance exists in the circuit, then extremely high and
possibly damaging voltage spikes can be induced.

(Note: taking the recovery current as approximately triangular, Qrr ≈ ½ Irr trr.) Qrr can be found from
manufacturers' specifications, thus the maximum reverse recovery current Irr is given by:

Irr ≈ 2 Qrr / trr

If tb is very small compared to ta, then ta ≈ trr, and knowing the rate of decrease of
current di/dt = Irr/ta ≈ Irr/trr leads to:

Irr ≈ sqrt(2 Qrr (di/dt))   and   trr ≈ sqrt(2 Qrr / (di/dt)).
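The short Python sketch below evaluates these two relations for an assumed datasheet charge and an assumed rate of current fall; both numbers are made-up illustrative values, not taken from the text or from any particular device.

import math

Qrr   = 2e-6      # reverse recovery charge, C (assumed datasheet value)
di_dt = 100e6     # rate of fall of diode current, A/s (100 A/us, assumed)

# Relations derived above, assuming a roughly triangular recovery current
# with tb much smaller than ta (so trr ~ ta):
Irr = math.sqrt(2 * Qrr * di_dt)      # peak reverse recovery current, A
trr = math.sqrt(2 * Qrr / di_dt)      # reverse recovery time, s

print(f"Irr ~ {Irr:.1f} A")
print(f"trr ~ {trr*1e9:.0f} ns")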

Figure 3: (a) Reverse recovery of a general purpose diode, (b) fast diode. Reverse
recovery time trr = t2 - t0.

The effect of reverse recovery on the output voltage of a rectifier feeding a


resistive load is shown in figure 4.

Figure 4: Bridge rectifier output voltage showing diode reverse recovery effects

4. Avalanche Breakdown

Avalanche breakdown occurs when a high reverse voltage is applied to a diode and a
large electric field is created across the depletion region. The effect is dependent
on the doping levels in the region of the depletion layer. The field accelerates the
minority carriers in the depletion region associated with small leakage currents to
high enough energies that they ionise silicon atoms when they collide with
them. A new hole-electron pair is created; the two carriers accelerate in opposite
directions, causing further collisions, further ionisation, and avalanche breakdown.

Figure 5: Forward and reverse biased diode showing changing size of depletion
region
Figure 6: Typical diode characteristics

5. Zener Breakdown

Zener breakdown occurs with heavily doped junction regions (i.e. highly doped
regions are better conductors). If a reverse voltage is applied and the depletion
region is too narrow for avalanche breakdown (minority carriers cannot reach high
enough energies over the distance travelled), the electric field will grow until
electrons are pulled directly from the valence band on the P side to the conduction
band on the N side. This type of breakdown is not destructive if the reverse current
is limited.

Figure 7: Operating range of a zener diode

=============================================================
Transistor Characteristics

Introduction
Junction Transistor
A Bipolar Transistor essentially consists of a pair of PN Junction Diodes that are joined
back-to-back. This forms a sort of a sandwich where one kind of semiconductor is
placed in between two others. There are therefore two kinds of Bipolar sandwich, the
NPN and PNP varieties. The three layers of the sandwich are conventionally called the
Collector, Base, and Emitter. The reasons for these names will become clear later once
we see how the transistor works.

Some of the basic properties exhibited by a Bipolar Transistor are immediately


recognizable as being diode-like. However, when the 'filling' of the sandwich is fairly
thin some interesting effects become possible that allow us to use the Transistor as an
amplifier or a switch. To see how the Bipolar Transistor works we can concentrate on
the NPN variety.

Figure 1 shows the energy levels in an NPN transistor when we aren't externally
applying any voltages. We can see that the arrangement looks like a back-to-back pair
of PN Diode junctions with a thin P-type filling between two N-type slices of 'bread'.
Transistor as a Switch

• In many digital circuit applications, the transistor is merely used


as a switch.
• This means we can ignore fancy biasing circuitry and just turn
the device off (cut-off region) or on (saturation region)
• Some early digital circuitry used resistors and transistors as
indicated (RTL)

Transistor as an Amplifier

• How do we use the transistor as an amplifier?


• First, we must connect it appropriately to the supply voltages,
input signal, and load, so it can be used
• A useful mode of operation is the common-emitter configuration
Common Emitter Configuration

• To make a practical circuit, we have to add bias and


load resistors to ensure the transistor is at the
desired operating point (operating in the right current range).
Structure and principle of operation
A bipolar junction transistor consists of two back-to-back p-n junctions, which share a
thin common region with width wB. Contacts are made to all three regions, the two
outer regions called the emitter and collector and the middle region called the base. The
structure of an NPN bipolar transistor is shown in Figure 1 (a). The device is called
"bipolar" since its operation involves both types of mobile carriers, electrons and holes.

Figure 1.: (a) Structure and sign convention of a NPN bipolar junction transistor.
(b) Electron and hole flow under forward active bias, VBE > 0 and VBC =
0.
Since the device consists of two back-to-back diodes, there are depletion regions
between the quasi-neutral regions.
The sign convention of the currents and voltage is indicated on Fig 1(a). The base and
collector current are positive if a positive current goes into the base or collector contact.
The emitter current is positive for a current coming out of the emitter contact. This also
implies that the emitter current, IE, equals the sum of the base current, IB, and the collector
current, IC:

IE = IB + IC (0)
The base-emitter voltage and the base-collector voltage are positive if a positive voltage
is applied to the base contact relative to the emitter and collector respectively.
The operation of the device is illustrated with Fig 1 (b). We consider here only the
forward active bias mode of operation, obtained by forward biasing the base-emitter
junction and reverse biasing the base-collector junction. To simplify the discussion
further, we also set VBC = 0. The corresponding energy band diagram is shown in Fig 2.
Electrons diffuse from the emitter into the base and holes diffuse from the base into the
emitter. This carrier diffusion is identical to that in a p-n junction. However, what is
different is that the electrons can diffuse as minority carriers through the quasi-neutral
region in the base. Once the electrons arrive at the base-collector depletion region, they
are swept through the depletion layer due to the electric field. These electrons contribute
to the collector current. In addition, there are two more currents, the base recombination
current, indicated on Fig 2 by the vertical arrow, and the base-emitter depletion layer
recombination current (not shown).

Figure 2. : Energy band diagram of a bipolar transistor biased in the forward active
mode.
The total emitter current is the sum of the electron diffusion current, IE,n, the hole
diffusion current, IE,p, and the base-emitter depletion layer recombination current, Ir,d:

IE = IE,n + IE,p + Ir,d (1)

The total collector current is the electron diffusion current, IE,n, minus the base
recombination current, Ir,B:

IC = IE,n - Ir,B (2)

The base current is the sum of the hole diffusion current, IE,p, the base recombination
current, Ir,B, and the base-emitter depletion layer recombination current, Ir,d:

IB = IE,p + Ir,B + Ir,d (3)
The transport factor, α, is defined as the ratio of the collector and emitter current:

α = IC / IE (4)

Using Kirchhoff's current law and the sign convention shown in Figure 1(a), we find that
the base current equals the difference between the emitter and collector current. The
current gain, β, is defined as the ratio of the collector and base current and equals:

β = IC / IB = α / (1 - α) (5)

This explains how a bipolar junction transistor can provide current amplification. If the
collector current is almost equal to the emitter current, the transport factor, α ,
approaches one. The current gain, β , can therefore become much larger than one.
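As a quick numeric illustration of equations (4) and (5), the Python sketch below assumes an emitter current and a transport factor close to one (both made-up values) and shows how the current gain follows from them.

# Illustrative values only: an assumed emitter current and transport factor.
I_E   = 1.0e-3          # emitter current, A
alpha = 0.99            # transport factor, Eq. (4): alpha = IC / IE

I_C  = alpha * I_E              # collector current
I_B  = I_E - I_C                # Kirchhoff's current law: IB = IE - IC
beta = I_C / I_B                # current gain, Eq. (5)

print(f"IC = {I_C*1e3:.3f} mA, IB = {I_B*1e6:.1f} uA")
print(f"beta = {beta:.0f}")     # alpha/(1-alpha) = 99 for alpha = 0.99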
To facilitate further analysis, we now rewrite the transport factor, α, as the product of
the emitter efficiency, γE, the base transport factor, αT, and the depletion layer
recombination factor, δr:

α = γE αT δr (6)

The emitter efficiency, γE, is defined as the ratio of the electron current in the emitter,
IE,n, to the sum of the electron and hole currents diffusing across the base-emitter junction,
IE,n + IE,p:

γE = IE,n / (IE,n + IE,p) (7)

The base transport factor, αT, equals the ratio of the current due to electrons injected into
the collector to the current due to electrons injected into the base:

αT = (IE,n - Ir,B) / IE,n (8)

Recombination in the depletion region of the base-emitter junction further reduces the
current gain, as it increases the emitter current without increasing the collector current.
The depletion layer recombination factor, δr, equals the ratio of the current due to
electron and hole diffusion across the base-emitter junction to the total emitter current:

δr = (IE,n + IE,p) / IE

The forward active mode is obtained by forward-biasing the base-emitter junction. In


addition we eliminate the base-collector junction current by setting VBC = 0. The
minority-carrier distribution in the quasi-neutral regions of the bipolar transistor, as
shown in Figure 3, is used to analyze this situation in more detail.

Figure 3. : Minority-carrier distribution in the quasi-neutral regions of a bipolar


transistor (a) Forward active bias mode. (b) Saturation mode.
The values of the minority carrier densities at the edges of the depletion regions are
indicated on the Fig 3. The carrier densities vary linearly between the boundary values
as expected when using the assumption that no significant recombination takes place in
the quasi-neutral regions. The minority carrier densities on both sides of the base-
collector depletion region equal the thermal equilibrium values since VBC was set to zero.
While this boundary condition is mathematically equivalent to that of an ideal contact,
there is an important difference. The minority carriers arriving at x = wB - xp,C do not
recombine. Instead, they drift through the base-collector depletion region and end up as
majority carriers in the collector region.
The emitter current due to electrons and holes are obtained using the "short" diode
expressions yielding:

(11)

and

(12)

It is convenient to rewrite the emitter current due to electrons, IE,n, as a function of the
total excess minority charge in the base, ∆ Qn,B. This charge is proportional to the
triangular area in the quasi-neutral base as shown in Fig 3 a) and is calculated from:
(13)

which for a "short" diode becomes:

(14)

And the emitter current due to electrons, IE,n, simplifies to:

IE,n = ∆Qn,B / tr (15)

where tr is the average time the minority carriers spend in the base layer, i.e. the transit
time. The emitter current therefore equals the excess minority carrier charge present in
the base region, divided by the time this charge spends in the base.
A combination of equations (11), (14) and (15) yields the transit time as a function of
the quasi-neutral layer width, wB', and the electron diffusion constant in the base, Dn,B:

tr = wB'² / (2 Dn,B) (16)
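To get a rough numeric feel for Eq. (16), the sketch below uses an assumed quasi-neutral base width and a typical electron diffusion constant for silicon; both numbers are illustrative, not values from the text.

# Rough numeric feel for Eq. (16), with assumed (illustrative) values.
w_B  = 0.5e-6        # quasi-neutral base width wB', m (assumed)
D_nB = 20e-4         # electron diffusion constant in the base, m^2/s (~20 cm^2/s)

t_r = w_B**2 / (2 * D_nB)      # base transit time, Eq. (16)
print(f"transit time tr ~ {t_r*1e12:.0f} ps")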

We now turn our attention to the recombination current in the quasi-neutral base and
obtain it from the continuity equation:

(17)

In steady state and applied to the quasi-neutral region in the base, the continuity
equation yields the base recombination current, Ir,B:

(18)

which in turn can be written as a function of the excess minority carrier charge, ∆ Qn,B,
using equation (13).

(19)

The long minority-carrier lifetime and the long diffusion lengths in those materials
justify the exclusion of recombination in the base or the depletion layer. The resulting
current gain, under such conditions, is:
(21)

From this equation, we conclude that the current gain can be larger than one if the
emitter doping is much larger than the base doping. A typical current gain for a silicon
bipolar transistor is 50 - 150.
The base transport factor, as defined in equation (18), equals:

(22)

This expression is only valid if the base transport factor is very close to one, since it was
derived using the "short-diode" carrier distribution. This base transport factor can also be
expressed in function of the diffusion length in the base:

(23)

As the voltages applied to the base-emitter and base-collector junctions are changed, the
depletion layer widths and the quasi-neutral regions vary as well. This causes the
collector current to vary with the collector-emitter voltage as illustrated in Figure 4.

Figure 4. : Variation of the minority-carrier distribution in the base quasi-neutral


region due to a variation of the base-collector voltage.
A variation of the base-collector voltage results in a variation of the quasi-neutral width
in the base. The gradient of the minority-carrier density in the base therefore changes,
yielding an increased collector current as the collector-base voltage is increased. This
effect is referred to as the Early effect. The Early effect is observed as an increase in the
collector current with increasing collector-emitter voltage as illustrated with Figure 5.
The Early voltage, VA, is obtained by drawing a line tangential to the transistor I-V
characteristic at the point of interest. The Early voltage equals the horizontal distance
between the point chosen on the I-V characteristics and the intersection between the
tangential line and the horizontal axis. It is indicated on the figure by the horizontal
arrow.

Figure 5. : Collector current increase with an increase of the collector-emitter


voltage due to the Early effect. The Early voltage, VA, is also indicated
on the figure.

Now, to the heart of the matter!


We have an operating curve consisting of a fairly linear
segment bounded by two nonlinear ends: cutoff and
saturation.

Operating in the Middle


The transistor will operate very nicely if one can ensure
that no input voltage, i.e., signal voltage, would cause the
collector current to operate beyond either end of the
linear portion of the operating curve.
To further beat a point into the
ground: if one increased the input
signal beyond this level, the
output signal would start to
"clip" and cause distortion (the sine
wave gets flat on top and bottom).
If the bias point were set either too
low or too high, then the sine
wave would start to clip on the top
before the bottom, or vice versa
(asymmetric clipping).

Effects of different bias settings


When (negative) feedback is introduced,


most of these problems diminish or
disappear, resulting in improved
performance and reliability. There are
several ways to introduce feedback to this
simple amplifier, the easiest and most
reliable of which is accomplished by
introducing a small value resistor in the
emitter circuit. The amount of feedback is
dependent on the relative signal level
dropped across this resistor, e.g., if the
resistor value approached that of the
collector load resistor, the gain would
approach unity (Gv ~ 1).
From the explanation of how a Bipolar Transistor works, we can expect the main
characteristic of a Bipolar Transistor to be its Current Gain value. In practice this value
isn't a 'universal constant' but depends on various factors: e.g. the transistor's
temperature, the size and shape of its Base region, the way its various parts were doped
to make them into semiconductors, etc.

The above illustration shows how, for a 'typical' transistor, the Current Gain varies with
the Collector Current level, IC. From this graph we can see that the proportion of
electrons 'caught' by a hole whilst trying to cross the Base region does vary a bit
depending on the current level. Note that the graph doesn't show the transistor's beta
value; it shows a related figure called the transistor's Small Signal current gain, hfe.
This is similar to the beta value, but is defined in terms of small changes in the current
levels. This parameter is more useful than the beta value when considering the
transistor's use in signal amplifiers where we're interested in how the device responds to
changes in the applied voltages and currents.
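The distinction can be made concrete with a small numeric sketch: beta is the ratio of the DC currents at the bias point, while hfe is the ratio of small changes about that point. The current values below are made-up illustrative numbers.

# DC operating point (assumed values)
I_B1, I_C1 = 10e-6, 1.00e-3        # base and collector current at the bias point
# Slightly increased drive (assumed values)
I_B2, I_C2 = 11e-6, 1.12e-3

beta = I_C1 / I_B1                          # DC current gain
hfe  = (I_C2 - I_C1) / (I_B2 - I_B1)        # small-signal current gain: dIC/dIB

print(f"beta = {beta:.0f}, hfe = {hfe:.0f}")   # similar, but not identical, values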
The second way we can characterize the behavior of a Bipolar Transistor is by relating
the Base-Emitter voltage, VBE, we apply to the Base current, IB, it produces. As we might
expect from the diode-like nature of the Base-Emitter junction, this voltage/current
characteristic curve has an exponential-like shape similar to that of a normal PN
Junction diode.

As with the previous curve, the graph shown here should only be regarded as a 'typical'
example as the precise result will vary a bit from device to device and with the
temperature, etc.

In most practical situations we can expect the Collector current to be set almost entirely
by the chosen Base-Emitter voltage. However, this is only true when the Base-Collector
voltage we are applying is 'big enough' to quickly draw over to the Collector any free
electrons which enter the Base region from the Emitter.
The above plot of characteristic curves gives a more complete picture of what we can
expect from a working Bipolar Transistor. Each curve shows how the collector current,
IC, varies with the Collector-Emitter voltage, VCE, for a specific fixed value of the
Base current, IB. This kind of characteristic curve 'family' is one of the most useful
ones when it comes to building amplifiers, etc., using Bipolar Transistors, as it contains
quite a lot of detailed information.

When the applied VCE level is 'large enough' (typically above two or three volts,
shown as the region in blue) the Collector is able to remove free electrons from the
Base almost as quickly as the Emitter injects them. Hence we get a current which is
set by the Base-Emitter voltage, and see a current gain value that doesn't alter very
much if we change either the base current or the applied Collector potential.

However, when we reduce the Collector potential so that VCE is less than a couple of
volts, we find that it is no longer able to efficiently remove electrons from the Base.
This produces a sort of partial 'roadblock' effect where free electrons tend to hang about
in the Base region (the cream-colored region). These make the Base region seem 'more
negative' to any electrons in the Emitter and tend to reduce the overall flow of current
through the device. As we lower the Collector potential to become almost the same as
that of the Base and Emitter, it eventually stops drawing any electrons out of the device
and the Collector current falls towards zero.

The precise voltage at which the Collector ceases to be an effective 'collector of
electrons' depends on the temperature and the manufacturing details of the transistor. In
general we can expect most Bipolar Transistors to work efficiently provided that we
keep the applied VCE above a couple of volts.
Output Characteristic Curves

For each transistor configuration (common emitter, common base and emitter
follower) the output curves are slightly different. A typical output
characteristic for a BJT in common emitter mode is shown below:

After the initial bend, the curves approximate a straight line. The slope or
gradient of each line is related to the output impedance (the flatter the line,
the higher the output impedance) for a particular input base current. So what
has all this got to do with biasing? Take, for example, the middle curve. The
collector emitter voltage is displayed up to 20 volts. Let's assume that we
have a single stage amplifier, working in common emitter mode, and the
supply voltage is 10 volts. The output terminal is the collector, the input is
the base; where do you set the bias conditions? The answer is anywhere on
the flat part of the graph. However, imagine the bias is set so that the
collector voltage is 2 volts. What happens if the output signal is 4 volts peak
to peak? Depending on whether the transistor used is a PNP or NPN, one
half cycle will be amplified cleanly, while the other will approach the limits
of the power supply and will "clip". This is shown below:
5.2.1.1 - Biasing Common Emitter Transistors

· A common emitter configuration is shown in the figure below.

· Consider the common emitter amplifier shown. The resistors provide DC biasing to
select an operating point. The capacitor Ce is used to allow the AC to bypass Re.

· To perform the design we must first bias the transistor using the curves below.
THE SMALL SIGNAL AMPLIFIER

The two transistor types have opposite polarity power


supplies.
The polarity of the capacitors is reversed.
If the transistors have the same characteristics, then resistor
values are the same in both circuits.

R1 and R2 are the base bias resistors, setting the bias point.

R3 is the collector load resistor.


R4 is the emitter stabilising resistor.
C3 is the emitter decoupling capacitor.
C1 and C2 are coupling capacitors which allow ac signals to
pass but block dc.

=============================================================
Theory Of sinusoidal Oscillators

OSCILLATORS
Oscillators require a resonator, per the previous section, and an active device. Typically
additional resistors and capacitors are also required in the circuit.

Theory of operation
Two requirements must be fulfilled in order to obtain oscillation in the closed loop
circuit:
1. The closed loop gain must be greater than or equal to one.
2. The phase shift around the loop must be N*360°, where N is an integer.
In addition, it is important to note that at power up it is the noise of the components
around the oscillator circuit that actually starts the oscillation. Some of the active devices
used are discrete transistors, Field Effect Transistors, Op-Amps, and digital gates. A
common transistor circuit is shown in figure 9 (Colpitts oscillator).
THE DESIGN PRINCIPLES OF CRYSTAL OSCILLATORS

Crystal Oscillators are usually fixed-frequency oscillators where stability and accuracy
are the primary considerations. For example, it is almost impossible to design a stable
and accurate LC oscillator for the upper HF and higher frequencies without resorting to
some sort of crystal control. Hence the reason for crystal oscillators.

I won't be discussing frequency synthesizers and direct digital synthesis (DDS) here.
They are particularly interesting topics to be covered later.
A PRACTICAL EXAMPLE OF A CRYSTAL OSCILLATOR

Fig 1.

This is a typical example of the type of crystal oscillator that may be used for, say,
converters. Some points of interest on crystal oscillators:

The transistor could be a general purpose type with an fT of at least 150
MHz for HF use. A typical example would be a 2N2222A.

The turns ratio on the tuned circuit depicts an anticipated nominal load
of 50 ohms. This allows a theoretical 2K5 ohm on the collector. If it is
followed by a buffer amplifier (recommended) I would simply maintain
the typical 7:1 turns ratio. I have included a formula for determining L
and C in the tuned circuits of crystal oscillators in case you have
forgotten earlier tutorials. Personally I would make L a reactance of
around 250 ohms. In this case I'd make C a smaller trimmer in parallel
with a standard fixed value.
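A minimal Python sketch of the tuned-circuit sizing rule mentioned above: choose L so that its reactance is about 250 ohms at the oscillator frequency, then pick C to resonate with it. The 10 MHz crystal frequency is an assumed example value, not one given in the text.

import math

f  = 10e6          # oscillator (crystal) frequency, Hz (assumed example)
XL = 250.0         # chosen inductive reactance, ohms

L = XL / (2 * math.pi * f)             # L = XL / (2*pi*f)
C = 1 / ((2 * math.pi * f)**2 * L)     # resonance: C = 1 / ((2*pi*f)^2 * L)

print(f"L ~ {L*1e6:.2f} uH")
print(f"C ~ {C*1e12:.0f} pF")

In practice C would be realized as a fixed capacitor in parallel with a small trimmer, as the text suggests.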

=========================================================
Operational Amplifier(OPAMP)

Operational Amplifier Circuits

We have built voltage and current amplifiers using transistors. Circuits of this kind with
nice properties (high gain and high input impedance, for example), packaged as
integrated circuits (ICs), are called operational amplifiers or op amps. They are called
``operational'' amplifiers, because they can be used to perform arithmetic operations
(addition, subtraction, multiplication) with signals. In fact, op amps can also be used to
integrate (calculate the areas under) and differentiate (calculate the slopes of) signals.

Figure 22: A circuit model of an operational amplifier (op amp) with its gain and its
input and output resistances.

A circuit model of an operational amplifier is shown in Figure 22. The output voltage of the
op amp is linearly proportional to the voltage difference between the input terminals,
multiplied by the gain. However, the output voltage is limited to a range set by the
supply voltages, which the designer of the op amp circuit specifies. This range is often
called the linear region of the amplifier, and when the output swings to either limit of
that range, the op amp is said to be saturated. The output ranges of the amplifiers we
built as part of Lab 3 were similarly limited by the supply voltage.

An ideal op amp has infinite gain, infinite input resistance, and zero output resistance.
You should use these assumptions to analyze the op amp circuits covered in the
assignments below. A consequence of the assumption of infinite gain is that, if the
output voltage is within the finite linear region, the two input terminals must be at the
same voltage. A real op amp has a very large but finite gain (depending on
the type), and hence actually maintains a very small difference in input terminal
voltages when operating in its linear region. For most applications, we can get away
with assuming the two input voltages are equal.

Figure 23: (a) Schematic symbol for an op amp. (b) Connection diagram for the LM741
and LF411 8 pin dual inline packages (DIPs). We will not make use of the null (LM741)
/ balance (LF411) pins. Pins labeled NC are not connected to the integrated circuit.

We will use two operational amplifiers in our laboratory exercises: the LM741, a
general purpose bipolar junction transistor (BJT) based amplifier with a typical input
resistance of 2 MΩ, and the LF411, with field effect transistors (FETs) at the inputs
giving a much larger input resistance. Detailed data sheets for these devices
are available for download at the National Semiconductor web site. Of the two, the LF411
comes closest to satisfying our two assumptions associated with ideal op amp behavior.
It costs more than the LM741 (a whopping $0.61 vs. $0.23 as of spring 2001). The
schematic symbol for an op amp and the connection diagram for the chips, called dual
inline packages (DIPs), we will be using are shown in Figure 23.

Inverting Amplifier
Figure 24: Inverting amplifier circuit.

An inverting amplifier circuit is shown in Figure 24.

1. Show that the gain of the amplifier is
(18)
2. Build the circuit, and check your prediction experimentally for gains of 10 and
100.
3. Measure the bandwidth (the difference between the upper and lower 3 dB
points) of the amplifier for each gain. The product of the gain and bandwidth
should be constant. Is it?
4. Check the linearity of the amplifier for each gain over its useful frequency
range.
5. Measure the input impedance of the amplifier by placing various resistors in
series with the source. Explain your result.
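Since the circuit figure and equation (18) are not reproduced here, the sketch below assumes the usual inverting topology: an input resistor R1 from the source to the inverting input and a feedback resistor R2, for which the ideal gain is -R2/R1. The resistor names, component values, and the gain-bandwidth figure are all assumptions for illustration.

# Ideal inverting amplifier gain, assuming input resistor R1 and feedback
# resistor R2 (names chosen here; the lab figure may use different labels).
def inverting_gain(R1, R2):
    return -R2 / R1

# Example component choices for gains of about 10 and 100:
print(inverting_gain(1e3, 10e3))     # -10.0
print(inverting_gain(1e3, 100e3))    # -100.0

# Gain-bandwidth product check (item 3): if the op amp's gain-bandwidth
# product is GBW, the closed-loop bandwidth is roughly GBW / |gain|.
GBW = 1e6                            # assumed ~1 MHz, in the ballpark of an LM741
for gain in (10, 100):
    print(f"gain {gain}: bandwidth ~ {GBW/gain/1e3:.0f} kHz")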

Noninverting Amplifier
Figure 25: Noninverting amplifier circuit.

A noninverting amplifier circuit is shown in Figure 25.

1. Show that the gain of the amplifier is
(19)
2. Build the circuit, and check your prediction experimentally for gains of 10 and
100.
3. What is the input impedance of the amplifier?

Voltage Follower
Figure 26: Voltage follower circuit.

A voltage follower circuit is shown in Figure 26.

1. What's the point?


2. What is the input impedance of the amplifier?
3. Build the circuit, and use it to improve the input impedance of an inverting amp.

Differential Amplifier

Figure 27: Differential amplifier circuit.

A differential amplifier circuit is shown in Figure 27.

1. Show that the output signal of the amplifier is
(20)
2. Build the circuit, and check your prediction experimentally for a gain of 10.
3. Measure the input impedance of the amplifier by placing various resistors in
series with the source. To measure the impedance of one terminal, drive it with
a small signal through a resistor and ground the other. Explain your result.

Summing Amplifier

Figure 28: Summing amplifier circuit.

A summing amplifier circuit is shown in Figure 28.

1. Show that the output signal of the amplifier is
(21)
2. Build the circuit, and check your prediction experimentally for a gain of 10.
3. Measure the input impedance of the amplifier by placing various resistors in
series with the source. To measure the impedance of one terminal, drive it with
a small signal through a resistor and ground the other. Explain your result.

Integrator
Figure 29: Integrator circuit.

An integrator circuit is shown in Figure 29.

1. Show that the output signal of the amplifier is
(22)
2. Build the circuit with the specified resistor (kΩ range) and capacitor (µF range)
values and use square and sinusoidal waveforms to test the predicted behavior.
Also place a MΩ-range resistor in parallel with the capacitor. This resistor drains
charge to avoid saturation due to very low frequency or DC signals.
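As a sanity check on the expected behavior, the short Python sketch below numerically integrates a square wave, which should produce a triangle wave at the output. The component values are hypothetical and only set the scale factor -1/(RC) of an ideal integrator.

import numpy as np

# Hypothetical component values; the ideal integrator output is
# vout(t) = -(1/(R*C)) * integral of vin(t) dt.
R, C = 10e3, 0.1e-6           # 10 kOhm, 0.1 uF (assumed)
fs   = 100e3                  # sample rate for the numeric model, Hz
t    = np.arange(0, 10e-3, 1/fs)

vin  = np.sign(np.sin(2*np.pi*1e3*t))           # 1 kHz square wave, +/-1 V
vout = -(1/(R*C)) * np.cumsum(vin) / fs         # running integral, scaled

print("square in, triangle out; peak |vout| =", round(np.max(np.abs(vout)), 3), "V")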

Differentiator
Figure 30: Differentiator circuit.

A differentiator circuit is shown in Figure 30.

1. Show that the output signal of the amplifier is
(23)
2. Build the circuit with the specified resistor (kΩ range) and capacitor (µF range)
values and use triangle and sinusoidal waveforms to test the predicted behavior.

Schmitt Trigger
Figure 31: Schmitt trigger circuit. The input and output voltages are measured relative to
ground, or to some reference between the two supply rails.

A Schmitt trigger circuit is shown in Figure 31. The analysis is not difficult. It is,
however, tedious. The voltage divider sets the rough neighborhood of the
trigger thresholds. The feedback resistor controls the hysteresis of the switch (the
difference between the ``turn on'' and ``turn off'' thresholds). The feedback resistor
should be a factor of 10-100 larger than the voltage divider resistors. Otherwise, it drags
the thresholds apart.

1. Predict the ``turn on'' and ``turn off'' thresholds for the specified resistor values
(kΩ range). Rather than finding a general expression, it's fine to consider this
particular case. For the analysis, assume a maximal output voltage swing somewhat
below the supply voltage. This actually varies with each op amp, but should not be
far from the truth.
2. Build the circuit, using the resistance values given above. Measure the input
thresholds of the trigger and compare with your predictions.
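Because the actual resistor values and output swing were lost from this copy of the notes, the sketch below works with assumed values and an assumed comparator-style topology: the non-inverting input sits at the node of a divider between the supplies plus a feedback resistor from the output. It is only meant to show the shape of the threshold calculation, not the lab's specific circuit.

# Hedged sketch of a Schmitt trigger threshold calculation. All names and
# values below are assumptions.
def threshold(Vpos, Vneg, Vout, Ra, Rb, Rf):
    # Node voltage at the + input by superposition (Millman's theorem).
    num = Vpos/Ra + Vneg/Rb + Vout/Rf
    den = 1/Ra + 1/Rb + 1/Rf
    return num / den

Vpos, Vneg = 15.0, -15.0     # supply rails, V (assumed)
Vswing     = 13.0            # assumed maximal output swing, V
Ra, Rb     = 10e3, 10e3      # divider resistors (assumed)
Rf         = 100e3           # feedback resistor, 10x larger than the divider

v_high = threshold(Vpos, Vneg, +Vswing, Ra, Rb, Rf)
v_low  = threshold(Vpos, Vneg, -Vswing, Ra, Rb, Rf)
print(f"thresholds: {v_low:.2f} V and {v_high:.2f} V (hysteresis {v_high - v_low:.2f} V)")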

=============================================================

Communication Systems
Concept
We can see from our initial overviews that FM and PM modulation schemes have a lot
in common. Both of them alter the angle of the carrier sinusoid according to
some function. It turns out that we can go so far as to generalize the two together into a
single modulation scheme known as angle modulation. Note that we will never
abbreviate "angle modulation" with the letters "AM", because AM radio is completely
different from angle modulation.

Block diagram of communication system

Fundamental model of communication

Figure 1: The Fundamental Model of Communication.

The fundamental model of communications is portrayed in figure 1. In this fundamental
model, each message-bearing signal, exemplified by s(t), is analog and is a function of
time. A system operates on zero, one, or several signals to produce more signals or to
simply absorb them (figure 2). In electrical engineering, we represent a system as a box,
receiving input signals (usually coming from the left) and producing from them new
output signals. This graphical representation is known as a block diagram. We denote
input signals by lines having arrows pointing into the box, output signals by arrows
pointing away. As typified by the communications model, how information flows, how
it is corrupted and manipulated, and how it is ultimately received is summarized by
interconnecting block diagrams: the outputs of one or more systems serve as the inputs to others.
In the communications model, the source produces a signal that will be absorbed by the
sink. Examples of time-domain signals produced by a source are music, speech, and
characters typed on a keyboard. Signals can also be functions of two variables—an
image is a signal that depends on two spatial variables—or more—television pictures
(video signals) are functions of two spatial variables and time. Thus, information
sources produce signals. In physical systems, each signal corresponds to an electrical
voltage or current. To be able to design systems, we must understand electrical science
and technology. However, we first need to understand the big picture to appreciate the
context in which the electrical engineer works.
In communication systems, messages—signals produced by sources—must be recast
for transmission. The block diagram has the message s(t) passing through a block
labeled transmitter that produces the signal x(t). In the case of a radio transmitter, it
accepts an input audio signal and produces a signal that physically is an
electromagnetic wave radiated by an antenna and propagating as Maxwell's equations
predict. In the case of a computer network, typed characters are encapsulated in
packets, attached with a destination address, and launched into the Internet. From the
communication systems “big picture” perspective, the same block diagram applies
although the systems can be very different. In any case, the transmitter should not
operate in such a way that the message s(t) cannot be recovered from x(t). In the
mathematical sense, the inverse system must exist; else the communication system
cannot be considered reliable. (It is ridiculous to transmit a signal in such a way that no
one can recover the original. However, clever systems exist that transmits signals so
that only the “in crowd” can recover them. Such crytographic systems underlie secret
communications.)
Transmitted signals next pass through the next stage, the evil channel. Nothing good
happens to a signal in a channel: It can become corrupted by noise, distorted, and
attenuated among many possibilities. The channel cannot be escaped (the real world is
cruel), and transmitter design and receiver design focus on how best to jointly fend off
the channel's effects on signals. The channel is another system in our block diagram,
and produces r(t), the signal received by the receiver. If the channel were benign (good
luck finding such a channel in the real world), the receiver would serve as the inverse
system to the transmitter, and yield the message with no distortion. However, because
of the channel, the receiver must do its best to produce a received message s(t) that
resembles s(t) as much as possible. Shannon showed in his 1948 paper that reliable—
for the moment, take this word to mean error-free—digital communication was possible
over arbitrarily noisy channels. It is this result that modern communications systems
exploit, and why many communications systems are going “digital.” The module on
Information Communication details Shannon's theory of information, and there we
learn of Shannon's result and how to use it.
Finally, the received message is passed to the information sink that somehow makes
use of the message. In the communications model, the source is a system having no
input but producing an output; a sink has an input and no output.
Understanding signal generation and how systems work amounts to understanding
signals, the nature of the information they represent, how information is transformed
between analog and digital forms, and how information can be processed by systems
operating on information-bearing signals. This understanding demands two different
fields of knowledge. One is electrical science: How are signals represented and
manipulated electrically? The second is signal science: What is the structure of signals,
no matter what their source, what is their information content, and what capabilities
does this structure force upon communication systems?

Instantaneous Phase
Let us now look at some things that FM and PM have in common:

sFM = Acos(2π[fc + ks(t)]t + φ)

sPM = Acos(2πfct + αs(t))

What we want to analyze is the argument of the sinusoid, and we will call it Psi. Let us
show the Psi for the bare carrier, the FM case, and the PM case:

Ψcarrier(t) = 2πfct + φ

ΨFM(t) = 2π[fc + ks(t)]t + φ

ΨPM(t) = 2πfct + αs(t)

s(t) = Acos(Ψ(t))

This Psi value is called the Instantaneous phase of the sinusoid.

Instantaneous Frequency
Using the Instantaneous phase value, we can find the Instantaneous frequency of the
wave with the following formula:

f(t) = (1/2π) dΨ(t)/dt

We can also express the instantaneous phase in terms of the instantaneous frequency:

Ψ(t) = 2π ∫ f(λ) dλ

where the Greek letter "lambda" (λ) is simply a dummy variable used for integration. Using
these relationships, we can begin to study FM and PM signals further.
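The relation between instantaneous phase and instantaneous frequency can be checked numerically. The Python sketch below builds a PM signal's phase from a made-up 200 Hz message (carrier frequency and phase-deviation constant are also assumed values) and differentiates it to recover the instantaneous frequency.

import numpy as np

# Numerical illustration of f(t) = (1/2*pi) * dPsi/dt for an assumed PM example.
fs = 100e3                                  # sample rate, Hz
t  = np.arange(0, 10e-3, 1/fs)

fc    = 5e3                                 # carrier frequency, Hz (assumed)
alpha = 2.0                                 # phase deviation constant (assumed)
m     = np.sin(2*np.pi*200*t)               # 200 Hz message signal

psi = 2*np.pi*fc*t + alpha*m                # instantaneous phase for PM
f_inst = np.gradient(psi, t) / (2*np.pi)    # numerical derivative -> inst. frequency

print(f"min/max instantaneous frequency: {f_inst.min():.0f} Hz, {f_inst.max():.0f} Hz")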

Determining FM or PM
If we are given the equation for the instantaneous phase of a particular angle modulated
transmission, is it possible to determine whether the transmission is using FM or PM? It
turns out that it is possible to determine which is which, by following 2 simple rules:

1. In PM, the instantaneous phase is a linear function of the message signal.

2. In FM, the instantaneous frequency minus the carrier frequency is a linear function of the message signal.
For a refresher course on Linearity, there is a chapter on the subject in the Signals and
Systems book worth re-reading.

Amplitude Modulation
In Amplitude Modulation or AM, the carrier signal

c(t) = A cos(2πfc t)

has its amplitude modulated in proportion to the message-bearing (lower frequency) signal m(t)
to give

s(t) = A [1 + a m(t)] cos(2πfc t)

The magnitude of a m(t) is chosen to be less than or equal to 1, for reasons having to do with
demodulation, i.e. recovery of the signal m(t) from the received signal. The modulation index is
then defined to be the maximum of |a m(t)|.

Figures 1 and 2 are some Matlab plots of what the modulated signal looks like for a small
modulation index. The frequency of the modulating signal is chosen to be much smaller than
that of the carrier signal. Try to think of what would happen if the modulation index were
bigger than 1.

Figure 1: AM modulation with modulation index 0.2

Note that the AM signal is of the form

s(t) = A cos(2πfc t) + A a m(t) cos(2πfc t)

For a sinusoidal message m(t) = cos(2πfm t), this has frequency components at the frequencies
fc - fm, fc, and fc + fm.
PHASE MODULATION

Frequency modulation requires the oscillator frequency to deviate both above and
below the carrier frequency. During the process of frequency modulation, the peaks of
each successive cycle in the modulated waveform occur at times other than they would
if the carrier were unmodulated. This is actually an incidental phase shift that takes
place along with the frequency shift in fm. Just the opposite action takes place in phase
modulation. The af signal is applied to a PHASE MODULATOR in pm. The resultant
wave from the phase modulator shifts in phase, as illustrated in figure 2-17. Notice that
the time period of each successive cycle varies in the modulated wave according to the
audio-wave variation. Since frequency is a function of time period per cycle, we can
see that such a phase shift in the carrier will cause its frequency to change. The
frequency change in fm is vital, but in pm it is merely incidental. The amount of
frequency change has nothing to do with the resultant modulated wave shape in pm. At
this point the comparison of fm to pm may seem a little hazy, but it will clear up as we
progress.
Figure 2-17.—Phase modulation.

Let’s review some voltage phase relationships. Look at figure 2-18 and compare the
three voltages (A, B, and C). Since voltage A begins its cycle and reaches its peak
before voltage B, it is said to lead voltage B. Voltage C, on the other hand, lags voltage
B by 30 degrees. In phase modulation the phase of the carrier is caused to shift at the
rate of the af modulating signal. In figure 2-19, note that the unmodulated carrier has
constant phase, amplitude, and frequency. The dotted wave shape represents the
modulated carrier. Notice that the phase on the second peak leads the phase of the
unmodulated carrier. On the third peak the shift is even greater; however, on the fourth
peak, the peaks begin to realign in phase with each other. These relationships represent
the effect of 1/2 cycle of an af modulating signal. On the negative alternation of the af
intelligence, the phase of the carrier would lag and the peaks would occur at times later
than they would in the unmodulated carrier.

Figure 2-18.—Phase relationships.

Frequency Modulation
FM is a so-called angle modulation scheme; it was inspired by phase modulation but
has proved to be more useful partly for its ease of generation and decoding. The main
advantages of FM over AM are:

1. Improved signal to noise ratio (about 25 dB) w.r.t. man-made interference.
2. Smaller geographical interference between neighboring stations.
3. Less radiated power.
4. Well-defined service areas for given transmitter power.
Disadvantages of FM:

1. Much more Bandwidth (as much as 20 times as much).


2. More complicated receiver and transmitter.

In this scheme the frequency of the carrier is changed in proportion to the
message signal m(t). Thus the signal that is transmitted is of the form

s(t) = A cos(2π fc t + 2π fΔ ∫ m(λ) dλ)

Here the integral of the message signal is assumed to be normalized so that its maximum is 1, and
fΔ is called the frequency deviation of the modulation scheme. The index of modulation of
an FM signal of this form, for a sinusoidal message of frequency fm, is defined to be fΔ/fm.

Figures 3, 4, and 5 are examples of what FM signals look like in the time domain for a
sinusoidal message signal.
Figure 3: FM modulation with modulating frequency 1, carrier frequency 10 and
modulation index 2

In general the determination of the frequency content of an FM waveform is


complicated, but when

=============================================================
Digital electronics
Basic Concepts Behind the Binary System
To understand binary numbers, begin by recalling elementary school math. When we
first learned about numbers, we were taught that, in the decimal system, things are
organized into columns:

H | T | O
1 | 9 | 3
such that "H" is the hundreds column, "T" is the tens column, and "O" is the ones
column. So the number "193" is 1-hundreds plus 9-tens plus 3-ones.

Years later, we learned that the ones column meant 10^0, the tens column meant 10^1,
the hundreds column 10^2 and so on, such that

10^2|10^1|10^0
1 | 9 | 3
the number 193 is really {(1*10^2)+(9*10^1)+(3*10^0)}.

As you know, the decimal system uses the digits 0-9 to represent numbers. If we
wanted to put a larger number in column 10^n (e.g., 10), we would have to multiply
10*10^n, which would give 10^(n+1), and be carried a column to the left. For example,
putting ten in the 10^0 column is impossible, so we put a 1 in the 10^1 column, and a 0
in the 10^0 column, thus using two columns. Twelve would be 12*10^0, or
10^0(10+2), or 10^1+2*10^0, which also uses an additional column to the left (12).

The binary system works under the exact same principles as the decimal system, only it
operates in base 2 rather than base 10. In other words, instead of columns being

10^2|10^1|10^0
they are
2^2|2^1|2^0

Instead of using the digits 0-9, we only use 0-1 (again, if we used anything larger it
would be like multiplying 2*2^n and getting 2^(n+1), which would not fit in the 2^n
column; it would shift you one column to the left). For example, "3" in binary
cannot be put into one column. The first column we fill is the right-most column, which
is 2^0, or 1. Since 3>1, we need to use an extra column to the left, and indicate it as
"11" in binary (1*2^1) + (1*2^0).
Examples: What would the binary number 1011 be in decimal notation?

Try converting these numbers from binary to decimal:

• 10
• 111
• 10101
• 11110

Remember:
2^4| 2^3| 2^2| 2^1| 2^0
| | | 1 | 0
| | 1 | 1 | 1
1 | 0 | 1 | 0 | 1
1 | 1 | 1 | 1 | 0
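A quick way to check your answers to the example and the exercise above is a short Python sketch; int(s, 2) parses a string as a base-2 number.

# Convert each value from binary to decimal.
for s in ["1011", "10", "111", "10101", "11110"]:
    value = int(s, 2)                       # parse the string as base 2
    print(f"{s:>6} (binary) = {value} (decimal)")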

Binary Addition
Consider the addition of decimal numbers:

23
+48
___

We begin by adding 3+8=11. Since 11 is greater than 10, a one is put into the 10's
column (carried), and a 1 is recorded in the one's column of the sum. Next, add {(2+4)
+1} (the one is from the carry)=7, which is put in the 10's column of the sum. Thus, the
answer is 71.

Binary addition works on the same principle, but the numerals are different. Begin with
one-bit binary addition:

0 0 1
+0 +1 +0
___ ___ ___
0 1 1

1+1 carries us into the next column. In decimal form, 1+1=2. In binary, any digit higher
than 1 puts us a column to the left (as would 10 in decimal notation). The decimal
number "2" is written in binary notation as "10" (1*2^1)+(0*2^0). Record the 0 in the
ones column, and carry the 1 to the twos column to get an answer of "10." In our
vertical notation,
1
+1
___
10

The process is the same for multiple-bit binary numbers:

1010
+1111
______

• Step one:
Column 2^0: 0+1=1.
Record the 1.
Temporary Result: 1; Carry: 0
• Step two:
Column 2^1: 1+1=10.
Record the 0, carry the 1.
Temporary Result: 01; Carry: 1
• Step three:
Column 2^2: 1+0=1 Add 1 from carry: 1+1=10.
Record the 0, carry the 1.
Temporary Result: 001; Carry: 1
• Step four:
Column 2^3: 1+1=10. Add 1 from carry: 10+1=11.
Record the 11.
Final result: 11001

Alternately:

11 (carry)
1010
+1111
______
11001

Always remember

• 0+0=0
• 1+0=1
• 1+1=10

Try a few examples of binary addition:

 111     101     111
+110    +111    +111
____    ____    ____
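The same column-by-column procedure, with the carry handled explicitly, can be sketched in Python (function and variable names are illustrative, not part of the notes):

def add_binary(a: str, b: str) -> str:
    # Pad both numbers to the same width, then add column by column from the right.
    a, b = a.zfill(max(len(a), len(b))), b.zfill(max(len(a), len(b)))
    carry, result = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry      # 0+0, 1+0, or 1+1 (plus any carry)
        result.append(str(total % 2))        # digit recorded in this column
        carry = total // 2                   # 1 if we carried, else 0
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("1010", "1111"))   # 11001, matching the worked example
print(add_binary("111", "110"))     # 1101
print(add_binary("101", "111"))     # 1100
print(add_binary("111", "111"))     # 1110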
Binary Multiplication
Multiplication in the binary system works the same way as in the decimal system:

• 1*1=1
• 1*0=0
• 0*1=0
• 0*0=0

101
* 11
____
101
1010
_____
1111

Note that multiplying by two is extremely easy. To multiply by two, just add a 0 on the
end.
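A quick check of this shift idea using Python's binary literals (0b...) and the left-shift operator:

# "Appending a 0" is a left shift: the same trick the long multiplication above
# uses when it writes the second partial product one column to the left.
x = 0b101                  # 5
print(bin(x << 1))         # 0b1010 -> 10, i.e. 5 * 2
print(bin(0b101 * 0b11))   # 0b1111 -> 15, matching the worked example 101 * 11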

Binary Division
Follow the same rules as in decimal division. For the sake of simplicity, we use integer
division and simply note the remainder at the end.

For Example: 111011/11

     10011 r 10
    ________
11 )111011
   -11
    ______
      101
      -11
     ______
       101
       -11
      ______
        10
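A quick sanity check of this example using Python's built-in integer division:

# 111011 (59) divided by 11 (3) gives quotient 10011 (19), remainder 10 (2).
quotient, remainder = divmod(0b111011, 0b11)
print(bin(quotient), bin(remainder))   # 0b10011 0b10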

Decimal to Binary
Converting from decimal to binary notation is slightly more difficult conceptually, but
can easily be done once you know how, through the use of an algorithm. Begin by
thinking of a few examples. We can easily see that the number 3 = 2+1, and that this is
equivalent to (1*2^1)+(1*2^0). This translates into putting a "1" in the 2^1 column and
a "1" in the 2^0 column, to get "11". Almost as intuitive is the number 5: it is obviously
4+1, which is the same as saying [(2*2)+1], or 2^2+1. This can also be written as
[(1*2^2)+(1*2^0)]. Looking at this in columns,

2^2 | 2^1 | 2^0
 1  |  0  |  1
or 101.

What we're doing here is finding the largest power of two within the number (2^2=4 is
the largest power of 2 in 5), subtracting that from the number (5-4=1), and finding the
largest power of 2 in the remainder (2^0=1 is the largest power of 2 in 1). Then we just
put this into columns. This process continues until we have a remainder of 0. Let's take
a look at how it works. We know that:

2^0=1
2^1=2
2^2=4
2^3=8
2^4=16
2^5=32
2^6=64
2^7=128
and so on. To convert the decimal number 75 to binary, we would find the largest
power of 2 less than 75, which is 64. Thus, we would put a 1 in the 2^6 column, and
subtract 64 from 75, giving us 11. The largest power of 2 in 11 is 8, or 2^3. Put 1 in the
2^3 column, and 0 in 2^4 and 2^5. Subtract 8 from 11 to get 3. Put 1 in the 2^1 column,
0 in 2^2, and subtract 2 from 3. We're left with 1, which goes in 2^0, and we subtract
one to get zero. Thus, our number is 1001011.

Making this algorithm a bit more formal gives us:

1. Let D=number we wish to convert from decimal to binary


2. Repeat until D=0
o a. Find the largest power of two in D. Let this equal P.
o b. Put a 1 in binary column P.
o c. Subtract P from D.
3. Put zeros in all columns which don't have ones.

This algorithm is a bit awkward, particularly step 3, "filling in the zeros." Therefore,
we should rewrite it so that we ascertain the value of each column individually,
putting in 0's and 1's as we go:

1. Let D= the number we wish to convert from decimal to binary


2. Find P, such that 2^P is the largest power of two smaller than D.
3. Repeat until P<0
o If 2^P <= D then
    - put 1 into column P
    - subtract 2^P from D
o Else
    - put 0 into column P
o End if
o Subtract 1 from P
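A minimal Python sketch of this column-by-column algorithm (the function name is illustrative, and a non-negative integer input is assumed):

def decimal_to_binary(d: int) -> str:
    if d == 0:
        return "0"
    p = 0
    while 2 ** (p + 1) <= d:       # find P, the largest power of two <= D
        p += 1
    digits = []
    while p >= 0:                  # repeat until P < 0
        if 2 ** p <= d:
            digits.append("1")     # put 1 into column P
            d -= 2 ** p            # subtract 2^P from D
        else:
            digits.append("0")     # put 0 into column P
        p -= 1
    return "".join(digits)

print(decimal_to_binary(55))   # 110111, as worked through below
print(decimal_to_binary(75))   # 1001011, as in the earlier example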

Now that we have an algorithm, we can use it to convert numbers from decimal to
binary relatively painlessly. Let's try the number D=55.

• Our first step is to find P. We know that 2^4=16, 2^5=32, and 2^6=64.
Therefore, P=5.
• 2^5<=55, so we put a 1 in the 2^5 column: 1-----.
• Subtracting 55-32 leaves us with 23. Subtracting 1 from P gives us 4.
• Following step 3 again, 2^4<=23, so we put a 1 in the 2^4 column: 11----.
• Next, subtract 16 from 23, to get 7. Subtracting 1 from P gives us 3.
• 2^3>7, so we put a 0 in the 2^3 column: 110---
• Next, subtract 1 from P, which gives us 2.
• 2^2<=7, so we put a 1 in the 2^2 column: 1101--
• Subtract 4 from 7 to get 3. Subtract 1 from P to get 1.
• 2^1<=3, so we put a 1 in the 2^1 column: 11011-
• Subtract 2 from 3 to get 1. Subtract 1 from P to get 0.
• 2^0<=1, so we put a 1 in the 2^0 column: 110111
• Subtract 1 from 1 to get 0. Subtract 1 from P to get -1.
• P is now less than zero, so we stop.

Another algorithm for converting decimal to binary

However, this is not the only approach possible. We can start at the right, rather than
the left.

All binary numbers are in the form

a[n]*2^n + a[n-1]*2^(n-1)+...+a[1]*2^1 + a[0]*2^0


where each a[i] is either a 1 or a 0 (the only possible digits for the binary system). The
only way a number can be odd is if it has a 1 in the 2^0 column, because all powers of
two greater than 0 are even numbers (2, 4, 8, 16...). This gives us the rightmost digit as
a starting point.

Now we need to do the remaining digits. One idea is to "shift" them. It is also easy to
see that multiplying and dividing by 2 shifts everything by one column: two in binary is
10, or (1*2^1). Dividing (1*2^1) by 2 gives us (1*2^0), or just a 1 in binary. Similarly,
multiplying by 2 shifts in the other direction: (1*2^1)*2=(1*2^2) or 10 in binary.
Therefore

{a[n]*2^n + a[n-1]*2^(n-1) + ... + a[1]*2^1 + a[0]*2^0}/2

is equal to
a[n]*2^(n-1) + a[n-1]*2^(n-2) + ... + a[1]2^0

Let's look at how this can help us convert from decimal to binary. Take the number
163. We know that since it is odd, there must be a 1 in the 2^0 column (a[0]=1). We
also know that it equals 162+1. If we put the 1 in the 2^0 column, we have 162 left, and
have to decide how to translate the remaining digits.

Two's column: Dividing 162 by 2 gives 81. The number 81 in binary would also have a
1 in the 2^0 column. Since we divided the number by two, we "took out" one power of
two. Similarly, the statement a[n]*2^(n-1) + a[n-1]*2^(n-2) + ... + a[1]*2^0 has had a
power of two removed. Our "new" 2^0 column now contains a[1]. We learned earlier that
there is a 1 in the 2^0 column if the number is odd. Since 81 is odd, a[1]=1. Practically,
we can simply keep a "running total", which now stands at 11 (a[1]=1 and a[0]=1).
Also note that a[1] is essentially "remultiplied" by two just by putting it in front of a[0],
so it is automatically fit into the correct column.

Four's column: Now we can subtract 1 from 81 to see what remainder we still must
place (80). Dividing 80 by 2 gives 40. Since 40 is even (what we are actually placing is
its 2^0 column, and the number is not odd), there must be a 0 in the 4's column.

Eight's column: We can divide by two again to get 20. This is even, so we put a 0 in the
8's column. Our running total now stands at a[3]=0, a[2]=0, a[1]=1, and a[0]=1.

We can continue in this manner until there is no remainder to place.

Let's formalize this algorithm:


1. Let D= the number we wish to convert from decimal to binary.
2. Repeat until D=0:
a) If D is odd, put "1" in the leftmost open column, and subtract
1 from D.
b) If D is even, put "0" in the leftmost open column.
c) Divide D by 2.
End Repeat
For the number 163, this works as follows:
1. Let D=163
2. a) D is odd, put a 1 in the 2^0 column.
Subtract 1 from D to get 162.
c) Divide D=162 by 2.
Temporary Result: 1 New D=81
D does not equal 0, so we repeat step 2.

2. a) D is odd, put a 1 in the 2^1 column.


Subtract 1 from D to get 80.
c) Divide D=80 by 2.
Temporary Result: 11 New D=40
D does not equal 0, so we repeat step 2.

2. b) D is even, put a 0 in the 2^2 column.


c) Divide D by 2.
Temporary Result:011 New D=20
2. b) D is even, put a 0 in the 2^3 column.
c) Divide D by 2.
Temporary Result: 0011 New D=10

2. b) D is even, put a 0 in the 2^4 column.


c) Divide D by 2.
Temporary Result: 00011 New D=5

2. a) D is odd, put a 1 in the 2^5 column.


Subtract 1 from D to get 4.
c) Divide D by 2.
Temporary Result: 100011 New D=2

2. b) D is even, put a 0 in the 2^6 column.


c) Divide D by 2.
Temporary Result: 0100011 New D=1

2. a) D is odd, put a 1 in the 2^7 column.


Subtract 1 from D to get D=0.
c) Divide D by 2.
Temporary Result: 10100011 New D=0

D=0, so we are done, and the decimal number 163 is equivalent to the
binary number 10100011.

Since we already knew how to convert from binary to decimal, we can easily verify our
result. 10100011=(1*2^0)+(1*2^1)+(1*2^5)+(1*2^7)=1+2+32+128= 163.
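A minimal Python sketch of this divide-by-two algorithm (again, the function name is illustrative and a non-negative integer input is assumed):

def decimal_to_binary_rtl(d: int) -> str:
    if d == 0:
        return "0"
    bits = []
    while d != 0:
        if d % 2 == 1:            # odd: there is a 1 in the current column
            bits.append("1")
            d -= 1
        else:                     # even: a 0 in the current column
            bits.append("0")
        d //= 2
    return "".join(reversed(bits))   # bits were produced right to left

print(decimal_to_binary_rtl(163))    # 10100011, matching the trace above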

Negation in the Binary System

The techniques covered so far work well for non-negative integers, but how do we indicate
negative numbers in the binary system? Four common representations are:

• Signed Magnitude
• One's Complement
• Two's Complement
• Excess 2^(m-1)

Before we investigate negative numbers, we note that the computer uses a fixed number
of "bits" or binary digits. An 8-bit number is 8 digits long. For this section, we will
work with 8 bits.

Signed Magnitude:

The simplest way to indicate negation is signed magnitude. In signed magnitude, the
left-most bit is not actually part of the number, but is just the equivalent of a +/- sign.
"0" indicates that the number is positive, "1" indicates negative. In 8 bits, 00001100
would be 12 (break this down into (1*2^3) + (1*2^2) ). To indicate -12, we would
simply put a "1" rather than a "0" as the first bit: 10001100.
One's Complement:

In one's complement, positive numbers are represented as usual in regular binary.


However, negative numbers are represented differently. To negate a number, replace all
zeros with ones, and ones with zeros - flip the bits. Thus, 12 would be 00001100, and
-12 would be 11110011. As in signed magnitude, the leftmost bit indicates the sign (1 is
negative, 0 is positive). To compute the value of a negative number, flip the bits and
translate as before.

Two's Complement:

Begin with the number in one's complement. Add 1 if the number is negative. Twelve
would be represented as 00001100, and -12 as 11110100. To verify this, let's subtract 1
from 11110100, to get 11110011. If we flip the bits, we get 00001100, or 12 in decimal.

Excess 2^(m-1):

In this notation, "m" indicates the total number of bits. For us (working with 8 bits), it
would be excess 2^7. To represent a number (positive or negative) in excess 2^7, begin
by taking the number in regular binary representation. Then add 2^7 (=128) to that
number. For example, 7 would be 128 + 7 = 135, or 2^7+2^2+2^1+2^0, and, in binary,
10000111. We would represent -7 as 128 - 7 = 121, and, in binary, 01111001.

Note:

• Unless you know which representation has been used, you cannot figure out the
value of a number.
• A number in excess 2^(m-1) is the same as that number in two's complement
with the leftmost bit flipped.

To see the advantages and disadvantages of each method, let's try working with them.

Using the regular algorithm for binary addition, add (5+12), (-5+12), (-12+-5), and
(12+-12) in each system. Then convert back to decimal numbers.
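As a rough aid for that exercise, here is a small Python sketch of the four 8-bit representations applied to 12 and -12 (function names are illustrative, and values within the 8-bit range are assumed):

def signed_magnitude(n: int, bits: int = 8) -> str:
    sign = "1" if n < 0 else "0"                                # leftmost bit is the sign
    return sign + format(abs(n), "0{}b".format(bits - 1))

def ones_complement(n: int, bits: int = 8) -> str:
    if n >= 0:
        return format(n, "0{}b".format(bits))
    return format((1 << bits) - 1 + n, "0{}b".format(bits))     # flip all bits of |n|

def twos_complement(n: int, bits: int = 8) -> str:
    return format(n & ((1 << bits) - 1), "0{}b".format(bits))   # one's complement + 1

def excess(n: int, bits: int = 8) -> str:
    return format(n + (1 << (bits - 1)), "0{}b".format(bits))   # add 2^(m-1) = 128

for f in (signed_magnitude, ones_complement, twos_complement, excess):
    print(f.__name__, f(12), f(-12))
# signed_magnitude 00001100 10001100
# ones_complement  00001100 11110011
# twos_complement  00001100 11110100
# excess           10001100 01110100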

Logic gates

• Logic gates represent Boolean functions*; Boolean functions
– receive a number of 0s and 1s as parameters (input),
– and return a 0 or 1 as the result (output)
• Any Boolean function may be implemented using a combination of three
simple logic gates, AND, OR, NOT:
Inputs    Output
A B       A AND B
0 0       0
0 1       0
1 0       0
1 1       1

Inputs    Output
A B       A OR B
0 0       0
0 1       1
1 0       1
1 1       1

Input     Output
A         NOT A
0         1
1         0
* George Boole, 1815–1864, mathematician and logician
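A small Python sketch that reproduces the three truth tables above, treating 1 as true and 0 as false (the function names simply mirror the gate names):

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))   # rows of the AND and OR tables
print(NOT(0), NOT(1))                      # 1 0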
Basic logic gates
• A number of different notations may be used to denote the three basic
logic gates:

Gate        Logic notation    Arithmetic notation
A AND B     A ∧ B             A · B
A OR B      A ∨ B             A + B
NOT A       ¬A                Ā

– In the last column, note how the truth table for AND and the arithmetic
operation · (multiply) coincide (a similar remark applies to OR and +)
Detecting bit patterns with AND
• We can generalize the operation of the AND gate from 2 to n (n > 2) input
parameters: the result will be 1 only if all n input parameters are 1 (in all
other cases, the result is 0)
• Example (detect input bit patterns 111, 101, 010): feed each input bit, or its
negation, into an n-input AND gate, so that the gate outputs 1 exactly for the
pattern of interest
Controlling data flow with AND: a data valve
• Example:
1 Assume a data signal is arriving on wire D
2 Only if the control wire X indicates a logic 1, we want to let D pass,
otherwise keep the output at logic 0
• Note: the truth table for the AND gate can equivalently be written
as
Inputs    Output
D X       D AND X
d 0       0
d 1       d
A multiplexer (data selector)
• To become more familiar with logic gates and boolean functions, let
us
review the design of a multiplexer, a generalization of the “data
valve”
just seen
• An n-way multiplexer accepts 2 × n inputs:
– n data lines, and
– n control lines: whenever the i-th control line (1 ≤ i ≤ n) is a logic 1,
the i-th data line is seen as the output
Example (truth table for 4-way multiplexer):

Control     Data
W X Y Z     A B C D     OUT
0 0 0 1     a b c d     a
0 0 1 0     a b c d     b
0 1 0 0     a b c d     c
1 0 0 0     a b c d     d
• Remember that A · 0 = 0, A · 1 = A, and A + 0 = A
• A Boolean function that implements the 4-way multiplexer thus is

OUT = (A · Z) + (B · Y) + (C · X) + (D · W)
• Example (assume Z = 0, Y = 0, X = 1, W = 0, i.e., data line C is selected):

OUT = (A · Z) + (B · Y) + (C · X) + (D · W)
    = (A · 0) + (B · 0) + (C · 1) + (D · 0)
    = 0 + 0 + C + 0
    = C
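A small Python sketch of this sum-of-products equation over 0/1 values (& standing for · and | for +; the function name mux4 is illustrative):

def mux4(a, b, c, d, w, x, y, z):
    # OUT = (A·Z) + (B·Y) + (C·X) + (D·W)
    return (a & z) | (b & y) | (c & x) | (d & w)

# Select data line C (Z = 0, Y = 0, X = 1, W = 0), as in the worked example:
print(mux4(a=0, b=1, c=1, d=0, w=0, x=1, y=0, z=0))   # 1, i.e. the value on C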
A two-bit decoder
• Observe how the multiplexer will fail if more than one control line indicates a
logic 1
• A better approach, a decoder:
– Number the four control lines (here: 0 . . . 3)
– Select a control line using a two-bit input (⇒ 2^2 = 4 possible control lines)
– If the two-bit input indicates n, emit a bit string of 0s with a single 1 in the
n-th position
– Feed this bit string as control input into the multiplexer
Selector    Line
Y X         d c b a
0 0         0 0 0 1
0 1         0 0 1 0
1 0         0 1 0 0
1 1         1 0 0 0
• To detect the selected control line, we need to check both inputs Y and X
(writing ¬ for negation):

a = ¬Y · ¬X
b = ¬Y · X
c = Y · ¬X
d = Y · X
• Given this circuit, develop the complete Boolean equation for OUT
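As a rough sketch of the decoder equations above (in Python, with 1 - v standing for ¬v; the function name decoder2 is illustrative):

def decoder2(y, x):
    # Returns the control lines (a, b, c, d); exactly one of them is 1.
    return ((1 - y) & (1 - x),   # a = ¬Y · ¬X
            (1 - y) & x,         # b = ¬Y · X
            y & (1 - x),         # c = Y · ¬X
            y & x)               # d = Y · X

for y in (0, 1):
    for x in (0, 1):
        print(y, x, decoder2(y, x))
# (0,0)->(1,0,0,0)=a, (0,1)->(0,1,0,0)=b, (1,0)->(0,0,1,0)=c, (1,1)->(0,0,0,1)=d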
Programmable logic arrays (PLAs)
• Note that the boolean equation for the multiplexer as well as the
decoder
were of the general form (inputs ik, k = 1, 2, . . . ):
OUT = (i1 · i2 · i3 · · · ) + (i1 · i2 · i3 · · · ) + (i1 · i2 · i3 · · · ) + · · ·
• Many Boolean equations may be brought into this sum-of-products
form
