
SWITCHING THEORY & LOGIC DESIGN

ECE DEPARTMENT
St. MARTINS ENGINEERING COLLEGE

UNIT-1
NUMBER SYSTEM AND BOOLEAN ALGEBRA AND SWITCHING
FUNCTIONS
Introduction:
The expression of numerical quantities is something we tend to take for granted. This is both a
good and a bad thing in the study of electronics. It is good, in that we're accustomed to the use and
manipulation of numbers for the many calculations used in analyzing electronic circuits. On the other
hand, the particular system of notation we've been taught from grade school onward is not the system
used internally in modern electronic computing devices, and learning any different system of notation
requires some re-examination of deeply ingrained assumptions.
First, we have to distinguish the difference between numbers and the symbols we use to
represent numbers. A number is a mathematical quantity, usually correlated in electronics to a physical
quantity such as voltage, current, or resistance. There are many different types of numbers. Here are just
a few types, for example:
WHOLE NUMBERS:
1, 2, 3, 4, 5, 6, 7, 8, 9 . . .
INTEGERS:
-4, -3, -2, -1, 0, 1, 2, 3, 4 . . .
IRRATIONAL NUMBERS:
pi (approx. 3.1415927), e (approx. 2.718281828),
square root of any prime
REAL NUMBERS:
(All one-dimensional numerical values, negative and positive,
including zero, whole, integer, and irrational numbers)
COMPLEX NUMBERS:
3 - j4 , 34.5 ∠ 20°

Different types of numbers find different application in the physical world. Whole numbers
work well for counting discrete objects, such as the number of resistors in a circuit. Integers are needed
when negative equivalents of whole numbers are required. Irrational numbers are numbers that cannot
be exactly expressed as the ratio of two integers, and the ratio of a perfect circle's circumference to its
diameter (pi) is a good physical example of this. The non-integer quantities of voltage, current, and
resistance that we're used to dealing with in DC circuits can be expressed as real numbers, in either
fractional or decimal form. For AC circuit analysis, however, real numbers fail to capture the dual
essence of magnitude and phase angle, and so we turn to the use of complex numbers in either
rectangular or polar form.

If we are to use numbers to understand processes in the physical world, make scientific
predictions, or balance our checkbooks, we must have a way of symbolically denoting them. In other
words, we may know how much money we have in our checking account, but to keep record of it we
need to have some system worked out to symbolize that quantity on paper, or in some other kind of
form for record-keeping and tracking. There are two basic ways we can do this: analog and digital. With
analog representation, the quantity is symbolized in a way that is infinitely divisible. With digital
representation, the quantity is symbolized in a way that is discretely packaged.
You're probably already familiar with an analog representation of money, and didn't realize it for
what it was. Have you ever seen a fund-raising poster made with a picture of a thermometer on it, where
the height of the red column indicated the amount of money collected for the cause? The more money
collected, the taller the column of red ink on the poster.

This is an example of an analog representation of a number. There is no real limit to how finely
divided the height of that column can be made to symbolize the amount of money in the account.
Changing the height of that column is something that can be done without changing the essential nature
of what it is. Length is a physical quantity that can be divided as small as you would like, with no
practical limit. The slide rule is a mechanical device that uses the very same physical quantity -- length -- to represent numbers, and to help perform arithmetical operations with two or more numbers at a time.
It, too, is an analog device.
On the other hand, a digital representation of that same monetary figure, written with standard
symbols (sometimes called ciphers), looks like this:
$35,955.38

Unlike the "thermometer" poster with its red column, those symbolic characters above cannot be
finely divided: that particular combination of ciphers stands for one quantity and one quantity only. If
more money is added to the account (+ $40.12), different symbols must be used to represent the new
balance ($35,995.50), or at least the same symbols arranged in different patterns. This is an example of
digital representation. The counterpart to the slide rule (analog) is also a digital device: the abacus, with
beads that are moved back and forth on rods to symbolize numerical quantities:

Let's contrast these two methods of numerical representation:


ANALOG                              DIGITAL
------------------------------      ------------------------------
Intuitively understood              Requires training to interpret
Infinitely divisible                Discrete
Prone to errors of precision        Absolute precision

Interpretation of numerical symbols is something we tend to take for granted, because it has
been taught to us for many years. However, if you were to try to communicate a quantity of something
to a person ignorant of decimal numerals, that person could still understand the simple thermometer
chart!
The infinitely divisible vs. discrete and precision comparisons are really flip-sides of the same
coin. The fact that digital representation is composed of individual, discrete symbols (decimal digits and
abacus beads) necessarily means that it will be able to symbolize quantities in precise steps. On the
other hand, an analog representation (such as a slide rule's length) is not composed of individual steps,
but rather a continuous range of motion. The ability for a slide rule to characterize a numerical quantity
to infinite resolution is a trade-off for imprecision. If a slide rule is bumped, an error will be introduced
into the representation of the number that was "entered" into it. However, an abacus must be bumped
much harder before its beads are completely dislodged from their places (sufficient to represent a
different number).
Please don't misunderstand this difference in precision by thinking that digital representation is
necessarily more accurate than analog. Just because a clock is digital doesn't mean that it will always
read time more accurately than an analog clock, it just means that the interpretation of its display is less
ambiguous.
Divisibility of analog versus digital representation can be further illuminated by talking about
the representation of irrational numbers. Numbers such as pi are called irrational, because they cannot
be exactly expressed as the fraction of integers, or whole numbers. Although you might have learned in
the past that the fraction 22/7 can be used for pi in calculations, this is just an approximation. The
actual number "pi" cannot be exactly expressed by any finite, or limited, number of decimal places. The
digits of pi go on forever:
3.1415926535897932384 . . . . .

It is possible, at least theoretically, to set a slide rule (or even a thermometer column) so as to
perfectly represent the number pi, because analog symbols have no minimum limit to the degree that
they can be increased or decreased. If my slide rule shows a figure of 3.141593 instead of 3.141592654,
I can bump the slide just a bit more (or less) to get it closer yet. However, with digital representation,
such as with an abacus, I would need additional rods (place holders, or digits) to represent pi to further
degrees of precision. An abacus with 10 rods simply cannot represent any more than 10 digits worth of
the number pi, no matter how I set the beads. To perfectly represent pi, an abacus would have to have an
infinite number of beads and rods! The tradeoff, of course, is the practical limitation to adjusting, and
reading, analog symbols. Practically speaking, one cannot read a slide rule's scale to the 10th digit of
precision, because the marks on the scale are too coarse and human vision is too limited. An abacus, on
the other hand, can be set and read with no interpretational errors at all.
Furthermore, analog symbols require some kind of standard by which they can be compared for
precise interpretation. Slide rules have markings printed along the length of the slides to translate length
into standard quantities. Even the thermometer chart has numerals written along its height to show how
much money (in dollars) the red column represents for any given amount of height. Imagine if we all
tried to communicate simple numbers to each other by spacing our hands apart varying distances. The

number 1 might be signified by holding our hands 1 inch apart, the number 2 with 2 inches, and so on.
If someone held their hands 17 inches apart to represent the number 17, would everyone around them be
able to immediately and accurately interpret that distance as 17? Probably not. Some would guess short
(15 or 16) and some would guess long (18 or 19). Of course, fishermen who brag about their catches
don't mind overestimations in quantity!
Perhaps this is why people have generally settled upon digital symbols for representing numbers,
especially whole numbers and integers, which find the most application in everyday life. Using the
fingers on our hands, we have a ready means of symbolizing integers from 0 to 10. We can make hash
marks on paper, wood, or stone to represent the same quantities quite easily:

For large numbers, though, the "hash mark" numeration system is too inefficient.
Systems of numeration
The Romans devised a system that was a substantial improvement over hash marks, because it
used a variety of symbols (or ciphers) to represent increasingly large quantities. The notation for 1 is the
capital letter I. The notation for 5 is the capital letter V. Other ciphers possess increasing values:
X = 10
L = 50
C = 100
D = 500
M = 1000

If a cipher is accompanied by another cipher of equal or lesser value to the immediate right of it,
with no ciphers greater than that other cipher to the right of that other cipher, that other cipher's value is
added to the total quantity. Thus, VIII symbolizes the number 8, and CLVII symbolizes the number
157. On the other hand, if a cipher is accompanied by another cipher of lesser value to the immediate
left, that other cipher's value is subtracted from the first. Therefore, IV symbolizes the number 4 (V
minus I), and CM symbolizes the number 900 (M minus C). You might have noticed that ending credit
sequences for most motion pictures contain a notice for the date of production, in Roman numerals. For
the year 1987, it would read: MCMLXXXVII. Let's break this numeral down into its constituent parts, from
left to right:
M = 1000
+
CM = 900
+
L = 50
+
XXX = 30

+
V = 5
+
II = 2

Aren't you glad we don't use this system of numeration? Large numbers are very difficult to
denote this way, and the left vs. right / subtraction vs. addition of values can be very confusing, too.
Another major problem with this system is that there is no provision for representing the number zero or
negative numbers, both very important concepts in mathematics. Roman culture, however, was more
pragmatic with respect to mathematics than most, choosing only to develop their numeration system as
far as it was necessary for use in daily life.
We owe one of the most important ideas in numeration to the ancient Babylonians, who were the
first (as far as we know) to develop the concept of cipher position, or place value, in representing larger
numbers. Instead of inventing new ciphers to represent larger numbers, as the Romans did, they re-used
the same ciphers, placing them in different positions from right to left. Our own decimal numeration
system uses this concept, with only ten ciphers (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9) used in "weighted"
positions to represent very large and very small numbers.
Each cipher represents an integer quantity, and each place from right to left in the notation
represents a multiplying constant, or weight, for each integer quantity. For example, if we see the
decimal notation "1206", we know that this may be broken down into its constituent weight-products
as such:
1206 = 1000 + 200 + 6
1206 = (1 x 1000) + (2 x 100) + (0 x 10) + (6 x 1)

Each cipher is called a digit in the decimal numeration system, and each weight, or place value,
is ten times that of the one to the immediate right. So, we have a ones place, a tens place, a hundreds
place, a thousands place, and so on, working from right to left.
Right about now, you're probably wondering why I'm laboring to describe the obvious. Who
needs to be told how decimal numeration works, after you've studied math as advanced as algebra and
trigonometry? The reason is to better understand other numeration systems, by first knowing the how's
and why's of the one you're already used to.
The decimal numeration system uses ten ciphers, and place-weights that are multiples of ten.
What if we made a numeration system with the same strategy of weighted places, except with fewer or
more ciphers?
The binary numeration system is such a system. Instead of ten different cipher symbols, with
each weight constant being ten times the one before it, we only have two cipher symbols, and each
weight constant is twice as much as the one before it. The two allowable cipher symbols for the binary
system of numeration are "1" and "0," and these ciphers are arranged right-to-left in doubling values of
weight. The rightmost place is the ones place, just as with decimal notation. Proceeding to the left, we
have the twos place, the fours place, the eights place, the sixteens place, and so on. For example, the

following binary number can be expressed, just like the decimal number 1206, as a sum of each cipher
value times its respective weight constant:
11010 = 16 + 8 + 2 = 26
11010 = (1 x 16) + (1 x 8) + (0 x 4) + (1 x 2) + (0 x 1)

This can get quite confusing, as I've written a number with binary numeration (11010), and then
shown its place values and total in standard, decimal numeration form (16 + 8 + 2 = 26). In the above
example, we're mixing two different kinds of numerical notation. To avoid unnecessary confusion, we
have to denote which form of numeration we're using when we write (or type!). Typically, this is done
in subscript form, with a "2" for binary and a "10" for decimal, so the binary number 11010₂ is equal to
the decimal number 26₁₀.
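This subscript convention maps neatly onto base-aware conversion in code. As a quick illustration (Python is used here only as a convenient notation; the text itself is language-neutral):

```python
# Parse the bit string "11010" as a base-2 numeral -> the decimal integer 26.
n = int("11010", 2)
print(n)        # 26

# bin() goes the other way, rendering 26 in binary notation.
print(bin(26))  # 0b11010
```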
The subscripts are not mathematical operation symbols like superscripts (exponents) are. All
they do is indicate what system of numeration we're using when we write these symbols for other
people to read. If you see "3₁₀", all this means is the number three written using decimal numeration.
However, if you see "3¹⁰", this means something completely different: three to the tenth power (59,049).
As usual, if no subscript is shown, the cipher(s) are assumed to be representing a decimal number.
Commonly, the number of cipher types (and therefore, the place-value multiplier) used in a
numeration system is called that system's base. Binary is referred to as "base two" numeration, and
decimal as "base ten." Additionally, we refer to each cipher position in binary as a bit rather than the
familiar word digit used in the decimal system.
Now, why would anyone use binary numeration? The decimal system, with its ten ciphers,
makes a lot of sense, being that we have ten fingers on which to count between our two hands. (It is
interesting that some ancient central American cultures used numeration systems with a base of twenty.
Presumably, they used both fingers and toes to count!!). But the primary reason that the binary
numeration system is used in modern electronic computers is because of the ease of representing two
cipher states (0 and 1) electronically. With relatively simple circuitry, we can perform mathematical
operations on binary numbers by representing each bit of the numbers by a circuit which is either on
(current) or off (no current). Just like the abacus with each rod representing another decimal digit, we
simply add more circuits to give us more bits to symbolize larger numbers. Binary numeration also
lends itself well to the storage and retrieval of numerical information: on magnetic tape (spots of iron
oxide on the tape either being magnetized for a binary "1" or demagnetized for a binary "0"), optical
disks (a laser-burned pit in the aluminum foil representing a binary "1" and an unburned spot
representing a binary "0"), or a variety of other media types.
Before we go on to learning exactly how all this is done in digital circuitry, we need to become
more familiar with binary and other associated systems of numeration
Decimal versus binary numeration
Let's count from zero to twenty using four different kinds of numeration systems: hash marks,
Roman numerals, decimal, and binary:

System:    Hash Marks                   Roman   Decimal   Binary
-------    ----------                   -----   -------   ------
Zero       n/a                          n/a     0         0
One        |                            I       1         1
Two        ||                           II      2         10
Three      |||                          III     3         11
Four       ||||                         IV      4         100
Five       /|||/                        V       5         101
Six        /|||/ |                      VI      6         110
Seven      /|||/ ||                     VII     7         111
Eight      /|||/ |||                    VIII    8         1000
Nine       /|||/ ||||                   IX      9         1001
Ten        /|||/ /|||/                  X       10        1010
Eleven     /|||/ /|||/ |                XI      11        1011
Twelve     /|||/ /|||/ ||               XII     12        1100
Thirteen   /|||/ /|||/ |||              XIII    13        1101
Fourteen   /|||/ /|||/ ||||             XIV     14        1110
Fifteen    /|||/ /|||/ /|||/            XV      15        1111
Sixteen    /|||/ /|||/ /|||/ |          XVI     16        10000
Seventeen  /|||/ /|||/ /|||/ ||         XVII    17        10001
Eighteen   /|||/ /|||/ /|||/ |||        XVIII   18        10010
Nineteen   /|||/ /|||/ /|||/ ||||       XIX     19        10011
Twenty     /|||/ /|||/ /|||/ /|||/      XX      20        10100
Neither hash marks nor the Roman system are very practical for symbolizing large numbers.
Obviously, place-weighted systems such as decimal and binary are more efficient for the task. Notice,
though, how much shorter decimal notation is over binary notation, for the same number of quantities.
What takes five bits in binary notation only takes two digits in decimal notation.
This raises an interesting question regarding different numeration systems: how large of a
number can be represented with a limited number of cipher positions, or places? With the crude hash-mark system, the number of places IS the largest number that can be represented, since one hash mark
"place" is required for every integer step. For place-weighted systems of numeration, however, the
answer is found by taking base of the numeration system (10 for decimal, 2 for binary) and raising it to
the power of the number of places. For example, 5 digits in a decimal numeration system can represent
100,000 different integer number values, from 0 to 99,999 (10 to the 5th power = 100,000). 8 bits in a
binary numeration system can represent 256 different integer number values, from 0 to 11111111
(binary), or 0 to 255 (decimal), because 2 to the 8th power equals 256. With each additional place
position to the number field, the capacity for representing numbers increases by a factor of the base (10
for decimal, 2 for binary).
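The base-raised-to-places rule can be checked in a couple of lines; "capacity" is a hypothetical helper name, and Python is used purely for illustration:

```python
def capacity(base, places):
    # Number of distinct integers representable in a fixed number of places:
    # the base raised to the power of the number of places.
    return base ** places

print(capacity(10, 5))  # 100000 -> decimal range 0..99999
print(capacity(2, 8))   # 256    -> binary range 0..255 (8 bits)
```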
An interesting footnote to this topic concerns one of the first electronic digital computers, the
Eniac. The designers of the Eniac chose to represent numbers in decimal form, digitally, using a series
of circuits called "ring counters" instead of just going with the binary numeration system, in an effort to
minimize the number of circuits required to represent and calculate very large numbers. This approach
turned out to be counter-productive, and virtually all digital computers since then have been purely
binary in design.
To convert a number in binary numeration to its equivalent in decimal form, all you have to do
is calculate the sum of all the products of bits with their respective place-weight constants. To illustrate:

Convert 11001101₂ to decimal form:

bits   =    1    1    0    0    1    1    0    1
            -    -    -    -    -    -    -    -
weight =  128   64   32   16    8    4    2    1
(in decimal notation)
The bit on the far right side is called the Least Significant Bit (LSB), because it stands in the
place of the lowest weight (the one's place). The bit on the far left side is called the Most Significant Bit
(MSB), because it stands in the place of the highest weight (the one hundred twenty-eight's place).
Remember, a bit value of "1" means that the respective place weight gets added to the total value, and a
bit value of "0" means that the respective place weight does not get added to the total value. With the
above example, we have:
128₁₀ + 64₁₀ + 8₁₀ + 4₁₀ + 1₁₀ = 205₁₀
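The same sum-of-weighted-bits procedure translates directly into a short loop; this is an illustrative sketch (Python, with a hypothetical function name), not part of the original text:

```python
def binary_to_decimal(bits):
    # Walk the bit string from the LSB (rightmost) upward, adding each
    # place weight (a power of two) whenever the corresponding bit is 1.
    total = 0
    for position, bit in enumerate(reversed(bits)):
        total += int(bit) * 2 ** position
    return total

print(binary_to_decimal("11001101"))  # 205
```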

If we encounter a binary number with a dot (.), called a "binary point" instead of a decimal point,
we follow the same procedure, realizing that each place weight to the right of the point is one-half the
value of the one to the left of it (just as each place weight to the right of a decimal point is one-tenth the
weight of the one to the left of it). For example:
Convert 101.011₂ to decimal form:

bits   =    1    0    1    .    0     1     1
            -    -    -         -     -     -
weight =    4    2    1        1/2   1/4   1/8
(in decimal notation)

4₁₀ + 1₁₀ + 0.25₁₀ + 0.125₁₀ = 5.375₁₀
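Extending the sketch to handle a binary point only requires negative powers of two for the digits to the right of the point; again an illustrative Python fragment with a hypothetical helper name:

```python
def binary_to_decimal(number):
    # Split at the binary point; weights to the right are 1/2, 1/4, 1/8, ...
    whole, _, frac = number.partition(".")
    total = sum(int(b) * 2 ** i for i, b in enumerate(reversed(whole)))
    total += sum(int(b) * 2 ** -(i + 1) for i, b in enumerate(frac))
    return total

print(binary_to_decimal("101.011"))  # 5.375
```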

Octal and hexadecimal numeration


Because binary numeration requires so many bits to represent relatively small numbers
compared to the economy of the decimal system, analyzing the numerical states inside of digital
electronic circuitry can be a tedious task. Computer programmers who design sequences of number
codes instructing a computer what to do would have a very difficult task if they were forced to work
with nothing but long strings of 1's and 0's, the "native language" of any digital circuit. To make it
easier for human engineers, technicians, and programmers to "speak" this language of the digital world,
other systems of place-weighted numeration have been made which are very easy to convert to and from
binary.
One of those numeration systems is called octal, because it is a place-weighted system with a
base of eight. Valid ciphers include the symbols 0, 1, 2, 3, 4, 5, 6, and 7. Each place weight differs from
the one next to it by a factor of eight.

Another system is called hexadecimal, because it is a place-weighted system with a base of
sixteen. Valid ciphers include the normal decimal symbols 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9, plus six
alphabetical characters A, B, C, D, E, and F, to make a total of sixteen. As you might have guessed
already, each place weight differs from the one before it by a factor of sixteen.
Let's count again from zero to twenty using decimal, binary, octal, and hexadecimal to contrast
these systems of numeration:
Number      Decimal   Binary   Octal   Hexadecimal
------      -------   ------   -----   -----------
Zero        0         0        0       0
One         1         1        1       1
Two         2         10       2       2
Three       3         11       3       3
Four        4         100      4       4
Five        5         101      5       5
Six         6         110      6       6
Seven       7         111      7       7
Eight       8         1000     10      8
Nine        9         1001     11      9
Ten         10        1010     12      A
Eleven      11        1011     13      B
Twelve      12        1100     14      C
Thirteen    13        1101     15      D
Fourteen    14        1110     16      E
Fifteen     15        1111     17      F
Sixteen     16        10000    20      10
Seventeen   17        10001    21      11
Eighteen    18        10010    22      12
Nineteen    19        10011    23      13
Twenty      20        10100    24      14
Octal and hexadecimal numeration systems would be pointless if not for their ability to be easily
converted to and from binary notation. Their primary purpose in being is to serve as a "shorthand"
method of denoting a number represented electronically in binary form. Because the bases of octal
(eight) and hexadecimal (sixteen) are even multiples of binary's base (two), binary bits can be grouped
together and directly converted to or from their respective octal or hexadecimal digits. With octal, the
binary bits are grouped in three's (because 2³ = 8), and with hexadecimal, the binary bits are grouped in
four's (because 2⁴ = 16):
BINARY TO OCTAL CONVERSION
Convert 10110111.1₂ to octal:

   implied zero                 implied zeros
   |                            ||
   010   110   111   .   100
   ###   ###   ###       ###      Convert each group of bits
    2     6     7    .    4       to its octal equivalent.

Answer: 10110111.1₂ = 267.4₈

We had to group the bits in three's, from the binary point left, and from the binary point right,
adding (implied) zeros as necessary to make complete 3-bit groups. Each octal digit was translated from
the 3-bit binary groups. Binary-to-Hexadecimal conversion is much the same:
BINARY TO HEXADECIMAL CONVERSION
Convert 10110111.1₂ to hexadecimal:

                     implied zeros
                     |||
   1011   0111   .   1000
   ----   ----       ----         Convert each group of bits
    B      7     .    8           to its hexadecimal equivalent.

Answer: 10110111.1₂ = B7.8₁₆

Here we had to group the bits in four's, from the binary point left, and from the binary point
right, adding (implied) zeros as necessary to make complete 4-bit groups:
Likewise, the conversion from either octal or hexadecimal to binary is done by taking each octal
or hexadecimal digit and converting it to its equivalent binary (3 or 4 bit) group, then putting all the
binary bit groups together.
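These groupings can be spot-checked with Python's built-in base converters (illustrative only; the built-ins handle whole numbers, so the fractional part of the running example is left out):

```python
n = 0b10110111         # whole-number part of the example above (183 decimal)
print(oct(n))          # 0o267 -- matches the 3-bit groupings
print(hex(n))          # 0xb7  -- matches the 4-bit groupings

# And back to decimal from octal or hexadecimal notation:
print(int("267", 8))   # 183
print(int("b7", 16))   # 183
```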
Incidentally, hexadecimal notation is more popular, because binary bit groupings in digital
equipment are commonly multiples of eight (8, 16, 32, 64, and 128 bit), which are also multiples of 4.
Octal, being based on binary bit groups of 3, doesn't work out evenly with those common bit group
sizes.
Octal and hexadecimal to decimal conversion
Although the prime intent of octal and hexadecimal numeration systems is for the "shorthand"
representation of binary numbers in digital electronics, we sometimes have the need to convert from
either of those systems to decimal form. Of course, we could simply convert the hexadecimal or octal
format to binary, then convert from binary to decimal, since we already know how to do both, but we
can also convert directly.
Because octal is a base-eight numeration system, each place-weight value differs from either
adjacent place by a factor of eight. For example, the octal number 245.37 can be broken down into place
values as such:
octal
digits =    2    4    5    .    3      7
            -    -    -         -      -
weight =   64    8    1        1/8   1/64
(in decimal notation)

The decimal value of each octal place-weight times its respective cipher multiplier can be
determined as follows:
(2 x 64₁₀) + (4 x 8₁₀) + (5 x 1₁₀) + (3 x 0.125₁₀) + (7 x 0.015625₁₀) = 165.484375₁₀

The technique for converting hexadecimal notation to decimal is the same, except that each
successive place-weight changes by a factor of sixteen. Simply denote each digit's weight, multiply each
hexadecimal digit value by its respective weight (in decimal form), then add up all the decimal values to
get a total. For example, the hexadecimal number 30F.A9₁₆ can be converted like this:
hexadecimal
digits =     3     0     F    .     A       9
             -     -     -          -       -
weight =   256    16     1        1/16   1/256
(in decimal notation)

(3 x 256₁₀) + (0 x 16₁₀) + (15 x 1₁₀) + (10 x 0.0625₁₀) + (9 x 0.00390625₁₀) = 783.66015625₁₀

These basic techniques may be used to convert a numerical notation of any base into decimal
form, if you know the value of that numeration system's base.
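That any-base rule can be written out as one generic routine; "to_decimal" is a hypothetical name, and the fragment is a sketch of the technique described above, not a canonical implementation:

```python
DIGITS = "0123456789abcdef"  # cipher values for bases up to sixteen

def to_decimal(numeral, base):
    whole, _, frac = numeral.lower().partition(".")
    value = 0.0
    for d in whole:          # left of the point: multiply-and-add per digit
        value = value * base + DIGITS.index(d)
    weight = 1.0
    for d in frac:           # right of the point: weights shrink by 1/base
        weight /= base
        value += DIGITS.index(d) * weight
    return value

print(to_decimal("245.37", 8))   # 165.484375
print(to_decimal("30F.A9", 16))  # 783.66015625
```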
Conversion from decimal numeration
Because octal and hexadecimal numeration systems have bases that are multiples of binary (base
2), conversion back and forth between either hexadecimal or octal and binary is very easy. Also,
because we are so familiar with the decimal system, converting binary, octal, or hexadecimal to decimal
form is relatively easy (simply add up the products of cipher values and place-weights). However,
conversion from decimal to any of these "strange" numeration systems is a different matter.
The method which will probably make the most sense is the "trial-and-fit" method, where you
try to "fit" the binary, octal, or hexadecimal notation to the desired value as represented in decimal
form. For example, let's say that I wanted to represent the decimal value of 87 in binary form. Let's start
by drawing a binary number field, complete with place-weight values:
weight =  128   64   32   16    8    4    2    1
(in decimal notation)

Well, we know that we won't have a "1" bit in the 128's place, because that would immediately
give us a value greater than 87. However, since the next weight to the right (64) is less than 87, we
know that we must have a "1" there.
bits   =    1
            -
weight =   64   32   16    8    4    2    1
(in decimal notation)

Decimal value so far = 64₁₀

If we were to make the next place to the right a "1" as well, our total value would be 64₁₀ + 32₁₀,
or 96₁₀. This is greater than 87₁₀, so we know that this bit must be a "0". If we make the next (16's) place
bit equal to "1," this brings our total value to 64₁₀ + 16₁₀, or 80₁₀, which is closer to our desired value
(87₁₀) without exceeding it:
bits   =    1    0    1
            -    -    -
weight =   64   32   16    8    4    2    1
(in decimal notation)

Decimal value so far = 80₁₀

By continuing in this progression, setting each lesser-weight bit as we need to come up to our
desired total value without exceeding it, we will eventually arrive at the correct figure:
bits   =    1    0    1    0    1    1    1
            -    -    -    -    -    -    -
weight =   64   32   16    8    4    2    1
(in decimal notation)

Decimal value so far = 87₁₀
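The trial-and-fit procedure is a greedy walk down the place weights, and can be sketched as follows (Python for illustration; the starting weight of 64 assumes values no larger than 127, as in the example):

```python
def decimal_to_binary(n):
    # From the highest weight down, set a "1" wherever the weight still
    # fits under the remaining value, subtracting it off as we go.
    bits = ""
    weight = 64
    while weight >= 1:
        if weight <= n:
            bits += "1"
            n -= weight
        else:
            bits += "0"
        weight //= 2
    return bits

print(decimal_to_binary(87))  # 1010111
```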

This trial-and-fit strategy will work with octal and hexadecimal conversions, too. Let's take the
same decimal figure, 87₁₀, and convert it to octal numeration:
weight =   64    8    1
(in decimal notation)

If we put a cipher of "1" in the 64's place, we would have a total value of 64₁₀ (less than 87₁₀). If
we put a cipher of "2" in the 64's place, we would have a total value of 128₁₀ (greater than 87₁₀). This
tells us that our octal numeration must start with a "1" in the 64's place:

octal
digits =    1
            -
weight =   64    8    1
(in decimal notation)

Decimal value so far = 64₁₀

Now, we need to experiment with cipher values in the 8's place to try and get a total (decimal)
value as close to 87 as possible without exceeding it. Trying the first few cipher options, we get:
"1" = 6410 + 810 = 7210
"2" = 6410 + 1610 = 8010
"3" = 6410 + 2410 = 8810

A cipher value of "3" in the 8's place would put us over the desired total of 87₁₀, so "2" it is!
octal
digits =    1    2
            -    -
weight =   64    8    1
(in decimal notation)

Decimal value so far = 80₁₀

Now, all we need to make a total of 87 is a cipher of "7" in the 1's place:
octal
digits =    1    2    7
            -    -    -
weight =   64    8    1
(in decimal notation)

Decimal value so far = 87₁₀

Of course, if you were paying attention during the last section on octal/binary conversions, you
will realize that we can take the binary representation of (decimal) 87₁₀, which we previously
determined to be 1010111₂, and easily convert from that to octal to check our work:
           implied zeros
           ||
Binary:    001   010   111
           ---   ---   ---
Octal:      1     2     7

Answer: 1010111₂ = 127₈

Can we do decimal-to-hexadecimal conversion the same way? Sure, but who would want to?
This method is simple to understand, but laborious to carry out. There is another way to do these
conversions, which is essentially the same (mathematically), but easier to accomplish.
This other method uses repeated cycles of division (using decimal notation) to break the decimal
numeration down into multiples of binary, octal, or hexadecimal place-weight values. In the first cycle
of division, we take the original decimal number and divide it by the base of the numeration system that
we're converting to (binary = 2, octal = 8, hex = 16). Then, we take the whole-number portion of the
division result (quotient) and divide it by the base value again, and so on, until we end up with a
quotient of less than 1. The binary, octal, or hexadecimal digits are determined by the "remainders" left
over by each division step. Let's see how this works for binary, with the decimal example of 87₁₀:
87 ÷ 2 = 43.5   Divide 87 by 2, to get a quotient of 43.5.
                The "remainder" is 1, i.e. the < 1 portion of the
                quotient times the divisor (0.5 x 2).
43 ÷ 2 = 21.5   Take the whole-number portion of 43.5 (43) and divide
                it by 2 to get 21.5, or 21 with a remainder of 1.
21 ÷ 2 = 10.5   And so on . . . remainder = 1 (0.5 x 2)
10 ÷ 2 =  5.0   And so on . . . remainder = 0
 5 ÷ 2 =  2.5   And so on . . . remainder = 1 (0.5 x 2)
 2 ÷ 2 =  1.0   And so on . . . remainder = 0
 1 ÷ 2 =  0.5   . . . until we get a quotient of less than 1;
                remainder = 1 (0.5 x 2)

The binary bits are assembled from the remainders of the successive division steps, beginning
with the LSB and proceeding to the MSB. In this case, we arrive at a binary notation of 1010111₂.
When we divide by 2, we will always get a quotient ending with either ".0" or ".5", i.e. a remainder of
either 0 or 1. As was said before, this repeat-division technique for conversion will work for numeration
systems other than binary. If we perform successive divisions using a different number, such as 8 for
conversion to octal, we will necessarily get remainders between 0 and 7. Let's try this with the same
decimal number, 87₁₀:
87 ÷ 8 = 10.875   Divide 87 by 8, to get a quotient of 10.875.
                  The "remainder" is 7, i.e. the < 1 portion of the
                  quotient times the divisor (0.875 x 8).
10 ÷ 8 = 1.25     Remainder = 2 (0.25 x 8)
 1 ÷ 8 = 0.125    Quotient is less than 1, so we'll stop here.
                  Remainder = 1 (0.125 x 8)

RESULT: 87₁₀ = 127₈
We can use a similar technique for converting numeration systems dealing with quantities less
than 1, as well. To convert a decimal number less than 1 into binary, octal, or hexadecimal, we use
repeated multiplication, taking the integer portion of the product in each step as the next digit of our
converted number. Let's use the decimal number 0.8125₁₀ as an example, converting to binary:
0.8125 x 2 = 1.625   Integer portion of product = 1
0.625  x 2 = 1.25    Take the < 1 portion of the product and re-multiply;
                     integer portion of product = 1
0.25   x 2 = 0.5     Integer portion of product = 0
0.5    x 2 = 1.0     Integer portion of product = 1
                     Stop when the product is a pure integer (ends with .0)

RESULT: 0.8125₁₀ = 0.1101₂
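The repeated-multiplication steps can also be sketched in Python (the function name and the `max_digits` cutoff are our own additions; the cutoff guards against fractions with non-terminating expansions):

```python
def fraction_to_base(frac: float, base: int, max_digits: int = 16) -> str:
    """Convert a decimal fraction (0 <= frac < 1) by repeated multiplication."""
    digits = "0123456789ABCDEF"
    out = []
    while frac and len(out) < max_digits:
        frac *= base
        d = int(frac)        # integer portion of the product is the next digit
        out.append(digits[d])
        frac -= d            # keep only the < 1 portion and re-multiply
    return "0." + "".join(out)

print(fraction_to_base(0.8125, 2))  # -> 0.1101
```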

As with the repeat-division process for integers, each step gives us the next digit (or bit) further
away from the "point." With integer division, we worked from the LSB to the MSB (right to left), but
with repeated multiplication, we work from left to right. To convert a decimal number greater than 1
that also has a < 1 component, we must use both techniques, one at a time. Take the decimal example of
54.40625₁₀, converting to binary:
REPEATED DIVISION FOR THE INTEGER PORTION:

54 ÷ 2 = 27.0   Remainder = 0
27 ÷ 2 = 13.5   Remainder = 1 (0.5 x 2)
13 ÷ 2 =  6.5   Remainder = 1 (0.5 x 2)
 6 ÷ 2 =  3.0   Remainder = 0
 3 ÷ 2 =  1.5   Remainder = 1 (0.5 x 2)
 1 ÷ 2 =  0.5   Remainder = 1 (0.5 x 2)

PARTIAL ANSWER: 54₁₀ = 110110₂
REPEATED MULTIPLICATION FOR THE < 1 PORTION:

0.40625 x 2 = 0.8125   Integer portion of product = 0
0.8125  x 2 = 1.625    Integer portion of product = 1
0.625   x 2 = 1.25     Integer portion of product = 1
0.25    x 2 = 0.5      Integer portion of product = 0
0.5     x 2 = 1.0      Integer portion of product = 1

PARTIAL ANSWER: 0.40625₁₀ = 0.01101₂

COMPLETE ANSWER: 54₁₀ + 0.40625₁₀ = 54.40625₁₀
                 110110₂ + 0.01101₂ = 110110.01101₂

Sign-and-magnitude method
One may first approach the problem of representing a number's sign by allocating one sign bit to
represent the sign: set that bit (often the most significant bit) to 0 for a positive number, and set it to 1
for a negative number. The remaining bits in the number indicate the magnitude (or absolute value).
Hence in a byte with only 7 bits (apart from the sign bit), the magnitude can range from 0000000 (0) to
1111111 (127). Thus you can represent numbers from -127₁₀ to +127₁₀ once you add the sign bit (the
eighth bit). A consequence of this representation is that there are two ways to represent zero: 00000000
(+0) and 10000000 (-0). Decimal -43 encoded in an eight-bit byte this way is 10101011. This approach
is directly comparable to the common way of showing a sign (placing a "+" or "-" next to the number's
magnitude). Some early binary computers (e.g. the IBM 7090) used this representation, perhaps because
of its natural relation to common usage. Sign-and-magnitude is the most common way of representing
the significand in floating-point values.
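The encoding above can be sketched in a few lines of Python (the function name is ours; note that Python integers have no distinct -0, so only the +0 pattern arises from integer input):

```python
def sign_magnitude(n: int, bits: int = 8) -> str:
    """Encode an integer in sign-and-magnitude form: the MSB is the sign bit."""
    assert abs(n) < 2 ** (bits - 1), "magnitude does not fit in bits-1 bits"
    sign = "1" if n < 0 else "0"
    # Magnitude occupies the remaining bits-1 positions, zero-padded.
    return sign + format(abs(n), f"0{bits - 1}b")

print(sign_magnitude(-43))  # -> 10101011
```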

One's complement
8-bit one's complement

Binary value   Ones' complement   Unsigned
               interpretation     interpretation
00000000       +0                 0
00000001       1                  1
...            ...                ...
01111101       125                125
01111110       126                126
01111111       127                127
10000000       -127               128
10000001       -126               129
10000010       -125               130
...            ...                ...
11111101       -2                 253
11111110       -1                 254
11111111       -0                 255

Alternatively, a system known as one's complement can be used to represent negative numbers.
The one's complement form of a negative binary number is the bitwise NOT applied to its positive
counterpart (its "complement"). Like sign-and-magnitude representation, one's complement has two
representations of 0: 00000000 (+0) and 11111111 (-0).
As an example, the one's complement form of 00101011 (43) becomes 11010100 (-43). The
range of signed numbers using one's complement is -(2^(N-1) - 1) to 2^(N-1) - 1, plus 0. A
conventional eight-bit byte covers -127₁₀ to +127₁₀, with zero being either 00000000 (+0) or 11111111 (-0).
To add two numbers represented in this system, one does a conventional binary addition, but it is
then necessary to add any resulting carry back into the resulting sum. To see why this is necessary,
consider the following example showing the case of the addition of -1 (11111110) to +2 (00000010).
    binary              decimal
  11111110                -1
+ 00000010              +  2
----------              ----
 1 00000000                0    <-- not the correct answer
          1             +  1    <-- add carry
----------              ----
  00000001                 1    <-- correct answer

In the previous example, the binary addition alone gives 00000000, which is incorrect. Only
when the carry is added back in does the correct result (00000001) appear.
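The end-around carry rule is mechanical enough to sketch in Python (the function name is ours; the word width is taken from the length of the input strings):

```python
def ones_complement_add(a: str, b: str) -> str:
    """Add two equal-width one's-complement values, folding the carry back in."""
    width = len(a)
    total = int(a, 2) + int(b, 2)
    if total >= 1 << width:                  # a carry out of the MSB occurred
        total = (total + 1) % (1 << width)   # end-around carry: add it back in
    return format(total, f"0{width}b")

print(ones_complement_add("11111110", "00000010"))  # -1 + 2 -> 00000001
```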
This numeric representation system was common in older computers; the PDP-1, CDC 160A
and UNIVAC 1100/2200 series, among many others, used one's-complement arithmetic.
A remark on terminology: the system is referred to as "one's complement" because the negation
of a positive value x (represented as the bitwise NOT of x) can also be formed by subtracting x from the
one's complement representation of zero, which is a long sequence of ones (-0). Two's complement
arithmetic, on the other hand, forms the negation of x by subtracting x from a single large power of two
that is congruent to +0.[1] Therefore, one's complement and two's complement representations of the
same negative value will differ by one. Note that the one's complement representation of a negative
number can be obtained from the sign-magnitude representation merely by bitwise complementing the
magnitude.
Two's complement
8-bit two's complement

Binary value   Two's complement   Unsigned
               interpretation     interpretation
00000000       0                  0
00000001       1                  1
...            ...                ...
01111110       126                126
01111111       127                127
10000000       -128               128
10000001       -127               129
10000010       -126               130
...            ...                ...
11111110       -2                 254
11111111       -1                 255

The problems of multiple representations of 0 and the need for the end-around carry are
circumvented by a system called two's complement. In two's complement, negative numbers are
represented by the bit pattern which is one greater (in an unsigned sense) than the one's complement of
the positive value. In two's complement, there is only one zero (00000000). Negating a number (whether
negative or positive) is done by inverting all the bits and then adding 1 to that result. Addition of a pair
of two's-complement integers is the same as addition of a pair of unsigned numbers (except for
detection of overflow, if that is done). For instance, a two's-complement addition of 127 and -128 gives
the same binary bit pattern as an unsigned addition of 127 and 128, as can be seen from the 8-bit two's
complement table. An easier method to get the negation of a number in two's complement is as follows:

1. Starting from the right, find the first '1'.
2. Invert all of the bits to the left of that one.

Example 1: 0101001 -> 1010111
Example 2: 0101100 -> 1010100
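Both negation methods can be sketched in Python and checked against each other (the function names are ours):

```python
def negate_invert_add(bits: str) -> str:
    """Negate by inverting every bit, then adding 1 (mod 2**width)."""
    width = len(bits)
    inverted = int(bits, 2) ^ ((1 << width) - 1)   # bitwise NOT within the width
    return format((inverted + 1) % (1 << width), f"0{width}b")

def negate_shortcut(bits: str) -> str:
    """Shortcut: keep everything up to and including the rightmost '1',
    and invert all bits to its left."""
    i = bits.rfind("1")
    if i == -1:
        return bits            # negating zero gives zero
    flipped = "".join("1" if c == "0" else "0" for c in bits[:i])
    return flipped + bits[i:]

print(negate_invert_add("0101100"))  # -> 1010100
print(negate_shortcut("0101100"))    # -> 1010100, same result
```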

Binary Codes
Binary codes are codes represented in the binary system, with modifications from the
original binary counting sequence. Below we will be seeing the following:

- Weighted Binary Systems
- Non-Weighted Codes

Weighted Binary Systems
Weighted binary codes are those which obey the positional weighting principle: each position
of the number represents a specific weight. The binary counting sequence is an example.
Decimal   8421   2421   5211   Excess-3
0         0000   0000   0000   0011
1         0001   0001   0001   0100
2         0010   0010   0011   0101
3         0011   0011   0101   0110
4         0100   0100   0111   0111
5         0101   1011   1000   1000
6         0110   1100   1010   1001
7         0111   1101   1100   1010
8         1000   1110   1110   1011
9         1001   1111   1111   1100
8421 Code/BCD Code
The BCD (Binary Coded Decimal) code is a straight assignment of the binary equivalent. It is
possible to assign weights to the binary bits according to their positions. The weights in the BCD code
are 8, 4, 2, 1.
Example: The bit assignment 1001 can be seen by its weights to represent the decimal 9
because:
1x8 + 0x4 + 0x2 + 1x1 = 9
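BCD encoding simply maps each decimal digit to its 4-bit binary equivalent, which a minimal Python sketch (our own helper name) makes concrete:

```python
def to_bcd(n: int) -> str:
    """Encode a non-negative decimal number as BCD: 4 bits per decimal digit."""
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(9))    # -> 1001
print(to_bcd(26))   # -> 0010 0110
```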
2421 Code
This is a weighted code; its weights are 2, 4, 2 and 1. A decimal number is represented in 4-bit
form, and the total weight of the four bits is 2 + 4 + 2 + 1 = 9. Hence the 2421 code represents the
decimal numbers from 0 to 9.
5211 Code
This is a weighted code; its weights are 5, 2, 1 and 1. A decimal number is represented in 4-bit
form, and the total weight of the four bits is 5 + 2 + 1 + 1 = 9. Hence the 5211 code represents the
decimal numbers from 0 to 9.
Reflective Code
A code is said to be reflective when the code for 9 is the complement of the code for 0, and
likewise for the codes of 8 and 1, 7 and 2, 6 and 3, and 5 and 4. The 2421, 5211, and Excess-3 codes are
reflective, whereas the 8421 code is not.
Sequential Codes
A code is said to be sequential when two subsequent codes, seen as numbers in binary
representation, differ by one. This greatly aids mathematical manipulation of data. The 8421 and
Excess-3 codes are sequential, whereas the 2421 and 5211 codes are not.
Non Weighted Codes
Non weighted codes are codes that are not positionally weighted. That is, each position within
the binary number is not assigned a fixed value.
Excess-3 Code
Excess-3 is a non-weighted code used to express decimal numbers. The code derives its name
from the fact that each code word is the corresponding 8421 code plus 0011 (3).
Example: 1000 in 8421 = 1011 in Excess-3
Gray Code
The Gray code belongs to a class of codes called minimum-change codes, in which only one bit
in the code changes when moving from one code to the next. The Gray code is a non-weighted code, as
the positions of its bits do not carry any weight. The Gray code is a reflective digital code which has the
special property that the codes of any two subsequent numbers differ by only one bit. It is also called a
unit-distance code, and it has a special place in digital electronics.
Decimal Number   Binary Code   Gray Code
0                0000          0000
1                0001          0001
2                0010          0011
3                0011          0010
4                0100          0110
5                0101          0111
6                0110          0101
7                0111          0100
8                1000          1100
9                1001          1101
10               1010          1111
11               1011          1110
12               1100          1010
13               1101          1011
14               1110          1001
15               1111          1000
Binary to Gray Conversion
- The Gray code MSB is the binary code MSB.
- The Gray code bit at MSB-1 is the XOR of the binary code bits at MSB and MSB-1.
- The MSB-2 bit of the Gray code is the XOR of the MSB-1 and MSB-2 bits of the binary code.
- In general, the MSB-N bit of the Gray code is the XOR of the MSB-(N-1) and MSB-N bits of the binary code.
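The rules above amount to XOR-ing each binary bit with its left neighbour, which a short Python sketch (our own function name) demonstrates against the table:

```python
def binary_to_gray(bits: str) -> str:
    """MSB is copied; every other Gray bit is the XOR of adjacent binary bits."""
    gray = bits[0]
    for i in range(1, len(bits)):
        gray += str(int(bits[i - 1]) ^ int(bits[i]))
    return gray

for n in range(4):
    b = format(n, "04b")
    print(b, "->", binary_to_gray(b))   # matches the first table rows
```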

Binary logic
Binary logic could refer to:
- any two-valued logic, especially in the social sciences
- classical propositional two-valued logic, also called Boolean logic in engineering, which is the logical foundation of digital electronics
- circuits implementing Boolean logic; see logic gates
- an English rock band active from 1986 to 1989, known for their use of synthesisers in tandem with guitar-based harmonies

Parity bit:
A parity bit is an extra bit that is attached to the information being sent from one position to the
other. This bit is attached purely to detect any error in the information during transmission. It is used to
make the number of 1s in the message either even or odd. If any bit changes during transmission, the
number of 1s changes from odd to even or from even to odd, which can be detected. This is how the
parity bit helps to detect errors. So whenever we need to send information, we first pass it through a
PARITY GENERATOR CIRCUIT and then send it. At the receiver end, the received information is
passed through a PARITY CHECK CIRCUIT, and only if the parity matches do we use the information.
If we make the number of 1s even, it is called even parity; if the number of 1s is made odd, it is called
odd parity.
But there are a few drawbacks attached to this method:
- This system can detect only odd numbers of errors, not even numbers. That means that if
  there are 2 or 4 or 6 etc. errors, they go undetected, while it can easily detect 1, 3, 5 etc. errors.
- We cannot determine the position of the error even if we are able to detect it.

Eg. Find out the parity bit (odd) for message 1101 and show how it helps in detecting errors.
As 1101 has 3, i.e. an odd number of 1s, P = 0, so that we still have an odd number of 1s in the
combination of 5 bits (message (4 bits) plus parity bit (1 bit)).
So the message we send with parity is 11010 (the 5th bit from the left is the parity bit).
Let's now see the effect of errors on this message.
1 error: Suppose we have an error in the 3rd bit from the left, so the bit at the 3rd position changes from
0 to 1. Hence the message received is 11110 instead of 11010. At the receiver we check the parity of the
message and see that it has an even number of 1s when it should have been odd, so we have detected
the error, although we are not sure about its position. As the error is detected, we can request another
send of the same message.
2 errors: Suppose we have errors at positions 1 and 4 from the left, so we have a bit change from 1 to 0
at the 1st position and from 1 to 0 at the 4th position. The message received is 01000, and we have an
odd number of 1s in the received message, which matches the parity of the sent message, so no error is
detected. We can see this method is unable to detect an even number of errors.

3 errors: Suppose we have errors in the 2nd, 3rd and 4th bits from the left, so the bit at the 2nd position
changes from 1 to 0, at the 3rd position from 0 to 1, and at the 4th from 1 to 0. Hence the message
received is 10100 instead of 11010. At the receiver we check the parity and see that the message has an
even number of 1s when it should have been odd, so we have detected the error, although we are not
sure about the positions of the errors. As the error is detected, we can request another send of the same
message.
So from here one can easily conclude that the PARITY BIT method can detect only an odd number of
errors, independent of whether we use odd parity or even parity.
Eg. Find out the parity bit (odd) for message 1100.
As 1100 has 2, i.e. an even number of 1s, we take the parity bit (odd) as 1 to make an odd number of 1s
out of 5 bits.
So the message we send is 11001 (the 5th bit is the parity bit).
Eg. Find out the parity bit (even) for message 1100.
As we have 2, i.e. an even number of 1s, we take the parity bit (even) as 0 so that the number of 1s
remains even. Hence the message is 11000 (the 5th bit is the parity bit).
Eg. Find out the parity bit (even) for message 1000.
As we have 1, i.e. an odd number of 1s, we take the parity bit (even) as 1 so that the number of 1s
becomes even. Hence the message is 10001 (the 5th bit is the parity bit).
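The generator and checker circuits above can be mimicked with a few lines of Python (the function names are ours). Note how the double-error case slips past the check, exactly as in the worked examples:

```python
def add_parity(message: str, kind: str = "odd") -> str:
    """Append one parity bit so the total count of 1s is odd (or even)."""
    ones = message.count("1")
    if kind == "odd":
        p = "0" if ones % 2 == 1 else "1"
    else:
        p = "0" if ones % 2 == 0 else "1"
    return message + p

def parity_ok(received: str, kind: str = "odd") -> bool:
    """Check whether the received word still has the expected parity."""
    ones = received.count("1")
    return (ones % 2 == 1) if kind == "odd" else (ones % 2 == 0)

print(add_parity("1101", "odd"))   # -> 11010
print(parity_ok("11110", "odd"))   # single error  -> False (detected)
print(parity_ok("01000", "odd"))   # double error  -> True  (missed!)
```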

Hamming code:
This code is used for single error correction, i.e. using this code we can detect and correct a single
error. In the parity bit method we used only a single extra bit, but in this method the number of extra
bits (which are also parity bits) varies with the number of message bits. The number of parity bits p for
m information bits must satisfy

2^p >= m + p + 1

Suppose we have the number of information bits m = 4; then we have to determine the number of parity
bits using the above relation:
2^p >= 4 + p + 1
2^p >= 5 + p

From this we can check which value of p satisfies the relation:
For p = 1:   2 >= 6   does not satisfy
For p = 2:   4 >= 7   does not satisfy
For p = 3:   8 >= 8   satisfies, hence we have p = 3

So now we have 4 information bits and 3 parity bits, a total of 7 bits. In the parity bit method, we
placed the parity bit at the rightmost position, but here the extra bits are not placed consecutively; their
positions are fixed by the following rule: the parity bits occupy the bit positions that are powers of two
(1, 2, 4, 8, ...). As we need only three positions, we pick the first three, which are 1, 2 and 4.
So we have the composition of the Hamming code as follows:

Position:   bit 1    bit 2    bit 3   bit 4    bit 5   bit 6   bit 7
Contents:   parity   parity   M1      parity   M2      M3      M4
            P1       P2               P3

Now we have to decide which positions in the Hamming code are covered by each parity bit, i.e.
the positions whose contents decide the value of that parity bit. We'll use the following rule: write each
bit position in binary, and let a parity bit cover every position whose binary equivalent has a 1 in the
same place as the parity bit's own position.

E.g. Consider the parity bit P1; we have to find the positions of the message bits we'll cover with
this parity bit. First write the binary equivalents of the bit positions:

Position:   bit 1   bit 2   bit 3   bit 4   bit 5   bit 6   bit 7
Contents:   P1      P2      M1      P3      M2      M3      M4
Binary:     001     010     011     100     101     110     111

Now look at the binary equivalent of the position of parity bit P1 and see where it has a 1: it is at the
LSB. So we select the message bits whose positions have a 1 at the LSB, which are M1, M2 and M4.
So the P1 bit checks the parity of M1, M2 and M4.
E.g. Consider the parity bit P2; we have to find the positions of the message bits we'll cover with
this parity bit. Its position has a 1 at the middle bit, so we choose message bits whose positions have a 1
at the middle bit of their binary equivalents. Hence we get message bits M1, M3 and M4, so P2 checks
the parity of M1, M3 and M4.
Similarly, P3 has a 1 at the MSB of its position, and the message bits with a 1 at the MSB of their
positions are M2, M3 and M4.
So we now have:
P1 checks bit numbers 1, 3, 5, 7
P2 checks bit numbers 2, 3, 6, 7
P3 checks bit numbers 4, 5, 6, 7
These parity bits can be either even or odd parity bits, but all parity bits must be the same, i.e. all odd or
all even.
Eg. Let's form the Hamming code for the 4-bit message 1101 with even parity bits, and check
how it is able to detect and correct an error.
We have already decided the parity bit positions and their corresponding message bits for a 4-bit
message.
For the moment we have the Hamming code as P1 P2 1 P3 1 0 1.
As we have already seen:
P1 checks bit numbers 1, 3, 5, 7, so P1 = 1 to make the number of 1s in positions 1, 3, 5, 7 equal
to 4, i.e. even.
P2 checks bit numbers 2, 3, 6, 7, so P2 = 0 to make the number of 1s in positions 2, 3, 6, 7 equal
to 2, i.e. even.
P3 checks bit numbers 4, 5, 6, 7, so P3 = 0 to make the number of 1s in positions 4, 5, 6, 7 equal
to 2, i.e. even.
Hence we have the Hamming code 1010101.

As we have already mentioned, the Hamming code can detect and correct only a single error. Suppose
we have an error at the 5th position, which means that bit changes from 1 to 0, so the data changes from
1010101 to 1010001.
Now let's check all 3 parity bits, starting from P1:
P1 checks bit numbers 1, 3, 5, 7, and we see the number of 1s in these bits is 3, i.e. odd, which is wrong
as it should have been even, so put down a 1.
P2 checks bit numbers 2, 3, 6, 7, and we see the number of 1s in these bits is 2, i.e. even, which is right,
so put down a 0.
P3 checks bit numbers 4, 5, 6, 7, and we see the number of 1s in these bits is 1, i.e. odd, which is wrong
as it should have been even, so put down a 1.
Now we collect all the recorded bits, taking the bit from P1 as the LSB. So we get 101, hence the error
is at bit 5.
Alternatively: as P1 and P3 give wrong results, we look for the code bit position common to P1 and P3
but not found among the code bit positions of parity bit P2. Positions 5 and 7 are common to P1 and P3,
but 7 is also present in P2, so we are left with code position 5.
RULE TO FIND THE POSITION OF THE ERROR: Start from P1 and check whether each parity bit is
correct or wrong; if it is wrong, put down a 1, and put down a 0 if it is correct. When we reach the end,
we collect all those bits, taking the bit from P1 as the LSB. The decimal equivalent of the bits collected
is the bit position of the error.

Hence we have a bit change in position 5, so we flip it back to get the correct value. Changing the 5th
bit of the received message 1010001 gives 1010101, which is the correct one.
So we see the Hamming code is able to detect and correct a single error.
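The whole encode-and-correct procedure for the (7,4) layout above can be sketched in Python (even parity, as in the example; the function names are our own):

```python
def hamming74_encode(m: str) -> str:
    """Encode 4 message bits (M1..M4) into 7 bits with even parity.
    Layout: positions 1..7 = P1 P2 M1 P3 M2 M3 M4."""
    m1, m2, m3, m4 = (int(b) for b in m)
    p1 = m1 ^ m2 ^ m4          # covers positions 1, 3, 5, 7
    p2 = m1 ^ m3 ^ m4          # covers positions 2, 3, 6, 7
    p3 = m2 ^ m3 ^ m4          # covers positions 4, 5, 6, 7
    return "".join(str(b) for b in (p1, p2, m1, p3, m2, m3, m4))

def hamming74_correct(code: str) -> str:
    """Locate and fix a single-bit error; the syndrome is the error position."""
    bits = [int(b) for b in code]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]   # check 1, 3, 5, 7
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]   # check 2, 3, 6, 7
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]   # check 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3                   # P1's result is the LSB
    if pos:
        bits[pos - 1] ^= 1                       # flip the erroneous bit
    return "".join(str(b) for b in bits)

print(hamming74_encode("1101"))       # -> 1010101
print(hamming74_correct("1010001"))   # error at bit 5 fixed -> 1010101
```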
Eg. Now form a Hamming code for the 5-bit information bits 10110 with odd parity.
m = 5, and we have to satisfy

2^p >= m + p + 1

The value p = 4 satisfies, since 2^4 (16) >= 5 + 4 + 1, but p = 3 does not, as 2^3 (8) < 5 + 3 + 1.
So p = 4, and hence we have a total of 9 bits.
The parity bit positions are 1, 2, 4, 8, and hence the composition of the Hamming code is as follows:
Position:   1      2      3      4      5      6      7      8      9
Binary:     0001   0010   0011   0100   0101   0110   0111   1000   1001
Contents:   P1     P2     M1     P3     M2     M3     M4     P4     M5

Now, P1 has a 1 at the LSB of its position, so the message bits covered by this parity bit are M1, M2,
M4 and M5.
Similarly, we see P2 checks M1, M3 and M4.
Similarly, we see P3 checks M2, M3 and M4.
Similarly, we see P4 checks M5.
Or we can also put it as:
P1 checks code bits 1, 3, 5, 7, 9
P2 checks code bits 2, 3, 6, 7
P3 checks code bits 4, 5, 6, 7
P4 checks code bits 8, 9
For the message 10110 we have the Hamming code:

Position:   1    2    3    4    5    6    7    8    9
Contents:   P1   P2   1    P3   0    1    1    P4   0

We see P1 = 1 to make the number of 1s in positions 1, 3, 5, 7, 9 equal to 3, i.e. odd.
We see P2 = 0 to make the number of 1s in positions 2, 3, 6, 7 equal to 3, i.e. odd.
We see P3 = 1 to make the number of 1s in positions 4, 5, 6, 7 equal to 3, i.e. odd.
We see P4 = 1 to make the number of 1s in positions 8, 9 equal to 1, i.e. odd.
So we get the Hamming code 101101110.

PART 2
BOOLEAN ALGEBRA


Introduction
Mathematical rules are based on the defining limits we place on the particular numerical
quantities dealt with. When we say that 1 + 1 = 2 or 3 + 4 = 7, we are implying the use of integer
quantities: the same types of numbers we all learned to count in elementary education. What most
people assume to be self-evident rules of arithmetic -- valid at all times and for all purposes -- actually
depend on what we define a number to be.
For instance, when calculating quantities in AC circuits, we find that the "real" number
quantities which served us so well in DC circuit analysis are inadequate for the task of representing AC
quantities. We know that voltages add when connected in series, but we also know that it is possible to
connect a 3-volt AC source in series with a 4-volt AC source and end up with 5 volts total voltage (3 + 4
= 5)! Does this mean the inviolable and self-evident rules of arithmetic have been violated? No, it just
means that the rules of "real" numbers do not apply to the kinds of quantities encountered in AC
circuits, where every variable has both a magnitude and a phase. Consequently, we must use a different
kind of numerical quantity, or object, for AC circuits (complex numbers, rather than real numbers), and
along with this different system of numbers comes a different set of rules telling us how they relate to
one another.
An expression such as "3 + 4 = 5" is nonsense within the scope and definition of real numbers,
but it fits nicely within the scope and definition of complex numbers (think of a right triangle with
opposite and adjacent sides of 3 and 4, with a hypotenuse of 5). Because complex numbers are
two-dimensional, they are able to "add" with one another trigonometrically as single-dimension "real"
numbers cannot.
Logic is much like mathematics in this respect: the so-called "Laws" of logic depend on how we
define what a proposition is. The Greek philosopher Aristotle founded a system of logic based on only
two types of propositions: true and false. His bivalent (two-mode) definition of truth led to the four
foundational laws of logic: the Law of Identity (A is A); the Law of Non-contradiction (A is not non-A);
the Law of the Excluded Middle (either A or non-A); and the Law of Rational Inference. These
so-called Laws function within the scope of logic where a proposition is limited to one of two possible
values, but may not apply in cases where propositions can hold values other than "true" or "false." In
fact, much work has been done and continues to be done on "multivalued," or fuzzy logic, where
propositions may be true or false to a limited degree. In such a system of logic, "Laws" such as the Law
of the Excluded Middle simply do not apply, because they are founded on the assumption of bivalence.
Likewise, many premises which would violate the Law of Non-contradiction in Aristotelian logic have
validity in "fuzzy" logic. Again, the defining limits of propositional values determine the Laws
describing their functions and relations.
The English mathematician George Boole (1815-1864) sought to give symbolic form to
Aristotle's system of logic. Boole wrote a treatise on the subject in 1854, titled An Investigation of the
Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities, which
codified several rules of relationship between mathematical quantities limited to one of two possible
values: true or false, 1 or 0. His mathematical system became known as Boolean algebra.
All arithmetic operations performed with Boolean quantities have but one of two possible
outcomes: either 1 or 0. There is no such thing as "2" or "-1" or "1/2" in the Boolean world. It is a world
in which all other possibilities are invalid by fiat. As one might guess, this is not the kind of math you

want to use when balancing a checkbook or calculating current through a resistor. However, Claude
Shannon of MIT fame recognized how Boolean algebra could be applied to on-and-off circuits, where
all signals are characterized as either "high" (1) or "low" (0). His 1938 thesis, titled A Symbolic Analysis
of Relay and Switching Circuits, put Boole's theoretical work to use in a way Boole never could have
imagined, giving us a powerful mathematical tool for designing and analyzing digital circuits.
In this chapter, you will find a lot of similarities between Boolean algebra and "normal" algebra,
the kind of algebra involving so-called real numbers. Just bear in mind that the system of numbers
defining Boolean algebra is severely limited in terms of scope, and that there can only be one of two
possible values for any Boolean variable: 1 or 0. Consequently, the "Laws" of Boolean algebra often
differ from the "Laws" of real-number algebra, making possible such statements as 1 + 1 = 1, which
would normally be considered absurd. Once you comprehend the premise of all quantities in Boolean
algebra being limited to the two possibilities of 1 and 0, and the general philosophical principle of Laws
depending on quantitative definitions, the "nonsense" of Boolean algebra disappears.
It should be clearly understood that Boolean numbers are not the same as binary numbers.
Whereas Boolean numbers represent an entirely different system of mathematics from real numbers,
binary is nothing more than an alternative notation for real numbers. The two are often confused
because both Boolean math and binary notation use the same two ciphers: 1 and 0. The difference is that
Boolean quantities are restricted to a single bit (either 1 or 0), whereas binary numbers may be
composed of many bits adding up in place-weighted form to a value of any finite size. The binary
number 10011₂ ("nineteen") has no more place in the Boolean world than the decimal number 2₁₀
("two") or the octal number 32₈ ("twenty-six").

POSTULATES BOOLEAN ALGEBRA


1. A New Set of Postulates. In the development of a Boolean
Algebra, Boole's Law of Development,
f(x) = f(1)x + f(0)x',
stands out as a basic relationship. This law is so all-embracing
that the question naturally arises: if this is set as a postulate,
what postulates in addition to it are needed to define a Boolean
Algebra? Using as undefined a class K and the Sheffer stroke
function, we shall show that, in addition to a form of Boole's
Law, only two "trivial" postulates are required.
POSTULATES.*
I. K contains at least two elements.
II. If a and b are elements of K, then a/b is an element of K.
Definitions: a' = a/a, a·b = a'/b', and a+b = (a/b)'.
III. There exists in K a unique element 0, such that, if f(x) is
any function definable in terms of / and elements of K, we have, for
any x in K,
f(x) = f(0')x + f(0)x'.
THEOREM 1. 0'' = 0.
Proof: From III, and the preceding definitions, we have
(1) x = 0'x + 0x' = [(0'x)/(0x')]';
in particular
* This is the smallest set of postulates for a Boolean Algebra yet given.
1 = 1''. Hence 1' = 1''' and 0 = 0''.
THEOREM 2. x'' = x.
Proof: x'' = 0'''x + 0''x' = 0'x + 0x' = x.
From Theorem 1 and the definition of 1, we have 0' = 1, and thus
0/0 = 0' = 1, and 1/1 = 1' = 0.
THEOREM 3. 1/0 = 0/1 = 0.
Proof: From III and the definitions, we have
1 = 0'·1 + 0·1' = 0/0 + 1/1 = 1 + 0 = (1/0)';
hence 0 = 1/0.
0' = 0''·0 + 0'·0' = 1/1 + 0/0 = 0 + 1 = (0/1)';
hence 0 = 0/1.
THEOREM 4. 0+0 = 0, 1+0 = 1, 0+1 = 1, 1+1 = 1, 0·0 = 0,
1·0 = 0, 0·1 = 0, and 1·1 = 1.
Proof: These equations follow immediately upon using the
results of the preceding theorems in the definitions of + and · .
In the following theorems the equations are obtained by letting
f(x) equal the left-hand side.
THEOREM 5. 1·x = x = x·1 = 0+x = x+0.
Proof: 1x = (1·1)x + (1·0)x' = 1x + 0x',
x = 1x + 0x',
x·1 = (1·1)x + (0·1)x' = 1x + 0x',
0+x = (0+1)x + (0+0)x' = 1x + 0x',
x+0 = (1+0)x + (0+0)x' = 1x + 0x';
since all five are equal to 1x + 0x', the theorem follows.
THEOREM 6. 0x = x0 = 0 = xx'.
Proof: 0x = (0·1)x + (0·0)x' = 0x + 0x',
x0 = (1·0)x + (0·0)x' = 0x + 0x',
0 = 0x + 0x',
xx' = (1·1')x + (0·0')x' = 0x + 0x';
since all four are equal to 0x + 0x', the theorem follows.
THEOREM 7. 1+x = x+1 = 1 = x+x'.
Proof: 1+x = (1+1)x + (1+0)x' = 1x + 1x',
x+1 = (1+1)x + (0+1)x' = 1x + 1x',
1 = 1x + 1x',
x+x' = (1+1')x + (0+0')x' = 1x + 1x'.
THEOREM 8. ax = xa.
Proof: ax = (a·1)x + (a·0)x' = (1·a)x + (0·a)x' = xa.
THEOREM 9. a+x = x+a.
Proof: a+x = (a+1)x + (a+0)x' = (1+a)x + (0+a)x' = x+a.
THEOREM 10. x+bc = (x+b)(x+c).
Proof: (x+b)(x+c) = (1+b)(1+c)x + (0+b)(0+c)x'
= (1)(1)x + (b)(c)x' = 1x + bc·x'
= (1+bc)x + (0+bc)x' = x+bc.
THEOREM 11. xb+xc = x(b+c).
Proof: xb+xc = (1b+1c)x + (0b+0c)x' = (b+c)x + 0x'
= (b+c)x = x(b+c).
2. Huntington's Postulates and their Derivation. The following
is Huntington's set of postulates; to each is appended a brief
indication of its derivation from those of our set.
1. (a) If a and b are elements of K, then a+b is an element of K.
By definition, a+b = (a/b)' = (a/b)/(a/b); by Postulate I, if a and
b are elements of K, a/b is an element, and a+b is an element.
(b) If a and b are elements of K, then a·b is an element of K.
By definition, a·b = a'/b' = (a/a)/(b/b), which is an element of K as
above.
2. (a) There exists an element 0 in K such that a+0 = a.
(b) There exists an element 1 in K such that a·1 = a.
Theorem 5.
3. (a) If a, b, a+b, b+a belong to K, then a+b = b+a.
Theorem 9.
(b) If a, b, ab, ba belong to K, then ab = ba.
Theorem 8.
4. (a) If a, b, c, bc, a+bc, a+b, a+c, and (a+b)(a+c) belong
to K, then a+bc = (a+b)(a+c).
Theorem 10.
(b) If a, b, c, b+c, a(b+c), ab, ac, and ab+ac belong to K, then
ab+ac = a(b+c).
Theorem 11.
5. If 0 and 1 exist and are unique, then for every element a belonging
to K there exists an element a' in K such that a+a' = 1 and
aa' = 0.
Theorems 1, 6, and 7.
6. There are at least two distinct elements in K.
Postulate I.
3. A Two Element Boolean Algebra. A set of postulates for a
two element Boolean Algebra can be obtained by changing

Introduction
The most obvious way to simplify Boolean expressions is to manipulate them in the same way
as normal algebraic expressions are manipulated. With regard to logic relations in digital form, a set
of rules for symbolic manipulation is needed in order to solve for the unknowns.
A set of rules formulated by the English mathematician George Boole describes certain propositions
whose outcome would be either true or false. With regard to digital logic, these rules are used to
describe circuits whose state can be either 1 (true) or 0 (false). In order to fully understand this, the
relations between the AND gate, OR gate and NOT gate operations should be appreciated. A number of
rules can be derived from these relations, as Table 1 demonstrates.

P1: X = 0 or X = 1
P2: 0 . 0 = 0
P3: 1 + 1 = 1
P4: 0 + 0 = 0
P5: 1 . 1 = 1


P6: 1 . 0 = 0 . 1 = 0
P7: 1 + 0 = 0 + 1 = 1

Table 1: Boolean Postulates

Laws of Boolean Algebra


Table 2 shows the basic Boolean laws. Note that every law has two expressions, (a) and (b). This
is known as duality. These are obtained by changing every AND (.) to OR (+), every OR (+) to AND (.),
and all 1's to 0's and vice-versa.
It has become conventional to drop the . (AND symbol), i.e. A.B is written as AB.
T1 : Commutative Law
(a) A + B = B + A
(b) A B = B A
T2 : Associative Law
(a) (A + B) + C = A + (B + C)
(b) (A B) C = A (B C)
T3 : Distributive Law
(a) A (B + C) = A B + A C
(b) A + (B C) = (A + B) (A + C)
T4 : Identity Law
(a) A + A = A
(b) A A = A
T5 : Double Negation Law
(a) (A')' = A
T6 : Redundance Law
(a) A + A B = A
(b) A (A + B) = A
T7 :
(a) 0 + A = A
(b) 0 A = 0
T8 :
(a) 1 + A = 1
(b) 1 A = A
T9 :
(a) A' + A = 1
(b) A' A = 0
T10 :
(a) A + A' B = A + B
(b) A (A' + B) = A B
T11 : De Morgan's Theorem
(a) (A + B)' = A' B'
(b) (A B)' = A' + B'
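A representative sample of the laws in Table 2 can be verified exhaustively over {0, 1}. In the sketch below the complement of A is written as 1 - A; the overbarred laws (the complement law and De Morgan's theorem) are assumed here in their standard forms rather than copied from the table:

```python
from itertools import product

def holds(lhs, rhs, nvars):
    # True when lhs and rhs agree for every 0/1 assignment.
    return all(lhs(*v) == rhs(*v) for v in product((0, 1), repeat=nvars))

NOT = lambda a: 1 - a

assert holds(lambda a, b: a | b, lambda a, b: b | a, 2)         # T1a, commutative
assert holds(lambda a, b, c: a & (b | c),
             lambda a, b, c: (a & b) | (a & c), 3)              # T3a, distributive
assert holds(lambda a, b: a | (a & b), lambda a, b: a, 2)       # T6a, redundance
assert holds(lambda a: NOT(a) | a, lambda a: 1, 1)              # complement law
assert holds(lambda a, b: NOT(a | b),
             lambda a, b: NOT(a) & NOT(b), 2)                   # De Morgan (a)
assert holds(lambda a, b: NOT(a & b),
             lambda a, b: NOT(a) | NOT(b), 2)                   # De Morgan (b)
print("all checked laws hold")
```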

Table 2: Boolean Laws

Examples
Prove T10(a): A + A'B = A + B.

Algebraically:

A + A'B = (A + A')(A + B)    (by T3b)
= 1 . (A + B)    (by T9a)
= A + B

Using the truth table: evaluating A + A'B and A + B for each of the four combinations of A and
B gives identical outputs, which verifies the identity.

Using the laws given above, complicated expressions can be simplified.
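Assuming the usual statement of T10(a), A + A'B = A + B, the truth-table check can be mechanised. The sketch below evaluates both sides for all four input rows (the complement A' is written as 1 - A):

```python
# Truth-table check of T10(a): A + A'B = A + B.
rows = []
for A in (0, 1):
    for B in (0, 1):
        lhs = A | ((1 - A) & B)   # A + A'B
        rhs = A | B               # A + B
        rows.append((A, B, lhs, rhs))
        assert lhs == rhs         # both sides agree in every row

print("A B | A+A'B | A+B")
for A, B, lhs, rhs in rows:
    print(A, B, "|", lhs, "    |", rhs)
```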

Postulates and Theorems of Boolean Algebra

Postulate 2:               (a) x + 0 = x                   (b) x . 1 = x
Postulate 3, commutative:  (a) x + y = y + x               (b) xy = yx
Postulate 4, distributive: (a) x(y + z) = xy + xz          (b) x + yz = (x + y)(x + z)
Postulate 5:               (a) x + x' = 1                  (b) x . x' = 0
Theorem 1:                 (a) x + x = x                   (b) x . x = x
Theorem 2:                 (a) x + 1 = 1                   (b) x . 0 = 0
Theorem 3, involution:     (x')' = x
Theorem 4, associative:    (a) x + (y + z) = (x + y) + z   (b) x(yz) = (xy)z
Theorem 5, DeMorgan:       (a) (x + y)' = x'y'             (b) (xy)' = x' + y'
Theorem 6, absorption:     (a) x + xy = x                  (b) x(x + y) = x
Sum of Products
Numerical Representation
Take as an example the truth table of a three-variable function as shown below. Three variables,
each of which can take the values 0 or 1, yield eight possible combinations of values for which the
function may be true. These eight combinations are listed in ascending binary order and the equivalent
decimal value is also shown in the table.

Decimal Value   A   B   C   f
0               0   0   0   1
1               0   0   1   0
2               0   1   0   1
3               0   1   1   1
4               1   0   0   0
5               1   0   1   0
6               1   1   0   0
7               1   1   1   1

The function has a value 1 for the combinations shown, therefore:
f(A, B, C) = A'B'C' + A'BC' + A'BC + ABC ......(1)
This can also be written as:
f(A, B, C) = 000 + 010 + 011 + 111
Note that the summation sign Σ indicates that the terms are "OR'ed" together. The function can be
further reduced to the form:
f(A, B, C) = Σ (000, 010, 011, 111)

It is self-evident that the binary form of a function can be written directly from the truth table.
Note:
(a) the position of the digits must not be changed

(b) the expression must be in standard sum of products form.
It follows from the last expression that the binary form can be replaced by the equivalent
decimal form, namely:
f(A, B, C) = Σ (0, 2, 3, 7) ......(2)
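The reading-off procedure can be sketched in Python. The function below collects the rows where f = 1 and prints both the binary and decimal Σ forms for the truth table above:

```python
# Reading Σ-notation straight off a truth table: collect every input
# combination for which f = 1, in both binary and decimal form.
def sigma_form(f, nvars):
    dec, binary = [], []
    for i in range(2 ** nvars):
        # Most significant bit first, matching the A, B, C column order.
        bits = tuple((i >> (nvars - 1 - k)) & 1 for k in range(nvars))
        if f(*bits):
            dec.append(i)
            binary.append("".join(map(str, bits)))
    return dec, binary

# The three-variable function tabulated above (true on rows 0, 2, 3, 7).
f = lambda A, B, C: (A, B, C) in {(0, 0, 0), (0, 1, 0), (0, 1, 1), (1, 1, 1)}
dec, binary = sigma_form(f, 3)
print("f(A, B, C) = Σ(" + ", ".join(binary) + ")")     # Σ(000, 010, 011, 111)
print("f(A, B, C) = Σ(" + ", ".join(map(str, dec)) + ")")  # Σ(0, 2, 3, 7)
```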

Product of Sums Representation


From the truth table given above the function has the value 0 for the combinations shown,
therefore:
f'(A, B, C) = A'B'C + AB'C' + AB'C + ABC' ......(3)
Writing the inverse of this function:
f(A, B, C) = (A'B'C + AB'C' + AB'C + ABC')'
Applying De Morgan's Theorem we obtain:
f(A, B, C) = (A'B'C)' (AB'C')' (AB'C)' (ABC')'
Applying the second De Morgan's Theorem we obtain:
f(A, B, C) = (A + B + C')(A' + B + C)(A' + B + C')(A' + B' + C) ......(4)
The function is expressed in standard product of sums form.
Thus there are two forms of a function, one is a sum of products form (either standard or
normal) as given by expression (1), the other a product of sums form (either standard or normal) as
given by expression (4). The gate implementation of the two forms is not the same!
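The same derivation can be carried out mechanically: every row where the function is 0 becomes, after the two De Morgan steps, one sum term with all its literals complemented. An illustrative Python sketch for the three-variable function above:

```python
from itertools import product

# Product-of-sums from the zeros of a truth table: each 0-row gives one
# sum term with every literal complemented (De Morgan applied twice).
NAMES = "ABC"

def pos_form(f, nvars):
    terms = []
    for bits in product((0, 1), repeat=nvars):
        if not f(*bits):
            # A 0 in the row gives a true variable, a 1 a complemented one.
            lits = [NAMES[k] if b == 0 else NAMES[k] + "'"
                    for k, b in enumerate(bits)]
            terms.append("(" + " + ".join(lits) + ")")
    return terms

# Same function as before: f = 1 on rows 0, 2, 3, 7, so f = 0 on 1, 4, 5, 6.
f = lambda A, B, C: (A, B, C) in {(0, 0, 0), (0, 1, 0), (0, 1, 1), (1, 1, 1)}
print("f =", "".join(pos_form(f, 3)))
```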

Examples
Consider the function:
f(A, B, C, D) = A'BC'D + AB'CD + ABC'D' + A'B'C'D' + AB'CD' + A'BCD
In binary form: f(A, B, C, D) = Σ (0101, 1011, 1100, 0000, 1010, 0111)
In decimal form: f(A, B, C, D) = Σ (5, 11, 12, 0, 10, 7)


Logic gates
Digital systems are said to be constructed by using logic gates. These gates are the AND, OR,
NOT, NAND, NOR, EXOR and EXNOR gates. The basic operations are described below with the aid
of truth tables.
AND gate

The AND gate is an electronic circuit that gives a high output (1) only if all its inputs are high.
A dot (.) is used to show the AND operation i.e. A.B. Bear in mind that this dot is sometimes omitted
i.e. AB
OR gate

The OR gate is an electronic circuit that gives a high output (1) if one or more of its inputs are
high. A plus (+) is used to show the OR operation.
NOT gate

The NOT gate is an electronic circuit that produces an inverted version of the input at its output.
It is also known as an inverter. If the input variable is A, the inverted output is known as NOT A. This
is also shown as A', or A with a bar over the top, as shown at the outputs. The diagrams below show two
ways that the NAND logic gate can be configured to produce a NOT gate. It can also be done using
NOR logic gates in the same way.

NAND gate


This is a NOT-AND gate which is equal to an AND gate followed by a NOT gate. The outputs
of all NAND gates are high if any of the inputs are low. The symbol is an AND gate with a small circle
on the output. The small circle represents inversion.
NOR gate

This is a NOT-OR gate which is equal to an OR gate followed by a NOT gate. The outputs of
all NOR gates are low if any of the inputs are high.
The symbol is an OR gate with a small circle on the output. The small circle represents
inversion.
EXOR gate

The 'Exclusive-OR' gate is a circuit which will give a high output if either, but not both, of its
two inputs are high. An encircled plus sign (⊕) is used to show the EXOR operation.
EXNOR gate

The 'Exclusive-NOR' gate circuit does the opposite to the EOR gate. It will give a low output if
either, but not both, of its two inputs are high. The symbol is an EXOR gate with a small circle on the
output. The small circle represents inversion.

The NAND and NOR gates are called universal functions since with either one the AND, OR
and NOT functions can be generated.
Note:
A function in sum of products form can be implemented using NAND gates by replacing all
AND and OR gates by NAND gates.
A function in product of sums form can be implemented using NOR gates by replacing all AND
and OR gates by NOR gates.
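This replacement rule can be checked exhaustively for a sample sum-of-products function, here f = AB + CD (an illustrative function, not one taken from the text):

```python
from itertools import product

# All-NAND realisation of a sum-of-products function: replacing the AND
# and OR gates of f = AB + CD with NAND gates leaves f unchanged.
NAND = lambda x, y: 1 - (x & y)

for A, B, C, D in product((0, 1), repeat=4):
    sop = (A & B) | (C & D)                   # AND-OR network
    nand_nand = NAND(NAND(A, B), NAND(C, D))  # NAND-NAND network
    assert sop == nand_nand
print("NAND-NAND network matches AB + CD on all 16 inputs")
```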
Table 1: Logic gate symbols

Table 2 is a summary truth table of the input/output combinations for the NOT gate together
with all possible input/output combinations for the other gate functions. Also note that a truth table with
'n' inputs has 2^n rows. You can compare the outputs of different gates.
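Such a summary table is easy to generate. The sketch below prints the 2^2 = 4 rows for the two-input gates described above:

```python
from itertools import product

# A 2^n-row summary table (n = 2 here) comparing the two-input gates.
gates = {
    "AND":   lambda a, b: a & b,
    "OR":    lambda a, b: a | b,
    "NAND":  lambda a, b: 1 - (a & b),
    "NOR":   lambda a, b: 1 - (a | b),
    "EXOR":  lambda a, b: a ^ b,
    "EXNOR": lambda a, b: 1 - (a ^ b),
}

print("A B | " + " ".join(gates))
for a, b in product((0, 1), repeat=2):
    # Centre each output digit under its gate name.
    print(a, b, "|", " ".join(str(fn(a, b)).center(len(name))
                              for name, fn in gates.items()))
```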


UNIT-2
MINIMIZATION AND DESIGN OF COMBINATIONAL CIRCUITS
Karnaugh Maps
The Karnaugh map provides a simple and straight-forward method of minimising boolean
expressions. With the Karnaugh map Boolean expressions having up to four and even six variables can
be simplified.
So what is a Karnaugh map?
A Karnaugh map provides a pictorial method of grouping together expressions with common factors
and therefore eliminating unwanted variables. The Karnaugh map can also be described as a special
arrangement of a truth table.
The diagram below illustrates the correspondence between the Karnaugh map and the truth table
for the general case of a two variable problem.

The values inside the squares are copied from the output column of the truth table, therefore there is one
square in the map for every row in the truth table. Around the edge of the Karnaugh map are the values
of the two input variables. A is along the top and B is down the left hand side. The diagram below
explains this:

The values around the edge of the map can be thought of as coordinates. So as an example, the square
on the top right hand corner of the map in the above diagram has coordinates A=1 and B=0. This square
corresponds to the row in the truth table where A=1 and B=0 and F=1. Note that the value in the F
column represents a particular function to which the Karnaugh map corresponds.


Examples
Example 1:
Consider the following map. The function plotted is: Z = f(A,B) = AB' + AB

Note that values of the input variables form the rows and columns. That is the logic
values of the variables A and B (with one denoting true form and zero denoting false form) form the
head of the rows and columns respectively.

Bear in mind that the above map is a one-dimensional type which can be used to simplify
an expression in two variables.

There is a two-dimensional map that can be used for up to four variables, and a three-dimensional map for up to six variables.

Using algebraic simplification,

Z = AB' + AB
Z = A(B' + B)
Z = A
Variable B becomes redundant due to Boolean Theorem T9a.
Referring to the map above, the two adjacent 1's are grouped together. Through inspection it can
be seen that variable B has its true and false form within the group. This eliminates variable B leaving
only variable A which only has its true form. The minimised answer therefore is Z = A.
Example 2:
Consider the expression Z = f(A,B) = B' + A'B plotted on the Karnaugh map:

Pairs of 1's are grouped as shown above, and the simplified answer is obtained by using the following
steps:
Note that two groups can be formed for the example given above, bearing in mind that the largest
rectangular clusters that can be made consist of two 1s. Notice that a 1 can belong to more than one
group.
The first group, labelled I, consists of two 1s which correspond to A = 0, B = 0 and A = 1, B = 0. Put in
another way, all squares in this example that correspond to the area of the map where B = 0 contain 1s,
independent of the value of A. So when B = 0 the output is 1. The expression of the output will contain
the term B'.
The group labelled II corresponds to the area of the map where A = 0. The group can therefore
be defined as A'. This implies that when A = 0 the output is 1. The output is therefore 1 whenever B = 0
or A = 0.
Hence the simplified answer is Z = B' + A'.
Karnaugh Maps - Rules of Simplification
The Karnaugh map uses the following rules for the simplification of expressions by grouping
together adjacent cells containing ones

Groups may not include any cell containing a zero.

Groups may be horizontal or vertical, but not diagonal.

Groups must contain 1, 2, 4, 8, or in general 2^n cells. That is, if n = 1 a group will
contain two 1's since 2^1 = 2; if n = 2, a group will contain four 1's since 2^2 = 4.

Each group should be as large as possible.

Each cell containing a one must be in at least one group.

Groups may overlap.

Groups may wrap around the table. The leftmost cell in a row may be grouped with
the rightmost cell and the top cell in a column may be grouped with the bottom cell.

There should be as few groups as possible, as long as this does not contradict any of the
previous rules.

Summary:
1. No zeros allowed.
2. No diagonals.
3. Only power of 2 number of cells in each group.
4. Groups should be as large as possible.
5. Every one must be in at least one group.
6. Overlapping allowed.
7. Wrap around allowed.
8. Fewest number of groups possible.

4-variable Karnaugh maps


Knowing how to generate Gray code should allow us to build larger maps. Actually, all we need
to do is look at the left to right sequence across the top of the 3-variable map, and copy it down the left
side of the 4-variable map. See below.

The following four variable Karnaugh maps illustrate reduction of Boolean expressions too
tedious for Boolean algebra. Reductions could be done with Boolean algebra. However, the Karnaugh
map is faster and easier, especially if there are many logic reductions to do.

The above Boolean expression has seven product terms. They are mapped top to bottom and left
to right on the K-map above. For example, the first P-term A'B'CD is first row, 3rd cell, corresponding
to map location A=0, B=0, C=1, D=1. The other product terms are placed in a similar manner.
Encircling the largest groups possible, two groups of four are shown above. The dashed horizontal
group corresponds to the simplified product term AB. The vertical group corresponds to Boolean CD.
Since there are two groups, there will be two product terms in the Sum-Of-Products result of
Out=AB+CD.

Fold up the corners of the map below like it is a napkin to make the four cells physically
adjacent.

The four cells above are a group of four because they all have the Boolean variables B' and D' in
common. In other words, B=0 for the four cells, and D=0 for the four cells. The other variables (A, C)
are 0 in some cases, 1 in other cases with respect to the four corner cells. Thus, these variables (A, C)
are not involved with this group of four. This single group comes out of the map as one product term for
the simplified result: Out=B'D'
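The napkin-fold grouping can be confirmed by enumeration. The sketch below checks that B'D' is true on exactly the four corner cells of the 4-variable map:

```python
from itertools import product

# The four corner cells of a 4-variable map all have B = 0 and D = 0,
# so the group reduces to the single product term B'D'.
corners = [(A, 0, C, 0) for A in (0, 1) for C in (0, 1)]

# B'D' is 1 exactly on those four cells and nowhere else on the map.
cover = [(A, B, C, D) for A, B, C, D in product((0, 1), repeat=4)
         if (1 - B) & (1 - D)]
assert sorted(cover) == sorted(corners)
print("B'D' covers exactly the four corner cells")
```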

For the K-map below, roll the top and bottom edges into a cylinder forming eight adjacent cells.


The above group of eight has one Boolean variable in common: B=0. Therefore, the one group
of eight is covered by one p-term: B'. The original eight term Boolean expression simplifies to Out=B'

The Boolean expression below has nine p-terms, three of which have three Booleans instead of
four. The difference is that while four Boolean variable product terms cover one cell, the three Boolean
p-terms cover a pair of cells each.

The six product terms of four Boolean variables map in the usual manner above as single cells.
The three Boolean variable terms (three each) map as cell pairs, which is shown above. Note that we are
mapping p-terms into the K-map, not pulling them out at this point.

For the simplification, we form two groups of eight. Cells in the corners are shared with both
groups. This is fine. In fact, this leads to a better solution than forming a group of eight and a group of
four without sharing any cells. Final Solution is Out=B'+D'

Below we map the unsimplified Boolean expression to the Karnaugh map.

Above, three of the cells form into groups of two cells. A fourth cell cannot be combined with
anything, which often happens in "real world" problems. In this case, the Boolean p-term ABCD is
unchanged in the simplification process. Result: Out= B'C'D'+A'B'D'+ABCD
Often times there is more than one minimum cost solution to a simplification problem. Such is
the case illustrated below.


Both results above have four product terms of three Boolean variables each. Both are equally
valid minimal cost solutions. The difference in the final solution is due to how the cells are grouped as
shown above. A minimal cost solution is a valid logic design with the minimum number of gates with
the minimum number of inputs.

Below we map the unsimplified Boolean equation as usual and form a group of four as a first
simplification step. It may not be obvious how to pick up the remaining cells.

Pick up three more cells in a group of four, center above. There are still two cells remaining. The
minimal cost method to pick up those is to group them with neighboring cells as groups of four, as at
above right.
On a cautionary note, do not attempt to form groups of three. Groupings must be powers of 2,
that is, 1, 2, 4, 8 ...

Below we have another example of two possible minimal cost solutions. Start by forming a
couple of groups of four after mapping the cells.


The two solutions depend on whether the single remaining cell is grouped with the first or the
second group of four as a group of two cells. That cell comes out as either ABC' or ABD, your
choice. Either way, this cell is covered by either Boolean product term. Final results are shown above.

Below we have an example of a simplification using the Karnaugh map at left or Boolean
algebra at right. Plot C' on the map as the area of all cells covered by address C=0, the 8 cells on the
left of the map. Then, plot the single ABCD cell. That single cell forms a group of 2 cells as shown,
which simplifies to P-term ABD, for an end result of Out = C' + ABD.

5 -variable Karnaugh map
Larger Karnaugh maps reduce larger logic designs. How large is large enough? That depends on
the number of inputs, fan-ins, to the logic circuit under consideration. One of the large programmable
logic companies has an answer.
Altera's own data, extracted from its library of customer designs, supports the value of
heterogeneity. By examining logic cones, mapping them onto LUT-based nodes and sorting them by the
number of inputs that would be best at each node, Altera found that the distribution of fan-ins was
nearly flat between two and six inputs, with a nice peak at five.
The answer is no more than six inputs for almost all designs, and five inputs for the average logic
design. The five variable Karnaugh map follows.

The older version of the five variable K-map, a Gray Code map or reflection map, is shown
above. The top (and side for a 6-variable map) of the map is numbered in full Gray code. The Gray code
reflects about the middle of the code. This style map is found in older texts. The newer preferred style is
below.


The overlay version of the Karnaugh map, shown above, is simply two (four for a 6-variable
map) identical maps except for the most significant bit of the 3-bit address across the top. If we look at
the top of the map, we will see that the numbering is different from the previous Gray code map. If we
ignore the most significant digit of the 3-digit numbers, the sequence 00, 01, 11, 10 is at the heading of
both sub maps of the overlay map. The sequence of eight 3-digit numbers is not Gray code, though the
four-number sequence of the least significant two bits is.

Let's put our 5-variable Karnaugh Map to use. Design a circuit which has a 5-bit binary input (A,
B, C, D, E), with A being the MSB (Most Significant Bit). It must produce an output logic High for any
prime number detected in the input data.


We show the solution above on the older Gray code (reflection) map for reference. The prime
numbers are (1,2,3,5,7,11,13,17,19,23,29,31). Plot a 1 in each corresponding cell. Then, proceed with
grouping of the cells. Finish by writing the simplified result. Note that the 4-cell group A'B'E consists of
two pairs of cells on both sides of the mirror line. The same is true of the 2-cell group AB'DE. It is a
group of 2-cells by being reflected about the mirror line. When using this version of the K-map, look for
mirror images in the other half of the map.

Out = A'B'E + B'C'E + A'C'DE + A'CD'E + ABCE + AB'DE + A'B'C'D
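The prime-detector result can be verified by brute force: evaluate the seven product terms for all 32 inputs and compare against the listed set (which, following the text, counts 1 as prime):

```python
# Exhaustive check of the prime-detector result above. A is the MSB and E
# the LSB of the 5-bit input; the course's list treats 1 as prime.
PRIMES = {1, 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31}

def out(n):
    A, B, C, D, E = [(n >> k) & 1 for k in (4, 3, 2, 1, 0)]
    a, b, c, d = 1 - A, 1 - B, 1 - C, 1 - D   # complemented variables
    # Out = A'B'E + B'C'E + A'C'DE + A'CD'E + ABCE + AB'DE + A'B'C'D
    return ((a & b & E) | (b & c & E) | (a & c & D & E) | (a & C & d & E)
            | (A & B & C & E) | (A & b & D & E) | (a & b & c & D))

detected = {n for n in range(32) if out(n)}
assert detected == PRIMES
print("Out matches the listed primes for all 32 inputs")
```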

Below we show the more common version of the 5-variable map, the overlay map.

If we compare the patterns in the two maps, some of the cells in the right half of the map are
moved around since the addressing across the top of the map is different. We also need to take a
different approach at spotting commonality between the two halves of the map. Overlay one half of the
map atop the other half. Any overlap from the top map to the lower map is a potential group. The figure
below shows that group AB'DE is composed of two stacked cells. Group A'B'E consists of two stacked
pairs of cells.

For the A'B'E group of 4-cells, ABCDE = 00xx1 for the group. That is, A, B, E are the same (0, 0, 1
respectively) for the group, and CD = xx, that is, it varies; there is no commonality in CD for the group of
4-cells. Since ABCDE = 00xx1, the group of 4-cells is covered by A'B'XXE = A'B'E.

The above 5-variable overlay map is shown stacked.

Σ (sum) and Π (product) notation


For reference, this section introduces the terminology used in some texts to describe the
minterms and maxterms assigned to a Karnaugh map. Otherwise, there is no new material here.
Σ (sigma) indicates sum and lower case "m" indicates minterms. Σm indicates sum of minterms.
The following example is revisited to illustrate our point. Instead of a Boolean equation description of
unsimplified logic, we list the minterms.

f(A,B,C,D) = Σ m(1, 2, 3, 4, 5, 7, 8, 9, 11, 12, 13, 15)

or

f(A,B,C,D) = Σ(m1, m2, m3, m4, m5, m7, m8, m9, m11, m12, m13, m15)

The numbers indicate cell location, or address, within a Karnaugh map as shown below right.
This is certainly a compact means of describing a list of minterms or cells in a K-map.
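The correspondence between a cell number and its minterm is mechanical: write the address in binary, then read a 1 as a true variable and a 0 as a complemented one. A small sketch for four variables:

```python
# Converting a minterm number to its product term: write the cell address
# in binary (MSB first), then read 1 as a true variable and 0 as a
# complemented one.
NAMES = "ABCD"

def minterm(n, nvars=4):
    bits = [(n >> (nvars - 1 - k)) & 1 for k in range(nvars)]
    return "".join(v if b else v + "'" for v, b in zip(NAMES, bits))

print(minterm(1))    # m1  -> A'B'C'D
print(minterm(13))   # m13 -> ABC'D
```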

The Sum-Of-Products solution is not affected by the new terminology. The minterms, 1s, in the
map have been grouped as usual and a Sum-Of-Products solution written.
Below, we show the terminology for describing a list of maxterms. Product is indicated by the
Greek letter Π (pi), and upper case "M" indicates maxterms. ΠM indicates product of maxterms. The same
example illustrates our point. The Boolean equation description of unsimplified logic is replaced by a
list of maxterms.

f(A,B,C,D) = Π M(2, 6, 8, 9, 10, 11, 14)

or

f(A,B,C,D) = Π(M2, M6, M8, M9, M10, M11, M14)

Once again, the numbers indicate K-map cell address locations. For maxterms this is the location
of 0s, as shown below. A Product-OF-Sums solution is completed in the usual manner.

Minterm vs maxterm solution


So far we have been finding Sum-Of-Product (SOP) solutions to logic reduction problems. For
each of these SOP solutions, there is also a Product-Of-Sums solution (POS), which could be more
useful, depending on the application. Before working a Product-Of-Sums solution, we need to introduce
some new terminology. The procedure below for mapping product terms is not new to this chapter. We
just want to establish a formal procedure for minterms for comparison to the new procedure for
maxterms.


A minterm is a Boolean expression resulting in 1 for the output of a single cell, and 0s for all
other cells in a Karnaugh map, or truth table. If a minterm has a single 1 and the remaining cells as 0s, it
would appear to cover a minimum area of 1s. The illustration above left shows the minterm ABC, a
single product term, as a single 1 in a map that is otherwise 0s. We have not shown the 0s in our
Karnaugh maps up to this point, as it is customary to omit them unless specifically needed. Another
minterm A'BC' is shown above right. The point to review is that the address of the cell corresponds
directly to the minterm being mapped. That is, the cell 111 corresponds to the minterm ABC above left.
Above right we see that the minterm A'BC' corresponds directly to the cell 010. A Boolean expression
or map may have multiple minterms.
Referring to the above figure, let's summarize the procedure for placing a minterm in a K-map:

Identify the minterm (product term) to be mapped.
Write the corresponding binary numeric value.
Use the binary value as an address to place a 1 in the K-map.
Repeat steps for other minterms (P-terms within a Sum-Of-Products).

A Boolean expression will more often than not consist of multiple minterms corresponding to
multiple cells in a Karnaugh map as shown above. The multiple minterms in this map are the individual
minterms which we examined in the previous figure above. The point we review for reference is that the

1s come out of the K-map as a binary cell address which converts directly to one or more product terms.
By directly we mean that a 0 corresponds to a complemented variable, and a 1 corresponds to a true
variable. Example: 010 converts directly to A'BC'. There was no reduction in this example. Though, we
do have a Sum-Of-Products result from the minterms.
Referring to the above figure, let's summarize the procedure for writing the Sum-Of-Products
reduced Boolean equation from a K-map:

Form largest groups of 1s possible covering all minterms. Groups must be a power of 2.
Write binary numeric value for groups.
Convert binary value to a product term.
Repeat steps for other groups. Each group yields a p-term within a Sum-Of-Products.
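Steps two and three of the procedure can be sketched as a small routine: the group's binary value, with X marking a variable that takes both values inside the group, converts directly to a product term (three variables are assumed here for illustration):

```python
# Writing the product term for a group of 1s: variables that take both
# values inside the group (marked 'X') drop out; the remaining bits map
# 1 -> true variable and 0 -> complemented variable.
NAMES = "ABC"

def group_term(pattern):
    # pattern is a string over '0', '1', 'X', e.g. "0X1".
    out = []
    for v, b in zip(NAMES, pattern):
        if b == "1":
            out.append(v)
        elif b == "0":
            out.append(v + "'")   # 'X' contributes nothing
    return "".join(out)

print(group_term("010"))   # single cell: A'BC'
print(group_term("0X1"))   # pair of cells: A'C (B eliminated)
```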

Nothing new so far, a formal procedure has been written down for dealing with minterms. This
serves as a pattern for dealing with maxterms.
Next we attack the Boolean function which is 0 for a single cell and 1s for all others.

A maxterm is a Boolean expression resulting in a 0 for the output of a single cell expression, and
1s for all other cells in the Karnaugh map, or truth table. The illustration above left shows the maxterm
(A+B+C), a single sum term, as a single 0 in a map that is otherwise 1s. If a maxterm has a single 0 and
the remaining cells as 1s, it would appear to cover a maximum area of 1s.
There are some differences now that we are dealing with something new, maxterms. The
maxterm is a 0, not a 1 in the Karnaugh map. A maxterm is a sum term, (A+B+C) in our example, not a
product term.
It also looks strange that (A+B+C) is mapped into the cell 000. For the equation
Out=(A+B+C)=0, all three variables (A, B, C) must individually be equal to 0. Only (0+0+0)=0 will
equal 0. Thus we place our sole 0 for maxterm (A+B+C) in cell A,B,C=000 in the K-map, where the
inputs are all 0. This is the only case which will give us a 0 for our maxterm. All other cells contain 1s
because any input values other than (0,0,0) for (A+B+C) yield 1s upon evaluation.

Referring to the above figure, the procedure for placing a maxterm in the K-map is:

Identify the Sum term to be mapped.


Write corresponding binary numeric value.
Form the complement
Use the complement as an address to place a 0 in the K-map
Repeat for other maxterms (Sum terms within Product-of-Sums expression).

Another maxterm A'+B'+C' is shown above. Numeric 000 corresponds to A'+B'+C'. The
complement is 111. Place a 0 for maxterm (A'+B'+C') in this cell (1,1,1) of the K-map as shown above.
Why should (A'+B'+C') cause a 0 to be in cell 111? When A'+B'+C' is (1'+1'+1'), all 1s in,
which is (0+0+0) after taking complements, we have the only condition that will give us a 0. All the 1s
are complemented to all 0s, which is 0 when ORed.

A Boolean Product-Of-Sums expression or map may have multiple maxterms as shown above.
Maxterm (A+B+C) yields numeric 111 which complements to 000, placing a 0 in cell (0,0,0). Maxterm
(A+B+C') yields numeric 110 which complements to 001, placing a 0 in cell (0,0,1).
Now that we have the K-map set up, what we are really interested in is showing how to write a
Product-Of-Sums reduction. Form the 0s into groups. That would be a group of two below. Write the
binary value corresponding to the sum-term, which is (0,0,X). Both A and B are 0 for the group. But C
is both 0 and 1, so we write an X as a place holder for C. Form the complement (1,1,X). Write the
sum-term (A+B), discarding the C and the X which held its place. In general, expect to have more sum-terms
multiplied together in the Product-Of-Sums result. Though, we have a simple example here.

Let's summarize the procedure for writing the Product-Of-Sums Boolean reduction for a K-map:

Form largest groups of 0s possible, covering all maxterms. Groups must be a power of 2.
Write binary numeric value for group.
Complement binary numeric value for group.
Convert complement value to a sum-term.
Repeat steps for other groups. Each group yields a sum-term within a Product-Of-Sums

result.
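The complement-then-convert steps can likewise be sketched as a routine; for the worked group (0,0,X) above it reproduces the sum-term (A+B) (three variables assumed for illustration):

```python
# Writing the sum term for a group of 0s: take the group's binary address,
# complement it, then read 1 -> true variable, 0 -> complemented variable
# (equivalently, a 0 in the original address gives a true variable).
NAMES = "ABC"

def sum_term(pattern):
    # pattern is the group's address over '0', '1', 'X', e.g. "00X".
    lits = []
    for v, b in zip(NAMES, pattern):
        if b == "0":
            lits.append(v)        # complements to 1 -> true variable
        elif b == "1":
            lits.append(v + "'")  # complements to 0 -> complemented variable
    return "(" + " + ".join(lits) + ")"

print(sum_term("00X"))   # the worked group above: (A + B)
```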

Example:

Simplify the Product-Of-Sums Boolean expression below, providing a result in POS form.


Solution:

Transfer the seven maxterms to the map below as 0s. Be sure to complement the input variables
in finding the proper cell location.

We map the 0s as they appear left to right, top to bottom on the map above. We locate the last
three maxterms with leader lines.
Once the cells are in place above, form groups of cells as shown below. Larger groups will give
a sum-term with fewer inputs. Fewer groups will yield fewer sum-terms in the result.

We have three groups, so we expect to have three sum-terms in our POS result above. The group
of 4-cells yields a 2-variable sum-term. The two groups of 2-cells give us two 3-variable sum-terms.
Details are shown for how we arrived at the sum-terms above. For a group, write the binary group input
address, then complement it, converting that to the Boolean sum-term. The final result is the product of
the three sums.

Example:

Simplify the Product-Of-Sums Boolean expression below, providing a result in SOP form.

Solution:

This looks like a repeat of the last problem. It is, except that we ask for a Sum-Of-Products
solution instead of the Product-Of-Sums which we just finished. Map the maxterm 0s from the
Product-Of-Sums given as in the previous problem, below left.

Then fill in the implied 1s in the remaining cells of the map above right.


Form groups of 1s to cover all 1s. Then write the Sum-Of-Products simplified result as in the
previous section of this chapter. This is identical to a previous problem.

Don't care cells in the Karnaugh map


Up to this point we have considered logic reduction problems where the input conditions were
completely specified. That is, a 3-variable truth table or Karnaugh map had 2^n = 2^3 = 8 entries, a full
table or map. It is not always necessary to fill in the complete truth table for some real-world problems.
We may have a choice to not fill in the complete table.
For example, when dealing with BCD (Binary Coded Decimal) numbers encoded as four bits,
we may not care about any codes above the BCD range of (0, 1, 2...9). The 4-bit binary codes for the
hexadecimal numbers (Ah, Bh, Ch, Dh, Eh, Fh) are not valid BCD codes. Thus, we do not have to fill in
those codes at the end of a truth table, or K-map, if we do not care to. We would not normally care to

fill in those codes because those codes (1010, 1011, 1100, 1101, 1110, 1111) will never exist as long as
we are dealing only with BCD encoded numbers. These six invalid codes are don't cares as far as we
are concerned. That is, we do not care what output our logic circuit produces for these don't cares.
Don't cares in a Karnaugh map, or truth table, may be either 1s or 0s, as long as we don't care
what the output is for an input condition we never expect to see. We plot these cells with an asterisk, *,
among the normal 1s and 0s. When forming groups of cells, treat the don't care cell as either a 1 or a 0,
or ignore the don't cares. This is helpful if it allows us to form a larger group than would otherwise be
possible without the don't cares. There is no requirement to group all or any of the don't cares. Only use
them in a group if it simplifies the logic.

Above is an example of a logic function where the desired output is 1 for input ABC = 101 over
the range from 000 to 101. We do not care what the output is for the other possible inputs (110, 111).
Map those two as don't cares. We show two solutions. The solution on the left, Out = AB'C, is the more
complex solution since we did not use the don't care cells. The solution in the middle, Out = AC, is less
complex because we grouped a don't care cell with the single 1 to form a group of two. The third
solution, a Product-Of-Sums on the right, results from grouping a don't care with three zeros forming a
group of four 0s. This is the same, less complex, Out=AC. We have illustrated that the don't care cells
may be used as either 1s or 0s, whichever is useful.
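The equivalence of the two solutions can be checked by brute force. The Python sketch below (function names are our own, for illustration) confirms that Out = AC and Out = AB'C agree on every fully specified input code 000 through 101:

```python
def out_simplified(a, b, c):
    # Out = AC: the group of two formed with a don't care cell
    return a & c

def out_unsimplified(a, b, c):
    # Out = AB'C: the single-cell group that ignores the don't cares
    return a & (1 - b) & c
```

Both functions return 1 only for ABC = 101 within the specified range; they disagree on 111, which is harmless because that code never occurs.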

The electronics class of Lightning State College has been asked to build the lamp logic for a
stationary bicycle exhibit at the local science museum. As a rider increases his pedaling speed, lamps
will light on a bar graph display. No lamps will light for no motion. As speed increases, the lower lamp,
L1 lights, then L1 and L2, then, L1, L2, and L3, until all lamps light at the highest speed. Once all the
lamps illuminate, no further increase in speed will have any effect on the display.
A small DC generator coupled to the bicycle tire outputs a voltage proportional to speed. It
drives a tachometer board which limits the voltage at the high end of speed where all lamps light. No
further increase in speed can increase the voltage beyond this level. This is crucial because the
downstream A to D (Analog to Digital) converter puts out a 3-bit code, ABC, 2^3 = 8 codes, but we only
have five lamps. A is the most significant bit, C the least significant bit.
The lamp logic needs to respond to the six codes out of the A to D. For ABC=000, no motion,
no lamps light. For the five codes (001 to 101) lamps L1, L1&L2, L1&L2&L3, up to all lamps will
light, as speed, voltage, and the A to D code (ABC) increases. We do not care about the response to
input codes (110, 111) because these codes will never come out of the A to D due to the limiting in the
tachometer block. We need to design five logic circuits to drive the five lamps.

Since, none of the lamps light for ABC=000 out of the A to D, enter a 0 in all K-maps for cell
ABC=000. Since we don't care about the never to be encountered codes (110, 111), enter asterisks into
those two cells in all five K-maps.
Lamp L5 will only light for code ABC=101. Enter a 1 in that cell and five 0s into the remaining
empty cells of L5 K-map.
L4 will light initially for code ABC=100, and will remain illuminated for any code greater,
ABC=101, because all lamps below L5 will light when L5 lights. Enter 1s into cells 100 and 101 of the
L4 map so that it will light for those codes. Four 0s fill the remaining L4 cells.

L3 will initially light for code ABC=011. It will also light whenever L5 and L4 illuminate. Enter
three 1s into cells 011, 100, 101 for L3 map. Fill three 0s into the remaining L3 cells.
L2 lights for ABC=010 and codes greater. Fill 1s into cells 010, 011, 100, 101, and two 0s in the
remaining cells.
The only time L1 is not lighted is for no motion. There is already a 0 in cell ABC=000. All the
other five cells receive 1s.
Group the 1's as shown above, using don't cares whenever a larger group results. The L1 map
shows three product terms, corresponding to three groups of 4-cells. We used both don't cares in two of
the groups and one don't care on the third group. The don't cares allowed us to form groups of four.
In a similar manner, the L2 and L4 maps both produce groups of 4-cells with the aid of the don't
care cells. The L4 reduction is striking in that the L4 lamp is controlled by the most significant bit from
the A to D converter: L4 = A. No logic gates are required for lamp L4. In the L3 and L5 maps, single
cells form groups of two with don't care cells. In all five maps, the reduced Boolean equation is less
complex than without the don't cares.
Gate universality
NAND and NOR gates possess a special property: they are universal. That is, given enough
gates, either type of gate is able to mimic the operation of any other gate type. For example, it is
possible to build a circuit exhibiting the OR function using three interconnected NAND gates. The
ability for a single gate type to be able to mimic any other gate type is one enjoyed only by the NAND
and the NOR. In fact, digital control systems have been designed around nothing but either NAND or
NOR gates, all the necessary logic functions being derived from collections of interconnected NANDs
or NORs.
As proof of this property, this section will be divided into subsections showing how all the basic
gate types may be formed using only NANDs or only NORs.
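As a sanity check of the universality claim, the following Python sketch models each basic gate purely in terms of a two-input NAND. The function names are ours, chosen for illustration:

```python
def nand(a, b):
    """2-input NAND: the only primitive used below."""
    return 1 - (a & b)

def not_(a):
    return nand(a, a)                 # both inputs tied together

def and_(a, b):
    return not_(nand(a, b))           # NAND followed by a NAND inverter

def or_(a, b):
    return nand(not_(a), not_(b))     # DeMorgan: invert both inputs, then NAND

def nor_(a, b):
    return not_(or_(a, b))            # four NANDs in total
```

Exhaustively testing all four input combinations shows each derived gate matches its truth table, mirroring the constructions in the subsections that follow.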


Constructing the NOT function

As you can see, there are two ways to use a NAND gate as an inverter, and two ways to use a
NOR gate as an inverter. Either method works, although connecting TTL inputs together increases the
amount of current loading on the driving gate. For CMOS gates, common input terminals decrease the
switching speed of the gate due to increased input capacitance.
Inverters are the fundamental tool for transforming one type of logic function into another, and
so there will be many inverters shown in the illustrations to follow. In those diagrams, I will only show
one method of inversion, and that will be where the unused NAND gate input is connected to +V (either
Vcc or Vdd, depending on whether the circuit is TTL or CMOS) and where the unused input for the NOR
gate is connected to ground. Bear in mind that the other inversion method (connecting both NAND or
NOR inputs together) works just as well from a logical (1's and 0's) point of view, but is undesirable
from the practical perspectives of increased current loading for TTL and increased input capacitance for
CMOS.

Constructing the "buffer" function


Being that it is quite easy to employ NAND and NOR gates to perform the inverter (NOT)
function, it stands to reason that two such stages of gates will result in a buffer function, where the
output is the same logical state as the input.


Constructing the AND function


To make the AND function from NAND gates, all that is needed is an inverter (NOT) stage on
the output of a NAND gate. This extra inversion "cancels out" the first N in NAND, leaving the AND
function. It takes a little more work to wrestle the same functionality out of NOR gates, but it can be
done by inverting ("NOT") all of the inputs to a NOR gate.


Constructing the NAND function


It would be pointless to show you how to "construct" the NAND function using a NAND gate,
since there is nothing to do. To make a NOR gate perform the NAND function, we must invert all
inputs to the NOR gate as well as the NOR gate's output. For a two-input gate, this requires three more
NOR gates connected as inverters.


Constructing the OR function


Inverting the output of a NOR gate (with another NOR gate connected as an inverter) results in
the OR function. The NAND gate, on the other hand, requires inversion of all inputs to mimic the OR
function, just as we needed to invert all inputs of a NOR gate to obtain the AND function. Remember
that inversion of all inputs to a gate results in changing that gate's essential function from AND to OR
(or vice versa), plus an inverted output. Thus, with all inputs inverted, a NAND behaves as an OR, a
NOR behaves as an AND, an AND behaves as a NOR, and an OR behaves as a NAND. In Boolean
algebra, this transformation is referred to as DeMorgan's Theorem, covered in more detail in a later
chapter of this book.


Constructing the NOR function


Much the same as the procedure for making a NOR gate behave as a NAND, we must invert all
inputs and the output to make a NAND gate function as a NOR.


REVIEW:
NAND and NOR gates are universal: that is, they have the ability to mimic any type of
gate, if interconnected in sufficient numbers.

The Exclusive-OR function


One element conspicuously missing from the set of Boolean operations is that of Exclusive-OR.
Whereas the OR function is equivalent to Boolean addition, the AND function to Boolean
multiplication, and the NOT function (inverter) to Boolean complementation, there is no direct Boolean
equivalent for Exclusive-OR. This hasn't stopped people from developing a symbol to represent it,
though:

This symbol is seldom used in Boolean expressions because the identities, laws, and rules of
simplification involving addition, multiplication, and complementation do not apply to it. However,
there is a way to represent the Exclusive-OR function in terms of OR and AND, as has been shown in
previous chapters: AB' + A'B


As a Boolean equivalency, this rule may be helpful in simplifying some Boolean expressions.
Any expression following the AB' + A'B form (two AND gates and an OR gate) may be replaced by a
single Exclusive-OR gate.
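The AB' + A'B equivalency can be verified exhaustively; here is a small Python sketch (the function name is an illustrative assumption):

```python
def xor_from_and_or(a, b):
    # AB' + A'B: two AND gates feeding an OR gate
    return (a & (1 - b)) | ((1 - a) & b)
```

Checking all four input combinations confirms it matches the Exclusive-OR function.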

COMBINATIONAL LOGIC
Unlike Sequential Logic Circuits, whose outputs depend on both their present inputs and
their previous output state (giving them a form of memory), the outputs of Combinational Logic
Circuits are determined only by the logical function of their current input state, logic "0" or logic "1", at
any given instant in time. They have no feedback, so any change to the signals applied to their
inputs immediately has an effect at the output. In other words, in a Combinational Logic Circuit
the output depends at all times on the combination of its inputs; if one input condition
changes state, so does the output, as combinational circuits have no "memory", "timing" or "feedback
loops".

Combinational Logic


Combinational Logic Circuits are made up from basic logic NAND, NOR or NOT gates that are
"combined" or connected together to produce more complicated switching circuits. These logic gates
are the building blocks of combinational logic circuits. An example of a combinational circuit is a
decoder, which converts the binary code data present at its input into a number of different output lines,
one at a time producing an equivalent decimal code at its output.
Combinational logic circuits can be very simple or very complicated and any combinational
circuit can be implemented with only NAND and NOR gates as these are classed as "universal" gates.
The three main ways of specifying the function of a combinational logic circuit are:

Truth Table: provides a concise list that shows the output values in tabular form for
each possible combination of input variables.

Boolean Algebra: forms an output expression for each input variable that
represents a logic "1".

Logic Diagram: shows the wiring and connections of each individual logic gate
that implements the circuit.
and all three are shown below.

As combinational logic circuits are made up from individual logic gates only, they can also be
considered as "decision making circuits" and combinational logic is about combining logic gates
together to process two or more signals in order to produce at least one output signal according to the
logical function of each logic gate. Common combinational circuits made up from individual logic gates
that carry out a desired application include Multiplexers, De-multiplexers, Encoders, Decoders, Full
and Half Adders etc.

Classification of Combinational Logic

The Half Adder Circuit


1-bit Adder with Carry-Out
Symbol

Truth Table

A  B | SUM  CARRY
0  0 |  0     0
0  1 |  1     0
1  0 |  1     0
1  1 |  0     1

Boolean Expression: Sum = A ⊕ B    Carry = A · B

From the truth table we can see that the SUM (S) output is the result of the Ex-OR gate and the
Carry-out (Cout) is the result of the AND gate. One major disadvantage of the Half Adder circuit when
used as a binary adder, is that there is no provision for a "Carry-in" from the previous circuit when
adding together multiple data bits. For example, suppose we want to add together two 8-bit bytes of
data, any resulting carry bit would need to be able to "ripple" or move across the bit patterns starting
from the least significant bit (LSB). The most complicated operation the half adder can do is "1 + 1" but

as the half adder has no carry input the resultant added value would be incorrect. One simple way to
overcome this problem is to use a Full Adder type binary adder circuit.
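The half adder's behaviour can be modelled in two lines of Python (a behavioural sketch, not a gate-level model; the function name is ours):

```python
def half_adder(a, b):
    """1-bit half adder: Sum = A XOR B, Carry = A AND B."""
    return a ^ b, a & b   # (SUM, CARRY)
```

For example, half_adder(1, 1) returns (0, 1): the "1 + 1" case produces sum 0 with a carry-out of 1.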

The Full Adder Circuit


The main difference between the Full Adder and the previously seen Half Adder is that a full
adder has three inputs: the same two single-bit binary inputs A and B as before, plus an additional Carry-In (C-in) input as shown below.

Full Adder with Carry-In


Symbol

Truth Table

A  B  C-in | Sum  C-out
0  0   0   |  0     0
0  0   1   |  1     0
0  1   0   |  1     0
0  1   1   |  0     1
1  0   0   |  1     0
1  0   1   |  0     1
1  1   0   |  0     1
1  1   1   |  1     1

Boolean Expression: Sum = A ⊕ B ⊕ C-in    C-out = A·B + C-in·(A ⊕ B)


The 1-bit Full Adder circuit above is basically two half adders connected together and consists
of two Ex-OR gates, two AND gates and an OR gate, five logic gates in total. The truth table for the full
adder includes an additional column to take into account the Carry-in input as well as the summed
output and carry-output. 4-bit full adder circuits are available as standard IC packages in the form of the
TTL 74LS83 or the 74LS283 which can add together two 4-bit binary numbers and generate a SUM and
a CARRY output. But what if we want to add together two n-bit numbers? Then n 1-bit full adders
need to be connected together to produce what is known as the Ripple Carry Adder.
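The two-half-adder construction can be sketched directly in Python (behavioural model; function names are ours):

```python
def half_adder(a, b):
    """Sum = A XOR B, Carry = A AND B."""
    return a ^ b, a & b

def full_adder(a, b, c_in):
    """Full adder built from two half adders plus an OR gate."""
    s1, carry1 = half_adder(a, b)          # first half adder: A + B
    total, carry2 = half_adder(s1, c_in)   # second: intermediate sum + carry-in
    return total, carry1 | carry2          # either stage may generate the carry
```

Summing the outputs as carry·2 + sum reproduces the arithmetic value A + B + C-in for all eight input rows of the truth table.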

The 4-bit Binary Adder


The Ripple Carry Binary Adder is simply n full adders cascaded together, with each full adder
representing a single weighted column in the long addition, and the carry signals producing a "ripple"
effect through the binary adder from right to left. For example, suppose we want to "add" together two
4-bit numbers, the two outputs of the first full adder will provide the first place digit sum of the addition
plus a carry-out bit that acts as the carry-in digit of the next binary adder. The second binary adder in
the chain also produces a summed output (the 2nd bit) plus another carry-out bit and we can keep
adding more full adders to the combination to add larger numbers, linking the carry bit output from the
first full binary adder to the next full adder, and so forth. An example of a 4-bit adder is given below.


A 4-bit Binary Adder

One main disadvantage of "cascading" together 1-bit binary adders to add large binary
numbers is that if inputs A and B change, the sum at the output will not be valid until any carry-input has
"rippled" through every full adder in the chain. Consequently, there will be a finite delay before the
output of an adder responds to a change in its inputs, and the accumulated delay, especially in large
multi-bit binary adders, can become prohibitively large. This delay is called propagation delay. Also,
"overflow" occurs when an n-bit adder adds two numbers together whose sum is greater than or equal to
2^n.
One solution is to generate the carry-input signals directly from the A and B inputs rather than
using the ripple arrangement above. This then produces another type of binary adder circuit called a
Carry Look-Ahead Binary Adder, where the speed of the parallel adder can be greatly improved using
carry-look-ahead logic.
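The ripple behaviour described above can be modelled behaviourally in Python (bit lists are given LSB first; the function name is an illustrative assumption):

```python
def ripple_adder(a_bits, b_bits, c_in=0):
    """Ripple-carry add of two equal-length bit lists, LSB first.

    Returns (sum_bits, carry_out). Each loop pass is one full adder;
    the carry variable 'ripples' from stage to stage.
    """
    carry = c_in
    out = []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)                 # full-adder sum
        carry = (a & b) | (carry & (a ^ b))       # full-adder carry-out
    return out, carry
```

For example, adding 7 ([1,1,1,0]) and 3 ([1,1,0,0]) gives [0,1,0,1] (decimal 10) with carry-out 0.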

half subtractor
The half-subtractor is a combinational circuit which is used to perform subtraction of two bits. It
has two inputs, X (minuend) and Y (subtrahend) and two outputs D (difference) and B (borrow).

Logic diagram for a half subtractor


The truth table for the half subtractor is given below


X  Y | D  B
0  0 | 0  0
0  1 | 1  1
1  0 | 1  0
1  1 | 0  0

From the above table one can draw the Karnaugh maps for "difference" and "borrow".
So, the logic equations are: D = X ⊕ Y and B = X′ · Y.
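The half subtractor can be modelled in one line of Python each for difference and borrow (a behavioural sketch; the function name is ours):

```python
def half_subtractor(x, y):
    """Difference D = X XOR Y; Borrow B = (NOT X) AND Y."""
    return x ^ y, (1 - x) & y   # (D, B)
```

For example, half_subtractor(0, 1) returns (1, 1): 0 - 1 requires a borrow.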

Full subtractor
The full-subtractor is a combinational circuit which is used to perform subtraction of three bits.
It has three inputs, X (minuend), Y (subtrahend) and Z (borrow-in), and two outputs, D (difference)
and B (borrow). An easy way to write the truth table: D = X - Y - Z (do not bother about the sign),
and B = 1 if X < (Y + Z).

Truth table


The truth table for the full subtractor is given below.

X  Y  Z | D  B
0  0  0 | 0  0
0  0  1 | 1  1
0  1  0 | 1  1
0  1  1 | 0  1
1  0  0 | 1  0
1  0  1 | 0  0
1  1  0 | 0  0
1  1  1 | 1  1

So, the logic equations are: D = X ⊕ Y ⊕ Z and B = X′Y + X′Z + YZ.
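A behavioural Python sketch of the full subtractor (the function name is an illustrative assumption) makes the D = X - Y - Z interpretation easy to verify:

```python
def full_subtractor(x, y, z):
    """D = X XOR Y XOR Z; B = X'Y + X'Z + YZ."""
    d = x ^ y ^ z
    b = ((1 - x) & y) | ((1 - x) & z) | (y & z)
    return d, b
```

For every input row, the identity X - Y - Z = D - 2B holds, confirming that B really is the weighted borrow from the next stage.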


Adder-subtractor
In digital circuits, an adder-subtractor is a circuit that is capable of adding or subtracting
numbers (in particular, binary numbers). Below is a circuit that adds or subtracts depending on a
control signal. It is also possible to construct a circuit that performs both addition and subtraction at the
same time.

Construction

A 4-bit ripple-carry adder-subtractor based on a 4-bit adder that performs two's complement on
A when D = 1 to yield S = B - A
Having an n-bit adder for A and B, then S = A + B. Assume the numbers are in two's
complement. Then to perform B - A, two's complement theory says to invert each bit of A with a NOT
gate and then add one. This yields -A = A' + 1, which is easy to do with a slightly modified adder.
By preceding each A input bit on the adder with a 2-to-1 multiplexer where:

Input 0 (I0) is straight through (Ai)

Input 1 (I1) is negated (Ai')

and the multiplexer's control input D is also connected to the adder's initial carry-in, then:

when D = 0 the modified adder performs addition

when D = 1 the modified adder performs subtraction

This works because when D = 1 the A input to the adder is really A' and the carry-in is 1. Adding
B to A' plus 1 yields the desired subtraction of B - A.

A way to mark number A as positive or negative without using a multiplexer on each bit
is to precede each bit with an XOR (Exclusive-OR) gate instead.


The first input to each XOR gate is the actual input bit.

The second input to each XOR gate is the control input D.

This produces the same truth table for the bit arriving at the adder as the multiplexer solution
does: when D = 0 the XOR gate output is whatever the input bit is set to, and when D = 1 it
effectively inverts the input bit.
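The XOR-based adder-subtractor can be sketched behaviourally in Python (bit lists LSB first; the function name is ours, and the final carry-out is discarded, so results are modulo 16):

```python
def add_sub_4bit(a_bits, b_bits, d):
    """4-bit adder-subtractor, bits LSB first.

    d = 0: S = A + B.   d = 1: S = B - A (A is two's-complemented).
    """
    carry = d                      # the control line D doubles as the carry-in
    out = []
    for a, b in zip(a_bits, b_bits):
        ai = a ^ d                 # XOR gate inverts each A bit when D = 1
        out.append(ai ^ b ^ carry)               # full-adder sum
        carry = (ai & b) | (carry & (ai ^ b))    # full-adder carry
    return out                     # carry-out discarded (result mod 16)
```

With A = 3 and B = 5 the circuit yields 8 when D = 0 and B - A = 2 when D = 1.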
Until the late 1970s, most minicomputers did not have a multiply instruction, and so programmers
used a "multiply routine" which repeatedly shifts and accumulates partial results, often written using
loop unwinding. Mainframe computers had multiply instructions, but they did the same sorts of shifts
and adds as a "multiply routine".
Early microprocessors also had no multiply instruction. The Motorola 6809, introduced in 1978,
was one of the earliest microprocessors with a dedicated hardware multiply instruction. It did the same
sorts of shifts and adds as a "multiply routine", but implemented in the microcode of the MUL
instruction.
As more transistors per chip became available (Moore's law), it became possible to put enough
adders on a single chip to sum all the partial products at once, rather than re-use a single adder to handle
each partial product one at a time.
Because some common digital signal processing algorithms spend most of their time
multiplying, designers of digital signal processors sacrifice a lot of chip area in order to make the
multiply as fast as possible; a single-cycle multiply-accumulate unit often used up most of the chip
area of early DSPs.

The 4-bit Binary Subtractor


Now that we know how to "ADD" together two 4-bit binary numbers, how would we subtract
two 4-bit binary numbers, for example A - B, using the circuit above? The answer is to use two's-complement notation: all the bits in B must be complemented (inverted) and an extra one added via
the carry-input. This can be achieved by inverting each B input bit using an inverter or NOT gate.
Also, in the above circuit for the 4-bit binary adder, the first carry-in input is held LOW at logic
"0"; for the circuit to perform subtraction this input needs to be held HIGH at "1". With this in mind, a
ripple carry adder can, with a small modification, be used to perform half subtraction, full subtraction
and/or comparison.
There are a number of 4-bit full-adder ICs available, such as the 74LS283 and CD4008, which
will add two 4-bit binary numbers and provide an additional input carry bit, as well as an output carry
bit, so you can cascade them together to produce 8-bit, 12-bit, 16-bit, etc. adders.


Multiplication basics
The method taught in school for multiplying decimal numbers is based on calculating partial
products, shifting them to the left and then adding them together. The most difficult part is to obtain the
partial products, as that involves multiplying a long number by one digit (from 0 to 9):
  123
x 456
=====
  738      (this is 123 x 6)
 615       (this is 123 x 5, shifted one position to the left)
+492       (this is 123 x 4, shifted two positions to the left)
=====
56088

A binary computer does exactly the same, but with binary numbers. In binary encoding each
long number is multiplied by one digit (either 0 or 1), and that is much easier than in decimal, as the
product by 0 or 1 is just 0 or the same number. Therefore, the multiplication of two binary numbers
comes down to calculating partial products (which are 0 or the first number), shifting them left, and then
adding them together (a binary addition, of course):
   1011    (this is 11 in binary)
 x 1110    (this is 14 in binary)
 ======
   0000    (this is 1011 x 0)
  1011     (this is 1011 x 1, shifted one position to the left)
 1011      (this is 1011 x 1, shifted two positions to the left)
+1011      (this is 1011 x 1, shifted three positions to the left)
=========
10011010   (this is 154 in binary)

This is much simpler than in the decimal system, as there is no table of multiplication to
remember: just shifts and adds.
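The shift-and-add method above translates directly into a short Python routine (a behavioural sketch for unsigned integers; the function name is ours):

```python
def shift_add_multiply(a, b):
    """Multiply unsigned integers using only shifts and adds."""
    product = 0
    shift = 0
    while b:
        if b & 1:                     # multiplier bit is 1: add shifted multiplicand
            product += a << shift     # partial product, shifted left
        b >>= 1                       # move to the next multiplier bit
        shift += 1
    return product
```

Running it on the worked examples gives shift_add_multiply(11, 14) = 154 and shift_add_multiply(123, 456) = 56088, matching the long multiplications above.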
This method is mathematically correct, but it has two serious engineering problems. The first is
that it involves 32 intermediate additions in a 32-bit computer, or 64 intermediate additions in a 64-bit

computer. These additions take a lot of time. The engineering implementation of binary multiplication
consists, really, of taking a very simple mathematical process and complicating it a lot, in order to do
fewer additions; a modern processor can multiply two 64-bit numbers with 16 additions (rather than 64),
and can do several steps in parallel, but at a cost of making the process almost unreadable.
The second problem is that the basic school method handles the sign with a separate rule ("+
with + yields +", "+ with - yields -", etc.). Modern computers embed the sign of the number in the
number itself, usually in the two's complement representation. That forces the multiplication process to
be adapted to handle two's complement numbers, and that complicates the process a bit more. Similarly,
processors that use one's complement, sign-and-magnitude, IEEE-754 or other binary representations
require specific adjustments to the multiplication process.

Engineering approach: an unsigned example


For example, suppose we want to multiply two unsigned eight-bit integers together: a[7:0] and
b[7:0]. We can produce eight partial products by performing eight one-bit multiplications, one for each
bit in multiplicand a:
p0[7:0] = a[0] × b[7:0] = {8{a[0]}} & b[7:0]
p1[7:0] = a[1] × b[7:0] = {8{a[1]}} & b[7:0]
p2[7:0] = a[2] × b[7:0] = {8{a[2]}} & b[7:0]
p3[7:0] = a[3] × b[7:0] = {8{a[3]}} & b[7:0]
p4[7:0] = a[4] × b[7:0] = {8{a[4]}} & b[7:0]
p5[7:0] = a[5] × b[7:0] = {8{a[5]}} & b[7:0]
p6[7:0] = a[6] × b[7:0] = {8{a[6]}} & b[7:0]
p7[7:0] = a[7] × b[7:0] = {8{a[7]}} & b[7:0]
where {8{a[0]}} means repeating a[0] (the 0th bit of a) 8 times (Verilog notation).
To produce our product, we then need to add up all eight of our partial products, as shown here:
        p0[7] p0[6] p0[5] p0[4] p0[3] p0[2] p0[1] p0[0]
+ p1[7] p1[6] p1[5] p1[4] p1[3] p1[2] p1[1] p1[0] 0
+ p2[7] p2[6] p2[5] p2[4] p2[3] p2[2] p2[1] p2[0] 0 0
+ p3[7] p3[6] p3[5] p3[4] p3[3] p3[2] p3[1] p3[0] 0 0 0
+ p4[7] p4[6] p4[5] p4[4] p4[3] p4[2] p4[1] p4[0] 0 0 0 0
+ p5[7] p5[6] p5[5] p5[4] p5[3] p5[2] p5[1] p5[0] 0 0 0 0 0
+ p6[7] p6[6] p6[5] p6[4] p6[3] p6[2] p6[1] p6[0] 0 0 0 0 0 0
+ p7[7] p7[6] p7[5] p7[4] p7[3] p7[2] p7[1] p7[0] 0 0 0 0 0 0 0
-----------------------------------------------------------------------
P[15] P[14] P[13] P[12] P[11] P[10] P[9] P[8] P[7] P[6] P[5] P[4] P[3] P[2] P[1] P[0]

In other words, P[15:0] is produced by summing p0, p1 << 1, p2 << 2, and so forth, to produce
our final unsigned 16-bit product.
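The partial-product summation can be checked with a short Python model of the 8 x 8 unsigned array multiplier (the function name is ours; the AND-masking mirrors the {8{a[i]}} & b[7:0] expressions above):

```python
def multiply_8bit(a, b):
    """Sum eight partial products p_i = ({8{a[i]}} & b), each shifted i places."""
    total = 0
    for i in range(8):
        a_i = (a >> i) & 1                  # the ith bit of multiplicand a
        p_i = (0xFF if a_i else 0x00) & b   # {8{a[i]}} & b[7:0]
        total += p_i << i                   # p_i shifted i positions left
    return total & 0xFFFF                   # 16-bit product P[15:0]
```

The model agrees with ordinary integer multiplication for all 8-bit operands, e.g. 11 x 14 = 154 and 255 x 255 = 65025.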

Engineering approach: signed integers


If b had been a signed integer instead of an unsigned integer, then the partial products would
need to have been sign-extended up to the width of the product before summing. If a had been a signed
integer, then partial product p7 would need to be subtracted from the final sum, rather than added to it.
The above array multiplier can be modified to support two's complement notation signed
numbers by inverting several of the product terms and inserting a one to the left of the first partial
product term:
1  -p0[7]  p0[6]  p0[5]  p0[4]  p0[3]  p0[2]  p0[1]  p0[0]
   -p1[7] +p1[6] +p1[5] +p1[4] +p1[3] +p1[2] +p1[1] +p1[0]  0
   -p2[7] +p2[6] +p2[5] +p2[4] +p2[3] +p2[2] +p2[1] +p2[0]  0  0
   -p3[7] +p3[6] +p3[5] +p3[4] +p3[3] +p3[2] +p3[1] +p3[0]  0  0  0
   -p4[7] +p4[6] +p4[5] +p4[4] +p4[3] +p4[2] +p4[1] +p4[0]  0  0  0  0
   -p5[7] +p5[6] +p5[5] +p5[4] +p5[3] +p5[2] +p5[1] +p5[0]  0  0  0  0  0
   -p6[7] +p6[6] +p6[5] +p6[4] +p6[3] +p6[2] +p6[1] +p6[0]  0  0  0  0  0  0
1  +p7[7] -p7[6] -p7[5] -p7[4] -p7[3] -p7[2] -p7[1] -p7[0]  0  0  0  0  0  0  0
--------------------------------------------------------------------------------
P[15] P[14] P[13] P[12] P[11] P[10] P[9] P[8] P[7] P[6] P[5] P[4] P[3] P[2] P[1] P[0]

Example


The Digital Comparator


Another common and very useful combinational logic circuit is that of the Digital Comparator
circuit. Digital or Binary Comparators are made up from standard AND, NOR and NOT gates that
compare the digital signals present at their input terminals and produce an output depending upon the
condition of those inputs. For example, along with being able to add and subtract binary numbers we
need to be able to compare them and determine whether the value of input A is greater than, smaller
than or equal to the value at input B etc. The digital comparator accomplishes this using several logic
gates that operate on the principles of Boolean algebra. There are two main types of digital comparator
available and these are:

Identity Comparator - a digital comparator that has only one output terminal, for
when A = B, either "HIGH" A = B = 1 or "LOW" A = B = 0

Magnitude Comparator - a type of digital comparator that has three output
terminals, one each for equality (A = B), greater than (A > B) and less than (A < B)
The purpose of a Digital Comparator is to compare a set of variables or unknown numbers, for
example A (A1, A2, A3, .... An, etc) against that of a constant or unknown value such as B (B1, B2, B3,
.... Bn, etc) and produce an output condition or flag depending upon the result of the comparison. For
example, a magnitude comparator of two 1-bit inputs (A and B) would produce the following three
output conditions when compared to each other.

Which means: A is greater than B, A is equal to B, and A is less than B


This is useful if we want to compare two variables and want to produce an output when any of
the above three conditions are achieved. For example, produce an output from a counter when a certain
count number is reached. Consider the simple 1-bit comparator below.

1-bit Comparator

Then the operation of a 1-bit digital comparator is given in the following Truth Table.

Truth Table

Inputs | Outputs
 B  A  | A>B  A=B  A<B
 0  0  |  0    1    0
 0  1  |  1    0    0
 1  0  |  0    0    1
 1  1  |  0    1    0

You may notice two distinct features about the comparator from the above truth table. Firstly,
the circuit does not distinguish between two 0s or two 1s, as an output A = B is produced when
they are both equal, either A = B = "0" or A = B = "1". Secondly, the output condition for A = B
resembles that of a commonly available logic gate, the Exclusive-NOR or Ex-NOR function
(equivalence) on each of the n bits, giving: Q = NOT(A ⊕ B)
Digital comparators actually use Exclusive-NOR gates within their design for comparing their
respective pairs of bits. When we are comparing two binary or BCD values or variables against each
other, we are comparing the "magnitude" of these values, a logic "0" against a logic "1" which is where
the term Magnitude Comparator comes from.
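The 1-bit comparator logic above can be sketched as a small Python function; this is only an illustration of the three Boolean output equations, not how the gates are physically built.

```python
def comparator_1bit(a, b):
    """Return (a_gt_b, a_eq_b, a_lt_b) for single bits a and b."""
    a_gt_b = a & (1 - b)          # A.B'  -> 1 only when A=1, B=0
    a_eq_b = 1 - (a ^ b)          # Ex-NOR: 1 when the two bits match
    a_lt_b = (1 - a) & b          # A'.B  -> 1 only when A=0, B=1
    return a_gt_b, a_eq_b, a_lt_b
```

Calling it with each input pair reproduces the four rows of the truth table above.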
As well as comparing individual bits, we can design larger bit comparators by cascading
together n of these to produce an n-bit comparator, just as we did for the n-bit adder in the previous
tutorial. Multi-bit comparators can be constructed to compare whole binary or BCD words to produce
an output if one word is larger, equal to or less than the other. A very good example of this is the 4-bit
Magnitude Comparator. Here, two 4-bit words ("nibbles") are compared to each other to produce the
relevant output with one word connected to inputs A and the other to be compared against connected to
input B as shown below.

4-bit Magnitude Comparator

Some commercially available digital comparators such as the TTL 7485 or CMOS 4063 4-bit
magnitude comparator have additional input terminals that allow more individual comparators to be
"cascaded" together to compare words larger than 4-bits with magnitude comparators of "n"-bits being
produced. These cascading inputs are connected directly to the corresponding outputs of the previous
comparator as shown to compare 8, 16 or even 32-bit words.


8-bit Word Comparator

When comparing large binary or BCD numbers like the example above, to save time the comparator
starts by comparing the highest-order bit (MSB) first. If equality exists, A = B then it compares the next
lowest bit and so on until it reaches the lowest-order bit, (LSB). If equality still exists then the two
numbers are defined as being equal. If inequality is found, either A > B or A < B the relationship
between the two numbers is determined and the comparison between any additional lower order bits
stops. Digital Comparators are used widely in Analogue-to-Digital Converters (ADCs) and Arithmetic
Logic Units (ALUs) to perform a variety of arithmetic operations.
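The MSB-first comparison strategy described above can be sketched in a few lines of Python. This is an illustration of the cascaded-comparator behaviour, with words given as bit lists (assumed MSB first):

```python
def compare_words(a_bits, b_bits):
    """Return '>', '<' or '=' for two equal-length bit lists (MSB first)."""
    for a, b in zip(a_bits, b_bits):
        if a != b:                         # first unequal bit decides;
            return '>' if a > b else '<'   # lower-order bits are ignored
    return '='                             # equal on every bit position
```

Note how the loop stops at the first unequal bit, just as the cascaded hardware stops comparing lower-order stages once an inequality is found.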

The Multiplexer
A data selector, more commonly called a Multiplexer (shortened to "Mux" or "MPX"), is a
combinational logic switching device that operates like a very fast-acting multiple-position rotary
switch. It connects or controls multiple input lines called "channels", consisting of either 2, 4, 8 or 16
individual inputs, one at a time to an output. Then the job of a multiplexer is to allow multiple signals to
share a single common output. For example, a single 8-channel multiplexer would connect one of its
eight inputs to the single data output. Multiplexers are used as one method of reducing the number of
logic gates required in a circuit or when a single data line is required to carry two or more different
digital signals.

Digital Multiplexers are constructed from individual analogue switches encased in a single IC
package as opposed to the "mechanical" type selectors such as normal conventional switches and relays.
Generally, multiplexers have an even number of data inputs, usually a power of two, 2^n, and a
number of "control" inputs that correspond with the number of data inputs; according to the binary
condition of these control inputs, the appropriate data input is connected directly to the output. An
example of a Multiplexer configuration is shown below.

4-to-1 Channel Multiplexer

Addressing

  b   a  |  Input Selected
  0   0  |       A
  0   1  |       B
  1   0  |       C
  1   1  |       D

The Boolean expression for this 4-to-1 Multiplexer above with inputs A to D and data select
lines a, b is given as:
Q = a'·b'·A + a·b'·B + a'·b·C + a·b·D   (where a' and b' denote NOT a and NOT b)
In this example at any one instant in time only ONE of the four analogue switches is closed,
connecting only one of the input lines A to D to the single output at Q. As to which switch is closed
depends upon the addressing input code on lines "a" and "b", so for this example to select input B to the
output at Q, the binary input address would need to be "a" = logic "1" and "b" = logic "0". Adding more
control address lines will allow the multiplexer to control more inputs but each control line
configuration will connect only ONE input to the output.
Then the implementation of this Boolean expression above using individual logic gates would
require the use of seven individual gates consisting of AND, OR and NOT gates as shown.
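The same selection logic can be written directly from the Boolean expression. A minimal Python sketch of the 4-to-1 multiplexer, with 1 - x standing in for the NOT gates:

```python
def mux_4to1(A, B, C, D, a, b):
    """Q = a'.b'.A + a.b'.B + a'.b.C + a.b.D  (select lines a, b)."""
    na, nb = 1 - a, 1 - b      # the two NOT gates
    return (na & nb & A) | (a & nb & B) | (na & b & C) | (a & b & D)
```

For example, with a = 1 and b = 0 only the B term can be non-zero, so input B is routed to Q, matching the addressing table above.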


4 Channel Multiplexer using Logic Gates

The symbol used in logic diagrams to identify a multiplexer is as follows.

Multiplexer Symbol

Multiplexers are not limited to just switching a number of different input lines or channels to one
common single output. There are also types that can switch their inputs to multiple outputs and have
arrangements of 4-to-2, 8-to-3 or even 16-to-4 etc. configurations, and an example of a simple dual-channel
4-input multiplexer (4-to-2) is given below:


4-to-2 Channel Multiplexer

Here in this example the 4 input channels are switched to 2 individual output lines but larger
arrangements are also possible. This simple 4 to 2 configuration could be used for example, to switch
audio signals for stereo pre-amplifiers or mixers.
The Multiplexer is a very useful combinational device that has its uses in many different
applications such as signal routing, data communications and data bus control. When used with a
demultiplexer, parallel data can be transmitted in serial form via a single data link such as a fibre-optic
cable or telephone line. They can also be used to switch either analogue, digital or video signals, with
the switching current in analogue power circuits limited to below 10mA to 20mA per channel in order
to reduce heat dissipation.

The Demultiplexer
The data distributor, known more commonly as a Demultiplexer or "Demux", is the exact
opposite of the Multiplexer we saw in the previous tutorial. The demultiplexer takes one single input
data line and then switches it to any one of a number of individual output lines one at a time. The
demultiplexer converts a serial data signal at the input to a parallel data at its output lines as shown
below.

1-to-4 Channel De-multiplexer

Addressing

  b   a  |  Output Selected
  0   0  |       A
  0   1  |       B
  1   0  |       C
  1   1  |       D

The Boolean expressions for this 1-to-4 Demultiplexer above, with outputs A to D, data input F
and data select lines a, b, are given as:
A = a'·b'·F,   B = a·b'·F,   C = a'·b·F,   D = a·b·F
The function of the Demultiplexer is to switch one common data input line to any one of the 4
output data lines A to D in our example above. As with the multiplexer, the individual solid-state
switches are selected by the binary input address code on the output select pins "a" and "b", and by
adding more address line inputs it is possible to switch more outputs, giving 1-to-2^n data line outputs.
Some standard demultiplexer ICs also have an "enable output" input pin which disables or prevents the
input from being passed to the selected output. Also some have latches built into their outputs to
maintain the output logic level after the address inputs have been changed. However, in standard
decoder type circuits the address input will determine which single data output will have the same value
as the data input with all other data outputs having the value of logic "0".
The implementation of the Boolean expression above using individual logic gates would require
the use of six individual gates consisting of AND and NOT gates as shown.
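A minimal Python sketch of the 1-to-4 demultiplexer follows directly from the four output expressions; it is the mirror image of the multiplexer sketch, routing the single data input F to one of four outputs:

```python
def demux_1to4(F, a, b):
    """Route data input F to one of outputs (A, B, C, D) per address (a, b)."""
    na, nb = 1 - a, 1 - b          # NOT gates on the address lines
    return (na & nb & F,           # A selected when a=0, b=0
            a  & nb & F,           # B selected when a=1, b=0
            na & b  & F,           # C selected when a=0, b=1
            a  & b  & F)           # D selected when a=1, b=1
```

All non-selected outputs sit at logic "0", matching the decoder-type behaviour described above.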

4 Channel Demultiplexer using Logic Gates

The symbol used in logic diagrams to identify a demultiplexer is as follows.


Demultiplexer Symbol

Standard Demultiplexer IC packages available are the TTL 74LS138 1 to 8-output


demultiplexer, the TTL 74LS139 Dual 1-to-4 output demultiplexer or the CMOS CD4514 1-to-16
output demultiplexer. Another type of demultiplexer is the 24-pin, 74LS154 which is a 4-bit to 16-line
demultiplexer/decoder. Here the individual output positions are selected using a 4-bit binary coded
input. Like multiplexers, demultiplexers can also be cascaded together to form higher order
demultiplexers.

The Digital Encoder


Unlike a multiplexer that selects one individual data input line and then sends that data to a
single output line or switch, a Digital Encoder, more commonly called a Binary Encoder, monitors all its
data inputs and converts the single active input into an encoded output. So we can say that a
binary encoder is a multi-input combinational logic circuit that converts the logic level "1" data at its
inputs into an equivalent binary code at its output. Generally, digital encoders produce outputs of 2-bit,
3-bit or 4-bit codes depending upon the number of data input lines. An "n-bit" binary encoder has 2^n
input lines and n output lines, with common types that include 4-to-2, 8-to-3 and 16-to-4 line
configurations. The output lines of a digital encoder generate the binary equivalent of the input line
whose value is equal to "1" and are available to encode either a decimal or hexadecimal input pattern to
typically a binary or B.C.D. output code.

4-to-2 Bit Binary Encoder

One of the main disadvantages of standard digital encoders is that they can generate the wrong
output code when there is more than one input present at logic level "1". For example, if we make inputs
D1 and D2 HIGH at logic "1" at the same time, the resulting output is neither "01" nor "10" but
"11", which is an output binary number different from the actual input present. Also, an output

code of all logic "0"s can be generated when all of its inputs are at "0" OR when input D0 is equal to
one.
One simple way to overcome this problem is to "Prioritise" the level of each input pin and if
there was more than one input at logic level "1" the actual output code would only correspond to the
input with the highest designated priority. Then this type of digital encoder is known commonly as a
Priority Encoder or P-encoder for short.

Priority Encoder
The Priority Encoder solves the problems mentioned above by allocating a priority level to
each input. The priority encoder's output corresponds to the currently active input which has the highest
priority. So when an input with a higher priority is present, all other inputs with a lower priority will be
ignored. The priority encoder comes in many different forms, with an example of an 8-input priority
encoder along with its truth table shown below.

8-to-3 Bit Priority Encoder

Priority encoders are available in standard IC form and the TTL 74LS148 is an 8-to-3 bit priority
encoder which has eight active LOW (logic "0") inputs and provides a 3-bit code of the highest ranked
input at its output. Priority encoders output the highest order input first for example, if input lines "D2",
"D3" and "D5" are applied simultaneously the output code would be for input "D5" ("101") as this has
the highest order out of the 3 inputs. Once input "D5" had been removed the next highest output code
would be for input "D3" ("011"), and so on.
The truth table for an 8-to-3 bit priority encoder is given as:

  Digital Inputs                    |  Binary Output
  D7  D6  D5  D4  D3  D2  D1  D0   |  Q2  Q1  Q0
   0   0   0   0   0   0   0   1   |   0   0   0
   0   0   0   0   0   0   1   X   |   0   0   1
   0   0   0   0   0   1   X   X   |   0   1   0
   0   0   0   0   1   X   X   X   |   0   1   1
   0   0   0   1   X   X   X   X   |   1   0   0
   0   0   1   X   X   X   X   X   |   1   0   1
   0   1   X   X   X   X   X   X   |   1   1   0
   1   X   X   X   X   X   X   X   |   1   1   1
From this truth table, the Boolean expressions for the encoder above with inputs D0 to D7 and
outputs Q0, Q1, Q2 can be derived as:

Output Q0 = D7 + D5·D6' + D3·D4'·D6' + D1·D2'·D4'·D6'

Output Q1 = D7 + D6 + D3·D4'·D5' + D2·D4'·D5'

Output Q2 = D7 + D6 + D5 + D4

Then the final Boolean expression for the priority encoder including the zero inputs is defined
as:

In practice these zero inputs would be ignored allowing the implementation of the final Boolean
expression for the outputs of the 8-to-3 priority encoder above to be constructed using individual OR
gates as follows.

Digital Encoder using Logic Gates
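The priority behaviour itself is easy to model in software: scan the inputs from the highest priority down and emit the 3-bit code of the first active one. This is a behavioural sketch of the truth table above, not a gate-level model:

```python
def priority_encoder_8to3(d):
    """d is a list [D0..D7] of active-high inputs; return (Q2, Q1, Q0)
    for the highest-numbered active input, or None if all inputs are 0."""
    for i in range(7, -1, -1):                 # scan highest priority first
        if d[i]:
            return (i >> 2) & 1, (i >> 1) & 1, i & 1   # binary code of i
    return None                                # no input active
```

Using the example from the text: with D2, D3 and D5 applied simultaneously, the output code is that of D5 ("101"); remove D5 and the code drops to that of D3 ("011").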

Binary Decoder
A Decoder is the exact opposite of the "Encoder" we looked at in the last tutorial. It is
basically a combinational type logic circuit that converts the binary code data at its input into one of a
number of different output lines, one at a time, producing an equivalent decimal code at its output.
Binary Decoders have inputs of 2-bit, 3-bit or 4-bit codes depending upon the number of data input
lines, and an n-bit decoder has 2^n output lines. Therefore, if it receives n inputs (usually grouped as a
binary or Boolean number) it activates one and only one of its 2^n outputs based on that input, with all
other outputs deactivated. A decoder's output code normally has more bits than its input code, and
practical binary decoder circuits include 2-to-4, 3-to-8 and 4-to-16 line configurations.
A binary decoder converts coded inputs into coded outputs, where the input and output codes are
different and decoders are available to "decode" either a Binary or BCD (8421 code) input pattern to
typically a Decimal output code. Commonly available BCD-to-Decimal decoders include the TTL 7442
or the CMOS 4028. An example of a 2-to-4 line decoder along with its truth table is given below. It
consists of an array of four NAND gates, one of which is selected for each combination of the input
signals A and B.


A 2-to-4 Binary Decoder.

In this simple example of a 2-to-4 line binary decoder, the binary inputs A and B determine
which output line from D0 to D3 is "HIGH" at logic level "1" while the remaining outputs are held
"LOW" at logic "0" so only one output can be active (HIGH) at any one time. Therefore, whichever
output line is "HIGH" identifies the binary code present at the input, in other words it "de-codes" the
binary input and these types of binary decoders are commonly used as Address Decoders in
microprocessor memory applications.
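The 2-to-4 decoder's one-hot behaviour can be sketched in Python directly from its minterms. Here A is assumed to be the LSB and B the MSB of the 2-bit input code (the figure's labelling may differ):

```python
def decoder_2to4(A, B):
    """Return (D0, D1, D2, D3); exactly one output is 1 for inputs (A, B),
    with A taken as the LSB and B as the MSB of the input code."""
    nA, nB = 1 - A, 1 - B          # NOT gates on the inputs
    return (nB & nA,               # D0 = B'.A'
            nB & A,                # D1 = B'.A
            B  & nA,               # D2 = B.A'
            B  & A)                # D3 = B.A
```

Whatever the input, exactly one output is HIGH, which is what makes the circuit usable as an address decoder.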

74LS138 Binary Decoder

Some binary decoders have an additional input labelled "Enable" that controls the outputs from
the device. This allows the decoder's outputs to be turned "ON" or "OFF", and we can see that the logic
diagram of the basic decoder is identical to that of the basic demultiplexer. Therefore, we say that a
demultiplexer is a decoder with an additional data line that is used to enable the decoder. An alternative
way of looking at the decoder circuit is to regard inputs A, B and C as address signals. Each
combination of A, B or C defines a unique address which can access a location having that address.
Sometimes it is required to have a Binary Decoder with a number of outputs greater than is
available, or if we only have small devices available, we can combine multiple decoders together to
form larger decoder networks as shown. Here a much larger 4-to-16 line binary decoder has been
implemented using two smaller 3-to-8 decoders.


A 4-to-16 Binary Decoder Configuration.

Inputs A, B, C are used to select which output on either decoder will be at logic "1" (HIGH), and
input D is used with the enable input to select which decoder, either the first or the second, will output
the "1".

Tristate bus system:
In digital electronics, three-state, tri-state, or 3-state logic allows an output port to assume a high
impedance state in addition to the 0 and 1 logic levels, effectively removing the output from the circuit.
This allows multiple circuits to share the same output line or lines (such as a bus, which cannot listen to
more than one device at a time).
Three-state outputs are implemented in many registers, bus drivers, and flip-flops in the 7400
and 4000 series as well as in other types, but also internally in many integrated circuits. Other typical
uses are internal and external buses in microprocessors, computer memory, and peripherals. Many
devices are controlled by an active-low input called OE (Output Enable) which dictates whether the
outputs should be held in a high-impedance state or drive their respective loads (to either 0- or 1-level).
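A shared bus with three-state drivers can be modelled in a few lines of Python. This is a toy behavioural model: high impedance is represented by None, and enable is treated as active-high here for simplicity (real OE pins are usually active-low, as noted above):

```python
def bus_resolve(drivers):
    """drivers: list of (enabled, value) pairs for each three-state output.
    Return the bus level, None when the bus floats (all outputs Hi-Z),
    or raise ValueError on contention (two enabled drivers disagreeing)."""
    active = [v for enabled, v in drivers if enabled]
    if not active:
        return None                            # bus floating (Hi-Z)
    if len(set(active)) > 1:
        raise ValueError("bus contention")     # two drivers fighting
    return active[0]
```

The model makes the key point visible: as long as at most one driver is enabled at a time, any number of devices can share the same line safely.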


UNIT-3
SEQUENTIAL MACHINES FUNDAMENTALS
INTRODUCTION
All the instructions that direct a computer's operation exist as a sequence of binary digits or bits
(0s and 1s) the instructions and the data are represented this way.
The logic gates can be arranged in groups that cause these binary numbers to either act as
adders, subtractors, multipliers, dividers or logical comparators. Other groups of gates can act as
storage for the instructions and data. These groups are, in hardware design terms, latches and
flip-flops.

Unlike Combinational Logic circuits that change state depending upon the actual signals being
applied to their inputs at that time,
Sequential Logic circuits have some form of inherent "Memory" built in to them and they are able
to take into account their previous input state as well as those actually present, a sort of "before"
and "after" is involved.
They are generally termed as Two State or Bistable devices which can have their output set in
either of two basic states, a logic level "1" or a logic level "0" and will remain "Latched" indefinitely
in this current state or condition until some other input trigger pulse or signal is applied which will
change its state once again

The word "Sequential" means that things happen in a "sequence", one after another and in Sequential
Logic circuits, the actual clock signal determines when things will happen next.

Simple sequential logic circuits can be constructed from standard Bistable circuits such as
Flip-flops, Latches or Counters, which themselves can be made by simply connecting together NAND
Gates and/or NOR Gates in a particular combinational way to produce the
required sequential circuit.
Sequential Logic circuits can be divided into 3 main categories:
1. Clock Driven : Synchronous circuits that are synchronized to a specific
clock signal.
2. Event Driven : Asynchronous Circuits that react or change state when
an external event occurs.
3. Pulse Driven : which is a combination of Synchronous and Asynchronous circuits.
Comparison between combinational and sequential circuits

Combinational circuit:
1. In combinational circuits, the output variables at any instant of time depend only on the present input variables.
2. A memory unit is not required in a combinational circuit.
3. These circuits are faster because the delay between the input and output is due to the propagation delay of the gates only.
4. Easy to design.

Sequential circuit:
1. In sequential circuits, the output variables at any instant of time depend not only on the present input variables but also on the present state.
2. A memory unit is required to store the past history of the input variables.
3. Sequential circuits are slower than combinational circuits.
4. Comparatively hard to design.

LATCH :
An asynchronous latch is an electronic sequential logic circuit used to store
information in an asynchronous arrangement. (Asynchronous: it has no Clock
input.) One latch can store one bit. Latches change output state only in response to data input. Essentially,
they hold a bit value and it remains constant until new inputs force it to change. A type of single-bit
stable storage.
FLIP FLOPS :
As with latches, flip-flops are another example of a circuit employing sequential logic.
A flip-flop can also be called a BISTABLE GATE.
A type of single-bit storage but not as stable as a latch.

A basic flip-flop has two stable states. The flip-flop maintains its states indefinitely until an input
pulse (a trigger from the clock) is received.
If a trigger is received, the flip-flop outputs change their states according to defined rules, and
remain in those states until another trigger is received.
There are several different kinds of flip-flop circuits, with designators such as :
R S Flip Flop
J K Flip Flop
D Flip flop
T Flip flop (a variation on J K Flip Flop)

DIFFERENCE BETWEEN LATCH AND FLIP FLOP :


Latches are level-sensitive while flip-flops are edge-sensitive.
Both might require the use of a clock signal and are used in sequential logic. (The clock on the
latch is for synchronisation whereas the clock on the flip-flop may trigger a change in output.)
For a latch, the output tracks the input when the clock signal is high, so as long as the clock is
logic 1 the output can change if the input also changes. (Logic 1 + new data = new output).
Flip-flops, in comparison, will store the input only when there is a rising/falling edge of the clock.
(Edge- triggered, so they may flip on clock pulses.)

Simple set-reset latches

SR NOR latch
When using static gates as building blocks, the most fundamental latch is the simple SR latch,
where S and R stand for set and reset. It can be constructed from a pair of cross-coupled NOR logic
gates. The stored bit is present on the output marked Q.
While the S and R inputs are both low, feedback maintains the Q and Q̄ outputs in a constant state,
with Q̄ the complement of Q. If S (Set) is pulsed high while R (Reset) is held low, then the Q output is
forced high, and stays high when S returns to low; similarly, if R is pulsed high while S is held low,
then the Q output is forced low, and stays low when R returns to low.
SR latch operation
  S  R  |  Action
  0  0  |  No change
  0  1  |  Q = 0
  1  0  |  Q = 1
  1  1  |  Restricted combination

The symbol for an SR NOR latch

The R = S = 1 combination is called a restricted combination or a forbidden state because, as
both NOR gates then output zeros, it breaks the logical requirement that Q̄ be the complement of Q. The
combination is also inappropriate in circuits where both inputs may go low simultaneously (i.e. a transition
from restricted to keep). The output would lock at either 1 or 0 depending on the propagation time relations
between the gates (a race condition). In certain implementations, it could also lead to longer ringings (damped
oscillations) before the output settles, and thereby result in undetermined values (errors) in high-frequency
digital circuits. Although this condition is usually avoided, it can be useful in some
applications.
To overcome the restricted combination, one can add gates to the inputs that would convert (S,R)
= (1,1) to one of the non-restricted combinations. That can be:

Q = 1 (1,0) - referred to as an S-latch
Q = 0 (0,1) - referred to as an R-latch
Keep state (0,0) - referred to as an E-latch

Alternatively, the restricted combination can be made to toggle the output. The result is the JK
latch.
Characteristic equation: Q+ = S + R'·Q (equivalently Q+ = R'·Q + R'·S under the constraint S·R = 0).
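The cross-coupled NOR behaviour described above can be checked with a small simulation: iterate the two gate equations a few times as a crude stand-in for propagation delay, and see where the outputs settle.

```python
def sr_nor_latch(S, R, Q=0, Qn=1):
    """Settle a cross-coupled NOR SR latch from state (Q, Qn)."""
    for _ in range(4):             # a few passes are enough to settle
        Q  = 1 - (R | Qn)          # upper gate: Q  = NOR(R, Q')
        Qn = 1 - (S | Q)           # lower gate: Q' = NOR(S, Q)
    return Q, Qn
```

The simulation reproduces the truth table, including the forbidden S = R = 1 case where both outputs fall to 0 as the text describes.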

SR NAND latch

An SR latch
This is an alternate model of the simple SR latch built with NAND (NOT-AND) logic gates. Set and
reset now become active-low signals, denoted S̄ and R̄ respectively. Otherwise, operation is identical to
that of the SR latch. Historically, S̄R̄-latches have been predominant despite the notational
inconvenience of active-low inputs.

SR latch operation
  S̄  R̄  |  Action
  0  0  |  Restricted combination
  0  1  |  Q = 1
  1  0  |  Q = 0
  1  1  |  No change

Symbol for an SR NAND latch


JK latch
The JK latch is much less used than the JK flip-flop. The JK latch follows the following state
table:
JK latch truth table
  J  K  |  Qnext  |  Comment
  0  0  |    Q    |  No change
  0  1  |    0    |  Reset
  1  0  |    1    |  Set
  1  1  |    Q'   |  Toggle

Hence, the JK latch is an SR latch that is made to toggle its output when passed the restricted
combination of 11. Unlike the JK flip-flop, in the JK latch this is not a useful state because the speed
of the toggling is not directed by a clock.

Gated latches and conditional transparency


Latches are designed to be transparent. That is, input signal changes cause immediate changes in
output; when several transparent latches follow each other, using the same clock signal, signals can
propagate through all of them at once. Alternatively, additional logic can be added to a simple
transparent latch to make it non-transparent or opaque when another input (an "enable" input) is not
asserted. By following a transparent-high latch with a transparent-low (or opaque-high) latch, a
master-slave flip-flop is implemented.

Gated SR latch

A gated SR latch circuit diagram constructed from NOR gates.


A synchronous SR latch (sometimes clocked SR flip-flop) can be made by adding a second level of
NAND gates to the inverted SR latch (or a second level of AND gates to the direct SR latch). The extra
gates further invert the inputs so the simple SR latch becomes a gated SR latch (and a simple SR latch
would transform into a gated SR latch with inverted enable).
With E high (enable true), the signals can pass through the input gates to the encapsulated latch;
all signal combinations except for (0,0) = hold are then immediately reproduced on the (Q, Q̄) outputs, i.e.
the latch is transparent.

With E low (enable false) the latch is closed (opaque) and remains in the state it was left the last
time E was high.
The enable input is sometimes a clock signal, but more often a read or write strobe.
Gated SR latch operation
  E/C  |  Action
   0   |  No action (keep state)
   1   |  The same as non-clocked SR latch

Symbol for a gated SR latch

Gated D latch

A D-type transparent latch based on an SR NAND latch

A gated D latch based on an SR NOR latch


This latch exploits the fact that, in the two active input combinations (01 and 10) of a gated SR
latch, R is the complement of S. The input NAND stage converts the two D input states (0 and 1) to
these two input combinations for the next SR latch by inverting the data input signal. The low state of
the enable signal produces the inactive "11" combination. Thus a gated D-latch may be considered as a
one-input synchronous SR latch. This configuration prevents the restricted combination from being applied
to the inputs. It is also known as a transparent latch, data latch, or simply gated latch. It has a data input
and an enable signal (sometimes named clock, or control). The word transparent comes from the fact

that, when the enable input is on, the signal propagates directly through the circuit, from the input D to
the output Q.
Transparent latches are typically used as I/O ports or in asynchronous systems, or in synchronous
two-phase systems (synchronous systems that use a two-phase clock), where two latches operating on
different clock phases prevent data transparency as in a master-slave flip-flop.
Latches are available as integrated circuits, usually with multiple latches per chip. For example,
74HC75 is a quadruple transparent latch in the 7400 series.
Gated D latch truth table
  E/C  D  |  Q      Q̄      |  Comment
   0   X  |  Qprev  Q̄prev  |  No change
   1   0  |  0      1      |  Reset
   1   1  |  1      0      |  Set

Symbol for a gated D latch

The truth table shows that when the enable/clock input is 0, the D input has no effect on the
output. When E/C is high, the output equals D.
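The transparent/opaque behaviour of the gated D latch is trivially expressed in Python; this one-liner is the whole of the truth table above:

```python
def gated_d_latch(E, D, Q_prev):
    """Transparent when E=1 (Q follows D); holds Q_prev when E=0."""
    return D if E else Q_prev
```

While E is high the output tracks every change on D; the moment E falls, whatever value D had is frozen on Q.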

1. S-R Flip Flop


The SET-RESET flip flop can be designed with the help of two NOR gates or two NAND gates. These
flip flops are also called S-R latches.

S-R Flip Flop using NOR Gate

The design of such a flip flop includes two inputs, called the SET [S] and RESET [R]. There are also
two outputs, Q and Q̄. The diagram and truth table are shown below.


S-R Flip Flop using NOR Gate


From the diagram it is evident that the flip flop has mainly four states. They are:
S=1, R=0 → Q=1, Q̄=0
This state is also called the SET state.
S=0, R=1 → Q=0, Q̄=1
This state is known as the RESET state.
In both states you can see that the outputs are just complements of each other and that the value of Q
follows the value of S.
S=0, R=0 → Q & Q̄ = Remember
If both the values of S and R are switched to 0, then the circuit remembers the values of Q and Q̄ from
the previous state.
S=1, R=1 → Q=0, Q̄=0 [Invalid]
This is an invalid state because the values of both Q and Q̄ are 0. They are supposed to be complements
of each other. Normally, this state must be avoided.

S-R Flip Flop using NAND Gate

The circuit of the S-R flip flop using NAND Gate and its truth table is shown below.

S-R Flip Flop using NAND Gate


Like the NOR Gate S-R flip flop, this one also has four states. They are:
S=1, R=0 → Q=0, Q̄=1
This state is also called the SET state.
S=0, R=1 → Q=1, Q̄=0
This state is known as the RESET state.
In both states you can see that the outputs are just complements of each other and that the value of Q
follows the complement of S.
S=0, R=0 → Q=1, Q̄=1 [Invalid]
If both the values of S and R are switched to 0 it is an invalid state, because the values of both Q and Q̄
are 1. They are supposed to be complements of each other. Normally, this state must be avoided.
S=1, R=1 → Q & Q̄ = Remember
If both the values of S and R are switched to 1, then the circuit remembers the values of Q and Q̄ from
the previous state.


Clocked S-R Flip Flop

It is also called a Gated S-R flip flop.


The problem with S-R flip flops using NOR and NAND gates is the invalid state. This can be
addressed by gating the inputs so that they only reach the latch under the control of a clock. For this, a
clocked S-R flip flop is designed by adding two AND gates to a basic NOR Gate flip flop. The circuit
diagram and truth table are shown below.

A clock pulse [CP] is given to the inputs of the AND gates. When the value of the clock pulse is 0, the
outputs of both AND gates remain 0. As soon as a pulse is given, the value of CP turns 1. This
allows the values at S and R to pass through to the NOR Gate flip flop. But when the values of both S and
R turn 1, the HIGH value of CP causes both outputs to turn to 0 for a short moment. As soon
as the pulse is removed, the flip flop state becomes indeterminate: either of the two states may
result, depending on whether the set or reset input of the flip-flop remains a 1 longer than the
transition to 0 at the end of the pulse. This invalid condition should therefore still be avoided.

2. D Flip Flop
The circuit diagram and truth table are given below.


D Flip Flop
D flip flop is actually a slight modification of the above explained clocked SR flip-flop. From the figure
you can see that the D input is connected to the S input and the complement of the D input is connected
to the R input. The D input is passed on to the flip flop when the value of CP is 1. If D is 1 while CP is
HIGH, the flip flop moves to the SET state. If D is 0, the flip flop switches to the CLEAR state.

3. J-K Flip Flop


The circuit diagram and truth-table of a J-K flip flop is shown below.


J-K Flip Flop


A J-K flip flop can also be defined as a modification of the S-R flip flop. The difference is that the indeterminate S = R = 1 state of the S-R flip flop is replaced by a defined, useful behaviour.
The inputs J and K behave like the S and R inputs of the S-R flip flop: the letter J stands for SET and the letter K stands for CLEAR.
When both inputs J and K are HIGH, the flip-flop switches to the complement state. So, for a value of Q = 1 it switches to Q = 0, and for a value of Q = 0 it switches to Q = 1.
The circuit includes two 3-input AND gates. The output Q of the flip flop is fed back to the input of the AND gate that also receives K and the clock pulse [CP]. So the flip flop gets a CLEAR signal when CP = 1 and K = 1, but only if Q was previously 1. Similarly, the complemented output Q' is fed back to the AND gate that also receives J and CP, so the flip flop is SET when CP = 1 and J = 1 only if Q was previously 0.
Because of this feedback connection, with J = K = 1 the output may keep toggling repeatedly for as long as the clock pulse remains 1. This can be avoided by making the clock pulse duration shorter than the propagation delay through the flip-flop. The restriction on the pulse width can be eliminated with a master-slave or edge-triggered construction.
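The four input combinations above can be checked against the J-K characteristic equation Q+ = JQ' + K'Q. A behavioural sketch (function names ours):

```python
# Behavioural model of the J-K flip-flop on one clock pulse, checked
# against the characteristic equation Q+ = J.Q' + K'.Q.

def jk_flip_flop(q, j, k):
    if j == 0 and k == 0:
        return q          # hold
    if j == 1 and k == 0:
        return 1          # set
    if j == 0 and k == 1:
        return 0          # clear
    return 1 - q          # J = K = 1: toggle

def jk_characteristic(q, j, k):
    return (j & (1 - q)) | ((1 - k) & q)

# exhaustive agreement over all eight state/input combinations
for q in (0, 1):
    for j in (0, 1):
        for k in (0, 1):
            assert jk_flip_flop(q, j, k) == jk_characteristic(q, j, k)
```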

4. T Flip Flop
This is a much simpler version of the J-K flip flop. The J and K inputs are connected together, so it is also called a single-input J-K flip flop. When a clock pulse is given to the flip flop with T = 1, the output toggles. Here also the restriction on the pulse width can be eliminated with a master-slave or edge-triggered construction. Take a look at the circuit and truth table below.
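The toggle behaviour reduces to an exclusive-OR of the present state with T; holding T at 1 makes the output alternate on every pulse. A sketch (function name ours):

```python
# The T flip-flop: J and K tied together into a single input T.
# T = 1 toggles the stored bit on each clock pulse, T = 0 holds it.

def t_flip_flop(q, t):
    return q ^ t

# with T held at 1 the output toggles on every pulse
q, outputs = 0, []
for _ in range(4):
    q = t_flip_flop(q, 1)
    outputs.append(q)
```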


Triggering of Flip Flops


The output of a flip flop can be changed by bringing a small change in the input signal. This small change is brought about with the help of a clock pulse, commonly known as a trigger pulse. When such a trigger pulse is applied to the input, the output changes and the flip flop is said to be triggered. Flip flops are used in designing counters and registers, which store data in the form of multi-bit numbers. Such registers need a group of flip flops connected to each other as sequential circuits, and these sequential circuits require trigger pulses.
The number of trigger pulses applied to the input of the circuit determines the number held in a counter. A single pulse makes the bits move one position when it is applied to a register that stores multi-bit data.
In the case of S-R flip flops, the change in signal level decides the type of trigger that is to be given to the input, but the original level must be regained before giving a second pulse to the circuit.
If a clock pulse is given to the input of the flip flop at the same time when the output of the flip
flop is changing, it may cause instability to the circuit. The reason for this instability is the feedback that
is given from the output combinational circuit to the memory elements. This problem can be solved to a
certain level by making the flip flop more sensitive to the pulse transition rather than the pulse duration.
There are mainly four types of pulse-triggering methods. They differ in the manner in which the
electronic circuits respond to the pulse. They are

1. High Level Triggering


When a flip flop is required to respond at its HIGH state, a HIGH level triggering method is used.
It is mainly identified from the straight lead from the clock input. Take a look at the symbolic
representation shown below.

High Level Triggering

2. Low Level Triggering


When a flip flop is required to respond at its LOW state, a LOW level triggering method is used.
It is mainly identified from the clock input lead along with a low state indicator bubble. Take a look at
the symbolic representation shown below.

Low Level Triggering

3. Positive Edge Triggering


When a flip flop is required to respond at a LOW to HIGH transition state, POSITIVE edge
triggering method is used. It is mainly identified from the clock input lead along with a triangle. Take a
look at the symbolic representation shown below.

Positive Edge Triggering

4. Negative Edge Triggering


When a flip flop is required to respond during the HIGH to LOW transition state, a NEGATIVE
edge triggering method is used. It is mainly identified from the clock input lead along with a low-state
indicator and a triangle. Take a look at the symbolic representation shown below.

Negative Edge Triggering

Clock Pulse Transition


The movement of a trigger pulse is always from a 0 to 1 and then 1 to 0 of a signal. Thus it takes
two transitions in a single signal. When it moves from 0 to 1 it is called a positive transition and when it
moves from 1 to 0 it is called a negative transition. To understand more take a look at the images below.

Clock Pulse Transition


The clocked flip-flops already introduced are triggered during the 0 to 1 transition of the pulse, and the state transition starts as soon as the pulse reaches the HIGH level. If the other inputs change while the clock is still 1, a new output state may occur. This multiple-transition problem can be eliminated if the flip flop is made to respond only to the positive- or negative-edge transition of the clock, rather than to the entire pulse duration.

Master-Slave Flip Flop Circuit


Master-slave flip flop is designed using two separate flip flops. Out of these, one acts as the master
and the other as a slave. The figure of a master-slave J-K flip flop is shown below.

Master Slave Flip Flop


From the above figure you can see that both the J-K flip flops are presented in a series connection.
The output of the master J-K flip flop is fed to the input of the slave J-K flip flop. The output of the
slave J-K flip flop is given as a feedback to the input of the master J-K flip flop. The clock pulse [Clk]
is given to the master J-K flip flop and it is sent through a NOT Gate and thus inverted before passing it
to the slave J-K flip flop.

Working
When Clk = 1, the master J-K flip flop is enabled and the slave is disabled, because the Clk input of the slave is the opposite of that of the master. So the output of the master flip flop is recognized by the slave flip flop only when the Clk value becomes 0. Thus, when the clock pulse makes a transition from 1 to 0, the locked outputs of the master flip flop are fed through to the inputs of the slave flip-flop, making this flip flop edge- or pulse-triggered. To understand better take a look at the timing diagram illustrated below.
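The two-phase action can be sketched behaviourally: the master samples J and K while Clk = 1, and the visible output (the slave) updates only on the 1 to 0 transition. Class and method names are ours:

```python
# Behavioural sketch of a master-slave J-K flip-flop.

class MasterSlaveJK:
    def __init__(self):
        self.master = 0
        self.q = 0            # slave output, visible to the outside
        self.prev_clk = 0

    def step(self, j, k, clk):
        if clk == 1:
            # master follows J-K against the slave's current state
            if j and not k:
                self.master = 1
            elif k and not j:
                self.master = 0
            elif j and k:
                self.master = 1 - self.q
        elif self.prev_clk == 1:
            # falling edge: the slave copies the locked master state
            self.q = self.master
        self.prev_clk = clk
        return self.q
```

With J = K = 1 the output toggles exactly once per complete clock pulse, never repeatedly within a pulse.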


Master Slave J-K Flip Flop Timing Diagram


Thus, the circuit accepts the value in the input when the clock is HIGH, and passes the data to the
output on the falling-edge of the clock signal. This makes the Master-Slave J-K flip flop a Synchronous
device as it only passes data with the timing of the clock signal.

EXCITATION TABLE OF FLIP-FLOPS


The excitation table of a flip-flop is effectively the inverse of its truth table. The truth table for the flip-flop gives the output for a given combination of inputs and present output, while the excitation table gives the input condition required for a given output change.
E.g. the truth table of the T flip-flop says that if input T is 1 and the previous Q is 0, then the new output is 1; the excitation table says that if the present output Q is 0 and the new Q is 1, then the input T must be 1.
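This inversion can be done mechanically: enumerate the flip-flop's truth table and record, for every (present Q, next Q) pair, which input produced it. A sketch for the T flip-flop (names ours):

```python
# Derive the T flip-flop excitation table by inverting its truth table.

def t_next(q, t):
    return q ^ t      # characteristic function of the T flip-flop

excitation = {}       # maps (present Q, next Q) -> required T
for q in (0, 1):
    for t in (0, 1):
        excitation[(q, t_next(q, t))] = t
```

The result says T = 1 exactly when the output has to change, which is the T excitation table.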
EXCITATION TABLE OF RS Flip-flop:
The truth table of the RS flip-flops is as:

Now, to write the excitation table of this flip-flop, we first write the various output changes possible as:

Now we can see from the truth table that to change the output from 0 to 0, we can keep the inputs S, R at 0, 0 or 0, 1. We can write both combinations as 0, X, which means we just need to keep S = 0 and R can have either of the two possible values.
Similarly, we can note that for the output change from 0 to 1, we keep the inputs at S = 1, R = 0. We can find the other cases in the same way, and we get the table as:

EXCITATION TABLE OF OTHER FFs


D Flip-flop: The excitation table of D flip-flop is as:

JK Flip-flop: The excitation table of JK flip-flop is as:

For the output change from 0 to 1 we can either keep the inputs J, K at 1, 0 or use the toggle input combination J = 1, K = 1 to get the complement of the output, so the entry is J = 1, K = X.
The other cases follow similarly.
T Flip-flop: The excitation table of T flip-flop is as:

CONVERSION OF ONE FLIP-FLOP TO OTHER:


As we have already seen from the way we derived the D flip-flop from the RS flip-flop, the T flip-flop from the JK flip-flop, and the JK flip-flop from the RS flip-flop by feeding back the outputs, to derive one flip-flop from another we need to design a combinational circuit in front of the given flip-flop that makes it behave as the required flip-flop. Hence the general diagram to obtain a flip-flop from a given flip-flop is as:

RS flip-flop to D flip-flop:
Let us first derive the D flip-flop from the RS flip-flop, which we have already done:
We first write the truth table for required D flip-flop as


Now we write the excitation table of given FF SR flip-flop as

Now we need to make an arrangement that manipulates input D into inputs S, R such that we get the same output from the RS FF as from a D FF. So we combine the two tables given above, with rows having the same output changes placed in the same row:

Now we design the combinational circuit to convert D input to SR inputs using K-map as:

K-map for S input:

K-map for R input:


Hence we convert the SR FF to D FF as:
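The K-maps give the standard result S = D, R = D'. As a sanity check, this conversion can be verified by exhaustive simulation (function names ours):

```python
# Verify the SR -> D conversion S = D, R = D' by simulation.

def sr_next(q, s, r):
    assert not (s and r), "forbidden S = R = 1"
    return 1 if s else (0 if r else q)

def d_via_sr(q, d):
    # the combinational circuit in front of the SR FF
    return sr_next(q, s=d, r=1 - d)

# the converted flip-flop behaves exactly like a D flip-flop
for q in (0, 1):
    for d in (0, 1):
        assert d_via_sr(q, d) == d
```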

RS flip-flop to JK flip-flop:
We first write the truth table for required Flip-flop i.e. JK FF

Now we write the excitation table of given FF SR flip-flop as


Now we combine two tables to get the combinational circuit as:

Now we design the combinational circuit to convert J, K to corresponding R, S


K-map for S input:

K-map for R input:


So we get the circuit to convert RS FF to JK FF:
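The K-maps here yield the standard feedback connections S = J.Q' and R = K.Q, which can be checked the same way (function names ours):

```python
# Verify the SR -> JK conversion S = J.Q', R = K.Q by simulation.

def sr_next(q, s, r):
    assert not (s and r), "forbidden S = R = 1"
    return 1 if s else (0 if r else q)

def jk_via_sr(q, j, k):
    # the feedback guarantees S and R are never 1 at the same time
    return sr_next(q, s=j & (1 - q), r=k & q)

# agrees with the JK characteristic Q+ = J.Q' + K'.Q in all cases
for q in (0, 1):
    for j in (0, 1):
        for k in (0, 1):
            expected = (j & (1 - q)) | ((1 - k) & q)
            assert jk_via_sr(q, j, k) == expected
```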

D Flip-flop to RS flip-flop:
We first write the truth table for required Flip-flop i.e. RS FF

Now we write the excitation table of given FF i.e. D flip-flop as


Now we combine two tables to get the combinational circuit as:

Now we design the combinational circuit to convert the S, R inputs to the corresponding D input

K-map for D input:

And we get the circuit to convert D to SR FF:


D to T & T to D FF
Similarly we get the circuits as follows:
D FF to T FF:

T FF to D FF:

Note: We have not shown the clock, but the clock signal can be attached to the given FF.


UNIT-4
SEQUENTIAL CIRCUIT DESIGN AND ANALYSIS
Steps in Design of a Sequential Circuit:
1. Specification: A description of the sequential circuit, which should include a detailing of the inputs, the outputs, and the operation. This step possibly assumes that you have knowledge of digital system basics.
2. Formulation: Generate a state diagram and/or a state table from the statement of the problem.
3. State Assignment: From a state table assign binary codes to the states.
4. Flip-flop Input Equation Generation: Select the type of flip-flop for the circuit and generate the
needed input for the required state transitions
5. Output Equation Generation: Derive output logic equations for generation of the output from the
inputs and current state.
6. Optimization: Optimize the input and output equations. Today, CAD systems are typically used for
this in real systems.
Mealy and Moore
1. Sequential machines are typically classified as either a Mealy machine or a Moore machine
implementation.
2. Moore machine: The outputs of the circuit depend only upon the current state of the circuit.
3. Mealy machine: The outputs of the circuit depend upon both the current state of the circuit and the
inputs.
An example to go through the steps
The specification: The circuit will have one input, X, and one output, Z. The output Z will be 0 except when the input sequence 1101 forms the last 4 inputs received on X; in that case it will be 1.

Capture this in a state diagram

1. Circles represent the states
2. Lines and arcs represent the transition between states.
3. The notation Input/output on the line or arc specifies the input that causes this transition and the
output for this change of state.
4. Add a state C: the machine has detected the input sequence 11, which is the start of the sequence.

Add a state D
State D means the machine has detected the 3rd input of the sequence, a 0, having now seen 110. From state D, if the next input is a 1 the sequence has been detected and a 1 is output.

The previous diagram was incomplete: in each state the next input could be a 0 or a 1, and both must be included.

The state table


This can be done directly from the state diagram


Now need to do a state assignment


Select a state assignment
1. We will select a Gray-code encoding.
2. For this, state A will be encoded 00, state B 01, state C 11 and state D 10.

Flip-flop input equations


1. Generate the equations for the flip-flop inputs
2. Generate the D0 equation

3. Generate the D1 equation

The output equation


The next step is to generate the equation for the output Z from the inputs and the present state.
Create a K-map from the truth table.


Now map to a circuit


The circuit has 2 D type F/Fs
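The finished design can be cross-checked behaviourally. The dictionary below replays the 1101 detector as a Mealy machine using the state names from the diagram (the encoding as a Python dict is ours):

```python
# Behavioural model of the "1101" sequence detector (Mealy machine).
# States: A = start, B = saw 1, C = saw 11, D = saw 110.

TRANS = {                      # (state, x) -> (next state, z)
    ('A', 0): ('A', 0), ('A', 1): ('B', 0),
    ('B', 0): ('A', 0), ('B', 1): ('C', 0),
    ('C', 0): ('D', 0), ('C', 1): ('C', 0),
    ('D', 0): ('A', 0), ('D', 1): ('B', 1),
}

def detect(bits):
    state, out = 'A', []
    for x in bits:
        state, z = TRANS[(state, x)]
        out.append(z)
    return out
```

Note that from state D on input 1 the machine goes to B, not A, so overlapping occurrences such as 1101101 are both detected.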

Registers:
A register is a group of 1-bit memory cells. To make an N-bit register we need N 1-bit memory cells.
Register with parallel load: We can represent a simple 4-bit register as shown below. We give the values to be stored at the inputs, and that value is stored at the next clock pulse.


But in this circuit we have to maintain the inputs to keep the outputs unchanged, as the D flip-flop has no input condition for an unchanged output. Hence we modify the above circuit with an extra input LOAD: LOAD = 1 means there is new input data to be stored, and LOAD = 0 means the stored data is to be kept the same. The modified circuit is as:
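The LOAD behaviour amounts to a 2-to-1 selection in front of every D input: new data when LOAD = 1, the present output when LOAD = 0. A behavioural sketch (function name ours):

```python
# 4-bit register with a LOAD enable (behavioural sketch).

def register_step(stored, data, load):
    # Each D input sees: D = data bit if LOAD else current output.
    return list(data) if load == 1 else list(stored)

q = [0, 0, 0, 0]
q = register_step(q, [1, 0, 1, 1], load=1)   # store a new value
q = register_step(q, [0, 0, 0, 0], load=0)   # inputs ignored, value held
```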


The Shift Register


The Shift Register is another type of sequential logic circuit that is used for the storage or transfer of
data in the form of binary numbers and then "shifts" the data out once every clock cycle, hence the
name "shift register". It basically consists of several single bit "D-Type Data Latches", one for each bit
(0 or 1) connected together in a serial or daisy-chain arrangement so that the output from one data latch
becomes the input of the next latch and so on. The data bits may be fed in or out of the register serially,
i.e. one after the other from either the left or the right direction, or in parallel, i.e. all together. The
number of individual data latches required to make up a single Shift Register is determined by the
number of bits to be stored with the most common being 8-bits wide, i.e. eight individual data latches.
The Shift Register is used for data storage or data movement, and shift registers are used in calculators or computers to store data such as two binary numbers before they are added together, or to convert data from serial to parallel or parallel to serial format. The individual data latches that make up a single shift register are all driven by a common clock (Clk) signal, making them synchronous devices. Shift register ICs are generally provided with a clear or reset connection so that they can be "SET" or "RESET" as required.
Generally, shift registers operate in one of four different modes with the basic movement of data
through a shift register being:

Serial-in to Parallel-out (SIPO) - the register is loaded with serial data, one bit at a time, with
the stored data being available in parallel form.
Serial-in to Serial-out (SISO) - the data is shifted serially "IN" and "OUT" of the register, one
bit at a time in either a left or right direction under clock control.
Parallel-in to Serial-out (PISO) - the parallel data is loaded into the register simultaneously and
is shifted out of the register serially one bit at a time under clock control.
Parallel-in to Parallel-out (PIPO) - the parallel data is loaded simultaneously into the register,
and transferred together to their respective outputs by the same clock pulse.

The effect of data movement from left to right through a shift register can be presented graphically as:

Also, the directional movement of the data through a shift register can be either to the left, (left shifting)
to the right, (right shifting) left-in but right-out, (rotation) or both left and right shifting within the same
register thereby making it bidirectional. In this tutorial it is assumed that all the data shifts to the right,
(right shifting).
Serial-in to Parallel-out (SIPO)

4-bit Serial-in to Parallel-out Shift Register

The operation is as follows. Let's assume that all the flip-flops (FFA to FFD) have just been RESET (CLEAR input) and that all the outputs QA to QD are at logic level "0", i.e. no parallel data output. If a logic "1" is connected to the DATA input pin of FFA, then on the first clock pulse the output of FFA, and therefore QA, will be set HIGH to logic "1", with all the other outputs still remaining LOW at logic "0". Assume now that the DATA input pin of FFA has returned LOW again to logic "0", giving us one data pulse, or 0-1-0.
The second clock pulse will change the output of FFA to logic "0" and set the output of FFB, QB, HIGH to logic "1", as its input D has the logic "1" level on it from QA. The logic "1" has now moved, or been "shifted", one place along the register to the right and is now at QB. When the third clock pulse arrives this logic "1" value moves to the output of FFC (QC), and so on, until the arrival of the fifth clock pulse, which sets all the outputs QA to QD back again to logic level "0" because the input to FFA has remained constant at logic level "0".
The effect of each clock pulse is to shift the data contents of each stage one place to the right, and this is shown in the following table until the complete data value of 0-0-0-1 is stored in the register. This data value can now be read directly from the outputs QA to QD. The data has then been converted from a serial data input signal to a parallel data output. The truth table and following waveforms show the propagation of the logic "1" through the register from left to right as follows.

Basic Movement of Data through a Shift Register

Clock Pulse No   QA   QB   QC   QD
      0           0    0    0    0
      1           1    0    0    0
      2           0    1    0    0
      3           0    0    1    0
      4           0    0    0    1
      5           0    0    0    0


Note that after the fourth clock pulse has ended the 4-bits of data (0-0-0-1) are stored in the register and
will remain there provided clocking of the register has stopped. In practice the input data to the register
may consist of various combinations of logic "1" and "0". Commonly available SIPO IC's include the
standard 8-bit 74LS164 or the 74LS594.
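The shifting action shown in the table above can be reproduced with a one-line behavioural model (names ours):

```python
# Behavioural model of the 4-bit SIPO register, shifting right one
# place per clock pulse: the serial bit enters at QA.

def sipo_shift(q, data_in):
    return [data_in] + q[:-1]

q = [0, 0, 0, 0]                 # QA, QB, QC, QD after a CLEAR
states = []
for bit in (1, 0, 0, 0, 0):      # one data pulse (0-1-0), then zeros
    q = sipo_shift(q, bit)
    states.append(q)
```

states reproduces rows 1 to 5 of the table: the single 1 walks from QA to QD and then falls out of the register.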

Serial-in to Serial-out (SISO)


This shift register is very similar to the SIPO above, except that whereas before the data was read directly in parallel form from the outputs QA to QD, this time the data is allowed to flow straight through the register and out of the other end. Since there is only one output, the DATA leaves the shift register one bit at a time in a serial pattern, hence the name Serial-in to Serial-out shift register, or SISO.
The SISO shift register is one of the simplest of the four configurations, as it has only three connections: the serial input (SI), which determines what enters the left-hand flip-flop; the serial output (SO), which is taken from the output of the right-hand flip-flop; and the sequencing clock signal (Clk). The logic circuit diagram below shows a generalized serial-in serial-out shift register.

4-bit Serial-in to Serial-out Shift Register

You may wonder what the point of a SISO shift register is if the output data is exactly the same as the input data. Well, this type of shift register also acts as a temporary storage device or as a time-delay device for the data, with the amount of time delay being controlled by the number of stages in the register (4, 8, 16, etc.) or by varying the application of the clock pulses. Commonly available ICs include the 74HC595 8-bit serial-in/serial-out shift register with 3-state outputs.

Parallel-in to Serial-out (PISO)


The Parallel-in to Serial-out shift register acts in the opposite way to the serial-in to parallel-out one above. The data is loaded into the register in parallel format, i.e. all the data bits enter their inputs simultaneously at the parallel input pins PA to PD of the register. The data is then read out sequentially from the register output Q in the normal shift-right mode, presenting the data that was at PA to PD one bit at a time on each clock cycle in serial format. It is important to note that with this system a clock pulse is not required to parallel-load the register, as the data is already present, but four clock pulses are required to unload the data.

4-bit Parallel-in to Serial-out Shift Register

As this type of shift register converts parallel data, such as an 8-bit data word into serial format, it can
be used to multiplex many different input lines into a single serial DATA stream which can be sent
directly to a computer or transmitted over a communications line. Commonly available IC's include the
74HC166 8-bit Parallel-in/Serial-out Shift Registers.

Parallel-in to Parallel-out (PIPO)


The final mode of operation is the Parallel-in to Parallel-out shift register. This type of register also acts as a temporary storage device or as a time-delay device, similar to the SISO configuration above. The data is presented in parallel format to the parallel input pins PA to PD and then transferred together directly to their respective output pins QA to QD by the same clock pulse. Then one clock pulse loads and unloads the register. This arrangement for parallel loading and unloading is shown below.


4-bit Parallel-in to Parallel-out Shift Register

The PIPO shift register is the simplest of the four configurations as it has only three connections, the
parallel input (PI) which determines what enters the flip-flop, the parallel output (PO) and the
sequencing clock signal (Clk).
Similar to the Serial-in to Serial-out shift register, this type of register also acts as a temporary storage
device or as a time delay device, with the amount of time delay being varied by the frequency of the
clock pulses. Also, in this type of register there are no interconnections between the individual flip-flops
since no serial shifting of the data is required.

Counters:
A counter is a device which stores (and sometimes displays) the number of times a particular event or process has occurred, often in relationship to a clock signal. A digital counter is a set of flip flops whose states change in response to pulses applied at the input to the counter. Counters may be asynchronous counters or synchronous counters; asynchronous counters are also called ripple counters.
In electronics, counters can be implemented quite easily using register-type circuits such as flip-flops, and a wide variety of classifications exist:
1. Asynchronous (ripple) counter - changing state bits are used as clocks to subsequent state flip-flops
2. Synchronous counter - all state bits change under control of a single clock
3. Decade counter - counts through ten states per stage
4. Up/down counter - counts both up and down, under command of a control input
5. Ring counter - formed by a shift register with a feedback connection in a ring
6. Johnson counter - a twisted ring counter
7. Cascaded counter
8. Modulus counter
Each is useful for different applications. Usually, counter circuits are digital in nature and count in natural binary. Many types of counter circuits are available as digital building blocks; for example, a number of chips in the 4000 series implement different counters. Occasionally there are advantages to using a counting sequence other than the natural binary sequence, such as the binary-coded-decimal counter, a linear-feedback shift register counter, or a Gray-code counter. Counters are useful for digital clocks and timers, and in oven timers, VCR clocks, etc.

Asynchronous counters:
The simplest asynchronous (ripple) counter stage is a single flip-flop arranged to toggle: a JK-type flip-flop with J = K = 1, or equivalently a D-type with its data input fed from its own inverted output. This circuit can store one bit, and hence can count from zero to one before it overflows (starts over from 0). This counter will increment once for every clock cycle and takes two clock cycles to overflow, so every cycle it will alternate between a transition from 0 to 1 and a transition from 1 to 0. Notice that this creates a new clock with a 50% duty cycle at exactly half the frequency of the input clock. If this output is then used as the clock signal for a similarly arranged flip-flop, one gets another 1-bit counter that counts half as fast. Putting them together yields a two-bit counter:
Two-bit ripple up-counter using negative edge triggered flip flop:
A two-bit ripple counter uses two flip-flops. There are four possible states for 2-bit up-counting, i.e. 00, 01, 10 and 11. The counter is initially assumed to be at state 00, where the outputs of the two flip-flops are noted as Q1Q0, with Q1 forming the MSB and Q0 the LSB.
On the negative edge of the first clock pulse, the output of the first flip-flop FF1 toggles its state. Thus Q1 remains at 0 and Q0 toggles to 1, and the counter state is now read as 01. During the next negative edge of the input clock pulse, FF1 toggles and Q0 = 0. The output Q0 being the clock signal for the second flip-flop FF2, this transition acts as a negative edge for FF2 and toggles its state, so Q1 = 1. The counter state is now read as 10. On the next negative edge of the input clock to FF1, output Q0 toggles to 1; this transition from 0 to 1, being a positive edge for FF2, leaves Q1 at 1. The counter state is now read as 11. On the next negative edge of the input clock, Q0 toggles to 0. This transition from 1 to 0 acts as a negative-edge clock for FF2, and its output Q1 toggles to 0. Thus the starting state 00 is attained. The figure is shown below.
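The ripple action can be sketched in code: FF1 toggles on every falling edge of the input clock, and FF2 toggles only when that toggle produces a falling edge on Q0 (function name ours):

```python
# Two-bit ripple up-counter, negative-edge triggered (behavioural).

def ripple_count(pulses):
    q0 = q1 = 0
    states = []
    for _ in range(pulses):
        old_q0 = q0
        q0 ^= 1                        # FF1 toggles on every clock falling edge
        if old_q0 == 1 and q0 == 0:    # falling edge on Q0 clocks FF2
            q1 ^= 1
        states.append((q1, q0))        # read as Q1 Q0
    return states
```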


Two-bit ripple down-counter using negative edge triggered flip flop:

A 2-bit down-counter counts in the order 0, 3, 2, 1, 0, 3, ..., i.e. 00, 11, 10, 01, 00, 11, ..., etc. The figure above shows a ripple down counter using negative-edge-triggered J-K FFs, together with its timing diagram.
For down counting, the complemented output Q1' of FF1 is connected to the clock of FF2. Let all the FFs initially be reset. At the negative-going edge of the first clock pulse FF1 toggles, so Q1 goes from 0 to 1 and Q1' goes from 1 to 0.
The negative-going signal at Q1' is applied to the clock input of FF2 and toggles FF2, so Q2 goes from 0 to 1. Thus, after one clock pulse Q2 = 1 and Q1 = 1, i.e. the state of the counter is 11.
At the negative-going edge of the second clock pulse, Q1 changes from 1 to 0 and Q1' from 0 to 1.
This positive-going signal at Q1' does not affect FF2, so Q2 remains at 1. Hence the state of the counter after the second clock pulse is 10.
At the negative-going edge of the third clock pulse, FF1 toggles, so Q1 goes from 0 to 1 and Q1' from 1 to 0. This negative-going signal at Q1' toggles FF2, so Q2 changes from 1 to 0; hence the state of the counter after the third clock pulse is 01.
At the negative-going edge of the fourth clock pulse, FF1 toggles, so Q1 goes from 1 to 0 and Q1' from 0 to 1. This positive-going signal at Q1' does not affect FF2, so Q2 remains at 0; hence the state of the counter after the fourth clock pulse is 00.

The Ring Counter


If we apply a serial data signal to the input of a serial-in to serial-out shift register, the same sequence of data will exit from the last flip-flop in the register chain after a preset number of clock cycles, thereby acting as a sort of time-delay circuit for the original signal. But what if we were to connect the output of this shift register back to its input, so that the output from the last flip-flop, QD, becomes the input of the first flip-flop, DA? We would then have a closed-loop circuit that "recirculates" the DATA around a continuous loop for every state of its sequence, and this is the principle of operation of a Ring Counter.
Then, by looping the output back to the input, we can convert a standard shift register into a ring counter. Consider the circuit below.

4-bit Ring Counter

The synchronous Ring Counter example above is preset so that exactly one data bit in the register is set to logic "1", with all the other bits reset to "0". To achieve this, a "CLEAR" signal is first applied to all the flip-flops together in order to "RESET" their outputs to a logic "0" level, and then a "PRESET" pulse is applied to the input of the first flip-flop (FFA) before the clock pulses are applied. This places a single logic "1" value into the circuit of the ring counter. On each successive clock pulse, the counter circulates the same data bit between the four flip-flops, over and over again around the "ring", once every fourth clock cycle. But in order to cycle the data correctly around the counter we must first "load" the counter with a suitable data pattern, as all logic "0"s or all logic "1"s output at each clock cycle would make the ring counter invalid. This type of data movement is called "rotation", and like the previous shift register, the effect of the movement of the data bit from left to right through a ring counter can be presented graphically as follows, along with its timing diagram:
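The rotation is easy to model: each clock pulse moves every bit one stage to the right, with QD re-entering at QA (function name ours):

```python
# 4-bit ring counter: QD feeds back to DA, so the word rotates right.

def ring_step(q):
    return [q[-1]] + q[:-1]

q = [1, 0, 0, 0]          # preset pattern: a single logic "1" at QA
sequence = [q]
for _ in range(4):
    q = ring_step(q)
    sequence.append(q)
```

After four pulses the pattern is back where it started, giving the four distinct states of a mod-4 counter.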


Rotational Movement of a Ring Counter

Since the ring counter example shown above has four distinct states, it is also known as a "modulo-4" or "mod-4" counter, with each flip-flop output having a frequency equal to one-fourth (1/4) that of the main clock frequency.
The "MODULO" or "MODULUS" of a counter is the number of states the counter counts or sequences through before repeating itself, and a ring counter can be made to output any modulo number. A "mod-n" ring counter requires n flip-flops connected together to circulate a single data bit, providing n different output states. For example, a mod-8 ring counter requires eight flip-flops and a mod-16 ring counter would require sixteen flip-flops. However, as in our example above, only four of the possible sixteen states are used, making ring counters very inefficient in terms of their output-state usage.

Johnson Ring Counter


The Johnson Ring Counter, or "Twisted Ring Counter", is another shift register with feedback, exactly the same as the standard ring counter above, except that this time the inverted output Q' of the last flip-flop is connected back to the input D of the first flip-flop, as shown below. The main advantage of this type of ring counter is that it needs only half the number of flip-flops of a standard ring counter for a given modulo number. So an "n-stage" Johnson counter will circulate a single data bit, giving a sequence of 2n different states, and can therefore be considered a "mod-2n" counter.

4-bit Johnson Ring Counter

This inversion of Q before it is fed back to input D causes the counter to "count" in a different way. Instead of counting through a fixed set of one-hot patterns like the normal ring counter, such as "0001"(1), "0010"(2), "0100"(4), "1000"(8) and repeat for a 4-bit counter, the Johnson counter counts up and then down as the initial logic "1" passes through it to the right, replacing the preceding logic "0". A 4-bit Johnson ring counter passes blocks of four logic "0"s and then four logic "1"s, thereby producing an 8-state sequence. As the inverted output Q' is connected to the input D, this 8-state sequence continually repeats: for example, "1000", "1100", "1110", "1111", "0111", "0011", "0001", "0000", as demonstrated in the following table.

Truth Table for a 4-bit Johnson Ring Counter


Clock Pulse No   FFA   FFB   FFC   FFD
0                0     0     0     0
1                1     0     0     0
2                1     1     0     0
3                1     1     1     0
4                1     1     1     1
5                0     1     1     1
6                0     0     1     1
7                0     0     0     1
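The twisted feedback can be sketched the same way as the plain ring counter. This is a behavioural model under my own naming; the only change from the ring counter sketch is that the complement of the last output is fed back:

```python
# Behavioural sketch of the 4-bit Johnson counter: the INVERTED output of the
# last flip-flop feeds the D input of the first. Order is (FFA, FFB, FFC, FFD).

def johnson_counter(n_bits=4, clocks=8):
    state = [0] * n_bits                  # all flip-flops cleared at the start
    history = []
    for _ in range(clocks):
        history.append(tuple(state))
        state = [1 - state[-1]] + state[:-1]   # Q' of last FF -> D of first FF
    return history

for step, bits in enumerate(johnson_counter()):
    print(step, bits)
```

The printed sequence reproduces the truth table above: 0000, 1000, 1100, 1110, 1111, 0111, 0011, 0001, then back to 0000, i.e. 2n = 8 states from n = 4 flip-flops.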

As well as counting or rotating data around a continuous loop, ring counters can also be used to detect
or recognize various patterns or number values within a set of data. By connecting simple logic gates
such as the AND or the OR gates to the outputs of the flip-flops the circuit can be made to detect a set
number or value. Standard 2, 3 or 4-stage Johnson ring counters can also be used to divide the
frequency of the clock signal by varying their feedback connections and divide-by-3 or divide-by-5
outputs are also available.
A 3-stage Johnson Ring Counter can also be used as a 3-phase, 120 degree phase shift square wave
generator by connecting to the data outputs at A, B and NOT-B. The standard 5-stage Johnson counter
such as the commonly available CD4017 is generally used as a synchronous decade counter/divider
circuit. The smaller 2-stage circuit is also called a "Quadrature" (sine/cosine) Oscillator/Generator and
is used to produce four individual outputs that are each "phase shifted" by 90 degrees with respect to
each other, and this is shown below.

ASYNCHRONOUS COUNTERS - MOD-2 counter:
A single flip-flop is a mod-2 counter with a starting count of 0. If we connect J and K to HIGH and supply a clock to the flip-flop, we'll see that the flip-flop counts 0, then 1, and as it is a MOD-2 counter it'll then reset and again count from 0.

And the output is as:

Also note that the output pulse is at half the original frequency of the clock. Hence we can say that the flip-flop acts as a divide-by-2 circuit.

Ripple counter:
We can attach more flip-flops to make a larger counter. We simply connect flip-flops in cascade, giving the output of the first to the clock of the 2nd, the output of the 2nd to the clock of the 3rd, and so on. This way every flip-flop divides the frequency of the clock by 2, and hence we obtain a divide-by-larger-value circuit. Let's see how we can make larger counters:

The following waveforms illustrate how the above circuit does the counting. It is actually a MOD-8 counter, so it counts from 0 to 7 and then resets itself as shown:
with every negative edge the count is incremented, and when the count reaches 7 the next edge resets the value to 0.
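The ripple action, where each flip-flop toggles only when the previous output falls from 1 to 0, can be sketched as follows. This is a behavioural model with my own names; it ignores propagation delay, which is exactly what makes a real ripple counter slow:

```python
# Behavioural sketch of a 3-flip-flop (MOD-8) ripple counter.
# Each negative clock edge toggles the first flip-flop; each later flip-flop
# toggles only when the previous one makes a 1 -> 0 transition (its "clock").

def ripple_counter(n_ffs=3, clock_edges=16):
    q = [0] * n_ffs                       # q[0] = Q1 (LSB) ... q[2] = Q3 (MSB)
    counts = []
    for _ in range(clock_edges):
        toggle = True                     # the external clock edge hits FF 1
        for i in range(n_ffs):
            if not toggle:
                break
            prev = q[i]
            q[i] ^= 1                     # this flip-flop toggles
            toggle = (prev == 1)          # ripple onward only on a 1 -> 0 change
        counts.append(sum(bit << i for i, bit in enumerate(q)))
    return counts

print(ripple_counter())
```

The printed counts run 1, 2, ..., 7, 0 and then repeat, and each stage's output is at half the frequency of the stage before it.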

These waveforms represent the count as (Q3 Q2 Q1) read as a binary number.

Design a MOD-14 counter:


First we design a counter whose modulus is the power of 2 just greater than 14, which is 16. So we first design a MOD-16 counter as:

Now we need to design a combinational circuit which ensures that the counter is reset after the count value reaches 13. For this we first draw the waveforms as:

As we have to count up to 13 and then reset, the first unwanted state is 14 (binary 1110), i.e. Q4=1, Q3=1 and Q2=1. Whenever this state appears we have to reset all the flip-flops so that the count value returns to 0. Hence we take the NAND of these 3 variables, which gives a 0 only when all 3 variables are 1, and the output of the NAND gate is connected to all the ACTIVE LOW CLEAR lines to reset the flip-flops. We also have to make sure that the output of this NAND gate goes to 0 only after the count of 13.

And now we get the output waveforms as:

And we can clearly observe that we have achieved a MOD-14 counter, as the count is reset after 13. But in this method we have to observe the output waveforms and then work out the combinational circuit that resets the value after a certain count.
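The reset-to-zero trick can be sketched behaviourally as follows. The function name and parameters are my own; the NAND detect is modelled as an immediate clear the moment the first unwanted state appears (in real hardware that state exists only as a brief glitch before the asynchronous CLEAR takes effect):

```python
# Behavioural sketch of a MOD-n counter built from a MOD-16 ripple counter
# plus a NAND-gate detect that asynchronously CLEARs all flip-flops as soon
# as the first unwanted state (count == n) appears.

def mod_n_counter(n=14, n_ffs=4, clock_edges=30):
    count = 0
    seen = []
    for _ in range(clock_edges):
        count = (count + 1) % (1 << n_ffs)   # the underlying mod-16 counter
        if count == n:                       # NAND output goes low: CLEAR all FFs
            count = 0
        seen.append(count)
    return seen

print(mod_n_counter())
```

The output cycles 1, 2, ..., 13, 0 with period 14, so states 0 through 13 are all retained.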

Mod-6 Counter


And now we draw the table to represent the desired output of the combinational circuit that resets the FFs:

Q2   Q1   Q0   OUTPUT
(entries as shown in the figure)

And using K-map we get the combinational circuit as


And the complete circuit is as:

UP/DOWN COUNTER
Here we'll be counting in reverse order, i.e. the count starts at 15, goes down to 0, and then wraps back to 15 again. We just make one change in the circuit: we give Q' to the CLK of the next flip-flop, or we use positive-edge-triggered flip-flops and give Q to the CLK of the next flip-flop.

And the output waveform would be as:


Or

And the output waveform would be as:


In both cases we take (Q4 Q3 Q2 Q1), read as a binary number, as the value of the count.

Or
We can just use the same circuit as the UP counter but


Consider the following circuit

And we see that this circuit is an UP counter which counts from 0 to 7 and is then reset, but the same circuit can also work as a DOWN counter when we take the count as the combination of the inverted outputs of each FF, i.e. (Q3' Q2' Q1') read as a binary number. Hence the output count of the above circuit would go from 7 to 0 and then again be set to 7.

Synchronous Counter
In synchronous counters we have the same clock signal to all the flip-flops.

MOD-4 Synchronous counter: We discuss here a 2-bit synchronous counter. We have the circuit for
this as:


We have the initial outputs Q0=0 and Q1=0. When the first negative clock edge comes, the output of the 1st FF toggles from 0 to 1, since J and K of the 1st FF are both 1. But when this 1st clock edge arrived, the output of the 1st FF was still 0, so J and K of the 2nd FF were 0 for that edge. The output of the 2nd FF therefore doesn't change and we get Q1=0, so the output is (Q1 Q0) = 01.
On the next edge, the output of the 1st FF changes from 1 to 0, as J and K are always 1 for this FF. The inputs of the 2nd FF at this 2nd edge are J=1 and K=1, so its output changes from 0 to 1 and we get the count (Q1 Q0) = 10. Similarly, on the next edge we get the count (Q1 Q0) = 11. And on the 4th clock edge both outputs are reset, we get (Q1 Q0) = 00, and the whole procedure is repeated.
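The key point in the description above is that J and K are sampled *before* the common clock edge. A small behavioural sketch, with my own naming, makes that explicit:

```python
# Behavioural sketch of the 2-bit (MOD-4) synchronous counter:
# FF0 has J = K = 1 (always toggles); FF1 has J = K = Q0,
# and both flip-flops see the same clock edge.

def sync_mod4(clock_edges=8):
    q0 = q1 = 0
    states = []
    for _ in range(clock_edges):
        toggle1 = (q0 == 1)   # J1 = K1 = Q0, sampled BEFORE the edge
        q0 ^= 1               # J0 = K0 = 1, so FF0 toggles on every edge
        if toggle1:
            q1 ^= 1
        states.append((q1, q0))
    return states

print(sync_mod4())
```

The printed states run (0,1), (1,0), (1,1), (0,0) and repeat, matching the count sequence 01, 10, 11, 00 derived above.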

COMPARISON BETWEEN SYNCHRONOUS & ASYNCHRONOUS COUNTERS

Circuit:
Asynchronous - the logic circuit of this type of counter is simple to design; we feed the output of one FF to the clock of the next FF.
Synchronous - the circuit diagram for this type of counter becomes difficult as the number of states in the counter increases.

Propagation time delay:
Asynchronous - Tpd = N * (delay of 1 FF), where N is the number of FFs, which is quite high.
Synchronous - Tpd = (delay of 1 FF) + (delay of 1 gate). The inclusion of the delay of 1 gate will be illustrated when we design higher counters.

Maximum operating frequency:
Asynchronous - hence the operating frequency is low.
Synchronous - hence the operating frequency is higher.

UNIT-5


SEQUENTIAL CIRCUITS
Finite State Machine:
A finite state machine can be defined as a type of machine whose past histories can affect its future behavior in a finite number of ways. To clarify, consider the example of a binary full adder. Its output depends on the present input and the carry generated from the previous input. It may have a large number of previous input histories, but they can be divided into just two classes: those that produce a carry of 0 and those that produce a carry of 1. The most general model of a sequential circuit has inputs, outputs and internal states. Such a sequential circuit is referred to as a finite state machine (FSM). A finite state machine is an abstract model that describes the synchronous sequential machine. The figure shows the block diagram of a finite state model. X1, X2, ..., Xl are inputs; Z1, Z2, ..., Zm are outputs; Y1, Y2, ..., Yk are state variables; and Y1', Y2', ..., Yk' represent the next state.

Capabilities and limitations of finite-state machine


Let a finite state machine have n states, and let a long input sequence be applied to it. The machine will progress from its starting state through successive states according to the state transitions. However, the input string may eventually be longer than n, the number of states. As there are only n states in the machine, it must at some point return to a state it has been in before, and from that point, if the input remains the same, the machine will function in a periodically repeating fashion. From this we can conclude that for an n-state machine the output becomes periodic after a number of clock pulses less than or equal to n. States are memory elements; as a finite state machine has a finite number of states, only a finite number of memory elements are required to design it.

Limitations:

SWITCHING THEORY & LOGIC DESIGN


ECE DEPARTMENT
St. MARTINS ENGINEERING COLLEGE
1. Periodic sequences limited by the number of states: with an n-state machine, we can only generate periodic sequences whose period is n states or smaller. For example, in a 6-state machine, the maximum-period sequence is 0, 1, 2, 3, 4, 5, 0, 1, ...
2. No infinite non-periodic sequence: consider an output sequence in which the output is 1 when and only when the number of inputs received so far is equal to P(P+1)/2 for P = 1, 2, 3, ..., i.e., the desired input-output sequence has the following form:
Input:  x x x x x x x x x x x x x x x x x x x x x
Output: 1 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1
Such an infinite sequence cannot be produced by a finite state machine, because the gaps between successive 1s grow without bound.
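To see the ever-growing gaps concretely, here is a short sketch (my own naming) that generates this output pattern directly from the formula P(P+1)/2:

```python
# Output is 1 exactly at input positions P(P+1)/2 = 1, 3, 6, 10, 15, 21, ...
# (the triangular numbers); the run of 0s between 1s grows by one each time,
# which is why no fixed number of states can track it.

def triangular_output(length=21):
    triangulars, p = set(), 1
    while p * (p + 1) // 2 <= length:
        triangulars.add(p * (p + 1) // 2)
        p += 1
    return [1 if i in triangulars else 0 for i in range(1, length + 1)]

print(triangular_output())
```

Any machine with n states would be forced into a periodic output after at most n steps, which this sequence never settles into.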
3. Limited memory: the finite state machine has limited memory, and due to this it cannot produce certain outputs. Consider a binary multiplier circuit for multiplying two arbitrarily large binary numbers: the memory is not sufficient to store the arbitrarily large partial products generated during the multiplication.
Finite state machines are of two types, which differ in the way the output is generated:
1. Mealy model: the output is a function of the present state and the present input.
2. Moore model: the output is a function of the present state only.
Mathematical representation of a synchronous sequential machine:
The relation between the present state S(t), present input X(t), and next state S(t+1) can be given as
S(t+1) = f{S(t), X(t)}
The value of the output Z(t) is given as
Z(t) = g{S(t), X(t)} for the Mealy model
Z(t) = g{S(t)} for the Moore model
because, in a Mealy machine, the output depends on the present state and the input, whereas in a Moore machine the output depends only on the present state.
Comparison between the Moore machine and Mealy machine:

Moore machine:
1. Its output is a function of the present state only: Z(t) = g{S(t)}.
2. Input changes do not affect the output.
3. It requires more states for implementing the same function.

Mealy machine:
1. Its output is a function of the present state as well as the present input: Z(t) = g{S(t), X(t)}.
2. Input changes may affect the output of the circuit.
3. It requires fewer states for implementing the same function.

Mealy model:
When the output of the sequential circuit depends on both the present state of the flip-flops and on the inputs, the sequential circuit is referred to as a Mealy circuit or Mealy machine. The figure shows the logic diagram of the Mealy model. Notice that the output depends upon the present state as well as the present inputs. We can easily see that changes in the input during the clock pulse cannot affect the state of the flip-flops, but they can affect the output of the circuit. If the input variations are not synchronized with the clock, the derived output will also not be synchronized with the clock and we get false outputs. The false outputs can be eliminated by allowing the input to change only at the active transition of the clock.

SWITCHING THEORY & LOGIC DESIGN


ECE DEPARTMENT
St. MARTINS ENGINEERING COLLEGE

Fig: logic diagram of a mealy model


The behavior of a clocked sequential circuit can be described algebraically by means of state equations. A state equation specifies the next state as a function of the present state and inputs. The Mealy model shown in the figure consists of two D flip-flops, an input x and an output z. Since the D input of a flip-flop determines the value of the next state, the state equations for the model can be written as
Y1(t+1) = y1(t)x(t) + y2(t)x(t)
Y2(t+1) = y1'(t)x(t)
and the output equation is
Z(t) = {y1(t) + y2(t)} x'(t)
where Y(t+1) is the next state of the flip-flop one clock edge later, x(t) is the present input, and Z(t) is the present output. Writing y1 for y1(t) and y2 for y2(t), and using capital Y for the next state, the equations in more compact form are
Y1 = y1x + y2x
Y2 = y1'x
Z = (y1 + y2)x'
The state table of the Mealy model based on the above state equations and output equation is shown in the figure, and the state diagram based on the state table is shown in the figure.
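As a quick check of these equations, here is a small simulation sketch. I am reading the complement bars lost in the typesetting back in, as in the standard two-D-flip-flop example: Y1 = y1x + y2x, Y2 = y1'x, Z = (y1 + y2)x'; the function names are my own.

```python
# One clocked step of the Mealy model: compute next state and present output
# from present state (y1, y2) and present input x (all values are 0 or 1).

def mealy_step(y1, y2, x):
    ny1 = (y1 & x) | (y2 & x)     # Y1 = y1*x + y2*x
    ny2 = (1 - y1) & x            # Y2 = y1'*x
    z = (y1 | y2) & (1 - x)       # Z  = (y1 + y2)*x'  (depends on state AND input)
    return ny1, ny2, z

def run_mealy(inputs, state=(0, 0)):
    outputs = []
    for x in inputs:
        ny1, ny2, z = mealy_step(*state, x)
        outputs.append(z)
        state = (ny1, ny2)
    return outputs, state

print(run_mealy([1, 0, 1, 1, 0]))
```

Note how the output z can change immediately with x, without waiting for a clock edge: that is the Mealy property.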

In general form, the mealy circuit can be represented with its block schematic as shown in below fig.

SWITCHING THEORY & LOGIC DESIGN


ECE DEPARTMENT
St. MARTINS ENGINEERING COLLEGE

Moore model:
When the output of the sequential circuit depends only upon the present state of the flip-flops, the sequential circuit is referred to as a Moore circuit or Moore machine. Notice that the output depends only on the present state; it does not depend on the input at all. The input is used only to determine the flip-flop excitations, not the output. The circuit shown has two T flip-flops, one input x, and one output z. It can be described algebraically by two excitation equations and an output equation:
T1 = y2x
T2 = x
Z = y1y2

The characteristic equation of a T flip-flop is

Q(t+1) = TQ' + T'Q

The values of the next state can be derived by substituting T1 and T2 into the characteristic equation, yielding
Y1(t+1) = (y2x)'y1 + (y2x)y1'
= y1y2' + y1x' + y1'y2x

Y2(t+1) = xy2' + x'y2
The state table of the Moore model based on the above state equations and output equation is shown in the figure.
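A short simulation sketch of this Moore model (my own naming, using T XOR Q as the T flip-flop characteristic, which is equivalent to TQ' + T'Q):

```python
# One clocked step of the Moore model built from two T flip-flops:
# T1 = y2*x, T2 = x, and the output Z = y1*y2 depends on the state only.

def moore_step(y1, y2, x):
    t1, t2 = y2 & x, x            # excitation equations
    return t1 ^ y1, t2 ^ y2       # Q(t+1) = T xor Q

def run_moore(inputs, state=(0, 0)):
    outputs = [state[0] & state[1]]   # Z is asserted by the present state
    for x in inputs:
        state = moore_step(*state, x)
        outputs.append(state[0] & state[1])
    return outputs, state

print(run_moore([1, 1, 1]))
```

Unlike the Mealy sketch earlier, the output here changes only when the state changes, i.e. only on clock edges.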

In general form, the Moore circuit can be represented with its block schematic as shown in the figure below.

Figure: moore circuit model

Figure: moore circuit model with an output decoder


Important definitions and theorems:

1) Finite state machine definitions: Consider the state diagram of a finite state machine shown in the figure. It is a five-state machine with one input variable and one output variable.

Successor: looking at the state diagram, when the present state is A and the input is 1, the next state is D; this condition is specified as "D is the 1-successor of A". Similarly, we can say that A is the 1-successor of B and C, D is the 11-successor of B and C, C is the 00-successor of A and D, D is the 000-successor of A, and E is the 10-successor of A, or the 0000-successor of A, and so on.

Terminal state: looking at the state diagram, we observe that no input sequence exists which can take the sequential machine out of state E; thus state E is said to be a terminal state.

Strongly connected machine: in sequential machines, certain subsets of states may not be reachable from other subsets of states, even if the machine does not contain any terminal state. If for every pair of states si, sj of a sequential machine there exists an input sequence which takes the machine M from si to sj, then the sequential machine is said to be strongly connected.
State equivalence and machine minimization: In realizing the logic diagram from a state table or state diagram, we often come across redundant states. Redundant states are states whose functions can be accomplished by other states. The elimination of redundant states reduces the total number of states of the machine, which in turn reduces the number of flip-flops and logic gates, reducing the cost of the final circuit. When two states are equivalent, one of them can be removed without altering the input-output relationship.
State equivalence theorem: two states s1 and s2 are equivalent if, for every possible input sequence applied, the machine goes to the same next state and generates the same output. That is, if s1(t+1) = s2(t+1) and z1 = z2, then s1 = s2.
Distinguishable states and distinguishing sequences: Two states sa and sb of a sequential machine are distinguishable if and only if there exists at least one finite input sequence which, when applied to the sequential machine, causes different output sequences depending on whether sa or sb is the initial state. Consider states A and B in the state table: when input X=0, their outputs are 0 and 1 respectively, and therefore states A and B are called 1-distinguishable. Now consider states A and E; the output sequence is as follows.


Here the outputs are different after the 3rd transition, and hence states A and E are 3-distinguishable. The concept of k-distinguishability leads directly to the definition of k-equivalence: states that are not k-distinguishable are said to be k-equivalent.
Truth table for distinguishable states:

PS      NS,Z
        X=0     X=1
A       C,0     F,0
B       D,1     F,0
C       E,0     B,0
D       B,1     E,0
E       D,0     B,0
F       D,1     B,0

Merger Chart Methods:
Merger graphs:
The merger graph is a state-reduction tool used to reduce the number of states in an incompletely specified machine. The merger graph is defined as follows.
1. Each state in the state table is represented by a vertex in the merger graph, so the merger graph contains the same number of vertices as the state table contains states.
2. Each compatible state pair is indicated by an unbroken line drawn between the two state vertices.
3. Every potentially compatible state pair, with non-conflicting outputs but with different next states, is connected by a broken line. The implied states are written in the line break between the two potentially compatible states.
4. If two states are incompatible, no connecting line is drawn.
Consider the state table of an incompletely specified machine shown in the figure; the corresponding merger graph is shown alongside.
State table:
PS: A, B, C, D, E, F; inputs I1 to I4; entries NS,Z (the individual table entries appear only in the original figure).

a) Merger graph      b) simplified merger graph

States A and B have non-conflicting outputs, but their successors under input I2 are compatible only if the implied states D and E are compatible. So draw a broken line from A to B with DE written in the break. States A and C are compatible because the next-state and output entries of states A and C are not conflicting; therefore, an unbroken line is drawn between nodes A and C. States A and D have non-conflicting outputs, but their successors under input I3 are B and C; hence join A and D by a broken line with BC entered in the break.

Two states are incompatible if no line is drawn between them. If implied states are incompatible, they are crossed out and the corresponding line is ignored: since the implied states D and E are incompatible, states A and B are also incompatible. Next, it is necessary to check whether the incompatibility of A and B invalidates any other broken line. Observe that states E and F also become incompatible, because the implied pair AB is incompatible. The broken lines which remain in the graph after all the implied pairs have been verified to be compatible are regarded as complete lines. After checking all possibilities of incompatibility, the merger graph yields seven compatible pairs.
These compatible pairs are then checked for further compatibility. For example, the pairs (B,C), (B,D) and (C,D) are compatible, so (B,C,D) is also compatible. Likewise the pairs (A,C), (A,D) and (C,D) are compatible, so (A,C,D) is also compatible. In this way the entire set of compatibles of a sequential machine can be generated from its compatible pairs. To find the minimal set of compatibles for state reduction, it is useful to find what are called the maximal compatibles. A set of compatible state pairs is said to be maximal if it is not completely covered by any other set of compatible state pairs. The maximal compatibles can be found by looking in the merger graph for complete polygons which are not contained within any higher-order complete polygons. For example, only the triangles (A,C,D) and (B,C,D) are of higher order. The set of maximal compatibles for this sequential machine is given as:
Example:


Figure: state table

State Minimization:
Completely Specified Machines
1. Two states, si and sj of machine M are distinguishable if and only if there exists a finite input
sequence which when applied to M causes different output sequences depending on whether M started
in si or sj.
2. Such a sequence is called a distinguishing sequence for (si, sj).
3. If there exists a distinguishing sequence of length k for (si, sj), they are said to be k-distinguishable.
EXAMPLE:

States A and B are 1-distinguishable, since a 1 input applied to A yields output 1, versus output 0 from B.
States A and E are 3-distinguishable, since the input sequence 111 applied to A yields output 100, versus output 101 from E.

States si and sj (si ~ sj) are said to be equivalent iff no distinguishing sequence exists for (si, sj).
If si ~ sj and sj ~ sk, then si ~ sk; so state equivalence is an equivalence relation (i.e. it is reflexive, symmetric and transitive).
An equivalence relation partitions the elements of a set into equivalence classes.
Property: If si ~ sj, their corresponding X-successors, for all inputs X, are also equivalent.
Procedure: Group the states of M so that two states are in the same group iff they are equivalent (this forms a partition of the states).
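The grouping procedure above is exactly partition refinement. Here is a sketch of it for a completely specified Mealy machine, applied to a small hypothetical machine (the machine, the function name, and the table encoding are all my own and are not the example from the notes):

```python
def minimize(states, inputs, delta, out):
    """Partition-refinement state minimization of a completely specified
    Mealy machine, where delta[s][x] -> next state and out[s][x] -> output."""
    # Start with 1-equivalence: states agreeing on outputs for every input.
    groups = {}
    for s in states:
        groups.setdefault(tuple(out[s][x] for x in inputs), []).append(s)
    partition = list(groups.values())
    while True:
        block_of = {s: i for i, blk in enumerate(partition) for s in blk}
        refined = {}
        for s in states:
            # (k+1)-equivalence: same block AND X-successors in the same blocks
            key = (block_of[s], tuple(block_of[delta[s][x]] for x in inputs))
            refined.setdefault(key, []).append(s)
        new_partition = list(refined.values())
        if len(new_partition) == len(partition):   # no further refinement
            return new_partition
        partition = new_partition

# Hypothetical 3-state machine in which B and C turn out to be equivalent.
delta = {'A': {0: 'B', 1: 'C'}, 'B': {0: 'A', 1: 'C'}, 'C': {0: 'A', 1: 'B'}}
out   = {'A': {0: 1, 1: 0},    'B': {0: 0, 1: 0},    'C': {0: 0, 1: 0}}
classes = minimize(['A', 'B', 'C'], [0, 1], delta, out)
print(classes)
```

Each pass refines the partition by one more step of lookahead, so the loop terminates after at most n passes, in line with the k-equivalence discussion above.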
Completely Specified Machines:

Theorem. For every machine M there is a minimum machine Mred ~ M. Mred is unique up to
isomorphism.


Goal: develop an implementation such that all computations can be assigned to transitions containing a state for which the name of the corresponding class is changed. Suitable data structures achieve an O(kn log n) implementation.
State Minimization:
Incompletely Specified Machines
Statement of the problem: given an incompletely specified machine M, find a machine M' such that:
1. on any input sequence, M' produces the same outputs as M, whenever M is specified, and
2. there does not exist a machine M'' with fewer states than M' which has the same property.
Machine M:

Attempt to reduce this case to the usual state minimization of completely specified machines.
Brute-force method: force the don't-cares to all their possible values and choose the smallest of the completely specified machines so obtained.
In this example, it means state-minimizing the two completely specified machines obtained from M by setting the don't-care entry to 0 or to 1.
Suppose that the '-' is set to 0.

States s1 and s2 can be equivalent only if s3 and s2 are equivalent, but s3 and s2 assert different outputs under input 0, so s1 and s2 are not equivalent.


States s1 and s3 are not equivalent either.

So this completely specified machine cannot be reduced further (3 states is the minimum).

Suppose that the '-' is set to 1.

State s1 is incompatible with both s2 and s3.

States s3 and s2 are equivalent.
So number of states is reduced from 3 to 2.
Machine Mred :

Machines M2 and M3 are formed by filling in the unspecified entry in M with 0 and 1, respectively. Neither M2 nor M3 can be reduced. Conclusion: M cannot be minimized further! But is that a correct conclusion? Note that we want to merge two states when, for any input sequence, they generate the same output sequence wherever both outputs are specified.
Definition: A set of states is compatible if they agree on the outputs wherever they are all specified.
Machine M :


In this case we have two compatible sets: A = (s1, s2) and B = (s3, s2). A reduced machine Mred can be
built as follows.
Machine Mred

When a next state is unspecified, the future behavior of the machine is unpredictable. This suggests the
definition of admissible input sequence.
Definition. An input sequence is admissible for a starting state of a machine if no unspecified next state is encountered, except possibly at the final step.
Definition. State si of machine M1 is said to cover, or contain, state sj of M2 provided
1. every input sequence admissible to sj is also admissible to si, and
2. its application to both M1 and M2 (initially in si and sj, respectively) results in identical output sequences whenever the outputs of M2 are specified.
Definition. Machine M1 is said to cover machine M2 if for every state sj in M2, there is a corresponding
state si in M1 such that si covers sj.
Algorithmic State Machines:
The binary information stored in a digital system can be classified as either data or control information. The data information is manipulated by performing arithmetic, logic, shift and other data-processing tasks. The control information provides the command signals that control the various operations on the data in order to accomplish the desired data-processing task. To design a digital system, we therefore have to design two subsystems: a datapath subsystem and a control subsystem.

PART-II
Algorithmic State Machine (ASM)
The algorithmic state machine (ASM) method is a method for designing finite state machines. It is
used to represent diagrams of digital integrated circuits. The ASM diagram is like a state diagram but
less formal and thus easier to understand. An ASM chart is a method of describing the sequential
operations of a digital system.

ASM method
The ASM method is composed of the following steps:
1. Create an algorithm, using pseudocode, to describe the desired operation of the device.
2. Convert the pseudocode into an ASM chart.

3. Design the datapath based on the ASM chart.
4. Create a detailed ASM chart based on the datapath.
5. Design the control logic based on the detailed ASM chart.

ASM chart
An ASM chart consists of an interconnection of four types of basic elements: state names, states,
condition checks and conditional outputs. An ASM state, represented as a rectangle, corresponds to one
state of a regular state diagram or finite state machine. The Moore type outputs are listed inside the box.

State name: The name of the state is indicated inside the circle and the circle is placed in the top left
corner or the name is placed without the circle.

Representation of State Name


State box : The output of the state is indicated inside the rectangle box

Representation of State box


Decision box
Decision box: A diamond indicates that the stated condition expression is to be tested and the exit path
is to be chosen accordingly. The condition expression contains one or more inputs to the FSM (Finite
State Machine). An ASM condition check, indicated by a diamond with one input and two outputs (for
true and false), is used to conditionally transfer between two states or between a state and a conditional
output. The decision box contains the stated condition expression to be tested, the expression contains
one or more inputs of the FSM.


Conditional output box


Conditional output box: An oval denotes the output signals that are of Mealy type. These outputs
depend not only on the state but also the inputs to the FSM.

Datapath
Once the desired operation of a circuit has been described using RTL operations, the datapath
components may be derived. Every unique variable that is assigned a value in the RTL program can be
implemented as a register. Depending on the functional operation performed when assigning a value to
a variable, the register for that variable may be implemented as a straightforward register, a shift
register, a counter, or a register preceded by a combinational logic block. The combinational logic block
associated with a register may implement an adder, subtracter, multiplexer, or some other type of
combinational logic function.
Detailed ASM chart
Once the datapath is designed, the ASM chart is converted to a detailed ASM chart. The RTL notation
is replaced by signals defined in the datapath.



ASM Design: A Binary Multiplier


Introduction
An algorithmic state machine (ASM) is a finite state machine that uses a sequential circuit (the Controller) to coordinate a series of operations among other functional units such as counters, registers and adders (the Datapath). The series of operations implements an algorithm. The Controller passes control signals, which can be Moore or Mealy outputs of the Controller, to the Datapath. The Datapath returns information to the Controller in the form of status signals, which can then be used to determine the sequence of states in the Controller. Both the Controller and the Datapath may have external inputs and outputs, and both are clocked simultaneously, as shown in the following figure:
Figure: ASM block diagram. The Controller (with its own external inputs and outputs) sends Control signals to the Datapath; the Datapath (with its own inputs and outputs) returns Status signals to the Controller; both share the same clock.


Think about this: a microprocessor may be considered as a (large!) ASM with many inputs, states and outputs. A program (any software) is really just a method for specifying its initial state.
The two basic strategies for the design of a controller are:
1. Hardwired control, which includes techniques such as one-hot state encoding (also known as "one flip-flop per state") and decoded sequence registers.
2. Microprogrammed control, which uses a memory device to produce a sequence of control words to a datapath.
Since hardwired control is, generally speaking, fast compared with microprogramming strategies, most
modern microprocessors incorporate hardwired control to help achieve their high performance (or in
some cases, a combination of hardwired and microprogrammed control). The early generations of
microprocessors used microprogramming almost exclusively. We will discuss some basic concepts in
microprogramming later in the course for now we concentrate on a design example of hardwired
control. The ASM we will design is an n-bit unsigned binary multiplier.

Binary Multiplication
The design of binary multiplication strategies has a long history. Multiplication is such a fundamental
and frequently used operation in digital signal processing that most modern DSP chips have dedicated
multiplication hardware to maximize performance. Examples are filtering, coding and compression for
telecommunications and control applications, as well as many others. Multiplier units must be fast!
The first example that we considered (in class), which used a repeated-addition strategy, is not always
fast. In fact, the time required to multiply two numbers is variable and depends on the value of the
multiplier itself. For example, the calculation of 5 x 9 as 5 + 5 + 5 + 5 + 5 + 5 + 5 + 5 + 5 requires more
clock pulses than the calculation of 5 x 3 = 5 + 5 + 5. The larger the multiplier, the more iterations are
required. This is not practical. Think about this: how many iterations are required for multiplying, say,
two 16-bit numbers, in the worst case?
Another approach to achieving fast multiplication is the look-up table (LUT). The multiplier and multiplicand
together form an address in memory at which the corresponding, pre-computed value of
the product is stored. For an n-bit multiplier (that is, multiplying an n-bit number by an n-bit number), a
(2^(n+n) x 2n)-bit memory is required to hold all possible products. For example, a 4-bit x 4-bit multiplier
requires 2^(4+4) x 8 = 2048 bits. For an 8-bit x 8-bit multiplier, a 2^(8+8) x 16 = 1 Mbit memory is required.
This approach is conceptually simple and has a fixed multiply time, equal to the access time of the
memory device, regardless of the data being multiplied. But it is also impractical for larger values of n.
Think about this: what memory capacity is required for multiplying two 16-bit numbers? Two 32-bit
numbers?
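The memory sizes above (and the "think about this" cases) can be checked with a short script. This is an illustrative sketch, not part of the original design; the function name is my own.

```python
# Look-up-table multiplier: the two n-bit operands are concatenated into a
# 2n-bit address, and each stored product is 2n bits wide, so the memory
# must hold 2^(2n) words of 2n bits each.
def lut_bits(n):
    return (2 ** (2 * n)) * (2 * n)

print(lut_bits(4))    # 2048 bits, as in the 4-bit x 4-bit example
print(lut_bits(8))    # 1048576 bits = 1 Mbit
print(lut_bits(16))   # 137438953472 bits (128 Gbit) -- already impractical
```

The 16-bit case makes the point: the LUT approach explodes long before 32 bits.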

Most multiplication hardware units use iterative algorithms implemented as an ASM for which the
worst-case multiplication time can be guaranteed. The algorithm we present here is similar to the

pencil-and-paper technique that we naturally use for multiplying in base 10. Consider the following
example:

      123    (the multiplicand)
    x 432    (the multiplier)
    -----
      246    (1st partial product)
     369     (2nd partial product)
    492      (3rd partial product)
    -----
    53136    (the product)

Each digit of the multiplier is multiplied by the multiplicand to form a partial product. Each partial
product is shifted left (that is, multiplied by the base) by an amount equal to the position of the
corresponding multiplier digit. In the example above, 246 is actually 246 x 10^0 = 246, 369 is actually
369 x 10^1 = 3690 and 492 is actually 492 x 10^2 = 49200. There are as many partial products as
there are digits in the multiplier.
Binary multiplication can be done in exactly the same way:

        1100    (the multiplicand)
      x 1011    (the multiplier)
      ------
        1100    (1st partial product)
       1100     (2nd partial product)
      0000      (3rd partial product)
     1100       (4th partial product)
    --------
    10000100    (the product)

However, with binary digits we can make some important observations:

- Since we multiply by only 1 or 0, each partial product is either a copy of the multiplicand
  shifted by the appropriate number of places, or it is 0.
- The number of partial products is the same as the number of bits in the multiplier.
- The number of bits in the product is twice the number of bits in the multiplicand: multiplying
  two n-bit numbers produces a 2n-bit product.
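These observations translate directly into code. The following sketch (in Python, purely for illustration; the function name is my own) forms the product as a sum of shifted copies of the multiplicand:

```python
def multiply_partial_products(multiplicand, multiplier, n):
    """Pencil-and-paper binary multiply: bit i of the multiplier contributes
    the multiplicand shifted left i places (or nothing when the bit is 0)."""
    product = 0
    for i in range(n):                       # one partial product per multiplier bit
        if (multiplier >> i) & 1:
            product += multiplicand << i     # shifted copy of the multiplicand
    return product

print(bin(multiply_partial_products(0b1100, 0b1011, 4)))  # 0b10000100
```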

We could then design datapath hardware using a 2n-bit adder plus some other components (as in the
example of Figure 10.17 of Brown and Vranesic) that emulates this manual procedure. However, the
hardware requirement can be reduced by considering the multiplication in a different light. Our
algorithm may be informally described as follows.
Consider each bit of the multiplier from right to left. When a bit is 1, the multiplicand is added to the
running total, which is then shifted right. When the multiplier bit is 0, no add is necessary since the partial
product is 0, and only the shift takes place. After n cycles of this strategy (one for each bit in the
multiplier) the final answer is produced. Consider the previous example again:


     1100          (the multiplicand)
     1011          (the multiplier)
     ----
     0000          (initial partial product, start with 0000)
     1100          (1st multiplier bit is 1, so add the multiplicand)
     ----
     1100          (sum)
     0110 0        (shift sum one position to the right)
     1100          (2nd multiplier bit is 1, so add multiplicand again)
     ----
    10010 0        (sum, with a carry generated on the left)
     1001 00       (shift sum once to the right, including carry)
     0100 100      (3rd multiplier bit is 0, so skip add, shift once)
     1100          (4th multiplier bit is 1, so add multiplicand again)
     ----
    10000 100      (sum, with a carry generated on the left)
     1000 0100     (shift sum once to the right, including carry)

Notice that all the adds take place in the same 4 bit positions, so we need only a 4-bit adder! We also
need shifting capability to capture the bits moving to the right, as well as a way to store the
carries resulting from the additions. The final answer (the product) consists of the accumulated
sum and the bits shifted out to the right. A hardware design that can implement this algorithm is
described in the next section.

Design of the Binary Multiplier Datapath


The multiplication as described above can be implemented with the components shown in the figure
on the next page (note that, for simplicity, the clock inputs are not shown). It is the role of the controller
to provide a sequence of inputs to each component that causes the datapath hardware to perform the
desired operations. Registers A and Q are controlled with synchronous inputs Load (parallel load), Shift
(shift one position to the right with left serial input) and Clear (force the contents to 0). The D flipflop
has an asynchronous Clear input, and the counter has an asynchronous input Init (force the contents to
11..1).

The log2(n)-bit counter (Counter P) is used to keep track of the number of iterations (n). Counter P is
loaded with the value n-1 and counts down to zero; thus n operations are ensured. Each operation is
either (a) add then shift or (b) just shift, as described in the multiply algorithm above. Zero detection on
the counter produces an output Z that is HI when the counter hits zero, and this is used to tell the
controller that the sequence is complete. Counter P is initialized to n-1 with input Init = 1.
The multiplicand is applied to one n-bit input of the adder. The sum output from the adder is stored with a
parallel load into Register A. Register A can also shift to the right, accepting a 1-bit serial input from the
left. This is provided from the output of a D flipflop that stores the value of the carry out from the
adder in the previous addition. Register Q receives its left serial input, when shifting, from the right-most
bit (lsb) of Register A. Registers A and Q are identical in operation (but controlled differently) and,
together with the carry flipflop, they form a (1 + n + n)-bit shift register. That is, registers C, A and Q
are connected such that the carry value stored in the flipflop enters Register A from the left and the bit
shifted out from the right of Register A enters Register Q from its left.
At the end of the process, registers A and Q hold the 2n-bit product (the n msbs are in Register A).
The multiplier is initially stored in Register Q via its parallel load capability. The reason for this is that
it provides a convenient way to access each bit of the multiplier in succession at the lsb position (Q0) of
Register Q. In the multiply algorithm, each bit of the multiplier is used to decide whether there should be
(a) an add with shift or (b) a shift only. So, Q0 is used to tell the controller which of these operations to
perform on each iteration. After each shift, one bit of the multiplier is lost to the right and the product
shifts into Register Q from the left. After n shifts, Register Q holds the n lsbs of the product and the
multiplier is totally lost.
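As a sanity check, the register-level behaviour just described can be sketched in software. The register and signal names follow the text, but this is only an illustrative model of the datapath, not a hardware description:

```python
def multiply_caq(multiplicand, multiplier, n):
    """Model of the C|A|Q datapath: n iterations, each an (optional) n-bit
    add followed by a one-place right shift of C, A and Q together."""
    mask = (1 << n) - 1
    C, A, Q = 0, 0, multiplier & mask        # Initialize: C<-0, A<-0, Q<-multiplier
    for _ in range(n):                       # Counter P guarantees n iterations
        if Q & 1:                            # Q0 = 1: add the multiplicand
            total = A + multiplicand
            A, C = total & mask, total >> n  # SUM into Reg A, Cout into C
        else:                                # Q0 = 0: just clear the carry
            C = 0
        Q = (Q >> 1) | ((A & 1) << (n - 1))  # lsb of A enters Q from the left
        A = (A >> 1) | (C << (n - 1))        # C enters A from the left
        C = 0
    return (A << n) | Q                      # A holds the msbs, Q the lsbs

print(multiply_caq(12, 11, 4))  # 132
```

Note that only an n-bit add is ever performed, mirroring the 4-bit adder observation above.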
Putting the datapath circuit for the binary multiplier into a box, we see it has:

Data inputs:      Multiplicand (n bits), Multiplier (n bits)
Data outputs:     Product (2n bits)
Control inputs:   Clear carry; Load, Shift and Clear (for each shift register); Init (for the counter)
Status outputs:   Z (zero detect) and Q0 (each bit of the Multiplier, in succession)


[Figure: Datapath for Binary Multiplier. The n-bit Multiplicand drives one input of an n-bit parallel
adder; Register A drives the other. The adder's SUM is parallel-loaded back into Register A and its
Cout is stored in the Carry (C) flipflop. Registers A and Q are right-shift registers with Load, Shift
and Clear inputs: C supplies the left serial input of Register A, and the lsb of Register A supplies the
left serial input of Register Q. The Multiplier is parallel-loaded into Register Q (whose lsb is the
status output Q0), and the log2(n)-bit binary down Counter P, with input Init and zero-detect output Z,
counts the iterations. Register A holds the msb's of the Product and Register Q the lsb's.]


Design of the Binary Multiplier Controller


An ASM chart that implements the binary multiply algorithm is given below. Note that << indicates
an assignment; for example, C << 0 means set C to 0.

[ASM chart:
  IDLE: when G = 1: C << 0, A << 0, P << n-1, Q << multiplier; go to MUL0 (otherwise remain in IDLE).
  MUL0: if Q0 = 1: A << A + multiplicand, C << Cout; else: C << 0. Go to MUL1.
  MUL1: C|A|Q << shr (C|A|Q), P << P - 1. If Z = 0 go to MUL0, else go to IDLE.]

The process is achieved with 3 states (IDLE, MUL0 and MUL1). Each state provides control signals
to the Datapath to perform the multiplication sequence. The process is started with an input G. As long
as G remains LO, the ASM remains in state IDLE. When G = 1, the multiplication process is started. As
the ASM moves to state MUL0, the carry flipflop is cleared (C << 0), Register A is cleared (A << 0), the
counter is preset to n-1 (P << n-1) and Register Q is loaded with the multiplier.

In state MUL0, the value of each bit of the multiplier (available on Q0) determines whether the multiplicand
is added (Q0 = 1) or not (Q0 = 0). For the case Q0 = 0, the carry flipflop is cleared; for the case Q0 = 1,
the Cout from the adder is stored in the carry flipflop. The next state is always MUL1.

In MUL1, the carry flipflop, Register A and Register Q are treated as a (1 + n + n)-bit register and shifted one
position to the right, together. This is indicated with the notation C|A|Q << shr (C|A|Q) in the ASM chart.
The counter is also decremented (P << P - 1). The value of Z then determines whether to:

- return to state MUL0 (Z = 0) to continue the iteration, OR
- return to state IDLE (Z = 1), thus completing the process. Remember that Z = 1 means that the
  counter has counted down from n-1 to 0 and therefore n iterations have been completed.

State IDLE = 0 therefore indicates that the multiplier is currently multiplying, and when the
ASM returns to state IDLE (IDLE = 1), it indicates that multiplication is completed.
At this point in the design process, the control signals must be identified and their names chosen. This is
done by inspection of the ASM chart and the datapath circuit. On leaving IDLE, the operations P << n - 1,
A << 0 and Q << multiplier are all independent of one another in the datapath and thus can be done
simultaneously; they can therefore share a common control signal (Initialize). However, the operation
C << 0 must have its own control signal (Clear_C) since it occurs both in state IDLE and in MUL0.
Operations C << Cout and A << A + multiplicand, required in state MUL0, can share a control signal
(Load) since they are also independent functions in the datapath. Similarly, the shifting of registers
C|A|Q and decrementing of counter P can share a common control signal (Shift_dec) since they are
independent operations in the datapath and are both required in state MUL1. The names of the control
signals are, of course, a matter of design choice.
We can summarize all the operations that must take place on each component in the datapath and
indicate the corresponding control signal names that should be passed to the datapath in the
following table:

Datapath component   Operation                      Control signal name
-------------------  -----------------------------  -------------------
Carry flipflop       C << 0                         Clear_C
                     C << Cout (from the adder)     Load
Counter P            P << n - 1                     Initialize
                     P << P - 1                     Shift_dec
Register A           A << 0                         Initialize
                     A << A + multiplicand          Load
                     C|A|Q << shr (C|A|Q)           Shift_dec
Register Q           Q << multiplier                Initialize
                     C|A|Q << shr (C|A|Q)           Shift_dec

The state transition diagram for the controller for this ASM is shown below. Note that only the inputs
are shown; the outputs are not indicated:

[State transition diagram: IDLE loops on itself while G = 0 and goes to MUL0 when G = 1; MUL0
always goes to MUL1; MUL1 goes to MUL0 when Z = 0 and back to IDLE when Z = 1.]

From inspection of the state transition diagram, the input equations for the D flipflops (using
one flipflop per state) are easily formed:
DIDLE = G'.IDLE + MUL1.Z
DMUL0 = IDLE.G + MUL1.Z'
DMUL1 = MUL0
From the ASM chart and the table above, the equations for the control signals outputs from the
controller are formed:
Initialize = G.IDLE
Clear_C = G.IDLE + MUL0.Q0'
Load = MUL0.Q0
Shift_dec = MUL1
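These equations can be exercised with a small truth-table check. This is an illustrative sketch only; the signal names follow the text, and Python booleans stand in for logic levels:

```python
def controller(IDLE, MUL0, MUL1, G, Q0, Z):
    """One-hot controller: next-state D inputs and control outputs."""
    D_IDLE = (not G and IDLE) or (MUL1 and Z)
    D_MUL0 = (IDLE and G) or (MUL1 and not Z)
    D_MUL1 = MUL0
    Initialize = G and IDLE
    Clear_C = (G and IDLE) or (MUL0 and not Q0)
    Load = MUL0 and Q0
    Shift_dec = MUL1
    return (D_IDLE, D_MUL0, D_MUL1), (Initialize, Clear_C, Load, Shift_dec)

# In IDLE with G = 1 the next state is MUL0, with Initialize and Clear_C asserted:
print(controller(True, False, False, G=True, Q0=False, Z=False))
# ((False, True, False), (True, True, False, False))
```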
Finally, to provide a mechanism to force the state machine to state IDLE (such as at power-up), an
asynchronous input Reset_to_IDLE is connected to the asynchronous inputs of the flipflops. The
circuit for the controller is then simply an implementation of all of these equations, as follows:


Controller for Binary Multiplier

[Figure: three D flipflops, one per state (IDLE, MUL0, MUL1), share a common clock, with
Reset_to_IDLE tied to their asynchronous inputs. Combinational gates on the inputs Go, Q0 and Z
implement the flipflop input equations and produce the outputs IDLE, Initialize, Clear_C, Load and
Shift_dec.]

Our binary multiplier ASM has the form:


[Figure: the Controller (inputs Go, Reset_to_IDLE and the status signals Z and Q0 from the Datapath;
outputs IDLE and the control signals Initialize, Clear_C, Load and Shift_dec) connected to the
Datapath (data inputs: the n-bit Multiplicand and Multiplier; data output: the 2n-bit Product), both
driven by the same clock.]

Combining the controller and the datapath to form the top level of our design, the binary multiplier may
be viewed as:

[Figure: top-level view of the Binary Multiplier. Inputs: Multiplier (n bits), Multiplicand (n bits),
Go, Reset_to_IDLE and Clock. Outputs: Product (2n bits) and IDLE.]
Note that the IDLE state variable has been brought to the top level since it can be used to indicate when
the Binary Multiplier is busy. The Go and IDLE lines are called handshaking lines and are used to
coordinate the operation of the multiplier with the external world. If IDLE = 1, a multiply can be started
by putting the numbers to be multiplied on the Multiplier and Multiplicand inputs and setting Go = 1, at
which time the state machine jumps to state MUL0 (and therefore, simultaneously, IDLE changes to 0)
to start the process. When IDLE returns to 1, the answer is available on the Product output and another
multiplication can be started. No multiplication should be attempted while IDLE is 0.

Conclusion
This design of a Binary Multiplier is valid for any value of n. For example, for n = 16, the multiplication
of two 16-bit numbers, the datapath components would simply be extended to accommodate 16 bits in
Registers A and Q, and the counter would require log2(16) = 4 bits. The adder would also be required to
be 16 bits in width. However, the same controller implementation can be used since its design is
independent of n. The multiplication time for n = 16 would be 2(16) + 1 = 33 clocks. The product would
contain 32 bits.
Further refinements can be made to enhance the speed and capability of the ASM. For example, in our
algorithm, each 0 in the multiplier input data causes a shift without an add, each taking a clock pulse. If
the multiplier input contains runs of consecutive 0s, a barrel shifter could be used to implement all of
the required shifts (equal to the length of the run of 0s) in a single clock.
Think about this:

What modifications to our design would be required in order to be able to handle signed numbers?


Example: Multiply 12 x 5 = 60 (with n = 4)


Assuming a 4-bit multiplier, in binary this is 1100 x 0101 = 00111100. The
following table summarizes all the values in the ASM for each step in this
multiplication. The left column represents each clock pulse applied to the
multiplier. The multiplication time for this ASM is always 2n+1 clocks (confirm
this with the state transition diagram). Since n = 4, there are 9 clocks required to
complete a multiplication. Multiplication time is not data dependent, as it was in our first
example that used repeated addition!
The first row of the table is the initial state (state IDLE) at which every multiply
begins. Then, for each clock pulse applied, we move down one row in the table.
Counter P in this example has 2 bits (to count 4 iterations) and the zero detect Z
can be seen to be Z = 1 only when Counter P counts down to 00.
The values of registers C, A and Q are shown for each clock pulse in the
process. Note that the multiplier is initially stored in Q, then shifted out to the
right, giving access to each bit of the multiplier at the Q0 (lsb) position. At the
same time, the product shifts in from the left. The product is formed in registers
A and Q, with the addition on each iteration occurring in Register A if the lsb of
Register Q is HI (i.e. Q0 = 1). Notice that registers C, A and Q are shifted on
every iteration and that the final answer 00111100 is contained in Registers A
and Q on the final clock pulse. At this point, we have returned to state IDLE,
indicating that multiplication is complete.
The current state of the ASM is indicated with a 1 in the appropriate States
column. Note that since we are using one flipflop per state, only one of the 3
columns can contain a 1; the others are, of course, 0.
In the Control Signals columns, the values of each control signal are provided
for each clock pulse. Note that Initialize, Clear_C and Load are Mealy-type
outputs since they are a function of both current state and inputs. Shift_dec is a
Moore-type output since it depends only on current state (MUL1) and is not a
function of any input. In fact, Shift_dec = MUL1.
Work through this example line by line to verify its operation.
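The table can also be generated by simulating the controller and datapath together. This Python sketch is illustrative only (the function name is my own, and registers start at 0 rather than the "don't care" x values shown in the table):

```python
def simulate(multiplicand, multiplier, n=4, pbits=2):
    """Clock-by-clock run of the binary multiplier ASM (e.g. 12 x 5, n = 4)."""
    mask = (1 << n) - 1
    state, C, A, Q, P, G = "IDLE", 0, 0, 0, (1 << pbits) - 1, 1
    for clock in range(2 * n + 2):           # at most 2n+1 clocks are needed
        Z, Q0 = int(P == 0), Q & 1
        # control signals, as derived from the ASM chart
        Initialize = state == "IDLE" and G == 1
        Clear_C = Initialize or (state == "MUL0" and Q0 == 0)
        Load = state == "MUL0" and Q0 == 1
        Shift_dec = state == "MUL1"
        print(f"{clock}: {state:4} P={P:0{pbits}b} Z={Z} C={C} "
              f"A={A:0{n}b} Q={Q:0{n}b}")
        # register operations on the clock edge
        if Initialize:
            A, Q, P = 0, multiplier & mask, n - 1
        if Clear_C:
            C = 0
        if Load:
            total = A + multiplicand
            A, C = total & mask, total >> n
        if Shift_dec:
            Q = (Q >> 1) | ((A & 1) << (n - 1))   # lsb of A into Q
            A = (A >> 1) | (C << (n - 1))         # C into A
            C = 0
            P = (P - 1) % (1 << pbits)            # down counter wraps 00 -> 11
        # state transitions
        if state == "IDLE":
            state = "MUL0" if G else "IDLE"
        elif state == "MUL0":
            state = "MUL1"
        else:
            state = "IDLE" if Z else "MUL0"
        if state == "IDLE":
            break
    return (A << n) | Q

print(simulate(12, 5))  # prints one row per clock, then the product 60
```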

Example: 12 x 5

Clock  Counter            Reg    Reg         States           Control Signals
pulse     P    Z    C      A      Q    IDLE MUL0 MUL1  Initialize Clear_C Load Shift_dec
  -      11   0    x    xxxx   xxxx     1    0    0        1       1      0       0
  1      11   0    0    0000   0101     0    1    0        0       0      1       0
  2      11   0    0    1100   0101     0    0    1        0       0      0       1
  3      10   0    0    0110   0010     0    1    0        0       1      0       0
  4      10   0    0    0110   0010     0    0    1        0       0      0       1
  5      01   0    0    0011   0001     0    1    0        0       0      1       0
  6      01   0    0    1111   0001     0    0    1        0       0      0       1
  7      00   1    0    0111   1000     0    1    0        0       1      0       0
  8      00   1    0    0111   1000     0    0    1        0       0      0       1
  9      11   0    0    0011   1100     1    0    0        0       0      0       0
