ECE DEPARTMENT
St. MARTINS ENGINEERING COLLEGE
UNIT1
NUMBER SYSTEM AND BOOLEAN ALGEBRA AND SWITCHING
FUNCTIONS
Introduction:
The expression of numerical quantities is something we tend to take for granted. This is both a
good and a bad thing in the study of electronics. It is good, in that we're accustomed to the use and
manipulation of numbers for the many calculations used in analyzing electronic circuits. On the other
hand, the particular system of notation we've been taught from grade school onward is not the system
used internally in modern electronic computing devices, and learning any different system of notation
requires some reexamination of deeply ingrained assumptions.
First, we have to distinguish the difference between numbers and the symbols we use to
represent numbers. A number is a mathematical quantity, usually correlated in electronics to a physical
quantity such as voltage, current, or resistance. There are many different types of numbers. Here are just
a few types, for example:
WHOLE NUMBERS:
1, 2, 3, 4, 5, 6, 7, 8, 9 . . .
INTEGERS:
-4, -3, -2, -1, 0, 1, 2, 3, 4 . . .
IRRATIONAL NUMBERS:
π (approx. 3.1415927), e (approx. 2.718281828),
square root of any prime
REAL NUMBERS:
(All one-dimensional numerical values, negative and positive,
including zero, whole, integer, and irrational numbers)
COMPLEX NUMBERS:
3 - j4 , 34.5 ∠ -20°
Different types of numbers find different application in the physical world. Whole numbers
work well for counting discrete objects, such as the number of resistors in a circuit. Integers are needed
when negative equivalents of whole numbers are required. Irrational numbers are numbers that cannot
be exactly expressed as the ratio of two integers, and the ratio of a perfect circle's circumference to its
diameter (π) is a good physical example of this. The non-integer quantities of voltage, current, and
resistance that we're used to dealing with in DC circuits can be expressed as real numbers, in either
fractional or decimal form. For AC circuit analysis, however, real numbers fail to capture the dual
essence of magnitude and phase angle, and so we turn to the use of complex numbers in either
rectangular or polar form.
Consider an analog representation of a number: say, a thermometer-style column whose height
indicates the amount of money in a bank account. There is no real limit to how finely divided the height
of that column can be made to symbolize the amount of money in the account. Changing the height of
that column is something that can be done without changing the essential nature of what it is. Length is
a physical quantity that can be divided as small as you would like, with no practical limit. The slide rule
is a mechanical device that uses the very same physical quantity (length) to represent numbers, and to
help perform arithmetical operations with two or more numbers at a time. It, too, is an analog device.
On the other hand, a digital representation of that same monetary figure, written with standard
symbols (sometimes called ciphers), looks like this:
$35,955.38
It is possible, at least theoretically, to set a slide rule (or even a thermometer column) so as to
perfectly represent the number π, because analog symbols have no minimum limit to the degree that
they can be increased or decreased. If my slide rule shows a figure of 3.141593 instead of 3.141592654,
I can bump the slide just a bit more (or less) to get it closer yet. However, with digital representation,
such as with an abacus, I would need additional rods (place holders, or digits) to represent π to further
degrees of precision. An abacus with 10 rods simply cannot represent any more than 10 digits worth of
the number π, no matter how I set the beads. To perfectly represent π, an abacus would have to have an
infinite number of beads and rods! The tradeoff, of course, is the practical limitation to adjusting, and
reading, analog symbols. Practically speaking, one cannot read a slide rule's scale to the 10th digit of
precision, because the marks on the scale are too coarse and human vision is too limited. An abacus, on
the other hand, can be set and read with no interpretational errors at all.
Furthermore, analog symbols require some kind of standard by which they can be compared for
precise interpretation. Slide rules have markings printed along the length of the slides to translate length
into standard quantities. Even the thermometer chart has numerals written along its height to show how
much money (in dollars) the red column represents for any given amount of height. Imagine if we all
tried to communicate simple numbers to each other by spacing our hands apart varying distances:
without a standard of comparison, the interpretation of such analog symbols would be hopelessly
ambiguous. The simplest digital alternative is the "hash mark" system, in which one mark is made for
each object counted. For large numbers, though, the "hash mark" numeration system is too inefficient.
Systems of numeration
The Romans devised a system that was a substantial improvement over hash marks, because it
used a variety of symbols (or ciphers) to represent increasingly large quantities. The notation for 1 is the
capital letter I. The notation for 5 is the capital letter V. Other ciphers possess increasing values:
X = 10
L = 50
C = 100
D = 500
M = 1000
If a cipher is accompanied by another cipher of equal or lesser value to the immediate right of it,
with no ciphers greater than that other cipher to the right of that other cipher, that other cipher's value is
added to the total quantity. Thus, VIII symbolizes the number 8, and CLVII symbolizes the number
157. On the other hand, if a cipher is accompanied by another cipher of lesser value to the immediate
left, that other cipher's value is subtracted from the first. Therefore, IV symbolizes the number 4 (V
minus I), and CM symbolizes the number 900 (M minus C). You might have noticed that ending credit
sequences for most motion pictures contain a notice for the date of production, in Roman numerals. For
the year 1987, it would read: MCMLXXXVII. Let's break this numeral down into its constituent parts, from
left to right:
M = 1000
+ CM = 900
+ L = 50
+ XXX = 30
+ VII = 7
= 1987
Aren't you glad we don't use this system of numeration? Large numbers are very difficult to
denote this way, and the left vs. right / subtraction vs. addition of values can be very confusing, too.
Another major problem with this system is that there is no provision for representing the number zero or
negative numbers, both very important concepts in mathematics. Roman culture, however, was more
pragmatic with respect to mathematics than most, choosing only to develop their numeration system as
far as it was necessary for use in daily life.
We owe one of the most important ideas in numeration to the ancient Babylonians, who were the
first (as far as we know) to develop the concept of cipher position, or place value, in representing larger
numbers. Instead of inventing new ciphers to represent larger numbers, as the Romans did, they reused
the same ciphers, placing them in different positions from right to left. Our own decimal numeration
system uses this concept, with only ten ciphers (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9) used in "weighted"
positions to represent very large and very small numbers.
Each cipher represents an integer quantity, and each place from right to left in the notation
represents a multiplying constant, or weight, for each integer quantity. For example, if we see the
decimal notation "1206", we know that this may be broken down into its constituent weight-products
as such:
1206 = 1000 + 200 + 6
1206 = (1 x 1000) + (2 x 100) + (0 x 10) + (6 x 1)
Each cipher is called a digit in the decimal numeration system, and each weight, or place value,
is ten times that of the one to the immediate right. So, we have a ones place, a tens place, a hundreds
place, a thousands place, and so on, working from right to left.
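The weight-product breakdown described above can be sketched in a few lines of Python (a minimal illustration; the function name is our own):

```python
# Break a decimal numeral into its weight-products, e.g. 1206 = 1x1000 + 2x100 + 0x10 + 6x1.
def weight_products(numeral: str) -> list[tuple[int, int]]:
    """Return (digit, place-weight) pairs for a decimal numeral string."""
    n = len(numeral)
    return [(int(d), 10 ** (n - 1 - i)) for i, d in enumerate(numeral)]

pairs = weight_products("1206")
print(pairs)                           # [(1, 1000), (2, 100), (0, 10), (6, 1)]
print(sum(d * w for d, w in pairs))    # 1206
```

Summing each digit times its weight reconstructs the original quantity, which is exactly what the notation "1206" means.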
Right about now, you're probably wondering why I'm laboring to describe the obvious. Who
needs to be told how decimal numeration works, after you've studied math as advanced as algebra and
trigonometry? The reason is to better understand other numeration systems, by first knowing the how's
and why's of the one you're already used to.
The decimal numeration system uses ten ciphers, and placeweights that are multiples of ten.
What if we made a numeration system with the same strategy of weighted places, except with fewer or
more ciphers?
The binary numeration system is such a system. Instead of ten different cipher symbols, with
each weight constant being ten times the one before it, we only have two cipher symbols, and each
weight constant is twice as much as the one before it. The two allowable cipher symbols for the binary
system of numeration are "1" and "0," and these ciphers are arranged right-to-left in doubling values of
weight. The rightmost place is the ones place, just as with decimal notation. Proceeding to the left, we
have the twos place, the fours place, the eights place, the sixteens place, and so on. For example, the
binary number 11010 breaks down as (1 x 16) + (1 x 8) + (0 x 4) + (1 x 2) + (0 x 1), a total of twenty-six.
This can get quite confusing, as I've written a number with binary numeration (11010), and then
shown its place values and total in standard, decimal numeration form (16 + 8 + 2 = 26). In the above
example, we're mixing two different kinds of numerical notation. To avoid unnecessary confusion, we
have to denote which form of numeration we're using when we write (or type!). Typically, this is done
in subscript form, with a "2" for binary and a "10" for decimal, so the binary number 11010₂ is equal to
the decimal number 26₁₀.
The subscripts are not mathematical operation symbols like superscripts (exponents) are. All
they do is indicate what system of numeration we're using when we write these symbols for other
people to read. If you see "3₁₀", all this means is the number three written using decimal numeration.
However, if you see "3¹⁰", this means something completely different: three to the tenth power (59,049).
As usual, if no subscript is shown, the cipher(s) are assumed to be representing a decimal number.
Commonly, the number of cipher types (and therefore, the place-value multiplier) used in a
numeration system is called that system's base. Binary is referred to as "base two" numeration, and
decimal as "base ten." Additionally, we refer to each cipher position in binary as a bit rather than the
familiar word digit used in the decimal system.
Now, why would anyone use binary numeration? The decimal system, with its ten ciphers,
makes a lot of sense, being that we have ten fingers on which to count between our two hands. (It is
interesting that some ancient central American cultures used numeration systems with a base of twenty.
Presumably, they used both fingers and toes to count!!). But the primary reason that the binary
numeration system is used in modern electronic computers is because of the ease of representing two
cipher states (0 and 1) electronically. With relatively simple circuitry, we can perform mathematical
operations on binary numbers by representing each bit of the numbers by a circuit which is either on
(current) or off (no current). Just like the abacus with each rod representing another decimal digit, we
simply add more circuits to give us more bits to symbolize larger numbers. Binary numeration also
lends itself well to the storage and retrieval of numerical information: on magnetic tape (spots of iron
oxide on the tape either being magnetized for a binary "1" or demagnetized for a binary "0"), optical
disks (a laserburned pit in the aluminum foil representing a binary "1" and an unburned spot
representing a binary "0"), or a variety of other media types.
Before we go on to learning exactly how all this is done in digital circuitry, we need to become
more familiar with binary and other associated systems of numeration.
Decimal versus binary numeration
Let's count from zero to twenty using four different kinds of numeration systems: hash marks,
Roman numerals, decimal, and binary:
Hash Marks                 Roman   Decimal   Binary
n/a                        n/a        0          0
|                          I          1          1
||                         II         2         10
|||                        III        3         11
||||                       IV         4        100
/|||/                      V          5        101
/|||/ |                    VI         6        110
/|||/ ||                   VII        7        111
/|||/ |||                  VIII       8       1000
/|||/ ||||                 IX         9       1001
/|||/ /|||/                X         10       1010
/|||/ /|||/ |              XI        11       1011
/|||/ /|||/ ||             XII       12       1100
/|||/ /|||/ |||            XIII      13       1101
/|||/ /|||/ ||||           XIV       14       1110
/|||/ /|||/ /|||/          XV        15       1111
/|||/ /|||/ /|||/ |        XVI       16      10000
/|||/ /|||/ /|||/ ||       XVII      17      10001
/|||/ /|||/ /|||/ |||      XVIII     18      10010
/|||/ /|||/ /|||/ ||||     XIX       19      10011
/|||/ /|||/ /|||/ /|||/    XX        20      10100
Neither hash marks nor the Roman system are very practical for symbolizing large numbers.
Obviously, placeweighted systems such as decimal and binary are more efficient for the task. Notice,
though, how much shorter decimal notation is over binary notation, for the same number of quantities.
What takes five bits in binary notation only takes two digits in decimal notation.
This raises an interesting question regarding different numeration systems: how large of a
number can be represented with a limited number of cipher positions, or places? With the crude hash-mark system, the number of places IS the largest number that can be represented, since one hash mark
"place" is required for every integer step. For placeweighted systems of numeration, however, the
answer is found by taking the base of the numeration system (10 for decimal, 2 for binary) and raising it to
the power of the number of places. For example, 5 digits in a decimal numeration system can represent
100,000 different integer number values, from 0 to 99,999 (10 to the 5th power = 100,000). 8 bits in a
binary numeration system can represent 256 different integer number values, from 0 to 11111111
(binary), or 0 to 255 (decimal), because 2 to the 8th power equals 256. With each additional place
position added to the number field, the capacity for representing numbers increases by a factor of the
base (10 for decimal, 2 for binary).
An interesting footnote to this topic concerns one of the first electronic digital computers, the
ENIAC. The designers of the ENIAC chose to represent numbers in decimal form, digitally, using a series
of circuits called "ring counters" instead of just going with the binary numeration system, in an effort to
minimize the number of circuits required to represent and calculate very large numbers. This approach
turned out to be counterproductive, and virtually all digital computers since then have been purely
binary in design.
To convert a number in binary numeration to its equivalent in decimal form, all you have to do
is calculate the sum of all the products of bits with their respective placeweight constants. To illustrate:
Convert 11001101₂ to decimal form:

bits =       1     1     0     0     1     1     0     1
weight =   128    64    32    16     8     4     2     1
(in decimal notation)
The bit on the far right side is called the Least Significant Bit (LSB), because it stands in the
place of the lowest weight (the one's place). The bit on the far left side is called the Most Significant Bit
(MSB), because it stands in the place of the highest weight (the one hundred twenty-eight's place).
Remember, a bit value of "1" means that the respective place weight gets added to the total value, and a
bit value of "0" means that the respective place weight does not get added to the total value. With the
above example, we have:
128₁₀ + 64₁₀ + 8₁₀ + 4₁₀ + 1₁₀ = 205₁₀
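The sum-of-weight-products procedure just described can be sketched in Python (the function name is our own; a minimal illustration, not a library routine):

```python
def binary_to_decimal(bits: str) -> int:
    """Sum each bit times its place weight (the MSB has weight 2**(n-1))."""
    total = 0
    for bit in bits:
        total = total * 2 + int(bit)  # shifting left doubles every prior weight
    return total

print(binary_to_decimal("11001101"))  # 205, i.e. 128 + 64 + 8 + 4 + 1
print(binary_to_decimal("11010"))     # 26
```

The running-total trick (double, then add the next bit) is equivalent to summing explicit weight-products, just organized left to right.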
If we encounter a binary number with a dot (.), called a "binary point" instead of a decimal point,
we follow the same procedure, realizing that each place weight to the right of the point is one-half the
value of the one to the left of it (just as each place weight to the right of a decimal point is one-tenth the
weight of the one to the left of it). For example:
Convert 101.011₂ to decimal form:

bits =       1     0     1   .   0     1     1
weight =     4     2     1      1/2   1/4   1/8
(in decimal notation)

4₁₀ + 1₁₀ + 0.25₁₀ + 0.125₁₀ = 5.375₁₀
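The same procedure extended past the binary point can be sketched in Python (again, the function name is our own):

```python
def binary_to_decimal_frac(s: str) -> float:
    """Convert a binary numeral with an optional binary point to decimal."""
    whole, _, frac = s.partition(".")
    value = float(int(whole, 2)) if whole else 0.0
    weight = 0.5                      # first place right of the point is 1/2
    for bit in frac:
        value += int(bit) * weight
        weight /= 2                   # each further place halves the weight
    return value

print(binary_to_decimal_frac("101.011"))  # 5.375
```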
Counting from zero to twenty in decimal, binary, octal, and hexadecimal:

Decimal   Binary   Octal   Hexadecimal
   0          0       0        0
   1          1       1        1
   2         10       2        2
   3         11       3        3
   4        100       4        4
   5        101       5        5
   6        110       6        6
   7        111       7        7
   8       1000      10        8
   9       1001      11        9
  10       1010      12        A
  11       1011      13        B
  12       1100      14        C
  13       1101      15        D
  14       1110      16        E
  15       1111      17        F
  16      10000      20       10
  17      10001      21       11
  18      10010      22       12
  19      10011      23       13
  20      10100      24       14
Octal and hexadecimal numeration systems would be pointless if not for their ability to be easily
converted to and from binary notation. Their primary purpose in being is to serve as a "shorthand"
method of denoting a number represented electronically in binary form. Because the bases of octal
(eight) and hexadecimal (sixteen) are powers of binary's base (two), binary bits can be grouped
together and directly converted to or from their respective octal or hexadecimal digits. With octal, the
binary bits are grouped in threes (because 2³ = 8), and with hexadecimal, the binary bits are grouped in
fours (because 2⁴ = 16):
BINARY TO OCTAL CONVERSION
Convert 10110111.1₂ to octal:

          implied zero            implied zeros
bits =        010   110   111   .   100
octal =        2     6     7    .    4

Answer: 10110111.1₂ = 267.4₈

BINARY TO HEXADECIMAL CONVERSION
Convert 10110111.1₂ to hexadecimal:

                                   implied zeros
bits =        1011   0111   .   1000
hexadecimal =   B      7    .     8

Answer: 10110111.1₂ = B7.8₁₆
Here we had to group the bits in fours, from the binary point left, and from the binary point
right, adding (implied) zeros as necessary to make complete 4-bit groups.
Likewise, the conversion from either octal or hexadecimal to binary is done by taking each octal
or hexadecimal digit and converting it to its equivalent binary (3 or 4 bit) group, then putting all the
binary bit groups together.
Incidentally, hexadecimal notation is more popular, because binary bit groupings in digital
equipment are commonly multiples of eight (8, 16, 32, 64, and 128 bit), which are also multiples of 4.
Octal, being based on binary bit groups of 3, doesn't work out evenly with those common bit group
sizings.
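The bit-grouping trick described above is easy to mechanize. Here is a small Python sketch (the function name is our own) that left-pads a bit string with implied zeros and maps each group to an octal or hexadecimal digit; it handles whole numbers only, not binary points:

```python
def binary_to_base(bits: str, group: int) -> str:
    """Group bits from the right (group=3 for octal, group=4 for hexadecimal)
    and convert each group directly to its digit."""
    digits = "0123456789ABCDEF"
    padded = bits.zfill(-(-len(bits) // group) * group)  # left-pad with implied zeros
    return "".join(digits[int(padded[i:i + group], 2)]
                   for i in range(0, len(padded), group))

print(binary_to_base("10110111", 3))  # 267 (octal)
print(binary_to_base("10110111", 4))  # B7  (hexadecimal)
```

Because each group converts independently, the conversion never requires any arithmetic on the whole number, which is exactly why octal and hexadecimal work so well as binary shorthand.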
Octal and hexadecimal to decimal conversion
Although the prime intent of octal and hexadecimal numeration systems is for the "shorthand"
representation of binary numbers in digital electronics, we sometimes have the need to convert from
either of those systems to decimal form. Of course, we could simply convert the hexadecimal or octal
format to binary, then convert from binary to decimal, since we already know how to do both, but we
can also convert directly.
Because octal is a base-eight numeration system, each place-weight value differs from either
adjacent place by a factor of eight. For example, the octal number 245.37₈ can be broken down into place
values as such:
octal
digits =     2     4     5   .   3      7
weight =    64     8     1      1/8    1/64
(in decimal notation)

245.37₈ = (2 x 64₁₀) + (4 x 8₁₀) + (5 x 1₁₀) + (3 x 0.125₁₀) + (7 x 0.015625₁₀) = 165.484375₁₀
The technique for converting hexadecimal notation to decimal is the same, except that each
successive place-weight changes by a factor of sixteen. Simply denote each digit's weight, multiply each
hexadecimal digit value by its respective weight (in decimal form), then add up all the decimal values to
get a total. For example, the hexadecimal number 30F.A9₁₆ can be converted like this:
hexadecimal
digits =     3      0      F   .   A       9
weight =   256     16      1      1/16    1/256
(in decimal notation)

30F.A9₁₆ = (3 x 256₁₀) + (0 x 16₁₀) + (15 x 1₁₀) + (10 x 0.0625₁₀) + (9 x 0.00390625₁₀) = 783.66015625₁₀
These basic techniques may be used to convert a numerical notation of any base into decimal
form, if you know the value of that numeration system's base.
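This multiply-each-digit-by-its-weight technique can be sketched once, for any base, in Python (the function name is our own; digits above 9 are assumed to use the letters A-F):

```python
def to_decimal(numeral: str, base: int) -> float:
    """Multiply each digit by its place weight; handles a radix point."""
    digits = "0123456789ABCDEF"
    whole, _, frac = numeral.upper().partition(".")
    value = 0.0
    for ch in whole:
        value = value * base + digits.index(ch)   # each step scales prior weights
    weight = 1.0 / base                           # first place right of the point
    for ch in frac:
        value += digits.index(ch) * weight
        weight /= base
    return value

print(to_decimal("245.37", 8))   # 165.484375
print(to_decimal("30F.A9", 16))  # 783.66015625
```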
Conversion from decimal numeration
Because octal and hexadecimal numeration systems have bases that are multiples of binary (base
2), conversion back and forth between either hexadecimal or octal and binary is very easy. Also,
because we are so familiar with the decimal system, converting binary, octal, or hexadecimal to decimal
form is relatively easy (simply add up the products of cipher values and place-weights). However,
conversion from decimal to any of these "strange" numeration systems is a different matter.
The method which will probably make the most sense is the "trial-and-fit" method, where you
try to "fit" the binary, octal, or hexadecimal notation to the desired value as represented in decimal
form. For example, let's say that I wanted to represent the decimal value of 87 in binary form. Let's start
by drawing a binary number field, complete with placeweight values:
weight =   128    64    32    16     8     4     2     1
(in decimal notation)

We know there can be no "1" in the 128's place, because 128 by itself is greater than 87. The 64's
place, however, must hold a "1", since 64 is the largest place weight that does not exceed 87:

bits =            1
weight =   128   64    32    16     8     4     2     1
(in decimal notation)
If we were to make the next place to the right a "1" as well, our total value would be 64₁₀ + 32₁₀,
or 96₁₀. This is greater than 87₁₀, so we know that this bit must be a "0". If we make the next (16's) place
bit equal to "1," this brings our total value to 64₁₀ + 16₁₀, or 80₁₀, which is closer to our desired value
(87₁₀) without exceeding it:
bits =            1     0     1
weight =   128   64    32    16     8     4     2     1
(in decimal notation)
By continuing in this progression, setting each lesserweight bit as we need to come up to our
desired total value without exceeding it, we will eventually arrive at the correct figure:
bits =            1     0     1     0     1     1     1
weight =   128   64    32    16     8     4     2     1
(in decimal notation)

64₁₀ + 16₁₀ + 4₁₀ + 2₁₀ + 1₁₀ = 87₁₀

So 87₁₀ = 1010111₂.
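The trial-and-fit procedure for binary amounts to a greedy scan of the place weights from largest to smallest, setting each bit only when its weight still fits under the target. A minimal Python sketch (the function name is our own):

```python
def decimal_to_binary_trial_fit(n: int) -> str:
    """Set each bit from the largest weight down, keeping the running total <= n."""
    if n == 0:
        return "0"
    weight = 1
    while weight * 2 <= n:            # find the largest place weight that fits
        weight *= 2
    bits, total = "", 0
    while weight >= 1:
        if total + weight <= n:       # the weight fits without exceeding n: set the bit
            bits += "1"
            total += weight
        else:
            bits += "0"
        weight //= 2
    return bits

print(decimal_to_binary_trial_fit(87))  # 1010111
```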
This trial-and-fit strategy will work with octal and hexadecimal conversions, too. Let's take the
same decimal figure, 87₁₀, and convert it to octal numeration:
weight =   64    8    1
(in decimal notation)
If we put a cipher of "1" in the 64's place, we would have a total value of 64₁₀ (less than 87₁₀). If
we put a cipher of "2" in the 64's place, we would have a total value of 128₁₀ (greater than 87₁₀). This
tells us that our octal numeration must start with a "1" in the 64's place:
octal digits =    1
weight =         64    8    1
(in decimal notation)
Now, we need to experiment with cipher values in the 8's place to try and get a total (decimal)
value as close to 87 as possible without exceeding it. Trying the first few cipher options, we get:
"1" = 64₁₀ + 8₁₀ = 72₁₀
"2" = 64₁₀ + 16₁₀ = 80₁₀
"3" = 64₁₀ + 24₁₀ = 88₁₀
A cipher value of "3" in the 8's place would put us over the desired total of 87₁₀, so "2" it is!
octal digits =    1    2
weight =         64    8    1
(in decimal notation)
Now, all we need to make a total of 87 is a cipher of "7" in the 1's place:
octal digits =    1    2    7
weight =         64    8    1
(in decimal notation)

Answer: 87₁₀ = 127₈
Of course, if you were paying attention during the last section on octal/binary conversions, you
will realize that we can take the binary representation of (decimal) 87₁₀, which we previously
determined to be 1010111₂, and easily convert from that to octal to check our work:
          implied zeros
bits =        001   010   111
octal =        1     2     7

Answer: 1010111₂ = 127₈
Another way to convert from decimal is the repeated-division technique: divide the decimal
number by the base (2 for binary) over and over, keeping the remainder of each step. Converting 87₁₀
to binary:

87 / 2 = 43.5    remainder = 1  (LSB)
43 / 2 = 21.5    remainder = 1
21 / 2 = 10.5    remainder = 1
10 / 2 = 5.0     remainder = 0
5 / 2 = 2.5      remainder = 1
2 / 2 = 1.0      remainder = 0
1 / 2 = 0.5      remainder = 1  (MSB)

The binary bits are assembled from the remainders of the successive division steps, beginning
with the LSB and proceeding to the MSB. In this case, we arrive at a binary notation of 1010111₂.
When we divide by 2, we will always get a quotient ending with either ".0" or ".5", i.e. a remainder of
either 0 or 1. As was said before, this repeat-division technique for conversion will work for numeration
systems other than binary. If we were to perform successive divisions using a different number, such as
8 for conversion to octal, we will necessarily get remainders between 0 and 7. Let's try this with the
same decimal number, 87₁₀:
87 / 8 = 10.875    remainder = 7  (least significant digit)
10 / 8 = 1.25      remainder = 2
1 / 8 = 0.125      remainder = 1  (most significant digit)

RESULT: 87₁₀ = 127₈
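The repeated-division technique works for any base, and a short Python sketch makes the loop explicit (the function name is our own):

```python
def decimal_to_base_division(n: int, base: int) -> str:
    """Repeatedly divide by the base; remainders give the digits, LSD first."""
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = ""
    while n > 0:
        n, remainder = divmod(n, base)
        out = digits[remainder] + out  # each new remainder is the next digit leftward
    return out

print(decimal_to_base_division(87, 2))  # 1010111
print(decimal_to_base_division(87, 8))  # 127
```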
We can use a similar technique for converting numeration systems dealing with quantities less
than 1, as well. For converting a decimal number less than 1 into binary, octal, or hexadecimal, we use
repeated multiplication, taking the integer portion of the product in each step as the next digit of our
converted number. Let's use the decimal number 0.8125₁₀ as an example, converting to binary:
0.8125 x 2 = 1.625    integer part = 1  (first bit after the point)
0.625 x 2 = 1.25      integer part = 1
0.25 x 2 = 0.5        integer part = 0
0.5 x 2 = 1.0         integer part = 1

RESULT: 0.8125₁₀ = 0.1101₂
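The repeated-multiplication technique can also be sketched in Python (the function name is our own; a digit-count limit is included since many fractions never terminate):

```python
def fraction_to_base(frac: float, base: int, places: int = 16) -> str:
    """Repeatedly multiply by the base; each integer part is the next digit rightward."""
    digits = "0123456789ABCDEF"
    out = ""
    while frac > 0 and len(out) < places:
        frac *= base
        digit = int(frac)        # integer portion of the product
        out += digits[digit]
        frac -= digit            # keep only the fractional remainder
    return out

print(fraction_to_base(0.8125, 2))  # 1101 (i.e. 0.8125 = 0.1101 in binary)
```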
As with the repeat-division process for integers, each step gives us the next digit (or bit) further
away from the "point." With integer division, we worked from the LSB to the MSB (right to left), but
with repeated multiplication, we work from left to right. To convert a decimal number greater
than 1 that also has a fractional component, we must use both techniques, one at a time. Take the
decimal example of 54.40625₁₀, converting to binary:
REPEATED DIVISION FOR THE INTEGER PORTION:

54 / 2 = 27.0    remainder = 0  (LSB)
27 / 2 = 13.5    remainder = 1
13 / 2 = 6.5     remainder = 1
6 / 2 = 3.0      remainder = 0
3 / 2 = 1.5      remainder = 1
1 / 2 = 0.5      remainder = 1  (MSB)

PARTIAL ANSWER: 54₁₀ = 110110₂

REPEATED MULTIPLICATION FOR THE FRACTIONAL PORTION:

0.40625 x 2 = 0.8125    integer part = 0
0.8125 x 2 = 1.625      integer part = 1
0.625 x 2 = 1.25        integer part = 1
0.25 x 2 = 0.5          integer part = 0
0.5 x 2 = 1.0           integer part = 1

PARTIAL ANSWER: 0.40625₁₀ = 0.01101₂

COMPLETE ANSWER: 54.40625₁₀ = 110110.01101₂
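Chaining the two techniques, division for the integer portion and multiplication for the fractional portion, gives a complete decimal-to-binary converter. A minimal sketch (the function name is our own):

```python
def decimal_to_binary(x: float, max_frac_bits: int = 24) -> str:
    """Repeated division for the integer part, repeated multiplication for the rest."""
    whole = int(x)
    frac = x - whole
    int_bits = ""
    while whole > 0:
        whole, r = divmod(whole, 2)
        int_bits = str(r) + int_bits       # remainders build the integer bits, LSB first
    frac_bits = ""
    while frac > 0 and len(frac_bits) < max_frac_bits:
        frac *= 2
        bit = int(frac)                    # integer part of each product is the next bit
        frac_bits += str(bit)
        frac -= bit
    return (int_bits or "0") + ("." + frac_bits if frac_bits else "")

print(decimal_to_binary(54.40625))  # 110110.01101
```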
Sign-and-magnitude method
One may first approach the problem of representing a number's sign by allocating one sign bit to
represent the sign: set that bit (often the most significant bit) to 0 for a positive number, and set it to 1 for a
negative number. The remaining bits in the number indicate the magnitude (or absolute value). Hence in
a byte with only 7 bits (apart from the sign bit), the magnitude can range from 0000000 (0) to 1111111
(127). Thus you can represent numbers from -127₁₀ to +127₁₀ once you add the sign bit (the eighth bit).
A consequence of this representation is that there are two ways to represent zero, 00000000 (+0) and
10000000 (-0). Decimal -43 encoded in an eight-bit byte this way is 10101011. This approach is
directly comparable to the common way of showing a sign (placing a "+" or "-" next to the number's
magnitude). Some early binary computers (e.g. the IBM 7090) used this representation, perhaps because of
its natural relation to common usage. Sign-and-magnitude is the most common way of representing the
significand in floating-point values.
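The encoding rule above, one sign bit followed by seven magnitude bits, can be sketched in a couple of lines of Python (the function name is our own):

```python
def sign_magnitude(n: int) -> str:
    """Encode n as an 8-bit sign-and-magnitude string: sign bit then 7 magnitude bits."""
    assert -127 <= n <= 127          # the representable range for 7 magnitude bits
    sign = "1" if n < 0 else "0"
    return sign + format(abs(n), "07b")

print(sign_magnitude(-43))  # 10101011
print(sign_magnitude(43))   # 00101011
```

Note that `sign_magnitude(0)` gives 00000000 while a "negative zero" would be 10000000, illustrating the two representations of zero that this scheme allows.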
One's complement
8-bit one's complement

Binary value    One's complement    Unsigned
                interpretation      interpretation
00000000        +0                  0
00000001        1                   1
...             ...                 ...
01111101        125                 125
01111110        126                 126
01111111        127                 127
10000000        -127                128
10000001        -126                129
10000010        -125                130
...             ...                 ...
11111101        -2                  253
11111110        -1                  254
11111111        -0                  255
Alternatively, a system known as one's complement can be used to represent negative numbers.
The one's complement form of a negative binary number is the bitwise NOT applied to its positive
counterpart (its "complement"). Like sign-and-magnitude representation, one's complement
has two representations of 0: 00000000 (+0) and 11111111 (-0).
As an example, the one's complement form of 00101011 (43₁₀) becomes 11010100 (-43₁₀). The
range of signed numbers using one's complement is -(2^(N-1) - 1) to +(2^(N-1) - 1), plus the two zeros. A
conventional eight-bit byte covers -127₁₀ to +127₁₀, with zero being either 00000000 (+0) or 11111111 (-0).
To add two numbers represented in this system, one does a conventional binary addition, but it is
then necessary to add any resulting carry back into the resulting sum. To see why this is necessary,
consider the following example showing the case of the addition of -1 (11111110) to +2 (00000010):
    binary      decimal
  11111110         -1
+ 00000010         +2
----------       ----
1 00000000          0   <-- not the correct answer
        +1         +1   <-- add carry
----------       ----
  00000001          1   <-- correct answer
In the previous example, the binary addition alone gives 00000000, which is incorrect. Only
when the carry is added back in does the correct result (00000001) appear.
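The end-around carry rule can be demonstrated with a short Python sketch (the function name is our own; it operates on 8-bit patterns held as plain integers):

```python
def ones_complement_add(a: int, b: int, bits: int = 8) -> int:
    """Add two one's-complement bit patterns; fold any carry back into the sum."""
    total = a + b
    mask = (1 << bits) - 1           # 0xFF for 8 bits
    if total > mask:                 # a carry came out of the top bit
        total = (total & mask) + 1   # end-around carry: add it back in
    return total

minus_one = 0b11111110   # one's complement of 00000001
plus_two  = 0b00000010
print(format(ones_complement_add(minus_one, plus_two), "08b"))  # 00000001 (+1)
```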
This numeric representation system was common in older computers; the PDP-1, CDC 160A,
and UNIVAC 1100/2200 series, among many others, used one's-complement arithmetic.
A remark on terminology: the system is referred to as "one's complement" because the negation
of a positive value x (represented as the bitwise NOT of x) can also be formed by subtracting x from the
all-ones pattern 11111111, that is, from 2^N - 1.
8-bit two's complement

Binary value    Two's complement    Unsigned
                interpretation      interpretation
00000000        0                   0
00000001        1                   1
...             ...                 ...
01111110        126                 126
01111111        127                 127
10000000        -128                128
10000001        -127                129
10000010        -126                130
...             ...                 ...
11111110        -2                  254
11111111        -1                  255
The problems of multiple representations of 0 and the need for the end-around carry are
circumvented by a system called two's complement. In two's complement, negative numbers are
represented by the bit pattern which is one greater (in an unsigned sense) than the one's complement of
the positive value. In two's complement, there is only one zero (00000000). Negating a number (whether
negative or positive) is done by inverting all the bits and then adding 1 to that result. Addition of a pair
of two's-complement integers is the same as addition of a pair of unsigned numbers (except for
detection of overflow, if that is done). For instance, a two's-complement addition of 127 and -128 gives
the same binary bit pattern as an unsigned addition of 127 and 128, as can be seen from the 8-bit two's
complement table. An easier method to get the negation of a number in two's complement is as follows:
starting at the right, copy each bit up to and including the first "1", then invert every bit to the left of it:

Example 1           Example 2
0101001             0101100
1010111             1010100
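Both negation methods, "invert all bits then add 1" and the copy-then-invert shortcut, give the same result. A minimal Python sketch of the first method (the function name is our own):

```python
def twos_complement_negate(bits: str) -> str:
    """Negate a two's-complement bit string: invert every bit, then add 1."""
    inverted = "".join("1" if b == "0" else "0" for b in bits)
    width = len(bits)
    # The modulo keeps the result within the original bit width (negating 0 gives 0).
    return format((int(inverted, 2) + 1) % (1 << width), f"0{width}b")

print(twos_complement_negate("0101001"))  # 1010111 (Example 1)
print(twos_complement_negate("0101100"))  # 1010100 (Example 2)
```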
Binary Codes
Binary codes are codes represented in the binary system, often with some modification of plain
positional notation. Below we will look at two of them: the parity bit and the Hamming code.
Parity bit:
A parity bit is an extra bit that is attached to information being sent from one position to another,
just to detect any error in the information during transmission. This bit is used to make the number of 1s
in the message either even or odd. If any bit changes during transmission, the number of 1s changes
from odd to even (or from even to odd), which can be detected. This is how the parity bit helps to detect
errors. So whenever we need to send information, we first pass it through a PARITY GENERATOR
CIRCUIT and then send it. At the receiving end, the information is passed through a PARITY CHECK
CIRCUIT, and only if the parity matches do we use the information. If we make the number of 1s even,
it is called even parity; if the number of 1s is made odd, it is called odd parity.
But there are a few drawbacks attached to this method:
o This system can detect only odd numbers of errors, not even numbers. That means if there
are 2 or 4 or 6 etc. bit errors, they go undetected, while it can easily detect 1, 3, 5 etc. errors.
o We cannot determine the position of an error even when we are able to detect it.
E.g. Find the parity bit (odd) for the message 1101 and show how it helps in detecting errors.
As 1101 has 3, i.e. an odd number of, 1s, P = 0, so that we still have an odd number of 1s in the
combination of 5 bits (message (4 bits) and parity bit (1 bit)).
So the message we send with parity is 11010 (the 5th bit from the left is the parity bit).
Let's now see the effect of errors on this message.
1 error: Suppose we have an error in the 3rd bit from the left, so the bit at the 3rd position changes from
0 to 1. Hence the message received is 11110 instead of 11010. At the receiver we check the parity of the
message and see that it has an even number of 1s where it should have been odd, so we have detected
the error, although we are not sure about its position. As the error is detected, we can make a request for
another send of the same message.
2 errors: Suppose we have errors at positions 1 and 4 from the left, so we have a bit change from 1 to 0
at the 1st position and from 1 to 0 at the 4th position. The message received is 01000, which has an odd
number of 1s, the same parity status as the sent message, so no error is detected. We can see this
method is unable to detect an even number of errors.
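The parity-generator and parity-check operations above can be sketched in Python (function names are our own; they model the circuits, not replace them):

```python
def add_odd_parity(message: str) -> str:
    """Append a parity bit so the total number of 1s (message + parity) is odd."""
    parity = "0" if message.count("1") % 2 == 1 else "1"
    return message + parity

def odd_parity_ok(received: str) -> bool:
    """Receiver-side check: the count of 1s should still be odd."""
    return received.count("1") % 2 == 1

sent = add_odd_parity("1101")    # three 1s already odd, so P = 0
print(sent)                      # 11010
print(odd_parity_ok("11110"))    # False: single-bit error detected
print(odd_parity_ok("01000"))    # True:  double error slips through undetected
```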
Hamming code:
This code is used for single error correction, i.e. using this code we can detect and correct a single
error. In the parity bit method we used only a single extra bit, but in this method the number of extra
bits (which are also parity bits) varies with the number of message bits, according to the relation
2^p >= m + p + 1 (where m is the number of message bits and p the number of parity bits).
Suppose we have m = 4 information bits; then we determine the number of parity bits from the relation:
2^p >= 4 + p + 1
2^p >= 5 + p
For p = 2: 2² = 4 < 7, not satisfied.
For p = 3: 2³ = 8 >= 8, satisfied.
So now we have 4 information bits and 3 parity bits, a total of 7 bits. In the parity bit method, we
placed the parity bit at the rightmost position. But here we don't place the extra bits consecutively; their
positions are fixed by the following rule: the parity bits occupy the positions whose numbers are powers
of two (1, 2, 4, 8, . . .).
As we need only three positions, we pick the first 3, which are 1, 2, and 4.
So we have the composition of the Hamming code as follows:
Position:   1    2    3    4    5    6    7
Bit:        P1   P2   M1   P3   M2   M3   M4
Now we have to decide which positions in the Hamming code are covered by each parity bit, i.e. which
positions are considered when the value of each parity bit is decided. We'll be using the following rule:
E.g. Consider the parity bit P1; we have to find the positions of the message bits we'll cover
with this parity bit.
First, write the binary equivalents of the positions of the code bits:
Position:   1     2     3     4     5     6     7
Binary:     001   010   011   100   101   110   111
Bit:        P1    P2    M1    P3    M2    M3    M4
Now lets see in the binary equivalent of position of parity bit P1 that at which position we have 1and we
see 1 is at LSB so we select the message bits which have positions with 1 at LSB which are M1, M2 and
M4 So P1 bit would check the parity for M1, M2 and M4
E.g. consider the parity bit P2 (position 010): it has a 1 in the second position from the right, so we choose the message bits whose position's binary equivalent has a 1 in the second position. Hence we get the message bits M1 (011), M3 (110) and M4 (111). So P2 checks the parity for M1, M3 and M4.
Similarly, P3 (position 100) has a 1 in the third position; the message bits with a 1 in the third position of their address are M2, M3 and M4.
So we now have
P1 Checks bit number 1,3,5,7
P2 Checks bit number 2,3,6,7
P3 Checks bit number 4,5,6,7
These parity bits can be either even- or odd-parity bits, but all parity bits must be the same, i.e. all odd or all even.
E.g. Let's form the Hamming code for the 4-bit message 1101 using even parity bits, and check how it is able to detect and correct an error.
As we have already decided the parity-bit positions and their corresponding message bits for a 4-bit message, we have M1 = 1, M2 = 1, M3 = 0, M4 = 1.
For the moment we have the Hamming code as P1 P2 1 P3 1 0 1.
P1 covers M1, M2, M4 = 1, 1, 1, so for even parity P1 = 1.
P2 covers M1, M3, M4 = 1, 0, 1, so P2 = 0.
P3 covers M2, M3, M4 = 1, 0, 1, so P3 = 0.
Hence the transmitted code is 1 0 1 0 1 0 1.
Now suppose the received message is 1010001, i.e. bit 5 has changed. Rechecking the parities:
C1 (bits 1, 3, 5, 7 = 1, 1, 0, 1) fails even parity, so C1 = 1.
C2 (bits 2, 3, 6, 7 = 0, 1, 0, 1) passes, so C2 = 0.
C3 (bits 4, 5, 6, 7 = 0, 0, 0, 1) fails, so C3 = 1.
The check word C3 C2 C1 = 101 = 5. Hence we have a bit change at position 5, so we re-change it to get the correct value: changing the 5th bit of the received message 1010001 gives 1010101, which is the correct one.
So we see the Hamming code is able to detect and correct a single error.
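The two procedures above, placing the parity bits and locating a flipped bit, can be sketched in Python. This is an illustrative sketch (the function names are my own, and even parity is assumed as in the worked example):

```python
# Parity coverage as derived above: P1 -> 1,3,5,7   P2 -> 2,3,6,7   P3 -> 4,5,6,7
COVER = {1: (1, 3, 5, 7), 2: (2, 3, 6, 7), 4: (4, 5, 6, 7)}

def hamming74_encode(m1, m2, m3, m4):
    """Place message bits at positions 3,5,6,7 and fill even-parity bits at 1,2,4."""
    code = {3: m1, 5: m2, 6: m3, 7: m4}
    for p, positions in COVER.items():
        code[p] = sum(code.get(q, 0) for q in positions if q != p) % 2
    return [code[i] for i in range(1, 8)]

def hamming74_correct(bits):
    """Locate and flip a single bad bit; returns the corrected 7-bit word."""
    code = {i + 1: b for i, b in enumerate(bits)}
    # each failing check contributes its position weight to the error address
    err = sum(p for p, positions in COVER.items()
              if sum(code[q] for q in positions) % 2 != 0)
    if err:
        code[err] ^= 1
    return [code[i] for i in range(1, 8)]

sent = hamming74_encode(1, 1, 0, 1)
print(sent)                          # [1, 0, 1, 0, 1, 0, 1] as in the worked example
received = sent.copy()
received[4] ^= 1                     # corrupt bit 5 (index 4), giving 1010001
print(hamming74_correct(received))   # recovers [1, 0, 1, 0, 1, 0, 1]
```

Note how the failing checks add up to the error position directly, which is the same C3 C2 C1 calculation done by hand above.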
E.g. Now form a Hamming code for the 5-bit message 10110 with odd parity.
m = 5, and we have to satisfy
2^p >= m + p + 1
p = 4 satisfies this, since 2^4 = 16 >= 5 + 4 + 1 = 10, but p = 3 does not, since 2^3 = 8 < 5 + 3 + 1 = 9.
So p = 4, and hence a total of 9 bits.
The parity-bit positions are 1, 2, 4 and 8, and hence the composition of the Hamming code is as follows:
Bit position:  1      2      3      4      5      6      7      8      9
Binary:        0001   0010   0011   0100   0101   0110   0111   1000   1001
Contents:      P1     P2     M1     P3     M2     M3     M4     P4     M5
Now, P1 has a 1 at the LSB of its position, so the message bits covered by this parity bit are M1, M2, M4 and M5.
Similarly we see P2 checks M1, M3 and M4; P3 checks M2, M3 and M4; and P4 checks M5.
Or we can also put it as:
P1 checks code bits 1, 3, 5, 7, 9
P2 checks code bits 2, 3, 6, 7
P3 checks code bits 4, 5, 6, 7
P4 checks code bits 8, 9
For the message 10110 (M1 = 1, M2 = 0, M3 = 1, M4 = 1, M5 = 0) the Hamming code is:

Bit position:  1    2    3   4    5   6   7   8    9
Contents:      P1   P2   1   P3   0   1   1   P4   0

With odd parity:
P1 covers bits 3, 5, 7, 9 = 1, 0, 1, 0 (two 1s), so P1 = 1.
P2 covers bits 3, 6, 7 = 1, 1, 1 (three 1s), so P2 = 0.
P3 covers bits 5, 6, 7 = 0, 1, 1 (two 1s), so P3 = 1.
P4 covers bit 9 = 0 (no 1s), so P4 = 1.
Hence the Hamming code is 1 0 1 1 0 1 1 1 0.
PART 2
BOOLEAN ALGEBRA
Introduction
The most obvious way to simplify Boolean expressions is to manipulate them in the same way
as normal algebraic expressions are manipulated. With regards to logic relations in digital forms, a set
of rules for symbolic manipulation is needed in order to solve for the unknowns.
A set of rules formulated by the English mathematician George Boole describes certain propositions whose outcome would be either true or false. With regard to digital logic, these rules are used to describe circuits whose state can be either 1 (true) or 0 (false). In order to fully understand this, the relation between the AND, OR and NOT gate operations should be appreciated. A number of rules can be derived from these relations, as Table 1 demonstrates.
P1: X = 0 or X = 1
P2: 0 . 0 = 0
P3: 1 + 1 = 1
P4: 0 + 0 = 0
P5: 1 . 1 = 1
P6: 1 . 0 = 0 . 1 = 0
P7: 1 + 0 = 0 + 1 = 1
Table 1

Postulate 2:                 x + 0 = x                    x . 1 = x
Postulate 5:                 x + x' = 1                   x . x' = 0
Theorem 1:                   x + x = x                    x . x = x
Theorem 2:                   x + 1 = 1                    x . 0 = 0
Theorem 3 (involution):      (x')' = x
Postulate 3 (commutative):   x + y = y + x                xy = yx
Theorem 4 (associative):     x + (y + z) = (x + y) + z    x(yz) = (xy)z
Postulate 4 (distributive):  x(y + z) = xy + xz           x + yz = (x + y)(x + z)
Theorem 5 (DeMorgan):        (x + y)' = x'y'              (xy)' = x' + y'
Theorem 6 (absorption):      x + xy = x                   x(x + y) = x

Examples
Each of these theorems can be proved in two ways: (1) algebraically, or (2) using the truth table.
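Because each variable can only take the value 0 or 1, any identity in the table can be proved by checking every combination of inputs, which is the truth-table method. A small Python sketch (the selection of identities checked is my own):

```python
from itertools import product

# Exhaustively check a few identities from Table 1 over all 0/1 assignments.
for x, y, z in product((0, 1), repeat=3):
    nx, ny = 1 - x, 1 - y                      # complements x', y'
    assert x | (x & y) == x                    # absorption: x + xy = x
    assert 1 - (x | y) == nx & ny              # DeMorgan: (x + y)' = x'y'
    assert 1 - (x & y) == nx | ny              # DeMorgan: (xy)' = x' + y'
    assert x | (y & z) == (x | y) & (x | z)    # distributive: x + yz = (x+y)(x+z)
print("all identities hold")
```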
Sum of Products
Numerical Representation
Take as an example the truth table of a three-variable function as shown below. Three variables, each of which can take the values 0 or 1, yield eight possible combinations of values for which the function may be true. These eight combinations are listed in ascending binary order and the equivalent decimal value is also shown in the table.

Decimal Value   A   B   C   f
      0         0   0   0   1
      1         0   0   1   0
      2         0   1   0   1
      3         0   1   1   1
      4         1   0   0   0
      5         1   0   1   0
      6         1   1   0   0
      7         1   1   1   1
The function has a value 1 for the combinations shown, therefore:
f(A, B, C) = A'B'C' + A'BC' + A'BC + ABC ......(1)
This can also be written as:
f(A, B, C) = 000 + 010 + 011 + 111
Note that the summation sign indicates that the terms are "OR'ed" together. The function can be further reduced to the form:
f(A, B, C) = Σ(0, 2, 3, 7) ......(2)
It is self-evident that the binary form of a function can be written directly from the truth table.
Note:
(a) the position of the digits must not be changed
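The conversion from the minterm list of equation (2) back to the product terms of equation (1) is mechanical, and can be sketched as follows (an illustrative sketch; the helper name is my own):

```python
# Derive the Sum-of-Products form of f(A,B,C) = sum(0, 2, 3, 7) from its minterms.
MINTERMS = [0, 2, 3, 7]
NAMES = "ABC"

def minterm_to_product(m, nvars=3):
    """Cell number -> product term, e.g. 2 = 010 -> A'BC'."""
    bits = format(m, f"0{nvars}b")
    return "".join(v if b == "1" else v + "'" for v, b in zip(NAMES, bits))

sop = " + ".join(minterm_to_product(m) for m in MINTERMS)
print(sop)   # A'B'C' + A'BC' + A'BC + ABC
```

Each 1 in the binary number keeps the variable in true form and each 0 complements it, exactly as read off the truth table above.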
Logic gates
Digital systems are said to be constructed by using logic gates. These gates are the AND, OR,
NOT, NAND, NOR, EXOR and EXNOR gates. The basic operations are described below with the aid
of truth tables.
AND gate
The AND gate is an electronic circuit that gives a high output (1) only if all its inputs are high.
A dot (.) is used to show the AND operation i.e. A.B. Bear in mind that this dot is sometimes omitted
i.e. AB
OR gate
The OR gate is an electronic circuit that gives a high output (1) if one or more of its inputs are
high. A plus (+) is used to show the OR operation.
NOT gate
The NOT gate is an electronic circuit that produces an inverted version of the input at its output.
It is also known as an inverter. If the input variable is A, the inverted output is known as NOT A. This
is also shown as A', or A with a bar over the top, as shown at the outputs. The diagrams below show two
ways that the NAND logic gate can be configured to produce a NOT gate. It can also be done using
NOR logic gates in the same way.
NAND gate
This is a NOTAND gate which is equal to an AND gate followed by a NOT gate. The outputs
of all NAND gates are high if any of the inputs are low. The symbol is an AND gate with a small circle
on the output. The small circle represents inversion.
NOR gate
This is a NOTOR gate which is equal to an OR gate followed by a NOT gate. The outputs of
all NOR gates are low if any of the inputs are high.
The symbol is an OR gate with a small circle on the output. The small circle represents
inversion.
EXOR gate
The 'Exclusive-OR' gate is a circuit which will give a high output if either, but not both, of its two inputs are high. An encircled plus sign (⊕) is used to show the EX-OR operation.
EXNOR gate
The 'ExclusiveNOR' gate circuit does the opposite to the EOR gate. It will give a low output if
either, but not both, of its two inputs are high. The symbol is an EXOR gate with a small circle on the
output. The small circle represents inversion.
Table 2 is a summary truth table of the input/output combinations for the NOT gate together with all possible input/output combinations for the other gate functions. Also note that a truth table with n inputs has 2^n rows. You can compare the outputs of different gates.
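The gate definitions above can be tabulated directly with bitwise operators; a small Python sketch, forming NAND, NOR and XNOR by complementing AND, OR and XOR:

```python
# Truth tables for the basic two-input gates over all 2^2 = 4 input rows.
print("A B  AND OR NAND NOR XOR XNOR")
for a in (0, 1):
    for b in (0, 1):
        row = (a & b, a | b, 1 - (a & b), 1 - (a | b), a ^ b, 1 - (a ^ b))
        print(a, b, " ", *row)
```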
UNIT2
MINIMIZATION AND DESIGN OF COMBINATIONAL CIRCUITS
Karnaugh Maps
The Karnaugh map provides a simple and straightforward method of minimising boolean
expressions. With the Karnaugh map Boolean expressions having up to four and even six variables can
be simplified.
So what is a Karnaugh map?
A Karnaugh map provides a pictorial method of grouping together expressions with common factors
and therefore eliminating unwanted variables. The Karnaugh map can also be described as a special
arrangement of a truth table.
The diagram below illustrates the correspondence between the Karnaugh map and the truth table for the general case of a two-variable problem.
The values inside the squares are copied from the output column of the truth table, therefore there is one square in the map for every row in the truth table. Around the edge of the Karnaugh map are the values of the two input variables. A is along the top and B is down the left-hand side. The diagram below explains this:
The values around the edge of the map can be thought of as coordinates. So as an example, the square
on the top right hand corner of the map in the above diagram has coordinates A=1 and B=0. This square
corresponds to the row in the truth table where A=1 and B=0 and F=1. Note that the value in the F
column represents a particular function to which the Karnaugh map corresponds.
Examples
Example 1:
Consider the following map. The function plotted is: Z = f(A,B) = A + AB
Note that the values of the input variables form the rows and columns. That is, the logic values of the variables A and B (with one denoting true form and zero denoting false form) form the head of the rows and columns respectively.
Bear in mind that the above map is a one-dimensional type which can be used to simplify an expression in two variables. There is a two-dimensional map that can be used for up to four variables, and a three-dimensional map for up to six variables.
Groups may not include any cell containing a zero.
Groups may be horizontal or vertical, but not diagonal.
Groups must contain 1, 2, 4, 8, or in general 2^n cells. That is, if n = 1 a group will contain two 1's, since 2^1 = 2; if n = 2 a group will contain four 1's, since 2^2 = 4.
Each group should be as large as possible.
Each cell containing a one must be in at least one group.
Groups may overlap.
Groups may wrap around the table. The leftmost cell in a row may be grouped with the rightmost cell, and the top cell in a column may be grouped with the bottom cell.
There should be as few groups as possible, as long as this does not contradict any of the previous rules.
Summary:
1. No zeros allowed.
2. No diagonals.
3. Only power of 2 number of cells in each group.
4. Groups should be as large as possible.
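Any K-map simplification can be sanity-checked by exhaustively comparing the original and the reduced expression over all inputs; a small sketch (the example identity, A + A'B = A + B, is my own choice):

```python
from itertools import product

def equivalent(f, g, nvars):
    """True if two Boolean functions agree on every input combination."""
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=nvars))

# e.g. the two-variable reduction A + A'B = A + B
original = lambda a, b: a | ((1 - a) & b)
reduced  = lambda a, b: a | b
print(equivalent(original, reduced, 2))   # True
```

Since a map of n variables has only 2^n cells, this exhaustive check is always cheap for the map sizes discussed here.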
The following four variable Karnaugh maps illustrate reduction of Boolean expressions too
tedious for Boolean algebra. Reductions could be done with Boolean algebra. However, the Karnaugh
map is faster and easier, especially if there are many logic reductions to do.
Fold up the corners of the map below like it is a napkin to make the four cells physically
adjacent.
The four cells above are a group of four because they all have the Boolean variables B' and D' in common. In other words, B=0 for the four cells, and D=0 for the four cells. The other variables (A, C) are 0 in some cases, 1 in other cases with respect to the four corner cells. Thus, these variables (A, C) are not involved with this group of four. This single group comes out of the map as one product term for the simplified result: Out=B'D'
For the Kmap below, roll the top and bottom edges into a cylinder forming eight adjacent cells.
The above group of eight has one Boolean variable in common: B=0. Therefore, the one group
of eight is covered by one pterm: B'. The original eight term Boolean expression simplifies to Out=B'
The Boolean expression below has nine pterms, three of which have three Booleans instead of
four. The difference is that while four Boolean variable product terms cover one cell, the three Boolean
pterms cover a pair of cells each.
The six product terms of four Boolean variables map in the usual manner above as single cells.
The three Boolean variable terms (three each) map as cell pairs, which is shown above. Note that we are
mapping pterms into the Kmap, not pulling them out at this point.
Above, three of the cells form into groups of two cells. A fourth cell cannot be combined with
anything, which often happens in "real world" problems. In this case, the Boolean pterm ABCD is
unchanged in the simplification process. Result: Out= B'C'D'+A'B'D'+ABCD
Oftentimes there is more than one minimum cost solution to a simplification problem. Such is the case illustrated below.
Both results above have four product terms of three Boolean variable each. Both are equally
valid minimal cost solutions. The difference in the final solution is due to how the cells are grouped as
shown above. A minimal cost solution is a valid logic design with the minimum number of gates with
the minimum number of inputs.
Below we map the unsimplified Boolean equation as usual and form a group of four as a first
simplification step. It may not be obvious how to pick up the remaining cells.
Pick up three more cells in a group of four, center above. There are still two cells remaining. The minimal cost method to pick up those is to group them with neighboring cells as groups of four, as shown above right.
On a cautionary note, do not attempt to form groups of three. Groupings must be powers of 2,
that is, 1, 2, 4, 8 ...
Below we have another example of two possible minimal cost solutions. Start by forming a
couple of groups of four after mapping the cells.
The two solutions depend on whether the single remaining cell is grouped with the first or the
second group of four as a group of two cells. That cell either comes out as either ABC' or ABD, your
choice. Either way, this cell is covered by either Boolean product term. Final results are shown above.
Below we have an example of a simplification using the Karnaugh map at left or Boolean algebra at right. Plot C' on the map as the area of all cells covered by address C=0, the eight cells on the left of the map. Then, plot the single ABCD cell. That single cell forms a group of two cells, as shown, which simplifies to p-term ABD, for an end result of Out = C' + ABD.
The older version of the five variable Kmap, a Gray Code map or reflection map, is shown
above. The top (and side for a 6variable map) of the map is numbered in full Gray code. The Gray code
reflects about the middle of the code. This style map is found in older texts. The newer preferred style is
below.
The overlay version of the Karnaugh map, shown above, is simply two (four for a 6variable
map) identical maps except for the most significant bit of the 3bit address across the top. If we look at
the top of the map, we will see that the numbering is different from the previous Gray code map. If we
ignore the most significant digit of the 3digit numbers, the sequence 00, 01, 11, 10 is at the heading of
both sub maps of the overlay map. The sequence of eight 3digit numbers is not Gray code. Though the
sequence of four of the least significant two bits is.
Let's put our 5variable Karnaugh Map to use. Design a circuit which has a 5bit binary input (A,
B, C, D, E), with A being the MSB (Most Significant Bit). It must produce an output logic High for any
prime number detected in the input data.
We show the solution above on the older Gray code (reflection) map for reference. The prime
numbers are (1,2,3,5,7,11,13,17,19,23,29,31). Plot a 1 in each corresponding cell. Then, proceed with
grouping of the cells. Finish by writing the simplified result. Note that the 4-cell group A'B'E consists of two pairs of cells on both sides of the mirror line. The same is true of the 2-cell group AB'DE. It is a group of two cells by being reflected about the mirror line. When using this version of the K-map, look for mirror images in the other half of the map.
Below we show the more common version of the 5variable map, the overlay map.
If we compare the patterns in the two maps, some of the cells in the right half of the map are
moved around since the addressing across the top of the map is different. We also need to take a
different approach at spotting commonality between the two halves of the map. Overlay one half of the
map atop the other half. Any overlap from the top map to the lower map is a potential group. The figure
below shows that group AB'DE is composed of two stacked cells. Group A'B'E consists of two stacked
pairs of cells.
or
f(A,B,C,D) = Σ m(1,2,3,4,5,7,8,9,11,12,13,15)
The numbers indicate cell location, or address, within a Karnaugh map as shown below right.
This is certainly a compact means of describing a list of minterms or cells in a Kmap.
The Sum-Of-Products solution is not affected by the new terminology. The minterms, 1s, in the map have been grouped as usual and a Sum-Of-Products solution written.
Below, we show the terminology for describing a list of maxterms. Product is indicated by the Greek letter Π (pi), and upper case "M" indicates maxterms. ΠM indicates a product of maxterms. The same example illustrates our point. The Boolean equation description of unsimplified logic is replaced by a list of maxterms.
or
Once again, the numbers indicate Kmap cell address locations. For maxterms this is the location
of 0s, as shown below. A ProductOFSums solution is completed in the usual manner.
A minterm is a Boolean expression resulting in 1 for the output of a single cell, and 0s for all
other cells in a Karnaugh map, or truth table. If a minterm has a single 1 and the remaining cells as 0s, it
would appear to cover a minimum area of 1s. The illustration above left shows the minterm ABC, a
single product term, as a single 1 in a map that is otherwise 0s. We have not shown the 0s in our
Karnaugh maps up to this point, as it is customary to omit them unless specifically needed. Another
minterm A'BC' is shown above right. The point to review is that the address of the cell corresponds
directly to the minterm being mapped. That is, the cell 111 corresponds to the minterm ABC above left.
Above right we see that the minterm A'BC' corresponds directly to the cell 010. A Boolean expression
or map may have multiple minterms.
Referring to the above figure, let's summarize the procedure for placing a minterm in a K-map:
Identify the minterm (product term) to be mapped.
Write the corresponding binary numeric value.
Use the binary value as an address to place a 1 in the K-map.
Repeat for other minterms.
A Boolean expression will more often than not consist of multiple minterms corresponding to multiple cells in a Karnaugh map, as shown above. The multiple minterms in this map are the individual minterms which we examined in the previous figure. The procedure for reading the Sum-Of-Products reduction from the map is:
Form the largest groups of 1s possible covering all minterms. Groups must be a power of 2.
Write the binary numeric value for each group.
Convert the binary value to a product term.
Repeat steps for other groups. Each group yields a p-term within a Sum-Of-Products.
Nothing new so far; a formal procedure has been written down for dealing with minterms. This serves as a pattern for dealing with maxterms.
Next we attack the Boolean function which is 0 for a single cell and 1s for all others.
A maxterm is a Boolean expression resulting in a 0 for the output of a single cell expression, and
1s for all other cells in the Karnaugh map, or truth table. The illustration above left shows the maxterm
(A+B+C), a single sum term, as a single 0 in a map that is otherwise 1s. If a maxterm has a single 0 and
the remaining cells as 1s, it would appear to cover a maximum area of 1s.
There are some differences now that we are dealing with something new, maxterms. The
maxterm is a 0, not a 1 in the Karnaugh map. A maxterm is a sum term, (A+B+C) in our example, not a
product term.
It also looks strange that (A+B+C) is mapped into the cell 000. For the equation Out=(A+B+C)=0, all three variables (A, B, C) must individually be equal to 0. Only (0+0+0)=0 will equal 0. Thus we place our sole 0 for maxterm (A+B+C) in cell A,B,C=000 in the K-map, where the inputs are all 0. This is the only case which will give us a 0 for our maxterm. All other cells contain 1s because any input values other than (0,0,0) for (A+B+C) yield 1s upon evaluation.
Another maxterm A'+B'+C' is shown above. Numeric 000 corresponds to A'+B'+C'. The
complement is 111. Place a 0 for maxterm (A'+B'+C') in this cell (1,1,1) of the Kmap as shown above.
Why should (A'+B'+C') cause a 0 to be in cell 111? When A'+B'+C' is (1'+1'+1'), all 1s in,
which is (0+0+0) after taking complements, we have the only condition that will give us a 0. All the 1s
are complemented to all 0s, which is 0 when ORed.
Let's summarize the procedure for writing the Product-Of-Sums Boolean reduction for a K-map:
Form the largest groups of 0s possible, covering all maxterms. Groups must be a power of 2.
Write the binary numeric value for each group.
Complement the binary numeric value for the group.
Convert the complement value to a sum-term.
Repeat steps for other groups. Each group yields a sum-term within a Product-Of-Sums result.
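The "complement then convert" step can be sketched mechanically (an illustrative sketch; the helper name is my own, and three variables A, B, C are assumed):

```python
NAMES = "ABC"

def maxterm_to_sum(cell, nvars=3):
    """K-map cell address holding a 0 -> sum term, complementing each address bit:
    a 0 in the address keeps the variable true, a 1 complements it.
    e.g. cell 000 -> (A+B+C), cell 111 -> (A'+B'+C')."""
    bits = format(cell, f"0{nvars}b")
    lits = [v if b == "0" else v + "'" for v, b in zip(NAMES, bits)]
    return "(" + "+".join(lits) + ")"

print(maxterm_to_sum(0b000))   # (A+B+C)
print(maxterm_to_sum(0b111))   # (A'+B'+C')
```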
Example:
Simplify the Product-Of-Sums Boolean expression below, providing a result in POS form.
Solution:
Transfer the seven maxterms to the map below as 0s. Be sure to complement the input variables
in finding the proper cell location.
We map the 0s as they appear, left to right, top to bottom, on the map above. We locate the last three maxterms with leader lines.
Once the cells are in place above, form groups of cells as shown below. Larger groups will give
a sumterm with fewer inputs. Fewer groups will yield fewer sumterms in the result.
Example:
Simplify the Product-Of-Sums Boolean expression below, providing a result in SOP form.
Solution:
This looks like a repeat of the last problem. It is, except that we ask for a Sum-Of-Products solution instead of the Product-Of-Sums which we just finished. Map the maxterm 0s from the Product-Of-Sums given as in the previous problem, below left.
Then fill in the implied 1s in the remaining cells of the map above right.
Form groups of 1s to cover all 1s. Then write the SumOfProducts simplified result as in the
previous section of this chapter. This is identical to a previous problem.
Above is an example of a logic function where the desired output is 1 for input ABC = 101 over the range from 000 to 101. We do not care what the output is for the other possible inputs (110, 111). Map those two as don't cares. We show three solutions. The solution on the left, Out = AB'C, is the more complex solution, since we did not use the don't care cells. The solution in the middle, Out=AC, is less complex because we grouped a don't care cell with the single 1 to form a group of two. The third solution, a Product-Of-Sums on the right, results from grouping a don't care with three zeros forming a group of four 0s. This is the same, less complex, Out=AC. We have illustrated that the don't care cells may be used as either 1s or 0s, whichever is useful.
Since none of the lamps light for ABC=000 out of the A-to-D converter, enter a 0 in all K-maps for cell ABC=000. Since we don't care about the never-to-be-encountered codes (110, 111), enter asterisks into those two cells in all five K-maps.
Lamp L5 will only light for code ABC=101. Enter a 1 in that cell and five 0s into the remaining
empty cells of L5 Kmap.
L4 will light initially for code ABC=100, and will remain illuminated for any code greater,
ABC=101, because all lamps below L5 will light when L5 lights. Enter 1s into cells 100 and 101 of the
L4 map so that it will light for those codes. Four 0's fill the remaining L4 cells.
As you can see, there are two ways to use a NAND gate as an inverter, and two ways to use a
NOR gate as an inverter. Either method works, although connecting TTL inputs together increases the
amount of current loading to the driving gate. For CMOS gates, common input terminals decreases the
switching speed of the gate due to increased input capacitance.
Inverters are the fundamental tool for transforming one type of logic function into another, and
so there will be many inverters shown in the illustrations to follow. In those diagrams, I will only show
one method of inversion, and that will be where the unused NAND gate input is connected to +V (either
Vcc or Vdd, depending on whether the circuit is TTL or CMOS) and where the unused input for the NOR
gate is connected to ground. Bear in mind that the other inversion method (connecting both NAND or
NOR inputs together) works just as well from a logical (1's and 0's) point of view, but is undesirable
from the practical perspectives of increased current loading for TTL and increased input capacitance for
CMOS.
REVIEW:
NAND and NOR gates are universal: that is, they have the ability to mimic any type of
gate, if interconnected in sufficient numbers.
This symbol is seldom used in Boolean expressions because the identities, laws, and rules of
simplification involving addition, multiplication, and complementation do not apply to it. However,
there is a way to represent the ExclusiveOR function in terms of OR and AND, as has been shown in
previous chapters: AB' + A'B
As a Boolean equivalency, this rule may be helpful in simplifying some Boolean expressions.
Any expression following the AB' + A'B form (two AND gates and an OR gate) may be replaced by a
single ExclusiveOR gate.
COMBINATIONAL LOGIC
Unlike Sequential Logic Circuits, whose outputs are dependent on both their present inputs and their previous output state, giving them some form of Memory, the outputs of Combinational Logic Circuits are only determined by the logical function of their current input state, logic "0" or logic "1", at any given instant in time, as they have no feedback, and any changes to the signals being applied to their inputs will immediately have an effect at the output. In other words, in a Combinational Logic Circuit, the output is dependent at all times on the combination of its inputs, and if one of its input conditions changes state so does the output, as combinational circuits have "no memory", "timing" or "feedback loops".
Combinational Logic
Combinational Logic Circuits are made up from basic logic NAND, NOR or NOT gates that are
"combined" or connected together to produce more complicated switching circuits. These logic gates
are the building blocks of combinational logic circuits. An example of a combinational circuit is a
decoder, which converts the binary code data present at its input into a number of different output lines,
one at a time producing an equivalent decimal code at its output.
Combinational logic circuits can be very simple or very complicated and any combinational
circuit can be implemented with only NAND and NOR gates as these are classed as "universal" gates.
The three main ways of specifying the function of a combinational logic circuit are:
Truth Table: Truth tables provide a concise list that shows the output values in tabular form for each possible combination of input variables.
Boolean Algebra: Forms an output expression for each input variable that represents a logic "1".
Logic Diagram: Shows the wiring and connections of each individual logic gate that implements the circuit.
and all three are shown below.
Truth Table

A   B   SUM   CARRY
0   0    0      0
0   1    1      0
1   0    1      0
1   1    0      1

Sum = A ⊕ B
Carry = A . B
From the truth table we can see that the SUM (S) output is the result of the Ex-OR gate and the Carry-out (Cout) is the result of the AND gate. One major disadvantage of the Half Adder circuit when used as a binary adder is that there is no provision for a "Carry-in" from the previous circuit when adding together multiple data bits. For example, suppose we want to add together two 8-bit bytes of data; any resulting carry bit would need to be able to "ripple" or move across the bit patterns starting from the least significant bit (LSB). The most complicated operation the half adder can do is "1 + 1", but since it cannot accept a carry-in, a Full Adder is needed, whose truth table is given below.
Truth Table

A   B   C-in   Sum   C-out
0   0    0      0      0
0   0    1      1      0
0   1    0      1      0
0   1    1      0      1
1   0    0      1      0
1   0    1      0      1
1   1    0      0      1
1   1    1      1      1
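The full adder table above, and the ripple-carry chain built from it, can be sketched in Python (an illustrative sketch with bit lists ordered LSB first; the function names are my own):

```python
def full_adder(a, b, cin):
    """One-bit full adder: Sum = A xor B xor Cin, Cout = majority(A, B, Cin)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_add(a_bits, b_bits):
    """Add two equal-length bit lists, LSB first; the carry ripples stage to stage."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 0b1011 (11) + 0b0110 (6) = 0b10001 (17), LSB first
print(ripple_add([1, 1, 0, 1], [0, 1, 1, 0]))   # [1, 0, 0, 0, 1]
```

The loop makes the "ripple" visible: each stage cannot finish until the previous stage's carry is known, which is exactly the propagation-delay problem discussed next.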
One main disadvantage of "cascading" together 1bit binary adders to add large binary
numbers is that if inputs A and B change, the sum at its output will not be valid until any carryinput has
"rippled" through every full adder in the chain. Consequently, there will be a finite delay before the
output of a adder responds to a change in its inputs resulting in the accumulated delay especially in large
multibit binary adders becoming prohibitively large. This delay is called Propagation delay. Also
"overflow" occurs when an nbit adder adds two numbers together whose sum is greater than or equal to
2n
One solution is to generate the carryinput signals directly from the A and B inputs rather than
using the ripple arrangement above. This then produces another type of binary adder circuit called a
Carry Look Ahead Binary Adder were the speed of the parallel adder can be greatly improved using
carrylook ahead logic.
half subtractor
The half-subtractor is a combinational circuit which is used to perform subtraction of two bits. It has two inputs, X (minuend) and Y (subtrahend), and two outputs, D (difference) and B (borrow).
X   Y   D   B
0   0   0   0
0   1   1   1
1   0   1   0
1   1   0   0

From the above table one can draw the Karnaugh maps for "difference" and "borrow".
So, the logic equations are:
D = X ⊕ Y
B = X'Y
Full subtractor
The fullsubtractor is a combinational circuit which is used to perform subtraction of three bits.
It has three inputs, X (minuend) and Y (subtrahend) and Z (subtrahend) and two outputs D (difference)
and
B
(borrow).
Easy
way
to
write
truth
table
D=XYZ
(don't
bother
about
sign)
B = 1 If X<(Y+Z)
X   Y   Z   D   B
0   0   0   0   0
0   0   1   1   1
0   1   0   1   1
0   1   1   0   1
1   0   0   1   0
1   0   1   0   0
1   1   0   0   0
1   1   1   1   1
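Reading the Karnaugh maps for the table above gives D = X ⊕ Y ⊕ Z and B = X'Y + X'Z + YZ. A sketch verifying these equations against the arithmetic rule stated above (a borrow is worth −2 at this bit position):

```python
def full_subtractor(x, y, z):
    """Difference D = X xor Y xor Z; borrow B = X'Y + X'Z + YZ."""
    d = x ^ y ^ z
    b = ((1 - x) & y) | ((1 - x) & z) | (y & z)
    return d, b

# Cross-check against the arithmetic definition: D and B together encode x - y - z.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            d, b = full_subtractor(x, y, z)
            assert x - y - z == d - 2 * b
print("full subtractor verified")
```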
Addersubtractor
In digital circuits, an addersubtractor is a circuit that is capable of adding or subtracting
numbers (in particular, binary). Below is a circuit that does adding or subtracting depending on a
control signal. It is also possible to construct a circuit that performs both addition and subtraction at the
same time.
Construction
A 4-bit ripple-carry adder-subtractor based on a 4-bit adder performs two's complement on A when D = 1 to yield S = B − A.
Having an n-bit adder for A and B, then S = A + B. Then, assume the numbers are in two's complement. To perform B − A, two's complement theory says to invert each bit of A with a NOT gate and then add one. This yields S = B + A' + 1, which is easy to do with a slightly modified adder.
Precede each A input bit on the adder with a 2-to-1 multiplexer that has control input D, and also connect the initial carry-in to D: when D = 0 the adder computes B + A, and when D = 1 it computes B + A' + 1 = B − A.
A way you can mark number A as positive or negative without using a multiplexer on each bit is to use an XOR (Exclusive-OR) gate to precede each bit instead.
This produces the same truth table for the bit arriving at the adder as the multiplexer solution does, as when D = 0 the XOR gate output will be whatever the input bit is set to, and when D = 1 it will effectively invert the input bit.
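The XOR arrangement can be sketched as follows (an illustrative sketch with bit lists LSB first; the function name is my own):

```python
def add_sub(b_bits, a_bits, d):
    """Ripple adder-subtractor, LSB first: returns B + A when d=0, B - A when d=1.
    Each A bit is XORed with the control d, and d also feeds the initial carry,
    so d=1 adds the two's complement of A (invert A, then add 1)."""
    carry, out = d, []
    for a, b in zip(a_bits, b_bits):
        a ^= d                      # conditional inversion of the A input bit
        s = a ^ b ^ carry
        carry = (a & b) | (a & carry) | (b & carry)
        out.append(s)
    return out                      # final carry discarded (two's complement wrap)

# B = 0110 (6), A = 0010 (2), both LSB first
print(add_sub([0, 1, 1, 0], [0, 1, 0, 0], d=0))   # 6 + 2 = 8 -> [0, 0, 0, 1]
print(add_sub([0, 1, 1, 0], [0, 1, 0, 0], d=1))   # 6 - 2 = 4 -> [0, 0, 1, 0]
```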
Until the late 1970s, most minicomputers did not have a multiply instruction, and so programmers used a "multiply routine"[1][2] which repeatedly shifts and accumulates partial results, often written using loop unwinding. Mainframe computers had multiply instructions, but they did the same sorts of shifts and adds as a "multiply routine".
Early microprocessors also had no multiply instruction. The Motorola 6809, introduced in 1978, was one of the earliest microprocessors with a dedicated hardware multiply instruction. It did the same sorts of shifts and adds as a "multiply routine", but implemented in the microcode of the MUL instruction.
As more transistors per chip became available (Moore's law), it became possible to put enough
adders on a single chip to sum all the partial products at once, rather than reuse a single adder to handle
each partial product one at a time.
Because some common digital signal processing algorithms spend most of their time
multiplying, people who design digital signal processors sacrifice a lot of chip area in order to make the multiply as fast as possible; a single-cycle multiply-accumulate unit often used up most of the chip area of early DSPs.
Multiplication basics
The method taught in school for multiplying decimal numbers is based on calculating partial
products, shifting them to the left and then adding them together. The most difficult part is to obtain the
partial products, as that involves multiplying a long number by one digit (from 0 to 9):
    123
  x 456
  =====
    738      (this is 123 x 6)
   615       (this is 123 x 5, shifted one position to the left)
+ 492        (this is 123 x 4, shifted two positions to the left)
  =====
  56088
A binary computer does exactly the same, but with binary numbers. In binary encoding each
long number is multiplied by one digit (either 0 or 1), and that is much easier than in decimal, as the
product by 0 or 1 is just 0 or the same number. Therefore, the multiplication of two binary numbers
comes down to calculating partial products (which are 0 or the first number), shifting them left, and then
adding them together (a binary addition, of course):
       1011    (this is 11 in binary)
     x 1110    (this is 14 in binary)
     ======
       0000    (this is 1011 x 0)
      1011     (this is 1011 x 1, shifted one position to the left)
     1011      (this is 1011 x 1, shifted two positions to the left)
  + 1011       (this is 1011 x 1, shifted three positions to the left)
  =========
   10011010
This is much simpler than in the decimal system, as there is no table of multiplication to
remember: just shifts and adds.
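The shift-and-add procedure above can be sketched in a few lines of Python (a sketch only; the function name is ours, not from any library):

```python
def binary_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only shifts and adds,
    mirroring the partial-product method shown above."""
    product = 0
    shift = 0
    while b:
        if b & 1:                   # this multiplier bit is 1: the partial product is a, shifted
            product += a << shift
        b >>= 1                     # move on to the next multiplier bit
        shift += 1
    return product
```

Running it on the two worked examples above reproduces 56088 and 154.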
This method is mathematically correct, but it has two serious engineering problems. The first is that it involves 32 intermediate additions in a 32-bit computer, or 64 intermediate additions in a 64-bit computer. For an 8-bit multiplier, the eight partial products can be written (in Verilog notation) as:
p0[7:0] = a[0] x b[7:0] = {8{a[0]}} & b[7:0]
p1[7:0] = a[1] x b[7:0] = {8{a[1]}} & b[7:0]
p2[7:0] = a[2] x b[7:0] = {8{a[2]}} & b[7:0]
p3[7:0] = a[3] x b[7:0] = {8{a[3]}} & b[7:0]
p4[7:0] = a[4] x b[7:0] = {8{a[4]}} & b[7:0]
p5[7:0] = a[5] x b[7:0] = {8{a[5]}} & b[7:0]
p6[7:0] = a[6] x b[7:0] = {8{a[6]}} & b[7:0]
p7[7:0] = a[7] x b[7:0] = {8{a[7]}} & b[7:0]
where {8{a[0]}} means repeating a[0] (the 0th bit of a) 8 times (Verilog notation).
To produce our product, we then need to add up all eight of our partial products, as shown here:
                                                p0[7] p0[6] p0[5] p0[4] p0[3] p0[2] p0[1] p0[0]
                                        + p1[7] p1[6] p1[5] p1[4] p1[3] p1[2] p1[1] p1[0]     0
                                + p2[7] p2[6] p2[5] p2[4] p2[3] p2[2] p2[1] p2[0]     0     0
                        + p3[7] p3[6] p3[5] p3[4] p3[3] p3[2] p3[1] p3[0]     0     0     0
                + p4[7] p4[6] p4[5] p4[4] p4[3] p4[2] p4[1] p4[0]     0     0     0     0
        + p5[7] p5[6] p5[5] p5[4] p5[3] p5[2] p5[1] p5[0]     0     0     0     0     0
    + p6[7] p6[6] p6[5] p6[4] p6[3] p6[2] p6[1] p6[0]     0     0     0     0     0     0
+ p7[7] p7[6] p7[5] p7[4] p7[3] p7[2] p7[1] p7[0]     0     0     0     0     0     0     0

Each partial product p_i is shifted i bit positions to the left before the final 16-bit sum is taken.
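The eight-partial-product scheme can be mimicked in Python, with an 8-bit mask standing in for the Verilog replication {8{a[i]}} (a sketch under that correspondence; names are ours):

```python
def multiply_8bit(a: int, b: int) -> int:
    """8-bit combinational multiply: form all eight partial products at
    once (as an array of adders would) and then sum them."""
    partials = []
    for i in range(8):
        mask = 0xFF if (a >> i) & 1 else 0x00   # plays the role of {8{a[i]}}
        partials.append((mask & b) << i)         # p_i, pre-shifted into place
    return sum(partials) & 0xFFFF                # 16-bit product
```

The masked-AND is exactly the Verilog expression {8{a[i]}} & b[7:0], and the shift models the left displacement of each row in the addition array above.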
Example
Identity Comparator: a digital comparator that has only one output terminal, for when A = B, giving either "HIGH" (A = B = 1) or "LOW" (A = B = 0).
1-bit Comparator
The operation of a 1-bit digital comparator is given in the following truth table.
Truth Table

B   A   |  A > B   A = B   A < B
0   0   |    0       1       0
0   1   |    1       0       0
1   0   |    0       0       1
1   1   |    0       1       0
You may notice two distinct features about the comparator from the above truth table. Firstly, the circuit does not distinguish between two "0"s and two "1"s, as the output A = B is produced when they are both equal, either A = B = "0" or A = B = "1". Secondly, the output condition for A = B resembles that of a commonly available logic gate, the Exclusive-NOR or Ex-NOR function (equivalence), applied to each of the n bits, giving: Q = A XNOR B.
Digital comparators actually use Exclusive-NOR gates within their design for comparing their respective pairs of bits. When we are comparing two binary or BCD values or variables against each other, we are comparing the "magnitude" of these values, a logic "0" against a logic "1", which is where the term Magnitude Comparator comes from.
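The truth table can be checked with a small Python model of the 1-bit comparator built from the gate functions just described (a sketch; the function name is ours):

```python
def compare_1bit(a: int, b: int):
    """1-bit magnitude comparator: returns (A>B, A=B, A<B) as 0/1 values."""
    gt = a & (b ^ 1)          # A AND (NOT B)
    lt = (a ^ 1) & b          # (NOT A) AND B
    eq = (a ^ b) ^ 1          # A XNOR B -- the equivalence function
    return gt, eq, lt
```

Evaluating all four input combinations reproduces the truth table above, including A = B = 1 for both the 00 and 11 cases.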
As well as comparing individual bits, we can design larger bit comparators by cascading together n of these and produce an n-bit comparator, just as we did for the n-bit adder in the previous tutorial. Multi-bit comparators can be constructed to compare whole binary or BCD words and produce an output if one word is larger than, equal to or less than the other. A very good example of this is the 4-bit Magnitude Comparator. Here, two 4-bit words ("nibbles") are compared to each other to produce the relevant output, with one word connected to inputs A and the other word to be compared against connected to inputs B, as shown below.
Some commercially available digital comparators, such as the TTL 7485 or CMOS 4063 4-bit magnitude comparators, have additional input terminals that allow individual comparators to be "cascaded" together to compare words larger than 4 bits, producing magnitude comparators of "n" bits. These cascading inputs are connected directly to the corresponding outputs of the previous comparator, as shown, to compare 8-, 16- or even 32-bit words.
When comparing large binary or BCD numbers like the example above, to save time the comparator starts by comparing the highest-order bit (MSB) first. If equality exists (A = B), it compares the next lower bit, and so on, until it reaches the lowest-order bit (LSB). If equality still exists, the two numbers are defined as equal. If inequality is found, either A > B or A < B, the relationship between the two numbers is determined and the comparison of any additional lower-order bits stops. Digital comparators are used widely in Analogue-to-Digital Converters (ADC) and Arithmetic Logic Units (ALU) to perform a variety of arithmetic operations.
The Multiplexer
A data selector, more commonly called a Multiplexer, shortened to "Mux" or "MPX", are
combinational logic switching devices that operate like a very fast acting multiple position rotary
switch. They connect or control, multiple input lines called "channels" consisting of either 2, 4, 8 or 16
individual inputs, one at a time to an output. Then the job of a multiplexer is to allow multiple signals to
share a single common output. For example, a single 8channel multiplexer would connect one of its
eight inputs to the single data output. Multiplexers are used as one method of reducing the number of
logic gates required in a circuit or when a single data line is required to carry two or more different
digital signals.
Addressing

b   a   |  Input Selected
0   0   |       A
0   1   |       B
1   0   |       C
1   1   |       D
The Boolean expression for this 4-to-1 Multiplexer above, with inputs A to D and data select lines a, b, is given as:
Q = a'.b'.A + a.b'.B + a'.b.C + a.b.D
In this example, at any one instant in time only ONE of the four analogue switches is closed, connecting only one of the input lines A to D to the single output at Q. Which switch is closed depends upon the addressing input code on lines "a" and "b"; so, for this example, to select input B to the output at Q, the binary input address would need to be "a" = logic "1" and "b" = logic "0". Adding more control address lines allows the multiplexer to control more inputs, but each control line configuration will connect only ONE input to the output.
The implementation of the Boolean expression above using individual logic gates would require seven gates in all, consisting of AND, OR and NOT gates, as shown.
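The Boolean expression can be modelled directly in Python, with each product term built from AND and NOT operations (a sketch; the function name is ours):

```python
def mux4(A: int, B: int, C: int, D: int, a: int, b: int) -> int:
    """4-to-1 multiplexer implementing Q = a'.b'.A + a.b'.B + a'.b.C + a.b.D."""
    na, nb = a ^ 1, b ^ 1                                     # the two NOT gates
    return (na & nb & A) | (a & nb & B) | (na & b & C) | (a & b & D)
```

For example, with the address a = 1, b = 0 the output follows input B, exactly as described above.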
Multiplexer Symbol
Multiplexers are not limited to switching a number of different input lines or channels to one common single output. There are also types that can switch their inputs to multiple outputs, with arrangements of 4-to-2, 8-to-3 or even 16-to-4 etc. configurations; an example of a simple dual-channel 4-input multiplexer (4-to-2) is given below:
Here in this example the 4 input channels are switched to 2 individual output lines, but larger arrangements are also possible. This simple 4-to-2 configuration could be used, for example, to switch audio signals for stereo pre-amplifiers or mixers.
The Multiplexer is a very useful combinational device that has its uses in many different applications, such as signal routing, data communications and data bus control. When used with a demultiplexer, parallel data can be transmitted in serial form via a single data link such as a fibre-optic cable or telephone line. Multiplexers can also be used to switch analogue, digital or video signals, with the switching current in analogue power circuits limited to below 10 mA to 20 mA per channel in order to reduce heat dissipation.
The Demultiplexer
The data distributor, known more commonly as a Demultiplexer or "Demux", is the exact
opposite of the Multiplexer we saw in the previous tutorial. The demultiplexer takes one single input
data line and then switches it to any one of a number of individual output lines one at a time. The
demultiplexer converts a serial data signal at the input to a parallel data at its output lines as shown
below.
Addressing

b   a   |  Output Selected
0   0   |       A
0   1   |       B
1   0   |       C
1   1   |       D
Demultiplexer Symbol
One of the main disadvantages of standard digital encoders is that they can generate the wrong output code when more than one input is present at logic level "1". For example, if we make inputs D1 and D2 HIGH at logic "1" at the same time, the resulting output is neither "01" nor "10" but "11", an output binary number different from the actual input present. Also, an output code of all logic "0"s can be produced either when all of the inputs are at "0" or when input D0 alone is at "1", making the two conditions indistinguishable.
Priority Encoder
The Priority Encoder solves the problems mentioned above by allocating a priority level to each input. The priority encoder's output corresponds to the currently active input which has the highest priority. So when an input with a higher priority is present, all other inputs with a lower priority will be ignored. The priority encoder comes in many different forms; an example of an 8-input priority encoder along with its truth table is shown below.
Priority encoders are available in standard IC form, and the TTL 74LS148 is an 8-to-3 bit priority encoder which has eight active-LOW (logic "0") inputs and provides a 3-bit code of the highest-ranked input at its output. Priority encoders output the highest-order input first: for example, if input lines "D2", "D3" and "D5" are applied simultaneously, the output code would be that for input "D5" ("101"), as this has the highest order of the 3 inputs. Once input "D5" has been removed, the next highest output code would be that for input "D3" ("011"), and so on.
The truth table for an 8-to-3 bit priority encoder is given as:
Digital Inputs                      Binary Output
D7  D6  D5  D4  D3  D2  D1  D0  |  Q2  Q1  Q0
 0   0   0   0   0   0   0   1  |   0   0   0
 0   0   0   0   0   0   1   X  |   0   0   1
 0   0   0   0   0   1   X   X  |   0   1   0
 0   0   0   0   1   X   X   X  |   0   1   1
 0   0   0   1   X   X   X   X  |   1   0   0
 0   0   1   X   X   X   X   X  |   1   0   1
 0   1   X   X   X   X   X   X  |   1   1   0
 1   X   X   X   X   X   X   X  |   1   1   1
From this truth table, the Boolean expressions for the encoder above, with inputs D0 to D7 and outputs Q0, Q1, Q2, reduce to:
Output Q0: Q0 = D7 + D5.D6' + D3.D4'.D6' + D1.D2'.D4'.D6'
Output Q1: Q1 = D7 + D6 + D3.D4'.D5' + D2.D4'.D5'
Output Q2: Q2 = D7 + D6 + D5 + D4
Taken together, these three expressions define the complete priority encoder, including the all-zero input condition.
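The priority behaviour is easy to model in Python: scan the inputs from the highest priority down and encode the first active one (a sketch; the function name and the D0-first list ordering are our own conventions):

```python
def priority_encode(d: list) -> tuple:
    """8-to-3 priority encoder: d is [D0, D1, ..., D7]; returns (Q2, Q1, Q0)
    for the highest-numbered input that is at logic 1."""
    for i in range(7, -1, -1):              # scan from the highest priority down
        if d[i]:
            return (i >> 2) & 1, (i >> 1) & 1, i & 1
    return 0, 0, 0                           # no input active
```

With D2, D3 and D5 applied simultaneously this returns (1, 0, 1), the code for D5, matching the worked example above.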
Binary Decoder
A Decoder is the exact opposite to that of an "Encoder" we looked at in the last tutorial. It is
basically, a combinational type logic circuit that converts the binary code data at its input into one of a
number of different output lines, one at a time producing an equivalent decimal code at its output.
Binary Decoders have inputs of 2bit, 3bit or 4bit codes depending upon the number of data input
lines, and a nbit decoder has 2n output lines. Therefore, if it receives n inputs (usually grouped as a
binary or Boolean number) it activates one and only one of its 2n outputs based on that input with all
other outputs deactivated. A decoders output code normally has more bits than its input code and
practical binary decoder circuits include, 2to4, 3to8 and 4to16 line configurations.
A binary decoder converts coded inputs into coded outputs, where the input and output codes are different, and decoders are available to "decode" either a Binary or BCD (8421 code) input pattern to typically a Decimal output code. Commonly available BCD-to-Decimal decoders include the TTL 7442 or the CMOS 4028. An example of a 2-to-4 line decoder, along with its truth table, is given below. It consists of an array of four NAND gates, one of which is selected for each combination of the input signals A and B.
In this simple example of a 2-to-4 line binary decoder, the binary inputs A and B determine which output line from D0 to D3 is "HIGH" at logic level "1", while the remaining outputs are held "LOW" at logic "0", so only one output can be active (HIGH) at any one time. Therefore, whichever output line is "HIGH" identifies the binary code present at the input; in other words it "decodes" the binary input. These types of binary decoders are commonly used as Address Decoders in microprocessor memory applications.
Some binary decoders have an additional input labelled "Enable" that controls the outputs from the device. This allows the decoder's outputs to be turned "ON" or "OFF", and we can see that the logic diagram of the basic decoder is identical to that of the basic demultiplexer. Therefore, we say that a demultiplexer is a decoder with an additional data line that is used to enable the decoder. An alternative way of looking at the decoder circuit is to regard inputs A, B and C as address signals. Each combination of A, B or C defines a unique address which can access a location having that address.
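A 2-to-4 decoder with an enable input can be sketched in Python as follows (the function name is ours, and we assume B is the more significant address bit):

```python
def decode_2to4(A: int, B: int, enable: int = 1):
    """2-to-4 line decoder: exactly one of D0..D3 goes HIGH for each input
    combination; all outputs stay LOW when the enable input is 0."""
    if not enable:
        return [0, 0, 0, 0]            # decoder disabled: no output selected
    outputs = [0, 0, 0, 0]
    outputs[(B << 1) | A] = 1          # assumption: B is the MSB of the address
    return outputs
```

Holding enable at 0 gives the demultiplexer-style behaviour described above, where the data/enable line gates the whole decoder.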
Sometimes a Binary Decoder is required with a number of outputs greater than is available, or only small devices are on hand; in that case we can combine multiple decoders together to form larger decoder networks, as shown. Here a much larger 4-to-16 line binary decoder has been implemented using two smaller 3-to-8 decoders.
Inputs A, B, C are used to select which output on either decoder will be at logic "1" (HIGH), and input D is used with the enable input to select which decoder, the first or the second, will output the "1".
UNIT3
SEQUENTIAL MACHINES FUNDAMENTALS
INTRODUCTION
All the instructions that direct a computer's operation exist as sequences of binary digits or bits (0s and 1s); both the instructions and the data are represented this way.
Logic gates can be arranged in groups that cause these binary numbers to act as adders, subtractors, multipliers, dividers or logical comparators. Other groups of gates can act as storage for the instructions and data. These groups are, in hardware design terms, latches and flip-flops.
Unlike combinational logic circuits, which change state depending only upon the actual signals being applied to their inputs at that time, sequential logic circuits have some form of inherent "memory" built into them: they are able to take into account their previous input state as well as the inputs actually present, so a sort of "before" and "after" is involved.
They are generally termed Two-State or Bistable devices, which can have their output set in either of two basic states, a logic level "1" or a logic level "0", and will remain "latched" indefinitely in that state or condition until some other input trigger pulse or signal is applied which changes the state once again.
The word "sequential" means that things happen in a "sequence", one after another, and in sequential logic circuits the clock signal determines when things will happen next.
Sequential circuit
1. In sequential circuits the output variables at any instant of time depend not only on the present input variables, but also on the present state.
2. A memory unit is required to store the past history of the input variables.
3. Sequential circuits are slower than combinational circuits.
4. They are comparatively hard to design.
LATCH:
An asynchronous latch is an electronic sequential logic circuit used to store information in an asynchronous arrangement (asynchronous: it has no clock input). One latch can store one bit. Latches change output state only in response to their data inputs; essentially, they hold a bit value, and it remains constant until new inputs force it to change. A latch is a type of stable single-bit storage.
FLIP FLOPS:
As with latches, flip-flops are another example of a circuit employing sequential logic. A flip-flop can also be called a BISTABLE GATE. It is a type of single-bit storage, but not as stable as a latch.
SR NOR latch
When using static gates as building blocks, the most fundamental latch is the simple SR latch, where S and R stand for set and reset. It can be constructed from a pair of cross-coupled NOR logic gates. The stored bit is present on the output marked Q.
While the S and R inputs are both low, feedback maintains the Q and Q' outputs in a constant state, with Q' the complement of Q. If S (Set) is pulsed high while R (Reset) is held low, then the Q output is forced high, and stays high when S returns to low; similarly, if R is pulsed high while S is held low, then the Q output is forced low, and stays low when R returns to low.
SR latch operation
S R | Action
0 0 | No change
0 1 | Q = 0
1 0 | Q = 1
1 1 | Restricted combination
Alternatively, the restricted combination can be made to toggle the output; the result is the JK latch.
Characteristic equation: Q+ = R'Q + R'S, or equivalently (since S = R = 1 is excluded) Q+ = R'Q + S.
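The characteristic equation can be checked against the operation table with a one-line Python model (a sketch; the function name is ours):

```python
def sr_nor_latch(s: int, r: int, q: int) -> int:
    """Next state of a cross-coupled NOR SR latch: Q+ = S + R'.Q,
    valid while the restricted combination S = R = 1 is avoided."""
    assert not (s and r), "S = R = 1 is the restricted combination"
    return s | ((r ^ 1) & q)
```

Holding S = R = 0 keeps the stored bit, S = 1 sets it and R = 1 resets it, exactly as the table says.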
SR NAND latch
An S'R' latch
This is an alternate model of the simple SR latch, built with NAND (NOT-AND) logic gates. Set and reset now become active-low signals, denoted S' and R' respectively. Otherwise, operation is identical to that of the SR latch. Historically, S'R' latches have been predominant despite the notational inconvenience of active-low inputs.
S'R' latch operation
S' R' | Action
0  0  | Restricted combination
0  1  | Q = 1
1  0  | Q = 0
1  1  | No change
JK latch
The JK latch is much less used than the JK flip-flop. The JK latch follows the state table below:
JK latch truth table
J  K  |  Qnext  |  Comment
0  0  |   Q     |  No change
0  1  |   0     |  Reset
1  0  |   1     |  Set
1  1  |   Q'    |  Toggle
Hence, the JK latch is an SR latch that is made to toggle its output when passed the restricted combination of 11. Unlike the JK flip-flop, in the JK latch the toggle state is not useful, because the speed of the toggling is not governed by a clock.
Gated SR latch
Symbol for a gated SR latch.
Gated SR latch operation (E is the enable input):
E  S  R  |  Action
0  X  X  |  No action (keep state)
1  0  0  |  No change
1  0  1  |  Q = 0
1  1  0  |  Q = 1
1  1  1  |  Restricted combination

Gated D latch
Symbol for a gated D latch.
Gated D latch operation (E/C is the enable/clock input):
E/C  D  |  Q      |  Comment
 0   X  |  Qprev  |  No change
 1   0  |  0      |  Reset
 1   1  |  1      |  Set
The truth table shows that when the enable/clock input is 0, the D input has no effect on the
output. When E/C is high, the output equals D.
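That behaviour is a one-liner in Python (a sketch; the function name is ours):

```python
def gated_d_latch(d: int, enable: int, q: int) -> int:
    """Gated D latch: while the enable/clock input is 1 the output follows
    the D input; while it is 0 the latch holds its previous state q."""
    return d if enable else q
```

With enable low the previous state q passes through unchanged; with enable high the output tracks D.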
1. SR Flip Flop
The design of such a flip-flop includes two inputs, called SET [S] and RESET [R]. There are also two outputs, Q and Q'. The diagram and truth table are shown below.
A clock pulse [CP] is given to the inputs of the AND gates. When the value of the clock pulse is 0, the outputs of both AND gates remain 0. As soon as a pulse arrives, the value of CP turns 1, which lets the values at S and R pass through to the NOR-gate flip-flop. But when the values of both S and R are 1, the HIGH value of CP causes both of them to turn to 0 for a short moment as the pulse is removed, and the flip-flop state becomes indeterminate: either of the two states may result, depending on whether the set or reset input of the flip-flop remains a 1 longer than the transition to 0 at the end of the pulse. This invalid state must therefore be avoided.
2. D Flip Flop
The circuit diagram and truth table are given below.
The D flip-flop is actually a slight modification of the clocked SR flip-flop explained above. From the figure you can see that the D input is connected to the S input, and the complement of the D input is connected to the R input. The D input is passed on to the flip-flop when the value of CP is 1. When CP is HIGH, if D is 1 the flip-flop moves to the SET state, and if D is 0 it switches to the CLEAR state.
4. T Flip Flop
This is a much simpler version of the JK flip-flop. The J and K inputs are connected together, so it is also called a single-input JK flip-flop. When a clock pulse is given to the flip-flop, the output begins to toggle. Here also the restriction on the pulse width can be eliminated with a master-slave or edge-triggered construction. Take a look at the circuit and truth table below.
Working
When Clk = 1, the slave JK flip-flop is disabled, because the Clk input of the slave is the opposite of that of the master. The master flip-flop's output will therefore be recognized by the slave flip-flop only when the Clk value becomes 0. Thus, when the clock pulse makes a transition from 1 to 0, the locked outputs of the master flip-flop are fed through to the inputs of the slave flip-flop, making this flip-flop edge- or pulse-triggered. To understand better, take a look at the timing diagram illustrated below.
Now we can see from that truth table that to keep the output at 0 (a 0 to 0 transition), we can set inputs S, R to 0, 0 or 0, 1; we can write both combinations as 0, X, meaning we just need S = 0 while R can take either of the two possible values.
Similarly, for an output change from 0 to 1, we keep the inputs at S = 1, R = 0. We can find the other cases the same way, and we get the excitation table.
RS flip-flop to D flip-flop:
Let's first derive the D flip-flop from the RS flip-flop, which we have already analysed:
We first write the truth table for the required D flip-flop.
Now we need an arrangement that maps input D to inputs R, S such that we get the same output with the RS FF as with the D FF. So we combine the two tables given above, placing rows with the same outputs in the same row.
Now we design the combinational circuit that converts the D input to the SR inputs, using a K-map.
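The K-map reduction for this conversion gives S = D and R = D'. That result can be checked with a small Python model (a sketch; function names are ours):

```python
def sr_next(s: int, r: int, q: int) -> int:
    """SR flip-flop next-state: Q+ = S + R'.Q (S = R = 1 forbidden)."""
    assert not (s and r)
    return s | ((r ^ 1) & q)

def d_from_sr(d: int, q: int) -> int:
    """D flip-flop built from an SR flip-flop using the mapping S = D,
    R = D' derived above: the next state always equals D."""
    return sr_next(s=d, r=d ^ 1, q=q)
```

For every combination of D and the present state Q, the converted flip-flop's next state equals D, which is exactly the D flip-flop truth table.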
RS flip-flop to JK flip-flop:
We first write the truth table for the required flip-flop, i.e. the JK FF.
D flip-flop to RS flip-flop:
We first write the truth table for the required flip-flop, i.e. the RS FF.
D to T and T to D FF
Similarly we get the circuits as follows:
D FF to T FF:
T FF to D FF:
Note: We have not shown the clock, but the clock signal can be attached to the given FF.
UNIT4
SEQUENTIAL CIRCUIT DESIGN AND ANALYSIS
Steps in Design of a Sequential Circuit:
1. Specification: A description of the sequential circuit, including a detailing of the inputs, the outputs, and the operation. Assumes knowledge of digital system basics.
2. Formulation: Generate a state diagram and/or a state table from the statement of the problem.
3. State Assignment: From the state table, assign binary codes to the states.
4. Flip-flop Input Equation Generation: Select the type of flip-flop for the circuit and generate the inputs needed for the required state transitions.
5. Output Equation Generation: Derive output logic equations for generation of the output from the inputs and current state.
6. Optimization: Optimize the input and output equations. Today, CAD systems are typically used for this in real systems.
Mealy and Moore
1. Sequential machines are typically classified as either a Mealy machine or a Moore machine
implementation.
2. Moore machine: The outputs of the circuit depend only upon the current state of the circuit.
3. Mealy machine: The outputs of the circuit depend upon both the current state of the circuit and the
inputs.
An example to go through the steps
The specification: The circuit will have one input, X, and one output, Z. The output Z will be 0 except when the sequence 1101 makes up the last 4 inputs received on X; in that case it will be 1.
Add a state D
State D means we have detected the third input of the sequence, a 0, so we now have 110. From state D, if the next input is a 1, the sequence has been detected and a 1 is output.
The previous diagram was incomplete: in each state the next input could be a 0 or a 1, and both cases must be included.
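The completed state diagram can be simulated directly. Below is a Mealy-style Python walk through states A (nothing matched), B (saw 1), C (saw 11) and D (saw 110), emitting Z for each input bit (a sketch; the function name is ours, and we assume overlapping detections are allowed, as the state diagram implies):

```python
def detect_1101(bits):
    """Mealy '1101' detector: Z = 1 exactly when the current input bit
    completes the sequence 1101 (overlap allowed)."""
    state = "A"                               # A: start, B: ...1, C: ...11, D: ...110
    z = []
    for x in bits:
        if state == "A":
            state, out = ("B", 0) if x else ("A", 0)
        elif state == "B":
            state, out = ("C", 0) if x else ("A", 0)
        elif state == "C":
            state, out = ("C", 0) if x else ("D", 0)   # another 1 keeps us at '11'
        else:  # state D: a 1 completes 1101 (and also starts a new '1')
            state, out = ("B", 1) if x else ("A", 0)
        z.append(out)
    return z
```

Feeding in 1101101 produces Z = 1 at the fourth and seventh bits, the two positions where the last four inputs are 1101.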
Registers:
A register is a group of 1-bit memory cells. To make an N-bit register we need N 1-bit memory cells.
Register with parallel load: We can represent a simple 4-bit register as shown: we present the values to be stored at the inputs, and that value is stored at the next clock pulse.
But in this circuit we have to maintain the inputs to keep the outputs unchanged, as the D flip-flop has no input condition for "output unchanged". Hence we modify the above circuit with an extra input, LOAD: LOAD = 1 means there is new input data to be stored, and LOAD = 0 means we keep the stored data the same. The modified circuit is shown.
Serial-in to Parallel-out (SIPO) - the register is loaded with serial data, one bit at a time, with the stored data being available in parallel form.
Serial-in to Serial-out (SISO) - the data is shifted serially "IN" and "OUT" of the register, one bit at a time, in either a left or right direction under clock control.
Parallel-in to Serial-out (PISO) - the parallel data is loaded into the register simultaneously and is shifted out of the register serially, one bit at a time, under clock control.
Parallel-in to Parallel-out (PIPO) - the parallel data is loaded simultaneously into the register and transferred together to their respective outputs by the same clock pulse.
The effect of data movement from left to right through a shift register can be presented graphically as shown.
Also, the directional movement of the data through a shift register can be to the left (left shifting), to the right (right shifting), left-in but right-out (rotation), or both left and right within the same register, thereby making it bidirectional. In this tutorial it is assumed that all the data shifts to the right (right shifting).
Serial-in to Parallel-out (SIPO)

Clock Pulse | QA  QB  QC  QD
     0      |  0   0   0   0
     1      |  1   0   0   0
     2      |  0   1   0   0
     3      |  0   0   1   0
     4      |  0   0   0   1
     5      |  0   0   0   0
Note that after the fourth clock pulse has ended, the 4 bits of data (0001) are stored in the register and will remain there provided clocking of the register has stopped. In practice the input data to the register may consist of various combinations of logic "1" and "0". Commonly available SIPO ICs include the standard 8-bit 74LS164 or the 74LS594.
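The SIPO table above can be reproduced with a short Python model that shifts one bit in per clock pulse (a sketch; the function name is ours, and we take index 0 of the register list to be QA):

```python
def sipo_shift(serial_in, stages: int = 4):
    """Serial-in, parallel-out shift register: feed bits in one per clock
    pulse and record the parallel outputs QA..QD after each pulse."""
    reg = [0] * stages
    history = []
    for bit in serial_in:
        reg = [bit] + reg[:-1]        # each clock pulse shifts the data right
        history.append(list(reg))
    return history
```

Feeding in the serial stream 1, 0, 0, 0 reproduces rows 1 to 4 of the table, with the single "1" marching from QA to QD.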
Parallel-in to Serial-out (PISO)
As this type of shift register converts parallel data, such as an 8-bit data word, into serial format, it can be used to multiplex many different input lines into a single serial data stream which can be sent directly to a computer or transmitted over a communications line. Commonly available ICs include the 74HC166 8-bit Parallel-in/Serial-out Shift Register.
Parallel-in to Parallel-out (PIPO)
The PIPO shift register is the simplest of the four configurations, as it has only three connections: the parallel input (PI), which determines what enters the flip-flops, the parallel output (PO), and the sequencing clock signal (Clk).
Similar to the Serial-in to Serial-out shift register, this type of register also acts as a temporary storage device or as a time delay device, with the amount of time delay being varied by the frequency of the clock pulses. Also, in this type of register there are no interconnections between the individual flip-flops, since no serial shifting of the data is required.
Counters:
A counter is a device which stores (and sometimes displays) the number of times a particular event or process has occurred, often in relationship to a clock signal. A digital counter is a set of flip-flops whose states change in response to pulses applied at the input to the counter. Counters may be asynchronous or synchronous; asynchronous counters are also called ripple counters.
In electronics, counters can be implemented quite easily using register-type circuits such as flip-flops, and a wide variety of classifications exist:
1. Asynchronous (ripple) counter - changing state bits are used as clocks to subsequent state flip-flops
2. Synchronous counter - all state bits change under control of a single clock
3. Decade counter - counts through ten states per stage
4. Up/down counter - counts both up and down, under command of a control input
5. Ring counter - formed by a shift register with a feedback connection in a ring
6. Johnson counter - a twisted ring counter
7. Cascaded counter
8. Modulus counter
Each is useful for different applications. Usually, counter circuits are digital in nature and count in natural binary. Many types of counter circuit are available as digital building blocks.
Asynchronous counters:
The simplest asynchronous (ripple) counter is a single D-type flip-flop with its D (data) input fed from its own inverted output. This circuit can store one bit, and hence can count from zero to one before it overflows (starts over from 0). This counter will increment once for every clock cycle and takes two clock cycles to overflow, so every cycle it will alternate between a transition from 0 to 1 and a transition from 1 to 0. Notice that this creates a new clock with a 50% duty cycle at exactly half the frequency of the input clock. If this output is then used as the clock signal for a similarly arranged D flip-flop (remembering to invert the output to the input), one will get another 1-bit counter that counts half as fast. Putting them together yields a two-bit counter:
Two-bit ripple up-counter using negative-edge-triggered flip-flops:
The two-bit ripple counter uses two flip-flops. There are four possible states for 2-bit up-counting, i.e. 00, 01, 10 and 11. The counter is initially assumed to be at state 00, where the outputs of the two flip-flops are noted as Q1Q0; Q1 forms the MSB and Q0 the LSB.
At the negative edge of the first clock pulse, the first flip-flop FF1 toggles its state: Q1 remains at 0 and Q0 toggles to 1, and the counter state is now read as 01. At the next negative edge of the input clock pulse, FF1 toggles again and Q0 = 0. The output Q0 acts as the clock signal for the second flip-flop FF2, and this transition acts as a negative edge for FF2, which toggles its state to Q1 = 1. The counter state is now read as 10. At the next negative edge of the input clock to FF1, output Q0 toggles to 1; this transition from 0 to 1, being a positive edge for FF2, leaves Q1 at 1, and the counter state is now read as 11. At the next negative edge of the input clock, Q0 toggles to 0; this transition from 1 to 0 acts as a negative edge clock for FF2, and its output Q1 toggles to 0. Thus the starting state 00 is attained again, as shown in the figure below.
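The toggle-and-ripple behaviour just described can be simulated in a few lines of Python (a sketch; the function name and bit ordering, with q[0] as Q0, are ours):

```python
def ripple_count(pulses: int, bits: int = 2):
    """Negative-edge-triggered ripple up-counter: each stage toggles when
    the previous stage's output falls from 1 to 0."""
    q = [0] * bits                    # q[0] is the LSB (Q0)
    counts = []
    for _ in range(pulses):
        for i in range(bits):
            q[i] ^= 1                 # this stage toggles...
            if q[i] == 1:             # ...and only a 1 -> 0 fall ripples onward
                break
        counts.append(sum(bit << i for i, bit in enumerate(q)))
    return counts
```

With bits = 2 it steps through 01, 10, 11, 00 exactly as traced above; with bits = 3 it behaves as the MOD-8 counter discussed later.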
A 2-bit down-counter counts in the order 0, 3, 2, 1, 0, 3, ..., i.e. 00, 11, 10, 01, 00, 11, ..., etc. The figure above shows a ripple down-counter using negative-edge-triggered JK FFs, along with its timing diagram.
For down counting, Q1' of FF1 is connected to the clock of FF2. Assume the counter starts at 00. At the first negative clock edge FF1 toggles, so Q1 goes from 0 to 1 and Q1' goes from 1 to 0.
The negative-going signal at Q1' is applied to the clock input of FF2 and toggles FF2; therefore Q2 goes from 0 to 1. So, after one clock pulse Q2 = 1 and Q1 = 1, i.e. the state of the counter is 11.
At the negative-going edge of the second clock pulse, Q1 changes from 1 to 0 and Q1' from 0 to 1.
This positive-going signal at Q1' does not affect FF2, and therefore Q2 remains at 1. Hence the state of the counter after the second clock pulse is 10.
The synchronous Ring Counter example above is preset so that exactly one data bit in the register is set to logic "1", with all the other bits reset to "0". To achieve this, a "CLEAR" signal is first applied to all the flip-flops together in order to "RESET" their outputs to a logic "0" level, and then a "PRESET" pulse is applied to the input of the first flip-flop (FFA) before the clock pulses are applied. This places a single logic "1" value into the circuit of the ring counter. On each successive clock pulse, the counter circulates the same data bit between the four flip-flops, over and over again around the "ring", once every fourth clock cycle. But in order to cycle the data correctly around the counter we must first "load" the counter with a suitable data pattern, as all logic "0"s or all logic "1"s output at each clock cycle would make the ring counter invalid. This type of data movement is called "rotation", and like the previous shift register, the effect of the movement of the data bit from left to right through a ring counter can be presented graphically as follows, along with its timing diagram:
Since the ring counter example shown above has four distinct states, it is also known as a "modulo-4" or "mod-4" counter, with each flip-flop output having a frequency equal to one-fourth (1/4) that of the main clock frequency.
The "MODULO" or "MODULUS" of a counter is the number of states the counter counts or sequences through before repeating itself, and a ring counter can be made to output any modulo number. A "mod-n" ring counter requires "n" flip-flops connected together to circulate a single data bit, providing "n" different output states. For example, a mod-8 ring counter requires eight flip-flops and a mod-16 ring counter would require sixteen flip-flops. However, as in our example above, only four of the possible sixteen states are used, making ring counters very inefficient in terms of their output state usage.
This inversion of Q before it is fed back to input D causes the counter to "count" in a different way. Instead of counting through a fixed set of patterns like the normal ring counter, such as "0001" (1), "0010" (2), "0100" (4), "1000" (8) and repeat for a 4-bit counter, the Johnson counter counts up and then down as the initial logic "1" passes through it to the right, replacing the preceding logic "0". A 4-bit Johnson ring counter passes blocks of four logic "0"s and then four logic "1"s, thereby producing an 8-bit pattern. As the inverted output Q' is connected to the input D, this 8-bit pattern continually repeats: for example, "1000", "1100", "1110", "1111", "0111", "0011", "0001", "0000", as demonstrated in the following table.
Clock Pulse | FFA  FFB  FFC  FFD
     0      |  0    0    0    0
     1      |  1    0    0    0
     2      |  1    1    0    0
     3      |  1    1    1    0
     4      |  1    1    1    1
     5      |  0    1    1    1
     6      |  0    0    1    1
     7      |  0    0    0    1
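A quick Python sketch of the twisted-ring feedback reproduces this table (the function name is ours; index 0 of the register list is FFA):

```python
def johnson_states(stages: int = 4):
    """4-stage Johnson (twisted-ring) counter: the inverted output of the
    last flip-flop feeds the first, giving 2n states per full cycle."""
    reg = [0] * stages
    states = [list(reg)]
    for _ in range(2 * stages - 1):
        reg = [reg[-1] ^ 1] + reg[:-1]   # NOT(QD) shifted into FFA
        states.append(list(reg))
    return states
```

Starting from 0000 it walks through all eight states in the table, confirming that an n-stage Johnson counter has 2n states rather than the n states of a plain ring counter.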
As well as counting or rotating data around a continuous loop, ring counters can also be used to detect or recognise various patterns or number values within a set of data. By connecting simple logic gates, such as AND or OR gates, to the outputs of the flip-flops, the circuit can be made to detect a set number or value. Standard 2-, 3- or 4-stage Johnson ring counters can also be used to divide the frequency of the clock signal by varying their feedback connections, and divide-by-3 or divide-by-5 outputs are also available.
A 3-stage Johnson ring counter can also be used as a 3-phase, 120-degree phase-shift square-wave
generator by connecting to the data outputs at A, B and NOT-B. The standard 5-stage Johnson counter,
such as the commonly available CD4017, is generally used as a synchronous decade counter/divider
circuit. The smaller 2-stage circuit is also called a "quadrature" (sine/cosine) oscillator/generator and
is used to produce four individual outputs that are each phase-shifted by 90 degrees with respect to
each other, as shown below.
Also note that the output pulse has half the original frequency of the clock. Hence we can say that the
flip-flop acts as a divide-by-2 circuit.
Ripple counter:
We can attach more flip-flops to make a larger counter. We simply cascade flip-flops, feeding the
output of the first to the clock of the second, the output of the second to the clock of the third, and so on.
This way every flip-flop divides the frequency of its clock by 2, and hence we can obtain a divide-by
circuit of a larger value. Let's see how we can make larger counters:
The following waveforms illustrate how the above circuit counts. It is actually a MOD-8
counter, so it counts from 0 to 7 and then resets itself, as shown:
With every negative edge the count is incremented, and when the count reaches 7 the next edge resets
the value to 0.
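The ripple (asynchronous) behavior can be sketched in a few lines; this is an illustrative model of the toggle-and-ripple action, with names of my own choosing:

```python
# Behavioral sketch of an n-bit ripple counter: each T flip-flop toggles on
# the falling edge of the previous stage's output, so stage i divides the
# clock frequency by 2**(i+1), and the whole chain counts MOD 2**n.
def ripple_count(n_bits, clock_edges):
    q = [0] * n_bits
    counts = []
    for _ in range(clock_edges):
        # a negative clock edge toggles stage 0; each 1->0 transition
        # ripples onward as a falling edge for the next stage
        for i in range(n_bits):
            q[i] ^= 1
            if q[i] == 1:   # this stage went 0->1: no falling edge, stop
                break
        counts.append(sum(bit << i for i, bit in enumerate(q)))
    return counts

print(ripple_count(3, 10))  # [1, 2, 3, 4, 5, 6, 7, 0, 1, 2]
```

The printed sequence shows the MOD-8 count wrapping from 7 back to 0, exactly as in the waveforms described above.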
Now we need to design a combinational circuit which ensures that the counter is reset after the count
value reaches 13. For this we first draw the waveforms as:
And we can clearly observe that we have achieved a MOD-14 counter, as all count values are reset after
13. In this method, however, we have to observe the output waveforms and then decide the combinational
circuit needed to reset the value after a certain count.
Mod-6 Counter
And now we draw the table to represent the desired output of the combinational circuit that resets the
FFs, with columns Q2, Q1, Q0 and OUTPUT.
UP/DOWN COUNTER
Here we'll be counting in reverse order, i.e. the count starts at 15 and goes down to 0, and then the value
returns to 15. We just make one change in the circuit: we feed Q-bar to the CLK of the next flip-flop, or we
use positive-edge-triggered flip-flops and feed Q to the CLK of the next flip-flop.
Alternatively, we can use the same circuit as the UP counter. We saw that this circuit is an UP counter
which counts from 0 to 7 and is then reset, but the same circuit can also work as a DOWN counter when we
take the count as the combination of the inverted outputs of each FF. Hence the output count of the above
circuit goes from 7 to 0, and then it is set to 7 again.
Synchronous Counter
In synchronous counters the same clock signal is applied to all the flip-flops.
MOD-4 Synchronous counter: We discuss here a 2-bit synchronous counter, with the circuit:
We have the initial outputs Q0=0 and Q1=0. When the first negative clock edge arrives, the output of
the 1st FF becomes 1: J and K of the 1st FF are tied to 1, so its output toggles from 0 to 1. But when the
1st clock edge came, the output of the 1st FF was still 0, so J and K of the 2nd FF were 0 for that edge.
The output of the 2nd FF therefore doesn't change, and we get Q1=0, so the output is Q1Q0 = 01.
On the next edge, the output of the 1st FF changes from 1 to 0, as its J and K are always 1. The inputs of
the 2nd FF for this edge are J=1 and K=1, so its output changes from 0 to 1, and we get the count Q1Q0 = 10.
Similarly, on the next edge we get the count Q1Q0 = 11.
And on the 4th clock edge both outputs are reset; we get the output Q1Q0 = 00, and the
whole procedure repeats.
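The edge-by-edge walkthrough above can be replayed in software. This sketch assumes, as the walkthrough implies, that FF2's J and K inputs are driven by Q0; the function name is my own:

```python
# Behavioral sketch of the MOD-4 synchronous counter: two JK flip-flops
# share one clock. FF1 has J=K=1 (toggles every edge); FF2 has J=K=Q0,
# sampled BEFORE the edge, so it toggles only when the old Q0 was 1.
def sync_mod4(edges):
    q0, q1 = 0, 0
    seq = []
    for _ in range(edges):
        j2 = q0        # FF2's J=K input, sampled before the clock edge
        q0 ^= 1        # FF1: J=K=1, toggles on every edge
        if j2:         # FF2 toggles only if the old Q0 was 1
            q1 ^= 1
        seq.append((q1, q0))
    return seq

print(sync_mod4(4))  # [(0, 1), (1, 0), (1, 1), (0, 0)]
```

The output matches the count sequence 01, 10, 11, 00 derived in the text; sampling the JK inputs before the edge is what makes the behavior synchronous.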
Comparing the two circuit styles: in a ripple (asynchronous) counter the flip-flop propagation delays add
up along the chain, and hence the maximum operating frequency is low; in a synchronous counter the
propagation time is only that of a single flip-flop plus its gating, and hence the maximum operating
frequency is higher.
UNIT 5
SEQUENTIAL CIRCUITS
Finite State Machine:
A finite state machine can be defined as a type of machine whose past history can affect its future
behavior in a finite number of ways. To clarify, consider the example of a binary full adder: its output
depends on the present input and the carry generated from the previous input. It may have a large
number of previous input histories, but they can be divided into just two classes: those that produce a
carry and those that do not. The most general model of a sequential circuit has inputs, outputs and
internal states, and such a circuit is referred to as a finite state machine (FSM). A finite state machine is
an abstract model that describes a synchronous sequential machine. The figure shows the block diagram
of a finite state model: X1, X2, ..., Xl are inputs; Z1, Z2, ..., Zm are outputs; Y1, Y2, ..., Yk are state
variables; and Y1', Y2', ..., Yk' represent the next state.
Limitations:
Mealy machine:
1. Its output is a function of the present state as well as the present input:
Z(t) = g{S(t), X(t)}
2. Input changes may affect the output of the circuit.
3. It requires fewer states for implementing the same function.
Mealy model:
When the output of the sequential circuit depends on both the present state of the flip-flops and on
the inputs, the sequential circuit is referred to as a Mealy circuit or Mealy machine. The figure shows the
logic diagram of the Mealy model. Notice that the output depends on the present state as well as the
present inputs. We can easily see that changes in the input during the clock pulse cannot affect the
state of the flip-flops, but they can affect the output of the circuit. If the input variations are not
synchronized with the clock, the derived output will also not be synchronized with the clock, and we get
false outputs. The false outputs can be eliminated by allowing the input to change only at the active
transition of the clock.
In general form, the Mealy circuit can be represented with its block schematic as shown in the figure below.
Moore model:
When the output of the sequential circuit depends only on the present state of the flip-flops, the
sequential circuit is referred to as a Moore circuit or Moore machine. Notice that the output
depends only on the present state; it does not depend on the input at all. The input is used only to
determine the inputs of the flip-flops; it is not used to determine the output. The circuit shown has two T
flip-flops, one input x, and one output z. It can be described algebraically by two input equations and an
output equation:
T1 = y2.x
T2 = x
Z = y1.y2
In general form, the Moore circuit can be represented with its block schematic as shown in the figure below.
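The Moore circuit just described can be simulated directly from its equations. This is a sketch under the convention that Z is read from the state after each update; the function names are my own:

```python
# Behavioral sketch of the Moore circuit above: two T flip-flops with
# T1 = y2.x, T2 = x, and output Z = y1.y2. Z is a function of the state
# only, never of x directly -- the defining Moore property.
def moore_step(y1, y2, x):
    t1, t2 = y2 & x, x
    return y1 ^ t1, y2 ^ t2      # a T flip-flop toggles when its T input is 1

def moore_run(inputs):
    y1 = y2 = 0
    outputs = []
    for x in inputs:
        y1, y2 = moore_step(y1, y2, x)
        outputs.append(y1 & y2)  # Z = y1.y2, read from the new present state
    return outputs

print(moore_run([1, 1, 1, 1]))  # [0, 0, 1, 0]
```

Note that feeding the same x value produces different Z values over time only because the state changes; this is what distinguishes it from a Mealy machine, where Z = g{S(t), X(t)} can change with x immediately.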
Successor: looking at the state diagram, when the present state is A and the input is 1, the next state is D.
This condition is specified as: D is the 1-successor of A. Similarly, we can say that A is the 1-successor of
B and C, D is the 11-successor of B and C, C is the 00-successor of A and D, D is the 000-successor of A,
and E is the 10-successor of A, or the 0000-successor of A, and so on.
Terminal state: looking at the state diagram, we observe that no input sequence exists which can
take the sequential machine out of state E, and thus state E is said to be a terminal state.
Strongly-connected machine: in sequential machines, certain subsets of states may not be reachable
from other subsets of states, even if the machine does not contain any terminal state. If for every pair
of states si, sj of a sequential machine there exists an input sequence which takes the machine M from
si to sj, then the sequential machine is said to be strongly connected.
State equivalence and machine minimization: In realizing the logic diagram from a state table or state
diagram we often come across redundant states. Redundant states are states whose functions can
be accomplished by other states. The elimination of redundant states reduces the total number of states
of the machine, which in turn reduces the number of flip-flops and logic gates, reducing
the cost of the final circuit. When two states are equivalent, one of
them can be removed without altering the input-output relationship.
State equivalence theorem: it states that two states s1 and s2 are equivalent if, for every possible input
sequence applied, the machine goes to the same next state and generates the same output. That is, if
s1(t+1) = s2(t+1) and z1 = z2, then s1 = s2.
Distinguishable states and distinguishing sequences: Two states sa and sb of a sequential machine
are distinguishable if and only if there exists at least one finite input sequence which, when applied to
the sequential machine, causes different output sequences depending on whether sa or sb is the initial
state. Consider states A and B in the state table: when the input X=0, their outputs are 0 and 1 respectively,
and therefore states A and B are called 1-distinguishable. Now consider states A and E; the output
sequence is as follows.
Here the outputs are different after the third transition, and hence states A and E are 3-distinguishable.
The concept of k-distinguishability leads directly to the definition of k-equivalence: states that are not
k-distinguishable are said to be k-equivalent.
Truth table for Distunigshable states:
PS
X=0
A
B
C
D
E
F
NS,Z
X=1
C,0
F,0
D,1 F,0
E,0
B,0
B,1
E,0
D,0 B,0
D,1 B,0
a) Merger graph
(Figure: state table of the incompletely specified machine, listing next-state/output entries NS,Z under
inputs I1 to I4 - entries such as F,1; C,0; D,0; E,1; B,1; D,1; C,1; A,0; A,1 and B,0 - together with its
merger graph.)
States A and B have non-conflicting outputs, but their successors under input I2 are compatible only if the
implied states D and E are compatible. So, draw a broken line from A to B with DE written in between.
States A and C are compatible because the next-state and output entries of states A and C are not
conflicting; therefore, a line is drawn between nodes A and C. States A and D have non-conflicting
outputs, but their successors under input I3 are B and C; hence join A and D by a broken line with BC
entered in between.
State Minimization:
Completely Specified Machines
1. Two states, si and sj, of machine M are distinguishable if and only if there exists a finite input
sequence which, when applied to M, causes different output sequences depending on whether M started
in si or sj.
2. Such a sequence is called a distinguishing sequence for (si, sj).
3. If there exists a distinguishing sequence of length k for (si, sj), they are said to be k-distinguishable.
EXAMPLE:
States A and B are 1-distinguishable, since a 1 input applied to A yields an output 1, versus an output
0 from B.
States A and E are 3-distinguishable, since the input sequence 111 applied to A yields output 100, versus
an output 101 from E.
Theorem: for every machine M there is a minimum machine Mred equivalent to M, and Mred is unique up to
isomorphism.
Goal: develop an implementation such that all computations can be assigned to transitions containing a
state for which the name of the corresponding class is changed. Suitable data structures achieve an
O(kn log n) implementation.
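The class-refinement idea behind that bound can be illustrated with the simpler O(k·n²) variant: start from the partition induced by outputs and repeatedly split blocks whose members lead to different blocks. This is my own sketch, with a made-up example machine:

```python
# Sketch of state minimization by partition refinement (the simple variant;
# Hopcroft's algorithm refines the same idea to O(k*n*log n)).
# table[state][input] = (next_state, output)
def minimize(table):
    inputs = sorted(next(iter(table.values())).keys())
    # initial partition: group states by their output vector (1-equivalence)
    blocks = {}
    for s in table:
        blocks.setdefault(tuple(table[s][x][1] for x in inputs), []).append(s)
    partition = list(blocks.values())
    while True:
        index = {s: i for i, blk in enumerate(partition) for s in blk}
        new_partition = []
        for blk in partition:
            groups = {}   # split states whose successors land in different blocks
            for s in blk:
                groups.setdefault(
                    tuple(index[table[s][x][0]] for x in inputs), []).append(s)
            new_partition.extend(groups.values())
        if len(new_partition) == len(partition):
            return new_partition      # stable: blocks are the equivalence classes
        partition = new_partition

table = {
    "A": {0: ("B", 0), 1: ("C", 0)},
    "B": {0: ("A", 0), 1: ("C", 0)},
    "C": {0: ("C", 0), 1: ("A", 1)},
}
print(minimize(table))  # [['A', 'B'], ['C']]
```

Each returned block is a set of equivalent states, so all but one state per block are redundant and can be removed without altering the input-output relationship.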
State Minimization:
Incompletely Specified Machines
Statement of the problem: given an incompletely specified machine M, find a machine M' such that,
on every input sequence for which the output of M is specified, M' produces the same output, and
there does not exist a machine M'' with fewer states than M' which has the same
property.
Machine M:
Attempt to reduce this case to the usual state minimization of completely specified machines.
Brute-force method: force the don't-cares to all their possible values and choose the smallest of
the completely specified machines so obtained.
In this example, it means state-minimizing the two completely specified machines obtained from M
by setting the don't-care entry to either 0 or 1.
Suppose that the don't-care is set to 0.
States s1 and s2 are equivalent only if s3 and s2 are equivalent, but s3 and s2 assert different outputs
under input 0, so s1 and s2 are not equivalent.
Machines M2 and M3 are formed by filling in the unspecified entry in M with 0 and 1, respectively.
Neither machine M2 nor M3 can be reduced. Conclusion? M cannot be minimized further! But is that a
correct conclusion? Note that we want to merge two states when, for any input sequence, they
generate the same output sequence, but only where both outputs are specified.
Definition: A set of states is compatible if they agree on the outputs where they are all specified.
Machine M :
In this case we have two compatible sets: A = (s1, s2) and B = (s3, s2). A reduced machine Mred can be
built as follows.
Machine Mred
PART II
Algorithmic State Machine (ASM)
The algorithmic state machine (ASM) method is a method for designing finite state machines. It is
used to represent diagrams of digital integrated circuits. The ASM diagram is like a state diagram but
less formal and thus easier to understand. An ASM chart is a method of describing the sequential
operations of a digital system.
ASM method
The ASM method is composed of the following steps:
1. Create an algorithm, using pseudocode, to describe the desired operation of the device.
2. Convert the pseudocode into an ASM chart.
ASM chart
An ASM chart consists of an interconnection of four types of basic elements: state names, states,
condition checks and conditional outputs. An ASM state, represented as a rectangle, corresponds to one
state of a regular state diagram or finite state machine. The Moore type outputs are listed inside the box.
State name: The name of the state is indicated inside the circle and the circle is placed in the top left
corner or the name is placed without the circle.
Datapath
Once the desired operation of a circuit has been described using RTL operations, the datapath
components may be derived. Every unique variable that is assigned a value in the RTL program can be
implemented as a register. Depending on the functional operation performed when assigning a value to
a variable, the register for that variable may be implemented as a straightforward register, a shift
register, a counter, or a register preceded by a combinational logic block. The combinational logic block
associated with a register may implement an adder, subtracter, multiplexer, or some other type of
combinational logic function.
Detailed ASM chart
Once the datapath is designed, the ASM chart is converted to a detailed ASM chart. The RTL notation
is replaced by signals defined in the datapath.
(Figure: Controller-Datapath block diagram. The Controller receives the external Inputs and Status
signals from the Datapath and returns Control signals; both blocks share the clock; the Datapath drives
the Outputs.)
Think about this: a microprocessor may be considered as a (large!) ASM with many inputs, states and
outputs. A program (any software) is really just a method for specifying its initial state.
The two basic strategies for the design of a controller are:
1. Hardwired control, which includes techniques such as one-hot state (also known as "one flip-flop
per state") and decoded sequence registers.
2. Microprogrammed control, which uses a memory device to produce a sequence of control words
for a datapath.
Since hardwired control is, generally speaking, fast compared with microprogramming strategies, most
modern microprocessors incorporate hardwired control to help achieve their high performance (or, in
some cases, a combination of hardwired and microprogrammed control). The early generations of
microprocessors used microprogramming almost exclusively. We will discuss some basic concepts in
microprogramming later in the course; for now we concentrate on a design example of hardwired
control. The ASM we will design is an n-bit unsigned binary multiplier.
Binary Multiplication
The design of binary multiplication strategies has a long history. Multiplication is such a fundamental
and frequently used operation in digital signal processing that most modern DSP chips have dedicated
multiplication hardware to maximize performance. Examples are filtering, coding and compression for
telecommunications and control applications, as well as many others. Multiplier units must be fast!
The first example that we considered (in class), which used a repeated-addition strategy, is not always fast.
In fact, the time required to multiply two numbers is variable and depends on the value of the
multiplier itself. For example, the calculation of 5 x 9 as 5 + 5 + 5 + 5 + 5 + 5 + 5 + 5 + 5 requires more
clock pulses than the calculation of 5 x 3 = 5 + 5 + 5. The larger the multiplier, the more iterations
are required. This is not practical. Think about this: how many iterations are required for multiplying, say,
two 16-bit numbers in the worst case?
Another approach to achieve fast multiplication is the lookup table (LUT). The multiplier and multiplicand
are used to form an address in a memory in which the corresponding, precomputed value of
the product is stored. For an n-bit multiplier (that is, multiplying an n-bit number by an n-bit number), a
(2^(2n) x 2n)-bit memory is required to hold all possible products. For example, a 4-bit x 4-bit multiplier
requires 2^8 x 8 = 2048 bits. For an 8-bit x 8-bit multiplier, a 2^16 x 16 = 1 Mbit memory is required.
This approach is conceptually simple and has a fixed multiply time equal to the access time of the
memory device, regardless of the data being multiplied. But it is also impractical for larger values of n.
Think about this: what memory capacity is required for multiplying two 16-bit numbers? Two 32-bit
numbers?
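The sizing formula is easy to tabulate; the script below (my own illustration) evaluates 2^(2n) x 2n bits for a few values of n, which also covers the sizes asked about above:

```python
# LUT-multiplier memory requirement: an n-bit x n-bit lookup table needs
# 2^(2n) words (one per operand pair) of 2n bits (the product width) each.
def lut_bits(n):
    return (2 ** (2 * n)) * (2 * n)

for n in (4, 8, 16, 32):
    print(f"{n}-bit x {n}-bit: {lut_bits(n):,} bits")
```

For n=16 the requirement is already 2^32 x 32 bits (on the order of tens of gigabits), and for n=32 it is astronomically larger, which is why the LUT approach is impractical beyond small operand widths.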
Most multiplication hardware units use iterative algorithms implemented as an ASM for which the
worst-case multiplication time can be guaranteed. The algorithm we present here is similar to the
manual (pencil-and-paper) procedure:
(Worked example: the multiplicand is multiplied by each bit of the multiplier in turn, forming the 1st,
2nd, 3rd and 4th partial products, which are summed to give the product.)
Since we multiply by only 1 or 0, each partial product is either a copy of the multiplicand
shifted by the appropriate number of places, or it is 0.
The number of partial products is the same as the number of bits in the multiplier.
The number of bits in the product is twice the number of bits in the multiplicand: multiplying
two n-bit numbers produces a 2n-bit product.
We could then design datapath hardware using a 2n-bit adder plus some other components (as in the
example of Figure 10.17 of Brown and Vranesic) that emulates this manual procedure. However, the
hardware requirement can be reduced by considering the multiplication in a different light. Our
algorithm may be informally described as follows.
Consider each bit of the multiplier from right to left. When a bit is 1, the multiplicand is added to the
running total, which is then shifted right. When the multiplier bit is 0, no add is necessary since the partial
product is 0, and only the shift takes place. After n cycles of this strategy (once for each bit in the
multiplier) the final answer is produced. Consider the previous example again:
Notice that all the adds take place in these 4 bit positions, so we need only a 4-bit adder! We also
need shifting capability to capture the bits moving to the right, as well as a way to store the
carries resulting from the additions. The final answer (the product) consists of the accumulated
sum and the bits shifted out to the right. A hardware design that can implement this algorithm is
described in the next section.
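The shift-and-add procedure just described can be sketched in software at the register level. The C, A, Q names follow the notation used in these notes, but the code itself is my illustration, not the hardware:

```python
# Register-level sketch of shift-and-add multiplication: an n-bit adder,
# registers A (accumulated sum) and Q (multiplier), and a carry bit C.
# The (2n+1)-bit quantity C:A:Q is shifted right once per multiplier bit.
def shift_add_multiply(multiplicand, multiplier, n=4):
    mask = (1 << n) - 1
    c, a, q = 0, 0, multiplier & mask
    for _ in range(n):                        # one iteration per multiplier bit
        if q & 1:                             # Q0 = 1: add multiplicand into A
            total = a + (multiplicand & mask)
            a, c = total & mask, total >> n   # the adder's Cout is caught in C
        else:
            c = 0                             # Q0 = 0: skip the add
        # shift C:A:Q right one place: C -> msb of A, lsb of A -> msb of Q
        caq = ((c << 2 * n) | (a << n) | q) >> 1
        c, a, q = 0, (caq >> n) & mask, caq & mask
    return (a << n) | q                       # the 2n-bit product sits in A:Q

print(shift_add_multiply(12, 5))  # 60
```

Only an n-bit adder is needed, exactly as argued above: the bits that have already been decided simply fall out of A into Q, where they accumulate as the low half of the product.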
(Figure: multiplier datapath. An n-bit parallel adder with inputs A and B, carry-in Cin and carry-out
Cout; a carry flip-flop C; shift register A with a left serial input and Clear; shift register Q, which is
loaded with the n-bit Multiplier; a log2(n)-bit binary down-counter P with zero detect; and Load, Shift
and Clear control lines. Q0, the lsb of register Q, is examined by the controller, and register A supplies
the lsb input of the adder. Register A holds the msb's of the Product and register Q the lsb's.)
(Figure: ASM chart. In state IDLE, on Init: C = 0, A = 0, P = n-1 and Q = multiplier, then go to state
MUL0. In MUL0, if Q0 = 1 then A = A + multiplicand and C = Cout, else C = 0.)
The process is achieved with 3 states (IDLE, MUL0 and MUL1). Each state provides control signals
to the datapath to perform the multiplication sequence. The process is started with an input G. As long
as G remains LO, the ASM remains in state IDLE. When G=1, the multiplication process is started: as
the ASM moves to state MUL0, the carry flip-flop is cleared (C = 0), register A is cleared (A = 0), the
counter is preset to n-1 (P = n-1) and register Q is loaded with the Multiplier.
Datapath component   Operation                    Control signal name
Carry flip-flop      C = 0                        Clear_C
                     C = Cout (from the adder)    Load
Counter P            P = n - 1                    Initialize
                     P = P - 1                    Shift_dec
Register A           A = 0                        Initialize
                     A = A + multiplicand         Load
                     CAQ = shr(CAQ)               Shift_dec
Register Q           Q = multiplier               Initialize
                     CAQ = shr(CAQ)               Shift_dec
(State transition diagram: the machine stays in IDLE while G=0 and moves to MUL0 when G=1;
MUL0 always moves to MUL1; MUL1 returns to MUL0 while z=0 and goes back to IDLE when z=1.)
From inspection of the state transition diagram, the input equations for the D flip-flops (using
one flip-flop per state) are easily formed:
D_IDLE = G'.IDLE + MUL1.Z
D_MUL0 = IDLE.G + MUL1.Z'
D_MUL1 = MUL0
From the ASM chart and the table above, the equations for the control-signal outputs from the
controller are formed:
Initialize = G.IDLE
Clear_C = G.IDLE + MUL0.Q0'
Load = MUL0.Q0
Shift_dec = MUL1
Finally, to provide a mechanism to force the state machine into state IDLE (such as at power-up), an
asynchronous input Reset_to_IDLE is connected to the asynchronous inputs of the flip-flops. The
circuit for the controller is then simply an implementation of all of these equations, as follows:
(Figure: controller circuit. Three clocked D flip-flops, one per state: the IDLE flip-flop driven by
D_IDLE, the MUL0 flip-flop by D_MUL0 and the MUL1 flip-flop by D_MUL1. Gates form Initialize
and Clear_C from Go and IDLE, Load from MUL0 and Q0, and Shift_dec from MUL1;
Reset_to_IDLE drives the flip-flops' asynchronous inputs.)
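The controller equations can be checked by evaluating them directly. This sketch is my own; it assumes the complemented literals (G', Z', Q0') that the one-hot transition diagram implies, since prime marks are easily lost in typesetting:

```python
# One step of the one-hot controller: evaluate the D flip-flop input
# equations and the control-signal output equations for the current state.
def controller_step(state, g, z, q0):
    idle, mul0, mul1 = (state == s for s in ("IDLE", "MUL0", "MUL1"))
    # D flip-flop input equations (one flip-flop per state)
    d_idle = (not g and idle) or (mul1 and z)
    d_mul0 = (idle and g) or (mul1 and not z)
    d_mul1 = mul0
    # control-signal output equations
    signals = {
        "Initialize": g and idle,
        "Clear_C": (g and idle) or (mul0 and not q0),
        "Load": mul0 and q0,
        "Shift_dec": mul1,
    }
    # one-hot encoding guarantees exactly one D input is true
    nxt = "IDLE" if d_idle else "MUL0" if d_mul0 else "MUL1"
    return nxt, {name for name, v in signals.items() if v}

state = "IDLE"
for g, z, q0 in [(1, 0, 1), (0, 0, 1), (0, 0, 1), (0, 1, 0)]:
    state, sigs = controller_step(state, g, z, q0)
    print(state, sorted(sigs))
```

Stepping through a short input trace like this reproduces the IDLE, MUL0, MUL1 cycle and shows which control signals the datapath receives on each clock.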
(Figure: top-level Binary Multiplier. The Controller exchanges the Go and IDLE handshake lines with
the external world and sends Initialize, Clear_C, Load and Shift_dec to the Datapath. The n-bit
Multiplier and Multiplicand enter the Datapath, which produces the 2n-bit Product; Reset_to_IDLE
and the clock are common to both blocks.)
Note that the IDLE state variable has been brought to the top level, since it can be used to indicate when
the Binary Multiplier is busy. The Go and IDLE lines are called handshaking lines and are used to
coordinate the operation of the multiplier with the external world. If IDLE=1, a multiply can be started
by putting the numbers to be multiplied on the Multiplier and Multiplicand inputs and setting Go=1, at
which time the state machine jumps to state MUL0 (and therefore, simultaneously, IDLE changes to 0)
to start the process. When IDLE returns to 1, the answer is available on the Product output and another
multiplication can be started. No multiplication should be attempted while IDLE is 0.
Conclusion
This design of a Binary Multiplier is valid for any value of n. For example, for n=16, the multiplication
of two 16-bit numbers, the datapath components would simply be extended to accommodate 16 bits in
registers A and Q, and the counter would require log2(16) = 4 bits. The adder would also be required to
be 16 bits in width. However, the same controller implementation can be used, since its design is
independent of n. The multiplication time for n=16 would be 2(16) + 1 = 33 clocks. The product would
contain 32 bits.
Further refinements can be made to enhance the speed and capability of the ASM. For example, in our
algorithm, each 0 in the multiplier input data causes a shift without an add, each taking a clock pulse. If
the multiplier input contains runs of consecutive 0s, a barrel shifter could be used to implement all of
the required shifts (equal to the length of the run of 0s) in a single clock.
Think about this:
What modifications to our design would be required in order to be able to handle signed numbers?
Example: 12 x 5 (n = 4, multiplicand = 1100, multiplier = 0101)

The register contents after each clock pulse are:

Clock pulse   State   Control signals asserted   P    C   Reg A   Reg Q
    -         IDLE    (waiting, G=0)             xx   x   xxxx    xxxx
    1         IDLE    Initialize, Clear_C        11   0   0000    0101
    2         MUL0    Load (Q0=1)                11   0   1100    0101
    3         MUL1    Shift_dec                  10   0   0110    0010
    4         MUL0    Clear_C (Q0=0)             10   0   0110    0010
    5         MUL1    Shift_dec                  01   0   0011    0001
    6         MUL0    Load (Q0=1)                01   0   1111    0001
    7         MUL1    Shift_dec                  00   0   0111    1000
    8         MUL0    Clear_C (Q0=0)             00   0   0111    1000
    9         MUL1    Shift_dec, z=1             00   0   0011    1100

The machine then returns to IDLE, and the product is read as Reg A : Reg Q = 0011 1100 = 60.