
Name of the Course: PGDCA


Title of the Paper: Digital Computer Fundamentals and Computer Architecture
No. of Units: 5
Unit I: Lesson 1,2,3,4
Unit II: Lesson 5,6,7,8
Unit III: Lesson 9,10,11,12
Unit IV: Lesson 13, 14,15,16,17
Unit V: Lesson 18,19,20,21


UNIT I

Lesson 1 : Binary Systems, Digital Computers and Digital Systems

Contents:

1.0 Aims and Objectives


1.1 Introduction
1.2 Binary Systems
1.2.1 Digital Computers and Digital Systems
1.2.2 Binary Numbers
1.3 Let us Sum Up
1.4 Lesson-end Activities
1.5 Points for discussions
1.6 References

1.0 Aims and Objectives

The main objective of this lesson is to learn the block diagram of a digital computer
and the various processing units in it. The concept of binary numbers and their
representation is also discussed.

1.1 Introduction

Digital computers have made rapid scientific, industrial and commercial
advancement possible. A program is a sequence of instructions. Discrete elements of
information are represented in a digital system by physical quantities called signals.
This lesson gives the representation of decimal and binary numbers and explains the
notion of a base, or radix.

1.2 Binary Systems

1.2.1 Digital Computers and Digital Systems

A computer is a programmable device, usually electronic in nature, that can
store, retrieve, and process data.
A computer that stores data in terms of digits (numbers) and proceeds in
discrete steps from one state to the next is called a digital computer.
The most striking property of a digital computer is its generality.
It can follow a sequence of instructions.
The user can specify and change programs and/or data according to the
specific need.
As a result of this flexibility, general purpose digital computers can perform a
wide variety of information-processing tasks.

Digital System

The characteristic of a digital system is its manipulation of discrete elements of
information.
Such discrete elements may be electric impulses, decimal digits, letters of an
alphabet, arithmetic operations, punctuation marks, or any other set of
meaningful symbols.
A sequence of discrete elements forms a language.
Early digital computers were used mostly for numeric computations.
In this case, discrete elements used are the digits.
Discrete elements of information are represented in a digital system by
physical quantities called signals.
The signals in all present-day electronic digital systems have only two discrete
values and are said to be binary.
To make use of binary signals, a digital system is constructed with transistor
circuits that are either ON or OFF; such a circuit has two possible signal
values and can be built to be extremely reliable.
To simulate a physical process in a digital computer, the quantities must be
quantized.
When the variables of the process are represented by real-time continuous
signals, the latter are quantized by an analog-to-digital conversion device.
A physical system whose behavior is described by mathematical equations is
simulated in a digital computer by means of numerical methods.
When the problem to be processed is inherently discrete, as in commercial
applications, the digital computer manipulates the variables in their natural
form.

Control Unit

The control unit (often called a control system or central controller) directs the
various components of a computer.
It reads and interprets (decodes) instructions in the program one by one.
The control system decodes each instruction and turns it into a series of
control signals that operate the other parts of the computer.
Control systems in advanced computers may change the order of some
instructions so as to improve performance.

Memory, Processor, I/O Unit

The memory unit stores programs as well as input, output, and intermediate
data.
The processor unit performs arithmetic and other data-processing tasks as
specified by a program.
The input and output devices are special digital systems driven by
electromechanical parts and controlled by electronic digital circuits.
An electronic calculator is a digital system similar to a digital computer, with
the input device being a keyboard and the output device a numerical display.
A digital computer, however, is a more powerful device than a calculator.

A digital computer can accommodate many other input and output devices; it
can perform not only arithmetic computations, but logical operations as well
and can be programmed to make decisions based on internal and external
conditions.
A digital computer is an interconnection of digital modules.

Figure 1.1 A block diagram of the Digital Computer

1.2.2 Binary Numbers

The number system used by computers:

Its base is two, and any number is represented as an array of 1s and 0s, the
coefficients of powers of two.
It is used in computer systems because of the ease of representing 1 and 0 as
two voltage levels, high and low.

Numbers can be expressed using many representations.


For example, fifteen can be written as:

15 - Decimal
1111 - Binary
F - Hexadecimal
17 - Octal

Base or Radix of a number is the number of different digits which can occur at each
position in the number system

Counting is done with reference to a Base


Decimal has base of Ten
Binary has base of Two
Octal has base of Eight
Hexadecimal has base of Sixteen

Decimal: 0,1,2,3,...,9
Octal: 0,1,2,...,7
Binary: 0,1
Hexadecimal: 0-9, A-F

Position value of a digit is determined by its position in the number.


The position value of 1 in 100 is different from that of 1 in 10.
In 1002, 1 is the most significant digit and 2 is the least significant digit.
Position values can be found by raising the base of the number system to the power of
the position.
The position value of 1 in 1002 is 10^3.

A number with a decimal point is represented as


a6 a5 a4 a3 a2 a1 a0 . a-1 a-2 a-3

Example : 3456.789
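The positional weighting just described can be sketched in Python; `place_value` is a hypothetical helper name, not part of the text:

```python
# Hypothetical helper: evaluate a digit string in a given base as a sum
# of positional weights (base ** position), including a fractional part.
def place_value(digits, base=10):
    if "." in digits:
        whole, frac = digits.split(".")
    else:
        whole, frac = digits, ""
    value = 0.0
    for pos, d in enumerate(reversed(whole)):
        value += int(d, base) * base ** pos      # positions 0, 1, 2, ...
    for pos, d in enumerate(frac, start=1):
        value += int(d, base) * base ** -pos     # positions -1, -2, ...
    return value

print(place_value("3456.789"))       # the decimal example above
print(place_value("1111", base=2))   # fifteen in binary
```

The same digit, fifteen, evaluates to 15 in every base listed above.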

Decimal Binary Octal Hexadecimal


00 0000 00 0
01 0001 01 1
02 0010 02 2
03 0011 03 3
04 0100 04 4
05 0101 05 5
06 0110 06 6
07 0111 07 7
08 1000 10 8
09 1001 11 9
10 1010 12 A
11 1011 13 B
12 1100 14 C
13 1101 15 D
14 1110 16 E
15 1111 17 F

Table 1.1 Numbers with Different Bases


1.3 Let us Sum Up

The block diagram of the digital computer and its various units has been explained.
The binary digit has been described as an on/off state. The numbers with different
bases have been depicted in the form of a table.

1.4 Lesson-end Activities

1. With a neat diagram, explain the components of a digital computer.


2. Write short notes on number systems.

1.5 Points for discussions

Various units of block diagram of digital computer.

1.6 References

http://poppy.snu.ac.kr/~kchoi/class/lc_intro/number_sys.pdf.
http://www.ncb.ernet.in/education/modules/mfcs/resources/BinaryNumberSys
tems.pdf.
http://www.danbbs.dk/~erikoest/binary.htm

Lesson 2 : Number Base Conversions, Octal and Hexadecimal Numbers

Contents:

2.0 Aims and Objectives


2.1 Introduction
2.2 Number Base Systems
2.2.1 Decimal Number Base Systems
2.2.2 Octal Number Base Systems
2.2.3 Hexadecimal Number Base Systems
2.2.4 Number Base Conversion
2.2.4.1 Decimal to Binary Conversion
2.2.4.2 Binary to Decimal Conversion
2.2.4.3 Binary to Octal Conversion
2.2.4.4 Octal to Binary Conversion
2.2.4.5 Octal to Decimal Conversion
2.2.4.6 Decimal to Octal Conversion
2.2.4.7 Hexadecimal to Binary Conversion
2.2.4.8 Binary to Hexadecimal Conversion
2.2.4.9 Hexadecimal to Decimal Conversion
2.2.4.10 Decimal to Hexadecimal Conversion
2.3 Let us Sum Up
2.4 Lesson-end Activities
2.5 Points for discussions
2.6 References

2.0 Aims and Objectives

The aim of this lesson is to perform conversions from one base to another and to gain
knowledge about number systems.

2.1 Introduction

A binary number can be converted to decimal by forming the sum of the powers
of 2 of those coefficients whose value is 1. The conversion from decimal to binary or
to any other base-r system is more convenient if the number is separated into an
integer part and a fraction part, and each part is converted separately.
2.2 Number Base Systems


2.2.1 Decimal Number Base Systems
The Decimal Number System uses base 10. It includes the digits from 0 through 9.
The weighted values for each position are as follows:

10^4 10^3 10^2 10^1 10^0 10^-1 10^-2 10^-3

10000 1000 100 10 1 .1 .01 .001

The number 123 represents:


1 * 10^2 + 2 * 10^1 + 3 * 10^0 =
1 * 100 + 2 * 10 + 3 * 1 =
100 + 20 + 3 = 123

Each digit appearing to the left of the decimal point represents a value between zero
and nine, multiplied by the power of ten corresponding to its position in the number.
Digits appearing to the right of the decimal point represent a value between zero and
nine, multiplied by an increasing negative power of ten.
For example, the value 725.194 is represented as follows:

7 * 10^2 + 2 * 10^1 + 5 * 10^0 + 1 * 10^-1 + 9 * 10^-2 + 4 * 10^-3 =


7 * 100 + 2 * 10 + 5 * 1 + 1 * 0.1 + 9 * 0.01 + 4 * 0.001 =
700 + 20 + 5 + 0.1 + 0.09 + 0.004 =
725.194

2.2.2 Octal Number Base Systems

The Octal Number System:

uses base 8
includes only the digits 0 through 7 (any other digit would make the
number an invalid octal number)

The weighted value for each position is as follows:


8^5 8^4 8^3 8^2 8^1 8^0

32768 4096 512 64 8 1

2.2.3 Hexadecimal Number Base Systems

When dealing with large values, binary numbers quickly become unwieldy. The
hexadecimal (base 16) numbering system solves this problem. Hexadecimal
numbers offer two features:

hex numbers are very compact


it is easy to convert from hex to binary and binary to hex.

A different method is required to enter a hexadecimal number into the computer
system.
The Hexadecimal system is based on the binary system using a Nibble or 4-bit
boundary. In Assembly Language programming, most assemblers require the first
digit of a hexadecimal number to be 0, and we place an H at the end of the number to
denote the number base.

The Hexadecimal Number System:


uses base 16
includes only the digits 0 through 9 and the letters A, B, C, D, E, and F

In the Hexadecimal number system, the hex values greater than 9 carry the following
decimal value:

Binary Octal Decimal Hex

0000B 00Q 00 00H

0001B 01Q 01 01H

0010B 02Q 02 02H

0011B 03Q 03 03H

0100B 04Q 04 04H

0101B 05Q 05 05H

0110B 06Q 06 06H

0111B 07Q 07 07H

1000B 10Q 08 08H

1001B 11Q 09 09H

1010B 12Q 10 0AH

1011B 13Q 11 0BH

1100B 14Q 12 0CH

1101B 15Q 13 0DH

1110B 16Q 14 0EH

1111B 17Q 15 0FH

10000B 20Q 16 10H



2.2.4 Number Base Conversion

2.2.4.1 Decimal to Binary Conversion

Divide the decimal number repeatedly by 2.
The remainders form the binary number.
The first remainder is the least significant digit (LSD) and the last remainder becomes
the most significant digit (MSD).

Representation of (25)10 in the binary system:

25/2 => 12, remainder = 1
12/2 => 6, remainder = 0
6/2 => 3, remainder = 0
3/2 => 1, remainder = 1
1/2 => 0, remainder = 1
(25)10 = (11001)2

Decimal to Binary (base 2)
e.g. convert (41.375)10 to binary representation

1. Integer part

Method: Coefficients obtained from remainders of successive division by 2. The first


remainder is the least significant bit(LSB) and the last remainder is the most
significant bit(MSB).

(41)10 = (101001)2

41/2 = 20 rem 1
20/2 = 10 rem 0
10/2 = 5 rem 0
5/2 = 2 rem 1
2/2 = 1 rem 0
1/2 = 0 rem 1

Hence result: (101001)2

2. Fractional part

Method: Coefficients of binary fraction obtained by successive multiplication of


decimal fraction by 2. Coefficient will appear as integer portion of successive
multiplication.

(.375)10 = (.011)2
.375 x 2 = 0.75 0 MSB
.75 x 2 = 1.5 1
.5 x 2 = 1.0 1 LSB
Hence result: (.011)2

In general: Decimal Base r


- whole number: repeated division by r
- fractions : repeated multiplication by r
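The two rules (repeated division for the whole part, repeated multiplication for the fraction) can be sketched as follows; `decimal_to_base` is an illustrative name:

```python
def decimal_to_base(whole, frac, base, frac_digits=3):
    # Whole part: repeated division by the base; the remainders, read in
    # reverse order, form the converted digits (first remainder = LSB).
    w = ""
    while whole > 0:
        whole, rem = divmod(whole, base)
        w = str(rem) + w
    # Fractional part: repeated multiplication by the base; the integer
    # part of each product is the next digit (first digit = MSB).
    f = ""
    for _ in range(frac_digits):
        frac *= base
        digit = int(frac)
        f += str(digit)
        frac -= digit
    return (w or "0") + "." + f

print(decimal_to_base(41, 0.375, 2))  # reproduces the (41.375)10 example
```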

2.2.4.2 Binary to Decimal Conversion

It is very easy to convert from a binary number to a decimal number. Just like the
decimal system, we multiply each digit by its weighted position, and add each of the
weighted values together. For example, the binary value 1100 1010 represents:

1*2^7 + 1*2^6 + 0*2^5 + 0*2^4 + 1*2^3 + 0*2^2 + 1*2^1 + 0*2^0 =

1 * 128 + 1 * 64 + 0 * 32 + 0 * 16 + 1 * 8 + 0 * 4 + 1 * 2 + 0 * 1 =

128 + 64 + 0 + 0 + 8 + 0 + 2 + 0 =

202
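The same weighted sum can be checked in Python; the built-in `int(s, 2)` parser agrees with the manual sum:

```python
bits = "11001010"
# Multiply each bit by its positional weight 2**i and add the products.
value = sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))
print(value)         # 202
print(int(bits, 2))  # 202, via the built-in parser
```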

2.2.4.3 Binary to Octal Conversion

It is easy to convert from an integer binary number to octal. This is accomplished by:
1. Break the binary number into 3-bit sections from the LSB to the MSB.
2. Convert the 3-bit binary number to its octal equivalent.
For example, the binary value 1010111110110010 will be written:
001 010 111 110 110 010

1 2 7 6 6 2

2.2.4.4 Octal to Binary Conversion

It is also easy to convert from an integer octal number to binary. This is accomplished
by:
1. Convert each octal digit to its 3-bit binary equivalent.
2. Combine the 3-bit sections by removing the spaces.

For example, the octal value 127662 will be written:


1 2 7 6 6 2

001 010 111 110 110 010


This yields the binary number 001010111110110010 or 00 1010 1111 1011 0010 in
our more readable format.

2.2.4.5 Octal to Decimal Conversion

To convert from Octal to Decimal, multiply the value in each position by its Octal
weight and add each value. Using the value from the previous example, 127662, we
would expect to obtain the decimal value 44978.
1*8^5 2*8^4 7*8^3 6*8^2 6*8^1 2*8^0

1*32768 2*4096 7*512 6*64 6*8 2*1

32768 8192 3584 384 48 2

32768 + 8192 + 3584 + 384 + 48 + 2 = 44978

2.2.4.6 Decimal to Octal Conversion

To convert decimal to octal is slightly more difficult. The typical method to convert
from decimal to octal is repeated division by 8.

Repeated Division By 8

For this method, divide the decimal number by 8, and write the remainder on the side
as the least significant digit. This process is continued by dividing the quotient by 8
and writing the remainder until the quotient is 0. When performing the division, the
remainders which will represent the octal equivalent of the decimal number are
written beginning at the least significant digit (right) and each new digit is written to
the next more significant digit (the left) of the previous digit. Consider the number
44978.
Division Quotient Remainder Octal Number

44978 / 8 5622 2 2

5622 / 8 702 6 62

702 / 8 87 6 662

87 / 8 10 7 7662

10 / 8 1 2 27662

1/8 0 1 127662
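The repeated-division table above can be reproduced with a short loop; `to_octal` is an illustrative name:

```python
def to_octal(n):
    # Collect remainders of repeated division by 8, least significant first.
    digits = ""
    while n:
        n, rem = divmod(n, 8)
        digits = str(rem) + digits  # prepend: later remainders are more significant
    return digits or "0"

print(to_octal(44978))   # 127662, as in the table
print(int("127662", 8))  # 44978, converting back with the built-in parser
```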

2.2.4.7 Hexadecimal to Binary Conversion

To convert a hexadecimal number into a binary number, simply substitute the
corresponding four bits in binary for each hexadecimal digit in the number.

For example, to convert 0ABCDh into a binary value, simply convert each
hexadecimal digit according to the table above. The binary equivalent is:

0ABCDH = 0000 1010 1011 1100 1101

1. Convert each hex digit to its 4-bit binary equivalent.
2. Combine the 4-bit sections by removing the spaces.

For example, the hex value 0AFB2 will be written:

A F B 2

1010 1111 1011 0010


This yields the binary number 1010111110110010 or 1010 1111 1011 0010 in our
more readable format

2.2.4.8 Binary to Hexadecimal Conversion

The first step is to pad the binary number with leading zeros so that the
binary number contains a multiple of four bits.
For example, given the binary number 10 1100 1010, the first step would be to add
two bits in the MSB position so that it contains 12 bits. The revised binary value is
0010 1100 1010.
The next step is to separate the binary value into groups of four bits, e.g., 0010 1100
1010. Finally, look up these binary values in the table above and substitute the
appropriate hexadecimal digits, e.g., 2CA.
The weighted values for each position is as follows:

16^3 16^2 16^1 16^0

4096 256 16 1

It is easy to convert from an integer binary number to hex. This is accomplished by:

1. Break the binary number into 4-bit sections from the LSB to the MSB.
2. Convert the 4-bit binary number to its Hex equivalent.

For example, the binary value 1010111110110010 will be written:



1010 1111 1011 0010

A F B 2
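The padding and nibble-grouping steps can be sketched as follows; `bin_to_hex` is an illustrative name:

```python
def bin_to_hex(bits):
    # Pad with leading zeros to a multiple of 4, then map each 4-bit
    # group (nibble) to its hexadecimal digit.
    width = (len(bits) + 3) // 4 * 4
    bits = bits.zfill(width)
    nibbles = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join("0123456789ABCDEF"[int(n, 2)] for n in nibbles)

print(bin_to_hex("1010111110110010"))  # AFB2, as in the example
print(bin_to_hex("1011001010"))        # 2CA, after padding to 12 bits
```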

2.2.4.9 Hexadecimal to Decimal Conversion

To convert from Hex to Decimal, multiply the value in each position by its hex weight
and add each value. Using the value from the previous example, 0AFB2H, we would
expect to obtain the decimal value 44978.

A*16^3 F*16^2 B*16^1 2*16^0

10*4096 15*256 11*16 2*1

40960 3840 176 2


40960 + 3840 + 176 + 2 = 44978

2.2.4.10 Decimal to Hexadecimal Conversion

The typical method to convert from decimal to hex is repeated division by 16.

Repeated Division By 16

For this method, divide the decimal number by 16, and write the remainder on the
side as the least significant digit. This process is continued by dividing the quotient by
16 and writing the remainder until the quotient is 0. When performing the division,
the remainders which will represent the hex equivalent of the decimal number are
written beginning at the least significant digit (right) and each new digit is written to
the next more significant digit (the left) of the previous digit. Consider the number
44978.

Division Quotient Remainder Hex Number

44978 / 16 2811 2 2

2811 / 16 175 11 B2

175 / 16 10 15 FB2

10 / 16 0 10 0AFB2

Examples

25 => 2 x 10^1 + 5 x 10^0 (decimal)
11001 => 1 x 2^4 + 1 x 2^3 + 0 x 2^2 + 0 x 2^1 + 1 x 2^0 (binary)
19 => 1 x 16^1 + 9 x 16^0 (hexadecimal)
31 => 3 x 8^1 + 1 x 8^0 (octal)

For floating points:


(1010.011)2 = 2^3 + 2^1 + 2^-2 + 2^-3 = (10.375)10
(630.4)8 = 6 x 8^2 + 3 x 8^1 + 4 x 8^-1 = (408.5)10

2.3 Let us Sum Up

This lesson has discussed the various number systems with their different bases, and
the conversion from one base to another has been explained in detail with examples.

2.4 Lesson-end Activities

1. Convert i. (58.46)10 ii. (125.09)10 to the binary, octal and hexadecimal number
base systems.
2. Convert i. (1011.011)2 ii. (11111.1101)2 to the decimal, octal and hexadecimal
number base systems.
3. Convert i. (743)8 ii. (452.016)8 to the binary, decimal and hexadecimal number
base systems.
4. Convert i. (FF.AD)16 ii. (BC.EF)16 to the binary, octal and decimal number base
systems.
5. Convert the Following
5.1 Binary to Decimal
a. 101110
b. 1110101
c. 110110100
d. 1101101.1111
e. 10.100001
5.2 Decimal to Binary
a. 1231
b. 673.23
c. 175
d. 1998
e. 12.45
5.3 Decimal to Octal
a. 7562
b. 225.225
5.4 Octal to Decimal
a. 623.77

5.5 Decimal to Hexadecimal


a. 1938
b. 222.225
5.6 Hexadecimal to Decimal
a. 2AC5.D
5.7 Hexadecimal to Octal and Binary
a. F3A732
5.8 Octal to hexadecimal
a. 623.77
5.9 Hexadecimal to octal
a. 2AC5.D

2.5 Points for discussions

Conversion from one base to another base


Different number systems

2.6 References

http://poppy.snu.ac.kr/~kchoi/class/lc_intro/number_sys.pdf.
http://www.ncb.ernet.in/education/modules/mfcs/resources/BinaryNumberSys
tems.pdf.
http://www.danbbs.dk/~erikoest/binary.htm

Lesson 3 : Complements, Binary Codes

Contents:

3.0 Aims and Objectives


3.1 Introduction
3.2 Complements
3.2.1 Diminished Radix Complement
3.2.2 Radix Complement
3.2.3 Subtraction using R's Complement
3.2.4 Subtraction using (R-1)'s Complement
3.2.5 2's Complement
3.2.6.1 2's Complement Addition
3.2.6.2 2's Complement Subtraction
3.2.7 1's Complement
3.3 Let us Sum Up
3.4 Lesson-end Activities
3.5 Points for discussions
3.6 References

3.0 Aims and Objectives

The main objective of this lesson is to learn the concepts of complement subtraction,
such as 1's and 2's complement subtraction.
3.1 Introduction

Complements are used in digital computers for simplifying the subtraction operation
and for logical manipulation. The main complement methods are the diminished radix
complement and the radix complement. The concepts of 9's, 10's, 1's and 2's
complement subtraction are discussed in this lesson.

3.2 Complements

Complements are used in digital computers for simplifying the subtraction operation
and for logical manipulation.
There are two types of complements:
Diminished Radix (or (r-1)'s) complement
Radix (or r's) complement

3.2.1 Diminished Radix Complement

Given an n-digit number (N)r, its (r-1)'s complement is:

(r^n - 1) - N

e.g. The (r-1)'s complement, or 9's complement, of (15)10 is:

(10^2 - 1) - 15 = 99 - 15
= (84)9's

The (r-1)'s complement, or 7's complement, of (327)8 is:

(8^3 - 1) - (327)8 = 777 - 327
= (450)7's

3.2.2 Radix Complement

Given an n-digit number (N)r, its r's complement is:

r^n - N

e.g. The r's complement, or 10's complement, of (15)10 is: 10^2 - 15 = 100 - 15
= (85)10's

Technique in use: (r-1)'s complement + 1

e.g. The 8's complement of (57)8 is:
(8^2 - 1) - (57)8 = 77 - 57 (7's complement)
= (20)7's + 1 (add 1)
= (21)8's (8's complement)

3.2.3 Subtraction using R's Complement

Technique: Given two unsigned base-r numbers, M and N, each of n digits, the
subtraction M - N is done by:
1. Add M to the r's complement of N: M + (r^n - N) = (M - N) + r^n.
2. If M >= N, there will be an end carry r^n. Discard this end carry.
3. If M < N, there is no end carry. To get the normal form, take the r's
complement of the result: r^n - ((M - N) + r^n) = N - M. Then put a MINUS sign in front.

e.g.
(33)10 - (22)10 = 33 + (10^2 - 22)
= 33 + 78
= 111 (discard end carry)
= (11)10

(10)10 - (22)10 = 10 + (10^2 - 22)
= 10 + 78
= 88 (no end carry, so complement it)
= -(10^2 - 88)
= -(12)10
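The procedure above can be sketched for any base; `subtract_rs` is an illustrative name:

```python
def subtract_rs(m, n, num_digits, base=10):
    # M - N via the r's complement: add M to (base**num_digits - N).
    rn = base ** num_digits
    total = m + (rn - n)
    if total >= rn:          # end carry present: M >= N, discard the carry
        return total - rn
    return -(rn - total)     # no end carry: complement the result and negate

print(subtract_rs(33, 22, 2))  # 11
print(subtract_rs(10, 22, 2))  # -12
```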

3.2.4 Subtraction using (R-1)'s Complement

Technique: Given two unsigned base-r numbers, M and N, each of n digits, the
subtraction M - N is done by:
1. Add M to the (r-1)'s complement of N: M + (r^n - 1 - N) = (M - N - 1) + r^n.
2. If M > N, there will be an end carry r^n. Discard this end carry and add
1 to get (M - N - 1) + 1 = M - N (the end-around carry).
3. If M <= N, there is no end carry and the result is negative (or zero):
(M - N - 1) + r^n. To get the normal form, take the (r-1)'s complement:
(r^n - 1) - ((M - N - 1) + r^n) = N - M. Then put a MINUS sign in front (if the
result is not 0).

e.g.
(33)10 - (22)10 = 33 + (99 - 22)
= 33 + 77
= 110 (discard end carry, add 1)
= (10 + 1)10
= (11)10

(10)10 - (22)10 = 10 + (99 - 22)
= 10 + 77
= 87 (no end carry, so complement it)
= -(99 - 87)
= -(12)10
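The end-around-carry variant can be sketched the same way; `subtract_r1s` is an illustrative name:

```python
def subtract_r1s(m, n, num_digits, base=10):
    # M - N via the (r-1)'s complement: add M to (base**num_digits - 1 - N).
    rn = base ** num_digits
    total = m + (rn - 1 - n)
    if total >= rn:              # end carry: discard it and add 1 back
        return total - rn + 1    # the "end-around carry"
    return -((rn - 1) - total)   # no carry: complement the result and negate

print(subtract_r1s(33, 22, 2))  # 11
print(subtract_r1s(10, 22, 2))  # -12
```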

3.2.5 2's Complement
To calculate the 2's complement of an integer, invert the binary equivalent of the
number by changing all of the ones to zeroes and all of the zeroes to ones (also called
the 1's complement), and then add one.

For example,

0001 0001 (binary 17) becomes 1110 1111 (two's complement, i.e. -17)

NOT(0001 0001) = 1110 1110 (invert bits)
1110 1110 + 0000 0001 = 1110 1111 (add 1)
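The invert-and-add-one recipe can be sketched with bit masking; `twos_complement` is an illustrative name:

```python
def twos_complement(value, bits=8):
    mask = (1 << bits) - 1          # e.g. 0xFF for an 8-bit width
    inverted = value ^ mask         # 1's complement: flip every bit
    return (inverted + 1) & mask    # add one, keep the fixed width

print(format(twos_complement(0b00010001), "08b"))  # 11101111, i.e. -17
```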

3.2.6.1 2's Complement Addition

Two's complement addition follows the same rules as binary addition.

For example, 5 + (-3) = 2:

  0000 0101 = +5
+ 1111 1101 = -3
  ---------
  0000 0010 = +2

3.2.6.2 2's Complement Subtraction

Two's complement subtraction is the binary addition of the minuend to the 2's
complement of the subtrahend (adding a negative number is the same as subtracting a
positive one).

For example, 7 - 12 = -5:

  0000 0111 = +7
+ 1111 0100 = -12
  ---------
  1111 1011 = -5

3.2.7 1's Complement

Given an n-bit number x, the representation of -x in 1's complement is obtained
using:

2^n - x - 1

e.g. given an 8-bit number:

(01001100)2 = (76)10
2^8 - 76 - 1 = (179)10
= (10110011)1's

The technique is to INVERT all bits (1 to 0 and 0 to 1):

(01010101)2 = (10101010)1's

Note: given a positive n-bit binary number, its 1's complement representation is still
the same as the binary number.
e.g. +(00011001)2 = (00011001)1's

2's Complement

Given an n-bit binary number x, the representation of -x in 2's complement is
obtained using:

2^n - x

e.g. given an 8-bit number:

(01001100)2 = (76)10
2^8 - 76 = (180)10
= (10110100)2's

The technique is to INVERT all the bits and ADD 1:

e.g. (01010101)2 = (10101010)1's (invert)
= (10101011)2's (add 1)

Note: given a positive n-bit number, its 2's complement representation is still the same
as the binary number.
e.g. +(00100100)2 = (00100100)2's
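Both formulas can be checked numerically on an 8-bit example; the helper names are illustrative:

```python
def ones_complement(x, bits=8):
    return (2 ** bits - 1) - x   # same result as inverting all the bits

def twos_comp(x, bits=8):
    return 2 ** bits - x         # 1's complement + 1

print(format(ones_complement(0b01001100), "08b"))  # 10110011  (179)
print(format(twos_comp(0b01001100), "08b"))        # 10110100  (180)
```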

3.3 Let us Sum Up

Various complementary methods have been discussed such as 1s, 2s r, r-1


complement and for every method an example has been given. Diminished radix
complement and radix complement has also been discussed.

3.4 Lesson-end Activities

1. What are complements? Discuss the types of complements.


2. Write the steps to perform subtraction using the R's complement and the (R-1)'s
complement.
3. Perform 1's complement subtraction: i. (10110)2 - (10011)2 ii. (11011)2 - (11101)2.
4. Perform 2's complement subtraction: i. (11110)2 - (10111)2 ii. (10011)2 - (11001)2.
5. Obtain the 9's Complement
a. 12349876
b. 00980100
c. 90009951
d. 00000000
6. Obtain the 10's Complement
a. 123900
b. 090657
c. 100000
d. 000000
7. Obtain the 1's and 2's complement
a. 10101110
b. 10000001
c. 10000000
d. 00000001
e. 00000000
8. Perform 10's complement Subtraction
a. 5250 - 1321
b. 20 - 100
c. 1753 - 8640
d. 1200 - 250
9. Perform 2's complement Subtraction
a. 11010 - 100000
b. 100 - 110000
c. 11010 - 1101
d. 1010100 - 1010100

3.5 Points for discussions

R's Complement
(R-1)'s Complement
1's Complement
2's Complement
10's and 9's Complement

3.6 References

http://academic.evergreen.edu/projects/biophysics/technotes/program/2s_com
p.htm#calculate#calculate

Lesson 4 : Binary Codes

Contents:

4.0 Aims and Objectives


4.1 Introduction
4.2 Binary Codes
4.2.1 Decimal Codes
4.2.2 Binary Coded Decimal
4.2.3 Error Detecting Codes
4.2.4 Reflected / Gray Code
4.2.5 Excess-3-Code
4.2.6 ASCII Alphanumeric Code
4.3 Let us Sum Up
4.4 Lesson-end Activities
4.5 Points for discussions
4.6 References

4.0 Aims and Objectives

The aim of this lesson is to explain about various Binary codes, Binary coded
decimal, Error detecting codes, reflected / gray code, Excess-3-Code and ASCII
alphanumeric code.

4.1 Introduction

Electronic systems use signals that have 2 distinct values and circuit elements
that have 2 stable states. A binary digit is represented as 0 or 1. A group of 16
distinct elements requires a 4-bit code, and a group of 8 distinct elements requires a
3-bit code. BCD is a straight assignment of the binary equivalent.

4.2 Binary Codes

Electronic circuits have 2 distinct values and circuit elements have 2 stable states. A
BIT, by definition, is a Binary Digit. To represent a group of 2^n distinct elements in a
binary code requires a minimum of n bits.

4.2.1 Decimal Codes

Binary codes for decimal digits require a minimum of 4 bits.


Decimal  Sign-      Offset   2's         4221   Gray   BCD        Excess-3
         magnitude  Binary   Complement  Code   Code
  -8     ...        0000     1000        ...    ...    ...        ...
  -7     1111       0001     1001        ...    ...    ...        ...
  -6     1110       0010     1010        ...    ...    ...        ...
  -5     1101       0011     1011        ...    ...    ...        ...
  -4     1100       0100     1100        ...    ...    ...        ...
  -3     1011       0101     1101        ...    ...    ...        ...
  -2     1010       0110     1110        ...    ...    ...        ...
  -1     1001       0111     1111        ...    ...    ...        ...
   0     0000       1000     0000        0000   0000   0000 0000  0011 0011
   1     0001       1001     0001        0001   0001   0000 0001  0011 0100
   2     0010       1010     0010        0010   0011   0000 0010  0011 0101
   3     0011       1011     0011        0011   0010   0000 0011  0011 0110
   4     0100       1100     0100        1000   0110   0000 0100  0011 0111
   5     0101       1101     0101        0111   0111   0000 0101  0011 1000
   6     0110       1110     0110        1100   0101   0000 0110  0011 1001
   7     0111       1111     0111        1101   0100   0000 0111  0011 1010
   8     ...        ...      ...         1110   1100   0000 1000  0011 1011
   9     ...        ...      ...         1111   1101   0000 1001  0011 1100
  10     ...        ...      ...         ...    1111   0001 0000  0100 0011
  11     ...        ...      ...         ...    1110   0001 0001  0100 0100
  12     ...        ...      ...         ...    1010   0001 0010  0100 0101
  13     ...        ...      ...         ...    1011   0001 0011  0100 0110
  14     ...        ...      ...         ...    1001   0001 0100  0100 0111
  15     ...        ...      ...         ...    1000   0001 0101  0100 1000

4.2.2 Binary Coded Decimal



One of the most widely used representations of numerical data is the binary coded
decimal (BCD) form in which each integer of a decimal number is represented by a 4-
bit binary number (see conversion table). It is particularly useful for the driving of
display devices where a decimal output is desired. BCD usually refers to such coding
in which the binary digits have their normal values, i.e., 8421. Sometimes it is written
"8421 BCD" to clearly distinguish it from other binary codes such as the 4221 Code,
but when BCD is used without qualification, the 8421 version is assumed.


Another way of storing Decimal numbers is by a method called Binary Coded


Decimal (or BCD). Each Digit in the Decimal number is converted into a 4-bit
Binary number. Then all of the groups of 4 digits (each group representing one
decimal digit) are stuck together again

For Example:

Convert 4096 into BCD


First of all we take the Left Hand digit.
4 = 0100
Then we go through the rest of the digits in turn:
0 = 0000
9 = 1001
6 = 0110

Finally we stick all of it together in the order it appears in the original number,
i.e. 0100 0000 1001 0110
so 4096 = 0100000010010110 (BCD).

It is interesting to note that this is not the same as the binary equivalent, because:
4096 = 0001000000000000
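The digit-by-digit encoding can be sketched as follows (`to_bcd` is an illustrative name):

```python
def to_bcd(n):
    # Encode each decimal digit as its own 4-bit group.
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(4096))          # 0100 0000 1001 0110
print(format(4096, "016b"))  # 0001000000000000, the plain binary form
```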

4.2.3 Error Detecting Codes

How does the recipient know that the frame it received is correct?
What kinds of errors can we get?
o Bits can flip
o Can bits be lost?
o Is it usually single bits or sequences of bits?
The sender could send two copies. If they didn't match, the recipient could
assume they were incorrect.

o Of course, if the same bit is flipped in both copies, there's no way to


detect the error.
o There is also a lot of overhead, since we're sending two bits for every
bit of real information.
A simple solution is to use a parity bit. Every n bits, you count up the number
of 1's and add a 1 or a 0 depending on whether the count is even or odd.
A single parity bit catches only an odd number of bit errors.
o In particular, it misses any two-bit error.

Odd parity          Even parity

Message  P          Message  P
0000     1          0000     0
0001     0          0001     1
0010     0          0010     1
0011     1          0011     0
0100     0          0100     1
0101     1          0101     0
0110     1          0110     0
0111     0          0111     1

Thus if the receiver detects a parity error, it sends a negative acknowledge


message.
If no error is detected, the receiver sends back an acknowledgement message.
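A parity bit can be computed by counting 1's; the helper names are illustrative:

```python
def even_parity_bit(message):
    # Even parity: append a bit that makes the total number of 1's even.
    return message.count("1") % 2

def odd_parity_bit(message):
    # Odd parity: the complement of the even-parity bit.
    return 1 - even_parity_bit(message)

print(even_parity_bit("0011"))  # 0
print(odd_parity_bit("0011"))   # 1
```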

4.2.4 Reflected / Gray Code

A Gray code is an encoding of numbers such that adjacent numbers differ in only a
single bit. The term Gray code is often used to refer to a "reflected" code, or more
specifically still, the binary reflected Gray code.

Conversion of Binary to Gray

Put down the MSB (most significant bit) as it is.
Then add (XOR) each binary bit to the bit on its right, moving toward the LSB.

e.g. Binary 0101:
MSB: 0; 0 XOR 1 = 1; 1 XOR 0 = 1; 0 XOR 1 = 1, giving Gray 0111

Conversion of Gray to Binary

Put down the MSB as it is.
Then add (XOR) each generated binary bit to the next Gray bit, moving toward the
LSB.

e.g. Gray 0111:
MSB: 0; 0 XOR 1 = 1; 1 XOR 1 = 0; 0 XOR 1 = 1, giving Binary 0101
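The two conversion rules can be sketched on integers; the helper names are illustrative:

```python
def binary_to_gray(b):
    # Each Gray bit is the XOR of adjacent binary bits.
    return b ^ (b >> 1)

def gray_to_binary(g):
    # Undo the XOR chain by folding shifted copies back in.
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

print(format(binary_to_gray(0b0101), "04b"))  # 0111
print(format(gray_to_binary(0b0111), "04b"))  # 0101
```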

Gray codes corresponding to the first few nonnegative integers are given in the
following table.

0 0 20 11110 40 111100
1 1 21 11111 41 111101
2 11 22 11101 42 111111
3 10 23 11100 43 111110
4 110 24 10100 44 111010
5 111 25 10101 45 111011
6 101 26 10111 46 111001
7 100 27 10110 47 111000
8 1100 28 10010 48 101000
9 1101 29 10011 49 101001
10 1111 30 10001 50 101011
11 1110 31 10000 51 101010
12 1010 32 110000 52 101110
13 1011 33 110001 53 101111
14 1001 34 110011 54 101101
15 1000 35 110010 55 101100
16 11000 36 110110 56 100100
17 11001 37 110111 57 100101
18 11011 38 110101 58 100111
19 11010 39 110100 59 100110

4.2.5 Excess-3-Code

It is a 4 bit code.
In this code, a digit is represented by adding 3 to the number and then converting it to
a 4-bit binary number. It can be used for the representation of multi-digit decimal
numbers as can BCD.

Conversion of Decimal to Excess 3- Code

eg1: Convert 12 to Excess-3 code

Add 3 to each decimal digit:

1 2
3 3
4 5

Convert into BCD form:

0100 0101
Excess-3 code for 12 = (0100 0101)XS3

eg2: Convert 29 to Excess-3 code

2 9
3 3
5 12

0101 1100

Excess-3 code for 29 = (0101 1100)XS3
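The add-3-then-encode rule can be sketched as follows (`excess3` is an illustrative name):

```python
def excess3(n):
    # Add 3 to each decimal digit, then encode it as a 4-bit group.
    return " ".join(format(int(d) + 3, "04b") for d in str(n))

print(excess3(12))  # 0100 0101
print(excess3(29))  # 0101 1100
```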

4.2.6 ASCII Alphanumeric Code

The American Standard Code for Information Interchange (ASCII) is the standard
alphanumeric code for keyboards and a host of other data interchange tasks. Letters,
numbers, and single keystroke commands are represented by a seven-bit word.
Typically a strobe bit or start bit is sent first, followed by the code with LSB first.
Being a 7-bit code, it has 2^7 or 128 possible code groups.

ASCII Alphanumeric Code


Char 7 bit ASCII HEX Char 7 bit ASCII HEX Char 7 bit ASCII HEX
A 100 0001 41 a 110 0001 61 0 011 0000 30
B 100 0010 42 b 110 0010 62 1 011 0001 31
C 100 0011 43 c 110 0011 63 2 011 0010 32
D 100 0100 44 d 110 0100 64 3 011 0011 33
E 100 0101 45 e 110 0101 65 4 011 0100 34
F 100 0110 46 f 110 0110 66 5 011 0101 35
G 100 0111 47 g 110 0111 67 6 011 0110 36
H 100 1000 48 h 110 1000 68 7 011 0111 37
I 100 1001 49 i 110 1001 69 8 011 1000 38
J 100 1010 4A j 110 1010 6A 9 011 1001 39
K 100 1011 4B k 110 1011 6B blank 010 0000 20
L 100 1100 4C l 110 1100 6C . 010 1110 2E
M 100 1101 4D m 110 1101 6D ( 010 1000 28
N 100 1110 4E n 110 1110 6E + 010 1011 2B
O 100 1111 4F o 110 1111 6F $ 010 0100 24
P 101 0000 50 p 111 0000 70 * 010 1010 2A

Q 101 0001 51 q 111 0001 71 ) 010 1001 29


R 101 0010 52 r 111 0010 72 - 010 1101 2D
S 101 0011 53 s 111 0011 73 / 010 1111 2F
T 101 0100 54 t 111 0100 74 , 010 1100 2C
U 101 0101 55 u 111 0101 75 = 011 1101 3D
V 101 0110 56 v 111 0110 76 RETURN 000 1101 0D
W 101 0111 57 w 111 0111 77 LNFEED 000 1010 0A
X 101 1000 58 x 111 1000 78
Y 101 1001 59 y 111 1001 79
Z 101 1010 5A z 111 1010 7A

Problem-1

An ASCII keyboard produces the ASCII equivalent of each designated character that is typed.

PRINT X

What is the output?

P 101 0000
R 101 0010
I 100 1001
N 100 1110
T 101 0100
(space) 010 0000
X 101 1000
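The codes above can be generated directly from the character values. A small Python sketch (Python and the helper name are ours): the upper three bits and lower four bits are printed separately to match the lesson's "zone/digit" grouping:

```python
def ascii7(ch: str) -> str:
    # Split the 7-bit ASCII code into a 3-bit zone and a 4-bit digit group.
    code = ord(ch)
    return f"{code >> 4:03b} {code & 0xF:04b}"

for ch in "PRINT X":
    label = ch if ch != " " else "space"
    print(label, ascii7(ch), f"hex {ord(ch):02X}")
```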

4.3 Let us Sum Up

Various codes have been explained in detail with examples. A parity bit helps in detecting transmission errors, and the reflected (Gray) code changes by only one bit as it proceeds from one number to the next. ASCII refers to the American Standard Code for Information Interchange.
4.4 Lesson-end Activities

1 Convert the following


a. Decimal to Excess-3-Code
39
b. Gray to Binary
1100
c. Binary to Gray
1111
2 Decode the following ASCII Text
1000010 1001100 1001001 1010011
1010111 1000101

3. Discuss error detecting codes.

4.5 Points for discussions

Binary Codes
Binary Coded Decimal
Gray Code
Excess-3-Code
ASCII Code

4.6 References

Digital Logic and Computer Design, M. Morris Mano



Unit II

Lesson 5 : Boolean Algebra and Logic gates, Basic definitions, Axiomatic


Definitions of Boolean Algebra

Contents:

5.0 Aims and Objectives


5.1 Introduction
5.2 Boolean Algebra
5.2.1 Axiomatic Definition of Boolean Algebra
5.2.2 Basic laws of Boolean Algebra
5.2.3 Digital Logic Circuits
5.3 Let us Sum Up
5.4 Lesson-end Activities
5.5 Points for discussions
5.6 References

5.0 Aims and Objectives

The main objective of this lesson is to learn the basics of Boolean algebra, its postulates and laws. The basic building blocks, the digital logic circuits, are also discussed.

5.1 Introduction

Boolean algebra is a deductive mathematical system closed over the values zero and one (false and true). It was invented by George Boole and named after him. Various laws and postulates have been proved. A logic circuit has one or more input voltages but only one output voltage, and gates are often called logic circuits. The axiomatic definition of Boolean algebra is discussed.

5.2 Boolean Algebra

Boolean algebra is a deductive mathematical system closed over the values zero and
one (false and true).

A binary operator defined on a set of values accepts a pair of boolean inputs and
produces a single boolean value. For example, the boolean AND operator accepts two
boolean inputs and produces a single boolean output (the logical AND of the two
inputs).

In an algebraic system, there are some initial assumptions, or postulates, that the system follows. Boolean algebra systems often employ the following postulates:

Closure. The boolean system is closed with respect to a binary operator if for
every pair of boolean values, it produces a boolean result. For example, logical
AND is closed in the boolean system because it accepts only boolean operands
and produces only boolean results.

Commutativity. A binary operator "?" is said to be commutative if A?B = B?A


for all possible boolean values A and B.
Associativity. A binary operator "?" is said to be associative
if (A ? B) ? C = A ? (B ? C) for all boolean values A, B, and C.
Distribution. Two binary operators "?" and "%" are distributive if
A ?(B % C) = (A ? B) % (A ? C) for all boolean values A, B, and C.
Identity. A boolean value I is said to be the identity element with respect to
some binary operator "?" if A ? I = A for all boolean values A.
Inverse. A boolean value B is said to be the inverse element of A with respect to
some binary operator "?" if A ? B yields the identity element for that operator
(i.e., B is the opposite value of A in a boolean system).

5.2.1 Axiomatic Definition of Boolean Algebra

Boolean algebra: an algebraic system of logic introduced by George Boole in 1854.


Switching algebra: a 2-valued Boolean algebra introduced by Claude Shannon
in 1938.
Huntington postulates for Boolean algebra: defined on a set B with the binary
operators + and ·, and the equivalence relation = (Edward Huntington, 1904):

1(a) Closure with respect to +.
1(b) Closure with respect to ·.
2(a) Identity element 0 with respect to +.
2(b) Identity element 1 with respect to ·.
3(a) Commutative with respect to +.
3(b) Commutative with respect to ·.
4(a) · is distributive over +.
4(b) + is distributive over ·.
5. For every element x in B there exists an element x' such that x + x' = 1 and x · x' = 0.

We will also use the following set of postulates:

P1 Boolean algebra is closed under the AND, OR, and NOT operations.

P2 The identity element with respect to · is one and with respect to + is zero. There is
no identity element with respect to logical NOT.

P3 The · and + operators are commutative.

P4 · and + are distributive with respect to one another. That is, A · (B + C) = (A · B) +
(A · C) and A + (B · C) = (A + B) · (A + C).

P5 For every value A there exists a value A' such that A · A' = 0 and A + A' = 1. This
value is the logical complement (or NOT) of A.

P6 · and + are both associative. That is, (A · B) · C = A · (B · C) and (A + B) + C = A + (B + C).

5.2.2 Basic laws of Boolean Algebra



T1 : Commutative Law
(a) A + B = B + A
(b) A B = B A

T2 : Associate Law
(a) (A + B) + C = A + (B + C)
(b) (A B) C = A (B C)

T3 : Distributive Law
(a) A (B + C) = A B + A C
(b) A + (B C) = (A + B) (A + C)

T4 : Identity Law
(a) A + A = A
(b) A A = A

T5 :
(a) A B + A B' = A
(b) (A + B) (A + B') = A

T6 : Redundance Law
(a) A + A B = A
(b) A (A + B) = A

T7 :
(a) 0 + A = A
(b) 0 A = 0

T8 :
(a) 1 + A = 1
(b) 1 A = A

T9 :
(a) A' + A = 1
(b) A' A = 0

T10 :
(a) A + A' B = A + B
(b) A (A' + B) = A B

T11 : De Morgan's Theorem
(a) (A + B)' = A' B'
(b) (A B)' = A' + B'
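Because each variable only takes the values 0 and 1, any of these laws can be verified exhaustively. A short Python check (Python and the helper name `holds` are ours, not part of the lesson):

```python
from itertools import product

def holds(law) -> bool:
    # Exhaustively check a two-variable identity over {0, 1}.
    return all(law(a, b) for a, b in product((0, 1), repeat=2))

NOT = lambda a: 1 - a
assert holds(lambda a, b: (a | b) == (b | a))                  # commutative law
assert holds(lambda a, b: (a | (a & b)) == a)                  # redundance law
assert holds(lambda a, b: (NOT(a) | a) == 1)                   # A' + A = 1
assert holds(lambda a, b: NOT(a | b) == (NOT(a) & NOT(b)))     # De Morgan
print("all laws verified")
```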

Prove the Following

Question : 1

(1) Algebraically

(2) Using the truth table:

Question : 2

Complement of a Function

Complement of a variable x is x' (0 -> 1 and 1 -> 0)

The complement of a function F is F' and is obtained from an interchange of 0's for
1's and 1's for 0's in the value of F.
The dual of a function is obtained from the interchange of AND and OR operators and
1's and 0's.
The complement of a function can be obtained by De Morgan's law.
De Morgan's law with 2 variables:

(X + Y)' = X'Y'

(XY)' = X' + Y'

De Morgan's law can be extended to 3 variables:

(X + Y + Z)' = (X + A)'       where A = Y + Z

             = X'A'           De Morgan's theorem
             = X'(Y + Z)'     substituting A = Y + Z
             = X'(Y'Z')       De Morgan's theorem
             = X'Y'Z'         Associative law

Example

Find the Complement of the Function

F1 = X'YZ' + X'Y'Z
F1' = (X + Y' + Z) (X + Y + Z')

F2 = X (Y'Z' + YZ)
F2' = X' + (Y + Z) (Y' + Z')
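A complement can be verified by checking that the function and its claimed complement disagree on every input row. A sketch in Python (not part of the lesson; function names are ours), using the second example:

```python
from itertools import product

def F2(x, y, z):            # F2 = X(Y'Z' + YZ)
    return x & ((1 - y) & (1 - z) | y & z)

def F2_comp(x, y, z):       # claimed complement: X' + (Y + Z)(Y' + Z')
    return (1 - x) | (y | z) & ((1 - y) | (1 - z))

# A true complement differs from F2 on every one of the 8 input rows.
for x, y, z in product((0, 1), repeat=3):
    assert F2(x, y, z) != F2_comp(x, y, z)
print("F2' verified")
```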

5.2.3 Digital Logic Circuits

The logical element or condition must have a logic value either 0 or 1.

The logical functions are represented with some special symbols to denote these
functions in a logical diagram. There are three fundamental logical operations,
from which all other functions, no matter how complex, can be derived. These
functions are named and, or, and not.

The AND Gate

The AND gate implements the


AND function. With the gate
shown to the left, both inputs must
have logic 1 signals applied to
them in order for the output to be
a logic 1. With either input at
logic 0, the output will be held to
logic 0.

The OR Gate

The OR gate is sort of the reverse


of the AND gate. The OR
function, like its verbal
counterpart, allows the output to
be true (logic 1) if any one or
more of its inputs are true. In
symbols, the OR function is
designated with a plus sign (+). In
logical diagrams, the symbol to
the left designates the OR gate.
The NOT Gate, or Inverter

The inverter is a little different


from AND and OR gates in that it
always has exactly one input as
well as one output. Whatever
logical state is applied to the
input, the opposite state will
appear at the output.

The NOT function is denoted by a


horizontal bar over the value to be
inverted, as shown in the figure to
the left. In some cases a single
quote mark (') may also be used
for this purpose: 0' = 1 and 1' = 0.

Derived Logical Functions and Gates

While the three basic functions AND, OR, and NOT are sufficient to accomplish
all possible logical functions and operations, some combinations are used so
commonly that they have been given names and logic symbols of their own.

The first is called NAND, and consists of an AND function followed by a NOT
function. The second, as you might expect, is called NOR. This is an OR function
followed by NOT. The third is a variation of the OR function, called the Exclusive-
OR, or XOR function.

The NAND Gate

The NAND gate implements


the NAND function, which is
exactly inverted from the AND
function you already examined.
With the gate shown to the left,
both inputs must have logic 1

signals applied to them in order


for the output to be a logic 0.
With either input at logic 0, the
output will be held to logic 1.

The circle at the output of the


NAND gate denotes the logical
inversion
The NOR Gate

The NOR gate is an OR gate


with the output inverted. Where
the OR gate allows the output
to be true (logic 1) if any one or
more of its inputs are true, the
NOR gate inverts this and
forces the output to logic 0
when any input is true.

In symbols, the NOR function


is designated with a plus sign
(+), with an overbar over the
entire expression to indicate the
inversion. In logical diagrams,
the symbol to the left designates
the NOR gate. As expected, this
is an OR gate with a circle to
designate the inversion.
The Exclusive-OR, or XOR
Gate

The Exclusive-OR, or XOR


function is an interesting and
useful variation on the basic OR
function. Verbally, it can be
stated as, "Either A or B, but
not both." The XOR gate
produces a logic 1 output only
if its two inputs are different. If
the inputs are the same, the
output is a logic 0.

The XOR symbol is a variation


on the standard OR symbol. It
consists of a plus (+) sign with
a circle around it. The logic
symbol, as shown here, is a
variation on the standard OR
symbol.

The NAND and NOR gates are called universal gates, since with either one alone the
AND, OR, and NOT functions can all be generated.

Note:

A function in sum of products form can be implemented using NAND gates by


replacing all AND and OR gates by NAND gates.

A function in product of sums form can be implemented using NOR gates by


replacing all AND and OR gates by NOR gates.
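The truth tables of the three derived gates can be tabulated directly from their definitions. A quick sketch in Python (Python is not part of the lesson):

```python
from itertools import product

# Derived gates defined from AND, OR, and NOT.
NAND = lambda a, b: 1 - (a & b)
NOR  = lambda a, b: 1 - (a | b)
XOR  = lambda a, b: a ^ b

print(" A B | NAND NOR XOR")
for a, b in product((0, 1), repeat=2):
    print(f" {a} {b} |  {NAND(a, b)}    {NOR(a, b)}   {XOR(a, b)}")
```

Note that the XOR column is 1 only when the two inputs differ, matching the "either A or B, but not both" description above.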

5.3 Let us Sum Up

The concepts of logic circuits with neat graphic symbol representation has been
presented and various laws of Boolean algebra has been explained. The complement
of a function with a example has been explained.

5.4 Lesson-end Activities

1. Give the basic laws of Boolean algebra and their duals. Prove the Boolean laws.
2. Neatly, draw the basic logic gates and give their truth table.
3. What are universal gates? Why are they called so?

5.5 Points for discussions

Laws of Boolean Algebra


Complement of a Function
Logic Circuits and universal Gates

5.6 References

Digital Logic and Computer Design, M. Morris Mano



Lesson 6 : Canonical and standard forms, Other Logical operations , Digital


logical gates, IC digital logical families, semi conductor Memories

Contents:

6.0 Aims and Objectives


6.1 Introduction
6.2 Canonical and standard forms
6.2.1 Minterms and Maxterms
6.2.1.1 Sum of MinTerms
6.2.1.2 Product of MaxTerms
6.2.3 Integrated Circuits
6.2.4 Levels of Integration
6.2.5 Digital Logic Families
6.2.5.1 RAM
6.2.5.2 PROM
6.2.5.3 EPROM
6.3 Let us Sum Up
6.4 Lesson-end Activities
6.5 Points for discussions
6.6 References

6.0 Aims and Objectives

The main aim of this lesson is to learn the sum of products, product of sums,
minterms and maxterms and digital logic families.

6.1 Introduction

The lesson covers conversion from sum-of-products form to product-of-sums form, both
algebraically and using truth tables. The digital logic families section also covers RAM, PROM and
EPROM.

6.2 Canonical and standard forms

6.2.1 Minterms and Maxterms

A binary variable may appear either in its normal form (X) or in its complement form
(X'). Consider 2 binary variables X and Y combined with an AND operation. There are 4
possible combinations:
1. X'Y'
2. XY'
3. X'Y
4. XY
Each of these four AND terms represents a minterm, or standard product. n variables
can be combined to form 2^n minterms.

In a similar way, n variables forming an OR term provide 2^n combinations called
maxterms.

Expressing combinations of 0's and 1's with binary variables (normal form x or
complement form x'):
Any Boolean function can be expressed as a sum of minterms
Any Boolean function can be expressed as a product of maxterms

Minterm (or standard product):


n variables combined with AND
n variables can be combined to form 2^n minterms
two variables: x'y', x'y, xy', and xy
A variable of a minterm is primed if the corresponding bit of the binary
number is a 0, and unprimed if a 1

Maxterm (or standard sum):


n variables combined with OR
A variable of a maxterm is unprimed if the corresponding bit is a 0 and primed
if a 1

Any Boolean function can be expressed as a sum of minterms or a product of


maxterms (either 0 or 1 for each term) is said to be in a canonical form

Each maxterm is the complement of its corresponding minterm: mi' = Mi


Variables        Minterms              Maxterms
X Y Z      Term     Designation   Term        Designation
0 0 0      x'y'z'   m0            x+y+z       M0
0 0 1      x'y'z    m1            x+y+z'      M1
0 1 0      x'yz'    m2            x+y'+z      M2
0 1 1      x'yz     m3            x+y'+z'     M3
1 0 0      xy'z'    m4            x'+y+z      M4
1 0 1      xy'z     m5            x'+y+z'     M5
1 1 0      xyz'     m6            x'+y'+z     M6
1 1 1      xyz      m7            x'+y'+z'    M7
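The primed/unprimed rules above can be turned directly into code. A small Python sketch (the helper names are ours, not part of the lesson) that regenerates any row of a 3-variable table:

```python
VARS = "xyz"

def minterm(i: int) -> str:
    bits = format(i, "03b")
    # A minterm variable is primed when its bit is 0, unprimed when 1.
    return "".join(v + ("'" if b == "0" else "") for v, b in zip(VARS, bits))

def maxterm(i: int) -> str:
    bits = format(i, "03b")
    # A maxterm variable is unprimed when its bit is 0, primed when 1.
    return "+".join(v + ("'" if b == "1" else "") for v, b in zip(VARS, bits))

print(minterm(5), "|", maxterm(5))  # xy'z | x'+y+z'
```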

Consider the two functions F1 and F2 shown in the following table.

6.2.1.1 Sum of MinTerms

Each term is inspected to see whether it contains all the variables. If it misses one or
more variables, it is ANDed with an expression such as x + x', where x is one of the
missing variables.

Express the Boolean Function

F = A + B'C as a sum of minterms.

The function has 3 variables A,B,C. The first term A has 2 Missing variables B & C
and is written as

A = A(B + B') (C+C') (B + B') = 1


(C + C') = 1
A = (AB + AB') (C + C')
= ABC + ABC' + AB'C + AB'C'
The Second term B'C has one of the missing variable A
B'C (A + A')
= B'CA + B'CA'
= AB'C + A'B'C (Rewriting)

Combining all terms we get


= ABC + ABC' + AB'C + AB'C' + AB'C + A'B'C
= ABC + ABC' + AB'C + AB'C' + A'B'C      (since x + x = x)

Rearranging the minterms in Ascending order

A'B'C + AB'C' + AB'C + ABC' + ABC

m1 + m4 + m5 + m6 + m7

F(A,B,C) = Σ(1,4,5,6,7)

Σ represents the ORing of terms

Another way using truth table we get


A B C   F = A + B'C
0 0 0   0
0 0 1   1   (m1)
0 1 0   0
0 1 1   0
1 0 0   1   (m4)
1 0 1   1   (m5)
1 1 0   1   (m6)
1 1 1   1   (m7)

Put 1 under those combinations where A = 1 and where B'C = 1 (i.e., BC = 01)



6.2.1.2 Product of MaxTerms

Express the Boolean function

F = XY + X'Z as a product of maxterms


Converting the function into OR terms

F = (X+X') (X+Z) (Y+X') (Y+Z)


= 1 (X+Z) (X'+Y) (Y +Z)

Each one has missing variables

F = (X+Z) (X'+Y) (Y +Z)

X+Z  = X+Z+YY' = (X+Y+Z) (X+Y'+Z)
X'+Y = X'+Y+ZZ' = (X'+Y+Z) (X'+Y+Z')
Y+Z  = Y+Z+XX' = (X+Y+Z) (X'+Y+Z)

Combining all these terms


(X+Y+Z) (X+Y'+Z) (X'+Y+Z) (X'+Y+Z') (X+Y+Z) (X'+Y+Z)
= (X+Y+Z) (X+Y'+Z) (X'+Y+Z) (X'+Y+Z')

(X+Y+Z) and (X'+Y+Z) each appear twice but are written only once

M0 M2 M4 M5

F(X,Y,Z) = Π(0,2,4,5)

Π denotes the ANDing of maxterms

Representing through Truth Table

X Y Z   F = XY + X'Z
0 0 0   0   (M0)
0 0 1   1
0 1 0   0   (M2)
0 1 1   1
1 0 0   0   (M4)
1 0 1   0   (M5)
1 1 0   1
1 1 1   1

Represented as sum of Minterms

F(X,Y,Z) = Σ(1,3,6,7)

Product of Maxterms

F(X,Y,Z) = Π(0,2,4,5)
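Both lists can be computed mechanically from the truth table: the minterm list collects the rows where F is 1, and the maxterm list the rows where F is 0. An illustrative Python sketch (not part of the lesson) for F = XY + X'Z:

```python
from itertools import product

def F(x, y, z):
    # F = XY + X'Z
    return x & y | (1 - x) & z

rows = list(product((0, 1), repeat=3))   # rows in order 000, 001, ..., 111
minterms = [i for i, (x, y, z) in enumerate(rows) if F(x, y, z)]
maxterms = [i for i, (x, y, z) in enumerate(rows) if not F(x, y, z)]
print("F = Σ", minterms)   # [1, 3, 6, 7]
print("F = Π", maxterms)   # [0, 2, 4, 5]
```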

Standard Forms
2 types

1. Sum of Products
2. Products of sums

Sum of Products

A sum of products is a Boolean expression containing AND terms, called
product terms. The sum denotes the ORing of these terms.
Eg:
F1 = Y' + XY + X'YZ'
The three terms consist of 1, 2 and 3 literals respectively

Product of Sums

A product of sums is a Boolean expression containing OR terms called sum terms.


The product denotes the ANding of these terms.

F2 = X (Y'+Z) (X'+Y+Z'+W)

The three terms consist of 1, 2 and 4 literals respectively

6.2.3 Integrated Circuits

Digital circuits are constructed with integrated circuits. An integrated circuit is a
small silicon semiconductor crystal, called a chip, containing the electronic
components for the digital gates. The various gates are interconnected inside the chip
to form the required circuit. The chip is mounted in a ceramic or plastic container, and
connections are welded to external pins to form the integrated circuit. The number of pins
may range from 14 in a small IC package to 64 or more in a larger package. The size
of the IC package is very small.

6.2.4 Levels of Integration

Digital ICs are often categorized according to their circuit complexity as


measured by the number of logic gates in a single package.

Small-scale integration (SSI) devices contain several independent gates in a single
package. The inputs and outputs of the gates are connected directly to the pins in the
package. The number of gates is usually fewer than 10 and is limited by the number of pins
available in the IC.

Medium-scale integration (MSI) devices have a complexity of approximately 10 to


100 gates in a single package. They usually perform specific elementary digital
operators such as decoders, adders, or multiplexers.

Large-scale integration (LSI) devices contain between 100 and a few thousand
gates in a single package. They include digital systems such as processors, memory
chips, and programmable logic devices.

Very-large-scale integration (VLSI) devices contain thousands of gates within a


single package. Examples are large memory arrays and complex microcomputer chips
because of their small size and low cost, VLSI devices have revolutionized the
computer system design technology, giving the designer the capabilities to create
structures that previously were uneconomical.

6.2.5 Digital Logic Families

Digital integrated circuits are classified not only by their complexity or logical
operations, but also by the specific circuit technology to which they belong. The
circuit technology is referred to as a digital logic family. Each logic family has its
own electronic circuit technology circuit upon which more complex digital circuits
and components are developed. The basic circuit in each technology is a NAND,
NOR, or an inverter gate.

Many different logic families of digital integrated circuits have been introduced
commercially. The following are the most popular:

TTL    transistor-transistor logic
ECL    emitter-coupled logic
MOS    metal-oxide semiconductor
CMOS   complementary metal-oxide semiconductor

TTL is a widespread logic family that has been in operation for some time and is
considered as standard. ECL has an advantage in systems requiring high speed
operation. MOS is suitable for circuits that need high component density, and CMOS
is preferable in systems requiring low power consumption.

Emitter-coupled logic (ECL) circuits provides the highest speed among the
integrated digital logic families .ECL is used in systems such as super computers and
signal processors, where high speed is essential.

The metal-oxide semiconductor(MOS) is a unipolar transistor that depends up on


the flow of only one type of carrier, which may be electrons (n-channel) or holes(p-
channel).this is in contrast to the bipolar transistor used in TTL and ECL gates, where
both carriers exist during normal operation. Complementary MOS
(CMOS)technology uses one PMOS and NMOS transistor connected in a
complementary fashion in all circuits. The most important advantages of MOS over
bipolar transistors are the high packing density of circuits, a simpler processing
technique during fabrication, and a more economical operation because of the low
power consumption.

The characteristics of digital logic families are usually compared by analyzing circuit
of the basic gate in each family.

6.2.5.1 RAM

Random access memory

In random-access memory (RAM) the memory cells can be accessed for
information transfer from any desired random location. That is, the process of locating
a word in memory is the same and requires an equal amount of time no matter where
the cells are located physically in memory; thus the name "random access".

The n data input lines provide the information to be stored in memory, and the n
data output lines supply the information coming out of memory. The k address lines
provide a binary number of k bits that specify a particular word chosen among the 2^k
available inside the memory.

Assume the RAM contains r = 2^k words. It needs the following:

n data input lines

n data output lines

k address lines

A Read control line

A Write control line

Figure 6.1 Block Diagram of RAM: the k address lines and the Read and Write
control lines enter the RAM unit; the n data input lines and the n data output lines
carry the selected word.
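The address/data/control structure above can be modelled with a toy class. This is an illustrative Python sketch only (the class and method names are ours, not part of the lesson):

```python
class RAM:
    """A toy model of a 2^k-word RAM with n-bit words."""

    def __init__(self, k: int, n: int):
        self.k, self.n = k, n
        self.words = [0] * (2 ** k)   # 2^k addressable words

    def write(self, address: int, data: int):
        # The Write control line stores data; mask to n bits per word.
        self.words[address] = data & ((1 << self.n) - 1)

    def read(self, address: int) -> int:
        # The Read control line returns the word at any random location.
        return self.words[address]

ram = RAM(k=10, n=16)      # 1024 words of 16 bits each
ram.write(513, 0xBEEF)
print(hex(ram.read(513)))  # 0xbeef
```

Any address takes the same amount of "work" to reach, which is exactly the random-access property described above.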



6.2.5.2 PROM

For small quantities it is more economical to use a second type of ROM called a
programmable read-only memory, or PROM. When ordered, PROM units contain all
the fuses intact, giving all 1's in the bits of the stored words.

A blown fuse defines a binary 0 state, and an intact fuse gives a binary 1 state. All
procedures for programming ROMs are hardware procedures, even though the word
programming is used.

6.2.5.3 EPROM

A third type of ROM available is called the erasable PROM, or EPROM. The EPROM can be
restored to its initial value even though its fuses have been blown previously.
When the EPROM is placed under an ultraviolet light for a given period of time, the
shortwave radiation discharges the internal gates that serve as fuses. Certain PROMs
can be erased electrically, and these are called electrically erasable PROMs.

6.3 Let us Sum Up

The canonical forms, the concept of minterms and maxterms, conversion from one form
to another, and digital logic families have been discussed.

6.4 Lesson-end Activities

1. Explain i. Sum of minterms and ii. Product of maxterms.


2. F= xyz+xyz+xyz+xyz. Give the truth table for the function.
3. F= (A+B+C+D)(A+B+C+D)(A+B+C+D)(A+B+C+D)(A+B+C+D).
Give the truth table for the function.
4. Explain integrated circuits.
5. Write short notes on (i) RAM (ii) PROM (iii) EPROM.

6.5 Points for discussions

Minterms
Maxterms
Digital Logic Families RAM,PROM, EPROM

6.6 References

Digital Logic and Computer Design, M. Morris Mano



Lesson 7 : Simplification of Boolean Functions, The Map method, Product of


sums and sum of product

Contents:

7.0 Aims and Objectives


7.1 Introduction
7.2 Simplification of Boolean Functions
7.2.1 The Karnaugh Map Method
7.2.2 Two Variable Map
7.2.3 Three Variable Map
7.2.4 Four Variable Map
7.2.5 Overlapping Groups
7.2.6 Rolling the map
7.2.7 Eliminate the Redundant Groups
7.2.8 Product of Sums
7.2.8.1 Sum of Products
7.2.8.2 Don't Care Conditions
7.3 Let us Sum Up
7.4 Lesson-end Activities
7.5 Points for discussions
7.6 References

7.0 Aims and Objectives

The aim of this lesson is to learn the concept of minimizing Boolean functions
using the K-map method and evaluating don't-care conditions in a K-map.

7.1 Introduction

A Karnaugh map is a visual display of the fundamental products needed for a sum-of-products
solution. The map method provides a simple, straightforward procedure for
minimizing Boolean functions. It is very easy and simple to implement. The concepts
of rolling the map and eliminating redundant groups are discussed, along with don't-care
conditions.
7.2 Simplification of Boolean Functions

7.2.1 The Karnaugh Map Method

The Karnaugh map provides a simple and straight-forward method of minimizing


boolean expressions. With the Karnaugh map Boolean expressions having up to four
and even six variables can be simplified.
A Karnaugh map provides a pictorial method of grouping together expressions with
common factors and therefore eliminating unwanted variables. The Karnaugh map
can also be described as a special arrangement of a truth table.

Map is made up of squares


Each square represents one Minterm

7.2.2 Two Variable Map



The diagram below illustrates the correspondence between the Karnaugh map and the
truth table for the general case of a two variable problem.

The values inside the squares are copied from the output column of the truth table,
therefore there is one square in the map for every row in the truth table. Around the
edge of the Karnaugh map are the values of the two input variable. A is along the top
and B is down the left hand side. The diagram below explains this:

The values around the edge of the map can be thought of as coordinates. So as an
example, the square on the top right hand corner of the map in the above diagram has
coordinates A=1 and B=0. This square corresponds to the row in the truth table where
A=1 and B=0 and F=1. Note that the value in the F column represents a particular
function to which the Karnaugh map corresponds.

7.2.3 Three Variable Map

Example 1:

Consider the following map. The function plotted is: Z = f(A,B) = AB' + AB

Note that values of the input variables form the rows and columns. That is the logic
values of the variables A and B (with one denoting true form and zero denoting false
form) form the head of the rows and columns respectively.

Bear in mind that the above map is a one dimensional type which can be used to
simplify an expression in two variables.

There is a two-dimensional map that can be used for up to four variables, and a three-
dimensional map for up to six variables.

Using algebraic simplification,

Z = AB' + AB

Z = A(B' + B)

Z = A

Variable B becomes redundant due to Boolean Theorem T9a (B' + B = 1).

Referring to the map above, the two adjacent 1's are grouped together. Through
inspection it can be seen that variable B has its true and false form within the group.
This eliminates variable B leaving only variable A which only has its true form. The
minimised answer therefore is Z = A.
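A K-map simplification can always be sanity-checked by comparing the truth tables of the original and minimised expressions, since the map is just a rearranged truth table. A brute-force Python sketch (not part of the lesson; the helper name is ours), here checking that Z = A + AB reduces to A:

```python
from itertools import product

def equivalent(f, g, nvars: int) -> bool:
    # Two expressions are equal iff they agree on every input combination.
    return all(f(*bits) == g(*bits) for bits in product((0, 1), repeat=nvars))

original   = lambda a, b: a | (a & b)   # A + AB
simplified = lambda a, b: a             # Z = A
print(equivalent(original, simplified, 2))  # True
```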

Example 2:

Consider the expression Z = f(A,B) = A' + B' plotted on the Karnaugh map:

Pairs of 1's are grouped as shown above, and the simplified answer is obtained by
using the following steps:

Note that two groups can be formed for the example given above, bearing in mind that

the largest rectangular clusters that can be made consist of two 1s. Notice that a 1 can
belong to more than one group.
The first group, labelled I, consists of two 1's which correspond to A = 0, B = 0 and A
= 1, B = 0. Put another way, all squares in this example that correspond to the area
of the map where B = 0 contain 1's, independent of the value of A. So when B = 0 the
output is 1. The expression for the output will contain the term B'.

The group labelled II corresponds to the area of the map where A = 0. The group can
therefore be defined as A'. This implies that when A = 0 the output is 1. The output is
therefore 1 whenever B = 0 or A = 0.
Hence the simplified answer is Z = A' + B'

Simplify the following Boolean functions:

(a)

(b)

(c)

(e)

(f)

(g)

Minimise the following problems using the Karnaugh maps method.

(1) Z = f(A,B,C) = + B + AB + AC

(2) Z = f(A,B,C) = B + B + BC + A

Solutions

(i) Z = f(A,B,C) = + B + AB + AC

By using the rules of simplification and ringing of adjacent cells in order to make as
many variables redundant, the minimised result obtained is B + AC+

Solutions

(ii) Z = f(A,B,C) = B + B + BC + A

By using the rules of simplification and ringing of adjacent cells in order to make as
many variables redundant, the minimised result obtained is B + A

7.2.4 Four Variable Map

The map is considered to lie on a surface with the top and bottom edges, as well as the
right and left edges, touching each other to form adjacent squares.

One square -> a minterm of 4 literals.
Two adjacent squares -> a term of 3 literals.
Four adjacent squares -> a term of 2 literals.
Eight adjacent squares -> a term of 1 literal.
Sixteen adjacent squares -> the constant 1.

Simplify the Boolean Function

7.2.5 Overlapping Groups

The same 1 can be used more than once

F = W + XY'Z

       Y'Z'  Y'Z  YZ  YZ'

W'X'

W'X          1

WX     1     1    1   1

WX'    1     1    1   1

7.2.6 Rolling the map

BC'D' + BCD'
BD' (C +C')
BD'

       C'D'  C'D  CD  CD'

A'B'

A'B    1               1

AB     1               1

AB'

7.2.7 Eliminate the Redundant Groups

After the groups have been encircled, eliminate any redundant groups that have occurred. A
redundant group is one whose 1's are all already covered by other groups.

Y'Z' Y'Z YZ YZ'

W'X' 1

W'X 1 1 1

WX 1 1 1 1

WX' 1

Y'Z' Y'Z YZ YZ'

W'X' 1

W'X 1 1 1

WX 1 1 1

WX' 1

Y'Z' Y'Z YZ YZ'

W'X' 1

W'X 1 1 1

WX 1 1 1

WX' 1

The quad is encircled
The remaining 1's are paired
All the 1's of the quad are used by pairs
Because of this the quad is redundant, and so it can be eliminated

Karnaugh mapping for simplifying Boolean functions

1. Enter a 1 on the Karnaugh map for each fundamental product that produces a
1 output in the truth table; enter 0 otherwise.
2. Encircle the octets, quads and pairs; remember to roll and overlap to get the
largest possible groups.
3. If any isolated 1's remain, encircle each.
4. Eliminate any redundant groups.
5. Write the Boolean equation by ORing the products corresponding to the
encircled groups.

7.2.8 Product of Sums

7.2.8.1 Sum of Products

Simplify the Boolean Function

F = A'B'C' + B'CD' + A'BCD' + AB'C'


= B'C' + A'CD' + B'D'

       C'D'  C'D  CD  CD'

A'B'   1     1         1

A'B                    1

AB

AB'    1     1         1

The minimized Boolean functions derived from the maps in all previous examples were
expressed in sum-of-products form.
The product-of-sums form can be obtained as follows. The 1's placed in the squares of the map
represent the minterms of the function; the minterms not included in the function
denote the complement of the function. If the squares not containing a 1 are marked with
0, the 0's can be combined into valid adjacent squares to give the complement.
Simplify the Boolean Function

1. Sum Of Products
2. Product of Sums

F(A,B,C,D) = Σ(0,1,2,5,8,9,10)

The 1's marked represent the Minterm

F = B'D' + B'C' + A'C'D



C'D' C'D CD CD'

A'B' 1 1 0 1

A'B 0 1 0 0

AB 0 0 0 0

AB' 1 1 0 1

F' = AB + CD + BD'      (combining the 0's)

Applying De Morgan's theorem gives the product of sums:

F = (A' + B') (C' + D') (B' + D)

7.2.8.2 Don't Care Conditions

In some digital systems certain input conditions never occur during normal operation,
so the corresponding output also never occurs. Since the output never appears, it is
indicated by an 'X' in the truth table.
The 'X' is called a don't-care condition.
When choosing adjacent squares to simplify the function in a map, the don't-care
minterms may be assumed to be either 0 or 1, whichever produces the simplest
expression.

Simplify the Boolean Function

F(W,X,Y,Z) = Σ(1,3,7,11,15)

which has the don't-care conditions

d(W,X,Y,Z) = Σ(0,2,5)

W X Y Z F

0 0 0 0 0 X

1 0 0 0 1 1

2 0 0 1 0 X

3 0 0 1 1 1

4 0 1 0 0 0

5 0 1 0 1 X

6 0 1 1 0 0

7 0 1 1 1 1

8 1 0 0 0 0

9 1 0 0 1 0

10 1 0 1 0 0

11 1 0 1 1 1

12 1 1 0 0 0

13 1 1 0 1 0

14 1 1 1 0 0

15 1 1 1 1 1

C'D' C'D CD CD'

A'B' X 1 1 X

A'B X 1

AB 1

AB' 1

C'D' C'D CD CD'

A'B' 1 1 1 1

A'B X 1

AB 1

AB' 1

C'D' C'D CD CD'

A'B' X 1 1 X

A'B 1 1

AB 1

AB' 1

F(W,X,Y,Z) = YZ + W'X' = Σ(0,1,2,3,7,11,15)

F(W,X,Y,Z) = YZ + W'Z = Σ(1,3,5,7,11,15)
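Both answers are valid: they must be 1 on every required minterm, 0 on every minterm outside the function, and are free to differ only on the don't-cares. This can be checked exhaustively in Python (not part of the lesson; names are ours):

```python
from itertools import product

required   = {1, 3, 7, 11, 15}   # minterms that must be 1
dont_cares = {0, 2, 5}           # free to be either 0 or 1

sol1 = lambda w, x, y, z: y & z | (1 - w) & (1 - x)   # YZ + W'X'
sol2 = lambda w, x, y, z: y & z | (1 - w) & z         # YZ + W'Z

for f in (sol1, sol2):
    for i, bits in enumerate(product((0, 1), repeat=4)):
        if i in required:
            assert f(*bits) == 1        # required minterms covered
        elif i not in dont_cares:
            assert f(*bits) == 0        # everything else stays 0
print("both covers valid")
```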

7.3 Let us Sum Up

The K-map method for minimizing Boolean functions has been explained with 2-variable,
3-variable and 4-variable maps. The sum-of-products and product-of-sums forms have been
illustrated on the K-map, and don't-care conditions have been discussed with
examples.

7.4 Lesson-end Activities

Simplify the following using the K-map method.


1. F = Σ(m0, m2, m6, m7, m10, m12)
2. F(a,b,c,d) = Σ(0,1,2,5,8,9) and d(a,b,c,d) = Σ(3,14,15)
3. F = Π(M1, M5, M7) and d = Π(M0, M4)
4. F(K,L,M,N) = Σ(3,7,9,11,13,15) and d(K,L,M,N) = Σ(2,6,8)

7.5 Points for discussions

K-Map Concept
2,3,4 Variable Map
Don't Care Conditions

7.6 References

Digital Logic and Computer Design, M. Morris Mano



Lesson 8 : NAND and NOR implementation, Tabulation Method


Contents
8.0 Aims and Objectives
8.1 Introduction
8.2 NAND and NOR Implementation
8.2.1 Tabulation Method
8.3 Let us Sum Up
8.4 Lesson-end Activities
8.5 Points for discussions
8.6 References

8.0 Aims and Objectives

The main aim of this lesson is to learn the concept of universal gates and the
implementation of NAND and NOR gates using simple gates. One more method,
called the tabulation method, is discussed for reducing Boolean expressions.

8.1 Introduction

This lesson explains the building of NAND and NOR gates using simple logic gates.
The tabulation method, another simplification procedure for minimizing Boolean
expressions, is also covered.

8.2 NAND and NOR Implementation

The NAND Gate

To prove that we can construct any Boolean function using only NAND gates, we
need only show how to build an inverter (NOT), an AND gate, and an OR gate from a
NAND (since we can create any Boolean function using only AND, NOT, and OR).
Building an inverter is easy: just connect the two inputs together.

Figure 8.1 Inverter Built from a NAND Gate

Once we can build an inverter, building an AND gate is easy: just invert the output of
a NAND gate. After all, NOT(NOT(A AND B)) is equivalent to A AND B. Of
course, this takes two NAND gates to construct a single AND gate, but no one said
that circuits constructed only with NAND gates would be optimal, only that it is
possible.

Figure 8.2 Constructing an AND Gate From Two NAND Gates

The remaining gate we need to synthesize is the logical-OR gate. We can easily
construct an OR gate from NAND gates by applying DeMorgan's theorems.

(A or B)' = A' and B' DeMorgan's Theorem.

A or B = (A' and B')' Invert both sides of the equation.

A or B = A' nand B' Definition of NAND operation.

By applying these transformations, the circuit obtained is .

Figure 8.3 Constructing an OR Gate from NAND Gates
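The three constructions above can be sketched in a few lines of Python (illustrative only; the `nand` helper and the function names are mine):

```python
def nand(a, b):
    """Two-input NAND: 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def not_(a):          # inverter: both NAND inputs tied together (Figure 8.1)
    return nand(a, a)

def and_(a, b):       # AND: invert the NAND output (Figure 8.2)
    return nand(nand(a, b), nand(a, b))

def or_(a, b):        # OR: A + B = (A' NAND B') by DeMorgan (Figure 8.3)
    return nand(nand(a, a), nand(b, b))

# Exhaustive check against the built-in logic operations.
for a in (0, 1):
    assert not_(a) == 1 - a
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
```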

8.2.1 Tabulation Method

In order to understand the tabular method of minimisation, it is best to understand
the numerical assignment of Karnaugh map cells and incompletely specified
functions, also known as don't care conditions, because the tabular method is based
on these principles.

The tabular method, also known as the Quine-McCluskey method, is particularly
useful when minimising functions having a large number of variables, e.g.
six-variable functions. Computer programs have been developed employing this
algorithm. The method reduces a function in standard sum of products form to a set
of prime implicants from which as many variables are eliminated as possible. These
prime implicants are then examined to see if some are redundant.

The tabular method makes repeated use of the law A + A' = 1. Note that binary
notation is used for the function, although decimal notation is also used for the
functions. As usual a variable in true form is denoted by 1, in inverted form by 0,
and the absence of a variable by a dash ( - ).

Rules of Tabular Method

Consider a function of three variables f(A, B, C).

For instance, the minterms 000 and 001 differ in only one digit position; listing the
two minterms shows they can be combined into the single term 00-.

Now consider the minterms 001 and 010: these terms cannot be combined, because
they differ in two digit positions.

This is the FIRST RULE of the tabular method: for two terms to combine, and thus
eliminate one variable, they must differ in only one digit position.

When two terms are combined, one of the combined terms has one more digit at
logic 1 than the other combined term. This indicates that the number of 1's in a term
is significant; it is referred to as the term's index.

For example: f(A, B, C, D)

0000...................Index 0
0010, 1000.............Index 1
1010,0011,1001.......Index 2
1110, 1011.............Index 3
1111...................Index 4

The necessary condition for combining two terms is that the indices of the two terms
must differ by one, and the two terms must differ in only one bit position.
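The combining rule can be sketched as a small helper (illustrative; the `combine` name is mine, and the dash notation for an eliminated variable follows the text):

```python
def combine(t1, t2):
    """t1, t2 are strings over '0', '1', '-'. Return the combined term,
    or None if the two terms differ in more than one position."""
    diff = [i for i in range(len(t1)) if t1[i] != t2[i]]
    # exactly one differing position, and it must be a 0/1 difference,
    # not a dash against a literal
    if len(diff) == 1 and '-' not in (t1[diff[0]], t2[diff[0]]):
        i = diff[0]
        return t1[:i] + '-' + t1[i + 1:]
    return None

assert combine('0000', '0001') == '000-'   # indices 0 and 1: can combine
assert combine('0001', '0010') is None     # differ in two positions: cannot
```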

Example 1:

Consider the function: Z = f(A,B,C) = A'B'C' + A'B'C + AB'C' + AB'C



To make things easier, change the function into binary notation with index value and
decimal value.

Tabulate the index groups in a column and insert the decimal value alongside.

From the first list, combine terms that differ by 1 digit only from one index group to
the next. These terms from the first list are then separated into groups in the second
list. Note that the ticks are just there to show that one term has been combined with
another term. From the second list we can see that the expression is now reduced to:
Z = A'B' + B'C' + B'C + AB'

From the second list note that the term having an index of 0 can be combined with the
terms of index 1. Bear in mind that the dash indicates a missing variable and must line
up in order to get a third list. The final simplified expression is: Z = B'

Bear in mind that any unticked terms in any list must be included in the final
expression (none occurred here except from the last list). Note that the only prime
implicant here is Z = B'.

The tabular method reduces the function to a set of prime implicants.

Note that the above solution can be derived algebraically. Attempt this in your notes.

Example 2:

Consider the function f(A, B, C, D) = Σ(0,1,2,3,5,7,8,10,12,13,15); note that this is
in decimal form.

(0000,0001,0010,0011,0101,0111,1000,1010,1100,1101,1111) in binary form.

(0,1,1,2,2,3,1,2,2,3,4) in index form.



The prime implicants are: A'B' + B'D' + A'D + BD + AC'D' + ABC'

The chart is used to remove redundant prime implicants. A grid is prepared having all
the prime implicants listed at the left and all the minterms of the function along the
top. Each minterm covered by a given prime implicant is marked in the appropriate
position.

From the above chart, BD is an essential prime implicant. It is the only prime
implicant that covers the minterm decimal 15 and it also includes 5, 7 and 13. B'D' is
also an essential prime implicant. It is the only prime implicant that covers the
minterm denoted by decimal 10 and it also includes the terms 0, 2 and 8. The other
minterms of the function are 1, 3 and 12. Minterm 1 is present in A'B' and A'D.
Similarly for minterm 3. We can therefore use either of these prime implicants for
these minterms. Minterm 12 is present in AC'D' and ABC', so again either can be
used.

Thus, one minimal solution is: Z = B'D' + BD + A'B' + AC'D'



8.3 Let us Sum Up

The tabulation method overcomes the disadvantages of the map method, and many
computer programs have been developed employing this algorithm. The method
reduces a function in standard sum of products form to a set of prime implicants
from which as many variables are eliminated as possible.

8.4 Lesson-end Activities

Simplify the following using the tabulation method.

1. F = Σ(m0, m2, m6, m7, m10, m12)
2. F(a,b,c,d) = Σ(0,1,2,5,8,9) and d(a,b,c,d) = Σ(3,14,15)
3. F = Π(M1,M5,M7) and d = (M0,M4)
4. F(K,L,M,N) = Σ(3,7,9,11,13,15) and d = Σ(2,6,8)

8.5 Points for discussions

Tabulation Method
NAND and NOR Gates

8.6 References

Digital Logic and Computer Design, M. Morris Mano



UNIT III

Lesson 9 : Combinational Logic and Sequential Logic - Adders, Subtracters, Code
Conversion

9.0 Aims and Objectives


9.1 Introduction
9.2 Combinational Circuits
9.2.1 Adders
9.2.2 Half Adder
9.2.2.1 Full Adder
9.2.2.2 Subtracter
9.2.2.3 Half Subtracter
9.2.2.4 Full Subtracter
9.2.2.5 Code Conversion
9.2.2.6 Conversion from BCD to Excess-3-Code
9.3 Let us Sum Up
9.4 Lesson-end Activities
9.5 Points for discussions
9.6 References

9.0 Aims and Objectives

The main objective of this lesson is to know the basics of combinational circuit and
formulate various systematic design and analysis procedures of combinational
circuits.

9.1 Introduction

Logic circuits for digital systems may be combinational circuits or sequential circuits.
A combinational circuit consists of input variables, logic gates and output variables.
The logic gates accept signals from the inputs and generate signals at the outputs.
Digital computers perform a variety of information processing tasks.

A combinational circuit that performs the addition of 2 bits is called a half adder, and
a combinational circuit that performs the addition of 3 bits (2 significant bits and a
previous carry) is called a full adder. Taking the complement of the subtrahend and
adding it to the minuend accomplishes the subtraction of 2 binary numbers.

9.2 Combinational Circuits

A combinational logic circuit has:

A set of m Boolean inputs,
A set of n Boolean outputs, and
n switching functions, each mapping the 2^m input combinations to an
output, such that the current output depends only on the current input
values

Figure 9.1 A Block Diagram


9.2.1 Adders

Adders are the basic building blocks of all arithmetic circuits; adders add two
binary numbers and give out sum and carry as output.

A quarter adder is a circuit that can add two binary digits but will not produce a carry.
This circuit will produce the following results:

0 plus 0 = 0
0 plus 1 = 1
1 plus 0 = 1
1 plus 1 = 0 (no carry)

Figure 9.2 Quarter Adder

Basically we have two types of adders.

Half Adder.
Full Adder.

9.2.2 Half Adder

Adding two single-bit binary values X, Y produces a sum bit S and a carry-out bit
C-out. This operation is called half addition and the circuit to realize it is called a
half adder.

X Y SUM CARRY
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1

Table 9.1 Truth Table of Half Adder

Figure 9.3 Symbol of Half Adder

Boolean Expressions

S(X,Y) = Σ(1,2)
S = X'Y + XY'
S = X ⊕ Y
CARRY(X,Y) = Σ(3)
CARRY = XY

Figure 9.4 Circuit of Half Adder
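The half adder equations can be sketched directly (an illustrative sketch; `^` and `&` stand for XOR and AND):

```python
def half_adder(x, y):
    s = x ^ y        # sum bit: S = X xor Y
    carry = x & y    # carry bit: CARRY = XY
    return s, carry

# The four rows of the truth table above:
assert half_adder(0, 0) == (0, 0)
assert half_adder(0, 1) == (1, 0)
assert half_adder(1, 0) == (1, 0)
assert half_adder(1, 1) == (0, 1)   # 1 + 1 = 10 in binary
```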


9.2.2.1 Full Adder

A full adder takes a three-bit input. Adding two single-bit binary values X, Y with a
carry-in bit C-in produces a sum bit S and a carry-out bit C-out.

X Y Z SUM CARRY
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0

0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1

Table 9.2 Truth Table of Full Adder

Boolean Expressions

SUM(X,Y,Z) = Σ(1,2,4,7)
CARRY(X,Y,Z) = Σ(3,5,6,7)

SUM = X'Y'Z + X'YZ' + XY'Z' + XYZ
SUM = X ⊕ Y ⊕ Z

Figure 9.5 Kmap SUM

Figure 9.6 Kmap CARRY


Algebraic Equation

CARRY = XY + XZ + YZ
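A minimal sketch of the full adder equations (illustrative), checked exhaustively against ordinary addition of the three bits:

```python
def full_adder(x, y, z):
    s = x ^ y ^ z                            # SUM = X xor Y xor Z
    carry = (x & y) | (x & z) | (y & z)      # CARRY = XY + XZ + YZ
    return s, carry

# The pair (carry, sum) must equal the two-bit binary value of x + y + z.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            s, c = full_adder(x, y, z)
            assert 2 * c + s == x + y + z
```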

Full Adder using AND-OR

The implementation below builds the full adder with AND-OR gates instead of XOR
gates. The circuit follows directly from the K-maps above.

Figure 9.7 Full Adder Circuit Kmap SUM

Figure 9.8 Full Adder Circuit Kmap CARRY

Figure 9.9 Full Adder Circuit SUM

Figure 9.10 Full Adder Circuit CARRY

9.2.2.2 Subtracter

Subtracter circuits take two binary numbers as input and subtract one binary number
from the other. Similar to adders, a subtracter gives out two outputs: difference and
borrow (the counterpart of carry in the adder).

There are two types of subtracters.

Half Subtracter.

Full Subtracter.

9.2.2.3 Half Subtracter

The half-subtracter is a combinational circuit which is used to perform subtraction of


two bits. It has two inputs, X (minuend) and Y (subtrahend) and two outputs D
(difference) and B (borrow). The logic symbol and truth table are shown below.

Figure 9.11 Symbol of Half Subtracter

X Y D B
0 0 0 0
0 1 1 1
1 0 1 0
1 1 0 0

Table 9.3 Truth Table of Half Subtracter

From the above table we can draw the K-maps shown below for Difference and
Borrow, and the Boolean expression for each can then be written.

Figure 9.12 Kmap Borrow & Difference

Figure 9.13 Circuit of Half Subtracter



9.2.2.4 Full Subtracter

A full subtracter is a combinational circuit that performs subtraction involving three


bits, namely minuend, subtrahend, and borrow-in. The logic symbol and truth table
are shown below.

Figure 9.14 Symbol of Full Subtracter

X Y Bin D Bout
0 0 0 0 0
0 0 1 1 1
0 1 0 1 1
0 1 1 0 1
1 0 0 1 0
1 0 1 0 0
1 1 0 0 0
1 1 1 1 1

Table 9.4 Truth Table of Full Subtracter


Boolean Expression

D = X'Y'Bin + X'YBin' + XY'Bin' + XYBin
  = (X'Y' + XY)Bin + (X'Y + XY')Bin'
  = (X ⊕ Y)'Bin + (X ⊕ Y)Bin'
  = X ⊕ Y ⊕ Bin

Bout = X'Y + X'Bin + YBin
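A minimal sketch of these equations (illustrative), checked against ordinary bit subtraction:

```python
def full_subtracter(x, y, b_in):
    d = x ^ y ^ b_in                                       # D = X xor Y xor Bin
    b_out = ((1 - x) & y) | ((1 - x) & b_in) | (y & b_in)  # Bout = X'Y + X'Bin + YBin
    return d, b_out

# x - y - b_in must equal d - 2*b_out (borrow is worth -2 in this position).
for x in (0, 1):
    for y in (0, 1):
        for b in (0, 1):
            d, bo = full_subtracter(x, y, b)
            assert x - y - b == d - 2 * bo
```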

Figure 9.15 Half Subtractor for Boolean Expression

Figure 9.16 K-Map Borrow & Difference

Figure 9.17 Circuit of Full Subtracter

A full-subtracter circuit is more or less the same as a full-adder with a slight
modification.

9.2.2.5 Code Conversion

Many different binary codes are available for the same discrete elements of
information, so different digital systems may use different codes.
Sometimes the output of one system becomes the input of another system.
A conversion circuit must be inserted between two systems if each uses a different
code for the same information. Thus a code converter is a circuit that makes the two
systems compatible even though each uses a different binary code.

9.2.2.6 Conversion from BCD to Excess-3 Code

Transforms BCD code for the decimal digits to Excess-3 code for the decimal
digits
BCD code words for digits 0 through 9: 4-bit patterns 0000 to 1001,
respectively
Excess-3 code words for digits 0 through 9: 4-bit patterns consisting of 3
(binary 0011) added to each BCD code word

Formulation
Conversion of 4-bit codes can be most easily formulated by a truth table

Variables - BCD: A,B,C,D

Variables - Excess-3: W,X,Y,Z

Don't Cares - BCD 1010 to 1111

Maps for BCD to Excess 3 Code Converter

Input BCD    Output Excess-3
A B C D      W X Y Z
0 0 0 0      0 0 1 1
0 0 0 1      0 1 0 0
0 0 1 0      0 1 0 1
0 0 1 1      0 1 1 0
0 1 0 0      0 1 1 1
0 1 0 1      1 0 0 0
0 1 1 0      1 0 0 1
0 1 1 1      1 0 1 0
1 0 0 0      1 0 1 1
1 0 0 1      1 1 0 0

Figure 9.18 Logic Diagram for BCD to Excess-3-Converter
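Since Excess-3 is simply the BCD code word plus 3 (binary 0011), the conversion can be sketched as follows (illustrative; the function name is mine):

```python
def bcd_to_excess3(bcd):
    """bcd is a 4-bit tuple (A, B, C, D) for a decimal digit 0-9."""
    digit = bcd[0] * 8 + bcd[1] * 4 + bcd[2] * 2 + bcd[3]
    e = digit + 3                          # add binary 0011
    return ((e >> 3) & 1, (e >> 2) & 1, (e >> 1) & 1, e & 1)

assert bcd_to_excess3((0, 0, 0, 0)) == (0, 0, 1, 1)   # 0 -> 0011
assert bcd_to_excess3((1, 0, 0, 1)) == (1, 1, 0, 0)   # 9 -> 1100
```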


9.3 Let us Sum Up

The combinational circuits for performing arithmetic operations have been discussed,
including the half adder, full adder, half subtracter and full subtracter. The Boolean
expressions of each circuit have been discussed with circuit diagrams.

9.4 Lesson-end Activities

1. With neat circuit and truth table, discuss half adder and half Subtracter.
2. Construct a full adder with half adders.
3. Give the truth table and logic circuit of full Subtracter.
4. Discuss code converter in detail.

9.5 Points for discussions

Half Adder
o A combinational circuit that performs the addition of 2 bits is called as
Half adder
Full Adder
o A combinational circuit that performs the addition of 3 bits (2 significant
bit and a previous carry) is called as Full Adder
Half Subtracter
o A half subtracter that subtracts 2 bits and produces their difference.
Full Subtracter
o A full subtracter is a combinational circuit that performs a subtraction
between 2 bits taking into account that a 1 may be borrowed by a lower
significant stage
9.6 References

www.asic-world.com/digital/arithmetic3.html
www.cs.princeton.edu/courses/archive/spr05/cos126/lectures/10.pdf
www.cs.bu.edu/faculty/snyder/cs450/Chapter02.pdf

Lesson 10 : Binary Parallel Adder, Decimal Adder

10.0 Aims and Objectives


10.1 Introduction
10.2 Binary Parallel Adder
10.2.1 Decimal Adder
10.2.2 Algorithm for BCD Adder
10.3 Let us Sum Up
10.4 Lesson-end Activities
10.5 Points for discussions
10.6 References

10.0 Aims and Objectives

The main aim of this lesson is to learn the concept of the binary parallel adder, how
full adders are cascaded to add numbers in parallel, and the concept of converting
decimal numbers to binary coded decimal representation.

10.1 Introduction

A binary parallel adder is a digital function that produces the arithmetic sum of 2
binary numbers in parallel. It consists of full adders connected in cascade, with the
output carry from one full adder connected to the input carry of the next full adder.
A decimal adder adds decimal numbers represented in binary coded decimal form.

10.2 Binary Parallel Adder

A Binary Parallel adder is a digital function that produces the arithmetic sum of 2
binary numbers in parallel. It consists of full adders connected in cascade with the
output carry of one full adder connected to the input of the next full adder.

The adders discussed in the previous section have been limited to adding single-digit
binary numbers and carries. The largest sum that can be obtained using a full adder
is binary 11 (decimal 3). Parallel adders let us add multiple-digit numbers. If we
place full adders in parallel, we can add two- or four-digit numbers or any other size
desired.

Parallel binary adder: now let's add two 2-digit numbers.

To add binary 10 (addend) and binary 01 (augend), assume these numbers are at the
appropriate inputs. The addend inputs will be 1 on A2 and 0 on A1. The augend
inputs will be 0 on B2 and 1 on B1. Working from right to left, as we do in normal
addition, let's calculate the outputs of each full adder. With A1 at 0 and B1 at 1, the
output of adder 1 will be a sum (S1) of 1 with no carry (C1). Since A2 is 1 and B2 is
0, adder 2 gives a sum (S2) of 1 with no carry (C2). To determine the sum, read the
outputs (C2, S2, and S1) from left to right. In this case, C2 = 0, S2 = 1, and S1 = 1.
The sum of binary 10 and binary 01 is therefore binary 011, i.e. binary 11. To add
binary 11 and binary 01, assume one number is applied to A1 and A2, and the other
to B1 and B2, as shown in Figure 10.1. Adder 1 produces a sum (S1) of 0 and a
carry (C1) of 1. Adder 2 then gives a sum (S2) of 0 and a carry (C2) of 1, so the
result is binary 100.

Figure 10.1 Two Bit Full Adder
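The cascade of full adders can be sketched as follows (an illustrative sketch; bits are listed least significant first):

```python
def full_adder(x, y, c_in):
    return x ^ y ^ c_in, (x & y) | (x & c_in) | (y & c_in)

def parallel_adder(a_bits, b_bits):
    """Ripple-carry adder: the carry out of each stage feeds the next stage.
    a_bits, b_bits are lists of bits, least significant first."""
    carry, sum_bits = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        sum_bits.append(s)
    return sum_bits, carry

# 10 + 01: A2 A1 = 1 0 and B2 B1 = 0 1, written LSB first below.
assert parallel_adder([0, 1], [1, 0]) == ([1, 1], 0)   # sum 11, no carry
assert parallel_adder([1, 1], [1, 0]) == ([0, 0], 1)   # 11 + 01 = 100
```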


10.2.1 Decimal Adder

Computers that perform arithmetic operations directly in the decimal number system
represent decimal numbers in binary coded form. A decimal adder requires a
minimum of nine inputs and five outputs, since four bits are required to code each
decimal digit and the circuit must have an input carry and an output carry.

Addition of two BCD digits requires two 4-bit Parallel Adder Circuits. One 4-bit
Parallel Adder adds the two BCD digits. A BCD Adder uses a circuit which checks
the result at the output of the first adder circuit to determine if the result has exceeded
9 or a Carry has been generated. If the circuit determines any of the two error
conditions the circuit adds a 6 to the original result using the second Adder circuit.
The output of the second Adder gives the correct BCD output. If the circuit finds the
result of the first Adder circuit to be a valid BCD number (between 0 and 9 and no
Carry has been generated), the circuit adds a zero to the valid BCD result using the
second Adder. The output of the second Adder gives the same result.

Binary Sum BCD Sum


Decimal
K Z8 Z4 Z2 Z1 C S8 S4 S2 S1
0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 1 1
0 0 0 1 0 0 0 0 1 0 2
0 0 0 1 1 0 0 0 1 1 3
0 0 1 0 0 0 0 1 0 0 4
0 0 1 0 1 0 0 1 0 1 5
0 0 1 1 0 0 0 1 1 0 6
0 0 1 1 1 0 0 1 1 1 7
0 1 0 0 0 0 1 0 0 0 8
0 1 0 0 1 0 1 0 0 1 9

0 1 0 1 0 1 0 0 0 0 10
0 1 0 1 1 1 0 0 0 1 11
0 1 1 0 0 1 0 0 1 0 12
0 1 1 0 1 1 0 0 1 1 13
0 1 1 1 0 1 0 1 0 0 14
0 1 1 1 1 1 0 1 0 1 15
1 0 0 0 0 1 0 1 1 0 16

1 0 0 0 1 1 0 1 1 1 17
1 0 0 1 0 1 1 0 0 0 18
1 0 0 1 1 1 1 0 0 1 19

Table 10.1 Derivation of a BCD Adder

10.2.2 Algorithm for BCD Adder

If the sum is 9 or less and no carry is generated:
the output of the regular adder is already a valid BCD digit.

If the sum is greater than 9, or a carry is generated:
add 6 (0110) to the output of the regular adder to correct the result.

Figure 10.2 Block Diagram of a BCD Adder
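The correction rule can be sketched for a single digit position (an illustrative sketch; the function name is mine):

```python
def bcd_digit_add(a, b, c_in=0):
    """a, b are decimal digits 0-9; returns (bcd_sum_digit, carry_out)."""
    binary_sum = a + b + c_in            # first 4-bit binary adder
    if binary_sum > 9:                   # invalid BCD digit or carry generated
        binary_sum += 6                  # second adder adds the 0110 correction
        return binary_sum & 0b1111, 1
    return binary_sum, 0                 # already a valid BCD digit

assert bcd_digit_add(4, 5) == (9, 0)     # 9: no correction needed
assert bcd_digit_add(8, 9) == (7, 1)     # 17: digit 7, carry 1 (as in Table 10.1)
```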

10.3 Let us Sum Up

The main topics discussed are the binary parallel adder and the algorithm for the
BCD adder.

10.4 Lesson-end Activities

1. Give the logic circuit of parallel binary adder and explain.


2. Discuss BCD adder.

10.5 Points for discussions

Binary Parallel Adder



Decimal Adder

10.6 References

www.asic-world.com/digital/arithmetic3.html

Lesson 11 : Decoder, Encoder

11.0 Aims and Objectives


11.1 Introduction
11.2 Decoders
11.2.1 Binary n to 2^n Decoder
11.2.1.1 2 to 4 Binary Decoder
11.2.1.2 3 to 8 Binary Decoder
11.2.2 Encoders
11.2.2.1 Binary 2^n to n Encoder
11.2.2.2 Octal to Binary Encoder
11.2.2.3 Decimal to Binary Encoder
11.3 Let us Sum Up
11.4 Lesson-end Activities
11.5 Points for discussions
11.6 References

11.0 Aims and Objectives


The main aim of this lesson is to know about decoder and encoder and a few models
of it
11.1 Introduction

A decoder is a combinational circuit that converts binary information from n input
lines to a maximum of 2^n unique output lines. An encoder is a digital function that
performs the reverse operation of a decoder.

11.2 Decoders

A decoder is a multiple-input, multiple-output logic circuit that converts coded
inputs into coded outputs, where the input and output codes are different; e.g.
n-to-2^n decoders, BCD decoders.
Enable inputs must be on for the decoder to function; otherwise its outputs assume a
single "disabled" output code word.
Decoding is necessary in applications such as data multiplexing, 7-segment displays
and memory address decoding.

Figure 11.1 Pseudo Block of a Decoder



11.2.1 Binary n to 2^n Decoder

A binary decoder has n inputs and 2^n outputs. Only one output is active at any one
time, corresponding to the input value.

Figure 11.2 Symbol of n to 2^n Decoder

11.2.1.1 2 to 4 Binary Decoder

A 2 to 4 decoder consists of two inputs and four outputs, truth table and symbols of
which is shown below.

X Y F0 F1 F2 F3
0 0 1 0 0 0
0 1 0 1 0 0
1 0 0 0 1 0
1 1 0 0 0 1

Table 11.1 Truth table of 2 to 4 Binary Decoder

Figure 11.3 Symbol of 2 to 4 Decoder

Note: Each output is a 2-variable minterm (X'Y', X'Y, XY', XY)

Figure 11.4 Circuit of 2 to 4 Decoder



11.2.1.2 3 to 8 Binary Decoder

A 3 to 8 decoder consists of three inputs and eight outputs, truth table and symbols of
which is shown below.

Figure 11.5 Symbol of 3 to 8 Binary Decoder

X Y Z F0 F1 F2 F3 F4 F5 F6 F7
0 0 0 1 0 0 0 0 0 0 0
0 0 1 0 1 0 0 0 0 0 0
0 1 0 0 0 1 0 0 0 0 0
0 1 1 0 0 0 1 0 0 0 0
1 0 0 0 0 0 0 1 0 0 0
1 0 1 0 0 0 0 0 1 0 0
1 1 0 0 0 0 0 0 0 1 0
1 1 1 0 0 0 0 0 0 0 1

Table 11.2 Truth Table of 3 to 8 Binary Decoder

From the truth table we can draw the circuit diagram as shown in figure below.

Figure 11.6 Circuit of 3 to 8 Decoder

Implementing Functions Using Decoders

Any n-variable logic function in canonical sum-of-minterms form can be
implemented using a single n-to-2^n decoder to generate the minterms, and an OR
gate to form the sum.
The output lines of the decoder corresponding to the minterms of the function are
used as inputs to the OR gate.
Any combinational circuit with n inputs and m outputs can be implemented with an
n-to-2^n decoder and m OR gates.
This approach is suitable when a circuit has many outputs and each output function
is expressed with few minterms.
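For example, the full adder's SUM function, which is the sum of minterms (1,2,4,7), can be realized with a 3-to-8 decoder and one OR gate. An illustrative sketch:

```python
def decoder_3to8(x, y, z):
    """3-to-8 binary decoder: exactly one output line is active."""
    outputs = [0] * 8
    outputs[x * 4 + y * 2 + z] = 1
    return outputs

def sum_function(x, y, z):
    d = decoder_3to8(x, y, z)
    return d[1] | d[2] | d[4] | d[7]   # OR of minterm outputs 1, 2, 4, 7

# The result must match the XOR form of the full adder sum.
for m in range(8):
    x, y, z = (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert sum_function(x, y, z) == (x ^ y ^ z)
```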

11.2.2 Encoders

An encoder is a combinational circuit that performs the inverse operation of a
decoder. If a device's output code has fewer bits than its input code, the device is
usually called an encoder; e.g. 2^n-to-n encoders, priority encoders.

11.2.2.1 Binary 2^n to n Encoder

The simplest encoder is a 2^n-to-n binary encoder. Only one of its 2^n inputs is 1 at
a time, and the output is the n-bit binary number corresponding to the active input.

Figure 11.7 Symbol of 2^n to n Binary Encoder

11.2.2.2 Octal to Binary Encoder

An octal-to-binary encoder takes 8 inputs and provides 3 outputs, thus doing the
opposite of what the 3-to-8 decoder does. At any one time, only one input line has a
value of 1.

I0 I1 I2 I3 I4 I5 I6 I7 Y2 Y1 Y0
1 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 1
0 0 1 0 0 0 0 0 0 1 0
0 0 0 1 0 0 0 0 0 1 1
0 0 0 0 1 0 0 0 1 0 0
0 0 0 0 0 1 0 0 1 0 1
0 0 0 0 0 0 1 0 1 1 0
0 0 0 0 0 0 0 1 1 1 1

Table 11.3 Truth table of Octal-to-Binary Encoder

For an 8-to-3 binary encoder with inputs I0-I7 the logic expressions of the outputs
Y0-Y2 are:

Y0 = I1 + I3 + I5 + I7
Y1= I2 + I3 + I6 + I7
Y2 = I4 + I5 + I6 +I7

Based on the above equations, we can draw the circuit as shown below

Figure 11.8 Circuit of Octal to Binary Encoder
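The three output equations can be sketched and checked for every one-hot input (an illustrative sketch):

```python
def octal_encoder(i):
    """i is a list of 8 input bits with exactly one set to 1."""
    y0 = i[1] | i[3] | i[5] | i[7]
    y1 = i[2] | i[3] | i[6] | i[7]
    y2 = i[4] | i[5] | i[6] | i[7]
    return y2, y1, y0

# For each one-hot input In = 1, the output must be n in binary.
for n in range(8):
    inputs = [1 if k == n else 0 for k in range(8)]
    y2, y1, y0 = octal_encoder(inputs)
    assert y2 * 4 + y1 * 2 + y0 == n
```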

11.2.2.3 Decimal to Binary Encoder

A decimal-to-binary encoder takes 10 inputs and provides 4 outputs, thus doing the
opposite of what the 4-to-10 decoder does. At any one time, only one input line has
a value of 1. The table below shows the truth table of a decimal-to-binary encoder.

I0 I1 I2 I3 I4 I5 I6 I7 I8 I9 Y3 Y2 Y1 Y0
1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 0 0 0 1
0 0 1 0 0 0 0 0 0 0 0 0 1 0
0 0 0 1 0 0 0 0 0 0 0 0 1 1
0 0 0 0 1 0 0 0 0 0 0 1 0 0
0 0 0 0 0 1 0 0 0 0 0 1 0 1
0 0 0 0 0 0 1 0 0 0 0 1 1 0
0 0 0 0 0 0 0 1 0 0 0 1 1 1
0 0 0 0 0 0 0 0 1 0 1 0 0 0
0 0 0 0 0 0 0 0 0 1 1 0 0 1

Table 11.4 Truth table of Decimal to Binary Encoder

From the above truth table , we can derive the functions Y3, Y2, Y1 and Y0 as given
below.

Y3 = I8 + I9
Y2 = I4 + I5 + I6 + I7
Y1 = I2 + I3 + I6 + I7
Y0 = I1 + I3 + I5 + I7 + I9

11.3 Let us Sum Up

This lesson discussed how binary information is converted from n input lines to a
maximum of 2^n output lines, and from 2^n input lines to n output lines. Various
examples of encoders and decoders have been presented.

11.4 Lesson-end Activities

1. Explain the various types of decoders with logic circuit and truth table.
2. Explain the various types of encoders with logic circuit and truth table.

11.5 Points for discussions


Decoder
A decoder is a combinational circuit that converts binary information from n
input lines to a maximum of 2^n unique output lines.
Encoder
An encoder is a digital function that converts binary information from a
maximum of 2^n input lines to n output lines.

11.6 References
www.asic-world.com/digital/combo3.html

Lesson 12 : Multiplexers, Demultiplexers, Flip Flops, Flip Flop Excitation Tables

12.0 Aims and Objectives


12.1 Introduction
12.2 Multiplexer
12.2.1 4-by-1 Multiplexer
12.2.2 Demultiplexer
12.2.3 Sequential Logic
12.2.3.1 Asynchronous Sequential Circuit
12.2.3.2 Synchronous Sequential Circuit
12.2.4 Flip Flops
12.2.4.1 R-S Flip Flop
12.2.4.2 D Flip Flip
12.2.4.3 T Flip Flop
12.2.4.4 JK Flip Flop
12.2.4.5 JK Master Slave Flip Flop
12.2.4.6 Flip Flop Excitation Tables
12.3 Let us Sum Up
12.4 Lesson-end Activities
12.5 Points for discussions
12.6 References

12.0 Aims and Objectives


The main aim of this lesson is to learn the concepts of multiplexer, demultiplexer and
flip flops

12.1 Introduction

Multiplexing means transmitting a large number of information units over a smaller


number of channels or lines. A digital multiplexer is a combinational circuit that
selects from binary information one of many input lines and directs it to a single
output line. A demultiplexer is a logic circuit with one input and many outputs. A flip
flop can maintain a binary state indefinitely until directed by an input signal to switch
states.

12.2 Multiplexer

A multiplexer is a combinational circuit that is given a certain number of data inputs
(usually a power of two), let us say 2^n, and n address inputs used as a binary
number to select one of the data inputs. The multiplexer has a single output, which
has the same value as the selected data input.

Multiplexing means transmitting a large number of information units over a smaller


number of channels or lines. A digital multiplexer is a combinational circuit that
selects from binary information one of many input lines and directs it to a single
output line. The selection of a particular input is controlled by a set of selection lines.
These circuits are used when a complex logic circuit is shared by number of inputs

12.2.1 4:1 Multiplexer

Example : 4:1 MUX Truth Table

Figure 12.1 Symbol of Multiplexer

A B f
0 0 C0
0 1 C1
1 0 C2
1 1 C3

Table 12.1 Truth Table of 4 : 1 Multiplexer

Assume that we have four lines, C0, C1, C2 and C3, which are to be multiplexed on a
single line, Output (f). The four input lines are also known as the Data Inputs. Since
there are four inputs, we will need two additional inputs to the multiplexer, known as
the Select Inputs, to select which of the C inputs is to appear at the output. Call these
select lines A and B.

The gate implementation of a 4-line to 1-line multiplexer is shown below:



Figure 12.2 Circuit diagram of 4 : 1 Multiplexer

Figure 12.3 Circuit diagram of 8 : 1 Multiplexer
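The 4:1 multiplexer truth table above can be sketched as follows (an illustrative sketch; A is taken as the more significant select bit):

```python
def mux4(c, a, b):
    """c is the tuple of data inputs (C0, C1, C2, C3);
    select lines A, B pick which input appears at the output f."""
    return c[a * 2 + b]

data = (10, 20, 30, 40)          # arbitrary example values on C0-C3
assert mux4(data, 0, 0) == 10    # A B = 0 0 selects C0
assert mux4(data, 1, 1) == 40    # A B = 1 1 selects C3
```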

12.2.2 Demultiplexer

The demultiplexer is the inverse of the multiplexer: it takes a single data input and
n address inputs, and it has 2^n outputs. A demultiplexer is a logic circuit with one
input and many outputs: it receives information on a single line and transmits this
information on one of the 2^n possible output lines.

The selection of the specific output line is controlled by the bit values of the n
selection lines.

D0
D1
E 1x4 D2
Input Demultiplexer
D3

A B

Selection Lines

Figure 12.4 Symbol of Demultiplexer

E is taken as the data input line and A & B are taken as selection lines.
The single input variable E has a path to all 4 outputs, but the input information is
directed to only one of the output lines, based on the binary values of the 2 selection
lines.

If the selection lines AB = 00, the output D0 will be the same as the input value E,
while all other outputs are maintained at 0.

A B D0 D1 D2 D3
0 0 E 0 0 0
0 1 0 E 0 0
1 0 0 0 E 0
1 1 0 0 0 E

Table 12.2 Truth Table for 1 : 4 Demultiplexer



Figure 12.5 Circuit diagram of 1 : 4 Demultiplexer
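The 1-to-4 demultiplexer behaviour of Table 12.2 can be sketched as follows (an illustrative sketch):

```python
def demux4(e, a, b):
    """Route input E to the output selected by lines A, B;
    all other outputs stay 0."""
    outputs = [0, 0, 0, 0]
    outputs[a * 2 + b] = e
    return outputs

assert demux4(1, 0, 0) == [1, 0, 0, 0]   # AB = 00 routes E to D0
assert demux4(1, 1, 0) == [0, 0, 1, 0]   # AB = 10 routes E to D2
```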

12.2.3 Sequential Logic

Digital electronics is classified into combinational logic and sequential logic.
Combinational logic output depends only on the input levels, whereas sequential
logic output depends on stored levels as well as the input levels.

Figure 12.6 Block Diagram of a Sequential Circuit

The memory elements are devices capable of storing binary information. The binary
information stored in the memory elements at any given time defines the state of the
sequential circuit. The inputs and the present state of the memory elements
determine the outputs. The memory elements' next state is also a function of the
external inputs and the present state. A sequential circuit is thus specified by a time
sequence of inputs, outputs, and internal states.

There are two types of sequential circuits. Their classification depends on the timing
of their signals:

Synchronous sequential circuits


Asynchronous sequential circuits

12.2.3.1 Asynchronous Sequential Circuit

This is a system whose outputs depend upon the order in which its input variables
change and can be affected at any instant of time.

Gate-type asynchronous systems are basically combinational circuits with feedback


paths. Because of the feedback among logic gates, the system may, at times, become
unstable. Consequently they are not often used.

12.2.3.2 Synchronous Sequential Circuit

This type of system uses storage elements called flip-flops that are employed to
change their binary value only at discrete instants of time. Synchronous sequential
circuits use logic gates and flip-flop storage devices. Sequential circuits have a clock
signal as one of their inputs. All state transitions in such circuits occur only when the
clock value is either 0 or 1 or happen at the rising or falling edges of the clock
depending on the type of memory elements used in the circuit. Synchronization is
achieved by a timing device called a clock pulse generator. Clock pulses are
distributed throughout the system in such a way that the flip-flops are affected only
with the arrival of the synchronization pulse. Synchronous sequential circuits that use
clock pulses in the inputs are called clocked-sequential circuits. They are stable and
their timing can easily be broken down into independent discrete steps, each of which
is considered separately.

12.2.4 Flip Flops

A flip flop circuit can maintain a binary state indefinitely (as long as the power is
delivered to the circuit) until directed by an input signal to switch states. The major
difference among various types of flip flops are in the number of inputs they possess
and in the manner in which the inputs affect the binary state.

Various types of flip flops are

1) R-S Flip flop


2) D Flip Flop
3) T Flip Flop
4) J-K Flip Flop

12.2.4.1 R-S Flip Flop

The R-S FF is used to temporarily hold or store information until it is needed. A


single R-S FF will store one binary digit, either a 1 or a 0. Storing a four-digit binary
number would require four R-S FFs. The name is derived from the inputs, R for reset
and S for set. It is often referred to as an R-S LATCH. The outputs Q and Q' are
complements, as mentioned earlier.

Figure 12.7 R-S Flip Flop Circuit

S R Q Q+
0 0 0 0    (no change)
0 0 1 1    (no change)
0 1 X 0    (reset)
1 0 X 1    (set)
1 1 X -    (invalid)
Table 12.3 Truth Table of RS Flip Flop

The operation has to be analyzed for the four input combinations together with
the two possible previous states.

When S = 0 and R = 0: If we assume Q = 1 and Q' = 0 as the initial condition,
then after the inputs are applied, Q = (R + Q')' = 1 and Q' = (S + Q)' = 0.
Assuming Q = 0 and Q' = 1 as the initial condition, after the inputs are
applied, Q = (R + Q')' = 0 and Q' = (S + Q)' = 1. So it is clear that when both
the S and R inputs are LOW, the output is retained as it was before the inputs
were applied (i.e. there is no state change).

When S = 1 and R = 0: Whether the initial condition is Q = 1, Q' = 0 or
Q = 0, Q' = 1, the outputs settle to Q = (R + Q')' = 1 and Q' = (S + Q)' = 0.
In simple words, when S is HIGH and R is LOW, output Q is HIGH.

When S = 0 and R = 1: Whether the initial condition is Q = 1, Q' = 0 or
Q = 0, Q' = 1, the outputs settle to Q = (R + Q')' = 0 and Q' = (S + Q)' = 1.
In simple words, when S is LOW and R is HIGH, output Q is LOW.

When S = 1 and R = 1: No matter what state Q and Q' are in, applying 1 at an
input of a NOR gate always produces 0 at its output, so both Q and Q' are
driven LOW (i.e. Q = Q'). Having LOW on both outputs is contradictory, so this
input combination is invalid.

The waveform below shows the operation of a NOR-gate-based RS latch.

Figure 12.8 Wave Diagram of RS Flip Flop
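The four cases above can be traced with a minimal Python sketch of the two
cross-coupled NOR equations, Q = (R + Q')' and Q' = (S + Q)'. The function
names are mine, not from the text; the loop simply iterates the gate equations
until the outputs settle.

```python
def nor(a, b):
    """2-input NOR gate."""
    return 0 if (a or b) else 1

def rs_latch(s, r, q, q_bar):
    """Iterate the cross-coupled NOR equations until the outputs settle.
    Returns the settled (Q, Q') pair; S = R = 1 drives both outputs to 0
    (the forbidden state)."""
    for _ in range(4):          # a few passes are enough for two gates to settle
        q_new = nor(r, q_bar)   # Q  = (R + Q')'
        q_bar_new = nor(s, q)   # Q' = (S + Q)'
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

# Hold, reset and set behaviour
assert rs_latch(0, 0, 1, 0) == (1, 0)   # S=R=0: state retained
assert rs_latch(0, 1, 1, 0) == (0, 1)   # R=1: reset, Q goes LOW
assert rs_latch(1, 0, 0, 1) == (1, 0)   # S=1: set, Q goes HIGH
```

Running the forbidden combination, `rs_latch(1, 1, ...)`, settles at (0, 0),
which is exactly the Q = Q' contradiction described above.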

The basic flip flop is an asynchronous sequential circuit. By adding gates to the inputs
of the basic circuit, the flip flop can be made to respond to input levels during the
occurrence of a clock pulse.

The clocked RS flip flop consists of a basic NOR Flip flop and 2 AND gates. The
outputs of the 2 AND gates remain 0 as long as the clock pulse is 0 regardless of S
and R Input values. When the clock pulse goes to 1, information from S and R inputs
is allowed to reach the basic flip flop.

The set state is reached with S = 1, R = 0 and CP = 1. To change to the clear
state the inputs must be S = 0, R = 1 and CP = 1. With both S = 1 and R = 1,
the occurrence of a clock pulse causes both outputs to momentarily go to 0.

Figure 12.9 Clocked RS Flip Flop Circuit Diagram

SR flip-flop characteristic table and characteristic equation:

S R | Q(next)
0 0 | Q
0 1 | 0
1 0 | 1
1 1 | ? (indeterminate)

Characteristic equation: Q(next) = S + R'Q (with the constraint SR = 0)

12.2.4.2 D Flip Flop

The RS latch seen earlier contains an ambiguous state; to eliminate this
condition we can ensure that S and R are never equal. This is done by connecting
S and R together through an inverter. Thus we have the D latch: the same as the
RS latch, with the only difference that there is only one input instead of two
(R and S). This input is called D, or Data input. The D latch is called a D
transparent latch for the reasons explained earlier; delay flip-flop or delay
latch are other names in use. Below are the truth table and circuit of the D
latch.

In real world designs (ASIC/FPGA Designs) only D latches/Flip-Flops are used.

Figure 12.10 Circuit Diagram of D Flip Flop

D | Q | Q+
1 | X | 1
0 | X | 0

Table 12.4 Truth Table of D Flip Flop

Below is the D latch waveform, which is similar to the RS latch one, but with R
removed.

Figure 12.11 Wave Diagram of D Flip Flop

D flip-flop characteristic table and characteristic equation:

D | Q(next)
0 | 0
1 | 1

Characteristic equation: Q(next) = D

12.2.4.3 T Flip Flop

When the two inputs of a JK latch are tied together, a T latch is formed. It is
called a T (toggle) latch because, when the input is held HIGH, the output
toggles.

Figure 12.12 Circuit Diagram of T Flip Flop

T | Q | Q+
1 | 0 | 1
1 | 1 | 0
0 | 1 | 1
0 | 0 | 0

Table 12.5 Truth Table of T Flip Flop

T flip-flop characteristic table and characteristic equation:

T | Q(next)
0 | Q
1 | Q'

Characteristic equation: Q(next) = TQ' + T'Q
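The characteristic equation Q(next) = TQ' + T'Q can be exercised with a small
Python sketch (the function name is mine). Clocking the flip-flop with T held
HIGH shows the toggle behaviour that makes the T flip-flop useful as a
divide-by-two counter stage.

```python
def t_flipflop(t, q):
    """Characteristic equation Q(next) = TQ' + T'Q (i.e. Q XOR T)."""
    return (t and (1 - q)) or ((1 - t) and q)

# With T held HIGH the output toggles on every clock pulse, so Q completes
# one full cycle for every two clock cycles (a divide-by-2 stage).
q = 0
history = []
for _ in range(6):              # six clock pulses
    q = t_flipflop(1, q)
    history.append(q)
assert history == [1, 0, 1, 0, 1, 0]
assert t_flipflop(0, 1) == 1    # T = 0: state unchanged
```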

12.2.4.4 JK Flip Flop

The ambiguous-state output of the RS latch was eliminated in the D latch by
joining the inputs with an inverter, but the D latch has only a single input.
The JK latch is similar to the RS latch in that it has two inputs, J and K, as
shown in the figure below. The ambiguous state has been eliminated here: when
both inputs are HIGH, the output toggles. The other difference is the output
feedback to the inputs, which is not present in the RS latch.

Figure 12.13 Circuit Diagram of JK Flip Flop

J K | Q(next)
0 0 | Q (no change)
0 1 | 0
1 0 | 1
1 1 | Q' (toggle)

Table 12.6 Truth Table of JK Flip Flop

JK flip-flop characteristic table and characteristic equation:

J K | Q(next)
0 0 | Q
0 1 | 0
1 0 | 1
1 1 | Q'

Characteristic equation: Q(next) = JQ' + K'Q

12.2.4.5 JK Master Slave Flip Flop

All the sequential circuits seen in the last few pages have a problem (all
level-sensitive sequential circuits have it). If the inputs change before the
enable input changes state from HIGH to LOW (assuming HIGH is the ON state and
LOW the OFF state), then another state transition occurs for the same enable
pulse. This sort of multiple-transition problem is called racing.

If we make the sequential element sensitive to edges instead of levels, we can
overcome this problem, as the input is evaluated only during enable/clock edges.

Figure 12.14 Circuit Diagram of Master Slave Flip Flop

In the figure above there are two latches, the first latch on the left is called master
latch and the one on the right is called slave latch. Master latch is positively clocked
and slave latch is negatively clocked.

Figure 12.15 Master Slave Flip Flop

A master-slave flip flop is constructed from two separate flip flops. One
circuit serves as the master and the other serves as the slave, and the overall
circuit is referred to as a master-slave flip flop.

When the clock pulse CP is 0, the output of the inverter is 1. Since the clock
input of the slave flip flop is then 1, the slave is enabled and its outputs
are Q and Q'. The master flip flop is disabled because CP = 0.

12.2.4.6 Flip Flop Excitation Tables

All flip-flops can be divided into four basic types: SR, JK, D and T. They
differ in the number of inputs and in the response invoked by different values
of the input signals.

Flip-flop Types

SR flip-flop
  Characteristic table:          Excitation table:
  S R | Q(next)                  Q  Q(next) | S R
  0 0 | Q                        0  0       | 0 X
  0 1 | 0                        0  1       | 1 0
  1 0 | 1                        1  0       | 0 1
  1 1 | ?                        1  1       | X 0
  Characteristic equation: Q(next) = S + R'Q (with SR = 0)

JK flip-flop
  Characteristic table:          Excitation table:
  J K | Q(next)                  Q  Q(next) | J K
  0 0 | Q                        0  0       | 0 X
  0 1 | 0                        0  1       | 1 X
  1 0 | 1                        1  0       | X 1
  1 1 | Q'                       1  1       | X 0
  Characteristic equation: Q(next) = JQ' + K'Q

D flip-flop
  Characteristic table:          Excitation table:
  D | Q(next)                    Q  Q(next) | D
  0 | 0                          0  0       | 0
  1 | 1                          0  1       | 1
                                 1  0       | 0
                                 1  1       | 1
  Characteristic equation: Q(next) = D

T flip-flop
  Characteristic table:          Excitation table:
  T | Q(next)                    Q  Q(next) | T
  0 | Q                          0  0       | 0
  1 | Q'                         0  1       | 1
                                 1  0       | 1
                                 1  1       | 0
  Characteristic equation: Q(next) = TQ' + T'Q

Each of these flip-flops can be uniquely described by its graphical symbol, its
characteristic table, its characteristic equation or excitation table. All flip-flops have
output signals Q and Q'.

The characteristic table in the third column of Table 1 defines the state of
each flip-flop as a function of its inputs and its previous state. Q refers to
the present state and Q(next) refers to the next state after the occurrence of
the clock pulse. The characteristic table for the RS flip-flop shows that the
next state is equal to the present state when both inputs S and R are equal to
0. When R=1, the next clock pulse clears the flip-flop. When S=1, the flip-flop
output Q is set to 1. The question mark (?) for the next state when S and R are
both equal to 1 designates an indeterminate next state.

The characteristic table for the JK flip-flop is the same as that of the RS
when J and K are replaced by S and R respectively, except for the indeterminate
case. When both J and K are equal to 1, the next state is equal to the
complement of the present state, that is, Q(next) = Q'.

The next state of the D flip-flop is completely dependent on the input D and
independent of the present state.

The next state for the T flip-flop is the same as the present state Q if T=0 and
complemented if T=1.

The characteristic table is useful during the analysis of sequential circuits
when the values of the flip-flop inputs are known and we want to find the value
of the flip-flop output Q after the rising edge of the clock signal. As with
any other truth table, we can use the map method to derive the characteristic
equation for each flip-flop; these equations are shown in the fourth column of
Table 1.

During the design process we usually know the transition from the present state
to the next state and wish to find the flip-flop input conditions that will
cause the required transition. For this reason we need a table that lists the
required inputs for a given change of state. Such a list is called the
excitation table, which is shown in the fifth column of Table 1. There are four
possible transitions from the present state to the next state. The required
input conditions are derived from the information available in the
characteristic table. The symbol X in the table represents a "don't care"
condition; that is, it does not matter whether the input is 1 or 0.
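The four excitation tables can be restated as Python dictionaries mapping each
required transition (Q, Q(next)) to the inputs that cause it, which is exactly
how they are used during design. The constant names are mine; `None` stands for
the "don't care" (X) entries.

```python
# Excitation tables: (Q, Q_next) -> required inputs; None means "don't care" (X).
SR_EXCITATION = {(0, 0): (0, None), (0, 1): (1, 0),
                 (1, 0): (0, 1),    (1, 1): (None, 0)}
JK_EXCITATION = {(0, 0): (0, None), (0, 1): (1, None),
                 (1, 0): (None, 1), (1, 1): (None, 0)}
D_EXCITATION  = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}
T_EXCITATION  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# During design: the transition 1 -> 0 needs S=0, R=1 on an SR flip-flop...
assert SR_EXCITATION[(1, 0)] == (0, 1)
# ...while the same transition on a JK flip-flop leaves J as a don't care.
assert JK_EXCITATION[(1, 0)] == (None, 1)
```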

12.3 Let us Sum Up

The concepts of the multiplexer, the demultiplexer and the various types of
flip flops with their excitation tables have been discussed in detail.

12.4 Lesson-end Activities

1. What are combinational circuits? Explain multiplexer & demultiplexer with


neat circuits.
2. What are synchronous sequential circuits? Explain.
3. What are asynchronous sequential circuits? Explain.
4. Discuss the various types of flip-flops.

12.5 Points for discussions

Multiplexer
Demultiplexer
Types of Flip Flops

12.6 References

Digital Logic and Computer Design, M. Morris Mano
www.asic-world.com/digital

UNIT 4

Lesson 13: Computer Organization and Central Processing Unit, Organization of
Computer, Characteristics, Machine Language, Assembly Language, Rules of the
Language

Contents:

13.0 Aims and Objectives


13.1 Introduction
13.2. Organization of Computer
13.2.1 Characteristics
13.2.2 Machine Language
13.2.3 Assembly Language
13.2.3.1 Rules of the Language
13.3 Let us Sum Up
13.4 Lesson-end Activities
13.5 Points for discussions
13.6 References

13.0 Aims and Objectives

The main aim of this lesson is to learn the basics of computer, concepts of machine
language, various categories of languages, and rules of the language.

13.1 Introduction

A computer system includes both hardware and software. Software refers to the
programs, and hardware refers to the computer on which the instructions are
executed. A program written for a computer may be either dependent on or
independent of that computer. The computer understands only machine language,
while the user writes code in a high-level language, so there must exist a
translator that converts the high-level language into machine language. The
translator used for this conversion is referred to as a compiler. The basic
elementary programming concepts are discussed in this lesson.

13.2 Organization of Computer

13.2.1 Characteristics

A total computer system includes both hardware and software.

Hardware: the physical components of the computer.

Software: the set of programs written for the computer.

A program written by a user may be either dependent on or independent of the
physical computer that runs the program.

If a program is machine independent, then translator software is required to
convert the high-level language instructions into machine-level language
instructions. This translator is called a compiler.

13.2.2 Machine Language

A program is a set of instructions that directs the computer to perform a
required data-processing task.

There are various programming languages, but the computer can understand only
programs written in binary form. Programs written in any other language have to
be converted to a binary representation before they are executed by the
computer.

Symbol | Hexadecimal Code | Description
AND    | 0 or 8 | AND M to AC
ADD    | 1 or 9 | Add M to AC, carry to E
LDA    | 2 or A | Load AC from M
STA    | 3 or B | Store AC in M
BUN    | 4 or C | Branch unconditionally to m
BSA    | 5 or D | Save return address in m and branch to m+1
ISZ    | 6 or E | Increment M and skip if zero
CLA    | 7800   | Clear AC
CLE    | 7400   | Clear E
CMA    | 7200   | Complement AC
CME    | 7100   | Complement E
CIR    | 7080   | Circulate right E and AC
CIL    | 7040   | Circulate left E and AC
INC    | 7020   | Increment AC
SPA    | 7010   | Skip if AC is positive
SNA    | 7008   | Skip if AC is negative
SZA    | 7004   | Skip if AC is zero
SZE    | 7002   | Skip if E is zero
HLT    | 7001   | Halt computer
INP    | F800   | Input information and clear flag
OUT    | F400   | Output information and clear flag
SKI    | F200   | Skip if input flag is on
SKO    | F100   | Skip if output flag is on
ION    | F080   | Turn interrupt on
IOF    | F040   | Turn interrupt off

(MIN = 106, SUB = 107 and DIF = 108 are the symbolic addresses used in the
example subtract program, not machine instructions.)

Table 13.1 Computer Instructions

m: effective address

M: memory word (operand) found at m

Hierarchy of programming languages

Machine language
Binary code: the exact representation of instructions in memory in the form of
0s and 1s.
Octal or hexadecimal code: an equivalent translation of the binary code into
octal or hexadecimal.

Assembly language
It uses symbolic or mnemonic codes such as ADD, SUB, LOAD. Each symbolic
instruction code can be translated into a binary-coded instruction. The
translation is done by a special program called an ASSEMBLER.

High-level language
It uses simple English-like statements, and programs are written as sequences
of instructions; each statement must be converted into machine-level language
before execution. The conversion is carried out by a translator called a
COMPILER.

13.2.3 Assembly Language

A programming language is defined by a set of rules. Users must follow all the
rules of that language to execute a program correctly.

13.2.3.1 Rules of the Language

Each line is arranged in three columns called fields:

Label field
May be empty, or may specify a symbolic address consisting of up to 3
characters; it is terminated by a comma.

Instruction field
Specifies a machine instruction or a pseudo instruction. It may specify one of:
* a memory reference instruction (MRI). An MRI consists of two or three symbols
separated by spaces, e.g.
  ADD OPR (direct address MRI)
  ADD PTR I (indirect address MRI)
* a register reference or input-output instruction. A non-MRI does not have an
address part.
* a pseudo instruction with or without an operand.
A symbolic address used in the instruction field must be defined somewhere as a
label.

Comment field
May be empty or may include a comment.

Definition of Pseudo Instructions

ORG N: the hexadecimal number N is the memory location for the instruction or
operand listed in the following line.
END: denotes the end of the symbolic program.
DEC N: the signed decimal number N is to be converted to binary.
HEX N: the hexadecimal number N is to be converted to binary.

Assembly Language Program To Subtract Two Numbers

ORG 100      / Origin of program is location 100
LDA SUB      / Load subtrahend to AC
CMA          / Complement AC
INC          / Increment AC
ADD MIN      / Add minuend to AC
STA DIF      / Store difference
HLT          / Halt computer
MIN, DEC 83  / Minuend
SUB, DEC -23 / Subtrahend
DIF, HEX 0   / Difference stored here
END          / End of symbolic program

The first line has the pseudo instruction ORG, which defines the origin of the
program at memory location 100. The next six lines define machine instructions,
and the last four lines are pseudo instructions.
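The six machine instructions above can be traced with a minimal Python sketch.
The memory layout and variable names are mine; the accumulator AC is modelled
as a 16-bit value, and subtraction is done exactly as the program does it, by
forming the 2's complement of the subtrahend.

```python
MASK = 0xFFFF                      # 16-bit words

memory = {'MIN': 83, 'SUB': -23 & MASK, 'DIF': 0}

ac = memory['SUB']                 # LDA SUB : load subtrahend into AC
ac = ~ac & MASK                    # CMA     : complement AC
ac = (ac + 1) & MASK               # INC     : increment AC (AC now holds -SUB)
ac = (ac + memory['MIN']) & MASK   # ADD MIN : add minuend
memory['DIF'] = ac                 # STA DIF : store difference
                                   # HLT     : halt

assert memory['DIF'] == 106        # 83 - (-23) = 106
```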

13.3 Let us Sum Up

The concepts of machine language and its instruction set have been explained,
and a sample program to subtract two numbers has been shown at the different
language levels. A programming language is defined by a set of rules, and the
various rules of a programming language have been explained in this lesson.

13.4 Lesson-end Activities

1. Discuss low-level languages in detail.
2. Explain the organization of the computer.

13.5 Points for discussions

A compiler is a translator which converts high-level language into
machine-level language.

Assembly language uses mnemonic codes, and an assembler is a translator which
converts assembly language into machine-level language.

Hardware refers to the physical components of the computer.

Software refers to the instructions executed on the hardware.

13.6 References

Computer Organization and Programming, C. W. Gear
Computer System Concept and Design, Gibson

Lesson 14 : Translation to Binary, Register Transfer Language, ALU

Contents:

14.0 Aims and Objectives


14.1 Introduction
14.2 Translation to Binary
14.2.1 Register Transfer Language
14.2.1.1 Register Transfer
14.2.2 ALU
14.3 Let us Sum Up
14.4 Lesson-end Activities
14.5 Points for discussions
14.6 References

14.0 Aims and Objectives

The main aim of this lesson is to learn the concept of micro operations and its
execution in registers. Various micro operations such as Arithmetic and Logical micro
operations are discussed in this lesson in detail.

14.1 Introduction

A special program called an assembler translates a symbolic program into binary
form. A digital system is an interconnection of digital hardware modules that
performs a specific task. Digital modules are best defined by the registers
they contain and the operations that are performed on the data stored in them.
The operations executed on data stored in registers are called microoperations.
The symbolic notation used to describe microoperation transfers among registers
is called a register transfer language: a system for expressing in symbolic
form the microoperation sequences among the registers of a digital module.
Register transfer microoperations transfer binary information from one register
to another. Arithmetic microoperations perform arithmetic operations on numeric
data stored in registers. Logic microoperations perform bit-manipulation
operations on non-numeric data stored in registers.

14.2 Translation to Binary

The translation of a symbolic program into binary is done by a special program
called an ASSEMBLER. The process starts by scanning the symbolic program and
replacing the symbols with their equivalent machine codes.

The equivalent machine codes for the mnemonic codes are given in the table
below.

Location | Content (Hexadecimal Code) | Symbolic Program
         |      | ORG 100
100      | 2107 | LDA SUB
101      | 7200 | CMA
102      | 7020 | INC
103      | 1106 | ADD MIN
104      | 3108 | STA DIF
105      | 7001 | HLT
106      | 0053 | MIN, DEC 83
107      | FFE9 | SUB, DEC -23
108      | 0000 | DIF, HEX 0
         |      | END

A register is a group of flip flops, each capable of storing one bit of
information; an n-bit register can store n bits.

The operations executed on data stored in registers are called microoperations.

The symbolic notation describing microoperation transfers among registers is
called a register transfer language.

14.2.1 Register Transfer Language

A symbolic language
A convenient tool for describing the internal organization of digital computers
Can also be used to facilitate the design process of digital systems.

Designation Of Registers

Registers are designated by capital letters, sometimes followed by numbers
(e.g., A, R13, IR).
Often the names indicate function:
MAR - memory address register
PC - program counter
IR - instruction register
Registers and their contents can be viewed and represented in various ways:
as a single entity (e.g., MAR);
showing the individual bits of data they contain;
as a portion (subfield) of a register;
or as a single bit of a register.

[Figure 14.1 shows register representations: register R1 with its individual
bits numbered 7 to 0, a 16-bit register R2 with bits numbered 15 to 0, and the
program counter divided into the subfields PC(H) (bits 15-8) and PC(L)
(bits 7-0).]

Figure 14.1 Block Diagram of register

14.2.1.1 Register Transfer

Copying the contents of one register to another is a register transfer. A
register transfer is indicated as

R2 <- R1

In this case the contents of register R1 are copied (loaded) into register R2:
a simultaneous transfer of all bits from the source R1 to the destination
register R2 during one clock pulse. Note that the transfer is non-destructive;
i.e. the contents of R1 are not altered by copying (loading) them into R2.

A register transfer such as R3 <- R5 implies that the digital system has
data lines from the source register (R5) to the destination register (R3),
a parallel load in the destination register (R3), and
control lines to perform the action.

Control Functions

Often actions need to occur only if a certain condition is true. This is
similar to an if statement in a programming language. In digital systems, this
is done via a control signal, called a control function: if the signal is 1,
the action takes place.

This is represented as:

P: R2 <- R1

which means if P = 1, then load the contents of register R1 into register R2,
i.e., if (P = 1) then (R2 <- R1).
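The controlled transfer P: R2 <- R1 can be sketched in Python as a register
with a parallel load input (the class and names are mine): on each clock call,
the data is loaded only when the load/control input is 1.

```python
class Register:
    """A toy n-bit register with a parallel load controlled by a load input."""
    def __init__(self, width=8, value=0):
        self.width = width
        self.value = value & ((1 << width) - 1)

    def clock(self, load, data):
        """On the clock pulse, load `data` only when the load input is 1."""
        if load:
            self.value = data & ((1 << self.width) - 1)

# P: R2 <- R1  -- the transfer happens only on pulses where P = 1
r1, r2 = Register(value=0x5A), Register(value=0x00)
p = 0
r2.clock(p, r1.value)
assert r2.value == 0x00     # P = 0: R2 unchanged
p = 1
r2.clock(p, r1.value)
assert r2.value == 0x5A     # P = 1: contents of R1 loaded into R2
assert r1.value == 0x5A     # the transfer is non-destructive
```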

Implementation of Controlled Transfer

P: R2 <- R1

Figure 14.2 Transfer From R1 To R2 When P = 1 (Block Diagram)

[Figure 14.3 shows the clock and load waveforms between clock edges t and t+1.]

Figure 14.3 Timing Diagram

The same clock controls the circuits that generate the control function and the
destination register. Registers are assumed to use positive-edge-triggered
flip-flops. If two or more operations are to occur simultaneously, they are
separated with commas:

P: R3 <- R5, MAR <- IR

Here, if the control function P = 1, the contents of R5 are loaded into R3,
and at the same time (clock) the contents of register IR are loaded into
register MAR.

Basic Symbols For Register Transfers

14.2.2 ALU

Computer system microoperations are of four types:

- Register transfer microoperations
- Arithmetic microoperations
- Logic microoperations
- Shift microoperations

The basic arithmetic microoperations are
Addition
Subtraction
Increment
Decrement

The additional arithmetic microoperations are
Add with carry
Subtract with borrow
Transfer/Load
etc.

R3 <- R1 + R2        Contents of R1 plus R2 transferred to R3
R3 <- R1 - R2        Contents of R1 minus R2 transferred to R3
R2 <- R2'            Complement the contents of R2 (1's complement)
R2 <- R2' + 1        2's complement the contents of R2 (negate)
R3 <- R1 + R2' + 1   R1 plus the 2's complement of R2 (subtraction)
R1 <- R1 + 1         Increment
R1 <- R1 - 1         Decrement

[Figure 14.3 shows a 4-bit binary adder built from a chain of full adders (FA)
with carries C0 to C4 and sum outputs S0 to S3, a binary adder-subtractor, and
a binary incrementer built from half adders (HA).]

Figure 14.3 Binary Adder / Subtractor / Incrementer



[Figure 14.4 shows the 4-bit arithmetic circuit: for each stage i, a 4x1 MUX
with select lines S1, S0 chooses Yi from Bi, Bi', 0 or 1, and a full adder
computes Di = Ai + Yi + Ci.]

S1 S0 Cin | Y  | Output         | Microoperation
0  0  0   | B  | D = A + B      | Add
0  0  1   | B  | D = A + B + 1  | Add with carry
0  1  0   | B' | D = A + B'     | Subtract with borrow
0  1  1   | B' | D = A + B' + 1 | Subtract
1  0  0   | 0  | D = A          | Transfer A
1  0  1   | 0  | D = A + 1      | Increment A
1  1  0   | 1  | D = A - 1      | Decrement A
1  1  1   | 1  | D = A          | Transfer A

Figure 14.4 Arithmetic Circuit
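The function table above can be modelled with a short Python sketch (the
function name is mine): a list-indexing step plays the role of the 4x1 MUX
that selects Y, and ordinary addition with a width mask plays the role of the
full-adder chain.

```python
def arithmetic_circuit(s1, s0, cin, a, b, width=4):
    """Model of the arithmetic circuit: the MUX selects Y from
    B, B', 0 or all-1s, and the adders compute D = A + Y + Cin."""
    mask = (1 << width) - 1
    y = [b, (~b) & mask, 0, mask][s1 * 2 + s0]   # the 4x1 MUX
    return (a + y + cin) & mask                  # the full-adder chain

A, B = 9, 3
assert arithmetic_circuit(0, 0, 0, A, B) == 12   # D = A + B (add)
assert arithmetic_circuit(0, 1, 1, A, B) == 6    # D = A + B' + 1 (subtract)
assert arithmetic_circuit(1, 0, 1, A, B) == 10   # D = A + 1 (increment)
assert arithmetic_circuit(1, 1, 0, A, B) == 8    # D = A - 1 (decrement)
```

Note how decrement works without a subtractor: adding the all-1s word is the
same as adding -1 in 2's complement.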


Logical Microoperations

Logic microoperations specify binary operations on the strings of bits in
registers. They are bit-wise operations, i.e., they work on the individual bits
of data; they are useful for bit manipulations on binary data and for making
logical decisions based on bit values. There are, in principle, 16 different
logic functions that can be defined over two binary input variables:

A B | F0 F1 F2 ... F13 F14 F15
0 0 |  0  0  0 ...   1   1   1
0 1 |  0  0  0 ...   1   1   1
1 0 |  0  0  1 ...   0   1   1
1 1 |  0  1  0 ...   1   0   1

Most systems implement only four of these: AND, OR, XOR and Complement/NOT.
The others can be created from combinations of these.

List of logic microoperations:
- 16 different logic operations are possible with 2 binary variables;
- with n binary variables there are 2^(2^n) functions.

The truth tables for the 16 functions of 2 variables and the corresponding 16
logic microoperations are given below.

x y: 00 01 10 11 | Function        | Microoperation   | Name
      0  0  0  0 | F0 = 0          | F <- 0           | Clear
      0  0  0  1 | F1 = xy         | F <- A AND B     | AND
      0  0  1  0 | F2 = xy'        | F <- A AND B'    |
      0  0  1  1 | F3 = x          | F <- A           | Transfer A
      0  1  0  0 | F4 = x'y        | F <- A' AND B    |
      0  1  0  1 | F5 = y          | F <- B           | Transfer B
      0  1  1  0 | F6 = x XOR y    | F <- A XOR B     | Exclusive-OR
      0  1  1  1 | F7 = x + y      | F <- A OR B      | OR
      1  0  0  0 | F8 = (x + y)'   | F <- (A OR B)'   | NOR
      1  0  0  1 | F9 = (x XOR y)' | F <- (A XOR B)'  | Exclusive-NOR
      1  0  1  0 | F10 = y'        | F <- B'          | Complement B
      1  0  1  1 | F11 = x + y'    | F <- A OR B'     |
      1  1  0  0 | F12 = x'        | F <- A'          | Complement A
      1  1  0  1 | F13 = x' + y    | F <- A' OR B     |
      1  1  1  0 | F14 = (xy)'     | F <- (A AND B)'  | NAND
      1  1  1  1 | F15 = 1         | F <- all 1's     | Set to all 1's

[Figure 14.5 shows one stage of the logic circuit: inputs Ai and Bi feed AND,
OR, XOR and NOT gates whose outputs go to a 4x1 MUX; select lines S1, S0 choose
the function output Fi.]

Function table:
S1 S0 | Output      | Microoperation
0  0  | F = A AND B | AND
0  1  | F = A OR B  | OR
1  0  | F = A XOR B | XOR
1  1  | F = A'      | Complement

Figure 14.5 Hardware Implementation of Logic Microoperations

Applications of Logical Microoperations

Logic microoperations can be used to manipulate individual bits or portions of
a word in a register. Consider the data in a register A; another register, B,
holds the bit data that will be used to modify the contents of A.

Selective-set          A <- A OR B
Selective-complement   A <- A XOR B
Selective-clear        A <- A AND B'
Mask (Delete)          A <- A AND B
Clear                  A <- A XOR B
Insert                 A <- (A AND B) OR C
Compare                A <- A XOR B
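Because these microoperations are plain bit-wise AND, OR and XOR, they map
directly onto Python's integer operators. The sketch below (values and names
are mine) shows how the 1-bits of B select which bits of A get set, cleared or
complemented.

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

A = 0b1010_0011
B = 0b0000_1111       # the 1-bits of B pick which bits of A to modify

selective_set        = A | B            # set A where B has 1s
selective_complement = A ^ B            # complement A where B has 1s
selective_clear      = A & (~B & MASK)  # clear A where B has 1s
mask_op              = A & B            # delete A where B has 0s

assert selective_set        == 0b1010_1111
assert selective_complement == 0b1010_1100
assert selective_clear      == 0b1010_0000
assert mask_op              == 0b0000_0011
```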

Figure 14.6 One Stage of Arithmetic Logic Shift Unit

14.3 Let us Sum Up


This lesson has discussed the various register transfer microoperations, their
timing diagram and the basic symbols used in microoperations. A register
transfer microoperation does not change the information content as the binary
information moves from the source register to the destination register, whereas
arithmetic and logic microoperations change the information content during the
transfer. The basic arithmetic microoperations have been discussed with their
logic circuits, and the truth tables for the logic microoperations and their
various applications have been covered.

14.4 Lesson-end Activities

1. Define micro-operation. What are the types of micro-operations?
2. With a neat diagram, explain register transfer.
3. Give the arithmetic circuit and explain the various microoperations that can
be performed using the circuit.
4. Draw the logic circuit and list out the various logic microoperations that
can be performed using the circuit.
5. Discuss the ALU in detail.

14.5 Points for discussions

An assembler is a translator which converts assembly language programs into
machine language.

A register is a group of flip flops, each capable of storing one bit of
information.

A register transfer language is
1. a symbolic language;
2. a convenient tool for describing the internal organization of digital
computers;
3. also usable to facilitate the design process of digital systems.

Computer system microoperations are of four types:

Register transfer microoperations
Arithmetic microoperations
Logic microoperations
Shift microoperations

14.6 References

Computer System Architecture - M.Morris Mano



Lesson 15: General Register Organization, Stack Organization

Contents:

15.0 Aims and Objectives


15.1 Introduction
15.2 General Register Organization
15.2.1 Stack Organization
15.2.1.1 Reverse Polish Notation
15.3 Let us Sum Up
15.4 Lesson-end Activities
15.5 Points for discussions
15.6 References

15.0 Aims and Objectives

The main objective of this lesson is to learn the concepts of general register
organization, the interconnectivity among registers, control word and the storage
device called as stack based on last in first out mechanism.

15.1 Introduction

The part of the computer that performs data-processing operations is called the
central processing unit. It is made up of three major parts: the register set,
which holds intermediate data; the arithmetic and logic unit (ALU), which
performs the microoperations required for executing instructions; and the
control unit, which supervises the transfer of information among registers and
instructs the ALU which operation to perform. The stack in a digital computer
is a memory unit with an address register that can only count (after an initial
value is loaded into it). The register that holds the address for the stack is
called the stack pointer because its value always points at the top item in the
stack. The two operations performed on the stack are PUSH and POP.

15.2 General Register Organization

Central Processing Unit

The part of the computer that performs the bulk of data-processing operations
is called the Central Processing Unit (CPU).

Major Components of the CPU

Register set
o Holds intermediate data used during instruction execution
Arithmetic Logic Unit (ALU)
o Performs microoperations for executing instructions
Control unit
o Supervises data transfers among registers
o Tells the ALU which type of operation to perform

Figure 15.1 Major Components of CPU

General Register Organization

When a large number of registers are included in the CPU, it is most efficient
to connect them through a common bus system.

Figure 15.2 Block Diagram

A bus organization for seven CPU registers is shown above. The output of each
register is connected to two multiplexers (MUX) to form the two buses A and B.
The selection lines in each multiplexer select one register, or the input data,
for the particular bus. The A and B buses form the inputs to a common
arithmetic logic unit. The operation selected in the ALU determines the
arithmetic or logic microoperation to be performed. The result of the
microoperation is available as output data and also goes to the inputs of all
the registers.

To perform the operation

R1 <- R2 + R3

the control must provide binary selection variables to the following selector
inputs:
MUX A selector (SELA)
o Put the content of R2 on bus A
MUX B selector (SELB)
o Put the content of R3 on bus B
ALU operation selector (OPR)
o Perform the arithmetic addition A + B
Decoder destination selector (SELD)
o Transfer the content of the output bus into R1

Control Word

There are 14 binary selection inputs in the unit and their combined value specifies a
control word.

Table 15.1 Encoding of Register Selection Fields

The 3 bit binary code in the first column of the table specifies the binary code for each
of the three fields. The register selected by fields SELA, SELB and SELD is the one
whose decimal number is equivalent to the binary number in the code. When SELA or
SELB is 000, the corresponding multiplexer selects the external input data. When
SELD = 000, no destination register is selected but the contents of the output bus are
available in the external output.
The ALU provides arithmetic and logical operations. In addition, the CPU must either
provide post shift operations or pre shift operations. The encoding of the ALU
operations for the CPU is given below

Table 15.2 ALU Function Table



Table 15.3 Encoding of ALU Operations

Table 15.4 Examples of Microoperations
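Packing the 14 selection bits into a control word can be sketched in Python.
The field widths (SELA, SELB, SELD at 3 bits each and OPR at 5 bits) follow the
text; the specific register and operation codes in the example are illustrative
assumptions, since the encoding tables are given only in the figures.

```python
def control_word(sela, selb, seld, opr):
    """Pack the 14-bit control word: SELA(3) | SELB(3) | SELD(3) | OPR(5)."""
    assert sela < 8 and selb < 8 and seld < 8 and opr < 32
    return (sela << 11) | (selb << 8) | (seld << 5) | opr

# R1 <- R2 + R3 : SELA = R2 (010), SELB = R3 (011), SELD = R1 (001),
# OPR = ADD (00101 here is an assumed code, not taken from the tables)
cw = control_word(0b010, 0b011, 0b001, 0b00101)
assert cw == 0b010_011_001_00101
```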

15.2.1 Stack Organization

A stack works on the last-in, first-out (LIFO) principle: the item stored last
in the list is retrieved first. The stack in a digital computer is essentially
a memory unit with an address register that can only count (after an initial
value is loaded into it).

Figure 15.3 Conceptual View of Stack

Stack pointer (SP)
The register that holds the stack address; it always points to the top item in
the stack.

Operations
Push: put a new item on top of the stack
Pop: remove the item from the top of the stack

Register Stack

A stack can reside in a portion of a large memory unit, or it can be organized
as a collection of a finite number of (fast) registers.

Figure 15.4 Organization of 64-word register stack

Registers:

FULL: one bit (set when the stack is full)
EMTY: one bit (set when the stack is empty)
SP: six bits
DR: stack I/O register (holds the data to be written into or read out of the
stack)

Stack Initialization
SP <- 0
EMTY <- 1
FULL <- 0

Push
SP <- SP + 1
M[SP] <- DR
If (SP = 0) then (FULL <- 1)
EMTY <- 0

Pop
DR <- M[SP]
SP <- SP - 1
If (SP = 0) then (EMTY <- 1)
FULL <- 0
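The push and pop sequences above can be sketched as a Python class (the class
and attribute names are mine). The modulo arithmetic models the 6-bit SP
wrapping around to 0, which is exactly the condition the FULL and EMTY flags
test.

```python
class RegisterStack:
    """64-word register stack with the FULL/EMTY flags described above."""
    def __init__(self, size=64):
        self.size = size
        self.mem = [0] * size
        self.sp = 0          # stack pointer (counts modulo size)
        self.full = 0
        self.emty = 1

    def push(self, dr):
        assert not self.full, "stack overflow"
        self.sp = (self.sp + 1) % self.size   # SP <- SP + 1
        self.mem[self.sp] = dr                # M[SP] <- DR
        if self.sp == 0:                      # if (SP = 0) then FULL <- 1
            self.full = 1
        self.emty = 0                         # EMTY <- 0

    def pop(self):
        assert not self.emty, "stack underflow"
        dr = self.mem[self.sp]                # DR <- M[SP]
        self.sp = (self.sp - 1) % self.size   # SP <- SP - 1
        if self.sp == 0:                      # if (SP = 0) then EMTY <- 1
            self.emty = 1
        self.full = 0                         # FULL <- 0
        return dr

s = RegisterStack()
s.push(10); s.push(20)
assert s.pop() == 20 and s.pop() == 10   # last in, first out
assert s.emty == 1
```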

Memory Stack

The implementation of a stack in the CPU is done by assigning a portion of
memory to stack operations, using a processor register as the stack pointer.
SP points at the top of the stack. Three registers are connected to a common
address bus, and any one of them can provide an address for memory: PC is used
during the fetch phase to read an instruction, AR is used during the execute
phase to read an operand, and SP is used to push items onto or pop items from
the stack.

Items in the stack communicate with the data register DR.

Figure 15.5 Computer memory with program, data and stack segments

The stack limits can be checked by using two processor registers: one to hold
the upper limit (e.g., 3000) and the other to hold the lower limit.

15.2.1.1 Reverse Polish Notation

A stack is effective for evaluating arithmetic expressions.

Arithmetic operations are usually written in infix notation: each operator
resides between its operands, e.g. (A * B) + (C * D), where * denotes
multiplication. A * B and C * D have to be computed and stored; only after the
two products are available can the sum (A * B) + (C * D) be computed. In infix
notation there is no straightforward way to determine the next operation to be
performed.

Arithmetic expressions can instead be presented in prefix notation (also
referred to as Polish notation, after the Polish mathematician Lukasiewicz),
in which operators are placed before the operands. The postfix notation
(reverse Polish notation, RPN) places the operator after the operands.

e.g.: A + B   infix notation
      + A B   prefix notation
      A B +   postfix notation (RPN)

Reverse Polish Notation

A stack organization is very effective for evaluating arithmetic expressions.


Reverse Polish notation (RPN), also known as postfix notation, is an arithmetic
formula notation derived from the Polish notation introduced in 1924 by the Polish
mathematician Jan Łukasiewicz. RPN was invented by the Australian philosopher and
computer scientist Charles Hamblin in the mid-1950s, to enable zero-address
memory stores.

As a user interface for calculation, the notation was first used in Hewlett-Packard's
desktop calculators from the late 1960s and then in the HP-35 handheld scientific
calculator launched in 1972. In RPN the operands precede the operator, thus
dispensing with the need for parentheses.

For example, the expression 3 * ( 4 + 7) would be written as 3 4 7 + *, and done on an


RPN calculator as "3", "Enter", "4", "Enter", "7", "+", "*". (Alternatively, and more-
compactly, it could also be re-ordered and written as 4 7 + 3 *, and done on an RPN
calculator as "4", "Enter", "7", "+", "3", "*".)

Implementations of RPN are stack-based; that is, operands are popped from a stack,
and calculation results are pushed back onto it

The calculation: ((1 + 2) * 4) + 3 can be written down like this in RPN:


1 2 + 4 * 3 +
The expression is evaluated in the following way (the stack is displayed after each
operation has taken place):

Input Stack Operation


1 1 Push operand
2 1,2 Push operand
+ 3 Addition
4 3,4 Push Operand
* 12 Multiplication
3 12,3 Push Operand
+ 15 Addition

The final result, 15, lies on the top of the stack at the end of the calculation.

Stack operations to evaluate 3* 4 + 5 * 6
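The table-driven evaluation above translates directly into a stack algorithm. The following sketch (function name is illustrative) pushes operands and, for each operator, pops two operands and pushes the result back, exactly as the stack column shows.

```python
def eval_rpn(tokens):
    """Evaluate a reverse Polish (postfix) expression with a stack:
    operands are pushed; an operator pops two operands and pushes
    the result back onto the stack."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()              # top of stack is the right operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))       # push operand
    return stack.pop()                   # final result lies on top of the stack


print(eval_rpn("1 2 + 4 * 3 +".split()))   # ((1 + 2) * 4) + 3 = 15
print(eval_rpn("3 4 * 5 6 * +".split()))   # 3 * 4 + 5 * 6 = 42
```

The second call evaluates 3 * 4 + 5 * 6 from its postfix form 3 4 * 5 6 * +, matching the stack operations traced above.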

15.3 Let us Sum Up

A bus organization for seven CPU registers has been discussed. The concept of
control word and examples of microoperations has been discussed. Stack is a storage
device and it performs two operations in it called as Push and Pop. The sequence of
push operations and pop operations has been discussed in a series of microoperations.
The concept of stack limits and usage of stack for evaluating arithmetic expressions
has been discussed with examples.

15.4 Lesson-end Activities

1. Explain the general register organization.
2. Discuss stack organization in detail.
3. What is meant by reverse Polish notation? How is it evaluated using a stack? Explain.

15.5 Points for discussions

Stack pointer (SP)


1. Always points to the top item in the stack
2. A register that holds the stack address
Operations
1. Push - put a new item on top of the stack
2. Pop - remove the item from the top of the stack

Reverse Polish Notation: A stack organization is very effective for evaluating
arithmetic expressions. Reverse Polish notation (RPN), also known as postfix
notation, is an arithmetic formula notation derived from the Polish notation
introduced in 1924 by the Polish mathematician Jan Łukasiewicz. RPN was invented
by the Australian philosopher and computer scientist Charles Hamblin in the
mid-1950s.

15.6 References

Computer Architecture - A. Abdul Namith


Computer System Architecture - M.Morris Mano

Lesson 16 : Instruction Formats , Addressing Modes

Contents:

16.0 Aims and Objectives


16.1 Introduction
16.2 Instruction Formats
16.2.1 Addressing Modes
16.3 Let us Sum Up
16.4 Lesson-end Activities
16.5 Points for discussions
16.6 References

16.0 Aims and Objectives

The main objective of this lesson is to learn the concepts of instruction formats such
as three address instructions, two address instructions etc and the various addressing
modes.

16.1 Introduction

A computer will have a variety of instruction code formats. The most common fields
found in instruction formats are an operation code field, an address field, and a mode
field. The addressing mode specifies a rule for interpreting or modifying the address
field of the instruction before the operand is actually referenced. Addressing mode
techniques are used to give versatility and to reduce the number of bits in the
address field of instructions.

16.2 Instruction Formats

A computer will have a variety of instruction code formats.


The most common fields found in instruction formats are
An operation field that specifies the operation to be performed
An address field that designates a memory address or a processor register
A mode field that specifies the way the operand or the effective address is
determined

Types of CPU Organizations

Single accumulator
General register
Stack

Single Accumulator

ADD X
AC ← AC + M[X]

General Register

ADD R1, R2, R3
R1 ← R2 + R3
ADD R1, R2
R1 ← R1 + R2
MOV R1, R2
R1 ← R2
ADD R1, X
R1 ← R1 + M[X]
Stack

PUSH X
ADD (zero-address instruction)
Pops two numbers off the stack
Adds them
Pushes the result back on the stack

Three Address Instructions

It results in short programs when evaluating arithmetic expressions


X = (A + B) * (C + D)
ADD R1, A, B    R1 ← M[A] + M[B]
ADD R2, C, D    R2 ← M[C] + M[D]
MUL X, R1, R2   M[X] ← R1 * R2

Two Address Instructions

This format is commonly used in commercial computers


X = (A + B) * (C + D)
MOV R1, A
ADD R1, B
MOV R2, C
ADD R2, D
MUL R1, R2
MOV X, R1

One Address Instructions

One address instructions use an implied accumulator (AC) register for all data
manipulation
X = (A + B) * (C + D)
LOAD A
ADD B
STORE T
LOAD C
ADD D
MUL T
STORE X

Zero Address Instructions

A stack-organized computer does not use an address field for the instructions ADD
and MUL
X = (A + B) * (C + D)
PUSH A
PUSH B
ADD
PUSH C
PUSH D
ADD
MUL
POP X
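The zero-address program above can be executed by a tiny simulated stack machine. In this sketch (instruction set and memory layout are illustrative), only PUSH and POP carry an address; ADD and MUL operate implicitly on the top two stack items.

```python
def run_stack_machine(program, memory):
    """Tiny zero-address machine: PUSH/POP carry a memory address,
    while ADD and MUL pop the top two stack items and push the result."""
    stack = []
    for instr in program:
        op, *addr = instr.split()
        if op == "PUSH":
            stack.append(memory[addr[0]])      # push M[addr]
        elif op == "POP":
            memory[addr[0]] = stack.pop()      # M[addr] <- top of stack
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return memory


mem = {"A": 2, "B": 3, "C": 4, "D": 5}
prog = ["PUSH A", "PUSH B", "ADD", "PUSH C", "PUSH D", "ADD", "MUL", "POP X"]
run_stack_machine(prog, mem)
print(mem["X"])   # (2 + 3) * (4 + 5) = 45
```

Running the same expression X = (A + B) * (C + D) on this machine shows why no address field is needed for ADD and MUL: the operands are always the two topmost stack items.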

RISC Instructions

Only load and store instructions can reference memory; all other instructions
operate only on registers


X = (A + B) * (C + D)
LOAD R1, A
LOAD R2, B
LOAD R3, C
LOAD R4, D
ADD R1, R1, R2
ADD R3, R3, R4
MUL R1, R1, R3
STORE X, R1

16.2.1 Addressing Modes

Figure 16.1 Instruction Format


Addressing Modes
Implied - operands are specified implicitly in the definition of the instruction
E.g., complement accumulator

Immediate Mode - the operand is part of the instruction itself

Example: ADD #1         add 1 to AC
         LDR R3, #359   R3 ← 359

Register Mode - operand is in some CPU register


Example: ADD R1, R2, R3
         LR R5, R10    R5 ← R10

Register indirect Mode - the register in the CPU contains the address of the location
in memory that contains the operand
Example: LD R1, @R2    R1 ← M[R2]

Auto-increment / Auto-decrement - similar to register indirect, but the register is
incremented after (auto-increment) or decremented before (auto-decrement) being used
Example: LD R2, (R1)+    R2 ← M[R1],
                         R1 ← R1 + 1

Direct Address - the address of the operand is given in the instruction

Example: ADD R1, $0x22330
         LD R2, X    R2 ← M[X]

Indirect Address - the effective address is stored in the memory location specified
in the address part of the instruction
Example: LDA X I    AC ← M[M[X]]

Relative Address - the address part of the instruction is added to the program
counter to obtain the effective address
Example: JMP -17    PC ← PC + ADR, where ADR = -17

Indexed Addressing - the value of an index register is added to the address part of
the instruction to yield the effective address
Example: LD R1, X(R3)    ADR ← addr(X), R1 ← M[ADR + R3]

Base Register Addressing - similar to indexed addressing, but the register holds a
base address and the address field a displacement; used for location-independent
programs
Example: LD R1, X    ADR ← addr(X), R1 ← M[BR + ADR]

Figure 16.2 Numerical Example for Addressing Modes
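The modes above can be compared side by side in software. This sketch (register and memory contents are illustrative, not taken from the figure) resolves the operand for several of the modes given the same address field:

```python
def fetch_operand(mode, addr_field, regs, mem, pc):
    """Resolve the operand for a few of the addressing modes above."""
    if mode == "immediate":
        return addr_field                   # operand is in the instruction
    if mode == "register":
        return regs[addr_field]             # operand is in a CPU register
    if mode == "register_indirect":
        return mem[regs[addr_field]]        # register holds the operand address
    if mode == "direct":
        return mem[addr_field]              # instruction holds the address
    if mode == "indirect":
        return mem[mem[addr_field]]         # instruction holds address of address
    if mode == "relative":
        return mem[pc + addr_field]         # effective address = PC + address field
    raise ValueError("unknown mode: " + mode)


regs = {"R1": 400}
mem = {300: 800, 400: 700, 500: 300, 702: 325}
print(fetch_operand("immediate", 500, regs, mem, pc=200))          # 500
print(fetch_operand("direct", 500, regs, mem, pc=200))             # 300
print(fetch_operand("indirect", 500, regs, mem, pc=200))           # 800
print(fetch_operand("register_indirect", "R1", regs, mem, pc=200)) # 700
print(fetch_operand("relative", 502, regs, mem, pc=200))           # 325
```

The point of the comparison: the same address field (500) yields a different operand under each mode, because each mode applies a different rule to compute the effective address.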



16.3 Let us Sum Up

The various instruction formats such as three address, two address, one address and
zero address has been discussed with examples and various addressing modes such as
implied mode, immediate mode, Register mode, Register indirect mode, relative
addressing mode has been discussed.

16.4 Lesson-end Activities

1. Discuss the addressing modes.
2. Explain in detail the general instruction formats.

16.5 Points for discussions

Three address Instructions


Two Address Instructions
One Address Instructions
Zero Address Instructions
Various Addressing Modes

16.6 References

Computer System Architecture - M.Morris Mano



Lesson 17 : Data Transfer and Manipulation, Program Control

Contents:

17.0 Aims and Objectives


17.1 Introduction
17.2 Data Transfer and Manipulation
17.2.1 Program Control
17.3 Let us Sum Up
17.4 Lesson-end Activities
17.5 Points for discussions
17.6 References

17.0 Aims and Objectives

The main objective of the lesson is to learn the data transfer instructions, Arithmetic
Instructions and Logical instructions and the program control instructions and types of
interrupts.

17.1 Introduction

Computers give an extensive set of instructions to give the user the flexibility to carry
out various computational tasks.
The computer instructions are classified into three categories

1 Data Transfer Instructions


2 Data Manipulation Instructions
3 Program Control Instructions
Data transfer instructions move data from one place in the computer to another
without changing the data content. Data manipulation instructions perform
operations on data and provide the computational capabilities of the computer.
Program control instructions use a special status or flag register, called the
Program Status Word (PSW), to store the results of an instruction (such as compare).
Subsequent instructions can test the bits in the status word and branch depending on
their values.

17.2 Data Transfer and Manipulation

Computers give an extensive set of instructions to give the user the flexibility to carry
out various computational tasks.

The computer instructions are classified into three categories

Data Transfer Instructions


Data Manipulation Instructions
Program Control Instructions

Data Transfer Instructions



Data transfer instructions move data from one place in the computer to another
without changing the data content. The most common transfers are between memory
and processor registers, between processor registers and input or output, and among
the processor registers themselves.

The load instruction is used to transfer data from memory to a processor register.
The store instruction is used to transfer data from a processor register to memory.
The move instruction is used to transfer data from one CPU register to another.

Name Mnemonic Description


Load LD Transfer from memory to a processor register
Store ST Transfer from a processor register to memory
Move MOV Transfer from one register to another
Exchange XCH Swap information between two registers
Input IN Transfer between a processor register and an input terminal
Output OUT Transfer between a processor register and an output terminal
Push PUSH Transfer from a processor register to the memory stack
Pop POP Transfer from the memory stack to a processor register

Table 17.1 Typical Data Transfer Instructions

Data Manipulation Instructions

It performs operations on data and provides the computational capabilities for the
computer. It is divided into 3 basic types.

Arithmetic Instructions
Logical and Bit Manipulation Instructions
Shift Instructions

Name Mnemonic Description


Increment INC Adds 1 to the value stored in register
Decrement DEC Subtract 1 from the value stored in register
Add ADD Perform addition on the data in processor register
Subtract SUB Perform Subtraction on the data in processor register
Multiply MUL Perform multiplication on the data in processor register
Divide DIV Perform division on the data in processor register
Add with ADDC Performs the addition of two operands plus the value of the carry
Carry from the previous computation
Subtract SUBB Subtracts two words and a borrow that resulted from a previous
with operation
Borrow
Negate NEG Forms the 2's complement of a number

Table 17.2 Arithmetic Instructions

Name Mnemonic

Clear CLR
Complement COM
AND AND
OR OR
Exclusive OR XOR
Clear Carry CLRC
Set carry SETC
Complement Carry COMC
Enable Interrupt EI
Disable Interrupt DI

Table 17.3 Logical and Bit Manipulation Instructions

Name Mnemonic
Logical Shift Right SHR
Logical Shift Left SHL
Arithmetic Shift Right SHRA
Arithmetic Shift Left SHLA
Rotate Right ROR
Rotate Left ROL
Rotate Right Through carry RORC
Rotate Left through Carry ROLC

Table 17.4 Shift Instructions

17.2.1 Program Control

Program control instructions use a special status or flag register, called the
Program Status Word (PSW), to store the results of an instruction (such as compare).
Subsequent instructions can test the bits in the status word and branch depending on
their values.
Bit C (carry) is set to 1 if the end carry is 1; cleared to 0 if the carry is 0.
Bit S (sign) is set to 1 if the highest-order bit of the result is 1; 0 otherwise.
Bit Z (zero) is set to 1 if the result of the operation is zero; 0 otherwise.
Bit V (overflow) is set to 1 if the exclusive-OR of the last two carries equals 1;
cleared to 0 otherwise.
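The four status bits can be made concrete with a small 8-bit example. This sketch (an illustration, assuming 8-bit registers) computes C, S, Z, and V for an addition, taking V as the XOR of the carry into and the carry out of the sign bit:

```python
def status_bits(a, b):
    """Compute the C, S, Z, V status bits for the 8-bit addition a + b."""
    total = a + b
    result = total & 0xFF
    c = (total >> 8) & 1                 # C: end carry out of bit 7
    s = (result >> 7) & 1                # S: highest-order bit of the result
    z = 1 if result == 0 else 0          # Z: result is zero
    # V: carry into bit 7 XOR carry out of bit 7 (signed overflow)
    carry_into_7 = (((a & 0x7F) + (b & 0x7F)) >> 7) & 1
    v = carry_into_7 ^ c
    return result, c, s, z, v


# 127 + 1 overflows the signed 8-bit range: result looks negative, V = 1
print(status_bits(0x7F, 0x01))   # (128, 0, 1, 0, 1)
# 255 + 1 wraps around: result 0, so both C and Z are set
print(status_bits(0xFF, 0x01))   # (0, 1, 0, 1, 0)
```

The two calls show the difference between the carry bit and the overflow bit: C reports unsigned wrap-around, while V reports that the signed result no longer fits in 8 bits.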
Subroutine Call and Return
Assume that there exists a stack and that a register (SP) points to the top of the stack.
Call
JSR X:
SP ← SP - 1
M[SP] ← PC
PC ← effective address
Return
RTN: PC ← M[SP]
SP ← SP + 1

Interrupts

The concept of program interrupt is used to handle a variety of problems that arise
out of the normal program sequence.
Program Status Word - the collection of all status bit conditions in the CPU
(C, S, Z, V, I, and the supervisor bit).
Bit I is 1 if interrupts are enabled
Bit S is 1 if the CPU is in supervisor mode

Supervisor mode
1. special instructions can be executed
2. set when running in the operating system
3. results from an interrupt

Interrupt Vector

Located in a fixed part of memory


On interrupt, all registers are pushed on to the current program stack.

Types of Interrupts

1. External interrupts come from input-output devices or from a timing device.
Examples of external interrupts are:
I/O device requesting transfer of data
Elapsed time of an event
Power failure
I/O device finished transfer of data

2. Internal interrupts arise from illegal or erroneous use of an instruction or data.
They are also called traps.
Examples of Internal Interrupts are
Register Overflow
Attempt to divide by zero
Invalid Operation Code
Stack Overflow

3. A software interrupt is initiated by executing an instruction; many of these are
requests to the operating system.
Example:
open, close, read, and write a file or terminal

17.3 Let us Sum Up

This lesson covered the three categories of computer instructions and its mnemonic
codes. The concept of status bit register has been explained by a diagrammatic
representation. The concepts of subroutine calls and return and types of interrupts has
been discussed.

17.4 Lesson-end Activities

1.Discuss data transfer instructions.


2.Discuss data manipulation instructions.
3. Discuss program control instructions.
4. What are interrupts? Explain the types of interrupts.

17.5 Points for discussions

The collection of all status bit conditions in the CPU is called as a Program Status
Word.

External Interrupt - I/O device, Timing Device

Internal Interrupt - Illegal Instruction or data

Software Interrupt - Supervisor Call Instruction

17.6 References

Computer System Architecture M.Morris Mano



UNIT V
Lesson 18: Input Output Organization and Memories: Peripheral Devices,
Input-Output Interface, and Asynchronous Data Transfer

Contents:

18.0 Aims and Objectives


18.1 Introduction
18.2 Peripheral Devices
18.2.1 Input - Output Interface
18.2.1.1 I/O Bus and Interface Modules
18.2.1.2 I/O versus Memory Bus
18.2.1.3 Isolated versus Memory Mapped I/O
18.2.2 Asynchronous Data Transfer
18.2.2.1 Strobe Control
18.2.2.2 Handshaking
18.2.2.3 Asynchronous Serial Transfer
18.2.2.4 Asynchronous Communication Interface
18.3 Let us Sum Up
18.4 Lesson-end Activities
18.5 Points for discussions
18.6 References

18.0 Aims and Objectives

The aim of this lesson is to learn about peripheral devices such as the monitor,
keyboard, printer, magnetic tape, and disk, the method of transferring information
between internal storage and external I/O devices, and the various asynchronous data
transfer methods.

18.1 Introduction

The input-output subsystem of a computer, referred to as I/O, provides an efficient
mode of communication between the central system and the outside environment. The
various peripheral devices are the monitor, keyboard, and printer, and the storage
devices are magnetic tape, magnetic disk, etc. The role of the ASCII code in I/O
operations is discussed. An input-output interface provides a method of transferring
information between internal storage and external I/O devices. Asynchronous data
transfer can be achieved either by the strobe or by the handshaking principle.

18.2 Peripheral Devices

Efficient communication between the CPU and the outside world
Gets inputs from external sources
Keyboard, mouse, touch screen, barcode reader, modem, NIC, microphone,
disk, tape
Sends outputs
Monitor, NIC, printer, modem, speaker, disk, tape
Peripheral devices are extremely slow compared to the CPU

Large amounts of data are prepared in advance for transfer to the CPU
The results of programs are transferred to a high-speed disk and then to a
peripheral such as a printer
On-line devices
Devices under direct computer control
Read/write information to memory on CPU command
Peripheral
Devices attached to the computer
Electromechanical/electromagnetic
Monitor/Keyboard
Printer
Magnetic tape
Magnetic disk

Video monitors are the most commonly used peripherals. They consist of a
keyboard as the input device and a display unit as the output device.
A printer provides a permanent record on paper of computer output data or text.
Magnetic tapes are used for storing files of data.
A magnetic disk has high-speed rotational surfaces coated with magnetic
material. Access is achieved by moving a read-write mechanism to a track on
the magnetized surface.

Table 18.0 Peripheral Examples



ASCII Alphanumeric Characters


American Standard Code for Information Interchange
128 characters
94 printable
26 Uppercase, 26 Lowercase, 10 numeric, 32 special (%, *)
34 non-printable (control)
Abbreviated names
Format characters (BS - backspace, CR - carriage
return)
Information separators (RS - record separators, FS - file
separators)
Communication controls (STX - start of text, ETX - end
of text)
Computer uses 8 bits for a character
Additional bit
As the parity for character
For other special characters (Italic, Greek)

18.2.1 Input - Output Interface

Transfers information between internal storage and external peripherals

Resolves the differences between the CPU and peripherals
Signal value conversion
Transfer rate matching
Data code format conversion
Control of operating modes (so they do not disturb each other)
Interfaces supervise and synchronize transfers
An interface lies between the processor bus and the peripheral device (or its
controller)
The I/O bus consists of
Data lines
Control lines
Address lines
Each peripheral is associated with its own interface unit, which
Decodes the address and control signals received
Handles interrupts for the device
Provides signals for the device
Synchronizes and supervises the data flow

18.2.1.1 I/O Bus and Interface Modules

Each interface has its own address

The processor places an address on the address lines
The corresponding interface responds while the others are deactivated
At the same time, the processor provides a function code on the control lines
The interface executes the function on its peripheral
There are 4 main command types:
Control, status, data output, and data input
A control command is used to activate the peripheral

A status command is used to test various status conditions in the


interface and the peripheral.
A data output command causes the interface to respond by transferring
data from the bus into one of its registers.
A data input command is the opposite: the interface receives an item of data from
the peripheral and places it into its register for the CPU to read.

Figure 18.0 I/O Bus and Interface Modules

18.2.1.2 I/O versus Memory Bus

There are 3 ways to communicate with I/O and memory

Use two separate buses
Uses a separate I/O processor (IOP)
Memory communicates with both the IOP and the CPU via the memory bus
The IOP has a separate set of data, address, and control lines to the
peripheral interfaces
Use a common bus but separate control lines
Use a common bus and common control lines

18.2.1.3 Isolated versus Memory Mapped I/O

Both use a common bus for data transfer

Isolated I/O
Uses separate memory and I/O read/write lines
Has a separate I/O instruction set
Has its own address spaces for memory and I/O
More complex, but highly flexible
Memory-mapped I/O
Uses the same address space as memory
Only one set of read/write signals

Interface registers are considered part of the memory system

This reduces the memory address space available
Uses the same instruction set

Example of I/O Interface (I/O Port)

The interface communicates with the CPU through the data bus

CS and RS lines determine the interface address
A set of 4 registers communicates directly with the device
I/O data can be transferred through port A or port B
The interface can use bidirectional lines when it is connected to an
input-only device (character reader)
output-only device (printer)
or both, but not at the same time (hard disk)
Commands are issued by writing a command word to the control register
Status is received by reading the status register
Data transfer takes place via the port A and port B registers
The function code of the I/O bus is not used
CS RS1 RS0 Register selected
0 X X None: data bus in high impedance
1 0 0 Port A register
1 0 1 Port B register
1 1 0 Control register
1 1 1 Status register

Figure 18.1 Example of I/O Interface Unit



18.2.2 Asynchronous Data Transfer

Internal operations in a computer are synchronized by an internal clock pulse
generator
Applied to all registers
All data transfers among registers happen at the same time, on the
occurrence of a clock pulse
But the CPU and I/O interfaces are independent
They run on their own clocks
If an I/O interface shares a common clock with the CPU, the two units are
said to be synchronous.
Otherwise they are asynchronous
Asynchronous data transfer
Requires control signals to indicate the time at which transmission occurs
The main methods of indicating data transfer time are
Strobe - a signal given by one unit to indicate the transfer time
Handshaking - transfer by agreement
Data transfer with a control signal indicating the presence of
data
The data receiver sends an acknowledgement of receipt of the data
Timing diagrams are commonly used to show the relationships between control
signals
The sequence of control signals depends on whether the transfer is initiated by the
source or the destination

18.2.2.1 Strobe Control

Uses a single control line to time each transfer

The strobe is activated either by the source or by the destination
The strobe indicates when there is valid data on the data bus
Strobes are generally activated by clock signals
The CPU is always in control of the transfer (i.e. the strobe always comes from
the CPU)
This method is mainly applicable to memory read/write operations
Most I/O operations use handshaking

Source Initiated Data Transfer

The source places data on the bus

After a brief delay to let the data settle on the bus,
the source activates the strobe pulse
The destination then reads the data into an internal register (often on the
falling edge of the strobe)
The source removes the data after a brief delay (not strictly necessary)


Figure 18.2 Source Initiated Data Transfer

Figure 18.3 Timing Diagram

Destination Initiated Transfer

Initiated by the destination
The destination activates the strobe
The source places data on the bus
and keeps the data there until it is accepted by the destination
The destination reads the data into a register (generally at the falling edge of
the strobe)
The destination then disables the strobe
The source removes the data after a predetermined time

Figure 18.4 Destination Initiated Data Transfer

Figure 18.5 Timing Diagram


18.2.2.2 Handshaking

Strobe disadvantage
In source initiation - the source doesn't know whether the destination got the data
In destination initiation - the destination doesn't know whether the source has
placed the data on the bus
Handshaking introduce a reply method to solve this problem

Two-Wire Handshaking

1st control line
Same direction as the data flow
Used by the source
Indicates whether it has valid data
2nd control line
From destination to source
Used by the destination
Indicates whether it can accept data
The sequence of control signals used depends on the unit that initiates the transfer
If a fault occurs at one end, a timeout is used to detect the error
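The two-wire exchange can be traced step by step in software. This is a sequential sketch (function and signal names are illustrative); in real hardware the two units run concurrently and each waits on the other's control line.

```python
def source_initiated_handshake(word):
    """Trace the four events of a source-initiated two-wire handshake.
    The `bus` dict stands in for the data lines and two control wires."""
    bus = {}
    trace = []
    # 1. Source places data on the bus and raises data_valid (1st wire)
    bus["data"], bus["data_valid"] = word, 1
    trace.append("source: data on bus, data_valid = 1")
    # 2. Destination latches the word and raises data_accepted (2nd wire)
    received = bus["data"]
    bus["data_accepted"] = 1
    trace.append("destination: word latched, data_accepted = 1")
    # 3. Source sees data_accepted and drops data_valid (data may be removed)
    bus["data_valid"] = 0
    trace.append("source: data_valid = 0")
    # 4. Destination drops data_accepted; both lines idle, ready for next word
    bus["data_accepted"] = 0
    trace.append("destination: data_accepted = 0")
    return received, trace


word, events = source_initiated_handshake(0x41)
print(hex(word))     # 0x41
print(len(events))   # 4 events per transferred word
```

Because each unit only advances after seeing the other's control line change, neither side is left wondering whether the transfer happened, which is exactly the weakness of the plain strobe that handshaking removes.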

Figure 18.6 Source Initiated transfer using Handshaking

Figure 18.7 Destination Initiated Transfer using Handshaking



18.2.2.3 Asynchronous Serial Transfer

Parallel transmission
Each bit of the message has its own path
The total message is transmitted at the same time
An N-bit message requires N conduction paths
Faster / more expensive
Used for short-distance transmission
Serial transmission
Each bit in the message is sent in sequence, one at a time
Uses only one pair of conductors (or one conductor with a common
ground)
Slower / cheaper
Used for long-distance transmission
Synchronous
Uses a common clock frequency
Transmits bits continuously
For long-distance transmission
Uses separate clocks with the same frequency
Keeps the clocks in step via synchronization signals sent
periodically
Periodic synchronization signals must be sent even when there is
no data to transmit
Asynchronous
Transmits only when data is available to transmit
Otherwise the line is kept idle
Uses start and stop bits at both ends of the character code
The transmission line rests at state 1 while idle
The start bit is always 0
The stop bit can be one or more 1s

Transmission Rules

When a character is not being sent, the line stays in the 1 state

The start of transmission of a character is signalled by the start bit (always 0)
The character bits always follow the start bit
After the last character bit, a stop bit is detected when the line returns to the
1 state for at least one bit time

Figure 18.8 Asynchronous Serial Transmission



Working Principle

Using the transmission rules, the receiver detects the start bit when the line goes
from 1 to 0
The receiver knows
o the bit transmission rate
o the number of bits in a character
After a character transmission, the line stays in the 1 state for at least one or
two bit times so that both transmitter and receiver can resynchronize
Ex: at a transmission rate of 10 characters/s (1 start bit, 8 information bits and
2 stop bits), (1 + 8 + 2) * 10 = 110 bits/s, i.e. a baud rate of 110 baud
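The framing described above is easy to sketch in software. The following example (function name is illustrative) builds the 11-bit frame for one character, 1 start bit, 8 data bits sent LSB first, and 2 stop bits, and reproduces the 110-baud arithmetic:

```python
def frame_character(char, data_bits=8, stop_bits=2):
    """Frame one character for asynchronous serial transmission:
    one start bit (0), LSB-first data bits, then stop bits (1)."""
    code = ord(char)
    bits = [0]                                           # start bit, always 0
    bits += [(code >> i) & 1 for i in range(data_bits)]  # data bits, LSB first
    bits += [1] * stop_bits                              # stop bits, always 1
    return bits


frame = frame_character("A")          # 'A' = 0x41 = 0100 0001
print(frame)                          # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1]
print(len(frame))                     # 1 + 8 + 2 = 11 bits per character
print(10 * len(frame))                # 10 characters/s -> 110 bits/s (110 baud)
```

Note that the idle line sits at 1, so the 1-to-0 edge of the start bit is what tells the receiver a new character is beginning.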

Definitions

Baud rate - the rate at which serial information is transmitted, equivalent to
the data transfer rate in bits per second
UART - Universal Asynchronous Receiver-Transmitter (asynchronous
communication interface)
An interface which accepts an 8-bit character code from a computer and sends the
corresponding 11-bit serial code to the device, or performs the reverse function

Figure 18.9 Block Diagram of a Typical Asynchronous communication Interface

18.2.2.4 Asynchronous Communication Interface

Can function as both transmitter and receiver


Initial mode is setup by control byte loaded to control register
CPU load and retrieve data via interface registers while shift registers are
used for data serialization

The register selection function is shown below

Operation of Asynchronous Communication Interface

Started by the CPU sending a byte to the control register specifying

the mode of operation
the baud rate to use
the number of bits in each character
the number of stop bits to append
whether to use parity checking

Status Register

Two flag bits
o Transmit register is empty
o Receiver register is full

Operation of the Transmitter

CPU reads status register


Checks the flag to know whether the transmit register is empty
If empty, the CPU transfers a character to the transmit register and the interface
marks the register full
The first bit of the shift register is set to 0 (start bit), the character is
transferred there, and the appropriate number of stop bits is appended
The transmit register is then marked empty
The character is transmitted one bit at a time at the specified baud rate
The CPU can load another character after checking the flag
This is a double-buffered interface, since a new character can be loaded as soon
as the previous one starts transmission

Operation of the Receiver

The receive data input line is in state 1 when the line is idle

The receiver controller monitors the input line for the occurrence of a 0
Once the start bit is detected, the incoming bits of the character are shifted into
the shift register at the prescribed baud rate
The interface then checks the parity and stop bits
The character, without its start and stop bits, is transferred in parallel to the
receiver register
A flag in the status register is set to indicate that the receiver register is full
The CPU checks the flag and, if data is available, reads the data and clears the
receiver-register-full flag

Possible receiving errors

Parity error - failure in the parity bit check
Framing error - invalid stop bits
Overrun error - the next character is written before the CPU has read the
previous one
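The frame-level checks above can be demonstrated on the receiving side. This sketch (an illustration; function name is assumed) deframes an 11-bit character and reports a framing or parity error when the frame is malformed:

```python
def deframe(bits, data_bits=8, stop_bits=2, even_parity=False):
    """Recover a character from a received asynchronous serial frame,
    checking the start bit, stop bits, and (optionally) even parity."""
    if bits[0] != 0:
        raise ValueError("no start bit")
    data = bits[1:1 + data_bits]
    stops = bits[1 + data_bits:1 + data_bits + stop_bits]
    if any(b != 1 for b in stops):
        raise ValueError("framing error: invalid stop bits")
    if even_parity and sum(data) % 2 != 0:
        raise ValueError("parity error")
    code = sum(bit << i for i, bit in enumerate(data))  # data bits are LSB first
    return chr(code)


# 'A' = 0x41, sent LSB first: 1 start + 8 data + 2 stop bits
good = [0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1]
print(deframe(good))                  # A
bad = good[:-1] + [0]                 # corrupt the final stop bit
try:
    deframe(bad)
except ValueError as e:
    print(e)                          # framing error: invalid stop bits
```

An overrun error is not visible at this level: it occurs in the interface when a second character arrives before the CPU has read the first from the receiver register.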

18.3 Let us Sum Up

Various peripheral devices have been discussed with a tabular representation.
A computer system includes special hardware components between the CPU and
peripherals to supervise and synchronize all input and output transfers; these
components are called interface units. The distinction between a memory transfer and
an I/O transfer has been discussed, and various asynchronous data transfer modes
have been explained.

18.4 Lesson-end Activities

1. Write short notes on Peripheral Devices.


2. Explain in detail Input - Output Interface
3. What is meant by Asynchronous Data Transfer? Explain Asynchronous Serial
Transfer and Asynchronous Communication Interface.
4. Explain Strobe Control and Handshaking.

18.5 Points for discussions

Baud rate - the rate at which serial information is transmitted, equivalent to
the data transfer rate in bits per second
UART - Universal Asynchronous Receiver-Transmitter (asynchronous
communication interface)

18.6 References

Computer System Architecture M.Morris Mano



Lesson 19 :Modes of Transfer, Priority Interrupt

Contents:

19.0 Aims and Objectives


19.1 Introduction
19.2 Modes of Transfer
19.2.1 Programmed I/O
19.2.2 Interrupt Initiated I/O
19.2.3 Priority Interrupt
19.2.3.1 Polling
19.2.3.2 Hardware Priority Interrupts Unit
19.2.3.2.1 Daisy Chain Priority Interrupt
19.2.3.2.2 Parallel Priority Interrupt
19.2.3.3 Priority Encoder
19.2.2.4 Interrupt Cycle
19.3 Let us Sum Up
19.4 Lesson-end Activities
19.5 Points for discussions
19.6 References

19.0 Aims and Objectives

The main aim of this lesson is to learn various modes of transfer between central
computer and I/O devices and various priority interrupts and servicing methods.

19.1 Introduction

The various modes of transfer are programmed I/O, interrupt-initiated I/O, and DMA.
Programmed I/O operations are the result of I/O instructions written in the computer
program. An interrupt is initiated when the data for the device is ready, and in
DMA the interface transfers data into and out of the memory unit directly through
the memory bus.

19.2 Modes of Transfer

Binary information received from an external device is usually stored in memory for
later processing. The CPU merely executes instructions and accepts data temporarily
from I/O devices, but the ultimate source or destination is the memory unit.
Data received from input devices is stored in memory
Data sent to output devices comes from memory
I/O handling modes
Programmed I/O
Interrupt-initiated I/O
Direct Memory Access (DMA)

19.2.1 Programmed I/O

I/O instructions are executed according to a program in the CPU

An I/O instruction transfers data to and from CPU registers
A memory load instruction is then used to store the data in memory
Other instructions are used to verify the data and count the number of words
transferred
Constant I/O monitoring is required of the CPU
The CPU stays in a program loop until the I/O unit indicates that data is ready
This wastes CPU time

Programmed I/O Input Device

Figure 19.1 Data Transfer I/O Device to CPU

The I/O device and the interface use handshaking for the data transfer

Once data is available in the data register, the interface sets a flag bit (F)
indicating data availability
The interface does not reset the data accepted line until the CPU reads the data
and clears the flag
The CPU needs three instructions for each byte transferred:
Read the status register
Check the flag bit
Read the data register when data is available
Transfers can be done in blocks for efficiency
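The three-instruction loop above is the classic programmed-I/O busy-wait. A minimal sketch (the interface object and its methods are hypothetical stand-ins for the status and data registers):

```python
def programmed_io_read(interface, count):
    """Programmed-I/O loop: for each byte, the CPU busy-waits on the
    status flag, then reads the data register (which clears the flag)."""
    received = []
    for _ in range(count):
        while not interface.check_flag():       # read status, test flag bit
            pass                                # CPU wastes time in this loop
        received.append(interface.read_data())  # read data register
    return received


class FakeInterface:
    """Stand-in device interface whose flag is set whenever data remains."""
    def __init__(self, data):
        self.data = list(data)

    def check_flag(self):
        return bool(self.data)

    def read_data(self):
        return self.data.pop(0)


print(programmed_io_read(FakeInterface([7, 8, 9]), 3))   # [7, 8, 9]
```

The busy-wait `while` loop is exactly where the CPU time is wasted: every iteration reads status and tests the flag, doing no useful work until the device is ready. This is the inefficiency that interrupt-initiated I/O removes.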

Figure 19.2 Flow Chart For CPU Program To Input Data

Useful small low-speed computers dedicate monitor device continuously


148

The difference in transfer rates between the CPU and the I/O device makes
programmed I/O inefficient


o Example: assume the CPU can read and check the flag in 1 microsecond
o Device transfer rate = 100 characters/s (one character every 10,000 microseconds)
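The waste of CPU time can be made concrete with the figures above. A minimal sketch, using only the example's timing values (not a general rule):

```python
# Busy-wait cost in programmed I/O, using the example figures above:
# the CPU reads and checks the flag in 1 microsecond, and the device
# delivers 100 characters per second (one every 10,000 microseconds).
check_time_us = 1
char_interval_us = 1_000_000 // 100      # 10,000 us between characters

# Flag checks the CPU performs per character; all but the last are wasted.
checks_per_char = char_interval_us // check_time_us
wasted_fraction = (checks_per_char - 1) / checks_per_char

print(checks_per_char)            # 10000
print(f"{wasted_fraction:.2%}")   # 99.99%
```

Nearly all of the CPU's time is spent in the wait loop, which is exactly the inefficiency that interrupt-initiated I/O removes.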

19.2.2 Interrupt Initiated I/O

Instead of continuous monitoring by the CPU, the interface informs the CPU when
data is ready


This uses interrupts
The CPU is diverted from the current program to take care of the data transfer
It saves the return address from the program counter on the stack
Control then branches to the service routine
Non-vectored interrupts
Vectored interrupts
After completing the I/O transfer, control returns to the previous program
In addition to hardware, the computer must have software routines to control the
interface and transfer the data
Example control commands: activate peripheral, check data state, stop tape, print
character, issue interrupt
I/O control software is fairly complex
Standard I/O routines are provided by the manufacturer and included with the OS
I/O routines are available as OS procedures, so the programmer does not need to
deal with assembly-level details

19.2.3 Priority Interrupt

Generally an I/O data transfer is initiated by the CPU


But the device must be ready first
Device readiness for data transfer is identified by the interrupt signal
How the CPU responds to an interrupt request:
Push the return address onto the memory stack
Branch to the interrupt service routine
A priority interrupt system
Deals with simultaneous interrupts and determines which one to serve
first (critical situations / fast I/O)
Determines the conditions under which interrupting is allowed while
another interrupt service routine is executing

19.2.3.1 Polling

A priority identification mechanism implemented in software


All interrupts share a common branch address
The service program then polls the interrupt devices in sequence
The order in which it polls determines the priority
The higher priority device is tested first
If its interrupt signal is on, the device is serviced
Then the next device is tested
This proceeds until the last device
Disadvantage: when there are many interrupts, the polling time may exceed
the time available to service the I/O device
Solution: a hardware priority interrupt unit
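The polling order described above can be sketched as a simple loop; the device numbering and boolean request lines here are illustrative:

```python
def poll(interrupt_lines):
    """Return the index of the highest-priority device requesting service,
    or None if no device is interrupting.  Device 0 is polled first, so it
    has the highest priority; the polling order fixes the priority."""
    for device, requesting in enumerate(interrupt_lines):
        if requesting:
            return device
    return None

# Devices 2 and 3 are both interrupting; polling serves device 2 first.
print(poll([False, False, True, True]))  # 2
```

The cost is visible in the loop itself: with many devices, every interrupt pays for the whole scan up to the requesting device, which is why a hardware priority unit is preferred when interrupts are frequent.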

19.2.3.2 Hardware Priority Interrupt Unit

Accepts interrupts from many sources


Determines which one has the highest priority
Issues the interrupt to the CPU accordingly
Further, each interrupt source has its own interrupt vector to access its own
service routine directly
No polling is required
Two common hardware implementations of the priority function:
Serial connection (daisy chain)
Parallel connection

19.2.3.2.1 Daisy Chain Priority Interrupt

Figure 19.3 Daisy Chain Priority Interrupt

Serial connection of all interrupt devices


The highest-priority device is placed first
The interrupt request line is common (wired logic)
The CPU responds to the interrupt via the interrupt acknowledge line
If device 1 has a pending interrupt, it disables its priority-out (PO) output and
places its own interrupt vector address on the data bus; otherwise it passes the
acknowledge to the next device through PO
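The pass-or-claim behaviour of the chain can be sketched as follows; the vector addresses are made-up values used only for illustration:

```python
def daisy_chain(pending, vectors):
    """Propagate the CPU's acknowledge along the chain.  Each device either
    claims the acknowledge (placing its vector address on the bus and
    holding PO at 0 so later devices never see it) or passes the
    acknowledge on.  Devices appear in priority order, highest first."""
    pi = 1                           # acknowledge enters the first device's PI
    for device, has_interrupt in enumerate(pending):
        if pi and has_interrupt:
            return vectors[device]   # device claims; PO to the rest is 0
        # PO = PI when the device has no pending interrupt: pass it along
    return None

vads = [0x00C8, 0x00D0, 0x00D8]      # hypothetical vector addresses
print(hex(daisy_chain([False, True, True], vads)))  # device 1 wins: 0xd0
```

Because the acknowledge physically travels through each device in turn, priority is fixed by wiring order and needs no encoder logic.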

19.2.3.2.2 Parallel Priority Interrupt

Figure 19.4 Priority Interrupt Hardware

Uses an interrupt register in which each device sets its own bit


Uses a mask register to change the sequence of servicing
A priority encoder generates the interrupt vector

A CPU interrupt is generated only when the interrupt enable (IEN) and interrupt
status (IST) bits are both set


IEN can be set by the program
With the INTACK signal from the CPU, the vector address (VAD) is placed on the bus

19.2.3.3 Priority Encoder

Implements the priority function

Inputs           Outputs
I0 I1 I2 I3    x  y  IST      Boolean functions
 1  x  x  x    0  0   1       x   = I0' I1'
 0  1  x  x    0  1   1       y   = I0' I1 + I0' I2'
 0  0  1  x    1  0   1       IST = I0 + I1 + I2 + I3
 0  0  0  1    1  1   1
 0  0  0  0    x  x   0

Table 19.1 Priority Encoder Truth Table
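The Boolean functions of Table 19.1 can be checked directly with a small sketch; Python is used here only as a convenient notation for the logic:

```python
def priority_encoder(i0, i1, i2, i3):
    """Four-input priority encoder from Table 19.1 (I0 highest priority).
    Returns (x, y, IST); x and y carry no meaning when IST is 0."""
    x = (not i0) and (not i1)                         # x = I0'I1'
    y = ((not i0) and i1) or ((not i0) and (not i2))  # y = I0'I1 + I0'I2'
    ist = i0 or i1 or i2 or i3                        # IST = I0+I1+I2+I3
    return int(x), int(y), int(ist)

print(priority_encoder(1, 0, 1, 0))  # (0, 0, 1): device 0 selected
print(priority_encoder(0, 0, 1, 1))  # (1, 0, 1): device 2 selected
```

Note that the encoder ignores the lower-priority inputs once a higher-priority line is active, exactly as the don't-care entries in the table indicate.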

19.2.3.4 Interrupt Cycle

IEN is used by the program to enable or disable interrupts while running


At the end of each instruction cycle the CPU checks IEN and, if it is set, checks
IST
Sequence of micro-operations on receiving an interrupt:
SP <- SP - 1
M[SP] <- PC
INTACK <- 1
PC <- VAD
IEN <- 0
Go to fetch the next instruction
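The micro-operation sequence above can be simulated with a toy CPU state; the register widths and addresses below are illustrative:

```python
def enter_interrupt(state, vad):
    """Micro-operations performed when IEN and IST are both set.  `state`
    is a dict holding SP, PC, IEN, INTACK and a small memory (dict)."""
    state["SP"] -= 1                            # SP <- SP - 1
    state["memory"][state["SP"]] = state["PC"]  # M[SP] <- PC (save return)
    state["INTACK"] = 1                         # INTACK <- 1 (acknowledge)
    state["PC"] = vad                           # PC <- VAD (vector address)
    state["IEN"] = 0                            # IEN <- 0 (disable interrupts)
    return state

cpu = {"SP": 0x100, "PC": 0x0042, "IEN": 1, "INTACK": 0, "memory": {}}
cpu = enter_interrupt(cpu, vad=0x00C8)
print(hex(cpu["PC"]), hex(cpu["memory"][0x0FF]))  # 0xc8 0x42
```

Clearing IEN as the last step is what prevents a second interrupt from being taken before the service routine decides (by setting IEN again) that nesting is allowed.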

Software Routines

Priority interrupt system is a combination of Hardware and Software


techniques
The computer has software routines to service interrupt requests and to control the
interrupt hardware registers

Figure 19.5 Program stored in memory for servicing interrupts


Initial
Clear lower-level mask register bits
Clear interrupt status bit IST
Save contents of processor registers
Set interrupt enable bit IEN
Proceed with service routine
Final
Clear interrupt enable bit IEN
Restore contents of processor registers
Clear the bit in the interrupt register belonging to the source that has been
serviced
Set lower-level priority bits in the mask register
Restore return address into PC and set IEN
19.3 Let us Sum Up

Various modes of transfer have been discussed in detail with flowcharts and examples.
The concept of priority interrupts and the various methods of servicing the interrupt
have been discussed.

19.4 Lesson-end Activities


1. Discuss Programmed I/O and Interrupt Initiated I/O.
2. Explain the types of Priority Interrupt.
3. Discuss Priority Encoder
4. Explain Interrupt Cycle.

19.5 Points for discussions

I/O handling modes


Programmed I/O
Interrupt-initiated I/O
Direct Memory Access (DMA)
Priority Interrupt
Polling
Daisy chain priority
Parallel priority interrupt

19.6 References

Computer System Architecture M.Morris Mano



Lesson 20 : Direct Memory Access, Input - Output Processor (IOP)


Contents:

20.0 Aims and Objectives


20.1 Introduction
20.2 Direct Memory Access
20.2.1 DMA Controller
20.2.2 DMA Transfer
20.2.3 Input-Output Processor (IOP)
20.3 Let us Sum Up
20.4 Lesson-end Activities
20.5 Points for discussions
20.6 References

20.0 Aims and Objectives

The main aim of this lesson is to learn the principles of DMA and IO processor.

20.1 Introduction

DMA is a technique that lets the peripheral device manage the memory buses directly
to improve the speed of transfer. The DMA controller is an Interface with CPU and
I/O devices. DMA can have even more than one channel and is commonly used in
devices like magnetic disks and screen displays. The IOP is similar to the CPU except
that it handles only I/O processing. Unlike the DMA controller, which is completely set
up by the CPU, the IOP can fetch and execute its own instructions.

20.2 Direct Memory Access

The transfer of data between a fast storage device such as magnetic disk and
memory is often limited by the speed of the CPU. Removing the CPU from the path and
letting the peripheral device manage the memory buses directly improves the speed of
transfer. This transfer technique is called direct memory access (DMA).

The two control signals in the CPU that facilitate the DMA transfer are
a. Bus request
b. Bus grant

Bus Request (BR) input is used by the DMA controller to request the CPU to
relinquish control of the buses.
When this input is active, the CPU terminates the execution of the current instruction
and place the address bus, the data bus, and the read and write lines into a high
impedance state.

Bus grant (BG): the CPU activates the bus grant output to inform the external DMA
controller that the buses are in the high impedance state.

Burst transfer: when the DMA takes control of the bus system, it communicates
directly with the memory. The transfer can be made in several ways. In DMA burst
transfer, a block sequence consisting of a number of memory words is transferred in a
continuous burst while the DMA controller is master of the memory buses. This mode
of transfer is needed for fast devices such as magnetic disks.

Cycle stealing: this allows the DMA controller to transfer one data word at a time,
after which it must return control of the buses to the CPU. The CPU merely delays its
operation for one memory cycle to allow the direct memory I/O transfer to steal one
memory cycle.

[Signals at the CPU: Bus Request (BR) input, Bus Grant (BG) output, address bus,
data bus, and read (RD) and write (WR) lines]

Figure 20.1 CPU Signals for DMA Transfer

20.2.1 DMA Controller

The DMA controller needs the usual circuits of an interface to communicate with the
CPU and I/O device.

Figure 20.2 Block diagram of DMA controller

It is an Interface with CPU and I/O devices


Further DMA controller has
Address Register used for direct communication with memory
Set of Address Lines used for direct communication with memory
Word Count Register specifies the number of words to be transferred
Control register Specifies the mode of transfer
Data transfer is done directly between device and memory under DMA control
RD/WR signals are bidirectional

When BG is set, the DMA controller can use the buses for memory read/write


Otherwise the CPU uses DS and RS to write to and read from the DMA registers
The DMA controller uses Request and Acknowledge signals with handshaking to
communicate with external peripheral devices
The CPU treats all DMA registers as I/O interface registers that can be read and
written under program control
The DMA controller is initialized by the CPU
It then transfers an entire block of data between memory and the peripheral
The CPU sends the following information on DMA initialization:
Starting address for the memory read or write
Word count: the number of words to be transferred
Control to specify the mode of transfer
Control to start the transfer
The CPU communicates with the DMA controller after transfer initialization only if
It receives an interrupt
It wants to know how many words have been transferred
The CPU communicates with the DMA controller through the address and data buses
as with any other interface unit
The DMA controller has its own address, which activates DS and RS

CPU initialize DMA through data bus

DMA can start data transfer between memory and peripheral device as it gets the
control command
The address register and address lines are used for direct communication with the
memory. The registers are selected by the CPU through the address bus by enabling
the DS(DMA select) and RS(register select) inputs.
The RD(read) and WR(write) inputs are bidirectional. When the BG input is 0, the
CPU can communicate with the DMA registers through the data bus to read from or
to write to the DMA registers. When the BG=1, the CPU has relinquished
the buses and the DMA can communicate with the memory by specifying an address
in the address bus and activating the RD and WR control. The CPU initializes the
DMA and then it starts and transfers data between memory and peripheral until an
entire block is transferred.

The CPU initializes the DMA by sending the following information through the data
bus:
1. The starting address of the memory blocks where data are available or where
data are to be stored.
2. The word count, which is the number of words in the memory block
3. A control to specify the mode of transfer, such as read or write
4. A control to start the DMA transfer
The starting address is stored in the address register, the word count in the word
count register, and the control information in the control register. Once the DMA
controller is initialized, the CPU stops communicating with it.

Figure 20.3 DMA Transfer in a Computer System

20.2.2 DMA Transfer

The peripheral device sends a DMA request


The DMA controller activates BR
The CPU finishes the current bus cycle and grants the buses by activating BG
The DMA controller puts the current address on the address bus, activates RD or WR
accordingly, and acknowledges the peripheral
The peripheral then puts data on (or reads data from) the bus
Thus the peripheral directly reads or writes memory
For each word transferred, the DMA controller increments the address register and
decrements the word count register
If the word count is not zero, the DMA controller checks the request line from the
peripheral
If it is active (fast devices), the next transfer is initiated immediately
Otherwise BR is disabled
If the word count is zero:
The DMA controller stops the transfer, disables BR and informs the CPU of
the termination of the data transfer
A zero word count indicates a successful data transfer
A DMA controller can have more than one channel
DMA is commonly used with devices such as magnetic disks and screen displays.
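The increment/decrement bookkeeping of a DMA block transfer can be sketched as follows; the addresses and data values are illustrative:

```python
def dma_transfer(memory, start_addr, words):
    """Sketch of the DMA word-transfer loop: for each word the controller
    writes to the current address, increments the address register and
    decrements the word-count register.  A final count of zero signals
    successful completion to the CPU."""
    addr, count = start_addr, len(words)
    for word in words:
        memory[addr] = word   # one stolen memory cycle per word
        addr += 1             # address register incremented
        count -= 1            # word-count register decremented
    return count              # zero indicates the whole block transferred

mem = {}
remaining = dma_transfer(mem, 0x2000, [10, 20, 30])
print(remaining, mem[0x2002])  # 0 30
```

In cycle-stealing mode each iteration of this loop corresponds to one memory cycle taken from the CPU; in burst mode the whole loop runs while the DMA controller holds the buses.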

20.2.3 Input-Output Processor (IOP)

Instead of each interface communicating with the CPU


One or more external processors assigned to communicate directly with I/O
devices
IOP has both direct memory access and I/O communication capabilities
The IOP relieves the CPU of the housekeeping chores involved in I/O
transfer
Processors handling serial communication with remote terminals are called
Data Communication Processors (DCPs)
The IOP is similar to the CPU except that it handles only I/O processing

Unlike the DMA controller, which is completely set up by the CPU, the IOP can fetch
and execute its own instructions
In addition, the IOP can perform other tasks such as arithmetic, logic, branching and
code translation

Figure 20.4 Computer with I/O Processor

Responsibilities of IOP

The memory, at the centre, can communicate with both processors via DMA
The CPU is responsible for data processing and computational tasks
The IOP transfers data between peripheral devices and memory
The CPU is usually assigned the task of initiating the I/O program; thereafter the IOP
operates independently
The IOP takes care of data format differences and structure mapping between
memory and the various I/O devices
Communication between the CPU and the IOP is similar to programmed I/O
Communication between memory and the IOP is done as DMA
IOPs can be independent or slave processors, depending on the sophistication of
the system
Instructions for the IOP are generally referred to as commands

CPU IOP Communication

20.3 Let us Sum Up


The transfer of data between a fast storage device such as a magnetic disk and memory
is often limited by the speed of the CPU. In DMA transfer the peripheral device
manages the memory buses directly. The DMA transfer and DMA controller
functions have been discussed in detail. The IOP is similar to the CPU except that it is
designed to handle the details of I/O processing.

20.4 Lesson-end Activities

1. Explain how the DMA controller is used to speed up the data transfer rate.
2. What is an IOP? How is it different from the CPU? Explain its functionalities.

20.5 Points for discussions

Bus Request (BR) input is used by the DMA controller to request the CPU to
relinquish control of the buses.
Bus grant (BG): the CPU activates the bus grant output to inform the external DMA
controller that the buses are in the high impedance state.
Burst transfer: In DMA burst transfer, a block sequence consisting of a number of
memory words is transferred in a continuous burst while the DMA controller is
master of the memory buses. This mode of transfer is needed for fast devices such as
magnetic disks.
Cycle stealing: This allows the DMA controller to transfer one data word at a time,
after which it must return control of the buses to the CPU. The CPU merely delays its
operation for one memory cycle to allow the direct memory I/O transfer to steal one
memory cycle.

20.6 References

Computer System Architecture M.Morris Mano



Lesson 21 : Memory Organization, Auxiliary Memory, Associative Memory,


Cache Memory, Virtual Memory

Contents:

21.0 Aims and Objectives


21.1 Introduction
21.2 Lesson Description
21.2 Memory Hierarchy
21.2.1 Auxiliary Memory
21.2.2 Associative Memory
21.2.3 Cache Memory
21.2.3.1 Mapping
21.2.3.1.1 Associative Mapping
21.2.3.1.2 Direct Mapping
21.2.3.1.3 Set-Associative Mapping
21.2.4 Virtual Memory
21.3 Let us Sum Up
21.4 Lesson-end Activities
21.5 Points for discussions
21.6 References

21.0 Aims and Objectives

The main objective of this lesson is to learn the memory hierarchy and the need
for expandable, fast memory to meet the needs of the user.

21.1 Introduction

The memory unit is an essential component of any digital computer for storing
programs and data. The memory unit that communicates directly with the CPU is
called main memory, and the backup storage devices are called auxiliary memory.
Cache memory is used to speed up processing performance. Virtual memory helps in
accommodating a very large program in memory.

21.2 Memory Hierarchy

Memory stores input for and output from the CPU as well as the instructions
that are followed by the CPU
the amount stored is measured in bits, bytes, kilobytes (KB, 10^3 bytes),
megabytes (MB, 10^6 bytes), gigabytes (GB, 10^9 bytes) and terabytes (TB, 10^12 bytes)
there are two kinds of memory:
main memory (or internal or primary memory) is essential for the operation of
the computer; all data and instructions must be in main memory before they
can be processed by the computer
o most costly memory
o in the form of microchips integrated with the computer's central
processor

o fastest access - any byte can be accessed equally rapidly (random
access, hence it is called RAM)
o temporary - since data and instructions are stored in main memory as
electrical voltages, power failures cause the loss of all data in main
memory
o ranges from several hundred Kbytes for typical PC to many Megabytes
for mainframes
secondary memory (or auxiliary memory or secondary storage) is used for
large, permanent or semi-permanent files
o GIS programs and data generally require very large amounts of storage
o data storage is covered after this overview of the components of
computers

[Hierarchy: magnetic tapes and magnetic disks (auxiliary memory) connect through
the I/O processor to main memory; the cache memory sits between main memory and
the CPU]

Figure 21.1 Memory Hierarchy

Storage Media

computers can use several different media for storing information


o needed to store both raw data and programs
media differ by
o storage capacity
o speed of access
o permanency of storage
o mode of access
o cost

Fixed Disks

most costly memory next to main/internal memory is fixed disk memory


ranges from 10 Megabytes for typical PC to hundreds of Gigabytes in large
"disk farms"
random access but slower than internal memory
permanent (i.e. does not disappear when power is turned off), though data can
be erased and modified

Dismountable Devices

dismountable devices can be removed for storage or shipping, include:



o floppy diskettes
up to 1.44 Megabytes for PC - random access
o magnetic tapes
tens of Megabytes for standard tape
access is sequential, not random
can take minutes to reach a particular set of data on the tape,
depending on where it is stored
o optical compact disks (CDs)
around 250 Megabytes per CD
random access, but the delay in reaching a given item of data
may be 1 second or more

Volumes

a volume is a single tape, CD, diskette or fixed disk, i.e. a physical unit of
storage

Files

a file is a logical collection of data - a table, document, program, map


many files can be stored on a single volume
files are given names
o the rules for naming files vary among types of systems
the computer operating system keeps track of files stored in a volume by using
a table called a directory
o files are identified in the directory by name, size, date of creation and
often type of contents
files are often organized into subdirectories so that the user can group files
under specific topics

21.2.1 Auxiliary Memory

The common auxiliary memory devices used in computer system are Magnetic Disks
and Tapes. The important characteristics of any device are
Access Mode
Access Time
Transfer Rate
Capacity
Cost
The average time required to reach a storage location in memory and obtain its
contents is called the access time.
Bits are recorded as magnetic spots on the surface as it passes a stationary mechanism
called a Write Head.
Stored Bits are detected by a change in magnetic field produced by a recorded spot on
the surface a it passes through a Read Head.

Magnetic Disk

Figure 21.2 Magnetic Disk

A magnetic disk is a circular plate constructed of metal or plastic and coated with
magnetizable material. Bits are stored on the surface of the plates, in concentric
circles called tracks. Tracks are divided into sectors.
A disk may have a single read/write head for the entire disk, or a separate read/write
head for every track on each surface.

A disk unit consists of platters, or plates, coated with magnetizable material and
mounted on a high-speed rotating spindle. The arm assembly moves the heads inward
or outward, and every disk surface has a R/W head which reads and writes
information on the disk. The time to access a given sector has three components:
1. Head movement from the current position to the desired cylinder: seek time (tens
of ms). The movement of the R/W head from one track to another is called the seek
time.
2. Disk rotation until the desired sector arrives under the head: rotational latency
(tens of ms)
3. Disk rotation while the sector passes under the head: data transfer time (< 1 ms)
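The three timing components add up to the access time for one sector. A small sketch with assumed, illustrative figures (the numbers are not from the text):

```python
# Illustrative numbers: average seek 8 ms, spindle speed 7200 RPM,
# transfer time 0.1 ms per sector.
seek_ms = 8.0
rpm = 7200
transfer_ms = 0.1

# Average rotational latency is half a revolution.
latency_ms = (60_000 / rpm) / 2        # 60,000 ms per minute of rotation
access_ms = seek_ms + latency_ms + transfer_ms
print(round(latency_ms, 3), round(access_ms, 3))  # 4.167 12.267
```

Note that seek time and rotational latency dominate: the actual data transfer is a small fraction of the total, which is why transferring large sequential blocks per access is so much more efficient.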

Magnetic Tape

The tape itself is plastic coated with a magnetic recording medium. Bits are recorded
as magnetic spots on the tape along several tracks. R/W heads are mounted one per
track so that data can be recorded and read as a sequence of characters.
Magnetic tapes can be stopped, or started to move forward or in reverse. Information
is recorded in blocks called records. Records may be of fixed or variable length.

The gap between two records is called the inter-record gap.

21.2.2 Associative Memory

Many data-processing applications require searching for items in a table stored in
memory. In general the search procedure involves choosing an address, reading the
content of that address, and comparing the information with the key item. The search
continues until it succeeds or a failure occurs.

The time required to find an item stored in memory can be reduced considerably if
the stored data can be identified by its content rather than by an address. A memory
unit accessed by content is called an associative memory or content addressable
memory.
It is uniquely suited for parallel searches.

Hardware Organization

It consists of a memory array and logic for m words with n bits per word.
The argument register A and key register K each have n bits, one for each bit of a word.
The match register M has m bits, one for each memory word. Each word in memory is
compared with the content of the argument register. After the matching process,
reading is accomplished.

The key register provides a mask for choosing a particular field or key in the
argument word. The entire argument is compared with each memory word if the key
register contains all 1s. Otherwise, only those bits in the argument that have 1s in
their corresponding position of the key register are compared.

[Block diagram: an argument register (A) and a key register (K), each of n bits, feed
an associative memory array of m words with n bits per word; read and write inputs
control the array, and the match register M holds one bit per word]

Figure 21.3 Block Diagram of Associative Memory

Example:
Argument (A)  101 111100
Key (K)       111 000000  (mask)
Word 1        100 111100  no match
Word 2        101 000001  match

The K and A registers hold the bits shown above. Only the three leftmost bits of A are
compared with the memory words, because K has 1s in those three positions. Word 2
matches.
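The masked comparison of Figure 21.3 can be expressed with bitwise operations; the bit patterns are those of the example above:

```python
def masked_match(argument, key, words):
    """Return the indices of words whose bits match the argument in every
    position where the key (mask) register holds a 1.  XOR finds the
    differing bits; AND with the key keeps only the bits being compared."""
    return [i for i, w in enumerate(words)
            if (w ^ argument) & key == 0]

A = 0b101111100        # argument register
K = 0b111000000        # key register: compare the three leftmost bits only
memory = [0b100111100,  # word 1 - leftmost bits 100, no match
          0b101000001]  # word 2 - leftmost bits 101, match
print(masked_match(A, K, memory))  # [1]
```

In the hardware every word is compared simultaneously, so the loop here stands in for m parallel match circuits, one per word.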

[Array of cells Cij, i = 1..m words, j = 1..n bits, with argument bits A1..An and key
bits K1..Kn along the top and match bits M1..Mm at the right]

Figure 21.4 Associative memory of m words, n cells per word

Read Operation

If more than one word in memory matches the unmasked argument field, all the
matched words will have 1s in the corresponding bit position of the match register. It
is then necessary to scan the bits of the match register one at a time. The matched
words are read in sequence by applying a read signal to each word line whose
corresponding Mi bit is 1.

Write Operation

An associative memory must have a write capability for storing the information to be
searched. Writing in an associative memory can take different forms, depending on
the application.

Figure 21.5 Match Logic for one word Associative Memory

One bit for each associative memory word


Set to 1 for an active word
Reset to 0 for a deleted word
Masked along with the argument word so that only
active words are compared

21.2.3 Cache Memory

Figure 21.6 Cache Memory Example

Locality of reference

Memory references at any given interval of time tend to be confined within a few
localized areas in memory

Cache memory
Fast, small memory with fastest access speed compared to other memory types

Hit ratio

Cache hits / (cache hits + cache misses)


When the CPU refers to memory and finds the word in the cache, it is said to produce
a HIT.
When the CPU refers to memory and does not find the word in the cache, it is said to
produce a MISS.
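The hit ratio, and the effective access time it implies, can be computed directly. The simple miss-cost model below (a miss costs one main-memory reference) and all the figures are assumptions for illustration, not from the text:

```python
def hit_ratio(hits, misses):
    """Hit ratio = cache hits / total memory references."""
    return hits / (hits + misses)

def average_access_time(h, t_cache_ns, t_main_ns):
    """Effective access time under a simple model: a hit costs one cache
    access, a miss costs one main-memory access."""
    return h * t_cache_ns + (1 - h) * t_main_ns

# E.g. 950 hits out of 1000 references; cache 20 ns, main memory 100 ns.
print(hit_ratio(950, 50))                              # 0.95
print(round(average_access_time(0.95, 20, 100), 3))    # 24.0
```

Even a modest miss rate dominates the average: here 5% of references account for more than a fifth of the total access time, which is why hit ratios close to 1 matter so much.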

21.2.3.1 Mapping

The transformation of data from main memory to cache memory is referred to as
mapping.

Examples of Cache Memory

Three types of mapping are :

Associative Mapping
Direct Mapping
Set Associative Mapping

21.2.3.1.1 Associative Mapping

Figure 21.7 Conceptual Implementation of Cache Memory

The above figure shows a conceptual implementation of a cache memory. This system is
called set associative because the cache is partitioned into distinct sets of blocks, and
each set contains a small, fixed number of blocks. The sets are represented by the rows
in the figure. In this case, the cache has N sets, and each set contains four blocks. When
an access occurs to this cache, the cache controller does not search the entire cache
looking for a match. Instead, the controller maps the address to a particular set of the
cache and searches only that set for a match.

If the block is in the cache, it is guaranteed to be in the set that is searched. Hence, if the
block is not in that set, the block is not present in the cache, and the cache controller
searches no further.

The most obvious way of relating cached data to the main memory address is to store
both the memory address and the data together in the cache. This is the fully associative
mapping approach. A fully associative cache requires the cache to be composed of
associative memory holding both the memory address and the data for each cached line.
The incoming memory address is simultaneously compared with all stored addresses
using the internal logic of the associative memory, as shown in the diagram of the
conceptual implementation. If a match is found, the corresponding data is read out.
Single words from anywhere within the main memory can be held in the cache, if the
associative part of the cache is capable of holding a full address.

Figure 21.8 Cache with fully associative mapping

21.2.3.1.2 Direct Mapping

The fully associative cache is expensive to implement because of requiring a


comparator with each cache location, effectively a special type of memory. In direct
mapping, the cache consists of normal high speed random access memory, and each
location in the cache holds the data, at an address in the cache given by the lower
significant bits of the main memory address. This enables the block to be selected
directly from the lower significant bits of the memory address. The remaining higher
significant bits of the address are stored in the cache with the data to complete the
identification of the cached data.

The address from the processor is divided into two fields, a tag and an index. The tag
consists of the higher significant bits of the address, which are stored with the data.
The index is the lower significant bits of the address used to address the cache.

Figure 21.9 Cache Memory with Direct Mapping

When the memory is referenced, the index is first used to access a word in the cache.
Then the tag stored in the accessed word is read and compared with the tag in the address.
If the two tags are the same, indicating that the word is the one required, access is made to
the addressed cache word. However, if the tags are not the same, indicating that the
required word is not in the cache, reference is made to the main memory to find it. For a
memory read operation, the word is then transferred into the cache where it is accessed. It
is possible to pass the information to the cache and the processor simultaneously, i.e., to
read-through the cache, on a miss. The cache location is altered for a write operation. The
main memory may be altered at the same time (write-through) or later.
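The tag/index split can be sketched as plain bit slicing; the field widths below are illustrative:

```python
def split_address(addr, index_bits, offset_bits=0):
    """Split a main-memory address into (tag, index) for a direct-mapped
    cache: the index is taken from the lower bits and selects the cache
    word; the remaining upper bits form the tag stored alongside the data."""
    index = (addr >> offset_bits) & ((1 << index_bits) - 1)
    tag = addr >> (offset_bits + index_bits)
    return tag, index

# 15-bit address with a 9-bit index (512-word cache) and a 6-bit tag.
tag, index = split_address(0b000001_000000011, index_bits=9)
print(tag, index)  # 1 3
```

On a reference, the cache word at `index` is read and its stored tag compared with `tag`; equality means a hit, inequality means the word at that index belongs to a different main-memory block.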

21.2.3.1.3 Set-Associative Mapping

In the direct scheme, all words stored in the cache must have different indices. The tags
may be the same or different. In the fully associative scheme, blocks can displace any
other block and can be placed anywhere, but fully associative memories are costly and
operate relatively slowly.
Set-associative mapping allows limited number of blocks, with the same index and
different tags, in the cache and can therefore be considered as a compromise between a
fully associative cache and a direct mapped cache. The cache is divided into "sets" of
blocks. A four-way set associative cache would have four blocks in each set. The number
of blocks in a set is known as the associativity or set size. Each block in each set has a
stored tag which, together with the index, completes the identification of the block. First,
the index of the address from the processor is used to access the set. Then, comparators
are used to compare all tags of the selected set with the incoming tag. If a match is found,
the corresponding location is accessed; otherwise, as before, an access to the main
memory is made.

Figure 21.10 Cache with set-associative mapping

The tag address bits are always chosen to be the most significant bits of the full address,
the block address bits are the next significant bits and the word/byte address bits form the
least significant bits as this spreads out consecutive main memory blocks throughout
consecutive sets in the cache. This addressing format is known as bit selection and is used
by all known systems. In a set-associative cache it would be possible to have the set
address bits as the most significant bits of the address and the block address bits as the
next significant, with the word within the block as the least significant bits, or with the
block address bits as the least significant bits and the word within the block as the middle
bits.
The association between the stored tags and the incoming tag is done using comparators
and can be shared for each associative search, and all the information, tags and data, can
be stored in ordinary random access memory. The number of comparators required in the
set-associative cache is given by the number of blocks in a set, not the number of blocks
in all, as in a fully associative memory. The set can be selected quickly and all the blocks
of the set can be read out simultaneously with the tags before waiting for the tag
comparisons to be made. After a tag has been identified, the corresponding block can be
selected.
The replacement algorithm for set-associative mapping need only consider the lines in
one set, as the choice of set is predetermined by the index in the address. Hence, with two
blocks in each set, for example, only one additional bit is necessary in each set to identify
the block to replace.
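A set-associative lookup can be sketched as follows; note that the controller searches only the selected set, never the whole cache (word addresses, no block offset, are an illustrative simplification):

```python
def set_associative_lookup(cache, addr, num_sets):
    """Look up `addr` in a set-associative cache modelled as a list of
    sets, each set a list of (tag, data) entries.  The index selects one
    set; comparators then check every tag in that set simultaneously."""
    index = addr % num_sets          # set selected by the index bits
    tag = addr // num_sets           # remaining bits form the tag
    for stored_tag, data in cache[index]:
        if stored_tag == tag:
            return data              # hit: a comparator in the set matched
    return None                      # miss: reference main memory

# Two sets, two ways; word addresses 0 and 2 both map to set 0 yet coexist,
# which a direct-mapped cache of the same size could not allow.
cache = [[(0, "A"), (1, "B")], []]
print(set_associative_lookup(cache, 0, num_sets=2))  # A
print(set_associative_lookup(cache, 2, num_sets=2))  # B
print(set_associative_lookup(cache, 4, num_sets=2))  # None
```

The number of comparators needed equals the set size (here two), not the total number of blocks, which is the cost compromise the text describes.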

Replacement policy

When the required word of a block is not held in the cache, we have seen that it is
necessary to transfer the block from the main memory into the cache, displacing an
existing block if the cache is full. Except for direct mapping, which does not allow a
replacement algorithm, the existing block in the cache is chosen by a replacement
algorithm. The replacement mechanism must be implemented totally in hardware,
preferably such that the selection can be made completely during the main memory cycle

for fetching the new block. Ideally, the block replaced will not be needed again in the
future.

The most common replacement algorithms are

Random Replacement Algorithm

A true random replacement algorithm would select a block to replace in a totally random
order, with no regard to memory references or previous selections.

First In First Out

The first-in first-out replacement algorithm removes the block that has been in the cache
for the longest time. The first-in first-out algorithm would naturally be implemented with
a first-in first-out queue of block address.

Least Recently Used

In the least recently used (LRU) algorithm, the block which has not been referenced for
the longest time is removed from the cache.

Writing into cache


Write-through
Update cache and main memory in parallel
Needed when using DMA for I/O device communication
Write-back
Update cache only
Update main memory only when updated word is flushed from cache

21.2.4 Virtual Memory

Virtual memory is a concept used in some large computer systems that permits the user
to construct programs as though a large memory space were available, equal to the
totality of the auxiliary memory. At any time only part of the program resides in main
memory; the other parts remain on hard disk and may be brought into memory later if
needed.

Address Space

An address used by the programmer is called a virtual address, and the set of such addresses is the address space.

Memory Space

An address in main memory is called a physical address, and the set of such addresses forms the memory space.

Address space refers to the address generated by programs and the memory space refers
to the actual physical locations.

Figure 21.11 Relation between address and memory space in a virtual memory

Figure 21.12 Table for Mapping a Virtual Address

The mapping is a dynamic operation, which means that every address is translated
immediately as a word is referenced by the CPU.
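This translation can be sketched as splitting the virtual address into a page number and an offset, then looking up the frame number in a mapping table. The sketch below assumes a page size of 1K words, matching the figures; the page table contents are a made-up example.

```python
PAGE_SIZE = 1024  # 1K words per page, as in the figures

def translate(virtual_addr, page_table):
    """Map a virtual address to a physical address.

    'page_table' maps page number -> frame number; a missing entry
    (KeyError) corresponds to a page fault in this simplified model.
    """
    page = virtual_addr // PAGE_SIZE    # which page the word lies in
    offset = virtual_addr % PAGE_SIZE   # position of the word within the page
    frame = page_table[page]            # dynamic lookup on every reference
    return frame * PAGE_SIZE + offset
```

For example, with page 0 mapped to frame 3, virtual address 5 translates to physical address 3 * 1024 + 5 = 3077.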

Paging

In paging memory management, each process is associated with a page table. Each
entry in the table contains the frame number of the corresponding page in the virtual
address space of the process. The same page table is also the central data structure for
a virtual memory mechanism based on paging.

Control bits

Since only some pages of a process may be in main memory, a bit in the page table
entry, P in the figure below, indicates whether the corresponding page is present
in main memory.
Another control bit needed in the page table entry is the modified bit, M, which
indicates whether the content of the corresponding page has been altered since the page
was last loaded into main memory. Moving pages between the two levels is called
swapping in and swapping out: a process is typically separated into two parts, one
residing in main memory and the other in secondary memory, and pages may be removed
from one part and join the other. Together the two parts make up the whole process
image. In fact the secondary memory holds the complete image of the process, part of
which may have been loaded into main memory. When a page is swapped out, it can
usually be simply overwritten by the new page, since a copy of it already exists on
secondary memory. However, the content of a page may have been altered at run time,
for example a page containing data; in this case the alteration must be reflected in
secondary memory. So when the M bit is 1, the page must be written back to secondary
memory before it is replaced.
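The role of the P and M bits in the swap-out decision can be sketched as follows; this is a simplified illustration, and the entry fields and the write-back callback are hypothetical names, not part of any real page-table format.

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    frame: int
    present: bool = False   # P bit: page resident in main memory
    modified: bool = False  # M bit: page altered since it was loaded

def swap_out(entry, write_back):
    """Evict a page: write it to secondary memory only if the M bit is set.

    If M is 0, the copy on secondary memory is already current and the
    frame can simply be overwritten by the incoming page.
    """
    if entry.modified:
        write_back(entry.frame)  # reflect the alteration on disk first
    entry.present = False
    entry.modified = False
```

A clean page (M = 0) is thus evicted without any disk write, while a dirty page (M = 1) costs one write-back.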

Figure 21.13 Page Table Entry

Figure 21.14 Address Space and Memory Space (each split into groups of 1K words)

Figure 21.15 Memory Table in a Paged System



21.3 Let us Sum Up


A memory unit is essential in a digital computer. Memory is of two types, primary and
secondary, and the various kinds of memory have been discussed. Magnetic tape and
magnetic disk are auxiliary memories, and the presence of a cache memory helps
increase the speed of processing. Associative memories are used to search for items in
a table stored in memory. Cache memory mapping is of three types: associative
mapping, direct mapping and set-associative mapping. The concept of paging has been
discussed under virtual memory.

21.4 Lesson-end Activities

1. With a neat diagram, explain Memory Organization and Memory Hierarchy


2. Write short notes on Auxiliary Memory.
3. Explain Associative Memory in detail.
4. What is the significance of cache memory? Discuss the mapping techniques
used in cache memory.
5. Explain Virtual Memory.

21.5 Points for discussions


Types of Memories
Cache Memory
Hit ratio
Seek time
Latency Time
Paging

21.6 References

Computer System Architecture, M. Morris Mano


Computer Architecture, John Wiley
