
Master of Computer Application (MCA) – Semester 1

MC0062 – Digital Systems, Computer Organization and Architecture


(Book ID: B0680 & B0684)
Assignment Set – 1
Q.-1. Describe the concept of Binary arithmetic with suitable numerical examples.

Answer.

Binary arithmetic
Arithmetic in binary is much like arithmetic in other numeral systems. Addition,
subtraction, multiplication, and division can be performed on binary numerals.
Addition
The circuit diagram for a binary half adder, which adds two bits together, producing
sum and carry bits.
The simplest arithmetic operation in binary is addition. Adding two single-digit
binary numbers is relatively simple, using a form of carrying:
0+0→0
0+1→1
1+0→1
1 + 1 → 10, carry 1 (since 1 + 1 = 0 + 1 × binary 10)

Adding two "1" digits produces a digit "0", while 1 will have to be added to the next
column. This is similar to what happens in decimal when certain single-digit
numbers are added together; if the result equals or exceeds the value of the radix
(10), the digit to the left is incremented:
5 + 5 → 0, carry 1 (since 5 + 5 = 0 + 1 × 10)
7 + 9 → 6, carry 1 (since 7 + 9 = 6 + 1 × 10)
This is known as carrying. When the result of an addition exceeds the value of a
digit, the procedure is to "carry" the excess amount divided by the radix (that is,
10/10) to the left, adding it to the next positional value. This is correct since the
next position has a weight that is higher by a factor equal to the radix. Carrying
works the same way in binary:
    1 1 1 1 1       (carried digits)
      0 1 1 0 1
  +   1 0 1 1 1
  -------------
  = 1 0 0 1 0 0
In this example, two numerals are being added together: 01101₂ (13₁₀) and
10111₂ (23₁₀). The top row shows the carry bits used. Starting in the rightmost
column, 1 + 1 = 10₂. The 1 is carried to the left, and the 0 is written at the bottom
of the rightmost column. The second column from the right is added: 1 + 0 + 1 =
10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1
+ 1 = 11₂. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding
like this gives the final answer 100100₂ (36 decimal).
When computers must add two numbers, the rule that x XOR y = (x + y) mod 2 for
any two bits x and y allows for very fast calculation as well.
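The column-by-column carry procedure can be expressed in a few lines of Python (an illustrative sketch, not part of the original answer):

    def add_binary(a: str, b: str) -> str:
        """Add two binary numerals given as strings, using column-wise carries."""
        a, b = a.zfill(max(len(a), len(b))), b.zfill(max(len(a), len(b)))
        carry, digits = 0, []
        for x, y in zip(reversed(a), reversed(b)):
            total = int(x) + int(y) + carry          # 0, 1, 2 or 3
            digits.append(str(total % 2))            # sum bit (x XOR y XOR carry)
            carry = total // 2                       # carry into the next column
        if carry:
            digits.append('1')
        return ''.join(reversed(digits))

    print(add_binary('01101', '10111'))  # -> 100100  (13 + 23 = 36)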

A simplification for many binary addition problems is the Long Carry Method or
Brookhouse Method of Binary Addition. This method is generally useful in any binary
addition in which one of the numbers contains a long string of "1" digits. For example, the
following large binary numbers can be added in two simple steps without multiple
carries from one place to the next.

Traditional carrying:

      1 1 1 1 1 1 1 1        (carried digits)
        1 1 1 0 1 1 1 1 1 0
      + 1 0 1 0 1 1 0 0 1 1
      ---------------------
      = 1 1 0 0 1 1 1 0 0 0 1

Long Carry Method:

        1 1 1 0 1 1 1 1 1 0
      + 1 0 1 0 1 1 0 0 1 1          (add the crossed-out digits first)
      + 1 0 0 0 1 0 0 0 0 0 0        (sum of the crossed-out digits; now add the remaining digits)
      -----------------------
      = 1 1 0 0 1 1 1 0 0 0 1

In this example, two numerals are being added together: 1110111110₂ (958₁₀) and
1010110011₂ (691₁₀). The top row shows the carry bits used. Instead of the standard
carry from one column to the next, the lowest place-valued "1" with a "1" in the
corresponding place value beneath it may be added, and a "1" may be carried to one
digit past the end of the series. These numbers must be crossed off since they are
already added. Then simply add that result to the uncancelled digits in the second
row. Proceeding like this gives the final answer 11001110001₂ (1649₁₀).

Addition table

    +  | 0    1
   ----+---------
    0  | 0    1
    1  | 1   10
Subtraction
Subtraction works in much the same way:
0−0→0
0 − 1 → 1, borrow 1
1−0→1
1−1→0
Subtracting a "1" digit from a "0" digit produces the digit "1", while 1 will have to be
subtracted from the next column. This is known as borrowing. The principle is the
same as for carrying. When the result of a subtraction is less than 0, the least
possible value of a digit, the procedure is to "borrow" the deficit divided by the radix
(that is, 10/10) from the left, subtracting it from the next positional value.

        *   * * *       (starred columns are borrowed from)
      1 1 0 1 1 1 0
    −     1 0 1 1 1
    ----------------
    = 1 0 1 0 1 1 1

Subtracting a positive number is equivalent to adding a negative number of equal
absolute value; computers typically use two's complement notation to represent
negative values. This notation eliminates the need for a separate "subtract"
operation. Using two's complement notation, subtraction can be summarized by the
following formula:
A − B = A + NOT B + 1
For further details, see two's complement.
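A minimal Python sketch of this formula, assuming an 8-bit word width for the illustration:

    WIDTH = 8                      # assumed word size for this illustration
    MASK = (1 << WIDTH) - 1

    def subtract(a: int, b: int) -> int:
        """Compute a - b as a + NOT b + 1, i.e. adding the two's complement of b."""
        not_b = ~b & MASK          # bitwise complement, truncated to the word width
        return (a + not_b + 1) & MASK

    print(bin(subtract(0b1101110, 0b0010111)))  # -> 0b1010111  (110 - 23 = 87)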

Multiplication
Multiplication in binary is similar to its decimal counterpart. Two numbers A and B
can be multiplied by partial products: for each digit of B, the product of that digit
and A is calculated and written on a new line, shifted leftward so that its rightmost
digit lines up with the digit in B that was used. The sum of all these partial products
gives the final result.

Since there are only two digits in binary, there are only two possible outcomes of
each partial multiplication:

* If the digit in B is 0, the partial product is also 0


* If the digit in B is 1, the partial product is equal to A

For example, the binary numbers 1011 and 1010 are multiplied as follows:

             1 0 1 1   (A)
           × 1 0 1 0   (B)
           ---------
             0 0 0 0   ← Corresponds to a zero in B
    +      1 0 1 1     ← Corresponds to a one in B
    +    0 0 0 0
    +  1 0 1 1
    -----------------
    = 1 1 0 1 1 1 0
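The partial-product procedure above is the classic shift-and-add algorithm; a short illustrative Python sketch:

    def multiply_binary(a: str, b: str) -> str:
        """Shift-and-add multiplication of two binary numerals given as strings."""
        a_val, result = int(a, 2), 0
        for position, bit in enumerate(reversed(b)):
            if bit == '1':                      # partial product equals A ...
                result += a_val << position     # ... shifted left to line up with this bit of B
            # if bit == '0' the partial product is 0 and contributes nothing
        return bin(result)[2:]

    print(multiply_binary('1011', '1010'))      # -> 1101110  (11 x 10 = 110)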

Binary numbers can also be multiplied with bits after a binary point:

               1 0 1.1 0 1   (A)  (5.625 in decimal)
             ×   1 1 0.0 1   (B)  (6.25 in decimal)
             -------------
               1 0 1 1 0 1   ← Corresponds to a one in B
    +        0 0 0 0 0 0     ← Corresponds to a zero in B
    +      0 0 0 0 0 0
    +    1 0 1 1 0 1
    +  1 0 1 1 0 1
    -------------------------
    = 1 0 0 0 1 1.0 0 1 0 1  (35.15625 in decimal)

See also Booth's multiplication algorithm.


Multiplication table

    ×  | 0    1
   ----+--------
    0  | 0    0
    1  | 0    1
Division
See also: Division (digital)
Binary division is again similar to its decimal counterpart:

Here, the divisor is 101₂, or 5 decimal, while the dividend is 11011₂, or 27 decimal.
The procedure is the same as that of decimal long division; here, the divisor 101₂
goes into the first three digits 110₂ of the dividend one time, so a "1" is written on
the top line. This result is multiplied by the divisor, and subtracted from the first
three digits of the dividend; the next digit (a "1") is included to obtain a new three-
digit sequence:

1
___________
101 )11011
−101
-----
011

The procedure is then repeated with the new sequence, continuing until the digits in
the dividend have been exhausted:
101
___________
101 )11011
−101
-----
011
−000
-----
111
−101
-----
10
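The same restoring long-division procedure can be sketched in Python (illustrative only):

    def divide_binary(dividend: str, divisor: str):
        """Binary long division: bring down one dividend bit at a time,
        subtract the divisor whenever it fits, and record a quotient bit."""
        d = int(divisor, 2)
        remainder, quotient = 0, []
        for bit in dividend:
            remainder = (remainder << 1) | int(bit)   # bring down the next bit
            if remainder >= d:
                remainder -= d                        # the divisor "goes into" the partial remainder
                quotient.append('1')
            else:
                quotient.append('0')
        return ''.join(quotient).lstrip('0') or '0', bin(remainder)[2:]

    print(divide_binary('11011', '101'))  # -> ('101', '10'), i.e. 27 / 5 = 5 remainder 2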
Q-2. Explain the Boolean rules with relevant examples.

Ans-

Boolean algebra finds its most practical use in the simplification of logic
circuits. If we translate a logic circuit's function into symbolic (Boolean) form,
and apply certain algebraic rules to the resulting equation to reduce the
number of terms and/or arithmetic operations, the simplified equation may
be translated back into circuit form for a logic circuit performing the same
function with fewer components. If an equivalent function can be achieved with
fewer components, the result will be increased reliability and decreased cost
of manufacture.

To this end, there are several rules of Boolean algebra presented in this
section for use in reducing expressions to their simplest forms. The identities
and properties already reviewed in this chapter are very useful in Boolean
simplification, and for the most part bear similarity to many identities and
properties of "normal" algebra. However, the rules shown in this section are
all unique to Boolean mathematics.

Example: A + AB = A

This rule may be proven symbolically by factoring an "A" out of the two terms, then applying
the rules of A + 1 = 1 and 1A = A to achieve the final result:

A + AB = A(B + 1) = A(1) = A

Please note how the rule A + 1 = 1 was used to reduce the (B + 1) term to 1. When a rule like "A
+ 1 = 1" is expressed using the letter "A", it doesn't mean it only applies to expressions
containing "A". What the "A" stands for in a rule like A + 1 = 1 is any Boolean variable or
collection of variables. This is perhaps the most difficult concept for new students to master in
Boolean simplification: applying standardized identities, properties, and rules to expressions not
in standard form.

For instance, the Boolean expression ABC + 1 also reduces to 1 by means of the "A + 1 = 1"
identity. In this case, we recognize that the "A" term in the identity's standard form can represent
the entire "ABC" term in the original expression.

The next rule, A + A'B = A + B, looks similar to the first one shown in this section, but is actually
quite different and requires a more clever proof:

A + A'B = (A + AB) + A'B = A + B(A + A') = A + B(1) = A + B
Note how the last rule (A + AB = A) is used to "un-simplify" the first "A" term
in the expression, changing the "A" into an "A + AB". While this may seem
like a backward step, it certainly helped to reduce the expression to
something simpler! Sometimes in mathematics we must take "backward"
steps to achieve the most elegant solution. Knowing when to take such a
step and when not to is part of the art-form of algebra, just as a victory in a
game of chess almost always requires calculated sacrifices.

Another rule involves the simplification of a product-of-sums expression:

(A + B)(A + C) = A + BC

And, the corresponding proof: (A + B)(A + C) = AA + AC + AB + BC = A + AC + AB + BC = A(1 + C + B) + BC = A + BC.

To summarize, here are the three new rules of Boolean simplification expounded in this section:

A + AB = A
A + A'B = A + B
(A + B)(A + C) = A + BC

Definition of Boolean Expressions

A Boolean expression is defined as an expression in which the constituents take
one of two values and the algebraic operations defined on the set are logical OR,
a kind of addition, and logical AND, a kind of multiplication.

A Boolean expression differs from an ordinary algebraic expression in three
ways: in the values the variables may assume, which are logical rather than
numeric quantities, namely 0 and 1; in the operations applied to these values;
and in the properties of those operations, that is, the laws they obey. Boolean
expressions appear in mathematical logic, digital logic, computer programming,
set theory, and statistics.

Laws of Boolean Algebra (where x' denotes the complement of x):

(1a) x · y = y · x
(1b) x + y = y + x
(1c) 1 + x = 1
(2a) x · (y · z) = (x · y) · z
(2b) x + (y + z) = (x + y) + z
(3a) x · (y + z) = (x · y) + (x · z)
(3b) x + (y · z) = (x + y) · (x + z)
(4a) x · x = x
(4b) x + x = x
(5a) x · (x + y) = x
(5b) x + (x · y) = x
(6a) x · x' = 0
(6b) x + x' = 1
(7)  (x')' = x
(8a) (x · y)' = x' + y'
(8b) (x + y)' = x' · y'

These laws translate directly into logic-gate implementations; some of them can also be derived
from the simpler identities listed above.
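Because each variable can take only the values 0 and 1, any of these laws can be checked exhaustively. A small Python sketch verifying De Morgan's laws (8a), (8b) and the absorption laws (5a), (5b):

    from itertools import product

    def complement(x: int) -> int:
        return 1 - x

    for x, y in product((0, 1), repeat=2):
        # (8a) (x . y)' = x' + y'      and      (8b) (x + y)' = x' . y'
        assert complement(x & y) == complement(x) | complement(y)
        assert complement(x | y) == complement(x) & complement(y)
        # (5a) x . (x + y) = x         and      (5b) x + (x . y) = x
        assert x & (x | y) == x
        assert x | (x & y) == x

    print("All four laws hold for every combination of x and y.")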

Example for Boolean Expressions:


Example 1: Determine the OR operations using Boolean expressions.

Solution: Construct a truth table for OR Operation,

x y xVy

F F F

F T T

T F T

T T T

Example 2: Determine the AND operations using Boolean expressions.


Solution: Construct a truth table for AND Operation,

x y xΛy

F F F

F T F

T F F

T T T

Example 3: Determine the NOT operations using Boolean expressions.

Solution: Construct a truth table for NOT Operation,

x ¬x

F T

T F
Q-3 Explain Karnaugh maps with your own examples illustrating the
formation.
Ans-

Karnaugh map

K-map showing minterms and boxes covering the desired minterms. The
brown region is an overlapping of the red (square) and green regions.

The four input variables can be combined in 16 different ways, so the Karnaugh
map has 16 positions, and is therefore arranged in a 4 × 4 grid.

The binary digits in the map represent the function's output for any given
combination of inputs. So 0 is written in the upper leftmost corner of the map
because ƒ = 0 when A = 0, B = 0, C = 0, D = 0. Similarly we mark the bottom
right corner as 1 because A = 1, B = 0, C = 1, D = 0 gives ƒ = 1. Note that
the values are ordered in a Gray code, so that precisely one variable
changes between any pair of adjacent cells.

After the Karnaugh map has been constructed the next task is to find the
minimal terms to use in the final expression. These terms are found by
encircling groups of 1s in the map. The groups must be rectangular and must
have an area that is a power of two (i.e. 1, 2, 4, 8…). The rectangles should
be as large as possible without containing any 0s. The optimal groupings in
this map are marked by the green, red and blue lines. Note that groups may
overlap. In this example, the red and green groups overlap. The red group is
a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is
indicated in brown.

The grid is toroidally connected, which means that the rectangular groups
can wrap around the edges, so AD' is a valid term, although not part of the
minimal set - it covers minterms 8, 10, 12, and 14.
Perhaps the hardest-to-visualize wrap-around term is B'D', which covers the
four corners - it covers minterms 0, 2, 8, 10.

Examples:

Karnaugh maps are used to facilitate the simplification of Boolean algebra


functions. The following is an unsimplified Boolean algebra function with
Boolean variables A, B, C, D, and their inverses. It can be represented in
two different ways:

• f(A,B,C,D) = ∑ m(6, 8, 9, 10, 11, 12, 13, 14). Note: the values inside ∑ m are the
minterms to map (i.e. the rows whose output is 1 in the truth table).

Truth table

Using the defined minterms, the truth table can be created:

     #    A  B  C  D    f(A,B,C,D)
     0    0  0  0  0    0
     1    0  0  0  1    0
     2    0  0  1  0    0
     3    0  0  1  1    0
     4    0  1  0  0    0
     5    0  1  0  1    0
     6    0  1  1  0    1
     7    0  1  1  1    0
     8    1  0  0  0    1
     9    1  0  0  1    1
    10    1  0  1  0    1
    11    1  0  1  1    1
    12    1  1  0  0    1
    13    1  1  0  1    1
    14    1  1  1  0    1
    15    1  1  1  1    0
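The truth table above can be generated mechanically from the minterm list; a minimal Python sketch:

    MINTERMS = {6, 8, 9, 10, 11, 12, 13, 14}   # rows where f(A,B,C,D) = 1

    print(" #  A B C D  f")
    for row in range(16):
        a, b, c, d = (row >> 3) & 1, (row >> 2) & 1, (row >> 1) & 1, row & 1
        f = 1 if row in MINTERMS else 0
        print(f"{row:2d}  {a} {b} {c} {d}  {f}")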

Parity generator

This picture shows a parity-generator function: the output is 1 when the inputs contain an
odd number of 1s (and 0 when the count is even). This is the worst case for a 4-input
Karnaugh map: no cell containing a 1 has a neighbouring cell containing a 1, so the function
cannot be simplified at all.
Karnaugh map for 5 (five) inputs
Many people do not know how to create a map for 5 (five) inputs. The next picture shows
how it can be done, and illustrates the grouping rules that apply on a 5-input Karnaugh map.

The adder
The basic circuit is an adder. The picture gives an example of the basic component of an
adder (half adders can also be used).
Q-4. With a neat labeled diagram explain the working of binary half
and full adders.
Ans.

A half adder is a logical circuit that performs an addition operation on two
binary digits. The half adder produces a sum and a carry value, which are both
binary digits: Sum = A XOR B and Carry = A AND B.

A full adder is a logical circuit that performs an addition operation on three
binary digits (two operand bits and a carry-in). The full adder produces a sum
and a carry value, which are both binary digits: Sum = A XOR B XOR Cin and
Carry-out = A·B + Cin·(A XOR B). It can be combined with other full adders or
work on its own.
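A gate-level sketch in Python of the behaviour described above (XOR for the sum, AND and OR for the carry); the function names are illustrative:

    def half_adder(a: int, b: int):
        """Sum = a XOR b, Carry = a AND b."""
        return a ^ b, a & b

    def full_adder(a: int, b: int, carry_in: int):
        """A full adder built from two half adders and an OR gate."""
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, c1 | c2          # sum, carry out

    # All input combinations of the full adder:
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                print(a, b, cin, '->', full_adder(a, b, cin))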

Q.-5. Describe the concept of timing diagrams and synchronous logic.

Ans-

Timing Diagrams:

Timing diagrams are a key to understanding digital systems. They describe how digital
circuitry behaves over the flow of time, and they help to show how a digital circuit or
sub-circuit should work or fit into a larger circuit system. Learning how to read timing
diagrams will therefore improve your ability to work with digital systems and to integrate them.
Below is a list of the most commonly used timing diagram fragments:
• Low level to supply voltage:

• Transition to low or high level:

• Bus signals – parallel signals transitioning from one level to other:

• High Impedance state:

• Bus signal with floating impedance:

• Conditional change of one signal depending on another signal transition:

• Transition on a signal causes state changes in a BUS:

• More than one transition causes changes in a BUS:


• Sequential transition – one signal transition causes another signal transition, and the second
signal transition causes a third signal transition.

As you can see, timing diagrams together with the digital circuit can completely describe the
circuit's working. To understand timing diagrams, you should follow all the symbols and
transitions in them.
You will find plenty of symbols in timing diagrams; which ones are used depends on the designer
or the circuit manufacturer. But once you understand the whole picture, you can easily read any
timing diagram, as in this example:

Synchronous sequential logic

Nearly all sequential logic today is 'clocked' or 'synchronous' logic: there is a


'clock' signal, and all internal memory (the 'internal state') changes only on a
clock edge. The basic storage element in sequential logic is the flip-flop.

The main advantage of synchronous logic is its simplicity. Every operation in


the circuit must be completed inside a fixed interval of time between two
clock pulses, called a 'clock cycle'. As long as this condition is met (ignoring
certain other details), the circuit is guaranteed to be reliable. Synchronous
logic also has two main disadvantages, as follows.

1. The clock signal must be distributed to every flip-flop in the circuit. As the
clock is usually a high-frequency signal, this distribution consumes a
relatively large amount of power and dissipates much heat. Even the flip-
flops that are doing nothing consume a small amount of power, thereby
generating waste heat in the chip.
2. The maximum possible clock rate is determined by the slowest logic path
in the circuit, otherwise known as the critical path. This means that every
logical calculation, from the simplest to the most complex, must complete in
one clock cycle. One way around this limitation is to split complex operations
into several simple operations, a technique known as 'pipelining'. This
technique is prominent within microprocessor design, and helps to improve
the performance of modern processors.
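To make the "state changes only on a clock edge" idea concrete, here is a toy Python model of an edge-triggered register (purely illustrative):

    class DRegister:
        """Toy model of an edge-triggered register: the output follows D
        only at the instant the clock changes from 0 to 1."""
        def __init__(self):
            self.q = 0
            self._last_clock = 0

        def tick(self, clock: int, d: int) -> int:
            if self._last_clock == 0 and clock == 1:   # rising clock edge
                self.q = d                             # capture the input
            self._last_clock = clock
            return self.q                              # held constant between edges

    reg = DRegister()
    for clock, d in [(0, 1), (1, 1), (0, 0), (1, 0)]:
        print(f"clk={clock} d={d} q={reg.tick(clock, d)}")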

Q.6- Discuss the Quine McCluskey method.


Answer-
Quine–McCluskey algorithm
The Quine–McCluskey algorithm (or the method of prime implicants)
is a method used for minimization of boolean functions which was developed
by W.V. Quine and Edward J. McCluskey. It is functionally identical to
Karnaugh mapping, but the tabular form makes it more efficient for use in
computer algorithms, and it also gives a deterministic way to check that the
minimal form of a Boolean function has been reached. It is sometimes
referred to as the tabulation method.

The method involves two steps:

1. Finding all prime implicants of the function.


2. Use those prime implicants in a prime implicant chart to find the
essential prime implicants of the function, as well as other prime
implicants that are necessary to cover the function.

Complexity:
Although more practical than Karnaugh mapping when dealing with more
than four variables, the Quine–McCluskey algorithm also has a limited range
of use since the problem it solves is NP-hard: the runtime of the Quine–
McCluskey algorithm grows exponentially with the number of variables. It
can be shown that for a function of n variables the upper bound on the
number of prime implicants is 3^n / n. If n = 32 there may be over 6.5 × 10^15
prime implicants. Functions with a large number of variables have to be
minimized with potentially non-optimal heuristic methods, of which the
Espresso heuristic logic minimizer is the de facto standard.

Step 1: finding prime implicants

Minimizing an arbitrary function:

          A B C D   f
    m0    0 0 0 0   0
    m1    0 0 0 1   0
    m2    0 0 1 0   0
    m3    0 0 1 1   0
    m4    0 1 0 0   1
    m5    0 1 0 1   0
    m6    0 1 1 0   0
    m7    0 1 1 1   0
    m8    1 0 0 0   1
    m9    1 0 0 1   x
    m10   1 0 1 0   1
    m11   1 0 1 1   1
    m12   1 1 0 0   1
    m13   1 1 0 1   0
    m14   1 1 1 0   x
    m15   1 1 1 1   1
One can easily form the canonical sum of products expression from this table, simply by
summing the minterms (leaving out don't-care terms) where the function evaluates to one:

f(A,B,C,D) = A'BC'D' + AB'C'D' + AB'CD' + AB'CD + ABC'D' + ABCD.

Of course, that's certainly not minimal. So to optimize, all minterms that evaluate to one are first
placed in a minterm table. Don't-care terms are also added into this table, so they can be
combined with minterms:

    Number of 1s   Minterm   Binary Representation
    1              m4        0100
                   m8        1000
    2              m9        1001
                   m10       1010
                   m12       1100
    3              m11       1011
                   m14       1110
    4              m15       1111

At this point, one can start combining minterms with other minterms. If two
terms vary by only a single digit changing, that digit can be replaced with a
dash indicating that the digit doesn't matter. Terms that can't be combined
any more are marked with a "*". When going from Size 2 to Size 4, treat '-' as
a third bit value. Ex: -110 and -100 or -11- can be combined, but not -110
and 011-. (Trick: Match up the '-' first.)

    Number   Minterm   0-Cube   Size 2 Implicants    Size 4 Implicants
    of 1s
    1        m4        0100     m(4,12)    -100*     m(8,9,10,11)    10--*
             m8        1000     m(8,9)     100-      m(8,10,12,14)   1--0*
                                m(8,10)    10-0
                                m(8,12)    1-00      m(10,11,14,15)  1-1-*
    2        m9        1001     m(9,11)    10-1
             m10       1010     m(10,11)   101-
             m12       1100     m(10,14)   1-10
                                m(12,14)   11-0
    3        m11       1011     m(11,15)   1-11
             m14       1110     m(14,15)   111-
    4        m15       1111
Note: In this example, none of the terms in the size 4 implicants table can be
combined any further. Be aware that in general this process would be continued
(to size 8 implicants, and so on).
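The core of step 1, combining two terms that differ in exactly one bit position, can be sketched in Python (a simplified illustration of the table-building step, not a complete minimizer):

    def combine(term_a: str, term_b: str):
        """Combine two terms (strings of '0', '1', '-') that differ in exactly
        one position into a new term with a dash at that position."""
        diffs = [i for i, (x, y) in enumerate(zip(term_a, term_b)) if x != y]
        if len(diffs) == 1 and '-' not in (term_a[diffs[0]], term_b[diffs[0]]):
            i = diffs[0]
            return term_a[:i] + '-' + term_a[i + 1:]
        return None                                    # cannot be combined

    print(combine('0100', '1100'))   # -> '-100'  (m4 and m12)
    print(combine('10--', '1-1-'))   # -> None    (dashes do not line up)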

Step 2: prime implicant chart


None of the terms can be combined any further than this, so at this point we
construct an essential prime implicant table. Along the side go the prime
implicants that have just been generated, and along the top go the minterms
specified earlier. The don't care terms are not placed on top - they are
omitted from this section because they are not necessary inputs.

                      4   8   10  11  12  15      =>   A B C D
    m(4,12)*          X               X           =>   - 1 0 0
    m(8,9,10,11)          X   X   X               =>   1 0 - -
    m(8,10,12,14)         X   X       X           =>   1 - - 0
    m(10,11,14,15)*           X   X       X       =>   1 - 1 -
Here, each of the essential prime implicants has been starred - the second
prime implicant can be 'covered' by the third and fourth, and the third prime
implicant can be 'covered' by the second and first, and neither is thus
essential. If a prime implicant is essential then, as would be expected, it is
necessary to include it in the minimized boolean equation. In some cases,
the essential prime implicants do not cover all minterms, in which case
additional procedures for chart reduction can be employed. The simplest
"additional procedure" is trial and error, but a more systematic way is
Petrick's Method. In the current example, the essential prime implicants do
not handle all of the minterms, so, in this case, one can combine the
essential implicants with one of the two non-essential ones to yield one of
these two equations:

f(A,B,C,D) = BC'D' + AB' + AC
f(A,B,C,D) = BC'D' + AD' + AC

Both of those final equations are functionally equivalent to the original, verbose equation.

Q-7. With appropriate diagrams explain the bus structure


of a computer system.
Ans-

BUS structure: A group of lines that serves as a connecting path for
several devices is called a bus. In addition to the lines that carry the data, the
bus must have lines for address and control purposes.
In computer architecture, a bus is a subsystem that transfers data between
computer components inside a computer or between computers.

A computer bus structure is provided which permits replacement of


removable modules during operation of a computer wherein means are
provided to precharge signal output lines to within a predetermined range
prior to the usage of the signal output lines to carry signals, and further,
wherein means are provided to minimize arcing to pins designed to carry the
power and signals of a connector. In a specific embodiment, pin length, i.e.,
separation between male and female components of the connector, are
subdivided into long pin length and short pin length. Ground connections and
power connections for each voltage level are assigned to the long pin
lengths. Signal connections and a second power connection for each voltage
level is assigned to the short pin lengths. The precharge/prebias circuit
comprises a resistor divider coupled between a power source and ground
with a high impedance tap coupled to a designated signal pin, across which
is coupled a charging capacitor or equivalent representing the capacitance of
the signal line. Bias is applied to the precharge/prebias circuit for a sufficient
length of time to precharge the signal line to a desired neutral signal level
between expected high and low signal values prior to connection of the short
pin to its mate..

Early computer buses were literally parallel electrical buses with multiple
connections, but the term is now used for any physical arrangement that
provides the same logical functionality as a parallel electrical bus. Modern
computer buses can use both parallel and bit-serial connections, and can be
wired in either a multidrop (electrical parallel) or daisy chain topology, or
connected by switched hubs, as in the case of USB.
Description of BUS :
At one time, "bus" meant an electrically parallel system, with electrical
conductors similar or identical to the pins on the CPU. This is no longer the
case, and modern systems are blurring the lines between buses and
networks.
Buses can be parallel buses, which carry data words in parallel on multiple
wires, or serial buses, which carry data in bit-serial form. The addition of
extra power and control connections, differential drivers, and data
connections in each direction usually means that most serial buses have
more conductors than the minimum of one used in the 1-Wire and UNI/O
serial buses. As data rates increase, the problems of timing skew, power
consumption, electromagnetic interference and crosstalk across parallel
buses become more and more difficult to circumvent. One partial solution to
this problem has been to double pump the bus. Often, a serial bus can
actually be operated at higher overall data rates than a parallel bus, despite
having fewer electrical connections, because a serial bus inherently has no
timing skew or crosstalk. USB, FireWire, and Serial ATA are examples of this.
Multidrop connections do not work well for fast serial buses, so most modern
serial buses use daisy-chain or hub designs.
Most computers have both internal and external buses. An internal bus
connects all the internal components of a computer to the motherboard (and
thus, the CPU and internal memory). These types of buses are also referred
to as a local bus, because they are intended to connect to local devices, not
to those in other machines or external to the computer. An external bus
connects external peripherals to the motherboard.
Network connections such as Ethernet are not generally regarded as buses,
although the difference is largely conceptual rather than practical. The
arrival of technologies such as InfiniBand and HyperTransport is further
blurring the boundaries between networks and buses. Even the lines
between internal and external are sometimes fuzzy, I²C can be used as both
an internal bus, or an external bus (where it is known as ACCESS.bus), and
InfiniBand is intended to replace both internal buses like PCI as well as
external ones like Fibre Channel. In the typical desktop application, USB
serves as a peripheral bus, but it also sees some use as a networking utility
and for connectivity between different computers, again blurring the
conceptual distinction.

Q.8- Explain the CPU organization with relevant diagrams.

Answer.
CPU which is the heart of a computer consists of Registers, Control Unit and
Arithmetic Logic Unit. The following tasks are to be performed by the CPU:
1. Fetch instructions: The CPU must read instructions from the memory.
2. Interpret instructions: The instructions must be decoded to determine
what action is required.
3. Fetch data: The execution of an instruction may require reading data
from memory or an I/O module.
4. Process data: The execution of an instruction may require performing
some arithmetic or logical operations on data.
5. Write data: The results of an execution may require writing data to the
memory or an I/O module.

Function of computer organization:


1. (a) A CPU can be defined as a general-purpose instruction set processor
responsible for program execution.
(b) A CPU communicates with the rest of the system through an address bus, a data
bus and a control bus.
(c) A computer with one CPU is called a uniprocessor and a computer with more than
one CPU is called a multiprocessor.
(d) The address bus is used to transfer addresses from the CPU to main
memory or to I/O devices.
(e) The data bus is the main path by which information is transferred to and from
the CPU.
(f) The control bus is used by the CPU to control the various devices connected to it
and to synchronise their operations with those of the CPU.
2. (a) The control unit takes the instructions one by one for execution. It takes data
from input devices and stores it in memory, and also sends data from memory to
the output devices.
(b) All arithmetic and logical operations are carried out by the Arithmetic Logic
Unit (ALU).
(c) The control unit and the arithmetic logic unit together are known as the CPU.
3. (a) The accumulator is the main register of the ALU.
(b) In the execution of most instructions the accumulator is used to hold an input
operand or the output result.
(c) The instruction register holds the opcode of the current instruction.
(d) The memory address register holds the address of the current instruction.
4. An accumulator-based CPU consists of (a) a data processing unit, (b) a program
control unit and (c) a memory and I/O interface unit.
(a) (i) In the data processing unit, data is processed to produce results.
(ii) The accumulator is the main operand register of the ALU.
(b) (i) The program control unit controls the various parts of the CPU.
(ii) The program counter holds the address of the next instruction to be read
from memory after the current instruction is executed.
(iii) The instruction register holds the opcode of the current instruction.
(iv) The control circuits are responsible for every operation of the CPU.
(c) (i) The data register of the memory and I/O interface unit acts as a buffer
between the CPU and main memory.
(ii) The address register contains the address of the present instruction, obtained
from the program control unit.
5. (a) The stack pointer and the flag register are special registers of the CPU.
(b) The stack pointer holds the address of the most recently entered item on
the stack.
(c) The flag register indicates status conditions that depend on the result of an
operation.
6. (a) Micro-operations are the operations executed on data stored in
registers.
(b) The set of micro-operations specified by an instruction is known as a macro-
operation.
7. (a) The sequence of operations involved in processing an instruction
constitutes an instruction cycle.
(b) The fetch cycle is defined as the time required to bring the instruction
code from main memory into the CPU.
(c) The execute cycle is the time required to decode and execute an instruction.
(d) The fetch cycle requires a fixed time slot; the execute cycle requires a variable
time slot.
8. (a) The word length indicates the number of bits the CPU can process at a
time.
(b) The memory size indicates the total storage capacity of the main memory.
(c) The word length is an indication of the bit length of each register.
9. A computer is said to operate on the stored program concept if it
stores the instructions as well as the data of a program in main memory while
they are waiting to be executed.
10. A program stored in main memory is executed instruction by instruction, in
sequence, using the program counter.
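A toy Python sketch of the fetch-decode-execute loop for a hypothetical one-address accumulator machine (the opcodes and memory layout are invented purely for illustration):

    # Hypothetical one-address instruction set: each instruction is (opcode, address).
    LOAD, ADD, STORE, HALT = 'LOAD', 'ADD', 'STORE', 'HALT'

    memory = {0: (LOAD, 10), 1: (ADD, 11), 2: (STORE, 12), 3: (HALT, 0),
              10: 7, 11: 5, 12: 0}          # program followed by data
    pc, accumulator = 0, 0

    while True:
        opcode, address = memory[pc]        # fetch the instruction the PC points to
        pc += 1                             # PC now holds the address of the next instruction
        if opcode == LOAD:                  # decode and execute
            accumulator = memory[address]
        elif opcode == ADD:
            accumulator += memory[address]
        elif opcode == STORE:
            memory[address] = accumulator
        elif opcode == HALT:
            break

    print(memory[12])                       # -> 12  (7 + 5)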

Q.9: Explain the organization of 8085 processor with the relevant diagrams.
Ans- The Intel 8085 is an 8-bit microprocessor introduced by Intel in 1977.
It was binary-compatible with the more-famous Intel 8080 but required less
supporting hardware, thus allowing simpler and less expensive
microcomputer systems to be built.
The "5" in the model number came from the fact that the 8085 required only
a +5-volt (V) power supply rather than the +5V, -5V and +12V supplies the
8080 needed. Both processors were sometimes used in computers running
the CP/M operating system, and the 8085 later saw use as a microcontroller,
by virtue of its low component count. Both designs were eclipsed for desktop
computers by the compatible Zilog Z80, which took over most of the CP/M
computer market as well as taking a share of the booming home computer
market in the early-to-mid-1980s.
The 8085 had a long life as a controller. Once designed into such products as
the DECtape controller and the VT100 video terminal in the late 1970s, it
continued to serve for new production throughout the life span of those
products (generally longer than the product life of desktop computers).
The 8085 architecture follows the von Neumann architecture, with a 16-bit
address bus and an 8-bit data bus. The 8085 used a multiplexed address/data bus:
the lower 8 bits of the address were time-multiplexed with the data on the same
pins, while the upper 8 address bits had dedicated pins (this saved pins on the package).

Registers:

The 8085 can access 2^16 (= 65,536) individual 8-bit memory locations, or in
other words, its address space is 64 KB. Unlike some other microprocessors
of its era, it has a separate address space for up to 2^8 (= 256) I/O ports. It
also has a built-in register array, whose registers are usually labeled A (Accumulator),
B, C, D, E, H, and L. Further special-purpose registers are the 16-bit Program
Counter (PC), the Stack Pointer (SP), and the 8-bit flag register F. The microprocessor
has three maskable interrupts (RST 7.5, RST 6.5 and RST 5.5), one non-
maskable interrupt (TRAP), and one externally serviced interrupt (INTR). The
RST n.5 interrupts refer to actual pins on the processor, a feature which
permitted simple systems to avoid the cost of a separate interrupt controller
chip.

Buses:
* Address bus - a 16-line bus accessing 2^16 memory locations (64 KB) of
memory.
* Data bus - an 8-line bus accessing one (8-bit) byte of data in one operation.
Data bus width is the traditional measure of processor bit designations, as
opposed to address bus width, resulting in the 8-bit microprocessor
designation.
* Control buses - Carries the essential signals for various operations.

Q-10. Describe the theory of addressing modes.


Answer.

Addressing modes are an aspect of the instruction set architecture in most


central processing unit (CPU) designs. The various addressing modes that
are defined in a given instruction set architecture define how machine
language instructions in that architecture identify the operand (or operands)
of each instruction. An addressing mode specifies how to calculate the
effective memory address of an operand by using information held in
registers and/or constants contained within a machine instruction or
elsewhere.
In computer programming, addressing modes are primarily of interest to
compiler writers and to those who write code directly in assembly language.
How many Addressing modes :
Different computer architectures vary greatly as to the number of addressing
modes they provide in hardware. There are some benefits to eliminating
complex addressing modes and using only one or a few simpler addressing
modes, even though it requires a few extra instructions, and perhaps an
extra register. It has proven much easier to design pipelined
CPUs if the only addressing modes available are simple ones.
Most RISC machines have only about five simple addressing modes, while
CISC machines such as the DEC VAX supermini have over a dozen
addressing modes, some of which are quite complicated. The IBM
System/360 mainframe had only three addressing modes; a few more have
been added for the System/390.
When there are only a few addressing modes, the particular addressing
mode required is usually encoded within the instruction code (e.g. IBM
System/390, most RISC). But when there are lots of addressing modes, a
specific field is often set aside in the instruction to specify the addressing
mode. The DEC VAX allowed multiple memory operands for almost all
instructions, and so reserved the first few bits of each operand specifier to
indicate the addressing mode for that particular operand. Keeping the
addressing mode specifier bits separate from the opcode operation bits
produces an orthogonal instruction set.

Even on a computer with many addressing modes, measurements of actual
programs indicate that the simple addressing modes listed below account for
some 90% or more of all addressing modes used. Since most such
measurements are based on code generated from high-level languages by
compilers, this reflects to some extent the limitations of the compilers being
used.

Simple addressing modes for code


Absolute
+----+------------------------------+
|jump| address |
+----+------------------------------+

(Effective PC address = address)


The effective address for an absolute instruction address is the address
parameter itself with no modifications.
PC-relative
+----+------------------------------+
|jump| offset | jump relative
+----+------------------------------+

(Effective PC address = next instruction address + offset, offset may be


negative)
The effective address for a PC-relative instruction address is the offset
parameter added to the address of the next instruction. This offset is usually
signed to allow reference to code both before and after the instruction.
This is particularly useful in connection with jumps, because typical jumps
are to nearby instructions (in a high-level language most if or while
statements are reasonably short). Measurements of actual programs suggest
that an 8 or 10 bit offset is large enough for some 90% of conditional
jumps[citation needed].
Another advantage of program-relative addressing is that the code may be
position-independent, i.e. it can be loaded anywhere in memory without the
need to adjust any addresses.
Some versions of this addressing mode may be conditional referring to two
registers ("jump if reg1==reg2"), one register ("jump unless reg1==0") or
no registers, implicitly referring to some previously-set bit in the status
register. See also conditional execution below.
Register indirect
+-------+-----+
|jumpVia| reg |
+-------+-----+

(Effective PC address = contents of register 'reg')


The effective address for a Register indirect instruction is the address in the
specified register. For example, (A7) to access the content of address
register A7.
The effect is to transfer control to the instruction whose address is in the
specified register.
Many RISC machines have a subroutine call instruction that places the return
address in an address register—the register indirect addressing mode is used
to return from that subroutine call.
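A hedged Python sketch of how the effective address is formed in the three simple modes above (the mode names and fields are illustrative):

    def effective_address(mode: str, operand: int, next_pc: int, registers) -> int:
        """Return the effective (target) address for the simple code-addressing modes."""
        if mode == 'absolute':
            return operand                      # address field used as-is
        if mode == 'pc_relative':
            return next_pc + operand            # signed offset from the next instruction
        if mode == 'register_indirect':
            return registers[operand]           # address taken from the named register
        raise ValueError(mode)

    regs = [0] * 8
    regs[7] = 0x4000
    print(hex(effective_address('absolute', 0x1234, 0x100, regs)))        # 0x1234
    print(hex(effective_address('pc_relative', -0x10, 0x100, regs)))      # 0xf0
    print(hex(effective_address('register_indirect', 7, 0x100, regs)))    # 0x4000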
Sequential addressing modes
sequential execution
+------+
| nop | execute the following instruction
+------+
(Effective PC address = next instruction address)
The CPU, after executing a sequential instruction, immediately executes the
following instruction.
Sequential execution is not considered to be an addressing mode on some
computers.
Most instructions on most CPU architectures are sequential instructions.
Because most instructions are sequential instructions, CPU designers often
add features that deliberately sacrifice performance on the other instructions
—branch instructions—in order to make these sequential instructions run
faster.
Conditional branches load the PC with one of 2 possible results, depending
on the condition—most CPU architectures use some other addressing mode
for the "taken" branch, and sequential execution for the "not taken" branch.
Many features in modern CPUs -- instruction prefetch and more complex
pipelining, out-of-order execution, etc. -- maintain the illusion that each
instruction finishes before the next one begins, giving the same final results,
even though that's not exactly what happens internally.
Each "basic block" of such sequential instructions exhibits both temporal and
spatial locality of reference.
CPUs that do not use sequential execution
CPUs that do not use sequential execution with a program counter are
extremely rare. In some CPUs, each instruction always specifies the address
of the next instruction. Such CPUs have an instruction pointer that holds that
specified address, but they do not have a complete program counter. Such
CPUs include some drum memory computers, the SECD machine, and the
RTX 32P.
Other computing architectures go much further, attempting to bypass the
von Neumann bottleneck using a variety of alternatives to the program
counter.
conditional execution
Some computer architectures (e.g. ARM) have conditional instructions which
can in some cases obviate the need for conditional branches and avoid
flushing the instruction pipeline. An instruction such as a 'compare' is used to
set a condition code, and subsequent instructions include a test on that
condition code to see whether they are obeyed or ignored.
+------+-----+-----+
|skipEQ| reg1| reg2| skip the following instruction if reg1=reg2
+------+-----+-----+

(Effective PC address = next instruction address + 1)


Skip addressing may be considered a special kind of PC-relative addressing
mode with a fixed "+1" offset. Like PC-relative addressing, some CPUs have
versions of this addressing mode that only refer to one register ("skip if
reg1==0") or no registers, implicitly referring to some previously-set bit in
the status register. Other CPUs have a version that selects a specific bit in a
specific byte to test ("skip if bit 7 of reg12 is 0").
Unlike all other conditional branches, a "skip" instruction never needs to
flush the instruction pipeline.

Q-11. Describe the following elements of Bus Design:


A) Bus Types
B) Arbitration Methods
C) Bus Timing
Ans.-
BUS TYPES

• Bus lines can be separated into two general types: dedicated and
multiplexed. A dedicated bus line is permanently assigned either to one
function or to a physical subset of computer components.
• An example of functional dedication is the use of separate dedicated
address and data lines, which is common on many buses.
• However, this is not essential. For example, address and data
information may be transmitted over the same set of lines using an
Address Valid control line. At the beginning of a data transfer, the
address is placed on the bus and the Address Valid line is activated. At
this point, each module has a specified period of time to copy the address
and determine whether it is the addressed module. The address is then
removed from the bus, and the same bus connections are used for the
subsequent read or write data transfer. This method of using the same
lines for multiple purposes is known as time multiplexing.
• The advantage of time multiplexing is that it requires fewer lines, which
saves space and cost. The disadvantage is that more complex circuitry is
needed within each module. There is also a potentially significant
reduction in performance, because events that must share the lines
cannot take place in parallel.
• Physical dedication refers to the use of multiple buses, each of which
connects only a subset of the modules. A typical example is the use of an
I/O bus to interconnect all I/O modules; this bus is then connected to the
main bus through some type of I/O adapter module. The main advantage
of physical dedication is high throughput, because there is less bus
contention. A disadvantage is the increased size and cost of the system.

b) ARBITRATION METHODS
In all but the simplest systems, more than one module may need control of the
bus. For example, an I/O module may need to read or write directly to memory,
without sending the data through the CPU. Because only one unit at a time can
successfully transmit over the bus, some method of arbitration is needed. The
various methods can be broadly classified as centralized or distributed. In a
centralized scheme, a single hardware device, referred to as a bus controller or
arbiter, is responsible for allocating time on the bus. The device may be a separate
module or part of the CPU.
In a distributed scheme, there is no central controller. Rather, each module
contains access control logic and the modules act together to share the bus.
With both methods of arbitration, the purpose is to designate one device, either
the CPU or an I/O module, as master. The master may then initiate a data
transfer (e.g., read or write) with some other device, which acts as slave for
this particular data exchange.

C) Bus timing:
The timing diagram example on the right describes the Serial Peripheral
Interface (SPI) Bus. Most SPI master nodes have the ability to set the clock
polarity (CPOL) and clock phase (CPHA) with respect to the data. This timing
diagram shows the clock for both values of CPOL as well as the values for the
two data lines (MISO & MOSI) for each value of CPHA. Note that when
CPHA=1 then the data is delayed by one-half clock cycle.

SPI operates in the following way:

• The master determines an appropriate CPOL & CPHA value


• The master pulls down the slave select (SS) line for a specific slave
chip
• The master clocks SCK at a specific frequency
• During each of the 8 clock cycles the transfer is full duplex:
o The master writes on the MOSI line and reads the MISO line
o The slave writes on the MISO line and reads the MOSI line
• When finished the master can continue with another byte transfer or
pull SS high to end the transfer

When a slave's SS line is high then both its MISO and MOSI lines should be
high impedance so as to avoid disrupting a transfer to a different slave. Prior to
SS being pulled low, the MISO & MOSI lines are indicated with a "z" for high
impedance. Also prior to the SS being pulled low the "cycle #" row is
meaningless and is shown greyed-out.

Note that for CPHA=1 the MISO & MOSI lines are undefined until after the
first clock edge and are also shown greyed-out before that.

A more typical timing diagram has just a single clock and numerous data
lines.
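A software ("bit-banged") model of one SPI byte transfer, written as an illustrative Python sketch; the pin-access callables set_sck, set_mosi and read_miso are assumed, and the loopback demo simply wires MISO back to MOSI:

    def spi_transfer_byte(byte_out, set_sck, set_mosi, read_miso, cpol=0, cpha=0):
        """Clock one byte out on MOSI while sampling MISO, MSB first.
        With CPHA = 0 data is sampled on the leading clock edge;
        with CPHA = 1 it is sampled on the trailing edge (delayed half a cycle)."""
        byte_in = 0
        set_sck(cpol)                               # idle clock level defined by CPOL
        for bit in range(7, -1, -1):
            if cpha == 0:
                set_mosi((byte_out >> bit) & 1)     # data valid before the leading edge
                set_sck(1 - cpol)                   # leading edge: data is sampled
                byte_in = (byte_in << 1) | read_miso()
                set_sck(cpol)                       # trailing edge
            else:
                set_sck(1 - cpol)                   # leading edge: data is shifted out
                set_mosi((byte_out >> bit) & 1)
                set_sck(cpol)                       # trailing edge: data is sampled
                byte_in = (byte_in << 1) | read_miso()
        return byte_in

    # Loopback demo: MISO is wired straight to MOSI, so the byte comes back unchanged.
    state = {'sck': 0, 'mosi': 0}
    received = spi_transfer_byte(0xA5,
                                 set_sck=lambda v: state.update(sck=v),
                                 set_mosi=lambda v: state.update(mosi=v),
                                 read_miso=lambda: state['mosi'])
    print(hex(received))   # -> 0xa5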

Q-12. Explain the following types of operations:


• Types of Operations
• Data transfer
• Arithmetic
Ans.-
Types of Operations
A computer has a set of instructions that allows the user to formulate any
data-processing task. To carry out tasks, regardless of whether a computer
has 100 instructions or 300 instructions, its instructions must be capable of
performing the following basic operations:
• Data movement
• Data processing
• Program sequencing and control
The details of the above operations are discussed here. The standard notations
used in the discussion are listed below:
• Processor registers are represented by notations R0, R1, R2, ... and so
on.
• The addresses of memory locations are represented by names
such as LOC, PLACE, MEM, etc.
• I/O registers are represented by names such as DATA IN, DATA OUT
and so on.
• The contents of a register or memory location are denoted by placing
square brackets around the name of the register or memory location.

Data Movement Operations


These operations include the following data transfer operations :
• Data transfer between memory and CPU register
• Data transfer between CPU registers
• Data transfer between processor and input/output devices.

Data Transfer between Memory and CPU Registers


Arithmetic and logical operations are performed primarily on data in CPU
registers. Therefore it is necessary to transfer data from memory to CPU
registers before operation and transfer data from CPU registers to memory
after operation.
We know that both program instructions and data operands are stored in the
memory. To execute an instruction, the processor has to fetch the instruction and
read the operand or operands, if necessary, from the memory. After execution
of the instruction, the processor may store the result or modified operand back in the
memory. Thus, there are two basic operations required in memory
access: 1. Load (read or fetch) 2. Store (write).
Load Operation: In the load operation the contents of the specified memory
location are read by the processor. For a load operation the processor sends the
address of the memory location whose contents are to be read and
generates the appropriate read signal to indicate that it is a read operation.
The memory identifies the addressed memory location and sends the
contents of that memory location to the processor. This operation is
illustrated in Fig. 3.1.
Store Operation: In the store operation the data from the processor is stored
in the specified memory location. For a store operation the processor sends the
address of the memory location where the data is to be written on the address
lines, places the data to be written on the data lines, and generates the appropriate
write signal to indicate the store operation.
The memory identifies the addressed memory location and writes the data
sent by the processor at that location, destroying the former contents of that
location. This operation is illustrated in Fig. 3.2.

Fig. 3.2 Store operation

The data items or instructions which are transferred between memory and
processor may be either a byte or a word. They can be transferred using a single
operation. Mainly this transfer takes place between processor registers and
memory locations. We know that the processor has a small number of built-in
registers. Each register is capable of holding a word of data. These registers are
either the source or the destination of a transfer to or from the memory.

Example: R2 ← [LOC]

This expression states that the contents of memory location LOC are
transferred into the processor register R2.

Data Transfer between CPU Registers


As per the requirement of registers, CPU registers can be freed by
data transfer operations between CPU registers.
Example: R3 ← [R2]
This expression states that the contents of processor register R2 are
transferred into processor register R3.
Data Transfer between Processor and Input/Output Devices
Many applications of a processor-based system require the transfer of data
between external circuitry and the processor, and from the processor to external
circuitry; e.g. the user can give information to the processor using a keyboard,
and the user can see the result or output information from the processor with the
help of a display device. The transfer of data between the keyboard and the processor,
and between the processor and the display device, is called input/output (I/O) data transfer.
Example: R1 ← [DATA IN]
This expression states that the contents of I/O register, DATA IN are
transferred into processor register R1.
The Fig. 3.3 shows the typical bus connection for processor, keyboard and
display. The DATA IN and DATA OUT are the registers by which the processor
reads the contents from keyboard and sends the data for display,
respectively. We know that the rate of data transfer from the keyboard is
limited by typing speed of the user and the rate of data transfer to the
display device is determined by the rate at which characters can be
transmitted over the link between the computer and the display device. The
rate of output data transfer to display is much higher than the input data
rate from the keyboard; however both of these rates are much slower than
the speed of a processor that can execute many millions of instructions per
second. Due to the speed difference between these devices we have to use
synchronisation mechanism for proper transfer of data between them.
As shown in the Fig. 3.3, SIN and SOUT status bits are used to synchronize data
transfer between keyboard and processor, and data transfer between display
and processor, respectively.

When a key is pressed, the corresponding character code is stored in the DATA
IN register and SIN status bit is set to indicate that the valid character code is
available in the DATA IN register. Under program control processor checks
the SIN bit, and when it finds SIN = 1, it reads the contents of the DATA IN
register. After completion of read operation SIN is automatically cleared to 0.
If another key is pressed, the corresponding character code is entered into
the DATA IN register, SIN is again set to 1 and the process repeats.

When the character is to be transferred from the processor to the display,


the DATA OUT register and the SOUT status bit are used. Under program control,
the processor checks the SOUT bit; SOUT = 1 indicates that the display is ready to
receive a character. Therefore, when the processor wants to transfer data to the
display device and SOUT = 1, the processor transfers the data to be displayed to the
DATA OUT register and clears the SOUT status bit to 0. The display device then
reads the character from the DATA OUT register. After acceptance of the data by
the display device, the SOUT bit is automatically set to 1 and the display device is
ready to accept the next data byte.
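The busy-wait (polling) scheme described above can be sketched in Python; the helper callables standing for the status-bit and data-register accesses are hypothetical:

    def read_keyboard(read_status, read_data_in):
        """Wait until SIN = 1, then read the character from DATA IN."""
        while read_status('SIN') == 0:      # poll until a key has been pressed
            pass
        return read_data_in()               # reading DATA IN clears SIN to 0

    def write_display(char, read_status, write_data_out):
        """Wait until SOUT = 1 (display ready), then write the character to DATA OUT."""
        while read_status('SOUT') == 0:     # poll until the display has taken the previous byte
            pass
        write_data_out(char)                # writing DATA OUT clears SOUT to 0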

Data Processing Operations


These operations include the following types of operations.
• Arithmetic operations
• Logical operations
• Shift and Rotate operations
Arithmetic Operations
These operations include basically the following operations.
• Add
• Subtract
• Multiply
• Divide

• Increment
• Decrement
• Negate (Change sign of operand)
Example: ADD R1, R2, R3
This expression states that the contents of processor registers R1 and R2 are
added and the result is stored in the register R3.
The multiplication instruction does the multiplication of two operands and
stores the result in the destination operand. On the other hand the division
instruction does the division of two operands and stores the result in the
destination operand. For example

MUL R1, R0 : R1 ← R1 × R0

DIV R1, R0 : R1 ← R1 ÷ R0
Unfortunately, not all processors provide these instructions. On those
processors, MUL and DIV are implemented by performing basic
operations such as add, subtract, shift and rotate repeatedly.
Logical Operations

These operations include basically the following operations.

• AND
• OR
• NOT
• EXOR
• Compare
• Shift
• Rotate
Example: AND R1, R2, R3

This expression states that the contents of processor registers R1 and R2 are
logically ANDed and the result is stored in the register R3.

Shift and Rotate Operations

These operations shift or rotate the bits of the operand right or left by some
specified number of bit positions.

Logical Shift: There are two logical shift instructions, logical shift left
(LShiftL) and logical shift right (LShiftR). These two instructions shift
an operand by the number of bit positions specified in a count
operand contained in the instruction. The general syntax for these
instructions is:

LShiftL dst, count

LShiftR dst, count
The count operand may be an immediate number, or it may be the contents of
a processor register. Fig. 3.4 (see next page) shows the operation
of the LShiftL and LShiftR instructions.

Arithmetic Shift: In the logical shift operation we have seen that the vacant
positions created within the register by the shift are filled with
zeroes. In an arithmetic right shift it is necessary to repeat the sign bit as the
fill-in bit for the vacated positions. This requirement on right shifting
distinguishes arithmetic shifts from logical shifts; otherwise the two shift
operations are very similar. Fig. 3.5 shows an example of an arithmetic
right shift (AShiftR). The arithmetic left shift (AShiftL) is the same as the logical
left shift.
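The difference between the two right shifts can be shown in a few lines of Python on an assumed 8-bit operand:

    WIDTH = 8
    MASK = (1 << WIDTH) - 1

    def lshift_r(value: int, count: int) -> int:
        """Logical shift right: vacated positions are filled with zeroes."""
        return (value & MASK) >> count

    def ashift_r(value: int, count: int) -> int:
        """Arithmetic shift right: the sign bit is repeated into the vacated positions."""
        sign = value & (1 << (WIDTH - 1))
        result = value & MASK
        for _ in range(count):
            result = (result >> 1) | sign   # re-insert the original sign bit at the top
        return result

    x = 0b10110100                          # negative in 8-bit two's complement
    print(f"{lshift_r(x, 2):08b}")          # 00101101
    print(f"{ashift_r(x, 2):08b}")          # 11101101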
Master of Computer Application (MCA) – Semester 1
MC0062 – Digital Systems, Computer Organization and Architecture
(Book ID: B0680 & B0684)
Assignment Set – 1

Q. 1. With a neat labeled diagram explain the working of Gated Latches.

Answer.

Gated SR Latch

Two possible circuits for gated SR latch are shown in Figure 1. The graphical symbol for gated SR latch is
shown in Figure 2.

(a) Gated SR latch with NOR and AND gates. (b) Gated SR latch with NAND gates.

Figure 1. Circuits for gated SR latch.


Figure 2. The graphical symbol for gated SR latch


The characteristic table for a gated SR latch which describes its behavior is as follows.

    Clk  S  R  Q+   Comments
     0   x  x  Q    No change; typically stable states Q = 0, Q' = 1 or Q = 1, Q' = 0
     1   0  0  Q    No change; typically stable states Q = 0, Q' = 1 or Q = 1, Q' = 0
     1   0  1  0    Reset
     1   1  0  1    Set
     1   1  1  x    Avoid this setting

(Q+ denotes the next state of Q; Q' denotes its complement.)

Figure 3 shows an example timing diagram for gated SR latch (assuming negligible propagation
delays through the logic gates). Notice that during the last clock cycle when Clk = 1, both R = 1 and S = 1.
So as Clk returns to 0, the next state will be uncertain. This explains why we need to avoid the setting in
the last row of the above characteristic table in normal operation of a gated SR latch.
[Waveforms for Clk, R, S, Q and Q'; after the last clock cycle, in which R = 1 and S = 1, Q and Q' are marked as uncertain (?).]
Figure 3. An example timing diagram for gated SR latch.

Gated D Latch

A possible circuit for gated D latch is shown in Figure 4. The graphical symbol for gated D latch is shown
in Figure 5.
Figure 4. A circuit for gated D latch.


Figure 5. The graphical symbol for gated D latch


The characteristic table for a gated D latch which describes its behavior is as follows.

    Clk  D  Q+   Comments
     0   x  Q    No change
     1   0  0
     1   1  1

Figure 6 shows an example timing diagram for gated D latch (assuming negligible propagation delays
through the logic gates).
[Waveforms for Clk, D and Q at times t1–t4: Q follows D while Clk = 1 and holds its last value while Clk = 0.]
Figure 6. An example timing diagram for gated D latch.
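To make the level-sensitive behaviour of the gated D latch concrete, here is a toy Python model (illustrative only): while Clk = 1 the output follows D; while Clk = 0 it holds its last value.

    class GatedDLatch:
        """Level-sensitive storage element: transparent while Clk = 1."""
        def __init__(self):
            self.q = 0

        def update(self, clk: int, d: int) -> int:
            if clk == 1:          # latch is transparent: output follows the data input
                self.q = d
            return self.q         # while clk = 0 the stored value is simply held

    latch = GatedDLatch()
    for clk, d in [(1, 1), (0, 0), (0, 1), (1, 0)]:
        print(f"Clk={clk} D={d} Q={latch.update(clk, d)}")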

A Negative-edge-triggered Master-Slave D Flip-Flop


A possible circuit for a negative-edge-triggered master-slave D flip-flop is shown in Figure 7. The
graphical symbol for a negative-edge-triggered D flip-flop is shown in Figure 8.
Figure 7. A negative-edge-triggered master-slave D flip-flop: two gated D latches in cascade, the master clocked by Clock (output Qm) and the slave clocked by the complement of Clock (output Qs = Q).

Figure 8. The graphical symbol for negative-edge-triggered D flip-flop.

Note that unlike latches which are level sensitive (i.e., the output of a latch is controlled by the level of
the clock input), flip-flops are edge triggered (i.e., the output changes only at the point in time when the
clock changes from one value to the other). The master-slave D flip-flop shown in Figure 7 responds on
the negative edge (i.e., the edge where the clock signal changes from 1 to 0) of the clock signal. Hence it
is negative-edge-triggered. The circuit can be changed to respond to the positive clock edge by
connecting the slave stage directly to the clock and the master stage to the complement of the clock.
Figure 9 shows an example timing diagram for negative-edge-triggered D flip-flop (assuming
negligible propagation delays through the logic gates).
Figure 9. An example timing diagram for negative-edge-triggered master-slave D flip-flop (waveforms for Clock, D, Qm and Q = Qs at times t1 to t3).
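
The master-slave arrangement can be modelled as two gated D latches driven by complementary clocks. A rough Python sketch follows (the latch class is repeated so the example is self-contained; all names are illustrative):

    class GatedDLatch:
        def __init__(self):
            self.q = 0

        def update(self, clk, d):
            if clk == 1:
                self.q = d
            return self.q

    class MasterSlaveDFF:
        def __init__(self):
            self.master = GatedDLatch()   # transparent while Clock = 1
            self.slave = GatedDLatch()    # transparent while Clock = 0

        def update(self, clock, d):
            qm = self.master.update(clock, d)      # master follows D while Clock = 1
            q = self.slave.update(1 - clock, qm)   # slave copies Qm while Clock = 0
            return q                               # so Q changes only on the 1 -> 0 edge

    ff = MasterSlaveDFF()
    for clock, d in [(1, 1), (0, 1), (1, 0), (0, 0)]:
        print(clock, d, ff.update(clock, d))       # Q becomes 1, then 0, each time on a falling edge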

A Positive-edge-triggered D Flip-Flop
Besides building a positive-edge-triggered master-slave D flip-flop as mentioned in our preceding
discussion, we can accomplish the same task by a circuit presented in Figure 10. It requires only six
NAND gates and, hence, fewer logic gates.

Figure 10. A positive-edge-triggered D flip-flop built from six NAND gates: gates 1 to 4 produce the internal signals P1 to P4 from D and Clock, and gates 5 and 6 form the output latch producing Q and Q'.
The operation of the circuit in Figure 10 is as follows. When Clock = 0, the outputs of gates 2 and 3 are
high. Thus P1 = P2 = 1, which maintains the output latch, comprising gates 5 and 6, in its present state.
At the same time, the signal P3 is equal to D, and P4 is equal to its complement D'. When Clock changes
to 1, the following changes take place. The values of P3 and P4 are transmitted through gates 2 and 3 to
cause P1 = D' and P2 = D, which sets Q = D and Q' = D'. To operate reliably, P3 and P4 must be stable
when Clock changes from 0 to 1. Hence the setup time of the flip-flop is equal to the delay from the D
input through gates 4 and 1 to P3. The hold time is given by the delay through gate 3 because once P2
is stable, changes in D no longer matter.
For proper operation it is necessary to show that, after Clock changes to 1, any further changes in D
will not affect the output latch as long as Clock = 1. We have to consider two cases. Suppose first that D =
0 at the positive edge of the clock. Then P2 = 0, which will keep the output of gate 4 equal to 1 as long as
Clock = 1, regardless of the value of the D input. The second case is if D = 1 at the positive edge
of the clock. Then P1 = 0, which forces the outputs of gates 1 and 3 to be equal to 1, regardless of the D
input. Therefore, the flip-flop ignores changes in the D input while Clock = 1.
Figure 11 shows the graphical symbol for a positive-edge-triggered D flip-flop.

Figure 11. The graphical symbol for positive-edge-triggered D flip-flop.


Figure 12 shows an example timing diagram for positive-edge-triggered D flip-flop (assuming
negligible propagation delays through the logic gates).
Figure 12. An example timing diagram for positive-edge-triggered D flip-flop (waveforms for Clock, D and Q at times t1 to t3).

D Flip-Flops with Clear and Preset


In using flip-flops, it is often necessary to force the flip-flops into a known initial state. A simple way of
providing the clear and preset capability is to add extra inputs to the flip-flops. Figure 13 shows a master-
slave D flip-flop with Clear and Preset. Placing a 0 on the Clear input will force the flip-flop into the state
Q = 0. If Clear = 1, then this input has no effect on the NAND gates. Similarly, Preset = 0 forces the
flip-flop into the state Q = 1, while Preset = 1 has no effect. To denote that the Clear and Preset inputs are
active when their value is 0, an overbar is placed on their names in Figure 13. Note that a circuit that
uses this flip-flop should not try to force both Clear and Preset to 0 at the same time. A graphical symbol
for this flip-flop is shown in Figure 14.
Figure 13. A circuit for master-slave D flip-flop with Clear and Preset.

Figure 14. The graphical symbol for master-slave D flip-flop with Clear and Preset (active-low Clear and Preset inputs).
A similar modification can be done on the positive-edge-triggered D flip-flop of Figure 10, as indicated
in Figure 15. A graphical symbol for this flip-flop is shown in Figure 16. Again, both Clear and Preset
inputs are active low. They do not disturb the flip-flop when they are equal to 1.

Figure 15. A positive-edge-triggered D flip-flop with Clear and Preset.

Figure 16. The graphical symbol for positive-edge-triggered D flip-flop with Clear and Preset.
In the circuits in Figures 13 and 15, the effect of a low signal on either the Clear or Preset input is
immediate. For example, if Clear = 0 then the flip-flop goes into the state Q = 0 immediately, regardless of
the value of the clock signal. In such a circuit, where the Clear signal is used to clear a flip-flop without
regard to the clock signal, we say that the flip-flop has an asynchronous clear. In practice, it is often
preferable to clear the flip-flops on the active edge of the clock. Such synchronous clear can be
accomplished as shown in Figure 17. The flip-flop operates normally when the Clear input is equal to 1.
But if Clear goes to 0, then on the next positive edge of the clock the flip-flop will be cleared to 0.
Figure 17. Synchronous reset for a D flip-flop.

Q. 2. With a neat labeled diagram explain the working of Master-Slave JK Flip Flop.
Answer.
The Master-Slave JK Flip-flop
The Master-Slave Flip-Flop is basically two gated SR flip-flops connected
together in a series configuration with the slave having an inverted clock
pulse. The outputs Q and Q' from the "Slave" flip-flop are fed back to the
inputs of the "Master", with the outputs of the "Master" flip-flop being
connected to the two inputs of the "Slave" flip-flop. This feedback
configuration from the slave's output to the master's input gives the
characteristic toggle of the JK flip-flop as shown below.
The Master-Slave JK Flip-Flop

The input signals J and K are connected to the gated "master" SR flip-flop
which "locks" the input condition while the clock (Clk) input is "HIGH" at logic
level "1". As the clock input of the "slave" flip-flop is the inverse
(complement) of the "master" clock input, the "slave" SR flip-flop does not
toggle. The outputs from the "master" flip-flop are only "seen" by the gated
"slave" flip-flop when the clock input goes "LOW" to logic level "0". When the
clock is "LOW", the outputs from the "master" flip-flop are latched and any
additional changes to its inputs are ignored. The gated "slave" flip-flop now
responds to the state of its inputs passed over by the "master" section. Then
on the "Low-to-High" transition of the clock pulse the inputs of the "master"
flip-flop are fed through to the gated inputs of the "slave" flip-flop and on the
"High-to-Low" transition the same inputs are reflected on the output of the
"slave" making this type of flip-flop edge or pulse-triggered.
Then, the circuit accepts input data when the clock signal is "HIGH", and
passes the data to the output on the falling edge of the clock signal. In other
words, the Master-Slave JK Flip-flop is a "Synchronous" device as it only
passes data with the timing of the clock signal.
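
A behavioural sketch of the JK action (hold, reset, set, toggle on each complete clock pulse) in Python; the class and method names are illustrative, not from the text:

    class MasterSlaveJK:
        def __init__(self):
            self.q = 0

        def clock_pulse(self, j, k):
            # Apply one full clock pulse (high then low) with the given J, K inputs;
            # the output changes on the falling edge.
            if j == 0 and k == 0:
                pass                 # hold
            elif j == 0 and k == 1:
                self.q = 0           # reset
            elif j == 1 and k == 0:
                self.q = 1           # set
            else:
                self.q = 1 - self.q  # toggle (J = K = 1)
            return self.q

    ff = MasterSlaveJK()
    print([ff.clock_pulse(1, 1) for _ in range(4)])  # [1, 0, 1, 0] - toggles on each pulse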

Q.3. Describe the functioning of Negative edge triggered 2-Bit Ripple up counters with relevant diagram(s).
Answer.
A two-bit ripple counter uses two flip-flops. There are four possible states in
a 2-bit up count, i.e. 00, 01, 10 and 11.

• The counter is initially assumed to be in the state 00, where the outputs of
the two flip-flops are noted as Q1Q0, with Q1 forming the MSB and Q0 forming
the LSB.
• On the negative edge of the first clock pulse, the output of the first flip-flop
FF1 toggles its state. Thus Q1 remains at 0, Q0 toggles to 1, and the
counter state is now read as 01.
• On the negative edge of the next input clock pulse, FF1 toggles again and
Q0 = 0. The output Q0 acts as the clock signal for the second flip-flop FF2,
and this 1-to-0 transition is a negative edge for FF2, which therefore toggles
its state to Q1 = 1. The counter state is now read as 10.
• On the next negative edge of the input clock to FF1, output Q0 toggles to
1. This transition from 0 to 1 is a positive edge for FF2, so output Q1
remains at 1. The counter state is now read as 11.
• On the next negative edge of the input clock, Q0 toggles to 0. This
transition from 1 to 0 acts as a negative-edge clock for FF2 and its output
Q1 toggles to 0. Thus the starting state 00 is reached again, as the count
sequence in the sketch below also shows.
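
A behavioural Python sketch of this counter (the class and names are illustrative, not from the text): each flip-flop toggles on a falling edge of its own clock input, and Q0 serves as the clock for the second stage.

    class ToggleFF:
        def __init__(self):
            self.q = 0
            self._prev_clk = 0

        def clock(self, clk):
            # Toggle on a falling (1 -> 0) edge of clk; return the new output.
            if self._prev_clk == 1 and clk == 0:
                self.q = 1 - self.q
            self._prev_clk = clk
            return self.q

    ff1, ff2 = ToggleFF(), ToggleFF()
    for pulse in range(5):
        for clk in (1, 0):            # one full clock pulse: rise then fall
            q0 = ff1.clock(clk)
            q1 = ff2.clock(q0)        # Q0 acts as the clock input of FF2
        print(f"{q1}{q0}")            # prints 01, 10, 11, 00, 01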
Q.4 Describe the functioning of Three and Four bit Synchronous
Binary Up-counter.
Answer.
3-bit Synchronous Binary Up-counter.
Q0 changes on each clock pulse. Observation of FF1 shows that it toggles its
output Q1 only when Q0 is 1 (HIGH); this occurs at the negative edges of the
second, fourth, sixth and eighth clock pulses. Therefore Q0 is connected to
the J and K inputs of FF1.
Similarly, Q2 changes state only when Q0 = 1 and Q1 = 1. This is realized
with AND gate logic: the J and K inputs of the flip-flop producing Q2 are
driven by an AND gate whose output is the function Q0·Q1.
Clock pulse Q2 Q1 Q0
0 0 0 0
1 0 0 1
2 0 1 0
3 0 1 1
4 1 0 0
5 1 0 1
6 1 1 0
7 1 1 1
8 0 0 0

Three bit synchronous up counter


4-bit Synchronous Binary Up-counter.
The J and K input control for the first three flip-flops is the same as presented
for the three-stage counter. The fourth stage output, Q3, changes state only
twice in the sequence, and both of these transitions occur following the times
when QA, QB and QC (the three lower-order outputs) are all HIGH. This
condition is detected by a three-input AND gate. At all other times the J and K
inputs of the fourth flip-flop are LOW, and it is in a no-change condition.
Time diagram for 4-bit synchronous up-counter
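
The gating logic above can be expressed in a few lines of Python. The following is a hedged sketch (variable names are illustrative) of the 3-bit synchronous up-counter, where a stage toggles on a clock pulse only if all lower-order stages are currently 1:

    q = [0, 0, 0]                             # q[0] = Q0 (LSB), q[2] = Q2 (MSB)

    for pulse in range(1, 9):
        toggle = [True,                       # Q0 toggles on every pulse (J = K = 1)
                  q[0] == 1,                  # Q1 toggles when Q0 = 1
                  q[0] == 1 and q[1] == 1]    # Q2 toggles when Q0·Q1 = 1
        q = [bit ^ int(t) for bit, t in zip(q, toggle)]
        print(pulse, f"{q[2]}{q[1]}{q[0]}")   # 001, 010, 011, 100, 101, 110, 111, 000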
Q.5 Explain the design of modulus counters.

Answer.

A single flip-flop gives a two-state output and is referred to as a mod-2
counter. With two flip-flops, four output states can be counted in ascending or
in descending order, giving a mod-4 (mod-2^2) counter. With n flip-flops a
mod-2^n count is possible, either ascending or descending.
To design an asynchronous counter that counts to M, a mod-M counter where M
is not a power of 2, the following procedure is used:
• Find the number of flip-flops required, n = log2 M. If M ≠ 2^n the
calculated value is not an integer; in that case select n by rounding up to the
next integer.
• First write the sequence of counting up to M, either in ascending or in
descending order.
• Tabulate the value at which the flip-flops must be reset in a mod-M count.
• From the tabulated value, identify the flip-flop outputs that are 1 at the
count on which the counter must reset.
• Tap the outputs of these flip-flops and feed them to a NAND gate whose
output is connected to the clear pins, as the sketch below illustrates.
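
As a worked illustration of this procedure (a hedged sketch, not taken from the text): a mod-6 counter needs n = 3 flip-flops, since 2^2 < 6 <= 2^3. The reset count is 6 = 110, so outputs Q2 and Q1 feed the NAND gate that clears the flip-flops, and the visible count sequence is 000 to 101.

    count = 0
    for pulse in range(8):
        count = (count + 1) & 0b111        # underlying 3-bit ripple count
        q2, q1 = (count >> 2) & 1, (count >> 1) & 1
        if q2 and q1:                      # NAND output goes low when the count reaches 110
            count = 0                      # asynchronous clear back to 000
        print(f"{count:03b}")              # 001, 010, 011, 100, 101, 000, 001, 010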

Q.6 Describe Serial in and Serial out shift registers.

Ans.
Serial-in, serial-out shift registers delay data by one clock time for each
stage. They will store a bit of data for each register. A serial-in, serial-out
shift register may be one to 64 bits in length, longer if registers or packages
are cascaded.
Below is a single-stage shift register receiving data which is not synchronized
to the register clock. The "data in" at the D pin of the type D FF (Flip-Flop)
does not change levels when the clock changes from low to high. We may want
to synchronize the data to a system-wide clock in a circuit board to improve
the reliability of a digital logic circuit.
The obvious point (as compared to the figure below) illustrated above is that
whatever "data in" is present at the D pin of a type D FF is transferred from
D to output Q at clock time. Since our example shift register uses positive
edge sensitive storage elements, the output Q follows the D input when the
clock transitions from low to high as shown by the up arrows on the diagram
above. There is no doubt what logic level is present at clock time because
the data is stable well before and after the clock edge. This is seldom the
case in multi-stage shift registers. But, this was an easy example to start
with. We are only concerned with the positive, low to high, clock edge. The
falling edge can be ignored. It is very easy to see Q follow D at clock time
above. Compare this to the diagram below where the "data in" appears to
change with the positive clock edge.

Since "data in" appears to changes at clock time t1 above, what does the
type D FF see at clock time? The short over simplified answer is that it sees
the data that was present at D prior to the clock. That is what is transfered
to Q at clock time t1. The correct waveform is QC. At t1 Q goes to a zero if it is
not already zero. The D register does not see a one until time t2, at which
time Q goes high.
Since data present at D is clocked to Q at clock time, and Q cannot
change until the next clock time, the D FF delays data by one clock period,
provided that the data is already synchronized to the clock. The QA waveform
is the same as "data in" with a one clock period delay.

Three type D Flip-Flops are cascaded Q to D and the clocks paralleled to


form a three stage shift register above.
Type JK FFs cascaded Q to J, Q' to K with clocks in parallel to yield an
alternate form of the shift register above.
A serial-in/serial-out shift register has a clock input, a data input, and a data
output from the last stage. In general, the other stage outputs are not
available; otherwise, it would be a serial-in, parallel-out shift register.
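
A behavioural Python sketch of a serial-in, serial-out shift register (the three-stage length and the names are illustrative): each input bit is captured by the first stage on one clock edge and then advances one stage per subsequent edge, emerging at the serial output of the last stage.

    stages = [0, 0, 0]                      # Q outputs of three cascaded D flip-flops

    def clock_pulse(data_in):
        # On each clock edge every stage takes the value the previous stage held
        # before the edge; the serial output is the Q of the last stage.
        global stages
        stages = [data_in] + stages[:-1]
        return stages[-1]

    serial_in = [1, 0, 1, 1, 0, 0, 0]
    serial_out = [clock_pulse(bit) for bit in serial_in]
    print(serial_out)                       # [0, 0, 1, 0, 1, 1, 0] - "data in" delayed by the register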
Q. 7. Explain the main memory system.

Answer.
Main Memory
The main memory stores data and instructions. Main memories are usually
built from dynamic ICs known as dynamic RAMs (DRAMs). These semiconductor ICs
can also implement static memories, referred to as static RAMs (SRAMs).
SRAMs are faster, but their cost per bit is higher. They are often used to build
caches.
Types of Random-Access Semiconductor Memory
Dynamic RAM: stores each bit as charge on a capacitor and therefore requires
periodic refreshing.
Static RAM: stores each bit in a flip-flop built from logic gates; applying
power is enough, no refreshing is needed.
A dynamic RAM cell is simpler and hence smaller than a static RAM cell;
dynamic RAM is therefore more dense and less expensive, but it requires
supporting refresh circuitry. Static RAMs are faster than dynamic RAMs.
ROM: The data is actually wired in at the factory and can never be altered.
PROM: Programmable ROM. It can only be programmed once after its
fabrication and requires a special device to program it.
EPROM: Erasable Programmable ROM. It can be programmed multiple times.
The whole capacity needs to be erased by ultraviolet radiation before a new
programming activity. It can be partially programmed.
EEPROM: Electrically Erasable Programmable ROM. It is erased and programmed
electrically and can be partially programmed. A write operation takes
considerably longer than a read operation.
Each more functional ROM is more expensive to build and has a smaller
capacity than less functional ROMs.

Q. 8. Describe various Cache replacement algorithms.

Ans.

For associative and set-associative mapping a replacement algorithm is needed. The
most common algorithms are discussed here. When a new block is to be
brought into the cache and all the positions that it may occupy are full,
the cache controller must decide which of the old blocks to overwrite.
Because programs usually stay in localized areas for a reasonable period
of time, there is a high probability that blocks that have been referenced
recently will be referenced again soon. Therefore, when a block is to be
overwritten, the block that has not been referenced for the longest time is
overwritten. This block is called the least recently used (LRU) block, and the
technique is called the LRU replacement algorithm. In order to use the LRU
algorithm, the cache controller must track the LRU block as computation
proceeds.

There are several replacement algorithms that require less overhead than
the LRU method. One method is to remove the oldest block from a full set when
a new block must be brought in. This method is referred to as FIFO (first-in,
first-out). In this technique no updating is needed when a hit occurs. However,
because the algorithm does not consider the recent pattern of access to blocks
in the cache, it is not as effective as the LRU approach in choosing the best
block to remove. Another method, called least frequently used (LFU), replaces
the block in the set that has experienced the fewest references; it is
implemented by associating a counter with each slot. Yet another, the simplest
algorithm, called random replacement, is to choose the block to be overwritten
at random.
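
A small Python sketch of the LRU policy for one cache set, assuming a 4-way set-associative organization (the set size, tags and names are illustrative): on a hit the block is marked most recently used, and on a miss into a full set the least recently used block is evicted.

    from collections import OrderedDict

    class LRUSet:
        def __init__(self, ways=4):
            self.ways = ways
            self.blocks = OrderedDict()          # block tag -> data, ordered LRU .. MRU

        def access(self, tag):
            if tag in self.blocks:
                self.blocks.move_to_end(tag)     # hit: mark the block most recently used
                return "hit"
            if len(self.blocks) == self.ways:    # miss and the set is full:
                self.blocks.popitem(last=False)  # evict the least recently used block
            self.blocks[tag] = None              # bring the new block in
            return "miss"

    s = LRUSet()
    print([s.access(t) for t in ["A", "B", "C", "D", "A", "E", "B"]])
    # ['miss', 'miss', 'miss', 'miss', 'hit', 'miss', 'miss']
    # the final access to B misses because B was the LRU block and was evicted to make room for E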

Q. 9. Explain I/O module and its usage.

Ans.

Input-output interface provides a method for transferring information between
internal storage and external I/O devices. Peripherals connected to a
computer need special communication links for interfacing them with the
central processing unit. The purpose of the communication link is to resolve
the differences that exist between the central computer and each peripheral.
The major differences are:

1. Peripherals are electromechanical and electromagnetic devices


and their manner of operation is different from the operation of
the CPU and memory, which are electronic devices. Therefore, a
conversion of signal values may be required.
2. The data transfer rate of peripherals is usually slower than the
transfer rate of the CPU, and consequently, a synchronization
mechanism may be needed.
3. Data codes and formats in peripherals differ from the word
format in the CPU and memory.
4. The operating modes of peripherals are different from each other
and each must be controlled so as not to disturb the operation of
other peripherals connected to the CPU.
Input/Output Module
• Interface to CPU and Memory
• Interface to one or more peripherals
• Generic model of an I/O module (see Diagram 6.1)

I/O Module Function


• Control & Timing
• CPU Communication
• Device Communication
• Data Buffering
• Error Detection
I/O Steps
• CPU checks I/O module device status
• I/O module returns status
• If ready, CPU requests data transfer
• I/O module gets data from device
• I/O module transfers data to CPU
• Variations for output, DMA, etc.
I/O Module Decisions
• Hide or reveal device properties to CPU
• Support multiple or single device
• Control device functions or leave for CPU
• Also O/S decisions
• e.g. Unix treats everything it can as a file
Input Output Techniques
• Programmed
• Interrupt driven
• Direct Memory Access (DMA)
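
The programmed I/O technique follows the status-checking steps listed above under "I/O Steps". A hedged Python sketch (the device model and method names are invented for illustration, not taken from the text):

    class IOModule:
        # A toy I/O module with a status register and a buffered block of data words.
        def __init__(self, data):
            self._data = list(data)

        def status(self):
            return "READY" if self._data else "IDLE"

        def read_word(self):
            return self._data.pop(0)            # the module has already buffered the device data

    def programmed_input(module):
        received = []
        while module.status() == "READY":       # CPU checks the module status; module returns it
            word = module.read_word()           # CPU requests data; module gets it from the device
            received.append(word)               # module transfers the data word to the CPU
        return received

    print(programmed_input(IOModule([0x41, 0x42, 0x43])))   # [65, 66, 67]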

Q. 10. Write about the following with respect to CPU Control:

• Functional Requirements
• Control Signals
Answer.

Functional Requirements
The functional requirements of control unit are those functions that the
control unit must perform. And these are the basis for the design and
implementation of the control unit.
A three-step process leads to the characterization of the Control Unit:
• Define the basis elements of the processor
• Describe the micro-operations that the processor performs
• Determine the functions that the control unit must perform to cause
the micro-operations to be performed.

1. Basic Elements of Processor


The following are the basic functional elements of a CPU:
• ALU: is the functional essence of the computer.
• Registers: are used to store data internal to the CPU.
2. Types of Micro-operation
The operations performed by the processor consist of sequences of
micro-operations. All micro-operations fall into one of the following categories:
• Transfer data between registers
• Transfer data from a register to an external interface
• Transfer data from an external interface to a register
• Perform arithmetic or logical operations
3. Functions of Control Unit
Now we define more explicitly the function of the control unit. The control unit
performs two tasks:
• Sequencing: The control unit causes the CPU to step through a series
of micro-operations in proper sequence based on the program being
executed.
• Execution: The control unit causes each micro-operation to be
performed.

For the control unit to perform its function, it must have inputs that allow it
to determine the state of the system and outputs that allow it to control the
behavior of the system. These are the external specifications of the control
unit. Internally, the control unit must have the logic required to perform
sequencing and execution functions.

Control unit Inputs


The possible inputs for the control units are:
Clock: The control unit uses clock to maintain the timings.
Instruction register: Op-code of the current instruction is used to determine
which micro-instructions to be performed during the execution cycle.
Flags: These are needed by the control unit to determine the status of the
CPU and outcome of previous ALU operations.
Example: As seen earlier, for the instruction ISZ (increment and skip if
zero), the control unit will increment the PC if the zero flag is set.
Control signals from the control bus: The control bus portion of the system bus
provides signals to the control unit, such as interrupt signals and
acknowledgements.
Control Signals – output
The following are the control signals which are output of the control unit:
Control signals within CPU: There are two types
1. Signals that cause data to be moved from one register to another.
2. Signals that activate specific ALU functions
Control signals to control bus: There are two types
1. Signals to memory
2. Signals to I/O modules
Q. 11. Discuss and differentiate Hardwired and Micro-programmed
control unit.
Ans. Hardwired Control Unit

There are two major types of control organization: hardwired control and
microprogrammed control. In the hardwired organization, the control logic is
implemented with gates, flip-flops, decoders, and other digital circuits. It has
the advantage that it can be optimized to produce a fast mode of operation.
In the microprogrammed organization, the control information is stored in a
control memory. The control memory is programmed to initiate the required
sequence of microoperations. A hardwired control, as the name implies, re-
quires changes in the wiring among the various components if the design
has to be modified or changed. In the microprogrammed control, any
required changes or modifications can be done by updating the
microprogram in control memory.
The block diagram of the control unit is shown in Fig. 5-6. It consists of two
decoders, a sequence counter, and a number of control logic gates. An
instruction read from memory is placed in the instruction register (IR). The
position of this register in the common bus system is indicated in Fig. 5-4.
The instruction register is shown again in Fig. 5-6, where it is divided into
three parts: the I bit, the operation code, and bits 0 through 11. The
operation code in bits 12 through 14 are decoded with a 3 x 8 decoder. The
eight outputs of the decoder are designated by the symbols D0 through D7.
The subscripted decimal number is equivalent to the binary value of the
corresponding operation code. Bit 15 of the instruction is transferred to a
flip-flop designated by the symbol I. Bits 0 through 11 are applied to the
control logic gates. The 4-bit sequence counter can count in binary from 0
through 15. The outputs of the counter are decoded into 16 timing signals T0
through T15.

Microprogrammed Control Unit

The control memory is assumed to be a ROM, within which all control
information is permanently stored. The control memory address register
specifies the address of the microinstruction, and the control data register
holds the microinstruction read from memory.
The microinstruction contains a control word that specifies one or more
micro-operations for the data processor. Once these operations are
executed, the control must determine the next address. The location of the
next microinstruction may be the one next in sequence, or it may be located
somewhere else in the control memory. For this reason it is necessary to use
some bits of the present microinstruction to control the generation of the
address of the next microinstruction. The next address may also be a
function of external input conditions. While the microoperations are being
executed, the next address is computed in the next address generator circuit
and then transferred into the control address register to read the next
microinstruction. Thus a microinstruction contains bits for initiating
microoperations in the data processor part and bits that determine the
address sequence for the control memory.
Block diagram of microprogrammed control: external input and next-address information feed the next-address generator (sequencer), which loads the control address register; the addressed word of the control memory (ROM) is read into the control data register, which supplies the control word.

The next address generator is sometimes called a microprogram sequencer,


as it determines the address sequence that is read from control memory.
The address of the next microinstruction can be specified in several ways,
depending on the sequencer inputs. Typical functions of a microprogram
sequencer are incrementing the control address register by one, loading into
the control address register an address from control memory, transferring an
external address, or loading an initial address to start the control operations.
The control data register holds the present microinstruction while the next
address is computed and read from memory. The data register is sometimes
called a pipeline register. It allows the execution of the microoperations
specified by the control word simultaneously with the generation of the next
microinstruction. This configuration requires a two-phase clock, with one
clock applied to the address register and the other to the data register.
The system can operate without the control data register by applying a
single-phase clock to the address register. The control word and next-
address information are taken directly from the control memory. It must be
realized that a ROM operates as a combinational circuit, with the address
value as the input and the corresponding word as the output. The content of
the specified word in ROM remains in the output wires as long as its address
value remains in the address register. No read signal is needed as in a
random-access memory. Each clock pulse will execute the microoperations
specified by the control word and also transfer a new address to the control
address register. In the example that follows we assume a single-phase clock
and therefore we do not use a control data register. In this way the address
register is the only component in the control system that receives clock
pulses. The other two components: the sequencer and the control memory
are combinational circuits and do not need a clock.
The main advantage of the microprogrammed control is the fact that once
the hardware configuration is established, there should be no need for
further hardware or wiring changes. If we want to establish a different
control sequence for the system, all we need to do is specify a different set
of microinstructions for control memory. The hardware configuration should
not be changed for different operations; the only thing that must be changed
is the microprogram residing in control memory.
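
To make the address sequencing concrete, the following is a hedged Python sketch (the microinstruction format, fields and routine addresses are invented for illustration, not taken from any real control store): each microinstruction carries a control word plus next-address information, and the control address register (CAR) is the only element that receives clock pulses.

    control_memory = [
        # (control word, next-address condition, target address)
        ("PC -> MAR; read memory",  None,   None),   # address 0: start of fetch
        ("memory -> IR",            None,   None),   # address 1
        ("decode IR",               "map",  None),   # address 2: map opcode to a routine
        ("ALU add",                 "goto", 0),      # address 3: ADD routine, then back to fetch
        ("ALU subtract",            "goto", 0),      # address 4: SUB routine, then back to fetch
    ]

    def run(opcode_map, opcode, steps=6):
        car = 0                                             # control address register
        for _ in range(steps):
            control_word, cond, target = control_memory[car]   # read the control memory
            print(f"CAR={car}: {control_word}")                 # issue the control word
            if cond == "map":
                car = opcode_map[opcode]                    # branch to the routine for this opcode
            elif cond == "goto":
                car = target                                # explicit next address from the microinstruction
            else:
                car += 1                                    # default: increment the CAR

    run({"ADD": 3, "SUB": 4}, "ADD")                        # fetch, decode, ADD routine, then fetch again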

Q. 12. Explain:

RISC and CISC

An important aspect of computer architecture is the design of the instruction


set for the processor. The instruction set chosen for a particular computer
determines the way that machine language programs are constructed. Early
computers had small and simple instruction sets, forced mainly by the need
to minimize the hardware used to implement them. As digital hardware
became cheaper with the advent of integrated circuits, computer instructions
tended to increase both in number and complexity. Many computers have
instruction sets that include more than 100 and sometimes even more than
200 instructions. These computers also employ a variety of data types and a
large number of addressing modes. The trend toward computer hardware
complexity was influenced by various factors, such as upgrading existing
models to provide more customer applications, adding instructions that
facilitate the translation from high-level language into machine language
programs, and striving to develop machines that move functions from
software implementation into hardware implementation. A computer with a
large number of instructions is classified as a complex instruction set
computer, abbreviated CISC. In the early 1980s, a number of computer
designers recommended that computers use fewer instructions with simple
constructs so they can be executed much faster within the CPU without
having to use memory as often. This type of computer is classified as a
reduced instruction set computer or RISC. In this section we introduce the
major characteristics of CISC and RISC architectures and then present the
instruction set and instruction format of a RISC processor.

CISC Characteristics
The design of an instruction set for a computer must take into consideration
not only machine language constructs, but also the requirements imposed
on the use of high-level programming languages. The translation from high-
level to machine language programs is done by means of a compiler
program. One reason for the trend to provide a complex instruction set is
the desire to simplify the compilation and improve the overall computer
performance. The task of a compiler is to generate a sequence of machine
instructions for each high-level language statement. The task is simplified if
there are machine instructions that implement the statements directly. The
essential goal of a CISC architecture is to attempt to provide a single
machine instruction for each statement that is written in a high-level
language. Examples of CISC architectures are the Digital Equipment
Corporation VAX computer and the IBM 370 computer.

Another characteristic of CISC architecture is the incorporation of


variable-length instruction formats. Instructions that require register
operands may be only two bytes in length, but instructions that need two
memory addresses may need five bytes to include the entire instruction
code. If the computer has 32-bit words (four bytes), the first instruction
occupies half a word, while the second instruction needs one word in
addition to one byte in the next word. Packing variable instruction formats in
a fixed-length memory word requires special decoding circuits that count
bytes within words and frame the instructions according to their byte length.

The instructions in a typical CISC processor provide direct


manipulation of operands residing in memory. For example, an ADD
instruction may specify one operand in memory through index addressing
and a second operand in memory through a direct addressing. Another
memory location may be included in the instruction to store the sum. This
requires three memory references during execution of the instruction.
Although CISC processors have instructions that use only processor
registers, the availability of other modes of operation tends to simplify high-
level language compilation. However, as more instructions and addressing
modes are incorporated into a computer, more hardware logic is needed
to implement and support them, and this may cause the computations to
slow down. In summary, the major characteristics of CISC architecture are:

1. A large number of instructions—typically from 100 to 250 instructions

2. Some instructions that perform specialized tasks and are used infrequently

3. A large variety of addressing modes—typically from 5 to 20 different modes

4. Variable-length instruction formats

5. Instructions that manipulate operands in memory

RISC Characteristics

The concept of RISC architecture involves an attempt to reduce execution


time by simplifying the instruction set of the computer. The major
characteristics of a RISC processor are:

1. Relatively few instructions

2. Relatively few addressing modes

3. Memory access limited to load and store instructions

4. All operations done within the registers of the CPU

5. Fixed-length, easily decoded instruction format

6. Single-cycle instruction execution

7. Hardwired rather than microprogrammed control

The small set of instructions of a typical RISC processor consists mostly of


register-to-register operations, with only simple load and store operations for
memory access. Thus each operand is brought into a processor register with
a load instruction. All computations are done among the data stored in
processor registers. Results are transferred to memory by means of store
instructions. This architectural feature simplifies the instruction set and
encourages the optimization of register manipulation. The use of only a few
addressing modes results from the fact that almost all instructions have
simple register addressing. Other addressing modes may be included, such
as immediate operands and relative mode.

By using a relatively simple instruction format, the instruction length can be


fixed and aligned on word boundaries. An important aspect of RISC instruc-
tion format is that it is easy to decode. Thus the operation code and register
fields of the instruction code can be accessed simultaneously by the control.
By simplifying the instructions and their format, it is possible to simplify the
control logic. For faster operations, a hardwired control is preferable over a
microprogrammed control.

A characteristic of RISC processors is their ability to execute one instruction


per clock cycle. This is done by overlapping the fetch, decode, and execute
phases of two or three instructions by using a procedure referred to as
pipelining. A load or store instruction may require two clock cycles because
access to memory takes more time than register operations. Efficient
pipelining, as well as a few other characteristics, are sometimes attributed to
RISC, although they may exist in non-RISC architectures as well. Other
characteristics attributed to RISC architecture are:

• A relatively large number of registers in the processor unit
• Use of overlapped register windows to speed up procedure call and return
• Efficient instruction pipeline
• Compiler support for efficient translation of high-level language programs into machine language programs

A large number of registers is useful for
storing intermediate results and for optimizing operand references. The
advantage of register storage as opposed to memory storage is that
registers can transfer information to other registers much faster than the
transfer of information to and from memory. Thus register-to-memory
operations can be minimized by keeping the most frequently accessed
operands in registers. Studies that show improved performance for RISC
architecture do not differentiate between the effects of the reduced
instruction set and the effects of a large register file. For this reason a large
number of registers in the processing unit are sometimes associated with
RISC processors.
