
-=[ASSIGNMENT 3]=-

Solved Question Paper 2017

By:
B V Karthik [160718735002]
M S V Sai Praneeth [160718735015]
B.Sai Preeth [160718735041]
M Abhiram Sri Karthik Chowhan [160718735049]
Subject:
COA
Faculty:
Maharshi Sanand Yadav

B.E. 3/4 (E.C.E) I- Semester Examination,
December 2017

PART -A

1. Draw the block diagram of a 4-bit combinational circuit shifter and write its function table. [3M]

A) 4-bit combinational circuit shifter:

=> It can be constructed with multiplexers.
=> 4 data inputs: A0 through A3.
=> 4 data outputs: H0 through H3.
=> 2 serial inputs: IL for shift left and IR for shift right.
=> When S = 0, the input data are shifted right.
=> When S = 1, the input data are shifted left.
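A minimal Python sketch of the shifter's function table, assuming the usual multiplexer wiring in which a right shift moves Ai to Hi+1 with IR filling H0 (the function and variable names are illustrative, not from the original answer):

def shifter_4bit(a, s, i_r=0, i_l=0):
    """4-bit combinational shifter built from four multiplexers.
    a   : [A0, A1, A2, A3]
    s   : 0 -> shift right, 1 -> shift left
    i_r : serial input used when shifting right
    i_l : serial input used when shifting left
    Returns [H0, H1, H2, H3]."""
    a0, a1, a2, a3 = a
    if s == 0:              # shift right: H0=IR, H1=A0, H2=A1, H3=A2
        return [i_r, a0, a1, a2]
    else:                   # shift left:  H0=A1, H1=A2, H2=A3, H3=IL
        return [a1, a2, a3, i_l]

print(shifter_4bit([1, 0, 1, 1], s=0, i_r=0))  # -> [0, 1, 0, 1]
print(shifter_4bit([1, 0, 1, 1], s=1, i_l=1))  # -> [0, 1, 1, 1]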

2. Write the Basic Computer instruction formats for the memory, register and I/O reference instructions. [3M]

A) Each Basic Computer instruction is 16 bits long:
=> Memory-reference instruction: bit 15 = I (addressing mode, 0 = direct, 1 = indirect), bits 14-12 = opcode (000 to 110), bits 11-0 = memory address.
=> Register-reference instruction: bits 15-12 = 0111, bits 11-0 specify the register operation.
=> I/O-reference instruction: bits 15-12 = 1111, bits 11-0 specify the input-output operation.

3. Differentiate between Single precision and Double precision
IEEE Standard floating point representations.[2M]
A) Single precision uses 32-bit floating-point numbers (1 sign bit, 8 exponent bits, 23 fraction bits), whereas double precision uses 64 bits (1 sign bit, 11 exponent bits, 52 fraction bits). The extra bits in double precision increase both the range of representable values and the precision (i.e. the number of significant digits, roughly 15-16 decimal digits versus about 7 for single precision).

4. Explain briefly the microinstruction format. [2M]

A) Microinstruction format:
The 20-bit microinstruction is divided into fields: F1, F2 and F3 (3 bits each) specify the microoperations, CD (2 bits) selects a status-bit condition, BR (2 bits) specifies the type of branch, and AD (7 bits) holds an address for the control memory. The three bits in each F field are encoded to specify seven distinct microoperations, giving a total of 21 microoperations.

5.A Stack is organized in such a way that SP always points at the
next empty location on the stack. List the micro-operations for the
push and Pop operations. Assume stack grows downward.[3M]

A) Push and Pop operations:

A stack is an abstract data type that serves as a collection of elements, with two principal operations:
1. push, which adds an element to the collection.
2. pop, which removes the most recently added element that was not yet removed.
With SP pointing at the next empty location and the stack growing downward (toward lower addresses), the micro-operations are:
Push: M[SP] <- DR, SP <- SP - 1
Pop: SP <- SP + 1, DR <- M[SP]
A short simulation of these operations is sketched below.
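A minimal Python sketch of these push and pop micro-operations on a downward-growing stack (memory size and names are assumptions for illustration):

MEM_SIZE = 8
memory = [0] * MEM_SIZE
SP = MEM_SIZE - 1            # SP points at the next empty location

def push(value):
    global SP
    memory[SP] = value       # M[SP] <- DR
    SP -= 1                  # SP <- SP - 1 (stack grows downward)

def pop():
    global SP
    SP += 1                  # SP <- SP + 1
    return memory[SP]        # DR <- M[SP]

push(5)
push(9)
print(pop())   # 9 (last in, first out)
print(pop())   # 5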

6. Mention the ways that computer buses can be used to
communicate with memory and IO.[3M]

A) The CPU needs to communicate with the various memory and input-output (I/O) devices; data between the processor and these devices flows over the system bus. There are three ways in which the system bus can be organised for this purpose:
1. Separate sets of address, control and data buses for I/O and memory.
2. A common bus (data and address) for I/O and memory, but separate control lines (isolated I/O).
3. A common bus (data, address and control) for I/O and memory (memory-mapped I/O).

7. Draw the flow chart for destination initiated transfer using handshaking. [2M]

A) In a destination-initiated transfer, the destination unit first activates the "ready for data" signal; the source unit responds by placing data on the bus and enabling its "data valid" signal; the destination accepts the data and disables "ready for data"; finally the source disables "data valid", and the system returns to its initial state.
8. What do you mean by a page fault? Which hardware is
responsible for detecting the page fault?[2M]

A) Memory management unit (MMU):

A page fault is a trap to the software, raised by the hardware, when a program accesses a page that is mapped in the virtual address space but not loaded in physical memory. The hardware that detects a page fault is the memory management unit in the processor.

9. A non pipeline takes 50ns to process a task. The same task can
be processed in a six segment pipeline with a clock cycle of 10ns.
Determine the Speed up ratio of the pipeline for 100 tasks. What is
the maximum Speed up that can be achieved?[3M]
A) The speed-up ratio of the pipeline for 100 tasks is 4.76.
Explanation:
Total number of tasks, n = 100
Time taken by the non-pipelined processor per task, tn = 50 ns
Non-pipelined time for 100 tasks = n x tn = 100 x 50 = 5000 ns
Number of pipeline segments, k = 6
Clock cycle time of the pipeline, tp = 10 ns
Pipelined time = (k + n - 1) x tp = (6 + 100 - 1) x 10 = 1050 ns
Speed-up ratio S = 5000 / 1050 = 4.76
Maximum speed-up (as n tends to infinity) Smax = tn / tp = 50 / 10 = 5
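The same computation expressed as a short sketch of the speed-up formula (values taken from the question):

# Pipeline speed-up for the numbers in the question.
n, k = 100, 6        # tasks, pipeline segments
tn, tp = 50, 10      # ns per task (non-pipelined), ns per clock (pipelined)

non_pipelined = n * tn               # 5000 ns
pipelined = (k + n - 1) * tp         # 1050 ns
print(non_pipelined / pipelined)     # ~4.76  (speed-up for 100 tasks)
print(tn / tp)                       # 5.0    (maximum speed-up)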

10. What is meant by "locality of reference" and how does it help in faster execution of programs? [2M]
A) The term locality of reference refers to the fact that there is usually some predictability in the sequence of memory addresses accessed by a program during its execution. Programs exhibit temporal locality if they access the same memory location several times within a small window of time, and spatial locality if they access addresses close to one another. Because of this, recently used words and their neighbours can be kept in a small, fast memory (the cache), so most references are satisfied quickly and programs execute faster.
PART-B
11. a) Design a 4-bit combinational circuit decrementer using four full-adder circuits. [5M]
A) 4-bit binary decrementer using 4 full-adder circuits:

Basic theory:
The binary decrementer decreases the value stored in a register by '1'. For this, we add '1111' (the 4-bit two's-complement representation of -1) to the existing value in the register, i.e. one input of every full adder is held at logic 1 and the final carry out is discarded. This is the two's-complement method of subtracting '1' from the given data. The circuit is made by cascading 'n' full adders for 'n' bits, i.e. the storage capacity of the register to be decremented. Hence a 4-bit binary decrementer requires 4 cascaded full-adder circuits.

Observed values:
The following set of values was obtained in observation.
1. 0011 => 0010
2. 1010 => 1001
3. 1101 => 1100
4. 0010 => 0001
5. 0000 => 1111
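A small sketch of the decrementer built from four cascaded full adders (bit ordering and function names are illustrative); it reproduces the observed values above:

def full_adder(a, b, c_in):
    """One full adder: returns (sum, carry_out)."""
    s = a ^ b ^ c_in
    c_out = (a & b) | (a & c_in) | (b & c_in)
    return s, c_out

def decrement_4bit(bits):
    """4-bit binary decrementer: adds 1111 (two's complement of 1) to the
    input using four cascaded full adders. bits = [b3, b2, b1, b0]."""
    b3, b2, b1, b0 = bits
    s0, c0 = full_adder(b0, 1, 0)
    s1, c1 = full_adder(b1, 1, c0)
    s2, c2 = full_adder(b2, 1, c1)
    s3, _  = full_adder(b3, 1, c2)   # final carry is discarded
    return [s3, s2, s1, s0]

print(decrement_4bit([0, 0, 1, 1]))  # 0011 -> 0010
print(decrement_4bit([0, 0, 0, 0]))  # 0000 -> 1111 (wraps around)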

b) Draw and explain the flow chart for an interrupt cycle. Write
the sequence of micro operations for the same.[5M]
A) Interrupt cycle:
An instruction cycle (sometimes called the fetch-and-execute cycle, fetch-decode-execute cycle, or FDX) is the basic operation cycle of a computer. It is the process by which a computer retrieves a program instruction from its memory, determines what actions the instruction requires, and carries out those actions. This cycle is repeated continuously by the central processing unit (CPU), from boot-up until the computer is shut down.

Block diagram of Interrupt Cycle


• After the execute cycle is completed, a test is made to determine if an interrupt was enabled (e.g. so that another process can access the CPU).
• If not, the instruction cycle returns to the fetch cycle.
• If so, the interrupt cycle performs (in simplified form) the following tasks:
  - move the current value of PC into MBR
  - move the PC-save-address into MAR
  - move the interrupt-routine-address into PC
  - move the contents of MBR into the memory cell indicated by MAR (i.e. save the old PC)
  - continue the instruction cycle within the interrupt routine
• After the interrupt routine finishes, the PC-save-address is used to restore the value of PC and program execution can continue.

Micro-operations: The execution of a program consists of the sequential execution of instructions. Each instruction is executed during an instruction cycle made up of shorter sub-cycles (1. fetch, 2. indirect, 3. execute, 4. interrupt). Micro-operations are the functional, or atomic, operations of a processor. In the basic computer, the interrupt cycle is specified by the register-transfer statements:
RT0: AR <- 0, TR <- PC
RT1: M[AR] <- TR, PC <- 0
RT2: PC <- PC + 1, IEN <- 0, R <- 0, SC <- 0

12. a) Using Booth's multiplication algorithm, multiply (6) x (-8), showing all the steps. [6M]
A) Booth's algorithm multiplication: (6) x (-8) = -48.
With multiplicand BR = 0110 (+6) and multiplier QR = 1000 (-8), the registers AC, QR and Qn+1 are repeatedly inspected, added to or subtracted from, and arithmetically shifted right; after 4 cycles the product 1101 0000 (-48) is left in AC and QR.
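A quick way to trace Booth's algorithm for these operands is the following sketch (register handling follows the usual AC, QR, Qn+1 scheme; the function name is illustrative):

def booth_multiply(m, q, n):
    """Booth's multiplication of two n-bit two's-complement integers."""
    mask = (1 << n) - 1
    ac, qr, q_1 = 0, q & mask, 0          # AC, QR and Q(n+1) registers
    br, br_neg = m & mask, (-m) & mask    # BR and its two's complement
    for _ in range(n):
        pair = (qr & 1, q_1)
        if pair == (1, 0):                # Qn Q(n+1) = 10 : AC <- AC - BR
            ac = (ac + br_neg) & mask
        elif pair == (0, 1):              # Qn Q(n+1) = 01 : AC <- AC + BR
            ac = (ac + br) & mask
        # arithmetic shift right of AC, QR, Q(n+1) as one unit
        q_1 = qr & 1
        qr = ((qr >> 1) | ((ac & 1) << (n - 1))) & mask
        ac = ((ac >> 1) | (ac & (1 << (n - 1)))) & mask
    result = (ac << n) | qr
    if result & (1 << (2 * n - 1)):       # interpret as a signed 2n-bit value
        result -= 1 << (2 * n)
    return result

print(booth_multiply(6, -8, 4))   # -48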

b) Compare and contrast between the horizontal and vertical approaches of microinstruction. [4M]
A) Horizontal and vertical approaches of microinstruction:
=> Horizontal microinstructions are wide; ideally one bit is provided for every control signal, so many micro-operations can be issued in parallel and little or no decoding is needed, but the control memory words are long.
=> Vertical microinstructions are short; the control signals are grouped into encoded fields, so decoders are required and fewer micro-operations can be issued per word, but the control memory is smaller.
13. a) What is the purpose of a microprogram sequencer? Explain, with a block diagram, how the sequencer presents addresses to control memory. [5M]

A) Microprogram sequencer: In computer architecture and engineering, a sequencer or microprogram sequencer generates the addresses used to step through the microprogram of a control store. It is used as part of the control unit of a CPU or as a stand-alone generator for address ranges.

Microprogrammed control unit: In this type, the control variables stored in memory at any given time can be represented by a string of 1's and 0's called a control word. As such, control words can be programmed to perform various operations on the components of the system. Each word in control memory contains within it a microinstruction.

Generally, a microinstruction specifies one or more microoperations, and a sequence of microinstructions forms what is called a microprogram.

A computer that employs a microprogrammed control unit will generally have two separate memories:

1. The main memory: this memory is available to the user for storing programs. The user's program in main memory consists of machine instructions and data.

2. The control memory: this memory contains a fixed microprogram that cannot be altered by the occasional user. The microprogram consists of microinstructions that specify various internal control signals for the execution of register micro-operations.

Each instruction initiates a series of microinstructions in control
memory. These microinstructions generate the microoperations to:
1. Fetch the instruction from main memory.
2. Evaluate the effective address.
3. Execute the operation specified by the instruction.
4. Finally, return the control to the fetch phase in order to repeat the
cycle for the next instruction.

Figure 8.1 shows the general block diagram of a microprogrammed control unit, in which the control memory is assumed to be a ROM.

The function of the control address register is to specify the address of the microinstruction, while the function of the control data register is to hold the microinstruction read from memory.

The microinstruction contains a control word that specifies one or more microoperations for the data processor. Once these operations are executed, the control must determine the next address.

The location of the next microinstruction may be the one next in sequence, or it may be located somewhere else in the control memory.

For this reason it is necessary to use some bits of the present
microinstruction to control the generation of the address of the next
microinstruction. The next address may also be a function of external
input conditions.
While the microoperations are being executed, the next address is
computed in the next address generator circuit and then transferred into
the control address register to read the next microinstruction. Thus a
microinstruction contains bits for initiating microoperations in the data
processor part and bits that determine the address sequence for the
control memory.

The next address generator is sometimes called a microprogram sequencer, as it determines the address sequence that is read from control memory. Depending on the sequencer inputs, the address of the next microinstruction can be specified in several ways:

1. By incrementing the control address register by one.

2. By loading the control address register with an address from control memory.

3. By transferring an external address.

4. By loading an initial address to start the control operations.

The control data register holds the present microinstruction while the
next address is computed and read from memory. The data register is
sometimes called a pipeline register.

It allows the execution of the microoperations specified by the control word simultaneously with the generation of the next microinstruction. This configuration requires a two-phase clock, with one clock applied to the address register and the other to the data register.

The system can operate without the control data register by applying a
single-phase clock to the address register. The control word and next
address information are taken directly from the control memory.

It must be realized that a ROM operates as a combinational circuit, with the address value as the input and the corresponding word as the output. The content of the specified word in ROM remains on the output wires as long as its address value remains in the address register. No read signal is needed as in a random-access memory. Each clock pulse will execute the microoperations specified by the control word and also transfer a new address to the control address register.

In the example that follows we assume a single-phase clock and therefore do not use a control data register; in this way the address register is the only component in the control system that receives clock pulses. The other two components, the sequencer and the control memory, are combinational circuits and do not need a clock.

The main advantage of microprogrammed control is the fact that once the hardware configuration is established, there should be no need for further hardware or wiring changes.

If we want to establish a different control sequence for the system, all we need to do is specify a different set of microinstructions for control memory (i.e. a different microprogram residing in control memory).
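To illustrate the address-sequencing choices listed above, the following sketch steps a control address register (CAR) through a tiny, made-up control memory; the microinstruction fields used here are assumptions for the example, not an actual machine format:

# Each microinstruction carries an operation name, a branch condition and
# a branch address (all invented values for illustration).
control_memory = {
    0: {"op": "fetch",   "cond": None,     "addr": None},
    1: {"op": "decode",  "cond": None,     "addr": None},
    2: {"op": "execute", "cond": "zero",   "addr": 5},   # conditional branch
    3: {"op": "write",   "cond": "always", "addr": 0},   # jump back to fetch
    5: {"op": "skip",    "cond": "always", "addr": 0},
}

def run(status_zero, steps=6):
    car = 0                                   # control address register
    for _ in range(steps):
        mi = control_memory[car]
        print(f"CAR={car}  {mi['op']}")
        if mi["cond"] == "always" or (mi["cond"] == "zero" and status_zero):
            car = mi["addr"]                  # load a branch address into CAR
        else:
            car += 1                          # default: increment CAR by one

run(status_zero=False)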

b) Explain data manipulation operations of a basic computer.[5M]

A) DATA MANIPULATION INSTRUCTIONS:
Data manipulation instructions perform operations on data and provide the computational capabilities for the computer. The data manipulation instructions in a computer are usually divided into three basic types:
1. Arithmetic instructions
2. Logical and bit manipulation instructions
3. Shift instructions
Arithmetic instructions:
The four basic arithmetic operations are addition, subtraction, multiplication and division. Some computers have only addition and subtraction instructions; multiplication and division must then be generated by means of software subroutines (self-contained sequences of instructions that perform a computational task). A list of typical arithmetic instructions follows:

1. Increment (INC): this instruction adds 1 to the value stored in a register or memory word. One common characteristic of increment operations when executed in processor registers is that a binary number of all 1's, when incremented, produces a result of all 0's.

2. Decrement (DEC): this instruction subtracts 1 from the value stored in a register or memory word. One common characteristic of decrement operations when executed in processor registers is that a binary number of all 0's, when decremented, produces a result of all 1's.

3. Addition (ADD)
4. Subtract (SUB)
5. Multiply (MUL)
6. Divide (DIV)
These instructions may be available for different types of data: binary integer, decimal or floating-point. The mnemonics for three add instructions that specify different data types are shown below:
ADDI: add two binary integer numbers. ADDF: add two floating-point numbers. ADDD: add two decimal numbers in BCD.

7. Add with carry (ADDC): a special carry flip-flop is used to store the carry from an operation. The "add with carry" instruction performs the addition on two operands plus the value of the carry from the previous computation.
8. Subtract with borrow (SUBB): subtracts two words and a borrow which may have resulted from a previous subtract operation.
9. Negate (2's complement): the negate instruction forms the 2's complement of a number, effectively changing its sign.
Logical and bit manipulation instructions:
These are useful for manipulating individual bits or a group of bits that represent binary-coded information. The logical instructions consider each bit of the operand separately and treat it as a Boolean variable. Logical and bit manipulation instructions are as follows:
1. Clear (CLR): the clear instruction causes the specified operand to be replaced by 0's.
2. Complement (COM): the complement instruction produces the 1's complement by inverting all the bits of the operand.
3. AND (AND)
4. OR (OR)
5. Exclusive-OR (XOR)
The AND, OR and XOR instructions produce the corresponding logical operations on individual bits of the operands.
6. Clear carry (CLRC)
7. Set carry (SETC)
8. Complement carry (COMC)
Individual bits such as the carry can be cleared, set or complemented with the appropriate instructions.

9. Enable interrupt (EI)
10. Disable interrupt (DI)
The flip-flop that controls the interrupt facility is either enabled or disabled by means of these bit manipulation instructions.
Shift instructions:
Shifts are operations in which the bits of a word are moved to the left or right; instructions are provided to shift the contents of an operand. The bit shifted in at the end of the word determines the type of shift used. Shift instructions may specify logical shifts, arithmetic shifts, or rotate-type operations. Typical instructions are as follows:

Logical shift right (SHR) and logical shift left (SHL): a logical shift inserts 0 into the end bit position; the end position is the leftmost bit for a shift right and the rightmost bit for a shift left. A small example of both operations is sketched below.
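A small sketch of SHR and SHL on an 8-bit operand (the width and names are assumptions for illustration):

def shr(x, bits=8):
    return (x >> 1) & ((1 << bits) - 1)   # 0 enters the leftmost position

def shl(x, bits=8):
    return (x << 1) & ((1 << bits) - 1)   # 0 enters the rightmost position

print(format(shr(0b10110010), '08b'))  # 01011001
print(format(shl(0b10110010), '08b'))  # 01100100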

14. a) Draw the block diagram of an asynchronous communication interface and explain its operation. [5M]
A) Asynchronous communication:

The asynchronous communication technique is a transmission technique which is most widely used by personal computers to provide connectivity to printers, modems, fax machines, etc. It allows a series of bytes (or ASCII characters) to be sent along a single wire (actually a ground wire is also required to complete the circuit).

The data is sent as a series of bits. A shift register (in either hardware
or software) is used to serialise each information byte into the series of
bits which are then sent on the wire using an I/O port and a bus driver
to connect to the cable.
At the receiver, the remote system reassembles the series of bits to
form a byte and forwards the frame for processing by the link layer. A
clock (timing signal) is needed to identify the boundaries between the
bits (in practice it is preferable to identify the centre of the bit - since
this usually indicates the point of maximum signal power). There are
two systems used to provide timing:
1. Asynchronous communication (independent transmit & receive clocks)
• Simple interface (limited data rate, typically < 64 kbps)
• Used for connecting printers, terminals, modems, and home connections to the Internet
• No clock is sent (Tx & Rx have their own clocks)
• Requires start and stop bits, which provide byte timing and increase overhead
• Parity is often used to validate correct reception
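As an illustration of the start/stop/parity framing described above, here is a small sketch (the helper name frame_byte and its layout are assumptions, following the common convention of a 0 start bit, data sent LSB first, a parity bit and a 1 stop bit):

def frame_byte(byte, parity="even"):
    data = [(byte >> i) & 1 for i in range(8)]   # data bits, LSB first
    ones = sum(data)
    if parity == "even":
        pbit = ones % 2                  # make the total number of 1s even
    else:
        pbit = 1 - (ones % 2)            # make the total number of 1s odd
    return [0] + data + [pbit] + [1]     # start bit, data, parity, stop bit

print(frame_byte(ord("A")))  # 'A' = 0x41 -> [0, 1,0,0,0,0,0,1,0, 0, 1]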

b) Describe in detail how data is transferred using DMA. Draw necessary diagrams to support your explanation. [5M]
Direct Memory Access (DMA) in Computer Architecture:
A) For the execution of a computer program, the synchronous working of more than one component of a computer is required. For example, the processor provides the necessary control information and addresses, and the buses transfer information and data to and from memory and I/O devices.

The interesting factor of the system is the way it handles the transfer of information among processor, memory and I/O devices. Usually, the processor controls the whole process of transferring data, right from initiating the transfer to the storage of data at the destination. This adds load on the processor, and most of the time it stays in the idle state, thus decreasing the efficiency of the system. To speed up the transfer of data between I/O devices and memory, the DMA controller acts as a station master: it transfers data with minimal intervention of the processor.
DMA controller:
The term DMA stands for direct memory access. The hardware device used for direct memory access is called the DMA controller. The DMA controller is a control unit, part of an I/O device's interface circuit, which can transfer blocks of data between I/O devices and main memory with minimal intervention from the processor.

DMA controller diagram in computer architecture:
The DMA controller provides an interface between the bus and the input-output devices. Although it transfers data without the intervention of the processor, it is controlled by the processor. The processor initiates the DMA controller by sending the starting address, the number of words in the data block and the direction of transfer of data, i.e. from I/O devices to memory or from main memory to I/O devices. More than one external device can be connected to the DMA controller.

The DMA controller contains an address unit, for generating addresses and selecting the I/O device for transfer. It also contains a control unit and a data count register for keeping count of the number of words transferred and indicating the direction of transfer of data.

When the transfer is completed, the DMA controller informs the processor by raising an interrupt. The typical block diagram of the DMA controller is shown in the figure below.
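The following sketch mimics a block transfer as described above: the processor only programs the controller, and the controller interrupts once the whole block has been moved. All class and field names are illustrative assumptions, not a real device interface.

class DMAController:
    def __init__(self, memory):
        self.memory = memory

    def program(self, start_address, word_count, source):
        self.addr = start_address      # starting address register
        self.count = word_count        # word (data) count register
        self.source = source           # the I/O device supplying data

    def transfer(self):
        while self.count > 0:
            self.memory[self.addr] = next(self.source)  # device -> memory
            self.addr += 1
            self.count -= 1
        print("DMA complete: interrupt raised to the processor")

memory = [0] * 16
device = iter([10, 20, 30, 40])
dma = DMAController(memory)
dma.program(start_address=4, word_count=4, source=device)
dma.transfer()
print(memory[4:8])   # [10, 20, 30, 40]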

15. a) Why is a page table required in a virtual memory system? Explain different ways of organizing a page table. [5M]

A) A page table is a data structure used by the virtual memory system to store the mapping between logical addresses and physical addresses. Logical addresses are generated by the CPU for the pages of the processes, and are therefore the addresses the processes actually use. The requirement for a page table, and the different ways in which the table can be organized, are outlined below.
- For any computer the memory space is generally smaller than the address space; this implies that the main memory is smaller than the secondary memory.

- On the basis of the demands of the CPU, data is transferred between the two memories.

- Due to this, a mapping technique is required, which can be implemented using a page table.

- The page table can be organized in two ways, namely in R/W memory and by using associative logic.

- In the case of R/W memory, the speed of execution of programs is slow because it requires two main-memory references to read data. This organization is also known as a memory page table.

- In the case of associative logic (an associative page table) it is considered more effective, because the table needs only as many entries as there are blocks present in main memory, and an entry is located in a single parallel search.
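A minimal sketch of the address translation a page table performs (page size, table contents and names are made-up values for illustration):

PAGE_SIZE = 1024                           # 1 KB pages -> 10-bit offset
page_table = {0: 5, 1: 2, 2: None, 3: 7}   # page -> frame (None = not resident)

def translate(virtual_address):
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame = page_table.get(page)
    if frame is None:
        raise RuntimeError("page fault")   # page not in main memory
    return frame * PAGE_SIZE + offset

print(translate(3 * PAGE_SIZE + 100))   # page 3 -> frame 7 -> 7268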

b) What do you mean by memory hierarchy? Describe in detail.
[5M]

A) In computer architecture, the memory hierarchy separates computer storage into a hierarchy based on response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies. [1] Memory hierarchy affects performance in computer architectural design, algorithm predictions, and lower-level programming constructs involving locality of reference.
Designing for high performance requires considering the restrictions of
the memory hierarchy, i.e. the size and capabilities of each component.
Each of the various components can be viewed as part of a hierarchy of
memories (m1,m2,...,mn) in which each member mi is typically smaller
and faster than the next highest member mi+1 of the hierarchy.
To limit waiting by higher levels, a lower level will respond by filling a
buffer and then signaling for activating the transfer.
There are four major storage levels.[1]
1. Internal – Processor registers and cache.
2. Main – the system RAM and controller cards.
3. On-line mass storage – Secondary storage.
4. Off-line bulk storage – Tertiary and Off-line storage.
This is a general memory hierarchy structuring. Many other structures
are useful. For example, a paging algorithm may be considered as a
level for virtual memory when designing a computer architecture, and
one can include a level of nearline storage between online and offline
storage.
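As a small illustration of why the hierarchy pays off, the effective access time of a two-level cache/main-memory system can be computed as below (hit ratio and timings are made-up example values, not from the original answer):

hit_ratio = 0.95
t_cache = 10      # ns
t_main = 100      # ns

# miss: the cache is checked first, then main memory is accessed
effective = hit_ratio * t_cache + (1 - hit_ratio) * (t_cache + t_main)
print(effective)  # 15.0 ns - far closer to the cache than to main memory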

Properties of the technologies in the memory hierarchy:
• Adding complexity slows down the memory hierarchy.
• CMOx memory technology stretches the flash space in the memory hierarchy.
• One of the main ways to increase system performance is minimising how far down the memory hierarchy one has to go to manipulate data.
• Latency and bandwidth are two metrics associated with caches. Neither of them is uniform, but is specific to a particular component of the memory hierarchy.
• Predicting where in the memory hierarchy the data resides is difficult.
• The location in the memory hierarchy dictates the time required for the prefetch to occur.

16. a) Discuss SIMD processor organization. [4M]
A) SIMD
SIMD stands for 'Single Instruction stream, Multiple Data stream'. It represents an organization that includes many processing units under the supervision of a common control unit.
All processors receive the same instruction from the control unit but operate on different items of data.
The shared memory unit must contain multiple modules so that it can communicate with all the processors simultaneously.
SIMD is mainly dedicated to array processing machines. However, vector processors can also be seen as a part of this group.

b) Explain 4 possible hardware schemes that can be used in an
instruction pipeline in order to minimize the performance
degradation caused by instruction branching.[6M]

A) Instruction pipelining is a technique used in the design of modern microprocessors, microcontrollers and CPUs to increase their instruction throughput (the number of instructions that can be executed in a unit of time).
The main idea is to divide (termed "split") the processing of a CPU instruction, as defined by the instruction microcode, into a series of independent steps of micro-operations (also called "microinstructions", "micro-ops" or "µops"), with storage at the end of each step. This allows the CPU's control logic to handle instructions at the processing rate of the slowest step, which is much faster than the time needed to process the instruction as a single step.
The term pipeline refers to the fact that each step carries a single microinstruction (like a drop of water), and each step is linked to the next (analogous to water pipes).
Most modern CPUs are driven by a clock. The CPU consists internally of logic and memory (flip-flops). When the clock signal arrives, the flip-flops store their new values, then the logic requires a period of time to decode the flip-flops' new values. Then the next clock pulse arrives, the flip-flops store other values, and so on. By breaking the logic into smaller pieces and inserting flip-flops between the pieces of logic, the time required by the logic (to decode values and generate valid outputs depending on those values) is reduced. In this way the clock period can be reduced.
For example, the RISC pipeline is broken into five stages with a set of flip-flops between each stage as follows:
1. Instruction fetch
2. Instruction decode and register fetch
3. Execute
4. Memory access
5. Register write back
Processors with pipelining consist internally of stages (modules) which can semi-independently work on separate microinstructions. Each stage is linked by flip-flops to the next stage (like a "chain") so that the stage's output is an input to another stage until the job of processing instructions is done. Such organization of processor internal modules reduces the instruction's overall processing time.
A non-pipelined architecture is not as efficient because some CPU modules are idle while another module is active during the instruction cycle. Pipelining does not completely remove idle time in a pipelined CPU, but making CPU modules work in parallel increases instruction throughput.
An instruction pipeline is said to be fully pipelined if it can accept a new instruction every clock cycle. A pipeline that is not fully pipelined has wait cycles that delay the progress of the pipeline.
Branch instructions break this smooth flow, and several hardware schemes are used to minimize the resulting performance degradation:
1. Prefetching the target instruction: both the instruction at the branch target and the instruction following the branch are prefetched, so whichever path is taken the next instruction is already available.
2. Branch target buffer (BTB): an associative memory that stores the target address (and often the target instruction) of previously executed branches, so the pipeline can continue without delay when the same branch is encountered again.
3. Loop buffer: a small, very fast register file that holds the instructions of a program loop, so the loop can execute without further memory fetches.
4. Branch prediction: additional hardware guesses the outcome of a conditional branch (see the sketch below) and fetches instructions from the predicted path before the condition is resolved; a related technique is the delayed branch, in which useful instructions are placed in the slots after the branch.
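Branch prediction is commonly implemented with a 2-bit saturating counter per branch. The sketch below (class name and starting state are illustrative assumptions) shows how such a predictor makes and updates its guesses:

class TwoBitPredictor:
    def __init__(self):
        self.counter = 2               # states 0..3, start at "weakly taken"

    def predict(self):
        return self.counter >= 2       # True = predict taken

    def update(self, taken):
        if taken:
            self.counter = min(3, self.counter + 1)
        else:
            self.counter = max(0, self.counter - 1)

p = TwoBitPredictor()
for actual in [True, True, False, True, True, True]:
    print("predicted:", p.predict(), "actual:", actual)
    p.update(actual)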

17. Write any TWO of the following a) Stack organized instruction
formats b) Carry look ahead adder c) Vector processing[5*2=10M]

A) a) STACK-ORGANIZED CPU:
In this organization, ALU operations are performed only on stack data. This means that both operands of an ALU operation are always taken from the stack, and the same stack is also used as the destination. In a stack, we can perform insertion and deletion operations at only one end, which is called the top of the stack (TOS). So in this format there is no need for an address field, because the TOS becomes the default location.

In this organization only the ALU operations are zero-address operations, whereas the data transfer instructions are not zero-address instructions. The typical instruction format of a stack CPU is the zero-address instruction format; for example, C = A + B is evaluated as PUSH A, PUSH B, ADD, POP C.

b) Carry-lookahead adder:
A carry-lookahead adder is a fast parallel adder: it reduces the propagation delay at the cost of more complex hardware, hence it is costlier. In this design, the carry logic over fixed groups of bits of the adder is reduced to two-level logic, which is a transformation of the ripple-carry design.
This method uses logic gates to look at the lower-order bits of the augend and addend to see whether a higher-order carry is to be generated or not. For each bit position i, a generate signal Gi = Ai.Bi and a propagate signal Pi = Ai XOR Bi are formed, and the carries are obtained from Ci+1 = Gi + Pi.Ci, expanded so that every carry depends only on the inputs and C0.
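A small sketch of the 4-bit carry-lookahead equations, with inputs given LSB first (the function name and bit ordering are assumptions for illustration):

def cla_add_4bit(a, b, c0=0):
    g = [a[i] & b[i] for i in range(4)]      # generate signals Gi
    p = [a[i] ^ b[i] for i in range(4)]      # propagate signals Pi
    c = [c0, 0, 0, 0, 0]
    # all carries computed in two gate levels from c0 (no rippling)
    c[1] = g[0] | (p[0] & c[0])
    c[2] = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c[0])
    c[3] = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c[0])
    c[4] = g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1]) | (p[3] & p[2] & p[1] & g[0]) \
         | (p[3] & p[2] & p[1] & p[0] & c[0])
    s = [p[i] ^ c[i] for i in range(4)]      # sum bits
    return s, c[4]                           # (S0..S3 LSB first, carry out)

# 0110 (6) + 0011 (3), operands given LSB first
print(cla_add_4bit([0, 1, 1, 0], [1, 1, 0, 0]))   # ([1, 0, 0, 1], 0) -> 1001 = 9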

c) Vector processing:
Vector processing performs arithmetic operations on large arrays of integers or floating-point numbers. Vector processing operates on all the elements of the array in parallel, provided each pass is independent of the others.

Vector processing avoids the overhead of the loop control mechanism
that occurs in general-purpose computers.

In this section, we give a brief introduction to vector processing, its characteristics, vector instructions, and how the performance of vector processing can be enhanced.

Vector processing operates on the entire array in just one operation i.e.
it operates on elements of the array in parallel. But, vector processing
is possible only if the operations performed in parallel are
independent.

Look at the figure below and compare vector processing with general computer processing; you will notice the difference. In both blocks, the instructions are set to add two arrays and store the result in a third array. Vector processing adds both arrays in parallel, avoiding the use of the loop, as the sketch below also illustrates.
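To make the contrast concrete, the sketch below computes the same element-wise sum once with an explicit loop and once as a single whole-array operation (the sample data is arbitrary):

a = [1, 2, 3, 4]
b = [10, 20, 30, 40]

# scalar loop: one add per iteration, plus loop-control overhead
c_scalar = []
for i in range(len(a)):
    c_scalar.append(a[i] + b[i])

# vector view: a single "add the whole arrays" operation, no explicit loop
c_vector = list(map(sum, zip(a, b)))

print(c_scalar)   # [11, 22, 33, 44]
print(c_vector)   # [11, 22, 33, 44]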

*****THE END******

