
General Aspects of Computer Organization

(Lecture-4)

R S Ananda Murthy

Associate Professor and Head


Department of Electrical & Electronics Engineering,
Sri Jayachamarajendra College of Engineering,
Mysore 570 006

R S Ananda Murthy General Aspects of Computer Organization


Specific Learning Outcomes

After completing this lecture, the student should be able to


Explain the meaning of datapath cycle.
List RISC design principles followed to improve processor
performance.
State the meaning of parallelism and why it is adopted in
modern computer architectures.
Explain how pipelining can speed up program execution.
Explain the meaning of superscalar architecture.



Data Path Inside the CPU

[Figure: data path inside the CPU — two registers drive the ALU input registers over the bus; the ALU computes A + B and its output is written back to a register.]

Feeding two operands to the ALU and storing the output of
the ALU in an internal register is called a data path cycle.
A faster data path cycle results in faster program execution.
Multiple ALUs operating in parallel result in a faster data
path cycle.
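The data path cycle described above can be sketched in Python (an illustrative model, not any real CPU's data path): two operands are read from a register file, combined by an ALU operation, and the result is latched back into a register.

```python
# Minimal sketch of one data path cycle on a dict-based register file.
# The register names and the dict model are illustrative assumptions.

def data_path_cycle(registers, src_a, src_b, dest, op):
    """Read two operands, run them through the ALU, write back the result."""
    a = registers[src_a]       # drive operand A onto the ALU input
    b = registers[src_b]       # drive operand B onto the ALU input
    result = op(a, b)          # ALU computes, e.g., A + B
    registers[dest] = result   # latch ALU output into a register
    return result

regs = {"R0": 5, "R1": 7, "R2": 0}
data_path_cycle(regs, "R0", "R1", "R2", lambda a, b: a + b)
print(regs["R2"])  # 12
```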



RISC Design Speeds Up Program Execution

Most manufacturers today implement the following features in
their processors to improve performance:
All instructions are directly executed by hardware instead
of being interpreted by a microprogram.
Maximize the rate at which instructions are issued by
adopting parallelism.
Use simple fixed-length instructions to speed up decoding.
Avoid performing arithmetic and logical operations directly
on data present in the memory, i.e., only LOAD and STORE
instructions should reference memory.
Provide plenty of registers inside the CPU.
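The load/store principle above can be sketched with a hypothetical mini-ISA (the mnemonics, addresses, and register names below are illustrative assumptions, not from any real processor): only LOAD and STORE touch memory, while ADD operates purely on registers.

```python
# Hypothetical load/store mini-ISA sketch: only LOAD and STORE access
# memory; ADD works register-to-register, as the RISC principle requires.

memory = {0x100: 10, 0x104: 32}
regs = {"R1": 0, "R2": 0, "R3": 0}

def LOAD(rd, addr):       # memory -> register
    regs[rd] = memory[addr]

def STORE(rs, addr):      # register -> memory
    memory[addr] = regs[rs]

def ADD(rd, rs1, rs2):    # registers only; no memory operand allowed
    regs[rd] = regs[rs1] + regs[rs2]

LOAD("R1", 0x100)
LOAD("R2", 0x104)
ADD("R3", "R1", "R2")
STORE("R3", 0x108)
print(memory[0x108])  # 42
```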



Parallelism for Faster Execution

Instruction execution can be made faster only up to a
certain limit by increasing the processor clock frequency,
as a higher frequency increases the power loss in the chip.
Consequently, modern computer architects adopt some
kind of parallelism, i.e., doing more operations simultaneously,
to speed up performance.
Kinds of parallelism employed in computer architecture are:
Instruction Level Parallelism (ILP)
  Pipelining
  Superscalar Architectures
Processor Level Parallelism (PLP)
  SIMD or Vector Processor
  Multiprocessors
  Multicomputers



Pipelining for High Performance

The number of stages in a pipeline varies depending upon the
hardware design of the CPU.
Each stage in a pipeline is executed by a dedicated
hardware unit inside the CPU.
Each stage in a pipeline takes the same amount of time to
complete its task.
The hardware units of different stages in a pipeline can work
concurrently.
The operation of the hardware units is synchronized by the clock
signal.
To implement pipelining, instructions must be of fixed length
and take the same instruction cycle time.
Pipelining requires sophisticated compiling techniques to
be implemented in the compiler.
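The concurrency described above can be sketched by computing which stage each instruction occupies in every clock cycle, assuming four one-cycle stages F, D, E, W (an illustrative model):

```python
# Sketch of a 4-stage pipeline schedule (F, D, E, W), assuming every
# stage takes exactly one clock cycle, as pipelining requires.

STAGES = ["F", "D", "E", "W"]

def pipeline_schedule(n_instructions):
    """Return {cycle: [(instruction, stage), ...]} for n instructions."""
    schedule = {}
    for i in range(n_instructions):          # instruction i enters at cycle i+1
        for s, stage in enumerate(STAGES):
            cycle = i + s + 1
            schedule.setdefault(cycle, []).append((f"I{i+1}", stage))
    return schedule

sched = pipeline_schedule(4)
# In cycle 4 all four hardware units are busy at once:
print(sched[4])  # [('I1', 'W'), ('I2', 'E'), ('I3', 'D'), ('I4', 'F')]
```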



A 4-Stage Pipeline
Clock cycle:   1    2    3    4    5    6    7
I1:            F1   D1   E1   W1
I2:                 F2   D2   E2   W2
I3:                      F3   D3   E3   W3
I4:                           F4   D4   E4   W4

Hardware stages in the pipeline: F: Fetch instruction;
D: Decode and get operands; E: Execute the instruction;
W: Write result at destination.

Hardware organization: Fetch → B1 → Decode → B2 → Execute → B3 → Write,
where B1, B2, and B3 are storage buffers between successive stages.

Information is passed from one stage to the next through
these storage buffers.
With n pipeline stages and clock period T, the time taken to
execute each instruction is nT.
The processor bandwidth is 1/(T × 10⁶) MIPS (Million
Instructions Per Second), where T is in seconds.
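The timing figures above can be checked numerically (the 2 ns clock period below is an assumed example value, not from the slides):

```python
# Sketch of the pipeline timing figures: n-stage pipeline, clock period T.

def instruction_latency(n_stages, T):
    """Time to execute one instruction: n * T seconds."""
    return n_stages * T

def bandwidth_mips(T):
    """Steady-state throughput, one instruction per cycle: 1/(T * 1e6) MIPS."""
    return 1 / (T * 1e6)

T = 2e-9                              # assumed 2 ns clock period (500 MHz)
print(instruction_latency(4, T))      # 8e-9 s per instruction
print(bandwidth_mips(T))              # ≈ 500 MIPS
```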
Superscalar Architecture
[Figure: dual-pipeline superscalar — a single instruction fetch unit (S1) feeds two parallel pipelines, each with its own instruction decode (S2), operand fetch (S3), instruction execution (S4), and write back (S5) units.]

A superscalar architecture has multiple pipelines, as shown
above.
In the above example, a single fetch unit fetches a pair of
instructions together and puts each one into its own
pipeline, complete with its own ALU, for parallel operation.
The compiler must ensure that the two instructions fetched do
not conflict over resource usage.
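The compiler's conflict check mentioned above can be sketched as a simplified register-dependence test (a hypothetical simplification; real issue logic also checks functional-unit availability):

```python
# Simplified sketch of the check a compiler might apply before pairing
# two instructions for dual issue: no shared destination register and
# no read-after-write dependence between the pair.

def can_dual_issue(instr1, instr2):
    """Each instruction is (dest_register, set_of_source_registers)."""
    d1, srcs1 = instr1
    d2, srcs2 = instr2
    if d1 == d2:                       # both write the same register
        return False
    if d1 in srcs2 or d2 in srcs1:     # one reads what the other writes
        return False
    return True

print(can_dual_issue(("R1", {"R2", "R3"}), ("R4", {"R5"})))  # True
print(can_dual_issue(("R1", {"R2"}), ("R6", {"R1"})))        # False
```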



Superscalar Architecture with Five Functional Units
[Figure: superscalar processor with five functional units — instruction fetch (S1), instruction decode (S2), and operand fetch (S3) stages feed five parallel S4 functional units (two ALUs, LOAD, STORE, and floating point), followed by a write back stage (S5).]

Nowadays, the word superscalar is used to describe
processors that issue multiple instructions, often four to
six, in a single clock cycle.
Superscalar processors generally have one pipeline with
multiple functional units, as shown above.



License

This work is licensed under a


Creative Commons Attribution 4.0 International License.

