
Bus Architecture:-

Bus architecture is the design that governs how data signaling is shared among multiple devices inside a computer when they need to communicate or transfer data. A "bus" is a circuit that connects the electrical components of the computer and carries electrical signals from one component to another. The architecture defines how the buses are to be used by multiple system processes with as much efficiency as possible.

History of the Bus:-

In the early years, computer devices were connected with parallel electrical connections derived from the busbar. Modern computers can use both parallel and bit-serial connections. Early buses could transfer 1-bit signals; widths then grew to 2, 4, 8, 16, 32, and 64 bits, and the signals could be multiplexed.

First Generation Bus:-

The first generation of computer buses were simply wires connected to memory and the other components of the computer.

The first generation saw the development of 8-bit parallel processors.

This generation also saw the first implementation of interrupts, where a task in a sequence could be paused so that another task could run, with control returning to the original task on completion; this was managed by interrupt prioritization.

All communication was controlled by the CPU and timed by a central clock that determined the speed of the CPU.

This created a drawback: all peripherals had to communicate at the same speed.

Second Generation Bus:-

- Memory and the CPU were separated from the other devices, and a bus controller transferred data between the CPU side and the other devices, removing the burden from the CPU.
- This allowed buses to develop separately from the CPU and memory. With the CPU isolated from the other system devices, designers could focus on more speed and efficiency.

Third Generation Bus:-

- The third generation introduced technologies such as HyperTransport and InfiniBand.
- Buses became more physically flexible, allowing the same designs to be used both as internal and external buses.

Bus Arbitration:-
- A bus may be controlled by more than one module, e.g. the CPU and a DMA (Direct Memory Access) controller.
- The bus can be used by only one module at a time.
- Arbitration may be centralized or distributed.
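The points above can be sketched in code. This is a minimal sketch of a centralized, fixed-priority arbiter; the module names and priority order are illustrative assumptions, not from any real chipset:

```python
# Hypothetical sketch of centralized fixed-priority bus arbitration.
# Module names and the priority order are illustrative assumptions.

PRIORITY = ["DMA", "CPU", "IO"]  # highest priority first

def arbitrate(requests):
    """Grant the bus to the highest-priority requesting module.

    requests: set of module names currently asserting a bus request.
    Returns the module granted the bus, or None if there are no requests.
    """
    for module in PRIORITY:
        if module in requests:
            return module
    return None

# Only one module gets the bus per cycle, even with multiple requests:
print(arbitrate({"CPU", "DMA"}))  # prints "DMA"
print(arbitrate({"IO"}))          # prints "IO"
```

A distributed scheme would instead let the modules decide among themselves, e.g. by daisy-chaining the grant line.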

Types of Bus
There are three types of bus according to their method of operation.

Figure 1 - Processor Schematic Architecture


Control Bus:-

- Directs and monitors activities by signaling the functional parts of the computer system.
- Carries signals from the CPU to read or write data or instructions from memory or I/O devices.
- Carries signals such as read, write, acknowledge, and interrupt to coordinate the operations of the system.
- Synchronizes the subsystems: memory and the I/O system.

Address Bus:-

- Transfers the address of the memory location or register where data is to be read or written.
- An address must be transmitted over the address lines before the CPU can read or write data or instructions from memory.
- The width of a microprocessor's address lines determines how many individual memory locations it can address. A processor with a 32-bit address bus can address 2^32 (4,294,967,296) memory locations, i.e. 4 GB of addressable space.
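The relationship between address-bus width and addressable space is pure arithmetic, which the following short sketch illustrates:

```python
# Sketch: how address-bus width determines addressable memory.
# An N-bit address bus can form 2**N distinct addresses.

def addressable_locations(bus_width_bits):
    """Number of distinct addresses an N-bit address bus can form."""
    return 2 ** bus_width_bits

# A 32-bit address bus addresses 2^32 byte locations, i.e. 4 GB.
locations = addressable_locations(32)
print(locations)                 # 4294967296
print(locations // (1024 ** 3))  # 4 (GB)
```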

Data Bus:-

- Data lines transfer data or instructions to or from memory.
- The data bus is bidirectional, but it transmits data in only one direction at a time.
- Also transmits data between memory and the I/O sections during input or output operations.
- The wider the data bus, the more data can flow through it at once.

External Bus:
Kinds of External Bus:
- ISA (Industry Standard Architecture)
- PCI (Peripheral Component Interconnect)
- PCI-e (PCI Express)
- USB (Universal Serial Bus)
- AGP (Accelerated Graphics Port)

ISA (Industry Standard Architecture)

- It was developed by IBM and evolved from 8-bit to 16-bit; EISA was an attempt at a 32-bit version.
- These are used in industrial and legacy PCs and are rarely found on modern PCs.
- Connections are made directly to the FSB (Front Side Bus).
- Runs at a bus clock of 8 MHz.

PCI Bus Architecture (32/64 Bit)



- PCI was released as an industry standard by Intel in 1992, building on ISA (Industry Standard Architecture) and the VL (VESA Local) Bus.
- Provides devices connected directly to the bus with direct access to memory.
- Offers higher performance without slowing down the processor.
- Initially ran at 33 MHz and was later upgraded to 66 MHz.
- The PCI bus concept utilized "Plug and Play".
- Has a burst mode which allows multiple data sets to be sent.
- Devices on the PCI bus could transfer data directly.
- The latest version of the architecture is called PCI Express.

USB (Universal Serial Bus)


- USB is today one of the most popular and easiest-to-use connectivity interface standards.
- It created low-cost, hot-swappable, plug-and-play support for many computer peripherals.
- USB can support high-bandwidth computer peripherals.
- USB 3.0 supports bandwidth up to 5 Gbit/s.
- USB can distribute power to low-power connected devices and supports a suspend/resume power-saving mode.

Pipeline Architecture:-

Pipelining is a mechanism used in computer processors to increase instruction throughput by starting instructions in succession without waiting for the previous instruction to complete. Pipelining is generally divided into sub-operations:
- Instruction Fetch (IF)
- Instruction Decode (ID)
- Operand Fetch (OP)
- Execute (EXEC)
- Write Back (WB)

Mechanism:-
- Pipelining segments an instruction into sub-instructions. A task is broken down into multiple independent substeps, and separate processing units run those substeps. When a substep completes, another task can take its place while the first task moves forward to the next substep.
- Pipelining doesn't decrease the execution time of a single instruction; it increases instruction throughput.
- Instruction throughput is the number of tasks a pipeline can complete in a unit of time.
- The pipeline clock period is set by the stage with the maximum delay, so unless the stage delays are balanced, one slow stage can slow down the whole pipeline.
- Pipelining is the backbone of vector supercomputers.
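The overlap described above gives the standard ideal-pipeline timing: on a k-stage pipeline with no hazards, n instructions finish in k + (n - 1) cycles, since after the first instruction fills the pipeline, one instruction completes per cycle. A minimal sketch:

```python
# Ideal pipeline timing (no hazards), assuming the five stages named in
# the text. Total cycles for n instructions on k stages: k + (n - 1).

STAGES = ["IF", "ID", "OP", "EXEC", "WB"]

def pipeline_cycles(num_instructions, num_stages=len(STAGES)):
    """Clock cycles to finish n instructions on a k-stage ideal pipeline."""
    if num_instructions == 0:
        return 0
    # First instruction takes k cycles; each later one finishes 1 cycle apart.
    return num_stages + (num_instructions - 1)

print(pipeline_cycles(1))   # 5  (one instruction still uses all 5 stages)
print(pipeline_cycles(10))  # 14 (vs. 50 cycles without pipelining)
```

This shows the point made above: the latency of one instruction is unchanged (still 5 cycles), but throughput rises sharply.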

Figure: time-space diagram depicting the overlapped operations.

Types of pipelining architecture:-

RISC (Reduced Instruction Set Computer):-
RISC was developed in the 1970s to allow higher CPU clock rates. It could process instructions at a rate of one instruction per machine cycle. It uses single-word instructions with fixed-field decoding.

CISC (Complex Instruction Set Computer):-
CISC is a computer able to run multi-step operations, such as loading from and operating on memory, within a single instruction. It uses variable-length instructions with variable formats.

Issues with Pipelining: Hazards and Solutions:-

Hazards are conditions that prevent an instruction in the instruction stream from executing.

Structural Hazard:
- When functional units are not fully pipelined, instructions that need those units cannot be processed.
- A resource required by multiple instructions has not been freed, so those instructions cannot execute.
- One solution is to stall the pipeline for one clock cycle when a memory access occurs, reserving the resource for that instruction slot.

Data Hazard:
- Occurs when pipelining changes the order of read and write accesses to operands during the overlapping of instructions.
- Multiple instructions are executing and referencing the same data.
- The system must ensure that a later instruction does not access the data before the earlier instruction has produced it; otherwise the instructions yield incorrect results.
- Internal forwarding is used to deal with data hazards.

Control Hazard:
- Caused by uncertainty about the execution path, when a decision must be made before a condition is evaluated (e.g. a branch).
- One solution to a control hazard is to stall the pipeline until the execution path is known.
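To make the data-hazard case concrete, here is a hedged sketch of counting stall cycles from RAW (read-after-write) dependences in a simple 5-stage pipeline without forwarding. The instruction encoding and the 2-cycle worst-case penalty are illustrative assumptions:

```python
# Hedged sketch: stall cycles from RAW data hazards in a 5-stage pipeline
# WITHOUT internal forwarding. Assumes a result becomes readable 2 cycles
# after the instruction that produces it would have needed to supply it,
# so a consumer 1 instruction behind stalls 2 cycles, 2 behind stalls 1.

def raw_stalls(program):
    """program: list of (dest_reg, src_regs) tuples in issue order."""
    stalls = 0
    for i, (_, srcs) in enumerate(program):
        for back in (1, 2):                      # check 2 previous instructions
            if i - back >= 0:
                dest, _ = program[i - back]
                if dest in srcs:
                    stalls += max(0, 3 - back)   # closer producer => more stalls
                    break
    return stalls

# r1 = r2 + r3 ; r4 = r1 + r5 -> second instruction must wait for r1
program = [("r1", {"r2", "r3"}), ("r4", {"r1", "r5"})]
print(raw_stalls(program))  # 2 stall cycles without forwarding
```

With internal forwarding, as mentioned above, most of these stalls disappear because results are routed directly between pipeline stages.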

Limitations and issues of pipelining arise from:

Pipeline Latency:
- The fact that the actual execution time of each instruction is not reduced sets restrictions on pipeline depth.

Imbalance among pipeline stages:
- Imbalanced pipeline stages slow instruction completion, because the clock can run no faster than the time required by the slowest pipeline stage.

Pipeline Overhead:
- Arises from pipeline register delay (setup time + propagation delay) and clock skew (the difference in arrival time of supposedly simultaneous events).

Interrupts:
- Interrupts insert new instructions while an instruction is running through the system. Interrupts should take effect between instructions, when one instruction is complete and the next has not started, but in pipelining the next instruction usually begins before the preceding instruction completes.

Advantages of Pipelining over Non-Pipelining:-

- A non-pipelined system executes a single instruction at a time, while a pipelined system overlaps instruction execution to increase system performance.
- Better utilization of CPU resources.
- Achieves high instruction throughput even though the latency of a single instruction is not reduced.
- Pipelining uses combinational logic to generate control signals.
- Overall improvement in the speed of the computer.

Hardware Support for MEMORY MANAGEMENT

Memory management is required to improve system performance by increasing the efficiency of reading and writing instructions and data from memory. The Memory Management Unit (MMU) is a hardware device which controls access between the system memory and the CPU. The primary goal of the MMU is to optimize the number of runnable processes in memory. The functions provided by the MMU can be listed as:
- hardware memory management
- operating system memory management
- application memory management

Memory Hierarchy
- A ranking of memory storage devices according to their speed and capacity.
- The main point of a memory hierarchy is to allow fast access to a large amount of memory, balancing the need for speed against cost.
- Registers are the smallest and provide the fastest access to data.
- Level 1 cache is much larger than the registers; sizes typically range between 4 KB and 32 KB.
- Level 2 cache can be internal as well as external.
- Main memory is RAM: SDRAM, etc.

Virtual Memory:-

- Virtual memory lets a program run without conflicting with other programs trying to access the same memory location. Modern operating systems run multiple programs concurrently without them interfering with each other, so virtual memory provides each process with its own address space. If two programs use the same virtual memory address, the locations they actually access are physically different.
- This is achieved by using paging to create virtual addresses that map onto physical memory.
- Paging breaks memory up into blocks called pages; the system then uses a lookup table to translate addresses, with the high-order (H.O.) bits of the virtual address selecting a page and the low-order (L.O.) bits selecting an offset within that page.
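This lookup-table translation can be sketched directly. The sketch below assumes 4 KB (4,096-byte) pages, so the low 12 bits are the page offset and the upper 20 bits index the page table; the page-table contents are hypothetical:

```python
# Hedged sketch of virtual-to-physical address translation with 4 KB
# pages. The page-table mappings below are made-up illustrative values.

PAGE_BITS = 12                      # 2**12 = 4096-byte pages
OFFSET_MASK = (1 << PAGE_BITS) - 1  # 0xFFF

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0x00000: 0x7A,
              0x00001: 0x03}

def translate(virtual_addr):
    vpn = virtual_addr >> PAGE_BITS       # upper 20 bits: page number
    offset = virtual_addr & OFFSET_MASK   # lower 12 bits: offset in page
    ppn = page_table[vpn]                 # lookup-table translation
    return (ppn << PAGE_BITS) | offset

print(hex(translate(0x00000123)))  # 0x7a123: page 0 -> frame 0x7a
print(hex(translate(0x00001456)))  # 0x3456:  page 1 -> frame 0x03
```

Note that the offset passes through unchanged; only the page-number bits are remapped.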

One example: using 4,096-byte pages, you would use the low-order 12 bits of the virtual address as the offset within the page in physical memory. The upper 20 bits of the address are used as an index into a lookup table, which returns the upper 20 bits of the actual physical address.

Segmentation is another way to achieve memory protection; it works by sealing off parts of memory from the running processes. An element is identified by its offset from the beginning of its segment. During address translation, a mapping is required to convert a logical address to a physical address. The logical address space and the system memory are divided into segments, which may vary in size.

Memory Interleaving:
- Interleaving is a sophisticated method used by high-end motherboards/chipsets to further improve overall memory performance.
- It increases memory bandwidth by allowing simultaneous access to different portions of memory, so the CPU can transfer much more data to or from memory.
- It helps reduce the memory/CPU bottleneck that degrades system performance.

Interleaving breaks memory down into blocks (banks) which are accessed by different sets of control lines working jointly within the memory.

Cache Memory:-

Cache memory is a small, fast memory used to speed up access to programs and data that are frequently accessed.

Some algorithms introduced to decide which data to replace in the cache are as follows:
1. FIFO (First In First Out)
2. Random Replacement
3. Least Recently Used (LRU)

Memory access time is significantly reduced because the cache keeps regularly accessed memory addresses at higher priority and less-used ones at lower priority.

A cache hit refers to the case when the CPU looks for a main memory address and finds its contents in the cache.
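Of the replacement policies listed above, LRU is the easiest to sketch. The following is a minimal, hedged illustration (real caches implement this in hardware with sets and tags, not a dictionary):

```python
# Hedged sketch of an LRU (Least Recently Used) replacement policy,
# one of the cache algorithms listed above.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # least recently used entry comes first

    def access(self, address, value=None):
        """Return (hit, value); on a miss, load `value`, evicting if full."""
        if address in self.store:
            self.store.move_to_end(address)  # mark as most recently used
            return True, self.store[address]
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)   # evict the least recently used
        self.store[address] = value
        return False, value

cache = LRUCache(capacity=2)
print(cache.access(0x10, "a"))  # (False, 'a')  miss: loaded
print(cache.access(0x20, "b"))  # (False, 'b')  miss: loaded
print(cache.access(0x10))       # (True, 'a')   hit
print(cache.access(0x30, "c"))  # (False, 'c')  miss: evicts 0x20 (LRU)
print(cache.access(0x20, "b"))  # (False, 'b')  miss: 0x20 was evicted
```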
