
Assignment A (Marks: 10). Answer all questions.
1. Explain Flynn's classification of computer architecture using a neat block diagram.
Ans.

Flynn's Taxonomy of Computer Architecture: The most popular taxonomy of computer architecture was defined by Flynn in 1966. Flynn's classification scheme is based on the notion of a stream of information. Two types of information flow into a processor: instructions and data. The instruction stream is defined as the sequence of instructions performed by the processing unit, and the data stream is defined as the data traffic exchanged between the memory and the processing unit. According to Flynn's classification, either the instruction stream or the data stream can be single or multiple, so computer architecture falls into four distinct categories:
1) single instruction stream, single data stream (SISD)
2) single instruction stream, multiple data streams (SIMD)
3) multiple instruction streams, single data stream (MISD)
4) multiple instruction streams, multiple data streams (MIMD)
1) SISD (single instruction, single data) refers to a computer architecture in which a single processor (a uniprocessor) executes a single instruction stream to operate on data stored in a single memory. This corresponds to the von Neumann architecture. SISD is one of the four main classifications defined in Flynn's taxonomy, in which architectures are classified by the number of concurrent instruction and data streams they support. According to Michael J. Flynn, SISD machines can still have concurrent processing characteristics: instruction fetching and pipelined execution of instructions are common examples found in most modern SISD computers.

2) Single instruction, multiple data (SIMD) is a class of parallel computers in Flynn's taxonomy. It describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously; such machines exploit data-level parallelism. SIMD is particularly applicable to common tasks such as adjusting the contrast in a digital image or adjusting the volume of digital audio, and most modern CPU designs include SIMD instructions to improve the performance of multimedia workloads. A small sketch of this idea follows.
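A minimal sketch of this data-level parallelism, assuming an x86 CPU with SSE and using invented sample values: the same multiply (a volume change) is applied to four audio samples per instruction, while a plain scalar loop handles any leftover elements. The scalar tail is effectively the SISD version of the same computation, so the contrast between the two models is visible in one function.

/* Minimal SIMD sketch (x86 SSE; compile with: gcc -O2 -msse simd_scale.c).
 * One multiply instruction operates on four float samples at a time,
 * illustrating a digital-audio volume change as a data-parallel task. */
#include <immintrin.h>
#include <stdio.h>

/* Illustrative helper: scale n samples in place by a constant gain. */
static void scale_samples(float *samples, int n, float gain)
{
    __m128 vgain = _mm_set1_ps(gain);          /* broadcast gain into all 4 lanes  */
    int i = 0;
    for (; i + 4 <= n; i += 4) {               /* SIMD loop: 4 samples per step     */
        __m128 v = _mm_loadu_ps(&samples[i]);  /* load 4 floats                     */
        v = _mm_mul_ps(v, vgain);              /* one instruction, 4 multiplies     */
        _mm_storeu_ps(&samples[i], v);         /* store 4 results                   */
    }
    for (; i < n; i++)                         /* scalar (SISD-style) tail loop     */
        samples[i] *= gain;
}

int main(void)
{
    float audio[6] = {0.1f, 0.2f, 0.3f, 0.4f, 0.5f, 0.6f};
    scale_samples(audio, 6, 0.5f);             /* halve the "volume"                */
    for (int i = 0; i < 6; i++)
        printf("%.2f ", audio[i]);
    printf("\n");
    return 0;
}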

3) MISD (multiple instruction, single data) is a type of parallel computing architecture in which many functional units perform different operations on the same data. Pipeline architectures belong to this type, though a purist might argue that the data is different after processing by each stage of the pipeline. Fault-tolerant computers that execute the same instructions redundantly in order to detect and mask errors, in a manner known as task replication, may also be considered to belong to this type. Not many instances of this architecture exist, because MIMD and SIMD are usually more appropriate for common data-parallel techniques and allow better scaling and use of computational resources than MISD does. One prominent example of MISD in computing, however, is the set of Space Shuttle flight control computers. A systolic array is another example of an MISD structure. The fault-tolerant reading of MISD is sketched below.
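As a loose software analogy of the fault-tolerant reading of MISD (not the actual Shuttle flight software), the sketch below feeds the same datum to three redundant "units", one of which is deliberately faulty, and lets a majority voter mask the error. The unit functions and values are invented for illustration.

/* Toy triple-modular-redundancy sketch: three units process the SAME input
 * and a voter masks a single faulty result. */
#include <stdio.h>

static int unit_a(int x) { return x * x; }       /* healthy unit                  */
static int unit_b(int x) { return x * x; }       /* healthy unit                  */
static int unit_c(int x) { return x * x + 1; }   /* deliberately faulty unit      */

/* Majority voter: returns the value produced by at least two units. */
static int vote(int a, int b, int c)
{
    if (a == b || a == c) return a;
    return b;                                    /* b == c (or no majority)       */
}

int main(void)
{
    int data = 7;                                /* a single data stream          */
    int result = vote(unit_a(data), unit_b(data), unit_c(data));
    printf("voted result = %d\n", result);       /* 49: the faulty unit is masked */
    return 0;
}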

4) MIMD (multiple instruction, multiple data) is a technique employed to achieve parallelism. Machines using MIMD have a number of processors that function asynchronously and independently; at any time, different processors may be executing different instructions on different pieces of data. MIMD architectures are used in application areas such as computer-aided design/computer-aided manufacturing, simulation, modeling, and communication switches. MIMD machines fall into shared-memory or distributed-memory categories, based on how the processors access memory. Shared-memory machines may be of the bus-based, extended, or hierarchical type; distributed-memory machines may have hypercube or mesh interconnection schemes. A multi-core CPU is an MIMD machine, as the thread sketch below illustrates.
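A minimal MIMD-flavoured sketch using POSIX threads: two threads execute different instruction streams on different data at the same time on a multi-core CPU. The worker functions and data are illustrative assumptions, not a prescribed API.

/* MIMD in miniature (compile with: gcc -pthread mimd.c): two threads run
 * DIFFERENT code on DIFFERENT data concurrently. */
#include <pthread.h>
#include <stdio.h>

static int numbers[4] = {3, 1, 4, 1};
static char text[]    = "mimd";

static void *sum_numbers(void *arg)      /* instruction stream 1, data stream 1 */
{
    int *total = arg;
    for (int i = 0; i < 4; i++) *total += numbers[i];
    return NULL;
}

static void *count_chars(void *arg)      /* instruction stream 2, data stream 2 */
{
    int *count = arg;
    for (int i = 0; text[i] != '\0'; i++) (*count)++;
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int sum = 0, len = 0;
    pthread_create(&t1, NULL, sum_numbers, &sum);
    pthread_create(&t2, NULL, count_chars, &len);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("sum = %d, length = %d\n", sum, len);   /* sum = 9, length = 4 */
    return 0;
}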

2. Write about the classification of operating systems.
Ans. Classification of Operating Systems
Many operating systems have been designed and developed in the past several decades. They may be classified into different categories depending on their features: (1) multiprocessor, (2) multiuser, (3) multiprogram, (4) multiprocess, (5) multithread, (6) preemptive, (7) reentrant, (8) microkernel, and so forth. These features, and the challenges involved in implementing them, are discussed very briefly in the following subsections.
1.11.1. Multiprocessor Systems
A multiprocessor system is one that has more than one processor on board. The processors execute independent streams of instructions simultaneously. They share the system buses, the system clock, and the main memory, and may share peripheral devices too. Such systems are also referred to as tightly coupled multiprocessor systems, as opposed to networks of computers (called distributed systems). A uniprocessor system can execute only one process at any point of real time, though there might be many processes ready to be executed; by contrast, a multiprocessor system can execute many different processes at the same real time. However, the number of processors in the system restricts the degree of simultaneous process execution. In multiprocessor systems, many processes may execute the kernel simultaneously. In uniprocessor systems, concurrency is achieved only in the form of execution interleavings: only one process can make progress in kernel mode, while the others are blocked in the kernel waiting for processor allocation or for some event to occur.
There are two primary models of multiprocessor operating systems: symmetric and asymmetric. In a symmetric multiprocessor system, each processor executes the same copy of the resident operating system, takes its own decisions, and cooperates with the other processors for smooth functioning of the entire system. In an asymmetric multiprocessor system, each processor is assigned a specific task, and a designated master processor controls the activities of the other, subordinate processors and assigns work to them.
In multiprocessor systems, many processors can execute operating system code simultaneously, so kernel path synchronization is a major challenge in designing multiprocessor operating systems; a highly concurrent kernel is needed to achieve real gains in system performance. Synchronization has a much stronger impact on performance in multiprocessor systems than in uniprocessor systems, and many well-known uniprocessor synchronization techniques are ineffective in multiprocessor systems, which therefore need very sophisticated, specialized synchronization schemes. Another challenge in symmetric multiprocessor systems is to balance the workload among processors rationally. Multiprocessor operating systems are also expected to be fault tolerant: failure of a few processors should not halt the entire system, a property called graceful degradation.
1.11.2. Multiuser Systems
A multiuser system is one that can be used by more than one user. The system provides an environment in which many users can use the system at the same time or exclusively at different times. Each user can execute her applications without any concern about what other users are doing in the system. When many users run their applications at the same time, they compete and contend for system resources, and the operating system allocates the resources to them in an orderly manner.

Security is a major design issue in multiuser operating systems. Each user has a private space in the system where she maintains her programs and data, and the operating system must ensure that this space is visible only to her and to authorized users, and is protected from unauthorized and malicious users. The system also needs to arbitrate resource sharing among active users so that nobody is starved of system resources. Multiuser systems may need an accounting mechanism to keep track of statistics of resource usage by individual users.
1.11.3. Multiprogram Systems
A multiprogram system is one in which many application programs can reside in the main memory at the same time (see Fig. 1.18). (By contrast, in uniprogram systems, at most one application program can reside in the main memory.) Applications need to share the main memory, and they may also need to share other system resources among themselves.
Figure 1.18. Multiplexing of the main memory by applications.

Memory management is a major design challenge in multiprogram operating systems: multiplexing of the main memory is essential to hold multiple applications in it, and different standalone applications should be able to share common subprograms (routines) and data. Processor scheduling (long-term) is another design issue in such systems, as the operating system needs to decide which applications are best to bring into the main memory. Protection of programs from one another's execution is another issue in designing such systems.
1.11.4. Multiprocess Systems
A multiprocess system (also known as a multitasking system) is one that executes many processes concurrently (simultaneously or in an interleaved fashion). In a uniprocess system, when the lone process executes a wait operation, the processor sits idle and wastes its time until the process comes out of the wait state. The objective of multiprocessing is to have a process running on the processor at all times, doing purposeful work. Many processes are executed concurrently to improve the performance of the system and to improve the utilization of system resources such as the processor, the main memory, disks, printers, and network interface cards. Processes may execute the same program (in uniprogram systems) or different programs (in multiprogram systems). They share the processor among themselves in addition to sharing the main memory and I/O devices. The operating system executes processes by switching the processor among them; this switching is called context switching, process switching, or task switching. Short-term processor scheduling is a major design issue in multiprocess systems. Multiprocess systems need schemes for interprocess communication and process synchronization, protection of one process from another is mandatory, and such systems must of course provide operations for process creation, maintenance, suspension, resumption, and destruction.
1.11.5. Time-sharing Systems

In an interactive system, many users directly interact with the computer from terminals connected to the computer system. They submit small execution requests to the computer and expect results back almost immediately, after a delay short enough to keep the interaction responsive. Such a computer system must support both multiprogramming and multiprocessing. The processes appear to be executing simultaneously, each at its own speed; this apparent simultaneous execution is achieved by frequently switching the processor from one process to another over short spans of time. These systems are called time-sharing systems. Time-sharing is essentially a rapid time-division multiplexing of the processor time among several processes, and the switching is so frequent that each process almost seems to have its own processor. A time-sharing system is indeed a multiprocess system, but it switches the processor among processes more frequently. Thus, one additional goal of time-sharing is to let users interact effectively with the system while their applications run. The toy simulation below sketches the idea.
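A toy round-robin simulation of this time-division multiplexing; the process names, burst times, and quantum are invented values, and real schedulers are of course far more elaborate.

/* Toy round-robin time-sharing simulation: the "processor" is multiplexed
 * among processes in fixed time quanta. */
#include <stdio.h>

#define NPROC   3
#define QUANTUM 2            /* time units given to a process per turn */

int main(void)
{
    const char *name[NPROC] = {"P1", "P2", "P3"};
    int remaining[NPROC]    = {5, 3, 4};   /* remaining CPU demand per process */
    int left = NPROC, clock = 0;

    while (left > 0) {
        for (int i = 0; i < NPROC; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            printf("t=%2d: %s runs for %d unit(s)\n", clock, name[i], slice);
            clock += slice;                 /* context switch after the quantum */
            remaining[i] -= slice;
            if (remaining[i] == 0) left--;
        }
    }
    printf("all processes finished at t=%d\n", clock);
    return 0;
}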

3. Discuss different types of Interconnection Networks. Ans.

4. Explain the conditions for partitioning and parallelism with an example.
Ans. Conditions of Parallelism:
1. Data and resource dependences: A program consists of several segments, so executing several program segments in parallel requires that each segment be independent of the others. Dependences among the segments of a program may take various forms, such as resource dependence, control dependence, and data dependence. A dependence graph is used to describe these relations: program statements are represented by nodes, and directed edges with different labels show the ordering relations among the statements. Analysis of the dependence graph shows where opportunities exist for parallelization and vectorization.
Data dependences: The relations between statements are represented by data dependences. There are five types of data dependence, given below:
(a) Antidependence: A statement S2 is antidependent on statement S1 if S2 follows S1 in program order and the output of S2 overlaps the input of S1.
(b) I/O dependence: Read and write are I/O statements. I/O dependence occurs not because the same variable is involved but because the same file is referenced by both I/O statements.
(c) Unknown dependence: The dependence relation between two statements cannot be determined in the following situations: the subscript of a variable is itself subscripted; the subscript does not contain the loop index variable; or the subscript is nonlinear in the loop index variable.
(d) Output dependence: Two statements are output-dependent if they produce the same output variable.
(e) Flow dependence: A statement S2 is flow-dependent on a statement S1 if an execution path exists from S1 to S2 and at least one output of S1 feeds in as an input to S2.
2. Bernstein's conditions: Bernstein derived a set of conditions under which two processes can execute in parallel. A process is a program in execution and is an active entity; in fact, it is an abstraction of a program fragment defined at various processing levels. Ii is the input set of process Pi, the set of all input variables needed to execute the process; similarly, the output set Oi consists of all output variables generated after execution of process Pi. Input variables are the operands fetched from memory or registers, and output variables are the results to be stored in working registers or memory locations. Consider two processes P1 and P2 with input sets I1 and I2 and output sets O1 and O2. The two processes P1 and P2 can execute in parallel, denoted P1 || P2, if and only if they are independent and do not create confusing results, that is, if and only if I1 ∩ O2 = ∅, I2 ∩ O1 = ∅, and O1 ∩ O2 = ∅ (these conditions are checked concretely in the sketch after this answer).
3. Software parallelism: Software parallelism is defined by the control and data dependences of programs. The degree of parallelism is revealed in the program profile or in the program flow graph. Software parallelism is a function of the algorithm, the programming style, and compiler optimization. The program flow graph shows the patterns of simultaneously executable operations, and the parallelism in a program varies during the execution period.
4. Hardware parallelism: Hardware parallelism is defined by hardware multiplicity and the machine architecture. It is a function of cost and performance trade-offs. It displays the resource utilization patterns of simultaneously executable operations and indicates the peak performance of the processor resources. One method of identifying parallelism in hardware is the number of instructions issued per machine cycle.
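A small sketch of Bernstein's conditions, representing the input and output sets of two processes as bitmasks over a handful of hypothetical variables; the processes and their variable sets are invented for illustration.

/* Bernstein's conditions as a bitmask check.  Each bit stands for one program
 * variable; P1 || P2 holds iff I1 and O2, I2 and O1, and O1 and O2 are all
 * pairwise disjoint. */
#include <stdio.h>

enum { A = 1 << 0, B = 1 << 1, C = 1 << 2, D = 1 << 3 };  /* hypothetical variables */

static int can_run_in_parallel(unsigned i1, unsigned o1, unsigned i2, unsigned o2)
{
    return (i1 & o2) == 0 &&    /* P1 reads nothing that P2 writes        */
           (i2 & o1) == 0 &&    /* P2 reads nothing that P1 writes        */
           (o1 & o2) == 0;      /* the two never write the same variable  */
}

int main(void)
{
    /* P1: C = A + B  -> inputs {A,B}, output {C}
       P2: D = A * 2  -> inputs {A},   output {D}   -> independent */
    printf("P1 || P2 ? %s\n",
           can_run_in_parallel(A | B, C, A, D) ? "yes" : "no");   /* yes */

    /* P3: C = A + B;  P4: A = C - 1  -> they conflict on A and C */
    printf("P3 || P4 ? %s\n",
           can_run_in_parallel(A | B, C, C, A) ? "yes" : "no");   /* no  */
    return 0;
}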

5. What are the Characteristics of CISC and RISC Architecture? Ans.

Assignment B (Marks: 10). Answer all questions.
1. What is a pipeline computer? Explain the principles of pipelining.
Ans.

2. Discuss SIMD architecture in detail with its variants.

Ans.

3. What is a Vector Processor? Compare Vector and Stream Architecture.

Ans.

4. Read the case study given below and answer the questions given at the end.
Case Study
The key to higher performance in microprocessors for a broad range of applications is the ability to exploit fine-grain, instruction-level parallelism. Some methods for exploiting fine-grain parallelism include:
1. Pipelining
2. Multiple processors
3. Superscalar implementation
4. Specifying multiple independent operations per instruction
Pipelining is now universally implemented in high-performance processors, and little more can be gained by improving the implementation of a single pipeline. Using multiple processors improves performance for only a restricted set of applications. Superscalar implementations can improve performance for all types of applications; superscalar means the ability to fetch, issue to execution units, and complete more than one instruction at a time. Superscalar implementations are required when architectural compatibility must be preserved, and they will be used for entrenched architectures with legacy software, such as the x86 architecture that dominates the desktop computer market. Specifying multiple operations per instruction creates a very long instruction word (VLIW) architecture. A VLIW implementation has capabilities very similar to those of a superscalar processor (issuing and completing more than one operation at a time), with one important exception: the VLIW hardware is not responsible for discovering opportunities to execute multiple operations concurrently. For the VLIW implementation, the long instruction word already encodes the concurrent operations. This explicit encoding leads to dramatically reduced hardware complexity compared to a high-degree superscalar implementation of a RISC or CISC. The big advantage of VLIW, then, is that a highly concurrent (parallel) implementation is much simpler and cheaper to build than equivalently concurrent RISC or CISC chips. VLIW is a simpler way to build a superscalar microprocessor.
Questions:
1. Why do we need VLIW architecture?
Ans.

2. Compare VLIW with CISC and RISC.
Ans.

3. Discuss the advantages of software rather than hardware implementation in VLIW.
Ans.

Assignment C (Marks: 10). Answer all questions. Tick mark (✓) the most appropriate answer.
1. Multi-computers are-- a) Distributed address space accessible by local processors. b) Simultaneous access to shared variables can produce inconsistent results. c) Requires message filtering for more than 1 computer. d) Share the common memory. Ans: A

2. Multi-Processors are-- a) Share the common memory. b) Systems contain multiple processors on a single machine. c) Consists of a number of processors accessing other processors. d) Multiprocessor implementation for non-embedded systems. Ans: A
3. Multivector is-- a) A manufacturer of process control. b) An element of a vector space V. c) A unique High-Expression (HEx TM) technology platform. d) Pair-end reads: faster, easier.

4. SIMD computers are-- a) Computer consists of limited identical processors. b) A modern supercomputer is almost always a cluster of MIMD machines. c) Single instruction with multiple data. d) General instruction in computer. Ans: C
Program Partitioning and Scheduling
5. Lines are defined as those lines which are coplanar and do not intersect; this is the condition of-- a) Partitioning b) Parallelism c) Scheduling d) Multiprocessing Ans: D
6. VLSI stands for-- a) Very Large Scale Integration b) Variable length serial mask c) Virtual limit of sub interface d) Very last stack instruction Ans: A
7. Which parallel algorithm is used for a multiprocessor? a) SIMD b) VLSI c) APST d) NDPL Ans: B
8. IEEE standard backplane bus specification is for-- a) Multilevel architectures b) Multiprocessor architectures c) Multipath architectures d) Multiprogramming architectures Ans: B
9. Hierarchical memory system technology uses-- a) Cache memory b) Memory sticks c) HDD d) Virtual memory Ans: A
10. An arbitration protocol governs the-- a) Interrupt b) I/O c) H/w d) Parity Ans: A
11. In which of the following is the order of program execution explicitly stated in user programs? a) Program flow mechanism b) Control flow mechanism c) Data flow mechanism d) Reduction flow mechanism Ans: A
12. Shared memory, program counter, and control sequencer are features of-- a) Data flow b) Program flow c) Control flow mechanism d) Reduction flow mechanism Ans: C
13. Instruction address(es) effectively replaces the program counter in a control flow machine. a) Dataflow architecture b) Demand-driven mechanisms c) Data reduction mechanism d) Reduction mechanism
14. APT is-- a) Advanced processor technology b) Advertise poster trend c) Addition part of tech d) Actual planning of tech Ans: A

15. Addressing modes on the Y86 are used in-- a) ISA b) APT c) VLSI d) ISMD Ans: A
16. The crossbar switch was most popular from-- a) 1950 to 1980 b) 1980 to 2000 c) 1970 to 1990 d) 1960 to 2000 Ans: A
17. A memory shared by many processors to communicate among themselves is termed-- a) Multiport memory b) Multiprocessor memory c) Multilevel memory d) Multidevice memory Ans: B
18. A switching system for accessing memory modules in a multiprocessor is called-- a) Combining n/w b) Combining processors c) Combining devices d) Combining cables Ans: B
19. What does the following diagram show? [diagram not reproduced] a) Process hierarchy b) Memory hierarchy c) Accessing of memory d) CPU connections Ans: B
20. __ is the first supercomputer produced by India. a) PARAM b) Intel 5000 c) Super India d) None of these Ans: A
21. SIMD stands for-- a) Single Instruction Multiple Data stream b) Synchronous Instruction Multiple Data stream c) Single Interface Multiple Data stream d) Single Instruction Multiple Data signal Ans: A

22. SISD stands for-- a) Single Instruction Several Data stream b) Single Instruction Single Data stream c) Single Instruction Several Document stream d) None of these Ans: A
23. MIMD stands for-- a) Multiple Instruction Multiple Data stream b) Multiple Instruction Meta Data stream c) Multiple Instruction Modular Data stream d) None of these Ans: A
24. MISD stands for-- a) Multiple Instruction Single Data stream b) More Instruction Single Data stream c) Multiple Instruction Simple Data stream d) None of these Ans: A
25. Which one is true about MISD? a) It is not a practically existing model. b) It is a practically existing model. c) Meta instruction single data d) All of the above are true Ans: B
26. SISD is an example of-- a) Distributed parallel processor system b) Sequential system c) Multiprocessing system d) None of these Ans: B
27. VLIW stands for-- a) Variable length instruction wall b) Very large instruction word c) Very long instruction word d) None of these Ans: C
28. RISC stands for-- a) Rich instruction set computers b) Rich instruction serial computers c) Real instruction set computers d) None of these Ans: D
29. SIMD has the following component-- a) PE b) CU c) AU d) All of the above Ans: B

30. CISC stands for-- a) Complex instruction set computers b) Complete instruction set computers c) Core instruction set computers d) None of these Ans: A
31. The ALU and control unit of most microcomputers are combined and manufactured on a single silicon chip. What is it called? a) Monochip b) Microprocessor c) ALU d) Control unit Ans: B
32. Which of the following registers is used to keep track of the address of the memory location where the next instruction is located? a) Memory Address Register b) Memory Data Register c) Instruction Register d) Program Register Ans: A
33. A complete microcomputer system consists of-- a) Microprocessor b) Memory c) Peripheral equipment d) All of the above Ans: D
34. The CPU performs the operations of-- a) Data transfer b) Logic operations c) Arithmetic operations d) All of the above Ans: D
35. The pipelining strategy is called implementing-- a) Instruction execution b) Instruction prefetch c) Instruction decoding d) Instruction manipulation Ans: B
36. What is the function of the control unit in a CPU? a) To transfer data to primary storage b) To store program instructions c) To perform logic operations d) To decode program instructions Ans: A

37. Pipeline implements-- a) Fetch instruction b) Decode instruction c) Fetch operand d) Calculate operand e) Execute instruction f) All of the above Ans: F

38. Memory access in RISC architecture is limited to instructions like-- a) CALL and RET b) PUSH and POP c) STA and LDA d) MOV and JMP Ans: C
39. The most common addressing techniques employed by a CPU are-- a) Immediate b) Direct c) Indirect d) Register e) All of the above Ans: E
40. A shared-memory SIMD model is __ than a distributed-memory model. a) More complex b) Less complex c) Equally complex d) Can't say Ans: B
