Agenda
1 Parallel Computing Overview
2 How to Parallelize a Code
3 Porting Issues
4 Scalar Tuning
5 Parallel Code Tuning
6 Timing and Profiling
7 Cache Tuning
8 Parallel Performance Analysis
9 About the IBM Regatta P690
Agenda
1 Parallel Computing Overview
1.1 Introduction to Parallel Computing
1.1.1 Parallelism in our Daily Lives
1.1.2 Parallelism in Computer Programs
1.1.3 Parallelism in Computers
1.1.4 Performance Measures
1.1.5 More Parallelism Issues
It is worth reading this chapter to learn the basic concepts even if you don't plan to do any programming. Note: Advanced users may opt to skip this chapter.
Parallel computers provide more than a desktop computer: fast CPUs, large memory, high speed interconnects, and high speed input/output. They are able to speed up computations by making the sequential components run faster and by doing more operations in parallel.
The demand for parallel computing comes from the need for tremendous computational capabilities in many fields:

Meteorologists: forecasting of thunderstorms
Computational biologists: analyze DNA sequences
Pharmaceutical companies: design of new drugs
Oil companies: seismic exploration
Wall Street: analysis of financial markets
NASA: aerospace vehicle design
Entertainment industry: special effects in movies and commercials

These complex scientific and business applications all need tremendous numbers of computations completed quickly.
Agenda
1 Parallel Computing Overview
1.1 Introduction to Parallel Computing
1.1.1 Parallelism in our Daily Lives
1.1.2 Parallelism in Computer Programs
1.1.2.1 Data Parallelism
1.1.2.2 Task Parallelism
1.1.3 Parallelism in Computers
1.1.4 Performance Measures
1.1.5 More Parallelism Issues
Algorithm: the "sequence of steps" necessary to do a computation. For the first 30 years of computer use, programs were run sequentially. The 1980s saw great successes with parallel computers. Dr. Geoffrey Fox published a book entitled Parallel Computing Works!
Parallel Computing
Parallel computing is what a computer does when it carries out more than one computation at a time using more than one processor. By using many processors at once, we can speed up the execution. If one processor can perform the arithmetic in time t, then ideally p processors can perform the arithmetic in time t/p. What if I use 100 processors? What if I use 1000 processors?
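Writing this out (a small illustration of the statement above, not from the original slides), the ideal parallel time and speedup are

\[
T(p) = \frac{t}{p}, \qquad S(p) = \frac{t}{T(p)} = p,
\]

so 100 processors would ideally finish in t/100 (a speedup of 100) and 1000 processors in t/1000. Real programs fall short of this ideal because some portion of the work cannot be run in parallel.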
Almost every program has some form of parallelism.
You need to determine whether your data or your
program can be partitioned into independent pieces that can be run simultaneously. Decomposition is the name given to this partitioning process.
Types of parallelism:
Data Parallelism
The same code segment runs concurrently on
each processor, but each processor is assigned its own part of the data to work on.
Do loops (in Fortran) define the parallelism. The iterations must be independent of each other.
Parallel Code

!$OMP PARALLEL DO
DO K=1,N
  DO J=1,N
    DO I=1,N
      C(I,J) = C(I,J) + A(I,K)*B(K,J)
    END DO
  END DO
END DO
!$OMP END PARALLEL DO
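Note that different K iterations update the same C(I,J) entries. A variant that keeps the parallel iterations fully independent (a sketch, not from the original slides, assuming the same matrices) parallelizes the J loop instead, so each thread owns distinct columns of C:

!$OMP PARALLEL DO PRIVATE(I,K)
DO J=1,N
  DO K=1,N
    DO I=1,N
      C(I,J) = C(I,J) + A(I,K)*B(K,J)   ! the thread owning column J is its only writer
    END DO
  END DO
END DO
!$OMP END PARALLEL DO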
With N = 20 and 4 processors, the iterations of the parallel DO loop are divided among the processors as follows:

Processor   Iterations
proc0       K=1:5
proc1       K=6:10
proc2       K=11:15
proc3       K=16:20
Data parallelism allows the code to be parallelized incrementally:
1. Parallelize the most computationally intensive loop.
2. Compute performance of the code.
3. If performance is not satisfactory, parallelize another loop.
4. Repeat steps 2 and 3 as many times as needed.

The ability to perform incremental parallelism is considered a positive feature of data parallelism. It is contrasted with the MPI (Message Passing Interface) style of parallelism, which is an "all or nothing" approach.
Task Parallelism
Task parallelism may be thought of as the opposite of data parallelism. Instead of the same operations being performed on different parts of the data, each process performs different operations. You can use task parallelism when your program can be split into independent pieces, often subroutines, that can be assigned to different processors and run concurrently. Task parallelism is called "coarse grain" parallelism because the computational work is spread into just a few subtasks. More code is run in parallel because the parallelism is implemented at a higher level than in data parallelism. Task parallelism is often easier to implement and has less overhead than data parallelism.
Task Parallelism
The abstract code shown in the diagram is
decomposed into 4 independent code segments that are labeled A, B, C, and D. The right hand side of the diagram illustrates the 4 code segments running concurrently.
Task Parallelism

Original Code

program main
code segment labeled A
code segment labeled B
code segment labeled C
code segment labeled D
end

Parallel Code

program main
!$OMP PARALLEL
!$OMP SECTIONS
code segment labeled A
!$OMP SECTION
code segment labeled B
!$OMP SECTION
code segment labeled C
!$OMP SECTION
code segment labeled D
!$OMP END SECTIONS
!$OMP END PARALLEL
end
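As a concrete, runnable sketch (the subroutine names are invented for illustration and are not part of the original slides), four independent subroutines can stand in for the code segments. Compiled with an OpenMP flag (for example, gfortran -fopenmp), each section is handed to a different thread:

program tasks
  implicit none
!$OMP PARALLEL
!$OMP SECTIONS
!$OMP SECTION
  call seg_a()          ! code segment A
!$OMP SECTION
  call seg_b()          ! code segment B
!$OMP SECTION
  call seg_c()          ! code segment C
!$OMP SECTION
  call seg_d()          ! code segment D
!$OMP END SECTIONS
!$OMP END PARALLEL
contains
  subroutine seg_a()
    print *, 'segment A done'
  end subroutine seg_a
  subroutine seg_b()
    print *, 'segment B done'
  end subroutine seg_b
  subroutine seg_c()
    print *, 'segment C done'
  end subroutine seg_c
  subroutine seg_d()
    print *, 'segment D done'
  end subroutine seg_d
end program tasks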
The code that follows each SECTION(S) directive is allocated to a different processor. In our sample parallel code, the allocation of code segments to processors is as follows:

Processor   Code
proc0       code segment labeled A
proc1       code segment labeled B
proc2       code segment labeled C
proc3       code segment labeled D
Parallelism in Computers
How parallelism is exploited and enhanced within the components of a parallel computer: the operating system, arithmetic, memory, and disk.
Operating System Parallelism

Most parallel computers run a version of the Unix operating system. In the table below each OS listed is in fact Unix, but the name of the Unix OS varies with each vendor.

Parallel Computer      OS
SGI Origin2000         IRIX
HP V-Class             HP-UX
Cray T3E               Unicos
IBM SP                 AIX
Workstation Clusters   Linux
Online documentation for each vendor's version of Unix is available.
For example, you can run the executable a.out in the background and simultaneously view the man page for the etime function in the foreground. There are two Unix commands that accomplish this:

a.out > results &
man etime
cron feature

With the Unix cron feature you can submit a job that will run at a later time.
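For instance, a crontab entry like the following (the path and schedule are made up for illustration) runs a program unattended at 2:00 AM every day, in parallel with whatever else the machine is doing:

0 2 * * * /home/user/a.out > /home/user/results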
Arithmetic Parallelism
Multiple execution units facilitate arithmetic parallelism. The arithmetic operations of add, subtract, multiply, and divide (+, -, *, /) are each done in a separate execution unit. This allows several execution units to be used simultaneously, because the execution units operate independently.
Fused multiply and add is another parallel arithmetic feature. Parallel computers are able to overlap multiply and add. This
arithmetic is named MultiplyADD (MADD) on SGI computers, and Fused Multiply Add (FMA) on HP computers. In either case, the two arithmetic operations are overlapped and can complete in hardware in one computer cycle.
Superscalar arithmetic is the ability to issue several arithmetic operations per computer cycle. It makes use of the multiple, independent execution units. On superscalar computers there are multiple slots per cycle that can be filled with work. This gives rise to the name n-way superscalar, where n is the number of slots per cycle. The SGI Origin2000 is called a 4-way superscalar computer.
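As a rough worked example (the 250 MHz clock rate is chosen only for illustration, not quoted from the slides), the peak issue rate of an n-way superscalar processor is

\[
\text{peak issue rate} = n \times f = 4 \times 250\ \text{MHz} = 1000\ \text{million operations per second},
\]

so a 4-way superscalar processor at 250 MHz can issue at most 1000 million operations per second; sustained rates are lower because not every slot can be filled every cycle.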
Memory Parallelism
memory interleaving: memory is divided into multiple banks, and consecutive data elements are interleaved among them. For example, if your computer has 2 memory banks, then data elements with even memory addresses would fall into one bank, and data elements with odd memory addresses into the other.
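A minimal sketch of the mapping (illustrative only; real hardware performs it transparently): the bank holding a word is simply the address modulo the number of banks:

program interleave
  implicit none
  integer :: address, nbanks, bank
  nbanks = 2
  do address = 0, 5
    bank = mod(address, nbanks)   ! even addresses -> bank 0, odd -> bank 1
    print *, 'address', address, 'maps to bank', bank
  end do
end program interleave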
multiple memory ports Port means a bi-directional memory pathway. When the data
elements that are interleaved across the memory banks are needed, the multiple memory ports allow them to be accessed and fetched in parallel, which increases the memory bandwidth (MB/s or GB/s).
multiple levels of the memory hierarchy There is global memory that any processor can access. There is
memory that is local to a partition of the processors. Finally there is memory that is local to a single processor, that is, the cache memory and the memory elements held in registers.
Cache memory: Cache is a small memory that has fast access compared with the larger main memory and serves to keep the faster processor filled with data.
Disk Parallelism
RAID (Redundant Array of Inexpensive Disks)
RAID disks are on most parallel computers. The advantage of a RAID disk system is that it
provides a measure of fault tolerance. If one of the disks goes down, it can be swapped out, and the RAID disk system remains operational.
Disk Striping
When a data set is written to disk, it is striped across the RAID disk system. That is, it is broken into pieces that are written simultaneously to the different disks in the RAID disk system. When the same data set is read back in, the pieces are read in parallel, and the full data set is reassembled in memory.
Agenda
1 Parallel Computing Overview
1.1 Introduction to Parallel Computing
1.1.1 Parallelism in our Daily Lives
1.1.2 Parallelism in Computer Programs
1.1.3 Parallelism in Computers
1.1.4 Performance Measures
1.1.5 More Parallelism Issues
Performance Measures
Peak Performance is the top speed at which the computer can operate. It is a theoretical upper limit on the computer's performance.

Sustained Performance is the highest consistently achieved speed. It is a more realistic measure of computer performance.

Cost Performance is used to determine if the computer is cost effective.

MHz is a measure of the processor speed. The processor speed is commonly measured in millions of cycles per second, where a computer cycle is defined as the shortest time in which some work can be done.

MIPS is a measure of how quickly the computer can issue instructions. Millions of instructions per second is abbreviated as MIPS, where the instructions are computer instructions such as: memory reads and writes, logical operations, floating point operations, integer operations, and branch instructions.
Performance Measures
Mflops (Millions of floating point operations per second)
measures how quickly a computer can perform floating-point operations such as add, subtract, multiply, and divide.

Speedup

measures the benefit of parallelism: how much faster the program runs with multiple processors, compared to the performance on one processor. Ideal speedup happens when the performance gain is linearly proportional to the number of processors used.
Benchmarks
are used to rate the performance of parallel computers and
parallel programs. A well known benchmark that is used to compare parallel computers is the Linpack benchmark. Based on the Linpack results, a list is produced of the Top 500 Supercomputer Sites. This list is maintained by the University of Tennessee and the University of Mannheim.
Load balancing

is the way the computational work is distributed among the processors. For data parallelism it involves how iterations of loops are allocated to processors. Load balancing is important because the total time for the program to complete is the time spent by the longest executing thread.
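As an illustrative sketch (using OpenMP's SCHEDULE clause; the WORK routine here is hypothetical), iterations can be dealt out dynamically so that no thread sits idle while another finishes a long iteration:

!$OMP PARALLEL DO SCHEDULE(DYNAMIC, 4)
DO I = 1, N
   CALL WORK(I)   ! iterations may take unequal amounts of time
END DO
!$OMP END PARALLEL DO

Each idle thread grabs the next chunk of 4 iterations, which keeps the load roughly even when iteration costs vary.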
The problem size
must be large and must be able to grow as you compute with more
processors. In order to get the performance you expect from a parallel computer you need to run a large application with large data sizes, otherwise the overhead of passing information between processors will dominate the calculation time.
Good software tools
are essential for users of high performance parallel computers. These tools include:
parallel compilers
parallel debuggers
The supercomputing market is chaotic. Many supercomputer vendors are no longer in business, making the portability of your application very important.

A workstation farm

is defined as a fast network connecting heterogeneous workstations. The individual workstations serve as desktop systems for their owners. When they are idle, large problems can take advantage of the unused cycles in the whole system. An application of this concept is the SETI project. You can participate in searching for extraterrestrial intelligence with your home PC. More information about this project is available at the SETI Institute.

Condor

is software that provides resource management services for applications that run on a workstation farm.
Agenda
1 Parallel Computing Overview
1.1 Introduction to Parallel Computing 1.2 Comparison of Parallel Computers
1.2.1 Processors
1.2.2 Memory Organization
1.2.3 Flow of Control
1.2.4 Interconnection Networks
1.2.4.1 Bus Network
1.2.4.2 Cross-Bar Switch Network
1.2.4.3 Hypercube Network
1.2.4.4 Tree Network
1.2.4.5 Interconnection Networks Self-test
1.2.5 Summary of Parallel Computer Characteristics
1.3 Summary
This section compares the following characteristics of parallel computers:

kinds of processors
types of memory organization
flow of control
interconnection networks
Kinds of Processors
There are three types of parallel computers:
1. computers with a small number of powerful
processors
Typically have tens of processors. The cooling of these computers often requires very sophisticated and expensive equipment, making these computers very expensive for computing centers. They are general-purpose computers that perform especially well on applications that have large vector lengths. Examples of this type of computer are the Cray SV1 and the Fujitsu VPP5000.
Kinds of Processors
There are three types of parallel computers:
2. computers with a large number of less powerful
processors
Named a Massively Parallel Processor (MPP), these computers typically have thousands of processors. The processors are usually proprietary and air-cooled. Because of the large number of processors, the distance between the furthest processors can be quite large, requiring a sophisticated internal network that allows distant processors to communicate with each other quickly. These computers are suitable for applications with a high degree of concurrency. The MPP type of computer was popular in the 1980s. Examples of this type of computer were the Thinking Machines CM-2 computer, and the computers made by the MasPar company.
Kinds of Processors
There are three types of parallel computers:
3. computers that are medium scale in between the two
extremes
Typically have hundreds of processors. The processor chips are usually not proprietary; rather they are
commodity processors like the Pentium III. These are general-purpose computers that perform well on a wide range of applications. The most common example of this class is the Linux Cluster.
The processors of the commonly used parallel computers:

Computer               Processor
SGI Origin2000         MIPS RISC R12000
HP V-Class             HP PA 8200
Cray T3E               Compaq Alpha
IBM SP                 IBM Power3
Workstation Clusters   Intel Pentium III, Intel Itanium
Memory Organization
The following paragraphs describe the three types of memory organization found on parallel computers: distributed memory, shared memory, and distributed shared memory.
Distributed Memory
In distributed memory computers, the total memory is partitioned into memory that is private to each processor. There is Non-Uniform Memory Access time (NUMA), which is proportional to the distance between the two communicating processors. On NUMA computers, data is accessed the quickest from a private memory, while data from the most distant processor takes the longest to access. Some examples are the Cray T3E, the IBM SP, and workstation clusters.
Distributed Memory
When programming distributed memory computers, the code and the data should be structured such that the bulk of a processor's data accesses are to its own private (local) memory. This is called having good data locality. Today's distributed memory computers use message passing, such as MPI, to communicate between processors.
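As an illustrative sketch of message passing (a minimal MPI program, not taken from the original slides), one process sends an integer to another:

program message
  use mpi
  implicit none
  integer :: rank, ierr, val
  integer :: status(MPI_STATUS_SIZE)
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  if (rank == 0) then
    val = 42
    call MPI_SEND(val, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr)           ! send to rank 1
  else if (rank == 1) then
    call MPI_RECV(val, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, status, ierr)   ! receive from rank 0
    print *, 'rank 1 received', val
  end if
  call MPI_FINALIZE(ierr)
end program message

Each process runs the same program but, based on its rank, takes a different action; data moves between private memories only through explicit messages.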
Distributed Memory
One advantage of distributed memory computers
is that they are easy to scale. As the demand for resources grows, computer centers can easily add more memory and processors.
This is often called the LEGO block approach.
Shared Memory
In shared memory computers, all processors have access to a single pool of centralized memory with a uniform address space. Any processor can address any memory location at the same speed, so there is Uniform Memory Access time (UMA). Processors communicate with each other through the shared memory. The advantages and disadvantages of shared memory machines are roughly the opposite of distributed memory computers. They are easier to program because they resemble the programming of single processor machines, but they don't scale like their distributed memory counterparts.
Distributed Shared Memory

In Distributed Shared Memory (DSM) computers, memory is physically distributed but logically shared, so it is accessed in a hybrid fashion. Attention to data locality again is important. Distributed shared memory computers combine the best features of both distributed memory computers and shared memory computers. That is, DSM computers have both the scalability of distributed memory computers and the ease of programming of shared memory computers. An example of a DSM computer is the SGI Origin2000.
The dominant memory organization by decade:

1980s: Distributed Memory
1990s: Distributed Shared Memory
2000s: Distributed Memory

The memory organization of the commonly used parallel computers:

Computer               Memory
SGI Origin2000         DSM
HP V-Class             Shared
Cray T3E               Distributed
IBM SP                 Distributed
Workstation Clusters   Distributed
Flow of Control
When you look at the flow of control you will see three kinds of parallel computers: SIMD, MIMD, and SPMD.
Flynn's Taxonomy

Flynn's Taxonomy, devised in 1972 by Michael Flynn of
Stanford University, describes computers by how streams of instructions interact with streams of data. There can be single or multiple instruction streams, and there can be single or multiple data streams. This gives rise to 4 types of computers as shown in the diagram below: Flynn's taxonomy names the 4 computer types SISD, MISD, SIMD and MIMD.
Of these 4, only SIMD and MIMD are applicable to parallel computers. Another computer type, SPMD, is a special case of MIMD.
SIMD Computers
SIMD stands for Single Instruction Multiple Data. Each processor follows the same set of instructions. SIMD computers have a large number of simple processors, and the processors run in lock step: they are commanded by a global controller that sends instructions to each of them. SIMD computers, popular in the 1980s, are useful for fine grain data parallel applications, such as neural networks. Some examples of SIMD computers were the Thinking Machines CM-2 computer and the computers from the MasPar company.
The controller says add, and they all add. It says shift to the right, and they all shift to the right. The processors are like obedient soldiers, marching in unison.
MIMD Computers
MIMD stands for Multiple Instruction Multiple Data. There are multiple instruction streams, with separate code segments distributed among the processors. MIMD is actually a superset of SIMD, so the processors can run the same instruction stream or different instruction streams. In addition, there are multiple data streams; different data elements are allocated to each processor. MIMD computers can have either distributed memory or shared memory. While the processors on SIMD computers run in lock step, the processors on MIMD computers run independently of each other. MIMD computers can be used for either data parallel or task parallel applications. Some examples of MIMD computers are the SGI Origin2000 computer and the HP V-Class computer.
SPMD Computers
SPMD stands for Single Program Multiple Data. SPMD is a special case of MIMD. SPMD execution happens when a MIMD computer is programmed
to have the same set of instructions per processor. With SPMD computers, while the processors are running the same code segment, each processor can run that code segment asynchronously. Unlike SIMD, the synchronous execution of instructions is relaxed. An example is the execution of an if statement on a SPMD computer.
Because each processor computes with its own partition of the data elements, it may evaluate the condition of the if statement differently from another processor. One processor may take a certain branch of the if statement, and another processor may take a different branch of the same if statement. Hence, even though each processor has the same set of instructions, those instructions may be evaluated in a different order from one processor to the next.
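A small sketch of this effect (illustrative, using MPI to obtain a rank; not from the original slides): every process runs the identical code, yet takes a different branch depending on its own data:

program spmd_branch
  use mpi
  implicit none
  integer :: rank, ierr
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  ! every process executes this same if statement,
  ! but evaluates the condition with its own data (here, its rank)
  if (mod(rank, 2) == 0) then
    print *, 'process', rank, 'took the even branch'
  else
    print *, 'process', rank, 'took the odd branch'
  end if
  call MPI_FINALIZE(ierr)
end program spmd_branch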
The analogies we used for describing SIMD computers can be adapted for SPMD computers: the processors follow the same set of orders, but they no longer march in unison.
The flow of control of the commonly used parallel computers:

Computer               Flow of Control
SGI Origin2000         MIMD
HP V-Class             MIMD
Cray T3E               MIMD
IBM SP                 MIMD
Workstation Clusters   MIMD
Agenda
1 Parallel Computing Overview
1.1 Introduction to Parallel Computing 1.2 Comparison of Parallel Computers
1.2.1 Processors
1.2.2 Memory Organization
1.2.3 Flow of Control
1.2.4 Interconnection Networks
1.2.4.1 Bus Network
1.2.4.2 Cross-Bar Switch Network
1.2.4.3 Hypercube Network
1.2.4.4 Tree Network
1.2.4.5 Interconnection Networks Self-test
1.2.5 Summary of Parallel Computer Characteristics
1.3 Summary
Interconnection Networks
What exactly is the interconnection network? The interconnection network is made up of the wires and cables
that define how the multiple processors of a parallel computer are connected to each other and to the memory units. The time required to transfer data is dependent upon the specific type of the interconnection network. This transfer time is called the communication time.
What network characteristics are important?
Diameter: the maximum distance that data must travel for 2 processors to communicate.
Bandwidth: the amount of data that can be sent through a network connection.
Latency: the delay on a network while a data packet is being stored and forwarded.
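A standard way to combine these characteristics into an estimate of the communication time (a common textbook model, not a formula from the original slides) is

\[
T_{\text{comm}} \approx \text{latency} + \frac{\text{message size}}{\text{bandwidth}},
\]

so short messages are dominated by latency while long messages are dominated by bandwidth.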
Types of Interconnection Networks
The network topologies (geometric arrangements of the computer network connections) are:
Bus
Cross-bar Switch
Hypercube
Tree
Interconnection Networks
The aspects of network issues are:
Diameter: a network with a small diameter is preferred, since it shortens the maximum distance data must travel; this distance is the diameter.
Degree: how many communicating wires are coming out of each processor. A large degree is a benefit because it provides multiple paths.
Bus Network
Bus topology is the original coaxial cable-based Local Area Network (LAN) topology in which the medium forms a single bus to which all stations are attached.

The positive aspects: A bus is simple to construct. It is also a mature technology that is well known and reliable. The cost is also very low.

The negative aspects: There is a limited data transmission rate, and because the bus is shared by all of the processors it does not scale to large numbers of processors. Example: SGI Power Challenge.

Cross-Bar Switch Network

A cross-bar switch network works through a switching mechanism to access shared memory. It provides multiple paths between processors and memory, so it scales better than a bus, but it costs more.
The telephone system uses this type of network. An example of a computer with this type of network is the HP V-Class. Here is a diagram of a cross-bar switch network which shows the processors talking through the switchboxes to store or retrieve data in memory. There are multiple paths for a processor to communicate with a certain memory, and the switches determine which path is taken.

Hypercube Network

The processors are connected as if they were the corners of a multidimensional cube. Each node in an N dimensional cube is directly connected to N other nodes. The fact that the number of directly connected, "nearest neighbor", nodes increases with the total size of the network is also highly desirable for a parallel computer. The degree of a hypercube network is log n and the diameter is log n, where n is the number of processors. Examples of computers with this type of network include the Thinking Machines CM-2.
Tree Network
The processors are the bottom nodes of the tree. For a processor to retrieve data, it must go up in the network and then go back down. This is useful for decision making applications that can be mapped as trees. The degree of a tree network is 1. The diameter of the network is 2 log(n+1) - 2, where n is the number of processors. The Thinking Machines CM-5 is an example of a parallel computer with this type of network. Tree networks are very suitable for database applications because they allow multiple searches through the database at a time.
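As a worked illustration of these formulas (the processor counts are chosen for convenience, with base-2 logarithms assumed):

\[
\text{hypercube, } n = 16: \quad \text{degree} = \text{diameter} = \log_2 16 = 4
\]
\[
\text{tree, } n = 15: \quad \text{diameter} = 2\log_2(15+1) - 2 = 2 \cdot 4 - 2 = 6
\]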
Interconnection Networks

Torus Network: A mesh with wrap-around connections in both the x and y directions.
Multistage Network: A network with more than one networking unit.
Fully Connected Network: A network where every processor is connected to every other processor.
Hypercube Network: Processors are connected as if they were corners of a multidimensional cube.
Mesh Network: A network where each interior processor is connected to its four nearest neighbors.
Interconnection Networks

Bus Based Network: Coaxial cable based LAN topology in which the medium forms a single bus to which all stations are attached.
Cross-bar Switch Network: A network that works through a switching mechanism to access shared memory.
Tree Network: The processors are the bottom nodes of the tree.
Ring Network: Each processor is connected to two others and the line of connections forms a circle.
The network topologies compared: Bus, Crossbar, Hypercube, Tree, Torus, Multistage, Fully Connected, Mesh, Ring, and Hybrid.
Summary of Parallel Computer Characteristics

Computer               Memory        Flow of Control   OS       Processor                          Programming
SGI Origin2000         DSM           MIMD              IRIX     MIPS RISC R12000                   OpenMP, MPI
HP V-Class             Shared        MIMD              HP-UX    HP PA 8200                         OpenMP, MPI
Cray T3E               Distributed   MIMD              Unicos   Compaq Alpha                       MPI, SHMEM
IBM SP                 Distributed   MIMD              AIX      IBM Power3                         MPI
Workstation Clusters   Distributed   MIMD              Linux    Intel Pentium III, Intel Itanium   MPI
Summary
This completes our introduction to parallel computing. You have learned about parallelism in computer
programs, and also about parallelism in the hardware components of parallel computers. In addition, you have learned about the commonly used parallel computers, and how these computers compare to each other. There are many good texts which provide an introductory treatment of parallel computing. Here are two useful references:
Highly Parallel Computing, Second Edition. George S. Almasi and Allan Gottlieb. Benjamin/Cummings Publishers, 1994.

Parallel Computing: Theory and Practice. Michael J. Quinn. McGraw-Hill, Inc., 1994.