A Presentation on

Parallel Computing

- Ameya Waghmare (Roll No. 41, BE CSE)

Guided by: Dr. R. P. Adgaonkar (HOD), CSE Dept.
• Parallel computing is a form of computation in which many instructions are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (in parallel).

Why is it required?
With the increased use of computers in every sphere of human activity, computer scientists are faced with two crucial issues today:

• Processing has to be done faster than ever before.
• Larger and more complex computation problems need to be solved.
Increasing the number of transistors as per Moore's Law isn't a solution, as the accompanying frequency scaling drives up power consumption.
Power consumption has been a major issue recently, as it causes processor heating.
The perfect solution is PARALLELISM,
in hardware as well as software.
Difference With Distributed Computing
When different processors/computers work on a single common goal, it is parallel computing.
E.g. ten men pulling a rope to lift one rock; supercomputers implement parallel computing.
Distributed computing is where several different computers work separately on a multi-faceted computing workload.
E.g. ten men pulling ten ropes to lift ten different rocks; employees in an office each doing their own work.
Difference With Cluster Computing
A computer cluster is a group of linked computers working together so closely that in many respects they form a single computer.
E.g. in an office of 50 employees, a group of 15 does one piece of work, 25 another, and the remaining 10 something else.
Similarly, in a network of 20 computers, 16 work on a common goal, while 4 work on some other common goal.
Cluster computing is a specific case of parallel computing.
Difference With Grid Computing
Grid computing makes use of computers communicating over the Internet to work on a given problem.
E.g. three people, one from the USA, another from Japan and a third from Norway, working together online on a common project.
Websites like Wikipedia, Yahoo! Answers, YouTube and Flickr, or an open-source OS like Linux, are examples of grid computing.
Again, it serves as an example of parallel computing.
The Concept Of Pipelining
In computing, a pipeline is a set of data
processing elements connected in series, so
that the output of one element is the input of
the next one. The elements of a pipeline are
often executed in parallel or in time-sliced
fashion; in that case, some amount of buffer
storage is often inserted between elements.
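
As a software illustration, here is a minimal sketch of a three-stage pipeline in Python, built from generators so that each element's output feeds the next (the stage functions and data are made up for the example; in hardware the stages would run simultaneously, with buffers between them):

    # Three data processing elements connected in series:
    # items flow through the pipeline one at a time.

    def produce(limit):
        # Stage 1: generate the raw data.
        for i in range(limit):
            yield i

    def square(values):
        # Stage 2: transform each item as it arrives from stage 1.
        for v in values:
            yield v * v

    def total(values):
        # Stage 3: consume the stream and reduce it to one result.
        return sum(values)

    # Wire the stages together; the output of one is the input of the next.
    print(total(square(produce(10))))  # 285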
Approaches To Parallel Computing
Flynn's Taxonomy
• SISD (Single Instruction, Single Data)
• SIMD (Single Instruction, Multiple Data)
• MISD (Multiple Instruction, Single Data)
• MIMD (Multiple Instruction, Multiple Data)

Approaches Based On Computation
• Massively Parallel
• Embarrassingly Parallel
• Grand Challenge Problems
Massively Parallel Systems
• The term signifies the presence of many independent units, or entire microprocessors, that run in parallel.
• "Massive" connotes hundreds if not thousands of such units.
• Example: the Earth Simulator (a supercomputer, the world's fastest from 2002 to 2004).
Embarrassingly Parallel Systems
An embarrassingly parallel system is one for which no particular effort is needed to segment the problem into a very large number of parallel tasks.
Examples include surfing two websites simultaneously, or running two applications on a home computer.
They lie at the end of the spectrum of parallelisation where tasks can be readily parallelised.
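
A minimal sketch of such a workload in Python, assuming a primality test as the independent per-item task (the task and the worker count are only illustrative): each input is handled with no coordination between tasks, so a process pool can farm them out directly.

    from multiprocessing import Pool

    def is_prime(n):
        # Each call depends only on its own input -- no shared state,
        # which is what makes the problem embarrassingly parallel.
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    if __name__ == "__main__":
        with Pool(processes=4) as pool:  # worker count is illustrative
            flags = pool.map(is_prime, range(2, 50))
        print([n for n, f in zip(range(2, 50), flags) if f])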
Grand Challenge Problems
A grand challenge is a fundamental problem in science or engineering, with broad applications, whose solution would be enabled by the application of high-performance computing resources that could become available in the near future.
Grand Challenges were US policy terms set as goals in the late 1980s for funding high-performance computing and communications research, in part in response to the Japanese 5th Generation (or Next Generation) 10-year project.
Types Of Parallelism

• Bit-Level
• Instruction-Level
• Data
• Task
Bit-Level Parallelism
When an 8-bit processor needs to add two 16-bit integers, it has to do so in two steps:
• The processor must first add the 8 lower-order bits from each integer using the standard addition instruction,
• then add the 8 higher-order bits using an add-with-carry instruction and the carry bit from the lower-order addition.
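
A minimal sketch of this two-step sequence in Python, emulating the 8-bit operations with masks (the function name and the test values are made up for illustration):

    def add16_on_8bit(x, y):
        # Split each 16-bit integer into 8-bit halves.
        x_lo, x_hi = x & 0xFF, (x >> 8) & 0xFF
        y_lo, y_hi = y & 0xFF, (y >> 8) & 0xFF

        # Step 1: standard addition on the lower-order bits.
        lo_sum = x_lo + y_lo
        lo, carry = lo_sum & 0xFF, lo_sum >> 8

        # Step 2: add-with-carry on the higher-order bits.
        hi = (x_hi + y_hi + carry) & 0xFF
        return (hi << 8) | lo

    assert add16_on_8bit(0x12FF, 0x0001) == 0x1300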
Instruction Level Parallelism
The instructions given to a computer for processing can be divided into groups, or re-ordered, and then processed without changing the final result. This is known as instruction-level parallelism (ILP).
An Example
1. e = a + b
2. f = c + d
3. g = e * f
Here, instruction 3 depends on instructions 1 and 2.
However, instructions 1 and 2 can be processed independently.
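
In a real processor this independence is exploited by the hardware itself; as a rough software analogy in Python (with made-up values for a to d), instructions 1 and 2 can be issued concurrently while instruction 3 waits on both:

    from concurrent.futures import ThreadPoolExecutor

    a, b, c, d = 1, 2, 3, 4  # illustrative inputs

    with ThreadPoolExecutor(max_workers=2) as pool:
        e = pool.submit(lambda: a + b)   # instruction 1
        f = pool.submit(lambda: c + d)   # instruction 2, independent of 1
        g = e.result() * f.result()      # instruction 3 waits on 1 and 2

    print(g)  # (1 + 2) * (3 + 4) = 21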
Data Parallelism
• Data parallelism focuses on distributing the data across different parallel computing nodes.
• It is also called loop-level parallelism.
An Illustration
In a data parallel implementation, CPU A could add all elements from the top half of the matrices, while CPU B could add all elements from the bottom half. Since the two processors work in parallel, the matrix addition would take half the time of performing the same operation in serial using one CPU alone.
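
A minimal sketch of this illustration in Python, splitting a small matrix addition between two worker processes (the matrices and worker count are made up; real data-parallel code would usually operate on much larger arrays):

    from multiprocessing import Pool

    def add_rows(pair):
        # Add the corresponding rows of the two half-matrices.
        rows_a, rows_b = pair
        return [[x + y for x, y in zip(ra, rb)]
                for ra, rb in zip(rows_a, rows_b)]

    if __name__ == "__main__":
        A = [[1, 2], [3, 4], [5, 6], [7, 8]]
        B = [[8, 7], [6, 5], [4, 3], [2, 1]]
        half = len(A) // 2
        halves = [(A[:half], B[:half]),   # top half -> one worker ("CPU A")
                  (A[half:], B[half:])]   # bottom half -> the other ("CPU B")
        with Pool(processes=2) as pool:
            top, bottom = pool.map(add_rows, halves)
        print(top + bottom)  # [[9, 9], [9, 9], [9, 9], [9, 9]]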
Task Parallelism
Task parallelism focuses on distributing tasks across different processors.
It is also known as functional parallelism or control parallelism.
An Example
As a simple example, if we are running code on a 2-processor system (CPUs "a" and "b") in a parallel environment and we wish to do tasks "A" and "B", it is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the runtime of the execution.
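
A minimal sketch of this in Python, with two made-up functions standing in for tasks "A" and "B" (which CPU each process lands on is left to the operating system):

    from multiprocessing import Process

    def task_a():
        # A stand-in for task "A": some numeric work.
        print("task A:", sum(range(1000)))

    def task_b():
        # A stand-in for task "B": some unrelated string work.
        print("task B:", "-".join(["x"] * 5))

    if __name__ == "__main__":
        pa, pb = Process(target=task_a), Process(target=task_b)
        pa.start(); pb.start()   # both tasks run simultaneously
        pa.join(); pb.join()     # wait for both to finish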
Key Difference Between Data And Task Parallelism
Data Parallelism:
• It is the division of a thread (process), instruction or task internally into sub-parts for execution.
• A task 'A' is divided into sub-parts, which are then processed.
Task Parallelism:
• It is the division among threads (processes), instructions or tasks themselves for execution.
• A task 'A' and a task 'B' are processed separately by different processors.
Implementation Of Parallel Computing In Software
When implemented in software (or rather, in algorithms), the terminology calls it 'parallel programming'.
An algorithm is split into pieces and then executed, as seen earlier.
Important Points In Parallel Programming
Dependencies: a typical scenario is when line 6 of an algorithm depends on lines 2, 3, 4 and 5.
Application checkpoints: saving the state of the running algorithm, like creating a backup point.
Automatic parallelisation: identifying dependencies and parallelising algorithms automatically. This has achieved only limited success.
Implementation Of Parallel Computing In Hardware
When implemented in hardware, it is called 'parallel processing'.
Typically, a chunk of the execution load is divided up for processing by units like cores, processors, CPUs, etc.
An Example: Intel Xeon Series Processors
References
http://portal.acm.org/citation.cfm?id=290768&coll=portal&dl=ACM
http://www-users.cs.umn.edu/~karypis/parbook/
www.cs.berkeley.edu/~yelick/cs267-sp04/lectures/01/lect01-intro
www.cs.berkeley.edu/~demmel/cs267_Spr99/Lectures/Lect_01_1999b
http://www.intel.com/technology/computing/dual-core/demo/popup/dualcore.swf
www.parallel.ru/ftp/computers/intel/xeon/24896607.pdf
www.intel.com
Thank You!
ANY QUERIES?