
VLSI Computations

EVOLUTION OF COMPUTER SYSTEMS

Over the past four decades the computer industry has experienced four
generations of development, physically marked by the rapid changing of building blocks
from relays and vacuum tubes (1940-1950s) to discrete diodes and transistors
(1950-1960s), to small- and medium-scale integrated (SSI/MSI) circuits (1960-1970s),
and to large- and very-large-scale integrated (LSI/VLSI) devices (1970s and beyond).
Increases in device speed and reliability and reductions in hardware cost and physical
size have greatly enhanced computer performance. However, better devices are not the
sole factor contributing to high performance. Ever since the stored-program concept of
von Neumann, the computer has been recognized as more than just a hardware
organization problem. A modern computer system is really a composite of such items as
processors, memories, functional units, interconnection networks, compilers, operating
systems, peripheral devices, communication channels, and database banks.

To design a powerful and cost-effective computer system and to devise efficient
programs to solve a computational problem, one must understand the underlying
hardware and software system structures and the computing algorithm to be implemented
on the machine with some user-oriented programming languages. These disciplines
constitute the technical scope of computer architecture. Computer architecture is really a
system concept integrating hardware, software, algorithms, and languages to perform
large computations. A good computer architect should master all these disciplines. It is
the revolutionary advances in integrated circuits and system architecture that have
contributed most to the significant improvement of computer performance during the past
40 years. In this section, we review the generations of computer systems and indicate the
general trends in the development of high-performance computers.

Generation of Computer Systems

The division of computer systems into generations is determined by the device
technology, system architecture, processing mode, and languages used. We consider each
generation to have a time span of about 10 years. Adjacent generations may overlap by
several years, as demonstrated in the figure. The long time span is intended to cover both
the development and the use of the machines in various parts of the world. We are currently in
the fourth generation, while the fifth generation has not yet materialized.

[Figure: Computer generations versus time, roughly 1940 to 1990, with adjacent generations overlapping by several years.]

The Future
Computers to be used in the 1990s may be the next generation. Very large-scale
integrated (VLSI) chips will be used along with high-density modular design.
Multiprocessor systems, like the 16-processor S-1 system at Lawrence Livermore National
Laboratory and Denelcor's HEP, will be required. The Cray-2, to be delivered in 1985, is
expected to have four processors. More than 1000 million floating-point operations per
second (megaflops) are expected in these future supercomputers.

NEED FOR PARALLEL PROCESSING


Achieving high performance depends not only on using faster and more reliable
hardware devices, but also on major improvements in computer architecture and
processing techniques. State-of-the-art parallel computer systems can be characterized by
three structural classes: pipelined computers, array processors, and multiprocessor
systems. Parallel processing computers provide a cost-effective means to achieve high
system performance through concurrent activities.
Parallel computers are those systems that emphasize parallel processing. They are
divided into three architectural configurations:
• Pipeline computers
• Array processors
• Multiprocessor systems
A pipeline computer performs overlapped computations to exploit temporal
parallelism. An array processor uses multiple synchronized ALUs to achieve spatial
parallelism. A multi-processor system achieves asynchronous parallelism through a set of
interactive processors with shared resources.
New computing concepts include data flow computers and VLSI algorithmic
processors. These new approaches demand extensive hardware to achieve parallelism.
VLSI Computing Structures: The rapid advent of VLSI technology has created a new
architectural horizon in implementing parallel algorithms directly in hardware. We will
study the VLSI computing algorithms in detail.

VLSI COMPUTING STRUCTURES

Highly parallel computing structures promise to be a major application area for
the million-transistor chips that will be possible in just a few years. Such computing
systems have structural properties that are suitable for VLSI implementation. Almost by
definition, parallel structures imply a basic computational element repeated perhaps
hundreds or thousands of times. This architectural style immediately reduces the design
problem by similar orders of magnitude. In this section, we examine some VLSI
computing structures that have been suggested by computer researchers. We begin with a
characterization of the systolic architecture. Then we describe methodologies for
mapping parallel algorithms into processor arrays. Finally, we present reconfigurable
processor arrays for designing algorithmically specialized, modularly structured VLSI
computing structures. Described below are key attributes of VLSI computing structures.

Simplicity and regularity


Cost effectiveness has always been a major concern in designing special-purpose
VLSI systems; their cost must be low enough to justify their limited applicability.
Special-purpose design costs can be reduced by the use of appropriate architectures. If a
structure can truly be decomposed into a few types of building blocks which are used
repetitively with simple interfaces, great savings can be achieved. This is especially true
for VLSI designs, where a single chip comprises hundreds of thousands of identical
components. To cope with that complexity, simple and regular designs are essential.
A VLSI system based on a simple, regular layout is likely to be modular and adjustable to
various performance levels.

Concurrency and communication


Since the technological trend clearly indicates a diminishing growth rate for
component speed, any major improvement in computation speed must come from the
concurrent use of many processing elements. The degree of concurrency in a VLSI
computing structure is largely determined by the underlying algorithm. Massive
parallelism can be achieved if the algorithm is designed to introduce high degrees of
pipelining and multiprocessing. When a large number of processing elements work
simultaneously, coordination and communication become significant – especially with
VLSI technology, where routing costs dominate the power, time, and area required to
implement a computation. The issue here is to design algorithms that support high
degrees of concurrency and, in the meantime, employ only simple, regular
communication and control to allow efficient implementation. The locality of
interprocessor communication is a desired feature to have in any processor array.

Computation intensive
VLSI processing structures are suitable for implementing compute-bound
algorithms rather than I/O-bound computations. In a compute-bound algorithm, the
number of computing operations is larger than the total number of input and output
elements. Otherwise, the problem is I/O-bound. For example, the matrix-matrix
multiplication algorithm represents a compute-bound task, which has O(n^3) multiply-add
steps but only O(n^2) I/O elements. On the other hand, adding two matrices is I/O-bound,
since there are n^2 adds and 3n^2 I/O operations for the two input matrices and one output
matrix. The I/O-bound problems are not suitable for VLSI because VLSI packaging is
constrained by limited I/O pins. A VLSI device must balance its computation with
the I/O bandwidth. Knowing the I/O-imposed performance limit helps prevent overkill in
the design of a special-purpose VLSI device.
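
The compute-bound versus I/O-bound distinction can be made concrete with a small sketch. The following Python fragment, an illustrative calculation rather than part of the original discussion, compares the operation-to-I/O ratios of matrix multiplication and matrix addition for a few problem sizes.

# Compare compute-to-I/O ratios for n x n matrix operations.
def ratios(n):
    # Matrix-matrix multiply: about n^3 multiply-add steps, 3*n^2 I/O elements.
    matmul_ops, matmul_io = n**3, 3 * n**2
    # Matrix addition: n^2 adds, 3*n^2 I/O elements (two inputs, one output).
    matadd_ops, matadd_io = n**2, 3 * n**2
    return matmul_ops / matmul_io, matadd_ops / matadd_io

for n in (10, 100, 1000):
    mm, ma = ratios(n)
    print(f"n={n:5d}  matmul ops/IO = {mm:8.1f}   matadd ops/IO = {ma:5.2f}")

The multiplication ratio grows with n, so larger problems amortize each I/O transfer over more arithmetic; the addition ratio stays fixed at 1/3 no matter how large the problem gets.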

ELECTRON BEAM LITHOGRAPHY

Electron beam lithography (EBL) utilizes the fact that certain chemicals change
their properties when irradiated with electrons, just as a photographic film does when
irradiated with light. The electron beam is generated in a scanning electron microscope,
which normally is set up to provide an image of an object by rastering a well-focused
beam of electrons over it. Collecting the electrons that are scattered or emitted from the
object at each raster point provides an image. With computer control of the position of
the electron beam, it is possible to write arbitrary structures onto a surface.

The steps to produce a structure by EBL are as follows. The sample is covered with a
thin layer of PMMA, and the desired structure is then exposed with a certain dose of
electrons. The exposed PMMA changes its solubility towards certain chemicals, which can
be used to produce a trench in the thin layer. If one wants to produce a metallic structure,
a metal film is evaporated onto the sample, and after dissolving the unexposed PMMA
together with its cover (lift-off), the desired metallic nanostructure remains on the substrate.

THE SYSTOLIC ARRAY ARCHITECTURE


The choice of an appropriate architecture for any electronic system is very closely
related to the implementation technology. This is especially true in VLSI. The
constraints of power dissipation, I/O pin count, relatively long communication delays,
difficulty in design and layout, etc., all important problems in VLSI, are much less
critical in other technologies. As a compensation, however, VLSI offers very fast and
inexpensive computational elements with some unique and exciting properties. For
example, bi-directional transmission gates (Pass transistors) enable a full barrel shifter to
be configured in a very compact NMOS array.
Properly designed parallel structures that need to communicate only with their
nearest neighbors will gain the most from very-large-scale integration. Precious time is
lost when modules that are far apart must communicate. For example, the delay in
crossing a chip on polysilicon, one of the three primary interconnect layers on an NMOS
chip, can be 10 to 50 times the delay of an individual gate. The architect must keep this
communication bottleneck uppermost in mind when evaluating possible structures
and architectures for implementation in VLSI.
The systolic architectural concept was developed by Kung and associates at
Carnegie-Mellon University, and many versions of systolic processors are being designed
by universities and industrial organizations. This subsection reviews the basic principle
of systolic architectures and explains why they should result in cost-effective, high-
performance, special-purpose systems for a wide range of potential applications.
A systolic system consists of a set of interconnected cells, each capable of
performing some simple operation. Because simple, regular communication and control
structures have substantial advantages over complicated ones in design and
implementation, cells in a systolic system are typically interconnected to form a systolic
array or a systolic tree. Information in a systolic system flows between cells in a
pipelined fashion, and communication with the outside world occurs only at the
“boundary” cells. For example, in a systolic array, only those cells on the array
boundaries may be I/O ports for the system.
The basic principle of a systolic array is illustrated in Figure 10.25. By replacing a
single processing element with an array of PEs, a higher computation throughput can be
achieved without increasing memory bandwidth. The function of the memory in the
diagram is analogous to that of the heart: it “pulses” data through the array of PEs. The
crux of this approach is to ensure that once a data item is brought out from the memory it
can be used effectively at each cell it passes. This is possible for a wide class of
compute-bound computations in which multiple operations are performed on each data item
in a repetitive manner.
Suppose each PE in the figure operates with a clock period of 100 ns. The
conventional memory-processor organization in the figure has at most a performance of 5
million operations per second. With the same clock rate, the systolic array will result in
30 MOPS performance. This gain in processing speed can also be justified by the fact
that the number of pipeline stages has been increased six times in the figure. Being able
to use each input data item a number of times is just one of the many advantages of the
systolic approach. Other advantages include modular expansibility, simple and
regular data and control flows, use of simple and uniform cells, elimination of global
broadcasting, limited fan-in, and fast response time.

[Figure 10.25: (a) The conventional processor: a memory feeding a single PE. (b) A systolic processor array: a memory feeding a chain of six PEs.]
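
The throughput figures above can be checked with a short calculation. One plausible accounting, consistent with the 5 MOPS and 30 MOPS numbers in the text, is sketched below: the memory delivers one word per 100 ns cycle, a lone PE spends a read and a write per operation, and a six-PE systolic chain reuses every word it fetches at each cell.

# Throughput of a conventional processor versus a 6-PE systolic chain,
# assuming the memory supplies one word every 100 ns cycle.
memory_cycle_ns = 100
words_per_sec = 1e9 / memory_cycle_ns            # 10 million words per second
conventional_mops = words_per_sec / 2 / 1e6      # read + write per operation -> 5 MOPS
num_pes = 6
systolic_mops = conventional_mops * num_pes      # each word reused by every PE -> 30 MOPS
print(conventional_mops, systolic_mops)          # 5.0 30.0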

The basic processing cell used in the construction of systolic arithmetic arrays is the
additive-multiply cell specified in Figure 3.29. This cell has three inputs a, b, c and
three outputs a' = a, b' = b, and d = c + a * b. One can assume that six interface registers
are attached at the I/O ports of a processing cell. All registers are clocked for
synchronous transfer of data among adjacent cells. The additive-multiply operation is
needed in performing the inner product of two vectors, matrix-matrix multiplication,
matrix inversion, and L-U decomposition of a dense matrix.
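
A behavioural sketch of such a cell is given below; Python is used purely for illustration, since the actual cell would be a small piece of clocked logic with six interface registers.

# Behavioural model of an additive-multiply cell: inputs a, b, c;
# outputs a' = a, b' = b, d = c + a*b.
def additive_multiply_cell(a, b, c):
    """Pass a and b through unchanged and emit the accumulated product."""
    return a, b, c + a * b

# Inner product of two vectors, chaining each cell's d output into the next
# step's c input, the way partial sums travel through a systolic array.
def inner_product(xs, ys):
    d = 0
    for a, b in zip(xs, ys):
        _, _, d = additive_multiply_cell(a, b, d)
    return d

print(inner_product([1, 2, 3], [4, 5, 6]))   # 32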

Illustrated below is the construction of a systolic array for the multiplication of
two banded matrices. An example of band matrix multiplication is shown in the figure.
Matrix A has a bandwidth (3 + 2) – 1 = 4 and matrix B has a bandwidth (2 + 3) – 1 = 4
along their principal diagonals. The product matrix C = AB then has a bandwidth (4 + 4)
– 1 = 7 along its principal diagonal. Note that all three matrices have dimension
n x n, as shown by the dotted entries. A matrix of bandwidth w may have w diagonals
that are not all zeros; the entries outside the diagonal band are all zeros.
It requires w1 x w2 processing cells to form a systolic array for the multiplication
of two sparse matrices of bandwidths w1 and w2, respectively. The resulting product
matrix has a bandwidth of w1 + w2 – 1. For this example, w1 x w2 = 4 x 4 = 16 multiply
cells are needed to construct the systolic array shown in the figure. It should be noted that
the size of the array is determined by the bandwidths w1 and w2, independent of the
dimension n x n of the matrices. Data flows in this diamond-shaped systolic array are
indicated by the arrows among the processing cells.

The elements of the A = (aij) and B = (bij) matrices enter the array along the two
diagonal data streams. The initial values of the C = (cij) entries are zeros. The outputs at the
top of the vertical data stream give the product matrix. Three data streams flow through
the array in a pipelined fashion. Let the time delay of each processing cell be one unit
time. This systolic array can finish the band matrix multiplication in T time units, where

T = 3n + min(w1, w2)

Therefore, the computation time is linearly proportional to the dimension n of the
matrix. When the matrix bandwidths increase to w1 = w2 = n (for dense matrices A and
B), the time becomes O(4n), neglecting the I/O time delays. If one used a single
additive-multiply processor to perform the same matrix multiplication, O(n^3)
computation time would be needed. The systolic multiplier thus has a speed gain of
O(n^2). For large n, this improvement in speed is rather impressive.
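
The resource and time formulas quoted above can be tabulated with a small helper function (an illustrative calculation only; the hexagonal data flow itself is not simulated here).

# Cells and time for multiplying two banded n x n matrices of bandwidths
# w1 and w2 on the systolic array described in the text.
def systolic_band_matmul_cost(n, w1, w2):
    cells = w1 * w2                    # number of additive-multiply cells
    out_bandwidth = w1 + w2 - 1        # bandwidth of the product matrix
    time_units = 3 * n + min(w1, w2)   # T = 3n + min(w1, w2)
    return cells, out_bandwidth, time_units

# The example in the text: w1 = w2 = 4.
print(systolic_band_matmul_cost(100, 4, 4))      # (16, 7, 304)
# Dense case w1 = w2 = n: time grows as about 4n, versus O(n^3) sequentially.
print(systolic_band_matmul_cost(100, 100, 100))  # (10000, 199, 400)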
VLSI systolic arrays can assume many different structures for different compute-
bound algorithms. The figure shows various systolic array configurations, and their potential
usage in performing computations is listed in the table. These computations form
the basis of signal and image processing, matrix arithmetic, combinatorial, and database
algorithms. Due to their simplicity and strong appeal to intuition, systolic techniques
have attracted a great deal of attention recently. However, the implementation of systolic
arrays on a VLSI chip has many practical constraints.

[Figure: Systolic array configurations: (a) one-dimensional linear array, (b) two-dimensional square array, (c) two-dimensional hexagonal array, (d) binary tree, (e) triangular array.]

IMPLEMENTING A SYSTOLIC ARRAY PROCESSOR

The major problem with a systolic array is still its I/O barrier. The globally
structured systolic array can speed up computations only if the I/O bandwidth is high.
With current IC packaging technology, only a small number of I/O pins can be used for a
VLSI chip. For example, a systolic array of n^2 processing cells can perform the L-U
decomposition in 4n time units. However, such a systolic array on a single chip may
require 4n x w I/O terminals, where w is the word length.

Table: Computation functions and desired VLSI structures

Processor array structure   Computation functions

1-D linear arrays           FIR filter, convolution, discrete Fourier transform (DFT),
                            solution of triangular linear systems, carry pipelining,
                            Cartesian product, odd-even transposition sort, real-time
                            priority queue, pipeline arithmetic units.
2-D square arrays           Dynamic programming for optimal parenthesization, graph
                            algorithms involving adjacency matrices.
2-D hexagonal arrays        Matrix arithmetic (matrix multiplication, L-U
                            decomposition by Gaussian elimination without pivoting,
                            QR-factorization), transitive closure, pattern matching, DFT,
                            relational database operations.
Trees                       Searching algorithms (queries on nearest neighbor, rank,
                            etc.; systolic search trees), parallel function evaluation,
                            recurrence evaluation.
Triangular arrays           Inversion of a triangular matrix, formal language recognition.

For large n (say n > 1000) with a typical operand width of w = 32 bits, it is rather
impractical to fabricate an n x n systolic array on a monolithic chip with over 4n x w =
128,000 I/O terminals. Of course, I/O port sharing and time-division multiplexing can be
used to alleviate the problem. But still, I/O is the bottleneck. Until the I/O problem can
be satisfactorily solved, systolic arrays can only be constructed in small sizes.
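
A quick estimate of this pin problem, under the same accounting as the passage (4n boundary terminals, each w bits wide), is sketched below; the time-division multiplexing factor is an assumption for illustration.

# I/O terminals needed by an n x n systolic array with word length w,
# assuming 4n boundary ports of w bits each, and the reduction obtained
# by time-division multiplexing mux_factor ports onto one pin group.
def io_pins(n, w, mux_factor=1):
    return 4 * n * w // mux_factor

print(io_pins(1000, 32))                 # 128000 pins: clearly impractical
print(io_pins(1000, 32, mux_factor=64))  # still 2000 pins after 64-way muxing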

MAPPING ALGORITHMS INTO VLSI ARRAYS


Procedures to map cyclic loop algorithms into special-purpose VLSI arrays are
described below. The method is based on mathematical transformation of the index sets
and the data-dependence vectors associated with a given algorithm. After the algorithmic
transformation, one can devise a more efficient array structure that can better exploit
parallelism and pipelining by removing unnecessary data dependencies.
The exploitation of parallelism is often necessary because computational
problems are larger than a single VLSI device can process at a time. If a parallel
algorithm is structured as a network of smaller computational modules, then these
modules can be assigned to different VLSI devices. The communication between these
modules and their operation control dictates the structure of the VLSI system and its
performance. Figure 10.28 shows a simple organization of a computer system consisting
of several VLSI devices shared by two processors through a resource arbitration network.
The I/O bottleneck problem in a VLSI system imposes a serious restriction
on algorithm design. The challenge is to design parallel algorithms which
can be partitioned such that the amount of communication between modules is as small as
possible. Moreover, data entering the VLSI device should be utilized exhaustively before
passing again through the I/O ports. A global model of the VLSI processor array can be
formally described by a 3-tuple (G, F, T), where G is the network geometry, F is the cell
function, and T is the network timing. These features are described separately below.

The network geometry G refers to the geometrical layout of the network. The
position of each processing cell in the plane is described by its Cartesian coordinates.
Then, the interconnection between cells can easily be described by the position of the
terminal cells. These interconnections support the flow of data through the network; a
link can be dedicated to only one data stream of variables, or it can be used for the
transport of several data streams at different time instants. A simple and regular
geometry is desired in order to maintain local communications.

[Figure 10.28: A dual-processor system with shared memory modules and a shared VLSI resource pool. The two processors access the shared memory modules through an interconnection network and the pool of VLSI devices through a resource arbitration network.]

The functions F associated with each processing cell represent the totality of
arithmetic and logic expressions that the cell is capable of performing. We assume that
each cell consists of a small number of registers, an ALU, and control logic. Several
different types of processing cells may coexist in the same network; however, one design
goal should be the reduction of the number of cell types used.

The network timing T specifies, for each cell, the times at which the processing of
the functions F occurs and at which the data communications take place. Correct timing
assures that the right data reach the right place at the right time. The speed of a data
stream through the network is given by the ratio of the length of the communication link
to the communication time. Networks with constant data speeds are preferable because
they require simpler control logic.
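
One way to make the (G, F, T) description concrete is the small data-structure sketch below; the field names and the tiny two-cell example are invented for illustration and are not part of any standard notation.

# A minimal representation of the (G, F, T) model of a VLSI processor array.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Coord = Tuple[int, int]                     # Cartesian position of a cell

@dataclass
class ProcessorArrayModel:
    geometry: List[Tuple[Coord, Coord]]     # G: links given by the positions of terminal cells
    cell_function: Dict[Coord, Callable]    # F: arithmetic/logic function of each cell
    timing: Dict[Coord, List[int]]          # T: clock steps at which each cell fires

def mac(a, b, c):
    return c + a * b                        # an additive-multiply cell function

# A 1 x 2 array whose two cells perform multiply-accumulate on alternate steps.
model = ProcessorArrayModel(
    geometry=[((0, 0), (0, 1))],
    cell_function={(0, 0): mac, (0, 1): mac},
    timing={(0, 0): [0, 2, 4], (0, 1): [1, 3, 5]},
)
print(len(model.geometry), model.timing[(0, 1)])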

The basic structural features of an algorithm are dictated by the data and control
dependencies. These dependencies refer to precedence relations of computations which
need to be satisfied in order to compute correctly. The absence of dependencies indicates
the possibility of simultaneous computations. These dependencies can be studied at
several distinct levels: blocks of computations level, statement (or expression) level,
variable level, and even bit level. Since we concentrate on algorithms for VLSI systolic
arrays, we will focus only on data dependencies at the variable level.
Consider a FORTRAN loop structure of the form:

    DO 10 I1 = l1, u1
      DO 10 I2 = l2, u2
        :
          DO 10 In = ln, un                  (10.4)
            S1(I)
            S2(I)
            :
            SN(I)
 10 CONTINUE

where the lj and uj are integer-valued linear expressions involving I1, ..., Ij-1, and
I = (I1, I2, ..., In) is the index vector. S1, S2, ..., SN are assignment statements of the form
x = E, where x is a variable and E is an expression of some input variables.
The index set of the loop in Eq. (10.4) is defined by:

    {(I1, ..., In) : l1 ≤ I1 ≤ u1, ..., ln ≤ In ≤ un}
Consider two statements S(I1) and S(I2) which perform the functions f(I1) and
g(I2), respectively. Let V1(f(I1)) and V2(g(I2)) be the output variables of the two
statements respectively.

Variable V2(g(I2)) is said to be dependent on variable V1(f(I1)), denoted
V1(f(I1)) → V2(g(I2)), if (i) I1 < I2 (less in the lexicographical sense); (ii) f(I1) = g(I2);
and (iii) V1(f(I1)) is an input variable in statement S(I2). The difference of their index
vectors, d = I2 – I1, is called the data-dependence vector. In general, an algorithm is
characterized by a number of data-dependence vectors, which are functions of elements
of the index set defined above. There is a large class of algorithms which have fixed, or constant,
data-dependence vectors.
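
As an illustration of constant data-dependence vectors, the sketch below instruments a simple two-level recurrence (not the L-U example) and collects the index differences d = I2 - I1 between each read and the iteration that produced the value.

# Collect data-dependence vectors d = I2 - I1 for the recurrence
# a[i][j] = a[i-1][j] + a[i][j-1], whose dependencies are constant.
def dependence_vectors(n):
    producer = {}        # variable instance -> index vector that wrote it
    vectors = set()
    for i in range(n):
        for j in range(n):
            for src in ((i - 1, j), (i, j - 1)):   # inputs read at (i, j)
                if src in producer:
                    i1, j1 = producer[src]
                    vectors.add((i - i1, j - j1))
            producer[(i, j)] = (i, j)              # (i, j) is written here
    return vectors

print(dependence_vectors(4))   # {(1, 0), (0, 1)} (set order may vary)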

The transformation of the index set described above is the key towards an
efficient mapping of the algorithm into special-purpose VLSI arrays. The following
procedure is suggested to map loop algorithms into VLSI computing structures.

Mapping procedure
1. Pipeline all variables in the algorithm.
2. Find the set of data-dependence vectors.
3. Identify a valid transformation for the data-dependence vectors and the index set.
4. Map the algorithm into hardware structure.
5. Prove correctness and analyze performance.

We consider an example algorithm to illustrate the above procedure: the L-U
decomposition of a matrix A into lower- and upper-triangular matrices by Gaussian
elimination without pivoting. It is shown that better interconnection architectures can be
formally derived by using appropriate algorithm transformations.

Example: The L-U decomposition algorithm is expressed by the following
program:

for k ← 0 until n-1 do
begin
    u_kk ← 1/a_kk
    for j ← k+1 until n-1 do
        u_kj ← a_kj
    for i ← k+1 until n-1 do
        l_ik ← a_ik * u_kk
    for i ← k+1 until n-1 do
        for j ← k+1 until n-1 do
            a_ij ← a_ij - l_ik * u_kj
end.

This program can be rewritten into the following equivalent form in which all the
variables have been pipelined and all the data broadcasts have been eliminated:

for k ← 0 until n-1 do
begin
    1:  i ← k; j ← k
        u^i_kj ← 1/a^k_ij
    for j ← k+1 until n-1 do
    2:  begin
            i ← k
            u^i_kj ← a^k_ij
        end
    for i ← k+1 until n-1 do
    3:  begin
            j ← k
            u^i_kj ← u^(i-1)_kj
            l^j_ik ← a^k_ij * u^i_kj
        end
    for i ← k+1 until n-1 do
        for j ← k+1 until n-1 do
    4:      begin
                l^j_ik ← l^(j-1)_ik
                u^i_kj ← u^(i-1)_kj
                a^k_ij ← a^(k-1)_ij - l^(j-1)_ik * u^(i-1)_kj
            end
end
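
For reference, a plain sequential rendering of the first (non-pipelined) program is sketched below in Python; it follows that program literally, including the reciprocal pivot stored on the diagonal of U, and assumes no pivoting is needed (all leading minors nonsingular).

import numpy as np

def lu_no_pivot(a):
    """Follow the first program above: u[k][k] holds 1/a[k][k]; no pivoting."""
    a = np.array(a, dtype=float)
    n = a.shape[0]
    l = np.eye(n)                     # unit lower-triangular factor
    u = np.zeros((n, n))
    for k in range(n):
        u[k, k] = 1.0 / a[k, k]
        u[k, k + 1:] = a[k, k + 1:]
        l[k + 1:, k] = a[k + 1:, k] * u[k, k]
        a[k + 1:, k + 1:] -= np.outer(l[k + 1:, k], u[k, k + 1:])
    return l, u

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_no_pivot(A)
# Undo the reciprocal diagonal before checking that L * U reproduces A.
U_check = U.copy()
np.fill_diagonal(U_check, 1.0 / np.diag(U_check))
print(np.allclose(L @ U_check, A))    # True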

The data dependencies for this three-loop algorithm have the nice property that

d1 = (1, 0, 0)^T
d2 = (0, 1, 0)^T
d3 = (0, 0, 1)^T

We can write the above in matrix form as D = [d1, d2, d3] = I, the identity matrix. There
are several other algorithms which lead to these simple data dependencies, and they were
among the first to be considered for VLSI implementation.

The next step is to identify a linear transformation T that modifies the data
dependencies to T·D = S, where S = [s1, s2, s3] represents the modified data
dependencies in the new index space, which is selected a priori. This transformation T
must offer the maximum concurrency by minimizing data dependencies, and T must be a
bijection. A large number of choices exist, each leading to a different array geometry.
We choose the following one:

        | 1  1  1 |                 | k' |       | k |
    T = | 0  1  0 |   such that     | i' |  =  T | i |          (10.9)
        | 0  0  1 |                 | j' |       | j |

The original indices (k, i, j) are transformed by T into the new indices (k', i', j'). The
organization of the VLSI array for n = 5 generated by this transformation is shown in
Figure 10.29.
In this architecture, the variables a^k_ij do not travel in space but are updated in time.
The variables l^j_ik move along the direction j (east, with a speed of one grid per time unit),
and the variables u^i_kj move along the direction i (south) with the same speed. The network is
loaded initially with the coefficients of A, and at the end the cells below the diagonal
contain L and the cells above the diagonal contain U.
The processing time of this square array is 3n – 5. All the cells have the same
architecture; however, their functions at any given moment may differ. It can be seen
from the pipelined program above that some cells may execute loop four while others
execute loops two or three. If we wish to assign the same loops only to specific cells,
then the mapping must be changed accordingly.
For example, the following transformation:

        |  1  1  1 |
    T = | -1  1  0 |
        | -1  0  1 |

introduces a new data communication link between cells toward the north-west. These new
links support the movement of the variables a^k_ij. According to this new transformation,
the cells of the first row always compute loop two, the cells of the first column compute
loop three, and the rest compute loop four. The reader can now easily identify some
other valid transformations which will lead to different array organizations.
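
A short NumPy check of the two transformations, using the matrices exactly as given above, shows how the dependence vectors and an index point are moved into the new index space.

import numpy as np

# Data-dependence matrix D = I and the two transformations discussed above.
D = np.eye(3, dtype=int)
T1 = np.array([[1, 1, 1],
               [0, 1, 0],
               [0, 0, 1]])
T2 = np.array([[ 1, 1, 1],
               [-1, 1, 0],
               [-1, 0, 1]])

# Modified dependencies S = T.D; every column has first entry 1, so each
# dependence advances the time-like coordinate k' = k + i + j by one step.
print(T1 @ D)
print(T2 @ D)

# Transforming one index point (k, i, j) = (2, 3, 4):
print(T1 @ np.array([2, 3, 4]))   # -> (9, 3, 4)
print(T2 @ np.array([2, 3, 4]))   # -> (9, 1, 2)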

The design of algorithmically specialized VLSI devices is at its beginning. The
development of specialized devices to replace mathematical software is feasible but still
very costly. Several important technical issues remain unresolved and deserve further
investigation. Some of these are: I/O communication in VLSI technology, partitioning of
algorithms to maintain their numerical stability, and minimization of the communication
among computational blocks.

IMAGINE STREAM PROCESSOR


(An Example of a VLSI Implemented Processor)
The focus of the Imagine project is to develop a programmable architecture that
achieves the performance of special-purpose hardware on graphics and image/signal
processing. This is accomplished by exploiting stream-based computation at the
application, compiler, and architectural levels. At the application level, several complex
media applications such as polygon rendering, stereo depth extraction, and video
encoding have been cast into streams and kernels. At the compiler level, programming
languages for writing stream-based programs have been developed, along with software
tools that optimize their execution on stream hardware. Finally, at the architectural level,
the project has developed the Imagine stream processor, a novel architecture that executes
stream-based programs and is able to sustain tens of GFLOPS over a range of media
applications with a power dissipation of less than 10 watts.
Imagine is a programmable single-chip processor that supports the stream
programming model. The figure shows a block diagram of the Imagine stream
processor. The Imagine architecture supports 48 ALUs organized as 8 SIMD clusters.
Each cluster contains 6 ALUs and several local register files, and executes completely static
VLIW instructions. The stream register file (SRF) is the nexus for data transfers on the
processor. The memory system, arithmetic clusters, host interface, microcontroller, and
network interface all interact by transferring streams to and from the SRF.
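
The stream programming model itself is easy to caricature in software: kernels are small functions mapped over the records of a stream, and intermediate streams stay in a staging area that plays the role of the SRF. The sketch below is only an analogy of that model, not Imagine's actual StreamC/KernelC toolchain.

# A toy stream/kernel pipeline: each kernel reads a whole stream and writes
# a whole stream, so intermediate data stays "on chip" between kernels
# (the analogue of the stream register file) instead of going back to memory.
def kernel(fn):
    return lambda stream: [fn(rec) for rec in stream]

brighten = kernel(lambda px: min(255, px + 32))      # per-record operation
threshold = kernel(lambda px: 255 if px > 128 else 0)

def run_pipeline(stream, kernels):
    for k in kernels:
        stream = k(stream)
    return stream

pixels = [10, 100, 130, 250]
print(run_pipeline(pixels, [brighten, threshold]))   # [0, 255, 255, 255]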

RECONFIGURABLE PROCESSOR ARRAY


Algorithmically specialized processors often use different interconnection
structures. As demonstrated in Figure 10.30, five array structures have been suggested
for implementing different algorithms. The mesh is used for dynamic programming. The
hexagonally connected mesh was shown in the previous section for L-U decomposition.
The torus is used for transitive closure. The binary tree is used for sorting. The double-
rooted tree is used for searching. The matching of the structure to the right algorithm has
a fundamental influence on performance and cost effectiveness.
For example, if we have an n x n mesh-connected microprocessor structure and
want to find the maximum of n2 elements stored one per processor, 2n – 1 steps are
necessary and sufficient to solve the problem. But a faster algorithmically specialized
processor for this problem uses a tree machine to find the solution in 2 log n steps. For
large n, this is a benefit worth pursuing. Again, a bus can be introduced to link several
differently structured multiprocessors, including mesh and tree-connected
multiprocessors. But the bus bottleneck is quite serious. What we need is a more
polymorphic multiprocessor that does not compromise the benefits of VLSI technology.
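
The step counts quoted in this paragraph can be compared directly; the formulas below are taken from the text, with a base-2 logarithm assumed for the tree machine.

import math

# Steps to find the maximum of n*n values, one per processor:
# roughly 2n - 1 on an n x n mesh versus about 2*log2(n) on a tree machine.
def mesh_steps(n):
    return 2 * n - 1

def tree_steps(n):
    return 2 * math.ceil(math.log2(n))

for n in (4, 32, 1024):
    print(n, mesh_steps(n), tree_steps(n))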

A family of reconfigurable processor arrays is introduced in this section. This
configurable array concept was first proposed in 1982 by Lawrence Snyder at Purdue
University. Each configurable VLSI array is constructed with three types of components:
a collection of processing elements, a switch lattice, and an array controller. The switch
lattice is the most important component and the main source of differences among family
members. It is a regular structure formed from programmable switches connected by
data paths. The PEs are not directly connected to each other, but rather are connected at
regular intervals to the switch lattice. Figure 10.31 shows three examples of switch
lattices. Generally, the layout will be square, although other geometries are possible.
The perimeter switches are connected to external storage devices. With current technology,
only a few PEs and switches can be placed on a single chip. As improvements in
fabrication technology permit higher device densities, a single chip will be able to hold a
larger region of the switch lattice.

Each switch in the lattice contains local memory capable of storing several
configuration settings. A configuration setting enables the switch to establish a direct
connection among two or more of its incident data paths. For example, we achieve a
mesh interconnection pattern of the PEs for the lattice in Figure 10.31a by assigning
north-south configuration settings to alternate switches in the odd-numbered rows and
east-west settings to alternate switches in the odd-numbered columns. Figure 10.32a
illustrates this configuration; Figure 10.32b gives the configuration settings for a
binary tree.

The controller is responsible for loading the switch memory. The switch memory
is loaded prior to processing, in parallel with the loading of the PE program memory.
Typically, programs and switch settings for several phases can be loaded together. The
major requirement is that the local configuration settings for each phase's interconnection
pattern be assigned to the same memory location in all switches.

Switch lattices

It is convenient to think of the switches as being defined by several characteristic
parameters:
• m – the number of wires entering a switch on one data path (path width)
• d – the degree of incident data paths to a switch
• c – the number of configuration settings that can be stored in a switch
• g – the number of distinct data-path groups that a switch can connect
simultaneously

The value of m reflects the balance struck between parallel and serial data transmission.
This balance will be influenced by several considerations, one of which is the limited
number of pins on the package. Specifically, if a chip hosts a square region of the lattice
containing n PEs, then the number of pins required is proportional to m√n.
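
A rough pin-count estimate following this proportionality can be written as below; the constant factor of four (one group of wires per side of the square region) is an illustrative assumption, not a figure from the text.

import math

# Pins needed by a chip hosting a square region of the lattice with n PEs,
# taken as proportional to m * sqrt(n); the factor of 4 is assumed here.
def estimated_pins(n_pes, path_width_m, sides=4):
    return sides * path_width_m * math.isqrt(n_pes)

print(estimated_pins(64, 8))    # 8 wires per path, 64 PEs -> 256 pins
print(estimated_pins(256, 8))   # quadrupling the PEs only doubles the pins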

The value of d will usually be four, as in Figure 10.31a, or eight, as in Figure
10.31c. Figure 10.31b shows a mixed strategy that exploits the tendency of switches to be
used in two different roles. Switches at the intersections of the vertical and horizontal
switch corridors tend to perform most of the routing, while those interposed between two
adjacent PEs act more like extended PE ports for selecting data paths from the "corridor
buses." The value of c is influenced by the number of configurations that may be needed
for a multiphase computation and the number of bits required per setting.

The crossover capability is a property of switches and refers to the number of
distinct data-path groups that a switch can simultaneously connect. Crossover capability
is specified by an integer g in the range 1 to d/2; thus, 1 indicates no crossover, and d/2 is
the maximum number of distinct paths intersecting at a degree-d switch.
It is clear that lattices can differ in several ways. The PE degree, like the switch
degree, is the number of incident data paths. Most algorithms of interest use PEs of
degree eight or less. Larger degrees are probably not necessary since they can be
achieved either by multiplexing data paths or by logically coupling processing elements,
e.g., two degree-four PEs could be coupled to form a degree-six PE where one PE serves
only as a buffer.

The number of switches that separate two adjacent PEs is called the corridor
width, w. (See Figure 10.31c for a w = 2 lattice.) This is perhaps the most significant
parameter of a lattice, since it influences the efficiency of PE utilization, the convenience
of interconnection pattern embeddings, and the overhead required for polymorphism.

CONCLUSION:
The applications of VLSI computations appear in real-time image processing as
well as real-time signal processing. VLSI implementation of the feature-extraction method
introduced by Foley and Sammon in 1975 enables signal and image processing computations
to be performed effectively and speedily.
Pattern embedding by the wafer-scale integration (WSI) method introduced by
Hedlund is another application of VLSI computing structures.
Modular VLSI architectures for implementing large-scale matrix arithmetic
processors have been introduced.
The design of algorithmically specialized VLSI devices is at its beginning. The
development of specialized devices to replace mathematical software is feasible but still
very costly. Several important technical issues remain unresolved and deserve further
investigation. Some of these are: I/O communication in VLSI technology, partitioning of
algorithms to maintain their numerical stability, and minimization of the communication
among computational blocks.

REFERENCES:
1. Fairbairn, D. G., "VLSI: A New Frontier for System Designers," IEEE Computer, January 1982.

2. Hwang, K. and Cheng, Y. H., "Partitioned Matrix Algorithms for VLSI Arithmetic Systems," IEEE Transactions on Computers, December 1982.

3. Hwang, K. and Briggs, F. A., Computer Architecture and Parallel Processing, McGraw-Hill, Singapore.

4. Kung, H. T. and Leiserson, C. E., "Systolic Arrays," Society for Industrial and Applied Mathematics, 1978.

5. Kung, S. Y. and Arun, K. S., "Wavefront Array Processor: Language, Architecture and Applications," IEEE Transactions on Computers, November 1982.

6. www.rice.ces/edu

7. www.mhcollege.com
